Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"This is mostly an update of the usual drivers: qla2xxx, hpsa, lpfc, ufs,
mpt3sas, ibmvscsi, megaraid_sas, bnx2fc and hisi_sas as well as the
removal of the osst driver (I heard from Willem privately that he
would like the driver removed because all his test hardware has
failed). Plus a number of minor changes, spelling fixes and other
trivia.

The big merge conflict this time around is the SPDX licence tags.
Following discussion on linux-next, we believe our version to be more
accurate than the one in the tree, so the resolution is to take our
version for all the SPDX conflicts"

Note on the SPDX license tag conversion conflicts: the SCSI tree had
done its own SPDX conversion, which in some cases conflicted with the
treewide ones done by Thomas & co.

In almost all cases, the conflicts were purely syntactic: the SCSI tree
used the old-style SPDX tags ("GPL-2.0" and "GPL-2.0+") while the
treewide conversion had used the new-style ones ("GPL-2.0-only" and
"GPL-2.0-or-later").

In these cases I picked the new-style one.

In a few cases, the SPDX conversion was actually different, though. As
explained by James above, and in more detail in a pre-pull-request
thread:

"The other problem is actually substantive: In the libsas code Luben
Tuikov originally specified GPL 2.0 only by dint of stating:

* This file is licensed under GPLv2.

In all the libsas files, but then muddied the water by quoting GPLv2
verbatim (which includes the 'or later' language). So for these
files Christoph did the conversion to v2 only SPDX tags and Thomas
converted to v2 or later tags"

So in those cases, where the SPDX tag substantially mattered, I took the
SCSI tree conversion of it, but then also took the opportunity to turn
the old-style "GPL-2.0" into a new-style "GPL-2.0-only" tag.

Similarly, when there were whitespace differences or other differences
to the comments around the copyright notices, I took the version from
the SCSI tree as being the more specific conversion.

Finally, in the SPDX conversions that had no conflicts (because the
treewide ones hadn't been done for those files), I just took the SCSI
tree version as-is, even if it was old-style. The old-style conversions
are perfectly valid, even if the "-only" and "-or-later" versions are
perhaps more descriptive.
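The purely syntactic part of the resolution described above amounts to a
one-to-one mapping between the old-style and new-style identifiers. As a
minimal sketch (a hypothetical helper for illustration, not anything in
the tree), the mapping looks like this:

```python
# Hypothetical helper illustrating the old-style -> new-style SPDX
# identifier mapping described above; not part of the kernel tree.
OLD_TO_NEW = {
    "GPL-2.0": "GPL-2.0-only",
    "GPL-2.0+": "GPL-2.0-or-later",
}

def modernize_spdx_line(line):
    """Rewrite a simple one-identifier SPDX tag line to the new style.

    Lines without an SPDX tag, and identifiers already in the new
    style, are returned unchanged.
    """
    prefix = "SPDX-License-Identifier:"
    head, sep, expr = line.partition(prefix)
    if not sep:
        return line  # not an SPDX tag line
    ident = expr.strip()
    return f"{head}{prefix} {OLD_TO_NEW.get(ident, ident)}"
```

As noted above, both spellings are valid SPDX, so this rewrite is purely
cosmetic; the substantive conflicts (GPL-2.0-only vs GPL-2.0-or-later)
cannot be resolved mechanically and needed the per-file judgment
described for the libsas code.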

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (185 commits)
scsi: qla2xxx: move IO flush to the front of NVME rport unregistration
scsi: qla2xxx: Fix NVME cmd and LS cmd timeout race condition
scsi: qla2xxx: on session delete, return nvme cmd
scsi: qla2xxx: Fix kernel crash after disconnecting NVMe devices
scsi: megaraid_sas: Update driver version to 07.710.06.00-rc1
scsi: megaraid_sas: Introduce various Aero performance modes
scsi: megaraid_sas: Use high IOPS queues based on IO workload
scsi: megaraid_sas: Set affinity for high IOPS reply queues
scsi: megaraid_sas: Enable coalescing for high IOPS queues
scsi: megaraid_sas: Add support for High IOPS queues
scsi: megaraid_sas: Add support for MPI toolbox commands
scsi: megaraid_sas: Offload Aero RAID5/6 division calculations to driver
scsi: megaraid_sas: RAID1 PCI bandwidth limit algorithm is applicable for only Ventura
scsi: megaraid_sas: megaraid_sas: Add check for count returned by HOST_DEVICE_LIST DCMD
scsi: megaraid_sas: Handle sequence JBOD map failure at driver level
scsi: megaraid_sas: Don't send FPIO to RL Bypass queue
scsi: megaraid_sas: In probe context, retry IOC INIT once if firmware is in fault
scsi: megaraid_sas: Release Mutex lock before OCR in case of DCMD timeout
scsi: megaraid_sas: Call disable_irq from process IRQ poll
scsi: megaraid_sas: Remove few debug counters from IO path
...

+5183 -8878
-218
Documentation/scsi/osst.txt
··· 1 - README file for the osst driver 2 - =============================== 3 - (w) Kurt Garloff <garloff@suse.de> 12/2000 4 - 5 - This file describes the osst driver as of version 0.8.x/0.9.x, the released 6 - version of the osst driver. 7 - It is intended to help advanced users to understand the role of osst and to 8 - get them started using (and maybe debugging) it. 9 - It won't address issues like "How do I compile a kernel?" or "How do I load 10 - a module?", as these are too basic. 11 - Once the OnStream got merged into the official kernel, the distro makers 12 - will provide the OnStream support for those who are not familiar with 13 - hacking their kernels. 14 - 15 - 16 - Purpose 17 - ------- 18 - The osst driver was developed, because the standard SCSI tape driver in 19 - Linux, st, does not support the OnStream SC-x0 SCSI tape. The st is not to 20 - blame for that, as the OnStream tape drives do not support the standard SCSI 21 - command set for Serial Access Storage Devices (SASDs), which basically 22 - corresponds to the QIC-157 spec. 23 - Nevertheless, the OnStream tapes are nice pieces of hardware and therefore 24 - the osst driver has been written to make these tape devs supported by Linux. 25 - The driver is free software. It's released under the GNU GPL and planned to 26 - be integrated into the mainstream kernel. 27 - 28 - 29 - Implementation 30 - -------------- 31 - The osst is a new high-level SCSI driver, just like st, sr, sd and sg. It 32 - can be compiled into the kernel or loaded as a module. 33 - As it represents a new device, it got assigned a new device node: /dev/osstX 34 - are character devices with major no 206 and minor numbers like the /dev/stX 35 - devices. If those are not present, you may create them by calling 36 - Makedevs.sh as root (see below). 37 - The driver started being a copy of st and as such, the osst devices' 38 - behavior looks very much the same as st to the userspace applications. 
39 - 40 - 41 - History 42 - ------- 43 - In the first place, osst shared its identity very much with st. That meant 44 - that it used the same kernel structures and the same device node as st. 45 - So you could only have either of them being present in the kernel. This has 46 - been fixed by registering an own device, now. 47 - st and osst can coexist, each only accessing the devices it can support by 48 - themselves. 49 - 50 - 51 - Installation 52 - ------------ 53 - osst got integrated into the linux kernel. Select it during kernel 54 - configuration as module or compile statically into the kernel. 55 - Compile your kernel and install the modules. 56 - 57 - Now, your osst driver is inside the kernel or available as a module, 58 - depending on your choice during kernel config. You may still need to create 59 - the device nodes by calling the Makedevs.sh script (see below) manually. 60 - 61 - To load your module, you may use the command 62 - modprobe osst 63 - as root. dmesg should show you, whether your OnStream tapes have been 64 - recognized. 65 - 66 - If you want to have the module autoloaded on access to /dev/osst, you may 67 - add something like 68 - alias char-major-206 osst 69 - to a file under /etc/modprobe.d/ directory. 70 - 71 - You may find it convenient to create a symbolic link 72 - ln -s nosst0 /dev/tape 73 - to make programs assuming a default name of /dev/tape more convenient to 74 - use. 75 - 76 - The device nodes for osst have to be created. Use the Makedevs.sh script 77 - attached to this file. 78 - 79 - 80 - Using it 81 - -------- 82 - You may use the OnStream tape driver with your standard backup software, 83 - which may be tar, cpio, amanda, arkeia, BRU, Lone Tar, ... 84 - by specifying /dev/(n)osst0 as the tape device to use or using the above 85 - symlink trick. The IOCTLs to control tape operation are also mostly 86 - supported and you may try the mt (or mt_st) program to jump between 87 - filemarks, eject the tape, ... 
88 - 89 - There's one limitation: You need to use a block size of 32kB. 90 - 91 - (This limitation is worked on and will be fixed in version 0.8.8 of 92 - this driver.) 93 - 94 - If you just want to get started with standard software, here is an example 95 - for creating and restoring a full backup: 96 - # Backup 97 - tar cvf - / --exclude /proc | buffer -s 32k -m 24M -B -t -o /dev/nosst0 98 - # Restore 99 - buffer -s 32k -m 8M -B -t -i /dev/osst0 | tar xvf - -C / 100 - 101 - The buffer command has been used to buffer the data before it goes to the 102 - tape (or the file system) in order to smooth out the data stream and prevent 103 - the tape from needing to stop and rewind. The OnStream does have an internal 104 - buffer and a variable speed which help this, but especially on writing, the 105 - buffering still proves useful in most cases. It also pads the data to 106 - guarantees the block size of 32k. (Otherwise you may pass the -b64 option to 107 - tar.) 108 - Expect something like 1.8MB/s for the SC-x0 drives and 0.9MB/s for the DI-30. 109 - The USB drive will give you about 0.7MB/s. 110 - On a fast machine, you may profit from software data compression (z flag for 111 - tar). 112 - 113 - 114 - USB and IDE 115 - ----------- 116 - Via the SCSI emulation layers usb-storage and ide-scsi, you can also use the 117 - osst driver to drive the USB-30 and the DI-30 drives. (Unfortunately, there 118 - is no such layer for the parallel port, otherwise the DP-30 would work as 119 - well.) For the USB support, you need the latest 2.4.0-test kernels and the 120 - latest usb-storage driver from 121 - http://www.linux-usb.org/ 122 - http://sourceforge.net/cvs/?group_id=3581 123 - 124 - Note that the ide-tape driver as of 1.16f uses a slightly outdated on-tape 125 - format and therefore is not completely interoperable with osst tapes. 126 - 127 - The ADR-x0 line is fully SCSI-2 compliant and is supported by st, not osst. 
128 - The on-tape format is supposed to be compatible with the one used by osst. 129 - 130 - 131 - Feedback and updates 132 - -------------------- 133 - The driver development is coordinated through a mailing list 134 - <osst@linux1.onstream.nl> 135 - a CVS repository and some web pages. 136 - The tester's pages which contain recent news and updated drivers to download 137 - can be found on 138 - http://sourceforge.net/projects/osst/ 139 - 140 - If you find any problems, please have a look at the tester's page in order 141 - to see whether the problem is already known and solved. Otherwise, please 142 - report it to the mailing list. Your feedback is welcome. (This holds also 143 - for reports of successful usage, of course.) 144 - In case of trouble, please do always provide the following info: 145 - * driver and kernel version used (see syslog) 146 - * driver messages (syslog) 147 - * SCSI config and OnStream Firmware (/proc/scsi/scsi) 148 - * description of error. Is it reproducible? 149 - * software and commands used 150 - 151 - You may subscribe to the mailing list, BTW, it's a majordomo list. 152 - 153 - 154 - Status 155 - ------ 156 - 0.8.0 was the first widespread BETA release. Since then a lot of reports 157 - have been sent, but mostly reported success or only minor trouble. 158 - All the issues have been addressed. 159 - Check the web pages for more info about the current developments. 160 - 0.9.x is the tree for the 2.3/2.4 kernel. 161 - 162 - 163 - Acknowledgments 164 - ---------------- 165 - The driver has been started by making a copy of Kai Makisara's st driver. 166 - Most of the development has been done by Willem Riede. The presence of the 167 - userspace program osg (onstreamsg) from Terry Hardie has been rather 168 - helpful. The same holds for Gadi Oxman's ide-tape support for the DI-30. 169 - I did add some patches to those drivers as well and coordinated things a 170 - little bit. 
171 - Note that most of them did mostly spend their spare time for the creation of 172 - this driver. 173 - The people from OnStream, especially Jack Bombeeck did support this project 174 - and always tried to answer HW or FW related questions. Furthermore, he 175 - pushed the FW developers to do the right things. 176 - SuSE did support this project by allowing me to work on it during my working 177 - time for them and by integrating the driver into their distro. 178 - 179 - More people did help by sending useful comments. Sorry to those who have 180 - been forgotten. Thanks to all the GNU/FSF and Linux developers who made this 181 - platform such an interesting, nice and stable platform. 182 - Thanks go to those who tested the drivers and did send useful reports. Your 183 - help is needed! 184 - 185 - 186 - Makedevs.sh 187 - ----------- 188 - #!/bin/sh 189 - # Script to create OnStream SC-x0 device nodes (major 206) 190 - # Usage: Makedevs.sh [nos [path to dev]] 191 - # $Id: README.osst.kernel,v 1.4 2000/12/20 14:13:15 garloff Exp $ 192 - major=206 193 - nrs=4 194 - dir=/dev 195 - test -z "$1" || nrs=$1 196 - test -z "$2" || dir=$2 197 - declare -i nr 198 - nr=0 199 - test -d $dir || mkdir -p $dir 200 - while test $nr -lt $nrs; do 201 - mknod $dir/osst$nr c $major $nr 202 - chown 0.disk $dir/osst$nr; chmod 660 $dir/osst$nr; 203 - mknod $dir/nosst$nr c $major $[nr+128] 204 - chown 0.disk $dir/nosst$nr; chmod 660 $dir/nosst$nr; 205 - mknod $dir/osst${nr}l c $major $[nr+32] 206 - chown 0.disk $dir/osst${nr}l; chmod 660 $dir/osst${nr}l; 207 - mknod $dir/nosst${nr}l c $major $[nr+160] 208 - chown 0.disk $dir/nosst${nr}l; chmod 660 $dir/nosst${nr}l; 209 - mknod $dir/osst${nr}m c $major $[nr+64] 210 - chown 0.disk $dir/osst${nr}m; chmod 660 $dir/osst${nr}m; 211 - mknod $dir/nosst${nr}m c $major $[nr+192] 212 - chown 0.disk $dir/nosst${nr}m; chmod 660 $dir/nosst${nr}m; 213 - mknod $dir/osst${nr}a c $major $[nr+96] 214 - chown 0.disk $dir/osst${nr}a; chmod 660 
$dir/osst${nr}a; 215 - mknod $dir/nosst${nr}a c $major $[nr+224] 216 - chown 0.disk $dir/nosst${nr}a; chmod 660 $dir/nosst${nr}a; 217 - let nr+=1 218 - done
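The Makedevs.sh script in the removed document above encodes a fixed
minor-number layout on major 206: the base unit number, plus 32/64/96
for the l/m/a density variants, plus 128 for the non-rewinding (nosst)
nodes. A minimal sketch of that layout (an illustrative helper, not
anything from the tree):

```python
# Minor-number layout used by the removed osst driver's Makedevs.sh
# (char major 206): base unit number, +32 per density variant
# ("" / "l" / "m" / "a"), +128 for the non-rewinding (nosst) nodes.
def osst_minor(unit, variant="", no_rewind=False):
    density = {"": 0, "l": 1, "m": 2, "a": 3}[variant]
    return unit + 32 * density + (128 if no_rewind else 0)
```

For example, /dev/osst0 is minor 0, /dev/nosst0 is minor 128, and
/dev/nosst0a is minor 224, matching the mknod calls in the script.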
+7
Documentation/scsi/ufs.txt
··· 158 158 If you wish to read or write a descriptor, use the appropriate xferp of 159 159 sg_io_v4. 160 160 161 + The userspace tool that interacts with the ufs-bsg endpoint and uses its 162 + upiu-based protocol is available at: 163 + 164 + https://github.com/westerndigitalcorporation/ufs-tool 165 + 166 + For more detailed information about the tool and its supported 167 + features, please see the tool's README. 161 168 162 169 UFS Specifications can be found at, 163 170 UFS - http://www.jedec.org/sites/default/files/docs/JESD220.pdf
+1 -12
MAINTAINERS
··· 11779 11779 F: drivers/mtd/nand/onenand/ 11780 11780 F: include/linux/mtd/onenand*.h 11781 11781 11782 - ONSTREAM SCSI TAPE DRIVER 11783 - M: Willem Riede <osst@riede.org> 11784 - L: osst-users@lists.sourceforge.net 11785 - L: linux-scsi@vger.kernel.org 11786 - S: Maintained 11787 - F: Documentation/scsi/osst.txt 11788 - F: drivers/scsi/osst.* 11789 - F: drivers/scsi/osst_*.h 11790 - F: drivers/scsi/st.h 11791 - 11792 11782 OP-TEE DRIVER 11793 11783 M: Jens Wiklander <jens.wiklander@linaro.org> 11794 11784 S: Maintained ··· 12670 12680 F: drivers/scsi/pmcraid.* 12671 12681 12672 12682 PMC SIERRA PM8001 DRIVER 12673 - M: Jack Wang <jinpu.wang@profitbricks.com> 12674 - M: lindar_liu@usish.com 12683 + M: Jack Wang <jinpu.wang@cloud.ionos.com> 12675 12684 L: linux-scsi@vger.kernel.org 12676 12685 S: Supported 12677 12686 F: drivers/scsi/pm8001/
+8 -2
arch/m68k/mac/config.c
··· 911 911 .flags = IORESOURCE_MEM, 912 912 .start = 0x50008000, 913 913 .end = 0x50009FFF, 914 + }, { 915 + .flags = IORESOURCE_MEM, 916 + .start = 0x50008000, 917 + .end = 0x50009FFF, 914 918 }, 915 919 }; 916 920 ··· 1016 1012 case MAC_SCSI_IIFX: 1017 1013 /* Addresses from The Guide to Mac Family Hardware. 1018 1014 * $5000 8000 - $5000 9FFF: SCSI DMA 1015 + * $5000 A000 - $5000 BFFF: Alternate SCSI 1019 1016 * $5000 C000 - $5000 DFFF: Alternate SCSI (DMA) 1020 1017 * $5000 E000 - $5000 FFFF: Alternate SCSI (Hsk) 1021 - * The SCSI DMA custom IC embeds the 53C80 core. mac_scsi does 1022 - * not make use of its DMA or hardware handshaking logic. 1018 + * The A/UX header file sys/uconfig.h says $50F0 8000. 1019 + * The "SCSI DMA" custom IC embeds the 53C80 core and 1020 + * supports Programmed IO, DMA and PDMA (hardware handshake). 1023 1021 */ 1024 1022 platform_device_register_simple("mac_scsi", 0, 1025 1023 mac_scsi_iifx_rsrc, ARRAY_SIZE(mac_scsi_iifx_rsrc));
+2 -19
drivers/infiniband/ulp/srp/ib_srp.c
··· 2340 2340 static int srp_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmnd) 2341 2341 { 2342 2342 struct srp_target_port *target = host_to_target(shost); 2343 - struct srp_rport *rport = target->rport; 2344 2343 struct srp_rdma_ch *ch; 2345 2344 struct srp_request *req; 2346 2345 struct srp_iu *iu; ··· 2349 2350 u32 tag; 2350 2351 u16 idx; 2351 2352 int len, ret; 2352 - const bool in_scsi_eh = !in_interrupt() && current == shost->ehandler; 2353 - 2354 - /* 2355 - * The SCSI EH thread is the only context from which srp_queuecommand() 2356 - * can get invoked for blocked devices (SDEV_BLOCK / 2357 - * SDEV_CREATED_BLOCK). Avoid racing with srp_reconnect_rport() by 2358 - * locking the rport mutex if invoked from inside the SCSI EH. 2359 - */ 2360 - if (in_scsi_eh) 2361 - mutex_lock(&rport->mutex); 2362 2353 2363 2354 scmnd->result = srp_chkready(target->rport); 2364 2355 if (unlikely(scmnd->result)) ··· 2417 2428 goto err_unmap; 2418 2429 } 2419 2430 2420 - ret = 0; 2421 - 2422 - unlock_rport: 2423 - if (in_scsi_eh) 2424 - mutex_unlock(&rport->mutex); 2425 - 2426 - return ret; 2431 + return 0; 2427 2432 2428 2433 err_unmap: 2429 2434 srp_unmap_data(scmnd, ch, req); ··· 2439 2456 ret = SCSI_MLQUEUE_HOST_BUSY; 2440 2457 } 2441 2458 2442 - goto unlock_rport; 2459 + return ret; 2443 2460 } 2444 2461 2445 2462 /*
+1 -2
drivers/message/fusion/mptbase.c
··· 6001 6001 if (mpt_config(ioc, &cfg) != 0) 6002 6002 goto out; 6003 6003 6004 - mem = kmalloc(iocpage2sz, GFP_KERNEL); 6004 + mem = kmemdup(pIoc2, iocpage2sz, GFP_KERNEL); 6005 6005 if (!mem) { 6006 6006 rc = -ENOMEM; 6007 6007 goto out; 6008 6008 } 6009 6009 6010 - memcpy(mem, (u8 *)pIoc2, iocpage2sz); 6011 6010 ioc->raid_data.pIocPg2 = (IOCPage2_t *) mem; 6012 6011 6013 6012 mpt_read_ioc_pg_3(ioc);
+35 -22
drivers/scsi/Kconfig
··· 99 99 To compile this driver as a module, choose M here and read 100 100 <file:Documentation/scsi/scsi.txt>. The module will be called st. 101 101 102 - config CHR_DEV_OSST 103 - tristate "SCSI OnStream SC-x0 tape support" 104 - depends on SCSI 105 - ---help--- 106 - The OnStream SC-x0 SCSI tape drives cannot be driven by the 107 - standard st driver, but instead need this special osst driver and 108 - use the /dev/osstX char device nodes (major 206). Via usb-storage, 109 - you may be able to drive the USB-x0 and DI-x0 drives as well. 110 - Note that there is also a second generation of OnStream 111 - tape drives (ADR-x0) that supports the standard SCSI-2 commands for 112 - tapes (QIC-157) and can be driven by the standard driver st. 113 - For more information, you may have a look at the SCSI-HOWTO 114 - <http://www.tldp.org/docs.html#howto> and 115 - <file:Documentation/scsi/osst.txt> in the kernel source. 116 - More info on the OnStream driver may be found on 117 - <http://sourceforge.net/projects/osst/> 118 - Please also have a look at the standard st docu, as most of it 119 - applies to osst as well. 120 - 121 - To compile this driver as a module, choose M here and read 122 - <file:Documentation/scsi/scsi.txt>. The module will be called osst. 123 - 124 102 config BLK_DEV_SR 125 103 tristate "SCSI CDROM support" 126 104 depends on SCSI && BLK_DEV ··· 641 663 642 664 To compile this driver as a module, choose M here: the 643 665 module will be called dmx3191d. 666 + 667 + config SCSI_FDOMAIN 668 + tristate 669 + depends on SCSI 670 + 671 + config SCSI_FDOMAIN_PCI 672 + tristate "Future Domain TMC-3260/AHA-2920A PCI SCSI support" 673 + depends on PCI && SCSI 674 + select SCSI_FDOMAIN 675 + help 676 + This is support for Future Domain's PCI SCSI host adapters (TMC-3260) 677 + and other adapters with PCI bus based on the Future Domain chipsets 678 + (Adaptec AHA-2920A). 
679 + 680 + NOTE: Newer Adaptec AHA-2920C boards use the Adaptec AIC-7850 chip 681 + and should use the aic7xxx driver ("Adaptec AIC7xxx chipset SCSI 682 + controller support"). This Future Domain driver works with the older 683 + Adaptec AHA-2920A boards with a Future Domain chip on them. 684 + 685 + To compile this driver as a module, choose M here: the 686 + module will be called fdomain_pci. 687 + 688 + config SCSI_FDOMAIN_ISA 689 + tristate "Future Domain 16xx ISA SCSI support" 690 + depends on ISA && SCSI 691 + select CHECK_SIGNATURE 692 + select SCSI_FDOMAIN 693 + help 694 + This is support for Future Domain's 16-bit SCSI host adapters 695 + (TMC-1660/1680, TMC-1650/1670, TMC-1610M/MER/MEX) and other adapters 696 + with ISA bus based on the Future Domain chipsets (Quantum ISA-200S, 697 + ISA-250MG; and at least one IBM board). 698 + 699 + To compile this driver as a module, choose M here: the 700 + module will be called fdomain_isa. 644 701 645 702 config SCSI_GDTH 646 703 tristate "Intel/ICP (former GDT SCSI Disk Array) RAID Controller support"
+3 -1
drivers/scsi/Makefile
··· 76 76 obj-$(CONFIG_SCSI_PM8001) += pm8001/ 77 77 obj-$(CONFIG_SCSI_ISCI) += isci/ 78 78 obj-$(CONFIG_SCSI_IPS) += ips.o 79 + obj-$(CONFIG_SCSI_FDOMAIN) += fdomain.o 80 + obj-$(CONFIG_SCSI_FDOMAIN_PCI) += fdomain_pci.o 81 + obj-$(CONFIG_SCSI_FDOMAIN_ISA) += fdomain_isa.o 79 82 obj-$(CONFIG_SCSI_GENERIC_NCR5380) += g_NCR5380.o 80 83 obj-$(CONFIG_SCSI_QLOGIC_FAS) += qlogicfas408.o qlogicfas.o 81 84 obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o ··· 146 143 obj-$(CONFIG_ARM) += arm/ 147 144 148 145 obj-$(CONFIG_CHR_DEV_ST) += st.o 149 - obj-$(CONFIG_CHR_DEV_OSST) += osst.o 150 146 obj-$(CONFIG_BLK_DEV_SD) += sd_mod.o 151 147 obj-$(CONFIG_BLK_DEV_SR) += sr_mod.o 152 148 obj-$(CONFIG_CHR_DEV_SG) += sg.o
+4 -14
drivers/scsi/NCR5380.c
··· 709 709 NCR5380_information_transfer(instance); 710 710 done = 0; 711 711 } 712 + if (!hostdata->connected) 713 + NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 712 714 spin_unlock_irq(&hostdata->lock); 713 715 if (!done) 714 716 cond_resched(); ··· 1112 1110 spin_lock_irq(&hostdata->lock); 1113 1111 NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); 1114 1112 NCR5380_reselect(instance); 1115 - if (!hostdata->connected) 1116 - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 1117 1113 shost_printk(KERN_ERR, instance, "reselection after won arbitration?\n"); 1118 1114 goto out; 1119 1115 } ··· 1119 1119 if (err < 0) { 1120 1120 spin_lock_irq(&hostdata->lock); 1121 1121 NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); 1122 - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 1123 1122 1124 1123 /* Can't touch cmd if it has been reclaimed by the scsi ML */ 1125 1124 if (!hostdata->selecting) ··· 1156 1157 if (err < 0) { 1157 1158 shost_printk(KERN_ERR, instance, "select: REQ timeout\n"); 1158 1159 NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); 1159 - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 1160 1160 goto out; 1161 1161 } 1162 1162 if (!hostdata->selecting) { ··· 1761 1763 scmd_printk(KERN_INFO, cmd, 1762 1764 "switching to slow handshake\n"); 1763 1765 cmd->device->borken = 1; 1764 - sink = 1; 1765 - do_abort(instance); 1766 - cmd->result = DID_ERROR << 16; 1767 - /* XXX - need to source or sink data here, as appropriate */ 1766 + do_reset(instance); 1767 + bus_reset_cleanup(instance); 1768 1768 } 1769 1769 } else { 1770 1770 /* Transfer a small chunk so that the ··· 1822 1826 */ 1823 1827 NCR5380_write(TARGET_COMMAND_REG, 0); 1824 1828 1825 - /* Enable reselect interrupts */ 1826 - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 1827 - 1828 1829 maybe_release_dma_irq(instance); 1829 1830 return; 1830 1831 case MESSAGE_REJECT: ··· 1853 1860 */ 1854 1861 NCR5380_write(TARGET_COMMAND_REG, 0); 1855 1862 1856 - /* Enable reselect interrupts 
*/ 1857 - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 1858 1863 #ifdef SUN3_SCSI_VME 1859 1864 dregs->csr |= CSR_DMA_ENABLE; 1860 1865 #endif ··· 1955 1964 cmd->result = DID_ERROR << 16; 1956 1965 complete_cmd(instance, cmd); 1957 1966 maybe_release_dma_irq(instance); 1958 - NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); 1959 1967 return; 1960 1968 } 1961 1969 msgout = NOP;
+1 -1
drivers/scsi/NCR5380.h
··· 235 235 #define NCR5380_PIO_CHUNK_SIZE 256 236 236 237 237 /* Time limit (ms) to poll registers when IRQs are disabled, e.g. during PDMA */ 238 - #define NCR5380_REG_POLL_TIME 15 238 + #define NCR5380_REG_POLL_TIME 10 239 239 240 240 static inline struct scsi_cmnd *NCR5380_to_scmd(struct NCR5380_cmd *ncmd_ptr) 241 241 {
+1 -1
drivers/scsi/aic7xxx/aic7xxx.reg
··· 1666 1666 size 6 1667 1667 /* 1668 1668 * These are reserved registers in the card's scratch ram on the 2742. 1669 - * The EISA configuraiton chip is mapped here. On Rev E. of the 1669 + * The EISA configuration chip is mapped here. On Rev E. of the 1670 1670 * aic7770, the sequencer can use this area for scratch, but the 1671 1671 * host cannot directly access these registers. On later chips, this 1672 1672 * area can be read and written by both the host and the sequencer.
+1 -3
drivers/scsi/aic94xx/aic94xx_dev.c
··· 170 170 } 171 171 } else { 172 172 flags |= CONCURRENT_CONN_SUPP; 173 - if (!dev->parent && 174 - (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || 175 - dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE)) 173 + if (!dev->parent && dev_is_expander(dev->dev_type)) 176 174 asd_ddbsite_write_byte(asd_ha, ddb, MAX_CCONN, 177 175 4); 178 176 else
+9 -5
drivers/scsi/bnx2fc/bnx2fc.h
··· 66 66 #include "bnx2fc_constants.h" 67 67 68 68 #define BNX2FC_NAME "bnx2fc" 69 - #define BNX2FC_VERSION "2.11.8" 69 + #define BNX2FC_VERSION "2.12.10" 70 70 71 71 #define PFX "bnx2fc: " 72 72 ··· 75 75 #define BNX2X_DOORBELL_PCI_BAR 2 76 76 77 77 #define BNX2FC_MAX_BD_LEN 0xffff 78 - #define BNX2FC_BD_SPLIT_SZ 0x8000 79 - #define BNX2FC_MAX_BDS_PER_CMD 256 78 + #define BNX2FC_BD_SPLIT_SZ 0xffff 79 + #define BNX2FC_MAX_BDS_PER_CMD 255 80 + #define BNX2FC_FW_MAX_BDS_PER_CMD 255 80 81 81 82 #define BNX2FC_SQ_WQES_MAX 256 82 83 ··· 434 433 void (*cb_func)(struct bnx2fc_els_cb_arg *cb_arg); 435 434 struct bnx2fc_els_cb_arg *cb_arg; 436 435 struct delayed_work timeout_work; /* timer for ULP timeouts */ 437 - struct completion tm_done; 438 - int wait_for_comp; 436 + struct completion abts_done; 437 + struct completion cleanup_done; 438 + int wait_for_abts_comp; 439 + int wait_for_cleanup_comp; 439 440 u16 xid; 440 441 struct fcoe_err_report_entry err_entry; 441 442 struct fcoe_task_ctx_entry *task; ··· 458 455 #define BNX2FC_FLAG_ELS_TIMEOUT 0xb 459 456 #define BNX2FC_FLAG_CMD_LOST 0xc 460 457 #define BNX2FC_FLAG_SRR_SENT 0xd 458 + #define BNX2FC_FLAG_ISSUE_CLEANUP_REQ 0xe 461 459 u8 rec_retry; 462 460 u8 srr_retry; 463 461 u32 srr_offset;
+42 -18
drivers/scsi/bnx2fc/bnx2fc_els.c
··· 610 610 rc = bnx2fc_initiate_els(tgt, ELS_REC, &rec, sizeof(rec), 611 611 bnx2fc_rec_compl, cb_arg, 612 612 r_a_tov); 613 - rec_err: 614 613 if (rc) { 615 614 BNX2FC_IO_DBG(orig_io_req, "REC failed - release\n"); 616 615 spin_lock_bh(&tgt->tgt_lock); ··· 617 618 spin_unlock_bh(&tgt->tgt_lock); 618 619 kfree(cb_arg); 619 620 } 621 + rec_err: 620 622 return rc; 621 623 } 622 624 ··· 654 654 rc = bnx2fc_initiate_els(tgt, ELS_SRR, &srr, sizeof(srr), 655 655 bnx2fc_srr_compl, cb_arg, 656 656 r_a_tov); 657 - srr_err: 658 657 if (rc) { 659 658 BNX2FC_IO_DBG(orig_io_req, "SRR failed - release\n"); 660 659 spin_lock_bh(&tgt->tgt_lock); ··· 663 664 } else 664 665 set_bit(BNX2FC_FLAG_SRR_SENT, &orig_io_req->req_flags); 665 666 667 + srr_err: 666 668 return rc; 667 669 } 668 670 ··· 854 854 kref_put(&els_req->refcount, bnx2fc_cmd_release); 855 855 } 856 856 857 + #define BNX2FC_FCOE_MAC_METHOD_GRANGED_MAC 1 858 + #define BNX2FC_FCOE_MAC_METHOD_FCF_MAP 2 859 + #define BNX2FC_FCOE_MAC_METHOD_FCOE_SET_MAC 3 857 860 static void bnx2fc_flogi_resp(struct fc_seq *seq, struct fc_frame *fp, 858 861 void *arg) 859 862 { 860 863 struct fcoe_ctlr *fip = arg; 861 864 struct fc_exch *exch = fc_seq_exch(seq); 862 865 struct fc_lport *lport = exch->lp; 863 - u8 *mac; 864 - u8 op; 866 + 867 + struct fc_frame_header *fh; 868 + u8 *granted_mac; 869 + u8 fcoe_mac[6]; 870 + u8 fc_map[3]; 871 + int method; 865 872 866 873 if (IS_ERR(fp)) 867 874 goto done; 868 875 869 - mac = fr_cb(fp)->granted_mac; 870 - if (is_zero_ether_addr(mac)) { 871 - op = fc_frame_payload_op(fp); 872 - if (lport->vport) { 873 - if (op == ELS_LS_RJT) { 874 - printk(KERN_ERR PFX "bnx2fc_flogi_resp is LS_RJT\n"); 875 - fc_vport_terminate(lport->vport); 876 - fc_frame_free(fp); 877 - return; 878 - } 879 - } 880 - fcoe_ctlr_recv_flogi(fip, lport, fp); 876 + fh = fc_frame_header_get(fp); 877 + granted_mac = fr_cb(fp)->granted_mac; 878 + 879 + /* 880 + * We set the source MAC for FCoE traffic based on the Granted MAC 881 + * 
address from the switch. 882 + * 883 + * If granted_mac is non-zero, we use that. 884 + * If the granted_mac is zeroed out, create the FCoE MAC based on 885 + * the sel_fcf->fc_map and the d_id fo the FLOGI frame. 886 + * If sel_fcf->fc_map is 0, then we use the default FCF-MAC plus the 887 + * d_id of the FLOGI frame. 888 + */ 889 + if (!is_zero_ether_addr(granted_mac)) { 890 + ether_addr_copy(fcoe_mac, granted_mac); 891 + method = BNX2FC_FCOE_MAC_METHOD_GRANGED_MAC; 892 + } else if (fip->sel_fcf && fip->sel_fcf->fc_map != 0) { 893 + hton24(fc_map, fip->sel_fcf->fc_map); 894 + fcoe_mac[0] = fc_map[0]; 895 + fcoe_mac[1] = fc_map[1]; 896 + fcoe_mac[2] = fc_map[2]; 897 + fcoe_mac[3] = fh->fh_d_id[0]; 898 + fcoe_mac[4] = fh->fh_d_id[1]; 899 + fcoe_mac[5] = fh->fh_d_id[2]; 900 + method = BNX2FC_FCOE_MAC_METHOD_FCF_MAP; 901 + } else { 902 + fc_fcoe_set_mac(fcoe_mac, fh->fh_d_id); 903 + method = BNX2FC_FCOE_MAC_METHOD_FCOE_SET_MAC; 881 904 } 882 - if (!is_zero_ether_addr(mac)) 883 - fip->update_mac(lport, mac); 905 + 906 + BNX2FC_HBA_DBG(lport, "fcoe_mac=%pM method=%d\n", fcoe_mac, method); 907 + fip->update_mac(lport, fcoe_mac); 884 908 done: 885 909 fc_lport_flogi_resp(seq, fp, lport); 886 910 }
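The selection order implemented in the bnx2fc_flogi_resp() hunk above
(granted MAC first, then the FCF's FC-MAP plus the FLOGI D_ID, then the
spec-default FC-MAP) can be sketched as a standalone helper; this is an
illustration of the logic, not the kernel code itself, and 0x0EFC00 is
the default FC-MAP defined by the FCoE spec:

```python
# Sketch of the FCoE source-MAC selection order in the hunk above;
# standalone illustration, not the kernel implementation.
FC_FCOE_OUI = 0x0EFC00  # default FC-MAP from the FCoE spec

def pick_fcoe_mac(granted_mac, fc_map, d_id):
    """granted_mac: 6 bytes; fc_map: 24-bit int (0 if unset); d_id: 3 bytes."""
    if any(granted_mac):
        # The switch granted a MAC: use it as-is.
        return bytes(granted_mac)
    if fc_map:
        # Selected FCF's FC-MAP in the top 3 bytes, FLOGI D_ID below.
        return fc_map.to_bytes(3, "big") + bytes(d_id)
    # Fall back to the default FC-MAP plus the D_ID.
    return FC_FCOE_OUI.to_bytes(3, "big") + bytes(d_id)
```

This mirrors the three BNX2FC_FCOE_MAC_METHOD_* cases in the diff: a
non-zero granted_mac wins, a non-zero sel_fcf->fc_map is combined with
fh_d_id, and otherwise the fc_fcoe_set_mac()-style default applies.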
+2 -1
drivers/scsi/bnx2fc/bnx2fc_fcoe.c
··· 2971 2971 .this_id = -1, 2972 2972 .cmd_per_lun = 3, 2973 2973 .sg_tablesize = BNX2FC_MAX_BDS_PER_CMD, 2974 - .max_sectors = 1024, 2974 + .dma_boundary = 0x7fff, 2975 + .max_sectors = 0x3fbf, 2975 2976 .track_queue_depth = 1, 2976 2977 .slave_configure = bnx2fc_slave_configure, 2977 2978 .shost_attrs = bnx2fc_host_attrs,
+82 -34
drivers/scsi/bnx2fc/bnx2fc_io.c
··· 70 70 &io_req->req_flags)) { 71 71 /* Handle eh_abort timeout */ 72 72 BNX2FC_IO_DBG(io_req, "eh_abort timed out\n"); 73 - complete(&io_req->tm_done); 73 + complete(&io_req->abts_done); 74 74 } else if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, 75 75 &io_req->req_flags)) { 76 76 /* Handle internally generated ABTS timeout */ ··· 775 775 io_req->on_tmf_queue = 1; 776 776 list_add_tail(&io_req->link, &tgt->active_tm_queue); 777 777 778 - init_completion(&io_req->tm_done); 779 - io_req->wait_for_comp = 1; 778 + init_completion(&io_req->abts_done); 779 + io_req->wait_for_abts_comp = 1; 780 780 781 781 /* Ring doorbell */ 782 782 bnx2fc_ring_doorbell(tgt); 783 783 spin_unlock_bh(&tgt->tgt_lock); 784 784 785 - rc = wait_for_completion_timeout(&io_req->tm_done, 785 + rc = wait_for_completion_timeout(&io_req->abts_done, 786 786 interface->tm_timeout * HZ); 787 787 spin_lock_bh(&tgt->tgt_lock); 788 788 789 - io_req->wait_for_comp = 0; 789 + io_req->wait_for_abts_comp = 0; 790 790 if (!(test_bit(BNX2FC_FLAG_TM_COMPL, &io_req->req_flags))) { 791 791 set_bit(BNX2FC_FLAG_TM_TIMEOUT, &io_req->req_flags); 792 792 if (io_req->on_tmf_queue) { 793 793 list_del_init(&io_req->link); 794 794 io_req->on_tmf_queue = 0; 795 795 } 796 - io_req->wait_for_comp = 1; 796 + io_req->wait_for_cleanup_comp = 1; 797 + init_completion(&io_req->cleanup_done); 797 798 bnx2fc_initiate_cleanup(io_req); 798 799 spin_unlock_bh(&tgt->tgt_lock); 799 - rc = wait_for_completion_timeout(&io_req->tm_done, 800 + rc = wait_for_completion_timeout(&io_req->cleanup_done, 800 801 BNX2FC_FW_TIMEOUT); 801 802 spin_lock_bh(&tgt->tgt_lock); 802 - io_req->wait_for_comp = 0; 803 + io_req->wait_for_cleanup_comp = 0; 803 804 if (!rc) 804 805 kref_put(&io_req->refcount, bnx2fc_cmd_release); 805 806 } ··· 1048 1047 /* Obtain free SQ entry */ 1049 1048 bnx2fc_add_2_sq(tgt, xid); 1050 1049 1050 + /* Set flag that cleanup request is pending with the firmware */ 1051 + set_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags); 1052 + 
1051 1053 /* Ring doorbell */ 1052 1054 bnx2fc_ring_doorbell(tgt); 1053 1055 ··· 1089 1085 struct bnx2fc_rport *tgt = io_req->tgt; 1090 1086 unsigned int time_left; 1091 1087 1092 - io_req->wait_for_comp = 1; 1088 + init_completion(&io_req->cleanup_done); 1089 + io_req->wait_for_cleanup_comp = 1; 1093 1090 bnx2fc_initiate_cleanup(io_req); 1094 1091 1095 1092 spin_unlock_bh(&tgt->tgt_lock); ··· 1099 1094 * Can't wait forever on cleanup response lest we let the SCSI error 1100 1095 * handler wait forever 1101 1096 */ 1102 - time_left = wait_for_completion_timeout(&io_req->tm_done, 1097 + time_left = wait_for_completion_timeout(&io_req->cleanup_done, 1103 1098 BNX2FC_FW_TIMEOUT); 1104 - io_req->wait_for_comp = 0; 1105 - if (!time_left) 1099 + if (!time_left) { 1106 1100 BNX2FC_IO_DBG(io_req, "%s(): Wait for cleanup timed out.\n", 1107 1101 __func__); 1108 1102 1109 - /* 1110 - * Release reference held by SCSI command the cleanup completion 1111 - * hits the BNX2FC_CLEANUP case in bnx2fc_process_cq_compl() and 1112 - * thus the SCSI command is not returnedi by bnx2fc_scsi_done(). 1113 - */ 1114 - kref_put(&io_req->refcount, bnx2fc_cmd_release); 1103 + /* 1104 + * Put the extra reference to the SCSI command since it would 1105 + * not have been returned in this case. 
1106 + */ 1107 + kref_put(&io_req->refcount, bnx2fc_cmd_release); 1108 + } 1115 1109 1116 1110 spin_lock_bh(&tgt->tgt_lock); 1111 + io_req->wait_for_cleanup_comp = 0; 1117 1112 return SUCCESS; 1118 1113 } 1119 1114 ··· 1202 1197 /* Move IO req to retire queue */ 1203 1198 list_add_tail(&io_req->link, &tgt->io_retire_queue); 1204 1199 1205 - init_completion(&io_req->tm_done); 1200 + init_completion(&io_req->abts_done); 1201 + init_completion(&io_req->cleanup_done); 1206 1202 1207 1203 if (test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags)) { 1208 1204 printk(KERN_ERR PFX "eh_abort: io_req (xid = 0x%x) " ··· 1231 1225 kref_put(&io_req->refcount, 1232 1226 bnx2fc_cmd_release); /* drop timer hold */ 1233 1227 set_bit(BNX2FC_FLAG_EH_ABORT, &io_req->req_flags); 1234 - io_req->wait_for_comp = 1; 1228 + io_req->wait_for_abts_comp = 1; 1235 1229 rc = bnx2fc_initiate_abts(io_req); 1236 1230 if (rc == FAILED) { 1231 + io_req->wait_for_cleanup_comp = 1; 1237 1232 bnx2fc_initiate_cleanup(io_req); 1238 1233 spin_unlock_bh(&tgt->tgt_lock); 1239 - wait_for_completion(&io_req->tm_done); 1234 + wait_for_completion(&io_req->cleanup_done); 1240 1235 spin_lock_bh(&tgt->tgt_lock); 1241 - io_req->wait_for_comp = 0; 1236 + io_req->wait_for_cleanup_comp = 0; 1242 1237 goto done; 1243 1238 } 1244 1239 spin_unlock_bh(&tgt->tgt_lock); 1245 1240 1246 1241 /* Wait 2 * RA_TOV + 1 to be sure timeout function hasn't fired */ 1247 - time_left = wait_for_completion_timeout(&io_req->tm_done, 1248 - (2 * rp->r_a_tov + 1) * HZ); 1242 + time_left = wait_for_completion_timeout(&io_req->abts_done, 1243 + (2 * rp->r_a_tov + 1) * HZ); 1249 1244 if (time_left) 1250 - BNX2FC_IO_DBG(io_req, "Timed out in eh_abort waiting for tm_done"); 1245 + BNX2FC_IO_DBG(io_req, 1246 + "Timed out in eh_abort waiting for abts_done"); 1251 1247 1252 1248 spin_lock_bh(&tgt->tgt_lock); 1253 - io_req->wait_for_comp = 0; 1249 + io_req->wait_for_abts_comp = 0; 1254 1250 if (test_bit(BNX2FC_FLAG_IO_COMPL, 
&io_req->req_flags)) { 1255 1251 BNX2FC_IO_DBG(io_req, "IO completed in a different context\n"); 1256 1252 rc = SUCCESS; ··· 1327 1319 BNX2FC_IO_DBG(io_req, "Entered process_cleanup_compl " 1328 1320 "refcnt = %d, cmd_type = %d\n", 1329 1321 kref_read(&io_req->refcount), io_req->cmd_type); 1322 + /* 1323 + * Test whether there is a cleanup request pending. If not just 1324 + * exit. 1325 + */ 1326 + if (!test_and_clear_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, 1327 + &io_req->req_flags)) 1328 + return; 1329 + /* 1330 + * If we receive a cleanup completion for this request then the 1331 + * firmware will not give us an abort completion for this request 1332 + * so clear any ABTS pending flags. 1333 + */ 1334 + if (test_bit(BNX2FC_FLAG_ISSUE_ABTS, &io_req->req_flags) && 1335 + !test_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags)) { 1336 + set_bit(BNX2FC_FLAG_ABTS_DONE, &io_req->req_flags); 1337 + if (io_req->wait_for_abts_comp) 1338 + complete(&io_req->abts_done); 1339 + } 1340 + 1330 1341 bnx2fc_scsi_done(io_req, DID_ERROR); 1331 1342 kref_put(&io_req->refcount, bnx2fc_cmd_release); 1332 - if (io_req->wait_for_comp) 1333 - complete(&io_req->tm_done); 1343 + if (io_req->wait_for_cleanup_comp) 1344 + complete(&io_req->cleanup_done); 1334 1345 } 1335 1346 1336 1347 void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req, ··· 1371 1344 BNX2FC_IO_DBG(io_req, "Timer context finished processing" 1372 1345 " this io\n"); 1373 1346 return; 1347 + } 1348 + 1349 + /* 1350 + * If we receive an ABTS completion here then we will not receive 1351 + * a cleanup completion so clear any cleanup pending flags. 
1352 + */ 1353 + if (test_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags)) { 1354 + clear_bit(BNX2FC_FLAG_ISSUE_CLEANUP_REQ, &io_req->req_flags); 1355 + if (io_req->wait_for_cleanup_comp) 1356 + complete(&io_req->cleanup_done); 1374 1357 } 1375 1358 1376 1359 /* Do not issue RRQ as this IO is already cleanedup */ ··· 1427 1390 bnx2fc_cmd_timer_set(io_req, r_a_tov); 1428 1391 1429 1392 io_compl: 1430 - if (io_req->wait_for_comp) { 1393 + if (io_req->wait_for_abts_comp) { 1431 1394 if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT, 1432 1395 &io_req->req_flags)) 1433 - complete(&io_req->tm_done); 1396 + complete(&io_req->abts_done); 1434 1397 } else { 1435 1398 /* 1436 1399 * We end up here when ABTS is issued as ··· 1614 1577 sc_cmd->scsi_done(sc_cmd); 1615 1578 1616 1579 kref_put(&io_req->refcount, bnx2fc_cmd_release); 1617 - if (io_req->wait_for_comp) { 1580 + if (io_req->wait_for_abts_comp) { 1618 1581 BNX2FC_IO_DBG(io_req, "tm_compl - wake up the waiter\n"); 1619 - complete(&io_req->tm_done); 1582 + complete(&io_req->abts_done); 1620 1583 } 1621 1584 } 1622 1585 ··· 1660 1623 u64 addr; 1661 1624 int i; 1662 1625 1626 + WARN_ON(scsi_sg_count(sc) > BNX2FC_MAX_BDS_PER_CMD); 1663 1627 /* 1664 1628 * Use dma_map_sg directly to ensure we're using the correct 1665 1629 * dev struct off of pcidev. ··· 1707 1669 bd[0].buf_len = bd[0].flags = 0; 1708 1670 } 1709 1671 io_req->bd_tbl->bd_valid = bd_count; 1672 + 1673 + /* 1674 + * Return the command to ML if BD count exceeds the max number 1675 + * that can be handled by FW. 1676 + */ 1677 + if (bd_count > BNX2FC_FW_MAX_BDS_PER_CMD) { 1678 + pr_err("bd_count = %d exceeded FW supported max BD(255), task_id = 0x%x\n", 1679 + bd_count, io_req->xid); 1680 + return -ENOMEM; 1681 + } 1710 1682 1711 1683 return 0; 1712 1684 } ··· 1974 1926 * between command abort and (late) completion. 
1975 1927 */ 1976 1928 BNX2FC_IO_DBG(io_req, "xid not on active_cmd_queue\n"); 1977 - if (io_req->wait_for_comp) 1929 + if (io_req->wait_for_abts_comp) 1978 1930 if (test_and_clear_bit(BNX2FC_FLAG_EH_ABORT, 1979 1931 &io_req->req_flags)) 1980 - complete(&io_req->tm_done); 1932 + complete(&io_req->abts_done); 1981 1933 } 1982 1934 1983 1935 bnx2fc_unmap_sg_list(io_req);
+5 -5
drivers/scsi/bnx2fc/bnx2fc_tgt.c
···
187 187 			/* Handle eh_abort timeout */
188 188 			BNX2FC_IO_DBG(io_req, "eh_abort for IO "
189 189 				      "cleaned up\n");
190     -			complete(&io_req->tm_done);
    190 +			complete(&io_req->abts_done);
191 191 		}
192 192 		kref_put(&io_req->refcount,
193 193 			 bnx2fc_cmd_release); /* drop timer hold */
···
210 210 		list_del_init(&io_req->link);
211 211 		io_req->on_tmf_queue = 0;
212 212 		BNX2FC_IO_DBG(io_req, "tm_queue cleanup\n");
213     -		if (io_req->wait_for_comp)
214     -			complete(&io_req->tm_done);
    213 +		if (io_req->wait_for_abts_comp)
    214 +			complete(&io_req->abts_done);
215 215 	}
216 216 
217 217 	list_for_each_entry_safe(io_req, tmp, &tgt->els_queue, link) {
···
251 251 			/* Handle eh_abort timeout */
252 252 			BNX2FC_IO_DBG(io_req, "eh_abort for IO "
253 253 				      "in retire_q\n");
254     -			if (io_req->wait_for_comp)
255     -				complete(&io_req->tm_done);
    254 +			if (io_req->wait_for_abts_comp)
    255 +				complete(&io_req->abts_done);
256 256 		}
257 257 		kref_put(&io_req->refcount, bnx2fc_cmd_release);
258 258 	}
+7 -2
drivers/scsi/cxgbi/cxgb4i/cxgb4i.c
···
1665 1665 		return 0;
1666 1666 
1667 1667 	if (caps & DCB_CAP_DCBX_VER_IEEE) {
1668      -		iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
     1668 +		iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_STREAM;
1669 1669 		rv = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
     1670 +		if (!rv) {
     1671 +			iscsi_dcb_app.selector = IEEE_8021QAZ_APP_SEL_ANY;
     1672 +			rv = dcb_ieee_getapp_mask(ndev, &iscsi_dcb_app);
     1673 +		}
1670 1674 	} else if (caps & DCB_CAP_DCBX_VER_CEE) {
1671 1675 		iscsi_dcb_app.selector = DCB_APP_IDTYPE_PORTNUM;
1672 1676 		rv = dcb_getapp(ndev, &iscsi_dcb_app);
···
2264 2260 	u8 priority;
2265 2261 
2266 2262 	if (iscsi_app->dcbx & DCB_CAP_DCBX_VER_IEEE) {
2267      -		if (iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY)
     2263 +		if ((iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_STREAM) &&
     2264 +		    (iscsi_app->app.selector != IEEE_8021QAZ_APP_SEL_ANY))
2268 2265 			return NOTIFY_DONE;
2269 2266 
2270 2267 	priority = iscsi_app->app.priority;
+597
drivers/scsi/fdomain.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for Future Domain TMC-16x0 and TMC-3260 SCSI host adapters 4 + * Copyright 2019 Ondrej Zary 5 + * 6 + * Original driver by 7 + * Rickard E. Faith, faith@cs.unc.edu 8 + * 9 + * Future Domain BIOS versions supported for autodetect: 10 + * 2.0, 3.0, 3.2, 3.4 (1.0), 3.5 (2.0), 3.6, 3.61 11 + * Chips supported: 12 + * TMC-1800, TMC-18C50, TMC-18C30, TMC-36C70 13 + * Boards supported: 14 + * Future Domain TMC-1650, TMC-1660, TMC-1670, TMC-1680, TMC-1610M/MER/MEX 15 + * Future Domain TMC-3260 (PCI) 16 + * Quantum ISA-200S, ISA-250MG 17 + * Adaptec AHA-2920A (PCI) [BUT *NOT* AHA-2920C -- use aic7xxx instead] 18 + * IBM ? 19 + * 20 + * NOTE: 21 + * 22 + * The Adaptec AHA-2920C has an Adaptec AIC-7850 chip on it. 23 + * Use the aic7xxx driver for this board. 24 + * 25 + * The Adaptec AHA-2920A has a Future Domain chip on it, so this is the right 26 + * driver for that card. Unfortunately, the boxes will probably just say 27 + * "2920", so you'll have to look on the card for a Future Domain logo, or a 28 + * letter after the 2920. 29 + * 30 + * If you have a TMC-8xx or TMC-9xx board, then this is not the driver for 31 + * your board. 32 + * 33 + * DESCRIPTION: 34 + * 35 + * This is the Linux low-level SCSI driver for Future Domain TMC-1660/1680 36 + * TMC-1650/1670, and TMC-3260 SCSI host adapters. The 1650 and 1670 have a 37 + * 25-pin external connector, whereas the 1660 and 1680 have a SCSI-2 50-pin 38 + * high-density external connector. The 1670 and 1680 have floppy disk 39 + * controllers built in. The TMC-3260 is a PCI bus card. 40 + * 41 + * Future Domain's older boards are based on the TMC-1800 chip, and this 42 + * driver was originally written for a TMC-1680 board with the TMC-1800 chip. 43 + * More recently, boards are being produced with the TMC-18C50 and TMC-18C30 44 + * chips. 
45 + * 46 + * Please note that the drive ordering that Future Domain implemented in BIOS 47 + * versions 3.4 and 3.5 is the opposite of the order (currently) used by the 48 + * rest of the SCSI industry. 49 + * 50 + * 51 + * REFERENCES USED: 52 + * 53 + * "TMC-1800 SCSI Chip Specification (FDC-1800T)", Future Domain Corporation, 54 + * 1990. 55 + * 56 + * "Technical Reference Manual: 18C50 SCSI Host Adapter Chip", Future Domain 57 + * Corporation, January 1992. 58 + * 59 + * "LXT SCSI Products: Specifications and OEM Technical Manual (Revision 60 + * B/September 1991)", Maxtor Corporation, 1991. 61 + * 62 + * "7213S product Manual (Revision P3)", Maxtor Corporation, 1992. 63 + * 64 + * "Draft Proposed American National Standard: Small Computer System 65 + * Interface - 2 (SCSI-2)", Global Engineering Documents. (X3T9.2/86-109, 66 + * revision 10h, October 17, 1991) 67 + * 68 + * Private communications, Drew Eckhardt (drew@cs.colorado.edu) and Eric 69 + * Youngdale (ericy@cais.com), 1992. 70 + * 71 + * Private communication, Tuong Le (Future Domain Engineering department), 72 + * 1994. (Disk geometry computations for Future Domain BIOS version 3.4, and 73 + * TMC-18C30 detection.) 74 + * 75 + * Hogan, Thom. The Programmer's PC Sourcebook. Microsoft Press, 1988. Page 76 + * 60 (2.39: Disk Partition Table Layout). 77 + * 78 + * "18C30 Technical Reference Manual", Future Domain Corporation, 1993, page 79 + * 6-1. 80 + */ 81 + 82 + #include <linux/module.h> 83 + #include <linux/interrupt.h> 84 + #include <linux/delay.h> 85 + #include <linux/pci.h> 86 + #include <linux/workqueue.h> 87 + #include <scsi/scsicam.h> 88 + #include <scsi/scsi_cmnd.h> 89 + #include <scsi/scsi_device.h> 90 + #include <scsi/scsi_host.h> 91 + #include "fdomain.h" 92 + 93 + /* 94 + * FIFO_COUNT: The host adapter has an 8K cache (host adapters based on the 95 + * 18C30 chip have a 2k cache). When this many 512 byte blocks are filled by 96 + * the SCSI device, an interrupt will be raised. 
Therefore, this could be as 97 + * low as 0, or as high as 16. Note, however, that values which are too high 98 + * or too low seem to prevent any interrupts from occurring, and thereby lock 99 + * up the machine. 100 + */ 101 + #define FIFO_COUNT 2 /* Number of 512 byte blocks before INTR */ 102 + #define PARITY_MASK ACTL_PAREN /* Parity enabled, 0 = disabled */ 103 + 104 + enum chip_type { 105 + unknown = 0x00, 106 + tmc1800 = 0x01, 107 + tmc18c50 = 0x02, 108 + tmc18c30 = 0x03, 109 + }; 110 + 111 + struct fdomain { 112 + int base; 113 + struct scsi_cmnd *cur_cmd; 114 + enum chip_type chip; 115 + struct work_struct work; 116 + }; 117 + 118 + static inline void fdomain_make_bus_idle(struct fdomain *fd) 119 + { 120 + outb(0, fd->base + REG_BCTL); 121 + outb(0, fd->base + REG_MCTL); 122 + if (fd->chip == tmc18c50 || fd->chip == tmc18c30) 123 + /* Clear forced intr. */ 124 + outb(ACTL_RESET | ACTL_CLRFIRQ | PARITY_MASK, 125 + fd->base + REG_ACTL); 126 + else 127 + outb(ACTL_RESET | PARITY_MASK, fd->base + REG_ACTL); 128 + } 129 + 130 + static enum chip_type fdomain_identify(int port) 131 + { 132 + u16 id = inb(port + REG_ID_LSB) | inb(port + REG_ID_MSB) << 8; 133 + 134 + switch (id) { 135 + case 0x6127: 136 + return tmc1800; 137 + case 0x60e9: /* 18c50 or 18c30 */ 138 + break; 139 + default: 140 + return unknown; 141 + } 142 + 143 + /* Try to toggle 32-bit mode. This only works on an 18c30 chip. */ 144 + outb(CFG2_32BIT, port + REG_CFG2); 145 + if ((inb(port + REG_CFG2) & CFG2_32BIT)) { 146 + outb(0, port + REG_CFG2); 147 + if ((inb(port + REG_CFG2) & CFG2_32BIT) == 0) 148 + return tmc18c30; 149 + } 150 + /* If that failed, we are an 18c50. 
*/ 151 + return tmc18c50; 152 + } 153 + 154 + static int fdomain_test_loopback(int base) 155 + { 156 + int i; 157 + 158 + for (i = 0; i < 255; i++) { 159 + outb(i, base + REG_LOOPBACK); 160 + if (inb(base + REG_LOOPBACK) != i) 161 + return 1; 162 + } 163 + 164 + return 0; 165 + } 166 + 167 + static void fdomain_reset(int base) 168 + { 169 + outb(1, base + REG_BCTL); 170 + mdelay(20); 171 + outb(0, base + REG_BCTL); 172 + mdelay(1150); 173 + outb(0, base + REG_MCTL); 174 + outb(PARITY_MASK, base + REG_ACTL); 175 + } 176 + 177 + static int fdomain_select(struct Scsi_Host *sh, int target) 178 + { 179 + int status; 180 + unsigned long timeout; 181 + struct fdomain *fd = shost_priv(sh); 182 + 183 + outb(BCTL_BUSEN | BCTL_SEL, fd->base + REG_BCTL); 184 + outb(BIT(sh->this_id) | BIT(target), fd->base + REG_SCSI_DATA_NOACK); 185 + 186 + /* Stop arbitration and enable parity */ 187 + outb(PARITY_MASK, fd->base + REG_ACTL); 188 + 189 + timeout = 350; /* 350 msec */ 190 + 191 + do { 192 + status = inb(fd->base + REG_BSTAT); 193 + if (status & BSTAT_BSY) { 194 + /* Enable SCSI Bus */ 195 + /* (on error, should make bus idle with 0) */ 196 + outb(BCTL_BUSEN, fd->base + REG_BCTL); 197 + return 0; 198 + } 199 + mdelay(1); 200 + } while (--timeout); 201 + fdomain_make_bus_idle(fd); 202 + return 1; 203 + } 204 + 205 + static void fdomain_finish_cmd(struct fdomain *fd, int result) 206 + { 207 + outb(0, fd->base + REG_ICTL); 208 + fdomain_make_bus_idle(fd); 209 + fd->cur_cmd->result = result; 210 + fd->cur_cmd->scsi_done(fd->cur_cmd); 211 + fd->cur_cmd = NULL; 212 + } 213 + 214 + static void fdomain_read_data(struct scsi_cmnd *cmd) 215 + { 216 + struct fdomain *fd = shost_priv(cmd->device->host); 217 + unsigned char *virt, *ptr; 218 + size_t offset, len; 219 + 220 + while ((len = inw(fd->base + REG_FIFO_COUNT)) > 0) { 221 + offset = scsi_bufflen(cmd) - scsi_get_resid(cmd); 222 + virt = scsi_kmap_atomic_sg(scsi_sglist(cmd), scsi_sg_count(cmd), 223 + &offset, &len); 224 + ptr = virt + 
offset; 225 + if (len & 1) 226 + *ptr++ = inb(fd->base + REG_FIFO); 227 + if (len > 1) 228 + insw(fd->base + REG_FIFO, ptr, len >> 1); 229 + scsi_set_resid(cmd, scsi_get_resid(cmd) - len); 230 + scsi_kunmap_atomic_sg(virt); 231 + } 232 + } 233 + 234 + static void fdomain_write_data(struct scsi_cmnd *cmd) 235 + { 236 + struct fdomain *fd = shost_priv(cmd->device->host); 237 + /* 8k FIFO for pre-tmc18c30 chips, 2k FIFO for tmc18c30 */ 238 + int FIFO_Size = fd->chip == tmc18c30 ? 0x800 : 0x2000; 239 + unsigned char *virt, *ptr; 240 + size_t offset, len; 241 + 242 + while ((len = FIFO_Size - inw(fd->base + REG_FIFO_COUNT)) > 512) { 243 + offset = scsi_bufflen(cmd) - scsi_get_resid(cmd); 244 + if (len + offset > scsi_bufflen(cmd)) { 245 + len = scsi_bufflen(cmd) - offset; 246 + if (len == 0) 247 + break; 248 + } 249 + virt = scsi_kmap_atomic_sg(scsi_sglist(cmd), scsi_sg_count(cmd), 250 + &offset, &len); 251 + ptr = virt + offset; 252 + if (len & 1) 253 + outb(*ptr++, fd->base + REG_FIFO); 254 + if (len > 1) 255 + outsw(fd->base + REG_FIFO, ptr, len >> 1); 256 + scsi_set_resid(cmd, scsi_get_resid(cmd) - len); 257 + scsi_kunmap_atomic_sg(virt); 258 + } 259 + } 260 + 261 + static void fdomain_work(struct work_struct *work) 262 + { 263 + struct fdomain *fd = container_of(work, struct fdomain, work); 264 + struct Scsi_Host *sh = container_of((void *)fd, struct Scsi_Host, 265 + hostdata); 266 + struct scsi_cmnd *cmd = fd->cur_cmd; 267 + unsigned long flags; 268 + int status; 269 + int done = 0; 270 + 271 + spin_lock_irqsave(sh->host_lock, flags); 272 + 273 + if (cmd->SCp.phase & in_arbitration) { 274 + status = inb(fd->base + REG_ASTAT); 275 + if (!(status & ASTAT_ARB)) { 276 + fdomain_finish_cmd(fd, DID_BUS_BUSY << 16); 277 + goto out; 278 + } 279 + cmd->SCp.phase = in_selection; 280 + 281 + outb(ICTL_SEL | FIFO_COUNT, fd->base + REG_ICTL); 282 + outb(BCTL_BUSEN | BCTL_SEL, fd->base + REG_BCTL); 283 + outb(BIT(cmd->device->host->this_id) | BIT(scmd_id(cmd)), 284 + fd->base + 
REG_SCSI_DATA_NOACK); 285 + /* Stop arbitration and enable parity */ 286 + outb(ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL); 287 + goto out; 288 + } else if (cmd->SCp.phase & in_selection) { 289 + status = inb(fd->base + REG_BSTAT); 290 + if (!(status & BSTAT_BSY)) { 291 + /* Try again, for slow devices */ 292 + if (fdomain_select(cmd->device->host, scmd_id(cmd))) { 293 + fdomain_finish_cmd(fd, DID_NO_CONNECT << 16); 294 + goto out; 295 + } 296 + /* Stop arbitration and enable parity */ 297 + outb(ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL); 298 + } 299 + cmd->SCp.phase = in_other; 300 + outb(ICTL_FIFO | ICTL_REQ | FIFO_COUNT, fd->base + REG_ICTL); 301 + outb(BCTL_BUSEN, fd->base + REG_BCTL); 302 + goto out; 303 + } 304 + 305 + /* cur_cmd->SCp.phase == in_other: this is the body of the routine */ 306 + status = inb(fd->base + REG_BSTAT); 307 + 308 + if (status & BSTAT_REQ) { 309 + switch (status & 0x0e) { 310 + case BSTAT_CMD: /* COMMAND OUT */ 311 + outb(cmd->cmnd[cmd->SCp.sent_command++], 312 + fd->base + REG_SCSI_DATA); 313 + break; 314 + case 0: /* DATA OUT -- tmc18c50/tmc18c30 only */ 315 + if (fd->chip != tmc1800 && !cmd->SCp.have_data_in) { 316 + cmd->SCp.have_data_in = -1; 317 + outb(ACTL_IRQEN | ACTL_FIFOWR | ACTL_FIFOEN | 318 + PARITY_MASK, fd->base + REG_ACTL); 319 + } 320 + break; 321 + case BSTAT_IO: /* DATA IN -- tmc18c50/tmc18c30 only */ 322 + if (fd->chip != tmc1800 && !cmd->SCp.have_data_in) { 323 + cmd->SCp.have_data_in = 1; 324 + outb(ACTL_IRQEN | ACTL_FIFOEN | PARITY_MASK, 325 + fd->base + REG_ACTL); 326 + } 327 + break; 328 + case BSTAT_CMD | BSTAT_IO: /* STATUS IN */ 329 + cmd->SCp.Status = inb(fd->base + REG_SCSI_DATA); 330 + break; 331 + case BSTAT_MSG | BSTAT_CMD: /* MESSAGE OUT */ 332 + outb(MESSAGE_REJECT, fd->base + REG_SCSI_DATA); 333 + break; 334 + case BSTAT_MSG | BSTAT_IO | BSTAT_CMD: /* MESSAGE IN */ 335 + cmd->SCp.Message = inb(fd->base + REG_SCSI_DATA); 336 + if (!cmd->SCp.Message) 337 + ++done; 338 + break; 339 + } 340 + } 
341 + 342 + if (fd->chip == tmc1800 && !cmd->SCp.have_data_in && 343 + cmd->SCp.sent_command >= cmd->cmd_len) { 344 + if (cmd->sc_data_direction == DMA_TO_DEVICE) { 345 + cmd->SCp.have_data_in = -1; 346 + outb(ACTL_IRQEN | ACTL_FIFOWR | ACTL_FIFOEN | 347 + PARITY_MASK, fd->base + REG_ACTL); 348 + } else { 349 + cmd->SCp.have_data_in = 1; 350 + outb(ACTL_IRQEN | ACTL_FIFOEN | PARITY_MASK, 351 + fd->base + REG_ACTL); 352 + } 353 + } 354 + 355 + if (cmd->SCp.have_data_in == -1) /* DATA OUT */ 356 + fdomain_write_data(cmd); 357 + 358 + if (cmd->SCp.have_data_in == 1) /* DATA IN */ 359 + fdomain_read_data(cmd); 360 + 361 + if (done) { 362 + fdomain_finish_cmd(fd, (cmd->SCp.Status & 0xff) | 363 + ((cmd->SCp.Message & 0xff) << 8) | 364 + (DID_OK << 16)); 365 + } else { 366 + if (cmd->SCp.phase & disconnect) { 367 + outb(ICTL_FIFO | ICTL_SEL | ICTL_REQ | FIFO_COUNT, 368 + fd->base + REG_ICTL); 369 + outb(0, fd->base + REG_BCTL); 370 + } else 371 + outb(ICTL_FIFO | ICTL_REQ | FIFO_COUNT, 372 + fd->base + REG_ICTL); 373 + } 374 + out: 375 + spin_unlock_irqrestore(sh->host_lock, flags); 376 + } 377 + 378 + static irqreturn_t fdomain_irq(int irq, void *dev_id) 379 + { 380 + struct fdomain *fd = dev_id; 381 + 382 + /* Is it our IRQ? */ 383 + if ((inb(fd->base + REG_ASTAT) & ASTAT_IRQ) == 0) 384 + return IRQ_NONE; 385 + 386 + outb(0, fd->base + REG_ICTL); 387 + 388 + /* We usually have one spurious interrupt after each command. 
*/ 389 + if (!fd->cur_cmd) /* Spurious interrupt */ 390 + return IRQ_NONE; 391 + 392 + schedule_work(&fd->work); 393 + 394 + return IRQ_HANDLED; 395 + } 396 + 397 + static int fdomain_queue(struct Scsi_Host *sh, struct scsi_cmnd *cmd) 398 + { 399 + struct fdomain *fd = shost_priv(cmd->device->host); 400 + unsigned long flags; 401 + 402 + cmd->SCp.Status = 0; 403 + cmd->SCp.Message = 0; 404 + cmd->SCp.have_data_in = 0; 405 + cmd->SCp.sent_command = 0; 406 + cmd->SCp.phase = in_arbitration; 407 + scsi_set_resid(cmd, scsi_bufflen(cmd)); 408 + 409 + spin_lock_irqsave(sh->host_lock, flags); 410 + 411 + fd->cur_cmd = cmd; 412 + 413 + fdomain_make_bus_idle(fd); 414 + 415 + /* Start arbitration */ 416 + outb(0, fd->base + REG_ICTL); 417 + outb(0, fd->base + REG_BCTL); /* Disable data drivers */ 418 + /* Set our id bit */ 419 + outb(BIT(cmd->device->host->this_id), fd->base + REG_SCSI_DATA_NOACK); 420 + outb(ICTL_ARB, fd->base + REG_ICTL); 421 + /* Start arbitration */ 422 + outb(ACTL_ARB | ACTL_IRQEN | PARITY_MASK, fd->base + REG_ACTL); 423 + 424 + spin_unlock_irqrestore(sh->host_lock, flags); 425 + 426 + return 0; 427 + } 428 + 429 + static int fdomain_abort(struct scsi_cmnd *cmd) 430 + { 431 + struct Scsi_Host *sh = cmd->device->host; 432 + struct fdomain *fd = shost_priv(sh); 433 + unsigned long flags; 434 + 435 + if (!fd->cur_cmd) 436 + return FAILED; 437 + 438 + spin_lock_irqsave(sh->host_lock, flags); 439 + 440 + fdomain_make_bus_idle(fd); 441 + fd->cur_cmd->SCp.phase |= aborted; 442 + fd->cur_cmd->result = DID_ABORT << 16; 443 + 444 + /* Aborts are not done well. . . 
*/ 445 + fdomain_finish_cmd(fd, DID_ABORT << 16); 446 + spin_unlock_irqrestore(sh->host_lock, flags); 447 + return SUCCESS; 448 + } 449 + 450 + static int fdomain_host_reset(struct scsi_cmnd *cmd) 451 + { 452 + struct Scsi_Host *sh = cmd->device->host; 453 + struct fdomain *fd = shost_priv(sh); 454 + unsigned long flags; 455 + 456 + spin_lock_irqsave(sh->host_lock, flags); 457 + fdomain_reset(fd->base); 458 + spin_unlock_irqrestore(sh->host_lock, flags); 459 + return SUCCESS; 460 + } 461 + 462 + static int fdomain_biosparam(struct scsi_device *sdev, 463 + struct block_device *bdev, sector_t capacity, 464 + int geom[]) 465 + { 466 + unsigned char *p = scsi_bios_ptable(bdev); 467 + 468 + if (p && p[65] == 0xaa && p[64] == 0x55 /* Partition table valid */ 469 + && p[4]) { /* Partition type */ 470 + geom[0] = p[5] + 1; /* heads */ 471 + geom[1] = p[6] & 0x3f; /* sectors */ 472 + } else { 473 + if (capacity >= 0x7e0000) { 474 + geom[0] = 255; /* heads */ 475 + geom[1] = 63; /* sectors */ 476 + } else if (capacity >= 0x200000) { 477 + geom[0] = 128; /* heads */ 478 + geom[1] = 63; /* sectors */ 479 + } else { 480 + geom[0] = 64; /* heads */ 481 + geom[1] = 32; /* sectors */ 482 + } 483 + } 484 + geom[2] = sector_div(capacity, geom[0] * geom[1]); 485 + kfree(p); 486 + 487 + return 0; 488 + } 489 + 490 + static struct scsi_host_template fdomain_template = { 491 + .module = THIS_MODULE, 492 + .name = "Future Domain TMC-16x0", 493 + .proc_name = "fdomain", 494 + .queuecommand = fdomain_queue, 495 + .eh_abort_handler = fdomain_abort, 496 + .eh_host_reset_handler = fdomain_host_reset, 497 + .bios_param = fdomain_biosparam, 498 + .can_queue = 1, 499 + .this_id = 7, 500 + .sg_tablesize = 64, 501 + .dma_boundary = PAGE_SIZE - 1, 502 + }; 503 + 504 + struct Scsi_Host *fdomain_create(int base, int irq, int this_id, 505 + struct device *dev) 506 + { 507 + struct Scsi_Host *sh; 508 + struct fdomain *fd; 509 + enum chip_type chip; 510 + static const char * const chip_names[] = { 511 + 
"Unknown", "TMC-1800", "TMC-18C50", "TMC-18C30" 512 + }; 513 + unsigned long irq_flags = 0; 514 + 515 + chip = fdomain_identify(base); 516 + if (!chip) 517 + return NULL; 518 + 519 + fdomain_reset(base); 520 + 521 + if (fdomain_test_loopback(base)) 522 + return NULL; 523 + 524 + if (!irq) { 525 + dev_err(dev, "card has no IRQ assigned"); 526 + return NULL; 527 + } 528 + 529 + sh = scsi_host_alloc(&fdomain_template, sizeof(struct fdomain)); 530 + if (!sh) 531 + return NULL; 532 + 533 + if (this_id) 534 + sh->this_id = this_id & 0x07; 535 + 536 + sh->irq = irq; 537 + sh->io_port = base; 538 + sh->n_io_port = FDOMAIN_REGION_SIZE; 539 + 540 + fd = shost_priv(sh); 541 + fd->base = base; 542 + fd->chip = chip; 543 + INIT_WORK(&fd->work, fdomain_work); 544 + 545 + if (dev_is_pci(dev) || !strcmp(dev->bus->name, "pcmcia")) 546 + irq_flags = IRQF_SHARED; 547 + 548 + if (request_irq(irq, fdomain_irq, irq_flags, "fdomain", fd)) 549 + goto fail_put; 550 + 551 + shost_printk(KERN_INFO, sh, "%s chip at 0x%x irq %d SCSI ID %d\n", 552 + dev_is_pci(dev) ? 
"TMC-36C70 (PCI bus)" : chip_names[chip], 553 + base, irq, sh->this_id); 554 + 555 + if (scsi_add_host(sh, dev)) 556 + goto fail_free_irq; 557 + 558 + scsi_scan_host(sh); 559 + 560 + return sh; 561 + 562 + fail_free_irq: 563 + free_irq(irq, fd); 564 + fail_put: 565 + scsi_host_put(sh); 566 + return NULL; 567 + } 568 + EXPORT_SYMBOL_GPL(fdomain_create); 569 + 570 + int fdomain_destroy(struct Scsi_Host *sh) 571 + { 572 + struct fdomain *fd = shost_priv(sh); 573 + 574 + cancel_work_sync(&fd->work); 575 + scsi_remove_host(sh); 576 + if (sh->irq) 577 + free_irq(sh->irq, fd); 578 + scsi_host_put(sh); 579 + return 0; 580 + } 581 + EXPORT_SYMBOL_GPL(fdomain_destroy); 582 + 583 + #ifdef CONFIG_PM_SLEEP 584 + static int fdomain_resume(struct device *dev) 585 + { 586 + struct fdomain *fd = shost_priv(dev_get_drvdata(dev)); 587 + 588 + fdomain_reset(fd->base); 589 + return 0; 590 + } 591 + 592 + static SIMPLE_DEV_PM_OPS(fdomain_pm_ops, NULL, fdomain_resume); 593 + #endif /* CONFIG_PM_SLEEP */ 594 + 595 + MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith"); 596 + MODULE_DESCRIPTION("Future Domain TMC-16x0/TMC-3260 SCSI driver"); 597 + MODULE_LICENSE("GPL");
+114
drivers/scsi/fdomain.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #define FDOMAIN_REGION_SIZE 0x10 4 + #define FDOMAIN_BIOS_SIZE 0x2000 5 + 6 + enum { 7 + in_arbitration = 0x02, 8 + in_selection = 0x04, 9 + in_other = 0x08, 10 + disconnect = 0x10, 11 + aborted = 0x20, 12 + sent_ident = 0x40, 13 + }; 14 + 15 + /* (@) = not present on TMC1800, (#) = not present on TMC1800 and TMC18C50 */ 16 + #define REG_SCSI_DATA 0 /* R/W: SCSI Data (with ACK) */ 17 + #define REG_BSTAT 1 /* R: SCSI Bus Status */ 18 + #define BSTAT_BSY BIT(0) /* Busy */ 19 + #define BSTAT_MSG BIT(1) /* Message */ 20 + #define BSTAT_IO BIT(2) /* Input/Output */ 21 + #define BSTAT_CMD BIT(3) /* Command/Data */ 22 + #define BSTAT_REQ BIT(4) /* Request and Not Ack */ 23 + #define BSTAT_SEL BIT(5) /* Select */ 24 + #define BSTAT_ACK BIT(6) /* Acknowledge and Request */ 25 + #define BSTAT_ATN BIT(7) /* Attention */ 26 + #define REG_BCTL 1 /* W: SCSI Bus Control */ 27 + #define BCTL_RST BIT(0) /* Bus Reset */ 28 + #define BCTL_SEL BIT(1) /* Select */ 29 + #define BCTL_BSY BIT(2) /* Busy */ 30 + #define BCTL_ATN BIT(3) /* Attention */ 31 + #define BCTL_IO BIT(4) /* Input/Output */ 32 + #define BCTL_CMD BIT(5) /* Command/Data */ 33 + #define BCTL_MSG BIT(6) /* Message */ 34 + #define BCTL_BUSEN BIT(7) /* Enable bus drivers */ 35 + #define REG_ASTAT 2 /* R: Adapter Status 1 */ 36 + #define ASTAT_IRQ BIT(0) /* Interrupt active */ 37 + #define ASTAT_ARB BIT(1) /* Arbitration complete */ 38 + #define ASTAT_PARERR BIT(2) /* Parity error */ 39 + #define ASTAT_RST BIT(3) /* SCSI reset occurred */ 40 + #define ASTAT_FIFODIR BIT(4) /* FIFO direction */ 41 + #define ASTAT_FIFOEN BIT(5) /* FIFO enabled */ 42 + #define ASTAT_PAREN BIT(6) /* Parity enabled */ 43 + #define ASTAT_BUSEN BIT(7) /* Bus drivers enabled */ 44 + #define REG_ICTL 2 /* W: Interrupt Control */ 45 + #define ICTL_FIFO_MASK 0x0f /* FIFO threshold, 1/16 FIFO size */ 46 + #define ICTL_FIFO BIT(4) /* Int. on FIFO count */ 47 + #define ICTL_ARB BIT(5) /* Int. 
on Arbitration complete */
+ #define ICTL_SEL BIT(6)			/* Int. on SCSI Select */
+ #define ICTL_REQ BIT(7)			/* Int. on SCSI Request */
+ #define REG_FSTAT 3	/* R: Adapter Status 2 (FIFO) - (@) */
+ #define FSTAT_ONOTEMPTY BIT(0)		/* Output FIFO not empty */
+ #define FSTAT_INOTEMPTY BIT(1)		/* Input FIFO not empty */
+ #define FSTAT_NOTEMPTY BIT(2)		/* Main FIFO not empty */
+ #define FSTAT_NOTFULL BIT(3)		/* Main FIFO not full */
+ #define REG_MCTL 3	/* W: SCSI Data Mode Control */
+ #define MCTL_ACK_MASK 0x0f		/* Acknowledge period */
+ #define MCTL_ACTDEASS BIT(4)		/* Active deassert of REQ and ACK */
+ #define MCTL_TARGET BIT(5)		/* Enable target mode */
+ #define MCTL_FASTSYNC BIT(6)		/* Enable Fast Synchronous */
+ #define MCTL_SYNC BIT(7)		/* Enable Synchronous */
+ #define REG_INTCOND 4	/* R: Interrupt Condition - (@) */
+ #define IRQ_FIFO BIT(1)			/* FIFO interrupt */
+ #define IRQ_REQ BIT(2)			/* SCSI Request interrupt */
+ #define IRQ_SEL BIT(3)			/* SCSI Select interrupt */
+ #define IRQ_ARB BIT(4)			/* SCSI Arbitration interrupt */
+ #define IRQ_RST BIT(5)			/* SCSI Reset interrupt */
+ #define IRQ_FORCED BIT(6)		/* Forced interrupt */
+ #define IRQ_TIMEOUT BIT(7)		/* Bus timeout */
+ #define REG_ACTL 4	/* W: Adapter Control 1 */
+ #define ACTL_RESET BIT(0)		/* Reset FIFO, parity, reset int. */
+ #define ACTL_FIRQ BIT(1)		/* Set Forced interrupt */
+ #define ACTL_ARB BIT(2)			/* Initiate Bus Arbitration */
+ #define ACTL_PAREN BIT(3)		/* Enable SCSI Parity */
+ #define ACTL_IRQEN BIT(4)		/* Enable interrupts */
+ #define ACTL_CLRFIRQ BIT(5)		/* Clear Forced interrupt */
+ #define ACTL_FIFOWR BIT(6)		/* FIFO Direction (1=write) */
+ #define ACTL_FIFOEN BIT(7)		/* Enable FIFO */
+ #define REG_ID_LSB 5	/* R: ID Code (LSB) */
+ #define REG_ACTL2 5	/* Adapter Control 2 - (@) */
+ #define ACTL2_RAMOVRLY BIT(0)		/* Enable RAM overlay */
+ #define ACTL2_SLEEP BIT(7)		/* Sleep mode */
+ #define REG_ID_MSB 6	/* R: ID Code (MSB) */
+ #define REG_LOOPBACK 7	/* R/W: Loopback */
+ #define REG_SCSI_DATA_NOACK 8	/* R/W: SCSI Data (no ACK) */
+ #define REG_ASTAT3 9	/* R: Adapter Status 3 */
+ #define ASTAT3_ACTDEASS BIT(0)		/* Active deassert enabled */
+ #define ASTAT3_RAMOVRLY BIT(1)		/* RAM overlay enabled */
+ #define ASTAT3_TARGERR BIT(2)		/* Target error */
+ #define ASTAT3_IRQEN BIT(3)		/* Interrupts enabled */
+ #define ASTAT3_IRQMASK 0xf0		/* Enabled interrupts mask */
+ #define REG_CFG1 10	/* R: Configuration Register 1 */
+ #define CFG1_BUS BIT(0)			/* 0 = ISA */
+ #define CFG1_IRQ_MASK 0x0e		/* IRQ jumpers */
+ #define CFG1_IO_MASK 0x30		/* I/O base jumpers */
+ #define CFG1_BIOS_MASK 0xc0		/* BIOS base jumpers */
+ #define REG_CFG2 11	/* R/W: Configuration Register 2 (@) */
+ #define CFG2_ROMDIS BIT(0)		/* ROM disabled */
+ #define CFG2_RAMDIS BIT(1)		/* RAM disabled */
+ #define CFG2_IRQEDGE BIT(2)		/* Edge-triggered interrupts */
+ #define CFG2_NOWS BIT(3)		/* No wait states */
+ #define CFG2_32BIT BIT(7)		/* 32-bit mode */
+ #define REG_FIFO 12	/* R/W: FIFO */
+ #define REG_FIFO_COUNT 14	/* R: FIFO Data Count */
+ 
+ #ifdef CONFIG_PM_SLEEP
+ static const struct dev_pm_ops fdomain_pm_ops;
+ #define FDOMAIN_PM_OPS (&fdomain_pm_ops)
+ #else
+ #define FDOMAIN_PM_OPS NULL
+ #endif /* CONFIG_PM_SLEEP */
+ 
+ struct Scsi_Host *fdomain_create(int base, int irq, int this_id,
+ 				 struct device *dev);
+ int fdomain_destroy(struct Scsi_Host *sh);
+222
drivers/scsi/fdomain_isa.c
···
+ // SPDX-License-Identifier: GPL-2.0
+ 
+ #include <linux/module.h>
+ #include <linux/io.h>
+ #include <linux/isa.h>
+ #include <scsi/scsi_host.h>
+ #include "fdomain.h"
+ 
+ #define MAXBOARDS_PARAM 4
+ static int io[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
+ module_param_hw_array(io, int, ioport, NULL, 0);
+ MODULE_PARM_DESC(io, "base I/O address of controller (0x140, 0x150, 0x160, 0x170)");
+ 
+ static int irq[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
+ module_param_hw_array(irq, int, irq, NULL, 0);
+ MODULE_PARM_DESC(irq, "IRQ of controller (0=auto [default])");
+ 
+ static int scsi_id[MAXBOARDS_PARAM] = { 0, 0, 0, 0 };
+ module_param_hw_array(scsi_id, int, other, NULL, 0);
+ MODULE_PARM_DESC(scsi_id, "SCSI ID of controller (default = 7)");
+ 
+ static unsigned long addresses[] = {
+ 	0xc8000,
+ 	0xca000,
+ 	0xce000,
+ 	0xde000,
+ };
+ #define ADDRESS_COUNT ARRAY_SIZE(addresses)
+ 
+ static unsigned short ports[] = { 0x140, 0x150, 0x160, 0x170 };
+ #define PORT_COUNT ARRAY_SIZE(ports)
+ 
+ static unsigned short irqs[] = { 3, 5, 10, 11, 12, 14, 15, 0 };
+ 
+ /* This driver works *ONLY* for Future Domain cards using the TMC-1800,
+  * TMC-18C50, or TMC-18C30 chip.  This includes models TMC-1650, 1660, 1670,
+  * and 1680.  These are all 16-bit cards.
+  * BIOS versions prior to 3.2 assigned SCSI ID 6 to the SCSI adapter.
+  *
+  * The following BIOS signatures are for boards which do *NOT* work with
+  * this driver (these TMC-8xx and TMC-9xx boards may work with the Seagate
+  * driver):
+  *
+  * FUTURE DOMAIN CORP. (C) 1986-1988 V4.0I 03/16/88
+  * FUTURE DOMAIN CORP. (C) 1986-1989 V5.0C2/14/89
+  * FUTURE DOMAIN CORP. (C) 1986-1989 V6.0A7/28/89
+  * FUTURE DOMAIN CORP. (C) 1986-1990 V6.0105/31/90
+  * FUTURE DOMAIN CORP. (C) 1986-1990 V6.0209/18/90
+  * FUTURE DOMAIN CORP. (C) 1986-1990 V7.009/18/90
+  * FUTURE DOMAIN CORP. (C) 1992 V8.00.004/02/92
+  *
+  * (The cards which do *NOT* work are all 8-bit cards -- although some of
+  * them have a 16-bit form-factor, the upper 8 bits are used only for IRQs
+  * and are *NOT* used for data.  You can tell the difference by following
+  * the tracings on the circuit board -- if only the IRQ lines are involved,
+  * you have an "8-bit" card, and should *NOT* use this driver.)
+  */
+ 
+ static struct signature {
+ 	const char *signature;
+ 	int offset;
+ 	int length;
+ 	int this_id;
+ 	int base_offset;
+ } signatures[] = {
+ /*          1         2         3         4         5         6 */
+ /* 123456789012345678901234567890123456789012345678901234567890 */
+ { "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.07/28/89", 5, 50, 6, 0x1fcc },
+ { "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V1.07/28/89", 5, 50, 6, 0x1fcc },
+ { "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.07/28/89", 72, 50, 6, 0x1fa2 },
+ { "FUTURE DOMAIN CORP. (C) 1986-1990 1800-V2.0", 73, 43, 6, 0x1fa2 },
+ { "FUTURE DOMAIN CORP. (C) 1991 1800-V2.0.", 72, 39, 6, 0x1fa3 },
+ { "FUTURE DOMAIN CORP. (C) 1992 V3.00.004/02/92", 5, 44, 6, 0 },
+ { "FUTURE DOMAIN TMC-18XX (C) 1993 V3.203/12/93", 5, 44, 7, 0 },
+ { "IBM F1 P2 BIOS v1.0011/09/92", 5, 28, 7, 0x1ff3 },
+ { "IBM F1 P2 BIOS v1.0104/29/93", 5, 28, 7, 0 },
+ { "Future Domain Corp. V1.0008/18/93", 5, 33, 7, 0 },
+ { "Future Domain Corp. V2.0108/18/93", 5, 33, 7, 0 },
+ { "FUTURE DOMAIN CORP.  V3.5008/18/93", 5, 34, 7, 0 },
+ { "FUTURE DOMAIN 18c30/18c50/1800 (C) 1994 V3.5", 5, 44, 7, 0 },
+ { "FUTURE DOMAIN CORP.  V3.6008/18/93", 5, 34, 7, 0 },
+ { "FUTURE DOMAIN CORP.  V3.6108/18/93", 5, 34, 7, 0 },
+ };
+ #define SIGNATURE_COUNT ARRAY_SIZE(signatures)
+ 
+ static int fdomain_isa_match(struct device *dev, unsigned int ndev)
+ {
+ 	struct Scsi_Host *sh;
+ 	int i, base = 0, irq = 0;
+ 	unsigned long bios_base = 0;
+ 	struct signature *sig = NULL;
+ 	void __iomem *p;
+ 	static struct signature *saved_sig;
+ 	int this_id = 7;
+ 
+ 	if (ndev < ADDRESS_COUNT) {	/* scan supported ISA BIOS addresses */
+ 		p = ioremap(addresses[ndev], FDOMAIN_BIOS_SIZE);
+ 		if (!p)
+ 			return 0;
+ 		for (i = 0; i < SIGNATURE_COUNT; i++)
+ 			if (check_signature(p + signatures[i].offset,
+ 					    signatures[i].signature,
+ 					    signatures[i].length))
+ 				break;
+ 		if (i == SIGNATURE_COUNT)	/* no signature found */
+ 			goto fail_unmap;
+ 		sig = &signatures[i];
+ 		bios_base = addresses[ndev];
+ 		/* read I/O base from BIOS area */
+ 		if (sig->base_offset)
+ 			base = readb(p + sig->base_offset) +
+ 			       (readb(p + sig->base_offset + 1) << 8);
+ 		iounmap(p);
+ 		if (base)
+ 			dev_info(dev, "BIOS at 0x%lx specifies I/O base 0x%x\n",
+ 				 bios_base, base);
+ 		else
+ 			dev_info(dev, "BIOS at 0x%lx\n", bios_base);
+ 		if (!base) {	/* no I/O base in BIOS area */
+ 			/* save BIOS signature for later use in port probing */
+ 			saved_sig = sig;
+ 			return 0;
+ 		}
+ 	} else	/* scan supported I/O ports */
+ 		base = ports[ndev - ADDRESS_COUNT];
+ 
+ 	/* use saved BIOS signature if present */
+ 	if (!sig && saved_sig)
+ 		sig = saved_sig;
+ 
+ 	if (!request_region(base, FDOMAIN_REGION_SIZE, "fdomain_isa"))
+ 		return 0;
+ 
+ 	irq = irqs[(inb(base + REG_CFG1) & 0x0e) >> 1];
+ 
+ 	if (sig)
+ 		this_id = sig->this_id;
+ 
+ 	sh = fdomain_create(base, irq, this_id, dev);
+ 	if (!sh) {
+ 		release_region(base, FDOMAIN_REGION_SIZE);
+ 		return 0;
+ 	}
+ 
+ 	dev_set_drvdata(dev, sh);
+ 	return 1;
+ fail_unmap:
+ 	iounmap(p);
+ 	return 0;
+ }
+ 
+ static int fdomain_isa_param_match(struct device *dev, unsigned int ndev)
+ {
+ 	struct Scsi_Host *sh;
+ 	int irq_ = irq[ndev];
+ 
+ 	if (!io[ndev])
+ 		return 0;
+ 
+ 	if (!request_region(io[ndev], FDOMAIN_REGION_SIZE, "fdomain_isa")) {
+ 		dev_err(dev, "base 0x%x already in use", io[ndev]);
+ 		return 0;
+ 	}
+ 
+ 	if (irq_ <= 0)
+ 		irq_ = irqs[(inb(io[ndev] + REG_CFG1) & 0x0e) >> 1];
+ 
+ 	sh = fdomain_create(io[ndev], irq_, scsi_id[ndev], dev);
+ 	if (!sh) {
+ 		dev_err(dev, "controller not found at base 0x%x", io[ndev]);
+ 		release_region(io[ndev], FDOMAIN_REGION_SIZE);
+ 		return 0;
+ 	}
+ 
+ 	dev_set_drvdata(dev, sh);
+ 	return 1;
+ }
+ 
+ static int fdomain_isa_remove(struct device *dev, unsigned int ndev)
+ {
+ 	struct Scsi_Host *sh = dev_get_drvdata(dev);
+ 	int base = sh->io_port;
+ 
+ 	fdomain_destroy(sh);
+ 	release_region(base, FDOMAIN_REGION_SIZE);
+ 	dev_set_drvdata(dev, NULL);
+ 	return 0;
+ }
+ 
+ static struct isa_driver fdomain_isa_driver = {
+ 	.match  = fdomain_isa_match,
+ 	.remove = fdomain_isa_remove,
+ 	.driver = {
+ 		.name = "fdomain_isa",
+ 		.pm   = FDOMAIN_PM_OPS,
+ 	},
+ };
+ 
+ static int __init fdomain_isa_init(void)
+ {
+ 	int isa_probe_count = ADDRESS_COUNT + PORT_COUNT;
+ 
+ 	if (io[0]) {	/* use module parameters if present */
+ 		fdomain_isa_driver.match = fdomain_isa_param_match;
+ 		isa_probe_count = MAXBOARDS_PARAM;
+ 	}
+ 
+ 	return isa_register_driver(&fdomain_isa_driver, isa_probe_count);
+ }
+ 
+ static void __exit fdomain_isa_exit(void)
+ {
+ 	isa_unregister_driver(&fdomain_isa_driver);
+ }
+ 
+ module_init(fdomain_isa_init);
+ module_exit(fdomain_isa_exit);
+ 
+ MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
+ MODULE_DESCRIPTION("Future Domain TMC-16x0 ISA SCSI driver");
+ MODULE_LICENSE("GPL");
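The match routine above probes each known ISA BIOS address for a known ROM signature before falling back to raw port probing. A minimal user-space sketch of that signature scan (a stand-in for the kernel's check_signature() over an ioremap'd BIOS window; the buffer and helper names here are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct bios_signature {
	const char *signature;
	size_t offset;
	size_t length;
};

/* Abbreviated stand-ins for the driver's signature table. */
static const struct bios_signature sigs[] = {
	{ "FUTURE DOMAIN CORP.", 5, 19 },
	{ "IBM F1 P2 BIOS", 5, 14 },
};

/* Return the index of the first matching signature, or -1 if none match. */
static int find_signature(const unsigned char *bios, size_t bios_len)
{
	for (size_t i = 0; i < sizeof(sigs) / sizeof(sigs[0]); i++) {
		const struct bios_signature *s = &sigs[i];

		if (s->offset + s->length <= bios_len &&
		    memcmp(bios + s->offset, s->signature, s->length) == 0)
			return (int)i;
	}
	return -1;
}
```

As in the driver, the first table entry that matches wins, and a miss on every entry means the card is not one this driver supports.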
+68
drivers/scsi/fdomain_pci.c
···
+ // SPDX-License-Identifier: GPL-2.0
+ 
+ #include <linux/module.h>
+ #include <linux/pci.h>
+ #include "fdomain.h"
+ 
+ static int fdomain_pci_probe(struct pci_dev *pdev,
+ 			     const struct pci_device_id *d)
+ {
+ 	int err;
+ 	struct Scsi_Host *sh;
+ 
+ 	err = pci_enable_device(pdev);
+ 	if (err)
+ 		goto fail;
+ 
+ 	err = pci_request_regions(pdev, "fdomain_pci");
+ 	if (err)
+ 		goto disable_device;
+ 
+ 	err = -ENODEV;
+ 	if (pci_resource_len(pdev, 0) == 0)
+ 		goto release_region;
+ 
+ 	sh = fdomain_create(pci_resource_start(pdev, 0), pdev->irq, 7,
+ 			    &pdev->dev);
+ 	if (!sh)
+ 		goto release_region;
+ 
+ 	pci_set_drvdata(pdev, sh);
+ 	return 0;
+ 
+ release_region:
+ 	pci_release_regions(pdev);
+ disable_device:
+ 	pci_disable_device(pdev);
+ fail:
+ 	return err;
+ }
+ 
+ static void fdomain_pci_remove(struct pci_dev *pdev)
+ {
+ 	struct Scsi_Host *sh = pci_get_drvdata(pdev);
+ 
+ 	fdomain_destroy(sh);
+ 	pci_release_regions(pdev);
+ 	pci_disable_device(pdev);
+ }
+ 
+ static struct pci_device_id fdomain_pci_table[] = {
+ 	{ PCI_DEVICE(PCI_VENDOR_ID_FD, PCI_DEVICE_ID_FD_36C70) },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(pci, fdomain_pci_table);
+ 
+ static struct pci_driver fdomain_pci_driver = {
+ 	.name = "fdomain_pci",
+ 	.id_table = fdomain_pci_table,
+ 	.probe = fdomain_pci_probe,
+ 	.remove = fdomain_pci_remove,
+ 	.driver.pm = FDOMAIN_PM_OPS,
+ };
+ 
+ module_pci_driver(fdomain_pci_driver);
+ 
+ MODULE_AUTHOR("Ondrej Zary, Rickard E. Faith");
+ MODULE_DESCRIPTION("Future Domain TMC-3260 PCI SCSI driver");
+ MODULE_LICENSE("GPL");
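fdomain_pci_probe() follows the kernel's staged-acquire/goto-unwind idiom: each failure label releases exactly the resources acquired before it, in reverse order. A self-contained, hedged illustration of the same control flow (the "resources" here are invented flags standing in for pci_enable_device(), pci_request_regions() and fdomain_create()):

```c
#include <stdbool.h>

/* Fake resources; set when acquired, cleared when released. */
static int acquired_a, acquired_b, acquired_c;

static bool get_a(void) { acquired_a = 1; return true; }
static bool get_b(void) { acquired_b = 1; return true; }
static bool get_c(bool fail) { if (fail) return false; acquired_c = 1; return true; }
static void put_b(void) { acquired_b = 0; }
static void put_a(void) { acquired_a = 0; }

/* Returns 0 on success; on failure, everything acquired so far is undone. */
static int probe(bool fail_last)
{
	if (!get_a())
		goto fail;
	if (!get_b())
		goto release_a;
	if (!get_c(fail_last))
		goto release_b;
	return 0;

release_b:
	put_b();
release_a:
	put_a();
fail:
	return -1;
}
```

The labels fall through into one another, so a failure at any stage walks the cleanup chain from that point down, exactly as the driver's `release_region`/`disable_device`/`fail` labels do.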
+2 -6
drivers/scsi/hisi_sas/hisi_sas.h
···
  #define HISI_SAS_MAX_SMP_RESP_SZ 1028
  #define HISI_SAS_MAX_STP_RESP_SZ 28
  
- #define DEV_IS_EXPANDER(type) \
- 	((type == SAS_EDGE_EXPANDER_DEVICE) || \
- 	 (type == SAS_FANOUT_EXPANDER_DEVICE))
- 
  #define HISI_SAS_SATA_PROTOCOL_NONDATA 0x1
  #define HISI_SAS_SATA_PROTOCOL_PIO 0x2
  #define HISI_SAS_SATA_PROTOCOL_DMA 0x4
···
  	u8 atapi_cdb[ATAPI_CDB_LEN];
  };
  
- #define HISI_SAS_SGE_PAGE_CNT SG_CHUNK_SIZE
+ #define HISI_SAS_SGE_PAGE_CNT (124)
  struct hisi_sas_sge_page {
  	struct hisi_sas_sge sge[HISI_SAS_SGE_PAGE_CNT];
  } __aligned(16);
  
- #define HISI_SAS_SGE_DIF_PAGE_CNT SG_CHUNK_SIZE
+ #define HISI_SAS_SGE_DIF_PAGE_CNT HISI_SAS_SGE_PAGE_CNT
  struct hisi_sas_sge_dif_page {
  	struct hisi_sas_sge sge[HISI_SAS_SGE_DIF_PAGE_CNT];
  } __aligned(16);
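The driver-private DEV_IS_EXPANDER() macro removed here is replaced throughout by the libsas dev_is_expander() helper; as an inline function it type-checks its argument and evaluates it exactly once, which the macro did not. A standalone sketch of the equivalent predicate (enum values abbreviated from include/scsi/sas.h):

```c
#include <stdbool.h>

/* Subset of enum sas_device_type from include/scsi/sas.h. */
enum sas_device_type {
	SAS_END_DEVICE = 1,
	SAS_EDGE_EXPANDER_DEVICE = 2,
	SAS_FANOUT_EXPANDER_DEVICE = 3,
};

/* Inline-function replacement for the old DEV_IS_EXPANDER() macro:
 * true only for edge and fanout expanders. */
static inline bool dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}
```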
+12 -4
drivers/scsi/hisi_sas/hisi_sas_main.c
···
  	device->lldd_dev = sas_dev;
  	hisi_hba->hw->setup_itct(hisi_hba, sas_dev);
  
- 	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
+ 	if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
  		int phy_no;
  		u8 phy_num = parent_dev->ex_dev.num_phys;
  		struct ex_phy *phy;
···
  
  			_sas_port = sas_port;
  
- 			if (DEV_IS_EXPANDER(dev->dev_type))
+ 			if (dev_is_expander(dev->dev_type))
  				sas_ha->notify_port_event(sas_phy,
  						PORTE_BROADCAST_RCVD);
  		}
···
  		struct domain_device *port_dev = sas_port->port_dev;
  		struct domain_device *device;
  
- 		if (!port_dev || !DEV_IS_EXPANDER(port_dev->dev_type))
+ 		if (!port_dev || !dev_is_expander(port_dev->dev_type))
  			continue;
  
  		/* Try to find a SATA device */
···
  		struct domain_device *device = sas_dev->sas_device;
  
  		if ((sas_dev->dev_type == SAS_PHY_UNUSED) || !device ||
- 				DEV_IS_EXPANDER(device->dev_type))
+ 				dev_is_expander(device->dev_type))
  			continue;
  
  		rc = hisi_sas_debug_I_T_nexus_reset(device);
···
  
  void hisi_sas_free(struct hisi_hba *hisi_hba)
  {
+ 	int i;
+ 
+ 	for (i = 0; i < hisi_hba->n_phy; i++) {
+ 		struct hisi_sas_phy *phy = &hisi_hba->phy[i];
+ 
+ 		del_timer_sync(&phy->timer);
+ 	}
+ 
  	if (hisi_hba->wq)
  		destroy_workqueue(hisi_hba->wq);
  }
+26 -24
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
···
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_1B_OFF),
  		.msk = HGC_DQE_ECC_1B_ADDR_MSK,
  		.shift = HGC_DQE_ECC_1B_ADDR_OFF,
- 		.msg = "hgc_dqe_acc1b_intr found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_dqe_ecc1b_intr",
  		.reg = HGC_DQE_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_1B_OFF),
  		.msk = HGC_IOST_ECC_1B_ADDR_MSK,
  		.shift = HGC_IOST_ECC_1B_ADDR_OFF,
- 		.msg = "hgc_iost_acc1b_intr found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_iost_ecc1b_intr",
  		.reg = HGC_IOST_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_1B_OFF),
  		.msk = HGC_ITCT_ECC_1B_ADDR_MSK,
  		.shift = HGC_ITCT_ECC_1B_ADDR_OFF,
- 		.msg = "hgc_itct_acc1b_intr found: am address is 0x%08X\n",
+ 		.msg = "hgc_itct_ecc1b_intr",
  		.reg = HGC_ITCT_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_1B_OFF),
  		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
  		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
- 		.msg = "hgc_iostl_acc1b_intr found: memory address is 0x%08X\n",
+ 		.msg = "hgc_iostl_ecc1b_intr",
  		.reg = HGC_LM_DFX_STATUS2,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_1B_OFF),
  		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
  		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
- 		.msg = "hgc_itctl_acc1b_intr found: memory address is 0x%08X\n",
+ 		.msg = "hgc_itctl_ecc1b_intr",
  		.reg = HGC_LM_DFX_STATUS2,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_1B_OFF),
  		.msk = HGC_CQE_ECC_1B_ADDR_MSK,
  		.shift = HGC_CQE_ECC_1B_ADDR_OFF,
- 		.msg = "hgc_cqe_acc1b_intr found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_cqe_ecc1b_intr",
  		.reg = HGC_CQE_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_1B_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
- 		.msg = "rxm_mem0_acc1b_intr found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem0_ecc1b_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_1B_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
- 		.msg = "rxm_mem1_acc1b_intr found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem1_ecc1b_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_1B_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
- 		.msg = "rxm_mem2_acc1b_intr found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem2_ecc1b_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_1B_OFF),
  		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
  		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
- 		.msg = "rxm_mem3_acc1b_intr found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem3_ecc1b_intr",
  		.reg = HGC_RXM_DFX_STATUS15,
  	},
  };
···
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
  		.msk = HGC_DQE_ECC_MB_ADDR_MSK,
  		.shift = HGC_DQE_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_dqe_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_dqe_eccbad_intr",
  		.reg = HGC_DQE_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
  		.msk = HGC_IOST_ECC_MB_ADDR_MSK,
  		.shift = HGC_IOST_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_iost_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_iost_eccbad_intr",
  		.reg = HGC_IOST_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
  		.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
  		.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_itct_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_itct_eccbad_intr",
  		.reg = HGC_ITCT_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
  		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
  		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
- 		.msg = "hgc_iostl_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+ 		.msg = "hgc_iostl_eccbad_intr",
  		.reg = HGC_LM_DFX_STATUS2,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
  		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
  		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
- 		.msg = "hgc_itctl_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+ 		.msg = "hgc_itctl_eccbad_intr",
  		.reg = HGC_LM_DFX_STATUS2,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
  		.msk = HGC_CQE_ECC_MB_ADDR_MSK,
  		.shift = HGC_CQE_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_cqe_accbad_intr (0x%x) found: Ram address is 0x%08X\n",
+ 		.msg = "hgc_cqe_eccbad_intr",
  		.reg = HGC_CQE_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
- 		.msg = "rxm_mem0_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem0_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
- 		.msg = "rxm_mem1_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem1_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
- 		.msg = "rxm_mem2_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem2_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
  		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
- 		.msg = "rxm_mem3_accbad_intr (0x%x) found: memory address is 0x%08X\n",
+ 		.msg = "rxm_mem3_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS15,
  	},
  };
···
  		break;
  	case SAS_SATA_DEV:
  	case SAS_SATA_PENDING:
- 		if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+ 		if (parent_dev && dev_is_expander(parent_dev->dev_type))
  			qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
  		else
  			qw0 = HISI_SAS_DEV_TYPE_SATA << ITCT_HDR_DEV_TYPE_OFF;
···
  	/* create header */
  	/* dw0 */
  	dw0 = port->id << CMD_HDR_PORT_OFF;
- 	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+ 	if (parent_dev && dev_is_expander(parent_dev->dev_type))
  		dw0 |= 3 << CMD_HDR_CMD_OFF;
  	else
  		dw0 |= 4 << CMD_HDR_CMD_OFF;
···
  			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
  			val &= ecc_error->msk;
  			val >>= ecc_error->shift;
- 			dev_warn(dev, ecc_error->msg, val);
+ 			dev_warn(dev, "%s found: mem addr is 0x%08X\n",
+ 				 ecc_error->msg, val);
  		}
  	}
  }
···
  			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
  			val &= ecc_error->msk;
  			val >>= ecc_error->shift;
- 			dev_err(dev, ecc_error->msg, irq_value, val);
+ 			dev_err(dev, "%s (0x%x) found: mem addr is 0x%08X\n",
+ 				 ecc_error->msg, irq_value, val);
  			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
  		}
  	}
  }
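The ECC-table change above is a data/format split: instead of every entry carrying its own full format string (several of which had typos such as "acc1b" and "am address"), each entry now holds just the interrupt name, and one shared format string lives at the dev_warn()/dev_err() call site. A hedged user-space sketch of the resulting shape (register values and masks invented; snprintf stands in for dev_warn):

```c
#include <stdio.h>
#include <string.h>

struct ecc_error_info {
	unsigned int msk;
	unsigned int shift;
	const char *msg;	/* short name only; format lives at the call site */
};

/* Two invented entries in the style of the driver's one-bit ECC table. */
static const struct ecc_error_info one_bit_ecc_errors[] = {
	{ 0x0000ffff, 0,  "hgc_dqe_ecc1b_intr" },
	{ 0x00ff0000, 16, "hgc_iost_ecc1b_intr" },
};

/* Build the message the way the single dev_warn() call site now does. */
static void format_ecc_msg(char *buf, size_t len,
			   const struct ecc_error_info *e, unsigned int reg)
{
	unsigned int val = (reg & e->msk) >> e->shift;

	snprintf(buf, len, "%s found: mem addr is 0x%08X", e->msg, val);
}
```

Centralizing the format string shrinks the tables, makes the wording consistent, and fixes a typo once instead of twenty times.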
+34 -16
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
···
  #define ITCT_CLR_EN_MSK (0x1 << ITCT_CLR_EN_OFF)
  #define ITCT_DEV_OFF 0
  #define ITCT_DEV_MSK (0x7ff << ITCT_DEV_OFF)
+ #define SAS_AXI_USER3 0x50
  #define IO_SATA_BROKEN_MSG_ADDR_LO 0x58
  #define IO_SATA_BROKEN_MSG_ADDR_HI 0x5c
  #define SATA_INITI_D2H_STORE_ADDR_LO 0x60
···
  	/* Global registers init */
  	hisi_sas_write32(hisi_hba, DLVRY_QUEUE_ENABLE,
  			 (u32)((1ULL << hisi_hba->queue_count) - 1));
+ 	hisi_sas_write32(hisi_hba, SAS_AXI_USER3, 0);
  	hisi_sas_write32(hisi_hba, CFG_MAX_TAG, 0xfff0400);
  	hisi_sas_write32(hisi_hba, HGC_SAS_TXFAIL_RETRY_CTRL, 0x108);
  	hisi_sas_write32(hisi_hba, CFG_AGING_TIME, 0x1);
···
  		break;
  	case SAS_SATA_DEV:
  	case SAS_SATA_PENDING:
- 		if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+ 		if (parent_dev && dev_is_expander(parent_dev->dev_type))
  			qw0 = HISI_SAS_DEV_TYPE_STP << ITCT_HDR_DEV_TYPE_OFF;
  		else
  			qw0 = HISI_SAS_DEV_TYPE_SATA << ITCT_HDR_DEV_TYPE_OFF;
···
  static void disable_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
  {
  	u32 cfg = hisi_sas_phy_read32(hisi_hba, phy_no, PHY_CFG);
+ 	u32 irq_msk = hisi_sas_phy_read32(hisi_hba, phy_no, CHL_INT2_MSK);
+ 	static const u32 msk = BIT(CHL_INT2_RX_DISP_ERR_OFF) |
+ 			       BIT(CHL_INT2_RX_CODE_ERR_OFF) |
+ 			       BIT(CHL_INT2_RX_INVLD_DW_OFF);
  	u32 state;
+ 
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2_MSK, msk | irq_msk);
  
  	cfg &= ~PHY_CFG_ENA_MSK;
  	hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
···
  		cfg |= PHY_CFG_PHY_RST_MSK;
  		hisi_sas_phy_write32(hisi_hba, phy_no, PHY_CFG, cfg);
  	}
+ 
+ 	udelay(1);
+ 
+ 	hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_INVLD_DW);
+ 	hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_DISP_ERR);
+ 	hisi_sas_phy_read32(hisi_hba, phy_no, ERR_CNT_CODE_ERR);
+ 
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2, msk);
+ 	hisi_sas_phy_write32(hisi_hba, phy_no, CHL_INT2_MSK, irq_msk);
  }
  
  static void start_phy_v3_hw(struct hisi_hba *hisi_hba, int phy_no)
···
  	u32 dw1 = 0, dw2 = 0;
  
  	hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF);
- 	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type))
+ 	if (parent_dev && dev_is_expander(parent_dev->dev_type))
  		hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF);
  	else
- 		hdr->dw0 |= cpu_to_le32(4 << CMD_HDR_CMD_OFF);
+ 		hdr->dw0 |= cpu_to_le32(4U << CMD_HDR_CMD_OFF);
  
  	switch (task->data_dir) {
  	case DMA_TO_DEVICE:
···
  	struct hisi_sas_port *port = slot->port;
  
  	/* dw0 */
- 	hdr->dw0 = cpu_to_le32((5 << CMD_HDR_CMD_OFF) | /*abort*/
+ 	hdr->dw0 = cpu_to_le32((5U << CMD_HDR_CMD_OFF) | /*abort*/
  			       (port->id << CMD_HDR_PORT_OFF) |
  			       (dev_is_sata(dev)
  				<< CMD_HDR_ABORT_DEVICE_TYPE_OFF) |
···
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_DQE_ECC_MB_OFF),
  		.msk = HGC_DQE_ECC_MB_ADDR_MSK,
  		.shift = HGC_DQE_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_dqe_eccbad_intr found: ram addr is 0x%08X\n",
+ 		.msg = "hgc_dqe_eccbad_intr",
  		.reg = HGC_DQE_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_IOST_ECC_MB_OFF),
  		.msk = HGC_IOST_ECC_MB_ADDR_MSK,
  		.shift = HGC_IOST_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_iost_eccbad_intr found: ram addr is 0x%08X\n",
+ 		.msg = "hgc_iost_eccbad_intr",
  		.reg = HGC_IOST_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_ITCT_ECC_MB_OFF),
  		.msk = HGC_ITCT_ECC_MB_ADDR_MSK,
  		.shift = HGC_ITCT_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_itct_eccbad_intr found: ram addr is 0x%08X\n",
+ 		.msg = "hgc_itct_eccbad_intr",
  		.reg = HGC_ITCT_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_IOSTLIST_ECC_MB_OFF),
  		.msk = HGC_LM_DFX_STATUS2_IOSTLIST_MSK,
  		.shift = HGC_LM_DFX_STATUS2_IOSTLIST_OFF,
- 		.msg = "hgc_iostl_eccbad_intr found: mem addr is 0x%08X\n",
+ 		.msg = "hgc_iostl_eccbad_intr",
  		.reg = HGC_LM_DFX_STATUS2,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_ITCTLIST_ECC_MB_OFF),
  		.msk = HGC_LM_DFX_STATUS2_ITCTLIST_MSK,
  		.shift = HGC_LM_DFX_STATUS2_ITCTLIST_OFF,
- 		.msg = "hgc_itctl_eccbad_intr found: mem addr is 0x%08X\n",
+ 		.msg = "hgc_itctl_eccbad_intr",
  		.reg = HGC_LM_DFX_STATUS2,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_CQE_ECC_MB_OFF),
  		.msk = HGC_CQE_ECC_MB_ADDR_MSK,
  		.shift = HGC_CQE_ECC_MB_ADDR_OFF,
- 		.msg = "hgc_cqe_eccbad_intr found: ram address is 0x%08X\n",
+ 		.msg = "hgc_cqe_eccbad_intr",
  		.reg = HGC_CQE_ECC_ADDR,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM0_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM0_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM0_OFF,
- 		.msg = "rxm_mem0_eccbad_intr found: mem addr is 0x%08X\n",
+ 		.msg = "rxm_mem0_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM1_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM1_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM1_OFF,
- 		.msg = "rxm_mem1_eccbad_intr found: mem addr is 0x%08X\n",
+ 		.msg = "rxm_mem1_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM2_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS14_MEM2_MSK,
  		.shift = HGC_RXM_DFX_STATUS14_MEM2_OFF,
- 		.msg = "rxm_mem2_eccbad_intr found: mem addr is 0x%08X\n",
+ 		.msg = "rxm_mem2_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS14,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_NCQ_MEM3_ECC_MB_OFF),
  		.msk = HGC_RXM_DFX_STATUS15_MEM3_MSK,
  		.shift = HGC_RXM_DFX_STATUS15_MEM3_OFF,
- 		.msg = "rxm_mem3_eccbad_intr found: mem addr is 0x%08X\n",
+ 		.msg = "rxm_mem3_eccbad_intr",
  		.reg = HGC_RXM_DFX_STATUS15,
  	},
  	{
  		.irq_msk = BIT(SAS_ECC_INTR_OOO_RAM_ECC_MB_OFF),
  		.msk = AM_ROB_ECC_ERR_ADDR_MSK,
  		.shift = AM_ROB_ECC_ERR_ADDR_OFF,
- 		.msg = "ooo_ram_eccbad_intr found: ROB_ECC_ERR_ADDR=0x%08X\n",
+ 		.msg = "ooo_ram_eccbad_intr",
  		.reg = AM_ROB_ECC_ERR_ADDR,
  	},
  };
···
  			val = hisi_sas_read32(hisi_hba, ecc_error->reg);
  			val &= ecc_error->msk;
  			val >>= ecc_error->shift;
- 			dev_err(dev, ecc_error->msg, irq_value, val);
+ 			dev_err(dev, "%s (0x%x) found: mem addr is 0x%08X\n",
+ 				 ecc_error->msg, irq_value, val);
  			queue_work(hisi_hba->wq, &hisi_hba->rst_work);
  		}
  	}
  }
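The disable_phy_v3_hw() change above masks the CHL_INT2 RX error interrupts before stopping the phy, reads (and thereby discards) the error counters, clears any latched status, and then restores the caller's mask. A standalone sketch of that save-mask/operate/clear/restore sequence against simulated registers (register names and bit values invented):

```c
#include <stdint.h>

#define ERR_IRQ_BITS 0x7u	/* stand-ins for the three CHL_INT2 RX error bits */

static uint32_t irq_mask;	/* simulated CHL_INT2_MSK register */
static uint32_t irq_status;	/* simulated CHL_INT2 latched status */

/* Simulated phy disable: latches spurious error status, as the hardware may. */
static void do_disable(void)
{
	irq_status |= ERR_IRQ_BITS;
}

static void disable_phy(void)
{
	uint32_t saved = irq_mask;

	irq_mask |= ERR_IRQ_BITS;	/* mask the error interrupts first */
	do_disable();			/* the operation that would fire them */
	irq_status &= ~ERR_IRQ_BITS;	/* clear status latched meanwhile */
	irq_mask = saved;		/* restore the caller's mask */
}
```

The point of the ordering is that any error status raised while the phy goes down is both masked and cleared before the original interrupt mask comes back, so no spurious interrupt is delivered.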
+169 -111
drivers/scsi/hpsa.c
···
   * HPSA_DRIVER_VERSION must be 3 byte values (0-255) separated by '.'
   * with an optional trailing '-' followed by a byte value (0-255).
   */
- #define HPSA_DRIVER_VERSION "3.4.20-160"
+ #define HPSA_DRIVER_VERSION "3.4.20-170"
  #define DRIVER_NAME "HP HPSA Driver (v " HPSA_DRIVER_VERSION ")"
  #define HPSA "hpsa"
···
  
  /*define how many times we will try a command because of bus resets */
  #define MAX_CMD_RETRIES 3
+ /* How long to wait before giving up on a command */
+ #define HPSA_EH_PTRAID_TIMEOUT (240 * HZ)
  
  /* Embedded module documentation macros - see modules.h */
  MODULE_AUTHOR("Hewlett-Packard Company");
···
  static inline bool hpsa_is_cmd_idle(struct CommandList *c)
  {
  	return c->scsi_cmd == SCSI_CMD_IDLE;
- }
- 
- static inline bool hpsa_is_pending_event(struct CommandList *c)
- {
- 	return c->reset_pending;
  }
  
  /* extract sense key, asc, and ascq from sense data.  -1 means invalid. */
···
  {
  	dial_down_lockup_detection_during_fw_flash(h, c);
  	atomic_inc(&h->commands_outstanding);
+ 	if (c->device)
+ 		atomic_inc(&c->device->commands_outstanding);
  
  	reply_queue = h->reply_map[raw_smp_processor_id()];
  	switch (c->cmd_type) {
···
  
  static void enqueue_cmd_and_start_io(struct ctlr_info *h, struct CommandList *c)
  {
- 	if (unlikely(hpsa_is_pending_event(c)))
- 		return finish_cmd(c);
- 
  	__enqueue_cmd_and_start_io(h, c, DEFAULT_REPLY_QUEUE);
  }
···
  	return count;
  }
  
+ #define NUM_WAIT 20
  static void hpsa_wait_for_outstanding_commands_for_dev(struct ctlr_info *h,
  			struct hpsa_scsi_dev_t *device)
  {
  	int cmds = 0;
  	int waits = 0;
+ 	int num_wait = NUM_WAIT;
+ 
+ 	if (device->external)
+ 		num_wait = HPSA_EH_PTRAID_TIMEOUT;
  
  	while (1) {
  		cmds = hpsa_find_outstanding_commands_for_dev(h, device);
  		if (cmds == 0)
  			break;
- 		if (++waits > 20)
+ 		if (++waits > num_wait)
  			break;
  		msleep(1000);
  	}
  
- 	if (waits > 20)
+ 	if (waits > num_wait) {
  		dev_warn(&h->pdev->dev,
- 			"%s: removing device with %d outstanding commands!\n",
- 			__func__, cmds);
+ 			"%s: removing device [%d:%d:%d:%d] with %d outstanding commands!\n",
+ 			__func__,
+ 			h->scsi_host->host_no,
+ 			device->bus, device->target, device->lun, cmds);
+ 	}
  }
  
  static void hpsa_remove_device(struct ctlr_info *h,
···
  	sdev->no_uld_attach = !sd || !sd->expose_device;
  
  	if (sd) {
- 		if (sd->external)
+ 		sd->was_removed = 0;
+ 		if (sd->external) {
  			queue_depth = EXTERNAL_QD;
- 		else
+ 			sdev->eh_timeout = HPSA_EH_PTRAID_TIMEOUT;
+ 			blk_queue_rq_timeout(sdev->request_queue,
+ 					     HPSA_EH_PTRAID_TIMEOUT);
+ 		} else {
  			queue_depth = sd->queue_depth != 0 ?
  					sd->queue_depth : sdev->host->can_queue;
+ 		}
  	} else
  		queue_depth = sdev->host->can_queue;
···
  
  static void hpsa_slave_destroy(struct scsi_device *sdev)
  {
- 	/* nothing to do. */
+ 	struct hpsa_scsi_dev_t *hdev = NULL;
+ 
+ 	hdev = sdev->hostdata;
+ 
+ 	if (hdev)
+ 		hdev->was_removed = 1;
  }
  
  static void hpsa_free_ioaccel2_sg_chain_blocks(struct ctlr_info *h)
···
  		break;
  	}
  
+ 	if (dev->in_reset)
+ 		retry = 0;
+ 
  	return retry;	/* retry on raid path? */
  }
  
  static void hpsa_cmd_resolve_events(struct ctlr_info *h,
  		struct CommandList *c)
  {
- 	bool do_wake = false;
+ 	struct hpsa_scsi_dev_t *dev = c->device;
  
  	/*
  	 * Reset c->scsi_cmd here so that the reset handler will know
···
  	 */
  	c->scsi_cmd = SCSI_CMD_IDLE;
  	mb();	/* Declare command idle before checking for pending events. */
- 	if (c->reset_pending) {
- 		unsigned long flags;
- 		struct hpsa_scsi_dev_t *dev;
- 
- 		/*
- 		 * There appears to be a reset pending; lock the lock and
- 		 * reconfirm.  If so, then decrement the count of outstanding
- 		 * commands and wake the reset command if this is the last one.
- 		 */
- 		spin_lock_irqsave(&h->lock, flags);
- 		dev = c->reset_pending;	/* Re-fetch under the lock. */
- 		if (dev && atomic_dec_and_test(&dev->reset_cmds_out))
- 			do_wake = true;
- 		c->reset_pending = NULL;
- 		spin_unlock_irqrestore(&h->lock, flags);
+ 	if (dev) {
+ 		atomic_dec(&dev->commands_outstanding);
+ 		if (dev->in_reset &&
+ 		    atomic_read(&dev->commands_outstanding) <= 0)
+ 			wake_up_all(&h->event_sync_wait_queue);
  	}
- 
- 	if (do_wake)
- 		wake_up_all(&h->event_sync_wait_queue);
  }
  
  static void hpsa_cmd_resolve_and_free(struct ctlr_info *h,
···
  			IOACCEL2_STATUS_SR_IOACCEL_DISABLED) {
  		dev->offload_enabled = 0;
  		dev->offload_to_be_enabled = 0;
+ 	}
+ 
+ 	if (dev->in_reset) {
+ 		cmd->result = DID_RESET << 16;
+ 		return hpsa_cmd_free_and_done(h, c, cmd);
  	}
  
  	return hpsa_retry_cmd(h, c);
···
  	cmd->result = (DID_OK << 16);		/* host byte */
  	cmd->result |= (COMMAND_COMPLETE << 8);	/* msg byte */
  
+ 	/* SCSI command has already been cleaned up in SML */
+ 	if (dev->was_removed) {
+ 		hpsa_cmd_resolve_and_free(h, cp);
+ 		return;
+ 	}
+ 
  	if (cp->cmd_type == CMD_IOACCEL2 || cp->cmd_type == CMD_IOACCEL1) {
  		if (dev->physical_device && dev->expose_device &&
  			dev->removed) {
···
  		cmd->result = DID_NO_CONNECT << 16;
  		return hpsa_cmd_free_and_done(h, cp, cmd);
  	}
- 
- 	if ((unlikely(hpsa_is_pending_event(cp))))
- 		if (cp->reset_pending)
- 			return hpsa_cmd_free_and_done(h, cp, cmd);
  
  	if (cp->cmd_type == CMD_IOACCEL2)
  		return process_ioaccel2_completion(h, cp, cmd, dev);
···
  	return rc;
  }
  
- static int hpsa_send_reset(struct ctlr_info *h, unsigned char *scsi3addr,
+ static int hpsa_send_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
  	u8 reset_type, int reply_queue)
  {
  	int rc = IO_OK;
  	struct CommandList *c;
  	struct ErrorInfo *ei;
  
  	c = cmd_alloc(h);
- 
+ 	c->device = dev;
  
  	/* fill_cmd can't fail here, no data buffer to map. */
- 	(void) fill_cmd(c, reset_type, h, NULL, 0, 0,
- 			scsi3addr, TYPE_MSG);
+ 	(void) fill_cmd(c, reset_type, h, NULL, 0, 0, dev->scsi3addr, TYPE_MSG);
  	rc = hpsa_scsi_do_simple_cmd(h, c, reply_queue, NO_TIMEOUT);
  	if (rc) {
  		dev_warn(&h->pdev->dev, "Failed to send reset command\n");
···
  }
  
  static int hpsa_do_reset(struct ctlr_info *h, struct hpsa_scsi_dev_t *dev,
- 	unsigned char *scsi3addr, u8 reset_type, int reply_queue)
+ 	u8 reset_type, int reply_queue)
  {
- 	int i;
  	int rc = 0;
  
  	/* We can really only handle one reset at a time */
···
  		return -EINTR;
  	}
  
- 	BUG_ON(atomic_read(&dev->reset_cmds_out) != 0);
- 
- 	for (i = 0; i < h->nr_cmds; i++) {
- 		struct CommandList *c = h->cmd_pool + i;
- 		int refcount = atomic_inc_return(&c->refcount);
- 
- 		if (refcount > 1 && hpsa_cmd_dev_match(h, c, dev, scsi3addr)) {
- 			unsigned long flags;
- 
- 			/*
- 			 * Mark the target command as having a reset pending,
- 			 * then lock a lock so that the command cannot complete
- 			 * while we're considering it.  If the command is not
- 			 * idle then count it; otherwise revoke the event.
- 			 */
- 			c->reset_pending = dev;
- 			spin_lock_irqsave(&h->lock, flags);	/* Implied MB */
- 			if (!hpsa_is_cmd_idle(c))
- 				atomic_inc(&dev->reset_cmds_out);
- 			else
- 				c->reset_pending = NULL;
- 			spin_unlock_irqrestore(&h->lock, flags);
- 		}
- 
- 		cmd_free(h, c);
- 	}
- 
- 	rc = hpsa_send_reset(h, scsi3addr, reset_type, reply_queue);
- 	if (!rc)
+ 	rc = hpsa_send_reset(h, dev, reset_type, reply_queue);
+ 	if (!rc) {
+ 		/* incremented by sending the reset request */
+ 		atomic_dec(&dev->commands_outstanding);
  		wait_event(h->event_sync_wait_queue,
- 			atomic_read(&dev->reset_cmds_out) == 0 ||
+ 			atomic_read(&dev->commands_outstanding) <= 0 ||
  			lockup_detected(h));
+ 	}
  
  	if (unlikely(lockup_detected(h))) {
  		dev_warn(&h->pdev->dev,
···
  		rc = -ENODEV;
  	}
  
- 	if (unlikely(rc))
- 		atomic_set(&dev->reset_cmds_out, 0);
- 	else
- 		rc = wait_for_device_to_become_ready(h, scsi3addr, 0);
+ 	if (!rc)
+ 		rc = wait_for_device_to_become_ready(h, dev->scsi3addr, 0);
  
  	mutex_unlock(&h->reset_mutex);
  	return rc;
···
  
  	c->phys_disk = dev;
  
+ 	if (dev->in_reset)
+ 		return -1;
+ 
  	return hpsa_scsi_ioaccel_queue_command(h, c, dev->ioaccel_handle,
  		cmd->cmnd, cmd->cmd_len, dev->scsi3addr, dev);
  }
···
  	} else
  		cp->sg_count = (u8) use_sg;
  
+ 	if (phys_disk->in_reset) {
+ 		cmd->result = DID_RESET << 16;
+ 		return -1;
+ 	}
+ 
  	enqueue_cmd_and_start_io(h, c);
  	return 0;
  }
···
  		return -1;
  
  	if (!c->scsi_cmd->device->hostdata)
+ 		return -1;
+ 
+ 	if (phys_disk->in_reset)
  		return -1;
  
  	/* Try to honor the device's queue depth */
···
  	int offload_to_mirror;
  
  	if
(!dev) 5111 + return -1; 5112 + 5113 + if (dev->in_reset) 5105 5114 return -1; 5106 5115 5107 5116 /* check for valid opcode, get LBA and block count */ ··· 5411 5414 */ 5412 5415 static int hpsa_ciss_submit(struct ctlr_info *h, 5413 5416 struct CommandList *c, struct scsi_cmnd *cmd, 5414 - unsigned char scsi3addr[]) 5417 + struct hpsa_scsi_dev_t *dev) 5415 5418 { 5416 5419 cmd->host_scribble = (unsigned char *) c; 5417 5420 c->cmd_type = CMD_SCSI; 5418 5421 c->scsi_cmd = cmd; 5419 5422 c->Header.ReplyQueue = 0; /* unused in simple mode */ 5420 - memcpy(&c->Header.LUN.LunAddrBytes[0], &scsi3addr[0], 8); 5423 + memcpy(&c->Header.LUN.LunAddrBytes[0], &dev->scsi3addr[0], 8); 5421 5424 c->Header.tag = cpu_to_le64((c->cmdindex << DIRECT_LOOKUP_SHIFT)); 5422 5425 5423 5426 /* Fill in the request block... */ ··· 5468 5471 hpsa_cmd_resolve_and_free(h, c); 5469 5472 return SCSI_MLQUEUE_HOST_BUSY; 5470 5473 } 5474 + 5475 + if (dev->in_reset) { 5476 + hpsa_cmd_resolve_and_free(h, c); 5477 + return SCSI_MLQUEUE_HOST_BUSY; 5478 + } 5479 + 5471 5480 enqueue_cmd_and_start_io(h, c); 5472 5481 /* the cmd'll come back via intr handler in complete_scsi_command() */ 5473 5482 return 0; ··· 5525 5522 } 5526 5523 5527 5524 static int hpsa_ioaccel_submit(struct ctlr_info *h, 5528 - struct CommandList *c, struct scsi_cmnd *cmd, 5529 - unsigned char *scsi3addr) 5525 + struct CommandList *c, struct scsi_cmnd *cmd) 5530 5526 { 5531 5527 struct hpsa_scsi_dev_t *dev = cmd->device->hostdata; 5532 5528 int rc = IO_ACCEL_INELIGIBLE; 5533 5529 5534 5530 if (!dev) 5535 5531 return SCSI_MLQUEUE_HOST_BUSY; 5532 + 5533 + if (dev->in_reset) 5534 + return SCSI_MLQUEUE_HOST_BUSY; 5535 + 5536 + if (hpsa_simple_mode) 5537 + return IO_ACCEL_INELIGIBLE; 5536 5538 5537 5539 cmd->host_scribble = (unsigned char *) c; 5538 5540 ··· 5571 5563 cmd->result = DID_NO_CONNECT << 16; 5572 5564 return hpsa_cmd_free_and_done(c->h, c, cmd); 5573 5565 } 5574 - if (c->reset_pending) 5566 + 5567 + if (dev->in_reset) { 5568 + 
cmd->result = DID_RESET << 16; 5575 5569 return hpsa_cmd_free_and_done(c->h, c, cmd); 5570 + } 5571 + 5576 5572 if (c->cmd_type == CMD_IOACCEL2) { 5577 5573 struct ctlr_info *h = c->h; 5578 5574 struct io_accel2_cmd *c2 = &h->ioaccel2_cmd_pool[c->cmdindex]; ··· 5584 5572 5585 5573 if (c2->error_data.serv_response == 5586 5574 IOACCEL2_STATUS_SR_TASK_COMP_SET_FULL) { 5587 - rc = hpsa_ioaccel_submit(h, c, cmd, dev->scsi3addr); 5575 + rc = hpsa_ioaccel_submit(h, c, cmd); 5588 5576 if (rc == 0) 5589 5577 return; 5590 5578 if (rc == SCSI_MLQUEUE_HOST_BUSY) { ··· 5600 5588 } 5601 5589 } 5602 5590 hpsa_cmd_partial_init(c->h, c->cmdindex, c); 5603 - if (hpsa_ciss_submit(c->h, c, cmd, dev->scsi3addr)) { 5591 + if (hpsa_ciss_submit(c->h, c, cmd, dev)) { 5604 5592 /* 5605 5593 * If we get here, it means dma mapping failed. Try 5606 5594 * again via scsi mid layer, which will then get ··· 5619 5607 { 5620 5608 struct ctlr_info *h; 5621 5609 struct hpsa_scsi_dev_t *dev; 5622 - unsigned char scsi3addr[8]; 5623 5610 struct CommandList *c; 5624 5611 int rc = 0; 5625 5612 ··· 5640 5629 return 0; 5641 5630 } 5642 5631 5643 - memcpy(scsi3addr, dev->scsi3addr, sizeof(scsi3addr)); 5644 - 5645 5632 if (unlikely(lockup_detected(h))) { 5646 5633 cmd->result = DID_NO_CONNECT << 16; 5647 5634 cmd->scsi_done(cmd); 5648 5635 return 0; 5649 5636 } 5637 + 5638 + if (dev->in_reset) 5639 + return SCSI_MLQUEUE_DEVICE_BUSY; 5640 + 5650 5641 c = cmd_tagged_alloc(h, cmd); 5642 + if (c == NULL) 5643 + return SCSI_MLQUEUE_DEVICE_BUSY; 5651 5644 5652 5645 /* 5653 5646 * Call alternate submit routine for I/O accelerated commands. 
··· 5660 5645 if (likely(cmd->retries == 0 && 5661 5646 !blk_rq_is_passthrough(cmd->request) && 5662 5647 h->acciopath_status)) { 5663 - rc = hpsa_ioaccel_submit(h, c, cmd, scsi3addr); 5648 + rc = hpsa_ioaccel_submit(h, c, cmd); 5664 5649 if (rc == 0) 5665 5650 return 0; 5666 5651 if (rc == SCSI_MLQUEUE_HOST_BUSY) { ··· 5668 5653 return SCSI_MLQUEUE_HOST_BUSY; 5669 5654 } 5670 5655 } 5671 - return hpsa_ciss_submit(h, c, cmd, scsi3addr); 5656 + return hpsa_ciss_submit(h, c, cmd, dev); 5672 5657 } 5673 5658 5674 5659 static void hpsa_scan_complete(struct ctlr_info *h) ··· 5950 5935 static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd) 5951 5936 { 5952 5937 int rc = SUCCESS; 5938 + int i; 5953 5939 struct ctlr_info *h; 5954 - struct hpsa_scsi_dev_t *dev; 5940 + struct hpsa_scsi_dev_t *dev = NULL; 5955 5941 u8 reset_type; 5956 5942 char msg[48]; 5957 5943 unsigned long flags; ··· 6018 6002 reset_type == HPSA_DEVICE_RESET_MSG ? "logical " : "physical "); 6019 6003 hpsa_show_dev_msg(KERN_WARNING, h, dev, msg); 6020 6004 6005 + /* 6006 + * wait to see if any commands will complete before sending reset 6007 + */ 6008 + dev->in_reset = true; /* block any new cmds from OS for this device */ 6009 + for (i = 0; i < 10; i++) { 6010 + if (atomic_read(&dev->commands_outstanding) > 0) 6011 + msleep(1000); 6012 + else 6013 + break; 6014 + } 6015 + 6021 6016 /* send a reset to the SCSI LUN which the command was sent to */ 6022 - rc = hpsa_do_reset(h, dev, dev->scsi3addr, reset_type, 6023 - DEFAULT_REPLY_QUEUE); 6017 + rc = hpsa_do_reset(h, dev, reset_type, DEFAULT_REPLY_QUEUE); 6024 6018 if (rc == 0) 6025 6019 rc = SUCCESS; 6026 6020 else ··· 6044 6018 return_reset_status: 6045 6019 spin_lock_irqsave(&h->reset_lock, flags); 6046 6020 h->reset_in_progress = 0; 6021 + if (dev) 6022 + dev->in_reset = false; 6047 6023 spin_unlock_irqrestore(&h->reset_lock, flags); 6048 6024 return rc; 6049 6025 } ··· 6071 6043 BUG(); 6072 6044 } 6073 6045 6074 - atomic_inc(&c->refcount); 
6075 6046 if (unlikely(!hpsa_is_cmd_idle(c))) { 6076 6047 /* 6077 6048 * We expect that the SCSI layer will hand us a unique tag ··· 6078 6051 * two requests...because if the selected command isn't idle 6079 6052 * then someone is going to be very disappointed. 6080 6053 */ 6081 - dev_err(&h->pdev->dev, 6082 - "tag collision (tag=%d) in cmd_tagged_alloc().\n", 6083 - idx); 6084 - if (c->scsi_cmd != NULL) 6085 - scsi_print_command(c->scsi_cmd); 6086 - scsi_print_command(scmd); 6054 + if (idx != h->last_collision_tag) { /* Print once per tag */ 6055 + dev_warn(&h->pdev->dev, 6056 + "%s: tag collision (tag=%d)\n", __func__, idx); 6057 + if (c->scsi_cmd != NULL) 6058 + scsi_print_command(c->scsi_cmd); 6059 + if (scmd) 6060 + scsi_print_command(scmd); 6061 + h->last_collision_tag = idx; 6062 + } 6063 + return NULL; 6087 6064 } 6065 + 6066 + atomic_inc(&c->refcount); 6088 6067 6089 6068 hpsa_cmd_partial_init(h, idx, c); 6090 6069 return c; ··· 6159 6126 break; /* it's ours now. */ 6160 6127 } 6161 6128 hpsa_cmd_partial_init(h, i, c); 6129 + c->device = NULL; 6162 6130 return c; 6163 6131 } 6164 6132 ··· 6613 6579 } 6614 6580 } 6615 6581 6616 - static void hpsa_send_host_reset(struct ctlr_info *h, unsigned char *scsi3addr, 6617 - u8 reset_type) 6582 + static void hpsa_send_host_reset(struct ctlr_info *h, u8 reset_type) 6618 6583 { 6619 6584 struct CommandList *c; 6620 6585 ··· 8016 7983 static void hpsa_free_irqs(struct ctlr_info *h) 8017 7984 { 8018 7985 int i; 7986 + int irq_vector = 0; 7987 + 7988 + if (hpsa_simple_mode) 7989 + irq_vector = h->intr_mode; 8019 7990 8020 7991 if (!h->msix_vectors || h->intr_mode != PERF_MODE_INT) { 8021 7992 /* Single reply queue, only one irq to free */ 8022 - free_irq(pci_irq_vector(h->pdev, 0), &h->q[h->intr_mode]); 7993 + free_irq(pci_irq_vector(h->pdev, irq_vector), 7994 + &h->q[h->intr_mode]); 8023 7995 h->q[h->intr_mode] = 0; 8024 7996 return; 8025 7997 } ··· 8043 8005 irqreturn_t (*intxhandler)(int, void *)) 8044 8006 { 8045 8007 
int rc, i; 8008 + int irq_vector = 0; 8009 + 8010 + if (hpsa_simple_mode) 8011 + irq_vector = h->intr_mode; 8046 8012 8047 8013 /* 8048 8014 * initialize h->q[x] = x so that interrupt handlers know which ··· 8082 8040 if (h->msix_vectors > 0 || h->pdev->msi_enabled) { 8083 8041 sprintf(h->intrname[0], "%s-msi%s", h->devname, 8084 8042 h->msix_vectors ? "x" : ""); 8085 - rc = request_irq(pci_irq_vector(h->pdev, 0), 8043 + rc = request_irq(pci_irq_vector(h->pdev, irq_vector), 8086 8044 msixhandler, 0, 8087 8045 h->intrname[0], 8088 8046 &h->q[h->intr_mode]); 8089 8047 } else { 8090 8048 sprintf(h->intrname[h->intr_mode], 8091 8049 "%s-intx", h->devname); 8092 - rc = request_irq(pci_irq_vector(h->pdev, 0), 8050 + rc = request_irq(pci_irq_vector(h->pdev, irq_vector), 8093 8051 intxhandler, IRQF_SHARED, 8094 8052 h->intrname[0], 8095 8053 &h->q[h->intr_mode]); ··· 8097 8055 } 8098 8056 if (rc) { 8099 8057 dev_err(&h->pdev->dev, "failed to get irq %d for %s\n", 8100 - pci_irq_vector(h->pdev, 0), h->devname); 8058 + pci_irq_vector(h->pdev, irq_vector), h->devname); 8101 8059 hpsa_free_irqs(h); 8102 8060 return -ENODEV; 8103 8061 } ··· 8107 8065 static int hpsa_kdump_soft_reset(struct ctlr_info *h) 8108 8066 { 8109 8067 int rc; 8110 - hpsa_send_host_reset(h, RAID_CTLR_LUNID, HPSA_RESET_TYPE_CONTROLLER); 8068 + hpsa_send_host_reset(h, HPSA_RESET_TYPE_CONTROLLER); 8111 8069 8112 8070 dev_info(&h->pdev->dev, "Waiting for board to soft reset.\n"); 8113 8071 rc = hpsa_wait_for_board_state(h->pdev, h->vaddr, BOARD_NOT_READY); ··· 8163 8121 destroy_workqueue(h->rescan_ctlr_wq); 8164 8122 h->rescan_ctlr_wq = NULL; 8165 8123 } 8124 + if (h->monitor_ctlr_wq) { 8125 + destroy_workqueue(h->monitor_ctlr_wq); 8126 + h->monitor_ctlr_wq = NULL; 8127 + } 8128 + 8166 8129 kfree(h); /* init_one 1 */ 8167 8130 } 8168 8131 ··· 8503 8456 8504 8457 spin_lock_irqsave(&h->lock, flags); 8505 8458 if (!h->remove_in_progress) 8506 - schedule_delayed_work(&h->event_monitor_work, 8507 - 
HPSA_EVENT_MONITOR_INTERVAL); 8459 + queue_delayed_work(h->monitor_ctlr_wq, &h->event_monitor_work, 8460 + HPSA_EVENT_MONITOR_INTERVAL); 8508 8461 spin_unlock_irqrestore(&h->lock, flags); 8509 8462 } 8510 8463 ··· 8549 8502 8550 8503 spin_lock_irqsave(&h->lock, flags); 8551 8504 if (!h->remove_in_progress) 8552 - schedule_delayed_work(&h->monitor_ctlr_work, 8505 + queue_delayed_work(h->monitor_ctlr_wq, &h->monitor_ctlr_work, 8553 8506 h->heartbeat_sample_interval); 8554 8507 spin_unlock_irqrestore(&h->lock, flags); 8555 8508 } ··· 8717 8670 goto clean7; /* aer/h */ 8718 8671 } 8719 8672 8673 + h->monitor_ctlr_wq = hpsa_create_controller_wq(h, "monitor"); 8674 + if (!h->monitor_ctlr_wq) { 8675 + rc = -ENOMEM; 8676 + goto clean7; 8677 + } 8678 + 8720 8679 /* 8721 8680 * At this point, the controller is ready to take commands. 8722 8681 * Now, if reset_devices and the hard reset didn't work, try ··· 8851 8798 if (h->rescan_ctlr_wq) { 8852 8799 destroy_workqueue(h->rescan_ctlr_wq); 8853 8800 h->rescan_ctlr_wq = NULL; 8801 + } 8802 + if (h->monitor_ctlr_wq) { 8803 + destroy_workqueue(h->monitor_ctlr_wq); 8804 + h->monitor_ctlr_wq = NULL; 8854 8805 } 8855 8806 kfree(h); 8856 8807 return rc; ··· 9003 8946 cancel_delayed_work_sync(&h->event_monitor_work); 9004 8947 destroy_workqueue(h->rescan_ctlr_wq); 9005 8948 destroy_workqueue(h->resubmit_wq); 8949 + destroy_workqueue(h->monitor_ctlr_wq); 9006 8950 9007 8951 hpsa_delete_sas_host(h); 9008 8952
+5 -1
drivers/scsi/hpsa.h
··· 65 65 u8 physical_device : 1; 66 66 u8 expose_device; 67 67 u8 removed : 1; /* device is marked for death */ 68 + u8 was_removed : 1; /* device actually removed */ 68 69 #define RAID_CTLR_LUNID "\0\0\0\0\0\0\0\0" 69 70 unsigned char device_id[16]; /* from inquiry pg. 0x83 */ 70 71 u64 sas_address; ··· 76 75 unsigned char raid_level; /* from inquiry page 0xC1 */ 77 76 unsigned char volume_offline; /* discovered via TUR or VPD */ 78 77 u16 queue_depth; /* max queue_depth for this device */ 79 - atomic_t reset_cmds_out; /* Count of commands to-be affected */ 78 + atomic_t commands_outstanding; /* track commands sent to device */ 80 79 atomic_t ioaccel_cmds_out; /* Only used for physical devices 81 80 * counts commands sent to physical 82 81 * device via "ioaccel" path. 83 82 */ 83 + bool in_reset; 84 84 u32 ioaccel_handle; 85 85 u8 active_path_index; 86 86 u8 path_map; ··· 176 174 struct CfgTable __iomem *cfgtable; 177 175 int interrupts_enabled; 178 176 int max_commands; 177 + int last_collision_tag; /* tags are global */ 179 178 atomic_t commands_outstanding; 180 179 # define PERF_MODE_INT 0 181 180 # define DOORBELL_INT 1 ··· 303 300 int needs_abort_tags_swizzled; 304 301 struct workqueue_struct *resubmit_wq; 305 302 struct workqueue_struct *rescan_ctlr_wq; 303 + struct workqueue_struct *monitor_ctlr_wq; 306 304 atomic_t abort_cmds_available; 307 305 wait_queue_head_t event_sync_wait_queue; 308 306 struct mutex reset_mutex;
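Editor's note: the hpsa hunks above replace the old per-command `reset_pending` bookkeeping with a per-device `in_reset` flag plus a `commands_outstanding` counter that must drain before the reset is sent (`hpsa_eh_device_reset_handler` polls, then `hpsa_do_reset` sleeps on `event_sync_wait_queue`). The following is a minimal userspace model of that handshake, not the kernel code: names are simplified, and the kernel's `wait_event()`/`msleep()` sleep is replaced by a boolean "drained yet?" return value.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative model of the hpsa reset quiesce pattern (assumed,
 * simplified names; not the driver's actual structures). */
struct device_model {
	atomic_int commands_outstanding;	/* like dev->commands_outstanding */
	atomic_bool in_reset;			/* like dev->in_reset */
};

static bool queue_command(struct device_model *d)
{
	if (atomic_load(&d->in_reset))
		return false;	/* analogue of SCSI_MLQUEUE_DEVICE_BUSY */
	atomic_fetch_add(&d->commands_outstanding, 1);
	return true;
}

static void complete_command(struct device_model *d)
{
	/* In the driver, hpsa_cmd_resolve_events() also wakes
	 * event_sync_wait_queue when this hits zero during a reset. */
	atomic_fetch_sub(&d->commands_outstanding, 1);
}

static bool reset_device(struct device_model *d)
{
	atomic_store(&d->in_reset, true);	/* block new commands */
	if (atomic_load(&d->commands_outstanding) > 0)
		return false;	/* driver would msleep()/wait_event() here */
	/* ... the actual hpsa_send_reset() would go here ... */
	atomic_store(&d->in_reset, false);
	return true;
}
```

The point of the change is visible in the model: the reset path no longer walks the whole command pool tagging individual commands; it only gates admission and waits for one counter to reach zero.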
+1 -1
drivers/scsi/hpsa_cmd.h
··· 448 448 struct hpsa_scsi_dev_t *phys_disk; 449 449 450 450 int abort_pending; 451 - struct hpsa_scsi_dev_t *reset_pending; 451 + struct hpsa_scsi_dev_t *device; 452 452 atomic_t refcount; /* Must be last to avoid memset in hpsa_cmd_init() */ 453 453 } __aligned(COMMANDLIST_ALIGNMENT); 454 454
+58 -19
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 814 814 atomic_set(&hostdata->request_limit, 0); 815 815 816 816 purge_requests(hostdata, DID_ERROR); 817 - hostdata->reset_crq = 1; 817 + hostdata->action = IBMVSCSI_HOST_ACTION_RESET; 818 818 wake_up(&hostdata->work_wait_q); 819 819 } 820 820 ··· 1165 1165 be32_to_cpu(evt_struct->xfer_iu->srp.login_rsp.req_lim_delta)); 1166 1166 1167 1167 /* If we had any pending I/Os, kick them */ 1168 - scsi_unblock_requests(hostdata->host); 1168 + hostdata->action = IBMVSCSI_HOST_ACTION_UNBLOCK; 1169 + wake_up(&hostdata->work_wait_q); 1169 1170 } 1170 1171 1171 1172 /** ··· 1784 1783 /* We need to re-setup the interpartition connection */ 1785 1784 dev_info(hostdata->dev, "Re-enabling adapter!\n"); 1786 1785 hostdata->client_migrated = 1; 1787 - hostdata->reenable_crq = 1; 1786 + hostdata->action = IBMVSCSI_HOST_ACTION_REENABLE; 1788 1787 purge_requests(hostdata, DID_REQUEUE); 1789 1788 wake_up(&hostdata->work_wait_q); 1790 1789 } else { ··· 2037 2036 .show = show_host_config, 2038 2037 }; 2039 2038 2039 + static int ibmvscsi_host_reset(struct Scsi_Host *shost, int reset_type) 2040 + { 2041 + struct ibmvscsi_host_data *hostdata = shost_priv(shost); 2042 + 2043 + dev_info(hostdata->dev, "Initiating adapter reset!\n"); 2044 + ibmvscsi_reset_host(hostdata); 2045 + 2046 + return 0; 2047 + } 2048 + 2040 2049 static struct device_attribute *ibmvscsi_attrs[] = { 2041 2050 &ibmvscsi_host_vhost_loc, 2042 2051 &ibmvscsi_host_vhost_name, ··· 2073 2062 .eh_host_reset_handler = ibmvscsi_eh_host_reset_handler, 2074 2063 .slave_configure = ibmvscsi_slave_configure, 2075 2064 .change_queue_depth = ibmvscsi_change_queue_depth, 2065 + .host_reset = ibmvscsi_host_reset, 2076 2066 .cmd_per_lun = IBMVSCSI_CMDS_PER_LUN_DEFAULT, 2077 2067 .can_queue = IBMVSCSI_MAX_REQUESTS_DEFAULT, 2078 2068 .this_id = -1, ··· 2103 2091 2104 2092 static void ibmvscsi_do_work(struct ibmvscsi_host_data *hostdata) 2105 2093 { 2094 + unsigned long flags; 2106 2095 int rc; 2107 2096 char *action = "reset"; 2108 2097 
2109 - if (hostdata->reset_crq) { 2110 - smp_rmb(); 2111 - hostdata->reset_crq = 0; 2112 - 2098 + spin_lock_irqsave(hostdata->host->host_lock, flags); 2099 + switch (hostdata->action) { 2100 + case IBMVSCSI_HOST_ACTION_UNBLOCK: 2101 + rc = 0; 2102 + break; 2103 + case IBMVSCSI_HOST_ACTION_RESET: 2104 + spin_unlock_irqrestore(hostdata->host->host_lock, flags); 2113 2105 rc = ibmvscsi_reset_crq_queue(&hostdata->queue, hostdata); 2106 + spin_lock_irqsave(hostdata->host->host_lock, flags); 2114 2107 if (!rc) 2115 2108 rc = ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0); 2116 2109 vio_enable_interrupts(to_vio_dev(hostdata->dev)); 2117 - } else if (hostdata->reenable_crq) { 2118 - smp_rmb(); 2110 + break; 2111 + case IBMVSCSI_HOST_ACTION_REENABLE: 2119 2112 action = "enable"; 2113 + spin_unlock_irqrestore(hostdata->host->host_lock, flags); 2120 2114 rc = ibmvscsi_reenable_crq_queue(&hostdata->queue, hostdata); 2121 - hostdata->reenable_crq = 0; 2115 + spin_lock_irqsave(hostdata->host->host_lock, flags); 2122 2116 if (!rc) 2123 2117 rc = ibmvscsi_send_crq(hostdata, 0xC001000000000000LL, 0); 2124 - } else 2118 + break; 2119 + case IBMVSCSI_HOST_ACTION_NONE: 2120 + default: 2121 + spin_unlock_irqrestore(hostdata->host->host_lock, flags); 2125 2122 return; 2123 + } 2124 + 2125 + hostdata->action = IBMVSCSI_HOST_ACTION_NONE; 2126 2126 2127 2127 if (rc) { 2128 2128 atomic_set(&hostdata->request_limit, -1); 2129 2129 dev_err(hostdata->dev, "error after %s\n", action); 2130 2130 } 2131 + spin_unlock_irqrestore(hostdata->host->host_lock, flags); 2131 2132 2132 2133 scsi_unblock_requests(hostdata->host); 2133 2134 } 2134 2135 2135 - static int ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata) 2136 + static int __ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata) 2136 2137 { 2137 2138 if (kthread_should_stop()) 2138 2139 return 1; 2139 - else if (hostdata->reset_crq) { 2140 - smp_rmb(); 2141 - return 1; 2142 - } else if (hostdata->reenable_crq) { 2143 - 
smp_rmb(); 2144 - return 1; 2140 + switch (hostdata->action) { 2141 + case IBMVSCSI_HOST_ACTION_NONE: 2142 + return 0; 2143 + case IBMVSCSI_HOST_ACTION_RESET: 2144 + case IBMVSCSI_HOST_ACTION_REENABLE: 2145 + case IBMVSCSI_HOST_ACTION_UNBLOCK: 2146 + default: 2147 + break; 2145 2148 } 2146 2149 2147 - return 0; 2150 + return 1; 2151 + } 2152 + 2153 + static int ibmvscsi_work_to_do(struct ibmvscsi_host_data *hostdata) 2154 + { 2155 + unsigned long flags; 2156 + int rc; 2157 + 2158 + spin_lock_irqsave(hostdata->host->host_lock, flags); 2159 + rc = __ibmvscsi_work_to_do(hostdata); 2160 + spin_unlock_irqrestore(hostdata->host->host_lock, flags); 2161 + 2162 + return rc; 2148 2163 } 2149 2164 2150 2165 static int ibmvscsi_work(void *data)
+8 -2
drivers/scsi/ibmvscsi/ibmvscsi.h
··· 74 74 dma_addr_t iu_token; 75 75 }; 76 76 77 + enum ibmvscsi_host_action { 78 + IBMVSCSI_HOST_ACTION_NONE = 0, 79 + IBMVSCSI_HOST_ACTION_RESET, 80 + IBMVSCSI_HOST_ACTION_REENABLE, 81 + IBMVSCSI_HOST_ACTION_UNBLOCK, 82 + }; 83 + 77 84 /* all driver data associated with a host adapter */ 78 85 struct ibmvscsi_host_data { 79 86 struct list_head host_list; 80 87 atomic_t request_limit; 81 88 int client_migrated; 82 - int reset_crq; 83 - int reenable_crq; 89 + enum ibmvscsi_host_action action; 84 90 struct device *dev; 85 91 struct event_pool pool; 86 92 struct crq_queue queue;
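Editor's note: the ibmvscsi hunks collapse the two independent work flags (`reset_crq`, `reenable_crq`) into a single `enum ibmvscsi_host_action` field read and written under `host_lock`, so exactly one action is pending at a time and the worker consumes it atomically. A stripped-down sketch of that flag-to-enum conversion (locking and wakeups omitted, names hypothetical):

```c
/* Single-pending-action pattern, modeled after the ibmvscsi change.
 * The real driver guards this field with host_lock and wakes
 * work_wait_q after each request; that machinery is elided here. */
enum host_action {
	HOST_ACTION_NONE = 0,
	HOST_ACTION_RESET,
	HOST_ACTION_REENABLE,
	HOST_ACTION_UNBLOCK,
};

struct host_model {
	enum host_action action;
};

static void request_action(struct host_model *h, enum host_action a)
{
	h->action = a;	/* a later request supersedes an earlier one */
}

static enum host_action consume_action(struct host_model *h)
{
	enum host_action a = h->action;

	h->action = HOST_ACTION_NONE;	/* consumed once, as in ibmvscsi_do_work() */
	return a;
}
```

With two separate ints, both "reset" and "reenable" could be set simultaneously and the worker's if/else ordering silently picked a winner; the enum makes the supersede behavior explicit.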
+2 -2
drivers/scsi/isci/remote_device.c
··· 1087 1087 1088 1088 if (dev->dev_type == SAS_SATA_DEV || (dev->tproto & SAS_PROTOCOL_SATA)) { 1089 1089 sci_change_state(&idev->sm, SCI_STP_DEV_IDLE); 1090 - } else if (dev_is_expander(dev)) { 1090 + } else if (dev_is_expander(dev->dev_type)) { 1091 1091 sci_change_state(&idev->sm, SCI_SMP_DEV_IDLE); 1092 1092 } else 1093 1093 isci_remote_device_ready(ihost, idev); ··· 1478 1478 struct domain_device *dev = idev->domain_dev; 1479 1479 enum sci_status status; 1480 1480 1481 - if (dev->parent && dev_is_expander(dev->parent)) 1481 + if (dev->parent && dev_is_expander(dev->parent->dev_type)) 1482 1482 status = sci_remote_device_ea_construct(iport, idev); 1483 1483 else 1484 1484 status = sci_remote_device_da_construct(iport, idev);
-5
drivers/scsi/isci/remote_device.h
··· 295 295 return idev; 296 296 } 297 297 298 - static inline bool dev_is_expander(struct domain_device *dev) 299 - { 300 - return dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE; 301 - } 302 - 303 298 static inline void sci_remote_device_decrement_request_count(struct isci_remote_device *idev) 304 299 { 305 300 /* XXX delete this voodoo when converting to the top-level device
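Editor's note: the isci and libsas hunks in this series remove isci's private `dev_is_expander(struct domain_device *)` and convert every open-coded "edge or fanout expander" comparison to a shared helper keyed on the device-type enum. A self-contained sketch of that helper follows; the enum here is a local stand-in, not the actual definitions from `include/scsi/sas.h`.

```c
#include <stdbool.h>

/* Stand-in for the sas_device_type values used by the checks in this
 * diff (the real enum lives in include/scsi/sas.h and has more
 * members; values here are illustrative). */
enum sas_device_type {
	SAS_PHY_UNUSED = 0,
	SAS_END_DEVICE,
	SAS_EDGE_EXPANDER_DEVICE,
	SAS_FANOUT_EXPANDER_DEVICE,
};

/* One helper instead of repeated two-way comparisons such as
 * "dev->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
 *  dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE". */
static inline bool dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}
```

Taking the enum rather than a `struct domain_device *` is what lets the same helper serve both `dev->dev_type` and `phy->attached_dev_type` call sites in the sas_expander.c hunks below.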
+4 -4
drivers/scsi/isci/request.c
··· 224 224 idev = ireq->target_device; 225 225 iport = idev->owning_port; 226 226 227 - /* Fill in the TC with the its required data */ 227 + /* Fill in the TC with its required data */ 228 228 task_context->abort = 0; 229 229 task_context->priority = 0; 230 230 task_context->initiator_request = 1; ··· 506 506 idev = ireq->target_device; 507 507 iport = idev->owning_port; 508 508 509 - /* Fill in the TC with the its required data */ 509 + /* Fill in the TC with its required data */ 510 510 task_context->abort = 0; 511 511 task_context->priority = SCU_TASK_PRIORITY_NORMAL; 512 512 task_context->initiator_request = 1; ··· 3101 3101 /* pass */; 3102 3102 else if (dev_is_sata(dev)) 3103 3103 memset(&ireq->stp.cmd, 0, sizeof(ireq->stp.cmd)); 3104 - else if (dev_is_expander(dev)) 3104 + else if (dev_is_expander(dev->dev_type)) 3105 3105 /* pass */; 3106 3106 else 3107 3107 return SCI_FAILURE_UNSUPPORTED_PROTOCOL; ··· 3235 3235 iport = idev->owning_port; 3236 3236 3237 3237 /* 3238 - * Fill in the TC with the its required data 3238 + * Fill in the TC with its required data 3239 3239 * 00h 3240 3240 */ 3241 3241 task_context->priority = 0;
+1 -1
drivers/scsi/isci/task.c
··· 511 511 "%s: dev = %p (%s%s), task = %p, old_request == %p\n", 512 512 __func__, idev, 513 513 (dev_is_sata(task->dev) ? "STP/SATA" 514 - : ((dev_is_expander(task->dev)) 514 + : ((dev_is_expander(task->dev->dev_type)) 515 515 ? "SMP" 516 516 : "SSP")), 517 517 ((idev) ? ((test_bit(IDEV_GONE, &idev->flags))
-2
drivers/scsi/libiscsi_tcp.c
··· 8 8 * Copyright (C) 2006 Red Hat, Inc. All rights reserved. 9 9 * maintained by open-iscsi@googlegroups.com 10 10 * 11 - * See the file COPYING included with this distribution for more details. 12 - * 13 11 * Credits: 14 12 * Christoph Hellwig 15 13 * FUJITA Tomonori
+3 -20
drivers/scsi/libsas/sas_discover.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Serial Attached SCSI (SAS) Discover process 3 4 * 4 5 * Copyright (C) 2005 Adaptec, Inc. All rights reserved. 5 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 - * 7 - * This file is licensed under GPLv2. 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License as 11 - * published by the Free Software Foundation; either version 2 of the 12 - * License, or (at your option) any later version. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 17 - * General Public License for more details. 18 - * 19 - * You should have received a copy of the GNU General Public License 20 - * along with this program; if not, write to the Free Software 21 - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 22 - * 23 7 */ 24 8 25 9 #include <linux/scatterlist.h> ··· 293 309 dev->phy = NULL; 294 310 295 311 /* remove the phys and ports, everything else should be gone */ 296 - if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) 312 + if (dev_is_expander(dev->dev_type)) 297 313 kfree(dev->ex_dev.ex_phy); 298 314 299 315 if (dev_is_sata(dev) && dev->sata_dev.ap) { ··· 503 519 pr_debug("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, 504 520 task_pid_nr(current)); 505 521 506 - if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || 507 - ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE)) 522 + if (ddev && dev_is_expander(ddev->dev_type)) 508 523 res = sas_ex_revalidate_domain(ddev); 509 524 510 525 pr_debug("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n",
+1 -17
drivers/scsi/libsas/sas_event.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Serial Attached SCSI (SAS) Event processing 3 4 * 4 5 * Copyright (C) 2005 Adaptec, Inc. All rights reserved. 5 6 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 6 - * 7 - * This file is licensed under GPLv2. 8 - * 9 - * This program is free software; you can redistribute it and/or 10 - * modify it under the terms of the GNU General Public License as 11 - * published by the Free Software Foundation; either version 2 of the 12 - * License, or (at your option) any later version. 13 - * 14 - * This program is distributed in the hope that it will be useful, but 15 - * WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 17 - * General Public License for more details. 18 - * 19 - * You should have received a copy of the GNU General Public License 20 - * along with this program; if not, write to the Free Software 21 - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 22 - * 23 7 */ 24 8 25 9 #include <linux/export.h>
+15 -56
drivers/scsi/libsas/sas_expander.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Serial Attached SCSI (SAS) Expander discovery and configuration 3 4 * ··· 6 5 * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com> 7 6 * 8 7 * This file is licensed under GPLv2. 9 - * 10 - * This program is free software; you can redistribute it and/or 11 - * modify it under the terms of the GNU General Public License as 12 - * published by the Free Software Foundation; either version 2 of the 13 - * License, or (at your option) any later version. 14 - * 15 - * This program is distributed in the hope that it will be useful, but 16 - * WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 18 - * General Public License for more details. 19 - * 20 - * You should have received a copy of the GNU General Public License 21 - * along with this program; if not, write to the Free Software 22 - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 23 - * 24 8 */ 25 9 26 10 #include <linux/scatterlist.h> ··· 1092 1106 SAS_ADDR(dev->sas_addr), 1093 1107 phy_id); 1094 1108 sas_ex_disable_phy(dev, phy_id); 1095 - break; 1109 + return res; 1096 1110 } else 1097 1111 memcpy(dev->port->disc.fanout_sas_addr, 1098 1112 ex_phy->attached_sas_addr, SAS_ADDR_SIZE); ··· 1104 1118 break; 1105 1119 } 1106 1120 1107 - if (child) { 1108 - int i; 1109 - 1110 - for (i = 0; i < ex->num_phys; i++) { 1111 - if (ex->ex_phy[i].phy_state == PHY_VACANT || 1112 - ex->ex_phy[i].phy_state == PHY_NOT_PRESENT) 1113 - continue; 1114 - /* 1115 - * Due to races, the phy might not get added to the 1116 - * wide port, so we add the phy to the wide port here. 
1117      - 		 */
1118      - 		if (SAS_ADDR(ex->ex_phy[i].attached_sas_addr) ==
1119      - 		    SAS_ADDR(child->sas_addr)) {
1120      - 			ex->ex_phy[i].phy_state= PHY_DEVICE_DISCOVERED;
1121      - 			if (sas_ex_join_wide_port(dev, i))
1122      - 				pr_debug("Attaching ex phy%02d to wide port %016llx\n",
1123      - 					 i, SAS_ADDR(ex->ex_phy[i].attached_sas_addr));
1124      - 		}
1125      - 	}
1126      - }
1127      -
     1121 + 	if (!child)
     1122 + 		pr_notice("ex %016llx phy%02d failed to discover\n",
     1123 + 			  SAS_ADDR(dev->sas_addr), phy_id);
1128 1124  	return res;
1129 1125  }
1130 1126
···
1122 1154  	    phy->phy_state == PHY_NOT_PRESENT)
1123 1155  		continue;
1124 1156
1125      - 	if ((phy->attached_dev_type == SAS_EDGE_EXPANDER_DEVICE ||
1126      - 	     phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE) &&
     1157 + 	if (dev_is_expander(phy->attached_dev_type) &&
1127 1158  	    phy->routing_attr == SUBTRACTIVE_ROUTING) {
1128 1159
1129 1160  		memcpy(sub_addr, phy->attached_sas_addr, SAS_ADDR_SIZE);
···
1140 1173  	u8 sub_addr[SAS_ADDR_SIZE] = {0, };
1141 1174
1142 1175  	list_for_each_entry(child, &ex->children, siblings) {
1143      - 		if (child->dev_type != SAS_EDGE_EXPANDER_DEVICE &&
1144      - 		    child->dev_type != SAS_FANOUT_EXPANDER_DEVICE)
     1176 + 		if (!dev_is_expander(child->dev_type))
1145 1177  			continue;
1146 1178  		if (sub_addr[0] == 0) {
1147 1179  			sas_find_sub_addr(child, sub_addr);
···
1225 1259  	    phy->phy_state == PHY_NOT_PRESENT)
1226 1260  		continue;
1227 1261
1228      - 	if ((phy->attached_dev_type == SAS_FANOUT_EXPANDER_DEVICE ||
1229      - 	     phy->attached_dev_type == SAS_EDGE_EXPANDER_DEVICE) &&
     1262 + 	if (dev_is_expander(phy->attached_dev_type) &&
1230 1263  	    phy->routing_attr == SUBTRACTIVE_ROUTING) {
1231 1264
1232 1265  		if (!sub_sas_addr)
···
1321 1356  	if (!child->parent)
1322 1357  		return 0;
1323 1358
1324      - 	if (child->parent->dev_type != SAS_EDGE_EXPANDER_DEVICE &&
1325      - 	    child->parent->dev_type != SAS_FANOUT_EXPANDER_DEVICE)
     1359 + 	if (!dev_is_expander(child->parent->dev_type))
1326 1360  		return 0;
1327 1361
1328 1362  	parent_ex = &child->parent->ex_dev;
···
1617 1653  	struct domain_device *dev;
1618 1654
1619 1655  	list_for_each_entry(dev, &port->dev_list, dev_list_node) {
1620      - 		if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
1621      - 		    dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
     1656 + 		if (dev_is_expander(dev->dev_type)) {
1622 1657  			struct sas_expander_device *ex =
1623 1658  				rphy_to_expander_device(dev->rphy);
1624 1659
···
1849 1886  			       SAS_ADDR(dev->sas_addr));
1850 1887  	}
1851 1888  	list_for_each_entry(ch, &ex->children, siblings) {
1852      - 		if (ch->dev_type == SAS_EDGE_EXPANDER_DEVICE || ch->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
     1889 + 		if (dev_is_expander(ch->dev_type)) {
1853 1890  			res = sas_find_bcast_dev(ch, src_dev);
1854 1891  			if (*src_dev)
1855 1892  				return res;
···
1866 1903
1867 1904  	list_for_each_entry_safe(child, n, &ex->children, siblings) {
1868 1905  		set_bit(SAS_DEV_GONE, &child->state);
1869      - 		if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
1870      - 		    child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
     1906 + 		if (dev_is_expander(child->dev_type))
1871 1907  			sas_unregister_ex_tree(port, child);
1872 1908  		else
1873 1909  			sas_unregister_dev(port, child);
···
1886 1924  		if (SAS_ADDR(child->sas_addr) ==
1887 1925  		    SAS_ADDR(phy->attached_sas_addr)) {
1888 1926  			set_bit(SAS_DEV_GONE, &child->state);
1889      - 			if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
1890      - 			    child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
     1927 + 			if (dev_is_expander(child->dev_type))
1891 1928  				sas_unregister_ex_tree(parent->port, child);
1892 1929  			else
1893 1930  				sas_unregister_dev(parent->port, child);
···
1915 1954  	int res = 0;
1916 1955
1917 1956  	list_for_each_entry(child, &ex_root->children, siblings) {
1918      - 		if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
1919      - 		    child->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
     1957 + 		if (dev_is_expander(child->dev_type)) {
1920 1958  			struct sas_expander_device *ex =
1921 1959  				rphy_to_expander_device(child->rphy);
1922 1960
···
1968 2008  	list_for_each_entry(child, &dev->ex_dev.children, siblings) {
1969 2009  		if (SAS_ADDR(child->sas_addr) ==
1970 2010  		    SAS_ADDR(ex_phy->attached_sas_addr)) {
1971      - 			if (child->dev_type == SAS_EDGE_EXPANDER_DEVICE ||
1972      - 			    child->dev_type == SAS_FANOUT_EXPANDER_DEVICE)
     2011 + 			if (dev_is_expander(child->dev_type))
1973 2012  				res = sas_discover_bfs_by_root(child);
1974 2013  			break;
1975 2014  		}
+1 -1
drivers/scsi/libsas/sas_init.c
···
   1      - // SPDX-License-Identifier: GPL-2.0-or-later
        1 + // SPDX-License-Identifier: GPL-2.0-only
   2    2  /*
   3    3   * Serial Attached SCSI (SAS) Transport Layer initialization
   4    4   *
+1 -1
drivers/scsi/libsas/sas_internal.h
···
   1      - /* SPDX-License-Identifier: GPL-2.0-or-later */
        1 + /* SPDX-License-Identifier: GPL-2.0-only */
   2    2  /*
   3    3   * Serial Attached SCSI (SAS) class internal header file
   4    4   *
+1 -17
drivers/scsi/libsas/sas_phy.c
···
        1 + // SPDX-License-Identifier: GPL-2.0
   1    2  /*
   2    3   * Serial Attached SCSI (SAS) Phy class
   3    4   *
   4    5   * Copyright (C) 2005 Adaptec, Inc.  All rights reserved.
   5    6   * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
   6      -  *
   7      -  * This file is licensed under GPLv2.
   8      -  *
   9      -  * This program is free software; you can redistribute it and/or
  10      -  * modify it under the terms of the GNU General Public License as
  11      -  * published by the Free Software Foundation; either version 2 of the
  12      -  * License, or (at your option) any later version.
  13      -  *
  14      -  * This program is distributed in the hope that it will be useful, but
  15      -  * WITHOUT ANY WARRANTY; without even the implied warranty of
  16      -  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  17      -  * General Public License for more details.
  18      -  *
  19      -  * You should have received a copy of the GNU General Public License
  20      -  * along with this program; if not, write to the Free Software
  21      -  * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
  22      -  *
  23    7   */
  24    8
  25    9  #include "sas_internal.h"
+4 -20
drivers/scsi/libsas/sas_port.c
···
        1 + // SPDX-License-Identifier: GPL-2.0
   1    2  /*
   2    3   * Serial Attached SCSI (SAS) Port class
   3    4   *
   4    5   * Copyright (C) 2005 Adaptec, Inc.  All rights reserved.
   5    6   * Copyright (C) 2005 Luben Tuikov <luben_tuikov@adaptec.com>
   6      -  *
   7      -  * This file is licensed under GPLv2.
   8      -  *
   9      -  * This program is free software; you can redistribute it and/or
  10      -  * modify it under the terms of the GNU General Public License as
  11      -  * published by the Free Software Foundation; either version 2 of the
  12      -  * License, or (at your option) any later version.
  13      -  *
  14      -  * This program is distributed in the hope that it will be useful, but
  15      -  * WITHOUT ANY WARRANTY; without even the implied warranty of
  16      -  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  17      -  * General Public License for more details.
  18      -  *
  19      -  * You should have received a copy of the GNU General Public License
  20      -  * along with this program; if not, write to the Free Software
  21      -  * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
  22      -  *
  23    7   */
  24    8
  25    9  #include "sas_internal.h"
···
  54   70  			continue;
  55   71  		}
  56   72
  57      - 		if (dev->dev_type == SAS_EDGE_EXPANDER_DEVICE || dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) {
       73 + 		if (dev_is_expander(dev->dev_type)) {
  58   74  			dev->ex_dev.ex_change_count = -1;
  59   75  			for (i = 0; i < dev->ex_dev.num_phys; i++) {
  60   76  				struct ex_phy *phy = &dev->ex_dev.ex_phy[i];
···
 179  195
 180  196  	sas_discover_event(phy->port, DISCE_DISCOVER_DOMAIN);
 181  197  	/* Only insert a revalidate event after initial discovery */
 182      - 	if (port_dev && sas_dev_type_is_expander(port_dev->dev_type)) {
      198 + 	if (port_dev && dev_is_expander(port_dev->dev_type)) {
 183  199  		struct expander_device *ex_dev = &port_dev->ex_dev;
 184  200
 185  201  		ex_dev->ex_change_count = -1;
···
 248  264  	spin_unlock_irqrestore(&sas_ha->phy_port_lock, flags);
 249  265
 250  266  	/* Only insert revalidate event if the port still has members */
 251      - 	if (port->port && dev && sas_dev_type_is_expander(dev->dev_type)) {
      267 + 	if (port->port && dev && dev_is_expander(dev->dev_type)) {
 252  268  		struct expander_device *ex_dev = &dev->ex_dev;
 253  269
 254  270  		ex_dev->ex_change_count = -1;
+1 -1
drivers/scsi/libsas/sas_scsi_host.c
···
   1      - // SPDX-License-Identifier: GPL-2.0-or-later
        1 + // SPDX-License-Identifier: GPL-2.0-only
   2    2  /*
   3    3   * Serial Attached SCSI (SAS) class SCSI Host glue.
   4    4   *
+22 -12
drivers/scsi/lpfc/lpfc_attr.c
···
4097 4097  	}
4098 4098  	if ((phba->pcidev->device == PCI_DEVICE_ID_LANCER_G6_FC ||
4099 4099  	     phba->pcidev->device == PCI_DEVICE_ID_LANCER_G7_FC) &&
4100      - 	    val != FLAGS_TOPOLOGY_MODE_PT_PT) {
     4100 + 	    val == 4) {
4101 4101  		lpfc_printf_vlog(vport, KERN_ERR, LOG_INIT,
4102      - 			"3114 Only non-FC-AL mode is supported\n");
     4102 + 			"3114 Loop mode not supported\n");
4103 4103  		return -EINVAL;
4104 4104  	}
4105 4105  	phba->cfg_topology = val;
···
5180 5180
5181 5181  	/* set the values on the cq's */
5182 5182  	for (i = 0; i < phba->cfg_irq_chann; i++) {
5183      - 		eq = phba->sli4_hba.hdwq[i].hba_eq;
     5183 + 		/* Get the EQ corresponding to the IRQ vector */
     5184 + 		eq = phba->sli4_hba.hba_eq_hdl[i].eq;
5184 5185  		if (!eq)
5185 5186  			continue;
5186 5187
···
5302 5301  				len += scnprintf(
5303 5302  					buf + len, PAGE_SIZE - len,
5304 5303  					"CPU %02d hdwq None "
5305      - 					"physid %d coreid %d ht %d\n",
     5304 + 					"physid %d coreid %d ht %d ua %d\n",
5306 5305  					phba->sli4_hba.curr_disp_cpu,
5307      - 					cpup->phys_id,
5308      - 					cpup->core_id, cpup->hyper);
     5306 + 					cpup->phys_id, cpup->core_id,
     5307 + 					(cpup->flag & LPFC_CPU_MAP_HYPER),
     5308 + 					(cpup->flag & LPFC_CPU_MAP_UNASSIGN));
5309 5309  			else
5310 5310  				len += scnprintf(
5311 5311  					buf + len, PAGE_SIZE - len,
5312 5312  					"CPU %02d EQ %04d hdwq %04d "
5313      - 					"physid %d coreid %d ht %d\n",
     5313 + 					"physid %d coreid %d ht %d ua %d\n",
5314 5314  					phba->sli4_hba.curr_disp_cpu,
5315 5315  					cpup->eq, cpup->hdwq, cpup->phys_id,
5316      - 					cpup->core_id, cpup->hyper);
     5316 + 					cpup->core_id,
     5317 + 					(cpup->flag & LPFC_CPU_MAP_HYPER),
     5318 + 					(cpup->flag & LPFC_CPU_MAP_UNASSIGN));
5317 5319  		} else {
5318 5320  			if (cpup->hdwq == LPFC_VECTOR_MAP_EMPTY)
5319 5321  				len += scnprintf(
5320 5322  					buf + len, PAGE_SIZE - len,
5321 5323  					"CPU %02d hdwq None "
5322      - 					"physid %d coreid %d ht %d IRQ %d\n",
     5324 + 					"physid %d coreid %d ht %d ua %d IRQ %d\n",
5323 5325  					phba->sli4_hba.curr_disp_cpu,
5324 5326  					cpup->phys_id,
5325      - 					cpup->core_id, cpup->hyper, cpup->irq);
     5327 + 					cpup->core_id,
     5328 + 					(cpup->flag & LPFC_CPU_MAP_HYPER),
     5329 + 					(cpup->flag & LPFC_CPU_MAP_UNASSIGN),
     5330 + 					cpup->irq);
5326 5331  			else
5327 5332  				len += scnprintf(
5328 5333  					buf + len, PAGE_SIZE - len,
5329 5334  					"CPU %02d EQ %04d hdwq %04d "
5330      - 					"physid %d coreid %d ht %d IRQ %d\n",
     5335 + 					"physid %d coreid %d ht %d ua %d IRQ %d\n",
5331 5336  					phba->sli4_hba.curr_disp_cpu,
5332 5337  					cpup->eq, cpup->hdwq, cpup->phys_id,
5333      - 					cpup->core_id, cpup->hyper, cpup->irq);
     5338 + 					cpup->core_id,
     5339 + 					(cpup->flag & LPFC_CPU_MAP_HYPER),
     5340 + 					(cpup->flag & LPFC_CPU_MAP_UNASSIGN),
     5341 + 					cpup->irq);
5334 5342  		}
5335 5343
5336 5344  		phba->sli4_hba.curr_disp_cpu++;
+1 -1
drivers/scsi/lpfc/lpfc_bsg.c
···
5741 5741
5742 5742  	event_reply->port_speed = phba->sli4_hba.link_state.speed / 1000;
5743 5743  	event_reply->logical_speed =
5744      - 		phba->sli4_hba.link_state.logical_speed / 100;
     5744 + 		phba->sli4_hba.link_state.logical_speed / 1000;
5745 5745  job_error:
5746 5746  	bsg_reply->result = rc;
5747 5747  	bsg_job_done(job, bsg_reply->result,
+2 -1
drivers/scsi/lpfc/lpfc_crtn.h
···
 572  572  void lpfc_nvmet_unsol_ls_event(struct lpfc_hba *phba,
 573  573  			struct lpfc_sli_ring *pring, struct lpfc_iocbq *piocb);
 574  574  void lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, uint32_t idx,
 575      - 			struct rqb_dmabuf *nvmebuf, uint64_t isr_ts);
      575 + 			struct rqb_dmabuf *nvmebuf, uint64_t isr_ts,
      576 + 			uint8_t cqflag);
 576  577  void lpfc_nvme_mod_param_dep(struct lpfc_hba *phba);
 577  578  void lpfc_nvme_abort_fcreq_cmpl(struct lpfc_hba *phba,
 578  579  			struct lpfc_iocbq *cmdiocb,
+11 -3
drivers/scsi/lpfc/lpfc_ct.c
···
2358 2358  lpfc_fdmi_port_attr_fc4type(struct lpfc_vport *vport,
2359 2359  			    struct lpfc_fdmi_attr_def *ad)
2360 2360  {
     2361 + 	struct lpfc_hba   *phba = vport->phba;
2361 2362  	struct lpfc_fdmi_attr_entry *ae;
2362 2363  	uint32_t size;
2363 2364
···
2367 2366
2368 2367  	ae->un.AttrTypes[3] = 0x02;	/* Type 0x1 - ELS */
2369 2368  	ae->un.AttrTypes[2] = 0x01;	/* Type 0x8 - FCP */
2370      - 	if (vport->nvmei_support || vport->phba->nvmet_support)
2371      - 		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
2372 2369  	ae->un.AttrTypes[7] = 0x01;	/* Type 0x20 - CT */
     2370 +
     2371 + 	/* Check to see if Firmware supports NVME and on physical port */
     2372 + 	if ((phba->sli_rev == LPFC_SLI_REV4) && (vport == phba->pport) &&
     2373 + 	    phba->sli4_hba.pc_sli4_params.nvme)
     2374 + 		ae->un.AttrTypes[6] = 0x01; /* Type 0x28 - NVME */
     2375 +
2373 2376  	size = FOURBYTES + 32;
2374 2377  	ad->AttrLen = cpu_to_be16(size);
2375 2378  	ad->AttrType = cpu_to_be16(RPRT_SUPPORTED_FC4_TYPES);
···
2685 2680
2686 2681  	ae->un.AttrTypes[3] = 0x02;	/* Type 0x1 - ELS */
2687 2682  	ae->un.AttrTypes[2] = 0x01;	/* Type 0x8 - FCP */
     2683 + 	ae->un.AttrTypes[7] = 0x01;	/* Type 0x20 - CT */
     2684 +
     2685 + 	/* Check to see if NVME is configured or not */
2688 2686  	if (vport->phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME)
2689 2687  		ae->un.AttrTypes[6] = 0x1;	/* Type 0x28 - NVME */
2690      - 	ae->un.AttrTypes[7] = 0x01;	/* Type 0x20 - CT */
     2688 +
2691 2689  	size = FOURBYTES + 32;
2692 2690  	ad->AttrLen = cpu_to_be16(size);
2693 2691  	ad->AttrType = cpu_to_be16(RPRT_ACTIVE_FC4_TYPES);
+1
drivers/scsi/lpfc/lpfc_els.c
···
4308 4308  	if ((rspiocb->iocb.ulpStatus == 0)
4309 4309  	    && (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) {
4310 4310  		if (!lpfc_unreg_rpi(vport, ndlp) &&
     4311 + 		    (!(vport->fc_flag & FC_PT2PT)) &&
4311 4312  		    (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE ||
4312 4313  		     ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE)) {
4313 4314  			lpfc_printf_vlog(vport, KERN_INFO,
+395 -113
drivers/scsi/lpfc/lpfc_init.c
··· 72 72 spinlock_t _dump_buf_lock; 73 73 74 74 /* Used when mapping IRQ vectors in a driver centric manner */ 75 - uint32_t lpfc_present_cpu; 75 + static uint32_t lpfc_present_cpu; 76 76 77 77 static void lpfc_get_hba_model_desc(struct lpfc_hba *, uint8_t *, uint8_t *); 78 78 static int lpfc_post_rcv_buf(struct lpfc_hba *); ··· 93 93 static void lpfc_sli4_disable_intr(struct lpfc_hba *); 94 94 static uint32_t lpfc_sli4_enable_intr(struct lpfc_hba *, uint32_t); 95 95 static void lpfc_sli4_oas_verify(struct lpfc_hba *phba); 96 - static uint16_t lpfc_find_eq_handle(struct lpfc_hba *, uint16_t); 97 96 static uint16_t lpfc_find_cpu_handle(struct lpfc_hba *, uint16_t, int); 97 + static void lpfc_setup_bg(struct lpfc_hba *, struct Scsi_Host *); 98 98 99 99 static struct scsi_transport_template *lpfc_transport_template = NULL; 100 100 static struct scsi_transport_template *lpfc_vport_transport_template = NULL; ··· 1274 1274 if (!eqcnt) 1275 1275 goto requeue; 1276 1276 1277 + /* Loop thru all IRQ vectors */ 1277 1278 for (i = 0; i < phba->cfg_irq_chann; i++) { 1278 - eq = phba->sli4_hba.hdwq[i].hba_eq; 1279 + /* Get the EQ corresponding to the IRQ vector */ 1280 + eq = phba->sli4_hba.hba_eq_hdl[i].eq; 1279 1281 if (eq && eqcnt[eq->last_cpu] < 2) 1280 1282 eqcnt[eq->last_cpu]++; 1281 1283 continue; ··· 4116 4114 * pci bus space for an I/O. The DMA buffer includes the 4117 4115 * number of SGE's necessary to support the sg_tablesize. 
4118 4116 */ 4119 - lpfc_ncmd->data = dma_pool_alloc(phba->lpfc_sg_dma_buf_pool, 4120 - GFP_KERNEL, 4121 - &lpfc_ncmd->dma_handle); 4117 + lpfc_ncmd->data = dma_pool_zalloc(phba->lpfc_sg_dma_buf_pool, 4118 + GFP_KERNEL, 4119 + &lpfc_ncmd->dma_handle); 4122 4120 if (!lpfc_ncmd->data) { 4123 4121 kfree(lpfc_ncmd); 4124 4122 break; 4125 4123 } 4126 - memset(lpfc_ncmd->data, 0, phba->cfg_sg_dma_buf_size); 4127 4124 4128 4125 /* 4129 4126 * 4K Page alignment is CRITICAL to BlockGuard, double check ··· 4347 4346 timer_setup(&vport->els_tmofunc, lpfc_els_timeout, 0); 4348 4347 4349 4348 timer_setup(&vport->delayed_disc_tmo, lpfc_delayed_disc_tmo, 0); 4349 + 4350 + if (phba->sli3_options & LPFC_SLI3_BG_ENABLED) 4351 + lpfc_setup_bg(phba, shost); 4350 4352 4351 4353 error = scsi_add_host_with_dma(shost, dev, &phba->pcidev->dev); 4352 4354 if (error) ··· 5059 5055 bf_get(lpfc_acqe_fc_la_speed, acqe_fc)); 5060 5056 5061 5057 phba->sli4_hba.link_state.logical_speed = 5062 - bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc); 5058 + bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc) * 10; 5063 5059 /* We got FC link speed, convert to fc_linkspeed (READ_TOPOLOGY) */ 5064 5060 phba->fc_linkspeed = 5065 5061 lpfc_async_link_speed_to_read_top( ··· 5162 5158 bf_get(lpfc_acqe_fc_la_port_number, acqe_fc); 5163 5159 phba->sli4_hba.link_state.fault = 5164 5160 bf_get(lpfc_acqe_link_fault, acqe_fc); 5165 - phba->sli4_hba.link_state.logical_speed = 5161 + 5162 + if (bf_get(lpfc_acqe_fc_la_att_type, acqe_fc) == 5163 + LPFC_FC_LA_TYPE_LINK_DOWN) 5164 + phba->sli4_hba.link_state.logical_speed = 0; 5165 + else if (!phba->sli4_hba.conf_trunk) 5166 + phba->sli4_hba.link_state.logical_speed = 5166 5167 bf_get(lpfc_acqe_fc_la_llink_spd, acqe_fc) * 10; 5168 + 5167 5169 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 5168 5170 "2896 Async FC event - Speed:%dGBaud Topology:x%x " 5169 5171 "LA Type:x%x Port Type:%d Port Number:%d Logical speed:" ··· 6561 6551 spin_lock_init(&phba->sli4_hba.abts_nvmet_buf_list_lock); 6562 
6552 INIT_LIST_HEAD(&phba->sli4_hba.lpfc_abts_nvmet_ctx_list); 6563 6553 INIT_LIST_HEAD(&phba->sli4_hba.lpfc_nvmet_io_wait_list); 6554 + spin_lock_init(&phba->sli4_hba.t_active_list_lock); 6555 + INIT_LIST_HEAD(&phba->sli4_hba.t_active_ctx_list); 6564 6556 } 6565 6557 6566 6558 /* This abort list used by worker thread */ ··· 7672 7660 */ 7673 7661 shost = pci_get_drvdata(phba->pcidev); 7674 7662 shost->can_queue = phba->cfg_hba_queue_depth - 10; 7675 - if (phba->sli3_options & LPFC_SLI3_BG_ENABLED) 7676 - lpfc_setup_bg(phba, shost); 7677 7663 7678 7664 lpfc_host_attrib_init(shost); 7679 7665 ··· 8750 8740 lpfc_sli4_queue_create(struct lpfc_hba *phba) 8751 8741 { 8752 8742 struct lpfc_queue *qdesc; 8753 - int idx, eqidx, cpu; 8743 + int idx, cpu, eqcpu; 8754 8744 struct lpfc_sli4_hdw_queue *qp; 8745 + struct lpfc_vector_map_info *cpup; 8746 + struct lpfc_vector_map_info *eqcpup; 8755 8747 struct lpfc_eq_intr_info *eqi; 8756 8748 8757 8749 /* ··· 8838 8826 INIT_LIST_HEAD(&phba->sli4_hba.lpfc_wq_list); 8839 8827 8840 8828 /* Create HBA Event Queues (EQs) */ 8841 - for (idx = 0; idx < phba->cfg_hdw_queue; idx++) { 8842 - /* determine EQ affinity */ 8843 - eqidx = lpfc_find_eq_handle(phba, idx); 8844 - cpu = lpfc_find_cpu_handle(phba, eqidx, LPFC_FIND_BY_EQ); 8845 - /* 8846 - * If there are more Hardware Queues than available 8847 - * EQs, multiple Hardware Queues may share a common EQ. 8829 + for_each_present_cpu(cpu) { 8830 + /* We only want to create 1 EQ per vector, even though 8831 + * multiple CPUs might be using that vector. so only 8832 + * selects the CPUs that are LPFC_CPU_FIRST_IRQ. 
8848 8833 */ 8849 - if (idx >= phba->cfg_irq_chann) { 8850 - /* Share an existing EQ */ 8851 - phba->sli4_hba.hdwq[idx].hba_eq = 8852 - phba->sli4_hba.hdwq[eqidx].hba_eq; 8834 + cpup = &phba->sli4_hba.cpu_map[cpu]; 8835 + if (!(cpup->flag & LPFC_CPU_FIRST_IRQ)) 8853 8836 continue; 8854 - } 8855 - /* Create an EQ */ 8837 + 8838 + /* Get a ptr to the Hardware Queue associated with this CPU */ 8839 + qp = &phba->sli4_hba.hdwq[cpup->hdwq]; 8840 + 8841 + /* Allocate an EQ */ 8856 8842 qdesc = lpfc_sli4_queue_alloc(phba, LPFC_DEFAULT_PAGE_SIZE, 8857 8843 phba->sli4_hba.eq_esize, 8858 8844 phba->sli4_hba.eq_ecount, cpu); 8859 8845 if (!qdesc) { 8860 8846 lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 8861 - "0497 Failed allocate EQ (%d)\n", idx); 8847 + "0497 Failed allocate EQ (%d)\n", 8848 + cpup->hdwq); 8862 8849 goto out_error; 8863 8850 } 8864 8851 qdesc->qe_valid = 1; 8865 - qdesc->hdwq = idx; 8866 - 8867 - /* Save the CPU this EQ is affinitised to */ 8868 - qdesc->chann = cpu; 8869 - phba->sli4_hba.hdwq[idx].hba_eq = qdesc; 8852 + qdesc->hdwq = cpup->hdwq; 8853 + qdesc->chann = cpu; /* First CPU this EQ is affinitised to */ 8870 8854 qdesc->last_cpu = qdesc->chann; 8855 + 8856 + /* Save the allocated EQ in the Hardware Queue */ 8857 + qp->hba_eq = qdesc; 8858 + 8871 8859 eqi = per_cpu_ptr(phba->sli4_hba.eq_info, qdesc->last_cpu); 8872 8860 list_add(&qdesc->cpu_list, &eqi->list); 8873 8861 } 8874 8862 8863 + /* Now we need to populate the other Hardware Queues, that share 8864 + * an IRQ vector, with the associated EQ ptr. 
8865 + */ 8866 + for_each_present_cpu(cpu) { 8867 + cpup = &phba->sli4_hba.cpu_map[cpu]; 8868 + 8869 + /* Check for EQ already allocated in previous loop */ 8870 + if (cpup->flag & LPFC_CPU_FIRST_IRQ) 8871 + continue; 8872 + 8873 + /* Check for multiple CPUs per hdwq */ 8874 + qp = &phba->sli4_hba.hdwq[cpup->hdwq]; 8875 + if (qp->hba_eq) 8876 + continue; 8877 + 8878 + /* We need to share an EQ for this hdwq */ 8879 + eqcpu = lpfc_find_cpu_handle(phba, cpup->eq, LPFC_FIND_BY_EQ); 8880 + eqcpup = &phba->sli4_hba.cpu_map[eqcpu]; 8881 + qp->hba_eq = phba->sli4_hba.hdwq[eqcpup->hdwq].hba_eq; 8882 + } 8875 8883 8876 8884 /* Allocate SCSI SLI4 CQ/WQs */ 8877 8885 for (idx = 0; idx < phba->cfg_hdw_queue; idx++) { ··· 9154 9122 lpfc_sli4_release_hdwq(struct lpfc_hba *phba) 9155 9123 { 9156 9124 struct lpfc_sli4_hdw_queue *hdwq; 9125 + struct lpfc_queue *eq; 9157 9126 uint32_t idx; 9158 9127 9159 9128 hdwq = phba->sli4_hba.hdwq; 9160 - for (idx = 0; idx < phba->cfg_hdw_queue; idx++) { 9161 - if (idx < phba->cfg_irq_chann) 9162 - lpfc_sli4_queue_free(hdwq[idx].hba_eq); 9163 - hdwq[idx].hba_eq = NULL; 9164 9129 9130 + /* Loop thru all Hardware Queues */ 9131 + for (idx = 0; idx < phba->cfg_hdw_queue; idx++) { 9132 + /* Free the CQ/WQ corresponding to the Hardware Queue */ 9165 9133 lpfc_sli4_queue_free(hdwq[idx].fcp_cq); 9166 9134 lpfc_sli4_queue_free(hdwq[idx].nvme_cq); 9167 9135 lpfc_sli4_queue_free(hdwq[idx].fcp_wq); 9168 9136 lpfc_sli4_queue_free(hdwq[idx].nvme_wq); 9137 + hdwq[idx].hba_eq = NULL; 9169 9138 hdwq[idx].fcp_cq = NULL; 9170 9139 hdwq[idx].nvme_cq = NULL; 9171 9140 hdwq[idx].fcp_wq = NULL; 9172 9141 hdwq[idx].nvme_wq = NULL; 9142 + } 9143 + /* Loop thru all IRQ vectors */ 9144 + for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 9145 + /* Free the EQ corresponding to the IRQ vector */ 9146 + eq = phba->sli4_hba.hba_eq_hdl[idx].eq; 9147 + lpfc_sli4_queue_free(eq); 9148 + phba->sli4_hba.hba_eq_hdl[idx].eq = NULL; 9173 9149 } 9174 9150 } 9175 9151 ··· 9356 9316 
lpfc_setup_cq_lookup(struct lpfc_hba *phba) 9357 9317 { 9358 9318 struct lpfc_queue *eq, *childq; 9359 - struct lpfc_sli4_hdw_queue *qp; 9360 9319 int qidx; 9361 9320 9362 - qp = phba->sli4_hba.hdwq; 9363 9321 memset(phba->sli4_hba.cq_lookup, 0, 9364 9322 (sizeof(struct lpfc_queue *) * (phba->sli4_hba.cq_max + 1))); 9323 + /* Loop thru all IRQ vectors */ 9365 9324 for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) { 9366 - eq = qp[qidx].hba_eq; 9325 + /* Get the EQ corresponding to the IRQ vector */ 9326 + eq = phba->sli4_hba.hba_eq_hdl[qidx].eq; 9367 9327 if (!eq) 9368 9328 continue; 9329 + /* Loop through all CQs associated with that EQ */ 9369 9330 list_for_each_entry(childq, &eq->child_list, list) { 9370 9331 if (childq->queue_id > phba->sli4_hba.cq_max) 9371 9332 continue; ··· 9395 9354 { 9396 9355 uint32_t shdr_status, shdr_add_status; 9397 9356 union lpfc_sli4_cfg_shdr *shdr; 9357 + struct lpfc_vector_map_info *cpup; 9398 9358 struct lpfc_sli4_hdw_queue *qp; 9399 9359 LPFC_MBOXQ_t *mboxq; 9400 - int qidx; 9360 + int qidx, cpu; 9401 9361 uint32_t length, usdelay; 9402 9362 int rc = -ENOMEM; 9403 9363 ··· 9459 9417 rc = -ENOMEM; 9460 9418 goto out_error; 9461 9419 } 9420 + 9421 + /* Loop thru all IRQ vectors */ 9462 9422 for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) { 9463 - if (!qp[qidx].hba_eq) { 9464 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 9465 - "0522 Fast-path EQ (%d) not " 9466 - "allocated\n", qidx); 9467 - rc = -ENOMEM; 9468 - goto out_destroy; 9423 + /* Create HBA Event Queues (EQs) in order */ 9424 + for_each_present_cpu(cpu) { 9425 + cpup = &phba->sli4_hba.cpu_map[cpu]; 9426 + 9427 + /* Look for the CPU thats using that vector with 9428 + * LPFC_CPU_FIRST_IRQ set. 
9429 + */ 9430 + if (!(cpup->flag & LPFC_CPU_FIRST_IRQ)) 9431 + continue; 9432 + if (qidx != cpup->eq) 9433 + continue; 9434 + 9435 + /* Create an EQ for that vector */ 9436 + rc = lpfc_eq_create(phba, qp[cpup->hdwq].hba_eq, 9437 + phba->cfg_fcp_imax); 9438 + if (rc) { 9439 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 9440 + "0523 Failed setup of fast-path" 9441 + " EQ (%d), rc = 0x%x\n", 9442 + cpup->eq, (uint32_t)rc); 9443 + goto out_destroy; 9444 + } 9445 + 9446 + /* Save the EQ for that vector in the hba_eq_hdl */ 9447 + phba->sli4_hba.hba_eq_hdl[cpup->eq].eq = 9448 + qp[cpup->hdwq].hba_eq; 9449 + 9450 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 9451 + "2584 HBA EQ setup: queue[%d]-id=%d\n", 9452 + cpup->eq, 9453 + qp[cpup->hdwq].hba_eq->queue_id); 9469 9454 } 9470 - rc = lpfc_eq_create(phba, qp[qidx].hba_eq, 9471 - phba->cfg_fcp_imax); 9472 - if (rc) { 9473 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 9474 - "0523 Failed setup of fast-path EQ " 9475 - "(%d), rc = 0x%x\n", qidx, 9476 - (uint32_t)rc); 9477 - goto out_destroy; 9478 - } 9479 - lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 9480 - "2584 HBA EQ setup: queue[%d]-id=%d\n", qidx, 9481 - qp[qidx].hba_eq->queue_id); 9482 9455 } 9483 9456 9457 + /* Loop thru all Hardware Queues */ 9484 9458 if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) { 9485 9459 for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) { 9460 + cpu = lpfc_find_cpu_handle(phba, qidx, 9461 + LPFC_FIND_BY_HDWQ); 9462 + cpup = &phba->sli4_hba.cpu_map[cpu]; 9463 + 9464 + /* Create the CQ/WQ corresponding to the 9465 + * Hardware Queue 9466 + */ 9486 9467 rc = lpfc_create_wq_cq(phba, 9487 - qp[qidx].hba_eq, 9468 + phba->sli4_hba.hdwq[cpup->hdwq].hba_eq, 9488 9469 qp[qidx].nvme_cq, 9489 9470 qp[qidx].nvme_wq, 9490 9471 &phba->sli4_hba.hdwq[qidx].nvme_cq_map, ··· 9523 9458 } 9524 9459 9525 9460 for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) { 9461 + cpu = lpfc_find_cpu_handle(phba, qidx, LPFC_FIND_BY_HDWQ); 9462 + cpup = 
&phba->sli4_hba.cpu_map[cpu]; 9463 + 9464 + /* Create the CQ/WQ corresponding to the Hardware Queue */ 9526 9465 rc = lpfc_create_wq_cq(phba, 9527 - qp[qidx].hba_eq, 9466 + phba->sli4_hba.hdwq[cpup->hdwq].hba_eq, 9528 9467 qp[qidx].fcp_cq, 9529 9468 qp[qidx].fcp_wq, 9530 9469 &phba->sli4_hba.hdwq[qidx].fcp_cq_map, ··· 9780 9711 lpfc_sli4_queue_unset(struct lpfc_hba *phba) 9781 9712 { 9782 9713 struct lpfc_sli4_hdw_queue *qp; 9714 + struct lpfc_queue *eq; 9783 9715 int qidx; 9784 9716 9785 9717 /* Unset mailbox command work queue */ ··· 9832 9762 9833 9763 /* Unset fast-path SLI4 queues */ 9834 9764 if (phba->sli4_hba.hdwq) { 9765 + /* Loop thru all Hardware Queues */ 9835 9766 for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) { 9767 + /* Destroy the CQ/WQ corresponding to Hardware Queue */ 9836 9768 qp = &phba->sli4_hba.hdwq[qidx]; 9837 9769 lpfc_wq_destroy(phba, qp->fcp_wq); 9838 9770 lpfc_wq_destroy(phba, qp->nvme_wq); 9839 9771 lpfc_cq_destroy(phba, qp->fcp_cq); 9840 9772 lpfc_cq_destroy(phba, qp->nvme_cq); 9841 - if (qidx < phba->cfg_irq_chann) 9842 - lpfc_eq_destroy(phba, qp->hba_eq); 9773 + } 9774 + /* Loop thru all IRQ vectors */ 9775 + for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) { 9776 + /* Destroy the EQ corresponding to the IRQ vector */ 9777 + eq = phba->sli4_hba.hba_eq_hdl[qidx].eq; 9778 + lpfc_eq_destroy(phba, eq); 9843 9779 } 9844 9780 } 9845 9781 ··· 10635 10559 } 10636 10560 10637 10561 /** 10638 - * lpfc_find_cpu_handle - Find the CPU that corresponds to the specified EQ 10562 + * lpfc_find_cpu_handle - Find the CPU that corresponds to the specified Queue 10639 10563 * @phba: pointer to lpfc hba data structure. 
10640 10564 * @id: EQ vector index or Hardware Queue index 10641 10565 * @match: LPFC_FIND_BY_EQ = match by EQ 10642 10566 * LPFC_FIND_BY_HDWQ = match by Hardware Queue 10567 + * Return the CPU that matches the selection criteria 10643 10568 */ 10644 10569 static uint16_t 10645 10570 lpfc_find_cpu_handle(struct lpfc_hba *phba, uint16_t id, int match) ··· 10648 10571 struct lpfc_vector_map_info *cpup; 10649 10572 int cpu; 10650 10573 10651 - /* Find the desired phys_id for the specified EQ */ 10574 + /* Loop through all CPUs */ 10652 10575 for_each_present_cpu(cpu) { 10653 10576 cpup = &phba->sli4_hba.cpu_map[cpu]; 10577 + 10578 + /* If we are matching by EQ, there may be multiple CPUs using 10579 + * using the same vector, so select the one with 10580 + * LPFC_CPU_FIRST_IRQ set. 10581 + */ 10654 10582 if ((match == LPFC_FIND_BY_EQ) && 10583 + (cpup->flag & LPFC_CPU_FIRST_IRQ) && 10655 10584 (cpup->irq != LPFC_VECTOR_MAP_EMPTY) && 10656 10585 (cpup->eq == id)) 10657 10586 return cpu; 10587 + 10588 + /* If matching by HDWQ, select the first CPU that matches */ 10658 10589 if ((match == LPFC_FIND_BY_HDWQ) && (cpup->hdwq == id)) 10659 10590 return cpu; 10660 - } 10661 - return 0; 10662 - } 10663 - 10664 - /** 10665 - * lpfc_find_eq_handle - Find the EQ that corresponds to the specified 10666 - * Hardware Queue 10667 - * @phba: pointer to lpfc hba data structure. 
10668 - * @hdwq: Hardware Queue index 10669 - */ 10670 - static uint16_t 10671 - lpfc_find_eq_handle(struct lpfc_hba *phba, uint16_t hdwq) 10672 - { 10673 - struct lpfc_vector_map_info *cpup; 10674 - int cpu; 10675 - 10676 - /* Find the desired phys_id for the specified EQ */ 10677 - for_each_present_cpu(cpu) { 10678 - cpup = &phba->sli4_hba.cpu_map[cpu]; 10679 - if (cpup->hdwq == hdwq) 10680 - return cpup->eq; 10681 10591 } 10682 10592 return 0; 10683 10593 } ··· 10709 10645 static void 10710 10646 lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors) 10711 10647 { 10712 - int i, cpu, idx; 10648 + int i, cpu, idx, new_cpu, start_cpu, first_cpu; 10713 10649 int max_phys_id, min_phys_id; 10714 10650 int max_core_id, min_core_id; 10715 10651 struct lpfc_vector_map_info *cpup; 10652 + struct lpfc_vector_map_info *new_cpup; 10716 10653 const struct cpumask *maskp; 10717 10654 #ifdef CONFIG_X86 10718 10655 struct cpuinfo_x86 *cpuinfo; 10719 10656 #endif 10720 10657 10721 10658 /* Init cpu_map array */ 10722 - memset(phba->sli4_hba.cpu_map, 0xff, 10723 - (sizeof(struct lpfc_vector_map_info) * 10724 - phba->sli4_hba.num_possible_cpu)); 10659 + for_each_possible_cpu(cpu) { 10660 + cpup = &phba->sli4_hba.cpu_map[cpu]; 10661 + cpup->phys_id = LPFC_VECTOR_MAP_EMPTY; 10662 + cpup->core_id = LPFC_VECTOR_MAP_EMPTY; 10663 + cpup->hdwq = LPFC_VECTOR_MAP_EMPTY; 10664 + cpup->eq = LPFC_VECTOR_MAP_EMPTY; 10665 + cpup->irq = LPFC_VECTOR_MAP_EMPTY; 10666 + cpup->flag = 0; 10667 + } 10725 10668 10726 10669 max_phys_id = 0; 10727 - min_phys_id = 0xffff; 10670 + min_phys_id = LPFC_VECTOR_MAP_EMPTY; 10728 10671 max_core_id = 0; 10729 - min_core_id = 0xffff; 10672 + min_core_id = LPFC_VECTOR_MAP_EMPTY; 10730 10673 10731 10674 /* Update CPU map with physical id and core id of each CPU */ 10732 10675 for_each_present_cpu(cpu) { ··· 10742 10671 cpuinfo = &cpu_data(cpu); 10743 10672 cpup->phys_id = cpuinfo->phys_proc_id; 10744 10673 cpup->core_id = cpuinfo->cpu_core_id; 10745 - 
cpup->hyper = lpfc_find_hyper(phba, cpu, 10746 - cpup->phys_id, cpup->core_id); 10674 + if (lpfc_find_hyper(phba, cpu, cpup->phys_id, cpup->core_id)) 10675 + cpup->flag |= LPFC_CPU_MAP_HYPER; 10747 10676 #else 10748 10677 /* No distinction between CPUs for other platforms */ 10749 10678 cpup->phys_id = 0; 10750 10679 cpup->core_id = cpu; 10751 - cpup->hyper = 0; 10752 10680 #endif 10753 10681 10754 10682 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, ··· 10773 10703 eqi->icnt = 0; 10774 10704 } 10775 10705 10706 + /* This loop sets up all CPUs that are affinitized with a 10707 + * irq vector assigned to the driver. All affinitized CPUs 10708 + * will get a link to that vectors IRQ and EQ. 10709 + */ 10776 10710 for (idx = 0; idx < phba->cfg_irq_chann; idx++) { 10711 + /* Get a CPU mask for all CPUs affinitized to this vector */ 10777 10712 maskp = pci_irq_get_affinity(phba->pcidev, idx); 10778 10713 if (!maskp) 10779 10714 continue; 10780 10715 10716 + i = 0; 10717 + /* Loop through all CPUs associated with vector idx */ 10781 10718 for_each_cpu_and(cpu, maskp, cpu_present_mask) { 10719 + /* Set the EQ index and IRQ for that vector */ 10782 10720 cpup = &phba->sli4_hba.cpu_map[cpu]; 10783 10721 cpup->eq = idx; 10784 - cpup->hdwq = idx; 10785 10722 cpup->irq = pci_irq_vector(phba->pcidev, idx); 10786 10723 10787 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 10724 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10788 10725 "3336 Set Affinity: CPU %d " 10789 - "hdwq %d irq %d\n", 10790 - cpu, cpup->hdwq, cpup->irq); 10726 + "irq %d eq %d\n", 10727 + cpu, cpup->irq, cpup->eq); 10728 + 10729 + /* If this is the first CPU thats assigned to this 10730 + * vector, set LPFC_CPU_FIRST_IRQ. 10731 + */ 10732 + if (!i) 10733 + cpup->flag |= LPFC_CPU_FIRST_IRQ; 10734 + i++; 10791 10735 } 10792 10736 } 10737 + 10738 + /* After looking at each irq vector assigned to this pcidev, its 10739 + * possible to see that not ALL CPUs have been accounted for. 
10740 + * Next we will set any unassigned (unaffinitized) cpu map 10741 + * entries to a IRQ on the same phys_id. 10742 + */ 10743 + first_cpu = cpumask_first(cpu_present_mask); 10744 + start_cpu = first_cpu; 10745 + 10746 + for_each_present_cpu(cpu) { 10747 + cpup = &phba->sli4_hba.cpu_map[cpu]; 10748 + 10749 + /* Is this CPU entry unassigned */ 10750 + if (cpup->eq == LPFC_VECTOR_MAP_EMPTY) { 10751 + /* Mark CPU as IRQ not assigned by the kernel */ 10752 + cpup->flag |= LPFC_CPU_MAP_UNASSIGN; 10753 + 10754 + /* If so, find a new_cpup thats on the the SAME 10755 + * phys_id as cpup. start_cpu will start where we 10756 + * left off so all unassigned entries don't get assgined 10757 + * the IRQ of the first entry. 10758 + */ 10759 + new_cpu = start_cpu; 10760 + for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) { 10761 + new_cpup = &phba->sli4_hba.cpu_map[new_cpu]; 10762 + if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) && 10763 + (new_cpup->irq != LPFC_VECTOR_MAP_EMPTY) && 10764 + (new_cpup->phys_id == cpup->phys_id)) 10765 + goto found_same; 10766 + new_cpu = cpumask_next( 10767 + new_cpu, cpu_present_mask); 10768 + if (new_cpu == nr_cpumask_bits) 10769 + new_cpu = first_cpu; 10770 + } 10771 + /* At this point, we leave the CPU as unassigned */ 10772 + continue; 10773 + found_same: 10774 + /* We found a matching phys_id, so copy the IRQ info */ 10775 + cpup->eq = new_cpup->eq; 10776 + cpup->irq = new_cpup->irq; 10777 + 10778 + /* Bump start_cpu to the next slot to minmize the 10779 + * chance of having multiple unassigned CPU entries 10780 + * selecting the same IRQ. 
10781 + */ 10782 + start_cpu = cpumask_next(new_cpu, cpu_present_mask); 10783 + if (start_cpu == nr_cpumask_bits) 10784 + start_cpu = first_cpu; 10785 + 10786 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10787 + "3337 Set Affinity: CPU %d " 10788 + "irq %d from id %d same " 10789 + "phys_id (%d)\n", 10790 + cpu, cpup->irq, new_cpu, cpup->phys_id); 10791 + } 10792 + } 10793 + 10794 + /* Set any unassigned cpu map entries to a IRQ on any phys_id */ 10795 + start_cpu = first_cpu; 10796 + 10797 + for_each_present_cpu(cpu) { 10798 + cpup = &phba->sli4_hba.cpu_map[cpu]; 10799 + 10800 + /* Is this entry unassigned */ 10801 + if (cpup->eq == LPFC_VECTOR_MAP_EMPTY) { 10802 + /* Mark it as IRQ not assigned by the kernel */ 10803 + cpup->flag |= LPFC_CPU_MAP_UNASSIGN; 10804 + 10805 + /* If so, find a new_cpup thats on ANY phys_id 10806 + * as the cpup. start_cpu will start where we 10807 + * left off so all unassigned entries don't get 10808 + * assigned the IRQ of the first entry. 10809 + */ 10810 + new_cpu = start_cpu; 10811 + for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) { 10812 + new_cpup = &phba->sli4_hba.cpu_map[new_cpu]; 10813 + if (!(new_cpup->flag & LPFC_CPU_MAP_UNASSIGN) && 10814 + (new_cpup->irq != LPFC_VECTOR_MAP_EMPTY)) 10815 + goto found_any; 10816 + new_cpu = cpumask_next( 10817 + new_cpu, cpu_present_mask); 10818 + if (new_cpu == nr_cpumask_bits) 10819 + new_cpu = first_cpu; 10820 + } 10821 + /* We should never leave an entry unassigned */ 10822 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 10823 + "3339 Set Affinity: CPU %d " 10824 + "irq %d UNASSIGNED\n", 10825 + cpup->hdwq, cpup->irq); 10826 + continue; 10827 + found_any: 10828 + /* We found an available entry, copy the IRQ info */ 10829 + cpup->eq = new_cpup->eq; 10830 + cpup->irq = new_cpup->irq; 10831 + 10832 + /* Bump start_cpu to the next slot to minmize the 10833 + * chance of having multiple unassigned CPU entries 10834 + * selecting the same IRQ. 
10835 + */ 10836 + start_cpu = cpumask_next(new_cpu, cpu_present_mask); 10837 + if (start_cpu == nr_cpumask_bits) 10838 + start_cpu = first_cpu; 10839 + 10840 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 10841 + "3338 Set Affinity: CPU %d " 10842 + "irq %d from id %d (%d/%d)\n", 10843 + cpu, cpup->irq, new_cpu, 10844 + new_cpup->phys_id, new_cpup->core_id); 10845 + } 10846 + } 10847 + 10848 + /* Finally we need to associate a hdwq with each cpu_map entry 10849 + * This will be 1 to 1 - hdwq to cpu, unless there are fewer 10850 + * hardware queues than CPUs. In that case we will just round-robin 10851 + * the available hardware queues as they get assigned to CPUs. 10852 + */ 10853 + idx = 0; 10854 + start_cpu = 0; 10855 + for_each_present_cpu(cpu) { 10856 + cpup = &phba->sli4_hba.cpu_map[cpu]; 10857 + if (idx >= phba->cfg_hdw_queue) { 10858 + /* We need to reuse a Hardware Queue for another CPU, 10859 + * so be smart about it and pick one that has its 10860 + * IRQ/EQ mapped to the same phys_id (CPU package) 10861 + * and core_id. 10862 + */ 10863 + new_cpu = start_cpu; 10864 + for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) { 10865 + new_cpup = &phba->sli4_hba.cpu_map[new_cpu]; 10866 + if ((new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) && 10867 + (new_cpup->phys_id == cpup->phys_id) && 10868 + (new_cpup->core_id == cpup->core_id)) 10869 + goto found_hdwq; 10870 + new_cpu = cpumask_next( 10871 + new_cpu, cpu_present_mask); 10872 + if (new_cpu == nr_cpumask_bits) 10873 + new_cpu = first_cpu; 10874 + } 10875 + 10876 + /* If we can't match both phys_id and core_id, 10877 + * settle for just a phys_id match. 
10878 + */ 10879 + new_cpu = start_cpu; 10880 + for (i = 0; i < phba->sli4_hba.num_present_cpu; i++) { 10881 + new_cpup = &phba->sli4_hba.cpu_map[new_cpu]; 10882 + if ((new_cpup->hdwq != LPFC_VECTOR_MAP_EMPTY) && 10883 + (new_cpup->phys_id == cpup->phys_id)) 10884 + goto found_hdwq; 10885 + new_cpu = cpumask_next( 10886 + new_cpu, cpu_present_mask); 10887 + if (new_cpu == nr_cpumask_bits) 10888 + new_cpu = first_cpu; 10889 + } 10890 + 10891 + /* Otherwise just round robin on cfg_hdw_queue */ 10892 + cpup->hdwq = idx % phba->cfg_hdw_queue; 10893 + goto logit; 10894 + found_hdwq: 10895 + /* We found an available entry, copy the hdwq info */ 10896 + start_cpu = cpumask_next(new_cpu, cpu_present_mask); 10897 + if (start_cpu == nr_cpumask_bits) 10898 + start_cpu = first_cpu; 10899 + cpup->hdwq = new_cpup->hdwq; 10900 + } else { 10901 + /* 1 to 1, CPU to hdwq */ 10902 + cpup->hdwq = idx; 10903 + } 10904 + logit: 10905 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT, 10906 + "3335 Set Affinity: CPU %d (phys %d core %d): " 10907 + "hdwq %d eq %d irq %d flg x%x\n", 10908 + cpu, cpup->phys_id, cpup->core_id, 10909 + cpup->hdwq, cpup->eq, cpup->irq, cpup->flag); 10910 + idx++; 10911 + } 10912 + 10913 + /* The cpu_map array will be used later during initialization 10914 + * when EQ / CQ / WQs are allocated and configured. 
10915 + */ 10793 10916 return; 10794 10917 } 10795 10918 ··· 11594 11331 mbx_sli4_parameters); 11595 11332 phba->sli4_hba.extents_in_use = bf_get(cfg_ext, mbx_sli4_parameters); 11596 11333 phba->sli4_hba.rpi_hdrs_in_use = bf_get(cfg_hdrr, mbx_sli4_parameters); 11597 - phba->nvme_support = (bf_get(cfg_nvme, mbx_sli4_parameters) && 11598 - bf_get(cfg_xib, mbx_sli4_parameters)); 11599 11334 11600 - if ((phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP) || 11601 - !phba->nvme_support) { 11602 - phba->nvme_support = 0; 11603 - phba->nvmet_support = 0; 11604 - phba->cfg_nvmet_mrq = 0; 11605 - lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME, 11606 - "6101 Disabling NVME support: " 11607 - "Not supported by firmware: %d %d\n", 11608 - bf_get(cfg_nvme, mbx_sli4_parameters), 11609 - bf_get(cfg_xib, mbx_sli4_parameters)); 11335 + /* Check for firmware nvme support */ 11336 + rc = (bf_get(cfg_nvme, mbx_sli4_parameters) && 11337 + bf_get(cfg_xib, mbx_sli4_parameters)); 11610 11338 11611 - /* If firmware doesn't support NVME, just use SCSI support */ 11612 - if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP)) 11613 - return -ENODEV; 11614 - phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP; 11339 + if (rc) { 11340 + /* Save this to indicate the Firmware supports NVME */ 11341 + sli4_params->nvme = 1; 11342 + 11343 + /* Firmware NVME support, check driver FC4 NVME support */ 11344 + if (phba->cfg_enable_fc4_type == LPFC_ENABLE_FCP) { 11345 + lpfc_printf_log(phba, KERN_INFO, LOG_INIT | LOG_NVME, 11346 + "6133 Disabling NVME support: " 11347 + "FC4 type not supported: x%x\n", 11348 + phba->cfg_enable_fc4_type); 11349 + goto fcponly; 11350 + } 11351 + } else { 11352 + /* No firmware NVME support, check driver FC4 NVME support */ 11353 + sli4_params->nvme = 0; 11354 + if (phba->cfg_enable_fc4_type & LPFC_ENABLE_NVME) { 11355 + lpfc_printf_log(phba, KERN_ERR, LOG_INIT | LOG_NVME, 11356 + "6101 Disabling NVME support: Not " 11357 + "supported by firmware (%d %d) x%x\n", 11358 + 
bf_get(cfg_nvme, mbx_sli4_parameters), 11359 + bf_get(cfg_xib, mbx_sli4_parameters), 11360 + phba->cfg_enable_fc4_type); 11361 + fcponly: 11362 + phba->nvme_support = 0; 11363 + phba->nvmet_support = 0; 11364 + phba->cfg_nvmet_mrq = 0; 11365 + 11366 + /* If no FC4 type support, move to just SCSI support */ 11367 + if (!(phba->cfg_enable_fc4_type & LPFC_ENABLE_FCP)) 11368 + return -ENODEV; 11369 + phba->cfg_enable_fc4_type = LPFC_ENABLE_FCP; 11370 + } 11615 11371 } 11616 11372 11617 11373 /* Only embed PBDE for if_type 6, PBDE support requires xib be set */
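The affinity code in lpfc_init.c above leans on one recurring idiom: walk the present-CPU set from a moving start point, wrap when the end is reached, and bump the start past each hit so unassigned entries spread across IRQs instead of all landing on the first match. A minimal user-space sketch of that idiom, with plain arrays standing in for cpumasks and `-1` standing in for `LPFC_VECTOR_MAP_EMPTY` (the names and values here are illustrative, not the driver's):

```c
#include <assert.h>

#define MAP_EMPTY (-1)  /* stand-in for LPFC_VECTOR_MAP_EMPTY */

/* Starting at *start, find the first CPU whose phys_id matches and
 * whose IRQ is assigned, wrapping past the end of the array; on a hit,
 * bump *start past it so subsequent searches pick different IRQs. */
static int find_same_phys(const int *phys_id, const int *irq,
                          int ncpu, int want_phys, int *start)
{
    int i, cpu = *start;

    for (i = 0; i < ncpu; i++) {
        if (irq[cpu] != MAP_EMPTY && phys_id[cpu] == want_phys) {
            *start = (cpu + 1) % ncpu;  /* next search starts past the hit */
            return cpu;
        }
        cpu = (cpu + 1) % ncpu;  /* wrap, like cpumask_next + first_cpu reset */
    }
    return -1;  /* no candidate; the entry stays unassigned */
}
```

The same shape appears three times in the diff (same phys_id, any phys_id, then hdwq reuse), differing only in the match predicate.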
+13 -3
drivers/scsi/lpfc/lpfc_nvme.c
··· 2143 2143 struct completion *lport_unreg_cmp) 2144 2144 { 2145 2145 u32 wait_tmo; 2146 - int ret; 2146 + int ret, i, pending = 0; 2147 + struct lpfc_sli_ring *pring; 2148 + struct lpfc_hba *phba = vport->phba; 2147 2149 2148 2150 /* Host transport has to clean up and confirm requiring an indefinite 2149 2151 * wait. Print a message if a 10 second wait expires and renew the ··· 2155 2153 while (true) { 2156 2154 ret = wait_for_completion_timeout(lport_unreg_cmp, wait_tmo); 2157 2155 if (unlikely(!ret)) { 2156 + pending = 0; 2157 + for (i = 0; i < phba->cfg_hdw_queue; i++) { 2158 + pring = phba->sli4_hba.hdwq[i].nvme_wq->pring; 2159 + if (!pring) 2160 + continue; 2161 + if (pring->txcmplq_cnt) 2162 + pending += pring->txcmplq_cnt; 2163 + } 2158 2164 lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_IOERR, 2159 2165 "6176 Lport %p Localport %p wait " 2160 - "timed out. Renewing.\n", 2161 - lport, vport->localport); 2166 + "timed out. Pending %d. Renewing.\n", 2167 + lport, vport->localport, pending); 2162 2168 continue; 2163 2169 } 2164 2170 break;
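The new unreg-wait diagnostic above simply totals outstanding commands across the hardware queues, skipping queues whose ring was never set up. A hedged user-space sketch of that accounting (the `ring` struct here is a stand-in for the driver's `lpfc_sli_ring`, not its real layout):

```c
#include <assert.h>
#include <stddef.h>

struct ring {
    int txcmplq_cnt;  /* commands still on the tx-complete queue */
};

/* Walk the per-hardware-queue rings, skip queues with no ring
 * (not set up), and total the commands still outstanding. */
static int count_pending(struct ring **rings, int nq)
{
    int i, pending = 0;

    for (i = 0; i < nq; i++) {
        if (!rings[i])  /* queue not set up: nothing outstanding */
            continue;
        pending += rings[i]->txcmplq_cnt;
    }
    return pending;
}
```

In the driver the count is recomputed on every 10-second timeout so each renewal log shows how much work the transport is still draining.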
+279 -53
drivers/scsi/lpfc/lpfc_nvmet.c
··· 220 220 /* Word 12, 13, 14, 15 - is zero */ 221 221 } 222 222 223 + #if (IS_ENABLED(CONFIG_NVME_TARGET_FC)) 224 + static struct lpfc_nvmet_rcv_ctx * 225 + lpfc_nvmet_get_ctx_for_xri(struct lpfc_hba *phba, u16 xri) 226 + { 227 + struct lpfc_nvmet_rcv_ctx *ctxp; 228 + unsigned long iflag; 229 + bool found = false; 230 + 231 + spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag); 232 + list_for_each_entry(ctxp, &phba->sli4_hba.t_active_ctx_list, list) { 233 + if (ctxp->ctxbuf->sglq->sli4_xritag != xri) 234 + continue; 235 + 236 + found = true; 237 + break; 238 + } 239 + spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag); 240 + if (found) 241 + return ctxp; 242 + 243 + return NULL; 244 + } 245 + 246 + static struct lpfc_nvmet_rcv_ctx * 247 + lpfc_nvmet_get_ctx_for_oxid(struct lpfc_hba *phba, u16 oxid, u32 sid) 248 + { 249 + struct lpfc_nvmet_rcv_ctx *ctxp; 250 + unsigned long iflag; 251 + bool found = false; 252 + 253 + spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag); 254 + list_for_each_entry(ctxp, &phba->sli4_hba.t_active_ctx_list, list) { 255 + if (ctxp->oxid != oxid || ctxp->sid != sid) 256 + continue; 257 + 258 + found = true; 259 + break; 260 + } 261 + spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag); 262 + if (found) 263 + return ctxp; 264 + 265 + return NULL; 266 + } 267 + #endif 268 + 223 269 static void 224 270 lpfc_nvmet_defer_release(struct lpfc_hba *phba, struct lpfc_nvmet_rcv_ctx *ctxp) 225 271 { 226 272 lockdep_assert_held(&ctxp->ctxlock); 227 273 228 274 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 229 - "6313 NVMET Defer ctx release xri x%x flg x%x\n", 275 + "6313 NVMET Defer ctx release oxid x%x flg x%x\n", 230 276 ctxp->oxid, ctxp->flag); 231 277 232 278 if (ctxp->flag & LPFC_NVMET_CTX_RLS) 233 279 return; 234 280 235 281 ctxp->flag |= LPFC_NVMET_CTX_RLS; 282 + spin_lock(&phba->sli4_hba.t_active_list_lock); 283 + list_del(&ctxp->list); 284 + spin_unlock(&phba->sli4_hba.t_active_list_lock); 
236 285 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 237 286 list_add_tail(&ctxp->list, &phba->sli4_hba.lpfc_abts_nvmet_ctx_list); 238 287 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); ··· 392 343 } 393 344 394 345 if (ctxp->rqb_buffer) { 395 - nvmebuf = ctxp->rqb_buffer; 396 346 spin_lock_irqsave(&ctxp->ctxlock, iflag); 397 - ctxp->rqb_buffer = NULL; 398 - if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) { 399 - ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ; 400 - spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 401 - nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); 347 + nvmebuf = ctxp->rqb_buffer; 348 + /* check if freed in another path whilst acquiring lock */ 349 + if (nvmebuf) { 350 + ctxp->rqb_buffer = NULL; 351 + if (ctxp->flag & LPFC_NVMET_CTX_REUSE_WQ) { 352 + ctxp->flag &= ~LPFC_NVMET_CTX_REUSE_WQ; 353 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 354 + nvmebuf->hrq->rqbp->rqb_free_buffer(phba, 355 + nvmebuf); 356 + } else { 357 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 358 + /* repost */ 359 + lpfc_rq_buf_free(phba, &nvmebuf->hbuf); 360 + } 402 361 } else { 403 362 spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 404 - lpfc_rq_buf_free(phba, &nvmebuf->hbuf); /* repost */ 405 363 } 406 364 } 407 365 ctxp->state = LPFC_NVMET_STE_FREE; ··· 444 388 spin_lock_init(&ctxp->ctxlock); 445 389 446 390 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 447 - if (ctxp->ts_cmd_nvme) { 448 - ctxp->ts_cmd_nvme = ktime_get_ns(); 391 + /* NOTE: isr time stamp is stale when context is re-assigned*/ 392 + if (ctxp->ts_isr_cmd) { 393 + ctxp->ts_cmd_nvme = 0; 449 394 ctxp->ts_nvme_data = 0; 450 395 ctxp->ts_data_wqput = 0; 451 396 ctxp->ts_isr_data = 0; ··· 459 402 #endif 460 403 atomic_inc(&tgtp->rcv_fcp_cmd_in); 461 404 462 - /* flag new work queued, replacement buffer has already 463 - * been reposted 464 - */ 405 + /* Indicate that a replacement buffer has been posted */ 465 406 spin_lock_irqsave(&ctxp->ctxlock, iflag); 466 407 ctxp->flag |= LPFC_NVMET_CTX_REUSE_WQ; 467 408 
spin_unlock_irqrestore(&ctxp->ctxlock, iflag); ··· 488 433 * Use the CPU context list, from the MRQ the IO was received on 489 434 * (ctxp->idx), to save context structure. 490 435 */ 436 + spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag); 437 + list_del_init(&ctxp->list); 438 + spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag); 491 439 cpu = raw_smp_processor_id(); 492 440 infop = lpfc_get_ctx_list(phba, cpu, ctxp->idx); 493 441 spin_lock_irqsave(&infop->nvmet_ctx_list_lock, iflag); ··· 758 700 } 759 701 760 702 lpfc_printf_log(phba, KERN_INFO, logerr, 761 - "6315 IO Error Cmpl xri x%x: %x/%x XBUSY:x%x\n", 762 - ctxp->oxid, status, result, ctxp->flag); 703 + "6315 IO Error Cmpl oxid: x%x xri: x%x %x/%x " 704 + "XBUSY:x%x\n", 705 + ctxp->oxid, ctxp->ctxbuf->sglq->sli4_xritag, 706 + status, result, ctxp->flag); 763 707 764 708 } else { 765 709 rsp->fcp_error = NVME_SC_SUCCESS; ··· 909 849 * before freeing ctxp and iocbq. 910 850 */ 911 851 lpfc_in_buf_free(phba, &nvmebuf->dbuf); 912 - ctxp->rqb_buffer = 0; 913 852 atomic_inc(&nvmep->xmt_ls_rsp); 914 853 return 0; 915 854 } ··· 981 922 (ctxp->state == LPFC_NVMET_STE_ABORT)) { 982 923 atomic_inc(&lpfc_nvmep->xmt_fcp_drop); 983 924 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 984 - "6102 IO xri x%x aborted\n", 925 + "6102 IO oxid x%x aborted\n", 985 926 ctxp->oxid); 986 927 rc = -ENXIO; 987 928 goto aerr; ··· 1081 1022 ctxp->hdwq = &phba->sli4_hba.hdwq[0]; 1082 1023 1083 1024 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1084 - "6103 NVMET Abort op: oxri x%x flg x%x ste %d\n", 1025 + "6103 NVMET Abort op: oxid x%x flg x%x ste %d\n", 1085 1026 ctxp->oxid, ctxp->flag, ctxp->state); 1086 1027 1087 1028 lpfc_nvmeio_data(phba, "NVMET FCP ABRT: xri x%x flg x%x ste x%x\n", ··· 1094 1035 /* Since iaab/iaar are NOT set, we need to check 1095 1036 * if the firmware is in process of aborting IO 1096 1037 */ 1097 - if (ctxp->flag & LPFC_NVMET_XBUSY) { 1038 + if (ctxp->flag & (LPFC_NVMET_XBUSY | 
LPFC_NVMET_ABORT_OP)) { 1098 1039 spin_unlock_irqrestore(&ctxp->ctxlock, flags); 1099 1040 return; 1100 1041 } ··· 1157 1098 ctxp->state, aborting); 1158 1099 1159 1100 atomic_inc(&lpfc_nvmep->xmt_fcp_release); 1101 + ctxp->flag &= ~LPFC_NVMET_TNOTIFY; 1160 1102 1161 1103 if (aborting) 1162 1104 return; ··· 1182 1122 1183 1123 if (!nvmebuf) { 1184 1124 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR, 1185 - "6425 Defer rcv: no buffer xri x%x: " 1125 + "6425 Defer rcv: no buffer oxid x%x: " 1186 1126 "flg %x ste %x\n", 1187 1127 ctxp->oxid, ctxp->flag, ctxp->state); 1188 1128 return; ··· 1574 1514 lpfc_sli4_nvmet_xri_aborted(struct lpfc_hba *phba, 1575 1515 struct sli4_wcqe_xri_aborted *axri) 1576 1516 { 1517 + #if (IS_ENABLED(CONFIG_NVME_TARGET_FC)) 1577 1518 uint16_t xri = bf_get(lpfc_wcqe_xa_xri, axri); 1578 1519 uint16_t rxid = bf_get(lpfc_wcqe_xa_remote_xid, axri); 1579 1520 struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp; 1580 1521 struct lpfc_nvmet_tgtport *tgtp; 1522 + struct nvmefc_tgt_fcp_req *req = NULL; 1581 1523 struct lpfc_nodelist *ndlp; 1582 1524 unsigned long iflag = 0; 1583 1525 int rrq_empty = 0; ··· 1610 1548 */ 1611 1549 if (ctxp->flag & LPFC_NVMET_CTX_RLS && 1612 1550 !(ctxp->flag & LPFC_NVMET_ABORT_OP)) { 1613 - list_del(&ctxp->list); 1551 + list_del_init(&ctxp->list); 1614 1552 released = true; 1615 1553 } 1616 1554 ctxp->flag &= ~LPFC_NVMET_XBUSY; ··· 1630 1568 } 1631 1569 1632 1570 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1633 - "6318 XB aborted oxid %x flg x%x (%x)\n", 1571 + "6318 XB aborted oxid x%x flg x%x (%x)\n", 1634 1572 ctxp->oxid, ctxp->flag, released); 1635 1573 if (released) 1636 1574 lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); ··· 1641 1579 } 1642 1580 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1643 1581 spin_unlock_irqrestore(&phba->hbalock, iflag); 1582 + 1583 + ctxp = lpfc_nvmet_get_ctx_for_xri(phba, xri); 1584 + if (ctxp) { 1585 + /* 1586 + * Abort already done by FW, so BA_ACC sent. 
1587 + * However, the transport may be unaware. 1588 + */ 1589 + lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1590 + "6323 NVMET Rcv ABTS xri x%x ctxp state x%x " 1591 + "flag x%x oxid x%x rxid x%x\n", 1592 + xri, ctxp->state, ctxp->flag, ctxp->oxid, 1593 + rxid); 1594 + 1595 + spin_lock_irqsave(&ctxp->ctxlock, iflag); 1596 + ctxp->flag |= LPFC_NVMET_ABTS_RCV; 1597 + ctxp->state = LPFC_NVMET_STE_ABORT; 1598 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1599 + 1600 + lpfc_nvmeio_data(phba, 1601 + "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n", 1602 + xri, raw_smp_processor_id(), 0); 1603 + 1604 + req = &ctxp->ctx.fcp_req; 1605 + if (req) 1606 + nvmet_fc_rcv_fcp_abort(phba->targetport, req); 1607 + } 1608 + #endif 1644 1609 } 1645 1610 1646 1611 int ··· 1678 1589 struct lpfc_hba *phba = vport->phba; 1679 1590 struct lpfc_nvmet_rcv_ctx *ctxp, *next_ctxp; 1680 1591 struct nvmefc_tgt_fcp_req *rsp; 1681 - uint16_t xri; 1592 + uint32_t sid; 1593 + uint16_t oxid, xri; 1682 1594 unsigned long iflag = 0; 1683 1595 1684 - xri = be16_to_cpu(fc_hdr->fh_ox_id); 1596 + sid = sli4_sid_from_fc_hdr(fc_hdr); 1597 + oxid = be16_to_cpu(fc_hdr->fh_ox_id); 1685 1598 1686 1599 spin_lock_irqsave(&phba->hbalock, iflag); 1687 1600 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1688 1601 list_for_each_entry_safe(ctxp, next_ctxp, 1689 1602 &phba->sli4_hba.lpfc_abts_nvmet_ctx_list, 1690 1603 list) { 1691 - if (ctxp->ctxbuf->sglq->sli4_xritag != xri) 1604 + if (ctxp->oxid != oxid || ctxp->sid != sid) 1692 1605 continue; 1606 + 1607 + xri = ctxp->ctxbuf->sglq->sli4_xritag; 1693 1608 1694 1609 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1695 1610 spin_unlock_irqrestore(&phba->hbalock, iflag); ··· 1719 1626 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 1720 1627 spin_unlock_irqrestore(&phba->hbalock, iflag); 1721 1628 1722 - lpfc_nvmeio_data(phba, "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n", 1723 - xri, raw_smp_processor_id(), 1); 1629 + /* check the wait list */ 1630 + 
if (phba->sli4_hba.nvmet_io_wait_cnt) { 1631 + struct rqb_dmabuf *nvmebuf; 1632 + struct fc_frame_header *fc_hdr_tmp; 1633 + u32 sid_tmp; 1634 + u16 oxid_tmp; 1635 + bool found = false; 1636 + 1637 + spin_lock_irqsave(&phba->sli4_hba.nvmet_io_wait_lock, iflag); 1638 + 1639 + /* match by oxid and s_id */ 1640 + list_for_each_entry(nvmebuf, 1641 + &phba->sli4_hba.lpfc_nvmet_io_wait_list, 1642 + hbuf.list) { 1643 + fc_hdr_tmp = (struct fc_frame_header *) 1644 + (nvmebuf->hbuf.virt); 1645 + oxid_tmp = be16_to_cpu(fc_hdr_tmp->fh_ox_id); 1646 + sid_tmp = sli4_sid_from_fc_hdr(fc_hdr_tmp); 1647 + if (oxid_tmp != oxid || sid_tmp != sid) 1648 + continue; 1649 + 1650 + lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1651 + "6321 NVMET Rcv ABTS oxid x%x from x%x " 1652 + "is waiting for a ctxp\n", 1653 + oxid, sid); 1654 + 1655 + list_del_init(&nvmebuf->hbuf.list); 1656 + phba->sli4_hba.nvmet_io_wait_cnt--; 1657 + found = true; 1658 + break; 1659 + } 1660 + spin_unlock_irqrestore(&phba->sli4_hba.nvmet_io_wait_lock, 1661 + iflag); 1662 + 1663 + /* free buffer since already posted a new DMA buffer to RQ */ 1664 + if (found) { 1665 + nvmebuf->hrq->rqbp->rqb_free_buffer(phba, nvmebuf); 1666 + /* Respond with BA_ACC accordingly */ 1667 + lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 1); 1668 + return 0; 1669 + } 1670 + } 1671 + 1672 + /* check active list */ 1673 + ctxp = lpfc_nvmet_get_ctx_for_oxid(phba, oxid, sid); 1674 + if (ctxp) { 1675 + xri = ctxp->ctxbuf->sglq->sli4_xritag; 1676 + 1677 + spin_lock_irqsave(&ctxp->ctxlock, iflag); 1678 + ctxp->flag |= (LPFC_NVMET_ABTS_RCV | LPFC_NVMET_ABORT_OP); 1679 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1680 + 1681 + lpfc_nvmeio_data(phba, 1682 + "NVMET ABTS RCV: xri x%x CPU %02x rjt %d\n", 1683 + xri, raw_smp_processor_id(), 0); 1684 + 1685 + lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1686 + "6322 NVMET Rcv ABTS:acc oxid x%x xri x%x " 1687 + "flag x%x state x%x\n", 1688 + ctxp->oxid, xri, ctxp->flag, ctxp->state); 1689 + 1690 + if 
(ctxp->flag & LPFC_NVMET_TNOTIFY) { 1691 + /* Notify the transport */ 1692 + nvmet_fc_rcv_fcp_abort(phba->targetport, 1693 + &ctxp->ctx.fcp_req); 1694 + } else { 1695 + cancel_work_sync(&ctxp->ctxbuf->defer_work); 1696 + spin_lock_irqsave(&ctxp->ctxlock, iflag); 1697 + lpfc_nvmet_defer_release(phba, ctxp); 1698 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1699 + } 1700 + if (ctxp->state == LPFC_NVMET_STE_RCV) 1701 + lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, ctxp->sid, 1702 + ctxp->oxid); 1703 + else 1704 + lpfc_nvmet_sol_fcp_issue_abort(phba, ctxp, ctxp->sid, 1705 + ctxp->oxid); 1706 + 1707 + lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 1); 1708 + return 0; 1709 + } 1710 + 1711 + lpfc_nvmeio_data(phba, "NVMET ABTS RCV: oxid x%x CPU %02x rjt %d\n", 1712 + oxid, raw_smp_processor_id(), 1); 1724 1713 1725 1714 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 1726 - "6320 NVMET Rcv ABTS:rjt xri x%x\n", xri); 1715 + "6320 NVMET Rcv ABTS:rjt oxid x%x\n", oxid); 1727 1716 1728 1717 /* Respond with BA_RJT accordingly */ 1729 1718 lpfc_sli4_seq_abort_rsp(vport, fc_hdr, 0); ··· 1888 1713 list_add(&nvmewqeq->list, &wq->wqfull_list); 1889 1714 spin_unlock_irqrestore(&pring->ring_lock, iflags); 1890 1715 return; 1716 + } 1717 + if (rc == WQE_SUCCESS) { 1718 + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 1719 + if (ctxp->ts_cmd_nvme) { 1720 + if (ctxp->ctx.fcp_req.op == NVMET_FCOP_RSP) 1721 + ctxp->ts_status_wqput = ktime_get_ns(); 1722 + else 1723 + ctxp->ts_data_wqput = ktime_get_ns(); 1724 + } 1725 + #endif 1726 + } else { 1727 + WARN_ON(rc); 1891 1728 } 1892 1729 } 1893 1730 wq->q_flag &= ~HBA_NVMET_WQFULL; ··· 2066 1879 return; 2067 1880 } 2068 1881 1882 + if (ctxp->flag & LPFC_NVMET_ABTS_RCV) { 1883 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 1884 + "6324 IO oxid x%x aborted\n", 1885 + ctxp->oxid); 1886 + return; 1887 + } 1888 + 2069 1889 payload = (uint32_t *)(nvmebuf->dbuf.virt); 2070 1890 tgtp = (struct lpfc_nvmet_tgtport *)phba->targetport->private; 1891 + ctxp->flag |= 
LPFC_NVMET_TNOTIFY; 1892 + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 1893 + if (ctxp->ts_isr_cmd) 1894 + ctxp->ts_cmd_nvme = ktime_get_ns(); 1895 + #endif 2071 1896 /* 2072 1897 * The calling sequence should be: 2073 1898 * nvmet_fc_rcv_fcp_req->lpfc_nvmet_xmt_fcp_op/cmp- req->done ··· 2129 1930 phba->sli4_hba.nvmet_mrq_data[qno], 1, qno); 2130 1931 return; 2131 1932 } 1933 + ctxp->flag &= ~LPFC_NVMET_TNOTIFY; 2132 1934 atomic_inc(&tgtp->rcv_fcp_cmd_drop); 2133 1935 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 2134 1936 "2582 FCP Drop IO x%x: err x%x: x%x x%x x%x\n", ··· 2219 2019 * @phba: pointer to lpfc hba data structure. 2220 2020 * @idx: relative index of MRQ vector 2221 2021 * @nvmebuf: pointer to lpfc nvme command HBQ data structure. 2022 + * @isr_timestamp: in jiffies. 2023 + * @cqflag: cq processing information regarding workload. 2222 2024 * 2223 2025 * This routine is used for processing the WQE associated with a unsolicited 2224 2026 * event. It first determines whether there is an existing ndlp that matches ··· 2233 2031 lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba, 2234 2032 uint32_t idx, 2235 2033 struct rqb_dmabuf *nvmebuf, 2236 - uint64_t isr_timestamp) 2034 + uint64_t isr_timestamp, 2035 + uint8_t cqflag) 2237 2036 { 2238 2037 struct lpfc_nvmet_rcv_ctx *ctxp; 2239 2038 struct lpfc_nvmet_tgtport *tgtp; ··· 2321 2118 sid = sli4_sid_from_fc_hdr(fc_hdr); 2322 2119 2323 2120 ctxp = (struct lpfc_nvmet_rcv_ctx *)ctx_buf->context; 2121 + spin_lock_irqsave(&phba->sli4_hba.t_active_list_lock, iflag); 2122 + list_add_tail(&ctxp->list, &phba->sli4_hba.t_active_ctx_list); 2123 + spin_unlock_irqrestore(&phba->sli4_hba.t_active_list_lock, iflag); 2324 2124 if (ctxp->state != LPFC_NVMET_STE_FREE) { 2325 2125 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_IOERR, 2326 2126 "6414 NVMET Context corrupt %d %d oxid x%x\n", ··· 2346 2140 spin_lock_init(&ctxp->ctxlock); 2347 2141 2348 2142 #ifdef CONFIG_SCSI_LPFC_DEBUG_FS 2349 - if (isr_timestamp) { 2143 + if (isr_timestamp) 
2350 2144 ctxp->ts_isr_cmd = isr_timestamp; 2351 - ctxp->ts_cmd_nvme = ktime_get_ns(); 2352 - ctxp->ts_nvme_data = 0; 2353 - ctxp->ts_data_wqput = 0; 2354 - ctxp->ts_isr_data = 0; 2355 - ctxp->ts_data_nvme = 0; 2356 - ctxp->ts_nvme_status = 0; 2357 - ctxp->ts_status_wqput = 0; 2358 - ctxp->ts_isr_status = 0; 2359 - ctxp->ts_status_nvme = 0; 2360 - } else { 2361 - ctxp->ts_cmd_nvme = 0; 2362 - } 2145 + ctxp->ts_cmd_nvme = 0; 2146 + ctxp->ts_nvme_data = 0; 2147 + ctxp->ts_data_wqput = 0; 2148 + ctxp->ts_isr_data = 0; 2149 + ctxp->ts_data_nvme = 0; 2150 + ctxp->ts_nvme_status = 0; 2151 + ctxp->ts_status_wqput = 0; 2152 + ctxp->ts_isr_status = 0; 2153 + ctxp->ts_status_nvme = 0; 2363 2154 #endif 2364 2155 2365 2156 atomic_inc(&tgtp->rcv_fcp_cmd_in); 2366 - lpfc_nvmet_process_rcv_fcp_req(ctx_buf); 2157 + /* check for cq processing load */ 2158 + if (!cqflag) { 2159 + lpfc_nvmet_process_rcv_fcp_req(ctx_buf); 2160 + return; 2161 + } 2162 + 2163 + if (!queue_work(phba->wq, &ctx_buf->defer_work)) { 2164 + atomic_inc(&tgtp->rcv_fcp_cmd_drop); 2165 + lpfc_printf_log(phba, KERN_ERR, LOG_NVME, 2166 + "6325 Unable to queue work for oxid x%x. " 2167 + "FCP Drop IO [x%x x%x x%x]\n", 2168 + ctxp->oxid, 2169 + atomic_read(&tgtp->rcv_fcp_cmd_in), 2170 + atomic_read(&tgtp->rcv_fcp_cmd_out), 2171 + atomic_read(&tgtp->xmt_fcp_release)); 2172 + 2173 + spin_lock_irqsave(&ctxp->ctxlock, iflag); 2174 + lpfc_nvmet_defer_release(phba, ctxp); 2175 + spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 2176 + lpfc_nvmet_unsol_fcp_issue_abort(phba, ctxp, sid, oxid); 2177 + } 2367 2178 } 2368 2179 2369 2180 /** ··· 2417 2194 * @phba: pointer to lpfc hba data structure. 2418 2195 * @idx: relative index of MRQ vector 2419 2196 * @nvmebuf: pointer to received nvme data structure. 2197 + * @isr_timestamp: in jiffies. 2198 + * @cqflag: cq processing information regarding workload. 
2420 2199 * 2421 2200 * This routine is used to process an unsolicited event received from a SLI 2422 2201 * (Service Level Interface) ring. The actual processing of the data buffer ··· 2430 2205 lpfc_nvmet_unsol_fcp_event(struct lpfc_hba *phba, 2431 2206 uint32_t idx, 2432 2207 struct rqb_dmabuf *nvmebuf, 2433 - uint64_t isr_timestamp) 2208 + uint64_t isr_timestamp, 2209 + uint8_t cqflag) 2434 2210 { 2435 2211 if (phba->nvmet_support == 0) { 2436 2212 lpfc_rq_buf_free(phba, &nvmebuf->hbuf); 2437 2213 return; 2438 2214 } 2439 - lpfc_nvmet_unsol_fcp_buffer(phba, idx, nvmebuf, 2440 - isr_timestamp); 2215 + lpfc_nvmet_unsol_fcp_buffer(phba, idx, nvmebuf, isr_timestamp, cqflag); 2441 2216 } 2442 2217 2443 2218 /** ··· 2975 2750 if ((ctxp->flag & LPFC_NVMET_CTX_RLS) && 2976 2751 !(ctxp->flag & LPFC_NVMET_XBUSY)) { 2977 2752 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 2978 - list_del(&ctxp->list); 2753 + list_del_init(&ctxp->list); 2979 2754 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 2980 2755 released = true; 2981 2756 } ··· 2984 2759 atomic_inc(&tgtp->xmt_abort_rsp); 2985 2760 2986 2761 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 2987 - "6165 ABORT cmpl: xri x%x flg x%x (%d) " 2762 + "6165 ABORT cmpl: oxid x%x flg x%x (%d) " 2988 2763 "WCQE: %08x %08x %08x %08x\n", 2989 2764 ctxp->oxid, ctxp->flag, released, 2990 2765 wcqe->word0, wcqe->total_data_placed, ··· 3059 2834 if ((ctxp->flag & LPFC_NVMET_CTX_RLS) && 3060 2835 !(ctxp->flag & LPFC_NVMET_XBUSY)) { 3061 2836 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 3062 - list_del(&ctxp->list); 2837 + list_del_init(&ctxp->list); 3063 2838 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 3064 2839 released = true; 3065 2840 } ··· 3068 2843 atomic_inc(&tgtp->xmt_abort_rsp); 3069 2844 3070 2845 lpfc_printf_log(phba, KERN_INFO, LOG_NVME_ABTS, 3071 - "6316 ABTS cmpl xri x%x flg x%x (%x) " 2846 + "6316 ABTS cmpl oxid x%x flg x%x (%x) " 3072 2847 "WCQE: %08x %08x %08x %08x\n", 3073 2848 
ctxp->oxid, ctxp->flag, released, 3074 2849 wcqe->word0, wcqe->total_data_placed, ··· 3439 3214 spin_lock_irqsave(&ctxp->ctxlock, flags); 3440 3215 if (ctxp->flag & LPFC_NVMET_CTX_RLS) { 3441 3216 spin_lock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 3442 - list_del(&ctxp->list); 3217 + list_del_init(&ctxp->list); 3443 3218 spin_unlock(&phba->sli4_hba.abts_nvmet_buf_list_lock); 3444 3219 released = true; 3445 3220 } ··· 3448 3223 3449 3224 atomic_inc(&tgtp->xmt_abort_rsp_error); 3450 3225 lpfc_printf_log(phba, KERN_ERR, LOG_NVME_ABTS, 3451 - "6135 Failed to Issue ABTS for oxid x%x. Status x%x\n", 3452 - ctxp->oxid, rc); 3226 + "6135 Failed to Issue ABTS for oxid x%x. Status x%x " 3227 + "(%x)\n", 3228 + ctxp->oxid, rc, released); 3453 3229 if (released) 3454 3230 lpfc_nvmet_ctxbuf_post(phba, ctxp->ctxbuf); 3455 3231 return 1;
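The two new lookup helpers in lpfc_nvmet.c (`lpfc_nvmet_get_ctx_for_xri` and `lpfc_nvmet_get_ctx_for_oxid`) share one shape: walk `t_active_ctx_list` under `t_active_list_lock` and return the first context matching the key. A single-threaded sketch with a plain singly linked list standing in for the kernel list and spinlock (the types are illustrative):

```c
#include <assert.h>
#include <stddef.h>

struct rcv_ctx {
    unsigned short oxid;   /* originator exchange id from the FC header */
    unsigned int   sid;    /* source id of the initiator */
    struct rcv_ctx *next;
};

/* Linear search of the active-context list; both keys must match,
 * as in the driver's oxid/sid variant. */
static struct rcv_ctx *ctx_for_oxid(struct rcv_ctx *head,
                                    unsigned short oxid, unsigned int sid)
{
    struct rcv_ctx *c;

    for (c = head; c; c = c->next)
        if (c->oxid == oxid && c->sid == sid)
            return c;
    return NULL;  /* no active IO for this exchange */
}
```

Matching on (oxid, sid) rather than xri is what lets the ABTS handler find an IO even when the local exchange resources have not been looked at yet.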
+1
drivers/scsi/lpfc/lpfc_nvmet.h
··· 140 140 #define LPFC_NVMET_ABTS_RCV 0x10 /* ABTS received on exchange */ 141 141 #define LPFC_NVMET_CTX_REUSE_WQ 0x20 /* ctx reused via WQ */ 142 142 #define LPFC_NVMET_DEFER_WQFULL 0x40 /* Waiting on a free WQE */ 143 + #define LPFC_NVMET_TNOTIFY 0x80 /* notify transport of abts */ 143 144 struct rqb_dmabuf *rqb_buffer; 144 145 struct lpfc_nvmet_ctxbuf *ctxbuf; 145 146 struct lpfc_sli4_hdw_queue *hdwq;
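The new `LPFC_NVMET_TNOTIFY` bit is a plain flag-word bit: set once the transport has been handed the command, tested on ABTS to decide whether `nvmet_fc_rcv_fcp_abort` should be called at all. A small sketch of that gating (the values mirror the header, but the helper functions are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

enum {
    CTX_FLAG_ABTS_RCV = 0x10,  /* mirrors LPFC_NVMET_ABTS_RCV */
    CTX_FLAG_TNOTIFY  = 0x80,  /* mirrors LPFC_NVMET_TNOTIFY */
};

/* Only notify the transport of an abort if it was told about the IO. */
static bool should_notify_transport(unsigned int flag)
{
    return (flag & CTX_FLAG_TNOTIFY) != 0;
}

static unsigned int mark_transport_notified(unsigned int flag)
{
    return flag | CTX_FLAG_TNOTIFY;
}

static unsigned int clear_transport_notified(unsigned int flag)
{
    return flag & ~CTX_FLAG_TNOTIFY;  /* cleared again on fcp_release */
}
```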
+11 -5
drivers/scsi/lpfc/lpfc_scsi.c
··· 3879 3879 */ 3880 3880 spin_lock(&lpfc_cmd->buf_lock); 3881 3881 lpfc_cmd->cur_iocbq.iocb_flag &= ~LPFC_DRIVER_ABORTED; 3882 - if (lpfc_cmd->waitq) { 3882 + if (lpfc_cmd->waitq) 3883 3883 wake_up(lpfc_cmd->waitq); 3884 - lpfc_cmd->waitq = NULL; 3885 - } 3886 3884 spin_unlock(&lpfc_cmd->buf_lock); 3887 3885 3888 3886 lpfc_release_scsi_buf(phba, lpfc_cmd); ··· 4716 4718 iocb->sli4_xritag, ret, 4717 4719 cmnd->device->id, cmnd->device->lun); 4718 4720 } 4721 + 4722 + lpfc_cmd->waitq = NULL; 4723 + 4719 4724 spin_unlock(&lpfc_cmd->buf_lock); 4720 4725 goto out; 4721 4726 ··· 4798 4797 rsp_info, 4799 4798 rsp_len, rsp_info_code); 4800 4799 4801 - if ((fcprsp->rspStatus2&RSP_LEN_VALID) && (rsp_len == 8)) { 4800 + /* If FCP_RSP_LEN_VALID bit is one, then the FCP_RSP_LEN 4801 + * field specifies the number of valid bytes of FCP_RSP_INFO. 4802 + * The FCP_RSP_LEN field shall be set to 0x04 or 0x08 4803 + */ 4804 + if ((fcprsp->rspStatus2 & RSP_LEN_VALID) && 4805 + ((rsp_len == 8) || (rsp_len == 4))) { 4802 4806 switch (rsp_info_code) { 4803 4807 case RSP_NO_FAILURE: 4804 4808 lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP, ··· 5747 5741 5748 5742 /* Create an lun info structure and add to list of luns */ 5749 5743 lun_info = lpfc_create_device_data(phba, vport_wwpn, target_wwpn, lun, 5750 - pri, false); 5744 + pri, true); 5751 5745 if (lun_info) { 5752 5746 lun_info->oas_enabled = true; 5753 5747 lun_info->priority = pri;
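The relaxed `FCP_RSP_INFO` check above now accepts either valid length the comment cites: when `RSP_LEN_VALID` is set, `FCP_RSP_LEN` may be 0x04 or 0x08, where the earlier code accepted only 8. A sketch of the predicate (the bit value used for `RSP_LEN_VALID` here is assumed for illustration):

```c
#include <assert.h>
#include <stdbool.h>

#define RSP_LEN_VALID 0x01  /* assumed bit position for this sketch */

/* FCP_RSP_INFO is usable only if the length-valid bit is set and the
 * reported length is one of the two values the spec permits. */
static bool rsp_info_usable(unsigned char rspStatus2, unsigned int rsp_len)
{
    return (rspStatus2 & RSP_LEN_VALID) && (rsp_len == 8 || rsp_len == 4);
}
```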
+45 -31
drivers/scsi/lpfc/lpfc_sli.c
··· 108 108 * endianness. This function can be called with or without 109 109 * lock. 110 110 **/ 111 - void 111 + static void 112 112 lpfc_sli4_pcimem_bcopy(void *srcp, void *destp, uint32_t cnt) 113 113 { 114 114 uint64_t *src = srcp; ··· 5571 5571 int qidx; 5572 5572 struct lpfc_sli4_hba *sli4_hba = &phba->sli4_hba; 5573 5573 struct lpfc_sli4_hdw_queue *qp; 5574 + struct lpfc_queue *eq; 5574 5575 5575 5576 sli4_hba->sli4_write_cq_db(phba, sli4_hba->mbx_cq, 0, LPFC_QUEUE_REARM); 5576 5577 sli4_hba->sli4_write_cq_db(phba, sli4_hba->els_cq, 0, LPFC_QUEUE_REARM); ··· 5579 5578 sli4_hba->sli4_write_cq_db(phba, sli4_hba->nvmels_cq, 0, 5580 5579 LPFC_QUEUE_REARM); 5581 5580 5582 - qp = sli4_hba->hdwq; 5583 5581 if (sli4_hba->hdwq) { 5582 + /* Loop thru all Hardware Queues */ 5584 5583 for (qidx = 0; qidx < phba->cfg_hdw_queue; qidx++) { 5585 - sli4_hba->sli4_write_cq_db(phba, qp[qidx].fcp_cq, 0, 5584 + qp = &sli4_hba->hdwq[qidx]; 5585 + /* ARM the corresponding CQ */ 5586 + sli4_hba->sli4_write_cq_db(phba, qp->fcp_cq, 0, 5586 5587 LPFC_QUEUE_REARM); 5587 - sli4_hba->sli4_write_cq_db(phba, qp[qidx].nvme_cq, 0, 5588 + sli4_hba->sli4_write_cq_db(phba, qp->nvme_cq, 0, 5588 5589 LPFC_QUEUE_REARM); 5589 5590 } 5590 5591 5591 - for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) 5592 - sli4_hba->sli4_write_eq_db(phba, qp[qidx].hba_eq, 5593 - 0, LPFC_QUEUE_REARM); 5592 + /* Loop thru all IRQ vectors */ 5593 + for (qidx = 0; qidx < phba->cfg_irq_chann; qidx++) { 5594 + eq = sli4_hba->hba_eq_hdl[qidx].eq; 5595 + /* ARM the corresponding EQ */ 5596 + sli4_hba->sli4_write_eq_db(phba, eq, 5597 + 0, LPFC_QUEUE_REARM); 5598 + } 5594 5599 } 5595 5600 5596 5601 if (phba->nvmet_support) { ··· 7882 7875 * and will process all the completions associated with the eq for the 7883 7876 * mailbox completion queue. 
7884 7877 **/ 7885 - bool 7878 + static bool 7886 7879 lpfc_sli4_process_missed_mbox_completions(struct lpfc_hba *phba) 7887 7880 { 7888 7881 struct lpfc_sli4_hba *sli4_hba = &phba->sli4_hba; 7889 7882 uint32_t eqidx; 7890 7883 struct lpfc_queue *fpeq = NULL; 7884 + struct lpfc_queue *eq; 7891 7885 bool mbox_pending; 7892 7886 7893 7887 if (unlikely(!phba) || (phba->sli_rev != LPFC_SLI_REV4)) 7894 7888 return false; 7895 7889 7896 - /* Find the eq associated with the mcq */ 7897 - 7898 - if (sli4_hba->hdwq) 7899 - for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++) 7900 - if (sli4_hba->hdwq[eqidx].hba_eq->queue_id == 7901 - sli4_hba->mbx_cq->assoc_qid) { 7902 - fpeq = sli4_hba->hdwq[eqidx].hba_eq; 7890 + /* Find the EQ associated with the mbox CQ */ 7891 + if (sli4_hba->hdwq) { 7892 + for (eqidx = 0; eqidx < phba->cfg_irq_chann; eqidx++) { 7893 + eq = phba->sli4_hba.hba_eq_hdl[eqidx].eq; 7894 + if (eq->queue_id == sli4_hba->mbx_cq->assoc_qid) { 7895 + fpeq = eq; 7903 7896 break; 7904 7897 } 7898 + } 7899 + } 7905 7900 if (!fpeq) 7906 7901 return false; 7907 7902 ··· 13614 13605 goto rearm_and_exit; 13615 13606 13616 13607 /* Process all the entries to the CQ */ 13608 + cq->q_flag = 0; 13617 13609 cqe = lpfc_sli4_cq_get(cq); 13618 13610 while (cqe) { 13619 - #if defined(CONFIG_SCSI_LPFC_DEBUG_FS) && defined(BUILD_NVME) 13620 - if (phba->ktime_on) 13621 - cq->isr_timestamp = ktime_get_ns(); 13622 - else 13623 - cq->isr_timestamp = 0; 13624 - #endif 13625 13611 workposted |= handler(phba, cq, cqe); 13626 13612 __lpfc_sli4_consume_cqe(phba, cq, cqe); 13627 13613 ··· 13629 13625 LPFC_QUEUE_NOARM); 13630 13626 consumed = 0; 13631 13627 } 13628 + 13629 + if (count == LPFC_NVMET_CQ_NOTIFY) 13630 + cq->q_flag |= HBA_NVMET_CQ_NOTIFY; 13632 13631 13633 13632 cqe = lpfc_sli4_cq_get(cq); 13634 13633 } ··· 13948 13941 goto drop; 13949 13942 13950 13943 if (fc_hdr->fh_type == FC_TYPE_FCP) { 13951 - dma_buf->bytes_recv = bf_get(lpfc_rcqe_length, rcqe); 13944 + 
dma_buf->bytes_recv = bf_get(lpfc_rcqe_length, rcqe); 13952 13945 lpfc_nvmet_unsol_fcp_event( 13953 - phba, idx, dma_buf, 13954 - cq->isr_timestamp); 13946 + phba, idx, dma_buf, cq->isr_timestamp, 13947 + cq->q_flag & HBA_NVMET_CQ_NOTIFY); 13955 13948 return false; 13956 13949 } 13957 13950 drop: ··· 14117 14110 } 14118 14111 14119 14112 work_cq: 14113 + #if defined(CONFIG_SCSI_LPFC_DEBUG_FS) 14114 + if (phba->ktime_on) 14115 + cq->isr_timestamp = ktime_get_ns(); 14116 + else 14117 + cq->isr_timestamp = 0; 14118 + #endif 14120 14119 if (!queue_work_on(cq->chann, phba->wq, &cq->irqwork)) 14121 14120 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 14122 14121 "0363 Cannot schedule soft IRQ " ··· 14249 14236 return IRQ_NONE; 14250 14237 14251 14238 /* Get to the EQ struct associated with this vector */ 14252 - fpeq = phba->sli4_hba.hdwq[hba_eqidx].hba_eq; 14239 + fpeq = phba->sli4_hba.hba_eq_hdl[hba_eqidx].eq; 14253 14240 if (unlikely(!fpeq)) 14254 14241 return IRQ_NONE; 14255 14242 ··· 14534 14521 /* set values by EQ_DELAY register if supported */ 14535 14522 if (phba->sli.sli_flag & LPFC_SLI_USE_EQDR) { 14536 14523 for (qidx = startq; qidx < phba->cfg_irq_chann; qidx++) { 14537 - eq = phba->sli4_hba.hdwq[qidx].hba_eq; 14524 + eq = phba->sli4_hba.hba_eq_hdl[qidx].eq; 14538 14525 if (!eq) 14539 14526 continue; 14540 14527 ··· 14543 14530 if (++cnt >= numq) 14544 14531 break; 14545 14532 } 14546 - 14547 14533 return; 14548 14534 } 14549 14535 ··· 14570 14558 dmult = LPFC_DMULT_MAX; 14571 14559 14572 14560 for (qidx = startq; qidx < phba->cfg_irq_chann; qidx++) { 14573 - eq = phba->sli4_hba.hdwq[qidx].hba_eq; 14561 + eq = phba->sli4_hba.hba_eq_hdl[qidx].eq; 14574 14562 if (!eq) 14575 14563 continue; 14576 14564 eq->q_mode = usdelay; ··· 14672 14660 lpfc_printf_log(phba, KERN_ERR, LOG_SLI, 14673 14661 "0360 Unsupported EQ count. 
(%d)\n", 14674 14662 eq->entry_count); 14675 - if (eq->entry_count < 256) 14676 - return -EINVAL; 14663 + if (eq->entry_count < 256) { 14664 + status = -EINVAL; 14665 + goto out; 14666 + } 14677 14667 /* fall through - otherwise default to smallest count */ 14678 14668 case 256: 14679 14669 bf_set(lpfc_eq_context_count, &eq_create->u.request.context, ··· 14727 14713 eq->host_index = 0; 14728 14714 eq->notify_interval = LPFC_EQ_NOTIFY_INTRVL; 14729 14715 eq->max_proc_limit = LPFC_EQ_MAX_PROC_LIMIT; 14730 - 14716 + out: 14731 14717 mempool_free(mbox, phba->mbox_mem_pool); 14732 14718 return status; 14733 14719 }
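The lpfc_sli4_eq_create hunk above is a classic leak fix: the early `return -EINVAL` skipped the `mempool_free()` at the bottom of the function, so the failure path now sets `status` and jumps to a shared `out:` label. A minimal standalone sketch of that pattern (hypothetical `mbox_get`/`mbox_put` helpers stand in for the mailbox mempool; the names and counts are illustrative, not lpfc API):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical counter standing in for the mailbox mempool. */
static int mbox_alloc_count;

static void *mbox_get(void) { mbox_alloc_count++; return malloc(1); }
static void mbox_put(void *m) { mbox_alloc_count--; free(m); }

/* Sketch of the restructured error path: instead of returning -EINVAL
 * directly (which leaked the mailbox), failure jumps to a single
 * cleanup label so the free runs on every path. */
static int eq_create_sketch(int entry_count)
{
    int status = 0;
    void *mbox = mbox_get();

    if (entry_count < 256) {   /* unsupported EQ size */
        status = -22;          /* -EINVAL */
        goto out;
    }
    /* ... program the EQ context, post the mailbox command ... */
out:
    mbox_put(mbox);            /* freed on success and failure alike */
    return status;
}
```

This "centralized exiting" style is the kernel's preferred cleanup idiom and is what the `goto out` in the real hunk implements.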
+10 -1
drivers/scsi/lpfc/lpfc_sli4.h
··· 197 197 #define LPFC_DB_LIST_FORMAT 0x02 198 198 uint8_t q_flag; 199 199 #define HBA_NVMET_WQFULL 0x1 /* We hit WQ Full condition for NVMET */ 200 + #define HBA_NVMET_CQ_NOTIFY 0x1 /* LPFC_NVMET_CQ_NOTIFY CQEs this EQE */ 201 + #define LPFC_NVMET_CQ_NOTIFY 4 200 202 void __iomem *db_regaddr; 201 203 uint16_t dpp_enable; 202 204 uint16_t dpp_id; ··· 452 450 uint32_t idx; 453 451 char handler_name[LPFC_SLI4_HANDLER_NAME_SZ]; 454 452 struct lpfc_hba *phba; 453 + struct lpfc_queue *eq; 455 454 }; 456 455 457 456 /*BB Credit recovery value*/ ··· 515 512 #define LPFC_WQ_SZ64_SUPPORT 1 516 513 #define LPFC_WQ_SZ128_SUPPORT 2 517 514 uint8_t wqpcnt; 515 + uint8_t nvme; 518 516 }; 519 517 520 518 #define LPFC_CQ_4K_PAGE_SZ 0x1 ··· 550 546 uint16_t irq; 551 547 uint16_t eq; 552 548 uint16_t hdwq; 553 - uint16_t hyper; 549 + uint16_t flag; 550 + #define LPFC_CPU_MAP_HYPER 0x1 551 + #define LPFC_CPU_MAP_UNASSIGN 0x2 552 + #define LPFC_CPU_FIRST_IRQ 0x4 554 553 }; 555 554 #define LPFC_VECTOR_MAP_EMPTY 0xffff 556 555 ··· 850 843 struct list_head lpfc_nvmet_sgl_list; 851 844 spinlock_t abts_nvmet_buf_list_lock; /* list of aborted NVMET IOs */ 852 845 struct list_head lpfc_abts_nvmet_ctx_list; 846 + spinlock_t t_active_list_lock; /* list of active NVMET IOs */ 847 + struct list_head t_active_ctx_list; 853 848 struct list_head lpfc_nvmet_io_wait_list; 854 849 struct lpfc_nvmet_ctx_info *nvmet_ctx_info; 855 850 struct lpfc_sglq **lpfc_sglq_active_list;
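Note the lpfc_sli4.h change replacing the single-purpose `uint16_t hyper` field with a `uint16_t flag` word plus bit defines, so one field can record several per-CPU attributes. A small illustration of how such a flags word is queried (the struct name and helper below are made up for the example; only the three `LPFC_CPU_*` defines come from the diff):

```c
#include <assert.h>
#include <stdint.h>

/* Bit defines as added to the per-CPU map entry in lpfc_sli4.h. */
#define LPFC_CPU_MAP_HYPER    0x1
#define LPFC_CPU_MAP_UNASSIGN 0x2
#define LPFC_CPU_FIRST_IRQ    0x4

/* Illustrative stand-in for the real lpfc_vector_map_info entry. */
struct cpu_map_sketch {
    uint16_t flag;
};

/* True when the entry is both a hyperthread sibling and the first CPU
 * assigned an IRQ -- the kind of combined query the flags word enables
 * that the old lone 'hyper' field could not express. */
static int is_hyper_first_irq(const struct cpu_map_sketch *p)
{
    uint16_t want = LPFC_CPU_MAP_HYPER | LPFC_CPU_FIRST_IRQ;

    return (p->flag & want) == want;
}
```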
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "12.2.0.2" 23 + #define LPFC_DRIVER_VERSION "12.2.0.3" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+263 -164
drivers/scsi/mac_scsi.c
··· 4 4 * 5 5 * Copyright 1998, Michael Schmitz <mschmitz@lbl.gov> 6 6 * 7 + * Copyright 2019 Finn Thain 8 + * 7 9 * derived in part from: 8 10 */ 9 11 /* ··· 14 12 * Copyright 1995, Russell King 15 13 */ 16 14 15 + #include <linux/delay.h> 17 16 #include <linux/types.h> 18 17 #include <linux/module.h> 19 18 #include <linux/ioport.h> ··· 25 22 26 23 #include <asm/hwtest.h> 27 24 #include <asm/io.h> 25 + #include <asm/macintosh.h> 28 26 #include <asm/macints.h> 29 27 #include <asm/setup.h> 30 28 ··· 57 53 module_param(setup_cmd_per_lun, int, 0); 58 54 static int setup_sg_tablesize = -1; 59 55 module_param(setup_sg_tablesize, int, 0); 60 - static int setup_use_pdma = -1; 56 + static int setup_use_pdma = 512; 61 57 module_param(setup_use_pdma, int, 0); 62 58 static int setup_hostid = -1; 63 59 module_param(setup_hostid, int, 0); ··· 94 90 __setup("mac5380=", mac_scsi_setup); 95 91 #endif /* !MODULE */ 96 92 97 - /* Pseudo DMA asm originally by Ove Edlund */ 93 + /* 94 + * According to "Inside Macintosh: Devices", Mac OS requires disk drivers to 95 + * specify the number of bytes between the delays expected from a SCSI target. 96 + * This allows the operating system to "prevent bus errors when a target fails 97 + * to deliver the next byte within the processor bus error timeout period." 98 + * Linux SCSI drivers lack knowledge of the timing behaviour of SCSI targets 99 + * so bus errors are unavoidable. 100 + * 101 + * If a MOVE.B instruction faults, we assume that zero bytes were transferred 102 + * and simply retry. That assumption probably depends on target behaviour but 103 + * seems to hold up okay. The NOP provides synchronization: without it the 104 + * fault can sometimes occur after the program counter has moved past the 105 + * offending instruction. Post-increment addressing can't be used. 
106 + */ 98 107 99 - #define CP_IO_TO_MEM(s,d,n) \ 100 - __asm__ __volatile__ \ 101 - (" cmp.w #4,%2\n" \ 102 - " bls 8f\n" \ 103 - " move.w %1,%%d0\n" \ 104 - " neg.b %%d0\n" \ 105 - " and.w #3,%%d0\n" \ 106 - " sub.w %%d0,%2\n" \ 107 - " bra 2f\n" \ 108 - " 1: move.b (%0),(%1)+\n" \ 109 - " 2: dbf %%d0,1b\n" \ 110 - " move.w %2,%%d0\n" \ 111 - " lsr.w #5,%%d0\n" \ 112 - " bra 4f\n" \ 113 - " 3: move.l (%0),(%1)+\n" \ 114 - "31: move.l (%0),(%1)+\n" \ 115 - "32: move.l (%0),(%1)+\n" \ 116 - "33: move.l (%0),(%1)+\n" \ 117 - "34: move.l (%0),(%1)+\n" \ 118 - "35: move.l (%0),(%1)+\n" \ 119 - "36: move.l (%0),(%1)+\n" \ 120 - "37: move.l (%0),(%1)+\n" \ 121 - " 4: dbf %%d0,3b\n" \ 122 - " move.w %2,%%d0\n" \ 123 - " lsr.w #2,%%d0\n" \ 124 - " and.w #7,%%d0\n" \ 125 - " bra 6f\n" \ 126 - " 5: move.l (%0),(%1)+\n" \ 127 - " 6: dbf %%d0,5b\n" \ 128 - " and.w #3,%2\n" \ 129 - " bra 8f\n" \ 130 - " 7: move.b (%0),(%1)+\n" \ 131 - " 8: dbf %2,7b\n" \ 132 - " moveq.l #0, %2\n" \ 133 - " 9: \n" \ 134 - ".section .fixup,\"ax\"\n" \ 135 - " .even\n" \ 136 - "91: moveq.l #1, %2\n" \ 137 - " jra 9b\n" \ 138 - "94: moveq.l #4, %2\n" \ 139 - " jra 9b\n" \ 140 - ".previous\n" \ 141 - ".section __ex_table,\"a\"\n" \ 142 - " .align 4\n" \ 143 - " .long 1b,91b\n" \ 144 - " .long 3b,94b\n" \ 145 - " .long 31b,94b\n" \ 146 - " .long 32b,94b\n" \ 147 - " .long 33b,94b\n" \ 148 - " .long 34b,94b\n" \ 149 - " .long 35b,94b\n" \ 150 - " .long 36b,94b\n" \ 151 - " .long 37b,94b\n" \ 152 - " .long 5b,94b\n" \ 153 - " .long 7b,91b\n" \ 154 - ".previous" \ 155 - : "=a"(s), "=a"(d), "=d"(n) \ 156 - : "0"(s), "1"(d), "2"(n) \ 157 - : "d0") 108 + #define MOVE_BYTE(operands) \ 109 + asm volatile ( \ 110 + "1: moveb " operands " \n" \ 111 + "11: nop \n" \ 112 + " addq #1,%0 \n" \ 113 + " subq #1,%1 \n" \ 114 + "40: \n" \ 115 + " \n" \ 116 + ".section .fixup,\"ax\" \n" \ 117 + ".even \n" \ 118 + "90: movel #1, %2 \n" \ 119 + " jra 40b \n" \ 120 + ".previous \n" \ 121 + " \n" \ 122 + ".section 
__ex_table,\"a\" \n" \ 123 + ".align 4 \n" \ 124 + ".long 1b,90b \n" \ 125 + ".long 11b,90b \n" \ 126 + ".previous \n" \ 127 + : "+a" (addr), "+r" (n), "+r" (result) : "a" (io)) 128 + 129 + /* 130 + * If a MOVE.W (or MOVE.L) instruction faults, it cannot be retried because 131 + * the residual byte count would be uncertain. In that situation the MOVE_WORD 132 + * macro clears n in the fixup section to abort the transfer. 133 + */ 134 + 135 + #define MOVE_WORD(operands) \ 136 + asm volatile ( \ 137 + "1: movew " operands " \n" \ 138 + "11: nop \n" \ 139 + " subq #2,%1 \n" \ 140 + "40: \n" \ 141 + " \n" \ 142 + ".section .fixup,\"ax\" \n" \ 143 + ".even \n" \ 144 + "90: movel #0, %1 \n" \ 145 + " movel #2, %2 \n" \ 146 + " jra 40b \n" \ 147 + ".previous \n" \ 148 + " \n" \ 149 + ".section __ex_table,\"a\" \n" \ 150 + ".align 4 \n" \ 151 + ".long 1b,90b \n" \ 152 + ".long 11b,90b \n" \ 153 + ".previous \n" \ 154 + : "+a" (addr), "+r" (n), "+r" (result) : "a" (io)) 155 + 156 + #define MOVE_16_WORDS(operands) \ 157 + asm volatile ( \ 158 + "1: movew " operands " \n" \ 159 + "2: movew " operands " \n" \ 160 + "3: movew " operands " \n" \ 161 + "4: movew " operands " \n" \ 162 + "5: movew " operands " \n" \ 163 + "6: movew " operands " \n" \ 164 + "7: movew " operands " \n" \ 165 + "8: movew " operands " \n" \ 166 + "9: movew " operands " \n" \ 167 + "10: movew " operands " \n" \ 168 + "11: movew " operands " \n" \ 169 + "12: movew " operands " \n" \ 170 + "13: movew " operands " \n" \ 171 + "14: movew " operands " \n" \ 172 + "15: movew " operands " \n" \ 173 + "16: movew " operands " \n" \ 174 + "17: nop \n" \ 175 + " subl #32,%1 \n" \ 176 + "40: \n" \ 177 + " \n" \ 178 + ".section .fixup,\"ax\" \n" \ 179 + ".even \n" \ 180 + "90: movel #0, %1 \n" \ 181 + " movel #2, %2 \n" \ 182 + " jra 40b \n" \ 183 + ".previous \n" \ 184 + " \n" \ 185 + ".section __ex_table,\"a\" \n" \ 186 + ".align 4 \n" \ 187 + ".long 1b,90b \n" \ 188 + ".long 2b,90b \n" \ 189 + ".long 3b,90b \n" \ 
190 + ".long 4b,90b \n" \ 191 + ".long 5b,90b \n" \ 192 + ".long 6b,90b \n" \ 193 + ".long 7b,90b \n" \ 194 + ".long 8b,90b \n" \ 195 + ".long 9b,90b \n" \ 196 + ".long 10b,90b \n" \ 197 + ".long 11b,90b \n" \ 198 + ".long 12b,90b \n" \ 199 + ".long 13b,90b \n" \ 200 + ".long 14b,90b \n" \ 201 + ".long 15b,90b \n" \ 202 + ".long 16b,90b \n" \ 203 + ".long 17b,90b \n" \ 204 + ".previous \n" \ 205 + : "+a" (addr), "+r" (n), "+r" (result) : "a" (io)) 206 + 207 + #define MAC_PDMA_DELAY 32 208 + 209 + static inline int mac_pdma_recv(void __iomem *io, unsigned char *start, int n) 210 + { 211 + unsigned char *addr = start; 212 + int result = 0; 213 + 214 + if (n >= 1) { 215 + MOVE_BYTE("%3@,%0@"); 216 + if (result) 217 + goto out; 218 + } 219 + if (n >= 1 && ((unsigned long)addr & 1)) { 220 + MOVE_BYTE("%3@,%0@"); 221 + if (result) 222 + goto out; 223 + } 224 + while (n >= 32) 225 + MOVE_16_WORDS("%3@,%0@+"); 226 + while (n >= 2) 227 + MOVE_WORD("%3@,%0@+"); 228 + if (result) 229 + return start - addr; /* Negated to indicate uncertain length */ 230 + if (n == 1) 231 + MOVE_BYTE("%3@,%0@"); 232 + out: 233 + return addr - start; 234 + } 235 + 236 + static inline int mac_pdma_send(unsigned char *start, void __iomem *io, int n) 237 + { 238 + unsigned char *addr = start; 239 + int result = 0; 240 + 241 + if (n >= 1) { 242 + MOVE_BYTE("%0@,%3@"); 243 + if (result) 244 + goto out; 245 + } 246 + if (n >= 1 && ((unsigned long)addr & 1)) { 247 + MOVE_BYTE("%0@,%3@"); 248 + if (result) 249 + goto out; 250 + } 251 + while (n >= 32) 252 + MOVE_16_WORDS("%0@+,%3@"); 253 + while (n >= 2) 254 + MOVE_WORD("%0@+,%3@"); 255 + if (result) 256 + return start - addr; /* Negated to indicate uncertain length */ 257 + if (n == 1) 258 + MOVE_BYTE("%0@,%3@"); 259 + out: 260 + return addr - start; 261 + } 262 + 263 + /* The "SCSI DMA" chip on the IIfx implements this register. 
*/ 264 + #define CTRL_REG 0x8 265 + #define CTRL_INTERRUPTS_ENABLE BIT(1) 266 + #define CTRL_HANDSHAKE_MODE BIT(3) 267 + 268 + static inline void write_ctrl_reg(struct NCR5380_hostdata *hostdata, u32 value) 269 + { 270 + out_be32(hostdata->io + (CTRL_REG << 4), value); 271 + } 158 272 159 273 static inline int macscsi_pread(struct NCR5380_hostdata *hostdata, 160 274 unsigned char *dst, int len) 161 275 { 162 276 u8 __iomem *s = hostdata->pdma_io + (INPUT_DATA_REG << 4); 163 277 unsigned char *d = dst; 164 - int n = len; 165 - int transferred; 278 + int result = 0; 279 + 280 + hostdata->pdma_residual = len; 166 281 167 282 while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG, 168 283 BASR_DRQ | BASR_PHASE_MATCH, 169 284 BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) { 170 - CP_IO_TO_MEM(s, d, n); 285 + int bytes; 171 286 172 - transferred = d - dst - n; 173 - hostdata->pdma_residual = len - transferred; 287 + if (macintosh_config->ident == MAC_MODEL_IIFX) 288 + write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE | 289 + CTRL_INTERRUPTS_ENABLE); 174 290 175 - /* No bus error. */ 176 - if (n == 0) 177 - return 0; 291 + bytes = mac_pdma_recv(s, d, min(hostdata->pdma_residual, 512)); 178 292 179 - /* Target changed phase early? 
*/ 293 + if (bytes > 0) { 294 + d += bytes; 295 + hostdata->pdma_residual -= bytes; 296 + } 297 + 298 + if (hostdata->pdma_residual == 0) 299 + goto out; 300 + 180 301 if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, 181 - BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0) 182 - scmd_printk(KERN_ERR, hostdata->connected, 302 + BUS_AND_STATUS_REG, BASR_ACK, 303 + BASR_ACK, HZ / 64) < 0) 304 + scmd_printk(KERN_DEBUG, hostdata->connected, 183 305 "%s: !REQ and !ACK\n", __func__); 184 306 if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) 185 - return 0; 307 + goto out; 308 + 309 + if (bytes == 0) 310 + udelay(MAC_PDMA_DELAY); 311 + 312 + if (bytes >= 0) 313 + continue; 186 314 187 315 dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host, 188 - "%s: bus error (%d/%d)\n", __func__, transferred, len); 316 + "%s: bus error (%d/%d)\n", __func__, d - dst, len); 189 317 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 190 - d = dst + transferred; 191 - n = len - transferred; 318 + result = -1; 319 + goto out; 192 320 } 193 321 194 322 scmd_printk(KERN_ERR, hostdata->connected, 195 323 "%s: phase mismatch or !DRQ\n", __func__); 196 324 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 197 - return -1; 325 + result = -1; 326 + out: 327 + if (macintosh_config->ident == MAC_MODEL_IIFX) 328 + write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE); 329 + return result; 198 330 } 199 - 200 - 201 - #define CP_MEM_TO_IO(s,d,n) \ 202 - __asm__ __volatile__ \ 203 - (" cmp.w #4,%2\n" \ 204 - " bls 8f\n" \ 205 - " move.w %0,%%d0\n" \ 206 - " neg.b %%d0\n" \ 207 - " and.w #3,%%d0\n" \ 208 - " sub.w %%d0,%2\n" \ 209 - " bra 2f\n" \ 210 - " 1: move.b (%0)+,(%1)\n" \ 211 - " 2: dbf %%d0,1b\n" \ 212 - " move.w %2,%%d0\n" \ 213 - " lsr.w #5,%%d0\n" \ 214 - " bra 4f\n" \ 215 - " 3: move.l (%0)+,(%1)\n" \ 216 - "31: move.l (%0)+,(%1)\n" \ 217 - "32: move.l (%0)+,(%1)\n" \ 218 - "33: move.l (%0)+,(%1)\n" \ 219 - "34: move.l (%0)+,(%1)\n" \ 220 - "35: move.l (%0)+,(%1)\n" \ 221 - 
"36: move.l (%0)+,(%1)\n" \ 222 - "37: move.l (%0)+,(%1)\n" \ 223 - " 4: dbf %%d0,3b\n" \ 224 - " move.w %2,%%d0\n" \ 225 - " lsr.w #2,%%d0\n" \ 226 - " and.w #7,%%d0\n" \ 227 - " bra 6f\n" \ 228 - " 5: move.l (%0)+,(%1)\n" \ 229 - " 6: dbf %%d0,5b\n" \ 230 - " and.w #3,%2\n" \ 231 - " bra 8f\n" \ 232 - " 7: move.b (%0)+,(%1)\n" \ 233 - " 8: dbf %2,7b\n" \ 234 - " moveq.l #0, %2\n" \ 235 - " 9: \n" \ 236 - ".section .fixup,\"ax\"\n" \ 237 - " .even\n" \ 238 - "91: moveq.l #1, %2\n" \ 239 - " jra 9b\n" \ 240 - "94: moveq.l #4, %2\n" \ 241 - " jra 9b\n" \ 242 - ".previous\n" \ 243 - ".section __ex_table,\"a\"\n" \ 244 - " .align 4\n" \ 245 - " .long 1b,91b\n" \ 246 - " .long 3b,94b\n" \ 247 - " .long 31b,94b\n" \ 248 - " .long 32b,94b\n" \ 249 - " .long 33b,94b\n" \ 250 - " .long 34b,94b\n" \ 251 - " .long 35b,94b\n" \ 252 - " .long 36b,94b\n" \ 253 - " .long 37b,94b\n" \ 254 - " .long 5b,94b\n" \ 255 - " .long 7b,91b\n" \ 256 - ".previous" \ 257 - : "=a"(s), "=a"(d), "=d"(n) \ 258 - : "0"(s), "1"(d), "2"(n) \ 259 - : "d0") 260 331 261 332 static inline int macscsi_pwrite(struct NCR5380_hostdata *hostdata, 262 333 unsigned char *src, int len) 263 334 { 264 335 unsigned char *s = src; 265 336 u8 __iomem *d = hostdata->pdma_io + (OUTPUT_DATA_REG << 4); 266 - int n = len; 267 - int transferred; 337 + int result = 0; 338 + 339 + hostdata->pdma_residual = len; 268 340 269 341 while (!NCR5380_poll_politely(hostdata, BUS_AND_STATUS_REG, 270 342 BASR_DRQ | BASR_PHASE_MATCH, 271 343 BASR_DRQ | BASR_PHASE_MATCH, HZ / 64)) { 272 - CP_MEM_TO_IO(s, d, n); 344 + int bytes; 273 345 274 - transferred = s - src - n; 275 - hostdata->pdma_residual = len - transferred; 346 + if (macintosh_config->ident == MAC_MODEL_IIFX) 347 + write_ctrl_reg(hostdata, CTRL_HANDSHAKE_MODE | 348 + CTRL_INTERRUPTS_ENABLE); 276 349 277 - /* Target changed phase early? 
*/ 278 - if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, 279 - BUS_AND_STATUS_REG, BASR_ACK, BASR_ACK, HZ / 64) < 0) 280 - scmd_printk(KERN_ERR, hostdata->connected, 281 - "%s: !REQ and !ACK\n", __func__); 282 - if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) 283 - return 0; 350 + bytes = mac_pdma_send(s, d, min(hostdata->pdma_residual, 512)); 284 351 285 - /* No bus error. */ 286 - if (n == 0) { 287 - if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG, 288 - TCR_LAST_BYTE_SENT, 289 - TCR_LAST_BYTE_SENT, HZ / 64) < 0) 290 - scmd_printk(KERN_ERR, hostdata->connected, 291 - "%s: Last Byte Sent timeout\n", __func__); 292 - return 0; 352 + if (bytes > 0) { 353 + s += bytes; 354 + hostdata->pdma_residual -= bytes; 293 355 } 294 356 357 + if (hostdata->pdma_residual == 0) { 358 + if (NCR5380_poll_politely(hostdata, TARGET_COMMAND_REG, 359 + TCR_LAST_BYTE_SENT, 360 + TCR_LAST_BYTE_SENT, 361 + HZ / 64) < 0) { 362 + scmd_printk(KERN_ERR, hostdata->connected, 363 + "%s: Last Byte Sent timeout\n", __func__); 364 + result = -1; 365 + } 366 + goto out; 367 + } 368 + 369 + if (NCR5380_poll_politely2(hostdata, STATUS_REG, SR_REQ, SR_REQ, 370 + BUS_AND_STATUS_REG, BASR_ACK, 371 + BASR_ACK, HZ / 64) < 0) 372 + scmd_printk(KERN_DEBUG, hostdata->connected, 373 + "%s: !REQ and !ACK\n", __func__); 374 + if (!(NCR5380_read(BUS_AND_STATUS_REG) & BASR_PHASE_MATCH)) 375 + goto out; 376 + 377 + if (bytes == 0) 378 + udelay(MAC_PDMA_DELAY); 379 + 380 + if (bytes >= 0) 381 + continue; 382 + 295 383 dsprintk(NDEBUG_PSEUDO_DMA, hostdata->host, 296 - "%s: bus error (%d/%d)\n", __func__, transferred, len); 384 + "%s: bus error (%d/%d)\n", __func__, s - src, len); 297 385 NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 298 - s = src + transferred; 299 - n = len - transferred; 386 + result = -1; 387 + goto out; 300 388 } 301 389 302 390 scmd_printk(KERN_ERR, hostdata->connected, 303 391 "%s: phase mismatch or !DRQ\n", __func__); 304 392 
NCR5380_dprint(NDEBUG_PSEUDO_DMA, hostdata->host); 305 - 306 - return -1; 393 + result = -1; 394 + out: 395 + if (macintosh_config->ident == MAC_MODEL_IIFX) 396 + write_ctrl_reg(hostdata, CTRL_INTERRUPTS_ENABLE); 397 + return result; 307 398 } 308 399 309 400 static int macscsi_dma_xfer_len(struct NCR5380_hostdata *hostdata, 310 401 struct scsi_cmnd *cmd) 311 402 { 312 403 if (hostdata->flags & FLAG_NO_PSEUDO_DMA || 313 - cmd->SCp.this_residual < 16) 404 + cmd->SCp.this_residual < setup_use_pdma) 314 405 return 0; 315 406 316 407 return cmd->SCp.this_residual;
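The mac_scsi.c rewrite above changes the transfer policy as much as the asm: data now moves in chunks of at most 512 bytes (`min(hostdata->pdma_residual, 512)`), a zero-byte chunk is retried after a short delay, and a negative chunk result (a word-sized fault with uncertain residual) aborts the transfer. A pure-software model of that loop, with no asm or hardware (the function names here are invented for the sketch; only the policy mirrors `macscsi_pread`):

```c
#include <assert.h>

#define CHUNK_MAX 512

/* chunk_fn stands in for mac_pdma_recv()/mac_pdma_send(): it returns
 * the byte count actually moved, 0 for "retry after a delay", or a
 * negative value for a hard fault. */
typedef int (*chunk_fn)(int requested);

/* Model of the retry loop: request at most CHUNK_MAX bytes per DRQ,
 * decrement the residual by whatever arrived, bail out on a fault. */
static int pdma_model(int len, chunk_fn xfer)
{
    int residual = len;

    while (residual > 0) {
        int req = residual < CHUNK_MAX ? residual : CHUNK_MAX;
        int got = xfer(req);

        if (got < 0)
            return -1;     /* bus error: abort, like result = -1 */
        residual -= got;   /* got == 0 simply loops again (udelay) */
    }
    return 0;
}

/* A toy target that delivers at most 100 bytes per handshake. */
static int slow_target(int requested)
{
    return requested < 100 ? requested : 100;
}
```

The byte-at-a-time `MOVE_BYTE` fallback in the real code is what makes a zero-progress retry safe: a faulting single-byte move transfers nothing, so the residual stays exact.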
+1
drivers/scsi/megaraid/Kconfig.megaraid
··· 79 79 config MEGARAID_SAS 80 80 tristate "LSI Logic MegaRAID SAS RAID Module" 81 81 depends on PCI && SCSI 82 + select IRQ_POLL 82 83 help 83 84 Module for LSI Logic's SAS based RAID controllers. 84 85 To compile this driver as a module, choose 'm' here.
+1 -1
drivers/scsi/megaraid/Makefile
··· 3 3 obj-$(CONFIG_MEGARAID_MAILBOX) += megaraid_mbox.o 4 4 obj-$(CONFIG_MEGARAID_SAS) += megaraid_sas.o 5 5 megaraid_sas-objs := megaraid_sas_base.o megaraid_sas_fusion.o \ 6 - megaraid_sas_fp.o 6 + megaraid_sas_fp.o megaraid_sas_debugfs.o
+91 -10
drivers/scsi/megaraid/megaraid_sas.h
··· 21 21 /* 22 22 * MegaRAID SAS Driver meta data 23 23 */ 24 - #define MEGASAS_VERSION "07.707.51.00-rc1" 25 - #define MEGASAS_RELDATE "February 7, 2019" 24 + #define MEGASAS_VERSION "07.710.06.00-rc1" 25 + #define MEGASAS_RELDATE "June 18, 2019" 26 26 27 27 /* 28 28 * Device IDs ··· 52 52 #define PCI_DEVICE_ID_LSI_AERO_10E2 0x10e2 53 53 #define PCI_DEVICE_ID_LSI_AERO_10E5 0x10e5 54 54 #define PCI_DEVICE_ID_LSI_AERO_10E6 0x10e6 55 + #define PCI_DEVICE_ID_LSI_AERO_10E0 0x10e0 56 + #define PCI_DEVICE_ID_LSI_AERO_10E3 0x10e3 57 + #define PCI_DEVICE_ID_LSI_AERO_10E4 0x10e4 58 + #define PCI_DEVICE_ID_LSI_AERO_10E7 0x10e7 55 59 56 60 /* 57 61 * Intel HBA SSDIDs ··· 127 123 #define MFI_RESET_ADAPTER 0x00000002 128 124 #define MEGAMFI_FRAME_SIZE 64 129 125 126 + #define MFI_STATE_FAULT_CODE 0x0FFF0000 127 + #define MFI_STATE_FAULT_SUBCODE 0x0000FF00 130 128 /* 131 129 * During FW init, clear pending cmds & reset state using inbound_msg_0 132 130 * ··· 196 190 MFI_CMD_SMP = 0x7, 197 191 MFI_CMD_STP = 0x8, 198 192 MFI_CMD_NVME = 0x9, 193 + MFI_CMD_TOOLBOX = 0xa, 199 194 MFI_CMD_OP_COUNT, 200 195 MFI_CMD_INVALID = 0xff 201 196 }; ··· 1456 1449 1457 1450 u8 reserved6[64]; 1458 1451 1459 - u32 rsvdForAdptOp[64]; 1452 + struct { 1453 + #if defined(__BIG_ENDIAN_BITFIELD) 1454 + u32 reserved:19; 1455 + u32 support_pci_lane_margining: 1; 1456 + u32 support_psoc_update:1; 1457 + u32 support_force_personality_change:1; 1458 + u32 support_fde_type_mix:1; 1459 + u32 support_snap_dump:1; 1460 + u32 support_nvme_tm:1; 1461 + u32 support_oce_only:1; 1462 + u32 support_ext_mfg_vpd:1; 1463 + u32 support_pcie:1; 1464 + u32 support_cvhealth_info:1; 1465 + u32 support_profile_change:2; 1466 + u32 mr_config_ext2_supported:1; 1467 + #else 1468 + u32 mr_config_ext2_supported:1; 1469 + u32 support_profile_change:2; 1470 + u32 support_cvhealth_info:1; 1471 + u32 support_pcie:1; 1472 + u32 support_ext_mfg_vpd:1; 1473 + u32 support_oce_only:1; 1474 + u32 support_nvme_tm:1; 1475 + u32 
support_snap_dump:1; 1476 + u32 support_fde_type_mix:1; 1477 + u32 support_force_personality_change:1; 1478 + u32 support_psoc_update:1; 1479 + u32 support_pci_lane_margining: 1; 1480 + u32 reserved:19; 1481 + #endif 1482 + } adapter_operations5; 1483 + 1484 + u32 rsvdForAdptOp[63]; 1460 1485 1461 1486 u8 reserved7[3]; 1462 1487 ··· 1522 1483 #define MEGASAS_FW_BUSY 1 1523 1484 1524 1485 /* Driver's internal Logging levels*/ 1525 - #define OCR_LOGS (1 << 0) 1486 + #define OCR_DEBUG (1 << 0) 1487 + #define TM_DEBUG (1 << 1) 1488 + #define LD_PD_DEBUG (1 << 2) 1526 1489 1527 1490 #define SCAN_PD_CHANNEL 0x1 1528 1491 #define SCAN_VD_CHANNEL 0x2 ··· 1600 1559 #define MFI_IO_TIMEOUT_SECS 180 1601 1560 #define MEGASAS_SRIOV_HEARTBEAT_INTERVAL_VF (5 * HZ) 1602 1561 #define MEGASAS_OCR_SETTLE_TIME_VF (1000 * 30) 1562 + #define MEGASAS_SRIOV_MAX_RESET_TRIES_VF 1 1603 1563 #define MEGASAS_ROUTINE_WAIT_TIME_VF 300 1604 1564 #define MFI_REPLY_1078_MESSAGE_INTERRUPT 0x80000000 1605 1565 #define MFI_REPLY_GEN2_MESSAGE_INTERRUPT 0x00000001 ··· 1625 1583 1626 1584 #define MR_CAN_HANDLE_SYNC_CACHE_OFFSET 0X01000000 1627 1585 1586 + #define MR_ATOMIC_DESCRIPTOR_SUPPORT_OFFSET (1 << 24) 1587 + 1628 1588 #define MR_CAN_HANDLE_64_BIT_DMA_OFFSET (1 << 25) 1589 + #define MR_INTR_COALESCING_SUPPORT_OFFSET (1 << 26) 1629 1590 1630 1591 #define MEGASAS_WATCHDOG_THREAD_INTERVAL 1000 1631 1592 #define MEGASAS_WAIT_FOR_NEXT_DMA_MSECS 20 ··· 1807 1762 __le32 pad_0; /*0Ch */ 1808 1763 1809 1764 __le16 flags; /*10h */ 1810 - __le16 reserved_3; /*12h */ 1765 + __le16 replyqueue_mask; /*12h */ 1811 1766 __le32 data_xfer_len; /*14h */ 1812 1767 1813 1768 __le32 queue_info_new_phys_addr_lo; /*18h */ ··· 2205 2160 struct megasas_irq_context { 2206 2161 struct megasas_instance *instance; 2207 2162 u32 MSIxIndex; 2163 + u32 os_irq; 2164 + struct irq_poll irqpoll; 2165 + bool irq_poll_scheduled; 2166 + bool irq_line_enable; 2208 2167 }; 2209 2168 2210 2169 struct MR_DRV_SYSTEM_INFO { ··· 2238 2189 
#define MR_DEFAULT_NVME_PAGE_SHIFT 12 2239 2190 #define MR_DEFAULT_NVME_MDTS_KB 128 2240 2191 #define MR_NVME_PAGE_SIZE_MASK 0x000000FF 2192 + 2193 + /*Aero performance parameters*/ 2194 + #define MR_HIGH_IOPS_QUEUE_COUNT 8 2195 + #define MR_DEVICE_HIGH_IOPS_DEPTH 8 2196 + #define MR_HIGH_IOPS_BATCH_COUNT 16 2197 + 2198 + enum MR_PERF_MODE { 2199 + MR_BALANCED_PERF_MODE = 0, 2200 + MR_IOPS_PERF_MODE = 1, 2201 + MR_LATENCY_PERF_MODE = 2, 2202 + }; 2203 + 2204 + #define MEGASAS_PERF_MODE_2STR(mode) \ 2205 + ((mode) == MR_BALANCED_PERF_MODE ? "Balanced" : \ 2206 + (mode) == MR_IOPS_PERF_MODE ? "IOPS" : \ 2207 + (mode) == MR_LATENCY_PERF_MODE ? "Latency" : \ 2208 + "Unknown") 2241 2209 2242 2210 struct megasas_instance { 2243 2211 ··· 2312 2246 u32 secure_jbod_support; 2313 2247 u32 support_morethan256jbod; /* FW support for more than 256 PD/JBOD */ 2314 2248 bool use_seqnum_jbod_fp; /* Added for PD sequence */ 2249 + bool smp_affinity_enable; 2315 2250 spinlock_t crashdump_lock; 2316 2251 2317 2252 struct megasas_register_set __iomem *reg_set; ··· 2330 2263 u16 ldio_threshold; 2331 2264 u16 cur_can_queue; 2332 2265 u32 max_sectors_per_req; 2266 + bool msix_load_balance; 2333 2267 struct megasas_aen_event *ev; 2334 2268 2335 2269 struct megasas_cmd **cmd_list; ··· 2358 2290 struct pci_dev *pdev; 2359 2291 u32 unique_id; 2360 2292 u32 fw_support_ieee; 2293 + u32 threshold_reply_count; 2361 2294 2362 2295 atomic_t fw_outstanding; 2363 2296 atomic_t ldio_outstanding; 2364 2297 atomic_t fw_reset_no_pci_access; 2365 - atomic_t ieee_sgl; 2366 - atomic_t prp_sgl; 2367 - atomic_t sge_holes_type1; 2368 - atomic_t sge_holes_type2; 2369 - atomic_t sge_holes_type3; 2298 + atomic64_t total_io_count; 2299 + atomic64_t high_iops_outstanding; 2370 2300 2371 2301 struct megasas_instance_template *instancet; 2372 2302 struct tasklet_struct isr_tasklet; ··· 2432 2366 u8 task_abort_tmo; 2433 2367 u8 max_reset_tmo; 2434 2368 u8 snapdump_wait_time; 2369 + #ifdef CONFIG_DEBUG_FS 2370 + 
struct dentry *debugfs_root; 2371 + struct dentry *raidmap_dump; 2372 + #endif 2435 2373 u8 enable_fw_dev_list; 2374 + bool atomic_desc_support; 2375 + bool support_seqnum_jbod_fp; 2376 + bool support_pci_lane_margining; 2377 + u8 low_latency_index_start; 2378 + int perf_mode; 2436 2379 }; 2380 + 2437 2381 struct MR_LD_VF_MAP { 2438 2382 u32 size; 2439 2383 union MR_LD_REF ref; ··· 2699 2623 void megasas_set_dma_settings(struct megasas_instance *instance, 2700 2624 struct megasas_dcmd_frame *dcmd, 2701 2625 dma_addr_t dma_addr, u32 dma_len); 2626 + int megasas_adp_reset_wait_for_ready(struct megasas_instance *instance, 2627 + bool do_adp_reset, 2628 + int ocr_context); 2629 + int megasas_irqpoll(struct irq_poll *irqpoll, int budget); 2630 + void megasas_dump_fusion_io(struct scsi_cmnd *scmd); 2702 2631 #endif /*LSI_MEGARAID_SAS_H */
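The new `MEGASAS_PERF_MODE_2STR` macro in megaraid_sas.h maps the Aero performance-mode enum to a printable string via a chained conditional, so it can sit directly in a printk argument list with no lookup table. Reproduced verbatim from the hunk above, with its enum:

```c
#include <assert.h>
#include <string.h>

enum MR_PERF_MODE {
    MR_BALANCED_PERF_MODE = 0,
    MR_IOPS_PERF_MODE = 1,
    MR_LATENCY_PERF_MODE = 2,
};

/* Chained conditional: evaluates to a string literal, usable anywhere
 * a const char * is expected, including printk("%s", ...) arguments. */
#define MEGASAS_PERF_MODE_2STR(mode) \
    ((mode) == MR_BALANCED_PERF_MODE ? "Balanced" : \
     (mode) == MR_IOPS_PERF_MODE ? "IOPS" : \
     (mode) == MR_LATENCY_PERF_MODE ? "Latency" : \
     "Unknown")
```

Any value outside the enum (including the module parameter default `perf_mode = -1` before it is resolved) falls through to "Unknown".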
+581 -131
drivers/scsi/megaraid/megaraid_sas_base.c
··· 36 36 #include <linux/mutex.h> 37 37 #include <linux/poll.h> 38 38 #include <linux/vmalloc.h> 39 + #include <linux/irq_poll.h> 39 40 40 41 #include <scsi/scsi.h> 41 42 #include <scsi/scsi_cmnd.h> 42 43 #include <scsi/scsi_device.h> 43 44 #include <scsi/scsi_host.h> 44 45 #include <scsi/scsi_tcq.h> 46 + #include <scsi/scsi_dbg.h> 45 47 #include "megaraid_sas_fusion.h" 46 48 #include "megaraid_sas.h" 47 49 ··· 52 50 * Will be set in megasas_init_mfi if user does not provide 53 51 */ 54 52 static unsigned int max_sectors; 55 - module_param_named(max_sectors, max_sectors, int, 0); 53 + module_param_named(max_sectors, max_sectors, int, 0444); 56 54 MODULE_PARM_DESC(max_sectors, 57 55 "Maximum number of sectors per IO command"); 58 56 59 57 static int msix_disable; 60 - module_param(msix_disable, int, S_IRUGO); 58 + module_param(msix_disable, int, 0444); 61 59 MODULE_PARM_DESC(msix_disable, "Disable MSI-X interrupt handling. Default: 0"); 62 60 63 61 static unsigned int msix_vectors; 64 - module_param(msix_vectors, int, S_IRUGO); 62 + module_param(msix_vectors, int, 0444); 65 63 MODULE_PARM_DESC(msix_vectors, "MSI-X max vector count. Default: Set by FW"); 66 64 67 65 static int allow_vf_ioctls; 68 - module_param(allow_vf_ioctls, int, S_IRUGO); 66 + module_param(allow_vf_ioctls, int, 0444); 69 67 MODULE_PARM_DESC(allow_vf_ioctls, "Allow ioctls in SR-IOV VF mode. Default: 0"); 70 68 71 69 static unsigned int throttlequeuedepth = MEGASAS_THROTTLE_QUEUE_DEPTH; 72 - module_param(throttlequeuedepth, int, S_IRUGO); 70 + module_param(throttlequeuedepth, int, 0444); 73 71 MODULE_PARM_DESC(throttlequeuedepth, 74 72 "Adapter queue depth when throttled due to I/O timeout. Default: 16"); 75 73 76 74 unsigned int resetwaittime = MEGASAS_RESET_WAIT_TIME; 77 - module_param(resetwaittime, int, S_IRUGO); 75 + module_param(resetwaittime, int, 0444); 78 76 MODULE_PARM_DESC(resetwaittime, "Wait time in (1-180s) after I/O timeout before resetting adapter. 
Default: 180s"); 79 77 80 78 int smp_affinity_enable = 1; 81 - module_param(smp_affinity_enable, int, S_IRUGO); 79 + module_param(smp_affinity_enable, int, 0444); 82 80 MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)"); 83 81 84 82 int rdpq_enable = 1; 85 - module_param(rdpq_enable, int, S_IRUGO); 83 + module_param(rdpq_enable, int, 0444); 86 84 MODULE_PARM_DESC(rdpq_enable, "Allocate reply queue in chunks for large queue depth enable/disable Default: enable(1)"); 87 85 88 86 unsigned int dual_qdepth_disable; 89 - module_param(dual_qdepth_disable, int, S_IRUGO); 87 + module_param(dual_qdepth_disable, int, 0444); 90 88 MODULE_PARM_DESC(dual_qdepth_disable, "Disable dual queue depth feature. Default: 0"); 91 89 92 90 unsigned int scmd_timeout = MEGASAS_DEFAULT_CMD_TIMEOUT; 93 - module_param(scmd_timeout, int, S_IRUGO); 91 + module_param(scmd_timeout, int, 0444); 94 92 MODULE_PARM_DESC(scmd_timeout, "scsi command timeout (10-90s), default 90s. See megasas_reset_timer."); 93 + 94 + int perf_mode = -1; 95 + module_param(perf_mode, int, 0444); 96 + MODULE_PARM_DESC(perf_mode, "Performance mode (only for Aero adapters), options:\n\t\t" 97 + "0 - balanced: High iops and low latency queues are allocated &\n\t\t" 98 + "interrupt coalescing is enabled only on high iops queues\n\t\t" 99 + "1 - iops: High iops queues are not allocated &\n\t\t" 100 + "interrupt coalescing is enabled on all queues\n\t\t" 101 + "2 - latency: High iops queues are not allocated &\n\t\t" 102 + "interrupt coalescing is disabled on all queues\n\t\t" 103 + "default mode is 'balanced'" 104 + ); 95 105 96 106 MODULE_LICENSE("GPL"); 97 107 MODULE_VERSION(MEGASAS_VERSION); ··· 168 154 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E2)}, 169 155 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E5)}, 170 156 {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E6)}, 157 + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, 
PCI_DEVICE_ID_LSI_AERO_10E0)}, 158 + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E3)}, 159 + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E4)}, 160 + {PCI_DEVICE(PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_AERO_10E7)}, 171 161 {} 172 162 }; 173 163 ··· 188 170 u32 megasas_dbg_lvl; 189 171 static u32 support_device_change; 190 172 static bool support_nvme_encapsulation; 173 + static bool support_pci_lane_margining; 191 174 192 175 /* define lock for aen poll */ 193 176 spinlock_t poll_aen_lock; 177 + 178 + extern struct dentry *megasas_debugfs_root; 179 + extern void megasas_init_debugfs(void); 180 + extern void megasas_exit_debugfs(void); 181 + extern void megasas_setup_debugfs(struct megasas_instance *instance); 182 + extern void megasas_destroy_debugfs(struct megasas_instance *instance); 194 183 195 184 void 196 185 megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd, ··· 1123 1098 ret = wait_event_timeout(instance->int_cmd_wait_q, 1124 1099 cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ); 1125 1100 if (!ret) { 1126 - dev_err(&instance->pdev->dev, "Failed from %s %d DCMD Timed out\n", 1127 - __func__, __LINE__); 1101 + dev_err(&instance->pdev->dev, 1102 + "DCMD(opcode: 0x%x) is timed out, func:%s\n", 1103 + cmd->frame->dcmd.opcode, __func__); 1128 1104 return DCMD_TIMEOUT; 1129 1105 } 1130 1106 } else ··· 1154 1128 struct megasas_cmd *cmd; 1155 1129 struct megasas_abort_frame *abort_fr; 1156 1130 int ret = 0; 1131 + u32 opcode; 1157 1132 1158 1133 cmd = megasas_get_cmd(instance); 1159 1134 ··· 1190 1163 ret = wait_event_timeout(instance->abort_cmd_wait_q, 1191 1164 cmd->cmd_status_drv != MFI_STAT_INVALID_STATUS, timeout * HZ); 1192 1165 if (!ret) { 1193 - dev_err(&instance->pdev->dev, "Failed from %s %d Abort Timed out\n", 1194 - __func__, __LINE__); 1166 + opcode = cmd_to_abort->frame->dcmd.opcode; 1167 + dev_err(&instance->pdev->dev, 1168 + "Abort(to be aborted DCMD opcode: 0x%x) is 
timed out func:%s\n", 1169 + opcode, __func__); 1195 1170 return DCMD_TIMEOUT; 1196 1171 } 1197 1172 } else ··· 1947 1918 static void megasas_set_static_target_properties(struct scsi_device *sdev, 1948 1919 bool is_target_prop) 1949 1920 { 1950 - u16 target_index = 0; 1951 1921 u8 interface_type; 1952 1922 u32 device_qd = MEGASAS_DEFAULT_CMD_PER_LUN; 1953 1923 u32 max_io_size_kb = MR_DEFAULT_NVME_MDTS_KB; ··· 1962 1934 * The RAID firmware may require extended timeouts. 1963 1935 */ 1964 1936 blk_queue_rq_timeout(sdev->request_queue, scmd_timeout * HZ); 1965 - 1966 - target_index = (sdev->channel * MEGASAS_MAX_DEV_PER_CHANNEL) + sdev->id; 1967 1937 1968 1938 switch (interface_type) { 1969 1939 case SAS_PD: ··· 2848 2822 } 2849 2823 2850 2824 /** 2851 - * megasas_dump_frame - This function will dump MPT/MFI frame 2825 + * megasas_dump - This function will print hexdump of provided buffer. 2826 + * @buf: Buffer to be dumped 2827 + * @sz: Size in bytes 2828 + * @format: Different formats of dumping e.g. format=n will 2829 + * cause only 'n' 32 bit words to be dumped in a single 2830 + * line. 
2852 2831 */ 2853 - static inline void 2854 - megasas_dump_frame(void *mpi_request, int sz) 2832 + inline void 2833 + megasas_dump(void *buf, int sz, int format) 2855 2834 { 2856 2835 int i; 2857 - __le32 *mfp = (__le32 *)mpi_request; 2836 + __le32 *buf_loc = (__le32 *)buf; 2858 2837 2859 - printk(KERN_INFO "IO request frame:\n\t"); 2860 - for (i = 0; i < sz / sizeof(__le32); i++) { 2861 - if (i && ((i % 8) == 0)) 2862 - printk("\n\t"); 2863 - printk("%08x ", le32_to_cpu(mfp[i])); 2838 + for (i = 0; i < (sz / sizeof(__le32)); i++) { 2839 + if ((i % format) == 0) { 2840 + if (i != 0) 2841 + printk(KERN_CONT "\n"); 2842 + printk(KERN_CONT "%08x: ", (i * 4)); 2843 + } 2844 + printk(KERN_CONT "%08x ", le32_to_cpu(buf_loc[i])); 2864 2845 } 2865 - printk("\n"); 2846 + printk(KERN_CONT "\n"); 2847 + } 2848 + 2849 + /** 2850 + * megasas_dump_reg_set - This function will print hexdump of register set 2851 + * @buf: Buffer to be dumped 2852 + * @sz: Size in bytes 2853 + * @format: Different formats of dumping e.g. format=n will 2854 + * cause only 'n' 32 bit words to be dumped in a 2855 + * single line. 
2856 + */ 2857 + inline void 2858 + megasas_dump_reg_set(void __iomem *reg_set) 2859 + { 2860 + unsigned int i, sz = 256; 2861 + u32 __iomem *reg = (u32 __iomem *)reg_set; 2862 + 2863 + for (i = 0; i < (sz / sizeof(u32)); i++) 2864 + printk("%08x: %08x\n", (i * 4), readl(&reg[i])); 2865 + } 2866 + 2867 + /** 2868 + * megasas_dump_fusion_io - This function will print key details 2869 + * of SCSI IO 2870 + * @scmd: SCSI command pointer of SCSI IO 2871 + */ 2872 + void 2873 + megasas_dump_fusion_io(struct scsi_cmnd *scmd) 2874 + { 2875 + struct megasas_cmd_fusion *cmd; 2876 + union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc; 2877 + struct megasas_instance *instance; 2878 + 2879 + cmd = (struct megasas_cmd_fusion *)scmd->SCp.ptr; 2880 + instance = (struct megasas_instance *)scmd->device->host->hostdata; 2881 + 2882 + scmd_printk(KERN_INFO, scmd, 2883 + "scmd: (0x%p) retries: 0x%x allowed: 0x%x\n", 2884 + scmd, scmd->retries, scmd->allowed); 2885 + scsi_print_command(scmd); 2886 + 2887 + if (cmd) { 2888 + req_desc = (union MEGASAS_REQUEST_DESCRIPTOR_UNION *)cmd->request_desc; 2889 + scmd_printk(KERN_INFO, scmd, "Request descriptor details:\n"); 2890 + scmd_printk(KERN_INFO, scmd, 2891 + "RequestFlags:0x%x MSIxIndex:0x%x SMID:0x%x LMID:0x%x DevHandle:0x%x\n", 2892 + req_desc->SCSIIO.RequestFlags, 2893 + req_desc->SCSIIO.MSIxIndex, req_desc->SCSIIO.SMID, 2894 + req_desc->SCSIIO.LMID, req_desc->SCSIIO.DevHandle); 2895 + 2896 + printk(KERN_INFO "IO request frame:\n"); 2897 + megasas_dump(cmd->io_request, 2898 + MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE, 8); 2899 + printk(KERN_INFO "Chain frame:\n"); 2900 + megasas_dump(cmd->sg_frame, 2901 + instance->max_chain_frame_sz, 8); 2902 + } 2903 + 2904 + } 2905 + 2906 + /* 2907 + * megasas_dump_sys_regs - This function will dump system registers through 2908 + * sysfs. 2909 + * @reg_set: Pointer to System register set. 2910 + * @buf: Buffer to which output is to be written. 2911 + * @return: Number of bytes written to buffer. 
2912 + */ 2913 + static inline ssize_t 2914 + megasas_dump_sys_regs(void __iomem *reg_set, char *buf) 2915 + { 2916 + unsigned int i, sz = 256; 2917 + int bytes_wrote = 0; 2918 + char *loc = (char *)buf; 2919 + u32 __iomem *reg = (u32 __iomem *)reg_set; 2920 + 2921 + for (i = 0; i < sz / sizeof(u32); i++) { 2922 + bytes_wrote += snprintf(loc + bytes_wrote, PAGE_SIZE, 2923 + "%08x: %08x\n", (i * 4), 2924 + readl(&reg[i])); 2925 + } 2926 + return bytes_wrote; 2866 2927 } 2867 2928 2868 2929 /** ··· 2963 2850 instance = (struct megasas_instance *)scmd->device->host->hostdata; 2964 2851 2965 2852 scmd_printk(KERN_INFO, scmd, 2966 - "Controller reset is requested due to IO timeout\n" 2967 - "SCSI command pointer: (%p)\t SCSI host state: %d\t" 2968 - " SCSI host busy: %d\t FW outstanding: %d\n", 2969 - scmd, scmd->device->host->shost_state, 2853 + "OCR is requested due to IO timeout!!\n"); 2854 + 2855 + scmd_printk(KERN_INFO, scmd, 2856 + "SCSI host state: %d SCSI host busy: %d FW outstanding: %d\n", 2857 + scmd->device->host->shost_state, 2970 2858 scsi_host_busy(scmd->device->host), 2971 2859 atomic_read(&instance->fw_outstanding)); 2972 - 2973 2860 /* 2974 2861 * First wait for all commands to complete 2975 2862 */ 2976 2863 if (instance->adapter_type == MFI_SERIES) { 2977 2864 ret = megasas_generic_reset(scmd); 2978 2865 } else { 2979 - struct megasas_cmd_fusion *cmd; 2980 - cmd = (struct megasas_cmd_fusion *)scmd->SCp.ptr; 2981 - if (cmd) 2982 - megasas_dump_frame(cmd->io_request, 2983 - MEGA_MPI2_RAID_DEFAULT_IO_FRAME_SIZE); 2866 + megasas_dump_fusion_io(scmd); 2984 2867 ret = megasas_reset_fusion(scmd->device->host, 2985 2868 SCSIIO_TIMEOUT_OCR); 2986 2869 } ··· 3126 3017 } 3127 3018 3128 3019 static ssize_t 3129 - megasas_fw_crash_buffer_store(struct device *cdev, 3020 + fw_crash_buffer_store(struct device *cdev, 3130 3021 struct device_attribute *attr, const char *buf, size_t count) 3131 3022 { 3132 3023 struct Scsi_Host *shost = class_to_shost(cdev); ··· 3145 
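The new megasas_dump() helper above prints a buffer as 32-bit words, `format` words per line, each line prefixed with its byte offset, and megasas_dump_sys_regs() applies the same layout via snprintf() for sysfs. A minimal userspace sketch of that formatting logic (the function name `dump_words` is hypothetical; `le32_to_cpu()` is omitted, so this assumes a little-endian host):

```c
#include <stdint.h>
#include <stdio.h>

/* Userspace sketch of the megasas_dump() layout: `sz` bytes of `buf`
 * rendered as 32-bit words, `format` words per line, each line prefixed
 * with its byte offset.  Output goes into a string buffer (as in
 * megasas_dump_sys_regs()) instead of printk(KERN_CONT ...). */
static int dump_words(const void *buf, int sz, int format,
                      char *out, size_t outsz)
{
    const uint32_t *words = buf;
    int i, n = 0;

    for (i = 0; i < (int)(sz / sizeof(uint32_t)); i++) {
        if ((i % format) == 0)
            n += snprintf(out + n, outsz - n, "%s%08x: ",
                          i ? "\n" : "", i * 4);
        n += snprintf(out + n, outsz - n, "%08x ", words[i]);
    }
    n += snprintf(out + n, outsz - n, "\n");
    return n;
}
```

With `format = 8` this reproduces the eight-words-per-line dump the driver requests for the IO request and chain frames in megasas_dump_fusion_io().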
3036 } 3146 3037 3147 3038 static ssize_t 3148 - megasas_fw_crash_buffer_show(struct device *cdev, 3039 + fw_crash_buffer_show(struct device *cdev, 3149 3040 struct device_attribute *attr, char *buf) 3150 3041 { 3151 3042 struct Scsi_Host *shost = class_to_shost(cdev); 3152 3043 struct megasas_instance *instance = 3153 3044 (struct megasas_instance *) shost->hostdata; 3154 3045 u32 size; 3155 - unsigned long buff_addr; 3156 3046 unsigned long dmachunk = CRASH_DMA_BUF_SIZE; 3157 3047 unsigned long src_addr; 3158 3048 unsigned long flags; ··· 3167 3059 spin_unlock_irqrestore(&instance->crashdump_lock, flags); 3168 3060 return -EINVAL; 3169 3061 } 3170 - 3171 - buff_addr = (unsigned long) buf; 3172 3062 3173 3063 if (buff_offset > (instance->fw_crash_buffer_size * dmachunk)) { 3174 3064 dev_err(&instance->pdev->dev, ··· 3187 3081 } 3188 3082 3189 3083 static ssize_t 3190 - megasas_fw_crash_buffer_size_show(struct device *cdev, 3084 + fw_crash_buffer_size_show(struct device *cdev, 3191 3085 struct device_attribute *attr, char *buf) 3192 3086 { 3193 3087 struct Scsi_Host *shost = class_to_shost(cdev); ··· 3199 3093 } 3200 3094 3201 3095 static ssize_t 3202 - megasas_fw_crash_state_store(struct device *cdev, 3096 + fw_crash_state_store(struct device *cdev, 3203 3097 struct device_attribute *attr, const char *buf, size_t count) 3204 3098 { 3205 3099 struct Scsi_Host *shost = class_to_shost(cdev); ··· 3234 3128 } 3235 3129 3236 3130 static ssize_t 3237 - megasas_fw_crash_state_show(struct device *cdev, 3131 + fw_crash_state_show(struct device *cdev, 3238 3132 struct device_attribute *attr, char *buf) 3239 3133 { 3240 3134 struct Scsi_Host *shost = class_to_shost(cdev); ··· 3245 3139 } 3246 3140 3247 3141 static ssize_t 3248 - megasas_page_size_show(struct device *cdev, 3142 + page_size_show(struct device *cdev, 3249 3143 struct device_attribute *attr, char *buf) 3250 3144 { 3251 3145 return snprintf(buf, PAGE_SIZE, "%ld\n", (unsigned long)PAGE_SIZE - 1); 3252 3146 } 3253 
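The handler renames above (dropping the `megasas_` prefix from the show/store functions) are what allow the later hunk to switch to the kernel's DEVICE_ATTR_RW()/DEVICE_ATTR_RO() helpers, which locate an attribute's handlers by token-pasting the attribute name into `name_show`/`name_store`. A userspace sketch of that naming convention (the `fake_attr` struct, `FAKE_ATTR_RO` macro, and the assumed 4096-byte PAGE_SIZE are hypothetical stand-ins, not the kernel's definitions):

```c
#include <stdio.h>

/* Stand-in for the kernel's struct device_attribute. */
struct fake_attr {
    const char *name;
    int (*show)(char *buf, size_t sz);
};

/* Stand-in for DEVICE_ATTR_RO(): ##-pastes `_name` to find `_name##_show`,
 * which is why the driver's handlers must lose their megasas_ prefix. */
#define FAKE_ATTR_RO(_name) \
    static struct fake_attr dev_attr_##_name = { #_name, _name##_show }

static int page_size_show(char *buf, size_t sz)
{
    /* Mirrors the driver's page_size_show(): reports PAGE_SIZE - 1. */
    return snprintf(buf, sz, "%ld\n", 4096L - 1);
}

FAKE_ATTR_RO(page_size);
```

Because the macro derives `dev_attr_page_size` and `page_size_show` purely from the attribute name, a handler named `megasas_page_size_show` would fail to link, which is the mechanical reason behind these renames.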
3147 3254 3148 static ssize_t 3255 - megasas_ldio_outstanding_show(struct device *cdev, struct device_attribute *attr, 3149 + ldio_outstanding_show(struct device *cdev, struct device_attribute *attr, 3256 3150 char *buf) 3257 3151 { 3258 3152 struct Scsi_Host *shost = class_to_shost(cdev); ··· 3262 3156 } 3263 3157 3264 3158 static ssize_t 3265 - megasas_fw_cmds_outstanding_show(struct device *cdev, 3159 + fw_cmds_outstanding_show(struct device *cdev, 3266 3160 struct device_attribute *attr, char *buf) 3267 3161 { 3268 3162 struct Scsi_Host *shost = class_to_shost(cdev); ··· 3271 3165 return snprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&instance->fw_outstanding)); 3272 3166 } 3273 3167 3274 - static DEVICE_ATTR(fw_crash_buffer, S_IRUGO | S_IWUSR, 3275 - megasas_fw_crash_buffer_show, megasas_fw_crash_buffer_store); 3276 - static DEVICE_ATTR(fw_crash_buffer_size, S_IRUGO, 3277 - megasas_fw_crash_buffer_size_show, NULL); 3278 - static DEVICE_ATTR(fw_crash_state, S_IRUGO | S_IWUSR, 3279 - megasas_fw_crash_state_show, megasas_fw_crash_state_store); 3280 - static DEVICE_ATTR(page_size, S_IRUGO, 3281 - megasas_page_size_show, NULL); 3282 - static DEVICE_ATTR(ldio_outstanding, S_IRUGO, 3283 - megasas_ldio_outstanding_show, NULL); 3284 - static DEVICE_ATTR(fw_cmds_outstanding, S_IRUGO, 3285 - megasas_fw_cmds_outstanding_show, NULL); 3168 + static ssize_t 3169 + dump_system_regs_show(struct device *cdev, 3170 + struct device_attribute *attr, char *buf) 3171 + { 3172 + struct Scsi_Host *shost = class_to_shost(cdev); 3173 + struct megasas_instance *instance = 3174 + (struct megasas_instance *)shost->hostdata; 3175 + 3176 + return megasas_dump_sys_regs(instance->reg_set, buf); 3177 + } 3178 + 3179 + static ssize_t 3180 + raid_map_id_show(struct device *cdev, struct device_attribute *attr, 3181 + char *buf) 3182 + { 3183 + struct Scsi_Host *shost = class_to_shost(cdev); 3184 + struct megasas_instance *instance = 3185 + (struct megasas_instance *)shost->hostdata; 3186 + 3187 + 
return snprintf(buf, PAGE_SIZE, "%ld\n", 3188 + (unsigned long)instance->map_id); 3189 + } 3190 + 3191 + static DEVICE_ATTR_RW(fw_crash_buffer); 3192 + static DEVICE_ATTR_RO(fw_crash_buffer_size); 3193 + static DEVICE_ATTR_RW(fw_crash_state); 3194 + static DEVICE_ATTR_RO(page_size); 3195 + static DEVICE_ATTR_RO(ldio_outstanding); 3196 + static DEVICE_ATTR_RO(fw_cmds_outstanding); 3197 + static DEVICE_ATTR_RO(dump_system_regs); 3198 + static DEVICE_ATTR_RO(raid_map_id); 3286 3199 3287 3200 struct device_attribute *megaraid_host_attrs[] = { 3288 3201 &dev_attr_fw_crash_buffer_size, ··· 3310 3185 &dev_attr_page_size, 3311 3186 &dev_attr_ldio_outstanding, 3312 3187 &dev_attr_fw_cmds_outstanding, 3188 + &dev_attr_dump_system_regs, 3189 + &dev_attr_raid_map_id, 3313 3190 NULL, 3314 3191 }; 3315 3192 ··· 3495 3368 case MFI_CMD_SMP: 3496 3369 case MFI_CMD_STP: 3497 3370 case MFI_CMD_NVME: 3371 + case MFI_CMD_TOOLBOX: 3498 3372 megasas_complete_int_cmd(instance, cmd); 3499 3373 break; 3500 3374 ··· 3904 3776 int i; 3905 3777 u8 max_wait; 3906 3778 u32 fw_state; 3907 - u32 cur_state; 3908 3779 u32 abs_state, curr_abs_state; 3909 3780 3910 3781 abs_state = instance->instancet->read_fw_status_reg(instance); ··· 3918 3791 switch (fw_state) { 3919 3792 3920 3793 case MFI_STATE_FAULT: 3921 - dev_printk(KERN_DEBUG, &instance->pdev->dev, "FW in FAULT state!!\n"); 3794 + dev_printk(KERN_ERR, &instance->pdev->dev, 3795 + "FW in FAULT state, Fault code:0x%x subcode:0x%x func:%s\n", 3796 + abs_state & MFI_STATE_FAULT_CODE, 3797 + abs_state & MFI_STATE_FAULT_SUBCODE, __func__); 3922 3798 if (ocr) { 3923 3799 max_wait = MEGASAS_RESET_WAIT_TIME; 3924 - cur_state = MFI_STATE_FAULT; 3925 3800 break; 3926 - } else 3801 + } else { 3802 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "System Register set:\n"); 3803 + megasas_dump_reg_set(instance->reg_set); 3927 3804 return -ENODEV; 3805 + } 3928 3806 3929 3807 case MFI_STATE_WAIT_HANDSHAKE: 3930 3808 /* ··· 3949 3817 
&instance->reg_set->inbound_doorbell); 3950 3818 3951 3819 max_wait = MEGASAS_RESET_WAIT_TIME; 3952 - cur_state = MFI_STATE_WAIT_HANDSHAKE; 3953 3820 break; 3954 3821 3955 3822 case MFI_STATE_BOOT_MESSAGE_PENDING: ··· 3964 3833 &instance->reg_set->inbound_doorbell); 3965 3834 3966 3835 max_wait = MEGASAS_RESET_WAIT_TIME; 3967 - cur_state = MFI_STATE_BOOT_MESSAGE_PENDING; 3968 3836 break; 3969 3837 3970 3838 case MFI_STATE_OPERATIONAL: ··· 3996 3866 &instance->reg_set->inbound_doorbell); 3997 3867 3998 3868 max_wait = MEGASAS_RESET_WAIT_TIME; 3999 - cur_state = MFI_STATE_OPERATIONAL; 4000 3869 break; 4001 3870 4002 3871 case MFI_STATE_UNDEFINED: ··· 4003 3874 * This state should not last for more than 2 seconds 4004 3875 */ 4005 3876 max_wait = MEGASAS_RESET_WAIT_TIME; 4006 - cur_state = MFI_STATE_UNDEFINED; 4007 3877 break; 4008 3878 4009 3879 case MFI_STATE_BB_INIT: 4010 3880 max_wait = MEGASAS_RESET_WAIT_TIME; 4011 - cur_state = MFI_STATE_BB_INIT; 4012 3881 break; 4013 3882 4014 3883 case MFI_STATE_FW_INIT: 4015 3884 max_wait = MEGASAS_RESET_WAIT_TIME; 4016 - cur_state = MFI_STATE_FW_INIT; 4017 3885 break; 4018 3886 4019 3887 case MFI_STATE_FW_INIT_2: 4020 3888 max_wait = MEGASAS_RESET_WAIT_TIME; 4021 - cur_state = MFI_STATE_FW_INIT_2; 4022 3889 break; 4023 3890 4024 3891 case MFI_STATE_DEVICE_SCAN: 4025 3892 max_wait = MEGASAS_RESET_WAIT_TIME; 4026 - cur_state = MFI_STATE_DEVICE_SCAN; 4027 3893 break; 4028 3894 4029 3895 case MFI_STATE_FLUSH_CACHE: 4030 3896 max_wait = MEGASAS_RESET_WAIT_TIME; 4031 - cur_state = MFI_STATE_FLUSH_CACHE; 4032 3897 break; 4033 3898 4034 3899 default: 4035 3900 dev_printk(KERN_DEBUG, &instance->pdev->dev, "Unknown state 0x%x\n", 4036 3901 fw_state); 3902 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "System Register set:\n"); 3903 + megasas_dump_reg_set(instance->reg_set); 4037 3904 return -ENODEV; 4038 3905 } 4039 3906 ··· 4052 3927 if (curr_abs_state == abs_state) { 4053 3928 dev_printk(KERN_DEBUG, &instance->pdev->dev, "FW state 
[%d] hasn't changed " 4054 3929 "in %d secs\n", fw_state, max_wait); 3930 + dev_printk(KERN_DEBUG, &instance->pdev->dev, "System Register set:\n"); 3931 + megasas_dump_reg_set(instance->reg_set); 4055 3932 return -ENODEV; 4056 3933 } 4057 3934 ··· 4117 3990 { 4118 3991 int i; 4119 3992 u16 max_cmd; 4120 - u32 sge_sz; 4121 3993 u32 frame_count; 4122 3994 struct megasas_cmd *cmd; 4123 3995 4124 3996 max_cmd = instance->max_mfi_cmds; 4125 - 4126 - /* 4127 - * Size of our frame is 64 bytes for MFI frame, followed by max SG 4128 - * elements and finally SCSI_SENSE_BUFFERSIZE bytes for sense buffer 4129 - */ 4130 - sge_sz = (IS_DMA64) ? sizeof(struct megasas_sge64) : 4131 - sizeof(struct megasas_sge32); 4132 - 4133 - if (instance->flag_ieee) 4134 - sge_sz = sizeof(struct megasas_sge_skinny); 4135 3997 4136 3998 /* 4137 3999 * For MFI controllers. ··· 4371 4255 switch (dcmd_timeout_ocr_possible(instance)) { 4372 4256 case INITIATE_OCR: 4373 4257 cmd->flags |= DRV_DCMD_SKIP_REFIRE; 4258 + mutex_unlock(&instance->reset_mutex); 4374 4259 megasas_reset_fusion(instance->host, 4375 4260 MFI_IO_TIMEOUT_OCR); 4261 + mutex_lock(&instance->reset_mutex); 4376 4262 break; 4377 4263 case KILL_ADAPTER: 4378 4264 megaraid_sas_kill_hba(instance); ··· 4410 4292 struct megasas_dcmd_frame *dcmd; 4411 4293 struct MR_PD_LIST *ci; 4412 4294 struct MR_PD_ADDRESS *pd_addr; 4413 - dma_addr_t ci_h = 0; 4414 4295 4415 4296 if (instance->pd_list_not_supported) { 4416 4297 dev_info(&instance->pdev->dev, "MR_DCMD_PD_LIST_QUERY " ··· 4418 4301 } 4419 4302 4420 4303 ci = instance->pd_list_buf; 4421 - ci_h = instance->pd_list_buf_h; 4422 4304 4423 4305 cmd = megasas_get_cmd(instance); 4424 4306 ··· 4490 4374 4491 4375 case DCMD_SUCCESS: 4492 4376 pd_addr = ci->addr; 4377 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4378 + dev_info(&instance->pdev->dev, "%s, sysPD count: 0x%x\n", 4379 + __func__, le32_to_cpu(ci->count)); 4493 4380 4494 4381 if ((le32_to_cpu(ci->count) > 4495 4382 (MEGASAS_MAX_PD_CHANNELS * 
MEGASAS_MAX_DEV_PER_CHANNEL))) ··· 4508 4389 pd_addr->scsiDevType; 4509 4390 instance->local_pd_list[le16_to_cpu(pd_addr->deviceId)].driveState = 4510 4391 MR_PD_STATE_SYSTEM; 4392 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4393 + dev_info(&instance->pdev->dev, 4394 + "PD%d: targetID: 0x%03x deviceType:0x%x\n", 4395 + pd_index, le16_to_cpu(pd_addr->deviceId), 4396 + pd_addr->scsiDevType); 4511 4397 pd_addr++; 4512 4398 } 4513 4399 ··· 4616 4492 break; 4617 4493 4618 4494 case DCMD_SUCCESS: 4495 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4496 + dev_info(&instance->pdev->dev, "%s, LD count: 0x%x\n", 4497 + __func__, ld_count); 4498 + 4619 4499 if (ld_count > instance->fw_supported_vd_count) 4620 4500 break; 4621 4501 ··· 4629 4501 if (ci->ldList[ld_index].state != 0) { 4630 4502 ids = ci->ldList[ld_index].ref.targetId; 4631 4503 instance->ld_ids[ids] = ci->ldList[ld_index].ref.targetId; 4504 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4505 + dev_info(&instance->pdev->dev, 4506 + "LD%d: targetID: 0x%03x\n", 4507 + ld_index, ids); 4632 4508 } 4633 4509 } 4634 4510 ··· 4736 4604 case DCMD_SUCCESS: 4737 4605 tgtid_count = le32_to_cpu(ci->count); 4738 4606 4607 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4608 + dev_info(&instance->pdev->dev, "%s, LD count: 0x%x\n", 4609 + __func__, tgtid_count); 4610 + 4739 4611 if ((tgtid_count > (instance->fw_supported_vd_count))) 4740 4612 break; 4741 4613 ··· 4747 4611 for (ld_index = 0; ld_index < tgtid_count; ld_index++) { 4748 4612 ids = ci->targetId[ld_index]; 4749 4613 instance->ld_ids[ids] = ci->targetId[ld_index]; 4614 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4615 + dev_info(&instance->pdev->dev, "LD%d: targetID: 0x%03x\n", 4616 + ld_index, ci->targetId[ld_index]); 4750 4617 } 4751 4618 4752 4619 break; ··· 4829 4690 */ 4830 4691 count = le32_to_cpu(ci->count); 4831 4692 4693 + if (count > (MEGASAS_MAX_PD + MAX_LOGICAL_DRIVES_EXT)) 4694 + break; 4695 + 4696 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4697 + dev_info(&instance->pdev->dev, "%s, Device count: 
0x%x\n", 4698 + __func__, count); 4699 + 4832 4700 memset(instance->local_pd_list, 0, 4833 4701 MEGASAS_MAX_PD * sizeof(struct megasas_pd_list)); 4834 4702 memset(instance->ld_ids, 0xff, MAX_LOGICAL_DRIVES_EXT); ··· 4847 4701 ci->host_device_list[i].scsi_type; 4848 4702 instance->local_pd_list[target_id].driveState = 4849 4703 MR_PD_STATE_SYSTEM; 4704 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4705 + dev_info(&instance->pdev->dev, 4706 + "Device %d: PD targetID: 0x%03x deviceType:0x%x\n", 4707 + i, target_id, ci->host_device_list[i].scsi_type); 4850 4708 } else { 4851 4709 instance->ld_ids[target_id] = target_id; 4710 + if (megasas_dbg_lvl & LD_PD_DEBUG) 4711 + dev_info(&instance->pdev->dev, 4712 + "Device %d: LD targetID: 0x%03x\n", 4713 + i, target_id); 4852 4714 } 4853 4715 } 4854 4716 ··· 4868 4714 switch (dcmd_timeout_ocr_possible(instance)) { 4869 4715 case INITIATE_OCR: 4870 4716 cmd->flags |= DRV_DCMD_SKIP_REFIRE; 4717 + mutex_unlock(&instance->reset_mutex); 4871 4718 megasas_reset_fusion(instance->host, 4872 4719 MFI_IO_TIMEOUT_OCR); 4720 + mutex_lock(&instance->reset_mutex); 4873 4721 break; 4874 4722 case KILL_ADAPTER: 4875 4723 megaraid_sas_kill_hba(instance); ··· 5019 4863 switch (dcmd_timeout_ocr_possible(instance)) { 5020 4864 case INITIATE_OCR: 5021 4865 cmd->flags |= DRV_DCMD_SKIP_REFIRE; 4866 + mutex_unlock(&instance->reset_mutex); 5022 4867 megasas_reset_fusion(instance->host, 5023 4868 MFI_IO_TIMEOUT_OCR); 4869 + mutex_lock(&instance->reset_mutex); 5024 4870 break; 5025 4871 case KILL_ADAPTER: 5026 4872 megaraid_sas_kill_hba(instance); ··· 5101 4943 le32_to_cpus((u32 *)&ci->adapterOperations2); 5102 4944 le32_to_cpus((u32 *)&ci->adapterOperations3); 5103 4945 le16_to_cpus((u16 *)&ci->adapter_operations4); 4946 + le32_to_cpus((u32 *)&ci->adapter_operations5); 5104 4947 5105 4948 /* Update the latest Ext VD info. 5106 4949 * From Init path, store current firmware details. ··· 5109 4950 * in case of Firmware upgrade without system reboot. 
5110 4951 */ 5111 4952 megasas_update_ext_vd_details(instance); 5112 - instance->use_seqnum_jbod_fp = 4953 + instance->support_seqnum_jbod_fp = 5113 4954 ci->adapterOperations3.useSeqNumJbodFP; 5114 4955 instance->support_morethan256jbod = 5115 4956 ci->adapter_operations4.support_pd_map_target_id; 5116 4957 instance->support_nvme_passthru = 5117 4958 ci->adapter_operations4.support_nvme_passthru; 4959 + instance->support_pci_lane_margining = 4960 + ci->adapter_operations5.support_pci_lane_margining; 5118 4961 instance->task_abort_tmo = ci->TaskAbortTO; 5119 4962 instance->max_reset_tmo = ci->MaxResetTO; 5120 4963 ··· 5148 4987 dev_info(&instance->pdev->dev, 5149 4988 "FW provided TM TaskAbort/Reset timeout\t: %d secs/%d secs\n", 5150 4989 instance->task_abort_tmo, instance->max_reset_tmo); 4990 + dev_info(&instance->pdev->dev, "JBOD sequence map support\t: %s\n", 4991 + instance->support_seqnum_jbod_fp ? "Yes" : "No"); 4992 + dev_info(&instance->pdev->dev, "PCI Lane Margining support\t: %s\n", 4993 + instance->support_pci_lane_margining ? "Yes" : "No"); 5151 4994 5152 4995 break; 5153 4996 ··· 5159 4994 switch (dcmd_timeout_ocr_possible(instance)) { 5160 4995 case INITIATE_OCR: 5161 4996 cmd->flags |= DRV_DCMD_SKIP_REFIRE; 4997 + mutex_unlock(&instance->reset_mutex); 5162 4998 megasas_reset_fusion(instance->host, 5163 4999 MFI_IO_TIMEOUT_OCR); 5000 + mutex_lock(&instance->reset_mutex); 5164 5001 break; 5165 5002 case KILL_ADAPTER: 5166 5003 megaraid_sas_kill_hba(instance); ··· 5429 5262 return 1; 5430 5263 } 5431 5264 5265 + static 5266 + void megasas_setup_irq_poll(struct megasas_instance *instance) 5267 + { 5268 + struct megasas_irq_context *irq_ctx; 5269 + u32 count, i; 5270 + 5271 + count = instance->msix_vectors > 0 ? 
instance->msix_vectors : 1; 5272 + 5273 + /* Initialize IRQ poll */ 5274 + for (i = 0; i < count; i++) { 5275 + irq_ctx = &instance->irq_context[i]; 5276 + irq_ctx->os_irq = pci_irq_vector(instance->pdev, i); 5277 + irq_ctx->irq_poll_scheduled = false; 5278 + irq_poll_init(&irq_ctx->irqpoll, 5279 + instance->threshold_reply_count, 5280 + megasas_irqpoll); 5281 + } 5282 + } 5283 + 5432 5284 /* 5433 5285 * megasas_setup_irqs_ioapic - register legacy interrupts. 5434 5286 * @instance: Adapter soft state ··· 5472 5286 __func__, __LINE__); 5473 5287 return -1; 5474 5288 } 5289 + instance->perf_mode = MR_LATENCY_PERF_MODE; 5290 + instance->low_latency_index_start = 0; 5475 5291 return 0; 5476 5292 } 5477 5293 ··· 5508 5320 &instance->irq_context[j]); 5509 5321 /* Retry irq register for IO_APIC*/ 5510 5322 instance->msix_vectors = 0; 5323 + instance->msix_load_balance = false; 5511 5324 if (is_probe) { 5512 5325 pci_free_irq_vectors(instance->pdev); 5513 5326 return megasas_setup_irqs_ioapic(instance); ··· 5517 5328 } 5518 5329 } 5519 5330 } 5331 + 5520 5332 return 0; 5521 5333 } 5522 5334 ··· 5530 5340 megasas_destroy_irqs(struct megasas_instance *instance) { 5531 5341 5532 5342 int i; 5343 + int count; 5344 + struct megasas_irq_context *irq_ctx; 5345 + 5346 + count = instance->msix_vectors > 0 ? 
instance->msix_vectors : 1; 5347 + if (instance->adapter_type != MFI_SERIES) { 5348 + for (i = 0; i < count; i++) { 5349 + irq_ctx = &instance->irq_context[i]; 5350 + irq_poll_disable(&irq_ctx->irqpoll); 5351 + } 5352 + } 5533 5353 5534 5354 if (instance->msix_vectors) 5535 5355 for (i = 0; i < instance->msix_vectors; i++) { ··· 5568 5368 pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) + 5569 5369 (sizeof(struct MR_PD_CFG_SEQ) * (MAX_PHYSICAL_DEVICES - 1)); 5570 5370 5371 + instance->use_seqnum_jbod_fp = 5372 + instance->support_seqnum_jbod_fp; 5571 5373 if (reset_devices || !fusion || 5572 - !instance->ctrl_info_buf->adapterOperations3.useSeqNumJbodFP) { 5374 + !instance->support_seqnum_jbod_fp) { 5573 5375 dev_info(&instance->pdev->dev, 5574 - "Jbod map is not supported %s %d\n", 5376 + "JBOD sequence map is disabled %s %d\n", 5575 5377 __func__, __LINE__); 5576 5378 instance->use_seqnum_jbod_fp = false; 5577 5379 return; ··· 5612 5410 static void megasas_setup_reply_map(struct megasas_instance *instance) 5613 5411 { 5614 5412 const struct cpumask *mask; 5615 - unsigned int queue, cpu; 5413 + unsigned int queue, cpu, low_latency_index_start; 5616 5414 5617 - for (queue = 0; queue < instance->msix_vectors; queue++) { 5415 + low_latency_index_start = instance->low_latency_index_start; 5416 + 5417 + for (queue = low_latency_index_start; queue < instance->msix_vectors; queue++) { 5618 5418 mask = pci_irq_get_affinity(instance->pdev, queue); 5619 5419 if (!mask) 5620 5420 goto fallback; ··· 5627 5423 return; 5628 5424 5629 5425 fallback: 5630 - for_each_possible_cpu(cpu) 5631 - instance->reply_map[cpu] = cpu % instance->msix_vectors; 5426 + queue = low_latency_index_start; 5427 + for_each_possible_cpu(cpu) { 5428 + instance->reply_map[cpu] = queue; 5429 + if (queue == (instance->msix_vectors - 1)) 5430 + queue = low_latency_index_start; 5431 + else 5432 + queue++; 5433 + } 5632 5434 } 5633 5435 5634 5436 /** ··· 5671 5461 5672 5462 return SUCCESS; 5673 5463 } 
5464 + 5465 + /** 5466 + * megasas_set_high_iops_queue_affinity_hint - Set affinity hint for high IOPS queues 5467 + * @instance: Adapter soft state 5468 + * return: void 5469 + */ 5470 + static inline void 5471 + megasas_set_high_iops_queue_affinity_hint(struct megasas_instance *instance) 5472 + { 5473 + int i; 5474 + int local_numa_node; 5475 + 5476 + if (instance->perf_mode == MR_BALANCED_PERF_MODE) { 5477 + local_numa_node = dev_to_node(&instance->pdev->dev); 5478 + 5479 + for (i = 0; i < instance->low_latency_index_start; i++) 5480 + irq_set_affinity_hint(pci_irq_vector(instance->pdev, i), 5481 + cpumask_of_node(local_numa_node)); 5482 + } 5483 + } 5484 + 5485 + static int 5486 + __megasas_alloc_irq_vectors(struct megasas_instance *instance) 5487 + { 5488 + int i, irq_flags; 5489 + struct irq_affinity desc = { .pre_vectors = instance->low_latency_index_start }; 5490 + struct irq_affinity *descp = &desc; 5491 + 5492 + irq_flags = PCI_IRQ_MSIX; 5493 + 5494 + if (instance->smp_affinity_enable) 5495 + irq_flags |= PCI_IRQ_AFFINITY; 5496 + else 5497 + descp = NULL; 5498 + 5499 + i = pci_alloc_irq_vectors_affinity(instance->pdev, 5500 + instance->low_latency_index_start, 5501 + instance->msix_vectors, irq_flags, descp); 5502 + 5503 + return i; 5504 + } 5505 + 5506 + /** 5507 + * megasas_alloc_irq_vectors - Allocate IRQ vectors/enable MSI-x vectors 5508 + * @instance: Adapter soft state 5509 + * return: void 5510 + */ 5511 + static void 5512 + megasas_alloc_irq_vectors(struct megasas_instance *instance) 5513 + { 5514 + int i; 5515 + unsigned int num_msix_req; 5516 + 5517 + i = __megasas_alloc_irq_vectors(instance); 5518 + 5519 + if ((instance->perf_mode == MR_BALANCED_PERF_MODE) && 5520 + (i != instance->msix_vectors)) { 5521 + if (instance->msix_vectors) 5522 + pci_free_irq_vectors(instance->pdev); 5523 + /* Disable Balanced IOPS mode and try realloc vectors */ 5524 + instance->perf_mode = MR_LATENCY_PERF_MODE; 5525 + instance->low_latency_index_start = 1; 5526 + 
num_msix_req = num_online_cpus() + instance->low_latency_index_start; 5527 + 5528 + instance->msix_vectors = min(num_msix_req, 5529 + instance->msix_vectors); 5530 + 5531 + i = __megasas_alloc_irq_vectors(instance); 5532 + 5533 + } 5534 + 5535 + dev_info(&instance->pdev->dev, 5536 + "requested/available msix %d/%d\n", instance->msix_vectors, i); 5537 + 5538 + if (i > 0) 5539 + instance->msix_vectors = i; 5540 + else 5541 + instance->msix_vectors = 0; 5542 + 5543 + if (instance->smp_affinity_enable) 5544 + megasas_set_high_iops_queue_affinity_hint(instance); 5545 + } 5546 + 5674 5547 /** 5675 5548 * megasas_init_fw - Initializes the FW 5676 5549 * @instance: Adapter soft state ··· 5767 5474 u32 max_sectors_2, tmp_sectors, msix_enable; 5768 5475 u32 scratch_pad_1, scratch_pad_2, scratch_pad_3, status_reg; 5769 5476 resource_size_t base_addr; 5477 + void *base_addr_phys; 5770 5478 struct megasas_ctrl_info *ctrl_info = NULL; 5771 5479 unsigned long bar_list; 5772 - int i, j, loop, fw_msix_count = 0; 5480 + int i, j, loop; 5773 5481 struct IOV_111 *iovPtr; 5774 5482 struct fusion_context *fusion; 5775 - bool do_adp_reset = true; 5483 + bool intr_coalescing; 5484 + unsigned int num_msix_req; 5485 + u16 lnksta, speed; 5776 5486 5777 5487 fusion = instance->ctrl_context; 5778 5488 ··· 5795 5499 dev_printk(KERN_DEBUG, &instance->pdev->dev, "Failed to map IO mem\n"); 5796 5500 goto fail_ioremap; 5797 5501 } 5502 + 5503 + base_addr_phys = &base_addr; 5504 + dev_printk(KERN_DEBUG, &instance->pdev->dev, 5505 + "BAR:0x%lx BAR's base_addr(phys):%pa mapped virt_addr:0x%p\n", 5506 + instance->bar, base_addr_phys, instance->reg_set); 5798 5507 5799 5508 if (instance->adapter_type != MFI_SERIES) 5800 5509 instance->instancet = &megasas_instance_template_fusion; ··· 5827 5526 } 5828 5527 5829 5528 if (megasas_transition_to_ready(instance, 0)) { 5830 - if (instance->adapter_type >= INVADER_SERIES) { 5529 + dev_info(&instance->pdev->dev, 5530 + "Failed to transition controller to ready 
from %s!\n", 5531 + __func__); 5532 + if (instance->adapter_type != MFI_SERIES) { 5831 5533 status_reg = instance->instancet->read_fw_status_reg( 5832 5534 instance); 5833 - do_adp_reset = status_reg & MFI_RESET_ADAPTER; 5834 - } 5835 - 5836 - if (do_adp_reset) { 5535 + if (status_reg & MFI_RESET_ADAPTER) { 5536 + if (megasas_adp_reset_wait_for_ready 5537 + (instance, true, 0) == FAILED) 5538 + goto fail_ready_state; 5539 + } else { 5540 + goto fail_ready_state; 5541 + } 5542 + } else { 5837 5543 atomic_set(&instance->fw_reset_no_pci_access, 1); 5838 5544 instance->instancet->adp_reset 5839 5545 (instance, instance->reg_set); 5840 5546 atomic_set(&instance->fw_reset_no_pci_access, 0); 5841 - dev_info(&instance->pdev->dev, 5842 - "FW restarted successfully from %s!\n", 5843 - __func__); 5844 5547 5845 5548 /*waiting for about 30 second before retry*/ 5846 5549 ssleep(30); 5847 5550 5848 5551 if (megasas_transition_to_ready(instance, 0)) 5849 5552 goto fail_ready_state; 5850 - } else { 5851 - goto fail_ready_state; 5852 5553 } 5554 + 5555 + dev_info(&instance->pdev->dev, 5556 + "FW restarted successfully from %s!\n", 5557 + __func__); 5853 5558 } 5854 5559 5855 5560 megasas_init_ctrl_params(instance); ··· 5880 5573 MR_MAX_RAID_MAP_SIZE_MASK); 5881 5574 } 5882 5575 5576 + switch (instance->adapter_type) { 5577 + case VENTURA_SERIES: 5578 + fusion->pcie_bw_limitation = true; 5579 + break; 5580 + case AERO_SERIES: 5581 + fusion->r56_div_offload = true; 5582 + break; 5583 + default: 5584 + break; 5585 + } 5586 + 5883 5587 /* Check if MSI-X is supported while in ready state */ 5884 5588 msix_enable = (instance->instancet->read_fw_status_reg(instance) & 5885 5589 0x4000000) >> 0x1a; 5886 5590 if (msix_enable && !msix_disable) { 5887 - int irq_flags = PCI_IRQ_MSIX; 5888 5591 5889 5592 scratch_pad_1 = megasas_readl 5890 5593 (instance, &instance->reg_set->outbound_scratch_pad_1); ··· 5904 5587 /* Thunderbolt Series*/ 5905 5588 instance->msix_vectors = (scratch_pad_1 5906 
5589 & MR_MAX_REPLY_QUEUES_OFFSET) + 1; 5907 - fw_msix_count = instance->msix_vectors; 5908 5590 } else { 5909 5591 instance->msix_vectors = ((scratch_pad_1 5910 5592 & MR_MAX_REPLY_QUEUES_EXT_OFFSET) ··· 5932 5616 if (rdpq_enable) 5933 5617 instance->is_rdpq = (scratch_pad_1 & MR_RDPQ_MODE_OFFSET) ? 5934 5618 1 : 0; 5935 - fw_msix_count = instance->msix_vectors; 5619 + 5620 + if (!instance->msix_combined) { 5621 + instance->msix_load_balance = true; 5622 + instance->smp_affinity_enable = false; 5623 + } 5624 + 5936 5625 /* Save 1-15 reply post index address to local memory 5937 5626 * Index 0 is already saved from reg offset 5938 5627 * MPI2_REPLY_POST_HOST_INDEX_OFFSET ··· 5950 5629 + (loop * 0x10)); 5951 5630 } 5952 5631 } 5632 + 5633 + dev_info(&instance->pdev->dev, 5634 + "firmware supports msix\t: (%d)", 5635 + instance->msix_vectors); 5953 5636 if (msix_vectors) 5954 5637 instance->msix_vectors = min(msix_vectors, 5955 5638 instance->msix_vectors); 5956 5639 } else /* MFI adapters */ 5957 5640 instance->msix_vectors = 1; 5958 - /* Don't bother allocating more MSI-X vectors than cpus */ 5959 - instance->msix_vectors = min(instance->msix_vectors, 5960 - (unsigned int)num_online_cpus()); 5961 - if (smp_affinity_enable) 5962 - irq_flags |= PCI_IRQ_AFFINITY; 5963 - i = pci_alloc_irq_vectors(instance->pdev, 1, 5964 - instance->msix_vectors, irq_flags); 5965 - if (i > 0) 5966 - instance->msix_vectors = i; 5641 + 5642 + 5643 + /* 5644 + * For Aero (if some conditions are met), driver will configure a 5645 + * few additional reply queues with interrupt coalescing enabled. 5646 + * These queues with interrupt coalescing enabled are called 5647 + * High IOPS queues and rest of reply queues (based on number of 5648 + * logical CPUs) are termed as Low latency queues. 
5649 + * 5650 + * Total Number of reply queues = High IOPS queues + low latency queues 5651 + * 5652 + * For rest of fusion adapters, 1 additional reply queue will be 5653 + * reserved for management commands, rest of reply queues 5654 + * (based on number of logical CPUs) will be used for IOs and 5655 + * referenced as IO queues. 5656 + * Total Number of reply queues = 1 + IO queues 5657 + * 5658 + * MFI adapters supports single MSI-x so single reply queue 5659 + * will be used for IO and management commands. 5660 + */ 5661 + 5662 + intr_coalescing = (scratch_pad_1 & MR_INTR_COALESCING_SUPPORT_OFFSET) ? 5663 + true : false; 5664 + if (intr_coalescing && 5665 + (num_online_cpus() >= MR_HIGH_IOPS_QUEUE_COUNT) && 5666 + (instance->msix_vectors == MEGASAS_MAX_MSIX_QUEUES)) 5667 + instance->perf_mode = MR_BALANCED_PERF_MODE; 5967 5668 else 5968 - instance->msix_vectors = 0; 5669 + instance->perf_mode = MR_LATENCY_PERF_MODE; 5670 + 5671 + 5672 + if (instance->adapter_type == AERO_SERIES) { 5673 + pcie_capability_read_word(instance->pdev, PCI_EXP_LNKSTA, &lnksta); 5674 + speed = lnksta & PCI_EXP_LNKSTA_CLS; 5675 + 5676 + /* 5677 + * For Aero, if PCIe link speed is <16 GT/s, then driver should operate 5678 + * in latency perf mode and enable R1 PCI bandwidth algorithm 5679 + */ 5680 + if (speed < 0x4) { 5681 + instance->perf_mode = MR_LATENCY_PERF_MODE; 5682 + fusion->pcie_bw_limitation = true; 5683 + } 5684 + 5685 + /* 5686 + * Performance mode settings provided through module parameter-perf_mode will 5687 + * take affect only for: 5688 + * 1. Aero family of adapters. 5689 + * 2. When user sets module parameter- perf_mode in range of 0-2. 5690 + */ 5691 + if ((perf_mode >= MR_BALANCED_PERF_MODE) && 5692 + (perf_mode <= MR_LATENCY_PERF_MODE)) 5693 + instance->perf_mode = perf_mode; 5694 + /* 5695 + * If intr coalescing is not supported by controller FW, then IOPS 5696 + * and Balanced modes are not feasible. 
5697 + */ 5698 + if (!intr_coalescing) 5699 + instance->perf_mode = MR_LATENCY_PERF_MODE; 5700 + 5701 + } 5702 + 5703 + if (instance->perf_mode == MR_BALANCED_PERF_MODE) 5704 + instance->low_latency_index_start = 5705 + MR_HIGH_IOPS_QUEUE_COUNT; 5706 + else 5707 + instance->low_latency_index_start = 1; 5708 + 5709 + num_msix_req = num_online_cpus() + instance->low_latency_index_start; 5710 + 5711 + instance->msix_vectors = min(num_msix_req, 5712 + instance->msix_vectors); 5713 + 5714 + megasas_alloc_irq_vectors(instance); 5715 + if (!instance->msix_vectors) 5716 + instance->msix_load_balance = false; 5969 5717 } 5970 5718 /* 5971 5719 * MSI-X host index 0 is common for all adapter. ··· 6058 5668 6059 5669 megasas_setup_reply_map(instance); 6060 5670 6061 - dev_info(&instance->pdev->dev, 6062 - "firmware supports msix\t: (%d)", fw_msix_count); 6063 5671 dev_info(&instance->pdev->dev, 6064 5672 "current msix/online cpus\t: (%d/%d)\n", 6065 5673 instance->msix_vectors, (unsigned int)num_online_cpus()); ··· 6094 5706 megasas_setup_irqs_msix(instance, 1) : 6095 5707 megasas_setup_irqs_ioapic(instance)) 6096 5708 goto fail_init_adapter; 5709 + 5710 + if (instance->adapter_type != MFI_SERIES) 5711 + megasas_setup_irq_poll(instance); 6097 5712 6098 5713 instance->instancet->enable_intr(instance); 6099 5714 ··· 6224 5833 instance->UnevenSpanSupport ? "yes" : "no"); 6225 5834 dev_info(&instance->pdev->dev, "firmware crash dump : %s\n", 6226 5835 instance->crash_dump_drv_support ? "yes" : "no"); 6227 - dev_info(&instance->pdev->dev, "jbod sync map : %s\n", 6228 - instance->use_seqnum_jbod_fp ? "yes" : "no"); 5836 + dev_info(&instance->pdev->dev, "JBOD sequence map : %s\n", 5837 + instance->use_seqnum_jbod_fp ? 
"enabled" : "disabled"); 6229 5838 6230 5839 instance->max_sectors_per_req = instance->max_num_sge * 6231 5840 SGE_BUFFER_SIZE / 512; ··· 6588 6197 switch (dcmd_timeout_ocr_possible(instance)) { 6589 6198 case INITIATE_OCR: 6590 6199 cmd->flags |= DRV_DCMD_SKIP_REFIRE; 6200 + mutex_unlock(&instance->reset_mutex); 6591 6201 megasas_reset_fusion(instance->host, 6592 6202 MFI_IO_TIMEOUT_OCR); 6203 + mutex_lock(&instance->reset_mutex); 6593 6204 break; 6594 6205 case KILL_ADAPTER: 6595 6206 megaraid_sas_kill_hba(instance); ··· 7141 6748 INIT_LIST_HEAD(&instance->internal_reset_pending_q); 7142 6749 7143 6750 atomic_set(&instance->fw_outstanding, 0); 6751 + atomic64_set(&instance->total_io_count, 0); 7144 6752 7145 6753 init_waitqueue_head(&instance->int_cmd_wait_q); 7146 6754 init_waitqueue_head(&instance->abort_cmd_wait_q); ··· 7164 6770 instance->last_time = 0; 7165 6771 instance->disableOnlineCtrlReset = 1; 7166 6772 instance->UnevenSpanSupport = 0; 6773 + instance->smp_affinity_enable = smp_affinity_enable ? 
true : false; 6774 + instance->msix_load_balance = false; 7167 6775 7168 6776 if (instance->adapter_type != MFI_SERIES) 7169 6777 INIT_WORK(&instance->work_init, megasas_fusion_ocr_wq); ··· 7187 6791 u16 control = 0; 7188 6792 7189 6793 switch (pdev->device) { 6794 + case PCI_DEVICE_ID_LSI_AERO_10E0: 6795 + case PCI_DEVICE_ID_LSI_AERO_10E3: 6796 + case PCI_DEVICE_ID_LSI_AERO_10E4: 6797 + case PCI_DEVICE_ID_LSI_AERO_10E7: 6798 + dev_err(&pdev->dev, "Adapter is in non secure mode\n"); 6799 + return 1; 7190 6800 case PCI_DEVICE_ID_LSI_AERO_10E1: 7191 6801 case PCI_DEVICE_ID_LSI_AERO_10E5: 7192 6802 dev_info(&pdev->dev, "Adapter is in configurable secure mode\n"); ··· 7311 6909 dev_printk(KERN_DEBUG, &pdev->dev, "start aen failed\n"); 7312 6910 goto fail_start_aen; 7313 6911 } 6912 + 6913 + megasas_setup_debugfs(instance); 7314 6914 7315 6915 /* Get current SR-IOV LD/VF affiliation */ 7316 6916 if (instance->requestorId) ··· 7445 7041 static int 7446 7042 megasas_suspend(struct pci_dev *pdev, pm_message_t state) 7447 7043 { 7448 - struct Scsi_Host *host; 7449 7044 struct megasas_instance *instance; 7450 7045 7451 7046 instance = pci_get_drvdata(pdev); 7452 - host = instance->host; 7047 + 7048 + if (!instance) 7049 + return 0; 7050 + 7453 7051 instance->unload = 1; 7052 + 7053 + dev_info(&pdev->dev, "%s is called\n", __func__); 7454 7054 7455 7055 /* Shutdown SR-IOV heartbeat timer */ 7456 7056 if (instance->requestorId && !instance->skip_heartbeat_timer_del) ··· 7505 7097 int irq_flags = PCI_IRQ_LEGACY; 7506 7098 7507 7099 instance = pci_get_drvdata(pdev); 7100 + 7101 + if (!instance) 7102 + return 0; 7103 + 7508 7104 host = instance->host; 7509 7105 pci_set_power_state(pdev, PCI_D0); 7510 7106 pci_enable_wake(pdev, PCI_D0, 0); 7511 7107 pci_restore_state(pdev); 7512 7108 7109 + dev_info(&pdev->dev, "%s is called\n", __func__); 7513 7110 /* 7514 7111 * PCI prepping: enable device set bus mastering and dma mask 7515 7112 */ ··· 7546 7133 /* Now re-enable MSI-X */ 7547 
7134 if (instance->msix_vectors) { 7548 7135 irq_flags = PCI_IRQ_MSIX; 7549 - if (smp_affinity_enable) 7136 + if (instance->smp_affinity_enable) 7550 7137 irq_flags |= PCI_IRQ_AFFINITY; 7551 7138 } 7552 7139 rval = pci_alloc_irq_vectors(instance->pdev, 1, ··· 7583 7170 megasas_setup_irqs_msix(instance, 0) : 7584 7171 megasas_setup_irqs_ioapic(instance)) 7585 7172 goto fail_init_mfi; 7173 + 7174 + if (instance->adapter_type != MFI_SERIES) 7175 + megasas_setup_irq_poll(instance); 7586 7176 7587 7177 /* Re-launch SR-IOV heartbeat timer */ 7588 7178 if (instance->requestorId) { ··· 7677 7261 u32 pd_seq_map_sz; 7678 7262 7679 7263 instance = pci_get_drvdata(pdev); 7264 + 7265 + if (!instance) 7266 + return; 7267 + 7680 7268 host = instance->host; 7681 7269 fusion = instance->ctrl_context; 7682 7270 ··· 7794 7374 7795 7375 megasas_free_ctrl_mem(instance); 7796 7376 7377 + megasas_destroy_debugfs(instance); 7378 + 7797 7379 scsi_host_put(host); 7798 7380 7799 7381 pci_disable_device(pdev); ··· 7808 7386 static void megasas_shutdown(struct pci_dev *pdev) 7809 7387 { 7810 7388 struct megasas_instance *instance = pci_get_drvdata(pdev); 7389 + 7390 + if (!instance) 7391 + return; 7811 7392 7812 7393 instance->unload = 1; 7813 7394 ··· 7957 7532 7958 7533 if ((ioc->frame.hdr.cmd >= MFI_CMD_OP_COUNT) || 7959 7534 ((ioc->frame.hdr.cmd == MFI_CMD_NVME) && 7960 - !instance->support_nvme_passthru)) { 7535 + !instance->support_nvme_passthru) || 7536 + ((ioc->frame.hdr.cmd == MFI_CMD_TOOLBOX) && 7537 + !instance->support_pci_lane_margining)) { 7961 7538 dev_err(&instance->pdev->dev, 7962 7539 "Received invalid ioctl command 0x%x\n", 7963 7540 ioc->frame.hdr.cmd); ··· 7995 7568 opcode = le32_to_cpu(cmd->frame->dcmd.opcode); 7996 7569 7997 7570 if (opcode == MR_DCMD_CTRL_SHUTDOWN) { 7571 + mutex_lock(&instance->reset_mutex); 7998 7572 if (megasas_get_ctrl_info(instance) != DCMD_SUCCESS) { 7999 7573 megasas_return_cmd(instance, cmd); 7574 + mutex_unlock(&instance->reset_mutex); 8000 
7575 return -1; 8001 7576 } 7577 + mutex_unlock(&instance->reset_mutex); 8002 7578 } 8003 7579 8004 7580 if (opcode == MR_DRIVER_SET_APP_CRASHDUMP_MODE) { ··· 8443 8013 8444 8014 static DRIVER_ATTR_RO(support_nvme_encapsulation); 8445 8015 8016 + static ssize_t 8017 + support_pci_lane_margining_show(struct device_driver *dd, char *buf) 8018 + { 8019 + return sprintf(buf, "%u\n", support_pci_lane_margining); 8020 + } 8021 + 8022 + static DRIVER_ATTR_RO(support_pci_lane_margining); 8023 + 8446 8024 static inline void megasas_remove_scsi_device(struct scsi_device *sdev) 8447 8025 { 8448 8026 sdev_printk(KERN_INFO, sdev, "SCSI device is removed\n"); ··· 8599 8161 struct megasas_instance *instance = ev->instance; 8600 8162 union megasas_evt_class_locale class_locale; 8601 8163 int event_type = 0; 8602 - u32 seq_num, wait_time = MEGASAS_RESET_WAIT_TIME; 8164 + u32 seq_num; 8603 8165 int error; 8604 8166 u8 dcmd_ret = DCMD_SUCCESS; 8605 8167 ··· 8608 8170 kfree(ev); 8609 8171 return; 8610 8172 } 8611 - 8612 - /* Adjust event workqueue thread wait time for VF mode */ 8613 - if (instance->requestorId) 8614 - wait_time = MEGASAS_ROUTINE_WAIT_TIME_VF; 8615 8173 8616 8174 /* Don't run the event workqueue thread if OCR is running */ 8617 8175 mutex_lock(&instance->reset_mutex); ··· 8720 8286 support_poll_for_event = 2; 8721 8287 support_device_change = 1; 8722 8288 support_nvme_encapsulation = true; 8289 + support_pci_lane_margining = true; 8723 8290 8724 8291 memset(&megasas_mgmt_info, 0, sizeof(megasas_mgmt_info)); 8725 8292 ··· 8735 8300 } 8736 8301 8737 8302 megasas_mgmt_majorno = rval; 8303 + 8304 + megasas_init_debugfs(); 8738 8305 8739 8306 /* 8740 8307 * Register ourselves as PCI hotplug module ··· 8777 8340 if (rval) 8778 8341 goto err_dcf_support_nvme_encapsulation; 8779 8342 8343 + rval = driver_create_file(&megasas_pci_driver.driver, 8344 + &driver_attr_support_pci_lane_margining); 8345 + if (rval) 8346 + goto err_dcf_support_pci_lane_margining; 8347 + 8780 8348 
return rval; 8349 + 8350 + err_dcf_support_pci_lane_margining: 8351 + driver_remove_file(&megasas_pci_driver.driver, 8352 + &driver_attr_support_nvme_encapsulation); 8781 8353 8782 8354 err_dcf_support_nvme_encapsulation: 8783 8355 driver_remove_file(&megasas_pci_driver.driver, ··· 8806 8360 err_dcf_attr_ver: 8807 8361 pci_unregister_driver(&megasas_pci_driver); 8808 8362 err_pcidrv: 8363 + megasas_exit_debugfs(); 8809 8364 unregister_chrdev(megasas_mgmt_majorno, "megaraid_sas_ioctl"); 8810 8365 return rval; 8811 8366 } ··· 8827 8380 driver_remove_file(&megasas_pci_driver.driver, &driver_attr_version); 8828 8381 driver_remove_file(&megasas_pci_driver.driver, 8829 8382 &driver_attr_support_nvme_encapsulation); 8383 + driver_remove_file(&megasas_pci_driver.driver, 8384 + &driver_attr_support_pci_lane_margining); 8830 8385 8831 8386 pci_unregister_driver(&megasas_pci_driver); 8387 + megasas_exit_debugfs(); 8832 8388 unregister_chrdev(megasas_mgmt_majorno, "megaraid_sas_ioctl"); 8833 8389 } 8834 8390
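The MSI-X sizing logic in the init path above boils down to one decision: pick balanced mode only when firmware interrupt coalescing, the online CPU count, and the full MSI-X complement all line up, then size the vector request as online CPUs plus the reserved low-latency queues. A minimal userspace sketch of that selection; the constants MR_HIGH_IOPS_QUEUE_COUNT = 8 and MEGASAS_MAX_MSIX_QUEUES = 128 are assumptions here, not taken from this diff, and the sketch only models the arithmetic, not the pci_alloc_irq_vectors() call:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative values, assumed rather than quoted from the driver headers. */
#define MR_HIGH_IOPS_QUEUE_COUNT   8
#define MEGASAS_MAX_MSIX_QUEUES  128

enum perf_mode { MR_BALANCED_PERF_MODE, MR_IOPS_PERF_MODE, MR_LATENCY_PERF_MODE };

struct msix_plan {
	enum perf_mode perf_mode;
	unsigned int low_latency_index_start;
	unsigned int msix_vectors;
};

/* Re-derive the reply-queue split the init path computes. */
static struct msix_plan plan_reply_queues(bool intr_coalescing,
					  unsigned int online_cpus,
					  unsigned int fw_msix_vectors)
{
	struct msix_plan p;
	unsigned int num_msix_req;

	/* balanced mode needs coalescing, enough CPUs and all FW vectors */
	if (intr_coalescing &&
	    online_cpus >= MR_HIGH_IOPS_QUEUE_COUNT &&
	    fw_msix_vectors == MEGASAS_MAX_MSIX_QUEUES)
		p.perf_mode = MR_BALANCED_PERF_MODE;
	else
		p.perf_mode = MR_LATENCY_PERF_MODE;

	/* reserve high-IOPS queues (balanced) or one management queue */
	p.low_latency_index_start =
		(p.perf_mode == MR_BALANCED_PERF_MODE) ?
		MR_HIGH_IOPS_QUEUE_COUNT : 1;

	/* num_msix_req = num_online_cpus() + low_latency_index_start */
	num_msix_req = online_cpus + p.low_latency_index_start;
	p.msix_vectors = num_msix_req < fw_msix_vectors ?
			 num_msix_req : fw_msix_vectors;
	return p;
}
```

With 64 CPUs and the full 128-vector complement this yields balanced mode and a request for 72 vectors; anything short of the full complement falls back to latency mode.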
drivers/scsi/megaraid/megaraid_sas_debugfs.c (new file, +179 lines)
/*
 * Linux MegaRAID driver for SAS based RAID controllers
 *
 * Copyright (c) 2003-2018 LSI Corporation.
 * Copyright (c) 2003-2018 Avago Technologies.
 * Copyright (c) 2003-2018 Broadcom Inc.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 *
 * Authors: Broadcom Inc.
 *         Kashyap Desai <kashyap.desai@broadcom.com>
 *         Sumit Saxena <sumit.saxena@broadcom.com>
 *         Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
 *
 * Send feedback to: megaraidlinux.pdl@broadcom.com
 */
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/compat.h>
#include <linux/irq_poll.h>

#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

#include "megaraid_sas_fusion.h"
#include "megaraid_sas.h"

#ifdef CONFIG_DEBUG_FS
#include <linux/debugfs.h>

struct dentry *megasas_debugfs_root;

static ssize_t
megasas_debugfs_read(struct file *filp, char __user *ubuf, size_t cnt,
		     loff_t *ppos)
{
	struct megasas_debugfs_buffer *debug = filp->private_data;

	if (!debug || !debug->buf)
		return 0;

	return simple_read_from_buffer(ubuf, cnt, ppos, debug->buf, debug->len);
}

static int
megasas_debugfs_raidmap_open(struct inode *inode, struct file *file)
{
	struct megasas_instance *instance = inode->i_private;
	struct megasas_debugfs_buffer *debug;
	struct fusion_context *fusion;

	fusion = instance->ctrl_context;

	debug = kzalloc(sizeof(struct megasas_debugfs_buffer), GFP_KERNEL);
	if (!debug)
		return -ENOMEM;

	debug->buf = (void *)fusion->ld_drv_map[(instance->map_id & 1)];
	debug->len = fusion->drv_map_sz;
	file->private_data = debug;

	return 0;
}

static int
megasas_debugfs_release(struct inode *inode, struct file *file)
{
	struct megasas_debugfs_buffer *debug = file->private_data;

	if (!debug)
		return 0;

	file->private_data = NULL;
	kfree(debug);
	return 0;
}

static const struct file_operations megasas_debugfs_raidmap_fops = {
	.owner		= THIS_MODULE,
	.open		= megasas_debugfs_raidmap_open,
	.read		= megasas_debugfs_read,
	.release	= megasas_debugfs_release,
};

/*
 * megasas_init_debugfs : Create debugfs root for megaraid_sas driver
 */
void megasas_init_debugfs(void)
{
	megasas_debugfs_root = debugfs_create_dir("megaraid_sas", NULL);
	if (!megasas_debugfs_root)
		pr_info("Cannot create debugfs root\n");
}

/*
 * megasas_exit_debugfs : Remove debugfs root for megaraid_sas driver
 */
void megasas_exit_debugfs(void)
{
	debugfs_remove_recursive(megasas_debugfs_root);
}

/*
 * megasas_setup_debugfs : Setup debugfs per Fusion adapter
 * instance: Soft instance of adapter
 */
void
megasas_setup_debugfs(struct megasas_instance *instance)
{
	char name[64];
	struct fusion_context *fusion;

	fusion = instance->ctrl_context;

	if (fusion) {
		snprintf(name, sizeof(name),
			 "scsi_host%d", instance->host->host_no);
		if (!instance->debugfs_root) {
			instance->debugfs_root =
				debugfs_create_dir(name, megasas_debugfs_root);
			if (!instance->debugfs_root) {
				dev_err(&instance->pdev->dev,
					"Cannot create per adapter debugfs directory\n");
				return;
			}
		}

		snprintf(name, sizeof(name), "raidmap_dump");
		instance->raidmap_dump =
			debugfs_create_file(name, S_IRUGO,
					    instance->debugfs_root, instance,
					    &megasas_debugfs_raidmap_fops);
		if (!instance->raidmap_dump) {
			dev_err(&instance->pdev->dev,
				"Cannot create raidmap debugfs file\n");
			debugfs_remove(instance->debugfs_root);
			return;
		}
	}
}

/*
 * megasas_destroy_debugfs : Destroy debugfs per Fusion adapter
 * instance: Soft instance of adapter
 */
void megasas_destroy_debugfs(struct megasas_instance *instance)
{
	debugfs_remove_recursive(instance->debugfs_root);
}

#else
void megasas_init_debugfs(void)
{
}
void megasas_exit_debugfs(void)
{
}
void megasas_setup_debugfs(struct megasas_instance *instance)
{
}
void megasas_destroy_debugfs(struct megasas_instance *instance)
{
}
#endif /* CONFIG_DEBUG_FS */
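The read handler above leaves all offset and length policing to simple_read_from_buffer(), which is what makes the raidmap_dump file safe against short, repeated, and past-EOF reads. A userspace model of that clamping behavior, for illustration only (model_read() is a hypothetical helper mirroring the semantics, not a kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Userspace model of the simple_read_from_buffer() contract used by the
 * raidmap_dump handler: copy at most cnt bytes starting at *ppos from a
 * len-byte snapshot, advance *ppos, and return 0 at or past EOF.
 */
static size_t model_read(char *dst, size_t cnt, long long *ppos,
			 const char *buf, size_t len)
{
	size_t avail;

	if (*ppos < 0 || (size_t)*ppos >= len)
		return 0;		/* EOF or bad offset */
	avail = len - (size_t)*ppos;
	if (cnt > avail)
		cnt = avail;		/* short read at end of buffer */
	memcpy(dst, buf + *ppos, cnt);
	*ppos += cnt;
	return cnt;
}
```

Successive reads walk the snapshot until the final short read, after which every call returns 0, exactly the behavior a `cat` of the debugfs file relies on.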
drivers/scsi/megaraid/megaraid_sas_fp.c (+81, -1)
···
 #include <linux/compat.h>
 #include <linux/blkdev.h>
 #include <linux/poll.h>
+#include <linux/irq_poll.h>
 
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
···
 
 #define LB_PENDING_CMDS_DEFAULT 4
 static unsigned int lb_pending_cmds = LB_PENDING_CMDS_DEFAULT;
-module_param(lb_pending_cmds, int, S_IRUGO);
+module_param(lb_pending_cmds, int, 0444);
 MODULE_PARM_DESC(lb_pending_cmds, "Change raid-1 load balancing outstanding "
 	"threshold. Valid Values are 1-128. Default: 4");
···
 }
 
 /*
+ * mr_get_phy_params_r56_rmw - Calculate parameters for R56 CTIO write operation
+ * @instance:		Adapter soft state
+ * @ld:			LD index
+ * @stripNo:		Strip Number
+ * @io_info:		IO info structure pointer
+ * @pRAID_Context:	RAID context pointer
+ * @map:		RAID map pointer
+ *
+ * This routine calculates the logical arm, data arm, row number and parity arm
+ * for an R56 CTIO write operation.
+ */
+static void mr_get_phy_params_r56_rmw(struct megasas_instance *instance,
+				      u32 ld, u64 stripNo,
+				      struct IO_REQUEST_INFO *io_info,
+				      struct RAID_CONTEXT_G35 *pRAID_Context,
+				      struct MR_DRV_RAID_MAP_ALL *map)
+{
+	struct MR_LD_RAID *raid = MR_LdRaidGet(ld, map);
+	u8 span, dataArms, arms, dataArm, logArm;
+	s8 rightmostParityArm, PParityArm;
+	u64 rowNum;
+	u64 *pdBlock = &io_info->pdBlock;
+
+	dataArms = raid->rowDataSize;
+	arms = raid->rowSize;
+
+	rowNum = mega_div64_32(stripNo, dataArms);
+	/* parity disk arm, first arm is 0 */
+	rightmostParityArm = (arms - 1) - mega_mod64(rowNum, arms);
+
+	/* logical arm within row */
+	logArm = mega_mod64(stripNo, dataArms);
+	/* physical arm for data */
+	dataArm = mega_mod64((rightmostParityArm + 1 + logArm), arms);
+
+	if (raid->spanDepth == 1) {
+		span = 0;
+	} else {
+		span = (u8)MR_GetSpanBlock(ld, rowNum, pdBlock, map);
+		if (span == SPAN_INVALID)
+			return;
+	}
+
+	if (raid->level == 6) {
+		/* P parity arm; note this can go negative, adjust if so */
+		PParityArm = (arms - 2) - mega_mod64(rowNum, arms);
+
+		if (PParityArm < 0)
+			PParityArm += arms;
+
+		/* rightmostParityArm is P-parity for RAID 5 and Q-parity for RAID 6 */
+		pRAID_Context->flow_specific.r56_arm_map = rightmostParityArm;
+		pRAID_Context->flow_specific.r56_arm_map |=
+			(u16)(PParityArm << RAID_CTX_R56_P_ARM_SHIFT);
+	} else {
+		pRAID_Context->flow_specific.r56_arm_map |=
+			(u16)(rightmostParityArm << RAID_CTX_R56_P_ARM_SHIFT);
+	}
+
+	pRAID_Context->reg_lock_row_lba = cpu_to_le64(rowNum);
+	pRAID_Context->flow_specific.r56_arm_map |=
+		(u16)(logArm << RAID_CTX_R56_LOG_ARM_SHIFT);
+	cpu_to_le16s(&pRAID_Context->flow_specific.r56_arm_map);
+	pRAID_Context->span_arm = (span << RAID_CTX_SPANARM_SPAN_SHIFT) | dataArm;
+	pRAID_Context->raid_flags = (MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD <<
+				     MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT);
+}
+
+/*
 ******************************************************************************
 *
 * MR_BuildRaidContext function
···
 	stripSize = 1 << raid->stripeShift;
 	stripe_mask = stripSize-1;
 
+	io_info->data_arms = raid->rowDataSize;
 
 	/*
 	 * calculate starting row and stripe, and number of strips and rows
···
 	/* save pointer to raid->LUN array */
 	*raidLUN = raid->LUN;
 
+	/* Aero R5/6 Division Offload for WRITE */
+	if (fusion->r56_div_offload && (raid->level >= 5) && !isRead) {
+		mr_get_phy_params_r56_rmw(instance, ld, start_strip, io_info,
+					  (struct RAID_CONTEXT_G35 *)pRAID_Context,
+					  map);
+		return true;
+	}
 
 	/*Get Phy Params only if FP capable, or else leave it to MR firmware
 	  to do the calculation.*/
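The arm arithmetic in mr_get_phy_params_r56_rmw() is plain integer math once the kernel's mega_div64_32()/mega_mod64() helpers are swapped for native 64-bit division. A userspace mirror of just that strip-to-arm mapping (the arm counts below are illustrative, not read from a real RAID map):

```c
#include <assert.h>

/*
 * Userspace mirror of the rotating-parity arm math in
 * mr_get_phy_params_r56_rmw(); field names follow the driver's variables.
 */
struct r56_arms {
	unsigned int row;	/* rowNum */
	int q_arm;		/* rightmostParityArm */
	int p_arm;		/* PParityArm (meaningful for RAID 6) */
	unsigned int log_arm;	/* logical arm within the row */
	unsigned int data_arm;	/* physical arm holding the data strip */
};

static struct r56_arms r56_map_strip(unsigned long long strip_no,
				     unsigned int data_arms,
				     unsigned int arms)
{
	struct r56_arms a;

	a.row = (unsigned int)(strip_no / data_arms);
	/* parity rotates one arm per row, rightmost arm first */
	a.q_arm = (int)((arms - 1) - (a.row % arms));
	a.p_arm = (int)(arms - 2) - (int)(a.row % arms);
	if (a.p_arm < 0)
		a.p_arm += arms;	/* wrap, as the driver does */
	a.log_arm = (unsigned int)(strip_no % data_arms);
	/* data strips fill the arms after the rightmost parity arm */
	a.data_arm = ((unsigned int)a.q_arm + 1 + a.log_arm) % arms;
	return a;
}
```

With 2 data arms over 4 total arms (a small RAID 6 layout), strip 0 lands on arm 0 with Q on arm 3 and P on arm 2; one row later the parity pair has rotated by one arm.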
drivers/scsi/megaraid/megaraid_sas_fusion.c (+397, -154)
··· 35 35 #include <linux/poll.h> 36 36 #include <linux/vmalloc.h> 37 37 #include <linux/workqueue.h> 38 + #include <linux/irq_poll.h> 38 39 39 40 #include <scsi/scsi.h> 40 41 #include <scsi/scsi_cmnd.h> ··· 88 87 const volatile void __iomem *addr); 89 88 90 89 /** 90 + * megasas_adp_reset_wait_for_ready - initiate chip reset and wait for 91 + * controller to come to ready state 92 + * @instance - adapter's soft state 93 + * @do_adp_reset - If true, do a chip reset 94 + * @ocr_context - If called from OCR context this will 95 + * be set to 1, else 0 96 + * 97 + * This function initates a chip reset followed by a wait for controller to 98 + * transition to ready state. 99 + * During this, driver will block all access to PCI config space from userspace 100 + */ 101 + int 102 + megasas_adp_reset_wait_for_ready(struct megasas_instance *instance, 103 + bool do_adp_reset, 104 + int ocr_context) 105 + { 106 + int ret = FAILED; 107 + 108 + /* 109 + * Block access to PCI config space from userspace 110 + * when diag reset is initiated from driver 111 + */ 112 + if (megasas_dbg_lvl & OCR_DEBUG) 113 + dev_info(&instance->pdev->dev, 114 + "Block access to PCI config space %s %d\n", 115 + __func__, __LINE__); 116 + 117 + pci_cfg_access_lock(instance->pdev); 118 + 119 + if (do_adp_reset) { 120 + if (instance->instancet->adp_reset 121 + (instance, instance->reg_set)) 122 + goto out; 123 + } 124 + 125 + /* Wait for FW to become ready */ 126 + if (megasas_transition_to_ready(instance, ocr_context)) { 127 + dev_warn(&instance->pdev->dev, 128 + "Failed to transition controller to ready for scsi%d.\n", 129 + instance->host->host_no); 130 + goto out; 131 + } 132 + 133 + ret = SUCCESS; 134 + out: 135 + if (megasas_dbg_lvl & OCR_DEBUG) 136 + dev_info(&instance->pdev->dev, 137 + "Unlock access to PCI config space %s %d\n", 138 + __func__, __LINE__); 139 + 140 + pci_cfg_access_unlock(instance->pdev); 141 + 142 + return ret; 143 + } 144 + 145 + /** 91 146 * megasas_check_same_4gb_region - 
check if allocation 92 147 * crosses same 4GB boundary or not 93 148 * @instance - adapter's soft instance ··· 190 133 writel(~MFI_FUSION_ENABLE_INTERRUPT_MASK, &(regs)->outbound_intr_mask); 191 134 192 135 /* Dummy readl to force pci flush */ 193 - readl(&regs->outbound_intr_mask); 136 + dev_info(&instance->pdev->dev, "%s is called outbound_intr_mask:0x%08x\n", 137 + __func__, readl(&regs->outbound_intr_mask)); 194 138 } 195 139 196 140 /** ··· 202 144 megasas_disable_intr_fusion(struct megasas_instance *instance) 203 145 { 204 146 u32 mask = 0xFFFFFFFF; 205 - u32 status; 206 147 struct megasas_register_set __iomem *regs; 207 148 regs = instance->reg_set; 208 149 instance->mask_interrupts = 1; 209 150 210 151 writel(mask, &regs->outbound_intr_mask); 211 152 /* Dummy readl to force pci flush */ 212 - status = readl(&regs->outbound_intr_mask); 153 + dev_info(&instance->pdev->dev, "%s is called outbound_intr_mask:0x%08x\n", 154 + __func__, readl(&regs->outbound_intr_mask)); 213 155 } 214 156 215 157 int ··· 265 207 } 266 208 267 209 /** 268 - * megasas_fire_cmd_fusion - Sends command to the FW 269 - * @instance: Adapter soft state 270 - * @req_desc: 64bit Request descriptor 271 - * 272 - * Perform PCI Write. 
210 + * megasas_write_64bit_req_desc - PCI writes 64bit request descriptor 211 + * @instance: Adapter soft state 212 + * @req_desc: 64bit Request descriptor 273 213 */ 274 - 275 214 static void 276 - megasas_fire_cmd_fusion(struct megasas_instance *instance, 215 + megasas_write_64bit_req_desc(struct megasas_instance *instance, 277 216 union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc) 278 217 { 279 218 #if defined(writeq) && defined(CONFIG_64BIT) 280 219 u64 req_data = (((u64)le32_to_cpu(req_desc->u.high) << 32) | 281 220 le32_to_cpu(req_desc->u.low)); 282 - 283 221 writeq(req_data, &instance->reg_set->inbound_low_queue_port); 284 222 #else 285 223 unsigned long flags; ··· 286 232 &instance->reg_set->inbound_high_queue_port); 287 233 spin_unlock_irqrestore(&instance->hba_lock, flags); 288 234 #endif 235 + } 236 + 237 + /** 238 + * megasas_fire_cmd_fusion - Sends command to the FW 239 + * @instance: Adapter soft state 240 + * @req_desc: 32bit or 64bit Request descriptor 241 + * 242 + * Perform PCI Write. AERO SERIES supports 32 bit Descriptor. 243 + * Prior to AERO_SERIES support 64 bit Descriptor. 
244 + */ 245 + static void 246 + megasas_fire_cmd_fusion(struct megasas_instance *instance, 247 + union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc) 248 + { 249 + if (instance->atomic_desc_support) 250 + writel(le32_to_cpu(req_desc->u.low), 251 + &instance->reg_set->inbound_single_queue_port); 252 + else 253 + megasas_write_64bit_req_desc(instance, req_desc); 289 254 } 290 255 291 256 /** ··· 997 924 { 998 925 int i; 999 926 struct megasas_header *frame_hdr = &cmd->frame->hdr; 927 + u32 status_reg; 1000 928 1001 929 u32 msecs = seconds * 1000; 1002 930 ··· 1007 933 for (i = 0; (i < msecs) && (frame_hdr->cmd_status == 0xff); i += 20) { 1008 934 rmb(); 1009 935 msleep(20); 936 + if (!(i % 5000)) { 937 + status_reg = instance->instancet->read_fw_status_reg(instance) 938 + & MFI_STATE_MASK; 939 + if (status_reg == MFI_STATE_FAULT) 940 + break; 941 + } 1010 942 } 1011 943 1012 944 if (frame_hdr->cmd_status == MFI_STAT_INVALID_STATUS) ··· 1046 966 u32 scratch_pad_1; 1047 967 ktime_t time; 1048 968 bool cur_fw_64bit_dma_capable; 969 + bool cur_intr_coalescing; 1049 970 1050 971 fusion = instance->ctrl_context; 1051 972 ··· 1079 998 ret = 1; 1080 999 goto fail_fw_init; 1081 1000 } 1001 + 1002 + cur_intr_coalescing = (scratch_pad_1 & MR_INTR_COALESCING_SUPPORT_OFFSET) ? 1003 + true : false; 1004 + 1005 + if ((instance->low_latency_index_start == 1006 + MR_HIGH_IOPS_QUEUE_COUNT) && cur_intr_coalescing) 1007 + instance->perf_mode = MR_BALANCED_PERF_MODE; 1008 + 1009 + dev_info(&instance->pdev->dev, "Performance mode :%s\n", 1010 + MEGASAS_PERF_MODE_2STR(instance->perf_mode)); 1082 1011 1083 1012 instance->fw_sync_cache_support = (scratch_pad_1 & 1084 1013 MR_CAN_HANDLE_SYNC_CACHE_OFFSET) ? 
1 : 0; ··· 1174 1083 cpu_to_le32(lower_32_bits(ioc_init_handle)); 1175 1084 init_frame->data_xfer_len = cpu_to_le32(sizeof(struct MPI2_IOC_INIT_REQUEST)); 1176 1085 1086 + /* 1087 + * Each bit in replyqueue_mask represents one group of MSI-x vectors 1088 + * (each group has 8 vectors) 1089 + */ 1090 + switch (instance->perf_mode) { 1091 + case MR_BALANCED_PERF_MODE: 1092 + init_frame->replyqueue_mask = 1093 + cpu_to_le16(~(~0 << instance->low_latency_index_start/8)); 1094 + break; 1095 + case MR_IOPS_PERF_MODE: 1096 + init_frame->replyqueue_mask = 1097 + cpu_to_le16(~(~0 << instance->msix_vectors/8)); 1098 + break; 1099 + } 1100 + 1101 + 1177 1102 req_desc.u.low = cpu_to_le32(lower_32_bits(cmd->frame_phys_addr)); 1178 1103 req_desc.u.high = cpu_to_le32(upper_32_bits(cmd->frame_phys_addr)); 1179 1104 req_desc.MFAIo.RequestFlags = ··· 1208 1101 break; 1209 1102 } 1210 1103 1211 - megasas_fire_cmd_fusion(instance, &req_desc); 1104 + /* For AERO also, IOC_INIT requires 64 bit descriptor write */ 1105 + megasas_write_64bit_req_desc(instance, &req_desc); 1212 1106 1213 1107 wait_and_poll(instance, cmd, MFI_IO_TIMEOUT_SECS); 1214 1108 ··· 1217 1109 if (frame_hdr->cmd_status != 0) { 1218 1110 ret = 1; 1219 1111 goto fail_fw_init; 1112 + } 1113 + 1114 + if (instance->adapter_type >= AERO_SERIES) { 1115 + scratch_pad_1 = megasas_readl 1116 + (instance, &instance->reg_set->outbound_scratch_pad_1); 1117 + 1118 + instance->atomic_desc_support = 1119 + (scratch_pad_1 & MR_ATOMIC_DESCRIPTOR_SUPPORT_OFFSET) ? 1 : 0; 1120 + 1121 + dev_info(&instance->pdev->dev, "FW supports atomic descriptor\t: %s\n", 1122 + instance->atomic_desc_support ? 
"Yes" : "No"); 1220 1123 } 1221 1124 1222 1125 return 0; ··· 1252 1133 int 1253 1134 megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend) { 1254 1135 int ret = 0; 1255 - u32 pd_seq_map_sz; 1136 + size_t pd_seq_map_sz; 1256 1137 struct megasas_cmd *cmd; 1257 1138 struct megasas_dcmd_frame *dcmd; 1258 1139 struct fusion_context *fusion = instance->ctrl_context; ··· 1261 1142 1262 1143 pd_sync = (void *)fusion->pd_seq_sync[(instance->pd_seq_map_id & 1)]; 1263 1144 pd_seq_h = fusion->pd_seq_phys[(instance->pd_seq_map_id & 1)]; 1264 - pd_seq_map_sz = sizeof(struct MR_PD_CFG_SEQ_NUM_SYNC) + 1265 - (sizeof(struct MR_PD_CFG_SEQ) * 1266 - (MAX_PHYSICAL_DEVICES - 1)); 1145 + pd_seq_map_sz = struct_size(pd_sync, seq, MAX_PHYSICAL_DEVICES - 1); 1267 1146 1268 1147 cmd = megasas_get_cmd(instance); 1269 1148 if (!cmd) { ··· 1742 1625 struct fusion_context *fusion; 1743 1626 u32 scratch_pad_1; 1744 1627 int i = 0, count; 1628 + u32 status_reg; 1745 1629 1746 1630 fusion = instance->ctrl_context; 1747 1631 ··· 1825 1707 if (megasas_alloc_cmds_fusion(instance)) 1826 1708 goto fail_alloc_cmds; 1827 1709 1828 - if (megasas_ioc_init_fusion(instance)) 1829 - goto fail_ioc_init; 1710 + if (megasas_ioc_init_fusion(instance)) { 1711 + status_reg = instance->instancet->read_fw_status_reg(instance); 1712 + if (((status_reg & MFI_STATE_MASK) == MFI_STATE_FAULT) && 1713 + (status_reg & MFI_RESET_ADAPTER)) { 1714 + /* Do a chip reset and then retry IOC INIT once */ 1715 + if (megasas_adp_reset_wait_for_ready 1716 + (instance, true, 0) == FAILED) 1717 + goto fail_ioc_init; 1718 + 1719 + if (megasas_ioc_init_fusion(instance)) 1720 + goto fail_ioc_init; 1721 + } else { 1722 + goto fail_ioc_init; 1723 + } 1724 + } 1830 1725 1831 1726 megasas_display_intel_branding(instance); 1832 1727 if (megasas_get_ctrl_info(instance)) { ··· 1851 1720 1852 1721 instance->flag_ieee = 1; 1853 1722 instance->r1_ldio_hint_default = MR_R1_LDIO_PIGGYBACK_DEFAULT; 1723 + instance->threshold_reply_count 
= instance->max_fw_cmds / 4; 1854 1724 fusion->fast_path_io = 0; 1855 1725 1856 1726 if (megasas_allocate_raid_maps(instance)) ··· 2102 1970 mega_mod64(sg_dma_address(sg_scmd), 2103 1971 mr_nvme_pg_size)) { 2104 1972 build_prp = false; 2105 - atomic_inc(&instance->sge_holes_type1); 2106 1973 break; 2107 1974 } 2108 1975 } ··· 2111 1980 sg_dma_len(sg_scmd)), 2112 1981 mr_nvme_pg_size))) { 2113 1982 build_prp = false; 2114 - atomic_inc(&instance->sge_holes_type2); 2115 1983 break; 2116 1984 } 2117 1985 } ··· 2119 1989 if (mega_mod64(sg_dma_address(sg_scmd), 2120 1990 mr_nvme_pg_size)) { 2121 1991 build_prp = false; 2122 - atomic_inc(&instance->sge_holes_type3); 2123 1992 break; 2124 1993 } 2125 1994 } ··· 2251 2122 main_chain_element->Length = 2252 2123 cpu_to_le32(num_prp_in_chain * sizeof(u64)); 2253 2124 2254 - atomic_inc(&instance->prp_sgl); 2255 2125 return build_prp; 2256 2126 } 2257 2127 ··· 2325 2197 memset(sgl_ptr, 0, instance->max_chain_frame_sz); 2326 2198 } 2327 2199 } 2328 - atomic_inc(&instance->ieee_sgl); 2329 2200 } 2330 2201 2331 2202 /** ··· 2636 2509 * 2637 2510 */ 2638 2511 static void 2639 - megasas_set_raidflag_cpu_affinity(union RAID_CONTEXT_UNION *praid_context, 2640 - struct MR_LD_RAID *raid, bool fp_possible, 2641 - u8 is_read, u32 scsi_buff_len) 2512 + megasas_set_raidflag_cpu_affinity(struct fusion_context *fusion, 2513 + union RAID_CONTEXT_UNION *praid_context, 2514 + struct MR_LD_RAID *raid, bool fp_possible, 2515 + u8 is_read, u32 scsi_buff_len) 2642 2516 { 2643 2517 u8 cpu_sel = MR_RAID_CTX_CPUSEL_0; 2644 2518 struct RAID_CONTEXT_G35 *rctx_g35; ··· 2697 2569 * vs MR_RAID_FLAGS_IO_SUB_TYPE_CACHE_BYPASS. 2698 2570 * IO Subtype is not bitmap. 
2699 2571 */ 2700 - if ((raid->level == 1) && (!is_read)) { 2701 - if (scsi_buff_len > MR_LARGE_IO_MIN_SIZE) 2702 - praid_context->raid_context_g35.raid_flags = 2703 - (MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT 2704 - << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT); 2572 + if ((fusion->pcie_bw_limitation) && (raid->level == 1) && (!is_read) && 2573 + (scsi_buff_len > MR_LARGE_IO_MIN_SIZE)) { 2574 + praid_context->raid_context_g35.raid_flags = 2575 + (MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT 2576 + << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT); 2705 2577 } 2706 2578 } 2707 2579 ··· 2807 2679 io_info.r1_alt_dev_handle = MR_DEVHANDLE_INVALID; 2808 2680 scsi_buff_len = scsi_bufflen(scp); 2809 2681 io_request->DataLength = cpu_to_le32(scsi_buff_len); 2682 + io_info.data_arms = 1; 2810 2683 2811 2684 if (scp->sc_data_direction == DMA_FROM_DEVICE) 2812 2685 io_info.isRead = 1; ··· 2827 2698 fp_possible = (io_info.fpOkForIo > 0) ? true : false; 2828 2699 } 2829 2700 2830 - cmd->request_desc->SCSIIO.MSIxIndex = 2831 - instance->reply_map[raw_smp_processor_id()]; 2701 + if ((instance->perf_mode == MR_BALANCED_PERF_MODE) && 2702 + atomic_read(&scp->device->device_busy) > 2703 + (io_info.data_arms * MR_DEVICE_HIGH_IOPS_DEPTH)) 2704 + cmd->request_desc->SCSIIO.MSIxIndex = 2705 + mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) / 2706 + MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start); 2707 + else if (instance->msix_load_balance) 2708 + cmd->request_desc->SCSIIO.MSIxIndex = 2709 + (mega_mod64(atomic64_add_return(1, &instance->total_io_count), 2710 + instance->msix_vectors)); 2711 + else 2712 + cmd->request_desc->SCSIIO.MSIxIndex = 2713 + instance->reply_map[raw_smp_processor_id()]; 2832 2714 2833 2715 if (instance->adapter_type >= VENTURA_SERIES) { 2834 2716 /* FP for Optimal raid level 1. 
··· 2857 2717 (instance->host->can_queue)) { 2858 2718 fp_possible = false; 2859 2719 atomic_dec(&instance->fw_outstanding); 2860 - } else if ((scsi_buff_len > MR_LARGE_IO_MIN_SIZE) || 2861 - (atomic_dec_if_positive(&mrdev_priv->r1_ldio_hint) > 0)) { 2720 + } else if (fusion->pcie_bw_limitation && 2721 + ((scsi_buff_len > MR_LARGE_IO_MIN_SIZE) || 2722 + (atomic_dec_if_positive(&mrdev_priv->r1_ldio_hint) > 0))) { 2862 2723 fp_possible = false; 2863 2724 atomic_dec(&instance->fw_outstanding); 2864 2725 if (scsi_buff_len > MR_LARGE_IO_MIN_SIZE) ··· 2884 2743 2885 2744 /* If raid is NULL, set CPU affinity to default CPU0 */ 2886 2745 if (raid) 2887 - megasas_set_raidflag_cpu_affinity(&io_request->RaidContext, 2746 + megasas_set_raidflag_cpu_affinity(fusion, &io_request->RaidContext, 2888 2747 raid, fp_possible, io_info.isRead, 2889 2748 scsi_buff_len); 2890 2749 else ··· 2900 2759 (MPI2_REQ_DESCRIPT_FLAGS_FP_IO 2901 2760 << MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); 2902 2761 if (instance->adapter_type == INVADER_SERIES) { 2903 - if (rctx->reg_lock_flags == REGION_TYPE_UNUSED) 2904 - cmd->request_desc->SCSIIO.RequestFlags = 2905 - (MEGASAS_REQ_DESCRIPT_FLAGS_NO_LOCK << 2906 - MEGASAS_REQ_DESCRIPT_FLAGS_TYPE_SHIFT); 2907 2762 rctx->type = MPI2_TYPE_CUDA; 2908 2763 rctx->nseg = 0x1; 2909 2764 io_request->IoFlags |= cpu_to_le16(MPI25_SAS_DEVICE0_FLAGS_ENABLED_FAST_PATH); ··· 3107 2970 << MR_RAID_CTX_RAID_FLAGS_IO_SUB_TYPE_SHIFT; 3108 2971 3109 2972 /* If FW supports PD sequence number */ 3110 - if (instance->use_seqnum_jbod_fp && 3111 - instance->pd_list[pd_index].driveType == TYPE_DISK) { 3112 - /* TgtId must be incremented by 255 as jbod seq number is index 3113 - * below raid map 3114 - */ 3115 - /* More than 256 PD/JBOD support for Ventura */ 3116 - if (instance->support_morethan256jbod) 3117 - pRAID_Context->virtual_disk_tgt_id = 3118 - pd_sync->seq[pd_index].pd_target_id; 3119 - else 3120 - pRAID_Context->virtual_disk_tgt_id = 3121 - cpu_to_le16(device_id + 
(MAX_PHYSICAL_DEVICES - 1)); 3122 - pRAID_Context->config_seq_num = pd_sync->seq[pd_index].seqNum; 3123 - io_request->DevHandle = pd_sync->seq[pd_index].devHandle; 3124 - if (instance->adapter_type >= VENTURA_SERIES) { 3125 - io_request->RaidContext.raid_context_g35.routing_flags |= 3126 - (1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); 3127 - io_request->RaidContext.raid_context_g35.nseg_type |= 3128 - (1 << RAID_CONTEXT_NSEG_SHIFT); 3129 - io_request->RaidContext.raid_context_g35.nseg_type |= 3130 - (MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT); 2973 + if (instance->support_seqnum_jbod_fp) { 2974 + if (instance->use_seqnum_jbod_fp && 2975 + instance->pd_list[pd_index].driveType == TYPE_DISK) { 2976 + 2977 + /* More than 256 PD/JBOD support for Ventura */ 2978 + if (instance->support_morethan256jbod) 2979 + pRAID_Context->virtual_disk_tgt_id = 2980 + pd_sync->seq[pd_index].pd_target_id; 2981 + else 2982 + pRAID_Context->virtual_disk_tgt_id = 2983 + cpu_to_le16(device_id + 2984 + (MAX_PHYSICAL_DEVICES - 1)); 2985 + pRAID_Context->config_seq_num = 2986 + pd_sync->seq[pd_index].seqNum; 2987 + io_request->DevHandle = 2988 + pd_sync->seq[pd_index].devHandle; 2989 + if (instance->adapter_type >= VENTURA_SERIES) { 2990 + io_request->RaidContext.raid_context_g35.routing_flags |= 2991 + (1 << MR_RAID_CTX_ROUTINGFLAGS_SQN_SHIFT); 2992 + io_request->RaidContext.raid_context_g35.nseg_type |= 2993 + (1 << RAID_CONTEXT_NSEG_SHIFT); 2994 + io_request->RaidContext.raid_context_g35.nseg_type |= 2995 + (MPI2_TYPE_CUDA << RAID_CONTEXT_TYPE_SHIFT); 2996 + } else { 2997 + pRAID_Context->type = MPI2_TYPE_CUDA; 2998 + pRAID_Context->nseg = 0x1; 2999 + pRAID_Context->reg_lock_flags |= 3000 + (MR_RL_FLAGS_SEQ_NUM_ENABLE | 3001 + MR_RL_FLAGS_GRANT_DESTINATION_CUDA); 3002 + } 3131 3003 } else { 3132 - pRAID_Context->type = MPI2_TYPE_CUDA; 3133 - pRAID_Context->nseg = 0x1; 3134 - pRAID_Context->reg_lock_flags |= 3135 - (MR_RL_FLAGS_SEQ_NUM_ENABLE|MR_RL_FLAGS_GRANT_DESTINATION_CUDA); 3004 + 
pRAID_Context->virtual_disk_tgt_id = 3005 + cpu_to_le16(device_id + 3006 + (MAX_PHYSICAL_DEVICES - 1)); 3007 + pRAID_Context->config_seq_num = 0; 3008 + io_request->DevHandle = cpu_to_le16(0xFFFF); 3136 3009 } 3137 - } else if (fusion->fast_path_io) { 3138 - pRAID_Context->virtual_disk_tgt_id = cpu_to_le16(device_id); 3139 - pRAID_Context->config_seq_num = 0; 3140 - local_map_ptr = fusion->ld_drv_map[(instance->map_id & 1)]; 3141 - io_request->DevHandle = 3142 - local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl; 3143 3010 } else { 3144 - /* Want to send all IO via FW path */ 3145 3011 pRAID_Context->virtual_disk_tgt_id = cpu_to_le16(device_id); 3146 3012 pRAID_Context->config_seq_num = 0; 3147 - io_request->DevHandle = cpu_to_le16(0xFFFF); 3013 + 3014 + if (fusion->fast_path_io) { 3015 + local_map_ptr = 3016 + fusion->ld_drv_map[(instance->map_id & 1)]; 3017 + io_request->DevHandle = 3018 + local_map_ptr->raidMap.devHndlInfo[device_id].curDevHdl; 3019 + } else { 3020 + io_request->DevHandle = cpu_to_le16(0xFFFF); 3021 + } 3148 3022 } 3149 3023 3150 3024 cmd->request_desc->SCSIIO.DevHandle = io_request->DevHandle; 3151 3025 3152 - cmd->request_desc->SCSIIO.MSIxIndex = 3153 - instance->reply_map[raw_smp_processor_id()]; 3026 + if ((instance->perf_mode == MR_BALANCED_PERF_MODE) && 3027 + atomic_read(&scmd->device->device_busy) > MR_DEVICE_HIGH_IOPS_DEPTH) 3028 + cmd->request_desc->SCSIIO.MSIxIndex = 3029 + mega_mod64((atomic64_add_return(1, &instance->high_iops_outstanding) / 3030 + MR_HIGH_IOPS_BATCH_COUNT), instance->low_latency_index_start); 3031 + else if (instance->msix_load_balance) 3032 + cmd->request_desc->SCSIIO.MSIxIndex = 3033 + (mega_mod64(atomic64_add_return(1, &instance->total_io_count), 3034 + instance->msix_vectors)); 3035 + else 3036 + cmd->request_desc->SCSIIO.MSIxIndex = 3037 + instance->reply_map[raw_smp_processor_id()]; 3154 3038 3155 3039 if (!fp_possible) { 3156 3040 /* system pd firmware path */ ··· 3351 3193 
r1_cmd->request_desc->SCSIIO.DevHandle = cmd->r1_alt_dev_handle; 3352 3194 r1_cmd->io_request->DevHandle = cmd->r1_alt_dev_handle; 3353 3195 r1_cmd->r1_alt_dev_handle = cmd->io_request->DevHandle; 3354 - cmd->io_request->RaidContext.raid_context_g35.smid.peer_smid = 3196 + cmd->io_request->RaidContext.raid_context_g35.flow_specific.peer_smid = 3355 3197 cpu_to_le16(r1_cmd->index); 3356 - r1_cmd->io_request->RaidContext.raid_context_g35.smid.peer_smid = 3198 + r1_cmd->io_request->RaidContext.raid_context_g35.flow_specific.peer_smid = 3357 3199 cpu_to_le16(cmd->index); 3358 3200 /*MSIxIndex of both commands request descriptors should be same*/ 3359 3201 r1_cmd->request_desc->SCSIIO.MSIxIndex = ··· 3471 3313 3472 3314 rctx_g35 = &cmd->io_request->RaidContext.raid_context_g35; 3473 3315 fusion = instance->ctrl_context; 3474 - peer_smid = le16_to_cpu(rctx_g35->smid.peer_smid); 3316 + peer_smid = le16_to_cpu(rctx_g35->flow_specific.peer_smid); 3475 3317 3476 3318 r1_cmd = fusion->cmd_list[peer_smid - 1]; 3477 3319 scmd_local = cmd->scmd; ··· 3511 3353 * Completes all commands that is in reply descriptor queue 3512 3354 */ 3513 3355 int 3514 - complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex) 3356 + complete_cmd_fusion(struct megasas_instance *instance, u32 MSIxIndex, 3357 + struct megasas_irq_context *irq_context) 3515 3358 { 3516 3359 union MPI2_REPLY_DESCRIPTORS_UNION *desc; 3517 3360 struct MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR *reply_desc; ··· 3645 3486 * number of reply counts and still there are more replies in reply queue 3646 3487 * pending to be completed 3647 3488 */ 3648 - if (threshold_reply_count >= THRESHOLD_REPLY_COUNT) { 3489 + if (threshold_reply_count >= instance->threshold_reply_count) { 3649 3490 if (instance->msix_combined) 3650 3491 writel(((MSIxIndex & 0x7) << 24) | 3651 3492 fusion->last_reply_idx[MSIxIndex], ··· 3655 3496 fusion->last_reply_idx[MSIxIndex], 3656 3497 instance->reply_post_host_index_addr[0]); 3657 3498 
threshold_reply_count = 0; 3499 + if (irq_context) { 3500 + if (!irq_context->irq_poll_scheduled) { 3501 + irq_context->irq_poll_scheduled = true; 3502 + irq_context->irq_line_enable = true; 3503 + irq_poll_sched(&irq_context->irqpoll); 3504 + } 3505 + return num_completed; 3506 + } 3658 3507 } 3659 3508 } 3660 3509 3661 - if (!num_completed) 3662 - return IRQ_NONE; 3510 + if (num_completed) { 3511 + wmb(); 3512 + if (instance->msix_combined) 3513 + writel(((MSIxIndex & 0x7) << 24) | 3514 + fusion->last_reply_idx[MSIxIndex], 3515 + instance->reply_post_host_index_addr[MSIxIndex/8]); 3516 + else 3517 + writel((MSIxIndex << 24) | 3518 + fusion->last_reply_idx[MSIxIndex], 3519 + instance->reply_post_host_index_addr[0]); 3520 + megasas_check_and_restore_queue_depth(instance); 3521 + } 3522 + return num_completed; 3523 + } 3663 3524 3664 - wmb(); 3665 - if (instance->msix_combined) 3666 - writel(((MSIxIndex & 0x7) << 24) | 3667 - fusion->last_reply_idx[MSIxIndex], 3668 - instance->reply_post_host_index_addr[MSIxIndex/8]); 3669 - else 3670 - writel((MSIxIndex << 24) | 3671 - fusion->last_reply_idx[MSIxIndex], 3672 - instance->reply_post_host_index_addr[0]); 3673 - megasas_check_and_restore_queue_depth(instance); 3674 - return IRQ_HANDLED; 3525 + /** 3526 + * megasas_enable_irq_poll() - enable irqpoll 3527 + */ 3528 + static void megasas_enable_irq_poll(struct megasas_instance *instance) 3529 + { 3530 + u32 count, i; 3531 + struct megasas_irq_context *irq_ctx; 3532 + 3533 + count = instance->msix_vectors > 0 ? instance->msix_vectors : 1; 3534 + 3535 + for (i = 0; i < count; i++) { 3536 + irq_ctx = &instance->irq_context[i]; 3537 + irq_poll_enable(&irq_ctx->irqpoll); 3538 + } 3675 3539 } 3676 3540 3677 3541 /** ··· 3706 3524 u32 count, i; 3707 3525 struct megasas_instance *instance = 3708 3526 (struct megasas_instance *)instance_addr; 3527 + struct megasas_irq_context *irq_ctx; 3709 3528 3710 3529 count = instance->msix_vectors > 0 ? 
instance->msix_vectors : 1; 3711 3530 3712 - for (i = 0; i < count; i++) 3531 + for (i = 0; i < count; i++) { 3713 3532 synchronize_irq(pci_irq_vector(instance->pdev, i)); 3533 + irq_ctx = &instance->irq_context[i]; 3534 + irq_poll_disable(&irq_ctx->irqpoll); 3535 + if (irq_ctx->irq_poll_scheduled) { 3536 + irq_ctx->irq_poll_scheduled = false; 3537 + enable_irq(irq_ctx->os_irq); 3538 + } 3539 + } 3540 + } 3541 + 3542 + /** 3543 + * megasas_irqpoll() - process a queue for completed reply descriptors 3544 + * @irqpoll: IRQ poll structure associated with queue to poll. 3545 + * @budget: Threshold of reply descriptors to process per poll. 3546 + * 3547 + * Return: The number of entries processed. 3548 + */ 3549 + 3550 + int megasas_irqpoll(struct irq_poll *irqpoll, int budget) 3551 + { 3552 + struct megasas_irq_context *irq_ctx; 3553 + struct megasas_instance *instance; 3554 + int num_entries; 3555 + 3556 + irq_ctx = container_of(irqpoll, struct megasas_irq_context, irqpoll); 3557 + instance = irq_ctx->instance; 3558 + 3559 + if (irq_ctx->irq_line_enable) { 3560 + disable_irq(irq_ctx->os_irq); 3561 + irq_ctx->irq_line_enable = false; 3562 + } 3563 + 3564 + num_entries = complete_cmd_fusion(instance, irq_ctx->MSIxIndex, irq_ctx); 3565 + if (num_entries < budget) { 3566 + irq_poll_complete(irqpoll); 3567 + irq_ctx->irq_poll_scheduled = false; 3568 + enable_irq(irq_ctx->os_irq); 3569 + } 3570 + 3571 + return num_entries; 3714 3572 } 3715 3573 3716 3574 /** ··· 3773 3551 return; 3774 3552 3775 3553 for (MSIxIndex = 0 ; MSIxIndex < count; MSIxIndex++) 3776 - complete_cmd_fusion(instance, MSIxIndex); 3554 + complete_cmd_fusion(instance, MSIxIndex, NULL); 3777 3555 } 3778 3556 3779 3557 /** ··· 3788 3566 if (instance->mask_interrupts) 3789 3567 return IRQ_NONE; 3790 3568 3569 + #if defined(ENABLE_IRQ_POLL) 3570 + if (irq_context->irq_poll_scheduled) 3571 + return IRQ_HANDLED; 3572 + #endif 3573 + 3791 3574 if (!instance->msix_vectors) { 3792 3575 mfiStatus = 
instance->instancet->clear_intr(instance); 3793 3576 if (!mfiStatus) ··· 3805 3578 return IRQ_HANDLED; 3806 3579 } 3807 3580 3808 - return complete_cmd_fusion(instance, irq_context->MSIxIndex); 3581 + return complete_cmd_fusion(instance, irq_context->MSIxIndex, irq_context) 3582 + ? IRQ_HANDLED : IRQ_NONE; 3809 3583 } 3810 3584 3811 3585 /** ··· 4071 3843 static inline void megasas_trigger_snap_dump(struct megasas_instance *instance) 4072 3844 { 4073 3845 int j; 4074 - u32 fw_state; 3846 + u32 fw_state, abs_state; 4075 3847 4076 3848 if (!instance->disableOnlineCtrlReset) { 4077 3849 dev_info(&instance->pdev->dev, "Trigger snap dump\n"); ··· 4081 3853 } 4082 3854 4083 3855 for (j = 0; j < instance->snapdump_wait_time; j++) { 4084 - fw_state = instance->instancet->read_fw_status_reg(instance) & 4085 - MFI_STATE_MASK; 3856 + abs_state = instance->instancet->read_fw_status_reg(instance); 3857 + fw_state = abs_state & MFI_STATE_MASK; 4086 3858 if (fw_state == MFI_STATE_FAULT) { 4087 - dev_err(&instance->pdev->dev, 4088 - "Found FW in FAULT state, after snap dump trigger\n"); 3859 + dev_printk(KERN_ERR, &instance->pdev->dev, 3860 + "FW in FAULT state Fault code:0x%x subcode:0x%x func:%s\n", 3861 + abs_state & MFI_STATE_FAULT_CODE, 3862 + abs_state & MFI_STATE_FAULT_SUBCODE, __func__); 4089 3863 return; 4090 3864 } 4091 3865 msleep(1000); ··· 4099 3869 int reason, int *convert) 4100 3870 { 4101 3871 int i, outstanding, retval = 0, hb_seconds_missed = 0; 4102 - u32 fw_state; 3872 + u32 fw_state, abs_state; 4103 3873 u32 waittime_for_io_completion; 4104 3874 4105 3875 waittime_for_io_completion = ··· 4118 3888 4119 3889 for (i = 0; i < waittime_for_io_completion; i++) { 4120 3890 /* Check if firmware is in fault state */ 4121 - fw_state = instance->instancet->read_fw_status_reg(instance) & 4122 - MFI_STATE_MASK; 3891 + abs_state = instance->instancet->read_fw_status_reg(instance); 3892 + fw_state = abs_state & MFI_STATE_MASK; 4123 3893 if (fw_state == MFI_STATE_FAULT) { 
4124 - dev_warn(&instance->pdev->dev, "Found FW in FAULT state," 4125 - " will reset adapter scsi%d.\n", 4126 - instance->host->host_no); 3894 + dev_printk(KERN_ERR, &instance->pdev->dev, 3895 + "FW in FAULT state Fault code:0x%x subcode:0x%x func:%s\n", 3896 + abs_state & MFI_STATE_FAULT_CODE, 3897 + abs_state & MFI_STATE_FAULT_SUBCODE, __func__); 4127 3898 megasas_complete_cmd_dpc_fusion((unsigned long)instance); 4128 3899 if (instance->requestorId && reason) { 4129 3900 dev_warn(&instance->pdev->dev, "SR-IOV Found FW in FAULT" ··· 4268 4037 break; 4269 4038 case MFI_CMD_NVME: 4270 4039 if (!instance->support_nvme_passthru) { 4040 + cmd_mfi->frame->hdr.cmd_status = MFI_STAT_INVALID_CMD; 4041 + result = COMPLETE_CMD; 4042 + } 4043 + 4044 + break; 4045 + case MFI_CMD_TOOLBOX: 4046 + if (!instance->support_pci_lane_margining) { 4271 4047 cmd_mfi->frame->hdr.cmd_status = MFI_STAT_INVALID_CMD; 4272 4048 result = COMPLETE_CMD; 4273 4049 } ··· 4503 4265 instance->instancet->disable_intr(instance); 4504 4266 megasas_sync_irqs((unsigned long)instance); 4505 4267 instance->instancet->enable_intr(instance); 4268 + megasas_enable_irq_poll(instance); 4506 4269 if (scsi_lookup->scmd == NULL) 4507 4270 break; 4508 4271 } ··· 4517 4278 megasas_sync_irqs((unsigned long)instance); 4518 4279 rc = megasas_track_scsiio(instance, id, channel); 4519 4280 instance->instancet->enable_intr(instance); 4281 + megasas_enable_irq_poll(instance); 4520 4282 4521 4283 break; 4522 4284 case MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET: ··· 4616 4376 4617 4377 instance = (struct megasas_instance *)scmd->device->host->hostdata; 4618 4378 4619 - scmd_printk(KERN_INFO, scmd, "task abort called for scmd(%p)\n", scmd); 4620 - scsi_print_command(scmd); 4621 - 4622 4379 if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) { 4623 4380 dev_err(&instance->pdev->dev, "Controller is not OPERATIONAL," 4624 4381 "SCSI host:%d\n", instance->host->host_no); ··· 4658 4421 goto out; 4659 4422 } 4660 4423 
sdev_printk(KERN_INFO, scmd->device, 4661 - "attempting task abort! scmd(%p) tm_dev_handle 0x%x\n", 4424 + "attempting task abort! scmd(0x%p) tm_dev_handle 0x%x\n", 4662 4425 scmd, devhandle); 4663 4426 4664 4427 mr_device_priv_data->tm_busy = 1; ··· 4669 4432 mr_device_priv_data->tm_busy = 0; 4670 4433 4671 4434 mutex_unlock(&instance->reset_mutex); 4672 - out: 4673 - sdev_printk(KERN_INFO, scmd->device, "task abort: %s scmd(%p)\n", 4435 + scmd_printk(KERN_INFO, scmd, "task abort %s!! scmd(0x%p)\n", 4674 4436 ((ret == SUCCESS) ? "SUCCESS" : "FAILED"), scmd); 4437 + out: 4438 + scsi_print_command(scmd); 4439 + if (megasas_dbg_lvl & TM_DEBUG) 4440 + megasas_dump_fusion_io(scmd); 4675 4441 4676 4442 return ret; 4677 4443 } ··· 4697 4457 4698 4458 instance = (struct megasas_instance *)scmd->device->host->hostdata; 4699 4459 4700 - sdev_printk(KERN_INFO, scmd->device, 4701 - "target reset called for scmd(%p)\n", scmd); 4702 - 4703 4460 if (atomic_read(&instance->adprecovery) != MEGASAS_HBA_OPERATIONAL) { 4704 4461 dev_err(&instance->pdev->dev, "Controller is not OPERATIONAL," 4705 4462 "SCSI host:%d\n", instance->host->host_no); ··· 4705 4468 } 4706 4469 4707 4470 if (!mr_device_priv_data) { 4708 - sdev_printk(KERN_INFO, scmd->device, "device been deleted! " 4709 - "scmd(%p)\n", scmd); 4471 + sdev_printk(KERN_INFO, scmd->device, 4472 + "device been deleted! scmd: (0x%p)\n", scmd); 4710 4473 scmd->result = DID_NO_CONNECT << 16; 4711 4474 ret = SUCCESS; 4712 4475 goto out; ··· 4729 4492 } 4730 4493 4731 4494 sdev_printk(KERN_INFO, scmd->device, 4732 - "attempting target reset! scmd(%p) tm_dev_handle 0x%x\n", 4495 + "attempting target reset! 
scmd(0x%p) tm_dev_handle: 0x%x\n", 4733 4496 scmd, devhandle); 4734 4497 mr_device_priv_data->tm_busy = 1; 4735 4498 ret = megasas_issue_tm(instance, devhandle, ··· 4738 4501 mr_device_priv_data); 4739 4502 mr_device_priv_data->tm_busy = 0; 4740 4503 mutex_unlock(&instance->reset_mutex); 4741 - out: 4742 - scmd_printk(KERN_NOTICE, scmd, "megasas: target reset %s!!\n", 4504 + scmd_printk(KERN_NOTICE, scmd, "target reset %s!!\n", 4743 4505 (ret == SUCCESS) ? "SUCCESS" : "FAILED"); 4744 4506 4507 + out: 4745 4508 return ret; 4746 4509 } 4747 4510 ··· 4786 4549 struct megasas_instance *instance; 4787 4550 struct megasas_cmd_fusion *cmd_fusion, *r1_cmd; 4788 4551 struct fusion_context *fusion; 4789 - u32 abs_state, status_reg, reset_adapter; 4552 + u32 abs_state, status_reg, reset_adapter, fpio_count = 0; 4790 4553 u32 io_timeout_in_crash_mode = 0; 4791 4554 struct scsi_cmnd *scmd_local = NULL; 4792 4555 struct scsi_device *sdev; 4793 4556 int ret_target_prop = DCMD_FAILED; 4794 4557 bool is_target_prop = false; 4558 + bool do_adp_reset = true; 4559 + int max_reset_tries = MEGASAS_FUSION_MAX_RESET_TRIES; 4795 4560 4796 4561 instance = (struct megasas_instance *)shost->hostdata; 4797 4562 fusion = instance->ctrl_context; ··· 4860 4621 if (convert) 4861 4622 reason = 0; 4862 4623 4863 - if (megasas_dbg_lvl & OCR_LOGS) 4624 + if (megasas_dbg_lvl & OCR_DEBUG) 4864 4625 dev_info(&instance->pdev->dev, "\nPending SCSI commands:\n"); 4865 4626 4866 4627 /* Now return commands back to the OS */ ··· 4873 4634 } 4874 4635 scmd_local = cmd_fusion->scmd; 4875 4636 if (cmd_fusion->scmd) { 4876 - if (megasas_dbg_lvl & OCR_LOGS) { 4637 + if (megasas_dbg_lvl & OCR_DEBUG) { 4877 4638 sdev_printk(KERN_INFO, 4878 4639 cmd_fusion->scmd->device, "SMID: 0x%x\n", 4879 4640 cmd_fusion->index); 4880 - scsi_print_command(cmd_fusion->scmd); 4641 + megasas_dump_fusion_io(cmd_fusion->scmd); 4881 4642 } 4643 + 4644 + if (cmd_fusion->io_request->Function == 4645 + MPI2_FUNCTION_SCSI_IO_REQUEST) 4646 + 
fpio_count++; 4882 4647 4883 4648 scmd_local->result = 4884 4649 megasas_check_mpio_paths(instance, ··· 4896 4653 } 4897 4654 } 4898 4655 4656 + dev_info(&instance->pdev->dev, "Outstanding fastpath IOs: %d\n", 4657 + fpio_count); 4658 + 4899 4659 atomic_set(&instance->fw_outstanding, 0); 4900 4660 4901 4661 status_reg = instance->instancet->read_fw_status_reg(instance); ··· 4910 4664 dev_warn(&instance->pdev->dev, "Reset not supported" 4911 4665 ", killing adapter scsi%d.\n", 4912 4666 instance->host->host_no); 4913 - megaraid_sas_kill_hba(instance); 4914 - instance->skip_heartbeat_timer_del = 1; 4915 - retval = FAILED; 4916 - goto out; 4667 + goto kill_hba; 4917 4668 } 4918 4669 4919 4670 /* Let SR-IOV VF & PF sync up if there was a HB failure */ 4920 4671 if (instance->requestorId && !reason) { 4921 4672 msleep(MEGASAS_OCR_SETTLE_TIME_VF); 4922 - goto transition_to_ready; 4673 + do_adp_reset = false; 4674 + max_reset_tries = MEGASAS_SRIOV_MAX_RESET_TRIES_VF; 4923 4675 } 4924 4676 4925 4677 /* Now try to reset the chip */ 4926 - for (i = 0; i < MEGASAS_FUSION_MAX_RESET_TRIES; i++) { 4927 - 4928 - if (instance->instancet->adp_reset 4929 - (instance, instance->reg_set)) 4678 + for (i = 0; i < max_reset_tries; i++) { 4679 + /* 4680 + * Do adp reset and wait for 4681 + * controller to transition to ready 4682 + */ 4683 + if (megasas_adp_reset_wait_for_ready(instance, 4684 + do_adp_reset, 1) == FAILED) 4930 4685 continue; 4931 - transition_to_ready: 4686 + 4932 4687 /* Wait for FW to become ready */ 4933 4688 if (megasas_transition_to_ready(instance, 1)) { 4934 4689 dev_warn(&instance->pdev->dev, 4935 4690 "Failed to transition controller to ready for " 4936 4691 "scsi%d.\n", instance->host->host_no); 4937 - if (instance->requestorId && !reason) 4938 - goto fail_kill_adapter; 4939 - else 4940 - continue; 4692 + continue; 4941 4693 } 4942 4694 megasas_reset_reply_desc(instance); 4943 4695 megasas_fusion_update_can_queue(instance, OCR_CONTEXT); 4944 4696 4945 4697 if 
(megasas_ioc_init_fusion(instance)) { 4946 - if (instance->requestorId && !reason) 4947 - goto fail_kill_adapter; 4948 - else 4949 - continue; 4698 + continue; 4950 4699 } 4951 4700 4952 4701 if (megasas_get_ctrl_info(instance)) { 4953 4702 dev_info(&instance->pdev->dev, 4954 4703 "Failed from %s %d\n", 4955 4704 __func__, __LINE__); 4956 - megaraid_sas_kill_hba(instance); 4957 - retval = FAILED; 4958 - goto out; 4705 + goto kill_hba; 4959 4706 } 4960 4707 4961 4708 megasas_refire_mgmt_cmd(instance); ··· 4977 4738 clear_bit(MEGASAS_FUSION_IN_RESET, 4978 4739 &instance->reset_flags); 4979 4740 instance->instancet->enable_intr(instance); 4980 - 4741 + megasas_enable_irq_poll(instance); 4981 4742 shost_for_each_device(sdev, shost) { 4982 4743 if ((instance->tgt_prop) && 4983 4744 (instance->nvme_page_size)) ··· 4989 4750 4990 4751 atomic_set(&instance->adprecovery, MEGASAS_HBA_OPERATIONAL); 4991 4752 4992 - dev_info(&instance->pdev->dev, "Interrupts are enabled and" 4993 - " controller is OPERATIONAL for scsi:%d\n", 4994 - instance->host->host_no); 4753 + dev_info(&instance->pdev->dev, 4754 + "Adapter is OPERATIONAL for scsi:%d\n", 4755 + instance->host->host_no); 4995 4756 4996 4757 /* Restart SR-IOV heartbeat */ 4997 4758 if (instance->requestorId) { ··· 5025 4786 5026 4787 goto out; 5027 4788 } 5028 - fail_kill_adapter: 5029 4789 /* Reset failed, kill the adapter */ 5030 4790 dev_warn(&instance->pdev->dev, "Reset failed, killing " 5031 4791 "adapter scsi%d.\n", instance->host->host_no); 5032 - megaraid_sas_kill_hba(instance); 5033 - instance->skip_heartbeat_timer_del = 1; 5034 - retval = FAILED; 4792 + goto kill_hba; 5035 4793 } else { 5036 4794 /* For VF: Restart HB timer if we didn't OCR */ 5037 4795 if (instance->requestorId) { ··· 5036 4800 } 5037 4801 clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags); 5038 4802 instance->instancet->enable_intr(instance); 4803 + megasas_enable_irq_poll(instance); 5039 4804 atomic_set(&instance->adprecovery, 
MEGASAS_HBA_OPERATIONAL); 4805 + goto out; 5040 4806 } 4807 + kill_hba: 4808 + megaraid_sas_kill_hba(instance); 4809 + megasas_enable_irq_poll(instance); 4810 + instance->skip_heartbeat_timer_del = 1; 4811 + retval = FAILED; 5041 4812 out: 5042 4813 clear_bit(MEGASAS_FUSION_IN_RESET, &instance->reset_flags); 5043 4814 mutex_unlock(&instance->reset_mutex);
+25 -8
drivers/scsi/megaraid/megaraid_sas_fusion.h
··· 75 75 MR_RAID_FLAGS_IO_SUB_TYPE_RMW_P = 3, 76 76 MR_RAID_FLAGS_IO_SUB_TYPE_RMW_Q = 4, 77 77 MR_RAID_FLAGS_IO_SUB_TYPE_CACHE_BYPASS = 6, 78 - MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT = 7 78 + MR_RAID_FLAGS_IO_SUB_TYPE_LDIO_BW_LIMIT = 7, 79 + MR_RAID_FLAGS_IO_SUB_TYPE_R56_DIV_OFFLOAD = 8 79 80 }; 80 81 81 82 /* ··· 89 88 90 89 #define MEGASAS_FP_CMD_LEN 16 91 90 #define MEGASAS_FUSION_IN_RESET 0 92 - #define THRESHOLD_REPLY_COUNT 50 93 91 #define RAID_1_PEER_CMDS 2 94 92 #define JBOD_MAPS_COUNT 2 95 93 #define MEGASAS_REDUCE_QD_COUNT 64 ··· 140 140 u16 timeout_value; /* 0x02 -0x03 */ 141 141 u16 routing_flags; // 0x04 -0x05 routing flags 142 142 u16 virtual_disk_tgt_id; /* 0x06 -0x07 */ 143 - u64 reg_lock_row_lba; /* 0x08 - 0x0F */ 143 + __le64 reg_lock_row_lba; /* 0x08 - 0x0F */ 144 144 u32 reg_lock_length; /* 0x10 - 0x13 */ 145 - union { 146 - u16 next_lmid; /* 0x14 - 0x15 */ 147 - u16 peer_smid; /* used for the raid 1/10 fp writes */ 148 - } smid; 145 + union { // flow specific 146 + u16 rmw_op_index; /* 0x14 - 0x15, R5/6 RMW: rmw operation index*/ 147 + u16 peer_smid; /* 0x14 - 0x15, R1 Write: peer smid*/ 148 + u16 r56_arm_map; /* 0x14 - 0x15, Unused [15], LogArm[14:10], P-Arm[9:5], Q-Arm[4:0] */ 149 + 150 + } flow_specific; 151 + 149 152 u8 ex_status; /* 0x16 : OUT */ 150 153 u8 status; /* 0x17 status */ 151 154 u8 raid_flags; /* 0x18 resvd[7:6], ioSubType[5:4], ··· 238 235 239 236 #define RAID_CTX_SPANARM_SPAN_SHIFT (5) 240 237 #define RAID_CTX_SPANARM_SPAN_MASK (0xE0) 238 + 239 + /* LogArm[14:10], P-Arm[9:5], Q-Arm[4:0] */ 240 + #define RAID_CTX_R56_Q_ARM_MASK (0x1F) 241 + #define RAID_CTX_R56_P_ARM_SHIFT (5) 242 + #define RAID_CTX_R56_P_ARM_MASK (0x3E0) 243 + #define RAID_CTX_R56_LOG_ARM_SHIFT (10) 244 + #define RAID_CTX_R56_LOG_ARM_MASK (0x7C00) 241 245 242 246 /* number of bits per index in U32 TrackStream */ 243 247 #define BITS_PER_INDEX_STREAM 4 ··· 950 940 u8 pd_after_lb; 951 941 u16 r1_alt_dev_handle; /* raid 1/10 only */ 952 942 bool ra_capable; 
943 + u8 data_arms; 953 944 }; 954 945 955 946 struct MR_LD_TARGET_SYNC { ··· 1335 1324 dma_addr_t ioc_init_request_phys; 1336 1325 struct MPI2_IOC_INIT_REQUEST *ioc_init_request; 1337 1326 struct megasas_cmd *ioc_init_cmd; 1338 - 1327 + bool pcie_bw_limitation; 1328 + bool r56_div_offload; 1339 1329 }; 1340 1330 1341 1331 union desc_value { ··· 1359 1347 u8 cur_num_supported; 1360 1348 u8 trigger_min_num_sec_before_ocr; 1361 1349 u8 reserved[12]; 1350 + }; 1351 + 1352 + struct megasas_debugfs_buffer { 1353 + void *buf; 1354 + u32 len; 1362 1355 }; 1363 1356 1364 1357 void megasas_free_cmds_fusion(struct megasas_instance *instance);
+1 -1
drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h
··· 1398 1398 U8 PCIBusNum; /*0x0E */ 1399 1399 U8 PCIDomainSegment; /*0x0F */ 1400 1400 U32 Reserved1; /*0x10 */ 1401 - U32 Reserved2; /*0x14 */ 1401 + U32 ProductSpecific; /* 0x14 */ 1402 1402 } MPI2_CONFIG_PAGE_IOC_1, 1403 1403 *PTR_MPI2_CONFIG_PAGE_IOC_1, 1404 1404 Mpi2IOCPage1_t, *pMpi2IOCPage1_t;
+438 -59
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 74 74 #define MAX_HBA_QUEUE_DEPTH 30000 75 75 #define MAX_CHAIN_DEPTH 100000 76 76 static int max_queue_depth = -1; 77 - module_param(max_queue_depth, int, 0); 77 + module_param(max_queue_depth, int, 0444); 78 78 MODULE_PARM_DESC(max_queue_depth, " max controller queue depth "); 79 79 80 80 static int max_sgl_entries = -1; 81 - module_param(max_sgl_entries, int, 0); 81 + module_param(max_sgl_entries, int, 0444); 82 82 MODULE_PARM_DESC(max_sgl_entries, " max sg entries "); 83 83 84 84 static int msix_disable = -1; 85 - module_param(msix_disable, int, 0); 85 + module_param(msix_disable, int, 0444); 86 86 MODULE_PARM_DESC(msix_disable, " disable msix routed interrupts (default=0)"); 87 87 88 88 static int smp_affinity_enable = 1; 89 - module_param(smp_affinity_enable, int, S_IRUGO); 89 + module_param(smp_affinity_enable, int, 0444); 90 90 MODULE_PARM_DESC(smp_affinity_enable, "SMP affinity feature enable/disable Default: enable(1)"); 91 91 92 92 static int max_msix_vectors = -1; 93 - module_param(max_msix_vectors, int, 0); 93 + module_param(max_msix_vectors, int, 0444); 94 94 MODULE_PARM_DESC(max_msix_vectors, 95 95 " max msix vectors"); 96 96 97 97 static int irqpoll_weight = -1; 98 - module_param(irqpoll_weight, int, 0); 98 + module_param(irqpoll_weight, int, 0444); 99 99 MODULE_PARM_DESC(irqpoll_weight, 100 100 "irq poll weight (default= one fourth of HBA queue depth)"); 101 101 102 102 static int mpt3sas_fwfault_debug; 103 103 MODULE_PARM_DESC(mpt3sas_fwfault_debug, 104 104 " enable detection of firmware fault and halt firmware - (default=0)"); 105 + 106 + static int perf_mode = -1; 107 + module_param(perf_mode, int, 0444); 108 + MODULE_PARM_DESC(perf_mode, 109 + "Performance mode (only for Aero/Sea Generation), options:\n\t\t" 110 + "0 - balanced: high iops mode is enabled &\n\t\t" 111 + "interrupt coalescing is enabled only on high iops queues,\n\t\t" 112 + "1 - iops: high iops mode is disabled &\n\t\t" 113 + "interrupt coalescing is enabled on all 
queues,\n\t\t" 114 + "2 - latency: high iops mode is disabled &\n\t\t" 115 + "interrupt coalescing is enabled on all queues with timeout value 0xA,\n" 116 + "\t\tdefault - default perf_mode is 'balanced'" 117 + ); 118 + 119 + enum mpt3sas_perf_mode { 120 + MPT_PERF_MODE_DEFAULT = -1, 121 + MPT_PERF_MODE_BALANCED = 0, 122 + MPT_PERF_MODE_IOPS = 1, 123 + MPT_PERF_MODE_LATENCY = 2, 124 + }; 105 125 106 126 static int 107 127 _base_get_ioc_facts(struct MPT3SAS_ADAPTER *ioc); ··· 1302 1282 ack_request->EventContext = mpi_reply->EventContext; 1303 1283 ack_request->VF_ID = 0; /* TODO */ 1304 1284 ack_request->VP_ID = 0; 1305 - mpt3sas_base_put_smid_default(ioc, smid); 1285 + ioc->put_smid_default(ioc, smid); 1306 1286 1307 1287 out: 1308 1288 ··· 2813 2793 2814 2794 list_for_each_entry_safe(reply_q, next, &ioc->reply_queue_list, list) { 2815 2795 list_del(&reply_q->list); 2796 + if (ioc->smp_affinity_enable) 2797 + irq_set_affinity_hint(pci_irq_vector(ioc->pdev, 2798 + reply_q->msix_index), NULL); 2816 2799 free_irq(pci_irq_vector(ioc->pdev, reply_q->msix_index), 2817 2800 reply_q); 2818 2801 kfree(reply_q); ··· 2880 2857 { 2881 2858 unsigned int cpu, nr_cpus, nr_msix, index = 0; 2882 2859 struct adapter_reply_queue *reply_q; 2860 + int local_numa_node; 2883 2861 2884 2862 if (!_base_is_controller_msix_enabled(ioc)) 2885 2863 return; 2886 - ioc->msix_load_balance = false; 2887 - if (ioc->reply_queue_count < num_online_cpus()) { 2888 - ioc->msix_load_balance = true; 2864 + 2865 + if (ioc->msix_load_balance) 2889 2866 return; 2890 - } 2891 2867 2892 2868 memset(ioc->cpu_msix_table, 0, ioc->cpu_msix_table_sz); 2893 2869 ··· 2896 2874 if (!nr_msix) 2897 2875 return; 2898 2876 2899 - if (smp_affinity_enable) { 2877 + if (ioc->smp_affinity_enable) { 2878 + 2879 + /* 2880 + * set irq affinity to local numa node for those irqs 2881 + * corresponding to high iops queues. 
2882 + */ 2883 + if (ioc->high_iops_queues) { 2884 + local_numa_node = dev_to_node(&ioc->pdev->dev); 2885 + for (index = 0; index < ioc->high_iops_queues; 2886 + index++) { 2887 + irq_set_affinity_hint(pci_irq_vector(ioc->pdev, 2888 + index), cpumask_of_node(local_numa_node)); 2889 + } 2890 + } 2891 + 2900 2892 list_for_each_entry(reply_q, &ioc->reply_queue_list, list) { 2901 - const cpumask_t *mask = pci_irq_get_affinity(ioc->pdev, 2902 - reply_q->msix_index); 2893 + const cpumask_t *mask; 2894 + 2895 + if (reply_q->msix_index < ioc->high_iops_queues) 2896 + continue; 2897 + 2898 + mask = pci_irq_get_affinity(ioc->pdev, 2899 + reply_q->msix_index); 2903 2900 if (!mask) { 2904 2901 ioc_warn(ioc, "no affinity for msi %x\n", 2905 2902 reply_q->msix_index); 2906 - continue; 2903 + goto fall_back; 2907 2904 } 2908 2905 2909 2906 for_each_cpu_and(cpu, mask, cpu_online_mask) { ··· 2933 2892 } 2934 2893 return; 2935 2894 } 2895 + 2896 + fall_back: 2936 2897 cpu = cpumask_first(cpu_online_mask); 2898 + nr_msix -= ioc->high_iops_queues; 2899 + index = 0; 2937 2900 2938 2901 list_for_each_entry(reply_q, &ioc->reply_queue_list, list) { 2939 - 2940 2902 unsigned int i, group = nr_cpus / nr_msix; 2903 + 2904 + if (reply_q->msix_index < ioc->high_iops_queues) 2905 + continue; 2941 2906 2942 2907 if (cpu >= nr_cpus) 2943 2908 break; ··· 2960 2913 } 2961 2914 2962 2915 /** 2916 + * _base_check_and_enable_high_iops_queues - enable high iops mode 2917 + * @ ioc - per adapter object 2918 + * @ hba_msix_vector_count - msix vectors supported by HBA 2919 + * 2920 + * Enable high iops queues only if 2921 + * - HBA is a SEA/AERO controller and 2922 + * - MSI-Xs vector supported by the HBA is 128 and 2923 + * - total CPU count in the system >=16 and 2924 + * - loaded driver with default max_msix_vectors module parameter and 2925 + * - system booted in non kdump mode 2926 + * 2927 + * returns nothing. 
2928 + */ 2929 + static void 2930 + _base_check_and_enable_high_iops_queues(struct MPT3SAS_ADAPTER *ioc, 2931 + int hba_msix_vector_count) 2932 + { 2933 + u16 lnksta, speed; 2934 + 2935 + if (perf_mode == MPT_PERF_MODE_IOPS || 2936 + perf_mode == MPT_PERF_MODE_LATENCY) { 2937 + ioc->high_iops_queues = 0; 2938 + return; 2939 + } 2940 + 2941 + if (perf_mode == MPT_PERF_MODE_DEFAULT) { 2942 + 2943 + pcie_capability_read_word(ioc->pdev, PCI_EXP_LNKSTA, &lnksta); 2944 + speed = lnksta & PCI_EXP_LNKSTA_CLS; 2945 + 2946 + if (speed < 0x4) { 2947 + ioc->high_iops_queues = 0; 2948 + return; 2949 + } 2950 + } 2951 + 2952 + if (!reset_devices && ioc->is_aero_ioc && 2953 + hba_msix_vector_count == MPT3SAS_GEN35_MAX_MSIX_QUEUES && 2954 + num_online_cpus() >= MPT3SAS_HIGH_IOPS_REPLY_QUEUES && 2955 + max_msix_vectors == -1) 2956 + ioc->high_iops_queues = MPT3SAS_HIGH_IOPS_REPLY_QUEUES; 2957 + else 2958 + ioc->high_iops_queues = 0; 2959 + } 2960 + 2961 + /** 2963 2962 * _base_disable_msix - disables msix 2964 2963 * @ioc: per adapter object 2965 2964 * ··· 3015 2922 { 3016 2923 if (!ioc->msix_enable) 3017 2924 return; 3018 - pci_disable_msix(ioc->pdev); 2925 + pci_free_irq_vectors(ioc->pdev); 3019 2926 ioc->msix_enable = 0; 2927 + } 2928 + 2929 + /** 2930 + * _base_alloc_irq_vectors - allocate msix vectors 2931 + * @ioc: per adapter object 2932 + * 2933 + */ 2934 + static int 2935 + _base_alloc_irq_vectors(struct MPT3SAS_ADAPTER *ioc) 2936 + { 2937 + int i, irq_flags = PCI_IRQ_MSIX; 2938 + struct irq_affinity desc = { .pre_vectors = ioc->high_iops_queues }; 2939 + struct irq_affinity *descp = &desc; 2940 + 2941 + if (ioc->smp_affinity_enable) 2942 + irq_flags |= PCI_IRQ_AFFINITY; 2943 + else 2944 + descp = NULL; 2945 + 2946 + ioc_info(ioc, " %d %d\n", ioc->high_iops_queues, 2947 + ioc->msix_vector_count); 2948 + 2949 + i = pci_alloc_irq_vectors_affinity(ioc->pdev, 2950 + ioc->high_iops_queues, 2951 + ioc->msix_vector_count, irq_flags, descp); 2952 + 2953 + return i; 3020 2954 } 
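The gating conditions listed in the kernel-doc above can be condensed into a pure function for illustration. This is a userspace sketch, not driver code: the `MPT3SAS_*` constants mirror the definitions this series adds to mpt3sas_base.h, `PCI_EXP_LNKSTA_CLS` is the standard link-speed field mask, and the parameters stand in for the driver state read inside `_base_check_and_enable_high_iops_queues()`.

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace sketch of the gating in _base_check_and_enable_high_iops_queues().
 * Constants mirror the mpt3sas_base.h definitions added in this series. */
#define PCI_EXP_LNKSTA_CLS              0x000f  /* current link speed field */
#define MPT3SAS_GEN35_MAX_MSIX_QUEUES   128
#define MPT3SAS_HIGH_IOPS_REPLY_QUEUES  8

enum perf_mode { PERF_DEFAULT = -1, PERF_BALANCED = 0, PERF_IOPS = 1, PERF_LATENCY = 2 };

/* Returns the number of high iops reply queues to reserve (0 or 8). */
static int high_iops_queues(enum perf_mode mode, unsigned lnksta,
                            bool reset_devices, bool is_aero,
                            int hba_msix_vectors, int online_cpus,
                            int max_msix_vectors_param)
{
    /* iops/latency modes never use the dedicated high iops queues */
    if (mode == PERF_IOPS || mode == PERF_LATENCY)
        return 0;
    /* default mode additionally requires at least link speed 0x4 (16 GT/s) */
    if (mode == PERF_DEFAULT && (lnksta & PCI_EXP_LNKSTA_CLS) < 0x4)
        return 0;
    /* Aero/Sea HBA, 128 MSI-X vectors, enough online CPUs, default
     * max_msix_vectors module parameter, and not a kdump boot */
    if (!reset_devices && is_aero &&
        hba_msix_vectors == MPT3SAS_GEN35_MAX_MSIX_QUEUES &&
        online_cpus >= MPT3SAS_HIGH_IOPS_REPLY_QUEUES &&
        max_msix_vectors_param == -1)
        return MPT3SAS_HIGH_IOPS_REPLY_QUEUES;
    return 0;
}
```

Note that explicit `perf_mode=0` (balanced) skips the link-speed check; only the default (unset) mode consults PCI_EXP_LNKSTA before enabling the high iops queues.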
3021 2955 3022 2956 /** ··· 3057 2937 int r; 3058 2938 int i, local_max_msix_vectors; 3059 2939 u8 try_msix = 0; 3060 - unsigned int irq_flags = PCI_IRQ_MSIX; 2940 + 2941 + ioc->msix_load_balance = false; 3061 2942 3062 2943 if (msix_disable == -1 || msix_disable == 0) 3063 2944 try_msix = 1; ··· 3069 2948 if (_base_check_enable_msix(ioc) != 0) 3070 2949 goto try_ioapic; 3071 2950 3072 - ioc->reply_queue_count = min_t(int, ioc->cpu_count, 2951 + ioc_info(ioc, "MSI-X vectors supported: %d\n", ioc->msix_vector_count); 2952 + pr_info("\t no of cores: %d, max_msix_vectors: %d\n", 2953 + ioc->cpu_count, max_msix_vectors); 2954 + if (ioc->is_aero_ioc) 2955 + _base_check_and_enable_high_iops_queues(ioc, 2956 + ioc->msix_vector_count); 2957 + ioc->reply_queue_count = 2958 + min_t(int, ioc->cpu_count + ioc->high_iops_queues, 3073 2959 ioc->msix_vector_count); 3074 - 3075 - ioc_info(ioc, "MSI-X vectors supported: %d, no of cores: %d, max_msix_vectors: %d\n", 3076 - ioc->msix_vector_count, ioc->cpu_count, max_msix_vectors); 3077 2960 3078 2961 if (!ioc->rdpq_array_enable && max_msix_vectors == -1) 3079 2962 local_max_msix_vectors = (reset_devices) ? 1 : 8; ··· 3090 2965 else if (local_max_msix_vectors == 0) 3091 2966 goto try_ioapic; 3092 2967 3093 - if (ioc->msix_vector_count < ioc->cpu_count) 3094 - smp_affinity_enable = 0; 2968 + /* 2969 + * Enable msix_load_balance only if combined reply queue mode is 2970 + * disabled on SAS3 & above generation HBA devices. 2971 + */ 2972 + if (!ioc->combined_reply_queue && 2973 + ioc->hba_mpi_version_belonged != MPI2_VERSION) { 2974 + ioc->msix_load_balance = true; 2975 + } 3095 2976 3096 - if (smp_affinity_enable) 2977 + /* 2978 + * smp affinity setting is not needed when msix load balance 2979 + * is enabled. 
2980 + */ 2981 + if (ioc->msix_load_balance) 2982 + ioc->smp_affinity_enable = 0; 3098 2983 3099 - r = pci_alloc_irq_vectors(ioc->pdev, 1, ioc->reply_queue_count, 3100 - irq_flags); 2984 + r = _base_alloc_irq_vectors(ioc); 3101 2985 if (r < 0) { 3102 2986 dfailprintk(ioc, 3103 2987 ioc_info(ioc, "pci_alloc_irq_vectors failed (r=%d) !!!\n", ··· 3125 2991 } 3126 2992 } 3127 2993 2994 + ioc_info(ioc, "High IOPs queues : %s\n", 2995 + ioc->high_iops_queues ? "enabled" : "disabled"); 2996 + 3128 2997 return 0; 3129 2998 3130 2999 /* failback to io_apic interrupt routing */ 3131 3000 try_ioapic: 3132 - 3001 + ioc->high_iops_queues = 0; 3002 + ioc_info(ioc, "High IOPs queues : disabled\n"); 3133 3003 ioc->reply_queue_count = 1; 3134 3004 r = pci_alloc_irq_vectors(ioc->pdev, 1, 1, PCI_IRQ_LEGACY); 3135 3005 if (r < 0) { ··· 3403 3265 return ioc->reply + (phys_addr - (u32)ioc->reply_dma); 3404 3266 } 3405 3267 3268 + /** 3269 + * _base_get_msix_index - get the msix index 3270 + * @ioc: per adapter object 3271 + * @scmd: scsi_cmnd object 3272 + * 3273 + * returns msix index of general reply queues, 3274 + * i.e. reply queue on which IO request's reply 3275 + * should be posted by the HBA firmware. 3276 + */ 3406 3277 static inline u8 3407 - _base_get_msix_index(struct MPT3SAS_ADAPTER *ioc) 3278 + _base_get_msix_index(struct MPT3SAS_ADAPTER *ioc, 3279 + struct scsi_cmnd *scmd) 3408 3280 { 3409 3281 /* Enables reply_queue load balancing */ 3410 3282 if (ioc->msix_load_balance) ··· 3423 3275 &ioc->total_io_cnt), ioc->reply_queue_count) : 0; 3424 3276 3425 3277 return ioc->cpu_msix_table[raw_smp_processor_id()]; 3278 + } 3279 + 3280 + /** 3281 + * _base_get_high_iops_msix_index - get the msix index of 3282 + * high iops queues 3283 + * @ioc: per adapter object 3284 + * @scmd: scsi_cmnd object 3285 + * 3286 + * Returns: msix index of high iops reply queues. 3287 + * i.e. high iops reply queue on which IO request's 3288 + * reply should be posted by the HBA firmware. 
3289 + */ 3290 + static inline u8 3291 + _base_get_high_iops_msix_index(struct MPT3SAS_ADAPTER *ioc, 3292 + struct scsi_cmnd *scmd) 3293 + { 3294 + /** 3295 + * Round robin the IO interrupts among the high iops 3296 + * reply queues in terms of batch count 16 when outstanding 3297 + * IOs on the target device is >=8. 3298 + */ 3299 + if (atomic_read(&scmd->device->device_busy) > 3300 + MPT3SAS_DEVICE_HIGH_IOPS_DEPTH) 3301 + return base_mod64(( 3302 + atomic64_add_return(1, &ioc->high_iops_outstanding) / 3303 + MPT3SAS_HIGH_IOPS_BATCH_COUNT), 3304 + MPT3SAS_HIGH_IOPS_REPLY_QUEUES); 3305 + 3306 + return _base_get_msix_index(ioc, scmd); 3426 3307 } 3427 3308 3428 3309 /** ··· 3502 3325 3503 3326 smid = tag + 1; 3504 3327 request->cb_idx = cb_idx; 3505 - request->msix_io = _base_get_msix_index(ioc); 3506 3328 request->smid = smid; 3329 + request->scmd = scmd; 3507 3330 INIT_LIST_HEAD(&request->chain_list); 3508 3331 return smid; 3509 3332 } ··· 3557 3380 return; 3558 3381 st->cb_idx = 0xFF; 3559 3382 st->direct_io = 0; 3383 + st->scmd = NULL; 3560 3384 atomic_set(&ioc->chain_lookup[st->smid - 1].chain_offset, 0); 3561 3385 st->smid = 0; 3562 3386 } ··· 3657 3479 #endif 3658 3480 3659 3481 /** 3482 + * _base_set_and_get_msix_index - get the msix index and assign to msix_io 3483 + * variable of scsi tracker 3484 + * @ioc: per adapter object 3485 + * @smid: system request message index 3486 + * 3487 + * returns msix index. 
3488 + */ 3489 + static u8 3490 + _base_set_and_get_msix_index(struct MPT3SAS_ADAPTER *ioc, u16 smid) 3491 + { 3492 + struct scsiio_tracker *st = NULL; 3493 + 3494 + if (smid < ioc->hi_priority_smid) 3495 + st = _get_st_from_smid(ioc, smid); 3496 + 3497 + if (st == NULL) 3498 + return _base_get_msix_index(ioc, NULL); 3499 + 3500 + st->msix_io = ioc->get_msix_index_for_smlio(ioc, st->scmd); 3501 + return st->msix_io; 3502 + } 3503 + 3504 + /** 3660 3505 * _base_put_smid_mpi_ep_scsi_io - send SCSI_IO request to firmware 3661 3506 * @ioc: per adapter object 3662 3507 * @smid: system request message index 3663 3508 * @handle: device handle 3664 3509 */ 3665 3510 static void 3666 - _base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc, u16 smid, u16 handle) 3511 + _base_put_smid_mpi_ep_scsi_io(struct MPT3SAS_ADAPTER *ioc, 3512 + u16 smid, u16 handle) 3667 3513 { 3668 3514 Mpi2RequestDescriptorUnion_t descriptor; 3669 3515 u64 *request = (u64 *)&descriptor; ··· 3700 3498 _base_clone_mpi_to_sys_mem(mpi_req_iomem, (void *)mfp, 3701 3499 ioc->request_sz); 3702 3500 descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; 3703 - descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc); 3501 + descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3704 3502 descriptor.SCSIIO.SMID = cpu_to_le16(smid); 3705 3503 descriptor.SCSIIO.DevHandle = cpu_to_le16(handle); 3706 3504 descriptor.SCSIIO.LMID = 0; ··· 3722 3520 3723 3521 3724 3522 descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; 3725 - descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc); 3523 + descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3726 3524 descriptor.SCSIIO.SMID = cpu_to_le16(smid); 3727 3525 descriptor.SCSIIO.DevHandle = cpu_to_le16(handle); 3728 3526 descriptor.SCSIIO.LMID = 0; ··· 3731 3529 } 3732 3530 3733 3531 /** 3734 - * mpt3sas_base_put_smid_fast_path - send fast path request to firmware 3532 + * _base_put_smid_fast_path - send fast 
path request to firmware 3735 3533 * @ioc: per adapter object 3736 3534 * @smid: system request message index 3737 3535 * @handle: device handle 3738 3536 */ 3739 - void 3740 - mpt3sas_base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3537 + static void 3538 + _base_put_smid_fast_path(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3741 3539 u16 handle) 3742 3540 { 3743 3541 Mpi2RequestDescriptorUnion_t descriptor; ··· 3745 3543 3746 3544 descriptor.SCSIIO.RequestFlags = 3747 3545 MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO; 3748 - descriptor.SCSIIO.MSIxIndex = _base_get_msix_index(ioc); 3546 + descriptor.SCSIIO.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3749 3547 descriptor.SCSIIO.SMID = cpu_to_le16(smid); 3750 3548 descriptor.SCSIIO.DevHandle = cpu_to_le16(handle); 3751 3549 descriptor.SCSIIO.LMID = 0; ··· 3754 3552 } 3755 3553 3756 3554 /** 3757 - * mpt3sas_base_put_smid_hi_priority - send Task Management request to firmware 3555 + * _base_put_smid_hi_priority - send Task Management request to firmware 3758 3556 * @ioc: per adapter object 3759 3557 * @smid: system request message index 3760 3558 * @msix_task: msix_task will be same as msix of IO in case of task abort else 0. 
3761 3559 */ 3762 - void 3763 - mpt3sas_base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3560 + static void 3561 + _base_put_smid_hi_priority(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3764 3562 u16 msix_task) 3765 3563 { 3766 3564 Mpi2RequestDescriptorUnion_t descriptor; ··· 3809 3607 3810 3608 descriptor.Default.RequestFlags = 3811 3609 MPI26_REQ_DESCRIPT_FLAGS_PCIE_ENCAPSULATED; 3812 - descriptor.Default.MSIxIndex = _base_get_msix_index(ioc); 3610 + descriptor.Default.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3813 3611 descriptor.Default.SMID = cpu_to_le16(smid); 3814 3612 descriptor.Default.LMID = 0; 3815 3613 descriptor.Default.DescriptorTypeDependent = 0; ··· 3818 3616 } 3819 3617 3820 3618 /** 3821 - * mpt3sas_base_put_smid_default - Default, primarily used for config pages 3619 + * _base_put_smid_default - Default, primarily used for config pages 3822 3620 * @ioc: per adapter object 3823 3621 * @smid: system request message index 3824 3622 */ 3825 - void 3826 - mpt3sas_base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid) 3623 + static void 3624 + _base_put_smid_default(struct MPT3SAS_ADAPTER *ioc, u16 smid) 3827 3625 { 3828 3626 Mpi2RequestDescriptorUnion_t descriptor; 3829 3627 void *mpi_req_iomem; ··· 3841 3639 } 3842 3640 request = (u64 *)&descriptor; 3843 3641 descriptor.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; 3844 - descriptor.Default.MSIxIndex = _base_get_msix_index(ioc); 3642 + descriptor.Default.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3845 3643 descriptor.Default.SMID = cpu_to_le16(smid); 3846 3644 descriptor.Default.LMID = 0; 3847 3645 descriptor.Default.DescriptorTypeDependent = 0; ··· 3852 3650 else 3853 3651 _base_writeq(*request, &ioc->chip->RequestDescriptorPostLow, 3854 3652 &ioc->scsi_lookup_lock); 3653 + } 3654 + 3655 + /** 3656 + * _base_put_smid_scsi_io_atomic - send SCSI_IO request to firmware using 3657 + * Atomic Request Descriptor 3658 + * @ioc: per adapter object 3659 + 
* @smid: system request message index 3660 + * @handle: device handle, unused in this function, for function type match 3661 + * 3662 + * Return nothing. 3663 + */ 3664 + static void 3665 + _base_put_smid_scsi_io_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3666 + u16 handle) 3667 + { 3668 + Mpi26AtomicRequestDescriptor_t descriptor; 3669 + u32 *request = (u32 *)&descriptor; 3670 + 3671 + descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO; 3672 + descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3673 + descriptor.SMID = cpu_to_le16(smid); 3674 + 3675 + writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost); 3676 + } 3677 + 3678 + /** 3679 + * _base_put_smid_fast_path_atomic - send fast path request to firmware 3680 + * using Atomic Request Descriptor 3681 + * @ioc: per adapter object 3682 + * @smid: system request message index 3683 + * @handle: device handle, unused in this function, for function type match 3684 + * Return nothing 3685 + */ 3686 + static void 3687 + _base_put_smid_fast_path_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3688 + u16 handle) 3689 + { 3690 + Mpi26AtomicRequestDescriptor_t descriptor; 3691 + u32 *request = (u32 *)&descriptor; 3692 + 3693 + descriptor.RequestFlags = MPI25_REQ_DESCRIPT_FLAGS_FAST_PATH_SCSI_IO; 3694 + descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3695 + descriptor.SMID = cpu_to_le16(smid); 3696 + 3697 + writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost); 3698 + } 3699 + 3700 + /** 3701 + * _base_put_smid_hi_priority_atomic - send Task Management request to 3702 + * firmware using Atomic Request Descriptor 3703 + * @ioc: per adapter object 3704 + * @smid: system request message index 3705 + * @msix_task: msix_task will be same as msix of IO incase of task abort else 0 3706 + * 3707 + * Return nothing. 
3708 + */ 3709 + static void 3710 + _base_put_smid_hi_priority_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid, 3711 + u16 msix_task) 3712 + { 3713 + Mpi26AtomicRequestDescriptor_t descriptor; 3714 + u32 *request = (u32 *)&descriptor; 3715 + 3716 + descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY; 3717 + descriptor.MSIxIndex = msix_task; 3718 + descriptor.SMID = cpu_to_le16(smid); 3719 + 3720 + writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost); 3721 + } 3722 + 3723 + /** 3724 + * _base_put_smid_default_atomic - Default, primarily used for config pages 3725 + * use Atomic Request Descriptor 3726 + * @ioc: per adapter object 3727 + * @smid: system request message index 3728 + * 3729 + * Return nothing. 3730 + */ 3731 + static void 3732 + _base_put_smid_default_atomic(struct MPT3SAS_ADAPTER *ioc, u16 smid) 3733 + { 3734 + Mpi26AtomicRequestDescriptor_t descriptor; 3735 + u32 *request = (u32 *)&descriptor; 3736 + 3737 + descriptor.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE; 3738 + descriptor.MSIxIndex = _base_set_and_get_msix_index(ioc, smid); 3739 + descriptor.SMID = cpu_to_le16(smid); 3740 + 3741 + writel(cpu_to_le32(*request), &ioc->chip->AtomicRequestDescriptorPost); 3855 3742 } 3856 3743 3857 3744 /** ··· 4243 3952 ioc->build_sg(ioc, &mpi_request->SGL, 0, 0, fwpkg_data_dma, 4244 3953 data_length); 4245 3954 init_completion(&ioc->base_cmds.done); 4246 - mpt3sas_base_put_smid_default(ioc, smid); 3955 + ioc->put_smid_default(ioc, smid); 4247 3956 /* Wait for 15 seconds */ 4248 3957 wait_for_completion_timeout(&ioc->base_cmds.done, 4249 3958 FW_IMG_HDR_READ_TIMEOUT*HZ); ··· 4483 4192 } 4484 4193 4485 4194 /** 4195 + * _base_update_ioc_page1_inlinewith_perf_mode - Update IOC Page1 fields 4196 + * according to performance mode. 4197 + * @ioc : per adapter object 4198 + * 4199 + * Return nothing. 
4200 + */ 4201 + static void 4202 + _base_update_ioc_page1_inlinewith_perf_mode(struct MPT3SAS_ADAPTER *ioc) 4203 + { 4204 + Mpi2IOCPage1_t ioc_pg1; 4205 + Mpi2ConfigReply_t mpi_reply; 4206 + 4207 + mpt3sas_config_get_ioc_pg1(ioc, &mpi_reply, &ioc->ioc_pg1_copy); 4208 + memcpy(&ioc_pg1, &ioc->ioc_pg1_copy, sizeof(Mpi2IOCPage1_t)); 4209 + 4210 + switch (perf_mode) { 4211 + case MPT_PERF_MODE_DEFAULT: 4212 + case MPT_PERF_MODE_BALANCED: 4213 + if (ioc->high_iops_queues) { 4214 + ioc_info(ioc, 4215 + "Enable interrupt coalescing only for first\t" 4216 + "%d reply queues\n", 4217 + MPT3SAS_HIGH_IOPS_REPLY_QUEUES); 4218 + /* 4219 + * If 31st bit is zero then interrupt coalescing is 4220 + * enabled for all reply descriptor post queues. 4221 + * If 31st bit is set to one then user can 4222 + * enable/disable interrupt coalescing on per reply 4223 + * descriptor post queue group(8) basis. So to enable 4224 + * interrupt coalescing only on first reply descriptor 4225 + * post queue group 31st bit and zero th bit is enabled. 4226 + */ 4227 + ioc_pg1.ProductSpecific = cpu_to_le32(0x80000000 | 4228 + ((1 << MPT3SAS_HIGH_IOPS_REPLY_QUEUES/8) - 1)); 4229 + mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1); 4230 + ioc_info(ioc, "performance mode: balanced\n"); 4231 + return; 4232 + } 4233 + /* Fall through */ 4234 + case MPT_PERF_MODE_LATENCY: 4235 + /* 4236 + * Enable interrupt coalescing on all reply queues 4237 + * with timeout value 0xA 4238 + */ 4239 + ioc_pg1.CoalescingTimeout = cpu_to_le32(0xa); 4240 + ioc_pg1.Flags |= cpu_to_le32(MPI2_IOCPAGE1_REPLY_COALESCING); 4241 + ioc_pg1.ProductSpecific = 0; 4242 + mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1); 4243 + ioc_info(ioc, "performance mode: latency\n"); 4244 + break; 4245 + case MPT_PERF_MODE_IOPS: 4246 + /* 4247 + * Enable interrupt coalescing on all reply queues. 
4248 + */ 4249 + ioc_info(ioc, 4250 + "performance mode: iops with coalescing timeout: 0x%x\n", 4251 + le32_to_cpu(ioc_pg1.CoalescingTimeout)); 4252 + ioc_pg1.Flags |= cpu_to_le32(MPI2_IOCPAGE1_REPLY_COALESCING); 4253 + ioc_pg1.ProductSpecific = 0; 4254 + mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply, &ioc_pg1); 4255 + break; 4256 + } 4257 + } 4258 + 4259 + /** 4486 4260 * _base_static_config_pages - static start of day config pages 4487 4261 * @ioc: per adapter object 4488 4262 */ ··· 4614 4258 4615 4259 if (ioc->iounit_pg8.NumSensors) 4616 4260 ioc->temp_sensors_count = ioc->iounit_pg8.NumSensors; 4261 + if (ioc->is_aero_ioc) 4262 + _base_update_ioc_page1_inlinewith_perf_mode(ioc); 4617 4263 } 4618 4264 4619 4265 /** ··· 5789 5431 mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET) 5790 5432 ioc->ioc_link_reset_in_progress = 1; 5791 5433 init_completion(&ioc->base_cmds.done); 5792 - mpt3sas_base_put_smid_default(ioc, smid); 5434 + ioc->put_smid_default(ioc, smid); 5793 5435 wait_for_completion_timeout(&ioc->base_cmds.done, 5794 5436 msecs_to_jiffies(10000)); 5795 5437 if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET || ··· 5868 5510 ioc->base_cmds.smid = smid; 5869 5511 memcpy(request, mpi_request, sizeof(Mpi2SepReply_t)); 5870 5512 init_completion(&ioc->base_cmds.done); 5871 - mpt3sas_base_put_smid_default(ioc, smid); 5513 + ioc->put_smid_default(ioc, smid); 5872 5514 wait_for_completion_timeout(&ioc->base_cmds.done, 5873 5515 msecs_to_jiffies(10000)); 5874 5516 if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) { ··· 6051 5693 if ((facts->IOCCapabilities & 6052 5694 MPI2_IOCFACTS_CAPABILITY_RDPQ_ARRAY_CAPABLE) && (!reset_devices)) 6053 5695 ioc->rdpq_array_capable = 1; 5696 + if ((facts->IOCCapabilities & MPI26_IOCFACTS_CAPABILITY_ATOMIC_REQ) 5697 + && ioc->is_aero_ioc) 5698 + ioc->atomic_desc_capable = 1; 6054 5699 facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word); 6055 5700 facts->IOCRequestFrameSize = 6056 5701 
le16_to_cpu(mpi_reply.IOCRequestFrameSize); ··· 6275 5914 mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE; 6276 5915 6277 5916 init_completion(&ioc->port_enable_cmds.done); 6278 - mpt3sas_base_put_smid_default(ioc, smid); 5917 + ioc->put_smid_default(ioc, smid); 6279 5918 wait_for_completion_timeout(&ioc->port_enable_cmds.done, 300*HZ); 6280 5919 if (!(ioc->port_enable_cmds.status & MPT3_CMD_COMPLETE)) { 6281 5920 ioc_err(ioc, "%s: timeout\n", __func__); ··· 6334 5973 memset(mpi_request, 0, sizeof(Mpi2PortEnableRequest_t)); 6335 5974 mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE; 6336 5975 6337 - mpt3sas_base_put_smid_default(ioc, smid); 5976 + ioc->put_smid_default(ioc, smid); 6338 5977 return 0; 6339 5978 } 6340 5979 ··· 6450 6089 mpi_request->EventMasks[i] = 6451 6090 cpu_to_le32(ioc->event_masks[i]); 6452 6091 init_completion(&ioc->base_cmds.done); 6453 - mpt3sas_base_put_smid_default(ioc, smid); 6092 + ioc->put_smid_default(ioc, smid); 6454 6093 wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ); 6455 6094 if (!(ioc->base_cmds.status & MPT3_CMD_COMPLETE)) { 6456 6095 ioc_err(ioc, "%s: timeout\n", __func__); ··· 6910 6549 } 6911 6550 } 6912 6551 6552 + ioc->smp_affinity_enable = smp_affinity_enable; 6553 + 6913 6554 ioc->rdpq_array_enable_assigned = 0; 6914 6555 ioc->dma_mask = 0; 6915 6556 if (ioc->is_aero_ioc) ··· 6932 6569 ioc->build_sg_scmd = &_base_build_sg_scmd; 6933 6570 ioc->build_sg = &_base_build_sg; 6934 6571 ioc->build_zero_len_sge = &_base_build_zero_len_sge; 6572 + ioc->get_msix_index_for_smlio = &_base_get_msix_index; 6935 6573 break; 6936 6574 case MPI25_VERSION: 6937 6575 case MPI26_VERSION: ··· 6947 6583 ioc->build_nvme_prp = &_base_build_nvme_prp; 6948 6584 ioc->build_zero_len_sge = &_base_build_zero_len_sge_ieee; 6949 6585 ioc->sge_size_ieee = sizeof(Mpi2IeeeSgeSimple64_t); 6950 - 6586 + if (ioc->high_iops_queues) 6587 + ioc->get_msix_index_for_smlio = 6588 + &_base_get_high_iops_msix_index; 6589 + else 6590 + 
ioc->get_msix_index_for_smlio = &_base_get_msix_index; 6951 6591 break; 6952 6592 } 6953 - 6954 - if (ioc->is_mcpu_endpoint) 6955 - ioc->put_smid_scsi_io = &_base_put_smid_mpi_ep_scsi_io; 6956 - else 6957 - ioc->put_smid_scsi_io = &_base_put_smid_scsi_io; 6958 - 6593 + if (ioc->atomic_desc_capable) { 6594 + ioc->put_smid_default = &_base_put_smid_default_atomic; 6595 + ioc->put_smid_scsi_io = &_base_put_smid_scsi_io_atomic; 6596 + ioc->put_smid_fast_path = 6597 + &_base_put_smid_fast_path_atomic; 6598 + ioc->put_smid_hi_priority = 6599 + &_base_put_smid_hi_priority_atomic; 6600 + } else { 6601 + ioc->put_smid_default = &_base_put_smid_default; 6602 + ioc->put_smid_fast_path = &_base_put_smid_fast_path; 6603 + ioc->put_smid_hi_priority = &_base_put_smid_hi_priority; 6604 + if (ioc->is_mcpu_endpoint) 6605 + ioc->put_smid_scsi_io = 6606 + &_base_put_smid_mpi_ep_scsi_io; 6607 + else 6608 + ioc->put_smid_scsi_io = &_base_put_smid_scsi_io; 6609 + } 6959 6610 /* 6960 6611 * These function pointers for other requests that don't 6961 6612 * require the IEEE scatter gather elements.
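The balanced-mode IOC Page 1 write earlier in this file (`_base_update_ioc_page1_inlinewith_perf_mode()`) encodes which reply descriptor post queue groups get interrupt coalescing into the ProductSpecific field: bit 31 selects per-group (groups of 8) control, and one low bit is set per high iops queue group. The arithmetic can be checked in isolation with this userspace sketch, using the same constant value:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the balanced-mode IOC Page 1 ProductSpecific computation.
 * Bit 31 enables per-queue-group (groups of 8) coalescing control; the
 * low bits enable coalescing only for the first N/8 groups, i.e. the
 * high iops queues. */
#define MPT3SAS_HIGH_IOPS_REPLY_QUEUES 8

static uint32_t balanced_product_specific(void)
{
    /* 8 high iops queues -> one group of 8 -> only low bit 0 set */
    return 0x80000000u |
           ((1u << (MPT3SAS_HIGH_IOPS_REPLY_QUEUES / 8)) - 1);
}
```

Note the division binds tighter than the shift in the driver's `(1 << MPT3SAS_HIGH_IOPS_REPLY_QUEUES/8) - 1`, so the expression is `(1 << 1) - 1 = 1`, giving 0x80000001 for 8 high iops queues.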
+32 -3
drivers/scsi/mpt3sas/mpt3sas_base.h
··· 76 76 #define MPT3SAS_DRIVER_NAME "mpt3sas" 77 77 #define MPT3SAS_AUTHOR "Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>" 78 78 #define MPT3SAS_DESCRIPTION "LSI MPT Fusion SAS 3.0 Device Driver" 79 - #define MPT3SAS_DRIVER_VERSION "28.100.00.00" 80 - #define MPT3SAS_MAJOR_VERSION 28 79 + #define MPT3SAS_DRIVER_VERSION "29.100.00.00" 80 + #define MPT3SAS_MAJOR_VERSION 29 81 81 #define MPT3SAS_MINOR_VERSION 100 82 82 #define MPT3SAS_BUILD_VERSION 0 83 83 #define MPT3SAS_RELEASE_VERSION 00 ··· 354 354 #define MFG10_GF0_SINGLE_DRIVE_R0 (0x00000010) 355 355 356 356 #define VIRTUAL_IO_FAILED_RETRY (0x32010081) 357 + 358 + /* High IOPs definitions */ 359 + #define MPT3SAS_DEVICE_HIGH_IOPS_DEPTH 8 360 + #define MPT3SAS_HIGH_IOPS_REPLY_QUEUES 8 361 + #define MPT3SAS_HIGH_IOPS_BATCH_COUNT 16 362 + #define MPT3SAS_GEN35_MAX_MSIX_QUEUES 128 357 363 358 364 /* OEM Specific Flags will come from OEM specific header files */ 359 365 struct Mpi2ManufacturingPage10_t { ··· 830 824 */ 831 825 struct scsiio_tracker { 832 826 u16 smid; 827 + struct scsi_cmnd *scmd; 833 828 u8 cb_idx; 834 829 u8 direct_io; 835 830 struct pcie_sg_list pcie_sg_list; ··· 931 924 u16 funcdep); 932 925 typedef void (*PUT_SMID_DEFAULT) (struct MPT3SAS_ADAPTER *ioc, u16 smid); 933 926 typedef u32 (*BASE_READ_REG) (const volatile void __iomem *addr); 927 + /* 928 + * To get high iops reply queue's msix index when high iops mode is enabled 929 + * else get the msix index of general reply queues. 
930 + */ 931 + typedef u8 (*GET_MSIX_INDEX) (struct MPT3SAS_ADAPTER *ioc, 932 + struct scsi_cmnd *scmd); 934 933 935 934 /* IOC Facts and Port Facts converted from little endian to cpu */ 936 935 union mpi3_version_union { ··· 1038 1025 * @cpu_msix_table: table for mapping cpus to msix index 1039 1026 * @cpu_msix_table_sz: table size 1040 1027 * @total_io_cnt: Gives total IO count, used to load balance the interrupts 1028 + * @high_iops_outstanding: used to load balance the interrupts 1029 + * within high iops reply queues 1041 1030 * @msix_load_balance: Enables load balancing of interrupts across 1042 1031 * the multiple MSIXs 1043 1032 * @schedule_dead_ioc_flush_running_cmds: callback to flush pending commands ··· 1162 1147 * path functions resulting in Null pointer reference followed by kernel 1163 1148 * crash. To avoid the above race condition we use mutex synchronization 1164 1149 * which ensures the synchronization between cli/sysfs_show path. 1150 + * @atomic_desc_capable: Atomic Request Descriptor support. 1151 + * @GET_MSIX_INDEX: Get the msix index of high iops queues. 
1165 1152 */ 1166 1153 struct MPT3SAS_ADAPTER { 1167 1154 struct list_head list; ··· 1223 1206 MPT3SAS_FLUSH_RUNNING_CMDS schedule_dead_ioc_flush_running_cmds; 1224 1207 u32 non_operational_loop; 1225 1208 atomic64_t total_io_cnt; 1209 + atomic64_t high_iops_outstanding; 1226 1210 bool msix_load_balance; 1227 1211 u16 thresh_hold; 1212 + u8 high_iops_queues; 1228 1213 1229 1214 /* internal commands, callback index */ 1230 1215 u8 scsi_io_cb_idx; ··· 1286 1267 Mpi2IOUnitPage0_t iounit_pg0; 1287 1268 Mpi2IOUnitPage1_t iounit_pg1; 1288 1269 Mpi2IOUnitPage8_t iounit_pg8; 1270 + Mpi2IOCPage1_t ioc_pg1_copy; 1289 1271 1290 1272 struct _boot_device req_boot_device; 1291 1273 struct _boot_device req_alt_boot_device; ··· 1405 1385 1406 1386 u8 combined_reply_queue; 1407 1387 u8 combined_reply_index_count; 1388 + u8 smp_affinity_enable; 1408 1389 /* reply post register index */ 1409 1390 resource_size_t **replyPostRegisterIndex; 1410 1391 ··· 1433 1412 u8 hide_drives; 1434 1413 spinlock_t diag_trigger_lock; 1435 1414 u8 diag_trigger_active; 1415 + u8 atomic_desc_capable; 1436 1416 BASE_READ_REG base_readl; 1437 1417 struct SL_WH_MASTER_TRIGGER_T diag_trigger_master; 1438 1418 struct SL_WH_EVENT_TRIGGERS_T diag_trigger_event; ··· 1444 1422 u8 is_gen35_ioc; 1445 1423 u8 is_aero_ioc; 1446 1424 PUT_SMID_IO_FP_HIP put_smid_scsi_io; 1447 - 1425 + PUT_SMID_IO_FP_HIP put_smid_fast_path; 1426 + PUT_SMID_IO_FP_HIP put_smid_hi_priority; 1427 + PUT_SMID_DEFAULT put_smid_default; 1428 + GET_MSIX_INDEX get_msix_index_for_smlio; 1448 1429 }; 1449 1430 1450 1431 typedef u8 (*MPT_CALLBACK)(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, ··· 1636 1611 int mpt3sas_config_set_sas_iounit_pg1(struct MPT3SAS_ADAPTER *ioc, 1637 1612 Mpi2ConfigReply_t *mpi_reply, Mpi2SasIOUnitPage1_t *config_page, 1638 1613 u16 sz); 1614 + int mpt3sas_config_get_ioc_pg1(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t 1615 + *mpi_reply, Mpi2IOCPage1_t *config_page); 1616 + int mpt3sas_config_set_ioc_pg1(struct 
MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t 1617 + *mpi_reply, Mpi2IOCPage1_t *config_page); 1639 1618 int mpt3sas_config_get_ioc_pg8(struct MPT3SAS_ADAPTER *ioc, Mpi2ConfigReply_t 1640 1619 *mpi_reply, Mpi2IOCPage8_t *config_page); 1641 1620 int mpt3sas_config_get_expander_pg0(struct MPT3SAS_ADAPTER *ioc,
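The high iops constants added to this header drive the queue selection in `_base_get_high_iops_msix_index()`: once a device has more than `MPT3SAS_DEVICE_HIGH_IOPS_DEPTH` commands outstanding, replies are round-robined over the 8 high iops queues in batches of 16. A userspace sketch of that selection, with a plain counter standing in for the driver's `atomic64_t high_iops_outstanding` and `fallback_index` standing in for the general `_base_get_msix_index()` result:

```c
#include <assert.h>

/* Sketch of the high iops queue selection in
 * _base_get_high_iops_msix_index(). Constants mirror mpt3sas_base.h. */
#define MPT3SAS_DEVICE_HIGH_IOPS_DEPTH 8
#define MPT3SAS_HIGH_IOPS_REPLY_QUEUES 8
#define MPT3SAS_HIGH_IOPS_BATCH_COUNT  16

static unsigned high_iops_msix_index(int device_busy,
                                     unsigned long long *outstanding,
                                     unsigned fallback_index)
{
    /* Deep queue on this device: batch 16 consecutive IOs onto one
     * of the 8 high iops queues, then move to the next queue. */
    if (device_busy > MPT3SAS_DEVICE_HIGH_IOPS_DEPTH)
        return (unsigned)((++*outstanding / MPT3SAS_HIGH_IOPS_BATCH_COUNT)
                          % MPT3SAS_HIGH_IOPS_REPLY_QUEUES);
    /* Shallow queue: keep the general per-CPU mapping. */
    return fallback_index;
}
```

Batching 16 replies per queue keeps some interrupt coalescing benefit while still spreading a deep workload across all eight dedicated vectors.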
+72 -1
drivers/scsi/mpt3sas/mpt3sas_config.c
···
380 380     memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
381 381     _config_display_some_debug(ioc, smid, "config_request", NULL);
382 382     init_completion(&ioc->config_cmds.done);
383 -       mpt3sas_base_put_smid_default(ioc, smid);
383 +       ioc->put_smid_default(ioc, smid);
384 384     wait_for_completion_timeout(&ioc->config_cmds.done, timeout*HZ);
385 385     if (!(ioc->config_cmds.status & MPT3_CMD_COMPLETE)) {
386 386         mpt3sas_base_check_cmd_timeout(ioc,
···
943 943         goto out;
944 944 
945 945     mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
946 +       r = _config_request(ioc, &mpi_request, mpi_reply,
947 +           MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
948 +           sizeof(*config_page));
949 +    out:
950 +       return r;
951 +   }
952 +   /**
953 +    * mpt3sas_config_get_ioc_pg1 - obtain ioc page 1
954 +    * @ioc: per adapter object
955 +    * @mpi_reply: reply mf payload returned from firmware
956 +    * @config_page: contents of the config page
957 +    * Context: sleep.
958 +    *
959 +    * Return: 0 for success, non-zero for failure.
960 +    */
961 +   int
962 +   mpt3sas_config_get_ioc_pg1(struct MPT3SAS_ADAPTER *ioc,
963 +       Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage1_t *config_page)
964 +   {
965 +       Mpi2ConfigRequest_t mpi_request;
966 +       int r;
967 + 
968 +       memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
969 +       mpi_request.Function = MPI2_FUNCTION_CONFIG;
970 +       mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
971 +       mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
972 +       mpi_request.Header.PageNumber = 1;
973 +       mpi_request.Header.PageVersion = MPI2_IOCPAGE8_PAGEVERSION;
974 +       ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
975 +       r = _config_request(ioc, &mpi_request, mpi_reply,
976 +           MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
977 +       if (r)
978 +           goto out;
979 + 
980 +       mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
981 +       r = _config_request(ioc, &mpi_request, mpi_reply,
982 +           MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
983 +           sizeof(*config_page));
984 +    out:
985 +       return r;
986 +   }
987 + 
988 +   /**
989 +    * mpt3sas_config_set_ioc_pg1 - modify ioc page 1
990 +    * @ioc: per adapter object
991 +    * @mpi_reply: reply mf payload returned from firmware
992 +    * @config_page: contents of the config page
993 +    * Context: sleep.
994 +    *
995 +    * Return: 0 for success, non-zero for failure.
996 +    */
997 +   int
998 +   mpt3sas_config_set_ioc_pg1(struct MPT3SAS_ADAPTER *ioc,
999 +       Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage1_t *config_page)
1000 +  {
1001 +      Mpi2ConfigRequest_t mpi_request;
1002 +      int r;
1003 + 
1004 +      memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
1005 +      mpi_request.Function = MPI2_FUNCTION_CONFIG;
1006 +      mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
1007 +      mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
1008 +      mpi_request.Header.PageNumber = 1;
1009 +      mpi_request.Header.PageVersion = MPI2_IOCPAGE8_PAGEVERSION;
1010 +      ioc->build_zero_len_sge_mpi(ioc, &mpi_request.PageBufferSGE);
1011 +      r = _config_request(ioc, &mpi_request, mpi_reply,
1012 +          MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, NULL, 0);
1013 +      if (r)
1014 +          goto out;
1015 + 
1016 +      mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT;
946 1017    r = _config_request(ioc, &mpi_request, mpi_reply,
947 1018        MPT3_CONFIG_PAGE_DEFAULT_TIMEOUT, config_page,
948 1019        sizeof(*config_page));
+111 -123
drivers/scsi/mpt3sas/mpt3sas_ctl.c
···
822 822     if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST)
823 823         ioc->put_smid_scsi_io(ioc, smid, device_handle);
824 824     else
825 -           mpt3sas_base_put_smid_default(ioc, smid);
825 +           ioc->put_smid_default(ioc, smid);
826 826     break;
827 827 }
828 828 case MPI2_FUNCTION_SCSI_TASK_MGMT:
···
859 859         tm_request->DevHandle));
860 860     ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
861 861         data_in_dma, data_in_sz);
862 -       mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
862 +       ioc->put_smid_hi_priority(ioc, smid, 0);
863 863     break;
864 864 }
865 865 case MPI2_FUNCTION_SMP_PASSTHROUGH:
···
890 890     }
891 891     ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
892 892         data_in_sz);
893 -       mpt3sas_base_put_smid_default(ioc, smid);
893 +       ioc->put_smid_default(ioc, smid);
894 894     break;
895 895 }
896 896 case MPI2_FUNCTION_SATA_PASSTHROUGH:
···
905 905     }
906 906     ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
907 907         data_in_sz);
908 -       mpt3sas_base_put_smid_default(ioc, smid);
908 +       ioc->put_smid_default(ioc, smid);
909 909     break;
910 910 }
911 911 case MPI2_FUNCTION_FW_DOWNLOAD:
···
913 913 {
914 914     ioc->build_sg(ioc, psge, data_out_dma, data_out_sz, data_in_dma,
915 915         data_in_sz);
916 -       mpt3sas_base_put_smid_default(ioc, smid);
916 +       ioc->put_smid_default(ioc, smid);
917 917     break;
918 918 }
919 919 case MPI2_FUNCTION_TOOLBOX:
···
928 928         ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
929 929             data_in_dma, data_in_sz);
930 930     }
931 -       mpt3sas_base_put_smid_default(ioc, smid);
931 +       ioc->put_smid_default(ioc, smid);
932 932     break;
933 933 }
934 934 case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
···
948 948 default:
949 949     ioc->build_sg_mpi(ioc, psge, data_out_dma, data_out_sz,
950 950         data_in_dma, data_in_sz);
951 -       mpt3sas_base_put_smid_default(ioc, smid);
951 +       ioc->put_smid_default(ioc, smid);
952 952     break;
953 953 }
954 954 
···
1576 1576        cpu_to_le32(ioc->product_specific[buffer_type][i]);
1577 1577 
1578 1578    init_completion(&ioc->ctl_cmds.done);
1579 -      mpt3sas_base_put_smid_default(ioc, smid);
1579 +      ioc->put_smid_default(ioc, smid);
1580 1580    wait_for_completion_timeout(&ioc->ctl_cmds.done,
1581 1581        MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
1582 1582 
···
1903 1903    mpi_request->VP_ID = 0;
1904 1904 
1905 1905    init_completion(&ioc->ctl_cmds.done);
1906 -      mpt3sas_base_put_smid_default(ioc, smid);
1906 +      ioc->put_smid_default(ioc, smid);
1907 1907    wait_for_completion_timeout(&ioc->ctl_cmds.done,
1908 1908        MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
1909 1909 
···
2151 2151    mpi_request->VP_ID = 0;
2152 2152 
2153 2153    init_completion(&ioc->ctl_cmds.done);
2154 -      mpt3sas_base_put_smid_default(ioc, smid);
2154 +      ioc->put_smid_default(ioc, smid);
2155 2155    wait_for_completion_timeout(&ioc->ctl_cmds.done,
2156 2156        MPT3_IOCTL_DEFAULT_TIMEOUT*HZ);
2157 2157 
···
2319 2319        break;
2320 2320    }
2321 2321 
2322 +      if (karg.hdr.ioc_number != ioctl_header.ioc_number) {
2323 +          ret = -EINVAL;
2324 +          break;
2325 +      }
2322 2326    if (_IOC_SIZE(cmd) == sizeof(struct mpt3_ioctl_command)) {
2323 2327        uarg = arg;
2324 2328        ret = _ctl_do_mpt_command(ioc, karg, &uarg->mf);
···
2457 2453 
2458 2454 /* scsi host attributes */
2459 2455 /**
2460 -  * _ctl_version_fw_show - firmware version
2456 +  * version_fw_show - firmware version
2461 2457  * @cdev: pointer to embedded class device
2462 2458  * @attr: ?
2463 2459  * @buf: the buffer returned
···
2465 2461  * A sysfs 'read-only' shost attribute.
2466 2462  */
2467 2463 static ssize_t
2468 -  _ctl_version_fw_show(struct device *cdev, struct device_attribute *attr,
2464 +  version_fw_show(struct device *cdev, struct device_attribute *attr,
2469 2465    char *buf)
2470 2466 {
2471 2467    struct Scsi_Host *shost = class_to_shost(cdev);
···
2477 2473        (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
2478 2474        ioc->facts.FWVersion.Word & 0x000000FF);
2479 2475 }
2480 -  static DEVICE_ATTR(version_fw, S_IRUGO, _ctl_version_fw_show, NULL);
2476 +  static DEVICE_ATTR_RO(version_fw);
2481 2477 
2482 2478 /**
2483 -  * _ctl_version_bios_show - bios version
2479 +  * version_bios_show - bios version
2484 2480  * @cdev: pointer to embedded class device
2485 2481  * @attr: ?
2486 2482  * @buf: the buffer returned
···
2488 2484  * A sysfs 'read-only' shost attribute.
2489 2485  */
2490 2486 static ssize_t
2491 -  _ctl_version_bios_show(struct device *cdev, struct device_attribute *attr,
2487 +  version_bios_show(struct device *cdev, struct device_attribute *attr,
2492 2488    char *buf)
2493 2489 {
2494 2490    struct Scsi_Host *shost = class_to_shost(cdev);
···
2502 2498        (version & 0x0000FF00) >> 8,
2503 2499        version & 0x000000FF);
2504 2500 }
2505 -  static DEVICE_ATTR(version_bios, S_IRUGO, _ctl_version_bios_show, NULL);
2501 +  static DEVICE_ATTR_RO(version_bios);
2506 2502 
2507 2503 /**
2508 -  * _ctl_version_mpi_show - MPI (message passing interface) version
2504 +  * version_mpi_show - MPI (message passing interface) version
2509 2505  * @cdev: pointer to embedded class device
2510 2506  * @attr: ?
2511 2507  * @buf: the buffer returned
···
2513 2509  * A sysfs 'read-only' shost attribute.
2514 2510  */
2515 2511 static ssize_t
2516 -  _ctl_version_mpi_show(struct device *cdev, struct device_attribute *attr,
2512 +  version_mpi_show(struct device *cdev, struct device_attribute *attr,
2517 2513    char *buf)
2518 2514 {
2519 2515    struct Scsi_Host *shost = class_to_shost(cdev);
···
2522 2518    return snprintf(buf, PAGE_SIZE, "%03x.%02x\n",
2523 2519        ioc->facts.MsgVersion, ioc->facts.HeaderVersion >> 8);
2524 2520 }
2525 -  static DEVICE_ATTR(version_mpi, S_IRUGO, _ctl_version_mpi_show, NULL);
2521 +  static DEVICE_ATTR_RO(version_mpi);
2526 2522 
2527 2523 /**
2528 -  * _ctl_version_product_show - product name
2524 +  * version_product_show - product name
2529 2525  * @cdev: pointer to embedded class device
2530 2526  * @attr: ?
2531 2527  * @buf: the buffer returned
···
2533 2529  * A sysfs 'read-only' shost attribute.
2534 2530  */
2535 2531 static ssize_t
2536 -  _ctl_version_product_show(struct device *cdev, struct device_attribute *attr,
2532 +  version_product_show(struct device *cdev, struct device_attribute *attr,
2537 2533    char *buf)
2538 2534 {
2539 2535    struct Scsi_Host *shost = class_to_shost(cdev);
···
2541 2537 
2542 2538    return snprintf(buf, 16, "%s\n", ioc->manu_pg0.ChipName);
2543 2539 }
2544 -  static DEVICE_ATTR(version_product, S_IRUGO, _ctl_version_product_show, NULL);
2540 +  static DEVICE_ATTR_RO(version_product);
2545 2541 
2546 2542 /**
2547 -  * _ctl_version_nvdata_persistent_show - ndvata persistent version
2543 +  * version_nvdata_persistent_show - ndvata persistent version
2548 2544  * @cdev: pointer to embedded class device
2549 2545  * @attr: ?
2550 2546  * @buf: the buffer returned
···
2552 2548  * A sysfs 'read-only' shost attribute.
2553 2549  */
2554 2550 static ssize_t
2555 -  _ctl_version_nvdata_persistent_show(struct device *cdev,
2551 +  version_nvdata_persistent_show(struct device *cdev,
2556 2552    struct device_attribute *attr, char *buf)
2557 2553 {
2558 2554    struct Scsi_Host *shost = class_to_shost(cdev);
···
2561 2557    return snprintf(buf, PAGE_SIZE, "%08xh\n",
2562 2558        le32_to_cpu(ioc->iounit_pg0.NvdataVersionPersistent.Word));
2563 2559 }
2564 -  static DEVICE_ATTR(version_nvdata_persistent, S_IRUGO,
2565 -      _ctl_version_nvdata_persistent_show, NULL);
2560 +  static DEVICE_ATTR_RO(version_nvdata_persistent);
2566 2561 
2567 2562 /**
2568 -  * _ctl_version_nvdata_default_show - nvdata default version
2563 +  * version_nvdata_default_show - nvdata default version
2569 2564  * @cdev: pointer to embedded class device
2570 2565  * @attr: ?
2571 2566  * @buf: the buffer returned
···
2572 2569  * A sysfs 'read-only' shost attribute.
2573 2570  */
2574 2571 static ssize_t
2575 -  _ctl_version_nvdata_default_show(struct device *cdev, struct device_attribute
2572 +  version_nvdata_default_show(struct device *cdev, struct device_attribute
2576 2573    *attr, char *buf)
2577 2574 {
2578 2575    struct Scsi_Host *shost = class_to_shost(cdev);
···
2581 2578    return snprintf(buf, PAGE_SIZE, "%08xh\n",
2582 2579        le32_to_cpu(ioc->iounit_pg0.NvdataVersionDefault.Word));
2583 2580 }
2584 -  static DEVICE_ATTR(version_nvdata_default, S_IRUGO,
2585 -      _ctl_version_nvdata_default_show, NULL);
2581 +  static DEVICE_ATTR_RO(version_nvdata_default);
2586 2582 
2587 2583 /**
2588 -  * _ctl_board_name_show - board name
2584 +  * board_name_show - board name
2589 2585  * @cdev: pointer to embedded class device
2590 2586  * @attr: ?
2591 2587  * @buf: the buffer returned
···
2592 2590  * A sysfs 'read-only' shost attribute.
2593 2591  */
2594 2592 static ssize_t
2595 -  _ctl_board_name_show(struct device *cdev, struct device_attribute *attr,
2593 +  board_name_show(struct device *cdev, struct device_attribute *attr,
2596 2594    char *buf)
2597 2595 {
2598 2596    struct Scsi_Host *shost = class_to_shost(cdev);
···
2600 2598 
2601 2599    return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardName);
2602 2600 }
2603 -  static DEVICE_ATTR(board_name, S_IRUGO, _ctl_board_name_show, NULL);
2601 +  static DEVICE_ATTR_RO(board_name);
2604 2602 
2605 2603 /**
2606 -  * _ctl_board_assembly_show - board assembly name
2604 +  * board_assembly_show - board assembly name
2607 2605  * @cdev: pointer to embedded class device
2608 2606  * @attr: ?
2609 2607  * @buf: the buffer returned
···
2611 2609  * A sysfs 'read-only' shost attribute.
2612 2610  */
2613 2611 static ssize_t
2614 -  _ctl_board_assembly_show(struct device *cdev, struct device_attribute *attr,
2612 +  board_assembly_show(struct device *cdev, struct device_attribute *attr,
2615 2613    char *buf)
2616 2614 {
2617 2615    struct Scsi_Host *shost = class_to_shost(cdev);
···
2619 2617 
2620 2618    return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardAssembly);
2621 2619 }
2622 -  static DEVICE_ATTR(board_assembly, S_IRUGO, _ctl_board_assembly_show, NULL);
2620 +  static DEVICE_ATTR_RO(board_assembly);
2623 2621 
2624 2622 /**
2625 -  * _ctl_board_tracer_show - board tracer number
2623 +  * board_tracer_show - board tracer number
2626 2624  * @cdev: pointer to embedded class device
2627 2625  * @attr: ?
2628 2626  * @buf: the buffer returned
···
2630 2628  * A sysfs 'read-only' shost attribute.
2631 2629  */
2632 2630 static ssize_t
2633 -  _ctl_board_tracer_show(struct device *cdev, struct device_attribute *attr,
2631 +  board_tracer_show(struct device *cdev, struct device_attribute *attr,
2634 2632    char *buf)
2635 2633 {
2636 2634    struct Scsi_Host *shost = class_to_shost(cdev);
···
2638 2636 
2639 2637    return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardTracerNumber);
2640 2638 }
2641 -  static DEVICE_ATTR(board_tracer, S_IRUGO, _ctl_board_tracer_show, NULL);
2639 +  static DEVICE_ATTR_RO(board_tracer);
2642 2640 
2643 2641 /**
2644 -  * _ctl_io_delay_show - io missing delay
2642 +  * io_delay_show - io missing delay
2645 2643  * @cdev: pointer to embedded class device
2646 2644  * @attr: ?
2647 2645  * @buf: the buffer returned
···
2652 2650  * A sysfs 'read-only' shost attribute.
2653 2651  */
2654 2652 static ssize_t
2655 -  _ctl_io_delay_show(struct device *cdev, struct device_attribute *attr,
2653 +  io_delay_show(struct device *cdev, struct device_attribute *attr,
2656 2654    char *buf)
2657 2655 {
2658 2656    struct Scsi_Host *shost = class_to_shost(cdev);
···
2660 2658 
2661 2659    return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->io_missing_delay);
2662 2660 }
2663 -  static DEVICE_ATTR(io_delay, S_IRUGO, _ctl_io_delay_show, NULL);
2661 +  static DEVICE_ATTR_RO(io_delay);
2664 2662 
2665 2663 /**
2666 -  * _ctl_device_delay_show - device missing delay
2664 +  * device_delay_show - device missing delay
2667 2665  * @cdev: pointer to embedded class device
2668 2666  * @attr: ?
2669 2667  * @buf: the buffer returned
···
2674 2672  * A sysfs 'read-only' shost attribute.
2675 2673  */
2676 2674 static ssize_t
2677 -  _ctl_device_delay_show(struct device *cdev, struct device_attribute *attr,
2675 +  device_delay_show(struct device *cdev, struct device_attribute *attr,
2678 2676    char *buf)
2679 2677 {
2680 2678    struct Scsi_Host *shost = class_to_shost(cdev);
···
2682 2680 
2683 2681    return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->device_missing_delay);
2684 2682 }
2685 -  static DEVICE_ATTR(device_delay, S_IRUGO, _ctl_device_delay_show, NULL);
2683 +  static DEVICE_ATTR_RO(device_delay);
2686 2684 
2687 2685 /**
2688 -  * _ctl_fw_queue_depth_show - global credits
2686 +  * fw_queue_depth_show - global credits
2689 2687  * @cdev: pointer to embedded class device
2690 2688  * @attr: ?
2691 2689  * @buf: the buffer returned
···
2695 2693  * A sysfs 'read-only' shost attribute.
2696 2694  */
2697 2695 static ssize_t
2698 -  _ctl_fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
2696 +  fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
2699 2697    char *buf)
2700 2698 {
2701 2699    struct Scsi_Host *shost = class_to_shost(cdev);
···
2703 2701 
2704 2702    return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->facts.RequestCredit);
2705 2703 }
2706 -  static DEVICE_ATTR(fw_queue_depth, S_IRUGO, _ctl_fw_queue_depth_show, NULL);
2704 +  static DEVICE_ATTR_RO(fw_queue_depth);
2707 2705 
2708 2706 /**
2709 -  * _ctl_sas_address_show - sas address
2707 +  * sas_address_show - sas address
2710 2708  * @cdev: pointer to embedded class device
2711 2709  * @attr: ?
2712 2710  * @buf: the buffer returned
···
2716 2714  * A sysfs 'read-only' shost attribute.
2717 2715  */
2718 2716 static ssize_t
2719 -  _ctl_host_sas_address_show(struct device *cdev, struct device_attribute *attr,
2717 +  host_sas_address_show(struct device *cdev, struct device_attribute *attr,
2720 2718    char *buf)
2721 2719 
2722 2720 {
···
2726 2724    return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
2727 2725        (unsigned long long)ioc->sas_hba.sas_address);
2728 2726 }
2729 -  static DEVICE_ATTR(host_sas_address, S_IRUGO,
2730 -      _ctl_host_sas_address_show, NULL);
2727 +  static DEVICE_ATTR_RO(host_sas_address);
2731 2728 
2732 2729 /**
2733 -  * _ctl_logging_level_show - logging level
2730 +  * logging_level_show - logging level
2734 2731  * @cdev: pointer to embedded class device
2735 2732  * @attr: ?
2736 2733  * @buf: the buffer returned
···
2737 2736  * A sysfs 'read/write' shost attribute.
2738 2737  */
2739 2738 static ssize_t
2740 -  _ctl_logging_level_show(struct device *cdev, struct device_attribute *attr,
2739 +  logging_level_show(struct device *cdev, struct device_attribute *attr,
2741 2740    char *buf)
2742 2741 {
2743 2742    struct Scsi_Host *shost = class_to_shost(cdev);
···
2746 2745    return snprintf(buf, PAGE_SIZE, "%08xh\n", ioc->logging_level);
2747 2746 }
2748 2747 static ssize_t
2749 -  _ctl_logging_level_store(struct device *cdev, struct device_attribute *attr,
2748 +  logging_level_store(struct device *cdev, struct device_attribute *attr,
2750 2749    const char *buf, size_t count)
2751 2750 {
2752 2751    struct Scsi_Host *shost = class_to_shost(cdev);
···
2761 2760        ioc->logging_level);
2762 2761    return strlen(buf);
2763 2762 }
2764 -  static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR, _ctl_logging_level_show,
2765 -      _ctl_logging_level_store);
2763 +  static DEVICE_ATTR_RW(logging_level);
2766 2764 
2767 2765 /**
2768 -  * _ctl_fwfault_debug_show - show/store fwfault_debug
2766 +  * fwfault_debug_show - show/store fwfault_debug
2769 2767  * @cdev: pointer to embedded class device
2770 2768  * @attr: ?
2771 2769  * @buf: the buffer returned
···
2773 2773  * A sysfs 'read/write' shost attribute.
2774 2774  */
2775 2775 static ssize_t
2776 -  _ctl_fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
2776 +  fwfault_debug_show(struct device *cdev, struct device_attribute *attr,
2777 2777    char *buf)
2778 2778 {
2779 2779    struct Scsi_Host *shost = class_to_shost(cdev);
···
2782 2782    return snprintf(buf, PAGE_SIZE, "%d\n", ioc->fwfault_debug);
2783 2783 }
2784 2784 static ssize_t
2785 -  _ctl_fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
2785 +  fwfault_debug_store(struct device *cdev, struct device_attribute *attr,
2786 2786    const char *buf, size_t count)
2787 2787 {
2788 2788    struct Scsi_Host *shost = class_to_shost(cdev);
···
2797 2797        ioc->fwfault_debug);
2798 2798    return strlen(buf);
2799 2799 }
2800 -  static DEVICE_ATTR(fwfault_debug, S_IRUGO | S_IWUSR,
2801 -      _ctl_fwfault_debug_show, _ctl_fwfault_debug_store);
2800 +  static DEVICE_ATTR_RW(fwfault_debug);
2802 2801 
2803 2802 /**
2804 -  * _ctl_ioc_reset_count_show - ioc reset count
2803 +  * ioc_reset_count_show - ioc reset count
2805 2804  * @cdev: pointer to embedded class device
2806 2805  * @attr: ?
2807 2806  * @buf: the buffer returned
···
2810 2811  * A sysfs 'read-only' shost attribute.
2811 2812  */
2812 2813 static ssize_t
2813 -  _ctl_ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
2814 +  ioc_reset_count_show(struct device *cdev, struct device_attribute *attr,
2814 2815    char *buf)
2815 2816 {
2816 2817    struct Scsi_Host *shost = class_to_shost(cdev);
···
2818 2819 
2819 2820    return snprintf(buf, PAGE_SIZE, "%d\n", ioc->ioc_reset_count);
2820 2821 }
2821 -  static DEVICE_ATTR(ioc_reset_count, S_IRUGO, _ctl_ioc_reset_count_show, NULL);
2822 +  static DEVICE_ATTR_RO(ioc_reset_count);
2822 2823 
2823 2824 /**
2824 -  * _ctl_ioc_reply_queue_count_show - number of reply queues
2825 +  * reply_queue_count_show - number of reply queues
2825 2826  * @cdev: pointer to embedded class device
2826 2827  * @attr: ?
2827 2828  * @buf: the buffer returned
···
2831 2832  * A sysfs 'read-only' shost attribute.
2832 2833  */
2833 2834 static ssize_t
2834 -  _ctl_ioc_reply_queue_count_show(struct device *cdev,
2835 +  reply_queue_count_show(struct device *cdev,
2835 2836    struct device_attribute *attr, char *buf)
2836 2837 {
2837 2838    u8 reply_queue_count;
···
2846 2847 
2847 2848    return snprintf(buf, PAGE_SIZE, "%d\n", reply_queue_count);
2848 2849 }
2849 -  static DEVICE_ATTR(reply_queue_count, S_IRUGO, _ctl_ioc_reply_queue_count_show,
2850 -      NULL);
2850 +  static DEVICE_ATTR_RO(reply_queue_count);
2851 2851 
2852 2852 /**
2853 -  * _ctl_BRM_status_show - Backup Rail Monitor Status
2853 +  * BRM_status_show - Backup Rail Monitor Status
2854 2854  * @cdev: pointer to embedded class device
2855 2855  * @attr: ?
2856 2856  * @buf: the buffer returned
···
2859 2861  * A sysfs 'read-only' shost attribute.
2860 2862  */
2861 2863 static ssize_t
2862 -  _ctl_BRM_status_show(struct device *cdev, struct device_attribute *attr,
2864 +  BRM_status_show(struct device *cdev, struct device_attribute *attr,
2863 2865    char *buf)
2864 2866 {
2865 2867    struct Scsi_Host *shost = class_to_shost(cdev);
···
2921 2923    mutex_unlock(&ioc->pci_access_mutex);
2922 2924    return rc;
2923 2925 }
2924 -  static DEVICE_ATTR(BRM_status, S_IRUGO, _ctl_BRM_status_show, NULL);
2926 +  static DEVICE_ATTR_RO(BRM_status);
2925 2927 
2926 2928 struct DIAG_BUFFER_START {
2927 2929    __le32 Size;
···
2934 2936 };
2935 2937 
2936 2938 /**
2937 -  * _ctl_host_trace_buffer_size_show - host buffer size (trace only)
2939 +  * host_trace_buffer_size_show - host buffer size (trace only)
2938 2940  * @cdev: pointer to embedded class device
2939 2941  * @attr: ?
2940 2942  * @buf: the buffer returned
···
2942 2944  * A sysfs 'read-only' shost attribute.
2943 2945  */
2944 2946 static ssize_t
2945 -  _ctl_host_trace_buffer_size_show(struct device *cdev,
2947 +  host_trace_buffer_size_show(struct device *cdev,
2946 2948    struct device_attribute *attr, char *buf)
2947 2949 {
2948 2950    struct Scsi_Host *shost = class_to_shost(cdev);
···
2974 2976    ioc->ring_buffer_sz = size;
2975 2977    return snprintf(buf, PAGE_SIZE, "%d\n", size);
2976 2978 }
2977 -  static DEVICE_ATTR(host_trace_buffer_size, S_IRUGO,
2978 -      _ctl_host_trace_buffer_size_show, NULL);
2979 +  static DEVICE_ATTR_RO(host_trace_buffer_size);
2979 2980 
2980 2981 /**
2981 -  * _ctl_host_trace_buffer_show - firmware ring buffer (trace only)
2982 +  * host_trace_buffer_show - firmware ring buffer (trace only)
2982 2983  * @cdev: pointer to embedded class device
2983 2984  * @attr: ?
2984 2985  * @buf: the buffer returned
···
2989 2992  * offset to the same attribute, it will move the pointer.
2990 2993  */
2991 2994 static ssize_t
2992 -  _ctl_host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
2995 +  host_trace_buffer_show(struct device *cdev, struct device_attribute *attr,
2993 2996    char *buf)
2994 2997 {
2995 2998    struct Scsi_Host *shost = class_to_shost(cdev);
···
3021 3024 }
3022 3025 
3023 3026 static ssize_t
3024 -  _ctl_host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
3027 +  host_trace_buffer_store(struct device *cdev, struct device_attribute *attr,
3025 3028    const char *buf, size_t count)
3026 3029 {
3027 3030    struct Scsi_Host *shost = class_to_shost(cdev);
···
3034 3037    ioc->ring_buffer_offset = val;
3035 3038    return strlen(buf);
3036 3039 }
3037 -  static DEVICE_ATTR(host_trace_buffer, S_IRUGO | S_IWUSR,
3038 -      _ctl_host_trace_buffer_show, _ctl_host_trace_buffer_store);
3040 +  static DEVICE_ATTR_RW(host_trace_buffer);
3039 3041 
3040 3042 
3041 3043 /*****************************************/
3042 3044 
3043 3045 /**
3044 -  * _ctl_host_trace_buffer_enable_show - firmware ring buffer (trace only)
3046 +  * host_trace_buffer_enable_show - firmware ring buffer (trace only)
3045 3047  * @cdev: pointer to embedded class device
3046 3048  * @attr: ?
3047 3049  * @buf: the buffer returned
···
3050 3054  * This is a mechnism to post/release host_trace_buffers
3051 3055  */
3052 3056 static ssize_t
3053 -  _ctl_host_trace_buffer_enable_show(struct device *cdev,
3057 +  host_trace_buffer_enable_show(struct device *cdev,
3054 3058    struct device_attribute *attr, char *buf)
3055 3059 {
3056 3060    struct Scsi_Host *shost = class_to_shost(cdev);
···
3068 3072 }
3069 3073 
3070 3074 static ssize_t
3071 -  _ctl_host_trace_buffer_enable_store(struct device *cdev,
3075 +  host_trace_buffer_enable_store(struct device *cdev,
3072 3076    struct device_attribute *attr, const char *buf, size_t count)
3073 3077 {
3074 3078    struct Scsi_Host *shost = class_to_shost(cdev);
···
3118 3122  out:
3119 3123    return strlen(buf);
3120 3124 }
3121 -  static DEVICE_ATTR(host_trace_buffer_enable, S_IRUGO | S_IWUSR,
3122 -      _ctl_host_trace_buffer_enable_show,
3123 -      _ctl_host_trace_buffer_enable_store);
3125 +  static DEVICE_ATTR_RW(host_trace_buffer_enable);
3124 3126 
3125 3127 /*********** diagnostic trigger suppport *********************************/
3126 3128 
3127 3129 /**
3128 -  * _ctl_diag_trigger_master_show - show the diag_trigger_master attribute
3130 +  * diag_trigger_master_show - show the diag_trigger_master attribute
3129 3131  * @cdev: pointer to embedded class device
3130 3132  * @attr: ?
3131 3133  * @buf: the buffer returned
···
3131 3137  * A sysfs 'read/write' shost attribute.
3132 3138  */
3133 3139 static ssize_t
3134 -  _ctl_diag_trigger_master_show(struct device *cdev,
3140 +  diag_trigger_master_show(struct device *cdev,
3135 3141    struct device_attribute *attr, char *buf)
3136 3142 
3137 3143 {
···
3148 3154 }
3149 3155 
3150 3156 /**
3151 -  * _ctl_diag_trigger_master_store - store the diag_trigger_master attribute
3157 +  * diag_trigger_master_store - store the diag_trigger_master attribute
3152 3158  * @cdev: pointer to embedded class device
3153 3159  * @attr: ?
3154 3160  * @buf: the buffer returned
···
3157 3163  * A sysfs 'read/write' shost attribute.
3158 3164  */
3159 3165 static ssize_t
3160 -  _ctl_diag_trigger_master_store(struct device *cdev,
3166 +  diag_trigger_master_store(struct device *cdev,
3161 3167    struct device_attribute *attr, const char *buf, size_t count)
3162 3168 
3163 3169 {
···
3176 3182    spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
3177 3183    return rc;
3178 3184 }
3179 -  static DEVICE_ATTR(diag_trigger_master, S_IRUGO | S_IWUSR,
3180 -      _ctl_diag_trigger_master_show, _ctl_diag_trigger_master_store);
3185 +  static DEVICE_ATTR_RW(diag_trigger_master);
3181 3186 
3182 3187 
3183 3188 /**
3184 -  * _ctl_diag_trigger_event_show - show the diag_trigger_event attribute
3189 +  * diag_trigger_event_show - show the diag_trigger_event attribute
3185 3190  * @cdev: pointer to embedded class device
3186 3191  * @attr: ?
3187 3192  * @buf: the buffer returned
···
3188 3195  * A sysfs 'read/write' shost attribute.
3189 3196  */
3190 3197 static ssize_t
3191 -  _ctl_diag_trigger_event_show(struct device *cdev,
3198 +  diag_trigger_event_show(struct device *cdev,
3192 3199    struct device_attribute *attr, char *buf)
3193 3200 {
3194 3201    struct Scsi_Host *shost = class_to_shost(cdev);
···
3204 3211 }
3205 3212 
3206 3213 /**
3207 -  * _ctl_diag_trigger_event_store - store the diag_trigger_event attribute
3214 +  * diag_trigger_event_store - store the diag_trigger_event attribute
3208 3215  * @cdev: pointer to embedded class device
3209 3216  * @attr: ?
3210 3217  * @buf: the buffer returned
···
3213 3220  * A sysfs 'read/write' shost attribute.
3214 3221  */
3215 3222 static ssize_t
3216 -  _ctl_diag_trigger_event_store(struct device *cdev,
3223 +  diag_trigger_event_store(struct device *cdev,
3217 3224    struct device_attribute *attr, const char *buf, size_t count)
3218 3225 
3219 3226 {
···
3232 3239    spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
3233 3240    return sz;
3234 3241 }
3235 -  static DEVICE_ATTR(diag_trigger_event, S_IRUGO | S_IWUSR,
3236 -      _ctl_diag_trigger_event_show, _ctl_diag_trigger_event_store);
3242 +  static DEVICE_ATTR_RW(diag_trigger_event);
3237 3243 
3238 3244 
3239 3245 /**
3240 -  * _ctl_diag_trigger_scsi_show - show the diag_trigger_scsi attribute
3246 +  * diag_trigger_scsi_show - show the diag_trigger_scsi attribute
3241 3247  * @cdev: pointer to embedded class device
3242 3248  * @attr: ?
3243 3249  * @buf: the buffer returned
···
3244 3252  * A sysfs 'read/write' shost attribute.
3245 3253  */
3246 3254 static ssize_t
3247 -  _ctl_diag_trigger_scsi_show(struct device *cdev,
3255 +  diag_trigger_scsi_show(struct device *cdev,
3248 3256    struct device_attribute *attr, char *buf)
3249 3257 {
3250 3258    struct Scsi_Host *shost = class_to_shost(cdev);
···
3260 3268 }
3261 3269 
3262 3270 /**
3263 -  * _ctl_diag_trigger_scsi_store - store the diag_trigger_scsi attribute
3271 +  * diag_trigger_scsi_store - store the diag_trigger_scsi attribute
3264 3272  * @cdev: pointer to embedded class device
3265 3273  * @attr: ?
3266 3274  * @buf: the buffer returned
···
3269 3277  * A sysfs 'read/write' shost attribute.
3270 3278  */
3271 3279 static ssize_t
3272 -  _ctl_diag_trigger_scsi_store(struct device *cdev,
3280 +  diag_trigger_scsi_store(struct device *cdev,
3273 3281    struct device_attribute *attr, const char *buf, size_t count)
3274 3282 {
3275 3283    struct Scsi_Host *shost = class_to_shost(cdev);
···
3287 3295    spin_unlock_irqrestore(&ioc->diag_trigger_lock, flags);
3288 3296    return sz;
3289 3297 }
3290 -  static DEVICE_ATTR(diag_trigger_scsi, S_IRUGO | S_IWUSR,
3291 -      _ctl_diag_trigger_scsi_show, _ctl_diag_trigger_scsi_store);
3298 +  static DEVICE_ATTR_RW(diag_trigger_scsi);
3292 3299 
3293 3300 
3294 3301 /**
3295 -  * _ctl_diag_trigger_scsi_show - show the diag_trigger_mpi attribute
3302 +  * diag_trigger_scsi_show - show the diag_trigger_mpi attribute
3296 3303  * @cdev: pointer to embedded class device
3297 3304  * @attr: ?
3298 3305  * @buf: the buffer returned
···
3299 3308  * A sysfs 'read/write' shost attribute.
3300 3309  */
3301 3310 static ssize_t
3302 -  _ctl_diag_trigger_mpi_show(struct device *cdev,
3311 +  diag_trigger_mpi_show(struct device *cdev,
3303 3312    struct device_attribute *attr, char *buf)
3304 3313 {
3305 3314    struct Scsi_Host *shost = class_to_shost(cdev);
···
3315 3324 }
3316 3325 
3317 3326 /**
3318 -  * _ctl_diag_trigger_mpi_store - store the diag_trigger_mpi attribute
3327 +  * diag_trigger_mpi_store - store the diag_trigger_mpi attribute
3319 3328  * @cdev: pointer to embedded class device
3320 3329  * @attr: ?
3321 3330  * @buf: the buffer returned
···
3324 3333  * A sysfs 'read/write' shost attribute.
3325 3334  */
3326 3335 static ssize_t
3327 -  _ctl_diag_trigger_mpi_store(struct device *cdev,
3336 +  diag_trigger_mpi_store(struct device *cdev,
3328 3337    struct device_attribute *attr, const char *buf, size_t count)
3329 3338 {
3330 3339    struct Scsi_Host *shost = class_to_shost(cdev);
···
3343 3352    return sz;
3344 3353 }
3345 3354 
3346 -  static DEVICE_ATTR(diag_trigger_mpi, S_IRUGO | S_IWUSR,
3347 -      _ctl_diag_trigger_mpi_show, _ctl_diag_trigger_mpi_store);
3355 +  static DEVICE_ATTR_RW(diag_trigger_mpi);
3348 3356 
3349 3357 /*********** diagnostic trigger suppport *** END ****************************/
3350 3358 
···
3381 3391 /* device attributes */
3382 3392 
3383 3393 /**
3384 -  * _ctl_device_sas_address_show - sas address
3394 +  * sas_address_show - sas address
3385 3395  * @dev: pointer to embedded class device
3386 3396  * @attr: ?
3387 3397  * @buf: the buffer returned
···
3391 3401  * A sysfs 'read-only' shost attribute.
3392 3402  */
3393 3403 static ssize_t
3394 -  _ctl_device_sas_address_show(struct device *dev, struct device_attribute *attr,
3404 +  sas_address_show(struct device *dev, struct device_attribute *attr,
3395 3405    char *buf)
3396 3406 {
3397 3407    struct scsi_device *sdev = to_scsi_device(dev);
···
3400 3410    return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
3401 3411        (unsigned long long)sas_device_priv_data->sas_target->sas_address);
3402 3412 }
3403 -  static DEVICE_ATTR(sas_address, S_IRUGO, _ctl_device_sas_address_show, NULL);
3413 +  static DEVICE_ATTR_RO(sas_address);
3404 3414 
3405 3415 /**
3406 -  * _ctl_device_handle_show - device handle
3416 +  * sas_device_handle_show - device handle
3407 3417  * @dev: pointer to embedded class device
3408 3418  * @attr: ?
3409 3419  * @buf: the buffer returned
···
3413 3423  * A sysfs 'read-only' shost attribute.
3414 3424  */
3415 3425 static ssize_t
3416 -  _ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
3426 +  sas_device_handle_show(struct device *dev, struct device_attribute *attr,
3417 3427    char *buf)
3418 3428 {
3419 3429    struct scsi_device *sdev = to_scsi_device(dev);
···
3422 3432    return snprintf(buf, PAGE_SIZE, "0x%04x\n",
3423 3433        sas_device_priv_data->sas_target->handle);
3424 3434 }
3425 -  static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
3435 +  static DEVICE_ATTR_RO(sas_device_handle);
3426 3436 
3427 3437 /**
3428 -  * _ctl_device_ncq_io_prio_show - send prioritized io commands to device
3438 +  * sas_ncq_io_prio_show - send prioritized io commands to device
3429 3439  * @dev: pointer to embedded device
3430 3440  * @attr: ?
3431 3441  * @buf: the buffer returned
···
3433 3443  * A sysfs 'read/write' sdev attribute, only works with SATA
3434 3444  */
3435 3445 static ssize_t
3436 -  _ctl_device_ncq_prio_enable_show(struct device *dev,
3446 +  sas_ncq_prio_enable_show(struct device *dev,
3437 3447    struct device_attribute *attr, char *buf)
3438 3448 {
3439 3449    struct scsi_device *sdev = to_scsi_device(dev);
···
3444 3454 }
3445 3455 
3446 3456 static ssize_t
3447 -  _ctl_device_ncq_prio_enable_store(struct device *dev,
3457 +  sas_ncq_prio_enable_store(struct device *dev,
3448 3458    struct device_attribute *attr,
3449 3459    const char *buf, size_t count)
3450 3460 {
···
3461 3471    sas_device_priv_data->ncq_prio_enable = ncq_prio_enable;
3462 3472    return strlen(buf);
3463 3473 }
3464 -  static DEVICE_ATTR(sas_ncq_prio_enable, S_IRUGO | S_IWUSR,
3465 -      _ctl_device_ncq_prio_enable_show,
3466 -      _ctl_device_ncq_prio_enable_store);
3474 +  static DEVICE_ATTR_RW(sas_ncq_prio_enable);
3467 3475 
3468 3476 struct device_attribute *mpt3sas_dev_attrs[] = {
3469 3477    &dev_attr_sas_address,
+34 -18
drivers/scsi/mpt3sas/mpt3sas_scsih.c
···
 
 
 static ushort max_sectors = 0xFFFF;
-module_param(max_sectors, ushort, 0);
+module_param(max_sectors, ushort, 0444);
 MODULE_PARM_DESC(max_sectors, "max sectors, range 64 to 32767 default=32767");
 
 
 static int missing_delay[2] = {-1, -1};
-module_param_array(missing_delay, int, NULL, 0);
+module_param_array(missing_delay, int, NULL, 0444);
 MODULE_PARM_DESC(missing_delay, " device missing delay , io missing delay");
 
 /* scsi-mid layer global parmeter is max_report_luns, which is 511 */
 #define MPT3SAS_MAX_LUN (16895)
 static u64 max_lun = MPT3SAS_MAX_LUN;
-module_param(max_lun, ullong, 0);
+module_param(max_lun, ullong, 0444);
 MODULE_PARM_DESC(max_lun, " max lun, default=16895 ");
 
 static ushort hbas_to_enumerate;
-module_param(hbas_to_enumerate, ushort, 0);
+module_param(hbas_to_enumerate, ushort, 0444);
 MODULE_PARM_DESC(hbas_to_enumerate,
 		" 0 - enumerates both SAS 2.0 & SAS 3.0 generation HBAs\n \
 		  1 - enumerates only SAS 2.0 generation HBAs\n \
···
  * Either bit can be set, or both
  */
 static int diag_buffer_enable = -1;
-module_param(diag_buffer_enable, int, 0);
+module_param(diag_buffer_enable, int, 0444);
 MODULE_PARM_DESC(diag_buffer_enable,
 	" post diag buffers (TRACE=1/SNAPSHOT=2/EXTENDED=4/default=0)");
 static int disable_discovery = -1;
-module_param(disable_discovery, int, 0);
+module_param(disable_discovery, int, 0444);
 MODULE_PARM_DESC(disable_discovery, " disable discovery ");
 
 
 /* permit overriding the host protection capabilities mask (EEDP/T10 PI) */
 static int prot_mask = -1;
-module_param(prot_mask, int, 0);
+module_param(prot_mask, int, 0444);
 MODULE_PARM_DESC(prot_mask, " host protection capabilities mask, def=7 ");
 
 
···
 	int_to_scsilun(lun, (struct scsi_lun *)mpi_request->LUN);
 	mpt3sas_scsih_set_tm_flag(ioc, handle);
 	init_completion(&ioc->tm_cmds.done);
-	mpt3sas_base_put_smid_hi_priority(ioc, smid, msix_task);
+	ioc->put_smid_hi_priority(ioc, smid, msix_task);
 	wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ);
 	if (!(ioc->tm_cmds.status & MPT3_CMD_COMPLETE)) {
 		if (mpt3sas_base_check_cmd_timeout(ioc,
···
 	mpi_request->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
 	mpi_request->MsgFlags = tr_method;
 	set_bit(handle, ioc->device_remove_in_progress);
-	mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
+	ioc->put_smid_hi_priority(ioc, smid, 0);
 	mpt3sas_trigger_master(ioc, MASTER_TRIGGER_DEVICE_REMOVAL);
 
 out:
···
 	mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
 	mpi_request->Operation = MPI2_SAS_OP_REMOVE_DEVICE;
 	mpi_request->DevHandle = mpi_request_tm->DevHandle;
-	mpt3sas_base_put_smid_default(ioc, smid_sas_ctrl);
+	ioc->put_smid_default(ioc, smid_sas_ctrl);
 
 	return _scsih_check_for_pending_tm(ioc, smid);
 }
···
 	mpi_request->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
 	mpi_request->DevHandle = cpu_to_le16(handle);
 	mpi_request->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET;
-	mpt3sas_base_put_smid_hi_priority(ioc, smid, 0);
+	ioc->put_smid_hi_priority(ioc, smid, 0);
 }
···
 	ack_request->EventContext = event_context;
 	ack_request->VF_ID = 0;  /* TODO */
 	ack_request->VP_ID = 0;
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 }
···
 	mpi_request->Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
 	mpi_request->Operation = MPI2_SAS_OP_REMOVE_DEVICE;
 	mpi_request->DevHandle = cpu_to_le16(handle);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 }
···
 		if (sas_target_priv_data->flags & MPT_TARGET_FASTPATH_IO) {
 			mpi_request->IoFlags = cpu_to_le16(scmd->cmd_len |
 			    MPI25_SCSIIO_IOFLAGS_FAST_PATH);
-			mpt3sas_base_put_smid_fast_path(ioc, smid, handle);
+			ioc->put_smid_fast_path(ioc, smid, handle);
 		} else
 			ioc->put_smid_scsi_io(ioc, smid,
 			    le16_to_cpu(mpi_request->DevHandle));
 	} else
-		mpt3sas_base_put_smid_default(ioc, smid);
+		ioc->put_smid_default(ioc, smid);
 	return 0;
 
  out:
···
 	    ((ioc_status & MPI2_IOCSTATUS_MASK)
 	     != MPI2_IOCSTATUS_SCSI_TASK_TERMINATED)) {
 		st->direct_io = 0;
+		st->scmd = scmd;
 		memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
 		mpi_request->DevHandle =
 		    cpu_to_le16(sas_device_priv_data->sas_target->handle);
···
 	    handle, phys_disk_num));
 
 	init_completion(&ioc->scsih_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
 
 	if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
···
 	if (!ioc->hide_ir_msg)
 		ioc_info(ioc, "IR shutdown (sending)\n");
 	init_completion(&ioc->scsih_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->scsih_cmds.done, 10*HZ);
 
 	if (!(ioc->scsih_cmds.status & MPT3_CMD_COMPLETE)) {
···
 	struct _pcie_device *pcie_device, *pcienext;
 	struct workqueue_struct *wq;
 	unsigned long flags;
+	Mpi2ConfigReply_t mpi_reply;
 
 	ioc->remove_host = 1;
 
···
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 	if (wq)
 		destroy_workqueue(wq);
-
+	/*
+	 * Copy back the unmodified ioc page1. so that on next driver load,
+	 * current modified changes on ioc page1 won't take effect.
+	 */
+	if (ioc->is_aero_ioc)
+		mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply,
+				&ioc->ioc_pg1_copy);
 	/* release all the volumes */
 	_scsih_ir_shutdown(ioc);
 	sas_remove_host(shost);
···
 	struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
 	struct workqueue_struct *wq;
 	unsigned long flags;
+	Mpi2ConfigReply_t mpi_reply;
 
 	ioc->remove_host = 1;
 
···
 	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
 	if (wq)
 		destroy_workqueue(wq);
+	/*
+	 * Copy back the unmodified ioc page1 so that on next driver load,
+	 * current modified changes on ioc page1 won't take effect.
+	 */
+	if (ioc->is_aero_ioc)
+		mpt3sas_config_set_ioc_pg1(ioc, &mpi_reply,
+				&ioc->ioc_pg1_copy);
 
 	_scsih_ir_shutdown(ioc);
 	mpt3sas_base_detach(ioc);
+4 -4
drivers/scsi/mpt3sas/mpt3sas_transport.c
···
 	ioc_info(ioc, "report_manufacture - send to sas_addr(0x%016llx)\n",
 		 (u64)sas_address));
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
···
 		 (u64)phy->identify.sas_address,
 		 phy->number));
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
···
 		 (u64)phy->identify.sas_address,
 		 phy->number, phy_operation));
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
···
 		ioc_info(ioc, "%s: sending smp request\n", __func__));
 
 	init_completion(&ioc->transport_cmds.done);
-	mpt3sas_base_put_smid_default(ioc, smid);
+	ioc->put_smid_default(ioc, smid);
 	wait_for_completion_timeout(&ioc->transport_cmds.done, 10*HZ);
 
 	if (!(ioc->transport_cmds.status & MPT3_CMD_COMPLETE)) {
+1 -1
drivers/scsi/mvsas/mv_sas.c
···
 	mvi_device->dev_type = dev->dev_type;
 	mvi_device->mvi_info = mvi;
 	mvi_device->sas_device = dev;
-	if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) {
+	if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
 		int phy_id;
 		u8 phy_num = parent_dev->ex_dev.num_phys;
 		struct ex_phy *phy;
-3
drivers/scsi/mvsas/mv_sas.h
···
 extern const struct mvs_dispatch mvs_64xx_dispatch;
 extern const struct mvs_dispatch mvs_94xx_dispatch;
 
-#define DEV_IS_EXPANDER(type)	\
-	((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE))
-
 #define bit(n)  ((u64)1 << n)
 
 #define for_each_phy(__lseq_mask, __mc, __lseq)	\
-6108
drivers/scsi/osst.c
···
-// SPDX-License-Identifier: GPL-2.0-only
-/*
-  SCSI Tape Driver for Linux version 1.1 and newer. See the accompanying
-  file Documentation/scsi/st.txt for more information.
-
-  History:
-
-  OnStream SCSI Tape support (osst) cloned from st.c by
-  Willem Riede (osst@riede.org) Feb 2000
-  Fixes ... Kurt Garloff <garloff@suse.de> Mar 2000
-
-  Rewritten from Dwayne Forsyth's SCSI tape driver by Kai Makisara.
-  Contribution and ideas from several people including (in alphabetical
-  order) Klaus Ehrenfried, Wolfgang Denk, Steve Hirsch, Andreas Koppenh"ofer,
-  Michael Leodolter, Eyal Lebedinsky, J"org Weule, and Eric Youngdale.
-
-  Copyright 1992 - 2002 Kai Makisara / 2000 - 2006 Willem Riede
-	 email osst@riede.org
-
-  $Header: /cvsroot/osst/Driver/osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $
-
-  Microscopic alterations - Rik Ling, 2000/12/21
-  Last st.c sync: Tue Oct 15 22:01:04 2002 by makisara
-  Some small formal changes - aeb, 950809
-*/
-
-static const char * cvsid = "$Id: osst.c,v 1.73 2005/01/01 21:13:34 wriede Exp $";
-static const char * osst_version = "0.99.4";
-
-/* The "failure to reconnect" firmware bug */
-#define OSST_FW_NEED_POLL_MIN 10601 /*(107A)*/
-#define OSST_FW_NEED_POLL_MAX 10704 /*(108D)*/
-#define OSST_FW_NEED_POLL(x,d) ((x) >= OSST_FW_NEED_POLL_MIN && (x) <= OSST_FW_NEED_POLL_MAX && d->host->this_id != 7)
-
-#include <linux/module.h>
-
-#include <linux/fs.h>
-#include <linux/kernel.h>
-#include <linux/sched/signal.h>
-#include <linux/proc_fs.h>
-#include <linux/mm.h>
-#include <linux/slab.h>
-#include <linux/init.h>
-#include <linux/string.h>
-#include <linux/errno.h>
-#include <linux/mtio.h>
-#include <linux/ioctl.h>
-#include <linux/fcntl.h>
-#include <linux/spinlock.h>
-#include <linux/vmalloc.h>
-#include <linux/blkdev.h>
-#include <linux/moduleparam.h>
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/mutex.h>
-#include <linux/uaccess.h>
-#include <asm/dma.h>
-
-/* The driver prints some debugging information on the console if DEBUG
-   is defined and non-zero. */
-#define DEBUG 0
-
-/* The message level for the debug messages is currently set to KERN_NOTICE
-   so that people can easily see the messages. Later when the debugging messages
-   in the drivers are more widely classified, this may be changed to KERN_DEBUG. */
-#define OSST_DEB_MSG  KERN_NOTICE
-
-#include <scsi/scsi.h>
-#include <scsi/scsi_dbg.h>
-#include <scsi/scsi_device.h>
-#include <scsi/scsi_driver.h>
-#include <scsi/scsi_eh.h>
-#include <scsi/scsi_host.h>
-#include <scsi/scsi_ioctl.h>
-
-#define ST_KILOBYTE 1024
-
-#include "st.h"
-#include "osst.h"
-#include "osst_options.h"
-#include "osst_detect.h"
-
-static DEFINE_MUTEX(osst_int_mutex);
-static int max_dev = 0;
-static int write_threshold_kbs = 0;
-static int max_sg_segs = 0;
-
-#ifdef MODULE
-MODULE_AUTHOR("Willem Riede");
-MODULE_DESCRIPTION("OnStream {DI-|FW-|SC-|USB}{30|50} Tape Driver");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS_CHARDEV_MAJOR(OSST_MAJOR);
-MODULE_ALIAS_SCSI_DEVICE(TYPE_TAPE);
-
-module_param(max_dev, int, 0444);
-MODULE_PARM_DESC(max_dev, "Maximum number of OnStream Tape Drives to attach (4)");
-
-module_param(write_threshold_kbs, int, 0644);
-MODULE_PARM_DESC(write_threshold_kbs, "Asynchronous write threshold (KB; 32)");
-
-module_param(max_sg_segs, int, 0644);
-MODULE_PARM_DESC(max_sg_segs, "Maximum number of scatter/gather segments to use (9)");
-#else
-static struct osst_dev_parm {
-	char   *name;
-	int    *val;
-} parms[] __initdata = {
-	{ "max_dev",             &max_dev             },
-	{ "write_threshold_kbs", &write_threshold_kbs },
-	{ "max_sg_segs",         &max_sg_segs         }
-};
-#endif
-
-/* Some default definitions have been moved to osst_options.h */
-#define OSST_BUFFER_SIZE       (OSST_BUFFER_BLOCKS * ST_KILOBYTE)
-#define OSST_WRITE_THRESHOLD   (OSST_WRITE_THRESHOLD_BLOCKS * ST_KILOBYTE)
-
-/* The buffer size should fit into the 24 bits for length in the
-   6-byte SCSI read and write commands. */
-#if OSST_BUFFER_SIZE >= (2 << 24 - 1)
-#error "Buffer size should not exceed (2 << 24 - 1) bytes!"
-#endif
-
-#if DEBUG
-static int debugging = 1;
-/* uncomment define below to test error recovery */
-// #define OSST_INJECT_ERRORS 1
-#endif
-
-/* Do not retry! The drive firmware already retries when appropriate,
-   and when it tries to tell us something, we had better listen... */
-#define MAX_RETRIES 0
-
-#define NO_TAPE  NOT_READY
-
-#define OSST_WAIT_POSITION_COMPLETE   (HZ > 200 ? HZ / 200 : 1)
-#define OSST_WAIT_WRITE_COMPLETE      (HZ / 12)
-#define OSST_WAIT_LONG_WRITE_COMPLETE (HZ / 2)
-
-#define OSST_TIMEOUT (200 * HZ)
-#define OSST_LONG_TIMEOUT (1800 * HZ)
-
-#define TAPE_NR(x) (iminor(x) & ((1 << ST_MODE_SHIFT)-1))
-#define TAPE_MODE(x) ((iminor(x) & ST_MODE_MASK) >> ST_MODE_SHIFT)
-#define TAPE_REWIND(x) ((iminor(x) & 0x80) == 0)
-#define TAPE_IS_RAW(x) (TAPE_MODE(x) & (ST_NBR_MODES >> 1))
-
-/* Internal ioctl to set both density (uppermost 8 bits) and blocksize (lower
-   24 bits) */
-#define SET_DENS_AND_BLK 0x10001
-
-static int osst_buffer_size       = OSST_BUFFER_SIZE;
-static int osst_write_threshold   = OSST_WRITE_THRESHOLD;
-static int osst_max_sg_segs       = OSST_MAX_SG;
-static int osst_max_dev           = OSST_MAX_TAPES;
-static int osst_nr_dev;
-
-static struct osst_tape **os_scsi_tapes = NULL;
-static DEFINE_RWLOCK(os_scsi_tapes_lock);
-
-static int modes_defined = 0;
-
-static struct osst_buffer *new_tape_buffer(int, int, int);
-static int enlarge_buffer(struct osst_buffer *, int);
-static void normalize_buffer(struct osst_buffer *);
-static int append_to_buffer(const char __user *, struct osst_buffer *, int);
-static int from_buffer(struct osst_buffer *, char __user *, int);
-static int osst_zero_buffer_tail(struct osst_buffer *);
-static int osst_copy_to_buffer(struct osst_buffer *, unsigned char *);
-static int osst_copy_from_buffer(struct osst_buffer *, unsigned char *);
-
-static int osst_probe(struct device *);
-static int osst_remove(struct device *);
-
-static struct scsi_driver osst_template = {
-	.gendrv = {
-		.name		= "osst",
-		.owner		= THIS_MODULE,
-		.probe		= osst_probe,
-		.remove		= osst_remove,
-	}
-};
-
-static int osst_int_ioctl(struct osst_tape *STp, struct osst_request ** aSRpnt,
-			    unsigned int cmd_in, unsigned long arg);
-
-static int osst_set_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt, int frame, int skip);
-
-static int osst_get_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt);
-
-static int osst_flush_write_buffer(struct osst_tape *STp, struct osst_request ** aSRpnt);
-
-static int osst_write_error_recovery(struct osst_tape * STp, struct osst_request ** aSRpnt, int pending);
-
-static inline char *tape_name(struct osst_tape *tape)
-{
-	return tape->drive->disk_name;
-}
-
-/* Routines that handle the interaction with mid-layer SCSI routines */
-
-
-/* Normalize Sense */
-static void osst_analyze_sense(struct osst_request *SRpnt, struct st_cmdstatus *s)
-{
-	const u8 *ucp;
-	const u8 *sense = SRpnt->sense;
-
-	s->have_sense = scsi_normalize_sense(SRpnt->sense,
-				SCSI_SENSE_BUFFERSIZE, &s->sense_hdr);
-	s->flags = 0;
-
-	if (s->have_sense) {
-		s->deferred = 0;
-		s->remainder_valid =
-			scsi_get_sense_info_fld(sense, SCSI_SENSE_BUFFERSIZE, &s->uremainder64);
-		switch (sense[0] & 0x7f) {
-		case 0x71:
-			s->deferred = 1;
-			/* fall through */
-		case 0x70:
-			s->fixed_format = 1;
-			s->flags = sense[2] & 0xe0;
-			break;
-		case 0x73:
-			s->deferred = 1;
-			/* fall through */
-		case 0x72:
-			s->fixed_format = 0;
-			ucp = scsi_sense_desc_find(sense, SCSI_SENSE_BUFFERSIZE, 4);
-			s->flags = ucp ? (ucp[3] & 0xe0) : 0;
-			break;
-		}
-	}
-}
-
-/* Convert the result to success code */
-static int osst_chk_result(struct osst_tape * STp, struct osst_request * SRpnt)
-{
-	char *name = tape_name(STp);
-	int result = SRpnt->result;
-	u8 * sense = SRpnt->sense, scode;
-#if DEBUG
-	const char *stp;
-#endif
-	struct st_cmdstatus *cmdstatp;
-
-	if (!result)
-		return 0;
-
-	cmdstatp = &STp->buffer->cmdstat;
-	osst_analyze_sense(SRpnt, cmdstatp);
-
-	if (cmdstatp->have_sense)
-		scode = STp->buffer->cmdstat.sense_hdr.sense_key;
-	else
-		scode = 0;
-#if DEBUG
-	if (debugging) {
-		printk(OSST_DEB_MSG "%s:D: Error: %x, cmd: %x %x %x %x %x %x\n",
-		   name, result,
-		   SRpnt->cmd[0], SRpnt->cmd[1], SRpnt->cmd[2],
-		   SRpnt->cmd[3], SRpnt->cmd[4], SRpnt->cmd[5]);
-		if (scode) printk(OSST_DEB_MSG "%s:D: Sense: %02x, ASC: %02x, ASCQ: %02x\n",
-				  name, scode, sense[12], sense[13]);
-		if (cmdstatp->have_sense)
-			__scsi_print_sense(STp->device, name,
-					   SRpnt->sense, SCSI_SENSE_BUFFERSIZE);
-	}
-	else
-#endif
-	if (cmdstatp->have_sense && (
-		 scode != NO_SENSE &&
-		 scode != RECOVERED_ERROR &&
-		 /* scode != UNIT_ATTENTION && */
-		 scode != BLANK_CHECK &&
-		 scode != VOLUME_OVERFLOW &&
-		 SRpnt->cmd[0] != MODE_SENSE &&
-		 SRpnt->cmd[0] != TEST_UNIT_READY)) { /* Abnormal conditions for tape */
-		if (cmdstatp->have_sense) {
-			printk(KERN_WARNING "%s:W: Command with sense data:\n", name);
-			__scsi_print_sense(STp->device, name,
-					   SRpnt->sense, SCSI_SENSE_BUFFERSIZE);
-		}
-		else {
-			static int notyetprinted = 1;
-
-			printk(KERN_WARNING
-			     "%s:W: Warning %x (driver bt 0x%x, host bt 0x%x).\n",
-			     name, result, driver_byte(result),
-			     host_byte(result));
-			if (notyetprinted) {
-				notyetprinted = 0;
-				printk(KERN_INFO
-					"%s:I: This warning may be caused by your scsi controller,\n", name);
-				printk(KERN_INFO
-					"%s:I: it has been reported with some Buslogic cards.\n", name);
-			}
-		}
-	}
-	STp->pos_unknown |= STp->device->was_reset;
-
-	if (cmdstatp->have_sense && scode == RECOVERED_ERROR) {
-		STp->recover_count++;
-		STp->recover_erreg++;
-#if DEBUG
-		if (debugging) {
-			if (SRpnt->cmd[0] == READ_6)
-				stp = "read";
-			else if (SRpnt->cmd[0] == WRITE_6)
-				stp = "write";
-			else
-				stp = "ioctl";
-			printk(OSST_DEB_MSG "%s:D: Recovered %s error (%d).\n", name, stp,
-					     STp->recover_count);
-		}
-#endif
-		if ((sense[2] & 0xe0) == 0)
-			return 0;
-	}
-	return (-EIO);
-}
-
-
-/* Wakeup from interrupt */
-static void osst_end_async(struct request *req, blk_status_t status)
-{
-	struct scsi_request *rq = scsi_req(req);
-	struct osst_request *SRpnt = req->end_io_data;
-	struct osst_tape *STp = SRpnt->stp;
-	struct rq_map_data *mdata = &SRpnt->stp->buffer->map_data;
-
-	STp->buffer->cmdstat.midlevel_result = SRpnt->result = rq->result;
-#if DEBUG
-	STp->write_pending = 0;
-#endif
-	if (rq->sense_len)
-		memcpy(SRpnt->sense, rq->sense, SCSI_SENSE_BUFFERSIZE);
-	if (SRpnt->waiting)
-		complete(SRpnt->waiting);
-
-	if (SRpnt->bio) {
-		kfree(mdata->pages);
-		blk_rq_unmap_user(SRpnt->bio);
-	}
-
-	blk_put_request(req);
-}
-
-/* osst_request memory management */
-static struct osst_request *osst_allocate_request(void)
-{
-	return kzalloc(sizeof(struct osst_request), GFP_KERNEL);
-}
-
-static void osst_release_request(struct osst_request *streq)
-{
-	kfree(streq);
-}
-
-static int osst_execute(struct osst_request *SRpnt, const unsigned char *cmd,
-			int cmd_len, int data_direction, void *buffer, unsigned bufflen,
-			int use_sg, int timeout, int retries)
-{
-	struct request *req;
-	struct scsi_request *rq;
-	struct page **pages = NULL;
-	struct rq_map_data *mdata = &SRpnt->stp->buffer->map_data;
-
-	int err = 0;
-	int write = (data_direction == DMA_TO_DEVICE);
-
-	req = blk_get_request(SRpnt->stp->device->request_queue,
-			write ? REQ_OP_SCSI_OUT : REQ_OP_SCSI_IN, 0);
-	if (IS_ERR(req))
-		return DRIVER_ERROR << 24;
-
-	rq = scsi_req(req);
-	req->rq_flags |= RQF_QUIET;
-
-	SRpnt->bio = NULL;
-
-	if (use_sg) {
-		struct scatterlist *sg, *sgl = (struct scatterlist *)buffer;
-		int i;
-
-		pages = kcalloc(use_sg, sizeof(struct page *), GFP_KERNEL);
-		if (!pages)
-			goto free_req;
-
-		for_each_sg(sgl, sg, use_sg, i)
-			pages[i] = sg_page(sg);
-
-		mdata->null_mapped = 1;
-
-		mdata->page_order = get_order(sgl[0].length);
-		mdata->nr_entries =
-			DIV_ROUND_UP(bufflen, PAGE_SIZE << mdata->page_order);
-		mdata->offset = 0;
-
-		err = blk_rq_map_user(req->q, req, mdata, NULL, bufflen, GFP_KERNEL);
-		if (err) {
-			kfree(pages);
-			goto free_req;
-		}
-		SRpnt->bio = req->bio;
-		mdata->pages = pages;
-
-	} else if (bufflen) {
-		err = blk_rq_map_kern(req->q, req, buffer, bufflen, GFP_KERNEL);
-		if (err)
-			goto free_req;
-	}
-
-	rq->cmd_len = cmd_len;
-	memset(rq->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
-	memcpy(rq->cmd, cmd, rq->cmd_len);
-	req->timeout = timeout;
-	rq->retries = retries;
-	req->end_io_data = SRpnt;
-
-	blk_execute_rq_nowait(req->q, NULL, req, 1, osst_end_async);
-	return 0;
-free_req:
-	blk_put_request(req);
-	return DRIVER_ERROR << 24;
-}
-
-/* Do the scsi command. Waits until command performed if do_wait is true.
-   Otherwise osst_write_behind_check() is used to check that the command
-   has finished. */
-static struct osst_request * osst_do_scsi(struct osst_request *SRpnt, struct osst_tape *STp,
-	unsigned char *cmd, int bytes, int direction, int timeout, int retries, int do_wait)
-{
-	unsigned char *bp;
-	unsigned short use_sg;
-#ifdef OSST_INJECT_ERRORS
-	static   int   inject = 0;
-	static   int   repeat = 0;
-#endif
-	struct completion *waiting;
-
-	/* if async, make sure there's no command outstanding */
-	if (!do_wait && ((STp->buffer)->last_SRpnt)) {
-		printk(KERN_ERR "%s: Async command already active.\n",
-		       tape_name(STp));
-		if (signal_pending(current))
-			(STp->buffer)->syscall_result = (-EINTR);
-		else
-			(STp->buffer)->syscall_result = (-EBUSY);
-		return NULL;
-	}
-
-	if (SRpnt == NULL) {
-		SRpnt = osst_allocate_request();
-		if (SRpnt == NULL) {
-			printk(KERN_ERR "%s: Can't allocate SCSI request.\n",
-			       tape_name(STp));
-			if (signal_pending(current))
-				(STp->buffer)->syscall_result = (-EINTR);
-			else
-				(STp->buffer)->syscall_result = (-EBUSY);
-			return NULL;
-		}
-		SRpnt->stp = STp;
-	}
-
-	/* If async IO, set last_SRpnt. This ptr tells write_behind_check
-	   which IO is outstanding. It's nulled out when the IO completes. */
-	if (!do_wait)
-		(STp->buffer)->last_SRpnt = SRpnt;
-
-	waiting = &STp->wait;
-	init_completion(waiting);
-	SRpnt->waiting = waiting;
-
-	use_sg = (bytes > STp->buffer->sg[0].length) ? STp->buffer->use_sg : 0;
-	if (use_sg) {
-		bp = (char *)&(STp->buffer->sg[0]);
-		if (STp->buffer->sg_segs < use_sg)
-			use_sg = STp->buffer->sg_segs;
-	}
-	else
-		bp = (STp->buffer)->b_data;
-
-	memcpy(SRpnt->cmd, cmd, sizeof(SRpnt->cmd));
-	STp->buffer->cmdstat.have_sense = 0;
-	STp->buffer->syscall_result = 0;
-
-	if (osst_execute(SRpnt, cmd, COMMAND_SIZE(cmd[0]), direction, bp, bytes,
-			 use_sg, timeout, retries))
-		/* could not allocate the buffer or request was too large */
-		(STp->buffer)->syscall_result = (-EBUSY);
-	else if (do_wait) {
-		wait_for_completion(waiting);
-		SRpnt->waiting = NULL;
-		STp->buffer->syscall_result = osst_chk_result(STp, SRpnt);
-#ifdef OSST_INJECT_ERRORS
-		if (STp->buffer->syscall_result == 0 &&
-		    cmd[0] == READ_6 &&
-		    cmd[4] &&
-		    ( (++ inject % 83) == 29 ||
-		      (STp->first_frame_position == 240
-			/* or STp->read_error_frame to fail again on the block calculated above */ &&
-		       ++repeat < 3))) {
-			printk(OSST_DEB_MSG "%s:D: Injecting read error\n", tape_name(STp));
-			STp->buffer->last_result_fatal = 1;
-		}
-#endif
-	}
-	return SRpnt;
-}
-
-
-/* Handle the write-behind checking (downs the semaphore) */
-static void osst_write_behind_check(struct osst_tape *STp)
-{
-	struct osst_buffer * STbuffer;
-
-	STbuffer = STp->buffer;
-
-#if DEBUG
-	if (STp->write_pending)
-		STp->nbr_waits++;
-	else
-		STp->nbr_finished++;
-#endif
-	wait_for_completion(&(STp->wait));
-	STp->buffer->last_SRpnt->waiting = NULL;
-
-	STp->buffer->syscall_result = osst_chk_result(STp, STp->buffer->last_SRpnt);
-
-	if (STp->buffer->syscall_result)
-		STp->buffer->syscall_result =
-			osst_write_error_recovery(STp, &(STp->buffer->last_SRpnt), 1);
-	else
-		STp->first_frame_position++;
-
-	osst_release_request(STp->buffer->last_SRpnt);
-
-	if (STbuffer->writing < STbuffer->buffer_bytes)
-		printk(KERN_WARNING "osst :A: write_behind_check: something left in buffer!\n");
-
-	STbuffer->last_SRpnt = NULL;
-	STbuffer->buffer_bytes -= STbuffer->writing;
-	STbuffer->writing = 0;
-
-	return;
-}
-
-
-
-/* Onstream specific Routines */
-/*
- * Initialize the OnStream AUX
- */
-static void osst_init_aux(struct osst_tape * STp, int frame_type, int frame_seq_number,
-					 int logical_blk_num, int blk_sz, int blk_cnt)
-{
-	os_aux_t       *aux = STp->buffer->aux;
-	os_partition_t *par = &aux->partition;
-	os_dat_t       *dat = &aux->dat;
-
-	if (STp->raw) return;
-
-	memset(aux, 0, sizeof(*aux));
-	aux->format_id = htonl(0);
-	memcpy(aux->application_sig, "LIN4", 4);
-	aux->hdwr = htonl(0);
-	aux->frame_type = frame_type;
-
-	switch (frame_type) {
-	case	OS_FRAME_TYPE_HEADER:
-		aux->update_frame_cntr    = htonl(STp->update_frame_cntr);
-		par->partition_num        = OS_CONFIG_PARTITION;
-		par->par_desc_ver         = OS_PARTITION_VERSION;
-		par->wrt_pass_cntr        = htons(0xffff);
-		/* 0-4 = reserved, 5-9 = header, 2990-2994 = header, 2995-2999 = reserved */
-		par->first_frame_ppos     = htonl(0);
-		par->last_frame_ppos      = htonl(0xbb7);
-		aux->frame_seq_num        = htonl(0);
-		aux->logical_blk_num_high = htonl(0);
-		aux->logical_blk_num      = htonl(0);
-		aux->next_mark_ppos       = htonl(STp->first_mark_ppos);
-		break;
-	case	OS_FRAME_TYPE_DATA:
-	case	OS_FRAME_TYPE_MARKER:
-		dat->dat_sz = 8;
-		dat->reserved1 = 0;
-		dat->entry_cnt = 1;
-		dat->reserved3 = 0;
-		dat->dat_list[0].blk_sz   = htonl(blk_sz);
-		dat->dat_list[0].blk_cnt  = htons(blk_cnt);
-		dat->dat_list[0].flags    = frame_type==OS_FRAME_TYPE_MARKER?
-							OS_DAT_FLAGS_MARK:OS_DAT_FLAGS_DATA;
-		dat->dat_list[0].reserved = 0;
-		/* fall through */
-	case	OS_FRAME_TYPE_EOD:
-		aux->update_frame_cntr    = htonl(0);
-		par->partition_num        = OS_DATA_PARTITION;
-		par->par_desc_ver         = OS_PARTITION_VERSION;
-		par->wrt_pass_cntr        = htons(STp->wrt_pass_cntr);
-		par->first_frame_ppos     = htonl(STp->first_data_ppos);
-		par->last_frame_ppos      = htonl(STp->capacity);
-		aux->frame_seq_num        = htonl(frame_seq_number);
-		aux->logical_blk_num_high = htonl(0);
-		aux->logical_blk_num      = htonl(logical_blk_num);
-		break;
-	default: ; /* probably FILL */
-	}
-	aux->filemark_cnt = htonl(STp->filemark_cnt);
-	aux->phys_fm = htonl(0xffffffff);
-	aux->last_mark_ppos = htonl(STp->last_mark_ppos);
-	aux->last_mark_lbn  = htonl(STp->last_mark_lbn);
-}
-
-/*
- * Verify that we have the correct tape frame
- */
-static int osst_verify_frame(struct osst_tape * STp, int frame_seq_number, int quiet)
-{
-	char               * name = tape_name(STp);
-	os_aux_t           * aux  = STp->buffer->aux;
-	os_partition_t     * par  = &(aux->partition);
-	struct st_partstat * STps = &(STp->ps[STp->partition]);
-	unsigned int	     blk_cnt, blk_sz, i;
-
-	if (STp->raw) {
-		if (STp->buffer->syscall_result) {
-			for (i=0; i < STp->buffer->sg_segs; i++)
-				memset(page_address(sg_page(&STp->buffer->sg[i])),
-				       0, STp->buffer->sg[i].length);
-			strcpy(STp->buffer->b_data, "READ ERROR ON FRAME");
-		} else
-			STp->buffer->buffer_bytes = OS_FRAME_SIZE;
-		return 1;
-	}
-	if (STp->buffer->syscall_result) {
-#if DEBUG
-		printk(OSST_DEB_MSG "%s:D: Skipping frame, read error\n", name);
-#endif
-		return 0;
-	}
-	if (ntohl(aux->format_id) != 0) {
-#if DEBUG
-		printk(OSST_DEB_MSG "%s:D: Skipping frame, format_id %u\n", name, ntohl(aux->format_id));
-#endif
-		goto err_out;
-	}
-	if (memcmp(aux->application_sig, STp->application_sig, 4) != 0 &&
-	    (memcmp(aux->application_sig, "LIN3", 4) != 0 || STp->linux_media_version != 4)) {
-#if DEBUG
-		printk(OSST_DEB_MSG "%s:D: Skipping frame, incorrect application signature\n", name);
-#endif
-		goto err_out;
-	}
-	if (par->partition_num != OS_DATA_PARTITION) {
-		if (!STp->linux_media || STp->linux_media_version != 2) {
-#if DEBUG
-			printk(OSST_DEB_MSG "%s:D: Skipping frame, partition num %d\n",
-					    name, par->partition_num);
-#endif
-			goto err_out;
-		}
-	}
-	if (par->par_desc_ver != OS_PARTITION_VERSION) {
-#if DEBUG
-		printk(OSST_DEB_MSG "%s:D: Skipping frame, partition version %d\n", name, par->par_desc_ver);
-#endif
-		goto err_out;
-	}
-	if (ntohs(par->wrt_pass_cntr) != STp->wrt_pass_cntr) {
-#if DEBUG
-		printk(OSST_DEB_MSG "%s:D: Skipping frame, wrt_pass_cntr %d (expected %d)\n",
-				    name, ntohs(par->wrt_pass_cntr), STp->wrt_pass_cntr);
-#endif
-		goto err_out;
-	}
-	if (aux->frame_type != OS_FRAME_TYPE_DATA &&
-	    aux->frame_type != OS_FRAME_TYPE_EOD &&
-	    aux->frame_type != OS_FRAME_TYPE_MARKER) {
-		if (!quiet) {
-#if DEBUG
-			printk(OSST_DEB_MSG "%s:D: Skipping frame, frame type %x\n", name, aux->frame_type);
-#endif
-		}
-		goto err_out;
-	}
-	if (aux->frame_type == OS_FRAME_TYPE_EOD &&
-	    STp->first_frame_position < STp->eod_frame_ppos) {
-		printk(KERN_INFO "%s:I: Skipping premature EOD frame %d\n", name,
-				 STp->first_frame_position);
-		goto err_out;
-	}
-	if (frame_seq_number != -1 && ntohl(aux->frame_seq_num) != frame_seq_number) {
-		if (!quiet) {
-#if DEBUG
-			printk(OSST_DEB_MSG "%s:D: Skipping frame, sequence number %u (expected %d)\n",
-					    name, ntohl(aux->frame_seq_num), frame_seq_number);
-#endif
-		}
-		goto err_out;
-	}
-	if (aux->frame_type == OS_FRAME_TYPE_MARKER) {
-		STps->eof = ST_FM_HIT;
-
-		i = ntohl(aux->filemark_cnt);
-		if (STp->header_cache != NULL && i < OS_FM_TAB_MAX && (i > STp->filemark_cnt ||
-		    STp->first_frame_position - 1 != ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[i]))) {
-#if DEBUG
-			printk(OSST_DEB_MSG "%s:D: %s filemark %d at frame pos %d\n", name,
-				  STp->header_cache->dat_fm_tab.fm_tab_ent[i] == 0?"Learned":"Corrected",
-				  i, STp->first_frame_position - 1);
-#endif
-			STp->header_cache->dat_fm_tab.fm_tab_ent[i] = htonl(STp->first_frame_position - 1);
-			if (i >= STp->filemark_cnt)
-				STp->filemark_cnt = i+1;
-		}
-	}
-	if (aux->frame_type == OS_FRAME_TYPE_EOD) {
-		STps->eof = ST_EOD_1;
-		STp->frame_in_buffer = 1;
-	}
-	if (aux->frame_type == OS_FRAME_TYPE_DATA) {
-		blk_cnt = ntohs(aux->dat.dat_list[0].blk_cnt);
-		blk_sz  = ntohl(aux->dat.dat_list[0].blk_sz);
-		STp->buffer->buffer_bytes = blk_cnt * blk_sz;
-		STp->buffer->read_pointer = 0;
-		STp->frame_in_buffer = 1;
-
-		/* See what block size was used to write file */
-		if (STp->block_size != blk_sz && blk_sz > 0) {
-			printk(KERN_INFO
-	    "%s:I: File was written with block size %d%c, currently %d%c, adjusted to match.\n",
-				name, blk_sz<1024?blk_sz:blk_sz/1024,blk_sz<1024?'b':'k',
-				STp->block_size<1024?STp->block_size:STp->block_size/1024,
-				STp->block_size<1024?'b':'k');
-			STp->block_size            = blk_sz;
-			STp->buffer->buffer_blocks = OS_DATA_SIZE / blk_sz;
-		}
-		STps->eof = ST_NOEOF;
-	}
-	STp->frame_seq_number = ntohl(aux->frame_seq_num);
-	STp->logical_blk_num  = ntohl(aux->logical_blk_num);
-	return 1;
-
-err_out:
-	if (STp->read_error_frame == 0)
-		STp->read_error_frame = STp->first_frame_position - 1;
-	return 0;
-}
-
-/*
- * Wait for the unit to become Ready
- */
-static int osst_wait_ready(struct osst_tape * STp, struct osst_request ** aSRpnt,
-				 unsigned timeout, int initial_delay)
-{
-	unsigned char		cmd[MAX_COMMAND_SIZE];
-	struct osst_request    *SRpnt;
-	unsigned long		startwait = jiffies;
-#if DEBUG
-	int			dbg  = debugging;
-	char    	       *name = tape_name(STp);
-
-	printk(OSST_DEB_MSG "%s:D: Reached onstream wait ready\n", name);
-#endif
-
-	if (initial_delay > 0)
-		msleep(jiffies_to_msecs(initial_delay));
-
-	memset(cmd, 0, MAX_COMMAND_SIZE);
-	cmd[0] = TEST_UNIT_READY;
-
-	SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
-	*aSRpnt = SRpnt;
-	if (!SRpnt) return (-EBUSY);
-
-	while ( STp->buffer->syscall_result && time_before(jiffies, startwait + timeout*HZ) &&
-	       (( SRpnt->sense[2]  == 2 && SRpnt->sense[12] == 4    &&
-		 (SRpnt->sense[13] == 1 || SRpnt->sense[13] == 8)    ) ||
-		( SRpnt->sense[2]  == 6 && SRpnt->sense[12] == 0x28 &&
-		  SRpnt->sense[13] == 0                              )  )) {
-#if DEBUG
-	    if (debugging) {
-		printk(OSST_DEB_MSG "%s:D: Sleeping in onstream wait ready\n", name);
-		printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name);
-		debugging = 0;
-	    }
-#endif
-	    msleep(100);
-
-	    memset(cmd, 0, MAX_COMMAND_SIZE);
-	    cmd[0] = TEST_UNIT_READY;
-
-	    SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
-	}
-	*aSRpnt = SRpnt;
-#if DEBUG
-	debugging = dbg;
-#endif
-	if ( STp->buffer->syscall_result &&
-	     osst_write_error_recovery(STp, aSRpnt, 0) ) {
-#if DEBUG
-	    printk(OSST_DEB_MSG "%s:D: Abnormal exit from onstream wait ready\n", name);
-	    printk(OSST_DEB_MSG "%s:D: Result = %d, Sense: 0=%02x, 2=%02x, 12=%02x, 13=%02x\n", name,
-			STp->buffer->syscall_result, SRpnt->sense[0], SRpnt->sense[2],
-			SRpnt->sense[12], SRpnt->sense[13]);
-#endif
-	    return (-EIO);
-	}
-#if DEBUG
-	printk(OSST_DEB_MSG "%s:D: Normal exit from onstream wait
ready\n", name); 814 - #endif 815 - return 0; 816 - } 817 - 818 - /* 819 - * Wait for a tape to be inserted in the unit 820 - */ 821 - static int osst_wait_for_medium(struct osst_tape * STp, struct osst_request ** aSRpnt, unsigned timeout) 822 - { 823 - unsigned char cmd[MAX_COMMAND_SIZE]; 824 - struct osst_request * SRpnt; 825 - unsigned long startwait = jiffies; 826 - #if DEBUG 827 - int dbg = debugging; 828 - char * name = tape_name(STp); 829 - 830 - printk(OSST_DEB_MSG "%s:D: Reached onstream wait for medium\n", name); 831 - #endif 832 - 833 - memset(cmd, 0, MAX_COMMAND_SIZE); 834 - cmd[0] = TEST_UNIT_READY; 835 - 836 - SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1); 837 - *aSRpnt = SRpnt; 838 - if (!SRpnt) return (-EBUSY); 839 - 840 - while ( STp->buffer->syscall_result && time_before(jiffies, startwait + timeout*HZ) && 841 - SRpnt->sense[2] == 2 && SRpnt->sense[12] == 0x3a && SRpnt->sense[13] == 0 ) { 842 - #if DEBUG 843 - if (debugging) { 844 - printk(OSST_DEB_MSG "%s:D: Sleeping in onstream wait medium\n", name); 845 - printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name); 846 - debugging = 0; 847 - } 848 - #endif 849 - msleep(100); 850 - 851 - memset(cmd, 0, MAX_COMMAND_SIZE); 852 - cmd[0] = TEST_UNIT_READY; 853 - 854 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1); 855 - } 856 - *aSRpnt = SRpnt; 857 - #if DEBUG 858 - debugging = dbg; 859 - #endif 860 - if ( STp->buffer->syscall_result && SRpnt->sense[2] != 2 && 861 - SRpnt->sense[12] != 4 && SRpnt->sense[13] == 1) { 862 - #if DEBUG 863 - printk(OSST_DEB_MSG "%s:D: Abnormal exit from onstream wait medium\n", name); 864 - printk(OSST_DEB_MSG "%s:D: Result = %d, Sense: 0=%02x, 2=%02x, 12=%02x, 13=%02x\n", name, 865 - STp->buffer->syscall_result, SRpnt->sense[0], SRpnt->sense[2], 866 - SRpnt->sense[12], SRpnt->sense[13]); 867 - #endif 868 - return 0; 869 - } 870 - #if DEBUG 871 - printk(OSST_DEB_MSG "%s:D: Normal exit from 
onstream wait medium\n", name); 872 - #endif 873 - return 1; 874 - } 875 - 876 - static int osst_position_tape_and_confirm(struct osst_tape * STp, struct osst_request ** aSRpnt, int frame) 877 - { 878 - int retval; 879 - 880 - osst_wait_ready(STp, aSRpnt, 15 * 60, 0); /* TODO - can this catch a write error? */ 881 - retval = osst_set_frame_position(STp, aSRpnt, frame, 0); 882 - if (retval) return (retval); 883 - osst_wait_ready(STp, aSRpnt, 15 * 60, OSST_WAIT_POSITION_COMPLETE); 884 - return (osst_get_frame_position(STp, aSRpnt)); 885 - } 886 - 887 - /* 888 - * Wait for write(s) to complete 889 - */ 890 - static int osst_flush_drive_buffer(struct osst_tape * STp, struct osst_request ** aSRpnt) 891 - { 892 - unsigned char cmd[MAX_COMMAND_SIZE]; 893 - struct osst_request * SRpnt; 894 - int result = 0; 895 - int delay = OSST_WAIT_WRITE_COMPLETE; 896 - #if DEBUG 897 - char * name = tape_name(STp); 898 - 899 - printk(OSST_DEB_MSG "%s:D: Reached onstream flush drive buffer (write filemark)\n", name); 900 - #endif 901 - 902 - memset(cmd, 0, MAX_COMMAND_SIZE); 903 - cmd[0] = WRITE_FILEMARKS; 904 - cmd[1] = 1; 905 - 906 - SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1); 907 - *aSRpnt = SRpnt; 908 - if (!SRpnt) return (-EBUSY); 909 - if (STp->buffer->syscall_result) { 910 - if ((SRpnt->sense[2] & 0x0f) == 2 && SRpnt->sense[12] == 4) { 911 - if (SRpnt->sense[13] == 8) { 912 - delay = OSST_WAIT_LONG_WRITE_COMPLETE; 913 - } 914 - } else 915 - result = osst_write_error_recovery(STp, aSRpnt, 0); 916 - } 917 - result |= osst_wait_ready(STp, aSRpnt, 5 * 60, delay); 918 - STp->ps[STp->partition].rw = OS_WRITING_COMPLETE; 919 - 920 - return (result); 921 - } 922 - 923 - #define OSST_POLL_PER_SEC 10 924 - static int osst_wait_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int curr, int minlast, int to) 925 - { 926 - unsigned long startwait = jiffies; 927 - char * name = tape_name(STp); 928 - #if DEBUG 929 - char notyetprinted = 1; 930 - 
#endif 931 - if (minlast >= 0 && STp->ps[STp->partition].rw != ST_READING) 932 - printk(KERN_ERR "%s:A: Waiting for frame without having initialized read!\n", name); 933 - 934 - while (time_before (jiffies, startwait + to*HZ)) 935 - { 936 - int result; 937 - result = osst_get_frame_position(STp, aSRpnt); 938 - if (result == -EIO) 939 - if ((result = osst_write_error_recovery(STp, aSRpnt, 0)) == 0) 940 - return 0; /* successful recovery leaves drive ready for frame */ 941 - if (result < 0) break; 942 - if (STp->first_frame_position == curr && 943 - ((minlast < 0 && 944 - (signed)STp->last_frame_position > (signed)curr + minlast) || 945 - (minlast >= 0 && STp->cur_frames > minlast) 946 - ) && result >= 0) 947 - { 948 - #if DEBUG 949 - if (debugging || time_after_eq(jiffies, startwait + 2*HZ/OSST_POLL_PER_SEC)) 950 - printk (OSST_DEB_MSG 951 - "%s:D: Succ wait f fr %i (>%i): %i-%i %i (%i): %3li.%li s\n", 952 - name, curr, curr+minlast, STp->first_frame_position, 953 - STp->last_frame_position, STp->cur_frames, 954 - result, (jiffies-startwait)/HZ, 955 - (((jiffies-startwait)%HZ)*10)/HZ); 956 - #endif 957 - return 0; 958 - } 959 - #if DEBUG 960 - if (time_after_eq(jiffies, startwait + 2*HZ/OSST_POLL_PER_SEC) && notyetprinted) 961 - { 962 - printk (OSST_DEB_MSG "%s:D: Wait for frame %i (>%i): %i-%i %i (%i)\n", 963 - name, curr, curr+minlast, STp->first_frame_position, 964 - STp->last_frame_position, STp->cur_frames, result); 965 - notyetprinted--; 966 - } 967 - #endif 968 - msleep(1000 / OSST_POLL_PER_SEC); 969 - } 970 - #if DEBUG 971 - printk (OSST_DEB_MSG "%s:D: Fail wait f fr %i (>%i): %i-%i %i: %3li.%li s\n", 972 - name, curr, curr+minlast, STp->first_frame_position, 973 - STp->last_frame_position, STp->cur_frames, 974 - (jiffies-startwait)/HZ, (((jiffies-startwait)%HZ)*10)/HZ); 975 - #endif 976 - return -EBUSY; 977 - } 978 - 979 - static int osst_recover_wait_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int writing) 980 - { 981 - struct osst_request 
* SRpnt; 982 - unsigned char cmd[MAX_COMMAND_SIZE]; 983 - unsigned long startwait = jiffies; 984 - int retval = 1; 985 - char * name = tape_name(STp); 986 - 987 - if (writing) { 988 - char mybuf[24]; 989 - char * olddata = STp->buffer->b_data; 990 - int oldsize = STp->buffer->buffer_size; 991 - 992 - /* write zero fm then read pos - if shows write error, try to recover - if no progress, wait */ 993 - 994 - memset(cmd, 0, MAX_COMMAND_SIZE); 995 - cmd[0] = WRITE_FILEMARKS; 996 - cmd[1] = 1; 997 - SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, 998 - MAX_RETRIES, 1); 999 - 1000 - while (retval && time_before (jiffies, startwait + 5*60*HZ)) { 1001 - 1002 - if (STp->buffer->syscall_result && (SRpnt->sense[2] & 0x0f) != 2) { 1003 - 1004 - /* some failure - not just not-ready */ 1005 - retval = osst_write_error_recovery(STp, aSRpnt, 0); 1006 - break; 1007 - } 1008 - schedule_timeout_interruptible(HZ / OSST_POLL_PER_SEC); 1009 - 1010 - STp->buffer->b_data = mybuf; STp->buffer->buffer_size = 24; 1011 - memset(cmd, 0, MAX_COMMAND_SIZE); 1012 - cmd[0] = READ_POSITION; 1013 - 1014 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, 20, DMA_FROM_DEVICE, STp->timeout, 1015 - MAX_RETRIES, 1); 1016 - 1017 - retval = ( STp->buffer->syscall_result || (STp->buffer)->b_data[15] > 25 ); 1018 - STp->buffer->b_data = olddata; STp->buffer->buffer_size = oldsize; 1019 - } 1020 - if (retval) 1021 - printk(KERN_ERR "%s:E: Device did not succeed to write buffered data\n", name); 1022 - } else 1023 - /* TODO - figure out which error conditions can be handled */ 1024 - if (STp->buffer->syscall_result) 1025 - printk(KERN_WARNING 1026 - "%s:W: Recover_wait_frame(read) cannot handle %02x:%02x:%02x\n", name, 1027 - (*aSRpnt)->sense[ 2] & 0x0f, 1028 - (*aSRpnt)->sense[12], 1029 - (*aSRpnt)->sense[13]); 1030 - 1031 - return retval; 1032 - } 1033 - 1034 - /* 1035 - * Read the next OnStream tape frame at the current location 1036 - */ 1037 - static int osst_read_frame(struct osst_tape * STp, 
struct osst_request ** aSRpnt, int timeout) 1038 - { 1039 - unsigned char cmd[MAX_COMMAND_SIZE]; 1040 - struct osst_request * SRpnt; 1041 - int retval = 0; 1042 - #if DEBUG 1043 - os_aux_t * aux = STp->buffer->aux; 1044 - char * name = tape_name(STp); 1045 - #endif 1046 - 1047 - if (STp->poll) 1048 - if (osst_wait_frame (STp, aSRpnt, STp->first_frame_position, 0, timeout)) 1049 - retval = osst_recover_wait_frame(STp, aSRpnt, 0); 1050 - 1051 - memset(cmd, 0, MAX_COMMAND_SIZE); 1052 - cmd[0] = READ_6; 1053 - cmd[1] = 1; 1054 - cmd[4] = 1; 1055 - 1056 - #if DEBUG 1057 - if (debugging) 1058 - printk(OSST_DEB_MSG "%s:D: Reading frame from OnStream tape\n", name); 1059 - #endif 1060 - SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, OS_FRAME_SIZE, DMA_FROM_DEVICE, 1061 - STp->timeout, MAX_RETRIES, 1); 1062 - *aSRpnt = SRpnt; 1063 - if (!SRpnt) 1064 - return (-EBUSY); 1065 - 1066 - if ((STp->buffer)->syscall_result) { 1067 - retval = 1; 1068 - if (STp->read_error_frame == 0) { 1069 - STp->read_error_frame = STp->first_frame_position; 1070 - #if DEBUG 1071 - printk(OSST_DEB_MSG "%s:D: Recording read error at %d\n", name, STp->read_error_frame); 1072 - #endif 1073 - } 1074 - #if DEBUG 1075 - if (debugging) 1076 - printk(OSST_DEB_MSG "%s:D: Sense: %2x %2x %2x %2x %2x %2x %2x %2x\n", 1077 - name, 1078 - SRpnt->sense[0], SRpnt->sense[1], 1079 - SRpnt->sense[2], SRpnt->sense[3], 1080 - SRpnt->sense[4], SRpnt->sense[5], 1081 - SRpnt->sense[6], SRpnt->sense[7]); 1082 - #endif 1083 - } 1084 - else 1085 - STp->first_frame_position++; 1086 - #if DEBUG 1087 - if (debugging) { 1088 - char sig[8]; int i; 1089 - for (i=0;i<4;i++) 1090 - sig[i] = aux->application_sig[i]<32?'^':aux->application_sig[i]; 1091 - sig[4] = '\0'; 1092 - printk(OSST_DEB_MSG 1093 - "%s:D: AUX: %s UpdFrCt#%d Wpass#%d %s FrSeq#%d LogBlk#%d Qty=%d Sz=%d\n", name, sig, 1094 - ntohl(aux->update_frame_cntr), ntohs(aux->partition.wrt_pass_cntr), 1095 - aux->frame_type==1?"EOD":aux->frame_type==2?"MARK": 1096 - 
aux->frame_type==8?"HEADR":aux->frame_type==0x80?"DATA":"FILL", 1097 - ntohl(aux->frame_seq_num), ntohl(aux->logical_blk_num), 1098 - ntohs(aux->dat.dat_list[0].blk_cnt), ntohl(aux->dat.dat_list[0].blk_sz) ); 1099 - if (aux->frame_type==2) 1100 - printk(OSST_DEB_MSG "%s:D: mark_cnt=%d, last_mark_ppos=%d, last_mark_lbn=%d\n", name, 1101 - ntohl(aux->filemark_cnt), ntohl(aux->last_mark_ppos), ntohl(aux->last_mark_lbn)); 1102 - printk(OSST_DEB_MSG "%s:D: Exit read frame from OnStream tape with code %d\n", name, retval); 1103 - } 1104 - #endif 1105 - return (retval); 1106 - } 1107 - 1108 - static int osst_initiate_read(struct osst_tape * STp, struct osst_request ** aSRpnt) 1109 - { 1110 - struct st_partstat * STps = &(STp->ps[STp->partition]); 1111 - struct osst_request * SRpnt ; 1112 - unsigned char cmd[MAX_COMMAND_SIZE]; 1113 - int retval = 0; 1114 - char * name = tape_name(STp); 1115 - 1116 - if (STps->rw != ST_READING) { /* Initialize read operation */ 1117 - if (STps->rw == ST_WRITING || STp->dirty) { 1118 - STp->write_type = OS_WRITE_DATA; 1119 - osst_flush_write_buffer(STp, aSRpnt); 1120 - osst_flush_drive_buffer(STp, aSRpnt); 1121 - } 1122 - STps->rw = ST_READING; 1123 - STp->frame_in_buffer = 0; 1124 - 1125 - /* 1126 - * Issue a read 0 command to get the OnStream drive 1127 - * read frames into its buffer. 
1128 - */ 1129 - memset(cmd, 0, MAX_COMMAND_SIZE); 1130 - cmd[0] = READ_6; 1131 - cmd[1] = 1; 1132 - 1133 - #if DEBUG 1134 - printk(OSST_DEB_MSG "%s:D: Start Read Ahead on OnStream tape\n", name); 1135 - #endif 1136 - SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1); 1137 - *aSRpnt = SRpnt; 1138 - if ((retval = STp->buffer->syscall_result)) 1139 - printk(KERN_WARNING "%s:W: Error starting read ahead\n", name); 1140 - } 1141 - 1142 - return retval; 1143 - } 1144 - 1145 - static int osst_get_logical_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, 1146 - int frame_seq_number, int quiet) 1147 - { 1148 - struct st_partstat * STps = &(STp->ps[STp->partition]); 1149 - char * name = tape_name(STp); 1150 - int cnt = 0, 1151 - bad = 0, 1152 - past = 0, 1153 - x, 1154 - position; 1155 - 1156 - /* 1157 - * If we want just any frame (-1) and there is a frame in the buffer, return it 1158 - */ 1159 - if (frame_seq_number == -1 && STp->frame_in_buffer) { 1160 - #if DEBUG 1161 - printk(OSST_DEB_MSG "%s:D: Frame %d still in buffer\n", name, STp->frame_seq_number); 1162 - #endif 1163 - return (STps->eof); 1164 - } 1165 - /* 1166 - * Search and wait for the next logical tape frame 1167 - */ 1168 - while (1) { 1169 - if (cnt++ > 400) { 1170 - printk(KERN_ERR "%s:E: Couldn't find logical frame %d, aborting\n", 1171 - name, frame_seq_number); 1172 - if (STp->read_error_frame) { 1173 - osst_set_frame_position(STp, aSRpnt, STp->read_error_frame, 0); 1174 - #if DEBUG 1175 - printk(OSST_DEB_MSG "%s:D: Repositioning tape to bad frame %d\n", 1176 - name, STp->read_error_frame); 1177 - #endif 1178 - STp->read_error_frame = 0; 1179 - STp->abort_count++; 1180 - } 1181 - return (-EIO); 1182 - } 1183 - #if DEBUG 1184 - if (debugging) 1185 - printk(OSST_DEB_MSG "%s:D: Looking for frame %d, attempt %d\n", 1186 - name, frame_seq_number, cnt); 1187 - #endif 1188 - if ( osst_initiate_read(STp, aSRpnt) 1189 - || ( (!STp->frame_in_buffer) && 
osst_read_frame(STp, aSRpnt, 30) ) ) { 1190 - if (STp->raw) 1191 - return (-EIO); 1192 - position = osst_get_frame_position(STp, aSRpnt); 1193 - if (position >= 0xbae && position < 0xbb8) 1194 - position = 0xbb8; 1195 - else if (position > STp->eod_frame_ppos || ++bad == 10) { 1196 - position = STp->read_error_frame - 1; 1197 - bad = 0; 1198 - } 1199 - else { 1200 - position += 29; 1201 - cnt += 19; 1202 - } 1203 - #if DEBUG 1204 - printk(OSST_DEB_MSG "%s:D: Bad frame detected, positioning tape to block %d\n", 1205 - name, position); 1206 - #endif 1207 - osst_set_frame_position(STp, aSRpnt, position, 0); 1208 - continue; 1209 - } 1210 - if (osst_verify_frame(STp, frame_seq_number, quiet)) 1211 - break; 1212 - if (osst_verify_frame(STp, -1, quiet)) { 1213 - x = ntohl(STp->buffer->aux->frame_seq_num); 1214 - if (STp->fast_open) { 1215 - printk(KERN_WARNING 1216 - "%s:W: Found logical frame %d instead of %d after fast open\n", 1217 - name, x, frame_seq_number); 1218 - STp->header_ok = 0; 1219 - STp->read_error_frame = 0; 1220 - return (-EIO); 1221 - } 1222 - if (x > frame_seq_number) { 1223 - if (++past > 3) { 1224 - /* positioning backwards did not bring us to the desired frame */ 1225 - position = STp->read_error_frame - 1; 1226 - } 1227 - else { 1228 - position = osst_get_frame_position(STp, aSRpnt) 1229 - + frame_seq_number - x - 1; 1230 - 1231 - if (STp->first_frame_position >= 3000 && position < 3000) 1232 - position -= 10; 1233 - } 1234 - #if DEBUG 1235 - printk(OSST_DEB_MSG 1236 - "%s:D: Found logical frame %d while looking for %d: back up %d\n", 1237 - name, x, frame_seq_number, 1238 - STp->first_frame_position - position); 1239 - #endif 1240 - osst_set_frame_position(STp, aSRpnt, position, 0); 1241 - cnt += 10; 1242 - } 1243 - else 1244 - past = 0; 1245 - } 1246 - if (osst_get_frame_position(STp, aSRpnt) == 0xbaf) { 1247 - #if DEBUG 1248 - printk(OSST_DEB_MSG "%s:D: Skipping config partition\n", name); 1249 - #endif 1250 - osst_set_frame_position(STp, 
aSRpnt, 0xbb8, 0); 1251 - cnt--; 1252 - } 1253 - STp->frame_in_buffer = 0; 1254 - } 1255 - if (cnt > 1) { 1256 - STp->recover_count++; 1257 - STp->recover_erreg++; 1258 - printk(KERN_WARNING "%s:I: Don't worry, Read error at position %d recovered\n", 1259 - name, STp->read_error_frame); 1260 - } 1261 - STp->read_count++; 1262 - 1263 - #if DEBUG 1264 - if (debugging || STps->eof) 1265 - printk(OSST_DEB_MSG 1266 - "%s:D: Exit get logical frame (%d=>%d) from OnStream tape with code %d\n", 1267 - name, frame_seq_number, STp->frame_seq_number, STps->eof); 1268 - #endif 1269 - STp->fast_open = 0; 1270 - STp->read_error_frame = 0; 1271 - return (STps->eof); 1272 - } 1273 - 1274 - static int osst_seek_logical_blk(struct osst_tape * STp, struct osst_request ** aSRpnt, int logical_blk_num) 1275 - { 1276 - struct st_partstat * STps = &(STp->ps[STp->partition]); 1277 - char * name = tape_name(STp); 1278 - int retries = 0; 1279 - int frame_seq_estimate, ppos_estimate, move; 1280 - 1281 - if (logical_blk_num < 0) logical_blk_num = 0; 1282 - #if DEBUG 1283 - printk(OSST_DEB_MSG "%s:D: Seeking logical block %d (now at %d, size %d%c)\n", 1284 - name, logical_blk_num, STp->logical_blk_num, 1285 - STp->block_size<1024?STp->block_size:STp->block_size/1024, 1286 - STp->block_size<1024?'b':'k'); 1287 - #endif 1288 - /* Do we know where we are? 
*/ 1289 - if (STps->drv_block >= 0) { 1290 - move = logical_blk_num - STp->logical_blk_num; 1291 - if (move < 0) move -= (OS_DATA_SIZE / STp->block_size) - 1; 1292 - move /= (OS_DATA_SIZE / STp->block_size); 1293 - frame_seq_estimate = STp->frame_seq_number + move; 1294 - } else 1295 - frame_seq_estimate = logical_blk_num * STp->block_size / OS_DATA_SIZE; 1296 - 1297 - if (frame_seq_estimate < 2980) ppos_estimate = frame_seq_estimate + 10; 1298 - else ppos_estimate = frame_seq_estimate + 20; 1299 - while (++retries < 10) { 1300 - if (ppos_estimate > STp->eod_frame_ppos-2) { 1301 - frame_seq_estimate += STp->eod_frame_ppos - 2 - ppos_estimate; 1302 - ppos_estimate = STp->eod_frame_ppos - 2; 1303 - } 1304 - if (frame_seq_estimate < 0) { 1305 - frame_seq_estimate = 0; 1306 - ppos_estimate = 10; 1307 - } 1308 - osst_set_frame_position(STp, aSRpnt, ppos_estimate, 0); 1309 - if (osst_get_logical_frame(STp, aSRpnt, frame_seq_estimate, 1) >= 0) { 1310 - /* we've located the estimated frame, now does it have our block? */ 1311 - if (logical_blk_num < STp->logical_blk_num || 1312 - logical_blk_num >= STp->logical_blk_num + ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt)) { 1313 - if (STps->eof == ST_FM_HIT) 1314 - move = logical_blk_num < STp->logical_blk_num? -2 : 1; 1315 - else { 1316 - move = logical_blk_num - STp->logical_blk_num; 1317 - if (move < 0) move -= (OS_DATA_SIZE / STp->block_size) - 1; 1318 - move /= (OS_DATA_SIZE / STp->block_size); 1319 - } 1320 - if (!move) move = logical_blk_num > STp->logical_blk_num ? 
1 : -1; 1321 - #if DEBUG 1322 - printk(OSST_DEB_MSG 1323 - "%s:D: Seek retry %d at ppos %d fsq %d (est %d) lbn %d (need %d) move %d\n", 1324 - name, retries, ppos_estimate, STp->frame_seq_number, frame_seq_estimate, 1325 - STp->logical_blk_num, logical_blk_num, move); 1326 - #endif 1327 - frame_seq_estimate += move; 1328 - ppos_estimate += move; 1329 - continue; 1330 - } else { 1331 - STp->buffer->read_pointer = (logical_blk_num - STp->logical_blk_num) * STp->block_size; 1332 - STp->buffer->buffer_bytes -= STp->buffer->read_pointer; 1333 - STp->logical_blk_num = logical_blk_num; 1334 - #if DEBUG 1335 - printk(OSST_DEB_MSG 1336 - "%s:D: Seek success at ppos %d fsq %d in_buf %d, bytes %d, ptr %d*%d\n", 1337 - name, ppos_estimate, STp->frame_seq_number, STp->frame_in_buffer, 1338 - STp->buffer->buffer_bytes, STp->buffer->read_pointer / STp->block_size, 1339 - STp->block_size); 1340 - #endif 1341 - STps->drv_file = ntohl(STp->buffer->aux->filemark_cnt); 1342 - if (STps->eof == ST_FM_HIT) { 1343 - STps->drv_file++; 1344 - STps->drv_block = 0; 1345 - } else { 1346 - STps->drv_block = ntohl(STp->buffer->aux->last_mark_lbn)? 1347 - STp->logical_blk_num - 1348 - (STps->drv_file ? 
ntohl(STp->buffer->aux->last_mark_lbn) + 1 : 0): 1349 - -1; 1350 - } 1351 - STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD:ST_NOEOF; 1352 - return 0; 1353 - } 1354 - } 1355 - if (osst_get_logical_frame(STp, aSRpnt, -1, 1) < 0) 1356 - goto error; 1357 - /* we are not yet at the estimated frame, adjust our estimate of its physical position */ 1358 - #if DEBUG 1359 - printk(OSST_DEB_MSG "%s:D: Seek retry %d at ppos %d fsq %d (est %d) lbn %d (need %d)\n", 1360 - name, retries, ppos_estimate, STp->frame_seq_number, frame_seq_estimate, 1361 - STp->logical_blk_num, logical_blk_num); 1362 - #endif 1363 - if (frame_seq_estimate != STp->frame_seq_number) 1364 - ppos_estimate += frame_seq_estimate - STp->frame_seq_number; 1365 - else 1366 - break; 1367 - } 1368 - error: 1369 - printk(KERN_ERR "%s:E: Couldn't seek to logical block %d (at %d), %d retries\n", 1370 - name, logical_blk_num, STp->logical_blk_num, retries); 1371 - return (-EIO); 1372 - } 1373 - 1374 - /* The values below are based on the OnStream frame payload size of 32K == 2**15, 1375 - * that is, OSST_FRAME_SHIFT + OSST_SECTOR_SHIFT must be 15. With a minimum block 1376 - * size of 512 bytes, we need to be able to resolve 32K/512 == 64 == 2**6 positions 1377 - * inside each frame. Finally, OSST_SECTOR_MASK == 2**OSST_FRAME_SHIFT - 1. 
1378 - */ 1379 - #define OSST_FRAME_SHIFT 6 1380 - #define OSST_SECTOR_SHIFT 9 1381 - #define OSST_SECTOR_MASK 0x03F 1382 - 1383 - static int osst_get_sector(struct osst_tape * STp, struct osst_request ** aSRpnt) 1384 - { 1385 - int sector; 1386 - #if DEBUG 1387 - char * name = tape_name(STp); 1388 - 1389 - printk(OSST_DEB_MSG 1390 - "%s:D: Positioned at ppos %d, frame %d, lbn %d, file %d, blk %d, %cptr %d, eof %d\n", 1391 - name, STp->first_frame_position, STp->frame_seq_number, STp->logical_blk_num, 1392 - STp->ps[STp->partition].drv_file, STp->ps[STp->partition].drv_block, 1393 - STp->ps[STp->partition].rw == ST_WRITING?'w':'r', 1394 - STp->ps[STp->partition].rw == ST_WRITING?STp->buffer->buffer_bytes: 1395 - STp->buffer->read_pointer, STp->ps[STp->partition].eof); 1396 - #endif 1397 - /* do we know where we are inside a file? */ 1398 - if (STp->ps[STp->partition].drv_block >= 0) { 1399 - sector = (STp->frame_in_buffer ? STp->first_frame_position-1 : 1400 - STp->first_frame_position) << OSST_FRAME_SHIFT; 1401 - if (STp->ps[STp->partition].rw == ST_WRITING) 1402 - sector |= (STp->buffer->buffer_bytes >> OSST_SECTOR_SHIFT) & OSST_SECTOR_MASK; 1403 - else 1404 - sector |= (STp->buffer->read_pointer >> OSST_SECTOR_SHIFT) & OSST_SECTOR_MASK; 1405 - } else { 1406 - sector = osst_get_frame_position(STp, aSRpnt); 1407 - if (sector > 0) 1408 - sector <<= OSST_FRAME_SHIFT; 1409 - } 1410 - return sector; 1411 - } 1412 - 1413 - static int osst_seek_sector(struct osst_tape * STp, struct osst_request ** aSRpnt, int sector) 1414 - { 1415 - struct st_partstat * STps = &(STp->ps[STp->partition]); 1416 - int frame = sector >> OSST_FRAME_SHIFT, 1417 - offset = (sector & OSST_SECTOR_MASK) << OSST_SECTOR_SHIFT, 1418 - r; 1419 - #if DEBUG 1420 - char * name = tape_name(STp); 1421 - 1422 - printk(OSST_DEB_MSG "%s:D: Seeking sector %d in frame %d at offset %d\n", 1423 - name, sector, frame, offset); 1424 - #endif 1425 - if (frame < 0 || frame >= STp->capacity) return (-ENXIO); 1426 - 
1427 - if (frame <= STp->first_data_ppos) { 1428 - STp->frame_seq_number = STp->logical_blk_num = STps->drv_file = STps->drv_block = 0; 1429 - return (osst_set_frame_position(STp, aSRpnt, frame, 0)); 1430 - } 1431 - r = osst_set_frame_position(STp, aSRpnt, offset?frame:frame-1, 0); 1432 - if (r < 0) return r; 1433 - 1434 - r = osst_get_logical_frame(STp, aSRpnt, -1, 1); 1435 - if (r < 0) return r; 1436 - 1437 - if (osst_get_frame_position(STp, aSRpnt) != (offset?frame+1:frame)) return (-EIO); 1438 - 1439 - if (offset) { 1440 - STp->logical_blk_num += offset / STp->block_size; 1441 - STp->buffer->read_pointer = offset; 1442 - STp->buffer->buffer_bytes -= offset; 1443 - } else { 1444 - STp->frame_seq_number++; 1445 - STp->frame_in_buffer = 0; 1446 - STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt); 1447 - STp->buffer->buffer_bytes = STp->buffer->read_pointer = 0; 1448 - } 1449 - STps->drv_file = ntohl(STp->buffer->aux->filemark_cnt); 1450 - if (STps->eof == ST_FM_HIT) { 1451 - STps->drv_file++; 1452 - STps->drv_block = 0; 1453 - } else { 1454 - STps->drv_block = ntohl(STp->buffer->aux->last_mark_lbn)? 1455 - STp->logical_blk_num - 1456 - (STps->drv_file ? ntohl(STp->buffer->aux->last_mark_lbn) + 1 : 0): 1457 - -1; 1458 - } 1459 - STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD:ST_NOEOF; 1460 - #if DEBUG 1461 - printk(OSST_DEB_MSG 1462 - "%s:D: Now positioned at ppos %d, frame %d, lbn %d, file %d, blk %d, rptr %d, eof %d\n", 1463 - name, STp->first_frame_position, STp->frame_seq_number, STp->logical_blk_num, 1464 - STps->drv_file, STps->drv_block, STp->buffer->read_pointer, STps->eof); 1465 - #endif 1466 - return 0; 1467 - } 1468 - 1469 - /* 1470 - * Read back the drive's internal buffer contents, as a part 1471 - * of the write error recovery mechanism for old OnStream 1472 - * firmware revisions. 
1473 - * Precondition for this function to work: all frames in the 1474 - * drive's buffer must be of one type (DATA, MARK or EOD)! 1475 - */ 1476 - static int osst_read_back_buffer_and_rewrite(struct osst_tape * STp, struct osst_request ** aSRpnt, 1477 - unsigned int frame, unsigned int skip, int pending) 1478 - { 1479 - struct osst_request * SRpnt = * aSRpnt; 1480 - unsigned char * buffer, * p; 1481 - unsigned char cmd[MAX_COMMAND_SIZE]; 1482 - int flag, new_frame, i; 1483 - int nframes = STp->cur_frames; 1484 - int blks_per_frame = ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt); 1485 - int frame_seq_number = ntohl(STp->buffer->aux->frame_seq_num) 1486 - - (nframes + pending - 1); 1487 - int logical_blk_num = ntohl(STp->buffer->aux->logical_blk_num) 1488 - - (nframes + pending - 1) * blks_per_frame; 1489 - char * name = tape_name(STp); 1490 - unsigned long startwait = jiffies; 1491 - #if DEBUG 1492 - int dbg = debugging; 1493 - #endif 1494 - 1495 - if ((buffer = vmalloc(array_size((nframes + 1), OS_DATA_SIZE))) == NULL) 1496 - return (-EIO); 1497 - 1498 - printk(KERN_INFO "%s:I: Reading back %d frames from drive buffer%s\n", 1499 - name, nframes, pending?" 
and one that was pending":""); 1500 - 1501 - osst_copy_from_buffer(STp->buffer, (p = &buffer[nframes * OS_DATA_SIZE])); 1502 - #if DEBUG 1503 - if (pending && debugging) 1504 - printk(OSST_DEB_MSG "%s:D: Pending frame %d (lblk %d), data %02x %02x %02x %02x\n", 1505 - name, frame_seq_number + nframes, 1506 - logical_blk_num + nframes * blks_per_frame, 1507 - p[0], p[1], p[2], p[3]); 1508 - #endif 1509 - for (i = 0, p = buffer; i < nframes; i++, p += OS_DATA_SIZE) { 1510 - 1511 - memset(cmd, 0, MAX_COMMAND_SIZE); 1512 - cmd[0] = 0x3C; /* Buffer Read */ 1513 - cmd[1] = 6; /* Retrieve Faulty Block */ 1514 - cmd[7] = 32768 >> 8; 1515 - cmd[8] = 32768 & 0xff; 1516 - 1517 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, OS_FRAME_SIZE, DMA_FROM_DEVICE, 1518 - STp->timeout, MAX_RETRIES, 1); 1519 - 1520 - if ((STp->buffer)->syscall_result || !SRpnt) { 1521 - printk(KERN_ERR "%s:E: Failed to read frame back from OnStream buffer\n", name); 1522 - vfree(buffer); 1523 - *aSRpnt = SRpnt; 1524 - return (-EIO); 1525 - } 1526 - osst_copy_from_buffer(STp->buffer, p); 1527 - #if DEBUG 1528 - if (debugging) 1529 - printk(OSST_DEB_MSG "%s:D: Read back logical frame %d, data %02x %02x %02x %02x\n", 1530 - name, frame_seq_number + i, p[0], p[1], p[2], p[3]); 1531 - #endif 1532 - } 1533 - *aSRpnt = SRpnt; 1534 - osst_get_frame_position(STp, aSRpnt); 1535 - 1536 - #if DEBUG 1537 - printk(OSST_DEB_MSG "%s:D: Frames left in buffer: %d\n", name, STp->cur_frames); 1538 - #endif 1539 - /* Write synchronously so we can be sure we're OK again and don't have to recover recursively */ 1540 - /* In the header we don't actually re-write the frames that fail, just the ones after them */ 1541 - 1542 - for (flag=1, new_frame=frame, p=buffer, i=0; i < nframes + pending; ) { 1543 - 1544 - if (flag) { 1545 - if (STp->write_type == OS_WRITE_HEADER) { 1546 - i += skip; 1547 - p += skip * OS_DATA_SIZE; 1548 - } 1549 - else if (new_frame < 2990 && new_frame+skip+nframes+pending >= 2990) 1550 - new_frame = 3000-i; 1551 - 
else 1552 - new_frame += skip; 1553 - #if DEBUG 1554 - printk(OSST_DEB_MSG "%s:D: Position to frame %d, write fseq %d\n", 1555 - name, new_frame+i, frame_seq_number+i); 1556 - #endif 1557 - osst_set_frame_position(STp, aSRpnt, new_frame + i, 0); 1558 - osst_wait_ready(STp, aSRpnt, 60, OSST_WAIT_POSITION_COMPLETE); 1559 - osst_get_frame_position(STp, aSRpnt); 1560 - SRpnt = * aSRpnt; 1561 - 1562 - if (new_frame > frame + 1000) { 1563 - printk(KERN_ERR "%s:E: Failed to find writable tape media\n", name); 1564 - vfree(buffer); 1565 - return (-EIO); 1566 - } 1567 - if ( i >= nframes + pending ) break; 1568 - flag = 0; 1569 - } 1570 - osst_copy_to_buffer(STp->buffer, p); 1571 - /* 1572 - * IMPORTANT: for error recovery to work, _never_ queue frames with mixed frame type! 1573 - */ 1574 - osst_init_aux(STp, STp->buffer->aux->frame_type, frame_seq_number+i, 1575 - logical_blk_num + i*blks_per_frame, 1576 - ntohl(STp->buffer->aux->dat.dat_list[0].blk_sz), blks_per_frame); 1577 - memset(cmd, 0, MAX_COMMAND_SIZE); 1578 - cmd[0] = WRITE_6; 1579 - cmd[1] = 1; 1580 - cmd[4] = 1; 1581 - 1582 - #if DEBUG 1583 - if (debugging) 1584 - printk(OSST_DEB_MSG 1585 - "%s:D: About to write frame %d, seq %d, lbn %d, data %02x %02x %02x %02x\n", 1586 - name, new_frame+i, frame_seq_number+i, logical_blk_num + i*blks_per_frame, 1587 - p[0], p[1], p[2], p[3]); 1588 - #endif 1589 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, OS_FRAME_SIZE, DMA_TO_DEVICE, 1590 - STp->timeout, MAX_RETRIES, 1); 1591 - 1592 - if (STp->buffer->syscall_result) 1593 - flag = 1; 1594 - else { 1595 - p += OS_DATA_SIZE; i++; 1596 - 1597 - /* if we just sent the last frame, wait till all successfully written */ 1598 - if ( i == nframes + pending ) { 1599 - #if DEBUG 1600 - printk(OSST_DEB_MSG "%s:D: Check re-write successful\n", name); 1601 - #endif 1602 - memset(cmd, 0, MAX_COMMAND_SIZE); 1603 - cmd[0] = WRITE_FILEMARKS; 1604 - cmd[1] = 1; 1605 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, 1606 - STp->timeout, 
MAX_RETRIES, 1); 1607 - #if DEBUG 1608 - if (debugging) { 1609 - printk(OSST_DEB_MSG "%s:D: Sleeping in re-write wait ready\n", name); 1610 - printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name); 1611 - debugging = 0; 1612 - } 1613 - #endif 1614 - flag = STp->buffer->syscall_result; 1615 - while ( !flag && time_before(jiffies, startwait + 60*HZ) ) { 1616 - 1617 - memset(cmd, 0, MAX_COMMAND_SIZE); 1618 - cmd[0] = TEST_UNIT_READY; 1619 - 1620 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE, STp->timeout, 1621 - MAX_RETRIES, 1); 1622 - 1623 - if (SRpnt->sense[2] == 2 && SRpnt->sense[12] == 4 && 1624 - (SRpnt->sense[13] == 1 || SRpnt->sense[13] == 8)) { 1625 - /* in the process of becoming ready */ 1626 - msleep(100); 1627 - continue; 1628 - } 1629 - if (STp->buffer->syscall_result) 1630 - flag = 1; 1631 - break; 1632 - } 1633 - #if DEBUG 1634 - debugging = dbg; 1635 - printk(OSST_DEB_MSG "%s:D: Wait re-write finished\n", name); 1636 - #endif 1637 - } 1638 - } 1639 - *aSRpnt = SRpnt; 1640 - if (flag) { 1641 - if ((SRpnt->sense[ 2] & 0x0f) == 13 && 1642 - SRpnt->sense[12] == 0 && 1643 - SRpnt->sense[13] == 2) { 1644 - printk(KERN_ERR "%s:E: Volume overflow in write error recovery\n", name); 1645 - vfree(buffer); 1646 - return (-EIO); /* hit end of tape = fail */ 1647 - } 1648 - i = ((SRpnt->sense[3] << 24) | 1649 - (SRpnt->sense[4] << 16) | 1650 - (SRpnt->sense[5] << 8) | 1651 - SRpnt->sense[6] ) - new_frame; 1652 - p = &buffer[i * OS_DATA_SIZE]; 1653 - #if DEBUG 1654 - printk(OSST_DEB_MSG "%s:D: Additional write error at %d\n", name, new_frame+i); 1655 - #endif 1656 - osst_get_frame_position(STp, aSRpnt); 1657 - #if DEBUG 1658 - printk(OSST_DEB_MSG "%s:D: reported frame positions: host = %d, tape = %d, buffer = %d\n", 1659 - name, STp->first_frame_position, STp->last_frame_position, STp->cur_frames); 1660 - #endif 1661 - } 1662 - } 1663 - if (flag) { 1664 - /* error recovery did not successfully complete */ 1665 - printk(KERN_ERR "%s:D: Write 
error recovery failed in %s\n", name, 1666 - STp->write_type == OS_WRITE_HEADER?"header":"body"); 1667 - } 1668 - if (!pending) 1669 - osst_copy_to_buffer(STp->buffer, p); /* so buffer content == at entry in all cases */ 1670 - vfree(buffer); 1671 - return 0; 1672 - } 1673 - 1674 - static int osst_reposition_and_retry(struct osst_tape * STp, struct osst_request ** aSRpnt, 1675 - unsigned int frame, unsigned int skip, int pending) 1676 - { 1677 - unsigned char cmd[MAX_COMMAND_SIZE]; 1678 - struct osst_request * SRpnt; 1679 - char * name = tape_name(STp); 1680 - int expected = 0; 1681 - int attempts = 1000 / skip; 1682 - int flag = 1; 1683 - unsigned long startwait = jiffies; 1684 - #if DEBUG 1685 - int dbg = debugging; 1686 - #endif 1687 - 1688 - while (attempts && time_before(jiffies, startwait + 60*HZ)) { 1689 - if (flag) { 1690 - #if DEBUG 1691 - debugging = dbg; 1692 - #endif 1693 - if (frame < 2990 && frame+skip+STp->cur_frames+pending >= 2990) 1694 - frame = 3000-skip; 1695 - expected = frame+skip+STp->cur_frames+pending; 1696 - #if DEBUG 1697 - printk(OSST_DEB_MSG "%s:D: Position to fppos %d, re-write from fseq %d\n", 1698 - name, frame+skip, STp->frame_seq_number-STp->cur_frames-pending); 1699 - #endif 1700 - osst_set_frame_position(STp, aSRpnt, frame + skip, 1); 1701 - flag = 0; 1702 - attempts--; 1703 - schedule_timeout_interruptible(msecs_to_jiffies(100)); 1704 - } 1705 - if (osst_get_frame_position(STp, aSRpnt) < 0) { /* additional write error */ 1706 - #if DEBUG 1707 - printk(OSST_DEB_MSG "%s:D: Addl error, host %d, tape %d, buffer %d\n", 1708 - name, STp->first_frame_position, 1709 - STp->last_frame_position, STp->cur_frames); 1710 - #endif 1711 - frame = STp->last_frame_position; 1712 - flag = 1; 1713 - continue; 1714 - } 1715 - if (pending && STp->cur_frames < 50) { 1716 - 1717 - memset(cmd, 0, MAX_COMMAND_SIZE); 1718 - cmd[0] = WRITE_6; 1719 - cmd[1] = 1; 1720 - cmd[4] = 1; 1721 - #if DEBUG 1722 - printk(OSST_DEB_MSG "%s:D: About to write pending 
fseq %d at fppos %d\n", 1723 - name, STp->frame_seq_number-1, STp->first_frame_position); 1724 - #endif 1725 - SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, OS_FRAME_SIZE, DMA_TO_DEVICE, 1726 - STp->timeout, MAX_RETRIES, 1); 1727 - *aSRpnt = SRpnt; 1728 - 1729 - if (STp->buffer->syscall_result) { /* additional write error */ 1730 - if ((SRpnt->sense[ 2] & 0x0f) == 13 && 1731 - SRpnt->sense[12] == 0 && 1732 - SRpnt->sense[13] == 2) { 1733 - printk(KERN_ERR 1734 - "%s:E: Volume overflow in write error recovery\n", 1735 - name); 1736 - break; /* hit end of tape = fail */ 1737 - } 1738 - flag = 1; 1739 - } 1740 - else 1741 - pending = 0; 1742 - 1743 - continue; 1744 - } 1745 - if (STp->cur_frames == 0) { 1746 - #if DEBUG 1747 - debugging = dbg; 1748 - printk(OSST_DEB_MSG "%s:D: Wait re-write finished\n", name); 1749 - #endif 1750 - if (STp->first_frame_position != expected) { 1751 - printk(KERN_ERR "%s:A: Actual position %d - expected %d\n", 1752 - name, STp->first_frame_position, expected); 1753 - return (-EIO); 1754 - } 1755 - return 0; 1756 - } 1757 - #if DEBUG 1758 - if (debugging) { 1759 - printk(OSST_DEB_MSG "%s:D: Sleeping in re-write wait ready\n", name); 1760 - printk(OSST_DEB_MSG "%s:D: Turning off debugging for a while\n", name); 1761 - debugging = 0; 1762 - } 1763 - #endif 1764 - schedule_timeout_interruptible(msecs_to_jiffies(100)); 1765 - } 1766 - printk(KERN_ERR "%s:E: Failed to find valid tape media\n", name); 1767 - #if DEBUG 1768 - debugging = dbg; 1769 - #endif 1770 - return (-EIO); 1771 - } 1772 - 1773 - /* 1774 - * Error recovery algorithm for the OnStream tape. 
1775 - */ 1776 - 1777 - static int osst_write_error_recovery(struct osst_tape * STp, struct osst_request ** aSRpnt, int pending) 1778 - { 1779 - struct osst_request * SRpnt = * aSRpnt; 1780 - struct st_partstat * STps = & STp->ps[STp->partition]; 1781 - char * name = tape_name(STp); 1782 - int retval = 0; 1783 - int rw_state; 1784 - unsigned int frame, skip; 1785 - 1786 - rw_state = STps->rw; 1787 - 1788 - if ((SRpnt->sense[ 2] & 0x0f) != 3 1789 - || SRpnt->sense[12] != 12 1790 - || SRpnt->sense[13] != 0) { 1791 - #if DEBUG 1792 - printk(OSST_DEB_MSG "%s:D: Write error recovery cannot handle %02x:%02x:%02x\n", name, 1793 - SRpnt->sense[2], SRpnt->sense[12], SRpnt->sense[13]); 1794 - #endif 1795 - return (-EIO); 1796 - } 1797 - frame = (SRpnt->sense[3] << 24) | 1798 - (SRpnt->sense[4] << 16) | 1799 - (SRpnt->sense[5] << 8) | 1800 - SRpnt->sense[6]; 1801 - skip = SRpnt->sense[9]; 1802 - 1803 - #if DEBUG 1804 - printk(OSST_DEB_MSG "%s:D: Detected physical bad frame at %u, advised to skip %d\n", name, frame, skip); 1805 - #endif 1806 - osst_get_frame_position(STp, aSRpnt); 1807 - #if DEBUG 1808 - printk(OSST_DEB_MSG "%s:D: reported frame positions: host = %d, tape = %d\n", 1809 - name, STp->first_frame_position, STp->last_frame_position); 1810 - #endif 1811 - switch (STp->write_type) { 1812 - case OS_WRITE_DATA: 1813 - case OS_WRITE_EOD: 1814 - case OS_WRITE_NEW_MARK: 1815 - printk(KERN_WARNING 1816 - "%s:I: Relocating %d buffered logical frames from position %u to %u\n", 1817 - name, STp->cur_frames, frame, (frame + skip > 3000 && frame < 3000)?3000:frame + skip); 1818 - if (STp->os_fw_rev >= 10600) 1819 - retval = osst_reposition_and_retry(STp, aSRpnt, frame, skip, pending); 1820 - else 1821 - retval = osst_read_back_buffer_and_rewrite(STp, aSRpnt, frame, skip, pending); 1822 - printk(KERN_WARNING "%s:%s: %sWrite error%srecovered\n", name, 1823 - retval?"E" :"I", 1824 - retval?"" :"Don't worry, ", 1825 - retval?" 
not ":" "); 1826 - break; 1827 - case OS_WRITE_LAST_MARK: 1828 - printk(KERN_ERR "%s:E: Bad frame in update last marker, fatal\n", name); 1829 - osst_set_frame_position(STp, aSRpnt, frame + STp->cur_frames + pending, 0); 1830 - retval = -EIO; 1831 - break; 1832 - case OS_WRITE_HEADER: 1833 - printk(KERN_WARNING "%s:I: Bad frame in header partition, skipped\n", name); 1834 - retval = osst_read_back_buffer_and_rewrite(STp, aSRpnt, frame, 1, pending); 1835 - break; 1836 - default: 1837 - printk(KERN_INFO "%s:I: Bad frame in filler, ignored\n", name); 1838 - osst_set_frame_position(STp, aSRpnt, frame + STp->cur_frames + pending, 0); 1839 - } 1840 - osst_get_frame_position(STp, aSRpnt); 1841 - #if DEBUG 1842 - printk(OSST_DEB_MSG "%s:D: Positioning complete, cur_frames %d, pos %d, tape pos %d\n", 1843 - name, STp->cur_frames, STp->first_frame_position, STp->last_frame_position); 1844 - printk(OSST_DEB_MSG "%s:D: next logical frame to write: %d\n", name, STp->logical_blk_num); 1845 - #endif 1846 - if (retval == 0) { 1847 - STp->recover_count++; 1848 - STp->recover_erreg++; 1849 - } else 1850 - STp->abort_count++; 1851 - 1852 - STps->rw = rw_state; 1853 - return retval; 1854 - } 1855 - 1856 - static int osst_space_over_filemarks_backward(struct osst_tape * STp, struct osst_request ** aSRpnt, 1857 - int mt_op, int mt_count) 1858 - { 1859 - char * name = tape_name(STp); 1860 - int cnt; 1861 - int last_mark_ppos = -1; 1862 - 1863 - #if DEBUG 1864 - printk(OSST_DEB_MSG "%s:D: Reached space_over_filemarks_backwards %d %d\n", name, mt_op, mt_count); 1865 - #endif 1866 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 1867 - #if DEBUG 1868 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks_bwd\n", name); 1869 - #endif 1870 - return -EIO; 1871 - } 1872 - if (STp->linux_media_version >= 4) { 1873 - /* 1874 - * direct lookup in header filemark list 1875 - */ 1876 - cnt = ntohl(STp->buffer->aux->filemark_cnt); 1877 - if (STp->header_ok && 1878 - 
STp->header_cache != NULL && 1879 - (cnt - mt_count) >= 0 && 1880 - (cnt - mt_count) < OS_FM_TAB_MAX && 1881 - (cnt - mt_count) < STp->filemark_cnt && 1882 - STp->header_cache->dat_fm_tab.fm_tab_ent[cnt-1] == STp->buffer->aux->last_mark_ppos) 1883 - 1884 - last_mark_ppos = ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[cnt - mt_count]); 1885 - #if DEBUG 1886 - if (STp->header_cache == NULL || (cnt - mt_count) < 0 || (cnt - mt_count) >= OS_FM_TAB_MAX) 1887 - printk(OSST_DEB_MSG "%s:D: Filemark lookup fail due to %s\n", name, 1888 - STp->header_cache == NULL?"lack of header cache":"count out of range"); 1889 - else 1890 - printk(OSST_DEB_MSG "%s:D: Filemark lookup: prev mark %d (%s), skip %d to %d\n", 1891 - name, cnt, 1892 - ((cnt == -1 && ntohl(STp->buffer->aux->last_mark_ppos) == -1) || 1893 - (STp->header_cache->dat_fm_tab.fm_tab_ent[cnt-1] == 1894 - STp->buffer->aux->last_mark_ppos))?"match":"error", 1895 - mt_count, last_mark_ppos); 1896 - #endif 1897 - if (last_mark_ppos > 10 && last_mark_ppos < STp->eod_frame_ppos) { 1898 - osst_position_tape_and_confirm(STp, aSRpnt, last_mark_ppos); 1899 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 1900 - #if DEBUG 1901 - printk(OSST_DEB_MSG 1902 - "%s:D: Couldn't get logical blk num in space_filemarks\n", name); 1903 - #endif 1904 - return (-EIO); 1905 - } 1906 - if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) { 1907 - printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n", 1908 - name, last_mark_ppos); 1909 - return (-EIO); 1910 - } 1911 - goto found; 1912 - } 1913 - #if DEBUG 1914 - printk(OSST_DEB_MSG "%s:D: Reverting to scan filemark backwards\n", name); 1915 - #endif 1916 - } 1917 - cnt = 0; 1918 - while (cnt != mt_count) { 1919 - last_mark_ppos = ntohl(STp->buffer->aux->last_mark_ppos); 1920 - if (last_mark_ppos == -1) 1921 - return (-EIO); 1922 - #if DEBUG 1923 - printk(OSST_DEB_MSG "%s:D: Positioning to last mark at %d\n", name, last_mark_ppos); 1924 - #endif 1925 - 
osst_position_tape_and_confirm(STp, aSRpnt, last_mark_ppos); 1926 - cnt++; 1927 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 1928 - #if DEBUG 1929 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n", name); 1930 - #endif 1931 - return (-EIO); 1932 - } 1933 - if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) { 1934 - printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n", 1935 - name, last_mark_ppos); 1936 - return (-EIO); 1937 - } 1938 - } 1939 - found: 1940 - if (mt_op == MTBSFM) { 1941 - STp->frame_seq_number++; 1942 - STp->frame_in_buffer = 0; 1943 - STp->buffer->buffer_bytes = 0; 1944 - STp->buffer->read_pointer = 0; 1945 - STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt); 1946 - } 1947 - return 0; 1948 - } 1949 - 1950 - /* 1951 - * ADRL 1.1 compatible "slow" space filemarks fwd version 1952 - * 1953 - * Just scans for the filemark sequentially. 1954 - */ 1955 - static int osst_space_over_filemarks_forward_slow(struct osst_tape * STp, struct osst_request ** aSRpnt, 1956 - int mt_op, int mt_count) 1957 - { 1958 - int cnt = 0; 1959 - #if DEBUG 1960 - char * name = tape_name(STp); 1961 - 1962 - printk(OSST_DEB_MSG "%s:D: Reached space_over_filemarks_forward_slow %d %d\n", name, mt_op, mt_count); 1963 - #endif 1964 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 1965 - #if DEBUG 1966 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks_fwd\n", name); 1967 - #endif 1968 - return (-EIO); 1969 - } 1970 - while (1) { 1971 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 1972 - #if DEBUG 1973 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n", name); 1974 - #endif 1975 - return (-EIO); 1976 - } 1977 - if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_MARKER) 1978 - cnt++; 1979 - if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_EOD) { 1980 - #if DEBUG 1981 - printk(OSST_DEB_MSG "%s:D: space_fwd: EOD reached\n", 
name); 1982 - #endif 1983 - if (STp->first_frame_position > STp->eod_frame_ppos+1) { 1984 - #if DEBUG 1985 - printk(OSST_DEB_MSG "%s:D: EOD position corrected (%d=>%d)\n", 1986 - name, STp->eod_frame_ppos, STp->first_frame_position-1); 1987 - #endif 1988 - STp->eod_frame_ppos = STp->first_frame_position-1; 1989 - } 1990 - return (-EIO); 1991 - } 1992 - if (cnt == mt_count) 1993 - break; 1994 - STp->frame_in_buffer = 0; 1995 - } 1996 - if (mt_op == MTFSF) { 1997 - STp->frame_seq_number++; 1998 - STp->frame_in_buffer = 0; 1999 - STp->buffer->buffer_bytes = 0; 2000 - STp->buffer->read_pointer = 0; 2001 - STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt); 2002 - } 2003 - return 0; 2004 - } 2005 - 2006 - /* 2007 - * Fast linux specific version of OnStream FSF 2008 - */ 2009 - static int osst_space_over_filemarks_forward_fast(struct osst_tape * STp, struct osst_request ** aSRpnt, 2010 - int mt_op, int mt_count) 2011 - { 2012 - char * name = tape_name(STp); 2013 - int cnt = 0, 2014 - next_mark_ppos = -1; 2015 - 2016 - #if DEBUG 2017 - printk(OSST_DEB_MSG "%s:D: Reached space_over_filemarks_forward_fast %d %d\n", name, mt_op, mt_count); 2018 - #endif 2019 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 2020 - #if DEBUG 2021 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks_fwd\n", name); 2022 - #endif 2023 - return (-EIO); 2024 - } 2025 - 2026 - if (STp->linux_media_version >= 4) { 2027 - /* 2028 - * direct lookup in header filemark list 2029 - */ 2030 - cnt = ntohl(STp->buffer->aux->filemark_cnt) - 1; 2031 - if (STp->header_ok && 2032 - STp->header_cache != NULL && 2033 - (cnt + mt_count) < OS_FM_TAB_MAX && 2034 - (cnt + mt_count) < STp->filemark_cnt && 2035 - ((cnt == -1 && ntohl(STp->buffer->aux->last_mark_ppos) == -1) || 2036 - (STp->header_cache->dat_fm_tab.fm_tab_ent[cnt] == STp->buffer->aux->last_mark_ppos))) 2037 - 2038 - next_mark_ppos = ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[cnt + mt_count]); 2039 
- #if DEBUG 2040 - if (STp->header_cache == NULL || (cnt + mt_count) >= OS_FM_TAB_MAX) 2041 - printk(OSST_DEB_MSG "%s:D: Filemark lookup fail due to %s\n", name, 2042 - STp->header_cache == NULL?"lack of header cache":"count out of range"); 2043 - else 2044 - printk(OSST_DEB_MSG "%s:D: Filemark lookup: prev mark %d (%s), skip %d to %d\n", 2045 - name, cnt, 2046 - ((cnt == -1 && ntohl(STp->buffer->aux->last_mark_ppos) == -1) || 2047 - (STp->header_cache->dat_fm_tab.fm_tab_ent[cnt] == 2048 - STp->buffer->aux->last_mark_ppos))?"match":"error", 2049 - mt_count, next_mark_ppos); 2050 - #endif 2051 - if (next_mark_ppos <= 10 || next_mark_ppos > STp->eod_frame_ppos) { 2052 - #if DEBUG 2053 - printk(OSST_DEB_MSG "%s:D: Reverting to slow filemark space\n", name); 2054 - #endif 2055 - return osst_space_over_filemarks_forward_slow(STp, aSRpnt, mt_op, mt_count); 2056 - } else { 2057 - osst_position_tape_and_confirm(STp, aSRpnt, next_mark_ppos); 2058 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 2059 - #if DEBUG 2060 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n", 2061 - name); 2062 - #endif 2063 - return (-EIO); 2064 - } 2065 - if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) { 2066 - printk(KERN_WARNING "%s:W: Expected to find marker at ppos %d, not found\n", 2067 - name, next_mark_ppos); 2068 - return (-EIO); 2069 - } 2070 - if (ntohl(STp->buffer->aux->filemark_cnt) != cnt + mt_count) { 2071 - printk(KERN_WARNING "%s:W: Expected to find marker %d at ppos %d, not %d\n", 2072 - name, cnt+mt_count, next_mark_ppos, 2073 - ntohl(STp->buffer->aux->filemark_cnt)); 2074 - return (-EIO); 2075 - } 2076 - } 2077 - } else { 2078 - /* 2079 - * Find nearest (usually previous) marker, then jump from marker to marker 2080 - */ 2081 - while (1) { 2082 - if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_MARKER) 2083 - break; 2084 - if (STp->buffer->aux->frame_type == OS_FRAME_TYPE_EOD) { 2085 - #if DEBUG 2086 - printk(OSST_DEB_MSG "%s:D: 
space_fwd: EOD reached\n", name); 2087 - #endif 2088 - return (-EIO); 2089 - } 2090 - if (ntohl(STp->buffer->aux->filemark_cnt) == 0) { 2091 - if (STp->first_mark_ppos == -1) { 2092 - #if DEBUG 2093 - printk(OSST_DEB_MSG "%s:D: Reverting to slow filemark space\n", name); 2094 - #endif 2095 - return osst_space_over_filemarks_forward_slow(STp, aSRpnt, mt_op, mt_count); 2096 - } 2097 - osst_position_tape_and_confirm(STp, aSRpnt, STp->first_mark_ppos); 2098 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 2099 - #if DEBUG 2100 - printk(OSST_DEB_MSG 2101 - "%s:D: Couldn't get logical blk num in space_filemarks_fwd_fast\n", 2102 - name); 2103 - #endif 2104 - return (-EIO); 2105 - } 2106 - if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) { 2107 - printk(KERN_WARNING "%s:W: Expected to find filemark at %d\n", 2108 - name, STp->first_mark_ppos); 2109 - return (-EIO); 2110 - } 2111 - } else { 2112 - if (osst_space_over_filemarks_backward(STp, aSRpnt, MTBSF, 1) < 0) 2113 - return (-EIO); 2114 - mt_count++; 2115 - } 2116 - } 2117 - cnt++; 2118 - while (cnt != mt_count) { 2119 - next_mark_ppos = ntohl(STp->buffer->aux->next_mark_ppos); 2120 - if (!next_mark_ppos || next_mark_ppos > STp->eod_frame_ppos) { 2121 - #if DEBUG 2122 - printk(OSST_DEB_MSG "%s:D: Reverting to slow filemark space\n", name); 2123 - #endif 2124 - return osst_space_over_filemarks_forward_slow(STp, aSRpnt, mt_op, mt_count - cnt); 2125 - } 2126 - #if DEBUG 2127 - else printk(OSST_DEB_MSG "%s:D: Positioning to next mark at %d\n", name, next_mark_ppos); 2128 - #endif 2129 - osst_position_tape_and_confirm(STp, aSRpnt, next_mark_ppos); 2130 - cnt++; 2131 - if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) { 2132 - #if DEBUG 2133 - printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in space_filemarks\n", 2134 - name); 2135 - #endif 2136 - return (-EIO); 2137 - } 2138 - if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_MARKER) { 2139 - printk(KERN_WARNING "%s:W: Expected to find marker at 
ppos %d, not found\n", 2140 - name, next_mark_ppos); 2141 - return (-EIO); 2142 - } 2143 - } 2144 - } 2145 - if (mt_op == MTFSF) { 2146 - STp->frame_seq_number++; 2147 - STp->frame_in_buffer = 0; 2148 - STp->buffer->buffer_bytes = 0; 2149 - STp->buffer->read_pointer = 0; 2150 - STp->logical_blk_num += ntohs(STp->buffer->aux->dat.dat_list[0].blk_cnt); 2151 - } 2152 - return 0; 2153 - } 2154 - 2155 - /* 2156 - * In debug mode, we want to see as many errors as possible 2157 - * to test the error recovery mechanism. 2158 - */ 2159 - #if DEBUG 2160 - static void osst_set_retries(struct osst_tape * STp, struct osst_request ** aSRpnt, int retries) 2161 - { 2162 - unsigned char cmd[MAX_COMMAND_SIZE]; 2163 - struct osst_request * SRpnt = * aSRpnt; 2164 - char * name = tape_name(STp); 2165 - 2166 - memset(cmd, 0, MAX_COMMAND_SIZE); 2167 - cmd[0] = MODE_SELECT; 2168 - cmd[1] = 0x10; 2169 - cmd[4] = NUMBER_RETRIES_PAGE_LENGTH + MODE_HEADER_LENGTH; 2170 - 2171 - (STp->buffer)->b_data[0] = cmd[4] - 1; 2172 - (STp->buffer)->b_data[1] = 0; /* Medium Type - ignoring */ 2173 - (STp->buffer)->b_data[2] = 0; /* Reserved */ 2174 - (STp->buffer)->b_data[3] = 0; /* Block Descriptor Length */ 2175 - (STp->buffer)->b_data[MODE_HEADER_LENGTH + 0] = NUMBER_RETRIES_PAGE | (1 << 7); 2176 - (STp->buffer)->b_data[MODE_HEADER_LENGTH + 1] = 2; 2177 - (STp->buffer)->b_data[MODE_HEADER_LENGTH + 2] = 4; 2178 - (STp->buffer)->b_data[MODE_HEADER_LENGTH + 3] = retries; 2179 - 2180 - if (debugging) 2181 - printk(OSST_DEB_MSG "%s:D: Setting number of retries on OnStream tape to %d\n", name, retries); 2182 - 2183 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1); 2184 - *aSRpnt = SRpnt; 2185 - 2186 - if ((STp->buffer)->syscall_result) 2187 - printk (KERN_ERR "%s:D: Couldn't set retries to %d\n", name, retries); 2188 - } 2189 - #endif 2190 - 2191 - 2192 - static int osst_write_filemark(struct osst_tape * STp, struct osst_request ** aSRpnt) 2193 - { 2194 - int result; 2195 - 
int this_mark_ppos = STp->first_frame_position; 2196 - int this_mark_lbn = STp->logical_blk_num; 2197 - #if DEBUG 2198 - char * name = tape_name(STp); 2199 - #endif 2200 - 2201 - if (STp->raw) return 0; 2202 - 2203 - STp->write_type = OS_WRITE_NEW_MARK; 2204 - #if DEBUG 2205 - printk(OSST_DEB_MSG "%s:D: Writing Filemark %i at fppos %d (fseq %d, lblk %d)\n", 2206 - name, STp->filemark_cnt, this_mark_ppos, STp->frame_seq_number, this_mark_lbn); 2207 - #endif 2208 - STp->dirty = 1; 2209 - result = osst_flush_write_buffer(STp, aSRpnt); 2210 - result |= osst_flush_drive_buffer(STp, aSRpnt); 2211 - STp->last_mark_ppos = this_mark_ppos; 2212 - STp->last_mark_lbn = this_mark_lbn; 2213 - if (STp->header_cache != NULL && STp->filemark_cnt < OS_FM_TAB_MAX) 2214 - STp->header_cache->dat_fm_tab.fm_tab_ent[STp->filemark_cnt] = htonl(this_mark_ppos); 2215 - if (STp->filemark_cnt++ == 0) 2216 - STp->first_mark_ppos = this_mark_ppos; 2217 - return result; 2218 - } 2219 - 2220 - static int osst_write_eod(struct osst_tape * STp, struct osst_request ** aSRpnt) 2221 - { 2222 - int result; 2223 - #if DEBUG 2224 - char * name = tape_name(STp); 2225 - #endif 2226 - 2227 - if (STp->raw) return 0; 2228 - 2229 - STp->write_type = OS_WRITE_EOD; 2230 - STp->eod_frame_ppos = STp->first_frame_position; 2231 - #if DEBUG 2232 - printk(OSST_DEB_MSG "%s:D: Writing EOD at fppos %d (fseq %d, lblk %d)\n", name, 2233 - STp->eod_frame_ppos, STp->frame_seq_number, STp->logical_blk_num); 2234 - #endif 2235 - STp->dirty = 1; 2236 - 2237 - result = osst_flush_write_buffer(STp, aSRpnt); 2238 - result |= osst_flush_drive_buffer(STp, aSRpnt); 2239 - STp->eod_frame_lfa = --(STp->frame_seq_number); 2240 - return result; 2241 - } 2242 - 2243 - static int osst_write_filler(struct osst_tape * STp, struct osst_request ** aSRpnt, int where, int count) 2244 - { 2245 - char * name = tape_name(STp); 2246 - 2247 - #if DEBUG 2248 - printk(OSST_DEB_MSG "%s:D: Reached onstream write filler group %d\n", name, where); 2249 - 
#endif 2250 - osst_wait_ready(STp, aSRpnt, 60 * 5, 0); 2251 - osst_set_frame_position(STp, aSRpnt, where, 0); 2252 - STp->write_type = OS_WRITE_FILLER; 2253 - while (count--) { 2254 - memcpy(STp->buffer->b_data, "Filler", 6); 2255 - STp->buffer->buffer_bytes = 6; 2256 - STp->dirty = 1; 2257 - if (osst_flush_write_buffer(STp, aSRpnt)) { 2258 - printk(KERN_INFO "%s:I: Couldn't write filler frame\n", name); 2259 - return (-EIO); 2260 - } 2261 - } 2262 - #if DEBUG 2263 - printk(OSST_DEB_MSG "%s:D: Exiting onstream write filler group\n", name); 2264 - #endif 2265 - return osst_flush_drive_buffer(STp, aSRpnt); 2266 - } 2267 - 2268 - static int __osst_write_header(struct osst_tape * STp, struct osst_request ** aSRpnt, int where, int count) 2269 - { 2270 - char * name = tape_name(STp); 2271 - int result; 2272 - 2273 - #if DEBUG 2274 - printk(OSST_DEB_MSG "%s:D: Reached onstream write header group %d\n", name, where); 2275 - #endif 2276 - osst_wait_ready(STp, aSRpnt, 60 * 5, 0); 2277 - osst_set_frame_position(STp, aSRpnt, where, 0); 2278 - STp->write_type = OS_WRITE_HEADER; 2279 - while (count--) { 2280 - osst_copy_to_buffer(STp->buffer, (unsigned char *)STp->header_cache); 2281 - STp->buffer->buffer_bytes = sizeof(os_header_t); 2282 - STp->dirty = 1; 2283 - if (osst_flush_write_buffer(STp, aSRpnt)) { 2284 - printk(KERN_INFO "%s:I: Couldn't write header frame\n", name); 2285 - return (-EIO); 2286 - } 2287 - } 2288 - result = osst_flush_drive_buffer(STp, aSRpnt); 2289 - #if DEBUG 2290 - printk(OSST_DEB_MSG "%s:D: Write onstream header group %s\n", name, result?"failed":"done"); 2291 - #endif 2292 - return result; 2293 - } 2294 - 2295 - static int osst_write_header(struct osst_tape * STp, struct osst_request ** aSRpnt, int locate_eod) 2296 - { 2297 - os_header_t * header; 2298 - int result; 2299 - char * name = tape_name(STp); 2300 - 2301 - #if DEBUG 2302 - printk(OSST_DEB_MSG "%s:D: Writing tape header\n", name); 2303 - #endif 2304 - if (STp->raw) return 0; 2305 - 2306 - if 
(STp->header_cache == NULL) { 2307 - if ((STp->header_cache = vmalloc(sizeof(os_header_t))) == NULL) { 2308 - printk(KERN_ERR "%s:E: Failed to allocate header cache\n", name); 2309 - return (-ENOMEM); 2310 - } 2311 - memset(STp->header_cache, 0, sizeof(os_header_t)); 2312 - #if DEBUG 2313 - printk(OSST_DEB_MSG "%s:D: Allocated and cleared memory for header cache\n", name); 2314 - #endif 2315 - } 2316 - if (STp->header_ok) STp->update_frame_cntr++; 2317 - else STp->update_frame_cntr = 0; 2318 - 2319 - header = STp->header_cache; 2320 - strcpy(header->ident_str, "ADR_SEQ"); 2321 - header->major_rev = 1; 2322 - header->minor_rev = 4; 2323 - header->ext_trk_tb_off = htons(17192); 2324 - header->pt_par_num = 1; 2325 - header->partition[0].partition_num = OS_DATA_PARTITION; 2326 - header->partition[0].par_desc_ver = OS_PARTITION_VERSION; 2327 - header->partition[0].wrt_pass_cntr = htons(STp->wrt_pass_cntr); 2328 - header->partition[0].first_frame_ppos = htonl(STp->first_data_ppos); 2329 - header->partition[0].last_frame_ppos = htonl(STp->capacity); 2330 - header->partition[0].eod_frame_ppos = htonl(STp->eod_frame_ppos); 2331 - header->cfg_col_width = htonl(20); 2332 - header->dat_col_width = htonl(1500); 2333 - header->qfa_col_width = htonl(0); 2334 - header->ext_track_tb.nr_stream_part = 1; 2335 - header->ext_track_tb.et_ent_sz = 32; 2336 - header->ext_track_tb.dat_ext_trk_ey.et_part_num = 0; 2337 - header->ext_track_tb.dat_ext_trk_ey.fmt = 1; 2338 - header->ext_track_tb.dat_ext_trk_ey.fm_tab_off = htons(17736); 2339 - header->ext_track_tb.dat_ext_trk_ey.last_hlb_hi = 0; 2340 - header->ext_track_tb.dat_ext_trk_ey.last_hlb = htonl(STp->eod_frame_lfa); 2341 - header->ext_track_tb.dat_ext_trk_ey.last_pp = htonl(STp->eod_frame_ppos); 2342 - header->dat_fm_tab.fm_part_num = 0; 2343 - header->dat_fm_tab.fm_tab_ent_sz = 4; 2344 - header->dat_fm_tab.fm_tab_ent_cnt = htons(STp->filemark_cnt<OS_FM_TAB_MAX? 
2345 - STp->filemark_cnt:OS_FM_TAB_MAX); 2346 - 2347 - result = __osst_write_header(STp, aSRpnt, 0xbae, 5); 2348 - if (STp->update_frame_cntr == 0) 2349 - osst_write_filler(STp, aSRpnt, 0xbb3, 5); 2350 - result &= __osst_write_header(STp, aSRpnt, 5, 5); 2351 - 2352 - if (locate_eod) { 2353 - #if DEBUG 2354 - printk(OSST_DEB_MSG "%s:D: Locating back to eod frame addr %d\n", name, STp->eod_frame_ppos); 2355 - #endif 2356 - osst_set_frame_position(STp, aSRpnt, STp->eod_frame_ppos, 0); 2357 - } 2358 - if (result) 2359 - printk(KERN_ERR "%s:E: Write header failed\n", name); 2360 - else { 2361 - memcpy(STp->application_sig, "LIN4", 4); 2362 - STp->linux_media = 1; 2363 - STp->linux_media_version = 4; 2364 - STp->header_ok = 1; 2365 - } 2366 - return result; 2367 - } 2368 - 2369 - static int osst_reset_header(struct osst_tape * STp, struct osst_request ** aSRpnt) 2370 - { 2371 - if (STp->header_cache != NULL) 2372 - memset(STp->header_cache, 0, sizeof(os_header_t)); 2373 - 2374 - STp->logical_blk_num = STp->frame_seq_number = 0; 2375 - STp->frame_in_buffer = 0; 2376 - STp->eod_frame_ppos = STp->first_data_ppos = 0x0000000A; 2377 - STp->filemark_cnt = 0; 2378 - STp->first_mark_ppos = STp->last_mark_ppos = STp->last_mark_lbn = -1; 2379 - return osst_write_header(STp, aSRpnt, 1); 2380 - } 2381 - 2382 - static int __osst_analyze_headers(struct osst_tape * STp, struct osst_request ** aSRpnt, int ppos) 2383 - { 2384 - char * name = tape_name(STp); 2385 - os_header_t * header; 2386 - os_aux_t * aux; 2387 - char id_string[8]; 2388 - int linux_media_version, 2389 - update_frame_cntr; 2390 - 2391 - if (STp->raw) 2392 - return 1; 2393 - 2394 - if (ppos == 5 || ppos == 0xbae || STp->buffer->syscall_result) { 2395 - if (osst_set_frame_position(STp, aSRpnt, ppos, 0)) 2396 - printk(KERN_WARNING "%s:W: Couldn't position tape\n", name); 2397 - osst_wait_ready(STp, aSRpnt, 60 * 15, 0); 2398 - if (osst_initiate_read (STp, aSRpnt)) { 2399 - printk(KERN_WARNING "%s:W: Couldn't initiate 
read\n", name); 2400 - return 0; 2401 - } 2402 - } 2403 - if (osst_read_frame(STp, aSRpnt, 180)) { 2404 - #if DEBUG 2405 - printk(OSST_DEB_MSG "%s:D: Couldn't read header frame\n", name); 2406 - #endif 2407 - return 0; 2408 - } 2409 - header = (os_header_t *) STp->buffer->b_data; /* warning: only first segment addressable */ 2410 - aux = STp->buffer->aux; 2411 - if (aux->frame_type != OS_FRAME_TYPE_HEADER) { 2412 - #if DEBUG 2413 - printk(OSST_DEB_MSG "%s:D: Skipping non-header frame (%d)\n", name, ppos); 2414 - #endif 2415 - return 0; 2416 - } 2417 - if (ntohl(aux->frame_seq_num) != 0 || 2418 - ntohl(aux->logical_blk_num) != 0 || 2419 - aux->partition.partition_num != OS_CONFIG_PARTITION || 2420 - ntohl(aux->partition.first_frame_ppos) != 0 || 2421 - ntohl(aux->partition.last_frame_ppos) != 0xbb7 ) { 2422 - #if DEBUG 2423 - printk(OSST_DEB_MSG "%s:D: Invalid header frame (%d,%d,%d,%d,%d)\n", name, 2424 - ntohl(aux->frame_seq_num), ntohl(aux->logical_blk_num), 2425 - aux->partition.partition_num, ntohl(aux->partition.first_frame_ppos), 2426 - ntohl(aux->partition.last_frame_ppos)); 2427 - #endif 2428 - return 0; 2429 - } 2430 - if (strncmp(header->ident_str, "ADR_SEQ", 7) != 0 && 2431 - strncmp(header->ident_str, "ADR-SEQ", 7) != 0) { 2432 - strlcpy(id_string, header->ident_str, 8); 2433 - #if DEBUG 2434 - printk(OSST_DEB_MSG "%s:D: Invalid header identification string %s\n", name, id_string); 2435 - #endif 2436 - return 0; 2437 - } 2438 - update_frame_cntr = ntohl(aux->update_frame_cntr); 2439 - if (update_frame_cntr < STp->update_frame_cntr) { 2440 - #if DEBUG 2441 - printk(OSST_DEB_MSG "%s:D: Skipping frame %d with update_frame_counter %d<%d\n", 2442 - name, ppos, update_frame_cntr, STp->update_frame_cntr); 2443 - #endif 2444 - return 0; 2445 - } 2446 - if (header->major_rev != 1 || header->minor_rev != 4 ) { 2447 - #if DEBUG 2448 - printk(OSST_DEB_MSG "%s:D: %s revision %d.%d detected (1.4 supported)\n", 2449 - name, (header->major_rev != 1 || header->minor_rev 
< 2 ||
			      header->minor_rev > 4) ? "Invalid" : "Warning:",
		       header->major_rev, header->minor_rev);
#endif
		if (header->major_rev != 1 || header->minor_rev < 2 || header->minor_rev > 4)
			return 0;
	}
#if DEBUG
	if (header->pt_par_num != 1)
		printk(KERN_INFO "%s:W: %d partitions defined, only one supported\n",
		       name, header->pt_par_num);
#endif
	memcpy(id_string, aux->application_sig, 4);
	id_string[4] = 0;
	if (memcmp(id_string, "LIN", 3) == 0) {
		STp->linux_media = 1;
		linux_media_version = id_string[3] - '0';
		if (linux_media_version != 4)
			printk(KERN_INFO "%s:I: Linux media version %d detected (current 4)\n",
			       name, linux_media_version);
	} else {
		printk(KERN_WARNING "%s:W: Non Linux media detected (%s)\n", name, id_string);
		return 0;
	}
	if (linux_media_version < STp->linux_media_version) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Skipping frame %d with linux_media_version %d\n",
		       name, ppos, linux_media_version);
#endif
		return 0;
	}
	if (linux_media_version > STp->linux_media_version) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Frame %d sets linux_media_version to %d\n",
		       name, ppos, linux_media_version);
#endif
		memcpy(STp->application_sig, id_string, 5);
		STp->linux_media_version = linux_media_version;
		STp->update_frame_cntr = -1;
	}
	if (update_frame_cntr > STp->update_frame_cntr) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Frame %d sets update_frame_counter to %d\n",
		       name, ppos, update_frame_cntr);
#endif
		if (STp->header_cache == NULL) {
			if ((STp->header_cache = vmalloc(sizeof(os_header_t))) == NULL) {
				printk(KERN_ERR "%s:E: Failed to allocate header cache\n", name);
				return 0;
			}
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Allocated memory for header cache\n", name);
#endif
		}
		osst_copy_from_buffer(STp->buffer, (unsigned char *)STp->header_cache);
		header = STp->header_cache;	/* further accesses from cached (full) copy */

		STp->wrt_pass_cntr     = ntohs(header->partition[0].wrt_pass_cntr);
		STp->first_data_ppos   = ntohl(header->partition[0].first_frame_ppos);
		STp->eod_frame_ppos    = ntohl(header->partition[0].eod_frame_ppos);
		STp->eod_frame_lfa     = ntohl(header->ext_track_tb.dat_ext_trk_ey.last_hlb);
		STp->filemark_cnt      = ntohl(aux->filemark_cnt);
		STp->first_mark_ppos   = ntohl(aux->next_mark_ppos);
		STp->last_mark_ppos    = ntohl(aux->last_mark_ppos);
		STp->last_mark_lbn     = ntohl(aux->last_mark_lbn);
		STp->update_frame_cntr = update_frame_cntr;
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Detected write pass %d, update frame counter %d, filemark counter %d\n",
		       name, STp->wrt_pass_cntr, STp->update_frame_cntr, STp->filemark_cnt);
		printk(OSST_DEB_MSG "%s:D: first data frame on tape = %d, last = %d, eod frame = %d\n", name,
		       STp->first_data_ppos,
		       ntohl(header->partition[0].last_frame_ppos),
		       ntohl(header->partition[0].eod_frame_ppos));
		printk(OSST_DEB_MSG "%s:D: first mark on tape = %d, last = %d, eod frame = %d\n",
		       name, STp->first_mark_ppos, STp->last_mark_ppos, STp->eod_frame_ppos);
#endif
		if (header->minor_rev < 4 && STp->linux_media_version == 4) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Moving filemark list to ADR 1.4 location\n", name);
#endif
			memcpy((void *)header->dat_fm_tab.fm_tab_ent,
			       (void *)header->old_filemark_list, sizeof(header->dat_fm_tab.fm_tab_ent));
			memset((void *)header->old_filemark_list, 0, sizeof(header->old_filemark_list));
		}
		if (header->minor_rev == 4 &&
		    (header->ext_trk_tb_off != htons(17192) ||
		     header->partition[0].partition_num != OS_DATA_PARTITION ||
header->partition[0].par_desc_ver != OS_PARTITION_VERSION ||
		     header->partition[0].last_frame_ppos != htonl(STp->capacity) ||
		     header->cfg_col_width != htonl(20) ||
		     header->dat_col_width != htonl(1500) ||
		     header->qfa_col_width != htonl(0) ||
		     header->ext_track_tb.nr_stream_part != 1 ||
		     header->ext_track_tb.et_ent_sz != 32 ||
		     header->ext_track_tb.dat_ext_trk_ey.et_part_num != OS_DATA_PARTITION ||
		     header->ext_track_tb.dat_ext_trk_ey.fmt != 1 ||
		     header->ext_track_tb.dat_ext_trk_ey.fm_tab_off != htons(17736) ||
		     header->ext_track_tb.dat_ext_trk_ey.last_hlb_hi != 0 ||
		     header->ext_track_tb.dat_ext_trk_ey.last_pp != htonl(STp->eod_frame_ppos) ||
		     header->dat_fm_tab.fm_part_num != OS_DATA_PARTITION ||
		     header->dat_fm_tab.fm_tab_ent_sz != 4 ||
		     header->dat_fm_tab.fm_tab_ent_cnt !=
			     htons(STp->filemark_cnt < OS_FM_TAB_MAX ? STp->filemark_cnt : OS_FM_TAB_MAX)))
			printk(KERN_WARNING "%s:W: Failed consistency check ADR 1.4 format\n", name);

	}

	return 1;
}

static int osst_analyze_headers(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	int	position, ppos;
	int	first, last;
	int	valid = 0;
	char  * name = tape_name(STp);

	position = osst_get_frame_position(STp, aSRpnt);

	if (STp->raw) {
		STp->header_ok = STp->linux_media = 1;
		STp->linux_media_version = 0;
		return 1;
	}
	STp->header_ok = STp->linux_media = STp->linux_media_version = 0;
	STp->wrt_pass_cntr = STp->update_frame_cntr = -1;
	STp->eod_frame_ppos = STp->first_data_ppos = -1;
	STp->first_mark_ppos = STp->last_mark_ppos = STp->last_mark_lbn = -1;
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Reading header\n", name);
#endif

	/* optimization for speed - if we are positioned at ppos 10, read second group first */
	/* TODO try the ADR 1.1
locations for the second group if we have no valid one yet... */

	first = position == 10 ? 0xbae :  5;
	last  = position == 10 ? 0xbb3 : 10;

	for (ppos = first; ppos < last; ppos++)
		if (__osst_analyze_headers(STp, aSRpnt, ppos))
			valid = 1;

	first = position == 10 ?  5 : 0xbae;
	last  = position == 10 ? 10 : 0xbb3;

	for (ppos = first; ppos < last; ppos++)
		if (__osst_analyze_headers(STp, aSRpnt, ppos))
			valid = 1;

	if (!valid) {
		printk(KERN_ERR "%s:E: Failed to find valid ADRL header, new media?\n", name);
		STp->eod_frame_ppos = STp->first_data_ppos = 0;
		osst_set_frame_position(STp, aSRpnt, 10, 0);
		return 0;
	}
	if (position <= STp->first_data_ppos) {
		position = STp->first_data_ppos;
		STp->ps[0].drv_file = STp->ps[0].drv_block = STp->frame_seq_number = STp->logical_blk_num = 0;
	}
	osst_set_frame_position(STp, aSRpnt, position, 0);
	STp->header_ok = 1;

	return 1;
}

static int osst_verify_position(struct osst_tape * STp, struct osst_request ** aSRpnt)
{
	int	frame_position  = STp->first_frame_position;
	int	frame_seq_numbr = STp->frame_seq_number;
	int	logical_blk_num = STp->logical_blk_num;
	int	halfway_frame   = STp->frame_in_buffer;
	int	read_pointer    = STp->buffer->read_pointer;
	int	prev_mark_ppos  = -1;
	int	actual_mark_ppos, i, n;
#if DEBUG
	char  * name = tape_name(STp);

	printk(OSST_DEB_MSG "%s:D: Verify that the tape is really the one we think before writing\n", name);
#endif
	osst_set_frame_position(STp, aSRpnt, frame_position - 1, 0);
	if (osst_get_logical_frame(STp, aSRpnt, -1, 0) < 0) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Couldn't get logical blk num in verify_position\n", name);
#endif
		return (-EIO);
	}
	if (STp->linux_media_version >= 4) {
for (i = 0; i < STp->filemark_cnt; i++)
			if ((n = ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[i])) < frame_position)
				prev_mark_ppos = n;
	} else
		prev_mark_ppos = frame_position - 1;	/* usually - we don't really know */
	actual_mark_ppos = STp->buffer->aux->frame_type == OS_FRAME_TYPE_MARKER ?
				frame_position - 1 : ntohl(STp->buffer->aux->last_mark_ppos);
	if (frame_position  != STp->first_frame_position ||
	    frame_seq_numbr != STp->frame_seq_number + (halfway_frame ? 0 : 1) ||
	    prev_mark_ppos  != actual_mark_ppos) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Block mismatch: fppos %d-%d, fseq %d-%d, mark %d-%d\n", name,
		       STp->first_frame_position, frame_position,
		       STp->frame_seq_number + (halfway_frame ? 0 : 1),
		       frame_seq_numbr, actual_mark_ppos, prev_mark_ppos);
#endif
		return (-EIO);
	}
	if (halfway_frame) {
		/* prepare buffer for append and rewrite on top of original */
		osst_set_frame_position(STp, aSRpnt, frame_position - 1, 0);
		STp->buffer->buffer_bytes  = read_pointer;
		STp->ps[STp->partition].rw = ST_WRITING;
		STp->dirty                 = 1;
	}
	STp->frame_in_buffer  = halfway_frame;
	STp->frame_seq_number = frame_seq_numbr;
	STp->logical_blk_num  = logical_blk_num;
	return 0;
}

/* Acc. to OnStream, the vers. numbering is the following:
 * X.XX for released versions (X=digit),
 * XXXY for unreleased versions (Y=letter)
 * Ordering 1.05 < 106A < 106B < ... < 106a < ... < 1.06
 * This fn makes monotone numbers out of this scheme ...
 */
static unsigned int osst_parse_firmware_rev (const char * str)
{
	if (str[1] == '.') {
		return (str[0] - '0') * 10000
			+ (str[2] - '0') * 1000
			+ (str[3] - '0') * 100;
	} else {
		return (str[0] - '0') * 10000
			+ (str[1] - '0') * 1000
			+ (str[2] - '0') * 100 - 100
			+ (str[3] - '@');
	}
}

/*
 * Configure the OnStream SCSI tape drive for default operation
 */
static int osst_configure_onstream(struct osst_tape *STp, struct osst_request ** aSRpnt)
{
	unsigned char                  cmd[MAX_COMMAND_SIZE];
	char                         * name = tape_name(STp);
	struct osst_request          * SRpnt = * aSRpnt;
	osst_mode_parameter_header_t * header;
	osst_block_size_page_t       * bs;
	osst_capabilities_page_t     * cp;
	osst_tape_paramtr_page_t     * prm;
	int                            drive_buffer_size;

	if (STp->ready != ST_READY) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Not Ready\n", name);
#endif
		return (-EIO);
	}

	if (STp->os_fw_rev < 10600) {
		printk(KERN_INFO "%s:I: Old OnStream firmware revision detected (%s),\n", name, STp->device->rev);
		printk(KERN_INFO "%s:I: an upgrade to version 1.06 or above is recommended\n", name);
	}

	/*
	 * Configure 32.5KB (data+aux) frame size.
	 * Get the current frame size from the block size mode page
	 */
	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = MODE_SENSE;
	cmd[1] = 8;
	cmd[2] = BLOCK_SIZE_PAGE;
	cmd[4] = BLOCK_SIZE_PAGE_LENGTH + MODE_HEADER_LENGTH;

	SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
	if (SRpnt == NULL) {
#if DEBUG
		printk(OSST_DEB_MSG "osst :D: Busy\n");
#endif
		return (-EBUSY);
	}
	*aSRpnt = SRpnt;
	if ((STp->buffer)->syscall_result != 0) {
		printk (KERN_ERR "%s:E: Can't get tape block size mode page\n", name);
		return (-EIO);
	}

	header = (osst_mode_parameter_header_t *) (STp->buffer)->b_data;
	bs = (osst_block_size_page_t *) ((STp->buffer)->b_data + sizeof(osst_mode_parameter_header_t) + header->bdl);

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: 32KB play back: %s\n",   name, bs->play32     ? "Yes" : "No");
	printk(OSST_DEB_MSG "%s:D: 32.5KB play back: %s\n", name, bs->play32_5   ? "Yes" : "No");
	printk(OSST_DEB_MSG "%s:D: 32KB record: %s\n",      name, bs->record32   ? "Yes" : "No");
	printk(OSST_DEB_MSG "%s:D: 32.5KB record: %s\n",    name, bs->record32_5 ?
"Yes" : "No"); 2744 - #endif 2745 - 2746 - /* 2747 - * Configure default auto columns mode, 32.5KB transfer mode 2748 - */ 2749 - bs->one = 1; 2750 - bs->play32 = 0; 2751 - bs->play32_5 = 1; 2752 - bs->record32 = 0; 2753 - bs->record32_5 = 1; 2754 - 2755 - memset(cmd, 0, MAX_COMMAND_SIZE); 2756 - cmd[0] = MODE_SELECT; 2757 - cmd[1] = 0x10; 2758 - cmd[4] = BLOCK_SIZE_PAGE_LENGTH + MODE_HEADER_LENGTH; 2759 - 2760 - SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1); 2761 - *aSRpnt = SRpnt; 2762 - if ((STp->buffer)->syscall_result != 0) { 2763 - printk (KERN_ERR "%s:E: Couldn't set tape block size mode page\n", name); 2764 - return (-EIO); 2765 - } 2766 - 2767 - #if DEBUG 2768 - printk(KERN_INFO "%s:D: Drive Block Size changed to 32.5K\n", name); 2769 - /* 2770 - * In debug mode, we want to see as many errors as possible 2771 - * to test the error recovery mechanism. 2772 - */ 2773 - osst_set_retries(STp, aSRpnt, 0); 2774 - SRpnt = * aSRpnt; 2775 - #endif 2776 - 2777 - /* 2778 - * Set vendor name to 'LIN4' for "Linux support version 4". 
	 */

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = MODE_SELECT;
	cmd[1] = 0x10;
	cmd[4] = VENDOR_IDENT_PAGE_LENGTH + MODE_HEADER_LENGTH;

	header->mode_data_length = VENDOR_IDENT_PAGE_LENGTH + MODE_HEADER_LENGTH - 1;
	header->medium_type      = 0;	/* Medium Type - ignoring */
	header->dsp              = 0;	/* Reserved */
	header->bdl              = 0;	/* Block Descriptor Length */

	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 0] = VENDOR_IDENT_PAGE | (1 << 7);
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 1] = 6;
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 2] = 'L';
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 3] = 'I';
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 4] = 'N';
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 5] = '4';
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 6] = 0;
	(STp->buffer)->b_data[MODE_HEADER_LENGTH + 7] = 0;

	SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1);
	*aSRpnt = SRpnt;

	if ((STp->buffer)->syscall_result != 0) {
		printk (KERN_ERR "%s:E: Couldn't set vendor name to %s\n", name,
			(char *) ((STp->buffer)->b_data + MODE_HEADER_LENGTH + 2));
		return (-EIO);
	}

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = MODE_SENSE;
	cmd[1] = 8;
	cmd[2] = CAPABILITIES_PAGE;
	cmd[4] = CAPABILITIES_PAGE_LENGTH + MODE_HEADER_LENGTH;

	SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
	*aSRpnt = SRpnt;

	if ((STp->buffer)->syscall_result != 0) {
		printk (KERN_ERR "%s:E: Can't get capabilities page\n", name);
		return (-EIO);
	}

	header = (osst_mode_parameter_header_t *) (STp->buffer)->b_data;
	cp     = (osst_capabilities_page_t *) ((STp->buffer)->b_data +
		 sizeof(osst_mode_parameter_header_t) + header->bdl);

	drive_buffer_size =
ntohs(cp->buffer_size) / 2;

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = MODE_SENSE;
	cmd[1] = 8;
	cmd[2] = TAPE_PARAMTR_PAGE;
	cmd[4] = TAPE_PARAMTR_PAGE_LENGTH + MODE_HEADER_LENGTH;

	SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);
	*aSRpnt = SRpnt;

	if ((STp->buffer)->syscall_result != 0) {
		printk (KERN_ERR "%s:E: Can't get tape parameter page\n", name);
		return (-EIO);
	}

	header = (osst_mode_parameter_header_t *) (STp->buffer)->b_data;
	prm    = (osst_tape_paramtr_page_t *) ((STp->buffer)->b_data +
		 sizeof(osst_mode_parameter_header_t) + header->bdl);

	STp->density  = prm->density;
	STp->capacity = ntohs(prm->segtrk) * ntohs(prm->trks);
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Density %d, tape length: %dMB, drive buffer size: %dKB\n",
	       name, STp->density, STp->capacity / 32, drive_buffer_size);
#endif

	return 0;

}


/* Step over EOF if it has been inadvertently crossed (ioctl not used because
   it messes up the block number). */
static int cross_eof(struct osst_tape *STp, struct osst_request ** aSRpnt, int forward)
{
	int	result;
	char  * name = tape_name(STp);

#if DEBUG
	if (debugging)
		printk(OSST_DEB_MSG "%s:D: Stepping over filemark %s.\n",
		       name, forward ? "forward" : "backward");
#endif

	if (forward) {
		/* assumes that the filemark is already read by the drive, so this is low cost */
		result = osst_space_over_filemarks_forward_slow(STp, aSRpnt, MTFSF, 1);
	}
	else
		/* assumes this is only called if we just read the filemark!
*/
		result = osst_seek_logical_blk(STp, aSRpnt, STp->logical_blk_num - 1);

	if (result < 0)
		printk(KERN_WARNING "%s:W: Stepping over filemark %s failed.\n",
		       name, forward ? "forward" : "backward");

	return result;
}


/* Get the tape position. */

static int osst_get_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt)
{
	unsigned char		scmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	int			result = 0;
	char		      * name   = tape_name(STp);

	/* KG: We want to be able to use it for checking Write Buffer availability
	 * and thus don't want to risk to overwrite anything. Exchange buffers ... */
	char		mybuf[24];
	char	      * olddata = STp->buffer->b_data;
	int		oldsize = STp->buffer->buffer_size;

	if (STp->ready != ST_READY) return (-EIO);

	memset (scmd, 0, MAX_COMMAND_SIZE);
	scmd[0] = READ_POSITION;

	STp->buffer->b_data = mybuf; STp->buffer->buffer_size = 24;
	SRpnt = osst_do_scsi(*aSRpnt, STp, scmd, 20, DMA_FROM_DEVICE,
			     STp->timeout, MAX_RETRIES, 1);
	if (!SRpnt) {
		STp->buffer->b_data = olddata; STp->buffer->buffer_size = oldsize;
		return (-EBUSY);
	}
	*aSRpnt = SRpnt;

	if (STp->buffer->syscall_result)
		result = ((SRpnt->sense[2] & 0x0f) == 3) ?
-EIO : -EINVAL;	/* 3: Write Error */

	if (result == -EINVAL)
		printk(KERN_ERR "%s:E: Can't read tape position.\n", name);
	else {
		if (result == -EIO) {	/* re-read position - this needs to preserve media errors */
			unsigned char mysense[16];
			memcpy (mysense, SRpnt->sense, 16);
			memset (scmd, 0, MAX_COMMAND_SIZE);
			scmd[0] = READ_POSITION;
			STp->buffer->b_data = mybuf; STp->buffer->buffer_size = 24;
			SRpnt = osst_do_scsi(SRpnt, STp, scmd, 20, DMA_FROM_DEVICE,
					     STp->timeout, MAX_RETRIES, 1);
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Reread position, reason=[%02x:%02x:%02x], result=[%s%02x:%02x:%02x]\n",
			       name, mysense[2], mysense[12], mysense[13], STp->buffer->syscall_result ? "" : "ok:",
			       SRpnt->sense[2], SRpnt->sense[12], SRpnt->sense[13]);
#endif
			if (!STp->buffer->syscall_result)
				memcpy (SRpnt->sense, mysense, 16);
			else
				printk(KERN_WARNING "%s:W: Double error in get position\n", name);
		}
		STp->first_frame_position = ((STp->buffer)->b_data[4] << 24)
					  + ((STp->buffer)->b_data[5] << 16)
					  + ((STp->buffer)->b_data[6] <<  8)
					  +  (STp->buffer)->b_data[7];
		STp->last_frame_position  = ((STp->buffer)->b_data[ 8] << 24)
					  + ((STp->buffer)->b_data[ 9] << 16)
					  + ((STp->buffer)->b_data[10] <<  8)
					  +  (STp->buffer)->b_data[11];
		STp->cur_frames           =  (STp->buffer)->b_data[15];
#if DEBUG
		if (debugging) {
			printk(OSST_DEB_MSG "%s:D: Drive Positions: host %d, tape %d%s, buffer %d\n", name,
			       STp->first_frame_position, STp->last_frame_position,
			       ((STp->buffer)->b_data[0] & 0x80) ? " (BOP)" :
			       ((STp->buffer)->b_data[0] & 0x40) ? " (EOP)" : "",
			       STp->cur_frames);
		}
#endif
		if (STp->cur_frames == 0 && STp->first_frame_position != STp->last_frame_position) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Correcting read position %d, %d, %d\n", name,
			       STp->first_frame_position, STp->last_frame_position, STp->cur_frames);
#endif
			STp->first_frame_position = STp->last_frame_position;
		}
	}
	STp->buffer->b_data = olddata; STp->buffer->buffer_size = oldsize;

	return (result == 0 ? STp->first_frame_position : result);
}


/* Set the tape block */
static int osst_set_frame_position(struct osst_tape *STp, struct osst_request ** aSRpnt, int ppos, int skip)
{
	unsigned char		scmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	struct st_partstat    * STps;
	int			result = 0;
	int			pp     = (ppos == 3000 && !skip) ? 0 : ppos;
	char		      * name   = tape_name(STp);

	if (STp->ready != ST_READY) return (-EIO);

	STps = &(STp->ps[STp->partition]);

	if (ppos < 0 || ppos > STp->capacity) {
		printk(KERN_WARNING "%s:W: Reposition request %d out of range\n", name, ppos);
		pp = ppos = ppos < 0 ?
0 : (STp->capacity - 1);
		result = (-EINVAL);
	}

	do {
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Setting ppos to %d.\n", name, pp);
#endif
		memset (scmd, 0, MAX_COMMAND_SIZE);
		scmd[0] = SEEK_10;
		scmd[1] = 1;
		scmd[3] = (pp >> 24);
		scmd[4] = (pp >> 16);
		scmd[5] = (pp >>  8);
		scmd[6] =  pp;
		if (skip)
			scmd[9] = 0x80;

		SRpnt = osst_do_scsi(*aSRpnt, STp, scmd, 0, DMA_NONE, STp->long_timeout,
				     MAX_RETRIES, 1);
		if (!SRpnt)
			return (-EBUSY);
		*aSRpnt = SRpnt;

		if ((STp->buffer)->syscall_result != 0) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: SEEK command from %d to %d failed.\n",
			       name, STp->first_frame_position, pp);
#endif
			result = (-EIO);
		}
		if (pp != ppos)
			osst_wait_ready(STp, aSRpnt, 5 * 60, OSST_WAIT_POSITION_COMPLETE);
	} while ((pp != ppos) && (pp = ppos));
	STp->first_frame_position = STp->last_frame_position = ppos;
	STps->eof = ST_NOEOF;
	STps->at_sm = 0;
	STps->rw = ST_IDLE;
	STp->frame_in_buffer = 0;
	return result;
}

static int osst_write_trailer(struct osst_tape *STp, struct osst_request ** aSRpnt, int leave_at_EOT)
{
	struct st_partstat * STps = &(STp->ps[STp->partition]);
	int result = 0;

	if (STp->write_type != OS_WRITE_NEW_MARK) {
		/* true unless the user wrote the filemark for us */
		result = osst_flush_drive_buffer(STp, aSRpnt);
		if (result < 0) goto out;
		result = osst_write_filemark(STp, aSRpnt);
		if (result < 0) goto out;

		if (STps->drv_file >= 0)
			STps->drv_file++ ;
		STps->drv_block = 0;
	}
	result = osst_write_eod(STp, aSRpnt);
	osst_write_header(STp, aSRpnt, leave_at_EOT);

	STps->eof = ST_FM;
out:
	return result;
}

/* osst versions of st functions - augmented and stripped to suit OnStream only */

/* Flush the write buffer (never need to write if variable blocksize). */
static int osst_flush_write_buffer(struct osst_tape *STp, struct osst_request ** aSRpnt)
{
	int			offset, transfer, blks = 0;
	int			result = 0;
	unsigned char		cmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt = *aSRpnt;
	struct st_partstat    * STps;
	char		      * name = tape_name(STp);

	if ((STp->buffer)->writing) {
		if (SRpnt == (STp->buffer)->last_SRpnt)
#if DEBUG
			{ printk(OSST_DEB_MSG
	 "%s:D: aSRpnt points to osst_request that write_behind_check will release -- cleared\n", name);
#endif
			*aSRpnt = SRpnt = NULL;
#if DEBUG
			} else if (SRpnt)
				printk(OSST_DEB_MSG
	 "%s:D: aSRpnt does not point to osst_request that write_behind_check will release -- strange\n", name);
#endif
		osst_write_behind_check(STp);
		if ((STp->buffer)->syscall_result) {
#if DEBUG
			if (debugging)
				printk(OSST_DEB_MSG "%s:D: Async write error (flush) %x.\n",
				       name, (STp->buffer)->midlevel_result);
#endif
			if ((STp->buffer)->midlevel_result == INT_MAX)
				return (-ENOSPC);
			return (-EIO);
		}
	}

	result = 0;
	if (STp->dirty == 1) {

		STp->write_count++;
		STps     = &(STp->ps[STp->partition]);
		STps->rw = ST_WRITING;
		offset   = STp->buffer->buffer_bytes;
		blks     = (offset + STp->block_size - 1) / STp->block_size;
		transfer = OS_FRAME_SIZE;

		if (offset < OS_DATA_SIZE)
			osst_zero_buffer_tail(STp->buffer);

		if (STp->poll)
			if (osst_wait_frame (STp, aSRpnt, STp->first_frame_position, -50, 120))
				result = osst_recover_wait_frame(STp, aSRpnt, 1);

		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = WRITE_6;
		cmd[1] = 1;
		cmd[4] = 1;

		switch (STp->write_type) {
		   case OS_WRITE_DATA:
#if DEBUG
			if (debugging)
				printk(OSST_DEB_MSG "%s:D: Writing %d blocks to frame %d, lblks %d-%d\n",
				       name, blks, STp->frame_seq_number,
				       STp->logical_blk_num - blks, STp->logical_blk_num - 1);
#endif
			osst_init_aux(STp, OS_FRAME_TYPE_DATA, STp->frame_seq_number++,
				      STp->logical_blk_num - blks, STp->block_size, blks);
			break;
		   case OS_WRITE_EOD:
			osst_init_aux(STp, OS_FRAME_TYPE_EOD, STp->frame_seq_number++,
				      STp->logical_blk_num, 0, 0);
			break;
		   case OS_WRITE_NEW_MARK:
			osst_init_aux(STp, OS_FRAME_TYPE_MARKER, STp->frame_seq_number++,
				      STp->logical_blk_num++, 0, blks=1);
			break;
		   case OS_WRITE_HEADER:
			osst_init_aux(STp, OS_FRAME_TYPE_HEADER, 0, 0, 0, blks=0);
			break;
		   default: /* probably FILLER */
			osst_init_aux(STp, OS_FRAME_TYPE_FILL, 0, 0, 0, 0);
		}
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Flushing %d bytes, Transferring %d bytes in %d lblocks.\n",
			       name, offset, transfer, blks);
#endif

		SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, transfer, DMA_TO_DEVICE,
				     STp->timeout, MAX_RETRIES, 1);
		*aSRpnt = SRpnt;
		if (!SRpnt)
			return (-EBUSY);

		if ((STp->buffer)->syscall_result != 0) {
#if DEBUG
			printk(OSST_DEB_MSG
			       "%s:D: write sense [0]=0x%02x [2]=%02x [12]=%02x [13]=%02x\n",
			       name, SRpnt->sense[0], SRpnt->sense[2],
			       SRpnt->sense[12], SRpnt->sense[13]);
#endif
			if ((SRpnt->sense[0] & 0x70) == 0x70 &&
			    (SRpnt->sense[2] & 0x40) && /* FIXME - SC-30 drive doesn't assert EOM bit */
			    (SRpnt->sense[2] & 0x0f) == NO_SENSE) {
				STp->dirty = 0;
				(STp->buffer)->buffer_bytes = 0;
				result = (-ENOSPC);
			}
			else {
				if (osst_write_error_recovery(STp, aSRpnt, 1)) {
					printk(KERN_ERR
"%s:E: Error on flush write.\n", name); 3169 - result = (-EIO); 3170 - } 3171 - } 3172 - STps->drv_block = (-1); /* FIXME - even if write recovery succeeds? */ 3173 - } 3174 - else { 3175 - STp->first_frame_position++; 3176 - STp->dirty = 0; 3177 - (STp->buffer)->buffer_bytes = 0; 3178 - } 3179 - } 3180 - #if DEBUG 3181 - printk(OSST_DEB_MSG "%s:D: Exit flush write buffer with code %d\n", name, result); 3182 - #endif 3183 - return result; 3184 - } 3185 - 3186 - 3187 - /* Flush the tape buffer. The tape will be positioned correctly unless 3188 - seek_next is true. */ 3189 - static int osst_flush_buffer(struct osst_tape * STp, struct osst_request ** aSRpnt, int seek_next) 3190 - { 3191 - struct st_partstat * STps; 3192 - int backspace = 0, result = 0; 3193 - #if DEBUG 3194 - char * name = tape_name(STp); 3195 - #endif 3196 - 3197 - /* 3198 - * If there was a bus reset, block further access 3199 - * to this device. 3200 - */ 3201 - if( STp->pos_unknown) 3202 - return (-EIO); 3203 - 3204 - if (STp->ready != ST_READY) 3205 - return 0; 3206 - 3207 - STps = &(STp->ps[STp->partition]); 3208 - if (STps->rw == ST_WRITING || STp->dirty) { /* Writing */ 3209 - STp->write_type = OS_WRITE_DATA; 3210 - return osst_flush_write_buffer(STp, aSRpnt); 3211 - } 3212 - if (STp->block_size == 0) 3213 - return 0; 3214 - 3215 - #if DEBUG 3216 - printk(OSST_DEB_MSG "%s:D: Reached flush (read) buffer\n", name); 3217 - #endif 3218 - 3219 - if (!STp->can_bsr) { 3220 - backspace = ((STp->buffer)->buffer_bytes + (STp->buffer)->read_pointer) / STp->block_size - 3221 - ((STp->buffer)->read_pointer + STp->block_size - 1 ) / STp->block_size ; 3222 - (STp->buffer)->buffer_bytes = 0; 3223 - (STp->buffer)->read_pointer = 0; 3224 - STp->frame_in_buffer = 0; /* FIXME is this relevant w. OSST? 
*/
	}

	if (!seek_next) {
		if (STps->eof == ST_FM_HIT) {
			result = cross_eof(STp, aSRpnt, 0);	/* Back over the EOF hit */
			if (!result)
				STps->eof = ST_NOEOF;
			else {
				if (STps->drv_file >= 0)
					STps->drv_file++;
				STps->drv_block = 0;
			}
		}
		if (!result && backspace > 0)	/* TODO -- design and run a test case for this */
			result = osst_seek_logical_blk(STp, aSRpnt, STp->logical_blk_num - backspace);
	}
	else if (STps->eof == ST_FM_HIT) {
		if (STps->drv_file >= 0)
			STps->drv_file++;
		STps->drv_block = 0;
		STps->eof = ST_NOEOF;
	}

	return result;
}

static int osst_write_frame(struct osst_tape * STp, struct osst_request ** aSRpnt, int synchronous)
{
	unsigned char		cmd[MAX_COMMAND_SIZE];
	struct osst_request   * SRpnt;
	int			blks;
#if DEBUG
	char		      * name = tape_name(STp);
#endif

	if ((!STp->raw) && (STp->first_frame_position == 0xbae)) {	/* _must_ preserve buffer!
*/
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Reaching config partition.\n", name);
#endif
		if (osst_flush_drive_buffer(STp, aSRpnt) < 0) {
			return (-EIO);
		}
		/* error recovery may have bumped us past the header partition */
		if (osst_get_frame_position(STp, aSRpnt) < 0xbb8) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Skipping over config partition.\n", name);
#endif
			osst_position_tape_and_confirm(STp, aSRpnt, 0xbb8);
		}
	}

	if (STp->poll)
		if (osst_wait_frame (STp, aSRpnt, STp->first_frame_position, -48, 120))
			if (osst_recover_wait_frame(STp, aSRpnt, 1))
				return (-EIO);

//	osst_build_stats(STp, &SRpnt);

	STp->ps[STp->partition].rw = ST_WRITING;
	STp->write_type            = OS_WRITE_DATA;

	memset(cmd, 0, MAX_COMMAND_SIZE);
	cmd[0]   = WRITE_6;
	cmd[1]   = 1;
	cmd[4]   = 1;	/* one frame at a time... */
	blks     = STp->buffer->buffer_bytes / STp->block_size;
#if DEBUG
	if (debugging)
		printk(OSST_DEB_MSG "%s:D: Writing %d blocks to frame %d, lblks %d-%d\n", name, blks,
		       STp->frame_seq_number, STp->logical_blk_num - blks, STp->logical_blk_num - 1);
#endif
	osst_init_aux(STp, OS_FRAME_TYPE_DATA, STp->frame_seq_number++,
		      STp->logical_blk_num - blks, STp->block_size, blks);

#if DEBUG
	if (!synchronous)
		STp->write_pending = 1;
#endif
	SRpnt = osst_do_scsi(*aSRpnt, STp, cmd, OS_FRAME_SIZE, DMA_TO_DEVICE, STp->timeout,
			     MAX_RETRIES, synchronous);
	if (!SRpnt)
		return (-EBUSY);
	*aSRpnt = SRpnt;

	if (synchronous) {
		if (STp->buffer->syscall_result != 0) {
#if DEBUG
			if (debugging)
				printk(OSST_DEB_MSG "%s:D: Error on write:\n", name);
#endif
			if ((SRpnt->sense[0] & 0x70) == 0x70 &&
			    (SRpnt->sense[2] & 0x40)) {
				if ((SRpnt->sense[2] & 0x0f) ==
VOLUME_OVERFLOW)
					return (-ENOSPC);
			}
			else {
				if (osst_write_error_recovery(STp, aSRpnt, 1))
					return (-EIO);
			}
		}
		else
			STp->first_frame_position++;
	}

	STp->write_count++;

	return 0;
}

/* Lock or unlock the drive door. Don't use when struct osst_request allocated. */
static int do_door_lock(struct osst_tape * STp, int do_lock)
{
	int retval;

#if DEBUG
	printk(OSST_DEB_MSG "%s:D: %socking drive door.\n", tape_name(STp), do_lock ? "L" : "Unl");
#endif

	retval = scsi_set_medium_removal(STp->device,
		do_lock ? SCSI_REMOVAL_PREVENT : SCSI_REMOVAL_ALLOW);
	if (!retval)
		STp->door_locked = do_lock ? ST_LOCKED_EXPLICIT : ST_UNLOCKED;
	else
		STp->door_locked = ST_LOCK_FAILS;
	return retval;
}

/* Set the internal state after reset */
static void reset_state(struct osst_tape *STp)
{
	int i;
	struct st_partstat *STps;

	STp->pos_unknown = 0;
	for (i = 0; i < ST_NBR_PARTITIONS; i++) {
		STps = &(STp->ps[i]);
		STps->rw = ST_IDLE;
		STps->eof = ST_NOEOF;
		STps->at_sm = 0;
		STps->last_block_valid = 0;
		STps->drv_block = -1;
		STps->drv_file = -1;
	}
}


/* Entry points to osst */

/* Write command */
static ssize_t osst_write(struct file * filp, const char __user * buf, size_t count, loff_t *ppos)
{
	ssize_t		total, retval = 0;
	ssize_t		i, do_count, blks, transfer;
	int		write_threshold;
	int		doing_write = 0;
	const char	__user * b_point;
	struct osst_request   * SRpnt = NULL;
	struct st_modedef     * STm;
	struct st_partstat    * STps;
	struct osst_tape      * STp = filp->private_data;
	char		      * name = tape_name(STp);


	if
(mutex_lock_interruptible(&STp->lock)) 3389 - return (-ERESTARTSYS); 3390 - 3391 - /* 3392 - * If we are in the middle of error recovery, don't let anyone 3393 - * else try and use this device. Also, if error recovery fails, it 3394 - * may try and take the device offline, in which case all further 3395 - * access to the device is prohibited. 3396 - */ 3397 - if( !scsi_block_when_processing_errors(STp->device) ) { 3398 - retval = (-ENXIO); 3399 - goto out; 3400 - } 3401 - 3402 - if (STp->ready != ST_READY) { 3403 - if (STp->ready == ST_NO_TAPE) 3404 - retval = (-ENOMEDIUM); 3405 - else 3406 - retval = (-EIO); 3407 - goto out; 3408 - } 3409 - STm = &(STp->modes[STp->current_mode]); 3410 - if (!STm->defined) { 3411 - retval = (-ENXIO); 3412 - goto out; 3413 - } 3414 - if (count == 0) 3415 - goto out; 3416 - 3417 - /* 3418 - * If there was a bus reset, block further access 3419 - * to this device. 3420 - */ 3421 - if (STp->pos_unknown) { 3422 - retval = (-EIO); 3423 - goto out; 3424 - } 3425 - 3426 - #if DEBUG 3427 - if (!STp->in_use) { 3428 - printk(OSST_DEB_MSG "%s:D: Incorrect device.\n", name); 3429 - retval = (-EIO); 3430 - goto out; 3431 - } 3432 - #endif 3433 - 3434 - if (STp->write_prot) { 3435 - retval = (-EACCES); 3436 - goto out; 3437 - } 3438 - 3439 - /* Write must be integral number of blocks */ 3440 - if (STp->block_size != 0 && (count % STp->block_size) != 0) { 3441 - printk(KERN_ERR "%s:E: Write (%zd bytes) not multiple of tape block size (%d%c).\n", 3442 - name, count, STp->block_size<1024? 
3443 - STp->block_size:STp->block_size/1024, STp->block_size<1024?'b':'k'); 3444 - retval = (-EINVAL); 3445 - goto out; 3446 - } 3447 - 3448 - if (STp->first_frame_position >= STp->capacity - OSST_EOM_RESERVE) { 3449 - printk(KERN_ERR "%s:E: Write truncated at EOM early warning (frame %d).\n", 3450 - name, STp->first_frame_position); 3451 - retval = (-ENOSPC); 3452 - goto out; 3453 - } 3454 - 3455 - if (STp->do_auto_lock && STp->door_locked == ST_UNLOCKED && !do_door_lock(STp, 1)) 3456 - STp->door_locked = ST_LOCKED_AUTO; 3457 - 3458 - STps = &(STp->ps[STp->partition]); 3459 - 3460 - if (STps->rw == ST_READING) { 3461 - #if DEBUG 3462 - printk(OSST_DEB_MSG "%s:D: Switching from read to write at file %d, block %d\n", name, 3463 - STps->drv_file, STps->drv_block); 3464 - #endif 3465 - retval = osst_flush_buffer(STp, &SRpnt, 0); 3466 - if (retval) 3467 - goto out; 3468 - STps->rw = ST_IDLE; 3469 - } 3470 - if (STps->rw != ST_WRITING) { 3471 - /* Are we totally rewriting this tape? */ 3472 - if (!STp->header_ok || 3473 - (STp->first_frame_position == STp->first_data_ppos && STps->drv_block < 0) || 3474 - (STps->drv_file == 0 && STps->drv_block == 0)) { 3475 - STp->wrt_pass_cntr++; 3476 - #if DEBUG 3477 - printk(OSST_DEB_MSG "%s:D: Allocating next write pass counter: %d\n", 3478 - name, STp->wrt_pass_cntr); 3479 - #endif 3480 - osst_reset_header(STp, &SRpnt); 3481 - STps->drv_file = STps->drv_block = 0; 3482 - } 3483 - /* Do we know where we'll be writing on the tape? 
*/ 3484 - else { 3485 - if ((STp->fast_open && osst_verify_position(STp, &SRpnt)) || 3486 - STps->drv_file < 0 || STps->drv_block < 0) { 3487 - if (STp->first_frame_position == STp->eod_frame_ppos) { /* at EOD */ 3488 - STps->drv_file = STp->filemark_cnt; 3489 - STps->drv_block = 0; 3490 - } 3491 - else { 3492 - /* We have no idea where the tape is positioned - give up */ 3493 - #if DEBUG 3494 - printk(OSST_DEB_MSG 3495 - "%s:D: Cannot write at indeterminate position.\n", name); 3496 - #endif 3497 - retval = (-EIO); 3498 - goto out; 3499 - } 3500 - } 3501 - if ((STps->drv_file + STps->drv_block) > 0 && STps->drv_file < STp->filemark_cnt) { 3502 - STp->filemark_cnt = STps->drv_file; 3503 - STp->last_mark_ppos = 3504 - ntohl(STp->header_cache->dat_fm_tab.fm_tab_ent[STp->filemark_cnt-1]); 3505 - printk(KERN_WARNING 3506 - "%s:W: Overwriting file %d with old write pass counter %d\n", 3507 - name, STps->drv_file, STp->wrt_pass_cntr); 3508 - printk(KERN_WARNING 3509 - "%s:W: may lead to stale data being accepted on reading back!\n", 3510 - name); 3511 - #if DEBUG 3512 - printk(OSST_DEB_MSG 3513 - "%s:D: resetting filemark count to %d and last mark ppos,lbn to %d,%d\n", 3514 - name, STp->filemark_cnt, STp->last_mark_ppos, STp->last_mark_lbn); 3515 - #endif 3516 - } 3517 - } 3518 - STp->fast_open = 0; 3519 - } 3520 - if (!STp->header_ok) { 3521 - #if DEBUG 3522 - printk(OSST_DEB_MSG "%s:D: Write cannot proceed without valid headers\n", name); 3523 - #endif 3524 - retval = (-EIO); 3525 - goto out; 3526 - } 3527 - 3528 - if ((STp->buffer)->writing) { 3529 - if (SRpnt) printk(KERN_ERR "%s:A: Not supposed to have SRpnt at line %d\n", name, __LINE__); 3530 - osst_write_behind_check(STp); 3531 - if ((STp->buffer)->syscall_result) { 3532 - #if DEBUG 3533 - if (debugging) 3534 - printk(OSST_DEB_MSG "%s:D: Async write error (write) %x.\n", name, 3535 - (STp->buffer)->midlevel_result); 3536 - #endif 3537 - if ((STp->buffer)->midlevel_result == INT_MAX) 3538 - STps->eof = ST_EOM_OK; 
3539 - else 3540 - STps->eof = ST_EOM_ERROR; 3541 - } 3542 - } 3543 - if (STps->eof == ST_EOM_OK) { 3544 - retval = (-ENOSPC); 3545 - goto out; 3546 - } 3547 - else if (STps->eof == ST_EOM_ERROR) { 3548 - retval = (-EIO); 3549 - goto out; 3550 - } 3551 - 3552 - /* Check the buffer readability in cases where copy_user might catch 3553 - the problems after some tape movement. */ 3554 - if ((copy_from_user(&i, buf, 1) != 0 || 3555 - copy_from_user(&i, buf + count - 1, 1) != 0)) { 3556 - retval = (-EFAULT); 3557 - goto out; 3558 - } 3559 - 3560 - if (!STm->do_buffer_writes) { 3561 - write_threshold = 1; 3562 - } 3563 - else 3564 - write_threshold = (STp->buffer)->buffer_blocks * STp->block_size; 3565 - if (!STm->do_async_writes) 3566 - write_threshold--; 3567 - 3568 - total = count; 3569 - #if DEBUG 3570 - if (debugging) 3571 - printk(OSST_DEB_MSG "%s:D: Writing %d bytes to file %d block %d lblk %d fseq %d fppos %d\n", 3572 - name, (int) count, STps->drv_file, STps->drv_block, 3573 - STp->logical_blk_num, STp->frame_seq_number, STp->first_frame_position); 3574 - #endif 3575 - b_point = buf; 3576 - while ((STp->buffer)->buffer_bytes + count > write_threshold) 3577 - { 3578 - doing_write = 1; 3579 - do_count = (STp->buffer)->buffer_blocks * STp->block_size - 3580 - (STp->buffer)->buffer_bytes; 3581 - if (do_count > count) 3582 - do_count = count; 3583 - 3584 - i = append_to_buffer(b_point, STp->buffer, do_count); 3585 - if (i) { 3586 - retval = i; 3587 - goto out; 3588 - } 3589 - 3590 - blks = do_count / STp->block_size; 3591 - STp->logical_blk_num += blks; /* logical_blk_num is incremented as data is moved from user */ 3592 - 3593 - i = osst_write_frame(STp, &SRpnt, 1); 3594 - 3595 - if (i == (-ENOSPC)) { 3596 - transfer = STp->buffer->writing; /* FIXME -- check this logic */ 3597 - if (transfer <= do_count) { 3598 - *ppos += do_count - transfer; 3599 - count -= do_count - transfer; 3600 - if (STps->drv_block >= 0) { 3601 - STps->drv_block += (do_count - transfer) / 
STp->block_size; 3602 - } 3603 - STps->eof = ST_EOM_OK; 3604 - retval = (-ENOSPC); /* EOM within current request */ 3605 - #if DEBUG 3606 - if (debugging) 3607 - printk(OSST_DEB_MSG "%s:D: EOM with %d bytes unwritten.\n", 3608 - name, (int) transfer); 3609 - #endif 3610 - } 3611 - else { 3612 - STps->eof = ST_EOM_ERROR; 3613 - STps->drv_block = (-1); /* Too cautious? */ 3614 - retval = (-EIO); /* EOM for old data */ 3615 - #if DEBUG 3616 - if (debugging) 3617 - printk(OSST_DEB_MSG "%s:D: EOM with lost data.\n", name); 3618 - #endif 3619 - } 3620 - } 3621 - else 3622 - retval = i; 3623 - 3624 - if (retval < 0) { 3625 - if (SRpnt != NULL) { 3626 - osst_release_request(SRpnt); 3627 - SRpnt = NULL; 3628 - } 3629 - STp->buffer->buffer_bytes = 0; 3630 - STp->dirty = 0; 3631 - if (count < total) 3632 - retval = total - count; 3633 - goto out; 3634 - } 3635 - 3636 - *ppos += do_count; 3637 - b_point += do_count; 3638 - count -= do_count; 3639 - if (STps->drv_block >= 0) { 3640 - STps->drv_block += blks; 3641 - } 3642 - STp->buffer->buffer_bytes = 0; 3643 - STp->dirty = 0; 3644 - } /* end while write threshold exceeded */ 3645 - 3646 - if (count != 0) { 3647 - STp->dirty = 1; 3648 - i = append_to_buffer(b_point, STp->buffer, count); 3649 - if (i) { 3650 - retval = i; 3651 - goto out; 3652 - } 3653 - blks = count / STp->block_size; 3654 - STp->logical_blk_num += blks; 3655 - if (STps->drv_block >= 0) { 3656 - STps->drv_block += blks; 3657 - } 3658 - *ppos += count; 3659 - count = 0; 3660 - } 3661 - 3662 - if (doing_write && (STp->buffer)->syscall_result != 0) { 3663 - retval = (STp->buffer)->syscall_result; 3664 - goto out; 3665 - } 3666 - 3667 - if (STm->do_async_writes && ((STp->buffer)->buffer_bytes >= STp->write_threshold)) { 3668 - /* Schedule an asynchronous write */ 3669 - (STp->buffer)->writing = ((STp->buffer)->buffer_bytes / 3670 - STp->block_size) * STp->block_size; 3671 - STp->dirty = !((STp->buffer)->writing == 3672 - (STp->buffer)->buffer_bytes); 3673 - 3674 - 
i = osst_write_frame(STp, &SRpnt, 0); 3675 - if (i < 0) { 3676 - retval = (-EIO); 3677 - goto out; 3678 - } 3679 - SRpnt = NULL; /* Prevent releasing this request! */ 3680 - } 3681 - STps->at_sm &= (total == 0); 3682 - if (total > 0) 3683 - STps->eof = ST_NOEOF; 3684 - 3685 - retval = total; 3686 - 3687 - out: 3688 - if (SRpnt != NULL) osst_release_request(SRpnt); 3689 - 3690 - mutex_unlock(&STp->lock); 3691 - 3692 - return retval; 3693 - } 3694 - 3695 - 3696 - /* Read command */ 3697 - static ssize_t osst_read(struct file * filp, char __user * buf, size_t count, loff_t *ppos) 3698 - { 3699 - ssize_t total, retval = 0; 3700 - ssize_t i, transfer; 3701 - int special; 3702 - struct st_modedef * STm; 3703 - struct st_partstat * STps; 3704 - struct osst_request * SRpnt = NULL; 3705 - struct osst_tape * STp = filp->private_data; 3706 - char * name = tape_name(STp); 3707 - 3708 - 3709 - if (mutex_lock_interruptible(&STp->lock)) 3710 - return (-ERESTARTSYS); 3711 - 3712 - /* 3713 - * If we are in the middle of error recovery, don't let anyone 3714 - * else try and use this device. Also, if error recovery fails, it 3715 - * may try and take the device offline, in which case all further 3716 - * access to the device is prohibited. 
3717 - */ 3718 - if( !scsi_block_when_processing_errors(STp->device) ) { 3719 - retval = (-ENXIO); 3720 - goto out; 3721 - } 3722 - 3723 - if (STp->ready != ST_READY) { 3724 - if (STp->ready == ST_NO_TAPE) 3725 - retval = (-ENOMEDIUM); 3726 - else 3727 - retval = (-EIO); 3728 - goto out; 3729 - } 3730 - STm = &(STp->modes[STp->current_mode]); 3731 - if (!STm->defined) { 3732 - retval = (-ENXIO); 3733 - goto out; 3734 - } 3735 - #if DEBUG 3736 - if (!STp->in_use) { 3737 - printk(OSST_DEB_MSG "%s:D: Incorrect device.\n", name); 3738 - retval = (-EIO); 3739 - goto out; 3740 - } 3741 - #endif 3742 - /* Must have initialized medium */ 3743 - if (!STp->header_ok) { 3744 - retval = (-EIO); 3745 - goto out; 3746 - } 3747 - 3748 - if (STp->do_auto_lock && STp->door_locked == ST_UNLOCKED && !do_door_lock(STp, 1)) 3749 - STp->door_locked = ST_LOCKED_AUTO; 3750 - 3751 - STps = &(STp->ps[STp->partition]); 3752 - if (STps->rw == ST_WRITING) { 3753 - retval = osst_flush_buffer(STp, &SRpnt, 0); 3754 - if (retval) 3755 - goto out; 3756 - STps->rw = ST_IDLE; 3757 - /* FIXME -- this may leave the tape without EOD and up2date headers */ 3758 - } 3759 - 3760 - if ((count % STp->block_size) != 0) { 3761 - printk(KERN_WARNING 3762 - "%s:W: Read (%zd bytes) not multiple of tape block size (%d%c).\n", name, count, 3763 - STp->block_size<1024?STp->block_size:STp->block_size/1024, STp->block_size<1024?'b':'k'); 3764 - } 3765 - 3766 - #if DEBUG 3767 - if (debugging && STps->eof != ST_NOEOF) 3768 - printk(OSST_DEB_MSG "%s:D: EOF/EOM flag up (%d). Bytes %d\n", name, 3769 - STps->eof, (STp->buffer)->buffer_bytes); 3770 - #endif 3771 - if ((STp->buffer)->buffer_bytes == 0 && 3772 - STps->eof >= ST_EOD_1) { 3773 - if (STps->eof < ST_EOD) { 3774 - STps->eof += 1; 3775 - retval = 0; 3776 - goto out; 3777 - } 3778 - retval = (-EIO); /* EOM or Blank Check */ 3779 - goto out; 3780 - } 3781 - 3782 - /* Check the buffer writability before any tape movement. Don't alter 3783 - buffer data. 
*/ 3784 - if (copy_from_user(&i, buf, 1) != 0 || 3785 - copy_to_user (buf, &i, 1) != 0 || 3786 - copy_from_user(&i, buf + count - 1, 1) != 0 || 3787 - copy_to_user (buf + count - 1, &i, 1) != 0) { 3788 - retval = (-EFAULT); 3789 - goto out; 3790 - } 3791 - 3792 - /* Loop until enough data in buffer or a special condition found */ 3793 - for (total = 0, special = 0; total < count - STp->block_size + 1 && !special; ) { 3794 - 3795 - /* Get new data if the buffer is empty */ 3796 - if ((STp->buffer)->buffer_bytes == 0) { 3797 - if (STps->eof == ST_FM_HIT) 3798 - break; 3799 - special = osst_get_logical_frame(STp, &SRpnt, STp->frame_seq_number, 0); 3800 - if (special < 0) { /* No need to continue read */ 3801 - STp->frame_in_buffer = 0; 3802 - retval = special; 3803 - goto out; 3804 - } 3805 - } 3806 - 3807 - /* Move the data from driver buffer to user buffer */ 3808 - if ((STp->buffer)->buffer_bytes > 0) { 3809 - #if DEBUG 3810 - if (debugging && STps->eof != ST_NOEOF) 3811 - printk(OSST_DEB_MSG "%s:D: EOF up (%d). Left %d, needed %d.\n", name, 3812 - STps->eof, (STp->buffer)->buffer_bytes, (int) (count - total)); 3813 - #endif 3814 - /* force multiple of block size, note block_size may have been adjusted */ 3815 - transfer = (((STp->buffer)->buffer_bytes < count - total ? 3816 - (STp->buffer)->buffer_bytes : count - total)/ 3817 - STp->block_size) * STp->block_size; 3818 - 3819 - if (transfer == 0) { 3820 - printk(KERN_WARNING 3821 - "%s:W: Nothing can be transferred, requested %zd, tape block size (%d%c).\n", 3822 - name, count, STp->block_size < 1024? 
3823 - STp->block_size:STp->block_size/1024, 3824 - STp->block_size<1024?'b':'k'); 3825 - break; 3826 - } 3827 - i = from_buffer(STp->buffer, buf, transfer); 3828 - if (i) { 3829 - retval = i; 3830 - goto out; 3831 - } 3832 - STp->logical_blk_num += transfer / STp->block_size; 3833 - STps->drv_block += transfer / STp->block_size; 3834 - *ppos += transfer; 3835 - buf += transfer; 3836 - total += transfer; 3837 - } 3838 - 3839 - if ((STp->buffer)->buffer_bytes == 0) { 3840 - #if DEBUG 3841 - if (debugging) 3842 - printk(OSST_DEB_MSG "%s:D: Finished with frame %d\n", 3843 - name, STp->frame_seq_number); 3844 - #endif 3845 - STp->frame_in_buffer = 0; 3846 - STp->frame_seq_number++; /* frame to look for next time */ 3847 - } 3848 - } /* for (total = 0, special = 0; total < count && !special; ) */ 3849 - 3850 - /* Change the eof state if no data from tape or buffer */ 3851 - if (total == 0) { 3852 - if (STps->eof == ST_FM_HIT) { 3853 - STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD_2:ST_FM; 3854 - STps->drv_block = 0; 3855 - if (STps->drv_file >= 0) 3856 - STps->drv_file++; 3857 - } 3858 - else if (STps->eof == ST_EOD_1) { 3859 - STps->eof = ST_EOD_2; 3860 - if (STps->drv_block > 0 && STps->drv_file >= 0) 3861 - STps->drv_file++; 3862 - STps->drv_block = 0; 3863 - } 3864 - else if (STps->eof == ST_EOD_2) 3865 - STps->eof = ST_EOD; 3866 - } 3867 - else if (STps->eof == ST_FM) 3868 - STps->eof = ST_NOEOF; 3869 - 3870 - retval = total; 3871 - 3872 - out: 3873 - if (SRpnt != NULL) osst_release_request(SRpnt); 3874 - 3875 - mutex_unlock(&STp->lock); 3876 - 3877 - return retval; 3878 - } 3879 - 3880 - 3881 - /* Set the driver options */ 3882 - static void osst_log_options(struct osst_tape *STp, struct st_modedef *STm, char *name) 3883 - { 3884 - printk(KERN_INFO 3885 - "%s:I: Mode %d options: buffer writes: %d, async writes: %d, read ahead: %d\n", 3886 - name, STp->current_mode, STm->do_buffer_writes, STm->do_async_writes, 3887 - STm->do_read_ahead); 
3888 - printk(KERN_INFO 3889 - "%s:I: can bsr: %d, two FMs: %d, fast mteom: %d, auto lock: %d,\n", 3890 - name, STp->can_bsr, STp->two_fm, STp->fast_mteom, STp->do_auto_lock); 3891 - printk(KERN_INFO 3892 - "%s:I: defs for wr: %d, no block limits: %d, partitions: %d, s2 log: %d\n", 3893 - name, STm->defaults_for_writes, STp->omit_blklims, STp->can_partitions, 3894 - STp->scsi2_logical); 3895 - printk(KERN_INFO 3896 - "%s:I: sysv: %d\n", name, STm->sysv); 3897 - #if DEBUG 3898 - printk(KERN_INFO 3899 - "%s:D: debugging: %d\n", 3900 - name, debugging); 3901 - #endif 3902 - } 3903 - 3904 - 3905 - static int osst_set_options(struct osst_tape *STp, long options) 3906 - { 3907 - int value; 3908 - long code; 3909 - struct st_modedef * STm; 3910 - char * name = tape_name(STp); 3911 - 3912 - STm = &(STp->modes[STp->current_mode]); 3913 - if (!STm->defined) { 3914 - memcpy(STm, &(STp->modes[0]), sizeof(*STm)); 3915 - modes_defined = 1; 3916 - #if DEBUG 3917 - if (debugging) 3918 - printk(OSST_DEB_MSG "%s:D: Initialized mode %d definition from mode 0\n", 3919 - name, STp->current_mode); 3920 - #endif 3921 - } 3922 - 3923 - code = options & MT_ST_OPTIONS; 3924 - if (code == MT_ST_BOOLEANS) { 3925 - STm->do_buffer_writes = (options & MT_ST_BUFFER_WRITES) != 0; 3926 - STm->do_async_writes = (options & MT_ST_ASYNC_WRITES) != 0; 3927 - STm->defaults_for_writes = (options & MT_ST_DEF_WRITES) != 0; 3928 - STm->do_read_ahead = (options & MT_ST_READ_AHEAD) != 0; 3929 - STp->two_fm = (options & MT_ST_TWO_FM) != 0; 3930 - STp->fast_mteom = (options & MT_ST_FAST_MTEOM) != 0; 3931 - STp->do_auto_lock = (options & MT_ST_AUTO_LOCK) != 0; 3932 - STp->can_bsr = (options & MT_ST_CAN_BSR) != 0; 3933 - STp->omit_blklims = (options & MT_ST_NO_BLKLIMS) != 0; 3934 - if ((STp->device)->scsi_level >= SCSI_2) 3935 - STp->can_partitions = (options & MT_ST_CAN_PARTITIONS) != 0; 3936 - STp->scsi2_logical = (options & MT_ST_SCSI2LOGICAL) != 0; 3937 - STm->sysv = (options & MT_ST_SYSV) != 0; 3938 - #if 
DEBUG 3939 - debugging = (options & MT_ST_DEBUGGING) != 0; 3940 - #endif 3941 - osst_log_options(STp, STm, name); 3942 - } 3943 - else if (code == MT_ST_SETBOOLEANS || code == MT_ST_CLEARBOOLEANS) { 3944 - value = (code == MT_ST_SETBOOLEANS); 3945 - if ((options & MT_ST_BUFFER_WRITES) != 0) 3946 - STm->do_buffer_writes = value; 3947 - if ((options & MT_ST_ASYNC_WRITES) != 0) 3948 - STm->do_async_writes = value; 3949 - if ((options & MT_ST_DEF_WRITES) != 0) 3950 - STm->defaults_for_writes = value; 3951 - if ((options & MT_ST_READ_AHEAD) != 0) 3952 - STm->do_read_ahead = value; 3953 - if ((options & MT_ST_TWO_FM) != 0) 3954 - STp->two_fm = value; 3955 - if ((options & MT_ST_FAST_MTEOM) != 0) 3956 - STp->fast_mteom = value; 3957 - if ((options & MT_ST_AUTO_LOCK) != 0) 3958 - STp->do_auto_lock = value; 3959 - if ((options & MT_ST_CAN_BSR) != 0) 3960 - STp->can_bsr = value; 3961 - if ((options & MT_ST_NO_BLKLIMS) != 0) 3962 - STp->omit_blklims = value; 3963 - if ((STp->device)->scsi_level >= SCSI_2 && 3964 - (options & MT_ST_CAN_PARTITIONS) != 0) 3965 - STp->can_partitions = value; 3966 - if ((options & MT_ST_SCSI2LOGICAL) != 0) 3967 - STp->scsi2_logical = value; 3968 - if ((options & MT_ST_SYSV) != 0) 3969 - STm->sysv = value; 3970 - #if DEBUG 3971 - if ((options & MT_ST_DEBUGGING) != 0) 3972 - debugging = value; 3973 - #endif 3974 - osst_log_options(STp, STm, name); 3975 - } 3976 - else if (code == MT_ST_WRITE_THRESHOLD) { 3977 - value = (options & ~MT_ST_OPTIONS) * ST_KILOBYTE; 3978 - if (value < 1 || value > osst_buffer_size) { 3979 - printk(KERN_WARNING "%s:W: Write threshold %d too small or too large.\n", 3980 - name, value); 3981 - return (-EIO); 3982 - } 3983 - STp->write_threshold = value; 3984 - printk(KERN_INFO "%s:I: Write threshold set to %d bytes.\n", 3985 - name, value); 3986 - } 3987 - else if (code == MT_ST_DEF_BLKSIZE) { 3988 - value = (options & ~MT_ST_OPTIONS); 3989 - if (value == ~MT_ST_OPTIONS) { 3990 - STm->default_blksize = (-1); 3991 - 
printk(KERN_INFO "%s:I: Default block size disabled.\n", name); 3992 - } 3993 - else { 3994 - if (value < 512 || value > OS_DATA_SIZE || OS_DATA_SIZE % value) { 3995 - printk(KERN_WARNING "%s:W: Default block size cannot be set to %d.\n", 3996 - name, value); 3997 - return (-EINVAL); 3998 - } 3999 - STm->default_blksize = value; 4000 - printk(KERN_INFO "%s:I: Default block size set to %d bytes.\n", 4001 - name, STm->default_blksize); 4002 - } 4003 - } 4004 - else if (code == MT_ST_TIMEOUTS) { 4005 - value = (options & ~MT_ST_OPTIONS); 4006 - if ((value & MT_ST_SET_LONG_TIMEOUT) != 0) { 4007 - STp->long_timeout = (value & ~MT_ST_SET_LONG_TIMEOUT) * HZ; 4008 - printk(KERN_INFO "%s:I: Long timeout set to %d seconds.\n", name, 4009 - (value & ~MT_ST_SET_LONG_TIMEOUT)); 4010 - } 4011 - else { 4012 - STp->timeout = value * HZ; 4013 - printk(KERN_INFO "%s:I: Normal timeout set to %d seconds.\n", name, value); 4014 - } 4015 - } 4016 - else if (code == MT_ST_DEF_OPTIONS) { 4017 - code = (options & ~MT_ST_CLEAR_DEFAULT); 4018 - value = (options & MT_ST_CLEAR_DEFAULT); 4019 - if (code == MT_ST_DEF_DENSITY) { 4020 - if (value == MT_ST_CLEAR_DEFAULT) { 4021 - STm->default_density = (-1); 4022 - printk(KERN_INFO "%s:I: Density default disabled.\n", name); 4023 - } 4024 - else { 4025 - STm->default_density = value & 0xff; 4026 - printk(KERN_INFO "%s:I: Density default set to %x\n", 4027 - name, STm->default_density); 4028 - } 4029 - } 4030 - else if (code == MT_ST_DEF_DRVBUFFER) { 4031 - if (value == MT_ST_CLEAR_DEFAULT) { 4032 - STp->default_drvbuffer = 0xff; 4033 - printk(KERN_INFO "%s:I: Drive buffer default disabled.\n", name); 4034 - } 4035 - else { 4036 - STp->default_drvbuffer = value & 7; 4037 - printk(KERN_INFO "%s:I: Drive buffer default set to %x\n", 4038 - name, STp->default_drvbuffer); 4039 - } 4040 - } 4041 - else if (code == MT_ST_DEF_COMPRESSION) { 4042 - if (value == MT_ST_CLEAR_DEFAULT) { 4043 - STm->default_compression = ST_DONT_TOUCH; 4044 - printk(KERN_INFO 
"%s:I: Compression default disabled.\n", name); 4045 - } 4046 - else { 4047 - STm->default_compression = (value & 1 ? ST_YES : ST_NO); 4048 - printk(KERN_INFO "%s:I: Compression default set to %x\n", 4049 - name, (value & 1)); 4050 - } 4051 - } 4052 - } 4053 - else 4054 - return (-EIO); 4055 - 4056 - return 0; 4057 - } 4058 - 4059 - 4060 - /* Internal ioctl function */ 4061 - static int osst_int_ioctl(struct osst_tape * STp, struct osst_request ** aSRpnt, 4062 - unsigned int cmd_in, unsigned long arg) 4063 - { 4064 - int timeout; 4065 - long ltmp; 4066 - int i, ioctl_result; 4067 - int chg_eof = 1; 4068 - unsigned char cmd[MAX_COMMAND_SIZE]; 4069 - struct osst_request * SRpnt = * aSRpnt; 4070 - struct st_partstat * STps; 4071 - int fileno, blkno, at_sm, frame_seq_numbr, logical_blk_num; 4072 - int datalen = 0, direction = DMA_NONE; 4073 - char * name = tape_name(STp); 4074 - 4075 - if (STp->ready != ST_READY && cmd_in != MTLOAD) { 4076 - if (STp->ready == ST_NO_TAPE) 4077 - return (-ENOMEDIUM); 4078 - else 4079 - return (-EIO); 4080 - } 4081 - timeout = STp->long_timeout; 4082 - STps = &(STp->ps[STp->partition]); 4083 - fileno = STps->drv_file; 4084 - blkno = STps->drv_block; 4085 - at_sm = STps->at_sm; 4086 - frame_seq_numbr = STp->frame_seq_number; 4087 - logical_blk_num = STp->logical_blk_num; 4088 - 4089 - memset(cmd, 0, MAX_COMMAND_SIZE); 4090 - switch (cmd_in) { 4091 - case MTFSFM: 4092 - chg_eof = 0; /* Changed from the FSF after this */ 4093 - /* fall through */ 4094 - case MTFSF: 4095 - if (STp->raw) 4096 - return (-EIO); 4097 - if (STp->linux_media) 4098 - ioctl_result = osst_space_over_filemarks_forward_fast(STp, &SRpnt, cmd_in, arg); 4099 - else 4100 - ioctl_result = osst_space_over_filemarks_forward_slow(STp, &SRpnt, cmd_in, arg); 4101 - if (fileno >= 0) 4102 - fileno += arg; 4103 - blkno = 0; 4104 - at_sm &= (arg == 0); 4105 - goto os_bypass; 4106 - 4107 - case MTBSF: 4108 - chg_eof = 0; /* Changed from the FSF after this */ 4109 - /* fall through */ 
4110 - case MTBSFM: 4111 - if (STp->raw) 4112 - return (-EIO); 4113 - ioctl_result = osst_space_over_filemarks_backward(STp, &SRpnt, cmd_in, arg); 4114 - if (fileno >= 0) 4115 - fileno -= arg; 4116 - blkno = (-1); /* We can't know the block number */ 4117 - at_sm &= (arg == 0); 4118 - goto os_bypass; 4119 - 4120 - case MTFSR: 4121 - case MTBSR: 4122 - #if DEBUG 4123 - if (debugging) 4124 - printk(OSST_DEB_MSG "%s:D: Skipping %lu blocks %s from logical block %d\n", 4125 - name, arg, cmd_in==MTFSR?"forward":"backward", logical_blk_num); 4126 - #endif 4127 - if (cmd_in == MTFSR) { 4128 - logical_blk_num += arg; 4129 - if (blkno >= 0) blkno += arg; 4130 - } 4131 - else { 4132 - logical_blk_num -= arg; 4133 - if (blkno >= 0) blkno -= arg; 4134 - } 4135 - ioctl_result = osst_seek_logical_blk(STp, &SRpnt, logical_blk_num); 4136 - fileno = STps->drv_file; 4137 - blkno = STps->drv_block; 4138 - at_sm &= (arg == 0); 4139 - goto os_bypass; 4140 - 4141 - case MTFSS: 4142 - cmd[0] = SPACE; 4143 - cmd[1] = 0x04; /* Space Setmarks */ /* FIXME -- OS can't do this? */ 4144 - cmd[2] = (arg >> 16); 4145 - cmd[3] = (arg >> 8); 4146 - cmd[4] = arg; 4147 - #if DEBUG 4148 - if (debugging) 4149 - printk(OSST_DEB_MSG "%s:D: Spacing tape forward %d setmarks.\n", name, 4150 - cmd[2] * 65536 + cmd[3] * 256 + cmd[4]); 4151 - #endif 4152 - if (arg != 0) { 4153 - blkno = fileno = (-1); 4154 - at_sm = 1; 4155 - } 4156 - break; 4157 - case MTBSS: 4158 - cmd[0] = SPACE; 4159 - cmd[1] = 0x04; /* Space Setmarks */ /* FIXME -- OS can't do this? 
*/ 4160 - ltmp = (-arg); 4161 - cmd[2] = (ltmp >> 16); 4162 - cmd[3] = (ltmp >> 8); 4163 - cmd[4] = ltmp; 4164 - #if DEBUG 4165 - if (debugging) { 4166 - if (cmd[2] & 0x80) 4167 - ltmp = 0xff000000; 4168 - ltmp = ltmp | (cmd[2] << 16) | (cmd[3] << 8) | cmd[4]; 4169 - printk(OSST_DEB_MSG "%s:D: Spacing tape backward %ld setmarks.\n", 4170 - name, (-ltmp)); 4171 - } 4172 - #endif 4173 - if (arg != 0) { 4174 - blkno = fileno = (-1); 4175 - at_sm = 1; 4176 - } 4177 - break; 4178 - case MTWEOF: 4179 - if ((STps->rw == ST_WRITING || STp->dirty) && !STp->pos_unknown) { 4180 - STp->write_type = OS_WRITE_DATA; 4181 - ioctl_result = osst_flush_write_buffer(STp, &SRpnt); 4182 - } else 4183 - ioctl_result = 0; 4184 - #if DEBUG 4185 - if (debugging) 4186 - printk(OSST_DEB_MSG "%s:D: Writing %ld filemark(s).\n", name, arg); 4187 - #endif 4188 - for (i=0; i<arg; i++) 4189 - ioctl_result |= osst_write_filemark(STp, &SRpnt); 4190 - if (fileno >= 0) fileno += arg; 4191 - if (blkno >= 0) blkno = 0; 4192 - goto os_bypass; 4193 - 4194 - case MTWSM: 4195 - if (STp->write_prot) 4196 - return (-EACCES); 4197 - if (!STp->raw) 4198 - return 0; 4199 - cmd[0] = WRITE_FILEMARKS; /* FIXME -- need OS version */ 4200 - if (cmd_in == MTWSM) 4201 - cmd[1] = 2; 4202 - cmd[2] = (arg >> 16); 4203 - cmd[3] = (arg >> 8); 4204 - cmd[4] = arg; 4205 - timeout = STp->timeout; 4206 - #if DEBUG 4207 - if (debugging) 4208 - printk(OSST_DEB_MSG "%s:D: Writing %d setmark(s).\n", name, 4209 - cmd[2] * 65536 + cmd[3] * 256 + cmd[4]); 4210 - #endif 4211 - if (fileno >= 0) 4212 - fileno += arg; 4213 - blkno = 0; 4214 - at_sm = (cmd_in == MTWSM); 4215 - break; 4216 - case MTOFFL: 4217 - case MTLOAD: 4218 - case MTUNLOAD: 4219 - case MTRETEN: 4220 - cmd[0] = START_STOP; 4221 - cmd[1] = 1; /* Don't wait for completion */ 4222 - if (cmd_in == MTLOAD) { 4223 - if (STp->ready == ST_NO_TAPE) 4224 - cmd[4] = 4; /* open tray */ 4225 - else 4226 - cmd[4] = 1; /* load */ 4227 - } 4228 - if (cmd_in == MTRETEN) 4229 - cmd[4] = 
3; /* retension then mount */ 4230 - if (cmd_in == MTOFFL) 4231 - cmd[4] = 4; /* rewind then eject */ 4232 - timeout = STp->timeout; 4233 - #if DEBUG 4234 - if (debugging) { 4235 - switch (cmd_in) { 4236 - case MTUNLOAD: 4237 - printk(OSST_DEB_MSG "%s:D: Unloading tape.\n", name); 4238 - break; 4239 - case MTLOAD: 4240 - printk(OSST_DEB_MSG "%s:D: Loading tape.\n", name); 4241 - break; 4242 - case MTRETEN: 4243 - printk(OSST_DEB_MSG "%s:D: Retensioning tape.\n", name); 4244 - break; 4245 - case MTOFFL: 4246 - printk(OSST_DEB_MSG "%s:D: Ejecting tape.\n", name); 4247 - break; 4248 - } 4249 - } 4250 - #endif 4251 - fileno = blkno = at_sm = frame_seq_numbr = logical_blk_num = 0 ; 4252 - break; 4253 - case MTNOP: 4254 - #if DEBUG 4255 - if (debugging) 4256 - printk(OSST_DEB_MSG "%s:D: No-op on tape.\n", name); 4257 - #endif 4258 - return 0; /* Should do something ? */ 4259 - break; 4260 - case MTEOM: 4261 - #if DEBUG 4262 - if (debugging) 4263 - printk(OSST_DEB_MSG "%s:D: Spacing to end of recorded medium.\n", name); 4264 - #endif 4265 - if ((osst_position_tape_and_confirm(STp, &SRpnt, STp->eod_frame_ppos) < 0) || 4266 - (osst_get_logical_frame(STp, &SRpnt, -1, 0) < 0)) { 4267 - ioctl_result = -EIO; 4268 - goto os_bypass; 4269 - } 4270 - if (STp->buffer->aux->frame_type != OS_FRAME_TYPE_EOD) { 4271 - #if DEBUG 4272 - printk(OSST_DEB_MSG "%s:D: No EOD frame found where expected.\n", name); 4273 - #endif 4274 - ioctl_result = -EIO; 4275 - goto os_bypass; 4276 - } 4277 - ioctl_result = osst_set_frame_position(STp, &SRpnt, STp->eod_frame_ppos, 0); 4278 - fileno = STp->filemark_cnt; 4279 - blkno = at_sm = 0; 4280 - goto os_bypass; 4281 - 4282 - case MTERASE: 4283 - if (STp->write_prot) 4284 - return (-EACCES); 4285 - ioctl_result = osst_reset_header(STp, &SRpnt); 4286 - i = osst_write_eod(STp, &SRpnt); 4287 - if (i < ioctl_result) ioctl_result = i; 4288 - i = osst_position_tape_and_confirm(STp, &SRpnt, STp->eod_frame_ppos); 4289 - if (i < ioctl_result) ioctl_result = i; 
		fileno = blkno = at_sm = 0 ;
		goto os_bypass;

	case MTREW:
		cmd[0] = REZERO_UNIT; /* rewind */
		cmd[1] = 1;
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Rewinding tape, Immed=%d.\n", name, cmd[1]);
#endif
		fileno = blkno = at_sm = frame_seq_numbr = logical_blk_num = 0 ;
		break;

	case MTSETBLK:           /* Set block length */
		if ((STps->drv_block == 0 )                      &&
		    !STp->dirty                                  &&
		    ((STp->buffer)->buffer_bytes == 0)           &&
		    ((arg & MT_ST_BLKSIZE_MASK) >= 512 )         &&
		    ((arg & MT_ST_BLKSIZE_MASK) <= OS_DATA_SIZE) &&
		    !(OS_DATA_SIZE % (arg & MT_ST_BLKSIZE_MASK)) ) {
			/*
			 * Only allowed to change the block size if you opened the
			 * device at the beginning of a file before writing anything.
			 * Note, that when reading, changing block_size is futile,
			 * as the size used when writing overrides it.
			 */
			STp->block_size = (arg & MT_ST_BLKSIZE_MASK);
			printk(KERN_INFO "%s:I: Block size set to %d bytes.\n",
					   name, STp->block_size);
			return 0;
		}
		/* fall through */
	case MTSETDENSITY:       /* Set tape density */
	case MTSETDRVBUFFER:     /* Set drive buffering */
	case SET_DENS_AND_BLK:   /* Set density and block size */
		chg_eof = 0;
		if (STp->dirty || (STp->buffer)->buffer_bytes != 0)
			return (-EIO);       /* Not allowed if data in buffer */
		if ((cmd_in == MTSETBLK || cmd_in == SET_DENS_AND_BLK) &&
		    (arg & MT_ST_BLKSIZE_MASK) != 0                    &&
		    (arg & MT_ST_BLKSIZE_MASK) != STp->block_size       ) {
			printk(KERN_WARNING "%s:W: Illegal to set block size to %d%s.\n",
						name, (int)(arg & MT_ST_BLKSIZE_MASK),
						(OS_DATA_SIZE % (arg & MT_ST_BLKSIZE_MASK))?"":" now");
			return (-EINVAL);
		}
		return 0;            /* FIXME silently ignore if block size didn't change */

	default:
		return (-ENOSYS);
	}

	SRpnt = osst_do_scsi(SRpnt, STp, cmd, datalen, direction, timeout, MAX_RETRIES, 1);

	ioctl_result = (STp->buffer)->syscall_result;

	if (!SRpnt) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Couldn't exec scsi cmd for IOCTL\n", name);
#endif
		return ioctl_result;
	}

	if (!ioctl_result) {  /* SCSI command successful */
		STp->frame_seq_number = frame_seq_numbr;
		STp->logical_blk_num  = logical_blk_num;
	}

os_bypass:
#if DEBUG
	if (debugging)
		printk(OSST_DEB_MSG "%s:D: IOCTL (%d) Result=%d\n", name, cmd_in, ioctl_result);
#endif

	if (!ioctl_result) {				/* success */

		if (cmd_in == MTFSFM) {
			 fileno--;
			 blkno--;
		}
		if (cmd_in == MTBSFM) {
			 fileno++;
			 blkno++;
		}
		STps->drv_block = blkno;
		STps->drv_file = fileno;
		STps->at_sm = at_sm;

		if (cmd_in == MTEOM)
			STps->eof = ST_EOD;
		else if ((cmd_in == MTFSFM || cmd_in == MTBSF) && STps->eof == ST_FM_HIT) {
			ioctl_result = osst_seek_logical_blk(STp, &SRpnt, STp->logical_blk_num-1);
			STps->drv_block++;
			STp->logical_blk_num++;
			STp->frame_seq_number++;
			STp->frame_in_buffer = 0;
			STp->buffer->read_pointer = 0;
		}
		else if (cmd_in == MTFSF)
			STps->eof = (STp->first_frame_position >= STp->eod_frame_ppos)?ST_EOD:ST_FM;
		else if (chg_eof)
			STps->eof = ST_NOEOF;

		if (cmd_in == MTOFFL || cmd_in == MTUNLOAD)
			STp->rew_at_close = 0;
		else if (cmd_in == MTLOAD) {
			for (i=0; i < ST_NBR_PARTITIONS; i++) {
			    STp->ps[i].rw = ST_IDLE;
			    STp->ps[i].last_block_valid = 0;/* FIXME - where else is this field maintained? */
			}
			STp->partition = 0;
		}

		if (cmd_in == MTREW) {
			ioctl_result = osst_position_tape_and_confirm(STp, &SRpnt, STp->first_data_ppos); 
			if (ioctl_result > 0)
				ioctl_result = 0;
		}

	} else if (cmd_in == MTBSF || cmd_in == MTBSFM ) {
		if (osst_position_tape_and_confirm(STp, &SRpnt, STp->first_data_ppos) < 0)
			STps->drv_file = STps->drv_block = -1;
		else
			STps->drv_file = STps->drv_block = 0;
		STps->eof = ST_NOEOF;
	} else if (cmd_in == MTFSF || cmd_in == MTFSFM) {
		if (osst_position_tape_and_confirm(STp, &SRpnt, STp->eod_frame_ppos) < 0)
			STps->drv_file = STps->drv_block = -1;
		else {
			STps->drv_file  = STp->filemark_cnt;
			STps->drv_block = 0;
		}
		STps->eof = ST_EOD;
	} else if (cmd_in == MTBSR || cmd_in == MTFSR || cmd_in == MTWEOF || cmd_in == MTEOM) {
		STps->drv_file = STps->drv_block = (-1);
		STps->eof = ST_NOEOF;
		STp->header_ok = 0;
	} else if (cmd_in == MTERASE) {
		STp->header_ok = 0;
	} else if (SRpnt) {  /* SCSI command was not completely successful. */
		if (SRpnt->sense[2] & 0x40) {
			STps->eof = ST_EOM_OK;
			STps->drv_block = 0;
		}
		if (chg_eof)
			STps->eof = ST_NOEOF;

		if ((SRpnt->sense[2] & 0x0f) == BLANK_CHECK)
			STps->eof = ST_EOD;

		if (cmd_in == MTLOAD && osst_wait_for_medium(STp, &SRpnt, 60))
			ioctl_result = osst_wait_ready(STp, &SRpnt, 5 * 60, OSST_WAIT_POSITION_COMPLETE);
	}
	*aSRpnt = SRpnt;

	return ioctl_result;
}


/* Open the device */
static int __os_scsi_tape_open(struct inode * inode, struct file * filp)
{
	unsigned short	      flags;
	int		      i, b_size, new_session = 0, retval = 0;
	unsigned char	      cmd[MAX_COMMAND_SIZE];
	struct osst_request * SRpnt = NULL;
	struct osst_tape    * STp;
	struct st_modedef   * STm;
	struct st_partstat  * STps;
	char		    * name;
	int		      dev  = TAPE_NR(inode);
	int		      mode = TAPE_MODE(inode);

	/*
	 * We really want to do nonseekable_open(inode, filp); here, but some
	 * versions of tar incorrectly call lseek on tapes and bail out if that
	 * fails.  So we disallow pread() and pwrite(), but permit lseeks.
	 */
	filp->f_mode &= ~(FMODE_PREAD | FMODE_PWRITE);

	write_lock(&os_scsi_tapes_lock);
	if (dev >= osst_max_dev || os_scsi_tapes == NULL ||
	    (STp = os_scsi_tapes[dev]) == NULL || !STp->device) {
		write_unlock(&os_scsi_tapes_lock);
		return (-ENXIO);
	}

	name = tape_name(STp);

	if (STp->in_use) {
		write_unlock(&os_scsi_tapes_lock);
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Device already in use.\n", name);
#endif
		return (-EBUSY);
	}
	if (scsi_device_get(STp->device)) {
		write_unlock(&os_scsi_tapes_lock);
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Failed scsi_device_get.\n", name);
#endif
		return (-ENXIO);
	}
	filp->private_data = STp;
	STp->in_use = 1;
	write_unlock(&os_scsi_tapes_lock);
	STp->rew_at_close = TAPE_REWIND(inode);

	if( !scsi_block_when_processing_errors(STp->device) ) {
		return -ENXIO;
	}

	if (mode != STp->current_mode) {
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Mode change from %d to %d.\n",
					       name, STp->current_mode, mode);
#endif
		new_session = 1;
		STp->current_mode = mode;
	}
	STm = &(STp->modes[STp->current_mode]);

	flags = filp->f_flags;
	STp->write_prot = ((flags & O_ACCMODE) == O_RDONLY);

	STp->raw = TAPE_IS_RAW(inode);
	if (STp->raw)
		STp->header_ok = 0;

	/* Allocate data segments for this device's tape buffer */
	if (!enlarge_buffer(STp->buffer, STp->restr_dma)) {
		printk(KERN_ERR "%s:E: Unable to allocate memory segments for tape buffer.\n", name);
		retval = (-EOVERFLOW);
		goto err_out;
	}
	if (STp->buffer->buffer_size >= OS_FRAME_SIZE) {
		for (i = 0, b_size = 0; 
		     (i < STp->buffer->sg_segs) && ((b_size + STp->buffer->sg[i].length) <= OS_DATA_SIZE); 
		     b_size += STp->buffer->sg[i++].length);
		STp->buffer->aux = (os_aux_t *) (page_address(sg_page(&STp->buffer->sg[i])) + OS_DATA_SIZE - b_size);
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: b_data points to %p in segment 0 at %p\n", name,
			STp->buffer->b_data, page_address(STp->buffer->sg[0].page));
		printk(OSST_DEB_MSG "%s:D: AUX points to %p in segment %d at %p\n", name,
			 STp->buffer->aux, i, page_address(STp->buffer->sg[i].page));
#endif
	} else {
		STp->buffer->aux = NULL; /* this had better never happen! */
		printk(KERN_NOTICE "%s:A: Framesize %d too large for buffer.\n", name, OS_FRAME_SIZE);
		retval = (-EIO);
		goto err_out;
	}
	STp->buffer->writing = 0;
	STp->buffer->syscall_result = 0;
	STp->dirty = 0;
	for (i=0; i < ST_NBR_PARTITIONS; i++) {
		STps = &(STp->ps[i]);
		STps->rw = ST_IDLE;
	}
	STp->ready = ST_READY;
#if DEBUG
	STp->nbr_waits = STp->nbr_finished = 0;
#endif

	memset (cmd, 0, MAX_COMMAND_SIZE);
	cmd[0] = TEST_UNIT_READY;

	SRpnt = osst_do_scsi(NULL, STp, cmd, 0, DMA_NONE, STp->timeout, MAX_RETRIES, 1);
	if (!SRpnt) {
		retval = (STp->buffer)->syscall_result;	/* FIXME - valid? */
		goto err_out;
	}
	if ((SRpnt->sense[0] & 0x70) == 0x70      &&
	    (SRpnt->sense[2] & 0x0f) == NOT_READY &&
	     SRpnt->sense[12]        == 4         ) {
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Unit not ready, cause %x\n", name, SRpnt->sense[13]);
#endif
		if (filp->f_flags & O_NONBLOCK) {
			retval = -EAGAIN;
			goto err_out;
		}
		if (SRpnt->sense[13] == 2) {	/* initialize command required (LOAD) */
			memset (cmd, 0, MAX_COMMAND_SIZE);
        		cmd[0] = START_STOP;
			cmd[1] = 1;
			cmd[4] = 1;
			SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
					     STp->timeout, MAX_RETRIES, 1);
		}
		osst_wait_ready(STp, &SRpnt, (SRpnt->sense[13]==1?15:3) * 60, 0);
	}
	if ((SRpnt->sense[0] & 0x70) == 0x70 &&
	    (SRpnt->sense[2] & 0x0f) == UNIT_ATTENTION) { /* New media? */
#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Unit wants attention\n", name);
#endif
		STp->header_ok = 0;

		for (i=0; i < 10; i++) {

			memset (cmd, 0, MAX_COMMAND_SIZE);
			cmd[0] = TEST_UNIT_READY;

			SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
					     STp->timeout, MAX_RETRIES, 1);
			if ((SRpnt->sense[0] & 0x70) != 0x70 ||
			    (SRpnt->sense[2] & 0x0f) != UNIT_ATTENTION)
				break;
		}

		STp->pos_unknown = 0;
		STp->partition = STp->new_partition = 0;
		if (STp->can_partitions)
			STp->nbr_partitions = 1;  /* This guess will be updated later if necessary */
		for (i=0; i < ST_NBR_PARTITIONS; i++) {
			STps = &(STp->ps[i]);
			STps->rw = ST_IDLE;		/* FIXME - seems to be redundant... */
			STps->eof = ST_NOEOF;
			STps->at_sm = 0;
			STps->last_block_valid = 0;
			STps->drv_block = 0;
			STps->drv_file = 0 ;
		}
		new_session = 1;
		STp->recover_count = 0;
		STp->abort_count = 0;
	}
	/*
	 * if we have valid headers from before, and the drive/tape seem untouched,
	 * open without reconfiguring and re-reading the headers
	 */
	if (!STp->buffer->syscall_result && STp->header_ok &&
	    !SRpnt->result && SRpnt->sense[0] == 0) {

		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = MODE_SENSE;
		cmd[1] = 8;
		cmd[2] = VENDOR_IDENT_PAGE;
		cmd[4] = VENDOR_IDENT_PAGE_LENGTH + MODE_HEADER_LENGTH;

		SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_FROM_DEVICE, STp->timeout, 0, 1);

		if (STp->buffer->syscall_result                     ||
		    STp->buffer->b_data[MODE_HEADER_LENGTH + 2] != 'L' ||
		    STp->buffer->b_data[MODE_HEADER_LENGTH + 3] != 'I' ||
		    STp->buffer->b_data[MODE_HEADER_LENGTH + 4] != 'N' ||
		    STp->buffer->b_data[MODE_HEADER_LENGTH + 5] != '4'  ) {
#if DEBUG
			printk(OSST_DEB_MSG "%s:D: Signature was changed to %c%c%c%c\n", name,
			  STp->buffer->b_data[MODE_HEADER_LENGTH + 2],
			  STp->buffer->b_data[MODE_HEADER_LENGTH + 3],
			  STp->buffer->b_data[MODE_HEADER_LENGTH + 4],
			  STp->buffer->b_data[MODE_HEADER_LENGTH + 5]);
#endif
			STp->header_ok = 0;
		}
		i = STp->first_frame_position;
		if (STp->header_ok && i == osst_get_frame_position(STp, &SRpnt)) {
			if (STp->door_locked == ST_UNLOCKED) {
				if (do_door_lock(STp, 1))
					printk(KERN_INFO "%s:I: Can't lock drive door\n", name);
				else
					STp->door_locked = ST_LOCKED_AUTO;
			}
			if (!STp->frame_in_buffer) {
				STp->block_size = (STm->default_blksize > 0) ?
							STm->default_blksize : OS_DATA_SIZE;
				STp->buffer->buffer_bytes = STp->buffer->read_pointer = 0;
			}
			STp->buffer->buffer_blocks = OS_DATA_SIZE / STp->block_size;
			STp->fast_open = 1;
			osst_release_request(SRpnt);
			return 0;
		}
#if DEBUG
		if (i != STp->first_frame_position)
			printk(OSST_DEB_MSG "%s:D: Tape position changed from %d to %d\n",
						name, i, STp->first_frame_position);
#endif
		STp->header_ok = 0;
	}
	STp->fast_open = 0;

	if ((STp->buffer)->syscall_result != 0 &&   /* in all error conditions except no medium */ 
	    (SRpnt->sense[2] != 2 || SRpnt->sense[12] != 0x3A) ) {

		memset(cmd, 0, MAX_COMMAND_SIZE);
		cmd[0] = MODE_SELECT;
		cmd[1] = 0x10;
		cmd[4] = 4 + MODE_HEADER_LENGTH;

		(STp->buffer)->b_data[0] = cmd[4] - 1;
		(STp->buffer)->b_data[1] = 0;			/* Medium Type - ignoring */
		(STp->buffer)->b_data[2] = 0;			/* Reserved */
		(STp->buffer)->b_data[3] = 0;			/* Block Descriptor Length */
		(STp->buffer)->b_data[MODE_HEADER_LENGTH + 0] = 0x3f;
		(STp->buffer)->b_data[MODE_HEADER_LENGTH + 1] = 1;
		(STp->buffer)->b_data[MODE_HEADER_LENGTH + 2] = 2;
		(STp->buffer)->b_data[MODE_HEADER_LENGTH + 3] = 3;

#if DEBUG
		printk(OSST_DEB_MSG "%s:D: Applying soft reset\n", name);
#endif
		SRpnt = osst_do_scsi(SRpnt, STp, cmd, cmd[4], DMA_TO_DEVICE, STp->timeout, 0, 1);

		STp->header_ok = 0;

		for (i=0; i < 10; i++) {

			memset (cmd, 0, MAX_COMMAND_SIZE);
			cmd[0] = TEST_UNIT_READY;

			SRpnt = osst_do_scsi(SRpnt, STp, cmd, 0, DMA_NONE,
						    STp->timeout, MAX_RETRIES, 1);
			if ((SRpnt->sense[0] & 0x70) != 0x70 ||
			    (SRpnt->sense[2] & 0x0f) == NOT_READY)
			break;

			if ((SRpnt->sense[2] & 0x0f) == UNIT_ATTENTION) {
				int j;

				STp->pos_unknown = 0;
				STp->partition = STp->new_partition = 0;
				if (STp->can_partitions)
					STp->nbr_partitions = 1;  /* This guess will be updated later if necessary */
				for (j = 0; j < ST_NBR_PARTITIONS; j++) {
					STps = &(STp->ps[j]);
					STps->rw = ST_IDLE;
					STps->eof = ST_NOEOF;
					STps->at_sm = 0;
					STps->last_block_valid = 0;
					STps->drv_block = 0;
					STps->drv_file = 0 ;
				}
				new_session = 1;
			}
		}
	}

	if (osst_wait_ready(STp, &SRpnt, 15 * 60, 0))		/* FIXME - not allowed with NOBLOCK */
		 printk(KERN_INFO "%s:I: Device did not become Ready in open\n", name);

	if ((STp->buffer)->syscall_result != 0) {
		if ((STp->device)->scsi_level >= SCSI_2 &&
		    (SRpnt->sense[0] & 0x70) == 0x70 &&
		    (SRpnt->sense[2] & 0x0f) == NOT_READY &&
		     SRpnt->sense[12] == 0x3a) { /* Check ASC */
			STp->ready = ST_NO_TAPE;
		} else
			STp->ready = ST_NOT_READY;
		osst_release_request(SRpnt);
		SRpnt = NULL;
		STp->density = 0;	/* Clear the erroneous "residue" */
		STp->write_prot = 0;
		STp->block_size = 0;
		STp->ps[0].drv_file = STp->ps[0].drv_block = (-1);
		STp->partition = STp->new_partition = 0;
		STp->door_locked = ST_UNLOCKED;
		return 0;
	}

	osst_configure_onstream(STp, &SRpnt);

	STp->block_size = STp->raw ? OS_FRAME_SIZE : (
			     (STm->default_blksize > 0) ? STm->default_blksize : OS_DATA_SIZE);
	STp->buffer->buffer_blocks = STp->raw ? 1 : OS_DATA_SIZE / STp->block_size;
	STp->buffer->buffer_bytes  =
	STp->buffer->read_pointer  =
	STp->frame_in_buffer       = 0;

#if DEBUG
	if (debugging)
		printk(OSST_DEB_MSG "%s:D: Block size: %d, frame size: %d, buffer size: %d (%d blocks).\n",
		     name, STp->block_size, OS_FRAME_SIZE, (STp->buffer)->buffer_size,
		     (STp->buffer)->buffer_blocks);
#endif

	if (STp->drv_write_prot) {
		STp->write_prot = 1;
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Write protected\n", name);
#endif
		if ((flags & O_ACCMODE) == O_WRONLY || (flags & O_ACCMODE) == O_RDWR) {
			retval = (-EROFS);
			goto err_out;
		}
	}

	if (new_session) {  /* Change the drive parameters for the new mode */
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: New Session\n", name);
#endif
		STp->density_changed = STp->blksize_changed = 0;
		STp->compression_changed = 0;
	}

	/*
	 * properly position the tape and check the ADR headers
	 */
	if (STp->door_locked == ST_UNLOCKED) {
		 if (do_door_lock(STp, 1))
			printk(KERN_INFO "%s:I: Can't lock drive door\n", name);
		 else
			STp->door_locked = ST_LOCKED_AUTO;
	}

	osst_analyze_headers(STp, &SRpnt);

	osst_release_request(SRpnt);
	SRpnt = NULL;

	return 0;

err_out:
	if (SRpnt != NULL)
		osst_release_request(SRpnt);
	normalize_buffer(STp->buffer);
	STp->header_ok = 0;
	STp->in_use = 0;
	scsi_device_put(STp->device);

	return retval;
}

/* BKL pushdown: spaghetti avoidance wrapper */
static int os_scsi_tape_open(struct inode * inode, struct file * filp)
{
	int ret;

	mutex_lock(&osst_int_mutex);
	ret = __os_scsi_tape_open(inode, filp);
	mutex_unlock(&osst_int_mutex);
	return ret;
}



/* Flush the tape buffer before close */
static int os_scsi_tape_flush(struct file * filp, fl_owner_t id)
{
	int		      result = 0, result2;
	struct osst_tape    * STp    = filp->private_data;
	struct st_modedef   * STm    = &(STp->modes[STp->current_mode]);
	struct st_partstat  * STps   = &(STp->ps[STp->partition]);
	struct osst_request * SRpnt  = NULL;
	char		    * name   = tape_name(STp);

	if (file_count(filp) > 1)
		return 0;

	if ((STps->rw == ST_WRITING || STp->dirty) && !STp->pos_unknown) {
		STp->write_type = OS_WRITE_DATA;
		result = osst_flush_write_buffer(STp, &SRpnt);
		if (result != 0 && result != (-ENOSPC))
			goto out;
	}
	if ( STps->rw >= ST_WRITING && !STp->pos_unknown) {

#if DEBUG
		if (debugging) {
			printk(OSST_DEB_MSG "%s:D: File length %ld bytes.\n",
					       name, (long)(filp->f_pos));
			printk(OSST_DEB_MSG "%s:D: Async write waits %d, finished %d.\n",
					       name, STp->nbr_waits, STp->nbr_finished);
		}
#endif
		result = osst_write_trailer(STp, &SRpnt, !(STp->rew_at_close));
#if DEBUG
		if (debugging)
			printk(OSST_DEB_MSG "%s:D: Buffer flushed, %d EOF(s) written\n",
					       name, 1+STp->two_fm);
#endif
	}
	else if (!STp->rew_at_close) {
		STps = &(STp->ps[STp->partition]);
		if (!STm->sysv || STps->rw != ST_READING) {
			if (STp->can_bsr)
				result = osst_flush_buffer(STp, &SRpnt, 0); /* this is the default path */
			else if (STps->eof == ST_FM_HIT) {
				result = cross_eof(STp, &SRpnt, 0);
					if (result) {
						if (STps->drv_file >= 0)
							STps->drv_file++;
						STps->drv_block = 0;
						STps->eof = ST_FM;
					}
					else
						STps->eof = ST_NOEOF;
			}
		}
		else if ((STps->eof == ST_NOEOF &&
			  !(result = cross_eof(STp, &SRpnt, 1))) ||
			 STps->eof == ST_FM_HIT) {
			if (STps->drv_file >= 0)
				STps->drv_file++;
			STps->drv_block = 0;
			STps->eof = ST_FM;
		}
	}

out:
	if (STp->rew_at_close) {
		result2 = osst_position_tape_and_confirm(STp, &SRpnt, STp->first_data_ppos);
		STps->drv_file = STps->drv_block = STp->frame_seq_number = STp->logical_blk_num = 0;
		if (result == 0 && result2 < 0)
			result = result2;
	}
	if (SRpnt) osst_release_request(SRpnt);

	if (STp->abort_count || STp->recover_count) {
		printk(KERN_INFO "%s:I:", name);
		if (STp->abort_count)
			printk(" %d unrecovered errors", STp->abort_count);
		if (STp->recover_count)
			printk(" %d recovered errors", STp->recover_count);
		if (STp->write_count)
			printk(" in %d frames written", STp->write_count);
		if (STp->read_count)
			printk(" in %d frames read", STp->read_count);
		printk("\n");
		STp->recover_count = 0;
		STp->abort_count   = 0;
	}
	STp->write_count = 0;
	STp->read_count  = 0;

	return result;
}


/* Close the device and release it */
static int os_scsi_tape_close(struct inode * inode, struct file * filp)
{
	int		      result = 0;
	struct osst_tape    * STp    = filp->private_data;

	if (STp->door_locked == ST_LOCKED_AUTO)
		do_door_lock(STp, 0);

	if (STp->raw)
		STp->header_ok = 0;
	
	normalize_buffer(STp->buffer);
	write_lock(&os_scsi_tapes_lock);
	STp->in_use = 0;
	write_unlock(&os_scsi_tapes_lock);

	scsi_device_put(STp->device);

	return result;
}


/* The ioctl command */
static long osst_ioctl(struct file * file,
	 unsigned int cmd_in, unsigned long arg)
{
	int		      i, cmd_nr, cmd_type, blk, retval = 0;
	struct st_modedef   * STm;
	struct st_partstat  * STps;
	struct osst_request * SRpnt = NULL;
	struct osst_tape    * STp   = file->private_data;
	char		    * name  = tape_name(STp);
	void	    __user  * p     = (void __user *)arg;

	mutex_lock(&osst_int_mutex);
	if (mutex_lock_interruptible(&STp->lock)) {
		mutex_unlock(&osst_int_mutex);
		return -ERESTARTSYS;
	}

#if DEBUG
	if (debugging && !STp->in_use) {
		printk(OSST_DEB_MSG "%s:D: Incorrect device.\n", name);
		retval = (-EIO);
		goto out;
	}
#endif
	STm = &(STp->modes[STp->current_mode]);
	STps = &(STp->ps[STp->partition]);

	/*
	 * If we are in the middle of error recovery, don't let anyone
	 * else try and use this device.  Also, if error recovery fails, it
	 * may try and take the device offline, in which case all further
	 * access to the device is prohibited.
	 */
	retval = scsi_ioctl_block_when_processing_errors(STp->device, cmd_in,
			file->f_flags & O_NDELAY);
	if (retval)
		goto out;

	cmd_type = _IOC_TYPE(cmd_in);
	cmd_nr   = _IOC_NR(cmd_in);
#if DEBUG
	printk(OSST_DEB_MSG "%s:D: Ioctl %d,%d in %s mode\n", name,
			    cmd_type, cmd_nr, STp->raw?"raw":"normal");
#endif
	if (cmd_type == _IOC_TYPE(MTIOCTOP) && cmd_nr == _IOC_NR(MTIOCTOP)) {
		struct mtop mtc;
		int    auto_weof = 0;

		if (_IOC_SIZE(cmd_in) != sizeof(mtc)) {
			retval = (-EINVAL);
			goto out;
		}

		i = copy_from_user((char *) &mtc, p, sizeof(struct mtop));
		if (i) {
			retval = (-EFAULT);
			goto out;
		}

		if (mtc.mt_op == MTSETDRVBUFFER && !capable(CAP_SYS_ADMIN)) {
			printk(KERN_WARNING "%s:W: MTSETDRVBUFFER only allowed for root.\n", name);
			retval = (-EPERM);
			goto out;
		}

		if (!STm->defined && (mtc.mt_op != MTSETDRVBUFFER && (mtc.mt_count & MT_ST_OPTIONS) == 0)) {
			retval = (-ENXIO);
			goto out;
		}

		if (!STp->pos_unknown) {

			if (STps->eof == ST_FM_HIT) {
				if (mtc.mt_op == MTFSF || mtc.mt_op == MTFSFM|| mtc.mt_op == MTEOM) {
					mtc.mt_count -= 1;
					if (STps->drv_file >= 0)
						STps->drv_file += 1;
				}
				else if (mtc.mt_op == MTBSF || mtc.mt_op == MTBSFM) {
					mtc.mt_count += 1;
					if (STps->drv_file >= 0)
						STps->drv_file += 1;
				}
			}

			if (mtc.mt_op == MTSEEK) {
				/* Old position must be restored if partition will be changed */
				i = !STp->can_partitions || (STp->new_partition != STp->partition);
			}
			else {
				i = mtc.mt_op == MTREW   || mtc.mt_op == MTOFFL ||
				    mtc.mt_op == MTRETEN || mtc.mt_op == MTEOM  ||
				    mtc.mt_op == MTLOCK  || mtc.mt_op == MTLOAD ||
				    mtc.mt_op == MTFSF   || mtc.mt_op == MTFSFM ||
				    mtc.mt_op == MTBSF   || mtc.mt_op == MTBSFM ||
				    mtc.mt_op == MTCOMPRESSION;
			}
			i = osst_flush_buffer(STp, &SRpnt, i);
			if (i < 0) {
				retval = i;
				goto out;
			}
		}
		else {
			/*
			 * If there was a bus reset, block further access
			 * to this device.  If the user wants to rewind the tape,
			 * then reset the flag and allow access again.
			 */
			if(mtc.mt_op != MTREW   &&
			   mtc.mt_op != MTOFFL  &&
			   mtc.mt_op != MTRETEN &&
			   mtc.mt_op != MTERASE &&
			   mtc.mt_op != MTSEEK  &&
			   mtc.mt_op != MTEOM)   {
				retval = (-EIO);
				goto out;
			}
			reset_state(STp);
			/* remove this when the midlevel properly clears was_reset */
			STp->device->was_reset = 0;
		}

		if (mtc.mt_op != MTCOMPRESSION  && mtc.mt_op != MTLOCK         &&
		    mtc.mt_op != MTNOP          && mtc.mt_op != MTSETBLK       &&
		    mtc.mt_op != MTSETDENSITY   && mtc.mt_op != MTSETDRVBUFFER && 
		    mtc.mt_op != MTMKPART       && mtc.mt_op != MTSETPART      && 
		    mtc.mt_op != MTWEOF         && mtc.mt_op != MTWSM           ) {

			/*
			 * The user tells us to move to another position on the tape.
			 * If we were appending to the tape content, that would leave
			 * the tape without proper end, in that case write EOD and
			 * update the header to reflect its position.
			 */
#if DEBUG
			printk(KERN_WARNING "%s:D: auto_weod %s at ffp=%d,efp=%d,fsn=%d,lbn=%d,fn=%d,bn=%d\n", name,
					STps->rw >= ST_WRITING ? "write" : STps->rw == ST_READING ? "read" : "idle",
					STp->first_frame_position, STp->eod_frame_ppos, STp->frame_seq_number,
					STp->logical_blk_num, STps->drv_file, STps->drv_block );
#endif
			if (STps->rw >= ST_WRITING && STp->first_frame_position >= STp->eod_frame_ppos) {
				auto_weof = ((STp->write_type != OS_WRITE_NEW_MARK) &&
							!(mtc.mt_op == MTREW || mtc.mt_op == MTOFFL));
				i = osst_write_trailer(STp, &SRpnt,
							!(mtc.mt_op == MTREW || mtc.mt_op == MTOFFL));
#if DEBUG
				printk(KERN_WARNING "%s:D: post trailer xeof=%d,ffp=%d,efp=%d,fsn=%d,lbn=%d,fn=%d,bn=%d\n",
						name, auto_weof, STp->first_frame_position, STp->eod_frame_ppos,
						STp->frame_seq_number, STp->logical_blk_num, STps->drv_file, STps->drv_block );
#endif
				if (i < 0) {
					retval = i;
					goto out;
				}
			}
			STps->rw = ST_IDLE;
		}

		if (mtc.mt_op == MTOFFL && STp->door_locked != ST_UNLOCKED)
			do_door_lock(STp, 0);  /* Ignore result! */

		if (mtc.mt_op == MTSETDRVBUFFER &&
		   (mtc.mt_count & MT_ST_OPTIONS) != 0) {
			retval = osst_set_options(STp, mtc.mt_count);
			goto out;
		}

		if (mtc.mt_op == MTSETPART) {
			if (mtc.mt_count >= STp->nbr_partitions)
				retval = -EINVAL;
			else {
				STp->new_partition = mtc.mt_count;
				retval = 0;
			}
			goto out;
		}

		if (mtc.mt_op == MTMKPART) {
			if (!STp->can_partitions) {
				retval = (-EINVAL);
				goto out;
			}
			if ((i = osst_int_ioctl(STp, &SRpnt, MTREW, 0)) < 0 /*||
			    (i = partition_tape(inode, mtc.mt_count)) < 0*/) {
				retval = i;
				goto out;
			}
			for (i=0; i < ST_NBR_PARTITIONS; i++) {
				STp->ps[i].rw = ST_IDLE;
				STp->ps[i].at_sm = 0;
				STp->ps[i].last_block_valid = 0;
			}
			STp->partition = STp->new_partition = 0;
			STp->nbr_partitions = 1;  /* Bad guess ?-) */
			STps->drv_block = STps->drv_file = 0;
			retval = 0;
			goto out;
		}

		if (mtc.mt_op == MTSEEK) {
			if (STp->raw)
				i = osst_set_frame_position(STp, &SRpnt, mtc.mt_count, 0);
			else
				i = osst_seek_sector(STp, &SRpnt, mtc.mt_count);
			if (!STp->can_partitions)
				STp->ps[0].rw = ST_IDLE;
			retval = i;
			goto out;
		}
 
		if (mtc.mt_op == MTLOCK || mtc.mt_op == MTUNLOCK) {
			retval = do_door_lock(STp, (mtc.mt_op == MTLOCK));
			goto out;
		}

		if (auto_weof)
			cross_eof(STp, &SRpnt, 0);

		if (mtc.mt_op == MTCOMPRESSION)
			retval = -EINVAL;	/* OnStream drives don't have compression hardware */
		else
			/* MTBSF MTBSFM MTBSR MTBSS MTEOM MTERASE MTFSF MTFSFB MTFSR MTFSS
			 * MTLOAD MTOFFL MTRESET MTRETEN MTREW MTUNLOAD MTWEOF MTWSM */
			retval = osst_int_ioctl(STp, &SRpnt, mtc.mt_op, mtc.mt_count);
		goto out;
	}

	if (!STm->defined) {
		retval = (-ENXIO);
		goto out;
	}

	if ((i = osst_flush_buffer(STp, &SRpnt, 0)) < 0) {
		retval = i;
		goto out;
	}

	if (cmd_type == _IOC_TYPE(MTIOCGET) && cmd_nr == _IOC_NR(MTIOCGET)) {
		struct mtget mt_status;

		if (_IOC_SIZE(cmd_in) != sizeof(struct mtget)) {
			 retval = (-EINVAL);
			 goto out;
		}

		mt_status.mt_type = MT_ISONSTREAM_SC;
		mt_status.mt_erreg = STp->recover_erreg << MT_ST_SOFTERR_SHIFT;
		mt_status.mt_dsreg =
			((STp->block_size << MT_ST_BLKSIZE_SHIFT) & MT_ST_BLKSIZE_MASK) |
			((STp->density    << MT_ST_DENSITY_SHIFT) & MT_ST_DENSITY_MASK);
		mt_status.mt_blkno = STps->drv_block;
		mt_status.mt_fileno = STps->drv_file;
		if (STp->block_size != 0) {
			if (STps->rw == ST_WRITING)
				mt_status.mt_blkno += (STp->buffer)->buffer_bytes / STp->block_size;
			else if (STps->rw == ST_READING)
				mt_status.mt_blkno -= ((STp->buffer)->buffer_bytes +
							STp->block_size - 1) / STp->block_size;
		}

		mt_status.mt_gstat = 0;
		if (STp->drv_write_prot)
			mt_status.mt_gstat |= GMT_WR_PROT(0xffffffff);
		if (mt_status.mt_blkno == 0) {
			if (mt_status.mt_fileno == 0)
				mt_status.mt_gstat |= GMT_BOT(0xffffffff);
			else
				mt_status.mt_gstat |= GMT_EOF(0xffffffff);
		}
		mt_status.mt_resid = STp->partition;
		if (STps->eof == ST_EOM_OK || STps->eof == ST_EOM_ERROR)
			mt_status.mt_gstat |= GMT_EOT(0xffffffff);
		else if (STps->eof >= ST_EOM_OK)
			mt_status.mt_gstat |= GMT_EOD(0xffffffff);
		if (STp->density == 1)
			mt_status.mt_gstat |= GMT_D_800(0xffffffff);
		else if (STp->density == 2)
			mt_status.mt_gstat |= GMT_D_1600(0xffffffff);
		else if (STp->density == 3)
			mt_status.mt_gstat |= GMT_D_6250(0xffffffff);
		if (STp->ready == ST_READY)
			mt_status.mt_gstat |= GMT_ONLINE(0xffffffff);
		if (STp->ready == ST_NO_TAPE)
			mt_status.mt_gstat |= GMT_DR_OPEN(0xffffffff);
		if (STps->at_sm)
			mt_status.mt_gstat |= GMT_SM(0xffffffff);
		if (STm->do_async_writes || (STm->do_buffer_writes && STp->block_size != 0) ||
		    STp->drv_buffer != 0)
			mt_status.mt_gstat |= GMT_IM_REP_EN(0xffffffff);

		i = copy_to_user(p, &mt_status, sizeof(struct mtget));
		if (i) {
			retval = (-EFAULT);
			goto out;
		}

		STp->recover_erreg = 0;  /* Clear after read */
		retval = 0;
		goto out;
	} /* End of MTIOCGET */

	if (cmd_type == _IOC_TYPE(MTIOCPOS) && cmd_nr == _IOC_NR(MTIOCPOS)) {
		struct mtpos mt_pos;

		if (_IOC_SIZE(cmd_in) != sizeof(struct mtpos)) {
			retval = (-EINVAL);
			goto out;
		}
		if (STp->raw)
			blk = osst_get_frame_position(STp, &SRpnt);
		else
			blk = osst_get_sector(STp, &SRpnt);
		if (blk < 0) {
			retval = blk;
			goto out;
		}
		mt_pos.mt_blkno = blk;
		i = copy_to_user(p, &mt_pos, sizeof(struct mtpos));
		if (i)
			retval = -EFAULT;
		goto out;
	}
	if (SRpnt) osst_release_request(SRpnt);

	mutex_unlock(&STp->lock);

	retval = scsi_ioctl(STp->device, cmd_in, p);
	mutex_unlock(&osst_int_mutex);
	return retval;

out:
	if (SRpnt) osst_release_request(SRpnt);

	mutex_unlock(&STp->lock);
	mutex_unlock(&osst_int_mutex);

	return retval;
}

#ifdef CONFIG_COMPAT
static long osst_compat_ioctl(struct file * file, unsigned int cmd_in, unsigned long arg)
{
	struct osst_tape *STp = file->private_data;
	struct scsi_device *sdev = STp->device;
	int ret = -ENOIOCTLCMD;
	if (sdev->host->hostt->compat_ioctl) {

		ret = sdev->host->hostt->compat_ioctl(sdev, cmd_in, (void __user *)arg);

	}
	return ret;
}
#endif



/* Memory handling routines */

/* Try to allocate a new tape buffer skeleton. Caller must not hold os_scsi_tapes_lock */
static struct osst_buffer * new_tape_buffer( int from_initialization, int need_dma, int max_sg )
{
	int i;
	gfp_t priority;
	struct osst_buffer *tb;

	if (from_initialization)
		priority = GFP_ATOMIC;
	else
		priority = GFP_KERNEL;

	i = sizeof(struct osst_buffer) + (osst_max_sg_segs - 1) * sizeof(struct scatterlist);
	tb = kzalloc(i, priority);
	if (!tb) {
		printk(KERN_NOTICE "osst :I: Can't allocate new tape buffer.\n");
		return NULL;
	}

	tb->sg_segs = tb->orig_sg_segs = 0;
	tb->use_sg = max_sg;
	tb->in_use = 1;
	tb->dma = need_dma;
	tb->buffer_size = 0;
#if DEBUG
	if (debugging) 
		printk(OSST_DEB_MSG
			"osst :D: Allocated tape buffer skeleton (%d bytes, %d segments, dma: %d).\n",
			   i, max_sg, need_dma);
#endif
	return tb;
}

/* Try to allocate a temporary (while a user has the device open) enlarged tape buffer */
static int enlarge_buffer(struct osst_buffer *STbuffer, int need_dma)
{
	int segs, nbr, max_segs, b_size, order, got;
	gfp_t priority;

	if (STbuffer->buffer_size >= OS_FRAME_SIZE)
		return 1;

	if (STbuffer->sg_segs) {
		printk(KERN_WARNING "osst :A: Buffer not previously normalized.\n");
		normalize_buffer(STbuffer);
	}
	/* See how many segments we can use -- need at least two */
	nbr = max_segs = STbuffer->use_sg;
	if (nbr <= 2)
		return 0;

	priority = GFP_KERNEL /* | __GFP_NOWARN */;
	if (need_dma)
		priority |= GFP_DMA;

	/* Try to allocate the first segment up to OS_DATA_SIZE and the others
	   big enough to reach the goal (code assumes no segments in place) */
	for (b_size = OS_DATA_SIZE, order = OSST_FIRST_ORDER; b_size >= PAGE_SIZE; order--, b_size /= 2) {
		struct page *page = alloc_pages(priority, order);

		STbuffer->sg[0].offset = 0;
		if (page != NULL) {
			sg_set_page(&STbuffer->sg[0], page, b_size, 0);
			STbuffer->b_data = page_address(page);
			break;
		}
	}
	if (sg_page(&STbuffer->sg[0]) == NULL) {
		printk(KERN_NOTICE "osst :I: Can't allocate tape buffer main segment.\n");
		return 0;
	}
	/* Got initial segment of 'bsize,order', continue with same size if possible, except for AUX */
	for (segs=STbuffer->sg_segs=1, got=b_size;
	     segs < max_segs && got < OS_FRAME_SIZE; ) {
		struct page *page = alloc_pages(priority, (OS_FRAME_SIZE - got <= PAGE_SIZE) ? 0 : order);
		STbuffer->sg[segs].offset = 0;
		if (page == NULL) {
			printk(KERN_WARNING "osst :W: Failed to enlarge buffer to %d bytes.\n",
						OS_FRAME_SIZE);
#if DEBUG
			STbuffer->buffer_size = got;
#endif
			normalize_buffer(STbuffer);
			return 0;
		}
		sg_set_page(&STbuffer->sg[segs], page,
			    (OS_FRAME_SIZE - got <= PAGE_SIZE / 2) ? (OS_FRAME_SIZE - got) : b_size, 0);
		got += STbuffer->sg[segs].length;
		STbuffer->buffer_size = got;
		STbuffer->sg_segs = ++segs;
	}
#if DEBUG
	if (debugging) {
		printk(OSST_DEB_MSG
			   "osst :D: Expanded tape buffer (%d bytes, %d->%d segments, dma: %d, at: %p).\n",
			   got, STbuffer->orig_sg_segs, STbuffer->sg_segs, need_dma, STbuffer->b_data);
		printk(OSST_DEB_MSG
			   "osst :D: segment sizes: first %d at %p, last %d bytes at %p.\n",
			   STbuffer->sg[0].length, page_address(STbuffer->sg[0].page),
			   STbuffer->sg[segs-1].length, page_address(STbuffer->sg[segs-1].page));
	}
#endif

	return 1;
}


/* Release the segments */
static void normalize_buffer(struct osst_buffer *STbuffer)
{
	int i, order, b_size;

	for (i=0; i < STbuffer->sg_segs; i++) {

		for (b_size = PAGE_SIZE, order = 0;
		     b_size < STbuffer->sg[i].length;
		     b_size *= 2, order++);

		__free_pages(sg_page(&STbuffer->sg[i]), order);
		STbuffer->buffer_size -= STbuffer->sg[i].length;
	}
#if DEBUG
	if (debugging && STbuffer->orig_sg_segs < STbuffer->sg_segs)
		printk(OSST_DEB_MSG "osst :D: Buffer at %p normalized to %d bytes (segs %d).\n",
			     STbuffer->b_data, STbuffer->buffer_size, STbuffer->sg_segs);
#endif
	STbuffer->sg_segs = STbuffer->orig_sg_segs = 0;
}


/* Move data from the user buffer to the tape buffer. Returns zero (success) or
   negative error code.
*/ 5438 - static int append_to_buffer(const char __user *ubp, struct osst_buffer *st_bp, int do_count) 5439 - { 5440 - int i, cnt, res, offset; 5441 - 5442 - for (i=0, offset=st_bp->buffer_bytes; 5443 - i < st_bp->sg_segs && offset >= st_bp->sg[i].length; i++) 5444 - offset -= st_bp->sg[i].length; 5445 - if (i == st_bp->sg_segs) { /* Should never happen */ 5446 - printk(KERN_WARNING "osst :A: Append_to_buffer offset overflow.\n"); 5447 - return (-EIO); 5448 - } 5449 - for ( ; i < st_bp->sg_segs && do_count > 0; i++) { 5450 - cnt = st_bp->sg[i].length - offset < do_count ? 5451 - st_bp->sg[i].length - offset : do_count; 5452 - res = copy_from_user(page_address(sg_page(&st_bp->sg[i])) + offset, ubp, cnt); 5453 - if (res) 5454 - return (-EFAULT); 5455 - do_count -= cnt; 5456 - st_bp->buffer_bytes += cnt; 5457 - ubp += cnt; 5458 - offset = 0; 5459 - } 5460 - if (do_count) { /* Should never happen */ 5461 - printk(KERN_WARNING "osst :A: Append_to_buffer overflow (left %d).\n", 5462 - do_count); 5463 - return (-EIO); 5464 - } 5465 - return 0; 5466 - } 5467 - 5468 - 5469 - /* Move data from the tape buffer to the user buffer. Returns zero (success) or 5470 - negative error code. */ 5471 - static int from_buffer(struct osst_buffer *st_bp, char __user *ubp, int do_count) 5472 - { 5473 - int i, cnt, res, offset; 5474 - 5475 - for (i=0, offset=st_bp->read_pointer; 5476 - i < st_bp->sg_segs && offset >= st_bp->sg[i].length; i++) 5477 - offset -= st_bp->sg[i].length; 5478 - if (i == st_bp->sg_segs) { /* Should never happen */ 5479 - printk(KERN_WARNING "osst :A: From_buffer offset overflow.\n"); 5480 - return (-EIO); 5481 - } 5482 - for ( ; i < st_bp->sg_segs && do_count > 0; i++) { 5483 - cnt = st_bp->sg[i].length - offset < do_count ? 
5484 - st_bp->sg[i].length - offset : do_count; 5485 - res = copy_to_user(ubp, page_address(sg_page(&st_bp->sg[i])) + offset, cnt); 5486 - if (res) 5487 - return (-EFAULT); 5488 - do_count -= cnt; 5489 - st_bp->buffer_bytes -= cnt; 5490 - st_bp->read_pointer += cnt; 5491 - ubp += cnt; 5492 - offset = 0; 5493 - } 5494 - if (do_count) { /* Should never happen */ 5495 - printk(KERN_WARNING "osst :A: From_buffer overflow (left %d).\n", do_count); 5496 - return (-EIO); 5497 - } 5498 - return 0; 5499 - } 5500 - 5501 - /* Sets the tail of the buffer after fill point to zero. 5502 - Returns zero (success) or negative error code. */ 5503 - static int osst_zero_buffer_tail(struct osst_buffer *st_bp) 5504 - { 5505 - int i, offset, do_count, cnt; 5506 - 5507 - for (i = 0, offset = st_bp->buffer_bytes; 5508 - i < st_bp->sg_segs && offset >= st_bp->sg[i].length; i++) 5509 - offset -= st_bp->sg[i].length; 5510 - if (i == st_bp->sg_segs) { /* Should never happen */ 5511 - printk(KERN_WARNING "osst :A: Zero_buffer offset overflow.\n"); 5512 - return (-EIO); 5513 - } 5514 - for (do_count = OS_DATA_SIZE - st_bp->buffer_bytes; 5515 - i < st_bp->sg_segs && do_count > 0; i++) { 5516 - cnt = st_bp->sg[i].length - offset < do_count ? 5517 - st_bp->sg[i].length - offset : do_count ; 5518 - memset(page_address(sg_page(&st_bp->sg[i])) + offset, 0, cnt); 5519 - do_count -= cnt; 5520 - offset = 0; 5521 - } 5522 - if (do_count) { /* Should never happen */ 5523 - printk(KERN_WARNING "osst :A: Zero_buffer overflow (left %d).\n", do_count); 5524 - return (-EIO); 5525 - } 5526 - return 0; 5527 - } 5528 - 5529 - /* Copy a osst 32K chunk of memory into the buffer. 5530 - Returns zero (success) or negative error code. */ 5531 - static int osst_copy_to_buffer(struct osst_buffer *st_bp, unsigned char *ptr) 5532 - { 5533 - int i, cnt, do_count = OS_DATA_SIZE; 5534 - 5535 - for (i = 0; i < st_bp->sg_segs && do_count > 0; i++) { 5536 - cnt = st_bp->sg[i].length < do_count ? 
5537 - st_bp->sg[i].length : do_count ; 5538 - memcpy(page_address(sg_page(&st_bp->sg[i])), ptr, cnt); 5539 - do_count -= cnt; 5540 - ptr += cnt; 5541 - } 5542 - if (do_count || i != st_bp->sg_segs-1) { /* Should never happen */ 5543 - printk(KERN_WARNING "osst :A: Copy_to_buffer overflow (left %d at sg %d).\n", 5544 - do_count, i); 5545 - return (-EIO); 5546 - } 5547 - return 0; 5548 - } 5549 - 5550 - /* Copy a osst 32K chunk of memory from the buffer. 5551 - Returns zero (success) or negative error code. */ 5552 - static int osst_copy_from_buffer(struct osst_buffer *st_bp, unsigned char *ptr) 5553 - { 5554 - int i, cnt, do_count = OS_DATA_SIZE; 5555 - 5556 - for (i = 0; i < st_bp->sg_segs && do_count > 0; i++) { 5557 - cnt = st_bp->sg[i].length < do_count ? 5558 - st_bp->sg[i].length : do_count ; 5559 - memcpy(ptr, page_address(sg_page(&st_bp->sg[i])), cnt); 5560 - do_count -= cnt; 5561 - ptr += cnt; 5562 - } 5563 - if (do_count || i != st_bp->sg_segs-1) { /* Should never happen */ 5564 - printk(KERN_WARNING "osst :A: Copy_from_buffer overflow (left %d at sg %d).\n", 5565 - do_count, i); 5566 - return (-EIO); 5567 - } 5568 - return 0; 5569 - } 5570 - 5571 - 5572 - /* Module housekeeping */ 5573 - 5574 - static void validate_options (void) 5575 - { 5576 - if (max_dev > 0) 5577 - osst_max_dev = max_dev; 5578 - if (write_threshold_kbs > 0) 5579 - osst_write_threshold = write_threshold_kbs * ST_KILOBYTE; 5580 - if (osst_write_threshold > osst_buffer_size) 5581 - osst_write_threshold = osst_buffer_size; 5582 - if (max_sg_segs >= OSST_FIRST_SG) 5583 - osst_max_sg_segs = max_sg_segs; 5584 - #if DEBUG 5585 - printk(OSST_DEB_MSG "osst :D: max tapes %d, write threshold %d, max s/g segs %d.\n", 5586 - osst_max_dev, osst_write_threshold, osst_max_sg_segs); 5587 - #endif 5588 - } 5589 - 5590 - #ifndef MODULE 5591 - /* Set the boot options. Syntax: osst=xxx,yyy,... 5592 - where xxx is write threshold in 1024 byte blocks, 5593 - and yyy is number of s/g segments to use. 
*/ 5594 - static int __init osst_setup (char *str) 5595 - { 5596 - int i, ints[5]; 5597 - char *stp; 5598 - 5599 - stp = get_options(str, ARRAY_SIZE(ints), ints); 5600 - 5601 - if (ints[0] > 0) { 5602 - for (i = 0; i < ints[0] && i < ARRAY_SIZE(parms); i++) 5603 - *parms[i].val = ints[i + 1]; 5604 - } else { 5605 - while (stp != NULL) { 5606 - for (i = 0; i < ARRAY_SIZE(parms); i++) { 5607 - int len = strlen(parms[i].name); 5608 - if (!strncmp(stp, parms[i].name, len) && 5609 - (*(stp + len) == ':' || *(stp + len) == '=')) { 5610 - *parms[i].val = 5611 - simple_strtoul(stp + len + 1, NULL, 0); 5612 - break; 5613 - } 5614 - } 5615 - if (i >= ARRAY_SIZE(parms)) 5616 - printk(KERN_INFO "osst :I: Illegal parameter in '%s'\n", 5617 - stp); 5618 - stp = strchr(stp, ','); 5619 - if (stp) 5620 - stp++; 5621 - } 5622 - } 5623 - 5624 - return 1; 5625 - } 5626 - 5627 - __setup("osst=", osst_setup); 5628 - 5629 - #endif 5630 - 5631 - static const struct file_operations osst_fops = { 5632 - .owner = THIS_MODULE, 5633 - .read = osst_read, 5634 - .write = osst_write, 5635 - .unlocked_ioctl = osst_ioctl, 5636 - #ifdef CONFIG_COMPAT 5637 - .compat_ioctl = osst_compat_ioctl, 5638 - #endif 5639 - .open = os_scsi_tape_open, 5640 - .flush = os_scsi_tape_flush, 5641 - .release = os_scsi_tape_close, 5642 - .llseek = noop_llseek, 5643 - }; 5644 - 5645 - static int osst_supports(struct scsi_device * SDp) 5646 - { 5647 - struct osst_support_data { 5648 - char *vendor; 5649 - char *model; 5650 - char *rev; 5651 - char *driver_hint; /* Name of the correct driver, NULL if unknown */ 5652 - }; 5653 - 5654 - static struct osst_support_data support_list[] = { 5655 - /* {"XXX", "Yy-", "", NULL}, example */ 5656 - SIGS_FROM_OSST, 5657 - {NULL, }}; 5658 - 5659 - struct osst_support_data *rp; 5660 - 5661 - /* We are willing to drive OnStream SC-x0 as well as the 5662 - * * IDE, ParPort, FireWire, USB variants, if accessible by 5663 - * * emulation layer (ide-scsi, usb-storage, ...) 
*/ 5664 - 5665 - for (rp=&(support_list[0]); rp->vendor != NULL; rp++) 5666 - if (!strncmp(rp->vendor, SDp->vendor, strlen(rp->vendor)) && 5667 - !strncmp(rp->model, SDp->model, strlen(rp->model)) && 5668 - !strncmp(rp->rev, SDp->rev, strlen(rp->rev))) 5669 - return 1; 5670 - return 0; 5671 - } 5672 - 5673 - /* 5674 - * sysfs support for osst driver parameter information 5675 - */ 5676 - 5677 - static ssize_t version_show(struct device_driver *ddd, char *buf) 5678 - { 5679 - return snprintf(buf, PAGE_SIZE, "%s\n", osst_version); 5680 - } 5681 - 5682 - static DRIVER_ATTR_RO(version); 5683 - 5684 - static int osst_create_sysfs_files(struct device_driver *sysfs) 5685 - { 5686 - return driver_create_file(sysfs, &driver_attr_version); 5687 - } 5688 - 5689 - static void osst_remove_sysfs_files(struct device_driver *sysfs) 5690 - { 5691 - driver_remove_file(sysfs, &driver_attr_version); 5692 - } 5693 - 5694 - /* 5695 - * sysfs support for accessing ADR header information 5696 - */ 5697 - 5698 - static ssize_t osst_adr_rev_show(struct device *dev, 5699 - struct device_attribute *attr, char *buf) 5700 - { 5701 - struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev); 5702 - ssize_t l = 0; 5703 - 5704 - if (STp && STp->header_ok && STp->linux_media) 5705 - l = snprintf(buf, PAGE_SIZE, "%d.%d\n", STp->header_cache->major_rev, STp->header_cache->minor_rev); 5706 - return l; 5707 - } 5708 - 5709 - DEVICE_ATTR(ADR_rev, S_IRUGO, osst_adr_rev_show, NULL); 5710 - 5711 - static ssize_t osst_linux_media_version_show(struct device *dev, 5712 - struct device_attribute *attr, 5713 - char *buf) 5714 - { 5715 - struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev); 5716 - ssize_t l = 0; 5717 - 5718 - if (STp && STp->header_ok && STp->linux_media) 5719 - l = snprintf(buf, PAGE_SIZE, "LIN%d\n", STp->linux_media_version); 5720 - return l; 5721 - } 5722 - 5723 - DEVICE_ATTR(media_version, S_IRUGO, osst_linux_media_version_show, NULL); 5724 - 5725 - static ssize_t 
osst_capacity_show(struct device *dev, 5726 - struct device_attribute *attr, char *buf) 5727 - { 5728 - struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev); 5729 - ssize_t l = 0; 5730 - 5731 - if (STp && STp->header_ok && STp->linux_media) 5732 - l = snprintf(buf, PAGE_SIZE, "%d\n", STp->capacity); 5733 - return l; 5734 - } 5735 - 5736 - DEVICE_ATTR(capacity, S_IRUGO, osst_capacity_show, NULL); 5737 - 5738 - static ssize_t osst_first_data_ppos_show(struct device *dev, 5739 - struct device_attribute *attr, 5740 - char *buf) 5741 - { 5742 - struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev); 5743 - ssize_t l = 0; 5744 - 5745 - if (STp && STp->header_ok && STp->linux_media) 5746 - l = snprintf(buf, PAGE_SIZE, "%d\n", STp->first_data_ppos); 5747 - return l; 5748 - } 5749 - 5750 - DEVICE_ATTR(BOT_frame, S_IRUGO, osst_first_data_ppos_show, NULL); 5751 - 5752 - static ssize_t osst_eod_frame_ppos_show(struct device *dev, 5753 - struct device_attribute *attr, 5754 - char *buf) 5755 - { 5756 - struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev); 5757 - ssize_t l = 0; 5758 - 5759 - if (STp && STp->header_ok && STp->linux_media) 5760 - l = snprintf(buf, PAGE_SIZE, "%d\n", STp->eod_frame_ppos); 5761 - return l; 5762 - } 5763 - 5764 - DEVICE_ATTR(EOD_frame, S_IRUGO, osst_eod_frame_ppos_show, NULL); 5765 - 5766 - static ssize_t osst_filemark_cnt_show(struct device *dev, 5767 - struct device_attribute *attr, char *buf) 5768 - { 5769 - struct osst_tape * STp = (struct osst_tape *) dev_get_drvdata (dev); 5770 - ssize_t l = 0; 5771 - 5772 - if (STp && STp->header_ok && STp->linux_media) 5773 - l = snprintf(buf, PAGE_SIZE, "%d\n", STp->filemark_cnt); 5774 - return l; 5775 - } 5776 - 5777 - DEVICE_ATTR(file_count, S_IRUGO, osst_filemark_cnt_show, NULL); 5778 - 5779 - static struct class *osst_sysfs_class; 5780 - 5781 - static int osst_sysfs_init(void) 5782 - { 5783 - osst_sysfs_class = class_create(THIS_MODULE, "onstream_tape"); 5784 - 
if (IS_ERR(osst_sysfs_class)) { 5785 - printk(KERN_ERR "osst :W: Unable to register sysfs class\n"); 5786 - return PTR_ERR(osst_sysfs_class); 5787 - } 5788 - 5789 - return 0; 5790 - } 5791 - 5792 - static void osst_sysfs_destroy(dev_t dev) 5793 - { 5794 - device_destroy(osst_sysfs_class, dev); 5795 - } 5796 - 5797 - static int osst_sysfs_add(dev_t dev, struct device *device, struct osst_tape * STp, char * name) 5798 - { 5799 - struct device *osst_member; 5800 - int err; 5801 - 5802 - osst_member = device_create(osst_sysfs_class, device, dev, STp, 5803 - "%s", name); 5804 - if (IS_ERR(osst_member)) { 5805 - printk(KERN_WARNING "osst :W: Unable to add sysfs class member %s\n", name); 5806 - return PTR_ERR(osst_member); 5807 - } 5808 - 5809 - err = device_create_file(osst_member, &dev_attr_ADR_rev); 5810 - if (err) 5811 - goto err_out; 5812 - err = device_create_file(osst_member, &dev_attr_media_version); 5813 - if (err) 5814 - goto err_out; 5815 - err = device_create_file(osst_member, &dev_attr_capacity); 5816 - if (err) 5817 - goto err_out; 5818 - err = device_create_file(osst_member, &dev_attr_BOT_frame); 5819 - if (err) 5820 - goto err_out; 5821 - err = device_create_file(osst_member, &dev_attr_EOD_frame); 5822 - if (err) 5823 - goto err_out; 5824 - err = device_create_file(osst_member, &dev_attr_file_count); 5825 - if (err) 5826 - goto err_out; 5827 - 5828 - return 0; 5829 - 5830 - err_out: 5831 - osst_sysfs_destroy(dev); 5832 - return err; 5833 - } 5834 - 5835 - static void osst_sysfs_cleanup(void) 5836 - { 5837 - class_destroy(osst_sysfs_class); 5838 - } 5839 - 5840 - /* 5841 - * osst startup / cleanup code 5842 - */ 5843 - 5844 - static int osst_probe(struct device *dev) 5845 - { 5846 - struct scsi_device * SDp = to_scsi_device(dev); 5847 - struct osst_tape * tpnt; 5848 - struct st_modedef * STm; 5849 - struct st_partstat * STps; 5850 - struct osst_buffer * buffer; 5851 - struct gendisk * drive; 5852 - int i, dev_num, err = -ENODEV; 5853 - 5854 - if (SDp->type 
!= TYPE_TAPE || !osst_supports(SDp)) 5855 - return -ENODEV; 5856 - 5857 - drive = alloc_disk(1); 5858 - if (!drive) { 5859 - printk(KERN_ERR "osst :E: Out of memory. Device not attached.\n"); 5860 - return -ENODEV; 5861 - } 5862 - 5863 - /* if this is the first attach, build the infrastructure */ 5864 - write_lock(&os_scsi_tapes_lock); 5865 - if (os_scsi_tapes == NULL) { 5866 - os_scsi_tapes = kmalloc_array(osst_max_dev, 5867 - sizeof(struct osst_tape *), 5868 - GFP_ATOMIC); 5869 - if (os_scsi_tapes == NULL) { 5870 - write_unlock(&os_scsi_tapes_lock); 5871 - printk(KERN_ERR "osst :E: Unable to allocate array for OnStream SCSI tapes.\n"); 5872 - goto out_put_disk; 5873 - } 5874 - for (i=0; i < osst_max_dev; ++i) os_scsi_tapes[i] = NULL; 5875 - } 5876 - 5877 - if (osst_nr_dev >= osst_max_dev) { 5878 - write_unlock(&os_scsi_tapes_lock); 5879 - printk(KERN_ERR "osst :E: Too many tape devices (max. %d).\n", osst_max_dev); 5880 - goto out_put_disk; 5881 - } 5882 - 5883 - /* find a free minor number */ 5884 - for (i = 0; i < osst_max_dev && os_scsi_tapes[i]; i++) 5885 - ; 5886 - if(i >= osst_max_dev) panic ("Scsi_devices corrupt (osst)"); 5887 - dev_num = i; 5888 - 5889 - /* allocate a struct osst_tape for this device */ 5890 - tpnt = kzalloc(sizeof(struct osst_tape), GFP_ATOMIC); 5891 - if (!tpnt) { 5892 - write_unlock(&os_scsi_tapes_lock); 5893 - printk(KERN_ERR "osst :E: Can't allocate device descriptor, device not attached.\n"); 5894 - goto out_put_disk; 5895 - } 5896 - 5897 - /* allocate a buffer for this device */ 5898 - i = SDp->host->sg_tablesize; 5899 - if (osst_max_sg_segs < i) 5900 - i = osst_max_sg_segs; 5901 - buffer = new_tape_buffer(1, SDp->host->unchecked_isa_dma, i); 5902 - if (buffer == NULL) { 5903 - write_unlock(&os_scsi_tapes_lock); 5904 - printk(KERN_ERR "osst :E: Unable to allocate a tape buffer, device not attached.\n"); 5905 - kfree(tpnt); 5906 - goto out_put_disk; 5907 - } 5908 - os_scsi_tapes[dev_num] = tpnt; 5909 - tpnt->buffer = buffer; 5910 - 
tpnt->device = SDp; 5911 - drive->private_data = &tpnt->driver; 5912 - sprintf(drive->disk_name, "osst%d", dev_num); 5913 - tpnt->driver = &osst_template; 5914 - tpnt->drive = drive; 5915 - tpnt->in_use = 0; 5916 - tpnt->capacity = 0xfffff; 5917 - tpnt->dirty = 0; 5918 - tpnt->drv_buffer = 1; /* Try buffering if no mode sense */ 5919 - tpnt->restr_dma = (SDp->host)->unchecked_isa_dma; 5920 - tpnt->density = 0; 5921 - tpnt->do_auto_lock = OSST_AUTO_LOCK; 5922 - tpnt->can_bsr = OSST_IN_FILE_POS; 5923 - tpnt->can_partitions = 0; 5924 - tpnt->two_fm = OSST_TWO_FM; 5925 - tpnt->fast_mteom = OSST_FAST_MTEOM; 5926 - tpnt->scsi2_logical = OSST_SCSI2LOGICAL; /* FIXME */ 5927 - tpnt->write_threshold = osst_write_threshold; 5928 - tpnt->default_drvbuffer = 0xff; /* No forced buffering */ 5929 - tpnt->partition = 0; 5930 - tpnt->new_partition = 0; 5931 - tpnt->nbr_partitions = 0; 5932 - tpnt->min_block = 512; 5933 - tpnt->max_block = OS_DATA_SIZE; 5934 - tpnt->timeout = OSST_TIMEOUT; 5935 - tpnt->long_timeout = OSST_LONG_TIMEOUT; 5936 - 5937 - /* Recognize OnStream tapes */ 5938 - /* We don't need to test for OnStream, as this has been done in detect () */ 5939 - tpnt->os_fw_rev = osst_parse_firmware_rev (SDp->rev); 5940 - tpnt->omit_blklims = 1; 5941 - 5942 - tpnt->poll = (strncmp(SDp->model, "DI-", 3) == 0) || 5943 - (strncmp(SDp->model, "FW-", 3) == 0) || OSST_FW_NEED_POLL(tpnt->os_fw_rev,SDp); 5944 - tpnt->frame_in_buffer = 0; 5945 - tpnt->header_ok = 0; 5946 - tpnt->linux_media = 0; 5947 - tpnt->header_cache = NULL; 5948 - 5949 - for (i=0; i < ST_NBR_MODES; i++) { 5950 - STm = &(tpnt->modes[i]); 5951 - STm->defined = 0; 5952 - STm->sysv = OSST_SYSV; 5953 - STm->defaults_for_writes = 0; 5954 - STm->do_async_writes = OSST_ASYNC_WRITES; 5955 - STm->do_buffer_writes = OSST_BUFFER_WRITES; 5956 - STm->do_read_ahead = OSST_READ_AHEAD; 5957 - STm->default_compression = ST_DONT_TOUCH; 5958 - STm->default_blksize = 512; 5959 - STm->default_density = (-1); /* No forced density */ 
5960 - } 5961 - 5962 - for (i=0; i < ST_NBR_PARTITIONS; i++) { 5963 - STps = &(tpnt->ps[i]); 5964 - STps->rw = ST_IDLE; 5965 - STps->eof = ST_NOEOF; 5966 - STps->at_sm = 0; 5967 - STps->last_block_valid = 0; 5968 - STps->drv_block = (-1); 5969 - STps->drv_file = (-1); 5970 - } 5971 - 5972 - tpnt->current_mode = 0; 5973 - tpnt->modes[0].defined = 1; 5974 - tpnt->modes[2].defined = 1; 5975 - tpnt->density_changed = tpnt->compression_changed = tpnt->blksize_changed = 0; 5976 - 5977 - mutex_init(&tpnt->lock); 5978 - osst_nr_dev++; 5979 - write_unlock(&os_scsi_tapes_lock); 5980 - 5981 - { 5982 - char name[8]; 5983 - 5984 - /* Rewind entry */ 5985 - err = osst_sysfs_add(MKDEV(OSST_MAJOR, dev_num), dev, tpnt, tape_name(tpnt)); 5986 - if (err) 5987 - goto out_free_buffer; 5988 - 5989 - /* No-rewind entry */ 5990 - snprintf(name, 8, "%s%s", "n", tape_name(tpnt)); 5991 - err = osst_sysfs_add(MKDEV(OSST_MAJOR, dev_num + 128), dev, tpnt, name); 5992 - if (err) 5993 - goto out_free_sysfs1; 5994 - } 5995 - 5996 - sdev_printk(KERN_INFO, SDp, 5997 - "osst :I: Attached OnStream %.5s tape as %s\n", 5998 - SDp->model, tape_name(tpnt)); 5999 - 6000 - return 0; 6001 - 6002 - out_free_sysfs1: 6003 - osst_sysfs_destroy(MKDEV(OSST_MAJOR, dev_num)); 6004 - out_free_buffer: 6005 - kfree(buffer); 6006 - out_put_disk: 6007 - put_disk(drive); 6008 - return err; 6009 - }; 6010 - 6011 - static int osst_remove(struct device *dev) 6012 - { 6013 - struct scsi_device * SDp = to_scsi_device(dev); 6014 - struct osst_tape * tpnt; 6015 - int i; 6016 - 6017 - if ((SDp->type != TYPE_TAPE) || (osst_nr_dev <= 0)) 6018 - return 0; 6019 - 6020 - write_lock(&os_scsi_tapes_lock); 6021 - for(i=0; i < osst_max_dev; i++) { 6022 - if((tpnt = os_scsi_tapes[i]) && (tpnt->device == SDp)) { 6023 - osst_sysfs_destroy(MKDEV(OSST_MAJOR, i)); 6024 - osst_sysfs_destroy(MKDEV(OSST_MAJOR, i+128)); 6025 - tpnt->device = NULL; 6026 - put_disk(tpnt->drive); 6027 - os_scsi_tapes[i] = NULL; 6028 - osst_nr_dev--; 6029 - 
write_unlock(&os_scsi_tapes_lock); 6030 - vfree(tpnt->header_cache); 6031 - if (tpnt->buffer) { 6032 - normalize_buffer(tpnt->buffer); 6033 - kfree(tpnt->buffer); 6034 - } 6035 - kfree(tpnt); 6036 - return 0; 6037 - } 6038 - } 6039 - write_unlock(&os_scsi_tapes_lock); 6040 - return 0; 6041 - } 6042 - 6043 - static int __init init_osst(void) 6044 - { 6045 - int err; 6046 - 6047 - printk(KERN_INFO "osst :I: Tape driver with OnStream support version %s\nosst :I: %s\n", osst_version, cvsid); 6048 - 6049 - validate_options(); 6050 - 6051 - err = osst_sysfs_init(); 6052 - if (err) 6053 - return err; 6054 - 6055 - err = register_chrdev(OSST_MAJOR, "osst", &osst_fops); 6056 - if (err < 0) { 6057 - printk(KERN_ERR "osst :E: Unable to register major %d for OnStream tapes\n", OSST_MAJOR); 6058 - goto err_out; 6059 - } 6060 - 6061 - err = scsi_register_driver(&osst_template.gendrv); 6062 - if (err) 6063 - goto err_out_chrdev; 6064 - 6065 - err = osst_create_sysfs_files(&osst_template.gendrv); 6066 - if (err) 6067 - goto err_out_scsidrv; 6068 - 6069 - return 0; 6070 - 6071 - err_out_scsidrv: 6072 - scsi_unregister_driver(&osst_template.gendrv); 6073 - err_out_chrdev: 6074 - unregister_chrdev(OSST_MAJOR, "osst"); 6075 - err_out: 6076 - osst_sysfs_cleanup(); 6077 - return err; 6078 - } 6079 - 6080 - static void __exit exit_osst (void) 6081 - { 6082 - int i; 6083 - struct osst_tape * STp; 6084 - 6085 - osst_remove_sysfs_files(&osst_template.gendrv); 6086 - scsi_unregister_driver(&osst_template.gendrv); 6087 - unregister_chrdev(OSST_MAJOR, "osst"); 6088 - osst_sysfs_cleanup(); 6089 - 6090 - if (os_scsi_tapes) { 6091 - for (i=0; i < osst_max_dev; ++i) { 6092 - if (!(STp = os_scsi_tapes[i])) continue; 6093 - /* This is defensive, supposed to happen during detach */ 6094 - vfree(STp->header_cache); 6095 - if (STp->buffer) { 6096 - normalize_buffer(STp->buffer); 6097 - kfree(STp->buffer); 6098 - } 6099 - put_disk(STp->drive); 6100 - kfree(STp); 6101 - } 6102 - kfree(os_scsi_tapes); 
6103 - } 6104 - printk(KERN_INFO "osst :I: Unloaded.\n"); 6105 - } 6106 - 6107 - module_init(init_osst); 6108 - module_exit(exit_osst);
drivers/scsi/osst.h (651 lines removed)
/* SPDX-License-Identifier: GPL-2.0 */
/*
 *	$Header: /cvsroot/osst/Driver/osst.h,v 1.16 2005/01/01 21:13:35 wriede Exp $
 */

#include <asm/byteorder.h>
#include <linux/completion.h>
#include <linux/mutex.h>

/*	FIXME - rename and use the following two types or delete them!
 *	and the types really should go to st.h anyway...
 *	INQUIRY packet command - Data Format (From Table 6-8 of QIC-157C)
 */
typedef struct {
	unsigned	device_type	:5;	/* Peripheral Device Type */
	unsigned	reserved0_765	:3;	/* Peripheral Qualifier - Reserved */
	unsigned	reserved1_6t0	:7;	/* Reserved */
	unsigned	rmb		:1;	/* Removable Medium Bit */
	unsigned	ansi_version	:3;	/* ANSI Version */
	unsigned	ecma_version	:3;	/* ECMA Version */
	unsigned	iso_version	:2;	/* ISO Version */
	unsigned	response_format :4;	/* Response Data Format */
	unsigned	reserved3_45	:2;	/* Reserved */
	unsigned	reserved3_6	:1;	/* TrmIOP - Reserved */
	unsigned	reserved3_7	:1;	/* AENC - Reserved */
	u8		additional_length;	/* Additional Length (total_length-4) */
	u8		rsv5, rsv6, rsv7;	/* Reserved */
	u8		vendor_id[8];		/* Vendor Identification */
	u8		product_id[16];		/* Product Identification */
	u8		revision_level[4];	/* Revision Level */
	u8		vendor_specific[20];	/* Vendor Specific - Optional */
	u8		reserved56t95[40];	/* Reserved - Optional */
						/* Additional information may be returned */
} idetape_inquiry_result_t;

/*
 *	READ POSITION packet command - Data Format (From Table 6-57)
 */
typedef struct {
	unsigned	reserved0_10	:2;	/* Reserved */
	unsigned	bpu		:1;	/* Block Position Unknown */
	unsigned	reserved0_543	:3;	/* Reserved */
	unsigned	eop		:1;	/* End Of Partition */
	unsigned	bop		:1;	/* Beginning Of Partition */
	u8		partition;		/* Partition Number */
	u8		reserved2, reserved3;	/* Reserved */
	u32		first_block;		/* First Block Location */
	u32		last_block;		/* Last Block Location (Optional) */
	u8		reserved12;		/* Reserved */
	u8		blocks_in_buffer[3];	/* Blocks In Buffer - (Optional) */
	u32		bytes_in_buffer;	/* Bytes In Buffer (Optional) */
} idetape_read_position_result_t;

/*
 *	Follows structures which are related to the SELECT SENSE / MODE SENSE
 *	packet commands.
 */
#define COMPRESSION_PAGE           0x0f
#define COMPRESSION_PAGE_LENGTH    16

#define CAPABILITIES_PAGE          0x2a
#define CAPABILITIES_PAGE_LENGTH   20

#define TAPE_PARAMTR_PAGE          0x2b
#define TAPE_PARAMTR_PAGE_LENGTH   16

#define NUMBER_RETRIES_PAGE        0x2f
#define NUMBER_RETRIES_PAGE_LENGTH 4

#define BLOCK_SIZE_PAGE            0x30
#define BLOCK_SIZE_PAGE_LENGTH     4

#define BUFFER_FILLING_PAGE        0x33
#define BUFFER_FILLING_PAGE_LENGTH 4

#define VENDOR_IDENT_PAGE          0x36
#define VENDOR_IDENT_PAGE_LENGTH   8

#define LOCATE_STATUS_PAGE         0x37
#define LOCATE_STATUS_PAGE_LENGTH  0

#define MODE_HEADER_LENGTH         4


/*
 *	REQUEST SENSE packet command result - Data Format.
 */
typedef struct {
	unsigned	error_code	:7;	/* Current of deferred errors */
	unsigned	valid		:1;	/* The information field conforms to QIC-157C */
	u8		reserved1	:8;	/* Segment Number - Reserved */
	unsigned	sense_key	:4;	/* Sense Key */
	unsigned	reserved2_4	:1;	/* Reserved */
	unsigned	ili		:1;	/* Incorrect Length Indicator */
	unsigned	eom		:1;	/* End Of Medium */
	unsigned	filemark	:1;	/* Filemark */
	u32		information __attribute__ ((packed));
	u8		asl;			/* Additional sense length (n-7) */
	u32		command_specific;	/* Additional command specific information */
	u8		asc;			/* Additional Sense Code */
	u8		ascq;			/* Additional Sense Code Qualifier */
	u8		replaceable_unit_code;	/* Field Replaceable Unit Code */
	unsigned	sk_specific1	:7;	/* Sense Key Specific */
	unsigned	sksv		:1;	/* Sense Key Specific information is valid */
	u8		sk_specific2;		/* Sense Key Specific */
	u8		sk_specific3;		/* Sense Key Specific */
	u8		pad[2];			/* Padding to 20 bytes */
} idetape_request_sense_result_t;

/*
 *	Mode Parameter Header for the MODE SENSE packet command
 */
typedef struct {
	u8	mode_data_length;	/* Length of the following data transfer */
	u8	medium_type;		/* Medium Type */
	u8	dsp;			/* Device Specific Parameter */
	u8	bdl;			/* Block Descriptor Length */
} osst_mode_parameter_header_t;

/*
 *	Mode Parameter Block Descriptor the MODE SENSE packet command
 *
 *	Support for block descriptors is optional.
 */
typedef struct {
	u8	density_code;		/* Medium density code */
	u8	blocks[3];		/* Number of blocks */
	u8	reserved4;		/* Reserved */
	u8	length[3];		/* Block Length */
} osst_parameter_block_descriptor_t;

/*
 *	The Data Compression Page, as returned by the MODE SENSE packet command.
 */
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	ps		:1;
	unsigned	reserved0	:1;	/* Reserved */
	unsigned	page_code	:6;	/* Page Code - Should be 0xf */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	page_code	:6;	/* Page Code - Should be 0xf */
	unsigned	reserved0	:1;	/* Reserved */
	unsigned	ps		:1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
	u8		page_length;		/* Page Length - Should be 14 */
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	dce		:1;	/* Data Compression Enable */
	unsigned	dcc		:1;	/* Data Compression Capable */
	unsigned	reserved2	:6;	/* Reserved */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	reserved2	:6;	/* Reserved */
	unsigned	dcc		:1;	/* Data Compression Capable */
	unsigned	dce		:1;	/* Data Compression Enable */
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	dde		:1;	/* Data Decompression Enable */
	unsigned	red		:2;	/* Report Exception on Decompression */
	unsigned	reserved3	:5;	/* Reserved */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	reserved3	:5;	/* Reserved */
	unsigned	red		:2;	/* Report Exception on Decompression */
	unsigned	dde		:1;	/* Data Decompression Enable */
#else
#error "Please fix <asm/byteorder.h>"
#endif
	u32		ca;			/* Compression Algorithm */
	u32		da;			/* Decompression Algorithm */
	u8		reserved[4];		/* Reserved */
} osst_data_compression_page_t;

/*
 *	The Medium Partition Page, as returned by the MODE SENSE packet command.
 */
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	ps		:1;
	unsigned	reserved1_6	:1;	/* Reserved */
	unsigned	page_code	:6;	/* Page Code - Should be 0x11 */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	page_code	:6;	/* Page Code - Should be 0x11 */
	unsigned	reserved1_6	:1;	/* Reserved */
	unsigned	ps		:1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
	u8		page_length;		/* Page Length - Should be 6 */
	u8		map;			/* Maximum Additional Partitions - Should be 0 */
	u8		apd;			/* Additional Partitions Defined - Should be 0 */
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	fdp		:1;	/* Fixed Data Partitions */
	unsigned	sdp		:1;	/* Should be 0 */
	unsigned	idp		:1;	/* Should be 0 */
	unsigned	psum		:2;	/* Should be 0 */
	unsigned	reserved4_012	:3;	/* Reserved */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	reserved4_012	:3;	/* Reserved */
	unsigned	psum		:2;	/* Should be 0 */
	unsigned	idp		:1;	/* Should be 0 */
	unsigned	sdp		:1;	/* Should be 0 */
	unsigned	fdp		:1;	/* Fixed Data Partitions */
#else
#error "Please fix <asm/byteorder.h>"
#endif
	u8		mfr;			/* Medium Format Recognition */
	u8		reserved[2];		/* Reserved */
} osst_medium_partition_page_t;

/*
 *	Capabilities and Mechanical Status Page
 */
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	reserved1_67	:2;
	unsigned	page_code	:6;	/* Page code - Should be 0x2a */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	page_code	:6;	/* Page code - Should be 0x2a */
	unsigned	reserved1_67	:2;
#else
#error "Please fix <asm/byteorder.h>"
#endif
	u8		page_length;		/* Page Length - Should be 0x12 */
	u8		reserved2, reserved3;
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	reserved4_67	:2;
	unsigned	sprev		:1;	/* Supports SPACE in the reverse direction */
	unsigned	reserved4_1234	:4;
	unsigned	ro		:1;	/* Read Only Mode */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	ro		:1;	/* Read Only Mode */
	unsigned	reserved4_1234	:4;
	unsigned	sprev		:1;	/* Supports SPACE in the reverse direction */
	unsigned	reserved4_67	:2;
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	reserved5_67	:2;
	unsigned	qfa		:1;	/* Supports the QFA two partition formats */
	unsigned	reserved5_4	:1;
	unsigned	efmt		:1;	/* Supports ERASE command initiated formatting */
	unsigned	reserved5_012	:3;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	reserved5_012	:3;
	unsigned	efmt		:1;	/* Supports ERASE command initiated formatting */
	unsigned	reserved5_4	:1;
	unsigned	qfa		:1;	/* Supports the QFA two partition formats */
	unsigned	reserved5_67	:2;
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	cmprs		:1;	/* Supports data compression */
	unsigned	ecc		:1;	/* Supports error correction */
	unsigned	reserved6_45	:2;	/* Reserved */
	unsigned	eject		:1;	/* The device can eject the volume */
	unsigned	prevent		:1;	/* The device defaults in the prevent state after power up */
	unsigned	locked		:1;	/* The volume is locked */
	unsigned	lock		:1;	/* Supports locking the volume */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	lock		:1;	/* Supports locking the volume */
	unsigned	locked		:1;	/* The volume is locked */
	unsigned	prevent		:1;	/* The device defaults in the prevent state after power up */
	unsigned	eject		:1;	/* The device can eject the volume */
	unsigned	reserved6_45	:2;	/* Reserved */
	unsigned	ecc		:1;	/* Supports error correction */
	unsigned	cmprs		:1;	/* Supports data compression */
#else
#error "Please fix <asm/byteorder.h>"
#endif
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	blk32768	:1;	/* slowb - the device restricts the byte count for PIO */
						/* transfers for slow buffer memory ??? */
						/* Also 32768 block size in some cases */
	unsigned	reserved7_3_6	:4;
	unsigned	blk1024		:1;	/* Supports 1024 bytes block size */
	unsigned	blk512		:1;	/* Supports 512 bytes block size */
	unsigned	reserved7_0	:1;
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	reserved7_0	:1;
	unsigned	blk512		:1;	/* Supports 512 bytes block size */
	unsigned	blk1024		:1;	/* Supports 1024 bytes block size */
	unsigned	reserved7_3_6	:4;
	unsigned	blk32768	:1;	/* slowb - the device restricts the byte count for PIO */
						/* transfers for slow buffer memory ??? */
						/* Also 32768 block size in some cases */
#else
#error "Please fix <asm/byteorder.h>"
#endif
	__be16		max_speed;		/* Maximum speed supported in KBps */
	u8		reserved10, reserved11;
	__be16		ctl;			/* Continuous Transfer Limit in blocks */
	__be16		speed;			/* Current Speed, in KBps */
	__be16		buffer_size;		/* Buffer Size, in 512 bytes */
	u8		reserved18, reserved19;
} osst_capabilities_page_t;

/*
 *	Block Size Page
 */
typedef struct {
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	ps		:1;
	unsigned	reserved1_6	:1;
	unsigned	page_code	:6;	/* Page code - Should be 0x30 */
#elif defined(__LITTLE_ENDIAN_BITFIELD)
	unsigned	page_code	:6;	/* Page code - Should be 0x30 */
	unsigned	reserved1_6	:1;
	unsigned	ps		:1;
#else
#error "Please fix <asm/byteorder.h>"
#endif
	u8		page_length;		/* Page Length - Should be 2 */
	u8		reserved2;
#if defined(__BIG_ENDIAN_BITFIELD)
	unsigned	one		:1;
	unsigned	reserved2_6	:1;
	unsigned	record32_5	:1;
	unsigned	record32	:1;
	unsigned	reserved2_23	:2;
	unsigned
play32_5 :1; 325 - unsigned play32 :1; 326 - #elif defined(__LITTLE_ENDIAN_BITFIELD) 327 - unsigned play32 :1; 328 - unsigned play32_5 :1; 329 - unsigned reserved2_23 :2; 330 - unsigned record32 :1; 331 - unsigned record32_5 :1; 332 - unsigned reserved2_6 :1; 333 - unsigned one :1; 334 - #else 335 - #error "Please fix <asm/byteorder.h>" 336 - #endif 337 - } osst_block_size_page_t; 338 - 339 - /* 340 - * Tape Parameters Page 341 - */ 342 - typedef struct { 343 - #if defined(__BIG_ENDIAN_BITFIELD) 344 - unsigned ps :1; 345 - unsigned reserved1_6 :1; 346 - unsigned page_code :6; /* Page code - Should be 0x2b */ 347 - #elif defined(__LITTLE_ENDIAN_BITFIELD) 348 - unsigned page_code :6; /* Page code - Should be 0x2b */ 349 - unsigned reserved1_6 :1; 350 - unsigned ps :1; 351 - #else 352 - #error "Please fix <asm/byteorder.h>" 353 - #endif 354 - u8 reserved2; 355 - u8 density; 356 - u8 reserved3,reserved4; 357 - __be16 segtrk; 358 - __be16 trks; 359 - u8 reserved5,reserved6,reserved7,reserved8,reserved9,reserved10; 360 - } osst_tape_paramtr_page_t; 361 - 362 - /* OnStream definitions */ 363 - 364 - #define OS_CONFIG_PARTITION (0xff) 365 - #define OS_DATA_PARTITION (0) 366 - #define OS_PARTITION_VERSION (1) 367 - 368 - /* 369 - * partition 370 - */ 371 - typedef struct os_partition_s { 372 - __u8 partition_num; 373 - __u8 par_desc_ver; 374 - __be16 wrt_pass_cntr; 375 - __be32 first_frame_ppos; 376 - __be32 last_frame_ppos; 377 - __be32 eod_frame_ppos; 378 - } os_partition_t; 379 - 380 - /* 381 - * DAT entry 382 - */ 383 - typedef struct os_dat_entry_s { 384 - __be32 blk_sz; 385 - __be16 blk_cnt; 386 - __u8 flags; 387 - __u8 reserved; 388 - } os_dat_entry_t; 389 - 390 - /* 391 - * DAT 392 - */ 393 - #define OS_DAT_FLAGS_DATA (0xc) 394 - #define OS_DAT_FLAGS_MARK (0x1) 395 - 396 - typedef struct os_dat_s { 397 - __u8 dat_sz; 398 - __u8 reserved1; 399 - __u8 entry_cnt; 400 - __u8 reserved3; 401 - os_dat_entry_t dat_list[16]; 402 - } os_dat_t; 403 - 404 - /* 405 - * Frame 
types 406 - */ 407 - #define OS_FRAME_TYPE_FILL (0) 408 - #define OS_FRAME_TYPE_EOD (1 << 0) 409 - #define OS_FRAME_TYPE_MARKER (1 << 1) 410 - #define OS_FRAME_TYPE_HEADER (1 << 3) 411 - #define OS_FRAME_TYPE_DATA (1 << 7) 412 - 413 - /* 414 - * AUX 415 - */ 416 - typedef struct os_aux_s { 417 - __be32 format_id; /* hardware compatibility AUX is based on */ 418 - char application_sig[4]; /* driver used to write this media */ 419 - __be32 hdwr; /* reserved */ 420 - __be32 update_frame_cntr; /* for configuration frame */ 421 - __u8 frame_type; 422 - __u8 frame_type_reserved; 423 - __u8 reserved_18_19[2]; 424 - os_partition_t partition; 425 - __u8 reserved_36_43[8]; 426 - __be32 frame_seq_num; 427 - __be32 logical_blk_num_high; 428 - __be32 logical_blk_num; 429 - os_dat_t dat; 430 - __u8 reserved188_191[4]; 431 - __be32 filemark_cnt; 432 - __be32 phys_fm; 433 - __be32 last_mark_ppos; 434 - __u8 reserved204_223[20]; 435 - 436 - /* 437 - * __u8 app_specific[32]; 438 - * 439 - * Linux specific fields: 440 - */ 441 - __be32 next_mark_ppos; /* when known, points to next marker */ 442 - __be32 last_mark_lbn; /* storing log_blk_num of last mark is extends ADR spec */ 443 - __u8 linux_specific[24]; 444 - 445 - __u8 reserved_256_511[256]; 446 - } os_aux_t; 447 - 448 - #define OS_FM_TAB_MAX 1024 449 - 450 - typedef struct os_fm_tab_s { 451 - __u8 fm_part_num; 452 - __u8 reserved_1; 453 - __u8 fm_tab_ent_sz; 454 - __u8 reserved_3; 455 - __be16 fm_tab_ent_cnt; 456 - __u8 reserved6_15[10]; 457 - __be32 fm_tab_ent[OS_FM_TAB_MAX]; 458 - } os_fm_tab_t; 459 - 460 - typedef struct os_ext_trk_ey_s { 461 - __u8 et_part_num; 462 - __u8 fmt; 463 - __be16 fm_tab_off; 464 - __u8 reserved4_7[4]; 465 - __be32 last_hlb_hi; 466 - __be32 last_hlb; 467 - __be32 last_pp; 468 - __u8 reserved20_31[12]; 469 - } os_ext_trk_ey_t; 470 - 471 - typedef struct os_ext_trk_tb_s { 472 - __u8 nr_stream_part; 473 - __u8 reserved_1; 474 - __u8 et_ent_sz; 475 - __u8 reserved3_15[13]; 476 - os_ext_trk_ey_t 
dat_ext_trk_ey; 477 - os_ext_trk_ey_t qfa_ext_trk_ey; 478 - } os_ext_trk_tb_t; 479 - 480 - typedef struct os_header_s { 481 - char ident_str[8]; 482 - __u8 major_rev; 483 - __u8 minor_rev; 484 - __be16 ext_trk_tb_off; 485 - __u8 reserved12_15[4]; 486 - __u8 pt_par_num; 487 - __u8 pt_reserved1_3[3]; 488 - os_partition_t partition[16]; 489 - __be32 cfg_col_width; 490 - __be32 dat_col_width; 491 - __be32 qfa_col_width; 492 - __u8 cartridge[16]; 493 - __u8 reserved304_511[208]; 494 - __be32 old_filemark_list[16680/4]; /* in ADR 1.4 __u8 track_table[16680] */ 495 - os_ext_trk_tb_t ext_track_tb; 496 - __u8 reserved17272_17735[464]; 497 - os_fm_tab_t dat_fm_tab; 498 - os_fm_tab_t qfa_fm_tab; 499 - __u8 reserved25960_32767[6808]; 500 - } os_header_t; 501 - 502 - 503 - /* 504 - * OnStream ADRL frame 505 - */ 506 - #define OS_FRAME_SIZE (32 * 1024 + 512) 507 - #define OS_DATA_SIZE (32 * 1024) 508 - #define OS_AUX_SIZE (512) 509 - //#define OSST_MAX_SG 2 510 - 511 - /* The OnStream tape buffer descriptor. 
*/ 512 - struct osst_buffer { 513 - unsigned char in_use; 514 - unsigned char dma; /* DMA-able buffer */ 515 - int buffer_size; 516 - int buffer_blocks; 517 - int buffer_bytes; 518 - int read_pointer; 519 - int writing; 520 - int midlevel_result; 521 - int syscall_result; 522 - struct osst_request *last_SRpnt; 523 - struct st_cmdstatus cmdstat; 524 - struct rq_map_data map_data; 525 - unsigned char *b_data; 526 - os_aux_t *aux; /* onstream AUX structure at end of each block */ 527 - unsigned short use_sg; /* zero or number of s/g segments for this adapter */ 528 - unsigned short sg_segs; /* number of segments in s/g list */ 529 - unsigned short orig_sg_segs; /* number of segments allocated at first try */ 530 - struct scatterlist sg[1]; /* MUST BE last item */ 531 - } ; 532 - 533 - /* The OnStream tape drive descriptor */ 534 - struct osst_tape { 535 - struct scsi_driver *driver; 536 - unsigned capacity; 537 - struct scsi_device *device; 538 - struct mutex lock; /* for serialization */ 539 - struct completion wait; /* for SCSI commands */ 540 - struct osst_buffer * buffer; 541 - 542 - /* Drive characteristics */ 543 - unsigned char omit_blklims; 544 - unsigned char do_auto_lock; 545 - unsigned char can_bsr; 546 - unsigned char can_partitions; 547 - unsigned char two_fm; 548 - unsigned char fast_mteom; 549 - unsigned char restr_dma; 550 - unsigned char scsi2_logical; 551 - unsigned char default_drvbuffer; /* 0xff = don't touch, value 3 bits */ 552 - unsigned char pos_unknown; /* after reset position unknown */ 553 - int write_threshold; 554 - int timeout; /* timeout for normal commands */ 555 - int long_timeout; /* timeout for commands known to take long time*/ 556 - 557 - /* Mode characteristics */ 558 - struct st_modedef modes[ST_NBR_MODES]; 559 - int current_mode; 560 - 561 - /* Status variables */ 562 - int partition; 563 - int new_partition; 564 - int nbr_partitions; /* zero until partition support enabled */ 565 - struct st_partstat ps[ST_NBR_PARTITIONS]; 566 
- unsigned char dirty; 567 - unsigned char ready; 568 - unsigned char write_prot; 569 - unsigned char drv_write_prot; 570 - unsigned char in_use; 571 - unsigned char blksize_changed; 572 - unsigned char density_changed; 573 - unsigned char compression_changed; 574 - unsigned char drv_buffer; 575 - unsigned char density; 576 - unsigned char door_locked; 577 - unsigned char rew_at_close; 578 - unsigned char inited; 579 - int block_size; 580 - int min_block; 581 - int max_block; 582 - int recover_count; /* from tape opening */ 583 - int abort_count; 584 - int write_count; 585 - int read_count; 586 - int recover_erreg; /* from last status call */ 587 - /* 588 - * OnStream specific data 589 - */ 590 - int os_fw_rev; /* the firmware revision * 10000 */ 591 - unsigned char raw; /* flag OnStream raw access (32.5KB block size) */ 592 - unsigned char poll; /* flag that this drive needs polling (IDE|firmware) */ 593 - unsigned char frame_in_buffer; /* flag that the frame as per frame_seq_number 594 - * has been read into STp->buffer and is valid */ 595 - int frame_seq_number; /* logical frame number */ 596 - int logical_blk_num; /* logical block number */ 597 - unsigned first_frame_position; /* physical frame to be transferred to/from host */ 598 - unsigned last_frame_position; /* physical frame to be transferd to/from tape */ 599 - int cur_frames; /* current number of frames in internal buffer */ 600 - int max_frames; /* max number of frames in internal buffer */ 601 - char application_sig[5]; /* application signature */ 602 - unsigned char fast_open; /* flag that reminds us we didn't check headers at open */ 603 - unsigned short wrt_pass_cntr; /* write pass counter */ 604 - int update_frame_cntr; /* update frame counter */ 605 - int onstream_write_error; /* write error recovery active */ 606 - int header_ok; /* header frame verified ok */ 607 - int linux_media; /* reading linux-specifc media */ 608 - int linux_media_version; 609 - os_header_t * header_cache; /* cache is 
kept for filemark positions */ 610 - int filemark_cnt; 611 - int first_mark_ppos; 612 - int last_mark_ppos; 613 - int last_mark_lbn; /* storing log_blk_num of last mark is extends ADR spec */ 614 - int first_data_ppos; 615 - int eod_frame_ppos; 616 - int eod_frame_lfa; 617 - int write_type; /* used in write error recovery */ 618 - int read_error_frame; /* used in read error recovery */ 619 - unsigned long cmd_start_time; 620 - unsigned long max_cmd_time; 621 - 622 - #if DEBUG 623 - unsigned char write_pending; 624 - int nbr_finished; 625 - int nbr_waits; 626 - unsigned char last_cmnd[6]; 627 - unsigned char last_sense[16]; 628 - #endif 629 - struct gendisk *drive; 630 - } ; 631 - 632 - /* scsi tape command */ 633 - struct osst_request { 634 - unsigned char cmd[MAX_COMMAND_SIZE]; 635 - unsigned char sense[SCSI_SENSE_BUFFERSIZE]; 636 - int result; 637 - struct osst_tape *stp; 638 - struct completion *waiting; 639 - struct bio *bio; 640 - }; 641 - 642 - /* Values of write_type */ 643 - #define OS_WRITE_DATA 0 644 - #define OS_WRITE_EOD 1 645 - #define OS_WRITE_NEW_MARK 2 646 - #define OS_WRITE_LAST_MARK 3 647 - #define OS_WRITE_HEADER 4 648 - #define OS_WRITE_FILLER 5 649 - 650 - /* Additional rw state */ 651 - #define OS_WRITING_COMPLETE 3
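The removed osst structures above all use the kernel's endian-dependent bit-field idiom: the field declarations are reversed between `__BIG_ENDIAN_BITFIELD` and `__LITTLE_ENDIAN_BITFIELD` so that each field lands on the same wire bit regardless of host byte order. A minimal userspace sketch of the same pattern, using an illustrative mode-page header rather than the kernel's types:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch (not the osst definitions): little-endian ABIs
 * allocate bit-fields from the least significant bit up, so page_code
 * must be declared first to occupy bits 0-5; a big-endian build needs
 * the reverse declaration order to hit the same bits. */
struct mode_page_hdr {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	uint8_t ps        : 1;	/* bit 7 */
	uint8_t reserved  : 1;	/* bit 6 */
	uint8_t page_code : 6;	/* bits 0-5 */
#else
	uint8_t page_code : 6;	/* bits 0-5 */
	uint8_t reserved  : 1;	/* bit 6 */
	uint8_t ps        : 1;	/* bit 7 */
#endif
};

/* Pack ps/page_code through the struct and return the raw byte. */
uint8_t encode_hdr(uint8_t ps, uint8_t page_code)
{
	struct mode_page_hdr h = { 0 };
	uint8_t raw;

	h.ps = ps & 1;
	h.page_code = page_code & 0x3f;
	memcpy(&raw, &h, 1);
	return raw;
}
```

Either declaration order yields the same raw byte, which is the whole point of the `#if`/`#elif` pairs (and of the `#error "Please fix <asm/byteorder.h>"` fallback when neither macro is defined).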
-7
drivers/scsi/osst_detect.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - #define SIGS_FROM_OSST \ 3 - {"OnStream", "SC-", "", "osst"}, \ 4 - {"OnStream", "DI-", "", "osst"}, \ 5 - {"OnStream", "DP-", "", "osst"}, \ 6 - {"OnStream", "FW-", "", "osst"}, \ 7 - {"OnStream", "USB", "", "osst"}
-107
drivers/scsi/osst_options.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 */ 2 - /* 3 - The compile-time configurable defaults for the Linux SCSI tape driver. 4 - 5 - Copyright 1995 Kai Makisara. 6 - 7 - Last modified: Wed Sep 2 21:24:07 1998 by root@home 8 - 9 - Changed (and renamed) for OnStream SCSI drives garloff@suse.de 10 - 2000-06-21 11 - 12 - $Header: /cvsroot/osst/Driver/osst_options.h,v 1.6 2003/12/23 14:22:12 wriede Exp $ 13 - */ 14 - 15 - #ifndef _OSST_OPTIONS_H 16 - #define _OSST_OPTIONS_H 17 - 18 - /* The minimum limit for the number of SCSI tape devices is determined by 19 - OSST_MAX_TAPES. If the number of tape devices and the "slack" defined by 20 - OSST_EXTRA_DEVS exceeds OSST_MAX_TAPES, the large number is used. */ 21 - #define OSST_MAX_TAPES 4 22 - 23 - /* If OSST_IN_FILE_POS is nonzero, the driver positions the tape after the 24 - record been read by the user program even if the tape has moved further 25 - because of buffered reads. Should be set to zero to support also drives 26 - that can't space backwards over records. NOTE: The tape will be 27 - spaced backwards over an "accidentally" crossed filemark in any case. */ 28 - #define OSST_IN_FILE_POS 1 29 - 30 - /* The tape driver buffer size in kilobytes. */ 31 - /* Don't change, as this is the HW blocksize */ 32 - #define OSST_BUFFER_BLOCKS 32 33 - 34 - /* The number of kilobytes of data in the buffer that triggers an 35 - asynchronous write in fixed block mode. See also OSST_ASYNC_WRITES 36 - below. */ 37 - #define OSST_WRITE_THRESHOLD_BLOCKS 32 38 - 39 - /* OSST_EOM_RESERVE defines the number of frames are kept in reserve for 40 - * * write error recovery when writing near end of medium. ENOSPC is returned 41 - * * when write() is called and the tape write position is within this number 42 - * * of blocks from the tape capacity. */ 43 - #define OSST_EOM_RESERVE 300 44 - 45 - /* The maximum number of tape buffers the driver allocates. The number 46 - is also constrained by the number of drives detected. 
Determines the 47 - maximum number of concurrently active tape drives. */ 48 - #define OSST_MAX_BUFFERS OSST_MAX_TAPES 49 - 50 - /* Maximum number of scatter/gather segments */ 51 - /* Fit one buffer in pages and add one for the AUX header */ 52 - #define OSST_MAX_SG (((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE) + 1) 53 - 54 - /* The number of scatter/gather segments to allocate at first try (must be 55 - smaller or equal to the maximum). */ 56 - #define OSST_FIRST_SG ((OSST_BUFFER_BLOCKS*1024) / PAGE_SIZE) 57 - 58 - /* The size of the first scatter/gather segments (determines the maximum block 59 - size for SCSI adapters not supporting scatter/gather). The default is set 60 - to try to allocate the buffer as one chunk. */ 61 - #define OSST_FIRST_ORDER (15-PAGE_SHIFT) 62 - 63 - 64 - /* The following lines define defaults for properties that can be set 65 - separately for each drive using the MTSTOPTIONS ioctl. */ 66 - 67 - /* If OSST_TWO_FM is non-zero, the driver writes two filemarks after a 68 - file being written. Some drives can't handle two filemarks at the 69 - end of data. */ 70 - #define OSST_TWO_FM 0 71 - 72 - /* If OSST_BUFFER_WRITES is non-zero, writes in fixed block mode are 73 - buffered until the driver buffer is full or asynchronous write is 74 - triggered. */ 75 - #define OSST_BUFFER_WRITES 1 76 - 77 - /* If OSST_ASYNC_WRITES is non-zero, the SCSI write command may be started 78 - without waiting for it to finish. May cause problems in multiple 79 - tape backups. */ 80 - #define OSST_ASYNC_WRITES 1 81 - 82 - /* If OSST_READ_AHEAD is non-zero, blocks are read ahead in fixed block 83 - mode. */ 84 - #define OSST_READ_AHEAD 1 85 - 86 - /* If OSST_AUTO_LOCK is non-zero, the drive door is locked at the first 87 - read or write command after the device is opened. The door is opened 88 - when the device is closed. */ 89 - #define OSST_AUTO_LOCK 0 90 - 91 - /* If OSST_FAST_MTEOM is non-zero, the MTEOM ioctl is done using the 92 - direct SCSI command. 
The file number status is lost but this method 93 - is fast with some drives. Otherwise MTEOM is done by spacing over 94 - files and the file number status is retained. */ 95 - #define OSST_FAST_MTEOM 0 96 - 97 - /* If OSST_SCSI2LOGICAL is nonzero, the logical block addresses are used for 98 - MTIOCPOS and MTSEEK by default. Vendor addresses are used if OSST_SCSI2LOGICAL 99 - is zero. */ 100 - #define OSST_SCSI2LOGICAL 0 101 - 102 - /* If OSST_SYSV is non-zero, the tape behaves according to the SYS V semantics. 103 - The default is BSD semantics. */ 104 - #define OSST_SYSV 0 105 - 106 - 107 - #endif
+10
drivers/scsi/pcmcia/Kconfig
··· 20 20 To compile this driver as a module, choose M here: the 21 21 module will be called aha152x_cs. 22 22 23 + config PCMCIA_FDOMAIN 24 + tristate "Future Domain PCMCIA support" 25 + select SCSI_FDOMAIN 26 + help 27 + Say Y here if you intend to attach this type of PCMCIA SCSI host 28 + adapter to your computer. 29 + 30 + To compile this driver as a module, choose M here: the 31 + module will be called fdomain_cs. 32 + 23 33 config PCMCIA_NINJA_SCSI 24 34 tristate "NinjaSCSI-3 / NinjaSCSI-32Bi (16bit) PCMCIA support" 25 35 depends on !64BIT
+1
drivers/scsi/pcmcia/Makefile
··· 4 4 5 5 # 16-bit client drivers 6 6 obj-$(CONFIG_PCMCIA_QLOGIC) += qlogic_cs.o 7 + obj-$(CONFIG_PCMCIA_FDOMAIN) += fdomain_cs.o 7 8 obj-$(CONFIG_PCMCIA_AHA152X) += aha152x_cs.o 8 9 obj-$(CONFIG_PCMCIA_NINJA_SCSI) += nsp_cs.o 9 10 obj-$(CONFIG_PCMCIA_SYM53C500) += sym53c500_cs.o
+95
drivers/scsi/pcmcia/fdomain_cs.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR MPL-1.1) 2 + /* 3 + * Driver for Future Domain-compatible PCMCIA SCSI cards 4 + * Copyright 2019 Ondrej Zary 5 + * 6 + * The initial developer of the original code is David A. Hinds 7 + * <dahinds@users.sourceforge.net>. Portions created by David A. Hinds 8 + * are Copyright (C) 1999 David A. Hinds. All Rights Reserved. 9 + */ 10 + 11 + #include <linux/module.h> 12 + #include <linux/init.h> 13 + #include <scsi/scsi_host.h> 14 + #include <pcmcia/cistpl.h> 15 + #include <pcmcia/ds.h> 16 + #include "fdomain.h" 17 + 18 + MODULE_AUTHOR("Ondrej Zary, David Hinds"); 19 + MODULE_DESCRIPTION("Future Domain PCMCIA SCSI driver"); 20 + MODULE_LICENSE("Dual MPL/GPL"); 21 + 22 + static int fdomain_config_check(struct pcmcia_device *p_dev, void *priv_data) 23 + { 24 + p_dev->io_lines = 10; 25 + p_dev->resource[0]->end = FDOMAIN_REGION_SIZE; 26 + p_dev->resource[0]->flags &= ~IO_DATA_PATH_WIDTH; 27 + p_dev->resource[0]->flags |= IO_DATA_PATH_WIDTH_AUTO; 28 + return pcmcia_request_io(p_dev); 29 + } 30 + 31 + static int fdomain_probe(struct pcmcia_device *link) 32 + { 33 + int ret; 34 + struct Scsi_Host *sh; 35 + 36 + link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO; 37 + link->config_regs = PRESENT_OPTION; 38 + 39 + ret = pcmcia_loop_config(link, fdomain_config_check, NULL); 40 + if (ret) 41 + return ret; 42 + 43 + ret = pcmcia_enable_device(link); 44 + if (ret) 45 + goto fail_disable; 46 + 47 + if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE, 48 + "fdomain_cs")) 49 + goto fail_disable; 50 + 51 + sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev); 52 + if (!sh) { 53 + dev_err(&link->dev, "Controller initialization failed"); 54 + ret = -ENODEV; 55 + goto fail_release; 56 + } 57 + 58 + link->priv = sh; 59 + 60 + return 0; 61 + 62 + fail_release: 63 + release_region(link->resource[0]->start, FDOMAIN_REGION_SIZE); 64 + fail_disable: 65 + pcmcia_disable_device(link); 66 + return ret; 67 + } 68 
+ 69 + static void fdomain_remove(struct pcmcia_device *link) 70 + { 71 + fdomain_destroy(link->priv); 72 + release_region(link->resource[0]->start, FDOMAIN_REGION_SIZE); 73 + pcmcia_disable_device(link); 74 + } 75 + 76 + static const struct pcmcia_device_id fdomain_ids[] = { 77 + PCMCIA_DEVICE_PROD_ID12("IBM Corp.", "SCSI PCMCIA Card", 0xe3736c88, 78 + 0x859cad20), 79 + PCMCIA_DEVICE_PROD_ID1("SCSI PCMCIA Adapter Card", 0x8dacb57e), 80 + PCMCIA_DEVICE_PROD_ID12(" SIMPLE TECHNOLOGY Corporation", 81 + "SCSI PCMCIA Credit Card Controller", 82 + 0x182bdafe, 0xc80d106f), 83 + PCMCIA_DEVICE_NULL, 84 + }; 85 + MODULE_DEVICE_TABLE(pcmcia, fdomain_ids); 86 + 87 + static struct pcmcia_driver fdomain_cs_driver = { 88 + .owner = THIS_MODULE, 89 + .name = "fdomain_cs", 90 + .probe = fdomain_probe, 91 + .remove = fdomain_remove, 92 + .id_table = fdomain_ids, 93 + }; 94 + 95 + module_pcmcia_driver(fdomain_cs_driver);
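fdomain_probe() above follows the kernel's staged goto-unwind convention: each acquisition step that fails jumps to a label that releases only what was already acquired, in reverse order. A compilable userspace model of that control flow (the resource helpers are stand-ins, not the real PCMCIA calls):

```c
#include <stdbool.h>

/* Hypothetical flags standing in for pcmcia_enable_device(),
 * request_region() and fdomain_create() in the probe above. */
static bool dev_enabled, region_claimed, host_created;

static int enable_dev(bool ok)   { dev_enabled = ok;    return ok ? 0 : -1; }
static int claim_region(bool ok) { region_claimed = ok; return ok ? 0 : -1; }
static int create_host(bool ok)  { host_created = ok;   return ok ? 0 : -1; }

static void drop_region(void) { region_claimed = false; }
static void disable_dev(void) { dev_enabled = false; }

/* Mirrors fdomain_probe(): each failure label unwinds exactly the
 * steps that succeeded before it, in reverse order of acquisition. */
int probe(bool enable_ok, bool region_ok, bool host_ok)
{
	int ret = -1;

	if (enable_dev(enable_ok))
		return ret;		/* nothing to undo yet */
	if (claim_region(region_ok))
		goto fail_disable;	/* undo enable only */
	if (create_host(host_ok))
		goto fail_release;	/* undo region, then enable */
	return 0;

fail_release:
	drop_region();
fail_disable:
	disable_dev();
	return ret;
}
```

fdomain_remove() then runs the same teardown unconditionally, in the same reverse order, which is why the labels and the remove path stay in lockstep.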
+36 -16
drivers/scsi/pm8001/pm8001_ctl.c
··· 462 462 } 463 463 static DEVICE_ATTR(bios_version, S_IRUGO, pm8001_ctl_bios_version_show, NULL); 464 464 /** 465 + * event_log_size_show - event log size 466 + * @cdev: pointer to embedded class device 467 + * @buf: the buffer returned 468 + * 469 + * A sysfs read shost attribute. 470 + */ 471 + static ssize_t event_log_size_show(struct device *cdev, 472 + struct device_attribute *attr, char *buf) 473 + { 474 + struct Scsi_Host *shost = class_to_shost(cdev); 475 + struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); 476 + struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 477 + 478 + return snprintf(buf, PAGE_SIZE, "%d\n", 479 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.event_log_size); 480 + } 481 + static DEVICE_ATTR_RO(event_log_size); 482 + /** 465 483 * pm8001_ctl_aap_log_show - IOP event log 466 484 * @cdev: pointer to embedded class device 467 485 * @buf: the buffer returned ··· 492 474 struct Scsi_Host *shost = class_to_shost(cdev); 493 475 struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost); 494 476 struct pm8001_hba_info *pm8001_ha = sha->lldd_ha; 495 - #define IOP_MEMMAP(r, c) \ 496 - (*(u32 *)((u8*)pm8001_ha->memoryMap.region[IOP].virt_ptr + (r) * 32 \ 497 - + (c))) 498 - int i; 499 477 char *str = buf; 500 - int max = 2; 501 - for (i = 0; i < max; i++) { 502 - str += sprintf(str, "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" 503 - "0x%08x 0x%08x\n", 504 - IOP_MEMMAP(i, 0), 505 - IOP_MEMMAP(i, 4), 506 - IOP_MEMMAP(i, 8), 507 - IOP_MEMMAP(i, 12), 508 - IOP_MEMMAP(i, 16), 509 - IOP_MEMMAP(i, 20), 510 - IOP_MEMMAP(i, 24), 511 - IOP_MEMMAP(i, 28)); 478 + u32 read_size = 479 + pm8001_ha->main_cfg_tbl.pm80xx_tbl.event_log_size / 1024; 480 + static u32 start, end, count; 481 + u32 max_read_times = 32; 482 + u32 max_count = (read_size * 1024) / (max_read_times * 4); 483 + u32 *temp = (u32 *)pm8001_ha->memoryMap.region[IOP].virt_ptr; 484 + 485 + if ((count % max_count) == 0) { 486 + start = 0; 487 + end = max_read_times; 488 + count = 0; 489 + } else { 490 + start = 
end; 491 + end = end + max_read_times; 512 492 } 513 493 494 + for (; start < end; start++) 495 + str += sprintf(str, "%08x ", *(temp+start)); 496 + count++; 514 497 return str - buf; 515 498 } 516 499 static DEVICE_ATTR(iop_log, S_IRUGO, pm8001_ctl_iop_log_show, NULL); ··· 815 796 &dev_attr_max_sg_list, 816 797 &dev_attr_sas_spec_support, 817 798 &dev_attr_logging_level, 799 + &dev_attr_event_log_size, 818 800 &dev_attr_host_sas_address, 819 801 &dev_attr_bios_version, 820 802 &dev_attr_ib_log,
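The rewritten iop_log handler keeps a static start/end/count cursor so that each sysfs read returns the next 32-dword window of the event log and wraps back to the start after the last window. A simplified model of that windowing arithmetic (names and numbers are illustrative; the driver's version derives the window count from the firmware-reported event_log_size):

```c
/* Sliding-window cursor over a log of total_words entries, read
 * words_per_read entries at a time, wrapping after the last window. */
struct log_cursor {
	unsigned int start, end, count;
};

void next_window(struct log_cursor *c, unsigned int words_per_read,
		 unsigned int total_words)
{
	unsigned int max_count = total_words / words_per_read;

	if (c->count % max_count == 0) {
		/* first read, or we just returned the final window: rewind */
		c->start = 0;
		c->end = words_per_read;
		c->count = 0;
	} else {
		/* advance to the next window */
		c->start = c->end;
		c->end += words_per_read;
	}
	c->count++;
}
```

Note the driver stores this cursor in function-local `static` variables, so the position is shared by all readers of the attribute; that is a deliberate trade-off for a debug interface, not a general-purpose pattern.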
+2 -2
drivers/scsi/pm8001/pm8001_hwi.c
··· 2356 2356 if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) && 2357 2357 (status != IO_UNDERFLOW)) { 2358 2358 if (!((t->dev->parent) && 2359 - (DEV_IS_EXPANDER(t->dev->parent->dev_type)))) { 2359 + (dev_is_expander(t->dev->parent->dev_type)))) { 2360 2360 for (i = 0 , j = 4; j <= 7 && i <= 3; i++ , j++) 2361 2361 sata_addr_low[i] = pm8001_ha->sas_addr[j]; 2362 2362 for (i = 0 , j = 0; j <= 3 && i <= 3; i++ , j++) ··· 4560 4560 pm8001_dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) 4561 4561 stp_sspsmp_sata = 0x01; /*ssp or smp*/ 4562 4562 } 4563 - if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) 4563 + if (parent_dev && dev_is_expander(parent_dev->dev_type)) 4564 4564 phy_id = parent_dev->ex_dev.ex_phy->phy_id; 4565 4565 else 4566 4566 phy_id = pm8001_dev->attached_phy;
+2 -2
drivers/scsi/pm8001/pm8001_sas.c
··· 634 634 dev->lldd_dev = pm8001_device; 635 635 pm8001_device->dev_type = dev->dev_type; 636 636 pm8001_device->dcompletion = &completion; 637 - if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) { 637 + if (parent_dev && dev_is_expander(parent_dev->dev_type)) { 638 638 int phy_id; 639 639 struct ex_phy *phy; 640 640 for (phy_id = 0; phy_id < parent_dev->ex_dev.num_phys; ··· 1181 1181 return rc; 1182 1182 } 1183 1183 1184 - /* mandatory SAM-3, still need free task/ccb info, abord the specified task */ 1184 + /* mandatory SAM-3, still need free task/ccb info, abort the specified task */ 1185 1185 int pm8001_abort_task(struct sas_task *task) 1186 1186 { 1187 1187 unsigned long flags;
-1
drivers/scsi/pm8001/pm8001_sas.h
··· 103 103 #define PM8001_READ_VPD 104 104 105 105 106 - #define DEV_IS_EXPANDER(type) ((type == SAS_EDGE_EXPANDER_DEVICE) || (type == SAS_FANOUT_EXPANDER_DEVICE)) 107 106 #define IS_SPCV_12G(dev) ((dev->device == 0X8074) \ 108 107 || (dev->device == 0X8076) \ 109 108 || (dev->device == 0X8077) \
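The pm8001 hunks replace the driver's private DEV_IS_EXPANDER() macro with the shared libsas dev_is_expander() helper, trading an unparenthesised macro argument for a typed inline. A sketch of the equivalent predicate, with a stand-in enum (the real `enum sas_device_type` and helper live in the kernel's SAS headers):

```c
#include <stdbool.h>

/* Illustrative stand-in for the kernel's enum sas_device_type. */
enum sas_device_type {
	SAS_PHY_UNUSED,
	SAS_END_DEVICE,
	SAS_EDGE_EXPANDER_DEVICE,
	SAS_FANOUT_EXPANDER_DEVICE,
};

/* Same truth table as the removed DEV_IS_EXPANDER(type) macro, but
 * with argument type checking and no macro-expansion pitfalls. */
static inline bool dev_is_expander(enum sas_device_type type)
{
	return type == SAS_EDGE_EXPANDER_DEVICE ||
	       type == SAS_FANOUT_EXPANDER_DEVICE;
}
```

Centralising the predicate in libsas is why the pm8001_hwi.c, pm8001_sas.c and pm80xx_hwi.c hunks are mechanical one-line substitutions.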
+2 -2
drivers/scsi/pm8001/pm80xx_hwi.c
··· 2066 2066 if ((status != IO_SUCCESS) && (status != IO_OVERFLOW) && 2067 2067 (status != IO_UNDERFLOW)) { 2068 2068 if (!((t->dev->parent) && 2069 - (DEV_IS_EXPANDER(t->dev->parent->dev_type)))) { 2069 + (dev_is_expander(t->dev->parent->dev_type)))) { 2070 2070 for (i = 0 , j = 4; i <= 3 && j <= 7; i++ , j++) 2071 2071 sata_addr_low[i] = pm8001_ha->sas_addr[j]; 2072 2072 for (i = 0 , j = 0; i <= 3 && j <= 3; i++ , j++) ··· 4561 4561 pm8001_dev->dev_type == SAS_FANOUT_EXPANDER_DEVICE) 4562 4562 stp_sspsmp_sata = 0x01; /*ssp or smp*/ 4563 4563 } 4564 - if (parent_dev && DEV_IS_EXPANDER(parent_dev->dev_type)) 4564 + if (parent_dev && dev_is_expander(parent_dev->dev_type)) 4565 4565 phy_id = parent_dev->ex_dev.ex_phy->phy_id; 4566 4566 else 4567 4567 phy_id = pm8001_dev->attached_phy;
+3 -2
drivers/scsi/qla2xxx/qla_def.h
··· 532 532 uint8_t cmd_type; 533 533 uint8_t pad[3]; 534 534 atomic_t ref_count; 535 + struct kref cmd_kref; /* need to migrate ref_count over to this */ 536 + void *priv; 535 537 wait_queue_head_t nvme_ls_waitq; 536 538 struct fc_port *fcport; 537 539 struct scsi_qla_host *vha; ··· 556 554 } u; 557 555 void (*done)(void *, int); 558 556 void (*free)(void *); 557 + void (*put_fn)(struct kref *kref); 559 558 } srb_t; 560 559 561 560 #define GET_CMD_SP(sp) (sp->u.scmd.cmd) ··· 2339 2336 unsigned int id_changed:1; 2340 2337 unsigned int scan_needed:1; 2341 2338 2342 - struct work_struct nvme_del_work; 2343 2339 struct completion nvme_del_done; 2344 2340 uint32_t nvme_prli_service_param; 2345 2341 #define NVME_PRLI_SP_CONF BIT_7 ··· 4378 4376 4379 4377 struct nvme_fc_local_port *nvme_local_port; 4380 4378 struct completion nvme_del_done; 4381 - struct list_head nvme_rport_list; 4382 4379 4383 4380 uint16_t fcoe_vlan_id; 4384 4381 uint16_t fcoe_fcf_idx;
+2
drivers/scsi/qla2xxx/qla_gbl.h
··· 908 908 void qlt_set_mode(struct scsi_qla_host *); 909 909 int qla2x00_set_data_rate(scsi_qla_host_t *vha, uint16_t mode); 910 910 911 + /* nvme.c */ 912 + void qla_nvme_unregister_remote_port(struct fc_port *fcport); 911 913 #endif /* _QLA_GBL_H */
-1
drivers/scsi/qla2xxx/qla_init.c
··· 5403 5403 fcport->flags &= ~(FCF_LOGIN_NEEDED | FCF_ASYNC_SENT); 5404 5404 fcport->deleted = 0; 5405 5405 fcport->logout_on_delete = 1; 5406 - fcport->login_retry = vha->hw->login_retry_count; 5407 5406 fcport->n2n_chip_reset = fcport->n2n_link_reset_cnt = 0; 5408 5407 5409 5408 switch (vha->hw->current_topology) {
+144 -98
drivers/scsi/qla2xxx/qla_nvme.c
··· 12 12 13 13 static struct nvme_fc_port_template qla_nvme_fc_transport; 14 14 15 - static void qla_nvme_unregister_remote_port(struct work_struct *); 16 - 17 15 int qla_nvme_register_remote(struct scsi_qla_host *vha, struct fc_port *fcport) 18 16 { 19 17 struct qla_nvme_rport *rport; ··· 36 38 (fcport->nvme_flag & NVME_FLAG_REGISTERED)) 37 39 return 0; 38 40 39 - INIT_WORK(&fcport->nvme_del_work, qla_nvme_unregister_remote_port); 40 41 fcport->nvme_flag &= ~NVME_FLAG_RESETTING; 41 42 42 43 memset(&req, 0, sizeof(struct nvme_fc_port_info)); ··· 71 74 72 75 rport = fcport->nvme_remote_port->private; 73 76 rport->fcport = fcport; 74 - list_add_tail(&rport->list, &vha->nvme_rport_list); 75 77 76 78 fcport->nvme_flag |= NVME_FLAG_REGISTERED; 77 79 return 0; ··· 120 124 return 0; 121 125 } 122 126 123 - static void qla_nvme_sp_ls_done(void *ptr, int res) 127 + static void qla_nvme_release_fcp_cmd_kref(struct kref *kref) 124 128 { 125 - srb_t *sp = ptr; 126 - struct srb_iocb *nvme; 127 - struct nvmefc_ls_req *fd; 128 - struct nvme_private *priv; 129 - 130 - if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0)) 131 - return; 132 - 133 - atomic_dec(&sp->ref_count); 134 - 135 - if (res) 136 - res = -EINVAL; 137 - 138 - nvme = &sp->u.iocb_cmd; 139 - fd = nvme->u.nvme.desc; 140 - priv = fd->private; 141 - priv->comp_status = res; 142 - schedule_work(&priv->ls_work); 143 - /* work schedule doesn't need the sp */ 144 - qla2x00_rel_sp(sp); 145 - } 146 - 147 - static void qla_nvme_sp_done(void *ptr, int res) 148 - { 149 - srb_t *sp = ptr; 150 - struct srb_iocb *nvme; 129 + struct srb *sp = container_of(kref, struct srb, cmd_kref); 130 + struct nvme_private *priv = (struct nvme_private *)sp->priv; 151 131 struct nvmefc_fcp_req *fd; 132 + struct srb_iocb *nvme; 133 + unsigned long flags; 134 + 135 + if (!priv) 136 + goto out; 152 137 153 138 nvme = &sp->u.iocb_cmd; 154 139 fd = nvme->u.nvme.desc; 155 140 156 - if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0)) 157 - return; 158 - 
159 - atomic_dec(&sp->ref_count); 160 - 161 - if (res == QLA_SUCCESS) { 141 + spin_lock_irqsave(&priv->cmd_lock, flags); 142 + priv->sp = NULL; 143 + sp->priv = NULL; 144 + if (priv->comp_status == QLA_SUCCESS) { 162 145 fd->rcv_rsplen = nvme->u.nvme.rsp_pyld_len; 163 146 } else { 164 147 fd->rcv_rsplen = 0; 165 148 fd->transferred_length = 0; 166 149 } 167 150 fd->status = 0; 151 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 152 + 168 153 fd->done(fd); 154 + out: 169 155 qla2xxx_rel_qpair_sp(sp->qpair, sp); 156 + } 157 + 158 + static void qla_nvme_release_ls_cmd_kref(struct kref *kref) 159 + { 160 + struct srb *sp = container_of(kref, struct srb, cmd_kref); 161 + struct nvme_private *priv = (struct nvme_private *)sp->priv; 162 + struct nvmefc_ls_req *fd; 163 + unsigned long flags; 164 + 165 + if (!priv) 166 + goto out; 167 + 168 + spin_lock_irqsave(&priv->cmd_lock, flags); 169 + priv->sp = NULL; 170 + sp->priv = NULL; 171 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 172 + 173 + fd = priv->fd; 174 + fd->done(fd, priv->comp_status); 175 + out: 176 + qla2x00_rel_sp(sp); 177 + } 178 + 179 + static void qla_nvme_ls_complete(struct work_struct *work) 180 + { 181 + struct nvme_private *priv = 182 + container_of(work, struct nvme_private, ls_work); 183 + 184 + kref_put(&priv->sp->cmd_kref, qla_nvme_release_ls_cmd_kref); 185 + } 186 + 187 + static void qla_nvme_sp_ls_done(void *ptr, int res) 188 + { 189 + srb_t *sp = ptr; 190 + struct nvme_private *priv; 191 + 192 + if (WARN_ON_ONCE(kref_read(&sp->cmd_kref) == 0)) 193 + return; 194 + 195 + if (res) 196 + res = -EINVAL; 197 + 198 + priv = (struct nvme_private *)sp->priv; 199 + priv->comp_status = res; 200 + INIT_WORK(&priv->ls_work, qla_nvme_ls_complete); 201 + schedule_work(&priv->ls_work); 202 + } 203 + 204 + /* it assumed that QPair lock is held. 
*/ 205 + static void qla_nvme_sp_done(void *ptr, int res) 206 + { 207 + srb_t *sp = ptr; 208 + struct nvme_private *priv = (struct nvme_private *)sp->priv; 209 + 210 + priv->comp_status = res; 211 + kref_put(&sp->cmd_kref, qla_nvme_release_fcp_cmd_kref); 170 212 171 213 return; 172 214 } ··· 223 189 __func__, sp, sp->handle, fcport, fcport->deleted); 224 190 225 191 if (!ha->flags.fw_started && (fcport && fcport->deleted)) 226 - return; 192 + goto out; 227 193 228 194 if (ha->flags.host_shutting_down) { 229 195 ql_log(ql_log_info, sp->fcport->vha, 0xffff, 230 196 "%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n", 231 197 __func__, sp, sp->type, atomic_read(&sp->ref_count)); 232 198 sp->done(sp, 0); 233 - return; 199 + goto out; 234 200 } 235 - 236 - if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0)) 237 - return; 238 201 239 202 rval = ha->isp_ops->abort_command(sp); 240 203 ··· 239 208 "%s: %s command for sp=%p, handle=%x on fcport=%p rval=%x\n", 240 209 __func__, (rval != QLA_SUCCESS) ? "Failed to abort" : "Aborted", 241 210 sp, sp->handle, fcport, rval); 211 + 212 + out: 213 + /* kref_get was done before work was schedule. 
*/ 214 + kref_put(&sp->cmd_kref, sp->put_fn); 242 215 } 243 216 244 217 static void qla_nvme_ls_abort(struct nvme_fc_local_port *lport, 245 218 struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd) 246 219 { 247 220 struct nvme_private *priv = fd->private; 221 + unsigned long flags; 222 + 223 + spin_lock_irqsave(&priv->cmd_lock, flags); 224 + if (!priv->sp) { 225 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 226 + return; 227 + } 228 + 229 + if (!kref_get_unless_zero(&priv->sp->cmd_kref)) { 230 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 231 + return; 232 + } 233 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 248 234 249 235 INIT_WORK(&priv->abort_work, qla_nvme_abort_work); 250 236 schedule_work(&priv->abort_work); 251 237 } 252 238 253 - static void qla_nvme_ls_complete(struct work_struct *work) 254 - { 255 - struct nvme_private *priv = 256 - container_of(work, struct nvme_private, ls_work); 257 - struct nvmefc_ls_req *fd = priv->fd; 258 - 259 - fd->done(fd, priv->comp_status); 260 - } 261 239 262 240 static int qla_nvme_ls_req(struct nvme_fc_local_port *lport, 263 241 struct nvme_fc_remote_port *rport, struct nvmefc_ls_req *fd) ··· 280 240 struct qla_hw_data *ha; 281 241 srb_t *sp; 282 242 243 + 244 + if (!fcport || (fcport && fcport->deleted)) 245 + return rval; 246 + 283 247 vha = fcport->vha; 284 248 ha = vha->hw; 249 + 250 + if (!ha->flags.fw_started) 251 + return rval; 252 + 285 253 /* Alloc SRB structure */ 286 254 sp = qla2x00_get_sp(vha, fcport, GFP_ATOMIC); 287 255 if (!sp) ··· 298 250 sp->type = SRB_NVME_LS; 299 251 sp->name = "nvme_ls"; 300 252 sp->done = qla_nvme_sp_ls_done; 301 - atomic_set(&sp->ref_count, 1); 302 - nvme = &sp->u.iocb_cmd; 253 + sp->put_fn = qla_nvme_release_ls_cmd_kref; 254 + sp->priv = (void *)priv; 303 255 priv->sp = sp; 256 + kref_init(&sp->cmd_kref); 257 + spin_lock_init(&priv->cmd_lock); 258 + nvme = &sp->u.iocb_cmd; 304 259 priv->fd = fd; 305 - INIT_WORK(&priv->ls_work, qla_nvme_ls_complete); 306 260 
nvme->u.nvme.desc = fd; 307 261 nvme->u.nvme.dir = 0; 308 262 nvme->u.nvme.dl = 0; ··· 321 271 if (rval != QLA_SUCCESS) { 322 272 ql_log(ql_log_warn, vha, 0x700e, 323 273 "qla2x00_start_sp failed = %d\n", rval); 324 - atomic_dec(&sp->ref_count); 325 274 wake_up(&sp->nvme_ls_waitq); 275 + sp->priv = NULL; 276 + priv->sp = NULL; 277 + qla2x00_rel_sp(sp); 326 278 return rval; 327 279 } 328 280 ··· 336 284 struct nvmefc_fcp_req *fd) 337 285 { 338 286 struct nvme_private *priv = fd->private; 287 + unsigned long flags; 288 + 289 + spin_lock_irqsave(&priv->cmd_lock, flags); 290 + if (!priv->sp) { 291 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 292 + return; 293 + } 294 + if (!kref_get_unless_zero(&priv->sp->cmd_kref)) { 295 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 296 + return; 297 + } 298 + spin_unlock_irqrestore(&priv->cmd_lock, flags); 339 299 340 300 INIT_WORK(&priv->abort_work, qla_nvme_abort_work); 341 301 schedule_work(&priv->abort_work); ··· 551 487 552 488 fcport = qla_rport->fcport; 553 489 554 - vha = fcport->vha; 555 - 556 - if (test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags)) 490 + if (!qpair || !fcport || (qpair && !qpair->fw_started) || 491 + (fcport && fcport->deleted)) 557 492 return rval; 558 493 494 + vha = fcport->vha; 559 495 /* 560 496 * If we know the dev is going away while the transport is still sending 561 497 * IO's return busy back to stall the IO Q. 
This happens when the ··· 571 507 if (!sp) 572 508 return -EBUSY; 573 509 574 - atomic_set(&sp->ref_count, 1); 575 510 init_waitqueue_head(&sp->nvme_ls_waitq); 511 + kref_init(&sp->cmd_kref); 512 + spin_lock_init(&priv->cmd_lock); 513 + sp->priv = (void *)priv; 576 514 priv->sp = sp; 577 515 sp->type = SRB_NVME_CMD; 578 516 sp->name = "nvme_cmd"; 579 517 sp->done = qla_nvme_sp_done; 518 + sp->put_fn = qla_nvme_release_fcp_cmd_kref; 580 519 sp->qpair = qpair; 581 520 sp->vha = vha; 582 521 nvme = &sp->u.iocb_cmd; ··· 589 522 if (rval != QLA_SUCCESS) { 590 523 ql_log(ql_log_warn, vha, 0x212d, 591 524 "qla2x00_start_nvme_mq failed = %d\n", rval); 592 - atomic_dec(&sp->ref_count); 593 525 wake_up(&sp->nvme_ls_waitq); 526 + sp->priv = NULL; 527 + priv->sp = NULL; 528 + qla2xxx_rel_qpair_sp(sp->qpair, sp); 594 529 } 595 530 596 531 return rval; ··· 611 542 static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport) 612 543 { 613 544 fc_port_t *fcport; 614 - struct qla_nvme_rport *qla_rport = rport->private, *trport; 545 + struct qla_nvme_rport *qla_rport = rport->private; 615 546 616 547 fcport = qla_rport->fcport; 617 548 fcport->nvme_remote_port = NULL; 618 549 fcport->nvme_flag &= ~NVME_FLAG_REGISTERED; 619 - 620 - list_for_each_entry_safe(qla_rport, trport, 621 - &fcport->vha->nvme_rport_list, list) { 622 - if (qla_rport->fcport == fcport) { 623 - list_del(&qla_rport->list); 624 - break; 625 - } 626 - } 627 - complete(&fcport->nvme_del_done); 628 - 629 - if (!test_bit(UNLOADING, &fcport->vha->dpc_flags)) { 630 - INIT_WORK(&fcport->free_work, qlt_free_session_done); 631 - schedule_work(&fcport->free_work); 632 - } 633 - 634 550 fcport->nvme_flag &= ~NVME_FLAG_DELETING; 635 551 ql_log(ql_log_info, fcport->vha, 0x2110, 636 - "remoteport_delete of %p completed.\n", fcport); 552 + "remoteport_delete of %p %8phN completed.\n", 553 + fcport, fcport->port_name); 554 + complete(&fcport->nvme_del_done); 637 555 } 638 556 639 557 static struct 
nvme_fc_port_template qla_nvme_fc_transport = { ··· 642 586 .fcprqst_priv_sz = sizeof(struct nvme_private), 643 587 }; 644 588 645 - static void qla_nvme_unregister_remote_port(struct work_struct *work) 589 + void qla_nvme_unregister_remote_port(struct fc_port *fcport) 646 590 { 647 - struct fc_port *fcport = container_of(work, struct fc_port, 648 - nvme_del_work); 649 - struct qla_nvme_rport *qla_rport, *trport; 591 + int ret; 650 592 651 593 if (!IS_ENABLED(CONFIG_NVME_FC)) 652 594 return; 653 595 654 596 ql_log(ql_log_warn, NULL, 0x2112, 655 - "%s: unregister remoteport on %p\n",__func__, fcport); 597 + "%s: unregister remoteport on %p %8phN\n", 598 + __func__, fcport, fcport->port_name); 656 599 657 - list_for_each_entry_safe(qla_rport, trport, 658 - &fcport->vha->nvme_rport_list, list) { 659 - if (qla_rport->fcport == fcport) { 660 - ql_log(ql_log_info, fcport->vha, 0x2113, 661 - "%s: fcport=%p\n", __func__, fcport); 662 - nvme_fc_set_remoteport_devloss 663 - (fcport->nvme_remote_port, 0); 664 - init_completion(&fcport->nvme_del_done); 665 - if (nvme_fc_unregister_remoteport 666 - (fcport->nvme_remote_port)) 667 - ql_log(ql_log_info, fcport->vha, 0x2114, 668 - "%s: Failed to unregister nvme_remote_port\n", 669 - __func__); 670 - wait_for_completion(&fcport->nvme_del_done); 671 - break; 672 - } 673 - } 600 + nvme_fc_set_remoteport_devloss(fcport->nvme_remote_port, 0); 601 + init_completion(&fcport->nvme_del_done); 602 + ret = nvme_fc_unregister_remoteport(fcport->nvme_remote_port); 603 + if (ret) 604 + ql_log(ql_log_info, fcport->vha, 0x2114, 605 + "%s: Failed to unregister nvme_remote_port (%d)\n", 606 + __func__, ret); 607 + wait_for_completion(&fcport->nvme_del_done); 674 608 } 675 609 676 610 void qla_nvme_delete(struct scsi_qla_host *vha)
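The qla2xxx changes above replace the driver's open-coded `sp->ref_count` atomics with `kref` accounting: the abort paths now take a reference with `kref_get_unless_zero()` under `priv->cmd_lock` before scheduling work, and the release callbacks run only when the final reference drops. A minimal userspace sketch of that get-unless-zero discipline, using C11 atomics in place of the kernel's `kref` (names are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative stand-in for the kernel's kref_get_unless_zero():
 * take a reference only if the object is not already being torn down. */
static bool ref_get_unless_zero(atomic_int *ref)
{
    int old = atomic_load(ref);
    while (old != 0) {
        if (atomic_compare_exchange_weak(ref, &old, old + 1))
            return true;   /* reference taken */
    }
    return false;          /* count already hit zero; do not touch object */
}

/* Drop a reference; report whether this was the final put
 * (the point where the kernel would invoke the release callback). */
static bool ref_put(atomic_int *ref)
{
    return atomic_fetch_sub(ref, 1) == 1;
}
```

With this scheme the abort worker can safely race with command completion: whichever side performs the final put frees the srb, and a late abort simply fails the get and returns without touching freed memory.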
+1 -1
drivers/scsi/qla2xxx/qla_nvme.h
··· 34 34 struct work_struct ls_work; 35 35 struct work_struct abort_work; 36 36 int comp_status; 37 + spinlock_t cmd_lock; 37 38 }; 38 39 39 40 struct qla_nvme_rport { 40 - struct list_head list; 41 41 struct fc_port *fcport; 42 42 }; 43 43
-1
drivers/scsi/qla2xxx/qla_os.c
··· 4789 4789 INIT_LIST_HEAD(&vha->plogi_ack_list); 4790 4790 INIT_LIST_HEAD(&vha->qp_list); 4791 4791 INIT_LIST_HEAD(&vha->gnl.fcports); 4792 - INIT_LIST_HEAD(&vha->nvme_rport_list); 4793 4792 INIT_LIST_HEAD(&vha->gpnid_list); 4794 4793 INIT_WORK(&vha->iocb_work, qla2x00_iocb_work_fn); 4795 4794
+8 -8
drivers/scsi/qla2xxx/qla_target.c
··· 1004 1004 else 1005 1005 logout_started = true; 1006 1006 } 1007 + } /* if sess->logout_on_delete */ 1008 + 1009 + if (sess->nvme_flag & NVME_FLAG_REGISTERED && 1010 + !(sess->nvme_flag & NVME_FLAG_DELETING)) { 1011 + sess->nvme_flag |= NVME_FLAG_DELETING; 1012 + qla_nvme_unregister_remote_port(sess); 1007 1013 } 1008 1014 } 1009 1015 ··· 1161 1155 sess->last_rscn_gen = sess->rscn_gen; 1162 1156 sess->last_login_gen = sess->login_gen; 1163 1157 1164 - if (sess->nvme_flag & NVME_FLAG_REGISTERED && 1165 - !(sess->nvme_flag & NVME_FLAG_DELETING)) { 1166 - sess->nvme_flag |= NVME_FLAG_DELETING; 1167 - schedule_work(&sess->nvme_del_work); 1168 - } else { 1169 - INIT_WORK(&sess->free_work, qlt_free_session_done); 1170 - schedule_work(&sess->free_work); 1171 - } 1158 + INIT_WORK(&sess->free_work, qlt_free_session_done); 1159 + schedule_work(&sess->free_work); 1172 1160 } 1173 1161 EXPORT_SYMBOL(qlt_unreg_sess); 1174 1162
+3 -9
drivers/scsi/scsi.c
··· 86 86 EXPORT_SYMBOL(scsi_logging_level); 87 87 #endif 88 88 89 - /* sd, scsi core and power management need to coordinate flushing async actions */ 90 - ASYNC_DOMAIN(scsi_sd_probe_domain); 91 - EXPORT_SYMBOL(scsi_sd_probe_domain); 92 - 93 89 /* 94 - * Separate domain (from scsi_sd_probe_domain) to maximize the benefit of 95 - * asynchronous system resume operations. It is marked 'exclusive' to avoid 96 - * being included in the async_synchronize_full() that is invoked by 97 - * dpm_resume() 90 + * Domain for asynchronous system resume operations. It is marked 'exclusive' 91 + * to avoid being included in the async_synchronize_full() that is invoked by 92 + * dpm_resume(). 98 93 */ 99 94 ASYNC_DOMAIN_EXCLUSIVE(scsi_sd_pm_domain); 100 95 EXPORT_SYMBOL(scsi_sd_pm_domain); ··· 816 821 scsi_exit_devinfo(); 817 822 scsi_exit_procfs(); 818 823 scsi_exit_queue(); 819 - async_unregister_domain(&scsi_sd_probe_domain); 820 824 } 821 825 822 826 subsys_initcall(init_scsi);
+1
drivers/scsi/scsi_debugfs.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 struct request; 2 3 struct seq_file; 3 4
+24 -2
drivers/scsi/scsi_error.c
··· 1055 1055 struct scsi_device *sdev = scmd->device; 1056 1056 struct Scsi_Host *shost = sdev->host; 1057 1057 DECLARE_COMPLETION_ONSTACK(done); 1058 - unsigned long timeleft = timeout; 1058 + unsigned long timeleft = timeout, delay; 1059 1059 struct scsi_eh_save ses; 1060 1060 const unsigned long stall_for = msecs_to_jiffies(100); 1061 1061 int rtn; ··· 1066 1066 1067 1067 scsi_log_send(scmd); 1068 1068 scmd->scsi_done = scsi_eh_done; 1069 - rtn = shost->hostt->queuecommand(shost, scmd); 1069 + 1070 + /* 1071 + * Lock sdev->state_mutex to avoid that scsi_device_quiesce() can 1072 + * change the SCSI device state after we have examined it and before 1073 + * .queuecommand() is called. 1074 + */ 1075 + mutex_lock(&sdev->state_mutex); 1076 + while (sdev->sdev_state == SDEV_BLOCK && timeleft > 0) { 1077 + mutex_unlock(&sdev->state_mutex); 1078 + SCSI_LOG_ERROR_RECOVERY(5, sdev_printk(KERN_DEBUG, sdev, 1079 + "%s: state %d <> %d\n", __func__, sdev->sdev_state, 1080 + SDEV_BLOCK)); 1081 + delay = min(timeleft, stall_for); 1082 + timeleft -= delay; 1083 + msleep(jiffies_to_msecs(delay)); 1084 + mutex_lock(&sdev->state_mutex); 1085 + } 1086 + if (sdev->sdev_state != SDEV_BLOCK) 1087 + rtn = shost->hostt->queuecommand(shost, scmd); 1088 + else 1089 + rtn = SCSI_MLQUEUE_DEVICE_BUSY; 1090 + mutex_unlock(&sdev->state_mutex); 1091 + 1070 1092 if (rtn) { 1071 1093 if (timeleft > stall_for) { 1072 1094 scsi_eh_restore_cmnd(scmd, &ses);
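The scsi_send_eh_cmnd() change above waits out an SDEV_BLOCK state by sleeping in 100 ms slices and charging each slice against the overall timeout (`delay = min(timeleft, stall_for); timeleft -= delay`). The budget arithmetic can be sketched on its own as a hypothetical helper, with jiffies replaced by plain milliseconds:

```c
#include <assert.h>

/* Count how many polling rounds fit in a time budget when each round
 * stalls for at most stall_for_ms, mirroring the loop's
 *     delay = min(timeleft, stall_for); timeleft -= delay;
 * bookkeeping. Illustrative only; the kernel loop also rechecks
 * sdev_state under sdev->state_mutex each round. */
static int eh_poll_rounds(long timeleft_ms, long stall_for_ms)
{
    int rounds = 0;

    while (timeleft_ms > 0) {
        long delay = timeleft_ms < stall_for_ms ? timeleft_ms : stall_for_ms;

        timeleft_ms -= delay;
        rounds++;
    }
    return rounds;
}
```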
-4
drivers/scsi/scsi_lib.c
··· 2616 2616 * a legal transition). When the device is in this state, command processing 2617 2617 * is paused until the device leaves the SDEV_BLOCK state. See also 2618 2618 * scsi_internal_device_unblock(). 2619 - * 2620 - * To do: avoid that scsi_send_eh_cmnd() calls queuecommand() after 2621 - * scsi_internal_device_block() has blocked a SCSI device and also 2622 - * remove the rport mutex lock and unlock calls from srp_queuecommand(). 2623 2619 */ 2624 2620 static int scsi_internal_device_block(struct scsi_device *sdev) 2625 2621 {
+1 -5
drivers/scsi/scsi_pm.c
··· 176 176 177 177 static int scsi_bus_prepare(struct device *dev) 178 178 { 179 - if (scsi_is_sdev_device(dev)) { 180 - /* sd probing uses async_schedule. Wait until it finishes. */ 181 - async_synchronize_full_domain(&scsi_sd_probe_domain); 182 - 183 - } else if (scsi_is_host_device(dev)) { 179 + if (scsi_is_host_device(dev)) { 184 180 /* Wait until async scanning is finished */ 185 181 scsi_complete_async_scans(); 186 182 }
-1
drivers/scsi/scsi_priv.h
··· 175 175 #endif /* CONFIG_PM */ 176 176 177 177 extern struct async_domain scsi_sd_pm_domain; 178 - extern struct async_domain scsi_sd_probe_domain; 179 178 180 179 /* scsi_dh.c */ 181 180 #ifdef CONFIG_SCSI_DH
+6 -1
drivers/scsi/scsi_sysfs.c
··· 767 767 break; 768 768 } 769 769 } 770 - if (!state) 770 + switch (state) { 771 + case SDEV_RUNNING: 772 + case SDEV_OFFLINE: 773 + break; 774 + default: 771 775 return -EINVAL; 776 + } 772 777 773 778 mutex_lock(&sdev->state_mutex); 774 779 ret = scsi_device_set_state(sdev, state);
-3
drivers/scsi/scsi_transport_fc.c
··· 3 3 * FiberChannel transport specific attributes exported to sysfs. 4 4 * 5 5 * Copyright (c) 2003 Silicon Graphics, Inc. All rights reserved. 6 - * 7 - * ======== 8 - * 9 6 * Copyright (C) 2004-2007 James Smart, Emulex Corporation 10 7 * Rewrite for host, target, device, and remote port attributes, 11 8 * statistics, and service functions...
+45 -66
drivers/scsi/sd.c
··· 568 568 .name = "sd", 569 569 .owner = THIS_MODULE, 570 570 .probe = sd_probe, 571 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 571 572 .remove = sd_remove, 572 573 .shutdown = sd_shutdown, 573 574 .pm = &sd_pm_ops, ··· 3253 3252 return 0; 3254 3253 } 3255 3254 3256 - /* 3257 - * The asynchronous part of sd_probe 3258 - */ 3259 - static void sd_probe_async(void *data, async_cookie_t cookie) 3260 - { 3261 - struct scsi_disk *sdkp = data; 3262 - struct scsi_device *sdp; 3263 - struct gendisk *gd; 3264 - u32 index; 3265 - struct device *dev; 3266 - 3267 - sdp = sdkp->device; 3268 - gd = sdkp->disk; 3269 - index = sdkp->index; 3270 - dev = &sdp->sdev_gendev; 3271 - 3272 - gd->major = sd_major((index & 0xf0) >> 4); 3273 - gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00); 3274 - 3275 - gd->fops = &sd_fops; 3276 - gd->private_data = &sdkp->driver; 3277 - gd->queue = sdkp->device->request_queue; 3278 - 3279 - /* defaults, until the device tells us otherwise */ 3280 - sdp->sector_size = 512; 3281 - sdkp->capacity = 0; 3282 - sdkp->media_present = 1; 3283 - sdkp->write_prot = 0; 3284 - sdkp->cache_override = 0; 3285 - sdkp->WCE = 0; 3286 - sdkp->RCD = 0; 3287 - sdkp->ATO = 0; 3288 - sdkp->first_scan = 1; 3289 - sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS; 3290 - 3291 - sd_revalidate_disk(gd); 3292 - 3293 - gd->flags = GENHD_FL_EXT_DEVT; 3294 - if (sdp->removable) { 3295 - gd->flags |= GENHD_FL_REMOVABLE; 3296 - gd->events |= DISK_EVENT_MEDIA_CHANGE; 3297 - gd->event_flags = DISK_EVENT_FLAG_POLL | DISK_EVENT_FLAG_UEVENT; 3298 - } 3299 - 3300 - blk_pm_runtime_init(sdp->request_queue, dev); 3301 - device_add_disk(dev, gd, NULL); 3302 - if (sdkp->capacity) 3303 - sd_dif_config_host(sdkp); 3304 - 3305 - sd_revalidate_disk(gd); 3306 - 3307 - if (sdkp->security) { 3308 - sdkp->opal_dev = init_opal_dev(sdp, &sd_sec_submit); 3309 - if (sdkp->opal_dev) 3310 - sd_printk(KERN_NOTICE, sdkp, "supports TCG Opal\n"); 3311 - } 3312 - 3313 - sd_printk(KERN_NOTICE, 
sdkp, "Attached SCSI %sdisk\n", 3314 - sdp->removable ? "removable " : ""); 3315 - scsi_autopm_put_device(sdp); 3316 - put_device(&sdkp->dev); 3317 - } 3318 - 3319 3255 /** 3320 3256 * sd_probe - called during driver initialization and whenever a 3321 3257 * new scsi device is attached to the system. It is called once ··· 3342 3404 get_device(dev); 3343 3405 dev_set_drvdata(dev, sdkp); 3344 3406 3345 - get_device(&sdkp->dev); /* prevent release before async_schedule */ 3346 - async_schedule_domain(sd_probe_async, sdkp, &scsi_sd_probe_domain); 3407 + gd->major = sd_major((index & 0xf0) >> 4); 3408 + gd->first_minor = ((index & 0xf) << 4) | (index & 0xfff00); 3409 + 3410 + gd->fops = &sd_fops; 3411 + gd->private_data = &sdkp->driver; 3412 + gd->queue = sdkp->device->request_queue; 3413 + 3414 + /* defaults, until the device tells us otherwise */ 3415 + sdp->sector_size = 512; 3416 + sdkp->capacity = 0; 3417 + sdkp->media_present = 1; 3418 + sdkp->write_prot = 0; 3419 + sdkp->cache_override = 0; 3420 + sdkp->WCE = 0; 3421 + sdkp->RCD = 0; 3422 + sdkp->ATO = 0; 3423 + sdkp->first_scan = 1; 3424 + sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS; 3425 + 3426 + sd_revalidate_disk(gd); 3427 + 3428 + gd->flags = GENHD_FL_EXT_DEVT; 3429 + if (sdp->removable) { 3430 + gd->flags |= GENHD_FL_REMOVABLE; 3431 + gd->events |= DISK_EVENT_MEDIA_CHANGE; 3432 + gd->event_flags = DISK_EVENT_FLAG_POLL | DISK_EVENT_FLAG_UEVENT; 3433 + } 3434 + 3435 + blk_pm_runtime_init(sdp->request_queue, dev); 3436 + device_add_disk(dev, gd, NULL); 3437 + if (sdkp->capacity) 3438 + sd_dif_config_host(sdkp); 3439 + 3440 + sd_revalidate_disk(gd); 3441 + 3442 + if (sdkp->security) { 3443 + sdkp->opal_dev = init_opal_dev(sdp, &sd_sec_submit); 3444 + if (sdkp->opal_dev) 3445 + sd_printk(KERN_NOTICE, sdkp, "supports TCG Opal\n"); 3446 + } 3447 + 3448 + sd_printk(KERN_NOTICE, sdkp, "Attached SCSI %sdisk\n", 3449 + sdp->removable ? 
"removable " : ""); 3450 + scsi_autopm_put_device(sdp); 3347 3451 3348 3452 return 0; 3349 3453 ··· 3421 3441 scsi_autopm_get_device(sdkp->device); 3422 3442 3423 3443 async_synchronize_full_domain(&scsi_sd_pm_domain); 3424 - async_synchronize_full_domain(&scsi_sd_probe_domain); 3425 3444 device_del(&sdkp->dev); 3426 3445 del_gendisk(sdkp->disk); 3427 3446 sd_shutdown(dev);
+1 -6
drivers/scsi/ses.c
··· 3 3 * SCSI Enclosure Services 4 4 * 5 5 * Copyright (C) 2008 James Bottomley <James.Bottomley@HansenPartnership.com> 6 - * 7 - **----------------------------------------------------------------------------- 8 - ** 9 - ** 10 - **----------------------------------------------------------------------------- 11 - */ 6 + */ 12 7 13 8 #include <linux/slab.h> 14 9 #include <linux/module.h>
+3 -3
drivers/scsi/st.c
··· 228 228 229 229 230 230 231 - #include "osst_detect.h" 232 231 #ifndef SIGS_FROM_OSST 233 232 #define SIGS_FROM_OSST \ 234 233 {"OnStream", "SC-", "", "osst"}, \ ··· 4266 4267 if (SDp->type != TYPE_TAPE) 4267 4268 return -ENODEV; 4268 4269 if ((stp = st_incompatible(SDp))) { 4269 - sdev_printk(KERN_INFO, SDp, "Found incompatible tape\n"); 4270 4270 sdev_printk(KERN_INFO, SDp, 4271 - "st: The suggested driver is %s.\n", stp); 4271 + "OnStream tapes are no longer supported;\n"); 4272 + sdev_printk(KERN_INFO, SDp, 4273 + "please mail to linux-scsi@vger.kernel.org.\n"); 4272 4274 return -ENODEV; 4273 4275 } 4274 4276
+11
drivers/scsi/storvsc_drv.c
··· 375 375 376 376 static int storvsc_ringbuffer_size = (128 * 1024); 377 377 static u32 max_outstanding_req_per_channel; 378 + static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth); 378 379 379 380 static int storvsc_vcpus_per_sub_channel = 4; 380 381 ··· 1700 1699 .dma_boundary = PAGE_SIZE-1, 1701 1700 .no_write_same = 1, 1702 1701 .track_queue_depth = 1, 1702 + .change_queue_depth = storvsc_change_queue_depth, 1703 1703 }; 1704 1704 1705 1705 enum { ··· 1905 1903 err_out0: 1906 1904 scsi_host_put(host); 1907 1905 return ret; 1906 + } 1907 + 1908 + /* Change a scsi target's queue depth */ 1909 + static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth) 1910 + { 1911 + if (queue_depth > scsi_driver.can_queue) 1912 + queue_depth = scsi_driver.can_queue; 1913 + 1914 + return scsi_change_queue_depth(sdev, queue_depth); 1908 1915 } 1909 1916 1910 1917 static int storvsc_remove(struct hv_device *dev)
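The new storvsc_change_queue_depth() callback simply clamps the requested per-device depth to the host template's `can_queue` before delegating to scsi_change_queue_depth(). The clamp itself, in a hypothetical standalone form:

```c
#include <assert.h>

/* Clamp a requested per-device queue depth to the host's can_queue,
 * as the new storvsc callback does before calling
 * scsi_change_queue_depth(). Illustrative sketch, not driver code. */
static int clamp_queue_depth(int requested, int can_queue)
{
    return requested > can_queue ? can_queue : requested;
}
```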
+20 -3
drivers/scsi/ufs/ufs-qcom.c
··· 3 3 * Copyright (c) 2013-2016, Linux Foundation. All rights reserved. 4 4 */ 5 5 6 + #include <linux/acpi.h> 6 7 #include <linux/time.h> 7 8 #include <linux/of.h> 8 9 #include <linux/platform_device.h> ··· 161 160 { 162 161 int err = 0; 163 162 struct device *dev = host->hba->dev; 163 + 164 + if (has_acpi_companion(dev)) 165 + return 0; 164 166 165 167 err = ufs_qcom_host_clk_get(dev, "rx_lane0_sync_clk", 166 168 &host->rx_l0_sync_clk, false); ··· 1131 1127 __func__, err); 1132 1128 goto out_variant_clear; 1133 1129 } else if (IS_ERR(host->generic_phy)) { 1134 - err = PTR_ERR(host->generic_phy); 1135 - dev_err(dev, "%s: PHY get failed %d\n", __func__, err); 1136 - goto out_variant_clear; 1130 + if (has_acpi_companion(dev)) { 1131 + host->generic_phy = NULL; 1132 + } else { 1133 + err = PTR_ERR(host->generic_phy); 1134 + dev_err(dev, "%s: PHY get failed %d\n", __func__, err); 1135 + goto out_variant_clear; 1136 + } 1137 1137 } 1138 1138 1139 1139 err = ufs_qcom_bus_register(host); ··· 1607 1599 }; 1608 1600 MODULE_DEVICE_TABLE(of, ufs_qcom_of_match); 1609 1601 1602 + #ifdef CONFIG_ACPI 1603 + static const struct acpi_device_id ufs_qcom_acpi_match[] = { 1604 + { "QCOM24A5" }, 1605 + { }, 1606 + }; 1607 + MODULE_DEVICE_TABLE(acpi, ufs_qcom_acpi_match); 1608 + #endif 1609 + 1610 1610 static const struct dev_pm_ops ufs_qcom_pm_ops = { 1611 1611 .suspend = ufshcd_pltfrm_suspend, 1612 1612 .resume = ufshcd_pltfrm_resume, ··· 1631 1615 .name = "ufshcd-qcom", 1632 1616 .pm = &ufs_qcom_pm_ops, 1633 1617 .of_match_table = of_match_ptr(ufs_qcom_of_match), 1618 + .acpi_match_table = ACPI_PTR(ufs_qcom_acpi_match), 1634 1619 }, 1635 1620 }; 1636 1621 module_platform_driver(ufs_qcom_pltform);
+3 -3
drivers/scsi/ufs/ufs-sysfs.c
··· 122 122 { 123 123 unsigned long flags; 124 124 125 - if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT)) 125 + if (!ufshcd_is_auto_hibern8_supported(hba)) 126 126 return; 127 127 128 128 spin_lock_irqsave(hba->host->host_lock, flags); ··· 164 164 { 165 165 struct ufs_hba *hba = dev_get_drvdata(dev); 166 166 167 - if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT)) 167 + if (!ufshcd_is_auto_hibern8_supported(hba)) 168 168 return -EOPNOTSUPP; 169 169 170 170 return snprintf(buf, PAGE_SIZE, "%d\n", ufshcd_ahit_to_us(hba->ahit)); ··· 177 177 struct ufs_hba *hba = dev_get_drvdata(dev); 178 178 unsigned int timer; 179 179 180 - if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT)) 180 + if (!ufshcd_is_auto_hibern8_supported(hba)) 181 181 return -EOPNOTSUPP; 182 182 183 183 if (kstrtouint(buf, 0, &timer))
+4 -2
drivers/scsi/ufs/ufs_bsg.c
··· 122 122 memcpy(&uc, &bsg_request->upiu_req.uc, UIC_CMD_SIZE); 123 123 ret = ufshcd_send_uic_cmd(hba, &uc); 124 124 if (ret) 125 - dev_dbg(hba->dev, 125 + dev_err(hba->dev, 126 126 "send uic cmd: error code %d\n", ret); 127 127 128 128 memcpy(&bsg_reply->upiu_rsp.uc, &uc, UIC_CMD_SIZE); ··· 149 149 out: 150 150 bsg_reply->result = ret; 151 151 job->reply_len = sizeof(struct ufs_bsg_reply); 152 - bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len); 152 + /* complete the job here only if no error */ 153 + if (ret == 0) 154 + bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len); 153 155 154 156 return ret; 155 157 }
+2
drivers/scsi/ufs/ufshcd-pci.c
··· 200 200 static const struct pci_device_id ufshcd_pci_tbl[] = { 201 201 { PCI_VENDOR_ID_SAMSUNG, 0xC00C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0 }, 202 202 { PCI_VDEVICE(INTEL, 0x9DFA), (kernel_ulong_t)&ufs_intel_cnl_hba_vops }, 203 + { PCI_VDEVICE(INTEL, 0x4B41), (kernel_ulong_t)&ufs_intel_cnl_hba_vops }, 204 + { PCI_VDEVICE(INTEL, 0x4B43), (kernel_ulong_t)&ufs_intel_cnl_hba_vops }, 203 205 { } /* terminate list */ 204 206 }; 205 207
+33 -2
drivers/scsi/ufs/ufshcd.c
··· 3908 3908 { 3909 3909 unsigned long flags; 3910 3910 3911 - if (!(hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) || !hba->ahit) 3911 + if (!ufshcd_is_auto_hibern8_supported(hba) || !hba->ahit) 3912 3912 return; 3913 3913 3914 3914 spin_lock_irqsave(hba->host->host_lock, flags); ··· 5255 5255 goto skip_err_handling; 5256 5256 } 5257 5257 if ((hba->saved_err & INT_FATAL_ERRORS) || 5258 + (hba->saved_err & UFSHCD_UIC_HIBERN8_MASK) || 5258 5259 ((hba->saved_err & UIC_ERROR) && 5259 5260 (hba->saved_uic_err & (UFSHCD_UIC_DL_PA_INIT_ERROR | 5260 5261 UFSHCD_UIC_DL_NAC_RECEIVED_ERROR | ··· 5415 5414 __func__, hba->uic_error); 5416 5415 } 5417 5416 5417 + static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba, 5418 + u32 intr_mask) 5419 + { 5420 + if (!ufshcd_is_auto_hibern8_supported(hba)) 5421 + return false; 5422 + 5423 + if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK)) 5424 + return false; 5425 + 5426 + if (hba->active_uic_cmd && 5427 + (hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_ENTER || 5428 + hba->active_uic_cmd->command == UIC_CMD_DME_HIBER_EXIT)) 5429 + return false; 5430 + 5431 + return true; 5432 + } 5433 + 5418 5434 /** 5419 5435 * ufshcd_check_errors - Check for errors that need s/w attention 5420 5436 * @hba: per-adapter instance ··· 5448 5430 ufshcd_update_uic_error(hba); 5449 5431 if (hba->uic_error) 5450 5432 queue_eh_work = true; 5433 + } 5434 + 5435 + if (hba->errors & UFSHCD_UIC_HIBERN8_MASK) { 5436 + dev_err(hba->dev, 5437 + "%s: Auto Hibern8 %s failed - status: 0x%08x, upmcrs: 0x%08x\n", 5438 + __func__, (hba->errors & UIC_HIBERNATE_ENTER) ? 
5439 + "Enter" : "Exit", 5440 + hba->errors, ufshcd_get_upmcrs(hba)); 5441 + queue_eh_work = true; 5451 5442 } 5452 5443 5453 5444 if (queue_eh_work) { ··· 5521 5494 static void ufshcd_sl_intr(struct ufs_hba *hba, u32 intr_status) 5522 5495 { 5523 5496 hba->errors = UFSHCD_ERROR_MASK & intr_status; 5497 + 5498 + if (ufshcd_is_auto_hibern8_error(hba, intr_status)) 5499 + hba->errors |= (UFSHCD_UIC_HIBERN8_MASK & intr_status); 5500 + 5524 5501 if (hba->errors) 5525 5502 ufshcd_check_errors(hba); 5526 5503 ··· 8344 8313 UIC_LINK_HIBERN8_STATE); 8345 8314 8346 8315 /* Set the default auto-hiberate idle timer value to 150 ms */ 8347 - if (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT) { 8316 + if (ufshcd_is_auto_hibern8_supported(hba) && !hba->ahit) { 8348 8317 hba->ahit = FIELD_PREP(UFSHCI_AHIBERN8_TIMER_MASK, 150) | 8349 8318 FIELD_PREP(UFSHCI_AHIBERN8_SCALE_MASK, 3); 8350 8319 }
+5
drivers/scsi/ufs/ufshcd.h
··· 740 740 #endif 741 741 } 742 742 743 + static inline bool ufshcd_is_auto_hibern8_supported(struct ufs_hba *hba) 744 + { 745 + return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT); 746 + } 747 + 743 748 #define ufshcd_writel(hba, val, reg) \ 744 749 writel((val), (hba)->mmio_base + (reg)) 745 750 #define ufshcd_readl(hba, reg) \
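The new ufshcd_is_auto_hibern8_supported() helper wraps a single capability-bit test so the sysfs, error-handling, and init paths stop open-coding the mask. A userspace sketch of the same test (the bit position here is an assumption for illustration; the real MASK_AUTO_HIBERN8_SUPPORT is defined in ufshci.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed mask for the Auto-Hibernation Support capability bit;
 * illustrative only — consult ufshci.h for the real value. */
#define AUTO_HIBERN8_SUPPORT (1u << 23)

static bool auto_hibern8_supported(uint32_t capabilities)
{
    return (capabilities & AUTO_HIBERN8_SUPPORT) != 0;
}
```

Centralizing the test this way means a later change to the mask (or to how support is detected) touches one inline helper instead of every call site.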
+4 -2
drivers/scsi/ufs/ufshci.h
··· 144 144 #define CONTROLLER_FATAL_ERROR 0x10000 145 145 #define SYSTEM_BUS_FATAL_ERROR 0x20000 146 146 147 - #define UFSHCD_UIC_PWR_MASK (UIC_HIBERNATE_ENTER |\ 148 - UIC_HIBERNATE_EXIT |\ 147 + #define UFSHCD_UIC_HIBERN8_MASK (UIC_HIBERNATE_ENTER |\ 148 + UIC_HIBERNATE_EXIT) 149 + 150 + #define UFSHCD_UIC_PWR_MASK (UFSHCD_UIC_HIBERN8_MASK |\ 149 151 UIC_POWER_MODE) 150 152 151 153 #define UFSHCD_UIC_MASK (UIC_COMMAND_COMPL | UFSHCD_UIC_PWR_MASK)
-3
drivers/scsi/virtio_scsi.c
··· 74 74 75 75 u32 num_queues; 76 76 77 - /* If the affinity hint is set for virtqueues */ 78 - bool affinity_hint_set; 79 - 80 77 struct hlist_node node; 81 78 82 79 /* Protected by event_vq lock */
+30 -12
drivers/scsi/wd719x.c
··· 108 108 } 109 109 110 110 if (status != WD719X_INT_NOERRORS) { 111 + u8 sue = wd719x_readb(wd, WD719X_AMR_SCB_ERROR); 112 + /* we get this after wd719x_dev_reset, it's not an error */ 113 + if (sue == WD719X_SUE_TERM) 114 + return 0; 115 + /* we get this after wd719x_bus_reset, it's not an error */ 116 + if (sue == WD719X_SUE_RESET) 117 + return 0; 111 118 dev_err(&wd->pdev->dev, "direct command failed, status 0x%02x, SUE 0x%02x\n", 112 - status, wd719x_readb(wd, WD719X_AMR_SCB_ERROR)); 119 + status, sue); 113 120 return -EIO; 114 121 } 115 122 ··· 135 128 if (wd719x_wait_ready(wd)) 136 129 return -ETIMEDOUT; 137 130 138 - /* make sure we get NO interrupts */ 139 - dev |= WD719X_DISABLE_INT; 131 + /* disable interrupts except for RESET/ABORT (it breaks them) */ 132 + if (opcode != WD719X_CMD_BUSRESET && opcode != WD719X_CMD_ABORT && 133 + opcode != WD719X_CMD_ABORT_TAG && opcode != WD719X_CMD_RESET) 134 + dev |= WD719X_DISABLE_INT; 140 135 wd719x_writeb(wd, WD719X_AMR_CMD_PARAM, dev); 141 136 wd719x_writeb(wd, WD719X_AMR_CMD_PARAM_2, lun); 142 137 wd719x_writeb(wd, WD719X_AMR_CMD_PARAM_3, tag); ··· 474 465 spin_lock_irqsave(wd->sh->host_lock, flags); 475 466 result = wd719x_direct_cmd(wd, action, cmd->device->id, 476 467 cmd->device->lun, cmd->tag, scb->phys, 0); 468 + wd719x_finish_cmd(scb, DID_ABORT); 477 469 spin_unlock_irqrestore(wd->sh->host_lock, flags); 478 470 if (result) 479 471 return FAILED; ··· 487 477 int result; 488 478 unsigned long flags; 489 479 struct wd719x *wd = shost_priv(cmd->device->host); 480 + struct wd719x_scb *scb, *tmp; 490 481 491 482 dev_info(&wd->pdev->dev, "%s reset requested\n", 492 483 (opcode == WD719X_CMD_BUSRESET) ? 
"bus" : "device"); ··· 495 484 spin_lock_irqsave(wd->sh->host_lock, flags); 496 485 result = wd719x_direct_cmd(wd, opcode, device, 0, 0, 0, 497 486 WD719X_WAIT_FOR_SCSI_RESET); 487 + /* flush all SCBs (or all for a device if dev_reset) */ 488 + list_for_each_entry_safe(scb, tmp, &wd->active_scbs, list) { 489 + if (opcode == WD719X_CMD_BUSRESET || 490 + scb->cmd->device->id == device) 491 + wd719x_finish_cmd(scb, DID_RESET); 492 + } 498 493 spin_unlock_irqrestore(wd->sh->host_lock, flags); 499 494 if (result) 500 495 return FAILED; ··· 523 506 struct wd719x *wd = shost_priv(cmd->device->host); 524 507 struct wd719x_scb *scb, *tmp; 525 508 unsigned long flags; 526 - int result; 527 509 528 510 dev_info(&wd->pdev->dev, "host reset requested\n"); 529 511 spin_lock_irqsave(wd->sh->host_lock, flags); 530 - /* Try to reinit the RISC */ 531 - if (wd719x_chip_init(wd) == 0) 532 - result = SUCCESS; 533 - else 534 - result = FAILED; 512 + /* stop the RISC */ 513 + if (wd719x_direct_cmd(wd, WD719X_CMD_SLEEP, 0, 0, 0, 0, 514 + WD719X_WAIT_FOR_RISC)) 515 + dev_warn(&wd->pdev->dev, "RISC sleep command failed\n"); 516 + /* disable RISC */ 517 + wd719x_writeb(wd, WD719X_PCI_MODE_SELECT, 0); 535 518 536 519 /* flush all SCBs */ 537 520 list_for_each_entry_safe(scb, tmp, &wd->active_scbs, list) 538 - wd719x_finish_cmd(scb, result); 521 + wd719x_finish_cmd(scb, DID_RESET); 539 522 spin_unlock_irqrestore(wd->sh->host_lock, flags); 540 523 541 - return result; 524 + /* Try to reinit the RISC */ 525 + return wd719x_chip_init(wd) == 0 ? SUCCESS : FAILED; 542 526 } 543 527 544 528 static int wd719x_biosparam(struct scsi_device *sdev, struct block_device *bdev, ··· 691 673 else 692 674 dev_err(&wd->pdev->dev, "card returned invalid SCB pointer\n"); 693 675 } else 694 - dev_warn(&wd->pdev->dev, "direct command 0x%x completed\n", 676 + dev_dbg(&wd->pdev->dev, "direct command 0x%x completed\n", 695 677 regs.bytes.OPC); 696 678 break; 697 679 case WD719X_INT_PIOREADY:
+2 -13
drivers/target/iscsi/iscsi_target_nego.c
··· 152 152 153 153 if (strstr("None", authtype)) 154 154 return 1; 155 - #ifdef CANSRP 156 - else if (strstr("SRP", authtype)) 157 - return srp_main_loop(conn, auth, in_buf, out_buf, 158 - &in_length, out_length); 159 - #endif 160 155 else if (strstr("CHAP", authtype)) 161 156 return chap_main_loop(conn, auth, in_buf, out_buf, 162 157 &in_length, out_length); 163 - else if (strstr("SPKM1", authtype)) 164 - return 2; 165 - else if (strstr("SPKM2", authtype)) 166 - return 2; 167 - else if (strstr("KRB5", authtype)) 168 - return 2; 169 - else 170 - return 2; 158 + /* SRP, SPKM1, SPKM2 and KRB5 are unsupported */ 159 + return 2; 171 160 } 172 161 173 162 static void iscsi_remove_failed_auth_entry(struct iscsi_conn *conn)
+7 -9
drivers/target/target_core_user.c
···
 {
 	struct tcmu_hba *hba = udev->hba->hba_ptr;
 	struct uio_info *info;
-	size_t size, used;
 	char *str;
 
 	info = &udev->uio_info;
-	size = snprintf(NULL, 0, "tcm-user/%u/%s/%s", hba->host_id, udev->name,
-			udev->dev_config);
-	size += 1; /* for \0 */
-	str = kmalloc(size, GFP_KERNEL);
+
+	if (udev->dev_config[0])
+		str = kasprintf(GFP_KERNEL, "tcm-user/%u/%s/%s", hba->host_id,
+				udev->name, udev->dev_config);
+	else
+		str = kasprintf(GFP_KERNEL, "tcm-user/%u/%s", hba->host_id,
+				udev->name);
 	if (!str)
 		return -ENOMEM;
-
-	used = snprintf(str, size, "tcm-user/%u/%s", hba->host_id, udev->name);
-	if (udev->dev_config[0])
-		snprintf(str + used, size - used, "/%s", udev->dev_config);
 
 	/* If the old string exists, free it */
 	kfree(info->name);
+1 -13
include/scsi/fc/fc_fip.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
 /*
  * Copyright 2008 Cisco Systems, Inc. All rights reserved.
- *
- * This program is free software; you may redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; version 2 of the License.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
  */
 #ifndef _FC_FIP_H_
 #define _FC_FIP_H_
+2 -1
include/scsi/fc/fc_ms.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
-/* * Copyright(c) 2011 Intel Corporation. All rights reserved.
+/*
+ * Copyright(c) 2011 Intel Corporation. All rights reserved.
  *
  * Maintained at www.Open-FCoE.org
  */
-2
include/scsi/iscsi_if.h
···
  * Copyright (C) 2005 Dmitry Yusupov
  * Copyright (C) 2005 Alex Aizman
  * maintained by open-iscsi@googlegroups.com
- *
- * See the file COPYING included with this distribution for more details.
  */
 
 #ifndef ISCSI_IF_H
-2
include/scsi/iscsi_proto.h
···
  * Copyright (C) 2005 Dmitry Yusupov
  * Copyright (C) 2005 Alex Aizman
  * maintained by open-iscsi@googlegroups.com
- *
- * See the file COPYING included with this distribution for more details.
  */
 
 #ifndef ISCSI_PROTO_H
-2
include/scsi/libiscsi_tcp.h
···
  * Copyright (C) 2008 Mike Christie
  * Copyright (C) 2008 Red Hat, Inc. All rights reserved.
  * maintained by open-iscsi@googlegroups.com
- *
- * See the file COPYING included with this distribution for more details.
  */
 
 #ifndef LIBISCSI_TCP_H
+2 -3
include/scsi/libsas.h
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * SAS host prototypes and structures header file
  *
···
 	struct work_struct work;
 };
 
-/* Lots of code duplicates this in the SCSI tree, which can be factored out */
-static inline bool sas_dev_type_is_expander(enum sas_device_type type)
+static inline bool dev_is_expander(enum sas_device_type type)
 {
 	return type == SAS_EDGE_EXPANDER_DEVICE ||
 	       type == SAS_FANOUT_EXPANDER_DEVICE;
+1 -1
include/scsi/sas.h
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * SAS structures and definitions header file
  *
+1 -1
include/scsi/scsi_transport.h
···
-/* SPDX-License-Identifier: GPL-2.0-or-later */
+/* SPDX-License-Identifier: GPL-2.0-only */
 /*
  * Transport specific attributes.
  *
-3
include/scsi/scsi_transport_fc.h
···
  * FiberChannel transport specific attributes exported to sysfs.
  *
  * Copyright (c) 2003 Silicon Graphics, Inc. All rights reserved.
- *
- * ========
- *
  * Copyright (C) 2004-2007 James Smart, Emulex Corporation
  *	Rewrite for host, target, device, and remote port attributes,
  *	statistics, and service functions...
-13
include/uapi/scsi/fc/fc_els.h
···
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
-13
include/uapi/scsi/fc/fc_fs.h
···
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
-13
include/uapi/scsi/fc/fc_gs.h
···
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
-13
include/uapi/scsi/fc/fc_ns.h
···
 /*
  * Copyright(c) 2007 Intel Corporation. All rights reserved.
  *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc.,
- * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
- *
  * Maintained at www.Open-FCoE.org
  */
 
-15
include/uapi/scsi/scsi_bsg_fc.h
···
  * FC Transport BSG Interface
  *
  * Copyright (C) 2008 James Smart, Emulex Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  */
 
 #ifndef SCSI_BSG_FC_H
-15
include/uapi/scsi/scsi_netlink.h
···
  * Used for the posting of outbound SCSI transport events
  *
  * Copyright (C) 2006 James Smart, Emulex Corporation
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
- *
  */
 #ifndef SCSI_NETLINK_H
 #define SCSI_NETLINK_H