
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
"Updates to the usual drivers (ufs, lpfc, qla2xxx, mpi3mr, libsas).

The major update (which causes a conflict with block, see below) is
Christoph removing the queue limits and their associated block
helpers.

The remaining patches are assorted minor fixes and deprecated function
updates plus a bit of constification"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (141 commits)
scsi: mpi3mr: Sanitise num_phys
scsi: lpfc: Copyright updates for 14.4.0.2 patches
scsi: lpfc: Update lpfc version to 14.4.0.2
scsi: lpfc: Add support for 32 byte CDBs
scsi: lpfc: Change lpfc_hba hba_flag member into a bitmask
scsi: lpfc: Introduce rrq_list_lock to protect active_rrq_list
scsi: lpfc: Clear deferred RSCN processing flag when driver is unloading
scsi: lpfc: Update logging of protection type for T10 DIF I/O
scsi: lpfc: Change default logging level for unsolicited CT MIB commands
scsi: target: Remove unused list 'device_list'
scsi: iscsi: Remove unused list 'connlist_err'
scsi: ufs: exynos: Add support for Tensor gs101 SoC
scsi: ufs: exynos: Add some pa_dbg_ register offsets into drvdata
scsi: ufs: exynos: Allow max frequencies up to 267Mhz
scsi: ufs: exynos: Add EXYNOS_UFS_OPT_TIMER_TICK_SELECT option
scsi: ufs: exynos: Add EXYNOS_UFS_OPT_UFSPR_SECURE option
scsi: ufs: dt-bindings: exynos: Add gs101 compatible
scsi: qla2xxx: Fix debugfs output for fw_resource_count
scsi: qedf: Ensure the copied buf is NUL terminated
scsi: bfa: Ensure the copied buf is NUL terminated
...

+2179 -2048
+35 -3
Documentation/devicetree/bindings/ufs/samsung,exynos-ufs.yaml
···
   description: |
     Each Samsung UFS host controller instance should have its own node.

-allOf:
-  - $ref: ufs-common.yaml
-
 properties:
   compatible:
     enum:
+      - google,gs101-ufs
       - samsung,exynos7-ufs
       - samsung,exynosautov9-ufs
       - samsung,exynosautov9-ufs-vh
···
       - const: ufsp

   clocks:
+    minItems: 2
     items:
       - description: ufs link core clock
       - description: unipro main clock
+      - description: fmp clock
+      - description: ufs aclk clock
+      - description: ufs pclk clock
+      - description: sysreg clock

   clock-names:
+    minItems: 2
     items:
       - const: core_clk
       - const: sclk_unipro_main
+      - const: fmp
+      - const: aclk
+      - const: pclk
+      - const: sysreg

   phys:
     maxItems: 1
···
   - phy-names
   - clocks
   - clock-names
+
+allOf:
+  - $ref: ufs-common.yaml
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: google,gs101-ufs
+
+    then:
+      properties:
+        clocks:
+          minItems: 6
+
+        clock-names:
+          minItems: 6
+
+    else:
+      properties:
+        clocks:
+          maxItems: 2
+
+        clock-names:
+          maxItems: 2

 unevaluatedProperties: false
+7 -8
Documentation/driver-api/scsi.rst
···
 out of use, the SCSI command set is more widely used than ever to
 communicate with devices over a number of different busses.

-The `SCSI protocol <http://www.t10.org/scsi-3.htm>`__ is a big-endian
+The `SCSI protocol <https://www.t10.org/scsi-3.htm>`__ is a big-endian
 peer-to-peer packet based protocol. SCSI commands are 6, 10, 12, or 16
 bytes long, often followed by an associated data payload.
···
 are the default protocol for storage devices attached to USB, SATA, SAS,
 Fibre Channel, FireWire, and ATAPI devices. SCSI packets are also
 commonly exchanged over Infiniband,
-`I2O <http://i2o.shadowconnect.com/faq.php>`__, TCP/IP
-(`iSCSI <https://en.wikipedia.org/wiki/ISCSI>`__), even `Parallel
+TCP/IP (`iSCSI <https://en.wikipedia.org/wiki/ISCSI>`__), even `Parallel
 ports <http://cyberelk.net/tim/parport/parscsi.html>`__.

 Design of the Linux SCSI subsystem
···

 Infrastructure to provide async events from transports to userspace via
 netlink, using a single NETLINK_SCSITRANSPORT protocol for all
-transports. See `the original patch
-submission <http://marc.info/?l=linux-scsi&m=115507374832500&w=2>`__ for
-more details.
+transports. See `the original patch submission
+<https://lore.kernel.org/linux-scsi/1155070439.6275.5.camel@localhost.localdomain/>`__
+for more details.

 .. kernel-doc:: drivers/scsi/scsi_netlink.c
    :internal:
···
 To be more realistic, the simulated devices have the transport
 attributes of SAS disks.

-For documentation see http://sg.danny.cz/sg/sdebug26.html
+For documentation see http://sg.danny.cz/sg/scsi_debug.html

 todo
 ~~~~

 Parallel (fast/wide/ultra) SCSI, USB, SATA, SAS, Fibre Channel,
-FireWire, ATAPI devices, Infiniband, I2O, Parallel ports,
+FireWire, ATAPI devices, Infiniband, Parallel ports,
 netlink...
+10 -10
Documentation/scsi/scsi_mid_low_api.rst
···
 Documentation
 =============
 There is a SCSI documentation directory within the kernel source tree,
-typically Documentation/scsi . Most documents are in plain
-(i.e. ASCII) text. This file is named scsi_mid_low_api.txt and can be
+typically Documentation/scsi . Most documents are in reStructuredText
+format. This file is named scsi_mid_low_api.rst and can be
 found in that directory. A more recent copy of this document may be found
-at http://web.archive.org/web/20070107183357rn_1/sg.torque.net/scsi/.
-Many LLDs are documented there (e.g. aic7xxx.txt). The SCSI mid-level is
-briefly described in scsi.txt which contains a url to a document
-describing the SCSI subsystem in the lk 2.4 series. Two upper level
-drivers have documents in that directory: st.txt (SCSI tape driver) and
-scsi-generic.txt (for the sg driver).
+at https://docs.kernel.org/scsi/scsi_mid_low_api.html. Many LLDs are
+documented in Documentation/scsi (e.g. aic7xxx.rst). The SCSI mid-level is
+briefly described in scsi.rst which contains a URL to a document describing
+the SCSI subsystem in the Linux Kernel 2.4 series. Two upper level
+drivers have documents in that directory: st.rst (SCSI tape driver) and
+scsi-generic.rst (for the sg driver).

-Some documentation (or urls) for LLDs may be found in the C source code
-or in the same directory as the C source code. For example to find a url
+Some documentation (or URLs) for LLDs may be found in the C source code
+or in the same directory as the C source code. For example to find a URL
 about the USB mass storage driver see the
 /usr/src/linux/drivers/usb/storage directory.
+1 -2
MAINTAINERS
···
 CXLFLASH (IBM Coherent Accelerator Processor Interface CAPI Flash) SCSI DRIVER
 M:	Manoj N. Kumar <manoj@linux.ibm.com>
-M:	Matthew R. Ochs <mrochs@linux.ibm.com>
 M:	Uma Krishnan <ukrishn@linux.ibm.com>
 L:	linux-scsi@vger.kernel.org
-S:	Supported
+S:	Obsolete
 F:	Documentation/arch/powerpc/cxlflash.rst
 F:	drivers/scsi/cxlflash/
 F:	include/uapi/scsi/cxlflash_ioctl.h
-245
block/blk-settings.c
···
 EXPORT_SYMBOL_GPL(queue_limits_set);

 /**
- * blk_queue_bounce_limit - set bounce buffer limit for queue
- * @q: the request queue for the device
- * @bounce: bounce limit to enforce
- *
- * Description:
- *    Force bouncing for ISA DMA ranges or highmem.
- *
- *    DEPRECATED, don't use in new code.
- **/
-void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce bounce)
-{
-	q->limits.bounce = bounce;
-}
-EXPORT_SYMBOL(blk_queue_bounce_limit);
-
-/**
- * blk_queue_max_hw_sectors - set max sectors for a request for this queue
- * @q: the request queue for the device
- * @max_hw_sectors: max hardware sectors in the usual 512b unit
- *
- * Description:
- *    Enables a low level driver to set a hard upper limit,
- *    max_hw_sectors, on the size of requests. max_hw_sectors is set by
- *    the device driver based upon the capabilities of the I/O
- *    controller.
- *
- *    max_dev_sectors is a hard limit imposed by the storage device for
- *    READ/WRITE requests. It is set by the disk driver.
- *
- *    max_sectors is a soft limit imposed by the block layer for
- *    filesystem type requests. This value can be overridden on a
- *    per-device basis in /sys/block/<device>/queue/max_sectors_kb.
- *    The soft limit can not exceed max_hw_sectors.
- **/
-void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_sectors)
-{
-	struct queue_limits *limits = &q->limits;
-	unsigned int max_sectors;
-
-	if ((max_hw_sectors << 9) < PAGE_SIZE) {
-		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
-		pr_info("%s: set to minimum %u\n", __func__, max_hw_sectors);
-	}
-
-	max_hw_sectors = round_down(max_hw_sectors,
-				    limits->logical_block_size >> SECTOR_SHIFT);
-	limits->max_hw_sectors = max_hw_sectors;
-
-	max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
-
-	if (limits->max_user_sectors)
-		max_sectors = min(max_sectors, limits->max_user_sectors);
-	else
-		max_sectors = min(max_sectors, BLK_DEF_MAX_SECTORS_CAP);
-
-	max_sectors = round_down(max_sectors,
-				 limits->logical_block_size >> SECTOR_SHIFT);
-	limits->max_sectors = max_sectors;
-
-	if (!q->disk)
-		return;
-	q->disk->bdi->io_pages = max_sectors >> (PAGE_SHIFT - 9);
-}
-EXPORT_SYMBOL(blk_queue_max_hw_sectors);
-
-/**
  * blk_queue_chunk_sectors - set size of the chunk for this queue
  * @q: the request queue for the device
  * @chunk_sectors: chunk sectors in the usual 512b unit
···
 	q->limits.max_zone_append_sectors = max_sectors;
 }
 EXPORT_SYMBOL_GPL(blk_queue_max_zone_append_sectors);
-
-/**
- * blk_queue_max_segments - set max hw segments for a request for this queue
- * @q: the request queue for the device
- * @max_segments: max number of segments
- *
- * Description:
- *    Enables a low level driver to set an upper limit on the number of
- *    hw data segments in a request.
- **/
-void blk_queue_max_segments(struct request_queue *q, unsigned short max_segments)
-{
-	if (!max_segments) {
-		max_segments = 1;
-		pr_info("%s: set to minimum %u\n", __func__, max_segments);
-	}
-
-	q->limits.max_segments = max_segments;
-}
-EXPORT_SYMBOL(blk_queue_max_segments);
-
-/**
- * blk_queue_max_discard_segments - set max segments for discard requests
- * @q: the request queue for the device
- * @max_segments: max number of segments
- *
- * Description:
- *    Enables a low level driver to set an upper limit on the number of
- *    segments in a discard request.
- **/
-void blk_queue_max_discard_segments(struct request_queue *q,
-		unsigned short max_segments)
-{
-	q->limits.max_discard_segments = max_segments;
-}
-EXPORT_SYMBOL_GPL(blk_queue_max_discard_segments);
-
-/**
- * blk_queue_max_segment_size - set max segment size for blk_rq_map_sg
- * @q: the request queue for the device
- * @max_size: max size of segment in bytes
- *
- * Description:
- *    Enables a low level driver to set an upper limit on the size of a
- *    coalesced segment
- **/
-void blk_queue_max_segment_size(struct request_queue *q, unsigned int max_size)
-{
-	if (max_size < PAGE_SIZE) {
-		max_size = PAGE_SIZE;
-		pr_info("%s: set to minimum %u\n", __func__, max_size);
-	}
-
-	/* see blk_queue_virt_boundary() for the explanation */
-	WARN_ON_ONCE(q->limits.virt_boundary_mask);
-
-	q->limits.max_segment_size = max_size;
-}
-EXPORT_SYMBOL(blk_queue_max_segment_size);

 /**
  * blk_queue_logical_block_size - set logical block size for the queue
···
 	limits->io_opt = opt;
 }
 EXPORT_SYMBOL(blk_limits_io_opt);
-
-/**
- * blk_queue_io_opt - set optimal request size for the queue
- * @q: the request queue for the device
- * @opt: optimal request size in bytes
- *
- * Description:
- *   Storage devices may report an optimal I/O size, which is the
- *   device's preferred unit for sustained I/O. This is rarely reported
- *   for disk drives. For RAID arrays it is usually the stripe width or
- *   the internal track size. A properly aligned multiple of
- *   optimal_io_size is the preferred request size for workloads where
- *   sustained throughput is desired.
- */
-void blk_queue_io_opt(struct request_queue *q, unsigned int opt)
-{
-	blk_limits_io_opt(&q->limits, opt);
-	if (!q->disk)
-		return;
-	q->disk->bdi->ra_pages =
-		max(queue_io_opt(q) * 2 / PAGE_SIZE, VM_READAHEAD_PAGES);
-}
-EXPORT_SYMBOL(blk_queue_io_opt);

 static int queue_limit_alignment_offset(const struct queue_limits *lim,
 		sector_t sector)
···
 EXPORT_SYMBOL(blk_queue_update_dma_pad);

 /**
- * blk_queue_segment_boundary - set boundary rules for segment merging
- * @q: the request queue for the device
- * @mask: the memory boundary mask
- **/
-void blk_queue_segment_boundary(struct request_queue *q, unsigned long mask)
-{
-	if (mask < PAGE_SIZE - 1) {
-		mask = PAGE_SIZE - 1;
-		pr_info("%s: set to minimum %lx\n", __func__, mask);
-	}
-
-	q->limits.seg_boundary_mask = mask;
-}
-EXPORT_SYMBOL(blk_queue_segment_boundary);
-
-/**
- * blk_queue_virt_boundary - set boundary rules for bio merging
- * @q: the request queue for the device
- * @mask: the memory boundary mask
- **/
-void blk_queue_virt_boundary(struct request_queue *q, unsigned long mask)
-{
-	q->limits.virt_boundary_mask = mask;
-
-	/*
-	 * Devices that require a virtual boundary do not support scatter/gather
-	 * I/O natively, but instead require a descriptor list entry for each
-	 * page (which might not be idential to the Linux PAGE_SIZE). Because
-	 * of that they are not limited by our notion of "segment size".
-	 */
-	if (mask)
-		q->limits.max_segment_size = UINT_MAX;
-}
-EXPORT_SYMBOL(blk_queue_virt_boundary);
-
-/**
- * blk_queue_dma_alignment - set dma length and memory alignment
- * @q: the request queue for the device
- * @mask: alignment mask
- *
- * description:
- *    set required memory and length alignment for direct dma transactions.
- *    this is used when building direct io requests for the queue.
- *
- **/
-void blk_queue_dma_alignment(struct request_queue *q, int mask)
-{
-	q->limits.dma_alignment = mask;
-}
-EXPORT_SYMBOL(blk_queue_dma_alignment);
-
-/**
- * blk_queue_update_dma_alignment - update dma length and memory alignment
- * @q: the request queue for the device
- * @mask: alignment mask
- *
- * description:
- *    update required memory and length alignment for direct dma transactions.
- *    If the requested alignment is larger than the current alignment, then
- *    the current queue alignment is updated to the new value, otherwise it
- *    is left alone. The design of this is to allow multiple objects
- *    (driver, device, transport etc) to set their respective
- *    alignments without having them interfere.
- *
- **/
-void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
-{
-	BUG_ON(mask > PAGE_SIZE);
-
-	if (mask > q->limits.dma_alignment)
-		q->limits.dma_alignment = mask;
-}
-EXPORT_SYMBOL(blk_queue_update_dma_alignment);
-
-/**
  * blk_set_queue_depth - tell the block layer about the device queue depth
  * @q: the request queue for the device
  * @depth: queue depth
···
 	blk_queue_flag_clear(QUEUE_FLAG_FUA, q);
 }
 EXPORT_SYMBOL_GPL(blk_queue_write_cache);
-
-/**
- * blk_queue_can_use_dma_map_merging - configure queue for merging segments.
- * @q: the request queue for the device
- * @dev: the device pointer for dma
- *
- * Tell the block layer about merging the segments by dma map of @q.
- */
-bool blk_queue_can_use_dma_map_merging(struct request_queue *q,
-				       struct device *dev)
-{
-	unsigned long boundary = dma_get_merge_boundary(dev);
-
-	if (!boundary)
-		return false;
-
-	/* No need to update max_segment_size. see blk_queue_virt_boundary() */
-	blk_queue_virt_boundary(q, boundary);
-
-	return true;
-}
-EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);

 /**
  * disk_set_zoned - inidicate a zoned device
+4 -2
block/bsg-lib.c
···
  * bsg_setup_queue - Create and add the bsg hooks so we can receive requests
  * @dev: device to attach bsg device to
  * @name: device to give bsg device
+ * @lim: queue limits for the bsg queue
  * @job_fn: bsg job handler
  * @timeout: timeout handler function pointer
  * @dd_job_size: size of LLD data needed for each job
  */
 struct request_queue *bsg_setup_queue(struct device *dev, const char *name,
-		bsg_job_fn *job_fn, bsg_timeout_fn *timeout, int dd_job_size)
+		struct queue_limits *lim, bsg_job_fn *job_fn,
+		bsg_timeout_fn *timeout, int dd_job_size)
 {
 	struct bsg_set *bset;
 	struct blk_mq_tag_set *set;
···
 	if (blk_mq_alloc_tag_set(set))
 		goto out_tag_set;

-	q = blk_mq_alloc_queue(set, NULL, NULL);
+	q = blk_mq_alloc_queue(set, lim, NULL);
 	if (IS_ERR(q)) {
 		ret = PTR_ERR(q);
 		goto out_queue;
+1 -1
drivers/ata/ahci.h
···
 	.sdev_groups		= ahci_sdev_groups,			\
 	.change_queue_depth	= ata_scsi_change_queue_depth,		\
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,			\
-	.slave_configure	= ata_scsi_slave_config
+	.device_configure	= ata_scsi_device_configure

 extern struct ata_port_operations ahci_ops;
 extern struct ata_port_operations ahci_platform_ops;
+137 -50
drivers/ata/libata-sata.c
···
 	    ata_scsi_lpm_show, ata_scsi_lpm_store);
 EXPORT_SYMBOL_GPL(dev_attr_link_power_management_policy);

+/**
+ * ata_ncq_prio_supported - Check if device supports NCQ Priority
+ * @ap: ATA port of the target device
+ * @sdev: SCSI device
+ * @supported: Address of a boolean to store the result
+ *
+ * Helper to check if device supports NCQ Priority feature.
+ *
+ * Context: Any context. Takes and releases @ap->lock.
+ *
+ * Return:
+ * * %0 - OK. Status is stored into @supported
+ * * %-ENODEV - Failed to find the ATA device
+ */
+int ata_ncq_prio_supported(struct ata_port *ap, struct scsi_device *sdev,
+			   bool *supported)
+{
+	struct ata_device *dev;
+	unsigned long flags;
+	int rc = 0;
+
+	spin_lock_irqsave(ap->lock, flags);
+	dev = ata_scsi_find_dev(ap, sdev);
+	if (!dev)
+		rc = -ENODEV;
+	else
+		*supported = dev->flags & ATA_DFLAG_NCQ_PRIO;
+	spin_unlock_irqrestore(ap->lock, flags);
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(ata_ncq_prio_supported);
+
 static ssize_t ata_ncq_prio_supported_show(struct device *device,
 					   struct device_attribute *attr,
 					   char *buf)
 {
 	struct scsi_device *sdev = to_scsi_device(device);
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
-	struct ata_device *dev;
-	bool ncq_prio_supported;
-	int rc = 0;
+	bool supported;
+	int rc;

-	spin_lock_irq(ap->lock);
-	dev = ata_scsi_find_dev(ap, sdev);
-	if (!dev)
-		rc = -ENODEV;
-	else
-		ncq_prio_supported = dev->flags & ATA_DFLAG_NCQ_PRIO;
-	spin_unlock_irq(ap->lock);
+	rc = ata_ncq_prio_supported(ap, sdev, &supported);
+	if (rc)
+		return rc;

-	return rc ? rc : sysfs_emit(buf, "%u\n", ncq_prio_supported);
+	return sysfs_emit(buf, "%d\n", supported);
 }

 DEVICE_ATTR(ncq_prio_supported, S_IRUGO, ata_ncq_prio_supported_show, NULL);
 EXPORT_SYMBOL_GPL(dev_attr_ncq_prio_supported);
+
+/**
+ * ata_ncq_prio_enabled - Check if NCQ Priority is enabled
+ * @ap: ATA port of the target device
+ * @sdev: SCSI device
+ * @enabled: Address of a boolean to store the result
+ *
+ * Helper to check if NCQ Priority feature is enabled.
+ *
+ * Context: Any context. Takes and releases @ap->lock.
+ *
+ * Return:
+ * * %0 - OK. Status is stored into @enabled
+ * * %-ENODEV - Failed to find the ATA device
+ */
+int ata_ncq_prio_enabled(struct ata_port *ap, struct scsi_device *sdev,
+			 bool *enabled)
+{
+	struct ata_device *dev;
+	unsigned long flags;
+	int rc = 0;
+
+	spin_lock_irqsave(ap->lock, flags);
+	dev = ata_scsi_find_dev(ap, sdev);
+	if (!dev)
+		rc = -ENODEV;
+	else
+		*enabled = dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED;
+	spin_unlock_irqrestore(ap->lock, flags);
+
+	return rc;
+}
+EXPORT_SYMBOL_GPL(ata_ncq_prio_enabled);

 static ssize_t ata_ncq_prio_enable_show(struct device *device,
 					struct device_attribute *attr,
···
 {
 	struct scsi_device *sdev = to_scsi_device(device);
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
-	struct ata_device *dev;
-	bool ncq_prio_enable;
-	int rc = 0;
+	bool enabled;
+	int rc;

-	spin_lock_irq(ap->lock);
-	dev = ata_scsi_find_dev(ap, sdev);
-	if (!dev)
-		rc = -ENODEV;
-	else
-		ncq_prio_enable = dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLED;
-	spin_unlock_irq(ap->lock);
-
-	return rc ? rc : sysfs_emit(buf, "%u\n", ncq_prio_enable);
-}
-
-static ssize_t ata_ncq_prio_enable_store(struct device *device,
-					 struct device_attribute *attr,
-					 const char *buf, size_t len)
-{
-	struct scsi_device *sdev = to_scsi_device(device);
-	struct ata_port *ap;
-	struct ata_device *dev;
-	long int input;
-	int rc = 0;
-
-	rc = kstrtol(buf, 10, &input);
+	rc = ata_ncq_prio_enabled(ap, sdev, &enabled);
 	if (rc)
 		return rc;
-	if ((input < 0) || (input > 1))
-		return -EINVAL;

-	ap = ata_shost_to_port(sdev->host);
+	return sysfs_emit(buf, "%d\n", enabled);
+}
+
+/**
+ * ata_ncq_prio_enable - Enable/disable NCQ Priority
+ * @ap: ATA port of the target device
+ * @sdev: SCSI device
+ * @enable: true - enable NCQ Priority, false - disable NCQ Priority
+ *
+ * Helper to enable/disable NCQ Priority feature.
+ *
+ * Context: Any context. Takes and releases @ap->lock.
+ *
+ * Return:
+ * * %0 - OK. Status is stored into @enabled
+ * * %-ENODEV - Failed to find the ATA device
+ * * %-EINVAL - NCQ Priority is not supported or CDL is enabled
+ */
+int ata_ncq_prio_enable(struct ata_port *ap, struct scsi_device *sdev,
+			bool enable)
+{
+	struct ata_device *dev;
+	unsigned long flags;
+	int rc = 0;
+
+	spin_lock_irqsave(ap->lock, flags);
+
 	dev = ata_scsi_find_dev(ap, sdev);
-	if (unlikely(!dev))
-		return -ENODEV;
-
-	spin_lock_irq(ap->lock);
+	if (!dev) {
+		rc = -ENODEV;
+		goto unlock;
+	}

 	if (!(dev->flags & ATA_DFLAG_NCQ_PRIO)) {
 		rc = -EINVAL;
 		goto unlock;
 	}

-	if (input) {
+	if (enable) {
 		if (dev->flags & ATA_DFLAG_CDL_ENABLED) {
 			ata_dev_err(dev,
 				"CDL must be disabled to enable NCQ priority\n");
···
 	}

 unlock:
-	spin_unlock_irq(ap->lock);
+	spin_unlock_irqrestore(ap->lock, flags);

-	return rc ? rc : len;
+	return rc;
+}
+EXPORT_SYMBOL_GPL(ata_ncq_prio_enable);
+
+static ssize_t ata_ncq_prio_enable_store(struct device *device,
+					 struct device_attribute *attr,
+					 const char *buf, size_t len)
+{
+	struct scsi_device *sdev = to_scsi_device(device);
+	struct ata_port *ap = ata_shost_to_port(sdev->host);
+	bool enable;
+	int rc;
+
+	rc = kstrtobool(buf, &enable);
+	if (rc)
+		return rc;
+
+	rc = ata_ncq_prio_enable(ap, sdev, enable);
+	if (rc)
+		return rc;
+
+	return len;
 }

 DEVICE_ATTR(ncq_prio_enable, S_IRUGO | S_IWUSR,
···
 EXPORT_SYMBOL_GPL(ata_sas_tport_delete);

 /**
- * ata_sas_slave_configure - Default slave_config routine for libata devices
+ * ata_sas_device_configure - Default device_configure routine for libata
+ *			      devices
  * @sdev: SCSI device to configure
+ * @lim: queue limits
  * @ap: ATA port to which SCSI device is attached
  *
  * RETURNS:
  *	Zero.
  */

-int ata_sas_slave_configure(struct scsi_device *sdev, struct ata_port *ap)
+int ata_sas_device_configure(struct scsi_device *sdev, struct queue_limits *lim,
+			     struct ata_port *ap)
 {
 	ata_scsi_sdev_config(sdev);

-	return ata_scsi_dev_config(sdev, ap->link.device);
+	return ata_scsi_dev_config(sdev, lim, ap->link.device);
 }
-EXPORT_SYMBOL_GPL(ata_sas_slave_configure);
+EXPORT_SYMBOL_GPL(ata_sas_device_configure);

 /**
  * ata_sas_queuecmd - Issue SCSI cdb to libata-managed device
+11 -8
drivers/ata/libata-scsi.c
···
 }
 EXPORT_SYMBOL_GPL(ata_scsi_dma_need_drain);

-int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev)
+int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
+		struct ata_device *dev)
 {
 	struct request_queue *q = sdev->request_queue;
 	int depth = 1;
···
 	/* configure max sectors */
 	dev->max_sectors = min(dev->max_sectors, sdev->host->max_sectors);
-	blk_queue_max_hw_sectors(q, dev->max_sectors);
+	lim->max_hw_sectors = dev->max_sectors;

 	if (dev->class == ATA_DEV_ATAPI) {
 		sdev->sector_size = ATA_SECT_SIZE;
···
 		blk_queue_update_dma_pad(q, ATA_DMA_PAD_SZ - 1);

 		/* make room for appending the drain */
-		blk_queue_max_segments(q, queue_max_segments(q) - 1);
+		lim->max_segments--;

 		sdev->dma_drain_len = ATAPI_MAX_DRAIN;
 		sdev->dma_drain_buf = kmalloc(sdev->dma_drain_len, GFP_NOIO);
···
 			"sector_size=%u > PAGE_SIZE, PIO may malfunction\n",
 			sdev->sector_size);

-	blk_queue_update_dma_alignment(q, sdev->sector_size - 1);
+	lim->dma_alignment = sdev->sector_size - 1;

 	if (dev->flags & ATA_DFLAG_AN)
 		set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);
···
 EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);

 /**
- *	ata_scsi_slave_config - Set SCSI device attributes
+ *	ata_scsi_device_configure - Set SCSI device attributes
  *	@sdev: SCSI device to examine
+ *	@lim: queue limits
  *
  *	This is called before we actually start reading
  *	and writing to the device, to configure certain
···
  *	Defined by SCSI layer.  We don't really care.
  */

-int ata_scsi_slave_config(struct scsi_device *sdev)
+int ata_scsi_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct ata_device *dev = __ata_scsi_find_dev(ap, sdev);

 	if (dev)
-		return ata_scsi_dev_config(sdev, dev);
+		return ata_scsi_dev_config(sdev, lim, dev);

 	return 0;
 }
-EXPORT_SYMBOL_GPL(ata_scsi_slave_config);
+EXPORT_SYMBOL_GPL(ata_scsi_device_configure);

 /**
  *	ata_scsi_slave_destroy - SCSI device is about to be destroyed
+2 -1
drivers/ata/libata.h
···
 extern int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel,
 			      unsigned int id, u64 lun);
 void ata_scsi_sdev_config(struct scsi_device *sdev);
-int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev);
+int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
+		struct ata_device *dev);
 int __ata_scsi_queuecmd(struct scsi_cmnd *scmd, struct ata_device *dev);

 /* libata-eh.c */
+6 -5
drivers/ata/pata_macio.c
···
 /* Hook the standard slave config to fixup some HW related alignment
  * restrictions
  */
-static int pata_macio_slave_config(struct scsi_device *sdev)
+static int pata_macio_device_configure(struct scsi_device *sdev,
+				       struct queue_limits *lim)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct pata_macio_priv *priv = ap->private_data;
···
 	int rc;

 	/* First call original */
-	rc = ata_scsi_slave_config(sdev);
+	rc = ata_scsi_device_configure(sdev, lim);
 	if (rc)
 		return rc;
···

 	/* OHare has issues with non cache aligned DMA on some chipsets */
 	if (priv->kind == controller_ohare) {
-		blk_queue_update_dma_alignment(sdev->request_queue, 31);
+		lim->dma_alignment = 31;
 		blk_queue_update_dma_pad(sdev->request_queue, 31);

 		/* Tell the world about it */
···
 	/* Shasta and K2 seem to have "issues" with reads ... */
 	if (priv->kind == controller_sh_ata6 || priv->kind == controller_k2_ata6) {
 		/* Allright these are bad, apply restrictions */
-		blk_queue_update_dma_alignment(sdev->request_queue, 15);
+		lim->dma_alignment = 15;
 		blk_queue_update_dma_pad(sdev->request_queue, 15);

 		/* We enable MWI and hack cache line size directly here, this
···
 	 * use 64K minus 256
 	 */
 	.max_segment_size	= MAX_DBDMA_SEG,
-	.slave_configure	= pata_macio_slave_config,
+	.device_configure	= pata_macio_device_configure,
 	.sdev_groups		= ata_common_sdev_groups,
 	.can_queue		= ATA_DEF_QUEUE,
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,
+1 -1
drivers/ata/sata_mv.c
···
 	.sdev_groups		= ata_ncq_sdev_groups,
 	.change_queue_depth	= ata_scsi_change_queue_depth,
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,
-	.slave_configure	= ata_scsi_slave_config
+	.device_configure	= ata_scsi_device_configure
 };

 static struct ata_port_operations mv5_ops = {
+14 -10
drivers/ata/sata_nv.c
···
 static void nv_nf2_thaw(struct ata_port *ap);
 static void nv_ck804_freeze(struct ata_port *ap);
 static void nv_ck804_thaw(struct ata_port *ap);
-static int nv_adma_slave_config(struct scsi_device *sdev);
+static int nv_adma_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim);
 static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc);
 static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc);
 static unsigned int nv_adma_qc_issue(struct ata_queued_cmd *qc);
···
 static void nv_mcp55_thaw(struct ata_port *ap);
 static void nv_mcp55_freeze(struct ata_port *ap);
 static void nv_swncq_error_handler(struct ata_port *ap);
-static int nv_swncq_slave_config(struct scsi_device *sdev);
+static int nv_swncq_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim);
 static int nv_swncq_port_start(struct ata_port *ap);
 static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc);
 static void nv_swncq_fill_sg(struct ata_queued_cmd *qc);
···
 	.can_queue		= NV_ADMA_MAX_CPBS,
 	.sg_tablesize		= NV_ADMA_SGTBL_TOTAL_LEN,
 	.dma_boundary		= NV_ADMA_DMA_BOUNDARY,
-	.slave_configure	= nv_adma_slave_config,
+	.device_configure	= nv_adma_device_configure,
 	.sdev_groups		= ata_ncq_sdev_groups,
 	.change_queue_depth	= ata_scsi_change_queue_depth,
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,
···
 	.can_queue		= ATA_MAX_QUEUE - 1,
 	.sg_tablesize		= LIBATA_MAX_PRD,
 	.dma_boundary		= ATA_DMA_BOUNDARY,
-	.slave_configure	= nv_swncq_slave_config,
+	.device_configure	= nv_swncq_device_configure,
 	.sdev_groups		= ata_ncq_sdev_groups,
 	.change_queue_depth	= ata_scsi_change_queue_depth,
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,
···
 	pp->flags &= ~NV_ADMA_PORT_REGISTER_MODE;
 }

-static int nv_adma_slave_config(struct scsi_device *sdev)
+static int nv_adma_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct nv_adma_port_priv *pp = ap->private_data;
···
 	int adma_enable;
 	u32 current_reg, new_reg, config_mask;

-	rc = ata_scsi_slave_config(sdev);
+	rc = ata_scsi_device_configure(sdev, lim);

 	if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun)
 		/* Not a proper libata device, ignore */
···
 		rc = dma_set_mask(&pdev->dev, pp->adma_dma_mask);
 	}

-	blk_queue_segment_boundary(sdev->request_queue, segment_boundary);
-	blk_queue_max_segments(sdev->request_queue, sg_tablesize);
+	lim->seg_boundary_mask = segment_boundary;
+	lim->max_segments = sg_tablesize;
 	ata_port_info(ap,
 		"DMA mask 0x%llX, segment boundary 0x%lX, hw segs %hu\n",
 		(unsigned long long)*ap->host->dev->dma_mask,
···
 	writel(~0x0, mmio + NV_INT_STATUS_MCP55);
 }

-static int nv_swncq_slave_config(struct scsi_device *sdev)
+static int nv_swncq_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct ata_port *ap = ata_shost_to_port(sdev->host);
 	struct pci_dev *pdev = to_pci_dev(ap->host->dev);
···
 	u8 check_maxtor = 0;
 	unsigned char model_num[ATA_ID_PROD_LEN + 1];

-	rc = ata_scsi_slave_config(sdev);
+	rc = ata_scsi_device_configure(sdev, lim);
 	if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun)
 		/* Not a proper libata device, ignore */
 		return rc;
+1 -1
drivers/ata/sata_sil24.c
··· 381 381 .tag_alloc_policy = BLK_TAG_ALLOC_FIFO, 382 382 .sdev_groups = ata_ncq_sdev_groups, 383 383 .change_queue_depth = ata_scsi_change_queue_depth, 384 - .slave_configure = ata_scsi_slave_config 384 + .device_configure = ata_scsi_device_configure 385 385 }; 386 386 387 387 static struct ata_port_operations sil24_ops = {
+4 -9
drivers/firewire/sbp2.c
··· 1500 1500 1501 1501 sdev->allow_restart = 1; 1502 1502 1503 - /* 1504 - * SBP-2 does not require any alignment, but we set it anyway 1505 - * for compatibility with earlier versions of this driver. 1506 - */ 1507 - blk_queue_update_dma_alignment(sdev->request_queue, 4 - 1); 1508 - 1509 1503 if (lu->tgt->workarounds & SBP2_WORKAROUND_INQUIRY_36) 1510 1504 sdev->inquiry_len = 36; 1511 1505 1512 1506 return 0; 1513 1507 } 1514 1508 1515 - static int sbp2_scsi_slave_configure(struct scsi_device *sdev) 1509 + static int sbp2_scsi_device_configure(struct scsi_device *sdev, 1510 + struct queue_limits *lim) 1516 1511 { 1517 1512 struct sbp2_logical_unit *lu = sdev->hostdata; 1518 1513 ··· 1533 1538 sdev->start_stop_pwr_cond = 1; 1534 1539 1535 1540 if (lu->tgt->workarounds & SBP2_WORKAROUND_128K_MAX_TRANS) 1536 - blk_queue_max_hw_sectors(sdev->request_queue, 128 * 1024 / 512); 1541 + lim->max_hw_sectors = 128 * 1024 / 512; 1537 1542 1538 1543 return 0; 1539 1544 } ··· 1591 1596 .proc_name = "sbp2", 1592 1597 .queuecommand = sbp2_scsi_queuecommand, 1593 1598 .slave_alloc = sbp2_scsi_slave_alloc, 1594 - .slave_configure = sbp2_scsi_slave_configure, 1599 + .device_configure = sbp2_scsi_device_configure, 1595 1600 .eh_abort_handler = sbp2_scsi_abort, 1596 1601 .this_id = -1, 1597 1602 .sg_tablesize = SG_ALL,
+1
drivers/message/fusion/mptfc.c
··· 129 129 .sg_tablesize = MPT_SCSI_SG_DEPTH, 130 130 .max_sectors = 8192, 131 131 .cmd_per_lun = 7, 132 + .dma_alignment = 511, 132 133 .shost_groups = mptscsih_host_attr_groups, 133 134 }; 134 135
+1
drivers/message/fusion/mptsas.c
··· 2020 2020 .sg_tablesize = MPT_SCSI_SG_DEPTH, 2021 2021 .max_sectors = 8192, 2022 2022 .cmd_per_lun = 7, 2023 + .dma_alignment = 511, 2023 2024 .shost_groups = mptscsih_host_attr_groups, 2024 2025 .no_write_same = 1, 2025 2026 };
-2
drivers/message/fusion/mptscsih.c
··· 2438 2438 "tagged %d, simple %d\n", 2439 2439 ioc->name,sdev->tagged_supported, sdev->simple_tags)); 2440 2440 2441 - blk_queue_dma_alignment (sdev->request_queue, 512 - 1); 2442 - 2443 2441 return 0; 2444 2442 } 2445 2443
+1
drivers/message/fusion/mptspi.c
··· 843 843 .sg_tablesize = MPT_SCSI_SG_DEPTH, 844 844 .max_sectors = 8192, 845 845 .cmd_per_lun = 7, 846 + .dma_alignment = 511, 846 847 .shost_groups = mptscsih_host_attr_groups, 847 848 }; 848 849
+1 -1
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 1351 1351 (params->drv_rev << 8) | 1352 1352 (params->drv_eng); 1353 1353 strscpy(drv_version.name, params->name, 1354 - MCP_DRV_VER_STR_SIZE - 4); 1354 + sizeof(drv_version.name)); 1355 1355 rc = qed_mcp_send_drv_version(hwfn, hwfn->p_main_ptt, 1356 1356 &drv_version); 1357 1357 if (rc) {
+3 -3
drivers/s390/block/dasd_eckd.c
··· 4561 4561 len_to_track_end = 0; 4562 4562 /* 4563 4563 * A tidaw can address 4k of memory, but must not cross page boundaries 4564 - * We can let the block layer handle this by setting 4565 - * blk_queue_segment_boundary to page boundaries and 4566 - * blk_max_segment_size to page size when setting up the request queue. 4564 + * We can let the block layer handle this by setting seg_boundary_mask 4565 + * to page boundaries and max_segment_size to page size when setting up 4566 + * the request queue. 4567 4567 * For write requests, a TIDAW must not cross track boundaries, because 4568 4568 * we have to set the CBC flag on the last tidaw for each track. 4569 4569 */
-1
drivers/scsi/FlashPoint.c
··· 2631 2631 WRW_HARPOON((port + hp_fiforead), (unsigned short)0x00); 2632 2632 2633 2633 our_target = (unsigned char)(RD_HARPOON(port + hp_select_id) >> 4); 2634 - currTar_Info = &FPT_sccbMgrTbl[p_card][our_target]; 2635 2634 2636 2635 msgRetryCount = 0; 2637 2636 do {
+2 -2
drivers/scsi/Kconfig
··· 53 53 54 54 config SCSI_NETLINK 55 55 bool 56 - default n 56 + default n 57 57 depends on NET 58 58 59 59 config SCSI_PROC_FS ··· 327 327 328 328 config ISCSI_BOOT_SYSFS 329 329 tristate "iSCSI Boot Sysfs Interface" 330 - default n 330 + default n 331 331 help 332 332 This option enables support for exposing iSCSI boot information 333 333 via sysfs to userspace. If you wish to export this information,
+7 -1
drivers/scsi/a3000.c
··· 295 295 release_mem_region(res->start, resource_size(res)); 296 296 } 297 297 298 - static struct platform_driver amiga_a3000_scsi_driver = { 298 + /* 299 + * amiga_a3000_scsi_remove() lives in .exit.text. For drivers registered via 300 + * module_platform_driver_probe() this is ok because they cannot get unbound at 301 + * runtime. So mark the driver struct with __refdata to prevent modpost 302 + * triggering a section mismatch warning. 303 + */ 304 + static struct platform_driver amiga_a3000_scsi_driver __refdata = { 299 305 .remove_new = __exit_p(amiga_a3000_scsi_remove), 300 306 .driver = { 301 307 .name = "amiga-a3000-scsi",
+7 -1
drivers/scsi/a4000t.c
··· 108 108 release_mem_region(res->start, resource_size(res)); 109 109 } 110 110 111 - static struct platform_driver amiga_a4000t_scsi_driver = { 111 + /* 112 + * amiga_a4000t_scsi_remove() lives in .exit.text. For drivers registered via 113 + * module_platform_driver_probe() this is ok because they cannot get unbound at 114 + * runtime. So mark the driver struct with __refdata to prevent modpost 115 + * triggering a section mismatch warning. 116 + */ 117 + static struct platform_driver amiga_a4000t_scsi_driver __refdata = { 112 118 .remove_new = __exit_p(amiga_a4000t_scsi_remove), 113 119 .driver = { 114 120 .name = "amiga-a4000t-scsi",
+1 -7
drivers/scsi/aha152x.c
··· 746 746 /* need to have host registered before triggering any interrupt */ 747 747 list_add_tail(&HOSTDATA(shpnt)->host_list, &aha152x_host_list); 748 748 749 + shpnt->no_highmem = true; 749 750 shpnt->io_port = setup->io_port; 750 751 shpnt->n_io_port = IO_RANGE; 751 752 shpnt->irq = setup->irq; ··· 2941 2940 return 0; 2942 2941 } 2943 2942 2944 - static int aha152x_adjust_queue(struct scsi_device *device) 2945 - { 2946 - blk_queue_bounce_limit(device->request_queue, BLK_BOUNCE_HIGH); 2947 - return 0; 2948 - } 2949 - 2950 2943 static const struct scsi_host_template aha152x_driver_template = { 2951 2944 .module = THIS_MODULE, 2952 2945 .name = AHA152X_REVID, ··· 2956 2961 .this_id = 7, 2957 2962 .sg_tablesize = SG_ALL, 2958 2963 .dma_boundary = PAGE_SIZE - 1, 2959 - .slave_alloc = aha152x_adjust_queue, 2960 2964 .cmd_size = sizeof(struct aha152x_cmd_priv), 2961 2965 }; 2962 2966
+38 -37
drivers/scsi/aic7xxx/Kconfig.aic79xx
··· 8 8 depends on PCI && HAS_IOPORT && SCSI 9 9 select SCSI_SPI_ATTRS 10 10 help 11 - This driver supports all of Adaptec's Ultra 320 PCI-X 12 - based SCSI controllers. 11 + This driver supports all of Adaptec's Ultra 320 PCI-X 12 + based SCSI controllers. 13 13 14 14 config AIC79XX_CMDS_PER_DEVICE 15 15 int "Maximum number of TCQ commands per device" 16 16 depends on SCSI_AIC79XX 17 17 default "32" 18 18 help 19 - Specify the number of commands you would like to allocate per SCSI 20 - device when Tagged Command Queueing (TCQ) is enabled on that device. 19 + Specify the number of commands you would like to allocate per SCSI 20 + device when Tagged Command Queueing (TCQ) is enabled on that device. 21 21 22 - This is an upper bound value for the number of tagged transactions 23 - to be used for any device. The aic7xxx driver will automatically 24 - vary this number based on device behavior. For devices with a 25 - fixed maximum, the driver will eventually lock to this maximum 26 - and display a console message indicating this value. 22 + This is an upper bound value for the number of tagged transactions 23 + to be used for any device. The aic7xxx driver will automatically 24 + vary this number based on device behavior. For devices with a 25 + fixed maximum, the driver will eventually lock to this maximum 26 + and display a console message indicating this value. 27 27 28 - Due to resource allocation issues in the Linux SCSI mid-layer, using 29 - a high number of commands per device may result in memory allocation 30 - failures when many devices are attached to the system. For this reason, 31 - the default is set to 32. Higher values may result in higher performance 32 - on some devices. The upper bound is 253. 0 disables tagged queueing. 28 + Due to resource allocation issues in the Linux SCSI mid-layer, using 29 + a high number of commands per device may result in memory allocation 30 + failures when many devices are attached to the system. 
For this 31 + reason, the default is set to 32. Higher values may result in higher 32 + performance on some devices. The upper bound is 253. 0 disables 33 + tagged queueing. 33 34 34 - Per device tag depth can be controlled via the kernel command line 35 - "tag_info" option. See Documentation/scsi/aic79xx.rst for details. 35 + Per device tag depth can be controlled via the kernel command line 36 + "tag_info" option. See Documentation/scsi/aic79xx.rst for details. 36 37 37 38 config AIC79XX_RESET_DELAY_MS 38 39 int "Initial bus reset delay in milli-seconds" 39 40 depends on SCSI_AIC79XX 40 41 default "5000" 41 42 help 42 - The number of milliseconds to delay after an initial bus reset. 43 - The bus settle delay following all error recovery actions is 44 - dictated by the SCSI layer and is not affected by this value. 43 + The number of milliseconds to delay after an initial bus reset. 44 + The bus settle delay following all error recovery actions is 45 + dictated by the SCSI layer and is not affected by this value. 45 46 46 - Default: 5000 (5 seconds) 47 + Default: 5000 (5 seconds) 47 48 48 49 config AIC79XX_BUILD_FIRMWARE 49 50 bool "Build Adapter Firmware with Kernel Build" 50 51 depends on SCSI_AIC79XX && !PREVENT_FIRMWARE_BUILD 51 52 help 52 - This option should only be enabled if you are modifying the firmware 53 - source to the aic79xx driver and wish to have the generated firmware 54 - include files updated during a normal kernel build. The assembler 55 - for the firmware requires lex and yacc or their equivalents, as well 56 - as the db v1 library. You may have to install additional packages 57 - or modify the assembler Makefile or the files it includes if your 58 - build environment is different than that of the author. 53 + This option should only be enabled if you are modifying the firmware 54 + source to the aic79xx driver and wish to have the generated firmware 55 + include files updated during a normal kernel build. 
The assembler 56 + for the firmware requires lex and yacc or their equivalents, as well 57 + as the db v1 library. You may have to install additional packages 58 + or modify the assembler Makefile or the files it includes if your 59 + build environment is different than that of the author. 59 60 60 61 config AIC79XX_DEBUG_ENABLE 61 62 bool "Compile in Debugging Code" 62 63 depends on SCSI_AIC79XX 63 64 default y 64 65 help 65 - Compile in aic79xx debugging code that can be useful in diagnosing 66 - driver errors. 66 + Compile in aic79xx debugging code that can be useful in diagnosing 67 + driver errors. 67 68 68 69 config AIC79XX_DEBUG_MASK 69 70 int "Debug code enable mask (16383 for all debugging)" 70 71 depends on SCSI_AIC79XX 71 72 default "0" 72 73 help 73 - Bit mask of debug options that is only valid if the 74 - CONFIG_AIC79XX_DEBUG_ENABLE option is enabled. The bits in this mask 75 - are defined in the drivers/scsi/aic7xxx/aic79xx.h - search for the 76 - variable ahd_debug in that file to find them. 74 + Bit mask of debug options that is only valid if the 75 + CONFIG_AIC79XX_DEBUG_ENABLE option is enabled. The bits in this mask 76 + are defined in the drivers/scsi/aic7xxx/aic79xx.h - search for the 77 + variable ahd_debug in that file to find them. 77 78 78 79 config AIC79XX_REG_PRETTY_PRINT 79 80 bool "Decode registers during diagnostics" 80 81 depends on SCSI_AIC79XX 81 82 default y 82 83 help 83 - Compile in register value tables for the output of expanded register 84 - contents in diagnostics. This make it much easier to understand debug 85 - output without having to refer to a data book and/or the aic7xxx.reg 86 - file. 84 + Compile in register value tables for the output of expanded register 85 + contents in diagnostics. This make it much easier to understand debug 86 + output without having to refer to a data book and/or the aic7xxx.reg 87 + file.
+49 -48
drivers/scsi/aic7xxx/Kconfig.aic7xxx
··· 8 8 depends on (PCI || EISA) && HAS_IOPORT && SCSI 9 9 select SCSI_SPI_ATTRS 10 10 help 11 - This driver supports all of Adaptec's Fast through Ultra 160 PCI 12 - based SCSI controllers as well as the aic7770 based EISA and VLB 13 - SCSI controllers (the 274x and 284x series). For AAA and ARO based 14 - configurations, only SCSI functionality is provided. 11 + This driver supports all of Adaptec's Fast through Ultra 160 PCI 12 + based SCSI controllers as well as the aic7770 based EISA and VLB 13 + SCSI controllers (the 274x and 284x series). For AAA and ARO based 14 + configurations, only SCSI functionality is provided. 15 15 16 - To compile this driver as a module, choose M here: the 17 - module will be called aic7xxx. 16 + To compile this driver as a module, choose M here: the 17 + module will be called aic7xxx. 18 18 19 19 config AIC7XXX_CMDS_PER_DEVICE 20 20 int "Maximum number of TCQ commands per device" 21 21 depends on SCSI_AIC7XXX 22 22 default "32" 23 23 help 24 - Specify the number of commands you would like to allocate per SCSI 25 - device when Tagged Command Queueing (TCQ) is enabled on that device. 24 + Specify the number of commands you would like to allocate per SCSI 25 + device when Tagged Command Queueing (TCQ) is enabled on that device. 26 26 27 - This is an upper bound value for the number of tagged transactions 28 - to be used for any device. The aic7xxx driver will automatically 29 - vary this number based on device behavior. For devices with a 30 - fixed maximum, the driver will eventually lock to this maximum 31 - and display a console message indicating this value. 27 + This is an upper bound value for the number of tagged transactions 28 + to be used for any device. The aic7xxx driver will automatically 29 + vary this number based on device behavior. For devices with a 30 + fixed maximum, the driver will eventually lock to this maximum 31 + and display a console message indicating this value. 
32 32 33 - Due to resource allocation issues in the Linux SCSI mid-layer, using 34 - a high number of commands per device may result in memory allocation 35 - failures when many devices are attached to the system. For this reason, 36 - the default is set to 32. Higher values may result in higher performance 37 - on some devices. The upper bound is 253. 0 disables tagged queueing. 33 + Due to resource allocation issues in the Linux SCSI mid-layer, using 34 + a high number of commands per device may result in memory allocation 35 + failures when many devices are attached to the system. For this 36 + reason, the default is set to 32. Higher values may result in higher 37 + performance on some devices. The upper bound is 253. 0 disables tagged 38 + queueing. 38 39 39 - Per device tag depth can be controlled via the kernel command line 40 - "tag_info" option. See Documentation/scsi/aic7xxx.rst for details. 40 + Per device tag depth can be controlled via the kernel command line 41 + "tag_info" option. See Documentation/scsi/aic7xxx.rst for details. 41 42 42 43 config AIC7XXX_RESET_DELAY_MS 43 44 int "Initial bus reset delay in milli-seconds" 44 45 depends on SCSI_AIC7XXX 45 46 default "5000" 46 47 help 47 - The number of milliseconds to delay after an initial bus reset. 48 - The bus settle delay following all error recovery actions is 49 - dictated by the SCSI layer and is not affected by this value. 48 + The number of milliseconds to delay after an initial bus reset. 49 + The bus settle delay following all error recovery actions is 50 + dictated by the SCSI layer and is not affected by this value. 
50 51 51 - Default: 5000 (5 seconds) 52 + Default: 5000 (5 seconds) 52 53 53 54 config AIC7XXX_BUILD_FIRMWARE 54 55 bool "Build Adapter Firmware with Kernel Build" 55 56 depends on SCSI_AIC7XXX && !PREVENT_FIRMWARE_BUILD 56 57 help 57 - This option should only be enabled if you are modifying the firmware 58 - source to the aic7xxx driver and wish to have the generated firmware 59 - include files updated during a normal kernel build. The assembler 60 - for the firmware requires lex and yacc or their equivalents, as well 61 - as the db v1 library. You may have to install additional packages 62 - or modify the assembler Makefile or the files it includes if your 63 - build environment is different than that of the author. 58 + This option should only be enabled if you are modifying the firmware 59 + source to the aic7xxx driver and wish to have the generated firmware 60 + include files updated during a normal kernel build. The assembler 61 + for the firmware requires lex and yacc or their equivalents, as well 62 + as the db v1 library. You may have to install additional packages 63 + or modify the assembler Makefile or the files it includes if your 64 + build environment is different than that of the author. 64 65 65 66 config AIC7XXX_DEBUG_ENABLE 66 67 bool "Compile in Debugging Code" 67 68 depends on SCSI_AIC7XXX 68 69 default y 69 70 help 70 - Compile in aic7xxx debugging code that can be useful in diagnosing 71 - driver errors. 71 + Compile in aic7xxx debugging code that can be useful in diagnosing 72 + driver errors. 72 73 73 74 config AIC7XXX_DEBUG_MASK 74 - int "Debug code enable mask (2047 for all debugging)" 75 - depends on SCSI_AIC7XXX 76 - default "0" 77 - help 78 - Bit mask of debug options that is only valid if the 79 - CONFIG_AIC7XXX_DEBUG_ENABLE option is enabled. The bits in this mask 80 - are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the 81 - variable ahc_debug in that file to find them. 
75 + int "Debug code enable mask (2047 for all debugging)" 76 + depends on SCSI_AIC7XXX 77 + default "0" 78 + help 79 + Bit mask of debug options that is only valid if the 80 + CONFIG_AIC7XXX_DEBUG_ENABLE option is enabled. The bits in this mask 81 + are defined in the drivers/scsi/aic7xxx/aic7xxx.h - search for the 82 + variable ahc_debug in that file to find them. 82 83 83 84 config AIC7XXX_REG_PRETTY_PRINT 84 - bool "Decode registers during diagnostics" 85 - depends on SCSI_AIC7XXX 85 + bool "Decode registers during diagnostics" 86 + depends on SCSI_AIC7XXX 86 87 default y 87 - help 88 - Compile in register value tables for the output of expanded register 89 - contents in diagnostics. This make it much easier to understand debug 90 - output without having to refer to a data book and/or the aic7xxx.reg 91 - file. 88 + help 89 + Compile in register value tables for the output of expanded register 90 + contents in diagnostics. This make it much easier to understand debug 91 + output without having to refer to a data book and/or the aic7xxx.reg 92 + file.
+10 -19
drivers/scsi/aic94xx/aic94xx_init.c
··· 14 14 #include <linux/firmware.h> 15 15 #include <linux/slab.h> 16 16 17 + #include <scsi/sas_ata.h> 17 18 #include <scsi/scsi_host.h> 18 19 19 20 #include "aic94xx.h" ··· 25 24 26 25 /* The format is "version.release.patchlevel" */ 27 26 #define ASD_DRIVER_VERSION "1.0.3" 27 + #define DRV_NAME "aic94xx" 28 28 29 29 static int use_msi = 0; 30 30 module_param_named(use_msi, use_msi, int, S_IRUGO); ··· 36 34 static struct scsi_transport_template *aic94xx_transport_template; 37 35 static int asd_scan_finished(struct Scsi_Host *, unsigned long); 38 36 static void asd_scan_start(struct Scsi_Host *); 37 + static const struct attribute_group *asd_sdev_groups[]; 39 38 40 39 static const struct scsi_host_template aic94xx_sht = { 41 - .module = THIS_MODULE, 42 - /* .name is initialized */ 43 - .name = "aic94xx", 44 - .queuecommand = sas_queuecommand, 45 - .dma_need_drain = ata_scsi_dma_need_drain, 46 - .target_alloc = sas_target_alloc, 47 - .slave_configure = sas_slave_configure, 40 + LIBSAS_SHT_BASE 48 41 .scan_finished = asd_scan_finished, 49 42 .scan_start = asd_scan_start, 50 - .change_queue_depth = sas_change_queue_depth, 51 - .bios_param = sas_bios_param, 52 43 .can_queue = 1, 53 - .this_id = -1, 54 44 .sg_tablesize = SG_ALL, 55 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 56 - .eh_device_reset_handler = sas_eh_device_reset_handler, 57 - .eh_target_reset_handler = sas_eh_target_reset_handler, 58 - .slave_alloc = sas_slave_alloc, 59 - .target_destroy = sas_target_destroy, 60 - .ioctl = sas_ioctl, 61 - #ifdef CONFIG_COMPAT 62 - .compat_ioctl = sas_ioctl, 63 - #endif 64 45 .track_queue_depth = 1, 46 + .sdev_groups = asd_sdev_groups, 65 47 }; 66 48 67 49 static int asd_map_memio(struct asd_ha_struct *asd_ha) ··· 936 950 { 937 951 driver_remove_file(driver, &driver_attr_version); 938 952 } 953 + 954 + static const struct attribute_group *asd_sdev_groups[] = { 955 + &sas_ata_sdev_attr_group, 956 + NULL 957 + }; 939 958 940 959 static struct sas_domain_function_template 
aic94xx_transport_functions = { 941 960 .lldd_dev_found = asd_dev_found,
+7 -1
drivers/scsi/atari_scsi.c
··· 878 878 atari_stram_free(atari_dma_buffer); 879 879 } 880 880 881 - static struct platform_driver atari_scsi_driver = { 881 + /* 882 + * atari_scsi_remove() lives in .exit.text. For drivers registered via 883 + * module_platform_driver_probe() this is ok because they cannot get unbound at 884 + * runtime. So mark the driver struct with __refdata to prevent modpost 885 + * triggering a section mismatch warning. 886 + */ 887 + static struct platform_driver atari_scsi_driver __refdata = { 882 888 .remove_new = __exit_p(atari_scsi_remove), 883 889 .driver = { 884 890 .name = DRV_MODULE_NAME,
+2 -2
drivers/scsi/bfa/bfad_debugfs.c
··· 250 250 unsigned long flags; 251 251 void *kern_buf; 252 252 253 - kern_buf = memdup_user(buf, nbytes); 253 + kern_buf = memdup_user_nul(buf, nbytes); 254 254 if (IS_ERR(kern_buf)) 255 255 return PTR_ERR(kern_buf); 256 256 ··· 317 317 unsigned long flags; 318 318 void *kern_buf; 319 319 320 - kern_buf = memdup_user(buf, nbytes); 320 + kern_buf = memdup_user_nul(buf, nbytes); 321 321 if (IS_ERR(kern_buf)) 322 322 return PTR_ERR(kern_buf); 323 323
+1 -3
drivers/scsi/bnx2fc/bnx2fc_tgt.c
··· 128 128 BNX2FC_TGT_DBG(tgt, "ctx_alloc_failure, " 129 129 "retry ofld..%d\n", i++); 130 130 msleep_interruptible(1000); 131 - if (i > 3) { 132 - i = 0; 131 + if (i > 3) 133 132 goto ofld_err; 134 - } 135 133 goto retry_ofld; 136 134 } 137 135 goto ofld_err;
-3
drivers/scsi/csiostor/csio_init.c
··· 1185 1185 1186 1186 static struct pci_driver csio_pci_driver = { 1187 1187 .name = KBUILD_MODNAME, 1188 - .driver = { 1189 - .owner = THIS_MODULE, 1190 - }, 1191 1188 .id_table = csio_pci_tbl, 1192 1189 .probe = csio_probe_one, 1193 1190 .remove = csio_remove_one,
+3 -3
drivers/scsi/cxlflash/lunmgt.c
··· 216 216 /** 217 217 * cxlflash_manage_lun() - handles LUN management activities 218 218 * @sdev: SCSI device associated with LUN. 219 - * @manage: Manage ioctl data structure. 219 + * @arg: Manage ioctl data structure. 220 220 * 221 221 * This routine is used to notify the driver about a LUN's WWID and associate 222 222 * SCSI devices (sdev) with a global LUN instance. Additionally it serves to ··· 224 224 * 225 225 * Return: 0 on success, -errno on failure 226 226 */ 227 - int cxlflash_manage_lun(struct scsi_device *sdev, 228 - struct dk_cxlflash_manage_lun *manage) 227 + int cxlflash_manage_lun(struct scsi_device *sdev, void *arg) 229 228 { 229 + struct dk_cxlflash_manage_lun *manage = arg; 230 230 struct cxlflash_cfg *cfg = shost_priv(sdev->host); 231 231 struct device *dev = &cfg->dev->dev; 232 232 struct llun_info *lli = NULL;
+8 -10
drivers/scsi/cxlflash/main.c
··· 3280 3280 /** 3281 3281 * cxlflash_lun_provision() - host LUN provisioning handler 3282 3282 * @cfg: Internal structure associated with the host. 3283 - * @lunprov: Kernel copy of userspace ioctl data structure. 3283 + * @arg: Kernel copy of userspace ioctl data structure. 3284 3284 * 3285 3285 * Return: 0 on success, -errno on failure 3286 3286 */ 3287 - static int cxlflash_lun_provision(struct cxlflash_cfg *cfg, 3288 - struct ht_cxlflash_lun_provision *lunprov) 3287 + static int cxlflash_lun_provision(struct cxlflash_cfg *cfg, void *arg) 3289 3288 { 3289 + struct ht_cxlflash_lun_provision *lunprov = arg; 3290 3290 struct afu *afu = cfg->afu; 3291 3291 struct device *dev = &cfg->dev->dev; 3292 3292 struct sisl_ioarcb rcb; ··· 3371 3371 /** 3372 3372 * cxlflash_afu_debug() - host AFU debug handler 3373 3373 * @cfg: Internal structure associated with the host. 3374 - * @afu_dbg: Kernel copy of userspace ioctl data structure. 3374 + * @arg: Kernel copy of userspace ioctl data structure. 3375 3375 * 3376 3376 * For debug requests requiring a data buffer, always provide an aligned 3377 3377 * (cache line) buffer to the AFU to appease any alignment requirements. 
3378 3378 * 3379 3379 * Return: 0 on success, -errno on failure 3380 3380 */ 3381 - static int cxlflash_afu_debug(struct cxlflash_cfg *cfg, 3382 - struct ht_cxlflash_afu_debug *afu_dbg) 3381 + static int cxlflash_afu_debug(struct cxlflash_cfg *cfg, void *arg) 3383 3382 { 3383 + struct ht_cxlflash_afu_debug *afu_dbg = arg; 3384 3384 struct afu *afu = cfg->afu; 3385 3385 struct device *dev = &cfg->dev->dev; 3386 3386 struct sisl_ioarcb rcb; ··· 3494 3494 size_t size; 3495 3495 hioctl ioctl; 3496 3496 } ioctl_tbl[] = { /* NOTE: order matters here */ 3497 - { sizeof(struct ht_cxlflash_lun_provision), 3498 - (hioctl)cxlflash_lun_provision }, 3499 - { sizeof(struct ht_cxlflash_afu_debug), 3500 - (hioctl)cxlflash_afu_debug }, 3497 + { sizeof(struct ht_cxlflash_lun_provision), cxlflash_lun_provision }, 3498 + { sizeof(struct ht_cxlflash_afu_debug), cxlflash_afu_debug }, 3501 3499 }; 3502 3500 3503 3501 /* Hold read semaphore so we can drain if needed */
+19 -21
drivers/scsi/cxlflash/superpipe.c
··· 729 729 return rc; 730 730 } 731 731 732 - int cxlflash_disk_release(struct scsi_device *sdev, 733 - struct dk_cxlflash_release *release) 732 + int cxlflash_disk_release(struct scsi_device *sdev, void *release) 734 733 { 735 734 return _cxlflash_disk_release(sdev, NULL, release); 736 735 } ··· 954 955 return rc; 955 956 } 956 957 957 - static int cxlflash_disk_detach(struct scsi_device *sdev, 958 - struct dk_cxlflash_detach *detach) 958 + static int cxlflash_disk_detach(struct scsi_device *sdev, void *detach) 959 959 { 960 960 return _cxlflash_disk_detach(sdev, NULL, detach); 961 961 } ··· 1303 1305 /** 1304 1306 * cxlflash_disk_attach() - attach a LUN to a context 1305 1307 * @sdev: SCSI device associated with LUN. 1306 - * @attach: Attach ioctl data structure. 1308 + * @arg: Attach ioctl data structure. 1307 1309 * 1308 1310 * Creates a context and attaches LUN to it. A LUN can only be attached 1309 1311 * one time to a context (subsequent attaches for the same context/LUN pair ··· 1312 1314 * 1313 1315 * Return: 0 on success, -errno on failure 1314 1316 */ 1315 - static int cxlflash_disk_attach(struct scsi_device *sdev, 1316 - struct dk_cxlflash_attach *attach) 1317 + static int cxlflash_disk_attach(struct scsi_device *sdev, void *arg) 1317 1318 { 1319 + struct dk_cxlflash_attach *attach = arg; 1318 1320 struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1319 1321 struct device *dev = &cfg->dev->dev; 1320 1322 struct afu *afu = cfg->afu; ··· 1619 1621 /** 1620 1622 * cxlflash_afu_recover() - initiates AFU recovery 1621 1623 * @sdev: SCSI device associated with LUN. 1622 - * @recover: Recover ioctl data structure. 1624 + * @arg: Recover ioctl data structure. 
1623 1625 * 1624 1626 * Only a single recovery is allowed at a time to avoid exhausting CXL 1625 1627 * resources (leading to recovery failure) in the event that we're up ··· 1646 1648 * 1647 1649 * Return: 0 on success, -errno on failure 1648 1650 */ 1649 - static int cxlflash_afu_recover(struct scsi_device *sdev, 1650 - struct dk_cxlflash_recover_afu *recover) 1651 + static int cxlflash_afu_recover(struct scsi_device *sdev, void *arg) 1651 1652 { 1653 + struct dk_cxlflash_recover_afu *recover = arg; 1652 1654 struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1653 1655 struct device *dev = &cfg->dev->dev; 1654 1656 struct llun_info *lli = sdev->hostdata; ··· 1827 1829 /** 1828 1830 * cxlflash_disk_verify() - verifies a LUN is the same and handle size changes 1829 1831 * @sdev: SCSI device associated with LUN. 1830 - * @verify: Verify ioctl data structure. 1832 + * @arg: Verify ioctl data structure. 1831 1833 * 1832 1834 * Return: 0 on success, -errno on failure 1833 1835 */ 1834 - static int cxlflash_disk_verify(struct scsi_device *sdev, 1835 - struct dk_cxlflash_verify *verify) 1836 + static int cxlflash_disk_verify(struct scsi_device *sdev, void *arg) 1836 1837 { 1838 + struct dk_cxlflash_verify *verify = arg; 1837 1839 int rc = 0; 1838 1840 struct ctx_info *ctxi = NULL; 1839 1841 struct cxlflash_cfg *cfg = shost_priv(sdev->host); ··· 2109 2111 size_t size; 2110 2112 sioctl ioctl; 2111 2113 } ioctl_tbl[] = { /* NOTE: order matters here */ 2112 - {sizeof(struct dk_cxlflash_attach), (sioctl)cxlflash_disk_attach}, 2114 + {sizeof(struct dk_cxlflash_attach), cxlflash_disk_attach}, 2113 2115 {sizeof(struct dk_cxlflash_udirect), cxlflash_disk_direct_open}, 2114 - {sizeof(struct dk_cxlflash_release), (sioctl)cxlflash_disk_release}, 2115 - {sizeof(struct dk_cxlflash_detach), (sioctl)cxlflash_disk_detach}, 2116 - {sizeof(struct dk_cxlflash_verify), (sioctl)cxlflash_disk_verify}, 2117 - {sizeof(struct dk_cxlflash_recover_afu), (sioctl)cxlflash_afu_recover}, 2118 - 
{sizeof(struct dk_cxlflash_manage_lun), (sioctl)cxlflash_manage_lun}, 2116 + {sizeof(struct dk_cxlflash_release), cxlflash_disk_release}, 2117 + {sizeof(struct dk_cxlflash_detach), cxlflash_disk_detach}, 2118 + {sizeof(struct dk_cxlflash_verify), cxlflash_disk_verify}, 2119 + {sizeof(struct dk_cxlflash_recover_afu), cxlflash_afu_recover}, 2120 + {sizeof(struct dk_cxlflash_manage_lun), cxlflash_manage_lun}, 2119 2121 {sizeof(struct dk_cxlflash_uvirtual), cxlflash_disk_virtual_open}, 2120 - {sizeof(struct dk_cxlflash_resize), (sioctl)cxlflash_vlun_resize}, 2121 - {sizeof(struct dk_cxlflash_clone), (sioctl)cxlflash_disk_clone}, 2122 + {sizeof(struct dk_cxlflash_resize), cxlflash_vlun_resize}, 2123 + {sizeof(struct dk_cxlflash_clone), cxlflash_disk_clone}, 2122 2124 }; 2123 2125 2124 2126 /* Hold read semaphore so we can drain if needed */
+4 -7
drivers/scsi/cxlflash/superpipe.h
··· 114 114 struct page *err_page; /* One page of all 0xF for error notification */ 115 115 }; 116 116 117 - int cxlflash_vlun_resize(struct scsi_device *sdev, 118 - struct dk_cxlflash_resize *resize); 117 + int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize); 119 118 int _cxlflash_vlun_resize(struct scsi_device *sdev, struct ctx_info *ctxi, 120 119 struct dk_cxlflash_resize *resize); 121 120 122 121 int cxlflash_disk_release(struct scsi_device *sdev, 123 - struct dk_cxlflash_release *release); 122 + void *release); 124 123 int _cxlflash_disk_release(struct scsi_device *sdev, struct ctx_info *ctxi, 125 124 struct dk_cxlflash_release *release); 126 125 127 - int cxlflash_disk_clone(struct scsi_device *sdev, 128 - struct dk_cxlflash_clone *clone); 126 + int cxlflash_disk_clone(struct scsi_device *sdev, void *arg); 129 127 130 128 int cxlflash_disk_virtual_open(struct scsi_device *sdev, void *arg); 131 129 ··· 143 145 144 146 void cxlflash_ba_terminate(struct ba_lun *ba_lun); 145 147 146 - int cxlflash_manage_lun(struct scsi_device *sdev, 147 - struct dk_cxlflash_manage_lun *manage); 148 + int cxlflash_manage_lun(struct scsi_device *sdev, void *manage); 148 149 149 150 int check_state(struct cxlflash_cfg *cfg); 150 151
+4 -5
drivers/scsi/cxlflash/vlun.c
··· 819 819 return rc; 820 820 } 821 821 822 - int cxlflash_vlun_resize(struct scsi_device *sdev, 823 - struct dk_cxlflash_resize *resize) 822 + int cxlflash_vlun_resize(struct scsi_device *sdev, void *resize) 824 823 { 825 824 return _cxlflash_vlun_resize(sdev, NULL, resize); 826 825 } ··· 1177 1178 /** 1178 1179 * cxlflash_disk_clone() - clone a context by making snapshot of another 1179 1180 * @sdev: SCSI device associated with LUN owning virtual LUN. 1180 - * @clone: Clone ioctl data structure. 1181 + * @arg: Clone ioctl data structure. 1181 1182 * 1182 1183 * This routine effectively performs cxlflash_disk_open operation for each 1183 1184 * in-use virtual resource in the source context. Note that the destination ··· 1186 1187 * 1187 1188 * Return: 0 on success, -errno on failure 1188 1189 */ 1189 - int cxlflash_disk_clone(struct scsi_device *sdev, 1190 - struct dk_cxlflash_clone *clone) 1190 + int cxlflash_disk_clone(struct scsi_device *sdev, void *arg) 1191 1191 { 1192 + struct dk_cxlflash_clone *clone = arg; 1192 1193 struct cxlflash_cfg *cfg = shost_priv(sdev->host); 1193 1194 struct device *dev = &cfg->dev->dev; 1194 1195 struct llun_info *lli = sdev->hostdata;
+2 -1
drivers/scsi/hisi_sas/hisi_sas.h
··· 643 643 const struct hisi_sas_hw *ops); 644 644 extern void hisi_sas_remove(struct platform_device *pdev); 645 645 646 - extern int hisi_sas_slave_configure(struct scsi_device *sdev); 646 + int hisi_sas_device_configure(struct scsi_device *sdev, 647 + struct queue_limits *lim); 647 648 extern int hisi_sas_slave_alloc(struct scsi_device *sdev); 648 649 extern int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time); 649 650 extern void hisi_sas_scan_start(struct Scsi_Host *shost);
+4 -3
drivers/scsi/hisi_sas/hisi_sas_main.c
··· 868 868 return rc; 869 869 } 870 870 871 - int hisi_sas_slave_configure(struct scsi_device *sdev) 871 + int hisi_sas_device_configure(struct scsi_device *sdev, 872 + struct queue_limits *lim) 872 873 { 873 874 struct domain_device *dev = sdev_to_domain_dev(sdev); 874 - int ret = sas_slave_configure(sdev); 875 + int ret = sas_device_configure(sdev, lim); 875 876 876 877 if (ret) 877 878 return ret; ··· 881 880 882 881 return 0; 883 882 } 884 - EXPORT_SYMBOL_GPL(hisi_sas_slave_configure); 883 + EXPORT_SYMBOL_GPL(hisi_sas_device_configure); 885 884 886 885 void hisi_sas_scan_start(struct Scsi_Host *shost) 887 886 {
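This hunk follows the tree-wide move from `.slave_configure` to `.device_configure`: instead of calling `blk_queue_*` helpers on the request queue, the driver fills in a caller-provided `struct queue_limits` that the midlayer applies afterwards. A rough userspace sketch of the callback shape (the structs here are simplified mocks, not the kernel's definitions):

```c
#include <assert.h>

/* Simplified stand-in for the kernel's struct queue_limits. */
struct queue_limits {
	unsigned int max_hw_sectors;
};

/* Simplified stand-in for struct scsi_device. */
struct scsi_device { int type; };
#define TYPE_TAPE 1

/*
 * The driver only mutates the limits it cares about; everything else
 * keeps the defaults the midlayer seeded before the call.
 */
static int example_device_configure(struct scsi_device *sdev,
				    struct queue_limits *lim)
{
	if (sdev->type == TYPE_TAPE)
		lim->max_hw_sectors = 8192;	/* e.g. cap tape transfers */
	return 0;
}
```

Centralizing the limits in one struct lets the block layer validate and commit them atomically, which is why the per-queue `blk_queue_max_hw_sectors()`-style calls disappear from these drivers.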
+2 -18
drivers/scsi/hisi_sas/hisi_sas_v1_hw.c
··· 1735 1735 ATTRIBUTE_GROUPS(host_v1_hw); 1736 1736 1737 1737 static const struct scsi_host_template sht_v1_hw = { 1738 - .name = DRV_NAME, 1739 - .proc_name = DRV_NAME, 1740 - .module = THIS_MODULE, 1741 - .queuecommand = sas_queuecommand, 1742 - .dma_need_drain = ata_scsi_dma_need_drain, 1743 - .target_alloc = sas_target_alloc, 1744 - .slave_configure = hisi_sas_slave_configure, 1738 + LIBSAS_SHT_BASE_NO_SLAVE_INIT 1739 + .device_configure = hisi_sas_device_configure, 1745 1740 .scan_finished = hisi_sas_scan_finished, 1746 1741 .scan_start = hisi_sas_scan_start, 1747 - .change_queue_depth = sas_change_queue_depth, 1748 - .bios_param = sas_bios_param, 1749 - .this_id = -1, 1750 1742 .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, 1751 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 1752 - .eh_device_reset_handler = sas_eh_device_reset_handler, 1753 - .eh_target_reset_handler = sas_eh_target_reset_handler, 1754 1743 .slave_alloc = hisi_sas_slave_alloc, 1755 - .target_destroy = sas_target_destroy, 1756 - .ioctl = sas_ioctl, 1757 - #ifdef CONFIG_COMPAT 1758 - .compat_ioctl = sas_ioctl, 1759 - #endif 1760 1744 .shost_groups = host_v1_hw_groups, 1761 1745 .host_reset = hisi_sas_host_reset, 1762 1746 };
+8 -18
drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
··· 3544 3544 3545 3545 ATTRIBUTE_GROUPS(host_v2_hw); 3546 3546 3547 + static const struct attribute_group *sdev_groups_v2_hw[] = { 3548 + &sas_ata_sdev_attr_group, 3549 + NULL 3550 + }; 3551 + 3547 3552 static void map_queues_v2_hw(struct Scsi_Host *shost) 3548 3553 { 3549 3554 struct hisi_hba *hisi_hba = shost_priv(shost); ··· 3567 3562 } 3568 3563 3569 3564 static const struct scsi_host_template sht_v2_hw = { 3570 - .name = DRV_NAME, 3571 - .proc_name = DRV_NAME, 3572 - .module = THIS_MODULE, 3573 - .queuecommand = sas_queuecommand, 3574 - .dma_need_drain = ata_scsi_dma_need_drain, 3575 - .target_alloc = sas_target_alloc, 3576 - .slave_configure = hisi_sas_slave_configure, 3565 + LIBSAS_SHT_BASE_NO_SLAVE_INIT 3566 + .device_configure = hisi_sas_device_configure, 3577 3567 .scan_finished = hisi_sas_scan_finished, 3578 3568 .scan_start = hisi_sas_scan_start, 3579 - .change_queue_depth = sas_change_queue_depth, 3580 - .bios_param = sas_bios_param, 3581 - .this_id = -1, 3582 3569 .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, 3583 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 3584 - .eh_device_reset_handler = sas_eh_device_reset_handler, 3585 - .eh_target_reset_handler = sas_eh_target_reset_handler, 3586 3570 .slave_alloc = hisi_sas_slave_alloc, 3587 - .target_destroy = sas_target_destroy, 3588 - .ioctl = sas_ioctl, 3589 - #ifdef CONFIG_COMPAT 3590 - .compat_ioctl = sas_ioctl, 3591 - #endif 3592 3571 .shost_groups = host_v2_hw_groups, 3572 + .sdev_groups = sdev_groups_v2_hw, 3593 3573 .host_reset = hisi_sas_host_reset, 3594 3574 .map_queues = map_queues_v2_hw, 3595 3575 .host_tagset = 1,
+11 -20
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 2902 2902 } 2903 2903 static DEVICE_ATTR_RO(iopoll_q_cnt_v3_hw); 2904 2904 2905 - static int slave_configure_v3_hw(struct scsi_device *sdev) 2905 + static int device_configure_v3_hw(struct scsi_device *sdev, 2906 + struct queue_limits *lim) 2906 2907 { 2907 2908 struct Scsi_Host *shost = dev_to_shost(&sdev->sdev_gendev); 2908 2909 struct hisi_hba *hisi_hba = shost_priv(shost); 2909 - int ret = hisi_sas_slave_configure(sdev); 2910 + int ret = hisi_sas_device_configure(sdev, lim); 2910 2911 struct device *dev = hisi_hba->dev; 2911 2912 2912 2913 if (ret) ··· 2937 2936 }; 2938 2937 2939 2938 ATTRIBUTE_GROUPS(host_v3_hw); 2939 + 2940 + static const struct attribute_group *sdev_groups_v3_hw[] = { 2941 + &sas_ata_sdev_attr_group, 2942 + NULL 2943 + }; 2940 2944 2941 2945 #define HISI_SAS_DEBUGFS_REG(x) {#x, x} 2942 2946 ··· 3329 3323 } 3330 3324 3331 3325 static const struct scsi_host_template sht_v3_hw = { 3332 - .name = DRV_NAME, 3333 - .proc_name = DRV_NAME, 3334 - .module = THIS_MODULE, 3335 - .queuecommand = sas_queuecommand, 3336 - .dma_need_drain = ata_scsi_dma_need_drain, 3337 - .target_alloc = sas_target_alloc, 3338 - .slave_configure = slave_configure_v3_hw, 3326 + LIBSAS_SHT_BASE_NO_SLAVE_INIT 3327 + .device_configure = device_configure_v3_hw, 3339 3328 .scan_finished = hisi_sas_scan_finished, 3340 3329 .scan_start = hisi_sas_scan_start, 3341 3330 .map_queues = hisi_sas_map_queues, 3342 - .change_queue_depth = sas_change_queue_depth, 3343 - .bios_param = sas_bios_param, 3344 - .this_id = -1, 3345 3331 .sg_tablesize = HISI_SAS_SGE_PAGE_CNT, 3346 3332 .sg_prot_tablesize = HISI_SAS_SGE_PAGE_CNT, 3347 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 3348 - .eh_device_reset_handler = sas_eh_device_reset_handler, 3349 - .eh_target_reset_handler = sas_eh_target_reset_handler, 3350 3333 .slave_alloc = hisi_sas_slave_alloc, 3351 - .target_destroy = sas_target_destroy, 3352 - .ioctl = sas_ioctl, 3353 - #ifdef CONFIG_COMPAT 3354 - .compat_ioctl = sas_ioctl, 3355 - #endif 
3356 3334 .shost_groups = host_v3_hw_groups, 3335 + .sdev_groups = sdev_groups_v3_hw, 3357 3336 .tag_alloc_policy = BLK_TAG_ALLOC_RR, 3358 3337 .host_reset = hisi_sas_host_reset, 3359 3338 .host_tagset = 1,
+6
drivers/scsi/hosts.c
··· 479 479 else 480 480 shost->max_segment_size = BLK_MAX_SEGMENT_SIZE; 481 481 482 + /* 32-bit (dword) is a common minimum for HBAs. */ 483 + if (sht->dma_alignment) 484 + shost->dma_alignment = sht->dma_alignment; 485 + else 486 + shost->dma_alignment = 3; 487 + 482 488 /* 483 489 * assume a 4GB boundary, if not set 484 490 */
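The default of 3 set above is a mask, not a byte count: a buffer satisfies the alignment when `addr & mask == 0`, so mask 3 enforces 4-byte (32-bit) alignment. A small illustrative sketch of how such a mask is checked:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * An alignment mask of N enforces (N + 1)-byte alignment when N + 1 is a
 * power of two: mask 3 -> 4-byte aligned, mask 511 -> 512-byte aligned.
 */
static bool is_dma_aligned(uintptr_t addr, uintptr_t mask)
{
	return (addr & mask) == 0;
}
```

Storing the mask rather than the byte count makes the hot-path check a single AND, which is why the block layer and SCSI core use this representation.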
+1 -1
drivers/scsi/hpsa.c
··· 5850 5850 { 5851 5851 struct Scsi_Host *sh; 5852 5852 5853 - sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info)); 5853 + sh = scsi_host_alloc(&hpsa_driver_template, sizeof(struct ctlr_info *)); 5854 5854 if (sh == NULL) { 5855 5855 dev_err(&h->pdev->dev, "scsi_host_alloc failed\n"); 5856 5856 return -ENOMEM;
+4 -4
drivers/scsi/hptiop.c
··· 1151 1151 1152 1152 ATTRIBUTE_GROUPS(hptiop_host); 1153 1153 1154 - static int hptiop_slave_config(struct scsi_device *sdev) 1154 + static int hptiop_device_configure(struct scsi_device *sdev, 1155 + struct queue_limits *lim) 1155 1156 { 1156 1157 if (sdev->type == TYPE_TAPE) 1157 - blk_queue_max_hw_sectors(sdev->request_queue, 8192); 1158 - 1158 + lim->max_hw_sectors = 8192; 1159 1159 return 0; 1160 1160 } 1161 1161 ··· 1168 1168 .emulated = 0, 1169 1169 .proc_name = driver_name, 1170 1170 .shost_groups = hptiop_host_groups, 1171 - .slave_configure = hptiop_slave_config, 1171 + .device_configure = hptiop_device_configure, 1172 1172 .this_id = -1, 1173 1173 .change_queue_depth = hptiop_adjust_disk_queue_depth, 1174 1174 .cmd_size = sizeof(struct hpt_cmd_priv),
+1 -4
drivers/scsi/ibmvscsi/ibmvfc.c
··· 5541 5541 rport->supported_classes |= FC_COS_CLASS2; 5542 5542 if (be32_to_cpu(tgt->service_parms.class3_parms[0]) & 0x80000000) 5543 5543 rport->supported_classes |= FC_COS_CLASS3; 5544 - if (rport->rqst_q) 5545 - blk_queue_max_segments(rport->rqst_q, 1); 5546 5544 } else 5547 5545 tgt_dbg(tgt, "rport add failed\n"); 5548 5546 spin_unlock_irqrestore(vhost->host->host_lock, flags); ··· 6389 6391 6390 6392 ibmvfc_init_sub_crqs(vhost); 6391 6393 6392 - if (shost_to_fc_host(shost)->rqst_q) 6393 - blk_queue_max_segments(shost_to_fc_host(shost)->rqst_q, 1); 6394 6394 dev_set_drvdata(dev, vhost); 6395 6395 spin_lock(&ibmvfc_driver_lock); 6396 6396 list_add_tail(&vhost->queue, &ibmvfc_head); ··· 6543 6547 .get_starget_port_id = ibmvfc_get_starget_port_id, 6544 6548 .show_starget_port_id = 1, 6545 6549 6550 + .max_bsg_segments = 1, 6546 6551 .bsg_request = ibmvfc_bsg_request, 6547 6552 .bsg_timeout = ibmvfc_bsg_timeout, 6548 6553 };
+1 -11
drivers/scsi/imm.c
··· 1100 1100 return -ENODEV; 1101 1101 } 1102 1102 1103 - /* 1104 - * imm cannot deal with highmem, so this causes all IO pages for this host 1105 - * to reside in low memory (hence mapped) 1106 - */ 1107 - static int imm_adjust_queue(struct scsi_device *device) 1108 - { 1109 - blk_queue_bounce_limit(device->request_queue, BLK_BOUNCE_HIGH); 1110 - return 0; 1111 - } 1112 - 1113 1103 static const struct scsi_host_template imm_template = { 1114 1104 .module = THIS_MODULE, 1115 1105 .proc_name = "imm", ··· 1113 1123 .this_id = 7, 1114 1124 .sg_tablesize = SG_ALL, 1115 1125 .can_queue = 1, 1116 - .slave_alloc = imm_adjust_queue, 1117 1126 .cmd_size = sizeof(struct scsi_pointer), 1118 1127 }; 1119 1128 ··· 1224 1235 host = scsi_host_alloc(&imm_template, sizeof(imm_struct *)); 1225 1236 if (!host) 1226 1237 goto out1; 1238 + host->no_highmem = true; 1227 1239 host->io_port = pb->base; 1228 1240 host->n_io_port = ports; 1229 1241 host->dma_channel = -1;
+6 -4
drivers/scsi/ipr.c
··· 4769 4769 } 4770 4770 4771 4771 /** 4772 - * ipr_slave_configure - Configure a SCSI device 4772 + * ipr_device_configure - Configure a SCSI device 4773 4773 * @sdev: scsi device struct 4774 + * @lim: queue limits 4774 4775 * 4775 4776 * This function configures the specified scsi device. 4776 4777 * 4777 4778 * Return value: 4778 4779 * 0 on success 4779 4780 **/ 4780 - static int ipr_slave_configure(struct scsi_device *sdev) 4781 + static int ipr_device_configure(struct scsi_device *sdev, 4782 + struct queue_limits *lim) 4781 4783 { 4782 4784 struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata; 4783 4785 struct ipr_resource_entry *res; ··· 4800 4798 sdev->no_report_opcodes = 1; 4801 4799 blk_queue_rq_timeout(sdev->request_queue, 4802 4800 IPR_VSET_RW_TIMEOUT); 4803 - blk_queue_max_hw_sectors(sdev->request_queue, IPR_VSET_MAX_SECTORS); 4801 + lim->max_hw_sectors = IPR_VSET_MAX_SECTORS; 4804 4802 } 4805 4803 spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags); 4806 4804 ··· 6399 6397 .eh_device_reset_handler = ipr_eh_dev_reset, 6400 6398 .eh_host_reset_handler = ipr_eh_host_reset, 6401 6399 .slave_alloc = ipr_slave_alloc, 6402 - .slave_configure = ipr_slave_configure, 6400 + .device_configure = ipr_device_configure, 6403 6401 .slave_destroy = ipr_slave_destroy, 6404 6402 .scan_finished = ipr_scan_finished, 6405 6403 .target_destroy = ipr_target_destroy,
+8 -21
drivers/scsi/isci/init.c
··· 149 149 150 150 ATTRIBUTE_GROUPS(isci_host); 151 151 152 - static const struct scsi_host_template isci_sht = { 152 + static const struct attribute_group *isci_sdev_groups[] = { 153 + &sas_ata_sdev_attr_group, 154 + NULL 155 + }; 153 156 154 - .module = THIS_MODULE, 155 - .name = DRV_NAME, 156 - .proc_name = DRV_NAME, 157 - .queuecommand = sas_queuecommand, 158 - .dma_need_drain = ata_scsi_dma_need_drain, 159 - .target_alloc = sas_target_alloc, 160 - .slave_configure = sas_slave_configure, 157 + static const struct scsi_host_template isci_sht = { 158 + LIBSAS_SHT_BASE 161 159 .scan_finished = isci_host_scan_finished, 162 160 .scan_start = isci_host_start, 163 - .change_queue_depth = sas_change_queue_depth, 164 - .bios_param = sas_bios_param, 165 161 .can_queue = ISCI_CAN_QUEUE_VAL, 166 - .this_id = -1, 167 162 .sg_tablesize = SG_ALL, 168 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 169 - .eh_abort_handler = sas_eh_abort_handler, 170 - .eh_device_reset_handler = sas_eh_device_reset_handler, 171 - .eh_target_reset_handler = sas_eh_target_reset_handler, 172 - .slave_alloc = sas_slave_alloc, 173 - .target_destroy = sas_target_destroy, 174 - .ioctl = sas_ioctl, 175 - #ifdef CONFIG_COMPAT 176 - .compat_ioctl = sas_ioctl, 177 - #endif 163 + .eh_abort_handler = sas_eh_abort_handler, 178 164 .shost_groups = isci_host_groups, 165 + .sdev_groups = isci_sdev_groups, 179 166 .track_queue_depth = 1, 180 167 }; 181 168
+1 -1
drivers/scsi/iscsi_tcp.c
··· 943 943 shost->max_id = 0; 944 944 shost->max_channel = 0; 945 945 shost->max_cmd_len = SCSI_MAX_VARLEN_CDB_SIZE; 946 + shost->dma_alignment = 0; 946 947 947 948 rc = iscsi_host_get_max_scsi_cmds(shost, cmds_max); 948 949 if (rc < 0) ··· 1066 1065 if (conn->datadgst_en) 1067 1066 blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, 1068 1067 sdev->request_queue); 1069 - blk_queue_dma_alignment(sdev->request_queue, 0); 1070 1068 return 0; 1071 1069 } 1072 1070
+84
drivers/scsi/libsas/sas_ata.c
··· 964 964 force_phy_id, &tmf_task); 965 965 } 966 966 EXPORT_SYMBOL_GPL(sas_execute_ata_cmd); 967 + 968 + static ssize_t sas_ncq_prio_supported_show(struct device *device, 969 + struct device_attribute *attr, 970 + char *buf) 971 + { 972 + struct scsi_device *sdev = to_scsi_device(device); 973 + struct domain_device *ddev = sdev_to_domain_dev(sdev); 974 + bool supported; 975 + int rc; 976 + 977 + rc = ata_ncq_prio_supported(ddev->sata_dev.ap, sdev, &supported); 978 + if (rc) 979 + return rc; 980 + 981 + return sysfs_emit(buf, "%d\n", supported); 982 + } 983 + 984 + static struct device_attribute dev_attr_sas_ncq_prio_supported = 985 + __ATTR(ncq_prio_supported, S_IRUGO, sas_ncq_prio_supported_show, NULL); 986 + 987 + static ssize_t sas_ncq_prio_enable_show(struct device *device, 988 + struct device_attribute *attr, 989 + char *buf) 990 + { 991 + struct scsi_device *sdev = to_scsi_device(device); 992 + struct domain_device *ddev = sdev_to_domain_dev(sdev); 993 + bool enabled; 994 + int rc; 995 + 996 + rc = ata_ncq_prio_enabled(ddev->sata_dev.ap, sdev, &enabled); 997 + if (rc) 998 + return rc; 999 + 1000 + return sysfs_emit(buf, "%d\n", enabled); 1001 + } 1002 + 1003 + static ssize_t sas_ncq_prio_enable_store(struct device *device, 1004 + struct device_attribute *attr, 1005 + const char *buf, size_t len) 1006 + { 1007 + struct scsi_device *sdev = to_scsi_device(device); 1008 + struct domain_device *ddev = sdev_to_domain_dev(sdev); 1009 + bool enable; 1010 + int rc; 1011 + 1012 + rc = kstrtobool(buf, &enable); 1013 + if (rc) 1014 + return rc; 1015 + 1016 + rc = ata_ncq_prio_enable(ddev->sata_dev.ap, sdev, enable); 1017 + if (rc) 1018 + return rc; 1019 + 1020 + return len; 1021 + } 1022 + 1023 + static struct device_attribute dev_attr_sas_ncq_prio_enable = 1024 + __ATTR(ncq_prio_enable, S_IRUGO | S_IWUSR, 1025 + sas_ncq_prio_enable_show, sas_ncq_prio_enable_store); 1026 + 1027 + static struct attribute *sas_ata_sdev_attrs[] = { 1028 + 
&dev_attr_sas_ncq_prio_supported.attr, 1029 + &dev_attr_sas_ncq_prio_enable.attr, 1030 + NULL 1031 + }; 1032 + 1033 + static umode_t sas_ata_attr_is_visible(struct kobject *kobj, 1034 + struct attribute *attr, int i) 1035 + { 1036 + struct device *dev = kobj_to_dev(kobj); 1037 + struct scsi_device *sdev = to_scsi_device(dev); 1038 + struct domain_device *ddev = sdev_to_domain_dev(sdev); 1039 + 1040 + if (!dev_is_sata(ddev)) 1041 + return 0; 1042 + 1043 + return attr->mode; 1044 + } 1045 + 1046 + const struct attribute_group sas_ata_sdev_attr_group = { 1047 + .attrs = sas_ata_sdev_attrs, 1048 + .is_visible = sas_ata_attr_is_visible, 1049 + }; 1050 + EXPORT_SYMBOL_GPL(sas_ata_sdev_attr_group);
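The new `sas_ata_sdev_attr_group` relies on the sysfs `is_visible` callback: the group can be attached to every sdev via `.sdev_groups`, and the NCQ priority files simply do not appear for non-SATA devices. A minimal userspace model of that gating (the types are mocks modelled loosely on the kernel's sysfs structures, not the real API):

```c
#include <assert.h>

/* Mock attribute and group, loosely shaped like the kernel's sysfs types. */
struct attribute {
	const char *name;
	unsigned short mode;
};

struct attribute_group {
	struct attribute **attrs;
	/* Return 0 to hide the attribute, or its mode to expose it. */
	unsigned short (*is_visible)(int is_sata, struct attribute *attr);
};

static struct attribute ncq_prio_enable = { "ncq_prio_enable", 0644 };
static struct attribute *ata_attrs[] = { &ncq_prio_enable, 0 };

/* Hide the whole group unless the device is SATA, as the patch does. */
static unsigned short ata_only_visible(int is_sata, struct attribute *attr)
{
	return is_sata ? attr->mode : 0;
}

static const struct attribute_group ata_group = {
	.attrs = ata_attrs,
	.is_visible = ata_only_visible,
};
```

This is why each LLDD only has to add the exported group to its `sdev_groups` array (as the v2/v3 hw templates do above) instead of duplicating per-device registration logic.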
+30 -8
drivers/scsi/libsas/sas_expander.c
··· 26 26 u8 *sas_addr, int include); 27 27 static int sas_disable_routing(struct domain_device *dev, u8 *sas_addr); 28 28 29 + static void sas_port_add_ex_phy(struct sas_port *port, struct ex_phy *ex_phy) 30 + { 31 + sas_port_add_phy(port, ex_phy->phy); 32 + ex_phy->port = port; 33 + ex_phy->phy_state = PHY_DEVICE_DISCOVERED; 34 + } 35 + 36 + static void sas_ex_add_parent_port(struct domain_device *dev, int phy_id) 37 + { 38 + struct expander_device *ex = &dev->ex_dev; 39 + struct ex_phy *ex_phy = &ex->ex_phy[phy_id]; 40 + 41 + if (!ex->parent_port) { 42 + ex->parent_port = sas_port_alloc(&dev->rphy->dev, phy_id); 43 + /* FIXME: error handling */ 44 + BUG_ON(!ex->parent_port); 45 + BUG_ON(sas_port_add(ex->parent_port)); 46 + sas_port_mark_backlink(ex->parent_port); 47 + } 48 + sas_port_add_ex_phy(ex->parent_port, ex_phy); 49 + } 50 + 29 51 /* ---------- SMP task management ---------- */ 30 52 31 53 /* Give it some long enough timeout. In seconds. */ ··· 261 239 /* help some expanders that fail to zero sas_address in the 'no 262 240 * device' case 263 241 */ 264 - if (phy->attached_dev_type == SAS_PHY_UNUSED || 265 - phy->linkrate < SAS_LINK_RATE_1_5_GBPS) 242 + if (phy->attached_dev_type == SAS_PHY_UNUSED) 266 243 memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE); 267 244 else 268 245 memcpy(phy->attached_sas_addr, dr->attached_sas_addr, SAS_ADDR_SIZE); ··· 878 857 879 858 if (!memcmp(phy->attached_sas_addr, ephy->attached_sas_addr, 880 859 SAS_ADDR_SIZE) && ephy->port) { 881 - sas_port_add_phy(ephy->port, phy->phy); 882 - phy->port = ephy->port; 883 - phy->phy_state = PHY_DEVICE_DISCOVERED; 860 + sas_port_add_ex_phy(ephy->port, phy); 884 861 return true; 885 862 } 886 863 } ··· 982 963 983 964 /* Parent and domain coherency */ 984 965 if (!dev->parent && sas_phy_match_port_addr(dev->port, ex_phy)) { 985 - sas_add_parent_port(dev, phy_id); 966 + sas_ex_add_parent_port(dev, phy_id); 986 967 return 0; 987 968 } 988 969 if (dev->parent && 
sas_phy_match_dev_addr(dev->parent, ex_phy)) { 989 - sas_add_parent_port(dev, phy_id); 970 + sas_ex_add_parent_port(dev, phy_id); 990 971 if (ex_phy->routing_attr == TABLE_ROUTING) 991 972 sas_configure_phy(dev, phy_id, dev->port->sas_addr, 1); 992 973 return 0; ··· 1868 1849 if (phy->port) { 1869 1850 sas_port_delete_phy(phy->port, phy->phy); 1870 1851 sas_device_set_phy(found, phy->port); 1871 - if (phy->port->num_phys == 0) 1852 + if (phy->port->num_phys == 0) { 1872 1853 list_add_tail(&phy->port->del_list, 1873 1854 &parent->port->sas_port_del_list); 1855 + if (ex_dev->parent_port == phy->port) 1856 + ex_dev->parent_port = NULL; 1857 + } 1874 1858 phy->port = NULL; 1875 1859 } 1876 1860 }
-15
drivers/scsi/libsas/sas_internal.h
··· 189 189 } 190 190 } 191 191 192 - static inline void sas_add_parent_port(struct domain_device *dev, int phy_id) 193 - { 194 - struct expander_device *ex = &dev->ex_dev; 195 - struct ex_phy *ex_phy = &ex->ex_phy[phy_id]; 196 - 197 - if (!ex->parent_port) { 198 - ex->parent_port = sas_port_alloc(&dev->rphy->dev, phy_id); 199 - /* FIXME: error handling */ 200 - BUG_ON(!ex->parent_port); 201 - BUG_ON(sas_port_add(ex->parent_port)); 202 - sas_port_mark_backlink(ex->parent_port); 203 - } 204 - sas_port_add_phy(ex->parent_port, ex_phy->phy); 205 - } 206 - 207 192 static inline struct domain_device *sas_alloc_device(void) 208 193 { 209 194 struct domain_device *dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+4 -3
drivers/scsi/libsas/sas_scsi_host.c
··· 804 804 805 805 #define SAS_DEF_QD 256 806 806 807 - int sas_slave_configure(struct scsi_device *scsi_dev) 807 + int sas_device_configure(struct scsi_device *scsi_dev, 808 + struct queue_limits *lim) 808 809 { 809 810 struct domain_device *dev = sdev_to_domain_dev(scsi_dev); 810 811 811 812 BUG_ON(dev->rphy->identify.device_type != SAS_END_DEVICE); 812 813 813 814 if (dev_is_sata(dev)) { 814 - ata_sas_slave_configure(scsi_dev, dev->sata_dev.ap); 815 + ata_sas_device_configure(scsi_dev, lim, dev->sata_dev.ap); 815 816 return 0; 816 817 } 817 818 ··· 830 829 831 830 return 0; 832 831 } 833 - EXPORT_SYMBOL_GPL(sas_slave_configure); 832 + EXPORT_SYMBOL_GPL(sas_device_configure); 834 833 835 834 int sas_change_queue_depth(struct scsi_device *sdev, int depth) 836 835 {
+33 -29
drivers/scsi/lpfc/lpfc.h
··· 393 393 LPFC_HBA_ERROR = -1 394 394 }; 395 395 396 + enum lpfc_hba_flag { /* hba generic flags */ 397 + HBA_ERATT_HANDLED = 0, /* This flag is set when eratt handled */ 398 + DEFER_ERATT = 1, /* Deferred error attn in progress */ 399 + HBA_FCOE_MODE = 2, /* HBA function in FCoE Mode */ 400 + HBA_SP_QUEUE_EVT = 3, /* Slow-path qevt posted to worker thread*/ 401 + HBA_POST_RECEIVE_BUFFER = 4, /* Rcv buffers need to be posted */ 402 + HBA_PERSISTENT_TOPO = 5, /* Persistent topology support in hba */ 403 + ELS_XRI_ABORT_EVENT = 6, /* ELS_XRI abort event was queued */ 404 + ASYNC_EVENT = 7, 405 + LINK_DISABLED = 8, /* Link disabled by user */ 406 + FCF_TS_INPROG = 9, /* FCF table scan in progress */ 407 + FCF_RR_INPROG = 10, /* FCF roundrobin flogi in progress */ 408 + HBA_FIP_SUPPORT = 11, /* FIP support in HBA */ 409 + HBA_DEVLOSS_TMO = 13, /* HBA in devloss timeout */ 410 + HBA_RRQ_ACTIVE = 14, /* process the rrq active list */ 411 + HBA_IOQ_FLUSH = 15, /* I/O queues being flushed */ 412 + HBA_RECOVERABLE_UE = 17, /* FW supports recoverable UE */ 413 + HBA_FORCED_LINK_SPEED = 18, /* 414 + * Firmware supports Forced Link 415 + * Speed capability 416 + */ 417 + HBA_FLOGI_ISSUED = 20, /* FLOGI was issued */ 418 + HBA_DEFER_FLOGI = 23, /* Defer FLOGI till read_sparm cmpl */ 419 + HBA_SETUP = 24, /* HBA setup completed */ 420 + HBA_NEEDS_CFG_PORT = 25, /* SLI3: CONFIG_PORT mbox needed */ 421 + HBA_HBEAT_INP = 26, /* mbox HBEAT is in progress */ 422 + HBA_HBEAT_TMO = 27, /* HBEAT initiated after timeout */ 423 + HBA_FLOGI_OUTSTANDING = 28, /* FLOGI is outstanding */ 424 + HBA_RHBA_CMPL = 29, /* RHBA FDMI cmd is successful */ 425 + }; 426 + 396 427 struct lpfc_trunk_link_state { 397 428 enum hba_state state; 398 429 uint8_t fault; ··· 1038 1007 #define LS_CT_VEN_RPA 0x20 /* Vendor RPA sent to switch */ 1039 1008 #define LS_EXTERNAL_LOOPBACK 0x40 /* External loopback plug inserted */ 1040 1009 1041 - uint32_t hba_flag; /* hba generic flags */ 1042 - #define 
HBA_ERATT_HANDLED 0x1 /* This flag is set when eratt handled */ 1043 - #define DEFER_ERATT 0x2 /* Deferred error attention in progress */ 1044 - #define HBA_FCOE_MODE 0x4 /* HBA function in FCoE Mode */ 1045 - #define HBA_SP_QUEUE_EVT 0x8 /* Slow-path qevt posted to worker thread*/ 1046 - #define HBA_POST_RECEIVE_BUFFER 0x10 /* Rcv buffers need to be posted */ 1047 - #define HBA_PERSISTENT_TOPO 0x20 /* Persistent topology support in hba */ 1048 - #define ELS_XRI_ABORT_EVENT 0x40 /* ELS_XRI abort event was queued */ 1049 - #define ASYNC_EVENT 0x80 1050 - #define LINK_DISABLED 0x100 /* Link disabled by user */ 1051 - #define FCF_TS_INPROG 0x200 /* FCF table scan in progress */ 1052 - #define FCF_RR_INPROG 0x400 /* FCF roundrobin flogi in progress */ 1053 - #define HBA_FIP_SUPPORT 0x800 /* FIP support in HBA */ 1054 - #define HBA_DEVLOSS_TMO 0x2000 /* HBA in devloss timeout */ 1055 - #define HBA_RRQ_ACTIVE 0x4000 /* process the rrq active list */ 1056 - #define HBA_IOQ_FLUSH 0x8000 /* FCP/NVME I/O queues being flushed */ 1057 - #define HBA_RECOVERABLE_UE 0x20000 /* Firmware supports recoverable UE */ 1058 - #define HBA_FORCED_LINK_SPEED 0x40000 /* 1059 - * Firmware supports Forced Link Speed 1060 - * capability 1061 - */ 1062 - #define HBA_FLOGI_ISSUED 0x100000 /* FLOGI was issued */ 1063 - #define HBA_DEFER_FLOGI 0x800000 /* Defer FLOGI till read_sparm cmpl */ 1064 - #define HBA_SETUP 0x1000000 /* Signifies HBA setup is completed */ 1065 - #define HBA_NEEDS_CFG_PORT 0x2000000 /* SLI3 - needs a CONFIG_PORT mbox */ 1066 - #define HBA_HBEAT_INP 0x4000000 /* mbox HBEAT is in progress */ 1067 - #define HBA_HBEAT_TMO 0x8000000 /* HBEAT initiated after timeout */ 1068 - #define HBA_FLOGI_OUTSTANDING 0x10000000 /* FLOGI is outstanding */ 1069 - #define HBA_RHBA_CMPL 0x20000000 /* RHBA FDMI command is successful */ 1010 + unsigned long hba_flag; /* hba generic flags */ 1070 1011 1071 1012 struct completion *fw_dump_cmpl; /* cmpl event tracker for fw_dump */ 1072 1013 uint32_t 
fcp_ring_in_use; /* When polling test if intr-hndlr active*/ ··· 1287 1284 uint32_t total_scsi_bufs; 1288 1285 struct list_head lpfc_iocb_list; 1289 1286 uint32_t total_iocbq_bufs; 1287 + spinlock_t rrq_list_lock; /* lock for active_rrq_list */ 1290 1288 struct list_head active_rrq_list; 1291 1289 spinlock_t hbalock; 1292 1290 struct work_struct unblock_request_work; /* SCSI layer unblock IOs */
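The lpfc conversion above replaces an open-coded `#define` flag word with named bit numbers operated on by `test_bit()`/`set_bit()`/`clear_bit()`, which are atomic per-bit operations on an `unsigned long` in the kernel and so need no lock around individual flag updates. A non-atomic userspace sketch of the same idea (the helper names mimic the kernel's bitops but are plain C reimplementations here):

```c
#include <assert.h>
#include <stdbool.h>

/* Bit numbers, as in the new enum lpfc_hba_flag. */
enum hba_flag {
	HBA_FCOE_MODE	= 2,
	HBA_SETUP	= 24,
};

/* Plain (non-atomic) stand-ins for the kernel's set_bit()/clear_bit()/
 * test_bit(); the kernel versions are atomic on unsigned long words. */
static void set_flag(int nr, unsigned long *addr)
{
	*addr |= 1UL << nr;
}

static void clear_flag(int nr, unsigned long *addr)
{
	*addr &= ~(1UL << nr);
}

static bool test_flag(int nr, const unsigned long *addr)
{
	return *addr & (1UL << nr);
}
```

Besides atomicity, the enum form drops the error-prone hand-maintained hex masks (note the gaps at bits 12, 16, 19, 21-22 in the old defines, which the bit numbers now make explicit).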
+17 -14
drivers/scsi/lpfc/lpfc_attr.c
··· 322 322 struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; 323 323 struct lpfc_hba *phba = vport->phba; 324 324 325 - if (phba->hba_flag & HBA_FIP_SUPPORT) 325 + if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag)) 326 326 return scnprintf(buf, PAGE_SIZE, "1\n"); 327 327 else 328 328 return scnprintf(buf, PAGE_SIZE, "0\n"); ··· 1049 1049 case LPFC_INIT_MBX_CMDS: 1050 1050 case LPFC_LINK_DOWN: 1051 1051 case LPFC_HBA_ERROR: 1052 - if (phba->hba_flag & LINK_DISABLED) 1052 + if (test_bit(LINK_DISABLED, &phba->hba_flag)) 1053 1053 len += scnprintf(buf + len, PAGE_SIZE-len, 1054 1054 "Link Down - User disabled\n"); 1055 1055 else ··· 1292 1292 * it doesn't make any sense to allow issue_lip 1293 1293 */ 1294 1294 if (test_bit(FC_OFFLINE_MODE, &vport->fc_flag) || 1295 - (phba->hba_flag & LINK_DISABLED) || 1295 + test_bit(LINK_DISABLED, &phba->hba_flag) || 1296 1296 (phba->sli.sli_flag & LPFC_BLOCK_MGMT_IO)) 1297 1297 return -EPERM; 1298 1298 ··· 3635 3635 struct lpfc_hba *phba = ((struct lpfc_vport *)shost->hostdata)->phba; 3636 3636 3637 3637 return scnprintf(buf, PAGE_SIZE, "%d\n", 3638 - (phba->hba_flag & HBA_PERSISTENT_TOPO) ? 1 : 0); 3638 + test_bit(HBA_PERSISTENT_TOPO, 3639 + &phba->hba_flag) ? 
1 : 0); 3639 3640 } 3640 3641 static DEVICE_ATTR(pt, 0444, 3641 3642 lpfc_pt_show, NULL); ··· 4206 4205 &phba->sli4_hba.sli_intf); 4207 4206 if_type = bf_get(lpfc_sli_intf_if_type, 4208 4207 &phba->sli4_hba.sli_intf); 4209 - if ((phba->hba_flag & HBA_PERSISTENT_TOPO || 4210 - (!phba->sli4_hba.pc_sli4_params.pls && 4208 + if ((test_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag) || 4209 + (!phba->sli4_hba.pc_sli4_params.pls && 4211 4210 (sli_family == LPFC_SLI_INTF_FAMILY_G6 || 4212 4211 if_type == LPFC_SLI_INTF_IF_TYPE_6))) && 4213 4212 val == 4) { ··· 4310 4309 4311 4310 if_type = bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf); 4312 4311 if (if_type >= LPFC_SLI_INTF_IF_TYPE_2 && 4313 - phba->hba_flag & HBA_FORCED_LINK_SPEED) 4312 + test_bit(HBA_FORCED_LINK_SPEED, &phba->hba_flag)) 4314 4313 return -EPERM; 4315 4314 4316 4315 if (!strncmp(buf, "nolip ", strlen("nolip "))) { ··· 6498 6497 struct lpfc_vport *vport = (struct lpfc_vport *) shost->hostdata; 6499 6498 struct lpfc_hba *phba = vport->phba; 6500 6499 6501 - if ((lpfc_is_link_up(phba)) && (!(phba->hba_flag & HBA_FCOE_MODE))) { 6500 + if ((lpfc_is_link_up(phba)) && 6501 + !test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 6502 6502 switch(phba->fc_linkspeed) { 6503 6503 case LPFC_LINK_SPEED_1GHZ: 6504 6504 fc_host_speed(shost) = FC_PORTSPEED_1GBIT; ··· 6535 6533 fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN; 6536 6534 break; 6537 6535 } 6538 - } else if (lpfc_is_link_up(phba) && (phba->hba_flag & HBA_FCOE_MODE)) { 6536 + } else if (lpfc_is_link_up(phba) && 6537 + test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 6539 6538 switch (phba->fc_linkspeed) { 6540 6539 case LPFC_ASYNC_LINK_SPEED_1GBPS: 6541 6540 fc_host_speed(shost) = FC_PORTSPEED_1GBIT; ··· 6721 6718 hs->invalid_crc_count -= lso->invalid_crc_count; 6722 6719 hs->error_frames -= lso->error_frames; 6723 6720 6724 - if (phba->hba_flag & HBA_FCOE_MODE) { 6721 + if (test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 6725 6722 hs->lip_count = -1; 6726 6723 hs->nos_count = 
(phba->link_events >> 1); 6727 6724 hs->nos_count -= lso->link_events; ··· 6819 6816 lso->invalid_tx_word_count = pmb->un.varRdLnk.invalidXmitWord; 6820 6817 lso->invalid_crc_count = pmb->un.varRdLnk.crcCnt; 6821 6818 lso->error_frames = pmb->un.varRdLnk.crcCnt; 6822 - if (phba->hba_flag & HBA_FCOE_MODE) 6819 + if (test_bit(HBA_FCOE_MODE, &phba->hba_flag)) 6823 6820 lso->link_events = (phba->link_events >> 1); 6824 6821 else 6825 6822 lso->link_events = (phba->fc_eventTag >> 1); ··· 7164 7161 case PCI_DEVICE_ID_ZEPHYR_DCSP: 7165 7162 case PCI_DEVICE_ID_TIGERSHARK: 7166 7163 case PCI_DEVICE_ID_TOMCAT: 7167 - phba->hba_flag |= HBA_FCOE_MODE; 7164 + set_bit(HBA_FCOE_MODE, &phba->hba_flag); 7168 7165 break; 7169 7166 default: 7170 7167 /* for others, clear the flag */ 7171 - phba->hba_flag &= ~HBA_FCOE_MODE; 7168 + clear_bit(HBA_FCOE_MODE, &phba->hba_flag); 7172 7169 } 7173 7170 } 7174 7171 ··· 7239 7236 lpfc_get_hba_function_mode(phba); 7240 7237 7241 7238 /* BlockGuard allowed for FC only. */ 7242 - if (phba->cfg_enable_bg && phba->hba_flag & HBA_FCOE_MODE) { 7239 + if (phba->cfg_enable_bg && test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 7243 7240 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 7244 7241 "0581 BlockGuard feature not supported\n"); 7245 7242 /* If set, clear the BlockGuard support param */
+2 -1
drivers/scsi/lpfc/lpfc_bsg.c
··· 5002 5002 goto job_error; 5003 5003 } 5004 5004 5005 - forced_reply->supported = (phba->hba_flag & HBA_FORCED_LINK_SPEED) 5005 + forced_reply->supported = test_bit(HBA_FORCED_LINK_SPEED, 5006 + &phba->hba_flag) 5006 5007 ? LPFC_FORCED_LINK_SPEED_SUPPORTED 5007 5008 : LPFC_FORCED_LINK_SPEED_NOT_SUPPORTED; 5008 5009 job_error:
+12 -12
drivers/scsi/lpfc/lpfc_ct.c
··· 291 291 292 292 did = bf_get(els_rsp64_sid, &ctiocbq->wqe.xmit_els_rsp); 293 293 if (ulp_status) { 294 - lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 295 - "6438 Unsol CT: status:x%x/x%x did : x%x\n", 296 - ulp_status, ulp_word4, did); 294 + lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS, 295 + "6438 Unsol CT: status:x%x/x%x did : x%x\n", 296 + ulp_status, ulp_word4, did); 297 297 return; 298 298 } 299 299 ··· 303 303 304 304 ndlp = lpfc_findnode_did(vport, did); 305 305 if (!ndlp) { 306 - lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 307 - "6439 Unsol CT: NDLP Not Found for DID : x%x", 308 - did); 306 + lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS, 307 + "6439 Unsol CT: NDLP Not Found for DID : x%x", 308 + did); 309 309 return; 310 310 } 311 311 312 312 ct_req = (struct lpfc_sli_ct_request *)ctiocbq->cmd_dmabuf->virt; 313 313 314 314 mi_cmd = be16_to_cpu(ct_req->CommandResponse.bits.CmdRsp); 315 - lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 316 - "6442 : MI Cmd : x%x Not Supported\n", mi_cmd); 315 + lpfc_vlog_msg(vport, KERN_WARNING, LOG_ELS, 316 + "6442 MI Cmd : x%x Not Supported\n", mi_cmd); 317 317 lpfc_ct_reject_event(ndlp, ct_req, 318 318 bf_get(wqe_ctxt_tag, 319 319 &ctiocbq->wqe.xmit_els_rsp.wqe_com), ··· 2173 2173 struct lpfc_nodelist *ndlp; 2174 2174 int i; 2175 2175 2176 - phba->hba_flag |= HBA_RHBA_CMPL; 2176 + set_bit(HBA_RHBA_CMPL, &phba->hba_flag); 2177 2177 vports = lpfc_create_vport_work_array(phba); 2178 2178 if (vports) { 2179 2179 for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) { ··· 2368 2368 * for the physical port completes successfully. 2369 2369 * We may have to defer the RPRT accordingly. 
2370 2370 */ 2371 - if (phba->hba_flag & HBA_RHBA_CMPL) { 2371 + if (test_bit(HBA_RHBA_CMPL, &phba->hba_flag)) { 2372 2372 lpfc_fdmi_cmd(vport, ndlp, SLI_MGMT_RPRT, 0); 2373 2373 } else { 2374 2374 lpfc_printf_vlog(vport, KERN_INFO, ··· 2785 2785 u32 tcfg; 2786 2786 u8 i, cnt; 2787 2787 2788 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 2788 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 2789 2789 cnt = 0; 2790 2790 if (phba->sli_rev == LPFC_SLI_REV4) { 2791 2791 tcfg = phba->sli4_hba.conf_trunk; ··· 2859 2859 struct lpfc_hba *phba = vport->phba; 2860 2860 u32 speeds = 0; 2861 2861 2862 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 2862 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 2863 2863 switch (phba->fc_linkspeed) { 2864 2864 case LPFC_LINK_SPEED_1GHZ: 2865 2865 speeds = HBA_PORTSPEED_1GFC;
+24 -19
drivers/scsi/lpfc/lpfc_els.c
··· 189 189 * If this command is for fabric controller and HBA running 190 190 * in FIP mode send FLOGI, FDISC and LOGO as FIP frames. 191 191 */ 192 - if ((did == Fabric_DID) && 193 - (phba->hba_flag & HBA_FIP_SUPPORT) && 194 - ((elscmd == ELS_CMD_FLOGI) || 195 - (elscmd == ELS_CMD_FDISC) || 196 - (elscmd == ELS_CMD_LOGO))) 192 + if (did == Fabric_DID && 193 + test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) && 194 + (elscmd == ELS_CMD_FLOGI || 195 + elscmd == ELS_CMD_FDISC || 196 + elscmd == ELS_CMD_LOGO)) 197 197 switch (elscmd) { 198 198 case ELS_CMD_FLOGI: 199 199 elsiocb->cmd_flag |= ··· 965 965 * In case of FIP mode, perform roundrobin FCF failover 966 966 * due to new FCF discovery 967 967 */ 968 - if ((phba->hba_flag & HBA_FIP_SUPPORT) && 968 + if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) && 969 969 (phba->fcf.fcf_flag & FCF_DISCOVERY)) { 970 970 if (phba->link_state < LPFC_LINK_UP) 971 971 goto stop_rr_fcf_flogi; ··· 999 999 IOERR_LOOP_OPEN_FAILURE))) 1000 1000 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1001 1001 "2858 FLOGI failure Status:x%x/x%x TMO" 1002 - ":x%x Data x%x x%x\n", 1002 + ":x%x Data x%lx x%x\n", 1003 1003 ulp_status, ulp_word4, tmo, 1004 1004 phba->hba_flag, phba->fcf.fcf_flag); 1005 1005 ··· 1119 1119 if (sp->cmn.fPort) 1120 1120 rc = lpfc_cmpl_els_flogi_fabric(vport, ndlp, sp, 1121 1121 ulp_word4); 1122 - else if (!(phba->hba_flag & HBA_FCOE_MODE)) 1122 + else if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) 1123 1123 rc = lpfc_cmpl_els_flogi_nport(vport, ndlp, sp); 1124 1124 else { 1125 1125 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, ··· 1149 1149 lpfc_nlp_put(ndlp); 1150 1150 spin_lock_irq(&phba->hbalock); 1151 1151 phba->fcf.fcf_flag &= ~FCF_DISCOVERY; 1152 - phba->hba_flag &= ~(FCF_RR_INPROG | HBA_DEVLOSS_TMO); 1153 1152 spin_unlock_irq(&phba->hbalock); 1153 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 1154 + clear_bit(HBA_DEVLOSS_TMO, &phba->hba_flag); 1154 1155 phba->fcf.fcf_redisc_attempted = 0; /* reset */ 1155 1156 
goto out; 1156 1157 } 1157 1158 if (!rc) { 1158 1159 /* Mark the FCF discovery process done */ 1159 - if (phba->hba_flag & HBA_FIP_SUPPORT) 1160 + if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag)) 1160 1161 lpfc_printf_vlog(vport, KERN_INFO, LOG_FIP | 1161 1162 LOG_ELS, 1162 1163 "2769 FLOGI to FCF (x%x) " ··· 1165 1164 phba->fcf.current_rec.fcf_indx); 1166 1165 spin_lock_irq(&phba->hbalock); 1167 1166 phba->fcf.fcf_flag &= ~FCF_DISCOVERY; 1168 - phba->hba_flag &= ~(FCF_RR_INPROG | HBA_DEVLOSS_TMO); 1169 1167 spin_unlock_irq(&phba->hbalock); 1168 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 1169 + clear_bit(HBA_DEVLOSS_TMO, &phba->hba_flag); 1170 1170 phba->fcf.fcf_redisc_attempted = 0; /* reset */ 1171 1171 goto out; 1172 1172 } ··· 1204 1202 } 1205 1203 out: 1206 1204 if (!flogi_in_retry) 1207 - phba->hba_flag &= ~HBA_FLOGI_OUTSTANDING; 1205 + clear_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag); 1208 1206 1209 1207 lpfc_els_free_iocb(phba, cmdiocb); 1210 1208 lpfc_nlp_put(ndlp); ··· 1374 1372 } 1375 1373 1376 1374 /* Avoid race with FLOGI completion and hba_flags. 
*/ 1377 - phba->hba_flag |= (HBA_FLOGI_ISSUED | HBA_FLOGI_OUTSTANDING); 1375 + set_bit(HBA_FLOGI_ISSUED, &phba->hba_flag); 1376 + set_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag); 1378 1377 1379 1378 rc = lpfc_issue_fabric_iocb(phba, elsiocb); 1380 1379 if (rc == IOCB_ERROR) { 1381 - phba->hba_flag &= ~(HBA_FLOGI_ISSUED | HBA_FLOGI_OUTSTANDING); 1380 + clear_bit(HBA_FLOGI_ISSUED, &phba->hba_flag); 1381 + clear_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag); 1382 1382 lpfc_els_free_iocb(phba, elsiocb); 1383 1383 lpfc_nlp_put(ndlp); 1384 1384 return 1; ··· 1417 1413 1418 1414 lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 1419 1415 "3354 Xmit deferred FLOGI ACC: rx_id: x%x," 1420 - " ox_id: x%x, hba_flag x%x\n", 1416 + " ox_id: x%x, hba_flag x%lx\n", 1421 1417 phba->defer_flogi_acc_rx_id, 1422 1418 phba->defer_flogi_acc_ox_id, phba->hba_flag); 1423 1419 ··· 7419 7415 goto error; 7420 7416 } 7421 7417 7422 - if (phba->sli_rev < LPFC_SLI_REV4 || (phba->hba_flag & HBA_FCOE_MODE)) { 7418 + if (phba->sli_rev < LPFC_SLI_REV4 || 7419 + test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 7423 7420 rjt_err = LSRJT_UNABLE_TPC; 7424 7421 rjt_expl = LSEXP_REQ_UNSUPPORTED; 7425 7422 goto error; ··· 7743 7738 } 7744 7739 7745 7740 if (phba->sli_rev < LPFC_SLI_REV4 || 7746 - phba->hba_flag & HBA_FCOE_MODE || 7741 + test_bit(HBA_FCOE_MODE, &phba->hba_flag) || 7747 7742 (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) < 7748 7743 LPFC_SLI_INTF_IF_TYPE_2)) { 7749 7744 rjt_err = LSRJT_CMD_UNSUPPORTED; ··· 8448 8443 memcpy(&phba->fc_fabparam, sp, sizeof(struct serv_parm)); 8449 8444 8450 8445 /* Defer ACC response until AFTER we issue a FLOGI */ 8451 - if (!(phba->hba_flag & HBA_FLOGI_ISSUED)) { 8446 + if (!test_bit(HBA_FLOGI_ISSUED, &phba->hba_flag)) { 8452 8447 phba->defer_flogi_acc_rx_id = bf_get(wqe_ctxt_tag, 8453 8448 &wqe->xmit_els_rsp.wqe_com); 8454 8449 phba->defer_flogi_acc_ox_id = bf_get(wqe_rcvoxid, ··· 8458 8453 8459 8454 lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, 8460 8455 "3344 
Deferring FLOGI ACC: rx_id: x%x," 8461 - " ox_id: x%x, hba_flag x%x\n", 8456 + " ox_id: x%x, hba_flag x%lx\n", 8462 8457 phba->defer_flogi_acc_rx_id, 8463 8458 phba->defer_flogi_acc_ox_id, phba->hba_flag); 8464 8459
+63 -74
drivers/scsi/lpfc/lpfc_hbadisc.c
··· 487 487 recovering = true; 488 488 } else { 489 489 /* Physical port path. */ 490 - if (phba->hba_flag & HBA_FLOGI_OUTSTANDING) 490 + if (test_bit(HBA_FLOGI_OUTSTANDING, 491 + &phba->hba_flag)) 491 492 recovering = true; 492 493 } 493 494 break; ··· 653 652 if (!fcf_inuse) 654 653 return; 655 654 656 - if ((phba->hba_flag & HBA_FIP_SUPPORT) && !lpfc_fcf_inuse(phba)) { 655 + if (test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) && 656 + !lpfc_fcf_inuse(phba)) { 657 657 spin_lock_irq(&phba->hbalock); 658 658 if (phba->fcf.fcf_flag & FCF_DISCOVERY) { 659 - if (phba->hba_flag & HBA_DEVLOSS_TMO) { 659 + if (test_and_set_bit(HBA_DEVLOSS_TMO, 660 + &phba->hba_flag)) { 660 661 spin_unlock_irq(&phba->hbalock); 661 662 return; 662 663 } 663 - phba->hba_flag |= HBA_DEVLOSS_TMO; 664 664 lpfc_printf_log(phba, KERN_INFO, LOG_FIP, 665 665 "2847 Last remote node (x%x) using " 666 666 "FCF devloss tmo\n", nlp_did); ··· 673 671 "in progress\n"); 674 672 return; 675 673 } 676 - if (!(phba->hba_flag & (FCF_TS_INPROG | FCF_RR_INPROG))) { 677 - spin_unlock_irq(&phba->hbalock); 674 + spin_unlock_irq(&phba->hbalock); 675 + if (!test_bit(FCF_TS_INPROG, &phba->hba_flag) && 676 + !test_bit(FCF_RR_INPROG, &phba->hba_flag)) { 678 677 lpfc_printf_log(phba, KERN_INFO, LOG_FIP, 679 678 "2869 Devloss tmo to idle FIP engine, " 680 679 "unreg in-use FCF and rescan.\n"); ··· 683 680 lpfc_unregister_fcf_rescan(phba); 684 681 return; 685 682 } 686 - spin_unlock_irq(&phba->hbalock); 687 - if (phba->hba_flag & FCF_TS_INPROG) 683 + if (test_bit(FCF_TS_INPROG, &phba->hba_flag)) 688 684 lpfc_printf_log(phba, KERN_INFO, LOG_FIP, 689 685 "2870 FCF table scan in progress\n"); 690 - if (phba->hba_flag & FCF_RR_INPROG) 686 + if (test_bit(FCF_RR_INPROG, &phba->hba_flag)) 691 687 lpfc_printf_log(phba, KERN_INFO, LOG_FIP, 692 688 "2871 FLOGI roundrobin FCF failover " 693 689 "in progress\n"); ··· 980 978 981 979 /* Process SLI4 events */ 982 980 if (phba->pci_dev_grp == LPFC_PCI_DEV_OC) { 983 - if (phba->hba_flag & 
HBA_RRQ_ACTIVE) 981 + if (test_bit(HBA_RRQ_ACTIVE, &phba->hba_flag)) 984 982 lpfc_handle_rrq_active(phba); 985 - if (phba->hba_flag & ELS_XRI_ABORT_EVENT) 983 + if (test_bit(ELS_XRI_ABORT_EVENT, &phba->hba_flag)) 986 984 lpfc_sli4_els_xri_abort_event_proc(phba); 987 - if (phba->hba_flag & ASYNC_EVENT) 985 + if (test_bit(ASYNC_EVENT, &phba->hba_flag)) 988 986 lpfc_sli4_async_event_proc(phba); 989 - if (phba->hba_flag & HBA_POST_RECEIVE_BUFFER) { 990 - spin_lock_irq(&phba->hbalock); 991 - phba->hba_flag &= ~HBA_POST_RECEIVE_BUFFER; 992 - spin_unlock_irq(&phba->hbalock); 987 + if (test_and_clear_bit(HBA_POST_RECEIVE_BUFFER, 988 + &phba->hba_flag)) 993 989 lpfc_sli_hbqbuf_add_hbqs(phba, LPFC_ELS_HBQ); 994 - } 995 990 if (phba->fcf.fcf_flag & FCF_REDISC_EVT) 996 991 lpfc_sli4_fcf_redisc_event_proc(phba); 997 992 } ··· 1034 1035 status >>= (4*LPFC_ELS_RING); 1035 1036 if (pring && (status & HA_RXMASK || 1036 1037 pring->flag & LPFC_DEFERRED_RING_EVENT || 1037 - phba->hba_flag & HBA_SP_QUEUE_EVT)) { 1038 + test_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag))) { 1038 1039 if (pring->flag & LPFC_STOP_IOCB_EVENT) { 1039 1040 pring->flag |= LPFC_DEFERRED_RING_EVENT; 1040 1041 /* Preserve legacy behavior. 
*/ 1041 - if (!(phba->hba_flag & HBA_SP_QUEUE_EVT)) 1042 + if (!test_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag)) 1042 1043 set_bit(LPFC_DATA_READY, &phba->data_flags); 1043 1044 } else { 1044 1045 /* Driver could have abort request completed in queue ··· 1419 1420 spin_unlock_irq(shost->host_lock); 1420 1421 1421 1422 /* reinitialize initial HBA flag */ 1422 - phba->hba_flag &= ~(HBA_FLOGI_ISSUED | HBA_RHBA_CMPL); 1423 + clear_bit(HBA_FLOGI_ISSUED, &phba->hba_flag); 1424 + clear_bit(HBA_RHBA_CMPL, &phba->hba_flag); 1423 1425 1424 1426 return 0; 1425 1427 } ··· 1505 1505 1506 1506 /* don't perform discovery for SLI4 loopback diagnostic test */ 1507 1507 if ((phba->sli_rev == LPFC_SLI_REV4) && 1508 - !(phba->hba_flag & HBA_FCOE_MODE) && 1508 + !test_bit(HBA_FCOE_MODE, &phba->hba_flag) && 1509 1509 (phba->link_flag & LS_LOOPBACK_MODE)) 1510 1510 return; 1511 1511 ··· 1548 1548 goto sparam_out; 1549 1549 } 1550 1550 1551 - phba->hba_flag |= HBA_DEFER_FLOGI; 1551 + set_bit(HBA_DEFER_FLOGI, &phba->hba_flag); 1552 1552 } else { 1553 1553 lpfc_initial_flogi(vport); 1554 1554 } ··· 1617 1617 spin_unlock_irq(&phba->hbalock); 1618 1618 1619 1619 /* If there is a pending FCoE event, restart FCF table scan. 
*/ 1620 - if ((!(phba->hba_flag & FCF_RR_INPROG)) && 1621 - lpfc_check_pending_fcoe_event(phba, LPFC_UNREG_FCF)) 1620 + if (!test_bit(FCF_RR_INPROG, &phba->hba_flag) && 1621 + lpfc_check_pending_fcoe_event(phba, LPFC_UNREG_FCF)) 1622 1622 goto fail_out; 1623 1623 1624 1624 /* Mark successful completion of FCF table scan */ 1625 1625 spin_lock_irq(&phba->hbalock); 1626 1626 phba->fcf.fcf_flag |= (FCF_SCAN_DONE | FCF_IN_USE); 1627 - phba->hba_flag &= ~FCF_TS_INPROG; 1628 - if (vport->port_state != LPFC_FLOGI) { 1629 - phba->hba_flag |= FCF_RR_INPROG; 1630 - spin_unlock_irq(&phba->hbalock); 1631 - lpfc_issue_init_vfi(vport); 1632 - goto out; 1633 - } 1634 1627 spin_unlock_irq(&phba->hbalock); 1628 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 1629 + if (vport->port_state != LPFC_FLOGI) { 1630 + set_bit(FCF_RR_INPROG, &phba->hba_flag); 1631 + lpfc_issue_init_vfi(vport); 1632 + } 1635 1633 goto out; 1636 1634 1637 1635 fail_out: 1638 - spin_lock_irq(&phba->hbalock); 1639 - phba->hba_flag &= ~FCF_RR_INPROG; 1640 - spin_unlock_irq(&phba->hbalock); 1636 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 1641 1637 out: 1642 1638 mempool_free(mboxq, phba->mbox_mem_pool); 1643 1639 } ··· 1863 1867 spin_lock_irq(&phba->hbalock); 1864 1868 /* If the FCF is not available do nothing. 
*/ 1865 1869 if (!(phba->fcf.fcf_flag & FCF_AVAILABLE)) { 1866 - phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG); 1867 1870 spin_unlock_irq(&phba->hbalock); 1871 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 1872 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 1868 1873 return; 1869 1874 } 1870 1875 1871 1876 /* The FCF is already registered, start discovery */ 1872 1877 if (phba->fcf.fcf_flag & FCF_REGISTERED) { 1873 1878 phba->fcf.fcf_flag |= (FCF_SCAN_DONE | FCF_IN_USE); 1874 - phba->hba_flag &= ~FCF_TS_INPROG; 1879 + spin_unlock_irq(&phba->hbalock); 1880 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 1875 1881 if (phba->pport->port_state != LPFC_FLOGI && 1876 1882 test_bit(FC_FABRIC, &phba->pport->fc_flag)) { 1877 - phba->hba_flag |= FCF_RR_INPROG; 1878 - spin_unlock_irq(&phba->hbalock); 1883 + set_bit(FCF_RR_INPROG, &phba->hba_flag); 1879 1884 lpfc_initial_flogi(phba->pport); 1880 1885 return; 1881 1886 } 1882 - spin_unlock_irq(&phba->hbalock); 1883 1887 return; 1884 1888 } 1885 1889 spin_unlock_irq(&phba->hbalock); 1886 1890 1887 1891 fcf_mbxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 1888 1892 if (!fcf_mbxq) { 1889 - spin_lock_irq(&phba->hbalock); 1890 - phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG); 1891 - spin_unlock_irq(&phba->hbalock); 1893 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 1894 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 1892 1895 return; 1893 1896 } 1894 1897 ··· 1896 1901 fcf_mbxq->mbox_cmpl = lpfc_mbx_cmpl_reg_fcfi; 1897 1902 rc = lpfc_sli_issue_mbox(phba, fcf_mbxq, MBX_NOWAIT); 1898 1903 if (rc == MBX_NOT_FINISHED) { 1899 - spin_lock_irq(&phba->hbalock); 1900 - phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG); 1901 - spin_unlock_irq(&phba->hbalock); 1904 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 1905 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 1902 1906 mempool_free(fcf_mbxq, phba->mbox_mem_pool); 1903 1907 } 1904 1908 ··· 1950 1956 bf_get(lpfc_fcf_record_fcf_sol, new_fcf_record)) 1951 1957 return 0; 1952 1958 1953 - if 
(!(phba->hba_flag & HBA_FIP_SUPPORT)) { 1959 + if (!test_bit(HBA_FIP_SUPPORT, &phba->hba_flag)) { 1954 1960 *boot_flag = 0; 1955 1961 *addr_mode = bf_get(lpfc_fcf_record_mac_addr_prov, 1956 1962 new_fcf_record); ··· 2145 2151 lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_DISCOVERY, 2146 2152 "2833 Stop FCF discovery process due to link " 2147 2153 "state change (x%x)\n", phba->link_state); 2154 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 2155 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 2148 2156 spin_lock_irq(&phba->hbalock); 2149 - phba->hba_flag &= ~(FCF_TS_INPROG | FCF_RR_INPROG); 2150 2157 phba->fcf.fcf_flag &= ~(FCF_REDISC_FOV | FCF_DISCOVERY); 2151 2158 spin_unlock_irq(&phba->hbalock); 2152 2159 } ··· 2375 2380 int rc; 2376 2381 2377 2382 if (fcf_index == LPFC_FCOE_FCF_NEXT_NONE) { 2378 - spin_lock_irq(&phba->hbalock); 2379 - if (phba->hba_flag & HBA_DEVLOSS_TMO) { 2380 - spin_unlock_irq(&phba->hbalock); 2383 + if (test_bit(HBA_DEVLOSS_TMO, &phba->hba_flag)) { 2381 2384 lpfc_printf_log(phba, KERN_INFO, LOG_FIP, 2382 2385 "2872 Devloss tmo with no eligible " 2383 2386 "FCF, unregister in-use FCF (x%x) " ··· 2385 2392 goto stop_flogi_current_fcf; 2386 2393 } 2387 2394 /* Mark the end to FLOGI roundrobin failover */ 2388 - phba->hba_flag &= ~FCF_RR_INPROG; 2395 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 2389 2396 /* Allow action to new fcf asynchronous event */ 2397 + spin_lock_irq(&phba->hbalock); 2390 2398 phba->fcf.fcf_flag &= ~(FCF_AVAILABLE | FCF_SCAN_DONE); 2391 2399 spin_unlock_irq(&phba->hbalock); 2392 2400 lpfc_printf_log(phba, KERN_INFO, LOG_FIP, ··· 2624 2630 "2765 Mailbox command READ_FCF_RECORD " 2625 2631 "failed to retrieve a FCF record.\n"); 2626 2632 /* Let next new FCF event trigger fast failover */ 2627 - spin_lock_irq(&phba->hbalock); 2628 - phba->hba_flag &= ~FCF_TS_INPROG; 2629 - spin_unlock_irq(&phba->hbalock); 2633 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 2630 2634 lpfc_sli4_mbox_cmd_free(phba, mboxq); 2631 2635 return; 2632 2636 
} ··· 2865 2873 phba->fcoe_eventtag_at_fcf_scan, 2866 2874 bf_get(lpfc_fcf_record_fcf_index, 2867 2875 new_fcf_record)); 2868 - spin_lock_irq(&phba->hbalock); 2869 - if (phba->hba_flag & HBA_DEVLOSS_TMO) { 2870 - phba->hba_flag &= ~FCF_TS_INPROG; 2871 - spin_unlock_irq(&phba->hbalock); 2876 + if (test_bit(HBA_DEVLOSS_TMO, 2877 + &phba->hba_flag)) { 2878 + clear_bit(FCF_TS_INPROG, 2879 + &phba->hba_flag); 2872 2880 /* Unregister in-use FCF and rescan */ 2873 2881 lpfc_printf_log(phba, KERN_INFO, 2874 2882 LOG_FIP, ··· 2881 2889 /* 2882 2890 * Let next new FCF event trigger fast failover 2883 2891 */ 2884 - phba->hba_flag &= ~FCF_TS_INPROG; 2885 - spin_unlock_irq(&phba->hbalock); 2892 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 2886 2893 return; 2887 2894 } 2888 2895 /* ··· 2987 2996 if (phba->link_state < LPFC_LINK_UP) { 2988 2997 spin_lock_irq(&phba->hbalock); 2989 2998 phba->fcf.fcf_flag &= ~FCF_DISCOVERY; 2990 - phba->hba_flag &= ~FCF_RR_INPROG; 2991 2999 spin_unlock_irq(&phba->hbalock); 3000 + clear_bit(FCF_RR_INPROG, &phba->hba_flag); 2992 3001 goto out; 2993 3002 } 2994 3003 ··· 2999 3008 lpfc_printf_log(phba, KERN_WARNING, LOG_FIP, 3000 3009 "2766 Mailbox command READ_FCF_RECORD " 3001 3010 "failed to retrieve a FCF record. " 3002 - "hba_flg x%x fcf_flg x%x\n", phba->hba_flag, 3011 + "hba_flg x%lx fcf_flg x%x\n", phba->hba_flag, 3003 3012 phba->fcf.fcf_flag); 3004 3013 lpfc_unregister_fcf_rescan(phba); 3005 3014 goto out; ··· 3462 3471 /* Check if sending the FLOGI is being deferred to after we get 3463 3472 * up to date CSPs from MBX_READ_SPARAM. 
3464 3473 */ 3465 - if (phba->hba_flag & HBA_DEFER_FLOGI) { 3474 + if (test_bit(HBA_DEFER_FLOGI, &phba->hba_flag)) { 3466 3475 lpfc_initial_flogi(vport); 3467 - phba->hba_flag &= ~HBA_DEFER_FLOGI; 3476 + clear_bit(HBA_DEFER_FLOGI, &phba->hba_flag); 3468 3477 } 3469 3478 return; 3470 3479 ··· 3486 3495 spin_lock_irqsave(&phba->hbalock, iflags); 3487 3496 phba->fc_linkspeed = bf_get(lpfc_mbx_read_top_link_spd, la); 3488 3497 3489 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 3498 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 3490 3499 switch (bf_get(lpfc_mbx_read_top_link_spd, la)) { 3491 3500 case LPFC_LINK_SPEED_1GHZ: 3492 3501 case LPFC_LINK_SPEED_2GHZ: ··· 3602 3611 goto out; 3603 3612 } 3604 3613 3605 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 3614 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 3606 3615 cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); 3607 3616 if (!cfglink_mbox) 3608 3617 goto out; ··· 3622 3631 * is phase 1 implementation that support FCF index 0 and driver 3623 3632 * defaults. 3624 3633 */ 3625 - if (!(phba->hba_flag & HBA_FIP_SUPPORT)) { 3634 + if (!test_bit(HBA_FIP_SUPPORT, &phba->hba_flag)) { 3626 3635 fcf_record = kzalloc(sizeof(struct fcf_record), 3627 3636 GFP_KERNEL); 3628 3637 if (unlikely(!fcf_record)) { ··· 3652 3661 * The driver is expected to do FIP/FCF. Call the port 3653 3662 * and get the FCF Table. 3654 3663 */ 3655 - spin_lock_irqsave(&phba->hbalock, iflags); 3656 - if (phba->hba_flag & FCF_TS_INPROG) { 3657 - spin_unlock_irqrestore(&phba->hbalock, iflags); 3664 + if (test_bit(FCF_TS_INPROG, &phba->hba_flag)) 3658 3665 return; 3659 - } 3660 3666 /* This is the initial FCF discovery scan */ 3667 + spin_lock_irqsave(&phba->hbalock, iflags); 3661 3668 phba->fcf.fcf_flag |= FCF_INIT_DISC; 3662 3669 spin_unlock_irqrestore(&phba->hbalock, iflags); 3663 3670 lpfc_printf_log(phba, KERN_INFO, LOG_FIP | LOG_DISCOVERY, ··· 6986 6997 * registered, do nothing. 
6987 6998 */ 6988 6999 spin_lock_irq(&phba->hbalock); 6989 - if (!(phba->hba_flag & HBA_FCOE_MODE) || 7000 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) || 6990 7001 !(phba->fcf.fcf_flag & FCF_REGISTERED) || 6991 - !(phba->hba_flag & HBA_FIP_SUPPORT) || 7002 + !test_bit(HBA_FIP_SUPPORT, &phba->hba_flag) || 6992 7003 (phba->fcf.fcf_flag & FCF_DISCOVERY) || 6993 - (phba->pport->port_state == LPFC_FLOGI)) { 7004 + phba->pport->port_state == LPFC_FLOGI) { 6994 7005 spin_unlock_irq(&phba->hbalock); 6995 7006 return; 6996 7007 }
+8
drivers/scsi/lpfc/lpfc_hw4.h
··· 2146 2146 uint32_t sge_len; 2147 2147 }; 2148 2148 2149 + struct sli4_sge_le { 2150 + __le32 addr_hi; 2151 + __le32 addr_lo; 2152 + 2153 + __le32 word2; 2154 + __le32 sge_len; 2155 + }; 2156 + 2149 2157 struct sli4_hybrid_sgl { 2150 2158 struct list_head list_node; 2151 2159 struct sli4_sge *dma_sgl;
+52 -67
drivers/scsi/lpfc/lpfc_init.c
··· 567 567 568 568 spin_lock_irq(&phba->hbalock); 569 569 /* Initialize ERATT handling flag */ 570 - phba->hba_flag &= ~HBA_ERATT_HANDLED; 570 + clear_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 571 571 572 572 /* Enable appropriate host interrupts */ 573 573 if (lpfc_readl(phba->HCregaddr, &status)) { ··· 599 599 /* Set up heart beat (HB) timer */ 600 600 mod_timer(&phba->hb_tmofunc, 601 601 jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL)); 602 - phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); 602 + clear_bit(HBA_HBEAT_INP, &phba->hba_flag); 603 + clear_bit(HBA_HBEAT_TMO, &phba->hba_flag); 603 604 phba->last_completion_time = jiffies; 604 605 /* Set up error attention (ERATT) polling timer */ 605 606 mod_timer(&phba->eratt_poll, 606 607 jiffies + msecs_to_jiffies(1000 * phba->eratt_poll_interval)); 607 608 608 - if (phba->hba_flag & LINK_DISABLED) { 609 + if (test_bit(LINK_DISABLED, &phba->hba_flag)) { 609 610 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 610 611 "2598 Adapter Link is disabled.\n"); 611 612 lpfc_down_link(phba, pmb); ··· 926 925 struct hbq_dmabuf *dmabuf; 927 926 struct lpfc_cq_event *cq_event; 928 927 929 - spin_lock_irq(&phba->hbalock); 930 - phba->hba_flag &= ~HBA_SP_QUEUE_EVT; 931 - spin_unlock_irq(&phba->hbalock); 928 + clear_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag); 932 929 933 930 while (!list_empty(&phba->sli4_hba.sp_queue_event)) { 934 931 /* Get the response iocb from the head of work queue */ ··· 1227 1228 lpfc_rrq_timeout(struct timer_list *t) 1228 1229 { 1229 1230 struct lpfc_hba *phba; 1230 - unsigned long iflag; 1231 1231 1232 1232 phba = from_timer(phba, t, rrq_tmr); 1233 - spin_lock_irqsave(&phba->pport->work_port_lock, iflag); 1234 - if (!test_bit(FC_UNLOADING, &phba->pport->load_flag)) 1235 - phba->hba_flag |= HBA_RRQ_ACTIVE; 1236 - else 1237 - phba->hba_flag &= ~HBA_RRQ_ACTIVE; 1238 - spin_unlock_irqrestore(&phba->pport->work_port_lock, iflag); 1233 + if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) { 1234 + 
clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 1235 + return; 1236 + } 1239 1237 1240 - if (!test_bit(FC_UNLOADING, &phba->pport->load_flag)) 1241 - lpfc_worker_wake_up(phba); 1238 + set_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 1239 + lpfc_worker_wake_up(phba); 1242 1240 } 1243 1241 1244 1242 /** ··· 1257 1261 static void 1258 1262 lpfc_hb_mbox_cmpl(struct lpfc_hba * phba, LPFC_MBOXQ_t * pmboxq) 1259 1263 { 1260 - unsigned long drvr_flag; 1261 - 1262 - spin_lock_irqsave(&phba->hbalock, drvr_flag); 1263 - phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); 1264 - spin_unlock_irqrestore(&phba->hbalock, drvr_flag); 1264 + clear_bit(HBA_HBEAT_INP, &phba->hba_flag); 1265 + clear_bit(HBA_HBEAT_TMO, &phba->hba_flag); 1265 1266 1266 1267 /* Check and reset heart-beat timer if necessary */ 1267 1268 mempool_free(pmboxq, phba->mbox_mem_pool); ··· 1450 1457 int retval; 1451 1458 1452 1459 /* Is a Heartbeat mbox already in progress */ 1453 - if (phba->hba_flag & HBA_HBEAT_INP) 1460 + if (test_bit(HBA_HBEAT_INP, &phba->hba_flag)) 1454 1461 return 0; 1455 1462 1456 1463 pmboxq = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); ··· 1466 1473 mempool_free(pmboxq, phba->mbox_mem_pool); 1467 1474 return -ENXIO; 1468 1475 } 1469 - phba->hba_flag |= HBA_HBEAT_INP; 1476 + set_bit(HBA_HBEAT_INP, &phba->hba_flag); 1470 1477 1471 1478 return 0; 1472 1479 } ··· 1486 1493 { 1487 1494 if (phba->cfg_enable_hba_heartbeat) 1488 1495 return; 1489 - phba->hba_flag |= HBA_HBEAT_TMO; 1496 + set_bit(HBA_HBEAT_TMO, &phba->hba_flag); 1490 1497 } 1491 1498 1492 1499 /** ··· 1558 1565 msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL), 1559 1566 jiffies)) { 1560 1567 spin_unlock_irq(&phba->pport->work_port_lock); 1561 - if (phba->hba_flag & HBA_HBEAT_INP) 1568 + if (test_bit(HBA_HBEAT_INP, &phba->hba_flag)) 1562 1569 tmo = (1000 * LPFC_HB_MBOX_TIMEOUT); 1563 1570 else 1564 1571 tmo = (1000 * LPFC_HB_MBOX_INTERVAL); ··· 1567 1574 spin_unlock_irq(&phba->pport->work_port_lock); 1568 1575 1569 1576 /* Check if a 
MBX_HEARTBEAT is already in progress */ 1570 - if (phba->hba_flag & HBA_HBEAT_INP) { 1577 + if (test_bit(HBA_HBEAT_INP, &phba->hba_flag)) { 1571 1578 /* 1572 1579 * If heart beat timeout called with HBA_HBEAT_INP set 1573 1580 * we need to give the hb mailbox cmd a chance to ··· 1604 1611 } 1605 1612 } else { 1606 1613 /* Check to see if we want to force a MBX_HEARTBEAT */ 1607 - if (phba->hba_flag & HBA_HBEAT_TMO) { 1614 + if (test_bit(HBA_HBEAT_TMO, &phba->hba_flag)) { 1608 1615 retval = lpfc_issue_hb_mbox(phba); 1609 1616 if (retval) 1610 1617 tmo = (1000 * LPFC_HB_MBOX_INTERVAL); ··· 1692 1699 * since we cannot communicate with the pci card anyway. 1693 1700 */ 1694 1701 if (pci_channel_offline(phba->pcidev)) { 1695 - spin_lock_irq(&phba->hbalock); 1696 - phba->hba_flag &= ~DEFER_ERATT; 1697 - spin_unlock_irq(&phba->hbalock); 1702 + clear_bit(DEFER_ERATT, &phba->hba_flag); 1698 1703 return; 1699 1704 } 1700 1705 ··· 1743 1752 if (!phba->work_hs && !test_bit(FC_UNLOADING, &phba->pport->load_flag)) 1744 1753 phba->work_hs = old_host_status & ~HS_FFER1; 1745 1754 1746 - spin_lock_irq(&phba->hbalock); 1747 - phba->hba_flag &= ~DEFER_ERATT; 1748 - spin_unlock_irq(&phba->hbalock); 1755 + clear_bit(DEFER_ERATT, &phba->hba_flag); 1749 1756 phba->work_status[0] = readl(phba->MBslimaddr + 0xa8); 1750 1757 phba->work_status[1] = readl(phba->MBslimaddr + 0xac); 1751 1758 } ··· 1787 1798 * since we cannot communicate with the pci card anyway. 
1788 1799 */ 1789 1800 if (pci_channel_offline(phba->pcidev)) { 1790 - spin_lock_irq(&phba->hbalock); 1791 - phba->hba_flag &= ~DEFER_ERATT; 1792 - spin_unlock_irq(&phba->hbalock); 1801 + clear_bit(DEFER_ERATT, &phba->hba_flag); 1793 1802 return; 1794 1803 } 1795 1804 ··· 1798 1811 /* Send an internal error event to mgmt application */ 1799 1812 lpfc_board_errevt_to_mgmt(phba); 1800 1813 1801 - if (phba->hba_flag & DEFER_ERATT) 1814 + if (test_bit(DEFER_ERATT, &phba->hba_flag)) 1802 1815 lpfc_handle_deferred_eratt(phba); 1803 1816 1804 1817 if ((phba->work_hs & HS_FFER6) || (phba->work_hs & HS_FFER8)) { ··· 2013 2026 /* consider PCI bus read error as pci_channel_offline */ 2014 2027 if (pci_rd_rc1 == -EIO && pci_rd_rc2 == -EIO) 2015 2028 return; 2016 - if (!(phba->hba_flag & HBA_RECOVERABLE_UE)) { 2029 + if (!test_bit(HBA_RECOVERABLE_UE, &phba->hba_flag)) { 2017 2030 lpfc_sli4_offline_eratt(phba); 2018 2031 return; 2019 2032 } ··· 3306 3319 del_timer_sync(&phba->hb_tmofunc); 3307 3320 if (phba->sli_rev == LPFC_SLI_REV4) { 3308 3321 del_timer_sync(&phba->rrq_tmr); 3309 - phba->hba_flag &= ~HBA_RRQ_ACTIVE; 3322 + clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 3310 3323 } 3311 - phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); 3324 + clear_bit(HBA_HBEAT_INP, &phba->hba_flag); 3325 + clear_bit(HBA_HBEAT_TMO, &phba->hba_flag); 3312 3326 3313 3327 switch (phba->pci_dev_grp) { 3314 3328 case LPFC_PCI_DEV_LP: ··· 4773 4785 shost->max_id = LPFC_MAX_TARGET; 4774 4786 shost->max_lun = vport->cfg_max_luns; 4775 4787 shost->this_id = -1; 4776 - shost->max_cmd_len = 16; 4788 + if (phba->sli_rev == LPFC_SLI_REV4) 4789 + shost->max_cmd_len = LPFC_FCP_CDB_LEN_32; 4790 + else 4791 + shost->max_cmd_len = LPFC_FCP_CDB_LEN; 4777 4792 4778 4793 if (phba->sli_rev == LPFC_SLI_REV4) { 4779 4794 if (!phba->cfg_fcp_mq_threshold || ··· 4967 4976 * Avoid reporting supported link speed for FCoE as it can't be 4968 4977 * controlled via FCoE. 
4969 4978 */ 4970 - if (phba->hba_flag & HBA_FCOE_MODE) 4979 + if (test_bit(HBA_FCOE_MODE, &phba->hba_flag)) 4971 4980 return; 4972 4981 4973 4982 if (phba->lmt & LMT_256Gb) ··· 5481 5490 * For FC Mode: issue the READ_TOPOLOGY mailbox command to fetch 5482 5491 * topology info. Note: Optional for non FC-AL ports. 5483 5492 */ 5484 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 5493 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 5485 5494 rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT); 5486 5495 if (rc == MBX_NOT_FINISHED) 5487 5496 goto out_free_pmb; ··· 6016 6025 */ 6017 6026 if (phba->cmf_active_mode == LPFC_CFG_MANAGED && 6018 6027 phba->link_state != LPFC_LINK_DOWN && 6019 - phba->hba_flag & HBA_SETUP) { 6028 + test_bit(HBA_SETUP, &phba->hba_flag)) { 6020 6029 mbpi = phba->cmf_last_sync_bw; 6021 6030 phba->cmf_last_sync_bw = 0; 6022 6031 extra = 0; ··· 6769 6778 } 6770 6779 6771 6780 /* If the FCF discovery is in progress, do nothing. */ 6772 - spin_lock_irq(&phba->hbalock); 6773 - if (phba->hba_flag & FCF_TS_INPROG) { 6774 - spin_unlock_irq(&phba->hbalock); 6781 + if (test_bit(FCF_TS_INPROG, &phba->hba_flag)) 6775 6782 break; 6776 - } 6783 + spin_lock_irq(&phba->hbalock); 6777 6784 /* If fast FCF failover rescan event is pending, do nothing */ 6778 6785 if (phba->fcf.fcf_flag & (FCF_REDISC_EVT | FCF_REDISC_PEND)) { 6779 6786 spin_unlock_irq(&phba->hbalock); ··· 7310 7321 unsigned long iflags; 7311 7322 7312 7323 /* First, declare the async event has been handled */ 7313 - spin_lock_irqsave(&phba->hbalock, iflags); 7314 - phba->hba_flag &= ~ASYNC_EVENT; 7315 - spin_unlock_irqrestore(&phba->hbalock, iflags); 7324 + clear_bit(ASYNC_EVENT, &phba->hba_flag); 7316 7325 7317 7326 /* Now, handle all the async events */ 7318 7327 spin_lock_irqsave(&phba->sli4_hba.asynce_list_lock, iflags); ··· 8234 8247 * our max amount and we need to limit lpfc_sg_seg_cnt 8235 8248 * to minimize the risk of running out. 
8236 8249 */ 8237 - phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) + 8250 + phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd32) + 8238 8251 sizeof(struct fcp_rsp) + max_buf_size; 8239 8252 8240 8253 /* Total SGEs for scsi_sg_list and scsi_sg_prot_list */ ··· 8256 8269 * the FCP rsp, a SGE for each, and a SGE for up to 8257 8270 * cfg_sg_seg_cnt data segments. 8258 8271 */ 8259 - phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd) + 8272 + phba->cfg_sg_dma_buf_size = sizeof(struct fcp_cmnd32) + 8260 8273 sizeof(struct fcp_rsp) + 8261 8274 ((phba->cfg_sg_seg_cnt + extra) * 8262 8275 sizeof(struct sli4_sge)); ··· 8319 8332 phba->lpfc_cmd_rsp_buf_pool = 8320 8333 dma_pool_create("lpfc_cmd_rsp_buf_pool", 8321 8334 &phba->pcidev->dev, 8322 - sizeof(struct fcp_cmnd) + 8335 + sizeof(struct fcp_cmnd32) + 8323 8336 sizeof(struct fcp_rsp), 8324 8337 i, 0); 8325 8338 if (!phba->lpfc_cmd_rsp_buf_pool) { ··· 9856 9869 return; 9857 9870 } 9858 9871 /* FW supports persistent topology - override module parameter value */ 9859 - phba->hba_flag |= HBA_PERSISTENT_TOPO; 9872 + set_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag); 9860 9873 9861 9874 /* if ASIC_GEN_NUM >= 0xC) */ 9862 9875 if ((bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) == 9863 9876 LPFC_SLI_INTF_IF_TYPE_6) || 9864 9877 (bf_get(lpfc_sli_intf_sli_family, &phba->sli4_hba.sli_intf) == 9865 9878 LPFC_SLI_INTF_FAMILY_G6)) { 9866 - if (!tf) { 9879 + if (!tf) 9867 9880 phba->cfg_topology = ((pt == LINK_FLAGS_LOOP) 9868 9881 ? FLAGS_TOPOLOGY_MODE_LOOP 9869 9882 : FLAGS_TOPOLOGY_MODE_PT_PT); 9870 - } else { 9871 - phba->hba_flag &= ~HBA_PERSISTENT_TOPO; 9872 - } 9883 + else 9884 + clear_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag); 9873 9885 } else { /* G5 */ 9874 - if (tf) { 9886 + if (tf) 9875 9887 /* If topology failover set - pt is '0' or '1' */ 9876 9888 phba->cfg_topology = (pt ? 
FLAGS_TOPOLOGY_MODE_PT_LOOP : 9877 9889 FLAGS_TOPOLOGY_MODE_LOOP_PT); 9878 - } else { 9890 + else 9879 9891 phba->cfg_topology = ((pt == LINK_FLAGS_P2P) 9880 9892 ? FLAGS_TOPOLOGY_MODE_PT_PT 9881 9893 : FLAGS_TOPOLOGY_MODE_LOOP); 9882 - } 9883 9894 } 9884 - if (phba->hba_flag & HBA_PERSISTENT_TOPO) { 9895 + if (test_bit(HBA_PERSISTENT_TOPO, &phba->hba_flag)) 9885 9896 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 9886 9897 "2020 Using persistent topology value [%s]", 9887 9898 lpfc_topo_to_str[phba->cfg_topology]); 9888 - } else { 9899 + else 9889 9900 lpfc_printf_log(phba, KERN_WARNING, LOG_SLI, 9890 9901 "2021 Invalid topology values from FW " 9891 9902 "Using driver parameter defined value [%s]", 9892 9903 lpfc_topo_to_str[phba->cfg_topology]); 9893 - } 9894 9904 } 9895 9905 9896 9906 /** ··· 10130 10146 forced_link_speed = 10131 10147 bf_get(lpfc_mbx_rd_conf_link_speed, rd_config); 10132 10148 if (forced_link_speed) { 10133 - phba->hba_flag |= HBA_FORCED_LINK_SPEED; 10149 + set_bit(HBA_FORCED_LINK_SPEED, &phba->hba_flag); 10134 10150 10135 10151 switch (forced_link_speed) { 10136 10152 case LINK_SPEED_1G: ··· 12225 12241 retval = lpfc_sli_config_port(phba, LPFC_SLI_REV3); 12226 12242 if (retval) 12227 12243 return intr_mode; 12228 - phba->hba_flag &= ~HBA_NEEDS_CFG_PORT; 12244 + clear_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag); 12229 12245 12230 12246 if (cfg_mode == 2) { 12231 12247 /* Now, try to enable MSI-X interrupt mode */ ··· 14796 14812 goto out_unset_pci_mem_s4; 14797 14813 } 14798 14814 14815 + spin_lock_init(&phba->rrq_list_lock); 14799 14816 INIT_LIST_HEAD(&phba->active_rrq_list); 14800 14817 INIT_LIST_HEAD(&phba->fcf.fcf_pri_list); 14801 14818 ··· 15513 15528 pci_ers_result_t rc = PCI_ERS_RESULT_DISCONNECT; 15514 15529 15515 15530 if (phba->link_state == LPFC_HBA_ERROR && 15516 - phba->hba_flag & HBA_IOQ_FLUSH) 15531 + test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) 15517 15532 return PCI_ERS_RESULT_NEED_RESET; 15518 15533 15519 15534 switch 
(phba->pci_dev_grp) {
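The bulk of the lpfc_init.c changes above convert `phba->hba_flag` from a spinlock-guarded `uint32_t` tested with `&` masks into an `unsigned long` driven by the kernel's atomic bitops (`set_bit`/`clear_bit`/`test_bit`), which is why the surrounding `hbalock` acquisitions disappear. A minimal user-space sketch of that pattern, assuming GCC/Clang `__atomic` builtins as stand-ins for the arch-optimized kernel primitives (the flag bit numbers below are illustrative, not lpfc's real positions):

```c
/* User-space model of the atomic bitops the lpfc conversion relies on:
 * each flag update is a single atomic read-modify-write on one
 * unsigned long word, so no spinlock is needed around it.
 */
static inline void set_bit_ul(int nr, unsigned long *addr)
{
	__atomic_fetch_or(addr, 1UL << nr, __ATOMIC_RELAXED);
}

static inline void clear_bit_ul(int nr, unsigned long *addr)
{
	__atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_RELAXED);
}

static inline int test_bit_ul(int nr, const unsigned long *addr)
{
	return (__atomic_load_n(addr, __ATOMIC_RELAXED) >> nr) & 1;
}

/* Illustrative flag numbers: bit positions now, not masks as before. */
#define HBA_FCOE_MODE	4
#define HBA_SETUP	14
```

Because each update is one atomic RMW on a shared word, concurrent setters and clearers of different flags no longer need a common lock, which is the point of the conversion.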
+38 -25
drivers/scsi/lpfc/lpfc_nportdisc.c
··· 47 47 #include "lpfc_debugfs.h" 48 48 49 49 50 + /* Called to clear RSCN discovery flags when driver is unloading. */ 51 + static bool 52 + lpfc_check_unload_and_clr_rscn(unsigned long *fc_flag) 53 + { 54 + /* If unloading, then clear the FC_RSCN_DEFERRED flag */ 55 + if (test_bit(FC_UNLOADING, fc_flag)) { 56 + clear_bit(FC_RSCN_DEFERRED, fc_flag); 57 + return false; 58 + } 59 + return test_bit(FC_RSCN_DEFERRED, fc_flag); 60 + } 61 + 50 62 /* Called to verify a rcv'ed ADISC was intended for us. */ 51 63 static int 52 64 lpfc_check_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, ··· 225 213 lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp) 226 214 { 227 215 LIST_HEAD(abort_list); 216 + LIST_HEAD(drv_cmpl_list); 228 217 struct lpfc_sli_ring *pring; 229 218 struct lpfc_iocbq *iocb, *next_iocb; 219 + int retval = 0; 230 220 231 221 pring = lpfc_phba_elsring(phba); 232 222 ··· 264 250 265 251 /* Abort the targeted IOs and remove them from the abort list. */ 266 252 list_for_each_entry_safe(iocb, next_iocb, &abort_list, dlist) { 267 - spin_lock_irq(&phba->hbalock); 268 - list_del_init(&iocb->dlist); 269 - lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL); 270 - spin_unlock_irq(&phba->hbalock); 253 + spin_lock_irq(&phba->hbalock); 254 + list_del_init(&iocb->dlist); 255 + retval = lpfc_sli_issue_abort_iotag(phba, pring, iocb, NULL); 256 + spin_unlock_irq(&phba->hbalock); 257 + 258 + if (retval && test_bit(FC_UNLOADING, &phba->pport->load_flag)) { 259 + list_del_init(&iocb->list); 260 + list_add_tail(&iocb->list, &drv_cmpl_list); 261 + } 271 262 } 263 + 264 + lpfc_sli_cancel_iocbs(phba, &drv_cmpl_list, IOSTAT_LOCAL_REJECT, 265 + IOERR_SLI_ABORTED); 266 + 272 267 /* Make sure HBA is alive */ 273 268 lpfc_issue_hb_tmo(phba); 274 269 ··· 504 481 * must have ACCed the remote NPorts FLOGI to us 505 482 * to make it here. 
506 483 */ 507 - if (phba->hba_flag & HBA_FLOGI_OUTSTANDING) 484 + if (test_bit(HBA_FLOGI_OUTSTANDING, &phba->hba_flag)) 508 485 lpfc_els_abort_flogi(phba); 509 486 510 487 ed_tov = be32_to_cpu(sp->cmn.e_d_tov); ··· 1627 1604 { 1628 1605 struct lpfc_hba *phba = vport->phba; 1629 1606 1630 - /* Don't do anything that will mess up processing of the 1631 - * previous RSCN. 1632 - */ 1633 - if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag)) 1607 + /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */ 1608 + if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag)) 1634 1609 return ndlp->nlp_state; 1635 1610 1636 1611 /* software abort outstanding PLOGI */ ··· 1811 1790 { 1812 1791 struct lpfc_hba *phba = vport->phba; 1813 1792 1814 - /* Don't do anything that will mess up processing of the 1815 - * previous RSCN. 1816 - */ 1817 - if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag)) 1793 + /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */ 1794 + if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag)) 1818 1795 return ndlp->nlp_state; 1819 1796 1820 1797 /* software abort outstanding ADISC */ ··· 2078 2059 void *arg, 2079 2060 uint32_t evt) 2080 2061 { 2081 - /* Don't do anything that will mess up processing of the 2082 - * previous RSCN. 2083 - */ 2084 - if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag)) 2062 + /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */ 2063 + if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag)) 2085 2064 return ndlp->nlp_state; 2086 2065 2087 2066 ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE; ··· 2392 2375 { 2393 2376 struct lpfc_hba *phba = vport->phba; 2394 2377 2395 - /* Don't do anything that will mess up processing of the 2396 - * previous RSCN. 2397 - */ 2398 - if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag)) 2378 + /* Don't do anything that disrupts the RSCN unless lpfc is unloading. 
*/ 2379 + if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag)) 2399 2380 return ndlp->nlp_state; 2400 2381 2401 2382 /* software abort outstanding PRLI */ ··· 2909 2894 lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, 2910 2895 void *arg, uint32_t evt) 2911 2896 { 2912 - /* Don't do anything that will mess up processing of the 2913 - * previous RSCN. 2914 - */ 2915 - if (test_bit(FC_RSCN_DEFERRED, &vport->fc_flag)) 2897 + /* Don't do anything that disrupts the RSCN unless lpfc is unloading. */ 2898 + if (lpfc_check_unload_and_clr_rscn(&vport->fc_flag)) 2916 2899 return ndlp->nlp_state; 2917 2900 2918 2901 lpfc_cancel_retry_delay_tmo(vport, ndlp);
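The new `lpfc_check_unload_and_clr_rscn()` helper above replaces the bare `FC_RSCN_DEFERRED` tests in the discovery state-machine entry points: while the driver is unloading it clears the deferred flag and reports false, so teardown is never gated on deferred RSCN processing. A stand-alone model of its truth table (the bit positions are illustrative stand-ins, and plain word operations replace the kernel's atomic `test_bit`/`clear_bit`):

```c
/* Model of lpfc_check_unload_and_clr_rscn(): returns nonzero only when
 * a deferred RSCN should still gate state-machine work, i.e. never
 * while unloading, in which case the deferred flag is also cleared so
 * driver teardown is not blocked.
 */
#define FC_UNLOADING		0	/* illustrative bit numbers */
#define FC_RSCN_DEFERRED	1

static int check_unload_and_clr_rscn(unsigned long *fc_flag)
{
	if (*fc_flag & (1UL << FC_UNLOADING)) {
		*fc_flag &= ~(1UL << FC_RSCN_DEFERRED);
		return 0;	/* don't defer: let the unload proceed */
	}
	return !!(*fc_flag & (1UL << FC_RSCN_DEFERRED));
}
```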
+11 -16
drivers/scsi/lpfc/lpfc_nvme.c
··· 95 95 vport = lport->vport; 96 96 97 97 if (!vport || test_bit(FC_UNLOADING, &vport->load_flag) || 98 - vport->phba->hba_flag & HBA_IOQ_FLUSH) 98 + test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag)) 99 99 return -ENODEV; 100 100 101 101 qhandle = kzalloc(sizeof(struct lpfc_nvme_qhandle), GFP_KERNEL); ··· 272 272 273 273 remoteport = lpfc_rport->remoteport; 274 274 if (!vport->localport || 275 - vport->phba->hba_flag & HBA_IOQ_FLUSH) 275 + test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag)) 276 276 return -EINVAL; 277 277 278 278 lport = vport->localport->private; ··· 569 569 ndlp->nlp_DID, ntype, nstate); 570 570 return -ENODEV; 571 571 } 572 - if (vport->phba->hba_flag & HBA_IOQ_FLUSH) 572 + if (test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag)) 573 573 return -ENODEV; 574 574 575 575 if (!vport->phba->sli4_hba.nvmels_wq) ··· 675 675 676 676 vport = lport->vport; 677 677 if (test_bit(FC_UNLOADING, &vport->load_flag) || 678 - vport->phba->hba_flag & HBA_IOQ_FLUSH) 678 + test_bit(HBA_IOQ_FLUSH, &vport->phba->hba_flag)) 679 679 return -ENODEV; 680 680 681 681 atomic_inc(&lport->fc4NvmeLsRequests); ··· 1568 1568 phba = vport->phba; 1569 1569 1570 1570 if ((unlikely(test_bit(FC_UNLOADING, &vport->load_flag))) || 1571 - phba->hba_flag & HBA_IOQ_FLUSH) { 1571 + test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) { 1572 1572 lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR, 1573 1573 "6124 Fail IO, Driver unload\n"); 1574 1574 atomic_inc(&lport->xmt_fcp_err); ··· 1909 1909 return; 1910 1910 } 1911 1911 1912 - /* Guard against IO completion being called at same time */ 1913 - spin_lock_irqsave(&lpfc_nbuf->buf_lock, flags); 1914 - 1915 - /* If the hba is getting reset, this flag is set. It is 1916 - * cleared when the reset is complete and rings reestablished. 
1917 - */ 1918 - spin_lock(&phba->hbalock); 1919 1912 /* driver queued commands are in process of being flushed */ 1920 - if (phba->hba_flag & HBA_IOQ_FLUSH) { 1921 - spin_unlock(&phba->hbalock); 1922 - spin_unlock_irqrestore(&lpfc_nbuf->buf_lock, flags); 1913 + if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) { 1923 1914 lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, 1924 1915 "6139 Driver in reset cleanup - flushing " 1925 - "NVME Req now. hba_flag x%x\n", 1916 + "NVME Req now. hba_flag x%lx\n", 1926 1917 phba->hba_flag); 1927 1918 return; 1928 1919 } 1920 + 1921 + /* Guard against IO completion being called at same time */ 1922 + spin_lock_irqsave(&lpfc_nbuf->buf_lock, flags); 1923 + spin_lock(&phba->hbalock); 1929 1924 1930 1925 nvmereq_wqe = &lpfc_nbuf->cur_iocbq; 1931 1926
+5 -4
drivers/scsi/lpfc/lpfc_nvmet.c
··· 1811 1811 ctxp->flag &= ~LPFC_NVME_XBUSY; 1812 1812 spin_unlock_irqrestore(&ctxp->ctxlock, iflag); 1813 1813 1814 + spin_lock_irqsave(&phba->rrq_list_lock, iflag); 1814 1815 rrq_empty = list_empty(&phba->active_rrq_list); 1816 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflag); 1815 1817 ndlp = lpfc_findnode_did(phba->pport, ctxp->sid); 1816 1818 if (ndlp && 1817 1819 (ndlp->nlp_state == NLP_STE_UNMAPPED_NODE || ··· 3395 3393 /* If the hba is getting reset, this flag is set. It is 3396 3394 * cleared when the reset is complete and rings reestablished. 3397 3395 */ 3398 - spin_lock_irqsave(&phba->hbalock, flags); 3399 3396 /* driver queued commands are in process of being flushed */ 3400 - if (phba->hba_flag & HBA_IOQ_FLUSH) { 3401 - spin_unlock_irqrestore(&phba->hbalock, flags); 3397 + if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) { 3402 3398 atomic_inc(&tgtp->xmt_abort_rsp_error); 3403 3399 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 3404 3400 "6163 Driver in reset cleanup - flushing " 3405 - "NVME Req now. hba_flag x%x oxid x%x\n", 3401 + "NVME Req now. hba_flag x%lx oxid x%x\n", 3406 3402 phba->hba_flag, ctxp->oxid); 3407 3403 lpfc_sli_release_iocbq(phba, abts_wqeq); 3408 3404 spin_lock_irqsave(&ctxp->ctxlock, flags); ··· 3409 3409 return 0; 3410 3410 } 3411 3411 3412 + spin_lock_irqsave(&phba->hbalock, flags); 3412 3413 /* Outstanding abort is in progress */ 3413 3414 if (abts_wqeq->cmd_flag & LPFC_DRIVER_ABORTED) { 3414 3415 spin_unlock_irqrestore(&phba->hbalock, flags);
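This hunk is one consumer of the new `phba->rrq_list_lock`, a spinlock dedicated to `active_rrq_list` so that RRQ list walks stop contending on the coarse `hbalock`. A user-space sketch of that lock-splitting pattern, using a C11 `atomic_flag` busy-wait lock in place of the kernel spinlock and a minimal singly linked list in place of `struct list_head` (all names here are illustrative):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Sketch of the rrq_list_lock split: only the RRQ list head and its
 * entries are guarded by the dedicated lock; everything else stays
 * under the original coarse lock.
 */
struct rrq_entry {
	struct rrq_entry *next;
	unsigned int xritag;
};

struct hba_model {
	atomic_flag rrq_list_lock;	/* guards active_rrq_list only */
	struct rrq_entry *active_rrq_list;
};

static void rrq_lock(struct hba_model *hba)
{
	while (atomic_flag_test_and_set(&hba->rrq_list_lock))
		;			/* spin */
}

static void rrq_unlock(struct hba_model *hba)
{
	atomic_flag_clear(&hba->rrq_list_lock);
}

static int rrq_list_empty(struct hba_model *hba)
{
	int empty;

	rrq_lock(hba);
	empty = (hba->active_rrq_list == NULL);
	rrq_unlock(hba);
	return empty;
}

static void rrq_list_add(struct hba_model *hba, struct rrq_entry *rrq)
{
	rrq_lock(hba);
	rrq->next = hba->active_rrq_list;
	hba->active_rrq_list = rrq;
	rrq_unlock(hba);
}
```

Cheap queries like the `rrq_empty` check above then take only the narrow list lock, which is exactly what the diff does around `list_empty(&phba->active_rrq_list)`.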
+46 -27
drivers/scsi/lpfc/lpfc_scsi.c
··· 474 474 ndlp = psb->rdata->pnode; 475 475 else 476 476 ndlp = NULL; 477 - 478 - rrq_empty = list_empty(&phba->active_rrq_list); 479 477 spin_unlock_irqrestore(&phba->hbalock, iflag); 478 + 479 + spin_lock_irqsave(&phba->rrq_list_lock, iflag); 480 + rrq_empty = list_empty(&phba->active_rrq_list); 481 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflag); 480 482 if (ndlp && !offline) { 481 483 lpfc_set_rrq_active(phba, ndlp, 482 484 psb->cur_iocbq.sli4_lxritag, rxid, 1); ··· 600 598 { 601 599 struct lpfc_io_buf *lpfc_cmd; 602 600 struct lpfc_sli4_hdw_queue *qp; 603 - struct sli4_sge *sgl; 601 + struct sli4_sge_le *sgl; 604 602 dma_addr_t pdma_phys_fcp_rsp; 605 603 dma_addr_t pdma_phys_fcp_cmd; 606 604 uint32_t cpu, idx; ··· 651 649 * The balance are sg list bdes. Initialize the 652 650 * first two and leave the rest for queuecommand. 653 651 */ 654 - sgl = (struct sli4_sge *)lpfc_cmd->dma_sgl; 652 + sgl = (struct sli4_sge_le *)lpfc_cmd->dma_sgl; 655 653 pdma_phys_fcp_cmd = tmp->fcp_cmd_rsp_dma_handle; 656 654 sgl->addr_hi = cpu_to_le32(putPaddrHigh(pdma_phys_fcp_cmd)); 657 655 sgl->addr_lo = cpu_to_le32(putPaddrLow(pdma_phys_fcp_cmd)); 658 - sgl->word2 = le32_to_cpu(sgl->word2); 659 - bf_set(lpfc_sli4_sge_last, sgl, 0); 660 - sgl->word2 = cpu_to_le32(sgl->word2); 661 - sgl->sge_len = cpu_to_le32(sizeof(struct fcp_cmnd)); 656 + bf_set_le32(lpfc_sli4_sge_last, sgl, 0); 657 + if (cmnd && cmnd->cmd_len > LPFC_FCP_CDB_LEN) 658 + sgl->sge_len = cpu_to_le32(sizeof(struct fcp_cmnd32)); 659 + else 660 + sgl->sge_len = cpu_to_le32(sizeof(struct fcp_cmnd)); 661 + 662 662 sgl++; 663 663 664 664 /* Setup the physical region for the FCP RSP */ 665 - pdma_phys_fcp_rsp = pdma_phys_fcp_cmd + sizeof(struct fcp_cmnd); 665 + pdma_phys_fcp_rsp = pdma_phys_fcp_cmd + sizeof(struct fcp_cmnd32); 666 666 sgl->addr_hi = cpu_to_le32(putPaddrHigh(pdma_phys_fcp_rsp)); 667 667 sgl->addr_lo = cpu_to_le32(putPaddrLow(pdma_phys_fcp_rsp)); 668 - sgl->word2 = le32_to_cpu(sgl->word2); 669 - 
bf_set(lpfc_sli4_sge_last, sgl, 1); 670 - sgl->word2 = cpu_to_le32(sgl->word2); 668 + bf_set_le32(lpfc_sli4_sge_last, sgl, 1); 671 669 sgl->sge_len = cpu_to_le32(sizeof(struct fcp_rsp)); 672 670 673 671 if (lpfc_ndlp_check_qdepth(phba, ndlp)) { ··· 2608 2606 iocb_cmd->ulpLe = 1; 2609 2607 2610 2608 fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd); 2611 - fcp_cmnd->fcpDl = be32_to_cpu(fcpdl); 2609 + fcp_cmnd->fcpDl = cpu_to_be32(fcpdl); 2612 2610 2613 2611 /* 2614 2612 * Due to difference in data length between DIF/non-DIF paths, ··· 3225 3223 * explicitly reinitialized. 3226 3224 * all iocb memory resources are reused. 3227 3225 */ 3228 - fcp_cmnd->fcpDl = cpu_to_be32(scsi_bufflen(scsi_cmnd)); 3226 + if (scsi_cmnd->cmd_len > LPFC_FCP_CDB_LEN) 3227 + ((struct fcp_cmnd32 *)fcp_cmnd)->fcpDl = 3228 + cpu_to_be32(scsi_bufflen(scsi_cmnd)); 3229 + else 3230 + fcp_cmnd->fcpDl = cpu_to_be32(scsi_bufflen(scsi_cmnd)); 3229 3231 /* Set first-burst provided it was successfully negotiated */ 3230 - if (!(phba->hba_flag & HBA_FCOE_MODE) && 3232 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) && 3231 3233 vport->cfg_first_burst_size && 3232 3234 scsi_cmnd->sc_data_direction == DMA_TO_DEVICE) { 3233 3235 u32 init_len, total_len; 3234 3236 3235 - total_len = be32_to_cpu(fcp_cmnd->fcpDl); 3237 + total_len = scsi_bufflen(scsi_cmnd); 3236 3238 init_len = min(total_len, vport->cfg_first_burst_size); 3237 3239 3238 3240 /* Word 4 & 5 */ ··· 3424 3418 } 3425 3419 3426 3420 fcpdl = lpfc_bg_scsi_adjust_dl(phba, lpfc_cmd); 3427 - fcp_cmnd->fcpDl = be32_to_cpu(fcpdl); 3421 + if (lpfc_cmd->pCmd->cmd_len > LPFC_FCP_CDB_LEN) 3422 + ((struct fcp_cmnd32 *)fcp_cmnd)->fcpDl = cpu_to_be32(fcpdl); 3423 + else 3424 + fcp_cmnd->fcpDl = cpu_to_be32(fcpdl); 3428 3425 3429 3426 /* Set first-burst provided it was successfully negotiated */ 3430 - if (!(phba->hba_flag & HBA_FCOE_MODE) && 3427 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) && 3431 3428 vport->cfg_first_burst_size && 3432 3429 
scsi_cmnd->sc_data_direction == DMA_TO_DEVICE) { 3433 3430 u32 init_len, total_len; 3434 3431 3435 - total_len = be32_to_cpu(fcp_cmnd->fcpDl); 3432 + total_len = fcpdl; 3436 3433 init_len = min(total_len, vport->cfg_first_burst_size); 3437 3434 3438 3435 /* Word 4 & 5 */ ··· 3443 3434 wqe->fcp_iwrite.total_xfer_len = total_len; 3444 3435 } else { 3445 3436 /* Word 4 */ 3446 - wqe->fcp_iwrite.total_xfer_len = 3447 - be32_to_cpu(fcp_cmnd->fcpDl); 3437 + wqe->fcp_iwrite.total_xfer_len = fcpdl; 3448 3438 } 3449 3439 3450 3440 /* ··· 3900 3892 fcprsp->rspInfo3); 3901 3893 3902 3894 scsi_set_resid(cmnd, 0); 3903 - fcpDl = be32_to_cpu(fcpcmd->fcpDl); 3895 + if (cmnd->cmd_len > LPFC_FCP_CDB_LEN) 3896 + fcpDl = be32_to_cpu(((struct fcp_cmnd32 *)fcpcmd)->fcpDl); 3897 + else 3898 + fcpDl = be32_to_cpu(fcpcmd->fcpDl); 3904 3899 if (resp_info & RESID_UNDER) { 3905 3900 scsi_set_resid(cmnd, be32_to_cpu(fcprsp->rspResId)); 3906 3901 ··· 4732 4721 bf_set(wqe_iod, &wqe->fcp_iread.wqe_com, 4733 4722 LPFC_WQE_IOD_NONE); 4734 4723 } 4724 + 4725 + /* Additional fcp cdb length field calculation. 4726 + * LPFC_FCP_CDB_LEN_32 - normal 16 byte cdb length, 4727 + * then divide by 4 for the word count. 4728 + * shift 2 because of the RDDATA/WRDATA. 
4729 + */ 4730 + if (scsi_cmnd->cmd_len > LPFC_FCP_CDB_LEN) 4731 + fcp_cmnd->fcpCntl3 |= 4 << 2; 4735 4732 } else { 4736 4733 /* From the icmnd template, initialize words 4 - 11 */ 4737 4734 memcpy(&wqe->words[4], &lpfc_icmnd_cmd_template.words[4], ··· 4760 4741 4761 4742 /* Word 3 */ 4762 4743 bf_set(payload_offset_len, &wqe->fcp_icmd, 4763 - sizeof(struct fcp_cmnd) + sizeof(struct fcp_rsp)); 4744 + sizeof(struct fcp_cmnd32) + sizeof(struct fcp_rsp)); 4764 4745 4765 4746 /* Word 6 */ 4766 4747 bf_set(wqe_ctxt_tag, &wqe->generic.wqe_com, ··· 4815 4796 int_to_scsilun(lpfc_cmd->pCmd->device->lun, 4816 4797 &lpfc_cmd->fcp_cmnd->fcp_lun); 4817 4798 4818 - ptr = &fcp_cmnd->fcpCdb[0]; 4799 + ptr = &((struct fcp_cmnd32 *)fcp_cmnd)->fcpCdb[0]; 4819 4800 memcpy(ptr, scsi_cmnd->cmnd, scsi_cmnd->cmd_len); 4820 4801 if (scsi_cmnd->cmd_len < LPFC_FCP_CDB_LEN) { 4821 4802 ptr += scsi_cmnd->cmd_len; ··· 5060 5041 5061 5042 /* Check for valid Emulex Device ID */ 5062 5043 if (phba->sli_rev != LPFC_SLI_REV4 || 5063 - phba->hba_flag & HBA_FCOE_MODE) { 5044 + test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 5064 5045 lpfc_printf_log(phba, KERN_INFO, LOG_INIT, 5065 5046 "8347 Incapable PCI reset device: " 5066 5047 "0x%04x\n", ptr->device); ··· 5346 5327 cmnd->cmnd[0], 5347 5328 scsi_prot_ref_tag(cmnd), 5348 5329 scsi_logical_block_count(cmnd), 5349 - (cmnd->cmnd[1]>>5)); 5330 + scsi_get_prot_type(cmnd)); 5350 5331 } 5351 5332 err = lpfc_bg_scsi_prep_dma_buf(phba, lpfc_cmd); 5352 5333 } else { ··· 5537 5518 5538 5519 spin_lock(&phba->hbalock); 5539 5520 /* driver queued commands are in process of being flushed */ 5540 - if (phba->hba_flag & HBA_IOQ_FLUSH) { 5521 + if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) { 5541 5522 lpfc_printf_vlog(vport, KERN_WARNING, LOG_FCP, 5542 5523 "3168 SCSI Layer abort requested I/O has been " 5543 5524 "flushed by LLD.\n");
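Among the lpfc_scsi.c fixes above, `fcp_cmnd->fcpDl = be32_to_cpu(fcpdl)` becomes `cpu_to_be32(fcpdl)`: the field travels big-endian on the wire, so stores must convert *to* big-endian and loads *from* it. On little-endian hosts both helpers perform the same byte swap, which is why the bug was latent there, but typing the field `__be32` (see the lpfc_scsi.h hunk) lets sparse flag the mismatch. Portable stand-ins for the kernel helpers, for illustration only:

```c
#include <stdint.h>

/* Models of cpu_to_be32()/be32_to_cpu(): no-ops on big-endian hosts,
 * byte swaps on little-endian ones.  A 32-bit byte swap is its own
 * inverse, so the round trip always restores the original value.
 */
static uint32_t bswap32(uint32_t v)
{
	return ((v & 0x000000ffu) << 24) | ((v & 0x0000ff00u) << 8) |
	       ((v & 0x00ff0000u) >> 8)  | ((v & 0xff000000u) >> 24);
}

static int host_is_little_endian(void)
{
	const uint16_t probe = 1;

	return *(const uint8_t *)&probe == 1;
}

static uint32_t cpu_to_be32_model(uint32_t v)
{
	return host_is_little_endian() ? bswap32(v) : v;
}

static uint32_t be32_to_cpu_model(uint32_t v)
{
	return cpu_to_be32_model(v);	/* same swap in both directions */
}
```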
+23 -9
drivers/scsi/lpfc/lpfc_scsi.h
··· 1 1 /******************************************************************* 2 2 * This file is part of the Emulex Linux Device Driver for * 3 3 * Fibre Channel Host Bus Adapters. * 4 - * Copyright (C) 2017-2022 Broadcom. All Rights Reserved. The term * 4 + * Copyright (C) 2017-2024 Broadcom. All Rights Reserved. The term * 5 5 * “Broadcom” refers to Broadcom Inc and/or its subsidiaries. * 6 6 * Copyright (C) 2004-2016 Emulex. All rights reserved. * 7 7 * EMULEX and SLI are trademarks of Emulex. * ··· 24 24 25 25 struct lpfc_hba; 26 26 #define LPFC_FCP_CDB_LEN 16 27 + #define LPFC_FCP_CDB_LEN_32 32 27 28 28 29 #define list_remove_head(list, entry, type, member) \ 29 30 do { \ ··· 100 99 #define SNSCOD_BADCMD 0x20 /* sense code is byte 13 ([12]) */ 101 100 }; 102 101 103 - struct fcp_cmnd { 104 - struct scsi_lun fcp_lun; 105 - 106 - uint8_t fcpCntl0; /* FCP_CNTL byte 0 (reserved) */ 107 - uint8_t fcpCntl1; /* FCP_CNTL byte 1 task codes */ 108 102 #define SIMPLE_Q 0x00 109 103 #define HEAD_OF_Q 0x01 110 104 #define ORDERED_Q 0x02 111 105 #define ACA_Q 0x04 112 106 #define UNTAGGED 0x05 113 - uint8_t fcpCntl2; /* FCP_CTL byte 2 task management codes */ 114 107 #define FCP_ABORT_TASK_SET 0x02 /* Bit 1 */ 115 108 #define FCP_CLEAR_TASK_SET 0x04 /* bit 2 */ 116 109 #define FCP_BUS_RESET 0x08 /* bit 3 */ ··· 112 117 #define FCP_TARGET_RESET 0x20 /* bit 5 */ 113 118 #define FCP_CLEAR_ACA 0x40 /* bit 6 */ 114 119 #define FCP_TERMINATE_TASK 0x80 /* bit 7 */ 115 - uint8_t fcpCntl3; 116 120 #define WRITE_DATA 0x01 /* Bit 0 */ 117 121 #define READ_DATA 0x02 /* Bit 1 */ 118 122 123 + struct fcp_cmnd { 124 + struct scsi_lun fcp_lun; 125 + 126 + uint8_t fcpCntl0; /* FCP_CNTL byte 0 (reserved) */ 127 + uint8_t fcpCntl1; /* FCP_CNTL byte 1 task codes */ 128 + uint8_t fcpCntl2; /* FCP_CTL byte 2 task management codes */ 129 + uint8_t fcpCntl3; 130 + 119 131 uint8_t fcpCdb[LPFC_FCP_CDB_LEN]; /* SRB cdb field is copied here */ 120 - uint32_t fcpDl; /* Total transfer length */ 132 + 
__be32 fcpDl; /* Total transfer length */ 133 + 134 + }; 135 + struct fcp_cmnd32 { 136 + struct scsi_lun fcp_lun; 137 + 138 + uint8_t fcpCntl0; /* FCP_CNTL byte 0 (reserved) */ 139 + uint8_t fcpCntl1; /* FCP_CNTL byte 1 task codes */ 140 + uint8_t fcpCntl2; /* FCP_CTL byte 2 task management codes */ 141 + uint8_t fcpCntl3; 142 + 143 + uint8_t fcpCdb[LPFC_FCP_CDB_LEN_32]; /* SRB cdb field is copied here */ 144 + __be32 fcpDl; /* Total transfer length */ 121 145 122 146 }; 123 147
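`struct fcp_cmnd32` above is laid out identically to `struct fcp_cmnd` except that the CDB array grows from 16 to 32 bytes, so the driver can size its DMA pools and SGE lengths for the larger form and overlay the smaller one whenever `cmd_len` fits in 16 bytes. Condensed model structs (with `scsi_lun` reduced to its 8-byte wire form) showing the size and offset relationship:

```c
#include <stddef.h>
#include <stdint.h>

/* Condensed copies of the two FCP command layouts: the only difference
 * is the CDB array length, so every field up to and including fcpCdb
 * sits at the same offset in both, and the size difference is exactly
 * the extra 16 CDB bytes.
 */
#define LPFC_FCP_CDB_LEN	16
#define LPFC_FCP_CDB_LEN_32	32

struct scsi_lun_model { uint8_t scsi_lun[8]; };

struct fcp_cmnd_model {
	struct scsi_lun_model fcp_lun;
	uint8_t fcpCntl0, fcpCntl1, fcpCntl2, fcpCntl3;
	uint8_t fcpCdb[LPFC_FCP_CDB_LEN];
	uint32_t fcpDl;			/* big-endian on the wire */
};

struct fcp_cmnd32_model {
	struct scsi_lun_model fcp_lun;
	uint8_t fcpCntl0, fcpCntl1, fcpCntl2, fcpCntl3;
	uint8_t fcpCdb[LPFC_FCP_CDB_LEN_32];
	uint32_t fcpDl;
};
```

This shared prefix is what allows casts like `((struct fcp_cmnd32 *)fcp_cmnd)->fcpDl` in the lpfc_scsi.c hunks to work for both command sizes.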
+100 -118
drivers/scsi/lpfc/lpfc_sli.c
··· 1024 1024 unsigned long iflags; 1025 1025 LIST_HEAD(send_rrq); 1026 1026 1027 - spin_lock_irqsave(&phba->hbalock, iflags); 1028 - phba->hba_flag &= ~HBA_RRQ_ACTIVE; 1027 + clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 1029 1028 next_time = jiffies + msecs_to_jiffies(1000 * (phba->fc_ratov + 1)); 1029 + spin_lock_irqsave(&phba->rrq_list_lock, iflags); 1030 1030 list_for_each_entry_safe(rrq, nextrrq, 1031 1031 &phba->active_rrq_list, list) { 1032 1032 if (time_after(jiffies, rrq->rrq_stop_time)) ··· 1034 1034 else if (time_before(rrq->rrq_stop_time, next_time)) 1035 1035 next_time = rrq->rrq_stop_time; 1036 1036 } 1037 - spin_unlock_irqrestore(&phba->hbalock, iflags); 1037 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflags); 1038 1038 if ((!list_empty(&phba->active_rrq_list)) && 1039 1039 (!test_bit(FC_UNLOADING, &phba->pport->load_flag))) 1040 1040 mod_timer(&phba->rrq_tmr, next_time); ··· 1072 1072 1073 1073 if (phba->sli_rev != LPFC_SLI_REV4) 1074 1074 return NULL; 1075 - spin_lock_irqsave(&phba->hbalock, iflags); 1075 + spin_lock_irqsave(&phba->rrq_list_lock, iflags); 1076 1076 list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) { 1077 1077 if (rrq->vport == vport && rrq->xritag == xri && 1078 1078 rrq->nlp_DID == did){ 1079 1079 list_del(&rrq->list); 1080 - spin_unlock_irqrestore(&phba->hbalock, iflags); 1080 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflags); 1081 1081 return rrq; 1082 1082 } 1083 1083 } 1084 - spin_unlock_irqrestore(&phba->hbalock, iflags); 1084 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflags); 1085 1085 return NULL; 1086 1086 } 1087 1087 ··· 1109 1109 lpfc_sli4_vport_delete_els_xri_aborted(vport); 1110 1110 lpfc_sli4_vport_delete_fcp_xri_aborted(vport); 1111 1111 } 1112 - spin_lock_irqsave(&phba->hbalock, iflags); 1112 + spin_lock_irqsave(&phba->rrq_list_lock, iflags); 1113 1113 list_for_each_entry_safe(rrq, nextrrq, &phba->active_rrq_list, list) { 1114 1114 if (rrq->vport != vport) 1115 1115 continue; ··· 1118 
1118 list_move(&rrq->list, &rrq_list); 1119 1119 1120 1120 } 1121 - spin_unlock_irqrestore(&phba->hbalock, iflags); 1121 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflags); 1122 1122 1123 1123 list_for_each_entry_safe(rrq, nextrrq, &rrq_list, list) { 1124 1124 list_del(&rrq->list); ··· 1179 1179 if (!phba->cfg_enable_rrq) 1180 1180 return -EINVAL; 1181 1181 1182 - spin_lock_irqsave(&phba->hbalock, iflags); 1183 1182 if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) { 1184 - phba->hba_flag &= ~HBA_RRQ_ACTIVE; 1185 - goto out; 1183 + clear_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 1184 + goto outnl; 1186 1185 } 1187 1186 1187 + spin_lock_irqsave(&phba->hbalock, iflags); 1188 1188 if (ndlp->vport && test_bit(FC_UNLOADING, &ndlp->vport->load_flag)) 1189 1189 goto out; 1190 1190 ··· 1213 1213 rrq->nlp_DID = ndlp->nlp_DID; 1214 1214 rrq->vport = ndlp->vport; 1215 1215 rrq->rxid = rxid; 1216 - spin_lock_irqsave(&phba->hbalock, iflags); 1216 + 1217 + spin_lock_irqsave(&phba->rrq_list_lock, iflags); 1217 1218 empty = list_empty(&phba->active_rrq_list); 1218 1219 list_add_tail(&rrq->list, &phba->active_rrq_list); 1219 - phba->hba_flag |= HBA_RRQ_ACTIVE; 1220 - spin_unlock_irqrestore(&phba->hbalock, iflags); 1220 + spin_unlock_irqrestore(&phba->rrq_list_lock, iflags); 1221 + set_bit(HBA_RRQ_ACTIVE, &phba->hba_flag); 1221 1222 if (empty) 1222 1223 lpfc_worker_wake_up(phba); 1223 1224 return 0; 1224 1225 out: 1225 1226 spin_unlock_irqrestore(&phba->hbalock, iflags); 1227 + outnl: 1226 1228 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 1227 1229 "2921 Can't set rrq active xri:0x%x rxid:0x%x" 1228 1230 " DID:0x%x Send:%d\n", ··· 3939 3937 uint64_t sli_intr, cnt; 3940 3938 3941 3939 phba = from_timer(phba, t, eratt_poll); 3942 - if (!(phba->hba_flag & HBA_SETUP)) 3940 + if (!test_bit(HBA_SETUP, &phba->hba_flag)) 3943 3941 return; 3944 3942 3945 3943 if (test_bit(FC_UNLOADING, &phba->pport->load_flag)) ··· 4524 4522 unsigned long iflag; 4525 4523 int count = 0; 4526 4524 4527 - 
spin_lock_irqsave(&phba->hbalock, iflag); 4528 - phba->hba_flag &= ~HBA_SP_QUEUE_EVT; 4529 - spin_unlock_irqrestore(&phba->hbalock, iflag); 4525 + clear_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag); 4530 4526 while (!list_empty(&phba->sli4_hba.sp_queue_event)) { 4531 4527 /* Get the response iocb from the head of work queue */ 4532 4528 spin_lock_irqsave(&phba->hbalock, iflag); ··· 4681 4681 uint32_t i; 4682 4682 struct lpfc_iocbq *piocb, *next_iocb; 4683 4683 4684 - spin_lock_irq(&phba->hbalock); 4685 4684 /* Indicate the I/O queues are flushed */ 4686 - phba->hba_flag |= HBA_IOQ_FLUSH; 4687 - spin_unlock_irq(&phba->hbalock); 4685 + set_bit(HBA_IOQ_FLUSH, &phba->hba_flag); 4688 4686 4689 4687 /* Look on all the FCP Rings for the iotag */ 4690 4688 if (phba->sli_rev >= LPFC_SLI_REV4) { ··· 4760 4762 if (lpfc_readl(phba->HSregaddr, &status)) 4761 4763 return 1; 4762 4764 4763 - phba->hba_flag |= HBA_NEEDS_CFG_PORT; 4765 + set_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag); 4764 4766 4765 4767 /* 4766 4768 * Check status register every 100ms for 5 retries, then every ··· 4839 4841 } else 4840 4842 phba->sli4_hba.intr_enable = 0; 4841 4843 4842 - phba->hba_flag &= ~HBA_SETUP; 4844 + clear_bit(HBA_SETUP, &phba->hba_flag); 4843 4845 return retval; 4844 4846 } 4845 4847 ··· 5091 5093 /* perform board reset */ 5092 5094 phba->fc_eventTag = 0; 5093 5095 phba->link_events = 0; 5094 - phba->hba_flag |= HBA_NEEDS_CFG_PORT; 5096 + set_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag); 5095 5097 if (phba->pport) { 5096 5098 phba->pport->fc_myDID = 0; 5097 5099 phba->pport->fc_prevDID = 0; ··· 5151 5153 5152 5154 /* Reset HBA */ 5153 5155 lpfc_printf_log(phba, KERN_INFO, LOG_SLI, 5154 - "0295 Reset HBA Data: x%x x%x x%x\n", 5156 + "0295 Reset HBA Data: x%x x%x x%lx\n", 5155 5157 phba->pport->port_state, psli->sli_flag, 5156 5158 phba->hba_flag); 5157 5159 ··· 5160 5162 phba->link_events = 0; 5161 5163 phba->pport->fc_myDID = 0; 5162 5164 phba->pport->fc_prevDID = 0; 5163 - phba->hba_flag &= 
~HBA_SETUP; 5165 + clear_bit(HBA_SETUP, &phba->hba_flag); 5164 5166 5165 5167 spin_lock_irq(&phba->hbalock); 5166 5168 psli->sli_flag &= ~(LPFC_PROCESS_LA); ··· 5404 5406 return -EIO; 5405 5407 } 5406 5408 5407 - phba->hba_flag |= HBA_NEEDS_CFG_PORT; 5409 + set_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag); 5408 5410 5409 5411 /* Clear all interrupt enable conditions */ 5410 5412 writel(0, phba->HCregaddr); ··· 5706 5708 int longs; 5707 5709 5708 5710 /* Enable ISR already does config_port because of config_msi mbx */ 5709 - if (phba->hba_flag & HBA_NEEDS_CFG_PORT) { 5711 + if (test_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag)) { 5710 5712 rc = lpfc_sli_config_port(phba, LPFC_SLI_REV3); 5711 5713 if (rc) 5712 5714 return -EIO; 5713 - phba->hba_flag &= ~HBA_NEEDS_CFG_PORT; 5715 + clear_bit(HBA_NEEDS_CFG_PORT, &phba->hba_flag); 5714 5716 } 5715 5717 phba->fcp_embed_io = 0; /* SLI4 FC support only */ 5716 5718 ··· 7757 7759 snprintf(mbox->u.mqe.un.set_host_data.un.data, 7758 7760 LPFC_HOST_OS_DRIVER_VERSION_SIZE, 7759 7761 "Linux %s v"LPFC_DRIVER_VERSION, 7760 - (phba->hba_flag & HBA_FCOE_MODE) ? "FCoE" : "FC"); 7762 + test_bit(HBA_FCOE_MODE, &phba->hba_flag) ? 
"FCoE" : "FC"); 7761 7763 } 7762 7764 7763 7765 int ··· 8485 8487 spin_unlock_irq(&phba->hbalock); 8486 8488 } 8487 8489 } 8488 - phba->hba_flag &= ~HBA_SETUP; 8490 + clear_bit(HBA_SETUP, &phba->hba_flag); 8489 8491 8490 8492 lpfc_sli4_dip(phba); 8491 8493 ··· 8514 8516 mqe = &mboxq->u.mqe; 8515 8517 phba->sli_rev = bf_get(lpfc_mbx_rd_rev_sli_lvl, &mqe->un.read_rev); 8516 8518 if (bf_get(lpfc_mbx_rd_rev_fcoe, &mqe->un.read_rev)) { 8517 - phba->hba_flag |= HBA_FCOE_MODE; 8519 + set_bit(HBA_FCOE_MODE, &phba->hba_flag); 8518 8520 phba->fcp_embed_io = 0; /* SLI4 FC support only */ 8519 8521 } else { 8520 - phba->hba_flag &= ~HBA_FCOE_MODE; 8522 + clear_bit(HBA_FCOE_MODE, &phba->hba_flag); 8521 8523 } 8522 8524 8523 8525 if (bf_get(lpfc_mbx_rd_rev_cee_ver, &mqe->un.read_rev) == 8524 8526 LPFC_DCBX_CEE_MODE) 8525 - phba->hba_flag |= HBA_FIP_SUPPORT; 8527 + set_bit(HBA_FIP_SUPPORT, &phba->hba_flag); 8526 8528 else 8527 - phba->hba_flag &= ~HBA_FIP_SUPPORT; 8529 + clear_bit(HBA_FIP_SUPPORT, &phba->hba_flag); 8528 8530 8529 - phba->hba_flag &= ~HBA_IOQ_FLUSH; 8531 + clear_bit(HBA_IOQ_FLUSH, &phba->hba_flag); 8530 8532 8531 8533 if (phba->sli_rev != LPFC_SLI_REV4) { 8532 8534 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 8533 8535 "0376 READ_REV Error. SLI Level %d " 8534 8536 "FCoE enabled %d\n", 8535 - phba->sli_rev, phba->hba_flag & HBA_FCOE_MODE); 8537 + phba->sli_rev, 8538 + test_bit(HBA_FCOE_MODE, &phba->hba_flag) ? 
1 : 0); 8536 8539 rc = -EIO; 8537 8540 kfree(vpd); 8538 8541 goto out_free_mbox; ··· 8548 8549 * to read FCoE param config regions, only read parameters if the 8549 8550 * board is FCoE 8550 8551 */ 8551 - if (phba->hba_flag & HBA_FCOE_MODE && 8552 + if (test_bit(HBA_FCOE_MODE, &phba->hba_flag) && 8552 8553 lpfc_sli4_read_fcoe_params(phba)) 8553 8554 lpfc_printf_log(phba, KERN_WARNING, LOG_MBOX | LOG_INIT, 8554 8555 "2570 Failed to read FCoE parameters\n"); ··· 8625 8626 lpfc_set_features(phba, mboxq, LPFC_SET_UE_RECOVERY); 8626 8627 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); 8627 8628 if (rc == MBX_SUCCESS) { 8628 - phba->hba_flag |= HBA_RECOVERABLE_UE; 8629 + set_bit(HBA_RECOVERABLE_UE, &phba->hba_flag); 8629 8630 /* Set 1Sec interval to detect UE */ 8630 8631 phba->eratt_poll_interval = 1; 8631 8632 phba->sli4_hba.ue_to_sr = bf_get( ··· 8676 8677 } 8677 8678 8678 8679 /* Performance Hints are ONLY for FCoE */ 8679 - if (phba->hba_flag & HBA_FCOE_MODE) { 8680 + if (test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 8680 8681 if (bf_get(lpfc_mbx_rq_ftr_rsp_perfh, &mqe->un.req_ftrs)) 8681 8682 phba->sli3_options |= LPFC_SLI4_PERFH_ENABLED; 8682 8683 else ··· 8935 8936 } 8936 8937 lpfc_sli4_node_prep(phba); 8937 8938 8938 - if (!(phba->hba_flag & HBA_FCOE_MODE)) { 8939 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { 8939 8940 if ((phba->nvmet_support == 0) || (phba->cfg_nvmet_mrq == 1)) { 8940 8941 /* 8941 8942 * The FC Port needs to register FCFI (index 0) ··· 9011 9012 /* Start heart beat timer */ 9012 9013 mod_timer(&phba->hb_tmofunc, 9013 9014 jiffies + msecs_to_jiffies(1000 * LPFC_HB_MBOX_INTERVAL)); 9014 - phba->hba_flag &= ~(HBA_HBEAT_INP | HBA_HBEAT_TMO); 9015 + clear_bit(HBA_HBEAT_INP, &phba->hba_flag); 9016 + clear_bit(HBA_HBEAT_TMO, &phba->hba_flag); 9015 9017 phba->last_completion_time = jiffies; 9016 9018 9017 9019 /* start eq_delay heartbeat */ ··· 9054 9054 /* Setup CMF after HBA is initialized */ 9055 9055 lpfc_cmf_setup(phba); 9056 9056 9057 - if 
(!(phba->hba_flag & HBA_FCOE_MODE) && 9058 - (phba->hba_flag & LINK_DISABLED)) { 9057 + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag) && 9058 + test_bit(LINK_DISABLED, &phba->hba_flag)) { 9059 9059 lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, 9060 9060 "3103 Adapter Link is disabled.\n"); 9061 9061 lpfc_down_link(phba, mboxq); ··· 9079 9079 /* Enable RAS FW log support */ 9080 9080 lpfc_sli4_ras_setup(phba); 9081 9081 9082 - phba->hba_flag |= HBA_SETUP; 9082 + set_bit(HBA_SETUP, &phba->hba_flag); 9083 9083 return rc; 9084 9084 9085 9085 out_io_buff_free: ··· 9383 9383 } 9384 9384 9385 9385 /* If HBA has a deferred error attention, fail the iocb. */ 9386 - if (unlikely(phba->hba_flag & DEFER_ERATT)) { 9386 + if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) { 9387 9387 spin_unlock_irqrestore(&phba->hbalock, drvr_flag); 9388 9388 goto out_not_finished; 9389 9389 } ··· 10447 10447 return IOCB_ERROR; 10448 10448 10449 10449 /* If HBA has a deferred error attention, fail the iocb. */ 10450 - if (unlikely(phba->hba_flag & DEFER_ERATT)) 10450 + if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) 10451 10451 return IOCB_ERROR; 10452 10452 10453 10453 /* ··· 10595 10595 BUFF_TYPE_BDE_IMMED; 10596 10596 wqe->generic.bde.tus.f.bdeSize = sgl->sge_len; 10597 10597 wqe->generic.bde.addrHigh = 0; 10598 - wqe->generic.bde.addrLow = 88; /* Word 22 */ 10598 + wqe->generic.bde.addrLow = 72; /* Word 18 */ 10599 10599 10600 10600 bf_set(wqe_wqes, &wqe->fcp_iwrite.wqe_com, 1); 10601 10601 bf_set(wqe_dbde, &wqe->fcp_iwrite.wqe_com, 0); 10602 10602 10603 - /* Word 22-29 FCP CMND Payload */ 10604 - ptr = &wqe->words[22]; 10605 - memcpy(ptr, fcp_cmnd, sizeof(struct fcp_cmnd)); 10603 + /* Word 18-29 FCP CMND Payload */ 10604 + ptr = &wqe->words[18]; 10605 + memcpy(ptr, fcp_cmnd, sgl->sge_len); 10606 10606 } else { 10607 10607 /* Word 0-2 - Inline BDE */ 10608 10608 wqe->generic.bde.tus.f.bdeFlags = BUFF_TYPE_BDE_64; 10609 - wqe->generic.bde.tus.f.bdeSize = sizeof(struct fcp_cmnd); 
10609 + wqe->generic.bde.tus.f.bdeSize = sgl->sge_len; 10610 10610 wqe->generic.bde.addrHigh = sgl->addr_hi; 10611 10611 wqe->generic.bde.addrLow = sgl->addr_lo; 10612 10612 ··· 12361 12361 12362 12362 /* ELS cmd tag <ulpIoTag> completes */ 12363 12363 lpfc_printf_log(phba, KERN_INFO, LOG_ELS, 12364 - "0139 Ignoring ELS cmd code x%x completion Data: " 12364 + "0139 Ignoring ELS cmd code x%x ref cnt x%x Data: " 12365 12365 "x%x x%x x%x x%px\n", 12366 - ulp_command, ulp_status, ulp_word4, iotag, 12367 - cmdiocb->ndlp); 12366 + ulp_command, kref_read(&cmdiocb->ndlp->kref), 12367 + ulp_status, ulp_word4, iotag, cmdiocb->ndlp); 12368 12368 /* 12369 12369 * Deref the ndlp after free_iocb. sli_release_iocb will access the ndlp 12370 12370 * if exchange is busy. ··· 12460 12460 } 12461 12461 } 12462 12462 12463 - if (phba->link_state < LPFC_LINK_UP || 12463 + /* Just close the exchange under certain conditions. */ 12464 + if (test_bit(FC_UNLOADING, &vport->load_flag) || 12465 + phba->link_state < LPFC_LINK_UP || 12464 12466 (phba->sli_rev == LPFC_SLI_REV4 && 12465 12467 phba->sli4_hba.link_state.status == LPFC_FC_LA_TYPE_LINK_DOWN) || 12466 12468 (phba->link_flag & LS_EXTERNAL_LOOPBACK)) ··· 12509 12507 lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI, 12510 12508 "0339 Abort IO XRI x%x, Original iotag x%x, " 12511 12509 "abort tag x%x Cmdjob : x%px Abortjob : x%px " 12512 - "retval x%x\n", 12510 + "retval x%x : IA %d\n", 12513 12511 ulp_context, (phba->sli_rev == LPFC_SLI_REV4) ? 
12514 12512 cmdiocb->iotag : iotag, iotag, cmdiocb, abtsiocbp, 12515 - retval); 12513 + retval, ia); 12516 12514 if (retval) { 12517 12515 cmdiocb->cmd_flag &= ~LPFC_DRIVER_ABORTED; 12518 12516 __lpfc_sli_release_iocbq(phba, abtsiocbp); ··· 12777 12775 int i; 12778 12776 12779 12777 /* all I/Os are in process of being flushed */ 12780 - if (phba->hba_flag & HBA_IOQ_FLUSH) 12778 + if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) 12781 12779 return errcnt; 12782 12780 12783 12781 for (i = 1; i <= phba->sli.last_iotag; i++) { ··· 12847 12845 u16 ulp_context, iotag, cqid = LPFC_WQE_CQ_ID_DEFAULT; 12848 12846 bool ia; 12849 12847 12850 - spin_lock_irqsave(&phba->hbalock, iflags); 12851 - 12852 12848 /* all I/Os are in process of being flushed */ 12853 - if (phba->hba_flag & HBA_IOQ_FLUSH) { 12854 - spin_unlock_irqrestore(&phba->hbalock, iflags); 12849 + if (test_bit(HBA_IOQ_FLUSH, &phba->hba_flag)) 12855 12850 return 0; 12856 - } 12851 + 12857 12852 sum = 0; 12858 12853 12854 + spin_lock_irqsave(&phba->hbalock, iflags); 12859 12855 for (i = 1; i <= phba->sli.last_iotag; i++) { 12860 12856 iocbq = phba->sli.iocbq_lookup[i]; 12861 12857 ··· 13385 13385 if ((HS_FFER1 & phba->work_hs) && 13386 13386 ((HS_FFER2 | HS_FFER3 | HS_FFER4 | HS_FFER5 | 13387 13387 HS_FFER6 | HS_FFER7 | HS_FFER8) & phba->work_hs)) { 13388 - phba->hba_flag |= DEFER_ERATT; 13388 + set_bit(DEFER_ERATT, &phba->hba_flag); 13389 13389 /* Clear all interrupt enable conditions */ 13390 13390 writel(0, phba->HCregaddr); 13391 13391 readl(phba->HCregaddr); ··· 13394 13394 /* Set the driver HA work bitmap */ 13395 13395 phba->work_ha |= HA_ERATT; 13396 13396 /* Indicate polling handles this ERATT */ 13397 - phba->hba_flag |= HBA_ERATT_HANDLED; 13397 + set_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 13398 13398 return 1; 13399 13399 } 13400 13400 return 0; ··· 13405 13405 /* Set the driver HA work bitmap */ 13406 13406 phba->work_ha |= HA_ERATT; 13407 13407 /* Indicate polling handles this ERATT */ 13408 - 
phba->hba_flag |= HBA_ERATT_HANDLED; 13408 + set_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 13409 13409 return 1; 13410 13410 } 13411 13411 ··· 13441 13441 &uerr_sta_hi)) { 13442 13442 phba->work_hs |= UNPLUG_ERR; 13443 13443 phba->work_ha |= HA_ERATT; 13444 - phba->hba_flag |= HBA_ERATT_HANDLED; 13444 + set_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 13445 13445 return 1; 13446 13446 } 13447 13447 if ((~phba->sli4_hba.ue_mask_lo & uerr_sta_lo) || ··· 13457 13457 phba->work_status[0] = uerr_sta_lo; 13458 13458 phba->work_status[1] = uerr_sta_hi; 13459 13459 phba->work_ha |= HA_ERATT; 13460 - phba->hba_flag |= HBA_ERATT_HANDLED; 13460 + set_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 13461 13461 return 1; 13462 13462 } 13463 13463 break; ··· 13469 13469 &portsmphr)){ 13470 13470 phba->work_hs |= UNPLUG_ERR; 13471 13471 phba->work_ha |= HA_ERATT; 13472 - phba->hba_flag |= HBA_ERATT_HANDLED; 13472 + set_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 13473 13473 return 1; 13474 13474 } 13475 13475 if (bf_get(lpfc_sliport_status_err, &portstat_reg)) { ··· 13492 13492 phba->work_status[0], 13493 13493 phba->work_status[1]); 13494 13494 phba->work_ha |= HA_ERATT; 13495 - phba->hba_flag |= HBA_ERATT_HANDLED; 13495 + set_bit(HBA_ERATT_HANDLED, &phba->hba_flag); 13496 13496 return 1; 13497 13497 } 13498 13498 break; ··· 13529 13529 return 0; 13530 13530 13531 13531 /* Check if interrupt handler handles this ERATT */ 13532 - spin_lock_irq(&phba->hbalock); 13533 - if (phba->hba_flag & HBA_ERATT_HANDLED) { 13532 + if (test_bit(HBA_ERATT_HANDLED, &phba->hba_flag)) 13534 13533 /* Interrupt handler has handled ERATT */ 13535 - spin_unlock_irq(&phba->hbalock); 13536 13534 return 0; 13537 - } 13538 13535 13539 13536 /* 13540 13537 * If there is deferred error attention, do not check for error 13541 13538 * attention 13542 13539 */ 13543 - if (unlikely(phba->hba_flag & DEFER_ERATT)) { 13544 - spin_unlock_irq(&phba->hbalock); 13540 + if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) 13545 13541 
return 0; 13546 - } 13547 13542 13543 + spin_lock_irq(&phba->hbalock); 13548 13544 /* If PCI channel is offline, don't process it */ 13549 13545 if (unlikely(pci_channel_offline(phba->pcidev))) { 13550 13546 spin_unlock_irq(&phba->hbalock); ··· 13662 13666 ha_copy &= ~HA_ERATT; 13663 13667 /* Check the need for handling ERATT in interrupt handler */ 13664 13668 if (ha_copy & HA_ERATT) { 13665 - if (phba->hba_flag & HBA_ERATT_HANDLED) 13669 + if (test_and_set_bit(HBA_ERATT_HANDLED, 13670 + &phba->hba_flag)) 13666 13671 /* ERATT polling has handled ERATT */ 13667 13672 ha_copy &= ~HA_ERATT; 13668 - else 13669 - /* Indicate interrupt handler handles ERATT */ 13670 - phba->hba_flag |= HBA_ERATT_HANDLED; 13671 13673 } 13672 13674 13673 13675 /* 13674 13676 * If there is deferred error attention, do not check for any 13675 13677 * interrupt. 13676 13678 */ 13677 - if (unlikely(phba->hba_flag & DEFER_ERATT)) { 13679 + if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) { 13678 13680 spin_unlock_irqrestore(&phba->hbalock, iflag); 13679 13681 return IRQ_NONE; 13680 13682 } ··· 13768 13774 ((HS_FFER2 | HS_FFER3 | HS_FFER4 | HS_FFER5 | 13769 13775 HS_FFER6 | HS_FFER7 | HS_FFER8) & 13770 13776 phba->work_hs)) { 13771 - phba->hba_flag |= DEFER_ERATT; 13777 + set_bit(DEFER_ERATT, &phba->hba_flag); 13772 13778 /* Clear all interrupt enable conditions */ 13773 13779 writel(0, phba->HCregaddr); 13774 13780 readl(phba->HCregaddr); ··· 13955 13961 /* Need to read HA REG for FCP ring and other ring events */ 13956 13962 if (lpfc_readl(phba->HAregaddr, &ha_copy)) 13957 13963 return IRQ_HANDLED; 13958 - /* Clear up only attention source related to fast-path */ 13959 - spin_lock_irqsave(&phba->hbalock, iflag); 13964 + 13960 13965 /* 13961 13966 * If there is deferred error attention, do not check for 13962 13967 * any interrupt. 
13963 13968 */ 13964 - if (unlikely(phba->hba_flag & DEFER_ERATT)) { 13965 - spin_unlock_irqrestore(&phba->hbalock, iflag); 13969 + if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) 13966 13970 return IRQ_NONE; 13967 - } 13971 + 13972 + /* Clear up only attention source related to fast-path */ 13973 + spin_lock_irqsave(&phba->hbalock, iflag); 13968 13974 writel((ha_copy & (HA_R0_CLR_MSK | HA_R1_CLR_MSK)), 13969 13975 phba->HAregaddr); 13970 13976 readl(phba->HAregaddr); /* flush */ ··· 14047 14053 spin_unlock(&phba->hbalock); 14048 14054 return IRQ_NONE; 14049 14055 } else if (phba->ha_copy & HA_ERATT) { 14050 - if (phba->hba_flag & HBA_ERATT_HANDLED) 14056 + if (test_and_set_bit(HBA_ERATT_HANDLED, &phba->hba_flag)) 14051 14057 /* ERATT polling has handled ERATT */ 14052 14058 phba->ha_copy &= ~HA_ERATT; 14053 - else 14054 - /* Indicate interrupt handler handles ERATT */ 14055 - phba->hba_flag |= HBA_ERATT_HANDLED; 14056 14059 } 14057 14060 14058 14061 /* 14059 14062 * If there is deferred error attention, do not check for any interrupt. 
14060 14063 */ 14061 - if (unlikely(phba->hba_flag & DEFER_ERATT)) { 14064 + if (unlikely(test_bit(DEFER_ERATT, &phba->hba_flag))) { 14062 14065 spin_unlock(&phba->hbalock); 14063 14066 return IRQ_NONE; 14064 14067 } ··· 14126 14135 unsigned long iflags; 14127 14136 14128 14137 /* First, declare the els xri abort event has been handled */ 14129 - spin_lock_irqsave(&phba->hbalock, iflags); 14130 - phba->hba_flag &= ~ELS_XRI_ABORT_EVENT; 14131 - spin_unlock_irqrestore(&phba->hbalock, iflags); 14138 + clear_bit(ELS_XRI_ABORT_EVENT, &phba->hba_flag); 14132 14139 14133 14140 /* Now, handle all the els xri abort events */ 14134 14141 spin_lock_irqsave(&phba->sli4_hba.els_xri_abrt_list_lock, iflags); ··· 14252 14263 spin_unlock_irqrestore(&phba->sli4_hba.asynce_list_lock, iflags); 14253 14264 14254 14265 /* Set the async event flag */ 14255 - spin_lock_irqsave(&phba->hbalock, iflags); 14256 - phba->hba_flag |= ASYNC_EVENT; 14257 - spin_unlock_irqrestore(&phba->hbalock, iflags); 14266 + set_bit(ASYNC_EVENT, &phba->hba_flag); 14258 14267 14259 14268 return true; 14260 14269 } ··· 14492 14505 spin_lock_irqsave(&phba->hbalock, iflags); 14493 14506 list_add_tail(&irspiocbq->cq_event.list, 14494 14507 &phba->sli4_hba.sp_queue_event); 14495 - phba->hba_flag |= HBA_SP_QUEUE_EVT; 14496 14508 spin_unlock_irqrestore(&phba->hbalock, iflags); 14509 + set_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag); 14497 14510 14498 14511 return true; 14499 14512 } ··· 14567 14580 list_add_tail(&cq_event->list, 14568 14581 &phba->sli4_hba.sp_els_xri_aborted_work_queue); 14569 14582 /* Set the els xri abort event flag */ 14570 - phba->hba_flag |= ELS_XRI_ABORT_EVENT; 14583 + set_bit(ELS_XRI_ABORT_EVENT, &phba->hba_flag); 14571 14584 spin_unlock_irqrestore(&phba->sli4_hba.els_xri_abrt_list_lock, 14572 14585 iflags); 14573 14586 workposted = true; ··· 14654 14667 /* save off the frame for the work thread to process */ 14655 14668 list_add_tail(&dma_buf->cq_event.list, 14656 14669 
&phba->sli4_hba.sp_queue_event); 14657 - /* Frame received */ 14658 - phba->hba_flag |= HBA_SP_QUEUE_EVT; 14659 14670 spin_unlock_irqrestore(&phba->hbalock, iflags); 14671 + /* Frame received */ 14672 + set_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag); 14660 14673 workposted = true; 14661 14674 break; 14662 14675 case FC_STATUS_INSUFF_BUF_FRM_DISC: ··· 14676 14689 case FC_STATUS_INSUFF_BUF_NEED_BUF: 14677 14690 hrq->RQ_no_posted_buf++; 14678 14691 /* Post more buffers if possible */ 14679 - spin_lock_irqsave(&phba->hbalock, iflags); 14680 - phba->hba_flag |= HBA_POST_RECEIVE_BUFFER; 14681 - spin_unlock_irqrestore(&phba->hbalock, iflags); 14692 + set_bit(HBA_POST_RECEIVE_BUFFER, &phba->hba_flag); 14682 14693 workposted = true; 14683 14694 break; 14684 14695 case FC_STATUS_RQ_DMA_FAILURE: ··· 19334 19349 spin_lock_irqsave(&phba->hbalock, iflags); 19335 19350 list_add_tail(&dmabuf->cq_event.list, 19336 19351 &phba->sli4_hba.sp_queue_event); 19337 - phba->hba_flag |= HBA_SP_QUEUE_EVT; 19338 19352 spin_unlock_irqrestore(&phba->hbalock, iflags); 19353 + set_bit(HBA_SP_QUEUE_EVT, &phba->hba_flag); 19339 19354 lpfc_worker_wake_up(phba); 19340 19355 return; 19341 19356 } ··· 20087 20102 mboxq->vport = phba->pport; 20088 20103 mboxq->mbox_cmpl = lpfc_mbx_cmpl_fcf_scan_read_fcf_rec; 20089 20104 20090 - spin_lock_irq(&phba->hbalock); 20091 - phba->hba_flag |= FCF_TS_INPROG; 20092 - spin_unlock_irq(&phba->hbalock); 20105 + set_bit(FCF_TS_INPROG, &phba->hba_flag); 20093 20106 20094 20107 rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_NOWAIT); 20095 20108 if (rc == MBX_NOT_FINISHED) ··· 20103 20120 if (mboxq) 20104 20121 lpfc_sli4_mbox_cmd_free(phba, mboxq); 20105 20122 /* FCF scan failed, clear FCF_TS_INPROG flag */ 20106 - spin_lock_irq(&phba->hbalock); 20107 - phba->hba_flag &= ~FCF_TS_INPROG; 20108 - spin_unlock_irq(&phba->hbalock); 20123 + clear_bit(FCF_TS_INPROG, &phba->hba_flag); 20109 20124 } 20110 20125 return error; 20111 20126 } ··· 20760 20779 20761 20780 /* This HBA contains 
PORT_STE configured */ 20762 20781 if (!rgn23_data[offset + 2]) 20763 - phba->hba_flag |= LINK_DISABLED; 20782 + set_bit(LINK_DISABLED, &phba->hba_flag); 20764 20783 20765 20784 goto out; 20766 20785 } ··· 22469 22488 } 22470 22489 22471 22490 tmp->fcp_rsp = (struct fcp_rsp *)((uint8_t *)tmp->fcp_cmnd + 22472 - sizeof(struct fcp_cmnd)); 22491 + sizeof(struct fcp_cmnd32)); 22473 22492 22474 22493 spin_lock_irqsave(&hdwq->hdwq_lock, iflags); 22475 22494 list_add_tail(&tmp->list_node, &lpfc_buf->dma_cmd_rsp_list); ··· 22574 22593 u8 cmnd; 22575 22594 u32 *pcmd; 22576 22595 u32 if_type = 0; 22577 - u32 fip, abort_tag; 22596 + u32 abort_tag; 22597 + bool fip; 22578 22598 struct lpfc_nodelist *ndlp = NULL; 22579 22599 union lpfc_wqe128 *wqe = &job->wqe; 22580 22600 u8 command_type = ELS_COMMAND_NON_FIP; 22581 22601 22582 - fip = phba->hba_flag & HBA_FIP_SUPPORT; 22602 + fip = test_bit(HBA_FIP_SUPPORT, &phba->hba_flag); 22583 22603 /* The fcp commands will set command type */ 22584 22604 if (job->cmd_flag & LPFC_IO_FCP) 22585 22605 command_type = FCP_COMMAND;
+1 -1
drivers/scsi/lpfc/lpfc_version.h
··· 20 20 * included with this package. * 21 21 *******************************************************************/ 22 22 23 - #define LPFC_DRIVER_VERSION "14.4.0.1" 23 + #define LPFC_DRIVER_VERSION "14.4.0.2" 24 24 #define LPFC_DRIVER_NAME "lpfc" 25 25 26 26 /* Used for SLI 2/3 */
+7 -1
drivers/scsi/mac_scsi.c
··· 534 534 scsi_host_put(instance); 535 535 } 536 536 537 - static struct platform_driver mac_scsi_driver = { 537 + /* 538 + * mac_scsi_remove() lives in .exit.text. For drivers registered via 539 + * module_platform_driver_probe() this is ok because they cannot get unbound at 540 + * runtime. So mark the driver struct with __refdata to prevent modpost 541 + * triggering a section mismatch warning. 542 + */ 543 + static struct platform_driver mac_scsi_driver __refdata = { 538 544 .remove_new = __exit_p(mac_scsi_remove), 539 545 .driver = { 540 546 .name = DRV_MODULE_NAME,
+56 -57
drivers/scsi/megaraid/Kconfig.megaraid
··· 3 3 bool "LSI Logic New Generation RAID Device Drivers" 4 4 depends on PCI && HAS_IOPORT && SCSI 5 5 help 6 - LSI Logic RAID Device Drivers 6 + LSI Logic RAID Device Drivers 7 7 8 8 config MEGARAID_MM 9 9 tristate "LSI Logic Management Module (New Driver)" 10 10 depends on PCI && HAS_IOPORT && SCSI && MEGARAID_NEWGEN 11 11 help 12 - Management Module provides ioctl, sysfs support for LSI Logic 13 - RAID controllers. 14 - To compile this driver as a module, choose M here: the 15 - module will be called megaraid_mm 12 + Management Module provides ioctl, sysfs support for LSI Logic 13 + RAID controllers. 14 + To compile this driver as a module, choose M here: the 15 + module will be called megaraid_mm 16 16 17 17 18 18 config MEGARAID_MAILBOX 19 19 tristate "LSI Logic MegaRAID Driver (New Driver)" 20 20 depends on PCI && SCSI && MEGARAID_MM 21 21 help 22 - List of supported controllers 22 + List of supported controllers 23 23 24 - OEM Product Name VID :DID :SVID:SSID 25 - --- ------------ ---- ---- ---- ---- 26 - Dell PERC3/QC 101E:1960:1028:0471 27 - Dell PERC3/DC 101E:1960:1028:0493 28 - Dell PERC3/SC 101E:1960:1028:0475 29 - Dell PERC3/Di 1028:000E:1028:0123 30 - Dell PERC4/SC 1000:1960:1028:0520 31 - Dell PERC4/DC 1000:1960:1028:0518 32 - Dell PERC4/QC 1000:0407:1028:0531 33 - Dell PERC4/Di 1028:000F:1028:014A 34 - Dell PERC 4e/Si 1028:0013:1028:016c 35 - Dell PERC 4e/Di 1028:0013:1028:016d 36 - Dell PERC 4e/Di 1028:0013:1028:016e 37 - Dell PERC 4e/Di 1028:0013:1028:016f 38 - Dell PERC 4e/Di 1028:0013:1028:0170 39 - Dell PERC 4e/DC 1000:0408:1028:0002 40 - Dell PERC 4e/SC 1000:0408:1028:0001 41 - LSI MegaRAID SCSI 320-0 1000:1960:1000:A520 42 - LSI MegaRAID SCSI 320-1 1000:1960:1000:0520 43 - LSI MegaRAID SCSI 320-2 1000:1960:1000:0518 44 - LSI MegaRAID SCSI 320-0X 1000:0407:1000:0530 45 - LSI MegaRAID SCSI 320-2X 1000:0407:1000:0532 46 - LSI MegaRAID SCSI 320-4X 1000:0407:1000:0531 47 - LSI MegaRAID SCSI 320-1E 1000:0408:1000:0001 48 - LSI MegaRAID SCSI 
320-2E 1000:0408:1000:0002 49 - LSI MegaRAID SATA 150-4 1000:1960:1000:4523 50 - LSI MegaRAID SATA 150-6 1000:1960:1000:0523 51 - LSI MegaRAID SATA 300-4X 1000:0409:1000:3004 52 - LSI MegaRAID SATA 300-8X 1000:0409:1000:3008 53 - INTEL RAID Controller SRCU42X 1000:0407:8086:0532 54 - INTEL RAID Controller SRCS16 1000:1960:8086:0523 55 - INTEL RAID Controller SRCU42E 1000:0408:8086:0002 56 - INTEL RAID Controller SRCZCRX 1000:0407:8086:0530 57 - INTEL RAID Controller SRCS28X 1000:0409:8086:3008 58 - INTEL RAID Controller SROMBU42E 1000:0408:8086:3431 59 - INTEL RAID Controller SROMBU42E 1000:0408:8086:3499 60 - INTEL RAID Controller SRCU51L 1000:1960:8086:0520 61 - FSC MegaRAID PCI Express ROMB 1000:0408:1734:1065 62 - ACER MegaRAID ROMB-2E 1000:0408:1025:004D 63 - NEC MegaRAID PCI Express ROMB 1000:0408:1033:8287 24 + OEM Product Name VID :DID :SVID:SSID 25 + --- ------------ ---- ---- ---- ---- 26 + Dell PERC3/QC 101E:1960:1028:0471 27 + Dell PERC3/DC 101E:1960:1028:0493 28 + Dell PERC3/SC 101E:1960:1028:0475 29 + Dell PERC3/Di 1028:000E:1028:0123 30 + Dell PERC4/SC 1000:1960:1028:0520 31 + Dell PERC4/DC 1000:1960:1028:0518 32 + Dell PERC4/QC 1000:0407:1028:0531 33 + Dell PERC4/Di 1028:000F:1028:014A 34 + Dell PERC 4e/Si 1028:0013:1028:016c 35 + Dell PERC 4e/Di 1028:0013:1028:016d 36 + Dell PERC 4e/Di 1028:0013:1028:016e 37 + Dell PERC 4e/Di 1028:0013:1028:016f 38 + Dell PERC 4e/Di 1028:0013:1028:0170 39 + Dell PERC 4e/DC 1000:0408:1028:0002 40 + Dell PERC 4e/SC 1000:0408:1028:0001 41 + LSI MegaRAID SCSI 320-0 1000:1960:1000:A520 42 + LSI MegaRAID SCSI 320-1 1000:1960:1000:0520 43 + LSI MegaRAID SCSI 320-2 1000:1960:1000:0518 44 + LSI MegaRAID SCSI 320-0X 1000:0407:1000:0530 45 + LSI MegaRAID SCSI 320-2X 1000:0407:1000:0532 46 + LSI MegaRAID SCSI 320-4X 1000:0407:1000:0531 47 + LSI MegaRAID SCSI 320-1E 1000:0408:1000:0001 48 + LSI MegaRAID SCSI 320-2E 1000:0408:1000:0002 49 + LSI MegaRAID SATA 150-4 1000:1960:1000:4523 50 + LSI MegaRAID SATA 150-6 
1000:1960:1000:0523 51 + LSI MegaRAID SATA 300-4X 1000:0409:1000:3004 52 + LSI MegaRAID SATA 300-8X 1000:0409:1000:3008 53 + INTEL RAID Controller SRCU42X 1000:0407:8086:0532 54 + INTEL RAID Controller SRCS16 1000:1960:8086:0523 55 + INTEL RAID Controller SRCU42E 1000:0408:8086:0002 56 + INTEL RAID Controller SRCZCRX 1000:0407:8086:0530 57 + INTEL RAID Controller SRCS28X 1000:0409:8086:3008 58 + INTEL RAID Controller SROMBU42E 1000:0408:8086:3431 59 + INTEL RAID Controller SROMBU42E 1000:0408:8086:3499 60 + INTEL RAID Controller SRCU51L 1000:1960:8086:0520 61 + FSC MegaRAID PCI Express ROMB 1000:0408:1734:1065 62 + ACER MegaRAID ROMB-2E 1000:0408:1025:004D 63 + NEC MegaRAID PCI Express ROMB 1000:0408:1033:8287 64 64 65 - To compile this driver as a module, choose M here: the 66 - module will be called megaraid_mbox 65 + To compile this driver as a module, choose M here: the 66 + module will be called megaraid_mbox 67 67 68 68 config MEGARAID_LEGACY 69 69 tristate "LSI Logic Legacy MegaRAID Driver" 70 70 depends on PCI && HAS_IOPORT && SCSI 71 71 help 72 - This driver supports the LSI MegaRAID 418, 428, 438, 466, 762, 490 73 - and 467 SCSI host adapters. This driver also support the all U320 74 - RAID controllers 72 + This driver supports the LSI MegaRAID 418, 428, 438, 466, 762, 490 73 + and 467 SCSI host adapters. This driver also support the all U320 74 + RAID controllers 75 75 76 - To compile this driver as a module, choose M here: the 77 - module will be called megaraid 76 + To compile this driver as a module, choose M here: the 77 + module will be called megaraid 78 78 79 79 config MEGARAID_SAS 80 80 tristate "LSI Logic MegaRAID SAS RAID Module" 81 81 depends on PCI && SCSI 82 82 select IRQ_POLL 83 83 help 84 - Module for LSI Logic's SAS based RAID controllers. 85 - To compile this driver as a module, choose 'm' here. 86 - Module will be called megaraid_sas 87 - 84 + Module for LSI Logic's SAS based RAID controllers. 
85 + To compile this driver as a module, choose 'm' here. 86 + Module will be called megaraid_sas
+1 -1
drivers/scsi/megaraid/megaraid_sas.h
··· 2701 2701 int 2702 2702 megasas_sync_pd_seq_num(struct megasas_instance *instance, bool pend); 2703 2703 void megasas_set_dynamic_target_properties(struct scsi_device *sdev, 2704 - bool is_target_prop); 2704 + struct queue_limits *lim, bool is_target_prop); 2705 2705 int megasas_get_target_prop(struct megasas_instance *instance, 2706 2706 struct scsi_device *sdev); 2707 2707 void megasas_get_snapdump_properties(struct megasas_instance *instance);
+17 -12
drivers/scsi/megaraid/megaraid_sas_base.c
··· 1888 1888 * Returns void 1889 1889 */ 1890 1890 void megasas_set_dynamic_target_properties(struct scsi_device *sdev, 1891 - bool is_target_prop) 1891 + struct queue_limits *lim, bool is_target_prop) 1892 1892 { 1893 1893 u16 pd_index = 0, ld; 1894 1894 u32 device_id; ··· 1915 1915 return; 1916 1916 raid = MR_LdRaidGet(ld, local_map_ptr); 1917 1917 1918 - if (raid->capability.ldPiMode == MR_PROT_INFO_TYPE_CONTROLLER) 1919 - blk_queue_update_dma_alignment(sdev->request_queue, 0x7); 1918 + if (raid->capability.ldPiMode == MR_PROT_INFO_TYPE_CONTROLLER) { 1919 + if (lim) 1920 + lim->dma_alignment = 0x7; 1921 + } 1920 1922 1921 1923 mr_device_priv_data->is_tm_capable = 1922 1924 raid->capability.tmCapable; ··· 1969 1967 * 1970 1968 */ 1971 1969 static inline void 1972 - megasas_set_nvme_device_properties(struct scsi_device *sdev, u32 max_io_size) 1970 + megasas_set_nvme_device_properties(struct scsi_device *sdev, 1971 + struct queue_limits *lim, u32 max_io_size) 1973 1972 { 1974 1973 struct megasas_instance *instance; 1975 1974 u32 mr_nvme_pg_size; ··· 1979 1976 mr_nvme_pg_size = max_t(u32, instance->nvme_page_size, 1980 1977 MR_DEFAULT_NVME_PAGE_SIZE); 1981 1978 1982 - blk_queue_max_hw_sectors(sdev->request_queue, (max_io_size / 512)); 1979 + lim->max_hw_sectors = max_io_size / 512; 1980 + lim->virt_boundary_mask = mr_nvme_pg_size - 1; 1983 1981 1984 1982 blk_queue_flag_set(QUEUE_FLAG_NOMERGES, sdev->request_queue); 1985 - blk_queue_virt_boundary(sdev->request_queue, mr_nvme_pg_size - 1); 1986 1983 } 1987 1984 1988 1985 /* ··· 2044 2041 * @is_target_prop true, if fw provided target properties. 
2045 2042 */ 2046 2043 static void megasas_set_static_target_properties(struct scsi_device *sdev, 2047 - bool is_target_prop) 2044 + struct queue_limits *lim, bool is_target_prop) 2048 2045 { 2049 2046 u32 max_io_size_kb = MR_DEFAULT_NVME_MDTS_KB; 2050 2047 struct megasas_instance *instance; ··· 2063 2060 max_io_size_kb = le32_to_cpu(instance->tgt_prop->max_io_size_kb); 2064 2061 2065 2062 if (instance->nvme_page_size && max_io_size_kb) 2066 - megasas_set_nvme_device_properties(sdev, (max_io_size_kb << 10)); 2063 + megasas_set_nvme_device_properties(sdev, lim, 2064 + max_io_size_kb << 10); 2067 2065 2068 2066 megasas_set_fw_assisted_qd(sdev, is_target_prop); 2069 2067 } 2070 2068 2071 2069 2072 - static int megasas_slave_configure(struct scsi_device *sdev) 2070 + static int megasas_device_configure(struct scsi_device *sdev, 2071 + struct queue_limits *lim) 2073 2072 { 2074 2073 u16 pd_index = 0; 2075 2074 struct megasas_instance *instance; ··· 2101 2096 ret_target_prop = megasas_get_target_prop(instance, sdev); 2102 2097 2103 2098 is_target_prop = (ret_target_prop == DCMD_SUCCESS) ? true : false; 2104 - megasas_set_static_target_properties(sdev, is_target_prop); 2099 + megasas_set_static_target_properties(sdev, lim, is_target_prop); 2105 2100 2106 2101 /* This sdev property may change post OCR */ 2107 - megasas_set_dynamic_target_properties(sdev, is_target_prop); 2102 + megasas_set_dynamic_target_properties(sdev, lim, is_target_prop); 2108 2103 2109 2104 mutex_unlock(&instance->reset_mutex); 2110 2105 ··· 3512 3507 .module = THIS_MODULE, 3513 3508 .name = "Avago SAS based MegaRAID driver", 3514 3509 .proc_name = "megaraid_sas", 3515 - .slave_configure = megasas_slave_configure, 3510 + .device_configure = megasas_device_configure, 3516 3511 .slave_alloc = megasas_slave_alloc, 3517 3512 .slave_destroy = megasas_slave_destroy, 3518 3513 .queuecommand = megasas_queue_command,
+2 -1
drivers/scsi/megaraid/megaraid_sas_fusion.c
··· 5119 5119 ret_target_prop = megasas_get_target_prop(instance, sdev); 5120 5120 5121 5121 is_target_prop = (ret_target_prop == DCMD_SUCCESS) ? true : false; 5122 - megasas_set_dynamic_target_properties(sdev, is_target_prop); 5122 + megasas_set_dynamic_target_properties(sdev, NULL, 5123 + is_target_prop); 5123 5124 } 5124 5125 5125 5126 status_reg = instance->instancet->read_fw_status_reg
+3
drivers/scsi/mpi3mr/mpi/mpi30_cnfg.h
··· 309 309 #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_GENERIC (0x00) 310 310 #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_CABLE_MGMT (0x10) 311 311 #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_SOURCE_ACTIVE_CABLE_OVERCURRENT (0x20) 312 + #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_ACK_REQUIRED (0x02) 312 313 #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_MASK (0x01) 313 314 #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_EDGE (0x00) 314 315 #define MPI3_MAN6_GPIO_EXTINT_PARAM1_FLAGS_TRIGGER_LEVEL (0x01) ··· 1316 1315 __le32 reserved18; 1317 1316 }; 1318 1317 #define MPI3_DRIVER0_PAGEVERSION (0x00) 1318 + #define MPI3_DRIVER0_BSDOPTS_DEVICEEXPOSURE_DISABLE (0x00000020) 1319 + #define MPI3_DRIVER0_BSDOPTS_WRITECACHE_DISABLE (0x00000010) 1319 1320 #define MPI3_DRIVER0_BSDOPTS_HEADLESS_MODE_ENABLE (0x00000008) 1320 1321 #define MPI3_DRIVER0_BSDOPTS_DIS_HII_CONFIG_UTIL (0x00000004) 1321 1322 #define MPI3_DRIVER0_BSDOPTS_REGISTRATION_MASK (0x00000003)
+5 -15
drivers/scsi/mpi3mr/mpi/mpi30_image.h
··· 198 198 struct mpi3_supported_device supported_device[MPI3_SUPPORTED_DEVICE_MAX]; 199 199 }; 200 200 201 - #ifndef MPI3_ENCRYPTED_HASH_MAX 202 - #define MPI3_ENCRYPTED_HASH_MAX (1) 201 + #ifndef MPI3_PUBLIC_KEY_MAX 202 + #define MPI3_PUBLIC_KEY_MAX (1) 203 203 #endif 204 204 struct mpi3_encrypted_hash_entry { 205 205 u8 hash_image_type; 206 206 u8 hash_algorithm; 207 207 u8 encryption_algorithm; 208 208 u8 reserved03; 209 - __le32 reserved04; 210 - __le32 encrypted_hash[MPI3_ENCRYPTED_HASH_MAX]; 209 + __le16 public_key_size; 210 + __le16 signature_size; 211 + __le32 public_key[MPI3_PUBLIC_KEY_MAX]; 211 212 }; 212 213 213 214 #define MPI3_HASH_IMAGE_TYPE_KEY_WITH_SIGNATURE (0x03) ··· 229 228 #define MPI3_ENCRYPTION_ALGORITHM_RSA2048 (0x04) 230 229 #define MPI3_ENCRYPTION_ALGORITHM_RSA4096 (0x05) 231 230 #define MPI3_ENCRYPTION_ALGORITHM_RSA3072 (0x06) 232 - #ifndef MPI3_PUBLIC_KEY_MAX 233 - #define MPI3_PUBLIC_KEY_MAX (1) 234 - #endif 235 - struct mpi3_encrypted_key_with_hash_entry { 236 - u8 hash_image_type; 237 - u8 hash_algorithm; 238 - u8 encryption_algorithm; 239 - u8 reserved03; 240 - __le32 reserved04; 241 - __le32 public_key[MPI3_PUBLIC_KEY_MAX]; 242 - }; 243 231 244 232 #ifndef MPI3_ENCRYPTED_HASH_ENTRY_MAX 245 233 #define MPI3_ENCRYPTED_HASH_ENTRY_MAX (1)
+12 -8
drivers/scsi/mpi3mr/mpi/mpi30_ioc.h
··· 27 27 __le64 sense_buffer_free_queue_address; 28 28 __le64 driver_information_address; 29 29 }; 30 - 30 + #define MPI3_IOCINIT_MSGFLAGS_WRITESAMEDIVERT_SUPPORTED (0x08) 31 31 #define MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED (0x04) 32 32 #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_MASK (0x03) 33 33 #define MPI3_IOCINIT_MSGFLAGS_HOSTMETADATA_NOT_USED (0x00) ··· 101 101 __le16 max_io_throttle_group; 102 102 __le16 io_throttle_low; 103 103 __le16 io_throttle_high; 104 + __le32 diag_fdl_size; 105 + __le32 diag_tty_size; 104 106 }; 105 107 #define MPI3_IOCFACTS_CAPABILITY_NON_SUPERVISOR_MASK (0x80000000) 106 108 #define MPI3_IOCFACTS_CAPABILITY_SUPERVISOR_IOC (0x00000000) ··· 110 108 #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_MASK (0x00000600) 111 109 #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_FIXED_THRESHOLD (0x00000000) 112 110 #define MPI3_IOCFACTS_CAPABILITY_INT_COALESCE_OUTSTANDING_IO (0x00000200) 113 - #define MPI3_IOCFACTS_CAPABILITY_COMPLETE_RESET_CAPABLE (0x00000100) 114 - #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_ENABLED (0x00000080) 115 - #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_FW_ENABLED (0x00000040) 116 - #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_DRIVER_ENABLED (0x00000020) 117 - #define MPI3_IOCFACTS_CAPABILITY_ADVANCED_HOST_PD_ENABLED (0x00000010) 118 - #define MPI3_IOCFACTS_CAPABILITY_RAID_CAPABLE (0x00000008) 119 - #define MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED (0x00000002) 111 + #define MPI3_IOCFACTS_CAPABILITY_COMPLETE_RESET_SUPPORTED (0x00000100) 112 + #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_TRACE_SUPPORTED (0x00000080) 113 + #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_FW_SUPPORTED (0x00000040) 114 + #define MPI3_IOCFACTS_CAPABILITY_SEG_DIAG_DRIVER_SUPPORTED (0x00000020) 115 + #define MPI3_IOCFACTS_CAPABILITY_ADVANCED_HOST_PD_SUPPORTED (0x00000010) 116 + #define MPI3_IOCFACTS_CAPABILITY_RAID_SUPPORTED (0x00000008) 117 + #define MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED (0x00000002) 120 118 #define 
MPI3_IOCFACTS_CAPABILITY_COALESCE_CTRL_SUPPORTED (0x00000001) 121 119 #define MPI3_IOCFACTS_PID_TYPE_MASK (0xf000) 122 120 #define MPI3_IOCFACTS_PID_TYPE_SHIFT (12) ··· 161 159 #define MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR (0x00000002) 162 160 #define MPI3_IOCFACTS_IO_THROTTLE_DATA_LENGTH_NOT_REQUIRED (0x0000) 163 161 #define MPI3_IOCFACTS_MAX_IO_THROTTLE_GROUP_NOT_REQUIRED (0x0000) 162 + #define MPI3_IOCFACTS_DIAGFDLSIZE_NOT_SUPPORTED (0x00000000) 163 + #define MPI3_IOCFACTS_DIAGTTYSIZE_NOT_SUPPORTED (0x00000000) 164 164 struct mpi3_mgmt_passthrough_request { 165 165 __le16 host_tag; 166 166 u8 ioc_use_only02;
+1 -1
drivers/scsi/mpi3mr/mpi/mpi30_transport.h
··· 18 18 19 19 #define MPI3_VERSION_MAJOR (3) 20 20 #define MPI3_VERSION_MINOR (0) 21 - #define MPI3_VERSION_UNIT (28) 21 + #define MPI3_VERSION_UNIT (31) 22 22 #define MPI3_VERSION_DEV (0) 23 23 #define MPI3_DEVHANDLE_INVALID (0xffff) 24 24 struct mpi3_sysif_oper_queue_indexes {
+9 -6
drivers/scsi/mpi3mr/mpi3mr.h
··· 55 55 extern int prot_mask; 56 56 extern atomic64_t event_counter; 57 57 58 - #define MPI3MR_DRIVER_VERSION "8.5.1.0.0" 59 - #define MPI3MR_DRIVER_RELDATE "5-December-2023" 58 + #define MPI3MR_DRIVER_VERSION "8.8.1.0.50" 59 + #define MPI3MR_DRIVER_RELDATE "5-March-2024" 60 60 61 61 #define MPI3MR_DRIVER_NAME "mpi3mr" 62 62 #define MPI3MR_DRIVER_LICENSE "GPL" 63 63 #define MPI3MR_DRIVER_AUTHOR "Broadcom Inc. <mpi3mr-linuxdrv.pdl@broadcom.com>" 64 64 #define MPI3MR_DRIVER_DESC "MPI3 Storage Controller Device Driver" 65 65 66 - #define MPI3MR_NAME_LENGTH 32 66 + #define MPI3MR_NAME_LENGTH 64 67 67 #define IOCNAME "%s: " 68 68 69 69 #define MPI3MR_DEFAULT_MAX_IO_SIZE (1 * 1024 * 1024) ··· 293 293 MPI3MR_RESET_FROM_CFG_REQ_TIMEOUT = 29, 294 294 MPI3MR_RESET_FROM_SAS_TRANSPORT_TIMEOUT = 30, 295 295 }; 296 + 297 + #define MPI3MR_RESET_REASON_OSTYPE_LINUX 1 298 + #define MPI3MR_RESET_REASON_OSTYPE_SHIFT 28 299 + #define MPI3MR_RESET_REASON_IOCNUM_SHIFT 20 296 300 297 301 /* Queue type definitions */ 298 302 enum queue_type { ··· 1146 1142 spinlock_t fwevt_lock; 1147 1143 struct list_head fwevt_list; 1148 1144 1149 - char watchdog_work_q_name[20]; 1145 + char watchdog_work_q_name[50]; 1150 1146 struct workqueue_struct *watchdog_work_q; 1151 1147 struct delayed_work watchdog_work; 1152 1148 spinlock_t watchdog_lock; ··· 1340 1336 void mpi3mr_stop_watchdog(struct mpi3mr_ioc *mrioc); 1341 1337 1342 1338 int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, 1343 - u32 reset_reason, u8 snapdump); 1339 + u16 reset_reason, u8 snapdump); 1344 1340 void mpi3mr_ioc_disable_intr(struct mpi3mr_ioc *mrioc); 1345 1341 void mpi3mr_ioc_enable_intr(struct mpi3mr_ioc *mrioc); 1346 1342 ··· 1352 1348 void mpi3mr_cleanup_fwevt_list(struct mpi3mr_ioc *mrioc); 1353 1349 void mpi3mr_flush_host_io(struct mpi3mr_ioc *mrioc); 1354 1350 void mpi3mr_invalidate_devhandles(struct mpi3mr_ioc *mrioc); 1355 - void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc); 1356 1351 void 
mpi3mr_flush_delayed_cmd_lists(struct mpi3mr_ioc *mrioc); 1357 1352 void mpi3mr_check_rh_fault_ioc(struct mpi3mr_ioc *mrioc, u32 reason_code); 1358 1353 void mpi3mr_print_fault_info(struct mpi3mr_ioc *mrioc);
+19 -14
drivers/scsi/mpi3mr/mpi3mr_app.c
··· 1598 1598 rval = -EAGAIN; 1599 1599 if (mrioc->bsg_cmds.state & MPI3MR_CMD_RESET) 1600 1600 goto out_unlock; 1601 - dprint_bsg_err(mrioc, 1602 - "%s: bsg request timedout after %d seconds\n", __func__, 1603 - karg->timeout); 1604 - if (mrioc->logging_level & MPI3_DEBUG_BSG_ERROR) { 1605 - dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, 1601 + if (((mpi_header->function != MPI3_FUNCTION_SCSI_IO) && 1602 + (mpi_header->function != MPI3_FUNCTION_NVME_ENCAPSULATED)) 1603 + || (mrioc->logging_level & MPI3_DEBUG_BSG_ERROR)) { 1604 + ioc_info(mrioc, "%s: bsg request timedout after %d seconds\n", 1605 + __func__, karg->timeout); 1606 + if (!(mrioc->logging_level & MPI3_DEBUG_BSG_INFO)) { 1607 + dprint_dump(mpi_req, MPI3MR_ADMIN_REQ_FRAME_SZ, 1606 1608 "bsg_mpi3_req"); 1607 1609 if (mpi_header->function == 1608 - MPI3_BSG_FUNCTION_MGMT_PASSTHROUGH) { 1610 + MPI3_FUNCTION_MGMT_PASSTHROUGH) { 1609 1611 drv_buf_iter = &drv_bufs[0]; 1610 1612 dprint_dump(drv_buf_iter->kern_buf, 1611 1613 rmc_size, "mpi3_mgmt_req"); 1614 + } 1612 1615 } 1613 1616 } 1614 1617 if ((mpi_header->function == MPI3_BSG_FUNCTION_NVME_ENCAPSULATED) || 1615 - (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO)) 1618 + (mpi_header->function == MPI3_BSG_FUNCTION_SCSI_IO)) { 1619 + dprint_bsg_err(mrioc, "%s: bsg request timedout after %d seconds,\n" 1620 + "issuing target reset to (0x%04x)\n", __func__, 1621 + karg->timeout, mpi_header->function_dependent); 1616 1622 mpi3mr_issue_tm(mrioc, 1617 1623 MPI3_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 1618 1624 mpi_header->function_dependent, 0, 1619 1625 MPI3MR_HOSTTAG_BLK_TMS, MPI3MR_RESETTM_TIMEOUT, 1620 1626 &mrioc->host_tm_cmds, &resp_code, NULL); 1627 + } 1621 1628 if (!(mrioc->bsg_cmds.state & MPI3MR_CMD_COMPLETE) && 1622 1629 !(mrioc->bsg_cmds.state & MPI3MR_CMD_RESET)) 1623 1630 mpi3mr_soft_reset_handler(mrioc, ··· 1845 1838 { 1846 1839 struct device *bsg_dev = &mrioc->bsg_dev; 1847 1840 struct device *parent = &mrioc->shost->shost_gendev; 1841 + struct 
queue_limits lim = { 1842 + .max_hw_sectors = MPI3MR_MAX_APP_XFER_SECTORS, 1843 + .max_segments = MPI3MR_MAX_APP_XFER_SEGMENTS, 1844 + }; 1848 1845 1849 1846 device_initialize(bsg_dev); 1850 1847 ··· 1864 1853 return; 1865 1854 } 1866 1855 1867 - mrioc->bsg_queue = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), 1856 + mrioc->bsg_queue = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), &lim, 1868 1857 mpi3mr_bsg_request, NULL, 0); 1869 1858 if (IS_ERR(mrioc->bsg_queue)) { 1870 1859 ioc_err(mrioc, "%s: bsg registration failed\n", 1871 1860 dev_name(bsg_dev)); 1872 1861 device_del(bsg_dev); 1873 1862 put_device(bsg_dev); 1874 - return; 1875 1863 } 1876 - 1877 - blk_queue_max_segments(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SEGMENTS); 1878 - blk_queue_max_hw_sectors(mrioc->bsg_queue, MPI3MR_MAX_APP_XFER_SECTORS); 1879 - 1880 - return; 1881 1864 } 1882 1865 1883 1866 /**
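The mpi3mr_app.c hunk above replaces post-allocation `blk_queue_max_segments()`/`blk_queue_max_hw_sectors()` calls with a `queue_limits` struct handed to `bsg_setup_queue()` at creation time. A minimal userspace sketch of that configure-at-construction pattern (all names here are hypothetical, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative analogue of the queue_limits change: limits are gathered
 * into a struct and consumed by the constructor, instead of being poked
 * into the queue one helper call at a time after it already exists. */
struct fake_limits {
    unsigned int max_hw_sectors;
    unsigned int max_segments;
};

struct fake_queue {
    struct fake_limits lim;
};

/* The constructor takes the limits atomically, so a queue can never be
 * observed in a half-configured state. */
static void fake_setup_queue(struct fake_queue *q,
                             const struct fake_limits *lim)
{
    q->lim = *lim;
}
```

This is why the error path in the diff no longer needs to undo limit changes: if `bsg_setup_queue()` fails, no limits were ever applied.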
+26 -16
drivers/scsi/mpi3mr/mpi3mr_fw.c
··· 11 11 #include <linux/io-64-nonatomic-lo-hi.h> 12 12 13 13 static int 14 - mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, u32 reset_reason); 14 + mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, u16 reset_reason); 15 15 static int mpi3mr_setup_admin_qpair(struct mpi3mr_ioc *mrioc); 16 16 static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc, 17 17 struct mpi3_ioc_facts_data *facts_data); ··· 1195 1195 static int mpi3mr_issue_and_process_mur(struct mpi3mr_ioc *mrioc, 1196 1196 u32 reset_reason) 1197 1197 { 1198 - u32 ioc_config, timeout, ioc_status; 1198 + u32 ioc_config, timeout, ioc_status, scratch_pad0; 1199 1199 int retval = -1; 1200 1200 1201 1201 ioc_info(mrioc, "Issuing Message unit Reset(MUR)\n"); ··· 1204 1204 return retval; 1205 1205 } 1206 1206 mpi3mr_clear_reset_history(mrioc); 1207 - writel(reset_reason, &mrioc->sysif_regs->scratchpad[0]); 1207 + scratch_pad0 = ((MPI3MR_RESET_REASON_OSTYPE_LINUX << 1208 + MPI3MR_RESET_REASON_OSTYPE_SHIFT) | 1209 + (mrioc->facts.ioc_num << 1210 + MPI3MR_RESET_REASON_IOCNUM_SHIFT) | reset_reason); 1211 + writel(scratch_pad0, &mrioc->sysif_regs->scratchpad[0]); 1208 1212 ioc_config = readl(&mrioc->sysif_regs->ioc_configuration); 1209 1213 ioc_config &= ~MPI3_SYSIF_IOC_CONFIG_ENABLE_IOC; 1210 1214 writel(ioc_config, &mrioc->sysif_regs->ioc_configuration); ··· 1280 1276 mrioc->shost->max_sectors * 512, mrioc->facts.max_data_length); 1281 1277 1282 1278 if ((mrioc->sas_transport_enabled) && (mrioc->facts.ioc_capabilities & 1283 - MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED)) 1279 + MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED)) 1284 1280 ioc_err(mrioc, 1285 1281 "critical error: multipath capability is enabled at the\n" 1286 1282 "\tcontroller while sas transport support is enabled at the\n" ··· 1524 1520 * Return: 0 on success, non-zero on failure. 
1525 1521 */ 1526 1522 static int mpi3mr_issue_reset(struct mpi3mr_ioc *mrioc, u16 reset_type, 1527 - u32 reset_reason) 1523 + u16 reset_reason) 1528 1524 { 1529 1525 int retval = -1; 1530 1526 u8 unlock_retry_count = 0; 1531 - u32 host_diagnostic, ioc_status, ioc_config; 1527 + u32 host_diagnostic, ioc_status, ioc_config, scratch_pad0; 1532 1528 u32 timeout = MPI3MR_RESET_ACK_TIMEOUT * 10; 1533 1529 1534 1530 if ((reset_type != MPI3_SYSIF_HOST_DIAG_RESET_ACTION_SOFT_RESET) && ··· 1580 1576 unlock_retry_count, host_diagnostic); 1581 1577 } while (!(host_diagnostic & MPI3_SYSIF_HOST_DIAG_DIAG_WRITE_ENABLE)); 1582 1578 1579 + scratch_pad0 = ((MPI3MR_RESET_REASON_OSTYPE_LINUX << 1580 + MPI3MR_RESET_REASON_OSTYPE_SHIFT) | (mrioc->facts.ioc_num << 1581 + MPI3MR_RESET_REASON_IOCNUM_SHIFT) | reset_reason); 1583 1582 writel(reset_reason, &mrioc->sysif_regs->scratchpad[0]); 1584 1583 writel(host_diagnostic | reset_type, 1585 1584 &mrioc->sysif_regs->host_diagnostic); ··· 2588 2581 unsigned long flags; 2589 2582 enum mpi3mr_iocstate ioc_state; 2590 2583 u32 fault, host_diagnostic, ioc_status; 2591 - u32 reset_reason = MPI3MR_RESET_FROM_FAULT_WATCH; 2584 + u16 reset_reason = MPI3MR_RESET_FROM_FAULT_WATCH; 2592 2585 2593 2586 if (mrioc->reset_in_progress) 2594 2587 return; ··· 3309 3302 3310 3303 iocinit_req.msg_flags |= 3311 3304 MPI3_IOCINIT_MSGFLAGS_SCSIIOSTATUSREPLY_SUPPORTED; 3305 + iocinit_req.msg_flags |= 3306 + MPI3_IOCINIT_MSGFLAGS_WRITESAMEDIVERT_SUPPORTED; 3312 3307 3313 3308 init_completion(&mrioc->init_cmds.done); 3314 3309 retval = mpi3mr_admin_request_post(mrioc, &iocinit_req, ··· 3677 3668 u32 capability; 3678 3669 char *name; 3679 3670 } mpi3mr_capabilities[] = { 3680 - { MPI3_IOCFACTS_CAPABILITY_RAID_CAPABLE, "RAID" }, 3681 - { MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED, "MultiPath" }, 3671 + { MPI3_IOCFACTS_CAPABILITY_RAID_SUPPORTED, "RAID" }, 3672 + { MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED, "MultiPath" }, 3682 3673 }; 3683 3674 3684 3675 /** 3685 3676 
* mpi3mr_print_ioc_info - Display controller information 3686 3677 * @mrioc: Adapter instance reference 3687 3678 * 3688 - * Display controller personalit, capability, supported 3679 + * Display controller personality, capability, supported 3689 3680 * protocols etc. 3690 3681 * 3691 3682 * Return: Nothing ··· 3694 3685 mpi3mr_print_ioc_info(struct mpi3mr_ioc *mrioc) 3695 3686 { 3696 3687 int i = 0, bytes_written = 0; 3697 - char personality[16]; 3688 + const char *personality; 3698 3689 char protocol[50] = {0}; 3699 3690 char capabilities[100] = {0}; 3700 3691 struct mpi3mr_compimg_ver *fwver = &mrioc->facts.fw_ver; 3701 3692 3702 3693 switch (mrioc->facts.personality) { 3703 3694 case MPI3_IOCFACTS_FLAGS_PERSONALITY_EHBA: 3704 - strncpy(personality, "Enhanced HBA", sizeof(personality)); 3695 + personality = "Enhanced HBA"; 3705 3696 break; 3706 3697 case MPI3_IOCFACTS_FLAGS_PERSONALITY_RAID_DDR: 3707 - strncpy(personality, "RAID", sizeof(personality)); 3698 + personality = "RAID"; 3708 3699 break; 3709 3700 default: 3710 - strncpy(personality, "Unknown", sizeof(personality)); 3701 + personality = "Unknown"; 3711 3702 break; 3712 3703 } 3713 3704 ··· 3960 3951 MPI3MR_HOST_IOS_KDUMP); 3961 3952 3962 3953 if (!(mrioc->facts.ioc_capabilities & 3963 - MPI3_IOCFACTS_CAPABILITY_MULTIPATH_ENABLED)) { 3954 + MPI3_IOCFACTS_CAPABILITY_MULTIPATH_SUPPORTED)) { 3964 3955 mrioc->sas_transport_enabled = 1; 3965 3956 mrioc->scsi_device_channel = 1; 3966 3957 mrioc->shost->max_channel = 1; ··· 4975 4966 * Return: 0 on success, non-zero on failure. 
4976 4967 */ 4977 4968 int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc, 4978 - u32 reset_reason, u8 snapdump) 4969 + u16 reset_reason, u8 snapdump) 4979 4970 { 4980 4971 int retval = 0, i; 4981 4972 unsigned long flags; ··· 5111 5102 mrioc->device_refresh_on = 0; 5112 5103 mrioc->unrecoverable = 1; 5113 5104 mrioc->reset_in_progress = 0; 5105 + mrioc->stop_bsgs = 0; 5114 5106 retval = -1; 5115 5107 mpi3mr_flush_cmds_for_unrecovered_controller(mrioc); 5116 5108 }
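The scratchpad change in mpi3mr_fw.c packs an OS type and the IOC number into the reset-reason word using the shift constants added to mpi3mr.h. A sketch of that bit layout, with hypothetical unpack helpers for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Layout per the defines added to mpi3mr.h: bits 28+ carry the OS type,
 * bits 20-27 the IOC number, and the low bits the reason code. */
#define MPI3MR_RESET_REASON_OSTYPE_LINUX  1
#define MPI3MR_RESET_REASON_OSTYPE_SHIFT  28
#define MPI3MR_RESET_REASON_IOCNUM_SHIFT  20

static uint32_t pack_reset_reason(uint8_t ioc_num, uint16_t reason)
{
    return ((uint32_t)MPI3MR_RESET_REASON_OSTYPE_LINUX <<
            MPI3MR_RESET_REASON_OSTYPE_SHIFT) |
           ((uint32_t)ioc_num << MPI3MR_RESET_REASON_IOCNUM_SHIFT) |
           reason;
}

/* Hypothetical helper: recover the reason code from the packed word. */
static uint16_t unpack_reason(uint32_t word)
{
    return (uint16_t)(word & 0xffff);
}
```

Narrowing `reset_reason` from `u32` to `u16` throughout the driver (see the prototype change in mpi3mr.h) is what makes room for the extra fields.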
+36 -50
drivers/scsi/mpi3mr/mpi3mr_os.c
··· 986 986 return retval; 987 987 } 988 988 989 + static void mpi3mr_configure_nvme_dev(struct mpi3mr_tgt_dev *tgt_dev, 990 + struct queue_limits *lim) 991 + { 992 + u8 pgsz = tgt_dev->dev_spec.pcie_inf.pgsz ? : MPI3MR_DEFAULT_PGSZEXP; 993 + 994 + lim->max_hw_sectors = tgt_dev->dev_spec.pcie_inf.mdts / 512; 995 + lim->virt_boundary_mask = (1 << pgsz) - 1; 996 + } 997 + 998 + static void mpi3mr_configure_tgt_dev(struct mpi3mr_tgt_dev *tgt_dev, 999 + struct queue_limits *lim) 1000 + { 1001 + if (tgt_dev->dev_type == MPI3_DEVICE_DEVFORM_PCIE && 1002 + (tgt_dev->dev_spec.pcie_inf.dev_info & 1003 + MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) == 1004 + MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE) 1005 + mpi3mr_configure_nvme_dev(tgt_dev, lim); 1006 + } 1007 + 989 1008 /** 990 1009 * mpi3mr_update_sdev - Update SCSI device information 991 1010 * @sdev: SCSI device reference ··· 1020 1001 mpi3mr_update_sdev(struct scsi_device *sdev, void *data) 1021 1002 { 1022 1003 struct mpi3mr_tgt_dev *tgtdev; 1004 + struct queue_limits lim; 1023 1005 1024 1006 tgtdev = (struct mpi3mr_tgt_dev *)data; 1025 1007 if (!tgtdev) 1026 1008 return; 1027 1009 1028 1010 mpi3mr_change_queue_depth(sdev, tgtdev->q_depth); 1029 - switch (tgtdev->dev_type) { 1030 - case MPI3_DEVICE_DEVFORM_PCIE: 1031 - /*The block layer hw sector size = 512*/ 1032 - if ((tgtdev->dev_spec.pcie_inf.dev_info & 1033 - MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) == 1034 - MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE) { 1035 - blk_queue_max_hw_sectors(sdev->request_queue, 1036 - tgtdev->dev_spec.pcie_inf.mdts / 512); 1037 - if (tgtdev->dev_spec.pcie_inf.pgsz == 0) 1038 - blk_queue_virt_boundary(sdev->request_queue, 1039 - ((1 << MPI3MR_DEFAULT_PGSZEXP) - 1)); 1040 - else 1041 - blk_queue_virt_boundary(sdev->request_queue, 1042 - ((1 << tgtdev->dev_spec.pcie_inf.pgsz) - 1)); 1043 - } 1044 - break; 1045 - default: 1046 - break; 1047 - } 1011 + 1012 + lim = queue_limits_start_update(sdev->request_queue); 1013 + 
mpi3mr_configure_tgt_dev(tgtdev, &lim); 1014 + WARN_ON_ONCE(queue_limits_commit_update(sdev->request_queue, &lim)); 1048 1015 } 1049 1016 1050 1017 /** 1051 - * mpi3mr_rfresh_tgtdevs - Refresh target device exposure 1018 + * mpi3mr_refresh_tgtdevs - Refresh target device exposure 1052 1019 * @mrioc: Adapter instance reference 1053 1020 * 1054 1021 * This is executed post controller reset to identify any ··· 1043 1038 * 1044 1039 * Return: Nothing. 1045 1040 */ 1046 - 1047 - void mpi3mr_rfresh_tgtdevs(struct mpi3mr_ioc *mrioc) 1041 + static void mpi3mr_refresh_tgtdevs(struct mpi3mr_ioc *mrioc) 1048 1042 { 1049 1043 struct mpi3mr_tgt_dev *tgtdev, *tgtdev_next; 1050 1044 struct mpi3mr_stgt_priv_data *tgt_priv; ··· 1051 1047 dprint_reset(mrioc, "refresh target devices: check for removals\n"); 1052 1048 list_for_each_entry_safe(tgtdev, tgtdev_next, &mrioc->tgtdev_list, 1053 1049 list) { 1054 - if ((tgtdev->dev_handle == MPI3MR_INVALID_DEV_HANDLE) && 1055 - tgtdev->is_hidden && 1050 + if (((tgtdev->dev_handle == MPI3MR_INVALID_DEV_HANDLE) || 1051 + tgtdev->is_hidden) && 1056 1052 tgtdev->host_exposed && tgtdev->starget && 1057 1053 tgtdev->starget->hostdata) { 1058 1054 tgt_priv = tgtdev->starget->hostdata; ··· 2014 2010 mpi3mr_refresh_sas_ports(mrioc); 2015 2011 mpi3mr_refresh_expanders(mrioc); 2016 2012 } 2017 - mpi3mr_rfresh_tgtdevs(mrioc); 2013 + mpi3mr_refresh_tgtdevs(mrioc); 2018 2014 ioc_info(mrioc, 2019 2015 "scan for non responding and newly added devices after soft reset completed\n"); 2020 2016 break; ··· 4397 4393 } 4398 4394 4399 4395 /** 4400 - * mpi3mr_slave_configure - Slave configure callback handler 4396 + * mpi3mr_device_configure - Slave configure callback handler 4401 4397 * @sdev: SCSI device reference 4398 + * @lim: queue limits 4402 4399 * 4403 4400 * Configure queue depth, max hardware sectors and virt boundary 4404 4401 * as required 4405 4402 * 4406 4403 * Return: 0 always. 
4407 4404 */ 4408 - static int mpi3mr_slave_configure(struct scsi_device *sdev) 4405 + static int mpi3mr_device_configure(struct scsi_device *sdev, 4406 + struct queue_limits *lim) 4409 4407 { 4410 4408 struct scsi_target *starget; 4411 4409 struct Scsi_Host *shost; ··· 4438 4432 sdev->eh_timeout = MPI3MR_EH_SCMD_TIMEOUT; 4439 4433 blk_queue_rq_timeout(sdev->request_queue, MPI3MR_SCMD_TIMEOUT); 4440 4434 4441 - switch (tgt_dev->dev_type) { 4442 - case MPI3_DEVICE_DEVFORM_PCIE: 4443 - /*The block layer hw sector size = 512*/ 4444 - if ((tgt_dev->dev_spec.pcie_inf.dev_info & 4445 - MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_MASK) == 4446 - MPI3_DEVICE0_PCIE_DEVICE_INFO_TYPE_NVME_DEVICE) { 4447 - blk_queue_max_hw_sectors(sdev->request_queue, 4448 - tgt_dev->dev_spec.pcie_inf.mdts / 512); 4449 - if (tgt_dev->dev_spec.pcie_inf.pgsz == 0) 4450 - blk_queue_virt_boundary(sdev->request_queue, 4451 - ((1 << MPI3MR_DEFAULT_PGSZEXP) - 1)); 4452 - else 4453 - blk_queue_virt_boundary(sdev->request_queue, 4454 - ((1 << tgt_dev->dev_spec.pcie_inf.pgsz) - 1)); 4455 - } 4456 - break; 4457 - default: 4458 - break; 4459 - } 4460 - 4435 + mpi3mr_configure_tgt_dev(tgt_dev, lim); 4461 4436 mpi3mr_tgtdev_put(tgt_dev); 4462 - 4463 4437 return retval; 4464 4438 } 4465 4439 ··· 4881 4895 MPI3_SCSIIO_MSGFLAGS_DIVERT_TO_FIRMWARE; 4882 4896 scsiio_flags |= MPI3_SCSIIO_FLAGS_DIVERT_REASON_IO_THROTTLING; 4883 4897 } 4884 - scsiio_req->flags = cpu_to_le32(scsiio_flags); 4898 + scsiio_req->flags |= cpu_to_le32(scsiio_flags); 4885 4899 4886 4900 if (mpi3mr_op_request_post(mrioc, op_req_q, 4887 4901 scmd_priv_data->mpi3mr_scsiio_req)) { ··· 4907 4921 .queuecommand = mpi3mr_qcmd, 4908 4922 .target_alloc = mpi3mr_target_alloc, 4909 4923 .slave_alloc = mpi3mr_slave_alloc, 4910 - .slave_configure = mpi3mr_slave_configure, 4924 + .device_configure = mpi3mr_device_configure, 4911 4925 .target_destroy = mpi3mr_target_destroy, 4912 4926 .slave_destroy = mpi3mr_slave_destroy, 4913 4927 .scan_finished = 
mpi3mr_scan_finished,
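The limit math factored into `mpi3mr_configure_nvme_dev()` above is small but easy to get wrong: mdts (bytes) becomes a 512-byte sector count, and the page-size exponent becomes a virt-boundary mask, with a fallback exponent when pgsz is zero. A compilable sketch, assuming a default exponent of 12 (4 KiB pages) purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed default page-size exponent; the driver's actual
 * MPI3MR_DEFAULT_PGSZEXP value may differ. */
#define DEFAULT_PGSZEXP 12

/* The block layer hw sector size is 512 bytes, so mdts in bytes maps
 * directly to a max_hw_sectors count. */
static uint32_t mdts_to_hw_sectors(uint32_t mdts_bytes)
{
    return mdts_bytes / 512;
}

/* A page-size exponent of 0 falls back to the default, matching the
 * `pgsz ? : MPI3MR_DEFAULT_PGSZEXP` expression in the diff. */
static uint32_t pgsz_to_boundary_mask(uint8_t pgsz)
{
    uint8_t exp = pgsz ? pgsz : DEFAULT_PGSZEXP;

    return (1u << exp) - 1;
}
```

Factoring this into one helper is what lets both `mpi3mr_update_sdev()` and the new `mpi3mr_device_configure()` share it instead of duplicating the switch statement.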
+10
drivers/scsi/mpi3mr/mpi3mr_transport.c
··· 1351 1351 mpi3mr_sas_port_sanity_check(mrioc, mr_sas_node, 1352 1352 mr_sas_port->remote_identify.sas_address, hba_port); 1353 1353 1354 + if (mr_sas_node->num_phys > sizeof(mr_sas_port->phy_mask) * 8) 1355 + ioc_info(mrioc, "max port count %u could be too high\n", 1356 + mr_sas_node->num_phys); 1357 + 1354 1358 for (i = 0; i < mr_sas_node->num_phys; i++) { 1355 1359 if ((mr_sas_node->phy[i].remote_identify.sas_address != 1356 1360 mr_sas_port->remote_identify.sas_address) || 1357 1361 (mr_sas_node->phy[i].hba_port != hba_port)) 1358 1362 continue; 1363 + 1364 + if (i > sizeof(mr_sas_port->phy_mask) * 8) { 1365 + ioc_warn(mrioc, "skipping port %u, max allowed value is %lu\n", 1366 + i, sizeof(mr_sas_port->phy_mask) * 8); 1367 + goto out_fail; 1368 + } 1359 1369 list_add_tail(&mr_sas_node->phy[i].port_siblings, 1360 1370 &mr_sas_port->phy_list); 1361 1371 mr_sas_port->num_phys++;
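The "Sanitise num_phys" hunk above guards a fixed-width `phy_mask` against phy indices reported by the hardware that exceed the mask's bit count. The underlying pattern, as a hypothetical userspace helper:

```c
#include <assert.h>
#include <stdint.h>

/* Setting bit i in a fixed-width mask is only safe if i fits; num_phys
 * comes from the controller and cannot be trusted. Refusing out-of-range
 * indices avoids shifting past the type's width (undefined behavior). */
static int phy_mask_set(uint64_t *mask, unsigned int i)
{
    if (i >= sizeof(*mask) * 8)
        return -1;              /* out of range: refuse, don't corrupt */
    *mask |= 1ull << i;
    return 0;
}
```

The sketch uses `>=` as the bound; an index equal to the bit width is already one past the last valid bit.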
+1 -1
drivers/scsi/mpt3sas/mpt3sas_base.c
··· 4774 4774 char desc[17] = {0}; 4775 4775 u32 iounit_pg1_flags; 4776 4776 4777 - strncpy(desc, ioc->manu_pg0.ChipName, 16); 4777 + strscpy(desc, ioc->manu_pg0.ChipName, sizeof(desc)); 4778 4778 ioc_info(ioc, "%s: FWVersion(%02d.%02d.%02d.%02d), ChipRevision(0x%02x)\n", 4779 4779 desc, 4780 4780 (ioc->facts.FWVersion.Word & 0xFF000000) >> 24,
+8 -10
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 2497 2497 } 2498 2498 2499 2499 /** 2500 - * scsih_slave_configure - device configure routine. 2500 + * scsih_device_configure - device configure routine. 2501 2501 * @sdev: scsi device struct 2502 + * @lim: queue limits 2502 2503 * 2503 2504 * Return: 0 if ok. Any other return is assumed to be an error and 2504 2505 * the device is ignored. 2505 2506 */ 2506 2507 static int 2507 - scsih_slave_configure(struct scsi_device *sdev) 2508 + scsih_device_configure(struct scsi_device *sdev, struct queue_limits *lim) 2508 2509 { 2509 2510 struct Scsi_Host *shost = sdev->host; 2510 2511 struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); ··· 2610 2609 raid_device->num_pds, ds); 2611 2610 2612 2611 if (shost->max_sectors > MPT3SAS_RAID_MAX_SECTORS) { 2613 - blk_queue_max_hw_sectors(sdev->request_queue, 2614 - MPT3SAS_RAID_MAX_SECTORS); 2612 + lim->max_hw_sectors = MPT3SAS_RAID_MAX_SECTORS; 2615 2613 sdev_printk(KERN_INFO, sdev, 2616 2614 "Set queue's max_sector to: %u\n", 2617 2615 MPT3SAS_RAID_MAX_SECTORS); ··· 2675 2675 pcie_device->connector_name); 2676 2676 2677 2677 if (pcie_device->nvme_mdts) 2678 - blk_queue_max_hw_sectors(sdev->request_queue, 2679 - pcie_device->nvme_mdts/512); 2678 + lim->max_hw_sectors = pcie_device->nvme_mdts / 512; 2680 2679 2681 2680 pcie_device_put(pcie_device); 2682 2681 spin_unlock_irqrestore(&ioc->pcie_device_lock, flags); ··· 2686 2687 **/ 2687 2688 blk_queue_flag_set(QUEUE_FLAG_NOMERGES, 2688 2689 sdev->request_queue); 2689 - blk_queue_virt_boundary(sdev->request_queue, 2690 - ioc->page_size - 1); 2690 + lim->virt_boundary_mask = ioc->page_size - 1; 2691 2691 return 0; 2692 2692 } 2693 2693 ··· 11912 11914 .queuecommand = scsih_qcmd, 11913 11915 .target_alloc = scsih_target_alloc, 11914 11916 .slave_alloc = scsih_slave_alloc, 11915 - .slave_configure = scsih_slave_configure, 11917 + .device_configure = scsih_device_configure, 11916 11918 .target_destroy = scsih_target_destroy, 11917 11919 .slave_destroy = scsih_slave_destroy, 11918 11920 
.scan_finished = scsih_scan_finished, ··· 11950 11952 .queuecommand = scsih_qcmd, 11951 11953 .target_alloc = scsih_target_alloc, 11952 11954 .slave_alloc = scsih_slave_alloc, 11953 - .slave_configure = scsih_slave_configure, 11955 + .device_configure = scsih_device_configure, 11954 11956 .target_destroy = scsih_target_destroy, 11955 11957 .slave_destroy = scsih_slave_destroy, 11956 11958 .scan_finished = scsih_scan_finished,
+9 -9
drivers/scsi/mpt3sas/mpt3sas_transport.c
··· 458 458 goto out; 459 459 460 460 manufacture_reply = data_out + sizeof(struct rep_manu_request); 461 - strncpy(edev->vendor_id, manufacture_reply->vendor_id, 462 - SAS_EXPANDER_VENDOR_ID_LEN); 463 - strncpy(edev->product_id, manufacture_reply->product_id, 464 - SAS_EXPANDER_PRODUCT_ID_LEN); 465 - strncpy(edev->product_rev, manufacture_reply->product_rev, 466 - SAS_EXPANDER_PRODUCT_REV_LEN); 461 + strscpy(edev->vendor_id, manufacture_reply->vendor_id, 462 + sizeof(edev->vendor_id)); 463 + strscpy(edev->product_id, manufacture_reply->product_id, 464 + sizeof(edev->product_id)); 465 + strscpy(edev->product_rev, manufacture_reply->product_rev, 466 + sizeof(edev->product_rev)); 467 467 edev->level = manufacture_reply->sas_format & 1; 468 468 if (edev->level) { 469 - strncpy(edev->component_vendor_id, 470 - manufacture_reply->component_vendor_id, 471 - SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN); 469 + strscpy(edev->component_vendor_id, 470 + manufacture_reply->component_vendor_id, 471 + sizeof(edev->component_vendor_id)); 472 472 tmp = (u8 *)&manufacture_reply->component_id; 473 473 edev->component_id = tmp[0] << 8 | tmp[1]; 474 474 edev->component_revision_id =
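The mpt3sas conversions above (and the qedf/qla4xxx ones below) all follow the same motivation: `strncpy()` neither guarantees NUL termination nor reports truncation, while `strscpy()` always terminates and returns the copied length. A simplified userspace reimplementation of those semantics (the real kernel function returns `-E2BIG` on truncation; `-1` stands in for it here):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/* Simplified model of the kernel's strscpy(): copy at most size-1 bytes,
 * always NUL-terminate, and report truncation via the return value. */
static ssize_t my_strscpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if (size == 0)
        return -1;
    if (len >= size) {              /* would truncate */
        memcpy(dst, src, size - 1);
        dst[size - 1] = '\0';
        return -1;
    }
    memcpy(dst, src, len + 1);      /* includes the NUL */
    return (ssize_t)len;
}
```

The length return is also why the ql4xxx hunks can write `chap_rec->password_length = strscpy(...)` directly instead of keeping a separate `strlen()` call.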
+8 -18
drivers/scsi/mvsas/mv_init.c
··· 26 26 }; 27 27 28 28 static const struct attribute_group *mvst_host_groups[]; 29 + static const struct attribute_group *mvst_sdev_groups[]; 29 30 30 31 #define SOC_SAS_NUM 2 31 32 32 33 static const struct scsi_host_template mvs_sht = { 33 - .module = THIS_MODULE, 34 - .name = DRV_NAME, 35 - .queuecommand = sas_queuecommand, 36 - .dma_need_drain = ata_scsi_dma_need_drain, 37 - .target_alloc = sas_target_alloc, 38 - .slave_configure = sas_slave_configure, 34 + LIBSAS_SHT_BASE 39 35 .scan_finished = mvs_scan_finished, 40 36 .scan_start = mvs_scan_start, 41 - .change_queue_depth = sas_change_queue_depth, 42 - .bios_param = sas_bios_param, 43 37 .can_queue = 1, 44 - .this_id = -1, 45 38 .sg_tablesize = SG_ALL, 46 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 47 - .eh_device_reset_handler = sas_eh_device_reset_handler, 48 - .eh_target_reset_handler = sas_eh_target_reset_handler, 49 - .slave_alloc = sas_slave_alloc, 50 - .target_destroy = sas_target_destroy, 51 - .ioctl = sas_ioctl, 52 - #ifdef CONFIG_COMPAT 53 - .compat_ioctl = sas_ioctl, 54 - #endif 55 39 .shost_groups = mvst_host_groups, 40 + .sdev_groups = mvst_sdev_groups, 56 41 .track_queue_depth = 1, 57 42 }; 58 43 ··· 763 778 }; 764 779 765 780 ATTRIBUTE_GROUPS(mvst_host); 781 + 782 + static const struct attribute_group *mvst_sdev_groups[] = { 783 + &sas_ata_sdev_attr_group, 784 + NULL 785 + }; 766 786 767 787 module_init(mvs_init); 768 788 module_exit(mvs_exit);
+5
drivers/scsi/pm8001/pm8001_ctl.c
··· 1039 1039 &pm8001_host_attr_group, 1040 1040 NULL 1041 1041 }; 1042 + 1043 + const struct attribute_group *pm8001_sdev_groups[] = { 1044 + &sas_ata_sdev_attr_group, 1045 + NULL 1046 + };
+2 -19
drivers/scsi/pm8001/pm8001_init.c
··· 110 110 * The main structure which LLDD must register for scsi core. 111 111 */ 112 112 static const struct scsi_host_template pm8001_sht = { 113 - .module = THIS_MODULE, 114 - .name = DRV_NAME, 115 - .proc_name = DRV_NAME, 116 - .queuecommand = sas_queuecommand, 117 - .dma_need_drain = ata_scsi_dma_need_drain, 118 - .target_alloc = sas_target_alloc, 119 - .slave_configure = sas_slave_configure, 113 + LIBSAS_SHT_BASE 120 114 .scan_finished = pm8001_scan_finished, 121 115 .scan_start = pm8001_scan_start, 122 - .change_queue_depth = sas_change_queue_depth, 123 - .bios_param = sas_bios_param, 124 116 .can_queue = 1, 125 - .this_id = -1, 126 117 .sg_tablesize = PM8001_MAX_DMA_SG, 127 - .max_sectors = SCSI_DEFAULT_MAX_SECTORS, 128 - .eh_device_reset_handler = sas_eh_device_reset_handler, 129 - .eh_target_reset_handler = sas_eh_target_reset_handler, 130 - .slave_alloc = sas_slave_alloc, 131 - .target_destroy = sas_target_destroy, 132 - .ioctl = sas_ioctl, 133 - #ifdef CONFIG_COMPAT 134 - .compat_ioctl = sas_ioctl, 135 - #endif 136 118 .shost_groups = pm8001_host_groups, 119 + .sdev_groups = pm8001_sdev_groups, 137 120 .track_queue_depth = 1, 138 121 .cmd_per_lun = 32, 139 122 .map_queues = pm8001_map_queues,
+1
drivers/scsi/pm8001/pm8001_sas.h
··· 717 717 void pm8001_free_dev(struct pm8001_device *pm8001_dev); 718 718 /* ctl shared API */ 719 719 extern const struct attribute_group *pm8001_host_groups[]; 720 + extern const struct attribute_group *pm8001_sdev_groups[]; 720 721 721 722 #define PM8001_INVALID_TAG ((u32)-1) 722 723
+6 -5
drivers/scsi/pmcraid.c
··· 197 197 } 198 198 199 199 /** 200 - * pmcraid_slave_configure - Configures a SCSI device 200 + * pmcraid_device_configure - Configures a SCSI device 201 201 * @scsi_dev: scsi device struct 202 + * @lim: queue limits 202 203 * 203 204 * This function is executed by SCSI mid layer just after a device is first 204 205 * scanned (i.e. it has responded to an INQUIRY). For VSET resources, the ··· 210 209 * Return value: 211 210 * 0 on success 212 211 */ 213 - static int pmcraid_slave_configure(struct scsi_device *scsi_dev) 212 + static int pmcraid_device_configure(struct scsi_device *scsi_dev, 213 + struct queue_limits *lim) 214 214 { 215 215 struct pmcraid_resource_entry *res = scsi_dev->hostdata; 216 216 ··· 235 233 scsi_dev->allow_restart = 1; 236 234 blk_queue_rq_timeout(scsi_dev->request_queue, 237 235 PMCRAID_VSET_IO_TIMEOUT); 238 - blk_queue_max_hw_sectors(scsi_dev->request_queue, 239 - PMCRAID_VSET_MAX_SECTORS); 236 + lim->max_hw_sectors = PMCRAID_VSET_MAX_SECTORS; 240 237 } 241 238 242 239 /* ··· 3669 3668 .eh_host_reset_handler = pmcraid_eh_host_reset_handler, 3670 3669 3671 3670 .slave_alloc = pmcraid_slave_alloc, 3672 - .slave_configure = pmcraid_slave_configure, 3671 + .device_configure = pmcraid_device_configure, 3673 3672 .slave_destroy = pmcraid_slave_destroy, 3674 3673 .change_queue_depth = pmcraid_change_queue_depth, 3675 3674 .can_queue = PMCRAID_MAX_IO_CMD,
+1 -7
drivers/scsi/ppa.c
··· 986 986 return -ENODEV; 987 987 } 988 988 989 - static int ppa_adjust_queue(struct scsi_device *device) 990 - { 991 - blk_queue_bounce_limit(device->request_queue, BLK_BOUNCE_HIGH); 992 - return 0; 993 - } 994 - 995 989 static const struct scsi_host_template ppa_template = { 996 990 .module = THIS_MODULE, 997 991 .proc_name = "ppa", ··· 999 1005 .this_id = -1, 1000 1006 .sg_tablesize = SG_ALL, 1001 1007 .can_queue = 1, 1002 - .slave_alloc = ppa_adjust_queue, 1003 1008 .cmd_size = sizeof(struct scsi_pointer), 1004 1009 }; 1005 1010 ··· 1104 1111 host = scsi_host_alloc(&ppa_template, sizeof(ppa_struct *)); 1105 1112 if (!host) 1106 1113 goto out1; 1114 + host->no_highmem = true; 1107 1115 host->io_port = pb->base; 1108 1116 host->n_io_port = ports; 1109 1117 host->dma_channel = -1;
+1 -1
drivers/scsi/qedf/qedf_debugfs.c
··· 170 170 if (!count || *ppos) 171 171 return 0; 172 172 173 - kern_buf = memdup_user(buffer, count); 173 + kern_buf = memdup_user_nul(buffer, count); 174 174 if (IS_ERR(kern_buf)) 175 175 return PTR_ERR(kern_buf); 176 176
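The one-line qedf fix above ("Ensure the copied buf is NUL terminated") matters because `memdup_user()` copies exactly `count` bytes, so later string parsing can run off the end of the allocation; `memdup_user_nul()` allocates one extra byte and terminates. A simplified userspace analogue, with a plain `memcpy()` standing in for the user-space copy:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Model of memdup_user_nul(): duplicate count bytes and append a NUL,
 * so the result is always safe to treat as a C string. */
static char *memdup_nul(const void *src, size_t count)
{
    char *p = malloc(count + 1);

    if (!p)
        return NULL;
    memcpy(p, src, count);
    p[count] = '\0';
    return p;
}
```

The same one-line substitution is applied to bfa in this series for the identical debugfs-write pattern.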
+3 -3
drivers/scsi/qedf/qedf_io.c
··· 2324 2324 io_req->fcport = fcport; 2325 2325 io_req->cmd_type = QEDF_TASK_MGMT_CMD; 2326 2326 2327 - /* Record which cpu this request is associated with */ 2328 - io_req->cpu = smp_processor_id(); 2329 - 2330 2327 /* Set TM flags */ 2331 2328 io_req->io_req_flags = QEDF_READ; 2332 2329 io_req->data_xfer_len = 0; ··· 2345 2348 init_completion(&io_req->tm_done); 2346 2349 2347 2350 spin_lock_irqsave(&fcport->rport_lock, flags); 2351 + 2352 + /* Record which cpu this request is associated with */ 2353 + io_req->cpu = smp_processor_id(); 2348 2354 2349 2355 sqe_idx = qedf_get_sqe_idx(fcport); 2350 2356 sqe = &fcport->sq[sqe_idx];
+1 -1
drivers/scsi/qedf/qedf_main.c
··· 3468 3468 slowpath_params.drv_minor = QEDF_DRIVER_MINOR_VER; 3469 3469 slowpath_params.drv_rev = QEDF_DRIVER_REV_VER; 3470 3470 slowpath_params.drv_eng = QEDF_DRIVER_ENG_VER; 3471 - strncpy(slowpath_params.name, "qedf", QED_DRV_VER_STR_SIZE); 3471 + strscpy(slowpath_params.name, "qedf", sizeof(slowpath_params.name)); 3472 3472 rc = qed_ops->common->slowpath_start(qedf->cdev, &slowpath_params); 3473 3473 if (rc) { 3474 3474 QEDF_ERR(&(qedf->dbg_ctx), "Cannot start slowpath.\n");
+4 -8
drivers/scsi/qedi/qedi_debugfs.c
··· 120 120 qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer, 121 121 size_t count, loff_t *ppos) 122 122 { 123 - size_t cnt = 0; 123 + char buf[64]; 124 + int len; 124 125 125 - if (*ppos) 126 - return 0; 127 - 128 - cnt = sprintf(buffer, "do_not_recover=%d\n", qedi_do_not_recover); 129 - cnt = min_t(int, count, cnt - *ppos); 130 - *ppos += cnt; 131 - return cnt; 126 + len = sprintf(buf, "do_not_recover=%d\n", qedi_do_not_recover); 127 + return simple_read_from_buffer(buffer, count, ppos, buf, len); 132 128 } 133 129 134 130 static int
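The qedi hunk above replaces hand-rolled `*ppos` arithmetic (which also wrongly `sprintf()`ed into the user buffer) with `simple_read_from_buffer()`, which formats into a kernel buffer and does the offset/length clamping. A simplified userspace model of that clamping, with `memcpy()` standing in for `copy_to_user()`:

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/* Model of simple_read_from_buffer(): copy up to count bytes from
 * `from` starting at *ppos, clamp at `available`, advance *ppos, and
 * return the byte count (0 at EOF). */
static ssize_t read_from_buffer(void *to, size_t count, off_t *ppos,
                                const void *from, size_t available)
{
    off_t pos = *ppos;

    if (pos < 0)
        return -1;
    if ((size_t)pos >= available || count == 0)
        return 0;               /* EOF or nothing requested */
    if (count > available - (size_t)pos)
        count = available - (size_t)pos;
    memcpy(to, (const char *)from + pos, count);
    *ppos = pos + count;
    return (ssize_t)count;
}
```

A second read at the advanced offset returns 0, which is what terminates a `cat` of the debugfs file; the old code's broken `*ppos` handling is exactly what this guarantees.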
+22 -20
drivers/scsi/qla2xxx/Kconfig
··· 7 7 select FW_LOADER 8 8 select BTREE 9 9 help 10 - This qla2xxx driver supports all QLogic Fibre Channel 11 - PCI and PCIe host adapters. 10 + This qla2xxx driver supports all QLogic Fibre Channel 11 + PCI and PCIe host adapters. 12 12 13 - By default, firmware for the ISP parts will be loaded 14 - via the Firmware Loader interface. 13 + By default, firmware for the ISP parts will be loaded 14 + via the Firmware Loader interface. 15 15 16 - ISP Firmware Filename 17 - ---------- ----------------- 18 - 21xx ql2100_fw.bin 19 - 22xx ql2200_fw.bin 20 - 2300, 2312, 6312 ql2300_fw.bin 21 - 2322, 6322 ql2322_fw.bin 22 - 24xx, 54xx ql2400_fw.bin 23 - 25xx ql2500_fw.bin 16 + ISP Firmware Filename 17 + ---------- ----------------- 18 + 21xx ql2100_fw.bin 19 + 22xx ql2200_fw.bin 20 + 2300, 2312, 6312 ql2300_fw.bin 21 + 2322, 6322 ql2322_fw.bin 22 + 24xx, 54xx ql2400_fw.bin 23 + 25xx ql2500_fw.bin 24 24 25 - Upon request, the driver caches the firmware image until 26 - the driver is unloaded. 25 + Upon request, the driver caches the firmware image until 26 + the driver is unloaded. 27 27 28 - Firmware images can be retrieved from: 28 + Firmware images can be retrieved from: 29 29 30 - http://ldriver.qlogic.com/firmware/ 30 + http://ldriver.qlogic.com/firmware/ 31 31 32 - They are also included in the linux-firmware tree as well. 32 + They are also included in the linux-firmware tree as well. 33 33 34 34 config TCM_QLA2XXX 35 35 tristate "TCM_QLA2XXX fabric module for QLogic 24xx+ series target mode HBAs" ··· 38 38 select BTREE 39 39 default n 40 40 help 41 - Say Y here to enable the TCM_QLA2XXX fabric module for QLogic 24xx+ series target mode HBAs 41 + Say Y here to enable the TCM_QLA2XXX fabric module for QLogic 24xx+ 42 + series target mode HBAs. 
42 43 43 44 if TCM_QLA2XXX 44 45 config TCM_QLA2XXX_DEBUG 45 46 bool "TCM_QLA2XXX fabric module DEBUG mode for QLogic 24xx+ series target mode HBAs" 46 47 default n 47 48 help 48 - Say Y here to enable the TCM_QLA2XXX fabric module DEBUG for QLogic 24xx+ series target mode HBAs 49 - This will include code to enable the SCSI command jammer 49 + Say Y here to enable the TCM_QLA2XXX fabric module DEBUG for 50 + QLogic 24xx+ series target mode HBAs. 51 + This will include code to enable the SCSI command jammer. 50 52 endif
+1 -1
drivers/scsi/qla2xxx/qla_dfs.c
··· 274 274 seq_printf(s, "Driver: estimate iocb used [%d] high water limit [%d]\n", 275 275 iocbs_used, ha->base_qpair->fwres.iocbs_limit); 276 276 277 - seq_printf(s, "estimate exchange used[%d] high water limit [%d] n", 277 + seq_printf(s, "estimate exchange used[%d] high water limit [%d]\n", 278 278 exch_used, ha->base_qpair->fwres.exch_limit); 279 279 280 280 if (ql2xenforce_iocb_limit == 2) {
+3 -6
drivers/scsi/qla2xxx/qla_os.c
··· 1957 1957 scsi_qla_host_t *vha = shost_priv(sdev->host); 1958 1958 struct req_que *req = vha->req; 1959 1959 1960 - if (IS_T10_PI_CAPABLE(vha->hw)) 1961 - blk_queue_update_dma_alignment(sdev->request_queue, 0x7); 1962 - 1963 1960 scsi_change_queue_depth(sdev, req->max_q_depth); 1964 1961 return 0; 1965 1962 } ··· 3571 3574 host->sg_tablesize = (ha->mr.extended_io_enabled) ? 3572 3575 QLA_SG_ALL : 128; 3573 3576 } 3577 + 3578 + if (IS_T10_PI_CAPABLE(base_vha->hw)) 3579 + host->dma_alignment = 0x7; 3574 3580 3575 3581 ret = scsi_add_host(host, &pdev->dev); 3576 3582 if (ret) ··· 8156 8156 8157 8157 static struct pci_driver qla2xxx_pci_driver = { 8158 8158 .name = QLA2XXX_DRIVER_NAME, 8159 - .driver = { 8160 - .owner = THIS_MODULE, 8161 - }, 8162 8159 .id_table = qla2xxx_pci_tbl, 8163 8160 .probe = qla2x00_probe_one, 8164 8161 .remove = qla2x00_remove_one,
+12 -5
drivers/scsi/qla4xxx/ql4_mbx.c
··· 1641 1641 struct ql4_chap_table *chap_table; 1642 1642 uint32_t chap_size = 0; 1643 1643 dma_addr_t chap_dma; 1644 + ssize_t secret_len; 1644 1645 1645 1646 chap_table = dma_pool_zalloc(ha->chap_dma_pool, GFP_KERNEL, &chap_dma); 1646 1647 if (chap_table == NULL) { ··· 1653 1652 chap_table->flags |= BIT_6; /* peer */ 1654 1653 else 1655 1654 chap_table->flags |= BIT_7; /* local */ 1656 - chap_table->secret_len = strlen(password); 1657 - strncpy(chap_table->secret, password, MAX_CHAP_SECRET_LEN - 1); 1658 - strncpy(chap_table->name, username, MAX_CHAP_NAME_LEN - 1); 1655 + 1656 + secret_len = strscpy(chap_table->secret, password, 1657 + sizeof(chap_table->secret)); 1658 + if (secret_len < MIN_CHAP_SECRET_LEN) 1659 + goto cleanup_chap_table; 1660 + chap_table->secret_len = (uint8_t)secret_len; 1661 + strscpy(chap_table->name, username, sizeof(chap_table->name)); 1659 1662 chap_table->cookie = cpu_to_le16(CHAP_VALID_COOKIE); 1660 1663 1661 1664 if (is_qla40XX(ha)) { ··· 1684 1679 memcpy((struct ql4_chap_table *)ha->chap_list + idx, 1685 1680 chap_table, sizeof(struct ql4_chap_table)); 1686 1681 } 1682 + 1683 + cleanup_chap_table: 1687 1684 dma_pool_free(ha->chap_dma_pool, chap_table, chap_dma); 1688 1685 if (rval != QLA_SUCCESS) 1689 1686 ret = -EINVAL; ··· 2288 2281 mbox_cmd[0] = MBOX_CMD_SET_PARAM; 2289 2282 if (param == SET_DRVR_VERSION) { 2290 2283 mbox_cmd[1] = SET_DRVR_VERSION; 2291 - strncpy((char *)&mbox_cmd[2], QLA4XXX_DRIVER_VERSION, 2292 - MAX_DRVR_VER_LEN - 1); 2284 + strscpy((char *)&mbox_cmd[2], QLA4XXX_DRIVER_VERSION, 2285 + MAX_DRVR_VER_LEN); 2293 2286 } else { 2294 2287 ql4_printk(KERN_ERR, ha, "%s: invalid parameter 0x%x\n", 2295 2288 __func__, param);
+7 -7
drivers/scsi/qla4xxx/ql4_os.c
··· 799 799 
800 800     chap_rec->chap_tbl_idx = i;
801 801     strscpy(chap_rec->username, chap_table->name,
802 -            ISCSI_CHAP_AUTH_NAME_MAX_LEN);
803 -        strscpy(chap_rec->password, chap_table->secret,
804 -            QL4_CHAP_MAX_SECRET_LEN);
805 -        chap_rec->password_length = chap_table->secret_len;
802 +        sizeof(chap_rec->username));
803 +    chap_rec->password_length = strscpy(chap_rec->password,
804 +                        chap_table->secret,
805 +                        sizeof(chap_rec->password));
806 806 
807 807     if (chap_table->flags & BIT_7) /* local */
808 808         chap_rec->chap_type = CHAP_TYPE_OUT;
··· 6291 6291 
6292 6292     tddb->tpgt = sess->tpgt;
6293 6293     tddb->port = conn->persistent_port;
6294 -        strscpy(tddb->iscsi_name, sess->targetname, ISCSI_NAME_SIZE);
6295 -        strscpy(tddb->ip_addr, conn->persistent_address, DDB_IPADDR_LEN);
6294 +    strscpy(tddb->iscsi_name, sess->targetname, sizeof(tddb->iscsi_name));
6295 +    strscpy(tddb->ip_addr, conn->persistent_address, sizeof(tddb->ip_addr));
6296 6296 }
6297 6297 
6298 6298 static void qla4xxx_convert_param_ddb(struct dev_db_entry *fw_ddb_entry,
··· 7792 7792     }
7793 7793 
7794 7794     strscpy(flash_tddb->iscsi_name, fnode_sess->targetname,
7795 -            ISCSI_NAME_SIZE);
7795 +        sizeof(flash_tddb->iscsi_name));
7796 7796 
7797 7797     if (!strncmp(fnode_sess->portal_type, PORTAL_TYPE_IPV6, 4))
7798 7798         sprintf(flash_tddb->ip_addr, "%pI6", fnode_conn->ipaddress);
+31 -25
drivers/scsi/scsi_debugfs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0
2 2 #include <linux/bitops.h>
3 + #include <linux/cleanup.h>
3 4 #include <linux/seq_file.h>
4 5 #include <scsi/scsi_cmnd.h>
5 6 #include <scsi/scsi_dbg.h>
··· 33 32     return 0;
34 33 }
35 34 
35 + static const char *scsi_cmd_list_info(struct scsi_cmnd *cmd)
36 + {
37 +     struct Scsi_Host *shost = cmd->device->host;
38 +     struct scsi_cmnd *cmd2;
39 + 
40 +     guard(spinlock_irq)(shost->host_lock);
41 + 
42 +     list_for_each_entry(cmd2, &shost->eh_abort_list, eh_entry)
43 +         if (cmd == cmd2)
44 +             return "on eh_abort_list";
45 + 
46 +     list_for_each_entry(cmd2, &shost->eh_cmd_q, eh_entry)
47 +         if (cmd == cmd2)
48 +             return "on eh_cmd_q";
49 + 
50 +     return NULL;
51 + }
52 + 
36 53 void scsi_show_rq(struct seq_file *m, struct request *rq)
37 54 {
38 -     struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq), *cmd2;
39 -     struct Scsi_Host *shost = cmd->device->host;
55 +     struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
40 56     int alloc_ms = jiffies_to_msecs(jiffies - cmd->jiffies_at_alloc);
41 57     int timeout_ms = jiffies_to_msecs(rq->timeout);
42 -     const char *list_info = NULL;
43 58     char buf[80] = "(?)";
44 59 
45 -     spin_lock_irq(shost->host_lock);
46 -     list_for_each_entry(cmd2, &shost->eh_abort_list, eh_entry) {
47 -         if (cmd == cmd2) {
48 -             list_info = "on eh_abort_list";
49 -             goto unlock;
50 -         }
51 -     }
52 -     list_for_each_entry(cmd2, &shost->eh_cmd_q, eh_entry) {
53 -         if (cmd == cmd2) {
54 -             list_info = "on eh_cmd_q";
55 -             goto unlock;
56 -         }
57 -     }
58 -  unlock:
59 -     spin_unlock_irq(shost->host_lock);
60 +     if (cmd->flags & SCMD_INITIALIZED) {
61 +         const char *list_info = scsi_cmd_list_info(cmd);
60 62 
61 -     __scsi_format_command(buf, sizeof(buf), cmd->cmnd, cmd->cmd_len);
62 -     seq_printf(m, ", .cmd=%s, .retries=%d, .allowed=%d, .result = %#x, %s%s.flags=",
63 -            buf, cmd->retries, cmd->allowed, cmd->result,
64 -            list_info ? : "", list_info ? ", " : "");
63 +         __scsi_format_command(buf, sizeof(buf), cmd->cmnd, cmd->cmd_len);
64 +         seq_printf(m, ", .cmd=%s, .retries=%d, .allowed=%d, .result = %#x%s%s",
65 +                buf, cmd->retries, cmd->allowed, cmd->result,
66 +                list_info ? ", " : "", list_info ? : "");
67 +         seq_printf(m, ", .timeout=%d.%03d, allocated %d.%03d s ago",
68 +                timeout_ms / 1000, timeout_ms % 1000,
69 +                alloc_ms / 1000, alloc_ms % 1000);
70 +     }
71 +     seq_printf(m, ", .flags=");
65 72     scsi_flags_show(m, cmd->flags, scsi_cmd_flags,
66 73             ARRAY_SIZE(scsi_cmd_flags));
67 -     seq_printf(m, ", .timeout=%d.%03d, allocated %d.%03d s ago",
68 -            timeout_ms / 1000, timeout_ms % 1000,
69 -            alloc_ms / 1000, alloc_ms % 1000);
70 74 }
+10 -8
drivers/scsi/scsi_devinfo.c
··· 293 293     size_t from_length;
294 294 
295 295     from_length = strlen(from);
296 -        /* This zero-pads the destination */
297 -        strncpy(to, from, to_length);
298 -        if (from_length < to_length && !compatible) {
299 -            /*
300 -             * space pad the string if it is short.
301 -             */
302 -            memset(&to[from_length], ' ', to_length - from_length);
303 -        }
296 +
297 +    /*
298 +     * null pad and null terminate if compatible
299 +     * otherwise space pad
300 +     */
301 +    if (compatible)
302 +        strscpy_pad(to, from, to_length);
303 +    else
304 +        memcpy_and_pad(to, to_length, from, from_length, ' ');
305 +
304 306     if (from_length > to_length)
305 307         printk(KERN_WARNING "%s: %s string '%s' is too long\n",
306 308                __func__, name, from);
+18 -24
drivers/scsi/scsi_lib.c
··· 32 32 #include <scsi/scsi_driver.h>
33 33 #include <scsi/scsi_eh.h>
34 34 #include <scsi/scsi_host.h>
35 - #include <scsi/scsi_transport.h> /* __scsi_init_queue() */
35 + #include <scsi/scsi_transport.h> /* scsi_init_limits() */
36 36 #include <scsi/scsi_dh.h>
37 37 
38 38 #include <trace/events/scsi.h>
··· 1963 1963     blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]);
1964 1964 }
1965 1965 
1966 - void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q)
1966 + void scsi_init_limits(struct Scsi_Host *shost, struct queue_limits *lim)
1967 1967 {
1968 1968     struct device *dev = shost->dma_dev;
1969 1969 
1970 -    /*
1971 -     * this limit is imposed by hardware restrictions
1972 -     */
1973 -    blk_queue_max_segments(q, min_t(unsigned short, shost->sg_tablesize,
1974 -                    SG_MAX_SEGMENTS));
1970 +    memset(lim, 0, sizeof(*lim));
1971 +    lim->max_segments =
1972 +        min_t(unsigned short, shost->sg_tablesize, SG_MAX_SEGMENTS);
1975 1973 
1976 1974     if (scsi_host_prot_dma(shost)) {
1977 1975         shost->sg_prot_tablesize =
1978 1976             min_not_zero(shost->sg_prot_tablesize,
1979 1977                      (unsigned short)SCSI_MAX_PROT_SG_SEGMENTS);
1980 1978         BUG_ON(shost->sg_prot_tablesize < shost->sg_tablesize);
1981 -        blk_queue_max_integrity_segments(q, shost->sg_prot_tablesize);
1979 +        lim->max_integrity_segments = shost->sg_prot_tablesize;
1982 1980     }
1983 1981 
1984 -    blk_queue_max_hw_sectors(q, shost->max_sectors);
1985 -    blk_queue_segment_boundary(q, shost->dma_boundary);
1982 +    lim->max_hw_sectors = shost->max_sectors;
1983 +    lim->seg_boundary_mask = shost->dma_boundary;
1984 +    lim->max_segment_size = shost->max_segment_size;
1985 +    lim->virt_boundary_mask = shost->virt_boundary_mask;
1986 +    lim->dma_alignment = max_t(unsigned int,
1987 +        shost->dma_alignment, dma_get_cache_alignment() - 1);
1988 +
1989 +    if (shost->no_highmem)
1990 +        lim->bounce = BLK_BOUNCE_HIGH;
1991 +
1986 1992     dma_set_seg_boundary(dev, shost->dma_boundary);
1987 -
1988 -    blk_queue_max_segment_size(q, shost->max_segment_size);
1989 -    blk_queue_virt_boundary(q, shost->virt_boundary_mask);
1990 -    dma_set_max_seg_size(dev, queue_max_segment_size(q));
1991 -
1992 -    /*
1993 -     * Set a reasonable default alignment: The larger of 32-byte (dword),
1994 -     * which is a common minimum for HBAs, and the minimum DMA alignment,
1995 -     * which is set by the platform.
1996 -     *
1997 -     * Devices that require a bigger alignment can increase it later.
1998 -     */
1999 -    blk_queue_dma_alignment(q, max(4, dma_get_cache_alignment()) - 1);
1993 +    dma_set_max_seg_size(dev, shost->max_segment_size);
2000 1994 }
2001 - EXPORT_SYMBOL_GPL(__scsi_init_queue);
1995 + EXPORT_SYMBOL_GPL(scsi_init_limits);
2002 1996 
2003 1997 static const struct blk_mq_ops scsi_mq_ops_no_commit = {
2004 1998     .get_budget = scsi_mq_get_budget,
+41 -33
drivers/scsi/scsi_scan.c
··· 227 227 
228 228     /*
229 229      * realloc if new shift is calculated, which is caused by setting
230 -         * up one new default queue depth after calling ->slave_configure
230 +     * up one new default queue depth after calling ->device_configure
231 231      */
232 232     if (!need_alloc && new_shift != sdev->budget_map.shift)
233 233         need_alloc = need_free = true;
··· 283 283     struct request_queue *q;
284 284     int display_failure_msg = 1, ret;
285 285     struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
286 +    struct queue_limits lim;
286 287 
287 288     sdev = kzalloc(sizeof(*sdev) + shost->transportt->device_size,
288 289                GFP_KERNEL);
··· 333 332 
334 333     sdev->sg_reserved_size = INT_MAX;
335 334 
336 -    q = blk_mq_alloc_queue(&sdev->host->tag_set, NULL, NULL);
335 +    scsi_init_limits(shost, &lim);
336 +    q = blk_mq_alloc_queue(&sdev->host->tag_set, &lim, NULL);
337 337     if (IS_ERR(q)) {
338 338         /* release fn is set up in scsi_sysfs_device_initialise, so
339 339          * have to free and put manually here */
··· 345 343     kref_get(&sdev->host->tagset_refcnt);
346 344     sdev->request_queue = q;
347 345     q->queuedata = sdev;
348 -    __scsi_init_queue(sdev->host, q);
349 346 
350 347     depth = sdev->host->cmd_per_lun ?: 1;
351 348 
··· 874 873 static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
875 874                 blist_flags_t *bflags, int async)
876 875 {
876 +    const struct scsi_host_template *hostt = sdev->host->hostt;
877 +    struct queue_limits lim;
877 878     int ret;
878 879 
879 880     /*
··· 1007 1004     sdev->select_no_atn = 1;
1008 1005 
1009 1006     /*
1010 -     * Maximum 512 sector transfer length
1011 -     * broken RA4x00 Compaq Disk Array
1012 -     */
1013 -    if (*bflags & BLIST_MAX_512)
1014 -        blk_queue_max_hw_sectors(sdev->request_queue, 512);
1015 -    /*
1016 -     * Max 1024 sector transfer length for targets that report incorrect
1017 -     * max/optimal lengths and relied on the old block layer safe default
1018 -     */
1019 -    else if (*bflags & BLIST_MAX_1024)
1020 -        blk_queue_max_hw_sectors(sdev->request_queue, 1024);
1021 -
1022 -    /*
1023 1007      * Some devices may not want to have a start command automatically
1024 1008      * issued when a device is added.
1025 1009      */
··· 1066 1076 
1067 1077     transport_configure_device(&sdev->sdev_gendev);
1068 1078 
1069 -    if (sdev->host->hostt->slave_configure) {
1070 -        ret = sdev->host->hostt->slave_configure(sdev);
1071 -        if (ret) {
1072 -            /*
1073 -             * if LLDD reports slave not present, don't clutter
1074 -             * console with alloc failure messages
1075 -             */
1076 -            if (ret != -ENXIO) {
1077 -                sdev_printk(KERN_ERR, sdev,
1078 -                        "failed to configure device\n");
1079 -            }
1080 -            return SCSI_SCAN_NO_RESPONSE;
1081 -        }
1079 +    /*
1080 +     * No need to freeze the queue as it isn't reachable to anyone else yet.
1081 +     */
1082 +    lim = queue_limits_start_update(sdev->request_queue);
1083 +    if (*bflags & BLIST_MAX_512)
1084 +        lim.max_hw_sectors = 512;
1085 +    else if (*bflags & BLIST_MAX_1024)
1086 +        lim.max_hw_sectors = 1024;
1082 1087 
1088 +    if (hostt->device_configure)
1089 +        ret = hostt->device_configure(sdev, &lim);
1090 +    else if (hostt->slave_configure)
1091 +        ret = hostt->slave_configure(sdev);
1092 +    if (ret) {
1093 +        queue_limits_cancel_update(sdev->request_queue);
1083 1094         /*
1084 -         * The queue_depth is often changed in ->slave_configure.
1085 -         * Set up budget map again since memory consumption of
1086 -         * the map depends on actual queue depth.
1095 +         * If the LLDD reports device not present, don't clutter the
1096 +         * console with failure messages.
1087 1097          */
1088 -        scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth);
1098 +        if (ret != -ENXIO)
1099 +            sdev_printk(KERN_ERR, sdev,
1100 +                    "failed to configure device\n");
1101 +        return SCSI_SCAN_NO_RESPONSE;
1089 1102     }
1103 +
1104 +    ret = queue_limits_commit_update(sdev->request_queue, &lim);
1105 +    if (ret) {
1106 +        sdev_printk(KERN_ERR, sdev, "failed to apply queue limits.\n");
1107 +        return SCSI_SCAN_NO_RESPONSE;
1108 +    }
1109 +
1110 +    /*
1111 +     * The queue_depth is often changed in ->device_configure.
1112 +     *
1113 +     * Set up budget map again since memory consumption of the map depends
1114 +     * on actual queue depth.
1115 +     */
1116 +    if (hostt->device_configure || hostt->slave_configure)
1117 +        scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth);
1090 1118 
1091 1119     if (sdev->scsi_level >= SCSI_3)
1092 1120         scsi_attach_vpd(sdev);
+3 -2
drivers/scsi/scsi_sysfs.c
··· 1609 1609 }
1610 1610 EXPORT_SYMBOL(scsi_remove_target);
1611 1611 
1612 - int scsi_register_driver(struct device_driver *drv)
1612 + int __scsi_register_driver(struct device_driver *drv, struct module *owner)
1613 1613 {
1614 1614     drv->bus = &scsi_bus_type;
1615 +    drv->owner = owner;
1615 1616 
1616 1617     return driver_register(drv);
1617 1618 }
1618 - EXPORT_SYMBOL(scsi_register_driver);
1619 + EXPORT_SYMBOL(__scsi_register_driver);
1619 1620 
1620 1621 int scsi_register_interface(struct class_interface *intf)
1621 1622 {
+9 -6
drivers/scsi/scsi_transport_fc.c
··· 4276 4276 {
4277 4277     struct device *dev = &shost->shost_gendev;
4278 4278     struct fc_internal *i = to_fc_internal(shost->transportt);
4279 +    struct queue_limits lim;
4279 4280     struct request_queue *q;
4280 4281     char bsg_name[20];
4281 4282 
··· 4287 4286 
4288 4287     snprintf(bsg_name, sizeof(bsg_name),
4289 4288          "fc_host%d", shost->host_no);
4290 -
4291 -    q = bsg_setup_queue(dev, bsg_name, fc_bsg_dispatch, fc_bsg_job_timeout,
4292 -                i->f->dd_bsg_size);
4289 +    scsi_init_limits(shost, &lim);
4290 +    lim.max_segments = min_not_zero(lim.max_segments, i->f->max_bsg_segments);
4291 +    q = bsg_setup_queue(dev, bsg_name, &lim, fc_bsg_dispatch,
4292 +                fc_bsg_job_timeout, i->f->dd_bsg_size);
4293 4293     if (IS_ERR(q)) {
4294 4294         dev_err(dev,
4295 4295             "fc_host%d: bsg interface failed to initialize - setup queue\n",
4296 4296             shost->host_no);
4297 4297         return PTR_ERR(q);
4298 4298     }
4299 -    __scsi_init_queue(shost, q);
4300 4299     blk_queue_rq_timeout(q, FC_DEFAULT_BSG_TIMEOUT);
4301 4300     fc_host->rqst_q = q;
4302 4301     return 0;
··· 4312 4311 {
4313 4312     struct device *dev = &rport->dev;
4314 4313     struct fc_internal *i = to_fc_internal(shost->transportt);
4314 +    struct queue_limits lim;
4315 4315     struct request_queue *q;
4316 4316 
4317 4317     rport->rqst_q = NULL;
··· 4320 4318     if (!i->f->bsg_request)
4321 4319         return -ENOTSUPP;
4322 4320 
4323 -    q = bsg_setup_queue(dev, dev_name(dev), fc_bsg_dispatch_prep,
4321 +    scsi_init_limits(shost, &lim);
4322 +    lim.max_segments = min_not_zero(lim.max_segments, i->f->max_bsg_segments);
4323 +    q = bsg_setup_queue(dev, dev_name(dev), &lim, fc_bsg_dispatch_prep,
4324 4324                 fc_bsg_job_timeout, i->f->dd_bsg_size);
4325 4325     if (IS_ERR(q)) {
4326 4326         dev_err(dev, "failed to setup bsg queue\n");
4327 4327         return PTR_ERR(q);
4328 4328     }
4329 -    __scsi_init_queue(shost, q);
4330 4329     blk_queue_rq_timeout(q, BLK_DEFAULT_SG_TIMEOUT);
4331 4330     rport->rqst_q = q;
4332 4331     return 0;
+4 -3
drivers/scsi/scsi_transport_iscsi.c
··· 1535 1535 {
1536 1536     struct device *dev = &shost->shost_gendev;
1537 1537     struct iscsi_internal *i = to_iscsi_internal(shost->transportt);
1538 +    struct queue_limits lim;
1538 1539     struct request_queue *q;
1539 1540     char bsg_name[20];
1540 1541 
··· 1543 1542         return -ENOTSUPP;
1544 1543 
1545 1544     snprintf(bsg_name, sizeof(bsg_name), "iscsi_host%d", shost->host_no);
1546 -    q = bsg_setup_queue(dev, bsg_name, iscsi_bsg_host_dispatch, NULL, 0);
1545 +    scsi_init_limits(shost, &lim);
1546 +    q = bsg_setup_queue(dev, bsg_name, &lim, iscsi_bsg_host_dispatch, NULL,
1547 +                0);
1547 1548     if (IS_ERR(q)) {
1548 1549         shost_printk(KERN_ERR, shost, "bsg interface failed to "
1549 1550                  "initialize - no request queue\n");
1550 1551         return PTR_ERR(q);
1551 1552     }
1552 -    __scsi_init_queue(shost, q);
1553 1553 
1554 1554     ihost->bsg_q = q;
1555 1555     return 0;
··· 1605 1603 static LIST_HEAD(sesslist);
1606 1604 static DEFINE_SPINLOCK(sesslock);
1607 1605 static LIST_HEAD(connlist);
1608 - static LIST_HEAD(connlist_err);
1609 1606 static DEFINE_SPINLOCK(connlock);
1610 1607 
1611 1608 static uint32_t iscsi_conn_get_sid(struct iscsi_cls_conn *conn)
+2 -2
drivers/scsi/scsi_transport_sas.c
··· 197 197     }
198 198 
199 199     if (rphy) {
200 -        q = bsg_setup_queue(&rphy->dev, dev_name(&rphy->dev),
200 +        q = bsg_setup_queue(&rphy->dev, dev_name(&rphy->dev), NULL,
201 201                     sas_smp_dispatch, NULL, 0);
202 202         if (IS_ERR(q))
203 203             return PTR_ERR(q);
··· 206 206         char name[20];
207 207 
208 208         snprintf(name, sizeof(name), "sas_host%d", shost->host_no);
209 -        q = bsg_setup_queue(&shost->shost_gendev, name,
209 +        q = bsg_setup_queue(&shost->shost_gendev, name, NULL,
210 210                     sas_smp_dispatch, NULL, 0);
211 211         if (IS_ERR(q))
212 212             return PTR_ERR(q);
-1
drivers/scsi/sd.c
··· 4193 4193 static struct scsi_driver sd_template = {
4194 4194     .gendrv = {
4195 4195         .name = "sd",
4196 -        .owner = THIS_MODULE,
4197 4196         .probe = sd_probe,
4198 4197         .probe_type = PROBE_PREFER_ASYNCHRONOUS,
4199 4198         .remove = sd_remove,
-1
drivers/scsi/ses.c
··· 908 908 static struct scsi_driver ses_template = {
909 909     .gendrv = {
910 910         .name = "ses",
911 -        .owner = THIS_MODULE,
912 911         .probe = ses_probe,
913 912         .remove = ses_remove,
914 913     },
+2 -3
drivers/scsi/smartpqi/smartpqi_init.c
··· 1041 1041     buffer->driver_version_tag[1] = 'V';
1042 1042     put_unaligned_le16(sizeof(buffer->driver_version),
1043 1043                &buffer->driver_version_length);
1044 -    strncpy(buffer->driver_version, "Linux " DRIVER_VERSION,
1045 -        sizeof(buffer->driver_version) - 1);
1046 -    buffer->driver_version[sizeof(buffer->driver_version) - 1] = '\0';
1044 +    strscpy(buffer->driver_version, "Linux " DRIVER_VERSION,
1045 +        sizeof(buffer->driver_version));
1047 1046     buffer->dont_write_tag[0] = 'D';
1048 1047     buffer->dont_write_tag[1] = 'W';
1049 1048     buffer->end_tag[0] = 'Z';
+5 -6
drivers/scsi/snic/snic_attrs.c
··· 13 13 {
14 14     struct snic *snic = shost_priv(class_to_shost(dev));
15 15 
16 -    return snprintf(buf, PAGE_SIZE, "%s\n", snic->name);
16 +    return sysfs_emit(buf, "%s\n", snic->name);
17 17 }
18 18 
19 19 static ssize_t
··· 23 23 {
24 24     struct snic *snic = shost_priv(class_to_shost(dev));
25 25 
26 -    return snprintf(buf, PAGE_SIZE, "%s\n",
27 -            snic_state_str[snic_get_state(snic)]);
26 +    return sysfs_emit(buf, "%s\n", snic_state_str[snic_get_state(snic)]);
28 27 }
29 28 
30 29 static ssize_t
··· 31 32               struct device_attribute *attr,
32 33               char *buf)
33 34 {
34 -    return snprintf(buf, PAGE_SIZE, "%s\n", SNIC_DRV_VERSION);
35 +    return sysfs_emit(buf, "%s\n", SNIC_DRV_VERSION);
35 36 }
36 37 
37 38 static ssize_t
··· 44 45     if (snic->config.xpt_type == SNIC_DAS)
45 46         snic->link_status = svnic_dev_link_status(snic->vdev);
46 47 
47 -    return snprintf(buf, PAGE_SIZE, "%s\n",
48 -            (snic->link_status) ? "Link Up" : "Link Down");
48 +    return sysfs_emit(buf, "%s\n",
49 +              (snic->link_status) ? "Link Up" : "Link Down");
49 50 }
50 51 
51 52 static DEVICE_ATTR(snic_sym_name, S_IRUGO, snic_show_sym_name, NULL);
-1
drivers/scsi/sr.c
··· 95 95 static struct scsi_driver sr_template = {
96 96     .gendrv = {
97 97         .name = "sr",
98 -        .owner = THIS_MODULE,
99 98         .probe = sr_probe,
100 99         .remove = sr_remove,
101 100         .pm = &sr_pm_ops,
-1
drivers/scsi/st.c
··· 206 206 static struct scsi_driver st_template = {
207 207     .gendrv = {
208 208         .name = "st",
209 -        .owner = THIS_MODULE,
210 209         .probe = st_probe,
211 210         .remove = st_remove,
212 211         .groups = st_drv_groups,
+1 -3
drivers/scsi/wd33c93.c
··· 1721 1721     p1 = setup_buffer;
1722 1722     *p1 = '\0';
1723 1723     if (str)
1724 -        strncpy(p1, str, SETUP_BUFFER_SIZE - strlen(setup_buffer));
1725 -    setup_buffer[SETUP_BUFFER_SIZE - 1] = '\0';
1726 -    p1 = setup_buffer;
1724 +        strscpy(p1, str, SETUP_BUFFER_SIZE);
1727 1725     i = 0;
1728 1726     while (*p1 && (i < MAX_SETUP_ARGS)) {
1729 1727         p2 = strchr(p1, ',');
+12 -12
drivers/staging/rts5208/rtsx.c
··· 70 70 
71 71 static int slave_configure(struct scsi_device *sdev)
72 72 {
73 -    /*
74 -     * Scatter-gather buffers (all but the last) must have a length
75 -     * divisible by the bulk maxpacket size. Otherwise a data packet
76 -     * would end up being short, causing a premature end to the data
77 -     * transfer. Since high-speed bulk pipes have a maxpacket size
78 -     * of 512, we'll use that as the scsi device queue's DMA alignment
79 -     * mask. Guaranteeing proper alignment of the first buffer will
80 -     * have the desired effect because, except at the beginning and
81 -     * the end, scatter-gather buffers follow page boundaries.
82 -     */
83 -    blk_queue_dma_alignment(sdev->request_queue, (512 - 1));
84 -
85 73     /* Set the SCSI level to at least 2. We'll leave it at 3 if that's
86 74      * what is originally reported. We need this to avoid confusing
87 75      * the SCSI layer with devices that report 0 or 1, but need 10-byte
··· 206 218 
207 219     /* limit the total size of a transfer to 120 KB */
208 220     .max_sectors = 240,
221 +
222 +    /*
223 +     * Scatter-gather buffers (all but the last) must have a length
224 +     * divisible by the bulk maxpacket size. Otherwise a data packet
225 +     * would end up being short, causing a premature end to the data
226 +     * transfer. Since high-speed bulk pipes have a maxpacket size
227 +     * of 512, we'll use that as the scsi device queue's DMA alignment
228 +     * mask. Guaranteeing proper alignment of the first buffer will
229 +     * have the desired effect because, except at the beginning and
230 +     * the end, scatter-gather buffers follow page boundaries.
231 +     */
232 +    .dma_alignment = 511,
209 233 
210 234     /* emulated HBA */
211 235     .emulated = 1,
-1
drivers/target/target_core_device.c
··· 37 37 #include "target_core_ua.h"
38 38 
39 39 static DEFINE_MUTEX(device_mutex);
40 - static LIST_HEAD(device_list);
41 40 static DEFINE_IDR(devices_idr);
42 41 
43 42 static struct se_hba *lun0_hba;
+1 -2
drivers/ufs/core/ufs-mcq.c
··· 601 601     addr = le64_to_cpu(cmd_desc_base_addr) & CQE_UCD_BA;
602 602 
603 603     while (sq_head_slot != hwq->sq_tail_slot) {
604 -        utrd = hwq->sqe_base_addr +
605 -               sq_head_slot * sizeof(struct utp_transfer_req_desc);
604 +        utrd = hwq->sqe_base_addr + sq_head_slot;
606 605         match = le64_to_cpu(utrd->command_desc_base_addr) & CQE_UCD_BA;
607 606         if (addr == match) {
608 607             ufshcd_mcq_nullify_sqe(utrd);
+2 -1
drivers/ufs/core/ufs_bsg.c
··· 253 253     if (ret)
254 254         goto out;
255 255 
256 -    q = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), ufs_bsg_request, NULL, 0);
256 +    q = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), NULL, ufs_bsg_request,
257 +                NULL, 0);
257 258     if (IS_ERR(q)) {
258 259         ret = PTR_ERR(q);
259 260         goto out;
+92 -278
drivers/ufs/core/ufshcd.c
··· 748 748  */
749 749 static inline u32 ufshcd_get_intr_mask(struct ufs_hba *hba)
750 750 {
751 -    if (hba->ufs_version == ufshci_version(1, 0))
752 -        return INTERRUPT_MASK_ALL_VER_10;
753 751     if (hba->ufs_version <= ufshci_version(2, 0))
754 752         return INTERRUPT_MASK_ALL_VER_11;
755 753 
··· 987 989     return ufshcd_readl(hba, REG_CONTROLLER_ENABLE) & CONTROLLER_ENABLE;
988 990 }
989 991 EXPORT_SYMBOL_GPL(ufshcd_is_hba_active);
990 - 
991 - u32 ufshcd_get_local_unipro_ver(struct ufs_hba *hba)
992 - {
993 -    /* HCI version 1.0 and 1.1 supports UniPro 1.41 */
994 -    if (hba->ufs_version <= ufshci_version(1, 1))
995 -        return UFS_UNIPRO_VER_1_41;
996 -    else
997 -        return UFS_UNIPRO_VER_1_6;
998 - }
999 - EXPORT_SYMBOL(ufshcd_get_local_unipro_ver);
1000 - 
1001 - static bool ufshcd_is_unipro_pa_params_tuning_req(struct ufs_hba *hba)
1002 - {
1003 -    /*
1004 -     * If both host and device support UniPro ver1.6 or later, PA layer
1005 -     * parameters tuning happens during link startup itself.
1006 -     *
1007 -     * We can manually tune PA layer parameters if either host or device
1008 -     * doesn't support UniPro ver 1.6 or later. But to keep manual tuning
1009 -     * logic simple, we will only do manual tuning if local unipro version
1010 -     * doesn't support ver1.6 or later.
1011 -     */
1012 -    return ufshcd_get_local_unipro_ver(hba) < UFS_UNIPRO_VER_1_6;
1013 - }
1014 992 
1015 993 /**
1016 994  * ufshcd_pm_qos_init - initialize PM QoS request
··· 2648 2674 {
2649 2675     u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
2650 2676 
2651 -    if (hba->ufs_version == ufshci_version(1, 0)) {
2652 -        u32 rw;
2653 -        rw = set & INTERRUPT_MASK_RW_VER_10;
2654 -        set = rw | ((set ^ intrs) & intrs);
2655 -    } else {
2656 -        set |= intrs;
2657 -    }
2658 -
2677 +    set |= intrs;
2659 2678     ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE);
2660 2679 }
2661 2680 
··· 2661 2694 {
2662 2695     u32 set = ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
2663 2696 
2664 -    if (hba->ufs_version == ufshci_version(1, 0)) {
2665 -        u32 rw;
2666 -        rw = (set & INTERRUPT_MASK_RW_VER_10) &
2667 -            ~(intrs & INTERRUPT_MASK_RW_VER_10);
2668 -        set = rw | ((set & intrs) & ~INTERRUPT_MASK_RW_VER_10);
2669 -
2670 -    } else {
2671 -        set &= ~intrs;
2672 -    }
2673 -
2697 +    set &= ~intrs;
2674 2698     ufshcd_writel(hba, set, REG_INTERRUPT_ENABLE);
2675 2699 }
2676 2700 
2677 2701 /**
2678 2702  * ufshcd_prepare_req_desc_hdr - Fill UTP Transfer request descriptor header according to request
2679 2703  * descriptor according to request
2704 + * @hba: per adapter instance
2680 2705  * @lrbp: pointer to local reference block
2681 2706  * @upiu_flags: flags required in the header
2682 2707  * @cmd_dir: requests data direction
2683 2708  * @ehs_length: Total EHS Length (in 32-bytes units of all Extra Header Segments)
2684 2709  */
2685 - static void ufshcd_prepare_req_desc_hdr(struct ufshcd_lrb *lrbp, u8 *upiu_flags,
2686 -            enum dma_data_direction cmd_dir, int ehs_length)
2710 + static void
2711 + ufshcd_prepare_req_desc_hdr(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
2712 +             u8 *upiu_flags, enum dma_data_direction cmd_dir,
2713 +             int ehs_length)
2687 2714 {
2688 2715     struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
2689 2716     struct request_desc_header *h = &req_desc->header;
2690 2717     enum utp_data_direction data_direction;
2718 +
2719 +    lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
2691 2720 
2692 2721     *h = (typeof(*h)){ };
2693 2722 
··· 2817 2854     u8 upiu_flags;
2818 2855     int ret = 0;
2819 2856 
2820 -    if (hba->ufs_version <= ufshci_version(1, 1))
2821 -        lrbp->command_type = UTP_CMD_TYPE_DEV_MANAGE;
2822 -    else
2823 -        lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
2857 +    ufshcd_prepare_req_desc_hdr(hba, lrbp, &upiu_flags, DMA_NONE, 0);
2824 2858 
2825 -    ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, DMA_NONE, 0);
2826 2859     if (hba->dev_cmd.type == DEV_CMD_TYPE_QUERY)
2827 2860         ufshcd_prepare_utp_query_req_upiu(hba, lrbp, upiu_flags);
2828 2861     else if (hba->dev_cmd.type == DEV_CMD_TYPE_NOP)
··· 2841 2882     unsigned int ioprio_class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
2842 2883     u8 upiu_flags;
2843 2884 
2844 -    if (hba->ufs_version <= ufshci_version(1, 1))
2845 -        lrbp->command_type = UTP_CMD_TYPE_SCSI;
2846 -    else
2847 -        lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
2848 -
2849 -    ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags,
2850 -                    lrbp->cmd->sc_data_direction, 0);
2885 +    ufshcd_prepare_req_desc_hdr(hba, lrbp, &upiu_flags, lrbp->cmd->sc_data_direction, 0);
2851 2886     if (ioprio_class == IOPRIO_CLASS_RT)
2852 2887         upiu_flags |= UPIU_CMD_FLAGS_CP;
2853 2888     ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags);
··· 3014 3061     return err;
3015 3062 }
3016 3063 
3017 - static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
3018 -         struct ufshcd_lrb *lrbp, enum dev_cmd_type cmd_type, int tag)
3064 + static void ufshcd_setup_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
3065 +                  enum dev_cmd_type cmd_type, u8 lun, int tag)
3019 3066 {
3020 3067     lrbp->cmd = NULL;
3021 3068     lrbp->task_tag = tag;
3022 -    lrbp->lun = 0; /* device management cmd is not specific to any LUN */
3069 +    lrbp->lun = lun;
3023 3070     lrbp->intr_cmd = true; /* No interrupt aggregation */
3024 3071     ufshcd_prepare_lrbp_crypto(NULL, lrbp);
3025 3072     hba->dev_cmd.type = cmd_type;
3073 + }
3074 + 
3075 + static int ufshcd_compose_dev_cmd(struct ufs_hba *hba,
3076 +         struct ufshcd_lrb *lrbp, enum dev_cmd_type cmd_type, int tag)
3077 + {
3078 +    ufshcd_setup_dev_cmd(hba, lrbp, cmd_type, 0, tag);
3026 3079 
3027 3080     return ufshcd_compose_devman_upiu(hba, lrbp);
3028 3081 }
··· 3041 3082  */
3042 3083 bool ufshcd_cmd_inflight(struct scsi_cmnd *cmd)
3043 3084 {
3044 -    struct request *rq;
3045 -
3046 -    if (!cmd)
3047 -        return false;
3048 -
3049 -    rq = scsi_cmd_to_rq(cmd);
3050 -    if (!blk_mq_request_started(rq))
3051 -        return false;
3052 -
3053 -    return true;
3085 +    return cmd && blk_mq_rq_state(scsi_cmd_to_rq(cmd)) == MQ_RQ_IN_FLIGHT;
3054 3086 }
3055 3087 
3056 3088 /*
··· 3226 3276     return err;
3227 3277 }
3228 3278 
3279 + static void ufshcd_dev_man_lock(struct ufs_hba *hba)
3280 + {
3281 +    ufshcd_hold(hba);
3282 +    mutex_lock(&hba->dev_cmd.lock);
3283 +    down_read(&hba->clk_scaling_lock);
3284 + }
3285 + 
3286 + static void ufshcd_dev_man_unlock(struct ufs_hba *hba)
3287 + {
3288 +    up_read(&hba->clk_scaling_lock);
3289 +    mutex_unlock(&hba->dev_cmd.lock);
3290 +    ufshcd_release(hba);
3291 + }
3292 + 
3293 + static int ufshcd_issue_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,
3294 +         const u32 tag, int timeout)
3295 + {
3296 +    DECLARE_COMPLETION_ONSTACK(wait);
3297 +    int err;
3298 +
3299 +    hba->dev_cmd.complete = &wait;
3300 +
3301 +    ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr);
3302 +
3303 +    ufshcd_send_command(hba, tag, hba->dev_cmd_queue);
3304 +    err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout);
3305 +
3306 +    ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP,
3307 +                    (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
3308 +
3309 +    return err;
3310 + }
3311 + 
3229 3312 /**
3230 3313  * ufshcd_exec_dev_cmd - API for sending device management requests
3231 3314  * @hba: UFS hba
··· 3273 3290 static int ufshcd_exec_dev_cmd(struct ufs_hba *hba,
3274 3291         enum dev_cmd_type cmd_type, int timeout)
3275 3292 {
3276 -    DECLARE_COMPLETION_ONSTACK(wait);
3277 3293     const u32 tag = hba->reserved_slot;
3278 -    struct ufshcd_lrb *lrbp;
3294 +    struct ufshcd_lrb *lrbp = &hba->lrb[tag];
3279 3295     int err;
3280 3296 
3281 3297     /* Protects use of hba->reserved_slot. */
3282 3298     lockdep_assert_held(&hba->dev_cmd.lock);
3283 3299 
3284 -    down_read(&hba->clk_scaling_lock);
3285 -
3286 -    lrbp = &hba->lrb[tag];
3287 -    lrbp->cmd = NULL;
3288 3300     err = ufshcd_compose_dev_cmd(hba, lrbp, cmd_type, tag);
3289 3301     if (unlikely(err))
3290 -        goto out;
3302 +        return err;
3291 3303 
3292 -    hba->dev_cmd.complete = &wait;
3293 -
3294 -    ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr);
3295 -
3296 -    ufshcd_send_command(hba, tag, hba->dev_cmd_queue);
3297 -    err = ufshcd_wait_for_dev_cmd(hba, lrbp, timeout);
3298 -    ufshcd_add_query_upiu_trace(hba, err ? UFS_QUERY_ERR : UFS_QUERY_COMP,
3299 -                    (struct utp_upiu_req *)lrbp->ucd_rsp_ptr);
3300 -
3301 - out:
3302 -    up_read(&hba->clk_scaling_lock);
3303 -    return err;
3304 +    return ufshcd_issue_dev_cmd(hba, lrbp, tag, timeout);
3304 3305 }
3305 3306 
3306 3307 /**
··· 3354 3387 
3355 3388     BUG_ON(!hba);
3356 3389 
3357 -    ufshcd_hold(hba);
3358 -    mutex_lock(&hba->dev_cmd.lock);
3390 +    ufshcd_dev_man_lock(hba);
3391 +
3359 3392     ufshcd_init_query(hba, &request, &response, opcode, idn, index,
3360 3393               selector);
3361 3394 
··· 3397 3430             MASK_QUERY_UPIU_FLAG_LOC) & 0x1;
3398 3431 
3399 3432  out_unlock:
3400 -    mutex_unlock(&hba->dev_cmd.lock);
3401 -    ufshcd_release(hba);
3433 +    ufshcd_dev_man_unlock(hba);
3402 3434     return err;
3403 3435 }
3404 3436 
··· 3427 3461         return -EINVAL;
3428 3462     }
3429 3463 
3430 -    ufshcd_hold(hba);
3464 +    ufshcd_dev_man_lock(hba);
3431 3465 
3432 -    mutex_lock(&hba->dev_cmd.lock);
3433 3466     ufshcd_init_query(hba, &request, &response, opcode, idn, index,
3434 3467               selector);
3435 3468 
··· 3458 3493     *attr_val = be32_to_cpu(response->upiu_res.value);
3459 3494 
3460 3495  out_unlock:
3461 -    mutex_unlock(&hba->dev_cmd.lock);
3462 -    ufshcd_release(hba);
3496 +    ufshcd_dev_man_unlock(hba);
3463 3497     return err;
3464 3498 }
3465 3499 
··· 3521 3557         return -EINVAL;
3522 3558     }
3523 3559 
3524 -    ufshcd_hold(hba);
3560 +    ufshcd_dev_man_lock(hba);
3525 3561 
3526 -    mutex_lock(&hba->dev_cmd.lock);
3527 3562     ufshcd_init_query(hba, &request, &response, opcode, idn, index,
3528 3563               selector);
3529 3564     hba->dev_cmd.query.descriptor = desc_buf;
··· 3555 3592 
3556 3593  out_unlock:
3557 3594     hba->dev_cmd.query.descriptor = NULL;
3558 -    mutex_unlock(&hba->dev_cmd.lock);
3559 -    ufshcd_release(hba);
3595 +    ufshcd_dev_man_unlock(hba);
3560 3596     return err;
3561 3597 }
3562 3598 
··· 4251 4289      * Make sure UIC command completion interrupt is disabled before
4252 4290      * issuing UIC command.
4253 4291      */
4254 -        wmb();
4292 +        ufshcd_readl(hba, REG_INTERRUPT_ENABLE);
4255 4293         reenable_intr = true;
4256 4294     }
4257 4295     spin_unlock_irqrestore(hba->host->host_lock, flags);
··· 4734 4772             REG_UTP_TASK_REQ_LIST_BASE_H);
4735 4773 
4736 4774     /*
4737 -     * Make sure base address and interrupt setup are updated before
4738 -     * enabling the run/stop registers below.
4739 -     */
4740 -    wmb();
4741 -
4742 -    /*
4743 4775      * UCRDY, UTMRLDY and UTRLRDY bits must be 1
4744 4776      */
4745 4777     reg = ufshcd_readl(hba, REG_CONTROLLER_STATUS);
··· 5030 5074     int err = 0;
5031 5075     int retries;
5032 5076 
5033 -    ufshcd_hold(hba);
5034 -    mutex_lock(&hba->dev_cmd.lock);
5077 +    ufshcd_dev_man_lock(hba);
5078 +
5035 5079     for (retries = NOP_OUT_RETRIES; retries > 0; retries--) {
5036 5080         err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_NOP,
5037 5081                       hba->nop_out_timeout);
··· 5041 5085 
5042 5086         dev_dbg(hba->dev, "%s: error %d retrying\n", __func__, err);
5043 5087     }
5044 -    mutex_unlock(&hba->dev_cmd.lock);
5045 -    ufshcd_release(hba);
5088 +
5089 +    ufshcd_dev_man_unlock(hba);
5046 5090 
5047 5091     if (err)
5048 5092         dev_err(hba->dev, "%s: NOP OUT failed %d\n", __func__, err);
··· 5219 5263      * resume, causing more messages and so on.
5220 5264      */
5221 5265     sdev->silence_suspend = 1;
5222 -
5223 -    if (hba->vops && hba->vops->config_scsi_dev)
5224 -        hba->vops->config_scsi_dev(sdev);
5225 5266 
5226 5267     ufshcd_crypto_register(hba, q);
5227 5268 
··· 5502 5549         ufshcd_release_scsi_cmd(hba, lrbp);
5503 5550         /* Do not touch lrbp after scsi done */
5504 5551         scsi_done(cmd);
5505 -    } else if (lrbp->command_type == UTP_CMD_TYPE_DEV_MANAGE ||
5506 -           lrbp->command_type == UTP_CMD_TYPE_UFS_STORAGE) {
5507 -        if (hba->dev_cmd.complete) {
5508 -            if (cqe) {
5509 -                ocs = le32_to_cpu(cqe->status) & MASK_OCS;
5510 -                lrbp->utr_descriptor_ptr->header.ocs = ocs;
5511 -            }
5512 -            complete(hba->dev_cmd.complete);
5552 +    } else if (hba->dev_cmd.complete) {
5553 +        if (cqe) {
5554 +            ocs = le32_to_cpu(cqe->status) & MASK_OCS;
5555 +            lrbp->utr_descriptor_ptr->header.ocs = ocs;
5513 5556         }
5557 +        complete(hba->dev_cmd.complete);
5514 5558     }
5515 5559 }
5516 5560 
··· 7042 7092 
7043 7093     /* send command to the controller */
7044 7094     __set_bit(task_tag, &hba->outstanding_tasks);
7045 -
7046 7095     ufshcd_writel(hba, 1 << task_tag, REG_UTP_TASK_REQ_DOOR_BELL);
7047 -    /* Make sure that doorbell is committed immediately */
7048 -    wmb();
7049 7096 
7050 7097     spin_unlock_irqrestore(host->host_lock, flags);
7051 7098 
··· 7150 7203         enum dev_cmd_type cmd_type,
7151 7204         enum query_opcode desc_op)
7152 7205 {
7153 -    DECLARE_COMPLETION_ONSTACK(wait);
7154 7206     const u32 tag = hba->reserved_slot;
7155 -    struct ufshcd_lrb *lrbp;
7207 +    struct ufshcd_lrb *lrbp = &hba->lrb[tag];
7156 7208     int err = 0;
7157 7209     u8 upiu_flags;
7158 7210 
7159 7211     /* Protects use of hba->reserved_slot.
*/ 7160 7212 lockdep_assert_held(&hba->dev_cmd.lock); 7161 7213 7162 - down_read(&hba->clk_scaling_lock); 7214 + ufshcd_setup_dev_cmd(hba, lrbp, cmd_type, 0, tag); 7163 7215 7164 - lrbp = &hba->lrb[tag]; 7165 - lrbp->cmd = NULL; 7166 - lrbp->task_tag = tag; 7167 - lrbp->lun = 0; 7168 - lrbp->intr_cmd = true; 7169 - ufshcd_prepare_lrbp_crypto(NULL, lrbp); 7170 - hba->dev_cmd.type = cmd_type; 7171 - 7172 - if (hba->ufs_version <= ufshci_version(1, 1)) 7173 - lrbp->command_type = UTP_CMD_TYPE_DEV_MANAGE; 7174 - else 7175 - lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE; 7216 + ufshcd_prepare_req_desc_hdr(hba, lrbp, &upiu_flags, DMA_NONE, 0); 7176 7217 7177 7218 /* update the task tag in the request upiu */ 7178 7219 req_upiu->header.task_tag = tag; 7179 - 7180 - ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, DMA_NONE, 0); 7181 7220 7182 7221 /* just copy the upiu request as it is */ 7183 7222 memcpy(lrbp->ucd_req_ptr, req_upiu, sizeof(*lrbp->ucd_req_ptr)); ··· 7178 7245 7179 7246 memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp)); 7180 7247 7181 - hba->dev_cmd.complete = &wait; 7182 - 7183 - ufshcd_add_query_upiu_trace(hba, UFS_QUERY_SEND, lrbp->ucd_req_ptr); 7184 - 7185 - ufshcd_send_command(hba, tag, hba->dev_cmd_queue); 7186 7248 /* 7187 7249 * ignore the returning value here - ufshcd_check_query_response is 7188 7250 * bound to fail since dev_cmd.query and dev_cmd.type were left empty. 7189 7251 * read the response directly ignoring all errors. 7190 7252 */ 7191 - ufshcd_wait_for_dev_cmd(hba, lrbp, QUERY_REQ_TIMEOUT); 7253 + ufshcd_issue_dev_cmd(hba, lrbp, tag, QUERY_REQ_TIMEOUT); 7192 7254 7193 7255 /* just copy the upiu response as it is */ 7194 7256 memcpy(rsp_upiu, lrbp->ucd_rsp_ptr, sizeof(*rsp_upiu)); ··· 7206 7278 ufshcd_add_query_upiu_trace(hba, err ? 
UFS_QUERY_ERR : UFS_QUERY_COMP, 7207 7279 (struct utp_upiu_req *)lrbp->ucd_rsp_ptr); 7208 7280 7209 - up_read(&hba->clk_scaling_lock); 7210 7281 return err; 7211 7282 } 7212 7283 ··· 7244 7317 cmd_type = DEV_CMD_TYPE_NOP; 7245 7318 fallthrough; 7246 7319 case UPIU_TRANSACTION_QUERY_REQ: 7247 - ufshcd_hold(hba); 7248 - mutex_lock(&hba->dev_cmd.lock); 7320 + ufshcd_dev_man_lock(hba); 7249 7321 err = ufshcd_issue_devman_upiu_cmd(hba, req_upiu, rsp_upiu, 7250 7322 desc_buff, buff_len, 7251 7323 cmd_type, desc_op); 7252 - mutex_unlock(&hba->dev_cmd.lock); 7253 - ufshcd_release(hba); 7324 + ufshcd_dev_man_unlock(hba); 7254 7325 7255 7326 break; 7256 7327 case UPIU_TRANSACTION_TASK_REQ: ··· 7298 7373 struct ufs_ehs *rsp_ehs, int sg_cnt, struct scatterlist *sg_list, 7299 7374 enum dma_data_direction dir) 7300 7375 { 7301 - DECLARE_COMPLETION_ONSTACK(wait); 7302 7376 const u32 tag = hba->reserved_slot; 7303 - struct ufshcd_lrb *lrbp; 7377 + struct ufshcd_lrb *lrbp = &hba->lrb[tag]; 7304 7378 int err = 0; 7305 7379 int result; 7306 7380 u8 upiu_flags; 7307 7381 u8 *ehs_data; 7308 7382 u16 ehs_len; 7383 + int ehs = (hba->capabilities & MASK_EHSLUTRD_SUPPORTED) ? 2 : 0; 7309 7384 7310 7385 /* Protects use of hba->reserved_slot. 
*/ 7311 - ufshcd_hold(hba); 7312 - mutex_lock(&hba->dev_cmd.lock); 7313 - down_read(&hba->clk_scaling_lock); 7386 + ufshcd_dev_man_lock(hba); 7314 7387 7315 - lrbp = &hba->lrb[tag]; 7316 - lrbp->cmd = NULL; 7317 - lrbp->task_tag = tag; 7318 - lrbp->lun = UFS_UPIU_RPMB_WLUN; 7388 + ufshcd_setup_dev_cmd(hba, lrbp, DEV_CMD_TYPE_RPMB, UFS_UPIU_RPMB_WLUN, tag); 7319 7389 7320 - lrbp->intr_cmd = true; 7321 - ufshcd_prepare_lrbp_crypto(NULL, lrbp); 7322 - hba->dev_cmd.type = DEV_CMD_TYPE_RPMB; 7323 - 7324 - /* Advanced RPMB starts from UFS 4.0, so its command type is UTP_CMD_TYPE_UFS_STORAGE */ 7325 - lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE; 7326 - 7327 - /* 7328 - * According to UFSHCI 4.0 specification page 24, if EHSLUTRDS is 0, host controller takes 7329 - * EHS length from CMD UPIU, and SW driver use EHS Length field in CMD UPIU. if it is 1, 7330 - * HW controller takes EHS length from UTRD. 7331 - */ 7332 - if (hba->capabilities & MASK_EHSLUTRD_SUPPORTED) 7333 - ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, dir, 2); 7334 - else 7335 - ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, dir, 0); 7390 + ufshcd_prepare_req_desc_hdr(hba, lrbp, &upiu_flags, DMA_NONE, ehs); 7336 7391 7337 7392 /* update the task tag */ 7338 7393 req_upiu->header.task_tag = tag; ··· 7327 7422 7328 7423 memset(lrbp->ucd_rsp_ptr, 0, sizeof(struct utp_upiu_rsp)); 7329 7424 7330 - hba->dev_cmd.complete = &wait; 7331 - 7332 - ufshcd_send_command(hba, tag, hba->dev_cmd_queue); 7333 - 7334 - err = ufshcd_wait_for_dev_cmd(hba, lrbp, ADVANCED_RPMB_REQ_TIMEOUT); 7425 + err = ufshcd_issue_dev_cmd(hba, lrbp, tag, ADVANCED_RPMB_REQ_TIMEOUT); 7335 7426 7336 7427 if (!err) { 7337 7428 /* Just copy the upiu response as it is */ ··· 7352 7451 } 7353 7452 } 7354 7453 7355 - up_read(&hba->clk_scaling_lock); 7356 - mutex_unlock(&hba->dev_cmd.lock); 7357 - ufshcd_release(hba); 7454 + ufshcd_dev_man_unlock(hba); 7455 + 7358 7456 return err ? 
: result; 7359 7457 } 7360 7458 ··· 8300 8400 } 8301 8401 8302 8402 /** 8303 - * ufshcd_tune_pa_tactivate - Tunes PA_TActivate of local UniPro 8304 - * @hba: per-adapter instance 8305 - * 8306 - * PA_TActivate parameter can be tuned manually if UniPro version is less than 8307 - * 1.61. PA_TActivate needs to be greater than or equal to peerM-PHY's 8308 - * RX_MIN_ACTIVATETIME_CAPABILITY attribute. This optimal value can help reduce 8309 - * the hibern8 exit latency. 8310 - * 8311 - * Return: zero on success, non-zero error value on failure. 8312 - */ 8313 - static int ufshcd_tune_pa_tactivate(struct ufs_hba *hba) 8314 - { 8315 - int ret = 0; 8316 - u32 peer_rx_min_activatetime = 0, tuned_pa_tactivate; 8317 - 8318 - ret = ufshcd_dme_peer_get(hba, 8319 - UIC_ARG_MIB_SEL( 8320 - RX_MIN_ACTIVATETIME_CAPABILITY, 8321 - UIC_ARG_MPHY_RX_GEN_SEL_INDEX(0)), 8322 - &peer_rx_min_activatetime); 8323 - if (ret) 8324 - goto out; 8325 - 8326 - /* make sure proper unit conversion is applied */ 8327 - tuned_pa_tactivate = 8328 - ((peer_rx_min_activatetime * RX_MIN_ACTIVATETIME_UNIT_US) 8329 - / PA_TACTIVATE_TIME_UNIT_US); 8330 - ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TACTIVATE), 8331 - tuned_pa_tactivate); 8332 - 8333 - out: 8334 - return ret; 8335 - } 8336 - 8337 - /** 8338 - * ufshcd_tune_pa_hibern8time - Tunes PA_Hibern8Time of local UniPro 8339 - * @hba: per-adapter instance 8340 - * 8341 - * PA_Hibern8Time parameter can be tuned manually if UniPro version is less than 8342 - * 1.61. PA_Hibern8Time needs to be maximum of local M-PHY's 8343 - * TX_HIBERN8TIME_CAPABILITY & peer M-PHY's RX_HIBERN8TIME_CAPABILITY. 8344 - * This optimal value can help reduce the hibern8 exit latency. 8345 - * 8346 - * Return: zero on success, non-zero error value on failure. 
8347 - */ 8348 - static int ufshcd_tune_pa_hibern8time(struct ufs_hba *hba) 8349 - { 8350 - int ret = 0; 8351 - u32 local_tx_hibern8_time_cap = 0, peer_rx_hibern8_time_cap = 0; 8352 - u32 max_hibern8_time, tuned_pa_hibern8time; 8353 - 8354 - ret = ufshcd_dme_get(hba, 8355 - UIC_ARG_MIB_SEL(TX_HIBERN8TIME_CAPABILITY, 8356 - UIC_ARG_MPHY_TX_GEN_SEL_INDEX(0)), 8357 - &local_tx_hibern8_time_cap); 8358 - if (ret) 8359 - goto out; 8360 - 8361 - ret = ufshcd_dme_peer_get(hba, 8362 - UIC_ARG_MIB_SEL(RX_HIBERN8TIME_CAPABILITY, 8363 - UIC_ARG_MPHY_RX_GEN_SEL_INDEX(0)), 8364 - &peer_rx_hibern8_time_cap); 8365 - if (ret) 8366 - goto out; 8367 - 8368 - max_hibern8_time = max(local_tx_hibern8_time_cap, 8369 - peer_rx_hibern8_time_cap); 8370 - /* make sure proper unit conversion is applied */ 8371 - tuned_pa_hibern8time = ((max_hibern8_time * HIBERN8TIME_UNIT_US) 8372 - / PA_HIBERN8_TIME_UNIT_US); 8373 - ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HIBERN8TIME), 8374 - tuned_pa_hibern8time); 8375 - out: 8376 - return ret; 8377 - } 8378 - 8379 - /** 8380 8403 * ufshcd_quirk_tune_host_pa_tactivate - Ensures that host PA_TACTIVATE is 8381 8404 * less than device PA_TACTIVATE time. 
8382 8405 * @hba: per-adapter instance ··· 8371 8548 8372 8549 static void ufshcd_tune_unipro_params(struct ufs_hba *hba) 8373 8550 { 8374 - if (ufshcd_is_unipro_pa_params_tuning_req(hba)) { 8375 - ufshcd_tune_pa_tactivate(hba); 8376 - ufshcd_tune_pa_hibern8time(hba); 8377 - } 8378 - 8379 8551 ufshcd_vops_apply_dev_quirks(hba); 8380 8552 8381 8553 if (hba->dev_quirks & UFS_DEVICE_QUIRK_PA_TACTIVATE) ··· 8534 8716 if (dev_info->wspecversion < 0x400) 8535 8717 return; 8536 8718 8537 - ufshcd_hold(hba); 8538 - 8539 - mutex_lock(&hba->dev_cmd.lock); 8719 + ufshcd_dev_man_lock(hba); 8540 8720 8541 8721 ufshcd_init_query(hba, &request, &response, 8542 8722 UPIU_QUERY_OPCODE_WRITE_ATTR, ··· 8552 8736 dev_err(hba->dev, "%s: failed to set timestamp %d\n", 8553 8737 __func__, err); 8554 8738 8555 - mutex_unlock(&hba->dev_cmd.lock); 8556 - ufshcd_release(hba); 8739 + ufshcd_dev_man_unlock(hba); 8557 8740 } 8558 8741 8559 8742 /** ··· 10215 10400 * are updated with the latest queue addresses. Only after 10216 10401 * updating these addresses, we can queue the new commands. 10217 10402 */ 10218 - mb(); 10403 + ufshcd_readl(hba, REG_UTP_TASK_REQ_LIST_BASE_H); 10219 10404 10220 10405 /* Resuming from hibernate, assume that link was OFF */ 10221 10406 ufshcd_set_link_off(hba); ··· 10436 10621 * Make sure that UFS interrupts are disabled and any pending interrupt 10437 10622 * status is cleared before registering UFS interrupt handler. 10438 10623 */ 10439 - mb(); 10624 + ufshcd_readl(hba, REG_INTERRUPT_ENABLE); 10440 10625 10441 10626 /* IRQ registration */ 10442 10627 err = devm_request_irq(dev, irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba); ··· 10716 10901 static struct scsi_driver ufs_dev_wlun_template = { 10717 10902 .gendrv = { 10718 10903 .name = "ufs_device_wlun", 10719 - .owner = THIS_MODULE, 10720 10904 .probe = ufshcd_wl_probe, 10721 10905 .remove = ufshcd_wl_remove, 10722 10906 .pm = &ufshcd_wl_pm_ops,
+1 -1
drivers/ufs/host/cdns-pltfrm.c
··· 136 136 * Make sure the register was updated, 137 137 * UniPro layer will not work with an incorrect value. 138 138 */ 139 - mb(); 139 + ufshcd_readl(hba, CDNS_UFS_REG_HCLKDIV); 140 140 141 141 return 0; 142 142 }
+186 -19
drivers/ufs/host/ufs-exynos.c
··· 50 50 #define HCI_ERR_EN_N_LAYER 0x80 51 51 #define HCI_ERR_EN_T_LAYER 0x84 52 52 #define HCI_ERR_EN_DME_LAYER 0x88 53 + #define HCI_V2P1_CTRL 0x8C 54 + #define IA_TICK_SEL BIT(16) 53 55 #define HCI_CLKSTOP_CTRL 0xB0 54 56 #define REFCLKOUT_STOP BIT(4) 55 57 #define MPHY_APBCLK_STOP BIT(3) ··· 61 59 #define CLK_STOP_MASK (REFCLKOUT_STOP | REFCLK_STOP |\ 62 60 UNIPRO_MCLK_STOP | MPHY_APBCLK_STOP|\ 63 61 UNIPRO_PCLK_STOP) 62 + /* HCI_MISC is also known as HCI_FORCE_HCS */ 64 63 #define HCI_MISC 0xB4 65 64 #define REFCLK_CTRL_EN BIT(7) 66 65 #define UNIPRO_PCLK_CTRL_EN BIT(6) ··· 139 136 /* 140 137 * UNIPRO registers 141 138 */ 139 + #define UNIPRO_DME_POWERMODE_REQ_LOCALL2TIMER0 0x7888 140 + #define UNIPRO_DME_POWERMODE_REQ_LOCALL2TIMER1 0x788c 141 + #define UNIPRO_DME_POWERMODE_REQ_LOCALL2TIMER2 0x7890 142 142 #define UNIPRO_DME_POWERMODE_REQ_REMOTEL2TIMER0 0x78B8 143 143 #define UNIPRO_DME_POWERMODE_REQ_REMOTEL2TIMER1 0x78BC 144 144 #define UNIPRO_DME_POWERMODE_REQ_REMOTEL2TIMER2 0x78C0 ··· 312 306 313 307 static int exynos7_ufs_pre_link(struct exynos_ufs *ufs) 314 308 { 309 + struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr; 310 + u32 val = attr->pa_dbg_opt_suite1_val; 315 311 struct ufs_hba *hba = ufs->hba; 316 - u32 val = ufs->drv_data->uic_attr->pa_dbg_option_suite; 317 312 int i; 318 313 319 314 exynos_ufs_enable_ov_tm(hba); ··· 331 324 UIC_ARG_MIB_SEL(TX_HIBERN8_CONTROL, i), 0x0); 332 325 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_TXPHY_CFGUPDT), 0x1); 333 326 udelay(1); 334 - ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_OPTION_SUITE), val | (1 << 12)); 327 + ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_opt_suite1_off), 328 + val | (1 << 12)); 335 329 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_SKIP_RESET_PHY), 0x1); 336 330 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_SKIP_LINE_RESET), 0x1); 337 331 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_LINE_RESET_REQ), 0x1); 338 332 udelay(1600); 339 - ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_OPTION_SUITE), val); 333 + 
ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_opt_suite1_off), val); 340 334 341 335 return 0; 342 336 } ··· 929 921 930 922 static void exynos_ufs_config_unipro(struct exynos_ufs *ufs) 931 923 { 924 + struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr; 932 925 struct ufs_hba *hba = ufs->hba; 933 926 934 - ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_CLK_PERIOD), 935 - DIV_ROUND_UP(NSEC_PER_SEC, ufs->mclk_rate)); 927 + if (attr->pa_dbg_clk_period_off) 928 + ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_clk_period_off), 929 + DIV_ROUND_UP(NSEC_PER_SEC, ufs->mclk_rate)); 930 + 936 931 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TXTRAILINGCLOCKS), 937 932 ufs->drv_data->uic_attr->tx_trailingclks); 938 - ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_OPTION_SUITE), 939 - ufs->drv_data->uic_attr->pa_dbg_option_suite); 933 + 934 + if (attr->pa_dbg_opt_suite1_off) 935 + ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_opt_suite1_off), 936 + attr->pa_dbg_opt_suite1_val); 937 + 938 + if (attr->pa_dbg_opt_suite2_off) 939 + ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_opt_suite2_off), 940 + attr->pa_dbg_opt_suite2_val); 940 941 } 941 942 942 943 static void exynos_ufs_config_intr(struct exynos_ufs *ufs, u32 errs, u8 index) ··· 1021 1004 static void exynos_ufs_fit_aggr_timeout(struct exynos_ufs *ufs) 1022 1005 { 1023 1006 u32 val; 1007 + 1008 + /* Select function clock (mclk) for timer tick */ 1009 + if (ufs->opts & EXYNOS_UFS_OPT_TIMER_TICK_SELECT) { 1010 + val = hci_readl(ufs, HCI_V2P1_CTRL); 1011 + val |= IA_TICK_SEL; 1012 + hci_writel(ufs, val, HCI_V2P1_CTRL); 1013 + } 1024 1014 1025 1015 val = exynos_ufs_calc_time_cntr(ufs, IATOVAL_NSEC / CNTR_DIV_VAL); 1026 1016 hci_writel(ufs, val & CNT_VAL_1US_MASK, HCI_1US_TO_CNT_VAL); ··· 1210 1186 if (ret) 1211 1187 goto out; 1212 1188 exynos_ufs_specify_phy_time_attr(ufs); 1213 - exynos_ufs_config_smu(ufs); 1189 + if (!(ufs->opts & EXYNOS_UFS_OPT_UFSPR_SECURE)) 1190 + exynos_ufs_config_smu(ufs); 1191 + 1192 + hba->host->dma_alignment = SZ_4K - 1; 1214 
1193 return 0; 1215 1194 1216 1195 out: ··· 1502 1475 1503 1476 static int fsd_ufs_pre_link(struct exynos_ufs *ufs) 1504 1477 { 1505 - int i; 1478 + struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr; 1506 1479 struct ufs_hba *hba = ufs->hba; 1480 + int i; 1507 1481 1508 - ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_CLK_PERIOD), 1482 + ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_clk_period_off), 1509 1483 DIV_ROUND_UP(NSEC_PER_SEC, ufs->mclk_rate)); 1510 1484 ufshcd_dme_set(hba, UIC_ARG_MIB(0x201), 0x12); 1511 1485 ufshcd_dme_set(hba, UIC_ARG_MIB(0x200), 0x40); ··· 1530 1502 1531 1503 ufshcd_dme_set(hba, UIC_ARG_MIB(0x200), 0x0); 1532 1504 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_AUTOMODE_THLD), 0x4E20); 1533 - ufshcd_dme_set(hba, UIC_ARG_MIB(PA_DBG_OPTION_SUITE), 0x2e820183); 1505 + 1506 + ufshcd_dme_set(hba, UIC_ARG_MIB(attr->pa_dbg_opt_suite1_off), 1507 + 0x2e820183); 1534 1508 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0x0); 1535 1509 1536 1510 exynos_ufs_establish_connt(ufs); 1537 1511 1538 1512 return 0; 1539 - } 1540 - 1541 - static void exynos_ufs_config_scsi_dev(struct scsi_device *sdev) 1542 - { 1543 - blk_queue_update_dma_alignment(sdev->request_queue, SZ_4K - 1); 1544 1513 } 1545 1514 1546 1515 static int fsd_ufs_post_link(struct exynos_ufs *ufs) ··· 1596 1571 return 0; 1597 1572 } 1598 1573 1574 + static inline u32 get_mclk_period_unipro_18(struct exynos_ufs *ufs) 1575 + { 1576 + return (16 * 1000 * 1000000UL / ufs->mclk_rate); 1577 + } 1578 + 1579 + static int gs101_ufs_pre_link(struct exynos_ufs *ufs) 1580 + { 1581 + struct ufs_hba *hba = ufs->hba; 1582 + int i; 1583 + u32 tx_line_reset_period, rx_line_reset_period; 1584 + 1585 + rx_line_reset_period = (RX_LINE_RESET_TIME * ufs->mclk_rate) 1586 + / NSEC_PER_MSEC; 1587 + tx_line_reset_period = (TX_LINE_RESET_TIME * ufs->mclk_rate) 1588 + / NSEC_PER_MSEC; 1589 + 1590 + unipro_writel(ufs, get_mclk_period_unipro_18(ufs), COMP_CLK_PERIOD); 1591 + 1592 + ufshcd_dme_set(hba, 
UIC_ARG_MIB(0x200), 0x40); 1593 + 1594 + for_each_ufs_rx_lane(ufs, i) { 1595 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_CLK_PRD, i), 1596 + DIV_ROUND_UP(NSEC_PER_SEC, ufs->mclk_rate)); 1597 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_CLK_PRD_EN, i), 0x0); 1598 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_VALUE2, i), 1599 + (rx_line_reset_period >> 16) & 0xFF); 1600 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_VALUE1, i), 1601 + (rx_line_reset_period >> 8) & 0xFF); 1602 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_RX_LINERESET_VALUE0, i), 1603 + (rx_line_reset_period) & 0xFF); 1604 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0x2f, i), 0x69); 1605 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0x84, i), 0x1); 1606 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0x25, i), 0xf6); 1607 + } 1608 + 1609 + for_each_ufs_tx_lane(ufs, i) { 1610 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_CLK_PRD, i), 1611 + DIV_ROUND_UP(NSEC_PER_SEC, ufs->mclk_rate)); 1612 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_CLK_PRD_EN, i), 1613 + 0x02); 1614 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_LINERESET_PVALUE2, i), 1615 + (tx_line_reset_period >> 16) & 0xFF); 1616 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_LINERESET_PVALUE1, i), 1617 + (tx_line_reset_period >> 8) & 0xFF); 1618 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(VND_TX_LINERESET_PVALUE0, i), 1619 + (tx_line_reset_period) & 0xFF); 1620 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0x04, i), 1); 1621 + ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(0x7F, i), 0); 1622 + } 1623 + 1624 + ufshcd_dme_set(hba, UIC_ARG_MIB(0x200), 0x0); 1625 + ufshcd_dme_set(hba, UIC_ARG_MIB(PA_LOCAL_TX_LCC_ENABLE), 0x0); 1626 + ufshcd_dme_set(hba, UIC_ARG_MIB(N_DEVICEID), 0x0); 1627 + ufshcd_dme_set(hba, UIC_ARG_MIB(N_DEVICEID_VALID), 0x1); 1628 + ufshcd_dme_set(hba, UIC_ARG_MIB(T_PEERDEVICEID), 0x1); 1629 + ufshcd_dme_set(hba, UIC_ARG_MIB(T_CONNECTIONSTATE), CPORT_CONNECTED); 1630 + ufshcd_dme_set(hba, UIC_ARG_MIB(0xA006), 0x8000); 1631 + 1632 + return 0; 1633 + } 1634 + 1635 + 
static int gs101_ufs_post_link(struct exynos_ufs *ufs) 1636 + { 1637 + struct ufs_hba *hba = ufs->hba; 1638 + 1639 + exynos_ufs_enable_dbg_mode(hba); 1640 + ufshcd_dme_set(hba, UIC_ARG_MIB(PA_SAVECONFIGTIME), 0x3e8); 1641 + exynos_ufs_disable_dbg_mode(hba); 1642 + 1643 + return 0; 1644 + } 1645 + 1646 + static int gs101_ufs_pre_pwr_change(struct exynos_ufs *ufs, 1647 + struct ufs_pa_layer_attr *pwr) 1648 + { 1649 + struct ufs_hba *hba = ufs->hba; 1650 + 1651 + ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA0), 12000); 1652 + ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA1), 32000); 1653 + ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PWRMODEUSERDATA2), 16000); 1654 + unipro_writel(ufs, 8064, UNIPRO_DME_POWERMODE_REQ_LOCALL2TIMER0); 1655 + unipro_writel(ufs, 28224, UNIPRO_DME_POWERMODE_REQ_LOCALL2TIMER1); 1656 + unipro_writel(ufs, 20160, UNIPRO_DME_POWERMODE_REQ_LOCALL2TIMER2); 1657 + unipro_writel(ufs, 12000, UNIPRO_DME_POWERMODE_REQ_REMOTEL2TIMER0); 1658 + unipro_writel(ufs, 32000, UNIPRO_DME_POWERMODE_REQ_REMOTEL2TIMER1); 1659 + unipro_writel(ufs, 16000, UNIPRO_DME_POWERMODE_REQ_REMOTEL2TIMER2); 1660 + 1661 + return 0; 1662 + } 1663 + 1599 1664 static const struct ufs_hba_variant_ops ufs_hba_exynos_ops = { 1600 1665 .name = "exynos_ufs", 1601 1666 .init = exynos_ufs_init, ··· 1698 1583 .hibern8_notify = exynos_ufs_hibern8_notify, 1699 1584 .suspend = exynos_ufs_suspend, 1700 1585 .resume = exynos_ufs_resume, 1701 - .config_scsi_dev = exynos_ufs_config_scsi_dev, 1702 1586 }; 1703 1587 1704 1588 static struct ufs_hba_variant_ops ufs_hba_exynosauto_vh_ops = { ··· 1758 1644 .rx_hs_g1_prep_sync_len_cap = PREP_LEN(0xf), 1759 1645 .rx_hs_g2_prep_sync_len_cap = PREP_LEN(0xf), 1760 1646 .rx_hs_g3_prep_sync_len_cap = PREP_LEN(0xf), 1761 - .pa_dbg_option_suite = 0x30103, 1647 + .pa_dbg_clk_period_off = PA_DBG_CLK_PERIOD, 1648 + .pa_dbg_opt_suite1_val = 0x30103, 1649 + .pa_dbg_opt_suite1_off = PA_DBG_OPTION_SUITE, 1762 1650 }; 1763 1651 1764 1652 static const struct 
exynos_ufs_drv_data exynosauto_ufs_drvs = { ··· 1812 1696 .post_pwr_change = exynos7_ufs_post_pwr_change, 1813 1697 }; 1814 1698 1699 + static struct exynos_ufs_uic_attr gs101_uic_attr = { 1700 + .tx_trailingclks = 0xff, 1701 + .tx_dif_p_nsec = 3000000, /* unit: ns */ 1702 + .tx_dif_n_nsec = 1000000, /* unit: ns */ 1703 + .tx_high_z_cnt_nsec = 20000, /* unit: ns */ 1704 + .tx_base_unit_nsec = 100000, /* unit: ns */ 1705 + .tx_gran_unit_nsec = 4000, /* unit: ns */ 1706 + .tx_sleep_cnt = 1000, /* unit: ns */ 1707 + .tx_min_activatetime = 0xa, 1708 + .rx_filler_enable = 0x2, 1709 + .rx_dif_p_nsec = 1000000, /* unit: ns */ 1710 + .rx_hibern8_wait_nsec = 4000000, /* unit: ns */ 1711 + .rx_base_unit_nsec = 100000, /* unit: ns */ 1712 + .rx_gran_unit_nsec = 4000, /* unit: ns */ 1713 + .rx_sleep_cnt = 1280, /* unit: ns */ 1714 + .rx_stall_cnt = 320, /* unit: ns */ 1715 + .rx_hs_g1_sync_len_cap = SYNC_LEN_COARSE(0xf), 1716 + .rx_hs_g2_sync_len_cap = SYNC_LEN_COARSE(0xf), 1717 + .rx_hs_g3_sync_len_cap = SYNC_LEN_COARSE(0xf), 1718 + .rx_hs_g1_prep_sync_len_cap = PREP_LEN(0xf), 1719 + .rx_hs_g2_prep_sync_len_cap = PREP_LEN(0xf), 1720 + .rx_hs_g3_prep_sync_len_cap = PREP_LEN(0xf), 1721 + .pa_dbg_opt_suite1_val = 0x90913C1C, 1722 + .pa_dbg_opt_suite1_off = PA_GS101_DBG_OPTION_SUITE1, 1723 + .pa_dbg_opt_suite2_val = 0xE01C115F, 1724 + .pa_dbg_opt_suite2_off = PA_GS101_DBG_OPTION_SUITE2, 1725 + }; 1726 + 1815 1727 static struct exynos_ufs_uic_attr fsd_uic_attr = { 1816 1728 .tx_trailingclks = 0x10, 1817 1729 .tx_dif_p_nsec = 3000000, /* unit: ns */ ··· 1862 1718 .rx_hs_g1_prep_sync_len_cap = PREP_LEN(0xf), 1863 1719 .rx_hs_g2_prep_sync_len_cap = PREP_LEN(0xf), 1864 1720 .rx_hs_g3_prep_sync_len_cap = PREP_LEN(0xf), 1865 - .pa_dbg_option_suite = 0x2E820183, 1721 + .pa_dbg_clk_period_off = PA_DBG_CLK_PERIOD, 1722 + .pa_dbg_opt_suite1_val = 0x2E820183, 1723 + .pa_dbg_opt_suite1_off = PA_DBG_OPTION_SUITE, 1866 1724 }; 1867 1725 1868 1726 static const struct exynos_ufs_drv_data 
fsd_ufs_drvs = { ··· 1883 1737 .pre_pwr_change = fsd_ufs_pre_pwr_change, 1884 1738 }; 1885 1739 1740 + static const struct exynos_ufs_drv_data gs101_ufs_drvs = { 1741 + .uic_attr = &gs101_uic_attr, 1742 + .quirks = UFSHCD_QUIRK_PRDT_BYTE_GRAN | 1743 + UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR | 1744 + UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR | 1745 + UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR | 1746 + UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL | 1747 + UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING, 1748 + .opts = EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL | 1749 + EXYNOS_UFS_OPT_SKIP_CONFIG_PHY_ATTR | 1750 + EXYNOS_UFS_OPT_UFSPR_SECURE | 1751 + EXYNOS_UFS_OPT_TIMER_TICK_SELECT, 1752 + .drv_init = exynosauto_ufs_drv_init, 1753 + .pre_link = gs101_ufs_pre_link, 1754 + .post_link = gs101_ufs_post_link, 1755 + .pre_pwr_change = gs101_ufs_pre_pwr_change, 1756 + }; 1757 + 1886 1758 static const struct of_device_id exynos_ufs_of_match[] = { 1759 + { .compatible = "google,gs101-ufs", 1760 + .data = &gs101_ufs_drvs }, 1887 1761 { .compatible = "samsung,exynos7-ufs", 1888 1762 .data = &exynos_ufs_drvs }, 1889 1763 { .compatible = "samsung,exynosautov9-ufs", ··· 1914 1748 .data = &fsd_ufs_drvs }, 1915 1749 {}, 1916 1750 }; 1751 + MODULE_DEVICE_TABLE(of, exynos_ufs_of_match); 1917 1752 1918 1753 static const struct dev_pm_ops exynos_ufs_pm_ops = { 1919 1754 SET_SYSTEM_SLEEP_PM_OPS(ufshcd_system_suspend, ufshcd_system_resume)
+22 -2
drivers/ufs/host/ufs-exynos.h
··· 10 10 #define _UFS_EXYNOS_H_ 11 11 12 12 /* 13 + * Component registers 14 + */ 15 + 16 + #define COMP_CLK_PERIOD 0x44 17 + 18 + /* 13 19 * UNIPRO registers 14 20 */ 15 21 #define UNIPRO_DBG_FORCE_DME_CTRL_STATE 0x150 ··· 34 28 #define PA_DBG_LINE_RESET_REQ 0x9543 35 29 #define PA_DBG_OPTION_SUITE 0x9564 36 30 #define PA_DBG_OPTION_SUITE_DYN 0x9565 31 + 32 + /* 33 + * Note: GS101_DBG_OPTION offsets below differ from the TRM 34 + * but match the downstream driver. Following the TRM 35 + * results in non-functioning UFS. 36 + */ 37 + #define PA_GS101_DBG_OPTION_SUITE1 0x956a 38 + #define PA_GS101_DBG_OPTION_SUITE2 0x956d 37 39 38 40 /* 39 41 * MIBs for Transport Layer debug registers ··· 130 116 #define PA_HIBERN8TIME_VAL 0x20 131 117 132 118 #define PCLK_AVAIL_MIN 70000000 133 - #define PCLK_AVAIL_MAX 167000000 119 + #define PCLK_AVAIL_MAX 267000000 134 120 135 121 struct exynos_ufs_uic_attr { 136 122 /* TX Attributes */ ··· 159 145 /* Common Attributes */ 160 146 unsigned int cmn_pwm_clk_ctrl; 161 147 /* Internal Attributes */ 162 - unsigned int pa_dbg_option_suite; 148 + unsigned int pa_dbg_clk_period_off; 149 + unsigned int pa_dbg_opt_suite1_val; 150 + unsigned int pa_dbg_opt_suite1_off; 151 + unsigned int pa_dbg_opt_suite2_val; 152 + unsigned int pa_dbg_opt_suite2_off; 163 153 /* Changeable Attributes */ 164 154 unsigned int rx_adv_fine_gran_sup_en; 165 155 unsigned int rx_adv_fine_gran_step; ··· 239 221 #define EXYNOS_UFS_OPT_BROKEN_RX_SEL_IDX BIT(3) 240 222 #define EXYNOS_UFS_OPT_USE_SW_HIBERN8_TIMER BIT(4) 241 223 #define EXYNOS_UFS_OPT_SKIP_CONFIG_PHY_ATTR BIT(5) 224 + #define EXYNOS_UFS_OPT_UFSPR_SECURE BIT(6) 225 + #define EXYNOS_UFS_OPT_TIMER_TICK_SELECT BIT(7) 242 226 }; 243 227 244 228 #define for_each_ufs_rx_lane(ufs, i) \
+94
drivers/ufs/host/ufs-mediatek-sip.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2022 MediaTek Inc. 4 + */ 5 + 6 + #ifndef _UFS_MEDIATEK_SIP_H 7 + #define _UFS_MEDIATEK_SIP_H 8 + 9 + #include <linux/soc/mediatek/mtk_sip_svc.h> 10 + 11 + /* 12 + * SiP (Slicon Partner) commands 13 + */ 14 + #define MTK_SIP_UFS_CONTROL MTK_SIP_SMC_CMD(0x276) 15 + #define UFS_MTK_SIP_VA09_PWR_CTRL BIT(0) 16 + #define UFS_MTK_SIP_DEVICE_RESET BIT(1) 17 + #define UFS_MTK_SIP_CRYPTO_CTRL BIT(2) 18 + #define UFS_MTK_SIP_REF_CLK_NOTIFICATION BIT(3) 19 + #define UFS_MTK_SIP_SRAM_PWR_CTRL BIT(5) 20 + #define UFS_MTK_SIP_GET_VCC_NUM BIT(6) 21 + #define UFS_MTK_SIP_DEVICE_PWR_CTRL BIT(7) 22 + #define UFS_MTK_SIP_MPHY_CTRL BIT(8) 23 + #define UFS_MTK_SIP_MTCMOS_CTRL BIT(9) 24 + 25 + /* 26 + * Multi-VCC by Numbering 27 + */ 28 + enum ufs_mtk_vcc_num { 29 + UFS_VCC_NONE = 0, 30 + UFS_VCC_1, 31 + UFS_VCC_2, 32 + UFS_VCC_MAX 33 + }; 34 + 35 + enum ufs_mtk_mphy_op { 36 + UFS_MPHY_BACKUP = 0, 37 + UFS_MPHY_RESTORE 38 + }; 39 + 40 + /* 41 + * SMC call wrapper function 42 + */ 43 + struct ufs_mtk_smc_arg { 44 + unsigned long cmd; 45 + struct arm_smccc_res *res; 46 + unsigned long v1; 47 + unsigned long v2; 48 + unsigned long v3; 49 + unsigned long v4; 50 + unsigned long v5; 51 + unsigned long v6; 52 + unsigned long v7; 53 + }; 54 + 55 + 56 + static inline void _ufs_mtk_smc(struct ufs_mtk_smc_arg s) 57 + { 58 + arm_smccc_smc(MTK_SIP_UFS_CONTROL, 59 + s.cmd, 60 + s.v1, s.v2, s.v3, s.v4, s.v5, s.v6, s.res); 61 + } 62 + 63 + #define ufs_mtk_smc(...) 
\ 64 + _ufs_mtk_smc((struct ufs_mtk_smc_arg) {__VA_ARGS__}) 65 + 66 + /* Sip kernel interface */ 67 + #define ufs_mtk_va09_pwr_ctrl(res, on) \ 68 + ufs_mtk_smc(UFS_MTK_SIP_VA09_PWR_CTRL, &(res), on) 69 + 70 + #define ufs_mtk_crypto_ctrl(res, enable) \ 71 + ufs_mtk_smc(UFS_MTK_SIP_CRYPTO_CTRL, &(res), enable) 72 + 73 + #define ufs_mtk_ref_clk_notify(on, stage, res) \ 74 + ufs_mtk_smc(UFS_MTK_SIP_REF_CLK_NOTIFICATION, &(res), on, stage) 75 + 76 + #define ufs_mtk_device_reset_ctrl(high, res) \ 77 + ufs_mtk_smc(UFS_MTK_SIP_DEVICE_RESET, &(res), high) 78 + 79 + #define ufs_mtk_sram_pwr_ctrl(on, res) \ 80 + ufs_mtk_smc(UFS_MTK_SIP_SRAM_PWR_CTRL, &(res), on) 81 + 82 + #define ufs_mtk_get_vcc_num(res) \ 83 + ufs_mtk_smc(UFS_MTK_SIP_GET_VCC_NUM, &(res)) 84 + 85 + #define ufs_mtk_device_pwr_ctrl(on, ufs_version, res) \ 86 + ufs_mtk_smc(UFS_MTK_SIP_DEVICE_PWR_CTRL, &(res), on, ufs_version) 87 + 88 + #define ufs_mtk_mphy_ctrl(op, res) \ 89 + ufs_mtk_smc(UFS_MTK_SIP_MPHY_CTRL, &(res), op) 90 + 91 + #define ufs_mtk_mtcmos_ctrl(op, res) \ 92 + ufs_mtk_smc(UFS_MTK_SIP_MTCMOS_CTRL, &(res), op) 93 + 94 + #endif /* !_UFS_MEDIATEK_SIP_H */
+116 -15
drivers/ufs/host/ufs-mediatek.c
··· 19 19 #include <linux/platform_device.h> 20 20 #include <linux/regulator/consumer.h> 21 21 #include <linux/reset.h> 22 - #include <linux/soc/mediatek/mtk_sip_svc.h> 23 22 24 23 #include <ufs/ufshcd.h> 25 24 #include "ufshcd-pltfrm.h" 26 25 #include <ufs/ufs_quirks.h> 27 26 #include <ufs/unipro.h> 27 + 28 28 #include "ufs-mediatek.h" 29 + #include "ufs-mediatek-sip.h" 29 30 30 31 static int ufs_mtk_config_mcq(struct ufs_hba *hba, bool irq); 31 32 ··· 52 51 { .compatible = "mediatek,mt8183-ufshci" }, 53 52 {}, 54 53 }; 54 + MODULE_DEVICE_TABLE(of, ufs_mtk_of_match); 55 55 56 56 /* 57 57 * Details of UIC Errors ··· 120 118 return !!(host->caps & UFS_MTK_CAP_PMC_VIA_FASTAUTO); 121 119 } 122 120 121 + static bool ufs_mtk_is_tx_skew_fix(struct ufs_hba *hba) 122 + { 123 + struct ufs_mtk_host *host = ufshcd_get_variant(hba); 124 + 125 + return (host->caps & UFS_MTK_CAP_TX_SKEW_FIX); 126 + } 127 + 128 + static bool ufs_mtk_is_rtff_mtcmos(struct ufs_hba *hba) 129 + { 130 + struct ufs_mtk_host *host = ufshcd_get_variant(hba); 131 + 132 + return (host->caps & UFS_MTK_CAP_RTFF_MTCMOS); 133 + } 134 + 135 + static bool ufs_mtk_is_allow_vccqx_lpm(struct ufs_hba *hba) 136 + { 137 + struct ufs_mtk_host *host = ufshcd_get_variant(hba); 138 + 139 + return (host->caps & UFS_MTK_CAP_ALLOW_VCCQX_LPM); 140 + } 141 + 123 142 static void ufs_mtk_cfg_unipro_cg(struct ufs_hba *hba, bool enable) 124 143 { 125 144 u32 tmp; ··· 192 169 static void ufs_mtk_host_reset(struct ufs_hba *hba) 193 170 { 194 171 struct ufs_mtk_host *host = ufshcd_get_variant(hba); 172 + struct arm_smccc_res res; 195 173 196 174 reset_control_assert(host->hci_reset); 197 175 reset_control_assert(host->crypto_reset); 198 176 reset_control_assert(host->unipro_reset); 177 + reset_control_assert(host->mphy_reset); 199 178 200 179 usleep_range(100, 110); 201 180 202 181 reset_control_deassert(host->unipro_reset); 203 182 reset_control_deassert(host->crypto_reset); 204 183 reset_control_deassert(host->hci_reset); 184 + 
reset_control_deassert(host->mphy_reset); 185 + 186 + /* restore mphy setting aftre mphy reset */ 187 + if (host->mphy_reset) 188 + ufs_mtk_mphy_ctrl(UFS_MPHY_RESTORE, res); 205 189 } 206 190 207 191 static void ufs_mtk_init_reset_control(struct ufs_hba *hba, ··· 233 203 "unipro_rst"); 234 204 ufs_mtk_init_reset_control(hba, &host->crypto_reset, 235 205 "crypto_rst"); 206 + ufs_mtk_init_reset_control(hba, &host->mphy_reset, 207 + "mphy_rst"); 236 208 } 237 209 238 210 static int ufs_mtk_hce_enable_notify(struct ufs_hba *hba, ··· 654 622 if (of_property_read_bool(np, "mediatek,ufs-pmc-via-fastauto")) 655 623 host->caps |= UFS_MTK_CAP_PMC_VIA_FASTAUTO; 656 624 625 + if (of_property_read_bool(np, "mediatek,ufs-tx-skew-fix")) 626 + host->caps |= UFS_MTK_CAP_TX_SKEW_FIX; 627 + 628 + if (of_property_read_bool(np, "mediatek,ufs-disable-mcq")) 629 + host->caps |= UFS_MTK_CAP_DISABLE_MCQ; 630 + 631 + if (of_property_read_bool(np, "mediatek,ufs-rtff-mtcmos")) 632 + host->caps |= UFS_MTK_CAP_RTFF_MTCMOS; 633 + 657 634 dev_info(hba->dev, "caps: 0x%x", host->caps); 658 635 } 659 636 ··· 926 885 host->mcq_nr_intr = UFSHCD_MAX_Q_NR; 927 886 pdev = container_of(hba->dev, struct platform_device, dev); 928 887 888 + if (host->caps & UFS_MTK_CAP_DISABLE_MCQ) 889 + goto failed; 890 + 929 891 for (i = 0; i < host->mcq_nr_intr; i++) { 930 892 /* irq index 0 is legacy irq, sq/cq irq start from index 1 */ 931 893 irq = platform_get_irq(pdev, i + 1); ··· 967 923 struct ufs_mtk_host *host; 968 924 struct Scsi_Host *shost = hba->host; 969 925 int err = 0; 926 + struct arm_smccc_res res; 970 927 971 928 host = devm_kzalloc(dev, sizeof(*host), GFP_KERNEL); 972 929 if (!host) { ··· 995 950 goto out_variant_clear; 996 951 997 952 ufs_mtk_init_reset(hba); 953 + 954 + /* backup mphy setting if mphy can reset */ 955 + if (host->mphy_reset) 956 + ufs_mtk_mphy_ctrl(UFS_MPHY_BACKUP, res); 998 957 999 958 /* Enable runtime autosuspend */ 1000 959 hba->caps |= UFSHCD_CAP_RPM_AUTOSUSPEND; ··· 1036 987 * 
Enable phy clocks specifically here. 1037 988 */ 1038 989 ufs_mtk_mphy_power_on(hba, true); 990 + 991 + if (ufs_mtk_is_rtff_mtcmos(hba)) { 992 + /* First Restore here, to avoid backup unexpected value */ 993 + ufs_mtk_mtcmos_ctrl(false, res); 994 + 995 + /* Power on to init */ 996 + ufs_mtk_mtcmos_ctrl(true, res); 997 + } 998 + 1039 999 ufs_mtk_setup_clocks(hba, true, POST_CHANGE); 1040 1000 1041 1001 host->ip_ver = ufshcd_readl(hba, REG_UFS_MTK_IP_VER); ··· 1361 1303 1362 1304 static void ufs_mtk_dev_vreg_set_lpm(struct ufs_hba *hba, bool lpm) 1363 1305 { 1364 - if (!hba->vreg_info.vccq && !hba->vreg_info.vccq2) 1365 - return; 1306 + bool skip_vccqx = false; 1366 1307 1367 - /* Skip if VCC is assumed always-on */ 1368 - if (!hba->vreg_info.vcc) 1369 - return; 1370 - 1371 - /* Bypass LPM when device is still active */ 1308 + /* Prevent entering LPM when device is still active */ 1372 1309 if (lpm && ufshcd_is_ufs_dev_active(hba)) 1373 1310 return; 1374 1311 1375 - /* Bypass LPM if VCC is enabled */ 1376 - if (lpm && hba->vreg_info.vcc->enabled) 1377 - return; 1312 + /* Skip vccqx lpm control and control vsx only */ 1313 + if (!hba->vreg_info.vccq && !hba->vreg_info.vccq2) 1314 + skip_vccqx = true; 1315 + 1316 + /* VCC is always-on, control vsx only */ 1317 + if (!hba->vreg_info.vcc) 1318 + skip_vccqx = true; 1319 + 1320 + /* Broken vcc keep vcc always on, most case control vsx only */ 1321 + if (lpm && hba->vreg_info.vcc && hba->vreg_info.vcc->enabled) { 1322 + /* Some device vccqx/vsx can enter lpm */ 1323 + if (ufs_mtk_is_allow_vccqx_lpm(hba)) 1324 + skip_vccqx = false; 1325 + else /* control vsx only */ 1326 + skip_vccqx = true; 1327 + } 1378 1328 1379 1329 if (lpm) { 1380 - ufs_mtk_vccqx_set_lpm(hba, lpm); 1330 + if (!skip_vccqx) 1331 + ufs_mtk_vccqx_set_lpm(hba, lpm); 1381 1332 ufs_mtk_vsx_set_lpm(hba, lpm); 1382 1333 } else { 1383 1334 ufs_mtk_vsx_set_lpm(hba, lpm); 1384 - ufs_mtk_vccqx_set_lpm(hba, lpm); 1335 + if (!skip_vccqx) 1336 + 
ufs_mtk_vccqx_set_lpm(hba, lpm); 1385 1337 } 1386 1338 } 1387 1339 ··· 1442 1374 if (ufshcd_is_link_off(hba)) 1443 1375 ufs_mtk_device_reset_ctrl(0, res); 1444 1376 1445 - ufs_mtk_host_pwr_ctrl(HOST_PWR_HCI, false, res); 1377 + ufs_mtk_sram_pwr_ctrl(false, res); 1446 1378 1447 1379 return 0; 1448 1380 fail: ··· 1463 1395 if (hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL) 1464 1396 ufs_mtk_dev_vreg_set_lpm(hba, false); 1465 1397 1466 - ufs_mtk_host_pwr_ctrl(HOST_PWR_HCI, true, res); 1398 + ufs_mtk_sram_pwr_ctrl(true, res); 1467 1399 1468 1400 err = ufs_mtk_mphy_power_on(hba, true); 1469 1401 if (err) ··· 1506 1438 if (mid == UFS_VENDOR_SAMSUNG) { 1507 1439 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TACTIVATE), 6); 1508 1440 ufshcd_dme_set(hba, UIC_ARG_MIB(PA_HIBERN8TIME), 10); 1441 + } else if (mid == UFS_VENDOR_MICRON) { 1442 + /* Only for the host which have TX skew issue */ 1443 + if (ufs_mtk_is_tx_skew_fix(hba) && 1444 + (STR_PRFX_EQUAL("MT128GBCAV2U31", dev_info->model) || 1445 + STR_PRFX_EQUAL("MT256GBCAV4U31", dev_info->model) || 1446 + STR_PRFX_EQUAL("MT512GBCAV8U31", dev_info->model) || 1447 + STR_PRFX_EQUAL("MT256GBEAX4U40", dev_info->model) || 1448 + STR_PRFX_EQUAL("MT512GAYAX4U40", dev_info->model) || 1449 + STR_PRFX_EQUAL("MT001TAYAX8U40", dev_info->model))) { 1450 + ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TACTIVATE), 8); 1451 + } 1509 1452 } 1510 1453 1511 1454 /* ··· 1658 1579 1659 1580 static int ufs_mtk_get_hba_mac(struct ufs_hba *hba) 1660 1581 { 1582 + struct ufs_mtk_host *host = ufshcd_get_variant(hba); 1583 + 1584 + /* MCQ operation not permitted */ 1585 + if (host->caps & UFS_MTK_CAP_DISABLE_MCQ) 1586 + return -EPERM; 1587 + 1661 1588 return MAX_SUPP_MAC; 1662 1589 } 1663 1590 ··· 1875 1790 static int ufs_mtk_system_suspend(struct device *dev) 1876 1791 { 1877 1792 struct ufs_hba *hba = dev_get_drvdata(dev); 1793 + struct arm_smccc_res res; 1878 1794 int ret; 1879 1795 1880 1796 ret = ufshcd_system_suspend(dev); ··· 1884 1798 1885 1799 
ufs_mtk_dev_vreg_set_lpm(hba, true); 1886 1800 1801 + if (ufs_mtk_is_rtff_mtcmos(hba)) 1802 + ufs_mtk_mtcmos_ctrl(false, res); 1803 + 1887 1804 return 0; 1888 1805 } 1889 1806 1890 1807 static int ufs_mtk_system_resume(struct device *dev) 1891 1808 { 1892 1809 struct ufs_hba *hba = dev_get_drvdata(dev); 1810 + struct arm_smccc_res res; 1893 1811 1894 1812 ufs_mtk_dev_vreg_set_lpm(hba, false); 1813 + 1814 + if (ufs_mtk_is_rtff_mtcmos(hba)) 1815 + ufs_mtk_mtcmos_ctrl(true, res); 1895 1816 1896 1817 return ufshcd_system_resume(dev); 1897 1818 } ··· 1908 1815 static int ufs_mtk_runtime_suspend(struct device *dev) 1909 1816 { 1910 1817 struct ufs_hba *hba = dev_get_drvdata(dev); 1818 + struct arm_smccc_res res; 1911 1819 int ret = 0; 1912 1820 1913 1821 ret = ufshcd_runtime_suspend(dev); ··· 1917 1823 1918 1824 ufs_mtk_dev_vreg_set_lpm(hba, true); 1919 1825 1826 + if (ufs_mtk_is_rtff_mtcmos(hba)) 1827 + ufs_mtk_mtcmos_ctrl(false, res); 1828 + 1920 1829 return 0; 1921 1830 } 1922 1831 1923 1832 static int ufs_mtk_runtime_resume(struct device *dev) 1924 1833 { 1925 1834 struct ufs_hba *hba = dev_get_drvdata(dev); 1835 + struct arm_smccc_res res; 1836 + 1837 + if (ufs_mtk_is_rtff_mtcmos(hba)) 1838 + ufs_mtk_mtcmos_ctrl(true, res); 1926 1839 1927 1840 ufs_mtk_dev_vreg_set_lpm(hba, false); 1928 1841
+11 -79
drivers/ufs/host/ufs-mediatek.h
··· 7 7 #define _UFS_MEDIATEK_H 8 8 9 9 #include <linux/bitops.h> 10 - #include <linux/soc/mediatek/mtk_sip_svc.h> 11 10 12 11 /* 13 12 * MCQ define and struct ··· 99 100 }; 100 101 101 102 /* 102 - * SiP commands 103 - */ 104 - #define MTK_SIP_UFS_CONTROL MTK_SIP_SMC_CMD(0x276) 105 - #define UFS_MTK_SIP_VA09_PWR_CTRL BIT(0) 106 - #define UFS_MTK_SIP_DEVICE_RESET BIT(1) 107 - #define UFS_MTK_SIP_CRYPTO_CTRL BIT(2) 108 - #define UFS_MTK_SIP_REF_CLK_NOTIFICATION BIT(3) 109 - #define UFS_MTK_SIP_HOST_PWR_CTRL BIT(5) 110 - #define UFS_MTK_SIP_GET_VCC_NUM BIT(6) 111 - #define UFS_MTK_SIP_DEVICE_PWR_CTRL BIT(7) 112 - 113 - /* 114 103 * VS_DEBUGCLOCKENABLE 115 104 */ 116 105 enum { ··· 122 135 UFS_MTK_CAP_VA09_PWR_CTRL = 1 << 1, 123 136 UFS_MTK_CAP_DISABLE_AH8 = 1 << 2, 124 137 UFS_MTK_CAP_BROKEN_VCC = 1 << 3, 138 + 139 + /* 140 + * Override UFS_MTK_CAP_BROKEN_VCC's behavior to 141 + * allow vccqx upstream to enter LPM 142 + */ 143 + UFS_MTK_CAP_ALLOW_VCCQX_LPM = 1 << 5, 125 144 UFS_MTK_CAP_PMC_VIA_FASTAUTO = 1 << 6, 145 + UFS_MTK_CAP_TX_SKEW_FIX = 1 << 7, 146 + UFS_MTK_CAP_DISABLE_MCQ = 1 << 8, 147 + /* Control MTCMOS with RTFF */ 148 + UFS_MTK_CAP_RTFF_MTCMOS = 1 << 9, 126 149 }; 127 150 128 151 struct ufs_mtk_crypt_cfg { ··· 167 170 struct reset_control *hci_reset; 168 171 struct reset_control *unipro_reset; 169 172 struct reset_control *crypto_reset; 173 + struct reset_control *mphy_reset; 170 174 struct ufs_hba *hba; 171 175 struct ufs_mtk_crypt_cfg *crypt; 172 176 struct ufs_mtk_clk mclk; ··· 188 190 189 191 /* MTK delay of autosuspend: 500 ms */ 190 192 #define MTK_RPM_AUTOSUSPEND_DELAY_MS 500 191 - 192 - /* 193 - * Multi-VCC by Numbering 194 - */ 195 - enum ufs_mtk_vcc_num { 196 - UFS_VCC_NONE = 0, 197 - UFS_VCC_1, 198 - UFS_VCC_2, 199 - UFS_VCC_MAX 200 - }; 201 - 202 - /* 203 - * Host Power Control options 204 - */ 205 - enum { 206 - HOST_PWR_HCI = 0, 207 - HOST_PWR_MPHY 208 - }; 209 - 210 - /* 211 - * SMC call wrapper function 212 - */ 213 - struct 
ufs_mtk_smc_arg { 214 - unsigned long cmd; 215 - struct arm_smccc_res *res; 216 - unsigned long v1; 217 - unsigned long v2; 218 - unsigned long v3; 219 - unsigned long v4; 220 - unsigned long v5; 221 - unsigned long v6; 222 - unsigned long v7; 223 - }; 224 - 225 - static void _ufs_mtk_smc(struct ufs_mtk_smc_arg s) 226 - { 227 - arm_smccc_smc(MTK_SIP_UFS_CONTROL, 228 - s.cmd, s.v1, s.v2, s.v3, s.v4, s.v5, s.v6, s.res); 229 - } 230 - 231 - #define ufs_mtk_smc(...) \ 232 - _ufs_mtk_smc((struct ufs_mtk_smc_arg) {__VA_ARGS__}) 233 - 234 - /* 235 - * SMC call interface 236 - */ 237 - #define ufs_mtk_va09_pwr_ctrl(res, on) \ 238 - ufs_mtk_smc(UFS_MTK_SIP_VA09_PWR_CTRL, &(res), on) 239 - 240 - #define ufs_mtk_crypto_ctrl(res, enable) \ 241 - ufs_mtk_smc(UFS_MTK_SIP_CRYPTO_CTRL, &(res), enable) 242 - 243 - #define ufs_mtk_ref_clk_notify(on, stage, res) \ 244 - ufs_mtk_smc(UFS_MTK_SIP_REF_CLK_NOTIFICATION, &(res), on, stage) 245 - 246 - #define ufs_mtk_device_reset_ctrl(high, res) \ 247 - ufs_mtk_smc(UFS_MTK_SIP_DEVICE_RESET, &(res), high) 248 - 249 - #define ufs_mtk_host_pwr_ctrl(opt, on, res) \ 250 - ufs_mtk_smc(UFS_MTK_SIP_HOST_PWR_CTRL, &(res), opt, on) 251 - 252 - #define ufs_mtk_get_vcc_num(res) \ 253 - ufs_mtk_smc(UFS_MTK_SIP_GET_VCC_NUM, &(res)) 254 - 255 - #define ufs_mtk_device_pwr_ctrl(on, ufs_ver, res) \ 256 - ufs_mtk_smc(UFS_MTK_SIP_DEVICE_PWR_CTRL, &(res), on, ufs_ver) 257 193 258 194 #endif /* !_UFS_MEDIATEK_H */
+13 -12
drivers/ufs/host/ufs-qcom.c
··· 284 284 285 285 if (host->hw_ver.major >= 0x05) 286 286 ufshcd_rmwl(host->hba, QUNIPRO_G4_SEL, 0, REG_UFS_CFG0); 287 - 288 - /* make sure above configuration is applied before we return */ 289 - mb(); 290 287 } 291 288 292 289 /* ··· 412 415 REG_UFS_CFG2); 413 416 414 417 /* Ensure that HW clock gating is enabled before next operations */ 415 - mb(); 418 + ufshcd_readl(hba, REG_UFS_CFG2); 416 419 } 417 420 418 421 static int ufs_qcom_hce_enable_notify(struct ufs_hba *hba, ··· 504 507 * make sure above write gets applied before we return from 505 508 * this function. 506 509 */ 507 - mb(); 510 + ufshcd_readl(hba, REG_UFS_SYS1CLK_1US); 508 511 } 509 512 510 513 return 0; ··· 534 537 * and device TX LCC are disabled once link startup is 535 538 * completed. 536 539 */ 537 - if (ufshcd_get_local_unipro_ver(hba) != UFS_UNIPRO_VER_1_41) 538 - err = ufshcd_disable_host_tx_lcc(hba); 540 + err = ufshcd_disable_host_tx_lcc(hba); 539 541 540 542 break; 541 543 default: ··· 691 695 struct ufs_pa_layer_attr *p = &host->dev_req_params; 692 696 int gear = max_t(u32, p->gear_rx, p->gear_tx); 693 697 int lane = max_t(u32, p->lane_rx, p->lane_tx); 698 + 699 + if (WARN_ONCE(gear > QCOM_UFS_MAX_GEAR, 700 + "ICC scaling for UFS Gear (%d) not supported. Using Gear (%d) bandwidth\n", 701 + gear, QCOM_UFS_MAX_GEAR)) 702 + gear = QCOM_UFS_MAX_GEAR; 703 + 704 + if (WARN_ONCE(lane > QCOM_UFS_MAX_LANE, 705 + "ICC scaling for UFS Lane (%d) not supported. Using Lane (%d) bandwidth\n", 706 + lane, QCOM_UFS_MAX_LANE)) 707 + lane = QCOM_UFS_MAX_LANE; 694 708 695 709 if (ufshcd_is_hs_mode(p)) { 696 710 if (p->hs_rate == PA_HS_MODE_B) ··· 1457 1451 (u32)host->testbus.select_minor << offset, 1458 1452 reg); 1459 1453 ufs_qcom_enable_test_bus(host); 1460 - /* 1461 - * Make sure the test bus configuration is 1462 - * committed before returning. 1463 - */ 1464 - mb(); 1465 1454 1466 1455 return 0; 1467 1456 }
+6 -6
drivers/ufs/host/ufs-qcom.h
··· 151 151 ufshcd_rmwl(hba, UFS_PHY_SOFT_RESET, UFS_PHY_SOFT_RESET, REG_UFS_CFG1); 152 152 153 153 /* 154 - * Make sure assertion of ufs phy reset is written to 155 - * register before returning 154 + * Dummy read to ensure the write takes effect before doing any sort 155 + * of delay 156 156 */ 157 - mb(); 157 + ufshcd_readl(hba, REG_UFS_CFG1); 158 158 } 159 159 160 160 static inline void ufs_qcom_deassert_reset(struct ufs_hba *hba) ··· 162 162 ufshcd_rmwl(hba, UFS_PHY_SOFT_RESET, 0, REG_UFS_CFG1); 163 163 164 164 /* 165 - * Make sure de-assertion of ufs phy reset is written to 166 - * register before returning 165 + * Dummy read to ensure the write takes effect before doing any sort 166 + * of delay 167 167 */ 168 - mb(); 168 + ufshcd_readl(hba, REG_UFS_CFG1); 169 169 } 170 170 171 171 /* Host controller hardware version: major.minor.step */
+1 -7
drivers/usb/image/microtek.c
··· 328 328 return 0; 329 329 } 330 330 331 - static int mts_slave_configure (struct scsi_device *s) 332 - { 333 - blk_queue_dma_alignment(s->request_queue, (512 - 1)); 334 - return 0; 335 - } 336 - 337 331 static int mts_scsi_abort(struct scsi_cmnd *srb) 338 332 { 339 333 struct mts_desc* desc = (struct mts_desc*)(srb->device->host->hostdata[0]); ··· 625 631 .can_queue = 1, 626 632 .this_id = -1, 627 633 .emulated = 1, 634 + .dma_alignment = 511, 628 635 .slave_alloc = mts_slave_alloc, 629 - .slave_configure = mts_slave_configure, 630 636 .max_sectors= 256, /* 128 K */ 631 637 }; 632 638
+26 -31
drivers/usb/storage/scsiglue.c
··· 40 40 #include <scsi/scsi_eh.h> 41 41 42 42 #include "usb.h" 43 - #include <linux/usb/hcd.h> 44 43 #include "scsiglue.h" 45 44 #include "debug.h" 46 45 #include "transport.h" ··· 75 76 */ 76 77 sdev->inquiry_len = 36; 77 78 78 - /* 79 - * Some host controllers may have alignment requirements. 80 - * We'll play it safe by requiring 512-byte alignment always. 81 - */ 82 - blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1)); 83 - 84 79 /* Tell the SCSI layer if we know there is more than one LUN */ 85 80 if (us->protocol == USB_PR_BULK && us->max_lun > 0) 86 81 sdev->sdev_bflags |= BLIST_FORCELUN; ··· 82 89 return 0; 83 90 } 84 91 85 - static int slave_configure(struct scsi_device *sdev) 92 + static int device_configure(struct scsi_device *sdev, struct queue_limits *lim) 86 93 { 87 94 struct us_data *us = host_to_us(sdev->host); 88 95 struct device *dev = us->pusb_dev->bus->sysdev; ··· 97 104 98 105 if (us->fflags & US_FL_MAX_SECTORS_MIN) 99 106 max_sectors = PAGE_SIZE >> 9; 100 - if (queue_max_hw_sectors(sdev->request_queue) > max_sectors) 101 - blk_queue_max_hw_sectors(sdev->request_queue, 102 - max_sectors); 107 + lim->max_hw_sectors = min(lim->max_hw_sectors, max_sectors); 103 108 } else if (sdev->type == TYPE_TAPE) { 104 109 /* 105 110 * Tapes need much higher max_sector limits, so just 106 111 * raise it to the maximum possible (4 GB / 512) and 107 112 * let the queue segment size sort out the real limit. 108 113 */ 109 - blk_queue_max_hw_sectors(sdev->request_queue, 0x7FFFFF); 114 + lim->max_hw_sectors = 0x7FFFFF; 110 115 } else if (us->pusb_dev->speed >= USB_SPEED_SUPER) { 111 116 /* 112 117 * USB3 devices will be limited to 2048 sectors. This gives us 113 118 * better throughput on most devices. 114 119 */ 115 - blk_queue_max_hw_sectors(sdev->request_queue, 2048); 120 + lim->max_hw_sectors = 2048; 116 121 } 117 122 118 123 /* 119 124 * The max_hw_sectors should be up to maximum size of a mapping for 120 125 * the device. 
Otherwise, a DMA API might fail on swiotlb environment. 121 126 */ 122 - blk_queue_max_hw_sectors(sdev->request_queue, 123 - min_t(size_t, queue_max_hw_sectors(sdev->request_queue), 124 - dma_max_mapping_size(dev) >> SECTOR_SHIFT)); 125 - 126 - /* 127 - * Some USB host controllers can't do DMA; they have to use PIO. 128 - * For such controllers we need to make sure the block layer sets 129 - * up bounce buffers in addressable memory. 130 - */ 131 - if (!hcd_uses_dma(bus_to_hcd(us->pusb_dev->bus)) || 132 - (bus_to_hcd(us->pusb_dev->bus)->localmem_pool != NULL)) 133 - blk_queue_bounce_limit(sdev->request_queue, BLK_BOUNCE_HIGH); 127 + lim->max_hw_sectors = min_t(size_t, 128 + lim->max_hw_sectors, dma_max_mapping_size(dev) >> SECTOR_SHIFT); 134 129 135 130 /* 136 131 * We can't put these settings in slave_alloc() because that gets ··· 579 598 size_t count) 580 599 { 581 600 struct scsi_device *sdev = to_scsi_device(dev); 601 + struct queue_limits lim; 582 602 unsigned short ms; 603 + int ret; 583 604 584 - if (sscanf(buf, "%hu", &ms) > 0) { 585 - blk_queue_max_hw_sectors(sdev->request_queue, ms); 586 - return count; 587 - } 588 - return -EINVAL; 605 + if (sscanf(buf, "%hu", &ms) <= 0) 606 + return -EINVAL; 607 + 608 + blk_mq_freeze_queue(sdev->request_queue); 609 + lim = queue_limits_start_update(sdev->request_queue); 610 + lim.max_hw_sectors = ms; 611 + ret = queue_limits_commit_update(sdev->request_queue, &lim); 612 + blk_mq_unfreeze_queue(sdev->request_queue); 613 + 614 + if (ret) 615 + return ret; 616 + return count; 589 617 } 590 618 static DEVICE_ATTR_RW(max_sectors); 591 619 ··· 632 642 .this_id = -1, 633 643 634 644 .slave_alloc = slave_alloc, 635 - .slave_configure = slave_configure, 645 + .device_configure = device_configure, 636 646 .target_alloc = target_alloc, 637 647 638 648 /* lots of sg segments can be handled */ 639 649 .sg_tablesize = SG_MAX_SEGMENTS, 640 650 651 + /* 652 + * Some host controllers may have alignment requirements. 
653 + * We'll play it safe by requiring 512-byte alignment always. 654 + */ 655 + .dma_alignment = 511, 641 656 642 657 /* 643 658 * Limit the total size of a transfer to 120 KB.
+14 -15
drivers/usb/storage/uas.c
··· 821 821 (struct uas_dev_info *)sdev->host->hostdata; 822 822 823 823 sdev->hostdata = devinfo; 824 - 825 - /* 826 - * The protocol has no requirements on alignment in the strict sense. 827 - * Controllers may or may not have alignment restrictions. 828 - * As this is not exported, we use an extremely conservative guess. 829 - */ 830 - blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1)); 831 - 832 - if (devinfo->flags & US_FL_MAX_SECTORS_64) 833 - blk_queue_max_hw_sectors(sdev->request_queue, 64); 834 - else if (devinfo->flags & US_FL_MAX_SECTORS_240) 835 - blk_queue_max_hw_sectors(sdev->request_queue, 240); 836 - 837 824 return 0; 838 825 } 839 826 840 - static int uas_slave_configure(struct scsi_device *sdev) 827 + static int uas_device_configure(struct scsi_device *sdev, 828 + struct queue_limits *lim) 841 829 { 842 830 struct uas_dev_info *devinfo = sdev->hostdata; 831 + 832 + if (devinfo->flags & US_FL_MAX_SECTORS_64) 833 + lim->max_hw_sectors = 64; 834 + else if (devinfo->flags & US_FL_MAX_SECTORS_240) 835 + lim->max_hw_sectors = 240; 843 836 844 837 if (devinfo->flags & US_FL_NO_REPORT_OPCODES) 845 838 sdev->no_report_opcodes = 1; ··· 898 905 .queuecommand = uas_queuecommand, 899 906 .target_alloc = uas_target_alloc, 900 907 .slave_alloc = uas_slave_alloc, 901 - .slave_configure = uas_slave_configure, 908 + .device_configure = uas_device_configure, 902 909 .eh_abort_handler = uas_eh_abort_handler, 903 910 .eh_device_reset_handler = uas_eh_device_reset_handler, 904 911 .this_id = -1, 905 912 .skip_settle_delay = 1, 913 + /* 914 + * The protocol has no requirements on alignment in the strict sense. 915 + * Controllers may or may not have alignment restrictions. 916 + * As this is not exported, we use an extremely conservative guess. 917 + */ 918 + .dma_alignment = 511, 906 919 .dma_boundary = PAGE_SIZE - 1, 907 920 .cmd_size = sizeof(struct uas_cmd_info), 908 921 };
+10
drivers/usb/storage/usb.c
··· 47 47 #include <scsi/scsi_device.h> 48 48 49 49 #include "usb.h" 50 + #include <linux/usb/hcd.h> 50 51 #include "scsiglue.h" 51 52 #include "transport.h" 52 53 #include "protocol.h" ··· 961 960 result = associate_dev(us, intf); 962 961 if (result) 963 962 goto BadDevice; 963 + 964 + /* 965 + * Some USB host controllers can't do DMA; they have to use PIO. 966 + * For such controllers we need to make sure the block layer sets 967 + * up bounce buffers in addressable memory. 968 + */ 969 + if (!hcd_uses_dma(bus_to_hcd(us->pusb_dev->bus)) || 970 + bus_to_hcd(us->pusb_dev->bus)->localmem_pool) 971 + host->no_highmem = true; 964 972 965 973 /* Get the unusual_devs entries and the descriptors */ 966 974 result = get_device_info(us, id, unusual_dev);
+13 -14
include/linux/blkdev.h
··· 897 897 struct queue_limits *lim); 898 898 int queue_limits_set(struct request_queue *q, struct queue_limits *lim); 899 899 900 + /** 901 + * queue_limits_cancel_update - cancel an atomic update of queue limits 902 + * @q: queue to update 903 + * 904 + * This functions cancels an atomic update of the queue limits started by 905 + * queue_limits_start_update() and should be used when an error occurs after 906 + * starting update. 907 + */ 908 + static inline void queue_limits_cancel_update(struct request_queue *q) 909 + { 910 + mutex_unlock(&q->limits_lock); 911 + } 912 + 900 913 /* 901 914 * Access functions for manipulating queue properties 902 915 */ 903 - void blk_queue_bounce_limit(struct request_queue *q, enum blk_bounce limit); 904 - extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int); 905 916 extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int); 906 - extern void blk_queue_max_segments(struct request_queue *, unsigned short); 907 - extern void blk_queue_max_discard_segments(struct request_queue *, 908 - unsigned short); 909 917 void blk_queue_max_secure_erase_sectors(struct request_queue *q, 910 918 unsigned int max_sectors); 911 - extern void blk_queue_max_segment_size(struct request_queue *, unsigned int); 912 919 extern void blk_queue_max_discard_sectors(struct request_queue *q, 913 920 unsigned int max_discard_sectors); 914 921 extern void blk_queue_max_write_zeroes_sectors(struct request_queue *q, ··· 932 925 extern void blk_limits_io_min(struct queue_limits *limits, unsigned int min); 933 926 extern void blk_queue_io_min(struct request_queue *q, unsigned int min); 934 927 extern void blk_limits_io_opt(struct queue_limits *limits, unsigned int opt); 935 - extern void blk_queue_io_opt(struct request_queue *q, unsigned int opt); 936 928 extern void blk_set_queue_depth(struct request_queue *q, unsigned int depth); 937 929 extern void blk_set_stacking_limits(struct queue_limits *lim); 938 930 extern int 
blk_stack_limits(struct queue_limits *t, struct queue_limits *b, ··· 939 933 void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev, 940 934 sector_t offset, const char *pfx); 941 935 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int); 942 - extern void blk_queue_segment_boundary(struct request_queue *, unsigned long); 943 - extern void blk_queue_virt_boundary(struct request_queue *, unsigned long); 944 - extern void blk_queue_dma_alignment(struct request_queue *, int); 945 - extern void blk_queue_update_dma_alignment(struct request_queue *, int); 946 936 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int); 947 937 extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua); 948 938 ··· 946 944 disk_alloc_independent_access_ranges(struct gendisk *disk, int nr_ia_ranges); 947 945 void disk_set_independent_access_ranges(struct gendisk *disk, 948 946 struct blk_independent_access_ranges *iars); 949 - 950 - extern bool blk_queue_can_use_dma_map_merging(struct request_queue *q, 951 - struct device *dev); 952 947 953 948 bool __must_check blk_get_queue(struct request_queue *); 954 949 extern void blk_put_queue(struct request_queue *);
+2 -1
include/linux/bsg-lib.h
··· 65 65 void bsg_job_done(struct bsg_job *job, int result, 66 66 unsigned int reply_payload_rcv_len); 67 67 struct request_queue *bsg_setup_queue(struct device *dev, const char *name, 68 - bsg_job_fn *job_fn, bsg_timeout_fn *timeout, int dd_job_size); 68 + struct queue_limits *lim, bsg_job_fn *job_fn, 69 + bsg_timeout_fn *timeout, int dd_job_size); 69 70 void bsg_remove_queue(struct request_queue *q); 70 71 void bsg_job_put(struct bsg_job *job); 71 72 int __must_check bsg_job_get(struct bsg_job *job);
+12 -4
include/linux/libata.h
··· 1152 1152 sector_t capacity, int geom[]); 1153 1153 extern void ata_scsi_unlock_native_capacity(struct scsi_device *sdev); 1154 1154 extern int ata_scsi_slave_alloc(struct scsi_device *sdev); 1155 - extern int ata_scsi_slave_config(struct scsi_device *sdev); 1155 + int ata_scsi_device_configure(struct scsi_device *sdev, 1156 + struct queue_limits *lim); 1156 1157 extern void ata_scsi_slave_destroy(struct scsi_device *sdev); 1157 1158 extern int ata_scsi_change_queue_depth(struct scsi_device *sdev, 1158 1159 int queue_depth); 1159 1160 extern int ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev, 1160 1161 int queue_depth); 1162 + extern int ata_ncq_prio_supported(struct ata_port *ap, struct scsi_device *sdev, 1163 + bool *supported); 1164 + extern int ata_ncq_prio_enabled(struct ata_port *ap, struct scsi_device *sdev, 1165 + bool *enabled); 1166 + extern int ata_ncq_prio_enable(struct ata_port *ap, struct scsi_device *sdev, 1167 + bool enable); 1161 1168 extern struct ata_device *ata_dev_pair(struct ata_device *adev); 1162 1169 extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev); 1163 1170 extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap); ··· 1251 1244 extern void ata_port_probe(struct ata_port *ap); 1252 1245 extern int ata_sas_tport_add(struct device *parent, struct ata_port *ap); 1253 1246 extern void ata_sas_tport_delete(struct ata_port *ap); 1254 - extern int ata_sas_slave_configure(struct scsi_device *, struct ata_port *); 1247 + int ata_sas_device_configure(struct scsi_device *sdev, struct queue_limits *lim, 1248 + struct ata_port *ap); 1255 1249 extern int ata_sas_queuecmd(struct scsi_cmnd *cmd, struct ata_port *ap); 1256 1250 extern void ata_tf_to_fis(const struct ata_taskfile *tf, 1257 1251 u8 pmp, int is_cmd, u8 *fis); ··· 1418 1410 __ATA_BASE_SHT(drv_name), \ 1419 1411 .can_queue = ATA_DEF_QUEUE, \ 1420 1412 .tag_alloc_policy = BLK_TAG_ALLOC_RR, \ 1421 - 
.slave_configure = ata_scsi_slave_config 1413 + .device_configure = ata_scsi_device_configure 1422 1414 1423 1415 #define ATA_SUBBASE_SHT_QD(drv_name, drv_qd) \ 1424 1416 __ATA_BASE_SHT(drv_name), \ 1425 1417 .can_queue = drv_qd, \ 1426 1418 .tag_alloc_policy = BLK_TAG_ALLOC_RR, \ 1427 - .slave_configure = ata_scsi_slave_config 1419 + .device_configure = ata_scsi_device_configure 1428 1420 1429 1421 #define ATA_BASE_SHT(drv_name) \ 1430 1422 ATA_SUBBASE_SHT(drv_name), \
+2 -2
include/linux/mmc/host.h
··· 433 433 mmc_pm_flag_t pm_caps; /* supported pm features */ 434 434 435 435 /* host specific block data */ 436 - unsigned int max_seg_size; /* see blk_queue_max_segment_size */ 437 - unsigned short max_segs; /* see blk_queue_max_segments */ 436 + unsigned int max_seg_size; /* lim->max_segment_size */ 437 + unsigned short max_segs; /* lim->max_segments */ 438 438 unsigned short unused; 439 439 unsigned int max_req_size; /* maximum number of bytes in one req */ 440 440 unsigned int max_blk_size; /* maximum size of one mmc block */
+1 -1
include/scsi/iser.h
··· 63 63 * @rsvd: reserved 64 64 * @write_stag: write rkey 65 65 * @write_va: write virtual address 66 - * @reaf_stag: read rkey 66 + * @read_stag: read rkey 67 67 * @read_va: read virtual address 68 68 */ 69 69 struct iser_ctrl {
+15 -3
include/scsi/libfc.h
··· 44 44 * @LPORT_ST_DISABLED: Disabled 45 45 * @LPORT_ST_FLOGI: Fabric login (FLOGI) sent 46 46 * @LPORT_ST_DNS: Waiting for name server remote port to become ready 47 - * @LPORT_ST_RPN_ID: Register port name by ID (RPN_ID) sent 47 + * @LPORT_ST_RNN_ID: Register port name by ID (RNN_ID) sent 48 + * @LPORT_ST_RSNN_NN: Waiting for host symbolic node name 49 + * @LPORT_ST_RSPN_ID: Waiting for host symbolic port name 48 50 * @LPORT_ST_RFT_ID: Register Fibre Channel types by ID (RFT_ID) sent 49 51 * @LPORT_ST_RFF_ID: Register FC-4 Features by ID (RFF_ID) sent 50 52 * @LPORT_ST_FDMI: Waiting for mgmt server rport to become ready 51 - * @LPORT_ST_RHBA: 53 + * @LPORT_ST_RHBA: Register HBA 54 + * @LPORT_ST_RPA: Register Port Attributes 55 + * @LPORT_ST_DHBA: Deregister HBA 56 + * @LPORT_ST_DPRT: Deregister Port 52 57 * @LPORT_ST_SCR: State Change Register (SCR) sent 53 58 * @LPORT_ST_READY: Ready for use 54 59 * @LPORT_ST_LOGO: Local port logout (LOGO) sent ··· 188 183 * @r_a_tov: Resource allocation timeout value (in msec) 189 184 * @rp_mutex: The mutex that protects the remote port 190 185 * @retry_work: Handle for retries 191 - * @event_callback: Callback when READY, FAILED or LOGO states complete 186 + * @lld_event_callback: Callback when READY, FAILED or LOGO states complete 192 187 * @prli_count: Count of open PRLI sessions in providers 193 188 * @rcu: Structure used for freeing in an RCU-safe manner 194 189 */ ··· 294 289 * @timer: The command timer 295 290 * @tm_done: Completion indicator 296 291 * @wait_for_comp: Indicator to wait for completion of the I/O (in jiffies) 292 + * @timer_delay: FCP packet timer delay in jiffies 297 293 * @data_len: The length of the data 298 294 * @cdb_cmd: The CDB command 299 295 * @xfer_len: The transfer length ··· 794 788 /** 795 789 * fc_lport_test_ready() - Determine if a local port is in the READY state 796 790 * @lport: The local port to test 791 + * 792 + * Returns: %true if local port is in the READY state, %false otherwise 
797 793 */ 798 794 static inline int fc_lport_test_ready(struct fc_lport *lport) 799 795 { ··· 838 830 /** 839 831 * fc_lport_init_stats() - Allocate per-CPU statistics for a local port 840 832 * @lport: The local port whose statistics are to be initialized 833 + * 834 + * Returns: %0 on success, %-ENOMEM on failure 841 835 */ 842 836 static inline int fc_lport_init_stats(struct fc_lport *lport) 843 837 { ··· 861 851 /** 862 852 * lport_priv() - Return the private data from a local port 863 853 * @lport: The local port whose private data is to be retrieved 854 + * 855 + * Returns: the local port's private data pointer 864 856 */ 865 857 static inline void *lport_priv(const struct fc_lport *lport) 866 858 {
+18 -7
include/scsi/libfcoe.h
··· 157 157 158 158 /** 159 159 * fcoe_ctlr_priv() - Return the private data from a fcoe_ctlr 160 - * @cltr: The fcoe_ctlr whose private data will be returned 160 + * @ctlr: The fcoe_ctlr whose private data will be returned 161 + * 162 + * Returns: pointer to the private data 161 163 */ 162 164 static inline void *fcoe_ctlr_priv(const struct fcoe_ctlr *ctlr) 163 165 { ··· 176 174 * struct fcoe_fcf - Fibre-Channel Forwarder 177 175 * @list: list linkage 178 176 * @event_work: Work for FC Transport actions queue 179 - * @event: The event to be processed 180 177 * @fip: The controller that the FCF was discovered on 181 178 * @fcf_dev: The associated fcoe_fcf_device instance 182 179 * @time: system time (jiffies) when an advertisement was last received ··· 189 188 * @flogi_sent: current FLOGI sent to this FCF 190 189 * @flags: flags received from advertisement 191 190 * @fka_period: keep-alive period, in jiffies 191 + * @fd_flags: no need for FKA from ENode 192 192 * 193 193 * A Fibre-Channel Forwarder (FCF) is the entity on the Ethernet that 194 194 * passes FCoE frames on to an FC fabric. This structure represents ··· 224 222 225 223 /** 226 224 * struct fcoe_rport - VN2VN remote port 225 + * @rdata: libfc remote port private data 227 226 * @time: time of create or last beacon packet received from node 228 227 * @fcoe_len: max FCoE frame size, not including VLAN or Ethernet headers 229 228 * @flags: flags from probe or claim ··· 269 266 void fcoe_ctlr_get_lesb(struct fcoe_ctlr_device *ctlr_dev); 270 267 271 268 /** 272 - * is_fip_mode() - returns true if FIP mode selected. 269 + * is_fip_mode() - test if FIP mode selected. 273 270 * @fip: FCoE controller. 
271 + * 272 + * Returns: %true if FIP mode is selected 274 273 */ 275 274 static inline bool is_fip_mode(struct fcoe_ctlr *fip) 276 275 { ··· 323 318 * @kthread: The thread context (used by bnx2fc) 324 319 * @work: The work item (used by fcoe) 325 320 * @fcoe_rx_list: The queue of pending packets to process 326 - * @page: The memory page for calculating frame trailer CRCs 321 + * @crc_eof_page: The memory page for calculating frame trailer CRCs 327 322 * @crc_eof_offset: The offset into the CRC page pointing to available 328 323 * memory for a new trailer 324 + * @lock: local lock for members of this struct 329 325 */ 330 326 struct fcoe_percpu_s { 331 327 struct task_struct *kthread; ··· 349 343 * @timer: The queue timer 350 344 * @destroy_work: Handle for work context 351 345 * (to prevent RTNL deadlocks) 352 - * @data_srt_addr: Source address for data 346 + * @data_src_addr: Source address for data 347 + * @get_netdev: function that returns a &net_device from @lport 353 348 * 354 349 * An instance of this structure is to be allocated along with the 355 350 * Scsi_Host and libfc fc_lport structures. ··· 371 364 /** 372 365 * fcoe_get_netdev() - Return the net device associated with a local port 373 366 * @lport: The local port to get the net device from 367 + * 368 + * Returns: the &net_device associated with this @lport 374 369 */ 375 370 static inline struct net_device *fcoe_get_netdev(const struct fc_lport *lport) 376 371 { ··· 392 383 void fcoe_ctlr_set_fip_mode(struct fcoe_ctlr_device *); 393 384 394 385 /** 395 - * struct netdev_list 396 - * A mapping from netdevice to fcoe_transport 386 + * struct fcoe_netdev_mapping - A mapping from &net_device to &fcoe_transport 387 + * @list: list linkage of the mappings 388 + * @netdev: the &net_device 389 + * @ft: the fcoe_transport associated with @netdev 397 390 */ 398 391 struct fcoe_netdev_mapping { 399 392 struct list_head list;
+31 -1
include/scsi/libsas.h
··· 683 683 int sas_phy_enable(struct sas_phy *phy, int enable); 684 684 extern int sas_queuecommand(struct Scsi_Host *, struct scsi_cmnd *); 685 685 extern int sas_target_alloc(struct scsi_target *); 686 - extern int sas_slave_configure(struct scsi_device *); 686 + int sas_device_configure(struct scsi_device *dev, 687 + struct queue_limits *lim); 687 688 extern int sas_change_queue_depth(struct scsi_device *, int new_depth); 688 689 extern int sas_bios_param(struct scsi_device *, struct block_device *, 689 690 sector_t capacity, int *hsc); ··· 726 725 gfp_t gfp_flags); 727 726 void sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event, 728 727 gfp_t gfp_flags); 728 + 729 + #define __LIBSAS_SHT_BASE \ 730 + .module = THIS_MODULE, \ 731 + .name = DRV_NAME, \ 732 + .proc_name = DRV_NAME, \ 733 + .queuecommand = sas_queuecommand, \ 734 + .dma_need_drain = ata_scsi_dma_need_drain, \ 735 + .target_alloc = sas_target_alloc, \ 736 + .change_queue_depth = sas_change_queue_depth, \ 737 + .bios_param = sas_bios_param, \ 738 + .this_id = -1, \ 739 + .eh_device_reset_handler = sas_eh_device_reset_handler, \ 740 + .eh_target_reset_handler = sas_eh_target_reset_handler, \ 741 + .target_destroy = sas_target_destroy, \ 742 + .ioctl = sas_ioctl, \ 743 + 744 + #ifdef CONFIG_COMPAT 745 + #define _LIBSAS_SHT_BASE __LIBSAS_SHT_BASE \ 746 + .compat_ioctl = sas_ioctl, 747 + #else 748 + #define _LIBSAS_SHT_BASE __LIBSAS_SHT_BASE 749 + #endif 750 + 751 + #define LIBSAS_SHT_BASE _LIBSAS_SHT_BASE \ 752 + .device_configure = sas_device_configure, \ 753 + .slave_alloc = sas_slave_alloc, \ 754 + 755 + #define LIBSAS_SHT_BASE_NO_SLAVE_INIT _LIBSAS_SHT_BASE 756 + 729 757 730 758 #endif /* _SASLIB_H_ */
+6
include/scsi/sas_ata.h
··· 39 39 int sas_discover_sata(struct domain_device *dev); 40 40 int sas_ata_add_dev(struct domain_device *parent, struct ex_phy *phy, 41 41 struct domain_device *child, int phy_id); 42 + 43 + extern const struct attribute_group sas_ata_sdev_attr_group; 44 + 42 45 #else 43 46 44 47 static inline void sas_ata_disabled_notice(void) ··· 126 123 sas_ata_disabled_notice(); 127 124 return -ENODEV; 128 125 } 126 + 127 + #define sas_ata_sdev_attr_group ((struct attribute_group) {}) 128 + 129 129 #endif 130 130 131 131 #endif /* _SAS_ATA_H_ */
+7 -5
include/scsi/scsi.h
··· 7 7 #define _SCSI_SCSI_H 8 8 9 9 #include <linux/types.h> 10 - #include <linux/scatterlist.h> 11 - #include <linux/kernel.h> 10 + 11 + #include <asm/param.h> 12 + 12 13 #include <scsi/scsi_common.h> 13 14 #include <scsi/scsi_proto.h> 14 15 #include <scsi/scsi_status.h> ··· 70 69 * @status: the status passed up from the driver (including host and 71 70 * driver components) 72 71 * 73 - * This returns true if the status code is SAM_STAT_CHECK_CONDITION. 72 + * Returns: %true if the status code is SAM_STAT_CHECK_CONDITION. 74 73 */ 75 74 static inline int scsi_status_is_check_condition(int status) 76 75 { ··· 190 189 /* Used to obtain the PCI location of a device */ 191 190 #define SCSI_IOCTL_GET_PCI 0x5387 192 191 193 - /** scsi_status_is_good - check the status return. 192 + /** 193 + * scsi_status_is_good - check the status return. 194 194 * 195 195 * @status: the status passed up from the driver (including host and 196 196 * driver components) 197 197 * 198 - * This returns true for known good conditions that may be treated as 198 + * Returns: %true for known good conditions that may be treated as 199 199 * command completed normally 200 200 */ 201 201 static inline bool scsi_status_is_good(int status)
+2
include/scsi/scsi_cmnd.h
··· 353 353 354 354 /** 355 355 * scsi_msg_to_host_byte() - translate message byte 356 + * @cmd: the SCSI command 357 + * @msg: the SCSI parallel message byte to translate 356 358 * 357 359 * Translate the SCSI parallel message byte to a matching 358 360 * host byte setting. A message of COMMAND_COMPLETE indicates
+3 -1
include/scsi/scsi_driver.h
··· 23 23 #define to_scsi_driver(drv) \ 24 24 container_of((drv), struct scsi_driver, gendrv) 25 25 26 - extern int scsi_register_driver(struct device_driver *); 26 + #define scsi_register_driver(drv) \ 27 + __scsi_register_driver(drv, THIS_MODULE) 28 + int __scsi_register_driver(struct device_driver *, struct module *); 27 29 #define scsi_unregister_driver(drv) \ 28 30 driver_unregister(drv); 29 31
+9
include/scsi/scsi_host.h
··· 211 211 * up after yourself before returning non-0 212 212 * 213 213 * Status: OPTIONAL 214 + * 215 + * Note: slave_configure is the legacy version, use device_configure for 216 + * all new code. A driver must never define both. 214 217 */ 218 + int (* device_configure)(struct scsi_device *, struct queue_limits *lim); 215 219 int (* slave_configure)(struct scsi_device *); 216 220 217 221 /* ··· 408 404 * Maximum size in bytes of a single segment. 409 405 */ 410 406 unsigned int max_segment_size; 407 + 408 + unsigned int dma_alignment; 411 409 412 410 /* 413 411 * DMA scatter gather segment boundary limit. A segment crossing this ··· 620 614 unsigned int max_sectors; 621 615 unsigned int opt_sectors; 622 616 unsigned int max_segment_size; 617 + unsigned int dma_alignment; 623 618 unsigned long dma_boundary; 624 619 unsigned long virt_boundary_mask; 625 620 /* ··· 671 664 672 665 /* The transport requires the LUN bits NOT to be stored in CDB[1] */ 673 666 unsigned no_scsi2_lun_in_cdb:1; 667 + 668 + unsigned no_highmem:1; 674 669 675 670 /* 676 671 * Optional work queue to be utilized by the transport
+1 -1
include/scsi/scsi_transport.h
··· 83 83 + shost->transportt->device_private_offset; 84 84 } 85 85 86 - void __scsi_init_queue(struct Scsi_Host *shost, struct request_queue *q); 86 + void scsi_init_limits(struct Scsi_Host *shost, struct queue_limits *lim); 87 87 88 88 #endif /* SCSI_TRANSPORT_H */
+3 -3
include/scsi/scsi_transport_fc.h
··· 709 709 int (*vport_delete)(struct fc_vport *); 710 710 711 711 /* bsg support */ 712 + u32 max_bsg_segments; 712 713 int (*bsg_request)(struct bsg_job *); 713 714 int (*bsg_timeout)(struct bsg_job *); 714 715 ··· 771 770 /** 772 771 * fc_remote_port_chkready - called to validate the remote port state 773 772 * prior to initiating io to the port. 774 - * 775 - * Returns a scsi result code that can be returned by the LLDD. 776 - * 777 773 * @rport: remote port to be checked 774 + * 775 + * Returns: a scsi result code that can be returned by the LLDD. 778 776 **/ 779 777 static inline int 780 778 fc_remote_port_chkready(struct fc_rport *rport)
+2 -2
include/scsi/scsi_transport_srp.h
··· 74 74 }; 75 75 76 76 /** 77 - * struct srp_function_template 77 + * struct srp_function_template - template for SRP initiator drivers 78 78 * 79 79 * Fields that are only relevant for SRP initiator drivers: 80 80 * @has_rport_state: Whether or not to create the state, fast_io_fail_tmo and ··· 124 124 * srp_chkready() - evaluate the transport layer state before I/O 125 125 * @rport: SRP target port pointer. 126 126 * 127 - * Returns a SCSI result code that can be returned by the LLD queuecommand() 127 + * Returns: a SCSI result code that can be returned by the LLD queuecommand() 128 128 * implementation. The role of this function is similar to that of 129 129 * fc_remote_port_chkready(). 130 130 */
+5 -3
include/uapi/scsi/scsi_bsg_mpi3mr.h
··· 401 401 __u32 buf_len; 402 402 }; 403 403 /** 404 - * struct mpi3mr_bsg_buf_entry_list - list of user buffer 404 + * struct mpi3mr_buf_entry_list - list of user buffer 405 405 * descriptor for MPI Passthrough requests. 406 406 * 407 407 * @num_of_entries: Number of buffer descriptors ··· 424 424 * @mrioc_id: Controller ID 425 425 * @rsvd1: Reserved 426 426 * @timeout: MPI request timeout 427 + * @rsvd2: Reserved 427 428 * @buf_entry_list: Buffer descriptor list 428 429 */ 429 430 struct mpi3mr_bsg_mptcmd { ··· 442 441 * @cmd_type: represents drvrcmd or mptcmd 443 442 * @rsvd1: Reserved 444 443 * @rsvd2: Reserved 445 - * @drvrcmd: driver request structure 446 - * @mptcmd: mpt request structure 444 + * @rsvd3: Reserved 445 + * @cmd.drvrcmd: driver request structure 446 + * @cmd.mptcmd: mpt request structure 447 447 */ 448 448 struct mpi3mr_bsg_packet { 449 449 __u8 cmd_type;
+3 -1
include/uapi/scsi/scsi_bsg_ufs.h
··· 123 123 * @idn: a value that indicates the particular type of data B-1 124 124 * @index: Index to further identify data B-2 125 125 * @selector: Index to further identify data B-3 126 + * @osf3: spec field B-4 126 127 * @osf4: spec field B-5 127 128 * @osf5: spec field B 6,7 128 129 * @osf6: spec field DW 8,9 ··· 139 138 __be16 osf5; 140 139 __be32 osf6; 141 140 __be32 osf7; 141 + /* private: */ 142 142 __be32 reserved; 143 143 }; 144 144 145 145 /** 146 146 * struct utp_upiu_cmd - Command UPIU structure 147 - * @data_transfer_len: Data Transfer Length DW-3 147 + * @exp_data_transfer_len: Data Transfer Length DW-3 148 148 * @cdb: Command Descriptor Block CDB DW-4 to DW-7 149 149 */ 150 150 struct utp_upiu_cmd {
-3
include/ufs/ufshcd.h
··· 374 374 int (*get_outstanding_cqs)(struct ufs_hba *hba, 375 375 unsigned long *ocqs); 376 376 int (*config_esi)(struct ufs_hba *hba); 377 - void (*config_scsi_dev)(struct scsi_device *sdev); 378 377 }; 379 378 380 379 /* clock gating state */ ··· 1388 1389 void ufshcd_release(struct ufs_hba *hba); 1389 1390 1390 1391 void ufshcd_clkgate_delay_set(struct device *dev, unsigned long value); 1391 - 1392 - u32 ufshcd_get_local_unipro_ver(struct ufs_hba *hba); 1393 1392 1394 1393 int ufshcd_get_vreg(struct device *dev, struct ufs_vreg *vreg); 1395 1394
+1 -12
include/ufs/ufshci.h
··· 355 355 356 356 /* Interrupt disable masks */ 357 357 enum { 358 - /* Interrupt disable mask for UFSHCI v1.0 */ 359 - INTERRUPT_MASK_ALL_VER_10 = 0x30FFF, 360 - INTERRUPT_MASK_RW_VER_10 = 0x30000, 361 - 362 358 /* Interrupt disable mask for UFSHCI v1.1 */ 363 - INTERRUPT_MASK_ALL_VER_11 = 0x31FFF, 359 + INTERRUPT_MASK_ALL_VER_11 = 0x31FFF, 364 360 365 361 /* Interrupt disable mask for UFSHCI v2.1 */ 366 362 INTERRUPT_MASK_ALL_VER_21 = 0x71FFF, ··· 420 424 /* 421 425 * Request Descriptor Definitions 422 426 */ 423 - 424 - /* Transfer request command type */ 425 - enum { 426 - UTP_CMD_TYPE_SCSI = 0x0, 427 - UTP_CMD_TYPE_UFS = 0x1, 428 - UTP_CMD_TYPE_DEV_MANAGE = 0x2, 429 - }; 430 427 431 428 /* To accommodate UFS2.0 required Command type */ 432 429 enum {