Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'ata-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata

Pull ATA updates from Damien Le Moal:
"A larger than usual set of changes for this cycle. The bulk of the
changes are part of a rework of libata messages and debugging features
from Hannes. In more detail, the changes are as follows.

- Small code cleanups in the pata_ali driver (unnecessary variable
  initialization and simplified return statement), from Jason and
  Colin.

- Switch to using struct_group() in the sata_fsl driver, from Kees.

- Convert many sysfs attribute show functions to use sysfs_emit()
instead of snprintf(), from me.

- sata_dwc_460ex driver code cleanups, from Andy.

- Improve DMA setup and remove superfluous error message in
  libahci_platform, from Andy.

- A small code cleanup in libata to use min() instead of open coding
test, from Changcheng.

- Rework of libata messages from Hannes. This is especially focused
  on replacing compile time defined debugging messages (DPRINTK() and
  VPRINTK()) with regular dynamic debugging messages (pr_debug()) and
  tracepoint events. Both libata-core and many drivers are updated
  for consistent debugging level control.

- Extend compile test support to as many drivers as possible in ATA
Kconfig to improve compile test coverage, from me.

- Fixes to avoid compile time warnings (W=1) and sparse warnings in
sata_fsl and ahci_xgene drivers, from me.

- Fix the interface of the read_id() port operation method to clarify
  that the data buffer passed as an argument is little endian. This
  avoids sparse warnings in the pata_netcell, pata_it821x,
  ahci_xgene, ahci_ceva and ahci_brcm drivers. From me.

- Small code cleanup in the pata_octeon_cf driver, from Minghao.

- Improved IRQ configuration code in pata_of_platform, from Lad.

- Simplified implementation of __ata_scsi_queuecmd(), from Wenchao.

- Debounce delay flag renaming, from Paul.

- Add support for AMD A85 FCH (Hudson D4) AHCI adapters, from Paul"

* tag 'ata-5.17-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/dlemoal/libata: (106 commits)
ata: pata_ali: remove redundant return statement
ata: ahci: Add support for AMD A85 FCH (Hudson D4)
ata: libata: Rename link flag ATA_LFLAG_NO_DB_DELAY
ata: libata-scsi: simplify __ata_scsi_queuecmd()
ata: pata_of_platform: Use platform_get_irq_optional() to get the interrupt
ata: pata_samsung_cf: add compile test support
ata: pata_pxa: add compile test support
ata: pata_imx: add compile test support
ata: pata_ftide010: add compile test support
ata: pata_cs5535: add compile test support
ata: pata_octeon_cf: remove redundant val variable
ata: fix read_id() ata port operation interface
ata: ahci_xgene: use correct type for port mmio address
ata: sata_fsl: fix cmdhdr_tbl_entry and prde struct definitions
ata: sata_fsl: fix scsi host initialization
ata: pata_bk3710: add compile test support
ata: ahci_seattle: add compile test support
ata: ahci_xgene: add compile test support
ata: ahci_tegra: add compile test support
ata: ahci_sunxi: add compile test support
...

+1271 -1438
+16 -28
drivers/ata/Kconfig
···
 config AHCI_BRCM
 	tristate "Broadcom AHCI SATA support"
 	depends on ARCH_BRCMSTB || BMIPS_GENERIC || ARCH_BCM_NSP || \
-		   ARCH_BCM_63XX
+		   ARCH_BCM_63XX || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for the AHCI SATA3 controller found on
···
 config AHCI_DA850
 	tristate "DaVinci DA850 AHCI SATA support"
-	depends on ARCH_DAVINCI_DA850
+	depends on ARCH_DAVINCI_DA850 || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for the DaVinci DA850 SoC's
···
 config AHCI_DM816
 	tristate "DaVinci DM816 AHCI SATA support"
-	depends on ARCH_OMAP2PLUS
+	depends on ARCH_OMAP2PLUS || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for the DaVinci DM816 SoC's
···
 config AHCI_MTK
 	tristate "MediaTek AHCI SATA support"
-	depends on ARCH_MEDIATEK
+	depends on ARCH_MEDIATEK || COMPILE_TEST
 	select MFD_SYSCON
 	select SATA_HOST
 	help
···
 config AHCI_MVEBU
 	tristate "Marvell EBU AHCI SATA support"
-	depends on ARCH_MVEBU
+	depends on ARCH_MVEBU || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for the Marvebu EBU SoC's
···
 config AHCI_SUNXI
 	tristate "Allwinner sunxi AHCI SATA support"
-	depends on ARCH_SUNXI
+	depends on ARCH_SUNXI || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for the Allwinner sunxi SoC's
···
 config AHCI_TEGRA
 	tristate "NVIDIA Tegra AHCI SATA support"
-	depends on ARCH_TEGRA
+	depends on ARCH_TEGRA || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for the NVIDIA Tegra SoC's
···
 config AHCI_XGENE
 	tristate "APM X-Gene 6.0Gbps AHCI SATA host controller support"
-	depends on PHY_XGENE
+	depends on PHY_XGENE || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for APM X-Gene SoC SATA host controller.
···
 config SATA_FSL
 	tristate "Freescale 3.0Gbps SATA support"
-	depends on FSL_SOC
+	depends on FSL_SOC || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for Freescale 3.0Gbps SATA controller.
···
 config SATA_AHCI_SEATTLE
 	tristate "AMD Seattle 6.0Gbps AHCI SATA host controller support"
-	depends on ARCH_SEATTLE
+	depends on ARCH_SEATTLE || COMPILE_TEST
 	select SATA_HOST
 	help
 	  This option enables support for AMD Seattle SATA host controller.
···
 	help
 	  This option enables support for old device trees without the
 	  "dmas" property.
-
-config SATA_DWC_DEBUG
-	bool "Debugging driver version"
-	depends on SATA_DWC
-	help
-	  This option enables debugging output in the driver.
-
-config SATA_DWC_VDEBUG
-	bool "Verbose debug output"
-	depends on SATA_DWC_DEBUG
-	help
-	  This option enables the taskfile dumping and NCQ debugging.
 
 config SATA_HIGHBANK
 	tristate "Calxeda Highbank SATA support"
···
 config PATA_BK3710
 	tristate "Palmchip BK3710 PATA support"
-	depends on ARCH_DAVINCI
+	depends on ARCH_DAVINCI || COMPILE_TEST
 	select PATA_TIMINGS
 	help
 	  This option enables support for the integrated IDE controller on
···
 config PATA_CS5535
 	tristate "CS5535 PATA support (Experimental)"
-	depends on PCI && X86_32
+	depends on PCI && (X86_32 || (X86_64 && COMPILE_TEST))
 	help
 	  This option enables support for the NatSemi/AMD CS5535
 	  companion chip used with the Geode processor family.
···
 config PATA_FTIDE010
 	tristate "Faraday Technology FTIDE010 PATA support"
 	depends on OF
-	depends on ARM
+	depends on ARM || COMPILE_TEST
 	depends on SATA_GEMINI
 	help
 	  This option enables support for the Faraday FTIDE010
···
 config PATA_IMX
 	tristate "PATA support for Freescale iMX"
-	depends on ARCH_MXC
+	depends on ARCH_MXC || COMPILE_TEST
 	select PATA_TIMINGS
 	help
 	  This option enables support for the PATA host available on Freescale
···
 config PATA_PXA
 	tristate "PXA DMA-capable PATA support"
-	depends on ARCH_PXA
+	depends on ARCH_PXA || COMPILE_TEST
 	help
 	  This option enables support for harddrive attached to PXA CPU's bus.
···
 config PATA_SAMSUNG_CF
 	tristate "Samsung SoC PATA support"
-	depends on SAMSUNG_DEV_IDE
+	depends on SAMSUNG_DEV_IDE || COMPILE_TEST
 	select PATA_TIMINGS
 	help
 	  This option enables basic support for Samsung's S3C/S5P board
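The Kconfig changes above all follow the standard pattern for build coverage: keep the architecture gate for real users, but let allyesconfig/allmodconfig builds on other architectures compile the driver. A hypothetical entry (the `AHCI_EXAMPLE` and `ARCH_EXAMPLE` symbols are illustrative, not real) would look like:

```kconfig
config AHCI_EXAMPLE
	tristate "Example SoC AHCI SATA support"
	# Runtime support still requires the SoC, but any build can
	# compile-test the driver when COMPILE_TEST is enabled.
	depends on ARCH_EXAMPLE || COMPILE_TEST
	select SATA_HOST
	help
	  This option enables support for the example SoC's onboard
	  AHCI SATA controller.
```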
-4
drivers/ata/acard-ahci.c
···
 	struct acard_sg *acard_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
 	unsigned int si, last_si = 0;
 
-	VPRINTK("ENTER\n");
-
 	/*
 	 * Next, the S/G list.
 	 */
···
 	struct ahci_host_priv *hpriv;
 	struct ata_host *host;
 	int n_ports, i, rc;
-
-	VPRINTK("ENTER\n");
 
 	WARN_ON((int)ATA_MAX_QUEUE > AHCI_MAX_CMDS);
+11 -13
drivers/ata/ahci.c
···
 	board_ahci,
 	board_ahci_ign_iferr,
 	board_ahci_mobile,
+	board_ahci_no_debounce_delay,
 	board_ahci_nomsi,
 	board_ahci_noncq,
 	board_ahci_nosntf,
···
 	[board_ahci_mobile] = {
 		AHCI_HFLAGS	(AHCI_HFLAG_IS_MOBILE),
 		.flags		= AHCI_FLAG_COMMON,
+		.pio_mask	= ATA_PIO4,
+		.udma_mask	= ATA_UDMA6,
+		.port_ops	= &ahci_ops,
+	},
+	[board_ahci_no_debounce_delay] = {
+		.flags		= AHCI_FLAG_COMMON,
+		.link_flags	= ATA_LFLAG_NO_DEBOUNCE_DELAY,
 		.pio_mask	= ATA_PIO4,
 		.udma_mask	= ATA_UDMA6,
 		.port_ops	= &ahci_ops,
···
 		board_ahci_al },
 	/* AMD */
 	{ PCI_VDEVICE(AMD, 0x7800), board_ahci }, /* AMD Hudson-2 */
+	{ PCI_VDEVICE(AMD, 0x7801), board_ahci_no_debounce_delay }, /* AMD Hudson-2 (AHCI mode) */
 	{ PCI_VDEVICE(AMD, 0x7900), board_ahci }, /* AMD CZ */
 	{ PCI_VDEVICE(AMD, 0x7901), board_ahci_mobile }, /* AMD Green Sardine */
 	/* AMD is using RAID class only for ahci controllers */
···
 
 	/* clear port IRQ */
 	tmp = readl(port_mmio + PORT_IRQ_STAT);
-	VPRINTK("PORT_IRQ_STAT 0x%x\n", tmp);
+	dev_dbg(&pdev->dev, "PORT_IRQ_STAT 0x%x\n", tmp);
 	if (tmp)
 		writel(tmp, port_mmio + PORT_IRQ_STAT);
 }
···
 	bool online;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	hpriv->stop_engine(ap);
 
 	rc = sata_link_hardreset(link, sata_ehc_deb_timing(&link->eh_context),
 				 deadline, &online, NULL);
 
 	hpriv->start_engine(ap);
-
-	DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
 
 	/* vt8251 doesn't clear BSY on signature FIS reception,
 	 * request follow-up softreset.
···
 	bool online;
 	int rc, i;
 
-	DPRINTK("ENTER\n");
-
 	hpriv->stop_engine(ap);
 
 	for (i = 0; i < 2; i++) {
···
 	if (online)
 		*class = ahci_dev_classify(ap);
 
-	DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
 	return rc;
 }
···
 	u32 irq_stat, irq_masked;
 	unsigned int handled = 1;
 
-	VPRINTK("ENTER\n");
 	hpriv = host->private_data;
 	mmio = hpriv->mmio;
 	irq_stat = readl(mmio + HOST_IRQ_STAT);
···
 		irq_stat = readl(mmio + HOST_IRQ_STAT);
 		spin_unlock(&host->lock);
 	} while (irq_stat);
-	VPRINTK("EXIT\n");
 
 	return IRQ_RETVAL(handled);
 }
···
 	struct ata_host *host = dev_get_drvdata(dev);
 	struct ahci_host_priv *hpriv = host->private_data;
 
-	return sprintf(buf, "%u\n", hpriv->remapped_nvme);
+	return sysfs_emit(buf, "%u\n", hpriv->remapped_nvme);
 }
 
 static DEVICE_ATTR_RO(remapped_nvme);
···
 	struct ata_host *host;
 	int n_ports, i, rc;
 	int ahci_pci_bar = AHCI_PCI_BAR_STANDARD;
-
-	VPRINTK("ENTER\n");
 
 	WARN_ON((int)ATA_MAX_QUEUE > AHCI_MAX_CMDS);
+2 -2
drivers/ata/ahci_brcm.c
···
 }
 
 static unsigned int brcm_ahci_read_id(struct ata_device *dev,
-				      struct ata_taskfile *tf, u16 *id)
+				      struct ata_taskfile *tf, __le16 *id)
 {
 	struct ata_port *ap = dev->link->ap;
 	struct ata_host *host = ap->host;
···
 static const struct ata_port_info ahci_brcm_port_info = {
 	.flags		= AHCI_FLAG_COMMON | ATA_FLAG_NO_DIPM,
-	.link_flags	= ATA_LFLAG_NO_DB_DELAY,
+	.link_flags	= ATA_LFLAG_NO_DEBOUNCE_DELAY,
 	.pio_mask	= ATA_PIO4,
 	.udma_mask	= ATA_UDMA6,
 	.port_ops	= &ahci_brcm_platform_ops,
+2 -3
drivers/ata/ahci_ceva.c
···
 };
 
 static unsigned int ceva_ahci_read_id(struct ata_device *dev,
-				      struct ata_taskfile *tf, u16 *id)
+				      struct ata_taskfile *tf, __le16 *id)
 {
-	__le16 *__id = (__le16 *)id;
 	u32 err_mask;
 
 	err_mask = ata_do_dev_read_id(dev, tf, id);
···
 	 * Since CEVA controller does not support device sleep feature, we
 	 * need to clear DEVSLP (bit 8) in word78 of the IDENTIFY DEVICE data.
 	 */
-	__id[ATA_ID_FEATURE_SUPP] &= cpu_to_le16(~(1 << 8));
+	id[ATA_ID_FEATURE_SUPP] &= cpu_to_le16(~(1 << 8));
 
 	return 0;
 }
-4
drivers/ata/ahci_qoriq.c
···
 	int rc;
 	bool ls1021a_workaround = (qoriq_priv->type == AHCI_LS1021A);
 
-	DPRINTK("ENTER\n");
-
 	hpriv->stop_engine(ap);
 
 	/*
···
 
 	if (online)
 		*class = ahci_dev_classify(ap);
-
-	DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
 	return rc;
 }
+4 -8
drivers/ata/ahci_xgene.c
···
 	struct xgene_ahci_context *ctx = hpriv->plat_data;
 	int rc = 0;
 	u32 port_fbs;
-	void *port_mmio = ahci_port_base(ap);
+	void __iomem *port_mmio = ahci_port_base(ap);
 
 	/*
 	 * Write the pmp value to PxFBS.DEV
···
  * does not support DEVSLP.
  */
 static unsigned int xgene_ahci_read_id(struct ata_device *dev,
-				       struct ata_taskfile *tf, u16 *id)
+				       struct ata_taskfile *tf, __le16 *id)
 {
 	u32 err_mask;
 
···
 	int pmp = sata_srst_pmp(link);
 	struct ata_port *ap = link->ap;
 	u32 rc;
-	void *port_mmio = ahci_port_base(ap);
+	void __iomem *port_mmio = ahci_port_base(ap);
 	u32 port_fbs;
 
 	/*
···
 	struct ata_port *ap = link->ap;
 	struct ahci_host_priv *hpriv = ap->host->private_data;
 	struct xgene_ahci_context *ctx = hpriv->plat_data;
-	void *port_mmio = ahci_port_base(ap);
+	void __iomem *port_mmio = ahci_port_base(ap);
 	u32 port_fbs;
 	u32 port_fbs_save;
 	u32 retry = 1;
···
 	void __iomem *mmio;
 	u32 irq_stat, irq_masked;
 
-	VPRINTK("ENTER\n");
-
 	hpriv = host->private_data;
 	mmio = hpriv->mmio;
 
···
 	rc = xgene_ahci_handle_broken_edge_irq(host, irq_masked);
 
 	spin_unlock(&host->lock);
-
-	VPRINTK("EXIT\n");
 
 	return IRQ_RETVAL(rc);
 }
+7 -4
drivers/ata/ata_piix.c
···
 #include <scsi/scsi_host.h>
 #include <linux/libata.h>
 #include <linux/dmi.h>
+#include <trace/events/libata.h>
 
 #define DRV_NAME	"ata_piix"
 #define DRV_VERSION	"2.13"
···
 
 static bool piix_irq_check(struct ata_port *ap)
 {
+	unsigned char host_stat;
+
 	if (unlikely(!ap->ioaddr.bmdma_addr))
 		return false;
 
-	return ap->ops->bmdma_status(ap) & ATA_DMA_INTR;
+	host_stat = ap->ops->bmdma_status(ap);
+	trace_ata_bmdma_status(ap, host_stat);
+
+	return host_stat & ATA_DMA_INTR;
 }
 
 #ifdef CONFIG_PM_SLEEP
···
 	new_pcs = pcs | map_db->port_enable;
 
 	if (new_pcs != pcs) {
-		DPRINTK("updating PCS from 0x%x to 0x%x\n", pcs, new_pcs);
 		pci_write_config_word(pdev, ICH5_PCS, new_pcs);
 		msleep(150);
 	}
···
 {
 	int rc;
 
-	DPRINTK("pci_register_driver\n");
 	rc = pci_register_driver(&piix_pci_driver);
 	if (rc)
 		return rc;
 
 	in_module_init = 0;
 
-	DPRINTK("done\n");
 	return 0;
 }
+5 -28
drivers/ata/libahci.c
···
 
 	/* clear SError */
 	tmp = readl(port_mmio + PORT_SCR_ERR);
-	VPRINTK("PORT_SCR_ERR 0x%x\n", tmp);
+	dev_dbg(dev, "PORT_SCR_ERR 0x%x\n", tmp);
 	writel(tmp, port_mmio + PORT_SCR_ERR);
 
 	/* clear port IRQ */
 	tmp = readl(port_mmio + PORT_IRQ_STAT);
-	VPRINTK("PORT_IRQ_STAT 0x%x\n", tmp);
+	dev_dbg(dev, "PORT_IRQ_STAT 0x%x\n", tmp);
 	if (tmp)
 		writel(tmp, port_mmio + PORT_IRQ_STAT);
 
···
 	}
 
 	tmp = readl(mmio + HOST_CTL);
-	VPRINTK("HOST_CTL 0x%x\n", tmp);
+	dev_dbg(host->dev, "HOST_CTL 0x%x\n", tmp);
 	writel(tmp | HOST_IRQ_EN, mmio + HOST_CTL);
 	tmp = readl(mmio + HOST_CTL);
-	VPRINTK("HOST_CTL 0x%x\n", tmp);
+	dev_dbg(host->dev, "HOST_CTL 0x%x\n", tmp);
 }
 EXPORT_SYMBOL_GPL(ahci_init_controller);
···
 	tf.lbal		= (tmp >> 8)	& 0xff;
 	tf.nsect	= (tmp)		& 0xff;
 
-	return ata_dev_classify(&tf);
+	return ata_port_classify(ap, &tf);
 }
 EXPORT_SYMBOL_GPL(ahci_dev_classify);
···
 	bool fbs_disabled = false;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	/* prepare for SRST (AHCI-1.1 10.4.1) */
 	rc = ahci_kick_engine(ap);
 	if (rc && rc != -EOPNOTSUPP)
···
 	if (fbs_disabled)
 		ahci_enable_fbs(ap);
 
-	DPRINTK("EXIT, class=%u\n", *class);
 	return 0;
 
  fail:
···
 			  unsigned long deadline)
 {
 	int pmp = sata_srst_pmp(link);
-
-	DPRINTK("ENTER\n");
 
 	return ahci_do_softreset(link, class, pmp, deadline, ahci_check_ready);
 }
···
 	int pmp = sata_srst_pmp(link);
 	int rc;
 	u32 irq_sts;
-
-	DPRINTK("ENTER\n");
 
 	rc = ahci_do_softreset(link, class, pmp, deadline,
 			       ahci_bad_pmp_check_ready);
···
 	struct ata_taskfile tf;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	hpriv->stop_engine(ap);
 
 	/* clear D2H reception area to properly wait for D2H FIS */
···
 	if (*online)
 		*class = ahci_dev_classify(ap);
 
-	DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(ahci_do_hardreset);
···
 	struct scatterlist *sg;
 	struct ahci_sg *ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
 	unsigned int si;
-
-	VPRINTK("ENTER\n");
 
 	/*
 	 * Next, the S/G list.
···
 	u32 fbs = readl(port_mmio + PORT_FBS);
 	int retries = 3;
 
-	DPRINTK("ENTER\n");
 	BUG_ON(!pp->fbs_enabled);
 
 	/* time to wait for DEC is not specified by AHCI spec,
···
 	void __iomem *port_mmio = ahci_port_base(ap);
 	u32 status;
 
-	VPRINTK("ENTER\n");
-
 	status = readl(port_mmio + PORT_IRQ_STAT);
 	writel(status, port_mmio + PORT_IRQ_STAT);
 
 	spin_lock(ap->lock);
 	ahci_handle_port_interrupt(ap, port_mmio, status);
 	spin_unlock(ap->lock);
-
-	VPRINTK("EXIT\n");
 
 	return IRQ_HANDLED;
 }
···
 		ap = host->ports[i];
 		if (ap) {
 			ahci_port_intr(ap);
-			VPRINTK("port %u\n", i);
 		} else {
-			VPRINTK("port %u (no irq)\n", i);
 			if (ata_ratelimit())
 				dev_warn(host->dev,
 					 "interrupt on disabled port %u\n", i);
···
 	unsigned int rc = 0;
 	void __iomem *mmio;
 	u32 irq_stat, irq_masked;
-
-	VPRINTK("ENTER\n");
 
 	hpriv = host->private_data;
 	mmio = hpriv->mmio;
···
 	writel(irq_stat, mmio + HOST_IRQ_STAT);
 
 	spin_unlock(&host->lock);
-
-	VPRINTK("EXIT\n");
 
 	return IRQ_RETVAL(rc);
 }
+3 -11
drivers/ata/libahci_platform.c
···
 	int i, irq, n_ports, rc;
 
 	irq = platform_get_irq(pdev, 0);
-	if (irq < 0) {
-		if (irq != -EPROBE_DEFER)
-			dev_err(dev, "no irq\n");
+	if (irq < 0)
 		return irq;
-	}
 	if (!irq)
 		return -EINVAL;
 
···
 	if (hpriv->cap & HOST_CAP_64) {
 		rc = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
 		if (rc) {
-			rc = dma_coerce_mask_and_coherent(dev,
-							  DMA_BIT_MASK(32));
-			if (rc) {
-				dev_err(dev, "Failed to enable 64-bit DMA.\n");
-				return rc;
-			}
-			dev_warn(dev, "Enable 32-bit DMA instead of 64-bit.\n");
+			dev_err(dev, "Failed to enable 64-bit DMA.\n");
+			return rc;
 		}
 	}
+31 -38
drivers/ata/libata-acpi.c
···
  */
 static int ata_dev_get_GTF(struct ata_device *dev, struct ata_acpi_gtf **gtf)
 {
-	struct ata_port *ap = dev->link->ap;
 	acpi_status status;
 	struct acpi_buffer output;
 	union acpi_object *out_obj;
···
 	/* set up output buffer */
 	output.length = ACPI_ALLOCATE_BUFFER;
 	output.pointer = NULL;	/* ACPI-CA sets this; save/free it later */
-
-	if (ata_msg_probe(ap))
-		ata_dev_dbg(dev, "%s: ENTER: port#: %d\n",
-			    __func__, ap->port_no);
 
 	/* _GTF has no input parameters */
 	status = acpi_evaluate_object(ata_dev_acpi_handle(dev), "_GTF", NULL,
···
 	}
 
 	if (!output.length || !output.pointer) {
-		if (ata_msg_probe(ap))
-			ata_dev_dbg(dev, "%s: Run _GTF: length or ptr is NULL (0x%llx, 0x%p)\n",
-				    __func__,
-				    (unsigned long long)output.length,
-				    output.pointer);
+		ata_dev_dbg(dev, "Run _GTF: length or ptr is NULL (0x%llx, 0x%p)\n",
+			    (unsigned long long)output.length,
+			    output.pointer);
 		rc = -EINVAL;
 		goto out_free;
 	}
···
 	rc = out_obj->buffer.length / REGS_PER_GTF;
 	if (gtf) {
 		*gtf = (void *)out_obj->buffer.pointer;
-		if (ata_msg_probe(ap))
-			ata_dev_dbg(dev, "%s: returning gtf=%p, gtf_count=%d\n",
-				    __func__, *gtf, rc);
+		ata_dev_dbg(dev, "returning gtf=%p, gtf_count=%d\n",
+			    *gtf, rc);
 	}
 	return rc;
 
···
 	struct ata_taskfile *pptf = NULL;
 	struct ata_taskfile tf, ptf, rtf;
 	unsigned int err_mask;
-	const char *level;
 	const char *descr;
-	char msg[60];
 	int rc;
 
 	if ((gtf->tf[0] == 0) && (gtf->tf[1] == 0) && (gtf->tf[2] == 0)
···
 		pptf = &ptf;
 	}
 
+	descr = ata_get_cmd_name(tf.command);
+
 	if (!ata_acpi_filter_tf(dev, &tf, pptf)) {
 		rtf = tf;
 		err_mask = ata_exec_internal(dev, &rtf, NULL,
···
 		switch (err_mask) {
 		case 0:
-			level = KERN_DEBUG;
-			snprintf(msg, sizeof(msg), "succeeded");
+			ata_dev_dbg(dev,
+				"ACPI cmd %02x/%02x:%02x:%02x:%02x:%02x:%02x"
+				"(%s) succeeded\n",
+				tf.command, tf.feature, tf.nsect, tf.lbal,
+				tf.lbam, tf.lbah, tf.device, descr);
 			rc = 1;
 			break;
 
 		case AC_ERR_DEV:
-			level = KERN_INFO;
-			snprintf(msg, sizeof(msg),
-				 "rejected by device (Stat=0x%02x Err=0x%02x)",
-				 rtf.command, rtf.feature);
+			ata_dev_info(dev,
+				"ACPI cmd %02x/%02x:%02x:%02x:%02x:%02x:%02x"
+				"(%s) rejected by device (Stat=0x%02x Err=0x%02x)",
+				tf.command, tf.feature, tf.nsect, tf.lbal,
+				tf.lbam, tf.lbah, tf.device, descr,
+				rtf.command, rtf.feature);
 			rc = 0;
 			break;
 
 		default:
-			level = KERN_ERR;
-			snprintf(msg, sizeof(msg),
-				 "failed (Emask=0x%x Stat=0x%02x Err=0x%02x)",
-				 err_mask, rtf.command, rtf.feature);
+			ata_dev_err(dev,
+				"ACPI cmd %02x/%02x:%02x:%02x:%02x:%02x:%02x"
+				"(%s) failed (Emask=0x%x Stat=0x%02x Err=0x%02x)",
+				tf.command, tf.feature, tf.nsect, tf.lbal,
+				tf.lbam, tf.lbah, tf.device, descr,
+				err_mask, rtf.command, rtf.feature);
 			rc = -EIO;
 			break;
 		}
 	} else {
-		level = KERN_INFO;
-		snprintf(msg, sizeof(msg), "filtered out");
+		ata_dev_info(dev,
+			"ACPI cmd %02x/%02x:%02x:%02x:%02x:%02x:%02x"
+			"(%s) filtered out\n",
+			tf.command, tf.feature, tf.nsect, tf.lbal,
+			tf.lbam, tf.lbah, tf.device, descr);
 		rc = 0;
 	}
-	descr = ata_get_cmd_descript(tf.command);
-
-	ata_dev_printk(dev, level,
-		       "ACPI cmd %02x/%02x:%02x:%02x:%02x:%02x:%02x (%s) %s\n",
-		       tf.command, tf.feature, tf.nsect, tf.lbal,
-		       tf.lbam, tf.lbah, tf.device,
-		       (descr ? descr : "unknown"), msg);
-
 	return rc;
 }
···
 	struct acpi_object_list input;
 	union acpi_object in_params[1];
 
-	if (ata_msg_probe(ap))
-		ata_dev_dbg(dev, "%s: ix = %d, port#: %d\n",
-			    __func__, dev->devno, ap->port_no);
+	ata_dev_dbg(dev, "%s: ix = %d, port#: %d\n",
+		    __func__, dev->devno, ap->port_no);
 
 	/* Give the drive Identify data to the drive via the _SDD method */
 	/* _SDD: set up input parameters */
+51 -178
drivers/ata/libata-core.c
···
 	head  = track % dev->heads;
 	sect  = (u32)block % dev->sectors + 1;
 
-	DPRINTK("block %u track %u cyl %u head %u sect %u\n",
-		(u32)block, track, cyl, head, sect);
-
 	/* Check whether the converted CHS can fit.
 	   Cylinder: 0-65535
 	   Head: 0-15
···
 	 * SEMB signature.  This is worked around in
 	 * ata_dev_read_id().
 	 */
-	if ((tf->lbam == 0) && (tf->lbah == 0)) {
-		DPRINTK("found ATA device by sig\n");
+	if (tf->lbam == 0 && tf->lbah == 0)
 		return ATA_DEV_ATA;
-	}
 
-	if ((tf->lbam == 0x14) && (tf->lbah == 0xeb)) {
-		DPRINTK("found ATAPI device by sig\n");
+	if (tf->lbam == 0x14 && tf->lbah == 0xeb)
 		return ATA_DEV_ATAPI;
-	}
 
-	if ((tf->lbam == 0x69) && (tf->lbah == 0x96)) {
-		DPRINTK("found PMP device by sig\n");
+	if (tf->lbam == 0x69 && tf->lbah == 0x96)
 		return ATA_DEV_PMP;
-	}
 
-	if ((tf->lbam == 0x3c) && (tf->lbah == 0xc3)) {
-		DPRINTK("found SEMB device by sig (could be ATA device)\n");
+	if (tf->lbam == 0x3c && tf->lbah == 0xc3)
 		return ATA_DEV_SEMB;
-	}
 
-	if ((tf->lbam == 0xcd) && (tf->lbah == 0xab)) {
-		DPRINTK("found ZAC device by sig\n");
+	if (tf->lbam == 0xcd && tf->lbah == 0xab)
 		return ATA_DEV_ZAC;
-	}
 
-	DPRINTK("unknown device\n");
 	return ATA_DEV_UNKNOWN;
 }
 EXPORT_SYMBOL_GPL(ata_dev_classify);
···
 /**
  *	ata_dump_id - IDENTIFY DEVICE info debugging output
+ *	@dev: device from which the information is fetched
  *	@id: IDENTIFY DEVICE page to dump
  *
  *	Dump selected 16-bit words from the given IDENTIFY DEVICE
···
  *	caller.
  */
 
-static inline void ata_dump_id(const u16 *id)
+static inline void ata_dump_id(struct ata_device *dev, const u16 *id)
 {
-	DPRINTK("49==0x%04x  "
-		"53==0x%04x  "
-		"63==0x%04x  "
-		"64==0x%04x  "
-		"75==0x%04x  \n",
-		id[49],
-		id[53],
-		id[63],
-		id[64],
-		id[75]);
-	DPRINTK("80==0x%04x  "
-		"81==0x%04x  "
-		"82==0x%04x  "
-		"83==0x%04x  "
-		"84==0x%04x  \n",
-		id[80],
-		id[81],
-		id[82],
-		id[83],
-		id[84]);
-	DPRINTK("88==0x%04x  "
-		"93==0x%04x\n",
-		id[88],
-		id[93]);
+	ata_dev_dbg(dev,
+		"49==0x%04x  53==0x%04x  63==0x%04x  64==0x%04x  75==0x%04x\n"
+		"80==0x%04x  81==0x%04x  82==0x%04x  83==0x%04x  84==0x%04x\n"
+		"88==0x%04x  93==0x%04x\n",
+		id[49], id[53], id[63], id[64], id[75], id[80],
+		id[81], id[82], id[83], id[84], id[88], id[93]);
 }
···
 		else
 			ata_qc_complete(qc);
 
-		if (ata_msg_warn(ap))
-			ata_dev_warn(dev, "qc timeout (cmd 0x%x)\n",
-				     command);
+		ata_dev_warn(dev, "qc timeout (cmd 0x%x)\n",
+			     command);
 	}
 
 	spin_unlock_irqrestore(ap->lock, flags);
···
  *	this function is wrapped or replaced by the driver
  */
 unsigned int ata_do_dev_read_id(struct ata_device *dev,
-				struct ata_taskfile *tf, u16 *id)
+				struct ata_taskfile *tf, __le16 *id)
 {
 	return ata_exec_internal(dev, tf, NULL, DMA_FROM_DEVICE,
 				 id, sizeof(id[0]) * ATA_ID_WORDS, 0);
···
 	int may_fallback = 1, tried_spinup = 0;
 	int rc;
 
-	if (ata_msg_ctl(ap))
-		ata_dev_dbg(dev, "%s: ENTER\n", __func__);
-
 retry:
 	ata_tf_init(dev, &tf);
···
 		tf.flags |= ATA_TFLAG_POLLING;
 
 	if (ap->ops->read_id)
-		err_mask = ap->ops->read_id(dev, &tf, id);
+		err_mask = ap->ops->read_id(dev, &tf, (__le16 *)id);
 	else
-		err_mask = ata_do_dev_read_id(dev, &tf, id);
+		err_mask = ata_do_dev_read_id(dev, &tf, (__le16 *)id);
 
 	if (err_mask) {
 		if (err_mask & AC_ERR_NODEV_HINT) {
···
 	}
 
 	if (dev->horkage & ATA_HORKAGE_DUMP_ID) {
-		ata_dev_dbg(dev, "dumping IDENTIFY data, "
+		ata_dev_info(dev, "dumping IDENTIFY data, "
 			    "class=%d may_fallback=%d tried_spinup=%d\n",
 			    class, may_fallback, tried_spinup);
-		print_hex_dump(KERN_DEBUG, "", DUMP_PREFIX_OFFSET,
+		print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET,
 			       16, 2, id, ATA_ID_WORDS * sizeof(*id), true);
 	}
···
 	return 0;
 
  err_out:
-	if (ata_msg_warn(ap))
-		ata_dev_warn(dev, "failed to IDENTIFY (%s, err_mask=0x%x)\n",
-			     reason, err_mask);
+	ata_dev_warn(dev, "failed to IDENTIFY (%s, err_mask=0x%x)\n",
+		     reason, err_mask);
 	return rc;
 }
···
 	unsigned int err_mask;
 	bool dma = false;
 
-	DPRINTK("read log page - log 0x%x, page 0x%x\n", log, page);
+	ata_dev_dbg(dev, "read log page - log 0x%x, page 0x%x\n", log, page);
 
 	/*
 	 * Return error without actually issuing the command on controllers
···
 static int ata_dev_config_lba(struct ata_device *dev)
 {
-	struct ata_port *ap = dev->link->ap;
 	const u16 *id = dev->id;
 	const char *lba_desc;
 	char ncq_desc[24];
···
 	ret = ata_dev_config_ncq(dev, ncq_desc, sizeof(ncq_desc));
 
 	/* print device info to dmesg */
-	if (ata_msg_drv(ap) && ata_dev_print_info(dev))
+	if (ata_dev_print_info(dev))
 		ata_dev_info(dev,
 			     "%llu sectors, multi %u: %s %s\n",
 			     (unsigned long long)dev->n_sectors,
···
 static void ata_dev_config_chs(struct ata_device *dev)
 {
-	struct ata_port *ap = dev->link->ap;
 	const u16 *id = dev->id;
 
 	if (ata_id_current_chs_valid(id)) {
···
 	}
 
 	/* print device info to dmesg */
-	if (ata_msg_drv(ap) && ata_dev_print_info(dev))
+	if (ata_dev_print_info(dev))
 		ata_dev_info(dev,
 			     "%llu sectors, multi %u, CHS %u/%u/%u\n",
 			     (unsigned long long)dev->n_sectors,
···
 	char modelbuf[ATA_ID_PROD_LEN+1];
 	int rc;
 
-	if (!ata_dev_enabled(dev) && ata_msg_info(ap)) {
-		ata_dev_info(dev, "%s: ENTER/EXIT -- nodev\n", __func__);
+	if (!ata_dev_enabled(dev)) {
+		ata_dev_dbg(dev, "no device\n");
 		return 0;
 	}
-
-	if (ata_msg_probe(ap))
-		ata_dev_dbg(dev, "%s: ENTER\n", __func__);
 
 	/* set horkage */
 	dev->horkage |= ata_dev_blacklisted(dev);
···
 		return rc;
 
 	/* print device capabilities */
-	if (ata_msg_probe(ap))
-		ata_dev_dbg(dev,
-			    "%s: cfg 49:%04x 82:%04x 83:%04x 84:%04x "
-			    "85:%04x 86:%04x 87:%04x 88:%04x\n",
-			    __func__,
-			    id[49], id[82], id[83], id[84],
-			    id[85], id[86], id[87], id[88]);
+	ata_dev_dbg(dev,
+		    "%s: cfg 49:%04x 82:%04x 83:%04x 84:%04x "
+		    "85:%04x 86:%04x 87:%04x 88:%04x\n",
+		    __func__,
+		    id[49], id[82], id[83], id[84],
+		    id[85], id[86], id[87], id[88]);
 
 	/* initialize to-be-configured parameters */
 	dev->flags &= ~ATA_DFLAG_CFG_MASK;
···
 	/* find max transfer mode; for printk only */
 	xfer_mask = ata_id_xfermask(id);
 
-	if (ata_msg_probe(ap))
-		ata_dump_id(id);
+	ata_dump_id(dev, id);
 
 	/* SCSI only uses 4-char revisions, dump full 8 chars from ATA */
 	ata_id_c_string(dev->id, fwrevbuf, ATA_ID_FW_REV,
···
 	}
 
 	/* print device info to dmesg */
-	if (ata_msg_drv(ap) && print_info)
+	if (print_info)
 		ata_dev_info(dev, "%s: %s, %s, max %s\n",
 			     revbuf, modelbuf, fwrevbuf,
 			     ata_mode_string(xfer_mask));
···
 		ata_dev_config_cpr(dev);
 		dev->cdb_len = 32;
 
-		if (ata_msg_drv(ap) && print_info)
+		if (print_info)
 			ata_dev_print_features(dev);
 	}
···
 		rc = atapi_cdb_len(id);
 		if ((rc < 12) || (rc > ATAPI_CDB_LEN)) {
-			if (ata_msg_warn(ap))
-				ata_dev_warn(dev, "unsupported CDB len\n");
+			ata_dev_warn(dev, "unsupported CDB len %d\n", rc);
 			rc = -EINVAL;
 			goto err_out_nosup;
 		}
···
 		}
 
 		/* print device info to dmesg */
-		if (ata_msg_drv(ap) && print_info)
+		if (print_info)
 			ata_dev_info(dev,
 				     "ATAPI: %s, %s, max %s%s%s%s\n",
 				     modelbuf, fwrevbuf,
···
 	/* Limit PATA drive on SATA cable bridge transfers to udma5,
 	   200 sectors */
 	if (ata_dev_knobble(dev)) {
-		if (ata_msg_drv(ap) && print_info)
+		if (print_info)
 			ata_dev_info(dev, "applying bridge limits\n");
 		dev->udma_mask &= ATA_UDMA5;
 		dev->max_sectors = ATA_MAX_SECTORS;
···
 	return 0;
 
  err_out_nosup:
-	if (ata_msg_probe(ap))
-		ata_dev_dbg(dev, "%s: EXIT, err\n", __func__);
 	return rc;
 }
···
 		dev_err_whine = " (device error ignored)";
 	}
 
-	DPRINTK("xfer_shift=%u, xfer_mode=0x%x\n",
-		dev->xfer_shift, (int)dev->xfer_mode);
+	ata_dev_dbg(dev, "xfer_shift=%u, xfer_mode=0x%x\n",
+		    dev->xfer_shift, (int)dev->xfer_mode);
 
 	if (!(ehc->i.flags & ATA_EHI_QUIET) ||
 	    ehc->i.flags & ATA_EHI_DID_HARDRESET)
···
 {
 	u32 serror;
 
-	DPRINTK("ENTER\n");
-
 	/* reset complete, clear SError */
 	if (!sata_scr_read(link, SCR_ERROR, &serror))
 		sata_scr_write(link, SCR_ERROR, serror);
 
 	/* print link status */
 	sata_print_link_status(link);
-
-	DPRINTK("EXIT\n");
 }
 EXPORT_SYMBOL_GPL(ata_std_postreset);
···
 	unsigned int err_mask;
 
 	/* set up set-features taskfile */
-	DPRINTK("set features - xfer mode\n");
+	ata_dev_dbg(dev, "set features - xfer mode\n");
 
 	/* Some controllers and ATAPI devices show flaky interrupt
 	 * behavior after setting xfer mode.  Use polling instead.
···
 	/* On some disks, this command causes spin-up, so we need longer timeout */
 	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 15000);
 
-	DPRINTK("EXIT, err_mask=%x\n", err_mask);
 	return err_mask;
 }
···
 	unsigned long timeout = 0;
 
 	/* set up set-features taskfile */
-	DPRINTK("set features - SATA features\n");
+	ata_dev_dbg(dev, "set features - SATA features\n");
 
 	ata_tf_init(dev, &tf);
 	tf.command = ATA_CMD_SET_FEATURES;
···
 		ata_probe_timeout * 1000 : SETFEATURES_SPINUP_TIMEOUT;
 	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, timeout);
 
-	DPRINTK("EXIT, err_mask=%x\n", err_mask);
 	return err_mask;
 }
 EXPORT_SYMBOL_GPL(ata_dev_set_feature);
···
 		return AC_ERR_INVALID;
 
 	/* set up init dev params taskfile */
-	DPRINTK("init dev params \n");
+	ata_dev_dbg(dev, "init dev params \n");
 
 	ata_tf_init(dev, &tf);
 	tf.command = ATA_CMD_INIT_DEV_PARAMS;
···
 	if (err_mask == AC_ERR_DEV && (tf.feature & ATA_ABORTED))
 		err_mask = 0;
 
-	DPRINTK("EXIT, err_mask=%x\n", err_mask);
 	return err_mask;
 }
···
4490 4543 WARN_ON_ONCE(sg == NULL); 4491 4544 4492 - VPRINTK("unmapping %u sg elements\n", qc->n_elem); 4493 - 4494 4545 if (qc->n_elem) 4495 4546 dma_unmap_sg(ap->dev, sg, qc->orig_n_elem, dir); 4496 4547 ··· 4514 4569 struct ata_port *ap = qc->ap; 4515 4570 unsigned int n_elem; 4516 4571 4517 - VPRINTK("ENTER, ata%u\n", ap->print_id); 4518 - 4519 4572 n_elem = dma_map_sg(ap->dev, qc->sg, qc->n_elem, qc->dma_dir); 4520 4573 if (n_elem < 1) 4521 4574 return -1; 4522 4575 4523 - VPRINTK("%d sg elements mapped\n", n_elem); 4524 4576 qc->orig_n_elem = qc->n_elem; 4525 4577 qc->n_elem = n_elem; 4526 4578 qc->flags |= ATA_QCFLAG_DMAMAP; ··· 4872 4930 return; 4873 4931 } 4874 4932 4933 + trace_ata_qc_prep(qc); 4875 4934 qc->err_mask |= ap->ops->qc_prep(qc); 4876 4935 if (unlikely(qc->err_mask)) 4877 4936 goto err; ··· 5318 5375 { 5319 5376 struct ata_port *ap; 5320 5377 5321 - DPRINTK("ENTER\n"); 5322 - 5323 5378 ap = kzalloc(sizeof(*ap), GFP_KERNEL); 5324 5379 if (!ap) 5325 5380 return NULL; ··· 5328 5387 ap->local_port_no = -1; 5329 5388 ap->host = host; 5330 5389 ap->dev = host->dev; 5331 - 5332 - #if defined(ATA_VERBOSE_DEBUG) 5333 - /* turn on all debugging levels */ 5334 - ap->msg_enable = 0x00FF; 5335 - #elif defined(ATA_DEBUG) 5336 - ap->msg_enable = ATA_MSG_DRV | ATA_MSG_INFO | ATA_MSG_CTL | ATA_MSG_WARN | ATA_MSG_ERR; 5337 - #else 5338 - ap->msg_enable = ATA_MSG_DRV | ATA_MSG_ERR | ATA_MSG_WARN; 5339 - #endif 5340 5390 5341 5391 mutex_init(&ap->scsi_scan_mutex); 5342 5392 INIT_DELAYED_WORK(&ap->hotplug_task, ata_scsi_hotplug); ··· 5424 5492 size_t sz; 5425 5493 int i; 5426 5494 void *dr; 5427 - 5428 - DPRINTK("ENTER\n"); 5429 5495 5430 5496 /* alloc a container for our list of ATA ports (buses) */ 5431 5497 sz = sizeof(struct ata_host) + (max_ports + 1) * sizeof(void *); ··· 5714 5784 __ata_port_probe(ap); 5715 5785 ata_port_wait_eh(ap); 5716 5786 } else { 5717 - DPRINTK("ata%u: bus probe begin\n", ap->print_id); 5718 5787 rc = ata_bus_probe(ap); 5719 - 
DPRINTK("ata%u: bus probe end\n", ap->print_id); 5720 5788 } 5721 5789 return rc; 5722 5790 } ··· 6481 6553 }; 6482 6554 EXPORT_SYMBOL_GPL(ata_dummy_port_info); 6483 6555 6484 - /* 6485 - * Utility print functions 6486 - */ 6487 - void ata_port_printk(const struct ata_port *ap, const char *level, 6488 - const char *fmt, ...) 6489 - { 6490 - struct va_format vaf; 6491 - va_list args; 6492 - 6493 - va_start(args, fmt); 6494 - 6495 - vaf.fmt = fmt; 6496 - vaf.va = &args; 6497 - 6498 - printk("%sata%u: %pV", level, ap->print_id, &vaf); 6499 - 6500 - va_end(args); 6501 - } 6502 - EXPORT_SYMBOL(ata_port_printk); 6503 - 6504 - void ata_link_printk(const struct ata_link *link, const char *level, 6505 - const char *fmt, ...) 6506 - { 6507 - struct va_format vaf; 6508 - va_list args; 6509 - 6510 - va_start(args, fmt); 6511 - 6512 - vaf.fmt = fmt; 6513 - vaf.va = &args; 6514 - 6515 - if (sata_pmp_attached(link->ap) || link->ap->slave_link) 6516 - printk("%sata%u.%02u: %pV", 6517 - level, link->ap->print_id, link->pmp, &vaf); 6518 - else 6519 - printk("%sata%u: %pV", 6520 - level, link->ap->print_id, &vaf); 6521 - 6522 - va_end(args); 6523 - } 6524 - EXPORT_SYMBOL(ata_link_printk); 6525 - 6526 - void ata_dev_printk(const struct ata_device *dev, const char *level, 6527 - const char *fmt, ...) 
6528 - { 6529 - struct va_format vaf; 6530 - va_list args; 6531 - 6532 - va_start(args, fmt); 6533 - 6534 - vaf.fmt = fmt; 6535 - vaf.va = &args; 6536 - 6537 - printk("%sata%u.%02u: %pV", 6538 - level, dev->link->ap->print_id, dev->link->pmp + dev->devno, 6539 - &vaf); 6540 - 6541 - va_end(args); 6542 - } 6543 - EXPORT_SYMBOL(ata_dev_printk); 6544 - 6545 6556 void ata_print_version(const struct device *dev, const char *version) 6546 6557 { 6547 6558 dev_printk(KERN_DEBUG, dev, "version %s\n", version); 6548 6559 } 6549 6560 EXPORT_SYMBOL(ata_print_version); 6561 + 6562 + EXPORT_TRACEPOINT_SYMBOL_GPL(ata_tf_load); 6563 + EXPORT_TRACEPOINT_SYMBOL_GPL(ata_exec_command); 6564 + EXPORT_TRACEPOINT_SYMBOL_GPL(ata_bmdma_setup); 6565 + EXPORT_TRACEPOINT_SYMBOL_GPL(ata_bmdma_start); 6566 + EXPORT_TRACEPOINT_SYMBOL_GPL(ata_bmdma_status);
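With DPRINTK()/VPRINTK() replaced by ata_dev_dbg() and friends (dev_dbg() wrappers), these messages are compiled in once and toggled at runtime through dynamic debug instead of requiring a rebuild with ATA_DEBUG set. A sketch of how they can be enabled on a kernel built with CONFIG_DYNAMIC_DEBUG (paths assume debugfs is mounted at /sys/kernel/debug):

```shell
# Enable all debug messages in the libata module at runtime
echo 'module libata +p' > /sys/kernel/debug/dynamic_debug/control

# Or enable a single call site, e.g. the "read log page" message
echo 'func ata_read_log_page +p' > /sys/kernel/debug/dynamic_debug/control

# Disable again
echo 'module libata -p' > /sys/kernel/debug/dynamic_debug/control
```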
drivers/ata/libata-eh.c (+35 -37)
@@ -533 +533 @@
 	unsigned long flags;
 	LIST_HEAD(eh_work_q);
 
-	DPRINTK("ENTER\n");
-
 	spin_lock_irqsave(host->host_lock, flags);
 	list_splice_init(&host->eh_cmd_q, &eh_work_q);
 	spin_unlock_irqrestore(host->host_lock, flags);

@@ -546 +548 @@
 	/* finish or retry handled scmd's and clean up */
 	WARN_ON(!list_empty(&eh_work_q));
 
-	DPRINTK("EXIT\n");
 }
 
 /**

@@ -937 +940 @@
 	ata_eh_set_pending(ap, 1);
 	scsi_schedule_eh(ap->scsi_host);
 
-	DPRINTK("port EH scheduled\n");
+	trace_ata_std_sched_eh(ap);
 }
 EXPORT_SYMBOL_GPL(ata_std_sched_eh);

@@ -1067 +1070 @@
 
 	ap->pflags |= ATA_PFLAG_FROZEN;
 
-	DPRINTK("ata%u port frozen\n", ap->print_id);
+	trace_ata_port_freeze(ap);
 }

@@ -1144 +1147 @@
 
 	spin_unlock_irqrestore(ap->lock, flags);
 
-	DPRINTK("ata%u port thawed\n", ap->print_id);
+	trace_ata_port_thaw(ap);
 }
 
 static void ata_eh_scsidone(struct scsi_cmnd *scmd)

@@ -1214 +1217 @@
 	if (!ata_dev_enabled(dev))
 		return;
 
-	if (ata_msg_drv(dev->link->ap))
-		ata_dev_warn(dev, "disabled\n");
+	ata_dev_warn(dev, "disable device\n");
 	ata_acpi_on_disable(dev);
 	ata_down_xfermask_limit(dev, ATA_DNXFER_FORCE_PIO0 | ATA_DNXFER_QUIET);
 	dev->class++;

@@ -1283 +1287 @@
 	struct ata_eh_context *ehc = &link->eh_context;
 	unsigned long flags;
 
+	trace_ata_eh_about_to_do(link, dev ? dev->devno : 0, action);
+
 	spin_lock_irqsave(ap->lock, flags);
 
 	ata_eh_clear_action(link, dev, ehi, action);

@@ -1314 +1316 @@
 			 unsigned int action)
 {
 	struct ata_eh_context *ehc = &link->eh_context;
+
+	trace_ata_eh_done(link, dev ? dev->devno : 0, action);
 
 	ata_eh_clear_action(link, dev, &ehc->i, action);
 }

@@ -1421 +1421 @@
 		return;
 	}
 
-	DPRINTK("ATA request sense\n");
-
 	ata_tf_init(dev, &tf);
 	tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
 	tf.flags |= ATA_TFLAG_LBA | ATA_TFLAG_LBA48;

@@ -1460 +1462 @@
 		{ REQUEST_SENSE, 0, 0, 0, SCSI_SENSE_BUFFERSIZE, 0 };
 	struct ata_port *ap = dev->link->ap;
 	struct ata_taskfile tf;
-
-	DPRINTK("ATAPI request sense\n");
 
 	memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE);

@@ -1924 +1928 @@
 	u32 serror;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	if (ehc->i.flags & ATA_EHI_NO_AUTOPSY)
 		return;

@@ -2030 +2036 @@
 		ehc->i.action |= ata_eh_speed_down(dev, eflags, all_err_mask);
 		trace_ata_eh_link_autopsy(dev, ehc->i.action, all_err_mask);
 	}
-	DPRINTK("EXIT\n");
 }

@@ -2079 +2086 @@
 }
 
 /**
- *	ata_get_cmd_descript - get description for ATA command
- *	@command: ATA command code to get description for
+ *	ata_get_cmd_name - get name for ATA command
+ *	@command: ATA command code to get name for
 *
- *	Return a textual description of the given command, or NULL if the
- *	command is not known.
+ *	Return a textual name of the given command or "unknown"
 *
 *	LOCKING:
 *	None
 */
-const char *ata_get_cmd_descript(u8 command)
+const char *ata_get_cmd_name(u8 command)
 {
 #ifdef CONFIG_ATA_VERBOSE_ERROR
 	static const struct

@@ -2195 +2203 @@
 			return cmd_descr[i].text;
 #endif
 
-	return NULL;
+	return "unknown";
 }
-EXPORT_SYMBOL_GPL(ata_get_cmd_descript);
+EXPORT_SYMBOL_GPL(ata_get_cmd_name);

@@ -2346 +2354 @@
 			}
 			__scsi_format_command(cdb_buf, sizeof(cdb_buf),
 					      cdb, cdb_len);
-		} else {
-			const char *descr = ata_get_cmd_descript(cmd->command);
-			if (descr)
-				ata_dev_err(qc->dev, "failed command: %s\n",
-					    descr);
-		}
+		} else
+			ata_dev_err(qc->dev, "failed command: %s\n",
+				    ata_get_cmd_name(cmd->command));
 
 		ata_dev_err(qc->dev,
 			"cmd %02x/%02x:%02x:%02x:%02x:%02x/%02x:%02x:%02x:%02x:%02x/%02x "

@@ -2585 +2596 @@
 
 		/* mark that this EH session started with reset */
 		ehc->last_reset = jiffies;
-		if (reset == hardreset)
+		if (reset == hardreset) {
 			ehc->i.flags |= ATA_EHI_DID_HARDRESET;
-		else
+			trace_ata_link_hardreset_begin(link, classes, deadline);
+		} else {
 			ehc->i.flags |= ATA_EHI_DID_SOFTRESET;
+			trace_ata_link_softreset_begin(link, classes, deadline);
+		}
 
 		rc = ata_do_reset(link, reset, classes, deadline, true);
+		if (reset == hardreset)
+			trace_ata_link_hardreset_end(link, classes, rc);
+		else
+			trace_ata_link_softreset_end(link, classes, rc);
 		if (rc && rc != -EAGAIN) {
 			failed_link = link;
 			goto fail;
 		}

@@ -2611 +2615 @@
 			ata_link_info(slave, "hard resetting link\n");
 
 			ata_eh_about_to_do(slave, NULL, ATA_EH_RESET);
+			trace_ata_slave_hardreset_begin(slave, classes,
+							deadline);
 			tmp = ata_do_reset(slave, reset, classes, deadline,
 					   false);
+			trace_ata_slave_hardreset_end(slave, classes, tmp);
 			switch (tmp) {
 			case -EAGAIN:
 				rc = -EAGAIN;

@@ -2643 +2644 @@
 		}
 
 		ata_eh_about_to_do(link, NULL, ATA_EH_RESET);
+		trace_ata_link_softreset_begin(link, classes, deadline);
 		rc = ata_do_reset(link, reset, classes, deadline, true);
+		trace_ata_link_softreset_end(link, classes, rc);
 		if (rc) {
 			failed_link = link;
 			goto fail;
 		}

@@ -2699 +2698 @@
 	 */
 	if (postreset) {
 		postreset(link, classes);
+		trace_ata_link_postreset(link, classes, rc);
-		if (slave)
+		if (slave) {
 			postreset(slave, classes);
+			trace_ata_slave_postreset(slave, classes, rc);
+		}
 	}

@@ -2925 +2921 @@
 	unsigned long flags;
 	int rc = 0;
 
-	DPRINTK("ENTER\n");
-
 	/* For PATA drive side cable detection to work, IDENTIFY must
 	 * be done backwards such that PDIAG- is released by the slave
 	 * device before the master device is identified.

@@ -3038 +3036 @@
 
  err:
 	*r_failed_dev = dev;
-	DPRINTK("EXIT rc=%d\n", rc);
 	return rc;
 }

@@ -3552 +3551 @@
 	int rc, nr_fails;
 	unsigned long flags, deadline;
 
-	DPRINTK("ENTER\n");
-
 	/* prep for recovery */
 	ata_for_each_link(link, ap, EDGE) {
 		struct ata_eh_context *ehc = &link->eh_context;

@@ -3759 +3760 @@
 	if (rc && r_failed_link)
 		*r_failed_link = link;
 
-	DPRINTK("EXIT, rc=%d\n", rc);
 	return rc;
 }
drivers/ata/libata-pmp.c (-8)
@@ -652 +652 @@
 	u32 *gscr = (void *)ap->sector_buf;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	ata_eh_about_to_do(link, NULL, ATA_EH_REVALIDATE);
 
 	if (!ata_dev_enabled(dev)) {

@@ -684 +686 @@
 
 	ata_eh_done(link, NULL, ATA_EH_REVALIDATE);
 
-	DPRINTK("EXIT, rc=0\n");
 	return 0;
 
  fail:
 	ata_dev_err(dev, "PMP revalidation failed (errno=%d)\n", rc);
-	DPRINTK("EXIT, rc=%d\n", rc);
 	return rc;
 }

@@ -754 +758 @@
 	int tries = ATA_EH_PMP_TRIES;
 	int detach = 0, rc = 0;
 	int reval_failed = 0;
-
-	DPRINTK("ENTER\n");
 
 	if (dev->flags & ATA_DFLAG_DETACH) {
 		detach = 1;

@@ -822 +828 @@
 	/* okay, PMP resurrected */
 	ehc->i.flags = 0;
 
-	DPRINTK("EXIT, rc=0\n");
 	return 0;
 
  fail:

@@ -831 +838 @@
 	else
 		ata_dev_disable(dev);
 
-	DPRINTK("EXIT, rc=%d\n", rc);
 	return rc;
 }
drivers/ata/libata-sata.c (+3 -8)
@@ -317 +317 @@
 	 * immediately after resuming.  Delay 200ms before
 	 * debouncing.
 	 */
-	if (!(link->flags & ATA_LFLAG_NO_DB_DELAY))
+	if (!(link->flags & ATA_LFLAG_NO_DEBOUNCE_DELAY))
 		ata_msleep(link->ap, 200);
 
 	/* is SControl restored correctly? */

@@ -533 +533 @@
 	u32 scontrol;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	if (online)
 		*online = false;

@@ -608 +610 @@
 			*online = false;
 		ata_link_err(link, "COMRESET failed (errno=%d)\n", rc);
 	}
-	DPRINTK("EXIT, rc=%d\n", rc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(sata_link_hardreset);

@@ -873 +876 @@
 	ncq_prio_enable = dev->flags & ATA_DFLAG_NCQ_PRIO_ENABLE;
 	spin_unlock_irq(ap->lock);
 
-	return rc ? rc : snprintf(buf, 20, "%u\n", ncq_prio_enable);
+	return rc ? rc : sysfs_emit(buf, "%u\n", ncq_prio_enable);
 }
 
 static ssize_t ata_ncq_prio_enable_store(struct device *device,

@@ -969 +972 @@
 	struct Scsi_Host *shost = class_to_shost(dev);
 	struct ata_port *ap = ata_shost_to_port(shost);
 
-	return snprintf(buf, 23, "%d\n", ap->em_message_type);
+	return sysfs_emit(buf, "%d\n", ap->em_message_type);
 }
 DEVICE_ATTR(em_message_type, S_IRUGO,
 	    ata_scsi_em_message_type_show, NULL);

@@ -1257 +1260 @@
 int ata_sas_queuecmd(struct scsi_cmnd *cmd, struct ata_port *ap)
 {
 	int rc = 0;
-
-	ata_scsi_dump_cdb(ap, cmd);
 
 	if (likely(ata_dev_enabled(ap->link.device)))
 		rc = __ata_scsi_queuecmd(cmd, ap->link.device);
drivers/ata/libata-scsi.c (+48 -122)
@@ -121 +121 @@
  unlock:
 	spin_unlock_irq(ap->lock);
 
-	return rc ? rc : snprintf(buf, 20, "%u\n", msecs);
+	return rc ? rc : sysfs_emit(buf, "%u\n", msecs);
 }
 
 static ssize_t ata_scsi_park_store(struct device *device,

@@ -668 +668 @@
 
 /**
  *	ata_dump_status - user friendly display of error info
- *	@id: id of the port in question
+ *	@ap: the port in question
  *	@tf: ptr to filled out taskfile
 *
 *	Decode and dump the ATA error/status registers for the user so

@@ -678 +678 @@
  *	LOCKING:
  *	inherited from caller
  */
-static void ata_dump_status(unsigned id, struct ata_taskfile *tf)
+static void ata_dump_status(struct ata_port *ap, struct ata_taskfile *tf)
 {
 	u8 stat = tf->command, err = tf->feature;
 
-	pr_warn("ata%u: status=0x%02x { ", id, stat);
 	if (stat & ATA_BUSY) {
-		pr_cont("Busy }\n");	/* Data is not valid in this case */
+		ata_port_warn(ap, "status=0x%02x {Busy} ", stat);
 	} else {
-		if (stat & ATA_DRDY)	pr_cont("DriveReady ");
-		if (stat & ATA_DF)	pr_cont("DeviceFault ");
-		if (stat & ATA_DSC)	pr_cont("SeekComplete ");
-		if (stat & ATA_DRQ)	pr_cont("DataRequest ");
-		if (stat & ATA_CORR)	pr_cont("CorrectedError ");
-		if (stat & ATA_SENSE)	pr_cont("Sense ");
-		if (stat & ATA_ERR)	pr_cont("Error ");
-		pr_cont("}\n");
-
-		if (err) {
-			pr_warn("ata%u: error=0x%02x { ", id, err);
-			if (err & ATA_ABORTED)	pr_cont("DriveStatusError ");
-			if (err & ATA_ICRC) {
-				if (err & ATA_ABORTED)
-					pr_cont("BadCRC ");
-				else		pr_cont("Sector ");
-			}
-			if (err & ATA_UNC)	pr_cont("UncorrectableError ");
-			if (err & ATA_IDNF)	pr_cont("SectorIdNotFound ");
-			if (err & ATA_TRK0NF)	pr_cont("TrackZeroNotFound ");
-			if (err & ATA_AMNF)	pr_cont("AddrMarkNotFound ");
-			pr_cont("}\n");
-		}
+		ata_port_warn(ap, "status=0x%02x { %s%s%s%s%s%s%s} ", stat,
+			      stat & ATA_DRDY ? "DriveReady " : "",
+			      stat & ATA_DF ? "DeviceFault " : "",
+			      stat & ATA_DSC ? "SeekComplete " : "",
+			      stat & ATA_DRQ ? "DataRequest " : "",
+			      stat & ATA_CORR ? "CorrectedError " : "",
+			      stat & ATA_SENSE ? "Sense " : "",
+			      stat & ATA_ERR ? "Error " : "");
+		if (err)
+			ata_port_warn(ap, "error=0x%02x {%s%s%s%s%s%s", err,
+				      err & ATA_ABORTED ?
+				      "DriveStatusError " : "",
+				      err & ATA_ICRC ?
+				      (err & ATA_ABORTED ?
+				       "BadCRC " : "Sector ") : "",
+				      err & ATA_UNC ? "UncorrectableError " : "",
+				      err & ATA_IDNF ? "SectorIdNotFound " : "",
+				      err & ATA_TRK0NF ? "TrackZeroNotFound " : "",
+				      err & ATA_AMNF ? "AddrMarkNotFound " : "");
 	}
 }

@@ -1294 +1299 @@
 	u64 lba = 0;
 	u32 len;
 
-	VPRINTK("six-byte command\n");
-
 	lba |= ((u64)(cdb[1] & 0x1f)) << 16;
 	lba |= ((u64)cdb[2]) << 8;
 	lba |= ((u64)cdb[3]);

@@ -1318 +1325 @@
 {
 	u64 lba = 0;
 	u32 len = 0;
-
-	VPRINTK("ten-byte command\n");
 
 	lba |= ((u64)cdb[2]) << 24;
 	lba |= ((u64)cdb[3]) << 16;

@@ -1345 +1354 @@
 {
 	u64 lba = 0;
 	u32 len = 0;
-
-	VPRINTK("sixteen-byte command\n");
 
 	lba |= ((u64)cdb[2]) << 56;
 	lba |= ((u64)cdb[3]) << 48;

@@ -1457 +1468 @@
 		cyl   = track / dev->heads;
 		head  = track % dev->heads;
 		sect  = (u32)block % dev->sectors + 1;
-
-		DPRINTK("block %u track %u cyl %u head %u sect %u\n",
-			(u32)block, track, cyl, head, sect);
 
 		/* Check whether the converted CHS can fit.
 		   Cylinder: 0-65535

@@ -1580 +1594 @@
 			goto invalid_fld;
 		break;
 	default:
-		DPRINTK("no-byte command\n");
 		fp = 0;
 		goto invalid_fld;
 	}

@@ -1657 +1672 @@
 		cmd->result = SAM_STAT_GOOD;
 
 	if (need_sense && !ap->ops->error_handler)
-		ata_dump_status(ap->print_id, &qc->result_tf);
+		ata_dump_status(ap, &qc->result_tf);
 
 	ata_qc_done(qc);
 }

@@ -1695 +1710 @@
 	struct ata_queued_cmd *qc;
 	int rc;
 
-	VPRINTK("ENTER\n");
-
 	qc = ata_scsi_qc_new(dev, cmd);
 	if (!qc)
 		goto err_mem;

@@ -1725 +1742 @@
 	/* select device, send command to hardware */
 	ata_qc_issue(qc);
 
-	VPRINTK("EXIT\n");
 	return 0;
 
 early_finish:
 	ata_qc_free(qc);
 	scsi_done(cmd);
-	DPRINTK("EXIT - early finish (good or error)\n");
 	return 0;
 
 err_did:

@@ -1737 +1756 @@
 	cmd->result = (DID_ERROR << 16);
 	scsi_done(cmd);
 err_mem:
-	DPRINTK("EXIT - internal\n");
 	return 0;
 
 defer:
 	ata_qc_free(qc);
-	DPRINTK("EXIT - defer\n");
 	if (rc == ATA_DEFER_LINK)
 		return SCSI_MLQUEUE_DEVICE_BUSY;
 	else

@@ -1836 +1857 @@
 		0,
 		2
 	};
-
-	VPRINTK("ENTER\n");
 
 	/* set scsi removable (RMB) bit per ata bit, or if the
 	 * AHCI port says it's external (Hotplug-capable, eSATA).

@@ -2271 +2294 @@
 	u8 dpofua, bp = 0xff;
 	u16 fp;
 
-	VPRINTK("ENTER\n");
-
 	six_byte = (scsicmd[0] == MODE_SENSE);
 	ebd = !(scsicmd[1] & 0x8);	/* dbd bit inverted == edb */
 	/*

@@ -2388 +2413 @@
 	log2_per_phys = ata_id_log2_per_physical_sector(dev->id);
 	lowest_aligned = ata_id_logical_sector_offset(dev->id, log2_per_phys);
 
-	VPRINTK("ENTER\n");
-
 	if (args->cmd->cmnd[0] == READ_CAPACITY) {
 		if (last_lba >= 0xffffffffULL)
 			last_lba = 0xffffffff;

@@ -2454 +2481 @@
  */
 static unsigned int ata_scsiop_report_luns(struct ata_scsi_args *args, u8 *rbuf)
 {
-	VPRINTK("ENTER\n");
 	rbuf[3] = 8;	/* just one lun, LUN 0, size 8 bytes */
 
 	return 0;

@@ -2483 +2511 @@
 {
 	struct ata_port *ap = qc->ap;
 	struct scsi_cmnd *cmd = qc->scsicmd;
-
-	DPRINTK("ATAPI request sense\n");
 
 	memset(cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);

@@ -2522 +2552 @@
 	qc->complete_fn = atapi_sense_complete;
 
 	ata_qc_issue(qc);
-
-	DPRINTK("EXIT\n");
 }

@@ -2548 +2580 @@
 {
 	struct scsi_cmnd *cmd = qc->scsicmd;
 	unsigned int err_mask = qc->err_mask;
-
-	VPRINTK("ENTER, err_mask 0x%X\n", err_mask);
 
 	/* handle completion from new EH */
 	if (unlikely(qc->ap->ops->error_handler &&

@@ -2629 +2663 @@
 	qc->tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
 	if (scmd->sc_data_direction == DMA_TO_DEVICE) {
 		qc->tf.flags |= ATA_TFLAG_WRITE;
-		DPRINTK("direction: write\n");
 	}
 
 	qc->tf.command = ATA_CMD_PACKET;

@@ -3556 +3591 @@
 	 */
 
 	if (len != CACHE_MPAGE_LEN - 2) {
-		if (len < CACHE_MPAGE_LEN - 2)
-			*fp = len;
-		else
-			*fp = CACHE_MPAGE_LEN - 2;
+		*fp = min(len, CACHE_MPAGE_LEN - 2);
 		return -EINVAL;
 	}

@@ -3609 +3647 @@
 	 */
 
 	if (len != CONTROL_MPAGE_LEN - 2) {
-		if (len < CONTROL_MPAGE_LEN - 2)
-			*fp = len;
-		else
-			*fp = CONTROL_MPAGE_LEN - 2;
+		*fp = min(len, CONTROL_MPAGE_LEN - 2);
 		return -EINVAL;
 	}

@@ -3656 +3697 @@
 	u8 bp = 0xff;
 	u8 buffer[64];
 	const u8 *p = buffer;
-
-	VPRINTK("ENTER\n");
 
 	six_byte = (cdb[0] == MODE_SELECT);
 	if (six_byte) {

@@ -3954 +3997 @@
 	return NULL;
 }
 
-/**
- *	ata_scsi_dump_cdb - dump SCSI command contents to dmesg
- *	@ap: ATA port to which the command was being sent
- *	@cmd: SCSI command to dump
- *
- *	Prints the contents of a SCSI command via printk().
- */
-void ata_scsi_dump_cdb(struct ata_port *ap, struct scsi_cmnd *cmd)
-{
-#ifdef ATA_VERBOSE_DEBUG
-	struct scsi_device *scsidev = cmd->device;
-
-	VPRINTK("CDB (%u:%d,%d,%lld) %9ph\n",
-		ap->print_id,
-		scsidev->channel, scsidev->id, scsidev->lun,
-		cmd->cmnd);
-#endif
-}
-
 int __ata_scsi_queuecmd(struct scsi_cmnd *scmd, struct ata_device *dev)
 {
 	u8 scsi_op = scmd->cmnd[0];
 	ata_xlat_func_t xlat_func;
-	int rc = 0;
+
+	if (unlikely(!scmd->cmd_len))
+		goto bad_cdb_len;
 
 	if (dev->class == ATA_DEV_ATA || dev->class == ATA_DEV_ZAC) {
-		if (unlikely(!scmd->cmd_len || scmd->cmd_len > dev->cdb_len))
+		if (unlikely(scmd->cmd_len > dev->cdb_len))
 			goto bad_cdb_len;
 
 		xlat_func = ata_get_xlat_func(dev, scsi_op);
-	} else {
-		if (unlikely(!scmd->cmd_len))
+	} else if (likely((scsi_op != ATA_16) || !atapi_passthru16)) {
+		/* relay SCSI command to ATAPI device */
+		int len = COMMAND_SIZE(scsi_op);
+
+		if (unlikely(len > scmd->cmd_len ||
+			     len > dev->cdb_len ||
+			     scmd->cmd_len > ATAPI_CDB_LEN))
 			goto bad_cdb_len;
 
-		xlat_func = NULL;
-		if (likely((scsi_op != ATA_16) || !atapi_passthru16)) {
-			/* relay SCSI command to ATAPI device */
-			int len = COMMAND_SIZE(scsi_op);
-			if (unlikely(len > scmd->cmd_len ||
-				     len > dev->cdb_len ||
-				     scmd->cmd_len > ATAPI_CDB_LEN))
-				goto bad_cdb_len;
+		xlat_func = atapi_xlat;
+	} else {
+		/* ATA_16 passthru, treat as an ATA command */
+		if (unlikely(scmd->cmd_len > 16))
+			goto bad_cdb_len;
 
-			xlat_func = atapi_xlat;
-		} else {
-			/* ATA_16 passthru, treat as an ATA command */
-			if (unlikely(scmd->cmd_len > 16))
-				goto bad_cdb_len;
-
-			xlat_func = ata_get_xlat_func(dev, scsi_op);
-		}
+		xlat_func = ata_get_xlat_func(dev, scsi_op);
 	}
 
 	if (xlat_func)
-		rc = ata_scsi_translate(dev, scmd, xlat_func);
-	else
-		ata_scsi_simulate(dev, scmd);
+		return ata_scsi_translate(dev, scmd, xlat_func);
 
-	return rc;
+	ata_scsi_simulate(dev, scmd);
+
+	return 0;
 
 bad_cdb_len:
-	DPRINTK("bad CDB len=%u, scsi_op=0x%02x, max=%u\n",
-		scmd->cmd_len, scsi_op, dev->cdb_len);
 	scmd->result = DID_ERROR << 16;
 	scsi_done(scmd);
 	return 0;

@@ -4028 +4096 @@
 	ap = ata_shost_to_port(shost);
 
 	spin_lock_irqsave(ap->lock, irq_flags);
-
-	ata_scsi_dump_cdb(ap, cmd);
 
 	dev = ata_scsi_find_dev(ap, scsidev);
 	if (likely(dev))

@@ -4461 +4531 @@
 		container_of(work, struct ata_port, hotplug_task.work);
 	int i;
 
-	if (ap->pflags & ATA_PFLAG_UNLOADING) {
-		DPRINTK("ENTER/EXIT - unloading\n");
+	if (ap->pflags & ATA_PFLAG_UNLOADING)
 		return;
-	}
 
-	DPRINTK("ENTER\n");
 	mutex_lock(&ap->scsi_scan_mutex);
 
 	/* Unplug detached devices.  We cannot use link iterator here

@@ -4479 +4552 @@
 	ata_scsi_scan_host(ap, 0);
 
 	mutex_unlock(&ap->scsi_scan_mutex);
-	DPRINTK("EXIT\n");
 }
 
 /**
drivers/ata/libata-sff.c (+30 -58)
@@ -18 +18 @@
 #include <linux/module.h>
 #include <linux/libata.h>
 #include <linux/highmem.h>
-
+#include <trace/events/libata.h>
 #include "libata.h"
 
 static struct workqueue_struct *ata_sff_wq;

@@ -330 +330 @@
 static void ata_dev_select(struct ata_port *ap, unsigned int device,
 			   unsigned int wait, unsigned int can_sleep)
 {
-	if (ata_msg_probe(ap))
-		ata_port_info(ap, "ata_dev_select: ENTER, device %u, wait %u\n",
-			      device, wait);
-
 	if (wait)
 		ata_wait_idle(ap);

@@ -405 +409 @@
 		iowrite8(tf->hob_lbal, ioaddr->lbal_addr);
 		iowrite8(tf->hob_lbam, ioaddr->lbam_addr);
 		iowrite8(tf->hob_lbah, ioaddr->lbah_addr);
-		VPRINTK("hob: feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n",
-			tf->hob_feature,
-			tf->hob_nsect,
-			tf->hob_lbal,
-			tf->hob_lbam,
-			tf->hob_lbah);
 	}
 
 	if (is_addr) {

@@ -413 +423 @@
 		iowrite8(tf->lbal, ioaddr->lbal_addr);
 		iowrite8(tf->lbam, ioaddr->lbam_addr);
 		iowrite8(tf->lbah, ioaddr->lbah_addr);
-		VPRINTK("feat 0x%X nsect 0x%X lba 0x%X 0x%X 0x%X\n",
-			tf->feature,
-			tf->nsect,
-			tf->lbal,
-			tf->lbam,
-			tf->lbah);
 	}
 
-	if (tf->flags & ATA_TFLAG_DEVICE) {
+	if (tf->flags & ATA_TFLAG_DEVICE)
 		iowrite8(tf->device, ioaddr->device_addr);
-		VPRINTK("device 0x%X\n", tf->device);
-	}
 
 	ata_wait_idle(ap);

@@ -476 +494 @@
  */
 void ata_sff_exec_command(struct ata_port *ap, const struct ata_taskfile *tf)
 {
-	DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command);
-
 	iowrite8(tf->command, ap->ioaddr.command_addr);
 	ata_sff_pause(ap);
 }

@@ -485 +505 @@
  *	ata_tf_to_host - issue ATA taskfile to host controller
  *	@ap: port to which command is being issued
  *	@tf: ATA taskfile register set
+ *	@tag: tag of the associated command
 *
 *	Issues ATA taskfile register set to ATA host controller,
 *	with proper synchronization with interrupt handler and

@@ -495 +514 @@
  *	spin_lock_irqsave(host lock)
  */
 static inline void ata_tf_to_host(struct ata_port *ap,
-				  const struct ata_taskfile *tf)
+				  const struct ata_taskfile *tf,
+				  unsigned int tag)
 {
+	trace_ata_tf_load(ap, tf);
 	ap->ops->sff_tf_load(ap, tf);
+	trace_ata_exec_command(ap, tf, tag);
 	ap->ops->sff_exec_command(ap, tf);
 }

@@ -664 +680 @@
 	page = nth_page(page, (offset >> PAGE_SHIFT));
 	offset %= PAGE_SIZE;
 
-	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
+	trace_ata_sff_pio_transfer_data(qc, offset, qc->sect_size);
 
 	/*
 	 * Split the transfer when it splits a page boundary.  Note that the

@@ -734 +750 @@
 static void atapi_send_cdb(struct ata_port *ap, struct ata_queued_cmd *qc)
 {
 	/* send SCSI cdb */
-	DPRINTK("send cdb\n");
+	trace_atapi_send_cdb(qc, 0, qc->dev->cdb_len);
 	WARN_ON_ONCE(qc->dev->cdb_len < 12);
 
 	ap->ops->sff_data_xfer(qc, qc->cdb, qc->dev->cdb_len, 1);

@@ -752 +768 @@
 	case ATAPI_PROT_DMA:
 		ap->hsm_task_state = HSM_ST_LAST;
 		/* initiate bmdma */
+		trace_ata_bmdma_start(ap, &qc->tf, qc->tag);
 		ap->ops->bmdma_start(qc);
 		break;
 #endif /* CONFIG_ATA_BMDMA */

@@ -805 +820 @@
 	/* don't cross page boundaries */
 	count = min(count, (unsigned int)PAGE_SIZE - offset);
 
-	DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read");
+	trace_atapi_pio_transfer_data(qc, offset, count);
 
 	/* do the actual data transfer */
 	buf = kmap_atomic(page);

@@ -872 +887 @@
 
 	if (unlikely(!bytes))
 		goto atapi_check;
-
-	VPRINTK("ata%u: xfering %d bytes\n", ap->print_id, bytes);
 
 	if (unlikely(__atapi_pio_bytes(qc, bytes)))
 		goto err_out;

@@ -985 +1002 @@
 	WARN_ON_ONCE(in_wq != ata_hsm_ok_in_wq(ap, qc));
 
 fsm_start:
-	DPRINTK("ata%u: protocol %d task_state %d (dev_stat 0x%X)\n",
-		ap->print_id, qc->tf.protocol, ap->hsm_task_state, status);
+	trace_ata_sff_hsm_state(qc, status);
 
 	switch (ap->hsm_task_state) {
 	case HSM_ST_FIRST:

@@ -1186 +1204 @@
 		}
 
 		/* no more data to transfer */
-		DPRINTK("ata%u: dev %u command complete, drv_stat 0x%x\n",
-			ap->print_id, qc->dev->devno, status);
+		trace_ata_sff_hsm_command_complete(qc, status);
 
 		WARN_ON_ONCE(qc->err_mask & (AC_ERR_DEV | AC_ERR_HSM));

@@ -1243 +1262 @@
 
 void ata_sff_flush_pio_task(struct ata_port *ap)
 {
-	DPRINTK("ENTER\n");
+	trace_ata_sff_flush_pio_task(ap);
 
 	cancel_delayed_work_sync(&ap->sff_pio_task);

@@ -1260 +1279 @@
 	spin_unlock_irq(ap->lock);
 
 	ap->sff_pio_task_link = NULL;
-
-	if (ata_msg_ctl(ap))
-		ata_port_dbg(ap, "%s: EXIT\n", __func__);
 }
 
 static void ata_sff_pio_task(struct work_struct *work)

@@ -1354 +1376 @@
 		if (qc->tf.flags & ATA_TFLAG_POLLING)
 			ata_qc_set_polling(qc);
 
-		ata_tf_to_host(ap, &qc->tf);
+		ata_tf_to_host(ap, &qc->tf, qc->tag);
 		ap->hsm_task_state = HSM_ST_LAST;
 
 		if (qc->tf.flags & ATA_TFLAG_POLLING)

@@ -1366 +1388 @@
 		if (qc->tf.flags & ATA_TFLAG_POLLING)
 			ata_qc_set_polling(qc);
 
-		ata_tf_to_host(ap, &qc->tf);
+		ata_tf_to_host(ap, &qc->tf, qc->tag);
1371 1393 if (qc->tf.flags & ATA_TFLAG_WRITE) { 1372 1394 /* PIO data out protocol */ ··· 1396 1418 if (qc->tf.flags & ATA_TFLAG_POLLING) 1397 1419 ata_qc_set_polling(qc); 1398 1420 1399 - ata_tf_to_host(ap, &qc->tf); 1421 + ata_tf_to_host(ap, &qc->tf, qc->tag); 1400 1422 1401 1423 ap->hsm_task_state = HSM_ST_FIRST; 1402 1424 ··· 1456 1478 { 1457 1479 u8 status; 1458 1480 1459 - VPRINTK("ata%u: protocol %d task_state %d\n", 1460 - ap->print_id, qc->tf.protocol, ap->hsm_task_state); 1481 + trace_ata_sff_port_intr(qc, hsmv_on_idle); 1461 1482 1462 1483 /* Check whether we are expecting interrupt in this state */ 1463 1484 switch (ap->hsm_task_state) { ··· 1830 1853 return ATA_DEV_NONE; 1831 1854 1832 1855 /* determine if device is ATA or ATAPI */ 1833 - class = ata_dev_classify(&tf); 1856 + class = ata_port_classify(ap, &tf); 1834 1857 1835 1858 if (class == ATA_DEV_UNKNOWN) { 1836 1859 /* If the device failed diagnostic, it's likely to ··· 1933 1956 { 1934 1957 struct ata_ioports *ioaddr = &ap->ioaddr; 1935 1958 1936 - DPRINTK("ata%u: bus reset via SRST\n", ap->print_id); 1937 - 1938 1959 if (ap->ioaddr.ctl_addr) { 1939 1960 /* software reset. 
causes dev0 to be selected */ 1940 1961 iowrite8(ap->ctl, ioaddr->ctl_addr); ··· 1970 1995 int rc; 1971 1996 u8 err; 1972 1997 1973 - DPRINTK("ENTER\n"); 1974 - 1975 1998 /* determine if device 0/1 are present */ 1976 1999 if (ata_devchk(ap, 0)) 1977 2000 devmask |= (1 << 0); ··· 1980 2007 ap->ops->sff_dev_select(ap, 0); 1981 2008 1982 2009 /* issue bus reset */ 1983 - DPRINTK("about to softreset, devmask=%x\n", devmask); 1984 2010 rc = ata_bus_softreset(ap, devmask, deadline); 1985 2011 /* if link is occupied, -ENODEV too is an error */ 1986 2012 if (rc && (rc != -ENODEV || sata_scr_valid(link))) { ··· 1994 2022 classes[1] = ata_sff_dev_classify(&link->device[1], 1995 2023 devmask & (1 << 1), &err); 1996 2024 1997 - DPRINTK("EXIT, classes[0]=%u [1]=%u\n", classes[0], classes[1]); 1998 2025 return 0; 1999 2026 } 2000 2027 EXPORT_SYMBOL_GPL(ata_sff_softreset); ··· 2026 2055 if (online) 2027 2056 *class = ata_sff_dev_classify(link->device, 1, NULL); 2028 2057 2029 - DPRINTK("EXIT, class=%u\n", *class); 2030 2058 return rc; 2031 2059 } 2032 2060 EXPORT_SYMBOL_GPL(sata_sff_hardreset); ··· 2055 2085 ap->ops->sff_dev_select(ap, 0); 2056 2086 2057 2087 /* bail out if no device is present */ 2058 - if (classes[0] == ATA_DEV_NONE && classes[1] == ATA_DEV_NONE) { 2059 - DPRINTK("EXIT, no device\n"); 2088 + if (classes[0] == ATA_DEV_NONE && classes[1] == ATA_DEV_NONE) 2060 2089 return; 2061 - } 2062 2090 2063 2091 /* set up device control */ 2064 2092 if (ap->ops->sff_set_devctl || ap->ioaddr.ctl_addr) { ··· 2091 2123 && count < 65536; count += 2) 2092 2124 ioread16(ap->ioaddr.data_addr); 2093 2125 2094 - /* Can become DEBUG later */ 2095 2126 if (count) 2096 2127 ata_port_dbg(ap, "drained %d bytes to clear DRQ\n", count); 2097 2128 ··· 2434 2467 struct ata_host *host = NULL; 2435 2468 int rc; 2436 2469 2437 - DPRINTK("ENTER\n"); 2438 - 2439 2470 pi = ata_sff_find_valid_pi(ppi); 2440 2471 if (!pi) { 2441 2472 dev_err(&pdev->dev, "no valid port_info specified\n"); ··· 2579 
2614 2580 2615 prd[pi].addr = cpu_to_le32(addr); 2581 2616 prd[pi].flags_len = cpu_to_le32(len & 0xffff); 2582 - VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", pi, addr, len); 2583 2617 2584 2618 pi++; 2585 2619 sg_len -= len; ··· 2638 2674 prd[++pi].addr = cpu_to_le32(addr + 0x8000); 2639 2675 } 2640 2676 prd[pi].flags_len = cpu_to_le32(blen); 2641 - VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", pi, addr, len); 2642 2677 2643 2678 pi++; 2644 2679 sg_len -= len; ··· 2719 2756 case ATA_PROT_DMA: 2720 2757 WARN_ON_ONCE(qc->tf.flags & ATA_TFLAG_POLLING); 2721 2758 2759 + trace_ata_tf_load(ap, &qc->tf); 2722 2760 ap->ops->sff_tf_load(ap, &qc->tf); /* load tf registers */ 2761 + trace_ata_bmdma_setup(ap, &qc->tf, qc->tag); 2723 2762 ap->ops->bmdma_setup(qc); /* set up bmdma */ 2763 + trace_ata_bmdma_start(ap, &qc->tf, qc->tag); 2724 2764 ap->ops->bmdma_start(qc); /* initiate bmdma */ 2725 2765 ap->hsm_task_state = HSM_ST_LAST; 2726 2766 break; ··· 2731 2765 case ATAPI_PROT_DMA: 2732 2766 WARN_ON_ONCE(qc->tf.flags & ATA_TFLAG_POLLING); 2733 2767 2768 + trace_ata_tf_load(ap, &qc->tf); 2734 2769 ap->ops->sff_tf_load(ap, &qc->tf); /* load tf registers */ 2770 + trace_ata_bmdma_setup(ap, &qc->tf, qc->tag); 2735 2771 ap->ops->bmdma_setup(qc); /* set up bmdma */ 2736 2772 ap->hsm_task_state = HSM_ST_FIRST; 2737 2773 ··· 2774 2806 if (ap->hsm_task_state == HSM_ST_LAST && ata_is_dma(qc->tf.protocol)) { 2775 2807 /* check status of DMA engine */ 2776 2808 host_stat = ap->ops->bmdma_status(ap); 2777 - VPRINTK("ata%u: host_stat 0x%X\n", ap->print_id, host_stat); 2809 + trace_ata_bmdma_status(ap, host_stat); 2778 2810 2779 2811 /* if it's not our irq... 
*/ 2780 2812 if (!(host_stat & ATA_DMA_INTR)) 2781 2813 return ata_sff_idle_irq(ap); 2782 2814 2783 2815 /* before we do anything else, clear DMA-Start bit */ 2816 + trace_ata_bmdma_stop(ap, &qc->tf, qc->tag); 2784 2817 ap->ops->bmdma_stop(qc); 2785 2818 bmdma_stopped = true; 2786 2819 ··· 2850 2881 u8 host_stat; 2851 2882 2852 2883 host_stat = ap->ops->bmdma_status(ap); 2884 + trace_ata_bmdma_status(ap, host_stat); 2853 2885 2854 2886 /* BMDMA controllers indicate host bus error by 2855 2887 * setting DMA_ERR bit and timing out. As it wasn't ··· 2862 2892 thaw = true; 2863 2893 } 2864 2894 2895 + trace_ata_bmdma_stop(ap, &qc->tf, qc->tag); 2865 2896 ap->ops->bmdma_stop(qc); 2866 2897 2867 2898 /* if we're gonna thaw, make sure IRQ is clear */ ··· 2896 2925 2897 2926 if (ata_is_dma(qc->tf.protocol)) { 2898 2927 spin_lock_irqsave(ap->lock, flags); 2928 + trace_ata_bmdma_stop(ap, &qc->tf, qc->tag); 2899 2929 ap->ops->bmdma_stop(qc); 2900 2930 spin_unlock_irqrestore(ap->lock, flags); 2901 2931 }
+47
drivers/ata/libata-trace.c
··· 39 39 } 40 40 41 41 const char * 42 + libata_trace_parse_host_stat(struct trace_seq *p, unsigned char host_stat) 43 + { 44 + const char *ret = trace_seq_buffer_ptr(p); 45 + 46 + trace_seq_printf(p, "{ "); 47 + if (host_stat & ATA_DMA_INTR) 48 + trace_seq_printf(p, "INTR "); 49 + if (host_stat & ATA_DMA_ERR) 50 + trace_seq_printf(p, "ERR "); 51 + if (host_stat & ATA_DMA_ACTIVE) 52 + trace_seq_printf(p, "ACTIVE "); 53 + trace_seq_putc(p, '}'); 54 + trace_seq_putc(p, 0); 55 + 56 + return ret; 57 + } 58 + 59 + const char * 42 60 libata_trace_parse_eh_action(struct trace_seq *p, unsigned int eh_action) 43 61 { 44 62 const char *ret = trace_seq_buffer_ptr(p); ··· 148 130 trace_seq_printf(p, "SENSE_VALID "); 149 131 if (qc_flags & ATA_QCFLAG_EH_SCHEDULED) 150 132 trace_seq_printf(p, "EH_SCHEDULED "); 133 + trace_seq_putc(p, '}'); 134 + } 135 + trace_seq_putc(p, 0); 136 + 137 + return ret; 138 + } 139 + 140 + const char * 141 + libata_trace_parse_tf_flags(struct trace_seq *p, unsigned int tf_flags) 142 + { 143 + const char *ret = trace_seq_buffer_ptr(p); 144 + 145 + trace_seq_printf(p, "%x", tf_flags); 146 + if (tf_flags) { 147 + trace_seq_printf(p, "{ "); 148 + if (tf_flags & ATA_TFLAG_LBA48) 149 + trace_seq_printf(p, "LBA48 "); 150 + if (tf_flags & ATA_TFLAG_ISADDR) 151 + trace_seq_printf(p, "ISADDR "); 152 + if (tf_flags & ATA_TFLAG_DEVICE) 153 + trace_seq_printf(p, "DEV "); 154 + if (tf_flags & ATA_TFLAG_WRITE) 155 + trace_seq_printf(p, "WRITE "); 156 + if (tf_flags & ATA_TFLAG_LBA) 157 + trace_seq_printf(p, "LBA "); 158 + if (tf_flags & ATA_TFLAG_FUA) 159 + trace_seq_printf(p, "FUA "); 160 + if (tf_flags & ATA_TFLAG_POLLING) 161 + trace_seq_printf(p, "POLL "); 151 162 trace_seq_putc(p, '}'); 152 163 } 153 164 trace_seq_putc(p, 0);
+39 -9
drivers/ata/libata-transport.c
··· 163 163 { AC_ERR_INVALID, "InvalidArg" }, 164 164 { AC_ERR_OTHER, "Unknown" }, 165 165 { AC_ERR_NODEV_HINT, "NoDeviceHint" }, 166 - { AC_ERR_NCQ, "NCQError" } 166 + { AC_ERR_NCQ, "NCQError" } 167 167 }; 168 168 ata_bitfield_name_match(err, ata_err_names) 169 169 ··· 321 321 return error; 322 322 } 323 323 324 + /** 325 + * ata_port_classify - determine device type based on ATA-spec signature 326 + * @ap: ATA port device on which the classification should be run 327 + * @tf: ATA taskfile register set for device to be identified 328 + * 329 + * A wrapper around ata_dev_classify() to provide additional logging 330 + * 331 + * RETURNS: 332 + * Device type, %ATA_DEV_ATA, %ATA_DEV_ATAPI, %ATA_DEV_PMP, 333 + * %ATA_DEV_ZAC, or %ATA_DEV_UNKNOWN the event of failure. 334 + */ 335 + unsigned int ata_port_classify(struct ata_port *ap, 336 + const struct ata_taskfile *tf) 337 + { 338 + int i; 339 + unsigned int class = ata_dev_classify(tf); 340 + 341 + /* Start with index '1' to skip the 'unknown' entry */ 342 + for (i = 1; i < ARRAY_SIZE(ata_class_names); i++) { 343 + if (ata_class_names[i].value == class) { 344 + ata_port_dbg(ap, "found %s device by sig\n", 345 + ata_class_names[i].name); 346 + return class; 347 + } 348 + } 349 + 350 + ata_port_info(ap, "found unknown device (class %u)\n", class); 351 + return class; 352 + } 353 + EXPORT_SYMBOL_GPL(ata_port_classify); 324 354 325 355 /* 326 356 * ATA link attributes 327 357 */ 328 358 static int noop(int x) { return x; } 329 359 330 - #define ata_link_show_linkspeed(field, format) \ 360 + #define ata_link_show_linkspeed(field, format) \ 331 361 static ssize_t \ 332 362 show_ata_link_##field(struct device *dev, \ 333 363 struct device_attribute *attr, char *buf) \ ··· 446 416 dev->release = ata_tlink_release; 447 417 if (ata_is_host_link(link)) 448 418 dev_set_name(dev, "link%d", ap->print_id); 449 - else 419 + else 450 420 dev_set_name(dev, "link%d.%d", ap->print_id, link->pmp); 451 421 452 422 
transport_setup_device(dev); ··· 502 472 ata_dev_attr(xfer, xfer_mode); 503 473 504 474 505 - #define ata_dev_show_simple(field, format_string, cast) \ 475 + #define ata_dev_show_simple(field, format_string, cast) \ 506 476 static ssize_t \ 507 477 show_ata_dev_##field(struct device *dev, \ 508 478 struct device_attribute *attr, char *buf) \ ··· 512 482 return scnprintf(buf, 20, format_string, cast ata_dev->field); \ 513 483 } 514 484 515 - #define ata_dev_simple_attr(field, format_string, type) \ 485 + #define ata_dev_simple_attr(field, format_string, type) \ 516 486 ata_dev_show_simple(field, format_string, (type)) \ 517 - static DEVICE_ATTR(field, S_IRUGO, \ 487 + static DEVICE_ATTR(field, S_IRUGO, \ 518 488 show_ata_dev_##field, NULL) 519 489 520 490 ata_dev_simple_attr(spdn_cnt, "%d\n", int); ··· 532 502 533 503 seconds = div_u64_rem(ent->timestamp, HZ, &rem); 534 504 arg->written += sprintf(arg->buf + arg->written, 535 - "[%5llu.%09lu]", seconds, 505 + "[%5llu.%09lu]", seconds, 536 506 rem * NSEC_PER_SEC / HZ); 537 507 arg->written += get_ata_err_names(ent->err_mask, 538 508 arg->buf + arg->written); ··· 697 667 dev->release = ata_tdev_release; 698 668 if (ata_is_host_link(link)) 699 669 dev_set_name(dev, "dev%d.%d", ap->print_id,ata_dev->devno); 700 - else 670 + else 701 671 dev_set_name(dev, "dev%d.%d.0", ap->print_id, link->pmp); 702 672 703 673 transport_setup_device(dev); ··· 719 689 */ 720 690 721 691 #define SETUP_TEMPLATE(attrb, field, perm, test) \ 722 - i->private_##attrb[count] = dev_attr_##field; \ 692 + i->private_##attrb[count] = dev_attr_##field; \ 723 693 i->private_##attrb[count].attr.mode = perm; \ 724 694 i->attrb[count] = &i->private_##attrb[count]; \ 725 695 if (test) \
+2 -3
drivers/ata/libata.h
··· 148 148 unsigned int id, u64 lun); 149 149 void ata_scsi_sdev_config(struct scsi_device *sdev); 150 150 int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev); 151 - void ata_scsi_dump_cdb(struct ata_port *ap, struct scsi_cmnd *cmd); 152 151 int __ata_scsi_queuecmd(struct scsi_cmnd *scmd, struct ata_device *dev); 153 152 154 153 /* libata-eh.c */ ··· 165 166 extern void ata_eh_done(struct ata_link *link, struct ata_device *dev, 166 167 unsigned int action); 167 168 extern void ata_eh_autopsy(struct ata_port *ap); 168 - const char *ata_get_cmd_descript(u8 command); 169 + const char *ata_get_cmd_name(u8 command); 169 170 extern void ata_eh_report(struct ata_port *ap); 170 171 extern int ata_eh_reset(struct ata_link *link, int classify, 171 172 ata_prereset_fn_t prereset, ata_reset_fn_t softreset, ··· 178 179 extern void ata_eh_finish(struct ata_port *ap); 179 180 extern int ata_ering_map(struct ata_ering *ering, 180 181 int (*map_fn)(struct ata_ering_entry *, void *), 181 - void *arg); 182 + void *arg); 182 183 extern unsigned int atapi_eh_tur(struct ata_device *dev, u8 *r_sense_key); 183 184 extern unsigned int atapi_eh_request_sense(struct ata_device *dev, 184 185 u8 *sense_buf, u8 dfl_sense_key);
+2 -2
drivers/ata/pata_ali.c
··· 37 37 #define DRV_NAME "pata_ali" 38 38 #define DRV_VERSION "0.7.8" 39 39 40 - static int ali_atapi_dma = 0; 40 + static int ali_atapi_dma; 41 41 module_param_named(atapi_dma, ali_atapi_dma, int, 0644); 42 42 MODULE_PARM_DESC(atapi_dma, "Enable ATAPI DMA (0=disable, 1=enable)"); 43 43 ··· 123 123 mask &= ~(ATA_MASK_MWDMA | ATA_MASK_UDMA); 124 124 ata_id_c_string(adev->id, model_num, ATA_ID_PROD, sizeof(model_num)); 125 125 if (strstr(model_num, "WDC")) 126 - return mask &= ~ATA_MASK_UDMA; 126 + mask &= ~ATA_MASK_UDMA; 127 127 return mask; 128 128 } 129 129
+3
drivers/ata/pata_arasan_cf.c
··· 39 39 #include <linux/spinlock.h> 40 40 #include <linux/types.h> 41 41 #include <linux/workqueue.h> 42 + #include <trace/events/libata.h> 42 43 43 44 #define DRIVER_NAME "arasan_cf" 44 45 #define TIMEOUT msecs_to_jiffies(3000) ··· 704 703 case ATA_PROT_DMA: 705 704 WARN_ON_ONCE(qc->tf.flags & ATA_TFLAG_POLLING); 706 705 706 + trace_ata_tf_load(ap, &qc->tf); 707 707 ap->ops->sff_tf_load(ap, &qc->tf); 708 708 acdev->dma_status = 0; 709 709 acdev->qc = qc; 710 + trace_ata_bmdma_start(ap, &qc->tf, qc->tag); 710 711 arasan_cf_dma_start(acdev); 711 712 ap->hsm_task_state = HSM_ST_LAST; 712 713 break;
+50 -55
drivers/ata/pata_atp867x.c
··· 155 155 case 1 ... 6: 156 156 break; 157 157 default: 158 - printk(KERN_WARNING "ATP867X: active %dclk is invalid. " 158 + ata_port_warn(ap, "ATP867X: active %dclk is invalid. " 159 159 "Using 12clk.\n", clk); 160 160 fallthrough; 161 161 case 9 ... 12: ··· 171 171 return clocks << ATP867X_IO_PIOSPD_ACTIVE_SHIFT; 172 172 } 173 173 174 - static int atp867x_get_recover_clocks_shifted(unsigned int clk) 174 + static int atp867x_get_recover_clocks_shifted(struct ata_port *ap, 175 + unsigned int clk) 175 176 { 176 177 unsigned char clocks = clk; 177 178 ··· 189 188 case 15: 190 189 break; 191 190 default: 192 - printk(KERN_WARNING "ATP867X: recover %dclk is invalid. " 191 + ata_port_warn(ap, "ATP867X: recover %dclk is invalid. " 193 192 "Using default 12clk.\n", clk); 194 193 fallthrough; 195 194 case 12: /* default 12 clk */ ··· 226 225 iowrite8(b, dp->dma_mode); 227 226 228 227 b = atp867x_get_active_clocks_shifted(ap, t.active) | 229 - atp867x_get_recover_clocks_shifted(t.recover); 228 + atp867x_get_recover_clocks_shifted(ap, t.recover); 230 229 231 230 if (adev->devno & 1) 232 231 iowrite8(b, dp->slave_piospd); ··· 234 233 iowrite8(b, dp->mstr_piospd); 235 234 236 235 b = atp867x_get_active_clocks_shifted(ap, t.act8b) | 237 - atp867x_get_recover_clocks_shifted(t.rec8b); 236 + atp867x_get_recover_clocks_shifted(ap, t.rec8b); 238 237 239 238 iowrite8(b, dp->eightb_piospd); 240 239 } ··· 271 270 }; 272 271 273 272 274 - #ifdef ATP867X_DEBUG 275 273 static void atp867x_check_res(struct pci_dev *pdev) 276 274 { 277 275 int i; ··· 280 280 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 281 281 start = pci_resource_start(pdev, i); 282 282 len = pci_resource_len(pdev, i); 283 - printk(KERN_DEBUG "ATP867X: resource start:len=%lx:%lx\n", 283 + dev_dbg(&pdev->dev, "ATP867X: resource start:len=%lx:%lx\n", 284 284 start, len); 285 285 } 286 286 } ··· 290 290 struct ata_ioports *ioaddr = &ap->ioaddr; 291 291 struct atp867x_priv *dp = ap->private_data; 292 292 293 - 
printk(KERN_DEBUG "ATP867X: port[%d] addresses\n" 294 - " cmd_addr =0x%llx, 0x%llx\n" 295 - " ctl_addr =0x%llx, 0x%llx\n" 296 - " bmdma_addr =0x%llx, 0x%llx\n" 297 - " data_addr =0x%llx\n" 298 - " error_addr =0x%llx\n" 299 - " feature_addr =0x%llx\n" 300 - " nsect_addr =0x%llx\n" 301 - " lbal_addr =0x%llx\n" 302 - " lbam_addr =0x%llx\n" 303 - " lbah_addr =0x%llx\n" 304 - " device_addr =0x%llx\n" 305 - " status_addr =0x%llx\n" 306 - " command_addr =0x%llx\n" 307 - " dp->dma_mode =0x%llx\n" 308 - " dp->mstr_piospd =0x%llx\n" 309 - " dp->slave_piospd =0x%llx\n" 310 - " dp->eightb_piospd =0x%llx\n" 293 + ata_port_dbg(ap, "ATP867X: port[%d] addresses\n" 294 + " cmd_addr =0x%lx, 0x%lx\n" 295 + " ctl_addr =0x%lx, 0x%lx\n" 296 + " bmdma_addr =0x%lx, 0x%lx\n" 297 + " data_addr =0x%lx\n" 298 + " error_addr =0x%lx\n" 299 + " feature_addr =0x%lx\n" 300 + " nsect_addr =0x%lx\n" 301 + " lbal_addr =0x%lx\n" 302 + " lbam_addr =0x%lx\n" 303 + " lbah_addr =0x%lx\n" 304 + " device_addr =0x%lx\n" 305 + " status_addr =0x%lx\n" 306 + " command_addr =0x%lx\n" 307 + " dp->dma_mode =0x%lx\n" 308 + " dp->mstr_piospd =0x%lx\n" 309 + " dp->slave_piospd =0x%lx\n" 310 + " dp->eightb_piospd =0x%lx\n" 311 311 " dp->pci66mhz =0x%lx\n", 312 312 port, 313 - (unsigned long long)ioaddr->cmd_addr, 314 - (unsigned long long)ATP867X_IO_PORTBASE(ap, port), 315 - (unsigned long long)ioaddr->ctl_addr, 316 - (unsigned long long)ATP867X_IO_ALTSTATUS(ap, port), 317 - (unsigned long long)ioaddr->bmdma_addr, 318 - (unsigned long long)ATP867X_IO_DMABASE(ap, port), 319 - (unsigned long long)ioaddr->data_addr, 320 - (unsigned long long)ioaddr->error_addr, 321 - (unsigned long long)ioaddr->feature_addr, 322 - (unsigned long long)ioaddr->nsect_addr, 323 - (unsigned long long)ioaddr->lbal_addr, 324 - (unsigned long long)ioaddr->lbam_addr, 325 - (unsigned long long)ioaddr->lbah_addr, 326 - (unsigned long long)ioaddr->device_addr, 327 - (unsigned long long)ioaddr->status_addr, 328 - (unsigned long 
long)ioaddr->command_addr, 329 - (unsigned long long)dp->dma_mode, 330 - (unsigned long long)dp->mstr_piospd, 331 - (unsigned long long)dp->slave_piospd, 332 - (unsigned long long)dp->eightb_piospd, 313 + (unsigned long)ioaddr->cmd_addr, 314 + (unsigned long)ATP867X_IO_PORTBASE(ap, port), 315 + (unsigned long)ioaddr->ctl_addr, 316 + (unsigned long)ATP867X_IO_ALTSTATUS(ap, port), 317 + (unsigned long)ioaddr->bmdma_addr, 318 + (unsigned long)ATP867X_IO_DMABASE(ap, port), 319 + (unsigned long)ioaddr->data_addr, 320 + (unsigned long)ioaddr->error_addr, 321 + (unsigned long)ioaddr->feature_addr, 322 + (unsigned long)ioaddr->nsect_addr, 323 + (unsigned long)ioaddr->lbal_addr, 324 + (unsigned long)ioaddr->lbam_addr, 325 + (unsigned long)ioaddr->lbah_addr, 326 + (unsigned long)ioaddr->device_addr, 327 + (unsigned long)ioaddr->status_addr, 328 + (unsigned long)ioaddr->command_addr, 329 + (unsigned long)dp->dma_mode, 330 + (unsigned long)dp->mstr_piospd, 331 + (unsigned long)dp->slave_piospd, 332 + (unsigned long)dp->eightb_piospd, 333 333 (unsigned long)dp->pci66mhz); 334 334 } 335 - #endif 336 335 337 336 static int atp867x_set_priv(struct ata_port *ap) 338 337 { ··· 369 370 if (v < 0x80) { 370 371 v = 0x80; 371 372 pci_write_config_byte(pdev, PCI_LATENCY_TIMER, v); 372 - printk(KERN_DEBUG "ATP867X: set latency timer of device %s" 373 - " to %d\n", pci_name(pdev), v); 373 + dev_dbg(&pdev->dev, "ATP867X: set latency timer to %d\n", v); 374 374 } 375 375 376 376 /* ··· 417 419 return rc; 418 420 host->iomap = pcim_iomap_table(pdev); 419 421 420 - #ifdef ATP867X_DEBUG 421 422 atp867x_check_res(pdev); 422 423 423 424 for (i = 0; i < PCI_STD_NUM_BARS; i++) 424 - printk(KERN_DEBUG "ATP867X: iomap[%d]=0x%llx\n", i, 425 - (unsigned long long)(host->iomap[i])); 426 - #endif 425 + dev_dbg(gdev, "ATP867X: iomap[%d]=0x%p\n", i, 426 + host->iomap[i]); 427 427 428 428 /* 429 429 * request, iomap BARs and init port addresses accordingly ··· 440 444 if (rc) 441 445 return rc; 442 446 443 
- #ifdef ATP867X_DEBUG 444 447 atp867x_check_ports(ap, i); 445 - #endif 448 + 446 449 ata_port_desc(ap, "cmd 0x%lx ctl 0x%lx", 447 450 (unsigned long)ioaddr->cmd_addr, 448 451 (unsigned long)ioaddr->ctl_addr); ··· 481 486 if (rc) 482 487 return rc; 483 488 484 - printk(KERN_INFO "ATP867X: ATP867 ATA UDMA133 controller (rev %02X)", 489 + dev_info(&pdev->dev, "ATP867X: ATP867 ATA UDMA133 controller (rev %02X)", 485 490 pdev->device); 486 491 487 492 host = ata_host_alloc_pinfo(&pdev->dev, ppi, ATP867X_NUM_PORTS);
+1 -1
drivers/ata/pata_cmd640.c
··· 61 61 struct ata_device *pair = ata_dev_pair(adev); 62 62 63 63 if (ata_timing_compute(adev, adev->pio_mode, &t, T, 0) < 0) { 64 - printk(KERN_ERR DRV_NAME ": mode computation failed.\n"); 64 + ata_dev_err(adev, DRV_NAME ": mode computation failed.\n"); 65 65 return; 66 66 } 67 67
+2 -2
drivers/ata/pata_cmd64x.c
··· 116 116 /* ata_timing_compute is smart and will produce timings for MWDMA 117 117 that don't violate the drives PIO capabilities. */ 118 118 if (ata_timing_compute(adev, mode, &t, T, 0) < 0) { 119 - printk(KERN_ERR DRV_NAME ": mode computation failed.\n"); 119 + ata_dev_err(adev, DRV_NAME ": mode computation failed.\n"); 120 120 return; 121 121 } 122 122 if (ap->port_no) { ··· 130 130 } 131 131 } 132 132 133 - printk(KERN_DEBUG DRV_NAME ": active %d recovery %d setup %d.\n", 133 + ata_dev_dbg(adev, DRV_NAME ": active %d recovery %d setup %d.\n", 134 134 t.active, t.recover, t.setup); 135 135 if (t.recover > 16) { 136 136 t.active += t.recover - 16;
+2 -2
drivers/ata/pata_cs5520.c
··· 153 153 154 154 /* Perform set up for DMA */ 155 155 if (pci_enable_device_io(pdev)) { 156 - printk(KERN_ERR DRV_NAME ": unable to configure BAR2.\n"); 156 + dev_err(&pdev->dev, "unable to configure BAR2.\n"); 157 157 return -ENODEV; 158 158 } 159 159 160 160 if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) { 161 - printk(KERN_ERR DRV_NAME ": unable to configure DMA mask.\n"); 161 + dev_err(&pdev->dev, "unable to configure DMA mask.\n"); 162 162 return -ENODEV; 163 163 } 164 164
+2 -2
drivers/ata/pata_cs5536.c
··· 263 263 ppi[1] = &ata_dummy_port_info; 264 264 265 265 if (use_msr) 266 - printk(KERN_ERR DRV_NAME ": Using MSR regs instead of PCI\n"); 266 + dev_err(&dev->dev, DRV_NAME ": Using MSR regs instead of PCI\n"); 267 267 268 268 cs5536_read(dev, CFG, &cfg); 269 269 270 270 if ((cfg & IDE_CFG_CHANEN) == 0) { 271 - printk(KERN_ERR DRV_NAME ": disabled by BIOS\n"); 271 + dev_err(&dev->dev, DRV_NAME ": disabled by BIOS\n"); 272 272 return -ENODEV; 273 273 } 274 274
+1 -1
drivers/ata/pata_cypress.c
··· 62 62 u32 addr; 63 63 64 64 if (ata_timing_compute(adev, adev->pio_mode, &t, T, 1) < 0) { 65 - printk(KERN_ERR DRV_NAME ": mome computation failed.\n"); 65 + ata_dev_err(adev, DRV_NAME ": mome computation failed.\n"); 66 66 return; 67 67 } 68 68
-1
drivers/ata/pata_ep93xx.c
··· 855 855 && count < 65536; count += 2) 856 856 ep93xx_pata_read_reg(drv_data, IDECTRL_ADDR_DATA); 857 857 858 - /* Can become DEBUG later */ 859 858 if (count) 860 859 ata_port_dbg(ap, "drained %d bytes to clear DRQ.\n", count); 861 860
+1 -4
drivers/ata/pata_hpt366.c
··· 14 14 * TODO 15 15 * Look into engine reset on timeout errors. Should not be required. 16 16 */ 17 - 18 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 19 - 20 17 #include <linux/kernel.h> 21 18 #include <linux/module.h> 22 19 #include <linux/pci.h> ··· 180 183 181 184 i = match_string(list, -1, model_num); 182 185 if (i >= 0) { 183 - pr_warn("%s is not supported for %s\n", modestr, list[i]); 186 + ata_dev_warn(dev, "%s is not supported for %s\n", modestr, list[i]); 184 187 return 1; 185 188 } 186 189 return 0;
+10 -10
drivers/ata/pata_hpt37x.c
··· 14 14 * TODO 15 15 * Look into engine reset on timeout errors. Should not be required. 16 16 */ 17 - 18 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 19 - 20 17 #include <linux/kernel.h> 21 18 #include <linux/module.h> 22 19 #include <linux/pci.h> ··· 228 231 229 232 i = match_string(list, -1, model_num); 230 233 if (i >= 0) { 231 - pr_warn("%s is not supported for %s\n", modestr, list[i]); 234 + ata_dev_warn(dev, "%s is not supported for %s\n", 235 + modestr, list[i]); 232 236 return 1; 233 237 } 234 238 return 0; ··· 862 864 chip_table = &hpt372; 863 865 break; 864 866 default: 865 - pr_err("Unknown HPT366 subtype, please report (%d)\n", 867 + dev_err(&dev->dev, 868 + "Unknown HPT366 subtype, please report (%d)\n", 866 869 rev); 867 870 return -ENODEV; 868 871 } ··· 904 905 *ppi = &info_hpt374_fn1; 905 906 break; 906 907 default: 907 - pr_err("PCI table is bogus, please report (%d)\n", dev->device); 908 + dev_err(&dev->dev, "PCI table is bogus, please report (%d)\n", 909 + dev->device); 908 910 return -ENODEV; 909 911 } 910 912 /* Ok so this is a chip we support */ ··· 953 953 u8 sr; 954 954 u32 total = 0; 955 955 956 - pr_warn("BIOS has not set timing clocks\n"); 956 + dev_warn(&dev->dev, "BIOS has not set timing clocks\n"); 957 957 958 958 /* This is the process the HPT371 BIOS is reported to use */ 959 959 for (i = 0; i < 128; i++) { ··· 1009 1009 (f_high << 16) | f_low | 0x100); 1010 1010 } 1011 1011 if (adjust == 8) { 1012 - pr_err("DPLL did not stabilize!\n"); 1012 + dev_err(&dev->dev, "DPLL did not stabilize!\n"); 1013 1013 return -ENODEV; 1014 1014 } 1015 1015 if (dpll == 3) ··· 1017 1017 else 1018 1018 private_data = (void *)hpt37x_timings_50; 1019 1019 1020 - pr_info("bus clock %dMHz, using %dMHz DPLL\n", 1020 + dev_info(&dev->dev, "bus clock %dMHz, using %dMHz DPLL\n", 1021 1021 MHz[clock_slot], MHz[dpll]); 1022 1022 } else { 1023 1023 private_data = (void *)chip_table->clocks[clock_slot]; ··· 1032 1032 if (clock_slot < 2 && ppi[0] == 
&info_hpt370a) 1033 1033 ppi[0] = &info_hpt370a_33; 1034 1034 1035 - pr_info("%s using %dMHz bus clock\n", 1035 + dev_info(&dev->dev, "%s using %dMHz bus clock\n", 1036 1036 chip_table->name, MHz[clock_slot]); 1037 1037 } 1038 1038
+5 -7
drivers/ata/pata_hpt3x2n.c
··· 15 15 * TODO 16 16 * Work out best PLL policy 17 17 */ 18 - 19 - #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 20 - 21 18 #include <linux/kernel.h> 22 19 #include <linux/module.h> 23 20 #include <linux/pci.h> ··· 417 420 u16 sr; 418 421 u32 total = 0; 419 422 420 - pr_warn("BIOS clock data not set\n"); 423 + dev_warn(&pdev->dev, "BIOS clock data not set\n"); 421 424 422 425 /* This is the process the HPT371 BIOS is reported to use */ 423 426 for (i = 0; i < 128; i++) { ··· 527 530 ppi[0] = &info_hpt372n; 528 531 break; 529 532 default: 530 - pr_err("PCI table is bogus, please report (%d)\n", dev->device); 533 + dev_err(&dev->dev,"PCI table is bogus, please report (%d)\n", 534 + dev->device); 531 535 return -ENODEV; 532 536 } 533 537 ··· 577 579 pci_write_config_dword(dev, 0x5C, (f_high << 16) | f_low); 578 580 } 579 581 if (adjust == 8) { 580 - pr_err("DPLL did not stabilize!\n"); 582 + dev_err(&dev->dev, "DPLL did not stabilize!\n"); 581 583 return -ENODEV; 582 584 } 583 585 584 - pr_info("bus clock %dMHz, using 66MHz DPLL\n", pci_mhz); 586 + dev_info(&dev->dev, "bus clock %dMHz, using 66MHz DPLL\n", pci_mhz); 585 587 586 588 /* 587 589 * Set our private data up. We only need a few flags
+35 -31
drivers/ata/pata_it821x.c
··· 431 431 case ATA_CMD_SET_FEATURES: 432 432 return ata_bmdma_qc_issue(qc); 433 433 } 434 - printk(KERN_DEBUG "it821x: can't process command 0x%02X\n", qc->tf.command); 434 + ata_dev_dbg(qc->dev, "it821x: can't process command 0x%02X\n", 435 + qc->tf.command); 435 436 return AC_ERR_DEV; 436 437 } 437 438 ··· 508 507 509 508 if (strstr(model_num, "Integrated Technology Express")) { 510 509 /* RAID mode */ 511 - ata_dev_info(adev, "%sRAID%d volume", 512 - adev->id[147] ? "Bootable " : "", 513 - adev->id[129]); 514 - if (adev->id[129] != 1) 515 - pr_cont("(%dK stripe)", adev->id[146]); 516 - pr_cont("\n"); 510 + if (adev->id[129] == 1) 511 + ata_dev_info(adev, "%sRAID%d volume\n", 512 + adev->id[147] ? "Bootable " : "", 513 + adev->id[129]); 514 + else 515 + ata_dev_info(adev, "%sRAID%d volume (%dK stripe)\n", 516 + adev->id[147] ? "Bootable " : "", 517 + adev->id[129], adev->id[146]); 517 518 } 518 519 /* This is a controller firmware triggered funny, don't 519 520 report the drive faulty! 
*/ ··· 537 534 */ 538 535 539 536 static unsigned int it821x_read_id(struct ata_device *adev, 540 - struct ata_taskfile *tf, u16 *id) 537 + struct ata_taskfile *tf, __le16 *id) 541 538 { 542 539 unsigned int err_mask; 543 540 unsigned char model_num[ATA_ID_PROD_LEN + 1]; ··· 545 542 err_mask = ata_do_dev_read_id(adev, tf, id); 546 543 if (err_mask) 547 544 return err_mask; 548 - ata_id_c_string(id, model_num, ATA_ID_PROD, sizeof(model_num)); 545 + ata_id_c_string((u16 *)id, model_num, ATA_ID_PROD, sizeof(model_num)); 549 546 550 - id[83] &= ~(1 << 12); /* Cache flush is firmware handled */ 551 - id[83] &= ~(1 << 13); /* Ditto for LBA48 flushes */ 552 - id[84] &= ~(1 << 6); /* No FUA */ 553 - id[85] &= ~(1 << 10); /* No HPA */ 554 - id[76] = 0; /* No NCQ/AN etc */ 547 + id[83] &= cpu_to_le16(~(1 << 12)); /* Cache flush is firmware handled */ 548 + id[84] &= cpu_to_le16(~(1 << 6)); /* No FUA */ 549 + id[85] &= cpu_to_le16(~(1 << 10)); /* No HPA */ 550 + id[76] = 0; /* No NCQ/AN etc */ 555 551 556 552 if (strstr(model_num, "Integrated Technology Express")) { 557 553 /* Set feature bits the firmware neglects */ 558 - id[49] |= 0x0300; /* LBA, DMA */ 559 - id[83] &= 0x7FFF; 560 - id[83] |= 0x4400; /* Word 83 is valid and LBA48 */ 561 - id[86] |= 0x0400; /* LBA48 on */ 562 - id[ATA_ID_MAJOR_VER] |= 0x1F; 554 + id[49] |= cpu_to_le16(0x0300); /* LBA, DMA */ 555 + id[83] &= cpu_to_le16(0x7FFF); 556 + id[83] |= cpu_to_le16(0x4400); /* Word 83 is valid and LBA48 */ 557 + id[86] |= cpu_to_le16(0x0400); /* LBA48 on */ 558 + id[ATA_ID_MAJOR_VER] |= cpu_to_le16(0x1F); 563 559 /* Clear the serial number because it's different each boot 564 560 which breaks validation on resume */ 565 561 memset(&id[ATA_ID_SERNO], 0x20, ATA_ID_SERNO_LEN); ··· 595 593 596 594 /** 597 595 * it821x_display_disk - display disk setup 596 + * @ap: ATA port 598 597 * @n: Device number 599 598 * @buf: Buffer block from firmware 600 599 * ··· 603 600 * by the firmware. 
604 601 */ 605 602 606 - static void it821x_display_disk(int n, u8 *buf) 603 + static void it821x_display_disk(struct ata_port *ap, int n, u8 *buf) 607 604 { 608 605 unsigned char id[41]; 609 606 int mode = 0; ··· 636 633 else 637 634 strcpy(mbuf, "PIO"); 638 635 if (buf[52] == 4) 639 - printk(KERN_INFO "%d: %-6s %-8s %s %s\n", 636 + ata_port_info(ap, "%d: %-6s %-8s %s %s\n", 640 637 n, mbuf, types[buf[52]], id, cbl); 641 638 else 642 - printk(KERN_INFO "%d: %-6s %-8s Volume: %1d %s %s\n", 639 + ata_port_info(ap, "%d: %-6s %-8s Volume: %1d %s %s\n", 643 640 n, mbuf, types[buf[52]], buf[53], id, cbl); 644 641 if (buf[125] < 100) 645 - printk(KERN_INFO "%d: Rebuilding: %d%%\n", n, buf[125]); 642 + ata_port_info(ap, "%d: Rebuilding: %d%%\n", n, buf[125]); 646 643 } 647 644 648 645 /** ··· 679 676 status = ioread8(ap->ioaddr.status_addr); 680 677 if (status & ATA_ERR) { 681 678 kfree(buf); 682 - printk(KERN_ERR "it821x_firmware_command: rejected\n"); 679 + ata_port_err(ap, "%s: rejected\n", __func__); 683 680 return NULL; 684 681 } 685 682 if (status & ATA_DRQ) { ··· 689 686 usleep_range(500, 1000); 690 687 } 691 688 kfree(buf); 692 - printk(KERN_ERR "it821x_firmware_command: timeout\n"); 689 + ata_port_err(ap, "%s: timeout\n", __func__); 693 690 return NULL; 694 691 } 695 692 ··· 712 709 buf = it821x_firmware_command(ap, 0xFA, 512); 713 710 714 711 if (buf != NULL) { 715 - printk(KERN_INFO "pata_it821x: Firmware %02X/%02X/%02X%02X\n", 712 + ata_port_info(ap, "pata_it821x: Firmware %02X/%02X/%02X%02X\n", 716 713 buf[505], 717 714 buf[506], 718 715 buf[507], 719 716 buf[508]); 720 717 for (i = 0; i < 4; i++) 721 - it821x_display_disk(i, buf + 128 * i); 718 + it821x_display_disk(ap, i, buf + 128 * i); 722 719 kfree(buf); 723 720 } 724 721 } ··· 774 771 itdev->timing10 = 1; 775 772 /* Need to disable ATAPI DMA for this case */ 776 773 if (!itdev->smart) 777 - printk(KERN_WARNING DRV_NAME": Revision 0x10, workarounds activated.\n"); 774 + dev_warn(&pdev->dev, 775 + 
"Revision 0x10, workarounds activated.\n"); 778 776 } 779 777 780 778 return 0; ··· 923 919 } else { 924 920 /* Force the card into bypass mode if so requested */ 925 921 if (it8212_noraid) { 926 - printk(KERN_INFO DRV_NAME ": forcing bypass mode.\n"); 922 + dev_info(&pdev->dev, "forcing bypass mode.\n"); 927 923 it821x_disable_raid(pdev); 928 924 } 929 925 pci_read_config_byte(pdev, 0x50, &conf); 930 926 conf &= 1; 931 927 932 - printk(KERN_INFO DRV_NAME": controller in %s mode.\n", 933 - mode[conf]); 928 + dev_info(&pdev->dev, "controller in %s mode.\n", mode[conf]); 929 + 934 930 if (conf == 0) 935 931 ppi[0] = &info_passthru; 936 932 else
+3 -3
drivers/ata/pata_ixp4xx_cf.c
··· 114 114 { 115 115 struct ixp4xx_pata *ixpp = ap->host->private_data; 116 116 117 - ata_dev_printk(adev, KERN_INFO, "configured for PIO%d 8bit\n", 117 + ata_dev_info(adev, "configured for PIO%d 8bit\n", 118 118 adev->pio_mode - XFER_PIO_0); 119 119 ixp4xx_set_8bit_timing(ixpp, adev->pio_mode); 120 120 } ··· 132 132 struct ixp4xx_pata *ixpp = ap->host->private_data; 133 133 unsigned long flags; 134 134 135 - ata_dev_printk(adev, KERN_DEBUG, "%s %d bytes\n", (rw == READ) ? "READ" : "WRITE", 136 - buflen); 135 + ata_dev_dbg(adev, "%s %d bytes\n", (rw == READ) ? "READ" : "WRITE", 136 + buflen); 137 137 spin_lock_irqsave(ap->lock, flags); 138 138 139 139 /* set the expansion bus in 16bit mode and restore
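The `ata_dev_printk(adev, KERN_DEBUG, ...)` to `ata_dev_dbg(adev, ...)` conversion above moves the message under dynamic debug control: it is always compiled in but only emitted when enabled at runtime. A rough user-space sketch of that idea, where `debug_enabled` stands in for the dynamic-debug control knob and `last_msg` captures output for inspection:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool debug_enabled;	/* runtime switch, like dynamic debug */
static char last_msg[128];	/* captured output, for illustration */

/* Sketch of what a dev_dbg()-style macro does conceptually: format only
 * when the runtime switch is on, otherwise cost (almost) nothing. */
#define ata_dbg_sketch(fmt, ...)					\
	do {								\
		if (debug_enabled)					\
			snprintf(last_msg, sizeof(last_msg),		\
				 fmt, ##__VA_ARGS__);			\
	} while (0)
```

Unlike the old compile-time `DPRINTK()`/`VPRINTK()` scheme, no kernel rebuild is needed to turn a message on.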
+2 -7
drivers/ata/pata_marvell.c
··· 32 32 33 33 static int marvell_pata_active(struct pci_dev *pdev) 34 34 { 35 - int i; 36 35 u32 devices; 37 36 void __iomem *barp; 38 37 ··· 42 43 barp = pci_iomap(pdev, 5, 0x10); 43 44 if (barp == NULL) 44 45 return -ENOMEM; 45 - 46 - printk("BAR5:"); 47 - for(i = 0; i <= 0x0F; i++) 48 - printk("%02X:%02X ", i, ioread8(barp + i)); 49 - printk("\n"); 50 46 51 47 devices = ioread32(barp + 0x0C); 52 48 pci_iounmap(pdev, barp); ··· 143 149 144 150 #if IS_ENABLED(CONFIG_SATA_AHCI) 145 151 if (!marvell_pata_active(pdev)) { 146 - printk(KERN_INFO DRV_NAME ": PATA port not active, deferring to AHCI driver.\n"); 152 + dev_info(&pdev->dev, 153 + "PATA port not active, deferring to AHCI driver.\n"); 147 154 return -ENODEV; 148 155 } 149 156 #endif
+3 -2
drivers/ata/pata_netcell.c
··· 21 21 /* No PIO or DMA methods needed for this device */ 22 22 23 23 static unsigned int netcell_read_id(struct ata_device *adev, 24 - struct ata_taskfile *tf, u16 *id) 24 + struct ata_taskfile *tf, __le16 *id) 25 25 { 26 26 unsigned int err_mask = ata_do_dev_read_id(adev, tf, id); 27 + 27 28 /* Firmware forgets to mark words 85-87 valid */ 28 29 if (err_mask == 0) 29 - id[ATA_ID_CSF_DEFAULT] |= 0x4000; 30 + id[ATA_ID_CSF_DEFAULT] |= cpu_to_le16(0x4000); 30 31 return err_mask; 31 32 } 32 33
+11 -43
drivers/ata/pata_octeon_cf.c
··· 19 19 #include <linux/of_platform.h> 20 20 #include <linux/platform_device.h> 21 21 #include <scsi/scsi_host.h> 22 - 22 + #include <trace/events/libata.h> 23 23 #include <asm/byteorder.h> 24 24 #include <asm/octeon/octeon.h> 25 25 ··· 73 73 */ 74 74 static unsigned int ns_to_tim_reg(unsigned int tim_mult, unsigned int nsecs) 75 75 { 76 - unsigned int val; 77 - 78 76 /* 79 77 * Compute # of eclock periods to get desired duration in 80 78 * nanoseconds. 81 79 */ 82 - val = DIV_ROUND_UP(nsecs * (octeon_get_io_clock_rate() / 1000000), 80 + return DIV_ROUND_UP(nsecs * (octeon_get_io_clock_rate() / 1000000), 83 81 1000 * tim_mult); 84 - 85 - return val; 86 82 } 87 83 88 84 static void octeon_cf_set_boot_reg_cfg(int cs, unsigned int multiplier) ··· 269 273 dma_tim.s.we_n = ns_to_tim_reg(tim_mult, oe_n); 270 274 dma_tim.s.we_a = ns_to_tim_reg(tim_mult, oe_a); 271 275 272 - pr_debug("ns to ticks (mult %d) of %d is: %d\n", tim_mult, 60, 276 + ata_dev_dbg(dev, "ns to ticks (mult %d) of %d is: %d\n", tim_mult, 60, 273 277 ns_to_tim_reg(tim_mult, 60)); 274 - pr_debug("oe_n: %d, oe_a: %d, dmack_s: %d, dmack_h: %d, dmarq: %d, pause: %d\n", 278 + ata_dev_dbg(dev, "oe_n: %d, oe_a: %d, dmack_s: %d, dmack_h: %d, dmarq: %d, pause: %d\n", 275 279 dma_tim.s.oe_n, dma_tim.s.oe_a, dma_tim.s.dmack_s, 276 280 dma_tim.s.dmack_h, dma_tim.s.dmarq, dma_tim.s.pause); 277 281 ··· 436 440 int rc; 437 441 u8 err; 438 442 439 - DPRINTK("about to softreset\n"); 440 443 __raw_writew(ap->ctl, base + 0xe); 441 444 udelay(20); 442 445 __raw_writew(ap->ctl | ATA_SRST, base + 0xe); ··· 450 455 451 456 /* determine by signature whether we have ATA or ATAPI devices */ 452 457 classes[0] = ata_sff_dev_classify(&link->device[0], 1, &err); 453 - DPRINTK("EXIT, classes[0]=%u [1]=%u\n", classes[0], classes[1]); 454 458 return 0; 455 459 } 456 460 ··· 473 479 __raw_writew(tf->hob_feature << 8, base + 0xc); 474 480 __raw_writew(tf->hob_nsect | tf->hob_lbal << 8, base + 2); 475 481 __raw_writew(tf->hob_lbam | 
tf->hob_lbah << 8, base + 4); 476 - VPRINTK("hob: feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n", 477 - tf->hob_feature, 478 - tf->hob_nsect, 479 - tf->hob_lbal, 480 - tf->hob_lbam, 481 - tf->hob_lbah); 482 482 } 483 483 if (is_addr) { 484 484 __raw_writew(tf->feature << 8, base + 0xc); 485 485 __raw_writew(tf->nsect | tf->lbal << 8, base + 2); 486 486 __raw_writew(tf->lbam | tf->lbah << 8, base + 4); 487 - VPRINTK("feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n", 488 - tf->feature, 489 - tf->nsect, 490 - tf->lbal, 491 - tf->lbam, 492 - tf->lbah); 493 487 } 494 488 ata_wait_idle(ap); 495 489 } ··· 498 516 { 499 517 /* The base of the registers is at ioaddr.data_addr. */ 500 518 void __iomem *base = ap->ioaddr.data_addr; 501 - u16 blob; 519 + u16 blob = 0; 502 520 503 - if (tf->flags & ATA_TFLAG_DEVICE) { 504 - VPRINTK("device 0x%X\n", tf->device); 521 + if (tf->flags & ATA_TFLAG_DEVICE) 505 522 blob = tf->device; 506 - } else { 507 - blob = 0; 508 - } 509 523 510 - DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command); 511 524 blob |= (tf->command << 8); 512 525 __raw_writew(blob, base + 6); 513 - 514 526 515 527 ata_wait_idle(ap); 516 528 } ··· 519 543 struct octeon_cf_port *cf_port; 520 544 521 545 cf_port = ap->private_data; 522 - DPRINTK("ENTER\n"); 523 546 /* issue r/w command */ 524 547 qc->cursg = qc->sg; 525 548 cf_port->dma_finished = 0; 526 549 ap->ops->sff_exec_command(ap, &qc->tf); 527 - DPRINTK("EXIT\n"); 528 550 } 529 551 530 552 /** ··· 536 562 union cvmx_mio_boot_dma_cfgx mio_boot_dma_cfg; 537 563 union cvmx_mio_boot_dma_intx mio_boot_dma_int; 538 564 struct scatterlist *sg; 539 - 540 - VPRINTK("%d scatterlists\n", qc->n_elem); 541 565 542 566 /* Get the scatter list entry we need to DMA into */ 543 567 sg = qc->cursg; ··· 577 605 578 606 mio_boot_dma_cfg.s.adr = sg_dma_address(sg); 579 607 580 - VPRINTK("%s %d bytes address=%p\n", 581 - (mio_boot_dma_cfg.s.rw) ? 
"write" : "read", sg->length, 582 - (void *)(unsigned long)mio_boot_dma_cfg.s.adr); 583 - 584 608 cvmx_write_csr(cf_port->dma_base + DMA_CFG, mio_boot_dma_cfg.u64); 585 609 } 586 610 ··· 595 627 union cvmx_mio_boot_dma_intx dma_int; 596 628 u8 status; 597 629 598 - VPRINTK("ata%u: protocol %d task_state %d\n", 599 - ap->print_id, qc->tf.protocol, ap->hsm_task_state); 600 - 630 + trace_ata_bmdma_stop(qc, &qc->tf, qc->tag); 601 631 602 632 if (ap->hsm_task_state != HSM_ST_LAST) 603 633 return 0; ··· 644 678 645 679 spin_lock_irqsave(&host->lock, flags); 646 680 647 - DPRINTK("ENTER\n"); 648 681 for (i = 0; i < host->n_ports; i++) { 649 682 u8 status; 650 683 struct ata_port *ap; ··· 666 701 if (!sg_is_last(qc->cursg)) { 667 702 qc->cursg = sg_next(qc->cursg); 668 703 handled = 1; 704 + trace_ata_bmdma_start(ap, &qc->tf, qc->tag); 669 705 octeon_cf_dma_start(qc); 670 706 continue; 671 707 } else { ··· 698 732 } 699 733 } 700 734 spin_unlock_irqrestore(&host->lock, flags); 701 - DPRINTK("EXIT\n"); 702 735 return IRQ_RETVAL(handled); 703 736 } 704 737 ··· 765 800 case ATA_PROT_DMA: 766 801 WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING); 767 802 803 + trace_ata_tf_load(ap, &qc->tf); 768 804 ap->ops->sff_tf_load(ap, &qc->tf); /* load tf registers */ 805 + trace_ata_bmdma_setup(ap, &qc->tf, qc->tag); 769 806 octeon_cf_dma_setup(qc); /* set up dma */ 807 + trace_ata_bmdma_start(ap, &qc->tf, qc->tag); 770 808 octeon_cf_dma_start(qc); /* initiate dma */ 771 809 ap->hsm_task_state = HSM_ST_LAST; 772 810 break;
+12 -3
drivers/ata/pata_of_platform.c
··· 25 25 struct device_node *dn = ofdev->dev.of_node; 26 26 struct resource io_res; 27 27 struct resource ctl_res; 28 - struct resource *irq_res; 28 + struct resource irq_res; 29 29 unsigned int reg_shift = 0; 30 30 int pio_mode = 0; 31 31 int pio_mask; 32 32 bool use16bit; 33 + int irq; 33 34 34 35 ret = of_address_to_resource(dn, 0, &io_res); 35 36 if (ret) { ··· 46 45 return -EINVAL; 47 46 } 48 47 49 - irq_res = platform_get_resource(ofdev, IORESOURCE_IRQ, 0); 48 + memset(&irq_res, 0, sizeof(irq_res)); 49 + 50 + irq = platform_get_irq_optional(ofdev, 0); 51 + if (irq < 0 && irq != -ENXIO) 52 + return irq; 53 + if (irq > 0) { 54 + irq_res.start = irq; 55 + irq_res.end = irq; 56 + } 50 57 51 58 of_property_read_u32(dn, "reg-shift", &reg_shift); 52 59 ··· 72 63 pio_mask = 1 << pio_mode; 73 64 pio_mask |= (1 << pio_mode) - 1; 74 65 75 - return __pata_platform_probe(&ofdev->dev, &io_res, &ctl_res, irq_res, 66 + return __pata_platform_probe(&ofdev->dev, &io_res, &ctl_res, irq > 0 ? &irq_res : NULL, 76 67 reg_shift, pio_mask, &pata_platform_sht, 77 68 use16bit); 78 69 }
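The pata_of_platform hunk replaces the deprecated `platform_get_resource(IORESOURCE_IRQ)` lookup with `platform_get_irq_optional()`, which distinguishes three cases. A sketch of that decision, with the enum names being illustrative only:

```c
#include <assert.h>
#include <errno.h>

enum irq_disposition { IRQ_HARD_ERROR, IRQ_POLLING, IRQ_WIRED };

/* Classify the return value the way the hunk above does: -ENXIO means
 * "no interrupt wired up, fall back to polling", any other negative
 * value is propagated as an error, a positive value is a usable IRQ. */
static enum irq_disposition classify_irq(int ret)
{
	if (ret < 0 && ret != -ENXIO)
		return IRQ_HARD_ERROR;
	if (ret > 0)
		return IRQ_WIRED;
	return IRQ_POLLING;
}
```

Treating `-ENXIO` specially is what keeps IRQ-less boards working instead of failing the probe.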
+28 -43
drivers/ata/pata_pdc2027x.c
··· 30 30 31 31 #define DRV_NAME "pata_pdc2027x" 32 32 #define DRV_VERSION "1.0" 33 - #undef PDC_DEBUG 34 - 35 - #ifdef PDC_DEBUG 36 - #define PDPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args) 37 - #else 38 - #define PDPRINTK(fmt, args...) 39 - #endif 40 33 41 34 enum { 42 35 PDC_MMIO_BAR = 5, ··· 207 214 if (cgcr & (1 << 26)) 208 215 goto cbl40; 209 216 210 - PDPRINTK("No cable or 80-conductor cable on port %d\n", ap->port_no); 217 + ata_port_dbg(ap, "No cable or 80-conductor cable\n"); 211 218 212 219 return ATA_CBL_PATA80; 213 220 cbl40: 214 - printk(KERN_INFO DRV_NAME ": 40-conductor cable detected on port %d\n", ap->port_no); 221 + ata_port_info(ap, DRV_NAME ":40-conductor cable detected\n"); 215 222 return ATA_CBL_PATA40; 216 223 } 217 224 ··· 285 292 unsigned int pio = adev->pio_mode - XFER_PIO_0; 286 293 u32 ctcr0, ctcr1; 287 294 288 - PDPRINTK("adev->pio_mode[%X]\n", adev->pio_mode); 295 + ata_port_dbg(ap, "adev->pio_mode[%X]\n", adev->pio_mode); 289 296 290 297 /* Sanity check */ 291 298 if (pio > 4) { 292 - printk(KERN_ERR DRV_NAME ": Unknown pio mode [%d] ignored\n", pio); 299 + ata_port_err(ap, "Unknown pio mode [%d] ignored\n", pio); 293 300 return; 294 301 295 302 } 296 303 297 304 /* Set the PIO timing registers using value table for 133MHz */ 298 - PDPRINTK("Set pio regs... \n"); 305 + ata_port_dbg(ap, "Set pio regs... \n"); 299 306 300 307 ctcr0 = ioread32(dev_mmio(ap, adev, PDC_CTCR0)); 301 308 ctcr0 &= 0xffff0000; ··· 308 315 ctcr1 |= (pdc2027x_pio_timing_tbl[pio].value2 << 24); 309 316 iowrite32(ctcr1, dev_mmio(ap, adev, PDC_CTCR1)); 310 317 311 - PDPRINTK("Set pio regs done\n"); 312 - 313 - PDPRINTK("Set to pio mode[%u] \n", pio); 318 + ata_port_dbg(ap, "Set to pio mode[%u] \n", pio); 314 319 } 315 320 316 321 /** ··· 341 350 iowrite32(ctcr1 & ~(1 << 7), dev_mmio(ap, adev, PDC_CTCR1)); 342 351 } 343 352 344 - PDPRINTK("Set udma regs... \n"); 353 + ata_port_dbg(ap, "Set udma regs... 
\n"); 345 354 346 355 ctcr1 = ioread32(dev_mmio(ap, adev, PDC_CTCR1)); 347 356 ctcr1 &= 0xff000000; ··· 350 359 (pdc2027x_udma_timing_tbl[udma_mode].value2 << 16); 351 360 iowrite32(ctcr1, dev_mmio(ap, adev, PDC_CTCR1)); 352 361 353 - PDPRINTK("Set udma regs done\n"); 354 - 355 - PDPRINTK("Set to udma mode[%u] \n", udma_mode); 362 + ata_port_dbg(ap, "Set to udma mode[%u] \n", udma_mode); 356 363 357 364 } else if ((dma_mode >= XFER_MW_DMA_0) && 358 365 (dma_mode <= XFER_MW_DMA_2)) { 359 366 /* Set the MDMA timing registers with value table for 133MHz */ 360 367 unsigned int mdma_mode = dma_mode & 0x07; 361 368 362 - PDPRINTK("Set mdma regs... \n"); 369 + ata_port_dbg(ap, "Set mdma regs... \n"); 363 370 ctcr0 = ioread32(dev_mmio(ap, adev, PDC_CTCR0)); 364 371 365 372 ctcr0 &= 0x0000ffff; ··· 365 376 (pdc2027x_mdma_timing_tbl[mdma_mode].value1 << 24); 366 377 367 378 iowrite32(ctcr0, dev_mmio(ap, adev, PDC_CTCR0)); 368 - PDPRINTK("Set mdma regs done\n"); 369 379 370 - PDPRINTK("Set to mdma mode[%u] \n", mdma_mode); 380 + ata_port_dbg(ap, "Set to mdma mode[%u] \n", mdma_mode); 371 381 } else { 372 - printk(KERN_ERR DRV_NAME ": Unknown dma mode [%u] ignored\n", dma_mode); 382 + ata_port_err(ap, "Unknown dma mode [%u] ignored\n", dma_mode); 373 383 } 374 384 } 375 385 ··· 402 414 ctcr1 |= (1 << 25); 403 415 iowrite32(ctcr1, dev_mmio(ap, dev, PDC_CTCR1)); 404 416 405 - PDPRINTK("Turn on prefetch\n"); 417 + ata_dev_dbg(dev, "Turn on prefetch\n"); 406 418 } else { 407 419 pdc2027x_set_dmamode(ap, dev); 408 420 } ··· 473 485 474 486 counter = (bccrh << 15) | bccrl; 475 487 476 - PDPRINTK("bccrh [%X] bccrl [%X]\n", bccrh, bccrl); 477 - PDPRINTK("bccrhv[%X] bccrlv[%X]\n", bccrhv, bccrlv); 488 + dev_dbg(host->dev, "bccrh [%X] bccrl [%X]\n", bccrh, bccrl); 489 + dev_dbg(host->dev, "bccrhv[%X] bccrlv[%X]\n", bccrhv, bccrlv); 478 490 479 491 /* 480 492 * The 30-bit decreasing counter are read by 2 pieces. 
··· 483 495 */ 484 496 if (retry && !(bccrh == bccrhv && bccrl >= bccrlv)) { 485 497 retry--; 486 - PDPRINTK("rereading counter\n"); 498 + dev_dbg(host->dev, "rereading counter\n"); 487 499 goto retry; 488 500 } 489 501 ··· 508 520 509 521 /* Sanity check */ 510 522 if (unlikely(pll_clock_khz < 5000L || pll_clock_khz > 70000L)) { 511 - printk(KERN_ERR DRV_NAME ": Invalid PLL input clock %ldkHz, give up!\n", pll_clock_khz); 523 + dev_err(host->dev, "Invalid PLL input clock %ldkHz, give up!\n", 524 + pll_clock_khz); 512 525 return; 513 526 } 514 527 515 - #ifdef PDC_DEBUG 516 - PDPRINTK("pout_required is %ld\n", pout_required); 528 + dev_dbg(host->dev, "pout_required is %ld\n", pout_required); 517 529 518 530 /* Show the current clock value of PLL control register 519 531 * (maybe already configured by the firmware) 520 532 */ 521 533 pll_ctl = ioread16(mmio_base + PDC_PLL_CTL); 522 534 523 - PDPRINTK("pll_ctl[%X]\n", pll_ctl); 524 - #endif 535 + dev_dbg(host->dev, "pll_ctl[%X]\n", pll_ctl); 525 536 526 537 /* 527 538 * Calculate the ratio of F, R and OD ··· 539 552 R = 0x00; 540 553 } else { 541 554 /* Invalid ratio */ 542 - printk(KERN_ERR DRV_NAME ": Invalid ratio %ld, give up!\n", ratio); 555 + dev_err(host->dev, "Invalid ratio %ld, give up!\n", ratio); 543 556 return; 544 557 } 545 558 ··· 547 560 548 561 if (unlikely(F < 0 || F > 127)) { 549 562 /* Invalid F */ 550 - printk(KERN_ERR DRV_NAME ": F[%d] invalid!\n", F); 563 + dev_err(host->dev, "F[%d] invalid!\n", F); 551 564 return; 552 565 } 553 566 554 - PDPRINTK("F[%d] R[%d] ratio*1000[%ld]\n", F, R, ratio); 567 + dev_dbg(host->dev, "F[%d] R[%d] ratio*1000[%ld]\n", F, R, ratio); 555 568 556 569 pll_ctl = (R << 8) | F; 557 570 558 - PDPRINTK("Writing pll_ctl[%X]\n", pll_ctl); 571 + dev_dbg(host->dev, "Writing pll_ctl[%X]\n", pll_ctl); 559 572 560 573 iowrite16(pll_ctl, mmio_base + PDC_PLL_CTL); 561 574 ioread16(mmio_base + PDC_PLL_CTL); /* flush */ ··· 563 576 /* Wait the PLL circuit to be stable */ 564 577 
msleep(30); 565 578 566 - #ifdef PDC_DEBUG 567 579 /* 568 580 * Show the current clock value of PLL control register 569 581 * (maybe configured by the firmware) 570 582 */ 571 583 pll_ctl = ioread16(mmio_base + PDC_PLL_CTL); 572 584 573 - PDPRINTK("pll_ctl[%X]\n", pll_ctl); 574 - #endif 585 + dev_dbg(host->dev, "pll_ctl[%X]\n", pll_ctl); 575 586 576 587 return; 577 588 } ··· 590 605 591 606 /* Start the test mode */ 592 607 scr = ioread32(mmio_base + PDC_SYS_CTL); 593 - PDPRINTK("scr[%X]\n", scr); 608 + dev_dbg(host->dev, "scr[%X]\n", scr); 594 609 iowrite32(scr | (0x01 << 14), mmio_base + PDC_SYS_CTL); 595 610 ioread32(mmio_base + PDC_SYS_CTL); /* flush */ 596 611 ··· 607 622 608 623 /* Stop the test mode */ 609 624 scr = ioread32(mmio_base + PDC_SYS_CTL); 610 - PDPRINTK("scr[%X]\n", scr); 625 + dev_dbg(host->dev, "scr[%X]\n", scr); 611 626 iowrite32(scr & ~(0x01 << 14), mmio_base + PDC_SYS_CTL); 612 627 ioread32(mmio_base + PDC_SYS_CTL); /* flush */ 613 628 ··· 617 632 pll_clock = ((start_count - end_count) & 0x3fffffff) / 100 * 618 633 (100000000 / usec_elapsed); 619 634 620 - PDPRINTK("start[%ld] end[%ld] \n", start_count, end_count); 621 - PDPRINTK("PLL input clock[%ld]Hz\n", pll_clock); 635 + dev_dbg(host->dev, "start[%ld] end[%ld] PLL input clock[%ld]HZ\n", 636 + start_count, end_count, pll_clock); 622 637 623 638 return pll_clock; 624 639 }
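Beyond the `PDPRINTK()` to `dev_dbg()` conversion, the pdc2027x hunks show an interesting detail: the 30-bit PLL counter keeps counting down while it is read as two halves, so a wrap between the two reads yields a torn value, and the driver re-reads and retries. A sketch of that retry scheme under the stated assumptions (`read16()` is a hypothetical register accessor, not a kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* Read a decreasing 30-bit counter split across two 16/15-bit registers.
 * Accept a sample only if the high half was stable across a second read
 * and the low half did not increase; otherwise retry. */
static int read_counter_consistent(uint16_t (*read16)(int reg),
				   uint32_t *out, int retries)
{
	do {
		uint16_t hi  = read16(0);	/* high half (BCCRH-like) */
		uint16_t lo  = read16(1);	/* low half (BCCRL-like) */
		uint16_t hiv = read16(0);	/* verification reads */
		uint16_t lov = read16(1);

		if (hi == hiv && lo >= lov) {
			*out = ((uint32_t)hi << 15) | lo;
			return 0;
		}
	} while (retries-- > 0);
	return -1;
}

/* Stable stub register, for demonstration only. */
static uint16_t stub_read16(int reg)
{
	return reg == 0 ? 0x12 : 0x345;
}
```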
-2
drivers/ata/pata_pdc202xx_old.c
··· 38 38 static void pdc202xx_exec_command(struct ata_port *ap, 39 39 const struct ata_taskfile *tf) 40 40 { 41 - DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command); 42 - 43 41 iowrite8(tf->command, ap->ioaddr.command_addr); 44 42 ndelay(400); 45 43 }
+2 -2
drivers/ata/pata_rz1000.c
··· 69 69 reg &= 0xDFFF; 70 70 if (pci_write_config_word(pdev, 0x40, reg) != 0) 71 71 return -1; 72 - printk(KERN_INFO DRV_NAME ": disabled chipset readahead.\n"); 72 + dev_info(&pdev->dev, "disabled chipset readahead.\n"); 73 73 return 0; 74 74 } 75 75 ··· 97 97 if (rz1000_fifo_disable(pdev) == 0) 98 98 return ata_pci_sff_init_one(pdev, ppi, &rz1000_sht, NULL, 0); 99 99 100 - printk(KERN_ERR DRV_NAME ": failed to disable read-ahead on chipset..\n"); 100 + dev_err(&pdev->dev, "failed to disable read-ahead on chipset.\n"); 101 101 /* Not safe to use so skip */ 102 102 return -ENODEV; 103 103 }
+2 -2
drivers/ata/pata_serverworks.c
··· 286 286 pci_read_config_dword(isa_dev, 0x64, &reg); 287 287 reg &= ~0x00002000; /* disable 600ns interrupt mask */ 288 288 if (!(reg & 0x00004000)) 289 - printk(KERN_DEBUG DRV_NAME ": UDMA not BIOS enabled.\n"); 289 + dev_info(&pdev->dev, "UDMA not BIOS enabled.\n"); 290 290 reg |= 0x00004000; /* enable UDMA/33 support */ 291 291 pci_write_config_dword(isa_dev, 0x64, reg); 292 292 pci_dev_put(isa_dev); 293 293 return 0; 294 294 } 295 - printk(KERN_WARNING DRV_NAME ": Unable to find bridge.\n"); 295 + dev_warn(&pdev->dev, "Unable to find bridge.\n"); 296 296 return -ENODEV; 297 297 } 298 298
+4 -5
drivers/ata/pata_sil680.c
··· 212 212 static void sil680_sff_exec_command(struct ata_port *ap, 213 213 const struct ata_taskfile *tf) 214 214 { 215 - DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command); 216 215 iowrite8(tf->command, ap->ioaddr.command_addr); 217 216 ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_CMD); 218 217 } ··· 308 309 309 310 switch (tmpbyte & 0x30) { 310 311 case 0x00: 311 - printk(KERN_INFO "sil680: 100MHz clock.\n"); 312 + dev_info(&pdev->dev, "sil680: 100MHz clock.\n"); 312 313 break; 313 314 case 0x10: 314 - printk(KERN_INFO "sil680: 133MHz clock.\n"); 315 + dev_info(&pdev->dev, "sil680: 133MHz clock.\n"); 315 316 break; 316 317 case 0x20: 317 - printk(KERN_INFO "sil680: Using PCI clock.\n"); 318 + dev_info(&pdev->dev, "sil680: Using PCI clock.\n"); 318 319 break; 319 320 /* This last case is _NOT_ ok */ 320 321 case 0x30: 321 - printk(KERN_ERR "sil680: Clock disabled ?\n"); 322 + dev_err(&pdev->dev, "sil680: Clock disabled ?\n"); 322 323 } 323 324 return tmpbyte & 0x30; 324 325 }
-12
drivers/ata/pata_via.c
··· 414 414 iowrite8(tf->hob_lbal, ioaddr->lbal_addr); 415 415 iowrite8(tf->hob_lbam, ioaddr->lbam_addr); 416 416 iowrite8(tf->hob_lbah, ioaddr->lbah_addr); 417 - VPRINTK("hob: feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n", 418 - tf->hob_feature, 419 - tf->hob_nsect, 420 - tf->hob_lbal, 421 - tf->hob_lbam, 422 - tf->hob_lbah); 423 417 } 424 418 425 419 if (is_addr) { ··· 422 428 iowrite8(tf->lbal, ioaddr->lbal_addr); 423 429 iowrite8(tf->lbam, ioaddr->lbam_addr); 424 430 iowrite8(tf->lbah, ioaddr->lbah_addr); 425 - VPRINTK("feat 0x%X nsect 0x%X lba 0x%X 0x%X 0x%X\n", 426 - tf->feature, 427 - tf->nsect, 428 - tf->lbal, 429 - tf->lbam, 430 - tf->lbah); 431 431 } 432 432 433 433 ata_wait_idle(ap);
+2 -31
drivers/ata/pdc_adma.c
··· 284 284 *(__le32 *)(buf + i) = 285 285 (pFLAGS & pEND) ? 0 : cpu_to_le32(pp->pkt_dma + i + 4); 286 286 i += 4; 287 - 288 - VPRINTK("PRD[%u] = (0x%lX, 0x%X)\n", i/4, 289 - (unsigned long)addr, len); 290 287 } 291 288 292 289 if (likely(last_buf)) ··· 298 301 u8 *buf = pp->pkt; 299 302 u32 pkt_dma = (u32)pp->pkt_dma; 300 303 int i = 0; 301 - 302 - VPRINTK("ENTER\n"); 303 304 304 305 adma_enter_reg_mode(qc->ap); 305 306 if (qc->tf.protocol != ATA_PROT_DMA) ··· 350 355 351 356 i = adma_fill_sg(qc); 352 357 wmb(); /* flush PRDs and pkt to memory */ 353 - #if 0 354 - /* dump out CPB + PRDs for debug */ 355 - { 356 - int j, len = 0; 357 - static char obuf[2048]; 358 - for (j = 0; j < i; ++j) { 359 - len += sprintf(obuf+len, "%02x ", buf[j]); 360 - if ((j & 7) == 7) { 361 - printk("%s\n", obuf); 362 - len = 0; 363 - } 364 - } 365 - if (len) 366 - printk("%s\n", obuf); 367 - } 368 - #endif 369 358 return AC_ERR_OK; 370 359 } 371 360 ··· 357 378 { 358 379 struct ata_port *ap = qc->ap; 359 380 void __iomem *chan = ADMA_PORT_REGS(ap); 360 - 361 - VPRINTK("ENTER, ap %p\n", ap); 362 381 363 382 /* fire up the ADMA engine */ 364 383 writew(aPIOMD4 | aGO, chan + ADMA_CONTROL); ··· 452 475 u8 status = ata_sff_check_status(ap); 453 476 if ((status & ATA_BUSY)) 454 477 continue; 455 - DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n", 456 - ap->print_id, qc->tf.protocol, status); 457 478 458 479 /* complete taskfile transaction */ 459 480 pp->state = adma_state_idle; ··· 479 504 struct ata_host *host = dev_instance; 480 505 unsigned int handled = 0; 481 506 482 - VPRINTK("ENTER\n"); 483 - 484 507 spin_lock(&host->lock); 485 508 handled = adma_intr_pkt(host) | adma_intr_mmio(host); 486 509 spin_unlock(&host->lock); 487 - 488 - VPRINTK("EXIT\n"); 489 510 490 511 return IRQ_RETVAL(handled); 491 512 } ··· 518 547 return -ENOMEM; 519 548 /* paranoia? 
*/ 520 549 if ((pp->pkt_dma & 7) != 0) { 521 - printk(KERN_ERR "bad alignment for pp->pkt_dma: %08x\n", 522 - (u32)pp->pkt_dma); 550 + ata_port_err(ap, "bad alignment for pp->pkt_dma: %08x\n", 551 + (u32)pp->pkt_dma); 523 552 return -ENOMEM; 524 553 } 525 554 ap->private_data = pp;
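The deleted `#if 0` block in pdc_adma hand-rolled a hex dump of the CPB and PRDs, eight bytes per line; in-tree code wanting this today would reach for `print_hex_dump()` instead. For reference, a minimal user-space formatter doing the same job as the removed block:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Format len bytes as two-digit hex, eight per line, into out.
 * Returns the number of characters written. */
static size_t hex_dump(const unsigned char *buf, size_t len,
		       char *out, size_t outsz)
{
	size_t pos = 0;

	for (size_t i = 0; i < len && pos + 4 <= outsz; i++)
		pos += (size_t)snprintf(out + pos, outsz - pos, "%02x%c",
					buf[i],
					(i % 8 == 7 || i == len - 1) ? '\n' : ' ');
	return pos;
}
```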
+42 -123
drivers/ata/sata_dwc_460ex.c
··· 14 14 * COPYRIGHT (C) 2005 SYNOPSYS, INC. ALL RIGHTS RESERVED 15 15 */ 16 16 17 - #ifdef CONFIG_SATA_DWC_DEBUG 18 - #define DEBUG 19 - #endif 20 - 21 - #ifdef CONFIG_SATA_DWC_VDEBUG 22 - #define VERBOSE_DEBUG 23 - #define DEBUG_NCQ 24 - #endif 25 - 26 17 #include <linux/kernel.h> 27 18 #include <linux/module.h> 28 19 #include <linux/device.h> ··· 25 34 #include <linux/phy/phy.h> 26 35 #include <linux/libata.h> 27 36 #include <linux/slab.h> 37 + #include <trace/events/libata.h> 28 38 29 39 #include "libata.h" 30 40 ··· 174 182 * Prototypes 175 183 */ 176 184 static void sata_dwc_bmdma_start_by_tag(struct ata_queued_cmd *qc, u8 tag); 177 - static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc, 178 - u32 check_status); 179 - static void sata_dwc_dma_xfer_complete(struct ata_port *ap, u32 check_status); 180 - static void sata_dwc_port_stop(struct ata_port *ap); 185 + static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc); 186 + static void sata_dwc_dma_xfer_complete(struct ata_port *ap); 181 187 static void sata_dwc_clear_dmacr(struct sata_dwc_device_port *hsdevp, u8 tag); 182 188 183 189 #ifdef CONFIG_SATA_DWC_OLD_DMA ··· 205 215 { 206 216 struct sata_dwc_device *hsdev = hsdevp->hsdev; 207 217 struct dw_dma_slave *dws = &sata_dwc_dma_dws; 218 + struct device *dev = hsdev->dev; 208 219 dma_cap_mask_t mask; 209 220 210 - dws->dma_dev = hsdev->dev; 221 + dws->dma_dev = dev; 211 222 212 223 dma_cap_zero(mask); 213 224 dma_cap_set(DMA_SLAVE, mask); ··· 216 225 /* Acquire DMA channel */ 217 226 hsdevp->chan = dma_request_channel(mask, sata_dwc_dma_filter, hsdevp); 218 227 if (!hsdevp->chan) { 219 - dev_err(hsdev->dev, "%s: dma channel unavailable\n", 220 - __func__); 228 + dev_err(dev, "%s: dma channel unavailable\n", __func__); 221 229 return -EAGAIN; 222 230 } 223 231 ··· 226 236 static int sata_dwc_dma_init_old(struct platform_device *pdev, 227 237 struct sata_dwc_device *hsdev) 228 238 { 229 - struct device_node *np 
= pdev->dev.of_node; 230 - struct resource *res; 239 + struct device *dev = &pdev->dev; 240 + struct device_node *np = dev->of_node; 231 241 232 - hsdev->dma = devm_kzalloc(&pdev->dev, sizeof(*hsdev->dma), GFP_KERNEL); 242 + hsdev->dma = devm_kzalloc(dev, sizeof(*hsdev->dma), GFP_KERNEL); 233 243 if (!hsdev->dma) 234 244 return -ENOMEM; 235 245 236 - hsdev->dma->dev = &pdev->dev; 246 + hsdev->dma->dev = dev; 237 247 hsdev->dma->id = pdev->id; 238 248 239 249 /* Get SATA DMA interrupt number */ 240 250 hsdev->dma->irq = irq_of_parse_and_map(np, 1); 241 251 if (hsdev->dma->irq == NO_IRQ) { 242 - dev_err(&pdev->dev, "no SATA DMA irq\n"); 252 + dev_err(dev, "no SATA DMA irq\n"); 243 253 return -ENODEV; 244 254 } 245 255 246 256 /* Get physical SATA DMA register base address */ 247 - res = platform_get_resource(pdev, IORESOURCE_MEM, 1); 248 - hsdev->dma->regs = devm_ioremap_resource(&pdev->dev, res); 257 + hsdev->dma->regs = devm_platform_ioremap_resource(pdev, 1); 249 258 if (IS_ERR(hsdev->dma->regs)) 250 259 return PTR_ERR(hsdev->dma->regs); 251 260 ··· 286 297 } 287 298 } 288 299 289 - static const char *get_dma_dir_descript(int dma_dir) 290 - { 291 - switch ((enum dma_data_direction)dma_dir) { 292 - case DMA_BIDIRECTIONAL: 293 - return "bidirectional"; 294 - case DMA_TO_DEVICE: 295 - return "to device"; 296 - case DMA_FROM_DEVICE: 297 - return "from device"; 298 - default: 299 - return "none"; 300 - } 301 - } 302 - 303 - static void sata_dwc_tf_dump(struct ata_port *ap, struct ata_taskfile *tf) 304 - { 305 - dev_vdbg(ap->dev, 306 - "taskfile cmd: 0x%02x protocol: %s flags: 0x%lx device: %x\n", 307 - tf->command, get_prot_descript(tf->protocol), tf->flags, 308 - tf->device); 309 - dev_vdbg(ap->dev, 310 - "feature: 0x%02x nsect: 0x%x lbal: 0x%x lbam: 0x%x lbah: 0x%x\n", 311 - tf->feature, tf->nsect, tf->lbal, tf->lbam, tf->lbah); 312 - dev_vdbg(ap->dev, 313 - "hob_feature: 0x%02x hob_nsect: 0x%x hob_lbal: 0x%x hob_lbam: 0x%x hob_lbah: 0x%x\n", 314 - tf->hob_feature, 
tf->hob_nsect, tf->hob_lbal, tf->hob_lbam, 315 - tf->hob_lbah); 316 - } 317 - 318 300 static void dma_dwc_xfer_done(void *hsdev_instance) 319 301 { 320 302 unsigned long flags; ··· 315 355 } 316 356 317 357 if ((hsdevp->dma_interrupt_count % 2) == 0) 318 - sata_dwc_dma_xfer_complete(ap, 1); 358 + sata_dwc_dma_xfer_complete(ap); 319 359 320 360 spin_unlock_irqrestore(&host->lock, flags); 321 361 } ··· 513 553 * active tag. It is the tag that matches the command about to 514 554 * be completed. 515 555 */ 556 + trace_ata_bmdma_start(ap, &qc->tf, tag); 516 557 qc->ap->link.active_tag = tag; 517 558 sata_dwc_bmdma_start_by_tag(qc, tag); 518 559 ··· 547 586 548 587 if (status & ATA_ERR) { 549 588 dev_dbg(ap->dev, "interrupt ATA_ERR (0x%x)\n", status); 550 - sata_dwc_qc_complete(ap, qc, 1); 589 + sata_dwc_qc_complete(ap, qc); 551 590 handled = 1; 552 591 goto DONE; 553 592 } ··· 572 611 } 573 612 574 613 if ((hsdevp->dma_interrupt_count % 2) == 0) 575 - sata_dwc_dma_xfer_complete(ap, 1); 614 + sata_dwc_dma_xfer_complete(ap); 576 615 } else if (ata_is_pio(qc->tf.protocol)) { 577 616 ata_sff_hsm_move(ap, qc, status, 0); 578 617 handled = 1; 579 618 goto DONE; 580 619 } else { 581 - if (unlikely(sata_dwc_qc_complete(ap, qc, 1))) 620 + if (unlikely(sata_dwc_qc_complete(ap, qc))) 582 621 goto DRVSTILLBUSY; 583 622 } 584 623 ··· 638 677 if (status & ATA_ERR) { 639 678 dev_dbg(ap->dev, "%s ATA_ERR (0x%x)\n", __func__, 640 679 status); 641 - sata_dwc_qc_complete(ap, qc, 1); 680 + sata_dwc_qc_complete(ap, qc); 642 681 handled = 1; 643 682 goto DONE; 644 683 } ··· 653 692 dev_warn(ap->dev, "%s: DMA not pending?\n", 654 693 __func__); 655 694 if ((hsdevp->dma_interrupt_count % 2) == 0) 656 - sata_dwc_dma_xfer_complete(ap, 1); 695 + sata_dwc_dma_xfer_complete(ap); 657 696 } else { 658 - if (unlikely(sata_dwc_qc_complete(ap, qc, 1))) 697 + if (unlikely(sata_dwc_qc_complete(ap, qc))) 659 698 goto STILLBUSY; 660 699 } 661 700 continue; ··· 710 749 } 711 750 } 712 751 713 - static void 
sata_dwc_dma_xfer_complete(struct ata_port *ap, u32 check_status) 752 + static void sata_dwc_dma_xfer_complete(struct ata_port *ap) 714 753 { 715 754 struct ata_queued_cmd *qc; 716 755 struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap); ··· 724 763 return; 725 764 } 726 765 727 - #ifdef DEBUG_NCQ 728 - if (tag > 0) { 729 - dev_info(ap->dev, 730 - "%s tag=%u cmd=0x%02x dma dir=%s proto=%s dmacr=0x%08x\n", 731 - __func__, qc->hw_tag, qc->tf.command, 732 - get_dma_dir_descript(qc->dma_dir), 733 - get_prot_descript(qc->tf.protocol), 734 - sata_dwc_readl(&hsdev->sata_dwc_regs->dmacr)); 735 - } 736 - #endif 737 - 738 766 if (ata_is_dma(qc->tf.protocol)) { 739 767 if (hsdevp->dma_pending[tag] == SATA_DWC_DMA_PENDING_NONE) { 740 768 dev_err(ap->dev, ··· 733 783 } 734 784 735 785 hsdevp->dma_pending[tag] = SATA_DWC_DMA_PENDING_NONE; 736 - sata_dwc_qc_complete(ap, qc, check_status); 786 + sata_dwc_qc_complete(ap, qc); 737 787 ap->link.active_tag = ATA_TAG_POISON; 738 788 } else { 739 - sata_dwc_qc_complete(ap, qc, check_status); 789 + sata_dwc_qc_complete(ap, qc); 740 790 } 741 791 } 742 792 743 - static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc, 744 - u32 check_status) 793 + static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc) 745 794 { 746 795 u8 status = 0; 747 796 u32 mask = 0x0; ··· 748 799 struct sata_dwc_device *hsdev = HSDEV_FROM_AP(ap); 749 800 struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap); 750 801 hsdev->sactive_queued = 0; 751 - dev_dbg(ap->dev, "%s checkstatus? 
%x\n", __func__, check_status); 752 802 753 803 if (hsdevp->dma_pending[tag] == SATA_DWC_DMA_PENDING_TX) 754 804 dev_err(ap->dev, "TX DMA PENDING\n"); ··· 928 980 { 929 981 struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap); 930 982 931 - dev_dbg(ap->dev, "%s cmd(0x%02x): %s tag=%d\n", __func__, tf->command, 932 - ata_get_cmd_descript(tf->command), tag); 933 - 934 983 hsdevp->cmd_issued[tag] = cmd_issued; 935 984 936 985 /* ··· 950 1005 { 951 1006 u8 tag = qc->hw_tag; 952 1007 953 - if (ata_is_ncq(qc->tf.protocol)) { 954 - dev_dbg(qc->ap->dev, "%s: ap->link.sactive=0x%08x tag=%d\n", 955 - __func__, qc->ap->link.sactive, tag); 956 - } else { 1008 + if (!ata_is_ncq(qc->tf.protocol)) 957 1009 tag = 0; 958 - } 1010 + 959 1011 sata_dwc_bmdma_setup_by_tag(qc, tag); 960 1012 } 961 1013 ··· 979 1037 start_dma = 0; 980 1038 } 981 1039 982 - dev_dbg(ap->dev, 983 - "%s qc=%p tag: %x cmd: 0x%02x dma_dir: %s start_dma? %x\n", 984 - __func__, qc, tag, qc->tf.command, 985 - get_dma_dir_descript(qc->dma_dir), start_dma); 986 - sata_dwc_tf_dump(ap, &qc->tf); 987 - 988 1040 if (start_dma) { 989 1041 sata_dwc_scr_read(&ap->link, SCR_ERROR, &reg); 990 1042 if (reg & SATA_DWC_SERROR_ERR_BITS) { ··· 1003 1067 { 1004 1068 u8 tag = qc->hw_tag; 1005 1069 1006 - if (ata_is_ncq(qc->tf.protocol)) { 1007 - dev_dbg(qc->ap->dev, "%s: ap->link.sactive=0x%08x tag=%d\n", 1008 - __func__, qc->ap->link.sactive, tag); 1009 - } else { 1070 + if (!ata_is_ncq(qc->tf.protocol)) 1010 1071 tag = 0; 1011 - } 1012 - dev_dbg(qc->ap->dev, "%s\n", __func__); 1072 + 1013 1073 sata_dwc_bmdma_start_by_tag(qc, tag); 1014 1074 } 1015 1075 ··· 1015 1083 u8 tag = qc->hw_tag; 1016 1084 struct ata_port *ap = qc->ap; 1017 1085 struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap); 1018 - 1019 - #ifdef DEBUG_NCQ 1020 - if (qc->hw_tag > 0 || ap->link.sactive > 1) 1021 - dev_info(ap->dev, 1022 - "%s ap id=%d cmd(0x%02x)=%s qc tag=%d prot=%s ap active_tag=0x%08x ap sactive=0x%08x\n", 1023 - __func__, ap->print_id, 
qc->tf.command, 1024 - ata_get_cmd_descript(qc->tf.command), 1025 - qc->hw_tag, get_prot_descript(qc->tf.protocol), 1026 - ap->link.active_tag, ap->link.sactive); 1027 - #endif 1028 1086 1029 1087 if (!ata_is_ncq(qc->tf.protocol)) 1030 1088 tag = 0; ··· 1032 1110 sactive |= (0x00000001 << tag); 1033 1111 sata_dwc_scr_write(&ap->link, SCR_ACTIVE, sactive); 1034 1112 1035 - dev_dbg(qc->ap->dev, 1036 - "%s: tag=%d ap->link.sactive = 0x%08x sactive=0x%08x\n", 1037 - __func__, tag, qc->ap->link.sactive, sactive); 1038 - 1113 + trace_ata_tf_load(ap, &qc->tf); 1039 1114 ap->ops->sff_tf_load(ap, &qc->tf); 1115 + trace_ata_exec_command(ap, &qc->tf, tag); 1040 1116 sata_dwc_exec_command_by_tag(ap, &qc->tf, tag, 1041 1117 SATA_DWC_CMD_ISSUED_PEND); 1042 1118 } else { ··· 1127 1207 1128 1208 static int sata_dwc_probe(struct platform_device *ofdev) 1129 1209 { 1210 + struct device *dev = &ofdev->dev; 1211 + struct device_node *np = dev->of_node; 1130 1212 struct sata_dwc_device *hsdev; 1131 1213 u32 idr, versionr; 1132 1214 char *ver = (char *)&versionr; ··· 1138 1216 struct ata_host *host; 1139 1217 struct ata_port_info pi = sata_dwc_port_info[0]; 1140 1218 const struct ata_port_info *ppi[] = { &pi, NULL }; 1141 - struct device_node *np = ofdev->dev.of_node; 1142 1219 struct resource *res; 1143 1220 1144 1221 /* Allocate DWC SATA device */ 1145 - host = ata_host_alloc_pinfo(&ofdev->dev, ppi, SATA_DWC_MAX_PORTS); 1146 - hsdev = devm_kzalloc(&ofdev->dev, sizeof(*hsdev), GFP_KERNEL); 1222 + host = ata_host_alloc_pinfo(dev, ppi, SATA_DWC_MAX_PORTS); 1223 + hsdev = devm_kzalloc(dev, sizeof(*hsdev), GFP_KERNEL); 1147 1224 if (!host || !hsdev) 1148 1225 return -ENOMEM; 1149 1226 1150 1227 host->private_data = hsdev; 1151 1228 1152 1229 /* Ioremap SATA registers */ 1153 - res = platform_get_resource(ofdev, IORESOURCE_MEM, 0); 1154 - base = devm_ioremap_resource(&ofdev->dev, res); 1230 + base = devm_platform_get_and_ioremap_resource(ofdev, 0, &res); 1155 1231 if (IS_ERR(base)) 1156 
1232 return PTR_ERR(base); 1157 - dev_dbg(&ofdev->dev, "ioremap done for SATA register address\n"); 1233 + dev_dbg(dev, "ioremap done for SATA register address\n"); 1158 1234 1159 1235 /* Synopsys DWC SATA specific Registers */ 1160 1236 hsdev->sata_dwc_regs = base + SATA_DWC_REG_OFFSET; ··· 1166 1246 /* Read the ID and Version Registers */ 1167 1247 idr = sata_dwc_readl(&hsdev->sata_dwc_regs->idr); 1168 1248 versionr = sata_dwc_readl(&hsdev->sata_dwc_regs->versionr); 1169 - dev_notice(&ofdev->dev, "id %d, controller version %c.%c%c\n", 1170 - idr, ver[0], ver[1], ver[2]); 1249 + dev_notice(dev, "id %d, controller version %c.%c%c\n", idr, ver[0], ver[1], ver[2]); 1171 1250 1172 1251 /* Save dev for later use in dev_xxx() routines */ 1173 - hsdev->dev = &ofdev->dev; 1252 + hsdev->dev = dev; 1174 1253 1175 1254 /* Enable SATA Interrupts */ 1176 1255 sata_dwc_enable_interrupts(hsdev); ··· 1177 1258 /* Get SATA interrupt number */ 1178 1259 irq = irq_of_parse_and_map(np, 0); 1179 1260 if (irq == NO_IRQ) { 1180 - dev_err(&ofdev->dev, "no SATA DMA irq\n"); 1261 + dev_err(dev, "no SATA DMA irq\n"); 1181 1262 return -ENODEV; 1182 1263 } 1183 1264 ··· 1189 1270 } 1190 1271 #endif 1191 1272 1192 - hsdev->phy = devm_phy_optional_get(hsdev->dev, "sata-phy"); 1273 + hsdev->phy = devm_phy_optional_get(dev, "sata-phy"); 1193 1274 if (IS_ERR(hsdev->phy)) 1194 1275 return PTR_ERR(hsdev->phy); 1195 1276 ··· 1204 1285 */ 1205 1286 err = ata_host_activate(host, irq, sata_dwc_isr, 0, &sata_dwc_sht); 1206 1287 if (err) 1207 - dev_err(&ofdev->dev, "failed to activate host"); 1288 + dev_err(dev, "failed to activate host"); 1208 1289 1209 1290 return 0; 1210 1291 ··· 1228 1309 sata_dwc_dma_exit_old(hsdev); 1229 1310 #endif 1230 1311 1231 - dev_dbg(&ofdev->dev, "done\n"); 1312 + dev_dbg(dev, "done\n"); 1232 1313 return 0; 1233 1314 } 1234 1315
+92 -120
drivers/ata/sata_fsl.c
··· 221 221 * 4 Dwords per command slot, command header size == 64 Dwords. 222 222 */ 223 223 struct cmdhdr_tbl_entry { 224 - u32 cda; 225 - u32 prde_fis_len; 226 - u32 ttl; 227 - u32 desc_info; 224 + __le32 cda; 225 + __le32 prde_fis_len; 226 + __le32 ttl; 227 + __le32 desc_info; 228 228 }; 229 229 230 230 /* ··· 246 246 struct command_desc { 247 247 u8 cfis[8 * 4]; 248 248 u8 sfis[8 * 4]; 249 - u8 acmd[4 * 4]; 250 - u8 fill[4 * 4]; 249 + struct_group(cdb, 250 + u8 acmd[4 * 4]; 251 + u8 fill[4 * 4]; 252 + ); 251 253 u32 prdt[SATA_FSL_MAX_PRD_DIRECT * 4]; 252 254 u32 prdt_indirect[(SATA_FSL_MAX_PRD - SATA_FSL_MAX_PRD_DIRECT) * 4]; 253 255 }; ··· 259 257 */ 260 258 261 259 struct prde { 262 - u32 dba; 260 + __le32 dba; 263 261 u8 fill[2 * 4]; 264 - u32 ddc_and_ext; 262 + __le32 ddc_and_ext; 265 263 }; 266 264 267 265 /* ··· 313 311 intr_coalescing_ticks = ticks; 314 312 spin_unlock_irqrestore(&host->lock, flags); 315 313 316 - DPRINTK("interrupt coalescing, count = 0x%x, ticks = %x\n", 317 - intr_coalescing_count, intr_coalescing_ticks); 318 - DPRINTK("ICC register status: (hcr base: %p) = 0x%x\n", 319 - hcr_base, ioread32(hcr_base + ICC)); 314 + dev_dbg(host->dev, "interrupt coalescing, count = 0x%x, ticks = %x\n", 315 + intr_coalescing_count, intr_coalescing_ticks); 316 + dev_dbg(host->dev, "ICC register status: (hcr base: 0x%p) = 0x%x\n", 317 + hcr_base, ioread32(hcr_base + ICC)); 320 318 } 321 319 322 320 static ssize_t fsl_sata_intr_coalescing_show(struct device *dev, 323 321 struct device_attribute *attr, char *buf) 324 322 { 325 - return sprintf(buf, "%d %d\n", 323 + return sysfs_emit(buf, "%d %d\n", 326 324 intr_coalescing_count, intr_coalescing_ticks); 327 325 } 328 326 ··· 357 355 spin_lock_irqsave(&host->lock, flags); 358 356 rx_watermark = ioread32(csr_base + TRANSCFG); 359 357 rx_watermark &= 0x1f; 360 - 361 358 spin_unlock_irqrestore(&host->lock, flags); 362 - return sprintf(buf, "%d\n", rx_watermark); 359 + 360 + return sysfs_emit(buf, "%d\n", 
rx_watermark); 363 361 } 364 362 365 363 static ssize_t fsl_sata_rx_watermark_store(struct device *dev, ··· 387 385 return strlen(buf); 388 386 } 389 387 390 - static inline unsigned int sata_fsl_tag(unsigned int tag, 388 + static inline unsigned int sata_fsl_tag(struct ata_port *ap, 389 + unsigned int tag, 391 390 void __iomem *hcr_base) 392 391 { 393 392 /* We let libATA core do actual (queue) tag allocation */ 394 393 395 394 if (unlikely(tag >= SATA_FSL_QUEUE_DEPTH)) { 396 - DPRINTK("tag %d invalid : out of range\n", tag); 395 + ata_port_dbg(ap, "tag %d invalid : out of range\n", tag); 397 396 return 0; 398 397 } 399 398 400 399 if (unlikely((ioread32(hcr_base + CQ)) & (1 << tag))) { 401 - DPRINTK("tag %d invalid : in use!!\n", tag); 400 + ata_port_dbg(ap, "tag %d invalid : in use!!\n", tag); 402 401 return 0; 403 402 } 404 403 405 404 return tag; 406 405 } 407 406 408 - static void sata_fsl_setup_cmd_hdr_entry(struct sata_fsl_port_priv *pp, 407 + static void sata_fsl_setup_cmd_hdr_entry(struct ata_port *ap, 408 + struct sata_fsl_port_priv *pp, 409 409 unsigned int tag, u32 desc_info, 410 410 u32 data_xfer_len, u8 num_prde, 411 411 u8 fis_len) ··· 425 421 pp->cmdslot[tag].ttl = cpu_to_le32(data_xfer_len & ~0x03); 426 422 pp->cmdslot[tag].desc_info = cpu_to_le32(desc_info | (tag & 0x1F)); 427 423 428 - VPRINTK("cda=0x%x, prde_fis_len=0x%x, ttl=0x%x, di=0x%x\n", 429 - pp->cmdslot[tag].cda, 430 - pp->cmdslot[tag].prde_fis_len, 431 - pp->cmdslot[tag].ttl, pp->cmdslot[tag].desc_info); 432 - 424 + ata_port_dbg(ap, "cda=0x%x, prde_fis_len=0x%x, ttl=0x%x, di=0x%x\n", 425 + le32_to_cpu(pp->cmdslot[tag].cda), 426 + le32_to_cpu(pp->cmdslot[tag].prde_fis_len), 427 + le32_to_cpu(pp->cmdslot[tag].ttl), 428 + le32_to_cpu(pp->cmdslot[tag].desc_info)); 433 429 } 434 430 435 431 static unsigned int sata_fsl_fill_sg(struct ata_queued_cmd *qc, void *cmd_desc, ··· 451 447 dma_addr_t indirect_ext_segment_paddr; 452 448 unsigned int si; 453 449 454 - VPRINTK("SATA FSL : cd = 0x%p, 
prd = 0x%p\n", cmd_desc, prd); 455 - 456 450 indirect_ext_segment_paddr = cmd_desc_paddr + 457 451 SATA_FSL_CMD_DESC_OFFSET_TO_PRDT + SATA_FSL_MAX_PRD_DIRECT * 16; 458 452 459 453 for_each_sg(qc->sg, sg, qc->n_elem, si) { 460 454 dma_addr_t sg_addr = sg_dma_address(sg); 461 455 u32 sg_len = sg_dma_len(sg); 462 - 463 - VPRINTK("SATA FSL : fill_sg, sg_addr = 0x%llx, sg_len = %d\n", 464 - (unsigned long long)sg_addr, sg_len); 465 456 466 457 /* warn if each s/g element is not dword aligned */ 467 458 if (unlikely(sg_addr & 0x03)) ··· 468 469 469 470 if (num_prde == (SATA_FSL_MAX_PRD_DIRECT - 1) && 470 471 sg_next(sg) != NULL) { 471 - VPRINTK("setting indirect prde\n"); 472 472 prd_ptr_to_indirect_ext = prd; 473 473 prd->dba = cpu_to_le32(indirect_ext_segment_paddr); 474 474 indirect_ext_segment_sz = 0; ··· 478 480 ttl_dwords += sg_len; 479 481 prd->dba = cpu_to_le32(sg_addr); 480 482 prd->ddc_and_ext = cpu_to_le32(data_snoop | (sg_len & ~0x03)); 481 - 482 - VPRINTK("sg_fill, ttl=%d, dba=0x%x, ddc=0x%x\n", 483 - ttl_dwords, prd->dba, prd->ddc_and_ext); 484 483 485 484 ++num_prde; 486 485 ++prd; ··· 503 508 struct sata_fsl_port_priv *pp = ap->private_data; 504 509 struct sata_fsl_host_priv *host_priv = ap->host->private_data; 505 510 void __iomem *hcr_base = host_priv->hcr_base; 506 - unsigned int tag = sata_fsl_tag(qc->hw_tag, hcr_base); 511 + unsigned int tag = sata_fsl_tag(ap, qc->hw_tag, hcr_base); 507 512 struct command_desc *cd; 508 513 u32 desc_info = CMD_DESC_RES | CMD_DESC_SNOOP_ENABLE; 509 514 u32 num_prde = 0; ··· 515 520 516 521 ata_tf_to_fis(&qc->tf, qc->dev->link->pmp, 1, (u8 *) &cd->cfis); 517 522 518 - VPRINTK("Dumping cfis : 0x%x, 0x%x, 0x%x\n", 519 - cd->cfis[0], cd->cfis[1], cd->cfis[2]); 520 - 521 - if (qc->tf.protocol == ATA_PROT_NCQ) { 522 - VPRINTK("FPDMA xfer,Sctor cnt[0:7],[8:15] = %d,%d\n", 523 - cd->cfis[3], cd->cfis[11]); 524 - } 525 - 526 523 /* setup "ACMD - atapi command" in cmd. desc. 
if this is ATAPI cmd */ 527 524 if (ata_is_atapi(qc->tf.protocol)) { 528 525 desc_info |= ATAPI_CMD; 529 - memset((void *)&cd->acmd, 0, 32); 530 - memcpy((void *)&cd->acmd, qc->cdb, qc->dev->cdb_len); 526 + memset(&cd->cdb, 0, sizeof(cd->cdb)); 527 + memcpy(&cd->cdb, qc->cdb, qc->dev->cdb_len); 531 528 } 532 529 533 530 if (qc->flags & ATA_QCFLAG_DMAMAP) ··· 530 543 if (qc->tf.protocol == ATA_PROT_NCQ) 531 544 desc_info |= FPDMA_QUEUED_CMD; 532 545 533 - sata_fsl_setup_cmd_hdr_entry(pp, tag, desc_info, ttl_dwords, 546 + sata_fsl_setup_cmd_hdr_entry(ap, pp, tag, desc_info, ttl_dwords, 534 547 num_prde, 5); 535 548 536 - VPRINTK("SATA FSL : xx_qc_prep, di = 0x%x, ttl = %d, num_prde = %d\n", 549 + ata_port_dbg(ap, "SATA FSL : di = 0x%x, ttl = %d, num_prde = %d\n", 537 550 desc_info, ttl_dwords, num_prde); 538 551 539 552 return AC_ERR_OK; ··· 544 557 struct ata_port *ap = qc->ap; 545 558 struct sata_fsl_host_priv *host_priv = ap->host->private_data; 546 559 void __iomem *hcr_base = host_priv->hcr_base; 547 - unsigned int tag = sata_fsl_tag(qc->hw_tag, hcr_base); 560 + unsigned int tag = sata_fsl_tag(ap, qc->hw_tag, hcr_base); 548 561 549 - VPRINTK("xx_qc_issue called,CQ=0x%x,CA=0x%x,CE=0x%x,CC=0x%x\n", 562 + ata_port_dbg(ap, "CQ=0x%x,CA=0x%x,CE=0x%x,CC=0x%x\n", 550 563 ioread32(CQ + hcr_base), 551 564 ioread32(CA + hcr_base), 552 565 ioread32(CE + hcr_base), ioread32(CC + hcr_base)); ··· 556 569 /* Simply queue command to the controller/device */ 557 570 iowrite32(1 << tag, CQ + hcr_base); 558 571 559 - VPRINTK("xx_qc_issue called, tag=%d, CQ=0x%x, CA=0x%x\n", 572 + ata_port_dbg(ap, "tag=%d, CQ=0x%x, CA=0x%x\n", 560 573 tag, ioread32(CQ + hcr_base), ioread32(CA + hcr_base)); 561 574 562 - VPRINTK("CE=0x%x, DE=0x%x, CC=0x%x, CmdStat = 0x%x\n", 575 + ata_port_dbg(ap, "CE=0x%x, DE=0x%x, CC=0x%x, CmdStat = 0x%x\n", 563 576 ioread32(CE + hcr_base), 564 577 ioread32(DE + hcr_base), 565 578 ioread32(CC + hcr_base), ··· 573 586 struct sata_fsl_port_priv *pp = 
qc->ap->private_data; 574 587 struct sata_fsl_host_priv *host_priv = qc->ap->host->private_data; 575 588 void __iomem *hcr_base = host_priv->hcr_base; 576 - unsigned int tag = sata_fsl_tag(qc->hw_tag, hcr_base); 589 + unsigned int tag = sata_fsl_tag(qc->ap, qc->hw_tag, hcr_base); 577 590 struct command_desc *cd; 578 591 579 592 cd = pp->cmdentry + tag; ··· 600 613 return -EINVAL; 601 614 } 602 615 603 - VPRINTK("xx_scr_write, reg_in = %d\n", sc_reg); 616 + ata_link_dbg(link, "reg_in = %d\n", sc_reg); 604 617 605 618 iowrite32(val, ssr_base + (sc_reg * 4)); 606 619 return 0; ··· 624 637 return -EINVAL; 625 638 } 626 639 627 - VPRINTK("xx_scr_read, reg_in = %d\n", sc_reg); 640 + ata_link_dbg(link, "reg_in = %d\n", sc_reg); 628 641 629 642 *val = ioread32(ssr_base + (sc_reg * 4)); 630 643 return 0; ··· 636 649 void __iomem *hcr_base = host_priv->hcr_base; 637 650 u32 temp; 638 651 639 - VPRINTK("xx_freeze, CQ=0x%x, CA=0x%x, CE=0x%x, DE=0x%x\n", 652 + ata_port_dbg(ap, "CQ=0x%x, CA=0x%x, CE=0x%x, DE=0x%x\n", 640 653 ioread32(CQ + hcr_base), 641 654 ioread32(CA + hcr_base), 642 655 ioread32(CE + hcr_base), ioread32(DE + hcr_base)); 643 - VPRINTK("CmdStat = 0x%x\n", 656 + ata_port_dbg(ap, "CmdStat = 0x%x\n", 644 657 ioread32(host_priv->csr_base + COMMANDSTAT)); 645 658 646 659 /* disable interrupts on the controller/port */ 647 660 temp = ioread32(hcr_base + HCONTROL); 648 661 iowrite32((temp & ~0x3F), hcr_base + HCONTROL); 649 662 650 - VPRINTK("in xx_freeze : HControl = 0x%x, HStatus = 0x%x\n", 663 + ata_port_dbg(ap, "HControl = 0x%x, HStatus = 0x%x\n", 651 664 ioread32(hcr_base + HCONTROL), ioread32(hcr_base + HSTATUS)); 652 665 } 653 666 ··· 660 673 /* ack. 
any pending IRQs for this controller/port */ 661 674 temp = ioread32(hcr_base + HSTATUS); 662 675 663 - VPRINTK("xx_thaw, pending IRQs = 0x%x\n", (temp & 0x3F)); 676 + ata_port_dbg(ap, "pending IRQs = 0x%x\n", (temp & 0x3F)); 664 677 665 678 if (temp & 0x3F) 666 679 iowrite32((temp & 0x3F), hcr_base + HSTATUS); ··· 669 682 temp = ioread32(hcr_base + HCONTROL); 670 683 iowrite32((temp | DEFAULT_PORT_IRQ_ENABLE_MASK), hcr_base + HCONTROL); 671 684 672 - VPRINTK("xx_thaw : HControl = 0x%x, HStatus = 0x%x\n", 685 + ata_port_dbg(ap, "HControl = 0x%x, HStatus = 0x%x\n", 673 686 ioread32(hcr_base + HCONTROL), ioread32(hcr_base + HSTATUS)); 674 687 } 675 688 ··· 731 744 732 745 ap->private_data = pp; 733 746 734 - VPRINTK("CHBA = 0x%x, cmdentry_phys = 0x%x\n", 735 - pp->cmdslot_paddr, pp->cmdentry_paddr); 747 + ata_port_dbg(ap, "CHBA = 0x%lx, cmdentry_phys = 0x%lx\n", 748 + (unsigned long)pp->cmdslot_paddr, 749 + (unsigned long)pp->cmdentry_paddr); 736 750 737 751 /* Now, update the CHBA register in host controller cmd register set */ 738 752 iowrite32(pp->cmdslot_paddr & 0xffffffff, hcr_base + CHBA); ··· 749 761 temp = ioread32(hcr_base + HCONTROL); 750 762 iowrite32((temp | HCONTROL_ONLINE_PHY_RST), hcr_base + HCONTROL); 751 763 752 - VPRINTK("HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 753 - VPRINTK("HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 754 - VPRINTK("CHBA = 0x%x\n", ioread32(hcr_base + CHBA)); 764 + ata_port_dbg(ap, "HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 765 + ata_port_dbg(ap, "HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 766 + ata_port_dbg(ap, "CHBA = 0x%x\n", ioread32(hcr_base + CHBA)); 755 767 756 768 return 0; 757 769 } ··· 791 803 792 804 temp = ioread32(hcr_base + SIGNATURE); 793 805 794 - VPRINTK("raw sig = 0x%x\n", temp); 795 - VPRINTK("HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 796 - VPRINTK("HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 806 + ata_port_dbg(ap, "HStatus = 0x%x\n", ioread32(hcr_base + 
HSTATUS)); 807 + ata_port_dbg(ap, "HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 797 808 798 809 tf.lbah = (temp >> 24) & 0xff; 799 810 tf.lbam = (temp >> 16) & 0xff; 800 811 tf.lbal = (temp >> 8) & 0xff; 801 812 tf.nsect = temp & 0xff; 802 813 803 - return ata_dev_classify(&tf); 814 + return ata_port_classify(ap, &tf); 804 815 } 805 816 806 817 static int sata_fsl_hardreset(struct ata_link *link, unsigned int *class, ··· 811 824 u32 temp; 812 825 int i = 0; 813 826 unsigned long start_jiffies; 814 - 815 - DPRINTK("in xx_hardreset\n"); 816 827 817 828 try_offline_again: 818 829 /* ··· 837 852 goto try_offline_again; 838 853 } 839 854 840 - DPRINTK("hardreset, controller off-lined\n"); 841 - VPRINTK("HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 842 - VPRINTK("HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 855 + ata_port_dbg(ap, "hardreset, controller off-lined\n" 856 + "HStatus = 0x%x HControl = 0x%x\n", 857 + ioread32(hcr_base + HSTATUS), 858 + ioread32(hcr_base + HCONTROL)); 843 859 844 860 /* 845 861 * PHY reset should remain asserted for atleast 1ms ··· 868 882 goto err; 869 883 } 870 884 871 - DPRINTK("hardreset, controller off-lined & on-lined\n"); 872 - VPRINTK("HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 873 - VPRINTK("HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 885 + ata_port_dbg(ap, "controller off-lined & on-lined\n" 886 + "HStatus = 0x%x HControl = 0x%x\n", 887 + ioread32(hcr_base + HSTATUS), 888 + ioread32(hcr_base + HCONTROL)); 874 889 875 890 /* 876 891 * First, wait for the PHYRDY change to occur before waiting for ··· 928 941 u8 *cfis; 929 942 u32 Serror; 930 943 931 - DPRINTK("in xx_softreset\n"); 932 - 933 944 if (ata_link_offline(link)) { 934 - DPRINTK("PHY reports no device\n"); 935 945 *class = ATA_DEV_NONE; 936 946 return 0; 937 947 } ··· 941 957 * reached here, we can send a command to the target device 942 958 */ 943 959 944 - DPRINTK("Sending SRST/device reset\n"); 945 - 946 960 ata_tf_init(link->device, &tf); 
947 961 cfis = (u8 *) &pp->cmdentry->cfis; 948 962 949 963 /* device reset/SRST is a control register update FIS, uses tag0 */ 950 - sata_fsl_setup_cmd_hdr_entry(pp, 0, 964 + sata_fsl_setup_cmd_hdr_entry(ap, pp, 0, 951 965 SRST_CMD | CMD_DESC_RES | CMD_DESC_SNOOP_ENABLE, 0, 0, 5); 952 966 953 967 tf.ctl |= ATA_SRST; /* setup SRST bit in taskfile control reg */ 954 968 ata_tf_to_fis(&tf, pmp, 0, cfis); 955 969 956 - DPRINTK("Dumping cfis : 0x%x, 0x%x, 0x%x, 0x%x\n", 970 + ata_port_dbg(ap, "Dumping cfis : 0x%x, 0x%x, 0x%x, 0x%x\n", 957 971 cfis[0], cfis[1], cfis[2], cfis[3]); 958 972 959 973 /* ··· 959 977 * other commands are active on the controller/device 960 978 */ 961 979 962 - DPRINTK("@Softreset, CQ = 0x%x, CA = 0x%x, CC = 0x%x\n", 980 + ata_port_dbg(ap, "CQ = 0x%x, CA = 0x%x, CC = 0x%x\n", 963 981 ioread32(CQ + hcr_base), 964 982 ioread32(CA + hcr_base), ioread32(CC + hcr_base)); 965 983 ··· 972 990 if (temp & 0x1) { 973 991 ata_port_warn(ap, "ATA_SRST issue failed\n"); 974 992 975 - DPRINTK("Softreset@5000,CQ=0x%x,CA=0x%x,CC=0x%x\n", 993 + ata_port_dbg(ap, "Softreset@5000,CQ=0x%x,CA=0x%x,CC=0x%x\n", 976 994 ioread32(CQ + hcr_base), 977 995 ioread32(CA + hcr_base), ioread32(CC + hcr_base)); 978 996 979 997 sata_fsl_scr_read(&ap->link, SCR_ERROR, &Serror); 980 998 981 - DPRINTK("HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 982 - DPRINTK("HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 983 - DPRINTK("Serror = 0x%x\n", Serror); 999 + ata_port_dbg(ap, "HStatus = 0x%x HControl = 0x%x Serror = 0x%x\n", 1000 + ioread32(hcr_base + HSTATUS), 1001 + ioread32(hcr_base + HCONTROL), 1002 + Serror); 984 1003 goto err; 985 1004 } 986 1005 ··· 995 1012 * using ATA signature D2H register FIS to the host controller. 
996 1013 */ 997 1014 998 - sata_fsl_setup_cmd_hdr_entry(pp, 0, CMD_DESC_RES | CMD_DESC_SNOOP_ENABLE, 999 - 0, 0, 5); 1015 + sata_fsl_setup_cmd_hdr_entry(ap, pp, 0, 1016 + CMD_DESC_RES | CMD_DESC_SNOOP_ENABLE, 1017 + 0, 0, 5); 1000 1018 1001 1019 tf.ctl &= ~ATA_SRST; /* 2nd H2D Ctl. register FIS */ 1002 1020 ata_tf_to_fis(&tf, pmp, 0, cfis); ··· 1014 1030 */ 1015 1031 iowrite32(0x01, CC + hcr_base); /* We know it will be cmd#0 always */ 1016 1032 1017 - DPRINTK("SATA FSL : Now checking device signature\n"); 1018 - 1019 1033 *class = ATA_DEV_NONE; 1020 1034 1021 1035 /* Verify if SStatus indicates device presence */ ··· 1027 1045 1028 1046 *class = sata_fsl_dev_classify(ap); 1029 1047 1030 - DPRINTK("class = %d\n", *class); 1031 - VPRINTK("ccreg = 0x%x\n", ioread32(hcr_base + CC)); 1032 - VPRINTK("cereg = 0x%x\n", ioread32(hcr_base + CE)); 1048 + ata_port_dbg(ap, "ccreg = 0x%x\n", ioread32(hcr_base + CC)); 1049 + ata_port_dbg(ap, "cereg = 0x%x\n", ioread32(hcr_base + CE)); 1033 1050 } 1034 1051 1035 1052 return 0; ··· 1039 1058 1040 1059 static void sata_fsl_error_handler(struct ata_port *ap) 1041 1060 { 1042 - 1043 - DPRINTK("in xx_error_handler\n"); 1044 1061 sata_pmp_error_handler(ap); 1045 - 1046 1062 } 1047 1063 1048 1064 static void sata_fsl_post_internal_cmd(struct ata_queued_cmd *qc) ··· 1080 1102 if (unlikely(SError & 0xFFFF0000)) 1081 1103 sata_fsl_scr_write(&ap->link, SCR_ERROR, SError); 1082 1104 1083 - DPRINTK("error_intr,hStat=0x%x,CE=0x%x,DE =0x%x,SErr=0x%x\n", 1105 + ata_port_dbg(ap, "hStat=0x%x,CE=0x%x,DE =0x%x,SErr=0x%x\n", 1084 1106 hstatus, cereg, ioread32(hcr_base + DE), SError); 1085 1107 1086 1108 /* handle fatal errors */ ··· 1097 1119 1098 1120 /* Handle PHYRDY change notification */ 1099 1121 if (hstatus & INT_ON_PHYRDY_CHG) { 1100 - DPRINTK("SATA FSL: PHYRDY change indication\n"); 1122 + ata_port_dbg(ap, "PHYRDY change indication\n"); 1101 1123 1102 1124 /* Setup a soft-reset EH action */ 1103 1125 ata_ehi_hotplugged(ehi); ··· 1118 1140 */ 
1119 1141 abort = 1; 1120 1142 1121 - DPRINTK("single device error, CE=0x%x, DE=0x%x\n", 1143 + ata_port_dbg(ap, "single device error, CE=0x%x, DE=0x%x\n", 1122 1144 ioread32(hcr_base + CE), ioread32(hcr_base + DE)); 1123 1145 1124 1146 /* find out the offending link and qc */ ··· 1223 1245 } 1224 1246 1225 1247 if (unlikely(SError & 0xFFFF0000)) { 1226 - DPRINTK("serror @host_intr : 0x%x\n", SError); 1248 + ata_port_dbg(ap, "serror @host_intr : 0x%x\n", SError); 1227 1249 sata_fsl_error_intr(ap); 1228 1250 } 1229 1251 1230 1252 if (unlikely(hstatus & status_mask)) { 1231 - DPRINTK("error interrupt!!\n"); 1253 + ata_port_dbg(ap, "error interrupt!!\n"); 1232 1254 sata_fsl_error_intr(ap); 1233 1255 return; 1234 1256 } 1235 1257 1236 - VPRINTK("Status of all queues :\n"); 1237 - VPRINTK("done_mask/CC = 0x%x, CA = 0x%x, CE=0x%x,CQ=0x%x,apqa=0x%llx\n", 1258 + ata_port_dbg(ap, "Status of all queues :\n"); 1259 + ata_port_dbg(ap, "done_mask/CC = 0x%x, CA = 0x%x, CE=0x%x,CQ=0x%x,apqa=0x%llx\n", 1238 1260 done_mask, 1239 1261 ioread32(hcr_base + CA), 1240 1262 ioread32(hcr_base + CE), ··· 1246 1268 /* clear CC bit, this will also complete the interrupt */ 1247 1269 iowrite32(done_mask, hcr_base + CC); 1248 1270 1249 - DPRINTK("Status of all queues :\n"); 1250 - DPRINTK("done_mask/CC = 0x%x, CA = 0x%x, CE=0x%x\n", 1271 + ata_port_dbg(ap, "Status of all queues: done_mask/CC = 0x%x, CA = 0x%x, CE=0x%x\n", 1251 1272 done_mask, ioread32(hcr_base + CA), 1252 1273 ioread32(hcr_base + CE)); 1253 1274 1254 1275 for (i = 0; i < SATA_FSL_QUEUE_DEPTH; i++) { 1255 1276 if (done_mask & (1 << i)) 1256 - DPRINTK 1257 - ("completing ncq cmd,tag=%d,CC=0x%x,CA=0x%x\n", 1277 + ata_port_dbg(ap, "completing ncq cmd,tag=%d,CC=0x%x,CA=0x%x\n", 1258 1278 i, ioread32(hcr_base + CC), 1259 1279 ioread32(hcr_base + CA)); 1260 1280 } ··· 1263 1287 iowrite32(1, hcr_base + CC); 1264 1288 qc = ata_qc_from_tag(ap, ATA_TAG_INTERNAL); 1265 1289 1266 - DPRINTK("completing non-ncq cmd, CC=0x%x\n", 1290 + 
ata_port_dbg(ap, "completing non-ncq cmd, CC=0x%x\n", 1267 1291 ioread32(hcr_base + CC)); 1268 1292 1269 1293 if (qc) { ··· 1271 1295 } 1272 1296 } else { 1273 1297 /* Spurious Interrupt!! */ 1274 - DPRINTK("spurious interrupt!!, CC = 0x%x\n", 1298 + ata_port_dbg(ap, "spurious interrupt!!, CC = 0x%x\n", 1275 1299 ioread32(hcr_base + CC)); 1276 1300 iowrite32(done_mask, hcr_base + CC); 1277 1301 return; ··· 1290 1314 /* ack. any pending IRQs for this controller/port */ 1291 1315 interrupt_enables = ioread32(hcr_base + HSTATUS); 1292 1316 interrupt_enables &= 0x3F; 1293 - 1294 - DPRINTK("interrupt status 0x%x\n", interrupt_enables); 1295 1317 1296 1318 if (!interrupt_enables) 1297 1319 return IRQ_NONE; ··· 1343 1369 iowrite32((temp & ~0x3F), hcr_base + HCONTROL); 1344 1370 1345 1371 /* Disable interrupt coalescing control(icc), for the moment */ 1346 - DPRINTK("icc = 0x%x\n", ioread32(hcr_base + ICC)); 1372 + dev_dbg(host->dev, "icc = 0x%x\n", ioread32(hcr_base + ICC)); 1347 1373 iowrite32(0x01000000, hcr_base + ICC); 1348 1374 1349 1375 /* clear error registers, SError is cleared by libATA */ ··· 1362 1388 * callback, that should also initiate the OOB, COMINIT sequence 1363 1389 */ 1364 1390 1365 - DPRINTK("HStatus = 0x%x\n", ioread32(hcr_base + HSTATUS)); 1366 - DPRINTK("HControl = 0x%x\n", ioread32(hcr_base + HCONTROL)); 1391 + dev_dbg(host->dev, "HStatus = 0x%x HControl = 0x%x\n", 1392 + ioread32(hcr_base + HSTATUS), ioread32(hcr_base + HCONTROL)); 1367 1393 1368 1394 return 0; 1369 1395 } ··· 1380 1406 * scsi mid-layer and libata interface structures 1381 1407 */ 1382 1408 static struct scsi_host_template sata_fsl_sht = { 1383 - ATA_NCQ_SHT("sata_fsl"), 1384 - .can_queue = SATA_FSL_QUEUE_DEPTH, 1409 + ATA_NCQ_SHT_QD("sata_fsl", SATA_FSL_QUEUE_DEPTH), 1385 1410 .sg_tablesize = SATA_FSL_MAX_PRD_USABLE, 1386 1411 .dma_boundary = ATA_DMA_BOUNDARY, 1387 1412 }; ··· 1451 1478 iowrite32(temp | TRANSCFG_RX_WATER_MARK, csr_base + TRANSCFG); 1452 1479 } 1453 1480 1454 - 
DPRINTK("@reset i/o = 0x%x\n", ioread32(csr_base + TRANSCFG)); 1455 - DPRINTK("sizeof(cmd_desc) = %d\n", sizeof(struct command_desc)); 1456 - DPRINTK("sizeof(#define cmd_desc) = %d\n", SATA_FSL_CMD_DESC_SIZE); 1481 + dev_dbg(&ofdev->dev, "@reset i/o = 0x%x\n", 1482 + ioread32(csr_base + TRANSCFG)); 1457 1483 1458 1484 host_priv = kzalloc(sizeof(struct sata_fsl_host_priv), GFP_KERNEL); 1459 1485 if (!host_priv)
+2 -2
drivers/ata/sata_gemini.c
··· 253 253 254 254 ret = clk_prepare_enable(sg->sata0_pclk); 255 255 if (ret) { 256 - pr_err("failed to enable SATA0 PCLK\n"); 256 + dev_err(dev, "failed to enable SATA0 PCLK\n"); 257 257 return ret; 258 258 } 259 259 ret = clk_prepare_enable(sg->sata1_pclk); 260 260 if (ret) { 261 - pr_err("failed to enable SATA1 PCLK\n"); 261 + dev_err(dev, "failed to enable SATA1 PCLK\n"); 262 262 clk_disable_unprepare(sg->sata0_pclk); 263 263 return ret; 264 264 }
+1 -3
drivers/ata/sata_inic162x.c
··· 488 488 bool is_data = ata_is_data(qc->tf.protocol); 489 489 unsigned int cdb_len = 0; 490 490 491 - VPRINTK("ENTER\n"); 492 - 493 491 if (is_atapi) 494 492 cdb_len = qc->dev->cdb_len; 495 493 ··· 655 657 } 656 658 657 659 inic_tf_read(ap, &tf); 658 - *class = ata_dev_classify(&tf); 660 + *class = ata_port_classify(ap, &tf); 659 661 } 660 662 661 663 return 0;
+61 -71
drivers/ata/sata_mv.c
··· 579 579 void (*enable_leds)(struct mv_host_priv *hpriv, void __iomem *mmio); 580 580 void (*read_preamp)(struct mv_host_priv *hpriv, int idx, 581 581 void __iomem *mmio); 582 - int (*reset_hc)(struct mv_host_priv *hpriv, void __iomem *mmio, 582 + int (*reset_hc)(struct ata_host *host, void __iomem *mmio, 583 583 unsigned int n_hc); 584 584 void (*reset_flash)(struct mv_host_priv *hpriv, void __iomem *mmio); 585 585 void (*reset_bus)(struct ata_host *host, void __iomem *mmio); ··· 606 606 static void mv5_enable_leds(struct mv_host_priv *hpriv, void __iomem *mmio); 607 607 static void mv5_read_preamp(struct mv_host_priv *hpriv, int idx, 608 608 void __iomem *mmio); 609 - static int mv5_reset_hc(struct mv_host_priv *hpriv, void __iomem *mmio, 609 + static int mv5_reset_hc(struct ata_host *host, void __iomem *mmio, 610 610 unsigned int n_hc); 611 611 static void mv5_reset_flash(struct mv_host_priv *hpriv, void __iomem *mmio); 612 612 static void mv5_reset_bus(struct ata_host *host, void __iomem *mmio); ··· 616 616 static void mv6_enable_leds(struct mv_host_priv *hpriv, void __iomem *mmio); 617 617 static void mv6_read_preamp(struct mv_host_priv *hpriv, int idx, 618 618 void __iomem *mmio); 619 - static int mv6_reset_hc(struct mv_host_priv *hpriv, void __iomem *mmio, 619 + static int mv6_reset_hc(struct ata_host *host, void __iomem *mmio, 620 620 unsigned int n_hc); 621 621 static void mv6_reset_flash(struct mv_host_priv *hpriv, void __iomem *mmio); 622 622 static void mv_soc_enable_leds(struct mv_host_priv *hpriv, 623 623 void __iomem *mmio); 624 624 static void mv_soc_read_preamp(struct mv_host_priv *hpriv, int idx, 625 625 void __iomem *mmio); 626 - static int mv_soc_reset_hc(struct mv_host_priv *hpriv, 626 + static int mv_soc_reset_hc(struct ata_host *host, 627 627 void __iomem *mmio, unsigned int n_hc); 628 628 static void mv_soc_reset_flash(struct mv_host_priv *hpriv, 629 629 void __iomem *mmio); ··· 1248 1248 return err; 1249 1249 } 1250 1250 1251 - #ifdef 
ATA_DEBUG 1252 - static void mv_dump_mem(void __iomem *start, unsigned bytes) 1251 + static void mv_dump_mem(struct device *dev, void __iomem *start, unsigned bytes) 1253 1252 { 1254 - int b, w; 1253 + int b, w, o; 1254 + unsigned char linebuf[38]; 1255 + 1255 1256 for (b = 0; b < bytes; ) { 1256 - DPRINTK("%p: ", start + b); 1257 - for (w = 0; b < bytes && w < 4; w++) { 1258 - printk("%08x ", readl(start + b)); 1257 + for (w = 0, o = 0; b < bytes && w < 4; w++) { 1258 + o += snprintf(linebuf + o, sizeof(linebuf) - o, 1259 + "%08x ", readl(start + b)); 1259 1260 b += sizeof(u32); 1260 1261 } 1261 - printk("\n"); 1262 + dev_dbg(dev, "%s: %p: %s\n", 1263 + __func__, start + b, linebuf); 1262 1264 } 1263 1265 } 1264 - #endif 1265 - #if defined(ATA_DEBUG) || defined(CONFIG_PCI) 1266 + 1266 1267 static void mv_dump_pci_cfg(struct pci_dev *pdev, unsigned bytes) 1267 1268 { 1268 - #ifdef ATA_DEBUG 1269 - int b, w; 1270 - u32 dw; 1269 + int b, w, o; 1270 + u32 dw = 0; 1271 + unsigned char linebuf[38]; 1272 + 1271 1273 for (b = 0; b < bytes; ) { 1272 - DPRINTK("%02x: ", b); 1273 - for (w = 0; b < bytes && w < 4; w++) { 1274 + for (w = 0, o = 0; b < bytes && w < 4; w++) { 1274 1275 (void) pci_read_config_dword(pdev, b, &dw); 1275 - printk("%08x ", dw); 1276 + o += snprintf(linebuf + o, sizeof(linebuf) - o, 1277 + "%08x ", dw); 1276 1278 b += sizeof(u32); 1277 1279 } 1278 - printk("\n"); 1280 + dev_dbg(&pdev->dev, "%s: %02x: %s\n", 1281 + __func__, b, linebuf); 1279 1282 } 1280 - #endif 1281 1283 } 1282 - #endif 1283 - static void mv_dump_all_regs(void __iomem *mmio_base, int port, 1284 + 1285 + static void mv_dump_all_regs(void __iomem *mmio_base, 1284 1286 struct pci_dev *pdev) 1285 1287 { 1286 - #ifdef ATA_DEBUG 1287 - void __iomem *hc_base = mv_hc_base(mmio_base, 1288 - port >> MV_PORT_HC_SHIFT); 1288 + void __iomem *hc_base; 1289 1289 void __iomem *port_base; 1290 1290 int start_port, num_ports, p, start_hc, num_hcs, hc; 1291 1291 1292 - if (0 > port) { 1293 - start_hc = 
start_port = 0;
-		num_ports = 8;		/* shld be benign for 4 port devs */
-		num_hcs = 2;
-	} else {
-		start_hc = port >> MV_PORT_HC_SHIFT;
-		start_port = port;
-		num_ports = num_hcs = 1;
-	}
-	DPRINTK("All registers for port(s) %u-%u:\n", start_port,
-		num_ports > 1 ? num_ports - 1 : start_port);
+	start_hc = start_port = 0;
+	num_ports = 8;		/* should be benign for 4 port devs */
+	num_hcs = 2;
+	dev_dbg(&pdev->dev,
+		"%s: All registers for port(s) %u-%u:\n", __func__,
+		start_port, num_ports > 1 ? num_ports - 1 : start_port);
 
-	if (NULL != pdev) {
-		DPRINTK("PCI config space regs:\n");
-		mv_dump_pci_cfg(pdev, 0x68);
-	}
-	DPRINTK("PCI regs:\n");
-	mv_dump_mem(mmio_base+0xc00, 0x3c);
-	mv_dump_mem(mmio_base+0xd00, 0x34);
-	mv_dump_mem(mmio_base+0xf00, 0x4);
-	mv_dump_mem(mmio_base+0x1d00, 0x6c);
+	dev_dbg(&pdev->dev, "%s: PCI config space regs:\n", __func__);
+	mv_dump_pci_cfg(pdev, 0x68);
+
+	dev_dbg(&pdev->dev, "%s: PCI regs:\n", __func__);
+	mv_dump_mem(&pdev->dev, mmio_base+0xc00, 0x3c);
+	mv_dump_mem(&pdev->dev, mmio_base+0xd00, 0x34);
+	mv_dump_mem(&pdev->dev, mmio_base+0xf00, 0x4);
+	mv_dump_mem(&pdev->dev, mmio_base+0x1d00, 0x6c);
 	for (hc = start_hc; hc < start_hc + num_hcs; hc++) {
 		hc_base = mv_hc_base(mmio_base, hc);
-		DPRINTK("HC regs (HC %i):\n", hc);
-		mv_dump_mem(hc_base, 0x1c);
+		dev_dbg(&pdev->dev, "%s: HC regs (HC %i):\n", __func__, hc);
+		mv_dump_mem(&pdev->dev, hc_base, 0x1c);
 	}
 	for (p = start_port; p < start_port + num_ports; p++) {
 		port_base = mv_port_base(mmio_base, p);
-		DPRINTK("EDMA regs (port %i):\n", p);
-		mv_dump_mem(port_base, 0x54);
-		DPRINTK("SATA regs (port %i):\n", p);
-		mv_dump_mem(port_base+0x300, 0x60);
+		dev_dbg(&pdev->dev, "%s: EDMA regs (port %i):\n", __func__, p);
+		mv_dump_mem(&pdev->dev, port_base, 0x54);
+		dev_dbg(&pdev->dev, "%s: SATA regs (port %i):\n", __func__, p);
+		mv_dump_mem(&pdev->dev, port_base+0x300, 0x60);
 	}
-#endif
 }
 
 static unsigned int mv_scr_offset(unsigned int sc_reg_in)
···
 
 	dev_err(host->dev, "PCI ERROR; PCI IRQ cause=0x%08x\n", err_cause);
 
-	DPRINTK("All regs @ PCI error\n");
-	mv_dump_all_regs(mmio, -1, to_pci_dev(host->dev));
+	dev_dbg(host->dev, "%s: All regs @ PCI error\n", __func__);
+	mv_dump_all_regs(mmio, to_pci_dev(host->dev));
 
 	writelfl(0, mmio + hpriv->irq_cause_offset);
 
···
 }
 #undef ZERO
 
-static int mv5_reset_hc(struct mv_host_priv *hpriv, void __iomem *mmio,
+static int mv5_reset_hc(struct ata_host *host, void __iomem *mmio,
 			unsigned int n_hc)
 {
+	struct mv_host_priv *hpriv = host->private_data;
 	unsigned int hc, port;
 
 	for (hc = 0; hc < n_hc; hc++) {
···
 * LOCKING:
 * Inherited from caller.
 */
-static int mv6_reset_hc(struct mv_host_priv *hpriv, void __iomem *mmio,
+static int mv6_reset_hc(struct ata_host *host, void __iomem *mmio,
 			unsigned int n_hc)
 {
 	void __iomem *reg = mmio + PCI_MAIN_CMD_STS;
···
 		break;
 	}
 	if (!(PCI_MASTER_EMPTY & t)) {
-		printk(KERN_ERR DRV_NAME ": PCI master won't flush\n");
+		dev_err(host->dev, "PCI master won't flush\n");
 		rc = 1;
 		goto done;
 	}
···
 	} while (!(GLOB_SFT_RST & t) && (i-- > 0));
 
 	if (!(GLOB_SFT_RST & t)) {
-		printk(KERN_ERR DRV_NAME ": can't set global reset\n");
+		dev_err(host->dev, "can't set global reset\n");
 		rc = 1;
 		goto done;
 	}
···
 	} while ((GLOB_SFT_RST & t) && (i-- > 0));
 
 	if (GLOB_SFT_RST & t) {
-		printk(KERN_ERR DRV_NAME ": can't clear global reset\n");
+		dev_err(host->dev, "can't clear global reset\n");
 		rc = 1;
 	}
 done:
···
 
 #undef ZERO
 
-static int mv_soc_reset_hc(struct mv_host_priv *hpriv,
+static int mv_soc_reset_hc(struct ata_host *host,
 			   void __iomem *mmio, unsigned int n_hc)
 {
+	struct mv_host_priv *hpriv = host->private_data;
 	unsigned int port;
 
 	for (port = 0; port < hpriv->n_ports; port++)
···
 
 	/* unmask all non-transient EDMA error interrupts */
 	writelfl(~EDMA_ERR_IRQ_TRANSIENT, port_mmio + EDMA_ERR_IRQ_MASK);
-
-	VPRINTK("EDMA cfg=0x%08x EDMA IRQ err cause/mask=0x%08x/0x%08x\n",
-		readl(port_mmio + EDMA_CFG),
-		readl(port_mmio + EDMA_ERR_IRQ_CAUSE),
-		readl(port_mmio + EDMA_ERR_IRQ_MASK));
 }
 
 static unsigned int mv_in_pcix_mode(struct ata_host *host)
···
 	 *
 	 * Warn the user, lest they think we're just buggy.
 	 */
-	printk(KERN_WARNING DRV_NAME ": Highpoint RocketRAID"
+	dev_warn(&pdev->dev, "Highpoint RocketRAID"
 		" BIOS CORRUPTS DATA on all attached drives,"
 		" regardless of if/how they are configured."
 		" BEWARE!\n");
-	printk(KERN_WARNING DRV_NAME ": For data safety, do not"
+	dev_warn(&pdev->dev, "For data safety, do not"
 		" use sectors 8-9 on \"Legacy\" drives,"
 		" and avoid the final two gigabytes on"
 		" all RocketRAID BIOS initialized drives.\n");
···
 	if (hpriv->ops->read_preamp)
 		hpriv->ops->read_preamp(hpriv, port, mmio);
 
-	rc = hpriv->ops->reset_hc(hpriv, mmio, n_hc);
+	rc = hpriv->ops->reset_hc(host, mmio, n_hc);
 	if (rc)
 		goto done;
 
···
 	for (hc = 0; hc < n_hc; hc++) {
 		void __iomem *hc_mmio = mv_hc_base(mmio, hc);
 
-		VPRINTK("HC%i: HC config=0x%08x HC IRQ cause "
+		dev_dbg(host->dev, "HC%i: HC config=0x%08x HC IRQ cause "
 			"(before clear)=0x%08x\n", hc,
 			readl(hc_mmio + HC_CFG),
 			readl(hc_mmio + HC_IRQ_CAUSE));
···
 	/* initialize adapter */
 	ret = mv_init_host(host);
 	if (ret) {
-		printk(KERN_ERR DRV_NAME ": Error during HW init\n");
+		dev_err(&pdev->dev, "Error during HW init\n");
 		return ret;
 	}
 	ata_host_resume(host);
+18 -36
drivers/ata/sata_nv.c
···
 #include <scsi/scsi_host.h>
 #include <scsi/scsi_device.h>
 #include <linux/libata.h>
+#include <trace/events/libata.h>
 
 #define DRV_NAME "sata_nv"
 #define DRV_VERSION "3.5"
···
 	struct nv_adma_port_priv *pp = ap->private_data;
 	u8 flags = pp->cpb[cpb_num].resp_flags;
 
-	VPRINTK("CPB %d, flags=0x%x\n", cpb_num, flags);
+	ata_port_dbg(ap, "CPB %d, flags=0x%x\n", cpb_num, flags);
 
 	if (unlikely((force_err ||
 		     flags & (NV_CPB_RESP_ATA_ERR |
···
 	struct pci_dev *pdev = to_pci_dev(dev);
 	u16 tmp;
 
-	VPRINTK("ENTER\n");
-
 	/*
 	 * Ensure DMA mask is set to 32-bit before allocating legacy PRD and
 	 * pad buffers.
···
 	struct nv_adma_port_priv *pp = ap->private_data;
 	void __iomem *mmio = pp->ctl_block;
 
-	VPRINTK("ENTER\n");
 	writew(0, mmio + NV_ADMA_CTL);
 }
 
···
 	void __iomem *mmio = ap->host->iomap[NV_MMIO_BAR];
 	struct ata_ioports *ioport = &ap->ioaddr;
 
-	VPRINTK("ENTER\n");
-
 	mmio += NV_ADMA_PORT + ap->port_no * NV_ADMA_PORT_SIZE;
 
 	ioport->cmd_addr = mmio;
···
 	struct pci_dev *pdev = to_pci_dev(host->dev);
 	unsigned int i;
 	u32 tmp32;
-
-	VPRINTK("ENTER\n");
 
 	/* enable ADMA on the ports */
 	pci_read_config_dword(pdev, NV_MCP_SATA_CFG_20, &tmp32);
···
 	struct nv_adma_prd *aprd;
 	struct scatterlist *sg;
 	unsigned int si;
-
-	VPRINTK("ENTER\n");
 
 	for_each_sg(qc->sg, sg, qc->n_elem, si) {
 		aprd = (si < 5) ? &cpb->aprd[si] :
···
 	if (qc->tf.protocol == ATA_PROT_NCQ)
 		ctl_flags |= NV_CPB_CTL_QUEUE | NV_CPB_CTL_FPDMA;
 
-	VPRINTK("qc->flags = 0x%lx\n", qc->flags);
-
 	nv_adma_tf_to_cpb(&qc->tf, cpb->tf);
 
 	if (qc->flags & ATA_QCFLAG_DMAMAP) {
···
 	void __iomem *mmio = pp->ctl_block;
 	int curr_ncq = (qc->tf.protocol == ATA_PROT_NCQ);
 
-	VPRINTK("ENTER\n");
-
 	/* We can't handle result taskfile with NCQ commands, since
 	   retrieving the taskfile switches us out of ADMA mode and would abort
 	   existing commands. */
···
 
 	if (nv_adma_use_reg_mode(qc)) {
 		/* use ATA register mode */
-		VPRINTK("using ATA register mode: 0x%lx\n", qc->flags);
 		BUG_ON(!(pp->flags & NV_ADMA_ATAPI_SETUP_COMPLETE) &&
 			(qc->flags & ATA_QCFLAG_DMAMAP));
 		nv_adma_register_mode(qc->ap);
···
 	}
 
 	writew(qc->hw_tag, mmio + NV_ADMA_APPEND);
-
-	DPRINTK("Issued tag %u\n", qc->hw_tag);
 
 	return 0;
 }
···
 
 	/* enable swncq */
 	tmp = readl(mmio + NV_CTL_MCP55);
-	VPRINTK("HOST_CTL:0x%X\n", tmp);
+	dev_dbg(&pdev->dev, "HOST_CTL:0x%X\n", tmp);
 	writel(tmp | NV_CTL_PRI_SWNCQ | NV_CTL_SEC_SWNCQ, mmio + NV_CTL_MCP55);
 
 	/* enable irq intr */
 	tmp = readl(mmio + NV_INT_ENABLE_MCP55);
-	VPRINTK("HOST_ENABLE:0x%X\n", tmp);
+	dev_dbg(&pdev->dev, "HOST_ENABLE:0x%X\n", tmp);
 	writel(tmp | 0x00fd00fd, mmio + NV_INT_ENABLE_MCP55);
 
 	/* clear port irq */
···
 	if (qc == NULL)
 		return 0;
 
-	DPRINTK("Enter\n");
-
 	writel((1 << qc->hw_tag), pp->sactive_block);
 	pp->last_issue_tag = qc->hw_tag;
 	pp->dhfis_bits &= ~(1 << qc->hw_tag);
 	pp->dmafis_bits &= ~(1 << qc->hw_tag);
 	pp->qc_active |= (0x1 << qc->hw_tag);
 
+	trace_ata_tf_load(ap, &qc->tf);
 	ap->ops->sff_tf_load(ap, &qc->tf);	 /* load tf registers */
+	trace_ata_exec_command(ap, &qc->tf, qc->hw_tag);
 	ap->ops->sff_exec_command(ap, &qc->tf);
-
-	DPRINTK("Issued tag %u\n", qc->hw_tag);
 
 	return 0;
 }
···
 
 	if (qc->tf.protocol != ATA_PROT_NCQ)
 		return ata_bmdma_qc_issue(qc);
-
-	DPRINTK("Enter\n");
 
 	if (!pp->qc_active)
 		nv_swncq_issue_atacmd(ap, qc);
···
 	u8 lack_dhfis = 0;
 
 	host_stat = ap->ops->bmdma_status(ap);
+	trace_ata_bmdma_status(ap, host_stat);
 	if (unlikely(host_stat & ATA_DMA_ERR)) {
 		/* error when transferring data to/from memory */
 		ata_ehi_clear_desc(ehi);
···
 	ata_qc_complete_multiple(ap, ata_qc_get_active(ap) ^ done_mask);
 
 	if (!ap->qc_active) {
-		DPRINTK("over\n");
+		ata_port_dbg(ap, "over\n");
 		nv_swncq_pp_reinit(ap);
 		return 0;
 	}
···
 	 */
 		lack_dhfis = 1;
 
-	DPRINTK("id 0x%x QC: qc_active 0x%llx,"
-		"SWNCQ:qc_active 0x%X defer_bits %X "
-		"dhfis 0x%X dmafis 0x%X last_issue_tag %x\n",
-		ap->print_id, ap->qc_active, pp->qc_active,
-		pp->defer_queue.defer_bits, pp->dhfis_bits,
-		pp->dmafis_bits, pp->last_issue_tag);
+	ata_port_dbg(ap, "QC: qc_active 0x%llx,"
+		     "SWNCQ:qc_active 0x%X defer_bits %X "
+		     "dhfis 0x%X dmafis 0x%X last_issue_tag %x\n",
+		     ap->qc_active, pp->qc_active,
+		     pp->defer_queue.defer_bits, pp->dhfis_bits,
+		     pp->dmafis_bits, pp->last_issue_tag);
 
 	nv_swncq_fis_reinit(ap);
 
···
 	__ata_bmdma_stop(ap);
 	tag = nv_swncq_tag(ap);
 
-	DPRINTK("dma setup tag 0x%x\n", tag);
+	ata_port_dbg(ap, "dma setup tag 0x%x\n", tag);
 	qc = ata_qc_from_tag(ap, tag);
 
 	if (unlikely(!qc))
···
 
 	if (fis & NV_SWNCQ_IRQ_SDBFIS) {
 		pp->ncq_flags |= ncq_saw_sdb;
-		DPRINTK("id 0x%x SWNCQ: qc_active 0x%X "
+		ata_port_dbg(ap, "SWNCQ: qc_active 0x%X "
 			"dhfis 0x%X dmafis 0x%X sactive 0x%X\n",
-			ap->print_id, pp->qc_active, pp->dhfis_bits,
+			pp->qc_active, pp->dhfis_bits,
 			pp->dmafis_bits, readl(pp->sactive_block));
 		if (nv_swncq_sdbfis(ap) < 0)
 			goto irq_error;
···
 		goto irq_exit;
 
 	if (pp->defer_queue.defer_bits) {
-		DPRINTK("send next command\n");
+		ata_port_dbg(ap, "send next command\n");
 		qc = nv_swncq_qc_from_dq(ap);
 		nv_swncq_issue_atacmd(ap, qc);
 	}
+8 -23
drivers/ata/sata_promise.c
···
 
 			prd[idx].addr = cpu_to_le32(addr);
 			prd[idx].flags_len = cpu_to_le32(len & 0xffff);
-			VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
+			ata_port_dbg(ap, "PRD[%u] = (0x%X, 0x%X)\n",
+				     idx, addr, len);
 
 			idx++;
 			sg_len -= len;
···
 	if (len > SG_COUNT_ASIC_BUG) {
 		u32 addr;
 
-		VPRINTK("Splitting last PRD.\n");
-
 		addr = le32_to_cpu(prd[idx - 1].addr);
 		prd[idx - 1].flags_len = cpu_to_le32(len - SG_COUNT_ASIC_BUG);
-		VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx - 1, addr, SG_COUNT_ASIC_BUG);
+		ata_port_dbg(ap, "PRD[%u] = (0x%X, 0x%X)\n",
+			     idx - 1, addr, SG_COUNT_ASIC_BUG);
 
 		addr = addr + len - SG_COUNT_ASIC_BUG;
 		len = SG_COUNT_ASIC_BUG;
 		prd[idx].addr = cpu_to_le32(addr);
 		prd[idx].flags_len = cpu_to_le32(len);
-		VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
+		ata_port_dbg(ap, "PRD[%u] = (0x%X, 0x%X)\n", idx, addr, len);
 
 		idx++;
 	}
···
 {
 	struct pdc_port_priv *pp = qc->ap->private_data;
 	unsigned int i;
-
-	VPRINTK("ENTER\n");
 
 	switch (qc->tf.protocol) {
 	case ATA_PROT_DMA:
···
 	u32 hotplug_status;
 	int is_sataii_tx4;
 
-	VPRINTK("ENTER\n");
-
-	if (!host || !host->iomap[PDC_MMIO_BAR]) {
-		VPRINTK("QUICK EXIT\n");
+	if (!host || !host->iomap[PDC_MMIO_BAR])
 		return IRQ_NONE;
-	}
 
 	host_mmio = host->iomap[PDC_MMIO_BAR];
 
···
 	/* reading should also clear interrupts */
 	mask = readl(host_mmio + PDC_INT_SEQMASK);
 
-	if (mask == 0xffffffff && hotplug_status == 0) {
-		VPRINTK("QUICK EXIT 2\n");
+	if (mask == 0xffffffff && hotplug_status == 0)
 		goto done_irq;
-	}
 
 	mask &= 0xffff;		/* only 16 SEQIDs possible */
-	if (mask == 0 && hotplug_status == 0) {
-		VPRINTK("QUICK EXIT 3\n");
+	if (mask == 0 && hotplug_status == 0)
 		goto done_irq;
-	}
 
 	writel(mask, host_mmio + PDC_INT_SEQMASK);
 
 	is_sataii_tx4 = pdc_is_sataii_tx4(host->ports[0]->flags);
 
 	for (i = 0; i < host->n_ports; i++) {
-		VPRINTK("port %u\n", i);
 		ap = host->ports[i];
 
 		/* check for a plug or unplug event */
···
 		}
 	}
 
-	VPRINTK("EXIT\n");
-
 done_irq:
 	spin_unlock(&host->lock);
 	return IRQ_RETVAL(handled);
···
 	void __iomem *ata_mmio = ap->ioaddr.cmd_addr;
 	unsigned int port_no = ap->port_no;
 	u8 seq = (u8) (port_no + 1);
-
-	VPRINTK("ENTER, ap %p\n", ap);
 
 	writel(0x00000001, host_mmio + (seq * 4));
 	readl(host_mmio + (seq * 4));	/* flush */
+2 -13
drivers/ata/sata_qstor.c
···
 		len = sg_dma_len(sg);
 		*(__le32 *)prd = cpu_to_le32(len);
 		prd += sizeof(u64);
-
-		VPRINTK("PRD[%u] = (0x%llX, 0x%X)\n", si,
-			(unsigned long long)addr, len);
 	}
 
 	return si;
···
 	u8 hflags = QS_HF_DAT | QS_HF_IEN | QS_HF_VLD;
 	u64 addr;
 	unsigned int nelem;
-
-	VPRINTK("ENTER\n");
 
 	qs_enter_reg_mode(qc->ap);
 	if (qc->tf.protocol != ATA_PROT_DMA)
···
 {
 	struct ata_port *ap = qc->ap;
 	u8 __iomem *chan = qs_mmio_base(ap->host) + (ap->port_no * 0x4000);
-
-	VPRINTK("ENTER, ap %p\n", ap);
 
 	writeb(QS_CTR0_CLER, chan + QS_CCT_CTR0);
 	wmb();	/* flush PRDs and pkt to memory */
···
 		struct qs_port_priv *pp = ap->private_data;
 		struct ata_queued_cmd *qc;
 
-		DPRINTK("SFF=%08x%08x: sCHAN=%u sHST=%d sDST=%02x\n",
-			sff1, sff0, port_no, sHST, sDST);
+		dev_dbg(host->dev, "SFF=%08x%08x: sHST=%d sDST=%02x\n",
+			sff1, sff0, sHST, sDST);
 		handled = 1;
 		if (!pp || pp->state != qs_state_pkt)
 			continue;
···
 	unsigned int handled = 0;
 	unsigned long flags;
 
-	VPRINTK("ENTER\n");
-
 	spin_lock_irqsave(&host->lock, flags);
 	handled = qs_intr_pkt(host) | qs_intr_mmio(host);
 	spin_unlock_irqrestore(&host->lock, flags);
-
-	VPRINTK("EXIT\n");
 
 	return IRQ_RETVAL(handled);
 }
+2 -24
drivers/ata/sata_rcar.c
···
 {
 	struct ata_ioports *ioaddr = &ap->ioaddr;
 
-	DPRINTK("ata%u: bus reset via SRST\n", ap->print_id);
-
 	/* software reset.  causes dev0 to be selected */
 	iowrite32(ap->ctl, ioaddr->ctl_addr);
 	udelay(20);
···
 		devmask |= 1 << 0;
 
 	/* issue bus reset */
-	DPRINTK("about to softreset, devmask=%x\n", devmask);
 	rc = sata_rcar_bus_softreset(ap, deadline);
 	/* if link is occupied, -ENODEV too is an error */
 	if (rc && (rc != -ENODEV || sata_scr_valid(link))) {
···
 	/* determine by signature whether we have ATA or ATAPI devices */
 	classes[0] = ata_sff_dev_classify(&link->device[0], devmask, &err);
 
-	DPRINTK("classes[0]=%u\n", classes[0]);
 	return 0;
 }
 
···
 		iowrite32(tf->hob_lbal, ioaddr->lbal_addr);
 		iowrite32(tf->hob_lbam, ioaddr->lbam_addr);
 		iowrite32(tf->hob_lbah, ioaddr->lbah_addr);
-		VPRINTK("hob: feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n",
-			tf->hob_feature,
-			tf->hob_nsect,
-			tf->hob_lbal,
-			tf->hob_lbam,
-			tf->hob_lbah);
 	}
 
 	if (is_addr) {
···
 		iowrite32(tf->lbal, ioaddr->lbal_addr);
 		iowrite32(tf->lbam, ioaddr->lbam_addr);
 		iowrite32(tf->lbah, ioaddr->lbah_addr);
-		VPRINTK("feat 0x%X nsect 0x%X lba 0x%X 0x%X 0x%X\n",
-			tf->feature,
-			tf->nsect,
-			tf->lbal,
-			tf->lbam,
-			tf->lbah);
 	}
 
-	if (tf->flags & ATA_TFLAG_DEVICE) {
+	if (tf->flags & ATA_TFLAG_DEVICE)
 		iowrite32(tf->device, ioaddr->device_addr);
-		VPRINTK("device 0x%X\n", tf->device);
-	}
 
 	ata_wait_idle(ap);
 }
···
 static void sata_rcar_exec_command(struct ata_port *ap,
 				   const struct ata_taskfile *tf)
 {
-	DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command);
-
 	iowrite32(tf->command, ap->ioaddr.command_addr);
 	ata_sff_pause(ap);
 }
···
 	     count < 65536; count += 2)
 		ioread32(ap->ioaddr.data_addr);
 
-	/* Can become DEBUG later */
 	if (count)
 		ata_port_dbg(ap, "drained %d bytes to clear DRQ\n", count);
 }
···
 
 		prd[si].addr = cpu_to_le32(addr);
 		prd[si].flags_len = cpu_to_le32(sg_len);
-		VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", si, addr, sg_len);
 	}
 
 	/* end-of-table flag */
···
 	if (!serror)
 		return;
 
-	DPRINTK("SError @host_intr: 0x%x\n", serror);
+	ata_port_dbg(ap, "SError @host_intr: 0x%x\n", serror);
 
 	/* first, analyze and record host port events */
 	ata_ehi_clear_desc(ehi);
-1
drivers/ata/sata_sil.c
···
 
 		prd->addr = cpu_to_le32(addr);
 		prd->flags_len = cpu_to_le32(sg_len);
-		VPRINTK("PRD[%u] = (0x%X, 0x%X)\n", si, addr, sg_len);
 
 		last_prd = prd;
 		prd++;
+1 -4
drivers/ata/sata_sil24.c
···
 	const char *reason;
 	int rc;
 
-	DPRINTK("ENTER\n");
-
 	/* put the port into known state */
 	if (sil24_init_port(ap)) {
 		reason = "port not ready";
···
 	}
 
 	sil24_read_tf(ap, 0, &tf);
-	*class = ata_dev_classify(&tf);
+	*class = ata_port_classify(ap, &tf);
 
-	DPRINTK("EXIT, class=%u\n", *class);
 	return 0;
 
 err:
+53 -95
drivers/ata/sata_sx4.c
···
 #define DRV_NAME	"sata_sx4"
 #define DRV_VERSION	"0.12"
 
+static int dimm_test;
+module_param(dimm_test, int, 0644);
+MODULE_PARM_DESC(dimm_test, "Enable DIMM test during startup (1 = enabled)");
 
 enum {
 	PDC_MMIO_BAR		= 3,
···
 			    u32 device, u32 subaddr, u32 *pdata);
 static int pdc20621_prog_dimm0(struct ata_host *host);
 static unsigned int pdc20621_prog_dimm_global(struct ata_host *host);
-#ifdef ATA_VERBOSE_DEBUG
 static void pdc20621_get_from_dimm(struct ata_host *host,
 				   void *psource, u32 offset, u32 size);
-#endif
 static void pdc20621_put_to_dimm(struct ata_host *host,
 				 void *psource, u32 offset, u32 size);
 static void pdc20621_irq_clear(struct ata_port *ap);
···
 	/* output ATA packet S/G table */
 	addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
 	       (PDC_DIMM_DATA_STEP * portno);
-	VPRINTK("ATA sg addr 0x%x, %d\n", addr, addr);
+
 	buf32[dw] = cpu_to_le32(addr);
 	buf32[dw + 1] = cpu_to_le32(total_len | ATA_PRD_EOT);
-
-	VPRINTK("ATA PSG @ %x == (0x%x, 0x%x)\n",
-		PDC_20621_DIMM_BASE +
-		(PDC_DIMM_WINDOW_STEP * portno) +
-		PDC_DIMM_APKT_PRD,
-		buf32[dw], buf32[dw + 1]);
 }
 
 static inline void pdc20621_host_sg(u8 *buf, unsigned int portno,
···
 
 	buf32[dw] = cpu_to_le32(addr);
 	buf32[dw + 1] = cpu_to_le32(total_len | ATA_PRD_EOT);
-
-	VPRINTK("HOST PSG @ %x == (0x%x, 0x%x)\n",
-		PDC_20621_DIMM_BASE +
-		(PDC_DIMM_WINDOW_STEP * portno) +
-		PDC_DIMM_HPKT_PRD,
-		buf32[dw], buf32[dw + 1]);
 }
 
 static inline unsigned int pdc20621_ata_pkt(struct ata_taskfile *tf,
···
 	unsigned int dimm_sg = PDC_20621_DIMM_BASE +
 			       (PDC_DIMM_WINDOW_STEP * portno) +
 			       PDC_DIMM_APKT_PRD;
-	VPRINTK("ENTER, dimm_sg == 0x%x, %d\n", dimm_sg, dimm_sg);
 
 	i = PDC_DIMM_ATA_PKT;
 
···
 	unsigned int dimm_sg = PDC_20621_DIMM_BASE +
 			       (PDC_DIMM_WINDOW_STEP * portno) +
 			       PDC_DIMM_HPKT_PRD;
-	VPRINTK("ENTER, dimm_sg == 0x%x, %d\n", dimm_sg, dimm_sg);
-	VPRINTK("host_sg == 0x%x, %d\n", host_sg, host_sg);
 
 	dw = PDC_DIMM_HOST_PKT >> 2;
 
···
 	buf32[dw + 1] = cpu_to_le32(host_sg);
 	buf32[dw + 2] = cpu_to_le32(dimm_sg);
 	buf32[dw + 3] = 0;
-
-	VPRINTK("HOST PKT @ %x == (0x%x 0x%x 0x%x 0x%x)\n",
-		PDC_20621_DIMM_BASE + (PDC_DIMM_WINDOW_STEP * portno) +
-		PDC_DIMM_HOST_PKT,
-		buf32[dw + 0],
-		buf32[dw + 1],
-		buf32[dw + 2],
-		buf32[dw + 3]);
 }
 
 static void pdc20621_dma_prep(struct ata_queued_cmd *qc)
···
 	__le32 *buf = (__le32 *) &pp->dimm_buf[PDC_DIMM_HEADER_SZ];
 
 	WARN_ON(!(qc->flags & ATA_QCFLAG_DMAMAP));
-
-	VPRINTK("ata%u: ENTER\n", ap->print_id);
 
 	/* hard-code chip #0 */
 	mmio += PDC_CHIP0_OFS;
···
 
 	readl(dimm_mmio);	/* MMIO PCI posting flush */
 
-	VPRINTK("ata pkt buf ofs %u, prd size %u, mmio copied\n", i, sgt_len);
+	ata_port_dbg(ap, "ata pkt buf ofs %u, prd size %u, mmio copied\n",
+		     i, sgt_len);
 }
 
 static void pdc20621_nodata_prep(struct ata_queued_cmd *qc)
···
 	void __iomem *dimm_mmio = ap->host->iomap[PDC_DIMM_BAR];
 	unsigned int portno = ap->port_no;
 	unsigned int i;
-
-	VPRINTK("ata%u: ENTER\n", ap->print_id);
 
 	/* hard-code chip #0 */
 	mmio += PDC_CHIP0_OFS;
···
 
 	readl(dimm_mmio);	/* MMIO PCI posting flush */
 
-	VPRINTK("ata pkt buf ofs %u, mmio copied\n", i);
+	ata_port_dbg(ap, "ata pkt buf ofs %u, mmio copied\n", i);
 }
 
 static enum ata_completion_errors pdc20621_qc_prep(struct ata_queued_cmd *qc)
···
 	pp->hdma_cons++;
 }
 
-#ifdef ATA_VERBOSE_DEBUG
 static void pdc20621_dump_hdma(struct ata_queued_cmd *qc)
 {
 	struct ata_port *ap = qc->ap;
···
 	dimm_mmio += (port_no * PDC_DIMM_WINDOW_STEP);
 	dimm_mmio += PDC_DIMM_HOST_PKT;
 
-	printk(KERN_ERR "HDMA[0] == 0x%08X\n", readl(dimm_mmio));
-	printk(KERN_ERR "HDMA[1] == 0x%08X\n", readl(dimm_mmio + 4));
-	printk(KERN_ERR "HDMA[2] == 0x%08X\n", readl(dimm_mmio + 8));
-	printk(KERN_ERR "HDMA[3] == 0x%08X\n", readl(dimm_mmio + 12));
+	ata_port_dbg(ap, "HDMA 0x%08X 0x%08X 0x%08X 0x%08X\n",
+		     readl(dimm_mmio), readl(dimm_mmio + 4),
+		     readl(dimm_mmio + 8), readl(dimm_mmio + 12));
 }
-#else
-static inline void pdc20621_dump_hdma(struct ata_queued_cmd *qc) { }
-#endif /* ATA_VERBOSE_DEBUG */
 
 static void pdc20621_packet_start(struct ata_queued_cmd *qc)
 {
···
 	/* hard-code chip #0 */
 	mmio += PDC_CHIP0_OFS;
 
-	VPRINTK("ata%u: ENTER\n", ap->print_id);
-
 	wmb();			/* flush PRD, pkt writes */
 
 	port_ofs = PDC_20621_DIMM_BASE + (PDC_DIMM_WINDOW_STEP * port_no);
···
 
 		pdc20621_dump_hdma(qc);
 		pdc20621_push_hdma(qc, seq, port_ofs + PDC_DIMM_HOST_PKT);
-		VPRINTK("queued ofs 0x%x (%u), seq %u\n",
+		ata_port_dbg(ap, "queued ofs 0x%x (%u), seq %u\n",
 			port_ofs + PDC_DIMM_HOST_PKT,
 			port_ofs + PDC_DIMM_HOST_PKT,
 			seq);
···
 		writel(port_ofs + PDC_DIMM_ATA_PKT,
 		       ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
 		readl(ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
-		VPRINTK("submitted ofs 0x%x (%u), seq %u\n",
+		ata_port_dbg(ap, "submitted ofs 0x%x (%u), seq %u\n",
 			port_ofs + PDC_DIMM_ATA_PKT,
 			port_ofs + PDC_DIMM_ATA_PKT,
 			seq);
···
 	u8 status;
 	unsigned int handled = 0;
 
-	VPRINTK("ENTER\n");
-
 	if ((qc->tf.protocol == ATA_PROT_DMA) &&	/* read */
 	    (!(qc->tf.flags & ATA_TFLAG_WRITE))) {
 
 		/* step two - DMA from DIMM to host */
 		if (doing_hdma) {
-			VPRINTK("ata%u: read hdma, 0x%x 0x%x\n", ap->print_id,
+			ata_port_dbg(ap, "read hdma, 0x%x 0x%x\n",
 				readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
 			/* get drive status; clear intr; complete txn */
 			qc->err_mask |= ac_err_mask(ata_wait_idle(ap));
···
 		/* step one - exec ATA command */
 		else {
 			u8 seq = (u8) (port_no + 1 + 4);
-			VPRINTK("ata%u: read ata, 0x%x 0x%x\n", ap->print_id,
+			ata_port_dbg(ap, "read ata, 0x%x 0x%x\n",
 				readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
 
 			/* submit hdma pkt */
···
 		/* step one - DMA from host to DIMM */
 		if (doing_hdma) {
 			u8 seq = (u8) (port_no + 1);
-			VPRINTK("ata%u: write hdma, 0x%x 0x%x\n", ap->print_id,
+			ata_port_dbg(ap, "write hdma, 0x%x 0x%x\n",
 				readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
 
 			/* submit ata pkt */
···
 
 		/* step two - execute ATA command */
 		else {
-			VPRINTK("ata%u: write ata, 0x%x 0x%x\n", ap->print_id,
+			ata_port_dbg(ap, "write ata, 0x%x 0x%x\n",
 				readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
 			/* get drive status; clear intr; complete txn */
 			qc->err_mask |= ac_err_mask(ata_wait_idle(ap));
···
 	} else if (qc->tf.protocol == ATA_PROT_NODATA) {
 
 		status = ata_sff_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000);
-		DPRINTK("BUS_NODATA (drv_stat 0x%X)\n", status);
+		ata_port_dbg(ap, "BUS_NODATA (drv_stat 0x%X)\n", status);
 		qc->err_mask |= ac_err_mask(status);
 		ata_qc_complete(qc);
 		handled = 1;
···
 	unsigned int handled = 0;
 	void __iomem *mmio_base;
 
-	VPRINTK("ENTER\n");
-
-	if (!host || !host->iomap[PDC_MMIO_BAR]) {
-		VPRINTK("QUICK EXIT\n");
+	if (!host || !host->iomap[PDC_MMIO_BAR])
 		return IRQ_NONE;
-	}
 
 	mmio_base = host->iomap[PDC_MMIO_BAR];
 
 	/* reading should also clear interrupts */
 	mmio_base += PDC_CHIP0_OFS;
 	mask = readl(mmio_base + PDC_20621_SEQMASK);
-	VPRINTK("mask == 0x%x\n", mask);
 
-	if (mask == 0xffffffff) {
-		VPRINTK("QUICK EXIT 2\n");
+	if (mask == 0xffffffff)
 		return IRQ_NONE;
-	}
+
 	mask &= 0xffff;		/* only 16 tags possible */
-	if (!mask) {
-		VPRINTK("QUICK EXIT 3\n");
+	if (!mask)
 		return IRQ_NONE;
-	}
 
 	spin_lock(&host->lock);
 
···
 		else
 			ap = host->ports[port_no];
 		tmp = mask & (1 << i);
-		VPRINTK("seq %u, port_no %u, ap %p, tmp %x\n", i, port_no, ap, tmp);
+		if (ap)
+			ata_port_dbg(ap, "seq %u, tmp %x\n", i, tmp);
 		if (tmp && ap) {
 			struct ata_queued_cmd *qc;
 
···
 	}
 
 	spin_unlock(&host->lock);
-
-	VPRINTK("mask == 0x%x\n", mask);
-
-	VPRINTK("EXIT\n");
 
 	return IRQ_RETVAL(handled);
 }
···
 }
 
 
-#ifdef ATA_VERBOSE_DEBUG
 static void pdc20621_get_from_dimm(struct ata_host *host, void *psource,
 				   u32 offset, u32 size)
 {
···
 		memcpy_fromio(psource, dimm_mmio, size / 4);
 	}
 }
-#endif
 
 
 static void pdc20621_put_to_dimm(struct ata_host *host, void *psource,
···
 	/* Turn on for ECC */
 	if (!pdc20621_i2c_read(host, PDC_DIMM0_SPD_DEV_ADDRESS,
 			       PDC_DIMM_SPD_TYPE, &spd0)) {
-		pr_err("Failed in i2c read: device=%#x, subaddr=%#x\n",
-		       PDC_DIMM0_SPD_DEV_ADDRESS, PDC_DIMM_SPD_TYPE);
+		dev_err(host->dev,
+			"Failed in i2c read: device=%#x, subaddr=%#x\n",
+			PDC_DIMM0_SPD_DEV_ADDRESS, PDC_DIMM_SPD_TYPE);
 		return 1;
 	}
 	if (spd0 == 0x02) {
 		data |= (0x01 << 16);
 		writel(data, mmio + PDC_SDRAM_CONTROL);
 		readl(mmio + PDC_SDRAM_CONTROL);
-		printk(KERN_ERR "Local DIMM ECC Enabled\n");
+		dev_err(host->dev, "Local DIMM ECC Enabled\n");
 	}
 
 	/* DIMM Initialization Select/Enable (bit 18/19) */
···
 	/* Initialize Time Period Register */
 	writel(0xffffffff, mmio + PDC_TIME_PERIOD);
 	time_period = readl(mmio + PDC_TIME_PERIOD);
-	VPRINTK("Time Period Register (0x40): 0x%x\n", time_period);
+	dev_dbg(host->dev, "Time Period Register (0x40): 0x%x\n", time_period);
 
 	/* Enable timer */
 	writel(PDC_TIMER_DEFAULT, mmio + PDC_TIME_CONTROL);
···
 	*/
 
 	tcount = readl(mmio + PDC_TIME_COUNTER);
-	VPRINTK("Time Counter Register (0x44): 0x%x\n", tcount);
+	dev_dbg(host->dev, "Time Counter Register (0x44): 0x%x\n", tcount);
 
 	/*
 	   If SX4 is on PCI-X bus, after 3 seconds, the timer counter
···
 	*/
 	if (tcount >= PCI_X_TCOUNT) {
 		ticks = (time_period - tcount);
-		VPRINTK("Num counters 0x%x (%d)\n", ticks, ticks);
+		dev_dbg(host->dev, "Num counters 0x%x (%d)\n", ticks, ticks);
 
 		clock = (ticks / 300000);
-		VPRINTK("10 * Internal clk = 0x%x (%d)\n", clock, clock);
+		dev_dbg(host->dev, "10 * Internal clk = 0x%x (%d)\n",
+			clock, clock);
 
 		clock = (clock * 33);
-		VPRINTK("10 * Internal clk * 33 = 0x%x (%d)\n", clock, clock);
+		dev_dbg(host->dev, "10 * Internal clk * 33 = 0x%x (%d)\n",
+			clock, clock);
 
 		/* PLL F Param (bit 22:16) */
 		fparam = (1400000 / clock) - 2;
-		VPRINTK("PLL F Param: 0x%x (%d)\n", fparam, fparam);
+		dev_dbg(host->dev, "PLL F Param: 0x%x (%d)\n", fparam, fparam);
 
 		/* OD param = 0x2 (bit 31:30), R param = 0x5 (bit 29:25) */
 		pci_status = (0x8a001824 | (fparam << 16));
···
 		pci_status = PCI_PLL_INIT;
 
 	/* Initialize PLL. */
-	VPRINTK("pci_status: 0x%x\n", pci_status);
+	dev_dbg(host->dev, "pci_status: 0x%x\n", pci_status);
 	writel(pci_status, mmio + PDC_CTL_STATUS);
 	readl(mmio + PDC_CTL_STATUS);
 
···
 	   and program the DIMM Module Controller.
 	*/
 	if (!(speed = pdc20621_detect_dimm(host))) {
-		printk(KERN_ERR "Detect Local DIMM Fail\n");
+		dev_err(host->dev, "Detect Local DIMM Fail\n");
 		return 1;	/* DIMM error */
 	}
-	VPRINTK("Local DIMM Speed = %d\n", speed);
+	dev_dbg(host->dev, "Local DIMM Speed = %d\n", speed);
 
 	/* Programming DIMM0 Module Control Register (index_CID0:80h) */
 	size = pdc20621_prog_dimm0(host);
-	VPRINTK("Local DIMM Size = %dMB\n", size);
+	dev_dbg(host->dev, "Local DIMM Size = %dMB\n", size);
 
 	/* Programming DIMM Module Global Control Register (index_CID0:88h) */
 	if (pdc20621_prog_dimm_global(host)) {
-		printk(KERN_ERR "Programming DIMM Module Global Control Register Fail\n");
+		dev_err(host->dev,
+			"Programming DIMM Module Global Control Register Fail\n");
 		return 1;
 	}
 
-#ifdef ATA_VERBOSE_DEBUG
-	{
+	if (dimm_test) {
 		u8 test_parttern1[40] =
 			{0x55,0xAA,'P','r','o','m','i','s','e',' ',
 			'N','o','t',' ','Y','e','t',' ',
···
 
 		pdc20621_put_to_dimm(host, test_parttern1, 0x10040, 40);
 		pdc20621_get_from_dimm(host, test_parttern2, 0x40, 40);
-		printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0],
+		dev_info(host->dev, "DIMM test pattern 1: %x, %x, %s\n", test_parttern2[0],
 			test_parttern2[1], &(test_parttern2[2]));
 		pdc20621_get_from_dimm(host, test_parttern2, 0x10040,
 				       40);
-		printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0],
-		       test_parttern2[1], &(test_parttern2[2]));
+		dev_info(host->dev, "DIMM test pattern 2: %x, %x, %s\n",
+			 test_parttern2[0],
+			 test_parttern2[1], &(test_parttern2[2]));
 
 		pdc20621_put_to_dimm(host, test_parttern1, 0x40, 40);
 		pdc20621_get_from_dimm(host, test_parttern2, 0x40, 40);
-		printk(KERN_ERR "%x, %x, %s\n", test_parttern2[0],
-		       test_parttern2[1], &(test_parttern2[2]));
+		dev_info(host->dev, "DIMM test pattern 3: %x, %x, %s\n",
+			 test_parttern2[0],
+			 test_parttern2[1], &(test_parttern2[2]));
 	}
-#endif
 
 	/* ECC initiliazation. */
 
 	if (!pdc20621_i2c_read(host, PDC_DIMM0_SPD_DEV_ADDRESS,
 			       PDC_DIMM_SPD_TYPE, &spd0)) {
-		pr_err("Failed in i2c read: device=%#x, subaddr=%#x\n",
+		dev_err(host->dev,
+			"Failed in i2c read: device=%#x, subaddr=%#x\n",
 			PDC_DIMM0_SPD_DEV_ADDRESS, PDC_DIMM_SPD_TYPE);
 		return 1;
 	}
 	if (spd0 == 0x02) {
 		void *buf;
-		VPRINTK("Start ECC initialization\n");
+		dev_dbg(host->dev, "Start ECC initialization\n");
 		addr = 0;
 		length = size * 1024 * 1024;
 		buf = kzalloc(ECC_ERASE_BUF_SZ, GFP_KERNEL);
···
 			addr += ECC_ERASE_BUF_SZ;
 		}
 		kfree(buf);
-		VPRINTK("Finish ECC initialization\n");
+		dev_dbg(host->dev, "Finish ECC initialization\n");
 	}
 	return 0;
 }
+55 -83
include/linux/libata.h
··· 39 39 * compile-time options: to be removed as soon as all the drivers are 40 40 * converted to the new debugging mechanism 41 41 */ 42 - #undef ATA_DEBUG /* debugging output */ 43 - #undef ATA_VERBOSE_DEBUG /* yet more debugging output */ 44 42 #undef ATA_IRQ_TRAP /* define to ack screaming irqs */ 45 - #undef ATA_NDEBUG /* define to disable quick runtime checks */ 46 43 47 - 48 - /* note: prints function name for you */ 49 - #ifdef ATA_DEBUG 50 - #define DPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args) 51 - #ifdef ATA_VERBOSE_DEBUG 52 - #define VPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __func__, ## args) 53 - #else 54 - #define VPRINTK(fmt, args...) 55 - #endif /* ATA_VERBOSE_DEBUG */ 56 - #else 57 - #define DPRINTK(fmt, args...) 58 - #define VPRINTK(fmt, args...) 59 - #endif /* ATA_DEBUG */ 60 44 61 45 #define ata_print_version_once(dev, version) \ 62 46 ({ \ ··· 51 67 ata_print_version(dev, version); \ 52 68 } \ 53 69 }) 54 - 55 - /* NEW: debug levels */ 56 - #define HAVE_LIBATA_MSG 1 57 - 58 - enum { 59 - ATA_MSG_DRV = 0x0001, 60 - ATA_MSG_INFO = 0x0002, 61 - ATA_MSG_PROBE = 0x0004, 62 - ATA_MSG_WARN = 0x0008, 63 - ATA_MSG_MALLOC = 0x0010, 64 - ATA_MSG_CTL = 0x0020, 65 - ATA_MSG_INTR = 0x0040, 66 - ATA_MSG_ERR = 0x0080, 67 - }; 68 - 69 - #define ata_msg_drv(p) ((p)->msg_enable & ATA_MSG_DRV) 70 - #define ata_msg_info(p) ((p)->msg_enable & ATA_MSG_INFO) 71 - #define ata_msg_probe(p) ((p)->msg_enable & ATA_MSG_PROBE) 72 - #define ata_msg_warn(p) ((p)->msg_enable & ATA_MSG_WARN) 73 - #define ata_msg_malloc(p) ((p)->msg_enable & ATA_MSG_MALLOC) 74 - #define ata_msg_ctl(p) ((p)->msg_enable & ATA_MSG_CTL) 75 - #define ata_msg_intr(p) ((p)->msg_enable & ATA_MSG_INTR) 76 - #define ata_msg_err(p) ((p)->msg_enable & ATA_MSG_ERR) 77 - 78 - static inline u32 ata_msg_init(int dval, int default_msg_enable_bits) 79 - { 80 - if (dval < 0 || dval >= (sizeof(u32) * 8)) 81 - return default_msg_enable_bits; /* should be 0x1 - only driver info msgs 
*/ 82 - if (!dval) 83 - return 0; 84 - return (1 << dval) - 1; 85 - } 86 70 87 71 /* defines only for the constants which don't work well as enums */ 88 72 #define ATA_TAG_POISON 0xfafbfcfdU ··· 143 191 ATA_LFLAG_NO_LPM = (1 << 8), /* disable LPM on this link */ 144 192 ATA_LFLAG_RST_ONCE = (1 << 9), /* limit recovery to one reset */ 145 193 ATA_LFLAG_CHANGED = (1 << 10), /* LPM state changed on this link */ 146 - ATA_LFLAG_NO_DB_DELAY = (1 << 11), /* no debounce delay on link resume */ 194 + ATA_LFLAG_NO_DEBOUNCE_DELAY = (1 << 11), /* no debounce delay on link resume */ 147 195 148 196 /* struct ata_port flags */ 149 197 ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */ ··· 836 884 837 885 unsigned int hsm_task_state; 838 886 839 - u32 msg_enable; 840 887 struct list_head eh_done_q; 841 888 wait_queue_head_t eh_wait_q; 842 889 int eh_tries; ··· 884 933 void (*set_piomode)(struct ata_port *ap, struct ata_device *dev); 885 934 void (*set_dmamode)(struct ata_port *ap, struct ata_device *dev); 886 935 int (*set_mode)(struct ata_link *link, struct ata_device **r_failed_dev); 887 - unsigned int (*read_id)(struct ata_device *dev, struct ata_taskfile *tf, u16 *id); 936 + unsigned int (*read_id)(struct ata_device *dev, struct ata_taskfile *tf, 937 + __le16 *id); 888 938 889 939 void (*dev_config)(struct ata_device *dev); 890 940 ··· 1112 1160 extern void ata_sg_init(struct ata_queued_cmd *qc, struct scatterlist *sg, 1113 1161 unsigned int n_elem); 1114 1162 extern unsigned int ata_dev_classify(const struct ata_taskfile *tf); 1163 + extern unsigned int ata_port_classify(struct ata_port *ap, 1164 + const struct ata_taskfile *tf); 1115 1165 extern void ata_dev_disable(struct ata_device *adev); 1116 1166 extern void ata_id_string(const u16 *id, unsigned char *s, 1117 1167 unsigned int ofs, unsigned int len); 1118 1168 extern void ata_id_c_string(const u16 *id, unsigned char *s, 1119 1169 unsigned int ofs, unsigned int len); 1120 1170 extern unsigned int 
ata_do_dev_read_id(struct ata_device *dev, 1121 - struct ata_taskfile *tf, u16 *id); 1171 + struct ata_taskfile *tf, __le16 *id); 1122 1172 extern void ata_qc_complete(struct ata_queued_cmd *qc); 1123 1173 extern u64 ata_qc_get_active(struct ata_port *ap); 1124 1174 extern void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd); ··· 1386 1432 .tag_alloc_policy = BLK_TAG_ALLOC_RR, \ 1387 1433 .slave_configure = ata_scsi_slave_config 1388 1434 1435 + #define ATA_SUBBASE_SHT_QD(drv_name, drv_qd) \ 1436 + __ATA_BASE_SHT(drv_name), \ 1437 + .can_queue = drv_qd, \ 1438 + .tag_alloc_policy = BLK_TAG_ALLOC_RR, \ 1439 + .slave_configure = ata_scsi_slave_config 1440 + 1389 1441 #define ATA_BASE_SHT(drv_name) \ 1390 1442 ATA_SUBBASE_SHT(drv_name), \ 1391 1443 .sdev_groups = ata_common_sdev_groups ··· 1401 1441 1402 1442 #define ATA_NCQ_SHT(drv_name) \ 1403 1443 ATA_SUBBASE_SHT(drv_name), \ 1444 + .sdev_groups = ata_ncq_sdev_groups, \ 1445 + .change_queue_depth = ata_scsi_change_queue_depth 1446 + 1447 + #define ATA_NCQ_SHT_QD(drv_name, drv_qd) \ 1448 + ATA_SUBBASE_SHT_QD(drv_name, drv_qd), \ 1404 1449 .sdev_groups = ata_ncq_sdev_groups, \ 1405 1450 .change_queue_depth = ata_scsi_change_queue_depth 1406 1451 #endif ··· 1452 1487 return link->pmp; 1453 1488 } 1454 1489 1455 - /* 1456 - * printk helpers 1457 - */ 1458 - __printf(3, 4) 1459 - void ata_port_printk(const struct ata_port *ap, const char *level, 1460 - const char *fmt, ...); 1461 - __printf(3, 4) 1462 - void ata_link_printk(const struct ata_link *link, const char *level, 1463 - const char *fmt, ...); 1464 - __printf(3, 4) 1465 - void ata_dev_printk(const struct ata_device *dev, const char *level, 1466 - const char *fmt, ...); 1490 + #define ata_port_printk(level, ap, fmt, ...) \ 1491 + pr_ ## level ("ata%u: " fmt, (ap)->print_id, ##__VA_ARGS__) 1467 1492 1468 1493 #define ata_port_err(ap, fmt, ...) 
\ 1469 - ata_port_printk(ap, KERN_ERR, fmt, ##__VA_ARGS__) 1494 + ata_port_printk(err, ap, fmt, ##__VA_ARGS__) 1470 1495 #define ata_port_warn(ap, fmt, ...) \ 1471 - ata_port_printk(ap, KERN_WARNING, fmt, ##__VA_ARGS__) 1496 + ata_port_printk(warn, ap, fmt, ##__VA_ARGS__) 1472 1497 #define ata_port_notice(ap, fmt, ...) \ 1473 - ata_port_printk(ap, KERN_NOTICE, fmt, ##__VA_ARGS__) 1498 + ata_port_printk(notice, ap, fmt, ##__VA_ARGS__) 1474 1499 #define ata_port_info(ap, fmt, ...) \ 1475 - ata_port_printk(ap, KERN_INFO, fmt, ##__VA_ARGS__) 1500 + ata_port_printk(info, ap, fmt, ##__VA_ARGS__) 1476 1501 #define ata_port_dbg(ap, fmt, ...) \ 1477 - ata_port_printk(ap, KERN_DEBUG, fmt, ##__VA_ARGS__) 1502 + ata_port_printk(debug, ap, fmt, ##__VA_ARGS__) 1503 + 1504 + #define ata_link_printk(level, link, fmt, ...) \ 1505 + do { \ 1506 + if (sata_pmp_attached((link)->ap) || \ 1507 + (link)->ap->slave_link) \ 1508 + pr_ ## level ("ata%u.%02u: " fmt, \ 1509 + (link)->ap->print_id, \ 1510 + (link)->pmp, \ 1511 + ##__VA_ARGS__); \ 1512 + else \ 1513 + pr_ ## level ("ata%u: " fmt, \ 1514 + (link)->ap->print_id, \ 1515 + ##__VA_ARGS__); \ 1516 + } while (0) 1478 1517 1479 1518 #define ata_link_err(link, fmt, ...) \ 1480 - ata_link_printk(link, KERN_ERR, fmt, ##__VA_ARGS__) 1519 + ata_link_printk(err, link, fmt, ##__VA_ARGS__) 1481 1520 #define ata_link_warn(link, fmt, ...) \ 1482 - ata_link_printk(link, KERN_WARNING, fmt, ##__VA_ARGS__) 1521 + ata_link_printk(warn, link, fmt, ##__VA_ARGS__) 1483 1522 #define ata_link_notice(link, fmt, ...) \ 1484 - ata_link_printk(link, KERN_NOTICE, fmt, ##__VA_ARGS__) 1523 + ata_link_printk(notice, link, fmt, ##__VA_ARGS__) 1485 1524 #define ata_link_info(link, fmt, ...) \ 1486 - ata_link_printk(link, KERN_INFO, fmt, ##__VA_ARGS__) 1525 + ata_link_printk(info, link, fmt, ##__VA_ARGS__) 1487 1526 #define ata_link_dbg(link, fmt, ...) 
\ 1488 - ata_link_printk(link, KERN_DEBUG, fmt, ##__VA_ARGS__) 1527 + ata_link_printk(debug, link, fmt, ##__VA_ARGS__) 1528 + 1529 + #define ata_dev_printk(level, dev, fmt, ...) \ 1530 + pr_ ## level("ata%u.%02u: " fmt, \ 1531 + (dev)->link->ap->print_id, \ 1532 + (dev)->link->pmp + (dev)->devno, \ 1533 + ##__VA_ARGS__) 1489 1534 1490 1535 #define ata_dev_err(dev, fmt, ...) \ 1491 - ata_dev_printk(dev, KERN_ERR, fmt, ##__VA_ARGS__) 1536 + ata_dev_printk(err, dev, fmt, ##__VA_ARGS__) 1492 1537 #define ata_dev_warn(dev, fmt, ...) \ 1493 - ata_dev_printk(dev, KERN_WARNING, fmt, ##__VA_ARGS__) 1538 + ata_dev_printk(warn, dev, fmt, ##__VA_ARGS__) 1494 1539 #define ata_dev_notice(dev, fmt, ...) \ 1495 - ata_dev_printk(dev, KERN_NOTICE, fmt, ##__VA_ARGS__) 1540 + ata_dev_printk(notice, dev, fmt, ##__VA_ARGS__) 1496 1541 #define ata_dev_info(dev, fmt, ...) \ 1497 - ata_dev_printk(dev, KERN_INFO, fmt, ##__VA_ARGS__) 1542 + ata_dev_printk(info, dev, fmt, ##__VA_ARGS__) 1498 1543 #define ata_dev_dbg(dev, fmt, ...) \ 1499 - ata_dev_printk(dev, KERN_DEBUG, fmt, ##__VA_ARGS__) 1544 + ata_dev_printk(debug, dev, fmt, ##__VA_ARGS__) 1500 1545 1501 1546 void ata_print_version(const struct device *dev, const char *version); 1502 1547 ··· 2040 2065 { 2041 2066 u8 status = ata_sff_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000); 2042 2067 2043 - #ifdef ATA_DEBUG 2044 2068 if (status != 0xff && (status & (ATA_BUSY | ATA_DRQ))) 2045 - ata_port_printk(ap, KERN_DEBUG, "abnormal Status 0x%X\n", 2046 - status); 2047 - #endif 2069 + ata_port_dbg(ap, "abnormal Status 0x%X\n", status); 2048 2070 2049 2071 return status; 2050 2072 }
+415 -1
include/trace/events/libata.h
··· 132 132 ata_protocol_name(ATAPI_PROT_PIO), \ 133 133 ata_protocol_name(ATAPI_PROT_DMA)) 134 134 135 + #define ata_class_name(class) { class, #class } 136 + #define show_class_name(val) \ 137 + __print_symbolic(val, \ 138 + ata_class_name(ATA_DEV_UNKNOWN), \ 139 + ata_class_name(ATA_DEV_ATA), \ 140 + ata_class_name(ATA_DEV_ATA_UNSUP), \ 141 + ata_class_name(ATA_DEV_ATAPI), \ 142 + ata_class_name(ATA_DEV_ATAPI_UNSUP), \ 143 + ata_class_name(ATA_DEV_PMP), \ 144 + ata_class_name(ATA_DEV_PMP_UNSUP), \ 145 + ata_class_name(ATA_DEV_SEMB), \ 146 + ata_class_name(ATA_DEV_SEMB_UNSUP), \ 147 + ata_class_name(ATA_DEV_ZAC), \ 148 + ata_class_name(ATA_DEV_ZAC_UNSUP), \ 149 + ata_class_name(ATA_DEV_NONE)) 150 + 151 + #define ata_sff_hsm_state_name(state) { state, #state } 152 + #define show_sff_hsm_state_name(val) \ 153 + __print_symbolic(val, \ 154 + ata_sff_hsm_state_name(HSM_ST_IDLE), \ 155 + ata_sff_hsm_state_name(HSM_ST_FIRST), \ 156 + ata_sff_hsm_state_name(HSM_ST), \ 157 + ata_sff_hsm_state_name(HSM_ST_LAST), \ 158 + ata_sff_hsm_state_name(HSM_ST_ERR)) 159 + 135 160 const char *libata_trace_parse_status(struct trace_seq*, unsigned char); 136 161 #define __parse_status(s) libata_trace_parse_status(p, s) 162 + 163 + const char *libata_trace_parse_host_stat(struct trace_seq *, unsigned char); 164 + #define __parse_host_stat(s) libata_trace_parse_host_stat(p, s) 137 165 138 166 const char *libata_trace_parse_eh_action(struct trace_seq *, unsigned int); 139 167 #define __parse_eh_action(a) libata_trace_parse_eh_action(p, a) ··· 172 144 const char *libata_trace_parse_qc_flags(struct trace_seq *, unsigned int); 173 145 #define __parse_qc_flags(f) libata_trace_parse_qc_flags(p, f) 174 146 147 + const char *libata_trace_parse_tf_flags(struct trace_seq *, unsigned int); 148 + #define __parse_tf_flags(f) libata_trace_parse_tf_flags(p, f) 149 + 175 150 const char *libata_trace_parse_subcmd(struct trace_seq *, unsigned char, 176 151 unsigned char, unsigned char); 177 152 #define 
__parse_subcmd(c,f,h) libata_trace_parse_subcmd(p, c, f, h) 178 153 179 - TRACE_EVENT(ata_qc_issue, 154 + DECLARE_EVENT_CLASS(ata_qc_issue_template, 180 155 181 156 TP_PROTO(struct ata_queued_cmd *qc), 182 157 ··· 237 206 __entry->hob_lbal, __entry->hob_lbam, __entry->hob_lbah, 238 207 __entry->dev) 239 208 ); 209 + 210 + DEFINE_EVENT(ata_qc_issue_template, ata_qc_prep, 211 + TP_PROTO(struct ata_queued_cmd *qc), 212 + TP_ARGS(qc)); 213 + 214 + DEFINE_EVENT(ata_qc_issue_template, ata_qc_issue, 215 + TP_PROTO(struct ata_queued_cmd *qc), 216 + TP_ARGS(qc)); 240 217 241 218 DECLARE_EVENT_CLASS(ata_qc_complete_template, 242 219 ··· 314 275 TP_PROTO(struct ata_queued_cmd *qc), 315 276 TP_ARGS(qc)); 316 277 278 + TRACE_EVENT(ata_tf_load, 279 + 280 + TP_PROTO(struct ata_port *ap, const struct ata_taskfile *tf), 281 + 282 + TP_ARGS(ap, tf), 283 + 284 + TP_STRUCT__entry( 285 + __field( unsigned int, ata_port ) 286 + __field( unsigned char, cmd ) 287 + __field( unsigned char, dev ) 288 + __field( unsigned char, lbal ) 289 + __field( unsigned char, lbam ) 290 + __field( unsigned char, lbah ) 291 + __field( unsigned char, nsect ) 292 + __field( unsigned char, feature ) 293 + __field( unsigned char, hob_lbal ) 294 + __field( unsigned char, hob_lbam ) 295 + __field( unsigned char, hob_lbah ) 296 + __field( unsigned char, hob_nsect ) 297 + __field( unsigned char, hob_feature ) 298 + __field( unsigned char, proto ) 299 + ), 300 + 301 + TP_fast_assign( 302 + __entry->ata_port = ap->print_id; 303 + __entry->proto = tf->protocol; 304 + __entry->cmd = tf->command; 305 + __entry->dev = tf->device; 306 + __entry->lbal = tf->lbal; 307 + __entry->lbam = tf->lbam; 308 + __entry->lbah = tf->lbah; 309 + __entry->hob_lbal = tf->hob_lbal; 310 + __entry->hob_lbam = tf->hob_lbam; 311 + __entry->hob_lbah = tf->hob_lbah; 312 + __entry->feature = tf->feature; 313 + __entry->hob_feature = tf->hob_feature; 314 + __entry->nsect = tf->nsect; 315 + __entry->hob_nsect = tf->hob_nsect; 316 + ), 317 + 318 + 
TP_printk("ata_port=%u proto=%s cmd=%s%s " \ 319 + " tf=(%02x/%02x:%02x:%02x:%02x:%02x/%02x:%02x:%02x:%02x:%02x/%02x)", 320 + __entry->ata_port, 321 + show_protocol_name(__entry->proto), 322 + show_opcode_name(__entry->cmd), 323 + __parse_subcmd(__entry->cmd, __entry->feature, __entry->hob_nsect), 324 + __entry->cmd, __entry->feature, __entry->nsect, 325 + __entry->lbal, __entry->lbam, __entry->lbah, 326 + __entry->hob_feature, __entry->hob_nsect, 327 + __entry->hob_lbal, __entry->hob_lbam, __entry->hob_lbah, 328 + __entry->dev) 329 + ); 330 + 331 + DECLARE_EVENT_CLASS(ata_exec_command_template, 332 + 333 + TP_PROTO(struct ata_port *ap, const struct ata_taskfile *tf, unsigned int tag), 334 + 335 + TP_ARGS(ap, tf, tag), 336 + 337 + TP_STRUCT__entry( 338 + __field( unsigned int, ata_port ) 339 + __field( unsigned int, tag ) 340 + __field( unsigned char, cmd ) 341 + __field( unsigned char, feature ) 342 + __field( unsigned char, hob_nsect ) 343 + __field( unsigned char, proto ) 344 + ), 345 + 346 + TP_fast_assign( 347 + __entry->ata_port = ap->print_id; 348 + __entry->tag = tag; 349 + __entry->proto = tf->protocol; 350 + __entry->cmd = tf->command; 351 + __entry->feature = tf->feature; 352 + __entry->hob_nsect = tf->hob_nsect; 353 + ), 354 + 355 + TP_printk("ata_port=%u tag=%d proto=%s cmd=%s%s", 356 + __entry->ata_port, __entry->tag, 357 + show_protocol_name(__entry->proto), 358 + show_opcode_name(__entry->cmd), 359 + __parse_subcmd(__entry->cmd, __entry->feature, __entry->hob_nsect)) 360 + ); 361 + 362 + DEFINE_EVENT(ata_exec_command_template, ata_exec_command, 363 + TP_PROTO(struct ata_port *ap, const struct ata_taskfile *tf, unsigned int tag), 364 + TP_ARGS(ap, tf, tag)); 365 + 366 + DEFINE_EVENT(ata_exec_command_template, ata_bmdma_setup, 367 + TP_PROTO(struct ata_port *ap, const struct ata_taskfile *tf, unsigned int tag), 368 + TP_ARGS(ap, tf, tag)); 369 + 370 + DEFINE_EVENT(ata_exec_command_template, ata_bmdma_start, 371 + TP_PROTO(struct ata_port *ap, const 
struct ata_taskfile *tf, unsigned int tag), 372 + TP_ARGS(ap, tf, tag)); 373 + 374 + DEFINE_EVENT(ata_exec_command_template, ata_bmdma_stop, 375 + TP_PROTO(struct ata_port *ap, const struct ata_taskfile *tf, unsigned int tag), 376 + TP_ARGS(ap, tf, tag)); 377 + 378 + TRACE_EVENT(ata_bmdma_status, 379 + 380 + TP_PROTO(struct ata_port *ap, unsigned int host_stat), 381 + 382 + TP_ARGS(ap, host_stat), 383 + 384 + TP_STRUCT__entry( 385 + __field( unsigned int, ata_port ) 386 + __field( unsigned int, tag ) 387 + __field( unsigned char, host_stat ) 388 + ), 389 + 390 + TP_fast_assign( 391 + __entry->ata_port = ap->print_id; 392 + __entry->host_stat = host_stat; 393 + ), 394 + 395 + TP_printk("ata_port=%u host_stat=%s", 396 + __entry->ata_port, 397 + __parse_host_stat(__entry->host_stat)) 398 + ); 399 + 317 400 TRACE_EVENT(ata_eh_link_autopsy, 318 401 319 402 TP_PROTO(struct ata_device *dev, unsigned int eh_action, unsigned int eh_err_mask), ··· 489 328 __parse_qc_flags(__entry->qc_flags), 490 329 __parse_eh_err_mask(__entry->eh_err_mask)) 491 330 ); 331 + 332 + DECLARE_EVENT_CLASS(ata_eh_action_template, 333 + 334 + TP_PROTO(struct ata_link *link, unsigned int devno, unsigned int eh_action), 335 + 336 + TP_ARGS(link, devno, eh_action), 337 + 338 + TP_STRUCT__entry( 339 + __field( unsigned int, ata_port ) 340 + __field( unsigned int, ata_dev ) 341 + __field( unsigned int, eh_action ) 342 + ), 343 + 344 + TP_fast_assign( 345 + __entry->ata_port = link->ap->print_id; 346 + __entry->ata_dev = link->pmp + devno; 347 + __entry->eh_action = eh_action; 348 + ), 349 + 350 + TP_printk("ata_port=%u ata_dev=%u eh_action=%s", 351 + __entry->ata_port, __entry->ata_dev, 352 + __parse_eh_action(__entry->eh_action)) 353 + ); 354 + 355 + DEFINE_EVENT(ata_eh_action_template, ata_eh_about_to_do, 356 + TP_PROTO(struct ata_link *link, unsigned int devno, unsigned int eh_action), 357 + TP_ARGS(link, devno, eh_action)); 358 + 359 + DEFINE_EVENT(ata_eh_action_template, ata_eh_done, 360 + 
TP_PROTO(struct ata_link *link, unsigned int devno, unsigned int eh_action), 361 + TP_ARGS(link, devno, eh_action)); 362 + 363 + DECLARE_EVENT_CLASS(ata_link_reset_begin_template, 364 + 365 + TP_PROTO(struct ata_link *link, unsigned int *class, unsigned long deadline), 366 + 367 + TP_ARGS(link, class, deadline), 368 + 369 + TP_STRUCT__entry( 370 + __field( unsigned int, ata_port ) 371 + __array( unsigned int, class, 2 ) 372 + __field( unsigned long, deadline ) 373 + ), 374 + 375 + TP_fast_assign( 376 + __entry->ata_port = link->ap->print_id; 377 + memcpy(__entry->class, class, 2); 378 + __entry->deadline = deadline; 379 + ), 380 + 381 + TP_printk("ata_port=%u deadline=%lu classes=[%s,%s]", 382 + __entry->ata_port, __entry->deadline, 383 + show_class_name(__entry->class[0]), 384 + show_class_name(__entry->class[1])) 385 + ); 386 + 387 + DEFINE_EVENT(ata_link_reset_begin_template, ata_link_hardreset_begin, 388 + TP_PROTO(struct ata_link *link, unsigned int *class, unsigned long deadline), 389 + TP_ARGS(link, class, deadline)); 390 + 391 + DEFINE_EVENT(ata_link_reset_begin_template, ata_slave_hardreset_begin, 392 + TP_PROTO(struct ata_link *link, unsigned int *class, unsigned long deadline), 393 + TP_ARGS(link, class, deadline)); 394 + 395 + DEFINE_EVENT(ata_link_reset_begin_template, ata_link_softreset_begin, 396 + TP_PROTO(struct ata_link *link, unsigned int *class, unsigned long deadline), 397 + TP_ARGS(link, class, deadline)); 398 + 399 + DECLARE_EVENT_CLASS(ata_link_reset_end_template, 400 + 401 + TP_PROTO(struct ata_link *link, unsigned int *class, int rc), 402 + 403 + TP_ARGS(link, class, rc), 404 + 405 + TP_STRUCT__entry( 406 + __field( unsigned int, ata_port ) 407 + __array( unsigned int, class, 2 ) 408 + __field( int, rc ) 409 + ), 410 + 411 + TP_fast_assign( 412 + __entry->ata_port = link->ap->print_id; 413 + memcpy(__entry->class, class, 2); 414 + __entry->rc = rc; 415 + ), 416 + 417 + TP_printk("ata_port=%u rc=%d class=[%s,%s]", 418 + __entry->ata_port, 
__entry->rc, 419 + show_class_name(__entry->class[0]), 420 + show_class_name(__entry->class[1])) 421 + ); 422 + 423 + DEFINE_EVENT(ata_link_reset_end_template, ata_link_hardreset_end, 424 + TP_PROTO(struct ata_link *link, unsigned int *class, int rc), 425 + TP_ARGS(link, class, rc)); 426 + 427 + DEFINE_EVENT(ata_link_reset_end_template, ata_slave_hardreset_end, 428 + TP_PROTO(struct ata_link *link, unsigned int *class, int rc), 429 + TP_ARGS(link, class, rc)); 430 + 431 + DEFINE_EVENT(ata_link_reset_end_template, ata_link_softreset_end, 432 + TP_PROTO(struct ata_link *link, unsigned int *class, int rc), 433 + TP_ARGS(link, class, rc)); 434 + 435 + DEFINE_EVENT(ata_link_reset_end_template, ata_link_postreset, 436 + TP_PROTO(struct ata_link *link, unsigned int *class, int rc), 437 + TP_ARGS(link, class, rc)); 438 + 439 + DEFINE_EVENT(ata_link_reset_end_template, ata_slave_postreset, 440 + TP_PROTO(struct ata_link *link, unsigned int *class, int rc), 441 + TP_ARGS(link, class, rc)); 442 + 443 + DECLARE_EVENT_CLASS(ata_port_eh_begin_template, 444 + 445 + TP_PROTO(struct ata_port *ap), 446 + 447 + TP_ARGS(ap), 448 + 449 + TP_STRUCT__entry( 450 + __field( unsigned int, ata_port ) 451 + ), 452 + 453 + TP_fast_assign( 454 + __entry->ata_port = ap->print_id; 455 + ), 456 + 457 + TP_printk("ata_port=%u", __entry->ata_port) 458 + ); 459 + 460 + DEFINE_EVENT(ata_port_eh_begin_template, ata_std_sched_eh, 461 + TP_PROTO(struct ata_port *ap), 462 + TP_ARGS(ap)); 463 + 464 + DEFINE_EVENT(ata_port_eh_begin_template, ata_port_freeze, 465 + TP_PROTO(struct ata_port *ap), 466 + TP_ARGS(ap)); 467 + 468 + DEFINE_EVENT(ata_port_eh_begin_template, ata_port_thaw, 469 + TP_PROTO(struct ata_port *ap), 470 + TP_ARGS(ap)); 471 + 472 + DECLARE_EVENT_CLASS(ata_sff_hsm_template, 473 + 474 + TP_PROTO(struct ata_queued_cmd *qc, unsigned char status), 475 + 476 + TP_ARGS(qc, status), 477 + 478 + TP_STRUCT__entry( 479 + __field( unsigned int, ata_port ) 480 + __field( unsigned int, ata_dev ) 481 + 
__field( unsigned int, tag ) 482 + __field( unsigned int, qc_flags ) 483 + __field( unsigned int, protocol ) 484 + __field( unsigned int, hsm_state ) 485 + __field( unsigned char, dev_state ) 486 + ), 487 + 488 + TP_fast_assign( 489 + __entry->ata_port = qc->ap->print_id; 490 + __entry->ata_dev = qc->dev->link->pmp + qc->dev->devno; 491 + __entry->tag = qc->tag; 492 + __entry->qc_flags = qc->flags; 493 + __entry->protocol = qc->tf.protocol; 494 + __entry->hsm_state = qc->ap->hsm_task_state; 495 + __entry->dev_state = status; 496 + ), 497 + 498 + TP_printk("ata_port=%u ata_dev=%u tag=%d proto=%s flags=%s task_state=%s dev_stat=0x%X", 499 + __entry->ata_port, __entry->ata_dev, __entry->tag, 500 + show_protocol_name(__entry->protocol), 501 + __parse_qc_flags(__entry->qc_flags), 502 + show_sff_hsm_state_name(__entry->hsm_state), 503 + __entry->dev_state) 504 + ); 505 + 506 + DEFINE_EVENT(ata_sff_hsm_template, ata_sff_hsm_state, 507 + TP_PROTO(struct ata_queued_cmd *qc, unsigned char state), 508 + TP_ARGS(qc, state)); 509 + 510 + DEFINE_EVENT(ata_sff_hsm_template, ata_sff_hsm_command_complete, 511 + TP_PROTO(struct ata_queued_cmd *qc, unsigned char state), 512 + TP_ARGS(qc, state)); 513 + 514 + DEFINE_EVENT(ata_sff_hsm_template, ata_sff_port_intr, 515 + TP_PROTO(struct ata_queued_cmd *qc, unsigned char state), 516 + TP_ARGS(qc, state)); 517 + 518 + DECLARE_EVENT_CLASS(ata_transfer_data_template, 519 + 520 + TP_PROTO(struct ata_queued_cmd *qc, unsigned int offset, unsigned int count), 521 + 522 + TP_ARGS(qc, offset, count), 523 + 524 + TP_STRUCT__entry( 525 + __field( unsigned int, ata_port ) 526 + __field( unsigned int, ata_dev ) 527 + __field( unsigned int, tag ) 528 + __field( unsigned int, flags ) 529 + __field( unsigned int, offset ) 530 + __field( unsigned int, bytes ) 531 + ), 532 + 533 + TP_fast_assign( 534 + __entry->ata_port = qc->ap->print_id; 535 + __entry->ata_dev = qc->dev->link->pmp + qc->dev->devno; 536 + __entry->tag = qc->tag; 537 + __entry->flags = 
qc->tf.flags; 538 + __entry->offset = offset; 539 + __entry->bytes = count; 540 + ), 541 + 542 + TP_printk("ata_port=%u ata_dev=%u tag=%d flags=%s offset=%u bytes=%u", 543 + __entry->ata_port, __entry->ata_dev, __entry->tag, 544 + __parse_tf_flags(__entry->flags), 545 + __entry->offset, __entry->bytes) 546 + ); 547 + 548 + DEFINE_EVENT(ata_transfer_data_template, ata_sff_pio_transfer_data, 549 + TP_PROTO(struct ata_queued_cmd *qc, unsigned int offset, unsigned int count), 550 + TP_ARGS(qc, offset, count)); 551 + 552 + DEFINE_EVENT(ata_transfer_data_template, atapi_pio_transfer_data, 553 + TP_PROTO(struct ata_queued_cmd *qc, unsigned int offset, unsigned int count), 554 + TP_ARGS(qc, offset, count)); 555 + 556 + DEFINE_EVENT(ata_transfer_data_template, atapi_send_cdb, 557 + TP_PROTO(struct ata_queued_cmd *qc, unsigned int offset, unsigned int count), 558 + TP_ARGS(qc, offset, count)); 559 + 560 + DECLARE_EVENT_CLASS(ata_sff_template, 561 + 562 + TP_PROTO(struct ata_port *ap), 563 + 564 + TP_ARGS(ap), 565 + 566 + TP_STRUCT__entry( 567 + __field( unsigned int, ata_port ) 568 + __field( unsigned char, hsm_state ) 569 + ), 570 + 571 + TP_fast_assign( 572 + __entry->ata_port = ap->print_id; 573 + __entry->hsm_state = ap->hsm_task_state; 574 + ), 575 + 576 + TP_printk("ata_port=%u task_state=%s", 577 + __entry->ata_port, 578 + show_sff_hsm_state_name(__entry->hsm_state)) 579 + ); 580 + 581 + DEFINE_EVENT(ata_sff_template, ata_sff_flush_pio_task, 582 + TP_PROTO(struct ata_port *ap), 583 + TP_ARGS(ap)); 492 584 493 585 #endif /* _TRACE_LIBATA_H */ 494 586