Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev

* 'upstream-linus' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/libata-dev: (258 commits)
[libata] conversion to new debug scheme, part 1 of $N
[PATCH] libata: Add ata_scsi_dev_disabled
[libata] Add host lock to struct ata_port
[PATCH] libata: implement per-dev EH action mask eh_info->dev_action[]
[PATCH] libata-dev: move the CDB-intr DMA blacklisting
[PATCH] ahci: disable NCQ support on vt8251
[libata] ahci: add JMicron PCI IDs
[libata] sata_nv: add PCI IDs
[libata] ahci: Add NVIDIA PCI IDs.
[PATCH] libata: convert several bmdma-style controllers to new EH, take #3
[PATCH] sata_via: convert to new EH, take #3
[libata] sata_nv: s/spin_lock_irqsave/spin_lock/ in irq handler
[PATCH] sata_nv: add hotplug support
[PATCH] sata_nv: convert to new EH
[PATCH] sata_nv: better irq handlers
[PATCH] sata_nv: simplify constants
[PATCH] sata_nv: kill struct nv_host_desc and nv_host
[PATCH] sata_nv: kill not-working hotplug code
[libata] Update docs to reflect current driver API
[PATCH] libata: add host_set->next for legacy two host_sets case, take #3
...

+6577 -2405
+80 -24
Documentation/DocBook/libata.tmpl
@@
 
 	</sect2>
 
+	<sect2><title>PIO data read/write</title>
+	<programlisting>
+void (*data_xfer) (struct ata_device *, unsigned char *, unsigned int, int);
+	</programlisting>
+
+	<para>
+	All bmdma-style drivers must implement this hook.  This is the low-level
+	operation that actually copies the data bytes during a PIO data
+	transfer.  Typically the driver will choose one of
+	ata_pio_data_xfer_noirq(), ata_pio_data_xfer(), or ata_mmio_data_xfer().
+	</para>
+
+	</sect2>
+
 	<sect2><title>ATA command execute</title>
 	<programlisting>
 void (*exec_command)(struct ata_port *ap, struct ata_taskfile *tf);
@@
 	<programlisting>
 u8   (*check_status)(struct ata_port *ap);
 u8   (*check_altstatus)(struct ata_port *ap);
-u8   (*check_err)(struct ata_port *ap);
 	</programlisting>
 
 	<para>
-	Reads the Status/AltStatus/Error ATA shadow register from
+	Reads the Status/AltStatus ATA shadow register from
 	hardware.  On some hardware, reading the Status register has
 	the side effect of clearing the interrupt condition.
 	Most drivers for taskfile-based hardware use
@@
 	mode_filter hook instead.
 	</para>
 	</warning>
-
-	</sect2>
-
-	<sect2><title>Reset ATA bus</title>
-	<programlisting>
-void (*phy_reset) (struct ata_port *ap);
-	</programlisting>
-
-	<para>
-	The very first step in the probe phase.  Actions vary depending
-	on the bus type, typically.  After waking up the device and probing
-	for device presence (PATA and SATA), typically a soft reset
-	(SRST) will be performed.  Drivers typically use the helper
-	functions ata_bus_reset() or sata_phy_reset() for this hook.
-	Many SATA drivers use sata_phy_reset() or call it from within
-	their own phy_reset() functions.
-	</para>
 
 	</sect2>
@@
 
 	</sect2>
 
-	<sect2><title>Timeout (error) handling</title>
+	<sect2><title>Exception and probe handling (EH)</title>
 	<programlisting>
 void (*eng_timeout) (struct ata_port *ap);
+void (*phy_reset) (struct ata_port *ap);
 	</programlisting>
 
 	<para>
-	This is a high level error handling function, called from the
-	error handling thread, when a command times out.  Most newer
-	hardware will implement its own error handling code here.  IDE BMDMA
-	drivers may use the helper function ata_eng_timeout().
+	Deprecated.  Use ->error_handler() instead.
+	</para>
+
+	<programlisting>
+void (*freeze) (struct ata_port *ap);
+void (*thaw) (struct ata_port *ap);
+	</programlisting>
+
+	<para>
+	ata_port_freeze() is called when HSM violations or some other
+	condition disrupts normal operation of the port.  A frozen port
+	is not allowed to perform any operation until the port is
+	thawed, which usually follows a successful reset.
+	</para>
+
+	<para>
+	The optional ->freeze() callback can be used for freezing the port
+	hardware-wise (e.g. mask interrupt and stop DMA engine).  If a
+	port cannot be frozen hardware-wise, the interrupt handler
+	must ack and clear interrupts unconditionally while the port
+	is frozen.
+	</para>
+
+	<para>
+	The optional ->thaw() callback is called to perform the opposite of
+	->freeze(): prepare the port for normal operation once again.  Unmask
+	interrupts, start DMA engine, etc.
+	</para>
+
+	<programlisting>
+void (*error_handler) (struct ata_port *ap);
+	</programlisting>
+
+	<para>
+	->error_handler() is a driver's hook into probe, hotplug, recovery,
+	and other exceptional conditions.  The primary responsibility of an
+	implementation is to call ata_do_eh() or ata_bmdma_drive_eh() with a
+	set of EH hooks as arguments:
+	</para>
+
+	<para>
+	'prereset' hook (may be NULL) is called during an EH reset, before any
+	other actions are taken.
+	</para>
+
+	<para>
+	'postreset' hook (may be NULL) is called after the EH reset is
+	performed.
+	</para>
+
+	<para>
+	Based on existing conditions, severity of the problem, and hardware
+	capabilities, either 'softreset' (may be NULL) or 'hardreset' (may be
+	NULL) will be called to perform the low-level EH reset.
+	</para>
+
+	<programlisting>
+void (*post_internal_cmd) (struct ata_queued_cmd *qc);
+	</programlisting>
+
+	<para>
+	Perform any hardware-specific actions necessary to finish processing
+	after executing a probe-time or EH-time command via ata_exec_internal().
 	</para>
 
 	</sect2>
+5 -2
drivers/ide/pci/amd74xx.c
@@
 	{ PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE,	0x50, AMD_UDMA_133 },
 	{ PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE,	0x50, AMD_UDMA_133 },
 	{ PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE,	0x50, AMD_UDMA_133 },
+	{ PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_IDE,	0x50, AMD_UDMA_133 },
 	{ PCI_DEVICE_ID_AMD_CS5536_IDE,			0x40, AMD_UDMA_100 },
 	{ 0 }
 };
@@
 	/* 14 */ DECLARE_NV_DEV("NFORCE-MCP04"),
 	/* 15 */ DECLARE_NV_DEV("NFORCE-MCP51"),
 	/* 16 */ DECLARE_NV_DEV("NFORCE-MCP55"),
-	/* 17 */ DECLARE_AMD_DEV("AMD5536"),
+	/* 17 */ DECLARE_NV_DEV("NFORCE-MCP61"),
+	/* 18 */ DECLARE_AMD_DEV("AMD5536"),
 };
 
 static int __devinit amd74xx_probe(struct pci_dev *dev, const struct pci_device_id *id)
@@
 	{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14 },
 	{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 15 },
 	{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 16 },
-	{ PCI_VENDOR_ID_AMD,    PCI_DEVICE_ID_AMD_CS5536_IDE,          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 17 },
+	{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 17 },
+	{ PCI_VENDOR_ID_AMD,    PCI_DEVICE_ID_AMD_CS5536_IDE,          PCI_ANY_ID, PCI_ANY_ID, 0, 0, 18 },
 	{ 0, },
 };
 MODULE_DEVICE_TABLE(pci, amd74xx_pci_tbl);
+1 -1
drivers/scsi/Makefile
@@
 CFLAGS_ncr53c8xx.o	:= $(ncr53c8xx-flags-y) $(ncr53c8xx-flags-m)
 zalon7xx-objs		:= zalon.o ncr53c8xx.o
 NCR_Q720_mod-objs	:= NCR_Q720.o ncr53c8xx.o
-libata-objs		:= libata-core.o libata-scsi.o libata-bmdma.o
+libata-objs		:= libata-core.o libata-scsi.o libata-bmdma.o libata-eh.o
 oktagon_esp_mod-objs	:= oktagon_esp.o oktagon_io.o
 
 # Files generated that shall be removed upon make clean
+311 -188
drivers/scsi/ahci.c
@@
 #include <asm/io.h>
 
 #define DRV_NAME	"ahci"
-#define DRV_VERSION	"1.2"
+#define DRV_VERSION	"1.3"
 
 
 enum {
@@
 	AHCI_MAX_SG		= 168, /* hardware max is 64K */
 	AHCI_DMA_BOUNDARY	= 0xffffffff,
 	AHCI_USE_CLUSTERING	= 0,
-	AHCI_CMD_SLOT_SZ	= 32 * 32,
+	AHCI_MAX_CMDS		= 32,
+	AHCI_CMD_SZ		= 32,
+	AHCI_CMD_SLOT_SZ	= AHCI_MAX_CMDS * AHCI_CMD_SZ,
 	AHCI_RX_FIS_SZ		= 256,
-	AHCI_CMD_TBL_HDR	= 0x80,
 	AHCI_CMD_TBL_CDB	= 0x40,
-	AHCI_CMD_TBL_SZ		= AHCI_CMD_TBL_HDR + (AHCI_MAX_SG * 16),
-	AHCI_PORT_PRIV_DMA_SZ	= AHCI_CMD_SLOT_SZ + AHCI_CMD_TBL_SZ +
+	AHCI_CMD_TBL_HDR_SZ	= 0x80,
+	AHCI_CMD_TBL_SZ		= AHCI_CMD_TBL_HDR_SZ + (AHCI_MAX_SG * 16),
+	AHCI_CMD_TBL_AR_SZ	= AHCI_CMD_TBL_SZ * AHCI_MAX_CMDS,
+	AHCI_PORT_PRIV_DMA_SZ	= AHCI_CMD_SLOT_SZ + AHCI_CMD_TBL_AR_SZ +
 				  AHCI_RX_FIS_SZ,
 	AHCI_IRQ_ON_SG		= (1 << 31),
 	AHCI_CMD_ATAPI		= (1 << 5),
@@
 	AHCI_CMD_CLR_BUSY	= (1 << 10),
 
 	RX_FIS_D2H_REG		= 0x40,	/* offset of D2H Register FIS data */
+	RX_FIS_UNK		= 0x60, /* offset of Unknown FIS data */
 
 	board_ahci		= 0,
+	board_ahci_vt8251	= 1,
 
 	/* global controller registers */
 	HOST_CAP		= 0x00, /* host capabilities */
@@
 	HOST_AHCI_EN		= (1 << 31), /* AHCI enabled */
 
 	/* HOST_CAP bits */
-	HOST_CAP_64		= (1 << 31), /* PCI DAC (64-bit DMA) support */
 	HOST_CAP_CLO		= (1 << 24), /* Command List Override support */
+	HOST_CAP_NCQ		= (1 << 30), /* Native Command Queueing */
+	HOST_CAP_64		= (1 << 31), /* PCI DAC (64-bit DMA) support */
 
 	/* registers for each SATA port */
 	PORT_LST_ADDR		= 0x00, /* command list DMA addr */
@@
 	PORT_IRQ_PIOS_FIS	= (1 << 1), /* PIO Setup FIS rx'd */
 	PORT_IRQ_D2H_REG_FIS	= (1 << 0), /* D2H Register FIS rx'd */
 
-	PORT_IRQ_FATAL		= PORT_IRQ_TF_ERR |
-				  PORT_IRQ_HBUS_ERR |
-				  PORT_IRQ_HBUS_DATA_ERR |
-				  PORT_IRQ_IF_ERR,
-	DEF_PORT_IRQ		= PORT_IRQ_FATAL | PORT_IRQ_PHYRDY |
-				  PORT_IRQ_CONNECT | PORT_IRQ_SG_DONE |
-				  PORT_IRQ_UNK_FIS | PORT_IRQ_SDB_FIS |
-				  PORT_IRQ_DMAS_FIS | PORT_IRQ_PIOS_FIS |
-				  PORT_IRQ_D2H_REG_FIS,
+	PORT_IRQ_FREEZE		= PORT_IRQ_HBUS_ERR |
+				  PORT_IRQ_IF_ERR |
+				  PORT_IRQ_CONNECT |
+				  PORT_IRQ_PHYRDY |
+				  PORT_IRQ_UNK_FIS,
+	PORT_IRQ_ERROR		= PORT_IRQ_FREEZE |
+				  PORT_IRQ_TF_ERR |
+				  PORT_IRQ_HBUS_DATA_ERR,
+	DEF_PORT_IRQ		= PORT_IRQ_ERROR | PORT_IRQ_SG_DONE |
+				  PORT_IRQ_SDB_FIS | PORT_IRQ_DMAS_FIS |
+				  PORT_IRQ_PIOS_FIS | PORT_IRQ_D2H_REG_FIS,
 
 	/* PORT_CMD bits */
 	PORT_CMD_ATAPI		= (1 << 24), /* Device is ATAPI */
@@
 
 	/* hpriv->flags bits */
 	AHCI_FLAG_MSI		= (1 << 0),
+
+	/* ap->flags bits */
+	AHCI_FLAG_RESET_NEEDS_CLO	= (1 << 24),
+	AHCI_FLAG_NO_NCQ		= (1 << 25),
 };
 
 struct ahci_cmd_hdr {
@@
 	dma_addr_t		cmd_slot_dma;
 	void			*cmd_tbl;
 	dma_addr_t		cmd_tbl_dma;
-	struct ahci_sg		*cmd_tbl_sg;
 	void			*rx_fis;
 	dma_addr_t		rx_fis_dma;
 };
@@
 static int ahci_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
 static unsigned int ahci_qc_issue(struct ata_queued_cmd *qc);
 static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *regs);
-static int ahci_probe_reset(struct ata_port *ap, unsigned int *classes);
 static void ahci_irq_clear(struct ata_port *ap);
-static void ahci_eng_timeout(struct ata_port *ap);
 static int ahci_port_start(struct ata_port *ap);
 static void ahci_port_stop(struct ata_port *ap);
 static void ahci_tf_read(struct ata_port *ap, struct ata_taskfile *tf);
 static void ahci_qc_prep(struct ata_queued_cmd *qc);
 static u8 ahci_check_status(struct ata_port *ap);
-static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc);
+static void ahci_freeze(struct ata_port *ap);
+static void ahci_thaw(struct ata_port *ap);
+static void ahci_error_handler(struct ata_port *ap);
+static void ahci_post_internal_cmd(struct ata_queued_cmd *qc);
 static void ahci_remove_one (struct pci_dev *pdev);
 
 static struct scsi_host_template ahci_sht = {
@@
 	.name			= DRV_NAME,
 	.ioctl			= ata_scsi_ioctl,
 	.queuecommand		= ata_scsi_queuecmd,
-	.can_queue		= ATA_DEF_QUEUE,
+	.change_queue_depth	= ata_scsi_change_queue_depth,
+	.can_queue		= AHCI_MAX_CMDS - 1,
 	.this_id		= ATA_SHT_THIS_ID,
 	.sg_tablesize		= AHCI_MAX_SG,
 	.cmd_per_lun		= ATA_SHT_CMD_PER_LUN,
@@
 	.proc_name		= DRV_NAME,
 	.dma_boundary		= AHCI_DMA_BOUNDARY,
 	.slave_configure	= ata_scsi_slave_config,
+	.slave_destroy		= ata_scsi_slave_destroy,
 	.bios_param		= ata_std_bios_param,
 };
@@
 
 	.tf_read		= ahci_tf_read,
 
-	.probe_reset		= ahci_probe_reset,
-
 	.qc_prep		= ahci_qc_prep,
 	.qc_issue		= ahci_qc_issue,
-
-	.eng_timeout		= ahci_eng_timeout,
 
 	.irq_handler		= ahci_interrupt,
 	.irq_clear		= ahci_irq_clear,
 
 	.scr_read		= ahci_scr_read,
 	.scr_write		= ahci_scr_write,
+
+	.freeze			= ahci_freeze,
+	.thaw			= ahci_thaw,
+
+	.error_handler		= ahci_error_handler,
+	.post_internal_cmd	= ahci_post_internal_cmd,
 
 	.port_start		= ahci_port_start,
 	.port_stop		= ahci_port_stop,
@@
 	{
 		.sht		= &ahci_sht,
 		.host_flags	= ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
-				  ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA,
+				  ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA |
+				  ATA_FLAG_SKIP_D2H_BSY,
+		.pio_mask	= 0x1f, /* pio0-4 */
+		.udma_mask	= 0x7f, /* udma0-6 ; FIXME */
+		.port_ops	= &ahci_ops,
+	},
+	/* board_ahci_vt8251 */
+	{
+		.sht		= &ahci_sht,
+		.host_flags	= ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+				  ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA |
+				  ATA_FLAG_SKIP_D2H_BSY |
+				  AHCI_FLAG_RESET_NEEDS_CLO | AHCI_FLAG_NO_NCQ,
 		.pio_mask	= 0x1f, /* pio0-4 */
 		.udma_mask	= 0x7f, /* udma0-6 ; FIXME */
 		.port_ops	= &ahci_ops,
@@
 };
 
 static const struct pci_device_id ahci_pci_tbl[] = {
+	/* Intel */
 	{ PCI_VENDOR_ID_INTEL, 0x2652, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
 	  board_ahci }, /* ICH6 */
 	{ PCI_VENDOR_ID_INTEL, 0x2653, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
@@
 	  board_ahci }, /* ICH8M */
 	{ PCI_VENDOR_ID_INTEL, 0x282a, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
 	  board_ahci }, /* ICH8M */
+
+	/* JMicron */
 	{ 0x197b, 0x2360, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
 	  board_ahci }, /* JMicron JMB360 */
+	{ 0x197b, 0x2361, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* JMicron JMB361 */
 	{ 0x197b, 0x2363, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
 	  board_ahci }, /* JMicron JMB363 */
+	{ 0x197b, 0x2365, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* JMicron JMB365 */
+	{ 0x197b, 0x2366, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* JMicron JMB366 */
+
+	/* ATI */
 	{ PCI_VENDOR_ID_ATI, 0x4380, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
 	  board_ahci }, /* ATI SB600 non-raid */
 	{ PCI_VENDOR_ID_ATI, 0x4381, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
 	  board_ahci }, /* ATI SB600 raid */
+
+	/* VIA */
+	{ PCI_VENDOR_ID_VIA, 0x3349, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci_vt8251 }, /* VIA VT8251 */
+
+	/* NVIDIA */
+	{ PCI_VENDOR_ID_NVIDIA, 0x044c, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* MCP65 */
+	{ PCI_VENDOR_ID_NVIDIA, 0x044d, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* MCP65 */
+	{ PCI_VENDOR_ID_NVIDIA, 0x044e, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* MCP65 */
+	{ PCI_VENDOR_ID_NVIDIA, 0x044f, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+	  board_ahci }, /* MCP65 */
+
 	{ }	/* terminate list */
 };
 
@@
 	 */
 	pp->cmd_tbl = mem;
 	pp->cmd_tbl_dma = mem_dma;
-
-	pp->cmd_tbl_sg = mem + AHCI_CMD_TBL_HDR;
 
 	ap->private_data = pp;
 
@@
 	return ata_dev_classify(&tf);
 }
 
-static void ahci_fill_cmd_slot(struct ahci_port_priv *pp, u32 opts)
+static void ahci_fill_cmd_slot(struct ahci_port_priv *pp, unsigned int tag,
+			       u32 opts)
 {
-	pp->cmd_slot[0].opts = cpu_to_le32(opts);
-	pp->cmd_slot[0].status = 0;
-	pp->cmd_slot[0].tbl_addr = cpu_to_le32(pp->cmd_tbl_dma & 0xffffffff);
-	pp->cmd_slot[0].tbl_addr_hi = cpu_to_le32((pp->cmd_tbl_dma >> 16) >> 16);
+	dma_addr_t cmd_tbl_dma;
+
+	cmd_tbl_dma = pp->cmd_tbl_dma + tag * AHCI_CMD_TBL_SZ;
+
+	pp->cmd_slot[tag].opts = cpu_to_le32(opts);
+	pp->cmd_slot[tag].status = 0;
+	pp->cmd_slot[tag].tbl_addr = cpu_to_le32(cmd_tbl_dma & 0xffffffff);
+	pp->cmd_slot[tag].tbl_addr_hi = cpu_to_le32((cmd_tbl_dma >> 16) >> 16);
 }
 
-static int ahci_poll_register(void __iomem *reg, u32 mask, u32 val,
-			      unsigned long interval_msec,
-			      unsigned long timeout_msec)
+static int ahci_clo(struct ata_port *ap)
 {
-	unsigned long timeout;
+	void __iomem *port_mmio = (void __iomem *) ap->ioaddr.cmd_addr;
+	struct ahci_host_priv *hpriv = ap->host_set->private_data;
 	u32 tmp;
 
-	timeout = jiffies + (timeout_msec * HZ) / 1000;
-	do {
-		tmp = readl(reg);
-		if ((tmp & mask) == val)
-			return 0;
-		msleep(interval_msec);
-	} while (time_before(jiffies, timeout));
+	if (!(hpriv->cap & HOST_CAP_CLO))
+		return -EOPNOTSUPP;
 
-	return -1;
+	tmp = readl(port_mmio + PORT_CMD);
+	tmp |= PORT_CMD_CLO;
+	writel(tmp, port_mmio + PORT_CMD);
+
+	tmp = ata_wait_register(port_mmio + PORT_CMD,
+				PORT_CMD_CLO, PORT_CMD_CLO, 1, 500);
+	if (tmp & PORT_CMD_CLO)
+		return -EIO;
+
+	return 0;
 }
 
-static int ahci_softreset(struct ata_port *ap, int verbose, unsigned int *class)
+static int ahci_prereset(struct ata_port *ap)
 {
-	struct ahci_host_priv *hpriv = ap->host_set->private_data;
+	if ((ap->flags & AHCI_FLAG_RESET_NEEDS_CLO) &&
+	    (ata_busy_wait(ap, ATA_BUSY, 1000) & ATA_BUSY)) {
+		/* ATA_BUSY hasn't cleared, so send a CLO */
+		ahci_clo(ap);
+	}
+
+	return ata_std_prereset(ap);
+}
+
+static int ahci_softreset(struct ata_port *ap, unsigned int *class)
+{
 	struct ahci_port_priv *pp = ap->private_data;
 	void __iomem *mmio = ap->host_set->mmio_base;
 	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
 	const u32 cmd_fis_len = 5; /* five dwords */
 	const char *reason = NULL;
 	struct ata_taskfile tf;
+	u32 tmp;
 	u8 *fis;
 	int rc;
 
 	DPRINTK("ENTER\n");
+
+	if (ata_port_offline(ap)) {
+		DPRINTK("PHY reports no device\n");
+		*class = ATA_DEV_NONE;
+		return 0;
+	}
 
 	/* prepare for SRST (AHCI-1.1 10.4.1) */
 	rc = ahci_stop_engine(ap);
@@
 	/* check BUSY/DRQ, perform Command List Override if necessary */
 	ahci_tf_read(ap, &tf);
 	if (tf.command & (ATA_BUSY | ATA_DRQ)) {
-		u32 tmp;
+		rc = ahci_clo(ap);
 
-		if (!(hpriv->cap & HOST_CAP_CLO)) {
-			rc = -EIO;
-			reason = "port busy but no CLO";
+		if (rc == -EOPNOTSUPP) {
+			reason = "port busy but CLO unavailable";
 			goto fail_restart;
-		}
-
-		tmp = readl(port_mmio + PORT_CMD);
-		tmp |= PORT_CMD_CLO;
-		writel(tmp, port_mmio + PORT_CMD);
-		readl(port_mmio + PORT_CMD); /* flush */
-
-		if (ahci_poll_register(port_mmio + PORT_CMD, PORT_CMD_CLO, 0x0,
-				       1, 500)) {
-			rc = -EIO;
-			reason = "CLO failed";
+		} else if (rc) {
+			reason = "port busy but CLO failed";
 			goto fail_restart;
 		}
 	}
@@
 	/* restart engine */
 	ahci_start_engine(ap);
 
-	ata_tf_init(ap, &tf, 0);
+	ata_tf_init(ap->device, &tf);
 	fis = pp->cmd_tbl;
 
 	/* issue the first D2H Register FIS */
-	ahci_fill_cmd_slot(pp, cmd_fis_len | AHCI_CMD_RESET | AHCI_CMD_CLR_BUSY);
+	ahci_fill_cmd_slot(pp, 0,
+			   cmd_fis_len | AHCI_CMD_RESET | AHCI_CMD_CLR_BUSY);
 
 	tf.ctl |= ATA_SRST;
 	ata_tf_to_fis(&tf, fis, 0);
 	fis[1] &= ~(1 << 7);	/* turn off Command FIS bit */
 
 	writel(1, port_mmio + PORT_CMD_ISSUE);
-	readl(port_mmio + PORT_CMD_ISSUE);	/* flush */
 
-	if (ahci_poll_register(port_mmio + PORT_CMD_ISSUE, 0x1, 0x0, 1, 500)) {
+	tmp = ata_wait_register(port_mmio + PORT_CMD_ISSUE, 0x1, 0x1, 1, 500);
+	if (tmp & 0x1) {
 		rc = -EIO;
 		reason = "1st FIS failed";
 		goto fail;
@@
 	msleep(1);
 
 	/* issue the second D2H Register FIS */
-	ahci_fill_cmd_slot(pp, cmd_fis_len);
+	ahci_fill_cmd_slot(pp, 0, cmd_fis_len);
 
 	tf.ctl &= ~ATA_SRST;
 	ata_tf_to_fis(&tf, fis, 0);
@@
 	msleep(150);
 
 	*class = ATA_DEV_NONE;
-	if (sata_dev_present(ap)) {
+	if (ata_port_online(ap)) {
 		if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) {
 			rc = -EIO;
 			reason = "device not ready";
@@
  fail_restart:
 	ahci_start_engine(ap);
  fail:
-	if (verbose)
-		printk(KERN_ERR "ata%u: softreset failed (%s)\n",
-		       ap->id, reason);
-	else
-		DPRINTK("EXIT, rc=%d reason=\"%s\"\n", rc, reason);
+	ata_port_printk(ap, KERN_ERR, "softreset failed (%s)\n", reason);
 	return rc;
 }
 
-static int ahci_hardreset(struct ata_port *ap, int verbose, unsigned int *class)
+static int ahci_hardreset(struct ata_port *ap, unsigned int *class)
 {
+	struct ahci_port_priv *pp = ap->private_data;
+	u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG;
+	struct ata_taskfile tf;
 	int rc;
 
 	DPRINTK("ENTER\n");
 
 	ahci_stop_engine(ap);
-	rc = sata_std_hardreset(ap, verbose, class);
+
+	/* clear D2H reception area to properly wait for D2H FIS */
+	ata_tf_init(ap->device, &tf);
+	tf.command = 0xff;
+	ata_tf_to_fis(&tf, d2h_fis, 0);
+
+	rc = sata_std_hardreset(ap, class);
+
 	ahci_start_engine(ap);
 
-	if (rc == 0)
+	if (rc == 0 && ata_port_online(ap))
 		*class = ahci_dev_classify(ap);
 	if (*class == ATA_DEV_UNKNOWN)
 		*class = ATA_DEV_NONE;
@@
 	}
 }
 
-static int ahci_probe_reset(struct ata_port *ap, unsigned int *classes)
-{
-	return ata_drive_probe_reset(ap, ata_std_probeinit,
-				     ahci_softreset, ahci_hardreset,
-				     ahci_postreset, classes);
-}
-
 static u8 ahci_check_status(struct ata_port *ap)
 {
 	void __iomem *mmio = (void __iomem *) ap->ioaddr.cmd_addr;
@@
 	ata_tf_from_fis(d2h_fis, tf);
 }
 
-static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc)
+static unsigned int ahci_fill_sg(struct ata_queued_cmd *qc, void *cmd_tbl)
 {
-	struct ahci_port_priv *pp = qc->ap->private_data;
 	struct scatterlist *sg;
 	struct ahci_sg *ahci_sg;
 	unsigned int n_sg = 0;
@@
 	/*
 	 * Next, the S/G list.
 	 */
-	ahci_sg = pp->cmd_tbl_sg;
+	ahci_sg = cmd_tbl + AHCI_CMD_TBL_HDR_SZ;
 	ata_for_each_sg(sg, qc) {
 		dma_addr_t addr = sg_dma_address(sg);
 		u32 sg_len = sg_dma_len(sg);
@@
 	struct ata_port *ap = qc->ap;
 	struct ahci_port_priv *pp = ap->private_data;
 	int is_atapi = is_atapi_taskfile(&qc->tf);
+	void *cmd_tbl;
 	u32 opts;
 	const u32 cmd_fis_len = 5; /* five dwords */
 	unsigned int n_elem;
@@
 	 * Fill in command table information.  First, the header,
 	 * a SATA Register - Host to Device command FIS.
 	 */
-	ata_tf_to_fis(&qc->tf, pp->cmd_tbl, 0);
+	cmd_tbl = pp->cmd_tbl + qc->tag * AHCI_CMD_TBL_SZ;
+
+	ata_tf_to_fis(&qc->tf, cmd_tbl, 0);
 	if (is_atapi) {
-		memset(pp->cmd_tbl + AHCI_CMD_TBL_CDB, 0, 32);
-		memcpy(pp->cmd_tbl + AHCI_CMD_TBL_CDB, qc->cdb,
-		       qc->dev->cdb_len);
+		memset(cmd_tbl + AHCI_CMD_TBL_CDB, 0, 32);
+		memcpy(cmd_tbl + AHCI_CMD_TBL_CDB, qc->cdb, qc->dev->cdb_len);
 	}
 
 	n_elem = 0;
 	if (qc->flags & ATA_QCFLAG_DMAMAP)
-		n_elem = ahci_fill_sg(qc);
+		n_elem = ahci_fill_sg(qc, cmd_tbl);
 
 	/*
 	 * Fill in command slot information.
@@
 	if (is_atapi)
 		opts |= AHCI_CMD_ATAPI | AHCI_CMD_PREFETCH;
 
-	ahci_fill_cmd_slot(pp, opts);
+	ahci_fill_cmd_slot(pp, qc->tag, opts);
 }
 
-static void ahci_restart_port(struct ata_port *ap, u32 irq_stat)
+static void ahci_error_intr(struct ata_port *ap, u32 irq_stat)
 {
-	void __iomem *mmio = ap->host_set->mmio_base;
-	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
-	u32 tmp;
+	struct ahci_port_priv *pp = ap->private_data;
+	struct ata_eh_info *ehi = &ap->eh_info;
+	unsigned int err_mask = 0, action = 0;
+	struct ata_queued_cmd *qc;
+	u32 serror;
 
-	if ((ap->device[0].class != ATA_DEV_ATAPI) ||
-	    ((irq_stat & PORT_IRQ_TF_ERR) == 0))
-		printk(KERN_WARNING "ata%u: port reset, "
-		       "p_is %x is %x pis %x cmd %x tf %x ss %x se %x\n",
-			ap->id,
-			irq_stat,
-			readl(mmio + HOST_IRQ_STAT),
-			readl(port_mmio + PORT_IRQ_STAT),
-			readl(port_mmio + PORT_CMD),
-			readl(port_mmio + PORT_TFDATA),
-			readl(port_mmio + PORT_SCR_STAT),
-			readl(port_mmio + PORT_SCR_ERR));
+	ata_ehi_clear_desc(ehi);
 
-	/* stop DMA */
-	ahci_stop_engine(ap);
+	/* AHCI needs SError cleared; otherwise, it might lock up */
+	serror = ahci_scr_read(ap, SCR_ERROR);
+	ahci_scr_write(ap, SCR_ERROR, serror);
 
-	/* clear SATA phy error, if any */
-	tmp = readl(port_mmio + PORT_SCR_ERR);
-	writel(tmp, port_mmio + PORT_SCR_ERR);
+	/* analyze @irq_stat */
+	ata_ehi_push_desc(ehi, "irq_stat 0x%08x", irq_stat);
 
-	/* if DRQ/BSY is set, device needs to be reset.
-	 * if so, issue COMRESET
-	 */
-	tmp = readl(port_mmio + PORT_TFDATA);
-	if (tmp & (ATA_BUSY | ATA_DRQ)) {
-		writel(0x301, port_mmio + PORT_SCR_CTL);
-		readl(port_mmio + PORT_SCR_CTL); /* flush */
-		udelay(10);
-		writel(0x300, port_mmio + PORT_SCR_CTL);
-		readl(port_mmio + PORT_SCR_CTL); /* flush */
+	if (irq_stat & PORT_IRQ_TF_ERR)
+		err_mask |= AC_ERR_DEV;
+
+	if (irq_stat & (PORT_IRQ_HBUS_ERR | PORT_IRQ_HBUS_DATA_ERR)) {
+		err_mask |= AC_ERR_HOST_BUS;
+		action |= ATA_EH_SOFTRESET;
 	}
 
-	/* re-start DMA */
-	ahci_start_engine(ap);
-}
+	if (irq_stat & PORT_IRQ_IF_ERR) {
+		err_mask |= AC_ERR_ATA_BUS;
+		action |= ATA_EH_SOFTRESET;
+		ata_ehi_push_desc(ehi, ", interface fatal error");
+	}
 
-static void ahci_eng_timeout(struct ata_port *ap)
-{
-	struct ata_host_set *host_set = ap->host_set;
-	void __iomem *mmio = host_set->mmio_base;
-	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
-	struct ata_queued_cmd *qc;
-	unsigned long flags;
+	if (irq_stat & (PORT_IRQ_CONNECT | PORT_IRQ_PHYRDY)) {
+		ata_ehi_hotplugged(ehi);
+		ata_ehi_push_desc(ehi, ", %s", irq_stat & PORT_IRQ_CONNECT ?
+			"connection status changed" : "PHY RDY changed");
+	}
 
-	printk(KERN_WARNING "ata%u: handling error/timeout\n", ap->id);
+	if (irq_stat & PORT_IRQ_UNK_FIS) {
+		u32 *unk = (u32 *)(pp->rx_fis + RX_FIS_UNK);
 
-	spin_lock_irqsave(&host_set->lock, flags);
+		err_mask |= AC_ERR_HSM;
+		action |= ATA_EH_SOFTRESET;
+		ata_ehi_push_desc(ehi, ", unknown FIS %08x %08x %08x %08x",
+				  unk[0], unk[1], unk[2], unk[3]);
+	}
 
-	ahci_restart_port(ap, readl(port_mmio + PORT_IRQ_STAT));
+	/* okay, let's hand over to EH */
+	ehi->serror |= serror;
+	ehi->action |= action;
+
 	qc = ata_qc_from_tag(ap, ap->active_tag);
-	qc->err_mask |= AC_ERR_TIMEOUT;
+	if (qc)
+		qc->err_mask |= err_mask;
+	else
+		ehi->err_mask |= err_mask;
 
-	spin_unlock_irqrestore(&host_set->lock, flags);
-
-	ata_eh_qc_complete(qc);
+	if (irq_stat & PORT_IRQ_FREEZE)
+		ata_port_freeze(ap);
+	else
+		ata_port_abort(ap);
 }
 
-static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc)
+static void ahci_host_intr(struct ata_port *ap)
 {
 	void __iomem *mmio = ap->host_set->mmio_base;
 	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
-	u32 status, serr, ci;
-
-	serr = readl(port_mmio + PORT_SCR_ERR);
-	writel(serr, port_mmio + PORT_SCR_ERR);
+	struct ata_eh_info *ehi = &ap->eh_info;
+	u32 status, qc_active;
+	int rc;
 
 	status = readl(port_mmio + PORT_IRQ_STAT);
 	writel(status, port_mmio + PORT_IRQ_STAT);
 
-	ci = readl(port_mmio + PORT_CMD_ISSUE);
-	if (likely((ci & 0x1) == 0)) {
-		if (qc) {
-			WARN_ON(qc->err_mask);
-			ata_qc_complete(qc);
-			qc = NULL;
-		}
+	if (unlikely(status & PORT_IRQ_ERROR)) {
+		ahci_error_intr(ap, status);
+		return;
 	}
 
-	if (status & PORT_IRQ_FATAL) {
-		unsigned int err_mask;
-		if (status & PORT_IRQ_TF_ERR)
-			err_mask = AC_ERR_DEV;
-		else if (status & PORT_IRQ_IF_ERR)
-			err_mask = AC_ERR_ATA_BUS;
-		else
-			err_mask = AC_ERR_HOST_BUS;
+	if (ap->sactive)
+		qc_active = readl(port_mmio + PORT_SCR_ACT);
+	else
+		qc_active = readl(port_mmio + PORT_CMD_ISSUE);
 
-		/* command processing has stopped due to error; restart */
-		ahci_restart_port(ap, status);
-
-		if (qc) {
-			qc->err_mask |= err_mask;
-			ata_qc_complete(qc);
-		}
+	rc = ata_qc_complete_multiple(ap, qc_active, NULL);
+	if (rc > 0)
+		return;
+	if (rc < 0) {
+		ehi->err_mask |= AC_ERR_HSM;
+		ehi->action |= ATA_EH_SOFTRESET;
+		ata_port_freeze(ap);
+		return;
 	}
 
-	return 1;
+	/* hmmm... a spurious interrupt */
+
+	/* some devices send D2H reg with I bit set during NCQ command phase */
+	if (ap->sactive && status & PORT_IRQ_D2H_REG_FIS)
+		return;
+
+	/* ignore interim PIO setup fis interrupts */
+	if (ata_tag_valid(ap->active_tag)) {
+		struct ata_queued_cmd *qc =
+			ata_qc_from_tag(ap, ap->active_tag);
+
+		if (qc && qc->tf.protocol == ATA_PROT_PIO &&
+		    (status & PORT_IRQ_PIOS_FIS))
+			return;
+	}
+
+	if (ata_ratelimit())
+		ata_port_printk(ap, KERN_INFO, "spurious interrupt "
+				"(irq_stat 0x%x active_tag %d sactive 0x%x)\n",
+				status, ap->active_tag, ap->sactive);
 }
 
 static void ahci_irq_clear(struct ata_port *ap)
@@
 	/* TODO */
 }
 
-static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
+static irqreturn_t ahci_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
 {
 	struct ata_host_set *host_set = dev_instance;
 	struct ahci_host_priv *hpriv;
@@
 
 		ap = host_set->ports[i];
 		if (ap) {
-			struct ata_queued_cmd *qc;
-			qc = ata_qc_from_tag(ap, ap->active_tag);
-			if (!ahci_host_intr(ap, qc))
-				if (ata_ratelimit())
-					dev_printk(KERN_WARNING, host_set->dev,
-					  "unhandled interrupt on port %u\n",
-					  i);
-
+			ahci_host_intr(ap);
 			VPRINTK("port %u\n", i);
 		} else {
 			VPRINTK("port %u (no irq)\n", i);
@@
 		handled = 1;
 	}
 
-        spin_unlock(&host_set->lock);
+	spin_unlock(&host_set->lock);
 
 	VPRINTK("EXIT\n");
 
@@
 	struct ata_port *ap = qc->ap;
 	void __iomem *port_mmio = (void __iomem *) ap->ioaddr.cmd_addr;
 
-	writel(1, port_mmio + PORT_CMD_ISSUE);
+	if (qc->tf.protocol == ATA_PROT_NCQ)
+		writel(1 << qc->tag, port_mmio + PORT_SCR_ACT);
+	writel(1 << qc->tag, port_mmio + PORT_CMD_ISSUE);
 	readl(port_mmio + PORT_CMD_ISSUE);	/* flush */
 
 	return 0;
+}
+
+static void ahci_freeze(struct ata_port *ap)
+{
+	void __iomem *mmio = ap->host_set->mmio_base;
+	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
+
+	/* turn IRQ off */
+	writel(0, port_mmio + PORT_IRQ_MASK);
+}
+
+static void ahci_thaw(struct ata_port *ap)
+{
+	void __iomem *mmio = ap->host_set->mmio_base;
+	void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
+	u32 tmp;
+
+	/* clear IRQ */
+	tmp = readl(port_mmio + PORT_IRQ_STAT);
+	writel(tmp, port_mmio + PORT_IRQ_STAT);
+	writel(1 << ap->id, mmio + HOST_IRQ_STAT);
+
+	/* turn IRQ back on */
+	writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK);
+}
+
+static void ahci_error_handler(struct ata_port *ap)
+{
+	if (!(ap->flags & ATA_FLAG_FROZEN)) {
+		/* restart engine */
+		ahci_stop_engine(ap);
+		ahci_start_engine(ap);
+	}
+
+	/* perform recovery */
+	ata_do_eh(ap, ahci_prereset, ahci_softreset, ahci_hardreset,
+		  ahci_postreset);
+}
+
+static void ahci_post_internal_cmd(struct ata_queued_cmd *qc)
+{
+	struct ata_port *ap = qc->ap;
+
+	if (qc->flags & ATA_QCFLAG_FAILED)
+		qc->err_mask |= AC_ERR_OTHER;
+
+	if (qc->err_mask) {
+		/* make DMA engine forget about the failed command */
+		ahci_stop_engine(ap);
+		ahci_start_engine(ap);
+	}
 }
 
 static void ahci_setup_port(struct ata_ioports *port, unsigned long base,
@@
 		writel(tmp, port_mmio + PORT_IRQ_STAT);
 
 		writel(1 << i, mmio + HOST_IRQ_STAT);
-
-		/* set irq mask (enables interrupts) */
-		writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK);
 	}
 
 	tmp = readl(mmio + HOST_CTL);
@@
 
 	VPRINTK("ENTER\n");
 
+	WARN_ON(ATA_MAX_QUEUE > AHCI_MAX_CMDS);
+
 	if (!printed_version++)
 		dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n");
 
@@
 	if (rc)
 		goto err_out_hpriv;
 
+	if (!(probe_ent->host_flags & AHCI_FLAG_NO_NCQ) &&
+	    (hpriv->cap & HOST_CAP_NCQ))
+		probe_ent->host_flags |= ATA_FLAG_NCQ;
+
 	ahci_print_info(probe_ent);
 
 	/* FIXME: check ata_device_add return value */
@@
 	struct device *dev = pci_dev_to_dev(pdev);
 	struct ata_host_set *host_set = dev_get_drvdata(dev);
 	struct ahci_host_priv *hpriv = host_set->private_data;
-	struct ata_port *ap;
 	unsigned int i;
 	int have_msi;
 
-	for (i = 0; i < host_set->n_ports; i++) {
-		ap = host_set->ports[i];
-
-		scsi_remove_host(ap->host);
-	}
+	for (i = 0; i < host_set->n_ports; i++)
+		ata_port_detach(host_set->ports[i]);
 
 	have_msi = hpriv->flags & AHCI_FLAG_MSI;
 	free_irq(host_set->irq, host_set);
 
 	for (i = 0; i < host_set->n_ports; i++) {
-		ap = host_set->ports[i];
+		struct ata_port *ap = host_set->ports[i];
 
 		ata_scsi_release(ap->host);
 		scsi_host_put(ap->host);
+50 -62
drivers/scsi/ata_piix.c
··· 93 93 #include <linux/libata.h> 94 94 95 95 #define DRV_NAME "ata_piix" 96 - #define DRV_VERSION "1.05" 96 + #define DRV_VERSION "1.10" 97 97 98 98 enum { 99 99 PIIX_IOCFG = 0x54, /* IDE I/O configuration register */ ··· 146 146 147 147 static int piix_init_one (struct pci_dev *pdev, 148 148 const struct pci_device_id *ent); 149 - 150 - static int piix_pata_probe_reset(struct ata_port *ap, unsigned int *classes); 151 - static int piix_sata_probe_reset(struct ata_port *ap, unsigned int *classes); 152 149 static void piix_set_piomode (struct ata_port *ap, struct ata_device *adev); 153 150 static void piix_set_dmamode (struct ata_port *ap, struct ata_device *adev); 151 + static void piix_pata_error_handler(struct ata_port *ap); 152 + static void piix_sata_error_handler(struct ata_port *ap); 154 153 155 154 static unsigned int in_module_init = 1; 156 155 ··· 158 159 { 0x8086, 0x7111, PCI_ANY_ID, PCI_ANY_ID, 0, 0, piix4_pata }, 159 160 { 0x8086, 0x24db, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich5_pata }, 160 161 { 0x8086, 0x25a2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich5_pata }, 162 + { 0x8086, 0x27df, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich5_pata }, 161 163 #endif 162 164 163 165 /* NOTE: The following PCI ids must be kept in sync with the ··· 218 218 .proc_name = DRV_NAME, 219 219 .dma_boundary = ATA_DMA_BOUNDARY, 220 220 .slave_configure = ata_scsi_slave_config, 221 + .slave_destroy = ata_scsi_slave_destroy, 221 222 .bios_param = ata_std_bios_param, 222 223 .resume = ata_scsi_device_resume, 223 224 .suspend = ata_scsi_device_suspend, ··· 228 227 .port_disable = ata_port_disable, 229 228 .set_piomode = piix_set_piomode, 230 229 .set_dmamode = piix_set_dmamode, 230 + .mode_filter = ata_pci_default_filter, 231 231 232 232 .tf_load = ata_tf_load, 233 233 .tf_read = ata_tf_read, ··· 236 234 .exec_command = ata_exec_command, 237 235 .dev_select = ata_std_dev_select, 238 236 239 - .probe_reset = piix_pata_probe_reset, 240 - 241 237 .bmdma_setup = ata_bmdma_setup, 242 238 .bmdma_start = 
ata_bmdma_start, 243 239 .bmdma_stop = ata_bmdma_stop, 244 240 .bmdma_status = ata_bmdma_status, 245 241 .qc_prep = ata_qc_prep, 246 242 .qc_issue = ata_qc_issue_prot, 243 + .data_xfer = ata_pio_data_xfer, 247 244 248 - .eng_timeout = ata_eng_timeout, 245 + .freeze = ata_bmdma_freeze, 246 + .thaw = ata_bmdma_thaw, 247 + .error_handler = piix_pata_error_handler, 248 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 249 249 250 250 .irq_handler = ata_interrupt, 251 251 .irq_clear = ata_bmdma_irq_clear, ··· 266 262 .exec_command = ata_exec_command, 267 263 .dev_select = ata_std_dev_select, 268 264 269 - .probe_reset = piix_sata_probe_reset, 270 - 271 265 .bmdma_setup = ata_bmdma_setup, 272 266 .bmdma_start = ata_bmdma_start, 273 267 .bmdma_stop = ata_bmdma_stop, 274 268 .bmdma_status = ata_bmdma_status, 275 269 .qc_prep = ata_qc_prep, 276 270 .qc_issue = ata_qc_issue_prot, 271 + .data_xfer = ata_pio_data_xfer, 277 272 278 - .eng_timeout = ata_eng_timeout, 273 + .freeze = ata_bmdma_freeze, 274 + .thaw = ata_bmdma_thaw, 275 + .error_handler = piix_sata_error_handler, 276 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 279 277 280 278 .irq_handler = ata_interrupt, 281 279 .irq_clear = ata_bmdma_irq_clear, ··· 461 455 } 462 456 463 457 /** 464 - * piix_pata_probeinit - probeinit for PATA host controller 458 + * piix_pata_prereset - prereset for PATA host controller 465 459 * @ap: Target port 466 460 * 467 - * Probeinit including cable detection. 461 + * Prereset including cable detection. 468 462 * 469 463 * LOCKING: 470 464 * None (inherited from caller). 471 465 */ 472 - static void piix_pata_probeinit(struct ata_port *ap) 473 - { 474 - piix_pata_cbl_detect(ap); 475 - ata_std_probeinit(ap); 476 - } 477 - 478 - /** 479 - * piix_pata_probe_reset - Perform reset on PATA port and classify 480 - * @ap: Port to reset 481 - * @classes: Resulting classes of attached devices 482 - * 483 - * Reset PATA phy and classify attached devices. 
484 - * 485 - * LOCKING: 486 - * None (inherited from caller). 487 - */ 488 - static int piix_pata_probe_reset(struct ata_port *ap, unsigned int *classes) 466 + static int piix_pata_prereset(struct ata_port *ap) 489 467 { 490 468 struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); 491 469 492 470 if (!pci_test_config_bits(pdev, &piix_enable_bits[ap->hard_port_no])) { 493 - printk(KERN_INFO "ata%u: port disabled. ignoring.\n", ap->id); 471 + ata_port_printk(ap, KERN_INFO, "port disabled. ignoring.\n"); 472 + ap->eh_context.i.action &= ~ATA_EH_RESET_MASK; 494 473 return 0; 495 474 } 496 475 497 - return ata_drive_probe_reset(ap, piix_pata_probeinit, 498 - ata_std_softreset, NULL, 499 - ata_std_postreset, classes); 476 + piix_pata_cbl_detect(ap); 477 + 478 + return ata_std_prereset(ap); 479 + } 480 + 481 + static void piix_pata_error_handler(struct ata_port *ap) 482 + { 483 + ata_bmdma_drive_eh(ap, piix_pata_prereset, ata_std_softreset, NULL, 484 + ata_std_postreset); 500 485 } 501 486 502 487 /** 503 - * piix_sata_probe - Probe PCI device for present SATA devices 504 - * @ap: Port associated with the PCI device we wish to probe 488 + * piix_sata_prereset - prereset for SATA host controller 489 + * @ap: Target port 505 490 * 506 491 * Reads and configures SATA PCI device's PCI config register 507 492 * Port Configuration and Status (PCS) to determine port and 508 - * device availability. 493 + * device availability. Return -ENODEV to skip reset if no 494 + * device is present. 509 495 * 510 496 * LOCKING: 511 497 * None (inherited from caller). 512 498 * 513 499 * RETURNS: 514 - * Mask of avaliable devices on the port. 500 + * 0 if device is present, -ENODEV otherwise. 
515 501 */ 516 - static unsigned int piix_sata_probe (struct ata_port *ap) 502 + static int piix_sata_prereset(struct ata_port *ap) 517 503 { 518 504 struct pci_dev *pdev = to_pci_dev(ap->host_set->dev); 519 505 const unsigned int *map = ap->host_set->private_data; ··· 547 549 DPRINTK("ata%u: LEAVE, pcs=0x%x present_mask=0x%x\n", 548 550 ap->id, pcs, present_mask); 549 551 550 - return present_mask; 551 - } 552 - 553 - /** 554 - * piix_sata_probe_reset - Perform reset on SATA port and classify 555 - * @ap: Port to reset 556 - * @classes: Resulting classes of attached devices 557 - * 558 - * Reset SATA phy and classify attached devices. 559 - * 560 - * LOCKING: 561 - * None (inherited from caller). 562 - */ 563 - static int piix_sata_probe_reset(struct ata_port *ap, unsigned int *classes) 564 - { 565 - if (!piix_sata_probe(ap)) { 566 - printk(KERN_INFO "ata%u: SATA port has no device.\n", ap->id); 552 + if (!present_mask) { 553 + ata_port_printk(ap, KERN_INFO, "SATA port has no device.\n"); 554 + ap->eh_context.i.action &= ~ATA_EH_RESET_MASK; 567 555 return 0; 568 556 } 569 557 570 - return ata_drive_probe_reset(ap, ata_std_probeinit, 571 - ata_std_softreset, NULL, 572 - ata_std_postreset, classes); 558 + return ata_std_prereset(ap); 559 + } 560 + 561 + static void piix_sata_error_handler(struct ata_port *ap) 562 + { 563 + ata_bmdma_drive_eh(ap, piix_sata_prereset, ata_std_softreset, NULL, 564 + ata_std_postreset); 573 565 } 574 566 575 567 /** ··· 748 760 pci_read_config_byte(pdev, PCI_REVISION_ID, &rev); 749 761 pci_read_config_word(pdev, 0x41, &cfg); 750 762 /* Only on the original revision: IDE DMA can hang */ 751 - if(rev == 0x00) 763 + if (rev == 0x00) 752 764 no_piix_dma = 1; 753 765 /* On all revisions below 5 PXB bus lock must be disabled for IDE */ 754 - else if(cfg & (1<<14) && rev < 5) 766 + else if (cfg & (1<<14) && rev < 5) 755 767 no_piix_dma = 2; 756 768 } 757 - if(no_piix_dma) 769 + if (no_piix_dma) 758 770 dev_printk(KERN_WARNING, &ata_dev->dev, 
"450NX errata present, disabling IDE DMA.\n"); 759 - if(no_piix_dma == 2) 771 + if (no_piix_dma == 2) 760 772 dev_printk(KERN_WARNING, &ata_dev->dev, "A BIOS update may resolve this.\n"); 761 773 return no_piix_dma; 762 774 }
+158 -2
drivers/scsi/libata-bmdma.c
··· 652 652 ata_altstatus(ap); /* dummy read */ 653 653 } 654 654 655 + /** 656 + * ata_bmdma_freeze - Freeze BMDMA controller port 657 + * @ap: port to freeze 658 + * 659 + * Freeze BMDMA controller port. 660 + * 661 + * LOCKING: 662 + * Inherited from caller. 663 + */ 664 + void ata_bmdma_freeze(struct ata_port *ap) 665 + { 666 + struct ata_ioports *ioaddr = &ap->ioaddr; 667 + 668 + ap->ctl |= ATA_NIEN; 669 + ap->last_ctl = ap->ctl; 670 + 671 + if (ap->flags & ATA_FLAG_MMIO) 672 + writeb(ap->ctl, (void __iomem *)ioaddr->ctl_addr); 673 + else 674 + outb(ap->ctl, ioaddr->ctl_addr); 675 + } 676 + 677 + /** 678 + * ata_bmdma_thaw - Thaw BMDMA controller port 679 + * @ap: port to thaw 680 + * 681 + * Thaw BMDMA controller port. 682 + * 683 + * LOCKING: 684 + * Inherited from caller. 685 + */ 686 + void ata_bmdma_thaw(struct ata_port *ap) 687 + { 688 + /* clear & re-enable interrupts */ 689 + ata_chk_status(ap); 690 + ap->ops->irq_clear(ap); 691 + if (ap->ioaddr.ctl_addr) /* FIXME: hack. create a hook instead */ 692 + ata_irq_on(ap); 693 + } 694 + 695 + /** 696 + * ata_bmdma_drive_eh - Perform EH with given methods for BMDMA controller 697 + * @ap: port to handle error for 698 + * @prereset: prereset method (can be NULL) 699 + * @softreset: softreset method (can be NULL) 700 + * @hardreset: hardreset method (can be NULL) 701 + * @postreset: postreset method (can be NULL) 702 + * 703 + * Handle error for ATA BMDMA controller. It can handle both 704 + * PATA and SATA controllers. Many controllers should be able to 705 + * use this EH as-is or with some added handling before and 706 + * after. 707 + * 708 + * This function is intended to be used for constructing 709 + * ->error_handler callback by low level drivers. 
710 + * 711 + * LOCKING: 712 + * Kernel thread context (may sleep) 713 + */ 714 + void ata_bmdma_drive_eh(struct ata_port *ap, ata_prereset_fn_t prereset, 715 + ata_reset_fn_t softreset, ata_reset_fn_t hardreset, 716 + ata_postreset_fn_t postreset) 717 + { 718 + struct ata_eh_context *ehc = &ap->eh_context; 719 + struct ata_queued_cmd *qc; 720 + unsigned long flags; 721 + int thaw = 0; 722 + 723 + qc = __ata_qc_from_tag(ap, ap->active_tag); 724 + if (qc && !(qc->flags & ATA_QCFLAG_FAILED)) 725 + qc = NULL; 726 + 727 + /* reset PIO HSM and stop DMA engine */ 728 + spin_lock_irqsave(ap->lock, flags); 729 + 730 + ap->hsm_task_state = HSM_ST_IDLE; 731 + 732 + if (qc && (qc->tf.protocol == ATA_PROT_DMA || 733 + qc->tf.protocol == ATA_PROT_ATAPI_DMA)) { 734 + u8 host_stat; 735 + 736 + host_stat = ata_bmdma_status(ap); 737 + 738 + ata_ehi_push_desc(&ehc->i, "BMDMA stat 0x%x", host_stat); 739 + 740 + /* BMDMA controllers indicate host bus error by 741 + * setting DMA_ERR bit and timing out. As it wasn't 742 + * really a timeout event, adjust error mask and 743 + * cancel frozen state. 744 + */ 745 + if (qc->err_mask == AC_ERR_TIMEOUT && host_stat & ATA_DMA_ERR) { 746 + qc->err_mask = AC_ERR_HOST_BUS; 747 + thaw = 1; 748 + } 749 + 750 + ap->ops->bmdma_stop(qc); 751 + } 752 + 753 + ata_altstatus(ap); 754 + ata_chk_status(ap); 755 + ap->ops->irq_clear(ap); 756 + 757 + spin_unlock_irqrestore(ap->lock, flags); 758 + 759 + if (thaw) 760 + ata_eh_thaw_port(ap); 761 + 762 + /* PIO and DMA engines have been stopped, perform recovery */ 763 + ata_do_eh(ap, prereset, softreset, hardreset, postreset); 764 + } 765 + 766 + /** 767 + * ata_bmdma_error_handler - Stock error handler for BMDMA controller 768 + * @ap: port to handle error for 769 + * 770 + * Stock error handler for BMDMA controller. 
771 + * 772 + * LOCKING: 773 + * Kernel thread context (may sleep) 774 + */ 775 + void ata_bmdma_error_handler(struct ata_port *ap) 776 + { 777 + ata_reset_fn_t hardreset; 778 + 779 + hardreset = NULL; 780 + if (sata_scr_valid(ap)) 781 + hardreset = sata_std_hardreset; 782 + 783 + ata_bmdma_drive_eh(ap, ata_std_prereset, ata_std_softreset, hardreset, 784 + ata_std_postreset); 785 + } 786 + 787 + /** 788 + * ata_bmdma_post_internal_cmd - Stock post_internal_cmd for 789 + * BMDMA controller 790 + * @qc: internal command to clean up 791 + * 792 + * LOCKING: 793 + * Kernel thread context (may sleep) 794 + */ 795 + void ata_bmdma_post_internal_cmd(struct ata_queued_cmd *qc) 796 + { 797 + ata_bmdma_stop(qc); 798 + } 799 + 655 800 #ifdef CONFIG_PCI 656 801 static struct ata_probe_ent * 657 802 ata_probe_ent_alloc(struct device *dev, const struct ata_port_info *port) ··· 1075 930 1076 931 /* FIXME: check ata_device_add return */ 1077 932 if (legacy_mode) { 1078 - if (legacy_mode & (1 << 0)) 933 + struct device *dev = &pdev->dev; 934 + struct ata_host_set *host_set = NULL; 935 + 936 + if (legacy_mode & (1 << 0)) { 1079 937 ata_device_add(probe_ent); 1080 - if (legacy_mode & (1 << 1)) 938 + host_set = dev_get_drvdata(dev); 939 + } 940 + 941 + if (legacy_mode & (1 << 1)) { 1081 942 ata_device_add(probe_ent2); 943 + if (host_set) { 944 + host_set->next = dev_get_drvdata(dev); 945 + dev_set_drvdata(dev, host_set); 946 + } 947 + } 1082 948 } else 1083 949 ata_device_add(probe_ent); 1084 950
+2041 -1107
drivers/scsi/libata-core.c
··· 61 61 62 62 #include "libata.h" 63 63 64 - static unsigned int ata_dev_init_params(struct ata_port *ap, 65 - struct ata_device *dev, 66 - u16 heads, 67 - u16 sectors); 68 - static void ata_set_mode(struct ata_port *ap); 69 - static unsigned int ata_dev_set_xfermode(struct ata_port *ap, 70 - struct ata_device *dev); 71 - static void ata_dev_xfermask(struct ata_port *ap, struct ata_device *dev); 64 + /* debounce timing parameters in msecs { interval, duration, timeout } */ 65 + const unsigned long sata_deb_timing_boot[] = { 5, 100, 2000 }; 66 + const unsigned long sata_deb_timing_eh[] = { 25, 500, 2000 }; 67 + const unsigned long sata_deb_timing_before_fsrst[] = { 100, 2000, 5000 }; 68 + 69 + static unsigned int ata_dev_init_params(struct ata_device *dev, 70 + u16 heads, u16 sectors); 71 + static unsigned int ata_dev_set_xfermode(struct ata_device *dev); 72 + static void ata_dev_xfermask(struct ata_device *dev); 72 73 73 74 static unsigned int ata_unique_id = 1; 74 75 static struct workqueue_struct *ata_wq; 75 76 77 + struct workqueue_struct *ata_aux_wq; 78 + 76 79 int atapi_enabled = 1; 77 80 module_param(atapi_enabled, int, 0444); 78 81 MODULE_PARM_DESC(atapi_enabled, "Enable discovery of ATAPI devices (0=off, 1=on)"); 82 + 83 + int atapi_dmadir = 0; 84 + module_param(atapi_dmadir, int, 0444); 85 + MODULE_PARM_DESC(atapi_dmadir, "Enable ATAPI DMADIR bridge support (0=off, 1=on)"); 79 86 80 87 int libata_fua = 0; 81 88 module_param_named(fua, libata_fua, int, 0444); ··· 404 397 return "<n/a>"; 405 398 } 406 399 407 - static void ata_dev_disable(struct ata_port *ap, struct ata_device *dev) 400 + static const char *sata_spd_string(unsigned int spd) 408 401 { 409 - if (ata_dev_present(dev)) { 410 - printk(KERN_WARNING "ata%u: dev %u disabled\n", 411 - ap->id, dev->devno); 402 + static const char * const spd_str[] = { 403 + "1.5 Gbps", 404 + "3.0 Gbps", 405 + }; 406 + 407 + if (spd == 0 || (spd - 1) >= ARRAY_SIZE(spd_str)) 408 + return "<unknown>"; 409 + return 
spd_str[spd - 1]; 410 + } 411 + 412 + void ata_dev_disable(struct ata_device *dev) 413 + { 414 + if (ata_dev_enabled(dev) && ata_msg_drv(dev->ap)) { 415 + ata_dev_printk(dev, KERN_WARNING, "disabled\n"); 412 416 dev->class++; 413 417 } 414 418 } ··· 777 759 void ata_dev_select(struct ata_port *ap, unsigned int device, 778 760 unsigned int wait, unsigned int can_sleep) 779 761 { 780 - VPRINTK("ENTER, ata%u: device %u, wait %u\n", 781 - ap->id, device, wait); 762 + if (ata_msg_probe(ap)) { 763 + ata_port_printk(ap, KERN_INFO, "ata_dev_select: ENTER, ata%u: " 764 + "device %u, wait %u\n", 765 + ap->id, device, wait); 766 + } 782 767 783 768 if (wait) 784 769 ata_wait_idle(ap); ··· 936 915 937 916 DPRINTK("ENTER\n"); 938 917 939 - spin_lock_irqsave(&ap->host_set->lock, flags); 918 + spin_lock_irqsave(ap->lock, flags); 940 919 ap->flags |= ATA_FLAG_FLUSH_PORT_TASK; 941 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 920 + spin_unlock_irqrestore(ap->lock, flags); 942 921 943 922 DPRINTK("flush #1\n"); 944 923 flush_workqueue(ata_wq); ··· 949 928 * Cancel and flush. 
950 929 */ 951 930 if (!cancel_delayed_work(&ap->port_task)) { 952 - DPRINTK("flush #2\n"); 931 + if (ata_msg_ctl(ap)) 932 + ata_port_printk(ap, KERN_DEBUG, "%s: flush #2\n", __FUNCTION__); 953 933 flush_workqueue(ata_wq); 954 934 } 955 935 956 - spin_lock_irqsave(&ap->host_set->lock, flags); 936 + spin_lock_irqsave(ap->lock, flags); 957 937 ap->flags &= ~ATA_FLAG_FLUSH_PORT_TASK; 958 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 938 + spin_unlock_irqrestore(ap->lock, flags); 959 939 960 - DPRINTK("EXIT\n"); 940 + if (ata_msg_ctl(ap)) 941 + ata_port_printk(ap, KERN_DEBUG, "%s: EXIT\n", __FUNCTION__); 961 942 } 962 943 963 944 void ata_qc_complete_internal(struct ata_queued_cmd *qc) 964 945 { 965 946 struct completion *waiting = qc->private_data; 966 947 967 - qc->ap->ops->tf_read(qc->ap, &qc->tf); 968 948 complete(waiting); 969 949 } 970 950 971 951 /** 972 952 * ata_exec_internal - execute libata internal command 973 - * @ap: Port to which the command is sent 974 953 * @dev: Device to which the command is sent 975 954 * @tf: Taskfile registers for the command and the result 955 + * @cdb: CDB for packet command 976 956 * @dma_dir: Data tranfer direction of the command 977 957 * @buf: Data buffer of the command 978 958 * @buflen: Length of data buffer ··· 986 964 * 987 965 * LOCKING: 988 966 * None. Should be called with kernel context, might sleep. 
967 + * 968 + * RETURNS: 969 + * Zero on success, AC_ERR_* mask on failure 989 970 */ 990 - 991 - static unsigned 992 - ata_exec_internal(struct ata_port *ap, struct ata_device *dev, 993 - struct ata_taskfile *tf, 994 - int dma_dir, void *buf, unsigned int buflen) 971 + unsigned ata_exec_internal(struct ata_device *dev, 972 + struct ata_taskfile *tf, const u8 *cdb, 973 + int dma_dir, void *buf, unsigned int buflen) 995 974 { 975 + struct ata_port *ap = dev->ap; 996 976 u8 command = tf->command; 997 977 struct ata_queued_cmd *qc; 978 + unsigned int tag, preempted_tag; 979 + u32 preempted_sactive, preempted_qc_active; 998 980 DECLARE_COMPLETION(wait); 999 981 unsigned long flags; 1000 982 unsigned int err_mask; 983 + int rc; 1001 984 1002 - spin_lock_irqsave(&ap->host_set->lock, flags); 985 + spin_lock_irqsave(ap->lock, flags); 1003 986 1004 - qc = ata_qc_new_init(ap, dev); 1005 - BUG_ON(qc == NULL); 987 + /* no internal command while frozen */ 988 + if (ap->flags & ATA_FLAG_FROZEN) { 989 + spin_unlock_irqrestore(ap->lock, flags); 990 + return AC_ERR_SYSTEM; 991 + } 1006 992 993 + /* initialize internal qc */ 994 + 995 + /* XXX: Tag 0 is used for drivers with legacy EH as some 996 + * drivers choke if any other tag is given. This breaks 997 + * ata_tag_internal() test for those drivers. Don't use new 998 + * EH stuff without converting to it. 
999 + */ 1000 + if (ap->ops->error_handler) 1001 + tag = ATA_TAG_INTERNAL; 1002 + else 1003 + tag = 0; 1004 + 1005 + if (test_and_set_bit(tag, &ap->qc_allocated)) 1006 + BUG(); 1007 + qc = __ata_qc_from_tag(ap, tag); 1008 + 1009 + qc->tag = tag; 1010 + qc->scsicmd = NULL; 1011 + qc->ap = ap; 1012 + qc->dev = dev; 1013 + ata_qc_reinit(qc); 1014 + 1015 + preempted_tag = ap->active_tag; 1016 + preempted_sactive = ap->sactive; 1017 + preempted_qc_active = ap->qc_active; 1018 + ap->active_tag = ATA_TAG_POISON; 1019 + ap->sactive = 0; 1020 + ap->qc_active = 0; 1021 + 1022 + /* prepare & issue qc */ 1007 1023 qc->tf = *tf; 1024 + if (cdb) 1025 + memcpy(qc->cdb, cdb, ATAPI_CDB_LEN); 1026 + qc->flags |= ATA_QCFLAG_RESULT_TF; 1008 1027 qc->dma_dir = dma_dir; 1009 1028 if (dma_dir != DMA_NONE) { 1010 1029 ata_sg_init_one(qc, buf, buflen); ··· 1057 994 1058 995 ata_qc_issue(qc); 1059 996 1060 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 997 + spin_unlock_irqrestore(ap->lock, flags); 1061 998 1062 - if (!wait_for_completion_timeout(&wait, ATA_TMOUT_INTERNAL)) { 1063 - ata_port_flush_task(ap); 999 + rc = wait_for_completion_timeout(&wait, ATA_TMOUT_INTERNAL); 1064 1000 1065 - spin_lock_irqsave(&ap->host_set->lock, flags); 1001 + ata_port_flush_task(ap); 1002 + 1003 + if (!rc) { 1004 + spin_lock_irqsave(ap->lock, flags); 1066 1005 1067 1006 /* We're racing with irq here. If we lose, the 1068 1007 * following test prevents us from completing the qc 1069 - * again. If completion irq occurs after here but 1070 - * before the caller cleans up, it will result in a 1071 - * spurious interrupt. We can live with that. 1008 + * twice. If we win, the port is frozen and will be 1009 + * cleaned up by ->post_internal_cmd(). 
1072 1010 */ 1073 1011 if (qc->flags & ATA_QCFLAG_ACTIVE) { 1074 - qc->err_mask = AC_ERR_TIMEOUT; 1075 - ata_qc_complete(qc); 1076 - printk(KERN_WARNING "ata%u: qc timeout (cmd 0x%x)\n", 1077 - ap->id, command); 1012 + qc->err_mask |= AC_ERR_TIMEOUT; 1013 + 1014 + if (ap->ops->error_handler) 1015 + ata_port_freeze(ap); 1016 + else 1017 + ata_qc_complete(qc); 1018 + 1019 + if (ata_msg_warn(ap)) 1020 + ata_dev_printk(dev, KERN_WARNING, 1021 + "qc timeout (cmd 0x%x)\n", command); 1078 1022 } 1079 1023 1080 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 1024 + spin_unlock_irqrestore(ap->lock, flags); 1081 1025 } 1082 1026 1083 - *tf = qc->tf; 1027 + /* do post_internal_cmd */ 1028 + if (ap->ops->post_internal_cmd) 1029 + ap->ops->post_internal_cmd(qc); 1030 + 1031 + if (qc->flags & ATA_QCFLAG_FAILED && !qc->err_mask) { 1032 + if (ata_msg_warn(ap)) 1033 + ata_dev_printk(dev, KERN_WARNING, 1034 + "zero err_mask for failed " 1035 + "internal command, assuming AC_ERR_OTHER\n"); 1036 + qc->err_mask |= AC_ERR_OTHER; 1037 + } 1038 + 1039 + /* finish up */ 1040 + spin_lock_irqsave(ap->lock, flags); 1041 + 1042 + *tf = qc->result_tf; 1084 1043 err_mask = qc->err_mask; 1085 1044 1086 1045 ata_qc_free(qc); 1046 + ap->active_tag = preempted_tag; 1047 + ap->sactive = preempted_sactive; 1048 + ap->qc_active = preempted_qc_active; 1087 1049 1088 1050 /* XXX - Some LLDDs (sata_mv) disable port on command failure. 1089 1051 * Until those drivers are fixed, we detect the condition ··· 1121 1033 * 1122 1034 * Kill the following code as soon as those drivers are fixed. 
1123 1035 */ 1124 - if (ap->flags & ATA_FLAG_PORT_DISABLED) { 1036 + if (ap->flags & ATA_FLAG_DISABLED) { 1125 1037 err_mask |= AC_ERR_SYSTEM; 1126 1038 ata_port_probe(ap); 1127 1039 } 1040 + 1041 + spin_unlock_irqrestore(ap->lock, flags); 1128 1042 1129 1043 return err_mask; 1130 1044 } ··· 1166 1076 1167 1077 /** 1168 1078 * ata_dev_read_id - Read ID data from the specified device 1169 - * @ap: port on which target device resides 1170 1079 * @dev: target device 1171 1080 * @p_class: pointer to class of the target device (may be changed) 1172 1081 * @post_reset: is this read ID post-reset? 1173 - * @p_id: read IDENTIFY page (newly allocated) 1082 + * @id: buffer to read IDENTIFY data into 1174 1083 * 1175 1084 * Read ID data from the specified device. ATA_CMD_ID_ATA is 1176 1085 * performed on ATA devices and ATA_CMD_ID_ATAPI on ATAPI ··· 1182 1093 * RETURNS: 1183 1094 * 0 on success, -errno otherwise. 1184 1095 */ 1185 - static int ata_dev_read_id(struct ata_port *ap, struct ata_device *dev, 1186 - unsigned int *p_class, int post_reset, u16 **p_id) 1096 + int ata_dev_read_id(struct ata_device *dev, unsigned int *p_class, 1097 + int post_reset, u16 *id) 1187 1098 { 1099 + struct ata_port *ap = dev->ap; 1188 1100 unsigned int class = *p_class; 1189 1101 struct ata_taskfile tf; 1190 1102 unsigned int err_mask = 0; 1191 - u16 *id; 1192 1103 const char *reason; 1193 1104 int rc; 1194 1105 1195 - DPRINTK("ENTER, host %u, dev %u\n", ap->id, dev->devno); 1106 + if (ata_msg_ctl(ap)) 1107 + ata_dev_printk(dev, KERN_DEBUG, "%s: ENTER, host %u, dev %u\n", 1108 + __FUNCTION__, ap->id, dev->devno); 1196 1109 1197 1110 ata_dev_select(ap, dev->devno, 1, 1); /* select device 0/1 */ 1198 1111 1199 - id = kmalloc(sizeof(id[0]) * ATA_ID_WORDS, GFP_KERNEL); 1200 - if (id == NULL) { 1201 - rc = -ENOMEM; 1202 - reason = "out of memory"; 1203 - goto err_out; 1204 - } 1205 - 1206 1112 retry: 1207 - ata_tf_init(ap, &tf, dev->devno); 1113 + ata_tf_init(dev, &tf); 1208 1114 1209 1115 switch 
(class) { 1210 1116 case ATA_DEV_ATA: ··· 1216 1132 1217 1133 tf.protocol = ATA_PROT_PIO; 1218 1134 1219 - err_mask = ata_exec_internal(ap, dev, &tf, DMA_FROM_DEVICE, 1135 + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_FROM_DEVICE, 1220 1136 id, sizeof(id[0]) * ATA_ID_WORDS); 1221 1137 if (err_mask) { 1222 1138 rc = -EIO; ··· 1243 1159 * Some drives were very specific about that exact sequence. 1244 1160 */ 1245 1161 if (ata_id_major_version(id) < 4 || !ata_id_has_lba(id)) { 1246 - err_mask = ata_dev_init_params(ap, dev, id[3], id[6]); 1162 + err_mask = ata_dev_init_params(dev, id[3], id[6]); 1247 1163 if (err_mask) { 1248 1164 rc = -EIO; 1249 1165 reason = "INIT_DEV_PARAMS failed"; ··· 1259 1175 } 1260 1176 1261 1177 *p_class = class; 1262 - *p_id = id; 1178 + 1263 1179 return 0; 1264 1180 1265 1181 err_out: 1266 - printk(KERN_WARNING "ata%u: dev %u failed to IDENTIFY (%s)\n", 1267 - ap->id, dev->devno, reason); 1268 - kfree(id); 1182 + if (ata_msg_warn(ap)) 1183 + ata_dev_printk(dev, KERN_WARNING, "failed to IDENTIFY " 1184 + "(%s, err_mask=0x%x)\n", reason, err_mask); 1269 1185 return rc; 1270 1186 } 1271 1187 1272 - static inline u8 ata_dev_knobble(const struct ata_port *ap, 1273 - struct ata_device *dev) 1188 + static inline u8 ata_dev_knobble(struct ata_device *dev) 1274 1189 { 1275 - return ((ap->cbl == ATA_CBL_SATA) && (!ata_id_is_sata(dev->id))); 1190 + return ((dev->ap->cbl == ATA_CBL_SATA) && (!ata_id_is_sata(dev->id))); 1191 + } 1192 + 1193 + static void ata_dev_config_ncq(struct ata_device *dev, 1194 + char *desc, size_t desc_sz) 1195 + { 1196 + struct ata_port *ap = dev->ap; 1197 + int hdepth = 0, ddepth = ata_id_queue_depth(dev->id); 1198 + 1199 + if (!ata_id_has_ncq(dev->id)) { 1200 + desc[0] = '\0'; 1201 + return; 1202 + } 1203 + 1204 + if (ap->flags & ATA_FLAG_NCQ) { 1205 + hdepth = min(ap->host->can_queue, ATA_MAX_QUEUE - 1); 1206 + dev->flags |= ATA_DFLAG_NCQ; 1207 + } 1208 + 1209 + if (hdepth >= ddepth) 1210 + snprintf(desc, desc_sz, "NCQ 
(depth %d)", ddepth); 1211 + else 1212 + snprintf(desc, desc_sz, "NCQ (depth %d/%d)", hdepth, ddepth); 1276 1213 } 1277 1214 1278 1215 /** 1279 1216 * ata_dev_configure - Configure the specified ATA/ATAPI device 1280 - * @ap: Port on which target device resides 1281 1217 * @dev: Target device to configure 1282 1218 * @print_info: Enable device info printout 1283 1219 * ··· 1310 1206 * RETURNS: 1311 1207 * 0 on success, -errno otherwise 1312 1208 */ 1313 - static int ata_dev_configure(struct ata_port *ap, struct ata_device *dev, 1314 - int print_info) 1209 + int ata_dev_configure(struct ata_device *dev, int print_info) 1315 1210 { 1211 + struct ata_port *ap = dev->ap; 1316 1212 const u16 *id = dev->id; 1317 1213 unsigned int xfer_mask; 1318 1214 int i, rc; 1319 1215 1320 - if (!ata_dev_present(dev)) { 1321 - DPRINTK("ENTER/EXIT (host %u, dev %u) -- nodev\n", 1322 - ap->id, dev->devno); 1216 + if (!ata_dev_enabled(dev) && ata_msg_info(ap)) { 1217 + ata_dev_printk(dev, KERN_INFO, "%s: ENTER/EXIT (host %u, dev %u) -- nodev\n", 1218 + __FUNCTION__, ap->id, dev->devno); 1323 1219 return 0; 1324 1220 } 1325 1221 1326 - DPRINTK("ENTER, host %u, dev %u\n", ap->id, dev->devno); 1222 + if (ata_msg_probe(ap)) 1223 + ata_dev_printk(dev, KERN_DEBUG, "%s: ENTER, host %u, dev %u\n", 1224 + __FUNCTION__, ap->id, dev->devno); 1327 1225 1328 1226 /* print device capabilities */ 1329 - if (print_info) 1330 - printk(KERN_DEBUG "ata%u: dev %u cfg 49:%04x 82:%04x 83:%04x " 1331 - "84:%04x 85:%04x 86:%04x 87:%04x 88:%04x\n", 1332 - ap->id, dev->devno, id[49], id[82], id[83], 1333 - id[84], id[85], id[86], id[87], id[88]); 1227 + if (ata_msg_probe(ap)) 1228 + ata_dev_printk(dev, KERN_DEBUG, "%s: cfg 49:%04x 82:%04x 83:%04x " 1229 + "84:%04x 85:%04x 86:%04x 87:%04x 88:%04x\n", 1230 + __FUNCTION__, 1231 + id[49], id[82], id[83], id[84], 1232 + id[85], id[86], id[87], id[88]); 1334 1233 1335 1234 /* initialize to-be-configured parameters */ 1336 - dev->flags = 0; 1235 + dev->flags &= 
~ATA_DFLAG_CFG_MASK; 1337 1236 dev->max_sectors = 0; 1338 1237 dev->cdb_len = 0; 1339 1238 dev->n_sectors = 0; ··· 1351 1244 /* find max transfer mode; for printk only */ 1352 1245 xfer_mask = ata_id_xfermask(id); 1353 1246 1354 - ata_dump_id(id); 1247 + if (ata_msg_probe(ap)) 1248 + ata_dump_id(id); 1355 1249 1356 1250 /* ATA-specific feature tests */ 1357 1251 if (dev->class == ATA_DEV_ATA) { ··· 1360 1252 1361 1253 if (ata_id_has_lba(id)) { 1362 1254 const char *lba_desc; 1255 + char ncq_desc[20]; 1363 1256 1364 1257 lba_desc = "LBA"; 1365 1258 dev->flags |= ATA_DFLAG_LBA; ··· 1369 1260 lba_desc = "LBA48"; 1370 1261 } 1371 1262 1263 + /* config NCQ */ 1264 + ata_dev_config_ncq(dev, ncq_desc, sizeof(ncq_desc)); 1265 + 1372 1266 /* print device info to dmesg */ 1373 - if (print_info) 1374 - printk(KERN_INFO "ata%u: dev %u ATA-%d, " 1375 - "max %s, %Lu sectors: %s\n", 1376 - ap->id, dev->devno, 1377 - ata_id_major_version(id), 1378 - ata_mode_string(xfer_mask), 1379 - (unsigned long long)dev->n_sectors, 1380 - lba_desc); 1267 + if (ata_msg_info(ap)) 1268 + ata_dev_printk(dev, KERN_INFO, "ATA-%d, " 1269 + "max %s, %Lu sectors: %s %s\n", 1270 + ata_id_major_version(id), 1271 + ata_mode_string(xfer_mask), 1272 + (unsigned long long)dev->n_sectors, 1273 + lba_desc, ncq_desc); 1381 1274 } else { 1382 1275 /* CHS */ 1383 1276 ··· 1396 1285 } 1397 1286 1398 1287 /* print device info to dmesg */ 1399 - if (print_info) 1400 - printk(KERN_INFO "ata%u: dev %u ATA-%d, " 1401 - "max %s, %Lu sectors: CHS %u/%u/%u\n", 1402 - ap->id, dev->devno, 1403 - ata_id_major_version(id), 1404 - ata_mode_string(xfer_mask), 1405 - (unsigned long long)dev->n_sectors, 1406 - dev->cylinders, dev->heads, dev->sectors); 1288 + if (ata_msg_info(ap)) 1289 + ata_dev_printk(dev, KERN_INFO, "ATA-%d, " 1290 + "max %s, %Lu sectors: CHS %u/%u/%u\n", 1291 + ata_id_major_version(id), 1292 + ata_mode_string(xfer_mask), 1293 + (unsigned long long)dev->n_sectors, 1294 + dev->cylinders, dev->heads, 
dev->sectors); 1295 + } 1296 + 1297 + if (dev->id[59] & 0x100) { 1298 + dev->multi_count = dev->id[59] & 0xff; 1299 + if (ata_msg_info(ap)) 1300 + ata_dev_printk(dev, KERN_INFO, "ata%u: dev %u multi count %u\n", 1301 + ap->id, dev->devno, dev->multi_count); 1407 1302 } 1408 1303 1409 1304 dev->cdb_len = 16; ··· 1417 1300 1418 1301 /* ATAPI-specific feature tests */ 1419 1302 else if (dev->class == ATA_DEV_ATAPI) { 1303 + char *cdb_intr_string = ""; 1304 + 1420 1305 rc = atapi_cdb_len(id); 1421 1306 if ((rc < 12) || (rc > ATAPI_CDB_LEN)) { 1422 - printk(KERN_WARNING "ata%u: unsupported CDB len\n", ap->id); 1307 + if (ata_msg_warn(ap)) 1308 + ata_dev_printk(dev, KERN_WARNING, 1309 + "unsupported CDB len\n"); 1423 1310 rc = -EINVAL; 1424 1311 goto err_out_nosup; 1425 1312 } 1426 1313 dev->cdb_len = (unsigned int) rc; 1427 1314 1315 + if (ata_id_cdb_intr(dev->id)) { 1316 + dev->flags |= ATA_DFLAG_CDB_INTR; 1317 + cdb_intr_string = ", CDB intr"; 1318 + } 1319 + 1428 1320 /* print device info to dmesg */ 1429 - if (print_info) 1430 - printk(KERN_INFO "ata%u: dev %u ATAPI, max %s\n", 1431 - ap->id, dev->devno, ata_mode_string(xfer_mask)); 1321 + if (ata_msg_info(ap)) 1322 + ata_dev_printk(dev, KERN_INFO, "ATAPI, max %s%s\n", 1323 + ata_mode_string(xfer_mask), 1324 + cdb_intr_string); 1432 1325 } 1433 1326 1434 1327 ap->host->max_cmd_len = 0; ··· 1448 1321 ap->device[i].cdb_len); 1449 1322 1450 1323 /* limit bridge transfers to udma5, 200 sectors */ 1451 - if (ata_dev_knobble(ap, dev)) { 1452 - if (print_info) 1453 - printk(KERN_INFO "ata%u(%u): applying bridge limits\n", 1454 - ap->id, dev->devno); 1324 + if (ata_dev_knobble(dev)) { 1325 + if (ata_msg_info(ap)) 1326 + ata_dev_printk(dev, KERN_INFO, 1327 + "applying bridge limits\n"); 1455 1328 dev->udma_mask &= ATA_UDMA5; 1456 1329 dev->max_sectors = ATA_MAX_SECTORS; 1457 1330 } ··· 1459 1332 if (ap->ops->dev_config) 1460 1333 ap->ops->dev_config(ap, dev); 1461 1334 1462 - DPRINTK("EXIT, drv_stat = 0x%x\n", 
ata_chk_status(ap)); 1335 + if (ata_msg_probe(ap)) 1336 + ata_dev_printk(dev, KERN_DEBUG, "%s: EXIT, drv_stat = 0x%x\n", 1337 + __FUNCTION__, ata_chk_status(ap)); 1463 1338 return 0; 1464 1339 1465 1340 err_out_nosup: 1466 - DPRINTK("EXIT, err\n"); 1341 + if (ata_msg_probe(ap)) 1342 + ata_dev_printk(dev, KERN_DEBUG, 1343 + "%s: EXIT, err\n", __FUNCTION__); 1467 1344 return rc; 1468 1345 } 1469 1346 ··· 1483 1352 * PCI/etc. bus probe sem. 1484 1353 * 1485 1354 * RETURNS: 1486 - * Zero on success, non-zero on error. 1355 + * Zero on success, negative errno otherwise. 1487 1356 */ 1488 1357 1489 1358 static int ata_bus_probe(struct ata_port *ap) 1490 1359 { 1491 1360 unsigned int classes[ATA_MAX_DEVICES]; 1492 - unsigned int i, rc, found = 0; 1361 + int tries[ATA_MAX_DEVICES]; 1362 + int i, rc, down_xfermask; 1363 + struct ata_device *dev; 1493 1364 1494 1365 ata_port_probe(ap); 1495 1366 1496 - /* reset and determine device classes */ 1497 1367 for (i = 0; i < ATA_MAX_DEVICES; i++) 1498 - classes[i] = ATA_DEV_UNKNOWN; 1368 + tries[i] = ATA_PROBE_MAX_TRIES; 1499 1369 1500 - if (ap->ops->probe_reset) { 1501 - rc = ap->ops->probe_reset(ap, classes); 1502 - if (rc) { 1503 - printk("ata%u: reset failed (errno=%d)\n", ap->id, rc); 1504 - return rc; 1505 - } 1506 - } else { 1507 - ap->ops->phy_reset(ap); 1370 + retry: 1371 + down_xfermask = 0; 1508 1372 1509 - if (!(ap->flags & ATA_FLAG_PORT_DISABLED)) 1510 - for (i = 0; i < ATA_MAX_DEVICES; i++) 1511 - classes[i] = ap->device[i].class; 1373 + /* reset and determine device classes */ 1374 + ap->ops->phy_reset(ap); 1512 1375 1513 - ata_port_probe(ap); 1376 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 1377 + dev = &ap->device[i]; 1378 + 1379 + if (!(ap->flags & ATA_FLAG_DISABLED) && 1380 + dev->class != ATA_DEV_UNKNOWN) 1381 + classes[dev->devno] = dev->class; 1382 + else 1383 + classes[dev->devno] = ATA_DEV_NONE; 1384 + 1385 + dev->class = ATA_DEV_UNKNOWN; 1514 1386 } 1515 1387 1388 + ata_port_probe(ap); 1389 + 1390 + /* after 
the reset the device state is PIO 0 and the controller 1391 + state is undefined. Record the mode */ 1392 + 1516 1393 for (i = 0; i < ATA_MAX_DEVICES; i++) 1517 - if (classes[i] == ATA_DEV_UNKNOWN) 1518 - classes[i] = ATA_DEV_NONE; 1394 + ap->device[i].pio_mode = XFER_PIO_0; 1519 1395 1520 1396 /* read IDENTIFY page and configure devices */ 1521 1397 for (i = 0; i < ATA_MAX_DEVICES; i++) { 1522 - struct ata_device *dev = &ap->device[i]; 1398 + dev = &ap->device[i]; 1523 1399 1524 - dev->class = classes[i]; 1400 + if (tries[i]) 1401 + dev->class = classes[i]; 1525 1402 1526 - if (!ata_dev_present(dev)) 1403 + if (!ata_dev_enabled(dev)) 1527 1404 continue; 1528 1405 1529 - WARN_ON(dev->id != NULL); 1530 - if (ata_dev_read_id(ap, dev, &dev->class, 1, &dev->id)) { 1531 - dev->class = ATA_DEV_NONE; 1532 - continue; 1533 - } 1406 + rc = ata_dev_read_id(dev, &dev->class, 1, dev->id); 1407 + if (rc) 1408 + goto fail; 1534 1409 1535 - if (ata_dev_configure(ap, dev, 1)) { 1536 - ata_dev_disable(ap, dev); 1537 - continue; 1538 - } 1539 - 1540 - found = 1; 1410 + rc = ata_dev_configure(dev, 1); 1411 + if (rc) 1412 + goto fail; 1541 1413 } 1542 1414 1543 - if (!found) 1544 - goto err_out_disable; 1415 + /* configure transfer mode */ 1416 + rc = ata_set_mode(ap, &dev); 1417 + if (rc) { 1418 + down_xfermask = 1; 1419 + goto fail; 1420 + } 1545 1421 1546 - if (ap->ops->set_mode) 1547 - ap->ops->set_mode(ap); 1548 - else 1549 - ata_set_mode(ap); 1422 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1423 + if (ata_dev_enabled(&ap->device[i])) 1424 + return 0; 1550 1425 1551 - if (ap->flags & ATA_FLAG_PORT_DISABLED) 1552 - goto err_out_disable; 1553 - 1554 - return 0; 1555 - 1556 - err_out_disable: 1426 + /* no device present, disable port */ 1427 + ata_port_disable(ap); 1557 1428 ap->ops->port_disable(ap); 1558 - return -1; 1429 + return -ENODEV; 1430 + 1431 + fail: 1432 + switch (rc) { 1433 + case -EINVAL: 1434 + case -ENODEV: 1435 + tries[dev->devno] = 0; 1436 + break; 1437 + case -EIO: 
1438 + sata_down_spd_limit(ap); 1439 + /* fall through */ 1440 + default: 1441 + tries[dev->devno]--; 1442 + if (down_xfermask && 1443 + ata_down_xfermask_limit(dev, tries[dev->devno] == 1)) 1444 + tries[dev->devno] = 0; 1445 + } 1446 + 1447 + if (!tries[dev->devno]) { 1448 + ata_down_xfermask_limit(dev, 1); 1449 + ata_dev_disable(dev); 1450 + } 1451 + 1452 + goto retry; 1559 1453 } 1560 1454 1561 1455 /** ··· 1596 1440 1597 1441 void ata_port_probe(struct ata_port *ap) 1598 1442 { 1599 - ap->flags &= ~ATA_FLAG_PORT_DISABLED; 1443 + ap->flags &= ~ATA_FLAG_DISABLED; 1600 1444 } 1601 1445 1602 1446 /** ··· 1610 1454 */ 1611 1455 static void sata_print_link_status(struct ata_port *ap) 1612 1456 { 1613 - u32 sstatus, tmp; 1614 - const char *speed; 1457 + u32 sstatus, scontrol, tmp; 1615 1458 1616 - if (!ap->ops->scr_read) 1459 + if (sata_scr_read(ap, SCR_STATUS, &sstatus)) 1617 1460 return; 1461 + sata_scr_read(ap, SCR_CONTROL, &scontrol); 1618 1462 1619 - sstatus = scr_read(ap, SCR_STATUS); 1620 - 1621 - if (sata_dev_present(ap)) { 1463 + if (ata_port_online(ap)) { 1622 1464 tmp = (sstatus >> 4) & 0xf; 1623 - if (tmp & (1 << 0)) 1624 - speed = "1.5"; 1625 - else if (tmp & (1 << 1)) 1626 - speed = "3.0"; 1627 - else 1628 - speed = "<unknown>"; 1629 - printk(KERN_INFO "ata%u: SATA link up %s Gbps (SStatus %X)\n", 1630 - ap->id, speed, sstatus); 1465 + ata_port_printk(ap, KERN_INFO, 1466 + "SATA link up %s (SStatus %X SControl %X)\n", 1467 + sata_spd_string(tmp), sstatus, scontrol); 1631 1468 } else { 1632 - printk(KERN_INFO "ata%u: SATA link down (SStatus %X)\n", 1633 - ap->id, sstatus); 1469 + ata_port_printk(ap, KERN_INFO, 1470 + "SATA link down (SStatus %X SControl %X)\n", 1471 + sstatus, scontrol); 1634 1472 } 1635 1473 } 1636 1474 ··· 1647 1497 1648 1498 if (ap->flags & ATA_FLAG_SATA_RESET) { 1649 1499 /* issue phy wake/reset */ 1650 - scr_write_flush(ap, SCR_CONTROL, 0x301); 1500 + sata_scr_write_flush(ap, SCR_CONTROL, 0x301); 1651 1501 /* Couldn't find anything 
in SATA I/II specs, but 1652 1502 * AHCI-1.1 10.4.2 says at least 1 ms. */ 1653 1503 mdelay(1); 1654 1504 } 1655 - scr_write_flush(ap, SCR_CONTROL, 0x300); /* phy wake/clear reset */ 1505 + /* phy wake/clear reset */ 1506 + sata_scr_write_flush(ap, SCR_CONTROL, 0x300); 1656 1507 1657 1508 /* wait for phy to become ready, if necessary */ 1658 1509 do { 1659 1510 msleep(200); 1660 - sstatus = scr_read(ap, SCR_STATUS); 1511 + sata_scr_read(ap, SCR_STATUS, &sstatus); 1661 1512 if ((sstatus & 0xf) != 1) 1662 1513 break; 1663 1514 } while (time_before(jiffies, timeout)); ··· 1667 1516 sata_print_link_status(ap); 1668 1517 1669 1518 /* TODO: phy layer with polling, timeouts, etc. */ 1670 - if (sata_dev_present(ap)) 1519 + if (!ata_port_offline(ap)) 1671 1520 ata_port_probe(ap); 1672 1521 else 1673 1522 ata_port_disable(ap); 1674 1523 1675 - if (ap->flags & ATA_FLAG_PORT_DISABLED) 1524 + if (ap->flags & ATA_FLAG_DISABLED) 1676 1525 return; 1677 1526 1678 1527 if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) { ··· 1697 1546 void sata_phy_reset(struct ata_port *ap) 1698 1547 { 1699 1548 __sata_phy_reset(ap); 1700 - if (ap->flags & ATA_FLAG_PORT_DISABLED) 1549 + if (ap->flags & ATA_FLAG_DISABLED) 1701 1550 return; 1702 1551 ata_bus_reset(ap); 1703 1552 } 1704 1553 1705 1554 /** 1706 1555 * ata_dev_pair - return other device on cable 1707 - * @ap: port 1708 1556 * @adev: device 1709 1557 * 1710 1558 * Obtain the other device on the same cable, or if none is 1711 1559 * present NULL is returned 1712 1560 */ 1713 1561 1714 - struct ata_device *ata_dev_pair(struct ata_port *ap, struct ata_device *adev) 1562 + struct ata_device *ata_dev_pair(struct ata_device *adev) 1715 1563 { 1564 + struct ata_port *ap = adev->ap; 1716 1565 struct ata_device *pair = &ap->device[1 - adev->devno]; 1717 - if (!ata_dev_present(pair)) 1566 + if (!ata_dev_enabled(pair)) 1718 1567 return NULL; 1719 1568 return pair; 1720 1569 } ··· 1736 1585 { 1737 1586 ap->device[0].class = ATA_DEV_NONE; 
1738 1587 ap->device[1].class = ATA_DEV_NONE; 1739 - ap->flags |= ATA_FLAG_PORT_DISABLED; 1588 + ap->flags |= ATA_FLAG_DISABLED; 1589 + } 1590 + 1591 + /** 1592 + * sata_down_spd_limit - adjust SATA spd limit downward 1593 + * @ap: Port to adjust SATA spd limit for 1594 + * 1595 + * Adjust SATA spd limit of @ap downward. Note that this 1596 + * function only adjusts the limit. The change must be applied 1597 + * using sata_set_spd(). 1598 + * 1599 + * LOCKING: 1600 + * Inherited from caller. 1601 + * 1602 + * RETURNS: 1603 + * 0 on success, negative errno on failure 1604 + */ 1605 + int sata_down_spd_limit(struct ata_port *ap) 1606 + { 1607 + u32 sstatus, spd, mask; 1608 + int rc, highbit; 1609 + 1610 + rc = sata_scr_read(ap, SCR_STATUS, &sstatus); 1611 + if (rc) 1612 + return rc; 1613 + 1614 + mask = ap->sata_spd_limit; 1615 + if (mask <= 1) 1616 + return -EINVAL; 1617 + highbit = fls(mask) - 1; 1618 + mask &= ~(1 << highbit); 1619 + 1620 + spd = (sstatus >> 4) & 0xf; 1621 + if (spd <= 1) 1622 + return -EINVAL; 1623 + spd--; 1624 + mask &= (1 << spd) - 1; 1625 + if (!mask) 1626 + return -EINVAL; 1627 + 1628 + ap->sata_spd_limit = mask; 1629 + 1630 + ata_port_printk(ap, KERN_WARNING, "limiting SATA link speed to %s\n", 1631 + sata_spd_string(fls(mask))); 1632 + 1633 + return 0; 1634 + } 1635 + 1636 + static int __sata_set_spd_needed(struct ata_port *ap, u32 *scontrol) 1637 + { 1638 + u32 spd, limit; 1639 + 1640 + if (ap->sata_spd_limit == UINT_MAX) 1641 + limit = 0; 1642 + else 1643 + limit = fls(ap->sata_spd_limit); 1644 + 1645 + spd = (*scontrol >> 4) & 0xf; 1646 + *scontrol = (*scontrol & ~0xf0) | ((limit & 0xf) << 4); 1647 + 1648 + return spd != limit; 1649 + } 1650 + 1651 + /** 1652 + * sata_set_spd_needed - is SATA spd configuration needed 1653 + * @ap: Port in question 1654 + * 1655 + * Test whether the spd limit in SControl matches 1656 + * @ap->sata_spd_limit. 
This function is used to determine 1657 + * whether hardreset is necessary to apply SATA spd 1658 + * configuration. 1659 + * 1660 + * LOCKING: 1661 + * Inherited from caller. 1662 + * 1663 + * RETURNS: 1664 + * 1 if SATA spd configuration is needed, 0 otherwise. 1665 + */ 1666 + int sata_set_spd_needed(struct ata_port *ap) 1667 + { 1668 + u32 scontrol; 1669 + 1670 + if (sata_scr_read(ap, SCR_CONTROL, &scontrol)) 1671 + return 0; 1672 + 1673 + return __sata_set_spd_needed(ap, &scontrol); 1674 + } 1675 + 1676 + /** 1677 + * sata_set_spd - set SATA spd according to spd limit 1678 + * @ap: Port to set SATA spd for 1679 + * 1680 + * Set SATA spd of @ap according to sata_spd_limit. 1681 + * 1682 + * LOCKING: 1683 + * Inherited from caller. 1684 + * 1685 + * RETURNS: 1686 + * 0 if spd doesn't need to be changed, 1 if spd has been 1687 + * changed. Negative errno if SCR registers are inaccessible. 1688 + */ 1689 + int sata_set_spd(struct ata_port *ap) 1690 + { 1691 + u32 scontrol; 1692 + int rc; 1693 + 1694 + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) 1695 + return rc; 1696 + 1697 + if (!__sata_set_spd_needed(ap, &scontrol)) 1698 + return 0; 1699 + 1700 + if ((rc = sata_scr_write(ap, SCR_CONTROL, scontrol))) 1701 + return rc; 1702 + 1703 + return 1; 1740 1704 } 1741 1705 1742 1706 /* ··· 2002 1736 return 0; 2003 1737 } 2004 1738 2005 - static int ata_dev_set_mode(struct ata_port *ap, struct ata_device *dev) 1739 + /** 1740 + * ata_down_xfermask_limit - adjust dev xfer masks downward 1741 + * @dev: Device to adjust xfer masks 1742 + * @force_pio0: Force PIO0 1743 + * 1744 + * Adjust xfer masks of @dev downward. Note that this function 1745 + * does not apply the change. Invoking ata_set_mode() afterwards 1746 + * will apply the limit. 1747 + * 1748 + * LOCKING: 1749 + * Inherited from caller. 
1750 + * 1751 + * RETURNS: 1752 + * 0 on success, negative errno on failure 1753 + */ 1754 + int ata_down_xfermask_limit(struct ata_device *dev, int force_pio0) 1755 + { 1756 + unsigned long xfer_mask; 1757 + int highbit; 1758 + 1759 + xfer_mask = ata_pack_xfermask(dev->pio_mask, dev->mwdma_mask, 1760 + dev->udma_mask); 1761 + 1762 + if (!xfer_mask) 1763 + goto fail; 1764 + /* don't gear down to MWDMA from UDMA, go directly to PIO */ 1765 + if (xfer_mask & ATA_MASK_UDMA) 1766 + xfer_mask &= ~ATA_MASK_MWDMA; 1767 + 1768 + highbit = fls(xfer_mask) - 1; 1769 + xfer_mask &= ~(1 << highbit); 1770 + if (force_pio0) 1771 + xfer_mask &= 1 << ATA_SHIFT_PIO; 1772 + if (!xfer_mask) 1773 + goto fail; 1774 + 1775 + ata_unpack_xfermask(xfer_mask, &dev->pio_mask, &dev->mwdma_mask, 1776 + &dev->udma_mask); 1777 + 1778 + ata_dev_printk(dev, KERN_WARNING, "limiting speed to %s\n", 1779 + ata_mode_string(xfer_mask)); 1780 + 1781 + return 0; 1782 + 1783 + fail: 1784 + return -EINVAL; 1785 + } 1786 + 1787 + static int ata_dev_set_mode(struct ata_device *dev) 2006 1788 { 2007 1789 unsigned int err_mask; 2008 1790 int rc; 2009 1791 1792 + dev->flags &= ~ATA_DFLAG_PIO; 2010 1793 if (dev->xfer_shift == ATA_SHIFT_PIO) 2011 1794 dev->flags |= ATA_DFLAG_PIO; 2012 1795 2013 - err_mask = ata_dev_set_xfermode(ap, dev); 1796 + err_mask = ata_dev_set_xfermode(dev); 2014 1797 if (err_mask) { 2015 - printk(KERN_ERR 2016 - "ata%u: failed to set xfermode (err_mask=0x%x)\n", 2017 - ap->id, err_mask); 1798 + ata_dev_printk(dev, KERN_ERR, "failed to set xfermode " 1799 + "(err_mask=0x%x)\n", err_mask); 2018 1800 return -EIO; 2019 1801 } 2020 1802 2021 - rc = ata_dev_revalidate(ap, dev, 0); 2022 - if (rc) { 2023 - printk(KERN_ERR 2024 - "ata%u: failed to revalidate after set xfermode\n", 2025 - ap->id); 1803 + rc = ata_dev_revalidate(dev, 0); 1804 + if (rc) 2026 1805 return rc; 2027 - } 2028 1806 2029 1807 DPRINTK("xfer_shift=%u, xfer_mode=0x%x\n", 2030 1808 dev->xfer_shift, (int)dev->xfer_mode); 2031 
1809 2032 - printk(KERN_INFO "ata%u: dev %u configured for %s\n", 2033 - ap->id, dev->devno, 2034 - ata_mode_string(ata_xfer_mode2mask(dev->xfer_mode))); 1810 + ata_dev_printk(dev, KERN_INFO, "configured for %s\n", 1811 + ata_mode_string(ata_xfer_mode2mask(dev->xfer_mode))); 2035 1812 return 0; 2036 1813 } 2037 1814 2038 - static int ata_host_set_pio(struct ata_port *ap) 1815 + /** 1816 + * ata_set_mode - Program timings and issue SET FEATURES - XFER 1817 + * @ap: port on which timings will be programmed 1818 + * @r_failed_dev: out parameter for failed device 1819 + * 1820 + * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). If 1821 + * ata_set_mode() fails, pointer to the failing device is 1822 + * returned in @r_failed_dev. 1823 + * 1824 + * LOCKING: 1825 + * PCI/etc. bus probe sem. 1826 + * 1827 + * RETURNS: 1828 + * 0 on success, negative errno otherwise 1829 + */ 1830 + int ata_set_mode(struct ata_port *ap, struct ata_device **r_failed_dev) 2039 1831 { 2040 - int i; 1832 + struct ata_device *dev; 1833 + int i, rc = 0, used_dma = 0, found = 0; 2041 1834 1835 + /* has private set_mode? */ 1836 + if (ap->ops->set_mode) { 1837 + /* FIXME: make ->set_mode handle no device case and 1838 + * return error code and failing device on failure. 
1839 + */ 1840 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 1841 + if (ata_dev_enabled(&ap->device[i])) { 1842 + ap->ops->set_mode(ap); 1843 + break; 1844 + } 1845 + } 1846 + return 0; 1847 + } 1848 + 1849 + /* step 1: calculate xfer_mask */ 2042 1850 for (i = 0; i < ATA_MAX_DEVICES; i++) { 2043 - struct ata_device *dev = &ap->device[i]; 1851 + unsigned int pio_mask, dma_mask; 2044 1852 2045 - if (!ata_dev_present(dev)) 1853 + dev = &ap->device[i]; 1854 + 1855 + if (!ata_dev_enabled(dev)) 1856 + continue; 1857 + 1858 + ata_dev_xfermask(dev); 1859 + 1860 + pio_mask = ata_pack_xfermask(dev->pio_mask, 0, 0); 1861 + dma_mask = ata_pack_xfermask(0, dev->mwdma_mask, dev->udma_mask); 1862 + dev->pio_mode = ata_xfer_mask2mode(pio_mask); 1863 + dev->dma_mode = ata_xfer_mask2mode(dma_mask); 1864 + 1865 + found = 1; 1866 + if (dev->dma_mode) 1867 + used_dma = 1; 1868 + } 1869 + if (!found) 1870 + goto out; 1871 + 1872 + /* step 2: always set host PIO timings */ 1873 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 1874 + dev = &ap->device[i]; 1875 + if (!ata_dev_enabled(dev)) 2046 1876 continue; 2047 1877 2048 1878 if (!dev->pio_mode) { 2049 - printk(KERN_WARNING "ata%u: no PIO support for device %d.\n", ap->id, i); 2050 - return -1; 1879 + ata_dev_printk(dev, KERN_WARNING, "no PIO support\n"); 1880 + rc = -EINVAL; 1881 + goto out; 2051 1882 } 2052 1883 2053 1884 dev->xfer_mode = dev->pio_mode; ··· 2153 1790 ap->ops->set_piomode(ap, dev); 2154 1791 } 2155 1792 2156 - return 0; 2157 - } 2158 - 2159 - static void ata_host_set_dma(struct ata_port *ap) 2160 - { 2161 - int i; 2162 - 1793 + /* step 3: set host DMA timings */ 2163 1794 for (i = 0; i < ATA_MAX_DEVICES; i++) { 2164 - struct ata_device *dev = &ap->device[i]; 1795 + dev = &ap->device[i]; 2165 1796 2166 - if (!ata_dev_present(dev) || !dev->dma_mode) 1797 + if (!ata_dev_enabled(dev) || !dev->dma_mode) 2167 1798 continue; 2168 1799 2169 1800 dev->xfer_mode = dev->dma_mode; ··· 2165 1808 if (ap->ops->set_dmamode) 2166 1809 
ap->ops->set_dmamode(ap, dev); 2167 1810 } 2168 - } 2169 - 2170 - /** 2171 - * ata_set_mode - Program timings and issue SET FEATURES - XFER 2172 - * @ap: port on which timings will be programmed 2173 - * 2174 - * Set ATA device disk transfer mode (PIO3, UDMA6, etc.). 2175 - * 2176 - * LOCKING: 2177 - * PCI/etc. bus probe sem. 2178 - */ 2179 - static void ata_set_mode(struct ata_port *ap) 2180 - { 2181 - int i, rc, used_dma = 0; 2182 - 2183 - /* step 1: calculate xfer_mask */ 2184 - for (i = 0; i < ATA_MAX_DEVICES; i++) { 2185 - struct ata_device *dev = &ap->device[i]; 2186 - unsigned int pio_mask, dma_mask; 2187 - 2188 - if (!ata_dev_present(dev)) 2189 - continue; 2190 - 2191 - ata_dev_xfermask(ap, dev); 2192 - 2193 - /* TODO: let LLDD filter dev->*_mask here */ 2194 - 2195 - pio_mask = ata_pack_xfermask(dev->pio_mask, 0, 0); 2196 - dma_mask = ata_pack_xfermask(0, dev->mwdma_mask, dev->udma_mask); 2197 - dev->pio_mode = ata_xfer_mask2mode(pio_mask); 2198 - dev->dma_mode = ata_xfer_mask2mode(dma_mask); 2199 - 2200 - if (dev->dma_mode) 2201 - used_dma = 1; 2202 - } 2203 - 2204 - /* step 2: always set host PIO timings */ 2205 - rc = ata_host_set_pio(ap); 2206 - if (rc) 2207 - goto err_out; 2208 - 2209 - /* step 3: set host DMA timings */ 2210 - ata_host_set_dma(ap); 2211 1811 2212 1812 /* step 4: update devices' xfer mode */ 2213 1813 for (i = 0; i < ATA_MAX_DEVICES; i++) { 2214 - struct ata_device *dev = &ap->device[i]; 1814 + dev = &ap->device[i]; 2215 1815 2216 - if (!ata_dev_present(dev)) 1816 + if (!ata_dev_enabled(dev)) 2217 1817 continue; 2218 1818 2219 - if (ata_dev_set_mode(ap, dev)) 2220 - goto err_out; 1819 + rc = ata_dev_set_mode(dev); 1820 + if (rc) 1821 + goto out; 2221 1822 } 2222 1823 2223 - /* 2224 - * Record simplex status. If we selected DMA then the other 2225 - * host channels are not permitted to do so. 1824 + /* Record simplex status. If we selected DMA then the other 1825 + * host channels are not permitted to do so. 
2226 1826 */ 2227 - 2228 1827 if (used_dma && (ap->host_set->flags & ATA_HOST_SIMPLEX)) 2229 1828 ap->host_set->simplex_claimed = 1; 2230 1829 2231 - /* 2232 - * Chip specific finalisation 2233 - */ 1830 + /* step 5: chip specific finalisation */ 2234 1831 if (ap->ops->post_set_mode) 2235 1832 ap->ops->post_set_mode(ap); 2236 1833 2237 - return; 2238 - 2239 - err_out: 2240 - ata_port_disable(ap); 1834 + out: 1835 + if (rc) 1836 + *r_failed_dev = dev; 1837 + return rc; 2241 1838 } 2242 1839 2243 1840 /** ··· 2241 1930 } 2242 1931 2243 1932 if (status & ATA_BUSY) 2244 - printk(KERN_WARNING "ata%u is slow to respond, " 2245 - "please be patient\n", ap->id); 1933 + ata_port_printk(ap, KERN_WARNING, 1934 + "port is slow to respond, please be patient\n"); 2246 1935 2247 1936 timeout = timer_start + tmout; 2248 1937 while ((status & ATA_BUSY) && (time_before(jiffies, timeout))) { ··· 2251 1940 } 2252 1941 2253 1942 if (status & ATA_BUSY) { 2254 - printk(KERN_ERR "ata%u failed to respond (%lu secs)\n", 2255 - ap->id, tmout / HZ); 1943 + ata_port_printk(ap, KERN_ERR, "port failed to respond " 1944 + "(%lu secs)\n", tmout / HZ); 2256 1945 return 1; 2257 1946 } 2258 1947 ··· 2344 2033 * the bus shows 0xFF because the odd clown forgets the D7 2345 2034 * pulldown resistor. 2346 2035 */ 2347 - if (ata_check_status(ap) == 0xFF) 2036 + if (ata_check_status(ap) == 0xFF) { 2037 + ata_port_printk(ap, KERN_ERR, "SRST failed (status 0xFF)\n"); 2348 2038 return AC_ERR_OTHER; 2039 + } 2349 2040 2350 2041 ata_bus_post_reset(ap, devmask); 2351 2042 ··· 2371 2058 * Obtains host_set lock. 2372 2059 * 2373 2060 * SIDE EFFECTS: 2374 - * Sets ATA_FLAG_PORT_DISABLED if bus reset fails. 2061 + * Sets ATA_FLAG_DISABLED if bus reset fails. 
2375 2062 */ 2376 2063 2377 2064 void ata_bus_reset(struct ata_port *ap) ··· 2439 2126 return; 2440 2127 2441 2128 err_out: 2442 - printk(KERN_ERR "ata%u: disabling port\n", ap->id); 2129 + ata_port_printk(ap, KERN_ERR, "disabling port\n"); 2443 2130 ap->ops->port_disable(ap); 2444 2131 2445 2132 DPRINTK("EXIT\n"); 2446 2133 } 2447 2134 2448 - static int sata_phy_resume(struct ata_port *ap) 2449 - { 2450 - unsigned long timeout = jiffies + (HZ * 5); 2451 - u32 sstatus; 2452 - 2453 - scr_write_flush(ap, SCR_CONTROL, 0x300); 2454 - 2455 - /* Wait for phy to become ready, if necessary. */ 2456 - do { 2457 - msleep(200); 2458 - sstatus = scr_read(ap, SCR_STATUS); 2459 - if ((sstatus & 0xf) != 1) 2460 - return 0; 2461 - } while (time_before(jiffies, timeout)); 2462 - 2463 - return -1; 2464 - } 2465 - 2466 2135 /** 2467 - * ata_std_probeinit - initialize probing 2468 - * @ap: port to be probed 2136 + * sata_phy_debounce - debounce SATA phy status 2137 + * @ap: ATA port to debounce SATA phy status for 2138 + * @params: timing parameters { interval, duration, timeout } in msec 2469 2139 * 2470 - * @ap is about to be probed. Initialize it. This function is 2471 - * to be used as standard callback for ata_drive_probe_reset(). 2140 + * Make sure SStatus of @ap reaches stable state, determined by 2141 + * holding the same value where DET is not 1 for @duration polled 2142 + * every @interval, before @timeout. Timeout constrains the 2143 + * beginning of the stable state. Because, after hot unplugging, 2144 + * DET gets stuck at 1 on some controllers, this function waits 2145 + * until timeout then returns 0 if DET is stable at 1. 2472 2146 * 2473 - * NOTE!!! Do not use this function as probeinit if a low level 2474 - * driver implements only hardreset. Just pass NULL as probeinit 2475 - * in that case. Using this function is probably okay but doing 2476 - * so makes reset sequence different from the original 2477 - * ->phy_reset implementation and Jeff nervous. 
:-P 2147 + * LOCKING: 2148 + * Kernel thread context (may sleep) 2149 + * 2150 + * RETURNS: 2151 + * 0 on success, -errno on failure. 2478 2152 */ 2479 - void ata_std_probeinit(struct ata_port *ap) 2153 + int sata_phy_debounce(struct ata_port *ap, const unsigned long *params) 2480 2154 { 2481 - if ((ap->flags & ATA_FLAG_SATA) && ap->ops->scr_read) { 2482 - sata_phy_resume(ap); 2483 - if (sata_dev_present(ap)) 2484 - ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT); 2155 + unsigned long interval_msec = params[0]; 2156 + unsigned long duration = params[1] * HZ / 1000; 2157 + unsigned long timeout = jiffies + params[2] * HZ / 1000; 2158 + unsigned long last_jiffies; 2159 + u32 last, cur; 2160 + int rc; 2161 + 2162 + if ((rc = sata_scr_read(ap, SCR_STATUS, &cur))) 2163 + return rc; 2164 + cur &= 0xf; 2165 + 2166 + last = cur; 2167 + last_jiffies = jiffies; 2168 + 2169 + while (1) { 2170 + msleep(interval_msec); 2171 + if ((rc = sata_scr_read(ap, SCR_STATUS, &cur))) 2172 + return rc; 2173 + cur &= 0xf; 2174 + 2175 + /* DET stable? */ 2176 + if (cur == last) { 2177 + if (cur == 1 && time_before(jiffies, timeout)) 2178 + continue; 2179 + if (time_after(jiffies, last_jiffies + duration)) 2180 + return 0; 2181 + continue; 2182 + } 2183 + 2184 + /* unstable, start over */ 2185 + last = cur; 2186 + last_jiffies = jiffies; 2187 + 2188 + /* check timeout */ 2189 + if (time_after(jiffies, timeout)) 2190 + return -EBUSY; 2485 2191 } 2486 2192 } 2487 2193 2488 2194 /** 2489 - * ata_std_softreset - reset host port via ATA SRST 2490 - * @ap: port to reset 2491 - * @verbose: fail verbosely 2492 - * @classes: resulting classes of attached devices 2195 + * sata_phy_resume - resume SATA phy 2196 + * @ap: ATA port to resume SATA phy for 2197 + * @params: timing parameters { interval, duration, timeout } in msec 2493 2198 * 2494 - * Reset host port using ATA SRST. This function is to be used 2495 - * as standard callback for ata_drive_*_reset() functions. 
2199 + * Resume SATA phy of @ap and debounce it. 2200 + * 2201 + * LOCKING: 2202 + * Kernel thread context (may sleep) 2203 + * 2204 + * RETURNS: 2205 + * 0 on success, -errno on failure. 2206 + */ 2207 + int sata_phy_resume(struct ata_port *ap, const unsigned long *params) 2208 + { 2209 + u32 scontrol; 2210 + int rc; 2211 + 2212 + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) 2213 + return rc; 2214 + 2215 + scontrol = (scontrol & 0x0f0) | 0x300; 2216 + 2217 + if ((rc = sata_scr_write(ap, SCR_CONTROL, scontrol))) 2218 + return rc; 2219 + 2220 + /* Some PHYs react badly if SStatus is pounded immediately 2221 + * after resuming. Delay 200ms before debouncing. 2222 + */ 2223 + msleep(200); 2224 + 2225 + return sata_phy_debounce(ap, params); 2226 + } 2227 + 2228 + static void ata_wait_spinup(struct ata_port *ap) 2229 + { 2230 + struct ata_eh_context *ehc = &ap->eh_context; 2231 + unsigned long end, secs; 2232 + int rc; 2233 + 2234 + /* first, debounce phy if SATA */ 2235 + if (ap->cbl == ATA_CBL_SATA) { 2236 + rc = sata_phy_debounce(ap, sata_deb_timing_eh); 2237 + 2238 + /* if debounced successfully and offline, no need to wait */ 2239 + if ((rc == 0 || rc == -EOPNOTSUPP) && ata_port_offline(ap)) 2240 + return; 2241 + } 2242 + 2243 + /* okay, let's give the drive time to spin up */ 2244 + end = ehc->i.hotplug_timestamp + ATA_SPINUP_WAIT * HZ / 1000; 2245 + secs = ((end - jiffies) + HZ - 1) / HZ; 2246 + 2247 + if (time_after(jiffies, end)) 2248 + return; 2249 + 2250 + if (secs > 5) 2251 + ata_port_printk(ap, KERN_INFO, "waiting for device to spin up " 2252 + "(%lu secs)\n", secs); 2253 + 2254 + schedule_timeout_uninterruptible(end - jiffies); 2255 + } 2256 + 2257 + /** 2258 + * ata_std_prereset - prepare for reset 2259 + * @ap: ATA port to be reset 2260 + * 2261 + * @ap is about to be reset. Initialize it. 2496 2262 * 2497 2263 * LOCKING: 2498 2264 * Kernel thread context (may sleep) ··· 2579 2187 * RETURNS: 2580 2188 * 0 on success, -errno otherwise. 
2581 2189 */ 2582 - int ata_std_softreset(struct ata_port *ap, int verbose, unsigned int *classes) 2190 + int ata_std_prereset(struct ata_port *ap) 2191 + { 2192 + struct ata_eh_context *ehc = &ap->eh_context; 2193 + const unsigned long *timing; 2194 + int rc; 2195 + 2196 + /* hotplug? */ 2197 + if (ehc->i.flags & ATA_EHI_HOTPLUGGED) { 2198 + if (ap->flags & ATA_FLAG_HRST_TO_RESUME) 2199 + ehc->i.action |= ATA_EH_HARDRESET; 2200 + if (ap->flags & ATA_FLAG_SKIP_D2H_BSY) 2201 + ata_wait_spinup(ap); 2202 + } 2203 + 2204 + /* if we're about to do hardreset, nothing more to do */ 2205 + if (ehc->i.action & ATA_EH_HARDRESET) 2206 + return 0; 2207 + 2208 + /* if SATA, resume phy */ 2209 + if (ap->cbl == ATA_CBL_SATA) { 2210 + if (ap->flags & ATA_FLAG_LOADING) 2211 + timing = sata_deb_timing_boot; 2212 + else 2213 + timing = sata_deb_timing_eh; 2214 + 2215 + rc = sata_phy_resume(ap, timing); 2216 + if (rc && rc != -EOPNOTSUPP) { 2217 + /* phy resume failed */ 2218 + ata_port_printk(ap, KERN_WARNING, "failed to resume " 2219 + "link for reset (errno=%d)\n", rc); 2220 + return rc; 2221 + } 2222 + } 2223 + 2224 + /* Wait for !BSY if the controller can wait for the first D2H 2225 + * Reg FIS and we don't know that no device is attached. 2226 + */ 2227 + if (!(ap->flags & ATA_FLAG_SKIP_D2H_BSY) && !ata_port_offline(ap)) 2228 + ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT); 2229 + 2230 + return 0; 2231 + } 2232 + 2233 + /** 2234 + * ata_std_softreset - reset host port via ATA SRST 2235 + * @ap: port to reset 2236 + * @classes: resulting classes of attached devices 2237 + * 2238 + * Reset host port using ATA SRST. 2239 + * 2240 + * LOCKING: 2241 + * Kernel thread context (may sleep) 2242 + * 2243 + * RETURNS: 2244 + * 0 on success, -errno otherwise. 
2245 + */ 2246 + int ata_std_softreset(struct ata_port *ap, unsigned int *classes) 2583 2247 { 2584 2248 unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS; 2585 2249 unsigned int devmask = 0, err_mask; ··· 2643 2195 2644 2196 DPRINTK("ENTER\n"); 2645 2197 2646 - if (ap->ops->scr_read && !sata_dev_present(ap)) { 2198 + if (ata_port_offline(ap)) { 2647 2199 classes[0] = ATA_DEV_NONE; 2648 2200 goto out; 2649 2201 } ··· 2661 2213 DPRINTK("about to softreset, devmask=%x\n", devmask); 2662 2214 err_mask = ata_bus_softreset(ap, devmask); 2663 2215 if (err_mask) { 2664 - if (verbose) 2665 - printk(KERN_ERR "ata%u: SRST failed (err_mask=0x%x)\n", 2666 - ap->id, err_mask); 2667 - else 2668 - DPRINTK("EXIT, softreset failed (err_mask=0x%x)\n", 2216 + ata_port_printk(ap, KERN_ERR, "SRST failed (err_mask=0x%x)\n", 2669 2217 err_mask); 2670 2218 return -EIO; 2671 2219 } ··· 2679 2235 /** 2680 2236 * sata_std_hardreset - reset host port via SATA phy reset 2681 2237 * @ap: port to reset 2682 - * @verbose: fail verbosely 2683 2238 * @class: resulting class of attached device 2684 2239 * 2685 2240 * SATA phy-reset host port using DET bits of SControl register. 2686 - * This function is to be used as standard callback for 2687 - * ata_drive_*_reset(). 2688 2241 * 2689 2242 * LOCKING: 2690 2243 * Kernel thread context (may sleep) ··· 2689 2248 * RETURNS: 2690 2249 * 0 on success, -errno otherwise. 2691 2250 */ 2692 - int sata_std_hardreset(struct ata_port *ap, int verbose, unsigned int *class) 2251 + int sata_std_hardreset(struct ata_port *ap, unsigned int *class) 2693 2252 { 2253 + u32 scontrol; 2254 + int rc; 2255 + 2694 2256 DPRINTK("ENTER\n"); 2695 2257 2696 - /* Issue phy wake/reset */ 2697 - scr_write_flush(ap, SCR_CONTROL, 0x301); 2258 + if (sata_set_spd_needed(ap)) { 2259 + /* SATA spec says nothing about how to reconfigure 2260 + * spd. To be on the safe side, turn off phy during 2261 + * reconfiguration. This works for at least ICH7 AHCI 2262 + * and Sil3124. 
2263 + */ 2264 + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) 2265 + return rc; 2698 2266 2699 - /* 2700 - * Couldn't find anything in SATA I/II specs, but AHCI-1.1 2267 + scontrol = (scontrol & 0x0f0) | 0x302; 2268 + 2269 + if ((rc = sata_scr_write(ap, SCR_CONTROL, scontrol))) 2270 + return rc; 2271 + 2272 + sata_set_spd(ap); 2273 + } 2274 + 2275 + /* issue phy wake/reset */ 2276 + if ((rc = sata_scr_read(ap, SCR_CONTROL, &scontrol))) 2277 + return rc; 2278 + 2279 + scontrol = (scontrol & 0x0f0) | 0x301; 2280 + 2281 + if ((rc = sata_scr_write_flush(ap, SCR_CONTROL, scontrol))) 2282 + return rc; 2283 + 2284 + /* Couldn't find anything in SATA I/II specs, but AHCI-1.1 2701 2285 * 10.4.2 says at least 1 ms. 2702 2286 */ 2703 2287 msleep(1); 2704 2288 2705 - /* Bring phy back */ 2706 - sata_phy_resume(ap); 2289 + /* bring phy back */ 2290 + sata_phy_resume(ap, sata_deb_timing_eh); 2707 2291 2708 2292 /* TODO: phy layer with polling, timeouts, etc. */ 2709 - if (!sata_dev_present(ap)) { 2293 + if (ata_port_offline(ap)) { 2710 2294 *class = ATA_DEV_NONE; 2711 2295 DPRINTK("EXIT, link offline\n"); 2712 2296 return 0; 2713 2297 } 2714 2298 2715 2299 if (ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT)) { 2716 - if (verbose) 2717 - printk(KERN_ERR "ata%u: COMRESET failed " 2718 - "(device not ready)\n", ap->id); 2719 - else 2720 - DPRINTK("EXIT, device not ready\n"); 2300 + ata_port_printk(ap, KERN_ERR, 2301 + "COMRESET failed (device not ready)\n"); 2721 2302 return -EIO; 2722 2303 } 2723 2304 ··· 2760 2297 * the device might have been reset more than once using 2761 2298 * different reset methods before postreset is invoked. 2762 2299 * 2763 - * This function is to be used as standard callback for 2764 - * ata_drive_*_reset(). 
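The SControl arithmetic in sata_std_hardreset() above keeps the 4-bit SPD field (bits 4-7) and rewrites DET/IPM: 0x302 turns the phy off during speed reconfiguration, 0x301 issues COMRESET. A minimal sketch of that bit manipulation (register field layout per the SATA SControl definition; treat the exact semantics as an assumption here):

```c
#include <stdint.h>

/* keep SPD (bits 4-7), set DET=1 to issue COMRESET, as the code above does */
static uint32_t scontrol_comreset(uint32_t scontrol)
{
    return (scontrol & 0x0f0) | 0x301;
}

/* keep SPD, set DET=2 to disable the phy during spd reconfiguration */
static uint32_t scontrol_phy_off(uint32_t scontrol)
{
    return (scontrol & 0x0f0) | 0x302;
}
```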
2765 - * 2766 2300 * LOCKING: 2767 2301 * Kernel thread context (may sleep) 2768 2302 */ 2769 2303 void ata_std_postreset(struct ata_port *ap, unsigned int *classes) 2770 2304 { 2305 + u32 serror; 2306 + 2771 2307 DPRINTK("ENTER\n"); 2772 2308 2773 - /* set cable type if it isn't already set */ 2774 - if (ap->cbl == ATA_CBL_NONE && ap->flags & ATA_FLAG_SATA) 2775 - ap->cbl = ATA_CBL_SATA; 2776 - 2777 2309 /* print link status */ 2778 - if (ap->cbl == ATA_CBL_SATA) 2779 - sata_print_link_status(ap); 2310 + sata_print_link_status(ap); 2311 + 2312 + /* clear SError */ 2313 + if (sata_scr_read(ap, SCR_ERROR, &serror) == 0) 2314 + sata_scr_write(ap, SCR_ERROR, serror); 2780 2315 2781 2316 /* re-enable interrupts */ 2782 - if (ap->ioaddr.ctl_addr) /* FIXME: hack. create a hook instead */ 2783 - ata_irq_on(ap); 2317 + if (!ap->ops->error_handler) { 2318 + /* FIXME: hack. create a hook instead */ 2319 + if (ap->ioaddr.ctl_addr) 2320 + ata_irq_on(ap); 2321 + } 2784 2322 2785 2323 /* is double-select really necessary? */ 2786 2324 if (classes[0] != ATA_DEV_NONE) ··· 2807 2343 } 2808 2344 2809 2345 /** 2810 - * ata_std_probe_reset - standard probe reset method 2811 - * @ap: prot to perform probe-reset 2812 - * @classes: resulting classes of attached devices 2813 - * 2814 - * The stock off-the-shelf ->probe_reset method. 2815 - * 2816 - * LOCKING: 2817 - * Kernel thread context (may sleep) 2818 - * 2819 - * RETURNS: 2820 - * 0 on success, -errno otherwise. 
2821 - */ 2822 - int ata_std_probe_reset(struct ata_port *ap, unsigned int *classes) 2823 - { 2824 - ata_reset_fn_t hardreset; 2825 - 2826 - hardreset = NULL; 2827 - if (ap->flags & ATA_FLAG_SATA && ap->ops->scr_read) 2828 - hardreset = sata_std_hardreset; 2829 - 2830 - return ata_drive_probe_reset(ap, ata_std_probeinit, 2831 - ata_std_softreset, hardreset, 2832 - ata_std_postreset, classes); 2833 - } 2834 - 2835 - static int do_probe_reset(struct ata_port *ap, ata_reset_fn_t reset, 2836 - ata_postreset_fn_t postreset, 2837 - unsigned int *classes) 2838 - { 2839 - int i, rc; 2840 - 2841 - for (i = 0; i < ATA_MAX_DEVICES; i++) 2842 - classes[i] = ATA_DEV_UNKNOWN; 2843 - 2844 - rc = reset(ap, 0, classes); 2845 - if (rc) 2846 - return rc; 2847 - 2848 - /* If any class isn't ATA_DEV_UNKNOWN, consider classification 2849 - * is complete and convert all ATA_DEV_UNKNOWN to 2850 - * ATA_DEV_NONE. 2851 - */ 2852 - for (i = 0; i < ATA_MAX_DEVICES; i++) 2853 - if (classes[i] != ATA_DEV_UNKNOWN) 2854 - break; 2855 - 2856 - if (i < ATA_MAX_DEVICES) 2857 - for (i = 0; i < ATA_MAX_DEVICES; i++) 2858 - if (classes[i] == ATA_DEV_UNKNOWN) 2859 - classes[i] = ATA_DEV_NONE; 2860 - 2861 - if (postreset) 2862 - postreset(ap, classes); 2863 - 2864 - return classes[0] != ATA_DEV_UNKNOWN ? 0 : -ENODEV; 2865 - } 2866 - 2867 - /** 2868 - * ata_drive_probe_reset - Perform probe reset with given methods 2869 - * @ap: port to reset 2870 - * @probeinit: probeinit method (can be NULL) 2871 - * @softreset: softreset method (can be NULL) 2872 - * @hardreset: hardreset method (can be NULL) 2873 - * @postreset: postreset method (can be NULL) 2874 - * @classes: resulting classes of attached devices 2875 - * 2876 - * Reset the specified port and classify attached devices using 2877 - * given methods. This function prefers softreset but tries all 2878 - * possible reset sequences to reset and classify devices. 
This 2879 - * function is intended to be used for constructing ->probe_reset 2880 - * callback by low level drivers. 2881 - * 2882 - * Reset methods should follow the following rules. 2883 - * 2884 - * - Return 0 on sucess, -errno on failure. 2885 - * - If classification is supported, fill classes[] with 2886 - * recognized class codes. 2887 - * - If classification is not supported, leave classes[] alone. 2888 - * - If verbose is non-zero, print error message on failure; 2889 - * otherwise, shut up. 2890 - * 2891 - * LOCKING: 2892 - * Kernel thread context (may sleep) 2893 - * 2894 - * RETURNS: 2895 - * 0 on success, -EINVAL if no reset method is avaliable, -ENODEV 2896 - * if classification fails, and any error code from reset 2897 - * methods. 2898 - */ 2899 - int ata_drive_probe_reset(struct ata_port *ap, ata_probeinit_fn_t probeinit, 2900 - ata_reset_fn_t softreset, ata_reset_fn_t hardreset, 2901 - ata_postreset_fn_t postreset, unsigned int *classes) 2902 - { 2903 - int rc = -EINVAL; 2904 - 2905 - if (probeinit) 2906 - probeinit(ap); 2907 - 2908 - if (softreset) { 2909 - rc = do_probe_reset(ap, softreset, postreset, classes); 2910 - if (rc == 0) 2911 - return 0; 2912 - } 2913 - 2914 - if (!hardreset) 2915 - return rc; 2916 - 2917 - rc = do_probe_reset(ap, hardreset, postreset, classes); 2918 - if (rc == 0 || rc != -ENODEV) 2919 - return rc; 2920 - 2921 - if (softreset) 2922 - rc = do_probe_reset(ap, softreset, postreset, classes); 2923 - 2924 - return rc; 2925 - } 2926 - 2927 - /** 2928 2346 * ata_dev_same_device - Determine whether new ID matches configured device 2929 - * @ap: port on which the device to compare against resides 2930 2347 * @dev: device to compare against 2931 2348 * @new_class: class of the new device 2932 2349 * @new_id: IDENTIFY page of the new device ··· 2822 2477 * RETURNS: 2823 2478 * 1 if @dev matches @new_class and @new_id, 0 otherwise. 
2824 2479 */ 2825 - static int ata_dev_same_device(struct ata_port *ap, struct ata_device *dev, 2826 - unsigned int new_class, const u16 *new_id) 2480 + static int ata_dev_same_device(struct ata_device *dev, unsigned int new_class, 2481 + const u16 *new_id) 2827 2482 { 2828 2483 const u16 *old_id = dev->id; 2829 2484 unsigned char model[2][41], serial[2][21]; 2830 2485 u64 new_n_sectors; 2831 2486 2832 2487 if (dev->class != new_class) { 2833 - printk(KERN_INFO 2834 - "ata%u: dev %u class mismatch %d != %d\n", 2835 - ap->id, dev->devno, dev->class, new_class); 2488 + ata_dev_printk(dev, KERN_INFO, "class mismatch %d != %d\n", 2489 + dev->class, new_class); 2836 2490 return 0; 2837 2491 } 2838 2492 ··· 2842 2498 new_n_sectors = ata_id_n_sectors(new_id); 2843 2499 2844 2500 if (strcmp(model[0], model[1])) { 2845 - printk(KERN_INFO 2846 - "ata%u: dev %u model number mismatch '%s' != '%s'\n", 2847 - ap->id, dev->devno, model[0], model[1]); 2501 + ata_dev_printk(dev, KERN_INFO, "model number mismatch " 2502 + "'%s' != '%s'\n", model[0], model[1]); 2848 2503 return 0; 2849 2504 } 2850 2505 2851 2506 if (strcmp(serial[0], serial[1])) { 2852 - printk(KERN_INFO 2853 - "ata%u: dev %u serial number mismatch '%s' != '%s'\n", 2854 - ap->id, dev->devno, serial[0], serial[1]); 2507 + ata_dev_printk(dev, KERN_INFO, "serial number mismatch " 2508 + "'%s' != '%s'\n", serial[0], serial[1]); 2855 2509 return 0; 2856 2510 } 2857 2511 2858 2512 if (dev->class == ATA_DEV_ATA && dev->n_sectors != new_n_sectors) { 2859 - printk(KERN_INFO 2860 - "ata%u: dev %u n_sectors mismatch %llu != %llu\n", 2861 - ap->id, dev->devno, (unsigned long long)dev->n_sectors, 2862 - (unsigned long long)new_n_sectors); 2513 + ata_dev_printk(dev, KERN_INFO, "n_sectors mismatch " 2514 + "%llu != %llu\n", 2515 + (unsigned long long)dev->n_sectors, 2516 + (unsigned long long)new_n_sectors); 2863 2517 return 0; 2864 2518 } 2865 2519 ··· 2866 2524 2867 2525 /** 2868 2526 * ata_dev_revalidate - Revalidate ATA device 
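ata_dev_same_device() above decides whether the device answering after a reset is still the one that was configured: class, model string, serial string, and (for ATA disks) capacity must all match. A self-contained model of that check — the struct here is a hypothetical reduction, not the kernel's ata_device:

```c
#include <string.h>

/* hypothetical reduction of the identity-relevant IDENTIFY fields */
struct dev_id {
    int class_;                   /* stand-in for ATA_DEV_* */
    char model[41];
    char serial[21];
    unsigned long long n_sectors;
};

/* mirrors the comparison order above: any mismatch means "new device" */
static int same_device(const struct dev_id *old, const struct dev_id *new_)
{
    if (old->class_ != new_->class_)
        return 0;
    if (strcmp(old->model, new_->model))
        return 0;
    if (strcmp(old->serial, new_->serial))
        return 0;
    if (old->n_sectors != new_->n_sectors)
        return 0;
    return 1;
}
```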
2869 - * @ap: port on which the device to revalidate resides 2870 2527 * @dev: device to revalidate 2871 2528 * @post_reset: is this revalidation after reset? 2872 2529 * ··· 2878 2537 * RETURNS: 2879 2538 * 0 on success, negative errno otherwise 2880 2539 */ 2881 - int ata_dev_revalidate(struct ata_port *ap, struct ata_device *dev, 2882 - int post_reset) 2540 + int ata_dev_revalidate(struct ata_device *dev, int post_reset) 2883 2541 { 2884 - unsigned int class; 2885 - u16 *id; 2542 + unsigned int class = dev->class; 2543 + u16 *id = (void *)dev->ap->sector_buf; 2886 2544 int rc; 2887 2545 2888 - if (!ata_dev_present(dev)) 2889 - return -ENODEV; 2890 - 2891 - class = dev->class; 2892 - id = NULL; 2893 - 2894 - /* allocate & read ID data */ 2895 - rc = ata_dev_read_id(ap, dev, &class, post_reset, &id); 2896 - if (rc) 2897 - goto fail; 2898 - 2899 - /* is the device still there? */ 2900 - if (!ata_dev_same_device(ap, dev, class, id)) { 2546 + if (!ata_dev_enabled(dev)) { 2901 2547 rc = -ENODEV; 2902 2548 goto fail; 2903 2549 } 2904 2550 2905 - kfree(dev->id); 2906 - dev->id = id; 2551 + /* read ID data */ 2552 + rc = ata_dev_read_id(dev, &class, post_reset, id); 2553 + if (rc) 2554 + goto fail; 2555 + 2556 + /* is the device still there? */ 2557 + if (!ata_dev_same_device(dev, class, id)) { 2558 + rc = -ENODEV; 2559 + goto fail; 2560 + } 2561 + 2562 + memcpy(dev->id, id, sizeof(id[0]) * ATA_ID_WORDS); 2907 2563 2908 2564 /* configure device according to the new ID */ 2909 - return ata_dev_configure(ap, dev, 0); 2565 + rc = ata_dev_configure(dev, 0); 2566 + if (rc == 0) 2567 + return 0; 2910 2568 2911 2569 fail: 2912 - printk(KERN_ERR "ata%u: dev %u revalidation failed (errno=%d)\n", 2913 - ap->id, dev->devno, rc); 2914 - kfree(id); 2570 + ata_dev_printk(dev, KERN_ERR, "revalidation failed (errno=%d)\n", rc); 2915 2571 return rc; 2916 2572 } 2917 2573 ··· 2964 2626 unsigned int nlen, rlen; 2965 2627 int i; 2966 2628 2629 + /* We don't support polling DMA. 
2630 + * DMA blacklist those ATAPI devices with CDB-intr (and use PIO) 2631 + * if the LLDD handles only interrupts in the HSM_ST_LAST state. 2632 + */ 2633 + if ((dev->ap->flags & ATA_FLAG_PIO_POLLING) && 2634 + (dev->flags & ATA_DFLAG_CDB_INTR)) 2635 + return 1; 2636 + 2967 2637 ata_id_string(dev->id, model_num, ATA_ID_PROD_OFS, 2968 2638 sizeof(model_num)); 2969 2639 ata_id_string(dev->id, model_rev, ATA_ID_FW_REV_OFS, ··· 2992 2646 2993 2647 /** 2994 2648 * ata_dev_xfermask - Compute supported xfermask of the given device 2995 - * @ap: Port on which the device to compute xfermask for resides 2996 2649 * @dev: Device to compute xfermask for 2997 2650 * 2998 2651 * Compute supported xfermask of @dev and store it in ··· 3006 2661 * LOCKING: 3007 2662 * None. 3008 2663 */ 3009 - static void ata_dev_xfermask(struct ata_port *ap, struct ata_device *dev) 2664 + static void ata_dev_xfermask(struct ata_device *dev) 3010 2665 { 2666 + struct ata_port *ap = dev->ap; 3011 2667 struct ata_host_set *hs = ap->host_set; 3012 2668 unsigned long xfer_mask; 3013 2669 int i; 3014 2670 3015 - xfer_mask = ata_pack_xfermask(ap->pio_mask, ap->mwdma_mask, 3016 - ap->udma_mask); 2671 + xfer_mask = ata_pack_xfermask(ap->pio_mask, 2672 + ap->mwdma_mask, ap->udma_mask); 2673 + 2674 + /* Apply cable rule here. Don't apply it early because when 2675 + * we handle hot plug the cable type can itself change. 
2676 + */ 2677 + if (ap->cbl == ATA_CBL_PATA40) 2678 + xfer_mask &= ~(0xF8 << ATA_SHIFT_UDMA); 3017 2679 3018 2680 /* FIXME: Use port-wide xfermask for now */ 3019 2681 for (i = 0; i < ATA_MAX_DEVICES; i++) { 3020 2682 struct ata_device *d = &ap->device[i]; 3021 - if (!ata_dev_present(d)) 2683 + 2684 + if (ata_dev_absent(d)) 3022 2685 continue; 3023 - xfer_mask &= ata_pack_xfermask(d->pio_mask, d->mwdma_mask, 3024 - d->udma_mask); 2686 + 2687 + if (ata_dev_disabled(d)) { 2688 + /* to avoid violating device selection timing */ 2689 + xfer_mask &= ata_pack_xfermask(d->pio_mask, 2690 + UINT_MAX, UINT_MAX); 2691 + continue; 2692 + } 2693 + 2694 + xfer_mask &= ata_pack_xfermask(d->pio_mask, 2695 + d->mwdma_mask, d->udma_mask); 3025 2696 xfer_mask &= ata_id_xfermask(d->id); 3026 2697 if (ata_dma_blacklisted(d)) 3027 2698 xfer_mask &= ~(ATA_MASK_MWDMA | ATA_MASK_UDMA); 3028 - /* Apply cable rule here. Don't apply it early because when 3029 - we handle hot plug the cable type can itself change */ 3030 - if (ap->cbl == ATA_CBL_PATA40) 3031 - xfer_mask &= ~(0xF8 << ATA_SHIFT_UDMA); 3032 2699 } 3033 2700 3034 2701 if (ata_dma_blacklisted(dev)) 3035 - printk(KERN_WARNING "ata%u: dev %u is on DMA blacklist, " 3036 - "disabling DMA\n", ap->id, dev->devno); 2702 + ata_dev_printk(dev, KERN_WARNING, 2703 + "device is on DMA blacklist, disabling DMA\n"); 3037 2704 3038 2705 if (hs->flags & ATA_HOST_SIMPLEX) { 3039 2706 if (hs->simplex_claimed) 3040 2707 xfer_mask &= ~(ATA_MASK_MWDMA | ATA_MASK_UDMA); 3041 2708 } 2709 + 3042 2710 if (ap->ops->mode_filter) 3043 2711 xfer_mask = ap->ops->mode_filter(ap, dev, xfer_mask); 3044 2712 3045 - ata_unpack_xfermask(xfer_mask, &dev->pio_mask, &dev->mwdma_mask, 3046 - &dev->udma_mask); 2713 + ata_unpack_xfermask(xfer_mask, &dev->pio_mask, 2714 + &dev->mwdma_mask, &dev->udma_mask); 3047 2715 } 3048 2716 3049 2717 /** 3050 2718 * ata_dev_set_xfermode - Issue SET FEATURES - XFER MODE command 3051 - * @ap: Port associated with device @dev 3052 2719 * 
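The cable rule that ata_dev_xfermask() now applies late (so hotplug can change the cable type first) clears UDMA modes above UDMA/33 on a 40-wire cable by masking bits 3-7 of the UDMA field. A sketch of the packed xfermask and that rule — the shift constants mirror include/linux/ata.h of this era (PIO at bit 0, MWDMA at bit 5, UDMA at bit 8) and should be treated as assumptions:

```c
/* assumed shift layout of the combined transfer-mode mask */
enum { SHIFT_PIO = 0, SHIFT_MWDMA = 5, SHIFT_UDMA = 8 };

static unsigned int pack_xfermask(unsigned int pio, unsigned int mwdma,
                                  unsigned int udma)
{
    return (pio << SHIFT_PIO) | (mwdma << SHIFT_MWDMA) | (udma << SHIFT_UDMA);
}

/* a 40-wire PATA cable cannot carry modes above UDMA2 (UDMA/33),
 * so UDMA bits 3..7 are cleared, as in the hunk above */
static unsigned int apply_40wire_rule(unsigned int xfer_mask)
{
    return xfer_mask & ~(0xF8u << SHIFT_UDMA);
}
```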
@dev: Device to which command will be sent 3053 2720 * 3054 2721 * Issue SET FEATURES - XFER MODE command to device @dev ··· 3073 2716 * 0 on success, AC_ERR_* mask otherwise. 3074 2717 */ 3075 2718 3076 - static unsigned int ata_dev_set_xfermode(struct ata_port *ap, 3077 - struct ata_device *dev) 2719 + static unsigned int ata_dev_set_xfermode(struct ata_device *dev) 3078 2720 { 3079 2721 struct ata_taskfile tf; 3080 2722 unsigned int err_mask; ··· 3081 2725 /* set up set-features taskfile */ 3082 2726 DPRINTK("set features - xfer mode\n"); 3083 2727 3084 - ata_tf_init(ap, &tf, dev->devno); 2728 + ata_tf_init(dev, &tf); 3085 2729 tf.command = ATA_CMD_SET_FEATURES; 3086 2730 tf.feature = SETFEATURES_XFER; 3087 2731 tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; 3088 2732 tf.protocol = ATA_PROT_NODATA; 3089 2733 tf.nsect = dev->xfer_mode; 3090 2734 3091 - err_mask = ata_exec_internal(ap, dev, &tf, DMA_NONE, NULL, 0); 2735 + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0); 3092 2736 3093 2737 DPRINTK("EXIT, err_mask=%x\n", err_mask); 3094 2738 return err_mask; ··· 3096 2740 3097 2741 /** 3098 2742 * ata_dev_init_params - Issue INIT DEV PARAMS command 3099 - * @ap: Port associated with device @dev 3100 2743 * @dev: Device to which command will be sent 3101 2744 * @heads: Number of heads (taskfile parameter) 3102 2745 * @sectors: Number of sectors (taskfile parameter) ··· 3106 2751 * RETURNS: 3107 2752 * 0 on success, AC_ERR_* mask otherwise. 
3108 2753 */ 3109 - 3110 - static unsigned int ata_dev_init_params(struct ata_port *ap, 3111 - struct ata_device *dev, 3112 - u16 heads, 3113 - u16 sectors) 2754 + static unsigned int ata_dev_init_params(struct ata_device *dev, 2755 + u16 heads, u16 sectors) 3114 2756 { 3115 2757 struct ata_taskfile tf; 3116 2758 unsigned int err_mask; ··· 3119 2767 /* set up init dev params taskfile */ 3120 2768 DPRINTK("init dev params \n"); 3121 2769 3122 - ata_tf_init(ap, &tf, dev->devno); 2770 + ata_tf_init(dev, &tf); 3123 2771 tf.command = ATA_CMD_INIT_DEV_PARAMS; 3124 2772 tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; 3125 2773 tf.protocol = ATA_PROT_NODATA; 3126 2774 tf.nsect = sectors; 3127 2775 tf.device |= (heads - 1) & 0x0f; /* max head = num. of heads - 1 */ 3128 2776 3129 - err_mask = ata_exec_internal(ap, dev, &tf, DMA_NONE, NULL, 0); 2777 + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0); 3130 2778 3131 2779 DPRINTK("EXIT, err_mask=%x\n", err_mask); 3132 2780 return err_mask; ··· 3309 2957 qc->n_elem = 1; 3310 2958 qc->orig_n_elem = 1; 3311 2959 qc->buf_virt = buf; 2960 + qc->nbytes = buflen; 3312 2961 3313 2962 sg = qc->__sg; 3314 2963 sg_init_one(sg, buf, buflen); ··· 3493 3140 } 3494 3141 3495 3142 /** 3496 - * ata_poll_qc_complete - turn irq back on and finish qc 3497 - * @qc: Command to complete 3498 - * @err_mask: ATA status register content 3499 - * 3500 - * LOCKING: 3501 - * None. (grabs host lock) 3502 - */ 3503 - 3504 - void ata_poll_qc_complete(struct ata_queued_cmd *qc) 3505 - { 3506 - struct ata_port *ap = qc->ap; 3507 - unsigned long flags; 3508 - 3509 - spin_lock_irqsave(&ap->host_set->lock, flags); 3510 - ap->flags &= ~ATA_FLAG_NOINTR; 3511 - ata_irq_on(ap); 3512 - ata_qc_complete(qc); 3513 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 3514 - } 3515 - 3516 - /** 3517 - * ata_pio_poll - poll using PIO, depending on current state 3518 - * @ap: the target ata_port 3519 - * 3520 - * LOCKING: 3521 - * None. 
(executing in kernel thread context) 3522 - * 3523 - * RETURNS: 3524 - * timeout value to use 3525 - */ 3526 - 3527 - static unsigned long ata_pio_poll(struct ata_port *ap) 3528 - { 3529 - struct ata_queued_cmd *qc; 3530 - u8 status; 3531 - unsigned int poll_state = HSM_ST_UNKNOWN; 3532 - unsigned int reg_state = HSM_ST_UNKNOWN; 3533 - 3534 - qc = ata_qc_from_tag(ap, ap->active_tag); 3535 - WARN_ON(qc == NULL); 3536 - 3537 - switch (ap->hsm_task_state) { 3538 - case HSM_ST: 3539 - case HSM_ST_POLL: 3540 - poll_state = HSM_ST_POLL; 3541 - reg_state = HSM_ST; 3542 - break; 3543 - case HSM_ST_LAST: 3544 - case HSM_ST_LAST_POLL: 3545 - poll_state = HSM_ST_LAST_POLL; 3546 - reg_state = HSM_ST_LAST; 3547 - break; 3548 - default: 3549 - BUG(); 3550 - break; 3551 - } 3552 - 3553 - status = ata_chk_status(ap); 3554 - if (status & ATA_BUSY) { 3555 - if (time_after(jiffies, ap->pio_task_timeout)) { 3556 - qc->err_mask |= AC_ERR_TIMEOUT; 3557 - ap->hsm_task_state = HSM_ST_TMOUT; 3558 - return 0; 3559 - } 3560 - ap->hsm_task_state = poll_state; 3561 - return ATA_SHORT_PAUSE; 3562 - } 3563 - 3564 - ap->hsm_task_state = reg_state; 3565 - return 0; 3566 - } 3567 - 3568 - /** 3569 - * ata_pio_complete - check if drive is busy or idle 3570 - * @ap: the target ata_port 3571 - * 3572 - * LOCKING: 3573 - * None. (executing in kernel thread context) 3574 - * 3575 - * RETURNS: 3576 - * Non-zero if qc completed, zero otherwise. 3577 - */ 3578 - 3579 - static int ata_pio_complete (struct ata_port *ap) 3580 - { 3581 - struct ata_queued_cmd *qc; 3582 - u8 drv_stat; 3583 - 3584 - /* 3585 - * This is purely heuristic. This is a fast path. Sometimes when 3586 - * we enter, BSY will be cleared in a chk-status or two. If not, 3587 - * the drive is probably seeking or something. Snooze for a couple 3588 - * msecs, then chk-status again. If still busy, fall back to 3589 - * HSM_ST_POLL state. 
3590 - */ 3591 - drv_stat = ata_busy_wait(ap, ATA_BUSY, 10); 3592 - if (drv_stat & ATA_BUSY) { 3593 - msleep(2); 3594 - drv_stat = ata_busy_wait(ap, ATA_BUSY, 10); 3595 - if (drv_stat & ATA_BUSY) { 3596 - ap->hsm_task_state = HSM_ST_LAST_POLL; 3597 - ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO; 3598 - return 0; 3599 - } 3600 - } 3601 - 3602 - qc = ata_qc_from_tag(ap, ap->active_tag); 3603 - WARN_ON(qc == NULL); 3604 - 3605 - drv_stat = ata_wait_idle(ap); 3606 - if (!ata_ok(drv_stat)) { 3607 - qc->err_mask |= __ac_err_mask(drv_stat); 3608 - ap->hsm_task_state = HSM_ST_ERR; 3609 - return 0; 3610 - } 3611 - 3612 - ap->hsm_task_state = HSM_ST_IDLE; 3613 - 3614 - WARN_ON(qc->err_mask); 3615 - ata_poll_qc_complete(qc); 3616 - 3617 - /* another command may start at this point */ 3618 - 3619 - return 1; 3620 - } 3621 - 3622 - 3623 - /** 3624 3143 * swap_buf_le16 - swap halves of 16-bit words in place 3625 3144 * @buf: Buffer to swap 3626 3145 * @buf_words: Number of 16-bit words in buffer. ··· 3516 3291 3517 3292 /** 3518 3293 * ata_mmio_data_xfer - Transfer data by MMIO 3519 - * @ap: port to read/write 3294 + * @adev: device for this I/O 3520 3295 * @buf: data buffer 3521 3296 * @buflen: buffer length 3522 3297 * @write_data: read/write ··· 3527 3302 * Inherited from caller. 
3528 3303 */ 3529 3304 3530 - static void ata_mmio_data_xfer(struct ata_port *ap, unsigned char *buf, 3531 - unsigned int buflen, int write_data) 3305 + void ata_mmio_data_xfer(struct ata_device *adev, unsigned char *buf, 3306 + unsigned int buflen, int write_data) 3532 3307 { 3308 + struct ata_port *ap = adev->ap; 3533 3309 unsigned int i; 3534 3310 unsigned int words = buflen >> 1; 3535 3311 u16 *buf16 = (u16 *) buf; ··· 3562 3336 3563 3337 /** 3564 3338 * ata_pio_data_xfer - Transfer data by PIO 3565 - * @ap: port to read/write 3339 + * @adev: device to target 3566 3340 * @buf: data buffer 3567 3341 * @buflen: buffer length 3568 3342 * @write_data: read/write ··· 3573 3347 * Inherited from caller. 3574 3348 */ 3575 3349 3576 - static void ata_pio_data_xfer(struct ata_port *ap, unsigned char *buf, 3577 - unsigned int buflen, int write_data) 3350 + void ata_pio_data_xfer(struct ata_device *adev, unsigned char *buf, 3351 + unsigned int buflen, int write_data) 3578 3352 { 3353 + struct ata_port *ap = adev->ap; 3579 3354 unsigned int words = buflen >> 1; 3580 3355 3581 3356 /* Transfer multiple of 2 bytes */ ··· 3601 3374 } 3602 3375 3603 3376 /** 3604 - * ata_data_xfer - Transfer data from/to the data register. 3605 - * @ap: port to read/write 3377 + * ata_pio_data_xfer_noirq - Transfer data by PIO 3378 + * @adev: device to target 3606 3379 * @buf: data buffer 3607 3380 * @buflen: buffer length 3608 - * @do_write: read/write 3381 + * @write_data: read/write 3609 3382 * 3610 - * Transfer data from/to the device data register. 3383 + * Transfer data from/to the device data register by PIO. Do the 3384 + * transfer with interrupts disabled. 3611 3385 * 3612 3386 * LOCKING: 3613 3387 * Inherited from caller. 
3614 3388 */ 3615 3389 3616 - static void ata_data_xfer(struct ata_port *ap, unsigned char *buf, 3617 - unsigned int buflen, int do_write) 3390 + void ata_pio_data_xfer_noirq(struct ata_device *adev, unsigned char *buf, 3391 + unsigned int buflen, int write_data) 3618 3392 { 3619 - /* Make the crap hardware pay the costs not the good stuff */ 3620 - if (unlikely(ap->flags & ATA_FLAG_IRQ_MASK)) { 3621 - unsigned long flags; 3622 - local_irq_save(flags); 3623 - if (ap->flags & ATA_FLAG_MMIO) 3624 - ata_mmio_data_xfer(ap, buf, buflen, do_write); 3625 - else 3626 - ata_pio_data_xfer(ap, buf, buflen, do_write); 3627 - local_irq_restore(flags); 3628 - } else { 3629 - if (ap->flags & ATA_FLAG_MMIO) 3630 - ata_mmio_data_xfer(ap, buf, buflen, do_write); 3631 - else 3632 - ata_pio_data_xfer(ap, buf, buflen, do_write); 3633 - } 3393 + unsigned long flags; 3394 + local_irq_save(flags); 3395 + ata_pio_data_xfer(adev, buf, buflen, write_data); 3396 + local_irq_restore(flags); 3634 3397 } 3398 + 3635 3399 3636 3400 /** 3637 3401 * ata_pio_sector - Transfer ATA_SECT_SIZE (512 bytes) of data. ··· 3653 3435 page = nth_page(page, (offset >> PAGE_SHIFT)); 3654 3436 offset %= PAGE_SIZE; 3655 3437 3656 - buf = kmap(page) + offset; 3438 + DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? 
"write" : "read"); 3439 + 3440 + if (PageHighMem(page)) { 3441 + unsigned long flags; 3442 + 3443 + /* FIXME: use a bounce buffer */ 3444 + local_irq_save(flags); 3445 + buf = kmap_atomic(page, KM_IRQ0); 3446 + 3447 + /* do the actual data transfer */ 3448 + ap->ops->data_xfer(qc->dev, buf + offset, ATA_SECT_SIZE, do_write); 3449 + 3450 + kunmap_atomic(buf, KM_IRQ0); 3451 + local_irq_restore(flags); 3452 + } else { 3453 + buf = page_address(page); 3454 + ap->ops->data_xfer(qc->dev, buf + offset, ATA_SECT_SIZE, do_write); 3455 + } 3657 3456 3658 3457 qc->cursect++; 3659 3458 qc->cursg_ofs++; ··· 3679 3444 qc->cursg++; 3680 3445 qc->cursg_ofs = 0; 3681 3446 } 3447 + } 3682 3448 3683 - DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read"); 3449 + /** 3450 + * ata_pio_sectors - Transfer one or many 512-byte sectors. 3451 + * @qc: Command on going 3452 + * 3453 + * Transfer one or many ATA_SECT_SIZE of data from/to the 3454 + * ATA device for the DRQ request. 3455 + * 3456 + * LOCKING: 3457 + * Inherited from caller. 3458 + */ 3684 3459 3685 - /* do the actual data transfer */ 3686 - do_write = (qc->tf.flags & ATA_TFLAG_WRITE); 3687 - ata_data_xfer(ap, buf, ATA_SECT_SIZE, do_write); 3460 + static void ata_pio_sectors(struct ata_queued_cmd *qc) 3461 + { 3462 + if (is_multi_taskfile(&qc->tf)) { 3463 + /* READ/WRITE MULTIPLE */ 3464 + unsigned int nsect; 3688 3465 3689 - kunmap(page); 3466 + WARN_ON(qc->dev->multi_count == 0); 3467 + 3468 + nsect = min(qc->nsect - qc->cursect, qc->dev->multi_count); 3469 + while (nsect--) 3470 + ata_pio_sector(qc); 3471 + } else 3472 + ata_pio_sector(qc); 3473 + } 3474 + 3475 + /** 3476 + * atapi_send_cdb - Write CDB bytes to hardware 3477 + * @ap: Port to which ATAPI device is attached. 3478 + * @qc: Taskfile currently active 3479 + * 3480 + * When device has indicated its readiness to accept 3481 + * a CDB, this function is called. Send the CDB. 3482 + * 3483 + * LOCKING: 3484 + * caller. 
3485 + */ 3486 + 3487 + static void atapi_send_cdb(struct ata_port *ap, struct ata_queued_cmd *qc) 3488 + { 3489 + /* send SCSI cdb */ 3490 + DPRINTK("send cdb\n"); 3491 + WARN_ON(qc->dev->cdb_len < 12); 3492 + 3493 + ap->ops->data_xfer(qc->dev, qc->cdb, qc->dev->cdb_len, 1); 3494 + ata_altstatus(ap); /* flush */ 3495 + 3496 + switch (qc->tf.protocol) { 3497 + case ATA_PROT_ATAPI: 3498 + ap->hsm_task_state = HSM_ST; 3499 + break; 3500 + case ATA_PROT_ATAPI_NODATA: 3501 + ap->hsm_task_state = HSM_ST_LAST; 3502 + break; 3503 + case ATA_PROT_ATAPI_DMA: 3504 + ap->hsm_task_state = HSM_ST_LAST; 3505 + /* initiate bmdma */ 3506 + ap->ops->bmdma_start(qc); 3507 + break; 3508 + } 3690 3509 } 3691 3510 3692 3511 /** ··· 3781 3492 unsigned int i; 3782 3493 3783 3494 if (words) /* warning if bytes > 1 */ 3784 - printk(KERN_WARNING "ata%u: %u bytes trailing data\n", 3785 - ap->id, bytes); 3495 + ata_dev_printk(qc->dev, KERN_WARNING, 3496 + "%u bytes trailing data\n", bytes); 3786 3497 3787 3498 for (i = 0; i < words; i++) 3788 - ata_data_xfer(ap, (unsigned char*)pad_buf, 2, do_write); 3499 + ap->ops->data_xfer(qc->dev, (unsigned char*)pad_buf, 2, do_write); 3789 3500 3790 3501 ap->hsm_task_state = HSM_ST_LAST; 3791 3502 return; ··· 3806 3517 /* don't cross page boundaries */ 3807 3518 count = min(count, (unsigned int)PAGE_SIZE - offset); 3808 3519 3809 - buf = kmap(page) + offset; 3520 + DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? 
"write" : "read"); 3521 + 3522 + if (PageHighMem(page)) { 3523 + unsigned long flags; 3524 + 3525 + /* FIXME: use bounce buffer */ 3526 + local_irq_save(flags); 3527 + buf = kmap_atomic(page, KM_IRQ0); 3528 + 3529 + /* do the actual data transfer */ 3530 + ap->ops->data_xfer(qc->dev, buf + offset, count, do_write); 3531 + 3532 + kunmap_atomic(buf, KM_IRQ0); 3533 + local_irq_restore(flags); 3534 + } else { 3535 + buf = page_address(page); 3536 + ap->ops->data_xfer(qc->dev, buf + offset, count, do_write); 3537 + } 3810 3538 3811 3539 bytes -= count; 3812 3540 qc->curbytes += count; ··· 3833 3527 qc->cursg++; 3834 3528 qc->cursg_ofs = 0; 3835 3529 } 3836 - 3837 - DPRINTK("data %s\n", qc->tf.flags & ATA_TFLAG_WRITE ? "write" : "read"); 3838 - 3839 - /* do the actual data transfer */ 3840 - ata_data_xfer(ap, buf, count, do_write); 3841 - 3842 - kunmap(page); 3843 3530 3844 3531 if (bytes) 3845 3532 goto next_sg; ··· 3855 3556 unsigned int ireason, bc_lo, bc_hi, bytes; 3856 3557 int i_write, do_write = (qc->tf.flags & ATA_TFLAG_WRITE) ? 1 : 0; 3857 3558 3858 - ap->ops->tf_read(ap, &qc->tf); 3859 - ireason = qc->tf.nsect; 3860 - bc_lo = qc->tf.lbam; 3861 - bc_hi = qc->tf.lbah; 3559 + /* Abuse qc->result_tf for temp storage of intermediate TF 3560 + * here to save some kernel stack usage. 3561 + * For normal completion, qc->result_tf is not relevant. For 3562 + * error, qc->result_tf is later overwritten by ata_qc_complete(). 3563 + * So, the correctness of qc->result_tf is not affected. 
3564 + */ 3565 + ap->ops->tf_read(ap, &qc->result_tf); 3566 + ireason = qc->result_tf.nsect; 3567 + bc_lo = qc->result_tf.lbam; 3568 + bc_hi = qc->result_tf.lbah; 3862 3569 bytes = (bc_hi << 8) | bc_lo; 3863 3570 3864 3571 /* shall be cleared to zero, indicating xfer of data */ ··· 3876 3571 if (do_write != i_write) 3877 3572 goto err_out; 3878 3573 3574 + VPRINTK("ata%u: xfering %d bytes\n", ap->id, bytes); 3575 + 3879 3576 __atapi_pio_bytes(qc, bytes); 3880 3577 3881 3578 return; 3882 3579 3883 3580 err_out: 3884 - printk(KERN_INFO "ata%u: dev %u: ATAPI check failed\n", 3885 - ap->id, dev->devno); 3581 + ata_dev_printk(dev, KERN_INFO, "ATAPI check failed\n"); 3886 3582 qc->err_mask |= AC_ERR_HSM; 3887 3583 ap->hsm_task_state = HSM_ST_ERR; 3888 3584 } 3889 3585 3890 3586 /** 3891 - * ata_pio_block - start PIO on a block 3587 + * ata_hsm_ok_in_wq - Check if the qc can be handled in the workqueue. 3892 3588 * @ap: the target ata_port 3589 + * @qc: qc on going 3893 3590 * 3894 - * LOCKING: 3895 - * None. (executing in kernel thread context) 3591 + * RETURNS: 3592 + * 1 if ok in workqueue, 0 otherwise. 3896 3593 */ 3897 3594 3898 - static void ata_pio_block(struct ata_port *ap) 3595 + static inline int ata_hsm_ok_in_wq(struct ata_port *ap, struct ata_queued_cmd *qc) 3899 3596 { 3900 - struct ata_queued_cmd *qc; 3597 + if (qc->tf.flags & ATA_TFLAG_POLLING) 3598 + return 1; 3599 + 3600 + if (ap->hsm_task_state == HSM_ST_FIRST) { 3601 + if (qc->tf.protocol == ATA_PROT_PIO && 3602 + (qc->tf.flags & ATA_TFLAG_WRITE)) 3603 + return 1; 3604 + 3605 + if (is_atapi_taskfile(&qc->tf) && 3606 + !(qc->dev->flags & ATA_DFLAG_CDB_INTR)) 3607 + return 1; 3608 + } 3609 + 3610 + return 0; 3611 + } 3612 + 3613 + /** 3614 + * ata_hsm_qc_complete - finish a qc running on standard HSM 3615 + * @qc: Command to complete 3616 + * @in_wq: 1 if called from workqueue, 0 otherwise 3617 + * 3618 + * Finish @qc which is running on standard HSM. 
3619 + * 3620 + * LOCKING: 3621 + * If @in_wq is zero, spin_lock_irqsave(host_set lock). 3622 + * Otherwise, none on entry and grabs host lock. 3623 + */ 3624 + static void ata_hsm_qc_complete(struct ata_queued_cmd *qc, int in_wq) 3625 + { 3626 + struct ata_port *ap = qc->ap; 3627 + unsigned long flags; 3628 + 3629 + if (ap->ops->error_handler) { 3630 + if (in_wq) { 3631 + spin_lock_irqsave(ap->lock, flags); 3632 + 3633 + /* EH might have kicked in while host_set lock 3634 + * is released. 3635 + */ 3636 + qc = ata_qc_from_tag(ap, qc->tag); 3637 + if (qc) { 3638 + if (likely(!(qc->err_mask & AC_ERR_HSM))) { 3639 + ata_irq_on(ap); 3640 + ata_qc_complete(qc); 3641 + } else 3642 + ata_port_freeze(ap); 3643 + } 3644 + 3645 + spin_unlock_irqrestore(ap->lock, flags); 3646 + } else { 3647 + if (likely(!(qc->err_mask & AC_ERR_HSM))) 3648 + ata_qc_complete(qc); 3649 + else 3650 + ata_port_freeze(ap); 3651 + } 3652 + } else { 3653 + if (in_wq) { 3654 + spin_lock_irqsave(ap->lock, flags); 3655 + ata_irq_on(ap); 3656 + ata_qc_complete(qc); 3657 + spin_unlock_irqrestore(ap->lock, flags); 3658 + } else 3659 + ata_qc_complete(qc); 3660 + } 3661 + 3662 + ata_altstatus(ap); /* flush */ 3663 + } 3664 + 3665 + /** 3666 + * ata_hsm_move - move the HSM to the next state. 3667 + * @ap: the target ata_port 3668 + * @qc: qc on going 3669 + * @status: current device status 3670 + * @in_wq: 1 if called from workqueue, 0 otherwise 3671 + * 3672 + * RETURNS: 3673 + * 1 when poll next status needed, 0 otherwise. 3674 + */ 3675 + int ata_hsm_move(struct ata_port *ap, struct ata_queued_cmd *qc, 3676 + u8 status, int in_wq) 3677 + { 3678 + unsigned long flags = 0; 3679 + int poll_next; 3680 + 3681 + WARN_ON((qc->flags & ATA_QCFLAG_ACTIVE) == 0); 3682 + 3683 + /* Make sure ata_qc_issue_prot() does not throw things 3684 + * like DMA polling into the workqueue. Notice that 3685 + * in_wq is not equivalent to (qc->tf.flags & ATA_TFLAG_POLLING). 
3686 + */ 3687 + WARN_ON(in_wq != ata_hsm_ok_in_wq(ap, qc)); 3688 + 3689 + fsm_start: 3690 + DPRINTK("ata%u: protocol %d task_state %d (dev_stat 0x%X)\n", 3691 + ap->id, qc->tf.protocol, ap->hsm_task_state, status); 3692 + 3693 + switch (ap->hsm_task_state) { 3694 + case HSM_ST_FIRST: 3695 + /* Send first data block or PACKET CDB */ 3696 + 3697 + /* If polling, we will stay in the work queue after 3698 + * sending the data. Otherwise, interrupt handler 3699 + * takes over after sending the data. 3700 + */ 3701 + poll_next = (qc->tf.flags & ATA_TFLAG_POLLING); 3702 + 3703 + /* check device status */ 3704 + if (unlikely((status & ATA_DRQ) == 0)) { 3705 + /* handle BSY=0, DRQ=0 as error */ 3706 + if (likely(status & (ATA_ERR | ATA_DF))) 3707 + /* device stops HSM for abort/error */ 3708 + qc->err_mask |= AC_ERR_DEV; 3709 + else 3710 + /* HSM violation. Let EH handle this */ 3711 + qc->err_mask |= AC_ERR_HSM; 3712 + 3713 + ap->hsm_task_state = HSM_ST_ERR; 3714 + goto fsm_start; 3715 + } 3716 + 3717 + /* Device should not ask for data transfer (DRQ=1) 3718 + * when it finds something wrong. 3719 + * We ignore DRQ here and stop the HSM by 3720 + * changing hsm_task_state to HSM_ST_ERR and 3721 + * let the EH abort the command or reset the device. 3722 + */ 3723 + if (unlikely(status & (ATA_ERR | ATA_DF))) { 3724 + printk(KERN_WARNING "ata%d: DRQ=1 with device error, dev_stat 0x%X\n", 3725 + ap->id, status); 3726 + qc->err_mask |= AC_ERR_HSM; 3727 + ap->hsm_task_state = HSM_ST_ERR; 3728 + goto fsm_start; 3729 + } 3730 + 3731 + /* Send the CDB (atapi) or the first data block (ata pio out). 3732 + * During the state transition, interrupt handler shouldn't 3733 + * be invoked before the data transfer is complete and 3734 + * hsm_task_state is changed. Hence, the following locking. 3735 + */ 3736 + if (in_wq) 3737 + spin_lock_irqsave(ap->lock, flags); 3738 + 3739 + if (qc->tf.protocol == ATA_PROT_PIO) { 3740 + /* PIO data out protocol. 3741 + * send first data block. 
3742 + */ 3743 + 3744 + /* ata_pio_sectors() might change the state 3745 + * to HSM_ST_LAST. so, the state is changed here 3746 + * before ata_pio_sectors(). 3747 + */ 3748 + ap->hsm_task_state = HSM_ST; 3749 + ata_pio_sectors(qc); 3750 + ata_altstatus(ap); /* flush */ 3751 + } else 3752 + /* send CDB */ 3753 + atapi_send_cdb(ap, qc); 3754 + 3755 + if (in_wq) 3756 + spin_unlock_irqrestore(ap->lock, flags); 3757 + 3758 + /* if polling, ata_pio_task() handles the rest. 3759 + * otherwise, interrupt handler takes over from here. 3760 + */ 3761 + break; 3762 + 3763 + case HSM_ST: 3764 + /* complete command or read/write the data register */ 3765 + if (qc->tf.protocol == ATA_PROT_ATAPI) { 3766 + /* ATAPI PIO protocol */ 3767 + if ((status & ATA_DRQ) == 0) { 3768 + /* No more data to transfer or device error. 3769 + * Device error will be tagged in HSM_ST_LAST. 3770 + */ 3771 + ap->hsm_task_state = HSM_ST_LAST; 3772 + goto fsm_start; 3773 + } 3774 + 3775 + /* Device should not ask for data transfer (DRQ=1) 3776 + * when it finds something wrong. 3777 + * We ignore DRQ here and stop the HSM by 3778 + * changing hsm_task_state to HSM_ST_ERR and 3779 + * let the EH abort the command or reset the device. 3780 + */ 3781 + if (unlikely(status & (ATA_ERR | ATA_DF))) { 3782 + printk(KERN_WARNING "ata%d: DRQ=1 with device error, dev_stat 0x%X\n", 3783 + ap->id, status); 3784 + qc->err_mask |= AC_ERR_HSM; 3785 + ap->hsm_task_state = HSM_ST_ERR; 3786 + goto fsm_start; 3787 + } 3788 + 3789 + atapi_pio_bytes(qc); 3790 + 3791 + if (unlikely(ap->hsm_task_state == HSM_ST_ERR)) 3792 + /* bad ireason reported by device */ 3793 + goto fsm_start; 3794 + 3795 + } else { 3796 + /* ATA PIO protocol */ 3797 + if (unlikely((status & ATA_DRQ) == 0)) { 3798 + /* handle BSY=0, DRQ=0 as error */ 3799 + if (likely(status & (ATA_ERR | ATA_DF))) 3800 + /* device stops HSM for abort/error */ 3801 + qc->err_mask |= AC_ERR_DEV; 3802 + else 3803 + /* HSM violation. 
Let EH handle this */ 3804 + qc->err_mask |= AC_ERR_HSM; 3805 + 3806 + ap->hsm_task_state = HSM_ST_ERR; 3807 + goto fsm_start; 3808 + } 3809 + 3810 + /* For PIO reads, some devices may ask for 3811 + * data transfer (DRQ=1) along with ERR=1. 3812 + * We respect DRQ here and transfer one 3813 + * block of junk data before changing the 3814 + * hsm_task_state to HSM_ST_ERR. 3815 + * 3816 + * For PIO writes, ERR=1 DRQ=1 doesn't make 3817 + * sense since the data block has been 3818 + * transferred to the device. 3819 + */ 3820 + if (unlikely(status & (ATA_ERR | ATA_DF))) { 3821 + /* data might be corrupted */ 3822 + qc->err_mask |= AC_ERR_DEV; 3823 + 3824 + if (!(qc->tf.flags & ATA_TFLAG_WRITE)) { 3825 + ata_pio_sectors(qc); 3826 + ata_altstatus(ap); 3827 + status = ata_wait_idle(ap); 3828 + } 3829 + 3830 + if (status & (ATA_BUSY | ATA_DRQ)) 3831 + qc->err_mask |= AC_ERR_HSM; 3832 + 3833 + /* ata_pio_sectors() might change the 3834 + * state to HSM_ST_LAST. so, the state 3835 + * is changed after ata_pio_sectors(). 
3836 + */ 3837 + ap->hsm_task_state = HSM_ST_ERR; 3838 + goto fsm_start; 3839 + } 3840 + 3841 + ata_pio_sectors(qc); 3842 + 3843 + if (ap->hsm_task_state == HSM_ST_LAST && 3844 + (!(qc->tf.flags & ATA_TFLAG_WRITE))) { 3845 + /* all data read */ 3846 + ata_altstatus(ap); 3847 + status = ata_wait_idle(ap); 3848 + goto fsm_start; 3849 + } 3850 + } 3851 + 3852 + ata_altstatus(ap); /* flush */ 3853 + poll_next = 1; 3854 + break; 3855 + 3856 + case HSM_ST_LAST: 3857 + if (unlikely(!ata_ok(status))) { 3858 + qc->err_mask |= __ac_err_mask(status); 3859 + ap->hsm_task_state = HSM_ST_ERR; 3860 + goto fsm_start; 3861 + } 3862 + 3863 + /* no more data to transfer */ 3864 + DPRINTK("ata%u: dev %u command complete, drv_stat 0x%x\n", 3865 + ap->id, qc->dev->devno, status); 3866 + 3867 + WARN_ON(qc->err_mask); 3868 + 3869 + ap->hsm_task_state = HSM_ST_IDLE; 3870 + 3871 + /* complete taskfile transaction */ 3872 + ata_hsm_qc_complete(qc, in_wq); 3873 + 3874 + poll_next = 0; 3875 + break; 3876 + 3877 + case HSM_ST_ERR: 3878 + /* make sure qc->err_mask is available to 3879 + * know what's wrong and recover 3880 + */ 3881 + WARN_ON(qc->err_mask == 0); 3882 + 3883 + ap->hsm_task_state = HSM_ST_IDLE; 3884 + 3885 + /* complete taskfile transaction */ 3886 + ata_hsm_qc_complete(qc, in_wq); 3887 + 3888 + poll_next = 0; 3889 + break; 3890 + default: 3891 + poll_next = 0; 3892 + BUG(); 3893 + } 3894 + 3895 + return poll_next; 3896 + } 3897 + 3898 + static void ata_pio_task(void *_data) 3899 + { 3900 + struct ata_queued_cmd *qc = _data; 3901 + struct ata_port *ap = qc->ap; 3901 3902 u8 status; 3903 + int poll_next; 3904 + 3905 + fsm_start: 3906 + WARN_ON(ap->hsm_task_state == HSM_ST_IDLE); 3902 3907 3903 3908 /* 3904 3909 * This is purely heuristic. This is a fast path. 3905 3910 * Sometimes when we enter, BSY will be cleared in 3906 3911 * a chk-status or two. If not, the drive is probably seeking 3907 3912 * or something. Snooze for a couple msecs, then 3908 - * chk-status again. 
If still busy, fall back to 3909 - * HSM_ST_POLL state. 3913 + * chk-status again. If still busy, queue delayed work. 3910 3914 */ 3911 3915 status = ata_busy_wait(ap, ATA_BUSY, 5); 3912 3916 if (status & ATA_BUSY) { 3913 3917 msleep(2); 3914 3918 status = ata_busy_wait(ap, ATA_BUSY, 10); 3915 3919 if (status & ATA_BUSY) { 3916 - ap->hsm_task_state = HSM_ST_POLL; 3917 - ap->pio_task_timeout = jiffies + ATA_TMOUT_PIO; 3920 + ata_port_queue_task(ap, ata_pio_task, qc, ATA_SHORT_PAUSE); 3918 3921 return; 3919 3922 } 3920 3923 } 3921 3924 3922 - qc = ata_qc_from_tag(ap, ap->active_tag); 3923 - WARN_ON(qc == NULL); 3925 + /* move the HSM */ 3926 + poll_next = ata_hsm_move(ap, qc, status, 1); 3924 3927 3925 - /* check error */ 3926 - if (status & (ATA_ERR | ATA_DF)) { 3927 - qc->err_mask |= AC_ERR_DEV; 3928 - ap->hsm_task_state = HSM_ST_ERR; 3929 - return; 3930 - } 3931 - 3932 - /* transfer data if any */ 3933 - if (is_atapi_taskfile(&qc->tf)) { 3934 - /* DRQ=0 means no more data to transfer */ 3935 - if ((status & ATA_DRQ) == 0) { 3936 - ap->hsm_task_state = HSM_ST_LAST; 3937 - return; 3938 - } 3939 - 3940 - atapi_pio_bytes(qc); 3941 - } else { 3942 - /* handle BSY=0, DRQ=0 as error */ 3943 - if ((status & ATA_DRQ) == 0) { 3944 - qc->err_mask |= AC_ERR_HSM; 3945 - ap->hsm_task_state = HSM_ST_ERR; 3946 - return; 3947 - } 3948 - 3949 - ata_pio_sector(qc); 3950 - } 3951 - 3952 - ata_altstatus(ap); /* flush */ 3953 - } 3954 - 3955 - static void ata_pio_error(struct ata_port *ap) 3956 - { 3957 - struct ata_queued_cmd *qc; 3958 - 3959 - qc = ata_qc_from_tag(ap, ap->active_tag); 3960 - WARN_ON(qc == NULL); 3961 - 3962 - if (qc->tf.command != ATA_CMD_PACKET) 3963 - printk(KERN_WARNING "ata%u: PIO error\n", ap->id); 3964 - 3965 - /* make sure qc->err_mask is available to 3966 - * know what's wrong and recover 3928 + /* another command or interrupt handler 3929 + * may be running at this point. 
3967 3930 */ 3968 - WARN_ON(qc->err_mask == 0); 3969 - 3970 - ap->hsm_task_state = HSM_ST_IDLE; 3971 - 3972 - ata_poll_qc_complete(qc); 3973 - } 3974 - 3975 - static void ata_pio_task(void *_data) 3976 - { 3977 - struct ata_port *ap = _data; 3978 - unsigned long timeout; 3979 - int qc_completed; 3980 - 3981 - fsm_start: 3982 - timeout = 0; 3983 - qc_completed = 0; 3984 - 3985 - switch (ap->hsm_task_state) { 3986 - case HSM_ST_IDLE: 3987 - return; 3988 - 3989 - case HSM_ST: 3990 - ata_pio_block(ap); 3991 - break; 3992 - 3993 - case HSM_ST_LAST: 3994 - qc_completed = ata_pio_complete(ap); 3995 - break; 3996 - 3997 - case HSM_ST_POLL: 3998 - case HSM_ST_LAST_POLL: 3999 - timeout = ata_pio_poll(ap); 4000 - break; 4001 - 4002 - case HSM_ST_TMOUT: 4003 - case HSM_ST_ERR: 4004 - ata_pio_error(ap); 4005 - return; 4006 - } 4007 - 4008 - if (timeout) 4009 - ata_port_queue_task(ap, ata_pio_task, ap, timeout); 4010 - else if (!qc_completed) 3931 + if (poll_next) 4011 3932 goto fsm_start; 4012 - } 4013 - 4014 - /** 4015 - * atapi_packet_task - Write CDB bytes to hardware 4016 - * @_data: Port to which ATAPI device is attached. 4017 - * 4018 - * When device has indicated its readiness to accept 4019 - * a CDB, this function is called. Send the CDB. 4020 - * If DMA is to be performed, exit immediately. 4021 - * Otherwise, we are in polling mode, so poll 4022 - * status under operation succeeds or fails. 
4023 - * 4024 - * LOCKING: 4025 - * Kernel thread context (may sleep) 4026 - */ 4027 - 4028 - static void atapi_packet_task(void *_data) 4029 - { 4030 - struct ata_port *ap = _data; 4031 - struct ata_queued_cmd *qc; 4032 - u8 status; 4033 - 4034 - qc = ata_qc_from_tag(ap, ap->active_tag); 4035 - WARN_ON(qc == NULL); 4036 - WARN_ON(!(qc->flags & ATA_QCFLAG_ACTIVE)); 4037 - 4038 - /* sleep-wait for BSY to clear */ 4039 - DPRINTK("busy wait\n"); 4040 - if (ata_busy_sleep(ap, ATA_TMOUT_CDB_QUICK, ATA_TMOUT_CDB)) { 4041 - qc->err_mask |= AC_ERR_TIMEOUT; 4042 - goto err_out; 4043 - } 4044 - 4045 - /* make sure DRQ is set */ 4046 - status = ata_chk_status(ap); 4047 - if ((status & (ATA_BUSY | ATA_DRQ)) != ATA_DRQ) { 4048 - qc->err_mask |= AC_ERR_HSM; 4049 - goto err_out; 4050 - } 4051 - 4052 - /* send SCSI cdb */ 4053 - DPRINTK("send cdb\n"); 4054 - WARN_ON(qc->dev->cdb_len < 12); 4055 - 4056 - if (qc->tf.protocol == ATA_PROT_ATAPI_DMA || 4057 - qc->tf.protocol == ATA_PROT_ATAPI_NODATA) { 4058 - unsigned long flags; 4059 - 4060 - /* Once we're done issuing command and kicking bmdma, 4061 - * irq handler takes over. To not lose irq, we need 4062 - * to clear NOINTR flag before sending cdb, but 4063 - * interrupt handler shouldn't be invoked before we're 4064 - * finished. Hence, the following locking. 
4065 - */ 4066 - spin_lock_irqsave(&ap->host_set->lock, flags); 4067 - ap->flags &= ~ATA_FLAG_NOINTR; 4068 - ata_data_xfer(ap, qc->cdb, qc->dev->cdb_len, 1); 4069 - ata_altstatus(ap); /* flush */ 4070 - 4071 - if (qc->tf.protocol == ATA_PROT_ATAPI_DMA) 4072 - ap->ops->bmdma_start(qc); /* initiate bmdma */ 4073 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 4074 - } else { 4075 - ata_data_xfer(ap, qc->cdb, qc->dev->cdb_len, 1); 4076 - ata_altstatus(ap); /* flush */ 4077 - 4078 - /* PIO commands are handled by polling */ 4079 - ap->hsm_task_state = HSM_ST; 4080 - ata_port_queue_task(ap, ata_pio_task, ap, 0); 4081 - } 4082 - 4083 - return; 4084 - 4085 - err_out: 4086 - ata_poll_qc_complete(qc); 4087 - } 4088 - 4089 - /** 4090 - * ata_qc_timeout - Handle timeout of queued command 4091 - * @qc: Command that timed out 4092 - * 4093 - * Some part of the kernel (currently, only the SCSI layer) 4094 - * has noticed that the active command on port @ap has not 4095 - * completed after a specified length of time. Handle this 4096 - * condition by disabling DMA (if necessary) and completing 4097 - * transactions, with error if necessary. 4098 - * 4099 - * This also handles the case of the "lost interrupt", where 4100 - * for some reason (possibly hardware bug, possibly driver bug) 4101 - * an interrupt was not delivered to the driver, even though the 4102 - * transaction completed successfully. 
4103 - * 4104 - * LOCKING: 4105 - * Inherited from SCSI layer (none, can sleep) 4106 - */ 4107 - 4108 - static void ata_qc_timeout(struct ata_queued_cmd *qc) 4109 - { 4110 - struct ata_port *ap = qc->ap; 4111 - struct ata_host_set *host_set = ap->host_set; 4112 - u8 host_stat = 0, drv_stat; 4113 - unsigned long flags; 4114 - 4115 - DPRINTK("ENTER\n"); 4116 - 4117 - ap->hsm_task_state = HSM_ST_IDLE; 4118 - 4119 - spin_lock_irqsave(&host_set->lock, flags); 4120 - 4121 - switch (qc->tf.protocol) { 4122 - 4123 - case ATA_PROT_DMA: 4124 - case ATA_PROT_ATAPI_DMA: 4125 - host_stat = ap->ops->bmdma_status(ap); 4126 - 4127 - /* before we do anything else, clear DMA-Start bit */ 4128 - ap->ops->bmdma_stop(qc); 4129 - 4130 - /* fall through */ 4131 - 4132 - default: 4133 - ata_altstatus(ap); 4134 - drv_stat = ata_chk_status(ap); 4135 - 4136 - /* ack bmdma irq events */ 4137 - ap->ops->irq_clear(ap); 4138 - 4139 - printk(KERN_ERR "ata%u: command 0x%x timeout, stat 0x%x host_stat 0x%x\n", 4140 - ap->id, qc->tf.command, drv_stat, host_stat); 4141 - 4142 - /* complete taskfile transaction */ 4143 - qc->err_mask |= ac_err_mask(drv_stat); 4144 - break; 4145 - } 4146 - 4147 - spin_unlock_irqrestore(&host_set->lock, flags); 4148 - 4149 - ata_eh_qc_complete(qc); 4150 - 4151 - DPRINTK("EXIT\n"); 4152 - } 4153 - 4154 - /** 4155 - * ata_eng_timeout - Handle timeout of queued command 4156 - * @ap: Port on which timed-out command is active 4157 - * 4158 - * Some part of the kernel (currently, only the SCSI layer) 4159 - * has noticed that the active command on port @ap has not 4160 - * completed after a specified length of time. Handle this 4161 - * condition by disabling DMA (if necessary) and completing 4162 - * transactions, with error if necessary. 
4163 - * 4164 - * This also handles the case of the "lost interrupt", where 4165 - * for some reason (possibly hardware bug, possibly driver bug) 4166 - * an interrupt was not delivered to the driver, even though the 4167 - * transaction completed successfully. 4168 - * 4169 - * LOCKING: 4170 - * Inherited from SCSI layer (none, can sleep) 4171 - */ 4172 - 4173 - void ata_eng_timeout(struct ata_port *ap) 4174 - { 4175 - DPRINTK("ENTER\n"); 4176 - 4177 - ata_qc_timeout(ata_qc_from_tag(ap, ap->active_tag)); 4178 - 4179 - DPRINTK("EXIT\n"); 4180 3933 } 4181 3934 4182 3935 /** ··· 4251 3888 struct ata_queued_cmd *qc = NULL; 4252 3889 unsigned int i; 4253 3890 4254 - for (i = 0; i < ATA_MAX_QUEUE; i++) 4255 - if (!test_and_set_bit(i, &ap->qactive)) { 4256 - qc = ata_qc_from_tag(ap, i); 3891 + /* no command while frozen */ 3892 + if (unlikely(ap->flags & ATA_FLAG_FROZEN)) 3893 + return NULL; 3894 + 3895 + /* the last tag is reserved for internal command. */ 3896 + for (i = 0; i < ATA_MAX_QUEUE - 1; i++) 3897 + if (!test_and_set_bit(i, &ap->qc_allocated)) { 3898 + qc = __ata_qc_from_tag(ap, i); 4257 3899 break; 4258 3900 } 4259 3901 ··· 4270 3902 4271 3903 /** 4272 3904 * ata_qc_new_init - Request an available ATA command, and initialize it 4273 - * @ap: Port associated with device @dev 4274 3905 * @dev: Device from whom we request an available command structure 4275 3906 * 4276 3907 * LOCKING: 4277 3908 * None. 
4278 3909 */ 4279 3910 4280 - struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap, 4281 - struct ata_device *dev) 3911 + struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev) 4282 3912 { 3913 + struct ata_port *ap = dev->ap; 4283 3914 struct ata_queued_cmd *qc; 4284 3915 4285 3916 qc = ata_qc_new(ap); ··· 4313 3946 qc->flags = 0; 4314 3947 tag = qc->tag; 4315 3948 if (likely(ata_tag_valid(tag))) { 4316 - if (tag == ap->active_tag) 4317 - ap->active_tag = ATA_TAG_POISON; 4318 3949 qc->tag = ATA_TAG_POISON; 4319 - clear_bit(tag, &ap->qactive); 3950 + clear_bit(tag, &ap->qc_allocated); 4320 3951 } 4321 3952 } 4322 3953 4323 3954 void __ata_qc_complete(struct ata_queued_cmd *qc) 4324 3955 { 3956 + struct ata_port *ap = qc->ap; 3957 + 4325 3958 WARN_ON(qc == NULL); /* ata_qc_from_tag _might_ return NULL */ 4326 3959 WARN_ON(!(qc->flags & ATA_QCFLAG_ACTIVE)); 4327 3960 4328 3961 if (likely(qc->flags & ATA_QCFLAG_DMAMAP)) 4329 3962 ata_sg_clean(qc); 4330 3963 3964 + /* command should be marked inactive atomically with qc completion */ 3965 + if (qc->tf.protocol == ATA_PROT_NCQ) 3966 + ap->sactive &= ~(1 << qc->tag); 3967 + else 3968 + ap->active_tag = ATA_TAG_POISON; 3969 + 4331 3970 /* atapi: mark qc as inactive to prevent the interrupt handler 4332 3971 * from completing the command twice later, before the error handler 4333 3972 * is called. (when rc != 0 and atapi request sense is needed) 4334 3973 */ 4335 3974 qc->flags &= ~ATA_QCFLAG_ACTIVE; 3975 + ap->qc_active &= ~(1 << qc->tag); 4336 3976 4337 3977 /* call completion callback */ 4338 3978 qc->complete_fn(qc); 3979 + } 3980 + 3981 + /** 3982 + * ata_qc_complete - Complete an active ATA command 3983 + * @qc: Command to complete 3984 + * 3985 + * 3986 + * Indicate to the mid and upper layers that an ATA 3987 + * command has completed, with either an ok or not-ok status. 
3988 + * 3989 + * LOCKING: 3990 + * spin_lock_irqsave(host_set lock) 3991 + */ 3992 + void ata_qc_complete(struct ata_queued_cmd *qc) 3993 + { 3994 + struct ata_port *ap = qc->ap; 3995 + 3996 + /* XXX: New EH and old EH use different mechanisms to 3997 + * synchronize EH with regular execution path. 3998 + * 3999 + * In new EH, a failed qc is marked with ATA_QCFLAG_FAILED. 4000 + * Normal execution path is responsible for not accessing a 4001 + * failed qc. libata core enforces the rule by returning NULL 4002 + * from ata_qc_from_tag() for failed qcs. 4003 + * 4004 + * Old EH depends on ata_qc_complete() nullifying completion 4005 + * requests if ATA_QCFLAG_EH_SCHEDULED is set. Old EH does 4006 + * not synchronize with interrupt handler. Only PIO task is 4007 + * taken care of. 4008 + */ 4009 + if (ap->ops->error_handler) { 4010 + WARN_ON(ap->flags & ATA_FLAG_FROZEN); 4011 + 4012 + if (unlikely(qc->err_mask)) 4013 + qc->flags |= ATA_QCFLAG_FAILED; 4014 + 4015 + if (unlikely(qc->flags & ATA_QCFLAG_FAILED)) { 4016 + if (!ata_tag_internal(qc->tag)) { 4017 + /* always fill result TF for failed qc */ 4018 + ap->ops->tf_read(ap, &qc->result_tf); 4019 + ata_qc_schedule_eh(qc); 4020 + return; 4021 + } 4022 + } 4023 + 4024 + /* read result TF if requested */ 4025 + if (qc->flags & ATA_QCFLAG_RESULT_TF) 4026 + ap->ops->tf_read(ap, &qc->result_tf); 4027 + 4028 + __ata_qc_complete(qc); 4029 + } else { 4030 + if (qc->flags & ATA_QCFLAG_EH_SCHEDULED) 4031 + return; 4032 + 4033 + /* read result TF if failed or requested */ 4034 + if (qc->err_mask || qc->flags & ATA_QCFLAG_RESULT_TF) 4035 + ap->ops->tf_read(ap, &qc->result_tf); 4036 + 4037 + __ata_qc_complete(qc); 4038 + } 4039 + } 4040 + 4041 + /** 4042 + * ata_qc_complete_multiple - Complete multiple qcs successfully 4043 + * @ap: port in question 4044 + * @qc_active: new qc_active mask 4045 + * @finish_qc: LLDD callback invoked before completing a qc 4046 + * 4047 + * Complete in-flight commands. 
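The bookkeeping introduced above keeps the set of in-flight tags in a per-port bitmask (ap->qc_active), so a controller that reports a new active mask lets the driver complete everything that finished with one XOR. The sketch below condenses that scheme outside the kernel; complete_multiple() and count_finish() are illustrative stand-ins, not libata API.

```c
#include <stdint.h>

/* Stand-in for ata_qc_complete(): report one command finished. */
static int count_finish(int tag)
{
    (void)tag;
    return 1;
}

/* Mask-diff completion: XOR of the old and new in-flight masks
 * yields the tags that finished.  A bit set in done_mask that is
 * also set in the new mask would mean a tag became active "for
 * free", which is rejected as an illegal transition. */
static int complete_multiple(uint32_t *qc_active, uint32_t new_active,
                             int (*finish)(int tag))
{
    uint32_t done_mask = *qc_active ^ new_active;
    int nr_done = 0;
    int tag;

    if (done_mask & new_active)     /* illegal transition */
        return -1;

    for (tag = 0; tag < 32; tag++)
        if (done_mask & (1u << tag))
            nr_done += finish(tag);

    *qc_active = new_active;
    return nr_done;
}
```

With tags 0-3 in flight (mask 0x0f) and the controller reporting 0x05, the XOR is 0x0a, so tags 1 and 3 are completed and the stored mask becomes 0x05.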
This function is meant to be 4048 + * called from low-level driver's interrupt routine to complete 4049 + * requests normally. ap->qc_active and @qc_active are compared 4050 + * and commands are completed accordingly. 4051 + * 4052 + * LOCKING: 4053 + * spin_lock_irqsave(host_set lock) 4054 + * 4055 + * RETURNS: 4056 + * Number of completed commands on success, -errno otherwise. 4057 + */ 4058 + int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active, 4059 + void (*finish_qc)(struct ata_queued_cmd *)) 4060 + { 4061 + int nr_done = 0; 4062 + u32 done_mask; 4063 + int i; 4064 + 4065 + done_mask = ap->qc_active ^ qc_active; 4066 + 4067 + if (unlikely(done_mask & qc_active)) { 4068 + ata_port_printk(ap, KERN_ERR, "illegal qc_active transition " 4069 + "(%08x->%08x)\n", ap->qc_active, qc_active); 4070 + return -EINVAL; 4071 + } 4072 + 4073 + for (i = 0; i < ATA_MAX_QUEUE; i++) { 4074 + struct ata_queued_cmd *qc; 4075 + 4076 + if (!(done_mask & (1 << i))) 4077 + continue; 4078 + 4079 + if ((qc = ata_qc_from_tag(ap, i))) { 4080 + if (finish_qc) 4081 + finish_qc(qc); 4082 + ata_qc_complete(qc); 4083 + nr_done++; 4084 + } 4085 + } 4086 + 4087 + return nr_done; 4339 4088 } 4340 4089 4341 4090 static inline int ata_should_dma_map(struct ata_queued_cmd *qc) ··· 4459 3976 struct ata_port *ap = qc->ap; 4460 3977 4461 3978 switch (qc->tf.protocol) { 3979 + case ATA_PROT_NCQ: 4462 3980 case ATA_PROT_DMA: 4463 3981 case ATA_PROT_ATAPI_DMA: 4464 3982 return 1; ··· 4494 4010 { 4495 4011 struct ata_port *ap = qc->ap; 4496 4012 4497 - qc->ap->active_tag = qc->tag; 4013 + /* Make sure only one non-NCQ command is outstanding. The 4014 + * check is skipped for old EH because it reuses active qc to 
4016 + */ 4017 + WARN_ON(ap->ops->error_handler && ata_tag_valid(ap->active_tag)); 4018 + 4019 + if (qc->tf.protocol == ATA_PROT_NCQ) { 4020 + WARN_ON(ap->sactive & (1 << qc->tag)); 4021 + ap->sactive |= 1 << qc->tag; 4022 + } else { 4023 + WARN_ON(ap->sactive); 4024 + ap->active_tag = qc->tag; 4025 + } 4026 + 4498 4027 qc->flags |= ATA_QCFLAG_ACTIVE; 4028 + ap->qc_active |= 1 << qc->tag; 4499 4029 4500 4030 if (ata_should_dma_map(qc)) { 4501 4031 if (qc->flags & ATA_QCFLAG_SG) { ··· 4559 4061 { 4560 4062 struct ata_port *ap = qc->ap; 4561 4063 4064 + /* Use polling pio if the LLD doesn't handle 4065 + * interrupt driven pio and atapi CDB interrupt. 4066 + */ 4067 + if (ap->flags & ATA_FLAG_PIO_POLLING) { 4068 + switch (qc->tf.protocol) { 4069 + case ATA_PROT_PIO: 4070 + case ATA_PROT_ATAPI: 4071 + case ATA_PROT_ATAPI_NODATA: 4072 + qc->tf.flags |= ATA_TFLAG_POLLING; 4073 + break; 4074 + case ATA_PROT_ATAPI_DMA: 4075 + if (qc->dev->flags & ATA_DFLAG_CDB_INTR) 4076 + /* see ata_dma_blacklisted() */ 4077 + BUG(); 4078 + break; 4079 + default: 4080 + break; 4081 + } 4082 + } 4083 + 4084 + /* select the device */ 4562 4085 ata_dev_select(ap, qc->dev->devno, 1, 0); 4563 4086 4087 + /* start the command */ 4564 4088 switch (qc->tf.protocol) { 4565 4089 case ATA_PROT_NODATA: 4090 + if (qc->tf.flags & ATA_TFLAG_POLLING) 4091 + ata_qc_set_polling(qc); 4092 + 4566 4093 ata_tf_to_host(ap, &qc->tf); 4094 + ap->hsm_task_state = HSM_ST_LAST; 4095 + 4096 + if (qc->tf.flags & ATA_TFLAG_POLLING) 4097 + ata_port_queue_task(ap, ata_pio_task, qc, 0); 4098 + 4567 4099 break; 4568 4100 4569 4101 case ATA_PROT_DMA: 4102 + WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING); 4103 + 4570 4104 ap->ops->tf_load(ap, &qc->tf); /* load tf registers */ 4571 4105 ap->ops->bmdma_setup(qc); /* set up bmdma */ 4572 4106 ap->ops->bmdma_start(qc); /* initiate bmdma */ 4107 + ap->hsm_task_state = HSM_ST_LAST; 4573 4108 break; 4574 4109 4575 - case ATA_PROT_PIO: /* load tf registers, initiate polling pio */ 4576 
- ata_qc_set_polling(qc); 4110 + case ATA_PROT_PIO: 4111 + if (qc->tf.flags & ATA_TFLAG_POLLING) 4112 + ata_qc_set_polling(qc); 4113 + 4577 4114 ata_tf_to_host(ap, &qc->tf); 4578 - ap->hsm_task_state = HSM_ST; 4579 - ata_port_queue_task(ap, ata_pio_task, ap, 0); 4115 + 4116 + if (qc->tf.flags & ATA_TFLAG_WRITE) { 4117 + /* PIO data out protocol */ 4118 + ap->hsm_task_state = HSM_ST_FIRST; 4119 + ata_port_queue_task(ap, ata_pio_task, qc, 0); 4120 + 4121 + /* always send first data block using 4122 + * the ata_pio_task() codepath. 4123 + */ 4124 + } else { 4125 + /* PIO data in protocol */ 4126 + ap->hsm_task_state = HSM_ST; 4127 + 4128 + if (qc->tf.flags & ATA_TFLAG_POLLING) 4129 + ata_port_queue_task(ap, ata_pio_task, qc, 0); 4130 + 4131 + /* if polling, ata_pio_task() handles the rest. 4132 + * otherwise, interrupt handler takes over from here. 4133 + */ 4134 + } 4135 + 4580 4136 break; 4581 4137 4582 4138 case ATA_PROT_ATAPI: 4583 - ata_qc_set_polling(qc); 4584 - ata_tf_to_host(ap, &qc->tf); 4585 - ata_port_queue_task(ap, atapi_packet_task, ap, 0); 4586 - break; 4587 - 4588 4139 case ATA_PROT_ATAPI_NODATA: 4589 - ap->flags |= ATA_FLAG_NOINTR; 4140 + if (qc->tf.flags & ATA_TFLAG_POLLING) 4141 + ata_qc_set_polling(qc); 4142 + 4590 4143 ata_tf_to_host(ap, &qc->tf); 4591 - ata_port_queue_task(ap, atapi_packet_task, ap, 0); 4144 + 4145 + ap->hsm_task_state = HSM_ST_FIRST; 4146 + 4147 + /* send cdb by polling if no cdb interrupt */ 4148 + if ((!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) || 4149 + (qc->tf.flags & ATA_TFLAG_POLLING)) 4150 + ata_port_queue_task(ap, ata_pio_task, qc, 0); 4592 4151 break; 4593 4152 4594 4153 case ATA_PROT_ATAPI_DMA: 4595 - ap->flags |= ATA_FLAG_NOINTR; 4154 + WARN_ON(qc->tf.flags & ATA_TFLAG_POLLING); 4155 + 4596 4156 ap->ops->tf_load(ap, &qc->tf); /* load tf registers */ 4597 4157 ap->ops->bmdma_setup(qc); /* set up bmdma */ 4598 - ata_port_queue_task(ap, atapi_packet_task, ap, 0); 4158 + ap->hsm_task_state = HSM_ST_FIRST; 4159 + 4160 + /* 
send cdb by polling if no cdb interrupt */ 4161 + if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) 4162 + ata_port_queue_task(ap, ata_pio_task, qc, 0); 4599 4163 break; 4600 4164 4601 4165 default: ··· 4687 4127 inline unsigned int ata_host_intr (struct ata_port *ap, 4688 4128 struct ata_queued_cmd *qc) 4689 4129 { 4690 - u8 status, host_stat; 4130 + u8 status, host_stat = 0; 4691 4131 4692 - switch (qc->tf.protocol) { 4132 + VPRINTK("ata%u: protocol %d task_state %d\n", 4133 + ap->id, qc->tf.protocol, ap->hsm_task_state); 4693 4134 4694 - case ATA_PROT_DMA: 4695 - case ATA_PROT_ATAPI_DMA: 4696 - case ATA_PROT_ATAPI: 4697 - /* check status of DMA engine */ 4698 - host_stat = ap->ops->bmdma_status(ap); 4699 - VPRINTK("ata%u: host_stat 0x%X\n", ap->id, host_stat); 4135 + /* Check whether we are expecting an interrupt in this state */ 4136 + switch (ap->hsm_task_state) { 4137 + case HSM_ST_FIRST: 4138 + /* Some pre-ATAPI-4 devices assert INTRQ 4139 + * at this state when ready to receive CDB. 4140 + */ 4700 4141 4701 - /* if it's not our irq... */ 4702 - if (!(host_stat & ATA_DMA_INTR)) 4142 + /* Checking the ATA_DFLAG_CDB_INTR flag is enough here. 4143 + * The flag was turned on only for atapi devices. 4144 + * No need to check is_atapi_taskfile(&qc->tf) again. 
4145 + */ 4146 + if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) 4703 4147 goto idle_irq; 4704 - 4705 - /* before we do anything else, clear DMA-Start bit */ 4706 - ap->ops->bmdma_stop(qc); 4707 - 4708 - /* fall through */ 4709 - 4710 - case ATA_PROT_ATAPI_NODATA: 4711 - case ATA_PROT_NODATA: 4712 - /* check altstatus */ 4713 - status = ata_altstatus(ap); 4714 - if (status & ATA_BUSY) 4715 - goto idle_irq; 4716 - 4717 - /* check main status, clearing INTRQ */ 4718 - status = ata_chk_status(ap); 4719 - if (unlikely(status & ATA_BUSY)) 4720 - goto idle_irq; 4721 - DPRINTK("ata%u: protocol %d (dev_stat 0x%X)\n", 4722 - ap->id, qc->tf.protocol, status); 4723 - 4724 - /* ack bmdma irq events */ 4725 - ap->ops->irq_clear(ap); 4726 - 4727 - /* complete taskfile transaction */ 4728 - qc->err_mask |= ac_err_mask(status); 4729 - ata_qc_complete(qc); 4730 4148 break; 4149 + case HSM_ST_LAST: 4150 + if (qc->tf.protocol == ATA_PROT_DMA || 4151 + qc->tf.protocol == ATA_PROT_ATAPI_DMA) { 4152 + /* check status of DMA engine */ 4153 + host_stat = ap->ops->bmdma_status(ap); 4154 + VPRINTK("ata%u: host_stat 0x%X\n", ap->id, host_stat); 4731 4155 4156 + /* if it's not our irq... 
*/ 4157 + if (!(host_stat & ATA_DMA_INTR)) 4158 + goto idle_irq; 4159 + 4160 + /* before we do anything else, clear DMA-Start bit */ 4161 + ap->ops->bmdma_stop(qc); 4162 + 4163 + if (unlikely(host_stat & ATA_DMA_ERR)) { 4164 + /* error when transferring data to/from memory */ 4165 + qc->err_mask |= AC_ERR_HOST_BUS; 4166 + ap->hsm_task_state = HSM_ST_ERR; 4167 + } 4168 + } 4169 + break; 4170 + case HSM_ST: 4171 + break; 4732 4172 default: 4733 4173 goto idle_irq; 4734 4174 } 4735 4175 4176 + /* check altstatus */ 4177 + status = ata_altstatus(ap); 4178 + if (status & ATA_BUSY) 4179 + goto idle_irq; 4180 + 4181 + /* check main status, clearing INTRQ */ 4182 + status = ata_chk_status(ap); 4183 + if (unlikely(status & ATA_BUSY)) 4184 + goto idle_irq; 4185 + 4186 + /* ack bmdma irq events */ 4187 + ap->ops->irq_clear(ap); 4188 + 4189 + ata_hsm_move(ap, qc, status, 0); 4736 4190 return 1; /* irq handled */ 4737 4191 4738 4192 idle_irq: ··· 4755 4181 #ifdef ATA_IRQ_TRAP 4756 4182 if ((ap->stats.idle_irq % 1000) == 0) { 4757 4183 ata_irq_ack(ap, 0); /* debug trap */ 4758 - printk(KERN_WARNING "ata%d: irq trap\n", ap->id); 4184 + ata_port_printk(ap, KERN_WARNING, "irq trap\n"); 4759 4185 return 1; 4760 4186 } 4761 4187 #endif ··· 4793 4219 4794 4220 ap = host_set->ports[i]; 4795 4221 if (ap && 4796 - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { 4222 + !(ap->flags & ATA_FLAG_DISABLED)) { 4797 4223 struct ata_queued_cmd *qc; 4798 4224 4799 4225 qc = ata_qc_from_tag(ap, ap->active_tag); 4800 - if (qc && (!(qc->tf.ctl & ATA_NIEN)) && 4226 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING)) && 4801 4227 (qc->flags & ATA_QCFLAG_ACTIVE)) 4802 4228 handled |= ata_host_intr(ap, qc); 4803 4229 } ··· 4808 4234 return IRQ_RETVAL(handled); 4809 4235 } 4810 4236 4237 + /** 4238 + * sata_scr_valid - test whether SCRs are accessible 4239 + * @ap: ATA port to test SCR accessibility for 4240 + * 4241 + * Test whether SCRs are accessible for @ap. 
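The rewritten ata_host_intr() above gates interrupt handling on the HSM state instead of the command protocol: an IRQ is only acted on when the state machine expects one, and everything else falls through to idle_irq. A minimal sketch of that gating, with hypothetical names (irq_expected, the trimmed state enum) rather than the driver's actual types:

```c
#include <stdbool.h>

/* Trimmed mirror of the HSM states relevant to interrupt gating. */
enum hsm_state { ST_IDLE, ST_FIRST, ST_DATA, ST_LAST };

/* Should an interrupt in state @s be handled?  In ST_FIRST only
 * ATAPI devices that raise INTRQ for the CDB (ATA_DFLAG_CDB_INTR
 * analogue) are expected to interrupt; the data and completion
 * states always expect one; anything else is spurious. */
static bool irq_expected(enum hsm_state s, bool cdb_intr)
{
    switch (s) {
    case ST_FIRST:
        return cdb_intr;
    case ST_DATA:       /* PIO data phase */
    case ST_LAST:       /* command completion, incl. DMA */
        return true;
    default:            /* idle: count it and bail */
        return false;
    }
}
```

An unexpected interrupt is not an error by itself; like the idle_irq path above, a caller would just bump a counter and return "not handled".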
4242 + * 4243 + * LOCKING: 4244 + * None. 4245 + * 4246 + * RETURNS: 4247 + * 1 if SCRs are accessible, 0 otherwise. 4248 + */ 4249 + int sata_scr_valid(struct ata_port *ap) 4250 + { 4251 + return ap->cbl == ATA_CBL_SATA && ap->ops->scr_read; 4252 + } 4253 + 4254 + /** 4255 + * sata_scr_read - read SCR register of the specified port 4256 + * @ap: ATA port to read SCR for 4257 + * @reg: SCR to read 4258 + * @val: Place to store read value 4259 + * 4260 + * Read SCR register @reg of @ap into *@val. This function is 4261 + * guaranteed to succeed if the cable type of the port is SATA 4262 + * and the port implements ->scr_read. 4263 + * 4264 + * LOCKING: 4265 + * None. 4266 + * 4267 + * RETURNS: 4268 + * 0 on success, negative errno on failure. 4269 + */ 4270 + int sata_scr_read(struct ata_port *ap, int reg, u32 *val) 4271 + { 4272 + if (sata_scr_valid(ap)) { 4273 + *val = ap->ops->scr_read(ap, reg); 4274 + return 0; 4275 + } 4276 + return -EOPNOTSUPP; 4277 + } 4278 + 4279 + /** 4280 + * sata_scr_write - write SCR register of the specified port 4281 + * @ap: ATA port to write SCR for 4282 + * @reg: SCR to write 4283 + * @val: value to write 4284 + * 4285 + * Write @val to SCR register @reg of @ap. This function is 4286 + * guaranteed to succeed if the cable type of the port is SATA 4287 + * and the port implements ->scr_read. 4288 + * 4289 + * LOCKING: 4290 + * None. 4291 + * 4292 + * RETURNS: 4293 + * 0 on success, negative errno on failure. 
4294 + */ 4295 + int sata_scr_write(struct ata_port *ap, int reg, u32 val) 4296 + { 4297 + if (sata_scr_valid(ap)) { 4298 + ap->ops->scr_write(ap, reg, val); 4299 + return 0; 4300 + } 4301 + return -EOPNOTSUPP; 4302 + } 4303 + 4304 + /** 4305 + * sata_scr_write_flush - write SCR register of the specified port and flush 4306 + * @ap: ATA port to write SCR for 4307 + * @reg: SCR to write 4308 + * @val: value to write 4309 + * 4310 + * This function is identical to sata_scr_write() except that this 4311 + * function performs flush after writing to the register. 4312 + * 4313 + * LOCKING: 4314 + * None. 4315 + * 4316 + * RETURNS: 4317 + * 0 on success, negative errno on failure. 4318 + */ 4319 + int sata_scr_write_flush(struct ata_port *ap, int reg, u32 val) 4320 + { 4321 + if (sata_scr_valid(ap)) { 4322 + ap->ops->scr_write(ap, reg, val); 4323 + ap->ops->scr_read(ap, reg); 4324 + return 0; 4325 + } 4326 + return -EOPNOTSUPP; 4327 + } 4328 + 4329 + /** 4330 + * ata_port_online - test whether the given port is online 4331 + * @ap: ATA port to test 4332 + * 4333 + * Test whether @ap is online. Note that this function returns 0 4334 + * if online status of @ap cannot be obtained, so 4335 + * ata_port_online(ap) != !ata_port_offline(ap). 4336 + * 4337 + * LOCKING: 4338 + * None. 4339 + * 4340 + * RETURNS: 4341 + * 1 if the port online status is available and online. 4342 + */ 4343 + int ata_port_online(struct ata_port *ap) 4344 + { 4345 + u32 sstatus; 4346 + 4347 + if (!sata_scr_read(ap, SCR_STATUS, &sstatus) && (sstatus & 0xf) == 0x3) 4348 + return 1; 4349 + return 0; 4350 + } 4351 + 4352 + /** 4353 + * ata_port_offline - test whether the given port is offline 4354 + * @ap: ATA port to test 4355 + * 4356 + * Test whether @ap is offline. Note that this function returns 4357 + * 0 if offline status of @ap cannot be obtained, so 4358 + * ata_port_online(ap) != !ata_port_offline(ap). 4359 + * 4360 + * LOCKING: 4361 + * None. 
4362 + * 4363 + * RETURNS: 4364 + * 1 if the port offline status is available and offline. 4365 + */ 4366 + int ata_port_offline(struct ata_port *ap) 4367 + { 4368 + u32 sstatus; 4369 + 4370 + if (!sata_scr_read(ap, SCR_STATUS, &sstatus) && (sstatus & 0xf) != 0x3) 4371 + return 1; 4372 + return 0; 4373 + } 4811 4374 4812 4375 /* 4813 4376 * Execute a 'simple' command, that only consists of the opcode 'cmd' itself, 4814 4377 * without filling any other registers 4815 4378 */ 4816 - static int ata_do_simple_cmd(struct ata_port *ap, struct ata_device *dev, 4817 - u8 cmd) 4379 + static int ata_do_simple_cmd(struct ata_device *dev, u8 cmd) 4818 4380 { 4819 4381 struct ata_taskfile tf; 4820 4382 int err; 4821 4383 4822 - ata_tf_init(ap, &tf, dev->devno); 4384 + ata_tf_init(dev, &tf); 4823 4385 4824 4386 tf.command = cmd; 4825 4387 tf.flags |= ATA_TFLAG_DEVICE; 4826 4388 tf.protocol = ATA_PROT_NODATA; 4827 4389 4828 - err = ata_exec_internal(ap, dev, &tf, DMA_NONE, NULL, 0); 4390 + err = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0); 4829 4391 if (err) 4830 - printk(KERN_ERR "%s: ata command failed: %d\n", 4831 - __FUNCTION__, err); 4392 + ata_dev_printk(dev, KERN_ERR, "%s: ata command failed: %d\n", 4393 + __FUNCTION__, err); 4832 4394 4833 4395 return err; 4834 4396 } 4835 4397 4836 - static int ata_flush_cache(struct ata_port *ap, struct ata_device *dev) 4398 + static int ata_flush_cache(struct ata_device *dev) 4837 4399 { 4838 4400 u8 cmd; 4839 4401 ··· 4981 4271 else 4982 4272 cmd = ATA_CMD_FLUSH; 4983 4273 4984 - return ata_do_simple_cmd(ap, dev, cmd); 4274 + return ata_do_simple_cmd(dev, cmd); 4985 4275 } 4986 4276 4987 - static int ata_standby_drive(struct ata_port *ap, struct ata_device *dev) 4277 + static int ata_standby_drive(struct ata_device *dev) 4988 4278 { 4989 - return ata_do_simple_cmd(ap, dev, ATA_CMD_STANDBYNOW1); 4279 + return ata_do_simple_cmd(dev, ATA_CMD_STANDBYNOW1); 4990 4280 } 4991 4281 4992 - static int ata_start_drive(struct ata_port 
*ap, struct ata_device *dev) 4282 + static int ata_start_drive(struct ata_device *dev) 4993 4283 { 4994 - return ata_do_simple_cmd(ap, dev, ATA_CMD_IDLEIMMEDIATE); 4284 + return ata_do_simple_cmd(dev, ATA_CMD_IDLEIMMEDIATE); 4995 4285 } 4996 4286 4997 4287 /** 4998 4288 * ata_device_resume - wake up a previously suspended device 4999 - * @ap: port the device is connected to 5000 4289 * @dev: the device to resume 5001 4290 * 5002 4291 * Kick the drive back into action, by sending it an idle immediate ··· 5003 4294 * and host. 5004 4295 * 5005 4296 */ 5006 - int ata_device_resume(struct ata_port *ap, struct ata_device *dev) 4297 + int ata_device_resume(struct ata_device *dev) 5007 4298 { 4299 + struct ata_port *ap = dev->ap; 4300 + 5008 4301 if (ap->flags & ATA_FLAG_SUSPENDED) { 4302 + struct ata_device *failed_dev; 4303 + 5009 4304 ata_busy_sleep(ap, ATA_TMOUT_BOOT_QUICK, ATA_TMOUT_BOOT); 5010 4305 ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 200000); 4306 + 5011 4307 ap->flags &= ~ATA_FLAG_SUSPENDED; 5012 - ata_set_mode(ap); 4308 + while (ata_set_mode(ap, &failed_dev)) 4309 + ata_dev_disable(failed_dev); 5013 4310 } 5014 - if (!ata_dev_present(dev)) 4311 + if (!ata_dev_enabled(dev)) 5015 4312 return 0; 5016 4313 if (dev->class == ATA_DEV_ATA) 5017 - ata_start_drive(ap, dev); 4314 + ata_start_drive(dev); 5018 4315 5019 4316 return 0; 5020 4317 } 5021 4318 5022 4319 /** 5023 4320 * ata_device_suspend - prepare a device for suspend 5024 - * @ap: port the device is connected to 5025 4321 * @dev: the device to suspend 5026 4322 * @state: target power management state 5027 4323 * 5028 4324 * Flush the cache on the drive, if appropriate, then issue a 5029 4325 * standbynow command.
5030 4326 */ 5031 - int ata_device_suspend(struct ata_port *ap, struct ata_device *dev, pm_message_t state) 4327 + int ata_device_suspend(struct ata_device *dev, pm_message_t state) 5032 4328 { 5033 - if (!ata_dev_present(dev)) 4329 + struct ata_port *ap = dev->ap; 4330 + 4331 + if (!ata_dev_enabled(dev)) 5034 4332 return 0; 5035 4333 if (dev->class == ATA_DEV_ATA) 5036 - ata_flush_cache(ap, dev); 4334 + ata_flush_cache(dev); 5037 4335 5038 4336 if (state.event != PM_EVENT_FREEZE) 5039 - ata_standby_drive(ap, dev); 4337 + ata_standby_drive(dev); 5040 4338 ap->flags |= ATA_FLAG_SUSPENDED; 5041 4339 return 0; 5042 4340 } ··· 5131 4415 } 5132 4416 5133 4417 /** 4418 + * ata_dev_init - Initialize an ata_device structure 4419 + * @dev: Device structure to initialize 4420 + * 4421 + * Initialize @dev in preparation for probing. 4422 + * 4423 + * LOCKING: 4424 + * Inherited from caller. 4425 + */ 4426 + void ata_dev_init(struct ata_device *dev) 4427 + { 4428 + struct ata_port *ap = dev->ap; 4429 + unsigned long flags; 4430 + 4431 + /* SATA spd limit is bound to the first device */ 4432 + ap->sata_spd_limit = ap->hw_sata_spd_limit; 4433 + 4434 + /* High bits of dev->flags are used to record warm plug 4435 + * requests which occur asynchronously. Synchronize using 4436 + * host_set lock. 4437 + */ 4438 + spin_lock_irqsave(ap->lock, flags); 4439 + dev->flags &= ~ATA_DFLAG_INIT_MASK; 4440 + spin_unlock_irqrestore(ap->lock, flags); 4441 + 4442 + memset((void *)dev + ATA_DEVICE_CLEAR_OFFSET, 0, 4443 + sizeof(*dev) - ATA_DEVICE_CLEAR_OFFSET); 4444 + dev->pio_mask = UINT_MAX; 4445 + dev->mwdma_mask = UINT_MAX; 4446 + dev->udma_mask = UINT_MAX; 4447 + } 4448 + 4449 + /** 5134 4450 * ata_host_init - Initialize an ata_port structure 5135 4451 * @ap: Structure to initialize 5136 4452 * @host: associated SCSI mid-layer structure ··· 5176 4428 * LOCKING: 5177 4429 * Inherited from caller. 
5178 4430 */ 5179 - 5180 4431 static void ata_host_init(struct ata_port *ap, struct Scsi_Host *host, 5181 4432 struct ata_host_set *host_set, 5182 4433 const struct ata_probe_ent *ent, unsigned int port_no) ··· 5188 4441 host->unique_id = ata_unique_id++; 5189 4442 host->max_cmd_len = 12; 5190 4443 5191 - ap->flags = ATA_FLAG_PORT_DISABLED; 4444 + ap->lock = &host_set->lock; 4445 + ap->flags = ATA_FLAG_DISABLED; 5192 4446 ap->id = host->unique_id; 5193 4447 ap->host = host; 5194 4448 ap->ctl = ATA_DEVCTL_OBS; ··· 5203 4455 ap->udma_mask = ent->udma_mask; 5204 4456 ap->flags |= ent->host_flags; 5205 4457 ap->ops = ent->port_ops; 5206 - ap->cbl = ATA_CBL_NONE; 4458 + ap->hw_sata_spd_limit = UINT_MAX; 5207 4459 ap->active_tag = ATA_TAG_POISON; 5208 4460 ap->last_ctl = 0xFF; 5209 4461 4462 + #if defined(ATA_VERBOSE_DEBUG) 4463 + /* turn on all debugging levels */ 4464 + ap->msg_enable = 0x00FF; 4465 + #elif defined(ATA_DEBUG) 4466 + ap->msg_enable = ATA_MSG_DRV | ATA_MSG_INFO | ATA_MSG_CTL | ATA_MSG_WARN | ATA_MSG_ERR; 4467 + #else 4468 + ap->msg_enable = ATA_MSG_DRV | ATA_MSG_ERR | ATA_MSG_WARN; 4469 + #endif 4470 + 5210 4471 INIT_WORK(&ap->port_task, NULL, NULL); 4472 + INIT_WORK(&ap->hotplug_task, ata_scsi_hotplug, ap); 4473 + INIT_WORK(&ap->scsi_rescan_task, ata_scsi_dev_rescan, ap); 5211 4474 INIT_LIST_HEAD(&ap->eh_done_q); 4475 + init_waitqueue_head(&ap->eh_wait_q); 4476 + 4477 + /* set cable type */ 4478 + ap->cbl = ATA_CBL_NONE; 4479 + if (ap->flags & ATA_FLAG_SATA) 4480 + ap->cbl = ATA_CBL_SATA; 5212 4481 5213 4482 for (i = 0; i < ATA_MAX_DEVICES; i++) { 5214 4483 struct ata_device *dev = &ap->device[i]; 4484 + dev->ap = ap; 5215 4485 dev->devno = i; 5216 - dev->pio_mask = UINT_MAX; 5217 - dev->mwdma_mask = UINT_MAX; 5218 - dev->udma_mask = UINT_MAX; 4486 + ata_dev_init(dev); 5219 4487 } 5220 4488 5221 4489 #ifdef ATA_IRQ_TRAP ··· 5267 4503 5268 4504 DPRINTK("ENTER\n"); 5269 4505 5270 - if (!ent->port_ops->probe_reset && 4506 + if 
(!ent->port_ops->error_handler && 5271 4507 !(ent->host_flags & (ATA_FLAG_SATA_RESET | ATA_FLAG_SRST))) { 5272 4508 printk(KERN_ERR "ata%u: no reset mechanism available\n", 5273 4509 port_no); ··· 5280 4516 5281 4517 host->transportt = &ata_scsi_transport_template; 5282 4518 5283 - ap = (struct ata_port *) &host->hostdata[0]; 4519 + ap = ata_shost_to_port(host); 5284 4520 5285 4521 ata_host_init(ap, host, host_set, ent, port_no); 5286 4522 ··· 5313 4549 * RETURNS: 5314 4550 * Number of ports registered. Zero on error (no ports registered). 5315 4551 */ 5316 - 5317 4552 int ata_device_add(const struct ata_probe_ent *ent) 5318 4553 { 5319 4554 unsigned int count = 0, i; 5320 4555 struct device *dev = ent->dev; 5321 4556 struct ata_host_set *host_set; 4557 + int rc; 5322 4558 5323 4559 DPRINTK("ENTER\n"); 5324 4560 /* alloc a container for our list of ATA ports (buses) */ ··· 5351 4587 (ap->pio_mask << ATA_SHIFT_PIO); 5352 4588 5353 4589 /* print per-port info to dmesg */ 5354 - printk(KERN_INFO "ata%u: %cATA max %s cmd 0x%lX ctl 0x%lX " 5355 - "bmdma 0x%lX irq %lu\n", 5356 - ap->id, 5357 - ap->flags & ATA_FLAG_SATA ? 'S' : 'P', 5358 - ata_mode_string(xfer_mode_mask), 5359 - ap->ioaddr.cmd_addr, 5360 - ap->ioaddr.ctl_addr, 5361 - ap->ioaddr.bmdma_addr, 5362 - ent->irq); 4590 + ata_port_printk(ap, KERN_INFO, "%cATA max %s cmd 0x%lX " 4591 + "ctl 0x%lX bmdma 0x%lX irq %lu\n", 4592 + ap->flags & ATA_FLAG_SATA ? 
'S' : 'P', 4593 + ata_mode_string(xfer_mode_mask), 4594 + ap->ioaddr.cmd_addr, 4595 + ap->ioaddr.ctl_addr, 4596 + ap->ioaddr.bmdma_addr, 4597 + ent->irq); 5363 4598 5364 4599 ata_chk_status(ap); 5365 4600 host_set->ops->irq_clear(ap); 4601 + ata_eh_freeze_port(ap); /* freeze port before requesting IRQ */ 5366 4602 count++; 5367 4603 } 5368 4604 ··· 5370 4606 goto err_free_ret; 5371 4607 5372 4608 /* obtain irq, that is shared between channels */ 5373 - if (request_irq(ent->irq, ent->port_ops->irq_handler, ent->irq_flags, 5374 - DRV_NAME, host_set)) 4609 + rc = request_irq(ent->irq, ent->port_ops->irq_handler, ent->irq_flags, 4610 + DRV_NAME, host_set); 4611 + if (rc) { 4612 + dev_printk(KERN_ERR, dev, "irq %lu request failed: %d\n", 4613 + ent->irq, rc); 5375 4614 goto err_out; 4615 + } 5376 4616 5377 4617 /* perform each probe synchronously */ 5378 4618 DPRINTK("probe begin\n"); 5379 4619 for (i = 0; i < count; i++) { 5380 4620 struct ata_port *ap; 4621 + u32 scontrol; 5381 4622 int rc; 5382 4623 5383 4624 ap = host_set->ports[i]; 5384 4625 5385 - DPRINTK("ata%u: bus probe begin\n", ap->id); 5386 - rc = ata_bus_probe(ap); 5387 - DPRINTK("ata%u: bus probe end\n", ap->id); 5388 - 5389 - if (rc) { 5390 - /* FIXME: do something useful here? 5391 - * Current libata behavior will 5392 - * tear down everything when 5393 - * the module is removed 5394 - * or the h/w is unplugged. 
5395 - */ 4626 + /* init sata_spd_limit to the current value */ 4627 + if (sata_scr_read(ap, SCR_CONTROL, &scontrol) == 0) { 4628 + int spd = (scontrol >> 4) & 0xf; 4629 + ap->hw_sata_spd_limit &= (1 << spd) - 1; 5396 4630 } 4631 + ap->sata_spd_limit = ap->hw_sata_spd_limit; 5397 4632 5398 4633 rc = scsi_add_host(ap->host, dev); 5399 4634 if (rc) { 5400 - printk(KERN_ERR "ata%u: scsi_add_host failed\n", 5401 - ap->id); 4635 + ata_port_printk(ap, KERN_ERR, "scsi_add_host failed\n"); 5402 4636 /* FIXME: do something useful here */ 5403 4637 /* FIXME: handle unconditional calls to 5404 4638 * scsi_scan_host and ata_host_remove, below, 5405 4639 * at the very least 5406 4640 */ 4641 + } 4642 + 4643 + if (ap->ops->error_handler) { 4644 + unsigned long flags; 4645 + 4646 + ata_port_probe(ap); 4647 + 4648 + /* kick EH for boot probing */ 4649 + spin_lock_irqsave(ap->lock, flags); 4650 + 4651 + ap->eh_info.probe_mask = (1 << ATA_MAX_DEVICES) - 1; 4652 + ap->eh_info.action |= ATA_EH_SOFTRESET; 4653 + 4654 + ap->flags |= ATA_FLAG_LOADING; 4655 + ata_port_schedule_eh(ap); 4656 + 4657 + spin_unlock_irqrestore(ap->lock, flags); 4658 + 4659 + /* wait for EH to finish */ 4660 + ata_port_wait_eh(ap); 4661 + } else { 4662 + DPRINTK("ata%u: bus probe begin\n", ap->id); 4663 + rc = ata_bus_probe(ap); 4664 + DPRINTK("ata%u: bus probe end\n", ap->id); 4665 + 4666 + if (rc) { 4667 + /* FIXME: do something useful here? 4668 + * Current libata behavior will 4669 + * tear down everything when 4670 + * the module is removed 4671 + * or the h/w is unplugged. 4672 + */ 4673 + } 5407 4674 } 5408 4675 } 5409 4676 ··· 5463 4668 } 5464 4669 5465 4670 /** 4671 + * ata_port_detach - Detach ATA port in preparation for device removal 4672 + * @ap: ATA port to be detached 4673 + * 4674 + * Detach all ATA devices and the associated SCSI devices of @ap; 4675 + * then, remove the associated SCSI host. @ap is guaranteed to 4676 + * be quiescent on return from this function.
4677 + * 4678 + * LOCKING: 4679 + * Kernel thread context (may sleep). 4680 + */ 4681 + void ata_port_detach(struct ata_port *ap) 4682 + { 4683 + unsigned long flags; 4684 + int i; 4685 + 4686 + if (!ap->ops->error_handler) 4687 + return; 4688 + 4689 + /* tell EH we're leaving & flush EH */ 4690 + spin_lock_irqsave(ap->lock, flags); 4691 + ap->flags |= ATA_FLAG_UNLOADING; 4692 + spin_unlock_irqrestore(ap->lock, flags); 4693 + 4694 + ata_port_wait_eh(ap); 4695 + 4696 + /* EH is now guaranteed to see UNLOADING, so no new device 4697 + * will be attached. Disable all existing devices. 4698 + */ 4699 + spin_lock_irqsave(ap->lock, flags); 4700 + 4701 + for (i = 0; i < ATA_MAX_DEVICES; i++) 4702 + ata_dev_disable(&ap->device[i]); 4703 + 4704 + spin_unlock_irqrestore(ap->lock, flags); 4705 + 4706 + /* Final freeze & EH. All in-flight commands are aborted. EH 4707 + * will be skipped and retrials will be terminated with bad 4708 + * target. 4709 + */ 4710 + spin_lock_irqsave(ap->lock, flags); 4711 + ata_port_freeze(ap); /* won't be thawed */ 4712 + spin_unlock_irqrestore(ap->lock, flags); 4713 + 4714 + ata_port_wait_eh(ap); 4715 + 4716 + /* Flush hotplug task. The sequence is similar to 4717 + * ata_port_flush_task(). 
4718 + */ 4719 + flush_workqueue(ata_aux_wq); 4720 + cancel_delayed_work(&ap->hotplug_task); 4721 + flush_workqueue(ata_aux_wq); 4722 + 4723 + /* remove the associated SCSI host */ 4724 + scsi_remove_host(ap->host); 4725 + } 4726 + 4727 + /** 5466 4728 * ata_host_set_remove - PCI layer callback for device removal 5467 4729 * @host_set: ATA host set that was removed 5468 4730 * ··· 5532 4680 5533 4681 void ata_host_set_remove(struct ata_host_set *host_set) 5534 4682 { 5535 - struct ata_port *ap; 5536 4683 unsigned int i; 5537 4684 5538 - for (i = 0; i < host_set->n_ports; i++) { 5539 - ap = host_set->ports[i]; 5540 - scsi_remove_host(ap->host); 5541 - } 4685 + for (i = 0; i < host_set->n_ports; i++) 4686 + ata_port_detach(host_set->ports[i]); 5542 4687 5543 4688 free_irq(host_set->irq, host_set); 5544 4689 5545 4690 for (i = 0; i < host_set->n_ports; i++) { 5546 - ap = host_set->ports[i]; 4691 + struct ata_port *ap = host_set->ports[i]; 5547 4692 5548 4693 ata_scsi_release(ap->host); 5549 4694 ··· 5578 4729 5579 4730 int ata_scsi_release(struct Scsi_Host *host) 5580 4731 { 5581 - struct ata_port *ap = (struct ata_port *) &host->hostdata[0]; 5582 - int i; 4732 + struct ata_port *ap = ata_shost_to_port(host); 5583 4733 5584 4734 DPRINTK("ENTER\n"); 5585 4735 5586 4736 ap->ops->port_disable(ap); 5587 4737 ata_host_remove(ap, 0); 5588 - for (i = 0; i < ATA_MAX_DEVICES; i++) 5589 - kfree(ap->device[i].id); 5590 4738 5591 4739 DPRINTK("EXIT\n"); 5592 4740 return 1; ··· 5643 4797 { 5644 4798 struct device *dev = pci_dev_to_dev(pdev); 5645 4799 struct ata_host_set *host_set = dev_get_drvdata(dev); 4800 + struct ata_host_set *host_set2 = host_set->next; 5646 4801 5647 4802 ata_host_set_remove(host_set); 4803 + if (host_set2) 4804 + ata_host_set_remove(host_set2); 4805 + 5648 4806 pci_release_regions(pdev); 5649 4807 pci_disable_device(pdev); 5650 4808 dev_set_drvdata(dev, NULL); ··· 5713 4863 if (!ata_wq) 5714 4864 return -ENOMEM; 5715 4865 4866 + ata_aux_wq = 
create_singlethread_workqueue("ata_aux"); 4867 + if (!ata_aux_wq) { 4868 + destroy_workqueue(ata_wq); 4869 + return -ENOMEM; 4870 + } 4871 + 5716 4872 printk(KERN_DEBUG "libata version " DRV_VERSION " loaded.\n"); 5717 4873 return 0; 5718 4874 } ··· 5726 4870 static void __exit ata_exit(void) 5727 4871 { 5728 4872 destroy_workqueue(ata_wq); 4873 + destroy_workqueue(ata_aux_wq); 5729 4874 } 5730 4875 5731 4876 module_init(ata_init); ··· 5753 4896 return rc; 5754 4897 } 5755 4898 4899 + /** 4900 + * ata_wait_register - wait until register value changes 4901 + * @reg: IO-mapped register 4902 + * @mask: Mask to apply to read register value 4903 + * @val: Wait condition 4904 + * @interval_msec: polling interval in milliseconds 4905 + * @timeout_msec: timeout in milliseconds 4906 + * 4907 + * Waiting for some bits of register to change is a common 4908 + * operation for ATA controllers. This function reads 32bit LE 4909 + * IO-mapped register @reg and tests for the following condition. 4910 + * 4911 + * (*@reg & mask) != val 4912 + * 4913 + * If the condition is met, it returns; otherwise, the process is 4914 + * repeated after @interval_msec until timeout. 4915 + * 4916 + * LOCKING: 4917 + * Kernel thread context (may sleep) 4918 + * 4919 + * RETURNS: 4920 + * The final register value. 4921 + */ 4922 + u32 ata_wait_register(void __iomem *reg, u32 mask, u32 val, 4923 + unsigned long interval_msec, 4924 + unsigned long timeout_msec) 4925 + { 4926 + unsigned long timeout; 4927 + u32 tmp; 4928 + 4929 + tmp = ioread32(reg); 4930 + 4931 + /* Calculate timeout _after_ the first read to make sure 4932 + * preceding writes reach the controller before starting to 4933 + * eat away the timeout. 
4934 + */ 4935 + timeout = jiffies + (timeout_msec * HZ) / 1000; 4936 + 4937 + while ((tmp & mask) == val && time_before(jiffies, timeout)) { 4938 + msleep(interval_msec); 4939 + tmp = ioread32(reg); 4940 + } 4941 + 4942 + return tmp; 4943 + } 4944 + 5756 4945 /* 5757 4946 * libata is essentially a library of internal helper functions for 5758 4947 * low-level ATA host controller drivers. As such, the API/ABI is ··· 5806 4903 * Do not depend on ABI/API stability. 5807 4904 */ 5808 4905 4906 + EXPORT_SYMBOL_GPL(sata_deb_timing_boot); 4907 + EXPORT_SYMBOL_GPL(sata_deb_timing_eh); 4908 + EXPORT_SYMBOL_GPL(sata_deb_timing_before_fsrst); 5809 4909 EXPORT_SYMBOL_GPL(ata_std_bios_param); 5810 4910 EXPORT_SYMBOL_GPL(ata_std_ports); 5811 4911 EXPORT_SYMBOL_GPL(ata_device_add); 4912 + EXPORT_SYMBOL_GPL(ata_port_detach); 5812 4913 EXPORT_SYMBOL_GPL(ata_host_set_remove); 5813 4914 EXPORT_SYMBOL_GPL(ata_sg_init); 5814 4915 EXPORT_SYMBOL_GPL(ata_sg_init_one); 5815 - EXPORT_SYMBOL_GPL(__ata_qc_complete); 4916 + EXPORT_SYMBOL_GPL(ata_hsm_move); 4917 + EXPORT_SYMBOL_GPL(ata_qc_complete); 4918 + EXPORT_SYMBOL_GPL(ata_qc_complete_multiple); 5816 4919 EXPORT_SYMBOL_GPL(ata_qc_issue_prot); 5817 - EXPORT_SYMBOL_GPL(ata_eng_timeout); 5818 4920 EXPORT_SYMBOL_GPL(ata_tf_load); 5819 4921 EXPORT_SYMBOL_GPL(ata_tf_read); 5820 4922 EXPORT_SYMBOL_GPL(ata_noop_dev_select); ··· 5833 4925 EXPORT_SYMBOL_GPL(ata_port_stop); 5834 4926 EXPORT_SYMBOL_GPL(ata_host_stop); 5835 4927 EXPORT_SYMBOL_GPL(ata_interrupt); 4928 + EXPORT_SYMBOL_GPL(ata_mmio_data_xfer); 4929 + EXPORT_SYMBOL_GPL(ata_pio_data_xfer); 4930 + EXPORT_SYMBOL_GPL(ata_pio_data_xfer_noirq); 5836 4931 EXPORT_SYMBOL_GPL(ata_qc_prep); 5837 4932 EXPORT_SYMBOL_GPL(ata_noop_qc_prep); 5838 4933 EXPORT_SYMBOL_GPL(ata_bmdma_setup); ··· 5843 4932 EXPORT_SYMBOL_GPL(ata_bmdma_irq_clear); 5844 4933 EXPORT_SYMBOL_GPL(ata_bmdma_status); 5845 4934 EXPORT_SYMBOL_GPL(ata_bmdma_stop); 4935 + EXPORT_SYMBOL_GPL(ata_bmdma_freeze); 4936 + 
EXPORT_SYMBOL_GPL(ata_bmdma_thaw); 4937 + EXPORT_SYMBOL_GPL(ata_bmdma_drive_eh); 4938 + EXPORT_SYMBOL_GPL(ata_bmdma_error_handler); 4939 + EXPORT_SYMBOL_GPL(ata_bmdma_post_internal_cmd); 5846 4940 EXPORT_SYMBOL_GPL(ata_port_probe); 4941 + EXPORT_SYMBOL_GPL(sata_set_spd); 4942 + EXPORT_SYMBOL_GPL(sata_phy_debounce); 4943 + EXPORT_SYMBOL_GPL(sata_phy_resume); 5847 4944 EXPORT_SYMBOL_GPL(sata_phy_reset); 5848 4945 EXPORT_SYMBOL_GPL(__sata_phy_reset); 5849 4946 EXPORT_SYMBOL_GPL(ata_bus_reset); 5850 - EXPORT_SYMBOL_GPL(ata_std_probeinit); 4947 + EXPORT_SYMBOL_GPL(ata_std_prereset); 5851 4948 EXPORT_SYMBOL_GPL(ata_std_softreset); 5852 4949 EXPORT_SYMBOL_GPL(sata_std_hardreset); 5853 4950 EXPORT_SYMBOL_GPL(ata_std_postreset); 5854 - EXPORT_SYMBOL_GPL(ata_std_probe_reset); 5855 - EXPORT_SYMBOL_GPL(ata_drive_probe_reset); 5856 4951 EXPORT_SYMBOL_GPL(ata_dev_revalidate); 5857 4952 EXPORT_SYMBOL_GPL(ata_dev_classify); 5858 4953 EXPORT_SYMBOL_GPL(ata_dev_pair); 5859 4954 EXPORT_SYMBOL_GPL(ata_port_disable); 5860 4955 EXPORT_SYMBOL_GPL(ata_ratelimit); 4956 + EXPORT_SYMBOL_GPL(ata_wait_register); 5861 4957 EXPORT_SYMBOL_GPL(ata_busy_sleep); 5862 4958 EXPORT_SYMBOL_GPL(ata_port_queue_task); 5863 4959 EXPORT_SYMBOL_GPL(ata_scsi_ioctl); 5864 4960 EXPORT_SYMBOL_GPL(ata_scsi_queuecmd); 5865 4961 EXPORT_SYMBOL_GPL(ata_scsi_slave_config); 4962 + EXPORT_SYMBOL_GPL(ata_scsi_slave_destroy); 4963 + EXPORT_SYMBOL_GPL(ata_scsi_change_queue_depth); 5866 4964 EXPORT_SYMBOL_GPL(ata_scsi_release); 5867 4965 EXPORT_SYMBOL_GPL(ata_host_intr); 4966 + EXPORT_SYMBOL_GPL(sata_scr_valid); 4967 + EXPORT_SYMBOL_GPL(sata_scr_read); 4968 + EXPORT_SYMBOL_GPL(sata_scr_write); 4969 + EXPORT_SYMBOL_GPL(sata_scr_write_flush); 4970 + EXPORT_SYMBOL_GPL(ata_port_online); 4971 + EXPORT_SYMBOL_GPL(ata_port_offline); 5868 4972 EXPORT_SYMBOL_GPL(ata_id_string); 5869 4973 EXPORT_SYMBOL_GPL(ata_id_c_string); 5870 4974 EXPORT_SYMBOL_GPL(ata_scsi_simulate); 5871 - EXPORT_SYMBOL_GPL(ata_eh_qc_complete); 5872 - 
EXPORT_SYMBOL_GPL(ata_eh_qc_retry); 5873 4975 5874 4976 EXPORT_SYMBOL_GPL(ata_pio_need_iordy); 5875 4977 EXPORT_SYMBOL_GPL(ata_timing_compute); ··· 5904 4980 EXPORT_SYMBOL_GPL(ata_device_resume); 5905 4981 EXPORT_SYMBOL_GPL(ata_scsi_device_suspend); 5906 4982 EXPORT_SYMBOL_GPL(ata_scsi_device_resume); 4983 + 4984 + EXPORT_SYMBOL_GPL(ata_eng_timeout); 4985 + EXPORT_SYMBOL_GPL(ata_port_schedule_eh); 4986 + EXPORT_SYMBOL_GPL(ata_port_abort); 4987 + EXPORT_SYMBOL_GPL(ata_port_freeze); 4988 + EXPORT_SYMBOL_GPL(ata_eh_freeze_port); 4989 + EXPORT_SYMBOL_GPL(ata_eh_thaw_port); 4990 + EXPORT_SYMBOL_GPL(ata_eh_qc_complete); 4991 + EXPORT_SYMBOL_GPL(ata_eh_qc_retry); 4992 + EXPORT_SYMBOL_GPL(ata_do_eh);
+1907
drivers/scsi/libata-eh.c
··· 1 + /* 2 + * libata-eh.c - libata error handling 3 + * 4 + * Maintained by: Jeff Garzik <jgarzik@pobox.com> 5 + * Please ALWAYS copy linux-ide@vger.kernel.org 6 + * on emails. 7 + * 8 + * Copyright 2006 Tejun Heo <htejun@gmail.com> 9 + * 10 + * 11 + * This program is free software; you can redistribute it and/or 12 + * modify it under the terms of the GNU General Public License as 13 + * published by the Free Software Foundation; either version 2, or 14 + * (at your option) any later version. 15 + * 16 + * This program is distributed in the hope that it will be useful, 17 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 19 + * General Public License for more details. 20 + * 21 + * You should have received a copy of the GNU General Public License 22 + * along with this program; see the file COPYING. If not, write to 23 + * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, 24 + * USA. 
25 + * 26 + * 27 + * libata documentation is available via 'make {ps|pdf}docs', 28 + * as Documentation/DocBook/libata.* 29 + * 30 + * Hardware documentation available from http://www.t13.org/ and 31 + * http://www.sata-io.org/ 32 + * 33 + */ 34 + 35 + #include <linux/config.h> 36 + #include <linux/kernel.h> 37 + #include <scsi/scsi.h> 38 + #include <scsi/scsi_host.h> 39 + #include <scsi/scsi_eh.h> 40 + #include <scsi/scsi_device.h> 41 + #include <scsi/scsi_cmnd.h> 42 + #include "scsi_transport_api.h" 43 + 44 + #include <linux/libata.h> 45 + 46 + #include "libata.h" 47 + 48 + static void __ata_port_freeze(struct ata_port *ap); 49 + static void ata_eh_finish(struct ata_port *ap); 50 + 51 + static void ata_ering_record(struct ata_ering *ering, int is_io, 52 + unsigned int err_mask) 53 + { 54 + struct ata_ering_entry *ent; 55 + 56 + WARN_ON(!err_mask); 57 + 58 + ering->cursor++; 59 + ering->cursor %= ATA_ERING_SIZE; 60 + 61 + ent = &ering->ring[ering->cursor]; 62 + ent->is_io = is_io; 63 + ent->err_mask = err_mask; 64 + ent->timestamp = get_jiffies_64(); 65 + } 66 + 67 + static struct ata_ering_entry * ata_ering_top(struct ata_ering *ering) 68 + { 69 + struct ata_ering_entry *ent = &ering->ring[ering->cursor]; 70 + if (!ent->err_mask) 71 + return NULL; 72 + return ent; 73 + } 74 + 75 + static int ata_ering_map(struct ata_ering *ering, 76 + int (*map_fn)(struct ata_ering_entry *, void *), 77 + void *arg) 78 + { 79 + int idx, rc = 0; 80 + struct ata_ering_entry *ent; 81 + 82 + idx = ering->cursor; 83 + do { 84 + ent = &ering->ring[idx]; 85 + if (!ent->err_mask) 86 + break; 87 + rc = map_fn(ent, arg); 88 + if (rc) 89 + break; 90 + idx = (idx - 1 + ATA_ERING_SIZE) % ATA_ERING_SIZE; 91 + } while (idx != ering->cursor); 92 + 93 + return rc; 94 + } 95 + 96 + /** 97 + * ata_scsi_timed_out - SCSI layer time out callback 98 + * @cmd: timed out SCSI command 99 + * 100 + * Handles SCSI layer timeout. We race with normal completion of 101 + * the qc for @cmd. 
If the qc is already gone, we lose and let 102 + * the scsi command finish (EH_HANDLED). Otherwise, the qc has 103 + * timed out and EH should be invoked. Prevent ata_qc_complete() 104 + * from finishing it by setting EH_SCHEDULED and return 105 + * EH_NOT_HANDLED. 106 + * 107 + * TODO: kill this function once old EH is gone. 108 + * 109 + * LOCKING: 110 + * Called from timer context 111 + * 112 + * RETURNS: 113 + * EH_HANDLED or EH_NOT_HANDLED 114 + */ 115 + enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd) 116 + { 117 + struct Scsi_Host *host = cmd->device->host; 118 + struct ata_port *ap = ata_shost_to_port(host); 119 + unsigned long flags; 120 + struct ata_queued_cmd *qc; 121 + enum scsi_eh_timer_return ret; 122 + 123 + DPRINTK("ENTER\n"); 124 + 125 + if (ap->ops->error_handler) { 126 + ret = EH_NOT_HANDLED; 127 + goto out; 128 + } 129 + 130 + ret = EH_HANDLED; 131 + spin_lock_irqsave(ap->lock, flags); 132 + qc = ata_qc_from_tag(ap, ap->active_tag); 133 + if (qc) { 134 + WARN_ON(qc->scsicmd != cmd); 135 + qc->flags |= ATA_QCFLAG_EH_SCHEDULED; 136 + qc->err_mask |= AC_ERR_TIMEOUT; 137 + ret = EH_NOT_HANDLED; 138 + } 139 + spin_unlock_irqrestore(ap->lock, flags); 140 + 141 + out: 142 + DPRINTK("EXIT, ret=%d\n", ret); 143 + return ret; 144 + } 145 + 146 + /** 147 + * ata_scsi_error - SCSI layer error handler callback 148 + * @host: SCSI host on which error occurred 149 + * 150 + * Handles SCSI-layer-thrown error events. 151 + * 152 + * LOCKING: 153 + * Inherited from SCSI layer (none, can sleep) 154 + * 155 + * RETURNS: 156 + * Zero. 
157 + */ 158 + void ata_scsi_error(struct Scsi_Host *host) 159 + { 160 + struct ata_port *ap = ata_shost_to_port(host); 161 + spinlock_t *ap_lock = ap->lock; 162 + int i, repeat_cnt = ATA_EH_MAX_REPEAT; 163 + unsigned long flags; 164 + 165 + DPRINTK("ENTER\n"); 166 + 167 + /* synchronize with port task */ 168 + ata_port_flush_task(ap); 169 + 170 + /* synchronize with host_set lock and sort out timeouts */ 171 + 172 + /* For new EH, all qcs are finished in one of three ways - 173 + * normal completion, error completion, and SCSI timeout. 174 + * Both completions can race against SCSI timeout. When normal 175 + * completion wins, the qc never reaches EH. When error 176 + * completion wins, the qc has ATA_QCFLAG_FAILED set. 177 + * 178 + * When SCSI timeout wins, things are a bit more complex. 179 + * Normal or error completion can occur after the timeout but 180 + * before this point. In such cases, both types of 181 + * completions are honored. A scmd is determined to have 182 + * timed out iff its associated qc is active and not failed. 183 + */ 184 + if (ap->ops->error_handler) { 185 + struct scsi_cmnd *scmd, *tmp; 186 + int nr_timedout = 0; 187 + 188 + spin_lock_irqsave(ap_lock, flags); 189 + 190 + list_for_each_entry_safe(scmd, tmp, &host->eh_cmd_q, eh_entry) { 191 + struct ata_queued_cmd *qc; 192 + 193 + for (i = 0; i < ATA_MAX_QUEUE; i++) { 194 + qc = __ata_qc_from_tag(ap, i); 195 + if (qc->flags & ATA_QCFLAG_ACTIVE && 196 + qc->scsicmd == scmd) 197 + break; 198 + } 199 + 200 + if (i < ATA_MAX_QUEUE) { 201 + /* the scmd has an associated qc */ 202 + if (!(qc->flags & ATA_QCFLAG_FAILED)) { 203 + /* which hasn't failed yet, timeout */ 204 + qc->err_mask |= AC_ERR_TIMEOUT; 205 + qc->flags |= ATA_QCFLAG_FAILED; 206 + nr_timedout++; 207 + } 208 + } else { 209 + /* Normal completion occurred after 210 + * SCSI timeout but before this point. 211 + * Successfully complete it.
212 + */ 213 + scmd->retries = scmd->allowed; 214 + scsi_eh_finish_cmd(scmd, &ap->eh_done_q); 215 + } 216 + } 217 + 218 + /* If we have timed out qcs, they belong to EH from 219 + * this point, but the state of the controller is 220 + * unknown. Freeze the port to make sure the IRQ 221 + * handler doesn't diddle with those qcs. This must 222 + * be done atomically w.r.t. setting QCFLAG_FAILED. 223 + */ 224 + if (nr_timedout) 225 + __ata_port_freeze(ap); 226 + 227 + spin_unlock_irqrestore(ap_lock, flags); 228 + } else 229 + spin_unlock_wait(ap_lock); 230 + 231 + repeat: 232 + /* invoke error handler */ 233 + if (ap->ops->error_handler) { 234 + /* fetch & clear EH info */ 235 + spin_lock_irqsave(ap_lock, flags); 236 + 237 + memset(&ap->eh_context, 0, sizeof(ap->eh_context)); 238 + ap->eh_context.i = ap->eh_info; 239 + memset(&ap->eh_info, 0, sizeof(ap->eh_info)); 240 + 241 + ap->flags |= ATA_FLAG_EH_IN_PROGRESS; 242 + ap->flags &= ~ATA_FLAG_EH_PENDING; 243 + 244 + spin_unlock_irqrestore(ap_lock, flags); 245 + 246 + /* invoke EH. if unloading, just finish failed qcs */ 247 + if (!(ap->flags & ATA_FLAG_UNLOADING)) 248 + ap->ops->error_handler(ap); 249 + else 250 + ata_eh_finish(ap); 251 + 252 + /* Exception might have happened after ->error_handler 253 + * recovered the port but before this point. Repeat 254 + * EH in such case.
255 + */ 256 + spin_lock_irqsave(ap_lock, flags); 257 + 258 + if (ap->flags & ATA_FLAG_EH_PENDING) { 259 + if (--repeat_cnt) { 260 + ata_port_printk(ap, KERN_INFO, 261 + "EH pending after completion, " 262 + "repeating EH (cnt=%d)\n", repeat_cnt); 263 + spin_unlock_irqrestore(ap_lock, flags); 264 + goto repeat; 265 + } 266 + ata_port_printk(ap, KERN_ERR, "EH pending after %d " 267 + "tries, giving up\n", ATA_EH_MAX_REPEAT); 268 + } 269 + 270 + /* this run is complete, make sure EH info is clear */ 271 + memset(&ap->eh_info, 0, sizeof(ap->eh_info)); 272 + 273 + /* Clear host_eh_scheduled while holding ap_lock such 274 + * that if exception occurs after this point but 275 + * before EH completion, SCSI midlayer will 276 + * re-initiate EH. 277 + */ 278 + host->host_eh_scheduled = 0; 279 + 280 + spin_unlock_irqrestore(ap_lock, flags); 281 + } else { 282 + WARN_ON(ata_qc_from_tag(ap, ap->active_tag) == NULL); 283 + ap->ops->eng_timeout(ap); 284 + } 285 + 286 + /* finish or retry handled scmd's and clean up */ 287 + WARN_ON(host->host_failed || !list_empty(&host->eh_cmd_q)); 288 + 289 + scsi_eh_flush_done_q(&ap->eh_done_q); 290 + 291 + /* clean up */ 292 + spin_lock_irqsave(ap_lock, flags); 293 + 294 + if (ap->flags & ATA_FLAG_LOADING) { 295 + ap->flags &= ~ATA_FLAG_LOADING; 296 + } else { 297 + if (ap->flags & ATA_FLAG_SCSI_HOTPLUG) 298 + queue_work(ata_aux_wq, &ap->hotplug_task); 299 + if (ap->flags & ATA_FLAG_RECOVERED) 300 + ata_port_printk(ap, KERN_INFO, "EH complete\n"); 301 + } 302 + 303 + ap->flags &= ~(ATA_FLAG_SCSI_HOTPLUG | ATA_FLAG_RECOVERED); 304 + 305 + /* tell wait_eh that we're done */ 306 + ap->flags &= ~ATA_FLAG_EH_IN_PROGRESS; 307 + wake_up_all(&ap->eh_wait_q); 308 + 309 + spin_unlock_irqrestore(ap_lock, flags); 310 + 311 + DPRINTK("EXIT\n"); 312 + } 313 + 314 + /** 315 + * ata_port_wait_eh - Wait for the currently pending EH to complete 316 + * @ap: Port to wait EH for 317 + * 318 + * Wait until the currently pending EH is complete. 
319 + * 320 + * LOCKING: 321 + * Kernel thread context (may sleep). 322 + */ 323 + void ata_port_wait_eh(struct ata_port *ap) 324 + { 325 + unsigned long flags; 326 + DEFINE_WAIT(wait); 327 + 328 + retry: 329 + spin_lock_irqsave(ap->lock, flags); 330 + 331 + while (ap->flags & (ATA_FLAG_EH_PENDING | ATA_FLAG_EH_IN_PROGRESS)) { 332 + prepare_to_wait(&ap->eh_wait_q, &wait, TASK_UNINTERRUPTIBLE); 333 + spin_unlock_irqrestore(ap->lock, flags); 334 + schedule(); 335 + spin_lock_irqsave(ap->lock, flags); 336 + } 337 + finish_wait(&ap->eh_wait_q, &wait); 338 + 339 + spin_unlock_irqrestore(ap->lock, flags); 340 + 341 + /* make sure SCSI EH is complete */ 342 + if (scsi_host_in_recovery(ap->host)) { 343 + msleep(10); 344 + goto retry; 345 + } 346 + } 347 + 348 + /** 349 + * ata_qc_timeout - Handle timeout of queued command 350 + * @qc: Command that timed out 351 + * 352 + * Some part of the kernel (currently, only the SCSI layer) 353 + * has noticed that the active command on port @ap has not 354 + * completed after a specified length of time. Handle this 355 + * condition by disabling DMA (if necessary) and completing 356 + * transactions, with error if necessary. 357 + * 358 + * This also handles the case of the "lost interrupt", where 359 + * for some reason (possibly hardware bug, possibly driver bug) 360 + * an interrupt was not delivered to the driver, even though the 361 + * transaction completed successfully. 362 + * 363 + * TODO: kill this function once old EH is gone. 
364 + * 365 + * LOCKING: 366 + * Inherited from SCSI layer (none, can sleep) 367 + */ 368 + static void ata_qc_timeout(struct ata_queued_cmd *qc) 369 + { 370 + struct ata_port *ap = qc->ap; 371 + u8 host_stat = 0, drv_stat; 372 + unsigned long flags; 373 + 374 + DPRINTK("ENTER\n"); 375 + 376 + ap->hsm_task_state = HSM_ST_IDLE; 377 + 378 + spin_lock_irqsave(ap->lock, flags); 379 + 380 + switch (qc->tf.protocol) { 381 + 382 + case ATA_PROT_DMA: 383 + case ATA_PROT_ATAPI_DMA: 384 + host_stat = ap->ops->bmdma_status(ap); 385 + 386 + /* before we do anything else, clear DMA-Start bit */ 387 + ap->ops->bmdma_stop(qc); 388 + 389 + /* fall through */ 390 + 391 + default: 392 + ata_altstatus(ap); 393 + drv_stat = ata_chk_status(ap); 394 + 395 + /* ack bmdma irq events */ 396 + ap->ops->irq_clear(ap); 397 + 398 + ata_dev_printk(qc->dev, KERN_ERR, "command 0x%x timeout, " 399 + "stat 0x%x host_stat 0x%x\n", 400 + qc->tf.command, drv_stat, host_stat); 401 + 402 + /* complete taskfile transaction */ 403 + qc->err_mask |= AC_ERR_TIMEOUT; 404 + break; 405 + } 406 + 407 + spin_unlock_irqrestore(ap->lock, flags); 408 + 409 + ata_eh_qc_complete(qc); 410 + 411 + DPRINTK("EXIT\n"); 412 + } 413 + 414 + /** 415 + * ata_eng_timeout - Handle timeout of queued command 416 + * @ap: Port on which timed-out command is active 417 + * 418 + * Some part of the kernel (currently, only the SCSI layer) 419 + * has noticed that the active command on port @ap has not 420 + * completed after a specified length of time. Handle this 421 + * condition by disabling DMA (if necessary) and completing 422 + * transactions, with error if necessary. 423 + * 424 + * This also handles the case of the "lost interrupt", where 425 + * for some reason (possibly hardware bug, possibly driver bug) 426 + * an interrupt was not delivered to the driver, even though the 427 + * transaction completed successfully. 428 + * 429 + * TODO: kill this function once old EH is gone. 
430 + * 431 + * LOCKING: 432 + * Inherited from SCSI layer (none, can sleep) 433 + */ 434 + void ata_eng_timeout(struct ata_port *ap) 435 + { 436 + DPRINTK("ENTER\n"); 437 + 438 + ata_qc_timeout(ata_qc_from_tag(ap, ap->active_tag)); 439 + 440 + DPRINTK("EXIT\n"); 441 + } 442 + 443 + /** 444 + * ata_qc_schedule_eh - schedule qc for error handling 445 + * @qc: command to schedule error handling for 446 + * 447 + * Schedule error handling for @qc. EH will kick in as soon as 448 + * other commands are drained. 449 + * 450 + * LOCKING: 451 + * spin_lock_irqsave(host_set lock) 452 + */ 453 + void ata_qc_schedule_eh(struct ata_queued_cmd *qc) 454 + { 455 + struct ata_port *ap = qc->ap; 456 + 457 + WARN_ON(!ap->ops->error_handler); 458 + 459 + qc->flags |= ATA_QCFLAG_FAILED; 460 + qc->ap->flags |= ATA_FLAG_EH_PENDING; 461 + 462 + /* The following will fail if timeout has already expired. 463 + * ata_scsi_error() takes care of such scmds on EH entry. 464 + * Note that ATA_QCFLAG_FAILED is unconditionally set after 465 + * this function completes. 466 + */ 467 + scsi_req_abort_cmd(qc->scsicmd); 468 + } 469 + 470 + /** 471 + * ata_port_schedule_eh - schedule error handling without a qc 472 + * @ap: ATA port to schedule EH for 473 + * 474 + * Schedule error handling for @ap. EH will kick in as soon as 475 + * all commands are drained. 476 + * 477 + * LOCKING: 478 + * spin_lock_irqsave(host_set lock) 479 + */ 480 + void ata_port_schedule_eh(struct ata_port *ap) 481 + { 482 + WARN_ON(!ap->ops->error_handler); 483 + 484 + ap->flags |= ATA_FLAG_EH_PENDING; 485 + scsi_schedule_eh(ap->host); 486 + 487 + DPRINTK("port EH scheduled\n"); 488 + } 489 + 490 + /** 491 + * ata_port_abort - abort all qc's on the port 492 + * @ap: ATA port to abort qc's for 493 + * 494 + * Abort all active qc's of @ap and schedule EH. 495 + * 496 + * LOCKING: 497 + * spin_lock_irqsave(host_set lock) 498 + * 499 + * RETURNS: 500 + * Number of aborted qc's. 
501 + */ 502 + int ata_port_abort(struct ata_port *ap) 503 + { 504 + int tag, nr_aborted = 0; 505 + 506 + WARN_ON(!ap->ops->error_handler); 507 + 508 + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 509 + struct ata_queued_cmd *qc = ata_qc_from_tag(ap, tag); 510 + 511 + if (qc) { 512 + qc->flags |= ATA_QCFLAG_FAILED; 513 + ata_qc_complete(qc); 514 + nr_aborted++; 515 + } 516 + } 517 + 518 + if (!nr_aborted) 519 + ata_port_schedule_eh(ap); 520 + 521 + return nr_aborted; 522 + } 523 + 524 + /** 525 + * __ata_port_freeze - freeze port 526 + * @ap: ATA port to freeze 527 + * 528 + * This function is called when HSM violation or some other 529 + * condition disrupts normal operation of the port. Frozen port 530 + * is not allowed to perform any operation until the port is 531 + * thawed, which usually follows a successful reset. 532 + * 533 + * ap->ops->freeze() callback can be used for freezing the port 534 + * hardware-wise (e.g. mask interrupt and stop DMA engine). If a 535 + * port cannot be frozen hardware-wise, the interrupt handler 536 + * must ack and clear interrupts unconditionally while the port 537 + * is frozen. 538 + * 539 + * LOCKING: 540 + * spin_lock_irqsave(host_set lock) 541 + */ 542 + static void __ata_port_freeze(struct ata_port *ap) 543 + { 544 + WARN_ON(!ap->ops->error_handler); 545 + 546 + if (ap->ops->freeze) 547 + ap->ops->freeze(ap); 548 + 549 + ap->flags |= ATA_FLAG_FROZEN; 550 + 551 + DPRINTK("ata%u port frozen\n", ap->id); 552 + } 553 + 554 + /** 555 + * ata_port_freeze - abort & freeze port 556 + * @ap: ATA port to freeze 557 + * 558 + * Abort and freeze @ap. 559 + * 560 + * LOCKING: 561 + * spin_lock_irqsave(host_set lock) 562 + * 563 + * RETURNS: 564 + * Number of aborted commands. 
565 + */
566 + int ata_port_freeze(struct ata_port *ap)
567 + {
568 + int nr_aborted;
569 +
570 + WARN_ON(!ap->ops->error_handler);
571 +
572 + nr_aborted = ata_port_abort(ap);
573 + __ata_port_freeze(ap);
574 +
575 + return nr_aborted;
576 + }
577 +
578 + /**
579 + * ata_eh_freeze_port - EH helper to freeze port
580 + * @ap: ATA port to freeze
581 + *
582 + * Freeze @ap.
583 + *
584 + * LOCKING:
585 + * None.
586 + */
587 + void ata_eh_freeze_port(struct ata_port *ap)
588 + {
589 + unsigned long flags;
590 +
591 + if (!ap->ops->error_handler)
592 + return;
593 +
594 + spin_lock_irqsave(ap->lock, flags);
595 + __ata_port_freeze(ap);
596 + spin_unlock_irqrestore(ap->lock, flags);
597 + }
598 +
599 + /**
600 + * ata_eh_thaw_port - EH helper to thaw port
601 + * @ap: ATA port to thaw
602 + *
603 + * Thaw frozen port @ap.
604 + *
605 + * LOCKING:
606 + * None.
607 + */
608 + void ata_eh_thaw_port(struct ata_port *ap)
609 + {
610 + unsigned long flags;
611 +
612 + if (!ap->ops->error_handler)
613 + return;
614 +
615 + spin_lock_irqsave(ap->lock, flags);
616 +
617 + ap->flags &= ~ATA_FLAG_FROZEN;
618 +
619 + if (ap->ops->thaw)
620 + ap->ops->thaw(ap);
621 +
622 + spin_unlock_irqrestore(ap->lock, flags);
623 +
624 + DPRINTK("ata%u port thawed\n", ap->id);
625 + }
626 +
627 + static void ata_eh_scsidone(struct scsi_cmnd *scmd)
628 + {
629 + /* nada */
630 + }
631 +
632 + static void __ata_eh_qc_complete(struct ata_queued_cmd *qc)
633 + {
634 + struct ata_port *ap = qc->ap;
635 + struct scsi_cmnd *scmd = qc->scsicmd;
636 + unsigned long flags;
637 +
638 + spin_lock_irqsave(ap->lock, flags);
639 + qc->scsidone = ata_eh_scsidone;
640 + __ata_qc_complete(qc);
641 + WARN_ON(ata_tag_valid(qc->tag));
642 + spin_unlock_irqrestore(ap->lock, flags);
643 +
644 + scsi_eh_finish_cmd(scmd, &ap->eh_done_q);
645 + }
646 +
647 + /**
648 + * ata_eh_qc_complete - Complete an active ATA command from EH
649 + * @qc: Command to complete
650 + *
651 + * Indicate to the mid and upper layers
that an ATA command has 652 + * completed. To be used from EH. 653 + */ 654 + void ata_eh_qc_complete(struct ata_queued_cmd *qc) 655 + { 656 + struct scsi_cmnd *scmd = qc->scsicmd; 657 + scmd->retries = scmd->allowed; 658 + __ata_eh_qc_complete(qc); 659 + } 660 + 661 + /** 662 + * ata_eh_qc_retry - Tell midlayer to retry an ATA command after EH 663 + * @qc: Command to retry 664 + * 665 + * Indicate to the mid and upper layers that an ATA command 666 + * should be retried. To be used from EH. 667 + * 668 + * SCSI midlayer limits the number of retries to scmd->allowed. 669 + * scmd->retries is decremented for commands which get retried 670 + * due to unrelated failures (qc->err_mask is zero). 671 + */ 672 + void ata_eh_qc_retry(struct ata_queued_cmd *qc) 673 + { 674 + struct scsi_cmnd *scmd = qc->scsicmd; 675 + if (!qc->err_mask && scmd->retries) 676 + scmd->retries--; 677 + __ata_eh_qc_complete(qc); 678 + } 679 + 680 + /** 681 + * ata_eh_detach_dev - detach ATA device 682 + * @dev: ATA device to detach 683 + * 684 + * Detach @dev. 685 + * 686 + * LOCKING: 687 + * None. 
688 + */ 689 + static void ata_eh_detach_dev(struct ata_device *dev) 690 + { 691 + struct ata_port *ap = dev->ap; 692 + unsigned long flags; 693 + 694 + ata_dev_disable(dev); 695 + 696 + spin_lock_irqsave(ap->lock, flags); 697 + 698 + dev->flags &= ~ATA_DFLAG_DETACH; 699 + 700 + if (ata_scsi_offline_dev(dev)) { 701 + dev->flags |= ATA_DFLAG_DETACHED; 702 + ap->flags |= ATA_FLAG_SCSI_HOTPLUG; 703 + } 704 + 705 + spin_unlock_irqrestore(ap->lock, flags); 706 + } 707 + 708 + static void ata_eh_clear_action(struct ata_device *dev, 709 + struct ata_eh_info *ehi, unsigned int action) 710 + { 711 + int i; 712 + 713 + if (!dev) { 714 + ehi->action &= ~action; 715 + for (i = 0; i < ATA_MAX_DEVICES; i++) 716 + ehi->dev_action[i] &= ~action; 717 + } else { 718 + /* doesn't make sense for port-wide EH actions */ 719 + WARN_ON(!(action & ATA_EH_PERDEV_MASK)); 720 + 721 + /* break ehi->action into ehi->dev_action */ 722 + if (ehi->action & action) { 723 + for (i = 0; i < ATA_MAX_DEVICES; i++) 724 + ehi->dev_action[i] |= ehi->action & action; 725 + ehi->action &= ~action; 726 + } 727 + 728 + /* turn off the specified per-dev action */ 729 + ehi->dev_action[dev->devno] &= ~action; 730 + } 731 + } 732 + 733 + /** 734 + * ata_eh_about_to_do - about to perform eh_action 735 + * @ap: target ATA port 736 + * @dev: target ATA dev for per-dev action (can be NULL) 737 + * @action: action about to be performed 738 + * 739 + * Called just before performing EH actions to clear related bits 740 + * in @ap->eh_info such that eh actions are not unnecessarily 741 + * repeated. 742 + * 743 + * LOCKING: 744 + * None. 
745 + */ 746 + static void ata_eh_about_to_do(struct ata_port *ap, struct ata_device *dev, 747 + unsigned int action) 748 + { 749 + unsigned long flags; 750 + 751 + spin_lock_irqsave(ap->lock, flags); 752 + ata_eh_clear_action(dev, &ap->eh_info, action); 753 + ap->flags |= ATA_FLAG_RECOVERED; 754 + spin_unlock_irqrestore(ap->lock, flags); 755 + } 756 + 757 + /** 758 + * ata_eh_done - EH action complete 759 + * @ap: target ATA port 760 + * @dev: target ATA dev for per-dev action (can be NULL) 761 + * @action: action just completed 762 + * 763 + * Called right after performing EH actions to clear related bits 764 + * in @ap->eh_context. 765 + * 766 + * LOCKING: 767 + * None. 768 + */ 769 + static void ata_eh_done(struct ata_port *ap, struct ata_device *dev, 770 + unsigned int action) 771 + { 772 + ata_eh_clear_action(dev, &ap->eh_context.i, action); 773 + } 774 + 775 + /** 776 + * ata_err_string - convert err_mask to descriptive string 777 + * @err_mask: error mask to convert to string 778 + * 779 + * Convert @err_mask to descriptive string. Errors are 780 + * prioritized according to severity and only the most severe 781 + * error is reported. 782 + * 783 + * LOCKING: 784 + * None. 
785 + * 786 + * RETURNS: 787 + * Descriptive string for @err_mask 788 + */ 789 + static const char * ata_err_string(unsigned int err_mask) 790 + { 791 + if (err_mask & AC_ERR_HOST_BUS) 792 + return "host bus error"; 793 + if (err_mask & AC_ERR_ATA_BUS) 794 + return "ATA bus error"; 795 + if (err_mask & AC_ERR_TIMEOUT) 796 + return "timeout"; 797 + if (err_mask & AC_ERR_HSM) 798 + return "HSM violation"; 799 + if (err_mask & AC_ERR_SYSTEM) 800 + return "internal error"; 801 + if (err_mask & AC_ERR_MEDIA) 802 + return "media error"; 803 + if (err_mask & AC_ERR_INVALID) 804 + return "invalid argument"; 805 + if (err_mask & AC_ERR_DEV) 806 + return "device error"; 807 + return "unknown error"; 808 + } 809 + 810 + /** 811 + * ata_read_log_page - read a specific log page 812 + * @dev: target device 813 + * @page: page to read 814 + * @buf: buffer to store read page 815 + * @sectors: number of sectors to read 816 + * 817 + * Read log page using READ_LOG_EXT command. 818 + * 819 + * LOCKING: 820 + * Kernel thread context (may sleep). 821 + * 822 + * RETURNS: 823 + * 0 on success, AC_ERR_* mask otherwise. 
824 + */ 825 + static unsigned int ata_read_log_page(struct ata_device *dev, 826 + u8 page, void *buf, unsigned int sectors) 827 + { 828 + struct ata_taskfile tf; 829 + unsigned int err_mask; 830 + 831 + DPRINTK("read log page - page %d\n", page); 832 + 833 + ata_tf_init(dev, &tf); 834 + tf.command = ATA_CMD_READ_LOG_EXT; 835 + tf.lbal = page; 836 + tf.nsect = sectors; 837 + tf.hob_nsect = sectors >> 8; 838 + tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_LBA48 | ATA_TFLAG_DEVICE; 839 + tf.protocol = ATA_PROT_PIO; 840 + 841 + err_mask = ata_exec_internal(dev, &tf, NULL, DMA_FROM_DEVICE, 842 + buf, sectors * ATA_SECT_SIZE); 843 + 844 + DPRINTK("EXIT, err_mask=%x\n", err_mask); 845 + return err_mask; 846 + } 847 + 848 + /** 849 + * ata_eh_read_log_10h - Read log page 10h for NCQ error details 850 + * @dev: Device to read log page 10h from 851 + * @tag: Resulting tag of the failed command 852 + * @tf: Resulting taskfile registers of the failed command 853 + * 854 + * Read log page 10h to obtain NCQ error details and clear error 855 + * condition. 856 + * 857 + * LOCKING: 858 + * Kernel thread context (may sleep). 859 + * 860 + * RETURNS: 861 + * 0 on success, -errno otherwise. 
862 + */ 863 + static int ata_eh_read_log_10h(struct ata_device *dev, 864 + int *tag, struct ata_taskfile *tf) 865 + { 866 + u8 *buf = dev->ap->sector_buf; 867 + unsigned int err_mask; 868 + u8 csum; 869 + int i; 870 + 871 + err_mask = ata_read_log_page(dev, ATA_LOG_SATA_NCQ, buf, 1); 872 + if (err_mask) 873 + return -EIO; 874 + 875 + csum = 0; 876 + for (i = 0; i < ATA_SECT_SIZE; i++) 877 + csum += buf[i]; 878 + if (csum) 879 + ata_dev_printk(dev, KERN_WARNING, 880 + "invalid checksum 0x%x on log page 10h\n", csum); 881 + 882 + if (buf[0] & 0x80) 883 + return -ENOENT; 884 + 885 + *tag = buf[0] & 0x1f; 886 + 887 + tf->command = buf[2]; 888 + tf->feature = buf[3]; 889 + tf->lbal = buf[4]; 890 + tf->lbam = buf[5]; 891 + tf->lbah = buf[6]; 892 + tf->device = buf[7]; 893 + tf->hob_lbal = buf[8]; 894 + tf->hob_lbam = buf[9]; 895 + tf->hob_lbah = buf[10]; 896 + tf->nsect = buf[12]; 897 + tf->hob_nsect = buf[13]; 898 + 899 + return 0; 900 + } 901 + 902 + /** 903 + * atapi_eh_request_sense - perform ATAPI REQUEST_SENSE 904 + * @dev: device to perform REQUEST_SENSE to 905 + * @sense_buf: result sense data buffer (SCSI_SENSE_BUFFERSIZE bytes long) 906 + * 907 + * Perform ATAPI REQUEST_SENSE after the device reported CHECK 908 + * SENSE. This function is EH helper. 909 + * 910 + * LOCKING: 911 + * Kernel thread context (may sleep). 912 + * 913 + * RETURNS: 914 + * 0 on success, AC_ERR_* mask on failure 915 + */ 916 + static unsigned int atapi_eh_request_sense(struct ata_device *dev, 917 + unsigned char *sense_buf) 918 + { 919 + struct ata_port *ap = dev->ap; 920 + struct ata_taskfile tf; 921 + u8 cdb[ATAPI_CDB_LEN]; 922 + 923 + DPRINTK("ATAPI request sense\n"); 924 + 925 + ata_tf_init(dev, &tf); 926 + 927 + /* FIXME: is this needed? */ 928 + memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE); 929 + 930 + /* XXX: why tf_read here? 
*/ 931 + ap->ops->tf_read(ap, &tf); 932 + 933 + /* fill these in, for the case where they are -not- overwritten */ 934 + sense_buf[0] = 0x70; 935 + sense_buf[2] = tf.feature >> 4; 936 + 937 + memset(cdb, 0, ATAPI_CDB_LEN); 938 + cdb[0] = REQUEST_SENSE; 939 + cdb[4] = SCSI_SENSE_BUFFERSIZE; 940 + 941 + tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; 942 + tf.command = ATA_CMD_PACKET; 943 + 944 + /* is it pointless to prefer PIO for "safety reasons"? */ 945 + if (ap->flags & ATA_FLAG_PIO_DMA) { 946 + tf.protocol = ATA_PROT_ATAPI_DMA; 947 + tf.feature |= ATAPI_PKT_DMA; 948 + } else { 949 + tf.protocol = ATA_PROT_ATAPI; 950 + tf.lbam = (8 * 1024) & 0xff; 951 + tf.lbah = (8 * 1024) >> 8; 952 + } 953 + 954 + return ata_exec_internal(dev, &tf, cdb, DMA_FROM_DEVICE, 955 + sense_buf, SCSI_SENSE_BUFFERSIZE); 956 + } 957 + 958 + /** 959 + * ata_eh_analyze_serror - analyze SError for a failed port 960 + * @ap: ATA port to analyze SError for 961 + * 962 + * Analyze SError if available and further determine cause of 963 + * failure. 964 + * 965 + * LOCKING: 966 + * None. 
967 + */ 968 + static void ata_eh_analyze_serror(struct ata_port *ap) 969 + { 970 + struct ata_eh_context *ehc = &ap->eh_context; 971 + u32 serror = ehc->i.serror; 972 + unsigned int err_mask = 0, action = 0; 973 + 974 + if (serror & SERR_PERSISTENT) { 975 + err_mask |= AC_ERR_ATA_BUS; 976 + action |= ATA_EH_HARDRESET; 977 + } 978 + if (serror & 979 + (SERR_DATA_RECOVERED | SERR_COMM_RECOVERED | SERR_DATA)) { 980 + err_mask |= AC_ERR_ATA_BUS; 981 + action |= ATA_EH_SOFTRESET; 982 + } 983 + if (serror & SERR_PROTOCOL) { 984 + err_mask |= AC_ERR_HSM; 985 + action |= ATA_EH_SOFTRESET; 986 + } 987 + if (serror & SERR_INTERNAL) { 988 + err_mask |= AC_ERR_SYSTEM; 989 + action |= ATA_EH_SOFTRESET; 990 + } 991 + if (serror & (SERR_PHYRDY_CHG | SERR_DEV_XCHG)) 992 + ata_ehi_hotplugged(&ehc->i); 993 + 994 + ehc->i.err_mask |= err_mask; 995 + ehc->i.action |= action; 996 + } 997 + 998 + /** 999 + * ata_eh_analyze_ncq_error - analyze NCQ error 1000 + * @ap: ATA port to analyze NCQ error for 1001 + * 1002 + * Read log page 10h, determine the offending qc and acquire 1003 + * error status TF. For NCQ device errors, all LLDDs have to do 1004 + * is setting AC_ERR_DEV in ehi->err_mask. This function takes 1005 + * care of the rest. 1006 + * 1007 + * LOCKING: 1008 + * Kernel thread context (may sleep). 1009 + */ 1010 + static void ata_eh_analyze_ncq_error(struct ata_port *ap) 1011 + { 1012 + struct ata_eh_context *ehc = &ap->eh_context; 1013 + struct ata_device *dev = ap->device; 1014 + struct ata_queued_cmd *qc; 1015 + struct ata_taskfile tf; 1016 + int tag, rc; 1017 + 1018 + /* if frozen, we can't do much */ 1019 + if (ap->flags & ATA_FLAG_FROZEN) 1020 + return; 1021 + 1022 + /* is it NCQ device error? */ 1023 + if (!ap->sactive || !(ehc->i.err_mask & AC_ERR_DEV)) 1024 + return; 1025 + 1026 + /* has LLDD analyzed already? 
*/
1027 + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) {
1028 + qc = __ata_qc_from_tag(ap, tag);
1029 +
1030 + if (!(qc->flags & ATA_QCFLAG_FAILED))
1031 + continue;
1032 +
1033 + if (qc->err_mask)
1034 + return;
1035 + }
1036 +
1037 + /* okay, this error is ours */
1038 + rc = ata_eh_read_log_10h(dev, &tag, &tf);
1039 + if (rc) {
1040 + ata_port_printk(ap, KERN_ERR, "failed to read log page 10h "
1041 + "(errno=%d)\n", rc);
1042 + return;
1043 + }
1044 +
1045 + if (!(ap->sactive & (1 << tag))) {
1046 + ata_port_printk(ap, KERN_ERR, "log page 10h reported "
1047 + "inactive tag %d\n", tag);
1048 + return;
1049 + }
1050 +
1051 + /* we've got the perpetrator, condemn it */
1052 + qc = __ata_qc_from_tag(ap, tag);
1053 + memcpy(&qc->result_tf, &tf, sizeof(tf));
1054 + qc->err_mask |= AC_ERR_DEV;
1055 + ehc->i.err_mask &= ~AC_ERR_DEV;
1056 + }
1057 +
1058 + /**
1059 + * ata_eh_analyze_tf - analyze taskfile of a failed qc
1060 + * @qc: qc to analyze
1061 + * @tf: Taskfile registers to analyze
1062 + *
1063 + * Analyze taskfile of @qc and further determine cause of
1064 + * failure. This function also requests ATAPI sense data if
1065 + * available.
1066 + *
1067 + * LOCKING:
1068 + * Kernel thread context (may sleep).
1069 + * 1070 + * RETURNS: 1071 + * Determined recovery action 1072 + */ 1073 + static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc, 1074 + const struct ata_taskfile *tf) 1075 + { 1076 + unsigned int tmp, action = 0; 1077 + u8 stat = tf->command, err = tf->feature; 1078 + 1079 + if ((stat & (ATA_BUSY | ATA_DRQ | ATA_DRDY)) != ATA_DRDY) { 1080 + qc->err_mask |= AC_ERR_HSM; 1081 + return ATA_EH_SOFTRESET; 1082 + } 1083 + 1084 + if (!(qc->err_mask & AC_ERR_DEV)) 1085 + return 0; 1086 + 1087 + switch (qc->dev->class) { 1088 + case ATA_DEV_ATA: 1089 + if (err & ATA_ICRC) 1090 + qc->err_mask |= AC_ERR_ATA_BUS; 1091 + if (err & ATA_UNC) 1092 + qc->err_mask |= AC_ERR_MEDIA; 1093 + if (err & ATA_IDNF) 1094 + qc->err_mask |= AC_ERR_INVALID; 1095 + break; 1096 + 1097 + case ATA_DEV_ATAPI: 1098 + tmp = atapi_eh_request_sense(qc->dev, 1099 + qc->scsicmd->sense_buffer); 1100 + if (!tmp) { 1101 + /* ATA_QCFLAG_SENSE_VALID is used to tell 1102 + * atapi_qc_complete() that sense data is 1103 + * already valid. 1104 + * 1105 + * TODO: interpret sense data and set 1106 + * appropriate err_mask. 
1107 + */
1108 + qc->flags |= ATA_QCFLAG_SENSE_VALID;
1109 + } else
1110 + qc->err_mask |= tmp;
1111 + }
1112 +
1113 + if (qc->err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT | AC_ERR_ATA_BUS))
1114 + action |= ATA_EH_SOFTRESET;
1115 +
1116 + return action;
1117 + }
1118 +
1119 + static int ata_eh_categorize_ering_entry(struct ata_ering_entry *ent)
1120 + {
1121 + if (ent->err_mask & (AC_ERR_ATA_BUS | AC_ERR_TIMEOUT))
1122 + return 1;
1123 +
1124 + if (ent->is_io) {
1125 + if (ent->err_mask & AC_ERR_HSM)
1126 + return 1;
1127 + if ((ent->err_mask &
1128 + (AC_ERR_DEV|AC_ERR_MEDIA|AC_ERR_INVALID)) == AC_ERR_DEV)
1129 + return 2;
1130 + }
1131 +
1132 + return 0;
1133 + }
1134 +
1135 + struct speed_down_needed_arg {
1136 + u64 since;
1137 + int nr_errors[3];
1138 + };
1139 +
1140 + static int speed_down_needed_cb(struct ata_ering_entry *ent, void *void_arg)
1141 + {
1142 + struct speed_down_needed_arg *arg = void_arg;
1143 +
1144 + if (ent->timestamp < arg->since)
1145 + return -1;
1146 +
1147 + arg->nr_errors[ata_eh_categorize_ering_entry(ent)]++;
1148 + return 0;
1149 + }
1150 +
1151 + /**
1152 + * ata_eh_speed_down_needed - Determine whether speed down is necessary
1153 + * @dev: Device of interest
1154 + *
1155 + * This function examines error ring of @dev and determines
1156 + * whether speed down is necessary. Speed down is necessary if
1157 + * there have been more than 3 of Cat-1 errors or 10 of Cat-2
1158 + * errors during last 15 minutes.
1159 + *
1160 + * Cat-1 errors are ATA_BUS, TIMEOUT for any command and HSM
1161 + * violation for known supported commands.
1162 + *
1163 + * Cat-2 errors are unclassified DEV error for known supported
1164 + * command.
1165 + *
1166 + * LOCKING:
1167 + * Inherited from caller.
1168 + * 1169 + * RETURNS: 1170 + * 1 if speed down is necessary, 0 otherwise 1171 + */ 1172 + static int ata_eh_speed_down_needed(struct ata_device *dev) 1173 + { 1174 + const u64 interval = 15LLU * 60 * HZ; 1175 + static const int err_limits[3] = { -1, 3, 10 }; 1176 + struct speed_down_needed_arg arg; 1177 + struct ata_ering_entry *ent; 1178 + int err_cat; 1179 + u64 j64; 1180 + 1181 + ent = ata_ering_top(&dev->ering); 1182 + if (!ent) 1183 + return 0; 1184 + 1185 + err_cat = ata_eh_categorize_ering_entry(ent); 1186 + if (err_cat == 0) 1187 + return 0; 1188 + 1189 + memset(&arg, 0, sizeof(arg)); 1190 + 1191 + j64 = get_jiffies_64(); 1192 + if (j64 >= interval) 1193 + arg.since = j64 - interval; 1194 + else 1195 + arg.since = 0; 1196 + 1197 + ata_ering_map(&dev->ering, speed_down_needed_cb, &arg); 1198 + 1199 + return arg.nr_errors[err_cat] > err_limits[err_cat]; 1200 + } 1201 + 1202 + /** 1203 + * ata_eh_speed_down - record error and speed down if necessary 1204 + * @dev: Failed device 1205 + * @is_io: Did the device fail during normal IO? 1206 + * @err_mask: err_mask of the error 1207 + * 1208 + * Record error and examine error history to determine whether 1209 + * adjusting transmission speed is necessary. It also sets 1210 + * transmission limits appropriately if such adjustment is 1211 + * necessary. 1212 + * 1213 + * LOCKING: 1214 + * Kernel thread context (may sleep). 
1215 + * 1216 + * RETURNS: 1217 + * 0 on success, -errno otherwise 1218 + */ 1219 + static int ata_eh_speed_down(struct ata_device *dev, int is_io, 1220 + unsigned int err_mask) 1221 + { 1222 + if (!err_mask) 1223 + return 0; 1224 + 1225 + /* record error and determine whether speed down is necessary */ 1226 + ata_ering_record(&dev->ering, is_io, err_mask); 1227 + 1228 + if (!ata_eh_speed_down_needed(dev)) 1229 + return 0; 1230 + 1231 + /* speed down SATA link speed if possible */ 1232 + if (sata_down_spd_limit(dev->ap) == 0) 1233 + return ATA_EH_HARDRESET; 1234 + 1235 + /* lower transfer mode */ 1236 + if (ata_down_xfermask_limit(dev, 0) == 0) 1237 + return ATA_EH_SOFTRESET; 1238 + 1239 + ata_dev_printk(dev, KERN_ERR, 1240 + "speed down requested but no transfer mode left\n"); 1241 + return 0; 1242 + } 1243 + 1244 + /** 1245 + * ata_eh_autopsy - analyze error and determine recovery action 1246 + * @ap: ATA port to perform autopsy on 1247 + * 1248 + * Analyze why @ap failed and determine which recovery action is 1249 + * needed. This function also sets more detailed AC_ERR_* values 1250 + * and fills sense data for ATAPI CHECK SENSE. 1251 + * 1252 + * LOCKING: 1253 + * Kernel thread context (may sleep). 
1254 + */ 1255 + static void ata_eh_autopsy(struct ata_port *ap) 1256 + { 1257 + struct ata_eh_context *ehc = &ap->eh_context; 1258 + unsigned int action = ehc->i.action; 1259 + struct ata_device *failed_dev = NULL; 1260 + unsigned int all_err_mask = 0; 1261 + int tag, is_io = 0; 1262 + u32 serror; 1263 + int rc; 1264 + 1265 + DPRINTK("ENTER\n"); 1266 + 1267 + /* obtain and analyze SError */ 1268 + rc = sata_scr_read(ap, SCR_ERROR, &serror); 1269 + if (rc == 0) { 1270 + ehc->i.serror |= serror; 1271 + ata_eh_analyze_serror(ap); 1272 + } else if (rc != -EOPNOTSUPP) 1273 + action |= ATA_EH_HARDRESET; 1274 + 1275 + /* analyze NCQ failure */ 1276 + ata_eh_analyze_ncq_error(ap); 1277 + 1278 + /* any real error trumps AC_ERR_OTHER */ 1279 + if (ehc->i.err_mask & ~AC_ERR_OTHER) 1280 + ehc->i.err_mask &= ~AC_ERR_OTHER; 1281 + 1282 + all_err_mask |= ehc->i.err_mask; 1283 + 1284 + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 1285 + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 1286 + 1287 + if (!(qc->flags & ATA_QCFLAG_FAILED)) 1288 + continue; 1289 + 1290 + /* inherit upper level err_mask */ 1291 + qc->err_mask |= ehc->i.err_mask; 1292 + 1293 + /* analyze TF */ 1294 + action |= ata_eh_analyze_tf(qc, &qc->result_tf); 1295 + 1296 + /* DEV errors are probably spurious in case of ATA_BUS error */ 1297 + if (qc->err_mask & AC_ERR_ATA_BUS) 1298 + qc->err_mask &= ~(AC_ERR_DEV | AC_ERR_MEDIA | 1299 + AC_ERR_INVALID); 1300 + 1301 + /* any real error trumps unknown error */ 1302 + if (qc->err_mask & ~AC_ERR_OTHER) 1303 + qc->err_mask &= ~AC_ERR_OTHER; 1304 + 1305 + /* SENSE_VALID trumps dev/unknown error and revalidation */ 1306 + if (qc->flags & ATA_QCFLAG_SENSE_VALID) { 1307 + qc->err_mask &= ~(AC_ERR_DEV | AC_ERR_OTHER); 1308 + action &= ~ATA_EH_REVALIDATE; 1309 + } 1310 + 1311 + /* accumulate error info */ 1312 + failed_dev = qc->dev; 1313 + all_err_mask |= qc->err_mask; 1314 + if (qc->flags & ATA_QCFLAG_IO) 1315 + is_io = 1; 1316 + } 1317 + 1318 + /* enforce default EH 
actions */ 1319 + if (ap->flags & ATA_FLAG_FROZEN || 1320 + all_err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT)) 1321 + action |= ATA_EH_SOFTRESET; 1322 + else if (all_err_mask) 1323 + action |= ATA_EH_REVALIDATE; 1324 + 1325 + /* if we have offending qcs and the associated failed device */ 1326 + if (failed_dev) { 1327 + /* speed down */ 1328 + action |= ata_eh_speed_down(failed_dev, is_io, all_err_mask); 1329 + 1330 + /* perform per-dev EH action only on the offending device */ 1331 + ehc->i.dev_action[failed_dev->devno] |= 1332 + action & ATA_EH_PERDEV_MASK; 1333 + action &= ~ATA_EH_PERDEV_MASK; 1334 + } 1335 + 1336 + /* record autopsy result */ 1337 + ehc->i.dev = failed_dev; 1338 + ehc->i.action = action; 1339 + 1340 + DPRINTK("EXIT\n"); 1341 + } 1342 + 1343 + /** 1344 + * ata_eh_report - report error handling to user 1345 + * @ap: ATA port EH is going on 1346 + * 1347 + * Report EH to user. 1348 + * 1349 + * LOCKING: 1350 + * None. 1351 + */ 1352 + static void ata_eh_report(struct ata_port *ap) 1353 + { 1354 + struct ata_eh_context *ehc = &ap->eh_context; 1355 + const char *frozen, *desc; 1356 + int tag, nr_failed = 0; 1357 + 1358 + desc = NULL; 1359 + if (ehc->i.desc[0] != '\0') 1360 + desc = ehc->i.desc; 1361 + 1362 + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 1363 + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 1364 + 1365 + if (!(qc->flags & ATA_QCFLAG_FAILED)) 1366 + continue; 1367 + if (qc->flags & ATA_QCFLAG_SENSE_VALID && !qc->err_mask) 1368 + continue; 1369 + 1370 + nr_failed++; 1371 + } 1372 + 1373 + if (!nr_failed && !ehc->i.err_mask) 1374 + return; 1375 + 1376 + frozen = ""; 1377 + if (ap->flags & ATA_FLAG_FROZEN) 1378 + frozen = " frozen"; 1379 + 1380 + if (ehc->i.dev) { 1381 + ata_dev_printk(ehc->i.dev, KERN_ERR, "exception Emask 0x%x " 1382 + "SAct 0x%x SErr 0x%x action 0x%x%s\n", 1383 + ehc->i.err_mask, ap->sactive, ehc->i.serror, 1384 + ehc->i.action, frozen); 1385 + if (desc) 1386 + ata_dev_printk(ehc->i.dev, KERN_ERR, "(%s)\n", desc); 
1387 + } else { 1388 + ata_port_printk(ap, KERN_ERR, "exception Emask 0x%x " 1389 + "SAct 0x%x SErr 0x%x action 0x%x%s\n", 1390 + ehc->i.err_mask, ap->sactive, ehc->i.serror, 1391 + ehc->i.action, frozen); 1392 + if (desc) 1393 + ata_port_printk(ap, KERN_ERR, "(%s)\n", desc); 1394 + } 1395 + 1396 + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 1397 + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 1398 + 1399 + if (!(qc->flags & ATA_QCFLAG_FAILED) || !qc->err_mask) 1400 + continue; 1401 + 1402 + ata_dev_printk(qc->dev, KERN_ERR, "tag %d cmd 0x%x " 1403 + "Emask 0x%x stat 0x%x err 0x%x (%s)\n", 1404 + qc->tag, qc->tf.command, qc->err_mask, 1405 + qc->result_tf.command, qc->result_tf.feature, 1406 + ata_err_string(qc->err_mask)); 1407 + } 1408 + } 1409 + 1410 + static int ata_do_reset(struct ata_port *ap, ata_reset_fn_t reset, 1411 + unsigned int *classes) 1412 + { 1413 + int i, rc; 1414 + 1415 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1416 + classes[i] = ATA_DEV_UNKNOWN; 1417 + 1418 + rc = reset(ap, classes); 1419 + if (rc) 1420 + return rc; 1421 + 1422 + /* If any class isn't ATA_DEV_UNKNOWN, consider classification 1423 + * is complete and convert all ATA_DEV_UNKNOWN to 1424 + * ATA_DEV_NONE. 
1425 + */ 1426 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1427 + if (classes[i] != ATA_DEV_UNKNOWN) 1428 + break; 1429 + 1430 + if (i < ATA_MAX_DEVICES) 1431 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1432 + if (classes[i] == ATA_DEV_UNKNOWN) 1433 + classes[i] = ATA_DEV_NONE; 1434 + 1435 + return 0; 1436 + } 1437 + 1438 + static int ata_eh_followup_srst_needed(int rc, int classify, 1439 + const unsigned int *classes) 1440 + { 1441 + if (rc == -EAGAIN) 1442 + return 1; 1443 + if (rc != 0) 1444 + return 0; 1445 + if (classify && classes[0] == ATA_DEV_UNKNOWN) 1446 + return 1; 1447 + return 0; 1448 + } 1449 + 1450 + static int ata_eh_reset(struct ata_port *ap, int classify, 1451 + ata_prereset_fn_t prereset, ata_reset_fn_t softreset, 1452 + ata_reset_fn_t hardreset, ata_postreset_fn_t postreset) 1453 + { 1454 + struct ata_eh_context *ehc = &ap->eh_context; 1455 + unsigned int *classes = ehc->classes; 1456 + int tries = ATA_EH_RESET_TRIES; 1457 + int verbose = !(ap->flags & ATA_FLAG_LOADING); 1458 + unsigned int action; 1459 + ata_reset_fn_t reset; 1460 + int i, did_followup_srst, rc; 1461 + 1462 + /* Determine which reset to use and record in ehc->i.action. 1463 + * prereset() may examine and modify it. 
1464 + */ 1465 + action = ehc->i.action; 1466 + ehc->i.action &= ~ATA_EH_RESET_MASK; 1467 + if (softreset && (!hardreset || (!sata_set_spd_needed(ap) && 1468 + !(action & ATA_EH_HARDRESET)))) 1469 + ehc->i.action |= ATA_EH_SOFTRESET; 1470 + else 1471 + ehc->i.action |= ATA_EH_HARDRESET; 1472 + 1473 + if (prereset) { 1474 + rc = prereset(ap); 1475 + if (rc) { 1476 + ata_port_printk(ap, KERN_ERR, 1477 + "prereset failed (errno=%d)\n", rc); 1478 + return rc; 1479 + } 1480 + } 1481 + 1482 + /* prereset() might have modified ehc->i.action */ 1483 + if (ehc->i.action & ATA_EH_HARDRESET) 1484 + reset = hardreset; 1485 + else if (ehc->i.action & ATA_EH_SOFTRESET) 1486 + reset = softreset; 1487 + else { 1488 + /* prereset told us not to reset, bang classes and return */ 1489 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1490 + classes[i] = ATA_DEV_NONE; 1491 + return 0; 1492 + } 1493 + 1494 + /* did prereset() screw up? if so, fix up to avoid oopsing */ 1495 + if (!reset) { 1496 + ata_port_printk(ap, KERN_ERR, "BUG: prereset() requested " 1497 + "invalid reset type\n"); 1498 + if (softreset) 1499 + reset = softreset; 1500 + else 1501 + reset = hardreset; 1502 + } 1503 + 1504 + retry: 1505 + /* shut up during boot probing */ 1506 + if (verbose) 1507 + ata_port_printk(ap, KERN_INFO, "%s resetting port\n", 1508 + reset == softreset ? 
"soft" : "hard"); 1509 + 1510 + /* reset */ 1511 + ata_eh_about_to_do(ap, NULL, ATA_EH_RESET_MASK); 1512 + ehc->i.flags |= ATA_EHI_DID_RESET; 1513 + 1514 + rc = ata_do_reset(ap, reset, classes); 1515 + 1516 + did_followup_srst = 0; 1517 + if (reset == hardreset && 1518 + ata_eh_followup_srst_needed(rc, classify, classes)) { 1519 + /* okay, let's do follow-up softreset */ 1520 + did_followup_srst = 1; 1521 + reset = softreset; 1522 + 1523 + if (!reset) { 1524 + ata_port_printk(ap, KERN_ERR, 1525 + "follow-up softreset required " 1526 + "but no softreset available\n"); 1527 + return -EINVAL; 1528 + } 1529 + 1530 + ata_eh_about_to_do(ap, NULL, ATA_EH_RESET_MASK); 1531 + rc = ata_do_reset(ap, reset, classes); 1532 + 1533 + if (rc == 0 && classify && 1534 + classes[0] == ATA_DEV_UNKNOWN) { 1535 + ata_port_printk(ap, KERN_ERR, 1536 + "classification failed\n"); 1537 + return -EINVAL; 1538 + } 1539 + } 1540 + 1541 + if (rc && --tries) { 1542 + const char *type; 1543 + 1544 + if (reset == softreset) { 1545 + if (did_followup_srst) 1546 + type = "follow-up soft"; 1547 + else 1548 + type = "soft"; 1549 + } else 1550 + type = "hard"; 1551 + 1552 + ata_port_printk(ap, KERN_WARNING, 1553 + "%sreset failed, retrying in 5 secs\n", type); 1554 + ssleep(5); 1555 + 1556 + if (reset == hardreset) 1557 + sata_down_spd_limit(ap); 1558 + if (hardreset) 1559 + reset = hardreset; 1560 + goto retry; 1561 + } 1562 + 1563 + if (rc == 0) { 1564 + /* After the reset, the device state is PIO 0 and the 1565 + * controller state is undefined. Record the mode.
1566 + */ 1567 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1568 + ap->device[i].pio_mode = XFER_PIO_0; 1569 + 1570 + if (postreset) 1571 + postreset(ap, classes); 1572 + 1573 + /* reset successful, schedule revalidation */ 1574 + ata_eh_done(ap, NULL, ATA_EH_RESET_MASK); 1575 + ehc->i.action |= ATA_EH_REVALIDATE; 1576 + } 1577 + 1578 + return rc; 1579 + } 1580 + 1581 + static int ata_eh_revalidate_and_attach(struct ata_port *ap, 1582 + struct ata_device **r_failed_dev) 1583 + { 1584 + struct ata_eh_context *ehc = &ap->eh_context; 1585 + struct ata_device *dev; 1586 + unsigned long flags; 1587 + int i, rc = 0; 1588 + 1589 + DPRINTK("ENTER\n"); 1590 + 1591 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 1592 + unsigned int action; 1593 + 1594 + dev = &ap->device[i]; 1595 + action = ehc->i.action | ehc->i.dev_action[dev->devno]; 1596 + 1597 + if (action & ATA_EH_REVALIDATE && ata_dev_enabled(dev)) { 1598 + if (ata_port_offline(ap)) { 1599 + rc = -EIO; 1600 + break; 1601 + } 1602 + 1603 + ata_eh_about_to_do(ap, dev, ATA_EH_REVALIDATE); 1604 + rc = ata_dev_revalidate(dev, 1605 + ehc->i.flags & ATA_EHI_DID_RESET); 1606 + if (rc) 1607 + break; 1608 + 1609 + ata_eh_done(ap, dev, ATA_EH_REVALIDATE); 1610 + 1611 + /* schedule the scsi_rescan_device() here */ 1612 + queue_work(ata_aux_wq, &(ap->scsi_rescan_task)); 1613 + } else if (dev->class == ATA_DEV_UNKNOWN && 1614 + ehc->tries[dev->devno] && 1615 + ata_class_enabled(ehc->classes[dev->devno])) { 1616 + dev->class = ehc->classes[dev->devno]; 1617 + 1618 + rc = ata_dev_read_id(dev, &dev->class, 1, dev->id); 1619 + if (rc == 0) 1620 + rc = ata_dev_configure(dev, 1); 1621 + 1622 + if (rc) { 1623 + dev->class = ATA_DEV_UNKNOWN; 1624 + break; 1625 + } 1626 + 1627 + spin_lock_irqsave(ap->lock, flags); 1628 + ap->flags |= ATA_FLAG_SCSI_HOTPLUG; 1629 + spin_unlock_irqrestore(ap->lock, flags); 1630 + } 1631 + } 1632 + 1633 + if (rc) 1634 + *r_failed_dev = dev; 1635 + 1636 + DPRINTK("EXIT\n"); 1637 + return rc; 1638 + } 1639 + 1640 + static 
int ata_port_nr_enabled(struct ata_port *ap) 1641 + { 1642 + int i, cnt = 0; 1643 + 1644 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1645 + if (ata_dev_enabled(&ap->device[i])) 1646 + cnt++; 1647 + return cnt; 1648 + } 1649 + 1650 + static int ata_port_nr_vacant(struct ata_port *ap) 1651 + { 1652 + int i, cnt = 0; 1653 + 1654 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1655 + if (ap->device[i].class == ATA_DEV_UNKNOWN) 1656 + cnt++; 1657 + return cnt; 1658 + } 1659 + 1660 + static int ata_eh_skip_recovery(struct ata_port *ap) 1661 + { 1662 + struct ata_eh_context *ehc = &ap->eh_context; 1663 + int i; 1664 + 1665 + if (ap->flags & ATA_FLAG_FROZEN || ata_port_nr_enabled(ap)) 1666 + return 0; 1667 + 1668 + /* skip if class codes for all vacant slots are ATA_DEV_NONE */ 1669 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 1670 + struct ata_device *dev = &ap->device[i]; 1671 + 1672 + if (dev->class == ATA_DEV_UNKNOWN && 1673 + ehc->classes[dev->devno] != ATA_DEV_NONE) 1674 + return 0; 1675 + } 1676 + 1677 + return 1; 1678 + } 1679 + 1680 + /** 1681 + * ata_eh_recover - recover host port after error 1682 + * @ap: host port to recover 1683 + * @prereset: prereset method (can be NULL) 1684 + * @softreset: softreset method (can be NULL) 1685 + * @hardreset: hardreset method (can be NULL) 1686 + * @postreset: postreset method (can be NULL) 1687 + * 1688 + * This is the alpha and omega, yin and yang, heart and soul of 1689 + * libata exception handling. On entry, actions required to 1690 + * recover the port and hotplug requests are recorded in 1691 + * eh_context. This function executes all the operations with 1692 + * appropriate retries and fallbacks to resurrect failed 1693 + * devices, detach goners and greet newcomers. 1694 + * 1695 + * LOCKING: 1696 + * Kernel thread context (may sleep). 1697 + * 1698 + * RETURNS: 1699 + * 0 on success, -errno on failure.
1700 + */ 1701 + static int ata_eh_recover(struct ata_port *ap, ata_prereset_fn_t prereset, 1702 + ata_reset_fn_t softreset, ata_reset_fn_t hardreset, 1703 + ata_postreset_fn_t postreset) 1704 + { 1705 + struct ata_eh_context *ehc = &ap->eh_context; 1706 + struct ata_device *dev; 1707 + int down_xfermask, i, rc; 1708 + 1709 + DPRINTK("ENTER\n"); 1710 + 1711 + /* prep for recovery */ 1712 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 1713 + dev = &ap->device[i]; 1714 + 1715 + ehc->tries[dev->devno] = ATA_EH_DEV_TRIES; 1716 + 1717 + /* process hotplug request */ 1718 + if (dev->flags & ATA_DFLAG_DETACH) 1719 + ata_eh_detach_dev(dev); 1720 + 1721 + if (!ata_dev_enabled(dev) && 1722 + ((ehc->i.probe_mask & (1 << dev->devno)) && 1723 + !(ehc->did_probe_mask & (1 << dev->devno)))) { 1724 + ata_eh_detach_dev(dev); 1725 + ata_dev_init(dev); 1726 + ehc->did_probe_mask |= (1 << dev->devno); 1727 + ehc->i.action |= ATA_EH_SOFTRESET; 1728 + } 1729 + } 1730 + 1731 + retry: 1732 + down_xfermask = 0; 1733 + rc = 0; 1734 + 1735 + /* if UNLOADING, finish immediately */ 1736 + if (ap->flags & ATA_FLAG_UNLOADING) 1737 + goto out; 1738 + 1739 + /* skip EH if possible. 
*/ 1740 + if (ata_eh_skip_recovery(ap)) 1741 + ehc->i.action = 0; 1742 + 1743 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1744 + ehc->classes[i] = ATA_DEV_UNKNOWN; 1745 + 1746 + /* reset */ 1747 + if (ehc->i.action & ATA_EH_RESET_MASK) { 1748 + ata_eh_freeze_port(ap); 1749 + 1750 + rc = ata_eh_reset(ap, ata_port_nr_vacant(ap), prereset, 1751 + softreset, hardreset, postreset); 1752 + if (rc) { 1753 + ata_port_printk(ap, KERN_ERR, 1754 + "reset failed, giving up\n"); 1755 + goto out; 1756 + } 1757 + 1758 + ata_eh_thaw_port(ap); 1759 + } 1760 + 1761 + /* revalidate existing devices and attach new ones */ 1762 + rc = ata_eh_revalidate_and_attach(ap, &dev); 1763 + if (rc) 1764 + goto dev_fail; 1765 + 1766 + /* configure transfer mode if the port has been reset */ 1767 + if (ehc->i.flags & ATA_EHI_DID_RESET) { 1768 + rc = ata_set_mode(ap, &dev); 1769 + if (rc) { 1770 + down_xfermask = 1; 1771 + goto dev_fail; 1772 + } 1773 + } 1774 + 1775 + goto out; 1776 + 1777 + dev_fail: 1778 + switch (rc) { 1779 + case -ENODEV: 1780 + /* device missing, schedule probing */ 1781 + ehc->i.probe_mask |= (1 << dev->devno); 1782 + case -EINVAL: 1783 + ehc->tries[dev->devno] = 0; 1784 + break; 1785 + case -EIO: 1786 + sata_down_spd_limit(ap); 1787 + default: 1788 + ehc->tries[dev->devno]--; 1789 + if (down_xfermask && 1790 + ata_down_xfermask_limit(dev, ehc->tries[dev->devno] == 1)) 1791 + ehc->tries[dev->devno] = 0; 1792 + } 1793 + 1794 + if (ata_dev_enabled(dev) && !ehc->tries[dev->devno]) { 1795 + /* disable device if it has used up all its chances */ 1796 + ata_dev_disable(dev); 1797 + 1798 + /* detach if offline */ 1799 + if (ata_port_offline(ap)) 1800 + ata_eh_detach_dev(dev); 1801 + 1802 + /* probe if requested */ 1803 + if ((ehc->i.probe_mask & (1 << dev->devno)) && 1804 + !(ehc->did_probe_mask & (1 << dev->devno))) { 1805 + ata_eh_detach_dev(dev); 1806 + ata_dev_init(dev); 1807 + 1808 + ehc->tries[dev->devno] = ATA_EH_DEV_TRIES; 1809 + ehc->did_probe_mask |= (1 << dev->devno); 1810 + 
ehc->i.action |= ATA_EH_SOFTRESET; 1811 + } 1812 + } else { 1813 + /* soft didn't work? be haaaaard */ 1814 + if (ehc->i.flags & ATA_EHI_DID_RESET) 1815 + ehc->i.action |= ATA_EH_HARDRESET; 1816 + else 1817 + ehc->i.action |= ATA_EH_SOFTRESET; 1818 + } 1819 + 1820 + if (ata_port_nr_enabled(ap)) { 1821 + ata_port_printk(ap, KERN_WARNING, "failed to recover some " 1822 + "devices, retrying in 5 secs\n"); 1823 + ssleep(5); 1824 + } else { 1825 + /* no device left, repeat fast */ 1826 + msleep(500); 1827 + } 1828 + 1829 + goto retry; 1830 + 1831 + out: 1832 + if (rc) { 1833 + for (i = 0; i < ATA_MAX_DEVICES; i++) 1834 + ata_dev_disable(&ap->device[i]); 1835 + } 1836 + 1837 + DPRINTK("EXIT, rc=%d\n", rc); 1838 + return rc; 1839 + } 1840 + 1841 + /** 1842 + * ata_eh_finish - finish up EH 1843 + * @ap: host port to finish EH for 1844 + * 1845 + * Recovery is complete. Clean up EH states and retry or finish 1846 + * failed qcs. 1847 + * 1848 + * LOCKING: 1849 + * None. 1850 + */ 1851 + static void ata_eh_finish(struct ata_port *ap) 1852 + { 1853 + int tag; 1854 + 1855 + /* retry or finish qcs */ 1856 + for (tag = 0; tag < ATA_MAX_QUEUE; tag++) { 1857 + struct ata_queued_cmd *qc = __ata_qc_from_tag(ap, tag); 1858 + 1859 + if (!(qc->flags & ATA_QCFLAG_FAILED)) 1860 + continue; 1861 + 1862 + if (qc->err_mask) { 1863 + /* FIXME: Once EH migration is complete, 1864 + * generate sense data in this function, 1865 + * considering both err_mask and tf. 
1866 + */ 1867 + if (qc->err_mask & AC_ERR_INVALID) 1868 + ata_eh_qc_complete(qc); 1869 + else 1870 + ata_eh_qc_retry(qc); 1871 + } else { 1872 + if (qc->flags & ATA_QCFLAG_SENSE_VALID) { 1873 + ata_eh_qc_complete(qc); 1874 + } else { 1875 + /* feed zero TF to sense generation */ 1876 + memset(&qc->result_tf, 0, sizeof(qc->result_tf)); 1877 + ata_eh_qc_retry(qc); 1878 + } 1879 + } 1880 + } 1881 + } 1882 + 1883 + /** 1884 + * ata_do_eh - do standard error handling 1885 + * @ap: host port to handle error for 1886 + * @prereset: prereset method (can be NULL) 1887 + * @softreset: softreset method (can be NULL) 1888 + * @hardreset: hardreset method (can be NULL) 1889 + * @postreset: postreset method (can be NULL) 1890 + * 1891 + * Perform standard error handling sequence. 1892 + * 1893 + * LOCKING: 1894 + * Kernel thread context (may sleep). 1895 + */ 1896 + void ata_do_eh(struct ata_port *ap, ata_prereset_fn_t prereset, 1897 + ata_reset_fn_t softreset, ata_reset_fn_t hardreset, 1898 + ata_postreset_fn_t postreset) 1899 + { 1900 + if (!(ap->flags & ATA_FLAG_LOADING)) { 1901 + ata_eh_autopsy(ap); 1902 + ata_eh_report(ap); 1903 + } 1904 + 1905 + ata_eh_recover(ap, prereset, softreset, hardreset, postreset); 1906 + ata_eh_finish(ap); 1907 + }
+539 -235
drivers/scsi/libata-scsi.c
··· 41 41 #include <scsi/scsi_cmnd.h> 42 42 #include <scsi/scsi_eh.h> 43 43 #include <scsi/scsi_device.h> 44 + #include <scsi/scsi_tcq.h> 44 45 #include <scsi/scsi_transport.h> 45 46 #include <linux/libata.h> 46 47 #include <linux/hdreg.h> ··· 52 51 #define SECTOR_SIZE 512 53 52 54 53 typedef unsigned int (*ata_xlat_func_t)(struct ata_queued_cmd *qc, const u8 *scsicmd); 55 - static struct ata_device * 56 - ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev); 57 - static void ata_scsi_error(struct Scsi_Host *host); 58 - enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd); 54 + 55 + static struct ata_device * __ata_scsi_find_dev(struct ata_port *ap, 56 + const struct scsi_device *scsidev); 57 + static struct ata_device * ata_scsi_find_dev(struct ata_port *ap, 58 + const struct scsi_device *scsidev); 59 + static int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel, 60 + unsigned int id, unsigned int lun); 61 + 59 62 60 63 #define RW_RECOVERY_MPAGE 0x1 61 64 #define RW_RECOVERY_MPAGE_LEN 12 ··· 107 102 struct scsi_transport_template ata_scsi_transport_template = { 108 103 .eh_strategy_handler = ata_scsi_error, 109 104 .eh_timed_out = ata_scsi_timed_out, 105 + .user_scan = ata_scsi_user_scan, 110 106 }; 111 107 112 108 ··· 310 304 311 305 /** 312 306 * ata_scsi_qc_new - acquire new ata_queued_cmd reference 313 - * @ap: ATA port to which the new command is attached 314 307 * @dev: ATA device to which the new command is attached 315 308 * @cmd: SCSI command that originated this ATA command 316 309 * @done: SCSI command completion function ··· 328 323 * RETURNS: 329 324 * Command allocated, or %NULL if none available. 
330 325 */ 331 - struct ata_queued_cmd *ata_scsi_qc_new(struct ata_port *ap, 332 - struct ata_device *dev, 326 + struct ata_queued_cmd *ata_scsi_qc_new(struct ata_device *dev, 333 327 struct scsi_cmnd *cmd, 334 328 void (*done)(struct scsi_cmnd *)) 335 329 { 336 330 struct ata_queued_cmd *qc; 337 331 338 - qc = ata_qc_new_init(ap, dev); 332 + qc = ata_qc_new_init(dev); 339 333 if (qc) { 340 334 qc->scsicmd = cmd; 341 335 qc->scsidone = done; ··· 401 397 402 398 int ata_scsi_device_resume(struct scsi_device *sdev) 403 399 { 404 - struct ata_port *ap = (struct ata_port *) &sdev->host->hostdata[0]; 405 - struct ata_device *dev = &ap->device[sdev->id]; 400 + struct ata_port *ap = ata_shost_to_port(sdev->host); 401 + struct ata_device *dev = __ata_scsi_find_dev(ap, sdev); 406 402 407 - return ata_device_resume(ap, dev); 403 + return ata_device_resume(dev); 408 404 } 409 405 410 406 int ata_scsi_device_suspend(struct scsi_device *sdev, pm_message_t state) 411 407 { 412 - struct ata_port *ap = (struct ata_port *) &sdev->host->hostdata[0]; 413 - struct ata_device *dev = &ap->device[sdev->id]; 408 + struct ata_port *ap = ata_shost_to_port(sdev->host); 409 + struct ata_device *dev = __ata_scsi_find_dev(ap, sdev); 414 410 415 - return ata_device_suspend(ap, dev, state); 411 + return ata_device_suspend(dev, state); 416 412 } 417 413 418 414 /** ··· 423 419 * @sk: the sense key we'll fill out 424 420 * @asc: the additional sense code we'll fill out 425 421 * @ascq: the additional sense code qualifier we'll fill out 422 + * @verbose: be verbose 426 423 * 427 424 * Converts an ATA error into a SCSI error. 
Fill out pointers to 428 425 * SK, ASC, and ASCQ bytes for later use in fixed or descriptor ··· 433 428 * spin_lock_irqsave(host_set lock) 434 429 */ 435 430 void ata_to_sense_error(unsigned id, u8 drv_stat, u8 drv_err, u8 *sk, u8 *asc, 436 - u8 *ascq) 431 + u8 *ascq, int verbose) 437 432 { 438 433 int i; 439 434 ··· 498 493 } 499 494 } 500 495 /* No immediate match */ 501 - printk(KERN_WARNING "ata%u: no sense translation for " 502 - "error 0x%02x\n", id, drv_err); 496 + if (verbose) 497 + printk(KERN_WARNING "ata%u: no sense translation for " 498 + "error 0x%02x\n", id, drv_err); 503 499 } 504 500 505 501 /* Fall back to interpreting status bits */ ··· 513 507 } 514 508 } 515 509 /* No error? Undecoded? */ 516 - printk(KERN_WARNING "ata%u: no sense translation for status: 0x%02x\n", 517 - id, drv_stat); 510 + if (verbose) 511 + printk(KERN_WARNING "ata%u: no sense translation for " 512 + "status: 0x%02x\n", id, drv_stat); 518 513 519 514 /* We need a sensible error return here, which is tricky, and one 520 515 that won't cause people to do things like return a disk wrongly */ ··· 524 517 *ascq = 0x00; 525 518 526 519 translate_done: 527 - printk(KERN_ERR "ata%u: translated ATA stat/err 0x%02x/%02x to " 528 - "SCSI SK/ASC/ASCQ 0x%x/%02x/%02x\n", id, drv_stat, drv_err, 529 - *sk, *asc, *ascq); 520 + if (verbose) 521 + printk(KERN_ERR "ata%u: translated ATA stat/err 0x%02x/%02x " 522 + "to SCSI SK/ASC/ASCQ 0x%x/%02x/%02x\n", 523 + id, drv_stat, drv_err, *sk, *asc, *ascq); 530 524 return; 531 525 } 532 526 ··· 547 539 void ata_gen_ata_desc_sense(struct ata_queued_cmd *qc) 548 540 { 549 541 struct scsi_cmnd *cmd = qc->scsicmd; 550 - struct ata_taskfile *tf = &qc->tf; 542 + struct ata_taskfile *tf = &qc->result_tf; 551 543 unsigned char *sb = cmd->sense_buffer; 552 544 unsigned char *desc = sb + 8; 545 + int verbose = qc->ap->ops->error_handler == NULL; 553 546 554 547 memset(sb, 0, SCSI_SENSE_BUFFERSIZE); 555 548 556 549 cmd->result = (DRIVER_SENSE << 24) | 
SAM_STAT_CHECK_CONDITION; 557 550 558 551 /* 559 - * Read the controller registers. 560 - */ 561 - WARN_ON(qc->ap->ops->tf_read == NULL); 562 - qc->ap->ops->tf_read(qc->ap, tf); 563 - 564 - /* 565 552 * Use ata_to_sense_error() to map status register bits 566 553 * onto sense key, asc & ascq. 567 554 */ 568 - if (tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { 555 + if (qc->err_mask || 556 + tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { 569 557 ata_to_sense_error(qc->ap->id, tf->command, tf->feature, 570 - &sb[1], &sb[2], &sb[3]); 558 + &sb[1], &sb[2], &sb[3], verbose); 571 559 sb[1] &= 0x0f; 572 560 } 573 561 ··· 619 615 void ata_gen_fixed_sense(struct ata_queued_cmd *qc) 620 616 { 621 617 struct scsi_cmnd *cmd = qc->scsicmd; 622 - struct ata_taskfile *tf = &qc->tf; 618 + struct ata_taskfile *tf = &qc->result_tf; 623 619 unsigned char *sb = cmd->sense_buffer; 620 + int verbose = qc->ap->ops->error_handler == NULL; 624 621 625 622 memset(sb, 0, SCSI_SENSE_BUFFERSIZE); 626 623 627 624 cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION; 628 625 629 626 /* 630 - * Read the controller registers. 631 - */ 632 - WARN_ON(qc->ap->ops->tf_read == NULL); 633 - qc->ap->ops->tf_read(qc->ap, tf); 634 - 635 - /* 636 627 * Use ata_to_sense_error() to map status register bits 637 628 * onto sense key, asc & ascq. 
638 629 */ 639 - if (tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { 630 + if (qc->err_mask || 631 + tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) { 640 632 ata_to_sense_error(qc->ap->id, tf->command, tf->feature, 641 - &sb[2], &sb[12], &sb[13]); 633 + &sb[2], &sb[12], &sb[13], verbose); 642 634 sb[2] &= 0x0f; 643 635 } 644 636 ··· 677 677 */ 678 678 max_sectors = ATA_MAX_SECTORS; 679 679 if (dev->flags & ATA_DFLAG_LBA48) 680 - max_sectors = 2048; 680 + max_sectors = ATA_MAX_SECTORS_LBA48; 681 681 if (dev->max_sectors) 682 682 max_sectors = dev->max_sectors; 683 683 ··· 691 691 if (dev->class == ATA_DEV_ATAPI) { 692 692 request_queue_t *q = sdev->request_queue; 693 693 blk_queue_max_hw_segments(q, q->max_hw_segments - 1); 694 + } 695 + 696 + if (dev->flags & ATA_DFLAG_NCQ) { 697 + int depth; 698 + 699 + depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id)); 700 + depth = min(ATA_MAX_QUEUE - 1, depth); 701 + scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, depth); 694 702 } 695 703 } 696 704 ··· 716 708 717 709 int ata_scsi_slave_config(struct scsi_device *sdev) 718 710 { 711 + struct ata_port *ap = ata_shost_to_port(sdev->host); 712 + struct ata_device *dev = __ata_scsi_find_dev(ap, sdev); 713 + 719 714 ata_scsi_sdev_config(sdev); 720 715 721 716 blk_queue_max_phys_segments(sdev->request_queue, LIBATA_MAX_PRD); 722 717 723 - if (sdev->id < ATA_MAX_DEVICES) { 724 - struct ata_port *ap; 725 - struct ata_device *dev; 726 - 727 - ap = (struct ata_port *) &sdev->host->hostdata[0]; 728 - dev = &ap->device[sdev->id]; 729 - 718 + if (dev) 730 719 ata_scsi_dev_config(sdev, dev); 731 - } 732 720 733 721 return 0; /* scsi layer doesn't check return value, sigh */ 734 722 } 735 723 736 724 /** 737 - * ata_scsi_timed_out - SCSI layer time out callback 738 - * @cmd: timed out SCSI command 725 + * ata_scsi_slave_destroy - SCSI device is about to be destroyed 726 + * @sdev: SCSI device to be destroyed 739 727 * 740 - * Handles SCSI layer timeout. 
We race with normal completion of 741 - * the qc for @cmd. If the qc is already gone, we lose and let 742 - * the scsi command finish (EH_HANDLED). Otherwise, the qc has 743 - * timed out and EH should be invoked. Prevent ata_qc_complete() 744 - * from finishing it by setting EH_SCHEDULED and return 745 - * EH_NOT_HANDLED. 728 + * @sdev is about to be destroyed for hot/warm unplugging. If 729 + * this unplugging was initiated by libata as indicated by NULL 730 + * dev->sdev, this function doesn't have to do anything. 731 + * Otherwise, SCSI layer initiated warm-unplug is in progress. 732 + * Clear dev->sdev, schedule the device for ATA detach and invoke 733 + * EH. 746 734 * 747 735 * LOCKING: 748 - * Called from timer context 736 + * Defined by SCSI layer. We don't really care. 737 + */ 738 + void ata_scsi_slave_destroy(struct scsi_device *sdev) 739 + { 740 + struct ata_port *ap = ata_shost_to_port(sdev->host); 741 + unsigned long flags; 742 + struct ata_device *dev; 743 + 744 + if (!ap->ops->error_handler) 745 + return; 746 + 747 + spin_lock_irqsave(ap->lock, flags); 748 + dev = __ata_scsi_find_dev(ap, sdev); 749 + if (dev && dev->sdev) { 750 + /* SCSI device already in CANCEL state, no need to offline it */ 751 + dev->sdev = NULL; 752 + dev->flags |= ATA_DFLAG_DETACH; 753 + ata_port_schedule_eh(ap); 754 + } 755 + spin_unlock_irqrestore(ap->lock, flags); 756 + } 757 + 758 + /** 759 + * ata_scsi_change_queue_depth - SCSI callback for queue depth config 760 + * @sdev: SCSI device to configure queue depth for 761 + * @queue_depth: new queue depth 762 + * 763 + * This is libata standard hostt->change_queue_depth callback. 764 + * SCSI will call into this callback when user tries to set queue 765 + * depth via sysfs. 766 + * 767 + * LOCKING: 768 + * SCSI layer (we don't care) 749 769 * 750 770 * RETURNS: 751 - * EH_HANDLED or EH_NOT_HANDLED 771 + * Newly configured queue depth. 
752 772 */ 753 - enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd) 773 + int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth) 754 774 { 755 - struct Scsi_Host *host = cmd->device->host; 756 - struct ata_port *ap = (struct ata_port *) &host->hostdata[0]; 757 - unsigned long flags; 758 - struct ata_queued_cmd *qc; 759 - enum scsi_eh_timer_return ret = EH_HANDLED; 775 + struct ata_port *ap = ata_shost_to_port(sdev->host); 776 + struct ata_device *dev; 777 + int max_depth; 760 778 761 - DPRINTK("ENTER\n"); 779 + if (queue_depth < 1) 780 + return sdev->queue_depth; 762 781 763 - spin_lock_irqsave(&ap->host_set->lock, flags); 764 - qc = ata_qc_from_tag(ap, ap->active_tag); 765 - if (qc) { 766 - WARN_ON(qc->scsicmd != cmd); 767 - qc->flags |= ATA_QCFLAG_EH_SCHEDULED; 768 - qc->err_mask |= AC_ERR_TIMEOUT; 769 - ret = EH_NOT_HANDLED; 770 - } 771 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 782 + dev = ata_scsi_find_dev(ap, sdev); 783 + if (!dev || !ata_dev_enabled(dev)) 784 + return sdev->queue_depth; 772 785 773 - DPRINTK("EXIT, ret=%d\n", ret); 774 - return ret; 775 - } 786 + max_depth = min(sdev->host->can_queue, ata_id_queue_depth(dev->id)); 787 + max_depth = min(ATA_MAX_QUEUE - 1, max_depth); 788 + if (queue_depth > max_depth) 789 + queue_depth = max_depth; 776 790 777 - /** 778 - * ata_scsi_error - SCSI layer error handler callback 779 - * @host: SCSI host on which error occurred 780 - * 781 - * Handles SCSI-layer-thrown error events. 
782 - * 783 - * LOCKING: 784 - * Inherited from SCSI layer (none, can sleep) 785 - */ 786 - 787 - static void ata_scsi_error(struct Scsi_Host *host) 788 - { 789 - struct ata_port *ap; 790 - unsigned long flags; 791 - 792 - DPRINTK("ENTER\n"); 793 - 794 - ap = (struct ata_port *) &host->hostdata[0]; 795 - 796 - spin_lock_irqsave(&ap->host_set->lock, flags); 797 - WARN_ON(ap->flags & ATA_FLAG_IN_EH); 798 - ap->flags |= ATA_FLAG_IN_EH; 799 - WARN_ON(ata_qc_from_tag(ap, ap->active_tag) == NULL); 800 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 801 - 802 - ata_port_flush_task(ap); 803 - 804 - ap->ops->eng_timeout(ap); 805 - 806 - WARN_ON(host->host_failed || !list_empty(&host->eh_cmd_q)); 807 - 808 - scsi_eh_flush_done_q(&ap->eh_done_q); 809 - 810 - spin_lock_irqsave(&ap->host_set->lock, flags); 811 - ap->flags &= ~ATA_FLAG_IN_EH; 812 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 813 - 814 - DPRINTK("EXIT\n"); 815 - } 816 - 817 - static void ata_eh_scsidone(struct scsi_cmnd *scmd) 818 - { 819 - /* nada */ 820 - } 821 - 822 - static void __ata_eh_qc_complete(struct ata_queued_cmd *qc) 823 - { 824 - struct ata_port *ap = qc->ap; 825 - struct scsi_cmnd *scmd = qc->scsicmd; 826 - unsigned long flags; 827 - 828 - spin_lock_irqsave(&ap->host_set->lock, flags); 829 - qc->scsidone = ata_eh_scsidone; 830 - __ata_qc_complete(qc); 831 - WARN_ON(ata_tag_valid(qc->tag)); 832 - spin_unlock_irqrestore(&ap->host_set->lock, flags); 833 - 834 - scsi_eh_finish_cmd(scmd, &ap->eh_done_q); 835 - } 836 - 837 - /** 838 - * ata_eh_qc_complete - Complete an active ATA command from EH 839 - * @qc: Command to complete 840 - * 841 - * Indicate to the mid and upper layers that an ATA command has 842 - * completed. To be used from EH. 
843 - */ 844 - void ata_eh_qc_complete(struct ata_queued_cmd *qc) 845 - { 846 - struct scsi_cmnd *scmd = qc->scsicmd; 847 - scmd->retries = scmd->allowed; 848 - __ata_eh_qc_complete(qc); 849 - } 850 - 851 - /** 852 - * ata_eh_qc_retry - Tell midlayer to retry an ATA command after EH 853 - * @qc: Command to retry 854 - * 855 - * Indicate to the mid and upper layers that an ATA command 856 - * should be retried. To be used from EH. 857 - * 858 - * SCSI midlayer limits the number of retries to scmd->allowed. 859 - * This function might need to adjust scmd->retries for commands 860 - * which get retried due to unrelated NCQ failures. 861 - */ 862 - void ata_eh_qc_retry(struct ata_queued_cmd *qc) 863 - { 864 - __ata_eh_qc_complete(qc); 791 + scsi_adjust_queue_depth(sdev, MSG_SIMPLE_TAG, queue_depth); 792 + return queue_depth; 865 793 } 866 794 867 795 /** ··· 835 891 tf->nsect = 1; /* 1 sector, lba=0 */ 836 892 837 893 if (qc->dev->flags & ATA_DFLAG_LBA) { 838 - qc->tf.flags |= ATA_TFLAG_LBA; 894 + tf->flags |= ATA_TFLAG_LBA; 839 895 840 896 tf->lbah = 0x0; 841 897 tf->lbam = 0x0; ··· 1139 1195 u64 block; 1140 1196 u32 n_block; 1141 1197 1198 + qc->flags |= ATA_QCFLAG_IO; 1142 1199 tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE; 1143 1200 1144 1201 if (scsicmd[0] == WRITE_10 || scsicmd[0] == WRITE_6 || ··· 1186 1241 */ 1187 1242 goto nothing_to_do; 1188 1243 1189 - if (dev->flags & ATA_DFLAG_LBA) { 1244 + if ((dev->flags & (ATA_DFLAG_PIO | ATA_DFLAG_NCQ)) == ATA_DFLAG_NCQ) { 1245 + /* yay, NCQ */ 1246 + if (!lba_48_ok(block, n_block)) 1247 + goto out_of_range; 1248 + 1249 + tf->protocol = ATA_PROT_NCQ; 1250 + tf->flags |= ATA_TFLAG_LBA | ATA_TFLAG_LBA48; 1251 + 1252 + if (tf->flags & ATA_TFLAG_WRITE) 1253 + tf->command = ATA_CMD_FPDMA_WRITE; 1254 + else 1255 + tf->command = ATA_CMD_FPDMA_READ; 1256 + 1257 + qc->nsect = n_block; 1258 + 1259 + tf->nsect = qc->tag << 3; 1260 + tf->hob_feature = (n_block >> 8) & 0xff; 1261 + tf->feature = n_block & 0xff; 1262 + 1263 + 
tf->hob_lbah = (block >> 40) & 0xff; 1264 + tf->hob_lbam = (block >> 32) & 0xff; 1265 + tf->hob_lbal = (block >> 24) & 0xff; 1266 + tf->lbah = (block >> 16) & 0xff; 1267 + tf->lbam = (block >> 8) & 0xff; 1268 + tf->lbal = block & 0xff; 1269 + 1270 + tf->device = 1 << 6; 1271 + if (tf->flags & ATA_TFLAG_FUA) 1272 + tf->device |= 1 << 7; 1273 + } else if (dev->flags & ATA_DFLAG_LBA) { 1190 1274 tf->flags |= ATA_TFLAG_LBA; 1191 1275 1192 1276 if (lba_28_ok(block, n_block)) { ··· 1306 1332 u8 *cdb = cmd->cmnd; 1307 1333 int need_sense = (qc->err_mask != 0); 1308 1334 1335 + /* We snoop the SET_FEATURES - Write Cache ON/OFF command, and 1336 + * schedule EH_REVALIDATE operation to update the IDENTIFY DEVICE 1337 + * cache 1338 + */ 1339 + if (!need_sense && (qc->tf.command == ATA_CMD_SET_FEATURES) && 1340 + ((qc->tf.feature == SETFEATURES_WC_ON) || 1341 + (qc->tf.feature == SETFEATURES_WC_OFF))) { 1342 + qc->ap->eh_info.action |= ATA_EH_REVALIDATE; 1343 + ata_port_schedule_eh(qc->ap); 1344 + } 1345 + 1309 1346 /* For ATA pass thru (SAT) commands, generate a sense block if 1310 1347 * user mandated it or if there's an error. Note that if we 1311 1348 * generate because the user forced us to, a check condition ··· 1341 1356 } 1342 1357 } 1343 1358 1344 - if (need_sense) { 1345 - /* The ata_gen_..._sense routines fill in tf */ 1346 - ata_dump_status(qc->ap->id, &qc->tf); 1347 - } 1359 + if (need_sense && !qc->ap->ops->error_handler) 1360 + ata_dump_status(qc->ap->id, &qc->result_tf); 1348 1361 1349 1362 qc->scsidone(cmd); 1350 1363 ··· 1350 1367 } 1351 1368 1352 1369 /** 1370 + * ata_scmd_need_defer - Check whether we need to defer scmd 1371 + * @dev: ATA device to which the command is addressed 1372 + * @is_io: Is the command IO (and thus possibly NCQ)? 1373 + * 1374 + * NCQ and non-NCQ commands cannot run together. As upper layer 1375 + * only knows the queue depth, we are responsible for maintaining 1376 + * exclusion. 
This function checks whether a new command can be 1377 + * issued to @dev. 1378 + * 1379 + * LOCKING: 1380 + * spin_lock_irqsave(host_set lock) 1381 + * 1382 + * RETURNS: 1383 + * 1 if deferring is needed, 0 otherwise. 1384 + */ 1385 + static int ata_scmd_need_defer(struct ata_device *dev, int is_io) 1386 + { 1387 + struct ata_port *ap = dev->ap; 1388 + 1389 + if (!(dev->flags & ATA_DFLAG_NCQ)) 1390 + return 0; 1391 + 1392 + if (is_io) { 1393 + if (!ata_tag_valid(ap->active_tag)) 1394 + return 0; 1395 + } else { 1396 + if (!ata_tag_valid(ap->active_tag) && !ap->sactive) 1397 + return 0; 1398 + } 1399 + return 1; 1400 + } 1401 + 1402 + /** 1353 1403 * ata_scsi_translate - Translate then issue SCSI command to ATA device 1354 - * @ap: ATA port to which the command is addressed 1355 1404 * @dev: ATA device to which the command is addressed 1356 1405 * @cmd: SCSI command to execute 1357 1406 * @done: SCSI command completion function ··· 1404 1389 * 1405 1390 * LOCKING: 1406 1391 * spin_lock_irqsave(host_set lock) 1392 + * 1393 + * RETURNS: 1394 + * 0 on success, SCSI_MLQUEUE_DEVICE_BUSY if the command 1395 + * needs to be deferred.
1407 1396 */ 1408 - 1409 - static void ata_scsi_translate(struct ata_port *ap, struct ata_device *dev, 1410 - struct scsi_cmnd *cmd, 1397 + static int ata_scsi_translate(struct ata_device *dev, struct scsi_cmnd *cmd, 1411 1398 void (*done)(struct scsi_cmnd *), 1412 1399 ata_xlat_func_t xlat_func) 1413 1400 { 1414 1401 struct ata_queued_cmd *qc; 1415 1402 u8 *scsicmd = cmd->cmnd; 1403 + int is_io = xlat_func == ata_scsi_rw_xlat; 1416 1404 1417 1405 VPRINTK("ENTER\n"); 1418 1406 1419 - qc = ata_scsi_qc_new(ap, dev, cmd, done); 1407 + if (unlikely(ata_scmd_need_defer(dev, is_io))) 1408 + goto defer; 1409 + 1410 + qc = ata_scsi_qc_new(dev, cmd, done); 1420 1411 if (!qc) 1421 1412 goto err_mem; 1422 1413 ··· 1430 1409 if (cmd->sc_data_direction == DMA_FROM_DEVICE || 1431 1410 cmd->sc_data_direction == DMA_TO_DEVICE) { 1432 1411 if (unlikely(cmd->request_bufflen < 1)) { 1433 - printk(KERN_WARNING "ata%u(%u): WARNING: zero len r/w req\n", 1434 - ap->id, dev->devno); 1412 + ata_dev_printk(dev, KERN_WARNING, 1413 + "WARNING: zero len r/w req\n"); 1435 1414 goto err_did; 1436 1415 } 1437 1416 ··· 1453 1432 ata_qc_issue(qc); 1454 1433 1455 1434 VPRINTK("EXIT\n"); 1456 - return; 1435 + return 0; 1457 1436 1458 1437 early_finish: 1459 1438 ata_qc_free(qc); 1460 1439 done(cmd); 1461 1440 DPRINTK("EXIT - early finish (good or error)\n"); 1462 - return; 1441 + return 0; 1463 1442 1464 1443 err_did: 1465 1444 ata_qc_free(qc); ··· 1467 1446 cmd->result = (DID_ERROR << 16); 1468 1447 done(cmd); 1469 1448 DPRINTK("EXIT - internal\n"); 1470 - return; 1449 + return 0; 1450 + 1451 + defer: 1452 + DPRINTK("EXIT - defer\n"); 1453 + return SCSI_MLQUEUE_DEVICE_BUSY; 1471 1454 } 1472 1455 1473 1456 /** ··· 1969 1944 return 0; 1970 1945 1971 1946 dpofua = 0; 1972 - if (ata_dev_supports_fua(args->id) && dev->flags & ATA_DFLAG_LBA48 && 1947 + if (ata_dev_supports_fua(args->id) && (dev->flags & ATA_DFLAG_LBA48) && 1973 1948 (!(dev->flags & ATA_DFLAG_PIO) || dev->multi_count)) 1974 1949 dpofua = 1 
<< 4; 1975 1950 ··· 2162 2137 2163 2138 static void atapi_sense_complete(struct ata_queued_cmd *qc) 2164 2139 { 2165 - if (qc->err_mask && ((qc->err_mask & AC_ERR_DEV) == 0)) 2140 + if (qc->err_mask && ((qc->err_mask & AC_ERR_DEV) == 0)) { 2166 2141 /* FIXME: not quite right; we don't want the 2167 2142 * translation of taskfile registers into 2168 2143 * a sense descriptors, since that's only 2169 2144 * correct for ATA, not ATAPI 2170 2145 */ 2171 2146 ata_gen_ata_desc_sense(qc); 2147 + } 2172 2148 2173 2149 qc->scsidone(qc->scsicmd); 2174 2150 ata_qc_free(qc); ··· 2233 2207 2234 2208 VPRINTK("ENTER, err_mask 0x%X\n", err_mask); 2235 2209 2210 + /* handle completion from new EH */ 2211 + if (unlikely(qc->ap->ops->error_handler && 2212 + (err_mask || qc->flags & ATA_QCFLAG_SENSE_VALID))) { 2213 + 2214 + if (!(qc->flags & ATA_QCFLAG_SENSE_VALID)) { 2215 + /* FIXME: not quite right; we don't want the 2216 + * translation of taskfile registers into a 2217 + * sense descriptors, since that's only 2218 + * correct for ATA, not ATAPI 2219 + */ 2220 + ata_gen_ata_desc_sense(qc); 2221 + } 2222 + 2223 + qc->scsicmd->result = SAM_STAT_CHECK_CONDITION; 2224 + qc->scsidone(cmd); 2225 + ata_qc_free(qc); 2226 + return; 2227 + } 2228 + 2229 + /* successful completion or old EH failure path */ 2236 2230 if (unlikely(err_mask & AC_ERR_DEV)) { 2237 2231 cmd->result = SAM_STAT_CHECK_CONDITION; 2238 2232 atapi_request_sense(qc); 2239 2233 return; 2240 - } 2241 - 2242 - else if (unlikely(err_mask)) 2234 + } else if (unlikely(err_mask)) { 2243 2235 /* FIXME: not quite right; we don't want the 2244 2236 * translation of taskfile registers into 2245 2237 * a sense descriptors, since that's only 2246 2238 * correct for ATA, not ATAPI 2247 2239 */ 2248 2240 ata_gen_ata_desc_sense(qc); 2249 - 2250 - else { 2241 + } else { 2251 2242 u8 *scsicmd = cmd->cmnd; 2252 2243 2253 2244 if ((scsicmd[0] == INQUIRY) && ((scsicmd[1] & 0x03) == 0)) { ··· 2346 2303 qc->tf.protocol = ATA_PROT_ATAPI_DMA; 
2347 2304 qc->tf.feature |= ATAPI_PKT_DMA; 2348 2305 2349 - #ifdef ATAPI_ENABLE_DMADIR 2350 - /* some SATA bridges need us to indicate data xfer direction */ 2351 - if (cmd->sc_data_direction != DMA_TO_DEVICE) 2306 + if (atapi_dmadir && (cmd->sc_data_direction != DMA_TO_DEVICE)) 2307 + /* some SATA bridges need us to indicate data xfer direction */ 2352 2308 qc->tf.feature |= ATAPI_DMADIR; 2353 - #endif 2354 2309 } 2355 2310 2356 2311 qc->nbytes = cmd->request_bufflen; 2357 2312 2358 2313 return 0; 2314 + } 2315 + 2316 + static struct ata_device * ata_find_dev(struct ata_port *ap, int id) 2317 + { 2318 + if (likely(id < ATA_MAX_DEVICES)) 2319 + return &ap->device[id]; 2320 + return NULL; 2321 + } 2322 + 2323 + static struct ata_device * __ata_scsi_find_dev(struct ata_port *ap, 2324 + const struct scsi_device *scsidev) 2325 + { 2326 + /* skip commands not addressed to targets we simulate */ 2327 + if (unlikely(scsidev->channel || scsidev->lun)) 2328 + return NULL; 2329 + 2330 + return ata_find_dev(ap, scsidev->id); 2331 + } 2332 + 2333 + /** 2334 + * ata_scsi_dev_enabled - determine if device is enabled 2335 + * @dev: ATA device 2336 + * 2337 + * Determine if commands should be sent to the specified device. 2338 + * 2339 + * LOCKING: 2340 + * spin_lock_irqsave(host_set lock) 2341 + * 2342 + * RETURNS: 2343 + * 0 if commands are not allowed / 1 if commands are allowed 2344 + */ 2345 + 2346 + static int ata_scsi_dev_enabled(struct ata_device *dev) 2347 + { 2348 + if (unlikely(!ata_dev_enabled(dev))) 2349 + return 0; 2350 + 2351 + if (!atapi_enabled || (dev->ap->flags & ATA_FLAG_NO_ATAPI)) { 2352 + if (unlikely(dev->class == ATA_DEV_ATAPI)) { 2353 + ata_dev_printk(dev, KERN_WARNING, 2354 + "WARNING: ATAPI is %s, device ignored.\n", 2355 + atapi_enabled ? 
"not supported with this driver" : "disabled"); 2356 + return 0; 2357 + } 2358 + } 2359 + 2360 + return 1; 2359 2361 } 2360 2362 2361 2363 /** ··· 2419 2331 * RETURNS: 2420 2332 * Associated ATA device, or %NULL if not found. 2421 2333 */ 2422 - 2423 2334 static struct ata_device * 2424 2335 ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev) 2425 2336 { 2426 - struct ata_device *dev; 2337 + struct ata_device *dev = __ata_scsi_find_dev(ap, scsidev); 2427 2338 2428 - /* skip commands not addressed to targets we simulate */ 2429 - if (likely(scsidev->id < ATA_MAX_DEVICES)) 2430 - dev = &ap->device[scsidev->id]; 2431 - else 2339 + if (unlikely(!dev || !ata_scsi_dev_enabled(dev))) 2432 2340 return NULL; 2433 - 2434 - if (unlikely((scsidev->channel != 0) || 2435 - (scsidev->lun != 0))) 2436 - return NULL; 2437 - 2438 - if (unlikely(!ata_dev_present(dev))) 2439 - return NULL; 2440 - 2441 - if (!atapi_enabled || (ap->flags & ATA_FLAG_NO_ATAPI)) { 2442 - if (unlikely(dev->class == ATA_DEV_ATAPI)) { 2443 - printk(KERN_WARNING "ata%u(%u): WARNING: ATAPI is %s, device ignored.\n", 2444 - ap->id, dev->devno, atapi_enabled ? 
"not supported with this driver" : "disabled"); 2445 - return NULL; 2446 - } 2447 - } 2448 2341 2449 2342 return dev; 2450 2343 } ··· 2483 2414 { 2484 2415 struct ata_taskfile *tf = &(qc->tf); 2485 2416 struct scsi_cmnd *cmd = qc->scsicmd; 2417 + struct ata_device *dev = qc->dev; 2486 2418 2487 2419 if ((tf->protocol = ata_scsi_map_proto(scsicmd[1])) == ATA_PROT_UNKNOWN) 2420 + goto invalid_fld; 2421 + 2422 + /* We may not issue DMA commands if no DMA mode is set */ 2423 + if (tf->protocol == ATA_PROT_DMA && dev->dma_mode == 0) 2488 2424 goto invalid_fld; 2489 2425 2490 2426 if (scsicmd[1] & 0xe0) ··· 2576 2502 */ 2577 2503 qc->nsect = cmd->request_bufflen / ATA_SECT_SIZE; 2578 2504 2505 + /* request result TF */ 2506 + qc->flags |= ATA_QCFLAG_RESULT_TF; 2507 + 2579 2508 return 0; 2580 2509 2581 2510 invalid_fld: ··· 2655 2578 #endif 2656 2579 } 2657 2580 2658 - static inline void __ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *), 2659 - struct ata_port *ap, struct ata_device *dev) 2581 + static inline int __ata_scsi_queuecmd(struct scsi_cmnd *cmd, 2582 + void (*done)(struct scsi_cmnd *), 2583 + struct ata_device *dev) 2660 2584 { 2585 + int rc = 0; 2586 + 2661 2587 if (dev->class == ATA_DEV_ATA) { 2662 2588 ata_xlat_func_t xlat_func = ata_get_xlat_func(dev, 2663 2589 cmd->cmnd[0]); 2664 2590 2665 2591 if (xlat_func) 2666 - ata_scsi_translate(ap, dev, cmd, done, xlat_func); 2592 + rc = ata_scsi_translate(dev, cmd, done, xlat_func); 2667 2593 else 2668 - ata_scsi_simulate(ap, dev, cmd, done); 2594 + ata_scsi_simulate(dev, cmd, done); 2669 2595 } else 2670 - ata_scsi_translate(ap, dev, cmd, done, atapi_xlat); 2596 + rc = ata_scsi_translate(dev, cmd, done, atapi_xlat); 2597 + 2598 + return rc; 2671 2599 } 2672 2600 2673 2601 /** ··· 2691 2609 * Releases scsi-layer-held lock, and obtains host_set lock. 2692 2610 * 2693 2611 * RETURNS: 2694 - * Zero. 2612 + * Return value from __ata_scsi_queuecmd() if @cmd can be queued, 2613 + * 0 otherwise. 
2695 2614 */ 2696 - 2697 2615 int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) 2698 2616 { 2699 2617 struct ata_port *ap; 2700 2618 struct ata_device *dev; 2701 2619 struct scsi_device *scsidev = cmd->device; 2702 2620 struct Scsi_Host *shost = scsidev->host; 2621 + int rc = 0; 2703 2622 2704 - ap = (struct ata_port *) &shost->hostdata[0]; 2623 + ap = ata_shost_to_port(shost); 2705 2624 2706 2625 spin_unlock(shost->host_lock); 2707 - spin_lock(&ap->host_set->lock); 2626 + spin_lock(ap->lock); 2708 2627 2709 2628 ata_scsi_dump_cdb(ap, cmd); 2710 2629 2711 2630 dev = ata_scsi_find_dev(ap, scsidev); 2712 2631 if (likely(dev)) 2713 - __ata_scsi_queuecmd(cmd, done, ap, dev); 2632 + rc = __ata_scsi_queuecmd(cmd, done, dev); 2714 2633 else { 2715 2634 cmd->result = (DID_BAD_TARGET << 16); 2716 2635 done(cmd); 2717 2636 } 2718 2637 2719 - spin_unlock(&ap->host_set->lock); 2638 + spin_unlock(ap->lock); 2720 2639 spin_lock(shost->host_lock); 2721 - return 0; 2640 + return rc; 2722 2641 } 2723 2642 2724 2643 /** 2725 2644 * ata_scsi_simulate - simulate SCSI command on ATA device 2726 - * @ap: port the device is connected to 2727 2645 * @dev: the target device 2728 2646 * @cmd: SCSI command being sent to device. 2729 2647 * @done: SCSI command completion function. 
··· 2735 2653 * spin_lock_irqsave(host_set lock) 2736 2654 */ 2737 2655 2738 - void ata_scsi_simulate(struct ata_port *ap, struct ata_device *dev, 2739 - struct scsi_cmnd *cmd, 2656 + void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd, 2740 2657 void (*done)(struct scsi_cmnd *)) 2741 2658 { 2742 2659 struct ata_scsi_args args; 2743 2660 const u8 *scsicmd = cmd->cmnd; 2744 2661 2745 - args.ap = ap; 2746 2662 args.dev = dev; 2747 2663 args.id = dev->id; 2748 2664 args.cmd = cmd; ··· 2812 2732 2813 2733 void ata_scsi_scan_host(struct ata_port *ap) 2814 2734 { 2815 - struct ata_device *dev; 2816 2735 unsigned int i; 2817 2736 2818 - if (ap->flags & ATA_FLAG_PORT_DISABLED) 2737 + if (ap->flags & ATA_FLAG_DISABLED) 2819 2738 return; 2739 + 2740 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 2741 + struct ata_device *dev = &ap->device[i]; 2742 + struct scsi_device *sdev; 2743 + 2744 + if (!ata_dev_enabled(dev) || dev->sdev) 2745 + continue; 2746 + 2747 + sdev = __scsi_add_device(ap->host, 0, i, 0, NULL); 2748 + if (!IS_ERR(sdev)) { 2749 + dev->sdev = sdev; 2750 + scsi_device_put(sdev); 2751 + } 2752 + } 2753 + } 2754 + 2755 + /** 2756 + * ata_scsi_offline_dev - offline attached SCSI device 2757 + * @dev: ATA device to offline attached SCSI device for 2758 + * 2759 + * This function is called from ata_eh_hotplug() and responsible 2760 + * for taking the SCSI device attached to @dev offline. This 2761 + * function is called with host_set lock which protects dev->sdev 2762 + * against clearing. 2763 + * 2764 + * LOCKING: 2765 + * spin_lock_irqsave(host_set lock) 2766 + * 2767 + * RETURNS: 2768 + * 1 if attached SCSI device exists, 0 otherwise. 
2769 + */ 2770 + int ata_scsi_offline_dev(struct ata_device *dev) 2771 + { 2772 + if (dev->sdev) { 2773 + scsi_device_set_state(dev->sdev, SDEV_OFFLINE); 2774 + return 1; 2775 + } 2776 + return 0; 2777 + } 2778 + 2779 + /** 2780 + * ata_scsi_remove_dev - remove attached SCSI device 2781 + * @dev: ATA device to remove attached SCSI device for 2782 + * 2783 + * This function is called from ata_eh_scsi_hotplug() and 2784 + * responsible for removing the SCSI device attached to @dev. 2785 + * 2786 + * LOCKING: 2787 + * Kernel thread context (may sleep). 2788 + */ 2789 + static void ata_scsi_remove_dev(struct ata_device *dev) 2790 + { 2791 + struct ata_port *ap = dev->ap; 2792 + struct scsi_device *sdev; 2793 + unsigned long flags; 2794 + 2795 + /* Alas, we need to grab scan_mutex to ensure SCSI device 2796 + * state doesn't change underneath us and thus 2797 + * scsi_device_get() always succeeds. The mutex locking can 2798 + * be removed if there is __scsi_device_get() interface which 2799 + * increments reference counts regardless of device state. 2800 + */ 2801 + mutex_lock(&ap->host->scan_mutex); 2802 + spin_lock_irqsave(ap->lock, flags); 2803 + 2804 + /* clearing dev->sdev is protected by host_set lock */ 2805 + sdev = dev->sdev; 2806 + dev->sdev = NULL; 2807 + 2808 + if (sdev) { 2809 + /* If user initiated unplug races with us, sdev can go 2810 + * away underneath us after the host_set lock and 2811 + * scan_mutex are released. Hold onto it. 2812 + */ 2813 + if (scsi_device_get(sdev) == 0) { 2814 + /* The following ensures the attached sdev is 2815 + * offline on return from ata_scsi_offline_dev() 2816 + * regardless it wins or loses the race 2817 + * against this function. 
2818 + */ 2819 + scsi_device_set_state(sdev, SDEV_OFFLINE); 2820 + } else { 2821 + WARN_ON(1); 2822 + sdev = NULL; 2823 + } 2824 + } 2825 + 2826 + spin_unlock_irqrestore(ap->lock, flags); 2827 + mutex_unlock(&ap->host->scan_mutex); 2828 + 2829 + if (sdev) { 2830 + ata_dev_printk(dev, KERN_INFO, "detaching (SCSI %s)\n", 2831 + sdev->sdev_gendev.bus_id); 2832 + 2833 + scsi_remove_device(sdev); 2834 + scsi_device_put(sdev); 2835 + } 2836 + } 2837 + 2838 + /** 2839 + * ata_scsi_hotplug - SCSI part of hotplug 2840 + * @data: Pointer to ATA port to perform SCSI hotplug on 2841 + * 2842 + * Perform SCSI part of hotplug. It's executed from a separate 2843 + * workqueue after EH completes. This is necessary because SCSI 2844 + * hot plugging requires working EH and hot unplugging is 2845 + * synchronized with hot plugging with a mutex. 2846 + * 2847 + * LOCKING: 2848 + * Kernel thread context (may sleep). 2849 + */ 2850 + void ata_scsi_hotplug(void *data) 2851 + { 2852 + struct ata_port *ap = data; 2853 + int i; 2854 + 2855 + if (ap->flags & ATA_FLAG_UNLOADING) { 2856 + DPRINTK("ENTER/EXIT - unloading\n"); 2857 + return; 2858 + } 2859 + 2860 + DPRINTK("ENTER\n"); 2861 + 2862 + /* unplug detached devices */ 2863 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 2864 + struct ata_device *dev = &ap->device[i]; 2865 + unsigned long flags; 2866 + 2867 + if (!(dev->flags & ATA_DFLAG_DETACHED)) 2868 + continue; 2869 + 2870 + spin_lock_irqsave(ap->lock, flags); 2871 + dev->flags &= ~ATA_DFLAG_DETACHED; 2872 + spin_unlock_irqrestore(ap->lock, flags); 2873 + 2874 + ata_scsi_remove_dev(dev); 2875 + } 2876 + 2877 + /* scan for new ones */ 2878 + ata_scsi_scan_host(ap); 2879 + 2880 + /* If we scanned while EH was in progress, scan would have 2881 + * failed silently. Requeue if there are enabled but 2882 + * unattached devices. 
2883 + */ 2884 + for (i = 0; i < ATA_MAX_DEVICES; i++) { 2885 + struct ata_device *dev = &ap->device[i]; 2886 + if (ata_dev_enabled(dev) && !dev->sdev) { 2887 + queue_delayed_work(ata_aux_wq, &ap->hotplug_task, HZ); 2888 + break; 2889 + } 2890 + } 2891 + 2892 + DPRINTK("EXIT\n"); 2893 + } 2894 + 2895 + /** 2896 + * ata_scsi_user_scan - indication for user-initiated bus scan 2897 + * @shost: SCSI host to scan 2898 + * @channel: Channel to scan 2899 + * @id: ID to scan 2900 + * @lun: LUN to scan 2901 + * 2902 + * This function is called when user explicitly requests bus 2903 + * scan. Set probe pending flag and invoke EH. 2904 + * 2905 + * LOCKING: 2906 + * SCSI layer (we don't care) 2907 + * 2908 + * RETURNS: 2909 + * Zero. 2910 + */ 2911 + static int ata_scsi_user_scan(struct Scsi_Host *shost, unsigned int channel, 2912 + unsigned int id, unsigned int lun) 2913 + { 2914 + struct ata_port *ap = ata_shost_to_port(shost); 2915 + unsigned long flags; 2916 + int rc = 0; 2917 + 2918 + if (!ap->ops->error_handler) 2919 + return -EOPNOTSUPP; 2920 + 2921 + if ((channel != SCAN_WILD_CARD && channel != 0) || 2922 + (lun != SCAN_WILD_CARD && lun != 0)) 2923 + return -EINVAL; 2924 + 2925 + spin_lock_irqsave(ap->lock, flags); 2926 + 2927 + if (id == SCAN_WILD_CARD) { 2928 + ap->eh_info.probe_mask |= (1 << ATA_MAX_DEVICES) - 1; 2929 + ap->eh_info.action |= ATA_EH_SOFTRESET; 2930 + } else { 2931 + struct ata_device *dev = ata_find_dev(ap, id); 2932 + 2933 + if (dev) { 2934 + ap->eh_info.probe_mask |= 1 << dev->devno; 2935 + ap->eh_info.action |= ATA_EH_SOFTRESET; 2936 + } else 2937 + rc = -EINVAL; 2938 + } 2939 + 2940 + if (rc == 0) 2941 + ata_port_schedule_eh(ap); 2942 + 2943 + spin_unlock_irqrestore(ap->lock, flags); 2944 + 2945 + return rc; 2946 + } 2947 + 2948 + /** 2949 + * ata_scsi_dev_rescan - initiate scsi_rescan_device() 2950 + * @data: Pointer to ATA port to perform scsi_rescan_device() 2951 + * 2952 + * After ATA pass thru (SAT) commands are executed successfully, 2953 
+ * libata need to propagate the changes to SCSI layer. This 2954 + * function must be executed from ata_aux_wq such that sdev 2955 + * attach/detach don't race with rescan. 2956 + * 2957 + * LOCKING: 2958 + * Kernel thread context (may sleep). 2959 + */ 2960 + void ata_scsi_dev_rescan(void *data) 2961 + { 2962 + struct ata_port *ap = data; 2963 + struct ata_device *dev; 2964 + unsigned int i; 2820 2965 2821 2966 for (i = 0; i < ATA_MAX_DEVICES; i++) { 2822 2967 dev = &ap->device[i]; 2823 2968 2824 - if (ata_dev_present(dev)) 2825 - scsi_scan_target(&ap->host->shost_gendev, 0, i, 0, 0); 2969 + if (ata_dev_enabled(dev) && dev->sdev) 2970 + scsi_rescan_device(&(dev->sdev->sdev_gendev)); 2826 2971 } 2827 2972 } 2828 -
+27 -4
drivers/scsi/libata.h
··· 29 29 #define __LIBATA_H__ 30 30 31 31 #define DRV_NAME "libata" 32 - #define DRV_VERSION "1.20" /* must be exactly four chars */ 32 + #define DRV_VERSION "1.30" /* must be exactly four chars */ 33 33 34 34 struct ata_scsi_args { 35 - struct ata_port *ap; 36 35 struct ata_device *dev; 37 36 u16 *id; 38 37 struct scsi_cmnd *cmd; ··· 39 40 }; 40 41 41 42 /* libata-core.c */ 43 + extern struct workqueue_struct *ata_aux_wq; 42 44 extern int atapi_enabled; 45 + extern int atapi_dmadir; 43 46 extern int libata_fua; 44 - extern struct ata_queued_cmd *ata_qc_new_init(struct ata_port *ap, 45 - struct ata_device *dev); 47 + extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev); 46 48 extern int ata_rwcmd_protocol(struct ata_queued_cmd *qc); 49 + extern void ata_dev_disable(struct ata_device *dev); 47 50 extern void ata_port_flush_task(struct ata_port *ap); 51 + extern unsigned ata_exec_internal(struct ata_device *dev, 52 + struct ata_taskfile *tf, const u8 *cdb, 53 + int dma_dir, void *buf, unsigned int buflen); 54 + extern int ata_dev_read_id(struct ata_device *dev, unsigned int *p_class, 55 + int post_reset, u16 *id); 56 + extern int ata_dev_configure(struct ata_device *dev, int print_info); 57 + extern int sata_down_spd_limit(struct ata_port *ap); 58 + extern int sata_set_spd_needed(struct ata_port *ap); 59 + extern int ata_down_xfermask_limit(struct ata_device *dev, int force_pio0); 60 + extern int ata_set_mode(struct ata_port *ap, struct ata_device **r_failed_dev); 48 61 extern void ata_qc_free(struct ata_queued_cmd *qc); 49 62 extern void ata_qc_issue(struct ata_queued_cmd *qc); 63 + extern void __ata_qc_complete(struct ata_queued_cmd *qc); 50 64 extern int ata_check_atapi_dma(struct ata_queued_cmd *qc); 51 65 extern void ata_dev_select(struct ata_port *ap, unsigned int device, 52 66 unsigned int wait, unsigned int can_sleep); 53 67 extern void swap_buf_le16(u16 *buf, unsigned int buf_words); 68 + extern void ata_dev_init(struct ata_device *dev); 54 
69 extern int ata_task_ioctl(struct scsi_device *scsidev, void __user *arg); 55 70 extern int ata_cmd_ioctl(struct scsi_device *scsidev, void __user *arg); 56 71 ··· 73 60 extern struct scsi_transport_template ata_scsi_transport_template; 74 61 75 62 extern void ata_scsi_scan_host(struct ata_port *ap); 63 + extern int ata_scsi_offline_dev(struct ata_device *dev); 64 + extern void ata_scsi_hotplug(void *data); 76 65 extern unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf, 77 66 unsigned int buflen); 78 67 ··· 103 88 extern void ata_scsi_rbuf_fill(struct ata_scsi_args *args, 104 89 unsigned int (*actor) (struct ata_scsi_args *args, 105 90 u8 *rbuf, unsigned int buflen)); 91 + extern void ata_schedule_scsi_eh(struct Scsi_Host *shost); 92 + extern void ata_scsi_dev_rescan(void *data); 93 + 94 + /* libata-eh.c */ 95 + extern enum scsi_eh_timer_return ata_scsi_timed_out(struct scsi_cmnd *cmd); 96 + extern void ata_scsi_error(struct Scsi_Host *host); 97 + extern void ata_port_wait_eh(struct ata_port *ap); 98 + extern void ata_qc_schedule_eh(struct ata_queued_cmd *qc); 106 99 107 100 #endif /* __LIBATA_H__ */
+7 -5
drivers/scsi/pdc_adma.c
··· 46 46 #include <linux/libata.h> 47 47 48 48 #define DRV_NAME "pdc_adma" 49 - #define DRV_VERSION "0.03" 49 + #define DRV_VERSION "0.04" 50 50 51 51 /* macro to calculate base address for ATA regs */ 52 52 #define ADMA_ATA_REGS(base,port_no) ((base) + ((port_no) * 0x40)) ··· 152 152 .proc_name = DRV_NAME, 153 153 .dma_boundary = ADMA_DMA_BOUNDARY, 154 154 .slave_configure = ata_scsi_slave_config, 155 + .slave_destroy = ata_scsi_slave_destroy, 155 156 .bios_param = ata_std_bios_param, 156 157 }; 157 158 ··· 168 167 .qc_prep = adma_qc_prep, 169 168 .qc_issue = adma_qc_issue, 170 169 .eng_timeout = adma_eng_timeout, 170 + .data_xfer = ata_mmio_data_xfer, 171 171 .irq_handler = adma_intr, 172 172 .irq_clear = adma_irq_clear, 173 173 .port_start = adma_port_start, ··· 457 455 continue; 458 456 handled = 1; 459 457 adma_enter_reg_mode(ap); 460 - if (ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)) 458 + if (ap->flags & ATA_FLAG_DISABLED) 461 459 continue; 462 460 pp = ap->private_data; 463 461 if (!pp || pp->state != adma_state_pkt) 464 462 continue; 465 463 qc = ata_qc_from_tag(ap, ap->active_tag); 466 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { 464 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { 467 465 if ((status & (aPERR | aPSD | aUIRQ))) 468 466 qc->err_mask |= AC_ERR_OTHER; 469 467 else if (pp->pkt[0] != cDONE) ··· 482 480 for (port_no = 0; port_no < host_set->n_ports; ++port_no) { 483 481 struct ata_port *ap; 484 482 ap = host_set->ports[port_no]; 485 - if (ap && (!(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)))) { 483 + if (ap && (!(ap->flags & ATA_FLAG_DISABLED))) { 486 484 struct ata_queued_cmd *qc; 487 485 struct adma_port_priv *pp = ap->private_data; 488 486 if (!pp || pp->state != adma_state_mmio) 489 487 continue; 490 488 qc = ata_qc_from_tag(ap, ap->active_tag); 491 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { 489 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { 492 490 493 491 /* check main status, clearing INTRQ */ 494 492 u8 
status = ata_check_status(ap);
+38 -33
drivers/scsi/sata_mv.c
··· 93 93 MV_FLAG_IRQ_COALESCE = (1 << 29), /* IRQ coalescing capability */ 94 94 MV_COMMON_FLAGS = (ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 95 95 ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO | 96 - ATA_FLAG_NO_ATAPI), 96 + ATA_FLAG_NO_ATAPI | ATA_FLAG_PIO_POLLING), 97 97 MV_6XXX_FLAGS = MV_FLAG_IRQ_COALESCE, 98 98 99 99 CRQB_FLAG_READ = (1 << 0), ··· 272 272 273 273 /* Command ReQuest Block: 32B */ 274 274 struct mv_crqb { 275 - u32 sg_addr; 276 - u32 sg_addr_hi; 277 - u16 ctrl_flags; 278 - u16 ata_cmd[11]; 275 + __le32 sg_addr; 276 + __le32 sg_addr_hi; 277 + __le16 ctrl_flags; 278 + __le16 ata_cmd[11]; 279 279 }; 280 280 281 281 struct mv_crqb_iie { 282 - u32 addr; 283 - u32 addr_hi; 284 - u32 flags; 285 - u32 len; 286 - u32 ata_cmd[4]; 282 + __le32 addr; 283 + __le32 addr_hi; 284 + __le32 flags; 285 + __le32 len; 286 + __le32 ata_cmd[4]; 287 287 }; 288 288 289 289 /* Command ResPonse Block: 8B */ 290 290 struct mv_crpb { 291 - u16 id; 292 - u16 flags; 293 - u32 tmstmp; 291 + __le16 id; 292 + __le16 flags; 293 + __le32 tmstmp; 294 294 }; 295 295 296 296 /* EDMA Physical Region Descriptor (ePRD); A.K.A. 
SG */ 297 297 struct mv_sg { 298 - u32 addr; 299 - u32 flags_size; 300 - u32 addr_hi; 301 - u32 reserved; 298 + __le32 addr; 299 + __le32 flags_size; 300 + __le32 addr_hi; 301 + __le32 reserved; 302 302 }; 303 303 304 304 struct mv_port_priv { ··· 390 390 .proc_name = DRV_NAME, 391 391 .dma_boundary = MV_DMA_BOUNDARY, 392 392 .slave_configure = ata_scsi_slave_config, 393 + .slave_destroy = ata_scsi_slave_destroy, 393 394 .bios_param = ata_std_bios_param, 394 395 }; 395 396 ··· 407 406 408 407 .qc_prep = mv_qc_prep, 409 408 .qc_issue = mv_qc_issue, 409 + .data_xfer = ata_mmio_data_xfer, 410 410 411 411 .eng_timeout = mv_eng_timeout, 412 412 ··· 435 433 436 434 .qc_prep = mv_qc_prep, 437 435 .qc_issue = mv_qc_issue, 436 + .data_xfer = ata_mmio_data_xfer, 438 437 439 438 .eng_timeout = mv_eng_timeout, 440 439 ··· 686 683 } 687 684 688 685 if (EDMA_EN & reg) { 689 - printk(KERN_ERR "ata%u: Unable to stop eDMA\n", ap->id); 686 + ata_port_printk(ap, KERN_ERR, "Unable to stop eDMA\n"); 690 687 /* FIXME: Consider doing a reset here to recover */ 691 688 } 692 689 } ··· 1031 1028 return (index + 1) & MV_MAX_Q_DEPTH_MASK; 1032 1029 } 1033 1030 1034 - static inline void mv_crqb_pack_cmd(u16 *cmdw, u8 data, u8 addr, unsigned last) 1031 + static inline void mv_crqb_pack_cmd(__le16 *cmdw, u8 data, u8 addr, unsigned last) 1035 1032 { 1036 1033 u16 tmp = data | (addr << CRQB_CMD_ADDR_SHIFT) | CRQB_CMD_CS | 1037 1034 (last ? 
CRQB_CMD_LAST : 0); ··· 1054 1051 { 1055 1052 struct ata_port *ap = qc->ap; 1056 1053 struct mv_port_priv *pp = ap->private_data; 1057 - u16 *cw; 1054 + __le16 *cw; 1058 1055 struct ata_taskfile *tf; 1059 1056 u16 flags = 0; 1060 1057 unsigned in_index; ··· 1310 1307 edma_err_cause = readl(port_mmio + EDMA_ERR_IRQ_CAUSE_OFS); 1311 1308 1312 1309 if (EDMA_ERR_SERR & edma_err_cause) { 1313 - serr = scr_read(ap, SCR_ERROR); 1314 - scr_write_flush(ap, SCR_ERROR, serr); 1310 + sata_scr_read(ap, SCR_ERROR, &serr); 1311 + sata_scr_write_flush(ap, SCR_ERROR, serr); 1315 1312 } 1316 1313 if (EDMA_ERR_SELF_DIS & edma_err_cause) { 1317 1314 struct mv_port_priv *pp = ap->private_data; ··· 1380 1377 /* Note that DEV_IRQ might happen spuriously during EDMA, 1381 1378 * and should be ignored in such cases. 1382 1379 * The cause of this is still under investigation. 1383 - */ 1380 + */ 1384 1381 if (pp->pp_flags & MV_PP_FLAG_EDMA_EN) { 1385 1382 /* EDMA: check for response queue interrupt */ 1386 1383 if ((CRPB_DMA_DONE << hard_port) & hc_irq_cause) { ··· 1401 1398 } 1402 1399 } 1403 1400 1404 - if (ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR)) 1401 + if (ap && (ap->flags & ATA_FLAG_DISABLED)) 1405 1402 continue; 1406 1403 1407 1404 err_mask = ac_err_mask(ata_status); ··· 1422 1419 VPRINTK("port %u IRQ found for qc, " 1423 1420 "ata_status 0x%x\n", port,ata_status); 1424 1421 /* mark qc status appropriately */ 1425 - if (!(qc->tf.ctl & ATA_NIEN)) { 1422 + if (!(qc->tf.flags & ATA_TFLAG_POLLING)) { 1426 1423 qc->err_mask |= err_mask; 1427 1424 ata_qc_complete(qc); 1428 1425 } ··· 1952 1949 1953 1950 /* Issue COMRESET via SControl */ 1954 1951 comreset_retry: 1955 - scr_write_flush(ap, SCR_CONTROL, 0x301); 1952 + sata_scr_write_flush(ap, SCR_CONTROL, 0x301); 1956 1953 __msleep(1, can_sleep); 1957 1954 1958 - scr_write_flush(ap, SCR_CONTROL, 0x300); 1955 + sata_scr_write_flush(ap, SCR_CONTROL, 0x300); 1959 1956 __msleep(20, can_sleep); 1960 1957 1961 1958 timeout = jiffies 
+ msecs_to_jiffies(200); 1962 1959 do { 1963 - sstatus = scr_read(ap, SCR_STATUS) & 0x3; 1960 + sata_scr_read(ap, SCR_STATUS, &sstatus); 1961 + sstatus &= 0x3; 1964 1962 if ((sstatus == 3) || (sstatus == 0)) 1965 1963 break; 1966 1964 ··· 1978 1974 "SCtrl 0x%08x\n", mv_scr_read(ap, SCR_STATUS), 1979 1975 mv_scr_read(ap, SCR_ERROR), mv_scr_read(ap, SCR_CONTROL)); 1980 1976 1981 - if (sata_dev_present(ap)) { 1977 + if (ata_port_online(ap)) { 1982 1978 ata_port_probe(ap); 1983 1979 } else { 1984 - printk(KERN_INFO "ata%u: no device found (phy stat %08x)\n", 1985 - ap->id, scr_read(ap, SCR_STATUS)); 1980 + sata_scr_read(ap, SCR_STATUS, &sstatus); 1981 + ata_port_printk(ap, KERN_INFO, 1982 + "no device found (phy stat %08x)\n", sstatus); 1986 1983 ata_port_disable(ap); 1987 1984 return; 1988 1985 } ··· 2010 2005 tf.nsect = readb((void __iomem *) ap->ioaddr.nsect_addr); 2011 2006 2012 2007 dev->class = ata_dev_classify(&tf); 2013 - if (!ata_dev_present(dev)) { 2008 + if (!ata_dev_enabled(dev)) { 2014 2009 VPRINTK("Port disabled post-sig: No device present.\n"); 2015 2010 ata_port_disable(ap); 2016 2011 } ··· 2042 2037 struct ata_queued_cmd *qc; 2043 2038 unsigned long flags; 2044 2039 2045 - printk(KERN_ERR "ata%u: Entering mv_eng_timeout\n",ap->id); 2040 + ata_port_printk(ap, KERN_ERR, "Entering mv_eng_timeout\n"); 2046 2041 DPRINTK("All regs @ start of eng_timeout\n"); 2047 2042 mv_dump_all_regs(ap->host_set->mmio_base, ap->port_no, 2048 2043 to_pci_dev(ap->host_set->dev));
+285 -250
drivers/scsi/sata_nv.c
··· 44 44 #include <linux/libata.h> 45 45 46 46 #define DRV_NAME "sata_nv" 47 - #define DRV_VERSION "0.8" 47 + #define DRV_VERSION "0.9" 48 48 49 49 enum { 50 50 NV_PORTS = 2, ··· 54 54 NV_PORT0_SCR_REG_OFFSET = 0x00, 55 55 NV_PORT1_SCR_REG_OFFSET = 0x40, 56 56 57 + /* INT_STATUS/ENABLE */ 57 58 NV_INT_STATUS = 0x10, 58 - NV_INT_STATUS_CK804 = 0x440, 59 - NV_INT_STATUS_PDEV_INT = 0x01, 60 - NV_INT_STATUS_PDEV_PM = 0x02, 61 - NV_INT_STATUS_PDEV_ADDED = 0x04, 62 - NV_INT_STATUS_PDEV_REMOVED = 0x08, 63 - NV_INT_STATUS_SDEV_INT = 0x10, 64 - NV_INT_STATUS_SDEV_PM = 0x20, 65 - NV_INT_STATUS_SDEV_ADDED = 0x40, 66 - NV_INT_STATUS_SDEV_REMOVED = 0x80, 67 - NV_INT_STATUS_PDEV_HOTPLUG = (NV_INT_STATUS_PDEV_ADDED | 68 - NV_INT_STATUS_PDEV_REMOVED), 69 - NV_INT_STATUS_SDEV_HOTPLUG = (NV_INT_STATUS_SDEV_ADDED | 70 - NV_INT_STATUS_SDEV_REMOVED), 71 - NV_INT_STATUS_HOTPLUG = (NV_INT_STATUS_PDEV_HOTPLUG | 72 - NV_INT_STATUS_SDEV_HOTPLUG), 73 - 74 59 NV_INT_ENABLE = 0x11, 60 + NV_INT_STATUS_CK804 = 0x440, 75 61 NV_INT_ENABLE_CK804 = 0x441, 76 - NV_INT_ENABLE_PDEV_MASK = 0x01, 77 - NV_INT_ENABLE_PDEV_PM = 0x02, 78 - NV_INT_ENABLE_PDEV_ADDED = 0x04, 79 - NV_INT_ENABLE_PDEV_REMOVED = 0x08, 80 - NV_INT_ENABLE_SDEV_MASK = 0x10, 81 - NV_INT_ENABLE_SDEV_PM = 0x20, 82 - NV_INT_ENABLE_SDEV_ADDED = 0x40, 83 - NV_INT_ENABLE_SDEV_REMOVED = 0x80, 84 - NV_INT_ENABLE_PDEV_HOTPLUG = (NV_INT_ENABLE_PDEV_ADDED | 85 - NV_INT_ENABLE_PDEV_REMOVED), 86 - NV_INT_ENABLE_SDEV_HOTPLUG = (NV_INT_ENABLE_SDEV_ADDED | 87 - NV_INT_ENABLE_SDEV_REMOVED), 88 - NV_INT_ENABLE_HOTPLUG = (NV_INT_ENABLE_PDEV_HOTPLUG | 89 - NV_INT_ENABLE_SDEV_HOTPLUG), 90 62 63 + /* INT_STATUS/ENABLE bits */ 64 + NV_INT_DEV = 0x01, 65 + NV_INT_PM = 0x02, 66 + NV_INT_ADDED = 0x04, 67 + NV_INT_REMOVED = 0x08, 68 + 69 + NV_INT_PORT_SHIFT = 4, /* each port occupies 4 bits */ 70 + 71 + NV_INT_ALL = 0x0f, 72 + NV_INT_MASK = NV_INT_DEV | 73 + NV_INT_ADDED | NV_INT_REMOVED, 74 + 75 + /* INT_CONFIG */ 91 76 NV_INT_CONFIG = 0x12, 92 77 
NV_INT_CONFIG_METHD = 0x01, // 0 = INT, 1 = SMI 93 78 ··· 82 97 }; 83 98 84 99 static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent); 85 - static irqreturn_t nv_interrupt (int irq, void *dev_instance, 86 - struct pt_regs *regs); 100 + static void nv_ck804_host_stop(struct ata_host_set *host_set); 101 + static irqreturn_t nv_generic_interrupt(int irq, void *dev_instance, 102 + struct pt_regs *regs); 103 + static irqreturn_t nv_nf2_interrupt(int irq, void *dev_instance, 104 + struct pt_regs *regs); 105 + static irqreturn_t nv_ck804_interrupt(int irq, void *dev_instance, 106 + struct pt_regs *regs); 87 107 static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg); 88 108 static void nv_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val); 89 - static void nv_host_stop (struct ata_host_set *host_set); 90 - static void nv_enable_hotplug(struct ata_probe_ent *probe_ent); 91 - static void nv_disable_hotplug(struct ata_host_set *host_set); 92 - static int nv_check_hotplug(struct ata_host_set *host_set); 93 - static void nv_enable_hotplug_ck804(struct ata_probe_ent *probe_ent); 94 - static void nv_disable_hotplug_ck804(struct ata_host_set *host_set); 95 - static int nv_check_hotplug_ck804(struct ata_host_set *host_set); 109 + 110 + static void nv_nf2_freeze(struct ata_port *ap); 111 + static void nv_nf2_thaw(struct ata_port *ap); 112 + static void nv_ck804_freeze(struct ata_port *ap); 113 + static void nv_ck804_thaw(struct ata_port *ap); 114 + static void nv_error_handler(struct ata_port *ap); 96 115 97 116 enum nv_host_type 98 117 { 99 118 GENERIC, 100 119 NFORCE2, 101 - NFORCE3, 120 + NFORCE3 = NFORCE2, /* NF2 == NF3 as far as sata_nv is concerned */ 102 121 CK804 103 122 }; 104 123 ··· 129 140 PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 130 141 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA2, 131 142 PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 143 + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA, 144 + 
PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 145 + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA2, 146 + PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 147 + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA3, 148 + PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 149 + { PCI_VENDOR_ID_NVIDIA, 0x045c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 150 + { PCI_VENDOR_ID_NVIDIA, 0x045d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 151 + { PCI_VENDOR_ID_NVIDIA, 0x045e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 152 + { PCI_VENDOR_ID_NVIDIA, 0x045f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, GENERIC }, 132 153 { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, 133 154 PCI_ANY_ID, PCI_ANY_ID, 134 155 PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC }, ··· 146 147 PCI_ANY_ID, PCI_ANY_ID, 147 148 PCI_CLASS_STORAGE_RAID<<8, 0xffff00, GENERIC }, 148 149 { 0, } /* terminate list */ 149 - }; 150 - 151 - struct nv_host_desc 152 - { 153 - enum nv_host_type host_type; 154 - void (*enable_hotplug)(struct ata_probe_ent *probe_ent); 155 - void (*disable_hotplug)(struct ata_host_set *host_set); 156 - int (*check_hotplug)(struct ata_host_set *host_set); 157 - 158 - }; 159 - static struct nv_host_desc nv_device_tbl[] = { 160 - { 161 - .host_type = GENERIC, 162 - .enable_hotplug = NULL, 163 - .disable_hotplug= NULL, 164 - .check_hotplug = NULL, 165 - }, 166 - { 167 - .host_type = NFORCE2, 168 - .enable_hotplug = nv_enable_hotplug, 169 - .disable_hotplug= nv_disable_hotplug, 170 - .check_hotplug = nv_check_hotplug, 171 - }, 172 - { 173 - .host_type = NFORCE3, 174 - .enable_hotplug = nv_enable_hotplug, 175 - .disable_hotplug= nv_disable_hotplug, 176 - .check_hotplug = nv_check_hotplug, 177 - }, 178 - { .host_type = CK804, 179 - .enable_hotplug = nv_enable_hotplug_ck804, 180 - .disable_hotplug= nv_disable_hotplug_ck804, 181 - .check_hotplug = nv_check_hotplug_ck804, 182 - }, 183 - }; 184 - 185 - struct nv_host 186 - { 187 - struct nv_host_desc *host_desc; 188 - unsigned long host_flags; 189 150 }; 190 151 191 152 static 
struct pci_driver nv_pci_driver = { ··· 169 210 .proc_name = DRV_NAME, 170 211 .dma_boundary = ATA_DMA_BOUNDARY, 171 212 .slave_configure = ata_scsi_slave_config, 213 + .slave_destroy = ata_scsi_slave_destroy, 172 214 .bios_param = ata_std_bios_param, 173 215 }; 174 216 175 - static const struct ata_port_operations nv_ops = { 217 + static const struct ata_port_operations nv_generic_ops = { 176 218 .port_disable = ata_port_disable, 177 219 .tf_load = ata_tf_load, 178 220 .tf_read = ata_tf_read, 179 221 .exec_command = ata_exec_command, 180 222 .check_status = ata_check_status, 181 223 .dev_select = ata_std_dev_select, 182 - .phy_reset = sata_phy_reset, 183 224 .bmdma_setup = ata_bmdma_setup, 184 225 .bmdma_start = ata_bmdma_start, 185 226 .bmdma_stop = ata_bmdma_stop, 186 227 .bmdma_status = ata_bmdma_status, 187 228 .qc_prep = ata_qc_prep, 188 229 .qc_issue = ata_qc_issue_prot, 189 - .eng_timeout = ata_eng_timeout, 190 - .irq_handler = nv_interrupt, 230 + .freeze = ata_bmdma_freeze, 231 + .thaw = ata_bmdma_thaw, 232 + .error_handler = nv_error_handler, 233 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 234 + .data_xfer = ata_pio_data_xfer, 235 + .irq_handler = nv_generic_interrupt, 191 236 .irq_clear = ata_bmdma_irq_clear, 192 237 .scr_read = nv_scr_read, 193 238 .scr_write = nv_scr_write, 194 239 .port_start = ata_port_start, 195 240 .port_stop = ata_port_stop, 196 - .host_stop = nv_host_stop, 241 + .host_stop = ata_pci_host_stop, 197 242 }; 198 243 199 - /* FIXME: The hardware provides the necessary SATA PHY controls 200 - * to support ATA_FLAG_SATA_RESET. However, it is currently 201 - * necessary to disable that flag, to solve misdetection problems. 202 - * See http://bugme.osdl.org/show_bug.cgi?id=3352 for more info. 203 - * 204 - * This problem really needs to be investigated further. But in the 205 - * meantime, we avoid ATA_FLAG_SATA_RESET to get people working. 
206 - */ 207 - static struct ata_port_info nv_port_info = { 208 - .sht = &nv_sht, 209 - .host_flags = ATA_FLAG_SATA | 210 - /* ATA_FLAG_SATA_RESET | */ 211 - ATA_FLAG_SRST | 212 - ATA_FLAG_NO_LEGACY, 213 - .pio_mask = NV_PIO_MASK, 214 - .mwdma_mask = NV_MWDMA_MASK, 215 - .udma_mask = NV_UDMA_MASK, 216 - .port_ops = &nv_ops, 244 + static const struct ata_port_operations nv_nf2_ops = { 245 + .port_disable = ata_port_disable, 246 + .tf_load = ata_tf_load, 247 + .tf_read = ata_tf_read, 248 + .exec_command = ata_exec_command, 249 + .check_status = ata_check_status, 250 + .dev_select = ata_std_dev_select, 251 + .bmdma_setup = ata_bmdma_setup, 252 + .bmdma_start = ata_bmdma_start, 253 + .bmdma_stop = ata_bmdma_stop, 254 + .bmdma_status = ata_bmdma_status, 255 + .qc_prep = ata_qc_prep, 256 + .qc_issue = ata_qc_issue_prot, 257 + .freeze = nv_nf2_freeze, 258 + .thaw = nv_nf2_thaw, 259 + .error_handler = nv_error_handler, 260 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 261 + .data_xfer = ata_pio_data_xfer, 262 + .irq_handler = nv_nf2_interrupt, 263 + .irq_clear = ata_bmdma_irq_clear, 264 + .scr_read = nv_scr_read, 265 + .scr_write = nv_scr_write, 266 + .port_start = ata_port_start, 267 + .port_stop = ata_port_stop, 268 + .host_stop = ata_pci_host_stop, 269 + }; 270 + 271 + static const struct ata_port_operations nv_ck804_ops = { 272 + .port_disable = ata_port_disable, 273 + .tf_load = ata_tf_load, 274 + .tf_read = ata_tf_read, 275 + .exec_command = ata_exec_command, 276 + .check_status = ata_check_status, 277 + .dev_select = ata_std_dev_select, 278 + .bmdma_setup = ata_bmdma_setup, 279 + .bmdma_start = ata_bmdma_start, 280 + .bmdma_stop = ata_bmdma_stop, 281 + .bmdma_status = ata_bmdma_status, 282 + .qc_prep = ata_qc_prep, 283 + .qc_issue = ata_qc_issue_prot, 284 + .freeze = nv_ck804_freeze, 285 + .thaw = nv_ck804_thaw, 286 + .error_handler = nv_error_handler, 287 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 288 + .data_xfer = ata_pio_data_xfer, 289 + 
.irq_handler = nv_ck804_interrupt, 290 + .irq_clear = ata_bmdma_irq_clear, 291 + .scr_read = nv_scr_read, 292 + .scr_write = nv_scr_write, 293 + .port_start = ata_port_start, 294 + .port_stop = ata_port_stop, 295 + .host_stop = nv_ck804_host_stop, 296 + }; 297 + 298 + static struct ata_port_info nv_port_info[] = { 299 + /* generic */ 300 + { 301 + .sht = &nv_sht, 302 + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 303 + .pio_mask = NV_PIO_MASK, 304 + .mwdma_mask = NV_MWDMA_MASK, 305 + .udma_mask = NV_UDMA_MASK, 306 + .port_ops = &nv_generic_ops, 307 + }, 308 + /* nforce2/3 */ 309 + { 310 + .sht = &nv_sht, 311 + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 312 + .pio_mask = NV_PIO_MASK, 313 + .mwdma_mask = NV_MWDMA_MASK, 314 + .udma_mask = NV_UDMA_MASK, 315 + .port_ops = &nv_nf2_ops, 316 + }, 317 + /* ck804 */ 318 + { 319 + .sht = &nv_sht, 320 + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 321 + .pio_mask = NV_PIO_MASK, 322 + .mwdma_mask = NV_MWDMA_MASK, 323 + .udma_mask = NV_UDMA_MASK, 324 + .port_ops = &nv_ck804_ops, 325 + }, 217 326 }; 218 327 219 328 MODULE_AUTHOR("NVIDIA"); ··· 290 263 MODULE_DEVICE_TABLE(pci, nv_pci_tbl); 291 264 MODULE_VERSION(DRV_VERSION); 292 265 293 - static irqreturn_t nv_interrupt (int irq, void *dev_instance, 294 - struct pt_regs *regs) 266 + static irqreturn_t nv_generic_interrupt(int irq, void *dev_instance, 267 + struct pt_regs *regs) 295 268 { 296 269 struct ata_host_set *host_set = dev_instance; 297 - struct nv_host *host = host_set->private_data; 298 270 unsigned int i; 299 271 unsigned int handled = 0; 300 272 unsigned long flags; ··· 305 279 306 280 ap = host_set->ports[i]; 307 281 if (ap && 308 - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { 282 + !(ap->flags & ATA_FLAG_DISABLED)) { 309 283 struct ata_queued_cmd *qc; 310 284 311 285 qc = ata_qc_from_tag(ap, ap->active_tag); 312 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) 286 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) 313 287 handled += 
ata_host_intr(ap, qc); 314 288 else 315 289 // No request pending? Clear interrupt status ··· 319 293 320 294 } 321 295 322 - if (host->host_desc->check_hotplug) 323 - handled += host->host_desc->check_hotplug(host_set); 324 - 325 296 spin_unlock_irqrestore(&host_set->lock, flags); 326 297 327 298 return IRQ_RETVAL(handled); 299 + } 300 + 301 + static int nv_host_intr(struct ata_port *ap, u8 irq_stat) 302 + { 303 + struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); 304 + int handled; 305 + 306 + /* freeze if hotplugged */ 307 + if (unlikely(irq_stat & (NV_INT_ADDED | NV_INT_REMOVED))) { 308 + ata_port_freeze(ap); 309 + return 1; 310 + } 311 + 312 + /* bail out if not our interrupt */ 313 + if (!(irq_stat & NV_INT_DEV)) 314 + return 0; 315 + 316 + /* DEV interrupt w/ no active qc? */ 317 + if (unlikely(!qc || (qc->tf.flags & ATA_TFLAG_POLLING))) { 318 + ata_check_status(ap); 319 + return 1; 320 + } 321 + 322 + /* handle interrupt */ 323 + handled = ata_host_intr(ap, qc); 324 + if (unlikely(!handled)) { 325 + /* spurious, clear it */ 326 + ata_check_status(ap); 327 + } 328 + 329 + return 1; 330 + } 331 + 332 + static irqreturn_t nv_do_interrupt(struct ata_host_set *host_set, u8 irq_stat) 333 + { 334 + int i, handled = 0; 335 + 336 + for (i = 0; i < host_set->n_ports; i++) { 337 + struct ata_port *ap = host_set->ports[i]; 338 + 339 + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) 340 + handled += nv_host_intr(ap, irq_stat); 341 + 342 + irq_stat >>= NV_INT_PORT_SHIFT; 343 + } 344 + 345 + return IRQ_RETVAL(handled); 346 + } 347 + 348 + static irqreturn_t nv_nf2_interrupt(int irq, void *dev_instance, 349 + struct pt_regs *regs) 350 + { 351 + struct ata_host_set *host_set = dev_instance; 352 + u8 irq_stat; 353 + irqreturn_t ret; 354 + 355 + spin_lock(&host_set->lock); 356 + irq_stat = inb(host_set->ports[0]->ioaddr.scr_addr + NV_INT_STATUS); 357 + ret = nv_do_interrupt(host_set, irq_stat); 358 + spin_unlock(&host_set->lock); 359 + 360 + return ret; 361 + } 362 
+ 363 + static irqreturn_t nv_ck804_interrupt(int irq, void *dev_instance, 364 + struct pt_regs *regs) 365 + { 366 + struct ata_host_set *host_set = dev_instance; 367 + u8 irq_stat; 368 + irqreturn_t ret; 369 + 370 + spin_lock(&host_set->lock); 371 + irq_stat = readb(host_set->mmio_base + NV_INT_STATUS_CK804); 372 + ret = nv_do_interrupt(host_set, irq_stat); 373 + spin_unlock(&host_set->lock); 374 + 375 + return ret; 328 376 } 329 377 330 378 static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg) ··· 417 317 iowrite32(val, (void __iomem *)ap->ioaddr.scr_addr + (sc_reg * 4)); 418 318 } 419 319 420 - static void nv_host_stop (struct ata_host_set *host_set) 320 + static void nv_nf2_freeze(struct ata_port *ap) 421 321 { 422 - struct nv_host *host = host_set->private_data; 322 + unsigned long scr_addr = ap->host_set->ports[0]->ioaddr.scr_addr; 323 + int shift = ap->port_no * NV_INT_PORT_SHIFT; 324 + u8 mask; 423 325 424 - // Disable hotplug event interrupts. 425 - if (host->host_desc->disable_hotplug) 426 - host->host_desc->disable_hotplug(host_set); 326 + mask = inb(scr_addr + NV_INT_ENABLE); 327 + mask &= ~(NV_INT_ALL << shift); 328 + outb(mask, scr_addr + NV_INT_ENABLE); 329 + } 427 330 428 - kfree(host); 331 + static void nv_nf2_thaw(struct ata_port *ap) 332 + { 333 + unsigned long scr_addr = ap->host_set->ports[0]->ioaddr.scr_addr; 334 + int shift = ap->port_no * NV_INT_PORT_SHIFT; 335 + u8 mask; 429 336 430 - ata_pci_host_stop(host_set); 337 + outb(NV_INT_ALL << shift, scr_addr + NV_INT_STATUS); 338 + 339 + mask = inb(scr_addr + NV_INT_ENABLE); 340 + mask |= (NV_INT_MASK << shift); 341 + outb(mask, scr_addr + NV_INT_ENABLE); 342 + } 343 + 344 + static void nv_ck804_freeze(struct ata_port *ap) 345 + { 346 + void __iomem *mmio_base = ap->host_set->mmio_base; 347 + int shift = ap->port_no * NV_INT_PORT_SHIFT; 348 + u8 mask; 349 + 350 + mask = readb(mmio_base + NV_INT_ENABLE_CK804); 351 + mask &= ~(NV_INT_ALL << shift); 352 + writeb(mask, mmio_base + 
NV_INT_ENABLE_CK804); 353 + } 354 + 355 + static void nv_ck804_thaw(struct ata_port *ap) 356 + { 357 + void __iomem *mmio_base = ap->host_set->mmio_base; 358 + int shift = ap->port_no * NV_INT_PORT_SHIFT; 359 + u8 mask; 360 + 361 + writeb(NV_INT_ALL << shift, mmio_base + NV_INT_STATUS_CK804); 362 + 363 + mask = readb(mmio_base + NV_INT_ENABLE_CK804); 364 + mask |= (NV_INT_MASK << shift); 365 + writeb(mask, mmio_base + NV_INT_ENABLE_CK804); 366 + } 367 + 368 + static int nv_hardreset(struct ata_port *ap, unsigned int *class) 369 + { 370 + unsigned int dummy; 371 + 372 + /* SATA hardreset fails to retrieve proper device signature on 373 + * some controllers. Don't classify on hardreset. For more 374 + * info, see http://bugme.osdl.org/show_bug.cgi?id=3352 375 + */ 376 + return sata_std_hardreset(ap, &dummy); 377 + } 378 + 379 + static void nv_error_handler(struct ata_port *ap) 380 + { 381 + ata_bmdma_drive_eh(ap, ata_std_prereset, ata_std_softreset, 382 + nv_hardreset, ata_std_postreset); 431 383 } 432 384 433 385 static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) 434 386 { 435 387 static int printed_version = 0; 436 - struct nv_host *host; 437 388 struct ata_port_info *ppi; 438 389 struct ata_probe_ent *probe_ent; 439 390 int pci_dev_busy = 0; ··· 521 370 522 371 rc = -ENOMEM; 523 372 524 - ppi = &nv_port_info; 373 + ppi = &nv_port_info[ent->driver_data]; 525 374 probe_ent = ata_pci_init_native_mode(pdev, &ppi, ATA_PORT_PRIMARY | ATA_PORT_SECONDARY); 526 375 if (!probe_ent) 527 376 goto err_out_regions; 528 377 529 - host = kmalloc(sizeof(struct nv_host), GFP_KERNEL); 530 - if (!host) 531 - goto err_out_free_ent; 532 - 533 - memset(host, 0, sizeof(struct nv_host)); 534 - host->host_desc = &nv_device_tbl[ent->driver_data]; 535 - 536 - probe_ent->private_data = host; 537 - 538 378 probe_ent->mmio_base = pci_iomap(pdev, 5, 0); 539 379 if (!probe_ent->mmio_base) { 540 380 rc = -EIO; 541 - goto err_out_free_host; 381 + goto err_out_free_ent; 
542 382 } 543 383 544 384 base = (unsigned long)probe_ent->mmio_base; ··· 537 395 probe_ent->port[0].scr_addr = base + NV_PORT0_SCR_REG_OFFSET; 538 396 probe_ent->port[1].scr_addr = base + NV_PORT1_SCR_REG_OFFSET; 539 397 398 + /* enable SATA space for CK804 */ 399 + if (ent->driver_data == CK804) { 400 + u8 regval; 401 + 402 + pci_read_config_byte(pdev, NV_MCP_SATA_CFG_20, &regval); 403 + regval |= NV_MCP_SATA_CFG_20_SATA_SPACE_EN; 404 + pci_write_config_byte(pdev, NV_MCP_SATA_CFG_20, regval); 405 + } 406 + 540 407 pci_set_master(pdev); 541 408 542 409 rc = ata_device_add(probe_ent); 543 410 if (rc != NV_PORTS) 544 411 goto err_out_iounmap; 545 - 546 - // Enable hotplug event interrupts. 547 - if (host->host_desc->enable_hotplug) 548 - host->host_desc->enable_hotplug(probe_ent); 549 412 550 413 kfree(probe_ent); 551 414 ··· 558 411 559 412 err_out_iounmap: 560 413 pci_iounmap(pdev, probe_ent->mmio_base); 561 - err_out_free_host: 562 - kfree(host); 563 414 err_out_free_ent: 564 415 kfree(probe_ent); 565 416 err_out_regions: ··· 569 424 return rc; 570 425 } 571 426 572 - static void nv_enable_hotplug(struct ata_probe_ent *probe_ent) 573 - { 574 - u8 intr_mask; 575 - 576 - outb(NV_INT_STATUS_HOTPLUG, 577 - probe_ent->port[0].scr_addr + NV_INT_STATUS); 578 - 579 - intr_mask = inb(probe_ent->port[0].scr_addr + NV_INT_ENABLE); 580 - intr_mask |= NV_INT_ENABLE_HOTPLUG; 581 - 582 - outb(intr_mask, probe_ent->port[0].scr_addr + NV_INT_ENABLE); 583 - } 584 - 585 - static void nv_disable_hotplug(struct ata_host_set *host_set) 586 - { 587 - u8 intr_mask; 588 - 589 - intr_mask = inb(host_set->ports[0]->ioaddr.scr_addr + NV_INT_ENABLE); 590 - 591 - intr_mask &= ~(NV_INT_ENABLE_HOTPLUG); 592 - 593 - outb(intr_mask, host_set->ports[0]->ioaddr.scr_addr + NV_INT_ENABLE); 594 - } 595 - 596 - static int nv_check_hotplug(struct ata_host_set *host_set) 597 - { 598 - u8 intr_status; 599 - 600 - intr_status = inb(host_set->ports[0]->ioaddr.scr_addr + NV_INT_STATUS); 601 - 602 - // Clear 
interrupt status. 603 - outb(0xff, host_set->ports[0]->ioaddr.scr_addr + NV_INT_STATUS); 604 - 605 - if (intr_status & NV_INT_STATUS_HOTPLUG) { 606 - if (intr_status & NV_INT_STATUS_PDEV_ADDED) 607 - printk(KERN_WARNING "nv_sata: " 608 - "Primary device added\n"); 609 - 610 - if (intr_status & NV_INT_STATUS_PDEV_REMOVED) 611 - printk(KERN_WARNING "nv_sata: " 612 - "Primary device removed\n"); 613 - 614 - if (intr_status & NV_INT_STATUS_SDEV_ADDED) 615 - printk(KERN_WARNING "nv_sata: " 616 - "Secondary device added\n"); 617 - 618 - if (intr_status & NV_INT_STATUS_SDEV_REMOVED) 619 - printk(KERN_WARNING "nv_sata: " 620 - "Secondary device removed\n"); 621 - 622 - return 1; 623 - } 624 - 625 - return 0; 626 - } 627 - 628 - static void nv_enable_hotplug_ck804(struct ata_probe_ent *probe_ent) 629 - { 630 - struct pci_dev *pdev = to_pci_dev(probe_ent->dev); 631 - u8 intr_mask; 632 - u8 regval; 633 - 634 - pci_read_config_byte(pdev, NV_MCP_SATA_CFG_20, &regval); 635 - regval |= NV_MCP_SATA_CFG_20_SATA_SPACE_EN; 636 - pci_write_config_byte(pdev, NV_MCP_SATA_CFG_20, regval); 637 - 638 - writeb(NV_INT_STATUS_HOTPLUG, probe_ent->mmio_base + NV_INT_STATUS_CK804); 639 - 640 - intr_mask = readb(probe_ent->mmio_base + NV_INT_ENABLE_CK804); 641 - intr_mask |= NV_INT_ENABLE_HOTPLUG; 642 - 643 - writeb(intr_mask, probe_ent->mmio_base + NV_INT_ENABLE_CK804); 644 - } 645 - 646 - static void nv_disable_hotplug_ck804(struct ata_host_set *host_set) 427 + static void nv_ck804_host_stop(struct ata_host_set *host_set) 647 428 { 648 429 struct pci_dev *pdev = to_pci_dev(host_set->dev); 649 - u8 intr_mask; 650 430 u8 regval; 651 431 652 - intr_mask = readb(host_set->mmio_base + NV_INT_ENABLE_CK804); 653 - 654 - intr_mask &= ~(NV_INT_ENABLE_HOTPLUG); 655 - 656 - writeb(intr_mask, host_set->mmio_base + NV_INT_ENABLE_CK804); 657 - 432 + /* disable SATA space for CK804 */ 658 433 pci_read_config_byte(pdev, NV_MCP_SATA_CFG_20, &regval); 659 434 regval &= ~NV_MCP_SATA_CFG_20_SATA_SPACE_EN; 660 435 
pci_write_config_byte(pdev, NV_MCP_SATA_CFG_20, regval); 661 - } 662 436 663 - static int nv_check_hotplug_ck804(struct ata_host_set *host_set) 664 - { 665 - u8 intr_status; 666 - 667 - intr_status = readb(host_set->mmio_base + NV_INT_STATUS_CK804); 668 - 669 - // Clear interrupt status. 670 - writeb(0xff, host_set->mmio_base + NV_INT_STATUS_CK804); 671 - 672 - if (intr_status & NV_INT_STATUS_HOTPLUG) { 673 - if (intr_status & NV_INT_STATUS_PDEV_ADDED) 674 - printk(KERN_WARNING "nv_sata: " 675 - "Primary device added\n"); 676 - 677 - if (intr_status & NV_INT_STATUS_PDEV_REMOVED) 678 - printk(KERN_WARNING "nv_sata: " 679 - "Primary device removed\n"); 680 - 681 - if (intr_status & NV_INT_STATUS_SDEV_ADDED) 682 - printk(KERN_WARNING "nv_sata: " 683 - "Secondary device added\n"); 684 - 685 - if (intr_status & NV_INT_STATUS_SDEV_REMOVED) 686 - printk(KERN_WARNING "nv_sata: " 687 - "Secondary device removed\n"); 688 - 689 - return 1; 690 - } 691 - 692 - return 0; 437 + ata_pci_host_stop(host_set); 693 438 } 694 439 695 440 static int __init nv_init(void)
+26 -14
drivers/scsi/sata_promise.c
··· 76 76 PDC_RESET = (1 << 11), /* HDMA reset */ 77 77 78 78 PDC_COMMON_FLAGS = ATA_FLAG_NO_LEGACY | ATA_FLAG_SRST | 79 - ATA_FLAG_MMIO | ATA_FLAG_NO_ATAPI, 79 + ATA_FLAG_MMIO | ATA_FLAG_NO_ATAPI | 80 + ATA_FLAG_PIO_POLLING, 80 81 }; 81 82 82 83 ··· 121 120 .proc_name = DRV_NAME, 122 121 .dma_boundary = ATA_DMA_BOUNDARY, 123 122 .slave_configure = ata_scsi_slave_config, 123 + .slave_destroy = ata_scsi_slave_destroy, 124 124 .bios_param = ata_std_bios_param, 125 125 }; 126 126 ··· 138 136 .qc_prep = pdc_qc_prep, 139 137 .qc_issue = pdc_qc_issue_prot, 140 138 .eng_timeout = pdc_eng_timeout, 139 + .data_xfer = ata_mmio_data_xfer, 141 140 .irq_handler = pdc_interrupt, 142 141 .irq_clear = pdc_irq_clear, 143 142 ··· 161 158 162 159 .qc_prep = pdc_qc_prep, 163 160 .qc_issue = pdc_qc_issue_prot, 161 + .data_xfer = ata_mmio_data_xfer, 164 162 .eng_timeout = pdc_eng_timeout, 165 163 .irq_handler = pdc_interrupt, 166 164 .irq_clear = pdc_irq_clear, ··· 367 363 sata_phy_reset(ap); 368 364 } 369 365 366 + static void pdc_pata_cbl_detect(struct ata_port *ap) 367 + { 368 + u8 tmp; 369 + void __iomem *mmio = (void *) ap->ioaddr.cmd_addr + PDC_CTLSTAT + 0x03; 370 + 371 + tmp = readb(mmio); 372 + 373 + if (tmp & 0x01) { 374 + ap->cbl = ATA_CBL_PATA40; 375 + ap->udma_mask &= ATA_UDMA_MASK_40C; 376 + } else 377 + ap->cbl = ATA_CBL_PATA80; 378 + } 379 + 370 380 static void pdc_pata_phy_reset(struct ata_port *ap) 371 381 { 372 - /* FIXME: add cable detect. 
Don't assume 40-pin cable */ 373 - ap->cbl = ATA_CBL_PATA40; 374 - ap->udma_mask &= ATA_UDMA_MASK_40C; 375 - 382 + pdc_pata_cbl_detect(ap); 376 383 pdc_reset_port(ap); 377 384 ata_port_probe(ap); 378 385 ata_bus_reset(ap); ··· 450 435 switch (qc->tf.protocol) { 451 436 case ATA_PROT_DMA: 452 437 case ATA_PROT_NODATA: 453 - printk(KERN_ERR "ata%u: command timeout\n", ap->id); 438 + ata_port_printk(ap, KERN_ERR, "command timeout\n"); 454 439 drv_stat = ata_wait_idle(ap); 455 440 qc->err_mask |= __ac_err_mask(drv_stat); 456 441 break; ··· 458 443 default: 459 444 drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000); 460 445 461 - printk(KERN_ERR "ata%u: unknown timeout, cmd 0x%x stat 0x%x\n", 462 - ap->id, qc->tf.command, drv_stat); 446 + ata_port_printk(ap, KERN_ERR, 447 + "unknown timeout, cmd 0x%x stat 0x%x\n", 448 + qc->tf.command, drv_stat); 463 449 464 450 qc->err_mask |= ac_err_mask(drv_stat); 465 451 break; ··· 549 533 ap = host_set->ports[i]; 550 534 tmp = mask & (1 << (i + 1)); 551 535 if (tmp && ap && 552 - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { 536 + !(ap->flags & ATA_FLAG_DISABLED)) { 553 537 struct ata_queued_cmd *qc; 554 538 555 539 qc = ata_qc_from_tag(ap, ap->active_tag); 556 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) 540 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) 557 541 handled += pdc_host_intr(ap, qc); 558 542 } 559 543 } ··· 692 676 if (!printed_version++) 693 677 dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); 694 678 695 - /* 696 - * If this driver happens to only be useful on Apple's K2, then 697 - * we should check that here as it has a normal Serverworks ID 698 - */ 699 679 rc = pci_enable_device(pdev); 700 680 if (rc) 701 681 return rc;
+8 -7
drivers/scsi/sata_qstor.c
··· 41 41 #include <linux/libata.h> 42 42 43 43 #define DRV_NAME "sata_qstor" 44 - #define DRV_VERSION "0.05" 44 + #define DRV_VERSION "0.06" 45 45 46 46 enum { 47 47 QS_PORTS = 4, ··· 142 142 .proc_name = DRV_NAME, 143 143 .dma_boundary = QS_DMA_BOUNDARY, 144 144 .slave_configure = ata_scsi_slave_config, 145 + .slave_destroy = ata_scsi_slave_destroy, 145 146 .bios_param = ata_std_bios_param, 146 147 }; 147 148 ··· 157 156 .phy_reset = qs_phy_reset, 158 157 .qc_prep = qs_qc_prep, 159 158 .qc_issue = qs_qc_issue, 159 + .data_xfer = ata_mmio_data_xfer, 160 160 .eng_timeout = qs_eng_timeout, 161 161 .irq_handler = qs_intr, 162 162 .irq_clear = qs_irq_clear, ··· 177 175 .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 178 176 ATA_FLAG_SATA_RESET | 179 177 //FIXME ATA_FLAG_SRST | 180 - ATA_FLAG_MMIO, 178 + ATA_FLAG_MMIO | ATA_FLAG_PIO_POLLING, 181 179 .pio_mask = 0x10, /* pio4 */ 182 180 .udma_mask = 0x7f, /* udma0-6 */ 183 181 .port_ops = &qs_ata_ops, ··· 396 394 DPRINTK("SFF=%08x%08x: sCHAN=%u sHST=%d sDST=%02x\n", 397 395 sff1, sff0, port_no, sHST, sDST); 398 396 handled = 1; 399 - if (ap && !(ap->flags & 400 - (ATA_FLAG_PORT_DISABLED|ATA_FLAG_NOINTR))) { 397 + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) { 401 398 struct ata_queued_cmd *qc; 402 399 struct qs_port_priv *pp = ap->private_data; 403 400 if (!pp || pp->state != qs_state_pkt) 404 401 continue; 405 402 qc = ata_qc_from_tag(ap, ap->active_tag); 406 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { 403 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { 407 404 switch (sHST) { 408 405 case 0: /* successful CPB */ 409 406 case 3: /* device error */ ··· 429 428 struct ata_port *ap; 430 429 ap = host_set->ports[port_no]; 431 430 if (ap && 432 - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { 431 + !(ap->flags & ATA_FLAG_DISABLED)) { 433 432 struct ata_queued_cmd *qc; 434 433 struct qs_port_priv *pp = ap->private_data; 435 434 if (!pp || pp->state != qs_state_mmio) 436 435 continue; 437 436 qc = 
ata_qc_from_tag(ap, ap->active_tag); 438 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { 437 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) { 439 438 440 439 /* check main status, clearing INTRQ */ 441 440 u8 status = ata_check_status(ap);
+184 -37
drivers/scsi/sata_sil.c
··· 46 46 #include <linux/libata.h> 47 47 48 48 #define DRV_NAME "sata_sil" 49 - #define DRV_VERSION "0.9" 49 + #define DRV_VERSION "1.0" 50 50 51 51 enum { 52 52 /* ··· 54 54 */ 55 55 SIL_FLAG_RERR_ON_DMA_ACT = (1 << 29), 56 56 SIL_FLAG_MOD15WRITE = (1 << 30), 57 + 57 58 SIL_DFL_HOST_FLAGS = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 58 - ATA_FLAG_MMIO, 59 + ATA_FLAG_MMIO | ATA_FLAG_HRST_TO_RESUME, 59 60 60 61 /* 61 62 * Controller IDs ··· 85 84 /* BMDMA/BMDMA2 */ 86 85 SIL_INTR_STEERING = (1 << 1), 87 86 87 + SIL_DMA_ENABLE = (1 << 0), /* DMA run switch */ 88 + SIL_DMA_RDWR = (1 << 3), /* DMA Rd-Wr */ 89 + SIL_DMA_SATA_IRQ = (1 << 4), /* OR of all SATA IRQs */ 90 + SIL_DMA_ACTIVE = (1 << 16), /* DMA running */ 91 + SIL_DMA_ERROR = (1 << 17), /* PCI bus error */ 92 + SIL_DMA_COMPLETE = (1 << 18), /* cmd complete / IRQ pending */ 93 + SIL_DMA_N_SATA_IRQ = (1 << 6), /* SATA_IRQ for the next channel */ 94 + SIL_DMA_N_ACTIVE = (1 << 24), /* ACTIVE for the next channel */ 95 + SIL_DMA_N_ERROR = (1 << 25), /* ERROR for the next channel */ 96 + SIL_DMA_N_COMPLETE = (1 << 26), /* COMPLETE for the next channel */ 97 + 98 + /* SIEN */ 99 + SIL_SIEN_N = (1 << 16), /* triggered by SError.N */ 100 + 88 101 /* 89 102 * Others 90 103 */ ··· 111 96 static u32 sil_scr_read (struct ata_port *ap, unsigned int sc_reg); 112 97 static void sil_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val); 113 98 static void sil_post_set_mode (struct ata_port *ap); 99 + static irqreturn_t sil_interrupt(int irq, void *dev_instance, 100 + struct pt_regs *regs); 101 + static void sil_freeze(struct ata_port *ap); 102 + static void sil_thaw(struct ata_port *ap); 114 103 115 104 116 105 static const struct pci_device_id sil_pci_tbl[] = { ··· 174 155 .proc_name = DRV_NAME, 175 156 .dma_boundary = ATA_DMA_BOUNDARY, 176 157 .slave_configure = ata_scsi_slave_config, 158 + .slave_destroy = ata_scsi_slave_destroy, 177 159 .bios_param = ata_std_bios_param, 178 160 }; 179 161 ··· 186 166 .check_status = 
ata_check_status, 187 167 .exec_command = ata_exec_command, 188 168 .dev_select = ata_std_dev_select, 189 - .probe_reset = ata_std_probe_reset, 190 169 .post_set_mode = sil_post_set_mode, 191 170 .bmdma_setup = ata_bmdma_setup, 192 171 .bmdma_start = ata_bmdma_start, ··· 193 174 .bmdma_status = ata_bmdma_status, 194 175 .qc_prep = ata_qc_prep, 195 176 .qc_issue = ata_qc_issue_prot, 196 - .eng_timeout = ata_eng_timeout, 197 - .irq_handler = ata_interrupt, 177 + .data_xfer = ata_mmio_data_xfer, 178 + .freeze = sil_freeze, 179 + .thaw = sil_thaw, 180 + .error_handler = ata_bmdma_error_handler, 181 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 182 + .irq_handler = sil_interrupt, 198 183 .irq_clear = ata_bmdma_irq_clear, 199 184 .scr_read = sil_scr_read, 200 185 .scr_write = sil_scr_write, ··· 243 220 unsigned long tf; /* ATA taskfile register block */ 244 221 unsigned long ctl; /* ATA control/altstatus register block */ 245 222 unsigned long bmdma; /* DMA register block */ 223 + unsigned long bmdma2; /* DMA register block #2 */ 246 224 unsigned long fifo_cfg; /* FIFO Valid Byte Count and Control */ 247 225 unsigned long scr; /* SATA control register block */ 248 226 unsigned long sien; /* SATA Interrupt Enable register */ ··· 251 227 unsigned long sfis_cfg; /* SATA FIS reception config register */ 252 228 } sil_port[] = { 253 229 /* port 0 ... */ 254 - { 0x80, 0x8A, 0x00, 0x40, 0x100, 0x148, 0xb4, 0x14c }, 255 - { 0xC0, 0xCA, 0x08, 0x44, 0x180, 0x1c8, 0xf4, 0x1cc }, 256 - { 0x280, 0x28A, 0x200, 0x240, 0x300, 0x348, 0x2b4, 0x34c }, 257 - { 0x2C0, 0x2CA, 0x208, 0x244, 0x380, 0x3c8, 0x2f4, 0x3cc }, 230 + { 0x80, 0x8A, 0x00, 0x10, 0x40, 0x100, 0x148, 0xb4, 0x14c }, 231 + { 0xC0, 0xCA, 0x08, 0x18, 0x44, 0x180, 0x1c8, 0xf4, 0x1cc }, 232 + { 0x280, 0x28A, 0x200, 0x210, 0x240, 0x300, 0x348, 0x2b4, 0x34c }, 233 + { 0x2C0, 0x2CA, 0x208, 0x218, 0x244, 0x380, 0x3c8, 0x2f4, 0x3cc }, 258 234 /* ... 
port 3 */ 259 235 }; 260 236 ··· 287 263 288 264 for (i = 0; i < 2; i++) { 289 265 dev = &ap->device[i]; 290 - if (!ata_dev_present(dev)) 266 + if (!ata_dev_enabled(dev)) 291 267 dev_mode[i] = 0; /* PIO0/1/2 */ 292 268 else if (dev->flags & ATA_DFLAG_PIO) 293 269 dev_mode[i] = 1; /* PIO3/4 */ ··· 338 314 writel(val, mmio); 339 315 } 340 316 317 + static void sil_host_intr(struct ata_port *ap, u32 bmdma2) 318 + { 319 + struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); 320 + u8 status; 321 + 322 + if (unlikely(bmdma2 & SIL_DMA_SATA_IRQ)) { 323 + u32 serror; 324 + 325 + /* SIEN doesn't mask SATA IRQs on some 3112s. Those 326 + * controllers continue to assert IRQ as long as 327 + * SError bits are pending. Clear SError immediately. 328 + */ 329 + serror = sil_scr_read(ap, SCR_ERROR); 330 + sil_scr_write(ap, SCR_ERROR, serror); 331 + 332 + /* Trigger hotplug and accumulate SError only if the 333 + * port isn't already frozen. Otherwise, PHY events 334 + * during hardreset make controllers with broken SIEN 335 + * repeat probing needlessly. 336 + */ 337 + if (!(ap->flags & ATA_FLAG_FROZEN)) { 338 + ata_ehi_hotplugged(&ap->eh_info); 339 + ap->eh_info.serror |= serror; 340 + } 341 + 342 + goto freeze; 343 + } 344 + 345 + if (unlikely(!qc || qc->tf.ctl & ATA_NIEN)) 346 + goto freeze; 347 + 348 + /* Check whether we are expecting an interrupt in this state */ 349 + switch (ap->hsm_task_state) { 350 + case HSM_ST_FIRST: 351 + /* Some pre-ATAPI-4 devices assert INTRQ 352 + * at this state when ready to receive CDB. 353 + */ 354 + 355 + /* Checking the ATA_DFLAG_CDB_INTR flag is enough here. 356 + * The flag was turned on only for ATAPI devices. 357 + * No need to check is_atapi_taskfile(&qc->tf) again.
358 + */ 359 + if (!(qc->dev->flags & ATA_DFLAG_CDB_INTR)) 360 + goto err_hsm; 361 + break; 362 + case HSM_ST_LAST: 363 + if (qc->tf.protocol == ATA_PROT_DMA || 364 + qc->tf.protocol == ATA_PROT_ATAPI_DMA) { 365 + /* clear DMA-Start bit */ 366 + ap->ops->bmdma_stop(qc); 367 + 368 + if (bmdma2 & SIL_DMA_ERROR) { 369 + qc->err_mask |= AC_ERR_HOST_BUS; 370 + ap->hsm_task_state = HSM_ST_ERR; 371 + } 372 + } 373 + break; 374 + case HSM_ST: 375 + break; 376 + default: 377 + goto err_hsm; 378 + } 379 + 380 + /* check main status, clearing INTRQ */ 381 + status = ata_chk_status(ap); 382 + if (unlikely(status & ATA_BUSY)) 383 + goto err_hsm; 384 + 385 + /* ack bmdma irq events */ 386 + ata_bmdma_irq_clear(ap); 387 + 388 + /* kick the HSM state machine forward */ 389 + ata_hsm_move(ap, qc, status, 0); 390 + 391 + return; 392 + 393 + err_hsm: 394 + qc->err_mask |= AC_ERR_HSM; 395 + freeze: 396 + ata_port_freeze(ap); 397 + } 398 + 399 + static irqreturn_t sil_interrupt(int irq, void *dev_instance, 400 + struct pt_regs *regs) 401 + { 402 + struct ata_host_set *host_set = dev_instance; 403 + void __iomem *mmio_base = host_set->mmio_base; 404 + int handled = 0; 405 + int i; 406 + 407 + spin_lock(&host_set->lock); 408 + 409 + for (i = 0; i < host_set->n_ports; i++) { 410 + struct ata_port *ap = host_set->ports[i]; 411 + u32 bmdma2; 412 + 413 + if (unlikely(!ap || ap->flags & ATA_FLAG_DISABLED)) 414 + continue; 415 + 416 + bmdma2 = readl(mmio_base + sil_port[ap->port_no].bmdma2); 417 + if (bmdma2 == 0xffffffff || 418 + !(bmdma2 & (SIL_DMA_COMPLETE | SIL_DMA_SATA_IRQ))) 419 + continue; 420 + sil_host_intr(ap, bmdma2); 421 + handled = 1; 422 + } 423 + 424 + spin_unlock(&host_set->lock); 425 + 426 + return IRQ_RETVAL(handled); 427 + } 428 + 429 + static void sil_freeze(struct ata_port *ap) 430 + { 431 + void __iomem *mmio_base = ap->host_set->mmio_base; 432 + u32 tmp; 433 + 434 + /* global IRQ mask doesn't block SATA IRQ, turn off explicitly */ 435 + writel(0, mmio_base + sil_port[ap->port_no].sien); 436 +
437 + /* plug IRQ */ 438 + tmp = readl(mmio_base + SIL_SYSCFG); 439 + tmp |= SIL_MASK_IDE0_INT << ap->port_no; 440 + writel(tmp, mmio_base + SIL_SYSCFG); 441 + readl(mmio_base + SIL_SYSCFG); /* flush */ 442 + } 443 + 444 + static void sil_thaw(struct ata_port *ap) 445 + { 446 + void __iomem *mmio_base = ap->host_set->mmio_base; 447 + u32 tmp; 448 + 449 + /* clear IRQ */ 450 + ata_chk_status(ap); 451 + ata_bmdma_irq_clear(ap); 452 + 453 + /* turn on SATA IRQ */ 454 + writel(SIL_SIEN_N, mmio_base + sil_port[ap->port_no].sien); 455 + 456 + /* turn on IRQ */ 457 + tmp = readl(mmio_base + SIL_SYSCFG); 458 + tmp &= ~(SIL_MASK_IDE0_INT << ap->port_no); 459 + writel(tmp, mmio_base + SIL_SYSCFG); 460 + } 461 + 341 462 /** 342 463 * sil_dev_config - Apply device/host-specific errata fixups 343 464 * @ap: Port containing device to be examined ··· 529 360 if (slow_down || 530 361 ((ap->flags & SIL_FLAG_MOD15WRITE) && 531 362 (quirks & SIL_QUIRK_MOD15WRITE))) { 532 - printk(KERN_INFO "ata%u(%u): applying Seagate errata fix (mod15write workaround)\n", 533 - ap->id, dev->devno); 363 + ata_dev_printk(dev, KERN_INFO, "applying Seagate errata fix " 364 + "(mod15write workaround)\n"); 534 365 dev->max_sectors = 15; 535 366 return; 536 367 } 537 368 538 369 /* limit to udma5 */ 539 370 if (quirks & SIL_QUIRK_UDMA5MAX) { 540 - printk(KERN_INFO "ata%u(%u): applying Maxtor errata fix %s\n", 541 - ap->id, dev->devno, model_num); 371 + ata_dev_printk(dev, KERN_INFO, 372 + "applying Maxtor errata fix %s\n", model_num); 542 373 dev->udma_mask &= ATA_UDMA5; 543 374 return; 544 375 } ··· 553 384 int rc; 554 385 unsigned int i; 555 386 int pci_dev_busy = 0; 556 - u32 tmp, irq_mask; 387 + u32 tmp; 557 388 u8 cls; 558 389 559 390 if (!printed_version++) 560 391 dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); 561 392 562 - /* 563 - * If this driver happens to only be useful on Apple's K2, then 564 - * we should check that here as it has a normal Serverworks ID 565 - */ 566 393 rc 
= pci_enable_device(pdev); 567 394 if (rc) 568 395 return rc; ··· 643 478 } 644 479 645 480 if (ent->driver_data == sil_3114) { 646 - irq_mask = SIL_MASK_4PORT; 647 - 648 481 /* flip the magic "make 4 ports work" bit */ 649 482 tmp = readl(mmio_base + sil_port[2].bmdma); 650 483 if ((tmp & SIL_INTR_STEERING) == 0) 651 484 writel(tmp | SIL_INTR_STEERING, 652 485 mmio_base + sil_port[2].bmdma); 653 - 654 - } else { 655 - irq_mask = SIL_MASK_2PORT; 656 486 } 657 - 658 - /* make sure IDE0/1/2/3 interrupts are not masked */ 659 - tmp = readl(mmio_base + SIL_SYSCFG); 660 - if (tmp & irq_mask) { 661 - tmp &= ~irq_mask; 662 - writel(tmp, mmio_base + SIL_SYSCFG); 663 - readl(mmio_base + SIL_SYSCFG); /* flush */ 664 - } 665 - 666 - /* mask all SATA phy-related interrupts */ 667 - /* TODO: unmask bit 6 (SError N bit) for hotplug */ 668 - for (i = 0; i < probe_ent->n_ports; i++) 669 - writel(0, mmio_base + sil_port[i].sien); 670 487 671 488 pci_set_master(pdev); 672 489
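The sil_freeze()/sil_thaw() hooks added above gate per-port IDE interrupts with a read-modify-write of the SYSCFG register, followed by a read-back that flushes the posted MMIO write. A minimal user-space sketch of that pattern, with a plain variable standing in for the MMIO register and hypothetical names throughout (the driver's real accessors are readl()/writel() on mmio_base + SIL_SYSCFG):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-port interrupt-mask bit; models SIL_MASK_IDE0_INT. */
#define FAKE_IDE_INT_BIT  0x1u

static uint32_t fake_syscfg;               /* stands in for the SYSCFG MMIO register */

static uint32_t reg_read(void)       { return fake_syscfg; }
static void     reg_write(uint32_t v) { fake_syscfg = v; }

/* freeze: set the port's mask bit so the chip stops raising its IRQ */
static void port_freeze(unsigned int port_no)
{
    uint32_t tmp = reg_read();
    tmp |= FAKE_IDE_INT_BIT << port_no;    /* plug IRQ */
    reg_write(tmp);
    (void)reg_read();                      /* flush the posted write, as the driver does */
}

/* thaw: clear the mask bit so interrupts flow again */
static void port_thaw(unsigned int port_no)
{
    uint32_t tmp = reg_read();
    tmp &= ~(FAKE_IDE_INT_BIT << port_no);
    reg_write(tmp);
}
```

The flushing read matters only on real hardware, where PCI writes are posted; here it is kept to mirror the driver's sequence.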
+409 -265
drivers/scsi/sata_sil24.c
··· 31 31 #include <asm/io.h> 32 32 33 33 #define DRV_NAME "sata_sil24" 34 - #define DRV_VERSION "0.23" 34 + #define DRV_VERSION "0.24" 35 35 36 36 /* 37 37 * Port request block (PRB) 32 bytes 38 38 */ 39 39 struct sil24_prb { 40 - u16 ctrl; 41 - u16 prot; 42 - u32 rx_cnt; 40 + __le16 ctrl; 41 + __le16 prot; 42 + __le32 rx_cnt; 43 43 u8 fis[6 * 4]; 44 44 }; 45 45 ··· 47 47 * Scatter gather entry (SGE) 16 bytes 48 48 */ 49 49 struct sil24_sge { 50 - u64 addr; 51 - u32 cnt; 52 - u32 flags; 50 + __le64 addr; 51 + __le32 cnt; 52 + __le32 flags; 53 53 }; 54 54 55 55 /* 56 56 * Port multiplier 57 57 */ 58 58 struct sil24_port_multiplier { 59 - u32 diag; 60 - u32 sactive; 59 + __le32 diag; 60 + __le32 sactive; 61 61 }; 62 62 63 63 enum { ··· 86 86 /* HOST_SLOT_STAT bits */ 87 87 HOST_SSTAT_ATTN = (1 << 31), 88 88 89 + /* HOST_CTRL bits */ 90 + HOST_CTRL_M66EN = (1 << 16), /* M66EN PCI bus signal */ 91 + HOST_CTRL_TRDY = (1 << 17), /* latched PCI TRDY */ 92 + HOST_CTRL_STOP = (1 << 18), /* latched PCI STOP */ 93 + HOST_CTRL_DEVSEL = (1 << 19), /* latched PCI DEVSEL */ 94 + HOST_CTRL_REQ64 = (1 << 20), /* latched PCI REQ64 */ 95 + 89 96 /* 90 97 * Port registers 91 98 * (8192 bytes @ +0x0000, +0x2000, +0x4000 and +0x6000 @ BAR2) 92 99 */ 93 100 PORT_REGS_SIZE = 0x2000, 94 - PORT_PRB = 0x0000, /* (32 bytes PRB + 16 bytes SGEs * 6) * 31 (3968 bytes) */ 101 + 102 + PORT_LRAM = 0x0000, /* 31 LRAM slots and PM regs */ 103 + PORT_LRAM_SLOT_SZ = 0x0080, /* 32 bytes PRB + 2 SGE, ACT... 
*/ 95 104 96 105 PORT_PM = 0x0f80, /* 8 bytes PM * 16 (128 bytes) */ 97 106 /* 32 bit regs */ ··· 151 142 PORT_IRQ_PWR_CHG = (1 << 3), /* power management change */ 152 143 PORT_IRQ_PHYRDY_CHG = (1 << 4), /* PHY ready change */ 153 144 PORT_IRQ_COMWAKE = (1 << 5), /* COMWAKE received */ 154 - PORT_IRQ_UNK_FIS = (1 << 6), /* Unknown FIS received */ 155 - PORT_IRQ_SDB_FIS = (1 << 11), /* SDB FIS received */ 145 + PORT_IRQ_UNK_FIS = (1 << 6), /* unknown FIS received */ 146 + PORT_IRQ_DEV_XCHG = (1 << 7), /* device exchanged */ 147 + PORT_IRQ_8B10B = (1 << 8), /* 8b/10b decode error threshold */ 148 + PORT_IRQ_CRC = (1 << 9), /* CRC error threshold */ 149 + PORT_IRQ_HANDSHAKE = (1 << 10), /* handshake error threshold */ 150 + PORT_IRQ_SDB_NOTIFY = (1 << 11), /* SDB notify received */ 151 + 152 + DEF_PORT_IRQ = PORT_IRQ_COMPLETE | PORT_IRQ_ERROR | 153 + PORT_IRQ_PHYRDY_CHG | PORT_IRQ_DEV_XCHG | 154 + PORT_IRQ_UNK_FIS, 156 155 157 156 /* bits[27:16] are unmasked (raw) */ 158 157 PORT_IRQ_RAW_SHIFT = 16, ··· 191 174 PORT_CERR_CMD_PCIPERR = 27, /* ctrl[15:13] 110 - PCI parity err while fetching PRB */ 192 175 PORT_CERR_XFR_UNDEF = 32, /* PSD ecode 00 - undefined */ 193 176 PORT_CERR_XFR_TGTABRT = 33, /* PSD ecode 01 - target abort */ 194 - PORT_CERR_XFR_MSGABRT = 34, /* PSD ecode 10 - master abort */ 177 + PORT_CERR_XFR_MSTABRT = 34, /* PSD ecode 10 - master abort */ 195 178 PORT_CERR_XFR_PCIPERR = 35, /* PSD ecode 11 - PCI prity err during transfer */ 196 179 PORT_CERR_SENDSERVICE = 36, /* FIS received while sending service */ 197 180 ··· 219 202 SGE_DRD = (1 << 29), /* discard data read (/dev/null) 220 203 data address ignored */ 221 204 205 + SIL24_MAX_CMDS = 31, 206 + 222 207 /* board id */ 223 208 BID_SIL3124 = 0, 224 209 BID_SIL3132 = 1, 225 210 BID_SIL3131 = 2, 211 + 212 + /* host flags */ 213 + SIL24_COMMON_FLAGS = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 214 + ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | 215 + ATA_FLAG_NCQ | ATA_FLAG_SKIP_D2H_BSY, 216 + SIL24_FLAG_PCIX_IRQ_WOC 
= (1 << 24), /* IRQ loss errata on PCI-X */ 226 217 227 218 IRQ_STAT_4PORTS = 0xf, 228 219 }; ··· 249 224 union sil24_cmd_block { 250 225 struct sil24_ata_block ata; 251 226 struct sil24_atapi_block atapi; 227 + }; 228 + 229 + static struct sil24_cerr_info { 230 + unsigned int err_mask, action; 231 + const char *desc; 232 + } sil24_cerr_db[] = { 233 + [0] = { AC_ERR_DEV, ATA_EH_REVALIDATE, 234 + "device error" }, 235 + [PORT_CERR_DEV] = { AC_ERR_DEV, ATA_EH_REVALIDATE, 236 + "device error via D2H FIS" }, 237 + [PORT_CERR_SDB] = { AC_ERR_DEV, ATA_EH_REVALIDATE, 238 + "device error via SDB FIS" }, 239 + [PORT_CERR_DATA] = { AC_ERR_ATA_BUS, ATA_EH_SOFTRESET, 240 + "error in data FIS" }, 241 + [PORT_CERR_SEND] = { AC_ERR_ATA_BUS, ATA_EH_SOFTRESET, 242 + "failed to transmit command FIS" }, 243 + [PORT_CERR_INCONSISTENT] = { AC_ERR_HSM, ATA_EH_SOFTRESET, 244 + "protocol mismatch" }, 245 + [PORT_CERR_DIRECTION] = { AC_ERR_HSM, ATA_EH_SOFTRESET, 246 + "data direction mismatch" }, 247 + [PORT_CERR_UNDERRUN] = { AC_ERR_HSM, ATA_EH_SOFTRESET, 248 + "ran out of SGEs while writing" }, 249 + [PORT_CERR_OVERRUN] = { AC_ERR_HSM, ATA_EH_SOFTRESET, 250 + "ran out of SGEs while reading" }, 251 + [PORT_CERR_PKT_PROT] = { AC_ERR_HSM, ATA_EH_SOFTRESET, 252 + "invalid data direction for ATAPI CDB" }, 253 + [PORT_CERR_SGT_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_SOFTRESET, 254 + "SGT not on qword boundary" }, 255 + [PORT_CERR_SGT_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 256 + "PCI target abort while fetching SGT" }, 257 + [PORT_CERR_SGT_MSTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 258 + "PCI master abort while fetching SGT" }, 259 + [PORT_CERR_SGT_PCIPERR] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 260 + "PCI parity error while fetching SGT" }, 261 + [PORT_CERR_CMD_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_SOFTRESET, 262 + "PRB not on qword boundary" }, 263 + [PORT_CERR_CMD_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 264 + "PCI target abort while fetching PRB" }, 265 + [PORT_CERR_CMD_MSTABRT]
= { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 266 + "PCI master abort while fetching PRB" }, 267 + [PORT_CERR_CMD_PCIPERR] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 268 + "PCI parity error while fetching PRB" }, 269 + [PORT_CERR_XFR_UNDEF] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 270 + "undefined error while transferring data" }, 271 + [PORT_CERR_XFR_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 272 + "PCI target abort while transferring data" }, 273 + [PORT_CERR_XFR_MSTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 274 + "PCI master abort while transferring data" }, 275 + [PORT_CERR_XFR_PCIPERR] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET, 276 + "PCI parity error while transferring data" }, 277 + [PORT_CERR_SENDSERVICE] = { AC_ERR_HSM, ATA_EH_SOFTRESET, 278 + "FIS received while sending service FIS" }, 252 279 }; 253 280 254 281 /* ··· 326 249 static u32 sil24_scr_read(struct ata_port *ap, unsigned sc_reg); 327 250 static void sil24_scr_write(struct ata_port *ap, unsigned sc_reg, u32 val); 328 251 static void sil24_tf_read(struct ata_port *ap, struct ata_taskfile *tf); 329 - static int sil24_probe_reset(struct ata_port *ap, unsigned int *classes); 330 252 static void sil24_qc_prep(struct ata_queued_cmd *qc); 331 253 static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc); 332 254 static void sil24_irq_clear(struct ata_port *ap); 333 - static void sil24_eng_timeout(struct ata_port *ap); 334 255 static irqreturn_t sil24_interrupt(int irq, void *dev_instance, struct pt_regs *regs); 256 + static void sil24_freeze(struct ata_port *ap); 257 + static void sil24_thaw(struct ata_port *ap); 258 + static void sil24_error_handler(struct ata_port *ap); 259 + static void sil24_post_internal_cmd(struct ata_queued_cmd *qc); 335 260 static int sil24_port_start(struct ata_port *ap); 336 261 static void sil24_port_stop(struct ata_port *ap); 337 262 static void sil24_host_stop(struct ata_host_set *host_set); ··· 360 281 .name = DRV_NAME, 361 282 .ioctl = ata_scsi_ioctl, 362 283 .queuecommand = 
ata_scsi_queuecmd, 363 - .can_queue = ATA_DEF_QUEUE, 284 + .change_queue_depth = ata_scsi_change_queue_depth, 285 + .can_queue = SIL24_MAX_CMDS, 364 286 .this_id = ATA_SHT_THIS_ID, 365 287 .sg_tablesize = LIBATA_MAX_PRD, 366 288 .cmd_per_lun = ATA_SHT_CMD_PER_LUN, ··· 370 290 .proc_name = DRV_NAME, 371 291 .dma_boundary = ATA_DMA_BOUNDARY, 372 292 .slave_configure = ata_scsi_slave_config, 293 + .slave_destroy = ata_scsi_slave_destroy, 373 294 .bios_param = ata_std_bios_param, 374 295 }; 375 296 ··· 385 304 386 305 .tf_read = sil24_tf_read, 387 306 388 - .probe_reset = sil24_probe_reset, 389 - 390 307 .qc_prep = sil24_qc_prep, 391 308 .qc_issue = sil24_qc_issue, 392 - 393 - .eng_timeout = sil24_eng_timeout, 394 309 395 310 .irq_handler = sil24_interrupt, 396 311 .irq_clear = sil24_irq_clear, 397 312 398 313 .scr_read = sil24_scr_read, 399 314 .scr_write = sil24_scr_write, 315 + 316 + .freeze = sil24_freeze, 317 + .thaw = sil24_thaw, 318 + .error_handler = sil24_error_handler, 319 + .post_internal_cmd = sil24_post_internal_cmd, 400 320 401 321 .port_start = sil24_port_start, 402 322 .port_stop = sil24_port_stop, ··· 415 333 /* sil_3124 */ 416 334 { 417 335 .sht = &sil24_sht, 418 - .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 419 - ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | 420 - SIL24_NPORTS2FLAG(4), 336 + .host_flags = SIL24_COMMON_FLAGS | SIL24_NPORTS2FLAG(4) | 337 + SIL24_FLAG_PCIX_IRQ_WOC, 421 338 .pio_mask = 0x1f, /* pio0-4 */ 422 339 .mwdma_mask = 0x07, /* mwdma0-2 */ 423 340 .udma_mask = 0x3f, /* udma0-5 */ ··· 425 344 /* sil_3132 */ 426 345 { 427 346 .sht = &sil24_sht, 428 - .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 429 - ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | 430 - SIL24_NPORTS2FLAG(2), 347 + .host_flags = SIL24_COMMON_FLAGS | SIL24_NPORTS2FLAG(2), 431 348 .pio_mask = 0x1f, /* pio0-4 */ 432 349 .mwdma_mask = 0x07, /* mwdma0-2 */ 433 350 .udma_mask = 0x3f, /* udma0-5 */ ··· 434 355 /* sil_3131/sil_3531 */ 435 356 { 436 357 .sht = &sil24_sht, 437 - 
.host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 438 - ATA_FLAG_MMIO | ATA_FLAG_PIO_DMA | 439 - SIL24_NPORTS2FLAG(1), 358 + .host_flags = SIL24_COMMON_FLAGS | SIL24_NPORTS2FLAG(1), 440 359 .pio_mask = 0x1f, /* pio0-4 */ 441 360 .mwdma_mask = 0x07, /* mwdma0-2 */ 442 361 .udma_mask = 0x3f, /* udma0-5 */ 443 362 .port_ops = &sil24_ops, 444 363 }, 445 364 }; 365 + 366 + static int sil24_tag(int tag) 367 + { 368 + if (unlikely(ata_tag_internal(tag))) 369 + return 0; 370 + return tag; 371 + } 446 372 447 373 static void sil24_dev_config(struct ata_port *ap, struct ata_device *dev) 448 374 { ··· 510 426 *tf = pp->tf; 511 427 } 512 428 513 - static int sil24_softreset(struct ata_port *ap, int verbose, 514 - unsigned int *class) 429 + static int sil24_init_port(struct ata_port *ap) 430 + { 431 + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 432 + u32 tmp; 433 + 434 + writel(PORT_CS_INIT, port + PORT_CTRL_STAT); 435 + ata_wait_register(port + PORT_CTRL_STAT, 436 + PORT_CS_INIT, PORT_CS_INIT, 10, 100); 437 + tmp = ata_wait_register(port + PORT_CTRL_STAT, 438 + PORT_CS_RDY, 0, 10, 100); 439 + 440 + if ((tmp & (PORT_CS_INIT | PORT_CS_RDY)) != PORT_CS_RDY) 441 + return -EIO; 442 + return 0; 443 + } 444 + 445 + static int sil24_softreset(struct ata_port *ap, unsigned int *class) 515 446 { 516 447 void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 517 448 struct sil24_port_priv *pp = ap->private_data; 518 449 struct sil24_prb *prb = &pp->cmd_block[0].ata.prb; 519 450 dma_addr_t paddr = pp->cmd_block_dma; 520 - unsigned long timeout = jiffies + ATA_TMOUT_BOOT * HZ; 521 - u32 irq_enable, irq_stat; 451 + u32 mask, irq_stat; 452 + const char *reason; 522 453 523 454 DPRINTK("ENTER\n"); 524 455 525 - if (!sata_dev_present(ap)) { 456 + if (ata_port_offline(ap)) { 526 457 DPRINTK("PHY reports no device\n"); 527 458 *class = ATA_DEV_NONE; 528 459 goto out; 529 460 } 530 461 531 - /* temporarily turn off IRQs during SRST */ 532 - irq_enable = readl(port + 
PORT_IRQ_ENABLE_SET); 533 - writel(irq_enable, port + PORT_IRQ_ENABLE_CLR); 462 + /* put the port into known state */ 463 + if (sil24_init_port(ap)) { 464 + reason = "port not ready"; 465 + goto err; 466 + } 534 467 535 - /* 536 - * XXX: Not sure whether the following sleep is needed or not. 537 - * The original driver had it. So.... 538 - */ 539 - msleep(10); 540 - 468 + /* do SRST */ 541 469 prb->ctrl = cpu_to_le16(PRB_CTRL_SRST); 542 470 prb->fis[1] = 0; /* no PM yet */ 543 471 544 472 writel((u32)paddr, port + PORT_CMD_ACTIVATE); 473 + writel((u64)paddr >> 32, port + PORT_CMD_ACTIVATE + 4); 545 474 546 - do { 547 - irq_stat = readl(port + PORT_IRQ_STAT); 548 - writel(irq_stat, port + PORT_IRQ_STAT); /* clear irq */ 475 + mask = (PORT_IRQ_COMPLETE | PORT_IRQ_ERROR) << PORT_IRQ_RAW_SHIFT; 476 + irq_stat = ata_wait_register(port + PORT_IRQ_STAT, mask, 0x0, 477 + 100, ATA_TMOUT_BOOT / HZ * 1000); 549 478 550 - irq_stat >>= PORT_IRQ_RAW_SHIFT; 551 - if (irq_stat & (PORT_IRQ_COMPLETE | PORT_IRQ_ERROR)) 552 - break; 553 - 554 - msleep(100); 555 - } while (time_before(jiffies, timeout)); 556 - 557 - /* restore IRQs */ 558 - writel(irq_enable, port + PORT_IRQ_ENABLE_SET); 479 + writel(irq_stat, port + PORT_IRQ_STAT); /* clear IRQs */ 480 + irq_stat >>= PORT_IRQ_RAW_SHIFT; 559 481 560 482 if (!(irq_stat & PORT_IRQ_COMPLETE)) { 561 - DPRINTK("EXIT, srst failed\n"); 562 - return -EIO; 483 + if (irq_stat & PORT_IRQ_ERROR) 484 + reason = "SRST command error"; 485 + else 486 + reason = "timeout"; 487 + goto err; 563 488 } 564 489 565 490 sil24_update_tf(ap); ··· 580 487 out: 581 488 DPRINTK("EXIT, class=%u\n", *class); 582 489 return 0; 490 + 491 + err: 492 + ata_port_printk(ap, KERN_ERR, "softreset failed (%s)\n", reason); 493 + return -EIO; 583 494 } 584 495 585 - static int sil24_hardreset(struct ata_port *ap, int verbose, 586 - unsigned int *class) 496 + static int sil24_hardreset(struct ata_port *ap, unsigned int *class) 587 497 { 588 - unsigned int dummy_class; 498 +
void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 499 + const char *reason; 500 + int tout_msec, rc; 501 + u32 tmp; 589 502 590 - /* sil24 doesn't report device signature after hard reset */ 591 - return sata_std_hardreset(ap, verbose, &dummy_class); 592 - } 503 + /* sil24 does the right thing(tm) without any protection */ 504 + sata_set_spd(ap); 593 505 594 - static int sil24_probe_reset(struct ata_port *ap, unsigned int *classes) 595 - { 596 - return ata_drive_probe_reset(ap, ata_std_probeinit, 597 - sil24_softreset, sil24_hardreset, 598 - ata_std_postreset, classes); 506 + tout_msec = 100; 507 + if (ata_port_online(ap)) 508 + tout_msec = 5000; 509 + 510 + writel(PORT_CS_DEV_RST, port + PORT_CTRL_STAT); 511 + tmp = ata_wait_register(port + PORT_CTRL_STAT, 512 + PORT_CS_DEV_RST, PORT_CS_DEV_RST, 10, tout_msec); 513 + 514 + /* SStatus oscillates between zero and valid status after 515 + * DEV_RST, debounce it. 516 + */ 517 + rc = sata_phy_debounce(ap, sata_deb_timing_before_fsrst); 518 + if (rc) { 519 + reason = "PHY debouncing failed"; 520 + goto err; 521 + } 522 + 523 + if (tmp & PORT_CS_DEV_RST) { 524 + if (ata_port_offline(ap)) 525 + return 0; 526 + reason = "link not ready"; 527 + goto err; 528 + } 529 + 530 + /* Sil24 doesn't store signature FIS after hardreset, so we 531 + * can't wait for BSY to clear. Some devices take a long time 532 + * to get ready and those devices will choke if we don't wait 533 + * for BSY clearance here. Tell libata to perform follow-up 534 + * softreset. 
535 + */ 536 + return -EAGAIN; 537 + 538 + err: 539 + ata_port_printk(ap, KERN_ERR, "hardreset failed (%s)\n", reason); 540 + return -EIO; 599 541 } 600 542 601 543 static inline void sil24_fill_sg(struct ata_queued_cmd *qc, ··· 656 528 { 657 529 struct ata_port *ap = qc->ap; 658 530 struct sil24_port_priv *pp = ap->private_data; 659 - union sil24_cmd_block *cb = pp->cmd_block + qc->tag; 531 + union sil24_cmd_block *cb; 660 532 struct sil24_prb *prb; 661 533 struct sil24_sge *sge; 534 + u16 ctrl = 0; 535 + 536 + cb = &pp->cmd_block[sil24_tag(qc->tag)]; 662 537 663 538 switch (qc->tf.protocol) { 664 539 case ATA_PROT_PIO: 665 540 case ATA_PROT_DMA: 541 + case ATA_PROT_NCQ: 666 542 case ATA_PROT_NODATA: 667 543 prb = &cb->ata.prb; 668 544 sge = cb->ata.sge; 669 - prb->ctrl = 0; 670 545 break; 671 546 672 547 case ATA_PROT_ATAPI: ··· 682 551 683 552 if (qc->tf.protocol != ATA_PROT_ATAPI_NODATA) { 684 553 if (qc->tf.flags & ATA_TFLAG_WRITE) 685 - prb->ctrl = cpu_to_le16(PRB_CTRL_PACKET_WRITE); 554 + ctrl = PRB_CTRL_PACKET_WRITE; 686 555 else 687 - prb->ctrl = cpu_to_le16(PRB_CTRL_PACKET_READ); 688 - } else 689 - prb->ctrl = 0; 690 - 556 + ctrl = PRB_CTRL_PACKET_READ; 557 + } 691 558 break; 692 559 693 560 default: ··· 694 565 BUG(); 695 566 } 696 567 568 + prb->ctrl = cpu_to_le16(ctrl); 697 569 ata_tf_to_fis(&qc->tf, prb->fis, 0); 698 570 699 571 if (qc->flags & ATA_QCFLAG_DMAMAP) ··· 704 574 static unsigned int sil24_qc_issue(struct ata_queued_cmd *qc) 705 575 { 706 576 struct ata_port *ap = qc->ap; 707 - void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 708 577 struct sil24_port_priv *pp = ap->private_data; 709 - dma_addr_t paddr = pp->cmd_block_dma + qc->tag * sizeof(*pp->cmd_block); 578 + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 579 + unsigned int tag = sil24_tag(qc->tag); 580 + dma_addr_t paddr; 581 + void __iomem *activate; 710 582 711 - writel((u32)paddr, port + PORT_CMD_ACTIVATE); 583 + paddr = pp->cmd_block_dma + tag * 
sizeof(*pp->cmd_block); 584 + activate = port + PORT_CMD_ACTIVATE + tag * 8; 585 + 586 + writel((u32)paddr, activate); 587 + writel((u64)paddr >> 32, activate + 4); 588 + 712 589 return 0; 713 590 } 714 591 ··· 724 587 /* unused */ 725 588 } 726 589 727 - static int __sil24_restart_controller(void __iomem *port) 590 + static void sil24_freeze(struct ata_port *ap) 728 591 { 729 - u32 tmp; 730 - int cnt; 731 - 732 - writel(PORT_CS_INIT, port + PORT_CTRL_STAT); 733 - 734 - /* Max ~10ms */ 735 - for (cnt = 0; cnt < 10000; cnt++) { 736 - tmp = readl(port + PORT_CTRL_STAT); 737 - if (tmp & PORT_CS_RDY) 738 - return 0; 739 - udelay(1); 740 - } 741 - 742 - return -1; 743 - } 744 - 745 - static void sil24_restart_controller(struct ata_port *ap) 746 - { 747 - if (__sil24_restart_controller((void __iomem *)ap->ioaddr.cmd_addr)) 748 - printk(KERN_ERR DRV_NAME 749 - " ata%u: failed to restart controller\n", ap->id); 750 - } 751 - 752 - static int __sil24_reset_controller(void __iomem *port) 753 - { 754 - int cnt; 755 - u32 tmp; 756 - 757 - /* Reset controller state. Is this correct? 
*/ 758 - writel(PORT_CS_DEV_RST, port + PORT_CTRL_STAT); 759 - readl(port + PORT_CTRL_STAT); /* sync */ 760 - 761 - /* Max ~100ms */ 762 - for (cnt = 0; cnt < 1000; cnt++) { 763 - udelay(100); 764 - tmp = readl(port + PORT_CTRL_STAT); 765 - if (!(tmp & PORT_CS_DEV_RST)) 766 - break; 767 - } 768 - 769 - if (tmp & PORT_CS_DEV_RST) 770 - return -1; 771 - 772 - if (tmp & PORT_CS_RDY) 773 - return 0; 774 - 775 - return __sil24_restart_controller(port); 776 - } 777 - 778 - static void sil24_reset_controller(struct ata_port *ap) 779 - { 780 - printk(KERN_NOTICE DRV_NAME 781 - " ata%u: resetting controller...\n", ap->id); 782 - if (__sil24_reset_controller((void __iomem *)ap->ioaddr.cmd_addr)) 783 - printk(KERN_ERR DRV_NAME 784 - " ata%u: failed to reset controller\n", ap->id); 785 - } 786 - 787 - static void sil24_eng_timeout(struct ata_port *ap) 788 - { 789 - struct ata_queued_cmd *qc; 790 - 791 - qc = ata_qc_from_tag(ap, ap->active_tag); 792 - 793 - printk(KERN_ERR "ata%u: command timeout\n", ap->id); 794 - qc->err_mask |= AC_ERR_TIMEOUT; 795 - ata_eh_qc_complete(qc); 796 - 797 - sil24_reset_controller(ap); 798 - } 799 - 800 - static void sil24_error_intr(struct ata_port *ap, u32 slot_stat) 801 - { 802 - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); 803 - struct sil24_port_priv *pp = ap->private_data; 804 592 void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 805 - u32 irq_stat, cmd_err, sstatus, serror; 806 - unsigned int err_mask; 807 593 808 - irq_stat = readl(port + PORT_IRQ_STAT); 809 - writel(irq_stat, port + PORT_IRQ_STAT); /* clear irq */ 810 - 811 - if (!(irq_stat & PORT_IRQ_ERROR)) { 812 - /* ignore non-completion, non-error irqs for now */ 813 - printk(KERN_WARNING DRV_NAME 814 - "ata%u: non-error exception irq (irq_stat %x)\n", 815 - ap->id, irq_stat); 816 - return; 817 - } 818 - 819 - cmd_err = readl(port + PORT_CMD_ERR); 820 - sstatus = readl(port + PORT_SSTATUS); 821 - serror = readl(port + PORT_SERROR); 822 - if (serror) 823 - 
writel(serror, port + PORT_SERROR); 824 - 825 - /* 826 - * Don't log ATAPI device errors. They're supposed to happen 827 - * and any serious errors will be logged using sense data by 828 - * the SCSI layer. 594 + /* Port-wide IRQ mask in HOST_CTRL doesn't really work, clear 595 + * PORT_IRQ_ENABLE instead. 829 596 */ 830 - if (ap->device[0].class != ATA_DEV_ATAPI || cmd_err > PORT_CERR_SDB) 831 - printk("ata%u: error interrupt on port%d\n" 832 - " stat=0x%x irq=0x%x cmd_err=%d sstatus=0x%x serror=0x%x\n", 833 - ap->id, ap->port_no, slot_stat, irq_stat, cmd_err, sstatus, serror); 597 + writel(0xffff, port + PORT_IRQ_ENABLE_CLR); 598 + } 834 599 835 - if (cmd_err == PORT_CERR_DEV || cmd_err == PORT_CERR_SDB) { 836 - /* 837 - * Device is reporting error, tf registers are valid. 838 - */ 839 - sil24_update_tf(ap); 840 - err_mask = ac_err_mask(pp->tf.command); 841 - sil24_restart_controller(ap); 842 - } else { 843 - /* 844 - * Other errors. libata currently doesn't have any 845 - * mechanism to report these errors. Just turn on 846 - * ATA_ERR. 
847 - */ 848 - err_mask = AC_ERR_OTHER; 849 - sil24_reset_controller(ap); 600 + static void sil24_thaw(struct ata_port *ap) 601 + { 602 + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 603 + u32 tmp; 604 + 605 + /* clear IRQ */ 606 + tmp = readl(port + PORT_IRQ_STAT); 607 + writel(tmp, port + PORT_IRQ_STAT); 608 + 609 + /* turn IRQ back on */ 610 + writel(DEF_PORT_IRQ, port + PORT_IRQ_ENABLE_SET); 611 + } 612 + 613 + static void sil24_error_intr(struct ata_port *ap) 614 + { 615 + void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 616 + struct ata_eh_info *ehi = &ap->eh_info; 617 + int freeze = 0; 618 + u32 irq_stat; 619 + 620 + /* on error, we need to clear IRQ explicitly */ 621 + irq_stat = readl(port + PORT_IRQ_STAT); 622 + writel(irq_stat, port + PORT_IRQ_STAT); 623 + 624 + /* first, analyze and record host port events */ 625 + ata_ehi_clear_desc(ehi); 626 + 627 + ata_ehi_push_desc(ehi, "irq_stat 0x%08x", irq_stat); 628 + 629 + if (irq_stat & (PORT_IRQ_PHYRDY_CHG | PORT_IRQ_DEV_XCHG)) { 630 + ata_ehi_hotplugged(ehi); 631 + ata_ehi_push_desc(ehi, ", %s", 632 + irq_stat & PORT_IRQ_PHYRDY_CHG ? 
633 + "PHY RDY changed" : "device exchanged"); 634 + freeze = 1; 850 635 } 851 636 852 - if (qc) { 853 - qc->err_mask |= err_mask; 854 - ata_qc_complete(qc); 637 + if (irq_stat & PORT_IRQ_UNK_FIS) { 638 + ehi->err_mask |= AC_ERR_HSM; 639 + ehi->action |= ATA_EH_SOFTRESET; 640 + ata_ehi_push_desc(ehi, ", unknown FIS"); 641 + freeze = 1; 855 642 } 643 + 644 + /* deal with command error */ 645 + if (irq_stat & PORT_IRQ_ERROR) { 646 + struct sil24_cerr_info *ci = NULL; 647 + unsigned int err_mask = 0, action = 0; 648 + struct ata_queued_cmd *qc; 649 + u32 cerr; 650 + 651 + /* analyze CMD_ERR */ 652 + cerr = readl(port + PORT_CMD_ERR); 653 + if (cerr < ARRAY_SIZE(sil24_cerr_db)) 654 + ci = &sil24_cerr_db[cerr]; 655 + 656 + if (ci && ci->desc) { 657 + err_mask |= ci->err_mask; 658 + action |= ci->action; 659 + ata_ehi_push_desc(ehi, ", %s", ci->desc); 660 + } else { 661 + err_mask |= AC_ERR_OTHER; 662 + action |= ATA_EH_SOFTRESET; 663 + ata_ehi_push_desc(ehi, ", unknown command error %d", 664 + cerr); 665 + } 666 + 667 + /* record error info */ 668 + qc = ata_qc_from_tag(ap, ap->active_tag); 669 + if (qc) { 670 + sil24_update_tf(ap); 671 + qc->err_mask |= err_mask; 672 + } else 673 + ehi->err_mask |= err_mask; 674 + 675 + ehi->action |= action; 676 + } 677 + 678 + /* freeze or abort */ 679 + if (freeze) 680 + ata_port_freeze(ap); 681 + else 682 + ata_port_abort(ap); 683 + } 684 + 685 + static void sil24_finish_qc(struct ata_queued_cmd *qc) 686 + { 687 + if (qc->flags & ATA_QCFLAG_RESULT_TF) 688 + sil24_update_tf(qc->ap); 856 689 } 857 690 858 691 static inline void sil24_host_intr(struct ata_port *ap) 859 692 { 860 - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag); 861 693 void __iomem *port = (void __iomem *)ap->ioaddr.cmd_addr; 862 - u32 slot_stat; 694 + u32 slot_stat, qc_active; 695 + int rc; 863 696 864 697 slot_stat = readl(port + PORT_SLOT_STAT); 865 - if (!(slot_stat & HOST_SSTAT_ATTN)) { 866 - struct sil24_port_priv *pp = ap->private_data; 867 -
/* 868 - * !HOST_SSAT_ATTN guarantees successful completion, 869 - * so reading back tf registers is unnecessary for 870 - * most commands. TODO: read tf registers for 871 - * commands which require these values on successful 872 - * completion (EXECUTE DEVICE DIAGNOSTIC, CHECK POWER, 873 - * DEVICE RESET and READ PORT MULTIPLIER (any more?). 874 - */ 875 - sil24_update_tf(ap); 876 698 877 - if (qc) { 878 - qc->err_mask |= ac_err_mask(pp->tf.command); 879 - ata_qc_complete(qc); 880 - } 881 - } else 882 - sil24_error_intr(ap, slot_stat); 699 + if (unlikely(slot_stat & HOST_SSTAT_ATTN)) { 700 + sil24_error_intr(ap); 701 + return; 702 + } 703 + 704 + if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) 705 + writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT); 706 + 707 + qc_active = slot_stat & ~HOST_SSTAT_ATTN; 708 + rc = ata_qc_complete_multiple(ap, qc_active, sil24_finish_qc); 709 + if (rc > 0) 710 + return; 711 + if (rc < 0) { 712 + struct ata_eh_info *ehi = &ap->eh_info; 713 + ehi->err_mask |= AC_ERR_HSM; 714 + ehi->action |= ATA_EH_SOFTRESET; 715 + ata_port_freeze(ap); 716 + return; 717 + } 718 + 719 + if (ata_ratelimit()) 720 + ata_port_printk(ap, KERN_INFO, "spurious interrupt " 721 + "(slot_stat 0x%x active_tag %d sactive 0x%x)\n", 722 + slot_stat, ap->active_tag, ap->sactive); 883 723 } 884 724 885 725 static irqreturn_t sil24_interrupt(int irq, void *dev_instance, struct pt_regs *regs) ··· 883 769 for (i = 0; i < host_set->n_ports; i++) 884 770 if (status & (1 << i)) { 885 771 struct ata_port *ap = host_set->ports[i]; 886 - if (ap && !(ap->flags & ATA_FLAG_PORT_DISABLED)) { 772 + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) { 887 773 sil24_host_intr(host_set->ports[i]); 888 774 handled++; 889 775 } else ··· 896 782 return IRQ_RETVAL(handled); 897 783 } 898 784 785 + static void sil24_error_handler(struct ata_port *ap) 786 + { 787 + struct ata_eh_context *ehc = &ap->eh_context; 788 + 789 + if (sil24_init_port(ap)) { 790 + ata_eh_freeze_port(ap); 791 + ehc->i.action |= 
ATA_EH_HARDRESET; 792 + } 793 + 794 + /* perform recovery */ 795 + ata_do_eh(ap, ata_std_prereset, sil24_softreset, sil24_hardreset, 796 + ata_std_postreset); 797 + } 798 + 799 + static void sil24_post_internal_cmd(struct ata_queued_cmd *qc) 800 + { 801 + struct ata_port *ap = qc->ap; 802 + 803 + if (qc->flags & ATA_QCFLAG_FAILED) 804 + qc->err_mask |= AC_ERR_OTHER; 805 + 806 + /* make DMA engine forget about the failed command */ 807 + if (qc->err_mask) 808 + sil24_init_port(ap); 809 + } 810 + 899 811 static inline void sil24_cblk_free(struct sil24_port_priv *pp, struct device *dev) 900 812 { 901 - const size_t cb_size = sizeof(*pp->cmd_block); 813 + const size_t cb_size = sizeof(*pp->cmd_block) * SIL24_MAX_CMDS; 902 814 903 815 dma_free_coherent(dev, cb_size, pp->cmd_block, pp->cmd_block_dma); 904 816 } ··· 934 794 struct device *dev = ap->host_set->dev; 935 795 struct sil24_port_priv *pp; 936 796 union sil24_cmd_block *cb; 937 - size_t cb_size = sizeof(*cb); 797 + size_t cb_size = sizeof(*cb) * SIL24_MAX_CMDS; 938 798 dma_addr_t cb_dma; 939 799 int rc = -ENOMEM; 940 800 ··· 998 858 void __iomem *host_base = NULL; 999 859 void __iomem *port_base = NULL; 1000 860 int i, rc; 861 + u32 tmp; 1001 862 1002 863 if (!printed_version++) 1003 864 dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); ··· 1051 910 /* 1052 911 * Configure the device 1053 912 */ 1054 - /* 1055 - * FIXME: This device is certainly 64-bit capable. We just 1056 - * don't know how to use it. After fixing 32bit activation in 1057 - * this function, enable 64bit masks here. 
1058 - */ 1059 - rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK); 1060 - if (rc) { 1061 - dev_printk(KERN_ERR, &pdev->dev, 1062 - "32-bit DMA enable failed\n"); 1063 - goto out_free; 1064 - } 1065 - rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); 1066 - if (rc) { 1067 - dev_printk(KERN_ERR, &pdev->dev, 1068 - "32-bit consistent DMA enable failed\n"); 1069 - goto out_free; 913 + if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)) { 914 + rc = pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK); 915 + if (rc) { 916 + rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); 917 + if (rc) { 918 + dev_printk(KERN_ERR, &pdev->dev, 919 + "64-bit DMA enable failed\n"); 920 + goto out_free; 921 + } 922 + } 923 + } else { 924 + rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK); 925 + if (rc) { 926 + dev_printk(KERN_ERR, &pdev->dev, 927 + "32-bit DMA enable failed\n"); 928 + goto out_free; 929 + } 930 + rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK); 931 + if (rc) { 932 + dev_printk(KERN_ERR, &pdev->dev, 933 + "32-bit consistent DMA enable failed\n"); 934 + goto out_free; 935 + } 1070 936 } 1071 937 1072 938 /* GPIO off */ 1073 939 writel(0, host_base + HOST_FLASH_CMD); 1074 940 1075 - /* Mask interrupts during initialization */ 941 + /* Apply workaround for completion IRQ loss on PCI-X errata */ 942 + if (probe_ent->host_flags & SIL24_FLAG_PCIX_IRQ_WOC) { 943 + tmp = readl(host_base + HOST_CTRL); 944 + if (tmp & (HOST_CTRL_TRDY | HOST_CTRL_STOP | HOST_CTRL_DEVSEL)) 945 + dev_printk(KERN_INFO, &pdev->dev, 946 + "Applying completion IRQ loss on PCI-X " 947 + "errata fix\n"); 948 + else 949 + probe_ent->host_flags &= ~SIL24_FLAG_PCIX_IRQ_WOC; 950 + } 951 + 952 + /* clear global reset & mask interrupts during initialization */ 1076 953 writel(0, host_base + HOST_CTRL); 1077 954 1078 955 for (i = 0; i < probe_ent->n_ports; i++) { 1079 956 void __iomem *port = port_base + i * PORT_REGS_SIZE; 1080 957 unsigned long portu = (unsigned long)port; 1081 - u32 tmp; 1082 - int cnt; 1083 958 
1084 - probe_ent->port[i].cmd_addr = portu + PORT_PRB; 959 + probe_ent->port[i].cmd_addr = portu; 1085 960 probe_ent->port[i].scr_addr = portu + PORT_SCONTROL; 1086 961 1087 962 ata_std_ports(&probe_ent->port[i]); ··· 1109 952 tmp = readl(port + PORT_CTRL_STAT); 1110 953 if (tmp & PORT_CS_PORT_RST) { 1111 954 writel(PORT_CS_PORT_RST, port + PORT_CTRL_CLR); 1112 - readl(port + PORT_CTRL_STAT); /* sync */ 1113 - for (cnt = 0; cnt < 10; cnt++) { 1114 - msleep(10); 1115 - tmp = readl(port + PORT_CTRL_STAT); 1116 - if (!(tmp & PORT_CS_PORT_RST)) 1117 - break; 1118 - } 955 + tmp = ata_wait_register(port + PORT_CTRL_STAT, 956 + PORT_CS_PORT_RST, 957 + PORT_CS_PORT_RST, 10, 100); 1119 958 if (tmp & PORT_CS_PORT_RST) 1120 959 dev_printk(KERN_ERR, &pdev->dev, 1121 960 "failed to clear port RST\n"); 1122 961 } 962 + 963 + /* Configure IRQ WoC */ 964 + if (probe_ent->host_flags & SIL24_FLAG_PCIX_IRQ_WOC) 965 + writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_STAT); 966 + else 967 + writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_CLR); 1123 968 1124 969 /* Zero error counters. */ 1125 970 writel(0x8000, port + PORT_DECODE_ERR_THRESH); ··· 1131 972 writel(0x0000, port + PORT_CRC_ERR_CNT); 1132 973 writel(0x0000, port + PORT_HSHK_ERR_CNT); 1133 974 1134 - /* FIXME: 32bit activation? 
*/ 1135 - writel(0, port + PORT_ACTIVATE_UPPER_ADDR); 1136 - writel(PORT_CS_32BIT_ACTV, port + PORT_CTRL_STAT); 1137 - 1138 - /* Configure interrupts */ 1139 - writel(0xffff, port + PORT_IRQ_ENABLE_CLR); 1140 - writel(PORT_IRQ_COMPLETE | PORT_IRQ_ERROR | PORT_IRQ_SDB_FIS, 1141 - port + PORT_IRQ_ENABLE_SET); 1142 - 1143 - /* Clear interrupts */ 1144 - writel(0x0fff0fff, port + PORT_IRQ_STAT); 1145 - writel(PORT_CS_IRQ_WOC, port + PORT_CTRL_CLR); 975 + /* Always use 64bit activation */ 976 + writel(PORT_CS_32BIT_ACTV, port + PORT_CTRL_CLR); 1146 977 1147 978 /* Clear port multiplier enable and resume bits */ 1148 979 writel(PORT_CS_PM_EN | PORT_CS_RESUME, port + PORT_CTRL_CLR); 1149 - 1150 - /* Reset itself */ 1151 - if (__sil24_reset_controller(port)) 1152 - dev_printk(KERN_ERR, &pdev->dev, 1153 - "failed to reset controller\n"); 1154 980 } 1155 981 1156 982 /* Turn on interrupts */
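The sil24 hunk above swaps an open-coded msleep() poll loop for ata_wait_register(). Below is a user-space sketch of that helper's semantics — poll while `(value & mask) == val` until the poll budget runs out, returning the last value read. `fake_readl()`, `fake_reg` and `reads_until_clear` are stand-ins for the MMIO register; the real helper sleeps between reads and takes interval/timeout arguments in milliseconds.

```c
#include <assert.h>

typedef unsigned int u32;

static u32 fake_reg;           /* stands in for the MMIO register */
static u32 reads_until_clear;  /* polls before the hardware clears bit 0 */

static u32 fake_readl(void)
{
	if (reads_until_clear && --reads_until_clear == 0)
		fake_reg &= ~0x1u;     /* e.g. PORT_CS_PORT_RST deasserts */
	return fake_reg;
}

/* poll while (value & mask) == val; return the last value read */
static u32 wait_register(u32 mask, u32 val, int max_polls)
{
	u32 tmp = fake_readl();

	while ((tmp & mask) == val && --max_polls > 0)
		tmp = fake_readl();    /* the kernel msleep()s here */
	return tmp;
}
```

The caller then re-tests the bit in the returned value, exactly as the hunk does with PORT_CS_PORT_RST before printing "failed to clear port RST".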
+8 -5
drivers/scsi/sata_sis.c
··· 43 43 #include <linux/libata.h> 44 44 45 45 #define DRV_NAME "sata_sis" 46 - #define DRV_VERSION "0.5" 46 + #define DRV_VERSION "0.6" 47 47 48 48 enum { 49 49 sis_180 = 0, ··· 96 96 .proc_name = DRV_NAME, 97 97 .dma_boundary = ATA_DMA_BOUNDARY, 98 98 .slave_configure = ata_scsi_slave_config, 99 + .slave_destroy = ata_scsi_slave_destroy, 99 100 .bios_param = ata_std_bios_param, 100 101 }; 101 102 ··· 107 106 .check_status = ata_check_status, 108 107 .exec_command = ata_exec_command, 109 108 .dev_select = ata_std_dev_select, 110 - .phy_reset = sata_phy_reset, 111 109 .bmdma_setup = ata_bmdma_setup, 112 110 .bmdma_start = ata_bmdma_start, 113 111 .bmdma_stop = ata_bmdma_stop, 114 112 .bmdma_status = ata_bmdma_status, 115 113 .qc_prep = ata_qc_prep, 116 114 .qc_issue = ata_qc_issue_prot, 117 - .eng_timeout = ata_eng_timeout, 115 + .data_xfer = ata_pio_data_xfer, 116 + .freeze = ata_bmdma_freeze, 117 + .thaw = ata_bmdma_thaw, 118 + .error_handler = ata_bmdma_error_handler, 119 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 118 120 .irq_handler = ata_interrupt, 119 121 .irq_clear = ata_bmdma_irq_clear, 120 122 .scr_read = sis_scr_read, ··· 129 125 130 126 static struct ata_port_info sis_port_info = { 131 127 .sht = &sis_sht, 132 - .host_flags = ATA_FLAG_SATA | ATA_FLAG_SATA_RESET | 133 - ATA_FLAG_NO_LEGACY, 128 + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 134 129 .pio_mask = 0x1f, 135 130 .mwdma_mask = 0x7, 136 131 .udma_mask = 0x7f,
+10 -6
drivers/scsi/sata_svw.c
··· 54 54 #endif /* CONFIG_PPC_OF */ 55 55 56 56 #define DRV_NAME "sata_svw" 57 - #define DRV_VERSION "1.07" 57 + #define DRV_VERSION "1.8" 58 58 59 59 enum { 60 60 /* Taskfile registers offsets */ ··· 257 257 int len, index; 258 258 259 259 /* Find the ata_port */ 260 - ap = (struct ata_port *) &shost->hostdata[0]; 260 + ap = ata_shost_to_port(shost); 261 261 if (ap == NULL) 262 262 return 0; 263 263 ··· 299 299 .proc_name = DRV_NAME, 300 300 .dma_boundary = ATA_DMA_BOUNDARY, 301 301 .slave_configure = ata_scsi_slave_config, 302 + .slave_destroy = ata_scsi_slave_destroy, 302 303 #ifdef CONFIG_PPC_OF 303 304 .proc_info = k2_sata_proc_info, 304 305 #endif ··· 314 313 .check_status = k2_stat_check_status, 315 314 .exec_command = ata_exec_command, 316 315 .dev_select = ata_std_dev_select, 317 - .phy_reset = sata_phy_reset, 318 316 .bmdma_setup = k2_bmdma_setup_mmio, 319 317 .bmdma_start = k2_bmdma_start_mmio, 320 318 .bmdma_stop = ata_bmdma_stop, 321 319 .bmdma_status = ata_bmdma_status, 322 320 .qc_prep = ata_qc_prep, 323 321 .qc_issue = ata_qc_issue_prot, 324 - .eng_timeout = ata_eng_timeout, 322 + .data_xfer = ata_mmio_data_xfer, 323 + .freeze = ata_bmdma_freeze, 324 + .thaw = ata_bmdma_thaw, 325 + .error_handler = ata_bmdma_error_handler, 326 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 325 327 .irq_handler = ata_interrupt, 326 328 .irq_clear = ata_bmdma_irq_clear, 327 329 .scr_read = k2_sata_scr_read, ··· 424 420 writel(0x0, mmio_base + K2_SATA_SIM_OFFSET); 425 421 426 422 probe_ent->sht = &k2_sata_sht; 427 - probe_ent->host_flags = ATA_FLAG_SATA | ATA_FLAG_SATA_RESET | 428 - ATA_FLAG_NO_LEGACY | ATA_FLAG_MMIO; 423 + probe_ent->host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 424 + ATA_FLAG_MMIO; 429 425 probe_ent->port_ops = &k2_sata_ops; 430 426 probe_ent->n_ports = 4; 431 427 probe_ent->irq = pdev->irq;
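The sata_svw hunk above replaces the open-coded `(struct ata_port *)&shost->hostdata[0]` cast with ata_shost_to_port(). A sketch of why that accessor is just a typed view: libata co-allocates the port object inside the SCSI host's trailing hostdata[] area. `fake_shost` and `fake_port` are reduced stand-ins for the real structures.

```c
#include <assert.h>

struct fake_shost {
	int host_no;
	unsigned long hostdata[4];  /* libata's ata_port lives here */
};

struct fake_port {
	int id;
};

/* typed view of the co-allocated area, as ata_shost_to_port() provides */
static struct fake_port *shost_to_port(struct fake_shost *shost)
{
	return (struct fake_port *)&shost->hostdata[0];
}
```

Centralizing the cast in one helper is what lets later patches change the allocation scheme without touching every driver.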
+10 -11
drivers/scsi/sata_sx4.c
··· 46 46 #include "sata_promise.h" 47 47 48 48 #define DRV_NAME "sata_sx4" 49 - #define DRV_VERSION "0.8" 49 + #define DRV_VERSION "0.9" 50 50 51 51 52 52 enum { ··· 191 191 .proc_name = DRV_NAME, 192 192 .dma_boundary = ATA_DMA_BOUNDARY, 193 193 .slave_configure = ata_scsi_slave_config, 194 + .slave_destroy = ata_scsi_slave_destroy, 194 195 .bios_param = ata_std_bios_param, 195 196 }; 196 197 ··· 205 204 .phy_reset = pdc_20621_phy_reset, 206 205 .qc_prep = pdc20621_qc_prep, 207 206 .qc_issue = pdc20621_qc_issue_prot, 207 + .data_xfer = ata_mmio_data_xfer, 208 208 .eng_timeout = pdc_eng_timeout, 209 209 .irq_handler = pdc20621_interrupt, 210 210 .irq_clear = pdc20621_irq_clear, ··· 220 218 .sht = &pdc_sata_sht, 221 219 .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 222 220 ATA_FLAG_SRST | ATA_FLAG_MMIO | 223 - ATA_FLAG_NO_ATAPI, 221 + ATA_FLAG_NO_ATAPI | ATA_FLAG_PIO_POLLING, 224 222 .pio_mask = 0x1f, /* pio0-4 */ 225 223 .mwdma_mask = 0x07, /* mwdma0-2 */ 226 224 .udma_mask = 0x7f, /* udma0-6 ; FIXME */ ··· 835 833 tmp = mask & (1 << i); 836 834 VPRINTK("seq %u, port_no %u, ap %p, tmp %x\n", i, port_no, ap, tmp); 837 835 if (tmp && ap && 838 - !(ap->flags & (ATA_FLAG_PORT_DISABLED | ATA_FLAG_NOINTR))) { 836 + !(ap->flags & ATA_FLAG_DISABLED)) { 839 837 struct ata_queued_cmd *qc; 840 838 841 839 qc = ata_qc_from_tag(ap, ap->active_tag); 842 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) 840 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) 843 841 handled += pdc20621_host_intr(ap, qc, (i > 4), 844 842 mmio_base); 845 843 } ··· 870 868 switch (qc->tf.protocol) { 871 869 case ATA_PROT_DMA: 872 870 case ATA_PROT_NODATA: 873 - printk(KERN_ERR "ata%u: command timeout\n", ap->id); 871 + ata_port_printk(ap, KERN_ERR, "command timeout\n"); 874 872 qc->err_mask |= __ac_err_mask(ata_wait_idle(ap)); 875 873 break; 876 874 877 875 default: 878 876 drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000); 879 877 880 - printk(KERN_ERR "ata%u: unknown timeout, cmd 0x%x stat 
0x%x\n", 881 - ap->id, qc->tf.command, drv_stat); 878 + ata_port_printk(ap, KERN_ERR, 879 + "unknown timeout, cmd 0x%x stat 0x%x\n", 880 + qc->tf.command, drv_stat); 882 881 883 882 qc->err_mask |= ac_err_mask(drv_stat); 884 883 break; ··· 1378 1375 if (!printed_version++) 1379 1376 dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n"); 1380 1377 1381 - /* 1382 - * If this driver happens to only be useful on Apple's K2, then 1383 - * we should check that here as it has a normal Serverworks ID 1384 - */ 1385 1378 rc = pci_enable_device(pdev); 1386 1379 if (rc) 1387 1380 return rc;
+8 -6
drivers/scsi/sata_uli.c
··· 37 37 #include <linux/libata.h> 38 38 39 39 #define DRV_NAME "sata_uli" 40 - #define DRV_VERSION "0.5" 40 + #define DRV_VERSION "0.6" 41 41 42 42 enum { 43 43 uli_5289 = 0, ··· 90 90 .proc_name = DRV_NAME, 91 91 .dma_boundary = ATA_DMA_BOUNDARY, 92 92 .slave_configure = ata_scsi_slave_config, 93 + .slave_destroy = ata_scsi_slave_destroy, 93 94 .bios_param = ata_std_bios_param, 94 95 }; 95 96 ··· 103 102 .exec_command = ata_exec_command, 104 103 .dev_select = ata_std_dev_select, 105 104 106 - .phy_reset = sata_phy_reset, 107 - 108 105 .bmdma_setup = ata_bmdma_setup, 109 106 .bmdma_start = ata_bmdma_start, 110 107 .bmdma_stop = ata_bmdma_stop, 111 108 .bmdma_status = ata_bmdma_status, 112 109 .qc_prep = ata_qc_prep, 113 110 .qc_issue = ata_qc_issue_prot, 111 + .data_xfer = ata_pio_data_xfer, 114 112 115 - .eng_timeout = ata_eng_timeout, 113 + .freeze = ata_bmdma_freeze, 114 + .thaw = ata_bmdma_thaw, 115 + .error_handler = ata_bmdma_error_handler, 116 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 116 117 117 118 .irq_handler = ata_interrupt, 118 119 .irq_clear = ata_bmdma_irq_clear, ··· 129 126 130 127 static struct ata_port_info uli_port_info = { 131 128 .sht = &uli_sht, 132 - .host_flags = ATA_FLAG_SATA | ATA_FLAG_SATA_RESET | 133 - ATA_FLAG_NO_LEGACY, 129 + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 134 130 .pio_mask = 0x1f, /* pio0-4 */ 135 131 .udma_mask = 0x7f, /* udma0-6 */ 136 132 .port_ops = &uli_ops,
+9 -7
drivers/scsi/sata_via.c
··· 47 47 #include <asm/io.h> 48 48 49 49 #define DRV_NAME "sata_via" 50 - #define DRV_VERSION "1.1" 50 + #define DRV_VERSION "1.2" 51 51 52 52 enum board_ids_enum { 53 53 vt6420, ··· 103 103 .proc_name = DRV_NAME, 104 104 .dma_boundary = ATA_DMA_BOUNDARY, 105 105 .slave_configure = ata_scsi_slave_config, 106 + .slave_destroy = ata_scsi_slave_destroy, 106 107 .bios_param = ata_std_bios_param, 107 108 }; 108 109 ··· 116 115 .exec_command = ata_exec_command, 117 116 .dev_select = ata_std_dev_select, 118 117 119 - .phy_reset = sata_phy_reset, 120 - 121 118 .bmdma_setup = ata_bmdma_setup, 122 119 .bmdma_start = ata_bmdma_start, 123 120 .bmdma_stop = ata_bmdma_stop, ··· 123 124 124 125 .qc_prep = ata_qc_prep, 125 126 .qc_issue = ata_qc_issue_prot, 127 + .data_xfer = ata_pio_data_xfer, 126 128 127 - .eng_timeout = ata_eng_timeout, 129 + .freeze = ata_bmdma_freeze, 130 + .thaw = ata_bmdma_thaw, 131 + .error_handler = ata_bmdma_error_handler, 132 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 128 133 129 134 .irq_handler = ata_interrupt, 130 135 .irq_clear = ata_bmdma_irq_clear, ··· 143 140 144 141 static struct ata_port_info svia_port_info = { 145 142 .sht = &svia_sht, 146 - .host_flags = ATA_FLAG_SATA | ATA_FLAG_SRST | ATA_FLAG_NO_LEGACY, 143 + .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY, 147 144 .pio_mask = 0x1f, 148 145 .mwdma_mask = 0x07, 149 146 .udma_mask = 0x7f, ··· 238 235 INIT_LIST_HEAD(&probe_ent->node); 239 236 240 237 probe_ent->sht = &svia_sht; 241 - probe_ent->host_flags = ATA_FLAG_SATA | ATA_FLAG_SATA_RESET | 242 - ATA_FLAG_NO_LEGACY; 238 + probe_ent->host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY; 243 239 probe_ent->port_ops = &svia_sata_ops; 244 240 probe_ent->n_ports = N_PORTS; 245 241 probe_ent->irq = pdev->irq;
+18 -7
drivers/scsi/sata_vsc.c
··· 221 221 222 222 ap = host_set->ports[i]; 223 223 224 - if (ap && !(ap->flags & 225 - (ATA_FLAG_PORT_DISABLED|ATA_FLAG_NOINTR))) { 224 + if (is_vsc_sata_int_err(i, int_status)) { 225 + u32 err_status; 226 + printk(KERN_DEBUG "%s: ignoring interrupt(s)\n", __FUNCTION__); 227 + err_status = ap ? vsc_sata_scr_read(ap, SCR_ERROR) : 0; 228 + vsc_sata_scr_write(ap, SCR_ERROR, err_status); 229 + handled++; 230 + } 231 + 232 + if (ap && !(ap->flags & ATA_FLAG_DISABLED)) { 226 233 struct ata_queued_cmd *qc; 227 234 228 235 qc = ata_qc_from_tag(ap, ap->active_tag); 229 - if (qc && (!(qc->tf.ctl & ATA_NIEN))) { 236 + if (qc && (!(qc->tf.flags & ATA_TFLAG_POLLING))) 230 237 handled += ata_host_intr(ap, qc); 231 - } else if (is_vsc_sata_int_err(i, int_status)) { 238 + else if (is_vsc_sata_int_err(i, int_status)) { 232 239 /* 233 240 * On some chips (i.e. Intel 31244), an error 234 241 * interrupt will sneak in at initialization ··· 279 272 .proc_name = DRV_NAME, 280 273 .dma_boundary = ATA_DMA_BOUNDARY, 281 274 .slave_configure = ata_scsi_slave_config, 275 + .slave_destroy = ata_scsi_slave_destroy, 282 276 .bios_param = ata_std_bios_param, 283 277 }; 284 278 ··· 291 283 .exec_command = ata_exec_command, 292 284 .check_status = ata_check_status, 293 285 .dev_select = ata_std_dev_select, 294 - .phy_reset = sata_phy_reset, 295 286 .bmdma_setup = ata_bmdma_setup, 296 287 .bmdma_start = ata_bmdma_start, 297 288 .bmdma_stop = ata_bmdma_stop, 298 289 .bmdma_status = ata_bmdma_status, 299 290 .qc_prep = ata_qc_prep, 300 291 .qc_issue = ata_qc_issue_prot, 301 - .eng_timeout = ata_eng_timeout, 292 + .data_xfer = ata_pio_data_xfer, 293 + .freeze = ata_bmdma_freeze, 294 + .thaw = ata_bmdma_thaw, 295 + .error_handler = ata_bmdma_error_handler, 296 + .post_internal_cmd = ata_bmdma_post_internal_cmd, 302 297 .irq_handler = vsc_sata_interrupt, 303 298 .irq_clear = ata_bmdma_irq_clear, 304 299 .scr_read = vsc_sata_scr_read, ··· 396 385 397 386 probe_ent->sht = &vsc_sata_sht; 398 387 
probe_ent->host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY | 399 - ATA_FLAG_MMIO | ATA_FLAG_SATA_RESET; 388 + ATA_FLAG_MMIO; 400 389 probe_ent->port_ops = &vsc_sata_ops; 401 390 probe_ent->n_ports = 4; 402 391 probe_ent->irq = pdev->irq;
+18
drivers/scsi/scsi.c
··· 579 579 static DEFINE_PER_CPU(struct list_head, scsi_done_q); 580 580 581 581 /** 582 + * scsi_req_abort_cmd -- Request command recovery for the specified command 583 + * cmd: pointer to the SCSI command of interest 584 + * 585 + * This function requests that SCSI Core start recovery for the 586 + * command by deleting the timer and adding the command to the eh 587 + * queue. It can be called by either LLDDs or SCSI Core. LLDDs who 588 + * implement their own error recovery MAY ignore the timeout event if 589 + * they generated scsi_req_abort_cmd. 590 + */ 591 + void scsi_req_abort_cmd(struct scsi_cmnd *cmd) 592 + { 593 + if (!scsi_delete_timer(cmd)) 594 + return; 595 + scsi_times_out(cmd); 596 + } 597 + EXPORT_SYMBOL(scsi_req_abort_cmd); 598 + 599 + /** 582 600 * scsi_done - Enqueue the finished SCSI command into the done queue. 583 601 * @cmd: The SCSI Command for which a low-level device driver (LLDD) gives 584 602 * ownership back to SCSI Core -- i.e. the LLDD has finished with it.
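The kernel-doc for scsi_req_abort_cmd() above hinges on a race: by the time an LLDD asks for recovery, the command's timer may already have fired. A sketch of that gate — delete the timer first, and start recovery only if it had not already fired; if it fired, the timeout path owns the command. `timer_pending` and `recovery_runs` are stand-ins for kernel state.

```c
#include <assert.h>

static int timer_pending;	/* 1 while the command timer is armed */
static int recovery_runs;	/* counts scsi_times_out() invocations */

/* models scsi_delete_timer(): returns 0 if the timer already fired */
static int delete_timer(void)
{
	int was_pending = timer_pending;

	timer_pending = 0;
	return was_pending;
}

static void req_abort_cmd(void)
{
	if (!delete_timer())
		return;		/* timeout handling already in progress */
	recovery_runs++;	/* scsi_times_out(cmd) runs here */
}
```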
+23 -1
drivers/scsi/scsi_error.c
··· 58 58 } 59 59 60 60 /** 61 + * scsi_schedule_eh - schedule EH for SCSI host 62 + * @shost: SCSI host to invoke error handling on. 63 + * 64 + * Schedule SCSI EH without scmd. 65 + **/ 66 + void scsi_schedule_eh(struct Scsi_Host *shost) 67 + { 68 + unsigned long flags; 69 + 70 + spin_lock_irqsave(shost->host_lock, flags); 71 + 72 + if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 || 73 + scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) { 74 + shost->host_eh_scheduled++; 75 + scsi_eh_wakeup(shost); 76 + } 77 + 78 + spin_unlock_irqrestore(shost->host_lock, flags); 79 + } 80 + EXPORT_SYMBOL_GPL(scsi_schedule_eh); 81 + 82 + /** 61 83 * scsi_eh_scmd_add - add scsi cmd to error handling. 62 84 * @scmd: scmd to run eh on. 63 85 * @eh_flag: optional SCSI_EH flag. ··· 1537 1515 */ 1538 1516 set_current_state(TASK_INTERRUPTIBLE); 1539 1517 while (!kthread_should_stop()) { 1540 - if (shost->host_failed == 0 || 1518 + if ((shost->host_failed == 0 && shost->host_eh_scheduled == 0) || 1541 1519 shost->host_failed != shost->host_busy) { 1542 1520 SCSI_LOG_ERROR_RECOVERY(1, 1543 1521 printk("Error handler scsi_eh_%d sleeping\n",
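The scsi_schedule_eh() hunk above schedules EH (bumping host_eh_scheduled, then scsi_eh_wakeup()) only if the host can be moved into SHOST_RECOVERY or SHOST_CANCEL_RECOVERY. The sketch below models that gating; the state names mirror struct Scsi_Host, but the transition table is a heavily simplified stand-in for scsi_host_set_state().

```c
#include <assert.h>

enum shost_state {
	SHOST_RUNNING, SHOST_CANCEL, SHOST_RECOVERY,
	SHOST_CANCEL_RECOVERY, SHOST_DEL,
};

static enum shost_state state = SHOST_RUNNING;
static int eh_scheduled;	/* models shost->host_eh_scheduled */

/* simplified transition table: two legal moves, all else rejected */
static int set_state(enum shost_state next)
{
	if ((next == SHOST_RECOVERY && state == SHOST_RUNNING) ||
	    (next == SHOST_CANCEL_RECOVERY && state == SHOST_CANCEL)) {
		state = next;
		return 0;	/* kernel convention: 0 on success */
	}
	return -1;
}

static void schedule_eh(void)
{
	if (set_state(SHOST_RECOVERY) == 0 ||
	    set_state(SHOST_CANCEL_RECOVERY) == 0) {
		eh_scheduled++;
		/* scsi_eh_wakeup(shost) would run here */
	}
}
```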
+1 -1
drivers/scsi/scsi_lib.c
··· 500 500 spin_lock_irqsave(shost->host_lock, flags); 501 501 shost->host_busy--; 502 502 if (unlikely(scsi_host_in_recovery(shost) && 503 - shost->host_failed)) 503 + (shost->host_failed || shost->host_eh_scheduled))) 504 504 scsi_eh_wakeup(shost); 505 505 spin_unlock(shost->host_lock); 506 506 spin_lock(sdev->request_queue->queue_lock);
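The one-line scsi_lib.c change above widens the EH wakeup test run when a command completes, so the error handler is also woken when EH was scheduled without any failed command (libata's new EH does exactly that). Sketched as a predicate; the flag names mirror struct Scsi_Host fields.

```c
#include <assert.h>

static int should_wake_eh(int in_recovery, int host_failed,
			  int host_eh_scheduled)
{
	/* before the patch the test was: in_recovery && host_failed */
	return in_recovery && (host_failed || host_eh_scheduled);
}
```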
+6
drivers/scsi/scsi_transport_api.h
··· 1 + #ifndef _SCSI_TRANSPORT_API_H 2 + #define _SCSI_TRANSPORT_API_H 3 + 4 + void scsi_schedule_eh(struct Scsi_Host *shost); 5 + 6 + #endif /* _SCSI_TRANSPORT_API_H */
+37
include/linux/ata.h
··· 97 97 ATA_DRQ = (1 << 3), /* data request i/o */ 98 98 ATA_ERR = (1 << 0), /* have an error */ 99 99 ATA_SRST = (1 << 2), /* software reset */ 100 + ATA_ICRC = (1 << 7), /* interface CRC error */ 101 + ATA_UNC = (1 << 6), /* uncorrectable media error */ 102 + ATA_IDNF = (1 << 4), /* ID not found */ 100 103 ATA_ABORTED = (1 << 2), /* command aborted */ 101 104 102 105 /* ATA command block registers */ ··· 133 130 ATA_CMD_WRITE = 0xCA, 134 131 ATA_CMD_WRITE_EXT = 0x35, 135 132 ATA_CMD_WRITE_FUA_EXT = 0x3D, 133 + ATA_CMD_FPDMA_READ = 0x60, 134 + ATA_CMD_FPDMA_WRITE = 0x61, 136 135 ATA_CMD_PIO_READ = 0x20, 137 136 ATA_CMD_PIO_READ_EXT = 0x24, 138 137 ATA_CMD_PIO_WRITE = 0x30, ··· 153 148 ATA_CMD_INIT_DEV_PARAMS = 0x91, 154 149 ATA_CMD_READ_NATIVE_MAX = 0xF8, 155 150 ATA_CMD_READ_NATIVE_MAX_EXT = 0x27, 151 + ATA_CMD_READ_LOG_EXT = 0x2f, 152 + 153 + /* READ_LOG_EXT pages */ 154 + ATA_LOG_SATA_NCQ = 0x10, 156 155 157 156 /* SETFEATURES stuff */ 158 157 SETFEATURES_XFER = 0x03, ··· 181 172 XFER_PIO_0 = 0x08, 182 173 XFER_PIO_SLOW = 0x00, 183 174 175 + SETFEATURES_WC_ON = 0x02, /* Enable write cache */ 176 + SETFEATURES_WC_OFF = 0x82, /* Disable write cache */ 177 + 184 178 /* ATAPI stuff */ 185 179 ATAPI_PKT_DMA = (1 << 0), 186 180 ATAPI_DMADIR = (1 << 2), /* ATAPI data dir: ··· 204 192 SCR_ACTIVE = 3, 205 193 SCR_NOTIFICATION = 4, 206 194 195 + /* SError bits */ 196 + SERR_DATA_RECOVERED = (1 << 0), /* recovered data error */ 197 + SERR_COMM_RECOVERED = (1 << 1), /* recovered comm failure */ 198 + SERR_DATA = (1 << 8), /* unrecovered data error */ 199 + SERR_PERSISTENT = (1 << 9), /* persistent data/comm error */ 200 + SERR_PROTOCOL = (1 << 10), /* protocol violation */ 201 + SERR_INTERNAL = (1 << 11), /* host internal error */ 202 + SERR_PHYRDY_CHG = (1 << 16), /* PHY RDY changed */ 203 + SERR_DEV_XCHG = (1 << 26), /* device exchanged */ 204 + 207 205 /* struct ata_taskfile flags */ 208 206 ATA_TFLAG_LBA48 = (1 << 0), /* enable 48-bit LBA and "HOB" */ 209 207 
ATA_TFLAG_ISADDR = (1 << 1), /* enable r/w to nsect/lba regs */ ··· 221 199 ATA_TFLAG_WRITE = (1 << 3), /* data dir: host->dev==1 (write) */ 222 200 ATA_TFLAG_LBA = (1 << 4), /* enable LBA */ 223 201 ATA_TFLAG_FUA = (1 << 5), /* enable FUA */ 202 + ATA_TFLAG_POLLING = (1 << 6), /* set nIEN to 1 and use polling */ 224 203 }; 225 204 226 205 enum ata_tf_protocols { ··· 230 207 ATA_PROT_NODATA, /* no data */ 231 208 ATA_PROT_PIO, /* PIO single sector */ 232 209 ATA_PROT_DMA, /* DMA */ 210 + ATA_PROT_NCQ, /* NCQ */ 233 211 ATA_PROT_ATAPI, /* packet command, PIO data xfer*/ 234 212 ATA_PROT_ATAPI_NODATA, /* packet command, no data */ 235 213 ATA_PROT_ATAPI_DMA, /* packet command with special DMA sauce */ ··· 286 262 #define ata_id_has_pm(id) ((id)[82] & (1 << 3)) 287 263 #define ata_id_has_lba(id) ((id)[49] & (1 << 9)) 288 264 #define ata_id_has_dma(id) ((id)[49] & (1 << 8)) 265 + #define ata_id_has_ncq(id) ((id)[76] & (1 << 8)) 266 + #define ata_id_queue_depth(id) (((id)[75] & 0x1f) + 1) 289 267 #define ata_id_removeable(id) ((id)[0] & (1 << 7)) 290 268 #define ata_id_has_dword_io(id) ((id)[50] & (1 << 0)) 291 269 #define ata_id_u32(id,n) \ ··· 297 271 ((u64) (id)[(n) + 2] << 32) | \ 298 272 ((u64) (id)[(n) + 1] << 16) | \ 299 273 ((u64) (id)[(n) + 0]) ) 274 + 275 + #define ata_id_cdb_intr(id) (((id)[0] & 0x60) == 0x20) 300 276 301 277 static inline unsigned int ata_id_major_version(const u16 *id) 302 278 { ··· 337 309 return (tf->protocol == ATA_PROT_ATAPI) || 338 310 (tf->protocol == ATA_PROT_ATAPI_NODATA) || 339 311 (tf->protocol == ATA_PROT_ATAPI_DMA); 312 + } 313 + 314 + static inline int is_multi_taskfile(struct ata_taskfile *tf) 315 + { 316 + return (tf->command == ATA_CMD_READ_MULTI) || 317 + (tf->command == ATA_CMD_WRITE_MULTI) || 318 + (tf->command == ATA_CMD_READ_MULTI_EXT) || 319 + (tf->command == ATA_CMD_WRITE_MULTI_EXT) || 320 + (tf->command == ATA_CMD_WRITE_MULTI_FUA_EXT); 340 321 } 341 322 342 323 static inline int ata_ok(u8 status)
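The NCQ probe macros added to include/linux/ata.h above read IDENTIFY DEVICE words 75 and 76. The macro bodies below are copied from the hunk; `id[]` is a synthetic IDENTIFY image (ATA_ID_WORDS 16-bit words) standing in for real device data.

```c
#include <assert.h>

#define ata_id_has_ncq(id)	((id)[76] & (1 << 8))
#define ata_id_queue_depth(id)	(((id)[75] & 0x1f) + 1)

static unsigned short id[256];

static void fake_ncq_identify(void)
{
	id[76] = 1 << 8;	/* word 76 bit 8: NCQ supported */
	id[75] = 31;		/* word 75 bits 4:0: queue depth minus 1 */
}
```

A depth-minus-one encoding in a 5-bit field is why ATA_MAX_QUEUE in the libata.h hunk is 32.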
+343 -124
include/linux/libata.h
··· 33 33 #include <asm/io.h> 34 34 #include <linux/ata.h> 35 35 #include <linux/workqueue.h> 36 + #include <scsi/scsi_host.h> 36 37 37 38 /* 38 39 * compile-time options: to be removed as soon as all the drivers are ··· 45 44 #undef ATA_NDEBUG /* define to disable quick runtime checks */ 46 45 #undef ATA_ENABLE_PATA /* define to enable PATA support in some 47 46 * low-level drivers */ 48 - #undef ATAPI_ENABLE_DMADIR /* enables ATAPI DMADIR bridge support */ 49 47 50 48 51 49 /* note: prints function name for you */ ··· 108 108 LIBATA_MAX_PRD = ATA_MAX_PRD / 2, 109 109 ATA_MAX_PORTS = 8, 110 110 ATA_DEF_QUEUE = 1, 111 - ATA_MAX_QUEUE = 1, 111 + /* tag ATA_MAX_QUEUE - 1 is reserved for internal commands */ 112 + ATA_MAX_QUEUE = 32, 113 + ATA_TAG_INTERNAL = ATA_MAX_QUEUE - 1, 112 114 ATA_MAX_SECTORS = 200, /* FIXME */ 115 + ATA_MAX_SECTORS_LBA48 = 65535, 113 116 ATA_MAX_BUS = 2, 114 117 ATA_DEF_BUSY_WAIT = 10000, 115 118 ATA_SHORT_PAUSE = (HZ >> 6) + 1, ··· 123 120 ATA_SHT_USE_CLUSTERING = 1, 124 121 125 122 /* struct ata_device stuff */ 126 - ATA_DFLAG_LBA48 = (1 << 0), /* device supports LBA48 */ 127 - ATA_DFLAG_PIO = (1 << 1), /* device currently in PIO mode */ 128 - ATA_DFLAG_LBA = (1 << 2), /* device supports LBA */ 123 + ATA_DFLAG_LBA = (1 << 0), /* device supports LBA */ 124 + ATA_DFLAG_LBA48 = (1 << 1), /* device supports LBA48 */ 125 + ATA_DFLAG_CDB_INTR = (1 << 2), /* device asserts INTRQ when ready for CDB */ 126 + ATA_DFLAG_NCQ = (1 << 3), /* device supports NCQ */ 127 + ATA_DFLAG_CFG_MASK = (1 << 8) - 1, 128 + 129 + ATA_DFLAG_PIO = (1 << 8), /* device currently in PIO mode */ 130 + ATA_DFLAG_INIT_MASK = (1 << 16) - 1, 131 + 132 + ATA_DFLAG_DETACH = (1 << 16), 133 + ATA_DFLAG_DETACHED = (1 << 17), 129 134 130 135 ATA_DEV_UNKNOWN = 0, /* unknown device */ 131 136 ATA_DEV_ATA = 1, /* ATA device */ ··· 143 132 ATA_DEV_NONE = 5, /* no device */ 144 133 145 134 /* struct ata_port flags */ 146 - ATA_FLAG_SLAVE_POSS = (1 << 1), /* host supports slave dev */ 135 
+ ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */ 147 136 /* (doesn't imply presence) */ 148 - ATA_FLAG_PORT_DISABLED = (1 << 2), /* port is disabled, ignore it */ 149 - ATA_FLAG_SATA = (1 << 3), 150 - ATA_FLAG_NO_LEGACY = (1 << 4), /* no legacy mode check */ 151 - ATA_FLAG_SRST = (1 << 5), /* (obsolete) use ATA SRST, not E.D.D. */ 152 - ATA_FLAG_MMIO = (1 << 6), /* use MMIO, not PIO */ 153 - ATA_FLAG_SATA_RESET = (1 << 7), /* (obsolete) use COMRESET */ 154 - ATA_FLAG_PIO_DMA = (1 << 8), /* PIO cmds via DMA */ 155 - ATA_FLAG_NOINTR = (1 << 9), /* FIXME: Remove this once 156 - * proper HSM is in place. */ 157 - ATA_FLAG_DEBUGMSG = (1 << 10), 158 - ATA_FLAG_NO_ATAPI = (1 << 11), /* No ATAPI support */ 137 + ATA_FLAG_SATA = (1 << 1), 138 + ATA_FLAG_NO_LEGACY = (1 << 2), /* no legacy mode check */ 139 + ATA_FLAG_MMIO = (1 << 3), /* use MMIO, not PIO */ 140 + ATA_FLAG_SRST = (1 << 4), /* (obsolete) use ATA SRST, not E.D.D. */ 141 + ATA_FLAG_SATA_RESET = (1 << 5), /* (obsolete) use COMRESET */ 142 + ATA_FLAG_NO_ATAPI = (1 << 6), /* No ATAPI support */ 143 + ATA_FLAG_PIO_DMA = (1 << 7), /* PIO cmds via DMA */ 144 + ATA_FLAG_PIO_LBA48 = (1 << 8), /* Host DMA engine is LBA28 only */ 145 + ATA_FLAG_PIO_POLLING = (1 << 9), /* use polling PIO if LLD 146 + * doesn't handle PIO interrupts */ 147 + ATA_FLAG_NCQ = (1 << 10), /* host supports NCQ */ 148 + ATA_FLAG_HRST_TO_RESUME = (1 << 11), /* hardreset to resume phy */ 149 + ATA_FLAG_SKIP_D2H_BSY = (1 << 12), /* can't wait for the first D2H 150 + * Register FIS clearing BSY */ 159 151 160 - ATA_FLAG_SUSPENDED = (1 << 12), /* port is suspended */ 152 + ATA_FLAG_DEBUGMSG = (1 << 13), 153 + ATA_FLAG_FLUSH_PORT_TASK = (1 << 14), /* flush port task */ 161 154 162 - ATA_FLAG_PIO_LBA48 = (1 << 13), /* Host DMA engine is LBA28 only */ 163 - ATA_FLAG_IRQ_MASK = (1 << 14), /* Mask IRQ in PIO xfers */ 155 + ATA_FLAG_EH_PENDING = (1 << 15), /* EH pending */ 156 + ATA_FLAG_EH_IN_PROGRESS = (1 << 16), /* EH in progress */ 157 + 
ATA_FLAG_FROZEN = (1 << 17), /* port is frozen */ 158 + ATA_FLAG_RECOVERED = (1 << 18), /* recovery action performed */ 159 + ATA_FLAG_LOADING = (1 << 19), /* boot/loading probe */ 160 + ATA_FLAG_UNLOADING = (1 << 20), /* module is unloading */ 161 + ATA_FLAG_SCSI_HOTPLUG = (1 << 21), /* SCSI hotplug scheduled */ 164 162 165 - ATA_FLAG_FLUSH_PORT_TASK = (1 << 15), /* Flush port task */ 166 - ATA_FLAG_IN_EH = (1 << 16), /* EH in progress */ 163 + ATA_FLAG_DISABLED = (1 << 22), /* port is disabled, ignore it */ 164 + ATA_FLAG_SUSPENDED = (1 << 23), /* port is suspended (power) */ 167 165 168 - ATA_QCFLAG_ACTIVE = (1 << 1), /* cmd not yet ack'd to scsi lyer */ 169 - ATA_QCFLAG_SG = (1 << 3), /* have s/g table? */ 170 - ATA_QCFLAG_SINGLE = (1 << 4), /* no s/g, just a single buffer */ 166 + /* bits 24:31 of ap->flags are reserved for LLDD specific flags */ 167 + 168 + /* struct ata_queued_cmd flags */ 169 + ATA_QCFLAG_ACTIVE = (1 << 0), /* cmd not yet ack'd to scsi lyer */ 170 + ATA_QCFLAG_SG = (1 << 1), /* have s/g table? 
*/ 171 + ATA_QCFLAG_SINGLE = (1 << 2), /* no s/g, just a single buffer */ 171 172 ATA_QCFLAG_DMAMAP = ATA_QCFLAG_SG | ATA_QCFLAG_SINGLE, 172 - ATA_QCFLAG_EH_SCHEDULED = (1 << 5), /* EH scheduled */ 173 + ATA_QCFLAG_IO = (1 << 3), /* standard IO command */ 174 + ATA_QCFLAG_RESULT_TF = (1 << 4), /* result TF requested */ 175 + 176 + ATA_QCFLAG_FAILED = (1 << 16), /* cmd failed and is owned by EH */ 177 + ATA_QCFLAG_SENSE_VALID = (1 << 17), /* sense data valid */ 178 + ATA_QCFLAG_EH_SCHEDULED = (1 << 18), /* EH scheduled (obsolete) */ 173 179 174 180 /* host set flags */ 175 181 ATA_HOST_SIMPLEX = (1 << 0), /* Host is simplex, one DMA channel per host_set only */ 176 182 177 183 /* various lengths of time */ 178 - ATA_TMOUT_PIO = 30 * HZ, 179 184 ATA_TMOUT_BOOT = 30 * HZ, /* heuristic */ 180 185 ATA_TMOUT_BOOT_QUICK = 7 * HZ, /* heuristic */ 181 - ATA_TMOUT_CDB = 30 * HZ, 182 - ATA_TMOUT_CDB_QUICK = 5 * HZ, 183 186 ATA_TMOUT_INTERNAL = 30 * HZ, 184 187 ATA_TMOUT_INTERNAL_QUICK = 5 * HZ, 185 188 ··· 232 207 /* size of buffer to pad xfers ending on unaligned boundaries */ 233 208 ATA_DMA_PAD_SZ = 4, 234 209 ATA_DMA_PAD_BUF_SZ = ATA_DMA_PAD_SZ * ATA_MAX_QUEUE, 235 - 236 - /* Masks for port functions */ 210 + 211 + /* masks for port functions */ 237 212 ATA_PORT_PRIMARY = (1 << 0), 238 213 ATA_PORT_SECONDARY = (1 << 1), 214 + 215 + /* ering size */ 216 + ATA_ERING_SIZE = 32, 217 + 218 + /* desc_len for ata_eh_info and context */ 219 + ATA_EH_DESC_LEN = 80, 220 + 221 + /* reset / recovery action types */ 222 + ATA_EH_REVALIDATE = (1 << 0), 223 + ATA_EH_SOFTRESET = (1 << 1), 224 + ATA_EH_HARDRESET = (1 << 2), 225 + 226 + ATA_EH_RESET_MASK = ATA_EH_SOFTRESET | ATA_EH_HARDRESET, 227 + ATA_EH_PERDEV_MASK = ATA_EH_REVALIDATE, 228 + 229 + /* ata_eh_info->flags */ 230 + ATA_EHI_HOTPLUGGED = (1 << 0), /* could have been hotplugged */ 231 + 232 + ATA_EHI_DID_RESET = (1 << 16), /* already reset this port */ 233 + 234 + /* max repeat if error condition is still set after 
->error_handler */ 235 + ATA_EH_MAX_REPEAT = 5, 236 + 237 + /* how hard are we gonna try to probe/recover devices */ 238 + ATA_PROBE_MAX_TRIES = 3, 239 + ATA_EH_RESET_TRIES = 3, 240 + ATA_EH_DEV_TRIES = 3, 241 + 242 + /* Drive spinup time (time from power-on to the first D2H FIS) 243 + * in msecs - 8s currently. Failing to get ready in this time 244 + * isn't critical. It will result in reset failure for 245 + * controllers which can't wait for the first D2H FIS. libata 246 + * will retry, so it just has to be long enough to spin up 247 + * most devices. 248 + */ 249 + ATA_SPINUP_WAIT = 8000, 239 250 }; 240 251 241 252 enum hsm_task_states { 242 - HSM_ST_UNKNOWN, 243 - HSM_ST_IDLE, 244 - HSM_ST_POLL, 245 - HSM_ST_TMOUT, 246 - HSM_ST, 247 - HSM_ST_LAST, 248 - HSM_ST_LAST_POLL, 249 - HSM_ST_ERR, 253 + HSM_ST_UNKNOWN, /* state unknown */ 254 + HSM_ST_IDLE, /* no command on going */ 255 + HSM_ST, /* (waiting the device to) transfer data */ 256 + HSM_ST_LAST, /* (waiting the device to) complete command */ 257 + HSM_ST_ERR, /* error */ 258 + HSM_ST_FIRST, /* (waiting the device to) 259 + write CDB or first data block */ 250 260 }; 251 261 252 262 enum ata_completion_errors { ··· 304 244 305 245 /* typedefs */ 306 246 typedef void (*ata_qc_cb_t) (struct ata_queued_cmd *qc); 307 - typedef void (*ata_probeinit_fn_t)(struct ata_port *); 308 - typedef int (*ata_reset_fn_t)(struct ata_port *, int, unsigned int *); 309 - typedef void (*ata_postreset_fn_t)(struct ata_port *ap, unsigned int *); 247 + typedef int (*ata_prereset_fn_t)(struct ata_port *ap); 248 + typedef int (*ata_reset_fn_t)(struct ata_port *ap, unsigned int *classes); 249 + typedef void (*ata_postreset_fn_t)(struct ata_port *ap, unsigned int *classes); 310 250 311 251 struct ata_ioports { 312 252 unsigned long cmd_addr; ··· 357 297 unsigned long flags; 358 298 int simplex_claimed; /* Keep seperate in case we 359 299 ever need to do this locked */ 360 - struct ata_port * ports[0]; 300 + struct ata_host_set *next; 
/* for legacy mode */ 301 + struct ata_port *ports[0]; 361 302 }; 362 303 363 304 struct ata_queued_cmd { ··· 397 336 struct scatterlist *__sg; 398 337 399 338 unsigned int err_mask; 400 - 339 + struct ata_taskfile result_tf; 401 340 ata_qc_cb_t complete_fn; 402 341 403 342 void *private_data; ··· 409 348 unsigned long rw_reqbuf; 410 349 }; 411 350 351 + struct ata_ering_entry { 352 + int is_io; 353 + unsigned int err_mask; 354 + u64 timestamp; 355 + }; 356 + 357 + struct ata_ering { 358 + int cursor; 359 + struct ata_ering_entry ring[ATA_ERING_SIZE]; 360 + }; 361 + 412 362 struct ata_device { 413 - u64 n_sectors; /* size of device, if ATA */ 414 - unsigned long flags; /* ATA_DFLAG_xxx */ 415 - unsigned int class; /* ATA_DEV_xxx */ 363 + struct ata_port *ap; 416 364 unsigned int devno; /* 0 or 1 */ 417 - u16 *id; /* IDENTIFY xxx DEVICE data */ 365 + unsigned long flags; /* ATA_DFLAG_xxx */ 366 + struct scsi_device *sdev; /* attached SCSI device */ 367 + /* n_sector is used as CLEAR_OFFSET, read comment above CLEAR_OFFSET */ 368 + u64 n_sectors; /* size of device, if ATA */ 369 + unsigned int class; /* ATA_DEV_xxx */ 370 + u16 id[ATA_ID_WORDS]; /* IDENTIFY xxx DEVICE data */ 418 371 u8 pio_mode; 419 372 u8 dma_mode; 420 373 u8 xfer_mode; ··· 448 373 u16 cylinders; /* Number of cylinders */ 449 374 u16 heads; /* Number of heads */ 450 375 u16 sectors; /* Number of sectors per track */ 376 + 377 + /* error history */ 378 + struct ata_ering ering; 379 + }; 380 + 381 + /* Offset into struct ata_device. Fields above it are maintained 382 + * acress device init. Fields below are zeroed. 
383 + */ 384 + #define ATA_DEVICE_CLEAR_OFFSET offsetof(struct ata_device, n_sectors) 385 + 386 + struct ata_eh_info { 387 + struct ata_device *dev; /* offending device */ 388 + u32 serror; /* SError from LLDD */ 389 + unsigned int err_mask; /* port-wide err_mask */ 390 + unsigned int action; /* ATA_EH_* action mask */ 391 + unsigned int dev_action[ATA_MAX_DEVICES]; /* dev EH action */ 392 + unsigned int flags; /* ATA_EHI_* flags */ 393 + 394 + unsigned long hotplug_timestamp; 395 + unsigned int probe_mask; 396 + 397 + char desc[ATA_EH_DESC_LEN]; 398 + int desc_len; 399 + }; 400 + 401 + struct ata_eh_context { 402 + struct ata_eh_info i; 403 + int tries[ATA_MAX_DEVICES]; 404 + unsigned int classes[ATA_MAX_DEVICES]; 405 + unsigned int did_probe_mask; 451 406 }; 452 407 453 408 struct ata_port { 454 409 struct Scsi_Host *host; /* our co-allocated scsi host */ 455 410 const struct ata_port_operations *ops; 411 + spinlock_t *lock; 456 412 unsigned long flags; /* ATA_FLAG_xxx */ 457 413 unsigned int id; /* unique id req'd by scsi midlyr */ 458 414 unsigned int port_no; /* unique port #; from zero */ ··· 503 397 unsigned int mwdma_mask; 504 398 unsigned int udma_mask; 505 399 unsigned int cbl; /* cable type; ATA_CBL_xxx */ 400 + unsigned int hw_sata_spd_limit; 401 + unsigned int sata_spd_limit; /* SATA PHY speed limit */ 402 + 403 + /* record runtime error info, protected by host_set lock */ 404 + struct ata_eh_info eh_info; 405 + /* EH context owned by EH */ 406 + struct ata_eh_context eh_context; 506 407 507 408 struct ata_device device[ATA_MAX_DEVICES]; 508 409 509 410 struct ata_queued_cmd qcmd[ATA_MAX_QUEUE]; 510 - unsigned long qactive; 411 + unsigned long qc_allocated; 412 + unsigned int qc_active; 413 + 511 414 unsigned int active_tag; 415 + u32 sactive; 512 416 513 417 struct ata_host_stats stats; 514 418 struct ata_host_set *host_set; 515 419 struct device *dev; 516 420 517 421 struct work_struct port_task; 422 + struct work_struct hotplug_task; 423 + struct 
work_struct scsi_rescan_task; 518 424 519 425 unsigned int hsm_task_state; 520 - unsigned long pio_task_timeout; 521 426 522 427 u32 msg_enable; 523 428 struct list_head eh_done_q; 429 + wait_queue_head_t eh_wait_q; 524 430 525 431 void *private_data; 432 + 433 + u8 sector_buf[ATA_SECT_SIZE]; /* owned by EH */ 526 434 }; 527 435 528 436 struct ata_port_operations { ··· 558 438 559 439 void (*phy_reset) (struct ata_port *ap); /* obsolete */ 560 440 void (*set_mode) (struct ata_port *ap); 561 - int (*probe_reset) (struct ata_port *ap, unsigned int *classes); 562 441 563 442 void (*post_set_mode) (struct ata_port *ap); 564 443 ··· 566 447 void (*bmdma_setup) (struct ata_queued_cmd *qc); 567 448 void (*bmdma_start) (struct ata_queued_cmd *qc); 568 449 450 + void (*data_xfer) (struct ata_device *, unsigned char *, unsigned int, int); 451 + 569 452 void (*qc_prep) (struct ata_queued_cmd *qc); 570 453 unsigned int (*qc_issue) (struct ata_queued_cmd *qc); 571 454 572 - void (*eng_timeout) (struct ata_port *ap); 455 + /* Error handlers. ->error_handler overrides ->eng_timeout and 456 + * indicates that new-style EH is in place. 
457 + */ 458 + void (*eng_timeout) (struct ata_port *ap); /* obsolete */ 459 + 460 + void (*freeze) (struct ata_port *ap); 461 + void (*thaw) (struct ata_port *ap); 462 + void (*error_handler) (struct ata_port *ap); 463 + void (*post_internal_cmd) (struct ata_queued_cmd *qc); 573 464 574 465 irqreturn_t (*irq_handler)(int, void *, struct pt_regs *); 575 466 void (*irq_clear) (struct ata_port *); ··· 621 492 622 493 #define FIT(v,vmin,vmax) max_t(short,min_t(short,v,vmax),vmin) 623 494 495 + extern const unsigned long sata_deb_timing_boot[]; 496 + extern const unsigned long sata_deb_timing_eh[]; 497 + extern const unsigned long sata_deb_timing_before_fsrst[]; 498 + 624 499 extern void ata_port_probe(struct ata_port *); 625 500 extern void __sata_phy_reset(struct ata_port *ap); 626 501 extern void sata_phy_reset(struct ata_port *ap); 627 502 extern void ata_bus_reset(struct ata_port *ap); 628 - extern int ata_drive_probe_reset(struct ata_port *ap, 629 - ata_probeinit_fn_t probeinit, 630 - ata_reset_fn_t softreset, ata_reset_fn_t hardreset, 631 - ata_postreset_fn_t postreset, unsigned int *classes); 632 - extern void ata_std_probeinit(struct ata_port *ap); 633 - extern int ata_std_softreset(struct ata_port *ap, int verbose, 634 - unsigned int *classes); 635 - extern int sata_std_hardreset(struct ata_port *ap, int verbose, 636 - unsigned int *class); 503 + extern int sata_set_spd(struct ata_port *ap); 504 + extern int sata_phy_debounce(struct ata_port *ap, const unsigned long *param); 505 + extern int sata_phy_resume(struct ata_port *ap, const unsigned long *param); 506 + extern int ata_std_prereset(struct ata_port *ap); 507 + extern int ata_std_softreset(struct ata_port *ap, unsigned int *classes); 508 + extern int sata_std_hardreset(struct ata_port *ap, unsigned int *class); 637 509 extern void ata_std_postreset(struct ata_port *ap, unsigned int *classes); 638 - extern int ata_dev_revalidate(struct ata_port *ap, struct ata_device *dev, 639 - int post_reset); 510 + 
extern int ata_dev_revalidate(struct ata_device *dev, int post_reset); 640 511 extern void ata_port_disable(struct ata_port *); 641 512 extern void ata_std_ports(struct ata_ioports *ioaddr); 642 513 #ifdef CONFIG_PCI ··· 648 519 extern int ata_pci_clear_simplex(struct pci_dev *pdev); 649 520 #endif /* CONFIG_PCI */ 650 521 extern int ata_device_add(const struct ata_probe_ent *ent); 522 + extern void ata_port_detach(struct ata_port *ap); 651 523 extern void ata_host_set_remove(struct ata_host_set *host_set); 652 524 extern int ata_scsi_detect(struct scsi_host_template *sht); 653 525 extern int ata_scsi_ioctl(struct scsi_device *dev, int cmd, void __user *arg); 654 526 extern int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)); 655 - extern void ata_eh_qc_complete(struct ata_queued_cmd *qc); 656 - extern void ata_eh_qc_retry(struct ata_queued_cmd *qc); 657 527 extern int ata_scsi_release(struct Scsi_Host *host); 658 528 extern unsigned int ata_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc); 529 + extern int sata_scr_valid(struct ata_port *ap); 530 + extern int sata_scr_read(struct ata_port *ap, int reg, u32 *val); 531 + extern int sata_scr_write(struct ata_port *ap, int reg, u32 val); 532 + extern int sata_scr_write_flush(struct ata_port *ap, int reg, u32 val); 533 + extern int ata_port_online(struct ata_port *ap); 534 + extern int ata_port_offline(struct ata_port *ap); 659 535 extern int ata_scsi_device_resume(struct scsi_device *); 660 536 extern int ata_scsi_device_suspend(struct scsi_device *, pm_message_t state); 661 - extern int ata_device_resume(struct ata_port *, struct ata_device *); 662 - extern int ata_device_suspend(struct ata_port *, struct ata_device *, pm_message_t state); 537 + extern int ata_device_resume(struct ata_device *); 538 + extern int ata_device_suspend(struct ata_device *, pm_message_t state); 663 539 extern int ata_ratelimit(void); 664 540 extern unsigned int ata_busy_sleep(struct ata_port *ap, 665 541 
unsigned long timeout_pat, 666 542 unsigned long timeout); 667 543 extern void ata_port_queue_task(struct ata_port *ap, void (*fn)(void *), 668 544 void *data, unsigned long delay); 545 + extern u32 ata_wait_register(void __iomem *reg, u32 mask, u32 val, 546 + unsigned long interval_msec, 547 + unsigned long timeout_msec); 669 548 670 549 /* 671 550 * Default driver ops implementations ··· 687 550 extern u8 ata_check_status(struct ata_port *ap); 688 551 extern u8 ata_altstatus(struct ata_port *ap); 689 552 extern void ata_exec_command(struct ata_port *ap, const struct ata_taskfile *tf); 690 - extern int ata_std_probe_reset(struct ata_port *ap, unsigned int *classes); 691 553 extern int ata_port_start (struct ata_port *ap); 692 554 extern void ata_port_stop (struct ata_port *ap); 693 555 extern void ata_host_stop (struct ata_host_set *host_set); 694 556 extern irqreturn_t ata_interrupt (int irq, void *dev_instance, struct pt_regs *regs); 557 + extern void ata_mmio_data_xfer(struct ata_device *adev, unsigned char *buf, 558 + unsigned int buflen, int write_data); 559 + extern void ata_pio_data_xfer(struct ata_device *adev, unsigned char *buf, 560 + unsigned int buflen, int write_data); 561 + extern void ata_pio_data_xfer_noirq(struct ata_device *adev, unsigned char *buf, 562 + unsigned int buflen, int write_data); 695 563 extern void ata_qc_prep(struct ata_queued_cmd *qc); 696 564 extern void ata_noop_qc_prep(struct ata_queued_cmd *qc); 697 565 extern unsigned int ata_qc_issue_prot(struct ata_queued_cmd *qc); ··· 714 572 extern void ata_bmdma_stop(struct ata_queued_cmd *qc); 715 573 extern u8 ata_bmdma_status(struct ata_port *ap); 716 574 extern void ata_bmdma_irq_clear(struct ata_port *ap); 717 - extern void __ata_qc_complete(struct ata_queued_cmd *qc); 718 - extern void ata_eng_timeout(struct ata_port *ap); 719 - extern void ata_scsi_simulate(struct ata_port *ap, struct ata_device *dev, 720 - struct scsi_cmnd *cmd, 575 + extern void ata_bmdma_freeze(struct ata_port 
*ap); 576 + extern void ata_bmdma_thaw(struct ata_port *ap); 577 + extern void ata_bmdma_drive_eh(struct ata_port *ap, ata_prereset_fn_t prereset, 578 + ata_reset_fn_t softreset, 579 + ata_reset_fn_t hardreset, 580 + ata_postreset_fn_t postreset); 581 + extern void ata_bmdma_error_handler(struct ata_port *ap); 582 + extern void ata_bmdma_post_internal_cmd(struct ata_queued_cmd *qc); 583 + extern int ata_hsm_move(struct ata_port *ap, struct ata_queued_cmd *qc, 584 + u8 status, int in_wq); 585 + extern void ata_qc_complete(struct ata_queued_cmd *qc); 586 + extern int ata_qc_complete_multiple(struct ata_port *ap, u32 qc_active, 587 + void (*finish_qc)(struct ata_queued_cmd *)); 588 + extern void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd, 721 589 void (*done)(struct scsi_cmnd *)); 722 590 extern int ata_std_bios_param(struct scsi_device *sdev, 723 591 struct block_device *bdev, 724 592 sector_t capacity, int geom[]); 725 593 extern int ata_scsi_slave_config(struct scsi_device *sdev); 726 - extern struct ata_device *ata_dev_pair(struct ata_port *ap, 727 - struct ata_device *adev); 594 + extern void ata_scsi_slave_destroy(struct scsi_device *sdev); 595 + extern int ata_scsi_change_queue_depth(struct scsi_device *sdev, 596 + int queue_depth); 597 + extern struct ata_device *ata_dev_pair(struct ata_device *adev); 728 598 729 599 /* 730 600 * Timing helpers ··· 782 628 extern unsigned long ata_pci_default_filter(const struct ata_port *, struct ata_device *, unsigned long); 783 629 #endif /* CONFIG_PCI */ 784 630 631 + /* 632 + * EH 633 + */ 634 + extern void ata_eng_timeout(struct ata_port *ap); 785 635 636 + extern void ata_port_schedule_eh(struct ata_port *ap); 637 + extern int ata_port_abort(struct ata_port *ap); 638 + extern int ata_port_freeze(struct ata_port *ap); 639 + 640 + extern void ata_eh_freeze_port(struct ata_port *ap); 641 + extern void ata_eh_thaw_port(struct ata_port *ap); 642 + 643 + extern void ata_eh_qc_complete(struct 
ata_queued_cmd *qc); 644 + extern void ata_eh_qc_retry(struct ata_queued_cmd *qc); 645 + 646 + extern void ata_do_eh(struct ata_port *ap, ata_prereset_fn_t prereset, 647 + ata_reset_fn_t softreset, ata_reset_fn_t hardreset, 648 + ata_postreset_fn_t postreset); 649 + 650 + /* 651 + * printk helpers 652 + */ 653 + #define ata_port_printk(ap, lv, fmt, args...) \ 654 + printk(lv"ata%u: "fmt, (ap)->id , ##args) 655 + 656 + #define ata_dev_printk(dev, lv, fmt, args...) \ 657 + printk(lv"ata%u.%02u: "fmt, (dev)->ap->id, (dev)->devno , ##args) 658 + 659 + /* 660 + * ata_eh_info helpers 661 + */ 662 + #define ata_ehi_push_desc(ehi, fmt, args...) do { \ 663 + (ehi)->desc_len += scnprintf((ehi)->desc + (ehi)->desc_len, \ 664 + ATA_EH_DESC_LEN - (ehi)->desc_len, \ 665 + fmt , ##args); \ 666 + } while (0) 667 + 668 + #define ata_ehi_clear_desc(ehi) do { \ 669 + (ehi)->desc[0] = '\0'; \ 670 + (ehi)->desc_len = 0; \ 671 + } while (0) 672 + 673 + static inline void ata_ehi_hotplugged(struct ata_eh_info *ehi) 674 + { 675 + if (ehi->flags & ATA_EHI_HOTPLUGGED) 676 + return; 677 + 678 + ehi->flags |= ATA_EHI_HOTPLUGGED; 679 + ehi->hotplug_timestamp = jiffies; 680 + 681 + ehi->err_mask |= AC_ERR_ATA_BUS; 682 + ehi->action |= ATA_EH_SOFTRESET; 683 + ehi->probe_mask |= (1 << ATA_MAX_DEVICES) - 1; 684 + } 685 + 686 + /* 687 + * qc helpers 688 + */ 786 689 static inline int 787 690 ata_sg_is_last(struct scatterlist *sg, struct ata_queued_cmd *qc) 788 691 { ··· 882 671 return (tag < ATA_MAX_QUEUE) ? 
1 : 0; 883 672 } 884 673 885 - static inline unsigned int ata_class_present(unsigned int class) 674 + static inline unsigned int ata_tag_internal(unsigned int tag) 675 + { 676 + return tag == ATA_MAX_QUEUE - 1; 677 + } 678 + 679 + static inline unsigned int ata_class_enabled(unsigned int class) 886 680 { 887 681 return class == ATA_DEV_ATA || class == ATA_DEV_ATAPI; 888 682 } 889 683 890 - static inline unsigned int ata_dev_present(const struct ata_device *dev) 684 + static inline unsigned int ata_class_disabled(unsigned int class) 891 685 { 892 - return ata_class_present(dev->class); 686 + return class == ATA_DEV_ATA_UNSUP || class == ATA_DEV_ATAPI_UNSUP; 687 + } 688 + 689 + static inline unsigned int ata_class_absent(unsigned int class) 690 + { 691 + return !ata_class_enabled(class) && !ata_class_disabled(class); 692 + } 693 + 694 + static inline unsigned int ata_dev_enabled(const struct ata_device *dev) 695 + { 696 + return ata_class_enabled(dev->class); 697 + } 698 + 699 + static inline unsigned int ata_dev_disabled(const struct ata_device *dev) 700 + { 701 + return ata_class_disabled(dev->class); 702 + } 703 + 704 + static inline unsigned int ata_dev_absent(const struct ata_device *dev) 705 + { 706 + return ata_class_absent(dev->class); 893 707 } 894 708 895 709 static inline u8 ata_chk_status(struct ata_port *ap) ··· 995 759 qc->tf.ctl |= ATA_NIEN; 996 760 } 997 761 998 - static inline struct ata_queued_cmd *ata_qc_from_tag (struct ata_port *ap, 999 - unsigned int tag) 762 + static inline struct ata_queued_cmd *__ata_qc_from_tag(struct ata_port *ap, 763 + unsigned int tag) 1000 764 { 1001 765 if (likely(ata_tag_valid(tag))) 1002 766 return &ap->qcmd[tag]; 1003 767 return NULL; 1004 768 } 1005 769 1006 - static inline void ata_tf_init(struct ata_port *ap, struct ata_taskfile *tf, unsigned int device) 770 + static inline struct ata_queued_cmd *ata_qc_from_tag(struct ata_port *ap, 771 + unsigned int tag) 772 + { 773 + struct ata_queued_cmd *qc = 
__ata_qc_from_tag(ap, tag); 774 + 775 + if (unlikely(!qc) || !ap->ops->error_handler) 776 + return qc; 777 + 778 + if ((qc->flags & (ATA_QCFLAG_ACTIVE | 779 + ATA_QCFLAG_FAILED)) == ATA_QCFLAG_ACTIVE) 780 + return qc; 781 + 782 + return NULL; 783 + } 784 + 785 + static inline void ata_tf_init(struct ata_device *dev, struct ata_taskfile *tf) 1007 786 { 1008 787 memset(tf, 0, sizeof(*tf)); 1009 788 1010 - tf->ctl = ap->ctl; 1011 - if (device == 0) 789 + tf->ctl = dev->ap->ctl; 790 + if (dev->devno == 0) 1012 791 tf->device = ATA_DEVICE_OBS; 1013 792 else 1014 793 tf->device = ATA_DEVICE_OBS | ATA_DEV1; ··· 1038 787 qc->nbytes = qc->curbytes = 0; 1039 788 qc->err_mask = 0; 1040 789 1041 - ata_tf_init(qc->ap, &qc->tf, qc->dev->devno); 1042 - } 790 + ata_tf_init(qc->dev, &qc->tf); 1043 791 1044 - /** 1045 - * ata_qc_complete - Complete an active ATA command 1046 - * @qc: Command to complete 1047 - * @err_mask: ATA Status register contents 1048 - * 1049 - * Indicate to the mid and upper layers that an ATA 1050 - * command has completed, with either an ok or not-ok status. 
1051 - * 1052 - * LOCKING: 1053 - * spin_lock_irqsave(host_set lock) 1054 - */ 1055 - static inline void ata_qc_complete(struct ata_queued_cmd *qc) 1056 - { 1057 - if (unlikely(qc->flags & ATA_QCFLAG_EH_SCHEDULED)) 1058 - return; 1059 - 1060 - __ata_qc_complete(qc); 792 + /* init result_tf such that it indicates normal completion */ 793 + qc->result_tf.command = ATA_DRDY; 794 + qc->result_tf.feature = 0; 1061 795 } 1062 796 1063 797 /** ··· 1121 885 return status; 1122 886 } 1123 887 1124 - static inline u32 scr_read(struct ata_port *ap, unsigned int reg) 1125 - { 1126 - return ap->ops->scr_read(ap, reg); 1127 - } 1128 - 1129 - static inline void scr_write(struct ata_port *ap, unsigned int reg, u32 val) 1130 - { 1131 - ap->ops->scr_write(ap, reg, val); 1132 - } 1133 - 1134 - static inline void scr_write_flush(struct ata_port *ap, unsigned int reg, 1135 - u32 val) 1136 - { 1137 - ap->ops->scr_write(ap, reg, val); 1138 - (void) ap->ops->scr_read(ap, reg); 1139 - } 1140 - 1141 - static inline unsigned int sata_dev_present(struct ata_port *ap) 1142 - { 1143 - return ((scr_read(ap, SCR_STATUS) & 0xf) == 0x3) ? 1 : 0; 1144 - } 1145 - 1146 888 static inline int ata_try_flush_cache(const struct ata_device *dev) 1147 889 { 1148 890 return ata_id_wcache_enabled(dev->id) || ··· 1130 916 1131 917 static inline unsigned int ac_err_mask(u8 status) 1132 918 { 1133 - if (status & ATA_BUSY) 919 + if (status & (ATA_BUSY | ATA_DRQ)) 1134 920 return AC_ERR_HSM; 1135 921 if (status & (ATA_ERR | ATA_DF)) 1136 922 return AC_ERR_DEV; ··· 1156 942 static inline void ata_pad_free(struct ata_port *ap, struct device *dev) 1157 943 { 1158 944 dma_free_coherent(dev, ATA_DMA_PAD_BUF_SZ, ap->pad, ap->pad_dma); 945 + } 946 + 947 + static inline struct ata_port *ata_shost_to_port(struct Scsi_Host *host) 948 + { 949 + return (struct ata_port *) &host->hostdata[0]; 1159 950 } 1160 951 1161 952 #endif /* __LINUX_LIBATA_H__ */
+8 -1
include/linux/pci_ids.h
··· 1196 1196 #define PCI_DEVICE_ID_NVIDIA_NVENET_15 0x0373 1197 1197 #define PCI_DEVICE_ID_NVIDIA_NVENET_16 0x03E5 1198 1198 #define PCI_DEVICE_ID_NVIDIA_NVENET_17 0x03E6 1199 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA 0x03E7 1200 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_IDE 0x03EC 1199 1201 #define PCI_DEVICE_ID_NVIDIA_NVENET_18 0x03EE 1200 1202 #define PCI_DEVICE_ID_NVIDIA_NVENET_19 0x03EF 1203 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA2 0x03F6 1204 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP61_SATA3 0x03F7 1201 1205 #define PCI_DEVICE_ID_NVIDIA_NVENET_20 0x0450 1202 1206 #define PCI_DEVICE_ID_NVIDIA_NVENET_21 0x0451 1203 1207 #define PCI_DEVICE_ID_NVIDIA_NVENET_22 0x0452 ··· 1259 1255 #define PCI_DEVICE_ID_VIA_PX8X0_0 0x0259 1260 1256 #define PCI_DEVICE_ID_VIA_3269_0 0x0269 1261 1257 #define PCI_DEVICE_ID_VIA_K8T800PRO_0 0x0282 1258 + #define PCI_DEVICE_ID_VIA_3296_0 0x0296 1262 1259 #define PCI_DEVICE_ID_VIA_8363_0 0x0305 1263 1260 #define PCI_DEVICE_ID_VIA_P4M800CE 0x0314 1264 1261 #define PCI_DEVICE_ID_VIA_8371_0 0x0391 ··· 1267 1262 #define PCI_DEVICE_ID_VIA_82C561 0x0561 1268 1263 #define PCI_DEVICE_ID_VIA_82C586_1 0x0571 1269 1264 #define PCI_DEVICE_ID_VIA_82C576 0x0576 1265 + #define PCI_DEVICE_ID_VIA_SATA_EIDE 0x0581 1270 1266 #define PCI_DEVICE_ID_VIA_82C586_0 0x0586 1271 1267 #define PCI_DEVICE_ID_VIA_82C596 0x0596 1272 1268 #define PCI_DEVICE_ID_VIA_82C597_0 0x0597 ··· 1308 1302 #define PCI_DEVICE_ID_VIA_8783_0 0x3208 1309 1303 #define PCI_DEVICE_ID_VIA_8237 0x3227 1310 1304 #define PCI_DEVICE_ID_VIA_8251 0x3287 1311 - #define PCI_DEVICE_ID_VIA_3296_0 0x0296 1305 + #define PCI_DEVICE_ID_VIA_8237A 0x3337 1312 1306 #define PCI_DEVICE_ID_VIA_8231 0x8231 1313 1307 #define PCI_DEVICE_ID_VIA_8231_4 0x8235 1314 1308 #define PCI_DEVICE_ID_VIA_8365_1 0x8305 1309 + #define PCI_DEVICE_ID_VIA_CX700 0x8324 1315 1310 #define PCI_DEVICE_ID_VIA_8371_1 0x8391 1316 1311 #define PCI_DEVICE_ID_VIA_82C598_1 0x8598 1317 1312 #define PCI_DEVICE_ID_VIA_838X_1 
0xB188
+1
include/scsi/scsi_cmnd.h
··· 145 145 extern void scsi_put_command(struct scsi_cmnd *); 146 146 extern void scsi_io_completion(struct scsi_cmnd *, unsigned int, unsigned int); 147 147 extern void scsi_finish_command(struct scsi_cmnd *cmd); 148 + extern void scsi_req_abort_cmd(struct scsi_cmnd *cmd); 148 149 149 150 extern void *scsi_kmap_atomic_sg(struct scatterlist *sg, int sg_count, 150 151 size_t *offset, size_t *len);
+1
include/scsi/scsi_host.h
··· 472 472 */ 473 473 unsigned int host_busy; /* commands actually active on low-level */ 474 474 unsigned int host_failed; /* commands that failed. */ 475 + unsigned int host_eh_scheduled; /* EH scheduled without command */ 475 476 476 477 unsigned short host_no; /* Used for IOCTL_GET_IDLUN, /proc/scsi et al. */ 477 478 int resetting; /* if set, it means that last_reset is a valid value */