Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394-2.6: (55 commits)
ieee1394: sbp2: code formatting around work_struct stuff
ieee1394: nodemgr: remove a kcalloc
ieee1394: conditionally export ieee1394_bus_type
ieee1394: Consolidate driver registering
ieee1394: sbp2: convert from PCI DMA to generic DMA
ieee1394: nodemgr: spaces to tabs
ieee1394: nodemgr: fix deadlock in shutdown
ieee1394: nodemgr: remove duplicate assignment
sbp2: make 1bit bitfield unsigned
ieee1394: schedule *_oui sysfs attributes for removal
ieee1394: schedule unused symbol exports for removal
ieee1394: dv1394: schedule for feature removal
ieee1394: raw1394: defer feature removal of old isoch interface
ieee1394: ohci1394: call PMac code in shutdown only for proper machines
ieee1394: ohci1394: reformat PPC_PMAC platform code
ieee1394: ohci1394: add PPC_PMAC platform code to driver probe
ieee1394: sbp2: wrap two functions into one
ieee1394: sbp2: update comment on things to do
ieee1394: sbp2: use list_move_tail()
ieee1394: sbp2: more concise names for types and variables
...

+1448 -1914
+33 -5
Documentation/feature-removal-schedule.txt
···
 ---------------------------
 
 What:   raw1394: requests of type RAW1394_REQ_ISO_SEND, RAW1394_REQ_ISO_LISTEN
- When:   November 2006
- Why:    Deprecated in favour of the new ioctl-based rawiso interface, which is
-         more efficient. You should really be using libraw1394 for raw1394
-         access anyway.
- Who:    Jody McIntyre <scjody@modernduck.com>
+ When:   June 2007
+ Why:    Deprecated in favour of the more efficient and robust rawiso interface.
+         Affected are applications which use the deprecated part of libraw1394
+         (raw1394_iso_write, raw1394_start_iso_write, raw1394_start_iso_rcv,
+         raw1394_stop_iso_rcv) or bypass libraw1394.
+ Who:    Dan Dennedy <dan@dennedy.org>, Stefan Richter <stefanr@s5r6.in-berlin.de>
+
+ ---------------------------
+
+ What:   dv1394 driver (CONFIG_IEEE1394_DV1394)
+ When:   June 2007
+ Why:    Replaced by raw1394 + userspace libraries, notably libiec61883. This
+         shift of application support has been indicated on www.linux1394.org
+         and developers' mailinglists for quite some time. Major applications
+         have been converted, with the exception of ffmpeg and hence xine.
+         Piped output of dvgrab2 is a partial equivalent to dv1394.
+ Who:    Dan Dennedy <dan@dennedy.org>, Stefan Richter <stefanr@s5r6.in-berlin.de>
+
+ ---------------------------
+
+ What:   ieee1394 core's unused exports (CONFIG_IEEE1394_EXPORT_FULL_API)
+ When:   January 2007
+ Why:    There are no projects known to use these exported symbols, except
+         dfg1394 (uses one symbol whose functionality is core-internal now).
+ Who:    Stefan Richter <stefanr@s5r6.in-berlin.de>
+
+ ---------------------------
+
+ What:   ieee1394's *_oui sysfs attributes (CONFIG_IEEE1394_OUI_DB)
+ When:   January 2007
+ Files:  drivers/ieee1394/: oui.db, oui2c.sh
+ Why:    big size, little value
+ Who:    Stefan Richter <stefanr@s5r6.in-berlin.de>
 
 ---------------------------
 
+8 -18
drivers/ieee1394/Kconfig
···
           else says N.
 
 config IEEE1394_OUI_DB
-       bool "OUI Database built-in"
+       bool "OUI Database built-in (deprecated)"
        depends on IEEE1394
        help
          If you say Y here, then an OUI list (vendor unique ID's) will be
···
          eth1394 option below.
 
 config IEEE1394_EXPORT_FULL_API
-       bool "Export all symbols of ieee1394's API"
+       bool "Export all symbols of ieee1394's API (deprecated)"
        depends on IEEE1394
        default n
        help
-         Export all symbols of ieee1394's driver programming interface, even
-         those that are not currently used by the standard IEEE 1394 drivers.
-
-         This option does not affect the interface to userspace applications.
-         Say Y here if you want to compile externally developed drivers that
-         make extended use of ieee1394's API. It is otherwise safe to say N.
+         This option will be removed soon. Don't worry, say N.
 
 comment "Device Drivers"
        depends on IEEE1394
···
 
 config IEEE1394_SBP2
        tristate "SBP-2 support (Harddisks etc.)"
-       depends on IEEE1394 && SCSI && (PCI || BROKEN)
+       depends on IEEE1394 && SCSI
        help
          This option enables you to use SBP-2 devices connected to an IEEE
          1394 bus. SBP-2 devices include storage devices like harddisks and
···
          MCAP, therefore multicast support is significantly limited.
 
 config IEEE1394_DV1394
-       tristate "OHCI-DV I/O support"
+       tristate "OHCI-DV I/O support (deprecated)"
        depends on IEEE1394 && IEEE1394_OHCI1394
        help
-         This driver allows you to transmit and receive DV (digital video)
-         streams on an OHCI-1394 card using a simple frame-oriented
-         interface.
-
-         The user-space API for dv1394 is documented in dv1394.h.
-
-         To compile this driver as a module, say M here: the
-         module will be called dv1394.
+         The dv1394 driver will be removed from Linux in a future release.
+         Its functionality is now provided by raw1394 together with libraries
+         such as libiec61883.
 
 config IEEE1394_RAWIO
        tristate "Raw IEEE1394 I/O support"
+4 -1
drivers/ieee1394/Makefile
···
 #
 
 ieee1394-objs := ieee1394_core.o ieee1394_transactions.o hosts.o \
-                highlevel.o csr.o nodemgr.o oui.o dma.o iso.o \
+                highlevel.o csr.o nodemgr.o dma.o iso.o \
                 csr1212.o config_roms.o
+ifdef CONFIG_IEEE1394_OUI_DB
+ieee1394-objs += oui.o
+endif
 
 obj-$(CONFIG_IEEE1394) += ieee1394.o
 obj-$(CONFIG_IEEE1394_PCILYNX) += pcilynx.o
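The same effect can also be had without the ifdef block via kbuild's list-substitution idiom, in which a `config`-conditional suffix expands to `y` or nothing. A sketch, assuming CONFIG_IEEE1394_OUI_DB stays a bool (both the `-objs` and `-y` lists contribute to a composite object in kbuild, though the two styles are usually not mixed):

```makefile
# CONFIG_IEEE1394_OUI_DB expands to "y" when the option is set and to the
# empty string otherwise, so oui.o joins the composite object only when
# the OUI database is configured in.
ieee1394-objs := ieee1394_core.o ieee1394_transactions.o hosts.o \
                 highlevel.o csr.o nodemgr.o dma.o iso.o \
                 csr1212.o config_roms.o
ieee1394-$(CONFIG_IEEE1394_OUI_DB) += oui.o
```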
+3 -5
drivers/ieee1394/csr.c
···
  */
 static inline void calculate_expire(struct csr_control *csr)
 {
-       unsigned long usecs =
-               (csr->split_timeout_hi & 0x07) * USEC_PER_SEC +
-               (csr->split_timeout_lo >> 19) * 125L;
-
-       csr->expire = usecs_to_jiffies(usecs > 100000L ? usecs : 100000L);
-
+       unsigned int usecs = (csr->split_timeout_hi & 7) * 1000000 +
+                            (csr->split_timeout_lo >> 19) * 125;
+
+       csr->expire = usecs_to_jiffies(usecs > 100000 ? usecs : 100000);
        HPSB_VERBOSE("CSR: setting expire to %lu, HZ=%u", csr->expire, HZ);
 }
 
+8 -16
drivers/ieee1394/dv1394.c
···
 
 static long dv1394_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 {
-       struct video_card *video;
+       struct video_card *video = file_to_video_card(file);
        unsigned long flags;
        int ret = -EINVAL;
        void __user *argp = (void __user *)arg;
 
        DECLARE_WAITQUEUE(wait, current);
 
-       lock_kernel();
-       video = file_to_video_card(file);
-
        /* serialize this to prevent multi-threaded mayhem */
        if (file->f_flags & O_NONBLOCK) {
-               if (!mutex_trylock(&video->mtx)) {
-                       unlock_kernel();
+               if (!mutex_trylock(&video->mtx))
                        return -EAGAIN;
-               }
        } else {
-               if (mutex_lock_interruptible(&video->mtx)) {
-                       unlock_kernel();
+               if (mutex_lock_interruptible(&video->mtx))
                        return -ERESTARTSYS;
-               }
        }
 
        switch(cmd)
···
 
  out:
        mutex_unlock(&video->mtx);
-       unlock_kernel();
        return ret;
 }
 
···
 MODULE_DEVICE_TABLE(ieee1394, dv1394_id_table);
 
 static struct hpsb_protocol_driver dv1394_driver = {
-       .name           = "DV/1394 Driver",
+       .name           = "dv1394",
        .id_table       = dv1394_id_table,
-       .driver         = {
-               .name   = "dv1394",
-               .bus    = &ieee1394_bus_type,
-       },
 };
 
···
 static int __init dv1394_init_module(void)
 {
        int ret;
+
+       printk(KERN_WARNING
+              "WARNING: The dv1394 driver is unsupported and will be removed "
+              "from Linux soon. Use raw1394 instead.\n");
 
        cdev_init(&dv1394_cdev, &dv1394_fops);
        dv1394_cdev.owner = THIS_MODULE;
+1 -3
drivers/ieee1394/eth1394.c
···
 MODULE_DEVICE_TABLE(ieee1394, eth1394_id_table);
 
 static struct hpsb_protocol_driver eth1394_proto_driver = {
-       .name           = "IPv4 over 1394 Driver",
+       .name           = ETH1394_DRIVER_NAME,
        .id_table       = eth1394_id_table,
        .update         = eth1394_update,
        .driver         = {
-               .name   = ETH1394_DRIVER_NAME,
-               .bus    = &ieee1394_bus_type,
                .probe  = eth1394_probe,
                .remove = eth1394_remove,
        },
-1
drivers/ieee1394/highlevel.h
···
 /* Only the following structures are of interest to actual highlevel drivers. */
 
 struct hpsb_highlevel {
-       struct module *owner;
        const char *name;
 
        /* Any of the following pointers can legally be NULL, except for
+24 -17
drivers/ieee1394/hosts.c
···
 
        CSR_SET_BUS_INFO_GENERATION(host->csr.rom, generation);
        if (csr1212_generate_csr_image(host->csr.rom) != CSR1212_SUCCESS) {
-               /* CSR image creation failed, reset generation field and do not
-                * issue a bus reset. */
-               CSR_SET_BUS_INFO_GENERATION(host->csr.rom, host->csr.generation);
+               /* CSR image creation failed.
+                * Reset generation field and do not issue a bus reset. */
+               CSR_SET_BUS_INFO_GENERATION(host->csr.rom,
+                                           host->csr.generation);
                return;
        }
 
···
 
        host->update_config_rom = 0;
        if (host->driver->set_hw_config_rom)
-               host->driver->set_hw_config_rom(host, host->csr.rom->bus_info_data);
+               host->driver->set_hw_config_rom(host,
+                                               host->csr.rom->bus_info_data);
 
        host->csr.gen_timestamp[host->csr.generation] = jiffies;
        hpsb_reset_bus(host, SHORT_RESET);
···
        return -1;
 }
 
-static int dummy_isoctl(struct hpsb_iso *iso, enum isoctl_cmd command, unsigned long arg)
+static int dummy_isoctl(struct hpsb_iso *iso, enum isoctl_cmd command,
+                       unsigned long arg)
 {
        return -1;
 }
···
                return NULL;
 
        h->csr.rom = csr1212_create_csr(&csr_bus_ops, CSR_BUS_INFO_SIZE, h);
-       if (!h->csr.rom) {
-               kfree(h);
-               return NULL;
-       }
+       if (!h->csr.rom)
+               goto fail;
 
        h->hostdata = h + 1;
        h->driver = drv;
···
        init_timer(&h->timeout);
        h->timeout.data = (unsigned long) h;
        h->timeout.function = abort_timedouts;
-       h->timeout_interval = HZ / 20; // 50ms by default
+       h->timeout_interval = HZ / 20; /* 50ms, half of minimum SPLIT_TIMEOUT */
 
        h->topology_map = h->csr.topology_map + 3;
        h->speed_map = (u8 *)(h->csr.speed_map + 2);
 
        mutex_lock(&host_num_alloc);
-
        while (nodemgr_for_each_host(&hostnum, alloc_hostnum_cb))
                hostnum++;
-
+       mutex_unlock(&host_num_alloc);
        h->id = hostnum;
 
        memcpy(&h->device, &nodemgr_dev_template_host, sizeof(h->device));
···
        h->class_dev.class = &hpsb_host_class;
        snprintf(h->class_dev.class_id, BUS_ID_SIZE, "fw-host%d", h->id);
 
-       device_register(&h->device);
-       class_device_register(&h->class_dev);
+       if (device_register(&h->device))
+               goto fail;
+       if (class_device_register(&h->class_dev)) {
+               device_unregister(&h->device);
+               goto fail;
+       }
        get_device(&h->device);
 
-       mutex_unlock(&host_num_alloc);
-
        return h;
+
+fail:
+       kfree(h);
+       return NULL;
 }
 
 int hpsb_add_host(struct hpsb_host *host)
···
        if (time_before(jiffies, host->csr.gen_timestamp[next_gen] + 60 * HZ))
                /* Wait 60 seconds from the last time this generation number was
                 * used. */
-               reset_delay = (60 * HZ) + host->csr.gen_timestamp[next_gen] - jiffies;
+               reset_delay =
+                       (60 * HZ) + host->csr.gen_timestamp[next_gen] - jiffies;
        else
                /* Wait 1 second in case some other code wants to change the
                 * Config ROM in the near future. */
+2 -2
drivers/ieee1394/ieee1394_core.c
···
 /** nodemgr.c **/
 EXPORT_SYMBOL(hpsb_node_fill_packet);
 EXPORT_SYMBOL(hpsb_node_write);
-EXPORT_SYMBOL(hpsb_register_protocol);
+EXPORT_SYMBOL(__hpsb_register_protocol);
 EXPORT_SYMBOL(hpsb_unregister_protocol);
-EXPORT_SYMBOL(ieee1394_bus_type);
 #ifdef CONFIG_IEEE1394_EXPORT_FULL_API
+EXPORT_SYMBOL(ieee1394_bus_type);
 EXPORT_SYMBOL(nodemgr_for_each_host);
 #endif
 
+279 -184
drivers/ieee1394/nodemgr.c
···
 #include <linux/slab.h>
 #include <linux/delay.h>
 #include <linux/kthread.h>
 #include <linux/moduleparam.h>
 #include <linux/freezer.h>
 #include <asm/atomic.h>
···
 {
        quadlet_t q;
        u8 i, *speed, old_speed, good_speed;
-       int ret;
 
        speed = &(ci->host->speed[NODEID_TO_NODE(ci->nodeid)]);
        old_speed = *speed;
···
         * just finished its initialization. */
        for (i = IEEE1394_SPEED_100; i <= old_speed; i++) {
                *speed = i;
-               ret = hpsb_read(ci->host, ci->nodeid, ci->generation, addr,
-                               &q, sizeof(quadlet_t));
-               if (ret)
                        break;
                *buffer = q;
                good_speed = i;
···
                return 0;
        }
        *speed = old_speed;
-       return ret;
 }
 
 static int nodemgr_bus_read(struct csr1212_csr *csr, u64 addr, u16 length,
-                           void *buffer, void *__ci)
 {
        struct nodemgr_csr_info *ci = (struct nodemgr_csr_info*)__ci;
-       int i, ret;
 
        for (i = 1; ; i++) {
-               ret = hpsb_read(ci->host, ci->nodeid, ci->generation, addr,
-                               buffer, length);
-               if (!ret) {
                        ci->speed_unverified = 0;
                        break;
                }
···
                /* The ieee1394_core guessed the node's speed capability from
                 * the self ID. Check whether a lower speed works. */
                if (ci->speed_unverified && length == sizeof(quadlet_t)) {
-                       ret = nodemgr_check_speed(ci, addr, buffer);
-                       if (!ret)
                                break;
                }
                if (msleep_interruptible(334))
                        return -EINTR;
        }
-       return ret;
 }
 
 static int nodemgr_get_max_rom(quadlet_t *bus_info_data, void *__ci)
···
        .release = nodemgr_release_ne,
 };
 
 struct device nodemgr_dev_template_host = {
        .bus = &ieee1394_bus_type,
        .release = nodemgr_release_host,
 };
 
···
        return sprintf(buf, format_string, (type)driver->field);\
 } \
 static struct driver_attribute driver_attr_drv_##field = { \
-       .attr = {.name = __stringify(field), .mode = S_IRUGO }, \
-       .show = fw_drv_show_##field, \
 };
 
···
 #endif
        spin_unlock_irqrestore(&hpsb_tlabel_lock, flags);
 
-       return sprintf(buf, "0x%016llx\n", tm);
 }
 static DEVICE_ATTR(tlabels_mask, S_IRUGO, fw_show_ne_tlabels_mask, NULL);
 #endif /* HPSB_DEBUG_TLABELS */
···
        int state = simple_strtoul(buf, NULL, 10);
 
        if (state == 1) {
-               down_write(&dev->bus->subsys.rwsem);
-               device_release_driver(dev);
                ud->ignore_driver = 1;
-               up_write(&dev->bus->subsys.rwsem);
-       } else if (!state)
                ud->ignore_driver = 0;
 
        return count;
···
 static BUS_ATTR(destroy_node, S_IWUSR | S_IRUGO, fw_get_destroy_node, fw_set_destroy_node);
 
 
-static ssize_t fw_set_rescan(struct bus_type *bus, const char *buf, size_t count)
 {
        if (simple_strtoul(buf, NULL, 10) == 1)
-               bus_rescan_devices(&ieee1394_bus_type);
-       return count;
 }
 static ssize_t fw_get_rescan(struct bus_type *bus, char *buf)
 {
···
 
        if (state == 1)
                ignore_drivers = 1;
-       else if (!state)
                ignore_drivers = 0;
 
        return count;
···
        int length = 0;
        char *scratch = buf;
 
-       driver = container_of(drv, struct hpsb_protocol_driver, driver);
 
        for (id = driver->id_table; id->match_flags != 0; id++) {
                int need_coma = 0;
···
        int i;
 
        for (i = 0; i < ARRAY_SIZE(fw_drv_attrs); i++)
-               driver_create_file(drv, fw_drv_attrs[i]);
 }
 
···
        int i;
 
        for (i = 0; i < ARRAY_SIZE(fw_ne_attrs); i++)
-               device_create_file(dev, fw_ne_attrs[i]);
 }
 
···
        int i;
 
        for (i = 0; i < ARRAY_SIZE(fw_host_attrs); i++)
-               device_create_file(dev, fw_host_attrs[i]);
 }
 
 
-static struct node_entry *find_entry_by_nodeid(struct hpsb_host *host, nodeid_t nodeid);
 
 static void nodemgr_update_host_dev_links(struct hpsb_host *host)
 {
···
        sysfs_remove_link(&dev->kobj, "busmgr_id");
        sysfs_remove_link(&dev->kobj, "host_id");
 
-       if ((ne = find_entry_by_nodeid(host, host->irm_id)))
-               sysfs_create_link(&dev->kobj, &ne->device.kobj, "irm_id");
-       if ((ne = find_entry_by_nodeid(host, host->busmgr_id)))
-               sysfs_create_link(&dev->kobj, &ne->device.kobj, "busmgr_id");
-       if ((ne = find_entry_by_nodeid(host, host->node_id)))
-               sysfs_create_link(&dev->kobj, &ne->device.kobj, "host_id");
 }
 
 static void nodemgr_create_ud_dev_files(struct unit_directory *ud)
···
        int i;
 
        for (i = 0; i < ARRAY_SIZE(fw_ud_attrs); i++)
-               device_create_file(dev, fw_ud_attrs[i]);
-
        if (ud->flags & UNIT_DIRECTORY_SPECIFIER_ID)
-               device_create_file(dev, &dev_attr_ud_specifier_id);
-
        if (ud->flags & UNIT_DIRECTORY_VERSION)
-               device_create_file(dev, &dev_attr_ud_version);
-
        if (ud->flags & UNIT_DIRECTORY_VENDOR_ID) {
-               device_create_file(dev, &dev_attr_ud_vendor_id);
-               if (ud->vendor_name_kv)
-                       device_create_file(dev, &dev_attr_ud_vendor_name_kv);
        }
-
        if (ud->flags & UNIT_DIRECTORY_MODEL_ID) {
-               device_create_file(dev, &dev_attr_ud_model_id);
-               if (ud->model_name_kv)
-                       device_create_file(dev, &dev_attr_ud_model_name_kv);
        }
 }
 
 
 static int nodemgr_bus_match(struct device * dev, struct device_driver * drv)
 {
-       struct hpsb_protocol_driver *driver;
-       struct unit_directory *ud;
        struct ieee1394_device_id *id;
 
        /* We only match unit directories */
···
                return 0;
 
        ud = container_of(dev, struct unit_directory, device);
-       driver = container_of(drv, struct hpsb_protocol_driver, driver);
-
        if (ud->ne->in_limbo || ud->ignore_driver)
                return 0;
 
-       for (id = driver->id_table; id->match_flags != 0; id++) {
-               if ((id->match_flags & IEEE1394_MATCH_VENDOR_ID) &&
-                   id->vendor_id != ud->vendor_id)
-                       continue;
 
-               if ((id->match_flags & IEEE1394_MATCH_MODEL_ID) &&
-                   id->model_id != ud->model_id)
-                       continue;
 
-               if ((id->match_flags & IEEE1394_MATCH_SPECIFIER_ID) &&
-                   id->specifier_id != ud->specifier_id)
-                       continue;
 
-               if ((id->match_flags & IEEE1394_MATCH_VERSION) &&
-                   id->version != ud->version)
-                       continue;
 
                        return 1;
-       }
 
        return 0;
 }
 
 
 static void nodemgr_remove_uds(struct node_entry *ne)
 {
-       struct class_device *cdev, *next;
-       struct unit_directory *ud;
 
-       list_for_each_entry_safe(cdev, next, &nodemgr_ud_class.children, node) {
-               ud = container_of(cdev, struct unit_directory, class_dev);
-
-               if (ud->ne != ne)
-                       continue;
-
                class_device_unregister(&ud->class_dev);
                device_unregister(&ud->device);
        }
 }
 
 
 static void nodemgr_remove_ne(struct node_entry *ne)
 {
-       struct device *dev = &ne->device;
 
        dev = get_device(&ne->device);
        if (!dev)
···
 
 static void nodemgr_remove_host_dev(struct device *dev)
 {
-       device_for_each_child(dev, NULL, __nodemgr_remove_host_dev);
        sysfs_remove_link(&dev->kobj, "irm_id");
        sysfs_remove_link(&dev->kobj, "busmgr_id");
        sysfs_remove_link(&dev->kobj, "host_id");
···
 #endif
        quadlet_t busoptions = be32_to_cpu(ne->csr->bus_info_data[2]);
 
-       ne->busopt.irmc         = (busoptions >> 31) & 1;
-       ne->busopt.cmc          = (busoptions >> 30) & 1;
-       ne->busopt.isc          = (busoptions >> 29) & 1;
-       ne->busopt.bmc          = (busoptions >> 28) & 1;
-       ne->busopt.pmc          = (busoptions >> 27) & 1;
-       ne->busopt.cyc_clk_acc  = (busoptions >> 16) & 0xff;
-       ne->busopt.max_rec      = 1 << (((busoptions >> 12) & 0xf) + 1);
        ne->busopt.max_rom      = (busoptions >> 8) & 0x3;
-       ne->busopt.generation   = (busoptions >> 4) & 0xf;
-       ne->busopt.lnkspd       = busoptions & 0x7;
 
        HPSB_VERBOSE("NodeMgr: raw=0x%08x irmc=%d cmc=%d isc=%d bmc=%d pmc=%d "
                     "cyc_clk_acc=%d max_rec=%d max_rom=%d gen=%d lspd=%d",
···
 
        ne = kzalloc(sizeof(*ne), GFP_KERNEL);
        if (!ne)
-               return NULL;
 
        ne->host = host;
        ne->nodeid = nodeid;
···
        snprintf(ne->class_dev.class_id, BUS_ID_SIZE, "%016Lx",
                 (unsigned long long)(ne->guid));
 
-       device_register(&ne->device);
-       class_device_register(&ne->class_dev);
        get_device(&ne->device);
 
-       if (ne->guid_vendor_oui)
-               device_create_file(&ne->device, &dev_attr_ne_guid_vendor_oui);
        nodemgr_create_ne_dev_files(ne);
 
        nodemgr_update_bus_options(ne);
···
                  NODE_BUS_ARGS(host, nodeid), (unsigned long long)guid);
 
        return ne;
 }
 
 
 static struct node_entry *find_entry_by_guid(u64 guid)
 {
-       struct class *class = &nodemgr_ne_class;
        struct class_device *cdev;
        struct node_entry *ne, *ret_ne = NULL;
 
-       down_read(&class->subsys.rwsem);
-       list_for_each_entry(cdev, &class->children, node) {
                ne = container_of(cdev, struct node_entry, class_dev);
 
                if (ne->guid == guid) {
···
                        break;
                }
        }
-       up_read(&class->subsys.rwsem);
 
-       return ret_ne;
 }
 
 
-static struct node_entry *find_entry_by_nodeid(struct hpsb_host *host, nodeid_t nodeid)
 {
-       struct class *class = &nodemgr_ne_class;
        struct class_device *cdev;
        struct node_entry *ne, *ret_ne = NULL;
 
-       down_read(&class->subsys.rwsem);
-       list_for_each_entry(cdev, &class->children, node) {
                ne = container_of(cdev, struct node_entry, class_dev);
 
                if (ne->host == host && ne->nodeid == nodeid) {
···
                        break;
                }
        }
-       up_read(&class->subsys.rwsem);
 
        return ret_ne;
 }
···
        snprintf(ud->class_dev.class_id, BUS_ID_SIZE, "%s-%u",
                 ne->device.bus_id, ud->id);
 
-       device_register(&ud->device);
-       class_device_register(&ud->class_dev);
        get_device(&ud->device);
 
-       if (ud->vendor_oui)
-               device_create_file(&ud->device, &dev_attr_ud_vendor_oui);
        nodemgr_create_ud_dev_files(ud);
 }
 
···
        /* Logical Unit Number */
        if (kv->key.type == CSR1212_KV_TYPE_IMMEDIATE) {
                if (ud->flags & UNIT_DIRECTORY_HAS_LUN) {
-                       ud_child = kmalloc(sizeof(*ud_child), GFP_KERNEL);
                        if (!ud_child)
                                goto unit_directory_error;
-                       memcpy(ud_child, ud, sizeof(*ud_child));
                        nodemgr_register_device(ne, ud_child, &ne->device);
                        ud_child = NULL;
 
···
                last_key_id = kv->key.id;
        }
 
-       if (ne->vendor_oui)
-               device_create_file(&ne->device, &dev_attr_ne_vendor_oui);
-       if (ne->vendor_name_kv)
-               device_create_file(&ne->device, &dev_attr_ne_vendor_name_kv);
 }
 
 #ifdef CONFIG_HOTPLUG
···
 #endif /* CONFIG_HOTPLUG */
 
 
-int hpsb_register_protocol(struct hpsb_protocol_driver *driver)
 {
-       int ret;
 
        /* This will cause a probe for devices */
-       ret = driver_register(&driver->driver);
-       if (!ret)
-               nodemgr_create_drv_files(driver);
-
-       return ret;
 }
 
 void hpsb_unregister_protocol(struct hpsb_protocol_driver *driver)
···
 
 static void nodemgr_node_scan(struct host_info *hi, int generation)
 {
-       int count;
-       struct hpsb_host *host = hi->host;
-       struct selfid *sid = (struct selfid *)host->topology_map;
-       nodeid_t nodeid = LOCAL_BUS;
 
-       /* Scan each node on the bus */
-       for (count = host->selfid_count; count; count--, sid++) {
-               if (sid->extended)
-                       continue;
 
-               if (!sid->link_active) {
-                       nodeid++;
-                       continue;
-               }
-               nodemgr_node_scan_one(hi, nodeid++, generation);
-       }
 }
 
 
-/* Caller needs to hold nodemgr_ud_class.subsys.rwsem as reader. */
 static void nodemgr_suspend_ne(struct node_entry *ne)
 {
        struct class_device *cdev;
···
                   NODE_BUS_ARGS(ne->host, ne->nodeid), (unsigned long long)ne->guid);
 
        ne->in_limbo = 1;
-       device_create_file(&ne->device, &dev_attr_ne_in_limbo);
 
-       down_write(&ne->device.bus->subsys.rwsem);
        list_for_each_entry(cdev, &nodemgr_ud_class.children, node) {
                ud = container_of(cdev, struct unit_directory, class_dev);
-
                if (ud->ne != ne)
                        continue;
 
                if (ud->device.driver &&
                    (!ud->device.driver->suspend ||
                      ud->device.driver->suspend(&ud->device, PMSG_SUSPEND)))
                        device_release_driver(&ud->device);
        }
-       up_write(&ne->device.bus->subsys.rwsem);
 }
 
···
        ne->in_limbo = 0;
        device_remove_file(&ne->device, &dev_attr_ne_in_limbo);
 
-       down_read(&nodemgr_ud_class.subsys.rwsem);
-       down_read(&ne->device.bus->subsys.rwsem);
        list_for_each_entry(cdev, &nodemgr_ud_class.children, node) {
                ud = container_of(cdev, struct unit_directory, class_dev);
-
                if (ud->ne != ne)
                        continue;
 
                if (ud->device.driver && ud->device.driver->resume)
                        ud->device.driver->resume(&ud->device);
        }
-       up_read(&ne->device.bus->subsys.rwsem);
-       up_read(&nodemgr_ud_class.subsys.rwsem);
 
        HPSB_DEBUG("Node resumed: ID:BUS[" NODE_BUS_FMT "] GUID[%016Lx]",
                   NODE_BUS_ARGS(ne->host, ne->nodeid), (unsigned long long)ne->guid);
 }
 
 
-/* Caller needs to hold nodemgr_ud_class.subsys.rwsem as reader. */
 static void nodemgr_update_pdrv(struct node_entry *ne)
 {
        struct unit_directory *ud;
        struct hpsb_protocol_driver *pdrv;
        struct class_device *cdev;
 
        list_for_each_entry(cdev, &nodemgr_ud_class.children, node) {
                ud = container_of(cdev, struct unit_directory, class_dev);
-               if (ud->ne != ne || !ud->device.driver)
                        continue;
 
-               pdrv = container_of(ud->device.driver, struct hpsb_protocol_driver, driver);
-
-               if (pdrv->update && pdrv->update(ud)) {
-                       down_write(&ud->device.bus->subsys.rwsem);
-                       device_release_driver(&ud->device);
-                       up_write(&ud->device.bus->subsys.rwsem);
                }
        }
 }
 
···
 {
        const u64 bc_addr = (CSR_REGISTER_BASE | CSR_BROADCAST_CHANNEL);
        quadlet_t bc_remote, bc_local;
-       int ret;
 
        if (!ne->host->is_irm || ne->generation != generation ||
            ne->nodeid == ne->host->node_id)
···
        bc_local = cpu_to_be32(ne->host->csr.broadcast_channel);
 
        /* Check if the register is implemented and 1394a compliant. */
-       ret = hpsb_read(ne->host, ne->nodeid, generation, bc_addr, &bc_remote,
-                       sizeof(bc_remote));
-       if (!ret && bc_remote & cpu_to_be32(0x80000000) &&
            bc_remote != bc_local)
                hpsb_node_write(ne, bc_addr, &bc_local, sizeof(bc_local));
 }
 
 
-/* Caller needs to hold nodemgr_ud_class.subsys.rwsem as reader because the
- * calls to nodemgr_update_pdrv() and nodemgr_suspend_ne() here require it. */
 static void nodemgr_probe_ne(struct host_info *hi, struct node_entry *ne, int generation)
 {
        struct device *dev;
···
 static void nodemgr_node_probe(struct host_info *hi, int generation)
 {
        struct hpsb_host *host = hi->host;
-       struct class *class = &nodemgr_ne_class;
        struct class_device *cdev;
        struct node_entry *ne;
 
···
         * while probes are time-consuming. (Well, those probes need some
         * improvement...) */
 
-       down_read(&class->subsys.rwsem);
-       list_for_each_entry(cdev, &class->children, node) {
                ne = container_of(cdev, struct node_entry, class_dev);
                if (!ne->needs_probe)
                        nodemgr_probe_ne(hi, ne, generation);
        }
-       list_for_each_entry(cdev, &class->children, node) {
                ne = container_of(cdev, struct node_entry, class_dev);
                if (ne->needs_probe)
                        nodemgr_probe_ne(hi, ne, generation);
        }
-       up_read(&class->subsys.rwsem);
 
 
        /* If we had a bus reset while we were scanning the bus, it is
···
         * just removed. */
 
        if (generation == get_hpsb_generation(host))
-               bus_rescan_devices(&ieee1394_bus_type);
-
-       return;
 }
 
 static int nodemgr_send_resume_packet(struct hpsb_host *host)
 {
        struct hpsb_packet *packet;
-       int ret = 1;
 
        packet = hpsb_make_phypacket(host,
                        EXTPHYPACKET_TYPE_RESUME |
···
        if (packet) {
                packet->no_waiter = 1;
                packet->generation = get_hpsb_generation(host);
-               ret = hpsb_send_packet(packet);
        }
-       if (ret)
                HPSB_WARN("fw-host%d: Failed to broadcast resume packet",
                          host->id);
-       return ret;
 }
 
 /* Perform a few high-level IRM responsibilities. */
···
 
 int nodemgr_for_each_host(void *__data, int (*cb)(struct hpsb_host *, void *))
 {
-       struct class *class = &hpsb_host_class;
        struct class_device *cdev;
        struct hpsb_host *host;
        int error = 0;
 
-       down_read(&class->subsys.rwsem);
-       list_for_each_entry(cdev, &class->children, node) {
                host = container_of(cdev, struct hpsb_host, class_dev);
 
                if ((error = cb(host, __data)))
                        break;
        }
-       up_read(&class->subsys.rwsem);
 
        return error;
 }
···
 
 void hpsb_node_fill_packet(struct node_entry *ne, struct hpsb_packet *pkt)
 {
-       pkt->host = ne->host;
-       pkt->generation = ne->generation;
        barrier();
-       pkt->node_id = ne->nodeid;
 }
 
 int hpsb_node_write(struct node_entry *ne, u64 addr,
···
 
 int init_ieee1394_nodemgr(void)
 {
-       int ret;
 
-       ret = class_register(&nodemgr_ne_class);
-       if (ret < 0)
-               return ret;
 
-       ret = class_register(&nodemgr_ud_class);
-       if (ret < 0) {
                class_unregister(&nodemgr_ne_class);
-               return ret;
        }
-
        hpsb_register_highlevel(&nodemgr_highlevel);
-
        return 0;
 }
 
 void cleanup_ieee1394_nodemgr(void)
 {
-       hpsb_unregister_highlevel(&nodemgr_highlevel);
 
        class_unregister(&nodemgr_ud_class);
        class_unregister(&nodemgr_ne_class);
··· 14 #include <linux/slab.h> 15 #include <linux/delay.h> 16 #include <linux/kthread.h> 17 + #include <linux/module.h> 18 #include <linux/moduleparam.h> 19 #include <linux/freezer.h> 20 #include <asm/atomic.h> ··· 67 { 68 quadlet_t q; 69 u8 i, *speed, old_speed, good_speed; 70 + int error; 71 72 speed = &(ci->host->speed[NODEID_TO_NODE(ci->nodeid)]); 73 old_speed = *speed; ··· 79 * just finished its initialization. */ 80 for (i = IEEE1394_SPEED_100; i <= old_speed; i++) { 81 *speed = i; 82 + error = hpsb_read(ci->host, ci->nodeid, ci->generation, addr, 83 + &q, sizeof(quadlet_t)); 84 + if (error) 85 break; 86 *buffer = q; 87 good_speed = i; ··· 95 return 0; 96 } 97 *speed = old_speed; 98 + return error; 99 } 100 101 static int nodemgr_bus_read(struct csr1212_csr *csr, u64 addr, u16 length, 102 + void *buffer, void *__ci) 103 { 104 struct nodemgr_csr_info *ci = (struct nodemgr_csr_info*)__ci; 105 + int i, error; 106 107 for (i = 1; ; i++) { 108 + error = hpsb_read(ci->host, ci->nodeid, ci->generation, addr, 109 + buffer, length); 110 + if (!error) { 111 ci->speed_unverified = 0; 112 break; 113 } ··· 118 /* The ieee1394_core guessed the node's speed capability from 119 * the self ID. Check whether a lower speed works. */ 120 if (ci->speed_unverified && length == sizeof(quadlet_t)) { 121 + error = nodemgr_check_speed(ci, addr, buffer); 122 + if (!error) 123 break; 124 } 125 if (msleep_interruptible(334)) 126 return -EINTR; 127 } 128 + return error; 129 } 130 131 static int nodemgr_get_max_rom(quadlet_t *bus_info_data, void *__ci) ··· 260 .release = nodemgr_release_ne, 261 }; 262 263 + /* This dummy driver prevents the host devices from being scanned. We have no 264 + * useful drivers for them yet, and there would be a deadlock possible if the 265 + * driver core scans the host device while the host's low-level driver (i.e. 266 + * the host's parent device) is being removed. 
*/ 267 + static struct device_driver nodemgr_mid_layer_driver = { 268 + .bus = &ieee1394_bus_type, 269 + .name = "nodemgr", 270 + .owner = THIS_MODULE, 271 + }; 272 + 273 struct device nodemgr_dev_template_host = { 274 .bus = &ieee1394_bus_type, 275 .release = nodemgr_release_host, 276 + .driver = &nodemgr_mid_layer_driver, 277 }; 278 279 ··· 307 return sprintf(buf, format_string, (type)driver->field);\ 308 } \ 309 static struct driver_attribute driver_attr_drv_##field = { \ 310 + .attr = {.name = __stringify(field), .mode = S_IRUGO }, \ 311 + .show = fw_drv_show_##field, \ 312 }; 313 314 ··· 362 #endif 363 spin_unlock_irqrestore(&hpsb_tlabel_lock, flags); 364 365 + return sprintf(buf, "0x%016llx\n", (unsigned long long)tm); 366 } 367 static DEVICE_ATTR(tlabels_mask, S_IRUGO, fw_show_ne_tlabels_mask, NULL); 368 #endif /* HPSB_DEBUG_TLABELS */ ··· 374 int state = simple_strtoul(buf, NULL, 10); 375 376 if (state == 1) { 377 ud->ignore_driver = 1; 378 + down_write(&ieee1394_bus_type.subsys.rwsem); 379 + device_release_driver(dev); 380 + up_write(&ieee1394_bus_type.subsys.rwsem); 381 + } else if (state == 0) 382 ud->ignore_driver = 0; 383 384 return count; ··· 413 static BUS_ATTR(destroy_node, S_IWUSR | S_IRUGO, fw_get_destroy_node, fw_set_destroy_node); 414 415 416 + static ssize_t fw_set_rescan(struct bus_type *bus, const char *buf, 417 + size_t count) 418 { 419 + int error = 0; 420 + 421 if (simple_strtoul(buf, NULL, 10) == 1) 422 + error = bus_rescan_devices(&ieee1394_bus_type); 423 + return error ? 
error : count; 424 } 425 static ssize_t fw_get_rescan(struct bus_type *bus, char *buf) 426 { ··· 433 434 if (state == 1) 435 ignore_drivers = 1; 436 + else if (state == 0) 437 ignore_drivers = 0; 438 439 return count; ··· 526 int length = 0; 527 char *scratch = buf; 528 529 + driver = container_of(drv, struct hpsb_protocol_driver, driver); 530 531 for (id = driver->id_table; id->match_flags != 0; id++) { 532 int need_coma = 0; ··· 583 int i; 584 585 for (i = 0; i < ARRAY_SIZE(fw_drv_attrs); i++) 586 + if (driver_create_file(drv, fw_drv_attrs[i])) 587 + goto fail; 588 + return; 589 + fail: 590 + HPSB_ERR("Failed to add sysfs attribute for driver %s", driver->name); 591 } 592 593 ··· 603 int i; 604 605 for (i = 0; i < ARRAY_SIZE(fw_ne_attrs); i++) 606 + if (device_create_file(dev, fw_ne_attrs[i])) 607 + goto fail; 608 + return; 609 + fail: 610 + HPSB_ERR("Failed to add sysfs attribute for node %016Lx", 611 + (unsigned long long)ne->guid); 612 } 613 614 ··· 613 int i; 614 615 for (i = 0; i < ARRAY_SIZE(fw_host_attrs); i++) 616 + if (device_create_file(dev, fw_host_attrs[i])) 617 + goto fail; 618 + return; 619 + fail: 620 + HPSB_ERR("Failed to add sysfs attribute for host %d", host->id); 621 } 622 623 624 + static struct node_entry *find_entry_by_nodeid(struct hpsb_host *host, 625 + nodeid_t nodeid); 626 627 static void nodemgr_update_host_dev_links(struct hpsb_host *host) 628 { ··· 628 sysfs_remove_link(&dev->kobj, "busmgr_id"); 629 sysfs_remove_link(&dev->kobj, "host_id"); 630 631 + if ((ne = find_entry_by_nodeid(host, host->irm_id)) && 632 + sysfs_create_link(&dev->kobj, &ne->device.kobj, "irm_id")) 633 + goto fail; 634 + if ((ne = find_entry_by_nodeid(host, host->busmgr_id)) && 635 + sysfs_create_link(&dev->kobj, &ne->device.kobj, "busmgr_id")) 636 + goto fail; 637 + if ((ne = find_entry_by_nodeid(host, host->node_id)) && 638 + sysfs_create_link(&dev->kobj, &ne->device.kobj, "host_id")) 639 + goto fail; 640 + return; 641 + fail: 642 + HPSB_ERR("Failed to update 
sysfs attributes for host %d", host->id); 643 } 644 645 static void nodemgr_create_ud_dev_files(struct unit_directory *ud) ··· 642 int i; 643 644 for (i = 0; i < ARRAY_SIZE(fw_ud_attrs); i++) 645 + if (device_create_file(dev, fw_ud_attrs[i])) 646 + goto fail; 647 if (ud->flags & UNIT_DIRECTORY_SPECIFIER_ID) 648 + if (device_create_file(dev, &dev_attr_ud_specifier_id)) 649 + goto fail; 650 if (ud->flags & UNIT_DIRECTORY_VERSION) 651 + if (device_create_file(dev, &dev_attr_ud_version)) 652 + goto fail; 653 if (ud->flags & UNIT_DIRECTORY_VENDOR_ID) { 654 + if (device_create_file(dev, &dev_attr_ud_vendor_id)) 655 + goto fail; 656 + if (ud->vendor_name_kv && 657 + device_create_file(dev, &dev_attr_ud_vendor_name_kv)) 658 + goto fail; 659 } 660 if (ud->flags & UNIT_DIRECTORY_MODEL_ID) { 661 + if (device_create_file(dev, &dev_attr_ud_model_id)) 662 + goto fail; 663 + if (ud->model_name_kv && 664 + device_create_file(dev, &dev_attr_ud_model_name_kv)) 665 + goto fail; 666 } 667 + return; 668 + fail: 669 + HPSB_ERR("Failed to add sysfs attributes for unit %s", 670 + ud->device.bus_id); 671 } 672 673 674 static int nodemgr_bus_match(struct device * dev, struct device_driver * drv) 675 { 676 + struct hpsb_protocol_driver *driver; 677 + struct unit_directory *ud; 678 struct ieee1394_device_id *id; 679 680 /* We only match unit directories */ ··· 675 return 0; 676 677 ud = container_of(dev, struct unit_directory, device); 678 if (ud->ne->in_limbo || ud->ignore_driver) 679 return 0; 680 681 + /* We only match drivers of type hpsb_protocol_driver */ 682 + if (drv == &nodemgr_mid_layer_driver) 683 + return 0; 684 685 + driver = container_of(drv, struct hpsb_protocol_driver, driver); 686 + for (id = driver->id_table; id->match_flags != 0; id++) { 687 + if ((id->match_flags & IEEE1394_MATCH_VENDOR_ID) && 688 + id->vendor_id != ud->vendor_id) 689 + continue; 690 691 + if ((id->match_flags & IEEE1394_MATCH_MODEL_ID) && 692 + id->model_id != ud->model_id) 693 + continue; 694 695 + if 
((id->match_flags & IEEE1394_MATCH_SPECIFIER_ID) && 696 + id->specifier_id != ud->specifier_id) 697 + continue; 698 + 699 + if ((id->match_flags & IEEE1394_MATCH_VERSION) && 700 + id->version != ud->version) 701 + continue; 702 703 return 1; 704 + } 705 706 return 0; 707 } 708 709 710 + static DEFINE_MUTEX(nodemgr_serialize_remove_uds); 711 + 712 static void nodemgr_remove_uds(struct node_entry *ne) 713 { 714 + struct class_device *cdev; 715 + struct unit_directory *tmp, *ud; 716 717 + /* Iteration over nodemgr_ud_class.children has to be protected by 718 + * nodemgr_ud_class.sem, but class_device_unregister() will eventually 719 + * take nodemgr_ud_class.sem too. Therefore pick out one ud at a time, 720 + * release the semaphore, and then unregister the ud. Since this code 721 + * may be called from other contexts besides the knodemgrds, protect the 722 + * gap after release of the semaphore by nodemgr_serialize_remove_uds. 723 + */ 724 + mutex_lock(&nodemgr_serialize_remove_uds); 725 + for (;;) { 726 + ud = NULL; 727 + down(&nodemgr_ud_class.sem); 728 + list_for_each_entry(cdev, &nodemgr_ud_class.children, node) { 729 + tmp = container_of(cdev, struct unit_directory, 730 + class_dev); 731 + if (tmp->ne == ne) { 732 + ud = tmp; 733 + break; 734 + } 735 + } 736 + up(&nodemgr_ud_class.sem); 737 + if (ud == NULL) 738 + break; 739 class_device_unregister(&ud->class_dev); 740 device_unregister(&ud->device); 741 } 742 + mutex_unlock(&nodemgr_serialize_remove_uds); 743 } 744 745 746 static void nodemgr_remove_ne(struct node_entry *ne) 747 { 748 + struct device *dev; 749 750 dev = get_device(&ne->device); 751 if (!dev) ··· 748 749 static void nodemgr_remove_host_dev(struct device *dev) 750 { 751 + WARN_ON(device_for_each_child(dev, NULL, __nodemgr_remove_host_dev)); 752 sysfs_remove_link(&dev->kobj, "irm_id"); 753 sysfs_remove_link(&dev->kobj, "busmgr_id"); 754 sysfs_remove_link(&dev->kobj, "host_id"); ··· 762 #endif 763 quadlet_t busoptions = 
be32_to_cpu(ne->csr->bus_info_data[2]); 764 765 + ne->busopt.irmc = (busoptions >> 31) & 1; 766 + ne->busopt.cmc = (busoptions >> 30) & 1; 767 + ne->busopt.isc = (busoptions >> 29) & 1; 768 + ne->busopt.bmc = (busoptions >> 28) & 1; 769 + ne->busopt.pmc = (busoptions >> 27) & 1; 770 + ne->busopt.cyc_clk_acc = (busoptions >> 16) & 0xff; 771 + ne->busopt.max_rec = 1 << (((busoptions >> 12) & 0xf) + 1); 772 ne->busopt.max_rom = (busoptions >> 8) & 0x3; 773 + ne->busopt.generation = (busoptions >> 4) & 0xf; 774 + ne->busopt.lnkspd = busoptions & 0x7; 775 776 HPSB_VERBOSE("NodeMgr: raw=0x%08x irmc=%d cmc=%d isc=%d bmc=%d pmc=%d " 777 "cyc_clk_acc=%d max_rec=%d max_rom=%d gen=%d lspd=%d", ··· 792 793 ne = kzalloc(sizeof(*ne), GFP_KERNEL); 794 if (!ne) 795 + goto fail_alloc; 796 797 ne->host = host; 798 ne->nodeid = nodeid; ··· 815 snprintf(ne->class_dev.class_id, BUS_ID_SIZE, "%016Lx", 816 (unsigned long long)(ne->guid)); 817 818 + if (device_register(&ne->device)) 819 + goto fail_devreg; 820 + if (class_device_register(&ne->class_dev)) 821 + goto fail_classdevreg; 822 get_device(&ne->device); 823 824 + if (ne->guid_vendor_oui && 825 + device_create_file(&ne->device, &dev_attr_ne_guid_vendor_oui)) 826 + goto fail_addoiu; 827 nodemgr_create_ne_dev_files(ne); 828 829 nodemgr_update_bus_options(ne); ··· 830 NODE_BUS_ARGS(host, nodeid), (unsigned long long)guid); 831 832 return ne; 833 + 834 + fail_addoiu: 835 + put_device(&ne->device); 836 + fail_classdevreg: 837 + device_unregister(&ne->device); 838 + fail_devreg: 839 + kfree(ne); 840 + fail_alloc: 841 + HPSB_ERR("Failed to create node ID:BUS[" NODE_BUS_FMT "] GUID[%016Lx]", 842 + NODE_BUS_ARGS(host, nodeid), (unsigned long long)guid); 843 + 844 + return NULL; 845 } 846 847 848 static struct node_entry *find_entry_by_guid(u64 guid) 849 { 850 struct class_device *cdev; 851 struct node_entry *ne, *ret_ne = NULL; 852 853 + down(&nodemgr_ne_class.sem); 854 + list_for_each_entry(cdev, &nodemgr_ne_class.children, node) { 855 ne 
= container_of(cdev, struct node_entry, class_dev); 856 857 if (ne->guid == guid) { ··· 848 break; 849 } 850 } 851 + up(&nodemgr_ne_class.sem); 852 853 + return ret_ne; 854 } 855 856 857 + static struct node_entry *find_entry_by_nodeid(struct hpsb_host *host, 858 + nodeid_t nodeid) 859 { 860 struct class_device *cdev; 861 struct node_entry *ne, *ret_ne = NULL; 862 863 + down(&nodemgr_ne_class.sem); 864 + list_for_each_entry(cdev, &nodemgr_ne_class.children, node) { 865 ne = container_of(cdev, struct node_entry, class_dev); 866 867 if (ne->host == host && ne->nodeid == nodeid) { ··· 869 break; 870 } 871 } 872 + up(&nodemgr_ne_class.sem); 873 874 return ret_ne; 875 } ··· 891 snprintf(ud->class_dev.class_id, BUS_ID_SIZE, "%s-%u", 892 ne->device.bus_id, ud->id); 893 894 + if (device_register(&ud->device)) 895 + goto fail_devreg; 896 + if (class_device_register(&ud->class_dev)) 897 + goto fail_classdevreg; 898 get_device(&ud->device); 899 900 + if (ud->vendor_oui && 901 + device_create_file(&ud->device, &dev_attr_ud_vendor_oui)) 902 + goto fail_addoui; 903 nodemgr_create_ud_dev_files(ud); 904 + 905 + return; 906 + 907 + fail_addoui: 908 + put_device(&ud->device); 909 + fail_classdevreg: 910 + device_unregister(&ud->device); 911 + fail_devreg: 912 + HPSB_ERR("Failed to create unit %s", ud->device.bus_id); 913 } 914 915 ··· 977 /* Logical Unit Number */ 978 if (kv->key.type == CSR1212_KV_TYPE_IMMEDIATE) { 979 if (ud->flags & UNIT_DIRECTORY_HAS_LUN) { 980 + ud_child = kmemdup(ud, sizeof(*ud_child), GFP_KERNEL); 981 if (!ud_child) 982 goto unit_directory_error; 983 nodemgr_register_device(ne, ud_child, &ne->device); 984 ud_child = NULL; 985 ··· 1094 last_key_id = kv->key.id; 1095 } 1096 1097 + if (ne->vendor_oui && 1098 + device_create_file(&ne->device, &dev_attr_ne_vendor_oui)) 1099 + goto fail; 1100 + if (ne->vendor_name_kv && 1101 + device_create_file(&ne->device, &dev_attr_ne_vendor_name_kv)) 1102 + goto fail; 1103 + return; 1104 + fail: 1105 + HPSB_ERR("Failed to add 
sysfs attribute for node %016Lx", 1106 + (unsigned long long)ne->guid); 1107 } 1108 1109 #ifdef CONFIG_HOTPLUG ··· 1161 #endif /* CONFIG_HOTPLUG */ 1162 1163 1164 + int __hpsb_register_protocol(struct hpsb_protocol_driver *drv, 1165 + struct module *owner) 1166 { 1167 + int error; 1168 + 1169 + drv->driver.bus = &ieee1394_bus_type; 1170 + drv->driver.owner = owner; 1171 + drv->driver.name = drv->name; 1172 1173 /* This will cause a probe for devices */ 1174 + error = driver_register(&drv->driver); 1175 + if (!error) 1176 + nodemgr_create_drv_files(drv); 1177 + return error; 1178 } 1179 1180 void hpsb_unregister_protocol(struct hpsb_protocol_driver *driver) ··· 1298 1299 static void nodemgr_node_scan(struct host_info *hi, int generation) 1300 { 1301 + int count; 1302 + struct hpsb_host *host = hi->host; 1303 + struct selfid *sid = (struct selfid *)host->topology_map; 1304 + nodeid_t nodeid = LOCAL_BUS; 1305 1306 + /* Scan each node on the bus */ 1307 + for (count = host->selfid_count; count; count--, sid++) { 1308 + if (sid->extended) 1309 + continue; 1310 1311 + if (!sid->link_active) { 1312 + nodeid++; 1313 + continue; 1314 + } 1315 + nodemgr_node_scan_one(hi, nodeid++, generation); 1316 + } 1317 } 1318 1319 1320 static void nodemgr_suspend_ne(struct node_entry *ne) 1321 { 1322 struct class_device *cdev; ··· 1327 NODE_BUS_ARGS(ne->host, ne->nodeid), (unsigned long long)ne->guid); 1328 1329 ne->in_limbo = 1; 1330 + WARN_ON(device_create_file(&ne->device, &dev_attr_ne_in_limbo)); 1331 1332 + down(&nodemgr_ud_class.sem); 1333 list_for_each_entry(cdev, &nodemgr_ud_class.children, node) { 1334 ud = container_of(cdev, struct unit_directory, class_dev); 1335 if (ud->ne != ne) 1336 continue; 1337 1338 + down_write(&ieee1394_bus_type.subsys.rwsem); 1339 if (ud->device.driver && 1340 (!ud->device.driver->suspend || 1341 ud->device.driver->suspend(&ud->device, PMSG_SUSPEND))) 1342 device_release_driver(&ud->device); 1343 + up_write(&ieee1394_bus_type.subsys.rwsem); 1344 } 
1345 + up(&nodemgr_ud_class.sem); 1346 } 1347 1348 ··· 1353 ne->in_limbo = 0; 1354 device_remove_file(&ne->device, &dev_attr_ne_in_limbo); 1355 1356 + down(&nodemgr_ud_class.sem); 1357 list_for_each_entry(cdev, &nodemgr_ud_class.children, node) { 1358 ud = container_of(cdev, struct unit_directory, class_dev); 1359 if (ud->ne != ne) 1360 continue; 1361 1362 + down_read(&ieee1394_bus_type.subsys.rwsem); 1363 if (ud->device.driver && ud->device.driver->resume) 1364 ud->device.driver->resume(&ud->device); 1365 + up_read(&ieee1394_bus_type.subsys.rwsem); 1366 } 1367 + up(&nodemgr_ud_class.sem); 1368 1369 HPSB_DEBUG("Node resumed: ID:BUS[" NODE_BUS_FMT "] GUID[%016Lx]", 1370 NODE_BUS_ARGS(ne->host, ne->nodeid), (unsigned long long)ne->guid); 1371 } 1372 1373 1374 static void nodemgr_update_pdrv(struct node_entry *ne) 1375 { 1376 struct unit_directory *ud; 1377 struct hpsb_protocol_driver *pdrv; 1378 struct class_device *cdev; 1379 1380 + down(&nodemgr_ud_class.sem); 1381 list_for_each_entry(cdev, &nodemgr_ud_class.children, node) { 1382 ud = container_of(cdev, struct unit_directory, class_dev); 1383 + if (ud->ne != ne) 1384 continue; 1385 1386 + down_write(&ieee1394_bus_type.subsys.rwsem); 1387 + if (ud->device.driver) { 1388 + pdrv = container_of(ud->device.driver, 1389 + struct hpsb_protocol_driver, 1390 + driver); 1391 + if (pdrv->update && pdrv->update(ud)) 1392 + device_release_driver(&ud->device); 1393 } 1394 + up_write(&ieee1394_bus_type.subsys.rwsem); 1395 } 1396 + up(&nodemgr_ud_class.sem); 1397 } 1398 1399 ··· 1405 { 1406 const u64 bc_addr = (CSR_REGISTER_BASE | CSR_BROADCAST_CHANNEL); 1407 quadlet_t bc_remote, bc_local; 1408 + int error; 1409 1410 if (!ne->host->is_irm || ne->generation != generation || 1411 ne->nodeid == ne->host->node_id) ··· 1414 bc_local = cpu_to_be32(ne->host->csr.broadcast_channel); 1415 1416 /* Check if the register is implemented and 1394a compliant. 
*/ 1417 + error = hpsb_read(ne->host, ne->nodeid, generation, bc_addr, &bc_remote, 1418 + sizeof(bc_remote)); 1419 + if (!error && bc_remote & cpu_to_be32(0x80000000) && 1420 bc_remote != bc_local) 1421 hpsb_node_write(ne, bc_addr, &bc_local, sizeof(bc_local)); 1422 } 1423 1424 1425 static void nodemgr_probe_ne(struct host_info *hi, struct node_entry *ne, int generation) 1426 { 1427 struct device *dev; ··· 1456 static void nodemgr_node_probe(struct host_info *hi, int generation) 1457 { 1458 struct hpsb_host *host = hi->host; 1459 struct class_device *cdev; 1460 struct node_entry *ne; 1461 ··· 1469 * while probes are time-consuming. (Well, those probes need some 1470 * improvement...) */ 1471 1472 + down(&nodemgr_ne_class.sem); 1473 + list_for_each_entry(cdev, &nodemgr_ne_class.children, node) { 1474 ne = container_of(cdev, struct node_entry, class_dev); 1475 if (!ne->needs_probe) 1476 nodemgr_probe_ne(hi, ne, generation); 1477 } 1478 + list_for_each_entry(cdev, &nodemgr_ne_class.children, node) { 1479 ne = container_of(cdev, struct node_entry, class_dev); 1480 if (ne->needs_probe) 1481 nodemgr_probe_ne(hi, ne, generation); 1482 } 1483 + up(&nodemgr_ne_class.sem); 1484 1485 1486 /* If we had a bus reset while we were scanning the bus, it is ··· 1498 * just removed. 
*/ 1499 1500 if (generation == get_hpsb_generation(host)) 1501 + if (bus_rescan_devices(&ieee1394_bus_type)) 1502 + HPSB_DEBUG("bus_rescan_devices had an error"); 1503 } 1504 1505 static int nodemgr_send_resume_packet(struct hpsb_host *host) 1506 { 1507 struct hpsb_packet *packet; 1508 + int error = -ENOMEM; 1509 1510 packet = hpsb_make_phypacket(host, 1511 EXTPHYPACKET_TYPE_RESUME | ··· 1514 if (packet) { 1515 packet->no_waiter = 1; 1516 packet->generation = get_hpsb_generation(host); 1517 + error = hpsb_send_packet(packet); 1518 } 1519 + if (error) 1520 HPSB_WARN("fw-host%d: Failed to broadcast resume packet", 1521 host->id); 1522 + return error; 1523 } 1524 1525 /* Perform a few high-level IRM responsibilities. */ ··· 1692 1693 int nodemgr_for_each_host(void *__data, int (*cb)(struct hpsb_host *, void *)) 1694 { 1695 struct class_device *cdev; 1696 struct hpsb_host *host; 1697 int error = 0; 1698 1699 + down(&hpsb_host_class.sem); 1700 + list_for_each_entry(cdev, &hpsb_host_class.children, node) { 1701 host = container_of(cdev, struct hpsb_host, class_dev); 1702 1703 if ((error = cb(host, __data))) 1704 break; 1705 } 1706 + up(&hpsb_host_class.sem); 1707 1708 return error; 1709 } ··· 1726 1727 void hpsb_node_fill_packet(struct node_entry *ne, struct hpsb_packet *pkt) 1728 { 1729 + pkt->host = ne->host; 1730 + pkt->generation = ne->generation; 1731 barrier(); 1732 + pkt->node_id = ne->nodeid; 1733 } 1734 1735 int hpsb_node_write(struct node_entry *ne, u64 addr, ··· 1789 1790 int init_ieee1394_nodemgr(void) 1791 { 1792 + int error; 1793 1794 + error = class_register(&nodemgr_ne_class); 1795 + if (error) 1796 + return error; 1797 1798 + error = class_register(&nodemgr_ud_class); 1799 + if (error) { 1800 class_unregister(&nodemgr_ne_class); 1801 + return error; 1802 } 1803 + error = driver_register(&nodemgr_mid_layer_driver); 1804 hpsb_register_highlevel(&nodemgr_highlevel); 1805 return 0; 1806 } 1807 1808 void cleanup_ieee1394_nodemgr(void) 1809 { 1810 + 
hpsb_unregister_highlevel(&nodemgr_highlevel); 1811 1812 class_unregister(&nodemgr_ud_class); 1813 class_unregister(&nodemgr_ne_class);
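The nodemgr_update_bus_options() hunk above unpacks the bus_info block's BusOptions quadlet with plain shifts and masks. As a standalone userspace sketch of that decoding (struct and function names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical standalone decoder mirroring the shifts and masks used by
 * nodemgr_update_bus_options() for the BusOptions quadlet. */
struct bus_options {
	unsigned irmc, cmc, isc, bmc, pmc;
	unsigned cyc_clk_acc, max_rec, max_rom, generation, lnkspd;
};

static void decode_bus_options(uint32_t q, struct bus_options *b)
{
	b->irmc        = (q >> 31) & 1;
	b->cmc         = (q >> 30) & 1;
	b->isc         = (q >> 29) & 1;
	b->bmc         = (q >> 28) & 1;
	b->pmc         = (q >> 27) & 1;
	b->cyc_clk_acc = (q >> 16) & 0xff;
	/* 4-bit field encodes log2 of the payload size, offset by one */
	b->max_rec     = 1 << (((q >> 12) & 0xf) + 1);
	b->max_rom     = (q >>  8) & 0x3;
	b->generation  = (q >>  4) & 0xf;
	b->lnkspd      =  q        & 0x7;
}
```

For example, a max_rec field of 9 decodes to 1024-byte maximum asynchronous payloads, matching the `1 << (... + 1)` expression in the patch.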
+6 -1
drivers/ieee1394/nodemgr.h
··· 144 struct device_driver driver; 145 }; 146 147 - int hpsb_register_protocol(struct hpsb_protocol_driver *driver); 148 void hpsb_unregister_protocol(struct hpsb_protocol_driver *driver); 149 150 static inline int hpsb_node_entry_valid(struct node_entry *ne)
··· 144 struct device_driver driver; 145 }; 146 147 + int __hpsb_register_protocol(struct hpsb_protocol_driver *, struct module *); 148 + static inline int hpsb_register_protocol(struct hpsb_protocol_driver *driver) 149 + { 150 + return __hpsb_register_protocol(driver, THIS_MODULE); 151 + } 152 + 153 void hpsb_unregister_protocol(struct hpsb_protocol_driver *driver); 154 155 static inline int hpsb_node_entry_valid(struct node_entry *ne)
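The header change splits registration into an exported `__hpsb_register_protocol()` plus a `static inline` wrapper, so that `THIS_MODULE` expands in the *caller's* translation unit and each protocol driver records itself as owner. A minimal userspace sketch of that pattern, with `OWNER_NAME` standing in for `THIS_MODULE` and all names illustrative:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for THIS_MODULE; in the kernel this is a per-module pointer
 * that resolves differently in each module that includes the header. */
#define OWNER_NAME "this_module"

struct protocol_driver {
	const char *name;
	const char *owner;	/* filled in by the wrapper, like driver.owner */
};

/* The "exported" core function takes the owner explicitly... */
static int __register_protocol(struct protocol_driver *drv, const char *owner)
{
	drv->owner = owner;
	return 0;
}

/* ...while the inline wrapper in the "header" supplies it at the call site,
 * so the core never bakes in its own identity as the owner. */
static inline int register_protocol(struct protocol_driver *drv)
{
	return __register_protocol(drv, OWNER_NAME);
}
```

This is the same reason many kernel APIs (e.g. `pci_register_driver`) are wrappers around a double-underscore function taking `struct module *`.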
+100 -50
drivers/ieee1394/ohci1394.c
··· 468 /* Global initialization */ 469 static void ohci_initialize(struct ti_ohci *ohci) 470 { 471 - char irq_buf[16]; 472 quadlet_t buf; 473 int num_ports, i; 474 ··· 585 reg_write(ohci, OHCI1394_HCControlSet, OHCI1394_HCControl_linkEnable); 586 587 buf = reg_read(ohci, OHCI1394_Version); 588 - sprintf (irq_buf, "%d", ohci->dev->irq); 589 - PRINT(KERN_INFO, "OHCI-1394 %d.%d (PCI): IRQ=[%s] " 590 "MMIO=[%llx-%llx] Max Packet=[%d] IR/IT contexts=[%d/%d]", 591 ((((buf) >> 16) & 0xf) + (((buf) >> 20) & 0xf) * 10), 592 - ((((buf) >> 4) & 0xf) + ((buf) & 0xf) * 10), irq_buf, 593 (unsigned long long)pci_resource_start(ohci->dev, 0), 594 (unsigned long long)pci_resource_start(ohci->dev, 0) + OHCI1394_REGISTER_SIZE - 1, 595 ohci->max_packet_size, ··· 3215 struct ti_ohci *ohci; /* shortcut to currently handled device */ 3216 resource_size_t ohci_base; 3217 3218 if (pci_enable_device(dev)) 3219 FAIL(-ENXIO, "Failed to enable OHCI hardware"); 3220 pci_set_master(dev); ··· 3515 #endif 3516 3517 #ifdef CONFIG_PPC_PMAC 3518 - /* On UniNorth, power down the cable and turn off the chip 3519 - * clock when the module is removed to save power on 3520 - * laptops. 
Turning it back ON is done by the arch code when 3521 - * pci_enable_device() is called */ 3522 - { 3523 - struct device_node* of_node; 3524 3525 - of_node = pci_device_to_OF_node(ohci->dev); 3526 - if (of_node) { 3527 - pmac_call_feature(PMAC_FTR_1394_ENABLE, of_node, 0, 0); 3528 - pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, of_node, 0, 0); 3529 } 3530 } 3531 #endif /* CONFIG_PPC_PMAC */ ··· 3536 } 3537 3538 #ifdef CONFIG_PM 3539 - static int ohci1394_pci_resume (struct pci_dev *pdev) 3540 { 3541 /* PowerMac resume code comes first */ 3542 #ifdef CONFIG_PPC_PMAC 3543 if (machine_is(powermac)) { 3544 - struct device_node *of_node; 3545 3546 - /* Re-enable 1394 */ 3547 - of_node = pci_device_to_OF_node (pdev); 3548 - if (of_node) 3549 - pmac_call_feature (PMAC_FTR_1394_ENABLE, of_node, 0, 1); 3550 } 3551 #endif /* CONFIG_PPC_PMAC */ 3552 3553 pci_set_power_state(pdev, PCI_D0); 3554 pci_restore_state(pdev); 3555 - return pci_enable_device(pdev); 3556 - } 3557 - 3558 - static int ohci1394_pci_suspend (struct pci_dev *pdev, pm_message_t state) 3559 - { 3560 - int err; 3561 - 3562 - printk(KERN_INFO "%s does not fully support suspend and resume yet\n", 3563 - OHCI1394_DRIVER_NAME); 3564 - 3565 - err = pci_save_state(pdev); 3566 if (err) { 3567 - printk(KERN_ERR "%s: pci_save_state failed with %d\n", 3568 - OHCI1394_DRIVER_NAME, err); 3569 return err; 3570 } 3571 - err = pci_set_power_state(pdev, pci_choose_state(pdev, state)); 3572 - #ifdef OHCI1394_DEBUG 3573 - if (err) 3574 - printk(KERN_DEBUG "%s: pci_set_power_state failed with %d\n", 3575 - OHCI1394_DRIVER_NAME, err); 3576 - #endif /* OHCI1394_DEBUG */ 3577 3578 - /* PowerMac suspend code comes last */ 3579 - #ifdef CONFIG_PPC_PMAC 3580 - if (machine_is(powermac)) { 3581 - struct device_node *of_node; 3582 - 3583 - /* Disable 1394 */ 3584 - of_node = pci_device_to_OF_node (pdev); 3585 - if (of_node) 3586 - pmac_call_feature(PMAC_FTR_1394_ENABLE, of_node, 0, 0); 3587 - } 3588 - #endif /* CONFIG_PPC_PMAC */ 3589 
3590 return 0; 3591 }
··· 468 /* Global initialization */ 469 static void ohci_initialize(struct ti_ohci *ohci) 470 { 471 quadlet_t buf; 472 int num_ports, i; 473 ··· 586 reg_write(ohci, OHCI1394_HCControlSet, OHCI1394_HCControl_linkEnable); 587 588 buf = reg_read(ohci, OHCI1394_Version); 589 + PRINT(KERN_INFO, "OHCI-1394 %d.%d (PCI): IRQ=[%d] " 590 "MMIO=[%llx-%llx] Max Packet=[%d] IR/IT contexts=[%d/%d]", 591 ((((buf) >> 16) & 0xf) + (((buf) >> 20) & 0xf) * 10), 592 + ((((buf) >> 4) & 0xf) + ((buf) & 0xf) * 10), ohci->dev->irq, 593 (unsigned long long)pci_resource_start(ohci->dev, 0), 594 (unsigned long long)pci_resource_start(ohci->dev, 0) + OHCI1394_REGISTER_SIZE - 1, 595 ohci->max_packet_size, ··· 3217 struct ti_ohci *ohci; /* shortcut to currently handled device */ 3218 resource_size_t ohci_base; 3219 3220 + #ifdef CONFIG_PPC_PMAC 3221 + /* Necessary on some machines if ohci1394 was loaded/ unloaded before */ 3222 + if (machine_is(powermac)) { 3223 + struct device_node *ofn = pci_device_to_OF_node(dev); 3224 + 3225 + if (ofn) { 3226 + pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, ofn, 0, 1); 3227 + pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 1); 3228 + } 3229 + } 3230 + #endif /* CONFIG_PPC_PMAC */ 3231 + 3232 if (pci_enable_device(dev)) 3233 FAIL(-ENXIO, "Failed to enable OHCI hardware"); 3234 pci_set_master(dev); ··· 3505 #endif 3506 3507 #ifdef CONFIG_PPC_PMAC 3508 + /* On UniNorth, power down the cable and turn off the chip clock 3509 + * to save power on laptops */ 3510 + if (machine_is(powermac)) { 3511 + struct device_node* ofn = pci_device_to_OF_node(ohci->dev); 3512 3513 + if (ofn) { 3514 + pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 0); 3515 + pmac_call_feature(PMAC_FTR_1394_CABLE_POWER, ofn, 0, 0); 3516 } 3517 } 3518 #endif /* CONFIG_PPC_PMAC */ ··· 3529 } 3530 3531 #ifdef CONFIG_PM 3532 + static int ohci1394_pci_suspend(struct pci_dev *pdev, pm_message_t state) 3533 { 3534 + int err; 3535 + struct ti_ohci *ohci = pci_get_drvdata(pdev); 3536 + 3537 + 
printk(KERN_INFO "%s does not fully support suspend and resume yet\n", 3538 + OHCI1394_DRIVER_NAME); 3539 + 3540 + if (!ohci) { 3541 + printk(KERN_ERR "%s: tried to suspend nonexisting host\n", 3542 + OHCI1394_DRIVER_NAME); 3543 + return -ENXIO; 3544 + } 3545 + DBGMSG("suspend called"); 3546 + 3547 + /* Clear the async DMA contexts and stop using the controller */ 3548 + hpsb_bus_reset(ohci->host); 3549 + 3550 + /* See ohci1394_pci_remove() for comments on this sequence */ 3551 + reg_write(ohci, OHCI1394_ConfigROMhdr, 0); 3552 + reg_write(ohci, OHCI1394_BusOptions, 3553 + (reg_read(ohci, OHCI1394_BusOptions) & 0x0000f007) | 3554 + 0x00ff0000); 3555 + reg_write(ohci, OHCI1394_IntMaskClear, 0xffffffff); 3556 + reg_write(ohci, OHCI1394_IntEventClear, 0xffffffff); 3557 + reg_write(ohci, OHCI1394_IsoXmitIntMaskClear, 0xffffffff); 3558 + reg_write(ohci, OHCI1394_IsoXmitIntEventClear, 0xffffffff); 3559 + reg_write(ohci, OHCI1394_IsoRecvIntMaskClear, 0xffffffff); 3560 + reg_write(ohci, OHCI1394_IsoRecvIntEventClear, 0xffffffff); 3561 + set_phy_reg(ohci, 4, ~0xc0 & get_phy_reg(ohci, 4)); 3562 + reg_write(ohci, OHCI1394_LinkControlClear, 0xffffffff); 3563 + ohci_devctl(ohci->host, RESET_BUS, LONG_RESET_NO_FORCE_ROOT); 3564 + ohci_soft_reset(ohci); 3565 + 3566 + err = pci_save_state(pdev); 3567 + if (err) { 3568 + PRINT(KERN_ERR, "pci_save_state failed with %d", err); 3569 + return err; 3570 + } 3571 + err = pci_set_power_state(pdev, pci_choose_state(pdev, state)); 3572 + if (err) 3573 + DBGMSG("pci_set_power_state failed with %d", err); 3574 + 3575 + /* PowerMac suspend code comes last */ 3576 + #ifdef CONFIG_PPC_PMAC 3577 + if (machine_is(powermac)) { 3578 + struct device_node *ofn = pci_device_to_OF_node(pdev); 3579 + 3580 + if (ofn) 3581 + pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 0); 3582 + } 3583 + #endif /* CONFIG_PPC_PMAC */ 3584 + 3585 + return 0; 3586 + } 3587 + 3588 + static int ohci1394_pci_resume(struct pci_dev *pdev) 3589 + { 3590 + int err; 3591 + struct 
ti_ohci *ohci = pci_get_drvdata(pdev); 3592 + 3593 + if (!ohci) { 3594 + printk(KERN_ERR "%s: tried to resume nonexisting host\n", 3595 + OHCI1394_DRIVER_NAME); 3596 + return -ENXIO; 3597 + } 3598 + DBGMSG("resume called"); 3599 + 3600 /* PowerMac resume code comes first */ 3601 #ifdef CONFIG_PPC_PMAC 3602 if (machine_is(powermac)) { 3603 + struct device_node *ofn = pci_device_to_OF_node(pdev); 3604 3605 + if (ofn) 3606 + pmac_call_feature(PMAC_FTR_1394_ENABLE, ofn, 0, 1); 3607 } 3608 #endif /* CONFIG_PPC_PMAC */ 3609 3610 pci_set_power_state(pdev, PCI_D0); 3611 pci_restore_state(pdev); 3612 + err = pci_enable_device(pdev); 3613 if (err) { 3614 + PRINT(KERN_ERR, "pci_enable_device failed with %d", err); 3615 return err; 3616 } 3617 3618 + /* See ohci1394_pci_probe() for comments on this sequence */ 3619 + ohci_soft_reset(ohci); 3620 + reg_write(ohci, OHCI1394_HCControlSet, OHCI1394_HCControl_LPS); 3621 + reg_write(ohci, OHCI1394_IntEventClear, 0xffffffff); 3622 + reg_write(ohci, OHCI1394_IntMaskClear, 0xffffffff); 3623 + mdelay(50); 3624 + ohci_initialize(ohci); 3625 3626 return 0; 3627 }
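The new suspend path quiesces the controller with the same register sequence as driver removal; the BusOptions write `(reg_read(...) & 0x0000f007) | 0x00ff0000` keeps only the max_rec field (bits 15:12) and link speed (bits 2:0), forces cyc_clk_acc to 0xff, and clears the capability bits. A hedged standalone sketch of just that masking (assuming the usual bus_info field layout):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of the BusOptions value written on suspend/removal:
 * preserve max_rec (bits 15:12) and lnkspd (bits 2:0), set cyc_clk_acc to
 * 0xff, and drop the irmc/cmc/isc/bmc/pmc capability bits. */
static uint32_t quiesce_bus_options(uint32_t busoptions)
{
	return (busoptions & 0x0000f007) | 0x00ff0000;
}
```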
+1 -2
drivers/ieee1394/pcilynx.c
··· 1428 struct i2c_algo_bit_data i2c_adapter_data; 1429 1430 error = -ENOMEM; 1431 - i2c_ad = kmalloc(sizeof(*i2c_ad), GFP_KERNEL); 1432 if (!i2c_ad) FAIL("failed to allocate I2C adapter memory"); 1433 1434 - memcpy(i2c_ad, &bit_ops, sizeof(struct i2c_adapter)); 1435 i2c_adapter_data = bit_data; 1436 i2c_ad->algo_data = &i2c_adapter_data; 1437 i2c_adapter_data.data = lynx;
··· 1428 struct i2c_algo_bit_data i2c_adapter_data; 1429 1430 error = -ENOMEM; 1431 + i2c_ad = kmemdup(&bit_ops, sizeof(*i2c_ad), GFP_KERNEL); 1432 if (!i2c_ad) FAIL("failed to allocate I2C adapter memory"); 1433 1434 i2c_adapter_data = bit_data; 1435 i2c_ad->algo_data = &i2c_adapter_data; 1436 i2c_adapter_data.data = lynx;
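The pcilynx.c change collapses a kmalloc()+memcpy() pair into the kernel's kmemdup() helper, which allocates and copies in one step. A userspace stand-in for that helper (the `memdup` name is ours, not a libc function):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace sketch of kmemdup(): allocate len bytes and copy src into the
 * new buffer; returns NULL on allocation failure, like the kernel helper. */
static void *memdup(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}
```

Besides being shorter, the combined form removes the window where a half-initialized buffer exists between allocation and copy.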
+5 -5
drivers/ieee1394/raw1394-private.h
··· 27 28 struct hpsb_host *host; 29 30 - struct list_head req_pending; 31 - struct list_head req_complete; 32 spinlock_t reqlists_lock; 33 wait_queue_head_t wait_complete; 34 35 - struct list_head addr_list; 36 37 u8 __user *fcp_buffer; 38 ··· 63 u8 client_transactions; 64 u64 recvb; 65 u16 rec_length; 66 - u8 *addr_space_buffer; /* accessed by read/write/lock */ 67 }; 68 69 struct pending_request { ··· 79 struct host_info { 80 struct list_head list; 81 struct hpsb_host *host; 82 - struct list_head file_info_list; 83 }; 84 85 #endif /* IEEE1394_RAW1394_PRIVATE_H */
··· 27 28 struct hpsb_host *host; 29 30 + struct list_head req_pending; /* protected by reqlists_lock */ 31 + struct list_head req_complete; /* protected by reqlists_lock */ 32 spinlock_t reqlists_lock; 33 wait_queue_head_t wait_complete; 34 35 + struct list_head addr_list; /* protected by host_info_lock */ 36 37 u8 __user *fcp_buffer; 38 ··· 63 u8 client_transactions; 64 u64 recvb; 65 u16 rec_length; 66 + u8 *addr_space_buffer; /* accessed by read/write/lock requests */ 67 }; 68 69 struct pending_request { ··· 79 struct host_info { 80 struct list_head list; 81 struct hpsb_host *host; 82 + struct list_head file_info_list; /* protected by host_info_lock */ 83 }; 84 85 #endif /* IEEE1394_RAW1394_PRIVATE_H */
+18 -5
drivers/ieee1394/raw1394.c
··· 99 100 static void queue_complete_cb(struct pending_request *req); 101 102 static struct pending_request *__alloc_pending_request(gfp_t flags) 103 { 104 struct pending_request *req; ··· 2307 return sizeof(struct raw1394_request); 2308 2309 case RAW1394_REQ_ISO_SEND: 2310 return handle_iso_send(fi, req, node); 2311 2312 case RAW1394_REQ_ARM_REGISTER: ··· 2326 return reset_notification(fi, req); 2327 2328 case RAW1394_REQ_ISO_LISTEN: 2329 handle_iso_listen(fi, req); 2330 return sizeof(struct raw1394_request); 2331 ··· 2987 MODULE_DEVICE_TABLE(ieee1394, raw1394_id_table); 2988 2989 static struct hpsb_protocol_driver raw1394_driver = { 2990 - .name = "raw1394 Driver", 2991 .id_table = raw1394_id_table, 2992 - .driver = { 2993 - .name = "raw1394", 2994 - .bus = &ieee1394_bus_type, 2995 - }, 2996 }; 2997 2998 /******************************************************************************/
··· 99 100 static void queue_complete_cb(struct pending_request *req); 101 102 + #include <asm/current.h> 103 + static void print_old_iso_deprecation(void) 104 + { 105 + static pid_t p; 106 + 107 + if (p == current->pid) 108 + return; 109 + p = current->pid; 110 + printk(KERN_WARNING "raw1394: WARNING - Program \"%s\" uses unsupported" 111 + " isochronous request types which will be removed in a next" 112 + " kernel release\n", current->comm); 113 + printk(KERN_WARNING "raw1394: Update your software to use libraw1394's" 114 + " newer interface\n"); 115 + } 116 + 117 static struct pending_request *__alloc_pending_request(gfp_t flags) 118 { 119 struct pending_request *req; ··· 2292 return sizeof(struct raw1394_request); 2293 2294 case RAW1394_REQ_ISO_SEND: 2295 + print_old_iso_deprecation(); 2296 return handle_iso_send(fi, req, node); 2297 2298 case RAW1394_REQ_ARM_REGISTER: ··· 2310 return reset_notification(fi, req); 2311 2312 case RAW1394_REQ_ISO_LISTEN: 2313 + print_old_iso_deprecation(); 2314 handle_iso_listen(fi, req); 2315 return sizeof(struct raw1394_request); 2316 ··· 2970 MODULE_DEVICE_TABLE(ieee1394, raw1394_id_table); 2971 2972 static struct hpsb_protocol_driver raw1394_driver = { 2973 + .name = "raw1394", 2974 .id_table = raw1394_id_table, 2975 }; 2976 2977 /******************************************************************************/
+812 -1372
drivers/ieee1394/sbp2.c
··· 29 * driver. It also registers as a SCSI lower-level driver in order to accept 30 * SCSI commands for transport using SBP-2. 31 * 32 - * You may access any attached SBP-2 storage devices as if they were SCSI 33 - * devices (e.g. mount /dev/sda1, fdisk, mkfs, etc.). 34 * 35 - * Current Issues: 36 * 37 - * - Error Handling: SCSI aborts and bus reset requests are handled somewhat 38 - * but the code needs additional debugging. 39 */ 40 41 #include <linux/blkdev.h> ··· 62 #include <linux/list.h> 63 #include <linux/module.h> 64 #include <linux/moduleparam.h> 65 - #include <linux/pci.h> 66 #include <linux/slab.h> 67 #include <linux/spinlock.h> 68 #include <linux/stat.h> ··· 110 * (probably due to PCI latency/throughput issues with the part). You can 111 * bump down the speed if you are running into problems. 112 */ 113 - static int max_speed = IEEE1394_SPEED_MAX; 114 - module_param(max_speed, int, 0644); 115 - MODULE_PARM_DESC(max_speed, "Force max speed (3 = 800mb, 2 = 400mb, 1 = 200mb, 0 = 100mb)"); 116 117 /* 118 * Set serialize_io to 1 if you'd like only one scsi command sent 119 * down to us at a time (debugging). This might be necessary for very 120 * badly behaved sbp2 devices. 121 - * 122 - * TODO: Make this configurable per device. 123 */ 124 - static int serialize_io = 1; 125 - module_param(serialize_io, int, 0444); 126 - MODULE_PARM_DESC(serialize_io, "Serialize I/O coming from scsi drivers (default = 1, faster = 0)"); 127 128 /* 129 * Bump up max_sectors if you'd like to support very large sized ··· 133 * the Oxsemi sbp2 chipsets have no problems supporting very large 134 * transfer sizes. 135 */ 136 - static int max_sectors = SBP2_MAX_SECTORS; 137 - module_param(max_sectors, int, 0444); 138 - MODULE_PARM_DESC(max_sectors, "Change max sectors per I/O supported (default = " 139 - __stringify(SBP2_MAX_SECTORS) ")"); 140 141 /* 142 * Exclusive login to sbp2 device? In most cases, the sbp2 driver should ··· 151 * concurrent logins. 
Depending on firmware, four or two concurrent logins 152 * are possible on OXFW911 and newer Oxsemi bridges. 153 */ 154 - static int exclusive_login = 1; 155 - module_param(exclusive_login, int, 0644); 156 - MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device (default = 1)"); 157 158 /* 159 * If any of the following workarounds is required for your device to work, ··· 192 ", override internal blacklist = " __stringify(SBP2_WORKAROUND_OVERRIDE) 193 ", or a combination)"); 194 195 - /* 196 - * Export information about protocols/devices supported by this driver. 197 - */ 198 - static struct ieee1394_device_id sbp2_id_table[] = { 199 - { 200 - .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION, 201 - .specifier_id = SBP2_UNIT_SPEC_ID_ENTRY & 0xffffff, 202 - .version = SBP2_SW_VERSION_ENTRY & 0xffffff}, 203 - {} 204 - }; 205 206 - MODULE_DEVICE_TABLE(ieee1394, sbp2_id_table); 207 - 208 - /* 209 - * Debug levels, configured via kernel config, or enable here. 210 - */ 211 - 212 - #define CONFIG_IEEE1394_SBP2_DEBUG 0 213 - /* #define CONFIG_IEEE1394_SBP2_DEBUG_ORBS */ 214 - /* #define CONFIG_IEEE1394_SBP2_DEBUG_DMA */ 215 - /* #define CONFIG_IEEE1394_SBP2_DEBUG 1 */ 216 - /* #define CONFIG_IEEE1394_SBP2_DEBUG 2 */ 217 - /* #define CONFIG_IEEE1394_SBP2_PACKET_DUMP */ 218 - 219 - #ifdef CONFIG_IEEE1394_SBP2_DEBUG_ORBS 220 - #define SBP2_ORB_DEBUG(fmt, args...) HPSB_ERR("sbp2(%s): "fmt, __FUNCTION__, ## args) 221 - static u32 global_outstanding_command_orbs = 0; 222 - #define outstanding_orb_incr global_outstanding_command_orbs++ 223 - #define outstanding_orb_decr global_outstanding_command_orbs-- 224 - #else 225 - #define SBP2_ORB_DEBUG(fmt, args...) do {} while (0) 226 - #define outstanding_orb_incr do {} while (0) 227 - #define outstanding_orb_decr do {} while (0) 228 - #endif 229 - 230 - #ifdef CONFIG_IEEE1394_SBP2_DEBUG_DMA 231 - #define SBP2_DMA_ALLOC(fmt, args...) 
\ 232 - HPSB_ERR("sbp2(%s)alloc(%d): "fmt, __FUNCTION__, \ 233 - ++global_outstanding_dmas, ## args) 234 - #define SBP2_DMA_FREE(fmt, args...) \ 235 - HPSB_ERR("sbp2(%s)free(%d): "fmt, __FUNCTION__, \ 236 - --global_outstanding_dmas, ## args) 237 - static u32 global_outstanding_dmas = 0; 238 - #else 239 - #define SBP2_DMA_ALLOC(fmt, args...) do {} while (0) 240 - #define SBP2_DMA_FREE(fmt, args...) do {} while (0) 241 - #endif 242 - 243 - #if CONFIG_IEEE1394_SBP2_DEBUG >= 2 244 - #define SBP2_DEBUG(fmt, args...) HPSB_ERR("sbp2: "fmt, ## args) 245 - #define SBP2_INFO(fmt, args...) HPSB_ERR("sbp2: "fmt, ## args) 246 - #define SBP2_NOTICE(fmt, args...) HPSB_ERR("sbp2: "fmt, ## args) 247 - #define SBP2_WARN(fmt, args...) HPSB_ERR("sbp2: "fmt, ## args) 248 - #elif CONFIG_IEEE1394_SBP2_DEBUG == 1 249 - #define SBP2_DEBUG(fmt, args...) HPSB_DEBUG("sbp2: "fmt, ## args) 250 - #define SBP2_INFO(fmt, args...) HPSB_INFO("sbp2: "fmt, ## args) 251 - #define SBP2_NOTICE(fmt, args...) HPSB_NOTICE("sbp2: "fmt, ## args) 252 - #define SBP2_WARN(fmt, args...) HPSB_WARN("sbp2: "fmt, ## args) 253 - #else 254 - #define SBP2_DEBUG(fmt, args...) do {} while (0) 255 - #define SBP2_INFO(fmt, args...) HPSB_INFO("sbp2: "fmt, ## args) 256 - #define SBP2_NOTICE(fmt, args...) HPSB_NOTICE("sbp2: "fmt, ## args) 257 - #define SBP2_WARN(fmt, args...) HPSB_WARN("sbp2: "fmt, ## args) 258 - #endif 259 - 260 - #define SBP2_ERR(fmt, args...) 
HPSB_ERR("sbp2: "fmt, ## args) 261 - #define SBP2_DEBUG_ENTER() SBP2_DEBUG("%s", __FUNCTION__) 262 263 /* 264 * Globals 265 */ 266 267 - static void sbp2scsi_complete_all_commands(struct scsi_id_instance_data *scsi_id, 268 - u32 status); 269 - 270 - static void sbp2scsi_complete_command(struct scsi_id_instance_data *scsi_id, 271 - u32 scsi_status, struct scsi_cmnd *SCpnt, 272 - void (*done)(struct scsi_cmnd *)); 273 - 274 - static struct scsi_host_template scsi_driver_template; 275 276 static const u8 sbp2_speedto_max_payload[] = { 0x7, 0x8, 0x9, 0xA, 0xB, 0xC }; 277 278 - static void sbp2_host_reset(struct hpsb_host *host); 279 - 280 - static int sbp2_probe(struct device *dev); 281 - static int sbp2_remove(struct device *dev); 282 - static int sbp2_update(struct unit_directory *ud); 283 - 284 static struct hpsb_highlevel sbp2_highlevel = { 285 - .name = SBP2_DEVICE_NAME, 286 - .host_reset = sbp2_host_reset, 287 }; 288 289 static struct hpsb_address_ops sbp2_ops = { 290 - .write = sbp2_handle_status_write 291 }; 292 293 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 294 static struct hpsb_address_ops sbp2_physdma_ops = { 295 - .read = sbp2_handle_physdma_read, 296 - .write = sbp2_handle_physdma_write, 297 }; 298 #endif 299 300 static struct hpsb_protocol_driver sbp2_driver = { 301 - .name = "SBP2 Driver", 302 .id_table = sbp2_id_table, 303 .update = sbp2_update, 304 .driver = { 305 - .name = SBP2_DEVICE_NAME, 306 - .bus = &ieee1394_bus_type, 307 .probe = sbp2_probe, 308 .remove = sbp2_remove, 309 }, 310 }; 311 312 /* 313 * List of devices with known bugs. ··· 376 377 for (length = (length >> 2); length--; ) 378 temp[length] = be32_to_cpu(temp[length]); 379 - 380 - return; 381 } 382 383 /* ··· 387 388 for (length = (length >> 2); length--; ) 389 temp[length] = cpu_to_be32(temp[length]); 390 - 391 - return; 392 } 393 #else /* BIG_ENDIAN */ 394 /* Why waste the cpu cycles? 
*/ ··· 394 #define sbp2util_cpu_to_be32_buffer(x,y) do {} while (0) 395 #endif 396 397 - #ifdef CONFIG_IEEE1394_SBP2_PACKET_DUMP 398 - /* 399 - * Debug packet dump routine. Length is in bytes. 400 - */ 401 - static void sbp2util_packet_dump(void *buffer, int length, char *dump_name, 402 - u32 dump_phys_addr) 403 - { 404 - int i; 405 - unsigned char *dump = buffer; 406 - 407 - if (!dump || !length || !dump_name) 408 - return; 409 - 410 - if (dump_phys_addr) 411 - printk("[%s, 0x%x]", dump_name, dump_phys_addr); 412 - else 413 - printk("[%s]", dump_name); 414 - for (i = 0; i < length; i++) { 415 - if (i > 0x3f) { 416 - printk("\n ..."); 417 - break; 418 - } 419 - if ((i & 0x3) == 0) 420 - printk(" "); 421 - if ((i & 0xf) == 0) 422 - printk("\n "); 423 - printk("%02x ", (int)dump[i]); 424 - } 425 - printk("\n"); 426 - 427 - return; 428 - } 429 - #else 430 - #define sbp2util_packet_dump(w,x,y,z) do {} while (0) 431 - #endif 432 - 433 - static DECLARE_WAIT_QUEUE_HEAD(access_wq); 434 435 /* 436 * Waits for completion of an SBP-2 access request. 437 * Returns nonzero if timed out or prematurely interrupted. 438 */ 439 - static int sbp2util_access_timeout(struct scsi_id_instance_data *scsi_id, 440 - int timeout) 441 { 442 - long leftover = wait_event_interruptible_timeout( 443 - access_wq, scsi_id->access_complete, timeout); 444 445 - scsi_id->access_complete = 0; 446 return leftover <= 0; 447 } 448 449 - /* Frees an allocated packet */ 450 - static void sbp2_free_packet(struct hpsb_packet *packet) 451 { 452 hpsb_free_tlabel(packet); 453 hpsb_free_packet(packet); 454 } 455 456 - /* This is much like hpsb_node_write(), except it ignores the response 457 - * subaction and returns immediately. Can be used from interrupts. 
458 */ 459 static int sbp2util_node_write_no_wait(struct node_entry *ne, u64 addr, 460 - quadlet_t *buffer, size_t length) 461 { 462 struct hpsb_packet *packet; 463 464 - packet = hpsb_make_writepacket(ne->host, ne->nodeid, 465 - addr, buffer, length); 466 if (!packet) 467 return -ENOMEM; 468 469 - hpsb_set_packet_complete_task(packet, 470 - (void (*)(void *))sbp2_free_packet, 471 - packet); 472 - 473 hpsb_node_fill_packet(ne, packet); 474 - 475 if (hpsb_send_packet(packet) < 0) { 476 sbp2_free_packet(packet); 477 return -EIO; 478 } 479 - 480 return 0; 481 } 482 483 - static void sbp2util_notify_fetch_agent(struct scsi_id_instance_data *scsi_id, 484 - u64 offset, quadlet_t *data, size_t len) 485 { 486 - /* 487 - * There is a small window after a bus reset within which the node 488 - * entry's generation is current but the reconnect wasn't completed. 489 - */ 490 - if (unlikely(atomic_read(&scsi_id->state) == SBP2LU_STATE_IN_RESET)) 491 return; 492 493 - if (hpsb_node_write(scsi_id->ne, 494 - scsi_id->sbp2_command_block_agent_addr + offset, 495 data, len)) 496 SBP2_ERR("sbp2util_notify_fetch_agent failed."); 497 - /* 498 - * Now accept new SCSI commands, unless a bus reset happended during 499 - * hpsb_node_write. 
500 - */ 501 - if (likely(atomic_read(&scsi_id->state) != SBP2LU_STATE_IN_RESET)) 502 - scsi_unblock_requests(scsi_id->scsi_host); 503 } 504 505 static void sbp2util_write_orb_pointer(struct work_struct *work) 506 { 507 - struct scsi_id_instance_data *scsi_id = 508 - container_of(work, struct scsi_id_instance_data, 509 - protocol_work.work); 510 quadlet_t data[2]; 511 512 - data[0] = ORB_SET_NODE_ID(scsi_id->hi->host->node_id); 513 - data[1] = scsi_id->last_orb_dma; 514 sbp2util_cpu_to_be32_buffer(data, 8); 515 - sbp2util_notify_fetch_agent(scsi_id, SBP2_ORB_POINTER_OFFSET, data, 8); 516 } 517 518 static void sbp2util_write_doorbell(struct work_struct *work) 519 { 520 - struct scsi_id_instance_data *scsi_id = 521 - container_of(work, struct scsi_id_instance_data, 522 - protocol_work.work); 523 - sbp2util_notify_fetch_agent(scsi_id, SBP2_DOORBELL_OFFSET, NULL, 4); 524 } 525 526 - /* 527 - * This function is called to create a pool of command orbs used for 528 - * command processing. It is called when a new sbp2 device is detected. 529 - */ 530 - static int sbp2util_create_command_orb_pool(struct scsi_id_instance_data *scsi_id) 531 { 532 - struct sbp2scsi_host_info *hi = scsi_id->hi; 533 int i; 534 unsigned long flags, orbs; 535 - struct sbp2_command_info *command; 536 537 - orbs = serialize_io ? 
2 : SBP2_MAX_CMDS; 538 539 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 540 for (i = 0; i < orbs; i++) { 541 - command = kzalloc(sizeof(*command), GFP_ATOMIC); 542 - if (!command) { 543 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, 544 - flags); 545 return -ENOMEM; 546 } 547 - command->command_orb_dma = 548 - pci_map_single(hi->host->pdev, &command->command_orb, 549 - sizeof(struct sbp2_command_orb), 550 - PCI_DMA_TODEVICE); 551 - SBP2_DMA_ALLOC("single command orb DMA"); 552 - command->sge_dma = 553 - pci_map_single(hi->host->pdev, 554 - &command->scatter_gather_element, 555 - sizeof(command->scatter_gather_element), 556 - PCI_DMA_BIDIRECTIONAL); 557 - SBP2_DMA_ALLOC("scatter_gather_element"); 558 - INIT_LIST_HEAD(&command->list); 559 - list_add_tail(&command->list, &scsi_id->sbp2_command_orb_completed); 560 } 561 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 562 return 0; 563 } 564 565 - /* 566 - * This function is called to delete a pool of command orbs. 
567 - */ 568 - static void sbp2util_remove_command_orb_pool(struct scsi_id_instance_data *scsi_id) 569 { 570 - struct hpsb_host *host = scsi_id->hi->host; 571 struct list_head *lh, *next; 572 - struct sbp2_command_info *command; 573 unsigned long flags; 574 575 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 576 - if (!list_empty(&scsi_id->sbp2_command_orb_completed)) { 577 - list_for_each_safe(lh, next, &scsi_id->sbp2_command_orb_completed) { 578 - command = list_entry(lh, struct sbp2_command_info, list); 579 - 580 - /* Release our generic DMA's */ 581 - pci_unmap_single(host->pdev, command->command_orb_dma, 582 sizeof(struct sbp2_command_orb), 583 - PCI_DMA_TODEVICE); 584 - SBP2_DMA_FREE("single command orb DMA"); 585 - pci_unmap_single(host->pdev, command->sge_dma, 586 - sizeof(command->scatter_gather_element), 587 - PCI_DMA_BIDIRECTIONAL); 588 - SBP2_DMA_FREE("scatter_gather_element"); 589 - 590 - kfree(command); 591 } 592 - } 593 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 594 return; 595 } 596 597 /* 598 - * This function finds the sbp2_command for a given outstanding command 599 - * orb.Only looks at the inuse list. 
600 */ 601 static struct sbp2_command_info *sbp2util_find_command_for_orb( 602 - struct scsi_id_instance_data *scsi_id, dma_addr_t orb) 603 { 604 - struct sbp2_command_info *command; 605 unsigned long flags; 606 607 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 608 - if (!list_empty(&scsi_id->sbp2_command_orb_inuse)) { 609 - list_for_each_entry(command, &scsi_id->sbp2_command_orb_inuse, list) { 610 - if (command->command_orb_dma == orb) { 611 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 612 - return command; 613 } 614 - } 615 - } 616 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 617 - 618 - SBP2_ORB_DEBUG("could not match command orb %x", (unsigned int)orb); 619 - 620 return NULL; 621 } 622 623 /* 624 - * This function finds the sbp2_command for a given outstanding SCpnt. 625 - * Only looks at the inuse list. 626 - * Must be called with scsi_id->sbp2_command_orb_lock held. 627 */ 628 static struct sbp2_command_info *sbp2util_find_command_for_SCpnt( 629 - struct scsi_id_instance_data *scsi_id, void *SCpnt) 630 { 631 - struct sbp2_command_info *command; 632 633 - if (!list_empty(&scsi_id->sbp2_command_orb_inuse)) 634 - list_for_each_entry(command, &scsi_id->sbp2_command_orb_inuse, list) 635 - if (command->Current_SCpnt == SCpnt) 636 - return command; 637 return NULL; 638 } 639 640 - /* 641 - * This function allocates a command orb used to send a scsi command. 
642 - */ 643 static struct sbp2_command_info *sbp2util_allocate_command_orb( 644 - struct scsi_id_instance_data *scsi_id, 645 - struct scsi_cmnd *Current_SCpnt, 646 - void (*Current_done)(struct scsi_cmnd *)) 647 { 648 struct list_head *lh; 649 - struct sbp2_command_info *command = NULL; 650 unsigned long flags; 651 652 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 653 - if (!list_empty(&scsi_id->sbp2_command_orb_completed)) { 654 - lh = scsi_id->sbp2_command_orb_completed.next; 655 list_del(lh); 656 - command = list_entry(lh, struct sbp2_command_info, list); 657 - command->Current_done = Current_done; 658 - command->Current_SCpnt = Current_SCpnt; 659 - list_add_tail(&command->list, &scsi_id->sbp2_command_orb_inuse); 660 - } else { 661 SBP2_ERR("%s: no orbs available", __FUNCTION__); 662 - } 663 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 664 - return command; 665 - } 666 - 667 - /* Free our DMA's */ 668 - static void sbp2util_free_command_dma(struct sbp2_command_info *command) 669 - { 670 - struct scsi_id_instance_data *scsi_id = 671 - (struct scsi_id_instance_data *)command->Current_SCpnt->device->host->hostdata[0]; 672 - struct hpsb_host *host; 673 - 674 - if (!scsi_id) { 675 - SBP2_ERR("%s: scsi_id == NULL", __FUNCTION__); 676 - return; 677 - } 678 - 679 - host = scsi_id->ud->ne->host; 680 - 681 - if (command->cmd_dma) { 682 - if (command->dma_type == CMD_DMA_SINGLE) { 683 - pci_unmap_single(host->pdev, command->cmd_dma, 684 - command->dma_size, command->dma_dir); 685 - SBP2_DMA_FREE("single bulk"); 686 - } else if (command->dma_type == CMD_DMA_PAGE) { 687 - pci_unmap_page(host->pdev, command->cmd_dma, 688 - command->dma_size, command->dma_dir); 689 - SBP2_DMA_FREE("single page"); 690 - } /* XXX: Check for CMD_DMA_NONE bug */ 691 - command->dma_type = CMD_DMA_NONE; 692 - command->cmd_dma = 0; 693 - } 694 - 695 - if (command->sge_buffer) { 696 - pci_unmap_sg(host->pdev, command->sge_buffer, 697 - command->dma_size, 
command->dma_dir); 698 - SBP2_DMA_FREE("scatter list"); 699 - command->sge_buffer = NULL; 700 - } 701 } 702 703 /* 704 - * This function moves a command to the completed orb list. 705 - * Must be called with scsi_id->sbp2_command_orb_lock held. 706 */ 707 - static void sbp2util_mark_command_completed( 708 - struct scsi_id_instance_data *scsi_id, 709 - struct sbp2_command_info *command) 710 { 711 - list_del(&command->list); 712 - sbp2util_free_command_dma(command); 713 - list_add_tail(&command->list, &scsi_id->sbp2_command_orb_completed); 714 } 715 716 /* 717 - * Is scsi_id valid? Is the 1394 node still present? 718 */ 719 - static inline int sbp2util_node_is_available(struct scsi_id_instance_data *scsi_id) 720 { 721 - return scsi_id && scsi_id->ne && !scsi_id->ne->in_limbo; 722 } 723 724 /********************************************* 725 * IEEE-1394 core driver stack related section 726 *********************************************/ 727 - static struct scsi_id_instance_data *sbp2_alloc_device(struct unit_directory *ud); 728 729 static int sbp2_probe(struct device *dev) 730 { 731 struct unit_directory *ud; 732 - struct scsi_id_instance_data *scsi_id; 733 - 734 - SBP2_DEBUG_ENTER(); 735 736 ud = container_of(dev, struct unit_directory, device); 737 ··· 642 if (ud->flags & UNIT_DIRECTORY_HAS_LUN_DIRECTORY) 643 return -ENODEV; 644 645 - scsi_id = sbp2_alloc_device(ud); 646 - 647 - if (!scsi_id) 648 return -ENOMEM; 649 650 - sbp2_parse_unit_directory(scsi_id, ud); 651 - 652 - return sbp2_start_device(scsi_id); 653 } 654 655 static int sbp2_remove(struct device *dev) 656 { 657 struct unit_directory *ud; 658 - struct scsi_id_instance_data *scsi_id; 659 struct scsi_device *sdev; 660 661 - SBP2_DEBUG_ENTER(); 662 - 663 ud = container_of(dev, struct unit_directory, device); 664 - scsi_id = ud->device.driver_data; 665 - if (!scsi_id) 666 return 0; 667 668 - if (scsi_id->scsi_host) { 669 /* Get rid of enqueued commands if there is no chance to 670 * send them. 
*/ 671 - if (!sbp2util_node_is_available(scsi_id)) 672 - sbp2scsi_complete_all_commands(scsi_id, DID_NO_CONNECT); 673 - /* scsi_remove_device() will trigger shutdown functions of SCSI 674 * highlevel drivers which would deadlock if blocked. */ 675 - atomic_set(&scsi_id->state, SBP2LU_STATE_IN_SHUTDOWN); 676 - scsi_unblock_requests(scsi_id->scsi_host); 677 } 678 - sdev = scsi_id->sdev; 679 if (sdev) { 680 - scsi_id->sdev = NULL; 681 scsi_remove_device(sdev); 682 } 683 684 - sbp2_logout_device(scsi_id); 685 - sbp2_remove_device(scsi_id); 686 687 return 0; 688 } 689 690 static int sbp2_update(struct unit_directory *ud) 691 { 692 - struct scsi_id_instance_data *scsi_id = ud->device.driver_data; 693 694 - SBP2_DEBUG_ENTER(); 695 696 - if (sbp2_reconnect_device(scsi_id)) { 697 - 698 - /* 699 - * Ok, reconnect has failed. Perhaps we didn't 700 - * reconnect fast enough. Try doing a regular login, but 701 - * first do a logout just in case of any weirdness. 702 - */ 703 - sbp2_logout_device(scsi_id); 704 - 705 - if (sbp2_login_device(scsi_id)) { 706 /* Login failed too, just fail, and the backend 707 * will call our sbp2_remove for us */ 708 SBP2_ERR("Failed to reconnect to sbp2 device!"); ··· 701 } 702 } 703 704 - /* Set max retries to something large on the device. */ 705 - sbp2_set_busy_timeout(scsi_id); 706 707 - /* Do a SBP-2 fetch agent reset. */ 708 - sbp2_agent_reset(scsi_id, 1); 709 - 710 - /* Get the max speed and packet size that we can use. */ 711 - sbp2_max_speed_and_size(scsi_id); 712 - 713 - /* Complete any pending commands with busy (so they get 714 - * retried) and remove them from our queue 715 - */ 716 - sbp2scsi_complete_all_commands(scsi_id, DID_BUS_BUSY); 717 718 /* Accept new commands unless there was another bus reset in the 719 * meantime. 
*/ 720 - if (hpsb_node_entry_valid(scsi_id->ne)) { 721 - atomic_set(&scsi_id->state, SBP2LU_STATE_RUNNING); 722 - scsi_unblock_requests(scsi_id->scsi_host); 723 } 724 return 0; 725 } 726 727 - /* This functions is called by the sbp2_probe, for each new device. We now 728 - * allocate one scsi host for each scsi_id (unit directory). */ 729 - static struct scsi_id_instance_data *sbp2_alloc_device(struct unit_directory *ud) 730 { 731 - struct sbp2scsi_host_info *hi; 732 - struct Scsi_Host *scsi_host = NULL; 733 - struct scsi_id_instance_data *scsi_id = NULL; 734 735 - SBP2_DEBUG_ENTER(); 736 - 737 - scsi_id = kzalloc(sizeof(*scsi_id), GFP_KERNEL); 738 - if (!scsi_id) { 739 - SBP2_ERR("failed to create scsi_id"); 740 goto failed_alloc; 741 } 742 743 - scsi_id->ne = ud->ne; 744 - scsi_id->ud = ud; 745 - scsi_id->speed_code = IEEE1394_SPEED_100; 746 - scsi_id->max_payload_size = sbp2_speedto_max_payload[IEEE1394_SPEED_100]; 747 - scsi_id->status_fifo_addr = CSR1212_INVALID_ADDR_SPACE; 748 - INIT_LIST_HEAD(&scsi_id->sbp2_command_orb_inuse); 749 - INIT_LIST_HEAD(&scsi_id->sbp2_command_orb_completed); 750 - INIT_LIST_HEAD(&scsi_id->scsi_list); 751 - spin_lock_init(&scsi_id->sbp2_command_orb_lock); 752 - atomic_set(&scsi_id->state, SBP2LU_STATE_RUNNING); 753 - INIT_DELAYED_WORK(&scsi_id->protocol_work, NULL); 754 755 - ud->device.driver_data = scsi_id; 756 757 hi = hpsb_get_hostinfo(&sbp2_highlevel, ud->ne->host); 758 if (!hi) { 759 - hi = hpsb_create_hostinfo(&sbp2_highlevel, ud->ne->host, sizeof(*hi)); 760 if (!hi) { 761 SBP2_ERR("failed to allocate hostinfo"); 762 goto failed_alloc; 763 } 764 - SBP2_DEBUG("sbp2_alloc_device: allocated hostinfo"); 765 hi->host = ud->ne->host; 766 - INIT_LIST_HEAD(&hi->scsi_ids); 767 768 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 769 /* Handle data movement if physical dma is not ··· 773 goto failed_alloc; 774 } 775 776 - scsi_id->hi = hi; 777 778 - list_add_tail(&scsi_id->scsi_list, &hi->scsi_ids); 779 780 /* Register the status FIFO address 
range. We could use the same FIFO 781 * for targets at different nodes. However we need different FIFOs per ··· 785 * then be performed as unified transactions. This slightly reduces 786 * bandwidth usage, and some Prolific based devices seem to require it. 787 */ 788 - scsi_id->status_fifo_addr = hpsb_allocate_and_register_addrspace( 789 &sbp2_highlevel, ud->ne->host, &sbp2_ops, 790 sizeof(struct sbp2_status_block), sizeof(quadlet_t), 791 ud->ne->host->low_addr_space, CSR1212_ALL_SPACE_END); 792 - if (scsi_id->status_fifo_addr == CSR1212_INVALID_ADDR_SPACE) { 793 SBP2_ERR("failed to allocate status FIFO address range"); 794 goto failed_alloc; 795 } 796 797 - /* Register our host with the SCSI stack. */ 798 - scsi_host = scsi_host_alloc(&scsi_driver_template, 799 - sizeof(unsigned long)); 800 - if (!scsi_host) { 801 SBP2_ERR("failed to register scsi host"); 802 goto failed_alloc; 803 } 804 805 - scsi_host->hostdata[0] = (unsigned long)scsi_id; 806 807 - if (!scsi_add_host(scsi_host, &ud->device)) { 808 - scsi_id->scsi_host = scsi_host; 809 - return scsi_id; 810 } 811 812 SBP2_ERR("failed to add scsi host"); 813 - scsi_host_put(scsi_host); 814 815 failed_alloc: 816 - sbp2_remove_device(scsi_id); 817 return NULL; 818 } 819 820 static void sbp2_host_reset(struct hpsb_host *host) 821 { 822 - struct sbp2scsi_host_info *hi; 823 - struct scsi_id_instance_data *scsi_id; 824 825 hi = hpsb_get_hostinfo(&sbp2_highlevel, host); 826 if (!hi) 827 return; 828 - list_for_each_entry(scsi_id, &hi->scsi_ids, scsi_list) 829 - if (likely(atomic_read(&scsi_id->state) != 830 SBP2LU_STATE_IN_SHUTDOWN)) { 831 - atomic_set(&scsi_id->state, SBP2LU_STATE_IN_RESET); 832 - scsi_block_requests(scsi_id->scsi_host); 833 } 834 } 835 836 - /* 837 - * This function is where we first pull the node unique ids, and then 838 - * allocate memory and register a SBP-2 device. 
839 - */ 840 - static int sbp2_start_device(struct scsi_id_instance_data *scsi_id) 841 { 842 - struct sbp2scsi_host_info *hi = scsi_id->hi; 843 int error; 844 845 - SBP2_DEBUG_ENTER(); 846 - 847 - /* Login FIFO DMA */ 848 - scsi_id->login_response = 849 - pci_alloc_consistent(hi->host->pdev, 850 sizeof(struct sbp2_login_response), 851 - &scsi_id->login_response_dma); 852 - if (!scsi_id->login_response) 853 goto alloc_fail; 854 - SBP2_DMA_ALLOC("consistent DMA region for login FIFO"); 855 856 - /* Query logins ORB DMA */ 857 - scsi_id->query_logins_orb = 858 - pci_alloc_consistent(hi->host->pdev, 859 sizeof(struct sbp2_query_logins_orb), 860 - &scsi_id->query_logins_orb_dma); 861 - if (!scsi_id->query_logins_orb) 862 goto alloc_fail; 863 - SBP2_DMA_ALLOC("consistent DMA region for query logins ORB"); 864 865 - /* Query logins response DMA */ 866 - scsi_id->query_logins_response = 867 - pci_alloc_consistent(hi->host->pdev, 868 sizeof(struct sbp2_query_logins_response), 869 - &scsi_id->query_logins_response_dma); 870 - if (!scsi_id->query_logins_response) 871 goto alloc_fail; 872 - SBP2_DMA_ALLOC("consistent DMA region for query logins response"); 873 874 - /* Reconnect ORB DMA */ 875 - scsi_id->reconnect_orb = 876 - pci_alloc_consistent(hi->host->pdev, 877 sizeof(struct sbp2_reconnect_orb), 878 - &scsi_id->reconnect_orb_dma); 879 - if (!scsi_id->reconnect_orb) 880 goto alloc_fail; 881 - SBP2_DMA_ALLOC("consistent DMA region for reconnect ORB"); 882 883 - /* Logout ORB DMA */ 884 - scsi_id->logout_orb = 885 - pci_alloc_consistent(hi->host->pdev, 886 sizeof(struct sbp2_logout_orb), 887 - &scsi_id->logout_orb_dma); 888 - if (!scsi_id->logout_orb) 889 goto alloc_fail; 890 - SBP2_DMA_ALLOC("consistent DMA region for logout ORB"); 891 892 - /* Login ORB DMA */ 893 - scsi_id->login_orb = 894 - pci_alloc_consistent(hi->host->pdev, 895 sizeof(struct sbp2_login_orb), 896 - &scsi_id->login_orb_dma); 897 - if (!scsi_id->login_orb) 898 goto alloc_fail; 899 - 
SBP2_DMA_ALLOC("consistent DMA region for login ORB"); 900 901 - SBP2_DEBUG("New SBP-2 device inserted, SCSI ID = %x", scsi_id->ud->id); 902 - 903 - /* 904 - * Create our command orb pool 905 - */ 906 - if (sbp2util_create_command_orb_pool(scsi_id)) { 907 SBP2_ERR("sbp2util_create_command_orb_pool failed!"); 908 - sbp2_remove_device(scsi_id); 909 return -ENOMEM; 910 } 911 912 - /* Schedule a timeout here. The reason is that we may be so close 913 - * to a bus reset, that the device is not available for logins. 914 - * This can happen when the bus reset is caused by the host 915 - * connected to the sbp2 device being removed. That host would 916 - * have a certain amount of time to relogin before the sbp2 device 917 - * allows someone else to login instead. One second makes sense. */ 918 if (msleep_interruptible(1000)) { 919 - sbp2_remove_device(scsi_id); 920 return -EINTR; 921 } 922 923 - /* 924 - * Login to the sbp-2 device 925 - */ 926 - if (sbp2_login_device(scsi_id)) { 927 - /* Login failed, just remove the device. */ 928 - sbp2_remove_device(scsi_id); 929 return -EBUSY; 930 } 931 932 - /* 933 - * Set max retries to something large on the device 934 - */ 935 - sbp2_set_busy_timeout(scsi_id); 936 937 - /* 938 - * Do a SBP-2 fetch agent reset 939 - */ 940 - sbp2_agent_reset(scsi_id, 1); 941 - 942 - /* 943 - * Get the max speed and packet size that we can use 944 - */ 945 - sbp2_max_speed_and_size(scsi_id); 946 - 947 - /* Add this device to the scsi layer now */ 948 - error = scsi_add_device(scsi_id->scsi_host, 0, scsi_id->ud->id, 0); 949 if (error) { 950 SBP2_ERR("scsi_add_device failed"); 951 - sbp2_logout_device(scsi_id); 952 - sbp2_remove_device(scsi_id); 953 return error; 954 } 955 956 return 0; 957 958 alloc_fail: 959 - SBP2_ERR("Could not allocate memory for scsi_id"); 960 - sbp2_remove_device(scsi_id); 961 return -ENOMEM; 962 } 963 964 - /* 965 - * This function removes an sbp2 device from the sbp2scsi_host_info struct. 
966 - */
967 - static void sbp2_remove_device(struct scsi_id_instance_data *scsi_id)
968 {
969 - struct sbp2scsi_host_info *hi;
970
971 - SBP2_DEBUG_ENTER();
972 -
973 - if (!scsi_id)
974 return;
975
976 - hi = scsi_id->hi;
977
978 - /* This will remove our scsi device aswell */
979 - if (scsi_id->scsi_host) {
980 - scsi_remove_host(scsi_id->scsi_host);
981 - scsi_host_put(scsi_id->scsi_host);
982 }
983 flush_scheduled_work();
984 - sbp2util_remove_command_orb_pool(scsi_id);
985
986 - list_del(&scsi_id->scsi_list);
987
988 - if (scsi_id->login_response) {
989 - pci_free_consistent(hi->host->pdev,
990 sizeof(struct sbp2_login_response),
991 - scsi_id->login_response,
992 - scsi_id->login_response_dma);
993 - SBP2_DMA_FREE("single login FIFO");
994 - }
995 -
996 - if (scsi_id->login_orb) {
997 - pci_free_consistent(hi->host->pdev,
998 sizeof(struct sbp2_login_orb),
999 - scsi_id->login_orb,
1000 - scsi_id->login_orb_dma);
1001 - SBP2_DMA_FREE("single login ORB");
1002 - }
1003 -
1004 - if (scsi_id->reconnect_orb) {
1005 - pci_free_consistent(hi->host->pdev,
1006 sizeof(struct sbp2_reconnect_orb),
1007 - scsi_id->reconnect_orb,
1008 - scsi_id->reconnect_orb_dma);
1009 - SBP2_DMA_FREE("single reconnect orb");
1010 - }
1011 -
1012 - if (scsi_id->logout_orb) {
1013 - pci_free_consistent(hi->host->pdev,
1014 sizeof(struct sbp2_logout_orb),
1015 - scsi_id->logout_orb,
1016 - scsi_id->logout_orb_dma);
1017 - SBP2_DMA_FREE("single logout orb");
1018 - }
1019 -
1020 - if (scsi_id->query_logins_orb) {
1021 - pci_free_consistent(hi->host->pdev,
1022 sizeof(struct sbp2_query_logins_orb),
1023 - scsi_id->query_logins_orb,
1024 - scsi_id->query_logins_orb_dma);
1025 - SBP2_DMA_FREE("single query logins orb");
1026 - }
1027 -
1028 - if (scsi_id->query_logins_response) {
1029 - pci_free_consistent(hi->host->pdev,
1030 sizeof(struct sbp2_query_logins_response),
1031 - scsi_id->query_logins_response,
1032 - scsi_id->query_logins_response_dma);
1033 - SBP2_DMA_FREE("single query logins data");
1034 - }
1035
1036 - if (scsi_id->status_fifo_addr != CSR1212_INVALID_ADDR_SPACE)
1037 hpsb_unregister_addrspace(&sbp2_highlevel, hi->host,
1038 - scsi_id->status_fifo_addr);
1039
1040 - scsi_id->ud->device.driver_data = NULL;
1041
1042 if (hi)
1043 module_put(hi->host->driver->owner);
1044
1045 - SBP2_DEBUG("SBP-2 device removed, SCSI ID = %d", scsi_id->ud->id);
1046 -
1047 - kfree(scsi_id);
1048 }
1049
1050 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA
1051 /*
1052 - * This function deals with physical dma write requests (for adapters that do not support
1053 - * physical dma in hardware). Mostly just here for debugging...
1054 */
1055 static int sbp2_handle_physdma_write(struct hpsb_host *host, int nodeid,
1056 int destid, quadlet_t *data, u64 addr,
1057 size_t length, u16 flags)
1058 {
1059 -
1060 - /*
1061 - * Manually put the data in the right place.
1062 - */
1063 memcpy(bus_to_virt((u32) addr), data, length);
1064 - sbp2util_packet_dump(data, length, "sbp2 phys dma write by device",
1065 - (u32) addr);
1066 return RCODE_COMPLETE;
1067 }
1068
1069 /*
1070 - * This function deals with physical dma read requests (for adapters that do not support
1071 - * physical dma in hardware). Mostly just here for debugging...
1072 */
1073 static int sbp2_handle_physdma_read(struct hpsb_host *host, int nodeid,
1074 quadlet_t *data, u64 addr, size_t length,
1075 u16 flags)
1076 {
1077 -
1078 - /*
1079 - * Grab data from memory and send a read response.
1080 - */
1081 memcpy(data, bus_to_virt((u32) addr), length);
1082 - sbp2util_packet_dump(data, length, "sbp2 phys dma read by device",
1083 - (u32) addr);
1084 return RCODE_COMPLETE;
1085 }
1086 #endif
···
1001 * SBP-2 protocol related section
1002 **************************************/
1003
1004 - /*
1005 - * This function queries the device for the maximum concurrent logins it
1006 - * supports.
1007 - */
1008 - static int sbp2_query_logins(struct scsi_id_instance_data *scsi_id)
1009 {
1010 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1011 quadlet_t data[2];
1012 int max_logins;
1013 int active_logins;
1014
1015 - SBP2_DEBUG_ENTER();
1016
1017 - scsi_id->query_logins_orb->reserved1 = 0x0;
1018 - scsi_id->query_logins_orb->reserved2 = 0x0;
1019
1020 - scsi_id->query_logins_orb->query_response_lo = scsi_id->query_logins_response_dma;
1021 - scsi_id->query_logins_orb->query_response_hi = ORB_SET_NODE_ID(hi->host->node_id);
1022
1023 - scsi_id->query_logins_orb->lun_misc = ORB_SET_FUNCTION(SBP2_QUERY_LOGINS_REQUEST);
1024 - scsi_id->query_logins_orb->lun_misc |= ORB_SET_NOTIFY(1);
1025 - scsi_id->query_logins_orb->lun_misc |= ORB_SET_LUN(scsi_id->sbp2_lun);
1026
1027 - scsi_id->query_logins_orb->reserved_resp_length =
1028 - ORB_SET_QUERY_LOGINS_RESP_LENGTH(sizeof(struct sbp2_query_logins_response));
1029
1030 - scsi_id->query_logins_orb->status_fifo_hi =
1031 - ORB_SET_STATUS_FIFO_HI(scsi_id->status_fifo_addr, hi->host->node_id);
1032 - scsi_id->query_logins_orb->status_fifo_lo =
1033 - ORB_SET_STATUS_FIFO_LO(scsi_id->status_fifo_addr);
1034 -
1035 - sbp2util_cpu_to_be32_buffer(scsi_id->query_logins_orb, sizeof(struct sbp2_query_logins_orb));
1036 -
1037 - sbp2util_packet_dump(scsi_id->query_logins_orb, sizeof(struct sbp2_query_logins_orb),
1038 - "sbp2 query logins orb", scsi_id->query_logins_orb_dma);
1039 -
1040 - memset(scsi_id->query_logins_response, 0, sizeof(struct sbp2_query_logins_response));
1041
1042 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1043 - data[1] = scsi_id->query_logins_orb_dma;
1044 sbp2util_cpu_to_be32_buffer(data, 8);
1045
1046 - hpsb_node_write(scsi_id->ne, scsi_id->sbp2_management_agent_addr, data, 8);
1047
1048 - if (sbp2util_access_timeout(scsi_id, 2*HZ)) {
1049 SBP2_INFO("Error querying logins to SBP-2 device - timed out");
1050 return -EIO;
1051 }
1052
1053 - if (scsi_id->status_block.ORB_offset_lo != scsi_id->query_logins_orb_dma) {
1054 SBP2_INFO("Error querying logins to SBP-2 device - timed out");
1055 return -EIO;
1056 }
1057
1058 - if (STATUS_TEST_RDS(scsi_id->status_block.ORB_offset_hi_misc)) {
1059 SBP2_INFO("Error querying logins to SBP-2 device - failed");
1060 return -EIO;
1061 }
1062
1063 - sbp2util_cpu_to_be32_buffer(scsi_id->query_logins_response, sizeof(struct sbp2_query_logins_response));
1064
1065 - SBP2_DEBUG("length_max_logins = %x",
1066 - (unsigned int)scsi_id->query_logins_response->length_max_logins);
1067 -
1068 - max_logins = RESPONSE_GET_MAX_LOGINS(scsi_id->query_logins_response->length_max_logins);
1069 SBP2_INFO("Maximum concurrent logins supported: %d", max_logins);
1070
1071 - active_logins = RESPONSE_GET_ACTIVE_LOGINS(scsi_id->query_logins_response->length_max_logins);
1072 SBP2_INFO("Number of active logins: %d", active_logins);
1073
1074 if (active_logins >= max_logins) {
···
1073 return 0;
1074 }
1075
1076 - /*
1077 - * This function is called in order to login to a particular SBP-2 device,
1078 - * after a bus reset.
1079 - */
1080 - static int sbp2_login_device(struct scsi_id_instance_data *scsi_id)
1081 {
1082 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1083 quadlet_t data[2];
1084
1085 - SBP2_DEBUG_ENTER();
1086
1087 - if (!scsi_id->login_orb) {
1088 - SBP2_DEBUG("%s: login_orb not alloc'd!", __FUNCTION__);
1089 return -EIO;
1090 }
1091
1092 - if (!exclusive_login) {
1093 - if (sbp2_query_logins(scsi_id)) {
1094 - SBP2_INFO("Device does not support any more concurrent logins");
1095 - return -EIO;
1096 - }
1097 - }
1098
1099 - /* Set-up login ORB, assume no password */
1100 - scsi_id->login_orb->password_hi = 0;
1101 - scsi_id->login_orb->password_lo = 0;
1102
1103 - scsi_id->login_orb->login_response_lo = scsi_id->login_response_dma;
1104 - scsi_id->login_orb->login_response_hi = ORB_SET_NODE_ID(hi->host->node_id);
1105
1106 - scsi_id->login_orb->lun_misc = ORB_SET_FUNCTION(SBP2_LOGIN_REQUEST);
1107 - scsi_id->login_orb->lun_misc |= ORB_SET_RECONNECT(0); /* One second reconnect time */
1108 - scsi_id->login_orb->lun_misc |= ORB_SET_EXCLUSIVE(exclusive_login); /* Exclusive access to device */
1109 - scsi_id->login_orb->lun_misc |= ORB_SET_NOTIFY(1); /* Notify us of login complete */
1110 - scsi_id->login_orb->lun_misc |= ORB_SET_LUN(scsi_id->sbp2_lun);
1111 -
1112 - scsi_id->login_orb->passwd_resp_lengths =
1113 ORB_SET_LOGIN_RESP_LENGTH(sizeof(struct sbp2_login_response));
1114
1115 - scsi_id->login_orb->status_fifo_hi =
1116 - ORB_SET_STATUS_FIFO_HI(scsi_id->status_fifo_addr, hi->host->node_id);
1117 - scsi_id->login_orb->status_fifo_lo =
1118 - ORB_SET_STATUS_FIFO_LO(scsi_id->status_fifo_addr);
1119
1120 - sbp2util_cpu_to_be32_buffer(scsi_id->login_orb, sizeof(struct sbp2_login_orb));
1121
1122 - sbp2util_packet_dump(scsi_id->login_orb, sizeof(struct sbp2_login_orb),
1123 - "sbp2 login orb", scsi_id->login_orb_dma);
1124 -
1125 - memset(scsi_id->login_response, 0, sizeof(struct sbp2_login_response));
1126
1127 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1128 - data[1] = scsi_id->login_orb_dma;
1129 sbp2util_cpu_to_be32_buffer(data, 8);
1130
1131 - hpsb_node_write(scsi_id->ne, scsi_id->sbp2_management_agent_addr, data, 8);
1132
1133 - /*
1134 - * Wait for login status (up to 20 seconds)...
1135 - */
1136 - if (sbp2util_access_timeout(scsi_id, 20*HZ)) {
1137 SBP2_ERR("Error logging into SBP-2 device - timed out");
1138 return -EIO;
1139 }
1140
1141 - /*
1142 - * Sanity. Make sure status returned matches login orb.
1143 - */
1144 - if (scsi_id->status_block.ORB_offset_lo != scsi_id->login_orb_dma) {
1145 SBP2_ERR("Error logging into SBP-2 device - timed out");
1146 return -EIO;
1147 }
1148
1149 - if (STATUS_TEST_RDS(scsi_id->status_block.ORB_offset_hi_misc)) {
1150 SBP2_ERR("Error logging into SBP-2 device - failed");
1151 return -EIO;
1152 }
1153
1154 - /*
1155 - * Byte swap the login response, for use when reconnecting or
1156 - * logging out.
1157 - */
1158 - sbp2util_cpu_to_be32_buffer(scsi_id->login_response, sizeof(struct sbp2_login_response));
1159 -
1160 - /*
1161 - * Grab our command block agent address from the login response.
1162 - */
1163 - SBP2_DEBUG("command_block_agent_hi = %x",
1164 - (unsigned int)scsi_id->login_response->command_block_agent_hi);
1165 - SBP2_DEBUG("command_block_agent_lo = %x",
1166 - (unsigned int)scsi_id->login_response->command_block_agent_lo);
1167 -
1168 - scsi_id->sbp2_command_block_agent_addr =
1169 - ((u64)scsi_id->login_response->command_block_agent_hi) << 32;
1170 - scsi_id->sbp2_command_block_agent_addr |= ((u64)scsi_id->login_response->command_block_agent_lo);
1171 - scsi_id->sbp2_command_block_agent_addr &= 0x0000ffffffffffffULL;
1172
1173 SBP2_INFO("Logged into SBP-2 device");
1174 return 0;
1175 }
1176
1177 - /*
1178 - * This function is called in order to logout from a particular SBP-2
1179 - * device, usually called during driver unload.
1180 - */
1181 - static int sbp2_logout_device(struct scsi_id_instance_data *scsi_id)
1182 {
1183 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1184 quadlet_t data[2];
1185 int error;
1186
1187 - SBP2_DEBUG_ENTER();
1188
1189 - /*
1190 - * Set-up logout ORB
1191 - */
1192 - scsi_id->logout_orb->reserved1 = 0x0;
1193 - scsi_id->logout_orb->reserved2 = 0x0;
1194 - scsi_id->logout_orb->reserved3 = 0x0;
1195 - scsi_id->logout_orb->reserved4 = 0x0;
1196
1197 - scsi_id->logout_orb->login_ID_misc = ORB_SET_FUNCTION(SBP2_LOGOUT_REQUEST);
1198 - scsi_id->logout_orb->login_ID_misc |= ORB_SET_LOGIN_ID(scsi_id->login_response->length_login_ID);
1199
1200 - /* Notify us when complete */
1201 - scsi_id->logout_orb->login_ID_misc |= ORB_SET_NOTIFY(1);
1202
1203 - scsi_id->logout_orb->reserved5 = 0x0;
1204 - scsi_id->logout_orb->status_fifo_hi =
1205 - ORB_SET_STATUS_FIFO_HI(scsi_id->status_fifo_addr, hi->host->node_id);
1206 - scsi_id->logout_orb->status_fifo_lo =
1207 - ORB_SET_STATUS_FIFO_LO(scsi_id->status_fifo_addr);
1208 -
1209 - /*
1210 - * Byte swap ORB if necessary
1211 - */
1212 - sbp2util_cpu_to_be32_buffer(scsi_id->logout_orb, sizeof(struct sbp2_logout_orb));
1213 -
1214 - sbp2util_packet_dump(scsi_id->logout_orb, sizeof(struct sbp2_logout_orb),
1215 - "sbp2 logout orb", scsi_id->logout_orb_dma);
1216 -
1217 - /*
1218 - * Ok, let's write to the target's management agent register
1219 - */
1220 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1221 - data[1] = scsi_id->logout_orb_dma;
1222 sbp2util_cpu_to_be32_buffer(data, 8);
1223
1224 - error = hpsb_node_write(scsi_id->ne,
1225 - scsi_id->sbp2_management_agent_addr, data, 8);
1226 if (error)
1227 return error;
1228
1229 - /* Wait for device to logout...1 second. */
1230 - if (sbp2util_access_timeout(scsi_id, HZ))
1231 return -EIO;
1232
1233 SBP2_INFO("Logged out of SBP-2 device");
1234 return 0;
1235 }
1236
1237 - /*
1238 - * This function is called in order to reconnect to a particular SBP-2
1239 - * device, after a bus reset.
1240 - */
1241 - static int sbp2_reconnect_device(struct scsi_id_instance_data *scsi_id)
1242 {
1243 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1244 quadlet_t data[2];
1245 int error;
1246
1247 - SBP2_DEBUG_ENTER();
1248
1249 - /*
1250 - * Set-up reconnect ORB
1251 - */
1252 - scsi_id->reconnect_orb->reserved1 = 0x0;
1253 - scsi_id->reconnect_orb->reserved2 = 0x0;
1254 - scsi_id->reconnect_orb->reserved3 = 0x0;
1255 - scsi_id->reconnect_orb->reserved4 = 0x0;
1256
1257 - scsi_id->reconnect_orb->login_ID_misc = ORB_SET_FUNCTION(SBP2_RECONNECT_REQUEST);
1258 - scsi_id->reconnect_orb->login_ID_misc |=
1259 - ORB_SET_LOGIN_ID(scsi_id->login_response->length_login_ID);
1260
1261 - /* Notify us when complete */
1262 - scsi_id->reconnect_orb->login_ID_misc |= ORB_SET_NOTIFY(1);
1263 -
1264 - scsi_id->reconnect_orb->reserved5 = 0x0;
1265 - scsi_id->reconnect_orb->status_fifo_hi =
1266 - ORB_SET_STATUS_FIFO_HI(scsi_id->status_fifo_addr, hi->host->node_id);
1267 - scsi_id->reconnect_orb->status_fifo_lo =
1268 - ORB_SET_STATUS_FIFO_LO(scsi_id->status_fifo_addr);
1269 -
1270 - /*
1271 - * Byte swap ORB if necessary
1272 - */
1273 - sbp2util_cpu_to_be32_buffer(scsi_id->reconnect_orb, sizeof(struct sbp2_reconnect_orb));
1274 -
1275 - sbp2util_packet_dump(scsi_id->reconnect_orb, sizeof(struct sbp2_reconnect_orb),
1276 - "sbp2 reconnect orb", scsi_id->reconnect_orb_dma);
1277
1278 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1279 - data[1] = scsi_id->reconnect_orb_dma;
1280 sbp2util_cpu_to_be32_buffer(data, 8);
1281
1282 - error = hpsb_node_write(scsi_id->ne,
1283 - scsi_id->sbp2_management_agent_addr, data, 8);
1284 if (error)
1285 return error;
1286
1287 - /*
1288 - * Wait for reconnect status (up to 1 second)...
1289 - */
1290 - if (sbp2util_access_timeout(scsi_id, HZ)) {
1291 SBP2_ERR("Error reconnecting to SBP-2 device - timed out");
1292 return -EIO;
1293 }
1294
1295 - /*
1296 - * Sanity. Make sure status returned matches reconnect orb.
1297 - */
1298 - if (scsi_id->status_block.ORB_offset_lo != scsi_id->reconnect_orb_dma) {
1299 SBP2_ERR("Error reconnecting to SBP-2 device - timed out");
1300 return -EIO;
1301 }
1302
1303 - if (STATUS_TEST_RDS(scsi_id->status_block.ORB_offset_hi_misc)) {
1304 SBP2_ERR("Error reconnecting to SBP-2 device - failed");
1305 return -EIO;
1306 }
1307
1308 - HPSB_DEBUG("Reconnected to SBP-2 device");
1309 return 0;
1310 }
1311
1312 /*
1313 - * This function is called in order to set the busy timeout (number of
1314 - * retries to attempt) on the sbp2 device.
1315 */
1316 - static int sbp2_set_busy_timeout(struct scsi_id_instance_data *scsi_id)
1317 {
1318 quadlet_t data;
1319
1320 - SBP2_DEBUG_ENTER();
1321 -
1322 data = cpu_to_be32(SBP2_BUSY_TIMEOUT_VALUE);
1323 - if (hpsb_node_write(scsi_id->ne, SBP2_BUSY_TIMEOUT_ADDRESS, &data, 4))
1324 SBP2_ERR("%s error", __FUNCTION__);
1325 return 0;
1326 }
1327
1328 - /*
1329 - * This function is called to parse sbp2 device's config rom unit
1330 - * directory. Used to determine things like sbp2 management agent offset,
1331 - * and command set used (SCSI or RBC).
1332 - */
1333 - static void sbp2_parse_unit_directory(struct scsi_id_instance_data *scsi_id,
1334 struct unit_directory *ud)
1335 {
1336 struct csr1212_keyval *kv;
1337 struct csr1212_dentry *dentry;
1338 u64 management_agent_addr;
1339 - u32 command_set_spec_id, command_set, unit_characteristics,
1340 - firmware_revision;
1341 unsigned workarounds;
1342 int i;
1343
1344 - SBP2_DEBUG_ENTER();
1345
1346 - management_agent_addr = 0x0;
1347 - command_set_spec_id = 0x0;
1348 - command_set = 0x0;
1349 - unit_characteristics = 0x0;
1350 - firmware_revision = 0x0;
1351 -
1352 - /* Handle different fields in the unit directory, based on keys */
1353 csr1212_for_each_dir_entry(ud->ne->csr, kv, ud->ud_kv, dentry) {
1354 switch (kv->key.id) {
1355 case CSR1212_KV_ID_DEPENDENT_INFO:
1356 - if (kv->key.type == CSR1212_KV_TYPE_CSR_OFFSET) {
1357 - /* Save off the management agent address */
1358 management_agent_addr =
1359 CSR1212_REGISTER_SPACE_BASE +
1360 (kv->value.csr_offset << 2);
1361
1362 - SBP2_DEBUG("sbp2_management_agent_addr = %x",
1363 - (unsigned int)management_agent_addr);
1364 - } else if (kv->key.type == CSR1212_KV_TYPE_IMMEDIATE) {
1365 - scsi_id->sbp2_lun =
1366 - ORB_SET_LUN(kv->value.immediate);
1367 - }
1368 - break;
1369 -
1370 - case SBP2_COMMAND_SET_SPEC_ID_KEY:
1371 - /* Command spec organization */
1372 - command_set_spec_id = kv->value.immediate;
1373 - SBP2_DEBUG("sbp2_command_set_spec_id = %x",
1374 - (unsigned int)command_set_spec_id);
1375 - break;
1376 -
1377 - case SBP2_COMMAND_SET_KEY:
1378 - /* Command set used by sbp2 device */
1379 - command_set = kv->value.immediate;
1380 - SBP2_DEBUG("sbp2_command_set = %x",
1381 - (unsigned int)command_set);
1382 break;
1383
1384 case SBP2_UNIT_CHARACTERISTICS_KEY:
1385 - /*
1386 - * Unit characterisitcs (orb related stuff
1387 - * that I'm not yet paying attention to)
1388 - */
1389 unit_characteristics = kv->value.immediate;
1390 - SBP2_DEBUG("sbp2_unit_characteristics = %x",
1391 - (unsigned int)unit_characteristics);
1392 break;
1393
1394 case SBP2_FIRMWARE_REVISION_KEY:
1395 - /* Firmware revision */
1396 firmware_revision = kv->value.immediate;
1397 - SBP2_DEBUG("sbp2_firmware_revision = %x",
1398 - (unsigned int)firmware_revision);
1399 break;
1400
1401 default:
1402 break;
1403 }
1404 }
···
1329 /* We would need one SCSI host template for each target to adjust
1330 * max_sectors on the fly, therefore warn only. */
1331 if (workarounds & SBP2_WORKAROUND_128K_MAX_TRANS &&
1332 - (max_sectors * 512) > (128 * 1024))
1333 - SBP2_WARN("Node " NODE_BUS_FMT ": Bridge only supports 128KB "
1334 "max transfer size. WARNING: Current max_sectors "
1335 "setting is larger than 128KB (%d sectors)",
1336 NODE_BUS_ARGS(ud->ne->host, ud->ne->nodeid),
1337 - max_sectors);
1338
1339 /* If this is a logical unit directory entry, process the parent
1340 * to get the values. */
1341 if (ud->flags & UNIT_DIRECTORY_LUN_DIRECTORY) {
1342 - struct unit_directory *parent_ud =
1343 - container_of(ud->device.parent, struct unit_directory, device);
1344 - sbp2_parse_unit_directory(scsi_id, parent_ud);
1345 } else {
1346 - scsi_id->sbp2_management_agent_addr = management_agent_addr;
1347 - scsi_id->sbp2_command_set_spec_id = command_set_spec_id;
1348 - scsi_id->sbp2_command_set = command_set;
1349 - scsi_id->sbp2_unit_characteristics = unit_characteristics;
1350 - scsi_id->sbp2_firmware_revision = firmware_revision;
1351 - scsi_id->workarounds = workarounds;
1352 if (ud->flags & UNIT_DIRECTORY_HAS_LUN)
1353 - scsi_id->sbp2_lun = ORB_SET_LUN(ud->lun);
1354 }
1355 }
1356
···
1361 * the speed that it needs to use, and the max_rec the host supports, and
1362 * it takes care of the rest.
1363 */
1364 - static int sbp2_max_speed_and_size(struct scsi_id_instance_data *scsi_id)
1365 {
1366 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1367 u8 payload;
1368
1369 - SBP2_DEBUG_ENTER();
1370
1371 - scsi_id->speed_code =
1372 - hi->host->speed[NODEID_TO_NODE(scsi_id->ne->nodeid)];
1373 -
1374 - /* Bump down our speed if the user requested it */
1375 - if (scsi_id->speed_code > max_speed) {
1376 - scsi_id->speed_code = max_speed;
1377 - SBP2_ERR("Forcing SBP-2 max speed down to %s",
1378 - hpsb_speedto_str[scsi_id->speed_code]);
1379 }
1380
1381 /* Payload size is the lesser of what our speed supports and what
1382 * our host supports. */
1383 - payload = min(sbp2_speedto_max_payload[scsi_id->speed_code],
1384 (u8) (hi->host->csr.max_rec - 1));
1385
1386 /* If physical DMA is off, work around limitation in ohci1394:
1387 * packet size must not exceed PAGE_SIZE */
1388 - if (scsi_id->ne->host->low_addr_space < (1ULL << 32))
1389 while (SBP2_PAYLOAD_TO_BYTES(payload) + 24 > PAGE_SIZE &&
1390 payload)
1391 payload--;
1392
1393 - HPSB_DEBUG("Node " NODE_BUS_FMT ": Max speed [%s] - Max payload [%u]",
1394 - NODE_BUS_ARGS(hi->host, scsi_id->ne->nodeid),
1395 - hpsb_speedto_str[scsi_id->speed_code],
1396 - SBP2_PAYLOAD_TO_BYTES(payload));
1397
1398 - scsi_id->max_payload_size = payload;
1399 return 0;
1400 }
1401
1402 - /*
1403 - * This function is called in order to perform a SBP-2 agent reset.
1404 - */
1405 - static int sbp2_agent_reset(struct scsi_id_instance_data *scsi_id, int wait)
1406 {
1407 quadlet_t data;
1408 u64 addr;
1409 int retval;
1410 unsigned long flags;
1411
1412 - SBP2_DEBUG_ENTER();
1413 -
1414 - cancel_delayed_work(&scsi_id->protocol_work);
1415 if (wait)
1416 flush_scheduled_work();
1417
1418 data = ntohl(SBP2_AGENT_RESET_DATA);
1419 - addr = scsi_id->sbp2_command_block_agent_addr + SBP2_AGENT_RESET_OFFSET;
1420
1421 if (wait)
1422 - retval = hpsb_node_write(scsi_id->ne, addr, &data, 4);
1423 else
1424 - retval = sbp2util_node_write_no_wait(scsi_id->ne, addr, &data, 4);
1425
1426 if (retval < 0) {
1427 SBP2_ERR("hpsb_node_write failed.\n");
1428 return -EIO;
1429 }
1430
1431 - /*
1432 - * Need to make sure orb pointer is written on next command
1433 - */
1434 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags);
1435 - scsi_id->last_orb = NULL;
1436 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags);
1437
1438 return 0;
1439 }
1440
1441 static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
1442 - struct sbp2scsi_host_info *hi,
1443 - struct sbp2_command_info *command,
1444 unsigned int scsi_use_sg,
1445 struct scatterlist *sgpnt,
1446 u32 orb_direction,
1447 enum dma_data_direction dma_dir)
1448 {
1449 - command->dma_dir = dma_dir;
1450 orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id);
1451 orb->misc |= ORB_SET_DIRECTION(orb_direction);
1452
1453 - /* Special case if only one element (and less than 64KB in size) */
1454 if ((scsi_use_sg == 1) &&
1455 (sgpnt[0].length <= SBP2_MAX_SG_ELEMENT_LENGTH)) {
1456
1457 - SBP2_DEBUG("Only one s/g element");
1458 - command->dma_size = sgpnt[0].length;
1459 - command->dma_type = CMD_DMA_PAGE;
1460 - command->cmd_dma = pci_map_page(hi->host->pdev,
1461 - sgpnt[0].page,
1462 - sgpnt[0].offset,
1463 - command->dma_size,
1464 - command->dma_dir);
1465 - SBP2_DMA_ALLOC("single page scatter element");
1466
1467 - orb->data_descriptor_lo = command->cmd_dma;
1468 - orb->misc |= ORB_SET_DATA_SIZE(command->dma_size);
1469
1470 } else {
1471 struct sbp2_unrestricted_page_table *sg_element =
1472 - &command->scatter_gather_element[0];
1473 u32 sg_count, sg_len;
1474 dma_addr_t sg_addr;
1475 - int i, count = pci_map_sg(hi->host->pdev, sgpnt, scsi_use_sg,
1476 dma_dir);
1477
1478 - SBP2_DMA_ALLOC("scatter list");
1479 -
1480 - command->dma_size = scsi_use_sg;
1481 - command->sge_buffer = sgpnt;
1482
1483 /* use page tables (s/g) */
1484 orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1);
1485 - orb->data_descriptor_lo = command->sge_dma;
1486
1487 - /*
1488 - * Loop through and fill out our sbp-2 page tables
1489 - * (and split up anything too large)
1490 - */
1491 for (i = 0, sg_count = 0 ; i < count; i++, sgpnt++) {
1492 sg_len = sg_dma_len(sgpnt);
1493 sg_addr = sg_dma_address(sgpnt);
···
1488 }
1489 }
1490
1491 - /* Number of page table (s/g) elements */
1492 orb->misc |= ORB_SET_DATA_SIZE(sg_count);
1493
1494 - sbp2util_packet_dump(sg_element,
1495 - (sizeof(struct sbp2_unrestricted_page_table)) * sg_count,
1496 - "sbp2 s/g list", command->sge_dma);
1497 -
1498 - /* Byte swap page tables if necessary */
1499 sbp2util_cpu_to_be32_buffer(sg_element,
1500 - (sizeof(struct sbp2_unrestricted_page_table)) *
1501 - sg_count);
1502 }
1503 }
1504
1505 static void sbp2_prep_command_orb_no_sg(struct sbp2_command_orb *orb,
1506 - struct sbp2scsi_host_info *hi,
1507 - struct sbp2_command_info *command,
1508 struct scatterlist *sgpnt,
1509 u32 orb_direction,
1510 unsigned int scsi_request_bufflen,
1511 void *scsi_request_buffer,
1512 enum dma_data_direction dma_dir)
1513 {
1514 - command->dma_dir = dma_dir;
1515 - command->dma_size = scsi_request_bufflen;
1516 - command->dma_type = CMD_DMA_SINGLE;
1517 - command->cmd_dma = pci_map_single(hi->host->pdev, scsi_request_buffer,
1518 - command->dma_size, command->dma_dir);
1519 orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id);
1520 orb->misc |= ORB_SET_DIRECTION(orb_direction);
1521
1522 - SBP2_DMA_ALLOC("single bulk");
1523 -
1524 - /*
1525 - * Handle case where we get a command w/o s/g enabled (but
1526 - * check for transfers larger than 64K)
1527 - */
1528 if (scsi_request_bufflen <= SBP2_MAX_SG_ELEMENT_LENGTH) {
1529
1530 - orb->data_descriptor_lo = command->cmd_dma;
1531 orb->misc |= ORB_SET_DATA_SIZE(scsi_request_bufflen);
1532
1533 } else {
1534 struct sbp2_unrestricted_page_table *sg_element =
1535 - &command->scatter_gather_element[0];
1536 u32 sg_count, sg_len;
1537 dma_addr_t sg_addr;
1538
1539 - /*
1540 - * Need to turn this into page tables, since the
1541 - * buffer is too large.
1542 - */
1543 - orb->data_descriptor_lo = command->sge_dma;
1544 -
1545 - /* Use page tables (s/g) */
1546 orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1);
1547
1548 - /*
1549 - * fill out our sbp-2 page tables (and split up
1550 - * the large buffer)
1551 - */
1552 sg_count = 0;
1553 sg_len = scsi_request_bufflen;
1554 - sg_addr = command->cmd_dma;
1555 while (sg_len) {
1556 sg_element[sg_count].segment_base_lo = sg_addr;
1557 if (sg_len > SBP2_MAX_SG_ELEMENT_LENGTH) {
···
1550 sg_count++;
1551 }
1552
1553 - /* Number of page table (s/g) elements */
1554 orb->misc |= ORB_SET_DATA_SIZE(sg_count);
1555
1556 - sbp2util_packet_dump(sg_element,
1557 - (sizeof(struct sbp2_unrestricted_page_table)) * sg_count,
1558 - "sbp2 s/g list", command->sge_dma);
1559 -
1560 - /* Byte swap page tables if necessary */
1561 sbp2util_cpu_to_be32_buffer(sg_element,
1562 - (sizeof(struct sbp2_unrestricted_page_table)) *
1563 - sg_count);
1564 }
1565 }
1566
1567 - /*
1568 - * This function is called to create the actual command orb and s/g list
1569 - * out of the scsi command itself.
1570 - */
1571 - static void sbp2_create_command_orb(struct scsi_id_instance_data *scsi_id,
1572 - struct sbp2_command_info *command,
1573 unchar *scsi_cmd,
1574 unsigned int scsi_use_sg,
1575 unsigned int scsi_request_bufflen,
1576 void *scsi_request_buffer,
1577 enum dma_data_direction dma_dir)
1578 {
1579 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1580 struct scatterlist *sgpnt = (struct scatterlist *)scsi_request_buffer;
1581 - struct sbp2_command_orb *command_orb = &command->command_orb;
1582 u32 orb_direction;
1583
1584 /*
1585 - * Set-up our command ORB..
1586 *
1587 * NOTE: We're doing unrestricted page tables (s/g), as this is
1588 * best performance (at least with the devices I have). This means
1589 * that data_size becomes the number of s/g elements, and
1590 * page_size should be zero (for unrestricted).
1591 */
1592 - command_orb->next_ORB_hi = ORB_SET_NULL_PTR(1);
1593 - command_orb->next_ORB_lo = 0x0;
1594 - command_orb->misc = ORB_SET_MAX_PAYLOAD(scsi_id->max_payload_size);
1595 - command_orb->misc |= ORB_SET_SPEED(scsi_id->speed_code);
1596 - command_orb->misc |= ORB_SET_NOTIFY(1); /* Notify us when complete */
1597
1598 if (dma_dir == DMA_NONE)
1599 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER;
···
1592 else if (dma_dir == DMA_FROM_DEVICE && scsi_request_bufflen)
1593 orb_direction = ORB_DIRECTION_READ_FROM_MEDIA;
1594 else {
1595 - SBP2_WARN("Falling back to DMA_NONE");
1596 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER;
1597 }
1598
1599 - /* Set-up our pagetable stuff */
1600 if (orb_direction == ORB_DIRECTION_NO_DATA_TRANSFER) {
1601 - SBP2_DEBUG("No data transfer");
1602 - command_orb->data_descriptor_hi = 0x0;
1603 - command_orb->data_descriptor_lo = 0x0;
1604 - command_orb->misc |= ORB_SET_DIRECTION(1);
1605 - } else if (scsi_use_sg) {
1606 - SBP2_DEBUG("Use scatter/gather");
1607 - sbp2_prep_command_orb_sg(command_orb, hi, command, scsi_use_sg,
1608 - sgpnt, orb_direction, dma_dir);
1609 - } else {
1610 - SBP2_DEBUG("No scatter/gather");
1611 - sbp2_prep_command_orb_no_sg(command_orb, hi, command, sgpnt,
1612 - orb_direction, scsi_request_bufflen,
1613 scsi_request_buffer, dma_dir);
1614 - }
1615
1616 - /* Byte swap command ORB if necessary */
1617 - sbp2util_cpu_to_be32_buffer(command_orb, sizeof(struct sbp2_command_orb));
1618
1619 - /* Put our scsi command in the command ORB */
1620 - memset(command_orb->cdb, 0, 12);
1621 - memcpy(command_orb->cdb, scsi_cmd, COMMAND_SIZE(*scsi_cmd));
1622 }
1623
1624 - /*
1625 - * This function is called in order to begin a regular SBP-2 command.
1626 - */
1627 - static void sbp2_link_orb_command(struct scsi_id_instance_data *scsi_id,
1628 - struct sbp2_command_info *command)
1629 {
1630 - struct sbp2scsi_host_info *hi = scsi_id->hi;
1631 - struct sbp2_command_orb *command_orb = &command->command_orb;
1632 struct sbp2_command_orb *last_orb;
1633 dma_addr_t last_orb_dma;
1634 - u64 addr = scsi_id->sbp2_command_block_agent_addr;
1635 quadlet_t data[2];
1636 size_t length;
1637 unsigned long flags;
1638
1639 - outstanding_orb_incr;
1640 - SBP2_ORB_DEBUG("sending command orb %p, total orbs = %x",
1641 - command_orb, global_outstanding_command_orbs);
1642
1643 - pci_dma_sync_single_for_device(hi->host->pdev, command->command_orb_dma,
1644 - sizeof(struct sbp2_command_orb),
1645 - PCI_DMA_TODEVICE);
1646 - pci_dma_sync_single_for_device(hi->host->pdev, command->sge_dma,
1647 - sizeof(command->scatter_gather_element),
1648 - PCI_DMA_BIDIRECTIONAL);
1649 - /*
1650 - * Check to see if there are any previous orbs to use
1651 - */
1652 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags);
1653 - last_orb = scsi_id->last_orb;
1654 - last_orb_dma = scsi_id->last_orb_dma;
1655 if (!last_orb) {
1656 /*
1657 * last_orb == NULL means: We know that the target's fetch agent
···
1644 */
1645 addr += SBP2_ORB_POINTER_OFFSET;
1646 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1647 - data[1] = command->command_orb_dma;
1648 sbp2util_cpu_to_be32_buffer(data, 8);
1649 length = 8;
1650 } else {
···
1655 * The target's fetch agent may or may not have read this
1656 * previous ORB yet.
1657 */
1658 - pci_dma_sync_single_for_cpu(hi->host->pdev, last_orb_dma,
1659 - sizeof(struct sbp2_command_orb),
1660 - PCI_DMA_TODEVICE);
1661 - last_orb->next_ORB_lo = cpu_to_be32(command->command_orb_dma);
1662 wmb();
1663 /* Tells hardware that this pointer is valid */
1664 last_orb->next_ORB_hi = 0;
1665 - pci_dma_sync_single_for_device(hi->host->pdev, last_orb_dma,
1666 - sizeof(struct sbp2_command_orb),
1667 - PCI_DMA_TODEVICE);
1668 addr += SBP2_DOORBELL_OFFSET;
1669 data[0] = 0;
1670 length = 4;
1671 }
1672 - scsi_id->last_orb = command_orb;
1673 - scsi_id->last_orb_dma = command->command_orb_dma;
1674 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags);
1675
1676 - SBP2_ORB_DEBUG("write to %s register, command orb %p",
1677 - last_orb ? "DOORBELL" : "ORB_POINTER", command_orb);
1678 - if (sbp2util_node_write_no_wait(scsi_id->ne, addr, data, length)) {
1679 /*
1680 * sbp2util_node_write_no_wait failed. We certainly ran out
1681 * of transaction labels, perhaps just because there were no
···
1682 * the workqueue job will sleep to guaranteedly get a tlabel.
1683 * We do not accept new commands until the job is over.
1684 */
1685 - scsi_block_requests(scsi_id->scsi_host);
1686 - PREPARE_DELAYED_WORK(&scsi_id->protocol_work,
1687 last_orb ? sbp2util_write_doorbell:
1688 sbp2util_write_orb_pointer);
1689 - schedule_delayed_work(&scsi_id->protocol_work, 0);
1690 }
1691 }
1692
1693 - /*
1694 - * This function is called in order to begin a regular SBP-2 command.
1695 - */
1696 - static int sbp2_send_command(struct scsi_id_instance_data *scsi_id,
1697 - struct scsi_cmnd *SCpnt,
1698 void (*done)(struct scsi_cmnd *))
1699 {
1700 - unchar *cmd = (unchar *) SCpnt->cmnd;
1701 unsigned int request_bufflen = SCpnt->request_bufflen;
1702 - struct sbp2_command_info *command;
1703
1704 - SBP2_DEBUG_ENTER();
1705 - SBP2_DEBUG("SCSI transfer size = %x", request_bufflen);
1706 - SBP2_DEBUG("SCSI s/g elements = %x", (unsigned int)SCpnt->use_sg);
1707 -
1708 - /*
1709 - * Allocate a command orb and s/g structure
1710 - */
1711 - command = sbp2util_allocate_command_orb(scsi_id, SCpnt, done);
1712 - if (!command) {
1713 return -EIO;
1714 - }
1715
1716 - /*
1717 - * Now actually fill in the comamnd orb and sbp2 s/g list
1718 - */
1719 - sbp2_create_command_orb(scsi_id, command, cmd, SCpnt->use_sg,
1720 request_bufflen, SCpnt->request_buffer,
1721 SCpnt->sc_data_direction);
1722 -
1723 - sbp2util_packet_dump(&command->command_orb, sizeof(struct sbp2_command_orb),
1724 - "sbp2 command orb", command->command_orb_dma);
1725 -
1726 - /*
1727 - * Link up the orb, and ring the doorbell if needed
1728 - */
1729 - sbp2_link_orb_command(scsi_id, command);
1730
1731 return 0;
1732 }
···
1712 /*
1713 * Translates SBP-2 status into SCSI sense data for check conditions
1714 */
1715 - static unsigned int sbp2_status_to_sense_data(unchar *sbp2_status, unchar *sense_data)
1716 {
1717 - SBP2_DEBUG_ENTER();
1718 -
1719 - /*
1720 - * Ok, it's pretty ugly... ;-)
1721 - */
1722 sense_data[0] = 0x70;
1723 sense_data[1] = 0x0;
1724 sense_data[2] = sbp2_status[9];
···
1733 sense_data[14] = sbp2_status[20];
1734 sense_data[15] = sbp2_status[21];
1735
1736 - return sbp2_status[8] & 0x3f; /* return scsi status */
1737 }
1738
1739 - /*
1740 - * This function deals with status writes from the SBP-2 device
1741 - */
1742 static int sbp2_handle_status_write(struct hpsb_host *host, int nodeid,
1743 int destid, quadlet_t *data, u64 addr,
1744 size_t length, u16 fl)
1745 {
1746 - struct sbp2scsi_host_info *hi;
1747 - struct scsi_id_instance_data *scsi_id = NULL, *scsi_id_tmp;
1748 struct scsi_cmnd *SCpnt = NULL;
1749 struct sbp2_status_block *sb;
1750 u32 scsi_status = SBP2_SCSI_STATUS_GOOD;
1751 - struct sbp2_command_info *command;
1752 unsigned long flags;
1753 -
1754 - SBP2_DEBUG_ENTER();
1755 -
1756 - sbp2util_packet_dump(data, length, "sbp2 status write by device", (u32)addr);
1757
1758 if (unlikely(length < 8 || length > sizeof(struct sbp2_status_block))) {
1759 SBP2_ERR("Wrong size of status block");
···
1761 SBP2_ERR("host info is NULL - this is bad!");
1762 return RCODE_ADDRESS_ERROR;
1763 }
1764 - /*
1765 - * Find our scsi_id structure by looking at the status fifo address
1766 - * written to by the sbp2 device.
1767 - */
1768 - list_for_each_entry(scsi_id_tmp, &hi->scsi_ids, scsi_list) {
1769 - if (scsi_id_tmp->ne->nodeid == nodeid &&
1770 - scsi_id_tmp->status_fifo_addr == addr) {
1771 - scsi_id = scsi_id_tmp;
1772 break;
1773 }
1774 }
1775 - if (unlikely(!scsi_id)) {
1776 - SBP2_ERR("scsi_id is NULL - device is gone?");
1777 return RCODE_ADDRESS_ERROR;
1778 }
1779
1780 - /*
1781 - * Put response into scsi_id status fifo buffer. The first two bytes
1782 * come in big endian bit order. Often the target writes only a
1783 * truncated status block, minimally the first two quadlets. The rest
1784 - * is implied to be zeros.
1785 - */ 1786 - sb = &scsi_id->status_block; 1787 memset(sb->command_set_dependent, 0, sizeof(sb->command_set_dependent)); 1788 memcpy(sb, data, length); 1789 sbp2util_be32_to_cpu_buffer(sb, 8); 1790 1791 - /* 1792 - * Ignore unsolicited status. Handle command ORB status. 1793 - */ 1794 if (unlikely(STATUS_GET_SRC(sb->ORB_offset_hi_misc) == 2)) 1795 - command = NULL; 1796 else 1797 - command = sbp2util_find_command_for_orb(scsi_id, 1798 - sb->ORB_offset_lo); 1799 - if (command) { 1800 - SBP2_DEBUG("Found status for command ORB"); 1801 - pci_dma_sync_single_for_cpu(hi->host->pdev, command->command_orb_dma, 1802 - sizeof(struct sbp2_command_orb), 1803 - PCI_DMA_TODEVICE); 1804 - pci_dma_sync_single_for_cpu(hi->host->pdev, command->sge_dma, 1805 - sizeof(command->scatter_gather_element), 1806 - PCI_DMA_BIDIRECTIONAL); 1807 - 1808 - SBP2_ORB_DEBUG("matched command orb %p", &command->command_orb); 1809 - outstanding_orb_decr; 1810 - 1811 - /* 1812 - * Matched status with command, now grab scsi command pointers 1813 - * and check status. 1814 - */ 1815 /* 1816 * FIXME: If the src field in the status is 1, the ORB DMA must 1817 * not be reused until status for a subsequent ORB is received. 1818 */ 1819 - SCpnt = command->Current_SCpnt; 1820 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 1821 - sbp2util_mark_command_completed(scsi_id, command); 1822 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 1823 1824 if (SCpnt) { 1825 u32 h = sb->ORB_offset_hi_misc; 1826 u32 r = STATUS_GET_RESP(h); 1827 1828 if (r != RESP_STATUS_REQUEST_COMPLETE) { 1829 - SBP2_WARN("resp 0x%x, sbp_status 0x%x", 1830 r, STATUS_GET_SBP_STATUS(h)); 1831 scsi_status = 1832 r == RESP_STATUS_TRANSPORT_FAILURE ? 1833 SBP2_SCSI_STATUS_BUSY : 1834 SBP2_SCSI_STATUS_COMMAND_TERMINATED; 1835 } 1836 - /* 1837 - * See if the target stored any scsi status information. 
1838 - */ 1839 - if (STATUS_GET_LEN(h) > 1) { 1840 - SBP2_DEBUG("CHECK CONDITION"); 1841 scsi_status = sbp2_status_to_sense_data( 1842 (unchar *)sb, SCpnt->sense_buffer); 1843 - } 1844 - /* 1845 - * Check to see if the dead bit is set. If so, we'll 1846 - * have to initiate a fetch agent reset. 1847 - */ 1848 - if (STATUS_TEST_DEAD(h)) { 1849 - SBP2_DEBUG("Dead bit set - " 1850 - "initiating fetch agent reset"); 1851 - sbp2_agent_reset(scsi_id, 0); 1852 - } 1853 - SBP2_ORB_DEBUG("completing command orb %p", &command->command_orb); 1854 } 1855 1856 - /* 1857 - * Check here to see if there are no commands in-use. If there 1858 * are none, we know that the fetch agent left the active state 1859 * _and_ that we did not reactivate it yet. Therefore clear 1860 * last_orb so that next time we write directly to the 1861 * ORB_POINTER register. That way the fetch agent does not need 1862 - * to refetch the next_ORB. 1863 - */ 1864 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 1865 - if (list_empty(&scsi_id->sbp2_command_orb_inuse)) 1866 - scsi_id->last_orb = NULL; 1867 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 1868 1869 } else { 1870 - /* 1871 - * It's probably a login/logout/reconnect status. 
1872 - */ 1873 - if ((sb->ORB_offset_lo == scsi_id->reconnect_orb_dma) || 1874 - (sb->ORB_offset_lo == scsi_id->login_orb_dma) || 1875 - (sb->ORB_offset_lo == scsi_id->query_logins_orb_dma) || 1876 - (sb->ORB_offset_lo == scsi_id->logout_orb_dma)) { 1877 - scsi_id->access_complete = 1; 1878 - wake_up_interruptible(&access_wq); 1879 } 1880 } 1881 1882 - if (SCpnt) { 1883 - SBP2_DEBUG("Completing SCSI command"); 1884 - sbp2scsi_complete_command(scsi_id, scsi_status, SCpnt, 1885 - command->Current_done); 1886 - SBP2_ORB_DEBUG("command orb completed"); 1887 - } 1888 - 1889 return RCODE_COMPLETE; 1890 } 1891 ··· 1859 * SCSI interface related section 1860 **************************************/ 1861 1862 - /* 1863 - * This routine is the main request entry routine for doing I/O. It is 1864 - * called from the scsi stack directly. 1865 - */ 1866 static int sbp2scsi_queuecommand(struct scsi_cmnd *SCpnt, 1867 void (*done)(struct scsi_cmnd *)) 1868 { 1869 - struct scsi_id_instance_data *scsi_id = 1870 - (struct scsi_id_instance_data *)SCpnt->device->host->hostdata[0]; 1871 - struct sbp2scsi_host_info *hi; 1872 int result = DID_NO_CONNECT << 16; 1873 1874 - SBP2_DEBUG_ENTER(); 1875 - #if (CONFIG_IEEE1394_SBP2_DEBUG >= 2) || defined(CONFIG_IEEE1394_SBP2_PACKET_DUMP) 1876 - scsi_print_command(SCpnt); 1877 - #endif 1878 - 1879 - if (!sbp2util_node_is_available(scsi_id)) 1880 goto done; 1881 1882 - hi = scsi_id->hi; 1883 1884 - if (!hi) { 1885 - SBP2_ERR("sbp2scsi_host_info is NULL - this is bad!"); 1886 goto done; 1887 } 1888 1889 - /* 1890 - * Until we handle multiple luns, just return selection time-out 1891 - * to any IO directed at non-zero LUNs 1892 - */ 1893 - if (SCpnt->device->lun) 1894 goto done; 1895 1896 - /* 1897 - * Check for request sense command, and handle it here 1898 - * (autorequest sense) 1899 - */ 1900 if (SCpnt->cmnd[0] == REQUEST_SENSE) { 1901 - SBP2_DEBUG("REQUEST_SENSE"); 1902 - memcpy(SCpnt->request_buffer, SCpnt->sense_buffer, SCpnt->request_bufflen); 
1903 memset(SCpnt->sense_buffer, 0, sizeof(SCpnt->sense_buffer)); 1904 - sbp2scsi_complete_command(scsi_id, SBP2_SCSI_STATUS_GOOD, SCpnt, done); 1905 return 0; 1906 } 1907 1908 - /* 1909 - * Check to see if we are in the middle of a bus reset. 1910 - */ 1911 - if (!hpsb_node_entry_valid(scsi_id->ne)) { 1912 SBP2_ERR("Bus reset in progress - rejecting command"); 1913 result = DID_BUS_BUSY << 16; 1914 goto done; 1915 } 1916 1917 - /* 1918 - * Bidirectional commands are not yet implemented, 1919 - * and unknown transfer direction not handled. 1920 - */ 1921 - if (SCpnt->sc_data_direction == DMA_BIDIRECTIONAL) { 1922 SBP2_ERR("Cannot handle DMA_BIDIRECTIONAL - rejecting command"); 1923 result = DID_ERROR << 16; 1924 goto done; 1925 } 1926 1927 - /* 1928 - * Try and send our SCSI command 1929 - */ 1930 - if (sbp2_send_command(scsi_id, SCpnt, done)) { 1931 SBP2_ERR("Error sending SCSI command"); 1932 - sbp2scsi_complete_command(scsi_id, SBP2_SCSI_STATUS_SELECTION_TIMEOUT, 1933 SCpnt, done); 1934 } 1935 return 0; ··· 1920 return 0; 1921 } 1922 1923 - /* 1924 - * This function is called in order to complete all outstanding SBP-2 1925 - * commands (in case of resets, etc.). 
1926 - */ 1927 - static void sbp2scsi_complete_all_commands(struct scsi_id_instance_data *scsi_id, 1928 - u32 status) 1929 { 1930 - struct sbp2scsi_host_info *hi = scsi_id->hi; 1931 struct list_head *lh; 1932 - struct sbp2_command_info *command; 1933 unsigned long flags; 1934 1935 - SBP2_DEBUG_ENTER(); 1936 - 1937 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 1938 - while (!list_empty(&scsi_id->sbp2_command_orb_inuse)) { 1939 - SBP2_DEBUG("Found pending command to complete"); 1940 - lh = scsi_id->sbp2_command_orb_inuse.next; 1941 - command = list_entry(lh, struct sbp2_command_info, list); 1942 - pci_dma_sync_single_for_cpu(hi->host->pdev, command->command_orb_dma, 1943 - sizeof(struct sbp2_command_orb), 1944 - PCI_DMA_TODEVICE); 1945 - pci_dma_sync_single_for_cpu(hi->host->pdev, command->sge_dma, 1946 - sizeof(command->scatter_gather_element), 1947 - PCI_DMA_BIDIRECTIONAL); 1948 - sbp2util_mark_command_completed(scsi_id, command); 1949 - if (command->Current_SCpnt) { 1950 - command->Current_SCpnt->result = status << 16; 1951 - command->Current_done(command->Current_SCpnt); 1952 } 1953 } 1954 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 1955 1956 return; 1957 } 1958 1959 /* 1960 - * This function is called in order to complete a regular SBP-2 command. 1961 - * 1962 - * This can be called in interrupt context. 1963 */ 1964 - static void sbp2scsi_complete_command(struct scsi_id_instance_data *scsi_id, 1965 - u32 scsi_status, struct scsi_cmnd *SCpnt, 1966 void (*done)(struct scsi_cmnd *)) 1967 { 1968 - SBP2_DEBUG_ENTER(); 1969 - 1970 - /* 1971 - * Sanity 1972 - */ 1973 if (!SCpnt) { 1974 SBP2_ERR("SCpnt is NULL"); 1975 return; 1976 } 1977 1978 - /* 1979 - * If a bus reset is in progress and there was an error, don't 1980 - * complete the command, just let it get retried at the end of the 1981 - * bus reset. 
1982 - */ 1983 - if (!hpsb_node_entry_valid(scsi_id->ne) 1984 - && (scsi_status != SBP2_SCSI_STATUS_GOOD)) { 1985 - SBP2_ERR("Bus reset in progress - retry command later"); 1986 - return; 1987 - } 1988 - 1989 - /* 1990 - * Switch on scsi status 1991 - */ 1992 switch (scsi_status) { 1993 case SBP2_SCSI_STATUS_GOOD: 1994 SCpnt->result = DID_OK << 16; ··· 1971 break; 1972 1973 case SBP2_SCSI_STATUS_CHECK_CONDITION: 1974 - SBP2_DEBUG("SBP2_SCSI_STATUS_CHECK_CONDITION"); 1975 SCpnt->result = CHECK_CONDITION << 1 | DID_OK << 16; 1976 - #if CONFIG_IEEE1394_SBP2_DEBUG >= 1 1977 - scsi_print_command(SCpnt); 1978 - scsi_print_sense(SBP2_DEVICE_NAME, SCpnt); 1979 - #endif 1980 break; 1981 1982 case SBP2_SCSI_STATUS_SELECTION_TIMEOUT: ··· 1993 SCpnt->result = DID_ERROR << 16; 1994 } 1995 1996 - /* 1997 - * If a bus reset is in progress and there was an error, complete 1998 - * the command as busy so that it will get retried. 1999 - */ 2000 - if (!hpsb_node_entry_valid(scsi_id->ne) 2001 && (scsi_status != SBP2_SCSI_STATUS_GOOD)) { 2002 SBP2_ERR("Completing command with busy (bus reset)"); 2003 SCpnt->result = DID_BUS_BUSY << 16; 2004 } 2005 2006 - /* 2007 - * If a unit attention occurs, return busy status so it gets 2008 - * retried... it could have happened because of a 1394 bus reset 2009 - * or hot-plug... 2010 - * XXX DID_BUS_BUSY is actually a bad idea because it will defy 2011 - * the scsi layer's retry logic. 
2012 - */ 2013 - #if 0 2014 - if ((scsi_status == SBP2_SCSI_STATUS_CHECK_CONDITION) && 2015 - (SCpnt->sense_buffer[2] == UNIT_ATTENTION)) { 2016 - SBP2_DEBUG("UNIT ATTENTION - return busy"); 2017 - SCpnt->result = DID_BUS_BUSY << 16; 2018 - } 2019 - #endif 2020 - 2021 - /* 2022 - * Tell scsi stack that we're done with this command 2023 - */ 2024 done(SCpnt); 2025 } 2026 2027 static int sbp2scsi_slave_alloc(struct scsi_device *sdev) 2028 { 2029 - struct scsi_id_instance_data *scsi_id = 2030 - (struct scsi_id_instance_data *)sdev->host->hostdata[0]; 2031 2032 - scsi_id->sdev = sdev; 2033 sdev->allow_restart = 1; 2034 2035 - if (scsi_id->workarounds & SBP2_WORKAROUND_INQUIRY_36) 2036 sdev->inquiry_len = 36; 2037 return 0; 2038 } 2039 2040 static int sbp2scsi_slave_configure(struct scsi_device *sdev) 2041 { 2042 - struct scsi_id_instance_data *scsi_id = 2043 - (struct scsi_id_instance_data *)sdev->host->hostdata[0]; 2044 2045 blk_queue_dma_alignment(sdev->request_queue, (512 - 1)); 2046 sdev->use_10_for_rw = 1; 2047 2048 if (sdev->type == TYPE_DISK && 2049 - scsi_id->workarounds & SBP2_WORKAROUND_MODE_SENSE_8) 2050 sdev->skip_ms_page_8 = 1; 2051 - if (scsi_id->workarounds & SBP2_WORKAROUND_FIX_CAPACITY) 2052 sdev->fix_capacity = 1; 2053 return 0; 2054 } 2055 2056 static void sbp2scsi_slave_destroy(struct scsi_device *sdev) 2057 { 2058 - ((struct scsi_id_instance_data *)sdev->host->hostdata[0])->sdev = NULL; 2059 return; 2060 } 2061 2062 /* 2063 - * Called by scsi stack when something has really gone wrong. Usually 2064 - * called when a command has timed-out for some reason. 
2065 */ 2066 static int sbp2scsi_abort(struct scsi_cmnd *SCpnt) 2067 { 2068 - struct scsi_id_instance_data *scsi_id = 2069 - (struct scsi_id_instance_data *)SCpnt->device->host->hostdata[0]; 2070 - struct sbp2scsi_host_info *hi = scsi_id->hi; 2071 - struct sbp2_command_info *command; 2072 unsigned long flags; 2073 2074 - SBP2_ERR("aborting sbp2 command"); 2075 scsi_print_command(SCpnt); 2076 2077 - if (sbp2util_node_is_available(scsi_id)) { 2078 2079 - /* 2080 - * Right now, just return any matching command structures 2081 - * to the free pool. 2082 - */ 2083 - spin_lock_irqsave(&scsi_id->sbp2_command_orb_lock, flags); 2084 - command = sbp2util_find_command_for_SCpnt(scsi_id, SCpnt); 2085 - if (command) { 2086 - SBP2_DEBUG("Found command to abort"); 2087 - pci_dma_sync_single_for_cpu(hi->host->pdev, 2088 - command->command_orb_dma, 2089 - sizeof(struct sbp2_command_orb), 2090 - PCI_DMA_TODEVICE); 2091 - pci_dma_sync_single_for_cpu(hi->host->pdev, 2092 - command->sge_dma, 2093 - sizeof(command->scatter_gather_element), 2094 - PCI_DMA_BIDIRECTIONAL); 2095 - sbp2util_mark_command_completed(scsi_id, command); 2096 - if (command->Current_SCpnt) { 2097 - command->Current_SCpnt->result = DID_ABORT << 16; 2098 - command->Current_done(command->Current_SCpnt); 2099 } 2100 } 2101 - spin_unlock_irqrestore(&scsi_id->sbp2_command_orb_lock, flags); 2102 2103 - /* 2104 - * Initiate a fetch agent reset. 
2105 - */ 2106 - sbp2_agent_reset(scsi_id, 1); 2107 - sbp2scsi_complete_all_commands(scsi_id, DID_BUS_BUSY); 2108 } 2109 2110 return SUCCESS; ··· 2085 */ 2086 static int sbp2scsi_reset(struct scsi_cmnd *SCpnt) 2087 { 2088 - struct scsi_id_instance_data *scsi_id = 2089 - (struct scsi_id_instance_data *)SCpnt->device->host->hostdata[0]; 2090 2091 - SBP2_ERR("reset requested"); 2092 2093 - if (sbp2util_node_is_available(scsi_id)) { 2094 - SBP2_ERR("Generating sbp2 fetch agent reset"); 2095 - sbp2_agent_reset(scsi_id, 1); 2096 } 2097 2098 return SUCCESS; ··· 2102 char *buf) 2103 { 2104 struct scsi_device *sdev; 2105 - struct scsi_id_instance_data *scsi_id; 2106 - int lun; 2107 2108 if (!(sdev = to_scsi_device(dev))) 2109 return 0; 2110 2111 - if (!(scsi_id = (struct scsi_id_instance_data *)sdev->host->hostdata[0])) 2112 return 0; 2113 2114 - lun = ORB_SET_LUN(scsi_id->sbp2_lun); 2115 - 2116 - return sprintf(buf, "%016Lx:%d:%d\n", (unsigned long long)scsi_id->ne->guid, 2117 - scsi_id->ud->id, lun); 2118 } 2119 - static DEVICE_ATTR(ieee1394_id, S_IRUGO, sbp2_sysfs_ieee1394_id_show, NULL); 2120 - 2121 - static struct device_attribute *sbp2_sysfs_sdev_attrs[] = { 2122 - &dev_attr_ieee1394_id, 2123 - NULL 2124 - }; 2125 2126 MODULE_AUTHOR("Ben Collins <bcollins@debian.org>"); 2127 MODULE_DESCRIPTION("IEEE-1394 SBP-2 protocol driver"); 2128 MODULE_SUPPORTED_DEVICE(SBP2_DEVICE_NAME); 2129 MODULE_LICENSE("GPL"); 2130 2131 - /* SCSI host template */ 2132 - static struct scsi_host_template scsi_driver_template = { 2133 - .module = THIS_MODULE, 2134 - .name = "SBP-2 IEEE-1394", 2135 - .proc_name = SBP2_DEVICE_NAME, 2136 - .queuecommand = sbp2scsi_queuecommand, 2137 - .eh_abort_handler = sbp2scsi_abort, 2138 - .eh_device_reset_handler = sbp2scsi_reset, 2139 - .slave_alloc = sbp2scsi_slave_alloc, 2140 - .slave_configure = sbp2scsi_slave_configure, 2141 - .slave_destroy = sbp2scsi_slave_destroy, 2142 - .this_id = -1, 2143 - .sg_tablesize = SG_ALL, 2144 - .use_clustering = 
ENABLE_CLUSTERING, 2145 - .cmd_per_lun = SBP2_MAX_CMDS, 2146 - .can_queue = SBP2_MAX_CMDS, 2147 - .emulated = 1, 2148 - .sdev_attrs = sbp2_sysfs_sdev_attrs, 2149 - }; 2150 - 2151 static int sbp2_module_init(void) 2152 { 2153 int ret; 2154 2155 - SBP2_DEBUG_ENTER(); 2156 - 2157 - /* Module load debug option to force one command at a time (serializing I/O) */ 2158 - if (serialize_io) { 2159 - SBP2_INFO("Driver forced to serialize I/O (serialize_io=1)"); 2160 - SBP2_INFO("Try serialize_io=0 for better performance"); 2161 - scsi_driver_template.can_queue = 1; 2162 - scsi_driver_template.cmd_per_lun = 1; 2163 } 2164 2165 if (sbp2_default_workarounds & SBP2_WORKAROUND_128K_MAX_TRANS && 2166 - (max_sectors * 512) > (128 * 1024)) 2167 - max_sectors = 128 * 1024 / 512; 2168 - scsi_driver_template.max_sectors = max_sectors; 2169 2170 - /* Register our high level driver with 1394 stack */ 2171 hpsb_register_highlevel(&sbp2_highlevel); 2172 - 2173 ret = hpsb_register_protocol(&sbp2_driver); 2174 if (ret) { 2175 SBP2_ERR("Failed to register protocol"); 2176 hpsb_unregister_highlevel(&sbp2_highlevel); 2177 return ret; 2178 } 2179 - 2180 return 0; 2181 } 2182 2183 static void __exit sbp2_module_exit(void) 2184 { 2185 - SBP2_DEBUG_ENTER(); 2186 - 2187 hpsb_unregister_protocol(&sbp2_driver); 2188 - 2189 hpsb_unregister_highlevel(&sbp2_highlevel); 2190 } 2191
··· 29 * driver. It also registers as a SCSI lower-level driver in order to accept 30 * SCSI commands for transport using SBP-2. 31 * 32 + * You may access any attached SBP-2 (usually storage devices) as regular 33 + * SCSI devices. E.g. mount /dev/sda1, fdisk, mkfs, etc.. 34 * 35 + * See http://www.t10.org/drafts.htm#sbp2 for the final draft of the SBP-2 36 + * specification and for where to purchase the official standard. 37 * 38 + * TODO: 39 + * - look into possible improvements of the SCSI error handlers 40 + * - handle Unit_Characteristics.mgt_ORB_timeout and .ORB_size 41 + * - handle Logical_Unit_Number.ordered 42 + * - handle src == 1 in status blocks 43 + * - reimplement the DMA mapping in absence of physical DMA so that 44 + * bus_to_virt is no longer required 45 + * - debug the handling of absent physical DMA 46 + * - replace CONFIG_IEEE1394_SBP2_PHYS_DMA by automatic detection 47 + * (this is easy but depends on the previous two TODO items) 48 + * - make the parameter serialize_io configurable per device 49 + * - move all requests to fetch agent registers into non-atomic context, 50 + * replace all usages of sbp2util_node_write_no_wait by true transactions 51 + * Grep for inline FIXME comments below. 52 */ 53 54 #include <linux/blkdev.h> ··· 49 #include <linux/list.h> 50 #include <linux/module.h> 51 #include <linux/moduleparam.h> 52 #include <linux/slab.h> 53 #include <linux/spinlock.h> 54 #include <linux/stat.h> ··· 98 * (probably due to PCI latency/throughput issues with the part). You can 99 * bump down the speed if you are running into problems. 100 */ 101 + static int sbp2_max_speed = IEEE1394_SPEED_MAX; 102 + module_param_named(max_speed, sbp2_max_speed, int, 0644); 103 + MODULE_PARM_DESC(max_speed, "Force max speed " 104 + "(3 = 800Mb/s, 2 = 400Mb/s, 1 = 200Mb/s, 0 = 100Mb/s)"); 105 106 /* 107 * Set serialize_io to 1 if you'd like only one scsi command sent 108 * down to us at a time (debugging). 
This might be necessary for very 109 * badly behaved sbp2 devices. 110 */ 111 + static int sbp2_serialize_io = 1; 112 + module_param_named(serialize_io, sbp2_serialize_io, int, 0444); 113 + MODULE_PARM_DESC(serialize_io, "Serialize I/O coming from scsi drivers " 114 + "(default = 1, faster = 0)"); 115 116 /* 117 * Bump up max_sectors if you'd like to support very large sized ··· 121 * the Oxsemi sbp2 chipsets have no problems supporting very large 122 * transfer sizes. 123 */ 124 + static int sbp2_max_sectors = SBP2_MAX_SECTORS; 125 + module_param_named(max_sectors, sbp2_max_sectors, int, 0444); 126 + MODULE_PARM_DESC(max_sectors, "Change max sectors per I/O supported " 127 + "(default = " __stringify(SBP2_MAX_SECTORS) ")"); 128 129 /* 130 * Exclusive login to sbp2 device? In most cases, the sbp2 driver should ··· 139 * concurrent logins. Depending on firmware, four or two concurrent logins 140 * are possible on OXFW911 and newer Oxsemi bridges. 141 */ 142 + static int sbp2_exclusive_login = 1; 143 + module_param_named(exclusive_login, sbp2_exclusive_login, int, 0644); 144 + MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device " 145 + "(default = 1)"); 146 147 /* 148 * If any of the following workarounds is required for your device to work, ··· 179 ", override internal blacklist = " __stringify(SBP2_WORKAROUND_OVERRIDE) 180 ", or a combination)"); 181 182 183 + #define SBP2_INFO(fmt, args...) HPSB_INFO("sbp2: "fmt, ## args) 184 + #define SBP2_ERR(fmt, args...) 
HPSB_ERR("sbp2: "fmt, ## args) 185 186 /* 187 * Globals 188 */ 189 + static void sbp2scsi_complete_all_commands(struct sbp2_lu *, u32); 190 + static void sbp2scsi_complete_command(struct sbp2_lu *, u32, struct scsi_cmnd *, 191 + void (*)(struct scsi_cmnd *)); 192 + static struct sbp2_lu *sbp2_alloc_device(struct unit_directory *); 193 + static int sbp2_start_device(struct sbp2_lu *); 194 + static void sbp2_remove_device(struct sbp2_lu *); 195 + static int sbp2_login_device(struct sbp2_lu *); 196 + static int sbp2_reconnect_device(struct sbp2_lu *); 197 + static int sbp2_logout_device(struct sbp2_lu *); 198 + static void sbp2_host_reset(struct hpsb_host *); 199 + static int sbp2_handle_status_write(struct hpsb_host *, int, int, quadlet_t *, 200 + u64, size_t, u16); 201 + static int sbp2_agent_reset(struct sbp2_lu *, int); 202 + static void sbp2_parse_unit_directory(struct sbp2_lu *, 203 + struct unit_directory *); 204 + static int sbp2_set_busy_timeout(struct sbp2_lu *); 205 + static int sbp2_max_speed_and_size(struct sbp2_lu *); 206 207 208 static const u8 sbp2_speedto_max_payload[] = { 0x7, 0x8, 0x9, 0xA, 0xB, 0xC }; 209 210 static struct hpsb_highlevel sbp2_highlevel = { 211 + .name = SBP2_DEVICE_NAME, 212 + .host_reset = sbp2_host_reset, 213 }; 214 215 static struct hpsb_address_ops sbp2_ops = { 216 + .write = sbp2_handle_status_write 217 }; 218 219 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 220 + static int sbp2_handle_physdma_write(struct hpsb_host *, int, int, quadlet_t *, 221 + u64, size_t, u16); 222 + static int sbp2_handle_physdma_read(struct hpsb_host *, int, quadlet_t *, u64, 223 + size_t, u16); 224 + 225 static struct hpsb_address_ops sbp2_physdma_ops = { 226 + .read = sbp2_handle_physdma_read, 227 + .write = sbp2_handle_physdma_write, 228 }; 229 #endif 230 231 + 232 + /* 233 + * Interface to driver core and IEEE 1394 core 234 + */ 235 + static struct ieee1394_device_id sbp2_id_table[] = { 236 + { 237 + .match_flags = IEEE1394_MATCH_SPECIFIER_ID | 
IEEE1394_MATCH_VERSION, 238 + .specifier_id = SBP2_UNIT_SPEC_ID_ENTRY & 0xffffff, 239 + .version = SBP2_SW_VERSION_ENTRY & 0xffffff}, 240 + {} 241 + }; 242 + MODULE_DEVICE_TABLE(ieee1394, sbp2_id_table); 243 + 244 + static int sbp2_probe(struct device *); 245 + static int sbp2_remove(struct device *); 246 + static int sbp2_update(struct unit_directory *); 247 + 248 static struct hpsb_protocol_driver sbp2_driver = { 249 + .name = SBP2_DEVICE_NAME, 250 .id_table = sbp2_id_table, 251 .update = sbp2_update, 252 .driver = { 253 .probe = sbp2_probe, 254 .remove = sbp2_remove, 255 }, 256 }; 257 + 258 + 259 + /* 260 + * Interface to SCSI core 261 + */ 262 + static int sbp2scsi_queuecommand(struct scsi_cmnd *, 263 + void (*)(struct scsi_cmnd *)); 264 + static int sbp2scsi_abort(struct scsi_cmnd *); 265 + static int sbp2scsi_reset(struct scsi_cmnd *); 266 + static int sbp2scsi_slave_alloc(struct scsi_device *); 267 + static int sbp2scsi_slave_configure(struct scsi_device *); 268 + static void sbp2scsi_slave_destroy(struct scsi_device *); 269 + static ssize_t sbp2_sysfs_ieee1394_id_show(struct device *, 270 + struct device_attribute *, char *); 271 + 272 + static DEVICE_ATTR(ieee1394_id, S_IRUGO, sbp2_sysfs_ieee1394_id_show, NULL); 273 + 274 + static struct device_attribute *sbp2_sysfs_sdev_attrs[] = { 275 + &dev_attr_ieee1394_id, 276 + NULL 277 + }; 278 + 279 + static struct scsi_host_template sbp2_shost_template = { 280 + .module = THIS_MODULE, 281 + .name = "SBP-2 IEEE-1394", 282 + .proc_name = SBP2_DEVICE_NAME, 283 + .queuecommand = sbp2scsi_queuecommand, 284 + .eh_abort_handler = sbp2scsi_abort, 285 + .eh_device_reset_handler = sbp2scsi_reset, 286 + .slave_alloc = sbp2scsi_slave_alloc, 287 + .slave_configure = sbp2scsi_slave_configure, 288 + .slave_destroy = sbp2scsi_slave_destroy, 289 + .this_id = -1, 290 + .sg_tablesize = SG_ALL, 291 + .use_clustering = ENABLE_CLUSTERING, 292 + .cmd_per_lun = SBP2_MAX_CMDS, 293 + .can_queue = SBP2_MAX_CMDS, 294 + .emulated = 1, 295 + 
.sdev_attrs = sbp2_sysfs_sdev_attrs, 296 + }; 297 + 298 299 /* 300 * List of devices with known bugs. ··· 363 364 for (length = (length >> 2); length--; ) 365 temp[length] = be32_to_cpu(temp[length]); 366 } 367 368 /* ··· 376 377 for (length = (length >> 2); length--; ) 378 temp[length] = cpu_to_be32(temp[length]); 379 } 380 #else /* BIG_ENDIAN */ 381 /* Why waste the cpu cycles? */ ··· 385 #define sbp2util_cpu_to_be32_buffer(x,y) do {} while (0) 386 #endif 387 388 + static DECLARE_WAIT_QUEUE_HEAD(sbp2_access_wq); 389 390 /* 391 * Waits for completion of an SBP-2 access request. 392 * Returns nonzero if timed out or prematurely interrupted. 393 */ 394 + static int sbp2util_access_timeout(struct sbp2_lu *lu, int timeout) 395 { 396 + long leftover; 397 398 + leftover = wait_event_interruptible_timeout( 399 + sbp2_access_wq, lu->access_complete, timeout); 400 + lu->access_complete = 0; 401 return leftover <= 0; 402 } 403 404 + static void sbp2_free_packet(void *packet) 405 { 406 hpsb_free_tlabel(packet); 407 hpsb_free_packet(packet); 408 } 409 410 + /* 411 + * This is much like hpsb_node_write(), except it ignores the response 412 + * subaction and returns immediately. Can be used from atomic context. 413 */ 414 static int sbp2util_node_write_no_wait(struct node_entry *ne, u64 addr, 415 + quadlet_t *buf, size_t len) 416 { 417 struct hpsb_packet *packet; 418 419 + packet = hpsb_make_writepacket(ne->host, ne->nodeid, addr, buf, len); 420 if (!packet) 421 return -ENOMEM; 422 423 + hpsb_set_packet_complete_task(packet, sbp2_free_packet, packet); 424 hpsb_node_fill_packet(ne, packet); 425 if (hpsb_send_packet(packet) < 0) { 426 sbp2_free_packet(packet); 427 return -EIO; 428 } 429 return 0; 430 } 431 432 + static void sbp2util_notify_fetch_agent(struct sbp2_lu *lu, u64 offset, 433 + quadlet_t *data, size_t len) 434 { 435 + /* There is a small window after a bus reset within which the node 436 + * entry's generation is current but the reconnect wasn't completed. 
*/ 437 + if (unlikely(atomic_read(&lu->state) == SBP2LU_STATE_IN_RESET)) 438 return; 439 440 + if (hpsb_node_write(lu->ne, lu->command_block_agent_addr + offset, 441 data, len)) 442 SBP2_ERR("sbp2util_notify_fetch_agent failed."); 443 + 444 + /* Now accept new SCSI commands, unless a bus reset happened during 445 + * hpsb_node_write. */ 446 + if (likely(atomic_read(&lu->state) != SBP2LU_STATE_IN_RESET)) 447 + scsi_unblock_requests(lu->shost); 448 } 449 450 static void sbp2util_write_orb_pointer(struct work_struct *work) 451 { 452 + struct sbp2_lu *lu = container_of(work, struct sbp2_lu, protocol_work); 453 quadlet_t data[2]; 454 455 + data[0] = ORB_SET_NODE_ID(lu->hi->host->node_id); 456 + data[1] = lu->last_orb_dma; 457 sbp2util_cpu_to_be32_buffer(data, 8); 458 + sbp2util_notify_fetch_agent(lu, SBP2_ORB_POINTER_OFFSET, data, 8); 459 } 460 461 static void sbp2util_write_doorbell(struct work_struct *work) 462 { 463 + struct sbp2_lu *lu = container_of(work, struct sbp2_lu, protocol_work); 464 + 465 + sbp2util_notify_fetch_agent(lu, SBP2_DOORBELL_OFFSET, NULL, 4); 466 } 467 468 + static int sbp2util_create_command_orb_pool(struct sbp2_lu *lu) 469 { 470 + struct sbp2_fwhost_info *hi = lu->hi; 471 int i; 472 unsigned long flags, orbs; 473 + struct sbp2_command_info *cmd; 474 475 + orbs = sbp2_serialize_io ? 
2 : SBP2_MAX_CMDS; 476 477 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 478 for (i = 0; i < orbs; i++) { 479 + cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC); 480 + if (!cmd) { 481 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 482 return -ENOMEM; 483 } 484 + cmd->command_orb_dma = dma_map_single(&hi->host->device, 485 + &cmd->command_orb, 486 + sizeof(struct sbp2_command_orb), 487 + DMA_TO_DEVICE); 488 + cmd->sge_dma = dma_map_single(&hi->host->device, 489 + &cmd->scatter_gather_element, 490 + sizeof(cmd->scatter_gather_element), 491 + DMA_BIDIRECTIONAL); 492 + INIT_LIST_HEAD(&cmd->list); 493 + list_add_tail(&cmd->list, &lu->cmd_orb_completed); 494 } 495 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 496 return 0; 497 } 498 499 + static void sbp2util_remove_command_orb_pool(struct sbp2_lu *lu) 500 { 501 + struct hpsb_host *host = lu->hi->host; 502 struct list_head *lh, *next; 503 + struct sbp2_command_info *cmd; 504 unsigned long flags; 505 506 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 507 + if (!list_empty(&lu->cmd_orb_completed)) 508 + list_for_each_safe(lh, next, &lu->cmd_orb_completed) { 509 + cmd = list_entry(lh, struct sbp2_command_info, list); 510 + dma_unmap_single(&host->device, cmd->command_orb_dma, 511 sizeof(struct sbp2_command_orb), 512 + DMA_TO_DEVICE); 513 + dma_unmap_single(&host->device, cmd->sge_dma, 514 + sizeof(cmd->scatter_gather_element), 515 + DMA_BIDIRECTIONAL); 516 + kfree(cmd); 517 } 518 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 519 return; 520 } 521 522 /* 523 + * Finds the sbp2_command for a given outstanding command ORB. 524 + * Only looks at the in-use list. 
525 */ 526 static struct sbp2_command_info *sbp2util_find_command_for_orb( 527 + struct sbp2_lu *lu, dma_addr_t orb) 528 { 529 + struct sbp2_command_info *cmd; 530 unsigned long flags; 531 532 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 533 + if (!list_empty(&lu->cmd_orb_inuse)) 534 + list_for_each_entry(cmd, &lu->cmd_orb_inuse, list) 535 + if (cmd->command_orb_dma == orb) { 536 + spin_unlock_irqrestore( 537 + &lu->cmd_orb_lock, flags); 538 + return cmd; 539 } 540 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 541 return NULL; 542 } 543 544 /* 545 + * Finds the sbp2_command for a given outstanding SCpnt. 546 + * Only looks at the in-use list. 547 + * Must be called with lu->cmd_orb_lock held. 548 */ 549 static struct sbp2_command_info *sbp2util_find_command_for_SCpnt( 550 + struct sbp2_lu *lu, void *SCpnt) 551 { 552 + struct sbp2_command_info *cmd; 553 554 + if (!list_empty(&lu->cmd_orb_inuse)) 555 + list_for_each_entry(cmd, &lu->cmd_orb_inuse, list) 556 + if (cmd->Current_SCpnt == SCpnt) 557 + return cmd; 558 return NULL; 559 } 560 561 static struct sbp2_command_info *sbp2util_allocate_command_orb( 562 + struct sbp2_lu *lu, 563 + struct scsi_cmnd *Current_SCpnt, 564 + void (*Current_done)(struct scsi_cmnd *)) 565 { 566 struct list_head *lh; 567 + struct sbp2_command_info *cmd = NULL; 568 unsigned long flags; 569 570 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 571 + if (!list_empty(&lu->cmd_orb_completed)) { 572 + lh = lu->cmd_orb_completed.next; 573 list_del(lh); 574 + cmd = list_entry(lh, struct sbp2_command_info, list); 575 + cmd->Current_done = Current_done; 576 + cmd->Current_SCpnt = Current_SCpnt; 577 + list_add_tail(&cmd->list, &lu->cmd_orb_inuse); 578 + } else 579 SBP2_ERR("%s: no orbs available", __FUNCTION__); 580 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 581 + return cmd; 582 } 583 584 /* 585 + * Unmaps the DMAs of a command and moves the command to the completed ORB list. 586 + * Must be called with lu->cmd_orb_lock held. 
587 */ 588 + static void sbp2util_mark_command_completed(struct sbp2_lu *lu, 589 + struct sbp2_command_info *cmd) 590 { 591 + struct hpsb_host *host = lu->ud->ne->host; 592 + 593 + if (cmd->cmd_dma) { 594 + if (cmd->dma_type == CMD_DMA_SINGLE) 595 + dma_unmap_single(&host->device, cmd->cmd_dma, 596 + cmd->dma_size, cmd->dma_dir); 597 + else if (cmd->dma_type == CMD_DMA_PAGE) 598 + dma_unmap_page(&host->device, cmd->cmd_dma, 599 + cmd->dma_size, cmd->dma_dir); 600 + /* XXX: Check for CMD_DMA_NONE bug */ 601 + cmd->dma_type = CMD_DMA_NONE; 602 + cmd->cmd_dma = 0; 603 + } 604 + if (cmd->sge_buffer) { 605 + dma_unmap_sg(&host->device, cmd->sge_buffer, 606 + cmd->dma_size, cmd->dma_dir); 607 + cmd->sge_buffer = NULL; 608 + } 609 + list_move_tail(&cmd->list, &lu->cmd_orb_completed); 610 } 611 612 /* 613 + * Is lu valid? Is the 1394 node still present? 614 */ 615 + static inline int sbp2util_node_is_available(struct sbp2_lu *lu) 616 { 617 + return lu && lu->ne && !lu->ne->in_limbo; 618 } 619 620 /********************************************* 621 * IEEE-1394 core driver stack related section 622 *********************************************/ 623 624 static int sbp2_probe(struct device *dev) 625 { 626 struct unit_directory *ud; 627 + struct sbp2_lu *lu; 628 629 ud = container_of(dev, struct unit_directory, device); 630 ··· 731 if (ud->flags & UNIT_DIRECTORY_HAS_LUN_DIRECTORY) 732 return -ENODEV; 733 734 + lu = sbp2_alloc_device(ud); 735 + if (!lu) 736 return -ENOMEM; 737 738 + sbp2_parse_unit_directory(lu, ud); 739 + return sbp2_start_device(lu); 740 } 741 742 static int sbp2_remove(struct device *dev) 743 { 744 struct unit_directory *ud; 745 + struct sbp2_lu *lu; 746 struct scsi_device *sdev; 747 748 ud = container_of(dev, struct unit_directory, device); 749 + lu = ud->device.driver_data; 750 + if (!lu) 751 return 0; 752 753 + if (lu->shost) { 754 /* Get rid of enqueued commands if there is no chance to 755 * send them. 
*/ 756 + if (!sbp2util_node_is_available(lu)) 757 + sbp2scsi_complete_all_commands(lu, DID_NO_CONNECT); 758 + /* scsi_remove_device() may trigger shutdown functions of SCSI 759 * highlevel drivers which would deadlock if blocked. */ 760 + atomic_set(&lu->state, SBP2LU_STATE_IN_SHUTDOWN); 761 + scsi_unblock_requests(lu->shost); 762 } 763 + sdev = lu->sdev; 764 if (sdev) { 765 + lu->sdev = NULL; 766 scsi_remove_device(sdev); 767 } 768 769 + sbp2_logout_device(lu); 770 + sbp2_remove_device(lu); 771 772 return 0; 773 } 774 775 static int sbp2_update(struct unit_directory *ud) 776 { 777 + struct sbp2_lu *lu = ud->device.driver_data; 778 779 + if (sbp2_reconnect_device(lu)) { 780 + /* Reconnect has failed. Perhaps we didn't reconnect fast 781 + * enough. Try a regular login, but first log out just in 782 + * case of any weirdness. */ 783 + sbp2_logout_device(lu); 784 785 + if (sbp2_login_device(lu)) { 786 /* Login failed too, just fail, and the backend 787 * will call our sbp2_remove for us */ 788 SBP2_ERR("Failed to reconnect to sbp2 device!"); ··· 799 } 800 } 801 802 + sbp2_set_busy_timeout(lu); 803 + sbp2_agent_reset(lu, 1); 804 + sbp2_max_speed_and_size(lu); 805 806 + /* Complete any pending commands with busy (so they get retried) 807 + * and remove them from our queue. */ 808 + sbp2scsi_complete_all_commands(lu, DID_BUS_BUSY); 809 810 /* Accept new commands unless there was another bus reset in the 811 * meantime. 
*/ 812 + if (hpsb_node_entry_valid(lu->ne)) { 813 + atomic_set(&lu->state, SBP2LU_STATE_RUNNING); 814 + scsi_unblock_requests(lu->shost); 815 } 816 return 0; 817 } 818 819 + static struct sbp2_lu *sbp2_alloc_device(struct unit_directory *ud) 820 { 821 + struct sbp2_fwhost_info *hi; 822 + struct Scsi_Host *shost = NULL; 823 + struct sbp2_lu *lu = NULL; 824 825 + lu = kzalloc(sizeof(*lu), GFP_KERNEL); 826 + if (!lu) { 827 + SBP2_ERR("failed to create lu"); 828 goto failed_alloc; 829 } 830 831 + lu->ne = ud->ne; 832 + lu->ud = ud; 833 + lu->speed_code = IEEE1394_SPEED_100; 834 + lu->max_payload_size = sbp2_speedto_max_payload[IEEE1394_SPEED_100]; 835 + lu->status_fifo_addr = CSR1212_INVALID_ADDR_SPACE; 836 + INIT_LIST_HEAD(&lu->cmd_orb_inuse); 837 + INIT_LIST_HEAD(&lu->cmd_orb_completed); 838 + INIT_LIST_HEAD(&lu->lu_list); 839 + spin_lock_init(&lu->cmd_orb_lock); 840 + atomic_set(&lu->state, SBP2LU_STATE_RUNNING); 841 + INIT_WORK(&lu->protocol_work, NULL); 842 843 + ud->device.driver_data = lu; 844 845 hi = hpsb_get_hostinfo(&sbp2_highlevel, ud->ne->host); 846 if (!hi) { 847 + hi = hpsb_create_hostinfo(&sbp2_highlevel, ud->ne->host, 848 + sizeof(*hi)); 849 if (!hi) { 850 SBP2_ERR("failed to allocate hostinfo"); 851 goto failed_alloc; 852 } 853 hi->host = ud->ne->host; 854 + INIT_LIST_HEAD(&hi->logical_units); 855 856 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 857 /* Handle data movement if physical dma is not ··· 881 goto failed_alloc; 882 } 883 884 + lu->hi = hi; 885 886 + list_add_tail(&lu->lu_list, &hi->logical_units); 887 888 /* Register the status FIFO address range. We could use the same FIFO 889 * for targets at different nodes. However we need different FIFOs per ··· 893 * then be performed as unified transactions. This slightly reduces 894 * bandwidth usage, and some Prolific based devices seem to require it. 
895 */ 896 + lu->status_fifo_addr = hpsb_allocate_and_register_addrspace( 897 &sbp2_highlevel, ud->ne->host, &sbp2_ops, 898 sizeof(struct sbp2_status_block), sizeof(quadlet_t), 899 ud->ne->host->low_addr_space, CSR1212_ALL_SPACE_END); 900 + if (lu->status_fifo_addr == CSR1212_INVALID_ADDR_SPACE) { 901 SBP2_ERR("failed to allocate status FIFO address range"); 902 goto failed_alloc; 903 } 904 905 + shost = scsi_host_alloc(&sbp2_shost_template, sizeof(unsigned long)); 906 + if (!shost) { 907 SBP2_ERR("failed to register scsi host"); 908 goto failed_alloc; 909 } 910 911 + shost->hostdata[0] = (unsigned long)lu; 912 913 + if (!scsi_add_host(shost, &ud->device)) { 914 + lu->shost = shost; 915 + return lu; 916 } 917 918 SBP2_ERR("failed to add scsi host"); 919 + scsi_host_put(shost); 920 921 failed_alloc: 922 + sbp2_remove_device(lu); 923 return NULL; 924 } 925 926 static void sbp2_host_reset(struct hpsb_host *host) 927 { 928 + struct sbp2_fwhost_info *hi; 929 + struct sbp2_lu *lu; 930 931 hi = hpsb_get_hostinfo(&sbp2_highlevel, host); 932 if (!hi) 933 return; 934 + list_for_each_entry(lu, &hi->logical_units, lu_list) 935 + if (likely(atomic_read(&lu->state) != 936 SBP2LU_STATE_IN_SHUTDOWN)) { 937 + atomic_set(&lu->state, SBP2LU_STATE_IN_RESET); 938 + scsi_block_requests(lu->shost); 939 } 940 } 941 942 + static int sbp2_start_device(struct sbp2_lu *lu) 943 { 944 + struct sbp2_fwhost_info *hi = lu->hi; 945 int error; 946 947 + lu->login_response = dma_alloc_coherent(&hi->host->device, 948 sizeof(struct sbp2_login_response), 949 + &lu->login_response_dma, GFP_KERNEL); 950 + if (!lu->login_response) 951 goto alloc_fail; 952 953 + lu->query_logins_orb = dma_alloc_coherent(&hi->host->device, 954 sizeof(struct sbp2_query_logins_orb), 955 + &lu->query_logins_orb_dma, GFP_KERNEL); 956 + if (!lu->query_logins_orb) 957 goto alloc_fail; 958 959 + lu->query_logins_response = dma_alloc_coherent(&hi->host->device, 960 sizeof(struct sbp2_query_logins_response), 961 + 
&lu->query_logins_response_dma, GFP_KERNEL); 962 + if (!lu->query_logins_response) 963 goto alloc_fail; 964 965 + lu->reconnect_orb = dma_alloc_coherent(&hi->host->device, 966 sizeof(struct sbp2_reconnect_orb), 967 + &lu->reconnect_orb_dma, GFP_KERNEL); 968 + if (!lu->reconnect_orb) 969 goto alloc_fail; 970 971 + lu->logout_orb = dma_alloc_coherent(&hi->host->device, 972 sizeof(struct sbp2_logout_orb), 973 + &lu->logout_orb_dma, GFP_KERNEL); 974 + if (!lu->logout_orb) 975 goto alloc_fail; 976 977 + lu->login_orb = dma_alloc_coherent(&hi->host->device, 978 sizeof(struct sbp2_login_orb), 979 + &lu->login_orb_dma, GFP_KERNEL); 980 + if (!lu->login_orb) 981 goto alloc_fail; 982 983 + if (sbp2util_create_command_orb_pool(lu)) { 984 SBP2_ERR("sbp2util_create_command_orb_pool failed!"); 985 + sbp2_remove_device(lu); 986 return -ENOMEM; 987 } 988 989 + /* Wait a second before trying to log in. Previously logged in 990 + * initiators need a chance to reconnect. */ 991 if (msleep_interruptible(1000)) { 992 + sbp2_remove_device(lu); 993 return -EINTR; 994 } 995 996 + if (sbp2_login_device(lu)) { 997 + sbp2_remove_device(lu); 998 return -EBUSY; 999 } 1000 1001 + sbp2_set_busy_timeout(lu); 1002 + sbp2_agent_reset(lu, 1); 1003 + sbp2_max_speed_and_size(lu); 1004 1005 + error = scsi_add_device(lu->shost, 0, lu->ud->id, 0); 1006 if (error) { 1007 SBP2_ERR("scsi_add_device failed"); 1008 + sbp2_logout_device(lu); 1009 + sbp2_remove_device(lu); 1010 return error; 1011 } 1012 1013 return 0; 1014 1015 alloc_fail: 1016 + SBP2_ERR("Could not allocate memory for lu"); 1017 + sbp2_remove_device(lu); 1018 return -ENOMEM; 1019 } 1020 1021 + static void sbp2_remove_device(struct sbp2_lu *lu) 1022 { 1023 + struct sbp2_fwhost_info *hi; 1024 1025 + if (!lu) 1026 return; 1027 1028 + hi = lu->hi; 1029 1030 + if (lu->shost) { 1031 + scsi_remove_host(lu->shost); 1032 + scsi_host_put(lu->shost); 1033 } 1034 flush_scheduled_work(); 1035 + sbp2util_remove_command_orb_pool(lu); 1036 1037 + 
list_del(&lu->lu_list); 1038 1039 + if (lu->login_response) 1040 + dma_free_coherent(&hi->host->device, 1041 sizeof(struct sbp2_login_response), 1042 + lu->login_response, 1043 + lu->login_response_dma); 1044 + if (lu->login_orb) 1045 + dma_free_coherent(&hi->host->device, 1046 sizeof(struct sbp2_login_orb), 1047 + lu->login_orb, 1048 + lu->login_orb_dma); 1049 + if (lu->reconnect_orb) 1050 + dma_free_coherent(&hi->host->device, 1051 sizeof(struct sbp2_reconnect_orb), 1052 + lu->reconnect_orb, 1053 + lu->reconnect_orb_dma); 1054 + if (lu->logout_orb) 1055 + dma_free_coherent(&hi->host->device, 1056 sizeof(struct sbp2_logout_orb), 1057 + lu->logout_orb, 1058 + lu->logout_orb_dma); 1059 + if (lu->query_logins_orb) 1060 + dma_free_coherent(&hi->host->device, 1061 sizeof(struct sbp2_query_logins_orb), 1062 + lu->query_logins_orb, 1063 + lu->query_logins_orb_dma); 1064 + if (lu->query_logins_response) 1065 + dma_free_coherent(&hi->host->device, 1066 sizeof(struct sbp2_query_logins_response), 1067 + lu->query_logins_response, 1068 + lu->query_logins_response_dma); 1069 1070 + if (lu->status_fifo_addr != CSR1212_INVALID_ADDR_SPACE) 1071 hpsb_unregister_addrspace(&sbp2_highlevel, hi->host, 1072 + lu->status_fifo_addr); 1073 1074 + lu->ud->device.driver_data = NULL; 1075 1076 if (hi) 1077 module_put(hi->host->driver->owner); 1078 1079 + kfree(lu); 1080 } 1081 1082 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 1083 /* 1084 + * Deal with write requests on adapters which do not support physical DMA or 1085 + * have it switched off. 1086 */ 1087 static int sbp2_handle_physdma_write(struct hpsb_host *host, int nodeid, 1088 int destid, quadlet_t *data, u64 addr, 1089 size_t length, u16 flags) 1090 { 1091 memcpy(bus_to_virt((u32) addr), data, length); 1092 return RCODE_COMPLETE; 1093 } 1094 1095 /* 1096 + * Deal with read requests on adapters which do not support physical DMA or 1097 + * have it switched off. 
1098 */ 1099 static int sbp2_handle_physdma_read(struct hpsb_host *host, int nodeid, 1100 quadlet_t *data, u64 addr, size_t length, 1101 u16 flags) 1102 { 1103 memcpy(data, bus_to_virt((u32) addr), length); 1104 return RCODE_COMPLETE; 1105 } 1106 #endif ··· 1197 * SBP-2 protocol related section 1198 **************************************/ 1199 1200 + static int sbp2_query_logins(struct sbp2_lu *lu) 1201 { 1202 + struct sbp2_fwhost_info *hi = lu->hi; 1203 quadlet_t data[2]; 1204 int max_logins; 1205 int active_logins; 1206 1207 + lu->query_logins_orb->reserved1 = 0x0; 1208 + lu->query_logins_orb->reserved2 = 0x0; 1209 1210 + lu->query_logins_orb->query_response_lo = lu->query_logins_response_dma; 1211 + lu->query_logins_orb->query_response_hi = 1212 + ORB_SET_NODE_ID(hi->host->node_id); 1213 + lu->query_logins_orb->lun_misc = 1214 + ORB_SET_FUNCTION(SBP2_QUERY_LOGINS_REQUEST); 1215 + lu->query_logins_orb->lun_misc |= ORB_SET_NOTIFY(1); 1216 + lu->query_logins_orb->lun_misc |= ORB_SET_LUN(lu->lun); 1217 1218 + lu->query_logins_orb->reserved_resp_length = 1219 + ORB_SET_QUERY_LOGINS_RESP_LENGTH( 1220 + sizeof(struct sbp2_query_logins_response)); 1221 1222 + lu->query_logins_orb->status_fifo_hi = 1223 + ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1224 + lu->query_logins_orb->status_fifo_lo = 1225 + ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1226 1227 + sbp2util_cpu_to_be32_buffer(lu->query_logins_orb, 1228 + sizeof(struct sbp2_query_logins_orb)); 1229 1230 + memset(lu->query_logins_response, 0, 1231 + sizeof(struct sbp2_query_logins_response)); 1232 1233 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1234 + data[1] = lu->query_logins_orb_dma; 1235 sbp2util_cpu_to_be32_buffer(data, 8); 1236 1237 + hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1238 1239 + if (sbp2util_access_timeout(lu, 2*HZ)) { 1240 SBP2_INFO("Error querying logins to SBP-2 device - timed out"); 1241 return -EIO; 1242 } 1243 1244 + if (lu->status_block.ORB_offset_lo 
!= lu->query_logins_orb_dma) { 1245 SBP2_INFO("Error querying logins to SBP-2 device - timed out"); 1246 return -EIO; 1247 } 1248 1249 + if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) { 1250 SBP2_INFO("Error querying logins to SBP-2 device - failed"); 1251 return -EIO; 1252 } 1253 1254 + sbp2util_cpu_to_be32_buffer(lu->query_logins_response, 1255 + sizeof(struct sbp2_query_logins_response)); 1256 1257 + max_logins = RESPONSE_GET_MAX_LOGINS( 1258 + lu->query_logins_response->length_max_logins); 1259 SBP2_INFO("Maximum concurrent logins supported: %d", max_logins); 1260 1261 + active_logins = RESPONSE_GET_ACTIVE_LOGINS( 1262 + lu->query_logins_response->length_max_logins); 1263 SBP2_INFO("Number of active logins: %d", active_logins); 1264 1265 if (active_logins >= max_logins) { ··· 1274 return 0; 1275 } 1276 1277 + static int sbp2_login_device(struct sbp2_lu *lu) 1278 { 1279 + struct sbp2_fwhost_info *hi = lu->hi; 1280 quadlet_t data[2]; 1281 1282 + if (!lu->login_orb) 1283 + return -EIO; 1284 1285 + if (!sbp2_exclusive_login && sbp2_query_logins(lu)) { 1286 + SBP2_INFO("Device does not support any more concurrent logins"); 1287 return -EIO; 1288 } 1289 1290 + /* assume no password */ 1291 + lu->login_orb->password_hi = 0; 1292 + lu->login_orb->password_lo = 0; 1293 1294 + lu->login_orb->login_response_lo = lu->login_response_dma; 1295 + lu->login_orb->login_response_hi = ORB_SET_NODE_ID(hi->host->node_id); 1296 + lu->login_orb->lun_misc = ORB_SET_FUNCTION(SBP2_LOGIN_REQUEST); 1297 1298 + /* one second reconnect time */ 1299 + lu->login_orb->lun_misc |= ORB_SET_RECONNECT(0); 1300 + lu->login_orb->lun_misc |= ORB_SET_EXCLUSIVE(sbp2_exclusive_login); 1301 + lu->login_orb->lun_misc |= ORB_SET_NOTIFY(1); 1302 + lu->login_orb->lun_misc |= ORB_SET_LUN(lu->lun); 1303 1304 + lu->login_orb->passwd_resp_lengths = 1305 ORB_SET_LOGIN_RESP_LENGTH(sizeof(struct sbp2_login_response)); 1306 1307 + lu->login_orb->status_fifo_hi = 1308 + 
ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1309 + lu->login_orb->status_fifo_lo = 1310 + ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1311 1312 + sbp2util_cpu_to_be32_buffer(lu->login_orb, 1313 + sizeof(struct sbp2_login_orb)); 1314 1315 + memset(lu->login_response, 0, sizeof(struct sbp2_login_response)); 1316 1317 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1318 + data[1] = lu->login_orb_dma; 1319 sbp2util_cpu_to_be32_buffer(data, 8); 1320 1321 + hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1322 1323 + /* wait up to 20 seconds for login status */ 1324 + if (sbp2util_access_timeout(lu, 20*HZ)) { 1325 SBP2_ERR("Error logging into SBP-2 device - timed out"); 1326 return -EIO; 1327 } 1328 1329 + /* make sure that the returned status matches the login ORB */ 1330 + if (lu->status_block.ORB_offset_lo != lu->login_orb_dma) { 1331 SBP2_ERR("Error logging into SBP-2 device - timed out"); 1332 return -EIO; 1333 } 1334 1335 + if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) { 1336 SBP2_ERR("Error logging into SBP-2 device - failed"); 1337 return -EIO; 1338 } 1339 1340 + sbp2util_cpu_to_be32_buffer(lu->login_response, 1341 + sizeof(struct sbp2_login_response)); 1342 + lu->command_block_agent_addr = 1343 + ((u64)lu->login_response->command_block_agent_hi) << 32; 1344 + lu->command_block_agent_addr |= 1345 + ((u64)lu->login_response->command_block_agent_lo); 1346 + lu->command_block_agent_addr &= 0x0000ffffffffffffULL; 1347 1348 SBP2_INFO("Logged into SBP-2 device"); 1349 return 0; 1350 } 1351 1352 + static int sbp2_logout_device(struct sbp2_lu *lu) 1353 { 1354 + struct sbp2_fwhost_info *hi = lu->hi; 1355 quadlet_t data[2]; 1356 int error; 1357 1358 + lu->logout_orb->reserved1 = 0x0; 1359 + lu->logout_orb->reserved2 = 0x0; 1360 + lu->logout_orb->reserved3 = 0x0; 1361 + lu->logout_orb->reserved4 = 0x0; 1362 1363 + lu->logout_orb->login_ID_misc = ORB_SET_FUNCTION(SBP2_LOGOUT_REQUEST); 1364 + lu->logout_orb->login_ID_misc |= 1365 + 
ORB_SET_LOGIN_ID(lu->login_response->length_login_ID); 1366 + lu->logout_orb->login_ID_misc |= ORB_SET_NOTIFY(1); 1367 1368 + lu->logout_orb->reserved5 = 0x0; 1369 + lu->logout_orb->status_fifo_hi = 1370 + ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1371 + lu->logout_orb->status_fifo_lo = 1372 + ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1373 1374 + sbp2util_cpu_to_be32_buffer(lu->logout_orb, 1375 + sizeof(struct sbp2_logout_orb)); 1376 1377 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1378 + data[1] = lu->logout_orb_dma; 1379 sbp2util_cpu_to_be32_buffer(data, 8); 1380 1381 + error = hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1382 if (error) 1383 return error; 1384 1385 + /* wait up to 1 second for the device to complete logout */ 1386 + if (sbp2util_access_timeout(lu, HZ)) 1387 return -EIO; 1388 1389 SBP2_INFO("Logged out of SBP-2 device"); 1390 return 0; 1391 } 1392 1393 + static int sbp2_reconnect_device(struct sbp2_lu *lu) 1394 { 1395 + struct sbp2_fwhost_info *hi = lu->hi; 1396 quadlet_t data[2]; 1397 int error; 1398 1399 + lu->reconnect_orb->reserved1 = 0x0; 1400 + lu->reconnect_orb->reserved2 = 0x0; 1401 + lu->reconnect_orb->reserved3 = 0x0; 1402 + lu->reconnect_orb->reserved4 = 0x0; 1403 1404 + lu->reconnect_orb->login_ID_misc = 1405 + ORB_SET_FUNCTION(SBP2_RECONNECT_REQUEST); 1406 + lu->reconnect_orb->login_ID_misc |= 1407 + ORB_SET_LOGIN_ID(lu->login_response->length_login_ID); 1408 + lu->reconnect_orb->login_ID_misc |= ORB_SET_NOTIFY(1); 1409 1410 + lu->reconnect_orb->reserved5 = 0x0; 1411 + lu->reconnect_orb->status_fifo_hi = 1412 + ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1413 + lu->reconnect_orb->status_fifo_lo = 1414 + ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1415 1416 + sbp2util_cpu_to_be32_buffer(lu->reconnect_orb, 1417 + sizeof(struct sbp2_reconnect_orb)); 1418 1419 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1420 + data[1] = lu->reconnect_orb_dma; 1421 
sbp2util_cpu_to_be32_buffer(data, 8); 1422 1423 + error = hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1424 if (error) 1425 return error; 1426 1427 + /* wait up to 1 second for reconnect status */ 1428 + if (sbp2util_access_timeout(lu, HZ)) { 1429 SBP2_ERR("Error reconnecting to SBP-2 device - timed out"); 1430 return -EIO; 1431 } 1432 1433 + /* make sure that the returned status matches the reconnect ORB */ 1434 + if (lu->status_block.ORB_offset_lo != lu->reconnect_orb_dma) { 1435 SBP2_ERR("Error reconnecting to SBP-2 device - timed out"); 1436 return -EIO; 1437 } 1438 1439 + if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) { 1440 SBP2_ERR("Error reconnecting to SBP-2 device - failed"); 1441 return -EIO; 1442 } 1443 1444 + SBP2_INFO("Reconnected to SBP-2 device"); 1445 return 0; 1446 } 1447 1448 /* 1449 + * Set the target node's Single Phase Retry limit. Affects the target's retry 1450 + * behaviour if our node is too busy to accept requests. 1451 */ 1452 + static int sbp2_set_busy_timeout(struct sbp2_lu *lu) 1453 { 1454 quadlet_t data; 1455 1456 data = cpu_to_be32(SBP2_BUSY_TIMEOUT_VALUE); 1457 + if (hpsb_node_write(lu->ne, SBP2_BUSY_TIMEOUT_ADDRESS, &data, 4)) 1458 SBP2_ERR("%s error", __FUNCTION__); 1459 return 0; 1460 } 1461 1462 + static void sbp2_parse_unit_directory(struct sbp2_lu *lu, 1463 struct unit_directory *ud) 1464 { 1465 struct csr1212_keyval *kv; 1466 struct csr1212_dentry *dentry; 1467 u64 management_agent_addr; 1468 + u32 unit_characteristics, firmware_revision; 1469 unsigned workarounds; 1470 int i; 1471 1472 + management_agent_addr = 0; 1473 + unit_characteristics = 0; 1474 + firmware_revision = 0; 1475 1476 csr1212_for_each_dir_entry(ud->ne->csr, kv, ud->ud_kv, dentry) { 1477 switch (kv->key.id) { 1478 case CSR1212_KV_ID_DEPENDENT_INFO: 1479 + if (kv->key.type == CSR1212_KV_TYPE_CSR_OFFSET) 1480 management_agent_addr = 1481 CSR1212_REGISTER_SPACE_BASE + 1482 (kv->value.csr_offset << 2); 1483 1484 + else if 
(kv->key.type == CSR1212_KV_TYPE_IMMEDIATE) 1485 + lu->lun = ORB_SET_LUN(kv->value.immediate); 1486 break; 1487 1488 case SBP2_UNIT_CHARACTERISTICS_KEY: 1489 + /* FIXME: This is ignored so far. 1490 + * See SBP-2 clause 7.4.8. */ 1491 unit_characteristics = kv->value.immediate; 1492 break; 1493 1494 case SBP2_FIRMWARE_REVISION_KEY: 1495 firmware_revision = kv->value.immediate; 1496 break; 1497 1498 default: 1499 + /* FIXME: Check for SBP2_DEVICE_TYPE_AND_LUN_KEY. 1500 + * Its "ordered" bit has consequences for command ORB 1501 + * list handling. See SBP-2 clauses 4.6, 7.4.11, 10.2 */ 1502 break; 1503 } 1504 } ··· 1631 /* We would need one SCSI host template for each target to adjust 1632 * max_sectors on the fly, therefore warn only. */ 1633 if (workarounds & SBP2_WORKAROUND_128K_MAX_TRANS && 1634 + (sbp2_max_sectors * 512) > (128 * 1024)) 1635 + SBP2_INFO("Node " NODE_BUS_FMT ": Bridge only supports 128KB " 1636 "max transfer size. WARNING: Current max_sectors " 1637 "setting is larger than 128KB (%d sectors)", 1638 NODE_BUS_ARGS(ud->ne->host, ud->ne->nodeid), 1639 + sbp2_max_sectors); 1640 1641 /* If this is a logical unit directory entry, process the parent 1642 * to get the values. */ 1643 if (ud->flags & UNIT_DIRECTORY_LUN_DIRECTORY) { 1644 + struct unit_directory *parent_ud = container_of( 1645 + ud->device.parent, struct unit_directory, device); 1646 + sbp2_parse_unit_directory(lu, parent_ud); 1647 } else { 1648 + lu->management_agent_addr = management_agent_addr; 1649 + lu->workarounds = workarounds; 1650 if (ud->flags & UNIT_DIRECTORY_HAS_LUN) 1651 + lu->lun = ORB_SET_LUN(ud->lun); 1652 } 1653 } 1654 ··· 1667 * the speed that it needs to use, and the max_rec the host supports, and 1668 * it takes care of the rest. 
1669 */ 1670 + static int sbp2_max_speed_and_size(struct sbp2_lu *lu) 1671 { 1672 + struct sbp2_fwhost_info *hi = lu->hi; 1673 u8 payload; 1674 1675 + lu->speed_code = hi->host->speed[NODEID_TO_NODE(lu->ne->nodeid)]; 1676 1677 + if (lu->speed_code > sbp2_max_speed) { 1678 + lu->speed_code = sbp2_max_speed; 1679 + SBP2_INFO("Reducing speed to %s", 1680 + hpsb_speedto_str[sbp2_max_speed]); 1681 } 1682 1683 /* Payload size is the lesser of what our speed supports and what 1684 * our host supports. */ 1685 + payload = min(sbp2_speedto_max_payload[lu->speed_code], 1686 (u8) (hi->host->csr.max_rec - 1)); 1687 1688 /* If physical DMA is off, work around limitation in ohci1394: 1689 * packet size must not exceed PAGE_SIZE */ 1690 + if (lu->ne->host->low_addr_space < (1ULL << 32)) 1691 while (SBP2_PAYLOAD_TO_BYTES(payload) + 24 > PAGE_SIZE && 1692 payload) 1693 payload--; 1694 1695 + SBP2_INFO("Node " NODE_BUS_FMT ": Max speed [%s] - Max payload [%u]", 1696 + NODE_BUS_ARGS(hi->host, lu->ne->nodeid), 1697 + hpsb_speedto_str[lu->speed_code], 1698 + SBP2_PAYLOAD_TO_BYTES(payload)); 1699 1700 + lu->max_payload_size = payload; 1701 return 0; 1702 } 1703 1704 + static int sbp2_agent_reset(struct sbp2_lu *lu, int wait) 1705 { 1706 quadlet_t data; 1707 u64 addr; 1708 int retval; 1709 unsigned long flags; 1710 1711 + /* flush lu->protocol_work */ 1712 if (wait) 1713 flush_scheduled_work(); 1714 1715 data = ntohl(SBP2_AGENT_RESET_DATA); 1716 + addr = lu->command_block_agent_addr + SBP2_AGENT_RESET_OFFSET; 1717 1718 if (wait) 1719 + retval = hpsb_node_write(lu->ne, addr, &data, 4); 1720 else 1721 + retval = sbp2util_node_write_no_wait(lu->ne, addr, &data, 4); 1722 1723 if (retval < 0) { 1724 SBP2_ERR("hpsb_node_write failed.\n"); 1725 return -EIO; 1726 } 1727 1728 + /* make sure that the ORB_POINTER is written on next command */ 1729 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 1730 + lu->last_orb = NULL; 1731 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 1732 1733 return 0; 
1734 } 1735 1736 static void sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb, 1737 + struct sbp2_fwhost_info *hi, 1738 + struct sbp2_command_info *cmd, 1739 unsigned int scsi_use_sg, 1740 struct scatterlist *sgpnt, 1741 u32 orb_direction, 1742 enum dma_data_direction dma_dir) 1743 { 1744 + cmd->dma_dir = dma_dir; 1745 orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id); 1746 orb->misc |= ORB_SET_DIRECTION(orb_direction); 1747 1748 + /* special case if only one element (and less than 64KB in size) */ 1749 if ((scsi_use_sg == 1) && 1750 (sgpnt[0].length <= SBP2_MAX_SG_ELEMENT_LENGTH)) { 1751 1752 + cmd->dma_size = sgpnt[0].length; 1753 + cmd->dma_type = CMD_DMA_PAGE; 1754 + cmd->cmd_dma = dma_map_page(&hi->host->device, 1755 + sgpnt[0].page, sgpnt[0].offset, 1756 + cmd->dma_size, cmd->dma_dir); 1757 1758 + orb->data_descriptor_lo = cmd->cmd_dma; 1759 + orb->misc |= ORB_SET_DATA_SIZE(cmd->dma_size); 1760 1761 } else { 1762 struct sbp2_unrestricted_page_table *sg_element = 1763 + &cmd->scatter_gather_element[0]; 1764 u32 sg_count, sg_len; 1765 dma_addr_t sg_addr; 1766 + int i, count = dma_map_sg(&hi->host->device, sgpnt, scsi_use_sg, 1767 dma_dir); 1768 1769 + cmd->dma_size = scsi_use_sg; 1770 + cmd->sge_buffer = sgpnt; 1771 1772 /* use page tables (s/g) */ 1773 orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1); 1774 + orb->data_descriptor_lo = cmd->sge_dma; 1775 1776 + /* loop through and fill out our SBP-2 page tables 1777 + * (and split up anything too large) */ 1778 for (i = 0, sg_count = 0 ; i < count; i++, sgpnt++) { 1779 sg_len = sg_dma_len(sgpnt); 1780 sg_addr = sg_dma_address(sgpnt); ··· 1813 } 1814 } 1815 1816 orb->misc |= ORB_SET_DATA_SIZE(sg_count); 1817 1818 sbp2util_cpu_to_be32_buffer(sg_element, 1819 + (sizeof(struct sbp2_unrestricted_page_table)) * 1820 + sg_count); 1821 } 1822 } 1823 1824 static void sbp2_prep_command_orb_no_sg(struct sbp2_command_orb *orb, 1825 + struct sbp2_fwhost_info *hi, 1826 + struct sbp2_command_info *cmd, 1827 struct 
scatterlist *sgpnt, 1828 u32 orb_direction, 1829 unsigned int scsi_request_bufflen, 1830 void *scsi_request_buffer, 1831 enum dma_data_direction dma_dir) 1832 { 1833 + cmd->dma_dir = dma_dir; 1834 + cmd->dma_size = scsi_request_bufflen; 1835 + cmd->dma_type = CMD_DMA_SINGLE; 1836 + cmd->cmd_dma = dma_map_single(&hi->host->device, scsi_request_buffer, 1837 + cmd->dma_size, cmd->dma_dir); 1838 orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id); 1839 orb->misc |= ORB_SET_DIRECTION(orb_direction); 1840 1841 + /* handle case where we get a command w/o s/g enabled 1842 + * (but check for transfers larger than 64K) */ 1843 if (scsi_request_bufflen <= SBP2_MAX_SG_ELEMENT_LENGTH) { 1844 1845 + orb->data_descriptor_lo = cmd->cmd_dma; 1846 orb->misc |= ORB_SET_DATA_SIZE(scsi_request_bufflen); 1847 1848 } else { 1849 + /* The buffer is too large. Turn this into page tables. */ 1850 + 1851 struct sbp2_unrestricted_page_table *sg_element = 1852 + &cmd->scatter_gather_element[0]; 1853 u32 sg_count, sg_len; 1854 dma_addr_t sg_addr; 1855 1856 + orb->data_descriptor_lo = cmd->sge_dma; 1857 orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1); 1858 1859 + /* fill out our SBP-2 page tables; split up the large buffer */ 1860 sg_count = 0; 1861 sg_len = scsi_request_bufflen; 1862 + sg_addr = cmd->cmd_dma; 1863 while (sg_len) { 1864 sg_element[sg_count].segment_base_lo = sg_addr; 1865 if (sg_len > SBP2_MAX_SG_ELEMENT_LENGTH) { ··· 1892 sg_count++; 1893 } 1894 1895 orb->misc |= ORB_SET_DATA_SIZE(sg_count); 1896 1897 sbp2util_cpu_to_be32_buffer(sg_element, 1898 + (sizeof(struct sbp2_unrestricted_page_table)) * 1899 + sg_count); 1900 } 1901 } 1902 1903 + static void sbp2_create_command_orb(struct sbp2_lu *lu, 1904 + struct sbp2_command_info *cmd, 1905 unchar *scsi_cmd, 1906 unsigned int scsi_use_sg, 1907 unsigned int scsi_request_bufflen, 1908 void *scsi_request_buffer, 1909 enum dma_data_direction dma_dir) 1910 { 1911 + struct sbp2_fwhost_info *hi = lu->hi; 1912 struct scatterlist *sgpnt 
= (struct scatterlist *)scsi_request_buffer; 1913 + struct sbp2_command_orb *orb = &cmd->command_orb; 1914 u32 orb_direction; 1915 1916 /* 1917 + * Set-up our command ORB. 1918 * 1919 * NOTE: We're doing unrestricted page tables (s/g), as this is 1920 * best performance (at least with the devices I have). This means 1921 * that data_size becomes the number of s/g elements, and 1922 * page_size should be zero (for unrestricted). 1923 */ 1924 + orb->next_ORB_hi = ORB_SET_NULL_PTR(1); 1925 + orb->next_ORB_lo = 0x0; 1926 + orb->misc = ORB_SET_MAX_PAYLOAD(lu->max_payload_size); 1927 + orb->misc |= ORB_SET_SPEED(lu->speed_code); 1928 + orb->misc |= ORB_SET_NOTIFY(1); 1929 1930 if (dma_dir == DMA_NONE) 1931 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER; ··· 1944 else if (dma_dir == DMA_FROM_DEVICE && scsi_request_bufflen) 1945 orb_direction = ORB_DIRECTION_READ_FROM_MEDIA; 1946 else { 1947 + SBP2_INFO("Falling back to DMA_NONE"); 1948 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER; 1949 } 1950 1951 + /* set up our page table stuff */ 1952 if (orb_direction == ORB_DIRECTION_NO_DATA_TRANSFER) { 1953 + orb->data_descriptor_hi = 0x0; 1954 + orb->data_descriptor_lo = 0x0; 1955 + orb->misc |= ORB_SET_DIRECTION(1); 1956 + } else if (scsi_use_sg) 1957 + sbp2_prep_command_orb_sg(orb, hi, cmd, scsi_use_sg, sgpnt, 1958 + orb_direction, dma_dir); 1959 + else 1960 + sbp2_prep_command_orb_no_sg(orb, hi, cmd, sgpnt, orb_direction, 1961 + scsi_request_bufflen, 1962 scsi_request_buffer, dma_dir); 1963 1964 + sbp2util_cpu_to_be32_buffer(orb, sizeof(*orb)); 1965 1966 + memset(orb->cdb, 0, 12); 1967 + memcpy(orb->cdb, scsi_cmd, COMMAND_SIZE(*scsi_cmd)); 1968 } 1969 1970 + static void sbp2_link_orb_command(struct sbp2_lu *lu, 1971 + struct sbp2_command_info *cmd) 1972 { 1973 + struct sbp2_fwhost_info *hi = lu->hi; 1974 struct sbp2_command_orb *last_orb; 1975 dma_addr_t last_orb_dma; 1976 + u64 addr = lu->command_block_agent_addr; 1977 quadlet_t data[2]; 1978 size_t length; 1979 unsigned long 
flags; 1980 1981 + dma_sync_single_for_device(&hi->host->device, cmd->command_orb_dma, 1982 + sizeof(struct sbp2_command_orb), 1983 + DMA_TO_DEVICE); 1984 + dma_sync_single_for_device(&hi->host->device, cmd->sge_dma, 1985 + sizeof(cmd->scatter_gather_element), 1986 + DMA_BIDIRECTIONAL); 1987 1988 + /* check to see if there are any previous orbs to use */ 1989 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 1990 + last_orb = lu->last_orb; 1991 + last_orb_dma = lu->last_orb_dma; 1992 if (!last_orb) { 1993 /* 1994 * last_orb == NULL means: We know that the target's fetch agent ··· 2011 */ 2012 addr += SBP2_ORB_POINTER_OFFSET; 2013 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 2014 + data[1] = cmd->command_orb_dma; 2015 sbp2util_cpu_to_be32_buffer(data, 8); 2016 length = 8; 2017 } else { ··· 2022 * The target's fetch agent may or may not have read this 2023 * previous ORB yet. 2024 */ 2025 + dma_sync_single_for_cpu(&hi->host->device, last_orb_dma, 2026 + sizeof(struct sbp2_command_orb), 2027 + DMA_TO_DEVICE); 2028 + last_orb->next_ORB_lo = cpu_to_be32(cmd->command_orb_dma); 2029 wmb(); 2030 /* Tells hardware that this pointer is valid */ 2031 last_orb->next_ORB_hi = 0; 2032 + dma_sync_single_for_device(&hi->host->device, last_orb_dma, 2033 + sizeof(struct sbp2_command_orb), 2034 + DMA_TO_DEVICE); 2035 addr += SBP2_DOORBELL_OFFSET; 2036 data[0] = 0; 2037 length = 4; 2038 } 2039 + lu->last_orb = &cmd->command_orb; 2040 + lu->last_orb_dma = cmd->command_orb_dma; 2041 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 2042 2043 + if (sbp2util_node_write_no_wait(lu->ne, addr, data, length)) { 2044 /* 2045 * sbp2util_node_write_no_wait failed. We certainly ran out 2046 * of transaction labels, perhaps just because there were no ··· 2051 * the workqueue job will sleep to guaranteedly get a tlabel. 2052 * We do not accept new commands until the job is over. 2053 */ 2054 + scsi_block_requests(lu->shost); 2055 + PREPARE_WORK(&lu->protocol_work, 2056 last_orb ? 
sbp2util_write_doorbell: 2057 sbp2util_write_orb_pointer); 2058 + schedule_work(&lu->protocol_work); 2059 } 2060 } 2061 2062 + static int sbp2_send_command(struct sbp2_lu *lu, struct scsi_cmnd *SCpnt, 2063 void (*done)(struct scsi_cmnd *)) 2064 { 2065 + unchar *scsi_cmd = (unchar *)SCpnt->cmnd; 2066 unsigned int request_bufflen = SCpnt->request_bufflen; 2067 + struct sbp2_command_info *cmd; 2068 2069 + cmd = sbp2util_allocate_command_orb(lu, SCpnt, done); 2070 + if (!cmd) 2071 return -EIO; 2072 2073 + sbp2_create_command_orb(lu, cmd, scsi_cmd, SCpnt->use_sg, 2074 request_bufflen, SCpnt->request_buffer, 2075 SCpnt->sc_data_direction); 2076 + sbp2_link_orb_command(lu, cmd); 2077 2078 return 0; 2079 } ··· 2103 /* 2104 * Translates SBP-2 status into SCSI sense data for check conditions 2105 */ 2106 + static unsigned int sbp2_status_to_sense_data(unchar *sbp2_status, 2107 + unchar *sense_data) 2108 { 2109 + /* OK, it's pretty ugly... ;-) */ 2110 sense_data[0] = 0x70; 2111 sense_data[1] = 0x0; 2112 sense_data[2] = sbp2_status[9]; ··· 2127 sense_data[14] = sbp2_status[20]; 2128 sense_data[15] = sbp2_status[21]; 2129 2130 + return sbp2_status[8] & 0x3f; 2131 } 2132 2133 static int sbp2_handle_status_write(struct hpsb_host *host, int nodeid, 2134 int destid, quadlet_t *data, u64 addr, 2135 size_t length, u16 fl) 2136 { 2137 + struct sbp2_fwhost_info *hi; 2138 + struct sbp2_lu *lu = NULL, *lu_tmp; 2139 struct scsi_cmnd *SCpnt = NULL; 2140 struct sbp2_status_block *sb; 2141 u32 scsi_status = SBP2_SCSI_STATUS_GOOD; 2142 + struct sbp2_command_info *cmd; 2143 unsigned long flags; 2144 2145 if (unlikely(length < 8 || length > sizeof(struct sbp2_status_block))) { 2146 SBP2_ERR("Wrong size of status block"); ··· 2162 SBP2_ERR("host info is NULL - this is bad!"); 2163 return RCODE_ADDRESS_ERROR; 2164 } 2165 + 2166 + /* Find the unit which wrote the status. 
*/ 2167 + list_for_each_entry(lu_tmp, &hi->logical_units, lu_list) { 2168 + if (lu_tmp->ne->nodeid == nodeid && 2169 + lu_tmp->status_fifo_addr == addr) { 2170 + lu = lu_tmp; 2171 break; 2172 } 2173 } 2174 + if (unlikely(!lu)) { 2175 + SBP2_ERR("lu is NULL - device is gone?"); 2176 return RCODE_ADDRESS_ERROR; 2177 } 2178 2179 + /* Put response into lu status fifo buffer. The first two bytes 2180 * come in big endian bit order. Often the target writes only a 2181 * truncated status block, minimally the first two quadlets. The rest 2182 + * is implied to be zeros. */ 2183 + sb = &lu->status_block; 2184 memset(sb->command_set_dependent, 0, sizeof(sb->command_set_dependent)); 2185 memcpy(sb, data, length); 2186 sbp2util_be32_to_cpu_buffer(sb, 8); 2187 2188 + /* Ignore unsolicited status. Handle command ORB status. */ 2189 if (unlikely(STATUS_GET_SRC(sb->ORB_offset_hi_misc) == 2)) 2190 + cmd = NULL; 2191 else 2192 + cmd = sbp2util_find_command_for_orb(lu, sb->ORB_offset_lo); 2193 + if (cmd) { 2194 + dma_sync_single_for_cpu(&hi->host->device, cmd->command_orb_dma, 2195 + sizeof(struct sbp2_command_orb), 2196 + DMA_TO_DEVICE); 2197 + dma_sync_single_for_cpu(&hi->host->device, cmd->sge_dma, 2198 + sizeof(cmd->scatter_gather_element), 2199 + DMA_BIDIRECTIONAL); 2200 + /* Grab SCSI command pointers and check status. */ 2201 /* 2202 * FIXME: If the src field in the status is 1, the ORB DMA must 2203 * not be reused until status for a subsequent ORB is received. 2204 */ 2205 + SCpnt = cmd->Current_SCpnt; 2206 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 2207 + sbp2util_mark_command_completed(lu, cmd); 2208 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 2209 2210 if (SCpnt) { 2211 u32 h = sb->ORB_offset_hi_misc; 2212 u32 r = STATUS_GET_RESP(h); 2213 2214 if (r != RESP_STATUS_REQUEST_COMPLETE) { 2215 + SBP2_INFO("resp 0x%x, sbp_status 0x%x", 2216 r, STATUS_GET_SBP_STATUS(h)); 2217 scsi_status = 2218 r == RESP_STATUS_TRANSPORT_FAILURE ? 
2219 SBP2_SCSI_STATUS_BUSY : 2220 SBP2_SCSI_STATUS_COMMAND_TERMINATED; 2221 } 2222 + 2223 + if (STATUS_GET_LEN(h) > 1) 2224 scsi_status = sbp2_status_to_sense_data( 2225 (unchar *)sb, SCpnt->sense_buffer); 2226 + 2227 + if (STATUS_TEST_DEAD(h)) 2228 + sbp2_agent_reset(lu, 0); 2229 } 2230 2231 + /* Check here to see if there are no commands in-use. If there 2232 * are none, we know that the fetch agent left the active state 2233 * _and_ that we did not reactivate it yet. Therefore clear 2234 * last_orb so that next time we write directly to the 2235 * ORB_POINTER register. That way the fetch agent does not need 2236 + * to refetch the next_ORB. */ 2237 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 2238 + if (list_empty(&lu->cmd_orb_inuse)) 2239 + lu->last_orb = NULL; 2240 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 2241 2242 } else { 2243 + /* It's probably status after a management request. */ 2244 + if ((sb->ORB_offset_lo == lu->reconnect_orb_dma) || 2245 + (sb->ORB_offset_lo == lu->login_orb_dma) || 2246 + (sb->ORB_offset_lo == lu->query_logins_orb_dma) || 2247 + (sb->ORB_offset_lo == lu->logout_orb_dma)) { 2248 + lu->access_complete = 1; 2249 + wake_up_interruptible(&sbp2_access_wq); 2250 } 2251 } 2252 2253 + if (SCpnt) 2254 + sbp2scsi_complete_command(lu, scsi_status, SCpnt, 2255 + cmd->Current_done); 2256 return RCODE_COMPLETE; 2257 } 2258 ··· 2294 * SCSI interface related section 2295 **************************************/ 2296 2297 static int sbp2scsi_queuecommand(struct scsi_cmnd *SCpnt, 2298 void (*done)(struct scsi_cmnd *)) 2299 { 2300 + struct sbp2_lu *lu = (struct sbp2_lu *)SCpnt->device->host->hostdata[0]; 2301 + struct sbp2_fwhost_info *hi; 2302 int result = DID_NO_CONNECT << 16; 2303 2304 + if (unlikely(!sbp2util_node_is_available(lu))) 2305 goto done; 2306 2307 + hi = lu->hi; 2308 2309 + if (unlikely(!hi)) { 2310 + SBP2_ERR("sbp2_fwhost_info is NULL - this is bad!"); 2311 goto done; 2312 } 2313 2314 + /* Multiple units are currently 
represented to the SCSI core as separate 2315 + * targets, not as one target with multiple LUs. Therefore return 2316 + * selection time-out to any IO directed at non-zero LUNs. */ 2317 + if (unlikely(SCpnt->device->lun)) 2318 goto done; 2319 2320 + /* handle the request sense command here (auto-request sense) */ 2321 if (SCpnt->cmnd[0] == REQUEST_SENSE) { 2322 + memcpy(SCpnt->request_buffer, SCpnt->sense_buffer, 2323 + SCpnt->request_bufflen); 2324 memset(SCpnt->sense_buffer, 0, sizeof(SCpnt->sense_buffer)); 2325 + sbp2scsi_complete_command(lu, SBP2_SCSI_STATUS_GOOD, SCpnt, 2326 + done); 2327 return 0; 2328 } 2329 2330 + if (unlikely(!hpsb_node_entry_valid(lu->ne))) { 2331 SBP2_ERR("Bus reset in progress - rejecting command"); 2332 result = DID_BUS_BUSY << 16; 2333 goto done; 2334 } 2335 2336 + /* Bidirectional commands are not yet implemented, 2337 + * and unknown transfer direction not handled. */ 2338 + if (unlikely(SCpnt->sc_data_direction == DMA_BIDIRECTIONAL)) { 2339 SBP2_ERR("Cannot handle DMA_BIDIRECTIONAL - rejecting command"); 2340 result = DID_ERROR << 16; 2341 goto done; 2342 } 2343 2344 + if (sbp2_send_command(lu, SCpnt, done)) { 2345 SBP2_ERR("Error sending SCSI command"); 2346 + sbp2scsi_complete_command(lu, 2347 + SBP2_SCSI_STATUS_SELECTION_TIMEOUT, 2348 SCpnt, done); 2349 } 2350 return 0; ··· 2375 return 0; 2376 } 2377 2378 + static void sbp2scsi_complete_all_commands(struct sbp2_lu *lu, u32 status) 2379 { 2380 + struct sbp2_fwhost_info *hi = lu->hi; 2381 struct list_head *lh; 2382 + struct sbp2_command_info *cmd; 2383 unsigned long flags; 2384 2385 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 2386 + while (!list_empty(&lu->cmd_orb_inuse)) { 2387 + lh = lu->cmd_orb_inuse.next; 2388 + cmd = list_entry(lh, struct sbp2_command_info, list); 2389 + dma_sync_single_for_cpu(&hi->host->device, cmd->command_orb_dma, 2390 + sizeof(struct sbp2_command_orb), 2391 + DMA_TO_DEVICE); 2392 + dma_sync_single_for_cpu(&hi->host->device, cmd->sge_dma, 2393 + 
sizeof(cmd->scatter_gather_element), 2394 + DMA_BIDIRECTIONAL); 2395 + sbp2util_mark_command_completed(lu, cmd); 2396 + if (cmd->Current_SCpnt) { 2397 + cmd->Current_SCpnt->result = status << 16; 2398 + cmd->Current_done(cmd->Current_SCpnt); 2399 } 2400 } 2401 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 2402 2403 return; 2404 } 2405 2406 /* 2407 + * Complete a regular SCSI command. Can be called in atomic context. 2408 */ 2409 + static void sbp2scsi_complete_command(struct sbp2_lu *lu, u32 scsi_status, 2410 + struct scsi_cmnd *SCpnt, 2411 void (*done)(struct scsi_cmnd *)) 2412 { 2413 if (!SCpnt) { 2414 SBP2_ERR("SCpnt is NULL"); 2415 return; 2416 } 2417 2418 switch (scsi_status) { 2419 case SBP2_SCSI_STATUS_GOOD: 2420 SCpnt->result = DID_OK << 16; ··· 2455 break; 2456 2457 case SBP2_SCSI_STATUS_CHECK_CONDITION: 2458 SCpnt->result = CHECK_CONDITION << 1 | DID_OK << 16; 2459 break; 2460 2461 case SBP2_SCSI_STATUS_SELECTION_TIMEOUT: ··· 2482 SCpnt->result = DID_ERROR << 16; 2483 } 2484 2485 + /* If a bus reset is in progress and there was an error, complete 2486 + * the command as busy so that it will get retried. */ 2487 + if (!hpsb_node_entry_valid(lu->ne) 2488 && (scsi_status != SBP2_SCSI_STATUS_GOOD)) { 2489 SBP2_ERR("Completing command with busy (bus reset)"); 2490 SCpnt->result = DID_BUS_BUSY << 16; 2491 } 2492 2493 + /* Tell the SCSI stack that we're done with this command. 
*/ 2494 done(SCpnt); 2495 } 2496 2497 static int sbp2scsi_slave_alloc(struct scsi_device *sdev) 2498 { 2499 + struct sbp2_lu *lu = (struct sbp2_lu *)sdev->host->hostdata[0]; 2500 2501 + lu->sdev = sdev; 2502 sdev->allow_restart = 1; 2503 2504 + if (lu->workarounds & SBP2_WORKAROUND_INQUIRY_36) 2505 sdev->inquiry_len = 36; 2506 return 0; 2507 } 2508 2509 static int sbp2scsi_slave_configure(struct scsi_device *sdev) 2510 { 2511 + struct sbp2_lu *lu = (struct sbp2_lu *)sdev->host->hostdata[0]; 2512 2513 blk_queue_dma_alignment(sdev->request_queue, (512 - 1)); 2514 sdev->use_10_for_rw = 1; 2515 2516 if (sdev->type == TYPE_DISK && 2517 + lu->workarounds & SBP2_WORKAROUND_MODE_SENSE_8) 2518 sdev->skip_ms_page_8 = 1; 2519 + if (lu->workarounds & SBP2_WORKAROUND_FIX_CAPACITY) 2520 sdev->fix_capacity = 1; 2521 return 0; 2522 } 2523 2524 static void sbp2scsi_slave_destroy(struct scsi_device *sdev) 2525 { 2526 + ((struct sbp2_lu *)sdev->host->hostdata[0])->sdev = NULL; 2527 return; 2528 } 2529 2530 /* 2531 + * Called by scsi stack when something has really gone wrong. 2532 + * Usually called when a command has timed-out for some reason. 2533 */ 2534 static int sbp2scsi_abort(struct scsi_cmnd *SCpnt) 2535 { 2536 + struct sbp2_lu *lu = (struct sbp2_lu *)SCpnt->device->host->hostdata[0]; 2537 + struct sbp2_fwhost_info *hi = lu->hi; 2538 + struct sbp2_command_info *cmd; 2539 unsigned long flags; 2540 2541 + SBP2_INFO("aborting sbp2 command"); 2542 scsi_print_command(SCpnt); 2543 2544 + if (sbp2util_node_is_available(lu)) { 2545 + sbp2_agent_reset(lu, 1); 2546 2547 + /* Return a matching command structure to the free pool. 
*/ 2548 + spin_lock_irqsave(&lu->cmd_orb_lock, flags); 2549 + cmd = sbp2util_find_command_for_SCpnt(lu, SCpnt); 2550 + if (cmd) { 2551 + dma_sync_single_for_cpu(&hi->host->device, 2552 + cmd->command_orb_dma, 2553 + sizeof(struct sbp2_command_orb), 2554 + DMA_TO_DEVICE); 2555 + dma_sync_single_for_cpu(&hi->host->device, cmd->sge_dma, 2556 + sizeof(cmd->scatter_gather_element), 2557 + DMA_BIDIRECTIONAL); 2558 + sbp2util_mark_command_completed(lu, cmd); 2559 + if (cmd->Current_SCpnt) { 2560 + cmd->Current_SCpnt->result = DID_ABORT << 16; 2561 + cmd->Current_done(cmd->Current_SCpnt); 2562 } 2563 } 2564 + spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 2565 2566 + sbp2scsi_complete_all_commands(lu, DID_BUS_BUSY); 2567 } 2568 2569 return SUCCESS; ··· 2604 */ 2605 static int sbp2scsi_reset(struct scsi_cmnd *SCpnt) 2606 { 2607 + struct sbp2_lu *lu = (struct sbp2_lu *)SCpnt->device->host->hostdata[0]; 2608 2609 + SBP2_INFO("reset requested"); 2610 2611 + if (sbp2util_node_is_available(lu)) { 2612 + SBP2_INFO("generating sbp2 fetch agent reset"); 2613 + sbp2_agent_reset(lu, 1); 2614 } 2615 2616 return SUCCESS; ··· 2622 char *buf) 2623 { 2624 struct scsi_device *sdev; 2625 + struct sbp2_lu *lu; 2626 2627 if (!(sdev = to_scsi_device(dev))) 2628 return 0; 2629 2630 + if (!(lu = (struct sbp2_lu *)sdev->host->hostdata[0])) 2631 return 0; 2632 2633 + return sprintf(buf, "%016Lx:%d:%d\n", (unsigned long long)lu->ne->guid, 2634 + lu->ud->id, ORB_SET_LUN(lu->lun)); 2635 } 2636 2637 MODULE_AUTHOR("Ben Collins <bcollins@debian.org>"); 2638 MODULE_DESCRIPTION("IEEE-1394 SBP-2 protocol driver"); 2639 MODULE_SUPPORTED_DEVICE(SBP2_DEVICE_NAME); 2640 MODULE_LICENSE("GPL"); 2641 2642 static int sbp2_module_init(void) 2643 { 2644 int ret; 2645 2646 + if (sbp2_serialize_io) { 2647 + sbp2_shost_template.can_queue = 1; 2648 + sbp2_shost_template.cmd_per_lun = 1; 2649 } 2650 2651 if (sbp2_default_workarounds & SBP2_WORKAROUND_128K_MAX_TRANS && 2652 + (sbp2_max_sectors * 512) > (128 * 1024)) 
2653 + sbp2_max_sectors = 128 * 1024 / 512; 2654 + sbp2_shost_template.max_sectors = sbp2_max_sectors; 2655 2656 hpsb_register_highlevel(&sbp2_highlevel); 2657 ret = hpsb_register_protocol(&sbp2_driver); 2658 if (ret) { 2659 SBP2_ERR("Failed to register protocol"); 2660 hpsb_unregister_highlevel(&sbp2_highlevel); 2661 return ret; 2662 } 2663 return 0; 2664 } 2665 2666 static void __exit sbp2_module_exit(void) 2667 { 2668 hpsb_unregister_protocol(&sbp2_driver); 2669 hpsb_unregister_highlevel(&sbp2_highlevel); 2670 } 2671
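The clamp applied in sbp2_module_init() above keeps transfers under 128 KiB for devices carrying the SBP2_WORKAROUND_128K_MAX_TRANS quirk. A minimal userspace sketch of that arithmetic, with the constants copied from the hunk; `clamp_max_sectors` is a hypothetical stand-alone helper for illustration, not a driver function:

```c
#include <assert.h>

/* Constants matching the values used in sbp2_module_init() above. */
#define SBP2_SECTOR_SIZE  512
#define SBP2_128K_LIMIT   (128 * 1024)

/* Hypothetical stand-alone version of the clamp: if max_sectors would
 * allow a transfer above 128 KiB, cap it at exactly 128 KiB worth of
 * 512-byte sectors (= 256). */
static unsigned int clamp_max_sectors(unsigned int max_sectors)
{
	if ((unsigned long)max_sectors * SBP2_SECTOR_SIZE > SBP2_128K_LIMIT)
		max_sectors = SBP2_128K_LIMIT / SBP2_SECTOR_SIZE;
	return max_sectors;
}
```

Note that the module default of 255 sectors (127.5 KiB) already passes the check unchanged; only larger user-supplied values are capped.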
+127 -190
drivers/ieee1394/sbp2.h
··· 25 #define SBP2_DEVICE_NAME "sbp2" 26 27 /* 28 - * SBP2 specific structures and defines 29 */ 30 31 - #define ORB_DIRECTION_WRITE_TO_MEDIA 0x0 32 - #define ORB_DIRECTION_READ_FROM_MEDIA 0x1 33 - #define ORB_DIRECTION_NO_DATA_TRANSFER 0x2 34 35 - #define ORB_SET_NULL_PTR(value) ((value & 0x1) << 31) 36 - #define ORB_SET_NOTIFY(value) ((value & 0x1) << 31) 37 - #define ORB_SET_RQ_FMT(value) ((value & 0x3) << 29) /* unused ? */ 38 - #define ORB_SET_NODE_ID(value) ((value & 0xffff) << 16) 39 - #define ORB_SET_STATUS_FIFO_HI(value, id) (value >> 32 | ORB_SET_NODE_ID(id)) 40 - #define ORB_SET_STATUS_FIFO_LO(value) (value & 0xffffffff) 41 - #define ORB_SET_DATA_SIZE(value) (value & 0xffff) 42 - #define ORB_SET_PAGE_SIZE(value) ((value & 0x7) << 16) 43 - #define ORB_SET_PAGE_TABLE_PRESENT(value) ((value & 0x1) << 19) 44 - #define ORB_SET_MAX_PAYLOAD(value) ((value & 0xf) << 20) 45 - #define ORB_SET_SPEED(value) ((value & 0x7) << 24) 46 - #define ORB_SET_DIRECTION(value) ((value & 0x1) << 27) 47 48 struct sbp2_command_orb { 49 u32 next_ORB_hi; ··· 64 #define SBP2_LOGICAL_UNIT_RESET 0xe 65 #define SBP2_TARGET_RESET_REQUEST 0xf 66 67 - #define ORB_SET_LUN(value) (value & 0xffff) 68 - #define ORB_SET_FUNCTION(value) ((value & 0xf) << 16) 69 - #define ORB_SET_RECONNECT(value) ((value & 0xf) << 20) 70 - #define ORB_SET_EXCLUSIVE(value) ((value & 0x1) << 28) 71 - #define ORB_SET_LOGIN_RESP_LENGTH(value) (value & 0xffff) 72 - #define ORB_SET_PASSWD_LENGTH(value) ((value & 0xffff) << 16) 73 74 struct sbp2_login_orb { 75 u32 password_hi; ··· 82 u32 status_fifo_lo; 83 } __attribute__((packed)); 84 85 - #define RESPONSE_GET_LOGIN_ID(value) (value & 0xffff) 86 - #define RESPONSE_GET_LENGTH(value) ((value >> 16) & 0xffff) 87 - #define RESPONSE_GET_RECONNECT_HOLD(value) (value & 0xffff) 88 89 struct sbp2_login_response { 90 u32 length_login_ID; ··· 93 u32 reconnect_hold; 94 } __attribute__((packed)); 95 96 - #define ORB_SET_LOGIN_ID(value) (value & 0xffff) 97 - 98 - #define 
ORB_SET_QUERY_LOGINS_RESP_LENGTH(value) (value & 0xffff) 99 100 struct sbp2_query_logins_orb { 101 u32 reserved1; ··· 107 u32 status_fifo_lo; 108 } __attribute__((packed)); 109 110 - #define RESPONSE_GET_MAX_LOGINS(value) (value & 0xffff) 111 - #define RESPONSE_GET_ACTIVE_LOGINS(value) ((RESPONSE_GET_LENGTH(value) - 4) / 12) 112 113 struct sbp2_query_logins_response { 114 u32 length_max_logins; ··· 139 u32 status_fifo_lo; 140 } __attribute__((packed)); 141 142 - #define PAGE_TABLE_SET_SEGMENT_BASE_HI(value) (value & 0xffff) 143 - #define PAGE_TABLE_SET_SEGMENT_LENGTH(value) ((value & 0xffff) << 16) 144 145 struct sbp2_unrestricted_page_table { 146 u32 length_segment_base_hi; ··· 170 #define SFMT_DEFERRED_ERROR 0x1 171 #define SFMT_VENDOR_DEPENDENT_STATUS 0x3 172 173 - #define SBP2_SCSI_STATUS_GOOD 0x0 174 - #define SBP2_SCSI_STATUS_CHECK_CONDITION 0x2 175 - #define SBP2_SCSI_STATUS_CONDITION_MET 0x4 176 - #define SBP2_SCSI_STATUS_BUSY 0x8 177 - #define SBP2_SCSI_STATUS_RESERVATION_CONFLICT 0x18 178 - #define SBP2_SCSI_STATUS_COMMAND_TERMINATED 0x22 179 - 180 - #define SBP2_SCSI_STATUS_SELECTION_TIMEOUT 0xff 181 - 182 - #define STATUS_GET_SRC(value) (((value) >> 30) & 0x3) 183 - #define STATUS_GET_RESP(value) (((value) >> 28) & 0x3) 184 - #define STATUS_GET_LEN(value) (((value) >> 24) & 0x7) 185 - #define STATUS_GET_SBP_STATUS(value) (((value) >> 16) & 0xff) 186 - #define STATUS_GET_ORB_OFFSET_HI(value) ((value) & 0x0000ffff) 187 - #define STATUS_TEST_DEAD(value) ((value) & 0x08000000) 188 /* test 'resp' | 'dead' | 'sbp2_status' */ 189 - #define STATUS_TEST_RDS(value) ((value) & 0x38ff0000) 190 191 struct sbp2_status_block { 192 u32 ORB_offset_hi_misc; ··· 185 u8 command_set_dependent[24]; 186 } __attribute__((packed)); 187 188 - /* 189 - * Miscellaneous SBP2 related config rom defines 190 - */ 191 - 192 - #define SBP2_UNIT_DIRECTORY_OFFSET_KEY 0xd1 193 - #define SBP2_CSR_OFFSET_KEY 0x54 194 - #define SBP2_UNIT_SPEC_ID_KEY 0x12 195 - #define SBP2_UNIT_SW_VERSION_KEY 
0x13 196 - #define SBP2_COMMAND_SET_SPEC_ID_KEY 0x38 197 - #define SBP2_COMMAND_SET_KEY 0x39 198 - #define SBP2_UNIT_CHARACTERISTICS_KEY 0x3a 199 - #define SBP2_DEVICE_TYPE_AND_LUN_KEY 0x14 200 - #define SBP2_FIRMWARE_REVISION_KEY 0x3c 201 - 202 - #define SBP2_AGENT_STATE_OFFSET 0x00ULL 203 - #define SBP2_AGENT_RESET_OFFSET 0x04ULL 204 - #define SBP2_ORB_POINTER_OFFSET 0x08ULL 205 - #define SBP2_DOORBELL_OFFSET 0x10ULL 206 - #define SBP2_UNSOLICITED_STATUS_ENABLE_OFFSET 0x14ULL 207 - #define SBP2_UNSOLICITED_STATUS_VALUE 0xf 208 - 209 - #define SBP2_BUSY_TIMEOUT_ADDRESS 0xfffff0000210ULL 210 - #define SBP2_BUSY_TIMEOUT_VALUE 0xf 211 - 212 - #define SBP2_AGENT_RESET_DATA 0xf 213 214 /* 215 - * Unit spec id and sw version entry for SBP-2 devices 216 */ 217 218 - #define SBP2_UNIT_SPEC_ID_ENTRY 0x0000609e 219 - #define SBP2_SW_VERSION_ENTRY 0x00010483 220 221 /* 222 - * SCSI specific stuff 223 */ 224 225 - #define SBP2_MAX_SG_ELEMENT_LENGTH 0xf000 226 - #define SBP2_MAX_SECTORS 255 /* Max sectors supported */ 227 - #define SBP2_MAX_CMDS 8 /* This should be safe */ 228 229 - /* Flags for detected oddities and brokeness */ 230 - #define SBP2_WORKAROUND_128K_MAX_TRANS 0x1 231 - #define SBP2_WORKAROUND_INQUIRY_36 0x2 232 - #define SBP2_WORKAROUND_MODE_SENSE_8 0x4 233 - #define SBP2_WORKAROUND_FIX_CAPACITY 0x8 234 - #define SBP2_WORKAROUND_OVERRIDE 0x100 235 236 - /* This is the two dma types we use for cmd_dma below */ 237 - enum cmd_dma_types { 238 CMD_DMA_NONE, 239 CMD_DMA_PAGE, 240 CMD_DMA_SINGLE 241 }; 242 243 - /* 244 - * Encapsulates all the info necessary for an outstanding command. 
245 - */ 246 struct sbp2_command_info { 247 - 248 struct list_head list; 249 struct sbp2_command_orb command_orb ____cacheline_aligned; 250 dma_addr_t command_orb_dma ____cacheline_aligned; ··· 256 void (*Current_done)(struct scsi_cmnd *); 257 258 /* Also need s/g structure for each sbp2 command */ 259 - struct sbp2_unrestricted_page_table scatter_gather_element[SG_ALL] ____cacheline_aligned; 260 dma_addr_t sge_dma ____cacheline_aligned; 261 void *sge_buffer; 262 dma_addr_t cmd_dma; 263 - enum cmd_dma_types dma_type; 264 unsigned long dma_size; 265 - int dma_dir; 266 - 267 }; 268 269 - struct sbp2scsi_host_info; 270 271 - /* 272 - * Information needed on a per scsi id basis (one for each sbp2 device) 273 - */ 274 - struct scsi_id_instance_data { 275 - /* 276 - * Various sbp2 specific structures 277 - */ 278 struct sbp2_command_orb *last_orb; 279 dma_addr_t last_orb_dma; 280 struct sbp2_login_orb *login_orb; ··· 291 dma_addr_t logout_orb_dma; 292 struct sbp2_status_block status_block; 293 294 - /* 295 - * Stuff we need to know about the sbp2 device itself 296 - */ 297 - u64 sbp2_management_agent_addr; 298 - u64 sbp2_command_block_agent_addr; 299 u32 speed_code; 300 u32 max_payload_size; 301 302 - /* 303 - * Values pulled from the device's unit directory 304 - */ 305 - u32 sbp2_command_set_spec_id; 306 - u32 sbp2_command_set; 307 - u32 sbp2_unit_characteristics; 308 - u32 sbp2_lun; 309 - u32 sbp2_firmware_revision; 310 - 311 - /* 312 - * Address for the device to write status blocks to 313 - */ 314 u64 status_fifo_addr; 315 316 - /* 317 - * Waitqueue flag for logins, reconnects, logouts, query logins 318 - */ 319 - int access_complete:1; 320 321 - /* 322 - * Pool of command orbs, so we can have more than overlapped command per id 323 - */ 324 - spinlock_t sbp2_command_orb_lock; 325 - struct list_head sbp2_command_orb_inuse; 326 - struct list_head sbp2_command_orb_completed; 327 328 - struct list_head scsi_list; 329 330 - /* Node entry, as retrieved from NodeMgr 
entries */ 331 struct node_entry *ne; 332 struct unit_directory *ud; 333 334 - /* A backlink to our host_info */ 335 - struct sbp2scsi_host_info *hi; 336 - 337 - /* SCSI related pointers */ 338 struct scsi_device *sdev; 339 - struct Scsi_Host *scsi_host; 340 341 /* Device specific workarounds/brokeness */ 342 unsigned workarounds; 343 344 atomic_t state; 345 - struct delayed_work protocol_work; 346 }; 347 348 - /* For use in scsi_id_instance_data.state */ 349 enum sbp2lu_state_types { 350 SBP2LU_STATE_RUNNING, /* all normal */ 351 SBP2LU_STATE_IN_RESET, /* between bus reset and reconnect */ 352 SBP2LU_STATE_IN_SHUTDOWN /* when sbp2_remove was called */ 353 }; 354 355 - /* Sbp2 host data structure (one per IEEE1394 host) */ 356 - struct sbp2scsi_host_info { 357 - struct hpsb_host *host; /* IEEE1394 host */ 358 - struct list_head scsi_ids; /* List of scsi ids on this host */ 359 - }; 360 - 361 - /* 362 - * Function prototypes 363 - */ 364 - 365 - /* 366 - * Various utility prototypes 367 - */ 368 - static int sbp2util_create_command_orb_pool(struct scsi_id_instance_data *scsi_id); 369 - static void sbp2util_remove_command_orb_pool(struct scsi_id_instance_data *scsi_id); 370 - static struct sbp2_command_info *sbp2util_find_command_for_orb(struct scsi_id_instance_data *scsi_id, dma_addr_t orb); 371 - static struct sbp2_command_info *sbp2util_find_command_for_SCpnt(struct scsi_id_instance_data *scsi_id, void *SCpnt); 372 - static struct sbp2_command_info *sbp2util_allocate_command_orb(struct scsi_id_instance_data *scsi_id, 373 - struct scsi_cmnd *Current_SCpnt, 374 - void (*Current_done)(struct scsi_cmnd *)); 375 - static void sbp2util_mark_command_completed(struct scsi_id_instance_data *scsi_id, 376 - struct sbp2_command_info *command); 377 - 378 - 379 - static int sbp2_start_device(struct scsi_id_instance_data *scsi_id); 380 - static void sbp2_remove_device(struct scsi_id_instance_data *scsi_id); 381 - 382 - #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 383 - static int 
sbp2_handle_physdma_write(struct hpsb_host *host, int nodeid, int destid, quadlet_t *data, 384 - u64 addr, size_t length, u16 flags); 385 - static int sbp2_handle_physdma_read(struct hpsb_host *host, int nodeid, quadlet_t *data, 386 - u64 addr, size_t length, u16 flags); 387 - #endif 388 - 389 - /* 390 - * SBP-2 protocol related prototypes 391 - */ 392 - static int sbp2_query_logins(struct scsi_id_instance_data *scsi_id); 393 - static int sbp2_login_device(struct scsi_id_instance_data *scsi_id); 394 - static int sbp2_reconnect_device(struct scsi_id_instance_data *scsi_id); 395 - static int sbp2_logout_device(struct scsi_id_instance_data *scsi_id); 396 - static int sbp2_handle_status_write(struct hpsb_host *host, int nodeid, int destid, 397 - quadlet_t *data, u64 addr, size_t length, u16 flags); 398 - static int sbp2_agent_reset(struct scsi_id_instance_data *scsi_id, int wait); 399 - static unsigned int sbp2_status_to_sense_data(unchar *sbp2_status, 400 - unchar *sense_data); 401 - static void sbp2_parse_unit_directory(struct scsi_id_instance_data *scsi_id, 402 - struct unit_directory *ud); 403 - static int sbp2_set_busy_timeout(struct scsi_id_instance_data *scsi_id); 404 - static int sbp2_max_speed_and_size(struct scsi_id_instance_data *scsi_id); 405 406 #endif /* SBP2_H */
··· 25 #define SBP2_DEVICE_NAME "sbp2" 26 27 /* 28 + * SBP-2 specific definitions 29 */ 30 31 + #define ORB_DIRECTION_WRITE_TO_MEDIA 0x0 32 + #define ORB_DIRECTION_READ_FROM_MEDIA 0x1 33 + #define ORB_DIRECTION_NO_DATA_TRANSFER 0x2 34 35 + #define ORB_SET_NULL_PTR(v) (((v) & 0x1) << 31) 36 + #define ORB_SET_NOTIFY(v) (((v) & 0x1) << 31) 37 + #define ORB_SET_RQ_FMT(v) (((v) & 0x3) << 29) 38 + #define ORB_SET_NODE_ID(v) (((v) & 0xffff) << 16) 39 + #define ORB_SET_STATUS_FIFO_HI(v, id) ((v) >> 32 | ORB_SET_NODE_ID(id)) 40 + #define ORB_SET_STATUS_FIFO_LO(v) ((v) & 0xffffffff) 41 + #define ORB_SET_DATA_SIZE(v) ((v) & 0xffff) 42 + #define ORB_SET_PAGE_SIZE(v) (((v) & 0x7) << 16) 43 + #define ORB_SET_PAGE_TABLE_PRESENT(v) (((v) & 0x1) << 19) 44 + #define ORB_SET_MAX_PAYLOAD(v) (((v) & 0xf) << 20) 45 + #define ORB_SET_SPEED(v) (((v) & 0x7) << 24) 46 + #define ORB_SET_DIRECTION(v) (((v) & 0x1) << 27) 47 48 struct sbp2_command_orb { 49 u32 next_ORB_hi; ··· 64 #define SBP2_LOGICAL_UNIT_RESET 0xe 65 #define SBP2_TARGET_RESET_REQUEST 0xf 66 67 + #define ORB_SET_LUN(v) ((v) & 0xffff) 68 + #define ORB_SET_FUNCTION(v) (((v) & 0xf) << 16) 69 + #define ORB_SET_RECONNECT(v) (((v) & 0xf) << 20) 70 + #define ORB_SET_EXCLUSIVE(v) (((v) & 0x1) << 28) 71 + #define ORB_SET_LOGIN_RESP_LENGTH(v) ((v) & 0xffff) 72 + #define ORB_SET_PASSWD_LENGTH(v) (((v) & 0xffff) << 16) 73 74 struct sbp2_login_orb { 75 u32 password_hi; ··· 82 u32 status_fifo_lo; 83 } __attribute__((packed)); 84 85 + #define RESPONSE_GET_LOGIN_ID(v) ((v) & 0xffff) 86 + #define RESPONSE_GET_LENGTH(v) (((v) >> 16) & 0xffff) 87 + #define RESPONSE_GET_RECONNECT_HOLD(v) ((v) & 0xffff) 88 89 struct sbp2_login_response { 90 u32 length_login_ID; ··· 93 u32 reconnect_hold; 94 } __attribute__((packed)); 95 96 + #define ORB_SET_LOGIN_ID(v) ((v) & 0xffff) 97 + #define ORB_SET_QUERY_LOGINS_RESP_LENGTH(v) ((v) & 0xffff) 98 99 struct sbp2_query_logins_orb { 100 u32 reserved1; ··· 108 u32 status_fifo_lo; 109 } __attribute__((packed)); 110 
111 + #define RESPONSE_GET_MAX_LOGINS(v) ((v) & 0xffff) 112 + #define RESPONSE_GET_ACTIVE_LOGINS(v) ((RESPONSE_GET_LENGTH((v)) - 4) / 12) 113 114 struct sbp2_query_logins_response { 115 u32 length_max_logins; ··· 140 u32 status_fifo_lo; 141 } __attribute__((packed)); 142 143 + #define PAGE_TABLE_SET_SEGMENT_BASE_HI(v) ((v) & 0xffff) 144 + #define PAGE_TABLE_SET_SEGMENT_LENGTH(v) (((v) & 0xffff) << 16) 145 146 struct sbp2_unrestricted_page_table { 147 u32 length_segment_base_hi; ··· 171 #define SFMT_DEFERRED_ERROR 0x1 172 #define SFMT_VENDOR_DEPENDENT_STATUS 0x3 173 174 + #define STATUS_GET_SRC(v) (((v) >> 30) & 0x3) 175 + #define STATUS_GET_RESP(v) (((v) >> 28) & 0x3) 176 + #define STATUS_GET_LEN(v) (((v) >> 24) & 0x7) 177 + #define STATUS_GET_SBP_STATUS(v) (((v) >> 16) & 0xff) 178 + #define STATUS_GET_ORB_OFFSET_HI(v) ((v) & 0x0000ffff) 179 + #define STATUS_TEST_DEAD(v) ((v) & 0x08000000) 180 /* test 'resp' | 'dead' | 'sbp2_status' */ 181 + #define STATUS_TEST_RDS(v) ((v) & 0x38ff0000) 182 183 struct sbp2_status_block { 184 u32 ORB_offset_hi_misc; ··· 195 u8 command_set_dependent[24]; 196 } __attribute__((packed)); 197 198 199 /* 200 + * SBP2 related configuration ROM definitions 201 */ 202 203 + #define SBP2_UNIT_DIRECTORY_OFFSET_KEY 0xd1 204 + #define SBP2_CSR_OFFSET_KEY 0x54 205 + #define SBP2_UNIT_SPEC_ID_KEY 0x12 206 + #define SBP2_UNIT_SW_VERSION_KEY 0x13 207 + #define SBP2_COMMAND_SET_SPEC_ID_KEY 0x38 208 + #define SBP2_COMMAND_SET_KEY 0x39 209 + #define SBP2_UNIT_CHARACTERISTICS_KEY 0x3a 210 + #define SBP2_DEVICE_TYPE_AND_LUN_KEY 0x14 211 + #define SBP2_FIRMWARE_REVISION_KEY 0x3c 212 + 213 + #define SBP2_AGENT_STATE_OFFSET 0x00ULL 214 + #define SBP2_AGENT_RESET_OFFSET 0x04ULL 215 + #define SBP2_ORB_POINTER_OFFSET 0x08ULL 216 + #define SBP2_DOORBELL_OFFSET 0x10ULL 217 + #define SBP2_UNSOLICITED_STATUS_ENABLE_OFFSET 0x14ULL 218 + #define SBP2_UNSOLICITED_STATUS_VALUE 0xf 219 + 220 + #define SBP2_BUSY_TIMEOUT_ADDRESS 0xfffff0000210ULL 221 + /* biggest 
possible value for Single Phase Retry count is 0xf */ 222 + #define SBP2_BUSY_TIMEOUT_VALUE 0xf 223 + 224 + #define SBP2_AGENT_RESET_DATA 0xf 225 + 226 + #define SBP2_UNIT_SPEC_ID_ENTRY 0x0000609e 227 + #define SBP2_SW_VERSION_ENTRY 0x00010483 228 + 229 230 /* 231 + * SCSI specific definitions 232 */ 233 234 + #define SBP2_MAX_SG_ELEMENT_LENGTH 0xf000 235 + #define SBP2_MAX_SECTORS 255 236 + /* There is no real limitation of the queue depth (i.e. length of the linked 237 + * list of command ORBs) at the target. The chosen depth is merely an 238 + * implementation detail of the sbp2 driver. */ 239 + #define SBP2_MAX_CMDS 8 240 241 + #define SBP2_SCSI_STATUS_GOOD 0x0 242 + #define SBP2_SCSI_STATUS_CHECK_CONDITION 0x2 243 + #define SBP2_SCSI_STATUS_CONDITION_MET 0x4 244 + #define SBP2_SCSI_STATUS_BUSY 0x8 245 + #define SBP2_SCSI_STATUS_RESERVATION_CONFLICT 0x18 246 + #define SBP2_SCSI_STATUS_COMMAND_TERMINATED 0x22 247 + #define SBP2_SCSI_STATUS_SELECTION_TIMEOUT 0xff 248 249 + 250 + /* 251 + * Representations of commands and devices 252 + */ 253 + 254 + enum sbp2_dma_types { 255 CMD_DMA_NONE, 256 CMD_DMA_PAGE, 257 CMD_DMA_SINGLE 258 }; 259 260 + /* Per SCSI command */ 261 struct sbp2_command_info { 262 struct list_head list; 263 struct sbp2_command_orb command_orb ____cacheline_aligned; 264 dma_addr_t command_orb_dma ____cacheline_aligned; ··· 262 void (*Current_done)(struct scsi_cmnd *); 263 264 /* Also need s/g structure for each sbp2 command */ 265 + struct sbp2_unrestricted_page_table 266 + scatter_gather_element[SG_ALL] ____cacheline_aligned; 267 dma_addr_t sge_dma ____cacheline_aligned; 268 void *sge_buffer; 269 dma_addr_t cmd_dma; 270 + enum sbp2_dma_types dma_type; 271 unsigned long dma_size; 272 + enum dma_data_direction dma_dir; 273 }; 274 275 + /* Per FireWire host */ 276 + struct sbp2_fwhost_info { 277 + struct hpsb_host *host; 278 + struct list_head logical_units; 279 + }; 280 281 + /* Per logical unit */ 282 + struct sbp2_lu { 283 + /* Operation request 
blocks */ 284 struct sbp2_command_orb *last_orb; 285 dma_addr_t last_orb_dma; 286 struct sbp2_login_orb *login_orb; ··· 297 dma_addr_t logout_orb_dma; 298 struct sbp2_status_block status_block; 299 300 + /* How to talk to the unit */ 301 + u64 management_agent_addr; 302 + u64 command_block_agent_addr; 303 u32 speed_code; 304 u32 max_payload_size; 305 + u16 lun; 306 307 + /* Address for the unit to write status blocks to */ 308 u64 status_fifo_addr; 309 310 + /* Waitqueue flag for logins, reconnects, logouts, query logins */ 311 + unsigned int access_complete:1; 312 313 + /* Pool of command ORBs for this logical unit */ 314 + spinlock_t cmd_orb_lock; 315 + struct list_head cmd_orb_inuse; 316 + struct list_head cmd_orb_completed; 317 318 + /* Backlink to FireWire host; list of units attached to the host */ 319 + struct sbp2_fwhost_info *hi; 320 + struct list_head lu_list; 321 322 + /* IEEE 1394 core's device representations */ 323 struct node_entry *ne; 324 struct unit_directory *ud; 325 326 + /* SCSI core's device representations */ 327 struct scsi_device *sdev; 328 + struct Scsi_Host *shost; 329 330 /* Device specific workarounds/brokeness */ 331 unsigned workarounds; 332 333 + /* Connection state */ 334 atomic_t state; 335 + 336 + /* For deferred requests to the fetch agent */ 337 + struct work_struct protocol_work; 338 }; 339 340 + /* For use in sbp2_lu.state */ 341 enum sbp2lu_state_types { 342 SBP2LU_STATE_RUNNING, /* all normal */ 343 SBP2LU_STATE_IN_RESET, /* between bus reset and reconnect */ 344 SBP2LU_STATE_IN_SHUTDOWN /* when sbp2_remove was called */ 345 }; 346 347 + /* For use in sbp2_lu.workarounds and in the corresponding 348 + * module load parameter */ 349 + #define SBP2_WORKAROUND_128K_MAX_TRANS 0x1 350 + #define SBP2_WORKAROUND_INQUIRY_36 0x2 351 + #define SBP2_WORKAROUND_MODE_SENSE_8 0x4 352 + #define SBP2_WORKAROUND_FIX_CAPACITY 0x8 353 + #define SBP2_WORKAROUND_OVERRIDE 0x100 354 355 #endif /* SBP2_H */
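Much of the sbp2.h churn above is mechanical: renaming `value` to `v` and wrapping every macro argument in parentheses. The extra parentheses matter as soon as the argument is an expression rather than a plain variable, since an unparenthesized body can expand with the wrong operator precedence. A small sketch, with two of the status accessors copied verbatim from the header and a hypothetical `make_status()` helper added purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Two status accessors copied from sbp2.h above, in their new fully
 * parenthesized form; with the old `value`-style bodies, an argument
 * such as `h >> 1` would have expanded with the wrong precedence. */
#define STATUS_GET_RESP(v)        (((v) >> 28) & 0x3)
#define STATUS_GET_SBP_STATUS(v)  (((v) >> 16) & 0xff)

/* Hypothetical helper for illustration only: compose the misc quadlet
 * of a status block from its resp and sbp_status fields. */
static uint32_t make_status(uint32_t resp, uint32_t sbp_status)
{
	return ((resp & 0x3) << 28) | ((sbp_status & 0xff) << 16);
}
```

The same round trip holds for the ORB_SET_* setters: each field is masked to its width and shifted into its documented bit position within the quadlet.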
+17 -37
drivers/ieee1394/video1394.c
···
 714 	return ret;
 715 }
 716 
 717 -static int __video1394_ioctl(struct file *file,
 718 -			unsigned int cmd, unsigned long arg)
 719 {
 720 	struct file_ctx *ctx = (struct file_ctx *)file->private_data;
 721 	struct ti_ohci *ohci = ctx->ohci;
···
 884 	struct dma_iso_ctx *d;
 885 	int next_prg;
 886 
 887 -	if (copy_from_user(&v, argp, sizeof(v)))
 888 		return -EFAULT;
 889 
 890 	d = find_ctx(&ctx->context_list, OHCI_ISO_RECEIVE, v.channel);
 891 -	if (d == NULL) return -EFAULT;
 892 
 893 -	if ((v.buffer<0) || (v.buffer>=d->num_desc - 1)) {
 894 		PRINT(KERN_ERR, ohci->host->id,
 895 		      "Buffer %d out of range",v.buffer);
 896 		return -EINVAL;
···
 899 
 900 	spin_lock_irqsave(&d->lock,flags);
 901 
 902 -	if (d->buffer_status[v.buffer]==VIDEO1394_BUFFER_QUEUED) {
 903 		PRINT(KERN_ERR, ohci->host->id,
 904 		      "Buffer %d is already used",v.buffer);
 905 		spin_unlock_irqrestore(&d->lock,flags);
···
 950 	struct dma_iso_ctx *d;
 951 	int i = 0;
 952 
 953 -	if (copy_from_user(&v, argp, sizeof(v)))
 954 		return -EFAULT;
 955 
 956 	d = find_ctx(&ctx->context_list, OHCI_ISO_RECEIVE, v.channel);
 957 -	if (d == NULL) return -EFAULT;
 958 
 959 -	if ((v.buffer<0) || (v.buffer>d->num_desc - 1)) {
 960 		PRINT(KERN_ERR, ohci->host->id,
 961 		      "Buffer %d out of range",v.buffer);
 962 		return -EINVAL;
···
1010 	spin_unlock_irqrestore(&d->lock, flags);
1011 
1012 	v.buffer=i;
1013 -	if (copy_to_user(argp, &v, sizeof(v)))
1014 		return -EFAULT;
1015 
1016 	return 0;
···
1158 	}
1159 }
1160 
1161 -static long video1394_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
1162 -{
1163 -	int err;
1164 -	lock_kernel();
1165 -	err = __video1394_ioctl(file, cmd, arg);
1166 -	unlock_kernel();
1167 -	return err;
1168 -}
1169 -
1170 /*
1171  * This maps the vmalloced and reserved buffer to user space.
1172  *
···
1170 static int video1394_mmap(struct file *file, struct vm_area_struct *vma)
1171 {
1172 	struct file_ctx *ctx = (struct file_ctx *)file->private_data;
1173 -	int res = -EINVAL;
1174 
1175 -	lock_kernel();
1176 	if (ctx->current_ctx == NULL) {
1177 		PRINT(KERN_ERR, ctx->ohci->host->id,
1178 		      "Current iso context not set");
1179 -	} else
1180 -		res = dma_region_mmap(&ctx->current_ctx->dma, file, vma);
1181 -	unlock_kernel();
1182 
1183 -	return res;
1184 }
1185 
1186 static unsigned int video1394_poll(struct file *file, poll_table *pt)
···
1188 	struct dma_iso_ctx *d;
1189 	int i;
1190 
1191 -	lock_kernel();
1192 	ctx = file->private_data;
1193 	d = ctx->current_ctx;
1194 	if (d == NULL) {
1195 		PRINT(KERN_ERR, ctx->ohci->host->id,
1196 		      "Current iso context not set");
1197 -		mask = POLLERR;
1198 -		goto done;
1199 	}
1200 
1201 	poll_wait(file, &d->waitq, pt);
···
1206 		}
1207 	}
1208 	spin_unlock_irqrestore(&d->lock, flags);
1209 - done:
1210 -	unlock_kernel();
1211 
1212 	return mask;
1213 }
···
1241 	struct list_head *lh, *next;
1242 	u64 mask;
1243 
1244 -	lock_kernel();
1245 	list_for_each_safe(lh, next, &ctx->context_list) {
1246 		struct dma_iso_ctx *d;
1247 		d = list_entry(lh, struct dma_iso_ctx, link);
···
1261 	kfree(ctx);
1262 	file->private_data = NULL;
1263 
1264 -	unlock_kernel();
1265 	return 0;
1266 }
···
1308 MODULE_DEVICE_TABLE(ieee1394, video1394_id_table);
1309 
1310 static struct hpsb_protocol_driver video1394_driver = {
1311 -	.name = "1394 Digital Camera Driver",
1312 	.id_table = video1394_id_table,
1313 -	.driver = {
1314 -		.name = VIDEO1394_DRIVER_NAME,
1315 -		.bus = &ieee1394_bus_type,
1316 -	},
1317 };
1318 
1319 
···
 714 	return ret;
 715 }
 716 
 717 +static long video1394_ioctl(struct file *file,
 718 +			unsigned int cmd, unsigned long arg)
 719 {
 720 	struct file_ctx *ctx = (struct file_ctx *)file->private_data;
 721 	struct ti_ohci *ohci = ctx->ohci;
···
 884 	struct dma_iso_ctx *d;
 885 	int next_prg;
 886 
 887 +	if (unlikely(copy_from_user(&v, argp, sizeof(v))))
 888 		return -EFAULT;
 889 
 890 	d = find_ctx(&ctx->context_list, OHCI_ISO_RECEIVE, v.channel);
 891 +	if (unlikely(d == NULL))
 892 +		return -EFAULT;
 893 
 894 +	if (unlikely((v.buffer<0) || (v.buffer>=d->num_desc - 1))) {
 895 		PRINT(KERN_ERR, ohci->host->id,
 896 		      "Buffer %d out of range",v.buffer);
 897 		return -EINVAL;
···
 898 
 899 	spin_lock_irqsave(&d->lock,flags);
 900 
 901 +	if (unlikely(d->buffer_status[v.buffer]==VIDEO1394_BUFFER_QUEUED)) {
 902 		PRINT(KERN_ERR, ohci->host->id,
 903 		      "Buffer %d is already used",v.buffer);
 904 		spin_unlock_irqrestore(&d->lock,flags);
···
 949 	struct dma_iso_ctx *d;
 950 	int i = 0;
 951 
 952 +	if (unlikely(copy_from_user(&v, argp, sizeof(v))))
 953 		return -EFAULT;
 954 
 955 	d = find_ctx(&ctx->context_list, OHCI_ISO_RECEIVE, v.channel);
 956 +	if (unlikely(d == NULL))
 957 +		return -EFAULT;
 958 
 959 +	if (unlikely((v.buffer<0) || (v.buffer>d->num_desc - 1))) {
 960 		PRINT(KERN_ERR, ohci->host->id,
 961 		      "Buffer %d out of range",v.buffer);
 962 		return -EINVAL;
···
1008 	spin_unlock_irqrestore(&d->lock, flags);
1009 
1010 	v.buffer=i;
1011 +	if (unlikely(copy_to_user(argp, &v, sizeof(v))))
1012 		return -EFAULT;
1013 
1014 	return 0;
···
1156 	}
1157 }
1158 
1159 /*
1160  * This maps the vmalloced and reserved buffer to user space.
1161  *
···
1177 static int video1394_mmap(struct file *file, struct vm_area_struct *vma)
1178 {
1179 	struct file_ctx *ctx = (struct file_ctx *)file->private_data;
1180 
1181 	if (ctx->current_ctx == NULL) {
1182 		PRINT(KERN_ERR, ctx->ohci->host->id,
1183 		      "Current iso context not set");
1184 +		return -EINVAL;
1185 +	}
1186 
1187 +	return dma_region_mmap(&ctx->current_ctx->dma, file, vma);
1188 }
1189 
1190 static unsigned int video1394_poll(struct file *file, poll_table *pt)
···
1198 	struct dma_iso_ctx *d;
1199 	int i;
1200 
1201 	ctx = file->private_data;
1202 	d = ctx->current_ctx;
1203 	if (d == NULL) {
1204 		PRINT(KERN_ERR, ctx->ohci->host->id,
1205 		      "Current iso context not set");
1206 +		return POLLERR;
1207 	}
1208 
1209 	poll_wait(file, &d->waitq, pt);
···
1218 		}
1219 	}
1220 	spin_unlock_irqrestore(&d->lock, flags);
1221 
1222 	return mask;
1223 }
···
1255 	struct list_head *lh, *next;
1256 	u64 mask;
1257 
1258 	list_for_each_safe(lh, next, &ctx->context_list) {
1259 		struct dma_iso_ctx *d;
1260 		d = list_entry(lh, struct dma_iso_ctx, link);
···
1276 	kfree(ctx);
1277 	file->private_data = NULL;
1278 
1279 	return 0;
1280 }
···
1324 MODULE_DEVICE_TABLE(ieee1394, video1394_id_table);
1325 
1326 static struct hpsb_protocol_driver video1394_driver = {
1327 +	.name = VIDEO1394_DRIVER_NAME,
1328 	.id_table = video1394_id_table,
1329 };
1330 
1331 