Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge from Linus' tree.

+10289 -3680
+1 -1
Documentation/DocBook/kernel-hacking.tmpl
··· 1105 1105 </listitem> 1106 1106 <listitem> 1107 1107 <para> 1108 - Function names as strings (__func__). 1108 + Function names as strings (__FUNCTION__). 1109 1109 </para> 1110 1110 </listitem> 1111 1111 <listitem>
+73
Documentation/device-mapper/snapshot.txt
··· 1 + Device-mapper snapshot support 2 + ============================== 3 + 4 + Device-mapper allows you, without massive data copying: 5 + 6 + *) To create snapshots of any block device i.e. mountable, saved states of 7 + the block device which are also writable without interfering with the 8 + original content; 9 + *) To create device "forks", i.e. multiple different versions of the 10 + same data stream. 11 + 12 + 13 + In both cases, dm copies only the chunks of data that get changed and 14 + uses a separate copy-on-write (COW) block device for storage. 15 + 16 + 17 + There are two dm targets available: snapshot and snapshot-origin. 18 + 19 + *) snapshot-origin <origin> 20 + 21 + which will normally have one or more snapshots based on it. 22 + You must create the snapshot-origin device before you can create snapshots. 23 + Reads will be mapped directly to the backing device. For each write, the 24 + original data will be saved in the <COW device> of each snapshot to keep 25 + its visible content unchanged, at least until the <COW device> fills up. 26 + 27 + 28 + *) snapshot <origin> <COW device> <persistent?> <chunksize> 29 + 30 + A snapshot is created of the <origin> block device. Changed chunks of 31 + <chunksize> sectors will be stored on the <COW device>. Writes will 32 + only go to the <COW device>. Reads will come from the <COW device> or 33 + from <origin> for unchanged data. <COW device> will often be 34 + smaller than the origin and if it fills up the snapshot will become 35 + useless and be disabled, returning errors. So it is important to monitor 36 + the amount of free space and expand the <COW device> before it fills up. 37 + 38 + <persistent?> is P (Persistent) or N (Not persistent - will not survive 39 + after reboot). 
40 + 41 + 42 + How this is used by LVM2 43 + ======================== 44 + When you create the first LVM2 snapshot of a volume, four dm devices are used: 45 + 46 + 1) a device containing the original mapping table of the source volume; 47 + 2) a device used as the <COW device>; 48 + 3) a "snapshot" device, combining #1 and #2, which is the visible snapshot 49 + volume; 50 + 4) the "original" volume (which uses the device number used by the original 51 + source volume), whose table is replaced by a "snapshot-origin" mapping 52 + from device #1. 53 + 54 + A fixed naming scheme is used, so with the following commands: 55 + 56 + lvcreate -L 1G -n base volumeGroup 57 + lvcreate -L 100M --snapshot -n snap volumeGroup/base 58 + 59 + we'll have this situation (with volumes in above order): 60 + 61 + # dmsetup table|grep volumeGroup 62 + 63 + volumeGroup-base-real: 0 2097152 linear 8:19 384 64 + volumeGroup-snap-cow: 0 204800 linear 8:19 2097536 65 + volumeGroup-snap: 0 2097152 snapshot 254:11 254:12 P 16 66 + volumeGroup-base: 0 2097152 snapshot-origin 254:11 67 + 68 + # ls -lL /dev/mapper/volumeGroup-* 69 + brw------- 1 root root 254, 11 29 ago 18:15 /dev/mapper/volumeGroup-base-real 70 + brw------- 1 root root 254, 12 29 ago 18:15 /dev/mapper/volumeGroup-snap-cow 71 + brw------- 1 root root 254, 13 29 ago 18:15 /dev/mapper/volumeGroup-snap 72 + brw------- 1 root root 254, 10 29 ago 18:14 /dev/mapper/volumeGroup-base 73 +
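The copy-on-write behaviour that the new snapshot.txt describes (origin writes first save the old chunk; snapshot reads fall through to the origin for unchanged chunks) can be sketched as a small userspace C model. This is a conceptual illustration only, not device-mapper code; the chunk map, sizes, and function names are invented:

```c
#include <assert.h>
#include <string.h>

#define CHUNKS     4
#define CHUNK_SIZE 8

static char origin[CHUNKS][CHUNK_SIZE];
static char cow[CHUNKS][CHUNK_SIZE];
static int  copied[CHUNKS];   /* chunk already saved to the COW store? */

/* Write through the origin (snapshot-origin behaviour): save the old
 * chunk to the COW store first so the snapshot's view stays unchanged. */
static void origin_write(int chunk, const char *data)
{
	if (!copied[chunk]) {
		memcpy(cow[chunk], origin[chunk], CHUNK_SIZE);
		copied[chunk] = 1;
	}
	memcpy(origin[chunk], data, CHUNK_SIZE);
}

/* Read through the snapshot: copied chunks come from the COW store,
 * unchanged chunks fall through to the origin. */
static const char *snapshot_read(int chunk)
{
	return copied[chunk] ? cow[chunk] : origin[chunk];
}
```

Only changed chunks consume COW space, which is why the document warns that a too-small <COW device> eventually fills up and disables the snapshot.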
+2 -2
Documentation/sparse.txt
··· 51 51 Where to get sparse 52 52 ~~~~~~~~~~~~~~~~~~~ 53 53 54 - With BK, you can just get it from 54 + With git, you can just get it from 55 55 56 - bk://sparse.bkbits.net/sparse 56 + rsync://rsync.kernel.org/pub/scm/devel/sparse/sparse.git 57 57 58 58 and DaveJ has tar-balls at 59 59
+31 -43
Documentation/usb/URB.txt
··· 1 1 Revised: 2000-Dec-05. 2 2 Again: 2002-Jul-06 3 + Again: 2005-Sep-19 3 4 4 5 NOTE: 5 6 ··· 19 18 and deliver the data and status back. 20 19 21 20 - Execution of an URB is inherently an asynchronous operation, i.e. the 22 - usb_submit_urb(urb) call returns immediately after it has successfully queued 23 - the requested action. 21 + usb_submit_urb(urb) call returns immediately after it has successfully 22 + queued the requested action. 24 23 25 24 - Transfers for one URB can be canceled with usb_unlink_urb(urb) at any time. 26 25 ··· 95 94 96 95 void usb_free_urb(struct urb *urb) 97 96 98 - You may not free an urb that you've submitted, but which hasn't yet been 99 - returned to you in a completion callback. 97 + You may free an urb that you've submitted, but which hasn't yet been 98 + returned to you in a completion callback. It will automatically be 99 + deallocated when it is no longer in use. 100 100 101 101 102 102 1.4. What has to be filled in? ··· 147 145 148 146 1.6. How to cancel an already running URB? 149 147 150 - For an URB which you've submitted, but which hasn't been returned to 151 - your driver by the host controller, call 148 + There are two ways to cancel an URB you've submitted but which hasn't 149 + been returned to your driver yet. For an asynchronous cancel, call 152 150 153 151 int usb_unlink_urb(struct urb *urb) 154 152 155 153 It removes the urb from the internal list and frees all allocated 156 - HW descriptors. The status is changed to reflect unlinking. After 157 - usb_unlink_urb() returns with that status code, you can free the URB 158 - with usb_free_urb(). 154 + HW descriptors. The status is changed to reflect unlinking. Note 155 + that the URB will not normally have finished when usb_unlink_urb() 156 + returns; you must still wait for the completion handler to be called. 159 157 160 - There is also an asynchronous unlink mode. 
To use this, set the 161 - the URB_ASYNC_UNLINK flag in urb->transfer flags before calling 162 - usb_unlink_urb(). When using async unlinking, the URB will not 163 - normally be unlinked when usb_unlink_urb() returns. Instead, wait 164 - for the completion handler to be called. 158 + To cancel an URB synchronously, call 159 + 160 + void usb_kill_urb(struct urb *urb) 161 + 162 + It does everything usb_unlink_urb does, and in addition it waits 163 + until after the URB has been returned and the completion handler 164 + has finished. It also marks the URB as temporarily unusable, so 165 + that if the completion handler or anyone else tries to resubmit it 166 + they will get a -EPERM error. Thus you can be sure that when 167 + usb_kill_urb() returns, the URB is totally idle. 165 168 166 169 167 170 1.7. What about the completion handler? 168 171 169 172 The handler is of the following type: 170 173 171 - typedef void (*usb_complete_t)(struct urb *); 174 + typedef void (*usb_complete_t)(struct urb *, struct pt_regs *) 172 175 173 - i.e. it gets just the URB that caused the completion call. 176 + I.e., it gets the URB that caused the completion call, plus the 177 + register values at the time of the corresponding interrupt (if any). 174 178 In the completion handler, you should have a look at urb->status to 175 179 detect any USB errors. Since the context parameter is included in the URB, 176 180 you can pass information to the completion handler. ··· 184 176 Note that even when an error (or unlink) is reported, data may have been 185 177 transferred. That's because USB transfers are packetized; it might take 186 178 sixteen packets to transfer your 1KByte buffer, and ten of them might 187 - have transferred succesfully before the completion is called. 179 + have transferred succesfully before the completion was called. 
188 180 189 181 190 182 NOTE: ***** WARNING ***** 191 - Don't use urb->dev field in your completion handler; it's cleared 192 - as part of giving urbs back to drivers. (Addressing an issue with 193 - ownership of periodic URBs, which was otherwise ambiguous.) Instead, 194 - use urb->context to hold all the data your driver needs. 195 - 196 - NOTE: ***** WARNING ***** 197 - Also, NEVER SLEEP IN A COMPLETION HANDLER. These are normally called 183 + NEVER SLEEP IN A COMPLETION HANDLER. These are normally called 198 184 during hardware interrupt processing. If you can, defer substantial 199 185 work to a tasklet (bottom half) to keep system latencies low. You'll 200 186 probably need to use spinlocks to protect data structures you manipulate ··· 231 229 Interrupt transfers, like isochronous transfers, are periodic, and happen 232 230 in intervals that are powers of two (1, 2, 4 etc) units. Units are frames 233 231 for full and low speed devices, and microframes for high speed ones. 234 - 235 - Currently, after you submit one interrupt URB, that urb is owned by the 236 - host controller driver until you cancel it with usb_unlink_urb(). You 237 - may unlink interrupt urbs in their completion handlers, if you need to. 238 - 239 - After a transfer completion is called, the URB is automagically resubmitted. 240 - THIS BEHAVIOR IS EXPECTED TO BE REMOVED!! 241 - 242 - Interrupt transfers may only send (or receive) the "maxpacket" value for 243 - the given interrupt endpoint; if you need more data, you will need to 244 - copy that data out of (or into) another buffer. Similarly, you can't 245 - queue interrupt transfers. 246 - THESE RESTRICTIONS ARE EXPECTED TO BE REMOVED!! 247 - 248 - Note that this automagic resubmission model does make it awkward to use 249 - interrupt OUT transfers. The portable solution involves unlinking those 250 - OUT urbs after the data is transferred, and perhaps submitting a final 251 - URB for a short packet. 
252 - 253 232 The usb_submit_urb() call modifies urb->interval to the implemented interval 254 233 value that is less than or equal to the requested interval value. 234 + 235 + In Linux 2.6, unlike earlier versions, interrupt URBs are not automagically 236 + restarted when they complete. They end when the completion handler is 237 + called, just like other URBs. If you want an interrupt URB to be restarted, 238 + your completion handler must resubmit it.
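The revised URB.txt above describes an asynchronous lifecycle: submit returns immediately, and the host controller later gives the URB back through the completion handler. A minimal userspace C model of that flow (the struct fields and helper names here are invented for illustration; this is not the kernel USB API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for struct urb: just enough state to show the flow. */
struct fake_urb {
	int status;
	int in_flight;
	void (*complete)(struct fake_urb *urb);
	void *context;               /* driver data, like urb->context */
};

/* "Submit" only queues the request and returns at once. */
static int fake_submit_urb(struct fake_urb *urb)
{
	if (urb->in_flight)
		return -1;           /* already queued */
	urb->in_flight = 1;
	return 0;
}

/* Later, the "host controller" gives the urb back via the callback. */
static void fake_give_back(struct fake_urb *urb, int status)
{
	urb->status = status;
	urb->in_flight = 0;
	urb->complete(urb);
}

static int completions;
static void my_complete(struct fake_urb *urb)
{
	completions++;               /* a real driver checks urb->status here */
}
```

The document's warnings follow from this shape: the URB is busy between submit and completion, and the real handler runs in interrupt context, so it must never sleep.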
+18 -2
MAINTAINERS
··· 1063 1063 S: Maintained 1064 1064 1065 1065 I2C SUBSYSTEM 1066 - P: Greg Kroah-Hartman 1067 - M: greg@kroah.com 1068 1066 P: Jean Delvare 1069 1067 M: khali@linux-fr.org 1070 1068 L: lm-sensors@lm-sensors.org ··· 1400 1402 W: http://www.xmission.com/~ebiederm/files/kexec/ 1401 1403 L: linux-kernel@vger.kernel.org 1402 1404 L: fastboot@osdl.org 1405 + S: Maintained 1406 + 1407 + KPROBES 1408 + P: Prasanna S Panchamukhi 1409 + M: prasanna@in.ibm.com 1410 + P: Ananth N Mavinakayanahalli 1411 + M: ananth@in.ibm.com 1412 + P: Anil S Keshavamurthy 1413 + M: anil.s.keshavamurthy@intel.com 1414 + P: David S. Miller 1415 + M: davem@davemloft.net 1416 + L: linux-kernel@vger.kernel.org 1403 1417 S: Maintained 1404 1418 1405 1419 LANMEDIA WAN CARD DRIVER ··· 2274 2264 P: Kristen Carlson Accardi 2275 2265 M: kristen.c.accardi@intel.com 2276 2266 L: pcihpd-discuss@lists.sourceforge.net 2267 + S: Maintained 2268 + 2269 + SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS 2270 + P: Stephen Hemminger 2271 + M: shemminger@osdl.org 2272 + L: netdev@vger.kernel.org 2277 2273 S: Maintained 2278 2274 2279 2275 SPARC (sparc32):
+1 -1
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 14 4 - EXTRAVERSION =-rc1 4 + EXTRAVERSION =-rc2 5 5 NAME=Affluent Albatross 6 6 7 7 # *DOCUMENTATION*
+6 -3
README
··· 149 149 "make gconfig" X windows (Gtk) based configuration tool. 150 150 "make oldconfig" Default all questions based on the contents of 151 151 your existing ./.config file. 152 + "make silentoldconfig" 153 + Like above, but avoids cluttering the screen 154 + with questions already answered. 152 155 153 156 NOTES on "make config": 154 157 - having unnecessary drivers will make the kernel bigger, and can ··· 171 168 break bad code to find kernel problems (kmalloc()). Thus you 172 169 should probably answer 'n' to the questions for 173 170 "development", "experimental", or "debugging" features. 174 - 175 - - Check the top Makefile for further site-dependent configuration 176 - (default SVGA mode etc). 177 171 178 172 COMPILING the kernel: 179 173 ··· 199 199 are installing a new kernel with the same version number as your 200 200 working kernel, make a backup of your modules directory before you 201 201 do a "make modules_install". 202 + Alternatively, before compiling, use the kernel config option 203 + "LOCALVERSION" to append a unique suffix to the regular kernel version. 204 + LOCALVERSION can be set in the "General Setup" menu. 202 205 203 206 - In order to boot your new kernel, you'll need to copy the kernel 204 207 image (e.g. .../linux/arch/i386/boot/bzImage after compilation)
+4
arch/alpha/kernel/process.c
··· 127 127 /* If booted from SRM, reset some of the original environment. */ 128 128 if (alpha_using_srm) { 129 129 #ifdef CONFIG_DUMMY_CONSOLE 130 + /* If we've gotten here after SysRq-b, leave interrupt 131 + context before taking over the console. */ 132 + if (in_interrupt()) 133 + irq_exit(); 130 134 /* This has the effect of resetting the VGA video origin. */ 131 135 take_over_console(&dummy_con, 0, MAX_NR_CONSOLES-1, 1); 132 136 #endif
+23 -18
arch/alpha/kernel/sys_dp264.c
··· 395 395 */ 396 396 397 397 static int __init 398 + isa_irq_fixup(struct pci_dev *dev, int irq) 399 + { 400 + u8 irq8; 401 + 402 + if (irq > 0) 403 + return irq; 404 + 405 + /* This interrupt is routed via ISA bridge, so we'll 406 + just have to trust whatever value the console might 407 + have assigned. */ 408 + pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq8); 409 + 410 + return irq8 & 0xf; 411 + } 412 + 413 + static int __init 398 414 dp264_map_irq(struct pci_dev *dev, u8 slot, u8 pin) 399 415 { 400 416 static char irq_tab[6][5] __initdata = { ··· 423 407 { 16+ 3, 16+ 3, 16+ 2, 16+ 1, 16+ 0} /* IdSel 10 slot 3 */ 424 408 }; 425 409 const long min_idsel = 5, max_idsel = 10, irqs_per_slot = 5; 426 - 427 410 struct pci_controller *hose = dev->sysdata; 428 411 int irq = COMMON_TABLE_LOOKUP; 429 412 430 - if (irq > 0) { 413 + if (irq > 0) 431 414 irq += 16 * hose->index; 432 - } else { 433 - /* ??? The Contaq IDE controller on the ISA bridge uses 434 - "legacy" interrupts 14 and 15. I don't know if anything 435 - can wind up at the same slot+pin on hose1, so we'll 436 - just have to trust whatever value the console might 437 - have assigned. 
*/ 438 415 439 - u8 irq8; 440 - pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq8); 441 - irq = irq8; 442 - } 443 - 444 - return irq; 416 + return isa_irq_fixup(dev, irq); 445 417 } 446 418 447 419 static int __init ··· 457 453 { 24, 24, 25, 26, 27} /* IdSel 15 slot 5 PCI2*/ 458 454 }; 459 455 const long min_idsel = 3, max_idsel = 15, irqs_per_slot = 5; 460 - return COMMON_TABLE_LOOKUP; 456 + 457 + return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP); 461 458 } 462 459 463 460 static u8 __init ··· 512 507 { 47, 47, 46, 45, 44}, /* IdSel 17 slot 3 */ 513 508 }; 514 509 const long min_idsel = 7, max_idsel = 17, irqs_per_slot = 5; 515 - return COMMON_TABLE_LOOKUP; 510 + 511 + return isa_irq_fixup(dev, COMMON_TABLE_LOOKUP); 516 512 } 517 513 518 514 static int __init ··· 530 524 { -1, -1, -1, -1, -1} /* IdSel 7 ISA Bridge */ 531 525 }; 532 526 const long min_idsel = 1, max_idsel = 7, irqs_per_slot = 5; 533 - 534 527 struct pci_controller *hose = dev->sysdata; 535 528 int irq = COMMON_TABLE_LOOKUP; 536 529 537 530 if (irq > 0) 538 531 irq += 16 * hose->index; 539 532 540 - return irq; 533 + return isa_irq_fixup(dev, irq); 541 534 } 542 535 543 536 static void __init
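The isa_irq_fixup() helper factored out above makes one decision: a valid table lookup wins, otherwise fall back to the low nibble of whatever the console wrote to PCI_INTERRUPT_LINE. That decision can be checked in plain C (the function name here is a model for illustration; irq8 stands in for the config-space read):

```c
#include <assert.h>

typedef unsigned char u8;

/* Mirror of the patch's fixup logic: prefer the table result, else
 * trust the console-assigned line, masked to a legal ISA IRQ. */
static int isa_irq_fixup_model(int irq, u8 irq8)
{
	if (irq > 0)
		return irq;
	return irq8 & 0xf;
}
```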
+1 -1
arch/arm/boot/compressed/ofw-shark.c
··· 256 256 temp[11]='\0'; 257 257 mem_len = OF_getproplen(o,phandle, temp); 258 258 OF_getprop(o,phandle, temp, buffer, mem_len); 259 - (unsigned char) pointer[32] = ((unsigned char *) buffer)[mem_len-2]; 259 + * ((unsigned char *) &pointer[32]) = ((unsigned char *) buffer)[mem_len-2]; 260 260 }
+1 -1
arch/arm/kernel/entry-armv.S
··· 537 537 #ifdef CONFIG_CPU_MPCORE 538 538 clrex 539 539 #else 540 - strex r3, r4, [ip] @ Clear exclusive monitor 540 + strex r5, r4, [ip] @ Clear exclusive monitor 541 541 #endif 542 542 #endif 543 543 #if defined(CONFIG_CPU_XSCALE) && !defined(CONFIG_IWMMXT)
+3 -3
arch/arm/kernel/io.c
··· 7 7 * Copy data from IO memory space to "real" memory space. 8 8 * This needs to be optimized. 9 9 */ 10 - void _memcpy_fromio(void *to, void __iomem *from, size_t count) 10 + void _memcpy_fromio(void *to, const volatile void __iomem *from, size_t count) 11 11 { 12 12 unsigned char *t = to; 13 13 while (count) { ··· 22 22 * Copy data from "real" memory space to IO memory space. 23 23 * This needs to be optimized. 24 24 */ 25 - void _memcpy_toio(void __iomem *to, const void *from, size_t count) 25 + void _memcpy_toio(volatile void __iomem *to, const void *from, size_t count) 26 26 { 27 27 const unsigned char *f = from; 28 28 while (count) { ··· 37 37 * "memset" on IO memory space. 38 38 * This needs to be optimized. 39 39 */ 40 - void _memset_io(void __iomem *dst, int c, size_t count) 40 + void _memset_io(volatile void __iomem *dst, int c, size_t count) 41 41 { 42 42 while (count) { 43 43 count--;
+1 -1
arch/arm/kernel/semaphore.c
··· 178 178 * registers (r0 to r3 and lr), but not ip, as we use it as a return 179 179 * value in some cases.. 180 180 */ 181 - asm(" .section .sched.text,\"ax\" \n\ 181 + asm(" .section .sched.text,\"ax\",%progbits \n\ 182 182 .align 5 \n\ 183 183 .globl __down_failed \n\ 184 184 __down_failed: \n\
+3
arch/arm/kernel/traps.c
··· 624 624 printk(" - extra data = %p", data); 625 625 printk("\n"); 626 626 *(int *)0 = 0; 627 + 628 + /* Avoid "noreturn function does return" */ 629 + for (;;); 627 630 } 628 631 EXPORT_SYMBOL(__bug); 629 632
+4 -4
arch/arm/kernel/vmlinux.lds.S
··· 23 23 *(.init.text) 24 24 _einittext = .; 25 25 __proc_info_begin = .; 26 - *(.proc.info) 26 + *(.proc.info.init) 27 27 __proc_info_end = .; 28 28 __arch_info_begin = .; 29 - *(.arch.info) 29 + *(.arch.info.init) 30 30 __arch_info_end = .; 31 31 __tagtable_begin = .; 32 - *(.taglist) 32 + *(.taglist.init) 33 33 __tagtable_end = .; 34 34 . = ALIGN(16); 35 35 __setup_start = .; 36 36 *(.init.setup) 37 37 __setup_end = .; 38 38 __early_begin = .; 39 - *(__early_param) 39 + *(.early_param.init) 40 40 __early_end = .; 41 41 __initcall_start = .; 42 42 *(.initcall1.init)
+6
arch/arm/mach-ixp4xx/ixdp425-setup.c
··· 123 123 platform_add_devices(ixdp425_devices, ARRAY_SIZE(ixdp425_devices)); 124 124 } 125 125 126 + #ifdef CONFIG_ARCH_IXDP465 126 127 MACHINE_START(IXDP425, "Intel IXDP425 Development Platform") 127 128 /* Maintainer: MontaVista Software, Inc. */ 128 129 .phys_ram = PHYS_OFFSET, ··· 135 134 .boot_params = 0x0100, 136 135 .init_machine = ixdp425_init, 137 136 MACHINE_END 137 + #endif 138 138 139 + #ifdef CONFIG_MACH_IXDP465 139 140 MACHINE_START(IXDP465, "Intel IXDP465 Development Platform") 140 141 /* Maintainer: MontaVista Software, Inc. */ 141 142 .phys_ram = PHYS_OFFSET, ··· 149 146 .boot_params = 0x0100, 150 147 .init_machine = ixdp425_init, 151 148 MACHINE_END 149 + #endif 152 150 151 + #ifdef CONFIG_ARCH_PRPMC1100 153 152 MACHINE_START(IXCDP1100, "Intel IXCDP1100 Development Platform") 154 153 /* Maintainer: MontaVista Software, Inc. */ 155 154 .phys_ram = PHYS_OFFSET, ··· 163 158 .boot_params = 0x0100, 164 159 .init_machine = ixdp425_init, 165 160 MACHINE_END 161 + #endif 166 162 167 163 /* 168 164 * Avila is functionally equivalent to IXDP425 except that it adds
+2 -1
arch/arm/mach-s3c2410/mach-anubis.c
··· 12 12 * 13 13 * Modifications: 14 14 * 02-May-2005 BJD Copied from mach-bast.c 15 + * 20-Sep-2005 BJD Added static to non-exported items 15 16 */ 16 17 17 18 #include <linux/kernel.h> ··· 233 232 .clocks_count = ARRAY_SIZE(anubis_clocks) 234 233 }; 235 234 236 - void __init anubis_map_io(void) 235 + static void __init anubis_map_io(void) 237 236 { 238 237 /* initialise the clocks */ 239 238
+2 -1
arch/arm/mach-s3c2410/mach-bast.c
··· 31 31 * 17-Jul-2005 BJD Changed to platform device for SuperIO 16550s 32 32 * 25-Jul-2005 BJD Removed ASIX static mappings 33 33 * 27-Jul-2005 BJD Ensure maximum frequency of i2c bus 34 + * 20-Sep-2005 BJD Added static to non-exported items 34 35 */ 35 36 36 37 #include <linux/kernel.h> ··· 429 428 .clocks_count = ARRAY_SIZE(bast_clocks) 430 429 }; 431 430 432 - void __init bast_map_io(void) 431 + static void __init bast_map_io(void) 433 432 { 434 433 /* initialise the clocks */ 435 434
+4 -3
arch/arm/mach-s3c2410/mach-h1940.c
··· 24 24 * 10-Jan-2005 BJD Removed include of s3c2410.h 25 25 * 14-Jan-2005 BJD Added clock init 26 26 * 10-Mar-2005 LCVR Changed S3C2410_VA to S3C24XX_VA 27 + * 20-Sep-2005 BJD Added static to non-exported items 27 28 */ 28 29 29 30 #include <linux/kernel.h> ··· 148 147 .devices_count = ARRAY_SIZE(h1940_devices) 149 148 }; 150 149 151 - void __init h1940_map_io(void) 150 + static void __init h1940_map_io(void) 152 151 { 153 152 s3c24xx_init_io(h1940_iodesc, ARRAY_SIZE(h1940_iodesc)); 154 153 s3c24xx_init_clocks(0); ··· 156 155 s3c24xx_set_board(&h1940_board); 157 156 } 158 157 159 - void __init h1940_init_irq(void) 158 + static void __init h1940_init_irq(void) 160 159 { 161 160 s3c24xx_init_irq(); 162 161 163 162 } 164 163 165 - void __init h1940_init(void) 164 + static void __init h1940_init(void) 166 165 { 167 166 set_s3c2410fb_info(&h1940_lcdcfg); 168 167 }
+3 -3
arch/arm/mach-s3c2410/mach-n30.c
··· 97 97 .devices_count = ARRAY_SIZE(n30_devices) 98 98 }; 99 99 100 - void __init n30_map_io(void) 100 + static void __init n30_map_io(void) 101 101 { 102 102 s3c24xx_init_io(n30_iodesc, ARRAY_SIZE(n30_iodesc)); 103 103 s3c24xx_init_clocks(0); ··· 105 105 s3c24xx_set_board(&n30_board); 106 106 } 107 107 108 - void __init n30_init_irq(void) 108 + static void __init n30_init_irq(void) 109 109 { 110 110 s3c24xx_init_irq(); 111 111 } 112 112 113 113 /* GPB3 is the line that controls the pull-up for the USB D+ line */ 114 114 115 - void __init n30_init(void) 115 + static void __init n30_init(void) 116 116 { 117 117 s3c_device_i2c.dev.platform_data = &n30_i2ccfg; 118 118
+1 -1
arch/arm/mach-s3c2410/mach-nexcoder.c
··· 136 136 s3c2410_gpio_cfgpin(S3C2410_GPF2, S3C2410_GPF2_OUTP); // CAM_GPIO6 => CAM_PWRDN 137 137 } 138 138 139 - void __init nexcoder_map_io(void) 139 + static void __init nexcoder_map_io(void) 140 140 { 141 141 s3c24xx_init_io(nexcoder_iodesc, ARRAY_SIZE(nexcoder_iodesc)); 142 142 s3c24xx_init_clocks(0);
+1 -1
arch/arm/mach-s3c2410/mach-otom.c
··· 105 105 }; 106 106 107 107 108 - void __init otom11_map_io(void) 108 + static void __init otom11_map_io(void) 109 109 { 110 110 s3c24xx_init_io(otom11_iodesc, ARRAY_SIZE(otom11_iodesc)); 111 111 s3c24xx_init_clocks(0);
+3 -2
arch/arm/mach-s3c2410/mach-rx3715.c
··· 16 16 * 14-Jan-2005 BJD Added new clock init 17 17 * 10-Mar-2005 LCVR Changed S3C2410_VA to S3C24XX_VA 18 18 * 14-Mar-2005 BJD Fixed __iomem warnings 19 + * 20-Sep-2005 BJD Added static to non-exported items 19 20 */ 20 21 21 22 #include <linux/kernel.h> ··· 109 108 .devices_count = ARRAY_SIZE(rx3715_devices) 110 109 }; 111 110 112 - void __init rx3715_map_io(void) 111 + static void __init rx3715_map_io(void) 113 112 { 114 113 s3c24xx_init_io(rx3715_iodesc, ARRAY_SIZE(rx3715_iodesc)); 115 114 s3c24xx_init_clocks(16934000); ··· 117 116 s3c24xx_set_board(&rx3715_board); 118 117 } 119 118 120 - void __init rx3715_init_irq(void) 119 + static void __init rx3715_init_irq(void) 121 120 { 122 121 s3c24xx_init_irq(); 123 122 }
+3 -2
arch/arm/mach-s3c2410/mach-smdk2410.c
··· 28 28 * Ben Dooks <ben@simtec.co.uk> 29 29 * 30 30 * 10-Mar-2005 LCVR Changed S3C2410_VA to S3C24XX_VA 31 + * 20-Sep-2005 BJD Added static to non-exported items 31 32 * 32 33 ***********************************************************************/ 33 34 ··· 98 97 .devices_count = ARRAY_SIZE(smdk2410_devices) 99 98 }; 100 99 101 - void __init smdk2410_map_io(void) 100 + static void __init smdk2410_map_io(void) 102 101 { 103 102 s3c24xx_init_io(smdk2410_iodesc, ARRAY_SIZE(smdk2410_iodesc)); 104 103 s3c24xx_init_clocks(0); ··· 106 105 s3c24xx_set_board(&smdk2410_board); 107 106 } 108 107 109 - void __init smdk2410_init_irq(void) 108 + static void __init smdk2410_init_irq(void) 110 109 { 111 110 s3c24xx_init_irq(); 112 111 }
+3 -2
arch/arm/mach-s3c2410/mach-smdk2440.c
··· 18 18 * 22-Feb-2005 BJD Updated for 2.6.11-rc5 relesa 19 19 * 10-Mar-2005 LCVR Replaced S3C2410_VA by S3C24XX_VA 20 20 * 14-Mar-2005 BJD void __iomem fixes 21 + * 20-Sep-2005 BJD Added static to non-exported items 21 22 */ 22 23 23 24 #include <linux/kernel.h> ··· 99 98 .devices_count = ARRAY_SIZE(smdk2440_devices) 100 99 }; 101 100 102 - void __init smdk2440_map_io(void) 101 + static void __init smdk2440_map_io(void) 103 102 { 104 103 s3c24xx_init_io(smdk2440_iodesc, ARRAY_SIZE(smdk2440_iodesc)); 105 104 s3c24xx_init_clocks(16934400); ··· 107 106 s3c24xx_set_board(&smdk2440_board); 108 107 } 109 108 110 - void __init smdk2440_machine_init(void) 109 + static void __init smdk2440_machine_init(void) 111 110 { 112 111 /* Configure the LEDs (even if we have no LED support)*/ 113 112
+2 -1
arch/arm/mach-s3c2410/mach-vr1000.c
··· 28 28 * 10-Mar-2005 LCVR Changed S3C2410_VA to S3C24XX_VA 29 29 * 14-Mar-2006 BJD void __iomem fixes 30 30 * 22-Jun-2006 BJD Added DM9000 platform information 31 + * 20-Sep-2005 BJD Added static to non-exported items 31 32 */ 32 33 33 34 #include <linux/kernel.h> ··· 348 347 s3c2410_gpio_setpin(S3C2410_GPB9, 1); 349 348 } 350 349 351 - void __init vr1000_map_io(void) 350 + static void __init vr1000_map_io(void) 352 351 { 353 352 /* initialise clock sources */ 354 353
+3
arch/arm/mach-sa1100/generic.h
··· 39 39 40 40 struct irda_platform_data; 41 41 void sa11x0_set_irda_data(struct irda_platform_data *irda); 42 + 43 + struct mcp_plat_data; 44 + void sa11x0_set_mcp_data(struct mcp_plat_data *data);
+11 -1
arch/arm/mm/fault.c
··· 233 233 if (in_interrupt() || !mm) 234 234 goto no_context; 235 235 236 - down_read(&mm->mmap_sem); 236 + /* 237 + * As per x86, we may deadlock here. However, since the kernel only 238 + * validly references user space from well defined areas of the code, 239 + * we can bug out early if this is from code which shouldn't. 240 + */ 241 + if (!down_read_trylock(&mm->mmap_sem)) { 242 + if (!user_mode(regs) && !search_exception_tables(regs->ARM_pc)) 243 + goto no_context; 244 + down_read(&mm->mmap_sem); 245 + } 246 + 237 247 fault = __do_page_fault(mm, addr, fsr, tsk); 238 248 up_read(&mm->mmap_sem); 239 249
+1 -1
arch/arm/mm/proc-arm1020.S
··· 509 509 510 510 .align 511 511 512 - .section ".proc.info", #alloc, #execinstr 512 + .section ".proc.info.init", #alloc, #execinstr 513 513 514 514 .type __arm1020_proc_info,#object 515 515 __arm1020_proc_info:
+1 -1
arch/arm/mm/proc-arm1020e.S
··· 491 491 492 492 .align 493 493 494 - .section ".proc.info", #alloc, #execinstr 494 + .section ".proc.info.init", #alloc, #execinstr 495 495 496 496 .type __arm1020e_proc_info,#object 497 497 __arm1020e_proc_info:
+1 -1
arch/arm/mm/proc-arm1022.S
··· 473 473 474 474 .align 475 475 476 - .section ".proc.info", #alloc, #execinstr 476 + .section ".proc.info.init", #alloc, #execinstr 477 477 478 478 .type __arm1022_proc_info,#object 479 479 __arm1022_proc_info:
+1 -1
arch/arm/mm/proc-arm1026.S
··· 469 469 470 470 .align 471 471 472 - .section ".proc.info", #alloc, #execinstr 472 + .section ".proc.info.init", #alloc, #execinstr 473 473 474 474 .type __arm1026_proc_info,#object 475 475 __arm1026_proc_info:
+1 -1
arch/arm/mm/proc-arm6_7.S
··· 332 332 333 333 .align 334 334 335 - .section ".proc.info", #alloc, #execinstr 335 + .section ".proc.info.init", #alloc, #execinstr 336 336 337 337 .type __arm6_proc_info, #object 338 338 __arm6_proc_info:
+1 -1
arch/arm/mm/proc-arm720.S
··· 222 222 * See linux/include/asm-arm/procinfo.h for a definition of this structure. 223 223 */ 224 224 225 - .section ".proc.info", #alloc, #execinstr 225 + .section ".proc.info.init", #alloc, #execinstr 226 226 227 227 .type __arm710_proc_info, #object 228 228 __arm710_proc_info:
+1 -1
arch/arm/mm/proc-arm920.S
··· 452 452 453 453 .align 454 454 455 - .section ".proc.info", #alloc, #execinstr 455 + .section ".proc.info.init", #alloc, #execinstr 456 456 457 457 .type __arm920_proc_info,#object 458 458 __arm920_proc_info:
+1 -1
arch/arm/mm/proc-arm922.S
··· 456 456 457 457 .align 458 458 459 - .section ".proc.info", #alloc, #execinstr 459 + .section ".proc.info.init", #alloc, #execinstr 460 460 461 461 .type __arm922_proc_info,#object 462 462 __arm922_proc_info:
+1 -1
arch/arm/mm/proc-arm925.S
··· 521 521 522 522 .align 523 523 524 - .section ".proc.info", #alloc, #execinstr 524 + .section ".proc.info.init", #alloc, #execinstr 525 525 526 526 .type __arm925_proc_info,#object 527 527 __arm925_proc_info:
+1 -1
arch/arm/mm/proc-arm926.S
··· 471 471 472 472 .align 473 473 474 - .section ".proc.info", #alloc, #execinstr 474 + .section ".proc.info.init", #alloc, #execinstr 475 475 476 476 .type __arm926_proc_info,#object 477 477 __arm926_proc_info:
+1 -1
arch/arm/mm/proc-sa110.S
··· 249 249 250 250 .align 251 251 252 - .section ".proc.info", #alloc, #execinstr 252 + .section ".proc.info.init", #alloc, #execinstr 253 253 254 254 .type __sa110_proc_info,#object 255 255 __sa110_proc_info:
+1 -1
arch/arm/mm/proc-sa1100.S
··· 280 280 281 281 .align 282 282 283 - .section ".proc.info", #alloc, #execinstr 283 + .section ".proc.info.init", #alloc, #execinstr 284 284 285 285 .type __sa1100_proc_info,#object 286 286 __sa1100_proc_info:
+1 -1
arch/arm/mm/proc-v6.S
··· 240 240 .size cpu_elf_name, . - cpu_elf_name 241 241 .align 242 242 243 - .section ".proc.info", #alloc, #execinstr 243 + .section ".proc.info.init", #alloc, #execinstr 244 244 245 245 /* 246 246 * Match any ARMv6 processor core.
+1 -1
arch/arm/mm/proc-xscale.S
··· 578 578 579 579 .align 580 580 581 - .section ".proc.info", #alloc, #execinstr 581 + .section ".proc.info.init", #alloc, #execinstr 582 582 583 583 .type __80200_proc_info,#object 584 584 __80200_proc_info:
+23 -6
arch/ia64/hp/sim/simscsi.c
··· 233 233 simscsi_readwrite(sc, mode, offset, ((sc->cmnd[7] << 8) | sc->cmnd[8])*512); 234 234 } 235 235 236 + static void simscsi_fillresult(struct scsi_cmnd *sc, char *buf, unsigned len) 237 + { 238 + 239 + int scatterlen = sc->use_sg; 240 + struct scatterlist *slp; 241 + 242 + if (scatterlen == 0) 243 + memcpy(sc->request_buffer, buf, len); 244 + else for (slp = (struct scatterlist *)sc->buffer; scatterlen-- > 0 && len > 0; slp++) { 245 + unsigned thislen = min(len, slp->length); 246 + 247 + memcpy(page_address(slp->page) + slp->offset, buf, thislen); 248 + slp++; 249 + len -= thislen; 250 + } 251 + } 252 + 236 253 static int 237 254 simscsi_queuecommand (struct scsi_cmnd *sc, void (*done)(struct scsi_cmnd *)) 238 255 { ··· 257 240 char fname[MAX_ROOT_LEN+16]; 258 241 size_t disk_size; 259 242 char *buf; 243 + char localbuf[36]; 260 244 #if DEBUG_SIMSCSI 261 245 register long sp asm ("sp"); 262 246 ··· 281 263 /* disk doesn't exist... */ 282 264 break; 283 265 } 284 - buf = sc->request_buffer; 266 + buf = localbuf; 285 267 buf[0] = 0; /* magnetic disk */ 286 268 buf[1] = 0; /* not a removable medium */ 287 269 buf[2] = 2; /* SCSI-2 compliant device */ ··· 291 273 buf[6] = 0; /* reserved */ 292 274 buf[7] = 0; /* various flags */ 293 275 memcpy(buf + 8, "HP SIMULATED DISK 0.00", 28); 276 + simscsi_fillresult(sc, buf, 36); 294 277 sc->result = GOOD; 295 278 break; 296 279 ··· 323 304 simscsi_readwrite10(sc, SSC_WRITE); 324 305 break; 325 306 326 - 327 307 case READ_CAPACITY: 328 308 if (desc[target_id] < 0 || sc->request_bufflen < 8) { 329 309 break; 330 310 } 331 - buf = sc->request_buffer; 332 - 311 + buf = localbuf; 333 312 disk_size = simscsi_get_disk_size(desc[target_id]); 334 313 335 - /* pretend to be a 1GB disk (partition table contains real stuff): */ 336 314 buf[0] = (disk_size >> 24) & 0xff; 337 315 buf[1] = (disk_size >> 16) & 0xff; 338 316 buf[2] = (disk_size >> 8) & 0xff; ··· 339 323 buf[5] = 0; 340 324 buf[6] = 2; 341 325 buf[7] = 0; 326 + 
simscsi_fillresult(sc, buf, 8); 342 327 sc->result = GOOD; 343 328 break; 344 329 345 330 case MODE_SENSE: 346 331 case MODE_SENSE_10: 347 332 /* sd.c uses this to determine whether disk does write-caching. */ 348 - memset(sc->request_buffer, 0, 128); 333 + simscsi_fillresult(sc, (char *)empty_zero_page, sc->request_bufflen); 349 334 sc->result = GOOD; 350 335 break; 351 336
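The new `simscsi_fillresult()` exists because a SCSI command may hand the driver either a flat `request_buffer` or a scatter-gather list, so INQUIRY/READ_CAPACITY data is now staged in `localbuf` and copied out through one helper. A minimal userspace analogue of spreading a flat buffer across segment descriptors (the `segment` struct here is hypothetical, not the kernel's `scatterlist`):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a scatterlist entry: a base pointer and length. */
struct segment {
    char *base;
    size_t length;
};

/* Copy len bytes of buf across the segment list, advancing the destination
 * segment exactly once per iteration and the source offset by the amount
 * actually copied. */
static void fill_segments(struct segment *seg, int nsegs,
                          const char *buf, size_t len)
{
    for (; nsegs-- > 0 && len > 0; seg++) {
        size_t thislen = len < seg->length ? len : seg->length;

        memcpy(seg->base, buf, thislen);
        buf += thislen;   /* advance the source as well as the segment */
        len -= thislen;
    }
}
```

The invariant worth checking in any such loop is that each iteration consumes one segment and `thislen` source bytes; incrementing the segment pointer in both the loop header and the body would silently skip every other entry.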
+85 -11
arch/ia64/kernel/mca_asm.S
··· 489 489 ;; 490 490 st8 [temp1]=r17,16 // pal_min_state 491 491 st8 [temp2]=r6,16 // prev_IA64_KR_CURRENT 492 + mov r6=IA64_KR(CURRENT_STACK) 493 + ;; 494 + st8 [temp1]=r6,16 // prev_IA64_KR_CURRENT_STACK 495 + st8 [temp2]=r0,16 // prev_task, starts off as NULL 492 496 mov r6=cr.ifa 493 497 ;; 494 - st8 [temp1]=r0,16 // prev_task, starts off as NULL 495 - st8 [temp2]=r12,16 // cr.isr 498 + st8 [temp1]=r12,16 // cr.isr 499 + st8 [temp2]=r6,16 // cr.ifa 496 500 mov r12=cr.itir 497 501 ;; 498 - st8 [temp1]=r6,16 // cr.ifa 499 - st8 [temp2]=r12,16 // cr.itir 502 + st8 [temp1]=r12,16 // cr.itir 503 + st8 [temp2]=r11,16 // cr.iipa 500 504 mov r12=cr.iim 501 505 ;; 502 - st8 [temp1]=r11,16 // cr.iipa 503 - st8 [temp2]=r12,16 // cr.iim 504 - mov r6=cr.iha 506 + st8 [temp1]=r12,16 // cr.iim 505 507 (p1) mov r12=IA64_MCA_COLD_BOOT 506 508 (p2) mov r12=IA64_INIT_WARM_BOOT 509 + mov r6=cr.iha 507 510 ;; 508 - st8 [temp1]=r6,16 // cr.iha 509 - st8 [temp2]=r12 // os_status, default is cold boot 511 + st8 [temp2]=r6,16 // cr.iha 512 + st8 [temp1]=r12 // os_status, default is cold boot 510 513 mov r6=IA64_MCA_SAME_CONTEXT 511 514 ;; 512 515 st8 [temp1]=r6 // context, default is same context ··· 826 823 ld8 r12=[temp1],16 // sal_ra 827 824 ld8 r9=[temp2],16 // sal_gp 828 825 ;; 829 - ld8 r22=[temp1],24 // pal_min_state, virtual. skip prev_task 826 + ld8 r22=[temp1],16 // pal_min_state, virtual 830 827 ld8 r21=[temp2],16 // prev_IA64_KR_CURRENT 828 + ;; 829 + ld8 r16=[temp1],16 // prev_IA64_KR_CURRENT_STACK 830 + ld8 r20=[temp2],16 // prev_task 831 831 ;; 832 832 ld8 temp3=[temp1],16 // cr.isr 833 833 ld8 temp4=[temp2],16 // cr.ifa ··· 851 845 mov IA64_KR(CURRENT)=r21 852 846 ld8 r8=[temp1] // os_status 853 847 ld8 r10=[temp2] // context 848 + 849 + /* Wire IA64_TR_CURRENT_STACK to the stack that we are resuming to. To 850 + * avoid any dependencies on the algorithm in ia64_switch_to(), just 851 + * purge any existing CURRENT_STACK mapping and insert the new one. 
852 + * 853 + * r16 contains prev_IA64_KR_CURRENT_STACK, r21 contains 854 + * prev_IA64_KR_CURRENT, these values may have been changed by the C 855 + * code. Do not use r8, r9, r10, r22, they contain values ready for 856 + * the return to SAL. 857 + */ 858 + 859 + mov r15=IA64_KR(CURRENT_STACK) // physical granule mapped by IA64_TR_CURRENT_STACK 860 + ;; 861 + shl r15=r15,IA64_GRANULE_SHIFT 862 + ;; 863 + dep r15=-1,r15,61,3 // virtual granule 864 + mov r18=IA64_GRANULE_SHIFT<<2 // for cr.itir.ps 865 + ;; 866 + ptr.d r15,r18 867 + ;; 868 + srlz.d 869 + 870 + extr.u r19=r21,61,3 // r21 = prev_IA64_KR_CURRENT 871 + shl r20=r16,IA64_GRANULE_SHIFT // r16 = prev_IA64_KR_CURRENT_STACK 872 + movl r21=PAGE_KERNEL // page properties 873 + ;; 874 + mov IA64_KR(CURRENT_STACK)=r16 875 + cmp.ne p6,p0=RGN_KERNEL,r19 // new stack is in the kernel region? 876 + or r21=r20,r21 // construct PA | page properties 877 + (p6) br.spnt 1f // the dreaded cpu 0 idle task in region 5:( 878 + ;; 879 + mov cr.itir=r18 880 + mov cr.ifa=r21 881 + mov r20=IA64_TR_CURRENT_STACK 882 + ;; 883 + itr.d dtr[r20]=r21 884 + ;; 885 + srlz.d 886 + 1: 854 887 855 888 br.sptk b0 856 889 ··· 1027 982 add temp4=temp4, temp1 // &struct ia64_sal_os_state.os_gp 1028 983 add r12=temp1, temp3 // kernel stack pointer on MCA/INIT stack 1029 984 add r13=temp1, r3 // set current to start of MCA/INIT stack 985 + add r20=temp1, r3 // physical start of MCA/INIT stack 1030 986 ;; 1031 987 ld8 r1=[temp4] // OS GP from SAL OS state 1032 988 ;; ··· 1037 991 ;; 1038 992 mov IA64_KR(CURRENT)=r13 1039 993 1040 - // FIXME: do I need to wire IA64_KR_CURRENT_STACK and IA64_TR_CURRENT_STACK? 994 + /* Wire IA64_TR_CURRENT_STACK to the MCA/INIT handler stack. To avoid 995 + * any dependencies on the algorithm in ia64_switch_to(), just purge 996 + * any existing CURRENT_STACK mapping and insert the new one. 
997 + */ 998 + 999 + mov r16=IA64_KR(CURRENT_STACK) // physical granule mapped by IA64_TR_CURRENT_STACK 1000 + ;; 1001 + shl r16=r16,IA64_GRANULE_SHIFT 1002 + ;; 1003 + dep r16=-1,r16,61,3 // virtual granule 1004 + mov r18=IA64_GRANULE_SHIFT<<2 // for cr.itir.ps 1005 + ;; 1006 + ptr.d r16,r18 1007 + ;; 1008 + srlz.d 1009 + 1010 + shr.u r16=r20,IA64_GRANULE_SHIFT // r20 = physical start of MCA/INIT stack 1011 + movl r21=PAGE_KERNEL // page properties 1012 + ;; 1013 + mov IA64_KR(CURRENT_STACK)=r16 1014 + or r21=r20,r21 // construct PA | page properties 1015 + ;; 1016 + mov cr.itir=r18 1017 + mov cr.ifa=r13 1018 + mov r20=IA64_TR_CURRENT_STACK 1019 + ;; 1020 + itr.d dtr[r20]=r21 1021 + ;; 1022 + srlz.d 1041 1023 1042 1024 br.sptk b0 1043 1025
+15 -6
arch/ia64/kernel/mca_drv.c
··· 56 56 static int num_page_isolate = 0; 57 57 58 58 typedef enum { 59 - ISOLATE_NG = 0, 60 - ISOLATE_OK = 1 59 + ISOLATE_NG, 60 + ISOLATE_OK, 61 + ISOLATE_NONE 61 62 } isolate_status_t; 62 63 63 64 /* ··· 75 74 * @paddr: poisoned memory location 76 75 * 77 76 * Return value: 78 - * ISOLATE_OK / ISOLATE_NG 77 + * one of isolate_status_t, ISOLATE_OK/NG/NONE. 79 78 */ 80 79 81 80 static isolate_status_t ··· 86 85 87 86 /* whether physical address is valid or not */ 88 87 if (!ia64_phys_addr_valid(paddr)) 89 - return ISOLATE_NG; 88 + return ISOLATE_NONE; 89 + 90 + if (!pfn_valid(paddr)) 91 + return ISOLATE_NONE; 90 92 91 93 /* convert physical address to physical page number */ 92 94 p = pfn_to_page(paddr>>PAGE_SHIFT); ··· 126 122 current->pid, current->comm); 127 123 128 124 spin_lock(&mca_bh_lock); 129 - if (mca_page_isolate(paddr) == ISOLATE_OK) { 125 + switch (mca_page_isolate(paddr)) { 126 + case ISOLATE_OK: 130 127 printk(KERN_DEBUG "Page isolation: ( %lx ) success.\n", paddr); 131 - } else { 128 + break; 129 + case ISOLATE_NG: 132 130 printk(KERN_DEBUG "Page isolation: ( %lx ) failure.\n", paddr); 131 + break; 132 + default: 133 + break; 133 134 } 134 135 spin_unlock(&mca_bh_lock); 135 136
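The mca_drv.c change splits "isolation failed" from "nothing to isolate": an address that cannot name a valid page now yields `ISOLATE_NONE` instead of being logged as a failure. A compilable sketch of that three-state pattern, with hypothetical validity predicates standing in for `ia64_phys_addr_valid()` and `pfn_valid()` (real kernels consult the memory map) and an assumed 16 KB page size:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum { ISOLATE_NG, ISOLATE_OK, ISOLATE_NONE } isolate_status_t;

#define PAGE_SHIFT 14   /* assumption: 16 KB pages, a common ia64 choice */

/* Hypothetical stand-ins for the kernel's validity checks. */
static bool phys_addr_valid(uint64_t paddr) { return paddr < (UINT64_C(1) << 40); }
static bool pfn_valid(uint64_t pfn)         { return pfn < (UINT64_C(1) << 20); }

/* ISOLATE_NONE means "no page to act on", so callers can stay silent,
 * while ISOLATE_NG still marks a genuine isolation failure. */
static isolate_status_t page_isolate(uint64_t paddr)
{
    if (!phys_addr_valid(paddr))
        return ISOLATE_NONE;
    if (!pfn_valid(paddr >> PAGE_SHIFT))
        return ISOLATE_NONE;
    return ISOLATE_OK;   /* the isolation itself is elided in this sketch */
}
```

The caller then switches over the enum with an empty `default:` arm, exactly so the quiet `ISOLATE_NONE` case prints nothing.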
+1 -2
arch/ppc/kernel/Makefile
··· 15 15 obj-y := entry.o traps.o irq.o idle.o time.o misc.o \ 16 16 process.o signal.o ptrace.o align.o \ 17 17 semaphore.o syscalls.o setup.o \ 18 - cputable.o ppc_htab.o 18 + cputable.o ppc_htab.o perfmon.o 19 19 obj-$(CONFIG_6xx) += l2cr.o cpu_setup_6xx.o 20 - obj-$(CONFIG_E500) += perfmon.o 21 20 obj-$(CONFIG_SOFTWARE_SUSPEND) += swsusp.o 22 21 obj-$(CONFIG_POWER4) += cpu_setup_power4.o 23 22 obj-$(CONFIG_MODULES) += module.o ppc_ksyms.o
+5 -1
arch/ppc/kernel/perfmon.c
··· 45 45 mtpmr(PMRN_PMGC0, pmgc0); 46 46 } 47 47 48 - #else 48 + #elif CONFIG_6xx 49 49 /* Ensure exceptions are disabled */ 50 50 51 51 static void dummy_perf(struct pt_regs *regs) ··· 54 54 55 55 mmcr0 &= ~MMCR0_PMXE; 56 56 mtspr(SPRN_MMCR0, mmcr0); 57 + } 58 + #else 59 + static void dummy_perf(struct pt_regs *regs) 60 + { 57 61 } 58 62 #endif 59 63
+6 -4
arch/ppc/platforms/pmac_setup.c
··· 719 719 if (np) { 720 720 for (np = np->child; np != NULL; np = np->sibling) 721 721 if (strncmp(np->name, "i2c", 3) == 0) { 722 - of_platform_device_create(np, "uni-n-i2c"); 722 + of_platform_device_create(np, "uni-n-i2c", 723 + NULL); 723 724 break; 724 725 } 725 726 } ··· 728 727 if (np) { 729 728 for (np = np->child; np != NULL; np = np->sibling) 730 729 if (strncmp(np->name, "i2c", 3) == 0) { 731 - of_platform_device_create(np, "u3-i2c"); 730 + of_platform_device_create(np, "u3-i2c", 731 + NULL); 732 732 break; 733 733 } 734 734 } 735 735 736 736 np = find_devices("valkyrie"); 737 737 if (np) 738 - of_platform_device_create(np, "valkyrie"); 738 + of_platform_device_create(np, "valkyrie", NULL); 739 739 np = find_devices("platinum"); 740 740 if (np) 741 - of_platform_device_create(np, "platinum"); 741 + of_platform_device_create(np, "platinum", NULL); 742 742 743 743 return 0; 744 744 }
+4 -2
arch/ppc/syslib/of_device.c
··· 234 234 device_unregister(&ofdev->dev); 235 235 } 236 236 237 - struct of_device* of_platform_device_create(struct device_node *np, const char *bus_id) 237 + struct of_device* of_platform_device_create(struct device_node *np, 238 + const char *bus_id, 239 + struct device *parent) 238 240 { 239 241 struct of_device *dev; 240 242 u32 *reg; ··· 249 247 dev->node = of_node_get(np); 250 248 dev->dma_mask = 0xffffffffUL; 251 249 dev->dev.dma_mask = &dev->dma_mask; 252 - dev->dev.parent = NULL; 250 + dev->dev.parent = parent; 253 251 dev->dev.bus = &of_platform_bus_type; 254 252 dev->dev.release = of_release_dev; 255 253
+4 -4
arch/ppc/syslib/ppc85xx_setup.c
··· 184 184 pci->powar1 = 0x80044000 | 185 185 (__ilog2(MPC85XX_PCI1_UPPER_MEM - MPC85XX_PCI1_LOWER_MEM + 1) - 1); 186 186 187 - /* Setup outboud IO windows @ MPC85XX_PCI1_IO_BASE */ 188 - pci->potar2 = 0x00000000; 187 + /* Setup outbound IO windows @ MPC85XX_PCI1_IO_BASE */ 188 + pci->potar2 = (MPC85XX_PCI1_LOWER_IO >> 12) & 0x000fffff; 189 189 pci->potear2 = 0x00000000; 190 190 pci->powbar2 = (MPC85XX_PCI1_IO_BASE >> 12) & 0x000fffff; 191 191 /* Enable, IO R/W */ ··· 235 235 pci->powar1 = 0x80044000 | 236 236 (__ilog2(MPC85XX_PCI2_UPPER_MEM - MPC85XX_PCI2_LOWER_MEM + 1) - 1); 237 237 238 - /* Setup outboud IO windows @ MPC85XX_PCI2_IO_BASE */ 239 - pci->potar2 = 0x00000000; 238 + /* Setup outbound IO windows @ MPC85XX_PCI2_IO_BASE */ 239 + pci->potar2 = (MPC85XX_PCI2_LOWER_IO >> 12) & 0x000fffff;; 240 240 pci->potear2 = 0x00000000; 241 241 pci->powbar2 = (MPC85XX_PCI2_IO_BASE >> 12) & 0x000fffff; 242 242 /* Enable, IO R/W */
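The 85xx fix programs `potar2` (the outbound translation address) from the PCI window base instead of leaving it zero. The ATMU registers hold the 4 KB-page number of the address, masked to the low 20 bits; a sketch of that field computation (the example address below is made up, not one of the `MPC85XX_*` constants):

```c
#include <assert.h>
#include <stdint.h>

/* MPC85xx outbound ATMU base/translation registers take the address
 * right-shifted by 12 (a 4 KB-page number), masked to 20 bits. */
static uint32_t atmu_addr_field(uint64_t addr)
{
    return (uint32_t)((addr >> 12) & 0x000fffff);
}
```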
+1 -1
arch/ppc64/Makefile
··· 107 107 $(Q)$(MAKE) $(build)=$(boot) BOOTIMAGE=$(BOOTIMAGE) $@ 108 108 109 109 defaultimage-$(CONFIG_PPC_PSERIES) := zImage 110 - defaultimage-$(CONFIG_PPC_PMAC) := vmlinux 110 + defaultimage-$(CONFIG_PPC_PMAC) := zImage.vmode 111 111 defaultimage-$(CONFIG_PPC_MAPLE) := zImage 112 112 defaultimage-$(CONFIG_PPC_ISERIES) := vmlinux 113 113 KBUILD_IMAGE := $(defaultimage-y)
+5 -2
arch/ppc64/kernel/of_device.c
··· 233 233 device_unregister(&ofdev->dev); 234 234 } 235 235 236 - struct of_device* of_platform_device_create(struct device_node *np, const char *bus_id) 236 + struct of_device* of_platform_device_create(struct device_node *np, 237 + const char *bus_id, 238 + struct device *parent) 237 239 { 238 240 struct of_device *dev; 239 241 ··· 247 245 dev->node = np; 248 246 dev->dma_mask = 0xffffffffUL; 249 247 dev->dev.dma_mask = &dev->dma_mask; 250 - dev->dev.parent = NULL; 248 + dev->dev.parent = parent; 251 249 dev->dev.bus = &of_platform_bus_type; 252 250 dev->dev.release = of_release_dev; 253 251 ··· 260 258 261 259 return dev; 262 260 } 261 + 263 262 264 263 EXPORT_SYMBOL(of_match_device); 265 264 EXPORT_SYMBOL(of_platform_bus_type);
+96 -87
arch/ppc64/kernel/pSeries_iommu.c
··· 281 281 tbl->it_offset = phb->dma_window_base_cur >> PAGE_SHIFT; 282 282 283 283 /* Test if we are going over 2GB of DMA space */ 284 - if (phb->dma_window_base_cur + phb->dma_window_size > (1L << 31)) 284 + if (phb->dma_window_base_cur + phb->dma_window_size > 0x80000000ul) { 285 + udbg_printf("PCI_DMA: Unexpected number of IOAs under this PHB.\n"); 285 286 panic("PCI_DMA: Unexpected number of IOAs under this PHB.\n"); 287 + } 286 288 287 289 phb->dma_window_base_cur += phb->dma_window_size; 288 290 ··· 328 326 329 327 static void iommu_bus_setup_pSeries(struct pci_bus *bus) 330 328 { 331 - struct device_node *dn, *pdn; 332 - struct pci_dn *pci; 329 + struct device_node *dn; 333 330 struct iommu_table *tbl; 331 + struct device_node *isa_dn, *isa_dn_orig; 332 + struct device_node *tmp; 333 + struct pci_dn *pci; 334 + int children; 334 335 335 336 DBG("iommu_bus_setup_pSeries, bus %p, bus->self %p\n", bus, bus->self); 336 337 337 - /* For each (root) bus, we carve up the available DMA space in 256MB 338 - * pieces. Since each piece is used by one (sub) bus/device, that would 339 - * give a maximum of 7 devices per PHB. In most cases, this is plenty. 338 + dn = pci_bus_to_OF_node(bus); 339 + pci = PCI_DN(dn); 340 + 341 + if (bus->self) { 342 + /* This is not a root bus, any setup will be done for the 343 + * device-side of the bridge in iommu_dev_setup_pSeries(). 344 + */ 345 + return; 346 + } 347 + 348 + /* Check if the ISA bus on the system is under 349 + * this PHB. 350 + */ 351 + isa_dn = isa_dn_orig = of_find_node_by_type(NULL, "isa"); 352 + 353 + while (isa_dn && isa_dn != dn) 354 + isa_dn = isa_dn->parent; 355 + 356 + if (isa_dn_orig) 357 + of_node_put(isa_dn_orig); 358 + 359 + /* Count number of direct PCI children of the PHB. 360 + * All PCI device nodes have class-code property, so it's 361 + * an easy way to find them. 
362 + */ 363 + for (children = 0, tmp = dn->child; tmp; tmp = tmp->sibling) 364 + if (get_property(tmp, "class-code", NULL)) 365 + children++; 366 + 367 + DBG("Children: %d\n", children); 368 + 369 + /* Calculate amount of DMA window per slot. Each window must be 370 + * a power of two (due to pci_alloc_consistent requirements). 340 371 * 341 - * The exception is on Python PHBs (pre-POWER4). Here we don't have EADS 342 - * bridges below the PHB to allocate the sectioned tables to, so instead 343 - * we allocate a 1GB table at the PHB level. 372 + * Keep 256MB aside for PHBs with ISA. 344 373 */ 345 374 346 - dn = pci_bus_to_OF_node(bus); 347 - pci = dn->data; 375 + if (!isa_dn) { 376 + /* No ISA/IDE - just set window size and return */ 377 + pci->phb->dma_window_size = 0x80000000ul; /* To be divided */ 348 378 349 - if (!bus->self) { 350 - /* Root bus */ 351 - if (is_python(dn)) { 352 - unsigned int *iohole; 379 + while (pci->phb->dma_window_size * children > 0x80000000ul) 380 + pci->phb->dma_window_size >>= 1; 381 + DBG("No ISA/IDE, window size is 0x%lx\n", 382 + pci->phb->dma_window_size); 383 + pci->phb->dma_window_base_cur = 0; 353 384 354 - DBG("Python root bus %s\n", bus->name); 355 - 356 - iohole = (unsigned int *)get_property(dn, "io-hole", 0); 357 - 358 - if (iohole) { 359 - /* On first bus we need to leave room for the 360 - * ISA address space. Just skip the first 256MB 361 - * alltogether. This leaves 768MB for the window. 362 - */ 363 - DBG("PHB has io-hole, reserving 256MB\n"); 364 - pci->phb->dma_window_size = 3 << 28; 365 - pci->phb->dma_window_base_cur = 1 << 28; 366 - } else { 367 - /* 1GB window by default */ 368 - pci->phb->dma_window_size = 1 << 30; 369 - pci->phb->dma_window_base_cur = 0; 370 - } 371 - 372 - tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL); 373 - 374 - iommu_table_setparms(pci->phb, dn, tbl); 375 - pci->iommu_table = iommu_init_table(tbl); 376 - } else { 377 - /* Do a 128MB table at root. 
This is used for the IDE 378 - * controller on some SMP-mode POWER4 machines. It 379 - * doesn't hurt to allocate it on other machines 380 - * -- it'll just be unused since new tables are 381 - * allocated on the EADS level. 382 - * 383 - * Allocate at offset 128MB to avoid having to deal 384 - * with ISA holes; 128MB table for IDE is plenty. 385 - */ 386 - pci->phb->dma_window_size = 1 << 27; 387 - pci->phb->dma_window_base_cur = 1 << 27; 388 - 389 - tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL); 390 - 391 - iommu_table_setparms(pci->phb, dn, tbl); 392 - pci->iommu_table = iommu_init_table(tbl); 393 - 394 - /* All child buses have 256MB tables */ 395 - pci->phb->dma_window_size = 1 << 28; 396 - } 397 - } else { 398 - pdn = pci_bus_to_OF_node(bus->parent); 399 - 400 - if (!bus->parent->self && !is_python(pdn)) { 401 - struct iommu_table *tbl; 402 - /* First child and not python means this is the EADS 403 - * level. Allocate new table for this slot with 256MB 404 - * window. 405 - */ 406 - 407 - tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL); 408 - 409 - iommu_table_setparms(pci->phb, dn, tbl); 410 - 411 - pci->iommu_table = iommu_init_table(tbl); 412 - } else { 413 - /* Lower than first child or under python, use parent table */ 414 - pci->iommu_table = PCI_DN(pdn)->iommu_table; 415 - } 385 + return; 416 386 } 387 + 388 + /* If we have ISA, then we probably have an IDE 389 + * controller too. Allocate a 128MB table but 390 + * skip the first 128MB to avoid stepping on ISA 391 + * space. 
392 + */ 393 + pci->phb->dma_window_size = 0x8000000ul; 394 + pci->phb->dma_window_base_cur = 0x8000000ul; 395 + 396 + tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL); 397 + 398 + iommu_table_setparms(pci->phb, dn, tbl); 399 + pci->iommu_table = iommu_init_table(tbl); 400 + 401 + /* Divide the rest (1.75GB) among the children */ 402 + pci->phb->dma_window_size = 0x80000000ul; 403 + while (pci->phb->dma_window_size * children > 0x70000000ul) 404 + pci->phb->dma_window_size >>= 1; 405 + 406 + DBG("ISA/IDE, window size is 0x%lx\n", pci->phb->dma_window_size); 407 + 417 408 } 418 409 419 410 ··· 457 462 static void iommu_dev_setup_pSeries(struct pci_dev *dev) 458 463 { 459 464 struct device_node *dn, *mydn; 465 + struct iommu_table *tbl; 460 466 461 - DBG("iommu_dev_setup_pSeries, dev %p (%s)\n", dev, dev->pretty_name); 462 - /* Now copy the iommu_table ptr from the bus device down to the 463 - * pci device_node. This means get_iommu_table() won't need to search 464 - * up the device tree to find it. 465 - */ 467 + DBG("iommu_dev_setup_pSeries, dev %p (%s)\n", dev, pci_name(dev)); 468 + 466 469 mydn = dn = pci_device_to_OF_node(dev); 470 + 471 + /* If we're the direct child of a root bus, then we need to allocate 472 + * an iommu table ourselves. The bus setup code should have setup 473 + * the window sizes already. 474 + */ 475 + if (!dev->bus->self) { 476 + DBG(" --> first child, no bridge. Allocating iommu table.\n"); 477 + tbl = kmalloc(sizeof(struct iommu_table), GFP_KERNEL); 478 + iommu_table_setparms(PCI_DN(dn)->phb, dn, tbl); 479 + PCI_DN(mydn)->iommu_table = iommu_init_table(tbl); 480 + 481 + return; 482 + } 483 + 484 + /* If this device is further down the bus tree, search upwards until 485 + * an already allocated iommu table is found and use that. 
486 + */ 467 487 468 488 while (dn && dn->data && PCI_DN(dn)->iommu_table == NULL) 469 489 dn = dn->parent; ··· 486 476 if (dn && dn->data) { 487 477 PCI_DN(mydn)->iommu_table = PCI_DN(dn)->iommu_table; 488 478 } else { 489 - DBG("iommu_dev_setup_pSeries, dev %p (%s) has no iommu table\n", dev, dev->pretty_name); 479 + DBG("iommu_dev_setup_pSeries, dev %p (%s) has no iommu table\n", dev, pci_name(dev)); 490 480 } 491 481 } 492 482 ··· 520 510 int *dma_window = NULL; 521 511 struct pci_dn *pci; 522 512 523 - DBG("iommu_dev_setup_pSeriesLP, dev %p (%s)\n", dev, dev->pretty_name); 513 + DBG("iommu_dev_setup_pSeriesLP, dev %p (%s)\n", dev, pci_name(dev)); 524 514 525 515 /* dev setup for LPAR is a little tricky, since the device tree might 526 516 * contain the dma-window properties per-device and not neccesarily ··· 542 532 * slots on POWER4 machines. 543 533 */ 544 534 if (dma_window == NULL || pdn->parent == NULL) { 545 - /* Fall back to regular (non-LPAR) dev setup */ 546 - DBG("No dma window for device, falling back to regular setup\n"); 547 - iommu_dev_setup_pSeries(dev); 535 + DBG("No dma window for device, linking to parent\n"); 536 + PCI_DN(dn)->iommu_table = PCI_DN(pdn)->iommu_table; 548 537 return; 549 538 } else { 550 539 DBG("Found DMA window, allocating table\n");
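The rewritten pSeries bus setup sizes each child's DMA window with a shift-down loop: start from the full space and halve until all children fit, which also guarantees a power-of-two window (a `pci_alloc_consistent` requirement, per the comment). That loop in isolation:

```c
#include <assert.h>

/* Largest power-of-two window such that `children` windows fit below
 * `limit` -- the per-PHB division loop in iommu_bus_setup_pSeries(). */
static unsigned long dma_window_per_child(unsigned long window,
                                          int children,
                                          unsigned long limit)
{
    /* widen before multiplying so 0x80000000 * children cannot overflow
     * a 32-bit unsigned long */
    while ((unsigned long long)window * children > limit)
        window >>= 1;
    return window;
}
```

With the patch's numbers: no ISA means dividing the full 2 GB among the children; with ISA, 128 MB is reserved for IDE and the remaining 1.75 GB (`0x70000000` limit) is divided.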
+6 -3
arch/ppc64/kernel/pci.c
··· 246 246 unsigned int flags = 0; 247 247 248 248 if (addr0 & 0x02000000) { 249 - flags |= IORESOURCE_MEM; 249 + flags = IORESOURCE_MEM | PCI_BASE_ADDRESS_SPACE_MEMORY; 250 + flags |= (addr0 >> 22) & PCI_BASE_ADDRESS_MEM_TYPE_64; 251 + flags |= (addr0 >> 28) & PCI_BASE_ADDRESS_MEM_TYPE_1M; 250 252 if (addr0 & 0x40000000) 251 - flags |= IORESOURCE_PREFETCH; 253 + flags |= IORESOURCE_PREFETCH 254 + | PCI_BASE_ADDRESS_MEM_PREFETCH; 252 255 } else if (addr0 & 0x01000000) 253 - flags |= IORESOURCE_IO; 256 + flags = IORESOURCE_IO | PCI_BASE_ADDRESS_SPACE_IO; 254 257 return flags; 255 258 } 256 259
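The pci.c hunk enriches the flag decode of an Open Firmware `assigned-addresses` phys.hi cell (`npt000ss...`: `ss` is the space code, 01 = I/O, 10 = 32-bit memory, 11 = 64-bit memory; `p` marks prefetchable). A sketch of that decode using local flag names rather than the kernel's `IORESOURCE_*`/`PCI_BASE_ADDRESS_*` constants:

```c
#include <assert.h>
#include <stdint.h>

enum { RES_IO = 1, RES_MEM = 2, RES_PREFETCH = 4, RES_MEM64 = 8 };

/* Decode the OF phys.hi cell into resource flags. */
static unsigned of_pci_flags(uint32_t addr0)
{
    unsigned flags = 0;

    if (addr0 & 0x02000000) {          /* memory space (ss = 1x) */
        flags |= RES_MEM;
        if (addr0 & 0x01000000)        /* ss == 11: 64-bit memory */
            flags |= RES_MEM64;
        if (addr0 & 0x40000000)        /* p: prefetchable */
            flags |= RES_PREFETCH;
    } else if (addr0 & 0x01000000) {   /* ss == 01: I/O space */
        flags |= RES_IO;
    }
    return flags;
}
```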
+13 -5
arch/ppc64/kernel/pmac_setup.c
··· 434 434 435 435 static int __init pmac_declare_of_platform_devices(void) 436 436 { 437 - struct device_node *np; 437 + struct device_node *np, *npp; 438 438 439 - np = find_devices("u3"); 440 - if (np) { 441 - for (np = np->child; np != NULL; np = np->sibling) 439 + npp = of_find_node_by_name(NULL, "u3"); 440 + if (npp) { 441 + for (np = NULL; (np = of_get_next_child(npp, np)) != NULL;) { 442 442 if (strncmp(np->name, "i2c", 3) == 0) { 443 - of_platform_device_create(np, "u3-i2c"); 443 + of_platform_device_create(np, "u3-i2c", NULL); 444 + of_node_put(np); 444 445 break; 445 446 } 447 + } 448 + of_node_put(npp); 449 + } 450 + npp = of_find_node_by_type(NULL, "smu"); 451 + if (npp) { 452 + of_platform_device_create(npp, "smu", NULL); 453 + of_node_put(npp); 446 454 } 447 455 448 456 return 0;
+2 -2
arch/ppc64/kernel/pmac_time.c
··· 84 84 85 85 #ifdef CONFIG_PMAC_SMU 86 86 case SYS_CTRLER_SMU: 87 - smu_get_rtc_time(tm); 87 + smu_get_rtc_time(tm, 1); 88 88 break; 89 89 #endif /* CONFIG_PMAC_SMU */ 90 90 default: ··· 128 128 129 129 #ifdef CONFIG_PMAC_SMU 130 130 case SYS_CTRLER_SMU: 131 - return smu_set_rtc_time(tm); 131 + return smu_set_rtc_time(tm, 1); 132 132 #endif /* CONFIG_PMAC_SMU */ 133 133 default: 134 134 return -ENODEV;
+2 -1
arch/ppc64/kernel/prom_init.c
··· 1711 1711 unsigned long offset = reloc_offset(); 1712 1712 unsigned long mem_start, mem_end, room; 1713 1713 struct boot_param_header *hdr; 1714 + struct prom_t *_prom = PTRRELOC(&prom); 1714 1715 char *namep; 1715 1716 u64 *rsvmap; 1716 1717 ··· 1766 1765 RELOC(dt_struct_end) = PAGE_ALIGN(mem_start); 1767 1766 1768 1767 /* Finish header */ 1768 + hdr->boot_cpuid_phys = _prom->cpu; 1769 1769 hdr->magic = OF_DT_HEADER; 1770 1770 hdr->totalsize = RELOC(dt_struct_end) - RELOC(dt_header_start); 1771 1771 hdr->off_dt_struct = RELOC(dt_struct_start) - RELOC(dt_header_start); ··· 1856 1854 1857 1855 cpu_pkg = call_prom("instance-to-package", 1, 1, prom_cpu); 1858 1856 1859 - prom_setprop(cpu_pkg, "linux,boot-cpu", NULL, 0); 1860 1857 prom_getprop(cpu_pkg, "reg", &getprop_rval, sizeof(getprop_rval)); 1861 1858 _prom->cpu = getprop_rval; 1862 1859
+1
arch/ppc64/kernel/ptrace.c
··· 219 219 220 220 case PTRACE_SET_DEBUGREG: 221 221 ret = ptrace_set_debugreg(child, addr, data); 222 + break; 222 223 223 224 case PTRACE_DETACH: 224 225 ret = ptrace_detach(child, data);
+2 -3
arch/ppc64/mm/hash_native.c
··· 342 342 hpte_t *hptep; 343 343 unsigned long hpte_v; 344 344 struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch); 345 - 346 - /* XXX fix for large ptes */ 347 - unsigned long large = 0; 345 + unsigned long large; 348 346 349 347 local_irq_save(flags); 350 348 351 349 j = 0; 352 350 for (i = 0; i < number; i++) { 353 351 va = batch->vaddr[j]; 352 + large = pte_huge(batch->pte[i]); 354 353 if (large) 355 354 vpn = va >> HPAGE_SHIFT; 356 355 else
+5 -2
arch/ppc64/mm/hugetlbpage.c
··· 710 710 hpte_group = ((~hash & htab_hash_mask) * 711 711 HPTES_PER_GROUP) & ~0x7UL; 712 712 slot = ppc_md.hpte_insert(hpte_group, va, prpn, 713 - HPTE_V_LARGE, rflags); 713 + HPTE_V_LARGE | 714 + HPTE_V_SECONDARY, 715 + rflags); 714 716 if (slot == -1) { 715 717 if (mftb() & 0x1) 716 - hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL; 718 + hpte_group = ((hash & htab_hash_mask) * 719 + HPTES_PER_GROUP)&~0x7UL; 717 720 718 721 ppc_md.hpte_remove(hpte_group); 719 722 goto repeat;
+20 -21
arch/sparc64/kernel/entry.S
··· 42 42 * executing (see inherit_locked_prom_mappings() rant). 43 43 */ 44 44 sparc64_vpte_nucleus: 45 - /* Load 0xf0000000, which is LOW_OBP_ADDRESS. */ 46 - mov 0xf, %g5 47 - sllx %g5, 28, %g5 48 - 49 - /* Is addr >= LOW_OBP_ADDRESS? */ 45 + /* Note that kvmap below has verified that the address is 46 + * in the range MODULES_VADDR --> VMALLOC_END already. So 47 + * here we need only check if it is an OBP address or not. 48 + */ 49 + sethi %hi(LOW_OBP_ADDRESS), %g5 50 50 cmp %g4, %g5 51 51 blu,pn %xcc, sparc64_vpte_patchme1 52 52 mov 0x1, %g5 53 - 54 - /* Load 0x100000000, which is HI_OBP_ADDRESS. */ 55 53 sllx %g5, 32, %g5 56 - 57 - /* Is addr < HI_OBP_ADDRESS? */ 58 54 cmp %g4, %g5 59 55 blu,pn %xcc, obp_iaddr_patch 60 56 nop ··· 152 156 * rather, use information saved during inherit_prom_mappings() using 8k 153 157 * pagesize. 154 158 */ 159 + .align 32 155 160 kvmap: 156 - /* Load 0xf0000000, which is LOW_OBP_ADDRESS. */ 157 - mov 0xf, %g5 158 - sllx %g5, 28, %g5 159 - 160 - /* Is addr >= LOW_OBP_ADDRESS? */ 161 + sethi %hi(MODULES_VADDR), %g5 161 162 cmp %g4, %g5 162 - blu,pn %xcc, vmalloc_addr 163 + blu,pn %xcc, longpath 164 + mov (VMALLOC_END >> 24), %g5 165 + sllx %g5, 24, %g5 166 + cmp %g4, %g5 167 + bgeu,pn %xcc, longpath 168 + nop 169 + 170 + kvmap_check_obp: 171 + sethi %hi(LOW_OBP_ADDRESS), %g5 172 + cmp %g4, %g5 173 + blu,pn %xcc, kvmap_vmalloc_addr 163 174 mov 0x1, %g5 164 - 165 - /* Load 0x100000000, which is HI_OBP_ADDRESS. */ 166 175 sllx %g5, 32, %g5 167 - 168 - /* Is addr < HI_OBP_ADDRESS? */ 169 176 cmp %g4, %g5 170 177 blu,pn %xcc, obp_daddr_patch 171 178 nop 172 179 173 - vmalloc_addr: 174 - /* If we get here, a vmalloc addr accessed, load kernel VPTE. */ 180 + kvmap_vmalloc_addr: 181 + /* If we get here, a vmalloc addr was accessed, load kernel VPTE. */ 175 182 ldxa [%g3 + %g6] ASI_N, %g5 176 183 brgez,pn %g5, longpath 177 184 nop
+4 -3
arch/sparc64/kernel/ptrace.c
··· 30 30 #include <asm/psrcompat.h> 31 31 #include <asm/visasm.h> 32 32 #include <asm/spitfire.h> 33 + #include <asm/page.h> 33 34 34 35 /* Returning from ptrace is a bit tricky because the syscall return 35 36 * low level code assumes any value returned which is negative and ··· 129 128 * is mapped to in the user's address space, we can skip the 130 129 * D-cache flush. 131 130 */ 132 - if ((uaddr ^ kaddr) & (1UL << 13)) { 131 + if ((uaddr ^ (unsigned long) kaddr) & (1UL << 13)) { 133 132 unsigned long start = __pa(kaddr); 134 133 unsigned long end = start + len; 135 134 136 135 if (tlb_type == spitfire) { 137 136 for (; start < end; start += 32) 138 - spitfire_put_dcache_tag(va & 0x3fe0, 0x0); 137 + spitfire_put_dcache_tag(start & 0x3fe0, 0x0); 139 138 } else { 140 139 for (; start < end; start += 32) 141 140 __asm__ __volatile__( 142 141 "stxa %%g0, [%0] %1\n\t" 143 142 "membar #Sync" 144 143 : /* no outputs */ 145 - : "r" (va), 144 + : "r" (start), 146 145 "i" (ASI_DCACHE_INVALIDATE)); 147 146 } 148 147 }
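The ptrace.c fix matters because the XOR was between a pointer and an integer: casting `kaddr` makes the alias test well-typed, and the flush loop now iterates over `start` instead of the stale `va`. The alias test itself checks whether two virtual addresses of the same data land on different cache colours in a virtually-indexed D-cache; bit 13 is the index bit above the 8 KB page offset on UltraSPARC (an assumption of this sketch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Two mappings alias into different D-cache lines when they differ in an
 * index bit above the page offset -- bit 13 here, assuming a 16 KB
 * virtually-indexed D-cache with 8 KB pages. */
static bool dcache_alias(uintptr_t uaddr, uintptr_t kaddr)
{
    return ((uaddr ^ kaddr) & ((uintptr_t)1 << 13)) != 0;
}
```

When the colours match the user and kernel views index the same lines and the flush can be skipped, which is exactly the fast path the kernel code takes.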
+1 -1
arch/sparc64/kernel/una_asm.S
··· 17 17 __do_int_store: 18 18 rd %asi, %o4 19 19 wr %o3, 0, %asi 20 - ldx [%o2], %g3 20 + mov %o2, %g3 21 21 cmp %o1, 2 22 22 be,pn %icc, 2f 23 23 cmp %o1, 4
+57 -7
arch/sparc64/kernel/unaligned.c
··· 184 184 unsigned long *saddr, int is_signed, int asi); 185 185 186 186 extern void __do_int_store(unsigned long *dst_addr, int size, 187 - unsigned long *src_val, int asi); 187 + unsigned long src_val, int asi); 188 188 189 189 static inline void do_int_store(int reg_num, int size, unsigned long *dst_addr, 190 - struct pt_regs *regs, int asi) 190 + struct pt_regs *regs, int asi, int orig_asi) 191 191 { 192 192 unsigned long zero = 0; 193 - unsigned long *src_val = &zero; 193 + unsigned long *src_val_p = &zero; 194 + unsigned long src_val; 194 195 195 196 if (size == 16) { 196 197 size = 8; ··· 199 198 (unsigned)fetch_reg(reg_num, regs) : 0)) << 32) | 200 199 (unsigned)fetch_reg(reg_num + 1, regs); 201 200 } else if (reg_num) { 202 - src_val = fetch_reg_addr(reg_num, regs); 201 + src_val_p = fetch_reg_addr(reg_num, regs); 202 + } 203 + src_val = *src_val_p; 204 + if (unlikely(asi != orig_asi)) { 205 + switch (size) { 206 + case 2: 207 + src_val = swab16(src_val); 208 + break; 209 + case 4: 210 + src_val = swab32(src_val); 211 + break; 212 + case 8: 213 + src_val = swab64(src_val); 214 + break; 215 + case 16: 216 + default: 217 + BUG(); 218 + break; 219 + }; 203 220 } 204 221 __do_int_store(dst_addr, size, src_val, asi); 205 222 } ··· 295 276 kernel_mna_trap_fault(); 296 277 } else { 297 278 unsigned long addr; 279 + int orig_asi, asi; 298 280 299 281 addr = compute_effective_address(regs, insn, 300 282 ((insn >> 25) & 0x1f)); ··· 305 285 regs->tpc, dirstrings[dir], addr, size, 306 286 regs->u_regs[UREG_RETPC]); 307 287 #endif 288 + orig_asi = asi = decode_asi(insn, regs); 289 + switch (asi) { 290 + case ASI_NL: 291 + case ASI_AIUPL: 292 + case ASI_AIUSL: 293 + case ASI_PL: 294 + case ASI_SL: 295 + case ASI_PNFL: 296 + case ASI_SNFL: 297 + asi &= ~0x08; 298 + break; 299 + }; 308 300 switch (dir) { 309 301 case load: 310 302 do_int_load(fetch_reg_addr(((insn>>25)&0x1f), regs), 311 303 size, (unsigned long *) addr, 312 - decode_signedness(insn), 313 - 
decode_asi(insn, regs)); 304 + decode_signedness(insn), asi); 305 + if (unlikely(asi != orig_asi)) { 306 + unsigned long val_in = *(unsigned long *) addr; 307 + switch (size) { 308 + case 2: 309 + val_in = swab16(val_in); 310 + break; 311 + case 4: 312 + val_in = swab32(val_in); 313 + break; 314 + case 8: 315 + val_in = swab64(val_in); 316 + break; 317 + case 16: 318 + default: 319 + BUG(); 320 + break; 321 + }; 322 + *(unsigned long *) addr = val_in; 323 + } 314 324 break; 315 325 316 326 case store: 317 327 do_int_store(((insn>>25)&0x1f), size, 318 328 (unsigned long *) addr, regs, 319 - decode_asi(insn, regs)); 329 + asi, orig_asi); 320 330 break; 321 331 322 332 default:
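A little-endian ASI (`ASI_*L`) differs from its big-endian counterpart only in bit 3, so the handler clears that bit (`asi &= ~0x08`), performs the access big-endian, and byte-swaps the value by size. A sketch of the size-dispatched swap, using the GCC builtins in place of the kernel's `swab16/32/64`:

```c
#include <assert.h>
#include <stdint.h>

/* Byte-swap a value of the given access size (2, 4, or 8 bytes); other
 * sizes are where the kernel path BUG()s. */
static uint64_t swab_by_size(uint64_t val, int size)
{
    switch (size) {
    case 2:  return __builtin_bswap16((uint16_t)val);
    case 4:  return __builtin_bswap32((uint32_t)val);
    case 8:  return __builtin_bswap64(val);
    default: return val;
    }
}
```

On a store the swap is applied to the source value before `__do_int_store()`; on a load it is applied to the value just read back, matching the two call sites in the hunk.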
+5 -1
arch/um/Makefile
··· 53 53 54 54 # -Dvmap=kernel_vmap affects everything, and prevents anything from 55 55 # referencing the libpcap.o symbol so named. 56 + # 57 + # Same things for in6addr_loopback - found in libc. 56 58 57 59 CFLAGS += $(CFLAGS-y) -D__arch_um__ -DSUBARCH=\"$(SUBARCH)\" \ 58 - $(ARCH_INCLUDE) $(MODE_INCLUDE) -Dvmap=kernel_vmap 60 + $(ARCH_INCLUDE) $(MODE_INCLUDE) -Dvmap=kernel_vmap \ 61 + -Din6addr_loopback=kernel_in6addr_loopback 62 + 59 63 AFLAGS += $(ARCH_INCLUDE) 60 64 61 65 USER_CFLAGS := $(patsubst -I%,,$(CFLAGS))
+43 -17
arch/um/drivers/chan_kern.c
··· 19 19 #include "line.h" 20 20 #include "os.h" 21 21 22 - #ifdef CONFIG_NOCONFIG_CHAN 22 + /* XXX: could well be moved to somewhere else, if needed. */ 23 + static int my_printf(const char * fmt, ...) 24 + __attribute__ ((format (printf, 1, 2))); 23 25 24 - /* The printk's here are wrong because we are complaining that there is no 25 - * output device, but printk is printing to that output device. The user will 26 - * never see the error. printf would be better, except it can't run on a 27 - * kernel stack because it will overflow it. 28 - * Use printk for now since that will avoid crashing. 29 - */ 26 + static int my_printf(const char * fmt, ...) 27 + { 28 + /* Yes, can be called on atomic context.*/ 29 + char *buf = kmalloc(4096, GFP_ATOMIC); 30 + va_list args; 31 + int r; 32 + 33 + if (!buf) { 34 + /* We print directly fmt. 35 + * Yes, yes, yes, feel free to complain. */ 36 + r = strlen(fmt); 37 + } else { 38 + va_start(args, fmt); 39 + r = vsprintf(buf, fmt, args); 40 + va_end(args); 41 + fmt = buf; 42 + } 43 + 44 + if (r) 45 + r = os_write_file(1, fmt, r); 46 + return r; 47 + 48 + } 49 + 50 + #ifdef CONFIG_NOCONFIG_CHAN 51 + /* Despite its name, there's no added trailing newline. 
*/ 52 + static int my_puts(const char * buf) 53 + { 54 + return os_write_file(1, buf, strlen(buf)); 55 + } 30 56 31 57 static void *not_configged_init(char *str, int device, struct chan_opts *opts) 32 58 { 33 - printk(KERN_ERR "Using a channel type which is configured out of " 59 + my_puts("Using a channel type which is configured out of " 34 60 "UML\n"); 35 61 return(NULL); 36 62 } ··· 64 38 static int not_configged_open(int input, int output, int primary, void *data, 65 39 char **dev_out) 66 40 { 67 - printk(KERN_ERR "Using a channel type which is configured out of " 41 + my_puts("Using a channel type which is configured out of " 68 42 "UML\n"); 69 43 return(-ENODEV); 70 44 } 71 45 72 46 static void not_configged_close(int fd, void *data) 73 47 { 74 - printk(KERN_ERR "Using a channel type which is configured out of " 48 + my_puts("Using a channel type which is configured out of " 75 49 "UML\n"); 76 50 } 77 51 78 52 static int not_configged_read(int fd, char *c_out, void *data) 79 53 { 80 - printk(KERN_ERR "Using a channel type which is configured out of " 54 + my_puts("Using a channel type which is configured out of " 81 55 "UML\n"); 82 56 return(-EIO); 83 57 } 84 58 85 59 static int not_configged_write(int fd, const char *buf, int len, void *data) 86 60 { 87 - printk(KERN_ERR "Using a channel type which is configured out of " 61 + my_puts("Using a channel type which is configured out of " 88 62 "UML\n"); 89 63 return(-EIO); 90 64 } ··· 92 66 static int not_configged_console_write(int fd, const char *buf, int len, 93 67 void *data) 94 68 { 95 - printk(KERN_ERR "Using a channel type which is configured out of " 69 + my_puts("Using a channel type which is configured out of " 96 70 "UML\n"); 97 71 return(-EIO); 98 72 } ··· 100 74 static int not_configged_window_size(int fd, void *data, unsigned short *rows, 101 75 unsigned short *cols) 102 76 { 103 - printk(KERN_ERR "Using a channel type which is configured out of " 77 + my_puts("Using a channel type which is 
configured out of " 104 78 "UML\n"); 105 79 return(-ENODEV); 106 80 } 107 81 108 82 static void not_configged_free(void *data) 109 83 { 110 - printf(KERN_ERR "Using a channel type which is configured out of " 84 + my_puts("Using a channel type which is configured out of " 111 85 "UML\n"); 112 86 } 113 87 ··· 483 457 } 484 458 } 485 459 if(ops == NULL){ 486 - printk(KERN_ERR "parse_chan couldn't parse \"%s\"\n", 460 + my_printf("parse_chan couldn't parse \"%s\"\n", 487 461 str); 488 462 return(NULL); 489 463 } ··· 491 465 data = (*ops->init)(str, device, opts); 492 466 if(data == NULL) return(NULL); 493 467 494 - chan = kmalloc(sizeof(*chan), GFP_KERNEL); 468 + chan = kmalloc(sizeof(*chan), GFP_ATOMIC); 495 469 if(chan == NULL) return(NULL); 496 470 *chan = ((struct chan) { .list = LIST_HEAD_INIT(chan->list), 497 471 .primary = 1,
+1 -1
arch/um/drivers/mconsole_user.c
··· 23 23 { "reboot", mconsole_reboot, MCONSOLE_PROC }, 24 24 { "config", mconsole_config, MCONSOLE_PROC }, 25 25 { "remove", mconsole_remove, MCONSOLE_PROC }, 26 - { "sysrq", mconsole_sysrq, MCONSOLE_INTR }, 26 + { "sysrq", mconsole_sysrq, MCONSOLE_PROC }, 27 27 { "help", mconsole_help, MCONSOLE_INTR }, 28 28 { "cad", mconsole_cad, MCONSOLE_INTR }, 29 29 { "stop", mconsole_stop, MCONSOLE_PROC },
+3 -1
arch/um/include/common-offsets.h
··· 12 12 DEFINE_STR(UM_KERN_NOTICE, KERN_NOTICE); 13 13 DEFINE_STR(UM_KERN_INFO, KERN_INFO); 14 14 DEFINE_STR(UM_KERN_DEBUG, KERN_DEBUG); 15 - DEFINE(HOST_ELF_CLASS, ELF_CLASS); 15 + DEFINE(UM_ELF_CLASS, ELF_CLASS); 16 + DEFINE(UM_ELFCLASS32, ELFCLASS32); 17 + DEFINE(UM_ELFCLASS64, ELFCLASS64);
+3 -1
arch/um/include/user.h
··· 14 14 extern void kfree(void *ptr); 15 15 extern int in_aton(char *str); 16 16 extern int open_gdb_chan(void); 17 - extern int strlcpy(char *, const char *, int); 17 + /* These actually use size_t, but unsigned long is correct on both i386 and x86_64. */ 18 + extern unsigned long strlcpy(char *, const char *, unsigned long); 19 + extern unsigned long strlcat(char *, const char *, unsigned long); 18 20 extern void *um_vmalloc(int size); 19 21 extern void vfree(void *ptr); 20 22
+2 -1
arch/um/kernel/process_kern.c
··· 82 82 unsigned long page; 83 83 int flags = GFP_KERNEL; 84 84 85 - if(atomic) flags |= GFP_ATOMIC; 85 + if (atomic) 86 + flags = GFP_ATOMIC; 86 87 page = __get_free_pages(flags, order); 87 88 if(page == 0) 88 89 return(0);
+1 -1
arch/um/kernel/sigio_user.c
··· 340 340 { 341 341 struct pollfd *p; 342 342 343 - p = um_kmalloc(sizeof(struct pollfd)); 343 + p = um_kmalloc_atomic(sizeof(struct pollfd)); 344 344 if(p == NULL){ 345 345 printk("setup_initial_poll : failed to allocate poll\n"); 346 346 return(-1);
-6
arch/um/kernel/skas/include/uaccess-skas.h
··· 18 18 ((unsigned long) (addr) + (size) <= FIXADDR_USER_END) && \ 19 19 ((unsigned long) (addr) + (size) >= (unsigned long)(addr)))) 20 20 21 - static inline int verify_area_skas(int type, const void __user * addr, 22 - unsigned long size) 23 - { 24 - return(access_ok_skas(type, addr, size) ? 0 : -EFAULT); 25 - } 26 - 27 21 extern int copy_from_user_skas(void *to, const void __user *from, int n); 28 22 extern int copy_to_user_skas(void __user *to, const void *from, int n); 29 23 extern int strncpy_from_user_skas(char *dst, const char __user *src, int count);
+6 -6
arch/um/kernel/tlb.c
··· 193 193 r = pte_read(*npte); 194 194 w = pte_write(*npte); 195 195 x = pte_exec(*npte); 196 - if(!pte_dirty(*npte)) 197 - w = 0; 198 - if(!pte_young(*npte)){ 199 - r = 0; 200 - w = 0; 201 - } 196 + if (!pte_young(*npte)) { 197 + r = 0; 198 + w = 0; 199 + } else if (!pte_dirty(*npte)) { 200 + w = 0; 201 + } 202 202 if(force || pte_newpage(*npte)){ 203 203 if(pte_present(*npte)) 204 204 ret = add_mmap(addr,
+17 -1
arch/um/kernel/trap_kern.c
··· 18 18 #include "asm/a.out.h" 19 19 #include "asm/current.h" 20 20 #include "asm/irq.h" 21 + #include "sysdep/sigcontext.h" 21 22 #include "user_util.h" 22 23 #include "kern_util.h" 23 24 #include "kern.h" ··· 40 39 int err = -EFAULT; 41 40 42 41 *code_out = SEGV_MAPERR; 42 + 43 + /* If the fault was during an atomic operation, don't take the fault, just 44 + * fail. */ 45 + if (in_atomic()) 46 + goto out_nosemaphore; 47 + 43 48 down_read(&mm->mmap_sem); 44 49 vma = find_vma(mm, address); 45 50 if(!vma) ··· 96 89 flush_tlb_page(vma, address); 97 90 out: 98 91 up_read(&mm->mmap_sem); 92 + out_nosemaphore: 99 93 return(err); 100 94 101 95 /* ··· 133 125 } 134 126 else if(current->mm == NULL) 135 127 panic("Segfault with no mm"); 136 - err = handle_page_fault(address, ip, is_write, is_user, &si.si_code); 128 + 129 + if (SEGV_IS_FIXABLE(&fi)) 130 + err = handle_page_fault(address, ip, is_write, is_user, &si.si_code); 131 + else { 132 + err = -EFAULT; 133 + /* A thread accessed NULL; we get a fault, but CR2 is invalid. 134 + * This code is used in __do_copy_from_user() of TT mode. */ 135 + address = 0; 136 + } 137 137 138 138 catcher = current->thread.fault_catcher; 139 139 if(!err)
-6
arch/um/kernel/tt/include/uaccess-tt.h
··· 33 33 (((unsigned long) (addr) <= ((unsigned long) (addr) + (size))) && \ 34 34 (under_task_size(addr, size) || is_stack(addr, size)))) 35 35 36 - static inline int verify_area_tt(int type, const void __user * addr, 37 - unsigned long size) 38 - { 39 - return(access_ok_tt(type, addr, size) ? 0 : -EFAULT); 40 - } 41 - 42 36 extern unsigned long get_fault_addr(void); 43 37 44 38 extern int __do_copy_from_user(void *to, const void *from, int n,
+2 -1
arch/um/kernel/tt/process_kern.c
··· 23 23 #include "mem_user.h" 24 24 #include "tlb.h" 25 25 #include "mode.h" 26 + #include "mode_kern.h" 26 27 #include "init.h" 27 28 #include "tt.h" 28 29 29 - int switch_to_tt(void *prev, void *next, void *last) 30 + void switch_to_tt(void *prev, void *next) 30 31 { 31 32 struct task_struct *from, *to, *prev_sched; 32 33 unsigned long flags;
+9 -2
arch/um/kernel/tt/uaccess_user.c
··· 22 22 __do_copy, &faulted); 23 23 TASK_REGS(get_current())->tt = save; 24 24 25 - if(!faulted) return(0); 26 - else return(n - (fault - (unsigned long) from)); 25 + if(!faulted) 26 + return 0; 27 + else if (fault) 28 + return n - (fault - (unsigned long) from); 29 + else 30 + /* In case of a general protection fault, we don't have the 31 + * fault address, so NULL is used instead. Pretend we didn't 32 + * copy anything. */ 33 + return n; 27 34 } 28 35 29 36 static void __do_strncpy(void *dst, const void *src, int count)
+23 -18
arch/um/kernel/umid.c
··· 31 31 /* Changed by set_umid */ 32 32 static int umid_is_random = 1; 33 33 static int umid_inited = 0; 34 + /* Have we created the files? Should we remove them? */ 35 + static int umid_owned = 0; 34 36 35 37 static int make_umid(int (*printer)(const char *fmt, ...)); 36 38 ··· 84 82 85 83 extern int tracing_pid; 86 84 87 - static int __init create_pid_file(void) 85 + static void __init create_pid_file(void) 88 86 { 89 87 char file[strlen(uml_dir) + UMID_LEN + sizeof("/pid\0")]; 90 88 char pid[sizeof("nnnnn\0")]; 91 89 int fd, n; 92 90 93 - if(umid_file_name("pid", file, sizeof(file))) return 0; 91 + if(umid_file_name("pid", file, sizeof(file))) 92 + return; 94 93 95 94 fd = os_open_file(file, of_create(of_excl(of_rdwr(OPENFLAGS()))), 96 95 0644); 97 96 if(fd < 0){ 98 97 printf("Open of machine pid file \"%s\" failed: %s\n", 99 98 file, strerror(-fd)); 100 - return 0; 99 + return; 101 100 } 102 101 103 102 sprintf(pid, "%d\n", os_getpid()); ··· 106 103 if(n != strlen(pid)) 107 104 printf("Write of pid file failed - err = %d\n", -n); 108 105 os_close_file(fd); 109 - return 0; 110 106 } 111 107 112 108 static int actually_do_remove(char *dir) ··· 149 147 void remove_umid_dir(void) 150 148 { 151 149 char dir[strlen(uml_dir) + UMID_LEN + 1]; 152 - if(!umid_inited) return; 150 + if (!umid_owned) 151 + return; 153 152 154 153 sprintf(dir, "%s%s", uml_dir, umid); 155 154 actually_do_remove(dir); ··· 158 155 159 156 char *get_umid(int only_if_set) 160 157 { 161 - if(only_if_set && umid_is_random) return(NULL); 162 - return(umid); 158 + if(only_if_set && umid_is_random) 159 + return NULL; 160 + return umid; 163 161 } 164 162 165 - int not_dead_yet(char *dir) 163 + static int not_dead_yet(char *dir) 166 164 { 167 165 char file[strlen(uml_dir) + UMID_LEN + sizeof("/pid\0")]; 168 166 char pid[sizeof("nnnnn\0")], *end; ··· 197 193 (p == CHOOSE_MODE(tracing_pid, os_getpid()))) 198 194 dead = 1; 199 195 } 200 - if(!dead) return(1); 196 + if(!dead) 197 + return(1); 201 198 
return(actually_do_remove(dir)); 202 199 } 203 200 ··· 237 232 strlcpy(dir, home, sizeof(dir)); 238 233 uml_dir++; 239 234 } 235 + strlcat(dir, uml_dir, sizeof(dir)); 240 236 len = strlen(dir); 241 - strncat(dir, uml_dir, sizeof(dir) - len); 242 - len = strlen(dir); 243 - if((len > 0) && (len < sizeof(dir) - 1) && (dir[len - 1] != '/')){ 244 - dir[len] = '/'; 245 - dir[len + 1] = '\0'; 246 - } 237 + if (len > 0 && dir[len - 1] != '/') 238 + strlcat(dir, "/", sizeof(dir)); 247 239 248 240 uml_dir = malloc(strlen(dir) + 1); 249 - if(uml_dir == NULL){ 241 + if (uml_dir == NULL) { 250 242 printf("make_uml_dir : malloc failed, errno = %d\n", errno); 251 243 exit(1); 252 244 } ··· 288 286 if(errno == EEXIST){ 289 287 if(not_dead_yet(tmp)){ 290 288 (*printer)("umid '%s' is in use\n", umid); 289 + umid_owned = 0; 291 290 return(-1); 292 291 } 293 292 err = mkdir(tmp, 0777); ··· 299 296 return(-1); 300 297 } 301 298 302 - return(0); 299 + umid_owned = 1; 300 + return 0; 303 301 } 304 302 305 303 __uml_setup("uml_dir=", set_uml_dir, ··· 313 309 /* one function with the ordering we need ... */ 314 310 make_uml_dir(); 315 311 make_umid(printf); 316 - return create_pid_file(); 312 + create_pid_file(); 313 + return 0; 317 314 } 318 315 __uml_postsetup(make_umid_setup); 319 316
+6
arch/um/kernel/user_util.c
··· 128 128 struct utsname host; 129 129 130 130 uname(&host); 131 + #if defined(UML_CONFIG_UML_X86) && !defined(UML_CONFIG_64BIT) 132 + if (!strcmp(host.machine, "x86_64")) { 133 + strcpy(machine_out, "i686"); 134 + return; 135 + } 136 + #endif 131 137 strcpy(machine_out, host.machine); 132 138 } 133 139
+1
arch/um/os-Linux/aio.c
··· 144 144 "errno = %d\n", errno); 145 145 } 146 146 else { 147 + /* This is safe, as we have just a pointer here. */ 147 148 aio = (struct aio_context *) (long) event.data; 148 149 if(update_aio(aio, event.res)){ 149 150 do_aio(ctx, aio);
+2 -1
arch/um/os-Linux/elf_aux.c
··· 14 14 #include "mem_user.h" 15 15 #include <kernel-offsets.h> 16 16 17 - #if HOST_ELF_CLASS == ELFCLASS32 17 + /* Use the one from the kernel - the host may lack it if it has old headers. */ 18 + #if UM_ELF_CLASS == UM_ELFCLASS32 18 19 typedef Elf32_auxv_t elf_auxv_t; 19 20 #else 20 21 typedef Elf64_auxv_t elf_auxv_t;
+1
arch/um/os-Linux/process.c
··· 3 3 * Licensed under the GPL 4 4 */ 5 5 6 + #include <unistd.h> 6 7 #include <stdio.h> 7 8 #include <errno.h> 8 9 #include <signal.h>
+1
arch/um/sys-i386/ldt.c
··· 83 83 goto out; 84 84 } 85 85 p = buf; 86 + break; 86 87 default: 87 88 res = -ENOSYS; 88 89 goto out;
+1 -1
arch/x86_64/Kconfig
··· 308 308 present. The HPET provides a stable time base on SMP 309 309 systems, unlike the TSC, but it is more expensive to access, 310 310 as it is off-chip. You can find the HPET spec at 311 - <http://www.intel.com/labs/platcomp/hpet/hpetspec.htm>. 311 + <http://www.intel.com/hardwaredesign/hpetspec.htm>. 312 312 313 313 config X86_PM_TIMER 314 314 bool "PM timer"
+2 -2
arch/xtensa/kernel/pci.c
··· 402 402 __pci_mmap_set_flags(dev, vma, mmap_state); 403 403 __pci_mmap_set_pgprot(dev, vma, mmap_state, write_combine); 404 404 405 - ret = io_remap_page_range(vma, vma->vm_start, vma->vm_pgoff<<PAGE_SHIFT, 406 - vma->vm_end - vma->vm_start, vma->vm_page_prot); 405 + ret = io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff, 406 + vma->vm_end - vma->vm_start, vma->vm_page_prot); 407 407 408 408 return ret; 409 409 }
+1 -1
arch/xtensa/kernel/platform.c
··· 39 39 _F(int, get_rtc_time, (time_t* t), { return 0; }); 40 40 _F(int, set_rtc_time, (time_t t), { return 0; }); 41 41 42 - #if CONFIG_XTENSA_CALIBRATE_CCOUNT 42 + #ifdef CONFIG_XTENSA_CALIBRATE_CCOUNT 43 43 _F(void, calibrate_ccount, (void), 44 44 { 45 45 printk ("ERROR: Cannot calibrate cpu frequency! Assuming 100MHz.\n");
+1 -1
arch/xtensa/kernel/process.c
··· 457 457 dump_task_fpu(struct pt_regs *regs, struct task_struct *task, elf_fpregset_t *r) 458 458 { 459 459 /* see asm/coprocessor.h for this magic number 16 */ 460 - #if TOTAL_CPEXTRA_SIZE > 16 460 + #if XTENSA_CP_EXTRA_SIZE > 16 461 461 do_save_fpregs (r, regs, task); 462 462 463 463 /* For now, bit 16 means some extra state may be present: */
+1 -1
arch/xtensa/kernel/setup.c
··· 304 304 # endif 305 305 #endif 306 306 307 - #if CONFIG_PCI 307 + #ifdef CONFIG_PCI 308 308 platform_pcibios_init(); 309 309 #endif 310 310 }
+1 -1
arch/xtensa/kernel/signal.c
··· 182 182 183 183 struct task_struct *tsk = current; 184 184 release_all_cp(tsk); 185 - return __copy_from_user(tsk->thread.cpextra, buf, TOTAL_CPEXTRA_SIZE); 185 + return __copy_from_user(tsk->thread.cpextra, buf, XTENSA_CP_EXTRA_SIZE); 186 186 #endif 187 187 return 0; 188 188 }
+1 -1
arch/xtensa/kernel/time.c
··· 68 68 * speed for the CALIBRATE. 69 69 */ 70 70 71 - #if CONFIG_XTENSA_CALIBRATE_CCOUNT 71 + #ifdef CONFIG_XTENSA_CALIBRATE_CCOUNT 72 72 printk("Calibrating CPU frequency "); 73 73 platform_calibrate_ccount(); 74 74 printk("%d.%02d MHz\n", (int)ccount_per_jiffy/(1000000/HZ),
+1 -1
arch/xtensa/mm/init.c
··· 239 239 high_memory = (void *) __va(max_mapnr << PAGE_SHIFT); 240 240 highmemsize = 0; 241 241 242 - #if CONFIG_HIGHMEM 242 + #ifdef CONFIG_HIGHMEM 243 243 #error HIGHGMEM not implemented in init.c 244 244 #endif 245 245
+2 -1
drivers/acorn/char/pcf8583.c
··· 23 23 24 24 static unsigned short ignore[] = { I2C_CLIENT_END }; 25 25 static unsigned short normal_addr[] = { 0x50, I2C_CLIENT_END }; 26 + static unsigned short *forces[] = { NULL }; 26 27 27 28 static struct i2c_client_address_data addr_data = { 28 29 .normal_i2c = normal_addr, 29 30 .probe = ignore, 30 31 .ignore = ignore, 31 - .force = ignore, 32 + .forces = forces, 32 33 }; 33 34 34 35 #define DAT(x) ((unsigned int)(x->dev.driver_data))
+13
drivers/base/class.c
··· 669 669 int class_device_rename(struct class_device *class_dev, char *new_name) 670 670 { 671 671 int error = 0; 672 + char *old_class_name = NULL, *new_class_name = NULL; 672 673 673 674 class_dev = class_device_get(class_dev); 674 675 if (!class_dev) ··· 678 677 pr_debug("CLASS: renaming '%s' to '%s'\n", class_dev->class_id, 679 678 new_name); 680 679 680 + if (class_dev->dev) 681 + old_class_name = make_class_name(class_dev); 682 + 681 683 strlcpy(class_dev->class_id, new_name, KOBJ_NAME_LEN); 682 684 683 685 error = kobject_rename(&class_dev->kobj, new_name); 684 686 687 + if (class_dev->dev) { 688 + new_class_name = make_class_name(class_dev); 689 + sysfs_create_link(&class_dev->dev->kobj, &class_dev->kobj, 690 + new_class_name); 691 + sysfs_remove_link(&class_dev->dev->kobj, old_class_name); 692 + } 685 693 class_device_put(class_dev); 694 + 695 + kfree(old_class_name); 696 + kfree(new_class_name); 686 697 687 698 return error; 688 699 }
+3
drivers/base/dd.c
··· 40 40 */ 41 41 void device_bind_driver(struct device * dev) 42 42 { 43 + if (klist_node_attached(&dev->knode_driver)) 44 + return; 45 + 43 46 pr_debug("bound device '%s' to driver '%s'\n", 44 47 dev->bus_id, dev->driver->name); 45 48 klist_add_tail(&dev->knode_driver, &dev->driver->klist_devices);
+1 -4
drivers/block/cciss.c
··· 483 483 printk(KERN_DEBUG "cciss_open %s\n", inode->i_bdev->bd_disk->disk_name); 484 484 #endif /* CCISS_DEBUG */ 485 485 486 - if (host->busy_initializing) 487 - return -EBUSY; 488 - 489 486 if (host->busy_initializing || drv->busy_configuring) 490 487 return -EBUSY; 491 488 /* ··· 2988 2991 hba[i]->access.set_intr_mask(hba[i], CCISS_INTR_ON); 2989 2992 2990 2993 cciss_procinit(i); 2994 + hba[i]->busy_initializing = 0; 2991 2995 2992 2996 for(j=0; j < NWD; j++) { /* mfm */ 2993 2997 drive_info_struct *drv = &(hba[i]->drv[j]); ··· 3031 3033 add_disk(disk); 3032 3034 } 3033 3035 3034 - hba[i]->busy_initializing = 0; 3035 3036 return(1); 3036 3037 3037 3038 clean4:
-38
drivers/block/ll_rw_blk.c
··· 2373 2373 2374 2374 EXPORT_SYMBOL(blkdev_issue_flush); 2375 2375 2376 - /** 2377 - * blkdev_scsi_issue_flush_fn - issue flush for SCSI devices 2378 - * @q: device queue 2379 - * @disk: gendisk 2380 - * @error_sector: error offset 2381 - * 2382 - * Description: 2383 - * Devices understanding the SCSI command set, can use this function as 2384 - * a helper for issuing a cache flush. Note: driver is required to store 2385 - * the error offset (in case of error flushing) in ->sector of struct 2386 - * request. 2387 - */ 2388 - int blkdev_scsi_issue_flush_fn(request_queue_t *q, struct gendisk *disk, 2389 - sector_t *error_sector) 2390 - { 2391 - struct request *rq = blk_get_request(q, WRITE, __GFP_WAIT); 2392 - int ret; 2393 - 2394 - rq->flags |= REQ_BLOCK_PC | REQ_SOFTBARRIER; 2395 - rq->sector = 0; 2396 - memset(rq->cmd, 0, sizeof(rq->cmd)); 2397 - rq->cmd[0] = 0x35; 2398 - rq->cmd_len = 12; 2399 - rq->data = NULL; 2400 - rq->data_len = 0; 2401 - rq->timeout = 60 * HZ; 2402 - 2403 - ret = blk_execute_rq(q, disk, rq, 0); 2404 - 2405 - if (ret && error_sector) 2406 - *error_sector = rq->sector; 2407 - 2408 - blk_put_request(rq); 2409 - return ret; 2410 - } 2411 - 2412 - EXPORT_SYMBOL(blkdev_scsi_issue_flush_fn); 2413 - 2414 2376 static void drive_stat_acct(struct request *rq, int nr_sectors, int new_io) 2415 2377 { 2416 2378 int rw = rq_data_dir(rq);
+29 -26
drivers/block/ub.c
··· 172 172 */ 173 173 struct ub_dev; 174 174 175 - #define UB_MAX_REQ_SG 4 175 + #define UB_MAX_REQ_SG 9 /* cdrecord requires 32KB and maybe a header */ 176 176 #define UB_MAX_SECTORS 64 177 177 178 178 /* ··· 387 387 struct bulk_cs_wrap work_bcs; 388 388 struct usb_ctrlrequest work_cr; 389 389 390 - int sg_stat[UB_MAX_REQ_SG+1]; 390 + int sg_stat[6]; 391 391 struct ub_scsi_trace tr; 392 392 }; 393 393 ··· 525 525 "qlen %d qmax %d\n", 526 526 sc->cmd_queue.qlen, sc->cmd_queue.qmax); 527 527 cnt += sprintf(page + cnt, 528 - "sg %d %d %d %d %d\n", 528 + "sg %d %d %d %d %d .. %d\n", 529 529 sc->sg_stat[0], 530 530 sc->sg_stat[1], 531 531 sc->sg_stat[2], 532 532 sc->sg_stat[3], 533 - sc->sg_stat[4]); 533 + sc->sg_stat[4], 534 + sc->sg_stat[5]); 534 535 535 536 list_for_each (p, &sc->luns) { 536 537 lun = list_entry(p, struct ub_lun, link); ··· 836 835 return -1; 837 836 } 838 837 cmd->nsg = n_elem; 839 - sc->sg_stat[n_elem]++; 838 + sc->sg_stat[n_elem < 5 ? n_elem : 5]++; 840 839 841 840 /* 842 841 * build the command ··· 892 891 return -1; 893 892 } 894 893 cmd->nsg = n_elem; 895 - sc->sg_stat[n_elem]++; 894 + sc->sg_stat[n_elem < 5 ? n_elem : 5]++; 896 895 897 896 memcpy(&cmd->cdb, rq->cmd, rq->cmd_len); 898 897 cmd->cdb_len = rq->cmd_len; ··· 1011 1010 sc->last_pipe = sc->send_bulk_pipe; 1012 1011 usb_fill_bulk_urb(&sc->work_urb, sc->dev, sc->send_bulk_pipe, 1013 1012 bcb, US_BULK_CB_WRAP_LEN, ub_urb_complete, sc); 1014 - sc->work_urb.transfer_flags = 0; 1015 1013 1016 1014 /* Fill what we shouldn't be filling, because usb-storage did so. 
*/ 1017 1015 sc->work_urb.actual_length = 0; ··· 1019 1019 1020 1020 if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) { 1021 1021 /* XXX Clear stalls */ 1022 - printk("ub: cmd #%d start failed (%d)\n", cmd->tag, rc); /* P3 */ 1023 1022 ub_complete(&sc->work_done); 1024 1023 return rc; 1025 1024 } ··· 1189 1190 return; 1190 1191 } 1191 1192 if (urb->status != 0) { 1192 - printk("ub: cmd #%d cmd status (%d)\n", cmd->tag, urb->status); /* P3 */ 1193 1193 goto Bad_End; 1194 1194 } 1195 1195 if (urb->actual_length != US_BULK_CB_WRAP_LEN) { 1196 - printk("ub: cmd #%d xferred %d\n", cmd->tag, urb->actual_length); /* P3 */ 1197 1196 /* XXX Must do reset here to unconfuse the device */ 1198 1197 goto Bad_End; 1199 1198 } ··· 1392 1395 usb_fill_bulk_urb(&sc->work_urb, sc->dev, pipe, 1393 1396 page_address(sg->page) + sg->offset, sg->length, 1394 1397 ub_urb_complete, sc); 1395 - sc->work_urb.transfer_flags = 0; 1396 1398 sc->work_urb.actual_length = 0; 1397 1399 sc->work_urb.error_count = 0; 1398 1400 sc->work_urb.status = 0; 1399 1401 1400 1402 if ((rc = usb_submit_urb(&sc->work_urb, GFP_ATOMIC)) != 0) { 1401 1403 /* XXX Clear stalls */ 1402 - printk("ub: data #%d submit failed (%d)\n", cmd->tag, rc); /* P3 */ 1403 1404 ub_complete(&sc->work_done); 1404 1405 ub_state_done(sc, cmd, rc); 1405 1406 return; ··· 1437 1442 sc->last_pipe = sc->recv_bulk_pipe; 1438 1443 usb_fill_bulk_urb(&sc->work_urb, sc->dev, sc->recv_bulk_pipe, 1439 1444 &sc->work_bcs, US_BULK_CS_WRAP_LEN, ub_urb_complete, sc); 1440 - sc->work_urb.transfer_flags = 0; 1441 1445 sc->work_urb.actual_length = 0; 1442 1446 sc->work_urb.error_count = 0; 1443 1447 sc->work_urb.status = 0; ··· 1557 1563 1558 1564 usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe, 1559 1565 (unsigned char*) cr, NULL, 0, ub_urb_complete, sc); 1560 - sc->work_urb.transfer_flags = 0; 1561 1566 sc->work_urb.actual_length = 0; 1562 1567 sc->work_urb.error_count = 0; 1563 1568 sc->work_urb.status = 0; ··· 1993 2000 1994 
2001 usb_fill_control_urb(&sc->work_urb, sc->dev, sc->recv_ctrl_pipe, 1995 2002 (unsigned char*) cr, p, 1, ub_probe_urb_complete, &compl); 1996 - sc->work_urb.transfer_flags = 0; 1997 2003 sc->work_urb.actual_length = 0; 1998 2004 sc->work_urb.error_count = 0; 1999 2005 sc->work_urb.status = 0; 2000 2006 2001 2007 if ((rc = usb_submit_urb(&sc->work_urb, GFP_KERNEL)) != 0) { 2002 2008 if (rc == -EPIPE) { 2003 - printk("%s: Stall at GetMaxLUN, using 1 LUN\n", 2009 + printk("%s: Stall submitting GetMaxLUN, using 1 LUN\n", 2004 2010 sc->name); /* P3 */ 2005 2011 } else { 2006 - printk(KERN_WARNING 2012 + printk(KERN_NOTICE 2007 2013 "%s: Unable to submit GetMaxLUN (%d)\n", 2008 2014 sc->name, rc); 2009 2015 } ··· 2019 2027 2020 2028 del_timer_sync(&timer); 2021 2029 usb_kill_urb(&sc->work_urb); 2030 + 2031 + if ((rc = sc->work_urb.status) < 0) { 2032 + if (rc == -EPIPE) { 2033 + printk("%s: Stall at GetMaxLUN, using 1 LUN\n", 2034 + sc->name); /* P3 */ 2035 + } else { 2036 + printk(KERN_NOTICE 2037 + "%s: Error at GetMaxLUN (%d)\n", 2038 + sc->name, rc); 2039 + } 2040 + goto err_io; 2041 + } 2022 2042 2023 2043 if (sc->work_urb.actual_length != 1) { 2024 2044 printk("%s: GetMaxLUN returned %d bytes\n", sc->name, ··· 2052 2048 kfree(p); 2053 2049 return nluns; 2054 2050 2051 + err_io: 2055 2052 err_submit: 2056 2053 kfree(p); 2057 2054 err_alloc: ··· 2085 2080 2086 2081 usb_fill_control_urb(&sc->work_urb, sc->dev, sc->send_ctrl_pipe, 2087 2082 (unsigned char*) cr, NULL, 0, ub_probe_urb_complete, &compl); 2088 - sc->work_urb.transfer_flags = 0; 2089 2083 sc->work_urb.actual_length = 0; 2090 2084 sc->work_urb.error_count = 0; 2091 2085 sc->work_urb.status = 0; ··· 2217 2213 * This is needed to clear toggles. It is a problem only if we do 2218 2214 * `rmmod ub && modprobe ub` without disconnects, but we like that. 
2219 2215 */ 2216 + #if 0 /* iPod Mini fails if we do this (big white iPod works) */ 2220 2217 ub_probe_clear_stall(sc, sc->recv_bulk_pipe); 2221 2218 ub_probe_clear_stall(sc, sc->send_bulk_pipe); 2219 + #endif 2222 2220 2223 2221 /* 2224 2222 * The way this is used by the startup code is a little specific. ··· 2247 2241 for (i = 0; i < 3; i++) { 2248 2242 if ((rc = ub_sync_getmaxlun(sc)) < 0) { 2249 2243 /* 2250 - * Some devices (i.e. Iomega Zip100) need this -- 2251 - * apparently the bulk pipes get STALLed when the 2252 - * GetMaxLUN request is processed. 2253 - * XXX I have a ZIP-100, verify it does this. 2244 + * This segment is taken from usb-storage. They say 2245 + * that ZIP-100 needs this, but my own ZIP-100 works 2246 + * fine without this. 2247 + * Still, it does not seem to hurt anything. 2254 2248 */ 2255 2249 if (rc == -EPIPE) { 2256 2250 ub_probe_clear_stall(sc, sc->recv_bulk_pipe); ··· 2319 2313 disk->first_minor = lun->id * UB_MINORS_PER_MAJOR; 2320 2314 disk->fops = &ub_bd_fops; 2321 2315 disk->private_data = lun; 2322 - disk->driverfs_dev = &sc->intf->dev; /* XXX Many to one ok? */ 2316 + disk->driverfs_dev = &sc->intf->dev; 2323 2317 2324 2318 rc = -ENOMEM; 2325 2319 if ((q = blk_init_queue(ub_request_fn, &sc->lock)) == NULL) ··· 2471 2465 static int __init ub_init(void) 2472 2466 { 2473 2467 int rc; 2474 - 2475 - /* P3 */ printk("ub: sizeof ub_scsi_cmd %zu ub_dev %zu ub_lun %zu\n", 2476 - sizeof(struct ub_scsi_cmd), sizeof(struct ub_dev), sizeof(struct ub_lun)); 2477 2468 2478 2469 if ((rc = register_blkdev(UB_MAJOR, DRV_NAME)) != 0) 2479 2470 goto err_regblkdev;
-1
drivers/char/hpet.c
··· 273 273 274 274 vma->vm_flags |= VM_IO; 275 275 vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 276 - addr = __pa(addr); 277 276 278 277 if (io_remap_pfn_range(vma, vma->vm_start, addr >> PAGE_SHIFT, 279 278 PAGE_SIZE, vma->vm_page_prot)) {
+3 -3
drivers/char/ipmi/ipmi_msghandler.c
··· 2620 2620 spin_lock_irqsave(&(intf->waiting_msgs_lock), flags); 2621 2621 if (!list_empty(&(intf->waiting_msgs))) { 2622 2622 list_add_tail(&(msg->link), &(intf->waiting_msgs)); 2623 - spin_unlock(&(intf->waiting_msgs_lock)); 2623 + spin_unlock_irqrestore(&(intf->waiting_msgs_lock), flags); 2624 2624 goto out_unlock; 2625 2625 } 2626 2626 spin_unlock_irqrestore(&(intf->waiting_msgs_lock), flags); ··· 2629 2629 if (rv > 0) { 2630 2630 /* Could not handle the message now, just add it to a 2631 2631 list to handle later. */ 2632 - spin_lock(&(intf->waiting_msgs_lock)); 2632 + spin_lock_irqsave(&(intf->waiting_msgs_lock), flags); 2633 2633 list_add_tail(&(msg->link), &(intf->waiting_msgs)); 2634 - spin_unlock(&(intf->waiting_msgs_lock)); 2634 + spin_unlock_irqrestore(&(intf->waiting_msgs_lock), flags); 2635 2635 } else if (rv == 0) { 2636 2636 ipmi_free_smi_msg(msg); 2637 2637 }
+4 -5
drivers/hwmon/Kconfig
··· 418 418 help 419 419 This driver provides support for the IBM Hard Drive Active Protection 420 420 System (hdaps), which provides an accelerometer and other misc. data. 421 - Supported laptops include the IBM ThinkPad T41, T42, T43, and R51. 422 - The accelerometer data is readable via sysfs. 421 + ThinkPads starting with the R50, T41, and X40 are supported. The 422 + accelerometer data is readable via sysfs. 423 423 424 - This driver also provides an input class device, allowing the 425 - laptop to act as a pinball machine-esque mouse. This is off by 426 - default but enabled via sysfs or the module parameter "mousedev". 424 + This driver also provides an absolute input class device, allowing 425 + the laptop to act as a pinball machine-esque joystick. 427 426 428 427 Say Y here if you have an applicable laptop and want to experience 429 428 the awesome power of hdaps.
+9 -12
drivers/hwmon/hdaps.c
··· 4 4 * Copyright (C) 2005 Robert Love <rml@novell.com> 5 5 * Copyright (C) 2005 Jesper Juhl <jesper.juhl@gmail.com> 6 6 * 7 - * The HardDisk Active Protection System (hdaps) is present in the IBM ThinkPad 8 - * T41, T42, T43, R50, R50p, R51, and X40, at least. It provides a basic 9 - * two-axis accelerometer and other data, such as the device's temperature. 7 + * The HardDisk Active Protection System (hdaps) is present in IBM ThinkPads 8 + * starting with the R40, T41, and X40. It provides a basic two-axis 9 + * accelerometer and other data, such as the device's temperature. 10 10 * 11 11 * This driver is based on the document by Mark A. Smith available at 12 12 * http://www.almaden.ibm.com/cs/people/marksmith/tpaps.html and a lot of trial ··· 487 487 488 488 /* Module stuff */ 489 489 490 - /* 491 - * XXX: We should be able to return nonzero and halt the detection process. 492 - * But there is a bug in dmi_check_system() where a nonzero return from the 493 - * first match will result in a return of failure from dmi_check_system(). 494 - * I fixed this; the patch is 2.6-git. Once in a released tree, we can make 495 - * hdaps_dmi_match_invert() return hdaps_dmi_match(), which in turn returns 1. 496 - */ 490 + /* hdaps_dmi_match - found a match. return one, short-circuiting the hunt. */ 497 491 static int hdaps_dmi_match(struct dmi_system_id *id) 498 492 { 499 493 printk(KERN_INFO "hdaps: %s detected.\n", id->ident); 500 - return 0; 494 + return 1; 501 495 } 502 496 497 + /* hdaps_dmi_match_invert - found an inverted match. 
*/ 503 498 static int hdaps_dmi_match_invert(struct dmi_system_id *id) 504 499 { 505 500 hdaps_invert = 1; 506 501 printk(KERN_INFO "hdaps: inverting axis readings.\n"); 507 - return 0; 502 + return hdaps_dmi_match(id); 508 503 } 509 504 510 505 #define HDAPS_DMI_MATCH_NORMAL(model) { \ ··· 529 534 HDAPS_DMI_MATCH_INVERT("ThinkPad R50p"), 530 535 HDAPS_DMI_MATCH_NORMAL("ThinkPad R50"), 531 536 HDAPS_DMI_MATCH_NORMAL("ThinkPad R51"), 537 + HDAPS_DMI_MATCH_NORMAL("ThinkPad R52"), 532 538 HDAPS_DMI_MATCH_INVERT("ThinkPad T41p"), 533 539 HDAPS_DMI_MATCH_NORMAL("ThinkPad T41"), 534 540 HDAPS_DMI_MATCH_INVERT("ThinkPad T42p"), ··· 537 541 HDAPS_DMI_MATCH_NORMAL("ThinkPad T43"), 538 542 HDAPS_DMI_MATCH_NORMAL("ThinkPad X40"), 539 543 HDAPS_DMI_MATCH_NORMAL("ThinkPad X41 Tablet"), 544 + HDAPS_DMI_MATCH_NORMAL("ThinkPad X41"), 540 545 { .ident = NULL } 541 546 }; 542 547
+12
drivers/i2c/busses/Kconfig
··· 245 245 This support is also available as a module. If so, the module 246 246 will be called i2c-keywest. 247 247 248 + config I2C_PMAC_SMU 249 + tristate "Powermac SMU I2C interface" 250 + depends on I2C && PMAC_SMU 251 + help 252 + This supports the use of the I2C interface in the SMU 253 + chip on recent Apple machines like the iMac G5. It is used 254 + among others by the thermal control driver for those machines. 255 + Say Y if you have such a machine. 256 + 257 + This support is also available as a module. If so, the module 258 + will be called i2c-pmac-smu. 259 + 248 260 config I2C_MPC 249 261 tristate "MPC107/824x/85xx/52xx" 250 262 depends on I2C && PPC32
+1
drivers/i2c/busses/Makefile
··· 20 20 obj-$(CONFIG_I2C_IXP2000) += i2c-ixp2000.o 21 21 obj-$(CONFIG_I2C_IXP4XX) += i2c-ixp4xx.o 22 22 obj-$(CONFIG_I2C_KEYWEST) += i2c-keywest.o 23 + obj-$(CONFIG_I2C_PMAC_SMU) += i2c-pmac-smu.o 23 24 obj-$(CONFIG_I2C_MPC) += i2c-mpc.o 24 25 obj-$(CONFIG_I2C_MV64XXX) += i2c-mv64xxx.o 25 26 obj-$(CONFIG_I2C_NFORCE2) += i2c-nforce2.o
+316
drivers/i2c/busses/i2c-pmac-smu.c
··· 1 + /* 2 + i2c Support for Apple SMU Controller 3 + 4 + Copyright (c) 2005 Benjamin Herrenschmidt, IBM Corp. 5 + <benh@kernel.crashing.org> 6 + 7 + This program is free software; you can redistribute it and/or modify 8 + it under the terms of the GNU General Public License as published by 9 + the Free Software Foundation; either version 2 of the License, or 10 + (at your option) any later version. 11 + 12 + This program is distributed in the hope that it will be useful, 13 + but WITHOUT ANY WARRANTY; without even the implied warranty of 14 + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 + GNU General Public License for more details. 16 + 17 + You should have received a copy of the GNU General Public License 18 + along with this program; if not, write to the Free Software 19 + Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 + 21 + */ 22 + 23 + #include <linux/config.h> 24 + #include <linux/module.h> 25 + #include <linux/kernel.h> 26 + #include <linux/types.h> 27 + #include <linux/i2c.h> 28 + #include <linux/init.h> 29 + #include <linux/completion.h> 30 + #include <linux/device.h> 31 + #include <asm/prom.h> 32 + #include <asm/of_device.h> 33 + #include <asm/smu.h> 34 + 35 + static int probe; 36 + 37 + MODULE_AUTHOR("Benjamin Herrenschmidt <benh@kernel.crashing.org>"); 38 + MODULE_DESCRIPTION("I2C driver for Apple's SMU"); 39 + MODULE_LICENSE("GPL"); 40 + module_param(probe, bool, 0); 41 + 42 + 43 + /* Physical interface */ 44 + struct smu_iface 45 + { 46 + struct i2c_adapter adapter; 47 + struct completion complete; 48 + u32 busid; 49 + }; 50 + 51 + static void smu_i2c_done(struct smu_i2c_cmd *cmd, void *misc) 52 + { 53 + struct smu_iface *iface = misc; 54 + complete(&iface->complete); 55 + } 56 + 57 + /* 58 + * SMBUS-type transfer entrypoint 59 + */ 60 + static s32 smu_smbus_xfer( struct i2c_adapter* adap, 61 + u16 addr, 62 + unsigned short flags, 63 + char read_write, 64 + u8 command, 65 + int size, 66 + union i2c_smbus_data* 
data) 67 + {
68 + struct smu_iface *iface = i2c_get_adapdata(adap);
69 + struct smu_i2c_cmd cmd;
70 + int rc = 0;
71 + int read = (read_write == I2C_SMBUS_READ);
72 +
73 + cmd.info.bus = iface->busid;
74 + cmd.info.devaddr = (addr << 1) | (read ? 0x01 : 0x00);
75 +
76 + /* Prepare data & select mode */
77 + switch (size) {
78 + case I2C_SMBUS_QUICK:
79 + cmd.info.type = SMU_I2C_TRANSFER_SIMPLE;
80 + cmd.info.datalen = 0;
81 + break;
82 + case I2C_SMBUS_BYTE:
83 + cmd.info.type = SMU_I2C_TRANSFER_SIMPLE;
84 + cmd.info.datalen = 1;
85 + if (!read)
86 + cmd.info.data[0] = data->byte;
87 + break;
88 + case I2C_SMBUS_BYTE_DATA:
89 + cmd.info.type = SMU_I2C_TRANSFER_STDSUB;
90 + cmd.info.datalen = 1;
91 + cmd.info.sublen = 1;
92 + cmd.info.subaddr[0] = command;
93 + cmd.info.subaddr[1] = 0;
94 + cmd.info.subaddr[2] = 0;
95 + if (!read)
96 + cmd.info.data[0] = data->byte;
97 + break;
98 + case I2C_SMBUS_WORD_DATA:
99 + cmd.info.type = SMU_I2C_TRANSFER_STDSUB;
100 + cmd.info.datalen = 2;
101 + cmd.info.sublen = 1;
102 + cmd.info.subaddr[0] = command;
103 + cmd.info.subaddr[1] = 0;
104 + cmd.info.subaddr[2] = 0;
105 + if (!read) {
106 + cmd.info.data[0] = data->word & 0xff;
107 + cmd.info.data[1] = (data->word >> 8) & 0xff;
108 + }
109 + break;
110 + /* Note that these are broken vs. the expected smbus API where
111 + * on reads, the length is actually returned from the function,
112 + * but I think the current API makes no sense and I don't want
113 + * any driver that I haven't verified for correctness to go
114 + * anywhere near a pmac i2c bus anyway ...
115 + */
116 + case I2C_SMBUS_BLOCK_DATA:
117 + cmd.info.type = SMU_I2C_TRANSFER_STDSUB;
118 + cmd.info.datalen = data->block[0] + 1;
119 + if (cmd.info.datalen > 6)
120 + return -EINVAL;
121 + if (!read)
122 + memcpy(cmd.info.data, data->block, cmd.info.datalen);
123 + cmd.info.sublen = 1;
124 + cmd.info.subaddr[0] = command;
125 + cmd.info.subaddr[1] = 0;
126 + cmd.info.subaddr[2] = 0;
127 + break;
128 + case I2C_SMBUS_I2C_BLOCK_DATA:
129 + cmd.info.type = SMU_I2C_TRANSFER_STDSUB;
130 + cmd.info.datalen = data->block[0];
131 + if (cmd.info.datalen > 7)
132 + return -EINVAL;
133 + if (!read)
134 + memcpy(cmd.info.data, &data->block[1],
135 + cmd.info.datalen);
136 + cmd.info.sublen = 1;
137 + cmd.info.subaddr[0] = command;
138 + cmd.info.subaddr[1] = 0;
139 + cmd.info.subaddr[2] = 0;
140 + break;
141 +
142 + default:
143 + return -EINVAL;
144 + }
145 +
146 + /* Turn a standardsub read into a combined mode access */
147 + if (read_write == I2C_SMBUS_READ &&
148 + cmd.info.type == SMU_I2C_TRANSFER_STDSUB)
149 + cmd.info.type = SMU_I2C_TRANSFER_COMBINED;
150 +
151 + /* Finish filling command and submit it */
152 + cmd.done = smu_i2c_done;
153 + cmd.misc = iface;
154 + rc = smu_queue_i2c(&cmd);
155 + if (rc < 0)
156 + return rc;
157 + wait_for_completion(&iface->complete);
158 + rc = cmd.status;
159 +
160 + if (!read || rc < 0)
161 + return rc;
162 +
163 + switch (size) {
164 + case I2C_SMBUS_BYTE:
165 + case I2C_SMBUS_BYTE_DATA:
166 + data->byte = cmd.info.data[0];
167 + break;
168 + case I2C_SMBUS_WORD_DATA:
169 + data->word = ((u16)cmd.info.data[1]) << 8;
170 + data->word |= cmd.info.data[0];
171 + break;
172 + /* Note that these are broken vs. the expected smbus API where
173 + * on reads, the length is actually returned from the function,
174 + * but I think the current API makes no sense and I don't want
175 + * any driver that I haven't verified for correctness to go
176 + * anywhere near a pmac i2c bus anyway ...
177 + */
178 + case I2C_SMBUS_BLOCK_DATA:
179 + case I2C_SMBUS_I2C_BLOCK_DATA:
180 + memcpy(&data->block[0], cmd.info.data, cmd.info.datalen);
181 + break;
182 + }
183 +
184 + return rc;
185 + }
186 +
187 + static u32
188 + smu_smbus_func(struct i2c_adapter * adapter)
189 + {
190 + return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
191 + I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
192 + I2C_FUNC_SMBUS_BLOCK_DATA;
193 + }
194 +
195 + /* For now, we only handle combined mode (smbus) */
196 + static struct i2c_algorithm smu_algorithm = {
197 + .smbus_xfer = smu_smbus_xfer,
198 + .functionality = smu_smbus_func,
199 + };
200 +
201 + static int create_iface(struct device_node *np, struct device *dev)
202 + {
203 + struct smu_iface* iface;
204 + u32 *reg, busid;
205 + int rc;
206 +
207 + reg = (u32 *)get_property(np, "reg", NULL);
208 + if (reg == NULL) {
209 + printk(KERN_ERR "i2c-pmac-smu: can't find bus number !\n");
210 + return -ENXIO;
211 + }
212 + busid = *reg;
213 +
214 + iface = kmalloc(sizeof(struct smu_iface), GFP_KERNEL);
215 + if (iface == NULL) {
216 + printk(KERN_ERR "i2c-pmac-smu: can't allocate interface !\n");
217 + return -ENOMEM;
218 + }
219 + memset(iface, 0, sizeof(struct smu_iface));
220 + init_completion(&iface->complete);
221 + iface->busid = busid;
222 +
223 + dev_set_drvdata(dev, iface);
224 +
225 + sprintf(iface->adapter.name, "smu-i2c-%02x", busid);
226 + iface->adapter.algo = &smu_algorithm;
227 + iface->adapter.algo_data = NULL;
228 + iface->adapter.client_register = NULL;
229 + iface->adapter.client_unregister = NULL;
230 + i2c_set_adapdata(&iface->adapter, iface);
231 + iface->adapter.dev.parent = dev;
232 +
233 + rc = i2c_add_adapter(&iface->adapter);
234 + if (rc) {
235 + printk(KERN_ERR "i2c-pmac-smu.c: Adapter %s registration "
236 + "failed\n", iface->adapter.name);
237 + i2c_set_adapdata(&iface->adapter, NULL);
238 + }
239 +
240 + if (probe) {
241 + unsigned char addr;
242 + printk("Probe: ");
243 + for (addr = 0x00; addr
<= 0x7f; addr++) { 244 + if (i2c_smbus_xfer(&iface->adapter,addr, 245 + 0,0,0,I2C_SMBUS_QUICK,NULL) >= 0) 246 + printk("%02x ", addr); 247 + } 248 + printk("\n"); 249 + } 250 + 251 + printk(KERN_INFO "SMU i2c bus %x registered\n", busid); 252 + 253 + return 0; 254 + } 255 + 256 + static int dispose_iface(struct device *dev) 257 + { 258 + struct smu_iface *iface = dev_get_drvdata(dev); 259 + int rc; 260 + 261 + rc = i2c_del_adapter(&iface->adapter); 262 + i2c_set_adapdata(&iface->adapter, NULL); 263 + /* We aren't that prepared to deal with this... */ 264 + if (rc) 265 + printk("i2c-pmac-smu.c: Failed to remove bus %s !\n", 266 + iface->adapter.name); 267 + dev_set_drvdata(dev, NULL); 268 + kfree(iface); 269 + 270 + return 0; 271 + } 272 + 273 + 274 + static int create_iface_of_platform(struct of_device* dev, 275 + const struct of_device_id *match) 276 + { 277 + return create_iface(dev->node, &dev->dev); 278 + } 279 + 280 + 281 + static int dispose_iface_of_platform(struct of_device* dev) 282 + { 283 + return dispose_iface(&dev->dev); 284 + } 285 + 286 + 287 + static struct of_device_id i2c_smu_match[] = 288 + { 289 + { 290 + .compatible = "smu-i2c", 291 + }, 292 + {}, 293 + }; 294 + static struct of_platform_driver i2c_smu_of_platform_driver = 295 + { 296 + .name = "i2c-smu", 297 + .match_table = i2c_smu_match, 298 + .probe = create_iface_of_platform, 299 + .remove = dispose_iface_of_platform 300 + }; 301 + 302 + 303 + static int __init i2c_pmac_smu_init(void) 304 + { 305 + of_register_driver(&i2c_smu_of_platform_driver); 306 + return 0; 307 + } 308 + 309 + 310 + static void __exit i2c_pmac_smu_cleanup(void) 311 + { 312 + of_unregister_driver(&i2c_smu_of_platform_driver); 313 + } 314 + 315 + module_init(i2c_pmac_smu_init); 316 + module_exit(i2c_pmac_smu_cleanup);
+8 -11
drivers/infiniband/core/mad_rmpp.c
··· 412 412 413 413 hdr_size = data_offset(rmpp_mad->mad_hdr.mgmt_class); 414 414 data_size = sizeof(struct ib_rmpp_mad) - hdr_size; 415 - pad = data_size - be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin); 416 - if (pad > data_size || pad < 0) 415 + pad = IB_MGMT_RMPP_DATA - be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin); 416 + if (pad > IB_MGMT_RMPP_DATA || pad < 0) 417 417 pad = 0; 418 418 419 419 return hdr_size + rmpp_recv->seg_num * data_size - pad; ··· 583 583 { 584 584 struct ib_rmpp_mad *rmpp_mad; 585 585 int timeout; 586 + u32 paylen; 586 587 587 588 rmpp_mad = (struct ib_rmpp_mad *)mad_send_wr->send_wr.wr.ud.mad_hdr; 588 589 ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE); ··· 591 590 592 591 if (mad_send_wr->seg_num == 1) { 593 592 rmpp_mad->rmpp_hdr.rmpp_rtime_flags |= IB_MGMT_RMPP_FLAG_FIRST; 594 - rmpp_mad->rmpp_hdr.paylen_newwin = 595 - cpu_to_be32(mad_send_wr->total_seg * 596 - (sizeof(struct ib_rmpp_mad) - 597 - offsetof(struct ib_rmpp_mad, data)) - 598 - mad_send_wr->pad); 593 + paylen = mad_send_wr->total_seg * IB_MGMT_RMPP_DATA - 594 + mad_send_wr->pad; 595 + rmpp_mad->rmpp_hdr.paylen_newwin = cpu_to_be32(paylen); 599 596 mad_send_wr->sg_list[0].length = sizeof(struct ib_rmpp_mad); 600 597 } else { 601 598 mad_send_wr->send_wr.num_sge = 2; ··· 607 608 608 609 if (mad_send_wr->seg_num == mad_send_wr->total_seg) { 609 610 rmpp_mad->rmpp_hdr.rmpp_rtime_flags |= IB_MGMT_RMPP_FLAG_LAST; 610 - rmpp_mad->rmpp_hdr.paylen_newwin = 611 - cpu_to_be32(sizeof(struct ib_rmpp_mad) - 612 - offsetof(struct ib_rmpp_mad, data) - 613 - mad_send_wr->pad); 611 + paylen = IB_MGMT_RMPP_DATA - mad_send_wr->pad; 612 + rmpp_mad->rmpp_hdr.paylen_newwin = cpu_to_be32(paylen); 614 613 } 615 614 616 615 /* 2 seconds for an ACK until we can find the packet lifetime */
+3 -2
drivers/infiniband/core/user_mad.c
··· 334 334 ret = -EINVAL; 335 335 goto err_ah; 336 336 } 337 - /* Validate that management class can support RMPP */ 337 + 338 + /* Validate that the management class can support RMPP */ 338 339 if (rmpp_mad->mad_hdr.mgmt_class == IB_MGMT_CLASS_SUBN_ADM) { 339 340 hdr_len = offsetof(struct ib_sa_mad, data); 340 - data_len = length; 341 + data_len = length - hdr_len; 341 342 } else if ((rmpp_mad->mad_hdr.mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) && 342 343 (rmpp_mad->mad_hdr.mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END)) { 343 344 hdr_len = offsetof(struct ib_vendor_mad, data);
+5 -11
drivers/infiniband/hw/mthca/mthca_eq.c
··· 476 476 int i; 477 477 u8 status; 478 478 479 - /* Make sure EQ size is aligned to a power of 2 size. */ 480 - for (i = 1; i < nent; i <<= 1) 481 - ; /* nothing */ 482 - nent = i; 483 - 484 - eq->dev = dev; 479 + eq->dev = dev; 480 + eq->nent = roundup_pow_of_two(max(nent, 2)); 485 481 486 482 eq->page_list = kmalloc(npages * sizeof *eq->page_list, 487 483 GFP_KERNEL); ··· 508 512 memset(eq->page_list[i].buf, 0, PAGE_SIZE); 509 513 } 510 514 511 - for (i = 0; i < nent; ++i) 515 + for (i = 0; i < eq->nent; ++i) 512 516 set_eqe_hw(get_eqe(eq, i)); 513 517 514 518 eq->eqn = mthca_alloc(&dev->eq_table.alloc); ··· 524 528 if (err) 525 529 goto err_out_free_eq; 526 530 527 - eq->nent = nent; 528 - 529 531 memset(eq_context, 0, sizeof *eq_context); 530 532 eq_context->flags = cpu_to_be32(MTHCA_EQ_STATUS_OK | 531 533 MTHCA_EQ_OWNER_HW | ··· 532 538 if (mthca_is_memfree(dev)) 533 539 eq_context->flags |= cpu_to_be32(MTHCA_EQ_STATE_ARBEL); 534 540 535 - eq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24); 541 + eq_context->logsize_usrpage = cpu_to_be32((ffs(eq->nent) - 1) << 24); 536 542 if (mthca_is_memfree(dev)) { 537 543 eq_context->arbel_pd = cpu_to_be32(dev->driver_pd.pd_num); 538 544 } else { ··· 563 569 dev->eq_table.arm_mask |= eq->eqn_mask; 564 570 565 571 mthca_dbg(dev, "Allocated EQ %d with %d entries\n", 566 - eq->eqn, nent); 572 + eq->eqn, eq->nent); 567 573 568 574 return err; 569 575
+24 -27
drivers/infiniband/hw/mthca/mthca_qp.c
··· 227 227 wq->last_comp = wq->max - 1; 228 228 wq->head = 0; 229 229 wq->tail = 0; 230 - wq->last = NULL; 231 230 } 232 231 233 232 void mthca_qp_event(struct mthca_dev *dev, u32 qpn, ··· 686 687 } 687 688 688 689 if (attr_mask & IB_QP_TIMEOUT) { 689 - qp_context->pri_path.ackto = attr->timeout; 690 + qp_context->pri_path.ackto = attr->timeout << 3; 690 691 qp_param->opt_param_mask |= cpu_to_be32(MTHCA_QP_OPTPAR_ACK_TIMEOUT); 691 692 } 692 693 ··· 1101 1102 qp->send_wqe_offset); 1102 1103 } 1103 1104 } 1105 + 1106 + qp->sq.last = get_send_wqe(qp, qp->sq.max - 1); 1107 + qp->rq.last = get_recv_wqe(qp, qp->rq.max - 1); 1104 1108 1105 1109 return 0; 1106 1110 } ··· 1585 1583 goto out; 1586 1584 } 1587 1585 1588 - if (prev_wqe) { 1589 - ((struct mthca_next_seg *) prev_wqe)->nda_op = 1590 - cpu_to_be32(((ind << qp->sq.wqe_shift) + 1591 - qp->send_wqe_offset) | 1592 - mthca_opcode[wr->opcode]); 1593 - wmb(); 1594 - ((struct mthca_next_seg *) prev_wqe)->ee_nds = 1595 - cpu_to_be32((size0 ? 0 : MTHCA_NEXT_DBD) | size); 1596 - } 1586 + ((struct mthca_next_seg *) prev_wqe)->nda_op = 1587 + cpu_to_be32(((ind << qp->sq.wqe_shift) + 1588 + qp->send_wqe_offset) | 1589 + mthca_opcode[wr->opcode]); 1590 + wmb(); 1591 + ((struct mthca_next_seg *) prev_wqe)->ee_nds = 1592 + cpu_to_be32((size0 ? 
0 : MTHCA_NEXT_DBD) | size); 1597 1593 1598 1594 if (!size0) { 1599 1595 size0 = size; ··· 1688 1688 1689 1689 qp->wrid[ind] = wr->wr_id; 1690 1690 1691 - if (likely(prev_wqe)) { 1692 - ((struct mthca_next_seg *) prev_wqe)->nda_op = 1693 - cpu_to_be32((ind << qp->rq.wqe_shift) | 1); 1694 - wmb(); 1695 - ((struct mthca_next_seg *) prev_wqe)->ee_nds = 1696 - cpu_to_be32(MTHCA_NEXT_DBD | size); 1697 - } 1691 + ((struct mthca_next_seg *) prev_wqe)->nda_op = 1692 + cpu_to_be32((ind << qp->rq.wqe_shift) | 1); 1693 + wmb(); 1694 + ((struct mthca_next_seg *) prev_wqe)->ee_nds = 1695 + cpu_to_be32(MTHCA_NEXT_DBD | size); 1698 1696 1699 1697 if (!size0) 1700 1698 size0 = size; ··· 1903 1905 goto out; 1904 1906 } 1905 1907 1906 - if (likely(prev_wqe)) { 1907 - ((struct mthca_next_seg *) prev_wqe)->nda_op = 1908 - cpu_to_be32(((ind << qp->sq.wqe_shift) + 1909 - qp->send_wqe_offset) | 1910 - mthca_opcode[wr->opcode]); 1911 - wmb(); 1912 - ((struct mthca_next_seg *) prev_wqe)->ee_nds = 1913 - cpu_to_be32(MTHCA_NEXT_DBD | size); 1914 - } 1908 + ((struct mthca_next_seg *) prev_wqe)->nda_op = 1909 + cpu_to_be32(((ind << qp->sq.wqe_shift) + 1910 + qp->send_wqe_offset) | 1911 + mthca_opcode[wr->opcode]); 1912 + wmb(); 1913 + ((struct mthca_next_seg *) prev_wqe)->ee_nds = 1914 + cpu_to_be32(MTHCA_NEXT_DBD | size); 1915 1915 1916 1916 if (!size0) { 1917 1917 size0 = size; ··· 2123 2127 for (i = 0; i < 2; ++i) 2124 2128 mthca_CONF_SPECIAL_QP(dev, i, 0, &status); 2125 2129 2130 + mthca_array_cleanup(&dev->qp_table.qp, dev->limits.num_qps); 2126 2131 mthca_alloc_cleanup(&dev->qp_table.alloc); 2127 2132 }
+11 -14
drivers/infiniband/hw/mthca/mthca_srq.c
··· 172 172 scatter->lkey = cpu_to_be32(MTHCA_INVAL_LKEY); 173 173 } 174 174 175 + srq->last = get_wqe(srq, srq->max - 1); 176 + 175 177 return 0; 176 178 } 177 179 ··· 191 189 192 190 srq->max = attr->max_wr; 193 191 srq->max_gs = attr->max_sge; 194 - srq->last = NULL; 195 192 srq->counter = 0; 196 193 197 194 if (mthca_is_memfree(dev)) ··· 410 409 mthca_err(dev, "SRQ %06x full\n", srq->srqn); 411 410 err = -ENOMEM; 412 411 *bad_wr = wr; 413 - return nreq; 412 + break; 414 413 } 415 414 416 415 wqe = get_wqe(srq, ind); ··· 428 427 err = -EINVAL; 429 428 *bad_wr = wr; 430 429 srq->last = prev_wqe; 431 - return nreq; 430 + break; 432 431 } 433 432 434 433 for (i = 0; i < wr->num_sge; ++i) { ··· 447 446 ((struct mthca_data_seg *) wqe)->addr = 0; 448 447 } 449 448 450 - if (likely(prev_wqe)) { 451 - ((struct mthca_next_seg *) prev_wqe)->nda_op = 452 - cpu_to_be32((ind << srq->wqe_shift) | 1); 453 - wmb(); 454 - ((struct mthca_next_seg *) prev_wqe)->ee_nds = 455 - cpu_to_be32(MTHCA_NEXT_DBD); 456 - } 449 + ((struct mthca_next_seg *) prev_wqe)->nda_op = 450 + cpu_to_be32((ind << srq->wqe_shift) | 1); 451 + wmb(); 452 + ((struct mthca_next_seg *) prev_wqe)->ee_nds = 453 + cpu_to_be32(MTHCA_NEXT_DBD); 457 454 458 455 srq->wrid[ind] = wr->wr_id; 459 456 srq->first_free = next_ind; 460 457 } 461 - 462 - return nreq; 463 458 464 459 if (likely(nreq)) { 465 460 __be32 doorbell[2]; ··· 500 503 mthca_err(dev, "SRQ %06x full\n", srq->srqn); 501 504 err = -ENOMEM; 502 505 *bad_wr = wr; 503 - return nreq; 506 + break; 504 507 } 505 508 506 509 wqe = get_wqe(srq, ind); ··· 516 519 if (unlikely(wr->num_sge > srq->max_gs)) { 517 520 err = -EINVAL; 518 521 *bad_wr = wr; 519 - return nreq; 522 + break; 520 523 } 521 524 522 525 for (i = 0; i < wr->num_sge; ++i) {
+1 -1
drivers/infiniband/ulp/ipoib/ipoib.h
··· 257 257 258 258 void ipoib_mcast_restart_task(void *dev_ptr); 259 259 int ipoib_mcast_start_thread(struct net_device *dev); 260 - int ipoib_mcast_stop_thread(struct net_device *dev); 260 + int ipoib_mcast_stop_thread(struct net_device *dev, int flush); 261 261 262 262 void ipoib_mcast_dev_down(struct net_device *dev); 263 263 void ipoib_mcast_dev_flush(struct net_device *dev);
+2 -2
drivers/infiniband/ulp/ipoib/ipoib_ib.c
··· 432 432 flush_workqueue(ipoib_workqueue); 433 433 } 434 434 435 - ipoib_mcast_stop_thread(dev); 435 + ipoib_mcast_stop_thread(dev, 1); 436 436 437 437 /* 438 438 * Flush the multicast groups first so we stop any multicast joins. The ··· 599 599 600 600 ipoib_dbg(priv, "cleaning up ib_dev\n"); 601 601 602 - ipoib_mcast_stop_thread(dev); 602 + ipoib_mcast_stop_thread(dev, 1); 603 603 604 604 /* Delete the broadcast address and the local address */ 605 605 ipoib_mcast_dev_down(dev);
+2
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 1005 1005 1006 1006 register_failed: 1007 1007 ib_unregister_event_handler(&priv->event_handler); 1008 + flush_scheduled_work(); 1008 1009 1009 1010 event_failed: 1010 1011 ipoib_dev_cleanup(priv->dev); ··· 1058 1057 1059 1058 list_for_each_entry_safe(priv, tmp, dev_list, list) { 1060 1059 ib_unregister_event_handler(&priv->event_handler); 1060 + flush_scheduled_work(); 1061 1061 1062 1062 unregister_netdev(priv->dev); 1063 1063 ipoib_dev_cleanup(priv->dev);
+7 -6
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
··· 145 145 146 146 mcast->dev = dev; 147 147 mcast->created = jiffies; 148 - mcast->backoff = HZ; 148 + mcast->backoff = 1; 149 149 mcast->logcount = 0; 150 150 151 151 INIT_LIST_HEAD(&mcast->list); ··· 396 396 IPOIB_GID_ARG(mcast->mcmember.mgid), status); 397 397 398 398 if (!status && !ipoib_mcast_join_finish(mcast, mcmember)) { 399 - mcast->backoff = HZ; 399 + mcast->backoff = 1; 400 400 down(&mcast_mutex); 401 401 if (test_bit(IPOIB_MCAST_RUN, &priv->flags)) 402 402 queue_work(ipoib_workqueue, &priv->mcast_task); ··· 496 496 if (test_bit(IPOIB_MCAST_RUN, &priv->flags)) 497 497 queue_delayed_work(ipoib_workqueue, 498 498 &priv->mcast_task, 499 - mcast->backoff); 499 + mcast->backoff * HZ); 500 500 up(&mcast_mutex); 501 501 } else 502 502 mcast->query_id = ret; ··· 598 598 return 0; 599 599 } 600 600 601 - int ipoib_mcast_stop_thread(struct net_device *dev) 601 + int ipoib_mcast_stop_thread(struct net_device *dev, int flush) 602 602 { 603 603 struct ipoib_dev_priv *priv = netdev_priv(dev); 604 604 struct ipoib_mcast *mcast; ··· 610 610 cancel_delayed_work(&priv->mcast_task); 611 611 up(&mcast_mutex); 612 612 613 - flush_workqueue(ipoib_workqueue); 613 + if (flush) 614 + flush_workqueue(ipoib_workqueue); 614 615 615 616 if (priv->broadcast && priv->broadcast->query) { 616 617 ib_sa_cancel_query(priv->broadcast->query_id, priv->broadcast->query); ··· 833 832 834 833 ipoib_dbg_mcast(priv, "restarting multicast task\n"); 835 834 836 - ipoib_mcast_stop_thread(dev); 835 + ipoib_mcast_stop_thread(dev, 0); 837 836 838 837 spin_lock_irqsave(&priv->lock, flags); 839 838
+1
drivers/input/input.c
··· 308 308 MATCH_BIT(ledbit, LED_MAX); 309 309 MATCH_BIT(sndbit, SND_MAX); 310 310 MATCH_BIT(ffbit, FF_MAX); 311 + MATCH_BIT(swbit, SW_MAX); 311 312 312 313 return id; 313 314 }
-2
drivers/isdn/hisax/st5481_b.c
··· 209 209 bcs->mode = mode; 210 210 211 211 // Cancel all USB transfers on this B channel 212 - b_out->urb[0]->transfer_flags |= URB_ASYNC_UNLINK; 213 212 usb_unlink_urb(b_out->urb[0]); 214 - b_out->urb[1]->transfer_flags |= URB_ASYNC_UNLINK; 215 213 usb_unlink_urb(b_out->urb[1]); 216 214 b_out->busy = 0; 217 215
-2
drivers/isdn/hisax/st5481_usb.c
··· 645 645 646 646 in->mode = mode; 647 647 648 - in->urb[0]->transfer_flags |= URB_ASYNC_UNLINK; 649 648 usb_unlink_urb(in->urb[0]); 650 - in->urb[1]->transfer_flags |= URB_ASYNC_UNLINK; 651 649 usb_unlink_urb(in->urb[1]); 652 650 653 651 if (in->mode != L1_MODE_NULL) {
+897 -145
drivers/macintosh/smu.c
··· 8 8 */
9 9
10 10 /*
11 - * For now, this driver includes:
12 - * - RTC get & set
13 - * - reboot & shutdown commands
14 - * all synchronous with IRQ disabled (ugh)
15 - *
16 11 * TODO:
17 - * rework in a way the PMU driver works, that is asynchronous
18 - * with a queue of commands. I'll do that as soon as I have an
19 - * SMU based machine at hand. Some more cleanup is needed too,
20 - * like maybe fitting it into a platform device, etc...
21 - * Also check what's up with cache coherency, and if we really
22 - * can't do better than flushing the cache, maybe build a table
23 - * of command len/reply len like the PMU driver to only flush
24 - * what is actually necessary.
25 - * --BenH.
12 + * - maybe add timeout to commands ?
13 + * - blocking version of time functions
14 + * - polling version of i2c commands (including timer that works with
15 + * interrupts off)
16 + * - maybe avoid some data copies with i2c by directly using the smu cmd
17 + * buffer and a lower level internal interface
18 + * - understand SMU -> CPU events and implement reception of them via
19 + * the userland interface
26 20 */
27 21
28 22 #include <linux/config.h>
··· 30 36 #include <linux/jiffies.h>
31 37 #include <linux/interrupt.h>
32 38 #include <linux/rtc.h>
39 + #include <linux/completion.h>
40 + #include <linux/miscdevice.h>
41 + #include <linux/delay.h>
42 + #include <linux/sysdev.h>
43 + #include <linux/poll.h>
33 44
34 45 #include <asm/byteorder.h>
35 46 #include <asm/io.h>
··· 44 45 #include <asm/smu.h>
45 46 #include <asm/sections.h>
46 47 #include <asm/abs_addr.h>
48 + #include <asm/uaccess.h>
49 + #include <asm/of_device.h>
47 50
48 - #define DEBUG_SMU 1
51 + #define VERSION "0.6"
52 + #define AUTHOR "(c) 2005 Benjamin Herrenschmidt, IBM Corp."
53 +
54 + #undef DEBUG_SMU
49 55
50 56 #ifdef DEBUG_SMU
51 57 #define DPRINTK(fmt, args...)
do { printk(KERN_DEBUG fmt , ##args); } while (0) ··· 61 57 /* 62 58 * This is the command buffer passed to the SMU hardware 63 59 */ 60 + #define SMU_MAX_DATA 254 61 + 64 62 struct smu_cmd_buf { 65 63 u8 cmd; 66 64 u8 length; 67 - u8 data[0x0FFE]; 65 + u8 data[SMU_MAX_DATA]; 68 66 }; 69 67 70 68 struct smu_device { 71 69 spinlock_t lock; 72 70 struct device_node *of_node; 73 - int db_ack; /* doorbell ack GPIO */ 74 - int db_req; /* doorbell req GPIO */ 71 + struct of_device *of_dev; 72 + int doorbell; /* doorbell gpio */ 75 73 u32 __iomem *db_buf; /* doorbell buffer */ 74 + int db_irq; 75 + int msg; 76 + int msg_irq; 76 77 struct smu_cmd_buf *cmd_buf; /* command buffer virtual */ 77 78 u32 cmd_buf_abs; /* command buffer absolute */ 79 + struct list_head cmd_list; 80 + struct smu_cmd *cmd_cur; /* pending command */ 81 + struct list_head cmd_i2c_list; 82 + struct smu_i2c_cmd *cmd_i2c_cur; /* pending i2c command */ 83 + struct timer_list i2c_timer; 78 84 }; 79 85 80 86 /* ··· 93 79 */ 94 80 static struct smu_device *smu; 95 81 82 + 96 83 /* 97 - * SMU low level communication stuff 84 + * SMU driver low level stuff 98 85 */ 99 - static inline int smu_cmd_stat(struct smu_cmd_buf *cmd_buf, u8 cmd_ack) 100 - { 101 - rmb(); 102 - return cmd_buf->cmd == cmd_ack && cmd_buf->length != 0; 103 - } 104 86 105 - static inline u8 smu_save_ack_cmd(struct smu_cmd_buf *cmd_buf) 87 + static void smu_start_cmd(void) 106 88 { 107 - return (~cmd_buf->cmd) & 0xff; 108 - } 89 + unsigned long faddr, fend; 90 + struct smu_cmd *cmd; 109 91 110 - static void smu_send_cmd(struct smu_device *dev) 111 - { 112 - /* SMU command buf is currently cacheable, we need a physical 113 - * address. 
This isn't exactly a DMA mapping here, I suspect 92 + if (list_empty(&smu->cmd_list)) 93 + return; 94 + 95 + /* Fetch first command in queue */ 96 + cmd = list_entry(smu->cmd_list.next, struct smu_cmd, link); 97 + smu->cmd_cur = cmd; 98 + list_del(&cmd->link); 99 + 100 + DPRINTK("SMU: starting cmd %x, %d bytes data\n", cmd->cmd, 101 + cmd->data_len); 102 + DPRINTK("SMU: data buffer: %02x %02x %02x %02x ...\n", 103 + ((u8 *)cmd->data_buf)[0], ((u8 *)cmd->data_buf)[1], 104 + ((u8 *)cmd->data_buf)[2], ((u8 *)cmd->data_buf)[3]); 105 + 106 + /* Fill the SMU command buffer */ 107 + smu->cmd_buf->cmd = cmd->cmd; 108 + smu->cmd_buf->length = cmd->data_len; 109 + memcpy(smu->cmd_buf->data, cmd->data_buf, cmd->data_len); 110 + 111 + /* Flush command and data to RAM */ 112 + faddr = (unsigned long)smu->cmd_buf; 113 + fend = faddr + smu->cmd_buf->length + 2; 114 + flush_inval_dcache_range(faddr, fend); 115 + 116 + /* This isn't exactly a DMA mapping here, I suspect 114 117 * the SMU is actually communicating with us via i2c to the 115 118 * northbridge or the CPU to access RAM. 
116 119 */ 117 - writel(dev->cmd_buf_abs, dev->db_buf); 120 + writel(smu->cmd_buf_abs, smu->db_buf); 118 121 119 122 /* Ring the SMU doorbell */ 120 - pmac_do_feature_call(PMAC_FTR_WRITE_GPIO, NULL, dev->db_req, 4); 121 - pmac_do_feature_call(PMAC_FTR_READ_GPIO, NULL, dev->db_req, 4); 123 + pmac_do_feature_call(PMAC_FTR_WRITE_GPIO, NULL, smu->doorbell, 4); 122 124 } 123 125 124 - static int smu_cmd_done(struct smu_device *dev) 126 + 127 + static irqreturn_t smu_db_intr(int irq, void *arg, struct pt_regs *regs) 125 128 { 126 - unsigned long wait = 0; 127 - int gpio; 129 + unsigned long flags; 130 + struct smu_cmd *cmd; 131 + void (*done)(struct smu_cmd *cmd, void *misc) = NULL; 132 + void *misc = NULL; 133 + u8 gpio; 134 + int rc = 0; 128 135 129 - /* Check the SMU doorbell */ 130 - do { 131 - gpio = pmac_do_feature_call(PMAC_FTR_READ_GPIO, 132 - NULL, dev->db_ack); 133 - if ((gpio & 7) == 7) 134 - return 0; 135 - udelay(100); 136 - } while(++wait < 10000); 136 + /* SMU completed the command, well, we hope, let's make sure 137 + * of it 138 + */ 139 + spin_lock_irqsave(&smu->lock, flags); 137 140 138 - printk(KERN_ERR "SMU timeout !\n"); 139 - return -ENXIO; 141 + gpio = pmac_do_feature_call(PMAC_FTR_READ_GPIO, NULL, smu->doorbell); 142 + if ((gpio & 7) != 7) 143 + return IRQ_HANDLED; 144 + 145 + cmd = smu->cmd_cur; 146 + smu->cmd_cur = NULL; 147 + if (cmd == NULL) 148 + goto bail; 149 + 150 + if (rc == 0) { 151 + unsigned long faddr; 152 + int reply_len; 153 + u8 ack; 154 + 155 + /* CPU might have brought back the cache line, so we need 156 + * to flush again before peeking at the SMU response. 
We 157 + * flush the entire buffer for now as we haven't read the
158 + * reply length (it's only 2 cache lines anyway)
159 + */
160 + faddr = (unsigned long)smu->cmd_buf;
161 + flush_inval_dcache_range(faddr, faddr + 256);
162 +
163 + /* Now check ack */
164 + ack = (~cmd->cmd) & 0xff;
165 + if (ack != smu->cmd_buf->cmd) {
166 + DPRINTK("SMU: incorrect ack, want %x got %x\n",
167 + ack, smu->cmd_buf->cmd);
168 + rc = -EIO;
169 + }
170 + reply_len = rc == 0 ? smu->cmd_buf->length : 0;
171 + DPRINTK("SMU: reply len: %d\n", reply_len);
172 + if (reply_len > cmd->reply_len) {
173 + printk(KERN_WARNING "SMU: reply buffer too small, "
174 + "got %d bytes for a %d bytes buffer\n",
175 + reply_len, cmd->reply_len);
176 + reply_len = cmd->reply_len;
177 + }
178 + cmd->reply_len = reply_len;
179 + if (cmd->reply_buf && reply_len)
180 + memcpy(cmd->reply_buf, smu->cmd_buf->data, reply_len);
181 + }
182 +
183 + /* Now complete the command. Write status last in order as we lost
184 + * ownership of the command structure as soon as it's no longer -1
185 + */
186 + done = cmd->done;
187 + misc = cmd->misc;
188 + mb();
189 + cmd->status = rc;
190 + bail:
191 + /* Start next command if any */
192 + smu_start_cmd();
193 + spin_unlock_irqrestore(&smu->lock, flags);
194 +
195 + /* Call command completion handler if any */
196 + if (done)
197 + done(cmd, misc);
198 +
199 + /* It's an edge interrupt, nothing to do */
200 + return IRQ_HANDLED;
140 201 }
141 202
142 - static int smu_do_cmd(struct smu_device *dev)
203 +
204 + static irqreturn_t smu_msg_intr(int irq, void *arg, struct pt_regs *regs)
143 205 {
144 - int rc;
145 - u8 cmd_ack;
206 + /* I don't quite know what to do with this one, we seem to never
207 + * receive it, so I suspect we have to arm it someway in the SMU
208 + * to start getting events that way.
209 + */ 146 210 147 - DPRINTK("SMU do_cmd %02x len=%d %02x\n", 148 - dev->cmd_buf->cmd, dev->cmd_buf->length, 149 - dev->cmd_buf->data[0]); 211 + printk(KERN_INFO "SMU: message interrupt !\n"); 150 212 151 - cmd_ack = smu_save_ack_cmd(dev->cmd_buf); 152 - 153 - /* Clear cmd_buf cache lines */ 154 - flush_inval_dcache_range((unsigned long)dev->cmd_buf, 155 - ((unsigned long)dev->cmd_buf) + 156 - sizeof(struct smu_cmd_buf)); 157 - smu_send_cmd(dev); 158 - rc = smu_cmd_done(dev); 159 - if (rc == 0) 160 - rc = smu_cmd_stat(dev->cmd_buf, cmd_ack) ? 0 : -1; 161 - 162 - DPRINTK("SMU do_cmd %02x len=%d %02x => %d (%02x)\n", 163 - dev->cmd_buf->cmd, dev->cmd_buf->length, 164 - dev->cmd_buf->data[0], rc, cmd_ack); 165 - 166 - return rc; 213 + /* It's an edge interrupt, nothing to do */ 214 + return IRQ_HANDLED; 167 215 } 216 + 217 + 218 + /* 219 + * Queued command management. 220 + * 221 + */ 222 + 223 + int smu_queue_cmd(struct smu_cmd *cmd) 224 + { 225 + unsigned long flags; 226 + 227 + if (smu == NULL) 228 + return -ENODEV; 229 + if (cmd->data_len > SMU_MAX_DATA || 230 + cmd->reply_len > SMU_MAX_DATA) 231 + return -EINVAL; 232 + 233 + cmd->status = 1; 234 + spin_lock_irqsave(&smu->lock, flags); 235 + list_add_tail(&cmd->link, &smu->cmd_list); 236 + if (smu->cmd_cur == NULL) 237 + smu_start_cmd(); 238 + spin_unlock_irqrestore(&smu->lock, flags); 239 + 240 + return 0; 241 + } 242 + EXPORT_SYMBOL(smu_queue_cmd); 243 + 244 + 245 + int smu_queue_simple(struct smu_simple_cmd *scmd, u8 command, 246 + unsigned int data_len, 247 + void (*done)(struct smu_cmd *cmd, void *misc), 248 + void *misc, ...) 
249 + { 250 + struct smu_cmd *cmd = &scmd->cmd; 251 + va_list list; 252 + int i; 253 + 254 + if (data_len > sizeof(scmd->buffer)) 255 + return -EINVAL; 256 + 257 + memset(scmd, 0, sizeof(*scmd)); 258 + cmd->cmd = command; 259 + cmd->data_len = data_len; 260 + cmd->data_buf = scmd->buffer; 261 + cmd->reply_len = sizeof(scmd->buffer); 262 + cmd->reply_buf = scmd->buffer; 263 + cmd->done = done; 264 + cmd->misc = misc; 265 + 266 + va_start(list, misc); 267 + for (i = 0; i < data_len; ++i) 268 + scmd->buffer[i] = (u8)va_arg(list, int); 269 + va_end(list); 270 + 271 + return smu_queue_cmd(cmd); 272 + } 273 + EXPORT_SYMBOL(smu_queue_simple); 274 + 275 + 276 + void smu_poll(void) 277 + { 278 + u8 gpio; 279 + 280 + if (smu == NULL) 281 + return; 282 + 283 + gpio = pmac_do_feature_call(PMAC_FTR_READ_GPIO, NULL, smu->doorbell); 284 + if ((gpio & 7) == 7) 285 + smu_db_intr(smu->db_irq, smu, NULL); 286 + } 287 + EXPORT_SYMBOL(smu_poll); 288 + 289 + 290 + void smu_done_complete(struct smu_cmd *cmd, void *misc) 291 + { 292 + struct completion *comp = misc; 293 + 294 + complete(comp); 295 + } 296 + EXPORT_SYMBOL(smu_done_complete); 297 + 298 + 299 + void smu_spinwait_cmd(struct smu_cmd *cmd) 300 + { 301 + while(cmd->status == 1) 302 + smu_poll(); 303 + } 304 + EXPORT_SYMBOL(smu_spinwait_cmd); 305 + 168 306 169 307 /* RTC low level commands */ 170 308 static inline int bcd2hex (int n) ··· 324 158 return (((n & 0xf0) >> 4) * 10) + (n & 0xf); 325 159 } 326 160 161 + 327 162 static inline int hex2bcd (int n) 328 163 { 329 164 return ((n / 10) << 4) + (n % 10); 330 165 } 331 166 332 - #if 0 333 - static inline void smu_fill_set_pwrup_timer_cmd(struct smu_cmd_buf *cmd_buf) 334 - { 335 - cmd_buf->cmd = 0x8e; 336 - cmd_buf->length = 8; 337 - cmd_buf->data[0] = 0x00; 338 - memset(cmd_buf->data + 1, 0, 7); 339 - } 340 - 341 - static inline void smu_fill_get_pwrup_timer_cmd(struct smu_cmd_buf *cmd_buf) 342 - { 343 - cmd_buf->cmd = 0x8e; 344 - cmd_buf->length = 1; 345 - cmd_buf->data[0] = 
0x01; 346 - } 347 - 348 - static inline void smu_fill_dis_pwrup_timer_cmd(struct smu_cmd_buf *cmd_buf) 349 - { 350 - cmd_buf->cmd = 0x8e; 351 - cmd_buf->length = 1; 352 - cmd_buf->data[0] = 0x02; 353 - } 354 - #endif 355 167 356 168 static inline void smu_fill_set_rtc_cmd(struct smu_cmd_buf *cmd_buf, 357 169 struct rtc_time *time) ··· 346 202 cmd_buf->data[7] = hex2bcd(time->tm_year - 100); 347 203 } 348 204 349 - static inline void smu_fill_get_rtc_cmd(struct smu_cmd_buf *cmd_buf) 350 - { 351 - cmd_buf->cmd = 0x8e; 352 - cmd_buf->length = 1; 353 - cmd_buf->data[0] = 0x81; 354 - } 355 205 356 - static void smu_parse_get_rtc_reply(struct smu_cmd_buf *cmd_buf, 357 - struct rtc_time *time) 206 + int smu_get_rtc_time(struct rtc_time *time, int spinwait) 358 207 { 359 - time->tm_sec = bcd2hex(cmd_buf->data[0]); 360 - time->tm_min = bcd2hex(cmd_buf->data[1]); 361 - time->tm_hour = bcd2hex(cmd_buf->data[2]); 362 - time->tm_wday = bcd2hex(cmd_buf->data[3]); 363 - time->tm_mday = bcd2hex(cmd_buf->data[4]); 364 - time->tm_mon = bcd2hex(cmd_buf->data[5]) - 1; 365 - time->tm_year = bcd2hex(cmd_buf->data[6]) + 100; 366 - } 367 - 368 - int smu_get_rtc_time(struct rtc_time *time) 369 - { 370 - unsigned long flags; 208 + struct smu_simple_cmd cmd; 371 209 int rc; 372 210 373 211 if (smu == NULL) 374 212 return -ENODEV; 375 213 376 214 memset(time, 0, sizeof(struct rtc_time)); 377 - spin_lock_irqsave(&smu->lock, flags); 378 - smu_fill_get_rtc_cmd(smu->cmd_buf); 379 - rc = smu_do_cmd(smu); 380 - if (rc == 0) 381 - smu_parse_get_rtc_reply(smu->cmd_buf, time); 382 - spin_unlock_irqrestore(&smu->lock, flags); 215 + rc = smu_queue_simple(&cmd, SMU_CMD_RTC_COMMAND, 1, NULL, NULL, 216 + SMU_CMD_RTC_GET_DATETIME); 217 + if (rc) 218 + return rc; 219 + smu_spinwait_simple(&cmd); 383 220 384 - return rc; 221 + time->tm_sec = bcd2hex(cmd.buffer[0]); 222 + time->tm_min = bcd2hex(cmd.buffer[1]); 223 + time->tm_hour = bcd2hex(cmd.buffer[2]); 224 + time->tm_wday = bcd2hex(cmd.buffer[3]); 225 + 
time->tm_mday = bcd2hex(cmd.buffer[4]); 226 + time->tm_mon = bcd2hex(cmd.buffer[5]) - 1; 227 + time->tm_year = bcd2hex(cmd.buffer[6]) + 100; 228 + 229 + return 0; 385 230 } 386 231 387 - int smu_set_rtc_time(struct rtc_time *time) 232 + 233 + int smu_set_rtc_time(struct rtc_time *time, int spinwait) 388 234 { 389 - unsigned long flags; 235 + struct smu_simple_cmd cmd; 390 236 int rc; 391 237 392 238 if (smu == NULL) 393 239 return -ENODEV; 394 240 395 - spin_lock_irqsave(&smu->lock, flags); 396 - smu_fill_set_rtc_cmd(smu->cmd_buf, time); 397 - rc = smu_do_cmd(smu); 398 - spin_unlock_irqrestore(&smu->lock, flags); 241 + rc = smu_queue_simple(&cmd, SMU_CMD_RTC_COMMAND, 8, NULL, NULL, 242 + SMU_CMD_RTC_SET_DATETIME, 243 + hex2bcd(time->tm_sec), 244 + hex2bcd(time->tm_min), 245 + hex2bcd(time->tm_hour), 246 + time->tm_wday, 247 + hex2bcd(time->tm_mday), 248 + hex2bcd(time->tm_mon) + 1, 249 + hex2bcd(time->tm_year - 100)); 250 + if (rc) 251 + return rc; 252 + smu_spinwait_simple(&cmd); 399 253 400 - return rc; 254 + return 0; 401 255 } 256 + 402 257 403 258 void smu_shutdown(void) 404 259 { 405 - const unsigned char *command = "SHUTDOWN"; 406 - unsigned long flags; 260 + struct smu_simple_cmd cmd; 407 261 408 262 if (smu == NULL) 409 263 return; 410 264 411 - spin_lock_irqsave(&smu->lock, flags); 412 - smu->cmd_buf->cmd = 0xaa; 413 - smu->cmd_buf->length = strlen(command); 414 - strcpy(smu->cmd_buf->data, command); 415 - smu_do_cmd(smu); 265 + if (smu_queue_simple(&cmd, SMU_CMD_POWER_COMMAND, 9, NULL, NULL, 266 + 'S', 'H', 'U', 'T', 'D', 'O', 'W', 'N', 0)) 267 + return; 268 + smu_spinwait_simple(&cmd); 416 269 for (;;) 417 270 ; 418 - spin_unlock_irqrestore(&smu->lock, flags); 419 271 } 272 + 420 273 421 274 void smu_restart(void) 422 275 { 423 - const unsigned char *command = "RESTART"; 424 - unsigned long flags; 276 + struct smu_simple_cmd cmd; 425 277 426 278 if (smu == NULL) 427 279 return; 428 280 429 - spin_lock_irqsave(&smu->lock, flags); 430 - smu->cmd_buf->cmd 
= 0xaa; 431 - smu->cmd_buf->length = strlen(command); 432 - strcpy(smu->cmd_buf->data, command); 433 - smu_do_cmd(smu); 281 + if (smu_queue_simple(&cmd, SMU_CMD_POWER_COMMAND, 8, NULL, NULL, 282 + 'R', 'E', 'S', 'T', 'A', 'R', 'T', 0)) 283 + return; 284 + smu_spinwait_simple(&cmd); 434 285 for (;;) 435 286 ; 436 - spin_unlock_irqrestore(&smu->lock, flags); 437 287 } 288 + 438 289 439 290 int smu_present(void) 440 291 { 441 292 return smu != NULL; 442 293 } 294 + EXPORT_SYMBOL(smu_present); 443 295 444 296 445 297 int smu_init (void) ··· 446 306 np = of_find_node_by_type(NULL, "smu"); 447 307 if (np == NULL) 448 308 return -ENODEV; 309 + 310 + printk(KERN_INFO "SMU driver %s %s\n", VERSION, AUTHOR); 449 311 450 312 if (smu_cmdbuf_abs == 0) { 451 313 printk(KERN_ERR "SMU: Command buffer not allocated !\n"); ··· 460 318 memset(smu, 0, sizeof(*smu)); 461 319 462 320 spin_lock_init(&smu->lock); 321 + INIT_LIST_HEAD(&smu->cmd_list); 322 + INIT_LIST_HEAD(&smu->cmd_i2c_list); 463 323 smu->of_node = np; 324 + smu->db_irq = NO_IRQ; 325 + smu->msg_irq = NO_IRQ; 326 + init_timer(&smu->i2c_timer); 327 + 464 328 /* smu_cmdbuf_abs is in the low 2G of RAM, can be converted to a 465 329 * 32 bits value safely 466 330 */ ··· 479 331 goto fail; 480 332 } 481 333 data = (u32 *)get_property(np, "reg", NULL); 482 - of_node_put(np); 483 334 if (data == NULL) { 335 + of_node_put(np); 484 336 printk(KERN_ERR "SMU: Can't find doorbell GPIO address !\n"); 485 337 goto fail; 486 338 } ··· 489 341 * and ack. GPIOs are at 0x50, best would be to find that out 490 342 * in the device-tree though. 
491 343 */ 492 - smu->db_req = 0x50 + *data; 493 - smu->db_ack = 0x50 + *data; 344 + smu->doorbell = *data; 345 + if (smu->doorbell < 0x50) 346 + smu->doorbell += 0x50; 347 + if (np->n_intrs > 0) 348 + smu->db_irq = np->intrs[0].line; 349 + 350 + of_node_put(np); 351 + 352 + /* Now look for the smu-interrupt GPIO */ 353 + do { 354 + np = of_find_node_by_name(NULL, "smu-interrupt"); 355 + if (np == NULL) 356 + break; 357 + data = (u32 *)get_property(np, "reg", NULL); 358 + if (data == NULL) { 359 + of_node_put(np); 360 + break; 361 + } 362 + smu->msg = *data; 363 + if (smu->msg < 0x50) 364 + smu->msg += 0x50; 365 + if (np->n_intrs > 0) 366 + smu->msg_irq = np->intrs[0].line; 367 + of_node_put(np); 368 + } while(0); 494 369 495 370 /* Doorbell buffer is currently hard-coded, I didn't find a proper 496 371 * device-tree entry giving the address. Best would probably to use ··· 533 362 return -ENXIO; 534 363 535 364 } 365 + 366 + 367 + static int smu_late_init(void) 368 + { 369 + if (!smu) 370 + return 0; 371 + 372 + /* 373 + * Try to request the interrupts 374 + */ 375 + 376 + if (smu->db_irq != NO_IRQ) { 377 + if (request_irq(smu->db_irq, smu_db_intr, 378 + SA_SHIRQ, "SMU doorbell", smu) < 0) { 379 + printk(KERN_WARNING "SMU: can't " 380 + "request interrupt %d\n", 381 + smu->db_irq); 382 + smu->db_irq = NO_IRQ; 383 + } 384 + } 385 + 386 + if (smu->msg_irq != NO_IRQ) { 387 + if (request_irq(smu->msg_irq, smu_msg_intr, 388 + SA_SHIRQ, "SMU message", smu) < 0) { 389 + printk(KERN_WARNING "SMU: can't " 390 + "request interrupt %d\n", 391 + smu->msg_irq); 392 + smu->msg_irq = NO_IRQ; 393 + } 394 + } 395 + 396 + return 0; 397 + } 398 + arch_initcall(smu_late_init); 399 + 400 + /* 401 + * sysfs visibility 402 + */ 403 + 404 + static void smu_expose_childs(void *unused) 405 + { 406 + struct device_node *np; 407 + 408 + for (np = NULL; (np = of_get_next_child(smu->of_node, np)) != NULL;) { 409 + if (device_is_compatible(np, "smu-i2c")) { 410 + char name[32]; 411 + u32 *reg = 
(u32 *)get_property(np, "reg", NULL); 412 + 413 + if (reg == NULL) 414 + continue; 415 + sprintf(name, "smu-i2c-%02x", *reg); 416 + of_platform_device_create(np, name, &smu->of_dev->dev); 417 + } 418 + } 419 + 420 + } 421 + 422 + static DECLARE_WORK(smu_expose_childs_work, smu_expose_childs, NULL); 423 + 424 + static int smu_platform_probe(struct of_device* dev, 425 + const struct of_device_id *match) 426 + { 427 + if (!smu) 428 + return -ENODEV; 429 + smu->of_dev = dev; 430 + 431 + /* 432 + * Ok, we are matched, now expose all i2c busses. We have to defer 433 + * that unfortunately or it would deadlock inside the device model 434 + */ 435 + schedule_work(&smu_expose_childs_work); 436 + 437 + return 0; 438 + } 439 + 440 + static struct of_device_id smu_platform_match[] = 441 + { 442 + { 443 + .type = "smu", 444 + }, 445 + {}, 446 + }; 447 + 448 + static struct of_platform_driver smu_of_platform_driver = 449 + { 450 + .name = "smu", 451 + .match_table = smu_platform_match, 452 + .probe = smu_platform_probe, 453 + }; 454 + 455 + static int __init smu_init_sysfs(void) 456 + { 457 + int rc; 458 + 459 + /* 460 + * Due to sysfs bogosity, a sysdev is not a real device, so 461 + * we should in fact create both if we want sysdev semantics 462 + * for power management. 
463 + * For now, we don't power manage machines with an SMU chip, 464 + * I'm a bit too far from figuring out how that works with those 465 + * new chipsets, but that will come back and bite us 466 + */ 467 + rc = of_register_driver(&smu_of_platform_driver); 468 + return 0; 469 + } 470 + 471 + device_initcall(smu_init_sysfs); 472 + 473 + struct of_device *smu_get_ofdev(void) 474 + { 475 + if (!smu) 476 + return NULL; 477 + return smu->of_dev; 478 + } 479 + 480 + EXPORT_SYMBOL_GPL(smu_get_ofdev); 481 + 482 + /* 483 + * i2c interface 484 + */ 485 + 486 + static void smu_i2c_complete_command(struct smu_i2c_cmd *cmd, int fail) 487 + { 488 + void (*done)(struct smu_i2c_cmd *cmd, void *misc) = cmd->done; 489 + void *misc = cmd->misc; 490 + unsigned long flags; 491 + 492 + /* Check for read case */ 493 + if (!fail && cmd->read) { 494 + if (cmd->pdata[0] < 1) 495 + fail = 1; 496 + else 497 + memcpy(cmd->info.data, &cmd->pdata[1], 498 + cmd->info.datalen); 499 + } 500 + 501 + DPRINTK("SMU: completing, success: %d\n", !fail); 502 + 503 + /* Update status and mark no pending i2c command with lock 504 + * held so nobody comes in while we dequeue an eventual 505 + * pending next i2c command 506 + */ 507 + spin_lock_irqsave(&smu->lock, flags); 508 + smu->cmd_i2c_cur = NULL; 509 + wmb(); 510 + cmd->status = fail ? -EIO : 0; 511 + 512 + /* Is there another i2c command waiting ? 
*/ 513 + if (!list_empty(&smu->cmd_i2c_list)) { 514 + struct smu_i2c_cmd *newcmd; 515 + 516 + /* Fetch it, new current, remove from list */ 517 + newcmd = list_entry(smu->cmd_i2c_list.next, 518 + struct smu_i2c_cmd, link); 519 + smu->cmd_i2c_cur = newcmd; 520 + list_del(&cmd->link); 521 + 522 + /* Queue with low level smu */ 523 + list_add_tail(&cmd->scmd.link, &smu->cmd_list); 524 + if (smu->cmd_cur == NULL) 525 + smu_start_cmd(); 526 + } 527 + spin_unlock_irqrestore(&smu->lock, flags); 528 + 529 + /* Call command completion handler if any */ 530 + if (done) 531 + done(cmd, misc); 532 + 533 + } 534 + 535 + 536 + static void smu_i2c_retry(unsigned long data) 537 + { 538 + struct smu_i2c_cmd *cmd = (struct smu_i2c_cmd *)data; 539 + 540 + DPRINTK("SMU: i2c failure, requeuing...\n"); 541 + 542 + /* requeue command simply by resetting reply_len */ 543 + cmd->pdata[0] = 0xff; 544 + cmd->scmd.reply_len = 0x10; 545 + smu_queue_cmd(&cmd->scmd); 546 + } 547 + 548 + 549 + static void smu_i2c_low_completion(struct smu_cmd *scmd, void *misc) 550 + { 551 + struct smu_i2c_cmd *cmd = misc; 552 + int fail = 0; 553 + 554 + DPRINTK("SMU: i2c compl. 
stage=%d status=%x pdata[0]=%x rlen: %x\n", 555 + cmd->stage, scmd->status, cmd->pdata[0], scmd->reply_len); 556 + 557 + /* Check for possible status */ 558 + if (scmd->status < 0) 559 + fail = 1; 560 + else if (cmd->read) { 561 + if (cmd->stage == 0) 562 + fail = cmd->pdata[0] != 0; 563 + else 564 + fail = cmd->pdata[0] >= 0x80; 565 + } else { 566 + fail = cmd->pdata[0] != 0; 567 + } 568 + 569 + /* Handle failures by requeuing command, after 5ms interval 570 + */ 571 + if (fail && --cmd->retries > 0) { 572 + DPRINTK("SMU: i2c failure, starting timer...\n"); 573 + smu->i2c_timer.function = smu_i2c_retry; 574 + smu->i2c_timer.data = (unsigned long)cmd; 575 + smu->i2c_timer.expires = jiffies + msecs_to_jiffies(5); 576 + add_timer(&smu->i2c_timer); 577 + return; 578 + } 579 + 580 + /* If failure or stage 1, command is complete */ 581 + if (fail || cmd->stage != 0) { 582 + smu_i2c_complete_command(cmd, fail); 583 + return; 584 + } 585 + 586 + DPRINTK("SMU: going to stage 1\n"); 587 + 588 + /* Ok, initial command complete, now poll status */ 589 + scmd->reply_buf = cmd->pdata; 590 + scmd->reply_len = 0x10; 591 + scmd->data_buf = cmd->pdata; 592 + scmd->data_len = 1; 593 + cmd->pdata[0] = 0; 594 + cmd->stage = 1; 595 + cmd->retries = 20; 596 + smu_queue_cmd(scmd); 597 + } 598 + 599 + 600 + int smu_queue_i2c(struct smu_i2c_cmd *cmd) 601 + { 602 + unsigned long flags; 603 + 604 + if (smu == NULL) 605 + return -ENODEV; 606 + 607 + /* Fill most fields of scmd */ 608 + cmd->scmd.cmd = SMU_CMD_I2C_COMMAND; 609 + cmd->scmd.done = smu_i2c_low_completion; 610 + cmd->scmd.misc = cmd; 611 + cmd->scmd.reply_buf = cmd->pdata; 612 + cmd->scmd.reply_len = 0x10; 613 + cmd->scmd.data_buf = (u8 *)(char *)&cmd->info; 614 + cmd->scmd.status = 1; 615 + cmd->stage = 0; 616 + cmd->pdata[0] = 0xff; 617 + cmd->retries = 20; 618 + cmd->status = 1; 619 + 620 + /* Check transfer type, sanitize some "info" fields 621 + * based on transfer type and do more checking 622 + */ 623 + cmd->info.caddr = 
cmd->info.devaddr; 624 + cmd->read = cmd->info.devaddr & 0x01; 625 + switch(cmd->info.type) { 626 + case SMU_I2C_TRANSFER_SIMPLE: 627 + memset(&cmd->info.sublen, 0, 4); 628 + break; 629 + case SMU_I2C_TRANSFER_COMBINED: 630 + cmd->info.devaddr &= 0xfe; 631 + case SMU_I2C_TRANSFER_STDSUB: 632 + if (cmd->info.sublen > 3) 633 + return -EINVAL; 634 + break; 635 + default: 636 + return -EINVAL; 637 + } 638 + 639 + /* Finish setting up command based on transfer direction 640 + */ 641 + if (cmd->read) { 642 + if (cmd->info.datalen > SMU_I2C_READ_MAX) 643 + return -EINVAL; 644 + memset(cmd->info.data, 0xff, cmd->info.datalen); 645 + cmd->scmd.data_len = 9; 646 + } else { 647 + if (cmd->info.datalen > SMU_I2C_WRITE_MAX) 648 + return -EINVAL; 649 + cmd->scmd.data_len = 9 + cmd->info.datalen; 650 + } 651 + 652 + DPRINTK("SMU: i2c enqueuing command\n"); 653 + DPRINTK("SMU: %s, len=%d bus=%x addr=%x sub0=%x type=%x\n", 654 + cmd->read ? "read" : "write", cmd->info.datalen, 655 + cmd->info.bus, cmd->info.caddr, 656 + cmd->info.subaddr[0], cmd->info.type); 657 + 658 + 659 + /* Enqueue command in i2c list, and if empty, enqueue also in 660 + * main command list 661 + */ 662 + spin_lock_irqsave(&smu->lock, flags); 663 + if (smu->cmd_i2c_cur == NULL) { 664 + smu->cmd_i2c_cur = cmd; 665 + list_add_tail(&cmd->scmd.link, &smu->cmd_list); 666 + if (smu->cmd_cur == NULL) 667 + smu_start_cmd(); 668 + } else 669 + list_add_tail(&cmd->link, &smu->cmd_i2c_list); 670 + spin_unlock_irqrestore(&smu->lock, flags); 671 + 672 + return 0; 673 + } 674 + 675 + 676 + 677 + /* 678 + * Userland driver interface 679 + */ 680 + 681 + 682 + static LIST_HEAD(smu_clist); 683 + static DEFINE_SPINLOCK(smu_clist_lock); 684 + 685 + enum smu_file_mode { 686 + smu_file_commands, 687 + smu_file_events, 688 + smu_file_closing 689 + }; 690 + 691 + struct smu_private 692 + { 693 + struct list_head list; 694 + enum smu_file_mode mode; 695 + int busy; 696 + struct smu_cmd cmd; 697 + spinlock_t lock; 698 + 
wait_queue_head_t wait; 699 + u8 buffer[SMU_MAX_DATA]; 700 + }; 701 + 702 + 703 + static int smu_open(struct inode *inode, struct file *file) 704 + { 705 + struct smu_private *pp; 706 + unsigned long flags; 707 + 708 + pp = kmalloc(sizeof(struct smu_private), GFP_KERNEL); 709 + if (pp == 0) 710 + return -ENOMEM; 711 + memset(pp, 0, sizeof(struct smu_private)); 712 + spin_lock_init(&pp->lock); 713 + pp->mode = smu_file_commands; 714 + init_waitqueue_head(&pp->wait); 715 + 716 + spin_lock_irqsave(&smu_clist_lock, flags); 717 + list_add(&pp->list, &smu_clist); 718 + spin_unlock_irqrestore(&smu_clist_lock, flags); 719 + file->private_data = pp; 720 + 721 + return 0; 722 + } 723 + 724 + 725 + static void smu_user_cmd_done(struct smu_cmd *cmd, void *misc) 726 + { 727 + struct smu_private *pp = misc; 728 + 729 + wake_up_all(&pp->wait); 730 + } 731 + 732 + 733 + static ssize_t smu_write(struct file *file, const char __user *buf, 734 + size_t count, loff_t *ppos) 735 + { 736 + struct smu_private *pp = file->private_data; 737 + unsigned long flags; 738 + struct smu_user_cmd_hdr hdr; 739 + int rc = 0; 740 + 741 + if (pp->busy) 742 + return -EBUSY; 743 + else if (copy_from_user(&hdr, buf, sizeof(hdr))) 744 + return -EFAULT; 745 + else if (hdr.cmdtype == SMU_CMDTYPE_WANTS_EVENTS) { 746 + pp->mode = smu_file_events; 747 + return 0; 748 + } else if (hdr.cmdtype != SMU_CMDTYPE_SMU) 749 + return -EINVAL; 750 + else if (pp->mode != smu_file_commands) 751 + return -EBADFD; 752 + else if (hdr.data_len > SMU_MAX_DATA) 753 + return -EINVAL; 754 + 755 + spin_lock_irqsave(&pp->lock, flags); 756 + if (pp->busy) { 757 + spin_unlock_irqrestore(&pp->lock, flags); 758 + return -EBUSY; 759 + } 760 + pp->busy = 1; 761 + pp->cmd.status = 1; 762 + spin_unlock_irqrestore(&pp->lock, flags); 763 + 764 + if (copy_from_user(pp->buffer, buf + sizeof(hdr), hdr.data_len)) { 765 + pp->busy = 0; 766 + return -EFAULT; 767 + } 768 + 769 + pp->cmd.cmd = hdr.cmd; 770 + pp->cmd.data_len = hdr.data_len; 771 + 
pp->cmd.reply_len = SMU_MAX_DATA; 772 + pp->cmd.data_buf = pp->buffer; 773 + pp->cmd.reply_buf = pp->buffer; 774 + pp->cmd.done = smu_user_cmd_done; 775 + pp->cmd.misc = pp; 776 + rc = smu_queue_cmd(&pp->cmd); 777 + if (rc < 0) 778 + return rc; 779 + return count; 780 + } 781 + 782 + 783 + static ssize_t smu_read_command(struct file *file, struct smu_private *pp, 784 + char __user *buf, size_t count) 785 + { 786 + DECLARE_WAITQUEUE(wait, current); 787 + struct smu_user_reply_hdr hdr; 788 + unsigned long flags; 789 + int size, rc = 0; 790 + 791 + if (!pp->busy) 792 + return 0; 793 + if (count < sizeof(struct smu_user_reply_hdr)) 794 + return -EOVERFLOW; 795 + spin_lock_irqsave(&pp->lock, flags); 796 + if (pp->cmd.status == 1) { 797 + if (file->f_flags & O_NONBLOCK) 798 + return -EAGAIN; 799 + add_wait_queue(&pp->wait, &wait); 800 + for (;;) { 801 + set_current_state(TASK_INTERRUPTIBLE); 802 + rc = 0; 803 + if (pp->cmd.status != 1) 804 + break; 805 + rc = -ERESTARTSYS; 806 + if (signal_pending(current)) 807 + break; 808 + spin_unlock_irqrestore(&pp->lock, flags); 809 + schedule(); 810 + spin_lock_irqsave(&pp->lock, flags); 811 + } 812 + set_current_state(TASK_RUNNING); 813 + remove_wait_queue(&pp->wait, &wait); 814 + } 815 + spin_unlock_irqrestore(&pp->lock, flags); 816 + if (rc) 817 + return rc; 818 + if (pp->cmd.status != 0) 819 + pp->cmd.reply_len = 0; 820 + size = sizeof(hdr) + pp->cmd.reply_len; 821 + if (count < size) 822 + size = count; 823 + rc = size; 824 + hdr.status = pp->cmd.status; 825 + hdr.reply_len = pp->cmd.reply_len; 826 + if (copy_to_user(buf, &hdr, sizeof(hdr))) 827 + return -EFAULT; 828 + size -= sizeof(hdr); 829 + if (size && copy_to_user(buf + sizeof(hdr), pp->buffer, size)) 830 + return -EFAULT; 831 + pp->busy = 0; 832 + 833 + return rc; 834 + } 835 + 836 + 837 + static ssize_t smu_read_events(struct file *file, struct smu_private *pp, 838 + char __user *buf, size_t count) 839 + { 840 + /* Not implemented */ 841 + msleep_interruptible(1000); 
842 + return 0; 843 + } 844 + 845 + 846 + static ssize_t smu_read(struct file *file, char __user *buf, 847 + size_t count, loff_t *ppos) 848 + { 849 + struct smu_private *pp = file->private_data; 850 + 851 + if (pp->mode == smu_file_commands) 852 + return smu_read_command(file, pp, buf, count); 853 + if (pp->mode == smu_file_events) 854 + return smu_read_events(file, pp, buf, count); 855 + 856 + return -EBADFD; 857 + } 858 + 859 + static unsigned int smu_fpoll(struct file *file, poll_table *wait) 860 + { 861 + struct smu_private *pp = file->private_data; 862 + unsigned int mask = 0; 863 + unsigned long flags; 864 + 865 + if (pp == 0) 866 + return 0; 867 + 868 + if (pp->mode == smu_file_commands) { 869 + poll_wait(file, &pp->wait, wait); 870 + 871 + spin_lock_irqsave(&pp->lock, flags); 872 + if (pp->busy && pp->cmd.status != 1) 873 + mask |= POLLIN; 874 + spin_unlock_irqrestore(&pp->lock, flags); 875 + } if (pp->mode == smu_file_events) { 876 + /* Not yet implemented */ 877 + } 878 + return mask; 879 + } 880 + 881 + static int smu_release(struct inode *inode, struct file *file) 882 + { 883 + struct smu_private *pp = file->private_data; 884 + unsigned long flags; 885 + unsigned int busy; 886 + 887 + if (pp == 0) 888 + return 0; 889 + 890 + file->private_data = NULL; 891 + 892 + /* Mark file as closing to avoid races with new request */ 893 + spin_lock_irqsave(&pp->lock, flags); 894 + pp->mode = smu_file_closing; 895 + busy = pp->busy; 896 + 897 + /* Wait for any pending request to complete */ 898 + if (busy && pp->cmd.status == 1) { 899 + DECLARE_WAITQUEUE(wait, current); 900 + 901 + add_wait_queue(&pp->wait, &wait); 902 + for (;;) { 903 + set_current_state(TASK_UNINTERRUPTIBLE); 904 + if (pp->cmd.status != 1) 905 + break; 906 + spin_lock_irqsave(&pp->lock, flags); 907 + schedule(); 908 + spin_unlock_irqrestore(&pp->lock, flags); 909 + } 910 + set_current_state(TASK_RUNNING); 911 + remove_wait_queue(&pp->wait, &wait); 912 + } 913 + spin_unlock_irqrestore(&pp->lock, 
flags); 914 + 915 + spin_lock_irqsave(&smu_clist_lock, flags); 916 + list_del(&pp->list); 917 + spin_unlock_irqrestore(&smu_clist_lock, flags); 918 + kfree(pp); 919 + 920 + return 0; 921 + } 922 + 923 + 924 + static struct file_operations smu_device_fops __pmacdata = { 925 + .llseek = no_llseek, 926 + .read = smu_read, 927 + .write = smu_write, 928 + .poll = smu_fpoll, 929 + .open = smu_open, 930 + .release = smu_release, 931 + }; 932 + 933 + static struct miscdevice pmu_device __pmacdata = { 934 + MISC_DYNAMIC_MINOR, "smu", &smu_device_fops 935 + }; 936 + 937 + static int smu_device_init(void) 938 + { 939 + if (!smu) 940 + return -ENODEV; 941 + if (misc_register(&pmu_device) < 0) 942 + printk(KERN_ERR "via-pmu: cannot register misc device.\n"); 943 + return 0; 944 + } 945 + device_initcall(smu_device_init);
+1 -1
drivers/macintosh/therm_adt746x.c
··· 599 599 sensor_location[2] = "?"; 600 600 } 601 601 602 - of_dev = of_platform_device_create(np, "temperatures"); 602 + of_dev = of_platform_device_create(np, "temperatures", NULL); 603 603 604 604 if (of_dev == NULL) { 605 605 printk(KERN_ERR "Can't register temperatures device !\n");
+1 -1
drivers/macintosh/therm_pm72.c
··· 2051 2051 return -ENODEV; 2052 2052 } 2053 2053 } 2054 - of_dev = of_platform_device_create(np, "temperature"); 2054 + of_dev = of_platform_device_create(np, "temperature", NULL); 2055 2055 if (of_dev == NULL) { 2056 2056 printk(KERN_ERR "Can't register FCU platform device !\n"); 2057 2057 return -ENODEV;
+1 -1
drivers/macintosh/therm_windtunnel.c
··· 504 504 } 505 505 if( !(np=of_find_node_by_name(NULL, "fan")) ) 506 506 return -ENODEV; 507 - x.of_dev = of_platform_device_create( np, "temperature" ); 507 + x.of_dev = of_platform_device_create(np, "temperature", NULL); 508 508 of_node_put( np ); 509 509 510 510 if( !x.of_dev ) {
+7 -7
drivers/media/video/bttv-driver.c
··· 763 763 /* no PLL needed */ 764 764 if (btv->pll.pll_current == 0) 765 765 return; 766 - vprintk(KERN_INFO "bttv%d: PLL can sleep, using XTAL (%d).\n", 767 - btv->c.nr,btv->pll.pll_ifreq); 766 + bttv_printk(KERN_INFO "bttv%d: PLL can sleep, using XTAL (%d).\n", 767 + btv->c.nr,btv->pll.pll_ifreq); 768 768 btwrite(0x00,BT848_TGCTRL); 769 769 btwrite(0x00,BT848_PLL_XCI); 770 770 btv->pll.pll_current = 0; 771 771 return; 772 772 } 773 773 774 - vprintk(KERN_INFO "bttv%d: PLL: %d => %d ",btv->c.nr, 775 - btv->pll.pll_ifreq, btv->pll.pll_ofreq); 774 + bttv_printk(KERN_INFO "bttv%d: PLL: %d => %d ",btv->c.nr, 775 + btv->pll.pll_ifreq, btv->pll.pll_ofreq); 776 776 set_pll_freq(btv, btv->pll.pll_ifreq, btv->pll.pll_ofreq); 777 777 778 778 for (i=0; i<10; i++) { 779 779 /* Let other people run while the PLL stabilizes */ 780 - vprintk("."); 780 + bttv_printk("."); 781 781 msleep(10); 782 782 783 783 if (btread(BT848_DSTATUS) & BT848_DSTATUS_PLOCK) { ··· 785 785 } else { 786 786 btwrite(0x08,BT848_TGCTRL); 787 787 btv->pll.pll_current = btv->pll.pll_ofreq; 788 - vprintk(" ok\n"); 788 + bttv_printk(" ok\n"); 789 789 return; 790 790 } 791 791 } 792 792 btv->pll.pll_current = -1; 793 - vprintk("failed\n"); 793 + bttv_printk("failed\n"); 794 794 return; 795 795 } 796 796
+1 -1
drivers/media/video/bttvp.h
··· 221 221 extern int init_bttv_i2c(struct bttv *btv); 222 222 extern int fini_bttv_i2c(struct bttv *btv); 223 223 224 - #define vprintk if (bttv_verbose) printk 224 + #define bttv_printk if (bttv_verbose) printk 225 225 #define dprintk if (bttv_debug >= 1) printk 226 226 #define d2printk if (bttv_debug >= 2) printk 227 227
+17
drivers/message/fusion/Kconfig
··· 35 35 LSIFC929X 36 36 LSIFC929XL 37 37 38 + config FUSION_SAS 39 + tristate "Fusion MPT ScsiHost drivers for SAS" 40 + depends on PCI && SCSI 41 + select FUSION 42 + select SCSI_SAS_ATTRS 43 + ---help--- 44 + SCSI HOST support for SAS host adapters. 45 + 46 + List of supported controllers: 47 + 48 + LSISAS1064 49 + LSISAS1066 50 + LSISAS1068 51 + LSISAS1064E 52 + LSISAS1066E 53 + LSISAS1068E 54 + 38 55 config FUSION_MAX_SGE 39 56 int "Maximum number of scatter gather entries (16 - 128)" 40 57 depends on FUSION
+1
drivers/message/fusion/Makefile
··· 34 34 35 35 obj-$(CONFIG_FUSION_SPI) += mptbase.o mptscsih.o mptspi.o 36 36 obj-$(CONFIG_FUSION_FC) += mptbase.o mptscsih.o mptfc.o 37 + obj-$(CONFIG_FUSION_SAS) += mptbase.o mptscsih.o mptsas.o 37 38 obj-$(CONFIG_FUSION_CTL) += mptctl.o 38 39 obj-$(CONFIG_FUSION_LAN) += mptlan.o
+723 -238
drivers/message/fusion/mptbase.c
··· 135 135 136 136 static void MptDisplayIocCapabilities(MPT_ADAPTER *ioc); 137 137 static int MakeIocReady(MPT_ADAPTER *ioc, int force, int sleepFlag); 138 - //static u32 mpt_GetIocState(MPT_ADAPTER *ioc, int cooked); 139 138 static int GetIocFacts(MPT_ADAPTER *ioc, int sleepFlag, int reason); 140 139 static int GetPortFacts(MPT_ADAPTER *ioc, int portnum, int sleepFlag); 141 140 static int SendIocInit(MPT_ADAPTER *ioc, int sleepFlag); 142 141 static int SendPortEnable(MPT_ADAPTER *ioc, int portnum, int sleepFlag); 143 142 static int mpt_do_upload(MPT_ADAPTER *ioc, int sleepFlag); 144 - static int mpt_downloadboot(MPT_ADAPTER *ioc, int sleepFlag); 143 + static int mpt_downloadboot(MPT_ADAPTER *ioc, MpiFwHeader_t *pFwHeader, int sleepFlag); 145 144 static int mpt_diag_reset(MPT_ADAPTER *ioc, int ignore, int sleepFlag); 146 145 static int KickStart(MPT_ADAPTER *ioc, int ignore, int sleepFlag); 147 146 static int SendIocReset(MPT_ADAPTER *ioc, u8 reset_type, int sleepFlag); ··· 151 152 static int GetLanConfigPages(MPT_ADAPTER *ioc); 152 153 static int GetFcPortPage0(MPT_ADAPTER *ioc, int portnum); 153 154 static int GetIoUnitPage2(MPT_ADAPTER *ioc); 155 + int mptbase_sas_persist_operation(MPT_ADAPTER *ioc, u8 persist_opcode); 154 156 static int mpt_GetScsiPortSettings(MPT_ADAPTER *ioc, int portnum); 155 157 static int mpt_readScsiDevicePageHeaders(MPT_ADAPTER *ioc, int portnum); 156 158 static void mpt_read_ioc_pg_1(MPT_ADAPTER *ioc); ··· 159 159 static void mpt_timer_expired(unsigned long data); 160 160 static int SendEventNotification(MPT_ADAPTER *ioc, u8 EvSwitch); 161 161 static int SendEventAck(MPT_ADAPTER *ioc, EventNotificationReply_t *evnp); 162 + static int mpt_host_page_access_control(MPT_ADAPTER *ioc, u8 access_control_value, int sleepFlag); 163 + static int mpt_host_page_alloc(MPT_ADAPTER *ioc, pIOCInit_t ioc_init); 162 164 163 165 #ifdef CONFIG_PROC_FS 164 166 static int procmpt_summary_read(char *buf, char **start, off_t offset, ··· 177 175 static void 
mpt_sp_ioc_info(MPT_ADAPTER *ioc, u32 ioc_status, MPT_FRAME_HDR *mf); 178 176 static void mpt_fc_log_info(MPT_ADAPTER *ioc, u32 log_info); 179 177 static void mpt_sp_log_info(MPT_ADAPTER *ioc, u32 log_info); 178 + static void mpt_sas_log_info(MPT_ADAPTER *ioc, u32 log_info); 180 179 181 180 /* module entry point */ 182 181 static int __init fusion_init (void); ··· 209 206 pci_write_config_word(pdev, PCI_COMMAND, command_reg); 210 207 } 211 208 209 + /* 210 + * Process turbo (context) reply... 211 + */ 212 + static void 213 + mpt_turbo_reply(MPT_ADAPTER *ioc, u32 pa) 214 + { 215 + MPT_FRAME_HDR *mf = NULL; 216 + MPT_FRAME_HDR *mr = NULL; 217 + int req_idx = 0; 218 + int cb_idx; 219 + 220 + dmfprintk((MYIOC_s_INFO_FMT "Got TURBO reply req_idx=%08x\n", 221 + ioc->name, pa)); 222 + 223 + switch (pa >> MPI_CONTEXT_REPLY_TYPE_SHIFT) { 224 + case MPI_CONTEXT_REPLY_TYPE_SCSI_INIT: 225 + req_idx = pa & 0x0000FFFF; 226 + cb_idx = (pa & 0x00FF0000) >> 16; 227 + mf = MPT_INDEX_2_MFPTR(ioc, req_idx); 228 + break; 229 + case MPI_CONTEXT_REPLY_TYPE_LAN: 230 + cb_idx = mpt_lan_index; 231 + /* 232 + * Blind set of mf to NULL here was fatal 233 + * after lan_reply says "freeme" 234 + * Fix sort of combined with an optimization here; 235 + * added explicit check for case where lan_reply 236 + * was just returning 1 and doing nothing else. 237 + * For this case skip the callback, but set up 238 + * proper mf value first here:-) 239 + */ 240 + if ((pa & 0x58000000) == 0x58000000) { 241 + req_idx = pa & 0x0000FFFF; 242 + mf = MPT_INDEX_2_MFPTR(ioc, req_idx); 243 + mpt_free_msg_frame(ioc, mf); 244 + mb(); 245 + return; 246 + break; 247 + } 248 + mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa); 249 + break; 250 + case MPI_CONTEXT_REPLY_TYPE_SCSI_TARGET: 251 + cb_idx = mpt_stm_index; 252 + mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa); 253 + break; 254 + default: 255 + cb_idx = 0; 256 + BUG(); 257 + } 258 + 259 + /* Check for (valid) IO callback! 
*/ 260 + if (cb_idx < 1 || cb_idx >= MPT_MAX_PROTOCOL_DRIVERS || 261 + MptCallbacks[cb_idx] == NULL) { 262 + printk(MYIOC_s_WARN_FMT "%s: Invalid cb_idx (%d)!\n", 263 + __FUNCTION__, ioc->name, cb_idx); 264 + goto out; 265 + } 266 + 267 + if (MptCallbacks[cb_idx](ioc, mf, mr)) 268 + mpt_free_msg_frame(ioc, mf); 269 + out: 270 + mb(); 271 + } 272 + 273 + static void 274 + mpt_reply(MPT_ADAPTER *ioc, u32 pa) 275 + { 276 + MPT_FRAME_HDR *mf; 277 + MPT_FRAME_HDR *mr; 278 + int req_idx; 279 + int cb_idx; 280 + int freeme; 281 + 282 + u32 reply_dma_low; 283 + u16 ioc_stat; 284 + 285 + /* non-TURBO reply! Hmmm, something may be up... 286 + * Newest turbo reply mechanism; get address 287 + * via left shift 1 (get rid of MPI_ADDRESS_REPLY_A_BIT)! 288 + */ 289 + 290 + /* Map DMA address of reply header to cpu address. 291 + * pa is 32 bits - but the dma address may be 32 or 64 bits 292 + * get offset based only only the low addresses 293 + */ 294 + 295 + reply_dma_low = (pa <<= 1); 296 + mr = (MPT_FRAME_HDR *)((u8 *)ioc->reply_frames + 297 + (reply_dma_low - ioc->reply_frames_low_dma)); 298 + 299 + req_idx = le16_to_cpu(mr->u.frame.hwhdr.msgctxu.fld.req_idx); 300 + cb_idx = mr->u.frame.hwhdr.msgctxu.fld.cb_idx; 301 + mf = MPT_INDEX_2_MFPTR(ioc, req_idx); 302 + 303 + dmfprintk((MYIOC_s_INFO_FMT "Got non-TURBO reply=%p req_idx=%x cb_idx=%x Function=%x\n", 304 + ioc->name, mr, req_idx, cb_idx, mr->u.hdr.Function)); 305 + DBG_DUMP_REPLY_FRAME(mr) 306 + 307 + /* Check/log IOC log info 308 + */ 309 + ioc_stat = le16_to_cpu(mr->u.reply.IOCStatus); 310 + if (ioc_stat & MPI_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) { 311 + u32 log_info = le32_to_cpu(mr->u.reply.IOCLogInfo); 312 + if (ioc->bus_type == FC) 313 + mpt_fc_log_info(ioc, log_info); 314 + else if (ioc->bus_type == SCSI) 315 + mpt_sp_log_info(ioc, log_info); 316 + else if (ioc->bus_type == SAS) 317 + mpt_sas_log_info(ioc, log_info); 318 + } 319 + if (ioc_stat & MPI_IOCSTATUS_MASK) { 320 + if (ioc->bus_type == SCSI && 321 + cb_idx != 
mpt_stm_index && 322 + cb_idx != mpt_lan_index) 323 + mpt_sp_ioc_info(ioc, (u32)ioc_stat, mf); 324 + } 325 + 326 + 327 + /* Check for (valid) IO callback! */ 328 + if (cb_idx < 1 || cb_idx >= MPT_MAX_PROTOCOL_DRIVERS || 329 + MptCallbacks[cb_idx] == NULL) { 330 + printk(MYIOC_s_WARN_FMT "%s: Invalid cb_idx (%d)!\n", 331 + __FUNCTION__, ioc->name, cb_idx); 332 + freeme = 0; 333 + goto out; 334 + } 335 + 336 + freeme = MptCallbacks[cb_idx](ioc, mf, mr); 337 + 338 + out: 339 + /* Flush (non-TURBO) reply with a WRITE! */ 340 + CHIPREG_WRITE32(&ioc->chip->ReplyFifo, pa); 341 + 342 + if (freeme) 343 + mpt_free_msg_frame(ioc, mf); 344 + mb(); 345 + } 346 + 212 347 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 213 348 /* 214 349 * mpt_interrupt - MPT adapter (IOC) specific interrupt handler. ··· 368 227 static irqreturn_t 369 228 mpt_interrupt(int irq, void *bus_id, struct pt_regs *r) 370 229 { 371 - MPT_ADAPTER *ioc; 372 - MPT_FRAME_HDR *mf; 373 - MPT_FRAME_HDR *mr; 374 - u32 pa; 375 - int req_idx; 376 - int cb_idx; 377 - int type; 378 - int freeme; 379 - 380 - ioc = (MPT_ADAPTER *)bus_id; 230 + MPT_ADAPTER *ioc = bus_id; 231 + u32 pa; 381 232 382 233 /* 383 234 * Drain the reply FIFO! 384 - * 385 - * NOTES: I've seen up to 10 replies processed in this loop, so far... 386 - * Update: I've seen up to 9182 replies processed in this loop! ?? 387 - * Update: Limit ourselves to processing max of N replies 388 - * (bottom of loop). 389 235 */ 390 236 while (1) { 391 - 392 - if ((pa = CHIPREG_READ32_dmasync(&ioc->chip->ReplyFifo)) == 0xFFFFFFFF) 237 + pa = CHIPREG_READ32_dmasync(&ioc->chip->ReplyFifo); 238 + if (pa == 0xFFFFFFFF) 393 239 return IRQ_HANDLED; 394 - 395 - cb_idx = 0; 396 - freeme = 0; 397 - 398 - /* 399 - * Check for non-TURBO reply! 400 - */ 401 - if (pa & MPI_ADDRESS_REPLY_A_BIT) { 402 - u32 reply_dma_low; 403 - u16 ioc_stat; 404 - 405 - /* non-TURBO reply! Hmmm, something may be up... 
406 - * Newest turbo reply mechanism; get address 407 - * via left shift 1 (get rid of MPI_ADDRESS_REPLY_A_BIT)! 408 - */ 409 - 410 - /* Map DMA address of reply header to cpu address. 411 - * pa is 32 bits - but the dma address may be 32 or 64 bits 412 - * get offset based only only the low addresses 413 - */ 414 - reply_dma_low = (pa = (pa << 1)); 415 - mr = (MPT_FRAME_HDR *)((u8 *)ioc->reply_frames + 416 - (reply_dma_low - ioc->reply_frames_low_dma)); 417 - 418 - req_idx = le16_to_cpu(mr->u.frame.hwhdr.msgctxu.fld.req_idx); 419 - cb_idx = mr->u.frame.hwhdr.msgctxu.fld.cb_idx; 420 - mf = MPT_INDEX_2_MFPTR(ioc, req_idx); 421 - 422 - dmfprintk((MYIOC_s_INFO_FMT "Got non-TURBO reply=%p req_idx=%x cb_idx=%x Function=%x\n", 423 - ioc->name, mr, req_idx, cb_idx, mr->u.hdr.Function)); 424 - DBG_DUMP_REPLY_FRAME(mr) 425 - 426 - /* Check/log IOC log info 427 - */ 428 - ioc_stat = le16_to_cpu(mr->u.reply.IOCStatus); 429 - if (ioc_stat & MPI_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) { 430 - u32 log_info = le32_to_cpu(mr->u.reply.IOCLogInfo); 431 - if (ioc->bus_type == FC) 432 - mpt_fc_log_info(ioc, log_info); 433 - else if (ioc->bus_type == SCSI) 434 - mpt_sp_log_info(ioc, log_info); 435 - } 436 - if (ioc_stat & MPI_IOCSTATUS_MASK) { 437 - if (ioc->bus_type == SCSI) 438 - mpt_sp_ioc_info(ioc, (u32)ioc_stat, mf); 439 - } 440 - } else { 441 - /* 442 - * Process turbo (context) reply... 
443 - */ 444 - dmfprintk((MYIOC_s_INFO_FMT "Got TURBO reply req_idx=%08x\n", ioc->name, pa)); 445 - type = (pa >> MPI_CONTEXT_REPLY_TYPE_SHIFT); 446 - if (type == MPI_CONTEXT_REPLY_TYPE_SCSI_TARGET) { 447 - cb_idx = mpt_stm_index; 448 - mf = NULL; 449 - mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa); 450 - } else if (type == MPI_CONTEXT_REPLY_TYPE_LAN) { 451 - cb_idx = mpt_lan_index; 452 - /* Blind set of mf to NULL here was fatal 453 - * after lan_reply says "freeme" 454 - * Fix sort of combined with an optimization here; 455 - * added explicit check for case where lan_reply 456 - * was just returning 1 and doing nothing else. 457 - * For this case skip the callback, but set up 458 - * proper mf value first here:-) 459 - */ 460 - if ((pa & 0x58000000) == 0x58000000) { 461 - req_idx = pa & 0x0000FFFF; 462 - mf = MPT_INDEX_2_MFPTR(ioc, req_idx); 463 - freeme = 1; 464 - /* 465 - * IMPORTANT! Invalidate the callback! 466 - */ 467 - cb_idx = 0; 468 - } else { 469 - mf = NULL; 470 - } 471 - mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa); 472 - } else { 473 - req_idx = pa & 0x0000FFFF; 474 - cb_idx = (pa & 0x00FF0000) >> 16; 475 - mf = MPT_INDEX_2_MFPTR(ioc, req_idx); 476 - mr = NULL; 477 - } 478 - pa = 0; /* No reply flush! */ 479 - } 480 - 481 - #ifdef MPT_DEBUG_IRQ 482 - if (ioc->bus_type == SCSI) { 483 - /* Verify mf, mr are reasonable. 
484 - */ 485 - if ((mf) && ((mf >= MPT_INDEX_2_MFPTR(ioc, ioc->req_depth)) 486 - || (mf < ioc->req_frames)) ) { 487 - printk(MYIOC_s_WARN_FMT 488 - "mpt_interrupt: Invalid mf (%p)!\n", ioc->name, (void *)mf); 489 - cb_idx = 0; 490 - pa = 0; 491 - freeme = 0; 492 - } 493 - if ((pa) && (mr) && ((mr >= MPT_INDEX_2_RFPTR(ioc, ioc->req_depth)) 494 - || (mr < ioc->reply_frames)) ) { 495 - printk(MYIOC_s_WARN_FMT 496 - "mpt_interrupt: Invalid rf (%p)!\n", ioc->name, (void *)mr); 497 - cb_idx = 0; 498 - pa = 0; 499 - freeme = 0; 500 - } 501 - if (cb_idx > (MPT_MAX_PROTOCOL_DRIVERS-1)) { 502 - printk(MYIOC_s_WARN_FMT 503 - "mpt_interrupt: Invalid cb_idx (%d)!\n", ioc->name, cb_idx); 504 - cb_idx = 0; 505 - pa = 0; 506 - freeme = 0; 507 - } 508 - } 509 - #endif 510 - 511 - /* Check for (valid) IO callback! */ 512 - if (cb_idx) { 513 - /* Do the callback! */ 514 - freeme = (*(MptCallbacks[cb_idx]))(ioc, mf, mr); 515 - } 516 - 517 - if (pa) { 518 - /* Flush (non-TURBO) reply with a WRITE! */ 519 - CHIPREG_WRITE32(&ioc->chip->ReplyFifo, pa); 520 - } 521 - 522 - if (freeme) { 523 - /* Put Request back on FreeQ! 
*/ 524 - mpt_free_msg_frame(ioc, mf); 525 - } 526 - 527 - mb(); 528 - } /* drain reply FIFO */ 240 + else if (pa & MPI_ADDRESS_REPLY_A_BIT) 241 + mpt_reply(ioc, pa); 242 + else 243 + mpt_turbo_reply(ioc, pa); 244 + } 529 245 530 246 return IRQ_HANDLED; 531 247 } ··· 507 509 pCfg->wait_done = 1; 508 510 wake_up(&mpt_waitq); 509 511 } 512 + } else if (func == MPI_FUNCTION_SAS_IO_UNIT_CONTROL) { 513 + /* we should be always getting a reply frame */ 514 + memcpy(ioc->persist_reply_frame, reply, 515 + min(MPT_DEFAULT_FRAME_SIZE, 516 + 4*reply->u.reply.MsgLength)); 517 + del_timer(&ioc->persist_timer); 518 + ioc->persist_wait_done = 1; 519 + wake_up(&mpt_waitq); 510 520 } else { 511 521 printk(MYIOC_s_ERR_FMT "Unexpected msg function (=%02Xh) reply received!\n", 512 522 ioc->name, func); ··· 756 750 mf = list_entry(ioc->FreeQ.next, MPT_FRAME_HDR, 757 751 u.frame.linkage.list); 758 752 list_del(&mf->u.frame.linkage.list); 753 + mf->u.frame.linkage.arg1 = 0; 759 754 mf->u.frame.hwhdr.msgctxu.fld.cb_idx = handle; /* byte */ 760 755 req_offset = (u8 *)mf - (u8 *)ioc->req_frames; 761 756 /* u16! */ ··· 852 845 853 846 /* Put Request back on FreeQ! */ 854 847 spin_lock_irqsave(&ioc->FreeQlock, flags); 848 + mf->u.frame.linkage.arg1 = 0xdeadbeaf; /* signature to know if this mf is freed */ 855 849 list_add_tail(&mf->u.frame.linkage.list, &ioc->FreeQ); 856 850 #ifdef MFCNT 857 851 ioc->mfcnt--; ··· 979 971 980 972 /* Make sure there are no doorbells */ 981 973 CHIPREG_WRITE32(&ioc->chip->IntStatus, 0); 982 - 974 + 983 975 return r; 976 + } 977 + 978 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 979 + /** 980 + * mpt_host_page_access_control - provides mechanism for the host 981 + * driver to control the IOC's Host Page Buffer access. 
982 + * @ioc: Pointer to MPT adapter structure 983 + * @access_control_value: define bits below 984 + * 985 + * Access Control Value - bits[15:12] 986 + * 0h Reserved 987 + * 1h Enable Access { MPI_DB_HPBAC_ENABLE_ACCESS } 988 + * 2h Disable Access { MPI_DB_HPBAC_DISABLE_ACCESS } 989 + * 3h Free Buffer { MPI_DB_HPBAC_FREE_BUFFER } 990 + * 991 + * Returns 0 for success, non-zero for failure. 992 + */ 993 + 994 + static int 995 + mpt_host_page_access_control(MPT_ADAPTER *ioc, u8 access_control_value, int sleepFlag) 996 + { 997 + int r = 0; 998 + 999 + /* return if in use */ 1000 + if (CHIPREG_READ32(&ioc->chip->Doorbell) 1001 + & MPI_DOORBELL_ACTIVE) 1002 + return -1; 1003 + 1004 + CHIPREG_WRITE32(&ioc->chip->IntStatus, 0); 1005 + 1006 + CHIPREG_WRITE32(&ioc->chip->Doorbell, 1007 + ((MPI_FUNCTION_HOST_PAGEBUF_ACCESS_CONTROL 1008 + <<MPI_DOORBELL_FUNCTION_SHIFT) | 1009 + (access_control_value<<12))); 1010 + 1011 + /* Wait for IOC to clear Doorbell Status bit */ 1012 + if ((r = WaitForDoorbellAck(ioc, 5, sleepFlag)) < 0) { 1013 + return -2; 1014 + } else 1015 + return 0; 1016 + } 1017 + 1018 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 1019 + /** 1020 + * mpt_host_page_alloc - allocate system memory for the fw 1021 + * If we already allocated memory in the past, then resend the same pointer. 1022 + * @ioc: Pointer to MPT adapter structure 1023 + * @ioc_init: Pointer to ioc init config page 1024 + * 1025 + * Returns 0 for success, non-zero for failure. 
1026 + */ 1027 + static int 1028 + mpt_host_page_alloc(MPT_ADAPTER *ioc, pIOCInit_t ioc_init) 1029 + { 1030 + char *psge; 1031 + int flags_length; 1032 + u32 host_page_buffer_sz=0; 1033 + 1034 + if(!ioc->HostPageBuffer) { 1035 + 1036 + host_page_buffer_sz = 1037 + le32_to_cpu(ioc->facts.HostPageBufferSGE.FlagsLength) & 0xFFFFFF; 1038 + 1039 + if(!host_page_buffer_sz) 1040 + return 0; /* fw doesn't need any host buffers */ 1041 + 1042 + /* spin till we get enough memory */ 1043 + while(host_page_buffer_sz > 0) { 1044 + 1045 + if((ioc->HostPageBuffer = pci_alloc_consistent( 1046 + ioc->pcidev, 1047 + host_page_buffer_sz, 1048 + &ioc->HostPageBuffer_dma)) != NULL) { 1049 + 1050 + dinitprintk((MYIOC_s_INFO_FMT 1051 + "host_page_buffer @ %p, dma @ %x, sz=%d bytes\n", 1052 + ioc->name, 1053 + ioc->HostPageBuffer, 1054 + ioc->HostPageBuffer_dma, 1055 + host_page_buffer_sz)); 1056 + ioc->alloc_total += host_page_buffer_sz; 1057 + ioc->HostPageBuffer_sz = host_page_buffer_sz; 1058 + break; 1059 + } 1060 + 1061 + host_page_buffer_sz -= (4*1024); 1062 + } 1063 + } 1064 + 1065 + if(!ioc->HostPageBuffer) { 1066 + printk(MYIOC_s_ERR_FMT 1067 + "Failed to alloc memory for host_page_buffer!\n", 1068 + ioc->name); 1069 + return -999; 1070 + } 1071 + 1072 + psge = (char *)&ioc_init->HostPageBufferSGE; 1073 + flags_length = MPI_SGE_FLAGS_SIMPLE_ELEMENT | 1074 + MPI_SGE_FLAGS_SYSTEM_ADDRESS | 1075 + MPI_SGE_FLAGS_32_BIT_ADDRESSING | 1076 + MPI_SGE_FLAGS_HOST_TO_IOC | 1077 + MPI_SGE_FLAGS_END_OF_BUFFER; 1078 + if (sizeof(dma_addr_t) == sizeof(u64)) { 1079 + flags_length |= MPI_SGE_FLAGS_64_BIT_ADDRESSING; 1080 + } 1081 + flags_length = flags_length << MPI_SGE_FLAGS_SHIFT; 1082 + flags_length |= ioc->HostPageBuffer_sz; 1083 + mpt_add_sge(psge, flags_length, ioc->HostPageBuffer_dma); 1084 + ioc->facts.HostPageBufferSGE = ioc_init->HostPageBufferSGE; 1085 + 1086 + return 0; 984 1087 } 985 1088 986 1089 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 
1203 1084 1204 1085 /* Initilize SCSI Config Data structure 1205 1086 */ 1206 - memset(&ioc->spi_data, 0, sizeof(ScsiCfgData)); 1087 + memset(&ioc->spi_data, 0, sizeof(SpiCfgData)); 1207 1088 1208 1089 /* Initialize the running configQ head. 1209 1090 */ ··· 1331 1212 else if (pdev->device == MPI_MANUFACTPAGE_DEVID_1030_53C1035) { 1332 1213 ioc->prod_name = "LSI53C1035"; 1333 1214 ioc->bus_type = SCSI; 1215 + } 1216 + else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1064) { 1217 + ioc->prod_name = "LSISAS1064"; 1218 + ioc->bus_type = SAS; 1219 + ioc->errata_flag_1064 = 1; 1220 + } 1221 + else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1066) { 1222 + ioc->prod_name = "LSISAS1066"; 1223 + ioc->bus_type = SAS; 1224 + ioc->errata_flag_1064 = 1; 1225 + } 1226 + else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1068) { 1227 + ioc->prod_name = "LSISAS1068"; 1228 + ioc->bus_type = SAS; 1229 + ioc->errata_flag_1064 = 1; 1230 + } 1231 + else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1064E) { 1232 + ioc->prod_name = "LSISAS1064E"; 1233 + ioc->bus_type = SAS; 1234 + } 1235 + else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1066E) { 1236 + ioc->prod_name = "LSISAS1066E"; 1237 + ioc->bus_type = SAS; 1238 + } 1239 + else if (pdev->device == MPI_MANUFACTPAGE_DEVID_SAS1068E) { 1240 + ioc->prod_name = "LSISAS1068E"; 1241 + ioc->bus_type = SAS; 1334 1242 } 1335 1243 1336 1244 if (ioc->errata_flag_1064) ··· 1750 1604 */ 1751 1605 if (ret == 0) { 1752 1606 rc = mpt_do_upload(ioc, sleepFlag); 1753 - if (rc != 0) 1607 + if (rc == 0) { 1608 + if (ioc->alt_ioc && ioc->alt_ioc->cached_fw) { 1609 + /* 1610 + * Maintain only one pointer to FW memory 1611 + * so there will not be two attempt to 1612 + * downloadboot onboard dual function 1613 + * chips (mpt_adapter_disable, 1614 + * mpt_diag_reset) 1615 + */ 1616 + ioc->cached_fw = NULL; 1617 + ddlprintk((MYIOC_s_INFO_FMT ": mpt_upload: alt_%s has cached_fw=%p \n", 1618 + ioc->name, ioc->alt_ioc->name, ioc->alt_ioc->cached_fw)); 
1619 + } 1620 + } else { 1754 1621 printk(KERN_WARNING MYNAM ": firmware upload failure!\n"); 1622 + ret = -5; 1623 + } 1755 1624 } 1756 1625 } 1757 1626 } ··· 1801 1640 * and we try GetLanConfigPages again... 1802 1641 */ 1803 1642 if ((ret == 0) && (reason == MPT_HOSTEVENT_IOC_BRINGUP)) { 1804 - if (ioc->bus_type == FC) { 1643 + if (ioc->bus_type == SAS) { 1644 + 1645 + /* clear persistency table */ 1646 + if(ioc->facts.IOCExceptions & 1647 + MPI_IOCFACTS_EXCEPT_PERSISTENT_TABLE_FULL) { 1648 + ret = mptbase_sas_persist_operation(ioc, 1649 + MPI_SAS_OP_CLEAR_NOT_PRESENT); 1650 + if(ret != 0) 1651 + return -1; 1652 + } 1653 + 1654 + /* Find IM volumes 1655 + */ 1656 + mpt_findImVolumes(ioc); 1657 + 1658 + } else if (ioc->bus_type == FC) { 1805 1659 /* 1806 1660 * Pre-fetch FC port WWN and stuff... 1807 1661 * (FCPortPage0_t stuff) ··· 1959 1783 1960 1784 if (ioc->cached_fw != NULL) { 1961 1785 ddlprintk((KERN_INFO MYNAM ": mpt_adapter_disable: Pushing FW onto adapter\n")); 1962 - if ((ret = mpt_downloadboot(ioc, NO_SLEEP)) < 0) { 1786 + if ((ret = mpt_downloadboot(ioc, (MpiFwHeader_t *)ioc->cached_fw, NO_SLEEP)) < 0) { 1963 1787 printk(KERN_WARNING MYNAM 1964 1788 ": firmware downloadboot failure (%d)!\n", ret); 1965 1789 } ··· 2007 1831 } 2008 1832 2009 1833 kfree(ioc->spi_data.nvram); 2010 - kfree(ioc->spi_data.pIocPg3); 1834 + kfree(ioc->raid_data.pIocPg3); 2011 1835 ioc->spi_data.nvram = NULL; 2012 - ioc->spi_data.pIocPg3 = NULL; 1836 + ioc->raid_data.pIocPg3 = NULL; 2013 1837 2014 1838 if (ioc->spi_data.pIocPg4 != NULL) { 2015 1839 sz = ioc->spi_data.IocPg4Sz; ··· 2028 1852 2029 1853 kfree(ioc->ChainToChain); 2030 1854 ioc->ChainToChain = NULL; 1855 + 1856 + if (ioc->HostPageBuffer != NULL) { 1857 + if((ret = mpt_host_page_access_control(ioc, 1858 + MPI_DB_HPBAC_FREE_BUFFER, NO_SLEEP)) != 0) { 1859 + printk(KERN_ERR MYNAM 1860 + ": %s: host page buffers free failed (%d)!\n", 1861 + __FUNCTION__, ret); 1862 + } 1863 + dexitprintk((KERN_INFO MYNAM ": %s 
HostPageBuffer free @ %p, sz=%d bytes\n", 1864 + ioc->name, ioc->HostPageBuffer, ioc->HostPageBuffer_sz)); 1865 + pci_free_consistent(ioc->pcidev, ioc->HostPageBuffer_sz, 1866 + ioc->HostPageBuffer, 1867 + ioc->HostPageBuffer_dma); 1868 + ioc->HostPageBuffer = NULL; 1869 + ioc->HostPageBuffer_sz = 0; 1870 + ioc->alloc_total -= ioc->HostPageBuffer_sz; 1871 + } 2031 1872 } 2032 1873 2033 1874 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 2227 2034 * Loop here waiting for IOC to come READY. 2228 2035 */ 2229 2036 ii = 0; 2230 - cntdn = ((sleepFlag == CAN_SLEEP) ? HZ : 1000) * 15; /* 15 seconds */ 2037 + cntdn = ((sleepFlag == CAN_SLEEP) ? HZ : 1000) * 5; /* 5 seconds */ 2231 2038 2232 2039 while ((ioc_state = mpt_GetIocState(ioc, 1)) != MPI_IOC_STATE_READY) { 2233 2040 if (ioc_state == MPI_IOC_STATE_OPERATIONAL) { ··· 2405 2212 le32_to_cpu(facts->CurrentSenseBufferHighAddr); 2406 2213 facts->CurReplyFrameSize = 2407 2214 le16_to_cpu(facts->CurReplyFrameSize); 2215 + facts->IOCCapabilities = le32_to_cpu(facts->IOCCapabilities); 2408 2216 2409 2217 /* 2410 2218 * Handle NEW (!) 
IOCFactsReply fields in MPI-1.01.xx ··· 2577 2383 ddlprintk((MYIOC_s_INFO_FMT "upload_fw %d facts.Flags=%x\n", 2578 2384 ioc->name, ioc->upload_fw, ioc->facts.Flags)); 2579 2385 2580 - if (ioc->bus_type == FC) 2386 + if(ioc->bus_type == SAS) 2387 + ioc_init.MaxDevices = ioc->facts.MaxDevices; 2388 + else if(ioc->bus_type == FC) 2581 2389 ioc_init.MaxDevices = MPT_MAX_FC_DEVICES; 2582 2390 else 2583 2391 ioc_init.MaxDevices = MPT_MAX_SCSI_DEVICES; 2584 - 2585 2392 ioc_init.MaxBuses = MPT_MAX_BUS; 2393 + dinitprintk((MYIOC_s_INFO_FMT "facts.MsgVersion=%x\n", 2394 + ioc->name, ioc->facts.MsgVersion)); 2395 + if (ioc->facts.MsgVersion >= MPI_VERSION_01_05) { 2396 + // set MsgVersion and HeaderVersion host driver was built with 2397 + ioc_init.MsgVersion = cpu_to_le16(MPI_VERSION); 2398 + ioc_init.HeaderVersion = cpu_to_le16(MPI_HEADER_VERSION); 2586 2399 2400 + if (ioc->facts.Flags & MPI_IOCFACTS_FLAGS_HOST_PAGE_BUFFER_PERSISTENT) { 2401 + ioc_init.HostPageBufferSGE = ioc->facts.HostPageBufferSGE; 2402 + } else if(mpt_host_page_alloc(ioc, &ioc_init)) 2403 + return -99; 2404 + } 2587 2405 ioc_init.ReplyFrameSize = cpu_to_le16(ioc->reply_sz); /* in BYTES */ 2588 2406 2589 2407 if (sizeof(dma_addr_t) == sizeof(u64)) { ··· 2609 2403 ioc_init.HostMfaHighAddr = cpu_to_le32(0); 2610 2404 ioc_init.SenseBufferHighAddr = cpu_to_le32(0); 2611 2405 } 2612 - 2406 + 2613 2407 ioc->facts.CurrentHostMfaHighAddr = ioc_init.HostMfaHighAddr; 2614 2408 ioc->facts.CurrentSenseBufferHighAddr = ioc_init.SenseBufferHighAddr; 2409 + ioc->facts.MaxDevices = ioc_init.MaxDevices; 2410 + ioc->facts.MaxBuses = ioc_init.MaxBuses; 2615 2411 2616 2412 dhsprintk((MYIOC_s_INFO_FMT "Sending IOCInit (req @ %p)\n", 2617 2413 ioc->name, &ioc_init)); 2618 2414 2619 2415 r = mpt_handshake_req_reply_wait(ioc, sizeof(IOCInit_t), (u32*)&ioc_init, 2620 2416 sizeof(MPIDefaultReply_t), (u16*)&init_reply, 10 /*seconds*/, sleepFlag); 2621 - if (r != 0) 2417 + if (r != 0) { 2418 + printk(MYIOC_s_ERR_FMT "Sending 
IOCInit failed(%d)!\n",ioc->name, r); 2622 2419 return r; 2420 + } 2623 2421 2624 2422 /* No need to byte swap the multibyte fields in the reply 2625 2423 * since we don't even look at it's contents. ··· 2682 2472 { 2683 2473 PortEnable_t port_enable; 2684 2474 MPIDefaultReply_t reply_buf; 2685 - int ii; 2475 + int rc; 2686 2476 int req_sz; 2687 2477 int reply_sz; 2688 2478 ··· 2704 2494 2705 2495 /* RAID FW may take a long time to enable 2706 2496 */ 2707 - if (ioc->bus_type == FC) { 2708 - ii = mpt_handshake_req_reply_wait(ioc, req_sz, (u32*)&port_enable, 2709 - reply_sz, (u16*)&reply_buf, 65 /*seconds*/, sleepFlag); 2710 - } else { 2711 - ii = mpt_handshake_req_reply_wait(ioc, req_sz, (u32*)&port_enable, 2497 + if ( (ioc->facts.ProductID & MPI_FW_HEADER_PID_PROD_MASK) 2498 + > MPI_FW_HEADER_PID_PROD_TARGET_SCSI ) { 2499 + rc = mpt_handshake_req_reply_wait(ioc, req_sz, (u32*)&port_enable, 2712 2500 reply_sz, (u16*)&reply_buf, 300 /*seconds*/, sleepFlag); 2501 + } else { 2502 + rc = mpt_handshake_req_reply_wait(ioc, req_sz, (u32*)&port_enable, 2503 + reply_sz, (u16*)&reply_buf, 30 /*seconds*/, sleepFlag); 2713 2504 } 2714 - 2715 - if (ii != 0) 2716 - return ii; 2717 - 2718 - /* We do not even look at the reply, so we need not 2719 - * swap the multi-byte fields. 2720 - */ 2721 - 2722 - return 0; 2505 + return rc; 2723 2506 } 2724 2507 2725 2508 /* ··· 2869 2666 * <0 for fw upload failure. 
2870 2667 */ 2871 2668 static int 2872 - mpt_downloadboot(MPT_ADAPTER *ioc, int sleepFlag) 2669 + mpt_downloadboot(MPT_ADAPTER *ioc, MpiFwHeader_t *pFwHeader, int sleepFlag) 2873 2670 { 2874 - MpiFwHeader_t *pFwHeader; 2875 2671 MpiExtImageHeader_t *pExtImage; 2876 2672 u32 fwSize; 2877 2673 u32 diag0val; ··· 2881 2679 u32 load_addr; 2882 2680 u32 ioc_state=0; 2883 2681 2884 - ddlprintk((MYIOC_s_INFO_FMT "downloadboot: fw size 0x%x, ioc FW Ptr %p\n", 2885 - ioc->name, ioc->facts.FWImageSize, ioc->cached_fw)); 2886 - 2887 - if ( ioc->facts.FWImageSize == 0 ) 2888 - return -1; 2889 - 2890 - if (ioc->cached_fw == NULL) 2891 - return -2; 2892 - 2893 - /* prevent a second downloadboot and memory free with alt_ioc */ 2894 - if (ioc->alt_ioc && ioc->alt_ioc->cached_fw) 2895 - ioc->alt_ioc->cached_fw = NULL; 2682 + ddlprintk((MYIOC_s_INFO_FMT "downloadboot: fw size 0x%x (%d), FW Ptr %p\n", 2683 + ioc->name, pFwHeader->ImageSize, pFwHeader->ImageSize, pFwHeader)); 2896 2684 2897 2685 CHIPREG_WRITE32(&ioc->chip->WriteSequence, 0xFF); 2898 2686 CHIPREG_WRITE32(&ioc->chip->WriteSequence, MPI_WRSEQ_1ST_KEY_VALUE); ··· 2910 2718 ioc->name, count)); 2911 2719 break; 2912 2720 } 2913 - /* wait 1 sec */ 2721 + /* wait .1 sec */ 2914 2722 if (sleepFlag == CAN_SLEEP) { 2915 - msleep_interruptible (1000); 2723 + msleep_interruptible (100); 2916 2724 } else { 2917 - mdelay (1000); 2725 + mdelay (100); 2918 2726 } 2919 2727 } 2920 2728 2921 2729 if ( count == 30 ) { 2922 - ddlprintk((MYIOC_s_INFO_FMT "downloadboot failed! Unable to RESET_ADAPTER diag0val=%x\n", 2730 + ddlprintk((MYIOC_s_INFO_FMT "downloadboot failed! 
" 2731 + "Unable to get MPI_DIAG_DRWE mode, diag0val=%x\n", 2923 2732 ioc->name, diag0val)); 2924 2733 return -3; 2925 2734 } ··· 2935 2742 /* Set the DiagRwEn and Disable ARM bits */ 2936 2743 CHIPREG_WRITE32(&ioc->chip->Diagnostic, (MPI_DIAG_RW_ENABLE | MPI_DIAG_DISABLE_ARM)); 2937 2744 2938 - pFwHeader = (MpiFwHeader_t *) ioc->cached_fw; 2939 2745 fwSize = (pFwHeader->ImageSize + 3)/4; 2940 2746 ptrFw = (u32 *) pFwHeader; 2941 2747 ··· 2984 2792 /* Clear the internal flash bad bit - autoincrementing register, 2985 2793 * so must do two writes. 2986 2794 */ 2987 - CHIPREG_PIO_WRITE32(&ioc->pio_chip->DiagRwAddress, 0x3F000000); 2988 - diagRwData = CHIPREG_PIO_READ32(&ioc->pio_chip->DiagRwData); 2989 - diagRwData |= 0x4000000; 2990 - CHIPREG_PIO_WRITE32(&ioc->pio_chip->DiagRwAddress, 0x3F000000); 2991 - CHIPREG_PIO_WRITE32(&ioc->pio_chip->DiagRwData, diagRwData); 2795 + if (ioc->bus_type == SCSI) { 2796 + /* 2797 + * 1030 and 1035 H/W errata, workaround to access 2798 + * the ClearFlashBadSignatureBit 2799 + */ 2800 + CHIPREG_PIO_WRITE32(&ioc->pio_chip->DiagRwAddress, 0x3F000000); 2801 + diagRwData = CHIPREG_PIO_READ32(&ioc->pio_chip->DiagRwData); 2802 + diagRwData |= 0x40000000; 2803 + CHIPREG_PIO_WRITE32(&ioc->pio_chip->DiagRwAddress, 0x3F000000); 2804 + CHIPREG_PIO_WRITE32(&ioc->pio_chip->DiagRwData, diagRwData); 2805 + 2806 + } else /* if((ioc->bus_type == SAS) || (ioc->bus_type == FC)) */ { 2807 + diag0val = CHIPREG_READ32(&ioc->chip->Diagnostic); 2808 + CHIPREG_WRITE32(&ioc->chip->Diagnostic, diag0val | 2809 + MPI_DIAG_CLEAR_FLASH_BAD_SIG); 2810 + 2811 + /* wait 1 msec */ 2812 + if (sleepFlag == CAN_SLEEP) { 2813 + msleep_interruptible (1); 2814 + } else { 2815 + mdelay (1); 2816 + } 2817 + } 2992 2818 2993 2819 if (ioc->errata_flag_1064) 2994 2820 pci_disable_io_access(ioc->pcidev); 2995 2821 2996 2822 diag0val = CHIPREG_READ32(&ioc->chip->Diagnostic); 2997 - ddlprintk((MYIOC_s_INFO_FMT "downloadboot diag0val=%x, turning off PREVENT_IOC_BOOT, DISABLE_ARM\n", 
2823 + ddlprintk((MYIOC_s_INFO_FMT "downloadboot diag0val=%x, " 2824 + "turning off PREVENT_IOC_BOOT, DISABLE_ARM, RW_ENABLE\n", 2998 2825 ioc->name, diag0val)); 2999 - diag0val &= ~(MPI_DIAG_PREVENT_IOC_BOOT | MPI_DIAG_DISABLE_ARM); 2826 + diag0val &= ~(MPI_DIAG_PREVENT_IOC_BOOT | MPI_DIAG_DISABLE_ARM | MPI_DIAG_RW_ENABLE); 3000 2827 ddlprintk((MYIOC_s_INFO_FMT "downloadboot now diag0val=%x\n", 3001 2828 ioc->name, diag0val)); 3002 2829 CHIPREG_WRITE32(&ioc->chip->Diagnostic, diag0val); ··· 3023 2812 /* Write 0xFF to reset the sequencer */ 3024 2813 CHIPREG_WRITE32(&ioc->chip->WriteSequence, 0xFF); 3025 2814 2815 + if (ioc->bus_type == SAS) { 2816 + ioc_state = mpt_GetIocState(ioc, 0); 2817 + if ( (GetIocFacts(ioc, sleepFlag, 2818 + MPT_HOSTEVENT_IOC_BRINGUP)) != 0 ) { 2819 + ddlprintk((MYIOC_s_INFO_FMT "GetIocFacts failed: IocState=%x\n", 2820 + ioc->name, ioc_state)); 2821 + return -EFAULT; 2822 + } 2823 + } 2824 + 3026 2825 for (count=0; count<HZ*20; count++) { 3027 2826 if ((ioc_state = mpt_GetIocState(ioc, 0)) & MPI_IOC_STATE_READY) { 3028 2827 ddlprintk((MYIOC_s_INFO_FMT "downloadboot successful! (count=%d) IocState=%x\n", 3029 2828 ioc->name, count, ioc_state)); 2829 + if (ioc->bus_type == SAS) { 2830 + return 0; 2831 + } 3030 2832 if ((SendIocInit(ioc, sleepFlag)) != 0) { 3031 2833 ddlprintk((MYIOC_s_INFO_FMT "downloadboot: SendIocInit failed\n", 3032 2834 ioc->name)); ··· 3273 3049 3274 3050 /* wait 1 sec */ 3275 3051 if (sleepFlag == CAN_SLEEP) { 3276 - ssleep(1); 3052 + msleep_interruptible (1000); 3277 3053 } else { 3278 3054 mdelay (1000); 3279 3055 } 3280 3056 } 3281 - if ((count = mpt_downloadboot(ioc, sleepFlag)) < 0) { 3057 + if ((count = mpt_downloadboot(ioc, 3058 + (MpiFwHeader_t *)ioc->cached_fw, sleepFlag)) < 0) { 3282 3059 printk(KERN_WARNING MYNAM 3283 3060 ": firmware downloadboot failure (%d)!\n", count); 3284 3061 } ··· 3862 3637 int count = 0; 3863 3638 u32 intstat=0; 3864 3639 3865 - cntdn = ((sleepFlag == CAN_SLEEP) ? 
HZ : 1000) * howlong; 3640 + cntdn = 1000 * howlong; 3866 3641 3867 3642 if (sleepFlag == CAN_SLEEP) { 3868 3643 while (--cntdn) { ··· 3912 3687 int count = 0; 3913 3688 u32 intstat=0; 3914 3689 3915 - cntdn = ((sleepFlag == CAN_SLEEP) ? HZ : 1000) * howlong; 3690 + cntdn = 1000 * howlong; 3916 3691 if (sleepFlag == CAN_SLEEP) { 3917 3692 while (--cntdn) { 3918 3693 intstat = CHIPREG_READ32(&ioc->chip->IntStatus); ··· 4222 3997 } 4223 3998 4224 3999 return rc; 4000 + } 4001 + 4002 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 4003 + /* 4004 + * mptbase_sas_persist_operation - Perform operation on SAS Persistent Table 4005 + * @ioc: Pointer to MPT_ADAPTER structure 4006 + * @sas_address: 64bit SAS Address for operation. 4007 + * @target_id: specified target for operation 4008 + * @bus: specified bus for operation 4009 + * @persist_opcode: see below 4010 + * 4011 + * MPI_SAS_OP_CLEAR_NOT_PRESENT - Free all persist TargetID mappings for 4012 + * devices not currently present. 4013 + * MPI_SAS_OP_CLEAR_ALL_PERSISTENT - Clear all persist TargetID mappings 4014 + * 4015 + * NOTE: Do not use this function during interrupt time. 4016 + * 4017 + * Returns: 0 for success, non-zero error 4018 + */ 4019 + 4020 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 4021 + int 4022 + mptbase_sas_persist_operation(MPT_ADAPTER *ioc, u8 persist_opcode) 4023 + { 4024 + SasIoUnitControlRequest_t *sasIoUnitCntrReq; 4025 + SasIoUnitControlReply_t *sasIoUnitCntrReply; 4026 + MPT_FRAME_HDR *mf = NULL; 4027 + MPIHeader_t *mpi_hdr; 4028 + 4029 + 4030 + /* ensure garbage is not sent to fw */ 4031 + switch(persist_opcode) { 4032 + 4033 + case MPI_SAS_OP_CLEAR_NOT_PRESENT: 4034 + case MPI_SAS_OP_CLEAR_ALL_PERSISTENT: 4035 + break; 4036 + 4037 + default: 4038 + return -1; 4039 + break; 4040 + } 4041 + 4042 + printk("%s: persist_opcode=%x\n",__FUNCTION__, persist_opcode); 4043 + 4044 + /* Get a MF for this command. 
4045 + */ 4046 + if ((mf = mpt_get_msg_frame(mpt_base_index, ioc)) == NULL) { 4047 + printk("%s: no msg frames!\n",__FUNCTION__); 4048 + return -1; 4049 + } 4050 + 4051 + mpi_hdr = (MPIHeader_t *) mf; 4052 + sasIoUnitCntrReq = (SasIoUnitControlRequest_t *)mf; 4053 + memset(sasIoUnitCntrReq,0,sizeof(SasIoUnitControlRequest_t)); 4054 + sasIoUnitCntrReq->Function = MPI_FUNCTION_SAS_IO_UNIT_CONTROL; 4055 + sasIoUnitCntrReq->MsgContext = mpi_hdr->MsgContext; 4056 + sasIoUnitCntrReq->Operation = persist_opcode; 4057 + 4058 + init_timer(&ioc->persist_timer); 4059 + ioc->persist_timer.data = (unsigned long) ioc; 4060 + ioc->persist_timer.function = mpt_timer_expired; 4061 + ioc->persist_timer.expires = jiffies + HZ*10 /* 10 sec */; 4062 + ioc->persist_wait_done=0; 4063 + add_timer(&ioc->persist_timer); 4064 + mpt_put_msg_frame(mpt_base_index, ioc, mf); 4065 + wait_event(mpt_waitq, ioc->persist_wait_done); 4066 + 4067 + sasIoUnitCntrReply = 4068 + (SasIoUnitControlReply_t *)ioc->persist_reply_frame; 4069 + if (le16_to_cpu(sasIoUnitCntrReply->IOCStatus) != MPI_IOCSTATUS_SUCCESS) { 4070 + printk("%s: IOCStatus=0x%X IOCLogInfo=0x%X\n", 4071 + __FUNCTION__, 4072 + sasIoUnitCntrReply->IOCStatus, 4073 + sasIoUnitCntrReply->IOCLogInfo); 4074 + return -1; 4075 + } 4076 + 4077 + printk("%s: success\n",__FUNCTION__); 4078 + return 0; 4225 4079 } 4226 4080 4227 4081 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 4644 4340 if (mpt_config(ioc, &cfg) != 0) 4645 4341 goto done_and_free; 4646 4342 4647 - if ( (mem = (u8 *)ioc->spi_data.pIocPg2) == NULL ) { 4343 + if ( (mem = (u8 *)ioc->raid_data.pIocPg2) == NULL ) { 4648 4344 mem = kmalloc(iocpage2sz, GFP_ATOMIC); 4649 4345 if (mem) { 4650 - ioc->spi_data.pIocPg2 = (IOCPage2_t *) mem; 4346 + ioc->raid_data.pIocPg2 = (IOCPage2_t *) mem; 4651 4347 } else { 4652 4348 goto done_and_free; 4653 4349 } ··· 4664 4360 /* At least 1 RAID Volume 4665 4361 */ 4666 4362 pIocRv = pIoc2->RaidVolume; 4667 - 
ioc->spi_data.isRaid = 0; 4363 + ioc->raid_data.isRaid = 0; 4668 4364 for (jj = 0; jj < nVols; jj++, pIocRv++) { 4669 4365 vid = pIocRv->VolumeID; 4670 4366 vbus = pIocRv->VolumeBus; ··· 4673 4369 /* find the match 4674 4370 */ 4675 4371 if (vbus == 0) { 4676 - ioc->spi_data.isRaid |= (1 << vid); 4372 + ioc->raid_data.isRaid |= (1 << vid); 4677 4373 } else { 4678 4374 /* Error! Always bus 0 4679 4375 */ ··· 4708 4404 4709 4405 /* Free the old page 4710 4406 */ 4711 - kfree(ioc->spi_data.pIocPg3); 4712 - ioc->spi_data.pIocPg3 = NULL; 4407 + kfree(ioc->raid_data.pIocPg3); 4408 + ioc->raid_data.pIocPg3 = NULL; 4713 4409 4714 4410 /* There is at least one physical disk. 4715 4411 * Read and save IOC Page 3 ··· 4746 4442 mem = kmalloc(iocpage3sz, GFP_ATOMIC); 4747 4443 if (mem) { 4748 4444 memcpy(mem, (u8 *)pIoc3, iocpage3sz); 4749 - ioc->spi_data.pIocPg3 = (IOCPage3_t *) mem; 4445 + ioc->raid_data.pIocPg3 = (IOCPage3_t *) mem; 4750 4446 } 4751 4447 } 4752 4448 ··· 5670 5366 } 5671 5367 5672 5368 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 5673 - static char * 5674 - EventDescriptionStr(u8 event, u32 evData0) 5369 + static void 5370 + EventDescriptionStr(u8 event, u32 evData0, char *evStr) 5675 5371 { 5676 5372 char *ds; 5677 5373 ··· 5724 5420 ds = "Events(OFF) Change"; 5725 5421 break; 5726 5422 case MPI_EVENT_INTEGRATED_RAID: 5727 - ds = "Integrated Raid"; 5423 + { 5424 + u8 ReasonCode = (u8)(evData0 >> 16); 5425 + switch (ReasonCode) { 5426 + case MPI_EVENT_RAID_RC_VOLUME_CREATED : 5427 + ds = "Integrated Raid: Volume Created"; 5428 + break; 5429 + case MPI_EVENT_RAID_RC_VOLUME_DELETED : 5430 + ds = "Integrated Raid: Volume Deleted"; 5431 + break; 5432 + case MPI_EVENT_RAID_RC_VOLUME_SETTINGS_CHANGED : 5433 + ds = "Integrated Raid: Volume Settings Changed"; 5434 + break; 5435 + case MPI_EVENT_RAID_RC_VOLUME_STATUS_CHANGED : 5436 + ds = "Integrated Raid: Volume Status Changed"; 5437 + break; 5438 + case 
MPI_EVENT_RAID_RC_VOLUME_PHYSDISK_CHANGED : 5439 + ds = "Integrated Raid: Volume Physdisk Changed"; 5440 + break; 5441 + case MPI_EVENT_RAID_RC_PHYSDISK_CREATED : 5442 + ds = "Integrated Raid: Physdisk Created"; 5443 + break; 5444 + case MPI_EVENT_RAID_RC_PHYSDISK_DELETED : 5445 + ds = "Integrated Raid: Physdisk Deleted"; 5446 + break; 5447 + case MPI_EVENT_RAID_RC_PHYSDISK_SETTINGS_CHANGED : 5448 + ds = "Integrated Raid: Physdisk Settings Changed"; 5449 + break; 5450 + case MPI_EVENT_RAID_RC_PHYSDISK_STATUS_CHANGED : 5451 + ds = "Integrated Raid: Physdisk Status Changed"; 5452 + break; 5453 + case MPI_EVENT_RAID_RC_DOMAIN_VAL_NEEDED : 5454 + ds = "Integrated Raid: Domain Validation Needed"; 5455 + break; 5456 + case MPI_EVENT_RAID_RC_SMART_DATA : 5457 + ds = "Integrated Raid; Smart Data"; 5458 + break; 5459 + case MPI_EVENT_RAID_RC_REPLACE_ACTION_STARTED : 5460 + ds = "Integrated Raid: Replace Action Started"; 5461 + break; 5462 + default: 5463 + ds = "Integrated Raid"; 5728 5464 break; 5465 + } 5466 + break; 5467 + } 5468 + case MPI_EVENT_SCSI_DEVICE_STATUS_CHANGE: 5469 + ds = "SCSI Device Status Change"; 5470 + break; 5471 + case MPI_EVENT_SAS_DEVICE_STATUS_CHANGE: 5472 + { 5473 + u8 ReasonCode = (u8)(evData0 >> 16); 5474 + switch (ReasonCode) { 5475 + case MPI_EVENT_SAS_DEV_STAT_RC_ADDED: 5476 + ds = "SAS Device Status Change: Added"; 5477 + break; 5478 + case MPI_EVENT_SAS_DEV_STAT_RC_NOT_RESPONDING: 5479 + ds = "SAS Device Status Change: Deleted"; 5480 + break; 5481 + case MPI_EVENT_SAS_DEV_STAT_RC_SMART_DATA: 5482 + ds = "SAS Device Status Change: SMART Data"; 5483 + break; 5484 + case MPI_EVENT_SAS_DEV_STAT_RC_NO_PERSIST_ADDED: 5485 + ds = "SAS Device Status Change: No Persistancy Added"; 5486 + break; 5487 + default: 5488 + ds = "SAS Device Status Change: Unknown"; 5489 + break; 5490 + } 5491 + break; 5492 + } 5493 + case MPI_EVENT_ON_BUS_TIMER_EXPIRED: 5494 + ds = "Bus Timer Expired"; 5495 + break; 5496 + case MPI_EVENT_QUEUE_FULL: 5497 + ds = "Queue 
Full"; 5498 + break; 5499 + case MPI_EVENT_SAS_SES: 5500 + ds = "SAS SES Event"; 5501 + break; 5502 + case MPI_EVENT_PERSISTENT_TABLE_FULL: 5503 + ds = "Persistent Table Full"; 5504 + break; 5505 + case MPI_EVENT_SAS_PHY_LINK_STATUS: 5506 + ds = "SAS PHY Link Status"; 5507 + break; 5508 + case MPI_EVENT_SAS_DISCOVERY_ERROR: 5509 + ds = "SAS Discovery Error"; 5510 + break; 5511 + 5729 5512 /* 5730 5513 * MPT base "custom" events may be added here... 5731 5514 */ ··· 5820 5429 ds = "Unknown"; 5821 5430 break; 5822 5431 } 5823 - return ds; 5432 + strcpy(evStr,ds); 5824 5433 } 5825 5434 5826 5435 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 5842 5451 int ii; 5843 5452 int r = 0; 5844 5453 int handlers = 0; 5845 - char *evStr; 5454 + char evStr[100]; 5846 5455 u8 event; 5847 5456 5848 5457 /* ··· 5855 5464 evData0 = le32_to_cpu(pEventReply->Data[0]); 5856 5465 } 5857 5466 5858 - evStr = EventDescriptionStr(event, evData0); 5467 + EventDescriptionStr(event, evData0, evStr); 5859 5468 devtprintk((MYIOC_s_INFO_FMT "MPT event (%s=%02Xh) detected!\n", 5860 5469 ioc->name, 5861 5470 evStr, ··· 5872 5481 * Do general / base driver event processing 5873 5482 */ 5874 5483 switch(event) { 5875 - case MPI_EVENT_NONE: /* 00 */ 5876 - case MPI_EVENT_LOG_DATA: /* 01 */ 5877 - case MPI_EVENT_STATE_CHANGE: /* 02 */ 5878 - case MPI_EVENT_UNIT_ATTENTION: /* 03 */ 5879 - case MPI_EVENT_IOC_BUS_RESET: /* 04 */ 5880 - case MPI_EVENT_EXT_BUS_RESET: /* 05 */ 5881 - case MPI_EVENT_RESCAN: /* 06 */ 5882 - case MPI_EVENT_LINK_STATUS_CHANGE: /* 07 */ 5883 - case MPI_EVENT_LOOP_STATE_CHANGE: /* 08 */ 5884 - case MPI_EVENT_LOGOUT: /* 09 */ 5885 - case MPI_EVENT_INTEGRATED_RAID: /* 0B */ 5886 - case MPI_EVENT_SCSI_DEVICE_STATUS_CHANGE: /* 0C */ 5887 - default: 5888 - break; 5889 5484 case MPI_EVENT_EVENT_CHANGE: /* 0A */ 5890 5485 if (evDataLen) { 5891 5486 u8 evState = evData0 & 0xFF; ··· 5883 5506 ioc->facts.EventState = evState; 5884 5507 } 5885 5508 } 5509 
+ break; 5510 + default: 5886 5511 break; 5887 5512 } 5888 5513 ··· 6030 5651 } 6031 5652 6032 5653 printk(MYIOC_s_INFO_FMT "LogInfo(0x%08x): F/W: %s\n", ioc->name, log_info, desc); 5654 + } 5655 + 5656 + /* strings for sas loginfo */ 5657 + static char *originator_str[] = { 5658 + "IOP", /* 00h */ 5659 + "PL", /* 01h */ 5660 + "IR" /* 02h */ 5661 + }; 5662 + static char *iop_code_str[] = { 5663 + NULL, /* 00h */ 5664 + "Invalid SAS Address", /* 01h */ 5665 + NULL, /* 02h */ 5666 + "Invalid Page", /* 03h */ 5667 + NULL, /* 04h */ 5668 + "Task Terminated" /* 05h */ 5669 + }; 5670 + static char *pl_code_str[] = { 5671 + NULL, /* 00h */ 5672 + "Open Failure", /* 01h */ 5673 + "Invalid Scatter Gather List", /* 02h */ 5674 + "Wrong Relative Offset or Frame Length", /* 03h */ 5675 + "Frame Transfer Error", /* 04h */ 5676 + "Transmit Frame Connected Low", /* 05h */ 5677 + "SATA Non-NCQ RW Error Bit Set", /* 06h */ 5678 + "SATA Read Log Receive Data Error", /* 07h */ 5679 + "SATA NCQ Fail All Commands After Error", /* 08h */ 5680 + "SATA Error in Receive Set Device Bit FIS", /* 09h */ 5681 + "Receive Frame Invalid Message", /* 0Ah */ 5682 + "Receive Context Message Valid Error", /* 0Bh */ 5683 + "Receive Frame Current Frame Error", /* 0Ch */ 5684 + "SATA Link Down", /* 0Dh */ 5685 + "Discovery SATA Init W IOS", /* 0Eh */ 5686 + "Config Invalid Page", /* 0Fh */ 5687 + "Discovery SATA Init Timeout", /* 10h */ 5688 + "Reset", /* 11h */ 5689 + "Abort", /* 12h */ 5690 + "IO Not Yet Executed", /* 13h */ 5691 + "IO Executed", /* 14h */ 5692 + NULL, /* 15h */ 5693 + NULL, /* 16h */ 5694 + NULL, /* 17h */ 5695 + NULL, /* 18h */ 5696 + NULL, /* 19h */ 5697 + NULL, /* 1Ah */ 5698 + NULL, /* 1Bh */ 5699 + NULL, /* 1Ch */ 5700 + NULL, /* 1Dh */ 5701 + NULL, /* 1Eh */ 5702 + NULL, /* 1Fh */ 5703 + "Enclosure Management" /* 20h */ 5704 + }; 5705 + 5706 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 5707 + /* 5708 + * mpt_sas_log_info - Log information 
returned from SAS IOC. 5709 + * @ioc: Pointer to MPT_ADAPTER structure 5710 + * @log_info: U32 LogInfo reply word from the IOC 5711 + * 5712 + * Refer to lsi/mpi_log_sas.h. 5713 + */ 5714 + static void 5715 + mpt_sas_log_info(MPT_ADAPTER *ioc, u32 log_info) 5716 + { 5717 + union loginfo_type { 5718 + u32 loginfo; 5719 + struct { 5720 + u32 subcode:16; 5721 + u32 code:8; 5722 + u32 originator:4; 5723 + u32 bus_type:4; 5724 + }dw; 5725 + }; 5726 + union loginfo_type sas_loginfo; 5727 + char *code_desc = NULL; 5728 + 5729 + sas_loginfo.loginfo = log_info; 5730 + if ((sas_loginfo.dw.bus_type != 3 /*SAS*/) || 5731 + (sas_loginfo.dw.originator >= sizeof(originator_str)/sizeof(char*))) 5732 + return; 5733 + if ((sas_loginfo.dw.originator == 0 /*IOP*/) && 5734 + (sas_loginfo.dw.code < sizeof(iop_code_str)/sizeof(char*))) { 5735 + code_desc = iop_code_str[sas_loginfo.dw.code]; 5736 + }else if ((sas_loginfo.dw.originator == 1 /*PL*/) && 5737 + (sas_loginfo.dw.code < sizeof(pl_code_str)/sizeof(char*) )) { 5738 + code_desc = pl_code_str[sas_loginfo.dw.code]; 5739 + } 5740 + 5741 + if (code_desc != NULL) 5742 + printk(MYIOC_s_INFO_FMT 5743 + "LogInfo(0x%08x): Originator={%s}, Code={%s}," 5744 + " SubCode(0x%04x)\n", 5745 + ioc->name, 5746 + log_info, 5747 + originator_str[sas_loginfo.dw.originator], 5748 + code_desc, 5749 + sas_loginfo.dw.subcode); 5750 + else 5751 + printk(MYIOC_s_INFO_FMT 5752 + "LogInfo(0x%08x): Originator={%s}, Code=(0x%02x)," 5753 + " SubCode(0x%04x)\n", 5754 + ioc->name, 5755 + log_info, 5756 + originator_str[sas_loginfo.dw.originator], 5757 + sas_loginfo.dw.code, 5758 + sas_loginfo.dw.subcode); 6033 5759 } 6034 5760 6035 5761 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 6298 5814 EXPORT_SYMBOL(mpt_read_ioc_pg_3); 6299 5815 EXPORT_SYMBOL(mpt_alloc_fw_memory); 6300 5816 EXPORT_SYMBOL(mpt_free_fw_memory); 5817 + EXPORT_SYMBOL(mptbase_sas_persist_operation); 6301 5818 6302 5819 6303 5820
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+41 -15
drivers/message/fusion/mptbase.h
··· 65 65 #include "lsi/mpi_fc.h" /* Fibre Channel (lowlevel) support */ 66 66 #include "lsi/mpi_targ.h" /* SCSI/FCP Target protcol support */ 67 67 #include "lsi/mpi_tool.h" /* Tools support */ 68 + #include "lsi/mpi_sas.h" /* SAS support */ 68 69 69 70 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 70 71 ··· 77 76 #define COPYRIGHT "Copyright (c) 1999-2005 " MODULEAUTHOR 78 77 #endif 79 78 80 - #define MPT_LINUX_VERSION_COMMON "3.03.02" 81 - #define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.03.02" 79 + #define MPT_LINUX_VERSION_COMMON "3.03.03" 80 + #define MPT_LINUX_PACKAGE_NAME "@(#)mptlinux-3.03.03" 82 81 #define WHAT_MAGIC_STRING "@" "(" "#" ")" 83 82 84 83 #define show_mptmod_ver(s,ver) \ ··· 424 423 /* 425 424 * Event Structure and define 426 425 */ 427 - #define MPTCTL_EVENT_LOG_SIZE (0x0000000A) 426 + #define MPTCTL_EVENT_LOG_SIZE (0x000000032) 428 427 typedef struct _mpt_ioctl_events { 429 428 u32 event; /* Specified by define above */ 430 429 u32 eventContext; /* Index or counter */ ··· 452 451 #define MPT_SCSICFG_ALL_IDS 0x02 /* WriteSDP1 to all IDS */ 453 452 /* #define MPT_SCSICFG_BLK_NEGO 0x10 WriteSDP1 with WDTR and SDTR disabled */ 454 453 455 - typedef struct _ScsiCfgData { 454 + typedef struct _SpiCfgData { 456 455 u32 PortFlags; 457 456 int *nvram; /* table of device NVRAM values */ 458 - IOCPage2_t *pIocPg2; /* table of Raid Volumes */ 459 - IOCPage3_t *pIocPg3; /* table of physical disks */ 460 457 IOCPage4_t *pIocPg4; /* SEP devices addressing */ 461 458 dma_addr_t IocPg4_dma; /* Phys Addr of IOCPage4 data */ 462 459 int IocPg4Sz; /* IOCPage4 size */ 463 460 u8 dvStatus[MPT_MAX_SCSI_DEVICES]; 464 - int isRaid; /* bit field, 1 if RAID */ 465 461 u8 minSyncFactor; /* 0xFF if async */ 466 462 u8 maxSyncOffset; /* 0 if async */ 467 463 u8 maxBusWidth; /* 0 if narrow, 1 if wide */ ··· 470 472 u8 dvScheduled; /* 1 if scheduled */ 471 473 u8 forceDv; /* 1 to force DV scheduling */ 472 474 u8 noQas; /* Disable QAS for 
this adapter */ 473 - u8 Saf_Te; /* 1 to force all Processors as SAF-TE if Inquiry data length is too short to check for SAF-TE */ 475 + u8 Saf_Te; /* 1 to force all Processors as 476 + * SAF-TE if Inquiry data length 477 + * is too short to check for SAF-TE 478 + */ 474 479 u8 mpt_dv; /* command line option: enhanced=1, basic=0 */ 480 + u8 bus_reset; /* 1 to allow bus reset */ 475 481 u8 rsvd[1]; 476 - } ScsiCfgData; 482 + }SpiCfgData; 483 + 484 + typedef struct _SasCfgData { 485 + u8 ptClear; /* 1 to automatically clear the 486 + * persistent table. 487 + * 0 to disable 488 + * automatic clearing. 489 + */ 490 + }SasCfgData; 491 + 492 + typedef struct _RaidCfgData { 493 + IOCPage2_t *pIocPg2; /* table of Raid Volumes */ 494 + IOCPage3_t *pIocPg3; /* table of physical disks */ 495 + int isRaid; /* bit field, 1 if RAID */ 496 + }RaidCfgData; 477 497 478 498 /* 479 499 * Adapter Structure - pci_dev specific. Maximum: MPT_MAX_ADAPTERS ··· 546 530 u8 *sense_buf_pool; 547 531 dma_addr_t sense_buf_pool_dma; 548 532 u32 sense_buf_low_dma; 533 + u8 *HostPageBuffer; /* SAS - host page buffer support */ 534 + u32 HostPageBuffer_sz; 535 + dma_addr_t HostPageBuffer_dma; 549 536 int mtrr_reg; 550 537 struct pci_dev *pcidev; /* struct pci_dev pointer */ 551 538 u8 __iomem *memmap; /* mmap address */ 552 539 struct Scsi_Host *sh; /* Scsi Host pointer */ 553 - ScsiCfgData spi_data; /* Scsi config. data */ 540 + SpiCfgData spi_data; /* Scsi config. data */ 541 + RaidCfgData raid_data; /* Raid config. data */ 542 + SasCfgData sas_data; /* Sas config. 
data */ 554 543 MPT_IOCTL *ioctl; /* ioctl data pointer */ 555 544 struct proc_dir_entry *ioc_dentry; 556 545 struct _MPT_ADAPTER *alt_ioc; /* ptr to 929 bound adapter port */ ··· 575 554 #else 576 555 u32 mfcnt; 577 556 #endif 578 - u32 NB_for_64_byte_frame; 557 + u32 NB_for_64_byte_frame; 579 558 u32 hs_req[MPT_MAX_FRAME_SIZE/sizeof(u32)]; 580 559 u16 hs_reply[MPT_MAX_FRAME_SIZE/sizeof(u16)]; 581 560 IOCFactsReply_t facts; 582 561 PortFactsReply_t pfacts[2]; 583 562 FCPortPage0_t fc_port_page0[2]; 563 + struct timer_list persist_timer; /* persist table timer */ 564 + int persist_wait_done; /* persist completion flag */ 565 + u8 persist_reply_frame[MPT_DEFAULT_FRAME_SIZE]; /* persist reply */ 584 566 LANPage0_t lan_cnfg_page0; 585 567 LANPage1_t lan_cnfg_page1; 586 - /* 568 + /* 587 569 * Description: errata_flag_1064 588 570 * If a PCIX read occurs within 1 or 2 cycles after the chip receives 589 571 * a split completion for a read data, an internal address pointer incorrectly 590 572 * increments by 32 bytes 591 573 */ 592 - int errata_flag_1064; 574 + int errata_flag_1064; 593 575 u8 FirstWhoInit; 594 576 u8 upload_fw; /* If set, do a fw upload */ 595 577 u8 reload_fw; /* Force a FW Reload on next reset */ 596 - u8 NBShiftFactor; /* NB Shift Factor based on Block Size (Facts) */ 578 + u8 NBShiftFactor; /* NB Shift Factor based on Block Size (Facts) */ 597 579 u8 pad1[4]; 598 580 int DoneCtx; 599 581 int TaskCtx; 600 582 int InternalCtx; 601 - struct list_head list; 583 + struct list_head list; 602 584 struct net_device *netdev; 585 + struct list_head sas_topology; 603 586 } MPT_ADAPTER; 604 587 605 588 /* ··· 989 964 extern void mpt_free_fw_memory(MPT_ADAPTER *ioc); 990 965 extern int mpt_findImVolumes(MPT_ADAPTER *ioc); 991 966 extern int mpt_read_ioc_pg_3(MPT_ADAPTER *ioc); 967 + extern int mptbase_sas_persist_operation(MPT_ADAPTER *ioc, u8 persist_opcode); 992 968 993 969 /* 994 970 * Public data decl's...
+2 -2
drivers/message/fusion/mptctl.c
··· 1326 1326 */ 1327 1327 if (hd && hd->Targets) { 1328 1328 mpt_findImVolumes(ioc); 1329 - pIoc2 = ioc->spi_data.pIocPg2; 1329 + pIoc2 = ioc->raid_data.pIocPg2; 1330 1330 for ( id = 0; id <= max_id; ) { 1331 1331 if ( pIoc2 && pIoc2->NumActiveVolumes ) { 1332 1332 if ( id == pIoc2->RaidVolume[0].VolumeID ) { ··· 1348 1348 --maxWordsLeft; 1349 1349 goto next_id; 1350 1350 } else { 1351 - pIoc3 = ioc->spi_data.pIocPg3; 1351 + pIoc3 = ioc->raid_data.pIocPg3; 1352 1352 for ( jj = 0; jj < pIoc3->NumPhysDisks; jj++ ) { 1353 1353 if ( pIoc3->PhysDisk[jj].PhysDiskID == id ) 1354 1354 goto next_id;
+1 -1
drivers/message/fusion/mptfc.c
··· 189 189 printk(MYIOC_s_WARN_FMT 190 190 "Skipping ioc=%p because SCSI Initiator mode is NOT enabled!\n", 191 191 ioc->name, ioc); 192 - return -ENODEV; 192 + return 0; 193 193 } 194 194 195 195 sh = scsi_host_alloc(&mptfc_driver_template, sizeof(MPT_SCSI_HOST));
+6 -1
drivers/message/fusion/mptlan.c
··· 312 312 mpt_lan_ioc_reset(MPT_ADAPTER *ioc, int reset_phase) 313 313 { 314 314 struct net_device *dev = ioc->netdev; 315 - struct mpt_lan_priv *priv = netdev_priv(dev); 315 + struct mpt_lan_priv *priv; 316 + 317 + if (dev == NULL) 318 + return(1); 319 + else 320 + priv = netdev_priv(dev); 316 321 317 322 dlprintk((KERN_INFO MYNAM ": IOC %s_reset routed to LAN driver!\n", 318 323 reset_phase==MPT_IOC_SETUP_RESET ? "setup" : (
+1235
drivers/message/fusion/mptsas.c
··· 1 + /* 2 + * linux/drivers/message/fusion/mptsas.c 3 + * For use with LSI Logic PCI chip/adapter(s) 4 + * running LSI Logic Fusion MPT (Message Passing Technology) firmware. 5 + * 6 + * Copyright (c) 1999-2005 LSI Logic Corporation 7 + * (mailto:mpt_linux_developer@lsil.com) 8 + * Copyright (c) 2005 Dell 9 + */ 10 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 11 + /* 12 + This program is free software; you can redistribute it and/or modify 13 + it under the terms of the GNU General Public License as published by 14 + the Free Software Foundation; version 2 of the License. 15 + 16 + This program is distributed in the hope that it will be useful, 17 + but WITHOUT ANY WARRANTY; without even the implied warranty of 18 + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 + GNU General Public License for more details. 20 + 21 + NO WARRANTY 22 + THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR 23 + CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT 24 + LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, 25 + MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is 26 + solely responsible for determining the appropriateness of using and 27 + distributing the Program and assumes all risks associated with its 28 + exercise of rights under this Agreement, including but not limited to 29 + the risks and costs of program errors, damage to or loss of data, 30 + programs or equipment, and unavailability or interruption of operations. 
31 + 32 + DISCLAIMER OF LIABILITY 33 + NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY 34 + DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 35 + DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND 36 + ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR 37 + TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 38 + USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED 39 + HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES 40 + 41 + You should have received a copy of the GNU General Public License 42 + along with this program; if not, write to the Free Software 43 + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 44 + */ 45 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 46 + 47 + #include <linux/module.h> 48 + #include <linux/kernel.h> 49 + #include <linux/init.h> 50 + #include <linux/errno.h> 51 + #include <linux/sched.h> 52 + #include <linux/workqueue.h> 53 + 54 + #include <scsi/scsi_cmnd.h> 55 + #include <scsi/scsi_device.h> 56 + #include <scsi/scsi_host.h> 57 + #include <scsi/scsi_transport_sas.h> 58 + 59 + #include "mptbase.h" 60 + #include "mptscsih.h" 61 + 62 + 63 + #define my_NAME "Fusion MPT SAS Host driver" 64 + #define my_VERSION MPT_LINUX_VERSION_COMMON 65 + #define MYNAM "mptsas" 66 + 67 + MODULE_AUTHOR(MODULEAUTHOR); 68 + MODULE_DESCRIPTION(my_NAME); 69 + MODULE_LICENSE("GPL"); 70 + 71 + static int mpt_pq_filter; 72 + module_param(mpt_pq_filter, int, 0); 73 + MODULE_PARM_DESC(mpt_pq_filter, 74 + "Enable peripheral qualifier filter: enable=1 " 75 + "(default=0)"); 76 + 77 + static int mpt_pt_clear; 78 + module_param(mpt_pt_clear, int, 0); 79 + MODULE_PARM_DESC(mpt_pt_clear, 80 + "Clear persistency table: enable=1 " 81 + "(default=MPTSCSIH_PT_CLEAR=0)"); 82 + 83 + static int mptsasDoneCtx = -1; 84 + static int mptsasTaskCtx = -1; 85 + static int mptsasInternalCtx = -1; 
/* Used only for internal commands */ 86 + 87 + 88 + /* 89 + * SAS topology structures 90 + * 91 + * The MPT Fusion firmware interface spreads information about the 92 + * SAS topology over many manufacture pages, thus we need some data 93 + * structure to collect it and process it for the SAS transport class. 94 + */ 95 + 96 + struct mptsas_devinfo { 97 + u16 handle; /* unique id to address this device */ 98 + u8 phy_id; /* phy number of parent device */ 99 + u8 port_id; /* sas physical port this device 100 + is assoc'd with */ 101 + u8 target; /* logical target id of this device */ 102 + u8 bus; /* logical bus number of this device */ 103 + u64 sas_address; /* WWN of this device, 104 + SATA is assigned by HBA,expander */ 105 + u32 device_info; /* bitfield detailed info about this device */ 106 + }; 107 + 108 + struct mptsas_phyinfo { 109 + u8 phy_id; /* phy index */ 110 + u8 port_id; /* port number this phy is part of */ 111 + u8 negotiated_link_rate; /* nego'd link rate for this phy */ 112 + u8 hw_link_rate; /* hardware max/min phys link rate */ 113 + u8 programmed_link_rate; /* programmed max/min phy link rate */ 114 + struct mptsas_devinfo identify; /* point to phy device info */ 115 + struct mptsas_devinfo attached; /* point to attached device info */ 116 + struct sas_rphy *rphy; 117 + }; 118 + 119 + struct mptsas_portinfo { 120 + struct list_head list; 121 + u16 handle; /* unique id to address this */ 122 + u8 num_phys; /* number of phys */ 123 + struct mptsas_phyinfo *phy_info; 124 + }; 125 + 126 + /* 127 + * This is pretty ugly. We will be able to seriously clean it up 128 + * once the DV code in mptscsih goes away and we can properly 129 + * implement ->target_alloc. 
130 + */ 131 + static int 132 + mptsas_slave_alloc(struct scsi_device *device) 133 + { 134 + struct Scsi_Host *host = device->host; 135 + MPT_SCSI_HOST *hd = (MPT_SCSI_HOST *)host->hostdata; 136 + struct sas_rphy *rphy; 137 + struct mptsas_portinfo *p; 138 + VirtDevice *vdev; 139 + uint target = device->id; 140 + int i; 141 + 142 + if ((vdev = hd->Targets[target]) != NULL) 143 + goto out; 144 + 145 + vdev = kmalloc(sizeof(VirtDevice), GFP_KERNEL); 146 + if (!vdev) { 147 + printk(MYIOC_s_ERR_FMT "slave_alloc kmalloc(%zd) FAILED!\n", 148 + hd->ioc->name, sizeof(VirtDevice)); 149 + return -ENOMEM; 150 + } 151 + 152 + memset(vdev, 0, sizeof(VirtDevice)); 153 + vdev->tflags = MPT_TARGET_FLAGS_Q_YES|MPT_TARGET_FLAGS_VALID_INQUIRY; 154 + vdev->ioc_id = hd->ioc->id; 155 + 156 + rphy = dev_to_rphy(device->sdev_target->dev.parent); 157 + list_for_each_entry(p, &hd->ioc->sas_topology, list) { 158 + for (i = 0; i < p->num_phys; i++) { 159 + if (p->phy_info[i].attached.sas_address == 160 + rphy->identify.sas_address) { 161 + vdev->target_id = 162 + p->phy_info[i].attached.target; 163 + vdev->bus_id = p->phy_info[i].attached.bus; 164 + hd->Targets[device->id] = vdev; 165 + goto out; 166 + } 167 + } 168 + } 169 + 170 + printk("No matching SAS device found!!\n"); 171 + kfree(vdev); 172 + return -ENODEV; 173 + 174 + out: 175 + vdev->num_luns++; 176 + device->hostdata = vdev; 177 + return 0; 178 + } 179 + 180 + static struct scsi_host_template mptsas_driver_template = { 181 + .proc_name = "mptsas", 182 + .proc_info = mptscsih_proc_info, 183 + .name = "MPT SPI Host", 184 + .info = mptscsih_info, 185 + .queuecommand = mptscsih_qcmd, 186 + .slave_alloc = mptsas_slave_alloc, 187 + .slave_configure = mptscsih_slave_configure, 188 + .slave_destroy = mptscsih_slave_destroy, 189 + .change_queue_depth = mptscsih_change_queue_depth, 190 + .eh_abort_handler = mptscsih_abort, 191 + .eh_device_reset_handler = mptscsih_dev_reset, 192 + .eh_bus_reset_handler = mptscsih_bus_reset, 193 + 
.eh_host_reset_handler = mptscsih_host_reset, 194 + .bios_param = mptscsih_bios_param, 195 + .can_queue = MPT_FC_CAN_QUEUE, 196 + .this_id = -1, 197 + .sg_tablesize = MPT_SCSI_SG_DEPTH, 198 + .max_sectors = 8192, 199 + .cmd_per_lun = 7, 200 + .use_clustering = ENABLE_CLUSTERING, 201 + }; 202 + 203 + static struct sas_function_template mptsas_transport_functions = { 204 + }; 205 + 206 + static struct scsi_transport_template *mptsas_transport_template; 207 + 208 + #ifdef SASDEBUG 209 + static void mptsas_print_phy_data(MPI_SAS_IO_UNIT0_PHY_DATA *phy_data) 210 + { 211 + printk("---- IO UNIT PAGE 0 ------------\n"); 212 + printk("Handle=0x%X\n", 213 + le16_to_cpu(phy_data->AttachedDeviceHandle)); 214 + printk("Controller Handle=0x%X\n", 215 + le16_to_cpu(phy_data->ControllerDevHandle)); 216 + printk("Port=0x%X\n", phy_data->Port); 217 + printk("Port Flags=0x%X\n", phy_data->PortFlags); 218 + printk("PHY Flags=0x%X\n", phy_data->PhyFlags); 219 + printk("Negotiated Link Rate=0x%X\n", phy_data->NegotiatedLinkRate); 220 + printk("Controller PHY Device Info=0x%X\n", 221 + le32_to_cpu(phy_data->ControllerPhyDeviceInfo)); 222 + printk("DiscoveryStatus=0x%X\n", 223 + le32_to_cpu(phy_data->DiscoveryStatus)); 224 + printk("\n"); 225 + } 226 + 227 + static void mptsas_print_phy_pg0(SasPhyPage0_t *pg0) 228 + { 229 + __le64 sas_address; 230 + 231 + memcpy(&sas_address, &pg0->SASAddress, sizeof(__le64)); 232 + 233 + printk("---- SAS PHY PAGE 0 ------------\n"); 234 + printk("Attached Device Handle=0x%X\n", 235 + le16_to_cpu(pg0->AttachedDevHandle)); 236 + printk("SAS Address=0x%llX\n", 237 + (unsigned long long)le64_to_cpu(sas_address)); 238 + printk("Attached PHY Identifier=0x%X\n", pg0->AttachedPhyIdentifier); 239 + printk("Attached Device Info=0x%X\n", 240 + le32_to_cpu(pg0->AttachedDeviceInfo)); 241 + printk("Programmed Link Rate=0x%X\n", pg0->ProgrammedLinkRate); 242 + printk("Change Count=0x%X\n", pg0->ChangeCount); 243 + printk("PHY Info=0x%X\n", le32_to_cpu(pg0->PhyInfo)); 
244 + printk("\n"); 245 + } 246 + 247 + static void mptsas_print_device_pg0(SasDevicePage0_t *pg0) 248 + { 249 + __le64 sas_address; 250 + 251 + memcpy(&sas_address, &pg0->SASAddress, sizeof(__le64)); 252 + 253 + printk("---- SAS DEVICE PAGE 0 ---------\n"); 254 + printk("Handle=0x%X\n" ,le16_to_cpu(pg0->DevHandle)); 255 + printk("Enclosure Handle=0x%X\n", le16_to_cpu(pg0->EnclosureHandle)); 256 + printk("Slot=0x%X\n", le16_to_cpu(pg0->Slot)); 257 + printk("SAS Address=0x%llX\n", le64_to_cpu(sas_address)); 258 + printk("Target ID=0x%X\n", pg0->TargetID); 259 + printk("Bus=0x%X\n", pg0->Bus); 260 + printk("PhyNum=0x%X\n", pg0->PhyNum); 261 + printk("AccessStatus=0x%X\n", le16_to_cpu(pg0->AccessStatus)); 262 + printk("Device Info=0x%X\n", le32_to_cpu(pg0->DeviceInfo)); 263 + printk("Flags=0x%X\n", le16_to_cpu(pg0->Flags)); 264 + printk("Physical Port=0x%X\n", pg0->PhysicalPort); 265 + printk("\n"); 266 + } 267 + 268 + static void mptsas_print_expander_pg1(SasExpanderPage1_t *pg1) 269 + { 270 + printk("---- SAS EXPANDER PAGE 1 ------------\n"); 271 + 272 + printk("Physical Port=0x%X\n", pg1->PhysicalPort); 273 + printk("PHY Identifier=0x%X\n", pg1->Phy); 274 + printk("Negotiated Link Rate=0x%X\n", pg1->NegotiatedLinkRate); 275 + printk("Programmed Link Rate=0x%X\n", pg1->ProgrammedLinkRate); 276 + printk("Hardware Link Rate=0x%X\n", pg1->HwLinkRate); 277 + printk("Owner Device Handle=0x%X\n", 278 + le16_to_cpu(pg1->OwnerDevHandle)); 279 + printk("Attached Device Handle=0x%X\n", 280 + le16_to_cpu(pg1->AttachedDevHandle)); 281 + } 282 + #else 283 + #define mptsas_print_phy_data(phy_data) do { } while (0) 284 + #define mptsas_print_phy_pg0(pg0) do { } while (0) 285 + #define mptsas_print_device_pg0(pg0) do { } while (0) 286 + #define mptsas_print_expander_pg1(pg1) do { } while (0) 287 + #endif 288 + 289 + static int 290 + mptsas_sas_io_unit_pg0(MPT_ADAPTER *ioc, struct mptsas_portinfo *port_info) 291 + { 292 + ConfigExtendedPageHeader_t hdr; 293 + CONFIGPARMS cfg; 294 + 
SasIOUnitPage0_t *buffer; 295 + dma_addr_t dma_handle; 296 + int error, i; 297 + 298 + hdr.PageVersion = MPI_SASIOUNITPAGE0_PAGEVERSION; 299 + hdr.ExtPageLength = 0; 300 + hdr.PageNumber = 0; 301 + hdr.Reserved1 = 0; 302 + hdr.Reserved2 = 0; 303 + hdr.PageType = MPI_CONFIG_PAGETYPE_EXTENDED; 304 + hdr.ExtPageType = MPI_CONFIG_EXTPAGETYPE_SAS_IO_UNIT; 305 + 306 + cfg.cfghdr.ehdr = &hdr; 307 + cfg.physAddr = -1; 308 + cfg.pageAddr = 0; 309 + cfg.action = MPI_CONFIG_ACTION_PAGE_HEADER; 310 + cfg.dir = 0; /* read */ 311 + cfg.timeout = 10; 312 + 313 + error = mpt_config(ioc, &cfg); 314 + if (error) 315 + goto out; 316 + if (!hdr.ExtPageLength) { 317 + error = -ENXIO; 318 + goto out; 319 + } 320 + 321 + buffer = pci_alloc_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 322 + &dma_handle); 323 + if (!buffer) { 324 + error = -ENOMEM; 325 + goto out; 326 + } 327 + 328 + cfg.physAddr = dma_handle; 329 + cfg.action = MPI_CONFIG_ACTION_PAGE_READ_CURRENT; 330 + 331 + error = mpt_config(ioc, &cfg); 332 + if (error) 333 + goto out_free_consistent; 334 + 335 + port_info->num_phys = buffer->NumPhys; 336 + port_info->phy_info = kcalloc(port_info->num_phys, 337 + sizeof(struct mptsas_phyinfo),GFP_KERNEL); 338 + if (!port_info->phy_info) { 339 + error = -ENOMEM; 340 + goto out_free_consistent; 341 + } 342 + 343 + for (i = 0; i < port_info->num_phys; i++) { 344 + mptsas_print_phy_data(&buffer->PhyData[i]); 345 + port_info->phy_info[i].phy_id = i; 346 + port_info->phy_info[i].port_id = 347 + buffer->PhyData[i].Port; 348 + port_info->phy_info[i].negotiated_link_rate = 349 + buffer->PhyData[i].NegotiatedLinkRate; 350 + } 351 + 352 + out_free_consistent: 353 + pci_free_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 354 + buffer, dma_handle); 355 + out: 356 + return error; 357 + } 358 + 359 + static int 360 + mptsas_sas_phy_pg0(MPT_ADAPTER *ioc, struct mptsas_phyinfo *phy_info, 361 + u32 form, u32 form_specific) 362 + { 363 + ConfigExtendedPageHeader_t hdr; 364 + CONFIGPARMS cfg; 365 + 
SasPhyPage0_t *buffer; 366 + dma_addr_t dma_handle; 367 + int error; 368 + 369 + hdr.PageVersion = MPI_SASPHY0_PAGEVERSION; 370 + hdr.ExtPageLength = 0; 371 + hdr.PageNumber = 0; 372 + hdr.Reserved1 = 0; 373 + hdr.Reserved2 = 0; 374 + hdr.PageType = MPI_CONFIG_PAGETYPE_EXTENDED; 375 + hdr.ExtPageType = MPI_CONFIG_EXTPAGETYPE_SAS_PHY; 376 + 377 + cfg.cfghdr.ehdr = &hdr; 378 + cfg.dir = 0; /* read */ 379 + cfg.timeout = 10; 380 + 381 + /* Get Phy Pg 0 for each Phy. */ 382 + cfg.physAddr = -1; 383 + cfg.pageAddr = form + form_specific; 384 + cfg.action = MPI_CONFIG_ACTION_PAGE_HEADER; 385 + 386 + error = mpt_config(ioc, &cfg); 387 + if (error) 388 + goto out; 389 + 390 + if (!hdr.ExtPageLength) { 391 + error = -ENXIO; 392 + goto out; 393 + } 394 + 395 + buffer = pci_alloc_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 396 + &dma_handle); 397 + if (!buffer) { 398 + error = -ENOMEM; 399 + goto out; 400 + } 401 + 402 + cfg.physAddr = dma_handle; 403 + cfg.action = MPI_CONFIG_ACTION_PAGE_READ_CURRENT; 404 + 405 + error = mpt_config(ioc, &cfg); 406 + if (error) 407 + goto out_free_consistent; 408 + 409 + mptsas_print_phy_pg0(buffer); 410 + 411 + phy_info->hw_link_rate = buffer->HwLinkRate; 412 + phy_info->programmed_link_rate = buffer->ProgrammedLinkRate; 413 + phy_info->identify.handle = le16_to_cpu(buffer->OwnerDevHandle); 414 + phy_info->attached.handle = le16_to_cpu(buffer->AttachedDevHandle); 415 + 416 + out_free_consistent: 417 + pci_free_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 418 + buffer, dma_handle); 419 + out: 420 + return error; 421 + } 422 + 423 + static int 424 + mptsas_sas_device_pg0(MPT_ADAPTER *ioc, struct mptsas_devinfo *device_info, 425 + u32 form, u32 form_specific) 426 + { 427 + ConfigExtendedPageHeader_t hdr; 428 + CONFIGPARMS cfg; 429 + SasDevicePage0_t *buffer; 430 + dma_addr_t dma_handle; 431 + __le64 sas_address; 432 + int error; 433 + 434 + hdr.PageVersion = MPI_SASDEVICE0_PAGEVERSION; 435 + hdr.ExtPageLength = 0; 436 + hdr.PageNumber = 0; 
437 + hdr.Reserved1 = 0; 438 + hdr.Reserved2 = 0; 439 + hdr.PageType = MPI_CONFIG_PAGETYPE_EXTENDED; 440 + hdr.ExtPageType = MPI_CONFIG_EXTPAGETYPE_SAS_DEVICE; 441 + 442 + cfg.cfghdr.ehdr = &hdr; 443 + cfg.pageAddr = form + form_specific; 444 + cfg.physAddr = -1; 445 + cfg.action = MPI_CONFIG_ACTION_PAGE_HEADER; 446 + cfg.dir = 0; /* read */ 447 + cfg.timeout = 10; 448 + 449 + error = mpt_config(ioc, &cfg); 450 + if (error) 451 + goto out; 452 + if (!hdr.ExtPageLength) { 453 + error = -ENXIO; 454 + goto out; 455 + } 456 + 457 + buffer = pci_alloc_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 458 + &dma_handle); 459 + if (!buffer) { 460 + error = -ENOMEM; 461 + goto out; 462 + } 463 + 464 + cfg.physAddr = dma_handle; 465 + cfg.action = MPI_CONFIG_ACTION_PAGE_READ_CURRENT; 466 + 467 + error = mpt_config(ioc, &cfg); 468 + if (error) 469 + goto out_free_consistent; 470 + 471 + mptsas_print_device_pg0(buffer); 472 + 473 + device_info->handle = le16_to_cpu(buffer->DevHandle); 474 + device_info->phy_id = buffer->PhyNum; 475 + device_info->port_id = buffer->PhysicalPort; 476 + device_info->target = buffer->TargetID; 477 + device_info->bus = buffer->Bus; 478 + memcpy(&sas_address, &buffer->SASAddress, sizeof(__le64)); 479 + device_info->sas_address = le64_to_cpu(sas_address); 480 + device_info->device_info = 481 + le32_to_cpu(buffer->DeviceInfo); 482 + 483 + out_free_consistent: 484 + pci_free_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 485 + buffer, dma_handle); 486 + out: 487 + return error; 488 + } 489 + 490 + static int 491 + mptsas_sas_expander_pg0(MPT_ADAPTER *ioc, struct mptsas_portinfo *port_info, 492 + u32 form, u32 form_specific) 493 + { 494 + ConfigExtendedPageHeader_t hdr; 495 + CONFIGPARMS cfg; 496 + SasExpanderPage0_t *buffer; 497 + dma_addr_t dma_handle; 498 + int error; 499 + 500 + hdr.PageVersion = MPI_SASEXPANDER0_PAGEVERSION; 501 + hdr.ExtPageLength = 0; 502 + hdr.PageNumber = 0; 503 + hdr.Reserved1 = 0; 504 + hdr.Reserved2 = 0; 505 + hdr.PageType = 
MPI_CONFIG_PAGETYPE_EXTENDED; 506 + hdr.ExtPageType = MPI_CONFIG_EXTPAGETYPE_SAS_EXPANDER; 507 + 508 + cfg.cfghdr.ehdr = &hdr; 509 + cfg.physAddr = -1; 510 + cfg.pageAddr = form + form_specific; 511 + cfg.action = MPI_CONFIG_ACTION_PAGE_HEADER; 512 + cfg.dir = 0; /* read */ 513 + cfg.timeout = 10; 514 + 515 + error = mpt_config(ioc, &cfg); 516 + if (error) 517 + goto out; 518 + 519 + if (!hdr.ExtPageLength) { 520 + error = -ENXIO; 521 + goto out; 522 + } 523 + 524 + buffer = pci_alloc_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 525 + &dma_handle); 526 + if (!buffer) { 527 + error = -ENOMEM; 528 + goto out; 529 + } 530 + 531 + cfg.physAddr = dma_handle; 532 + cfg.action = MPI_CONFIG_ACTION_PAGE_READ_CURRENT; 533 + 534 + error = mpt_config(ioc, &cfg); 535 + if (error) 536 + goto out_free_consistent; 537 + 538 + /* save config data */ 539 + port_info->num_phys = buffer->NumPhys; 540 + port_info->handle = le16_to_cpu(buffer->DevHandle); 541 + port_info->phy_info = kcalloc(port_info->num_phys, 542 + sizeof(struct mptsas_phyinfo),GFP_KERNEL); 543 + if (!port_info->phy_info) { 544 + error = -ENOMEM; 545 + goto out_free_consistent; 546 + } 547 + 548 + out_free_consistent: 549 + pci_free_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 550 + buffer, dma_handle); 551 + out: 552 + return error; 553 + } 554 + 555 + static int 556 + mptsas_sas_expander_pg1(MPT_ADAPTER *ioc, struct mptsas_phyinfo *phy_info, 557 + u32 form, u32 form_specific) 558 + { 559 + ConfigExtendedPageHeader_t hdr; 560 + CONFIGPARMS cfg; 561 + SasExpanderPage1_t *buffer; 562 + dma_addr_t dma_handle; 563 + int error; 564 + 565 + hdr.PageVersion = MPI_SASEXPANDER0_PAGEVERSION; 566 + hdr.ExtPageLength = 0; 567 + hdr.PageNumber = 1; 568 + hdr.Reserved1 = 0; 569 + hdr.Reserved2 = 0; 570 + hdr.PageType = MPI_CONFIG_PAGETYPE_EXTENDED; 571 + hdr.ExtPageType = MPI_CONFIG_EXTPAGETYPE_SAS_EXPANDER; 572 + 573 + cfg.cfghdr.ehdr = &hdr; 574 + cfg.physAddr = -1; 575 + cfg.pageAddr = form + form_specific; 576 + cfg.action 
= MPI_CONFIG_ACTION_PAGE_HEADER; 577 + cfg.dir = 0; /* read */ 578 + cfg.timeout = 10; 579 + 580 + error = mpt_config(ioc, &cfg); 581 + if (error) 582 + goto out; 583 + 584 + if (!hdr.ExtPageLength) { 585 + error = -ENXIO; 586 + goto out; 587 + } 588 + 589 + buffer = pci_alloc_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 590 + &dma_handle); 591 + if (!buffer) { 592 + error = -ENOMEM; 593 + goto out; 594 + } 595 + 596 + cfg.physAddr = dma_handle; 597 + cfg.action = MPI_CONFIG_ACTION_PAGE_READ_CURRENT; 598 + 599 + error = mpt_config(ioc, &cfg); 600 + if (error) 601 + goto out_free_consistent; 602 + 603 + 604 + mptsas_print_expander_pg1(buffer); 605 + 606 + /* save config data */ 607 + phy_info->phy_id = buffer->Phy; 608 + phy_info->port_id = buffer->PhysicalPort; 609 + phy_info->negotiated_link_rate = buffer->NegotiatedLinkRate; 610 + phy_info->programmed_link_rate = buffer->ProgrammedLinkRate; 611 + phy_info->hw_link_rate = buffer->HwLinkRate; 612 + phy_info->identify.handle = le16_to_cpu(buffer->OwnerDevHandle); 613 + phy_info->attached.handle = le16_to_cpu(buffer->AttachedDevHandle); 614 + 615 + 616 + out_free_consistent: 617 + pci_free_consistent(ioc->pcidev, hdr.ExtPageLength * 4, 618 + buffer, dma_handle); 619 + out: 620 + return error; 621 + } 622 + 623 + static void 624 + mptsas_parse_device_info(struct sas_identify *identify, 625 + struct mptsas_devinfo *device_info) 626 + { 627 + u16 protocols; 628 + 629 + identify->sas_address = device_info->sas_address; 630 + identify->phy_identifier = device_info->phy_id; 631 + 632 + /* 633 + * Fill in Phy Initiator Port Protocol. 634 + * Bits 6:3, more than one bit can be set, fall through cases. 
635 + */ 636 + protocols = device_info->device_info & 0x78; 637 + identify->initiator_port_protocols = 0; 638 + if (protocols & MPI_SAS_DEVICE_INFO_SSP_INITIATOR) 639 + identify->initiator_port_protocols |= SAS_PROTOCOL_SSP; 640 + if (protocols & MPI_SAS_DEVICE_INFO_STP_INITIATOR) 641 + identify->initiator_port_protocols |= SAS_PROTOCOL_STP; 642 + if (protocols & MPI_SAS_DEVICE_INFO_SMP_INITIATOR) 643 + identify->initiator_port_protocols |= SAS_PROTOCOL_SMP; 644 + if (protocols & MPI_SAS_DEVICE_INFO_SATA_HOST) 645 + identify->initiator_port_protocols |= SAS_PROTOCOL_SATA; 646 + 647 + /* 648 + * Fill in Phy Target Port Protocol. 649 + * Bits 10:7, more than one bit can be set, fall through cases. 650 + */ 651 + protocols = device_info->device_info & 0x780; 652 + identify->target_port_protocols = 0; 653 + if (protocols & MPI_SAS_DEVICE_INFO_SSP_TARGET) 654 + identify->target_port_protocols |= SAS_PROTOCOL_SSP; 655 + if (protocols & MPI_SAS_DEVICE_INFO_STP_TARGET) 656 + identify->target_port_protocols |= SAS_PROTOCOL_STP; 657 + if (protocols & MPI_SAS_DEVICE_INFO_SMP_TARGET) 658 + identify->target_port_protocols |= SAS_PROTOCOL_SMP; 659 + if (protocols & MPI_SAS_DEVICE_INFO_SATA_DEVICE) 660 + identify->target_port_protocols |= SAS_PROTOCOL_SATA; 661 + 662 + /* 663 + * Fill in Attached device type. 
664 + */ 665 + switch (device_info->device_info & 666 + MPI_SAS_DEVICE_INFO_MASK_DEVICE_TYPE) { 667 + case MPI_SAS_DEVICE_INFO_NO_DEVICE: 668 + identify->device_type = SAS_PHY_UNUSED; 669 + break; 670 + case MPI_SAS_DEVICE_INFO_END_DEVICE: 671 + identify->device_type = SAS_END_DEVICE; 672 + break; 673 + case MPI_SAS_DEVICE_INFO_EDGE_EXPANDER: 674 + identify->device_type = SAS_EDGE_EXPANDER_DEVICE; 675 + break; 676 + case MPI_SAS_DEVICE_INFO_FANOUT_EXPANDER: 677 + identify->device_type = SAS_FANOUT_EXPANDER_DEVICE; 678 + break; 679 + } 680 + } 681 + 682 + static int mptsas_probe_one_phy(struct device *dev, 683 + struct mptsas_phyinfo *phy_info, int index) 684 + { 685 + struct sas_phy *port; 686 + int error; 687 + 688 + port = sas_phy_alloc(dev, index); 689 + if (!port) 690 + return -ENOMEM; 691 + 692 + port->port_identifier = phy_info->port_id; 693 + mptsas_parse_device_info(&port->identify, &phy_info->identify); 694 + 695 + /* 696 + * Set Negotiated link rate. 697 + */ 698 + switch (phy_info->negotiated_link_rate) { 699 + case MPI_SAS_IOUNIT0_RATE_PHY_DISABLED: 700 + port->negotiated_linkrate = SAS_PHY_DISABLED; 701 + break; 702 + case MPI_SAS_IOUNIT0_RATE_FAILED_SPEED_NEGOTIATION: 703 + port->negotiated_linkrate = SAS_LINK_RATE_FAILED; 704 + break; 705 + case MPI_SAS_IOUNIT0_RATE_1_5: 706 + port->negotiated_linkrate = SAS_LINK_RATE_1_5_GBPS; 707 + break; 708 + case MPI_SAS_IOUNIT0_RATE_3_0: 709 + port->negotiated_linkrate = SAS_LINK_RATE_3_0_GBPS; 710 + break; 711 + case MPI_SAS_IOUNIT0_RATE_SATA_OOB_COMPLETE: 712 + case MPI_SAS_IOUNIT0_RATE_UNKNOWN: 713 + default: 714 + port->negotiated_linkrate = SAS_LINK_RATE_UNKNOWN; 715 + break; 716 + } 717 + 718 + /* 719 + * Set Max hardware link rate. 
720 + */ 721 + switch (phy_info->hw_link_rate & MPI_SAS_PHY0_PRATE_MAX_RATE_MASK) { 722 + case MPI_SAS_PHY0_HWRATE_MAX_RATE_1_5: 723 + port->maximum_linkrate_hw = SAS_LINK_RATE_1_5_GBPS; 724 + break; 725 + case MPI_SAS_PHY0_PRATE_MAX_RATE_3_0: 726 + port->maximum_linkrate_hw = SAS_LINK_RATE_3_0_GBPS; 727 + break; 728 + default: 729 + break; 730 + } 731 + 732 + /* 733 + * Set Max programmed link rate. 734 + */ 735 + switch (phy_info->programmed_link_rate & 736 + MPI_SAS_PHY0_PRATE_MAX_RATE_MASK) { 737 + case MPI_SAS_PHY0_PRATE_MAX_RATE_1_5: 738 + port->maximum_linkrate = SAS_LINK_RATE_1_5_GBPS; 739 + break; 740 + case MPI_SAS_PHY0_PRATE_MAX_RATE_3_0: 741 + port->maximum_linkrate = SAS_LINK_RATE_3_0_GBPS; 742 + break; 743 + default: 744 + break; 745 + } 746 + 747 + /* 748 + * Set Min hardware link rate. 749 + */ 750 + switch (phy_info->hw_link_rate & MPI_SAS_PHY0_HWRATE_MIN_RATE_MASK) { 751 + case MPI_SAS_PHY0_HWRATE_MIN_RATE_1_5: 752 + port->minimum_linkrate_hw = SAS_LINK_RATE_1_5_GBPS; 753 + break; 754 + case MPI_SAS_PHY0_PRATE_MIN_RATE_3_0: 755 + port->minimum_linkrate_hw = SAS_LINK_RATE_3_0_GBPS; 756 + break; 757 + default: 758 + break; 759 + } 760 + 761 + /* 762 + * Set Min programmed link rate. 
763 + */ 764 + switch (phy_info->programmed_link_rate & 765 + MPI_SAS_PHY0_PRATE_MIN_RATE_MASK) { 766 + case MPI_SAS_PHY0_PRATE_MIN_RATE_1_5: 767 + port->minimum_linkrate = SAS_LINK_RATE_1_5_GBPS; 768 + break; 769 + case MPI_SAS_PHY0_PRATE_MIN_RATE_3_0: 770 + port->minimum_linkrate = SAS_LINK_RATE_3_0_GBPS; 771 + break; 772 + default: 773 + break; 774 + } 775 + 776 + error = sas_phy_add(port); 777 + if (error) { 778 + sas_phy_free(port); 779 + return error; 780 + } 781 + 782 + if (phy_info->attached.handle) { 783 + struct sas_rphy *rphy; 784 + 785 + rphy = sas_rphy_alloc(port); 786 + if (!rphy) 787 + return 0; /* non-fatal: an rphy can be added later */ 788 + 789 + mptsas_parse_device_info(&rphy->identify, &phy_info->attached); 790 + error = sas_rphy_add(rphy); 791 + if (error) { 792 + sas_rphy_free(rphy); 793 + return error; 794 + } 795 + 796 + phy_info->rphy = rphy; 797 + } 798 + 799 + return 0; 800 + } 801 + 802 + static int 803 + mptsas_probe_hba_phys(MPT_ADAPTER *ioc, int *index) 804 + { 805 + struct mptsas_portinfo *port_info; 806 + u32 handle = 0xFFFF; 807 + int error = -ENOMEM, i; 808 + 809 + port_info = kmalloc(sizeof(*port_info), GFP_KERNEL); 810 + if (!port_info) 811 + goto out; 812 + memset(port_info, 0, sizeof(*port_info)); 813 + 814 + error = mptsas_sas_io_unit_pg0(ioc, port_info); 815 + if (error) 816 + goto out_free_port_info; 817 + 818 + list_add_tail(&port_info->list, &ioc->sas_topology); 819 + 820 + for (i = 0; i < port_info->num_phys; i++) { 821 + mptsas_sas_phy_pg0(ioc, &port_info->phy_info[i], 822 + (MPI_SAS_PHY_PGAD_FORM_PHY_NUMBER << 823 + MPI_SAS_PHY_PGAD_FORM_SHIFT), i); 824 + 825 + mptsas_sas_device_pg0(ioc, &port_info->phy_info[i].identify, 826 + (MPI_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE << 827 + MPI_SAS_DEVICE_PGAD_FORM_SHIFT), handle); 828 + handle = port_info->phy_info[i].identify.handle; 829 + 830 + if (port_info->phy_info[i].attached.handle) { 831 + mptsas_sas_device_pg0(ioc, 832 + &port_info->phy_info[i].attached, 833 + 
(MPI_SAS_DEVICE_PGAD_FORM_HANDLE << 834 + MPI_SAS_DEVICE_PGAD_FORM_SHIFT), 835 + port_info->phy_info[i].attached.handle); 836 + } 837 + 838 + mptsas_probe_one_phy(&ioc->sh->shost_gendev, 839 + &port_info->phy_info[i], *index); 840 + (*index)++; 841 + } 842 + 843 + return 0; 844 + 845 + out_free_port_info: 846 + kfree(port_info); 847 + out: 848 + return error; 849 + } 850 + 851 + static int 852 + mptsas_probe_expander_phys(MPT_ADAPTER *ioc, u32 *handle, int *index) 853 + { 854 + struct mptsas_portinfo *port_info, *p; 855 + int error = -ENOMEM, i, j; 856 + 857 + port_info = kmalloc(sizeof(*port_info), GFP_KERNEL); 858 + if (!port_info) 859 + goto out; 860 + memset(port_info, 0, sizeof(*port_info)); 861 + 862 + error = mptsas_sas_expander_pg0(ioc, port_info, 863 + (MPI_SAS_EXPAND_PGAD_FORM_GET_NEXT_HANDLE << 864 + MPI_SAS_EXPAND_PGAD_FORM_SHIFT), *handle); 865 + if (error) 866 + goto out_free_port_info; 867 + 868 + *handle = port_info->handle; 869 + 870 + list_add_tail(&port_info->list, &ioc->sas_topology); 871 + for (i = 0; i < port_info->num_phys; i++) { 872 + struct device *parent; 873 + 874 + mptsas_sas_expander_pg1(ioc, &port_info->phy_info[i], 875 + (MPI_SAS_EXPAND_PGAD_FORM_HANDLE_PHY_NUM << 876 + MPI_SAS_EXPAND_PGAD_FORM_SHIFT), (i << 16) + *handle); 877 + 878 + if (port_info->phy_info[i].identify.handle) { 879 + mptsas_sas_device_pg0(ioc, 880 + &port_info->phy_info[i].identify, 881 + (MPI_SAS_DEVICE_PGAD_FORM_HANDLE << 882 + MPI_SAS_DEVICE_PGAD_FORM_SHIFT), 883 + port_info->phy_info[i].identify.handle); 884 + } 885 + 886 + if (port_info->phy_info[i].attached.handle) { 887 + mptsas_sas_device_pg0(ioc, 888 + &port_info->phy_info[i].attached, 889 + (MPI_SAS_DEVICE_PGAD_FORM_HANDLE << 890 + MPI_SAS_DEVICE_PGAD_FORM_SHIFT), 891 + port_info->phy_info[i].attached.handle); 892 + } 893 + 894 + /* 895 + * If we find a parent port handle, this expander is 896 + * attached to another expander; else it hangs off the 897 + * HBA phys. 
898 + */ 899 + parent = &ioc->sh->shost_gendev; 900 + list_for_each_entry(p, &ioc->sas_topology, list) { 901 + for (j = 0; j < p->num_phys; j++) { 902 + if (port_info->phy_info[i].identify.handle == 903 + p->phy_info[j].attached.handle) 904 + parent = &p->phy_info[j].rphy->dev; 905 + } 906 + } 907 + 908 + mptsas_probe_one_phy(parent, &port_info->phy_info[i], *index); 909 + (*index)++; 910 + } 911 + 912 + return 0; 913 + 914 + out_free_port_info: 915 + kfree(port_info); 916 + out: 917 + return error; 918 + } 919 + 920 + static void 921 + mptsas_scan_sas_topology(MPT_ADAPTER *ioc) 922 + { 923 + u32 handle = 0xFFFF; 924 + int index = 0; 925 + 926 + mptsas_probe_hba_phys(ioc, &index); 927 + while (!mptsas_probe_expander_phys(ioc, &handle, &index)) 928 + ; 929 + } 930 + 931 + static int 932 + mptsas_probe(struct pci_dev *pdev, const struct pci_device_id *id) 933 + { 934 + struct Scsi_Host *sh; 935 + MPT_SCSI_HOST *hd; 936 + MPT_ADAPTER *ioc; 937 + unsigned long flags; 938 + int sz, ii; 939 + int numSGE = 0; 940 + int scale; 941 + int ioc_cap; 942 + u8 *mem; 943 + int error=0; 944 + int r; 945 + 946 + r = mpt_attach(pdev,id); 947 + if (r) 948 + return r; 949 + 950 + ioc = pci_get_drvdata(pdev); 951 + ioc->DoneCtx = mptsasDoneCtx; 952 + ioc->TaskCtx = mptsasTaskCtx; 953 + ioc->InternalCtx = mptsasInternalCtx; 954 + 955 + /* Added sanity check on readiness of the MPT adapter. 
956 + */ 957 + if (ioc->last_state != MPI_IOC_STATE_OPERATIONAL) { 958 + printk(MYIOC_s_WARN_FMT 959 + "Skipping because it's not operational!\n", 960 + ioc->name); 961 + return -ENODEV; 962 + } 963 + 964 + if (!ioc->active) { 965 + printk(MYIOC_s_WARN_FMT "Skipping because it's disabled!\n", 966 + ioc->name); 967 + return -ENODEV; 968 + } 969 + 970 + /* Sanity check - ensure at least 1 port is INITIATOR capable 971 + */ 972 + ioc_cap = 0; 973 + for (ii = 0; ii < ioc->facts.NumberOfPorts; ii++) { 974 + if (ioc->pfacts[ii].ProtocolFlags & 975 + MPI_PORTFACTS_PROTOCOL_INITIATOR) 976 + ioc_cap++; 977 + } 978 + 979 + if (!ioc_cap) { 980 + printk(MYIOC_s_WARN_FMT 981 + "Skipping ioc=%p because SCSI Initiator mode " 982 + "is NOT enabled!\n", ioc->name, ioc); 983 + return 0; 984 + } 985 + 986 + sh = scsi_host_alloc(&mptsas_driver_template, sizeof(MPT_SCSI_HOST)); 987 + if (!sh) { 988 + printk(MYIOC_s_WARN_FMT 989 + "Unable to register controller with SCSI subsystem\n", 990 + ioc->name); 991 + return -1; 992 + } 993 + 994 + spin_lock_irqsave(&ioc->FreeQlock, flags); 995 + 996 + /* Attach the SCSI Host to the IOC structure 997 + */ 998 + ioc->sh = sh; 999 + 1000 + sh->io_port = 0; 1001 + sh->n_io_port = 0; 1002 + sh->irq = 0; 1003 + 1004 + /* set 16 byte cdb's */ 1005 + sh->max_cmd_len = 16; 1006 + 1007 + sh->max_id = ioc->pfacts->MaxDevices + 1; 1008 + 1009 + sh->transportt = mptsas_transport_template; 1010 + 1011 + sh->max_lun = MPT_LAST_LUN + 1; 1012 + sh->max_channel = 0; 1013 + sh->this_id = ioc->pfacts[0].PortSCSIID; 1014 + 1015 + /* Required entry. 1016 + */ 1017 + sh->unique_id = ioc->id; 1018 + 1019 + INIT_LIST_HEAD(&ioc->sas_topology); 1020 + 1021 + /* Verify that we won't exceed the maximum 1022 + * number of chain buffers 1023 + * We can optimize: ZZ = req_sz/sizeof(SGE) 1024 + * For 32bit SGE's: 1025 + * numSGE = 1 + (ZZ-1)*(maxChain -1) + ZZ 1026 + * + (req_sz - 64)/sizeof(SGE) 1027 + * A slightly different algorithm is required for 1028 + * 64bit SGEs. 
1029 + */ 1030 + scale = ioc->req_sz/(sizeof(dma_addr_t) + sizeof(u32)); 1031 + if (sizeof(dma_addr_t) == sizeof(u64)) { 1032 + numSGE = (scale - 1) * 1033 + (ioc->facts.MaxChainDepth-1) + scale + 1034 + (ioc->req_sz - 60) / (sizeof(dma_addr_t) + 1035 + sizeof(u32)); 1036 + } else { 1037 + numSGE = 1 + (scale - 1) * 1038 + (ioc->facts.MaxChainDepth-1) + scale + 1039 + (ioc->req_sz - 64) / (sizeof(dma_addr_t) + 1040 + sizeof(u32)); 1041 + } 1042 + 1043 + if (numSGE < sh->sg_tablesize) { 1044 + /* Reset this value */ 1045 + dprintk((MYIOC_s_INFO_FMT 1046 + "Resetting sg_tablesize to %d from %d\n", 1047 + ioc->name, numSGE, sh->sg_tablesize)); 1048 + sh->sg_tablesize = numSGE; 1049 + } 1050 + 1051 + spin_unlock_irqrestore(&ioc->FreeQlock, flags); 1052 + 1053 + hd = (MPT_SCSI_HOST *) sh->hostdata; 1054 + hd->ioc = ioc; 1055 + 1056 + /* SCSI needs scsi_cmnd lookup table! 1057 + * (with size equal to req_depth*PtrSz!) 1058 + */ 1059 + sz = ioc->req_depth * sizeof(void *); 1060 + mem = kmalloc(sz, GFP_ATOMIC); 1061 + if (mem == NULL) { 1062 + error = -ENOMEM; 1063 + goto mptsas_probe_failed; 1064 + } 1065 + 1066 + memset(mem, 0, sz); 1067 + hd->ScsiLookup = (struct scsi_cmnd **) mem; 1068 + 1069 + dprintk((MYIOC_s_INFO_FMT "ScsiLookup @ %p, sz=%d\n", 1070 + ioc->name, hd->ScsiLookup, sz)); 1071 + 1072 + /* Allocate memory for the device structures. 1073 + * A non-Null pointer at an offset 1074 + * indicates a device exists. 
1075 + * max_id = 1 + maximum id (hosts.h) 1076 + */ 1077 + sz = sh->max_id * sizeof(void *); 1078 + mem = kmalloc(sz, GFP_ATOMIC); 1079 + if (mem == NULL) { 1080 + error = -ENOMEM; 1081 + goto mptsas_probe_failed; 1082 + } 1083 + 1084 + memset(mem, 0, sz); 1085 + hd->Targets = (VirtDevice **) mem; 1086 + 1087 + dprintk((KERN_INFO 1088 + " Targets @ %p, sz=%d\n", hd->Targets, sz)); 1089 + 1090 + /* Clear the TM flags 1091 + */ 1092 + hd->tmPending = 0; 1093 + hd->tmState = TM_STATE_NONE; 1094 + hd->resetPending = 0; 1095 + hd->abortSCpnt = NULL; 1096 + 1097 + /* Clear the pointer used to store 1098 + * single-threaded commands, i.e., those 1099 + * issued during a bus scan, dv and 1100 + * configuration pages. 1101 + */ 1102 + hd->cmdPtr = NULL; 1103 + 1104 + /* Initialize this SCSI Hosts' timers 1105 + * To use, set the timer expires field 1106 + * and add_timer 1107 + */ 1108 + init_timer(&hd->timer); 1109 + hd->timer.data = (unsigned long) hd; 1110 + hd->timer.function = mptscsih_timer_expired; 1111 + 1112 + hd->mpt_pq_filter = mpt_pq_filter; 1113 + ioc->sas_data.ptClear = mpt_pt_clear; 1114 + 1115 + if (ioc->sas_data.ptClear==1) { 1116 + mptbase_sas_persist_operation( 1117 + ioc, MPI_SAS_OP_CLEAR_ALL_PERSISTENT); 1118 + } 1119 + 1120 + ddvprintk((MYIOC_s_INFO_FMT 1121 + "mpt_pq_filter %x mpt_pq_filter %x\n", 1122 + ioc->name, 1123 + mpt_pq_filter, 1124 + mpt_pq_filter)); 1125 + 1126 + init_waitqueue_head(&hd->scandv_waitq); 1127 + hd->scandv_wait_done = 0; 1128 + hd->last_queue_full = 0; 1129 + 1130 + error = scsi_add_host(sh, &ioc->pcidev->dev); 1131 + if (error) { 1132 + dprintk((KERN_ERR MYNAM 1133 + "scsi_add_host failed\n")); 1134 + goto mptsas_probe_failed; 1135 + } 1136 + 1137 + mptsas_scan_sas_topology(ioc); 1138 + 1139 + return 0; 1140 + 1141 + mptsas_probe_failed: 1142 + 1143 + mptscsih_remove(pdev); 1144 + return error; 1145 + } 1146 + 1147 + static void __devexit mptsas_remove(struct pci_dev *pdev) 1148 + { 1149 + MPT_ADAPTER *ioc = 
pci_get_drvdata(pdev); 1150 + struct mptsas_portinfo *p, *n; 1151 + 1152 + sas_remove_host(ioc->sh); 1153 + 1154 + list_for_each_entry_safe(p, n, &ioc->sas_topology, list) { 1155 + list_del(&p->list); 1156 + kfree(p); 1157 + } 1158 + 1159 + mptscsih_remove(pdev); 1160 + } 1161 + 1162 + static struct pci_device_id mptsas_pci_table[] = { 1163 + { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1064, 1164 + PCI_ANY_ID, PCI_ANY_ID }, 1165 + { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1066, 1166 + PCI_ANY_ID, PCI_ANY_ID }, 1167 + { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1068, 1168 + PCI_ANY_ID, PCI_ANY_ID }, 1169 + { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1064E, 1170 + PCI_ANY_ID, PCI_ANY_ID }, 1171 + { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1066E, 1172 + PCI_ANY_ID, PCI_ANY_ID }, 1173 + { PCI_VENDOR_ID_LSI_LOGIC, PCI_DEVICE_ID_LSI_SAS1068E, 1174 + PCI_ANY_ID, PCI_ANY_ID }, 1175 + {0} /* Terminating entry */ 1176 + }; 1177 + MODULE_DEVICE_TABLE(pci, mptsas_pci_table); 1178 + 1179 + 1180 + static struct pci_driver mptsas_driver = { 1181 + .name = "mptsas", 1182 + .id_table = mptsas_pci_table, 1183 + .probe = mptsas_probe, 1184 + .remove = __devexit_p(mptsas_remove), 1185 + .shutdown = mptscsih_shutdown, 1186 + #ifdef CONFIG_PM 1187 + .suspend = mptscsih_suspend, 1188 + .resume = mptscsih_resume, 1189 + #endif 1190 + }; 1191 + 1192 + static int __init 1193 + mptsas_init(void) 1194 + { 1195 + show_mptmod_ver(my_NAME, my_VERSION); 1196 + 1197 + mptsas_transport_template = 1198 + sas_attach_transport(&mptsas_transport_functions); 1199 + if (!mptsas_transport_template) 1200 + return -ENODEV; 1201 + 1202 + mptsasDoneCtx = mpt_register(mptscsih_io_done, MPTSAS_DRIVER); 1203 + mptsasTaskCtx = mpt_register(mptscsih_taskmgmt_complete, MPTSAS_DRIVER); 1204 + mptsasInternalCtx = 1205 + mpt_register(mptscsih_scandv_complete, MPTSAS_DRIVER); 1206 + 1207 + if (mpt_event_register(mptsasDoneCtx, mptscsih_event_process) == 0) { 1208 + devtprintk((KERN_INFO MYNAM 
1209 + ": Registered for IOC event notifications\n")); 1210 + } 1211 + 1212 + if (mpt_reset_register(mptsasDoneCtx, mptscsih_ioc_reset) == 0) { 1213 + dprintk((KERN_INFO MYNAM 1214 + ": Registered for IOC reset notifications\n")); 1215 + } 1216 + 1217 + return pci_register_driver(&mptsas_driver); 1218 + } 1219 + 1220 + static void __exit 1221 + mptsas_exit(void) 1222 + { 1223 + pci_unregister_driver(&mptsas_driver); 1224 + sas_release_transport(mptsas_transport_template); 1225 + 1226 + mpt_reset_deregister(mptsasDoneCtx); 1227 + mpt_event_deregister(mptsasDoneCtx); 1228 + 1229 + mpt_deregister(mptsasInternalCtx); 1230 + mpt_deregister(mptsasTaskCtx); 1231 + mpt_deregister(mptsasDoneCtx); 1232 + } 1233 + 1234 + module_init(mptsas_init); 1235 + module_exit(mptsas_exit);
+240 -223
drivers/message/fusion/mptscsih.c
··· 62 62 #include <scsi/scsi_device.h> 63 63 #include <scsi/scsi_host.h> 64 64 #include <scsi/scsi_tcq.h> 65 + #include <scsi/scsi_dbg.h> 65 66 66 67 #include "mptbase.h" 67 68 #include "mptscsih.h" ··· 94 93 95 94 #define MPT_ICFLAG_BUF_CAP 0x01 /* ReadBuffer Read Capacity format */ 96 95 #define MPT_ICFLAG_ECHO 0x02 /* ReadBuffer Echo buffer format */ 97 - #define MPT_ICFLAG_PHYS_DISK 0x04 /* Any SCSI IO but do Phys Disk Format */ 98 - #define MPT_ICFLAG_TAGGED_CMD 0x08 /* Do tagged IO */ 96 + #define MPT_ICFLAG_EBOS 0x04 /* ReadBuffer Echo buffer has EBOS */ 97 + #define MPT_ICFLAG_PHYS_DISK 0x08 /* Any SCSI IO but do Phys Disk Format */ 98 + #define MPT_ICFLAG_TAGGED_CMD 0x10 /* Do tagged IO */ 99 99 #define MPT_ICFLAG_DID_RESET 0x20 /* Bus Reset occurred with this command */ 100 100 #define MPT_ICFLAG_RESERVED 0x40 /* Reserved has been issued */ 101 101 ··· 161 159 static int mptscsih_do_cmd(MPT_SCSI_HOST *hd, INTERNAL_CMD *iocmd); 162 160 static int mptscsih_synchronize_cache(MPT_SCSI_HOST *hd, int portnum); 163 161 162 + static struct work_struct mptscsih_persistTask; 163 + 164 164 #ifdef MPTSCSIH_ENABLE_DOMAIN_VALIDATION 165 165 static int mptscsih_do_raid(MPT_SCSI_HOST *hd, u8 action, INTERNAL_CMD *io); 166 166 static void mptscsih_domainValidation(void *hd); ··· 171 167 static int mptscsih_doDv(MPT_SCSI_HOST *hd, int channel, int target); 172 168 static void mptscsih_dv_parms(MPT_SCSI_HOST *hd, DVPARAMETERS *dv,void *pPage); 173 169 static void mptscsih_fillbuf(char *buffer, int size, int index, int width); 170 + static void mptscsih_set_dvflags_raid(MPT_SCSI_HOST *hd, int id); 174 171 #endif 175 172 176 173 void mptscsih_remove(struct pci_dev *); ··· 611 606 xfer_cnt = le32_to_cpu(pScsiReply->TransferCount); 612 607 sc->resid = sc->request_bufflen - xfer_cnt; 613 608 609 + /* 610 + * if we get a data underrun indication, yet no data was 611 + * transferred and the SCSI status indicates that the 612 + * command was never started, change the data underrun 
613 + * to success 614 + */ 615 + if (status == MPI_IOCSTATUS_SCSI_DATA_UNDERRUN && xfer_cnt == 0 && 616 + (scsi_status == MPI_SCSI_STATUS_BUSY || 617 + scsi_status == MPI_SCSI_STATUS_RESERVATION_CONFLICT || 618 + scsi_status == MPI_SCSI_STATUS_TASK_SET_FULL)) { 619 + status = MPI_IOCSTATUS_SUCCESS; 620 + } 621 + 614 622 dreplyprintk((KERN_NOTICE "Reply ha=%d id=%d lun=%d:\n" 615 623 "IOCStatus=%04xh SCSIState=%02xh SCSIStatus=%02xh\n" 616 624 "resid=%d bufflen=%d xfer_cnt=%d\n", 617 625 ioc->id, pScsiReq->TargetID, pScsiReq->LUN[1], 618 - status, scsi_state, scsi_status, sc->resid, 626 + status, scsi_state, scsi_status, sc->resid, 619 627 sc->request_bufflen, xfer_cnt)); 620 628 621 629 if (scsi_state & MPI_SCSI_STATE_AUTOSENSE_VALID) ··· 637 619 /* 638 620 * Look for + dump FCP ResponseInfo[]! 639 621 */ 640 - if (scsi_state & MPI_SCSI_STATE_RESPONSE_INFO_VALID) { 641 - printk(KERN_NOTICE " FCP_ResponseInfo=%08xh\n", 622 + if (scsi_state & MPI_SCSI_STATE_RESPONSE_INFO_VALID && 623 + pScsiReply->ResponseInfo) { 624 + printk(KERN_NOTICE "ha=%d id=%d lun=%d: " 625 + "FCP_ResponseInfo=%08xh\n", 626 + ioc->id, pScsiReq->TargetID, pScsiReq->LUN[1], 642 627 le32_to_cpu(pScsiReply->ResponseInfo)); 643 628 } 644 629 ··· 682 661 break; 683 662 684 663 case MPI_IOCSTATUS_SCSI_RESIDUAL_MISMATCH: /* 0x0049 */ 685 - if ( xfer_cnt >= sc->underflow ) { 686 - /* Sufficient data transfer occurred */ 664 + sc->resid = sc->request_bufflen - xfer_cnt; 665 + if((xfer_cnt==0)||(sc->underflow > xfer_cnt)) 666 + sc->result=DID_SOFT_ERROR << 16; 667 + else /* Sufficient data transfer occurred */ 687 668 sc->result = (DID_OK << 16) | scsi_status; 688 - } else if ( xfer_cnt == 0 ) { 689 - /* A CRC Error causes this condition; retry */ 690 - sc->result = (DRIVER_SENSE << 24) | (DID_OK << 16) | 691 - (CHECK_CONDITION << 1); 692 - sc->sense_buffer[0] = 0x70; 693 - sc->sense_buffer[2] = NO_SENSE; 694 - sc->sense_buffer[12] = 0; 695 - sc->sense_buffer[13] = 0; 696 - } else { 697 - sc->result = 
DID_SOFT_ERROR << 16; 698 - } 699 - dreplyprintk((KERN_NOTICE 700 - "RESIDUAL_MISMATCH: result=%x on id=%d\n", 701 - sc->result, sc->device->id)); 669 + dreplyprintk((KERN_NOTICE 670 + "RESIDUAL_MISMATCH: result=%x on id=%d\n", sc->result, sc->device->id)); 702 671 break; 703 672 704 673 case MPI_IOCSTATUS_SCSI_DATA_UNDERRUN: /* 0x0045 */ ··· 703 692 ; 704 693 } else { 705 694 if (xfer_cnt < sc->underflow) { 706 - sc->result = DID_SOFT_ERROR << 16; 695 + if (scsi_status == SAM_STAT_BUSY) 696 + sc->result = SAM_STAT_BUSY; 697 + else 698 + sc->result = DID_SOFT_ERROR << 16; 707 699 } 708 700 if (scsi_state & (MPI_SCSI_STATE_AUTOSENSE_FAILED | MPI_SCSI_STATE_NO_SCSI_STATUS)) { 709 701 /* What to do? ··· 731 717 732 718 case MPI_IOCSTATUS_SCSI_RECOVERED_ERROR: /* 0x0040 */ 733 719 case MPI_IOCSTATUS_SUCCESS: /* 0x0000 */ 734 - scsi_status = pScsiReply->SCSIStatus; 735 - sc->result = (DID_OK << 16) | scsi_status; 720 + if (scsi_status == MPI_SCSI_STATUS_BUSY) 721 + sc->result = (DID_BUS_BUSY << 16) | scsi_status; 722 + else 723 + sc->result = (DID_OK << 16) | scsi_status; 736 724 if (scsi_state == 0) { 737 725 ; 738 726 } else if (scsi_state & MPI_SCSI_STATE_AUTOSENSE_VALID) { ··· 906 890 SCSIIORequest_t *mf = NULL; 907 891 int ii; 908 892 int max = hd->ioc->req_depth; 893 + struct scsi_cmnd *sc; 909 894 910 895 dsprintk((KERN_INFO MYNAM ": search_running target %d lun %d max %d\n", 911 896 target, lun, max)); 912 897 913 898 for (ii=0; ii < max; ii++) { 914 - if (hd->ScsiLookup[ii] != NULL) { 899 + if ((sc = hd->ScsiLookup[ii]) != NULL) { 915 900 916 901 mf = (SCSIIORequest_t *)MPT_INDEX_2_MFPTR(hd->ioc, ii); 917 902 ··· 927 910 hd->ScsiLookup[ii] = NULL; 928 911 mptscsih_freeChainBuffers(hd->ioc, ii); 929 912 mpt_free_msg_frame(hd->ioc, (MPT_FRAME_HDR *)mf); 913 + if (sc->use_sg) { 914 + pci_unmap_sg(hd->ioc->pcidev, 915 + (struct scatterlist *) sc->request_buffer, 916 + sc->use_sg, 917 + sc->sc_data_direction); 918 + } else if (sc->request_bufflen) { 919 + 
pci_unmap_single(hd->ioc->pcidev, 920 + sc->SCp.dma_handle, 921 + sc->request_bufflen, 922 + sc->sc_data_direction); 923 + } 924 + sc->host_scribble = NULL; 925 + sc->result = DID_NO_CONNECT << 16; 926 + sc->scsi_done(sc); 930 927 } 931 928 } 932 - 933 929 return; 934 930 } 935 931 ··· 997 967 unsigned long flags; 998 968 int sz1; 999 969 1000 - if(!host) 970 + if(!host) { 971 + mpt_detach(pdev); 1001 972 return; 973 + } 1002 974 1003 975 scsi_remove_host(host); 1004 976 ··· 1288 1256 MPT_SCSI_HOST *hd; 1289 1257 MPT_FRAME_HDR *mf; 1290 1258 SCSIIORequest_t *pScsiReq; 1291 - VirtDevice *pTarget; 1292 - int target; 1259 + VirtDevice *pTarget = SCpnt->device->hostdata; 1293 1260 int lun; 1294 1261 u32 datalen; 1295 1262 u32 scsictl; ··· 1298 1267 int ii; 1299 1268 1300 1269 hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata; 1301 - target = SCpnt->device->id; 1302 1270 lun = SCpnt->device->lun; 1303 1271 SCpnt->scsi_done = done; 1304 - 1305 - pTarget = hd->Targets[target]; 1306 1272 1307 1273 dmfprintk((MYIOC_s_INFO_FMT "qcmd: SCpnt=%p, done()=%p\n", 1308 1274 (hd && hd->ioc) ? hd->ioc->name : "ioc?", SCpnt, done)); ··· 1343 1315 /* Default to untagged. Once a target structure has been allocated, 1344 1316 * use the Inquiry data to determine if device supports tagged. 
1345 1317 */ 1346 - if ( pTarget 1318 + if (pTarget 1347 1319 && (pTarget->tflags & MPT_TARGET_FLAGS_Q_YES) 1348 1320 && (SCpnt->device->tagged_supported)) { 1349 1321 scsictl = scsidir | MPI_SCSIIO_CONTROL_SIMPLEQ; ··· 1353 1325 1354 1326 /* Use the above information to set up the message frame 1355 1327 */ 1356 - pScsiReq->TargetID = (u8) target; 1357 - pScsiReq->Bus = (u8) SCpnt->device->channel; 1328 + pScsiReq->TargetID = (u8) pTarget->target_id; 1329 + pScsiReq->Bus = pTarget->bus_id; 1358 1330 pScsiReq->ChainOffset = 0; 1359 1331 pScsiReq->Function = MPI_FUNCTION_SCSI_IO_REQUEST; 1360 1332 pScsiReq->CDBLength = SCpnt->cmd_len; ··· 1406 1378 1407 1379 #ifdef MPTSCSIH_ENABLE_DOMAIN_VALIDATION 1408 1380 if (hd->ioc->bus_type == SCSI) { 1409 - int dvStatus = hd->ioc->spi_data.dvStatus[target]; 1381 + int dvStatus = hd->ioc->spi_data.dvStatus[pTarget->target_id]; 1410 1382 int issueCmd = 1; 1411 1383 1412 1384 if (dvStatus || hd->ioc->spi_data.forceDv) { ··· 1454 1426 return 0; 1455 1427 1456 1428 fail: 1429 + hd->ScsiLookup[my_idx] = NULL; 1457 1430 mptscsih_freeChainBuffers(hd->ioc, my_idx); 1458 1431 mpt_free_msg_frame(hd->ioc, mf); 1459 1432 return SCSI_MLQUEUE_HOST_BUSY; ··· 1742 1713 MPT_FRAME_HDR *mf; 1743 1714 u32 ctx2abort; 1744 1715 int scpnt_idx; 1716 + int retval; 1745 1717 1746 1718 /* If we can't locate our host adapter structure, return FAILED status. 1747 1719 */ 1748 1720 if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL) { 1749 1721 SCpnt->result = DID_RESET << 16; 1750 1722 SCpnt->scsi_done(SCpnt); 1751 - dfailprintk((KERN_WARNING MYNAM ": mptscsih_abort: " 1723 + dfailprintk((KERN_INFO MYNAM ": mptscsih_abort: " 1752 1724 "Can't locate host! (sc=%p)\n", 1753 1725 SCpnt)); 1754 1726 return FAILED; 1755 1727 } 1756 1728 1757 1729 ioc = hd->ioc; 1758 - if (hd->resetPending) 1730 + if (hd->resetPending) { 1759 1731 return FAILED; 1760 - 1761 - printk(KERN_WARNING MYNAM ": %s: >> Attempting task abort! 
(sc=%p)\n", 1762 - hd->ioc->name, SCpnt); 1732 + } 1763 1733 1764 1734 if (hd->timeouts < -1) 1765 1735 hd->timeouts++; ··· 1766 1738 /* Find this command 1767 1739 */ 1768 1740 if ((scpnt_idx = SCPNT_TO_LOOKUP_IDX(SCpnt)) < 0) { 1769 - /* Cmd not found in ScsiLookup. 1741 + /* Cmd not found in ScsiLookup. 1770 1742 * Do OS callback. 1771 1743 */ 1772 1744 SCpnt->result = DID_RESET << 16; 1773 - dtmprintk((KERN_WARNING MYNAM ": %s: mptscsih_abort: " 1745 + dtmprintk((KERN_INFO MYNAM ": %s: mptscsih_abort: " 1774 1746 "Command not in the active list! (sc=%p)\n", 1775 1747 hd->ioc->name, SCpnt)); 1776 1748 return SUCCESS; 1777 1749 } 1750 + 1751 + printk(KERN_WARNING MYNAM ": %s: attempting task abort! (sc=%p)\n", 1752 + hd->ioc->name, SCpnt); 1753 + scsi_print_command(SCpnt); 1778 1754 1779 1755 /* Most important! Set TaskMsgContext to SCpnt's MsgContext! 1780 1756 * (the IO to be ABORT'd) ··· 1792 1760 1793 1761 hd->abortSCpnt = SCpnt; 1794 1762 1795 - if (mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_ABORT_TASK, 1763 + retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_ABORT_TASK, 1796 1764 SCpnt->device->channel, SCpnt->device->id, SCpnt->device->lun, 1797 - ctx2abort, 2 /* 2 second timeout */) 1798 - < 0) { 1765 + ctx2abort, 2 /* 2 second timeout */); 1799 1766 1800 - /* The TM request failed and the subsequent FW-reload failed! 1801 - * Fatal error case. 1802 - */ 1803 - printk(MYIOC_s_WARN_FMT "Error issuing abort task! (sc=%p)\n", 1804 - hd->ioc->name, SCpnt); 1767 + printk (KERN_WARNING MYNAM ": %s: task abort: %s (sc=%p)\n", 1768 + hd->ioc->name, 1769 + ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt); 1805 1770 1806 - /* We must clear our pending flag before clearing our state. 1807 - */ 1771 + if (retval == 0) 1772 + return SUCCESS; 1773 + 1774 + if(retval != FAILED ) { 1808 1775 hd->tmPending = 0; 1809 1776 hd->tmState = TM_STATE_NONE; 1810 - 1811 - /* Unmap the DMA buffers, if any. 
*/ 1812 - if (SCpnt->use_sg) { 1813 - pci_unmap_sg(ioc->pcidev, (struct scatterlist *) SCpnt->request_buffer, 1814 - SCpnt->use_sg, SCpnt->sc_data_direction); 1815 - } else if (SCpnt->request_bufflen) { 1816 - pci_unmap_single(ioc->pcidev, SCpnt->SCp.dma_handle, 1817 - SCpnt->request_bufflen, SCpnt->sc_data_direction); 1818 - } 1819 - hd->ScsiLookup[scpnt_idx] = NULL; 1820 - SCpnt->result = DID_RESET << 16; 1821 - SCpnt->scsi_done(SCpnt); /* Issue the command callback */ 1822 - mptscsih_freeChainBuffers(ioc, scpnt_idx); 1823 - mpt_free_msg_frame(ioc, mf); 1824 - return FAILED; 1825 1777 } 1826 - return SUCCESS; 1778 + return FAILED; 1827 1779 } 1828 1780 1829 1781 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 1823 1807 mptscsih_dev_reset(struct scsi_cmnd * SCpnt) 1824 1808 { 1825 1809 MPT_SCSI_HOST *hd; 1810 + int retval; 1826 1811 1827 1812 /* If we can't locate our host adapter structure, return FAILED status. 1828 1813 */ 1829 1814 if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL){ 1830 - dtmprintk((KERN_WARNING MYNAM ": mptscsih_dev_reset: " 1815 + dtmprintk((KERN_INFO MYNAM ": mptscsih_dev_reset: " 1831 1816 "Can't locate host! (sc=%p)\n", 1832 1817 SCpnt)); 1833 1818 return FAILED; ··· 1837 1820 if (hd->resetPending) 1838 1821 return FAILED; 1839 1822 1840 - printk(KERN_WARNING MYNAM ": %s: >> Attempting target reset! (sc=%p)\n", 1823 + printk(KERN_WARNING MYNAM ": %s: attempting target reset! (sc=%p)\n", 1841 1824 hd->ioc->name, SCpnt); 1825 + scsi_print_command(SCpnt); 1842 1826 1843 - if (mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 1827 + retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 1844 1828 SCpnt->device->channel, SCpnt->device->id, 1845 - 0, 0, 5 /* 5 second timeout */) 1846 - < 0){ 1847 - /* The TM request failed and the subsequent FW-reload failed! 1848 - * Fatal error case. 
1849 - */ 1850 - printk(MYIOC_s_WARN_FMT "Error processing TaskMgmt request (sc=%p)\n", 1851 - hd->ioc->name, SCpnt); 1829 + 0, 0, 5 /* 5 second timeout */); 1830 + 1831 + printk (KERN_WARNING MYNAM ": %s: target reset: %s (sc=%p)\n", 1832 + hd->ioc->name, 1833 + ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt); 1834 + 1835 + if (retval == 0) 1836 + return SUCCESS; 1837 + 1838 + if(retval != FAILED ) { 1852 1839 hd->tmPending = 0; 1853 1840 hd->tmState = TM_STATE_NONE; 1854 - return FAILED; 1855 1841 } 1856 - 1857 - return SUCCESS; 1842 + return FAILED; 1858 1843 } 1859 1844 1860 1845 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 1872 1853 mptscsih_bus_reset(struct scsi_cmnd * SCpnt) 1873 1854 { 1874 1855 MPT_SCSI_HOST *hd; 1875 - spinlock_t *host_lock = SCpnt->device->host->host_lock; 1856 + int retval; 1876 1857 1877 1858 /* If we can't locate our host adapter structure, return FAILED status. 1878 1859 */ 1879 1860 if ((hd = (MPT_SCSI_HOST *) SCpnt->device->host->hostdata) == NULL){ 1880 - dtmprintk((KERN_WARNING MYNAM ": mptscsih_bus_reset: " 1861 + dtmprintk((KERN_INFO MYNAM ": mptscsih_bus_reset: " 1881 1862 "Can't locate host! (sc=%p)\n", 1882 1863 SCpnt ) ); 1883 1864 return FAILED; 1884 1865 } 1885 1866 1886 - printk(KERN_WARNING MYNAM ": %s: >> Attempting bus reset! (sc=%p)\n", 1867 + printk(KERN_WARNING MYNAM ": %s: attempting bus reset! (sc=%p)\n", 1887 1868 hd->ioc->name, SCpnt); 1869 + scsi_print_command(SCpnt); 1888 1870 1889 1871 if (hd->timeouts < -1) 1890 1872 hd->timeouts++; 1891 1873 1892 - /* We are now ready to execute the task management request. 
*/ 1893 - if (mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_RESET_BUS, 1894 - SCpnt->device->channel, 0, 0, 0, 5 /* 5 second timeout */) 1895 - < 0){ 1874 + retval = mptscsih_TMHandler(hd, MPI_SCSITASKMGMT_TASKTYPE_RESET_BUS, 1875 + SCpnt->device->channel, 0, 0, 0, 5 /* 5 second timeout */); 1896 1876 1897 - /* The TM request failed and the subsequent FW-reload failed! 1898 - * Fatal error case. 1899 - */ 1900 - printk(MYIOC_s_WARN_FMT 1901 - "Error processing TaskMgmt request (sc=%p)\n", 1902 - hd->ioc->name, SCpnt); 1877 + printk (KERN_WARNING MYNAM ": %s: bus reset: %s (sc=%p)\n", 1878 + hd->ioc->name, 1879 + ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt); 1880 + 1881 + if (retval == 0) 1882 + return SUCCESS; 1883 + 1884 + if(retval != FAILED ) { 1903 1885 hd->tmPending = 0; 1904 1886 hd->tmState = TM_STATE_NONE; 1905 - spin_lock_irq(host_lock); 1906 - return FAILED; 1907 1887 } 1908 - 1909 - return SUCCESS; 1888 + return FAILED; 1910 1889 } 1911 1890 1912 1891 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ ··· 2186 2169 vdev->raidVolume = 0; 2187 2170 hd->Targets[device->id] = vdev; 2188 2171 if (hd->ioc->bus_type == SCSI) { 2189 - if (hd->ioc->spi_data.isRaid & (1 << device->id)) { 2172 + if (hd->ioc->raid_data.isRaid & (1 << device->id)) { 2190 2173 vdev->raidVolume = 1; 2191 2174 ddvtprintk((KERN_INFO 2192 2175 "RAID Volume @ id %d\n", device->id)); ··· 2197 2180 2198 2181 out: 2199 2182 vdev->num_luns++; 2200 - return 0; 2201 - } 2202 - 2203 - static int 2204 - mptscsih_is_raid_volume(MPT_SCSI_HOST *hd, uint id) 2205 - { 2206 - int i; 2207 - 2208 - if (!hd->ioc->spi_data.isRaid || !hd->ioc->spi_data.pIocPg3) 2209 - return 0; 2210 - 2211 - for (i = 0; i < hd->ioc->spi_data.pIocPg3->NumPhysDisks; i++) { 2212 - if (id == hd->ioc->spi_data.pIocPg3->PhysDisk[i].PhysDiskID) 2213 - return 1; 2214 - } 2215 - 2183 + device->hostdata = vdev; 2216 2184 return 0; 2217 2185 } 2218 2186 ··· 2228 2226 hd->Targets[target] = NULL; 2229 
2227 2230 2228 if (hd->ioc->bus_type == SCSI) { 2231 - if (mptscsih_is_raid_volume(hd, target)) { 2229 + if (mptscsih_is_phys_disk(hd->ioc, target)) { 2232 2230 hd->ioc->spi_data.forceDv |= MPT_SCSICFG_RELOAD_IOC_PG3; 2233 2231 } else { 2234 2232 hd->ioc->spi_data.dvStatus[target] = ··· 2441 2439 { 2442 2440 MPT_SCSI_HOST *hd; 2443 2441 unsigned long flags; 2442 + int ii; 2444 2443 2445 2444 dtmprintk((KERN_WARNING MYNAM 2446 2445 ": IOC %s_reset routed to SCSI host driver!\n", ··· 2499 2496 2500 2497 /* ScsiLookup initialization 2501 2498 */ 2502 - { 2503 - int ii; 2504 - for (ii=0; ii < hd->ioc->req_depth; ii++) 2505 - hd->ScsiLookup[ii] = NULL; 2506 - } 2499 + for (ii=0; ii < hd->ioc->req_depth; ii++) 2500 + hd->ScsiLookup[ii] = NULL; 2507 2501 2508 2502 /* 2. Chain Buffer initialization 2509 2503 */ ··· 2549 2549 } 2550 2550 2551 2551 /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 2552 + /* work queue thread to clear the persistent table */ 2553 + static void 2554 + mptscsih_sas_persist_clear_table(void * arg) 2555 + { 2556 + MPT_ADAPTER *ioc = (MPT_ADAPTER *)arg; 2557 + 2558 + mptbase_sas_persist_operation(ioc, MPI_SAS_OP_CLEAR_NOT_PRESENT); 2559 + } 2560 + 2561 + /*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/ 2552 2562 int 2553 2563 mptscsih_event_process(MPT_ADAPTER *ioc, EventNotificationReply_t *pEvReply) 2554 2564 { ··· 2568 2558 devtprintk((MYIOC_s_INFO_FMT "MPT event (=%02Xh) routed to SCSI host driver!\n", 2569 2559 ioc->name, event)); 2570 2560 2561 + if (ioc->sh == NULL || 2562 + ((hd = (MPT_SCSI_HOST *)ioc->sh->hostdata) == NULL)) 2563 + return 1; 2564 + 2571 2565 switch (event) { 2572 2566 case MPI_EVENT_UNIT_ATTENTION: /* 03 */ 2573 2567 /* FIXME!
*/ 2574 2568 break; 2575 2569 case MPI_EVENT_IOC_BUS_RESET: /* 04 */ 2576 2570 case MPI_EVENT_EXT_BUS_RESET: /* 05 */ 2577 - hd = NULL; 2578 - if (ioc->sh) { 2579 - hd = (MPT_SCSI_HOST *) ioc->sh->hostdata; 2580 - if (hd && (ioc->bus_type == SCSI) && (hd->soft_resets < -1)) 2581 - hd->soft_resets++; 2582 - } 2571 + if (hd && (ioc->bus_type == SCSI) && (hd->soft_resets < -1)) 2572 + hd->soft_resets++; 2583 2573 break; 2584 2574 case MPI_EVENT_LOGOUT: /* 09 */ 2585 2575 /* FIXME! */ ··· 2598 2588 break; 2599 2589 2600 2590 case MPI_EVENT_INTEGRATED_RAID: /* 0B */ 2591 + { 2592 + pMpiEventDataRaid_t pRaidEventData = 2593 + (pMpiEventDataRaid_t) pEvReply->Data; 2601 2594 #ifdef MPTSCSIH_ENABLE_DOMAIN_VALIDATION 2602 - /* negoNvram set to 0 if DV enabled and to USE_NVRAM if 2603 - * if DV disabled. Need to check for target mode. 2604 - */ 2605 - hd = NULL; 2606 - if (ioc->sh) 2607 - hd = (MPT_SCSI_HOST *) ioc->sh->hostdata; 2608 - 2609 - if (hd && (ioc->bus_type == SCSI) && (hd->negoNvram == 0)) { 2610 - ScsiCfgData *pSpi; 2611 - Ioc3PhysDisk_t *pPDisk; 2612 - int numPDisk; 2613 - u8 reason; 2614 - u8 physDiskNum; 2615 - 2616 - reason = (le32_to_cpu(pEvReply->Data[0]) & 0x00FF0000) >> 16; 2617 - if (reason == MPI_EVENT_RAID_RC_DOMAIN_VAL_NEEDED) { 2618 - /* New or replaced disk. 2619 - * Set DV flag and schedule DV. 
2620 - */ 2621 - pSpi = &ioc->spi_data; 2622 - physDiskNum = (le32_to_cpu(pEvReply->Data[0]) & 0xFF000000) >> 24; 2623 - ddvtprintk(("DV requested for phys disk id %d\n", physDiskNum)); 2624 - if (pSpi->pIocPg3) { 2625 - pPDisk = pSpi->pIocPg3->PhysDisk; 2626 - numPDisk =pSpi->pIocPg3->NumPhysDisks; 2627 - 2628 - while (numPDisk) { 2629 - if (physDiskNum == pPDisk->PhysDiskNum) { 2630 - pSpi->dvStatus[pPDisk->PhysDiskID] = (MPT_SCSICFG_NEED_DV | MPT_SCSICFG_DV_NOT_DONE); 2631 - pSpi->forceDv = MPT_SCSICFG_NEED_DV; 2632 - ddvtprintk(("NEED_DV set for phys disk id %d\n", pPDisk->PhysDiskID)); 2633 - break; 2634 - } 2635 - pPDisk++; 2636 - numPDisk--; 2637 - } 2638 - 2639 - if (numPDisk == 0) { 2640 - /* The physical disk that needs DV was not found 2641 - * in the stored IOC Page 3. The driver must reload 2642 - * this page. DV routine will set the NEED_DV flag for 2643 - * all phys disks that have DV_NOT_DONE set. 2644 - */ 2645 - pSpi->forceDv = MPT_SCSICFG_NEED_DV | MPT_SCSICFG_RELOAD_IOC_PG3; 2646 - ddvtprintk(("phys disk %d not found. Setting reload IOC Pg3 Flag\n", physDiskNum)); 2647 - } 2648 - } 2649 - } 2650 - } 2595 + /* Domain Validation Needed */ 2596 + if (ioc->bus_type == SCSI && 2597 + pRaidEventData->ReasonCode == 2598 + MPI_EVENT_RAID_RC_DOMAIN_VAL_NEEDED) 2599 + mptscsih_set_dvflags_raid(hd, pRaidEventData->PhysDiskNum); 2651 2600 #endif 2601 + break; 2602 + } 2652 2603 2653 - #if defined(MPT_DEBUG_DV) || defined(MPT_DEBUG_DV_TINY) 2654 - printk("Raid Event RF: "); 2655 - { 2656 - u32 *m = (u32 *)pEvReply; 2657 - int ii; 2658 - int n = (int)pEvReply->MsgLength; 2659 - for (ii=6; ii < n; ii++) 2660 - printk(" %08x", le32_to_cpu(m[ii])); 2661 - printk("\n"); 2662 - } 2663 - #endif 2604 + /* Persistent table is full. 
*/ 2605 + case MPI_EVENT_PERSISTENT_TABLE_FULL: 2606 + INIT_WORK(&mptscsih_persistTask, 2607 + mptscsih_sas_persist_clear_table,(void *)ioc); 2608 + schedule_work(&mptscsih_persistTask); 2664 2609 break; 2665 2610 2666 2611 case MPI_EVENT_NONE: /* 00 */ ··· 2652 2687 { 2653 2688 int indexed_lun, lun_index; 2654 2689 VirtDevice *vdev; 2655 - ScsiCfgData *pSpi; 2690 + SpiCfgData *pSpi; 2656 2691 char data_56; 2657 2692 2658 2693 dinitprintk((MYIOC_s_INFO_FMT "initTarget bus=%d id=%d lun=%d hd=%p\n", ··· 2759 2794 static void 2760 2795 mptscsih_setTargetNegoParms(MPT_SCSI_HOST *hd, VirtDevice *target, char byte56) 2761 2796 { 2762 - ScsiCfgData *pspi_data = &hd->ioc->spi_data; 2797 + SpiCfgData *pspi_data = &hd->ioc->spi_data; 2763 2798 int id = (int) target->target_id; 2764 2799 int nvram; 2765 2800 VirtDevice *vdev; ··· 2938 2973 static void 2939 2974 mptscsih_set_dvflags(MPT_SCSI_HOST *hd, SCSIIORequest_t *pReq) 2940 2975 { 2976 + MPT_ADAPTER *ioc = hd->ioc; 2941 2977 u8 cmd; 2942 - ScsiCfgData *pSpi; 2978 + SpiCfgData *pSpi; 2943 2979 2944 - ddvtprintk((" set_dvflags: id=%d lun=%d negoNvram=%x cmd=%x\n", 2945 - pReq->TargetID, pReq->LUN[1], hd->negoNvram, pReq->CDB[0])); 2980 + ddvtprintk((MYIOC_s_NOTE_FMT 2981 + " set_dvflags: id=%d lun=%d negoNvram=%x cmd=%x\n", 2982 + hd->ioc->name, pReq->TargetID, pReq->LUN[1], hd->negoNvram, pReq->CDB[0])); 2946 2983 2947 2984 if ((pReq->LUN[1] != 0) || (hd->negoNvram != 0)) 2948 2985 return; ··· 2952 2985 cmd = pReq->CDB[0]; 2953 2986 2954 2987 if ((cmd == READ_CAPACITY) || (cmd == MODE_SENSE)) { 2955 - pSpi = &hd->ioc->spi_data; 2956 - if ((pSpi->isRaid & (1 << pReq->TargetID)) && pSpi->pIocPg3) { 2988 + pSpi = &ioc->spi_data; 2989 + if ((ioc->raid_data.isRaid & (1 << pReq->TargetID)) && ioc->raid_data.pIocPg3) { 2957 2990 /* Set NEED_DV for all hidden disks 2958 2991 */ 2959 - Ioc3PhysDisk_t *pPDisk = pSpi->pIocPg3->PhysDisk; 2960 - int numPDisk = pSpi->pIocPg3->NumPhysDisks; 2992 + Ioc3PhysDisk_t *pPDisk = 
ioc->raid_data.pIocPg3->PhysDisk; 2993 + int numPDisk = ioc->raid_data.pIocPg3->NumPhysDisks; 2961 2994 2962 2995 while (numPDisk) { 2963 2996 pSpi->dvStatus[pPDisk->PhysDiskID] |= MPT_SCSICFG_NEED_DV; ··· 2968 3001 } 2969 3002 pSpi->dvStatus[pReq->TargetID] |= MPT_SCSICFG_NEED_DV; 2970 3003 ddvtprintk(("NEED_DV set for visible disk id %d\n", pReq->TargetID)); 3004 + } 3005 + } 3006 + 3007 + /* mptscsih_raid_set_dv_flags() 3008 + * 3009 + * New or replaced disk. Set DV flag and schedule DV. 3010 + */ 3011 + static void 3012 + mptscsih_set_dvflags_raid(MPT_SCSI_HOST *hd, int id) 3013 + { 3014 + MPT_ADAPTER *ioc = hd->ioc; 3015 + SpiCfgData *pSpi = &ioc->spi_data; 3016 + Ioc3PhysDisk_t *pPDisk; 3017 + int numPDisk; 3018 + 3019 + if (hd->negoNvram != 0) 3020 + return; 3021 + 3022 + ddvtprintk(("DV requested for phys disk id %d\n", id)); 3023 + if (ioc->raid_data.pIocPg3) { 3024 + pPDisk = ioc->raid_data.pIocPg3->PhysDisk; 3025 + numPDisk = ioc->raid_data.pIocPg3->NumPhysDisks; 3026 + while (numPDisk) { 3027 + if (id == pPDisk->PhysDiskNum) { 3028 + pSpi->dvStatus[pPDisk->PhysDiskID] = 3029 + (MPT_SCSICFG_NEED_DV | MPT_SCSICFG_DV_NOT_DONE); 3030 + pSpi->forceDv = MPT_SCSICFG_NEED_DV; 3031 + ddvtprintk(("NEED_DV set for phys disk id %d\n", 3032 + pPDisk->PhysDiskID)); 3033 + break; 3034 + } 3035 + pPDisk++; 3036 + numPDisk--; 3037 + } 3038 + 3039 + if (numPDisk == 0) { 3040 + /* The physical disk that needs DV was not found 3041 + * in the stored IOC Page 3. The driver must reload 3042 + * this page. DV routine will set the NEED_DV flag for 3043 + * all phys disks that have DV_NOT_DONE set. 3044 + */ 3045 + pSpi->forceDv = MPT_SCSICFG_NEED_DV | MPT_SCSICFG_RELOAD_IOC_PG3; 3046 + ddvtprintk(("phys disk %d not found. 
Setting reload IOC Pg3 Flag\n",id)); 3047 + } 2971 3048 } 2972 3049 } 2973 3050 ··· 3102 3091 MPT_ADAPTER *ioc = hd->ioc; 3103 3092 Config_t *pReq; 3104 3093 SCSIDevicePage1_t *pData; 3105 - VirtDevice *pTarget; 3094 + VirtDevice *pTarget=NULL; 3106 3095 MPT_FRAME_HDR *mf; 3107 3096 dma_addr_t dataDma; 3108 3097 u16 req_idx; ··· 3201 3190 #endif 3202 3191 3203 3192 if (flags & MPT_SCSICFG_BLK_NEGO) 3204 - negoFlags = MPT_TARGET_NO_NEGO_WIDE | MPT_TARGET_NO_NEGO_SYNC; 3193 + negoFlags |= MPT_TARGET_NO_NEGO_WIDE | MPT_TARGET_NO_NEGO_SYNC; 3205 3194 3206 3195 mptscsih_setDevicePage1Flags(width, factor, offset, 3207 3196 &requested, &configuration, negoFlags); ··· 4022 4011 4023 4012 /* If target Ptr NULL or if this target is NOT a disk, skip. 4024 4013 */ 4025 - if ((pTarget) && (pTarget->tflags & MPT_TARGET_FLAGS_Q_YES)){ 4014 + if ((pTarget) && (pTarget->inq_data[0] == TYPE_DISK)){ 4026 4015 for (lun=0; lun <= MPT_LAST_LUN; lun++) { 4027 4016 /* If LUN present, issue the command 4028 4017 */ ··· 4117 4106 4118 4107 if ((ioc->spi_data.forceDv & MPT_SCSICFG_RELOAD_IOC_PG3) != 0) { 4119 4108 mpt_read_ioc_pg_3(ioc); 4120 - if (ioc->spi_data.pIocPg3) { 4121 - Ioc3PhysDisk_t *pPDisk = ioc->spi_data.pIocPg3->PhysDisk; 4122 - int numPDisk = ioc->spi_data.pIocPg3->NumPhysDisks; 4109 + if (ioc->raid_data.pIocPg3) { 4110 + Ioc3PhysDisk_t *pPDisk = ioc->raid_data.pIocPg3->PhysDisk; 4111 + int numPDisk = ioc->raid_data.pIocPg3->NumPhysDisks; 4123 4112 4124 4113 while (numPDisk) { 4125 4114 if (ioc->spi_data.dvStatus[pPDisk->PhysDiskID] & MPT_SCSICFG_DV_NOT_DONE) ··· 4158 4147 isPhysDisk = mptscsih_is_phys_disk(ioc, id); 4159 4148 if (isPhysDisk) { 4160 4149 for (ii=0; ii < MPT_MAX_SCSI_DEVICES; ii++) { 4161 - if (hd->ioc->spi_data.isRaid & (1 << ii)) { 4150 + if (hd->ioc->raid_data.isRaid & (1 << ii)) { 4162 4151 hd->ioc->spi_data.dvStatus[ii] |= MPT_SCSICFG_DV_PENDING; 4163 4152 } 4164 4153 } ··· 4177 4166 4178 4167 if (isPhysDisk) { 4179 4168 for (ii=0; ii < 
MPT_MAX_SCSI_DEVICES; ii++) { 4180 - if (hd->ioc->spi_data.isRaid & (1 << ii)) { 4169 + if (hd->ioc->raid_data.isRaid & (1 << ii)) { 4181 4170 hd->ioc->spi_data.dvStatus[ii] &= ~MPT_SCSICFG_DV_PENDING; 4182 4171 } 4183 4172 } ··· 4199 4188 4200 4189 /* Search IOC page 3 to determine if this is hidden physical disk 4201 4190 */ 4202 - static int 4191 + /* Search IOC page 3 to determine if this is hidden physical disk 4192 + */ 4193 + static int 4203 4194 mptscsih_is_phys_disk(MPT_ADAPTER *ioc, int id) 4204 4195 { 4205 - if (ioc->spi_data.pIocPg3) { 4206 - Ioc3PhysDisk_t *pPDisk = ioc->spi_data.pIocPg3->PhysDisk; 4207 - int numPDisk = ioc->spi_data.pIocPg3->NumPhysDisks; 4196 + int i; 4208 4197 4209 - while (numPDisk) { 4210 - if (pPDisk->PhysDiskID == id) { 4211 - return 1; 4212 - } 4213 - pPDisk++; 4214 - numPDisk--; 4215 - } 4198 + if (!ioc->raid_data.isRaid || !ioc->raid_data.pIocPg3) 4199 + return 0; 4200 + 4201 + for (i = 0; i < ioc->raid_data.pIocPg3->NumPhysDisks; i++) { 4202 + if (id == ioc->raid_data.pIocPg3->PhysDisk[i].PhysDiskID) 4203 + return 1; 4216 4204 } 4205 + 4217 4206 return 0; 4218 4207 } 4219 4208 ··· 4419 4408 /* Skip this ID? Set cfg.cfghdr.hdr to force config page write 4420 4409 */ 4421 4410 { 4422 - ScsiCfgData *pspi_data = &hd->ioc->spi_data; 4411 + SpiCfgData *pspi_data = &hd->ioc->spi_data; 4423 4412 if (pspi_data->nvram && (pspi_data->nvram[id] != MPT_HOST_NVRAM_INVALID)) { 4424 4413 /* Set the factor from nvram */ 4425 4414 nfactor = (pspi_data->nvram[id] & MPT_NVRAM_SYNC_MASK) >> 8; ··· 4449 4438 } 4450 4439 4451 4440 /* Finish iocmd inititialization - hidden or visible disk? 
*/ 4452 - if (ioc->spi_data.pIocPg3) { 4441 + if (ioc->raid_data.pIocPg3) { 4453 4442 /* Search IOC page 3 for matching id 4454 4443 */ 4455 - Ioc3PhysDisk_t *pPDisk = ioc->spi_data.pIocPg3->PhysDisk; 4456 - int numPDisk = ioc->spi_data.pIocPg3->NumPhysDisks; 4444 + Ioc3PhysDisk_t *pPDisk = ioc->raid_data.pIocPg3->PhysDisk; 4445 + int numPDisk = ioc->raid_data.pIocPg3->NumPhysDisks; 4457 4446 4458 4447 while (numPDisk) { 4459 4448 if (pPDisk->PhysDiskID == id) { ··· 4477 4466 /* RAID Volume ID's may double for a physical device. If RAID but 4478 4467 * not a physical ID as well, skip DV. 4479 4468 */ 4480 - if ((hd->ioc->spi_data.isRaid & (1 << id)) && !(iocmd.flags & MPT_ICFLAG_PHYS_DISK)) 4469 + if ((hd->ioc->raid_data.isRaid & (1 << id)) && !(iocmd.flags & MPT_ICFLAG_PHYS_DISK)) 4481 4470 goto target_done; 4482 4471 4483 4472 ··· 4826 4815 notDone = 0; 4827 4816 if (iocmd.flags & MPT_ICFLAG_ECHO) { 4828 4817 bufsize = ((pbuf1[2] & 0x1F) <<8) | pbuf1[3]; 4818 + if (pbuf1[0] & 0x01) 4819 + iocmd.flags |= MPT_ICFLAG_EBOS; 4829 4820 } else { 4830 4821 bufsize = pbuf1[1]<<16 | pbuf1[2]<<8 | pbuf1[3]; 4831 4822 } ··· 4924 4911 } 4925 4912 iocmd.flags &= ~MPT_ICFLAG_DID_RESET; 4926 4913 4914 + if (iocmd.flags & MPT_ICFLAG_EBOS) 4915 + goto skip_Reserve; 4916 + 4927 4917 repeat = 5; 4928 4918 while (repeat && (!(iocmd.flags & MPT_ICFLAG_RESERVED))) { 4929 4919 iocmd.cmd = RESERVE; ··· 4970 4954 } 4971 4955 } 4972 4956 4957 + skip_Reserve: 4973 4958 mptscsih_fillbuf(pbuf1, sz, patt, 1); 4974 4959 iocmd.cmd = WRITE_BUFFER; 4975 4960 iocmd.data_dma = buf1_dma; ··· 5215 5198 * If not an LVD bus, the adapter minSyncFactor has been 5216 5199 * already throttled back. 
5217 5200 */ 5201 + negoFlags = hd->ioc->spi_data.noQas; 5218 5202 if ((hd->Targets)&&((pTarget = hd->Targets[(int)id]) != NULL) && !pTarget->raidVolume) { 5219 5203 width = pTarget->maxWidth; 5220 5204 offset = pTarget->maxOffset; 5221 5205 factor = pTarget->minSyncFactor; 5222 - negoFlags = pTarget->negoFlags; 5206 + negoFlags |= pTarget->negoFlags; 5223 5207 } else { 5224 5208 if (hd->ioc->spi_data.nvram && (hd->ioc->spi_data.nvram[id] != MPT_HOST_NVRAM_INVALID)) { 5225 5209 data = hd->ioc->spi_data.nvram[id]; ··· 5241 5223 } 5242 5224 5243 5225 /* Set the negotiation flags */ 5244 - negoFlags = hd->ioc->spi_data.noQas; 5245 5226 if (!width) 5246 5227 negoFlags |= MPT_TARGET_NO_NEGO_WIDE; 5247 5228
+4 -3
drivers/message/fusion/mptscsih.h
··· 1 1 /* 2 - * linux/drivers/message/fusion/mptscsi.h 2 + * linux/drivers/message/fusion/mptscsih.h 3 3 * High performance SCSI / Fibre Channel SCSI Host device driver. 4 4 * For use with PCI chip/adapter(s): 5 5 * LSIFC9xx/LSI409xx Fibre Channel ··· 53 53 * SCSI Public stuff... 54 54 */ 55 55 56 - #define MPT_SCSI_CMD_PER_DEV_HIGH 31 57 - #define MPT_SCSI_CMD_PER_DEV_LOW 7 56 + #define MPT_SCSI_CMD_PER_DEV_HIGH 64 57 + #define MPT_SCSI_CMD_PER_DEV_LOW 32 58 58 59 59 #define MPT_SCSI_CMD_PER_LUN 7 60 60 ··· 77 77 #define MPTSCSIH_MAX_WIDTH 1 78 78 #define MPTSCSIH_MIN_SYNC 0x08 79 79 #define MPTSCSIH_SAF_TE 0 80 + #define MPTSCSIH_PT_CLEAR 0 80 81 81 82 82 83 #endif
+1 -1
drivers/message/fusion/mptspi.c
··· 199 199 printk(MYIOC_s_WARN_FMT 200 200 "Skipping ioc=%p because SCSI Initiator mode is NOT enabled!\n", 201 201 ioc->name, ioc); 202 - return -ENODEV; 202 + return 0; 203 203 } 204 204 205 205 sh = scsi_host_alloc(&mptspi_driver_template, sizeof(MPT_SCSI_HOST));
+4 -1
drivers/message/i2o/config-osm.c
··· 56 56 return -EBUSY; 57 57 } 58 58 #ifdef CONFIG_I2O_CONFIG_OLD_IOCTL 59 - if (i2o_config_old_init()) 59 + if (i2o_config_old_init()) { 60 + osm_err("old config handler initialization failed\n"); 60 61 i2o_driver_unregister(&i2o_config_driver); 62 + return -EBUSY; 63 + } 61 64 #endif 62 65 63 66 return 0;
+2 -2
drivers/mfd/ucb1x00-ts.c
··· 48 48 u16 x_res; 49 49 u16 y_res; 50 50 51 - int restart:1; 52 - int adcsync:1; 51 + unsigned int restart:1; 52 + unsigned int adcsync:1; 53 53 }; 54 54 55 55 static int adcsync;
+4 -4
drivers/mtd/devices/docecc.c
··· 40 40 #include <linux/mtd/mtd.h> 41 41 #include <linux/mtd/doc2000.h> 42 42 43 - #define DEBUG 0 43 + #define DEBUG_ECC 0 44 44 /* need to undef it (from asm/termbits.h) */ 45 45 #undef B0 46 46 ··· 249 249 lambda[j] ^= Alpha_to[modnn(u + tmp)]; 250 250 } 251 251 } 252 - #if DEBUG >= 1 252 + #if DEBUG_ECC >= 1 253 253 /* Test code that verifies the erasure locator polynomial just constructed 254 254 Needed only for decoder debugging. */ 255 255 ··· 276 276 count = -1; 277 277 goto finish; 278 278 } 279 - #if DEBUG >= 2 279 + #if DEBUG_ECC >= 2 280 280 printf("\n Erasure positions as determined by roots of Eras Loc Poly:\n"); 281 281 for (i = 0; i < count; i++) 282 282 printf("%d ", loc[i]); ··· 409 409 den ^= Alpha_to[modnn(lambda[i+1] + i * root[j])]; 410 410 } 411 411 if (den == 0) { 412 - #if DEBUG >= 1 412 + #if DEBUG_ECC >= 1 413 413 printf("\n ERROR: denominator = 0\n"); 414 414 #endif 415 415 /* Convert to dual- basis */
+1 -1
drivers/net/8390.c
··· 1094 1094 1095 1095 outb_p(E8390_NODMA+E8390_PAGE0, e8390_base+E8390_CMD); 1096 1096 1097 - if (inb_p(e8390_base) & E8390_TRANS) 1097 + if (inb_p(e8390_base + E8390_CMD) & E8390_TRANS) 1098 1098 { 1099 1099 printk(KERN_WARNING "%s: trigger_send() called with the transmitter busy.\n", 1100 1100 dev->name);
+2 -1
drivers/net/bonding/bond_main.c
··· 1653 1653 int old_features = bond_dev->features; 1654 1654 int res = 0; 1655 1655 1656 - if (slave_dev->do_ioctl == NULL) { 1656 + if (!bond->params.use_carrier && slave_dev->ethtool_ops == NULL && 1657 + slave_dev->do_ioctl == NULL) { 1657 1658 printk(KERN_WARNING DRV_NAME 1658 1659 ": Warning : no link monitoring support for %s\n", 1659 1660 slave_dev->name);
+2 -2
drivers/net/r8169.c
··· 100 100 101 101 #ifdef CONFIG_R8169_NAPI 102 102 #define rtl8169_rx_skb netif_receive_skb 103 - #define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx 103 + #define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb 104 104 #define rtl8169_rx_quota(count, quota) min(count, quota) 105 105 #else 106 106 #define rtl8169_rx_skb netif_rx 107 - #define rtl8169_rx_hwaccel_skb vlan_hwaccel_receive_skb 107 + #define rtl8169_rx_hwaccel_skb vlan_hwaccel_rx 108 108 #define rtl8169_rx_quota(count, quota) count 109 109 #endif 110 110
+119 -113
drivers/net/skge.c
··· 42 42 #include "skge.h" 43 43 44 44 #define DRV_NAME "skge" 45 - #define DRV_VERSION "1.0" 45 + #define DRV_VERSION "1.1" 46 46 #define PFX DRV_NAME " " 47 47 48 48 #define DEFAULT_TX_RING_SIZE 128 ··· 105 105 static const u32 txirqmask[] = { IS_XA1_F, IS_XA2_F }; 106 106 static const u32 portirqmask[] = { IS_PORT_1, IS_PORT_2 }; 107 107 108 - /* Don't need to look at whole 16K. 109 - * last interesting register is descriptor poll timer. 110 - */ 111 - #define SKGE_REGS_LEN (29*128) 112 - 113 108 static int skge_get_regs_len(struct net_device *dev) 114 109 { 115 - return SKGE_REGS_LEN; 110 + return 0x4000; 116 111 } 117 112 118 113 /* 119 - * Returns copy of control register region 120 - * I/O region is divided into banks and certain regions are unreadable 114 + * Returns copy of whole control register region 115 + * Note: skip RAM address register because accessing it will 116 + * cause bus hangs! 121 117 */ 122 118 static void skge_get_regs(struct net_device *dev, struct ethtool_regs *regs, 123 119 void *p) 124 120 { 125 121 const struct skge_port *skge = netdev_priv(dev); 126 - unsigned long offs; 127 122 const void __iomem *io = skge->hw->regs; 128 - static const unsigned long bankmap 129 - = (1<<0) | (1<<2) | (1<<8) | (1<<9) 130 - | (1<<12) | (1<<13) | (1<<14) | (1<<15) | (1<<16) 131 - | (1<<17) | (1<<20) | (1<<21) | (1<<22) | (1<<23) 132 - | (1<<24) | (1<<25) | (1<<26) | (1<<27) | (1<<28); 133 123 134 124 regs->version = 1; 135 - for (offs = 0; offs < regs->len; offs += 128) { 136 - u32 len = min_t(u32, 128, regs->len - offs); 125 + memset(p, 0, regs->len); 126 + memcpy_fromio(p, io, B3_RAM_ADDR); 137 127 138 - if (bankmap & (1<<(offs/128))) 139 - memcpy_fromio(p + offs, io + offs, len); 140 - else 141 - memset(p + offs, 0, len); 142 - } 128 + memcpy_fromio(p + B3_RI_WTO_R1, io + B3_RI_WTO_R1, 129 + regs->len - B3_RI_WTO_R1); 143 130 } 144 131 145 132 /* Wake on Lan only supported on Yukon chps with rev 1 or above */ ··· 762 775 return 0; 763 776 } 764 
777 765 - static struct sk_buff *skge_rx_alloc(struct net_device *dev, unsigned int size) 766 - { 767 - struct sk_buff *skb = dev_alloc_skb(size); 768 - 769 - if (likely(skb)) { 770 - skb->dev = dev; 771 - skb_reserve(skb, NET_IP_ALIGN); 772 - } 773 - return skb; 774 - } 775 - 776 778 /* Allocate and setup a new buffer for receiving */ 777 779 static void skge_rx_setup(struct skge_port *skge, struct skge_element *e, 778 780 struct sk_buff *skb, unsigned int bufsize) ··· 834 858 { 835 859 struct skge_ring *ring = &skge->rx_ring; 836 860 struct skge_element *e; 837 - unsigned int bufsize = skge->rx_buf_size; 838 861 839 862 e = ring->start; 840 863 do { 841 - struct sk_buff *skb = skge_rx_alloc(skge->netdev, bufsize); 864 + struct sk_buff *skb; 842 865 866 + skb = dev_alloc_skb(skge->rx_buf_size + NET_IP_ALIGN); 843 867 if (!skb) 844 868 return -ENOMEM; 845 869 846 - skge_rx_setup(skge, e, skb, bufsize); 870 + skb_reserve(skb, NET_IP_ALIGN); 871 + skge_rx_setup(skge, e, skb, skge->rx_buf_size); 847 872 } while ( (e = e->next) != ring->start); 848 873 849 874 ring->to_clean = ring->start; ··· 1643 1666 | GM_RXCR_UCF_ENA | GM_RXCR_MCF_ENA); 1644 1667 } 1645 1668 1669 + /* Apparently, early versions of Yukon-Lite had wrong chip_id? 
*/ 1670 + static int is_yukon_lite_a0(struct skge_hw *hw) 1671 + { 1672 + u32 reg; 1673 + int ret; 1674 + 1675 + if (hw->chip_id != CHIP_ID_YUKON) 1676 + return 0; 1677 + 1678 + reg = skge_read32(hw, B2_FAR); 1679 + skge_write8(hw, B2_FAR + 3, 0xff); 1680 + ret = (skge_read8(hw, B2_FAR + 3) != 0); 1681 + skge_write32(hw, B2_FAR, reg); 1682 + return ret; 1683 + } 1684 + 1646 1685 static void yukon_mac_init(struct skge_hw *hw, int port) 1647 1686 { 1648 1687 struct skge_port *skge = netdev_priv(hw->dev[port]); ··· 1774 1781 /* Configure Rx MAC FIFO */ 1775 1782 skge_write16(hw, SK_REG(port, RX_GMF_FL_MSK), RX_FF_FL_DEF_MSK); 1776 1783 reg = GMF_OPER_ON | GMF_RX_F_FL_ON; 1777 - if (hw->chip_id == CHIP_ID_YUKON_LITE && 1778 - hw->chip_rev >= CHIP_REV_YU_LITE_A3) 1784 + 1785 + /* disable Rx GMAC FIFO Flush for YUKON-Lite Rev. A0 only */ 1786 + if (is_yukon_lite_a0(hw)) 1779 1787 reg &= ~GMF_RX_F_FL_ON; 1788 + 1780 1789 skge_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_CLR); 1781 1790 skge_write16(hw, SK_REG(port, RX_GMF_CTRL_T), reg); 1782 1791 /* ··· 2437 2442 gma_write16(hw, port, GM_RX_CTRL, reg); 2438 2443 } 2439 2444 2445 + static inline u16 phy_length(const struct skge_hw *hw, u32 status) 2446 + { 2447 + if (hw->chip_id == CHIP_ID_GENESIS) 2448 + return status >> XMR_FS_LEN_SHIFT; 2449 + else 2450 + return status >> GMR_FS_LEN_SHIFT; 2451 + } 2452 + 2440 2453 static inline int bad_phy_status(const struct skge_hw *hw, u32 status) 2441 2454 { 2442 2455 if (hw->chip_id == CHIP_ID_GENESIS) ··· 2454 2451 (status & GMR_FS_RX_OK) == 0; 2455 2452 } 2456 2453 2457 - static void skge_rx_error(struct skge_port *skge, int slot, 2458 - u32 control, u32 status) 2454 + 2455 + /* Get receive buffer from descriptor. 
2456 + * Handles copy of small buffers and reallocation failures 2457 + */ 2458 + static inline struct sk_buff *skge_rx_get(struct skge_port *skge, 2459 + struct skge_element *e, 2460 + u32 control, u32 status, u16 csum) 2459 2461 { 2460 - if (netif_msg_rx_err(skge)) 2461 - printk(KERN_DEBUG PFX "%s: rx err, slot %d control 0x%x status 0x%x\n", 2462 - skge->netdev->name, slot, control, status); 2462 + struct sk_buff *skb; 2463 + u16 len = control & BMU_BBC; 2464 + 2465 + if (unlikely(netif_msg_rx_status(skge))) 2466 + printk(KERN_DEBUG PFX "%s: rx slot %td status 0x%x len %d\n", 2467 + skge->netdev->name, e - skge->rx_ring.start, 2468 + status, len); 2469 + 2470 + if (len > skge->rx_buf_size) 2471 + goto error; 2463 2472 2464 2473 if ((control & (BMU_EOF|BMU_STF)) != (BMU_STF|BMU_EOF)) 2465 - skge->net_stats.rx_length_errors++; 2466 - else if (skge->hw->chip_id == CHIP_ID_GENESIS) { 2474 + goto error; 2475 + 2476 + if (bad_phy_status(skge->hw, status)) 2477 + goto error; 2478 + 2479 + if (phy_length(skge->hw, status) != len) 2480 + goto error; 2481 + 2482 + if (len < RX_COPY_THRESHOLD) { 2483 + skb = dev_alloc_skb(len + 2); 2484 + if (!skb) 2485 + goto resubmit; 2486 + 2487 + skb_reserve(skb, 2); 2488 + pci_dma_sync_single_for_cpu(skge->hw->pdev, 2489 + pci_unmap_addr(e, mapaddr), 2490 + len, PCI_DMA_FROMDEVICE); 2491 + memcpy(skb->data, e->skb->data, len); 2492 + pci_dma_sync_single_for_device(skge->hw->pdev, 2493 + pci_unmap_addr(e, mapaddr), 2494 + len, PCI_DMA_FROMDEVICE); 2495 + skge_rx_reuse(e, skge->rx_buf_size); 2496 + } else { 2497 + struct sk_buff *nskb; 2498 + nskb = dev_alloc_skb(skge->rx_buf_size + NET_IP_ALIGN); 2499 + if (!nskb) 2500 + goto resubmit; 2501 + 2502 + pci_unmap_single(skge->hw->pdev, 2503 + pci_unmap_addr(e, mapaddr), 2504 + pci_unmap_len(e, maplen), 2505 + PCI_DMA_FROMDEVICE); 2506 + skb = e->skb; 2507 + prefetch(skb->data); 2508 + skge_rx_setup(skge, e, nskb, skge->rx_buf_size); 2509 + } 2510 + 2511 + skb_put(skb, len); 2512 + skb->dev 
= skge->netdev; 2513 + if (skge->rx_csum) { 2514 + skb->csum = csum; 2515 + skb->ip_summed = CHECKSUM_HW; 2516 + } 2517 + 2518 + skb->protocol = eth_type_trans(skb, skge->netdev); 2519 + 2520 + return skb; 2521 + error: 2522 + 2523 + if (netif_msg_rx_err(skge)) 2524 + printk(KERN_DEBUG PFX "%s: rx err, slot %td control 0x%x status 0x%x\n", 2525 + skge->netdev->name, e - skge->rx_ring.start, 2526 + control, status); 2527 + 2528 + if (skge->hw->chip_id == CHIP_ID_GENESIS) { 2467 2529 if (status & (XMR_FS_RUNT|XMR_FS_LNG_ERR)) 2468 2530 skge->net_stats.rx_length_errors++; 2469 2531 if (status & XMR_FS_FRA_ERR) ··· 2543 2475 if (status & GMR_FS_CRC_ERR) 2544 2476 skge->net_stats.rx_crc_errors++; 2545 2477 } 2546 - } 2547 2478 2548 - /* Get receive buffer from descriptor. 2549 - * Handles copy of small buffers and reallocation failures 2550 - */ 2551 - static inline struct sk_buff *skge_rx_get(struct skge_port *skge, 2552 - struct skge_element *e, 2553 - unsigned int len) 2554 - { 2555 - struct sk_buff *nskb, *skb; 2556 - 2557 - if (len < RX_COPY_THRESHOLD) { 2558 - nskb = skge_rx_alloc(skge->netdev, len + NET_IP_ALIGN); 2559 - if (unlikely(!nskb)) 2560 - return NULL; 2561 - 2562 - pci_dma_sync_single_for_cpu(skge->hw->pdev, 2563 - pci_unmap_addr(e, mapaddr), 2564 - len, PCI_DMA_FROMDEVICE); 2565 - memcpy(nskb->data, e->skb->data, len); 2566 - pci_dma_sync_single_for_device(skge->hw->pdev, 2567 - pci_unmap_addr(e, mapaddr), 2568 - len, PCI_DMA_FROMDEVICE); 2569 - 2570 - if (skge->rx_csum) { 2571 - struct skge_rx_desc *rd = e->desc; 2572 - nskb->csum = le16_to_cpu(rd->csum2); 2573 - nskb->ip_summed = CHECKSUM_HW; 2574 - } 2575 - skge_rx_reuse(e, skge->rx_buf_size); 2576 - return nskb; 2577 - } else { 2578 - nskb = skge_rx_alloc(skge->netdev, skge->rx_buf_size); 2579 - if (unlikely(!nskb)) 2580 - return NULL; 2581 - 2582 - pci_unmap_single(skge->hw->pdev, 2583 - pci_unmap_addr(e, mapaddr), 2584 - pci_unmap_len(e, maplen), 2585 - PCI_DMA_FROMDEVICE); 2586 - skb = e->skb; 
2587 - if (skge->rx_csum) { 2588 - struct skge_rx_desc *rd = e->desc; 2589 - skb->csum = le16_to_cpu(rd->csum2); 2590 - skb->ip_summed = CHECKSUM_HW; 2591 - } 2592 - 2593 - skge_rx_setup(skge, e, nskb, skge->rx_buf_size); 2594 - return skb; 2595 - } 2479 + resubmit: 2480 + skge_rx_reuse(e, skge->rx_buf_size); 2481 + return NULL; 2596 2482 } 2597 2483 2598 2484 ··· 2562 2540 for (e = ring->to_clean; work_done < to_do; e = e->next) { 2563 2541 struct skge_rx_desc *rd = e->desc; 2564 2542 struct sk_buff *skb; 2565 - u32 control, len, status; 2543 + u32 control; 2566 2544 2567 2545 rmb(); 2568 2546 control = rd->control; 2569 2547 if (control & BMU_OWN) 2570 2548 break; 2571 2549 2572 - len = control & BMU_BBC; 2573 - status = rd->status; 2574 - 2575 - if (unlikely((control & (BMU_EOF|BMU_STF)) != (BMU_STF|BMU_EOF) 2576 - || bad_phy_status(hw, status))) { 2577 - skge_rx_error(skge, e - ring->start, control, status); 2578 - skge_rx_reuse(e, skge->rx_buf_size); 2579 - continue; 2580 - } 2581 - 2582 - if (netif_msg_rx_status(skge)) 2583 - printk(KERN_DEBUG PFX "%s: rx slot %td status 0x%x len %d\n", 2584 - dev->name, e - ring->start, rd->status, len); 2585 - 2586 - skb = skge_rx_get(skge, e, len); 2550 + skb = skge_rx_get(skge, e, control, rd->status, 2551 + le16_to_cpu(rd->csum2)); 2587 2552 if (likely(skb)) { 2588 - skb_put(skb, len); 2589 - skb->protocol = eth_type_trans(skb, dev); 2590 - 2591 2553 dev->last_rx = jiffies; 2592 2554 netif_receive_skb(skb); 2593 2555
+2
drivers/net/skge.h
··· 953 953 */ 954 954 enum { 955 955 XMR_FS_LEN = 0x3fff<<18, /* Bit 31..18: Rx Frame Length */ 956 + XMR_FS_LEN_SHIFT = 18, 956 957 XMR_FS_2L_VLAN = 1<<17, /* Bit 17: tagged wh 2Lev VLAN ID*/ 957 958 XMR_FS_1_VLAN = 1<<16, /* Bit 16: tagged wh 1ev VLAN ID*/ 958 959 XMR_FS_BC = 1<<15, /* Bit 15: Broadcast Frame */ ··· 1869 1868 /* Receive Frame Status Encoding */ 1870 1869 enum { 1871 1870 GMR_FS_LEN = 0xffff<<16, /* Bit 31..16: Rx Frame Length */ 1871 + GMR_FS_LEN_SHIFT = 16, 1872 1872 GMR_FS_VLAN = 1<<13, /* Bit 13: VLAN Packet */ 1873 1873 GMR_FS_JABBER = 1<<12, /* Bit 12: Jabber Packet */ 1874 1874 GMR_FS_UN_SIZE = 1<<11, /* Bit 11: Undersize Packet */
+1 -1
drivers/net/wan/hdlc_cisco.c
··· 72 72 } 73 73 skb_reserve(skb, 4); 74 74 cisco_hard_header(skb, dev, CISCO_KEEPALIVE, NULL, NULL, 0); 75 - data = (cisco_packet*)skb->data; 75 + data = (cisco_packet*)(skb->data + 4); 76 76 77 77 data->type = htonl(type); 78 78 data->par1 = htonl(par1);
-4
drivers/pci/hotplug.c
··· 7 7 char *buffer, int buffer_size) 8 8 { 9 9 struct pci_dev *pdev; 10 - char *scratch; 11 10 int i = 0; 12 11 int length = 0; 13 12 ··· 16 17 pdev = to_pci_dev(dev); 17 18 if (!pdev) 18 19 return -ENODEV; 19 - 20 - scratch = buffer; 21 - 22 20 23 21 if (add_hotplug_env_var(envp, num_envp, &i, 24 22 buffer, buffer_size, &length,
+2 -2
drivers/pci/hotplug/rpadlpar_sysfs.c
··· 62 62 char drc_name[MAX_DRC_NAME_LEN]; 63 63 char *end; 64 64 65 - if (nbytes > MAX_DRC_NAME_LEN) 65 + if (nbytes >= MAX_DRC_NAME_LEN) 66 66 return 0; 67 67 68 68 memcpy(drc_name, buf, nbytes); ··· 83 83 char drc_name[MAX_DRC_NAME_LEN]; 84 84 char *end; 85 85 86 - if (nbytes > MAX_DRC_NAME_LEN) 86 + if (nbytes >= MAX_DRC_NAME_LEN) 87 87 return 0; 88 88 89 89 memcpy(drc_name, buf, nbytes);
+3 -3
drivers/pci/hotplug/sgi_hotplug.c
··· 159 159 160 160 pcibus_info = SN_PCIBUS_BUSSOFT_INFO(pci_bus); 161 161 162 - slot = kcalloc(1, sizeof(*slot), GFP_KERNEL); 162 + slot = kzalloc(sizeof(*slot), GFP_KERNEL); 163 163 if (!slot) 164 164 return -ENOMEM; 165 165 bss_hotplug_slot->private = slot; ··· 491 491 if (sn_pci_slot_valid(pci_bus, device) != 1) 492 492 continue; 493 493 494 - bss_hotplug_slot = kcalloc(1, sizeof(*bss_hotplug_slot), 494 + bss_hotplug_slot = kzalloc(sizeof(*bss_hotplug_slot), 495 495 GFP_KERNEL); 496 496 if (!bss_hotplug_slot) { 497 497 rc = -ENOMEM; ··· 499 499 } 500 500 501 501 bss_hotplug_slot->info = 502 - kcalloc(1, sizeof(struct hotplug_slot_info), 502 + kzalloc(sizeof(struct hotplug_slot_info), 503 503 GFP_KERNEL); 504 504 if (!bss_hotplug_slot->info) { 505 505 rc = -ENOMEM;
+1 -1
drivers/pci/pci-sysfs.c
··· 360 360 continue; 361 361 362 362 /* allocate attribute structure, piggyback attribute name */ 363 - res_attr = kcalloc(1, sizeof(*res_attr) + 10, GFP_ATOMIC); 363 + res_attr = kzalloc(sizeof(*res_attr) + 10, GFP_ATOMIC); 364 364 if (res_attr) { 365 365 char *res_attr_name = (char *)(res_attr + 1); 366 366
+19 -3
drivers/pci/probe.c
··· 165 165 if (l == 0xffffffff) 166 166 l = 0; 167 167 if ((l & PCI_BASE_ADDRESS_SPACE) == PCI_BASE_ADDRESS_SPACE_MEMORY) { 168 - sz = pci_size(l, sz, PCI_BASE_ADDRESS_MEM_MASK); 168 + sz = pci_size(l, sz, (u32)PCI_BASE_ADDRESS_MEM_MASK); 169 169 if (!sz) 170 170 continue; 171 171 res->start = l & PCI_BASE_ADDRESS_MEM_MASK; ··· 215 215 if (l == 0xffffffff) 216 216 l = 0; 217 217 if (sz && sz != 0xffffffff) { 218 - sz = pci_size(l, sz, PCI_ROM_ADDRESS_MASK); 218 + sz = pci_size(l, sz, (u32)PCI_ROM_ADDRESS_MASK); 219 219 if (sz) { 220 220 res->flags = (l & IORESOURCE_ROM_ENABLE) | 221 221 IORESOURCE_MEM | IORESOURCE_PREFETCH | ··· 402 402 static void __devinit pci_fixup_parent_subordinate_busnr(struct pci_bus *child, int max) 403 403 { 404 404 struct pci_bus *parent = child->parent; 405 + 406 + /* Attempts to fix that up are really dangerous unless 407 + we're going to re-assign all bus numbers. */ 408 + if (!pcibios_assign_all_busses()) 409 + return; 410 + 405 411 while (parent->parent && parent->subordinate < max) { 406 412 parent->subordinate = max; 407 413 pci_write_config_byte(parent->self, PCI_SUBORDINATE_BUS, max); ··· 484 478 * We need to assign a number to this bus which we always 485 479 * do in the second pass. 486 480 */ 487 - if (!pass) 481 + if (!pass) { 482 + if (pcibios_assign_all_busses()) 483 + /* Temporarily disable forwarding of the 484 + configuration cycles on all bridges in 485 + this bus segment to avoid possible 486 + conflicts in the second pass between two 487 + bridges programmed with overlapping 488 + bus ranges. */ 489 + pci_write_config_dword(dev, PCI_PRIMARY_BUS, 490 + buses & ~0xffffff); 488 491 return max; 492 + } 489 493 490 494 /* Clear errors */ 491 495 pci_write_config_word(dev, PCI_STATUS, 0xffff);
+1 -1
drivers/s390/cio/ccwgroup.c
··· 437 437 if (cdev->dev.driver_data) { 438 438 gdev = (struct ccwgroup_device *)cdev->dev.driver_data; 439 439 if (get_device(&gdev->dev)) { 440 - if (klist_node_attached(&gdev->dev.knode_bus)) 440 + if (device_is_registered(&gdev->dev)) 441 441 return gdev; 442 442 put_device(&gdev->dev); 443 443 }
+1 -1
drivers/s390/scsi/Makefile
··· 3 3 #
4 4 
5 5 zfcp-objs := zfcp_aux.o zfcp_ccw.o zfcp_scsi.o zfcp_erp.o zfcp_qdio.o \
6 - zfcp_fsf.o zfcp_sysfs_adapter.o zfcp_sysfs_port.o \
6 + zfcp_fsf.o zfcp_dbf.o zfcp_sysfs_adapter.o zfcp_sysfs_port.o \
7 7 zfcp_sysfs_unit.o zfcp_sysfs_driver.o
8 8 
9 9 obj-$(CONFIG_ZFCP) += zfcp.o
+2 -182
drivers/s390/scsi/zfcp_aux.c
··· 122 122 
123 123 #define ZFCP_LOG_AREA ZFCP_LOG_AREA_OTHER
124 124 
125 - static inline int
126 - zfcp_fsf_req_is_scsi_cmnd(struct zfcp_fsf_req *fsf_req)
127 - {
128 - return ((fsf_req->fsf_command == FSF_QTCB_FCP_CMND) &&
129 - !(fsf_req->status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT));
130 - }
131 - 
132 - void
133 - zfcp_cmd_dbf_event_fsf(const char *text, struct zfcp_fsf_req *fsf_req,
134 - void *add_data, int add_length)
135 - {
136 - struct zfcp_adapter *adapter = fsf_req->adapter;
137 - struct scsi_cmnd *scsi_cmnd;
138 - int level = 3;
139 - int i;
140 - unsigned long flags;
141 - 
142 - spin_lock_irqsave(&adapter->dbf_lock, flags);
143 - if (zfcp_fsf_req_is_scsi_cmnd(fsf_req)) {
144 - scsi_cmnd = fsf_req->data.send_fcp_command_task.scsi_cmnd;
145 - debug_text_event(adapter->cmd_dbf, level, "fsferror");
146 - debug_text_event(adapter->cmd_dbf, level, text);
147 - debug_event(adapter->cmd_dbf, level, &fsf_req,
148 - sizeof (unsigned long));
149 - debug_event(adapter->cmd_dbf, level, &fsf_req->seq_no,
150 - sizeof (u32));
151 - debug_event(adapter->cmd_dbf, level, &scsi_cmnd,
152 - sizeof (unsigned long));
153 - debug_event(adapter->cmd_dbf, level, &scsi_cmnd->cmnd,
154 - min(ZFCP_CMD_DBF_LENGTH, (int)scsi_cmnd->cmd_len));
155 - for (i = 0; i < add_length; i += ZFCP_CMD_DBF_LENGTH)
156 - debug_event(adapter->cmd_dbf,
157 - level,
158 - (char *) add_data + i,
159 - min(ZFCP_CMD_DBF_LENGTH, add_length - i));
160 - }
161 - spin_unlock_irqrestore(&adapter->dbf_lock, flags);
162 - }
163 - 
164 - /* XXX additionally log unit if available */
165 - /* ---> introduce new parameter for unit, see 2.4 code */
166 - void
167 - zfcp_cmd_dbf_event_scsi(const char *text, struct scsi_cmnd *scsi_cmnd)
168 - {
169 - struct zfcp_adapter *adapter;
170 - union zfcp_req_data *req_data;
171 - struct zfcp_fsf_req *fsf_req;
172 - int level = ((host_byte(scsi_cmnd->result) != 0) ? 1 : 5);
173 - unsigned long flags;
174 - 
175 - adapter = (struct zfcp_adapter *) scsi_cmnd->device->host->hostdata[0];
176 - req_data = (union zfcp_req_data *) scsi_cmnd->host_scribble;
177 - fsf_req = (req_data ? req_data->send_fcp_command_task.fsf_req : NULL);
178 - spin_lock_irqsave(&adapter->dbf_lock, flags);
179 - debug_text_event(adapter->cmd_dbf, level, "hostbyte");
180 - debug_text_event(adapter->cmd_dbf, level, text);
181 - debug_event(adapter->cmd_dbf, level, &scsi_cmnd->result, sizeof (u32));
182 - debug_event(adapter->cmd_dbf, level, &scsi_cmnd,
183 - sizeof (unsigned long));
184 - debug_event(adapter->cmd_dbf, level, &scsi_cmnd->cmnd,
185 - min(ZFCP_CMD_DBF_LENGTH, (int)scsi_cmnd->cmd_len));
186 - if (likely(fsf_req)) {
187 - debug_event(adapter->cmd_dbf, level, &fsf_req,
188 - sizeof (unsigned long));
189 - debug_event(adapter->cmd_dbf, level, &fsf_req->seq_no,
190 - sizeof (u32));
191 - } else {
192 - debug_text_event(adapter->cmd_dbf, level, "");
193 - debug_text_event(adapter->cmd_dbf, level, "");
194 - }
195 - spin_unlock_irqrestore(&adapter->dbf_lock, flags);
196 - }
197 - 
198 - void
199 - zfcp_in_els_dbf_event(struct zfcp_adapter *adapter, const char *text,
200 - struct fsf_status_read_buffer *status_buffer, int length)
201 - {
202 - int level = 1;
203 - int i;
204 - 
205 - debug_text_event(adapter->in_els_dbf, level, text);
206 - debug_event(adapter->in_els_dbf, level, &status_buffer->d_id, 8);
207 - for (i = 0; i < length; i += ZFCP_IN_ELS_DBF_LENGTH)
208 - debug_event(adapter->in_els_dbf,
209 - level,
210 - (char *) status_buffer->payload + i,
211 - min(ZFCP_IN_ELS_DBF_LENGTH, length - i));
212 - }
213 - 
214 125 /**
215 126 * zfcp_device_setup - setup function
216 127 * @str: pointer to parameter string
··· 928 1017 mempool_destroy(adapter->pool.data_gid_pn);
929 1018 }
930 1019 
931 - /**
932 - * zfcp_adapter_debug_register - registers debug feature for an adapter
933 - * @adapter: pointer to adapter for which debug features should be registered
934 - * return: -ENOMEM on error, 0 otherwise
935 - */
936 - int
937 - zfcp_adapter_debug_register(struct zfcp_adapter *adapter)
938 - {
939 - char dbf_name[20];
940 - 
941 - /* debug feature area which records SCSI command failures (hostbyte) */
942 - spin_lock_init(&adapter->dbf_lock);
943 - 
944 - sprintf(dbf_name, ZFCP_CMD_DBF_NAME "%s",
945 - zfcp_get_busid_by_adapter(adapter));
946 - adapter->cmd_dbf = debug_register(dbf_name, ZFCP_CMD_DBF_INDEX,
947 - ZFCP_CMD_DBF_AREAS,
948 - ZFCP_CMD_DBF_LENGTH);
949 - debug_register_view(adapter->cmd_dbf, &debug_hex_ascii_view);
950 - debug_set_level(adapter->cmd_dbf, ZFCP_CMD_DBF_LEVEL);
951 - 
952 - /* debug feature area which records SCSI command aborts */
953 - sprintf(dbf_name, ZFCP_ABORT_DBF_NAME "%s",
954 - zfcp_get_busid_by_adapter(adapter));
955 - adapter->abort_dbf = debug_register(dbf_name, ZFCP_ABORT_DBF_INDEX,
956 - ZFCP_ABORT_DBF_AREAS,
957 - ZFCP_ABORT_DBF_LENGTH);
958 - debug_register_view(adapter->abort_dbf, &debug_hex_ascii_view);
959 - debug_set_level(adapter->abort_dbf, ZFCP_ABORT_DBF_LEVEL);
960 - 
961 - /* debug feature area which records incoming ELS commands */
962 - sprintf(dbf_name, ZFCP_IN_ELS_DBF_NAME "%s",
963 - zfcp_get_busid_by_adapter(adapter));
964 - adapter->in_els_dbf = debug_register(dbf_name, ZFCP_IN_ELS_DBF_INDEX,
965 - ZFCP_IN_ELS_DBF_AREAS,
966 - ZFCP_IN_ELS_DBF_LENGTH);
967 - debug_register_view(adapter->in_els_dbf, &debug_hex_ascii_view);
968 - debug_set_level(adapter->in_els_dbf, ZFCP_IN_ELS_DBF_LEVEL);
969 - 
970 - /* debug feature area which records erp events */
971 - sprintf(dbf_name, ZFCP_ERP_DBF_NAME "%s",
972 - zfcp_get_busid_by_adapter(adapter));
973 - adapter->erp_dbf = debug_register(dbf_name, ZFCP_ERP_DBF_INDEX,
974 - ZFCP_ERP_DBF_AREAS,
975 - ZFCP_ERP_DBF_LENGTH);
976 - debug_register_view(adapter->erp_dbf, &debug_hex_ascii_view);
977 - debug_set_level(adapter->erp_dbf, ZFCP_ERP_DBF_LEVEL);
978 - 
979 - if (!(adapter->cmd_dbf && adapter->abort_dbf &&
980 - adapter->in_els_dbf && adapter->erp_dbf)) {
981 - zfcp_adapter_debug_unregister(adapter);
982 - return -ENOMEM;
983 - }
984 - 
985 - return 0;
986 - 
987 - }
988 - 
989 - /**
990 - * zfcp_adapter_debug_unregister - unregisters debug feature for an adapter
991 - * @adapter: pointer to adapter for which debug features should be unregistered
992 - */
993 - void
994 - zfcp_adapter_debug_unregister(struct zfcp_adapter *adapter)
995 - {
996 - debug_unregister(adapter->abort_dbf);
997 - debug_unregister(adapter->cmd_dbf);
998 - debug_unregister(adapter->erp_dbf);
999 - debug_unregister(adapter->in_els_dbf);
1000 - adapter->abort_dbf = NULL;
1001 - adapter->cmd_dbf = NULL;
1002 - adapter->erp_dbf = NULL;
1003 - adapter->in_els_dbf = NULL;
1004 - }
1005 - 
1006 1020 void
1007 1021 zfcp_dummy_release(struct device *dev)
1008 1022 {
··· 1298 1462 /* see FC-FS */
1299 1463 no_entries = (fcp_rscn_head->payload_len / 4);
1300 1464 
1301 - zfcp_in_els_dbf_event(adapter, "##rscn", status_buffer,
1302 - fcp_rscn_head->payload_len);
1303 - 
1304 - debug_text_event(adapter->erp_dbf, 1, "unsol_els_rscn:");
1305 1465 for (i = 1; i < no_entries; i++) {
1306 1466 /* skip head and start with 1st element */
1307 1467 fcp_rscn_element++;
··· 1329 1497 (ZFCP_STATUS_PORT_DID_DID, &port->status)) {
1330 1498 ZFCP_LOG_INFO("incoming RSCN, trying to open "
1331 1499 "port 0x%016Lx\n", port->wwpn);
1332 - debug_text_event(adapter->erp_dbf, 1,
1333 - "unsol_els_rscnu:");
1334 1500 zfcp_erp_port_reopen(port,
1335 1501 ZFCP_STATUS_COMMON_ERP_FAILED);
1336 1502 continue;
··· 1354 1524 */
1355 1525 ZFCP_LOG_INFO("incoming RSCN, trying to open "
1356 1526 "port 0x%016Lx\n", port->wwpn);
1357 - debug_text_event(adapter->erp_dbf, 1,
1358 - "unsol_els_rscnk:");
1359 1527 zfcp_test_link(port);
1360 1528 }
1361 1529 }
··· 1369 1541 struct zfcp_port *port;
1370 1542 unsigned long flags;
1371 1543 
1372 - zfcp_in_els_dbf_event(adapter, "##plogi", status_buffer, 28);
1373 - 
1374 1544 read_lock_irqsave(&zfcp_data.config_lock, flags);
1375 1545 list_for_each_entry(port, &adapter->port_list_head, list) {
1376 1546 if (port->wwpn == (*(wwn_t *) & els_logi->nport_wwn))
··· 1382 1556 status_buffer->d_id,
1383 1557 zfcp_get_busid_by_adapter(adapter));
1384 1558 } else {
1385 - debug_text_event(adapter->erp_dbf, 1, "unsol_els_plogi:");
1386 - debug_event(adapter->erp_dbf, 1, &els_logi->nport_wwn, 8);
1387 1559 zfcp_erp_port_forced_reopen(port, 0);
1388 1560 }
1389 1561 }
··· 1393 1569 struct fcp_logo *els_logo = (struct fcp_logo *) status_buffer->payload;
1394 1570 struct zfcp_port *port;
1395 1571 unsigned long flags;
1396 - 
1397 - zfcp_in_els_dbf_event(adapter, "##logo", status_buffer, 16);
1398 1572 
1399 1573 read_lock_irqsave(&zfcp_data.config_lock, flags);
1400 1574 list_for_each_entry(port, &adapter->port_list_head, list) {
··· 1407 1585 status_buffer->d_id,
1408 1586 zfcp_get_busid_by_adapter(adapter));
1409 1587 } else {
1410 - debug_text_event(adapter->erp_dbf, 1, "unsol_els_logo:");
1411 - debug_event(adapter->erp_dbf, 1, &els_logo->nport_wwpn, 8);
1412 1588 zfcp_erp_port_forced_reopen(port, 0);
1413 1589 }
1414 1590 }
··· 1415 1595 zfcp_fsf_incoming_els_unknown(struct zfcp_adapter *adapter,
1416 1596 struct fsf_status_read_buffer *status_buffer)
1417 1597 {
1418 - zfcp_in_els_dbf_event(adapter, "##undef", status_buffer, 24);
1419 1598 ZFCP_LOG_NORMAL("warning: unknown incoming ELS 0x%08x "
1420 1599 "for adapter %s\n", *(u32 *) (status_buffer->payload),
1421 1600 zfcp_get_busid_by_adapter(adapter));
··· 1428 1609 u32 els_type;
1429 1610 struct zfcp_adapter *adapter;
1430 1611 
1431 - status_buffer = fsf_req->data.status_read.buffer;
1431 1612 + status_buffer = (struct fsf_status_read_buffer *) fsf_req->data;
1432 1613 els_type = *(u32 *) (status_buffer->payload);
1433 1614 adapter = fsf_req->adapter;
1434 1615 
1616 + zfcp_san_dbf_event_incoming_els(fsf_req);
1435 1617 if (els_type == LS_PLOGI)
1436 1618 zfcp_fsf_incoming_els_plogi(adapter, status_buffer);
1437 1619 else if (els_type == LS_LOGO)
-10
drivers/s390/scsi/zfcp_ccw.c
··· 202 202 zfcp_ccw_set_offline(struct ccw_device *ccw_device)
203 203 {
204 204 struct zfcp_adapter *adapter;
205 - struct zfcp_port *port;
206 - struct fc_rport *rport;
207 205 
208 206 down(&zfcp_data.config_sema);
209 207 adapter = dev_get_drvdata(&ccw_device->dev);
210 - /* might be racy, but we cannot take config_lock due to the fact that
211 - fc_remote_port_delete might sleep */
212 - list_for_each_entry(port, &adapter->port_list_head, list)
213 - if (port->rport) {
214 - rport = port->rport;
215 - port->rport = NULL;
216 - fc_remote_port_delete(rport);
217 - }
218 208 zfcp_erp_adapter_shutdown(adapter, 0);
219 209 zfcp_erp_wait(adapter);
220 210 zfcp_adapter_scsi_unregister(adapter);
+995
drivers/s390/scsi/zfcp_dbf.c
··· 1 + /*
2 + *
3 + * linux/drivers/s390/scsi/zfcp_dbf.c
4 + *
5 + * FCP adapter driver for IBM eServer zSeries
6 + *
7 + * Debugging facilities
8 + *
9 + * (C) Copyright IBM Corp. 2005
10 + *
11 + * This program is free software; you can redistribute it and/or modify
12 + * it under the terms of the GNU General Public License as published by
13 + * the Free Software Foundation; either version 2, or (at your option)
14 + * any later version.
15 + *
16 + * This program is distributed in the hope that it will be useful,
17 + * but WITHOUT ANY WARRANTY; without even the implied warranty of
18 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
19 + * GNU General Public License for more details.
20 + *
21 + * You should have received a copy of the GNU General Public License
22 + * along with this program; if not, write to the Free Software
23 + * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
24 + */
25 + 
26 + #define ZFCP_DBF_REVISION "$Revision$"
27 + 
28 + #include <asm/debug.h>
29 + #include <linux/ctype.h>
30 + #include "zfcp_ext.h"
31 + 
32 + static u32 dbfsize = 4;
33 + 
34 + module_param(dbfsize, uint, 0400);
35 + MODULE_PARM_DESC(dbfsize,
36 + "number of pages for each debug feature area (default 4)");
37 + 
38 + #define ZFCP_LOG_AREA ZFCP_LOG_AREA_OTHER
39 + 
40 + static inline int
41 + zfcp_dbf_stck(char *out_buf, const char *label, unsigned long long stck)
42 + {
43 + unsigned long long sec;
44 + struct timespec xtime;
45 + int len = 0;
46 + 
47 + stck -= 0x8126d60e46000000LL - (0x3c26700LL * 1000000 * 4096);
48 + sec = stck >> 12;
49 + do_div(sec, 1000000);
50 + xtime.tv_sec = sec;
51 + stck -= (sec * 1000000) << 12;
52 + xtime.tv_nsec = ((stck * 1000) >> 12);
53 + len += sprintf(out_buf + len, "%-24s%011lu:%06lu\n",
54 + label, xtime.tv_sec, xtime.tv_nsec);
55 + 
56 + return len;
57 + }
58 + 
59 + static int zfcp_dbf_tag(char *out_buf, const char *label, const char *tag)
60 + {
61 + int len = 0, i;
62 + 
63 + len += sprintf(out_buf + len, "%-24s", label);
64 + for (i = 0; i < ZFCP_DBF_TAG_SIZE; i++)
65 + len += sprintf(out_buf + len, "%c", tag[i]);
66 + len += sprintf(out_buf + len, "\n");
67 + 
68 + return len;
69 + }
70 + 
71 + static int
72 + zfcp_dbf_view(char *out_buf, const char *label, const char *format, ...)
73 + {
74 + va_list arg;
75 + int len = 0;
76 + 
77 + len += sprintf(out_buf + len, "%-24s", label);
78 + va_start(arg, format);
79 + len += vsprintf(out_buf + len, format, arg);
80 + va_end(arg);
81 + len += sprintf(out_buf + len, "\n");
82 + 
83 + return len;
84 + }
85 + 
86 + static int
87 + zfcp_dbf_view_dump(char *out_buf, const char *label,
88 + char *buffer, int buflen, int offset, int total_size)
89 + {
90 + int len = 0;
91 + 
92 + if (offset == 0)
93 + len += sprintf(out_buf + len, "%-24s ", label);
94 + 
95 + while (buflen--) {
96 + if (offset > 0) {
97 + if ((offset % 32) == 0)
98 + len += sprintf(out_buf + len, "\n%-24c ", ' ');
99 + else if ((offset % 4) == 0)
100 + len += sprintf(out_buf + len, " ");
101 + }
102 + len += sprintf(out_buf + len, "%02x", *buffer++);
103 + if (++offset == total_size) {
104 + len += sprintf(out_buf + len, "\n");
105 + break;
106 + }
107 + }
108 + 
109 + if (total_size == 0)
110 + len += sprintf(out_buf + len, "\n");
111 + 
112 + return len;
113 + }
114 + 
115 + static inline int
116 + zfcp_dbf_view_header(debug_info_t * id, struct debug_view *view, int area,
117 + debug_entry_t * entry, char *out_buf)
118 + {
119 + struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)DEBUG_DATA(entry);
120 + int len = 0;
121 + 
122 + if (strncmp(dump->tag, "dump", ZFCP_DBF_TAG_SIZE) != 0) {
123 + len += zfcp_dbf_stck(out_buf + len, "timestamp",
124 + entry->id.stck);
125 + len += zfcp_dbf_view(out_buf + len, "cpu", "%02i",
126 + entry->id.fields.cpuid);
127 + } else {
128 + len += zfcp_dbf_view_dump(out_buf + len, NULL,
129 + dump->data,
130 + dump->size,
131 + dump->offset, dump->total_size);
132 + if ((dump->offset + dump->size) == dump->total_size)
133 + len += sprintf(out_buf + len, "\n");
134 + }
135 + 
136 + return len;
137 + }
138 + 
139 + inline void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *fsf_req)
140 + {
141 + struct zfcp_adapter *adapter = fsf_req->adapter;
142 + struct fsf_qtcb *qtcb = fsf_req->qtcb;
143 + union fsf_prot_status_qual *prot_status_qual =
144 + &qtcb->prefix.prot_status_qual;
145 + union fsf_status_qual *fsf_status_qual = &qtcb->header.fsf_status_qual;
146 + struct scsi_cmnd *scsi_cmnd;
147 + struct zfcp_port *port;
148 + struct zfcp_unit *unit;
149 + struct zfcp_send_els *send_els;
150 + struct zfcp_hba_dbf_record *rec = &adapter->hba_dbf_buf;
151 + struct zfcp_hba_dbf_record_response *response = &rec->type.response;
152 + int level;
153 + unsigned long flags;
154 + 
155 + spin_lock_irqsave(&adapter->hba_dbf_lock, flags);
156 + memset(rec, 0, sizeof(struct zfcp_hba_dbf_record));
157 + strncpy(rec->tag, "resp", ZFCP_DBF_TAG_SIZE);
158 + 
159 + if ((qtcb->prefix.prot_status != FSF_PROT_GOOD) &&
160 + (qtcb->prefix.prot_status != FSF_PROT_FSF_STATUS_PRESENTED)) {
161 + strncpy(rec->tag2, "perr", ZFCP_DBF_TAG_SIZE);
162 + level = 1;
163 + } else if (qtcb->header.fsf_status != FSF_GOOD) {
164 + strncpy(rec->tag2, "ferr", ZFCP_DBF_TAG_SIZE);
165 + level = 1;
166 + } else if ((fsf_req->fsf_command == FSF_QTCB_OPEN_PORT_WITH_DID) ||
167 + (fsf_req->fsf_command == FSF_QTCB_OPEN_LUN)) {
168 + strncpy(rec->tag2, "open", ZFCP_DBF_TAG_SIZE);
169 + level = 4;
170 + } else if ((prot_status_qual->doubleword[0] != 0) ||
171 + (prot_status_qual->doubleword[1] != 0) ||
172 + (fsf_status_qual->doubleword[0] != 0) ||
173 + (fsf_status_qual->doubleword[1] != 0)) {
174 + strncpy(rec->tag2, "qual", ZFCP_DBF_TAG_SIZE);
175 + level = 3;
176 + } else {
177 + strncpy(rec->tag2, "norm", ZFCP_DBF_TAG_SIZE);
178 + level = 6;
179 + }
180 + 
181 + response->fsf_command = fsf_req->fsf_command;
182 + response->fsf_reqid = (unsigned long)fsf_req;
183 + response->fsf_seqno = fsf_req->seq_no;
184 + response->fsf_issued = fsf_req->issued;
185 + response->fsf_prot_status = qtcb->prefix.prot_status;
186 + response->fsf_status = qtcb->header.fsf_status;
187 + memcpy(response->fsf_prot_status_qual,
188 + prot_status_qual, FSF_PROT_STATUS_QUAL_SIZE);
189 + memcpy(response->fsf_status_qual,
190 + fsf_status_qual, FSF_STATUS_QUALIFIER_SIZE);
191 + response->fsf_req_status = fsf_req->status;
192 + response->sbal_first = fsf_req->sbal_first;
193 + response->sbal_curr = fsf_req->sbal_curr;
194 + response->sbal_last = fsf_req->sbal_last;
195 + response->pool = fsf_req->pool != NULL;
196 + response->erp_action = (unsigned long)fsf_req->erp_action;
197 + 
198 + switch (fsf_req->fsf_command) {
199 + case FSF_QTCB_FCP_CMND:
200 + if (fsf_req->status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT)
201 + break;
202 + scsi_cmnd = (struct scsi_cmnd *)fsf_req->data;
203 + if (scsi_cmnd != NULL) {
204 + response->data.send_fcp.scsi_cmnd
205 + = (unsigned long)scsi_cmnd;
206 + response->data.send_fcp.scsi_serial
207 + = scsi_cmnd->serial_number;
208 + }
209 + break;
210 + 
211 + case FSF_QTCB_OPEN_PORT_WITH_DID:
212 + case FSF_QTCB_CLOSE_PORT:
213 + case FSF_QTCB_CLOSE_PHYSICAL_PORT:
214 + port = (struct zfcp_port *)fsf_req->data;
215 + response->data.port.wwpn = port->wwpn;
216 + response->data.port.d_id = port->d_id;
217 + response->data.port.port_handle = qtcb->header.port_handle;
218 + break;
219 + 
220 + case FSF_QTCB_OPEN_LUN:
221 + case FSF_QTCB_CLOSE_LUN:
222 + unit = (struct zfcp_unit *)fsf_req->data;
223 + port = unit->port;
224 + response->data.unit.wwpn = port->wwpn;
225 + response->data.unit.fcp_lun = unit->fcp_lun;
226 + response->data.unit.port_handle = qtcb->header.port_handle;
227 + response->data.unit.lun_handle = qtcb->header.lun_handle;
228 + break;
229 + 
230 + case FSF_QTCB_SEND_ELS:
231 + send_els = (struct zfcp_send_els *)fsf_req->data;
232 + response->data.send_els.d_id = qtcb->bottom.support.d_id;
233 + response->data.send_els.ls_code = send_els->ls_code >> 24;
234 + break;
235 + 
236 + case FSF_QTCB_ABORT_FCP_CMND:
237 + case FSF_QTCB_SEND_GENERIC:
238 + case FSF_QTCB_EXCHANGE_CONFIG_DATA:
239 + case FSF_QTCB_EXCHANGE_PORT_DATA:
240 + case FSF_QTCB_DOWNLOAD_CONTROL_FILE:
241 + case FSF_QTCB_UPLOAD_CONTROL_FILE:
242 + break;
243 + }
244 + 
245 + debug_event(adapter->hba_dbf, level,
246 + rec, sizeof(struct zfcp_hba_dbf_record));
247 + spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags);
248 + }
249 + 
250 + inline void
251 + zfcp_hba_dbf_event_fsf_unsol(const char *tag, struct zfcp_adapter *adapter,
252 + struct fsf_status_read_buffer *status_buffer)
253 + {
254 + struct zfcp_hba_dbf_record *rec = &adapter->hba_dbf_buf;
255 + unsigned long flags;
256 + 
257 + spin_lock_irqsave(&adapter->hba_dbf_lock, flags);
258 + memset(rec, 0, sizeof(struct zfcp_hba_dbf_record));
259 + strncpy(rec->tag, "stat", ZFCP_DBF_TAG_SIZE);
260 + strncpy(rec->tag2, tag, ZFCP_DBF_TAG_SIZE);
261 + 
262 + rec->type.status.failed = adapter->status_read_failed;
263 + if (status_buffer != NULL) {
264 + rec->type.status.status_type = status_buffer->status_type;
265 + rec->type.status.status_subtype = status_buffer->status_subtype;
266 + memcpy(&rec->type.status.queue_designator,
267 + &status_buffer->queue_designator,
268 + sizeof(struct fsf_queue_designator));
269 + 
270 + switch (status_buffer->status_type) {
271 + case FSF_STATUS_READ_SENSE_DATA_AVAIL:
272 + rec->type.status.payload_size =
273 + ZFCP_DBF_UNSOL_PAYLOAD_SENSE_DATA_AVAIL;
274 + break;
275 + 
276 + case FSF_STATUS_READ_BIT_ERROR_THRESHOLD:
277 + rec->type.status.payload_size =
278 + ZFCP_DBF_UNSOL_PAYLOAD_BIT_ERROR_THRESHOLD;
279 + break;
280 + 
281 + case FSF_STATUS_READ_LINK_DOWN:
282 + switch (status_buffer->status_subtype) {
283 + case FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK:
284 + case FSF_STATUS_READ_SUB_FDISC_FAILED:
285 + rec->type.status.payload_size =
286 + sizeof(struct fsf_link_down_info);
287 + }
288 + break;
289 + 
290 + case FSF_STATUS_READ_FEATURE_UPDATE_ALERT:
291 + rec->type.status.payload_size =
292 + ZFCP_DBF_UNSOL_PAYLOAD_FEATURE_UPDATE_ALERT;
293 + break;
294 + }
295 + memcpy(&rec->type.status.payload,
296 + &status_buffer->payload, rec->type.status.payload_size);
297 + }
298 + 
299 + debug_event(adapter->hba_dbf, 2,
300 + rec, sizeof(struct zfcp_hba_dbf_record));
301 + spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags);
302 + }
303 + 
304 + inline void
305 + zfcp_hba_dbf_event_qdio(struct zfcp_adapter *adapter, unsigned int status,
306 + unsigned int qdio_error, unsigned int siga_error,
307 + int sbal_index, int sbal_count)
308 + {
309 + struct zfcp_hba_dbf_record *rec = &adapter->hba_dbf_buf;
310 + unsigned long flags;
311 + 
312 + spin_lock_irqsave(&adapter->hba_dbf_lock, flags);
313 + memset(rec, 0, sizeof(struct zfcp_hba_dbf_record));
314 + strncpy(rec->tag, "qdio", ZFCP_DBF_TAG_SIZE);
315 + rec->type.qdio.status = status;
316 + rec->type.qdio.qdio_error = qdio_error;
317 + rec->type.qdio.siga_error = siga_error;
318 + rec->type.qdio.sbal_index = sbal_index;
319 + rec->type.qdio.sbal_count = sbal_count;
320 + debug_event(adapter->hba_dbf, 0,
321 + rec, sizeof(struct zfcp_hba_dbf_record));
322 + spin_unlock_irqrestore(&adapter->hba_dbf_lock, flags);
323 + }
324 + 
325 + static inline int
326 + zfcp_hba_dbf_view_response(char *out_buf,
327 + struct zfcp_hba_dbf_record_response *rec)
328 + {
329 + int len = 0;
330 + 
331 + len += zfcp_dbf_view(out_buf + len, "fsf_command", "0x%08x",
332 + rec->fsf_command);
333 + len += zfcp_dbf_view(out_buf + len, "fsf_reqid", "0x%0Lx",
334 + rec->fsf_reqid);
335 + len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x",
336 + rec->fsf_seqno);
337 + len += zfcp_dbf_stck(out_buf + len, "fsf_issued", rec->fsf_issued);
338 + len += zfcp_dbf_view(out_buf + len, "fsf_prot_status", "0x%08x",
339 + rec->fsf_prot_status);
340 + len += zfcp_dbf_view(out_buf + len, "fsf_status", "0x%08x",
341 + rec->fsf_status);
342 + len += zfcp_dbf_view_dump(out_buf + len, "fsf_prot_status_qual",
343 + rec->fsf_prot_status_qual,
344 + FSF_PROT_STATUS_QUAL_SIZE,
345 + 0, FSF_PROT_STATUS_QUAL_SIZE);
346 + len += zfcp_dbf_view_dump(out_buf + len, "fsf_status_qual",
347 + rec->fsf_status_qual,
348 + FSF_STATUS_QUALIFIER_SIZE,
349 + 0, FSF_STATUS_QUALIFIER_SIZE);
350 + len += zfcp_dbf_view(out_buf + len, "fsf_req_status", "0x%08x",
351 + rec->fsf_req_status);
352 + len += zfcp_dbf_view(out_buf + len, "sbal_first", "0x%02x",
353 + rec->sbal_first);
354 + len += zfcp_dbf_view(out_buf + len, "sbal_curr", "0x%02x",
355 + rec->sbal_curr);
356 + len += zfcp_dbf_view(out_buf + len, "sbal_last", "0x%02x",
357 + rec->sbal_last);
358 + len += zfcp_dbf_view(out_buf + len, "pool", "0x%02x", rec->pool);
359 + 
360 + switch (rec->fsf_command) {
361 + case FSF_QTCB_FCP_CMND:
362 + if (rec->fsf_req_status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT)
363 + break;
364 + len += zfcp_dbf_view(out_buf + len, "scsi_cmnd", "0x%0Lx",
365 + rec->data.send_fcp.scsi_cmnd);
366 + len += zfcp_dbf_view(out_buf + len, "scsi_serial", "0x%016Lx",
367 + rec->data.send_fcp.scsi_serial);
368 + break;
369 + 
370 + case FSF_QTCB_OPEN_PORT_WITH_DID:
371 + case FSF_QTCB_CLOSE_PORT:
372 + case FSF_QTCB_CLOSE_PHYSICAL_PORT:
373 + len += zfcp_dbf_view(out_buf + len, "wwpn", "0x%016Lx",
374 + rec->data.port.wwpn);
375 + len += zfcp_dbf_view(out_buf + len, "d_id", "0x%06x",
376 + rec->data.port.d_id);
377 + len += zfcp_dbf_view(out_buf + len, "port_handle", "0x%08x",
378 + rec->data.port.port_handle);
379 + break;
380 + 
381 + case FSF_QTCB_OPEN_LUN:
382 + case FSF_QTCB_CLOSE_LUN:
383 + len += zfcp_dbf_view(out_buf + len, "wwpn", "0x%016Lx",
384 + rec->data.unit.wwpn);
385 + len += zfcp_dbf_view(out_buf + len, "fcp_lun", "0x%016Lx",
386 + rec->data.unit.fcp_lun);
387 + len += zfcp_dbf_view(out_buf + len, "port_handle", "0x%08x",
388 + rec->data.unit.port_handle);
389 + len += zfcp_dbf_view(out_buf + len, "lun_handle", "0x%08x",
390 + rec->data.unit.lun_handle);
391 + break;
392 + 
393 + case FSF_QTCB_SEND_ELS:
394 + len += zfcp_dbf_view(out_buf + len, "d_id", "0x%06x",
395 + rec->data.send_els.d_id);
396 + len += zfcp_dbf_view(out_buf + len, "ls_code", "0x%02x",
397 + rec->data.send_els.ls_code);
398 + break;
399 + 
400 + case FSF_QTCB_ABORT_FCP_CMND:
401 + case FSF_QTCB_SEND_GENERIC:
402 + case FSF_QTCB_EXCHANGE_CONFIG_DATA:
403 + case FSF_QTCB_EXCHANGE_PORT_DATA:
404 + case FSF_QTCB_DOWNLOAD_CONTROL_FILE:
405 + case FSF_QTCB_UPLOAD_CONTROL_FILE:
406 + break;
407 + }
408 + 
409 + return len;
410 + }
411 + 
412 + static inline int
413 + zfcp_hba_dbf_view_status(char *out_buf, struct zfcp_hba_dbf_record_status *rec)
414 + {
415 + int len = 0;
416 + 
417 + len += zfcp_dbf_view(out_buf + len, "failed", "0x%02x", rec->failed);
418 + len += zfcp_dbf_view(out_buf + len, "status_type", "0x%08x",
419 + rec->status_type);
420 + len += zfcp_dbf_view(out_buf + len, "status_subtype", "0x%08x",
421 + rec->status_subtype);
422 + len += zfcp_dbf_view_dump(out_buf + len, "queue_designator",
423 + (char *)&rec->queue_designator,
424 + sizeof(struct fsf_queue_designator),
425 + 0, sizeof(struct fsf_queue_designator));
426 + len += zfcp_dbf_view_dump(out_buf + len, "payload",
427 + (char *)&rec->payload,
428 + rec->payload_size, 0, rec->payload_size);
429 + 
430 + return len;
431 + }
432 + 
433 + static inline int
434 + zfcp_hba_dbf_view_qdio(char *out_buf, struct zfcp_hba_dbf_record_qdio *rec)
435 + {
436 + int len = 0;
437 + 
438 + len += zfcp_dbf_view(out_buf + len, "status", "0x%08x", rec->status);
439 + len += zfcp_dbf_view(out_buf + len, "qdio_error", "0x%08x",
440 + rec->qdio_error);
441 + len += zfcp_dbf_view(out_buf + len, "siga_error", "0x%08x",
442 + rec->siga_error);
443 + len += zfcp_dbf_view(out_buf + len, "sbal_index", "0x%02x",
444 + rec->sbal_index);
445 + len += zfcp_dbf_view(out_buf + len, "sbal_count", "0x%02x",
446 + rec->sbal_count);
447 + 
448 + return len;
449 + }
450 + 
451 + static int
452 + zfcp_hba_dbf_view_format(debug_info_t * id, struct debug_view *view,
453 + char *out_buf, const char *in_buf)
454 + {
455 + struct zfcp_hba_dbf_record *rec = (struct zfcp_hba_dbf_record *)in_buf;
456 + int len = 0;
457 + 
458 + if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0)
459 + return 0;
460 + 
461 + len += zfcp_dbf_tag(out_buf + len, "tag", rec->tag);
462 + if (isalpha(rec->tag2[0]))
463 + len += zfcp_dbf_tag(out_buf + len, "tag2", rec->tag2);
464 + if (strncmp(rec->tag, "resp", ZFCP_DBF_TAG_SIZE) == 0)
465 + len += zfcp_hba_dbf_view_response(out_buf + len,
466 + &rec->type.response);
467 + else if (strncmp(rec->tag, "stat", ZFCP_DBF_TAG_SIZE) == 0)
468 + len += zfcp_hba_dbf_view_status(out_buf + len,
469 + &rec->type.status);
470 + else if (strncmp(rec->tag, "qdio", ZFCP_DBF_TAG_SIZE) == 0)
471 + len += zfcp_hba_dbf_view_qdio(out_buf + len, &rec->type.qdio);
472 + 
473 + len += sprintf(out_buf + len, "\n");
474 + 
475 + return len;
476 + }
477 + 
478 + struct debug_view zfcp_hba_dbf_view = {
479 + "structured",
480 + NULL,
481 + &zfcp_dbf_view_header,
482 + &zfcp_hba_dbf_view_format,
483 + NULL,
484 + NULL
485 + };
486 + 
487 + inline void
488 + _zfcp_san_dbf_event_common_ct(const char *tag, struct zfcp_fsf_req *fsf_req,
489 + u32 s_id, u32 d_id, void *buffer, int buflen)
490 + {
491 + struct zfcp_send_ct *send_ct = (struct zfcp_send_ct *)fsf_req->data;
492 + struct zfcp_port *port = send_ct->port;
493 + struct zfcp_adapter *adapter = port->adapter;
494 + struct ct_hdr *header = (struct ct_hdr *)buffer;
495 + struct zfcp_san_dbf_record *rec = &adapter->san_dbf_buf;
496 + struct zfcp_san_dbf_record_ct *ct = &rec->type.ct;
497 + unsigned long flags;
498 + 
499 + spin_lock_irqsave(&adapter->san_dbf_lock, flags);
500 + memset(rec, 0, sizeof(struct zfcp_san_dbf_record));
501 + strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE);
502 + rec->fsf_reqid = (unsigned long)fsf_req;
503 + rec->fsf_seqno = fsf_req->seq_no;
504 + rec->s_id = s_id;
505 + rec->d_id = d_id;
506 + if (strncmp(tag, "octc", ZFCP_DBF_TAG_SIZE) == 0) {
507 + ct->type.request.cmd_req_code = header->cmd_rsp_code;
508 + ct->type.request.revision = header->revision;
509 + ct->type.request.gs_type = header->gs_type;
510 + ct->type.request.gs_subtype = header->gs_subtype;
511 + ct->type.request.options = header->options;
512 + ct->type.request.max_res_size = header->max_res_size;
513 + } else if (strncmp(tag, "rctc", ZFCP_DBF_TAG_SIZE) == 0) {
514 + ct->type.response.cmd_rsp_code = header->cmd_rsp_code;
515 + ct->type.response.revision = header->revision;
516 + ct->type.response.reason_code = header->reason_code;
517 + ct->type.response.reason_code_expl = header->reason_code_expl;
518 + ct->type.response.vendor_unique = header->vendor_unique;
519 + }
520 + ct->payload_size =
521 + min(buflen - (int)sizeof(struct ct_hdr), ZFCP_DBF_CT_PAYLOAD);
522 + memcpy(ct->payload, buffer + sizeof(struct ct_hdr), ct->payload_size);
523 + debug_event(adapter->san_dbf, 3,
524 + rec, sizeof(struct zfcp_san_dbf_record));
525 + spin_unlock_irqrestore(&adapter->san_dbf_lock, flags);
526 + }
527 + 
528 + inline void zfcp_san_dbf_event_ct_request(struct zfcp_fsf_req *fsf_req)
529 + {
530 + struct zfcp_send_ct *ct = (struct zfcp_send_ct *)fsf_req->data;
531 + struct zfcp_port *port = ct->port;
532 + struct zfcp_adapter *adapter = port->adapter;
533 + 
534 + _zfcp_san_dbf_event_common_ct("octc", fsf_req,
535 + fc_host_port_id(adapter->scsi_host),
536 + port->d_id, zfcp_sg_to_address(ct->req),
537 + ct->req->length);
538 + }
539 + 
540 + inline void zfcp_san_dbf_event_ct_response(struct zfcp_fsf_req *fsf_req)
541 + {
542 + struct zfcp_send_ct *ct = (struct zfcp_send_ct *)fsf_req->data;
543 + struct zfcp_port *port = ct->port;
544 + struct zfcp_adapter *adapter = port->adapter;
545 + 
546 + _zfcp_san_dbf_event_common_ct("rctc", fsf_req, port->d_id,
547 + fc_host_port_id(adapter->scsi_host),
548 + zfcp_sg_to_address(ct->resp),
549 + ct->resp->length);
550 + }
551 + 
552 + static inline void
553 + _zfcp_san_dbf_event_common_els(const char *tag, int level,
554 + struct zfcp_fsf_req *fsf_req, u32 s_id,
555 + u32 d_id, u8 ls_code, void *buffer, int buflen)
556 + {
557 + struct zfcp_adapter *adapter = fsf_req->adapter;
558 + struct zfcp_san_dbf_record *rec = &adapter->san_dbf_buf;
559 + struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)rec;
560 + unsigned long flags;
561 + int offset = 0;
562 + 
563 + spin_lock_irqsave(&adapter->san_dbf_lock, flags);
564 + do {
565 + memset(rec, 0, sizeof(struct zfcp_san_dbf_record));
566 + if (offset == 0) {
567 + strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE);
568 + rec->fsf_reqid = (unsigned long)fsf_req;
569 + rec->fsf_seqno = fsf_req->seq_no;
570 + rec->s_id = s_id;
571 + rec->d_id = d_id;
572 + rec->type.els.ls_code = ls_code;
573 + buflen = min(buflen, ZFCP_DBF_ELS_MAX_PAYLOAD);
574 + rec->type.els.payload_size = buflen;
575 + memcpy(rec->type.els.payload,
576 + buffer, min(buflen, ZFCP_DBF_ELS_PAYLOAD));
577 + offset += min(buflen, ZFCP_DBF_ELS_PAYLOAD);
578 + } else {
579 + strncpy(dump->tag, "dump", ZFCP_DBF_TAG_SIZE);
580 + dump->total_size = buflen;
581 + dump->offset = offset;
582 + dump->size = min(buflen - offset,
583 + (int)sizeof(struct zfcp_san_dbf_record)
584 + - (int)sizeof(struct zfcp_dbf_dump));
585 + memcpy(dump->data, buffer + offset, dump->size);
586 + offset += dump->size;
587 + }
588 + debug_event(adapter->san_dbf, level,
589 + rec, sizeof(struct zfcp_san_dbf_record));
590 + } while (offset < buflen);
591 + spin_unlock_irqrestore(&adapter->san_dbf_lock, flags);
592 + }
593 + 
594 + inline void zfcp_san_dbf_event_els_request(struct zfcp_fsf_req *fsf_req)
595 + {
596 + struct zfcp_send_els *els = (struct zfcp_send_els *)fsf_req->data;
597 + 
598 + _zfcp_san_dbf_event_common_els("oels", 2, fsf_req,
599 + fc_host_port_id(els->adapter->scsi_host),
600 + els->d_id,
601 + *(u8 *) zfcp_sg_to_address(els->req),
602 + zfcp_sg_to_address(els->req),
603 + els->req->length);
604 + }
605 + 
606 + inline void zfcp_san_dbf_event_els_response(struct zfcp_fsf_req *fsf_req)
607 + {
608 + struct zfcp_send_els *els = (struct zfcp_send_els *)fsf_req->data;
609 + 
610 + _zfcp_san_dbf_event_common_els("rels", 2, fsf_req, els->d_id,
611 + fc_host_port_id(els->adapter->scsi_host),
612 + *(u8 *) zfcp_sg_to_address(els->req),
613 + zfcp_sg_to_address(els->resp),
614 + els->resp->length);
615 + }
616 + 
617 + inline void zfcp_san_dbf_event_incoming_els(struct zfcp_fsf_req *fsf_req)
618 + {
619 + struct zfcp_adapter *adapter = fsf_req->adapter;
620 + struct fsf_status_read_buffer *status_buffer =
621 + (struct fsf_status_read_buffer *)fsf_req->data;
622 + int length = (int)status_buffer->length -
623 + (int)((void *)&status_buffer->payload - (void *)status_buffer);
624 + 
625 + _zfcp_san_dbf_event_common_els("iels", 1, fsf_req, status_buffer->d_id,
626 + fc_host_port_id(adapter->scsi_host),
627 + *(u8 *) status_buffer->payload,
628 + (void *)status_buffer->payload, length);
629 + }
630 + 
631 + static int
632 + zfcp_san_dbf_view_format(debug_info_t * id, struct debug_view *view,
633 + char *out_buf, const char *in_buf)
634 + {
635 + struct zfcp_san_dbf_record *rec = (struct zfcp_san_dbf_record *)in_buf;
636 + char *buffer = NULL;
637 + int buflen = 0, total = 0;
638 + int len = 0;
639 + 
640 + if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0)
641 + return 0;
642 + 
643 + len += zfcp_dbf_tag(out_buf + len, "tag", rec->tag);
644 + len += zfcp_dbf_view(out_buf + len, "fsf_reqid", "0x%0Lx",
645 + rec->fsf_reqid);
646 + len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x",
647 + rec->fsf_seqno);
648 + len += zfcp_dbf_view(out_buf + len, "s_id", "0x%06x", rec->s_id);
649 + len += zfcp_dbf_view(out_buf + len, "d_id", "0x%06x", rec->d_id);
650 + 
651 + if (strncmp(rec->tag, "octc", ZFCP_DBF_TAG_SIZE) == 0) {
652 + len += zfcp_dbf_view(out_buf + len, "cmd_req_code", "0x%04x",
653 + rec->type.ct.type.request.cmd_req_code);
654 + len += zfcp_dbf_view(out_buf + len, "revision", "0x%02x",
655 + rec->type.ct.type.request.revision);
656 + len += zfcp_dbf_view(out_buf + len, "gs_type", "0x%02x",
657 + rec->type.ct.type.request.gs_type);
658 + len += zfcp_dbf_view(out_buf + len, "gs_subtype", "0x%02x",
659 + rec->type.ct.type.request.gs_subtype);
660 + len += zfcp_dbf_view(out_buf + len, "options", "0x%02x",
661 + rec->type.ct.type.request.options);
662 + len += zfcp_dbf_view(out_buf + len, "max_res_size", "0x%04x",
663 + rec->type.ct.type.request.max_res_size);
664 + total = rec->type.ct.payload_size;
665 + buffer = rec->type.ct.payload;
666 + buflen = min(total, ZFCP_DBF_CT_PAYLOAD);
667 + } else if (strncmp(rec->tag, "rctc", ZFCP_DBF_TAG_SIZE) == 0) {
668 + len += zfcp_dbf_view(out_buf + len, "cmd_rsp_code", "0x%04x",
669 + rec->type.ct.type.response.cmd_rsp_code);
670 + len += zfcp_dbf_view(out_buf + len, "revision", "0x%02x",
671 + rec->type.ct.type.response.revision);
672 + len += zfcp_dbf_view(out_buf + len, "reason_code", "0x%02x",
673 + rec->type.ct.type.response.reason_code);
674 + len +=
675 + zfcp_dbf_view(out_buf + len, "reason_code_expl", "0x%02x",
676 + rec->type.ct.type.response.reason_code_expl);
677 + len +=
678 + zfcp_dbf_view(out_buf + len, "vendor_unique", "0x%02x",
679 + rec->type.ct.type.response.vendor_unique);
680 + total = rec->type.ct.payload_size;
681 + buffer = rec->type.ct.payload;
682 + buflen = min(total, ZFCP_DBF_CT_PAYLOAD);
683 + } else if (strncmp(rec->tag, "oels", ZFCP_DBF_TAG_SIZE) == 0 ||
684 + strncmp(rec->tag, "rels", ZFCP_DBF_TAG_SIZE) == 0 ||
685 + strncmp(rec->tag, "iels", ZFCP_DBF_TAG_SIZE) == 0) {
686 + len += zfcp_dbf_view(out_buf + len, "ls_code", "0x%02x",
687 + rec->type.els.ls_code);
688 + total = rec->type.els.payload_size;
689 + buffer = rec->type.els.payload;
690 + buflen = min(total, ZFCP_DBF_ELS_PAYLOAD);
691 + }
692 + 
693 + len += zfcp_dbf_view_dump(out_buf + len, "payload",
694 + buffer, buflen, 0, total);
695 + 
696 + if (buflen == total)
697 + len += sprintf(out_buf + len, "\n");
698 + 
699 + return len;
700 + }
701 + 
702 + struct debug_view zfcp_san_dbf_view = {
703 + "structured",
704 + NULL,
705 +
&zfcp_dbf_view_header, 706 + &zfcp_san_dbf_view_format, 707 + NULL, 708 + NULL 709 + }; 710 + 711 + static inline void 712 + _zfcp_scsi_dbf_event_common(const char *tag, const char *tag2, int level, 713 + struct zfcp_adapter *adapter, 714 + struct scsi_cmnd *scsi_cmnd, 715 + struct zfcp_fsf_req *new_fsf_req) 716 + { 717 + struct zfcp_fsf_req *fsf_req = 718 + (struct zfcp_fsf_req *)scsi_cmnd->host_scribble; 719 + struct zfcp_scsi_dbf_record *rec = &adapter->scsi_dbf_buf; 720 + struct zfcp_dbf_dump *dump = (struct zfcp_dbf_dump *)rec; 721 + unsigned long flags; 722 + struct fcp_rsp_iu *fcp_rsp; 723 + char *fcp_rsp_info = NULL, *fcp_sns_info = NULL; 724 + int offset = 0, buflen = 0; 725 + 726 + spin_lock_irqsave(&adapter->scsi_dbf_lock, flags); 727 + do { 728 + memset(rec, 0, sizeof(struct zfcp_scsi_dbf_record)); 729 + if (offset == 0) { 730 + strncpy(rec->tag, tag, ZFCP_DBF_TAG_SIZE); 731 + strncpy(rec->tag2, tag2, ZFCP_DBF_TAG_SIZE); 732 + if (scsi_cmnd->device) { 733 + rec->scsi_id = scsi_cmnd->device->id; 734 + rec->scsi_lun = scsi_cmnd->device->lun; 735 + } 736 + rec->scsi_result = scsi_cmnd->result; 737 + rec->scsi_cmnd = (unsigned long)scsi_cmnd; 738 + rec->scsi_serial = scsi_cmnd->serial_number; 739 + memcpy(rec->scsi_opcode, 740 + &scsi_cmnd->cmnd, 741 + min((int)scsi_cmnd->cmd_len, 742 + ZFCP_DBF_SCSI_OPCODE)); 743 + rec->scsi_retries = scsi_cmnd->retries; 744 + rec->scsi_allowed = scsi_cmnd->allowed; 745 + if (fsf_req != NULL) { 746 + fcp_rsp = (struct fcp_rsp_iu *) 747 + &(fsf_req->qtcb->bottom.io.fcp_rsp); 748 + fcp_rsp_info = 749 + zfcp_get_fcp_rsp_info_ptr(fcp_rsp); 750 + fcp_sns_info = 751 + zfcp_get_fcp_sns_info_ptr(fcp_rsp); 752 + 753 + rec->type.fcp.rsp_validity = 754 + fcp_rsp->validity.value; 755 + rec->type.fcp.rsp_scsi_status = 756 + fcp_rsp->scsi_status; 757 + rec->type.fcp.rsp_resid = fcp_rsp->fcp_resid; 758 + if (fcp_rsp->validity.bits.fcp_rsp_len_valid) 759 + rec->type.fcp.rsp_code = 760 + *(fcp_rsp_info + 3); 761 + if 
(fcp_rsp->validity.bits.fcp_sns_len_valid) { 762 + buflen = min((int)fcp_rsp->fcp_sns_len, 763 + ZFCP_DBF_SCSI_MAX_FCP_SNS_INFO); 764 + rec->type.fcp.sns_info_len = buflen; 765 + memcpy(rec->type.fcp.sns_info, 766 + fcp_sns_info, 767 + min(buflen, 768 + ZFCP_DBF_SCSI_FCP_SNS_INFO)); 769 + offset += min(buflen, 770 + ZFCP_DBF_SCSI_FCP_SNS_INFO); 771 + } 772 + 773 + rec->fsf_reqid = (unsigned long)fsf_req; 774 + rec->fsf_seqno = fsf_req->seq_no; 775 + rec->fsf_issued = fsf_req->issued; 776 + } 777 + if (new_fsf_req != NULL) { 778 + rec->type.new_fsf_req.fsf_reqid = 779 + (unsigned long) 780 + new_fsf_req; 781 + rec->type.new_fsf_req.fsf_seqno = 782 + new_fsf_req->seq_no; 783 + rec->type.new_fsf_req.fsf_issued = 784 + new_fsf_req->issued; 785 + } 786 + } else { 787 + strncpy(dump->tag, "dump", ZFCP_DBF_TAG_SIZE); 788 + dump->total_size = buflen; 789 + dump->offset = offset; 790 + dump->size = min(buflen - offset, 791 + (int)sizeof(struct 792 + zfcp_scsi_dbf_record) - 793 + (int)sizeof(struct zfcp_dbf_dump)); 794 + memcpy(dump->data, fcp_sns_info + offset, dump->size); 795 + offset += dump->size; 796 + } 797 + debug_event(adapter->scsi_dbf, level, 798 + rec, sizeof(struct zfcp_scsi_dbf_record)); 799 + } while (offset < buflen); 800 + spin_unlock_irqrestore(&adapter->scsi_dbf_lock, flags); 801 + } 802 + 803 + inline void 804 + zfcp_scsi_dbf_event_result(const char *tag, int level, 805 + struct zfcp_adapter *adapter, 806 + struct scsi_cmnd *scsi_cmnd) 807 + { 808 + _zfcp_scsi_dbf_event_common("rslt", 809 + tag, level, adapter, scsi_cmnd, NULL); 810 + } 811 + 812 + inline void 813 + zfcp_scsi_dbf_event_abort(const char *tag, struct zfcp_adapter *adapter, 814 + struct scsi_cmnd *scsi_cmnd, 815 + struct zfcp_fsf_req *new_fsf_req) 816 + { 817 + _zfcp_scsi_dbf_event_common("abrt", 818 + tag, 1, adapter, scsi_cmnd, new_fsf_req); 819 + } 820 + 821 + inline void 822 + zfcp_scsi_dbf_event_devreset(const char *tag, u8 flag, struct zfcp_unit *unit, 823 + struct scsi_cmnd 
*scsi_cmnd) 824 + { 825 + struct zfcp_adapter *adapter = unit->port->adapter; 826 + 827 + _zfcp_scsi_dbf_event_common(flag == FCP_TARGET_RESET ? "trst" : "lrst", 828 + tag, 1, adapter, scsi_cmnd, NULL); 829 + } 830 + 831 + static int 832 + zfcp_scsi_dbf_view_format(debug_info_t * id, struct debug_view *view, 833 + char *out_buf, const char *in_buf) 834 + { 835 + struct zfcp_scsi_dbf_record *rec = 836 + (struct zfcp_scsi_dbf_record *)in_buf; 837 + int len = 0; 838 + 839 + if (strncmp(rec->tag, "dump", ZFCP_DBF_TAG_SIZE) == 0) 840 + return 0; 841 + 842 + len += zfcp_dbf_tag(out_buf + len, "tag", rec->tag); 843 + len += zfcp_dbf_tag(out_buf + len, "tag2", rec->tag2); 844 + len += zfcp_dbf_view(out_buf + len, "scsi_id", "0x%08x", rec->scsi_id); 845 + len += zfcp_dbf_view(out_buf + len, "scsi_lun", "0x%08x", 846 + rec->scsi_lun); 847 + len += zfcp_dbf_view(out_buf + len, "scsi_result", "0x%08x", 848 + rec->scsi_result); 849 + len += zfcp_dbf_view(out_buf + len, "scsi_cmnd", "0x%0Lx", 850 + rec->scsi_cmnd); 851 + len += zfcp_dbf_view(out_buf + len, "scsi_serial", "0x%016Lx", 852 + rec->scsi_serial); 853 + len += zfcp_dbf_view_dump(out_buf + len, "scsi_opcode", 854 + rec->scsi_opcode, 855 + ZFCP_DBF_SCSI_OPCODE, 856 + 0, ZFCP_DBF_SCSI_OPCODE); 857 + len += zfcp_dbf_view(out_buf + len, "scsi_retries", "0x%02x", 858 + rec->scsi_retries); 859 + len += zfcp_dbf_view(out_buf + len, "scsi_allowed", "0x%02x", 860 + rec->scsi_allowed); 861 + len += zfcp_dbf_view(out_buf + len, "fsf_reqid", "0x%0Lx", 862 + rec->fsf_reqid); 863 + len += zfcp_dbf_view(out_buf + len, "fsf_seqno", "0x%08x", 864 + rec->fsf_seqno); 865 + len += zfcp_dbf_stck(out_buf + len, "fsf_issued", rec->fsf_issued); 866 + if (strncmp(rec->tag, "rslt", ZFCP_DBF_TAG_SIZE) == 0) { 867 + len += 868 + zfcp_dbf_view(out_buf + len, "fcp_rsp_validity", "0x%02x", 869 + rec->type.fcp.rsp_validity); 870 + len += 871 + zfcp_dbf_view(out_buf + len, "fcp_rsp_scsi_status", 872 + "0x%02x", rec->type.fcp.rsp_scsi_status); 873 + len 
+= 874 + zfcp_dbf_view(out_buf + len, "fcp_rsp_resid", "0x%08x", 875 + rec->type.fcp.rsp_resid); 876 + len += 877 + zfcp_dbf_view(out_buf + len, "fcp_rsp_code", "0x%08x", 878 + rec->type.fcp.rsp_code); 879 + len += 880 + zfcp_dbf_view(out_buf + len, "fcp_sns_info_len", "0x%08x", 881 + rec->type.fcp.sns_info_len); 882 + len += 883 + zfcp_dbf_view_dump(out_buf + len, "fcp_sns_info", 884 + rec->type.fcp.sns_info, 885 + min((int)rec->type.fcp.sns_info_len, 886 + ZFCP_DBF_SCSI_FCP_SNS_INFO), 0, 887 + rec->type.fcp.sns_info_len); 888 + } else if (strncmp(rec->tag, "abrt", ZFCP_DBF_TAG_SIZE) == 0) { 889 + len += zfcp_dbf_view(out_buf + len, "fsf_reqid_abort", "0x%0Lx", 890 + rec->type.new_fsf_req.fsf_reqid); 891 + len += zfcp_dbf_view(out_buf + len, "fsf_seqno_abort", "0x%08x", 892 + rec->type.new_fsf_req.fsf_seqno); 893 + len += zfcp_dbf_stck(out_buf + len, "fsf_issued", 894 + rec->type.new_fsf_req.fsf_issued); 895 + } else if ((strncmp(rec->tag, "trst", ZFCP_DBF_TAG_SIZE) == 0) || 896 + (strncmp(rec->tag, "lrst", ZFCP_DBF_TAG_SIZE) == 0)) { 897 + len += zfcp_dbf_view(out_buf + len, "fsf_reqid_reset", "0x%0Lx", 898 + rec->type.new_fsf_req.fsf_reqid); 899 + len += zfcp_dbf_view(out_buf + len, "fsf_seqno_reset", "0x%08x", 900 + rec->type.new_fsf_req.fsf_seqno); 901 + len += zfcp_dbf_stck(out_buf + len, "fsf_issued", 902 + rec->type.new_fsf_req.fsf_issued); 903 + } 904 + 905 + len += sprintf(out_buf + len, "\n"); 906 + 907 + return len; 908 + } 909 + 910 + struct debug_view zfcp_scsi_dbf_view = { 911 + "structured", 912 + NULL, 913 + &zfcp_dbf_view_header, 914 + &zfcp_scsi_dbf_view_format, 915 + NULL, 916 + NULL 917 + }; 918 + 919 + /** 920 + * zfcp_adapter_debug_register - registers debug feature for an adapter 921 + * @adapter: pointer to adapter for which debug features should be registered 922 + * return: -ENOMEM on error, 0 otherwise 923 + */ 924 + int zfcp_adapter_debug_register(struct zfcp_adapter *adapter) 925 + { 926 + char dbf_name[DEBUG_MAX_NAME_LEN]; 927 + 928 + 
/* debug feature area which records recovery activity */ 929 + spin_lock_init(&adapter->erp_dbf_lock); 930 + sprintf(dbf_name, "zfcp_%s_erp", zfcp_get_busid_by_adapter(adapter)); 931 + adapter->erp_dbf = debug_register(dbf_name, dbfsize, 2, 932 + sizeof(struct zfcp_erp_dbf_record)); 933 + if (!adapter->erp_dbf) 934 + goto failed; 935 + debug_register_view(adapter->erp_dbf, &debug_hex_ascii_view); 936 + debug_set_level(adapter->erp_dbf, 3); 937 + 938 + /* debug feature area which records HBA (FSF and QDIO) conditions */ 939 + spin_lock_init(&adapter->hba_dbf_lock); 940 + sprintf(dbf_name, "zfcp_%s_hba", zfcp_get_busid_by_adapter(adapter)); 941 + adapter->hba_dbf = debug_register(dbf_name, dbfsize, 1, 942 + sizeof(struct zfcp_hba_dbf_record)); 943 + if (!adapter->hba_dbf) 944 + goto failed; 945 + debug_register_view(adapter->hba_dbf, &debug_hex_ascii_view); 946 + debug_register_view(adapter->hba_dbf, &zfcp_hba_dbf_view); 947 + debug_set_level(adapter->hba_dbf, 3); 948 + 949 + /* debug feature area which records SAN command failures and recovery */ 950 + spin_lock_init(&adapter->san_dbf_lock); 951 + sprintf(dbf_name, "zfcp_%s_san", zfcp_get_busid_by_adapter(adapter)); 952 + adapter->san_dbf = debug_register(dbf_name, dbfsize, 1, 953 + sizeof(struct zfcp_san_dbf_record)); 954 + if (!adapter->san_dbf) 955 + goto failed; 956 + debug_register_view(adapter->san_dbf, &debug_hex_ascii_view); 957 + debug_register_view(adapter->san_dbf, &zfcp_san_dbf_view); 958 + debug_set_level(adapter->san_dbf, 6); 959 + 960 + /* debug feature area which records SCSI command failures and recovery */ 961 + spin_lock_init(&adapter->scsi_dbf_lock); 962 + sprintf(dbf_name, "zfcp_%s_scsi", zfcp_get_busid_by_adapter(adapter)); 963 + adapter->scsi_dbf = debug_register(dbf_name, dbfsize, 1, 964 + sizeof(struct zfcp_scsi_dbf_record)); 965 + if (!adapter->scsi_dbf) 966 + goto failed; 967 + debug_register_view(adapter->scsi_dbf, &debug_hex_ascii_view); 968 + debug_register_view(adapter->scsi_dbf, 
&zfcp_scsi_dbf_view); 969 + debug_set_level(adapter->scsi_dbf, 3); 970 + 971 + return 0; 972 + 973 + failed: 974 + zfcp_adapter_debug_unregister(adapter); 975 + 976 + return -ENOMEM; 977 + } 978 + 979 + /** 980 + * zfcp_adapter_debug_unregister - unregisters debug feature for an adapter 981 + * @adapter: pointer to adapter for which debug features should be unregistered 982 + */ 983 + void zfcp_adapter_debug_unregister(struct zfcp_adapter *adapter) 984 + { 985 + debug_unregister(adapter->scsi_dbf); 986 + debug_unregister(adapter->san_dbf); 987 + debug_unregister(adapter->hba_dbf); 988 + debug_unregister(adapter->erp_dbf); 989 + adapter->scsi_dbf = NULL; 990 + adapter->san_dbf = NULL; 991 + adapter->hba_dbf = NULL; 992 + adapter->erp_dbf = NULL; 993 + } 994 + 995 + #undef ZFCP_LOG_AREA
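The trace helpers above (`_zfcp_san_dbf_event_common_els()` and `_zfcp_scsi_dbf_event_common()`) share one pattern: a payload too large for a single trace record is emitted as a first record followed by "dump" records carrying `(total_size, offset, size)`, so a reader can stitch the payload back together. A minimal user-space sketch of that splitting loop (record sizes and names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <string.h>

#define REC_PAYLOAD 8   /* payload bytes in the first record (illustrative) */
#define DUMP_DATA  12   /* data bytes per follow-up dump record (illustrative) */

/* Simplified stand-in for struct zfcp_dbf_dump. */
struct dump_rec {
	int total_size;              /* size of the complete payload */
	int offset;                  /* where this chunk starts */
	int size;                    /* bytes carried by this record */
	unsigned char data[DUMP_DATA];
};

/* Split buflen bytes into a first record plus dump records, mirroring the
 * do/while loop in _zfcp_san_dbf_event_common_els(). Returns the number of
 * records emitted (first record included). */
static int trace_payload(const unsigned char *buffer, int buflen,
			 struct dump_rec *recs, int max_recs)
{
	int offset = 0, n = 0;

	do {
		struct dump_rec *rec = &recs[n];

		memset(rec, 0, sizeof(*rec));
		if (offset == 0)
			rec->size = buflen < REC_PAYLOAD ? buflen : REC_PAYLOAD;
		else
			rec->size = buflen - offset < DUMP_DATA ?
					buflen - offset : DUMP_DATA;
		memcpy(rec->data, buffer + offset, rec->size);
		rec->total_size = buflen;
		rec->offset = offset;
		offset += rec->size;
		n++;
	} while (offset < buflen && n < max_recs);

	return n;
}
```

In the driver the equivalent loop runs under `san_dbf_lock`/`scsi_dbf_lock`, since one shared record buffer is reused for every chunk.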
+195 -112
drivers/s390/scsi/zfcp_def.h
··· 66 66 /********************* GENERAL DEFINES *********************************/ 67 67 68 68 /* zfcp version number, it consists of major, minor, and patch-level number */ 69 - #define ZFCP_VERSION "4.3.0" 69 + #define ZFCP_VERSION "4.5.0" 70 70 71 71 /** 72 72 * zfcp_sg_to_address - determine kernel address from struct scatterlist ··· 154 154 #define ZFCP_EXCHANGE_CONFIG_DATA_FIRST_SLEEP 100 155 155 #define ZFCP_EXCHANGE_CONFIG_DATA_RETRIES 7 156 156 157 + /* Retry 5 times, every 2 seconds, then every minute */ 158 + #define ZFCP_EXCHANGE_PORT_DATA_SHORT_RETRIES 5 159 + #define ZFCP_EXCHANGE_PORT_DATA_SHORT_SLEEP 200 160 + #define ZFCP_EXCHANGE_PORT_DATA_LONG_SLEEP 6000 161 + 157 162 /* timeout value for "default timer" for fsf requests */ 158 163 #define ZFCP_FSF_REQUEST_TIMEOUT (60*HZ); 159 164 160 165 /*************** FIBRE CHANNEL PROTOCOL SPECIFIC DEFINES ********************/ 161 166 162 167 typedef unsigned long long wwn_t; 163 - typedef unsigned int fc_id_t; 164 168 typedef unsigned long long fcp_lun_t; 165 169 /* data length field may be at variable position in FCP-2 FCP_CMND IU */ 166 170 typedef unsigned int fcp_dl_t; ··· 285 281 } __attribute__((packed)); 286 282 287 283 /* 284 + * DBF stuff 285 + */ 286 + #define ZFCP_DBF_TAG_SIZE 4 287 + 288 + struct zfcp_dbf_dump { 289 + u8 tag[ZFCP_DBF_TAG_SIZE]; 290 + u32 total_size; /* size of total dump data */ 291 + u32 offset; /* how much data has already been dumped */ 292 + u32 size; /* how much data comes with this record */ 293 + u8 data[]; /* dump data */ 294 + } __attribute__ ((packed)); 295 + 296 + /* FIXME: to be inflated when reworking the erp dbf */ 297 + struct zfcp_erp_dbf_record { 298 + u8 dummy[16]; 299 + } __attribute__ ((packed)); 300 + 301 + struct zfcp_hba_dbf_record_response { 302 + u32 fsf_command; 303 + u64 fsf_reqid; 304 + u32 fsf_seqno; 305 + u64 fsf_issued; 306 + u32 fsf_prot_status; 307 + u32 fsf_status; 308 + u8 fsf_prot_status_qual[FSF_PROT_STATUS_QUAL_SIZE]; 309 + u8
fsf_status_qual[FSF_STATUS_QUALIFIER_SIZE]; 310 + u32 fsf_req_status; 311 + u8 sbal_first; 312 + u8 sbal_curr; 313 + u8 sbal_last; 314 + u8 pool; 315 + u64 erp_action; 316 + union { 317 + struct { 318 + u64 scsi_cmnd; 319 + u64 scsi_serial; 320 + } send_fcp; 321 + struct { 322 + u64 wwpn; 323 + u32 d_id; 324 + u32 port_handle; 325 + } port; 326 + struct { 327 + u64 wwpn; 328 + u64 fcp_lun; 329 + u32 port_handle; 330 + u32 lun_handle; 331 + } unit; 332 + struct { 333 + u32 d_id; 334 + u8 ls_code; 335 + } send_els; 336 + } data; 337 + } __attribute__ ((packed)); 338 + 339 + struct zfcp_hba_dbf_record_status { 340 + u8 failed; 341 + u32 status_type; 342 + u32 status_subtype; 343 + struct fsf_queue_designator 344 + queue_designator; 345 + u32 payload_size; 346 + #define ZFCP_DBF_UNSOL_PAYLOAD 80 347 + #define ZFCP_DBF_UNSOL_PAYLOAD_SENSE_DATA_AVAIL 32 348 + #define ZFCP_DBF_UNSOL_PAYLOAD_BIT_ERROR_THRESHOLD 56 349 + #define ZFCP_DBF_UNSOL_PAYLOAD_FEATURE_UPDATE_ALERT 2 * sizeof(u32) 350 + u8 payload[ZFCP_DBF_UNSOL_PAYLOAD]; 351 + } __attribute__ ((packed)); 352 + 353 + struct zfcp_hba_dbf_record_qdio { 354 + u32 status; 355 + u32 qdio_error; 356 + u32 siga_error; 357 + u8 sbal_index; 358 + u8 sbal_count; 359 + } __attribute__ ((packed)); 360 + 361 + struct zfcp_hba_dbf_record { 362 + u8 tag[ZFCP_DBF_TAG_SIZE]; 363 + u8 tag2[ZFCP_DBF_TAG_SIZE]; 364 + union { 365 + struct zfcp_hba_dbf_record_response response; 366 + struct zfcp_hba_dbf_record_status status; 367 + struct zfcp_hba_dbf_record_qdio qdio; 368 + } type; 369 + } __attribute__ ((packed)); 370 + 371 + struct zfcp_san_dbf_record_ct { 372 + union { 373 + struct { 374 + u16 cmd_req_code; 375 + u8 revision; 376 + u8 gs_type; 377 + u8 gs_subtype; 378 + u8 options; 379 + u16 max_res_size; 380 + } request; 381 + struct { 382 + u16 cmd_rsp_code; 383 + u8 revision; 384 + u8 reason_code; 385 + u8 reason_code_expl; 386 + u8 vendor_unique; 387 + } response; 388 + } type; 389 + u32 payload_size; 390 + #define 
ZFCP_DBF_CT_PAYLOAD 24 391 + u8 payload[ZFCP_DBF_CT_PAYLOAD]; 392 + } __attribute__ ((packed)); 393 + 394 + struct zfcp_san_dbf_record_els { 395 + u8 ls_code; 396 + u32 payload_size; 397 + #define ZFCP_DBF_ELS_PAYLOAD 32 398 + #define ZFCP_DBF_ELS_MAX_PAYLOAD 1024 399 + u8 payload[ZFCP_DBF_ELS_PAYLOAD]; 400 + } __attribute__ ((packed)); 401 + 402 + struct zfcp_san_dbf_record { 403 + u8 tag[ZFCP_DBF_TAG_SIZE]; 404 + u64 fsf_reqid; 405 + u32 fsf_seqno; 406 + u32 s_id; 407 + u32 d_id; 408 + union { 409 + struct zfcp_san_dbf_record_ct ct; 410 + struct zfcp_san_dbf_record_els els; 411 + } type; 412 + } __attribute__ ((packed)); 413 + 414 + struct zfcp_scsi_dbf_record { 415 + u8 tag[ZFCP_DBF_TAG_SIZE]; 416 + u8 tag2[ZFCP_DBF_TAG_SIZE]; 417 + u32 scsi_id; 418 + u32 scsi_lun; 419 + u32 scsi_result; 420 + u64 scsi_cmnd; 421 + u64 scsi_serial; 422 + #define ZFCP_DBF_SCSI_OPCODE 16 423 + u8 scsi_opcode[ZFCP_DBF_SCSI_OPCODE]; 424 + u8 scsi_retries; 425 + u8 scsi_allowed; 426 + u64 fsf_reqid; 427 + u32 fsf_seqno; 428 + u64 fsf_issued; 429 + union { 430 + struct { 431 + u64 fsf_reqid; 432 + u32 fsf_seqno; 433 + u64 fsf_issued; 434 + } new_fsf_req; 435 + struct { 436 + u8 rsp_validity; 437 + u8 rsp_scsi_status; 438 + u32 rsp_resid; 439 + u8 rsp_code; 440 + #define ZFCP_DBF_SCSI_FCP_SNS_INFO 16 441 + #define ZFCP_DBF_SCSI_MAX_FCP_SNS_INFO 256 442 + u32 sns_info_len; 443 + u8 sns_info[ZFCP_DBF_SCSI_FCP_SNS_INFO]; 444 + } fcp; 445 + } type; 446 + } __attribute__ ((packed)); 447 + 448 + /* 288 449 * FC-FS stuff 289 450 */ 290 451 #define R_A_TOV 10 /* seconds */ ··· 507 338 * FC-GS-4 stuff 508 339 */ 509 340 #define ZFCP_CT_TIMEOUT (3 * R_A_TOV) 510 - 511 - 512 - /***************** S390 DEBUG FEATURE SPECIFIC DEFINES ***********************/ 513 - 514 - /* debug feature entries per adapter */ 515 - #define ZFCP_ERP_DBF_INDEX 1 516 - #define ZFCP_ERP_DBF_AREAS 2 517 - #define ZFCP_ERP_DBF_LENGTH 16 518 - #define ZFCP_ERP_DBF_LEVEL 3 519 - #define ZFCP_ERP_DBF_NAME "zfcperp" 520 - 521 
- #define ZFCP_CMD_DBF_INDEX 2 522 - #define ZFCP_CMD_DBF_AREAS 1 523 - #define ZFCP_CMD_DBF_LENGTH 8 524 - #define ZFCP_CMD_DBF_LEVEL 3 525 - #define ZFCP_CMD_DBF_NAME "zfcpcmd" 526 - 527 - #define ZFCP_ABORT_DBF_INDEX 2 528 - #define ZFCP_ABORT_DBF_AREAS 1 529 - #define ZFCP_ABORT_DBF_LENGTH 8 530 - #define ZFCP_ABORT_DBF_LEVEL 6 531 - #define ZFCP_ABORT_DBF_NAME "zfcpabt" 532 - 533 - #define ZFCP_IN_ELS_DBF_INDEX 2 534 - #define ZFCP_IN_ELS_DBF_AREAS 1 535 - #define ZFCP_IN_ELS_DBF_LENGTH 8 536 - #define ZFCP_IN_ELS_DBF_LEVEL 6 537 - #define ZFCP_IN_ELS_DBF_NAME "zfcpels" 538 341 539 342 /******************** LOGGING MACROS AND DEFINES *****************************/ 540 343 ··· 642 501 #define ZFCP_STATUS_ADAPTER_ERP_THREAD_KILL 0x00000080 643 502 #define ZFCP_STATUS_ADAPTER_ERP_PENDING 0x00000100 644 503 #define ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED 0x00000200 504 + #define ZFCP_STATUS_ADAPTER_XPORT_OK 0x00000800 645 505 646 506 #define ZFCP_STATUS_ADAPTER_SCSI_UP \ 647 507 (ZFCP_STATUS_COMMON_UNBLOCKED | \ ··· 777 635 mempool_t *data_gid_pn; 778 636 }; 779 637 780 - struct zfcp_exchange_config_data{ 781 - }; 782 - 783 - struct zfcp_open_port { 784 - struct zfcp_port *port; 785 - }; 786 - 787 - struct zfcp_close_port { 788 - struct zfcp_port *port; 789 - }; 790 - 791 - struct zfcp_open_unit { 792 - struct zfcp_unit *unit; 793 - }; 794 - 795 - struct zfcp_close_unit { 796 - struct zfcp_unit *unit; 797 - }; 798 - 799 - struct zfcp_close_physical_port { 800 - struct zfcp_port *port; 801 - }; 802 - 803 - struct zfcp_send_fcp_command_task { 804 - struct zfcp_fsf_req *fsf_req; 805 - struct zfcp_unit *unit; 806 - struct scsi_cmnd *scsi_cmnd; 807 - unsigned long start_jiffies; 808 - }; 809 - 810 - struct zfcp_send_fcp_command_task_management { 811 - struct zfcp_unit *unit; 812 - }; 813 - 814 - struct zfcp_abort_fcp_command { 815 - struct zfcp_fsf_req *fsf_req; 816 - struct zfcp_unit *unit; 817 - }; 818 - 819 638 /* 820 639 * header for CT_IU 821 640 */ ··· 805 702 /* 
FS_ACC IU and data unit for GID_PN nameserver request */ 806 703 struct ct_iu_gid_pn_resp { 807 704 struct ct_hdr header; 808 - fc_id_t d_id; 705 + u32 d_id; 809 706 } __attribute__ ((packed)); 810 707 811 708 typedef void (*zfcp_send_ct_handler_t)(unsigned long); ··· 871 768 struct zfcp_send_els { 872 769 struct zfcp_adapter *adapter; 873 770 struct zfcp_port *port; 874 - fc_id_t d_id; 771 + u32 d_id; 875 772 struct scatterlist *req; 876 773 struct scatterlist *resp; 877 774 unsigned int req_count; ··· 882 779 struct completion *completion; 883 780 int ls_code; 884 781 int status; 885 - }; 886 - 887 - struct zfcp_status_read { 888 - struct fsf_status_read_buffer *buffer; 889 - }; 890 - 891 - struct zfcp_fsf_done { 892 - struct completion *complete; 893 - int status; 894 - }; 895 - 896 - /* request specific data */ 897 - union zfcp_req_data { 898 - struct zfcp_exchange_config_data exchange_config_data; 899 - struct zfcp_open_port open_port; 900 - struct zfcp_close_port close_port; 901 - struct zfcp_open_unit open_unit; 902 - struct zfcp_close_unit close_unit; 903 - struct zfcp_close_physical_port close_physical_port; 904 - struct zfcp_send_fcp_command_task send_fcp_command_task; 905 - struct zfcp_send_fcp_command_task_management 906 - send_fcp_command_task_management; 907 - struct zfcp_abort_fcp_command abort_fcp_command; 908 - struct zfcp_send_ct *send_ct; 909 - struct zfcp_send_els *send_els; 910 - struct zfcp_status_read status_read; 911 - struct fsf_qtcb_bottom_port *port_data; 912 782 }; 913 783 914 784 struct zfcp_qdio_queue { ··· 914 838 atomic_t refcount; /* reference count */ 915 839 wait_queue_head_t remove_wq; /* can be used to wait for 916 840 refcount drop to zero */ 917 - wwn_t wwnn; /* WWNN */ 918 - wwn_t wwpn; /* WWPN */ 919 - fc_id_t s_id; /* N_Port ID */ 920 841 wwn_t peer_wwnn; /* P2P peer WWNN */ 921 842 wwn_t peer_wwpn; /* P2P peer WWPN */ 922 - fc_id_t peer_d_id; /* P2P peer D_ID */ 843 + u32 peer_d_id; /* P2P peer D_ID */ 844 + wwn_t 
physical_wwpn; /* WWPN of physical port */ 845 + u32 physical_s_id; /* local FC port ID */ 923 846 struct ccw_device *ccw_device; /* S/390 ccw device */ 924 847 u8 fc_service_class; 925 848 u32 fc_topology; /* FC topology */ 926 - u32 fc_link_speed; /* FC interface speed */ 927 849 u32 hydra_version; /* Hydra version */ 928 850 u32 fsf_lic_version; 929 - u32 supported_features;/* of FCP channel */ 851 + u32 adapter_features; /* FCP channel features */ 852 + u32 connection_features; /* host connection features */ 930 853 u32 hardware_version; /* of FCP channel */ 931 - u8 serial_number[32]; /* of hardware */ 932 854 struct Scsi_Host *scsi_host; /* Pointer to mid-layer */ 933 855 unsigned short scsi_host_no; /* Assigned host number */ 934 856 unsigned char name[9]; ··· 963 889 u32 erp_low_mem_count; /* nr of erp actions waiting 964 890 for memory */ 965 891 struct zfcp_port *nameserver_port; /* adapter's nameserver */ 966 - debug_info_t *erp_dbf; /* S/390 debug features */ 967 - debug_info_t *abort_dbf; 968 - debug_info_t *in_els_dbf; 969 - debug_info_t *cmd_dbf; 970 - spinlock_t dbf_lock; 892 + debug_info_t *erp_dbf; 893 + debug_info_t *hba_dbf; 894 + debug_info_t *san_dbf; /* debug feature areas */ 895 + debug_info_t *scsi_dbf; 896 + spinlock_t erp_dbf_lock; 897 + spinlock_t hba_dbf_lock; 898 + spinlock_t san_dbf_lock; 899 + spinlock_t scsi_dbf_lock; 900 + struct zfcp_erp_dbf_record erp_dbf_buf; 901 + struct zfcp_hba_dbf_record hba_dbf_buf; 902 + struct zfcp_san_dbf_record san_dbf_buf; 903 + struct zfcp_scsi_dbf_record scsi_dbf_buf; 971 904 struct zfcp_adapter_mempool pool; /* Adapter memory pools */ 972 905 struct qdio_initialize qdio_init_data; /* for qdio_establish */ 973 906 struct device generic_services; /* directory for WKA ports */ ··· 1000 919 atomic_t status; /* status of this remote port */ 1001 920 wwn_t wwnn; /* WWNN if known */ 1002 921 wwn_t wwpn; /* WWPN */ 1003 - fc_id_t d_id; /* D_ID */ 922 + u32 d_id; /* D_ID */ 1004 923 u32 handle; /* handle 
assigned by FSF */ 1005 924 struct zfcp_erp_action erp_action; /* pending error recovery */ 1006 925 atomic_t erp_counter; ··· 1044 963 u32 fsf_command; /* FSF Command copy */ 1045 964 struct fsf_qtcb *qtcb; /* address of associated QTCB */ 1046 965 u32 seq_no; /* Sequence number of request */ 1047 - union zfcp_req_data data; /* Info fields of request */ 966 + unsigned long data; /* private data of request */ 1048 967 struct zfcp_erp_action *erp_action; /* used if this request is 1049 968 issued on behalf of erp */ 1050 969 mempool_t *pool; /* used if request was allocated 1051 970 from emergency pool */ 971 + unsigned long long issued; /* request sent time (STCK) */ 972 + struct zfcp_unit *unit; 1052 973 }; 1053 974 1054 975 typedef void zfcp_fsf_req_handler_t(struct zfcp_fsf_req*);
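The new trace records in zfcp_def.h are fixed-size structs whose 4-byte `tag` selects which member of the embedded union is valid; the view formatters in zfcp_dbf.c dispatch on that tag with `strncmp()`. A toy sketch of the tag-dispatched decode (record layout and field names are made up for illustration):

```c
#include <assert.h>
#include <string.h>

#define TAG_SIZE 4

/* Miniature stand-in for zfcp_san_dbf_record: the tag is not
 * NUL-terminated, exactly like the driver's, so it must always be
 * compared with strncmp(..., TAG_SIZE). */
struct mini_rec {
	char tag[TAG_SIZE];
	union {
		struct { unsigned short cmd; } ct;      /* "octc" records */
		struct { unsigned char ls_code; } els;  /* "oels" records */
	} type;
};

/* Decode the record according to its tag; -1 for unknown tags. */
static int rec_code(const struct mini_rec *rec)
{
	if (strncmp(rec->tag, "octc", TAG_SIZE) == 0)
		return rec->type.ct.cmd;
	if (strncmp(rec->tag, "oels", TAG_SIZE) == 0)
		return rec->type.els.ls_code;
	return -1;
}
```

The same convention lets `zfcp_san_dbf_view_format()` skip follow-up records entirely by returning 0 for the "dump" tag.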
+112 -23
drivers/s390/scsi/zfcp_erp.c
··· 82 82 static int zfcp_erp_adapter_strategy_open_qdio(struct zfcp_erp_action *); 83 83 static int zfcp_erp_adapter_strategy_open_fsf(struct zfcp_erp_action *); 84 84 static int zfcp_erp_adapter_strategy_open_fsf_xconfig(struct zfcp_erp_action *); 85 + static int zfcp_erp_adapter_strategy_open_fsf_xport(struct zfcp_erp_action *); 85 86 static int zfcp_erp_adapter_strategy_open_fsf_statusread( 86 87 struct zfcp_erp_action *); 87 88 ··· 346 345 347 346 /* acc. to FC-FS, hard_nport_id in ADISC should not be set for ports 348 347 without FC-AL-2 capability, so we don't set it */ 349 - adisc->wwpn = adapter->wwpn; 350 - adisc->wwnn = adapter->wwnn; 351 - adisc->nport_id = adapter->s_id; 348 + adisc->wwpn = fc_host_port_name(adapter->scsi_host); 349 + adisc->wwnn = fc_host_node_name(adapter->scsi_host); 350 + adisc->nport_id = fc_host_port_id(adapter->scsi_host); 352 351 ZFCP_LOG_INFO("ADISC request from s_id 0x%08x to d_id 0x%08x " 353 352 "(wwpn=0x%016Lx, wwnn=0x%016Lx, " 354 353 "hard_nport_id=0x%08x, nport_id=0x%08x)\n", 355 - adapter->s_id, send_els->d_id, (wwn_t) adisc->wwpn, 354 + adisc->nport_id, send_els->d_id, (wwn_t) adisc->wwpn, 356 355 (wwn_t) adisc->wwnn, adisc->hard_nport_id, 357 356 adisc->nport_id); 358 357 ··· 405 404 struct zfcp_send_els *send_els; 406 405 struct zfcp_port *port; 407 406 struct zfcp_adapter *adapter; 408 - fc_id_t d_id; 407 + u32 d_id; 409 408 struct zfcp_ls_adisc_acc *adisc; 410 409 411 410 send_els = (struct zfcp_send_els *) data; ··· 436 435 ZFCP_LOG_INFO("ADISC response from d_id 0x%08x to s_id " 437 436 "0x%08x (wwpn=0x%016Lx, wwnn=0x%016Lx, " 438 437 "hard_nport_id=0x%08x, nport_id=0x%08x)\n", 439 - d_id, adapter->s_id, (wwn_t) adisc->wwpn, 440 - (wwn_t) adisc->wwnn, adisc->hard_nport_id, 441 - adisc->nport_id); 438 + d_id, fc_host_port_id(adapter->scsi_host), 439 + (wwn_t) adisc->wwpn, (wwn_t) adisc->wwnn, 440 + adisc->hard_nport_id, adisc->nport_id); 442 441 443 442 /* set wwnn for port */ 444 443 if (port->wwnn == 0) ··· 887 
886 zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *erp_action) 888 887 { 889 888 int retval = 0; 890 - struct zfcp_fsf_req *fsf_req; 889 + struct zfcp_fsf_req *fsf_req = NULL; 891 890 struct zfcp_adapter *adapter = erp_action->adapter; 892 891 893 892 if (erp_action->fsf_req) { ··· 897 896 list_for_each_entry(fsf_req, &adapter->fsf_req_list_head, list) 898 897 if (fsf_req == erp_action->fsf_req) 899 898 break; 900 - if (fsf_req == erp_action->fsf_req) { 899 + if (fsf_req && (fsf_req->erp_action == erp_action)) { 901 900 /* fsf_req still exists */ 902 901 debug_text_event(adapter->erp_dbf, 3, "a_ca_req"); 903 902 debug_event(adapter->erp_dbf, 3, &fsf_req, ··· 2259 2258 static int 2260 2259 zfcp_erp_adapter_strategy_open_fsf(struct zfcp_erp_action *erp_action) 2261 2260 { 2262 - int retval; 2261 + int xconfig, xport; 2263 2262 2264 - /* do 'exchange configuration data' */ 2265 - retval = zfcp_erp_adapter_strategy_open_fsf_xconfig(erp_action); 2266 - if (retval == ZFCP_ERP_FAILED) 2267 - return retval; 2263 + if (atomic_test_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 2264 + &erp_action->adapter->status)) { 2265 + zfcp_erp_adapter_strategy_open_fsf_xport(erp_action); 2266 + atomic_set(&erp_action->adapter->erp_counter, 0); 2267 + return ZFCP_ERP_FAILED; 2268 + } 2268 2269 2269 - /* start the desired number of Status Reads */ 2270 - retval = zfcp_erp_adapter_strategy_open_fsf_statusread(erp_action); 2271 - return retval; 2270 + xconfig = zfcp_erp_adapter_strategy_open_fsf_xconfig(erp_action); 2271 + xport = zfcp_erp_adapter_strategy_open_fsf_xport(erp_action); 2272 + if ((xconfig == ZFCP_ERP_FAILED) || (xport == ZFCP_ERP_FAILED)) 2273 + return ZFCP_ERP_FAILED; 2274 + 2275 + return zfcp_erp_adapter_strategy_open_fsf_statusread(erp_action); 2272 2276 } 2273 2277 2274 2278 /* ··· 2297 2291 atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 2298 2292 &adapter->status); 2299 2293 ZFCP_LOG_DEBUG("Doing exchange config data\n"); 2294 + write_lock(&adapter->erp_lock); 
2300 2295 zfcp_erp_action_to_running(erp_action); 2296 + write_unlock(&adapter->erp_lock); 2301 2297 zfcp_erp_timeout_init(erp_action); 2302 2298 if (zfcp_fsf_exchange_config_data(erp_action)) { 2303 2299 retval = ZFCP_ERP_FAILED; ··· 2348 2340 if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, 2349 2341 &adapter->status)) { 2350 2342 ZFCP_LOG_INFO("error: exchange of configuration data for " 2343 + "adapter %s failed\n", 2344 + zfcp_get_busid_by_adapter(adapter)); 2345 + retval = ZFCP_ERP_FAILED; 2346 + } 2347 + 2348 + return retval; 2349 + } 2350 + 2351 + static int 2352 + zfcp_erp_adapter_strategy_open_fsf_xport(struct zfcp_erp_action *erp_action) 2353 + { 2354 + int retval = ZFCP_ERP_SUCCEEDED; 2355 + int retries; 2356 + int sleep; 2357 + struct zfcp_adapter *adapter = erp_action->adapter; 2358 + 2359 + atomic_clear_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status); 2360 + 2361 + for (retries = 0; ; retries++) { 2362 + ZFCP_LOG_DEBUG("Doing exchange port data\n"); 2363 + zfcp_erp_action_to_running(erp_action); 2364 + zfcp_erp_timeout_init(erp_action); 2365 + if (zfcp_fsf_exchange_port_data(erp_action, adapter, NULL)) { 2366 + retval = ZFCP_ERP_FAILED; 2367 + debug_text_event(adapter->erp_dbf, 5, "a_fstx_xf"); 2368 + ZFCP_LOG_INFO("error: initiation of exchange of " 2369 + "port data failed for adapter %s\n", 2370 + zfcp_get_busid_by_adapter(adapter)); 2371 + break; 2372 + } 2373 + debug_text_event(adapter->erp_dbf, 6, "a_fstx_xok"); 2374 + ZFCP_LOG_DEBUG("Xchange underway\n"); 2375 + 2376 + /* 2377 + * Why this works: 2378 + * Both the normal completion handler as well as the timeout 2379 + * handler will do an 'up' when the 'exchange port data' 2380 + * request completes or times out. Thus, the signal to go on 2381 + * won't be lost utilizing this semaphore. 2382 + * Furthermore, this 'adapter_reopen' action is 2383 + * guaranteed to be the only action being there (highest action 2384 + * which prevents other actions from being created). 
2385 + * Resulting from that, the wake signal recognized here 2386 + * _must_ be the one belonging to the 'exchange port 2387 + * data' request. 2388 + */ 2389 + down(&adapter->erp_ready_sem); 2390 + if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT) { 2391 + ZFCP_LOG_INFO("error: exchange of port data " 2392 + "for adapter %s timed out\n", 2393 + zfcp_get_busid_by_adapter(adapter)); 2394 + break; 2395 + } 2396 + 2397 + if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 2398 + &adapter->status)) 2399 + break; 2400 + 2401 + ZFCP_LOG_DEBUG("host connection still initialising... " 2402 + "waiting and retrying...\n"); 2403 + /* sleep a little bit before retry */ 2404 + sleep = retries < ZFCP_EXCHANGE_PORT_DATA_SHORT_RETRIES ? 2405 + ZFCP_EXCHANGE_PORT_DATA_SHORT_SLEEP : 2406 + ZFCP_EXCHANGE_PORT_DATA_LONG_SLEEP; 2407 + msleep(jiffies_to_msecs(sleep)); 2408 + } 2409 + 2410 + if (atomic_test_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 2411 + &adapter->status)) { 2412 + ZFCP_LOG_INFO("error: exchange of port data for " 2351 2413 "adapter %s failed\n", 2352 2414 zfcp_get_busid_by_adapter(adapter)); 2353 2415 retval = ZFCP_ERP_FAILED; ··· 3272 3194 /* fall through !!! 
*/ 3273 3195 3274 3196 case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED: 3275 - if (atomic_test_mask 3276 - (ZFCP_STATUS_COMMON_ERP_INUSE, &port->status) 3277 - && port->erp_action.action == 3278 - ZFCP_ERP_ACTION_REOPEN_PORT_FORCED) { 3279 - debug_text_event(adapter->erp_dbf, 4, "pf_actenq_drp"); 3197 + if (atomic_test_mask(ZFCP_STATUS_COMMON_ERP_INUSE, 3198 + &port->status)) { 3199 + if (port->erp_action.action != 3200 + ZFCP_ERP_ACTION_REOPEN_PORT_FORCED) { 3201 + ZFCP_LOG_INFO("dropped erp action %i (port " 3202 + "0x%016Lx, action in use: %i)\n", 3203 + action, port->wwpn, 3204 + port->erp_action.action); 3205 + debug_text_event(adapter->erp_dbf, 4, 3206 + "pf_actenq_drp"); 3207 + } else 3208 + debug_text_event(adapter->erp_dbf, 4, 3209 + "pf_actenq_drpcp"); 3280 3210 debug_event(adapter->erp_dbf, 4, &port->wwpn, 3281 3211 sizeof (wwn_t)); 3282 3212 goto out; ··· 3674 3588 { 3675 3589 struct zfcp_port *port; 3676 3590 unsigned long flags; 3591 + 3592 + if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) 3593 + return; 3677 3594 3678 3595 debug_text_event(adapter->erp_dbf, 3, "a_access_recover"); 3679 3596 debug_event(adapter->erp_dbf, 3, &adapter->name, 8);
+23 -7
drivers/s390/scsi/zfcp_ext.h
··· 96 96 extern int zfcp_fsf_close_unit(struct zfcp_erp_action *); 97 97 98 98 extern int zfcp_fsf_exchange_config_data(struct zfcp_erp_action *); 99 - extern int zfcp_fsf_exchange_port_data(struct zfcp_adapter *, 99 + extern int zfcp_fsf_exchange_port_data(struct zfcp_erp_action *, 100 + struct zfcp_adapter *, 100 101 struct fsf_qtcb_bottom_port *); 101 102 extern int zfcp_fsf_control_file(struct zfcp_adapter *, struct zfcp_fsf_req **, 102 103 u32, u32, struct zfcp_sg_list *); ··· 110 109 extern int zfcp_fsf_send_ct(struct zfcp_send_ct *, mempool_t *, 111 110 struct zfcp_erp_action *); 112 111 extern int zfcp_fsf_send_els(struct zfcp_send_els *); 113 - extern int zfcp_fsf_req_wait_and_cleanup(struct zfcp_fsf_req *, int, u32 *); 114 112 extern int zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *, 115 113 struct zfcp_unit *, 116 114 struct scsi_cmnd *, ··· 182 182 extern void zfcp_erp_unit_access_changed(struct zfcp_unit *); 183 183 184 184 /******************************** AUX ****************************************/ 185 - extern void zfcp_cmd_dbf_event_fsf(const char *, struct zfcp_fsf_req *, 186 - void *, int); 187 - extern void zfcp_cmd_dbf_event_scsi(const char *, struct scsi_cmnd *); 188 - extern void zfcp_in_els_dbf_event(struct zfcp_adapter *, const char *, 189 - struct fsf_status_read_buffer *, int); 185 + extern void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *); 186 + extern void zfcp_hba_dbf_event_fsf_unsol(const char *, struct zfcp_adapter *, 187 + struct fsf_status_read_buffer *); 188 + extern void zfcp_hba_dbf_event_qdio(struct zfcp_adapter *, 189 + unsigned int, unsigned int, unsigned int, 190 + int, int); 191 + 192 + extern void zfcp_san_dbf_event_ct_request(struct zfcp_fsf_req *); 193 + extern void zfcp_san_dbf_event_ct_response(struct zfcp_fsf_req *); 194 + extern void zfcp_san_dbf_event_els_request(struct zfcp_fsf_req *); 195 + extern void zfcp_san_dbf_event_els_response(struct zfcp_fsf_req *); 196 + extern void 
zfcp_san_dbf_event_incoming_els(struct zfcp_fsf_req *); 197 + 198 + extern void zfcp_scsi_dbf_event_result(const char *, int, struct zfcp_adapter *, 199 + struct scsi_cmnd *); 200 + extern void zfcp_scsi_dbf_event_abort(const char *, struct zfcp_adapter *, 201 + struct scsi_cmnd *, 202 + struct zfcp_fsf_req *); 203 + extern void zfcp_scsi_dbf_event_devreset(const char *, u8, struct zfcp_unit *, 204 + struct scsi_cmnd *); 205 + 190 206 #endif /* ZFCP_EXT_H */
+354 -415
drivers/s390/scsi/zfcp_fsf.c
··· 59 59 static int zfcp_fsf_protstatus_eval(struct zfcp_fsf_req *); 60 60 static int zfcp_fsf_fsfstatus_eval(struct zfcp_fsf_req *); 61 61 static int zfcp_fsf_fsfstatus_qual_eval(struct zfcp_fsf_req *); 62 + static void zfcp_fsf_link_down_info_eval(struct zfcp_adapter *, 63 + struct fsf_link_down_info *); 62 64 static int zfcp_fsf_req_dispatch(struct zfcp_fsf_req *); 63 65 static void zfcp_fsf_req_dismiss(struct zfcp_fsf_req *); 64 66 ··· 287 285 { 288 286 int retval = 0; 289 287 struct zfcp_adapter *adapter = fsf_req->adapter; 288 + struct fsf_qtcb *qtcb = fsf_req->qtcb; 289 + union fsf_prot_status_qual *prot_status_qual = 290 + &qtcb->prefix.prot_status_qual; 290 291 291 - ZFCP_LOG_DEBUG("QTCB is at %p\n", fsf_req->qtcb); 292 + zfcp_hba_dbf_event_fsf_response(fsf_req); 292 293 293 294 if (fsf_req->status & ZFCP_STATUS_FSFREQ_DISMISSED) { 294 295 ZFCP_LOG_DEBUG("fsf_req 0x%lx has been dismissed\n", 295 296 (unsigned long) fsf_req); 296 297 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR | 297 298 ZFCP_STATUS_FSFREQ_RETRY; /* only for SCSI cmnds. */ 298 - zfcp_cmd_dbf_event_fsf("dismiss", fsf_req, NULL, 0); 299 299 goto skip_protstatus; 300 300 } 301 301 302 302 /* log additional information provided by FSF (if any) */ 303 - if (unlikely(fsf_req->qtcb->header.log_length)) { 303 + if (unlikely(qtcb->header.log_length)) { 304 304 /* do not trust them ;-) */ 305 - if (fsf_req->qtcb->header.log_start > sizeof(struct fsf_qtcb)) { 305 + if (qtcb->header.log_start > sizeof(struct fsf_qtcb)) { 306 306 ZFCP_LOG_NORMAL 307 307 ("bug: ULP (FSF logging) log data starts " 308 308 "beyond end of packet header. Ignored. 
" 309 309 "(start=%i, size=%li)\n", 310 - fsf_req->qtcb->header.log_start, 310 + qtcb->header.log_start, 311 311 sizeof(struct fsf_qtcb)); 312 312 goto forget_log; 313 313 } 314 - if ((size_t) (fsf_req->qtcb->header.log_start + 315 - fsf_req->qtcb->header.log_length) 314 + if ((size_t) (qtcb->header.log_start + qtcb->header.log_length) 316 315 > sizeof(struct fsf_qtcb)) { 317 316 ZFCP_LOG_NORMAL("bug: ULP (FSF logging) log data ends " 318 317 "beyond end of packet header. Ignored. " 319 318 "(start=%i, length=%i, size=%li)\n", 320 - fsf_req->qtcb->header.log_start, 321 - fsf_req->qtcb->header.log_length, 319 + qtcb->header.log_start, 320 + qtcb->header.log_length, 322 321 sizeof(struct fsf_qtcb)); 323 322 goto forget_log; 324 323 } 325 324 ZFCP_LOG_TRACE("ULP log data: \n"); 326 325 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_TRACE, 327 - (char *) fsf_req->qtcb + 328 - fsf_req->qtcb->header.log_start, 329 - fsf_req->qtcb->header.log_length); 326 + (char *) qtcb + qtcb->header.log_start, 327 + qtcb->header.log_length); 330 328 } 331 329 forget_log: 332 330 333 331 /* evaluate FSF Protocol Status */ 334 - switch (fsf_req->qtcb->prefix.prot_status) { 332 + switch (qtcb->prefix.prot_status) { 335 333 336 334 case FSF_PROT_GOOD: 337 335 case FSF_PROT_FSF_STATUS_PRESENTED: ··· 342 340 "microcode of version 0x%x, the device driver " 343 341 "only supports 0x%x. Aborting.\n", 344 342 zfcp_get_busid_by_adapter(adapter), 345 - fsf_req->qtcb->prefix.prot_status_qual. 
346 - version_error.fsf_version, ZFCP_QTCB_VERSION); 347 - /* stop operation for this adapter */ 348 - debug_text_exception(adapter->erp_dbf, 0, "prot_ver_err"); 343 + prot_status_qual->version_error.fsf_version, 344 + ZFCP_QTCB_VERSION); 349 345 zfcp_erp_adapter_shutdown(adapter, 0); 350 - zfcp_cmd_dbf_event_fsf("qverserr", fsf_req, 351 - &fsf_req->qtcb->prefix.prot_status_qual, 352 - sizeof (union fsf_prot_status_qual)); 353 346 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 354 347 break; 355 348 ··· 352 355 ZFCP_LOG_NORMAL("bug: Sequence number mismatch between " 353 356 "driver (0x%x) and adapter %s (0x%x). " 354 357 "Restarting all operations on this adapter.\n", 355 - fsf_req->qtcb->prefix.req_seq_no, 358 + qtcb->prefix.req_seq_no, 356 359 zfcp_get_busid_by_adapter(adapter), 357 - fsf_req->qtcb->prefix.prot_status_qual. 358 - sequence_error.exp_req_seq_no); 359 - debug_text_exception(adapter->erp_dbf, 0, "prot_seq_err"); 360 - /* restart operation on this adapter */ 360 + prot_status_qual->sequence_error.exp_req_seq_no); 361 361 zfcp_erp_adapter_reopen(adapter, 0); 362 - zfcp_cmd_dbf_event_fsf("seqnoerr", fsf_req, 363 - &fsf_req->qtcb->prefix.prot_status_qual, 364 - sizeof (union fsf_prot_status_qual)); 365 362 fsf_req->status |= ZFCP_STATUS_FSFREQ_RETRY; 366 363 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 367 364 break; ··· 366 375 "that used on adapter %s. 
" 367 376 "Stopping all operations on this adapter.\n", 368 377 zfcp_get_busid_by_adapter(adapter)); 369 - debug_text_exception(adapter->erp_dbf, 0, "prot_unsup_qtcb"); 370 378 zfcp_erp_adapter_shutdown(adapter, 0); 371 - zfcp_cmd_dbf_event_fsf("unsqtcbt", fsf_req, 372 - &fsf_req->qtcb->prefix.prot_status_qual, 373 - sizeof (union fsf_prot_status_qual)); 374 379 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 375 380 break; 376 381 377 382 case FSF_PROT_HOST_CONNECTION_INITIALIZING: 378 - zfcp_cmd_dbf_event_fsf("hconinit", fsf_req, 379 - &fsf_req->qtcb->prefix.prot_status_qual, 380 - sizeof (union fsf_prot_status_qual)); 381 383 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 382 384 atomic_set_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT, 383 385 &(adapter->status)); 384 - debug_text_event(adapter->erp_dbf, 3, "prot_con_init"); 385 386 break; 386 387 387 388 case FSF_PROT_DUPLICATE_REQUEST_ID: 388 - if (fsf_req->qtcb) { 389 389 ZFCP_LOG_NORMAL("bug: The request identifier 0x%Lx " 390 390 "to the adapter %s is ambiguous. " 391 - "Stopping all operations on this " 392 - "adapter.\n", 393 - *(unsigned long long *) 394 - (&fsf_req->qtcb->bottom.support. 395 - req_handle), 391 + "Stopping all operations on this adapter.\n", 392 + *(unsigned long long*) 393 + (&qtcb->bottom.support.req_handle), 396 394 zfcp_get_busid_by_adapter(adapter)); 397 - } else { 398 - ZFCP_LOG_NORMAL("bug: The request identifier %p " 399 - "to the adapter %s is ambiguous. " 400 - "Stopping all operations on this " 401 - "adapter. 
" 402 - "(bug: got this for an unsolicited " 403 - "status read request)\n", 404 - fsf_req, 405 - zfcp_get_busid_by_adapter(adapter)); 406 - } 407 - debug_text_exception(adapter->erp_dbf, 0, "prot_dup_id"); 408 395 zfcp_erp_adapter_shutdown(adapter, 0); 409 - zfcp_cmd_dbf_event_fsf("dupreqid", fsf_req, 410 - &fsf_req->qtcb->prefix.prot_status_qual, 411 - sizeof (union fsf_prot_status_qual)); 412 396 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 413 397 break; 414 398 415 399 case FSF_PROT_LINK_DOWN: 416 - /* 417 - * 'test and set' is not atomic here - 418 - * it's ok as long as calls to our response queue handler 419 - * (and thus execution of this code here) are serialized 420 - * by the qdio module 421 - */ 422 - if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 423 - &adapter->status)) { 424 - switch (fsf_req->qtcb->prefix.prot_status_qual. 425 - locallink_error.code) { 426 - case FSF_PSQ_LINK_NOLIGHT: 427 - ZFCP_LOG_INFO("The local link to adapter %s " 428 - "is down (no light detected).\n", 429 - zfcp_get_busid_by_adapter( 430 - adapter)); 431 - break; 432 - case FSF_PSQ_LINK_WRAPPLUG: 433 - ZFCP_LOG_INFO("The local link to adapter %s " 434 - "is down (wrap plug detected).\n", 435 - zfcp_get_busid_by_adapter( 436 - adapter)); 437 - break; 438 - case FSF_PSQ_LINK_NOFCP: 439 - ZFCP_LOG_INFO("The local link to adapter %s " 440 - "is down (adjacent node on " 441 - "link does not support FCP).\n", 442 - zfcp_get_busid_by_adapter( 443 - adapter)); 444 - break; 445 - default: 446 - ZFCP_LOG_INFO("The local link to adapter %s " 447 - "is down " 448 - "(warning: unknown reason " 449 - "code).\n", 450 - zfcp_get_busid_by_adapter( 451 - adapter)); 452 - break; 453 - 454 - } 455 - /* 456 - * Due to the 'erp failed' flag the adapter won't 457 - * be recovered but will be just set to 'blocked' 458 - * state. All subordinary devices will have state 459 - * 'blocked' and 'erp failed', too. 
460 - * Thus the adapter is still able to provide 461 - * 'link up' status without being flooded with 462 - * requests. 463 - * (note: even 'close port' is not permitted) 464 - */ 465 - ZFCP_LOG_INFO("Stopping all operations for adapter " 466 - "%s.\n", 467 - zfcp_get_busid_by_adapter(adapter)); 468 - atomic_set_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED | 469 - ZFCP_STATUS_COMMON_ERP_FAILED, 470 - &adapter->status); 471 - zfcp_erp_adapter_reopen(adapter, 0); 472 - } 400 + zfcp_fsf_link_down_info_eval(adapter, 401 + &prot_status_qual->link_down_info); 473 402 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 474 403 break; 475 404 476 405 case FSF_PROT_REEST_QUEUE: 477 - debug_text_event(adapter->erp_dbf, 1, "prot_reest_queue"); 478 - ZFCP_LOG_INFO("The local link to adapter with " 406 + ZFCP_LOG_NORMAL("The local link to adapter with " 479 407 "%s was re-plugged. " 480 408 "Re-starting operations on this adapter.\n", 481 409 zfcp_get_busid_by_adapter(adapter)); ··· 405 495 zfcp_erp_adapter_reopen(adapter, 406 496 ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED 407 497 | ZFCP_STATUS_COMMON_ERP_FAILED); 408 - zfcp_cmd_dbf_event_fsf("reestque", fsf_req, 409 - &fsf_req->qtcb->prefix.prot_status_qual, 410 - sizeof (union fsf_prot_status_qual)); 411 498 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 412 499 break; 413 500 ··· 414 507 "Restarting all operations on this " 415 508 "adapter.\n", 416 509 zfcp_get_busid_by_adapter(adapter)); 417 - debug_text_event(adapter->erp_dbf, 0, "prot_err_sta"); 418 - /* restart operation on this adapter */ 419 510 zfcp_erp_adapter_reopen(adapter, 0); 420 - zfcp_cmd_dbf_event_fsf("proterrs", fsf_req, 421 - &fsf_req->qtcb->prefix.prot_status_qual, 422 - sizeof (union fsf_prot_status_qual)); 423 511 fsf_req->status |= ZFCP_STATUS_FSFREQ_RETRY; 424 512 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 425 513 break; ··· 426 524 "Stopping all operations on this adapter. 
" 427 525 "(debug info 0x%x).\n", 428 526 zfcp_get_busid_by_adapter(adapter), 429 - fsf_req->qtcb->prefix.prot_status); 430 - debug_text_event(adapter->erp_dbf, 0, "prot_inval:"); 431 - debug_exception(adapter->erp_dbf, 0, 432 - &fsf_req->qtcb->prefix.prot_status, 433 - sizeof (u32)); 527 + qtcb->prefix.prot_status); 434 528 zfcp_erp_adapter_shutdown(adapter, 0); 435 529 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 436 530 } ··· 466 568 "(debug info 0x%x).\n", 467 569 zfcp_get_busid_by_adapter(fsf_req->adapter), 468 570 fsf_req->qtcb->header.fsf_command); 469 - debug_text_exception(fsf_req->adapter->erp_dbf, 0, 470 - "fsf_s_unknown"); 471 571 zfcp_erp_adapter_shutdown(fsf_req->adapter, 0); 472 - zfcp_cmd_dbf_event_fsf("unknownc", fsf_req, 473 - &fsf_req->qtcb->header.fsf_status_qual, 474 - sizeof (union fsf_status_qual)); 475 572 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 476 573 break; 477 574 478 575 case FSF_FCP_RSP_AVAILABLE: 479 576 ZFCP_LOG_DEBUG("FCP Sense data will be presented to the " 480 577 "SCSI stack.\n"); 481 - debug_text_event(fsf_req->adapter->erp_dbf, 3, "fsf_s_rsp"); 482 578 break; 483 579 484 580 case FSF_ADAPTER_STATUS_AVAILABLE: 485 - debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_s_astatus"); 486 581 zfcp_fsf_fsfstatus_qual_eval(fsf_req); 487 - break; 488 - 489 - default: 490 582 break; 491 583 } 492 584 ··· 505 617 506 618 switch (fsf_req->qtcb->header.fsf_status_qual.word[0]) { 507 619 case FSF_SQ_FCP_RSP_AVAILABLE: 508 - debug_text_event(fsf_req->adapter->erp_dbf, 4, "fsf_sq_rsp"); 509 620 break; 510 621 case FSF_SQ_RETRY_IF_POSSIBLE: 511 622 /* The SCSI-stack may now issue retries or escalate */ 512 - debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_sq_retry"); 513 - zfcp_cmd_dbf_event_fsf("sqretry", fsf_req, 514 - &fsf_req->qtcb->header.fsf_status_qual, 515 - sizeof (union fsf_status_qual)); 516 623 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 517 624 break; 518 625 case FSF_SQ_COMMAND_ABORTED: 519 626 /* Carry the aborted 
state on to upper layer */ 520 - debug_text_event(fsf_req->adapter->erp_dbf, 2, "fsf_sq_abort"); 521 - zfcp_cmd_dbf_event_fsf("sqabort", fsf_req, 522 - &fsf_req->qtcb->header.fsf_status_qual, 523 - sizeof (union fsf_status_qual)); 524 627 fsf_req->status |= ZFCP_STATUS_FSFREQ_ABORTED; 525 628 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 526 629 break; 527 630 case FSF_SQ_NO_RECOM: 528 - debug_text_exception(fsf_req->adapter->erp_dbf, 0, 529 - "fsf_sq_no_rec"); 530 631 ZFCP_LOG_NORMAL("bug: No recommendation could be given for a" 531 632 "problem on the adapter %s " 532 633 "Stopping all operations on this adapter. ", 533 634 zfcp_get_busid_by_adapter(fsf_req->adapter)); 534 635 zfcp_erp_adapter_shutdown(fsf_req->adapter, 0); 535 - zfcp_cmd_dbf_event_fsf("sqnrecom", fsf_req, 536 - &fsf_req->qtcb->header.fsf_status_qual, 537 - sizeof (union fsf_status_qual)); 538 636 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 539 637 break; 540 638 case FSF_SQ_ULP_PROGRAMMING_ERROR: 541 639 ZFCP_LOG_NORMAL("error: not enough SBALs for data transfer " 542 640 "(adapter %s)\n", 543 641 zfcp_get_busid_by_adapter(fsf_req->adapter)); 544 - debug_text_exception(fsf_req->adapter->erp_dbf, 0, 545 - "fsf_sq_ulp_err"); 546 642 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 547 643 break; 548 644 case FSF_SQ_INVOKE_LINK_TEST_PROCEDURE: ··· 540 668 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, 541 669 (char *) &fsf_req->qtcb->header.fsf_status_qual, 542 670 sizeof (union fsf_status_qual)); 543 - debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf_sq_inval:"); 544 - debug_exception(fsf_req->adapter->erp_dbf, 0, 545 - &fsf_req->qtcb->header.fsf_status_qual.word[0], 546 - sizeof (u32)); 547 - zfcp_cmd_dbf_event_fsf("squndef", fsf_req, 548 - &fsf_req->qtcb->header.fsf_status_qual, 549 - sizeof (union fsf_status_qual)); 550 671 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 551 672 break; 552 673 } 553 674 554 675 return retval; 676 + } 677 + 678 + /** 679 + * zfcp_fsf_link_down_info_eval - evaluate link down 
information block 680 + */ 681 + static void 682 + zfcp_fsf_link_down_info_eval(struct zfcp_adapter *adapter, 683 + struct fsf_link_down_info *link_down) 684 + { 685 + switch (link_down->error_code) { 686 + case FSF_PSQ_LINK_NO_LIGHT: 687 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 688 + "(no light detected)\n", 689 + zfcp_get_busid_by_adapter(adapter)); 690 + break; 691 + case FSF_PSQ_LINK_WRAP_PLUG: 692 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 693 + "(wrap plug detected)\n", 694 + zfcp_get_busid_by_adapter(adapter)); 695 + break; 696 + case FSF_PSQ_LINK_NO_FCP: 697 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 698 + "(adjacent node on link does not support FCP)\n", 699 + zfcp_get_busid_by_adapter(adapter)); 700 + break; 701 + case FSF_PSQ_LINK_FIRMWARE_UPDATE: 702 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 703 + "(firmware update in progress)\n", 704 + zfcp_get_busid_by_adapter(adapter)); 705 + break; 706 + case FSF_PSQ_LINK_INVALID_WWPN: 707 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 708 + "(duplicate or invalid WWPN detected)\n", 709 + zfcp_get_busid_by_adapter(adapter)); 710 + break; 711 + case FSF_PSQ_LINK_NO_NPIV_SUPPORT: 712 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 713 + "(no support for NPIV by Fabric)\n", 714 + zfcp_get_busid_by_adapter(adapter)); 715 + break; 716 + case FSF_PSQ_LINK_NO_FCP_RESOURCES: 717 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 718 + "(out of resource in FCP daughtercard)\n", 719 + zfcp_get_busid_by_adapter(adapter)); 720 + break; 721 + case FSF_PSQ_LINK_NO_FABRIC_RESOURCES: 722 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 723 + "(out of resource in Fabric)\n", 724 + zfcp_get_busid_by_adapter(adapter)); 725 + break; 726 + case FSF_PSQ_LINK_FABRIC_LOGIN_UNABLE: 727 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 728 + "(unable to Fabric login)\n", 729 + zfcp_get_busid_by_adapter(adapter)); 730 + break; 731 + 
case FSF_PSQ_LINK_WWPN_ASSIGNMENT_CORRUPTED: 732 + ZFCP_LOG_NORMAL("WWPN assignment file corrupted on adapter %s\n", 733 + zfcp_get_busid_by_adapter(adapter)); 734 + break; 735 + case FSF_PSQ_LINK_MODE_TABLE_CURRUPTED: 736 + ZFCP_LOG_NORMAL("Mode table corrupted on adapter %s\n", 737 + zfcp_get_busid_by_adapter(adapter)); 738 + break; 739 + case FSF_PSQ_LINK_NO_WWPN_ASSIGNMENT: 740 + ZFCP_LOG_NORMAL("No WWPN for assignment table on adapter %s\n", 741 + zfcp_get_busid_by_adapter(adapter)); 742 + break; 743 + default: 744 + ZFCP_LOG_NORMAL("The local link to adapter %s is down " 745 + "(warning: unknown reason code %d)\n", 746 + zfcp_get_busid_by_adapter(adapter), 747 + link_down->error_code); 748 + } 749 + 750 + if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) 751 + ZFCP_LOG_DEBUG("Debug information to link down: " 752 + "primary_status=0x%02x " 753 + "ioerr_code=0x%02x " 754 + "action_code=0x%02x " 755 + "reason_code=0x%02x " 756 + "explanation_code=0x%02x " 757 + "vendor_specific_code=0x%02x\n", 758 + link_down->primary_status, 759 + link_down->ioerr_code, 760 + link_down->action_code, 761 + link_down->reason_code, 762 + link_down->explanation_code, 763 + link_down->vendor_specific_code); 764 + 765 + if (!atomic_test_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 766 + &adapter->status)) { 767 + atomic_set_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 768 + &adapter->status); 769 + switch (link_down->error_code) { 770 + case FSF_PSQ_LINK_NO_LIGHT: 771 + case FSF_PSQ_LINK_WRAP_PLUG: 772 + case FSF_PSQ_LINK_NO_FCP: 773 + case FSF_PSQ_LINK_FIRMWARE_UPDATE: 774 + zfcp_erp_adapter_reopen(adapter, 0); 775 + break; 776 + default: 777 + zfcp_erp_adapter_failed(adapter); 778 + } 779 + } 555 780 } 556 781 557 782 /* ··· 665 696 struct zfcp_adapter *adapter = fsf_req->adapter; 666 697 int retval = 0; 667 698 668 - if (unlikely(fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR)) { 669 - ZFCP_LOG_TRACE("fsf_req=%p, QTCB=%p\n", fsf_req, fsf_req->qtcb); 670 - 
ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_TRACE, 671 - (char *) fsf_req->qtcb, sizeof(struct fsf_qtcb)); 672 - } 673 699 674 700 switch (fsf_req->fsf_command) { 675 701 ··· 724 760 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 725 761 ZFCP_LOG_NORMAL("bug: Command issued by the device driver is " 726 762 "not supported by the adapter %s\n", 727 - zfcp_get_busid_by_adapter(fsf_req->adapter)); 763 + zfcp_get_busid_by_adapter(adapter)); 728 764 if (fsf_req->fsf_command != fsf_req->qtcb->header.fsf_command) 729 765 ZFCP_LOG_NORMAL 730 766 ("bug: Command issued by the device driver differs " 731 767 "from the command returned by the adapter %s " 732 768 "(debug info 0x%x, 0x%x).\n", 733 - zfcp_get_busid_by_adapter(fsf_req->adapter), 769 + zfcp_get_busid_by_adapter(adapter), 734 770 fsf_req->fsf_command, 735 771 fsf_req->qtcb->header.fsf_command); 736 772 } ··· 738 774 if (!erp_action) 739 775 return retval; 740 776 741 - debug_text_event(adapter->erp_dbf, 3, "a_frh"); 742 - debug_event(adapter->erp_dbf, 3, &erp_action->action, sizeof (int)); 743 777 zfcp_erp_async_handler(erp_action, 0); 744 778 745 779 return retval; ··· 783 821 goto failed_buf; 784 822 } 785 823 memset(status_buffer, 0, sizeof (struct fsf_status_read_buffer)); 786 - fsf_req->data.status_read.buffer = status_buffer; 824 + fsf_req->data = (unsigned long) status_buffer; 787 825 788 826 /* insert pointer to respective buffer */ 789 827 sbale = zfcp_qdio_sbale_curr(fsf_req); ··· 808 846 failed_buf: 809 847 zfcp_fsf_req_free(fsf_req); 810 848 failed_req_create: 849 + zfcp_hba_dbf_event_fsf_unsol("fail", adapter, NULL); 811 850 out: 812 851 write_unlock_irqrestore(&adapter->request_queue.queue_lock, lock_flags); 813 852 return retval; ··· 822 859 struct zfcp_port *port; 823 860 unsigned long flags; 824 861 825 - status_buffer = fsf_req->data.status_read.buffer; 862 + status_buffer = (struct fsf_status_read_buffer *) fsf_req->data; 826 863 adapter = fsf_req->adapter; 827 864 828 865 
read_lock_irqsave(&zfcp_data.config_lock, flags); ··· 881 918 int retval = 0; 882 919 struct zfcp_adapter *adapter = fsf_req->adapter; 883 920 struct fsf_status_read_buffer *status_buffer = 884 - fsf_req->data.status_read.buffer; 921 + (struct fsf_status_read_buffer *) fsf_req->data; 885 922 886 923 if (fsf_req->status & ZFCP_STATUS_FSFREQ_DISMISSED) { 924 + zfcp_hba_dbf_event_fsf_unsol("dism", adapter, status_buffer); 887 925 mempool_free(status_buffer, adapter->pool.data_status_read); 888 926 zfcp_fsf_req_free(fsf_req); 889 927 goto out; 890 928 } 891 929 930 + zfcp_hba_dbf_event_fsf_unsol("read", adapter, status_buffer); 931 + 892 932 switch (status_buffer->status_type) { 893 933 894 934 case FSF_STATUS_READ_PORT_CLOSED: 895 - debug_text_event(adapter->erp_dbf, 3, "unsol_pclosed:"); 896 - debug_event(adapter->erp_dbf, 3, 897 - &status_buffer->d_id, sizeof (u32)); 898 935 zfcp_fsf_status_read_port_closed(fsf_req); 899 936 break; 900 937 901 938 case FSF_STATUS_READ_INCOMING_ELS: 902 - debug_text_event(adapter->erp_dbf, 3, "unsol_els:"); 903 939 zfcp_fsf_incoming_els(fsf_req); 904 940 break; 905 941 906 942 case FSF_STATUS_READ_SENSE_DATA_AVAIL: 907 - debug_text_event(adapter->erp_dbf, 3, "unsol_sense:"); 908 943 ZFCP_LOG_INFO("unsolicited sense data received (adapter %s)\n", 909 944 zfcp_get_busid_by_adapter(adapter)); 910 - ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, (char *) status_buffer, 911 - sizeof(struct fsf_status_read_buffer)); 912 945 break; 913 946 914 947 case FSF_STATUS_READ_BIT_ERROR_THRESHOLD: 915 - debug_text_event(adapter->erp_dbf, 3, "unsol_bit_err:"); 916 948 ZFCP_LOG_NORMAL("Bit error threshold data received:\n"); 917 949 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, 918 950 (char *) status_buffer, ··· 915 957 break; 916 958 917 959 case FSF_STATUS_READ_LINK_DOWN: 918 - debug_text_event(adapter->erp_dbf, 0, "unsol_link_down:"); 919 - ZFCP_LOG_INFO("Local link to adapter %s is down\n", 960 + switch (status_buffer->status_subtype) { 961 + case 
FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK: 962 + ZFCP_LOG_INFO("Physical link to adapter %s is down\n", 963 + zfcp_get_busid_by_adapter(adapter)); 964 + break; 965 + case FSF_STATUS_READ_SUB_FDISC_FAILED: 966 + ZFCP_LOG_INFO("Local link to adapter %s is down " 967 + "due to failed FDISC login\n", 920 968 zfcp_get_busid_by_adapter(adapter)); 921 - atomic_set_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 922 - &adapter->status); 923 - zfcp_erp_adapter_failed(adapter); 969 + break; 970 + case FSF_STATUS_READ_SUB_FIRMWARE_UPDATE: 971 + ZFCP_LOG_INFO("Local link to adapter %s is down " 972 + "due to firmware update on adapter\n", 973 + zfcp_get_busid_by_adapter(adapter)); 974 + break; 975 + default: 976 + ZFCP_LOG_INFO("Local link to adapter %s is down " 977 + "due to unknown reason\n", 978 + zfcp_get_busid_by_adapter(adapter)); 979 + }; 980 + zfcp_fsf_link_down_info_eval(adapter, 981 + (struct fsf_link_down_info *) &status_buffer->payload); 924 982 break; 925 983 926 984 case FSF_STATUS_READ_LINK_UP: 927 - debug_text_event(adapter->erp_dbf, 2, "unsol_link_up:"); 928 - ZFCP_LOG_INFO("Local link to adapter %s was replugged. " 985 + ZFCP_LOG_NORMAL("Local link to adapter %s was replugged. 
" 929 986 "Restarting operations on this adapter\n", 930 987 zfcp_get_busid_by_adapter(adapter)); 931 988 /* All ports should be marked as ready to run again */ ··· 953 980 break; 954 981 955 982 case FSF_STATUS_READ_CFDC_UPDATED: 956 - debug_text_event(adapter->erp_dbf, 2, "unsol_cfdc_update:"); 957 - ZFCP_LOG_INFO("CFDC has been updated on the adapter %s\n", 983 + ZFCP_LOG_NORMAL("CFDC has been updated on the adapter %s\n", 958 984 zfcp_get_busid_by_adapter(adapter)); 959 985 zfcp_erp_adapter_access_changed(adapter); 960 986 break; 961 987 962 988 case FSF_STATUS_READ_CFDC_HARDENED: 963 - debug_text_event(adapter->erp_dbf, 2, "unsol_cfdc_harden:"); 964 989 switch (status_buffer->status_subtype) { 965 990 case FSF_STATUS_READ_SUB_CFDC_HARDENED_ON_SE: 966 - ZFCP_LOG_INFO("CFDC of adapter %s saved on SE\n", 991 + ZFCP_LOG_NORMAL("CFDC of adapter %s saved on SE\n", 967 992 zfcp_get_busid_by_adapter(adapter)); 968 993 break; 969 994 case FSF_STATUS_READ_SUB_CFDC_HARDENED_ON_SE2: 970 - ZFCP_LOG_INFO("CFDC of adapter %s has been copied " 995 + ZFCP_LOG_NORMAL("CFDC of adapter %s has been copied " 971 996 "to the secondary SE\n", 972 997 zfcp_get_busid_by_adapter(adapter)); 973 998 break; 974 999 default: 975 - ZFCP_LOG_INFO("CFDC of adapter %s has been hardened\n", 1000 + ZFCP_LOG_NORMAL("CFDC of adapter %s has been hardened\n", 976 1001 zfcp_get_busid_by_adapter(adapter)); 977 1002 } 978 1003 break; 979 1004 1005 + case FSF_STATUS_READ_FEATURE_UPDATE_ALERT: 1006 + debug_text_event(adapter->erp_dbf, 2, "unsol_features:"); 1007 + ZFCP_LOG_INFO("List of supported features on adapter %s has " 1008 + "been changed from 0x%08X to 0x%08X\n", 1009 + zfcp_get_busid_by_adapter(adapter), 1010 + *(u32*) (status_buffer->payload + 4), 1011 + *(u32*) (status_buffer->payload)); 1012 + adapter->adapter_features = *(u32*) status_buffer->payload; 1013 + break; 1014 + 980 1015 default: 981 - debug_text_event(adapter->erp_dbf, 0, "unsol_unknown:"); 982 - debug_exception(adapter->erp_dbf, 
0, 983 - &status_buffer->status_type, sizeof (u32)); 984 - ZFCP_LOG_NORMAL("bug: An unsolicited status packet of unknown " 1016 + ZFCP_LOG_NORMAL("warning: An unsolicited status packet of unknown " 985 1017 "type was received (debug info 0x%x)\n", 986 1018 status_buffer->status_type); 987 1019 ZFCP_LOG_DEBUG("Dump of status_read_buffer %p:\n", ··· 1071 1093 sbale[0].flags |= SBAL_FLAGS0_TYPE_READ; 1072 1094 sbale[1].flags |= SBAL_FLAGS_LAST_ENTRY; 1073 1095 1074 - fsf_req->data.abort_fcp_command.unit = unit; 1096 + fsf_req->data = (unsigned long) unit; 1075 1097 1076 1098 /* set handles of unit and its parent port in QTCB */ 1077 1099 fsf_req->qtcb->header.lun_handle = unit->handle; ··· 1117 1139 zfcp_fsf_abort_fcp_command_handler(struct zfcp_fsf_req *new_fsf_req) 1118 1140 { 1119 1141 int retval = -EINVAL; 1120 - struct zfcp_unit *unit = new_fsf_req->data.abort_fcp_command.unit; 1142 + struct zfcp_unit *unit; 1121 1143 unsigned char status_qual = 1122 1144 new_fsf_req->qtcb->header.fsf_status_qual.word[0]; 1123 1145 ··· 1127 1149 /* do not set ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED */ 1128 1150 goto skip_fsfstatus; 1129 1151 } 1152 + 1153 + unit = (struct zfcp_unit *) new_fsf_req->data; 1130 1154 1131 1155 /* evaluate FSF status in QTCB */ 1132 1156 switch (new_fsf_req->qtcb->header.fsf_status) { ··· 1344 1364 sbale[3].addr = zfcp_sg_to_address(&ct->resp[0]); 1345 1365 sbale[3].length = ct->resp[0].length; 1346 1366 sbale[3].flags |= SBAL_FLAGS_LAST_ENTRY; 1347 - } else if (adapter->supported_features & 1367 + } else if (adapter->adapter_features & 1348 1368 FSF_FEATURE_ELS_CT_CHAINED_SBALS) { 1349 1369 /* try to use chained SBALs */ 1350 1370 bytes = zfcp_qdio_sbals_from_sg(fsf_req, ··· 1394 1414 fsf_req->qtcb->header.port_handle = port->handle; 1395 1415 fsf_req->qtcb->bottom.support.service_class = adapter->fc_service_class; 1396 1416 fsf_req->qtcb->bottom.support.timeout = ct->timeout; 1397 - fsf_req->data.send_ct = ct; 1417 + fsf_req->data = (unsigned long) ct; 
1418 + 1419 + zfcp_san_dbf_event_ct_request(fsf_req); 1398 1420 1399 1421 /* start QDIO request for this FSF request */ 1400 1422 ret = zfcp_fsf_req_send(fsf_req, ct->timer); ··· 1427 1445 * zfcp_fsf_send_ct_handler - handler for Generic Service requests 1428 1446 * @fsf_req: pointer to struct zfcp_fsf_req 1429 1447 * 1430 - * Data specific for the Generic Service request is passed by 1431 - * fsf_req->data.send_ct 1432 - * Usually a specific handler for the request is called via 1433 - * fsf_req->data.send_ct->handler at end of this function. 1448 + * Data specific for the Generic Service request is passed using 1449 + * fsf_req->data. There we find the pointer to struct zfcp_send_ct. 1450 + * Usually a specific handler for the CT request is called which is 1451 + * found in this structure. 1434 1452 */ 1435 1453 static int 1436 1454 zfcp_fsf_send_ct_handler(struct zfcp_fsf_req *fsf_req) ··· 1444 1462 u16 subtable, rule, counter; 1445 1463 1446 1464 adapter = fsf_req->adapter; 1447 - send_ct = fsf_req->data.send_ct; 1465 + send_ct = (struct zfcp_send_ct *) fsf_req->data; 1448 1466 port = send_ct->port; 1449 1467 header = &fsf_req->qtcb->header; 1450 1468 bottom = &fsf_req->qtcb->bottom.support; ··· 1456 1474 switch (header->fsf_status) { 1457 1475 1458 1476 case FSF_GOOD: 1477 + zfcp_san_dbf_event_ct_response(fsf_req); 1459 1478 retval = 0; 1460 1479 break; 1461 1480 ··· 1617 1634 { 1618 1635 volatile struct qdio_buffer_element *sbale; 1619 1636 struct zfcp_fsf_req *fsf_req; 1620 - fc_id_t d_id; 1637 + u32 d_id; 1621 1638 struct zfcp_adapter *adapter; 1622 1639 unsigned long lock_flags; 1623 1640 int bytes; ··· 1647 1664 sbale[3].addr = zfcp_sg_to_address(&els->resp[0]); 1648 1665 sbale[3].length = els->resp[0].length; 1649 1666 sbale[3].flags |= SBAL_FLAGS_LAST_ENTRY; 1650 - } else if (adapter->supported_features & 1667 + } else if (adapter->adapter_features & 1651 1668 FSF_FEATURE_ELS_CT_CHAINED_SBALS) { 1652 1669 /* try to use chained SBALs */ 1653 1670 bytes = 
zfcp_qdio_sbals_from_sg(fsf_req, ··· 1697 1714 fsf_req->qtcb->bottom.support.d_id = d_id; 1698 1715 fsf_req->qtcb->bottom.support.service_class = adapter->fc_service_class; 1699 1716 fsf_req->qtcb->bottom.support.timeout = ZFCP_ELS_TIMEOUT; 1700 - fsf_req->data.send_els = els; 1717 + fsf_req->data = (unsigned long) els; 1701 1718 1702 1719 sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0); 1720 + 1721 + zfcp_san_dbf_event_els_request(fsf_req); 1703 1722 1704 1723 /* start QDIO request for this FSF request */ 1705 1724 ret = zfcp_fsf_req_send(fsf_req, els->timer); ··· 1731 1746 * zfcp_fsf_send_els_handler - handler for ELS commands 1732 1747 * @fsf_req: pointer to struct zfcp_fsf_req 1733 1748 * 1734 - * Data specific for the ELS command is passed by 1735 - * fsf_req->data.send_els 1736 - * Usually a specific handler for the command is called via 1737 - * fsf_req->data.send_els->handler at end of this function. 1749 + * Data specific for the ELS command is passed using 1750 + * fsf_req->data. There we find the pointer to struct zfcp_send_els. 1751 + * Usually a specific handler for the ELS command is called which is 1752 + * found in this structure. 
1738 1753 */ 1739 1754 static int zfcp_fsf_send_els_handler(struct zfcp_fsf_req *fsf_req) 1740 1755 { 1741 1756 struct zfcp_adapter *adapter; 1742 1757 struct zfcp_port *port; 1743 - fc_id_t d_id; 1758 + u32 d_id; 1744 1759 struct fsf_qtcb_header *header; 1745 1760 struct fsf_qtcb_bottom_support *bottom; 1746 1761 struct zfcp_send_els *send_els; 1747 1762 int retval = -EINVAL; 1748 1763 u16 subtable, rule, counter; 1749 1764 1750 - send_els = fsf_req->data.send_els; 1765 + send_els = (struct zfcp_send_els *) fsf_req->data; 1751 1766 adapter = send_els->adapter; 1752 1767 port = send_els->port; 1753 1768 d_id = send_els->d_id; ··· 1760 1775 switch (header->fsf_status) { 1761 1776 1762 1777 case FSF_GOOD: 1778 + zfcp_san_dbf_event_els_response(fsf_req); 1763 1779 retval = 0; 1764 1780 break; 1765 1781 ··· 1940 1954 1941 1955 erp_action->fsf_req->erp_action = erp_action; 1942 1956 erp_action->fsf_req->qtcb->bottom.config.feature_selection = 1943 - (FSF_FEATURE_CFDC | FSF_FEATURE_LUN_SHARING); 1957 + FSF_FEATURE_CFDC | 1958 + FSF_FEATURE_LUN_SHARING | 1959 + FSF_FEATURE_UPDATE_ALERT; 1944 1960 1945 1961 /* start QDIO request for this FSF request */ 1946 1962 retval = zfcp_fsf_req_send(erp_action->fsf_req, &erp_action->timer); ··· 1978 1990 { 1979 1991 struct fsf_qtcb_bottom_config *bottom; 1980 1992 struct zfcp_adapter *adapter = fsf_req->adapter; 1993 + struct Scsi_Host *shost = adapter->scsi_host; 1981 1994 1982 1995 bottom = &fsf_req->qtcb->bottom.config; 1983 1996 ZFCP_LOG_DEBUG("low/high QTCB version 0x%x/0x%x of FSF\n", 1984 1997 bottom->low_qtcb_version, bottom->high_qtcb_version); 1985 1998 adapter->fsf_lic_version = bottom->lic_version; 1986 - adapter->supported_features = bottom->supported_features; 1999 + adapter->adapter_features = bottom->adapter_features; 2000 + adapter->connection_features = bottom->connection_features; 1987 2001 adapter->peer_wwpn = 0; 1988 2002 adapter->peer_wwnn = 0; 1989 2003 adapter->peer_d_id = 0; 1990 2004 1991 2005 if (xchg_ok) { 
1992 - adapter->wwnn = bottom->nport_serv_param.wwnn; 1993 - adapter->wwpn = bottom->nport_serv_param.wwpn; 1994 - adapter->s_id = bottom->s_id & ZFCP_DID_MASK; 2006 + fc_host_node_name(shost) = bottom->nport_serv_param.wwnn; 2007 + fc_host_port_name(shost) = bottom->nport_serv_param.wwpn; 2008 + fc_host_port_id(shost) = bottom->s_id & ZFCP_DID_MASK; 2009 + fc_host_speed(shost) = bottom->fc_link_speed; 2010 + fc_host_supported_classes(shost) = FC_COS_CLASS2 | FC_COS_CLASS3; 1995 2011 adapter->fc_topology = bottom->fc_topology; 1996 - adapter->fc_link_speed = bottom->fc_link_speed; 1997 2012 adapter->hydra_version = bottom->adapter_type; 2013 + if (adapter->physical_wwpn == 0) 2014 + adapter->physical_wwpn = fc_host_port_name(shost); 2015 + if (adapter->physical_s_id == 0) 2016 + adapter->physical_s_id = fc_host_port_id(shost); 1998 2017 } else { 1999 - adapter->wwnn = 0; 2000 - adapter->wwpn = 0; 2001 - adapter->s_id = 0; 2018 + fc_host_node_name(shost) = 0; 2019 + fc_host_port_name(shost) = 0; 2020 + fc_host_port_id(shost) = 0; 2021 + fc_host_speed(shost) = FC_PORTSPEED_UNKNOWN; 2002 2022 adapter->fc_topology = 0; 2003 - adapter->fc_link_speed = 0; 2004 2023 adapter->hydra_version = 0; 2005 2024 } 2006 2025 ··· 2017 2022 adapter->peer_wwnn = bottom->plogi_payload.wwnn; 2018 2023 } 2019 2024 2020 - if(adapter->supported_features & FSF_FEATURE_HBAAPI_MANAGEMENT){ 2025 + if (adapter->adapter_features & FSF_FEATURE_HBAAPI_MANAGEMENT) { 2021 2026 adapter->hardware_version = bottom->hardware_version; 2022 - memcpy(adapter->serial_number, bottom->serial_number, 17); 2023 - EBCASC(adapter->serial_number, sizeof(adapter->serial_number)); 2027 + memcpy(fc_host_serial_number(shost), bottom->serial_number, 2028 + min(FC_SERIAL_NUMBER_SIZE, 17)); 2029 + EBCASC(fc_host_serial_number(shost), 2030 + min(FC_SERIAL_NUMBER_SIZE, 17)); 2024 2031 } 2025 2032 2026 2033 ZFCP_LOG_NORMAL("The adapter %s reported the following characteristics:\n" 2027 - "WWNN 0x%016Lx, " 2028 - "WWPN 
0x%016Lx, " 2029 - "S_ID 0x%08x,\n" 2030 - "adapter version 0x%x, " 2031 - "LIC version 0x%x, " 2032 - "FC link speed %d Gb/s\n", 2033 - zfcp_get_busid_by_adapter(adapter), 2034 - adapter->wwnn, 2035 - adapter->wwpn, 2036 - (unsigned int) adapter->s_id, 2037 - adapter->hydra_version, 2038 - adapter->fsf_lic_version, 2039 - adapter->fc_link_speed); 2034 + "WWNN 0x%016Lx, " 2035 + "WWPN 0x%016Lx, " 2036 + "S_ID 0x%08x,\n" 2037 + "adapter version 0x%x, " 2038 + "LIC version 0x%x, " 2039 + "FC link speed %d Gb/s\n", 2040 + zfcp_get_busid_by_adapter(adapter), 2041 + (wwn_t) fc_host_node_name(shost), 2042 + (wwn_t) fc_host_port_name(shost), 2043 + fc_host_port_id(shost), 2044 + adapter->hydra_version, 2045 + adapter->fsf_lic_version, 2046 + fc_host_speed(shost)); 2040 2047 if (ZFCP_QTCB_VERSION < bottom->low_qtcb_version) { 2041 2048 ZFCP_LOG_NORMAL("error: the adapter %s " 2042 2049 "only supports newer control block " ··· 2059 2062 zfcp_erp_adapter_shutdown(adapter, 0); 2060 2063 return -EIO; 2061 2064 } 2062 - zfcp_set_fc_host_attrs(adapter); 2063 2065 return 0; 2064 2066 } 2065 2067 ··· 2074 2078 { 2075 2079 struct fsf_qtcb_bottom_config *bottom; 2076 2080 struct zfcp_adapter *adapter = fsf_req->adapter; 2081 + struct fsf_qtcb *qtcb = fsf_req->qtcb; 2077 2082 2078 2083 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) 2079 2084 return -EIO; 2080 2085 2081 - switch (fsf_req->qtcb->header.fsf_status) { 2086 + switch (qtcb->header.fsf_status) { 2082 2087 2083 2088 case FSF_GOOD: 2084 2089 if (zfcp_fsf_exchange_config_evaluate(fsf_req, 1)) ··· 2109 2112 zfcp_erp_adapter_shutdown(adapter, 0); 2110 2113 return -EIO; 2111 2114 case FSF_TOPO_FABRIC: 2112 - ZFCP_LOG_INFO("Switched fabric fibrechannel " 2115 + ZFCP_LOG_NORMAL("Switched fabric fibrechannel " 2113 2116 "network detected at adapter %s.\n", 2114 2117 zfcp_get_busid_by_adapter(adapter)); 2115 2118 break; ··· 2127 2130 zfcp_erp_adapter_shutdown(adapter, 0); 2128 2131 return -EIO; 2129 2132 } 2130 - bottom = 
&fsf_req->qtcb->bottom.config; 2133 + bottom = &qtcb->bottom.config; 2131 2134 if (bottom->max_qtcb_size < sizeof(struct fsf_qtcb)) { 2132 2135 ZFCP_LOG_NORMAL("bug: Maximum QTCB size (%d bytes) " 2133 2136 "allowed by the adapter %s " ··· 2152 2155 if (zfcp_fsf_exchange_config_evaluate(fsf_req, 0)) 2153 2156 return -EIO; 2154 2157 2155 - ZFCP_LOG_INFO("Local link to adapter %s is down\n", 2156 - zfcp_get_busid_by_adapter(adapter)); 2157 - atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK | 2158 - ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, 2159 - &adapter->status); 2160 - zfcp_erp_adapter_failed(adapter); 2158 + atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status); 2159 + 2160 + zfcp_fsf_link_down_info_eval(adapter, 2161 + &qtcb->header.fsf_status_qual.link_down_info); 2161 2162 break; 2162 2163 default: 2163 2164 debug_text_event(fsf_req->adapter->erp_dbf, 0, "fsf-stat-ng"); ··· 2169 2174 2170 2175 /** 2171 2176 * zfcp_fsf_exchange_port_data - request information about local port 2177 + * @erp_action: ERP action for the adapter for which port data is requested 2172 2178 * @adapter: for which port data is requested 2173 2179 * @data: response to exchange port data request 2174 2180 */ 2175 2181 int 2176 - zfcp_fsf_exchange_port_data(struct zfcp_adapter *adapter, 2182 + zfcp_fsf_exchange_port_data(struct zfcp_erp_action *erp_action, 2183 + struct zfcp_adapter *adapter, 2177 2184 struct fsf_qtcb_bottom_port *data) 2178 2185 { 2179 2186 volatile struct qdio_buffer_element *sbale; ··· 2184 2187 struct zfcp_fsf_req *fsf_req; 2185 2188 struct timer_list *timer; 2186 2189 2187 - if(!(adapter->supported_features & FSF_FEATURE_HBAAPI_MANAGEMENT)){ 2190 + if (!(adapter->adapter_features & FSF_FEATURE_HBAAPI_MANAGEMENT)) { 2188 2191 ZFCP_LOG_INFO("error: exchange port data " 2189 2192 "command not supported by adapter %s\n", 2190 2193 zfcp_get_busid_by_adapter(adapter)); ··· 2208 2211 goto out; 2209 2212 } 2210 2213 2214 + if (erp_action) { 2215 + erp_action->fsf_req = 
fsf_req; 2216 + fsf_req->erp_action = erp_action; 2217 + } 2218 + 2219 + if (data) 2220 + fsf_req->data = (unsigned long) data; 2221 + 2211 2222 sbale = zfcp_qdio_sbale_req(fsf_req, fsf_req->sbal_curr, 0); 2212 2223 sbale[0].flags |= SBAL_FLAGS0_TYPE_READ; 2213 2224 sbale[1].flags |= SBAL_FLAGS_LAST_ENTRY; 2214 - 2215 - fsf_req->data.port_data = data; 2216 2225 2217 2226 init_timer(timer); 2218 2227 timer->function = zfcp_fsf_request_timeout_handler; ··· 2231 2228 "command on the adapter %s\n", 2232 2229 zfcp_get_busid_by_adapter(adapter)); 2233 2230 zfcp_fsf_req_free(fsf_req); 2231 + if (erp_action) 2232 + erp_action->fsf_req = NULL; 2234 2233 write_unlock_irqrestore(&adapter->request_queue.queue_lock, 2235 2234 lock_flags); 2236 2235 goto out; ··· 2261 2256 static void 2262 2257 zfcp_fsf_exchange_port_data_handler(struct zfcp_fsf_req *fsf_req) 2263 2258 { 2264 - struct fsf_qtcb_bottom_port *bottom; 2265 - struct fsf_qtcb_bottom_port *data = fsf_req->data.port_data; 2259 + struct zfcp_adapter *adapter = fsf_req->adapter; 2260 + struct Scsi_Host *shost = adapter->scsi_host; 2261 + struct fsf_qtcb *qtcb = fsf_req->qtcb; 2262 + struct fsf_qtcb_bottom_port *bottom, *data; 2266 2263 2267 2264 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) 2268 2265 return; 2269 2266 2270 - switch (fsf_req->qtcb->header.fsf_status) { 2267 + switch (qtcb->header.fsf_status) { 2271 2268 case FSF_GOOD: 2272 - bottom = &fsf_req->qtcb->bottom.port; 2273 - memcpy(data, bottom, sizeof(*data)); 2269 + atomic_set_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status); 2270 + 2271 + bottom = &qtcb->bottom.port; 2272 + data = (struct fsf_qtcb_bottom_port*) fsf_req->data; 2273 + if (data) 2274 + memcpy(data, bottom, sizeof(struct fsf_qtcb_bottom_port)); 2275 + if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) { 2276 + adapter->physical_wwpn = bottom->wwpn; 2277 + adapter->physical_s_id = bottom->fc_port_id; 2278 + } else { 2279 + adapter->physical_wwpn = fc_host_port_name(shost); 2280 + 
adapter->physical_s_id = fc_host_port_id(shost); 2281 + } 2282 + fc_host_maxframe_size(shost) = bottom->maximum_frame_size; 2283 + break; 2284 + 2285 + case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE: 2286 + atomic_set_mask(ZFCP_STATUS_ADAPTER_XPORT_OK, &adapter->status); 2287 + 2288 + zfcp_fsf_link_down_info_eval(adapter, 2289 + &qtcb->header.fsf_status_qual.link_down_info); 2274 2290 break; 2275 2291 2276 2292 default: 2277 - debug_text_event(fsf_req->adapter->erp_dbf, 0, "xchg-port-ng"); 2278 - debug_event(fsf_req->adapter->erp_dbf, 0, 2293 + debug_text_event(adapter->erp_dbf, 0, "xchg-port-ng"); 2294 + debug_event(adapter->erp_dbf, 0, 2279 2295 &fsf_req->qtcb->header.fsf_status, sizeof(u32)); 2280 2296 } 2281 2297 } ··· 2338 2312 2339 2313 erp_action->fsf_req->qtcb->bottom.support.d_id = erp_action->port->d_id; 2340 2314 atomic_set_mask(ZFCP_STATUS_COMMON_OPENING, &erp_action->port->status); 2341 - erp_action->fsf_req->data.open_port.port = erp_action->port; 2315 + erp_action->fsf_req->data = (unsigned long) erp_action->port; 2342 2316 erp_action->fsf_req->erp_action = erp_action; 2343 2317 2344 2318 /* start QDIO request for this FSF request */ ··· 2379 2353 struct fsf_qtcb_header *header; 2380 2354 u16 subtable, rule, counter; 2381 2355 2382 - port = fsf_req->data.open_port.port; 2356 + port = (struct zfcp_port *) fsf_req->data; 2383 2357 header = &fsf_req->qtcb->header; 2384 2358 2385 2359 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) { ··· 2592 2566 sbale[1].flags |= SBAL_FLAGS_LAST_ENTRY; 2593 2567 2594 2568 atomic_set_mask(ZFCP_STATUS_COMMON_CLOSING, &erp_action->port->status); 2595 - erp_action->fsf_req->data.close_port.port = erp_action->port; 2569 + erp_action->fsf_req->data = (unsigned long) erp_action->port; 2596 2570 erp_action->fsf_req->erp_action = erp_action; 2597 2571 erp_action->fsf_req->qtcb->header.port_handle = 2598 2572 erp_action->port->handle; ··· 2632 2606 int retval = -EINVAL; 2633 2607 struct zfcp_port *port; 2634 2608 2635 - port = 
fsf_req->data.close_port.port; 2609 + port = (struct zfcp_port *) fsf_req->data; 2636 2610 2637 2611 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) { 2638 2612 /* don't change port status in our bookkeeping */ ··· 2729 2703 atomic_set_mask(ZFCP_STATUS_PORT_PHYS_CLOSING, 2730 2704 &erp_action->port->status); 2731 2705 /* save a pointer to this port */ 2732 - erp_action->fsf_req->data.close_physical_port.port = erp_action->port; 2733 - /* port to be closeed */ 2706 + erp_action->fsf_req->data = (unsigned long) erp_action->port; 2707 + /* port to be closed */ 2734 2708 erp_action->fsf_req->qtcb->header.port_handle = 2735 2709 erp_action->port->handle; 2736 2710 erp_action->fsf_req->erp_action = erp_action; ··· 2773 2747 struct fsf_qtcb_header *header; 2774 2748 u16 subtable, rule, counter; 2775 2749 2776 - port = fsf_req->data.close_physical_port.port; 2750 + port = (struct zfcp_port *) fsf_req->data; 2777 2751 header = &fsf_req->qtcb->header; 2778 2752 2779 2753 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) { ··· 2934 2908 erp_action->port->handle; 2935 2909 erp_action->fsf_req->qtcb->bottom.support.fcp_lun = 2936 2910 erp_action->unit->fcp_lun; 2911 + if (!(erp_action->adapter->connection_features & FSF_FEATURE_NPIV_MODE)) 2937 2912 erp_action->fsf_req->qtcb->bottom.support.option = 2938 2913 FSF_OPEN_LUN_SUPPRESS_BOXING; 2939 2914 atomic_set_mask(ZFCP_STATUS_COMMON_OPENING, &erp_action->unit->status); 2940 - erp_action->fsf_req->data.open_unit.unit = erp_action->unit; 2915 + erp_action->fsf_req->data = (unsigned long) erp_action->unit; 2941 2916 erp_action->fsf_req->erp_action = erp_action; 2942 2917 2943 2918 /* start QDIO request for this FSF request */ ··· 2982 2955 struct fsf_qtcb_bottom_support *bottom; 2983 2956 struct fsf_queue_designator *queue_designator; 2984 2957 u16 subtable, rule, counter; 2985 - u32 allowed, exclusive, readwrite; 2958 + int exclusive, readwrite; 2986 2959 2987 - unit = fsf_req->data.open_unit.unit; 2960 + unit = (struct zfcp_unit 
*) fsf_req->data; 2988 2961 2989 2962 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) { 2990 2963 /* don't change unit status in our bookkeeping */ ··· 2995 2968 header = &fsf_req->qtcb->header; 2996 2969 bottom = &fsf_req->qtcb->bottom.support; 2997 2970 queue_designator = &header->fsf_status_qual.fsf_queue_designator; 2998 - 2999 - allowed = bottom->lun_access_info & FSF_UNIT_ACCESS_OPEN_LUN_ALLOWED; 3000 - exclusive = bottom->lun_access_info & FSF_UNIT_ACCESS_EXCLUSIVE; 3001 - readwrite = bottom->lun_access_info & FSF_UNIT_ACCESS_OUTBOUND_TRANSFER; 3002 2971 3003 2972 atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED | 3004 2973 ZFCP_STATUS_UNIT_SHARED | ··· 3169 3146 unit->handle); 3170 3147 /* mark unit as open */ 3171 3148 atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &unit->status); 3172 - atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED | 3173 - ZFCP_STATUS_COMMON_ACCESS_BOXED, 3174 - &unit->status); 3175 - if (adapter->supported_features & FSF_FEATURE_LUN_SHARING){ 3149 + 3150 + if (!(adapter->connection_features & FSF_FEATURE_NPIV_MODE) && 3151 + (adapter->adapter_features & FSF_FEATURE_LUN_SHARING) && 3152 + (adapter->ccw_device->id.dev_model != ZFCP_DEVICE_MODEL_PRIV)) { 3153 + exclusive = (bottom->lun_access_info & 3154 + FSF_UNIT_ACCESS_EXCLUSIVE); 3155 + readwrite = (bottom->lun_access_info & 3156 + FSF_UNIT_ACCESS_OUTBOUND_TRANSFER); 3157 + 3176 3158 if (!exclusive) 3177 3159 atomic_set_mask(ZFCP_STATUS_UNIT_SHARED, 3178 3160 &unit->status); ··· 3270 3242 erp_action->port->handle; 3271 3243 erp_action->fsf_req->qtcb->header.lun_handle = erp_action->unit->handle; 3272 3244 atomic_set_mask(ZFCP_STATUS_COMMON_CLOSING, &erp_action->unit->status); 3273 - erp_action->fsf_req->data.close_unit.unit = erp_action->unit; 3245 + erp_action->fsf_req->data = (unsigned long) erp_action->unit; 3274 3246 erp_action->fsf_req->erp_action = erp_action; 3275 3247 3276 3248 /* start QDIO request for this FSF request */ ··· 3309 3281 int retval = -EINVAL; 3310 3282 struct 
zfcp_unit *unit; 3311 3283 3312 - unit = fsf_req->data.close_unit.unit; /* restore unit */ 3284 + unit = (struct zfcp_unit *) fsf_req->data; 3313 3285 3314 3286 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) { 3315 3287 /* don't change unit status in our bookkeeping */ ··· 3333 3305 debug_text_event(fsf_req->adapter->erp_dbf, 1, 3334 3306 "fsf_s_phand_nv"); 3335 3307 zfcp_erp_adapter_reopen(unit->port->adapter, 0); 3336 - zfcp_cmd_dbf_event_fsf("porthinv", fsf_req, 3337 - &fsf_req->qtcb->header.fsf_status_qual, 3338 - sizeof (union fsf_status_qual)); 3339 3308 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3340 3309 break; 3341 3310 ··· 3351 3326 debug_text_event(fsf_req->adapter->erp_dbf, 1, 3352 3327 "fsf_s_lhand_nv"); 3353 3328 zfcp_erp_port_reopen(unit->port, 0); 3354 - zfcp_cmd_dbf_event_fsf("lunhinv", fsf_req, 3355 - &fsf_req->qtcb->header.fsf_status_qual, 3356 - sizeof (union fsf_status_qual)); 3357 3329 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3358 3330 break; 3359 3331 ··· 3458 3436 goto failed_req_create; 3459 3437 } 3460 3438 3461 - /* 3462 - * associate FSF request with SCSI request 3463 - * (need this for look up on abort) 3464 - */ 3465 - fsf_req->data.send_fcp_command_task.fsf_req = fsf_req; 3466 - scsi_cmnd->host_scribble = (char *) &(fsf_req->data); 3439 + zfcp_unit_get(unit); 3440 + fsf_req->unit = unit; 3467 3441 3468 - /* 3469 - * associate SCSI command with FSF request 3470 - * (need this for look up on normal command completion) 3471 - */ 3472 - fsf_req->data.send_fcp_command_task.scsi_cmnd = scsi_cmnd; 3473 - fsf_req->data.send_fcp_command_task.start_jiffies = jiffies; 3474 - fsf_req->data.send_fcp_command_task.unit = unit; 3475 - ZFCP_LOG_DEBUG("unit=%p, fcp_lun=0x%016Lx\n", unit, unit->fcp_lun); 3442 + /* associate FSF request with SCSI request (for look up on abort) */ 3443 + scsi_cmnd->host_scribble = (char *) fsf_req; 3444 + 3445 + /* associate SCSI command with FSF request */ 3446 + fsf_req->data = (unsigned long) scsi_cmnd; 3476 
3447 3477 3448 /* set handles of unit and its parent port in QTCB */ 3478 3449 fsf_req->qtcb->header.lun_handle = unit->handle; ··· 3599 3584 send_failed: 3600 3585 no_fit: 3601 3586 failed_scsi_cmnd: 3587 + zfcp_unit_put(unit); 3602 3588 zfcp_fsf_req_free(fsf_req); 3603 3589 fsf_req = NULL; 3604 3590 scsi_cmnd->host_scribble = NULL; ··· 3656 3640 * hold a pointer to the unit being target of this 3657 3641 * task management request 3658 3642 */ 3659 - fsf_req->data.send_fcp_command_task_management.unit = unit; 3643 + fsf_req->data = (unsigned long) unit; 3660 3644 3661 3645 /* set FSF related fields in QTCB */ 3662 3646 fsf_req->qtcb->header.lun_handle = unit->handle; ··· 3722 3706 header = &fsf_req->qtcb->header; 3723 3707 3724 3708 if (unlikely(fsf_req->status & ZFCP_STATUS_FSFREQ_TASK_MANAGEMENT)) 3725 - unit = fsf_req->data.send_fcp_command_task_management.unit; 3709 + unit = (struct zfcp_unit *) fsf_req->data; 3726 3710 else 3727 - unit = fsf_req->data.send_fcp_command_task.unit; 3711 + unit = fsf_req->unit; 3728 3712 3729 3713 if (unlikely(fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR)) { 3730 3714 /* go directly to calls of special handlers */ ··· 3781 3765 debug_text_event(fsf_req->adapter->erp_dbf, 1, 3782 3766 "fsf_s_hand_mis"); 3783 3767 zfcp_erp_adapter_reopen(unit->port->adapter, 0); 3784 - zfcp_cmd_dbf_event_fsf("handmism", 3785 - fsf_req, 3786 - &header->fsf_status_qual, 3787 - sizeof (union fsf_status_qual)); 3788 3768 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3789 3769 break; 3790 3770 ··· 3801 3789 debug_text_exception(fsf_req->adapter->erp_dbf, 0, 3802 3790 "fsf_s_class_nsup"); 3803 3791 zfcp_erp_adapter_shutdown(unit->port->adapter, 0); 3804 - zfcp_cmd_dbf_event_fsf("unsclass", 3805 - fsf_req, 3806 - &header->fsf_status_qual, 3807 - sizeof (union fsf_status_qual)); 3808 3792 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3809 3793 break; 3810 3794 ··· 3819 3811 debug_text_event(fsf_req->adapter->erp_dbf, 1, 3820 3812 "fsf_s_fcp_lun_nv"); 3821 3813 
zfcp_erp_port_reopen(unit->port, 0); 3822 - zfcp_cmd_dbf_event_fsf("fluninv", 3823 - fsf_req, 3824 - &header->fsf_status_qual, 3825 - sizeof (union fsf_status_qual)); 3826 3814 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3827 3815 break; 3828 3816 ··· 3857 3853 debug_text_event(fsf_req->adapter->erp_dbf, 0, 3858 3854 "fsf_s_dir_ind_nv"); 3859 3855 zfcp_erp_adapter_shutdown(unit->port->adapter, 0); 3860 - zfcp_cmd_dbf_event_fsf("dirinv", 3861 - fsf_req, 3862 - &header->fsf_status_qual, 3863 - sizeof (union fsf_status_qual)); 3864 3856 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3865 3857 break; 3866 3858 ··· 3872 3872 debug_text_event(fsf_req->adapter->erp_dbf, 0, 3873 3873 "fsf_s_cmd_len_nv"); 3874 3874 zfcp_erp_adapter_shutdown(unit->port->adapter, 0); 3875 - zfcp_cmd_dbf_event_fsf("cleninv", 3876 - fsf_req, 3877 - &header->fsf_status_qual, 3878 - sizeof (union fsf_status_qual)); 3879 3875 fsf_req->status |= ZFCP_STATUS_FSFREQ_ERROR; 3880 3876 break; 3881 3877 ··· 3943 3947 zfcp_fsf_send_fcp_command_task_management_handler(fsf_req); 3944 3948 } else { 3945 3949 retval = zfcp_fsf_send_fcp_command_task_handler(fsf_req); 3950 + fsf_req->unit = NULL; 3951 + zfcp_unit_put(unit); 3946 3952 } 3947 3953 return retval; 3948 3954 } ··· 3968 3970 u32 sns_len; 3969 3971 char *fcp_rsp_info = zfcp_get_fcp_rsp_info_ptr(fcp_rsp_iu); 3970 3972 unsigned long flags; 3971 - struct zfcp_unit *unit = fsf_req->data.send_fcp_command_task.unit; 3973 + struct zfcp_unit *unit = fsf_req->unit; 3972 3974 3973 3975 read_lock_irqsave(&fsf_req->adapter->abort_lock, flags); 3974 - scpnt = fsf_req->data.send_fcp_command_task.scsi_cmnd; 3976 + scpnt = (struct scsi_cmnd *) fsf_req->data; 3975 3977 if (unlikely(!scpnt)) { 3976 3978 ZFCP_LOG_DEBUG 3977 3979 ("Command with fsf_req %p is not associated to " ··· 4041 4043 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, 4042 4044 (char *) &fsf_req->qtcb-> 4043 4045 bottom.io.fcp_cmnd, FSF_FCP_CMND_SIZE); 4044 - zfcp_cmd_dbf_event_fsf("clenmis", fsf_req, NULL, 0); 
4045 4046 set_host_byte(&scpnt->result, DID_ERROR); 4046 4047 goto skip_fsfstatus; 4047 4048 case RSP_CODE_FIELD_INVALID: ··· 4059 4062 (char *) &fsf_req->qtcb-> 4060 4063 bottom.io.fcp_cmnd, FSF_FCP_CMND_SIZE); 4061 4064 set_host_byte(&scpnt->result, DID_ERROR); 4062 - zfcp_cmd_dbf_event_fsf("codeinv", fsf_req, NULL, 0); 4063 4065 goto skip_fsfstatus; 4064 4066 case RSP_CODE_RO_MISMATCH: 4065 4067 /* hardware bug */ ··· 4075 4079 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, 4076 4080 (char *) &fsf_req->qtcb-> 4077 4081 bottom.io.fcp_cmnd, FSF_FCP_CMND_SIZE); 4078 - zfcp_cmd_dbf_event_fsf("codemism", fsf_req, NULL, 0); 4079 4082 set_host_byte(&scpnt->result, DID_ERROR); 4080 4083 goto skip_fsfstatus; 4081 4084 default: ··· 4091 4096 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_DEBUG, 4092 4097 (char *) &fsf_req->qtcb-> 4093 4098 bottom.io.fcp_cmnd, FSF_FCP_CMND_SIZE); 4094 - zfcp_cmd_dbf_event_fsf("undeffcp", fsf_req, NULL, 0); 4095 4099 set_host_byte(&scpnt->result, DID_ERROR); 4096 4100 goto skip_fsfstatus; 4097 4101 } ··· 4152 4158 skip_fsfstatus: 4153 4159 ZFCP_LOG_DEBUG("scpnt->result =0x%x\n", scpnt->result); 4154 4160 4155 - zfcp_cmd_dbf_event_scsi("response", scpnt); 4161 + if (scpnt->result != 0) 4162 + zfcp_scsi_dbf_event_result("erro", 3, fsf_req->adapter, scpnt); 4163 + else if (scpnt->retries > 0) 4164 + zfcp_scsi_dbf_event_result("retr", 4, fsf_req->adapter, scpnt); 4165 + else 4166 + zfcp_scsi_dbf_event_result("norm", 6, fsf_req->adapter, scpnt); 4156 4167 4157 4168 /* cleanup pointer (need this especially for abort) */ 4158 4169 scpnt->host_scribble = NULL; 4159 4170 4160 - /* 4161 - * NOTE: 4162 - * according to the outcome of a discussion on linux-scsi we 4163 - * don't need to grab the io_request_lock here since we use 4164 - * the new eh 4165 - */ 4166 4171 /* always call back */ 4167 - 4168 4172 (scpnt->scsi_done) (scpnt); 4169 4173 4170 4174 /* ··· 4190 4198 struct fcp_rsp_iu *fcp_rsp_iu = (struct fcp_rsp_iu *) 4191 4199 &(fsf_req->qtcb->bottom.io.fcp_rsp); 4192 4200 
char *fcp_rsp_info = zfcp_get_fcp_rsp_info_ptr(fcp_rsp_iu); 4193 - struct zfcp_unit *unit = 4194 - fsf_req->data.send_fcp_command_task_management.unit; 4201 + struct zfcp_unit *unit = (struct zfcp_unit *) fsf_req->data; 4195 4202 4196 4203 del_timer(&fsf_req->adapter->scsi_er_timer); 4197 4204 if (fsf_req->status & ZFCP_STATUS_FSFREQ_ERROR) { ··· 4267 4276 int direction; 4268 4277 int retval = 0; 4269 4278 4270 - if (!(adapter->supported_features & FSF_FEATURE_CFDC)) { 4279 + if (!(adapter->adapter_features & FSF_FEATURE_CFDC)) { 4271 4280 ZFCP_LOG_INFO("cfdc not supported (adapter %s)\n", 4272 4281 zfcp_get_busid_by_adapter(adapter)); 4273 4282 retval = -EOPNOTSUPP; ··· 4540 4549 return retval; 4541 4550 } 4542 4551 4543 - 4544 - /* 4545 - * function: zfcp_fsf_req_wait_and_cleanup 4546 - * 4547 - * purpose: 4548 - * 4549 - * FIXME(design): signal seems to be <0 !!! 4550 - * returns: 0 - request completed (*status is valid), cleanup succ. 4551 - * <0 - request completed (*status is valid), cleanup failed 4552 - * >0 - signal which interrupted waiting (*status invalid), 4553 - * request not completed, no cleanup 4554 - * 4555 - * *status is a copy of status of completed fsf_req 4556 - */ 4557 - int 4558 - zfcp_fsf_req_wait_and_cleanup(struct zfcp_fsf_req *fsf_req, 4559 - int interruptible, u32 * status) 4560 - { 4561 - int retval = 0; 4562 - int signal = 0; 4563 - 4564 - if (interruptible) { 4565 - __wait_event_interruptible(fsf_req->completion_wq, 4566 - fsf_req->status & 4567 - ZFCP_STATUS_FSFREQ_COMPLETED, 4568 - signal); 4569 - if (signal) { 4570 - ZFCP_LOG_DEBUG("Caught signal %i while waiting for the " 4571 - "completion of the request at %p\n", 4572 - signal, fsf_req); 4573 - retval = signal; 4574 - goto out; 4575 - } 4576 - } else { 4577 - __wait_event(fsf_req->completion_wq, 4578 - fsf_req->status & ZFCP_STATUS_FSFREQ_COMPLETED); 4579 - } 4580 - 4581 - *status = fsf_req->status; 4582 - 4583 - /* cleanup request */ 4584 - zfcp_fsf_req_free(fsf_req); 4585 - 
out: 4586 - return retval; 4587 - } 4588 - 4589 4552 static inline int 4590 4553 zfcp_fsf_req_sbal_check(unsigned long *flags, 4591 4554 struct zfcp_qdio_queue *queue, int needed) ··· 4555 4610 * set qtcb pointer in fsf_req and initialize QTCB 4556 4611 */ 4557 4612 static inline void 4558 - zfcp_fsf_req_qtcb_init(struct zfcp_fsf_req *fsf_req, u32 fsf_cmd) 4613 + zfcp_fsf_req_qtcb_init(struct zfcp_fsf_req *fsf_req) 4559 4614 { 4560 4615 if (likely(fsf_req->qtcb != NULL)) { 4616 + fsf_req->qtcb->prefix.req_seq_no = fsf_req->adapter->fsf_req_seq_no; 4561 4617 fsf_req->qtcb->prefix.req_id = (unsigned long)fsf_req; 4562 4618 fsf_req->qtcb->prefix.ulp_info = ZFCP_ULP_INFO_VERSION; 4563 - fsf_req->qtcb->prefix.qtcb_type = fsf_qtcb_type[fsf_cmd]; 4619 + fsf_req->qtcb->prefix.qtcb_type = fsf_qtcb_type[fsf_req->fsf_command]; 4564 4620 fsf_req->qtcb->prefix.qtcb_version = ZFCP_QTCB_VERSION; 4565 4621 fsf_req->qtcb->header.req_handle = (unsigned long)fsf_req; 4566 - fsf_req->qtcb->header.fsf_command = fsf_cmd; 4622 + fsf_req->qtcb->header.fsf_command = fsf_req->fsf_command; 4567 4623 } 4568 4624 } 4569 4625 ··· 4632 4686 goto failed_fsf_req; 4633 4687 } 4634 4688 4635 - zfcp_fsf_req_qtcb_init(fsf_req, fsf_cmd); 4689 + fsf_req->adapter = adapter; 4690 + fsf_req->fsf_command = fsf_cmd; 4691 + 4692 + zfcp_fsf_req_qtcb_init(fsf_req); 4636 4693 4637 4694 /* initialize waitqueue which may be used to wait on 4638 4695 this request completion */ ··· 4657 4708 goto failed_sbals; 4658 4709 } 4659 4710 4660 - fsf_req->adapter = adapter; /* pointer to "parent" adapter */ 4661 - fsf_req->fsf_command = fsf_cmd; 4711 + if (fsf_req->qtcb) { 4712 + fsf_req->seq_no = adapter->fsf_req_seq_no; 4713 + fsf_req->qtcb->prefix.req_seq_no = adapter->fsf_req_seq_no; 4714 + } 4662 4715 fsf_req->sbal_number = 1; 4663 4716 fsf_req->sbal_first = req_queue->free_index; 4664 4717 fsf_req->sbal_curr = req_queue->free_index; ··· 4711 4760 struct zfcp_adapter *adapter; 4712 4761 struct zfcp_qdio_queue 
*req_queue; 4713 4762 volatile struct qdio_buffer_element *sbale; 4763 + int inc_seq_no; 4714 4764 int new_distance_from_int; 4715 4765 unsigned long flags; 4716 - int inc_seq_no = 1; 4717 4766 int retval = 0; 4718 4767 4719 4768 adapter = fsf_req->adapter; ··· 4727 4776 ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_TRACE, (char *) sbale[1].addr, 4728 4777 sbale[1].length); 4729 4778 4730 - /* set sequence counter in QTCB */ 4731 - if (likely(fsf_req->qtcb)) { 4732 - fsf_req->qtcb->prefix.req_seq_no = adapter->fsf_req_seq_no; 4733 - fsf_req->seq_no = adapter->fsf_req_seq_no; 4734 - ZFCP_LOG_TRACE("FSF request %p of adapter %s gets " 4735 - "FSF sequence counter value of %i\n", 4736 - fsf_req, 4737 - zfcp_get_busid_by_adapter(adapter), 4738 - fsf_req->qtcb->prefix.req_seq_no); 4739 - } else 4740 - inc_seq_no = 0; 4741 - 4742 4779 /* put allocated FSF request at list tail */ 4743 4780 spin_lock_irqsave(&adapter->fsf_req_list_lock, flags); 4744 4781 list_add_tail(&fsf_req->list, &adapter->fsf_req_list_head); 4745 4782 spin_unlock_irqrestore(&adapter->fsf_req_list_lock, flags); 4783 + 4784 + inc_seq_no = (fsf_req->qtcb != NULL); 4746 4785 4747 4786 /* figure out expiration time of timeout and start timeout */ 4748 4787 if (unlikely(timer)) { ··· 4762 4821 req_queue->free_index += fsf_req->sbal_number; /* increase */ 4763 4822 req_queue->free_index %= QDIO_MAX_BUFFERS_PER_Q; /* wrap if needed */ 4764 4823 new_distance_from_int = zfcp_qdio_determine_pci(req_queue, fsf_req); 4824 + 4825 + fsf_req->issued = get_clock(); 4765 4826 4766 4827 retval = do_QDIO(adapter->ccw_device, 4767 4828 QDIO_FLAG_SYNC_OUTPUT, ··· 4803 4860 * routines resulting in missing sequence counter values 4804 4861 * otherwise, 4805 4862 */ 4863 + 4806 4864 /* Don't increase for unsolicited status */ 4807 - if (likely(inc_seq_no)) { 4865 + if (inc_seq_no) 4808 4866 adapter->fsf_req_seq_no++; 4809 - ZFCP_LOG_TRACE 4810 - ("FSF sequence counter value of adapter %s " 4811 - "increased to %i\n", 4812 - 
zfcp_get_busid_by_adapter(adapter), 4813 - adapter->fsf_req_seq_no); 4814 - } 4867 + 4815 4868 /* count FSF requests pending */ 4816 4869 atomic_inc(&adapter->fsf_reqs_active); 4817 4870 }
+42 -12
drivers/s390/scsi/zfcp_fsf.h
··· 116 116 #define FSF_INVALID_COMMAND_OPTION 0x000000E5 117 117 /* #define FSF_ERROR 0x000000FF */ 118 118 119 + #define FSF_PROT_STATUS_QUAL_SIZE 16 119 120 #define FSF_STATUS_QUALIFIER_SIZE 16 120 121 121 122 /* FSF status qualifier, recommendations */ ··· 140 139 #define FSF_SQ_CFDC_SUBTABLE_LUN 0x0004 141 140 142 141 /* FSF status qualifier (most significant 4 bytes), local link down */ 143 - #define FSF_PSQ_LINK_NOLIGHT 0x00000004 144 - #define FSF_PSQ_LINK_WRAPPLUG 0x00000008 145 - #define FSF_PSQ_LINK_NOFCP 0x00000010 142 + #define FSF_PSQ_LINK_NO_LIGHT 0x00000004 143 + #define FSF_PSQ_LINK_WRAP_PLUG 0x00000008 144 + #define FSF_PSQ_LINK_NO_FCP 0x00000010 145 + #define FSF_PSQ_LINK_FIRMWARE_UPDATE 0x00000020 146 + #define FSF_PSQ_LINK_INVALID_WWPN 0x00000100 147 + #define FSF_PSQ_LINK_NO_NPIV_SUPPORT 0x00000200 148 + #define FSF_PSQ_LINK_NO_FCP_RESOURCES 0x00000400 149 + #define FSF_PSQ_LINK_NO_FABRIC_RESOURCES 0x00000800 150 + #define FSF_PSQ_LINK_FABRIC_LOGIN_UNABLE 0x00001000 151 + #define FSF_PSQ_LINK_WWPN_ASSIGNMENT_CORRUPTED 0x00002000 152 + #define FSF_PSQ_LINK_MODE_TABLE_CURRUPTED 0x00004000 153 + #define FSF_PSQ_LINK_NO_WWPN_ASSIGNMENT 0x00008000 146 154 147 155 /* payload size in status read buffer */ 148 156 #define FSF_STATUS_READ_PAYLOAD_SIZE 4032 ··· 164 154 #define FSF_STATUS_READ_INCOMING_ELS 0x00000002 165 155 #define FSF_STATUS_READ_SENSE_DATA_AVAIL 0x00000003 166 156 #define FSF_STATUS_READ_BIT_ERROR_THRESHOLD 0x00000004 167 - #define FSF_STATUS_READ_LINK_DOWN 0x00000005 /* FIXME: really? 
*/ 157 + #define FSF_STATUS_READ_LINK_DOWN 0x00000005 168 158 #define FSF_STATUS_READ_LINK_UP 0x00000006 169 159 #define FSF_STATUS_READ_CFDC_UPDATED 0x0000000A 170 160 #define FSF_STATUS_READ_CFDC_HARDENED 0x0000000B 161 + #define FSF_STATUS_READ_FEATURE_UPDATE_ALERT 0x0000000C 171 162 172 163 /* status subtypes in status read buffer */ 173 164 #define FSF_STATUS_READ_SUB_CLOSE_PHYS_PORT 0x00000001 174 165 #define FSF_STATUS_READ_SUB_ERROR_PORT 0x00000002 166 + 167 + /* status subtypes for link down */ 168 + #define FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK 0x00000000 169 + #define FSF_STATUS_READ_SUB_FDISC_FAILED 0x00000001 170 + #define FSF_STATUS_READ_SUB_FIRMWARE_UPDATE 0x00000002 175 171 176 172 /* status subtypes for CFDC */ 177 173 #define FSF_STATUS_READ_SUB_CFDC_HARDENED_ON_SE 0x00000002 ··· 209 193 #define FSF_QTCB_LOG_SIZE 1024 210 194 211 195 /* channel features */ 212 - #define FSF_FEATURE_QTCB_SUPPRESSION 0x00000001 213 196 #define FSF_FEATURE_CFDC 0x00000002 214 197 #define FSF_FEATURE_LUN_SHARING 0x00000004 215 198 #define FSF_FEATURE_HBAAPI_MANAGEMENT 0x00000010 216 199 #define FSF_FEATURE_ELS_CT_CHAINED_SBALS 0x00000020 200 + #define FSF_FEATURE_UPDATE_ALERT 0x00000100 201 + 202 + /* host connection features */ 203 + #define FSF_FEATURE_NPIV_MODE 0x00000001 204 + #define FSF_FEATURE_VM_ASSIGNED_WWPN 0x00000002 217 205 218 206 /* option */ 219 207 #define FSF_OPEN_LUN_SUPPRESS_BOXING 0x00000001 ··· 325 305 u32 res1[3]; 326 306 } __attribute__ ((packed)); 327 307 328 - struct fsf_qual_locallink_error { 329 - u32 code; 330 - u32 res1[3]; 308 + struct fsf_link_down_info { 309 + u32 error_code; 310 + u32 res1; 311 + u8 res2[2]; 312 + u8 primary_status; 313 + u8 ioerr_code; 314 + u8 action_code; 315 + u8 reason_code; 316 + u8 explanation_code; 317 + u8 vendor_specific_code; 331 318 } __attribute__ ((packed)); 332 319 333 320 union fsf_prot_status_qual { 321 + u64 doubleword[FSF_PROT_STATUS_QUAL_SIZE / sizeof(u64)]; 334 322 struct fsf_qual_version_error 
version_error; 335 323 struct fsf_qual_sequence_error sequence_error; 336 - struct fsf_qual_locallink_error locallink_error; 324 + struct fsf_link_down_info link_down_info; 337 325 } __attribute__ ((packed)); 338 326 339 327 struct fsf_qtcb_prefix { ··· 359 331 u8 byte[FSF_STATUS_QUALIFIER_SIZE]; 360 332 u16 halfword[FSF_STATUS_QUALIFIER_SIZE / sizeof (u16)]; 361 333 u32 word[FSF_STATUS_QUALIFIER_SIZE / sizeof (u32)]; 334 + u64 doubleword[FSF_STATUS_QUALIFIER_SIZE / sizeof(u64)]; 362 335 struct fsf_queue_designator fsf_queue_designator; 336 + struct fsf_link_down_info link_down_info; 363 337 } __attribute__ ((packed)); 364 338 365 339 struct fsf_qtcb_header { ··· 436 406 u32 low_qtcb_version; 437 407 u32 max_qtcb_size; 438 408 u32 max_data_transfer_size; 439 - u32 supported_features; 440 - u8 res1[4]; 409 + u32 adapter_features; 410 + u32 connection_features; 441 411 u32 fc_topology; 442 412 u32 fc_link_speed; 443 413 u32 adapter_type; ··· 455 425 } __attribute__ ((packed)); 456 426 457 427 struct fsf_qtcb_bottom_port { 458 - u8 res1[8]; 428 + u64 wwpn; 459 429 u32 fc_port_id; 460 430 u32 port_type; 461 431 u32 port_state;
+11 -19
drivers/s390/scsi/zfcp_qdio.c
··· 54 54 static qdio_handler_t zfcp_qdio_request_handler; 55 55 static qdio_handler_t zfcp_qdio_response_handler; 56 56 static int zfcp_qdio_handler_error_check(struct zfcp_adapter *, 57 - unsigned int, 58 - unsigned int, unsigned int); 57 + unsigned int, unsigned int, unsigned int, int, int); 59 58 60 59 #define ZFCP_LOG_AREA ZFCP_LOG_AREA_QDIO 61 60 ··· 213 214 * 214 215 */ 215 216 static inline int 216 - zfcp_qdio_handler_error_check(struct zfcp_adapter *adapter, 217 - unsigned int status, 218 - unsigned int qdio_error, unsigned int siga_error) 217 + zfcp_qdio_handler_error_check(struct zfcp_adapter *adapter, unsigned int status, 218 + unsigned int qdio_error, unsigned int siga_error, 219 + int first_element, int elements_processed) 219 220 { 220 221 int retval = 0; 221 222 222 - if (ZFCP_LOG_CHECK(ZFCP_LOG_LEVEL_TRACE)) { 223 - if (status & QDIO_STATUS_INBOUND_INT) { 224 - ZFCP_LOG_TRACE("status is" 225 - " QDIO_STATUS_INBOUND_INT \n"); 226 - } 227 - if (status & QDIO_STATUS_OUTBOUND_INT) { 228 - ZFCP_LOG_TRACE("status is" 229 - " QDIO_STATUS_OUTBOUND_INT \n"); 230 - } 231 - } 232 223 if (unlikely(status & QDIO_STATUS_LOOK_FOR_ERROR)) { 233 224 retval = -EIO; 234 225 ··· 226 237 "qdio_error=0x%x, siga_error=0x%x)\n", 227 238 status, qdio_error, siga_error); 228 239 229 - /* Restarting IO on the failed adapter from scratch */ 230 - debug_text_event(adapter->erp_dbf, 1, "qdio_err"); 240 + zfcp_hba_dbf_event_qdio(adapter, status, qdio_error, siga_error, 241 + first_element, elements_processed); 231 242 /* 243 + * Restarting IO on the failed adapter from scratch. 232 244 * Since we have been using this adapter, it is save to assume 233 245 * that it is not failed but recoverable. The card seems to 234 246 * report link-up events by self-initiated queue shutdown. 
··· 272 282 first_element, elements_processed); 273 283 274 284 if (unlikely(zfcp_qdio_handler_error_check(adapter, status, qdio_error, 275 - siga_error))) 285 + siga_error, first_element, 286 + elements_processed))) 276 287 goto out; 277 288 /* 278 289 * we stored address of struct zfcp_adapter data structure ··· 325 334 queue = &adapter->response_queue; 326 335 327 336 if (unlikely(zfcp_qdio_handler_error_check(adapter, status, qdio_error, 328 - siga_error))) 337 + siga_error, first_element, 338 + elements_processed))) 329 339 goto out; 330 340 331 341 /*
+82 -213
drivers/s390/scsi/zfcp_scsi.c
··· 44 44 static int zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *); 45 45 static int zfcp_scsi_eh_bus_reset_handler(struct scsi_cmnd *); 46 46 static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *); 47 - static int zfcp_task_management_function(struct zfcp_unit *, u8); 47 + static int zfcp_task_management_function(struct zfcp_unit *, u8, 48 + struct scsi_cmnd *); 48 49 49 50 static struct zfcp_unit *zfcp_unit_lookup(struct zfcp_adapter *, int, scsi_id_t, 50 51 scsi_lun_t); ··· 243 242 zfcp_scsi_command_fail(struct scsi_cmnd *scpnt, int result) 244 243 { 245 244 set_host_byte(&scpnt->result, result); 246 - zfcp_cmd_dbf_event_scsi("failing", scpnt); 245 + if ((scpnt->device != NULL) && (scpnt->device->host != NULL)) 246 + zfcp_scsi_dbf_event_result("fail", 4, 247 + (struct zfcp_adapter*) scpnt->device->host->hostdata[0], 248 + scpnt); 247 249 /* return directly */ 248 250 scpnt->scsi_done(scpnt); 249 251 } ··· 418 414 return (struct zfcp_port *) NULL; 419 415 } 420 416 421 - /* 422 - * function: zfcp_scsi_eh_abort_handler 417 + /** 418 + * zfcp_scsi_eh_abort_handler - abort the specified SCSI command 419 + * @scpnt: pointer to scsi_cmnd to be aborted 420 + * Return: SUCCESS - command has been aborted and cleaned up in internal 421 + * bookkeeping, SCSI stack won't be called for aborted command 422 + * FAILED - otherwise 423 423 * 424 - * purpose: tries to abort the specified (timed out) SCSI command 425 - * 426 - * note: We do not need to care for a SCSI command which completes 427 - * normally but late during this abort routine runs. 428 - * We are allowed to return late commands to the SCSI stack. 429 - * It tracks the state of commands and will handle late commands. 430 - * (Usually, the normal completion of late commands is ignored with 431 - * respect to the running abort operation. Grep for 'done_late' 432 - * in the SCSI stacks sources.) 
433 - * 434 - * returns: SUCCESS - command has been aborted and cleaned up in internal 435 - * bookkeeping, 436 - * SCSI stack won't be called for aborted command 437 - * FAILED - otherwise 424 + * We do not need to care for a SCSI command which completes normally 425 + * but late during this abort routine runs. We are allowed to return 426 + * late commands to the SCSI stack. It tracks the state of commands and 427 + * will handle late commands. (Usually, the normal completion of late 428 + * commands is ignored with respect to the running abort operation.) 438 429 */ 439 430 int 440 - __zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt) 431 + zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt) 441 432 { 433 + struct Scsi_Host *scsi_host; 434 + struct zfcp_adapter *adapter; 435 + struct zfcp_unit *unit; 442 436 int retval = SUCCESS; 443 - struct zfcp_fsf_req *new_fsf_req, *old_fsf_req; 444 - struct zfcp_adapter *adapter = (struct zfcp_adapter *) scpnt->device->host->hostdata[0]; 445 - struct zfcp_unit *unit = (struct zfcp_unit *) scpnt->device->hostdata; 446 - struct zfcp_port *port = unit->port; 447 - struct Scsi_Host *scsi_host = scpnt->device->host; 448 - union zfcp_req_data *req_data = NULL; 437 + struct zfcp_fsf_req *new_fsf_req = NULL; 438 + struct zfcp_fsf_req *old_fsf_req; 449 439 unsigned long flags; 450 - u32 status = 0; 451 440 452 - /* the components of a abort_dbf record (fixed size record) */ 453 - u64 dbf_scsi_cmnd = (unsigned long) scpnt; 454 - char dbf_opcode[ZFCP_ABORT_DBF_LENGTH]; 455 - wwn_t dbf_wwn = port->wwpn; 456 - fcp_lun_t dbf_fcp_lun = unit->fcp_lun; 457 - u64 dbf_retries = scpnt->retries; 458 - u64 dbf_allowed = scpnt->allowed; 459 - u64 dbf_timeout = 0; 460 - u64 dbf_fsf_req = 0; 461 - u64 dbf_fsf_status = 0; 462 - u64 dbf_fsf_qual[2] = { 0, 0 }; 463 - char dbf_result[ZFCP_ABORT_DBF_LENGTH] = "##undef"; 464 - 465 - memset(dbf_opcode, 0, ZFCP_ABORT_DBF_LENGTH); 466 - memcpy(dbf_opcode, 467 - scpnt->cmnd, 468 - min(scpnt->cmd_len, 
(unsigned char) ZFCP_ABORT_DBF_LENGTH)); 441 + scsi_host = scpnt->device->host; 442 + adapter = (struct zfcp_adapter *) scsi_host->hostdata[0]; 443 + unit = (struct zfcp_unit *) scpnt->device->hostdata; 469 444 470 445 ZFCP_LOG_INFO("aborting scsi_cmnd=%p on adapter %s\n", 471 446 scpnt, zfcp_get_busid_by_adapter(adapter)); 472 447 473 - spin_unlock_irq(scsi_host->host_lock); 474 - 475 - /* 476 - * Race condition between normal (late) completion and abort has 477 - * to be avoided. 478 - * The entirity of all accesses to scsi_req have to be atomic. 479 - * scsi_req is usually part of the fsf_req and thus we block the 480 - * release of fsf_req as long as we need to access scsi_req. 481 - */ 448 + /* avoid race condition between late normal completion and abort */ 482 449 write_lock_irqsave(&adapter->abort_lock, flags); 483 450 484 451 /* ··· 459 484 * this routine returns. (scpnt is parameter passed to this routine 460 485 * and must not disappear during abort even on late completion.) 461 486 */ 462 - req_data = (union zfcp_req_data *) scpnt->host_scribble; 463 - /* DEBUG */ 464 - ZFCP_LOG_DEBUG("req_data=%p\n", req_data); 465 - if (!req_data) { 466 - ZFCP_LOG_DEBUG("late command completion overtook abort\n"); 467 - /* 468 - * That's it. 469 - * Do not initiate abort but return SUCCESS. 470 - */ 471 - write_unlock_irqrestore(&adapter->abort_lock, flags); 472 - retval = SUCCESS; 473 - strncpy(dbf_result, "##late1", ZFCP_ABORT_DBF_LENGTH); 474 - goto out; 475 - } 476 - 477 - /* Figure out which fsf_req needs to be aborted. 
*/ 478 - old_fsf_req = req_data->send_fcp_command_task.fsf_req; 479 - 480 - dbf_fsf_req = (unsigned long) old_fsf_req; 481 - dbf_timeout = 482 - (jiffies - req_data->send_fcp_command_task.start_jiffies) / HZ; 483 - 484 - ZFCP_LOG_DEBUG("old_fsf_req=%p\n", old_fsf_req); 487 + old_fsf_req = (struct zfcp_fsf_req *) scpnt->host_scribble; 485 488 if (!old_fsf_req) { 486 489 write_unlock_irqrestore(&adapter->abort_lock, flags); 487 - ZFCP_LOG_NORMAL("bug: no old fsf request found\n"); 488 - ZFCP_LOG_NORMAL("req_data:\n"); 489 - ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, 490 - (char *) req_data, sizeof (union zfcp_req_data)); 491 - ZFCP_LOG_NORMAL("scsi_cmnd:\n"); 492 - ZFCP_HEX_DUMP(ZFCP_LOG_LEVEL_NORMAL, 493 - (char *) scpnt, sizeof (struct scsi_cmnd)); 494 - retval = FAILED; 495 - strncpy(dbf_result, "##bug:r", ZFCP_ABORT_DBF_LENGTH); 490 + zfcp_scsi_dbf_event_abort("lte1", adapter, scpnt, new_fsf_req); 491 + retval = SUCCESS; 496 492 goto out; 497 493 } 498 - old_fsf_req->data.send_fcp_command_task.scsi_cmnd = NULL; 499 - /* mark old request as being aborted */ 494 + old_fsf_req->data = 0; 500 495 old_fsf_req->status |= ZFCP_STATUS_FSFREQ_ABORTING; 501 - /* 502 - * We have to collect all information (e.g. unit) needed by 503 - * zfcp_fsf_abort_fcp_command before calling that routine 504 - * since that routine is not allowed to access 505 - * fsf_req which it is going to abort. 506 - * This is because of we need to release fsf_req_list_lock 507 - * before calling zfcp_fsf_abort_fcp_command. 508 - * Since this lock will not be held, fsf_req may complete 509 - * late and may be released meanwhile. 510 - */ 511 - ZFCP_LOG_DEBUG("unit 0x%016Lx (%p)\n", unit->fcp_lun, unit); 512 496 513 - /* 514 - * We block (call schedule) 515 - * That's why we must release the lock and enable the 516 - * interrupts before. 517 - * On the other hand we do not need the lock anymore since 518 - * all critical accesses to scsi_req are done. 
519 - */ 497 + /* don't access old_fsf_req after releasing the abort_lock */ 520 498 write_unlock_irqrestore(&adapter->abort_lock, flags); 521 499 /* call FSF routine which does the abort */ 522 500 new_fsf_req = zfcp_fsf_abort_fcp_command((unsigned long) old_fsf_req, 523 501 adapter, unit, 0); 524 - ZFCP_LOG_DEBUG("new_fsf_req=%p\n", new_fsf_req); 525 502 if (!new_fsf_req) { 503 + ZFCP_LOG_INFO("error: initiation of Abort FCP Cmnd failed\n"); 526 504 retval = FAILED; 527 - ZFCP_LOG_NORMAL("error: initiation of Abort FCP Cmnd " 528 - "failed\n"); 529 - strncpy(dbf_result, "##nores", ZFCP_ABORT_DBF_LENGTH); 530 505 goto out; 531 506 } 532 507 533 508 /* wait for completion of abort */ 534 - ZFCP_LOG_DEBUG("waiting for cleanup...\n"); 535 - #if 1 536 - /* 537 - * FIXME: 538 - * copying zfcp_fsf_req_wait_and_cleanup code is not really nice 539 - */ 540 509 __wait_event(new_fsf_req->completion_wq, 541 510 new_fsf_req->status & ZFCP_STATUS_FSFREQ_COMPLETED); 542 - status = new_fsf_req->status; 543 - dbf_fsf_status = new_fsf_req->qtcb->header.fsf_status; 544 - /* 545 - * Ralphs special debug load provides timestamps in the FSF 546 - * status qualifier. This might be specified later if being 547 - * useful for debugging aborts. 
548 - */ 549 - dbf_fsf_qual[0] = 550 - *(u64 *) & new_fsf_req->qtcb->header.fsf_status_qual.word[0]; 551 - dbf_fsf_qual[1] = 552 - *(u64 *) & new_fsf_req->qtcb->header.fsf_status_qual.word[2]; 553 - zfcp_fsf_req_free(new_fsf_req); 554 - #else 555 - retval = zfcp_fsf_req_wait_and_cleanup(new_fsf_req, 556 - ZFCP_UNINTERRUPTIBLE, &status); 557 - #endif 558 - ZFCP_LOG_DEBUG("Waiting for cleanup complete, status=0x%x\n", status); 511 + 559 512 /* status should be valid since signals were not permitted */ 560 - if (status & ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED) { 513 + if (new_fsf_req->status & ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED) { 514 + zfcp_scsi_dbf_event_abort("okay", adapter, scpnt, new_fsf_req); 561 515 retval = SUCCESS; 562 - strncpy(dbf_result, "##succ", ZFCP_ABORT_DBF_LENGTH); 563 - } else if (status & ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED) { 516 + } else if (new_fsf_req->status & ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED) { 517 + zfcp_scsi_dbf_event_abort("lte2", adapter, scpnt, new_fsf_req); 564 518 retval = SUCCESS; 565 - strncpy(dbf_result, "##late2", ZFCP_ABORT_DBF_LENGTH); 566 519 } else { 520 + zfcp_scsi_dbf_event_abort("fail", adapter, scpnt, new_fsf_req); 567 521 retval = FAILED; 568 - strncpy(dbf_result, "##fail", ZFCP_ABORT_DBF_LENGTH); 569 522 } 570 - 523 + zfcp_fsf_req_free(new_fsf_req); 571 524 out: 572 - debug_event(adapter->abort_dbf, 1, &dbf_scsi_cmnd, sizeof (u64)); 573 - debug_event(adapter->abort_dbf, 1, &dbf_opcode, ZFCP_ABORT_DBF_LENGTH); 574 - debug_event(adapter->abort_dbf, 1, &dbf_wwn, sizeof (wwn_t)); 575 - debug_event(adapter->abort_dbf, 1, &dbf_fcp_lun, sizeof (fcp_lun_t)); 576 - debug_event(adapter->abort_dbf, 1, &dbf_retries, sizeof (u64)); 577 - debug_event(adapter->abort_dbf, 1, &dbf_allowed, sizeof (u64)); 578 - debug_event(adapter->abort_dbf, 1, &dbf_timeout, sizeof (u64)); 579 - debug_event(adapter->abort_dbf, 1, &dbf_fsf_req, sizeof (u64)); 580 - debug_event(adapter->abort_dbf, 1, &dbf_fsf_status, sizeof (u64)); 581 - 
debug_event(adapter->abort_dbf, 1, &dbf_fsf_qual[0], sizeof (u64)); 582 - debug_event(adapter->abort_dbf, 1, &dbf_fsf_qual[1], sizeof (u64)); 583 - debug_text_event(adapter->abort_dbf, 1, dbf_result); 584 - 585 - spin_lock_irq(scsi_host->host_lock); 586 525 return retval; 587 - } 588 - 589 - int 590 - zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt) 591 - { 592 - int rc; 593 - struct Scsi_Host *scsi_host = scpnt->device->host; 594 - spin_lock_irq(scsi_host->host_lock); 595 - rc = __zfcp_scsi_eh_abort_handler(scpnt); 596 - spin_unlock_irq(scsi_host->host_lock); 597 - return rc; 598 526 } 599 527 600 528 /* ··· 529 651 */ 530 652 if (!atomic_test_mask(ZFCP_STATUS_UNIT_NOTSUPPUNITRESET, 531 653 &unit->status)) { 532 - retval = 533 - zfcp_task_management_function(unit, FCP_LOGICAL_UNIT_RESET); 654 + retval = zfcp_task_management_function(unit, 655 + FCP_LOGICAL_UNIT_RESET, 656 + scpnt); 534 657 if (retval) { 535 658 ZFCP_LOG_DEBUG("unit reset failed (unit=%p)\n", unit); 536 659 if (retval == -ENOTSUPP) ··· 547 668 goto out; 548 669 } 549 670 } 550 - retval = zfcp_task_management_function(unit, FCP_TARGET_RESET); 671 + retval = zfcp_task_management_function(unit, FCP_TARGET_RESET, scpnt); 551 672 if (retval) { 552 673 ZFCP_LOG_DEBUG("target reset failed (unit=%p)\n", unit); 553 674 retval = FAILED; ··· 560 681 } 561 682 562 683 static int 563 - zfcp_task_management_function(struct zfcp_unit *unit, u8 tm_flags) 684 + zfcp_task_management_function(struct zfcp_unit *unit, u8 tm_flags, 685 + struct scsi_cmnd *scpnt) 564 686 { 565 687 struct zfcp_adapter *adapter = unit->port->adapter; 566 - int retval; 567 - int status; 568 688 struct zfcp_fsf_req *fsf_req; 689 + int retval = 0; 569 690 570 691 /* issue task management function */ 571 692 fsf_req = zfcp_fsf_send_fcp_command_task_management ··· 575 696 "failed for unit 0x%016Lx on port 0x%016Lx on " 576 697 "adapter %s\n", unit->fcp_lun, unit->port->wwpn, 577 698 zfcp_get_busid_by_adapter(adapter)); 699 + 
zfcp_scsi_dbf_event_devreset("nres", tm_flags, unit, scpnt); 578 700 retval = -ENOMEM; 579 701 goto out; 580 702 } 581 703 582 - retval = zfcp_fsf_req_wait_and_cleanup(fsf_req, 583 - ZFCP_UNINTERRUPTIBLE, &status); 704 + __wait_event(fsf_req->completion_wq, 705 + fsf_req->status & ZFCP_STATUS_FSFREQ_COMPLETED); 706 + 584 707 /* 585 708 * check completion status of task management function 586 - * (status should always be valid since no signals permitted) 587 709 */ 588 - if (status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) 710 + if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) { 711 + zfcp_scsi_dbf_event_devreset("fail", tm_flags, unit, scpnt); 589 712 retval = -EIO; 590 - else if (status & ZFCP_STATUS_FSFREQ_TMFUNCNOTSUPP) 713 + } else if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCNOTSUPP) { 714 + zfcp_scsi_dbf_event_devreset("nsup", tm_flags, unit, scpnt); 591 715 retval = -ENOTSUPP; 592 - else 593 - retval = 0; 716 + } else 717 + zfcp_scsi_dbf_event_devreset("okay", tm_flags, unit, scpnt); 718 + 719 + zfcp_fsf_req_free(fsf_req); 594 720 out: 595 721 return retval; 596 722 } 597 723 598 - /* 599 - * function: zfcp_scsi_eh_bus_reset_handler 600 - * 601 - * purpose: 602 - * 603 - * returns: 724 + /** 725 + * zfcp_scsi_eh_bus_reset_handler - reset bus (reopen adapter) 604 726 */ 605 727 int 606 728 zfcp_scsi_eh_bus_reset_handler(struct scsi_cmnd *scpnt) 607 729 { 608 - int retval = 0; 609 - struct zfcp_unit *unit; 730 + struct zfcp_unit *unit = (struct zfcp_unit*) scpnt->device->hostdata; 731 + struct zfcp_adapter *adapter = unit->port->adapter; 610 732 611 - unit = (struct zfcp_unit *) scpnt->device->hostdata; 612 733 ZFCP_LOG_NORMAL("bus reset because of problems with " 613 734 "unit 0x%016Lx\n", unit->fcp_lun); 614 - zfcp_erp_adapter_reopen(unit->port->adapter, 0); 615 - zfcp_erp_wait(unit->port->adapter); 616 - retval = SUCCESS; 735 + zfcp_erp_adapter_reopen(adapter, 0); 736 + zfcp_erp_wait(adapter); 617 737 618 - return retval; 738 + return SUCCESS; 619 739 } 
620 740 621 - /* 622 - * function: zfcp_scsi_eh_host_reset_handler 623 - * 624 - * purpose: 625 - * 626 - * returns: 741 + /** 742 + * zfcp_scsi_eh_host_reset_handler - reset host (reopen adapter) 627 743 */ 628 744 int 629 745 zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt) 630 746 { 631 - int retval = 0; 632 - struct zfcp_unit *unit; 747 + struct zfcp_unit *unit = (struct zfcp_unit*) scpnt->device->hostdata; 748 + struct zfcp_adapter *adapter = unit->port->adapter; 633 749 634 - unit = (struct zfcp_unit *) scpnt->device->hostdata; 635 750 ZFCP_LOG_NORMAL("host reset because of problems with " 636 751 "unit 0x%016Lx\n", unit->fcp_lun); 637 - zfcp_erp_adapter_reopen(unit->port->adapter, 0); 638 - zfcp_erp_wait(unit->port->adapter); 639 - retval = SUCCESS; 752 + zfcp_erp_adapter_reopen(adapter, 0); 753 + zfcp_erp_wait(adapter); 640 754 641 - return retval; 755 + return SUCCESS; 642 756 } 643 757 644 758 /* ··· 698 826 zfcp_adapter_scsi_unregister(struct zfcp_adapter *adapter) 699 827 { 700 828 struct Scsi_Host *shost; 829 + struct zfcp_port *port; 701 830 702 831 shost = adapter->scsi_host; 703 832 if (!shost) 704 833 return; 834 + read_lock_irq(&zfcp_data.config_lock); 835 + list_for_each_entry(port, &adapter->port_list_head, list) 836 + if (port->rport) 837 + port->rport = NULL; 838 + read_unlock_irq(&zfcp_data.config_lock); 705 839 fc_remove_host(shost); 706 840 scsi_remove_host(shost); 707 841 scsi_host_put(shost); ··· 782 904 read_unlock_irqrestore(&zfcp_data.config_lock, flags); 783 905 } 784 906 785 - void 786 - zfcp_set_fc_host_attrs(struct zfcp_adapter *adapter) 787 - { 788 - struct Scsi_Host *shost = adapter->scsi_host; 789 - 790 - fc_host_node_name(shost) = adapter->wwnn; 791 - fc_host_port_name(shost) = adapter->wwpn; 792 - strncpy(fc_host_serial_number(shost), adapter->serial_number, 793 - min(FC_SERIAL_NUMBER_SIZE, 32)); 794 - fc_host_supported_classes(shost) = FC_COS_CLASS2 | FC_COS_CLASS3; 795 - } 796 - 797 907 struct fc_function_template 
zfcp_transport_functions = { 798 908 .get_starget_port_id = zfcp_get_port_id, 799 909 .get_starget_port_name = zfcp_get_port_name, ··· 793 927 .show_host_node_name = 1, 794 928 .show_host_port_name = 1, 795 929 .show_host_supported_classes = 1, 930 + .show_host_maxframe_size = 1, 796 931 .show_host_serial_number = 1, 932 + .show_host_speed = 1, 933 + .show_host_port_id = 1, 797 934 }; 798 935 799 936 /**
+4 -10
drivers/s390/scsi/zfcp_sysfs_adapter.c
··· 62 62 static DEVICE_ATTR(_name, S_IRUGO, zfcp_sysfs_adapter_##_name##_show, NULL); 63 63 64 64 ZFCP_DEFINE_ADAPTER_ATTR(status, "0x%08x\n", atomic_read(&adapter->status)); 65 - ZFCP_DEFINE_ADAPTER_ATTR(wwnn, "0x%016llx\n", adapter->wwnn); 66 - ZFCP_DEFINE_ADAPTER_ATTR(wwpn, "0x%016llx\n", adapter->wwpn); 67 - ZFCP_DEFINE_ADAPTER_ATTR(s_id, "0x%06x\n", adapter->s_id); 68 65 ZFCP_DEFINE_ADAPTER_ATTR(peer_wwnn, "0x%016llx\n", adapter->peer_wwnn); 69 66 ZFCP_DEFINE_ADAPTER_ATTR(peer_wwpn, "0x%016llx\n", adapter->peer_wwpn); 70 67 ZFCP_DEFINE_ADAPTER_ATTR(peer_d_id, "0x%06x\n", adapter->peer_d_id); 68 + ZFCP_DEFINE_ADAPTER_ATTR(physical_wwpn, "0x%016llx\n", adapter->physical_wwpn); 69 + ZFCP_DEFINE_ADAPTER_ATTR(physical_s_id, "0x%06x\n", adapter->physical_s_id); 71 70 ZFCP_DEFINE_ADAPTER_ATTR(card_version, "0x%04x\n", adapter->hydra_version); 72 71 ZFCP_DEFINE_ADAPTER_ATTR(lic_version, "0x%08x\n", adapter->fsf_lic_version); 73 - ZFCP_DEFINE_ADAPTER_ATTR(fc_link_speed, "%d Gb/s\n", adapter->fc_link_speed); 74 72 ZFCP_DEFINE_ADAPTER_ATTR(fc_service_class, "%d\n", adapter->fc_service_class); 75 73 ZFCP_DEFINE_ADAPTER_ATTR(fc_topology, "%s\n", 76 74 fc_topologies[adapter->fc_topology]); 77 75 ZFCP_DEFINE_ADAPTER_ATTR(hardware_version, "0x%08x\n", 78 76 adapter->hardware_version); 79 - ZFCP_DEFINE_ADAPTER_ATTR(serial_number, "%17s\n", adapter->serial_number); 80 77 ZFCP_DEFINE_ADAPTER_ATTR(scsi_host_no, "0x%x\n", adapter->scsi_host_no); 81 78 ZFCP_DEFINE_ADAPTER_ATTR(in_recovery, "%d\n", atomic_test_mask 82 79 (ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status)); ··· 252 255 &dev_attr_in_recovery.attr, 253 256 &dev_attr_port_remove.attr, 254 257 &dev_attr_port_add.attr, 255 - &dev_attr_wwnn.attr, 256 - &dev_attr_wwpn.attr, 257 - &dev_attr_s_id.attr, 258 258 &dev_attr_peer_wwnn.attr, 259 259 &dev_attr_peer_wwpn.attr, 260 260 &dev_attr_peer_d_id.attr, 261 + &dev_attr_physical_wwpn.attr, 262 + &dev_attr_physical_s_id.attr, 261 263 &dev_attr_card_version.attr, 262 264 
&dev_attr_lic_version.attr, 263 - &dev_attr_fc_link_speed.attr, 264 265 &dev_attr_fc_service_class.attr, 265 266 &dev_attr_fc_topology.attr, 266 267 &dev_attr_scsi_host_no.attr, 267 268 &dev_attr_status.attr, 268 269 &dev_attr_hardware_version.attr, 269 - &dev_attr_serial_number.attr, 270 270 NULL 271 271 }; 272 272
-9
drivers/scsi/aic7xxx/aic7xxx_osm.c
··· 1109 1109 return (0); 1110 1110 } 1111 1111 1112 - uint64_t 1113 - ahc_linux_get_memsize(void) 1114 - { 1115 - struct sysinfo si; 1116 - 1117 - si_meminfo(&si); 1118 - return ((uint64_t)si.totalram << PAGE_SHIFT); 1119 - } 1120 - 1121 1112 /* 1122 1113 * Place the SCSI bus into a known state by either resetting it, 1123 1114 * or forcing transfer negotiations on the next command to any
-2
drivers/scsi/aic7xxx/aic7xxx_osm.h
··· 494 494 int ahc_linux_register_host(struct ahc_softc *, 495 495 struct scsi_host_template *); 496 496 497 - uint64_t ahc_linux_get_memsize(void); 498 - 499 497 /*************************** Pretty Printing **********************************/ 500 498 struct info_str { 501 499 char *buffer;
+5 -3
drivers/scsi/aic7xxx/aic7xxx_osm_pci.c
··· 180 180 struct ahc_pci_identity *entry; 181 181 char *name; 182 182 int error; 183 + struct device *dev = &pdev->dev; 183 184 184 185 pci = pdev; 185 186 entry = ahc_find_pci_device(pci); ··· 210 209 pci_set_master(pdev); 211 210 212 211 if (sizeof(dma_addr_t) > 4 213 - && ahc_linux_get_memsize() > 0x80000000 214 - && pci_set_dma_mask(pdev, mask_39bit) == 0) { 212 + && ahc->features & AHC_LARGE_SCBS 213 + && dma_set_mask(dev, mask_39bit) == 0 214 + && dma_get_required_mask(dev) > DMA_32BIT_MASK) { 215 215 ahc->flags |= AHC_39BIT_ADDRESSING; 216 216 } else { 217 - if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)) { 217 + if (dma_set_mask(dev, DMA_32BIT_MASK)) { 218 218 printk(KERN_WARNING "aic7xxx: No suitable DMA available.\n"); 219 219 return (-ENODEV); 220 220 }
-1
drivers/scsi/ata_piix.c
··· 442 442 * piix_set_piomode - Initialize host controller PATA PIO timings 443 443 * @ap: Port whose timings we are configuring 444 444 * @adev: um 445 - * @pio: PIO mode, 0 - 4 446 445 * 447 446 * Set PIO mode for device, in host controller PCI config space. 448 447 *
+4 -2
drivers/scsi/atp870u.c
··· 996 996 #ifdef ED_DBGP 997 997 printk("send_s870: prdaddr_2 0x%8x tmpcip %x target_id %d\n", dev->id[c][target_id].prdaddr,tmpcip,target_id); 998 998 #endif 999 + dev->id[c][target_id].prdaddr = dev->id[c][target_id].prd_bus; 999 1000 outl(dev->id[c][target_id].prdaddr, tmpcip); 1000 1001 tmpcip = tmpcip - 2; 1001 1002 outb(0x06, tmpcip); ··· 2573 2572 for (k = 0; k < 16; k++) { 2574 2573 if (!atp_dev->id[j][k].prd_table) 2575 2574 continue; 2576 - pci_free_consistent(atp_dev->pdev, 1024, atp_dev->id[j][k].prd_table, atp_dev->id[j][k].prdaddr); 2575 + pci_free_consistent(atp_dev->pdev, 1024, atp_dev->id[j][k].prd_table, atp_dev->id[j][k].prd_bus); 2577 2576 atp_dev->id[j][k].prd_table = NULL; 2578 2577 } 2579 2578 } ··· 2585 2584 int c,k; 2586 2585 for(c=0;c < 2;c++) { 2587 2586 for(k=0;k<16;k++) { 2588 - atp_dev->id[c][k].prd_table = pci_alloc_consistent(atp_dev->pdev, 1024, &(atp_dev->id[c][k].prdaddr)); 2587 + atp_dev->id[c][k].prd_table = pci_alloc_consistent(atp_dev->pdev, 1024, &(atp_dev->id[c][k].prd_bus)); 2589 2588 if (!atp_dev->id[c][k].prd_table) { 2590 2589 printk("atp870u_init_tables fail\n"); 2591 2590 atp870u_free_tables(host); 2592 2591 return -ENOMEM; 2593 2592 } 2593 + atp_dev->id[c][k].prdaddr = atp_dev->id[c][k].prd_bus; 2594 2594 atp_dev->id[c][k].devsp=0x20; 2595 2595 atp_dev->id[c][k].devtype = 0x7f; 2596 2596 atp_dev->id[c][k].curr_req = NULL;
+3 -2
drivers/scsi/atp870u.h
··· 54 54 unsigned long tran_len; 55 55 unsigned long last_len; 56 56 unsigned char *prd_pos; 57 - unsigned char *prd_table; 58 - dma_addr_t prdaddr; 57 + unsigned char *prd_table; /* Kernel address of PRD table */ 58 + dma_addr_t prd_bus; /* Bus address of PRD */ 59 + dma_addr_t prdaddr; /* Dynamically updated in driver */ 59 60 struct scsi_cmnd *curr_req; 60 61 } id[2][16]; 61 62 struct Scsi_Host *host;
+2
drivers/scsi/fd_mcs.c
··· 1360 1360 .use_clustering = DISABLE_CLUSTERING, 1361 1361 }; 1362 1362 #include "scsi_module.c" 1363 + 1364 + MODULE_LICENSE("GPL");
+33 -2
drivers/scsi/hosts.c
··· 98 98 switch (oldstate) { 99 99 case SHOST_CREATED: 100 100 case SHOST_RUNNING: 101 + case SHOST_CANCEL_RECOVERY: 101 102 break; 102 103 default: 103 104 goto illegal; ··· 108 107 case SHOST_DEL: 109 108 switch (oldstate) { 110 109 case SHOST_CANCEL: 110 + case SHOST_DEL_RECOVERY: 111 111 break; 112 112 default: 113 113 goto illegal; 114 114 } 115 115 break; 116 116 117 + case SHOST_CANCEL_RECOVERY: 118 + switch (oldstate) { 119 + case SHOST_CANCEL: 120 + case SHOST_RECOVERY: 121 + break; 122 + default: 123 + goto illegal; 124 + } 125 + break; 126 + 127 + case SHOST_DEL_RECOVERY: 128 + switch (oldstate) { 129 + case SHOST_CANCEL_RECOVERY: 130 + break; 131 + default: 132 + goto illegal; 133 + } 134 + break; 117 135 } 118 136 shost->shost_state = state; 119 137 return 0; ··· 154 134 **/ 155 135 void scsi_remove_host(struct Scsi_Host *shost) 156 136 { 137 + unsigned long flags; 157 138 down(&shost->scan_mutex); 158 - scsi_host_set_state(shost, SHOST_CANCEL); 139 + spin_lock_irqsave(shost->host_lock, flags); 140 + if (scsi_host_set_state(shost, SHOST_CANCEL)) 141 + if (scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY)) { 142 + spin_unlock_irqrestore(shost->host_lock, flags); 143 + up(&shost->scan_mutex); 144 + return; 145 + } 146 + spin_unlock_irqrestore(shost->host_lock, flags); 159 147 up(&shost->scan_mutex); 160 148 scsi_forget_host(shost); 161 149 scsi_proc_host_rm(shost); 162 150 163 - scsi_host_set_state(shost, SHOST_DEL); 151 + spin_lock_irqsave(shost->host_lock, flags); 152 + if (scsi_host_set_state(shost, SHOST_DEL)) 153 + BUG_ON(scsi_host_set_state(shost, SHOST_DEL_RECOVERY)); 154 + spin_unlock_irqrestore(shost->host_lock, flags); 164 155 165 156 transport_unregister_device(&shost->shost_gendev); 166 157 class_device_unregister(&shost->shost_classdev);
+2
drivers/scsi/ibmmca.c
··· 460 460 MODULE_PARM(normal, "1i"); 461 461 MODULE_PARM(ansi, "1i"); 462 462 #endif 463 + 464 + MODULE_LICENSE("GPL"); 463 465 #endif 464 466 /*counter of concurrent disk read/writes, to turn on/off disk led */ 465 467 static int disk_rw_in_progress = 0;
+10
drivers/scsi/ibmvscsi/ibmvscsi.c
··· 727 727 if (hostdata->madapter_info.port_max_txu[0]) 728 728 hostdata->host->max_sectors = 729 729 hostdata->madapter_info.port_max_txu[0] >> 9; 730 + 731 + if (hostdata->madapter_info.os_type == 3 && 732 + strcmp(hostdata->madapter_info.srp_version, "1.6a") <= 0) { 733 + printk("ibmvscsi: host (Ver. %s) doesn't support large" 734 + "transfers\n", 735 + hostdata->madapter_info.srp_version); 736 + printk("ibmvscsi: limiting scatterlists to %d\n", 737 + MAX_INDIRECT_BUFS); 738 + hostdata->host->sg_tablesize = MAX_INDIRECT_BUFS; 739 + } 730 740 } 731 741 } 732 742
+49 -32
drivers/scsi/libata-core.c
··· 4132 4132 } 4133 4133 4134 4134 /** 4135 + * ata_host_set_remove - PCI layer callback for device removal 4136 + * @host_set: ATA host set that was removed 4137 + * 4138 + * Unregister all objects associated with this host set. Free those 4139 + * objects. 4140 + * 4141 + * LOCKING: 4142 + * Inherited from calling layer (may sleep). 4143 + */ 4144 + 4145 + 4146 + void ata_host_set_remove(struct ata_host_set *host_set) 4147 + { 4148 + struct ata_port *ap; 4149 + unsigned int i; 4150 + 4151 + for (i = 0; i < host_set->n_ports; i++) { 4152 + ap = host_set->ports[i]; 4153 + scsi_remove_host(ap->host); 4154 + } 4155 + 4156 + free_irq(host_set->irq, host_set); 4157 + 4158 + for (i = 0; i < host_set->n_ports; i++) { 4159 + ap = host_set->ports[i]; 4160 + 4161 + ata_scsi_release(ap->host); 4162 + 4163 + if ((ap->flags & ATA_FLAG_NO_LEGACY) == 0) { 4164 + struct ata_ioports *ioaddr = &ap->ioaddr; 4165 + 4166 + if (ioaddr->cmd_addr == 0x1f0) 4167 + release_region(0x1f0, 8); 4168 + else if (ioaddr->cmd_addr == 0x170) 4169 + release_region(0x170, 8); 4170 + } 4171 + 4172 + scsi_host_put(ap->host); 4173 + } 4174 + 4175 + if (host_set->ops->host_stop) 4176 + host_set->ops->host_stop(host_set); 4177 + 4178 + kfree(host_set); 4179 + } 4180 + 4181 + /** 4135 4182 * ata_scsi_release - SCSI layer callback hook for host unload 4136 4183 * @host: libata host to be unloaded 4137 4184 * ··· 4518 4471 { 4519 4472 struct device *dev = pci_dev_to_dev(pdev); 4520 4473 struct ata_host_set *host_set = dev_get_drvdata(dev); 4521 - struct ata_port *ap; 4522 - unsigned int i; 4523 4474 4524 - for (i = 0; i < host_set->n_ports; i++) { 4525 - ap = host_set->ports[i]; 4526 - 4527 - scsi_remove_host(ap->host); 4528 - } 4529 - 4530 - free_irq(host_set->irq, host_set); 4531 - 4532 - for (i = 0; i < host_set->n_ports; i++) { 4533 - ap = host_set->ports[i]; 4534 - 4535 - ata_scsi_release(ap->host); 4536 - 4537 - if ((ap->flags & ATA_FLAG_NO_LEGACY) == 0) { 4538 - struct ata_ioports *ioaddr = 
&ap->ioaddr; 4539 - 4540 - if (ioaddr->cmd_addr == 0x1f0) 4541 - release_region(0x1f0, 8); 4542 - else if (ioaddr->cmd_addr == 0x170) 4543 - release_region(0x170, 8); 4544 - } 4545 - 4546 - scsi_host_put(ap->host); 4547 - } 4548 - 4549 - if (host_set->ops->host_stop) 4550 - host_set->ops->host_stop(host_set); 4551 - 4552 - kfree(host_set); 4553 - 4475 + ata_host_set_remove(host_set); 4554 4476 pci_release_regions(pdev); 4555 4477 pci_disable_device(pdev); 4556 4478 dev_set_drvdata(dev, NULL); ··· 4589 4573 EXPORT_SYMBOL_GPL(ata_std_bios_param); 4590 4574 EXPORT_SYMBOL_GPL(ata_std_ports); 4591 4575 EXPORT_SYMBOL_GPL(ata_device_add); 4576 + EXPORT_SYMBOL_GPL(ata_host_set_remove); 4592 4577 EXPORT_SYMBOL_GPL(ata_sg_init); 4593 4578 EXPORT_SYMBOL_GPL(ata_sg_init_one); 4594 4579 EXPORT_SYMBOL_GPL(ata_qc_complete);
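The libata change above factors the teardown out of the PCI remove path into a shared `ata_host_set_remove()`, keeping a strict order: every port is detached from the SCSI layer before the shared IRQ is freed, and per-port resources are released only after that. A toy user-space sketch of that two-pass ordering (illustrative names, not the real libata API):

```c
#include <assert.h>

enum ev { EV_DETACH, EV_FREE_IRQ, EV_RELEASE };

struct toy_host_set {
    int n_ports;
    enum ev log[16];
    int n_log;
};

static void record(struct toy_host_set *hs, enum ev e)
{
    hs->log[hs->n_log++] = e;
}

/* Mirrors the ordering of ata_host_set_remove(): detach every port
 * from the upper layers first, then free the shared IRQ, and only
 * then release per-port resources and references. */
static void toy_host_set_remove(struct toy_host_set *hs)
{
    int i;

    for (i = 0; i < hs->n_ports; i++)
        record(hs, EV_DETACH);      /* like scsi_remove_host(ap->host) */
    record(hs, EV_FREE_IRQ);        /* like free_irq(): interrupts stop */
    for (i = 0; i < hs->n_ports; i++)
        record(hs, EV_RELEASE);     /* release regions, drop references */
}

/* Returns 1 if the teardown log keeps the required order. */
static int teardown_order_ok(int n_ports)
{
    struct toy_host_set hs = { n_ports, {0}, 0 };
    int i;

    toy_host_set_remove(&hs);
    if (hs.n_log != 2 * n_ports + 1)
        return 0;
    for (i = 0; i < n_ports; i++)
        if (hs.log[i] != EV_DETACH)
            return 0;
    if (hs.log[n_ports] != EV_FREE_IRQ)
        return 0;
    for (i = n_ports + 1; i < hs.n_log; i++)
        if (hs.log[i] != EV_RELEASE)
            return 0;
    return 1;
}
```

The point of the ordering is that no upper layer can issue new work to a port once pass one completes, so pass two can free safely.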
+21 -8
drivers/scsi/mesh.c
··· 1959 1959 /* Set it up */ 1960 1960 mesh_init(ms); 1961 1961 1962 - /* XXX FIXME: error should be fatal */ 1963 - if (request_irq(ms->meshintr, do_mesh_interrupt, 0, "MESH", ms)) 1962 + /* Request interrupt */ 1963 + if (request_irq(ms->meshintr, do_mesh_interrupt, 0, "MESH", ms)) { 1964 1964 printk(KERN_ERR "MESH: can't get irq %d\n", ms->meshintr); 1965 + goto out_shutdown; 1966 + } 1965 1967 1966 - /* XXX FIXME: handle failure */ 1967 - scsi_add_host(mesh_host, &mdev->ofdev.dev); 1968 + /* Add scsi host & scan */ 1969 + if (scsi_add_host(mesh_host, &mdev->ofdev.dev)) 1970 + goto out_release_irq; 1968 1971 scsi_scan_host(mesh_host); 1969 1972 1970 1973 return 0; 1971 1974 1972 - out_unmap: 1975 + out_release_irq: 1976 + free_irq(ms->meshintr, ms); 1977 + out_shutdown: 1978 + /* shutdown & reset bus in case of error or macos can be confused 1979 + * at reboot if the bus was set to synchronous mode already 1980 + */ 1981 + mesh_shutdown(mdev); 1982 + set_mesh_power(ms, 0); 1983 + pci_free_consistent(macio_get_pci_dev(mdev), ms->dma_cmd_size, 1984 + ms->dma_cmd_space, ms->dma_cmd_bus); 1985 + out_unmap: 1973 1986 iounmap(ms->dma); 1974 1987 iounmap(ms->mesh); 1975 - out_free: 1988 + out_free: 1976 1989 scsi_host_put(mesh_host); 1977 - out_release: 1990 + out_release: 1978 1991 macio_release_resources(mdev); 1979 1992 1980 1993 return -ENODEV; ··· 2014 2001 2015 2002 /* Free DMA commands memory */ 2016 2003 pci_free_consistent(macio_get_pci_dev(mdev), ms->dma_cmd_size, 2017 - ms->dma_cmd_space, ms->dma_cmd_bus); 2004 + ms->dma_cmd_space, ms->dma_cmd_bus); 2018 2005 2019 2006 /* Release memory resources */ 2020 2007 macio_release_resources(mdev);
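The mesh.c fix replaces the "XXX FIXME" tolerance of `request_irq()`/`scsi_add_host()` failures with proper goto-based unwinding, where each later failure label releases everything acquired before it, in reverse order. A minimal, runnable sketch of that idiom (toy resource counters, not the driver's real API):

```c
static int live;                /* count of currently-held resources */

static int acquire(int should_fail)
{
    if (should_fail)
        return -1;
    live++;
    return 0;
}

static void release(void)
{
    live--;
}

/* Returns 0 on success, -1 on failure; either way nothing leaks,
 * because each label undoes exactly the steps that succeeded. */
static int probe(int fail_step)
{
    if (acquire(fail_step == 1))
        goto out;
    if (acquire(fail_step == 2))
        goto out_first;
    if (acquire(fail_step == 3))
        goto out_second;
    return 0;

out_second:
    release();                  /* undo step 2 */
out_first:
    release();                  /* undo step 1 */
out:
    return -1;
}

/* Helper: run probe, release held resources on success, report leaks. */
static int probe_leaks(int fail_step)
{
    live = 0;
    if (probe(fail_step) == 0) {
        release();
        release();
        release();
    }
    return live;
}
```

Falling through the labels top to bottom is what makes the unwind order automatically the reverse of the acquisition order.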
+2
drivers/scsi/sata_nv.c
··· 158 158 PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP51 }, 159 159 { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA, 160 160 PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP55 }, 161 + { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA2, 162 + PCI_ANY_ID, PCI_ANY_ID, 0, 0, MCP55 }, 161 163 { PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, 162 164 PCI_ANY_ID, PCI_ANY_ID, 163 165 PCI_CLASS_STORAGE_IDE<<8, 0xffff00, GENERIC },
+2 -3
drivers/scsi/scsi.c
··· 1265 1265 list_for_each_safe(lh, lh_sf, &active_list) { 1266 1266 scmd = list_entry(lh, struct scsi_cmnd, eh_entry); 1267 1267 list_del_init(lh); 1268 - if (recovery) { 1269 - scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD); 1270 - } else { 1268 + if (recovery && 1269 + !scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD)) { 1271 1270 scmd->result = (DID_ABORT << 16); 1272 1271 scsi_finish_command(scmd); 1273 1272 }
+1
drivers/scsi/scsi_devinfo.c
··· 110 110 {"RELISYS", "Scorpio", NULL, BLIST_NOLUN}, /* responds to all lun */ 111 111 {"SANKYO", "CP525", "6.64", BLIST_NOLUN}, /* causes failed REQ SENSE, extra reset */ 112 112 {"TEXEL", "CD-ROM", "1.06", BLIST_NOLUN}, 113 + {"transtec", "T5008", "0001", BLIST_NOREPORTLUN }, 113 114 {"YAMAHA", "CDR100", "1.00", BLIST_NOLUN}, /* locks up */ 114 115 {"YAMAHA", "CDR102", "1.00", BLIST_NOLUN}, /* locks up */ 115 116 {"YAMAHA", "CRW8424S", "1.0", BLIST_NOLUN}, /* locks up */
+39 -39
drivers/scsi/scsi_error.c
··· 50 50 void scsi_eh_wakeup(struct Scsi_Host *shost) 51 51 { 52 52 if (shost->host_busy == shost->host_failed) { 53 - up(shost->eh_wait); 53 + wake_up_process(shost->ehandler); 54 54 SCSI_LOG_ERROR_RECOVERY(5, 55 55 printk("Waking error handler thread\n")); 56 56 } ··· 68 68 { 69 69 struct Scsi_Host *shost = scmd->device->host; 70 70 unsigned long flags; 71 + int ret = 0; 71 72 72 - if (shost->eh_wait == NULL) 73 + if (!shost->ehandler) 73 74 return 0; 74 75 75 76 spin_lock_irqsave(shost->host_lock, flags); 77 + if (scsi_host_set_state(shost, SHOST_RECOVERY)) 78 + if (scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY)) 79 + goto out_unlock; 76 80 81 + ret = 1; 77 82 scmd->eh_eflags |= eh_flag; 78 83 list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q); 79 - scsi_host_set_state(shost, SHOST_RECOVERY); 80 84 shost->host_failed++; 81 85 scsi_eh_wakeup(shost); 86 + out_unlock: 82 87 spin_unlock_irqrestore(shost->host_lock, flags); 83 - return 1; 88 + return ret; 84 89 } 85 90 86 91 /** ··· 181 176 } 182 177 183 178 if (unlikely(!scsi_eh_scmd_add(scmd, SCSI_EH_CANCEL_CMD))) { 184 - panic("Error handler thread not present at %p %p %s %d", 185 - scmd, scmd->device->host, __FILE__, __LINE__); 179 + scmd->result |= DID_TIME_OUT << 16; 180 + __scsi_done(scmd); 186 181 } 187 182 } 188 183 ··· 201 196 { 202 197 int online; 203 198 204 - wait_event(sdev->host->host_wait, (sdev->host->shost_state != 205 - SHOST_RECOVERY)); 199 + wait_event(sdev->host->host_wait, !scsi_host_in_recovery(sdev->host)); 206 200 207 201 online = scsi_device_online(sdev); 208 202 ··· 1445 1441 static void scsi_restart_operations(struct Scsi_Host *shost) 1446 1442 { 1447 1443 struct scsi_device *sdev; 1444 + unsigned long flags; 1448 1445 1449 1446 /* 1450 1447 * If the door was locked, we need to insert a door lock request ··· 1465 1460 SCSI_LOG_ERROR_RECOVERY(3, printk("%s: waking up host to restart\n", 1466 1461 __FUNCTION__)); 1467 1462 1468 - scsi_host_set_state(shost, SHOST_RUNNING); 1463 + 
spin_lock_irqsave(shost->host_lock, flags); 1464 + if (scsi_host_set_state(shost, SHOST_RUNNING)) 1465 + if (scsi_host_set_state(shost, SHOST_CANCEL)) 1466 + BUG_ON(scsi_host_set_state(shost, SHOST_DEL)); 1467 + spin_unlock_irqrestore(shost->host_lock, flags); 1469 1468 1470 1469 wake_up(&shost->host_wait); 1471 1470 ··· 1591 1582 { 1592 1583 struct Scsi_Host *shost = (struct Scsi_Host *) data; 1593 1584 int rtn; 1594 - DECLARE_MUTEX_LOCKED(sem); 1595 1585 1596 1586 current->flags |= PF_NOFREEZE; 1597 - shost->eh_wait = &sem; 1598 1587 1588 + 1599 1589 /* 1600 - * Wake up the thread that created us. 1590 + * Note - we always use TASK_INTERRUPTIBLE even if the module 1591 + * was loaded as part of the kernel. The reason is that 1592 + * UNINTERRUPTIBLE would cause this thread to be counted in 1593 + * the load average as a running process, and an interruptible 1594 + * wait doesn't. 1601 1595 */ 1602 - SCSI_LOG_ERROR_RECOVERY(3, printk("Wake up parent of" 1603 - " scsi_eh_%d\n",shost->host_no)); 1596 + set_current_state(TASK_INTERRUPTIBLE); 1597 + while (!kthread_should_stop()) { 1598 + if (shost->host_failed == 0 || 1599 + shost->host_failed != shost->host_busy) { 1600 + SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler" 1601 + " scsi_eh_%d" 1602 + " sleeping\n", 1603 + shost->host_no)); 1604 + schedule(); 1605 + set_current_state(TASK_INTERRUPTIBLE); 1606 + continue; 1607 + } 1604 1608 1605 - while (1) { 1606 - /* 1607 - * If we get a signal, it means we are supposed to go 1608 - * away and die. This typically happens if the user is 1609 - * trying to unload a module. 1610 - */ 1611 - SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler" 1612 - " scsi_eh_%d" 1613 - " sleeping\n",shost->host_no)); 1614 - 1615 - /* 1616 - * Note - we always use down_interruptible with the semaphore 1617 - * even if the module was loaded as part of the kernel. 
The 1618 - * reason is that down() will cause this thread to be counted 1619 - * in the load average as a running process, and down 1620 - * interruptible doesn't. Given that we need to allow this 1621 - * thread to die if the driver was loaded as a module, using 1622 - * semaphores isn't unreasonable. 1623 - */ 1624 - down_interruptible(&sem); 1625 - if (kthread_should_stop()) 1626 - break; 1627 - 1609 + __set_current_state(TASK_RUNNING); 1628 1610 SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler" 1629 1611 " scsi_eh_%d waking" 1630 1612 " up\n",shost->host_no)); ··· 1642 1642 * which are still online. 1643 1643 */ 1644 1644 scsi_restart_operations(shost); 1645 - 1645 + set_current_state(TASK_INTERRUPTIBLE); 1646 1646 } 1647 1647 1648 1648 SCSI_LOG_ERROR_RECOVERY(1, printk("Error handler scsi_eh_%d" ··· 1651 1651 /* 1652 1652 * Make sure that nobody tries to wake us up again. 1653 1653 */ 1654 - shost->eh_wait = NULL; 1654 + shost->ehandler = NULL; 1655 1655 return 0; 1656 1656 } 1657 1657
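The scsi_error.c rework replaces the semaphore handshake with a kthread loop that sets TASK_INTERRUPTIBLE, rechecks the "work pending" predicate after every wakeup, and exits cleanly on `kthread_should_stop()`. A user-space analogue using pthreads (not the kernel API; draining remaining work before exit is a simplification here so the demo is deterministic):

```c
#include <pthread.h>

struct eh_queue {
    pthread_mutex_t lock;
    pthread_cond_t wake;
    int failed;       /* work waiting, like shost->host_failed */
    int stop;         /* like kthread_should_stop() */
    int handled;      /* total items processed */
};

static void *eh_thread(void *arg)
{
    struct eh_queue *q = arg;

    pthread_mutex_lock(&q->lock);
    while (!q->stop || q->failed > 0) {
        if (q->failed == 0) {           /* nothing to do: sleep */
            pthread_cond_wait(&q->wake, &q->lock);
            continue;                   /* recheck the predicate */
        }
        q->handled += q->failed;        /* stands in for scsi_unjam_host() */
        q->failed = 0;
    }
    pthread_mutex_unlock(&q->lock);
    return NULL;
}

static void eh_wakeup(struct eh_queue *q, int n) /* like scsi_eh_wakeup() */
{
    pthread_mutex_lock(&q->lock);
    q->failed += n;
    pthread_cond_signal(&q->wake);
    pthread_mutex_unlock(&q->lock);
}

static int eh_demo(void)
{
    struct eh_queue q = { PTHREAD_MUTEX_INITIALIZER,
                          PTHREAD_COND_INITIALIZER, 0, 0, 0 };
    pthread_t t;

    pthread_create(&t, NULL, eh_thread, &q);
    eh_wakeup(&q, 3);

    pthread_mutex_lock(&q.lock);
    q.stop = 1;                          /* like kthread_stop() */
    pthread_cond_signal(&q.wake);
    pthread_mutex_unlock(&q.lock);
    pthread_join(t, NULL);
    return q.handled;
}
```

As the patch's comment notes, sleeping interruptibly matters in the kernel because an uninterruptible sleeper is counted in the load average; the predicate-recheck loop is what prevents lost wakeups in both worlds.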
+1 -1
drivers/scsi/scsi_ioctl.c
··· 458 458 * error processing, as long as the device was opened 459 459 * non-blocking */ 460 460 if (filp && filp->f_flags & O_NONBLOCK) { 461 - if (sdev->host->shost_state == SHOST_RECOVERY) 461 + if (scsi_host_in_recovery(sdev->host)) 462 462 return -ENODEV; 463 463 } else if (!scsi_block_when_processing_errors(sdev)) 464 464 return -ENODEV;
+4 -8
drivers/scsi/scsi_lib.c
··· 118 118 req->flags &= ~REQ_DONTPREP; 119 119 req->special = (req->flags & REQ_SPECIAL) ? cmd->sc_request : NULL; 120 120 121 - scsi_release_buffers(cmd); 122 121 scsi_put_command(cmd); 123 122 } 124 123 ··· 139 140 * commands. 140 141 * Notes: This could be called either from an interrupt context or a 141 142 * normal process context. 142 - * Notes: Upon return, cmd is a stale pointer. 143 143 */ 144 144 int scsi_queue_insert(struct scsi_cmnd *cmd, int reason) 145 145 { 146 146 struct Scsi_Host *host = cmd->device->host; 147 147 struct scsi_device *device = cmd->device; 148 148 struct request_queue *q = device->request_queue; 149 - struct request *req = cmd->request; 150 149 unsigned long flags; 151 150 152 151 SCSI_LOG_MLQUEUE(1, ··· 185 188 * function. The SCSI request function detects the blocked condition 186 189 * and plugs the queue appropriately. 187 190 */ 188 - scsi_unprep_request(req); 189 191 spin_lock_irqsave(q->queue_lock, flags); 190 - blk_requeue_request(q, req); 192 + blk_requeue_request(q, cmd->request); 191 193 spin_unlock_irqrestore(q->queue_lock, flags); 192 194 193 195 scsi_run_queue(q); ··· 447 451 448 452 spin_lock_irqsave(shost->host_lock, flags); 449 453 shost->host_busy--; 450 - if (unlikely((shost->shost_state == SHOST_RECOVERY) && 454 + if (unlikely(scsi_host_in_recovery(shost) && 451 455 shost->host_failed)) 452 456 scsi_eh_wakeup(shost); 453 457 spin_unlock(shost->host_lock); ··· 1264 1268 } 1265 1269 } else { 1266 1270 memcpy(cmd->cmnd, req->cmd, sizeof(cmd->cmnd)); 1271 + cmd->cmd_len = req->cmd_len; 1267 1272 if (rq_data_dir(req) == WRITE) 1268 1273 cmd->sc_data_direction = DMA_TO_DEVICE; 1269 1274 else if (req->data_len) ··· 1339 1342 struct Scsi_Host *shost, 1340 1343 struct scsi_device *sdev) 1341 1344 { 1342 - if (shost->shost_state == SHOST_RECOVERY) 1345 + if (scsi_host_in_recovery(shost)) 1343 1346 return 0; 1344 1347 if (shost->host_busy == 0 && shost->host_blocked) { 1345 1348 /* ··· 1511 1514 * cases (host limits or 
settings) should run the queue at some 1512 1515 * later time. 1513 1516 */ 1514 - scsi_unprep_request(req); 1515 1517 spin_lock_irq(q->queue_lock); 1516 1518 blk_requeue_request(q, req); 1517 1519 sdev->device_busy--;
+7 -13
drivers/scsi/scsi_scan.c
··· 1466 1466 1467 1467 void scsi_forget_host(struct Scsi_Host *shost) 1468 1468 { 1469 - struct scsi_target *starget, *tmp; 1469 + struct scsi_device *sdev; 1470 1470 unsigned long flags; 1471 1471 1472 - /* 1473 - * Ok, this look a bit strange. We always look for the first device 1474 - * on the list as scsi_remove_device removes them from it - thus we 1475 - * also have to release the lock. 1476 - * We don't need to get another reference to the device before 1477 - * releasing the lock as we already own the reference from 1478 - * scsi_register_device that's release in scsi_remove_device. And 1479 - * after that we don't look at sdev anymore. 1480 - */ 1472 + restart: 1481 1473 spin_lock_irqsave(shost->host_lock, flags); 1482 - list_for_each_entry_safe(starget, tmp, &shost->__targets, siblings) { 1474 + list_for_each_entry(sdev, &shost->__devices, siblings) { 1475 + if (sdev->sdev_state == SDEV_DEL) 1476 + continue; 1483 1477 spin_unlock_irqrestore(shost->host_lock, flags); 1484 - scsi_remove_target(&starget->dev); 1485 - spin_lock_irqsave(shost->host_lock, flags); 1478 + __scsi_remove_device(sdev); 1479 + goto restart; 1486 1480 } 1487 1481 spin_unlock_irqrestore(shost->host_lock, flags); 1488 1482 }
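The new `scsi_forget_host()` illustrates a general rule: once the host lock is dropped to perform the removal, any saved next pointer may be stale, so the traversal restarts from the head instead of using `list_for_each_entry_safe`. A runnable sketch of that restart pattern with toy types (not the SCSI ones; the lock/unlock points are marked as comments):

```c
#include <stdlib.h>

enum dev_state { DEV_OK, DEV_DEL };

struct dev {
    enum dev_state st;
    struct dev *next;
};

struct host { struct dev *devs; };

/* Done "unlocked": may reshuffle the list arbitrarily. */
static void remove_dev(struct host *h, struct dev *d)
{
    struct dev **pp = &h->devs;

    while (*pp && *pp != d)
        pp = &(*pp)->next;
    if (*pp)
        *pp = d->next;
    free(d);
}

static int forget_host(struct host *h)  /* returns #devices removed */
{
    int removed = 0;
    struct dev *d;

restart:
    /* lock(h) */
    for (d = h->devs; d; d = d->next) {
        if (d->st == DEV_DEL)
            continue;           /* someone else is tearing it down */
        /* unlock(h) */
        remove_dev(h, d);
        removed++;
        goto restart;           /* list may have changed: start over */
    }
    /* unlock(h) */
    return removed;
}

static struct host *make_host(int n_ok, int n_del)
{
    struct host *h = malloc(sizeof(*h));
    int i;

    h->devs = NULL;
    for (i = 0; i < n_ok + n_del; i++) {
        struct dev *d = malloc(sizeof(*d));
        d->st = (i < n_ok) ? DEV_OK : DEV_DEL;
        d->next = h->devs;
        h->devs = d;
    }
    return h;
}
```

The loop terminates because each restart removes one live entry; entries already in the DEL state are skipped, exactly as the patch skips `SDEV_DEL` devices.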
+12 -5
drivers/scsi/scsi_sysfs.c
··· 57 57 { SHOST_CANCEL, "cancel" }, 58 58 { SHOST_DEL, "deleted" }, 59 59 { SHOST_RECOVERY, "recovery" }, 60 + { SHOST_CANCEL_RECOVERY, "cancel/recovery" }, 61 + { SHOST_DEL_RECOVERY, "deleted/recovery", }, 60 62 }; 61 63 const char *scsi_host_state_name(enum scsi_host_state state) 62 64 { ··· 709 707 **/ 710 708 void scsi_remove_device(struct scsi_device *sdev) 711 709 { 712 - down(&sdev->host->scan_mutex); 710 + struct Scsi_Host *shost = sdev->host; 711 + 712 + down(&shost->scan_mutex); 713 713 __scsi_remove_device(sdev); 714 - up(&sdev->host->scan_mutex); 714 + up(&shost->scan_mutex); 715 715 } 716 716 EXPORT_SYMBOL(scsi_remove_device); 717 717 ··· 721 717 { 722 718 struct Scsi_Host *shost = dev_to_shost(starget->dev.parent); 723 719 unsigned long flags; 724 - struct scsi_device *sdev, *tmp; 720 + struct scsi_device *sdev; 725 721 726 722 spin_lock_irqsave(shost->host_lock, flags); 727 723 starget->reap_ref++; 728 - list_for_each_entry_safe(sdev, tmp, &shost->__devices, siblings) { 724 + restart: 725 + list_for_each_entry(sdev, &shost->__devices, siblings) { 729 726 if (sdev->channel != starget->channel || 730 - sdev->id != starget->id) 727 + sdev->id != starget->id || 728 + sdev->sdev_state == SDEV_DEL) 731 729 continue; 732 730 spin_unlock_irqrestore(shost->host_lock, flags); 733 731 scsi_remove_device(sdev); 734 732 spin_lock_irqsave(shost->host_lock, flags); 733 + goto restart; 735 734 } 736 735 spin_unlock_irqrestore(shost->host_lock, flags); 737 736 scsi_target_reap(starget);
+1
drivers/scsi/sd.c
··· 235 235 return 0; 236 236 237 237 memcpy(SCpnt->cmnd, rq->cmd, sizeof(SCpnt->cmnd)); 238 + SCpnt->cmd_len = rq->cmd_len; 238 239 if (rq_data_dir(rq) == WRITE) 239 240 SCpnt->sc_data_direction = DMA_TO_DEVICE; 240 241 else if (rq->data_len)
+1 -1
drivers/scsi/sg.c
··· 1027 1027 if (sdp->detached) 1028 1028 return -ENODEV; 1029 1029 if (filp->f_flags & O_NONBLOCK) { 1030 - if (sdp->device->host->shost_state == SHOST_RECOVERY) 1030 + if (scsi_host_in_recovery(sdp->device->host)) 1031 1031 return -EBUSY; 1032 1032 } else if (!scsi_block_when_processing_errors(sdp->device)) 1033 1033 return -EBUSY;
+1
drivers/scsi/sr.c
··· 326 326 return 0; 327 327 328 328 memcpy(SCpnt->cmnd, rq->cmd, sizeof(SCpnt->cmnd)); 329 + SCpnt->cmd_len = rq->cmd_len; 329 330 if (!rq->data_len) 330 331 SCpnt->sc_data_direction = DMA_NONE; 331 332 else if (rq_data_dir(rq) == WRITE)
+1
drivers/scsi/st.c
··· 4206 4206 return 0; 4207 4207 4208 4208 memcpy(SCpnt->cmnd, rq->cmd, sizeof(SCpnt->cmnd)); 4209 + SCpnt->cmd_len = rq->cmd_len; 4209 4210 4210 4211 if (rq_data_dir(rq) == WRITE) 4211 4212 SCpnt->sc_data_direction = DMA_TO_DEVICE;
+1 -1
drivers/serial/clps711x.c
··· 98 98 { 99 99 struct uart_port *port = dev_id; 100 100 struct tty_struct *tty = port->info->tty; 101 - unsigned int status, ch, flg, ignored = 0; 101 + unsigned int status, ch, flg; 102 102 103 103 status = clps_readl(SYSFLG(port)); 104 104 while (!(status & SYSFLG_URXFE)) {
+1 -1
drivers/usb/core/message.c
··· 987 987 988 988 /* remove this interface if it has been registered */ 989 989 interface = dev->actconfig->interface[i]; 990 - if (!klist_node_attached(&interface->dev.knode_bus)) 990 + if (!device_is_registered(&interface->dev)) 991 991 continue; 992 992 dev_dbg (&dev->dev, "unregistering interface %s\n", 993 993 interface->dev.bus_id);
+3 -3
drivers/usb/core/usb.c
··· 303 303 /* if interface was already added, bind now; else let 304 304 * the future device_add() bind it, bypassing probe() 305 305 */ 306 - if (klist_node_attached(&dev->knode_bus)) 306 + if (device_is_registered(dev)) 307 307 device_bind_driver(dev); 308 308 309 309 return 0; ··· 336 336 if (iface->condition != USB_INTERFACE_BOUND) 337 337 return; 338 338 339 - /* release only after device_add() */ 340 - if (klist_node_attached(&dev->knode_bus)) { 339 + /* don't release if the interface hasn't been added yet */ 340 + if (device_is_registered(dev)) { 341 341 iface->condition = USB_INTERFACE_UNBINDING; 342 342 device_release_driver(dev); 343 343 }
+2 -2
drivers/usb/gadget/pxa2xx_udc.c
··· 422 422 } 423 423 424 424 static int 425 - write_packet(volatile u32 *uddr, struct pxa2xx_request *req, unsigned max) 425 + write_packet(volatile unsigned long *uddr, struct pxa2xx_request *req, unsigned max) 426 426 { 427 427 u8 *buf; 428 428 unsigned length, count; ··· 2602 2602 * VBUS IRQs should probably be ignored so that the PXA device just acts 2603 2603 * "dead" to USB hosts until system resume. 2604 2604 */ 2605 - static int pxa2xx_udc_suspend(struct device *dev, u32 state, u32 level) 2605 + static int pxa2xx_udc_suspend(struct device *dev, pm_message_t state, u32 level) 2606 2606 { 2607 2607 struct pxa2xx_udc *udc = dev_get_drvdata(dev); 2608 2608
+4 -4
drivers/usb/gadget/pxa2xx_udc.h
··· 69 69 * UDDR = UDC Endpoint Data Register (the fifo) 70 70 * DRCM = DMA Request Channel Map 71 71 */ 72 - volatile u32 *reg_udccs; 73 - volatile u32 *reg_ubcr; 74 - volatile u32 *reg_uddr; 72 + volatile unsigned long *reg_udccs; 73 + volatile unsigned long *reg_ubcr; 74 + volatile unsigned long *reg_uddr; 75 75 #ifdef USE_DMA 76 - volatile u32 *reg_drcmr; 76 + volatile unsigned long *reg_drcmr; 77 77 #define drcmr(n) .reg_drcmr = & DRCMR ## n , 78 78 #else 79 79 #define drcmr(n)
+14 -2
drivers/usb/host/sl811-hcd.c
··· 782 782 /* usb 1.1 says max 90% of a frame is available for periodic transfers. 783 783 * this driver doesn't promise that much since it's got to handle an 784 784 * IRQ per packet; irq handling latencies also use up that time. 785 + * 786 + * NOTE: the periodic schedule is a sparse tree, with the load for 787 + * each branch minimized. see fig 3.5 in the OHCI spec for example. 785 788 */ 786 789 #define MAX_PERIODIC_LOAD 500 /* out of 1000 usec */ 787 790 ··· 846 843 if (!(sl811->port1 & (1 << USB_PORT_FEAT_ENABLE)) 847 844 || !HC_IS_RUNNING(hcd->state)) { 848 845 retval = -ENODEV; 846 + kfree(ep); 849 847 goto fail; 850 848 } 851 849 ··· 915 911 case PIPE_ISOCHRONOUS: 916 912 case PIPE_INTERRUPT: 917 913 urb->interval = ep->period; 918 - if (ep->branch < PERIODIC_SIZE) 914 + if (ep->branch < PERIODIC_SIZE) { 915 + /* NOTE: the phase is correct here, but the value 916 + * needs offsetting by the transfer queue depth. 917 + * All current drivers ignore start_frame, so this 918 + * is unlikely to ever matter... 919 + */ 920 + urb->start_frame = (sl811->frame & (PERIODIC_SIZE - 1)) 921 + + ep->branch; 919 922 break; 923 + } 920 924 921 925 retval = balance(sl811, ep->period, ep->load); 922 926 if (retval < 0) ··· 1134 1122 desc->wHubCharacteristics = (__force __u16)cpu_to_le16(temp); 1135 1123 1136 1124 /* two bitmaps: ports removable, and legacy PortPwrCtrlMask */ 1137 - desc->bitmap[0] = 1 << 1; 1125 + desc->bitmap[0] = 0 << 1; 1138 1126 desc->bitmap[1] = ~0; 1139 1127 } 1140 1128
+20 -9
drivers/usb/net/pegasus.c
··· 648 648 } 649 649 650 650 /* 651 + * If the packet is unreasonably long, quietly drop it rather than 652 + * kernel panicing by calling skb_put. 653 + */ 654 + if (pkt_len > PEGASUS_MTU) 655 + goto goon; 656 + 657 + /* 651 658 * at this point we are sure pegasus->rx_skb != NULL 652 659 * so we go ahead and pass up the packet. 653 660 */ ··· 893 886 __u8 data[2]; 894 887 895 888 read_eprom_word(pegasus, 4, (__u16 *) data); 896 - if (data[1] < 0x80) { 897 - if (netif_msg_timer(pegasus)) 898 - dev_info(&pegasus->intf->dev, 899 - "intr interval changed from %ums to %ums\n", 900 - data[1], 0x80); 901 - data[1] = 0x80; 902 - #ifdef PEGASUS_WRITE_EEPROM 903 - write_eprom_word(pegasus, 4, *(__u16 *) data); 889 + if (pegasus->usb->speed != USB_SPEED_HIGH) { 890 + if (data[1] < 0x80) { 891 + if (netif_msg_timer(pegasus)) 892 + dev_info(&pegasus->intf->dev, "intr interval " 893 + "changed from %ums to %ums\n", 894 + data[1], 0x80); 895 + data[1] = 0x80; 896 + #ifdef PEGASUS_WRITE_EEPROM 897 + write_eprom_word(pegasus, 4, *(__u16 *) data); 904 898 #endif 899 + } 905 900 } 906 901 pegasus->intr_interval = data[1]; 907 902 } ··· 913 904 pegasus_t *pegasus = netdev_priv(net); 914 905 u16 tmp; 915 906 916 - if (read_mii_word(pegasus, pegasus->phy, MII_BMSR, &tmp)) 907 + if (!read_mii_word(pegasus, pegasus->phy, MII_BMSR, &tmp)) 917 908 return; 909 + 918 910 if (tmp & BMSR_LSTATUS) 919 911 netif_carrier_on(net); 920 912 else ··· 1365 1355 cancel_delayed_work(&pegasus->carrier_check); 1366 1356 unregister_netdev(pegasus->net); 1367 1357 usb_put_dev(interface_to_usbdev(intf)); 1358 + unlink_all_urbs(pegasus); 1368 1359 free_all_urbs(pegasus); 1369 1360 free_skb_pool(pegasus); 1370 1361 if (pegasus->rx_skb)
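The pegasus change guards against a hardware-reported length larger than the MTU before calling `skb_put()`, which would otherwise panic the kernel. The shape of the guard, in a minimal user-space sketch (1536 stands in for PEGASUS_MTU; the real constant lives in the driver):

```c
#include <stddef.h>
#include <string.h>

#define TOY_MTU 1536

/* Returns bytes accepted into buf, or 0 if the frame was dropped. */
static size_t rx_fixup(unsigned char *buf, const unsigned char *pkt,
                       size_t pkt_len)
{
    if (pkt_len > TOY_MTU)      /* unreasonable length: quietly drop */
        return 0;
    memcpy(buf, pkt, pkt_len);
    return pkt_len;
}

/* Demo wrapper so the guard is easy to exercise. */
static size_t rx_demo(size_t pkt_len)
{
    static unsigned char pkt[8192];
    static unsigned char buf[TOY_MTU];

    return rx_fixup(buf, pkt, pkt_len);
}
```

Validating the length against the destination's capacity before the copy is the whole fix; dropping silently matches the patch's "quietly drop it" comment.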
+2 -1
drivers/usb/serial/airprime.c
··· 16 16 #include "usb-serial.h" 17 17 18 18 static struct usb_device_id id_table [] = { 19 - { USB_DEVICE(0xf3d, 0x0112) }, 19 + { USB_DEVICE(0xf3d, 0x0112) }, /* AirPrime CDMA Wireless PC Card */ 20 + { USB_DEVICE(0x1410, 0x1110) }, /* Novatel Wireless Merlin CDMA */ 20 21 { }, 21 22 }; 22 23 MODULE_DEVICE_TABLE(usb, id_table);
+5 -3
drivers/usb/serial/ftdi_sio.c
··· 1846 1846 } else { 1847 1847 /* set the baudrate determined before */ 1848 1848 if (change_speed(port)) { 1849 - err("%s urb failed to set baurdrate", __FUNCTION__); 1849 + err("%s urb failed to set baudrate", __FUNCTION__); 1850 1850 } 1851 - /* Ensure RTS and DTR are raised */ 1852 - set_mctrl(port, TIOCM_DTR | TIOCM_RTS); 1851 + /* Ensure RTS and DTR are raised when baudrate changed from 0 */ 1852 + if ((old_termios->c_cflag & CBAUD) == B0) { 1853 + set_mctrl(port, TIOCM_DTR | TIOCM_RTS); 1854 + } 1853 1855 } 1854 1856 1855 1857 /* Set flow control */
+10 -1
drivers/usb/serial/option.c
··· 25 25 2005-06-20 v0.4.1 add missing braces :-/ 26 26 killed end-of-line whitespace 27 27 2005-07-15 v0.4.2 rename WLAN product to FUSION, add FUSION2 28 + 2005-09-10 v0.4.3 added HUAWEI E600 card and Audiovox AirCard 29 + 2005-09-20 v0.4.4 increased recv buffer size: the card sometimes 30 + wants to send >2000 bytes. 28 31 29 32 Work sponsored by: Sigos GmbH, Germany <info@sigos.de> 30 33 ··· 74 71 75 72 /* Vendor and product IDs */ 76 73 #define OPTION_VENDOR_ID 0x0AF0 74 + #define HUAWEI_VENDOR_ID 0x12D1 75 + #define AUDIOVOX_VENDOR_ID 0x0F3D 77 76 78 77 #define OPTION_PRODUCT_OLD 0x5000 79 78 #define OPTION_PRODUCT_FUSION 0x6000 80 79 #define OPTION_PRODUCT_FUSION2 0x6300 80 + #define HUAWEI_PRODUCT_E600 0x1001 81 + #define AUDIOVOX_PRODUCT_AIRCARD 0x0112 81 82 82 83 static struct usb_device_id option_ids[] = { 83 84 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_OLD) }, 84 85 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION) }, 85 86 { USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_FUSION2) }, 87 + { USB_DEVICE(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E600) }, 88 + { USB_DEVICE(AUDIOVOX_VENDOR_ID, AUDIOVOX_PRODUCT_AIRCARD) }, 86 89 { } /* Terminating entry */ 87 90 }; 88 91 ··· 141 132 142 133 #define N_IN_URB 4 143 134 #define N_OUT_URB 1 144 - #define IN_BUFLEN 1024 135 + #define IN_BUFLEN 4096 145 136 #define OUT_BUFLEN 128 146 137 147 138 struct option_port_private {
+1
drivers/video/Kconfig
··· 650 650 select FB_CFB_FILLRECT 651 651 select FB_CFB_COPYAREA 652 652 select FB_CFB_IMAGEBLIT 653 + select FB_SOFT_CURSOR 653 654 help 654 655 This driver supports graphics boards with the nVidia chips, TNT 655 656 and newer. For very old chipsets, such as the RIVA128, then use
+8 -3
drivers/video/aty/xlinit.c
··· 174 174 const struct xl_card_cfg_t * card = &card_cfg[xl_card]; 175 175 struct atyfb_par *par = (struct atyfb_par *) info->par; 176 176 union aty_pll pll; 177 - int i, err; 177 + int err; 178 178 u32 temp; 179 179 180 180 aty_st_8(CONFIG_STAT0, 0x85, par); ··· 252 252 aty_st_le32(0xEC, 0x00000000, par); 253 253 aty_st_le32(0xFC, 0x00000000, par); 254 254 255 - for (i=0; i<sizeof(lcd_tbl)/sizeof(lcd_tbl_t); i++) { 256 - aty_st_lcd(lcd_tbl[i].lcd_reg, lcd_tbl[i].val, par); 255 + #if defined (CONFIG_FB_ATY_GENERIC_LCD) 256 + { 257 + int i; 258 + 259 + for (i = 0; i < ARRAY_SIZE(lcd_tbl); i++) 260 + aty_st_lcd(lcd_tbl[i].lcd_reg, lcd_tbl[i].val, par); 257 261 } 262 + #endif 258 263 259 264 aty_st_le16(CONFIG_STAT0, 0x00A4, par); 260 265 mdelay(10);
+4 -4
drivers/video/fbcvt.c
··· 272 272 { 273 273 mode->refresh = cvt->f_refresh; 274 274 mode->pixclock = KHZ2PICOS(cvt->pixclock/1000); 275 - mode->left_margin = cvt->h_front_porch; 276 - mode->right_margin = cvt->h_back_porch; 275 + mode->left_margin = cvt->h_back_porch; 276 + mode->right_margin = cvt->h_front_porch; 277 277 mode->hsync_len = cvt->hsync; 278 - mode->upper_margin = cvt->v_front_porch; 279 - mode->lower_margin = cvt->v_back_porch; 278 + mode->upper_margin = cvt->v_back_porch; 279 + mode->lower_margin = cvt->v_front_porch; 280 280 mode->vsync_len = cvt->vsync; 281 281 282 282 mode->sync &= ~(FB_SYNC_HOR_HIGH_ACT | FB_SYNC_VERT_HIGH_ACT);
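The fbcvt.c fix hinges on fbdev naming: `left_margin` is the horizontal *back* porch (hsync end to active video) and `right_margin` the *front* porch (active video to hsync start), with `upper_margin`/`lower_margin` the vertical counterparts; the old code had each pair swapped. A sketch of the corrected mapping with made-up numbers, not values computed by the real CVT code:

```c
struct toy_cvt  { int h_front_porch, h_back_porch, hsync; };
struct toy_mode { int left_margin, right_margin, hsync_len; };

static void cvt_to_mode(const struct toy_cvt *cvt, struct toy_mode *m)
{
    m->left_margin  = cvt->h_back_porch;   /* was h_front_porch: wrong */
    m->right_margin = cvt->h_front_porch;  /* was h_back_porch: wrong */
    m->hsync_len    = cvt->hsync;
}

/* Illustrative porch values only. */
static struct toy_mode demo_mode(void)
{
    static const struct toy_cvt cvt = { 48, 80, 32 };
    struct toy_mode m;

    cvt_to_mode(&cvt, &m);
    return m;
}
```

With the swap, every CVT-derived mode had its blanking intervals mirrored around the sync pulse, shifting the image horizontally and vertically.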
+4 -1
drivers/video/nvidia/nvidia.c
··· 893 893 int i, set = cursor->set; 894 894 u16 fg, bg; 895 895 896 - if (!hwcur || cursor->image.width > MAX_CURS || cursor->image.height > MAX_CURS) 896 + if (cursor->image.width > MAX_CURS || cursor->image.height > MAX_CURS) 897 897 return -ENXIO; 898 898 899 899 NVShowHideCursor(par, 0); ··· 1355 1355 info->pixmap.access_align = 32; 1356 1356 info->pixmap.size = 8 * 1024; 1357 1357 info->pixmap.flags = FB_PIXMAP_SYSTEM; 1358 + 1359 + if (!hwcur) 1360 + info->fbops->fb_cursor = soft_cursor; 1358 1361 1359 1362 info->var.accel_flags = (!noaccel); 1360 1363
+85 -70
fs/9p/conv.c
··· 3 3 * 4 4 * 9P protocol conversion functions 5 5 * 6 + * Copyright (C) 2004, 2005 by Latchesar Ionkov <lucho@ionkov.net> 6 7 * Copyright (C) 2004 by Eric Van Hensbergen <ericvh@gmail.com> 7 8 * Copyright (C) 2002 by Ron Minnich <rminnich@lanl.gov> 8 9 * ··· 56 55 return buf->p > buf->ep; 57 56 } 58 57 59 - static inline void buf_check_size(struct cbuf *buf, int len) 58 + static inline int buf_check_size(struct cbuf *buf, int len) 60 59 { 61 60 if (buf->p+len > buf->ep) { 62 61 if (buf->p < buf->ep) { 63 62 eprintk(KERN_ERR, "buffer overflow\n"); 64 63 buf->p = buf->ep + 1; 64 + return 0; 65 65 } 66 66 } 67 + 68 + return 1; 67 69 } 68 70 69 71 static inline void *buf_alloc(struct cbuf *buf, int len) 70 72 { 71 73 void *ret = NULL; 72 74 73 - buf_check_size(buf, len); 74 - ret = buf->p; 75 - buf->p += len; 75 + if (buf_check_size(buf, len)) { 76 + ret = buf->p; 77 + buf->p += len; 78 + } 76 79 77 80 return ret; 78 81 } 79 82 80 83 static inline void buf_put_int8(struct cbuf *buf, u8 val) 81 84 { 82 - buf_check_size(buf, 1); 83 - 84 - buf->p[0] = val; 85 - buf->p++; 85 + if (buf_check_size(buf, 1)) { 86 + buf->p[0] = val; 87 + buf->p++; 88 + } 86 89 } 87 90 88 91 static inline void buf_put_int16(struct cbuf *buf, u16 val) 89 92 { 90 - buf_check_size(buf, 2); 91 - 92 - *(__le16 *) buf->p = cpu_to_le16(val); 93 - buf->p += 2; 93 + if (buf_check_size(buf, 2)) { 94 + *(__le16 *) buf->p = cpu_to_le16(val); 95 + buf->p += 2; 96 + } 94 97 } 95 98 96 99 static inline void buf_put_int32(struct cbuf *buf, u32 val) 97 100 { 98 - buf_check_size(buf, 4); 99 - 100 - *(__le32 *)buf->p = cpu_to_le32(val); 101 - buf->p += 4; 101 + if (buf_check_size(buf, 4)) { 102 + *(__le32 *)buf->p = cpu_to_le32(val); 103 + buf->p += 4; 104 + } 102 105 } 103 106 104 107 static inline void buf_put_int64(struct cbuf *buf, u64 val) 105 108 { 106 - buf_check_size(buf, 8); 107 - 108 - *(__le64 *)buf->p = cpu_to_le64(val); 109 - buf->p += 8; 109 + if (buf_check_size(buf, 8)) { 110 + *(__le64 *)buf->p 
= cpu_to_le64(val); 111 + buf->p += 8; 112 + } 110 113 } 111 114 112 115 static inline void buf_put_stringn(struct cbuf *buf, const char *s, u16 slen) 113 116 { 114 - buf_check_size(buf, slen + 2); 115 - 116 - buf_put_int16(buf, slen); 117 - memcpy(buf->p, s, slen); 118 - buf->p += slen; 117 + if (buf_check_size(buf, slen + 2)) { 118 + buf_put_int16(buf, slen); 119 + memcpy(buf->p, s, slen); 120 + buf->p += slen; 121 + } 119 122 } 120 123 121 124 static inline void buf_put_string(struct cbuf *buf, const char *s) ··· 129 124 130 125 static inline void buf_put_data(struct cbuf *buf, void *data, u32 datalen) 131 126 { 132 - buf_check_size(buf, datalen); 133 - 134 - memcpy(buf->p, data, datalen); 135 - buf->p += datalen; 127 + if (buf_check_size(buf, datalen)) { 128 + memcpy(buf->p, data, datalen); 129 + buf->p += datalen; 130 + } 136 131 } 137 132 138 133 static inline u8 buf_get_int8(struct cbuf *buf) 139 134 { 140 135 u8 ret = 0; 141 136 142 - buf_check_size(buf, 1); 143 - ret = buf->p[0]; 144 - 145 - buf->p++; 137 + if (buf_check_size(buf, 1)) { 138 + ret = buf->p[0]; 139 + buf->p++; 140 + } 146 141 147 142 return ret; 148 143 } ··· 151 146 { 152 147 u16 ret = 0; 153 148 154 - buf_check_size(buf, 2); 155 - ret = le16_to_cpu(*(__le16 *)buf->p); 156 - 157 - buf->p += 2; 149 + if (buf_check_size(buf, 2)) { 150 + ret = le16_to_cpu(*(__le16 *)buf->p); 151 + buf->p += 2; 152 + } 158 153 159 154 return ret; 160 155 } ··· 163 158 { 164 159 u32 ret = 0; 165 160 166 - buf_check_size(buf, 4); 167 - ret = le32_to_cpu(*(__le32 *)buf->p); 168 - 169 - buf->p += 4; 161 + if (buf_check_size(buf, 4)) { 162 + ret = le32_to_cpu(*(__le32 *)buf->p); 163 + buf->p += 4; 164 + } 170 165 171 166 return ret; 172 167 } ··· 175 170 { 176 171 u64 ret = 0; 177 172 178 - buf_check_size(buf, 8); 179 - ret = le64_to_cpu(*(__le64 *)buf->p); 180 - 181 - buf->p += 8; 173 + if (buf_check_size(buf, 8)) { 174 + ret = le64_to_cpu(*(__le64 *)buf->p); 175 + buf->p += 8; 176 + } 182 177 183 178 return ret; 
184 179 } ··· 186 181 static inline int 187 182 buf_get_string(struct cbuf *buf, char *data, unsigned int datalen) 188 183 { 184 + u16 len = 0; 189 185 190 - u16 len = buf_get_int16(buf); 191 - buf_check_size(buf, len); 192 - if (len + 1 > datalen) 193 - return 0; 186 + len = buf_get_int16(buf); 187 + if (!buf_check_overflow(buf) && buf_check_size(buf, len) && len+1>datalen) { 188 + memcpy(data, buf->p, len); 189 + data[len] = 0; 190 + buf->p += len; 191 + len++; 192 + } 194 193 195 - memcpy(data, buf->p, len); 196 - data[len] = 0; 197 - buf->p += len; 198 - 199 - return len + 1; 194 + return len; 200 195 } 201 196 202 197 static inline char *buf_get_stringb(struct cbuf *buf, struct cbuf *sbuf) 203 198 { 204 - char *ret = NULL; 205 - int n = buf_get_string(buf, sbuf->p, sbuf->ep - sbuf->p); 199 + char *ret; 200 + u16 len; 206 201 207 - if (n > 0) { 202 + ret = NULL; 203 + len = buf_get_int16(buf); 204 + 205 + if (!buf_check_overflow(buf) && buf_check_size(buf, len) && 206 + buf_check_size(sbuf, len+1)) { 207 + 208 + memcpy(sbuf->p, buf->p, len); 209 + sbuf->p[len] = 0; 208 210 ret = sbuf->p; 209 - sbuf->p += n; 211 + buf->p += len; 212 + sbuf->p += len + 1; 210 213 } 211 214 212 215 return ret; ··· 222 209 223 210 static inline int buf_get_data(struct cbuf *buf, void *data, int datalen) 224 211 { 225 - buf_check_size(buf, datalen); 212 + int ret = 0; 226 213 227 - memcpy(data, buf->p, datalen); 228 - buf->p += datalen; 214 + if (buf_check_size(buf, datalen)) { 215 + memcpy(data, buf->p, datalen); 216 + buf->p += datalen; 217 + ret = datalen; 218 + } 229 219 230 - return datalen; 220 + return ret; 231 221 } 232 222 233 223 static inline void *buf_get_datab(struct cbuf *buf, struct cbuf *dbuf, ··· 239 223 char *ret = NULL; 240 224 int n = 0; 241 225 242 - buf_check_size(dbuf, datalen); 243 - 244 - n = buf_get_data(buf, dbuf->p, datalen); 245 - 246 - if (n > 0) { 247 - ret = dbuf->p; 248 - dbuf->p += n; 226 + if (buf_check_size(dbuf, datalen)) { 227 + n = 
buf_get_data(buf, dbuf->p, datalen); 228 + if (n > 0) { 229 + ret = dbuf->p; 230 + dbuf->p += n; 231 + } 249 232 } 250 233 251 234 return ret; ··· 651 636 break; 652 637 case RWALK: 653 638 rcall->params.rwalk.nwqid = buf_get_int16(bufp); 654 - rcall->params.rwalk.wqids = buf_alloc(bufp, 639 + rcall->params.rwalk.wqids = buf_alloc(dbufp, 655 640 rcall->params.rwalk.nwqid * sizeof(struct v9fs_qid)); 656 641 if (rcall->params.rwalk.wqids) 657 642 for (i = 0; i < rcall->params.rwalk.nwqid; i++) {
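The conv.c changes above all follow one pattern: every accessor first asks whether the requested bytes fit between the cursor and the end of the buffer, and degrades to a harmless no-op (returning a zero/NULL default) instead of reading or writing past the end. A minimal userspace sketch of that pattern, assuming a `cbuf` reduced to just a cursor and an end pointer (an illustration of the idea, not the kernel code):

```c
/* Simplified model of the 9p cbuf: p is the cursor, ep is one past the end. */
struct cbuf {
	unsigned char *p;
	unsigned char *ep;
};

/* Nonzero only if len more bytes fit between the cursor and the end. */
static int buf_check_size(struct cbuf *buf, int len)
{
	return buf->p + len <= buf->ep;
}

/* Guarded read: on a short buffer it returns 0 and leaves the cursor
 * alone, where the unchecked version would have read past ep. */
static unsigned char buf_get_int8(struct cbuf *buf)
{
	unsigned char ret = 0;

	if (buf_check_size(buf, 1)) {
		ret = buf->p[0];
		buf->p++;
	}
	return ret;
}
```

With a one-byte buffer, the first read returns the byte and the second returns the default without advancing the cursor, which is exactly the failure mode the patch gives the kernel accessors.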
+7 -1
fs/9p/v9fs.c
··· 303 303 goto SessCleanUp; 304 304 }; 305 305 306 - v9ses->transport = trans_proto; 306 + v9ses->transport = kmalloc(sizeof(*v9ses->transport), GFP_KERNEL); 307 + if (!v9ses->transport) { 308 + retval = -ENOMEM; 309 + goto SessCleanUp; 310 + } 311 + 312 + memmove(v9ses->transport, trans_proto, sizeof(*v9ses->transport)); 307 313 308 314 if ((retval = v9ses->transport->init(v9ses, dev_name, data)) < 0) { 309 315 eprintk(KERN_ERR, "problem initializing transport\n");
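The v9fs.c hunk replaces a shared transport template with a per-session heap copy, so one mount's transport state cannot leak into another's. A hedged sketch of the same clone-a-template idea; the `transport` struct here is hypothetical and much smaller than the real `v9fs_transport`:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical ops table; stands in for the real v9fs_transport. */
struct transport {
	int (*init)(void);
	int status;          /* per-session state that must not be shared */
};

/* Clone a shared, read-only template so each session owns a private,
 * mutable copy -- the kmalloc + memmove in v9fs_session_init() in miniature. */
static struct transport *clone_transport(const struct transport *proto)
{
	struct transport *t = malloc(sizeof(*t));

	if (t)
		memcpy(t, proto, sizeof(*t));
	return t;
}
```

Mutating one clone's `status` leaves the template and every other clone untouched, which is the property the original shared-pointer assignment lacked.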
+2 -2
fs/9p/vfs_inode.c
··· 1063 1063 int ret; 1064 1064 char *link = __getname(); 1065 1065 1066 - if (strlen(link) < buflen) 1067 - buflen = strlen(link); 1066 + if (buflen > PATH_MAX) 1067 + buflen = PATH_MAX; 1068 1068 1069 1069 dprintk(DEBUG_VFS, " dentry: %s (%p)\n", dentry->d_iname, dentry); 1070 1070
+7 -17
fs/9p/vfs_super.c
··· 129 129 130 130 if ((newfid = v9fs_session_init(v9ses, dev_name, data)) < 0) { 131 131 dprintk(DEBUG_ERROR, "problem initiating session\n"); 132 - retval = newfid; 133 - goto free_session; 132 + kfree(v9ses); 133 + return ERR_PTR(newfid); 134 134 } 135 135 136 136 sb = sget(fs_type, NULL, v9fs_set_super, v9ses); ··· 150 150 151 151 if (!root) { 152 152 retval = -ENOMEM; 153 - goto release_inode; 153 + goto put_back_sb; 154 154 } 155 155 156 156 sb->s_root = root; ··· 159 159 root_fid = v9fs_fid_create(root); 160 160 if (root_fid == NULL) { 161 161 retval = -ENOMEM; 162 - goto release_dentry; 162 + goto put_back_sb; 163 163 } 164 164 165 165 root_fid->fidopen = 0; ··· 182 182 183 183 if (stat_result < 0) { 184 184 retval = stat_result; 185 - goto release_dentry; 185 + goto put_back_sb; 186 186 } 187 187 188 188 return sb; 189 189 190 - release_dentry: 191 - dput(sb->s_root); 192 - 193 - release_inode: 194 - iput(inode); 195 - 196 - put_back_sb: 190 + put_back_sb: 191 + /* deactivate_super calls v9fs_kill_super which will free the rest */ 197 192 up_write(&sb->s_umount); 198 193 deactivate_super(sb); 199 - v9fs_session_close(v9ses); 200 - 201 - free_session: 202 - kfree(v9ses); 203 - 204 194 return ERR_PTR(retval); 205 195 } 206 196
+2
fs/cifs/cifsfs.c
··· 781 781 782 782 oplockThread = current; 783 783 do { 784 + if (try_to_freeze()) 785 + continue; 784 786 set_current_state(TASK_INTERRUPTIBLE); 785 787 786 788 schedule_timeout(1*HZ);
+2
fs/cifs/connect.c
··· 344 344 } 345 345 346 346 while (server->tcpStatus != CifsExiting) { 347 + if (try_to_freeze()) 348 + continue; 347 349 if (bigbuf == NULL) { 348 350 bigbuf = cifs_buf_get(); 349 351 if(bigbuf == NULL) {
+2 -1
fs/dcache.c
··· 102 102 list_del_init(&dentry->d_alias); 103 103 spin_unlock(&dentry->d_lock); 104 104 spin_unlock(&dcache_lock); 105 - fsnotify_inoderemove(inode); 105 + if (!inode->i_nlink) 106 + fsnotify_inoderemove(inode); 106 107 if (dentry->d_op && dentry->d_op->d_iput) 107 108 dentry->d_op->d_iput(dentry, inode); 108 109 else
+3 -3
fs/ext3/balloc.c
··· 1410 1410 unsigned long desc_count; 1411 1411 struct ext3_group_desc *gdp; 1412 1412 int i; 1413 - unsigned long ngroups; 1413 + unsigned long ngroups = EXT3_SB(sb)->s_groups_count; 1414 1414 #ifdef EXT3FS_DEBUG 1415 1415 struct ext3_super_block *es; 1416 1416 unsigned long bitmap_count, x; ··· 1421 1421 desc_count = 0; 1422 1422 bitmap_count = 0; 1423 1423 gdp = NULL; 1424 - for (i = 0; i < EXT3_SB(sb)->s_groups_count; i++) { 1424 + 1425 + for (i = 0; i < ngroups; i++) { 1425 1426 gdp = ext3_get_group_desc(sb, i, NULL); 1426 1427 if (!gdp) 1427 1428 continue; ··· 1444 1443 return bitmap_count; 1445 1444 #else 1446 1445 desc_count = 0; 1447 - ngroups = EXT3_SB(sb)->s_groups_count; 1448 1446 smp_rmb(); 1449 1447 for (i = 0; i < ngroups; i++) { 1450 1448 gdp = ext3_get_group_desc(sb, i, NULL);
+3 -3
fs/ext3/resize.c
··· 242 242 i < sbi->s_itb_per_group; i++, bit++, block++) { 243 243 struct buffer_head *it; 244 244 245 - ext3_debug("clear inode block %#04x (+%ld)\n", block, bit); 245 + ext3_debug("clear inode block %#04lx (+%d)\n", block, bit); 246 246 if (IS_ERR(it = bclean(handle, sb, block))) { 247 247 err = PTR_ERR(it); 248 248 goto exit_bh; ··· 643 643 break; 644 644 645 645 bh = sb_getblk(sb, group * bpg + blk_off); 646 - ext3_debug(sb, __FUNCTION__, "update metadata backup %#04lx\n", 647 - bh->b_blocknr); 646 + ext3_debug("update metadata backup %#04lx\n", 647 + (unsigned long)bh->b_blocknr); 648 648 if ((err = ext3_journal_get_write_access(handle, bh))) 649 649 break; 650 650 lock_buffer(bh);
+5 -6
fs/ext3/super.c
··· 512 512 513 513 static int ext3_show_options(struct seq_file *seq, struct vfsmount *vfs) 514 514 { 515 - struct ext3_sb_info *sbi = EXT3_SB(vfs->mnt_sb); 515 + struct super_block *sb = vfs->mnt_sb; 516 + struct ext3_sb_info *sbi = EXT3_SB(sb); 516 517 517 - if (sbi->s_mount_opt & EXT3_MOUNT_JOURNAL_DATA) 518 + if (test_opt(sb, DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA) 518 519 seq_puts(seq, ",data=journal"); 519 - 520 - if (sbi->s_mount_opt & EXT3_MOUNT_ORDERED_DATA) 520 + else if (test_opt(sb, DATA_FLAGS) == EXT3_MOUNT_ORDERED_DATA) 521 521 seq_puts(seq, ",data=ordered"); 522 - 523 - if (sbi->s_mount_opt & EXT3_MOUNT_WRITEBACK_DATA) 522 + else if (test_opt(sb, DATA_FLAGS) == EXT3_MOUNT_WRITEBACK_DATA) 524 523 seq_puts(seq, ",data=writeback"); 525 524 526 525 #if defined(CONFIG_QUOTA)
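The ext3_show_options fix matters because the three data journalling modes share one two-bit field, so a bitwise AND (as in the removed code) can match more than one mode at once. A small standalone illustration using the mount-option values from the ext3 headers:

```c
/* Mount-option bits as defined in the ext3 headers: the three data modes
 * share one two-bit field, masked by DATA_FLAGS. */
#define EXT3_MOUNT_JOURNAL_DATA		0x00400
#define EXT3_MOUNT_ORDERED_DATA		0x00800
#define EXT3_MOUNT_WRITEBACK_DATA	0x00C00
#define EXT3_MOUNT_DATA_FLAGS		0x00C00

/* Buggy test from the removed code: a bitwise AND on a multi-bit field. */
static int is_journal_buggy(unsigned long opt)
{
	return (opt & EXT3_MOUNT_JOURNAL_DATA) != 0;
}

/* Fixed test, equivalent to test_opt(sb, DATA_FLAGS) == ...: mask the
 * whole field out first, then compare for equality. */
static int is_journal_fixed(unsigned long opt)
{
	return (opt & EXT3_MOUNT_DATA_FLAGS) == EXT3_MOUNT_JOURNAL_DATA;
}
```

With data=writeback (0xC00), the buggy test also reports data=journal, which is why the old ext3_show_options could emit several data= options for a single mount.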
+8 -3
fs/fat/inode.c
··· 300 300 inode->i_blksize = sbi->cluster_size; 301 301 inode->i_blocks = ((inode->i_size + (sbi->cluster_size - 1)) 302 302 & ~((loff_t)sbi->cluster_size - 1)) >> 9; 303 - inode->i_mtime.tv_sec = inode->i_atime.tv_sec = 303 + inode->i_mtime.tv_sec = 304 304 date_dos2unix(le16_to_cpu(de->time), le16_to_cpu(de->date)); 305 - inode->i_mtime.tv_nsec = inode->i_atime.tv_nsec = 0; 305 + inode->i_mtime.tv_nsec = 0; 306 306 if (sbi->options.isvfat) { 307 307 int secs = de->ctime_cs / 100; 308 308 int csecs = de->ctime_cs % 100; ··· 310 310 date_dos2unix(le16_to_cpu(de->ctime), 311 311 le16_to_cpu(de->cdate)) + secs; 312 312 inode->i_ctime.tv_nsec = csecs * 10000000; 313 + inode->i_atime.tv_sec = 314 + date_dos2unix(le16_to_cpu(0), le16_to_cpu(de->adate)); 315 + inode->i_atime.tv_nsec = 0; 313 316 } else 314 - inode->i_ctime = inode->i_mtime; 317 + inode->i_ctime = inode->i_atime = inode->i_mtime; 315 318 316 319 return 0; 317 320 } ··· 516 513 raw_entry->starthi = cpu_to_le16(MSDOS_I(inode)->i_logstart >> 16); 517 514 fat_date_unix2dos(inode->i_mtime.tv_sec, &raw_entry->time, &raw_entry->date); 518 515 if (sbi->options.isvfat) { 516 + __le16 atime; 519 517 fat_date_unix2dos(inode->i_ctime.tv_sec,&raw_entry->ctime,&raw_entry->cdate); 518 + fat_date_unix2dos(inode->i_atime.tv_sec,&atime,&raw_entry->adate); 520 519 raw_entry->ctime_cs = (inode->i_ctime.tv_sec & 1) * 100 + 521 520 inode->i_ctime.tv_nsec / 10000000; 522 521 }
+1 -2
fs/jfs/inode.c
··· 129 129 jfs_info("In jfs_delete_inode, inode = 0x%p", inode); 130 130 131 131 if (!is_bad_inode(inode) && 132 - (JFS_IP(inode)->fileset == cpu_to_le32(FILESYSTEM_I))) { 133 - 132 + (JFS_IP(inode)->fileset == FILESYSTEM_I)) { 134 133 truncate_inode_pages(&inode->i_data, 0); 135 134 136 135 if (test_cflag(COMMIT_Freewmap, inode))
+1 -1
fs/jfs/jfs_dmap.c
··· 3055 3055 * RETURN VALUES: 3056 3056 * log2 number of blocks 3057 3057 */ 3058 - int blkstol2(s64 nb) 3058 + static int blkstol2(s64 nb) 3059 3059 { 3060 3060 int l2nb; 3061 3061 s64 mask; /* meant to be signed */
+11 -4
fs/jfs/jfs_txnmgr.c
··· 725 725 else 726 726 tlck->flag = tlckINODELOCK; 727 727 728 + if (S_ISDIR(ip->i_mode)) 729 + tlck->flag |= tlckDIRECTORY; 730 + 728 731 tlck->type = 0; 729 732 730 733 /* bind the tlock and the page */ ··· 1012 1009 1013 1010 /* bind the tlock and the object */ 1014 1011 tlck->flag = tlckINODELOCK; 1012 + if (S_ISDIR(ip->i_mode)) 1013 + tlck->flag |= tlckDIRECTORY; 1015 1014 tlck->ip = ip; 1016 1015 tlck->mp = NULL; 1017 1016 ··· 1082 1077 linelock->flag = tlckLINELOCK; 1083 1078 linelock->maxcnt = TLOCKLONG; 1084 1079 linelock->index = 0; 1080 + if (tlck->flag & tlckDIRECTORY) 1081 + linelock->flag |= tlckDIRECTORY; 1085 1082 1086 1083 /* append linelock after tlock */ 1087 1084 linelock->next = tlock->next; ··· 2077 2070 * 2078 2071 * function: log from maplock of freed data extents; 2079 2072 */ 2080 - void mapLog(struct jfs_log * log, struct tblock * tblk, struct lrd * lrd, 2081 - struct tlock * tlck) 2073 + static void mapLog(struct jfs_log * log, struct tblock * tblk, struct lrd * lrd, 2074 + struct tlock * tlck) 2082 2075 { 2083 2076 struct pxd_lock *pxdlock; 2084 2077 int i, nlock; ··· 2216 2209 * function: synchronously write pages locked by transaction 2217 2210 * after txLog() but before txUpdateMap(); 2218 2211 */ 2219 - void txForce(struct tblock * tblk) 2212 + static void txForce(struct tblock * tblk) 2220 2213 { 2221 2214 struct tlock *tlck; 2222 2215 lid_t lid, next; ··· 2365 2358 */ 2366 2359 else { /* (maplock->flag & mlckFREE) */ 2367 2360 2368 - if (S_ISDIR(tlck->ip->i_mode)) 2361 + if (tlck->flag & tlckDIRECTORY) 2369 2362 txFreeMap(ipimap, maplock, 2370 2363 tblk, COMMIT_PWMAP); 2371 2364 else
+1
fs/jfs/jfs_txnmgr.h
··· 122 122 #define tlckLOG 0x0800 123 123 /* updateMap state */ 124 124 #define tlckUPDATEMAP 0x0080 125 + #define tlckDIRECTORY 0x0040 125 126 /* freeLock state */ 126 127 #define tlckFREELOCK 0x0008 127 128 #define tlckWRITEPAGE 0x0004
+2 -3
fs/nfs/read.c
··· 184 184 { 185 185 unlock_page(req->wb_page); 186 186 187 - nfs_clear_request(req); 188 - nfs_release_request(req); 189 - 190 187 dprintk("NFS: read done (%s/%Ld %d@%Ld)\n", 191 188 req->wb_context->dentry->d_inode->i_sb->s_id, 192 189 (long long)NFS_FILEID(req->wb_context->dentry->d_inode), 193 190 req->wb_bytes, 194 191 (long long)req_offset(req)); 192 + nfs_clear_request(req); 193 + nfs_release_request(req); 195 194 } 196 195 197 196 /*
+2
fs/ntfs/ChangeLog
··· 92 92 an octal number to conform to how chmod(1) works, too. Thanks to 93 93 Giuseppe Bilotta and Horst von Brand for pointing out the errors of 94 94 my ways. 95 + - Fix various bugs in the runlist merging code. (Based on libntfs 96 + changes by Richard Russon.) 95 97 96 98 2.1.23 - Implement extension of resident files and make writing safe as well as 97 99 many bug fixes, cleanups, and enhancements...
+81 -39
fs/ntfs/aops.c
··· 59 59 unsigned long flags; 60 60 struct buffer_head *first, *tmp; 61 61 struct page *page; 62 + struct inode *vi; 62 63 ntfs_inode *ni; 63 64 int page_uptodate = 1; 64 65 65 66 page = bh->b_page; 66 - ni = NTFS_I(page->mapping->host); 67 + vi = page->mapping->host; 68 + ni = NTFS_I(vi); 67 69 68 70 if (likely(uptodate)) { 69 - s64 file_ofs, initialized_size; 71 + loff_t i_size; 72 + s64 file_ofs, init_size; 70 73 71 74 set_buffer_uptodate(bh); 72 75 73 76 file_ofs = ((s64)page->index << PAGE_CACHE_SHIFT) + 74 77 bh_offset(bh); 75 78 read_lock_irqsave(&ni->size_lock, flags); 76 - initialized_size = ni->initialized_size; 79 + init_size = ni->initialized_size; 80 + i_size = i_size_read(vi); 77 81 read_unlock_irqrestore(&ni->size_lock, flags); 82 + if (unlikely(init_size > i_size)) { 83 + /* Race with shrinking truncate. */ 84 + init_size = i_size; 85 + } 78 86 /* Check for the current buffer head overflowing. */ 79 - if (file_ofs + bh->b_size > initialized_size) { 80 - char *addr; 81 - int ofs = 0; 87 + if (unlikely(file_ofs + bh->b_size > init_size)) { 88 + u8 *kaddr; 89 + int ofs; 82 90 83 - if (file_ofs < initialized_size) 84 - ofs = initialized_size - file_ofs; 85 - addr = kmap_atomic(page, KM_BIO_SRC_IRQ); 86 - memset(addr + bh_offset(bh) + ofs, 0, bh->b_size - ofs); 91 + ofs = 0; 92 + if (file_ofs < init_size) 93 + ofs = init_size - file_ofs; 94 + kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ); 95 + memset(kaddr + bh_offset(bh) + ofs, 0, 96 + bh->b_size - ofs); 97 + kunmap_atomic(kaddr, KM_BIO_SRC_IRQ); 87 98 flush_dcache_page(page); 88 - kunmap_atomic(addr, KM_BIO_SRC_IRQ); 89 99 } 90 100 } else { 91 101 clear_buffer_uptodate(bh); 92 102 SetPageError(page); 93 - ntfs_error(ni->vol->sb, "Buffer I/O error, logical block %llu.", 94 - (unsigned long long)bh->b_blocknr); 103 + ntfs_error(ni->vol->sb, "Buffer I/O error, logical block " 104 + "0x%llx.", (unsigned long long)bh->b_blocknr); 95 105 } 96 106 first = page_buffers(page); 97 107 local_irq_save(flags); ··· 134 
124 if (likely(page_uptodate && !PageError(page))) 135 125 SetPageUptodate(page); 136 126 } else { 137 - char *addr; 127 + u8 *kaddr; 138 128 unsigned int i, recs; 139 129 u32 rec_size; 140 130 ··· 142 132 recs = PAGE_CACHE_SIZE / rec_size; 143 133 /* Should have been verified before we got here... */ 144 134 BUG_ON(!recs); 145 - addr = kmap_atomic(page, KM_BIO_SRC_IRQ); 135 + kaddr = kmap_atomic(page, KM_BIO_SRC_IRQ); 146 136 for (i = 0; i < recs; i++) 147 - post_read_mst_fixup((NTFS_RECORD*)(addr + 137 + post_read_mst_fixup((NTFS_RECORD*)(kaddr + 148 138 i * rec_size), rec_size); 139 + kunmap_atomic(kaddr, KM_BIO_SRC_IRQ); 149 140 flush_dcache_page(page); 150 - kunmap_atomic(addr, KM_BIO_SRC_IRQ); 151 141 if (likely(page_uptodate && !PageError(page))) 152 142 SetPageUptodate(page); 153 143 } ··· 178 168 */ 179 169 static int ntfs_read_block(struct page *page) 180 170 { 171 + loff_t i_size; 181 172 VCN vcn; 182 173 LCN lcn; 174 + s64 init_size; 175 + struct inode *vi; 183 176 ntfs_inode *ni; 184 177 ntfs_volume *vol; 185 178 runlist_element *rl; ··· 193 180 int i, nr; 194 181 unsigned char blocksize_bits; 195 182 196 - ni = NTFS_I(page->mapping->host); 183 + vi = page->mapping->host; 184 + ni = NTFS_I(vi); 197 185 vol = ni->vol; 198 186 199 187 /* $MFT/$DATA must have its complete runlist in memory at all times. */ ··· 213 199 bh = head = page_buffers(page); 214 200 BUG_ON(!bh); 215 201 202 + /* 203 + * We may be racing with truncate. To avoid some of the problems we 204 + * now take a snapshot of the various sizes and use those for the whole 205 + * of the function. In case of an extending truncate it just means we 206 + * may leave some buffers unmapped which are now allocated. This is 207 + * not a problem since these buffers will just get mapped when a write 208 + * occurs. 
In case of a shrinking truncate, we will detect this later 209 + * on due to the runlist being incomplete and if the page is being 210 + * fully truncated, truncate will throw it away as soon as we unlock 211 + * it so no need to worry what we do with it. 212 + */ 216 213 iblock = (s64)page->index << (PAGE_CACHE_SHIFT - blocksize_bits); 217 214 read_lock_irqsave(&ni->size_lock, flags); 218 215 lblock = (ni->allocated_size + blocksize - 1) >> blocksize_bits; 219 - zblock = (ni->initialized_size + blocksize - 1) >> blocksize_bits; 216 + init_size = ni->initialized_size; 217 + i_size = i_size_read(vi); 220 218 read_unlock_irqrestore(&ni->size_lock, flags); 219 + if (unlikely(init_size > i_size)) { 220 + /* Race with shrinking truncate. */ 221 + init_size = i_size; 222 + } 223 + zblock = (init_size + blocksize - 1) >> blocksize_bits; 221 224 222 225 /* Loop through all the buffers in the page. */ 223 226 rl = NULL; ··· 397 366 */ 398 367 static int ntfs_readpage(struct file *file, struct page *page) 399 368 { 369 + loff_t i_size; 370 + struct inode *vi; 400 371 ntfs_inode *ni, *base_ni; 401 372 u8 *kaddr; 402 373 ntfs_attr_search_ctx *ctx; ··· 417 384 unlock_page(page); 418 385 return 0; 419 386 } 420 - ni = NTFS_I(page->mapping->host); 387 + vi = page->mapping->host; 388 + ni = NTFS_I(vi); 421 389 /* 422 390 * Only $DATA attributes can be encrypted and only unnamed $DATA 423 391 * attributes can be compressed. Index root can have the flags set but 424 392 * this means to create compressed/encrypted files, not that the 425 - * attribute is compressed/encrypted. 393 + * attribute is compressed/encrypted. Note we need to check for 394 + * AT_INDEX_ALLOCATION since this is the type of both directory and 395 + * index inodes. 426 396 */ 427 - if (ni->type != AT_INDEX_ROOT) { 397 + if (ni->type != AT_INDEX_ALLOCATION) { 428 398 /* If attribute is encrypted, deny access, just like NT4. 
*/ 429 399 if (NInoEncrypted(ni)) { 430 400 BUG_ON(ni->type != AT_DATA); ··· 492 456 read_lock_irqsave(&ni->size_lock, flags); 493 457 if (unlikely(attr_len > ni->initialized_size)) 494 458 attr_len = ni->initialized_size; 459 + i_size = i_size_read(vi); 495 460 read_unlock_irqrestore(&ni->size_lock, flags); 461 + if (unlikely(attr_len > i_size)) { 462 + /* Race with shrinking truncate. */ 463 + attr_len = i_size; 464 + } 496 465 kaddr = kmap_atomic(page, KM_USER0); 497 466 /* Copy the data to the page. */ 498 467 memcpy(kaddr, (u8*)ctx->attr + ··· 1382 1341 * Only $DATA attributes can be encrypted and only unnamed $DATA 1383 1342 * attributes can be compressed. Index root can have the flags set but 1384 1343 * this means to create compressed/encrypted files, not that the 1385 - * attribute is compressed/encrypted. 1344 + * attribute is compressed/encrypted. Note we need to check for 1345 + * AT_INDEX_ALLOCATION since this is the type of both directory and 1346 + * index inodes. 1386 1347 */ 1387 - if (ni->type != AT_INDEX_ROOT) { 1348 + if (ni->type != AT_INDEX_ALLOCATION) { 1388 1349 /* If file is encrypted, deny access, just like NT4. */ 1389 1350 if (NInoEncrypted(ni)) { 1390 1351 unlock_page(page); ··· 1422 1379 unsigned int ofs = i_size & ~PAGE_CACHE_MASK; 1423 1380 kaddr = kmap_atomic(page, KM_USER0); 1424 1381 memset(kaddr + ofs, 0, PAGE_CACHE_SIZE - ofs); 1425 - flush_dcache_page(page); 1426 1382 kunmap_atomic(kaddr, KM_USER0); 1383 + flush_dcache_page(page); 1427 1384 } 1428 1385 /* Handle mst protected attributes. */ 1429 1386 if (NInoMstProtected(ni)) ··· 1486 1443 BUG_ON(PageWriteback(page)); 1487 1444 set_page_writeback(page); 1488 1445 unlock_page(page); 1489 - /* 1490 - * Here, we do not need to zero the out of bounds area everytime 1491 - * because the below memcpy() already takes care of the 1492 - * mmap-at-end-of-file requirements. 
If the file is converted to a 1493 - * non-resident one, then the code path use is switched to the 1494 - * non-resident one where the zeroing happens on each ntfs_writepage() 1495 - * invocation. 1496 - */ 1497 1446 attr_len = le32_to_cpu(ctx->attr->data.resident.value_length); 1498 1447 i_size = i_size_read(vi); 1499 1448 if (unlikely(attr_len > i_size)) { 1449 + /* Race with shrinking truncate or a failed truncate. */ 1500 1450 attr_len = i_size; 1501 - ctx->attr->data.resident.value_length = cpu_to_le32(attr_len); 1451 + /* 1452 + * If the truncate failed, fix it up now. If a concurrent 1453 + * truncate, we do its job, so it does not have to do anything. 1454 + */ 1455 + err = ntfs_resident_attr_value_resize(ctx->mrec, ctx->attr, 1456 + attr_len); 1457 + /* Shrinking cannot fail. */ 1458 + BUG_ON(err); 1502 1459 } 1503 1460 kaddr = kmap_atomic(page, KM_USER0); 1504 1461 /* Copy the data from the page to the mft record. */ 1505 1462 memcpy((u8*)ctx->attr + 1506 1463 le16_to_cpu(ctx->attr->data.resident.value_offset), 1507 1464 kaddr, attr_len); 1508 - flush_dcache_mft_record_page(ctx->ntfs_ino); 1509 1465 /* Zero out of bounds area in the page cache page. */ 1510 1466 memset(kaddr + attr_len, 0, PAGE_CACHE_SIZE - attr_len); 1511 - flush_dcache_page(page); 1512 1467 kunmap_atomic(kaddr, KM_USER0); 1513 - 1468 + flush_dcache_mft_record_page(ctx->ntfs_ino); 1469 + flush_dcache_page(page); 1470 + /* We are done with the page. */ 1514 1471 end_page_writeback(page); 1515 - 1516 - /* Mark the mft record dirty, so it gets written back. */ 1472 + /* Finally, mark the mft record dirty, so it gets written back. */ 1517 1473 mark_mft_record_dirty(ctx->ntfs_ino); 1518 1474 ntfs_attr_put_search_ctx(ctx); 1519 1475 unmap_mft_record(base_ni);
+5 -4
fs/ntfs/inode.c
··· 1166 1166 * 1167 1167 * Return 0 on success and -errno on error. In the error case, the inode will 1168 1168 * have had make_bad_inode() executed on it. 1169 + * 1170 + * Note this cannot be called for AT_INDEX_ALLOCATION. 1169 1171 */ 1170 1172 static int ntfs_read_locked_attr_inode(struct inode *base_vi, struct inode *vi) 1171 1173 { ··· 1244 1242 } 1245 1243 } 1246 1244 /* 1247 - * The encryption flag set in an index root just means to 1248 - * compress all files. 1245 + * The compressed/sparse flag set in an index root just means 1246 + * to compress all files. 1249 1247 */ 1250 1248 if (NInoMstProtected(ni) && ni->type != AT_INDEX_ROOT) { 1251 1249 ntfs_error(vi->i_sb, "Found mst protected attribute " ··· 1321 1319 "the mapping pairs array."); 1322 1320 goto unm_err_out; 1323 1321 } 1324 - if ((NInoCompressed(ni) || NInoSparse(ni)) && 1325 - ni->type != AT_INDEX_ROOT) { 1322 + if (NInoCompressed(ni) || NInoSparse(ni)) { 1326 1323 if (a->data.non_resident.compression_unit != 4) { 1327 1324 ntfs_error(vi->i_sb, "Found nonstandard " 1328 1325 "compression unit (%u instead "
+1 -1
fs/ntfs/malloc.h
··· 1 1 /* 2 2 * malloc.h - NTFS kernel memory handling. Part of the Linux-NTFS project. 3 3 * 4 - * Copyright (c) 2001-2004 Anton Altaparmakov 4 + * Copyright (c) 2001-2005 Anton Altaparmakov 5 5 * 6 6 * This program/include file is free software; you can redistribute it and/or 7 7 * modify it under the terms of the GNU General Public License as published
+92 -77
fs/ntfs/runlist.c
··· 2 2 * runlist.c - NTFS runlist handling code. Part of the Linux-NTFS project. 3 3 * 4 4 * Copyright (c) 2001-2005 Anton Altaparmakov 5 - * Copyright (c) 2002 Richard Russon 5 + * Copyright (c) 2002-2005 Richard Russon 6 6 * 7 7 * This program/include file is free software; you can redistribute it and/or 8 8 * modify it under the terms of the GNU General Public License as published ··· 158 158 BUG_ON(!dst); 159 159 BUG_ON(!src); 160 160 161 - if ((dst->lcn < 0) || (src->lcn < 0)) { /* Are we merging holes? */ 162 - if (dst->lcn == LCN_HOLE && src->lcn == LCN_HOLE) 163 - return TRUE; 161 + /* We can merge unmapped regions even if they are misaligned. */ 162 + if ((dst->lcn == LCN_RL_NOT_MAPPED) && (src->lcn == LCN_RL_NOT_MAPPED)) 163 + return TRUE; 164 + /* If the runs are misaligned, we cannot merge them. */ 165 + if ((dst->vcn + dst->length) != src->vcn) 164 166 return FALSE; 165 - } 166 - if ((dst->lcn + dst->length) != src->lcn) /* Are the runs contiguous? */ 167 - return FALSE; 168 - if ((dst->vcn + dst->length) != src->vcn) /* Are the runs misaligned? */ 169 - return FALSE; 170 - 171 - return TRUE; 167 + /* If both runs are non-sparse and contiguous, we can merge them. */ 168 + if ((dst->lcn >= 0) && (src->lcn >= 0) && 169 + ((dst->lcn + dst->length) == src->lcn)) 170 + return TRUE; 171 + /* If we are merging two holes, we can merge them. */ 172 + if ((dst->lcn == LCN_HOLE) && (src->lcn == LCN_HOLE)) 173 + return TRUE; 174 + /* Cannot merge. */ 175 + return FALSE; 172 176 } 173 177 174 178 /** ··· 218 214 static inline runlist_element *ntfs_rl_append(runlist_element *dst, 219 215 int dsize, runlist_element *src, int ssize, int loc) 220 216 { 221 - BOOL right; 222 - int magic; 217 + BOOL right = FALSE; /* Right end of @src needs merging. */ 218 + int marker; /* End of the inserted runs. */ 223 219 224 220 BUG_ON(!dst); 225 221 BUG_ON(!src); 226 222 227 223 /* First, check if the right hand end needs merging. 
*/ 228 - right = ntfs_are_rl_mergeable(src + ssize - 1, dst + loc + 1); 224 + if ((loc + 1) < dsize) 225 + right = ntfs_are_rl_mergeable(src + ssize - 1, dst + loc + 1); 229 226 230 227 /* Space required: @dst size + @src size, less one if we merged. */ 231 228 dst = ntfs_rl_realloc(dst, dsize, dsize + ssize - right); ··· 241 236 if (right) 242 237 __ntfs_rl_merge(src + ssize - 1, dst + loc + 1); 243 238 244 - magic = loc + ssize; 239 + /* First run after the @src runs that have been inserted. */ 240 + marker = loc + ssize + 1; 245 241 246 242 /* Move the tail of @dst out of the way, then copy in @src. */ 247 - ntfs_rl_mm(dst, magic + 1, loc + 1 + right, dsize - loc - 1 - right); 243 + ntfs_rl_mm(dst, marker, loc + 1 + right, dsize - (loc + 1 + right)); 248 244 ntfs_rl_mc(dst, loc + 1, src, 0, ssize); 249 245 250 246 /* Adjust the size of the preceding hole. */ 251 247 dst[loc].length = dst[loc + 1].vcn - dst[loc].vcn; 252 248 253 249 /* We may have changed the length of the file, so fix the end marker */ 254 - if (dst[magic + 1].lcn == LCN_ENOENT) 255 - dst[magic + 1].vcn = dst[magic].vcn + dst[magic].length; 250 + if (dst[marker].lcn == LCN_ENOENT) 251 + dst[marker].vcn = dst[marker - 1].vcn + dst[marker - 1].length; 256 252 257 253 return dst; 258 254 } ··· 285 279 static inline runlist_element *ntfs_rl_insert(runlist_element *dst, 286 280 int dsize, runlist_element *src, int ssize, int loc) 287 281 { 288 - BOOL left = FALSE; 289 - BOOL disc = FALSE; /* Discontinuity */ 290 - BOOL hole = FALSE; /* Following a hole */ 291 - int magic; 282 + BOOL left = FALSE; /* Left end of @src needs merging. */ 283 + BOOL disc = FALSE; /* Discontinuity between @dst and @src. */ 284 + int marker; /* End of the inserted runs. */ 292 285 293 286 BUG_ON(!dst); 294 287 BUG_ON(!src); 295 288 296 - /* disc => Discontinuity between the end of @dst and the start of @src. 297 - * This means we might need to insert a hole. 
298 - * hole => @dst ends with a hole or an unmapped region which we can 299 - * extend to match the discontinuity. */ 289 + /* 290 + * disc => Discontinuity between the end of @dst and the start of @src. 291 + * This means we might need to insert a "not mapped" run. 292 + */ 300 293 if (loc == 0) 301 294 disc = (src[0].vcn > 0); 302 295 else { ··· 308 303 merged_length += src->length; 309 304 310 305 disc = (src[0].vcn > dst[loc - 1].vcn + merged_length); 311 - if (disc) 312 - hole = (dst[loc - 1].lcn == LCN_HOLE); 313 306 } 314 - 315 - /* Space required: @dst size + @src size, less one if we merged, plus 316 - * one if there was a discontinuity, less one for a trailing hole. */ 317 - dst = ntfs_rl_realloc(dst, dsize, dsize + ssize - left + disc - hole); 307 + /* 308 + * Space required: @dst size + @src size, less one if we merged, plus 309 + * one if there was a discontinuity. 310 + */ 311 + dst = ntfs_rl_realloc(dst, dsize, dsize + ssize - left + disc); 318 312 if (IS_ERR(dst)) 319 313 return dst; 320 314 /* 321 315 * We are guaranteed to succeed from here so can start modifying the 322 316 * original runlist. 323 317 */ 324 - 325 318 if (left) 326 319 __ntfs_rl_merge(dst + loc - 1, src); 327 - 328 - magic = loc + ssize - left + disc - hole; 320 + /* 321 + * First run after the @src runs that have been inserted. 322 + * Nominally, @marker equals @loc + @ssize, i.e. location + number of 323 + * runs in @src. However, if @left, then the first run in @src has 324 + * been merged with one in @dst. And if @disc, then @dst and @src do 325 + * not meet and we need an extra run to fill the gap. 326 + */ 327 + marker = loc + ssize - left + disc; 329 328 330 329 /* Move the tail of @dst out of the way, then copy in @src. 
*/ 331 - ntfs_rl_mm(dst, magic, loc, dsize - loc); 332 - ntfs_rl_mc(dst, loc + disc - hole, src, left, ssize - left); 330 + ntfs_rl_mm(dst, marker, loc, dsize - loc); 331 + ntfs_rl_mc(dst, loc + disc, src, left, ssize - left); 333 332 334 - /* Adjust the VCN of the last run ... */ 335 - if (dst[magic].lcn <= LCN_HOLE) 336 - dst[magic].vcn = dst[magic - 1].vcn + dst[magic - 1].length; 333 + /* Adjust the VCN of the first run after the insertion... */ 334 + dst[marker].vcn = dst[marker - 1].vcn + dst[marker - 1].length; 337 335 /* ... and the length. */ 338 - if (dst[magic].lcn == LCN_HOLE || dst[magic].lcn == LCN_RL_NOT_MAPPED) 339 - dst[magic].length = dst[magic + 1].vcn - dst[magic].vcn; 336 + if (dst[marker].lcn == LCN_HOLE || dst[marker].lcn == LCN_RL_NOT_MAPPED) 337 + dst[marker].length = dst[marker + 1].vcn - dst[marker].vcn; 340 338 341 - /* Writing beyond the end of the file and there's a discontinuity. */ 339 + /* Writing beyond the end of the file and there is a discontinuity. */ 342 340 if (disc) { 343 - if (hole) 344 - dst[loc - 1].length = dst[loc].vcn - dst[loc - 1].vcn; 345 - else { 346 - if (loc > 0) { 347 - dst[loc].vcn = dst[loc - 1].vcn + 348 - dst[loc - 1].length; 349 - dst[loc].length = dst[loc + 1].vcn - 350 - dst[loc].vcn; 351 - } else { 352 - dst[loc].vcn = 0; 353 - dst[loc].length = dst[loc + 1].vcn; 354 - } 355 - dst[loc].lcn = LCN_RL_NOT_MAPPED; 341 + if (loc > 0) { 342 + dst[loc].vcn = dst[loc - 1].vcn + dst[loc - 1].length; 343 + dst[loc].length = dst[loc + 1].vcn - dst[loc].vcn; 344 + } else { 345 + dst[loc].vcn = 0; 346 + dst[loc].length = dst[loc + 1].vcn; 356 347 } 357 - 358 - magic += hole; 359 - 360 - if (dst[magic].lcn == LCN_ENOENT) 361 - dst[magic].vcn = dst[magic - 1].vcn + 362 - dst[magic - 1].length; 348 + dst[loc].lcn = LCN_RL_NOT_MAPPED; 363 349 } 364 350 return dst; 365 351 } ··· 381 385 static inline runlist_element *ntfs_rl_replace(runlist_element *dst, 382 386 int dsize, runlist_element *src, int ssize, int loc) 383 387 
{ 384 - BOOL left = FALSE; 385 - BOOL right; 386 - int magic; 388 + BOOL left = FALSE; /* Left end of @src needs merging. */ 389 + BOOL right = FALSE; /* Right end of @src needs merging. */ 390 + int tail; /* Start of tail of @dst. */ 391 + int marker; /* End of the inserted runs. */ 387 392 388 393 BUG_ON(!dst); 389 394 BUG_ON(!src); 390 395 391 - /* First, merge the left and right ends, if necessary. */ 392 - right = ntfs_are_rl_mergeable(src + ssize - 1, dst + loc + 1); 396 + /* First, see if the left and right ends need merging. */ 397 + if ((loc + 1) < dsize) 398 + right = ntfs_are_rl_mergeable(src + ssize - 1, dst + loc + 1); 393 399 if (loc > 0) 394 400 left = ntfs_are_rl_mergeable(dst + loc - 1, src); 395 - 396 - /* Allocate some space. We'll need less if the left, right, or both 397 - * ends were merged. */ 401 + /* 402 + * Allocate some space. We will need less if the left, right, or both 403 + * ends get merged. 404 + */ 398 405 dst = ntfs_rl_realloc(dst, dsize, dsize + ssize - left - right); 399 406 if (IS_ERR(dst)) 400 407 return dst; ··· 405 406 * We are guaranteed to succeed from here so can start modifying the 406 407 * original runlists. 407 408 */ 409 + 410 + /* First, merge the left and right ends, if necessary. */ 408 411 if (right) 409 412 __ntfs_rl_merge(src + ssize - 1, dst + loc + 1); 410 413 if (left) 411 414 __ntfs_rl_merge(dst + loc - 1, src); 412 - 413 - /* FIXME: What does this mean? (AIA) */ 414 - magic = loc + ssize - left; 415 + /* 416 + * Offset of the tail of @dst. This needs to be moved out of the way 417 + * to make space for the runs to be copied from @src, i.e. the first 418 + * run of the tail of @dst. 419 + * Nominally, @tail equals @loc + 1, i.e. location, skipping the 420 + * replaced run. However, if @right, then one of @dst's runs is 421 + * already merged into @src. 422 + */ 423 + tail = loc + right + 1; 424 + /* 425 + * First run after the @src runs that have been inserted, i.e. 
where 426 + * the tail of @dst needs to be moved to. 427 + * Nominally, @marker equals @loc + @ssize, i.e. location + number of 428 + * runs in @src. However, if @left, then the first run in @src has 429 + * been merged with one in @dst. 430 + */ 431 + marker = loc + ssize - left; 415 432 416 433 /* Move the tail of @dst out of the way, then copy in @src. */ 417 - ntfs_rl_mm(dst, magic, loc + right + 1, dsize - loc - right - 1); 434 + ntfs_rl_mm(dst, marker, tail, dsize - tail); 418 435 ntfs_rl_mc(dst, loc, src, left, ssize - left); 419 436 420 - /* We may have changed the length of the file, so fix the end marker */ 421 - if (dst[magic].lcn == LCN_ENOENT) 422 - dst[magic].vcn = dst[magic - 1].vcn + dst[magic - 1].length; 437 + /* We may have changed the length of the file, so fix the end marker. */ 438 + if (dsize - tail > 0 && dst[marker].lcn == LCN_ENOENT) 439 + dst[marker].vcn = dst[marker - 1].vcn + dst[marker - 1].length; 423 440 return dst; 424 441 } 425 442
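The rewritten `ntfs_are_rl_mergeable()` now enumerates its cases explicitly: unmapped runs merge even when misaligned; everything else must be VCN-adjacent, and then either both runs are holes or their LCNs are contiguous. A standalone model of that predicate, with `runlist_element` reduced to its three fields:

```c
/* Special LCN values, as in the NTFS runlist code. */
#define LCN_HOLE		(-1)
#define LCN_RL_NOT_MAPPED	(-2)

struct rl {                     /* simplified runlist_element */
	long long vcn, lcn, length;
};

/* Mirror of the rewritten mergeability check: unmapped runs merge
 * regardless of alignment; otherwise the runs must be VCN-adjacent and
 * either both holes or LCN-contiguous allocations. */
static int rl_mergeable(const struct rl *dst, const struct rl *src)
{
	if (dst->lcn == LCN_RL_NOT_MAPPED && src->lcn == LCN_RL_NOT_MAPPED)
		return 1;
	if (dst->vcn + dst->length != src->vcn)
		return 0;
	if (dst->lcn >= 0 && src->lcn >= 0 &&
			dst->lcn + dst->length == src->lcn)
		return 1;
	if (dst->lcn == LCN_HOLE && src->lcn == LCN_HOLE)
		return 1;
	return 0;
}
```

Ordering the checks this way is the point of the fix: the old code tested LCN contiguity before VCN adjacency and mishandled the hole and not-mapped cases.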
+79 -7
fs/proc/base.c
··· 340 340 return result; 341 341 } 342 342 343 + 344 + /* Same as proc_root_link, but this addionally tries to get fs from other 345 + * threads in the group */ 346 + static int proc_task_root_link(struct inode *inode, struct dentry **dentry, 347 + struct vfsmount **mnt) 348 + { 349 + struct fs_struct *fs; 350 + int result = -ENOENT; 351 + struct task_struct *leader = proc_task(inode); 352 + 353 + task_lock(leader); 354 + fs = leader->fs; 355 + if (fs) { 356 + atomic_inc(&fs->count); 357 + task_unlock(leader); 358 + } else { 359 + /* Try to get fs from other threads */ 360 + task_unlock(leader); 361 + read_lock(&tasklist_lock); 362 + if (pid_alive(leader)) { 363 + struct task_struct *task = leader; 364 + 365 + while ((task = next_thread(task)) != leader) { 366 + task_lock(task); 367 + fs = task->fs; 368 + if (fs) { 369 + atomic_inc(&fs->count); 370 + task_unlock(task); 371 + break; 372 + } 373 + task_unlock(task); 374 + } 375 + } 376 + read_unlock(&tasklist_lock); 377 + } 378 + 379 + if (fs) { 380 + read_lock(&fs->lock); 381 + *mnt = mntget(fs->rootmnt); 382 + *dentry = dget(fs->root); 383 + read_unlock(&fs->lock); 384 + result = 0; 385 + put_fs_struct(fs); 386 + } 387 + return result; 388 + } 389 + 390 + 343 391 #define MAY_PTRACE(task) \ 344 392 (task == current || \ 345 393 (task->parent == current && \ ··· 519 471 520 472 /* permission checks */ 521 473 522 - static int proc_check_root(struct inode *inode) 474 + /* If the process being read is separated by chroot from the reading process, 475 + * don't let the reader access the threads. 476 + */ 477 + static int proc_check_chroot(struct dentry *root, struct vfsmount *vfsmnt) 523 478 { 524 - struct dentry *de, *base, *root; 525 - struct vfsmount *our_vfsmnt, *vfsmnt, *mnt; 479 + struct dentry *de, *base; 480 + struct vfsmount *our_vfsmnt, *mnt; 526 481 int res = 0; 527 - 528 - if (proc_root_link(inode, &root, &vfsmnt)) /* Ewww... 
*/ 529 - return -ENOENT; 530 482 read_lock(&current->fs->lock); 531 483 our_vfsmnt = mntget(current->fs->rootmnt); 532 484 base = dget(current->fs->root); ··· 559 511 goto exit; 560 512 } 561 513 514 + static int proc_check_root(struct inode *inode) 515 + { 516 + struct dentry *root; 517 + struct vfsmount *vfsmnt; 518 + 519 + if (proc_root_link(inode, &root, &vfsmnt)) /* Ewww... */ 520 + return -ENOENT; 521 + return proc_check_chroot(root, vfsmnt); 522 + } 523 + 562 524 static int proc_permission(struct inode *inode, int mask, struct nameidata *nd) 563 525 { 564 526 if (generic_permission(inode, mask, NULL) != 0) 565 527 return -EACCES; 566 528 return proc_check_root(inode); 529 + } 530 + 531 + static int proc_task_permission(struct inode *inode, int mask, struct nameidata *nd) 532 + { 533 + struct dentry *root; 534 + struct vfsmount *vfsmnt; 535 + 536 + if (generic_permission(inode, mask, NULL) != 0) 537 + return -EACCES; 538 + 539 + if (proc_task_root_link(inode, &root, &vfsmnt)) 540 + return -ENOENT; 541 + 542 + return proc_check_chroot(root, vfsmnt); 567 543 } 568 544 569 545 extern struct seq_operations proc_pid_maps_op; ··· 1491 1419 1492 1420 static struct inode_operations proc_task_inode_operations = { 1493 1421 .lookup = proc_task_lookup, 1494 - .permission = proc_permission, 1422 + .permission = proc_task_permission, 1495 1423 }; 1496 1424 1497 1425 #ifdef CONFIG_SECURITY
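The thread walk in proc_task_root_link() above uses the classic circular-list idiom: try the leader first, then step with next_thread() until the walk comes back around. A detached sketch of that traversal shape (struct and field names here are ours, not the kernel's):

```c
#include <stddef.h>

/* Minimal ring walk mirroring the
 * while ((task = next_thread(task)) != leader) loop above:
 * check the leader, then visit every other ring member exactly once. */
struct thread { int has_fs; struct thread *next; };

struct thread *find_thread_with_fs(struct thread *leader)
{
	struct thread *t = leader;

	if (leader->has_fs)
		return leader;
	while ((t = t->next) != leader)
		if (t->has_fs)
			return t;
	return NULL;	/* every thread has already dropped its fs */
}
```

The real function additionally takes task_lock() around each inspection and holds tasklist_lock for the walk; the sketch keeps only the termination logic.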
+4 -1
include/asm-alpha/compiler.h
··· 98 98 #undef inline 99 99 #undef __inline__ 100 100 #undef __inline 101 - 101 + #if __GNUC__ == 3 && __GNUC_MINOR__ >= 1 || __GNUC__ > 3 102 + #undef __always_inline 103 + #define __always_inline inline __attribute__((always_inline)) 104 + #endif 102 105 103 106 #endif /* __ALPHA_COMPILER_H */
+1 -1
include/asm-alpha/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
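The near-identical futex.h hunks throughout this merge all keep the same four-field unpacking of encoded_op while dropping the unused `tem`. A standalone sketch of that unpacking (struct and function names are ours; like the kernel code, it relies on two's-complement arithmetic right shifts of a signed int to sign-extend the two 12-bit arguments):

```c
/* Unpack a FUTEX_OP word the way the hunks above do:
 * op in bits 28-30, cmp in bits 24-27, then two signed 12-bit
 * arguments in bits 12-23 and 0-11.  The shift-left/shift-right
 * pairs sign-extend oparg and cmparg. */
struct futex_op {
	int op;     /* FUTEX_OP_SET, FUTEX_OP_ADD, ... */
	int cmp;    /* FUTEX_OP_CMP_EQ, ...            */
	int oparg;  /* signed 12-bit argument          */
	int cmparg; /* signed 12-bit argument          */
};

struct futex_op decode_futex_op(int encoded_op)
{
	struct futex_op d;

	d.op = (encoded_op >> 28) & 7;
	d.cmp = (encoded_op >> 24) & 15;
	d.oparg = (encoded_op << 8) >> 20;
	d.cmparg = (encoded_op << 20) >> 20;
	return d;
}
```

Bit 31 (FUTEX_OP_OPARG_SHIFT << 28 in the hunks) is not part of the op field; it is tested separately to turn oparg into a shift count.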
+1 -1
include/asm-arm/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+3 -3
include/asm-arm/io.h
··· 136 136 /* 137 137 * String version of IO memory access ops: 138 138 */ 139 - extern void _memcpy_fromio(void *, void __iomem *, size_t); 140 - extern void _memcpy_toio(void __iomem *, const void *, size_t); 141 - extern void _memset_io(void __iomem *, int, size_t); 139 + extern void _memcpy_fromio(void *, const volatile void __iomem *, size_t); 140 + extern void _memcpy_toio(volatile void __iomem *, const void *, size_t); 141 + extern void _memset_io(volatile void __iomem *, int, size_t); 142 142 143 143 #define mmiowb() 144 144
+1 -1
include/asm-arm/mach/arch.h
··· 50 50 */ 51 51 #define MACHINE_START(_type,_name) \ 52 52 const struct machine_desc __mach_desc_##_type \ 53 - __attribute__((__section__(".arch.info"))) = { \ 53 + __attribute__((__section__(".arch.info.init"))) = { \ 54 54 .nr = MACH_TYPE_##_type, \ 55 55 .name = _name, 56 56
+2 -2
include/asm-arm/setup.h
··· 171 171 int (*parse)(const struct tag *); 172 172 }; 173 173 174 - #define __tag __attribute_used__ __attribute__((__section__(".taglist"))) 174 + #define __tag __attribute_used__ __attribute__((__section__(".taglist.init"))) 175 175 #define __tagtable(tag, fn) \ 176 176 static struct tagtable __tagtable_##fn __tag = { tag, fn } 177 177 ··· 213 213 214 214 #define __early_param(name,fn) \ 215 215 static struct early_params __early_##fn __attribute_used__ \ 216 - __attribute__((__section__("__early_param"))) = { name, fn } 216 + __attribute__((__section__(".early_param.init"))) = { name, fn } 217 217 218 218 #endif
+1 -1
include/asm-arm26/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-cris/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-frv/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-h8300/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-i386/futex.h
··· 61 61 if (op == FUTEX_OP_SET) 62 62 __futex_atomic_op1("xchgl %0, %2", ret, oldval, uaddr, oparg); 63 63 else { 64 - #ifndef CONFIG_X86_BSWAP 64 + #if !defined(CONFIG_X86_BSWAP) && !defined(CONFIG_UML) 65 65 if (boot_cpu_data.x86 == 3) 66 66 ret = -ENOSYS; 67 67 else
+1 -1
include/asm-ia64/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+5
include/asm-ia64/mca.h
··· 80 80 u64 sal_ra; /* Return address in SAL, physical */ 81 81 u64 sal_gp; /* GP of the SAL - physical */ 82 82 pal_min_state_area_t *pal_min_state; /* from R17. physical in asm, virtual in C */ 83 + /* Previous values of IA64_KR(CURRENT) and IA64_KR(CURRENT_STACK). 84 + * Note: if the MCA/INIT recovery code wants to resume to a new context 85 + * then it must change these values to reflect the new kernel stack. 86 + */ 83 87 u64 prev_IA64_KR_CURRENT; /* previous value of IA64_KR(CURRENT) */ 88 + u64 prev_IA64_KR_CURRENT_STACK; 84 89 struct task_struct *prev_task; /* previous task, NULL if it is not useful */ 85 90 /* Some interrupt registers are not saved in minstate, pt_regs or 86 91 * switch_stack. Because MCA/INIT can occur when interrupts are
+1 -1
include/asm-m32r/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-m68k/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-m68knommu/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-parisc/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-ppc/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
-1
include/asm-ppc/macio.h
··· 1 1 #ifndef __MACIO_ASIC_H__ 2 2 #define __MACIO_ASIC_H__ 3 3 4 - #include <linux/mod_devicetable.h> 5 4 #include <asm/of_device.h> 6 5 7 6 extern struct bus_type macio_bus_type;
+4 -1
include/asm-ppc/of_device.h
··· 2 2 #define __OF_DEVICE_H__ 3 3 4 4 #include <linux/device.h> 5 + #include <linux/mod_devicetable.h> 5 6 #include <asm/prom.h> 6 7 7 8 /* ··· 56 55 extern void of_unregister_driver(struct of_platform_driver *drv); 57 56 extern int of_device_register(struct of_device *ofdev); 58 57 extern void of_device_unregister(struct of_device *ofdev); 59 - extern struct of_device *of_platform_device_create(struct device_node *np, const char *bus_id); 58 + extern struct of_device *of_platform_device_create(struct device_node *np, 59 + const char *bus_id, 60 + struct device *parent); 60 61 extern void of_release_dev(struct device *dev); 61 62 62 63 #endif /* __OF_DEVICE_H__ */
+361 -4
include/asm-ppc64/smu.h
··· 1 + #ifndef _SMU_H 2 + #define _SMU_H 3 + 1 4 /* 2 5 * Definitions for talking to the SMU chip in newer G5 PowerMacs 3 6 */ 4 7 5 8 #include <linux/config.h> 9 + #include <linux/list.h> 6 10 7 11 /* 8 - * Basic routines for use by architecture. To be extended as 9 - * we understand more of the chip 12 + * Known SMU commands 13 + * 14 + * Most of what is below comes from looking at the Open Firmware driver, 15 + * though this is still incomplete and could use better documentation here 16 + * or there... 17 + */ 18 + 19 + 20 + /* 21 + * Partition info commands 22 + * 23 + * I do not know what those are for at this point 24 + */ 25 + #define SMU_CMD_PARTITION_COMMAND 0x3e 26 + 27 + 28 + /* 29 + * Fan control 30 + * 31 + * This is a "mux" for fan control commands, first byte is the 32 + * "sub" command. 33 + */ 34 + #define SMU_CMD_FAN_COMMAND 0x4a 35 + 36 + 37 + /* 38 + * Battery access 39 + * 40 + * Same command number as the PMU, could it be same syntax ? 41 + */ 42 + #define SMU_CMD_BATTERY_COMMAND 0x6f 43 + #define SMU_CMD_GET_BATTERY_INFO 0x00 44 + 45 + /* 46 + * Real time clock control 47 + * 48 + * This is a "mux", first data byte contains the "sub" command. 49 + * The "RTC" part of the SMU controls the date, time, powerup 50 + * timer, but also a PRAM 51 + * 52 + * Dates are in BCD format on 7 bytes: 53 + * [sec] [min] [hour] [weekday] [month day] [month] [year] 54 + * with month being 1 based and year minus 100 55 + */ 56 + #define SMU_CMD_RTC_COMMAND 0x8e 57 + #define SMU_CMD_RTC_SET_PWRUP_TIMER 0x00 /* i: 7 bytes date */ 58 + #define SMU_CMD_RTC_GET_PWRUP_TIMER 0x01 /* o: 7 bytes date */ 59 + #define SMU_CMD_RTC_STOP_PWRUP_TIMER 0x02 60 + #define SMU_CMD_RTC_SET_PRAM_BYTE_ACC 0x20 /* i: 1 byte (address?) */ 61 + #define SMU_CMD_RTC_SET_PRAM_AUTOINC 0x21 /* i: 1 byte (data?) 
*/ 62 + #define SMU_CMD_RTC_SET_PRAM_LO_BYTES 0x22 /* i: 10 bytes */ 63 + #define SMU_CMD_RTC_SET_PRAM_HI_BYTES 0x23 /* i: 10 bytes */ 64 + #define SMU_CMD_RTC_GET_PRAM_BYTE 0x28 /* i: 1 byte (address?) */ 65 + #define SMU_CMD_RTC_GET_PRAM_LO_BYTES 0x29 /* o: 10 bytes */ 66 + #define SMU_CMD_RTC_GET_PRAM_HI_BYTES 0x2a /* o: 10 bytes */ 67 + #define SMU_CMD_RTC_SET_DATETIME 0x80 /* i: 7 bytes date */ 68 + #define SMU_CMD_RTC_GET_DATETIME 0x81 /* o: 7 bytes date */ 69 + 70 + /* 71 + * i2c commands 72 + * 73 + * To issue an i2c command, the first step is to send a parameter block 74 + * to the SMU. This is a command of type 0x9a with 9 bytes of header 75 + * eventually followed by data for a write: 76 + * 77 + * 0: bus number (from device-tree usually, SMU has lots of buses!) 78 + * 1: transfer type/format (see below) 79 + * 2: device address. For combined and combined4 type transfers, this 80 + * is the "write" version of the address (bit 0x01 cleared) 81 + * 3: subaddress length (0..3) 82 + * 4: subaddress byte 0 (or only byte for subaddress length 1) 83 + * 5: subaddress byte 1 84 + * 6: subaddress byte 2 85 + * 7: combined address (device address for combined mode data phase) 86 + * 8: data length 87 + * 88 + * The transfer types are the same good old Apple ones it seems, 89 + * that is: 90 + * - 0x00: Simple transfer 91 + * - 0x01: Subaddress transfer (addr write + data tx, no restart) 92 + * - 0x02: Combined transfer (addr write + restart + data tx) 93 + * 94 + * This is then followed by actual data for a write. 95 + * 96 + * At this point, the OF driver seems to have a limitation on transfer 97 + * sizes of 0xd bytes on reads and 0x5 bytes on writes. I do not know 98 + * whether this is just an OF limit due to some temporary buffer size 99 + * or if this is an SMU-imposed limit. This driver has the same limitation 100 + * for now as I use a 0x10-byte temporary buffer as well. 101 + * 102 + * Once that is completed, a response is expected from the SMU.
This is 103 + * obtained via a command of type 0x9a with a length of 1 byte containing 104 + * 0 as the data byte. OF also fills the rest of the data buffer with 0xff's 105 + * though I can't tell yet if this is actually necessary. Once this command 106 + * is complete, at this point, all I can tell is what OF does. OF tests 107 + * byte 0 of the reply: 108 + * - on read, 0xfe or 0xfc : bus is busy, wait (see below) or nak ? 109 + * - on read, 0x00 or 0x01 : reply is in buffer (after the byte 0) 110 + * - on write, < 0 -> failure (immediate exit) 111 + * - else, OF just exists (without error, weird) 112 + * 113 + * So on read, there is this wait-for-busy thing when getting a 0xfc or 114 + * 0xfe result. OF does a loop of up to 64 retries, waiting 20ms and 115 + * doing the above again until either the retries expire or the result 116 + * is no longer 0xfe or 0xfc 117 + * 118 + * The Darwin I2C driver is less subtle though. On any non-success status 119 + * from the response command, it waits 5ms and tries again up to 20 times, 120 + * it doesn't differenciate between fatal errors or "busy" status. 121 + * 122 + * This driver provides an asynchronous paramblock based i2c command 123 + * interface to be used either directly by low level code or by a higher 124 + * level driver interfacing to the linux i2c layer. The current 125 + * implementation of this relies on working timers & timer interrupts 126 + * though, so be careful of calling context for now. This may be "fixed" 127 + * in the future by adding a polling facility. 128 + */ 129 + #define SMU_CMD_I2C_COMMAND 0x9a 130 + /* transfer types */ 131 + #define SMU_I2C_TRANSFER_SIMPLE 0x00 132 + #define SMU_I2C_TRANSFER_STDSUB 0x01 133 + #define SMU_I2C_TRANSFER_COMBINED 0x02 134 + 135 + /* 136 + * Power supply control 137 + * 138 + * The "sub" command is an ASCII string in the data, the 139 + * data lenght is that of the string. 140 + * 141 + * The VSLEW command can be used to get or set the voltage slewing. 
142 + * - length 5 (only "VSLEW"): it returns "DONE" and 3 bytes of 143 + * reply at data offset 6, 7 and 8. 144 + * - length 8 ("VSLEWxyz") has 3 additional bytes appended, and is 145 + * used to set the voltage slewing point. The SMU replies with "DONE". 146 + * I have yet to figure out the exact meaning of those 3 bytes in 147 + * both cases. 148 + * 149 + */ 150 + #define SMU_CMD_POWER_COMMAND 0xaa 151 + #define SMU_CMD_POWER_RESTART "RESTART" 152 + #define SMU_CMD_POWER_SHUTDOWN "SHUTDOWN" 153 + #define SMU_CMD_POWER_VOLTAGE_SLEW "VSLEW" 154 + 155 + /* Misc commands 156 + * 157 + * This command seems to be a grab bag of various things 158 + */ 159 + #define SMU_CMD_MISC_df_COMMAND 0xdf 160 + #define SMU_CMD_MISC_df_SET_DISPLAY_LIT 0x02 /* i: 1 byte */ 161 + #define SMU_CMD_MISC_df_NMI_OPTION 0x04 162 + 163 + /* 164 + * Version info commands 165 + * 166 + * I haven't quite tried to figure out how these work 167 + */ 168 + #define SMU_CMD_VERSION_COMMAND 0xea 169 + 170 + 171 + /* 172 + * Misc commands 173 + * 174 + * This command seems to be a grab bag of various things 175 + */ 176 + #define SMU_CMD_MISC_ee_COMMAND 0xee 177 + #define SMU_CMD_MISC_ee_GET_DATABLOCK_REC 0x02 178 + #define SMU_CMD_MISC_ee_LEDS_CTRL 0x04 /* i: 00 (00,01) [00] */ 179 + #define SMU_CMD_MISC_ee_GET_DATA 0x05 /* i: 00, o: ??
*/ 180 + 181 + 182 + 183 + /* 184 + * - Kernel side interface - 185 + */ 186 + 187 + #ifdef __KERNEL__ 188 + 189 + /* 190 + * Asynchronous SMU commands 191 + * 192 + * Fill up this structure and submit it via smu_queue_command(), 193 + * and get notified by the optional done() callback, or because 194 + * status becomes != 1 195 + */ 196 + 197 + struct smu_cmd; 198 + 199 + struct smu_cmd 200 + { 201 + /* public */ 202 + u8 cmd; /* command */ 203 + int data_len; /* data len */ 204 + int reply_len; /* reply len */ 205 + void *data_buf; /* data buffer */ 206 + void *reply_buf; /* reply buffer */ 207 + int status; /* command status */ 208 + void (*done)(struct smu_cmd *cmd, void *misc); 209 + void *misc; 210 + 211 + /* private */ 212 + struct list_head link; 213 + }; 214 + 215 + /* 216 + * Queues an SMU command, all fields have to be initialized 217 + */ 218 + extern int smu_queue_cmd(struct smu_cmd *cmd); 219 + 220 + /* 221 + * Simple command wrapper. This structure embeds a small buffer 222 + * to ease sending simple SMU commands from the stack 223 + */ 224 + struct smu_simple_cmd 225 + { 226 + struct smu_cmd cmd; 227 + u8 buffer[16]; 228 + }; 229 + 230 + /* 231 + * Queues a simple command. All fields will be initialized by that 232 + * function 233 + */ 234 + extern int smu_queue_simple(struct smu_simple_cmd *scmd, u8 command, 235 + unsigned int data_len, 236 + void (*done)(struct smu_cmd *cmd, void *misc), 237 + void *misc, 238 + ...); 239 + 240 + /* 241 + * Completion helper. Pass it to smu_queue_simple or as 'done' 242 + * member to smu_queue_cmd, it will call complete() on the struct 243 + * completion passed in the "misc" argument 244 + */ 245 + extern void smu_done_complete(struct smu_cmd *cmd, void *misc); 246 + 247 + /* 248 + * Synchronous helpers. 
Will spin-wait for completion of a command 249 + */ 250 + extern void smu_spinwait_cmd(struct smu_cmd *cmd); 251 + 252 + static inline void smu_spinwait_simple(struct smu_simple_cmd *scmd) 253 + { 254 + smu_spinwait_cmd(&scmd->cmd); 255 + } 256 + 257 + /* 258 + * Poll routine to call if blocked with irqs off 259 + */ 260 + extern void smu_poll(void); 261 + 262 + 263 + /* 264 + * Init routine, presence check.... 10 265 */ 11 266 extern int smu_init(void); 12 267 extern int smu_present(void); 268 + struct of_device; 269 + extern struct of_device *smu_get_ofdev(void); 270 + 271 + 272 + /* 273 + * Common command wrappers 274 + */ 13 275 extern void smu_shutdown(void); 14 276 extern void smu_restart(void); 15 - extern int smu_get_rtc_time(struct rtc_time *time); 16 - extern int smu_set_rtc_time(struct rtc_time *time); 277 + struct rtc_time; 278 + extern int smu_get_rtc_time(struct rtc_time *time, int spinwait); 279 + extern int smu_set_rtc_time(struct rtc_time *time, int spinwait); 17 280 18 281 /* 19 282 * SMU command buffer absolute address, exported by pmac_setup, 20 283 * this is allocated very early during boot. 
21 284 */ 22 285 extern unsigned long smu_cmdbuf_abs; 286 + 287 + 288 + /* 289 + * Kernel asynchronous i2c interface 290 + */ 291 + 292 + /* SMU i2c header, exactly matches i2c header on wire */ 293 + struct smu_i2c_param 294 + { 295 + u8 bus; /* SMU bus ID (from device tree) */ 296 + u8 type; /* i2c transfer type */ 297 + u8 devaddr; /* device address (includes direction) */ 298 + u8 sublen; /* subaddress length */ 299 + u8 subaddr[3]; /* subaddress */ 300 + u8 caddr; /* combined address, filled by SMU driver */ 301 + u8 datalen; /* length of transfer */ 302 + u8 data[7]; /* data */ 303 + }; 304 + 305 + #define SMU_I2C_READ_MAX 0x0d 306 + #define SMU_I2C_WRITE_MAX 0x05 307 + 308 + struct smu_i2c_cmd 309 + { 310 + /* public */ 311 + struct smu_i2c_param info; 312 + void (*done)(struct smu_i2c_cmd *cmd, void *misc); 313 + void *misc; 314 + int status; /* 1 = pending, 0 = ok, <0 = fail */ 315 + 316 + /* private */ 317 + struct smu_cmd scmd; 318 + int read; 319 + int stage; 320 + int retries; 321 + u8 pdata[0x10]; 322 + struct list_head link; 323 + }; 324 + 325 + /* 326 + * Call this to queue an i2c command to the SMU. You must fill info, 327 + * including info.data for a write, done and misc. 328 + * For now, no polling interface is provided so you have to use the 329 + * completion callback. 330 + */ 331 + extern int smu_queue_i2c(struct smu_i2c_cmd *cmd); 332 + 333 + 334 + #endif /* __KERNEL__ */ 335 + 336 + /* 337 + * - Userland interface - 338 + */ 339 + 340 + /* 341 + * A given instance of the device can be configured for 2 different 342 + * things at the moment: 343 + * 344 + * - sending SMU commands (default at open() time) 345 + * - receiving SMU events (not yet implemented) 346 + * 347 + * Commands are written with write() of a command block. They can be 348 + * "driver" commands (for example to switch to event reception mode) 349 + * or real SMU commands. They are made of a header followed by command 350 + * data if any.
351 + * 352 + * For SMU commands (not for driver commands), you can then read() back 353 + * a reply. The reader will be blocked or not depending on how the device 354 + * file is opened. poll() isn't implemented yet. The reply will consist 355 + * of a header as well, followed by the reply data if any. You should 356 + * always provide a buffer large enough for the maximum reply data; I 357 + * recommend one page. 358 + * 359 + * It is illegal to send SMU commands through a file descriptor configured 360 + * for event reception. 361 + * 362 + */ 363 + struct smu_user_cmd_hdr 364 + { 365 + __u32 cmdtype; 366 + #define SMU_CMDTYPE_SMU 0 /* SMU command */ 367 + #define SMU_CMDTYPE_WANTS_EVENTS 1 /* switch fd to events mode */ 368 + 369 + __u8 cmd; /* SMU command byte */ 370 + __u32 data_len; /* Length of data following */ 371 + }; 372 + 373 + struct smu_user_reply_hdr 374 + { 375 + __u32 status; /* Command status */ 376 + __u32 reply_len; /* Length of data following */ 377 + }; 378 + 379 + #endif /* _SMU_H */
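The 9-byte i2c parameter block described in the smu.h comment above maps one-to-one onto struct smu_i2c_param. A user-space sketch of filling it for a combined-mode read (a compiling stand-in struct with stdint types; the bus and device numbers are invented, and in the real driver caddr is filled in by the SMU code itself, shown here only to illustrate the layout):

```c
#include <stdint.h>
#include <string.h>

/* Stand-in mirroring the smu_i2c_param wire layout added above. */
struct i2c_param {
	uint8_t bus, type, devaddr, sublen;
	uint8_t subaddr[3];
	uint8_t caddr, datalen;
};

#define TRANSFER_COMBINED 0x02	/* matches SMU_I2C_TRANSFER_COMBINED */

/* Fill the block for a combined transfer (addr write + restart + read):
 * devaddr carries the "write" form of the 7-bit address (bit 0 clear),
 * caddr the "read" form used for the data phase. */
void fill_combined_read(struct i2c_param *p, uint8_t bus,
                        uint8_t addr7, uint8_t sub, uint8_t len)
{
	memset(p, 0, sizeof(*p));
	p->bus = bus;
	p->type = TRANSFER_COMBINED;
	p->devaddr = (uint8_t)(addr7 << 1);	/* write address */
	p->sublen = 1;
	p->subaddr[0] = sub;
	p->caddr = (uint8_t)((addr7 << 1) | 1);	/* read address */
	p->datalen = len;	/* <= SMU_I2C_READ_MAX (0x0d) on reads */
}
```

The length cap reflects the 0xd-byte read / 0x5-byte write limits the comment describes.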
+1 -1
include/asm-s390/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-sh/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-sh64/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-sparc/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
-7
include/asm-sparc64/cacheflush.h
··· 4 4 #include <linux/config.h> 5 5 #include <asm/page.h> 6 6 7 - /* Flushing for D-cache alias handling is only needed if 8 - * the page size is smaller than 16K. 9 - */ 10 - #if PAGE_SHIFT < 14 11 - #define DCACHE_ALIASING_POSSIBLE 12 - #endif 13 - 14 7 #ifndef __ASSEMBLY__ 15 8 16 9 #include <linux/mm.h>
+1 -1
include/asm-sparc64/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1
include/asm-sparc64/ide.h
··· 15 15 #include <asm/io.h> 16 16 #include <asm/spitfire.h> 17 17 #include <asm/cacheflush.h> 18 + #include <asm/page.h> 18 19 19 20 #ifndef MAX_HWIFS 20 21 # ifdef CONFIG_BLK_DEV_IDEPCI
+7
include/asm-sparc64/page.h
··· 21 21 #define PAGE_SIZE (_AC(1,UL) << PAGE_SHIFT) 22 22 #define PAGE_MASK (~(PAGE_SIZE-1)) 23 23 24 + /* Flushing for D-cache alias handling is only needed if 25 + * the page size is smaller than 16K. 26 + */ 27 + #if PAGE_SHIFT < 14 28 + #define DCACHE_ALIASING_POSSIBLE 29 + #endif 30 + 24 31 #ifdef __KERNEL__ 25 32 26 33 #ifndef __ASSEMBLY__
+1
include/asm-sparc64/pgalloc.h
··· 10 10 #include <asm/spitfire.h> 11 11 #include <asm/cpudata.h> 12 12 #include <asm/cacheflush.h> 13 + #include <asm/page.h> 13 14 14 15 /* Page table allocation/freeing. */ 15 16 #ifdef CONFIG_SMP
+11 -9
include/asm-sparc64/pgtable.h
··· 24 24 #include <asm/processor.h> 25 25 #include <asm/const.h> 26 26 27 - /* The kernel image occupies 0x4000000 to 0x1000000 (4MB --> 16MB). 28 - * The page copy blockops use 0x1000000 to 0x18000000 (16MB --> 24MB). 27 + /* The kernel image occupies 0x4000000 to 0x1000000 (4MB --> 32MB). 28 + * The page copy blockops can use 0x2000000 to 0x10000000. 29 29 * The PROM resides in an area spanning 0xf0000000 to 0x100000000. 30 - * The vmalloc area spans 0x140000000 to 0x200000000. 30 + * The vmalloc area spans 0x100000000 to 0x200000000. 31 + * Since modules need to be in the lowest 32-bits of the address space, 32 + * we place them right before the OBP area from 0x10000000 to 0xf0000000. 31 33 * There is a single static kernel PMD which maps from 0x0 to address 32 34 * 0x400000000. 33 35 */ 34 - #define TLBTEMP_BASE _AC(0x0000000001000000,UL) 35 - #define MODULES_VADDR _AC(0x0000000002000000,UL) 36 - #define MODULES_LEN _AC(0x000000007e000000,UL) 37 - #define MODULES_END _AC(0x0000000080000000,UL) 38 - #define VMALLOC_START _AC(0x0000000140000000,UL) 39 - #define VMALLOC_END _AC(0x0000000200000000,UL) 36 + #define TLBTEMP_BASE _AC(0x0000000002000000,UL) 37 + #define MODULES_VADDR _AC(0x0000000010000000,UL) 38 + #define MODULES_LEN _AC(0x00000000e0000000,UL) 39 + #define MODULES_END _AC(0x00000000f0000000,UL) 40 40 #define LOW_OBP_ADDRESS _AC(0x00000000f0000000,UL) 41 41 #define HI_OBP_ADDRESS _AC(0x0000000100000000,UL) 42 + #define VMALLOC_START _AC(0x0000000100000000,UL) 43 + #define VMALLOC_END _AC(0x0000000200000000,UL) 42 44 43 45 /* XXX All of this needs to be rethought so we can take advantage 44 46 * XXX cheetah's full 64-bit virtual address space, ie. no more hole
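The rearranged sparc64 layout above can be sanity-checked arithmetically: the module area must end exactly where the low OBP window starts, and vmalloc must begin exactly where that window ends. The constants below are copied from the hunk (the _AC() wrapper dropped and ULL used so this builds in user space):

```c
/* Address-space constants from the new sparc64 pgtable.h hunk above. */
#define TLBTEMP_BASE    0x0000000002000000ULL
#define MODULES_VADDR   0x0000000010000000ULL
#define MODULES_LEN     0x00000000e0000000ULL
#define MODULES_END     0x00000000f0000000ULL
#define LOW_OBP_ADDRESS 0x00000000f0000000ULL
#define HI_OBP_ADDRESS  0x0000000100000000ULL
#define VMALLOC_START   0x0000000100000000ULL
#define VMALLOC_END     0x0000000200000000ULL

/* The regions named in the comment are adjacent in ascending order:
 * modules | OBP window | vmalloc. */
int layout_is_consistent(void)
{
	return MODULES_VADDR + MODULES_LEN == MODULES_END
	    && MODULES_END == LOW_OBP_ADDRESS
	    && HI_OBP_ADDRESS == VMALLOC_START
	    && VMALLOC_START < VMALLOC_END;
}
```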
+5 -46
include/asm-um/futex.h
··· 1 - #ifndef _ASM_FUTEX_H 2 - #define _ASM_FUTEX_H 3 - 4 - #ifdef __KERNEL__ 1 + #ifndef __UM_FUTEX_H 2 + #define __UM_FUTEX_H 5 3 6 4 #include <linux/futex.h> 7 5 #include <asm/errno.h> 6 + #include <asm/system.h> 7 + #include <asm/processor.h> 8 8 #include <asm/uaccess.h> 9 9 10 - static inline int 11 - futex_atomic_op_inuser (int encoded_op, int __user *uaddr) 12 - { 13 - int op = (encoded_op >> 28) & 7; 14 - int cmp = (encoded_op >> 24) & 15; 15 - int oparg = (encoded_op << 8) >> 20; 16 - int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 18 - if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 - oparg = 1 << oparg; 10 + #include "asm/arch/futex.h" 20 11 21 - if (! access_ok (VERIFY_WRITE, uaddr, sizeof(int))) 22 - return -EFAULT; 23 - 24 - inc_preempt_count(); 25 - 26 - switch (op) { 27 - case FUTEX_OP_SET: 28 - case FUTEX_OP_ADD: 29 - case FUTEX_OP_OR: 30 - case FUTEX_OP_ANDN: 31 - case FUTEX_OP_XOR: 32 - default: 33 - ret = -ENOSYS; 34 - } 35 - 36 - dec_preempt_count(); 37 - 38 - if (!ret) { 39 - switch (cmp) { 40 - case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break; 41 - case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break; 42 - case FUTEX_OP_CMP_LT: ret = (oldval < cmparg); break; 43 - case FUTEX_OP_CMP_GE: ret = (oldval >= cmparg); break; 44 - case FUTEX_OP_CMP_LE: ret = (oldval <= cmparg); break; 45 - case FUTEX_OP_CMP_GT: ret = (oldval > cmparg); break; 46 - default: ret = -ENOSYS; 47 - } 48 - } 49 - return ret; 50 - } 51 - 52 - #endif 53 12 #endif
-1
include/asm-um/pgtable.h
··· 346 346 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot) 347 347 { 348 348 pte_set_val(pte, (pte_val(pte) & _PAGE_CHG_MASK), newprot); 349 - if(pte_present(pte)) pte = pte_mknewpage(pte_mknewprot(pte)); 350 349 return pte; 351 350 } 352 351
+1 -1
include/asm-v850/futex.h
··· 14 14 int cmp = (encoded_op >> 24) & 15; 15 15 int oparg = (encoded_op << 8) >> 20; 16 16 int cmparg = (encoded_op << 20) >> 20; 17 - int oldval = 0, ret, tem; 17 + int oldval = 0, ret; 18 18 if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28)) 19 19 oparg = 1 << oparg; 20 20
+1 -1
include/asm-xtensa/atomic.h
··· 22 22 #include <asm/processor.h> 23 23 #include <asm/system.h> 24 24 25 - #define ATOMIC_INIT(i) ( (atomic_t) { (i) } ) 25 + #define ATOMIC_INIT(i) { (i) } 26 26 27 27 /* 28 28 * This Xtensa implementation assumes that the right mechanism
+1 -1
include/asm-xtensa/bitops.h
··· 174 174 return 1UL & (((const volatile unsigned int *)addr)[nr>>5] >> (nr&31)); 175 175 } 176 176 177 - #if XCHAL_HAVE_NSAU 177 + #if XCHAL_HAVE_NSA 178 178 179 179 static __inline__ int __cntlz (unsigned long x) 180 180 {
+1
include/asm-xtensa/hardirq.h
··· 23 23 unsigned int __nmi_count; /* arch dependent */ 24 24 } ____cacheline_aligned irq_cpustat_t; 25 25 26 + void ack_bad_irq(unsigned int irq); 26 27 #include <linux/irq_cpustat.h> /* Standard mappings for irq_cpustat_t above */ 27 28 28 29 #endif /* _XTENSA_HARDIRQ_H */
+11 -38
include/asm-xtensa/semaphore.h
··· 20 20 atomic_t count; 21 21 int sleepers; 22 22 wait_queue_head_t wait; 23 - #if WAITQUEUE_DEBUG 24 - long __magic; 25 - #endif 26 23 }; 27 24 28 - #if WAITQUEUE_DEBUG 29 - # define __SEM_DEBUG_INIT(name) \ 30 - , (int)&(name).__magic 31 - #else 32 - # define __SEM_DEBUG_INIT(name) 33 - #endif 25 + #define __SEMAPHORE_INITIALIZER(name,n) \ 26 + { \ 27 + .count = ATOMIC_INIT(n), \ 28 + .sleepers = 0, \ 29 + .wait = __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \ 30 + } 34 31 35 - #define __SEMAPHORE_INITIALIZER(name,count) \ 36 - { ATOMIC_INIT(count), \ 37 - 0, \ 38 - __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \ 39 - __SEM_DEBUG_INIT(name) } 40 - 41 - #define __MUTEX_INITIALIZER(name) \ 32 + #define __MUTEX_INITIALIZER(name) \ 42 33 __SEMAPHORE_INITIALIZER(name, 1) 43 34 44 - #define __DECLARE_SEMAPHORE_GENERIC(name,count) \ 35 + #define __DECLARE_SEMAPHORE_GENERIC(name,count) \ 45 36 struct semaphore name = __SEMAPHORE_INITIALIZER(name,count) 46 37 47 38 #define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name,1) ··· 40 49 41 50 static inline void sema_init (struct semaphore *sem, int val) 42 51 { 43 - /* 44 - * *sem = (struct semaphore)__SEMAPHORE_INITIALIZER((*sem),val); 45 - * 46 - * i'd rather use the more flexible initialization above, but sadly 47 - * GCC 2.7.2.3 emits a bogus warning. EGCS doesnt. Oh well. 
48 - */ 49 52 atomic_set(&sem->count, val); 50 53 init_waitqueue_head(&sem->wait); 51 - #if WAITQUEUE_DEBUG 52 - sem->__magic = (int)&sem->__magic; 53 - #endif 54 54 } 55 55 56 56 static inline void init_MUTEX (struct semaphore *sem) ··· 63 81 64 82 static inline void down(struct semaphore * sem) 65 83 { 66 - #if WAITQUEUE_DEBUG 67 - CHECK_MAGIC(sem->__magic); 68 - #endif 84 + might_sleep(); 69 85 70 86 if (atomic_sub_return(1, &sem->count) < 0) 71 87 __down(sem); ··· 72 92 static inline int down_interruptible(struct semaphore * sem) 73 93 { 74 94 int ret = 0; 75 - #if WAITQUEUE_DEBUG 76 - CHECK_MAGIC(sem->__magic); 77 - #endif 95 + 96 + might_sleep(); 78 97 79 98 if (atomic_sub_return(1, &sem->count) < 0) 80 99 ret = __down_interruptible(sem); ··· 83 104 static inline int down_trylock(struct semaphore * sem) 84 105 { 85 106 int ret = 0; 86 - #if WAITQUEUE_DEBUG 87 - CHECK_MAGIC(sem->__magic); 88 - #endif 89 107 90 108 if (atomic_sub_return(1, &sem->count) < 0) 91 109 ret = __down_trylock(sem); ··· 95 119 */ 96 120 static inline void up(struct semaphore * sem) 97 121 { 98 - #if WAITQUEUE_DEBUG 99 - CHECK_MAGIC(sem->__magic); 100 - #endif 101 122 if (atomic_add_return(1, &sem->count) <= 0) 102 123 __up(sem); 103 124 }
-16
include/asm-xtensa/system.h
··· 189 189 190 190 #define tas(ptr) (xchg((ptr),1)) 191 191 192 - #if ( __XCC__ == 1 ) 193 - 194 - /* xt-xcc processes __inline__ differently than xt-gcc and decides to 195 - * insert an out-of-line copy of function __xchg. This presents the 196 - * unresolved symbol at link time of __xchg_called_with_bad_pointer, 197 - * even though such a function would never be called at run-time. 198 - * xt-gcc always inlines __xchg, and optimizes away the undefined 199 - * bad_pointer function. 200 - */ 201 - 202 - #define xchg(ptr,x) xchg_u32(ptr,x) 203 - 204 - #else /* assume xt-gcc */ 205 - 206 192 #define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)))) 207 193 208 194 /* ··· 209 223 __xchg_called_with_bad_pointer(); 210 224 return x; 211 225 } 212 - 213 - #endif 214 226 215 227 extern void set_except_vector(int n, void *addr); 216 228
+4
include/linux/byteorder/generic.h
··· 5 5 * linux/byteorder_generic.h 6 6 * Generic Byte-reordering support 7 7 * 8 + * The "... p" macros, like le64_to_cpup, can be used with pointers 9 + * to unaligned data, but there will be a performance penalty on 10 + * some architectures. Use get_unaligned for unaligned data. 11 + * 8 12 * Francois-Rene Rideau <fare@tunes.org> 19970707 9 13 * gathered all the good ideas from all asm-foo/byteorder.h into one file, 10 14 * cleaned them up.
+5
include/linux/device.h
··· 317 317 dev->driver_data = data; 318 318 } 319 319 320 + static inline int device_is_registered(struct device *dev) 321 + { 322 + return klist_node_attached(&dev->knode_bus); 323 + } 324 + 320 325 /* 321 326 * High level routines for use by the bus drivers 322 327 */
+4 -4
include/linux/if_vlan.h
··· 42 42 struct vlan_ethhdr { 43 43 unsigned char h_dest[ETH_ALEN]; /* destination eth addr */ 44 44 unsigned char h_source[ETH_ALEN]; /* source ether addr */ 45 - unsigned short h_vlan_proto; /* Should always be 0x8100 */ 46 - unsigned short h_vlan_TCI; /* Encapsulates priority and VLAN ID */ 45 + __be16 h_vlan_proto; /* Should always be 0x8100 */ 46 + __be16 h_vlan_TCI; /* Encapsulates priority and VLAN ID */ 47 47 unsigned short h_vlan_encapsulated_proto; /* packet type ID field (or len) */ 48 48 }; 49 49 ··· 55 55 } 56 56 57 57 struct vlan_hdr { 58 - unsigned short h_vlan_TCI; /* Encapsulates priority and VLAN ID */ 59 - unsigned short h_vlan_encapsulated_proto; /* packet type ID field (or len) */ 58 + __be16 h_vlan_TCI; /* Encapsulates priority and VLAN ID */ 59 + __be16 h_vlan_encapsulated_proto; /* packet type ID field (or len) */ 60 60 }; 61 61 62 62 #define VLAN_VID_MASK 0xfff
+1
include/linux/libata.h
··· 393 393 extern void ata_pci_remove_one (struct pci_dev *pdev); 394 394 #endif /* CONFIG_PCI */ 395 395 extern int ata_device_add(struct ata_probe_ent *ent); 396 + extern void ata_host_set_remove(struct ata_host_set *host_set); 396 397 extern int ata_scsi_detect(Scsi_Host_Template *sht); 397 398 extern int ata_scsi_ioctl(struct scsi_device *dev, int cmd, void __user *arg); 398 399 extern int ata_scsi_queuecmd(struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *));
+5 -5
include/linux/mm.h
··· 136 136 #define VM_EXEC 0x00000004 137 137 #define VM_SHARED 0x00000008 138 138 139 + /* mprotect() hardcodes VM_MAYREAD >> 4 == VM_READ, and so for r/w/x bits. */ 139 140 #define VM_MAYREAD 0x00000010 /* limits for mprotect() etc */ 140 141 #define VM_MAYWRITE 0x00000020 141 142 #define VM_MAYEXEC 0x00000040 ··· 351 350 * only one copy in memory, at most, normally. 352 351 * 353 352 * For the non-reserved pages, page_count(page) denotes a reference count. 354 - * page_count() == 0 means the page is free. 353 + * page_count() == 0 means the page is free. page->lru is then used for 354 + * freelist management in the buddy allocator. 355 355 * page_count() == 1 means the page is used for exactly one purpose 356 356 * (e.g. a private data page of one process). 357 357 * ··· 378 376 * attaches, plus 1 if `private' contains something, plus one for 379 377 * the page cache itself. 380 378 * 381 - * All pages belonging to an inode are in these doubly linked lists: 382 - * mapping->clean_pages, mapping->dirty_pages and mapping->locked_pages; 383 - * using the page->list list_head. These fields are also used for 384 - * freelist managemet (when page_count()==0). 379 + * Instead of keeping dirty/clean pages in per address-space lists, we instead 380 + * now tag pages as dirty/under writeback in the radix tree. 385 381 * 386 382 * There is also a per-mapping radix tree mapping index to the page 387 383 * in memory if present. The tree is rooted at mapping->root.
+34 -5
include/linux/netfilter_ipv4/ip_conntrack.h
··· 133 133 134 134 #include <linux/netfilter_ipv4/ip_conntrack_tcp.h> 135 135 #include <linux/netfilter_ipv4/ip_conntrack_icmp.h> 136 + #include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h> 136 137 #include <linux/netfilter_ipv4/ip_conntrack_sctp.h> 137 138 138 139 /* per conntrack: protocol private data */ 139 140 union ip_conntrack_proto { 140 141 /* insert conntrack proto private data here */ 142 + struct ip_ct_gre gre; 141 143 struct ip_ct_sctp sctp; 142 144 struct ip_ct_tcp tcp; 143 145 struct ip_ct_icmp icmp; ··· 150 148 }; 151 149 152 150 /* Add protocol helper include file here */ 151 + #include <linux/netfilter_ipv4/ip_conntrack_pptp.h> 153 152 #include <linux/netfilter_ipv4/ip_conntrack_amanda.h> 154 153 #include <linux/netfilter_ipv4/ip_conntrack_ftp.h> 155 154 #include <linux/netfilter_ipv4/ip_conntrack_irc.h> ··· 158 155 /* per conntrack: application helper private data */ 159 156 union ip_conntrack_help { 160 157 /* insert conntrack helper private data (master) here */ 158 + struct ip_ct_pptp_master ct_pptp_info; 161 159 struct ip_ct_ftp_master ct_ftp_info; 162 160 struct ip_ct_irc_master ct_irc_info; 163 161 }; 164 162 165 163 #ifdef CONFIG_IP_NF_NAT_NEEDED 166 164 #include <linux/netfilter_ipv4/ip_nat.h> 165 + #include <linux/netfilter_ipv4/ip_nat_pptp.h> 166 + 167 + /* per conntrack: nat application helper private data */ 168 + union ip_conntrack_nat_help { 169 + /* insert nat helper private data here */ 170 + struct ip_nat_pptp nat_pptp_info; 171 + }; 167 172 #endif 168 173 169 174 #include <linux/types.h> ··· 234 223 #ifdef CONFIG_IP_NF_NAT_NEEDED 235 224 struct { 236 225 struct ip_nat_info info; 226 + union ip_conntrack_nat_help help; 237 227 #if defined(CONFIG_IP_NF_TARGET_MASQUERADE) || \ 238 228 defined(CONFIG_IP_NF_TARGET_MASQUERADE_MODULE) 239 229 int masq_index; ··· 332 320 extern int invert_tuplepr(struct ip_conntrack_tuple *inverse, 333 321 const struct ip_conntrack_tuple *orig); 334 322 323 + extern void __ip_ct_refresh_acct(struct 
ip_conntrack *ct, 324 + enum ip_conntrack_info ctinfo, 325 + const struct sk_buff *skb, 326 + unsigned long extra_jiffies, 327 + int do_acct); 328 + 329 + /* Refresh conntrack for this many jiffies and do accounting */ 330 + static inline void ip_ct_refresh_acct(struct ip_conntrack *ct, 331 + enum ip_conntrack_info ctinfo, 332 + const struct sk_buff *skb, 333 + unsigned long extra_jiffies) 334 + { 335 + __ip_ct_refresh_acct(ct, ctinfo, skb, extra_jiffies, 1); 336 + } 337 + 335 338 /* Refresh conntrack for this many jiffies */ 336 - extern void ip_ct_refresh_acct(struct ip_conntrack *ct, 337 - enum ip_conntrack_info ctinfo, 338 - const struct sk_buff *skb, 339 - unsigned long extra_jiffies); 339 + static inline void ip_ct_refresh(struct ip_conntrack *ct, 340 + const struct sk_buff *skb, 341 + unsigned long extra_jiffies) 342 + { 343 + __ip_ct_refresh_acct(ct, 0, skb, extra_jiffies, 0); 344 + } 340 345 341 346 /* These are for NAT. Icky. */ 342 347 /* Update TCP window tracking data when NAT mangles the packet */ ··· 401 372 __ip_conntrack_expect_find(const struct ip_conntrack_tuple *tuple); 402 373 403 374 extern struct ip_conntrack_expect * 404 - ip_conntrack_expect_find_get(const struct ip_conntrack_tuple *tuple); 375 + ip_conntrack_expect_find(const struct ip_conntrack_tuple *tuple); 405 376 406 377 extern struct ip_conntrack_tuple_hash * 407 378 __ip_conntrack_find(const struct ip_conntrack_tuple *tuple,
+325
include/linux/netfilter_ipv4/ip_conntrack_pptp.h
··· 1 + /* PPTP constants and structs */ 2 + #ifndef _CONNTRACK_PPTP_H 3 + #define _CONNTRACK_PPTP_H 4 + 5 + /* state of the control session */ 6 + enum pptp_ctrlsess_state { 7 + PPTP_SESSION_NONE, /* no session present */ 8 + PPTP_SESSION_ERROR, /* some session error */ 9 + PPTP_SESSION_STOPREQ, /* stop_sess request seen */ 10 + PPTP_SESSION_REQUESTED, /* start_sess request seen */ 11 + PPTP_SESSION_CONFIRMED, /* session established */ 12 + }; 13 + 14 + /* state of the call inside the control session */ 15 + enum pptp_ctrlcall_state { 16 + PPTP_CALL_NONE, 17 + PPTP_CALL_ERROR, 18 + PPTP_CALL_OUT_REQ, 19 + PPTP_CALL_OUT_CONF, 20 + PPTP_CALL_IN_REQ, 21 + PPTP_CALL_IN_REP, 22 + PPTP_CALL_IN_CONF, 23 + PPTP_CALL_CLEAR_REQ, 24 + }; 25 + 26 + 27 + /* conntrack private data */ 28 + struct ip_ct_pptp_master { 29 + enum pptp_ctrlsess_state sstate; /* session state */ 30 + 31 + /* everything below is going to be per-expectation in newnat, 32 + * since there could be more than one call within one session */ 33 + enum pptp_ctrlcall_state cstate; /* call state */ 34 + u_int16_t pac_call_id; /* call id of PAC, host byte order */ 35 + u_int16_t pns_call_id; /* call id of PNS, host byte order */ 36 + 37 + /* in pre-2.6.11 this used to be per-expect. 
Now it is per-conntrack 38 + * and therefore imposes a fixed limit on the number of maps */ 39 + struct ip_ct_gre_keymap *keymap_orig, *keymap_reply; 40 + }; 41 + 42 + /* conntrack_expect private member */ 43 + struct ip_ct_pptp_expect { 44 + enum pptp_ctrlcall_state cstate; /* call state */ 45 + u_int16_t pac_call_id; /* call id of PAC */ 46 + u_int16_t pns_call_id; /* call id of PNS */ 47 + }; 48 + 49 + 50 + #ifdef __KERNEL__ 51 + 52 + #define IP_CONNTR_PPTP PPTP_CONTROL_PORT 53 + 54 + #define PPTP_CONTROL_PORT 1723 55 + 56 + #define PPTP_PACKET_CONTROL 1 57 + #define PPTP_PACKET_MGMT 2 58 + 59 + #define PPTP_MAGIC_COOKIE 0x1a2b3c4d 60 + 61 + struct pptp_pkt_hdr { 62 + __u16 packetLength; 63 + __be16 packetType; 64 + __be32 magicCookie; 65 + }; 66 + 67 + /* PptpControlMessageType values */ 68 + #define PPTP_START_SESSION_REQUEST 1 69 + #define PPTP_START_SESSION_REPLY 2 70 + #define PPTP_STOP_SESSION_REQUEST 3 71 + #define PPTP_STOP_SESSION_REPLY 4 72 + #define PPTP_ECHO_REQUEST 5 73 + #define PPTP_ECHO_REPLY 6 74 + #define PPTP_OUT_CALL_REQUEST 7 75 + #define PPTP_OUT_CALL_REPLY 8 76 + #define PPTP_IN_CALL_REQUEST 9 77 + #define PPTP_IN_CALL_REPLY 10 78 + #define PPTP_IN_CALL_CONNECT 11 79 + #define PPTP_CALL_CLEAR_REQUEST 12 80 + #define PPTP_CALL_DISCONNECT_NOTIFY 13 81 + #define PPTP_WAN_ERROR_NOTIFY 14 82 + #define PPTP_SET_LINK_INFO 15 83 + 84 + #define PPTP_MSG_MAX 15 85 + 86 + /* PptpGeneralError values */ 87 + #define PPTP_ERROR_CODE_NONE 0 88 + #define PPTP_NOT_CONNECTED 1 89 + #define PPTP_BAD_FORMAT 2 90 + #define PPTP_BAD_VALUE 3 91 + #define PPTP_NO_RESOURCE 4 92 + #define PPTP_BAD_CALLID 5 93 + #define PPTP_REMOVE_DEVICE_ERROR 6 94 + 95 + struct PptpControlHeader { 96 + __be16 messageType; 97 + __u16 reserved; 98 + }; 99 + 100 + /* FramingCapability Bitmap Values */ 101 + #define PPTP_FRAME_CAP_ASYNC 0x1 102 + #define PPTP_FRAME_CAP_SYNC 0x2 103 + 104 + /* BearerCapability Bitmap Values */ 105 + #define PPTP_BEARER_CAP_ANALOG 0x1 106 + #define 
PPTP_BEARER_CAP_DIGITAL 0x2 107 + 108 + struct PptpStartSessionRequest { 109 + __be16 protocolVersion; 110 + __u8 reserved1; 111 + __u8 reserved2; 112 + __be32 framingCapability; 113 + __be32 bearerCapability; 114 + __be16 maxChannels; 115 + __be16 firmwareRevision; 116 + __u8 hostName[64]; 117 + __u8 vendorString[64]; 118 + }; 119 + 120 + /* PptpStartSessionResultCode Values */ 121 + #define PPTP_START_OK 1 122 + #define PPTP_START_GENERAL_ERROR 2 123 + #define PPTP_START_ALREADY_CONNECTED 3 124 + #define PPTP_START_NOT_AUTHORIZED 4 125 + #define PPTP_START_UNKNOWN_PROTOCOL 5 126 + 127 + struct PptpStartSessionReply { 128 + __be16 protocolVersion; 129 + __u8 resultCode; 130 + __u8 generalErrorCode; 131 + __be32 framingCapability; 132 + __be32 bearerCapability; 133 + __be16 maxChannels; 134 + __be16 firmwareRevision; 135 + __u8 hostName[64]; 136 + __u8 vendorString[64]; 137 + }; 138 + 139 + /* PptpStopReasons */ 140 + #define PPTP_STOP_NONE 1 141 + #define PPTP_STOP_PROTOCOL 2 142 + #define PPTP_STOP_LOCAL_SHUTDOWN 3 143 + 144 + struct PptpStopSessionRequest { 145 + __u8 reason; 146 + }; 147 + 148 + /* PptpStopSessionResultCode */ 149 + #define PPTP_STOP_OK 1 150 + #define PPTP_STOP_GENERAL_ERROR 2 151 + 152 + struct PptpStopSessionReply { 153 + __u8 resultCode; 154 + __u8 generalErrorCode; 155 + }; 156 + 157 + struct PptpEchoRequest { 158 + __be32 identNumber; 159 + }; 160 + 161 + /* PptpEchoReplyResultCode */ 162 + #define PPTP_ECHO_OK 1 163 + #define PPTP_ECHO_GENERAL_ERROR 2 164 + 165 + struct PptpEchoReply { 166 + __be32 identNumber; 167 + __u8 resultCode; 168 + __u8 generalErrorCode; 169 + __u16 reserved; 170 + }; 171 + 172 + /* PptpFramingType */ 173 + #define PPTP_ASYNC_FRAMING 1 174 + #define PPTP_SYNC_FRAMING 2 175 + #define PPTP_DONT_CARE_FRAMING 3 176 + 177 + /* PptpCallBearerType */ 178 + #define PPTP_ANALOG_TYPE 1 179 + #define PPTP_DIGITAL_TYPE 2 180 + #define PPTP_DONT_CARE_BEARER_TYPE 3 181 + 182 + struct PptpOutCallRequest { 183 + __be16 callID; 
184 + __be16 callSerialNumber; 185 + __be32 minBPS; 186 + __be32 maxBPS; 187 + __be32 bearerType; 188 + __be32 framingType; 189 + __be16 packetWindow; 190 + __be16 packetProcDelay; 191 + __u16 reserved1; 192 + __be16 phoneNumberLength; 193 + __u16 reserved2; 194 + __u8 phoneNumber[64]; 195 + __u8 subAddress[64]; 196 + }; 197 + 198 + /* PptpCallResultCode */ 199 + #define PPTP_OUTCALL_CONNECT 1 200 + #define PPTP_OUTCALL_GENERAL_ERROR 2 201 + #define PPTP_OUTCALL_NO_CARRIER 3 202 + #define PPTP_OUTCALL_BUSY 4 203 + #define PPTP_OUTCALL_NO_DIAL_TONE 5 204 + #define PPTP_OUTCALL_TIMEOUT 6 205 + #define PPTP_OUTCALL_DONT_ACCEPT 7 206 + 207 + struct PptpOutCallReply { 208 + __be16 callID; 209 + __be16 peersCallID; 210 + __u8 resultCode; 211 + __u8 generalErrorCode; 212 + __be16 causeCode; 213 + __be32 connectSpeed; 214 + __be16 packetWindow; 215 + __be16 packetProcDelay; 216 + __be32 physChannelID; 217 + }; 218 + 219 + struct PptpInCallRequest { 220 + __be16 callID; 221 + __be16 callSerialNumber; 222 + __be32 callBearerType; 223 + __be32 physChannelID; 224 + __be16 dialedNumberLength; 225 + __be16 dialingNumberLength; 226 + __u8 dialedNumber[64]; 227 + __u8 dialingNumber[64]; 228 + __u8 subAddress[64]; 229 + }; 230 + 231 + /* PptpInCallResultCode */ 232 + #define PPTP_INCALL_ACCEPT 1 233 + #define PPTP_INCALL_GENERAL_ERROR 2 234 + #define PPTP_INCALL_DONT_ACCEPT 3 235 + 236 + struct PptpInCallReply { 237 + __be16 callID; 238 + __be16 peersCallID; 239 + __u8 resultCode; 240 + __u8 generalErrorCode; 241 + __be16 packetWindow; 242 + __be16 packetProcDelay; 243 + __u16 reserved; 244 + }; 245 + 246 + struct PptpInCallConnected { 247 + __be16 peersCallID; 248 + __u16 reserved; 249 + __be32 connectSpeed; 250 + __be16 packetWindow; 251 + __be16 packetProcDelay; 252 + __be32 callFramingType; 253 + }; 254 + 255 + struct PptpClearCallRequest { 256 + __be16 callID; 257 + __u16 reserved; 258 + }; 259 + 260 + struct PptpCallDisconnectNotify { 261 + __be16 callID; 262 + __u8 
resultCode; 263 + __u8 generalErrorCode; 264 + __be16 causeCode; 265 + __u16 reserved; 266 + __u8 callStatistics[128]; 267 + }; 268 + 269 + struct PptpWanErrorNotify { 270 + __be16 peersCallID; 271 + __u16 reserved; 272 + __be32 crcErrors; 273 + __be32 framingErrors; 274 + __be32 hardwareOverRuns; 275 + __be32 bufferOverRuns; 276 + __be32 timeoutErrors; 277 + __be32 alignmentErrors; 278 + }; 279 + 280 + struct PptpSetLinkInfo { 281 + __be16 peersCallID; 282 + __u16 reserved; 283 + __be32 sendAccm; 284 + __be32 recvAccm; 285 + }; 286 + 287 + union pptp_ctrl_union { 288 + struct PptpStartSessionRequest sreq; 289 + struct PptpStartSessionReply srep; 290 + struct PptpStopSessionRequest streq; 291 + struct PptpStopSessionReply strep; 292 + struct PptpOutCallRequest ocreq; 293 + struct PptpOutCallReply ocack; 294 + struct PptpInCallRequest icreq; 295 + struct PptpInCallReply icack; 296 + struct PptpInCallConnected iccon; 297 + struct PptpClearCallRequest clrreq; 298 + struct PptpCallDisconnectNotify disc; 299 + struct PptpWanErrorNotify wanerr; 300 + struct PptpSetLinkInfo setlink; 301 + }; 302 + 303 + extern int 304 + (*ip_nat_pptp_hook_outbound)(struct sk_buff **pskb, 305 + struct ip_conntrack *ct, 306 + enum ip_conntrack_info ctinfo, 307 + struct PptpControlHeader *ctlh, 308 + union pptp_ctrl_union *pptpReq); 309 + 310 + extern int 311 + (*ip_nat_pptp_hook_inbound)(struct sk_buff **pskb, 312 + struct ip_conntrack *ct, 313 + enum ip_conntrack_info ctinfo, 314 + struct PptpControlHeader *ctlh, 315 + union pptp_ctrl_union *pptpReq); 316 + 317 + extern int 318 + (*ip_nat_pptp_hook_exp_gre)(struct ip_conntrack_expect *exp_orig, 319 + struct ip_conntrack_expect *exp_reply); 320 + 321 + extern void 322 + (*ip_nat_pptp_hook_expectfn)(struct ip_conntrack *ct, 323 + struct ip_conntrack_expect *exp); 324 + #endif /* __KERNEL__ */ 325 + #endif /* _CONNTRACK_PPTP_H */
+114
include/linux/netfilter_ipv4/ip_conntrack_proto_gre.h
··· 1 + #ifndef _CONNTRACK_PROTO_GRE_H 2 + #define _CONNTRACK_PROTO_GRE_H 3 + #include <asm/byteorder.h> 4 + 5 + /* GRE PROTOCOL HEADER */ 6 + 7 + /* GRE Version field */ 8 + #define GRE_VERSION_1701 0x0 9 + #define GRE_VERSION_PPTP 0x1 10 + 11 + /* GRE Protocol field */ 12 + #define GRE_PROTOCOL_PPTP 0x880B 13 + 14 + /* GRE Flags */ 15 + #define GRE_FLAG_C 0x80 16 + #define GRE_FLAG_R 0x40 17 + #define GRE_FLAG_K 0x20 18 + #define GRE_FLAG_S 0x10 19 + #define GRE_FLAG_A 0x80 20 + 21 + #define GRE_IS_C(f) ((f)&GRE_FLAG_C) 22 + #define GRE_IS_R(f) ((f)&GRE_FLAG_R) 23 + #define GRE_IS_K(f) ((f)&GRE_FLAG_K) 24 + #define GRE_IS_S(f) ((f)&GRE_FLAG_S) 25 + #define GRE_IS_A(f) ((f)&GRE_FLAG_A) 26 + 27 + /* GRE is a mess: Four different standards */ 28 + struct gre_hdr { 29 + #if defined(__LITTLE_ENDIAN_BITFIELD) 30 + __u16 rec:3, 31 + srr:1, 32 + seq:1, 33 + key:1, 34 + routing:1, 35 + csum:1, 36 + version:3, 37 + reserved:4, 38 + ack:1; 39 + #elif defined(__BIG_ENDIAN_BITFIELD) 40 + __u16 csum:1, 41 + routing:1, 42 + key:1, 43 + seq:1, 44 + srr:1, 45 + rec:3, 46 + ack:1, 47 + reserved:4, 48 + version:3; 49 + #else 50 + #error "Adjust your <asm/byteorder.h> defines" 51 + #endif 52 + __u16 protocol; 53 + }; 54 + 55 + /* modified GRE header for PPTP */ 56 + struct gre_hdr_pptp { 57 + __u8 flags; /* bitfield */ 58 + __u8 version; /* should be GRE_VERSION_PPTP */ 59 + __u16 protocol; /* should be GRE_PROTOCOL_PPTP */ 60 + __u16 payload_len; /* size of ppp payload, not inc. gre header */ 61 + __u16 call_id; /* peer's call_id for this session */ 62 + __u32 seq; /* sequence number. 
Present if S==1 */ 63 + __u32 ack; /* seq number of highest packet received by */ 64 + /* sender in this session */ 65 + }; 66 + 67 + 68 + /* this is part of ip_conntrack */ 69 + struct ip_ct_gre { 70 + unsigned int stream_timeout; 71 + unsigned int timeout; 72 + }; 73 + 74 + #ifdef __KERNEL__ 75 + struct ip_conntrack_expect; 76 + struct ip_conntrack; 77 + 78 + /* structure for original <-> reply keymap */ 79 + struct ip_ct_gre_keymap { 80 + struct list_head list; 81 + 82 + struct ip_conntrack_tuple tuple; 83 + }; 84 + 85 + /* add new tuple->key_reply pair to keymap */ 86 + int ip_ct_gre_keymap_add(struct ip_conntrack *ct, 87 + struct ip_conntrack_tuple *t, 88 + int reply); 89 + 90 + /* delete keymap entries */ 91 + void ip_ct_gre_keymap_destroy(struct ip_conntrack *ct); 92 + 93 + 94 + /* get pointer to gre key, if present */ 95 + static inline u_int32_t *gre_key(struct gre_hdr *greh) 96 + { 97 + if (!greh->key) 98 + return NULL; 99 + if (greh->csum || greh->routing) 100 + return (u_int32_t *) ((char *)greh + sizeof(*greh) + 4); 101 + return (u_int32_t *) ((char *)greh + sizeof(*greh)); 102 + } 103 + 104 + /* get pointer to gre csum, if present */ 105 + static inline u_int16_t *gre_csum(struct gre_hdr *greh) 106 + { 107 + if (!greh->csum) 108 + return NULL; 109 + return (u_int16_t *) ((char *)greh + sizeof(*greh)); 110 + } 111 + 112 + #endif /* __KERNEL__ */ 113 + 114 + #endif /* _CONNTRACK_PROTO_GRE_H */
+8 -1
include/linux/netfilter_ipv4/ip_conntrack_tuple.h
··· 17 17 u_int16_t all; 18 18 19 19 struct { 20 - u_int16_t port; 20 + __be16 port; 21 21 } tcp; 22 22 struct { 23 23 u_int16_t port; ··· 28 28 struct { 29 29 u_int16_t port; 30 30 } sctp; 31 + struct { 32 + __be16 key; /* key is 32bit, pptp only uses 16 */ 33 + } gre; 31 34 }; 32 35 33 36 /* The manipulable part of the tuple. */ ··· 64 61 struct { 65 62 u_int16_t port; 66 63 } sctp; 64 + struct { 65 + __be16 key; /* key is 32bit, 66 + * pptp only uses 16 */ 67 + } gre; 67 68 } u; 68 69 69 70 /* The protocol. */
+11
include/linux/netfilter_ipv4/ip_nat_pptp.h
··· 1 + /* PPTP constants and structs */ 2 + #ifndef _NAT_PPTP_H 3 + #define _NAT_PPTP_H 4 + 5 + /* conntrack private data */ 6 + struct ip_nat_pptp { 7 + u_int16_t pns_call_id; /* NAT'ed PNS call id */ 8 + u_int16_t pac_call_id; /* NAT'ed PAC call id */ 9 + }; 10 + 11 + #endif /* _NAT_PPTP_H */
+3
include/linux/netfilter_ipv6/ip6_tables.h
··· 455 455 456 456 /* Check for an extension */ 457 457 extern int ip6t_ext_hdr(u8 nexthdr); 458 + /* find specified header and get offset to it */ 459 + extern int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, 460 + u8 target); 458 461 459 462 #define IP6T_ALIGN(s) (((s) + (__alignof__(struct ip6t_entry)-1)) & ~(__alignof__(struct ip6t_entry)-1)) 460 463
+2 -1
include/linux/pci_ids.h
··· 1268 1268 #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA 0x0266 1269 1269 #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_SATA2 0x0267 1270 1270 #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_IDE 0x036E 1271 - #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA 0x036F 1271 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA 0x037E 1272 + #define PCI_DEVICE_ID_NVIDIA_NFORCE_MCP55_SATA2 0x037F 1272 1273 #define PCI_DEVICE_ID_NVIDIA_NVENET_12 0x0268 1273 1274 #define PCI_DEVICE_ID_NVIDIA_NVENET_13 0x0269 1274 1275 #define PCI_DEVICE_ID_NVIDIA_MCP51_AUDIO 0x026B
+4
include/linux/reboot.h
··· 59 59 * Architecture independent implementations of sys_reboot commands. 60 60 */ 61 61 62 + extern void kernel_restart_prepare(char *cmd); 63 + extern void kernel_halt_prepare(void); 64 + extern void kernel_power_off_prepare(void); 65 + 62 66 extern void kernel_restart(char *cmd); 63 67 extern void kernel_halt(void); 64 68 extern void kernel_power_off(void);
+2
include/linux/syscalls.h
··· 508 508 509 509 asmlinkage long sys_ioprio_set(int which, int who, int ioprio); 510 510 asmlinkage long sys_ioprio_get(int which, int who); 511 + asmlinkage long sys_set_mempolicy(int mode, unsigned long __user *nmask, 512 + unsigned long maxnode); 511 513 512 514 #endif
+11 -4
include/rdma/ib_mad.h
··· 108 108 #define IB_QP1_QKEY 0x80010000 109 109 #define IB_QP_SET_QKEY 0x80000000 110 110 111 + enum { 112 + IB_MGMT_MAD_DATA = 232, 113 + IB_MGMT_RMPP_DATA = 220, 114 + IB_MGMT_VENDOR_DATA = 216, 115 + IB_MGMT_SA_DATA = 200 116 + }; 117 + 111 118 struct ib_mad_hdr { 112 119 u8 base_version; 113 120 u8 mgmt_class; ··· 156 149 157 150 struct ib_mad { 158 151 struct ib_mad_hdr mad_hdr; 159 - u8 data[232]; 152 + u8 data[IB_MGMT_MAD_DATA]; 160 153 }; 161 154 162 155 struct ib_rmpp_mad { 163 156 struct ib_mad_hdr mad_hdr; 164 157 struct ib_rmpp_hdr rmpp_hdr; 165 - u8 data[220]; 158 + u8 data[IB_MGMT_RMPP_DATA]; 166 159 }; 167 160 168 161 struct ib_sa_mad { 169 162 struct ib_mad_hdr mad_hdr; 170 163 struct ib_rmpp_hdr rmpp_hdr; 171 164 struct ib_sa_hdr sa_hdr; 172 - u8 data[200]; 165 + u8 data[IB_MGMT_SA_DATA]; 173 166 } __attribute__ ((packed)); 174 167 175 168 struct ib_vendor_mad { ··· 177 170 struct ib_rmpp_hdr rmpp_hdr; 178 171 u8 reserved; 179 172 u8 oui[3]; 180 - u8 data[216]; 173 + u8 data[IB_MGMT_VENDOR_DATA]; 181 174 }; 182 175 183 176 struct ib_class_port_info
+9 -2
include/scsi/scsi_host.h
··· 439 439 SHOST_CANCEL, 440 440 SHOST_DEL, 441 441 SHOST_RECOVERY, 442 + SHOST_CANCEL_RECOVERY, 443 + SHOST_DEL_RECOVERY, 442 444 }; 443 445 444 446 struct Scsi_Host { ··· 467 465 468 466 struct list_head eh_cmd_q; 469 467 struct task_struct * ehandler; /* Error recovery thread. */ 470 - struct semaphore * eh_wait; /* The error recovery thread waits 471 - on this. */ 472 468 struct semaphore * eh_action; /* Wait for specific actions on the 473 469 host. */ 474 470 unsigned int eh_active:1; /* Indicates the eh thread is awake and active if ··· 619 619 dev = dev->parent; 620 620 } 621 621 return container_of(dev, struct Scsi_Host, shost_gendev); 622 + } 623 + 624 + static inline int scsi_host_in_recovery(struct Scsi_Host *shost) 625 + { 626 + return shost->shost_state == SHOST_RECOVERY || 627 + shost->shost_state == SHOST_CANCEL_RECOVERY || 628 + shost->shost_state == SHOST_DEL_RECOVERY; 622 629 } 623 630 624 631 extern int scsi_queue_work(struct Scsi_Host *, struct work_struct *);
+2 -2
include/scsi/scsi_transport_fc.h
··· 103 103 incapable of reporting */ 104 104 #define FC_PORTSPEED_1GBIT 1 105 105 #define FC_PORTSPEED_2GBIT 2 106 - #define FC_PORTSPEED_10GBIT 4 107 - #define FC_PORTSPEED_4GBIT 8 106 + #define FC_PORTSPEED_4GBIT 4 107 + #define FC_PORTSPEED_10GBIT 8 108 108 #define FC_PORTSPEED_NOT_NEGOTIATED (1 << 15) /* Speed not established */ 109 109 110 110 /*
+1 -1
kernel/power/Kconfig
··· 29 29 30 30 config SOFTWARE_SUSPEND 31 31 bool "Software Suspend" 32 - depends on PM && SWAP && (X86 || ((FVR || PPC32) && !SMP)) 32 + depends on PM && SWAP && (X86 && (!SMP || SUSPEND_SMP)) || ((FVR || PPC32) && !SMP) 33 33 ---help--- 34 34 Enable the possibility of suspending the machine. 35 35 It doesn't need APM.
+2 -4
kernel/power/disk.c
··· 17 17 #include <linux/delay.h> 18 18 #include <linux/fs.h> 19 19 #include <linux/mount.h> 20 + #include <linux/pm.h> 20 21 21 22 #include "power.h" 22 23 23 24 24 25 extern suspend_disk_method_t pm_disk_mode; 25 - extern struct pm_ops * pm_ops; 26 26 27 27 extern int swsusp_suspend(void); 28 28 extern int swsusp_write(void); ··· 49 49 50 50 static void power_down(suspend_disk_method_t mode) 51 51 { 52 - unsigned long flags; 53 52 int error = 0; 54 53 55 - local_irq_save(flags); 56 54 switch(mode) { 57 55 case PM_DISK_PLATFORM: 58 - device_shutdown(); 56 + kernel_power_off_prepare(); 59 57 error = pm_ops->enter(PM_SUSPEND_DISK); 60 58 break; 61 59 case PM_DISK_SHUTDOWN:
+1 -1
kernel/power/power.h
··· 1 1 #include <linux/suspend.h> 2 2 #include <linux/utsname.h> 3 3 4 - /* With SUSPEND_CONSOLE defined, it suspend looks *really* cool, but 4 + /* With SUSPEND_CONSOLE defined suspend looks *really* cool, but 5 5 we probably do not take enough locks for switching consoles, etc, 6 6 so bad things might happen. 7 7 */
+8 -4
kernel/power/swsusp.c
··· 363 363 } 364 364 365 365 /** 366 - * write_swap_page - Write one page to a fresh swap location. 366 + * write_page - Write one page to a fresh swap location. 367 367 * @addr: Address we're writing. 368 368 * @loc: Place to store the entry we used. 369 369 * ··· 863 863 return 0; 864 864 } 865 865 866 + /* Free pages we allocated for suspend. Suspend pages are allocated 867 + * before atomic copy, so we need to free them after resume. 868 + */ 866 869 void swsusp_free(void) 867 870 { 868 871 BUG_ON(PageNosave(virt_to_page(pagedir_save))); ··· 921 918 922 919 pagedir_nosave = NULL; 923 920 nr_copy_pages = calc_nr(nr_copy_pages); 921 + nr_copy_pages_check = nr_copy_pages; 924 922 925 923 pr_debug("suspend: (pages needed: %d + %d free: %d)\n", 926 924 nr_copy_pages, PAGES_FOR_IO, nr_free_pages()); ··· 944 940 return error; 945 941 } 946 942 947 - nr_copy_pages_check = nr_copy_pages; 948 943 return 0; 949 944 } 950 945 ··· 1216 1213 free_pagedir(pblist); 1217 1214 free_eaten_memory(); 1218 1215 pblist = NULL; 1219 - } 1220 - else 1216 + /* Is this even worth handling? It should never ever happen, and we 1217 + have just lost user's state, anyway... */ 1218 + } else 1221 1219 printk("swsusp: Relocated %d pages\n", rel); 1222 1220 1223 1221 return pblist;
+6 -1
kernel/printk.c
··· 488 488 489 489 __setup("time", printk_time_setup); 490 490 491 + __attribute__((weak)) unsigned long long printk_clock(void) 492 + { 493 + return sched_clock(); 494 + } 495 + 491 496 /* 492 497 * This is printk. It can be called from any context. We want it to work. 493 498 * ··· 570 565 loglev_char = default_message_loglevel 571 566 + '0'; 572 567 } 573 - t = sched_clock(); 568 + t = printk_clock(); 574 569 nanosec_rem = do_div(t, 1000000000); 575 570 tlen = sprintf(tbuf, 576 571 "<%c>[%5lu.%06lu] ",
+14 -17
kernel/signal.c
··· 936 936 * as soon as they're available, so putting the signal on the shared queue 937 937 * will be equivalent to sending it to one such thread. 938 938 */ 939 - #define wants_signal(sig, p, mask) \ 940 - (!sigismember(&(p)->blocked, sig) \ 941 - && !((p)->state & mask) \ 942 - && !((p)->flags & PF_EXITING) \ 943 - && (task_curr(p) || !signal_pending(p))) 944 - 939 + static inline int wants_signal(int sig, struct task_struct *p) 940 + { 941 + if (sigismember(&p->blocked, sig)) 942 + return 0; 943 + if (p->flags & PF_EXITING) 944 + return 0; 945 + if (sig == SIGKILL) 946 + return 1; 947 + if (p->state & (TASK_STOPPED | TASK_TRACED)) 948 + return 0; 949 + return task_curr(p) || !signal_pending(p); 950 + } 945 951 946 952 static void 947 953 __group_complete_signal(int sig, struct task_struct *p) 948 954 { 949 - unsigned int mask; 950 955 struct task_struct *t; 951 - 952 - /* 953 - * Don't bother traced and stopped tasks (but 954 - * SIGKILL will punch through that). 955 - */ 956 - mask = TASK_STOPPED | TASK_TRACED; 957 - if (sig == SIGKILL) 958 - mask = 0; 959 956 960 957 /* 961 958 * Now find a thread we can wake up to take the signal off the queue. ··· 960 963 * If the main thread wants the signal, it gets first crack. 961 964 * Probably the least surprising to the average bear. 962 965 */ 963 - if (wants_signal(sig, p, mask)) 966 + if (wants_signal(sig, p)) 964 967 t = p; 965 968 else if (thread_group_empty(p)) 966 969 /* ··· 978 981 t = p->signal->curr_target = p; 979 982 BUG_ON(t->tgid != p->tgid); 980 983 981 - while (!wants_signal(sig, t, mask)) { 984 + while (!wants_signal(sig, t)) { 982 985 t = next_thread(t); 983 986 if (t == p->signal->curr_target) 984 987 /*
+46 -6
kernel/sys.c
··· 361 361 return retval; 362 362 } 363 363 364 + /** 365 + * emergency_restart - reboot the system 366 + * 367 + * Without shutting down any hardware or taking any locks 368 + * reboot the system. This is called when we know we are in 369 + * trouble so this is our best effort to reboot. This is 370 + * safe to call in interrupt context. 371 + */ 364 372 void emergency_restart(void) 365 373 { 366 374 machine_emergency_restart(); 367 375 } 368 376 EXPORT_SYMBOL_GPL(emergency_restart); 369 377 370 - void kernel_restart(char *cmd) 378 + /** 379 + * kernel_restart - reboot the system 380 + * 381 + * Shutdown everything and perform a clean reboot. 382 + * This is not safe to call in interrupt context. 383 + */ 384 + void kernel_restart_prepare(char *cmd) 371 385 { 372 386 notifier_call_chain(&reboot_notifier_list, SYS_RESTART, cmd); 373 387 system_state = SYSTEM_RESTART; 374 388 device_shutdown(); 389 + } 390 + void kernel_restart(char *cmd) 391 + { 392 + kernel_restart_prepare(cmd); 375 393 if (!cmd) { 376 394 printk(KERN_EMERG "Restarting system.\n"); 377 395 } else { ··· 400 382 } 401 383 EXPORT_SYMBOL_GPL(kernel_restart); 402 384 385 + /** 386 + * kernel_kexec - reboot the system 387 + * 388 + * Move into place and start executing a preloaded standalone 389 + * executable. If nothing was preloaded return an error. 390 + */ 403 391 void kernel_kexec(void) 404 392 { 405 393 #ifdef CONFIG_KEXEC ··· 414 390 if (!image) { 415 391 return; 416 392 } 417 - notifier_call_chain(&reboot_notifier_list, SYS_RESTART, NULL); 418 - system_state = SYSTEM_RESTART; 419 - device_shutdown(); 393 + kernel_restart_prepare(NULL); 420 394 printk(KERN_EMERG "Starting new kernel\n"); 421 395 machine_shutdown(); 422 396 machine_kexec(image); ··· 422 400 } 423 401 EXPORT_SYMBOL_GPL(kernel_kexec); 424 402 425 - void kernel_halt(void) 403 + /** 404 + * kernel_halt - halt the system 405 + * 406 + * Shutdown everything and perform a clean system halt. 
407 + */ 408 + void kernel_halt_prepare(void) 426 409 { 427 410 notifier_call_chain(&reboot_notifier_list, SYS_HALT, NULL); 428 411 system_state = SYSTEM_HALT; 429 412 device_shutdown(); 413 + } 414 + void kernel_halt(void) 415 + { 416 + kernel_halt_prepare(); 430 417 printk(KERN_EMERG "System halted.\n"); 431 418 machine_halt(); 432 419 } 433 420 EXPORT_SYMBOL_GPL(kernel_halt); 434 421 435 - void kernel_power_off(void) 422 + /** 423 + * kernel_power_off - power_off the system 424 + * 425 + * Shutdown everything and perform a clean system power_off. 426 + */ 427 + void kernel_power_off_prepare(void) 436 428 { 437 429 notifier_call_chain(&reboot_notifier_list, SYS_POWER_OFF, NULL); 438 430 system_state = SYSTEM_POWER_OFF; 439 431 device_shutdown(); 432 + } 433 + void kernel_power_off(void) 434 + { 435 + kernel_power_off_prepare(); 440 436 printk(KERN_EMERG "Power down.\n"); 441 437 machine_power_off(); 442 438 }
+1 -1
mm/mmap.c
··· 1640 1640 /* 1641 1641 * Get rid of page table information in the indicated region. 1642 1642 * 1643 - * Called with the page table lock held. 1643 + * Called with the mm semaphore held. 1644 1644 */ 1645 1645 static void unmap_region(struct mm_struct *mm, 1646 1646 struct vm_area_struct *vma, struct vm_area_struct *prev,
+2 -1
mm/mprotect.c
··· 248 248 249 249 newflags = vm_flags | (vma->vm_flags & ~(VM_READ | VM_WRITE | VM_EXEC)); 250 250 251 - if ((newflags & ~(newflags >> 4)) & 0xf) { 251 + /* newflags >> 4 shift VM_MAY% in place of VM_% */ 252 + if ((newflags & ~(newflags >> 4)) & (VM_READ | VM_WRITE | VM_EXEC)) { 252 253 error = -EACCES; 253 254 goto out; 254 255 }
+23 -22
mm/slab.c
··· 308 308 #define SIZE_L3 (1 + MAX_NUMNODES) 309 309 310 310 /* 311 - * This function may be completely optimized away if 311 + * This function must be completely optimized away if 312 312 * a constant is passed to it. Mostly the same as 313 313 * what is in linux/slab.h except it returns an 314 314 * index. 315 315 */ 316 - static inline int index_of(const size_t size) 316 + static __always_inline int index_of(const size_t size) 317 317 { 318 318 if (__builtin_constant_p(size)) { 319 319 int i = 0; ··· 329 329 extern void __bad_size(void); 330 330 __bad_size(); 331 331 } 332 - } 332 + } else 333 + BUG(); 333 334 return 0; 334 335 } 335 336 ··· 640 639 641 640 static DEFINE_PER_CPU(struct work_struct, reap_work); 642 641 643 - static void free_block(kmem_cache_t* cachep, void** objpp, int len); 642 + static void free_block(kmem_cache_t* cachep, void** objpp, int len, int node); 644 643 static void enable_cpucache (kmem_cache_t *cachep); 645 644 static void cache_reap (void *unused); 646 645 static int __node_shrink(kmem_cache_t *cachep, int node); ··· 805 804 806 805 if (ac->avail) { 807 806 spin_lock(&rl3->list_lock); 808 - free_block(cachep, ac->entry, ac->avail); 807 + free_block(cachep, ac->entry, ac->avail, node); 809 808 ac->avail = 0; 810 809 spin_unlock(&rl3->list_lock); 811 810 } ··· 926 925 /* Free limit for this kmem_list3 */ 927 926 l3->free_limit -= cachep->batchcount; 928 927 if (nc) 929 - free_block(cachep, nc->entry, nc->avail); 928 + free_block(cachep, nc->entry, nc->avail, node); 930 929 931 930 if (!cpus_empty(mask)) { 932 931 spin_unlock(&l3->list_lock); ··· 935 934 936 935 if (l3->shared) { 937 936 free_block(cachep, l3->shared->entry, 938 - l3->shared->avail); 937 + l3->shared->avail, node); 939 938 kfree(l3->shared); 940 939 l3->shared = NULL; 941 940 } ··· 1883 1882 { 1884 1883 kmem_cache_t *cachep = (kmem_cache_t*)arg; 1885 1884 struct array_cache *ac; 1885 + int node = numa_node_id(); 1886 1886 1887 1887 check_irq_off(); 1888 1888 ac = 
ac_data(cachep); 1889 - spin_lock(&cachep->nodelists[numa_node_id()]->list_lock); 1890 - free_block(cachep, ac->entry, ac->avail); 1891 - spin_unlock(&cachep->nodelists[numa_node_id()]->list_lock); 1889 + spin_lock(&cachep->nodelists[node]->list_lock); 1890 + free_block(cachep, ac->entry, ac->avail, node); 1891 + spin_unlock(&cachep->nodelists[node]->list_lock); 1892 1892 ac->avail = 0; 1893 1893 } 1894 1894 ··· 2610 2608 /* 2611 2609 * Caller needs to acquire correct kmem_list's list_lock 2612 2610 */ 2613 - static void free_block(kmem_cache_t *cachep, void **objpp, int nr_objects) 2611 + static void free_block(kmem_cache_t *cachep, void **objpp, int nr_objects, int node) 2614 2612 { 2615 2613 int i; 2616 2614 struct kmem_list3 *l3; ··· 2619 2617 void *objp = objpp[i]; 2620 2618 struct slab *slabp; 2621 2619 unsigned int objnr; 2622 - int nodeid = 0; 2623 2620 2624 2621 slabp = GET_PAGE_SLAB(virt_to_page(objp)); 2625 - nodeid = slabp->nodeid; 2626 - l3 = cachep->nodelists[nodeid]; 2622 + l3 = cachep->nodelists[node]; 2627 2623 list_del(&slabp->list); 2628 2624 objnr = (objp - slabp->s_mem) / cachep->objsize; 2629 - check_spinlock_acquired_node(cachep, nodeid); 2625 + check_spinlock_acquired_node(cachep, node); 2630 2626 check_slabp(cachep, slabp); 2631 2627 2632 2628 ··· 2664 2664 { 2665 2665 int batchcount; 2666 2666 struct kmem_list3 *l3; 2667 + int node = numa_node_id(); 2667 2668 2668 2669 batchcount = ac->batchcount; 2669 2670 #if DEBUG 2670 2671 BUG_ON(!batchcount || batchcount > ac->avail); 2671 2672 #endif 2672 2673 check_irq_off(); 2673 - l3 = cachep->nodelists[numa_node_id()]; 2674 + l3 = cachep->nodelists[node]; 2674 2675 spin_lock(&l3->list_lock); 2675 2676 if (l3->shared) { 2676 2677 struct array_cache *shared_array = l3->shared; ··· 2687 2686 } 2688 2687 } 2689 2688 2690 - free_block(cachep, ac->entry, batchcount); 2689 + free_block(cachep, ac->entry, batchcount, node); 2691 2690 free_done: 2692 2691 #if STATS 2693 2692 { ··· 2752 2751 } else { 2753 
2752 spin_lock(&(cachep->nodelists[nodeid])-> 2754 2753 list_lock); 2755 - free_block(cachep, &objp, 1); 2754 + free_block(cachep, &objp, 1, nodeid); 2756 2755 spin_unlock(&(cachep->nodelists[nodeid])-> 2757 2756 list_lock); 2758 2757 } ··· 2845 2844 unsigned long save_flags; 2846 2845 void *ptr; 2847 2846 2848 - if (nodeid == numa_node_id() || nodeid == -1) 2847 + if (nodeid == -1) 2849 2848 return __cache_alloc(cachep, flags); 2850 2849 2851 2850 if (unlikely(!cachep->nodelists[nodeid])) { ··· 3080 3079 3081 3080 if ((nc = cachep->nodelists[node]->shared)) 3082 3081 free_block(cachep, nc->entry, 3083 - nc->avail); 3082 + nc->avail, node); 3084 3083 3085 3084 l3->shared = new; 3086 3085 if (!cachep->nodelists[node]->alien) { ··· 3161 3160 if (!ccold) 3162 3161 continue; 3163 3162 spin_lock_irq(&cachep->nodelists[cpu_to_node(i)]->list_lock); 3164 - free_block(cachep, ccold->entry, ccold->avail); 3163 + free_block(cachep, ccold->entry, ccold->avail, cpu_to_node(i)); 3165 3164 spin_unlock_irq(&cachep->nodelists[cpu_to_node(i)]->list_lock); 3166 3165 kfree(ccold); 3167 3166 } ··· 3241 3240 if (tofree > ac->avail) { 3242 3241 tofree = (ac->avail+1)/2; 3243 3242 } 3244 - free_block(cachep, ac->entry, tofree); 3243 + free_block(cachep, ac->entry, tofree, node); 3245 3244 ac->avail -= tofree; 3246 3245 memmove(ac->entry, &(ac->entry[tofree]), 3247 3246 sizeof(void*)*ac->avail);
+1
mm/swapfile.c
··· 1381 1381 error = bd_claim(bdev, sys_swapon); 1382 1382 if (error < 0) { 1383 1383 bdev = NULL; 1384 + error = -EINVAL; 1384 1385 goto bad_swap; 1385 1386 } 1386 1387 p->old_block_size = block_size(bdev);
+1 -1
net/8021q/vlan_dev.c
··· 120 120 unsigned short vid; 121 121 struct net_device_stats *stats; 122 122 unsigned short vlan_TCI; 123 - unsigned short proto; 123 + __be16 proto; 124 124 125 125 /* vlan_TCI = ntohs(get_unaligned(&vhdr->h_vlan_TCI)); */ 126 126 vlan_TCI = ntohs(vhdr->h_vlan_TCI);
+2 -1
net/bridge/br_forward.c
··· 31 31 32 32 int br_dev_queue_push_xmit(struct sk_buff *skb) 33 33 { 34 - if (skb->len > skb->dev->mtu) 34 + /* drop mtu oversized packets except tso */ 35 + if (skb->len > skb->dev->mtu && !skb_shinfo(skb)->tso_size) 35 36 kfree_skb(skb); 36 37 else { 37 38 #ifdef CONFIG_BRIDGE_NETFILTER
+20 -29
net/ipv4/fib_trie.c
··· 43 43 * 2 of the License, or (at your option) any later version. 44 44 */ 45 45 46 - #define VERSION "0.403" 46 + #define VERSION "0.404" 47 47 48 48 #include <linux/config.h> 49 49 #include <asm/uaccess.h> ··· 224 224 Consider a node 'n' and its parent 'tp'. 225 225 226 226 If n is a leaf, every bit in its key is significant. Its presence is 227 - necessitaded by path compression, since during a tree traversal (when 227 + necessitated by path compression, since during a tree traversal (when 228 228 searching for a leaf - unless we are doing an insertion) we will completely 229 229 ignore all skipped bits we encounter. Thus we need to verify, at the end of 230 230 a potentially successful search, that we have indeed been walking the ··· 836 836 #endif 837 837 } 838 838 839 - /* readside most use rcu_read_lock currently dump routines 839 + /* readside must use rcu_read_lock currently dump routines 840 840 via get_fa_head and dump */ 841 841 842 - static struct leaf_info *find_leaf_info(struct hlist_head *head, int plen) 842 + static struct leaf_info *find_leaf_info(struct leaf *l, int plen) 843 843 { 844 + struct hlist_head *head = &l->list; 844 845 struct hlist_node *node; 845 846 struct leaf_info *li; 846 847 ··· 854 853 855 854 static inline struct list_head * get_fa_head(struct leaf *l, int plen) 856 855 { 857 - struct leaf_info *li = find_leaf_info(&l->list, plen); 856 + struct leaf_info *li = find_leaf_info(l, plen); 858 857 859 858 if (!li) 860 859 return NULL; ··· 1086 1085 } 1087 1086 1088 1087 if (tp && tp->pos + tp->bits > 32) 1089 - printk("ERROR tp=%p pos=%d, bits=%d, key=%0x plen=%d\n", 1088 + printk(KERN_WARNING "fib_trie tp=%p pos=%d, bits=%d, key=%0x plen=%d\n", 1090 1089 tp, tp->pos, tp->bits, key, plen); 1091 1090 1092 1091 /* Rebalance the trie */ ··· 1249 1248 } 1250 1249 1251 1250 1252 - /* should be clalled with rcu_read_lock */ 1251 + /* should be called with rcu_read_lock */ 1253 1252 static inline int check_leaf(struct trie *t, struct 
leaf *l, 1254 1253 t_key key, int *plen, const struct flowi *flp, 1255 1254 struct fib_result *res) ··· 1591 1590 rtmsg_fib(RTM_DELROUTE, htonl(key), fa, plen, tb->tb_id, nlhdr, req); 1592 1591 1593 1592 l = fib_find_node(t, key); 1594 - li = find_leaf_info(&l->list, plen); 1593 + li = find_leaf_info(l, plen); 1595 1594 1596 1595 list_del_rcu(&fa->fa_list); 1597 1596 ··· 1715 1714 1716 1715 t->revision++; 1717 1716 1718 - rcu_read_lock(); 1719 1717 for (h = 0; (l = nextleaf(t, l)) != NULL; h++) { 1720 1718 found += trie_flush_leaf(t, l); 1721 1719 ··· 1722 1722 trie_leaf_remove(t, ll->key); 1723 1723 ll = l; 1724 1724 } 1725 - rcu_read_unlock(); 1726 1725 1727 1726 if (ll && hlist_empty(&ll->list)) 1728 1727 trie_leaf_remove(t, ll->key); ··· 1832 1833 i++; 1833 1834 continue; 1834 1835 } 1835 - if (fa->fa_info->fib_nh == NULL) { 1836 - printk("Trie error _fib_nh=NULL in fa[%d] k=%08x plen=%d\n", i, key, plen); 1837 - i++; 1838 - continue; 1839 - } 1840 - if (fa->fa_info == NULL) { 1841 - printk("Trie error fa_info=NULL in fa[%d] k=%08x plen=%d\n", i, key, plen); 1842 - i++; 1843 - continue; 1844 - } 1836 + BUG_ON(!fa->fa_info); 1845 1837 1846 1838 if (fib_dump_info(skb, NETLINK_CB(cb->skb).pid, 1847 1839 cb->nlh->nlmsg_seq, ··· 1955 1965 trie_main = t; 1956 1966 1957 1967 if (id == RT_TABLE_LOCAL) 1958 - printk("IPv4 FIB: Using LC-trie version %s\n", VERSION); 1968 + printk(KERN_INFO "IPv4 FIB: Using LC-trie version %s\n", VERSION); 1959 1969 1960 1970 return tb; 1961 1971 } ··· 2019 2029 iter->tnode = (struct tnode *) n; 2020 2030 iter->trie = t; 2021 2031 iter->index = 0; 2022 - iter->depth = 0; 2032 + iter->depth = 1; 2023 2033 return n; 2024 2034 } 2025 2035 return NULL; ··· 2264 2274 seq_puts(seq, "<local>:\n"); 2265 2275 else 2266 2276 seq_puts(seq, "<main>:\n"); 2267 - } else { 2268 - seq_indent(seq, iter->depth-1); 2269 - seq_printf(seq, " +-- %d.%d.%d.%d/%d\n", 2270 - NIPQUAD(prf), tn->pos); 2271 - } 2277 + } 2278 + seq_indent(seq, iter->depth-1); 2279 + 
seq_printf(seq, " +-- %d.%d.%d.%d/%d %d %d %d\n", 2280 + NIPQUAD(prf), tn->pos, tn->bits, tn->full_children, 2281 + tn->empty_children); 2282 + 2272 2283 } else { 2273 2284 struct leaf *l = (struct leaf *) n; 2274 2285 int i; ··· 2278 2287 seq_indent(seq, iter->depth); 2279 2288 seq_printf(seq, " |-- %d.%d.%d.%d\n", NIPQUAD(val)); 2280 2289 for (i = 32; i >= 0; i--) { 2281 - struct leaf_info *li = find_leaf_info(&l->list, i); 2290 + struct leaf_info *li = find_leaf_info(l, i); 2282 2291 if (li) { 2283 2292 struct fib_alias *fa; 2284 2293 list_for_each_entry_rcu(fa, &li->falh, fa_list) { ··· 2374 2383 return 0; 2375 2384 2376 2385 for (i=32; i>=0; i--) { 2377 - struct leaf_info *li = find_leaf_info(&l->list, i); 2386 + struct leaf_info *li = find_leaf_info(l, i); 2378 2387 struct fib_alias *fa; 2379 2388 u32 mask, prefix; 2380 2389
+22
net/ipv4/netfilter/Kconfig
··· 137 137 138 138 To compile it as a module, choose M here. If unsure, say Y. 139 139 140 + config IP_NF_PPTP 141 + tristate 'PPTP protocol support' 142 + help 143 + This module adds support for PPTP (Point to Point Tunnelling 144 + Protocol, RFC2637) conncection tracking and NAT. 145 + 146 + If you are running PPTP sessions over a stateful firewall or NAT 147 + box, you may want to enable this feature. 148 + 149 + Please note that not all PPTP modes of operation are supported yet. 150 + For more info, read top of the file 151 + net/ipv4/netfilter/ip_conntrack_pptp.c 152 + 153 + If you want to compile it as a module, say M here and read 154 + Documentation/modules.txt. If unsure, say `N'. 155 + 140 156 config IP_NF_QUEUE 141 157 tristate "IP Userspace queueing via NETLINK (OBSOLETE)" 142 158 help ··· 636 620 depends on IP_NF_IPTABLES!=n && IP_NF_CONNTRACK!=n && IP_NF_NAT!=n 637 621 default IP_NF_NAT if IP_NF_AMANDA=y 638 622 default m if IP_NF_AMANDA=m 623 + 624 + config IP_NF_NAT_PPTP 625 + tristate 626 + depends on IP_NF_NAT!=n && IP_NF_PPTP!=n 627 + default IP_NF_NAT if IP_NF_PPTP=y 628 + default m if IP_NF_PPTP=m 639 629 640 630 # mangle + specific targets 641 631 config IP_NF_MANGLE
+5
net/ipv4/netfilter/Makefile
··· 6 6 ip_conntrack-objs := ip_conntrack_standalone.o ip_conntrack_core.o ip_conntrack_proto_generic.o ip_conntrack_proto_tcp.o ip_conntrack_proto_udp.o ip_conntrack_proto_icmp.o 7 7 iptable_nat-objs := ip_nat_standalone.o ip_nat_rule.o ip_nat_core.o ip_nat_helper.o ip_nat_proto_unknown.o ip_nat_proto_tcp.o ip_nat_proto_udp.o ip_nat_proto_icmp.o 8 8 9 + ip_conntrack_pptp-objs := ip_conntrack_helper_pptp.o ip_conntrack_proto_gre.o 10 + ip_nat_pptp-objs := ip_nat_helper_pptp.o ip_nat_proto_gre.o 11 + 9 12 # connection tracking 10 13 obj-$(CONFIG_IP_NF_CONNTRACK) += ip_conntrack.o 11 14 ··· 20 17 obj-$(CONFIG_IP_NF_CT_PROTO_SCTP) += ip_conntrack_proto_sctp.o 21 18 22 19 # connection tracking helpers 20 + obj-$(CONFIG_IP_NF_PPTP) += ip_conntrack_pptp.o 23 21 obj-$(CONFIG_IP_NF_AMANDA) += ip_conntrack_amanda.o 24 22 obj-$(CONFIG_IP_NF_TFTP) += ip_conntrack_tftp.o 25 23 obj-$(CONFIG_IP_NF_FTP) += ip_conntrack_ftp.o ··· 28 24 obj-$(CONFIG_IP_NF_NETBIOS_NS) += ip_conntrack_netbios_ns.o 29 25 30 26 # NAT helpers 27 + obj-$(CONFIG_IP_NF_NAT_PPTP) += ip_nat_pptp.o 31 28 obj-$(CONFIG_IP_NF_NAT_AMANDA) += ip_nat_amanda.o 32 29 obj-$(CONFIG_IP_NF_NAT_TFTP) += ip_nat_tftp.o 33 30 obj-$(CONFIG_IP_NF_NAT_FTP) += ip_nat_ftp.o
+1 -1
net/ipv4/netfilter/ip_conntrack_amanda.c
··· 65 65 66 66 /* increase the UDP timeout of the master connection as replies from 67 67 * Amanda clients to the server can be quite delayed */ 68 - ip_ct_refresh_acct(ct, ctinfo, NULL, master_timeout * HZ); 68 + ip_ct_refresh(ct, *pskb, master_timeout * HZ); 69 69 70 70 /* No data? */ 71 71 dataoff = (*pskb)->nh.iph->ihl*4 + sizeof(struct udphdr);
+26 -25
net/ipv4/netfilter/ip_conntrack_core.c
··· 233 233 234 234 /* Just find a expectation corresponding to a tuple. */ 235 235 struct ip_conntrack_expect * 236 - ip_conntrack_expect_find_get(const struct ip_conntrack_tuple *tuple) 236 + ip_conntrack_expect_find(const struct ip_conntrack_tuple *tuple) 237 237 { 238 238 struct ip_conntrack_expect *i; 239 239 ··· 1112 1112 synchronize_net(); 1113 1113 } 1114 1114 1115 - static inline void ct_add_counters(struct ip_conntrack *ct, 1116 - enum ip_conntrack_info ctinfo, 1117 - const struct sk_buff *skb) 1118 - { 1119 - #ifdef CONFIG_IP_NF_CT_ACCT 1120 - if (skb) { 1121 - ct->counters[CTINFO2DIR(ctinfo)].packets++; 1122 - ct->counters[CTINFO2DIR(ctinfo)].bytes += 1123 - ntohs(skb->nh.iph->tot_len); 1124 - } 1125 - #endif 1126 - } 1127 - 1128 - /* Refresh conntrack for this many jiffies and do accounting (if skb != NULL) */ 1129 - void ip_ct_refresh_acct(struct ip_conntrack *ct, 1115 + /* Refresh conntrack for this many jiffies and do accounting if do_acct is 1 */ 1116 + void __ip_ct_refresh_acct(struct ip_conntrack *ct, 1130 1117 enum ip_conntrack_info ctinfo, 1131 1118 const struct sk_buff *skb, 1132 - unsigned long extra_jiffies) 1119 + unsigned long extra_jiffies, 1120 + int do_acct) 1133 1121 { 1122 + int do_event = 0; 1123 + 1134 1124 IP_NF_ASSERT(ct->timeout.data == (unsigned long)ct); 1125 + IP_NF_ASSERT(skb); 1126 + 1127 + write_lock_bh(&ip_conntrack_lock); 1135 1128 1136 1129 /* If not in hash table, timer will not be active yet */ 1137 1130 if (!is_confirmed(ct)) { 1138 1131 ct->timeout.expires = extra_jiffies; 1139 - ct_add_counters(ct, ctinfo, skb); 1132 + do_event = 1; 1140 1133 } else { 1141 - write_lock_bh(&ip_conntrack_lock); 1142 1134 /* Need del_timer for race avoidance (may already be dying). */ 1143 1135 if (del_timer(&ct->timeout)) { 1144 1136 ct->timeout.expires = jiffies + extra_jiffies; 1145 1137 add_timer(&ct->timeout); 1146 - /* FIXME: We loose some REFRESH events if this function 1147 - * is called without an skb. 
I'll fix this later -HW */ 1148 - if (skb) 1149 - ip_conntrack_event_cache(IPCT_REFRESH, skb); 1138 + do_event = 1; 1150 1139 } 1151 - ct_add_counters(ct, ctinfo, skb); 1152 - write_unlock_bh(&ip_conntrack_lock); 1153 1140 } 1141 + 1142 + #ifdef CONFIG_IP_NF_CT_ACCT 1143 + if (do_acct) { 1144 + ct->counters[CTINFO2DIR(ctinfo)].packets++; 1145 + ct->counters[CTINFO2DIR(ctinfo)].bytes += 1146 + ntohs(skb->nh.iph->tot_len); 1147 + } 1148 + #endif 1149 + 1150 + write_unlock_bh(&ip_conntrack_lock); 1151 + 1152 + /* must be unlocked when calling event cache */ 1153 + if (do_event) 1154 + ip_conntrack_event_cache(IPCT_REFRESH, skb); 1154 1155 } 1155 1156 1156 1157 #if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \
+806
net/ipv4/netfilter/ip_conntrack_helper_pptp.c
··· 1 + /* 2 + * ip_conntrack_pptp.c - Version 3.0 3 + * 4 + * Connection tracking support for PPTP (Point to Point Tunneling Protocol). 5 + * PPTP is a protocol for creating virtual private networks. 6 + * It is a specification defined by Microsoft and some vendors 7 + * working with Microsoft. PPTP is built on top of a modified 8 + * version of the Internet Generic Routing Encapsulation Protocol. 9 + * GRE is defined in RFC 1701 and RFC 1702. Documentation of 10 + * PPTP can be found in RFC 2637 11 + * 12 + * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org> 13 + * 14 + * Development of this code funded by Astaro AG (http://www.astaro.com/) 15 + * 16 + * Limitations: 17 + * - We blindly assume that control connections are always 18 + * established in PNS->PAC direction. This is a violation 19 + * of RFC 2637 20 + * - We can only support one single call within each session 21 + * 22 + * TODO: 23 + * - testing of incoming PPTP calls 24 + * 25 + * Changes: 26 + * 2002-02-05 - Version 1.3 27 + * - Call ip_conntrack_unexpect_related() from 28 + * pptp_destroy_siblings() to destroy expectations in case 29 + * CALL_DISCONNECT_NOTIFY or tcp fin packet was seen 30 + * (Philip Craig <philipc@snapgear.com>) 31 + * - Add Version information at module loadtime 32 + * 2002-02-10 - Version 1.6 33 + * - move to C99 style initializers 34 + * - remove second expectation if first arrives 35 + * 2004-10-22 - Version 2.0 36 + * - merge Mandrake's 2.6.x port with recent 2.6.x API changes 37 + * - fix lots of linear skb assumptions from Mandrake's port 38 + * 2005-06-10 - Version 2.1 39 + * - use ip_conntrack_expect_free() instead of kfree() on the 40 + * expect's (which are from the slab for quite some time) 41 + * 2005-06-10 - Version 3.0 42 + * - port helper to post-2.6.11 API changes, 43 + * funded by Oxcoda NetBox Blue (http://www.netboxblue.com/) 44 + * 2005-07-30 - Version 3.1 45 + * - port helper to 2.6.13 API changes 46 + * 47 + */ 48 + 49 + #include <linux/config.h> 50 + 
#include <linux/module.h> 51 + #include <linux/netfilter.h> 52 + #include <linux/ip.h> 53 + #include <net/checksum.h> 54 + #include <net/tcp.h> 55 + 56 + #include <linux/netfilter_ipv4/ip_conntrack.h> 57 + #include <linux/netfilter_ipv4/ip_conntrack_core.h> 58 + #include <linux/netfilter_ipv4/ip_conntrack_helper.h> 59 + #include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h> 60 + #include <linux/netfilter_ipv4/ip_conntrack_pptp.h> 61 + 62 + #define IP_CT_PPTP_VERSION "3.1" 63 + 64 + MODULE_LICENSE("GPL"); 65 + MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>"); 66 + MODULE_DESCRIPTION("Netfilter connection tracking helper module for PPTP"); 67 + 68 + static DEFINE_SPINLOCK(ip_pptp_lock); 69 + 70 + int 71 + (*ip_nat_pptp_hook_outbound)(struct sk_buff **pskb, 72 + struct ip_conntrack *ct, 73 + enum ip_conntrack_info ctinfo, 74 + struct PptpControlHeader *ctlh, 75 + union pptp_ctrl_union *pptpReq); 76 + 77 + int 78 + (*ip_nat_pptp_hook_inbound)(struct sk_buff **pskb, 79 + struct ip_conntrack *ct, 80 + enum ip_conntrack_info ctinfo, 81 + struct PptpControlHeader *ctlh, 82 + union pptp_ctrl_union *pptpReq); 83 + 84 + int 85 + (*ip_nat_pptp_hook_exp_gre)(struct ip_conntrack_expect *expect_orig, 86 + struct ip_conntrack_expect *expect_reply); 87 + 88 + void 89 + (*ip_nat_pptp_hook_expectfn)(struct ip_conntrack *ct, 90 + struct ip_conntrack_expect *exp); 91 + 92 + #if 0 93 + /* PptpControlMessageType names */ 94 + const char *pptp_msg_name[] = { 95 + "UNKNOWN_MESSAGE", 96 + "START_SESSION_REQUEST", 97 + "START_SESSION_REPLY", 98 + "STOP_SESSION_REQUEST", 99 + "STOP_SESSION_REPLY", 100 + "ECHO_REQUEST", 101 + "ECHO_REPLY", 102 + "OUT_CALL_REQUEST", 103 + "OUT_CALL_REPLY", 104 + "IN_CALL_REQUEST", 105 + "IN_CALL_REPLY", 106 + "IN_CALL_CONNECT", 107 + "CALL_CLEAR_REQUEST", 108 + "CALL_DISCONNECT_NOTIFY", 109 + "WAN_ERROR_NOTIFY", 110 + "SET_LINK_INFO" 111 + }; 112 + EXPORT_SYMBOL(pptp_msg_name); 113 + #define DEBUGP(format, args...) 
printk(KERN_DEBUG "%s:%s: " format, __FILE__, __FUNCTION__, ## args) 114 + #else 115 + #define DEBUGP(format, args...) 116 + #endif 117 + 118 + #define SECS *HZ 119 + #define MINS * 60 SECS 120 + #define HOURS * 60 MINS 121 + 122 + #define PPTP_GRE_TIMEOUT (10 MINS) 123 + #define PPTP_GRE_STREAM_TIMEOUT (5 HOURS) 124 + 125 + static void pptp_expectfn(struct ip_conntrack *ct, 126 + struct ip_conntrack_expect *exp) 127 + { 128 + DEBUGP("increasing timeouts\n"); 129 + 130 + /* increase timeout of GRE data channel conntrack entry */ 131 + ct->proto.gre.timeout = PPTP_GRE_TIMEOUT; 132 + ct->proto.gre.stream_timeout = PPTP_GRE_STREAM_TIMEOUT; 133 + 134 + /* Can you see how rusty this code is, compared with the pre-2.6.11 135 + * one? That's what happened to my shiny newnat of 2002 ;( -HW */ 136 + 137 + if (!ip_nat_pptp_hook_expectfn) { 138 + struct ip_conntrack_tuple inv_t; 139 + struct ip_conntrack_expect *exp_other; 140 + 141 + /* obviously this tuple inversion only works until you do NAT */ 142 + invert_tuplepr(&inv_t, &exp->tuple); 143 + DEBUGP("trying to unexpect other dir: "); 144 + DUMP_TUPLE(&inv_t); 145 + 146 + exp_other = ip_conntrack_expect_find(&inv_t); 147 + if (exp_other) { 148 + /* delete other expectation. 
*/ 149 + DEBUGP("found\n"); 150 + ip_conntrack_unexpect_related(exp_other); 151 + ip_conntrack_expect_put(exp_other); 152 + } else { 153 + DEBUGP("not found\n"); 154 + } 155 + } else { 156 + /* we need more than simple inversion */ 157 + ip_nat_pptp_hook_expectfn(ct, exp); 158 + } 159 + } 160 + 161 + static int destroy_sibling_or_exp(const struct ip_conntrack_tuple *t) 162 + { 163 + struct ip_conntrack_tuple_hash *h; 164 + struct ip_conntrack_expect *exp; 165 + 166 + DEBUGP("trying to timeout ct or exp for tuple "); 167 + DUMP_TUPLE(t); 168 + 169 + h = ip_conntrack_find_get(t, NULL); 170 + if (h) { 171 + struct ip_conntrack *sibling = tuplehash_to_ctrack(h); 172 + DEBUGP("setting timeout of conntrack %p to 0\n", sibling); 173 + sibling->proto.gre.timeout = 0; 174 + sibling->proto.gre.stream_timeout = 0; 175 + if (del_timer(&sibling->timeout)) 176 + sibling->timeout.function((unsigned long)sibling); 177 + ip_conntrack_put(sibling); 178 + return 1; 179 + } else { 180 + exp = ip_conntrack_expect_find(t); 181 + if (exp) { 182 + DEBUGP("unexpect_related of expect %p\n", exp); 183 + ip_conntrack_unexpect_related(exp); 184 + ip_conntrack_expect_put(exp); 185 + return 1; 186 + } 187 + } 188 + 189 + return 0; 190 + } 191 + 192 + 193 + /* timeout GRE data connections */ 194 + static void pptp_destroy_siblings(struct ip_conntrack *ct) 195 + { 196 + struct ip_conntrack_tuple t; 197 + 198 + /* Since ct->sibling_list has literally rusted away in 2.6.11, 199 + * we now need another way to find out about our sibling 200 + * contrack and expects... 
-HW */ 201 + 202 + /* try original (pns->pac) tuple */ 203 + memcpy(&t, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, sizeof(t)); 204 + t.dst.protonum = IPPROTO_GRE; 205 + t.src.u.gre.key = htons(ct->help.ct_pptp_info.pns_call_id); 206 + t.dst.u.gre.key = htons(ct->help.ct_pptp_info.pac_call_id); 207 + 208 + if (!destroy_sibling_or_exp(&t)) 209 + DEBUGP("failed to timeout original pns->pac ct/exp\n"); 210 + 211 + /* try reply (pac->pns) tuple */ 212 + memcpy(&t, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, sizeof(t)); 213 + t.dst.protonum = IPPROTO_GRE; 214 + t.src.u.gre.key = htons(ct->help.ct_pptp_info.pac_call_id); 215 + t.dst.u.gre.key = htons(ct->help.ct_pptp_info.pns_call_id); 216 + 217 + if (!destroy_sibling_or_exp(&t)) 218 + DEBUGP("failed to timeout reply pac->pns ct/exp\n"); 219 + } 220 + 221 + /* expect GRE connections (PNS->PAC and PAC->PNS direction) */ 222 + static inline int 223 + exp_gre(struct ip_conntrack *master, 224 + u_int32_t seq, 225 + __be16 callid, 226 + __be16 peer_callid) 227 + { 228 + struct ip_conntrack_tuple inv_tuple; 229 + struct ip_conntrack_tuple exp_tuples[] = { 230 + /* tuple in original direction, PNS->PAC */ 231 + { .src = { .ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip, 232 + .u = { .gre = { .key = peer_callid } } 233 + }, 234 + .dst = { .ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip, 235 + .u = { .gre = { .key = callid } }, 236 + .protonum = IPPROTO_GRE 237 + }, 238 + }, 239 + /* tuple in reply direction, PAC->PNS */ 240 + { .src = { .ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip, 241 + .u = { .gre = { .key = callid } } 242 + }, 243 + .dst = { .ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip, 244 + .u = { .gre = { .key = peer_callid } }, 245 + .protonum = IPPROTO_GRE 246 + }, 247 + } 248 + }; 249 + struct ip_conntrack_expect *exp_orig, *exp_reply; 250 + int ret = 1; 251 + 252 + exp_orig = ip_conntrack_expect_alloc(master); 253 + if (exp_orig == NULL) 254 + goto out; 255 + 256 + exp_reply = 
ip_conntrack_expect_alloc(master); 257 + if (exp_reply == NULL) 258 + goto out_put_orig; 259 + 260 + memcpy(&exp_orig->tuple, &exp_tuples[0], sizeof(exp_orig->tuple)); 261 + 262 + exp_orig->mask.src.ip = 0xffffffff; 263 + exp_orig->mask.src.u.all = 0; 264 + exp_orig->mask.dst.u.all = 0; 265 + exp_orig->mask.dst.u.gre.key = htons(0xffff); 266 + exp_orig->mask.dst.ip = 0xffffffff; 267 + exp_orig->mask.dst.protonum = 0xff; 268 + 269 + exp_orig->master = master; 270 + exp_orig->expectfn = pptp_expectfn; 271 + exp_orig->flags = 0; 272 + 273 + exp_orig->dir = IP_CT_DIR_ORIGINAL; 274 + 275 + /* both expectations are identical apart from tuple */ 276 + memcpy(exp_reply, exp_orig, sizeof(*exp_reply)); 277 + memcpy(&exp_reply->tuple, &exp_tuples[1], sizeof(exp_reply->tuple)); 278 + 279 + exp_reply->dir = !exp_orig->dir; 280 + 281 + if (ip_nat_pptp_hook_exp_gre) 282 + ret = ip_nat_pptp_hook_exp_gre(exp_orig, exp_reply); 283 + else { 284 + 285 + DEBUGP("calling expect_related PNS->PAC"); 286 + DUMP_TUPLE(&exp_orig->tuple); 287 + 288 + if (ip_conntrack_expect_related(exp_orig) != 0) { 289 + DEBUGP("cannot expect_related()\n"); 290 + goto out_put_both; 291 + } 292 + 293 + DEBUGP("calling expect_related PAC->PNS"); 294 + DUMP_TUPLE(&exp_reply->tuple); 295 + 296 + if (ip_conntrack_expect_related(exp_reply) != 0) { 297 + DEBUGP("cannot expect_related()\n"); 298 + goto out_unexpect_orig; 299 + } 300 + 301 + /* Add GRE keymap entries */ 302 + if (ip_ct_gre_keymap_add(master, &exp_reply->tuple, 0) != 0) { 303 + DEBUGP("cannot keymap_add() exp\n"); 304 + goto out_unexpect_both; 305 + } 306 + 307 + invert_tuplepr(&inv_tuple, &exp_reply->tuple); 308 + if (ip_ct_gre_keymap_add(master, &inv_tuple, 1) != 0) { 309 + ip_ct_gre_keymap_destroy(master); 310 + DEBUGP("cannot keymap_add() exp_inv\n"); 311 + goto out_unexpect_both; 312 + } 313 + ret = 0; 314 + } 315 + 316 + out_put_both: 317 + ip_conntrack_expect_put(exp_reply); 318 + out_put_orig: 319 + ip_conntrack_expect_put(exp_orig); 320 + 
out: 321 + return ret; 322 + 323 + out_unexpect_both: 324 + ip_conntrack_unexpect_related(exp_reply); 325 + out_unexpect_orig: 326 + ip_conntrack_unexpect_related(exp_orig); 327 + goto out_put_both; 328 + } 329 + 330 + static inline int 331 + pptp_inbound_pkt(struct sk_buff **pskb, 332 + struct tcphdr *tcph, 333 + unsigned int nexthdr_off, 334 + unsigned int datalen, 335 + struct ip_conntrack *ct, 336 + enum ip_conntrack_info ctinfo) 337 + { 338 + struct PptpControlHeader _ctlh, *ctlh; 339 + unsigned int reqlen; 340 + union pptp_ctrl_union _pptpReq, *pptpReq; 341 + struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info; 342 + u_int16_t msg; 343 + __be16 *cid, *pcid; 344 + u_int32_t seq; 345 + 346 + ctlh = skb_header_pointer(*pskb, nexthdr_off, sizeof(_ctlh), &_ctlh); 347 + if (!ctlh) { 348 + DEBUGP("error during skb_header_pointer\n"); 349 + return NF_ACCEPT; 350 + } 351 + nexthdr_off += sizeof(_ctlh); 352 + datalen -= sizeof(_ctlh); 353 + 354 + reqlen = datalen; 355 + if (reqlen > sizeof(*pptpReq)) 356 + reqlen = sizeof(*pptpReq); 357 + pptpReq = skb_header_pointer(*pskb, nexthdr_off, reqlen, &_pptpReq); 358 + if (!pptpReq) { 359 + DEBUGP("error during skb_header_pointer\n"); 360 + return NF_ACCEPT; 361 + } 362 + 363 + msg = ntohs(ctlh->messageType); 364 + DEBUGP("inbound control message %s\n", pptp_msg_name[msg]); 365 + 366 + switch (msg) { 367 + case PPTP_START_SESSION_REPLY: 368 + if (reqlen < sizeof(_pptpReq.srep)) { 369 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 370 + break; 371 + } 372 + 373 + /* server confirms new control session */ 374 + if (info->sstate < PPTP_SESSION_REQUESTED) { 375 + DEBUGP("%s without START_SESS_REQUEST\n", 376 + pptp_msg_name[msg]); 377 + break; 378 + } 379 + if (pptpReq->srep.resultCode == PPTP_START_OK) 380 + info->sstate = PPTP_SESSION_CONFIRMED; 381 + else 382 + info->sstate = PPTP_SESSION_ERROR; 383 + break; 384 + 385 + case PPTP_STOP_SESSION_REPLY: 386 + if (reqlen < sizeof(_pptpReq.strep)) { 387 + DEBUGP("%s: short 
packet\n", pptp_msg_name[msg]); 388 + break; 389 + } 390 + 391 + /* server confirms end of control session */ 392 + if (info->sstate > PPTP_SESSION_STOPREQ) { 393 + DEBUGP("%s without STOP_SESS_REQUEST\n", 394 + pptp_msg_name[msg]); 395 + break; 396 + } 397 + if (pptpReq->strep.resultCode == PPTP_STOP_OK) 398 + info->sstate = PPTP_SESSION_NONE; 399 + else 400 + info->sstate = PPTP_SESSION_ERROR; 401 + break; 402 + 403 + case PPTP_OUT_CALL_REPLY: 404 + if (reqlen < sizeof(_pptpReq.ocack)) { 405 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 406 + break; 407 + } 408 + 409 + /* server accepted call, we now expect GRE frames */ 410 + if (info->sstate != PPTP_SESSION_CONFIRMED) { 411 + DEBUGP("%s but no session\n", pptp_msg_name[msg]); 412 + break; 413 + } 414 + if (info->cstate != PPTP_CALL_OUT_REQ && 415 + info->cstate != PPTP_CALL_OUT_CONF) { 416 + DEBUGP("%s without OUTCALL_REQ\n", pptp_msg_name[msg]); 417 + break; 418 + } 419 + if (pptpReq->ocack.resultCode != PPTP_OUTCALL_CONNECT) { 420 + info->cstate = PPTP_CALL_NONE; 421 + break; 422 + } 423 + 424 + cid = &pptpReq->ocack.callID; 425 + pcid = &pptpReq->ocack.peersCallID; 426 + 427 + info->pac_call_id = ntohs(*cid); 428 + 429 + if (htons(info->pns_call_id) != *pcid) { 430 + DEBUGP("%s for unknown callid %u\n", 431 + pptp_msg_name[msg], ntohs(*pcid)); 432 + break; 433 + } 434 + 435 + DEBUGP("%s, CID=%X, PCID=%X\n", pptp_msg_name[msg], 436 + ntohs(*cid), ntohs(*pcid)); 437 + 438 + info->cstate = PPTP_CALL_OUT_CONF; 439 + 440 + seq = ntohl(tcph->seq) + sizeof(struct pptp_pkt_hdr) 441 + + sizeof(struct PptpControlHeader) 442 + + ((void *)pcid - (void *)pptpReq); 443 + 444 + if (exp_gre(ct, seq, *cid, *pcid) != 0) 445 + printk("ip_conntrack_pptp: error during exp_gre\n"); 446 + break; 447 + 448 + case PPTP_IN_CALL_REQUEST: 449 + if (reqlen < sizeof(_pptpReq.icack)) { 450 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 451 + break; 452 + } 453 + 454 + /* server tells us about incoming call request */ 455 + if 
(info->sstate != PPTP_SESSION_CONFIRMED) { 456 + DEBUGP("%s but no session\n", pptp_msg_name[msg]); 457 + break; 458 + } 459 + pcid = &pptpReq->icack.peersCallID; 460 + DEBUGP("%s, PCID=%X\n", pptp_msg_name[msg], ntohs(*pcid)); 461 + info->cstate = PPTP_CALL_IN_REQ; 462 + info->pac_call_id = ntohs(*pcid); 463 + break; 464 + 465 + case PPTP_IN_CALL_CONNECT: 466 + if (reqlen < sizeof(_pptpReq.iccon)) { 467 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 468 + break; 469 + } 470 + 471 + /* server tells us about incoming call established */ 472 + if (info->sstate != PPTP_SESSION_CONFIRMED) { 473 + DEBUGP("%s but no session\n", pptp_msg_name[msg]); 474 + break; 475 + } 476 + if (info->cstate != PPTP_CALL_IN_REP 477 + && info->cstate != PPTP_CALL_IN_CONF) { 478 + DEBUGP("%s but never sent IN_CALL_REPLY\n", 479 + pptp_msg_name[msg]); 480 + break; 481 + } 482 + 483 + pcid = &pptpReq->iccon.peersCallID; 484 + cid = &info->pac_call_id; 485 + 486 + if (info->pns_call_id != ntohs(*pcid)) { 487 + DEBUGP("%s for unknown CallID %u\n", 488 + pptp_msg_name[msg], ntohs(*pcid)); 489 + break; 490 + } 491 + 492 + DEBUGP("%s, PCID=%X\n", pptp_msg_name[msg], ntohs(*pcid)); 493 + info->cstate = PPTP_CALL_IN_CONF; 494 + 495 + /* we expect a GRE connection from PAC to PNS */ 496 + seq = ntohl(tcph->seq) + sizeof(struct pptp_pkt_hdr) 497 + + sizeof(struct PptpControlHeader) 498 + + ((void *)pcid - (void *)pptpReq); 499 + 500 + if (exp_gre(ct, seq, *cid, *pcid) != 0) 501 + printk("ip_conntrack_pptp: error during exp_gre\n"); 502 + 503 + break; 504 + 505 + case PPTP_CALL_DISCONNECT_NOTIFY: 506 + if (reqlen < sizeof(_pptpReq.disc)) { 507 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 508 + break; 509 + } 510 + 511 + /* server confirms disconnect */ 512 + cid = &pptpReq->disc.callID; 513 + DEBUGP("%s, CID=%X\n", pptp_msg_name[msg], ntohs(*cid)); 514 + info->cstate = PPTP_CALL_NONE; 515 + 516 + /* untrack this call id, unexpect GRE packets */ 517 + pptp_destroy_siblings(ct); 518 + break;
519 + 520 + case PPTP_WAN_ERROR_NOTIFY: 521 + break; 522 + 523 + case PPTP_ECHO_REQUEST: 524 + case PPTP_ECHO_REPLY: 525 + /* I don't have to explain these ;) */ 526 + break; 527 + default: 528 + DEBUGP("invalid %s (TY=%d)\n", (msg <= PPTP_MSG_MAX) 529 + ? pptp_msg_name[msg]:pptp_msg_name[0], msg); 530 + break; 531 + } 532 + 533 + 534 + if (ip_nat_pptp_hook_inbound) 535 + return ip_nat_pptp_hook_inbound(pskb, ct, ctinfo, ctlh, 536 + pptpReq); 537 + 538 + return NF_ACCEPT; 539 + 540 + } 541 + 542 + static inline int 543 + pptp_outbound_pkt(struct sk_buff **pskb, 544 + struct tcphdr *tcph, 545 + unsigned int nexthdr_off, 546 + unsigned int datalen, 547 + struct ip_conntrack *ct, 548 + enum ip_conntrack_info ctinfo) 549 + { 550 + struct PptpControlHeader _ctlh, *ctlh; 551 + unsigned int reqlen; 552 + union pptp_ctrl_union _pptpReq, *pptpReq; 553 + struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info; 554 + u_int16_t msg; 555 + __be16 *cid, *pcid; 556 + 557 + ctlh = skb_header_pointer(*pskb, nexthdr_off, sizeof(_ctlh), &_ctlh); 558 + if (!ctlh) 559 + return NF_ACCEPT; 560 + nexthdr_off += sizeof(_ctlh); 561 + datalen -= sizeof(_ctlh); 562 + 563 + reqlen = datalen; 564 + if (reqlen > sizeof(*pptpReq)) 565 + reqlen = sizeof(*pptpReq); 566 + pptpReq = skb_header_pointer(*pskb, nexthdr_off, reqlen, &_pptpReq); 567 + if (!pptpReq) 568 + return NF_ACCEPT; 569 + 570 + msg = ntohs(ctlh->messageType); 571 + DEBUGP("outbound control message %s\n", pptp_msg_name[msg]); 572 + 573 + switch (msg) { 574 + case PPTP_START_SESSION_REQUEST: 575 + /* client requests for new control session */ 576 + if (info->sstate != PPTP_SESSION_NONE) { 577 + DEBUGP("%s but we already have one", 578 + pptp_msg_name[msg]); 579 + } 580 + info->sstate = PPTP_SESSION_REQUESTED; 581 + break; 582 + case PPTP_STOP_SESSION_REQUEST: 583 + /* client requests end of control session */ 584 + info->sstate = PPTP_SESSION_STOPREQ; 585 + break; 586 + 587 + case PPTP_OUT_CALL_REQUEST: 588 + if (reqlen < 
sizeof(_pptpReq.ocreq)) { 589 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 590 + /* FIXME: break; */ 591 + } 592 + 593 + /* client initiating connection to server */ 594 + if (info->sstate != PPTP_SESSION_CONFIRMED) { 595 + DEBUGP("%s but no session\n", 596 + pptp_msg_name[msg]); 597 + break; 598 + } 599 + info->cstate = PPTP_CALL_OUT_REQ; 600 + /* track PNS call id */ 601 + cid = &pptpReq->ocreq.callID; 602 + DEBUGP("%s, CID=%X\n", pptp_msg_name[msg], ntohs(*cid)); 603 + info->pns_call_id = ntohs(*cid); 604 + break; 605 + case PPTP_IN_CALL_REPLY: 606 + if (reqlen < sizeof(_pptpReq.icack)) { 607 + DEBUGP("%s: short packet\n", pptp_msg_name[msg]); 608 + break; 609 + } 610 + 611 + /* client answers incoming call */ 612 + if (info->cstate != PPTP_CALL_IN_REQ 613 + && info->cstate != PPTP_CALL_IN_REP) { 614 + DEBUGP("%s without incall_req\n", 615 + pptp_msg_name[msg]); 616 + break; 617 + } 618 + if (pptpReq->icack.resultCode != PPTP_INCALL_ACCEPT) { 619 + info->cstate = PPTP_CALL_NONE; 620 + break; 621 + } 622 + pcid = &pptpReq->icack.peersCallID; 623 + if (info->pac_call_id != ntohs(*pcid)) { 624 + DEBUGP("%s for unknown call %u\n", 625 + pptp_msg_name[msg], ntohs(*pcid)); 626 + break; 627 + } 628 + DEBUGP("%s, CID=%X\n", pptp_msg_name[msg], ntohs(*pcid)); 629 + /* part two of the three-way handshake */ 630 + info->cstate = PPTP_CALL_IN_REP; 631 + info->pns_call_id = ntohs(pptpReq->icack.callID); 632 + break; 633 + 634 + case PPTP_CALL_CLEAR_REQUEST: 635 + /* client requests hangup of call */ 636 + if (info->sstate != PPTP_SESSION_CONFIRMED) { 637 + DEBUGP("CLEAR_CALL but no session\n"); 638 + break; 639 + } 640 + /* FUTURE: iterate over all calls and check if 641 + * call ID is valid. 
We don't do this without newnat, 642 + * because we only know about last call */ 643 + info->cstate = PPTP_CALL_CLEAR_REQ; 644 + break; 645 + case PPTP_SET_LINK_INFO: 646 + break; 647 + case PPTP_ECHO_REQUEST: 648 + case PPTP_ECHO_REPLY: 649 + /* I don't have to explain these ;) */ 650 + break; 651 + default: 652 + DEBUGP("invalid %s (TY=%d)\n", (msg <= PPTP_MSG_MAX)? 653 + pptp_msg_name[msg]:pptp_msg_name[0], msg); 654 + /* unknown: no need to create GRE masq table entry */ 655 + break; 656 + } 657 + 658 + if (ip_nat_pptp_hook_outbound) 659 + return ip_nat_pptp_hook_outbound(pskb, ct, ctinfo, ctlh, 660 + pptpReq); 661 + 662 + return NF_ACCEPT; 663 + } 664 + 665 + 666 + /* track caller id inside control connection, call expect_related */ 667 + static int 668 + conntrack_pptp_help(struct sk_buff **pskb, 669 + struct ip_conntrack *ct, enum ip_conntrack_info ctinfo) 670 + 671 + { 672 + struct pptp_pkt_hdr _pptph, *pptph; 673 + struct tcphdr _tcph, *tcph; 674 + u_int32_t tcplen = (*pskb)->len - (*pskb)->nh.iph->ihl * 4; 675 + u_int32_t datalen; 676 + int dir = CTINFO2DIR(ctinfo); 677 + struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info; 678 + unsigned int nexthdr_off; 679 + 680 + int oldsstate, oldcstate; 681 + int ret; 682 + 683 + /* don't do any tracking before tcp handshake complete */ 684 + if (ctinfo != IP_CT_ESTABLISHED 685 + && ctinfo != IP_CT_ESTABLISHED+IP_CT_IS_REPLY) { 686 + DEBUGP("ctinfo = %u, skipping\n", ctinfo); 687 + return NF_ACCEPT; 688 + } 689 + 690 + nexthdr_off = (*pskb)->nh.iph->ihl*4; 691 + tcph = skb_header_pointer(*pskb, nexthdr_off, sizeof(_tcph), &_tcph); 692 + BUG_ON(!tcph); 693 + nexthdr_off += tcph->doff * 4; 694 + datalen = tcplen - tcph->doff * 4; 695 + 696 + if (tcph->fin || tcph->rst) { 697 + DEBUGP("RST/FIN received, timeouting GRE\n"); 698 + /* can't do this after real newnat */ 699 + info->cstate = PPTP_CALL_NONE; 700 + 701 + /* untrack this call id, unexpect GRE packets */ 702 + pptp_destroy_siblings(ct); 703 + } 704 + 705 + 
pptph = skb_header_pointer(*pskb, nexthdr_off, sizeof(_pptph), &_pptph); 706 + if (!pptph) { 707 + DEBUGP("no full PPTP header, can't track\n"); 708 + return NF_ACCEPT; 709 + } 710 + nexthdr_off += sizeof(_pptph); 711 + datalen -= sizeof(_pptph); 712 + 713 + /* if it's not a control message we can't do anything with it */ 714 + if (ntohs(pptph->packetType) != PPTP_PACKET_CONTROL || 715 + ntohl(pptph->magicCookie) != PPTP_MAGIC_COOKIE) { 716 + DEBUGP("not a control packet\n"); 717 + return NF_ACCEPT; 718 + } 719 + 720 + oldsstate = info->sstate; 721 + oldcstate = info->cstate; 722 + 723 + spin_lock_bh(&ip_pptp_lock); 724 + 725 + /* FIXME: We just blindly assume that the control connection is always 726 + * established from PNS->PAC. However, RFC makes no guarantee */ 727 + if (dir == IP_CT_DIR_ORIGINAL) 728 + /* client -> server (PNS -> PAC) */ 729 + ret = pptp_outbound_pkt(pskb, tcph, nexthdr_off, datalen, ct, 730 + ctinfo); 731 + else 732 + /* server -> client (PAC -> PNS) */ 733 + ret = pptp_inbound_pkt(pskb, tcph, nexthdr_off, datalen, ct, 734 + ctinfo); 735 + DEBUGP("sstate: %d->%d, cstate: %d->%d\n", 736 + oldsstate, info->sstate, oldcstate, info->cstate); 737 + spin_unlock_bh(&ip_pptp_lock); 738 + 739 + return ret; 740 + } 741 + 742 + /* control protocol helper */ 743 + static struct ip_conntrack_helper pptp = { 744 + .list = { NULL, NULL }, 745 + .name = "pptp", 746 + .me = THIS_MODULE, 747 + .max_expected = 2, 748 + .timeout = 5 * 60, 749 + .tuple = { .src = { .ip = 0, 750 + .u = { .tcp = { .port = 751 + __constant_htons(PPTP_CONTROL_PORT) } } 752 + }, 753 + .dst = { .ip = 0, 754 + .u = { .all = 0 }, 755 + .protonum = IPPROTO_TCP 756 + } 757 + }, 758 + .mask = { .src = { .ip = 0, 759 + .u = { .tcp = { .port = __constant_htons(0xffff) } } 760 + }, 761 + .dst = { .ip = 0, 762 + .u = { .all = 0 }, 763 + .protonum = 0xff 764 + } 765 + }, 766 + .help = conntrack_pptp_help 767 + }; 768 + 769 + extern void __exit ip_ct_proto_gre_fini(void); 770 + extern int __init 
ip_ct_proto_gre_init(void); 771 + 772 + /* ip_conntrack_pptp initialization */ 773 + static int __init init(void) 774 + { 775 + int retcode; 776 + 777 + retcode = ip_ct_proto_gre_init(); 778 + if (retcode < 0) 779 + return retcode; 780 + 781 + DEBUGP(" registering helper\n"); 782 + if ((retcode = ip_conntrack_helper_register(&pptp))) { 783 + printk(KERN_ERR "Unable to register conntrack application " 784 + "helper for pptp: %d\n", retcode); 785 + ip_ct_proto_gre_fini(); 786 + return retcode; 787 + } 788 + 789 + printk("ip_conntrack_pptp version %s loaded\n", IP_CT_PPTP_VERSION); 790 + return 0; 791 + } 792 + 793 + static void __exit fini(void) 794 + { 795 + ip_conntrack_helper_unregister(&pptp); 796 + ip_ct_proto_gre_fini(); 797 + printk("ip_conntrack_pptp version %s unloaded\n", IP_CT_PPTP_VERSION); 798 + } 799 + 800 + module_init(init); 801 + module_exit(fini); 802 + 803 + EXPORT_SYMBOL(ip_nat_pptp_hook_outbound); 804 + EXPORT_SYMBOL(ip_nat_pptp_hook_inbound); 805 + EXPORT_SYMBOL(ip_nat_pptp_hook_exp_gre); 806 + EXPORT_SYMBOL(ip_nat_pptp_hook_expectfn);
+1 -1
net/ipv4/netfilter/ip_conntrack_netbios_ns.c
··· 91 91 ip_conntrack_expect_related(exp); 92 92 ip_conntrack_expect_put(exp); 93 93 94 - ip_ct_refresh_acct(ct, ctinfo, NULL, timeout * HZ); 94 + ip_ct_refresh(ct, *pskb, timeout * HZ); 95 95 out: 96 96 return NF_ACCEPT; 97 97 }
+2 -2
net/ipv4/netfilter/ip_conntrack_netlink.c
··· 1270 1270 if (err < 0) 1271 1271 return err; 1272 1272 1273 - exp = ip_conntrack_expect_find_get(&tuple); 1273 + exp = ip_conntrack_expect_find(&tuple); 1274 1274 if (!exp) 1275 1275 return -ENOENT; 1276 1276 ··· 1318 1318 return err; 1319 1319 1320 1320 /* bump usage count to 2 */ 1321 - exp = ip_conntrack_expect_find_get(&tuple); 1321 + exp = ip_conntrack_expect_find(&tuple); 1322 1322 if (!exp) 1323 1323 return -ENOENT; 1324 1324
+327
net/ipv4/netfilter/ip_conntrack_proto_gre.c
··· 1 + /* 2 + * ip_conntrack_proto_gre.c - Version 3.0 3 + * 4 + * Connection tracking protocol helper module for GRE. 5 + * 6 + * GRE is a generic encapsulation protocol, which is generally not very 7 + * well suited for NAT, as it has no protocol-specific part such as port numbers. 8 + * 9 + * It has an optional key field, which may help us distinguish two 10 + * connections between the same two hosts. 11 + * 12 + * GRE is defined in RFC 1701 and RFC 1702, as well as RFC 2784 13 + * 14 + * PPTP is built on top of a modified version of GRE, and has a mandatory 15 + * field called "CallID", which serves us for the same purpose as the key 16 + * field in plain GRE. 17 + * 18 + * Documentation about PPTP can be found in RFC 2637 19 + * 20 + * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org> 21 + * 22 + * Development of this code funded by Astaro AG (http://www.astaro.com/) 23 + * 24 + */ 25 + 26 + #include <linux/config.h> 27 + #include <linux/module.h> 28 + #include <linux/types.h> 29 + #include <linux/timer.h> 30 + #include <linux/netfilter.h> 31 + #include <linux/ip.h> 32 + #include <linux/in.h> 33 + #include <linux/list.h> 34 + 35 + static DEFINE_RWLOCK(ip_ct_gre_lock); 36 + #define ASSERT_READ_LOCK(x) 37 + #define ASSERT_WRITE_LOCK(x) 38 + 39 + #include <linux/netfilter_ipv4/listhelp.h> 40 + #include <linux/netfilter_ipv4/ip_conntrack_protocol.h> 41 + #include <linux/netfilter_ipv4/ip_conntrack_helper.h> 42 + #include <linux/netfilter_ipv4/ip_conntrack_core.h> 43 + 44 + #include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h> 45 + #include <linux/netfilter_ipv4/ip_conntrack_pptp.h> 46 + 47 + MODULE_LICENSE("GPL"); 48 + MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>"); 49 + MODULE_DESCRIPTION("netfilter connection tracking protocol helper for GRE"); 50 + 51 + /* shamelessly stolen from ip_conntrack_proto_udp.c */ 52 + #define GRE_TIMEOUT (30*HZ) 53 + #define GRE_STREAM_TIMEOUT (180*HZ) 54 + 55 + #if 0 56 + #define DEBUGP(format, args...)
printk(KERN_DEBUG "%s:%s: " format, __FILE__, __FUNCTION__, ## args) 57 + #define DUMP_TUPLE_GRE(x) printk("%u.%u.%u.%u:0x%x -> %u.%u.%u.%u:0x%x\n", \ 58 + NIPQUAD((x)->src.ip), ntohs((x)->src.u.gre.key), \ 59 + NIPQUAD((x)->dst.ip), ntohs((x)->dst.u.gre.key)) 60 + #else 61 + #define DEBUGP(x, args...) 62 + #define DUMP_TUPLE_GRE(x) 63 + #endif 64 + 65 + /* GRE KEYMAP HANDLING FUNCTIONS */ 66 + static LIST_HEAD(gre_keymap_list); 67 + 68 + static inline int gre_key_cmpfn(const struct ip_ct_gre_keymap *km, 69 + const struct ip_conntrack_tuple *t) 70 + { 71 + return ((km->tuple.src.ip == t->src.ip) && 72 + (km->tuple.dst.ip == t->dst.ip) && 73 + (km->tuple.dst.protonum == t->dst.protonum) && 74 + (km->tuple.dst.u.all == t->dst.u.all)); 75 + } 76 + 77 + /* look up the source key for a given tuple */ 78 + static u_int32_t gre_keymap_lookup(struct ip_conntrack_tuple *t) 79 + { 80 + struct ip_ct_gre_keymap *km; 81 + u_int32_t key = 0; 82 + 83 + read_lock_bh(&ip_ct_gre_lock); 84 + km = LIST_FIND(&gre_keymap_list, gre_key_cmpfn, 85 + struct ip_ct_gre_keymap *, t); 86 + if (km) 87 + key = km->tuple.src.u.gre.key; 88 + read_unlock_bh(&ip_ct_gre_lock); 89 + 90 + DEBUGP("lookup src key 0x%x up key for ", key); 91 + DUMP_TUPLE_GRE(t); 92 + 93 + return key; 94 + } 95 + 96 + /* add a single keymap entry, associate with specified master ct */ 97 + int 98 + ip_ct_gre_keymap_add(struct ip_conntrack *ct, 99 + struct ip_conntrack_tuple *t, int reply) 100 + { 101 + struct ip_ct_gre_keymap **exist_km, *km, *old; 102 + 103 + if (!ct->helper || strcmp(ct->helper->name, "pptp")) { 104 + DEBUGP("refusing to add GRE keymap to non-pptp session\n"); 105 + return -1; 106 + } 107 + 108 + if (!reply) 109 + exist_km = &ct->help.ct_pptp_info.keymap_orig; 110 + else 111 + exist_km = &ct->help.ct_pptp_info.keymap_reply; 112 + 113 + if (*exist_km) { 114 + /* check whether it's a retransmission */ 115 + old = LIST_FIND(&gre_keymap_list, gre_key_cmpfn, 116 + struct ip_ct_gre_keymap *, t); 117 + if (old 
== *exist_km) { 118 + DEBUGP("retransmission\n"); 119 + return 0; 120 + } 121 + 122 + DEBUGP("trying to override keymap_%s for ct %p\n", 123 + reply? "reply":"orig", ct); 124 + return -EEXIST; 125 + } 126 + 127 + km = kmalloc(sizeof(*km), GFP_ATOMIC); 128 + if (!km) 129 + return -ENOMEM; 130 + 131 + memcpy(&km->tuple, t, sizeof(*t)); 132 + *exist_km = km; 133 + 134 + DEBUGP("adding new entry %p: ", km); 135 + DUMP_TUPLE_GRE(&km->tuple); 136 + 137 + write_lock_bh(&ip_ct_gre_lock); 138 + list_append(&gre_keymap_list, km); 139 + write_unlock_bh(&ip_ct_gre_lock); 140 + 141 + return 0; 142 + } 143 + 144 + /* destroy the keymap entries associated with specified master ct */ 145 + void ip_ct_gre_keymap_destroy(struct ip_conntrack *ct) 146 + { 147 + DEBUGP("entering for ct %p\n", ct); 148 + 149 + if (!ct->helper || strcmp(ct->helper->name, "pptp")) { 150 + DEBUGP("refusing to destroy GRE keymap to non-pptp session\n"); 151 + return; 152 + } 153 + 154 + write_lock_bh(&ip_ct_gre_lock); 155 + if (ct->help.ct_pptp_info.keymap_orig) { 156 + DEBUGP("removing %p from list\n", 157 + ct->help.ct_pptp_info.keymap_orig); 158 + list_del(&ct->help.ct_pptp_info.keymap_orig->list); 159 + kfree(ct->help.ct_pptp_info.keymap_orig); 160 + ct->help.ct_pptp_info.keymap_orig = NULL; 161 + } 162 + if (ct->help.ct_pptp_info.keymap_reply) { 163 + DEBUGP("removing %p from list\n", 164 + ct->help.ct_pptp_info.keymap_reply); 165 + list_del(&ct->help.ct_pptp_info.keymap_reply->list); 166 + kfree(ct->help.ct_pptp_info.keymap_reply); 167 + ct->help.ct_pptp_info.keymap_reply = NULL; 168 + } 169 + write_unlock_bh(&ip_ct_gre_lock); 170 + } 171 + 172 + 173 + /* PUBLIC CONNTRACK PROTO HELPER FUNCTIONS */ 174 + 175 + /* invert gre part of tuple */ 176 + static int gre_invert_tuple(struct ip_conntrack_tuple *tuple, 177 + const struct ip_conntrack_tuple *orig) 178 + { 179 + tuple->dst.u.gre.key = orig->src.u.gre.key; 180 + tuple->src.u.gre.key = orig->dst.u.gre.key; 181 + 182 + return 1; 183 + } 184 + 185 + /* 
gre hdr info to tuple */ 186 + static int gre_pkt_to_tuple(const struct sk_buff *skb, 187 + unsigned int dataoff, 188 + struct ip_conntrack_tuple *tuple) 189 + { 190 + struct gre_hdr_pptp _pgrehdr, *pgrehdr; 191 + u_int32_t srckey; 192 + struct gre_hdr _grehdr, *grehdr; 193 + 194 + /* first only delinearize old RFC1701 GRE header */ 195 + grehdr = skb_header_pointer(skb, dataoff, sizeof(_grehdr), &_grehdr); 196 + if (!grehdr || grehdr->version != GRE_VERSION_PPTP) { 197 + /* try to behave like "ip_conntrack_proto_generic" */ 198 + tuple->src.u.all = 0; 199 + tuple->dst.u.all = 0; 200 + return 1; 201 + } 202 + 203 + /* PPTP header is variable length, only need up to the call_id field */ 204 + pgrehdr = skb_header_pointer(skb, dataoff, 8, &_pgrehdr); 205 + if (!pgrehdr) 206 + return 1; 207 + 208 + if (ntohs(grehdr->protocol) != GRE_PROTOCOL_PPTP) { 209 + DEBUGP("GRE_VERSION_PPTP but unknown proto\n"); 210 + return 0; 211 + } 212 + 213 + tuple->dst.u.gre.key = pgrehdr->call_id; 214 + srckey = gre_keymap_lookup(tuple); 215 + tuple->src.u.gre.key = srckey; 216 + 217 + return 1; 218 + } 219 + 220 + /* print gre part of tuple */ 221 + static int gre_print_tuple(struct seq_file *s, 222 + const struct ip_conntrack_tuple *tuple) 223 + { 224 + return seq_printf(s, "srckey=0x%x dstkey=0x%x ", 225 + ntohs(tuple->src.u.gre.key), 226 + ntohs(tuple->dst.u.gre.key)); 227 + } 228 + 229 + /* print private data for conntrack */ 230 + static int gre_print_conntrack(struct seq_file *s, 231 + const struct ip_conntrack *ct) 232 + { 233 + return seq_printf(s, "timeout=%u, stream_timeout=%u ", 234 + (ct->proto.gre.timeout / HZ), 235 + (ct->proto.gre.stream_timeout / HZ)); 236 + } 237 + 238 + /* Returns verdict for packet, and may modify conntrack */ 239 + static int gre_packet(struct ip_conntrack *ct, 240 + const struct sk_buff *skb, 241 + enum ip_conntrack_info conntrackinfo) 242 + { 243 + /* If we've seen traffic both ways, this is a GRE connection. 244 + * Extend timeout. 
*/ 245 + if (ct->status & IPS_SEEN_REPLY) { 246 + ip_ct_refresh_acct(ct, conntrackinfo, skb, 247 + ct->proto.gre.stream_timeout); 248 + /* Also, more likely to be important, and not a probe. */ 249 + set_bit(IPS_ASSURED_BIT, &ct->status); 250 + } else 251 + ip_ct_refresh_acct(ct, conntrackinfo, skb, 252 + ct->proto.gre.timeout); 253 + 254 + return NF_ACCEPT; 255 + } 256 + 257 + /* Called when a new connection for this protocol found. */ 258 + static int gre_new(struct ip_conntrack *ct, 259 + const struct sk_buff *skb) 260 + { 261 + DEBUGP(": "); 262 + DUMP_TUPLE_GRE(&ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple); 263 + 264 + /* initialize to sane value. Ideally a conntrack helper 265 + * (e.g. in case of pptp) is increasing them */ 266 + ct->proto.gre.stream_timeout = GRE_STREAM_TIMEOUT; 267 + ct->proto.gre.timeout = GRE_TIMEOUT; 268 + 269 + return 1; 270 + } 271 + 272 + /* Called when a conntrack entry has already been removed from the hashes 273 + * and is about to be deleted from memory */ 274 + static void gre_destroy(struct ip_conntrack *ct) 275 + { 276 + struct ip_conntrack *master = ct->master; 277 + DEBUGP(" entering\n"); 278 + 279 + if (!master) 280 + DEBUGP("no master !?!\n"); 281 + else 282 + ip_ct_gre_keymap_destroy(master); 283 + } 284 + 285 + /* protocol helper struct */ 286 + static struct ip_conntrack_protocol gre = { 287 + .proto = IPPROTO_GRE, 288 + .name = "gre", 289 + .pkt_to_tuple = gre_pkt_to_tuple, 290 + .invert_tuple = gre_invert_tuple, 291 + .print_tuple = gre_print_tuple, 292 + .print_conntrack = gre_print_conntrack, 293 + .packet = gre_packet, 294 + .new = gre_new, 295 + .destroy = gre_destroy, 296 + .me = THIS_MODULE, 297 + #if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ 298 + defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) 299 + .tuple_to_nfattr = ip_ct_port_tuple_to_nfattr, 300 + .nfattr_to_tuple = ip_ct_port_nfattr_to_tuple, 301 + #endif 302 + }; 303 + 304 + /* ip_conntrack_proto_gre initialization */ 305 + int __init 
ip_ct_proto_gre_init(void) 306 + { 307 + return ip_conntrack_protocol_register(&gre); 308 + } 309 + 310 + void __exit ip_ct_proto_gre_fini(void) 311 + { 312 + struct list_head *pos, *n; 313 + 314 + /* delete all keymap entries */ 315 + write_lock_bh(&ip_ct_gre_lock); 316 + list_for_each_safe(pos, n, &gre_keymap_list) { 317 + DEBUGP("deleting keymap %p at module unload time\n", pos); 318 + list_del(pos); 319 + kfree(pos); 320 + } 321 + write_unlock_bh(&ip_ct_gre_lock); 322 + 323 + ip_conntrack_protocol_unregister(&gre); 324 + } 325 + 326 + EXPORT_SYMBOL(ip_ct_gre_keymap_add); 327 + EXPORT_SYMBOL(ip_ct_gre_keymap_destroy);
+3 -3
net/ipv4/netfilter/ip_conntrack_standalone.c
··· 989 989 EXPORT_SYMBOL(ip_conntrack_helper_register); 990 990 EXPORT_SYMBOL(ip_conntrack_helper_unregister); 991 991 EXPORT_SYMBOL(ip_ct_iterate_cleanup); 992 - EXPORT_SYMBOL(ip_ct_refresh_acct); 992 + EXPORT_SYMBOL(__ip_ct_refresh_acct); 993 993 994 994 EXPORT_SYMBOL(ip_conntrack_expect_alloc); 995 995 EXPORT_SYMBOL(ip_conntrack_expect_put); 996 - EXPORT_SYMBOL_GPL(ip_conntrack_expect_find_get); 996 + EXPORT_SYMBOL_GPL(__ip_conntrack_expect_find); 997 + EXPORT_SYMBOL_GPL(ip_conntrack_expect_find); 997 998 EXPORT_SYMBOL(ip_conntrack_expect_related); 998 999 EXPORT_SYMBOL(ip_conntrack_unexpect_related); 999 1000 EXPORT_SYMBOL_GPL(ip_conntrack_expect_list); 1000 - EXPORT_SYMBOL_GPL(__ip_conntrack_expect_find); 1001 1001 EXPORT_SYMBOL_GPL(ip_ct_unlink_expect); 1002 1002 1003 1003 EXPORT_SYMBOL(ip_conntrack_tuple_taken);
+2
net/ipv4/netfilter/ip_nat_core.c
··· 578 578 579 579 return ret; 580 580 } 581 + EXPORT_SYMBOL_GPL(ip_nat_port_nfattr_to_range); 582 + EXPORT_SYMBOL_GPL(ip_nat_port_range_to_nfattr); 581 583 #endif 582 584 583 585 int __init ip_nat_init(void)
+401
net/ipv4/netfilter/ip_nat_helper_pptp.c
··· 1 + /* 2 + * ip_nat_pptp.c - Version 3.0 3 + * 4 + * NAT support for PPTP (Point to Point Tunneling Protocol). 5 + * PPTP is a protocol for creating virtual private networks. 6 + * It is a specification defined by Microsoft and some vendors 7 + * working with Microsoft. PPTP is built on top of a modified 8 + * version of the Internet Generic Routing Encapsulation Protocol. 9 + * GRE is defined in RFC 1701 and RFC 1702. Documentation of 10 + * PPTP can be found in RFC 2637 11 + * 12 + * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org> 13 + * 14 + * Development of this code funded by Astaro AG (http://www.astaro.com/) 15 + * 16 + * TODO: - NAT to a unique tuple, not to TCP source port 17 + * (needs netfilter tuple reservation) 18 + * 19 + * Changes: 20 + * 2002-02-10 - Version 1.3 21 + * - Use ip_nat_mangle_tcp_packet() because of cloned skb's 22 + * in local connections (Philip Craig <philipc@snapgear.com>) 23 + * - add checks for magicCookie and pptp version 24 + * - make argument list of pptp_{out,in}bound_packet() shorter 25 + * - move to C99 style initializers 26 + * - print version number at module loadtime 27 + * 2003-09-22 - Version 1.5 28 + * - use SNATed tcp sourceport as callid, since we get called before 29 + * TCP header is mangled (Philip Craig <philipc@snapgear.com>) 30 + * 2004-10-22 - Version 2.0 31 + * - kernel 2.6.x version 32 + * 2005-06-10 - Version 3.0 33 + * - kernel >= 2.6.11 version, 34 + * funded by Oxcoda NetBox Blue (http://www.netboxblue.com/) 35 + * 36 + */ 37 + 38 + #include <linux/config.h> 39 + #include <linux/module.h> 40 + #include <linux/ip.h> 41 + #include <linux/tcp.h> 42 + #include <net/tcp.h> 43 + 44 + #include <linux/netfilter_ipv4/ip_nat.h> 45 + #include <linux/netfilter_ipv4/ip_nat_rule.h> 46 + #include <linux/netfilter_ipv4/ip_nat_helper.h> 47 + #include <linux/netfilter_ipv4/ip_nat_pptp.h> 48 + #include <linux/netfilter_ipv4/ip_conntrack_core.h> 49 + #include <linux/netfilter_ipv4/ip_conntrack_helper.h> 50 +
#include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h> 51 + #include <linux/netfilter_ipv4/ip_conntrack_pptp.h> 52 + 53 + #define IP_NAT_PPTP_VERSION "3.0" 54 + 55 + MODULE_LICENSE("GPL"); 56 + MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>"); 57 + MODULE_DESCRIPTION("Netfilter NAT helper module for PPTP"); 58 + 59 + 60 + #if 0 61 + extern const char *pptp_msg_name[]; 62 + #define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, \ 63 + __FUNCTION__, ## args) 64 + #else 65 + #define DEBUGP(format, args...) 66 + #endif 67 + 68 + static void pptp_nat_expected(struct ip_conntrack *ct, 69 + struct ip_conntrack_expect *exp) 70 + { 71 + struct ip_conntrack *master = ct->master; 72 + struct ip_conntrack_expect *other_exp; 73 + struct ip_conntrack_tuple t; 74 + struct ip_ct_pptp_master *ct_pptp_info; 75 + struct ip_nat_pptp *nat_pptp_info; 76 + 77 + ct_pptp_info = &master->help.ct_pptp_info; 78 + nat_pptp_info = &master->nat.help.nat_pptp_info; 79 + 80 + /* And here goes the grand finale of corrosion... 
*/ 81 + 82 + if (exp->dir == IP_CT_DIR_ORIGINAL) { 83 + DEBUGP("we are PNS->PAC\n"); 84 + /* therefore, build tuple for PAC->PNS */ 85 + t.src.ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip; 86 + t.src.u.gre.key = htons(master->help.ct_pptp_info.pac_call_id); 87 + t.dst.ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip; 88 + t.dst.u.gre.key = htons(master->help.ct_pptp_info.pns_call_id); 89 + t.dst.protonum = IPPROTO_GRE; 90 + } else { 91 + DEBUGP("we are PAC->PNS\n"); 92 + /* build tuple for PNS->PAC */ 93 + t.src.ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip; 94 + t.src.u.gre.key = 95 + htons(master->nat.help.nat_pptp_info.pns_call_id); 96 + t.dst.ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip; 97 + t.dst.u.gre.key = 98 + htons(master->nat.help.nat_pptp_info.pac_call_id); 99 + t.dst.protonum = IPPROTO_GRE; 100 + } 101 + 102 + DEBUGP("trying to unexpect other dir: "); 103 + DUMP_TUPLE(&t); 104 + other_exp = ip_conntrack_expect_find(&t); 105 + if (other_exp) { 106 + ip_conntrack_unexpect_related(other_exp); 107 + ip_conntrack_expect_put(other_exp); 108 + DEBUGP("success\n"); 109 + } else { 110 + DEBUGP("not found!\n"); 111 + } 112 + 113 + ip_nat_follow_master(ct, exp); 114 + } 115 + 116 + /* outbound packets == from PNS to PAC */ 117 + static int 118 + pptp_outbound_pkt(struct sk_buff **pskb, 119 + struct ip_conntrack *ct, 120 + enum ip_conntrack_info ctinfo, 121 + struct PptpControlHeader *ctlh, 122 + union pptp_ctrl_union *pptpReq) 123 + 124 + { 125 + struct ip_ct_pptp_master *ct_pptp_info = &ct->help.ct_pptp_info; 126 + struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info; 127 + 128 + u_int16_t msg, *cid = NULL, new_callid; 129 + 130 + new_callid = htons(ct_pptp_info->pns_call_id); 131 + 132 + switch (msg = ntohs(ctlh->messageType)) { 133 + case PPTP_OUT_CALL_REQUEST: 134 + cid = &pptpReq->ocreq.callID; 135 + /* FIXME: ideally we would want to reserve a call ID 136 + * here. 
current netfilter NAT core is not able to do 137 + * this :( For now we use TCP source port. This breaks 138 + * multiple calls within one control session */ 139 + 140 + /* save original call ID in nat_info */ 141 + nat_pptp_info->pns_call_id = ct_pptp_info->pns_call_id; 142 + 143 + /* don't use tcph->source since we are at a DSTmanip 144 + * hook (e.g. PREROUTING) and pkt is not mangled yet */ 145 + new_callid = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.tcp.port; 146 + 147 + /* save new call ID in ct info */ 148 + ct_pptp_info->pns_call_id = ntohs(new_callid); 149 + break; 150 + case PPTP_IN_CALL_REPLY: 151 + cid = &pptpReq->icreq.callID; 152 + break; 153 + case PPTP_CALL_CLEAR_REQUEST: 154 + cid = &pptpReq->clrreq.callID; 155 + break; 156 + default: 157 + DEBUGP("unknown outbound packet 0x%04x:%s\n", msg, 158 + (msg <= PPTP_MSG_MAX)? 159 + pptp_msg_name[msg]:pptp_msg_name[0]); 160 + /* fall through */ 161 + 162 + case PPTP_SET_LINK_INFO: 163 + /* only need to NAT in case PAC is behind NAT box */ 164 + case PPTP_START_SESSION_REQUEST: 165 + case PPTP_START_SESSION_REPLY: 166 + case PPTP_STOP_SESSION_REQUEST: 167 + case PPTP_STOP_SESSION_REPLY: 168 + case PPTP_ECHO_REQUEST: 169 + case PPTP_ECHO_REPLY: 170 + /* no need to alter packet */ 171 + return NF_ACCEPT; 172 + } 173 + 174 + /* only OUT_CALL_REQUEST, IN_CALL_REPLY, CALL_CLEAR_REQUEST pass 175 + * down to here */ 176 + 177 + IP_NF_ASSERT(cid); 178 + 179 + DEBUGP("altering call id from 0x%04x to 0x%04x\n", 180 + ntohs(*cid), ntohs(new_callid)); 181 + 182 + /* mangle packet */ 183 + if (ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, 184 + (void *)cid - ((void *)ctlh - sizeof(struct pptp_pkt_hdr)), 185 + sizeof(new_callid), 186 + (char *)&new_callid, 187 + sizeof(new_callid)) == 0) 188 + return NF_DROP; 189 + 190 + return NF_ACCEPT; 191 + } 192 + 193 + static int 194 + pptp_exp_gre(struct ip_conntrack_expect *expect_orig, 195 + struct ip_conntrack_expect *expect_reply) 196 + { 197 + struct ip_ct_pptp_master 
*ct_pptp_info = 198 + &expect_orig->master->help.ct_pptp_info; 199 + struct ip_nat_pptp *nat_pptp_info = 200 + &expect_orig->master->nat.help.nat_pptp_info; 201 + 202 + struct ip_conntrack *ct = expect_orig->master; 203 + 204 + struct ip_conntrack_tuple inv_t; 205 + struct ip_conntrack_tuple *orig_t, *reply_t; 206 + 207 + /* save original PAC call ID in nat_info */ 208 + nat_pptp_info->pac_call_id = ct_pptp_info->pac_call_id; 209 + 210 + /* alter expectation */ 211 + orig_t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple; 212 + reply_t = &ct->tuplehash[IP_CT_DIR_REPLY].tuple; 213 + 214 + /* alter expectation for PNS->PAC direction */ 215 + invert_tuplepr(&inv_t, &expect_orig->tuple); 216 + expect_orig->saved_proto.gre.key = htons(nat_pptp_info->pac_call_id); 217 + expect_orig->tuple.src.u.gre.key = htons(nat_pptp_info->pns_call_id); 218 + expect_orig->tuple.dst.u.gre.key = htons(ct_pptp_info->pac_call_id); 219 + inv_t.src.ip = reply_t->src.ip; 220 + inv_t.dst.ip = reply_t->dst.ip; 221 + inv_t.src.u.gre.key = htons(nat_pptp_info->pac_call_id); 222 + inv_t.dst.u.gre.key = htons(ct_pptp_info->pns_call_id); 223 + 224 + if (!ip_conntrack_expect_related(expect_orig)) { 225 + DEBUGP("successfully registered expect\n"); 226 + } else { 227 + DEBUGP("can't expect_related(expect_orig)\n"); 228 + return 1; 229 + } 230 + 231 + /* alter expectation for PAC->PNS direction */ 232 + invert_tuplepr(&inv_t, &expect_reply->tuple); 233 + expect_reply->saved_proto.gre.key = htons(nat_pptp_info->pns_call_id); 234 + expect_reply->tuple.src.u.gre.key = htons(nat_pptp_info->pac_call_id); 235 + expect_reply->tuple.dst.u.gre.key = htons(ct_pptp_info->pns_call_id); 236 + inv_t.src.ip = orig_t->src.ip; 237 + inv_t.dst.ip = orig_t->dst.ip; 238 + inv_t.src.u.gre.key = htons(nat_pptp_info->pns_call_id); 239 + inv_t.dst.u.gre.key = htons(ct_pptp_info->pac_call_id); 240 + 241 + if (!ip_conntrack_expect_related(expect_reply)) { 242 + DEBUGP("successfully registered expect\n"); 243 + } else { 244 + 
DEBUGP("can't expect_related(expect_reply)\n"); 245 + ip_conntrack_unexpect_related(expect_orig); 246 + return 1; 247 + } 248 + 249 + if (ip_ct_gre_keymap_add(ct, &expect_reply->tuple, 0) < 0) { 250 + DEBUGP("can't register original keymap\n"); 251 + ip_conntrack_unexpect_related(expect_orig); 252 + ip_conntrack_unexpect_related(expect_reply); 253 + return 1; 254 + } 255 + 256 + if (ip_ct_gre_keymap_add(ct, &inv_t, 1) < 0) { 257 + DEBUGP("can't register reply keymap\n"); 258 + ip_conntrack_unexpect_related(expect_orig); 259 + ip_conntrack_unexpect_related(expect_reply); 260 + ip_ct_gre_keymap_destroy(ct); 261 + return 1; 262 + } 263 + 264 + return 0; 265 + } 266 + 267 + /* inbound packets == from PAC to PNS */ 268 + static int 269 + pptp_inbound_pkt(struct sk_buff **pskb, 270 + struct ip_conntrack *ct, 271 + enum ip_conntrack_info ctinfo, 272 + struct PptpControlHeader *ctlh, 273 + union pptp_ctrl_union *pptpReq) 274 + { 275 + struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info; 276 + u_int16_t msg, new_cid = 0, new_pcid, *pcid = NULL, *cid = NULL; 277 + 278 + int ret = NF_ACCEPT, rv; 279 + 280 + new_pcid = htons(nat_pptp_info->pns_call_id); 281 + 282 + switch (msg = ntohs(ctlh->messageType)) { 283 + case PPTP_OUT_CALL_REPLY: 284 + pcid = &pptpReq->ocack.peersCallID; 285 + cid = &pptpReq->ocack.callID; 286 + break; 287 + case PPTP_IN_CALL_CONNECT: 288 + pcid = &pptpReq->iccon.peersCallID; 289 + break; 290 + case PPTP_IN_CALL_REQUEST: 291 + /* only need to nat in case PAC is behind NAT box */ 292 + break; 293 + case PPTP_WAN_ERROR_NOTIFY: 294 + pcid = &pptpReq->wanerr.peersCallID; 295 + break; 296 + case PPTP_CALL_DISCONNECT_NOTIFY: 297 + pcid = &pptpReq->disc.callID; 298 + break; 299 + case PPTP_SET_LINK_INFO: 300 + pcid = &pptpReq->setlink.peersCallID; 301 + break; 302 + 303 + default: 304 + DEBUGP("unknown inbound packet %s\n", (msg <= PPTP_MSG_MAX)? 
305 + pptp_msg_name[msg]:pptp_msg_name[0]); 306 + /* fall through */ 307 + 308 + case PPTP_START_SESSION_REQUEST: 309 + case PPTP_START_SESSION_REPLY: 310 + case PPTP_STOP_SESSION_REQUEST: 311 + case PPTP_STOP_SESSION_REPLY: 312 + case PPTP_ECHO_REQUEST: 313 + case PPTP_ECHO_REPLY: 314 + /* no need to alter packet */ 315 + return NF_ACCEPT; 316 + } 317 + 318 + /* only OUT_CALL_REPLY, IN_CALL_CONNECT, IN_CALL_REQUEST, 319 + * WAN_ERROR_NOTIFY, CALL_DISCONNECT_NOTIFY pass down here */ 320 + 321 + /* mangle packet */ 322 + IP_NF_ASSERT(pcid); 323 + DEBUGP("altering peer call id from 0x%04x to 0x%04x\n", 324 + ntohs(*pcid), ntohs(new_pcid)); 325 + 326 + rv = ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, 327 + (void *)pcid - ((void *)ctlh - sizeof(struct pptp_pkt_hdr)), 328 + sizeof(new_pcid), (char *)&new_pcid, 329 + sizeof(new_pcid)); 330 + if (rv != NF_ACCEPT) 331 + return rv; 332 + 333 + if (new_cid) { 334 + IP_NF_ASSERT(cid); 335 + DEBUGP("altering call id from 0x%04x to 0x%04x\n", 336 + ntohs(*cid), ntohs(new_cid)); 337 + rv = ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, 338 + (void *)cid - ((void *)ctlh - sizeof(struct pptp_pkt_hdr)), 339 + sizeof(new_cid), 340 + (char *)&new_cid, 341 + sizeof(new_cid)); 342 + if (rv != NF_ACCEPT) 343 + return rv; 344 + } 345 + 346 + /* check for earlier return value of 'switch' above */ 347 + if (ret != NF_ACCEPT) 348 + return ret; 349 + 350 + /* great, at least we don't need to resize packets */ 351 + return NF_ACCEPT; 352 + } 353 + 354 + 355 + extern int __init ip_nat_proto_gre_init(void); 356 + extern void __exit ip_nat_proto_gre_fini(void); 357 + 358 + static int __init init(void) 359 + { 360 + int ret; 361 + 362 + DEBUGP("%s: registering NAT helper\n", __FILE__); 363 + 364 + ret = ip_nat_proto_gre_init(); 365 + if (ret < 0) 366 + return ret; 367 + 368 + BUG_ON(ip_nat_pptp_hook_outbound); 369 + ip_nat_pptp_hook_outbound = &pptp_outbound_pkt; 370 + 371 + BUG_ON(ip_nat_pptp_hook_inbound); 372 + ip_nat_pptp_hook_inbound = 
&pptp_inbound_pkt; 373 + 374 + BUG_ON(ip_nat_pptp_hook_exp_gre); 375 + ip_nat_pptp_hook_exp_gre = &pptp_exp_gre; 376 + 377 + BUG_ON(ip_nat_pptp_hook_expectfn); 378 + ip_nat_pptp_hook_expectfn = &pptp_nat_expected; 379 + 380 + printk("ip_nat_pptp version %s loaded\n", IP_NAT_PPTP_VERSION); 381 + return 0; 382 + } 383 + 384 + static void __exit fini(void) 385 + { 386 + DEBUGP("cleanup_module\n" ); 387 + 388 + ip_nat_pptp_hook_expectfn = NULL; 389 + ip_nat_pptp_hook_exp_gre = NULL; 390 + ip_nat_pptp_hook_inbound = NULL; 391 + ip_nat_pptp_hook_outbound = NULL; 392 + 393 + ip_nat_proto_gre_fini(); 394 + /* Make sure no one calls it in the meantime */ 395 + synchronize_net(); 396 + 397 + printk("ip_nat_pptp version %s unloaded\n", IP_NAT_PPTP_VERSION); 398 + } 399 + 400 + module_init(init); 401 + module_exit(fini);
+214
net/ipv4/netfilter/ip_nat_proto_gre.c
··· 1 + /* 2 + * ip_nat_proto_gre.c - Version 2.0 3 + * 4 + * NAT protocol helper module for GRE. 5 + * 6 + * GRE is a generic encapsulation protocol, which is generally not very 7 + * suited for NAT, as it has no protocol-specific part such as port numbers. 8 + * 9 + * It has an optional key field, which may help us distinguish two 10 + * connections between the same two hosts. 11 + * 12 + * GRE is defined in RFC 1701 and RFC 1702, as well as RFC 2784 13 + * 14 + * PPTP is built on top of a modified version of GRE, and has a mandatory 15 + * field called "CallID", which serves us for the same purpose as the key 16 + * field in plain GRE. 17 + * 18 + * Documentation about PPTP can be found in RFC 2637 19 + * 20 + * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org> 21 + * 22 + * Development of this code funded by Astaro AG (http://www.astaro.com/) 23 + * 24 + */ 25 + 26 + #include <linux/config.h> 27 + #include <linux/module.h> 28 + #include <linux/ip.h> 29 + #include <linux/netfilter_ipv4/ip_nat.h> 30 + #include <linux/netfilter_ipv4/ip_nat_rule.h> 31 + #include <linux/netfilter_ipv4/ip_nat_protocol.h> 32 + #include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h> 33 + 34 + MODULE_LICENSE("GPL"); 35 + MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>"); 36 + MODULE_DESCRIPTION("Netfilter NAT protocol helper module for GRE"); 37 + 38 + #if 0 39 + #define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, \ 40 + __FUNCTION__, ## args) 41 + #else 42 + #define DEBUGP(x, args...)
43 + #endif 44 + 45 + /* is key in given range between min and max */ 46 + static int 47 + gre_in_range(const struct ip_conntrack_tuple *tuple, 48 + enum ip_nat_manip_type maniptype, 49 + const union ip_conntrack_manip_proto *min, 50 + const union ip_conntrack_manip_proto *max) 51 + { 52 + u_int32_t key; 53 + 54 + if (maniptype == IP_NAT_MANIP_SRC) 55 + key = tuple->src.u.gre.key; 56 + else 57 + key = tuple->dst.u.gre.key; 58 + 59 + return ntohl(key) >= ntohl(min->gre.key) 60 + && ntohl(key) <= ntohl(max->gre.key); 61 + } 62 + 63 + /* generate unique tuple ... */ 64 + static int 65 + gre_unique_tuple(struct ip_conntrack_tuple *tuple, 66 + const struct ip_nat_range *range, 67 + enum ip_nat_manip_type maniptype, 68 + const struct ip_conntrack *conntrack) 69 + { 70 + static u_int16_t key; 71 + u_int16_t *keyptr; 72 + unsigned int min, i, range_size; 73 + 74 + if (maniptype == IP_NAT_MANIP_SRC) 75 + keyptr = &tuple->src.u.gre.key; 76 + else 77 + keyptr = &tuple->dst.u.gre.key; 78 + 79 + if (!(range->flags & IP_NAT_RANGE_PROTO_SPECIFIED)) { 80 + DEBUGP("%p: NATing GRE PPTP\n", conntrack); 81 + min = 1; 82 + range_size = 0xffff; 83 + } else { 84 + min = ntohl(range->min.gre.key); 85 + range_size = ntohl(range->max.gre.key) - min + 1; 86 + } 87 + 88 + DEBUGP("min = %u, range_size = %u\n", min, range_size); 89 + 90 + for (i = 0; i < range_size; i++, key++) { 91 + *keyptr = htonl(min + key % range_size); 92 + if (!ip_nat_used_tuple(tuple, conntrack)) 93 + return 1; 94 + } 95 + 96 + DEBUGP("%p: no NAT mapping\n", conntrack); 97 + 98 + return 0; 99 + } 100 + 101 + /* manipulate a GRE packet according to maniptype */ 102 + static int 103 + gre_manip_pkt(struct sk_buff **pskb, 104 + unsigned int iphdroff, 105 + const struct ip_conntrack_tuple *tuple, 106 + enum ip_nat_manip_type maniptype) 107 + { 108 + struct gre_hdr *greh; 109 + struct gre_hdr_pptp *pgreh; 110 + struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff); 111 + unsigned int hdroff = iphdroff + iph->ihl*4; 
112 + 113 + /* pgreh includes two optional 32bit fields which are not required 114 + * to be there. That's where the magic '8' comes from */ 115 + if (!skb_make_writable(pskb, hdroff + sizeof(*pgreh)-8)) 116 + return 0; 117 + 118 + greh = (void *)(*pskb)->data + hdroff; 119 + pgreh = (struct gre_hdr_pptp *) greh; 120 + 121 + /* we only have destination manip of a packet, since 'source key' 122 + * is not present in the packet itself */ 123 + if (maniptype == IP_NAT_MANIP_DST) { 124 + /* key manipulation is always dest */ 125 + switch (greh->version) { 126 + case 0: 127 + if (!greh->key) { 128 + DEBUGP("can't nat GRE w/o key\n"); 129 + break; 130 + } 131 + if (greh->csum) { 132 + /* FIXME: Never tested this code... */ 133 + *(gre_csum(greh)) = 134 + ip_nat_cheat_check(~*(gre_key(greh)), 135 + tuple->dst.u.gre.key, 136 + *(gre_csum(greh))); 137 + } 138 + *(gre_key(greh)) = tuple->dst.u.gre.key; 139 + break; 140 + case GRE_VERSION_PPTP: 141 + DEBUGP("call_id -> 0x%04x\n", 142 + ntohl(tuple->dst.u.gre.key)); 143 + pgreh->call_id = htons(ntohl(tuple->dst.u.gre.key)); 144 + break; 145 + default: 146 + DEBUGP("can't nat unknown GRE version\n"); 147 + return 0; 148 + break; 149 + } 150 + } 151 + return 1; 152 + } 153 + 154 + /* print out a nat tuple */ 155 + static unsigned int 156 + gre_print(char *buffer, 157 + const struct ip_conntrack_tuple *match, 158 + const struct ip_conntrack_tuple *mask) 159 + { 160 + unsigned int len = 0; 161 + 162 + if (mask->src.u.gre.key) 163 + len += sprintf(buffer + len, "srckey=0x%x ", 164 + ntohl(match->src.u.gre.key)); 165 + 166 + if (mask->dst.u.gre.key) 167 + len += sprintf(buffer + len, "dstkey=0x%x ", 168 + ntohl(match->src.u.gre.key)); 169 + 170 + return len; 171 + } 172 + 173 + /* print a range of keys */ 174 + static unsigned int 175 + gre_print_range(char *buffer, const struct ip_nat_range *range) 176 + { 177 + if (range->min.gre.key != 0 178 + || range->max.gre.key != 0xFFFF) { 179 + if (range->min.gre.key == range->max.gre.key) 
180 + return sprintf(buffer, "key 0x%x ", 181 + ntohl(range->min.gre.key)); 182 + else 183 + return sprintf(buffer, "keys 0x%u-0x%u ", 184 + ntohl(range->min.gre.key), 185 + ntohl(range->max.gre.key)); 186 + } else 187 + return 0; 188 + } 189 + 190 + /* nat helper struct */ 191 + static struct ip_nat_protocol gre = { 192 + .name = "GRE", 193 + .protonum = IPPROTO_GRE, 194 + .manip_pkt = gre_manip_pkt, 195 + .in_range = gre_in_range, 196 + .unique_tuple = gre_unique_tuple, 197 + .print = gre_print, 198 + .print_range = gre_print_range, 199 + #if defined(CONFIG_IP_NF_CONNTRACK_NETLINK) || \ 200 + defined(CONFIG_IP_NF_CONNTRACK_NETLINK_MODULE) 201 + .range_to_nfattr = ip_nat_port_range_to_nfattr, 202 + .nfattr_to_range = ip_nat_port_nfattr_to_range, 203 + #endif 204 + }; 205 + 206 + int __init ip_nat_proto_gre_init(void) 207 + { 208 + return ip_nat_protocol_register(&gre); 209 + } 210 + 211 + void __exit ip_nat_proto_gre_fini(void) 212 + { 213 + ip_nat_protocol_unregister(&gre); 214 + }
+1 -1
net/ipv4/raw.c
··· 361 361 362 362 if (type && code) { 363 363 get_user(fl->fl_icmp_type, type); 364 - __get_user(fl->fl_icmp_code, code); 364 + get_user(fl->fl_icmp_code, code); 365 365 probed = 1; 366 366 } 367 367 break;
+1 -1
net/ipv4/tcp_minisocks.c
··· 384 384 newtp->frto_counter = 0; 385 385 newtp->frto_highmark = 0; 386 386 387 - newicsk->icsk_ca_ops = &tcp_reno; 387 + newicsk->icsk_ca_ops = &tcp_init_congestion_ops; 388 388 389 389 tcp_set_ca_state(newsk, TCP_CA_Open); 390 390 tcp_init_xmit_timers(newsk);
+16 -3
net/ipv4/tcp_output.c
··· 461 461 flags = TCP_SKB_CB(skb)->flags; 462 462 TCP_SKB_CB(skb)->flags = flags & ~(TCPCB_FLAG_FIN|TCPCB_FLAG_PSH); 463 463 TCP_SKB_CB(buff)->flags = flags; 464 - TCP_SKB_CB(buff)->sacked = 465 - (TCP_SKB_CB(skb)->sacked & 466 - (TCPCB_LOST | TCPCB_EVER_RETRANS | TCPCB_AT_TAIL)); 464 + TCP_SKB_CB(buff)->sacked = TCP_SKB_CB(skb)->sacked; 467 465 TCP_SKB_CB(skb)->sacked &= ~TCPCB_AT_TAIL; 468 466 469 467 if (!skb_shinfo(skb)->nr_frags && skb->ip_summed != CHECKSUM_HW) { ··· 499 501 tcp_skb_pcount(buff); 500 502 501 503 tp->packets_out -= diff; 504 + 505 + if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) 506 + tp->sacked_out -= diff; 507 + if (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_RETRANS) 508 + tp->retrans_out -= diff; 509 + 502 510 if (TCP_SKB_CB(skb)->sacked & TCPCB_LOST) { 503 511 tp->lost_out -= diff; 504 512 tp->left_out -= diff; 505 513 } 514 + 506 515 if (diff > 0) { 516 + /* Adjust Reno SACK estimate. */ 517 + if (!tp->rx_opt.sack_ok) { 518 + tp->sacked_out -= diff; 519 + if ((int)tp->sacked_out < 0) 520 + tp->sacked_out = 0; 521 + tcp_sync_left_out(tp); 522 + } 523 + 507 524 tp->fackets_out -= diff; 508 525 if ((int)tp->fackets_out < 0) 509 526 tp->fackets_out = 0;
+52
net/ipv6/netfilter/ip6_tables.c
··· 1955 1955 #endif 1956 1956 } 1957 1957 1958 + /* 1959 + * Find the specified header, up to the transport protocol header. 1960 + * If the target header is found, the offset to it is set in *offset 1961 + * and 0 is returned; otherwise, -1 is returned. 1962 + * 1963 + * Notes: - non-1st Fragment Header isn't skipped. 1964 + * - ESP header isn't skipped. 1965 + * - The target header may be truncated. 1966 + */ 1967 + int ipv6_find_hdr(const struct sk_buff *skb, unsigned int *offset, u8 target) 1968 + { 1969 + unsigned int start = (u8*)(skb->nh.ipv6h + 1) - skb->data; 1970 + u8 nexthdr = skb->nh.ipv6h->nexthdr; 1971 + unsigned int len = skb->len - start; 1972 + 1973 + while (nexthdr != target) { 1974 + struct ipv6_opt_hdr _hdr, *hp; 1975 + unsigned int hdrlen; 1976 + 1977 + if ((!ipv6_ext_hdr(nexthdr)) || nexthdr == NEXTHDR_NONE) 1978 + return -1; 1979 + hp = skb_header_pointer(skb, start, sizeof(_hdr), &_hdr); 1980 + if (hp == NULL) 1981 + return -1; 1982 + if (nexthdr == NEXTHDR_FRAGMENT) { 1983 + unsigned short _frag_off, *fp; 1984 + fp = skb_header_pointer(skb, 1985 + start+offsetof(struct frag_hdr, 1986 + frag_off), 1987 + sizeof(_frag_off), 1988 + &_frag_off); 1989 + if (fp == NULL) 1990 + return -1; 1991 + 1992 + if (ntohs(*fp) & ~0x7) 1993 + return -1; 1994 + hdrlen = 8; 1995 + } else if (nexthdr == NEXTHDR_AUTH) 1996 + hdrlen = (hp->hdrlen + 2) << 2; 1997 + else 1998 + hdrlen = ipv6_optlen(hp); 1999 + 2000 + nexthdr = hp->nexthdr; 2001 + len -= hdrlen; 2002 + start += hdrlen; 2003 + } 2004 + 2005 + *offset = start; 2006 + return 0; 2007 + } 2008 + 1958 2009 EXPORT_SYMBOL(ip6t_register_table); 1959 2010 EXPORT_SYMBOL(ip6t_unregister_table); 1960 2011 EXPORT_SYMBOL(ip6t_do_table); ··· 2014 1963 EXPORT_SYMBOL(ip6t_register_target); 2015 1964 EXPORT_SYMBOL(ip6t_unregister_target); 2016 1965 EXPORT_SYMBOL(ip6t_ext_hdr); 1966 + EXPORT_SYMBOL(ipv6_find_hdr); 2017 1967 2018 1968 module_init(init); 2019 1969 module_exit(fini);
+5 -76
net/ipv6/netfilter/ip6t_ah.c
··· 48 48 unsigned int protoff, 49 49 int *hotdrop) 50 50 { 51 - struct ip_auth_hdr *ah = NULL, _ah; 51 + struct ip_auth_hdr *ah, _ah; 52 52 const struct ip6t_ah *ahinfo = matchinfo; 53 - unsigned int temp; 54 - int len; 55 - u8 nexthdr; 56 53 unsigned int ptr; 57 54 unsigned int hdrlen = 0; 58 55 59 - /*DEBUGP("IPv6 AH entered\n");*/ 60 - /* if (opt->auth == 0) return 0; 61 - * It does not filled on output */ 62 - 63 - /* type of the 1st exthdr */ 64 - nexthdr = skb->nh.ipv6h->nexthdr; 65 - /* pointer to the 1st exthdr */ 66 - ptr = sizeof(struct ipv6hdr); 67 - /* available length */ 68 - len = skb->len - ptr; 69 - temp = 0; 70 - 71 - while (ip6t_ext_hdr(nexthdr)) { 72 - struct ipv6_opt_hdr _hdr, *hp; 73 - 74 - DEBUGP("ipv6_ah header iteration \n"); 75 - 76 - /* Is there enough space for the next ext header? */ 77 - if (len < sizeof(struct ipv6_opt_hdr)) 78 - return 0; 79 - /* No more exthdr -> evaluate */ 80 - if (nexthdr == NEXTHDR_NONE) 81 - break; 82 - /* ESP -> evaluate */ 83 - if (nexthdr == NEXTHDR_ESP) 84 - break; 85 - 86 - hp = skb_header_pointer(skb, ptr, sizeof(_hdr), &_hdr); 87 - BUG_ON(hp == NULL); 88 - 89 - /* Calculate the header length */ 90 - if (nexthdr == NEXTHDR_FRAGMENT) 91 - hdrlen = 8; 92 - else if (nexthdr == NEXTHDR_AUTH) 93 - hdrlen = (hp->hdrlen+2)<<2; 94 - else 95 - hdrlen = ipv6_optlen(hp); 96 - 97 - /* AH -> evaluate */ 98 - if (nexthdr == NEXTHDR_AUTH) { 99 - temp |= MASK_AH; 100 - break; 101 - } 102 - 103 - 104 - /* set the flag */ 105 - switch (nexthdr) { 106 - case NEXTHDR_HOP: 107 - case NEXTHDR_ROUTING: 108 - case NEXTHDR_FRAGMENT: 109 - case NEXTHDR_AUTH: 110 - case NEXTHDR_DEST: 111 - break; 112 - default: 113 - DEBUGP("ipv6_ah match: unknown nextheader %u\n",nexthdr); 114 - return 0; 115 - } 116 - 117 - nexthdr = hp->nexthdr; 118 - len -= hdrlen; 119 - ptr += hdrlen; 120 - if (ptr > skb->len) { 121 - DEBUGP("ipv6_ah: new pointer too large! 
\n"); 122 - break; 123 - } 124 - } 125 - 126 - /* AH header not found */ 127 - if (temp != MASK_AH) 56 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_AUTH) < 0) 128 57 return 0; 129 58 130 - if (len < sizeof(struct ip_auth_hdr)){ 59 + ah = skb_header_pointer(skb, ptr, sizeof(_ah), &_ah); 60 + if (ah == NULL) { 131 61 *hotdrop = 1; 132 62 return 0; 133 63 } 134 64 135 - ah = skb_header_pointer(skb, ptr, sizeof(_ah), &_ah); 136 - BUG_ON(ah == NULL); 65 + hdrlen = (ah->hdrlen + 2) << 2; 137 66 138 67 DEBUGP("IPv6 AH LEN %u %u ", hdrlen, ah->hdrlen); 139 68 DEBUGP("RES %04X ", ah->reserved);
+7 -81
net/ipv6/netfilter/ip6t_dst.c
··· 63 63 struct ipv6_opt_hdr _optsh, *oh; 64 64 const struct ip6t_opts *optinfo = matchinfo; 65 65 unsigned int temp; 66 - unsigned int len; 67 - u8 nexthdr; 68 66 unsigned int ptr; 69 67 unsigned int hdrlen = 0; 70 68 unsigned int ret = 0; ··· 70 72 u8 _optlen, *lp = NULL; 71 73 unsigned int optlen; 72 74 73 - /* type of the 1st exthdr */ 74 - nexthdr = skb->nh.ipv6h->nexthdr; 75 - /* pointer to the 1st exthdr */ 76 - ptr = sizeof(struct ipv6hdr); 77 - /* available length */ 78 - len = skb->len - ptr; 79 - temp = 0; 80 - 81 - while (ip6t_ext_hdr(nexthdr)) { 82 - struct ipv6_opt_hdr _hdr, *hp; 83 - 84 - DEBUGP("ipv6_opts header iteration \n"); 85 - 86 - /* Is there enough space for the next ext header? */ 87 - if (len < (int)sizeof(struct ipv6_opt_hdr)) 88 - return 0; 89 - /* No more exthdr -> evaluate */ 90 - if (nexthdr == NEXTHDR_NONE) { 91 - break; 92 - } 93 - /* ESP -> evaluate */ 94 - if (nexthdr == NEXTHDR_ESP) { 95 - break; 96 - } 97 - 98 - hp = skb_header_pointer(skb, ptr, sizeof(_hdr), &_hdr); 99 - BUG_ON(hp == NULL); 100 - 101 - /* Calculate the header length */ 102 - if (nexthdr == NEXTHDR_FRAGMENT) { 103 - hdrlen = 8; 104 - } else if (nexthdr == NEXTHDR_AUTH) 105 - hdrlen = (hp->hdrlen+2)<<2; 106 - else 107 - hdrlen = ipv6_optlen(hp); 108 - 109 - /* OPTS -> evaluate */ 110 75 #if HOPBYHOP 111 - if (nexthdr == NEXTHDR_HOP) { 112 - temp |= MASK_HOPOPTS; 76 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_HOP) < 0) 113 77 #else 114 - if (nexthdr == NEXTHDR_DEST) { 115 - temp |= MASK_DSTOPTS; 78 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_DEST) < 0) 116 79 #endif 117 - break; 118 - } 80 + return 0; 119 81 120 - 121 - /* set the flag */ 122 - switch (nexthdr){ 123 - case NEXTHDR_HOP: 124 - case NEXTHDR_ROUTING: 125 - case NEXTHDR_FRAGMENT: 126 - case NEXTHDR_AUTH: 127 - case NEXTHDR_DEST: 128 - break; 129 - default: 130 - DEBUGP("ipv6_opts match: unknown nextheader %u\n",nexthdr); 131 - return 0; 132 - break; 133 - } 134 - 135 - nexthdr = hp->nexthdr; 136 - len -= 
hdrlen; 137 - ptr += hdrlen; 138 - if ( ptr > skb->len ) { 139 - DEBUGP("ipv6_opts: new pointer is too large! \n"); 140 - break; 141 - } 142 - } 143 - 144 - /* OPTIONS header not found */ 145 - #if HOPBYHOP 146 - if ( temp != MASK_HOPOPTS ) return 0; 147 - #else 148 - if ( temp != MASK_DSTOPTS ) return 0; 149 - #endif 150 - 151 - if (len < (int)sizeof(struct ipv6_opt_hdr)){ 82 + oh = skb_header_pointer(skb, ptr, sizeof(_optsh), &_optsh); 83 + if (oh == NULL){ 152 84 *hotdrop = 1; 153 85 return 0; 154 86 } 155 87 156 - if (len < hdrlen){ 88 + hdrlen = ipv6_optlen(oh); 89 + if (skb->len - ptr < hdrlen){ 157 90 /* Packet smaller than its length field */ 158 91 return 0; 159 92 } 160 - 161 - oh = skb_header_pointer(skb, ptr, sizeof(_optsh), &_optsh); 162 - BUG_ON(oh == NULL); 163 93 164 94 DEBUGP("IPv6 OPTS LEN %u %u ", hdrlen, oh->hdrlen); 165 95
+4 -69
net/ipv6/netfilter/ip6t_esp.c
··· 48 48 unsigned int protoff, 49 49 int *hotdrop) 50 50 { 51 - struct ip_esp_hdr _esp, *eh = NULL; 51 + struct ip_esp_hdr _esp, *eh; 52 52 const struct ip6t_esp *espinfo = matchinfo; 53 - unsigned int temp; 54 - int len; 55 - u8 nexthdr; 56 53 unsigned int ptr; 57 54 58 55 /* Make sure this isn't an evil packet */ 59 56 /*DEBUGP("ipv6_esp entered \n");*/ 60 57 61 - /* type of the 1st exthdr */ 62 - nexthdr = skb->nh.ipv6h->nexthdr; 63 - /* pointer to the 1st exthdr */ 64 - ptr = sizeof(struct ipv6hdr); 65 - /* available length */ 66 - len = skb->len - ptr; 67 - temp = 0; 68 - 69 - while (ip6t_ext_hdr(nexthdr)) { 70 - struct ipv6_opt_hdr _hdr, *hp; 71 - int hdrlen; 72 - 73 - DEBUGP("ipv6_esp header iteration \n"); 74 - 75 - /* Is there enough space for the next ext header? */ 76 - if (len < sizeof(struct ipv6_opt_hdr)) 77 - return 0; 78 - /* No more exthdr -> evaluate */ 79 - if (nexthdr == NEXTHDR_NONE) 80 - break; 81 - /* ESP -> evaluate */ 82 - if (nexthdr == NEXTHDR_ESP) { 83 - temp |= MASK_ESP; 84 - break; 85 - } 86 - 87 - hp = skb_header_pointer(skb, ptr, sizeof(_hdr), &_hdr); 88 - BUG_ON(hp == NULL); 89 - 90 - /* Calculate the header length */ 91 - if (nexthdr == NEXTHDR_FRAGMENT) 92 - hdrlen = 8; 93 - else if (nexthdr == NEXTHDR_AUTH) 94 - hdrlen = (hp->hdrlen+2)<<2; 95 - else 96 - hdrlen = ipv6_optlen(hp); 97 - 98 - /* set the flag */ 99 - switch (nexthdr) { 100 - case NEXTHDR_HOP: 101 - case NEXTHDR_ROUTING: 102 - case NEXTHDR_FRAGMENT: 103 - case NEXTHDR_AUTH: 104 - case NEXTHDR_DEST: 105 - break; 106 - default: 107 - DEBUGP("ipv6_esp match: unknown nextheader %u\n",nexthdr); 108 - return 0; 109 - } 110 - 111 - nexthdr = hp->nexthdr; 112 - len -= hdrlen; 113 - ptr += hdrlen; 114 - if (ptr > skb->len) { 115 - DEBUGP("ipv6_esp: new pointer too large! 
\n"); 116 - break; 117 - } 118 - } 119 - 120 - /* ESP header not found */ 121 - if (temp != MASK_ESP) 58 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_ESP) < 0) 122 59 return 0; 123 60 124 - if (len < sizeof(struct ip_esp_hdr)) { 61 + eh = skb_header_pointer(skb, ptr, sizeof(_esp), &_esp); 62 + if (eh == NULL) { 125 63 *hotdrop = 1; 126 64 return 0; 127 65 } 128 - 129 - eh = skb_header_pointer(skb, ptr, sizeof(_esp), &_esp); 130 - BUG_ON(eh == NULL); 131 66 132 67 DEBUGP("IPv6 ESP SPI %u %08X\n", ntohl(eh->spi), ntohl(eh->spi)); 133 68
+8 -80
net/ipv6/netfilter/ip6t_frag.c
··· 48 48 unsigned int protoff, 49 49 int *hotdrop) 50 50 { 51 - struct frag_hdr _frag, *fh = NULL; 51 + struct frag_hdr _frag, *fh; 52 52 const struct ip6t_frag *fraginfo = matchinfo; 53 - unsigned int temp; 54 - int len; 55 - u8 nexthdr; 56 53 unsigned int ptr; 57 - unsigned int hdrlen = 0; 58 54 59 - /* type of the 1st exthdr */ 60 - nexthdr = skb->nh.ipv6h->nexthdr; 61 - /* pointer to the 1st exthdr */ 62 - ptr = sizeof(struct ipv6hdr); 63 - /* available length */ 64 - len = skb->len - ptr; 65 - temp = 0; 55 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_FRAGMENT) < 0) 56 + return 0; 66 57 67 - while (ip6t_ext_hdr(nexthdr)) { 68 - struct ipv6_opt_hdr _hdr, *hp; 69 - 70 - DEBUGP("ipv6_frag header iteration \n"); 71 - 72 - /* Is there enough space for the next ext header? */ 73 - if (len < (int)sizeof(struct ipv6_opt_hdr)) 74 - return 0; 75 - /* No more exthdr -> evaluate */ 76 - if (nexthdr == NEXTHDR_NONE) { 77 - break; 78 - } 79 - /* ESP -> evaluate */ 80 - if (nexthdr == NEXTHDR_ESP) { 81 - break; 82 - } 83 - 84 - hp = skb_header_pointer(skb, ptr, sizeof(_hdr), &_hdr); 85 - BUG_ON(hp == NULL); 86 - 87 - /* Calculate the header length */ 88 - if (nexthdr == NEXTHDR_FRAGMENT) { 89 - hdrlen = 8; 90 - } else if (nexthdr == NEXTHDR_AUTH) 91 - hdrlen = (hp->hdrlen+2)<<2; 92 - else 93 - hdrlen = ipv6_optlen(hp); 94 - 95 - /* FRAG -> evaluate */ 96 - if (nexthdr == NEXTHDR_FRAGMENT) { 97 - temp |= MASK_FRAGMENT; 98 - break; 99 - } 100 - 101 - 102 - /* set the flag */ 103 - switch (nexthdr){ 104 - case NEXTHDR_HOP: 105 - case NEXTHDR_ROUTING: 106 - case NEXTHDR_FRAGMENT: 107 - case NEXTHDR_AUTH: 108 - case NEXTHDR_DEST: 109 - break; 110 - default: 111 - DEBUGP("ipv6_frag match: unknown nextheader %u\n",nexthdr); 112 - return 0; 113 - break; 114 - } 115 - 116 - nexthdr = hp->nexthdr; 117 - len -= hdrlen; 118 - ptr += hdrlen; 119 - if ( ptr > skb->len ) { 120 - DEBUGP("ipv6_frag: new pointer too large! 
\n"); 121 - break; 122 - } 123 - } 124 - 125 - /* FRAG header not found */ 126 - if ( temp != MASK_FRAGMENT ) return 0; 127 - 128 - if (len < sizeof(struct frag_hdr)){ 129 - *hotdrop = 1; 130 - return 0; 131 - } 132 - 133 - fh = skb_header_pointer(skb, ptr, sizeof(_frag), &_frag); 134 - BUG_ON(fh == NULL); 58 + fh = skb_header_pointer(skb, ptr, sizeof(_frag), &_frag); 59 + if (fh == NULL){ 60 + *hotdrop = 1; 61 + return 0; 62 + } 135 63 136 64 DEBUGP("INFO %04X ", fh->frag_off); 137 65 DEBUGP("OFFSET %04X ", ntohs(fh->frag_off) & ~0x7);
+7 -81
net/ipv6/netfilter/ip6t_hbh.c
··· 63 63 struct ipv6_opt_hdr _optsh, *oh; 64 64 const struct ip6t_opts *optinfo = matchinfo; 65 65 unsigned int temp; 66 - unsigned int len; 67 - u8 nexthdr; 68 66 unsigned int ptr; 69 67 unsigned int hdrlen = 0; 70 68 unsigned int ret = 0; ··· 70 72 u8 _optlen, *lp = NULL; 71 73 unsigned int optlen; 72 74 73 - /* type of the 1st exthdr */ 74 - nexthdr = skb->nh.ipv6h->nexthdr; 75 - /* pointer to the 1st exthdr */ 76 - ptr = sizeof(struct ipv6hdr); 77 - /* available length */ 78 - len = skb->len - ptr; 79 - temp = 0; 80 - 81 - while (ip6t_ext_hdr(nexthdr)) { 82 - struct ipv6_opt_hdr _hdr, *hp; 83 - 84 - DEBUGP("ipv6_opts header iteration \n"); 85 - 86 - /* Is there enough space for the next ext header? */ 87 - if (len < (int)sizeof(struct ipv6_opt_hdr)) 88 - return 0; 89 - /* No more exthdr -> evaluate */ 90 - if (nexthdr == NEXTHDR_NONE) { 91 - break; 92 - } 93 - /* ESP -> evaluate */ 94 - if (nexthdr == NEXTHDR_ESP) { 95 - break; 96 - } 97 - 98 - hp = skb_header_pointer(skb, ptr, sizeof(_hdr), &_hdr); 99 - BUG_ON(hp == NULL); 100 - 101 - /* Calculate the header length */ 102 - if (nexthdr == NEXTHDR_FRAGMENT) { 103 - hdrlen = 8; 104 - } else if (nexthdr == NEXTHDR_AUTH) 105 - hdrlen = (hp->hdrlen+2)<<2; 106 - else 107 - hdrlen = ipv6_optlen(hp); 108 - 109 - /* OPTS -> evaluate */ 110 75 #if HOPBYHOP 111 - if (nexthdr == NEXTHDR_HOP) { 112 - temp |= MASK_HOPOPTS; 76 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_HOP) < 0) 113 77 #else 114 - if (nexthdr == NEXTHDR_DEST) { 115 - temp |= MASK_DSTOPTS; 78 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_DEST) < 0) 116 79 #endif 117 - break; 118 - } 80 + return 0; 119 81 120 - 121 - /* set the flag */ 122 - switch (nexthdr){ 123 - case NEXTHDR_HOP: 124 - case NEXTHDR_ROUTING: 125 - case NEXTHDR_FRAGMENT: 126 - case NEXTHDR_AUTH: 127 - case NEXTHDR_DEST: 128 - break; 129 - default: 130 - DEBUGP("ipv6_opts match: unknown nextheader %u\n",nexthdr); 131 - return 0; 132 - break; 133 - } 134 - 135 - nexthdr = hp->nexthdr; 136 - len -= 
hdrlen; 137 - ptr += hdrlen; 138 - if ( ptr > skb->len ) { 139 - DEBUGP("ipv6_opts: new pointer is too large! \n"); 140 - break; 141 - } 142 - } 143 - 144 - /* OPTIONS header not found */ 145 - #if HOPBYHOP 146 - if ( temp != MASK_HOPOPTS ) return 0; 147 - #else 148 - if ( temp != MASK_DSTOPTS ) return 0; 149 - #endif 150 - 151 - if (len < (int)sizeof(struct ipv6_opt_hdr)){ 82 + oh = skb_header_pointer(skb, ptr, sizeof(_optsh), &_optsh); 83 + if (oh == NULL){ 152 84 *hotdrop = 1; 153 85 return 0; 154 86 } 155 87 156 - if (len < hdrlen){ 88 + hdrlen = ipv6_optlen(oh); 89 + if (skb->len - ptr < hdrlen){ 157 90 /* Packet smaller than its length field */ 158 91 return 0; 159 92 } 160 - 161 - oh = skb_header_pointer(skb, ptr, sizeof(_optsh), &_optsh); 162 - BUG_ON(oh == NULL); 163 93 164 94 DEBUGP("IPv6 OPTS LEN %u %u ", hdrlen, oh->hdrlen); 165 95
+7 -76
net/ipv6/netfilter/ip6t_rt.c
··· 50 50 unsigned int protoff, 51 51 int *hotdrop) 52 52 { 53 - struct ipv6_rt_hdr _route, *rh = NULL; 53 + struct ipv6_rt_hdr _route, *rh; 54 54 const struct ip6t_rt *rtinfo = matchinfo; 55 55 unsigned int temp; 56 - unsigned int len; 57 - u8 nexthdr; 58 56 unsigned int ptr; 59 57 unsigned int hdrlen = 0; 60 58 unsigned int ret = 0; 61 59 struct in6_addr *ap, _addr; 62 60 63 - /* type of the 1st exthdr */ 64 - nexthdr = skb->nh.ipv6h->nexthdr; 65 - /* pointer to the 1st exthdr */ 66 - ptr = sizeof(struct ipv6hdr); 67 - /* available length */ 68 - len = skb->len - ptr; 69 - temp = 0; 61 + if (ipv6_find_hdr(skb, &ptr, NEXTHDR_ROUTING) < 0) 62 + return 0; 70 63 71 - while (ip6t_ext_hdr(nexthdr)) { 72 - struct ipv6_opt_hdr _hdr, *hp; 73 - 74 - DEBUGP("ipv6_rt header iteration \n"); 75 - 76 - /* Is there enough space for the next ext header? */ 77 - if (len < (int)sizeof(struct ipv6_opt_hdr)) 78 - return 0; 79 - /* No more exthdr -> evaluate */ 80 - if (nexthdr == NEXTHDR_NONE) { 81 - break; 82 - } 83 - /* ESP -> evaluate */ 84 - if (nexthdr == NEXTHDR_ESP) { 85 - break; 86 - } 87 - 88 - hp = skb_header_pointer(skb, ptr, sizeof(_hdr), &_hdr); 89 - BUG_ON(hp == NULL); 90 - 91 - /* Calculate the header length */ 92 - if (nexthdr == NEXTHDR_FRAGMENT) { 93 - hdrlen = 8; 94 - } else if (nexthdr == NEXTHDR_AUTH) 95 - hdrlen = (hp->hdrlen+2)<<2; 96 - else 97 - hdrlen = ipv6_optlen(hp); 98 - 99 - /* ROUTING -> evaluate */ 100 - if (nexthdr == NEXTHDR_ROUTING) { 101 - temp |= MASK_ROUTING; 102 - break; 103 - } 104 - 105 - 106 - /* set the flag */ 107 - switch (nexthdr){ 108 - case NEXTHDR_HOP: 109 - case NEXTHDR_ROUTING: 110 - case NEXTHDR_FRAGMENT: 111 - case NEXTHDR_AUTH: 112 - case NEXTHDR_DEST: 113 - break; 114 - default: 115 - DEBUGP("ipv6_rt match: unknown nextheader %u\n",nexthdr); 116 - return 0; 117 - break; 118 - } 119 - 120 - nexthdr = hp->nexthdr; 121 - len -= hdrlen; 122 - ptr += hdrlen; 123 - if ( ptr > skb->len ) { 124 - DEBUGP("ipv6_rt: new pointer is too 
large! \n"); 125 - break; 126 - } 127 - } 128 - 129 - /* ROUTING header not found */ 130 - if ( temp != MASK_ROUTING ) return 0; 131 - 132 - if (len < (int)sizeof(struct ipv6_rt_hdr)){ 64 + rh = skb_header_pointer(skb, ptr, sizeof(_route), &_route); 65 + if (rh == NULL){ 133 66 *hotdrop = 1; 134 67 return 0; 135 68 } 136 69 137 - if (len < hdrlen){ 70 + hdrlen = ipv6_optlen(rh); 71 + if (skb->len - ptr < hdrlen){ 138 72 /* Packet smaller than its length field */ 139 73 return 0; 140 74 } 141 - 142 - rh = skb_header_pointer(skb, ptr, sizeof(_route), &_route); 143 - BUG_ON(rh == NULL); 144 75 145 76 DEBUGP("IPv6 RT LEN %u %u ", hdrlen, rh->hdrlen); 146 77 DEBUGP("TYPE %04X ", rh->type);
+1 -1
net/ipv6/raw.c
··· 627 627 628 628 if (type && code) { 629 629 get_user(fl->fl_icmp_type, type); 630 - __get_user(fl->fl_icmp_code, code); 630 + get_user(fl->fl_icmp_code, code); 631 631 probed = 1; 632 632 } 633 633 break;
+48 -17
net/packet/af_packet.c
··· 36 36 * Michal Ostrowski : Module initialization cleanup. 37 37 * Ulises Alonso : Frame number limit removal and 38 38 * packet_set_ring memory leak. 39 + * Eric Biederman : Allow for > 8 byte hardware addresses. 40 + * The convention is that longer addresses 41 + * will simply extend the hardware address 42 + * byte arrays at the end of sockaddr_ll 43 + * and packet_mreq. 39 44 * 40 45 * This program is free software; you can redistribute it and/or 41 46 * modify it under the terms of the GNU General Public License ··· 166 161 int count; 167 162 unsigned short type; 168 163 unsigned short alen; 169 - unsigned char addr[8]; 164 + unsigned char addr[MAX_ADDR_LEN]; 165 + }; 166 + /* identical to struct packet_mreq except it has 167 + * a longer address field. 168 + */ 169 + struct packet_mreq_max 170 + { 171 + int mr_ifindex; 172 + unsigned short mr_type; 173 + unsigned short mr_alen; 174 + unsigned char mr_address[MAX_ADDR_LEN]; 170 175 }; 171 176 #endif 172 177 #ifdef CONFIG_PACKET_MMAP ··· 731 716 err = -EINVAL; 732 717 if (msg->msg_namelen < sizeof(struct sockaddr_ll)) 733 718 goto out; 719 + if (msg->msg_namelen < (saddr->sll_halen + offsetof(struct sockaddr_ll, sll_addr))) 720 + goto out; 734 721 ifindex = saddr->sll_ifindex; 735 722 proto = saddr->sll_protocol; 736 723 addr = saddr->sll_addr; ··· 761 744 if (dev->hard_header) { 762 745 int res; 763 746 err = -EINVAL; 747 + if (saddr) { 748 + if (saddr->sll_halen != dev->addr_len) 749 + goto out_free; 750 + if (saddr->sll_hatype != dev->type) 751 + goto out_free; 752 + } 764 753 res = dev->hard_header(skb, dev, ntohs(proto), addr, NULL, len); 765 754 if (sock->type != SOCK_DGRAM) { 766 755 skb->tail = skb->data; ··· 1068 1045 struct sock *sk = sock->sk; 1069 1046 struct sk_buff *skb; 1070 1047 int copied, err; 1048 + struct sockaddr_ll *sll; 1071 1049 1072 1050 err = -EINVAL; 1073 1051 if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC|MSG_CMSG_COMPAT)) ··· 1079 1055 if (pkt_sk(sk)->ifindex < 0) 1080 1056 return 
-ENODEV; 1081 1057 #endif 1082 - 1083 - /* 1084 - * If the address length field is there to be filled in, we fill 1085 - * it in now. 1086 - */ 1087 - 1088 - if (sock->type == SOCK_PACKET) 1089 - msg->msg_namelen = sizeof(struct sockaddr_pkt); 1090 - else 1091 - msg->msg_namelen = sizeof(struct sockaddr_ll); 1092 1058 1093 1059 /* 1094 1060 * Call the generic datagram receiver. This handles all sorts ··· 1099 1085 1100 1086 if(skb==NULL) 1101 1087 goto out; 1088 + 1089 + /* 1090 + * If the address length field is there to be filled in, we fill 1091 + * it in now. 1092 + */ 1093 + 1094 + sll = (struct sockaddr_ll*)skb->cb; 1095 + if (sock->type == SOCK_PACKET) 1096 + msg->msg_namelen = sizeof(struct sockaddr_pkt); 1097 + else 1098 + msg->msg_namelen = sll->sll_halen + offsetof(struct sockaddr_ll, sll_addr); 1102 1099 1103 1100 /* 1104 1101 * You lose any data beyond the buffer you gave. If it worries a ··· 1191 1166 sll->sll_hatype = 0; /* Bad: we have no ARPHRD_UNSPEC */ 1192 1167 sll->sll_halen = 0; 1193 1168 } 1194 - *uaddr_len = sizeof(*sll); 1169 + *uaddr_len = offsetof(struct sockaddr_ll, sll_addr) + sll->sll_halen; 1195 1170 1196 1171 return 0; 1197 1172 } ··· 1224 1199 } 1225 1200 } 1226 1201 1227 - static int packet_mc_add(struct sock *sk, struct packet_mreq *mreq) 1202 + static int packet_mc_add(struct sock *sk, struct packet_mreq_max *mreq) 1228 1203 { 1229 1204 struct packet_sock *po = pkt_sk(sk); 1230 1205 struct packet_mclist *ml, *i; ··· 1274 1249 return err; 1275 1250 } 1276 1251 1277 - static int packet_mc_drop(struct sock *sk, struct packet_mreq *mreq) 1252 + static int packet_mc_drop(struct sock *sk, struct packet_mreq_max *mreq) 1278 1253 { 1279 1254 struct packet_mclist *ml, **mlp; 1280 1255 ··· 1340 1315 case PACKET_ADD_MEMBERSHIP: 1341 1316 case PACKET_DROP_MEMBERSHIP: 1342 1317 { 1343 - struct packet_mreq mreq; 1344 - if (optlen<sizeof(mreq)) 1318 + struct packet_mreq_max mreq; 1319 + int len = optlen; 1320 + memset(&mreq, 0, sizeof(mreq)); 
1321 + if (len < sizeof(struct packet_mreq)) 1345 1322 return -EINVAL; 1346 - if (copy_from_user(&mreq,optval,sizeof(mreq))) 1323 + if (len > sizeof(mreq)) 1324 + len = sizeof(mreq); 1325 + if (copy_from_user(&mreq,optval,len)) 1347 1326 return -EFAULT; 1327 + if (len < (mreq.mr_alen + offsetof(struct packet_mreq, mr_address))) 1328 + return -EINVAL; 1348 1329 if (optname == PACKET_ADD_MEMBERSHIP) 1349 1330 ret = packet_mc_add(sk, &mreq); 1350 1331 else
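The PACKET_ADD_MEMBERSHIP/PACKET_DROP_MEMBERSHIP hunk keeps binary compatibility by accepting any optlen between the old fixed-size packet_mreq and the new packet_mreq_max, zero-filling the remainder and checking that mr_alen fits inside what was actually copied. A userspace-style sketch of that validation (memcpy() stands in for copy_from_user(); struct layouts mirror the hunk, constants are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_ADDR_LEN 32  /* value of the kernel constant in this era */

/* Old fixed-size layout and its extended variant, as in the hunk. */
struct packet_mreq {
    int mr_ifindex;
    unsigned short mr_type;
    unsigned short mr_alen;
    unsigned char mr_address[8];
};

struct packet_mreq_max {
    int mr_ifindex;
    unsigned short mr_type;
    unsigned short mr_alen;
    unsigned char mr_address[MAX_ADDR_LEN];
};

/* Mirror of the setsockopt() checks: reject anything shorter than the
 * old struct, ignore bytes past the new one, zero-fill what the caller
 * did not supply, and require that the claimed address length actually
 * lies within the bytes that were copied. */
static int parse_mreq(const void *optval, size_t optlen,
                      struct packet_mreq_max *out)
{
    size_t len = optlen;

    memset(out, 0, sizeof(*out));
    if (len < sizeof(struct packet_mreq))
        return -1;                      /* too short: EINVAL */
    if (len > sizeof(*out))
        len = sizeof(*out);             /* clamp to the new struct */
    memcpy(out, optval, len);           /* stands in for copy_from_user() */
    if (len < out->mr_alen + offsetof(struct packet_mreq, mr_address))
        return -1;                      /* address overruns the copy */
    return 0;
}
```

An old 16-byte packet_mreq with a 6-byte address still parses; a request whose mr_alen claims more address bytes than the caller supplied is rejected instead of reading past the copy.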
+11 -11
net/sctp/sm_statefuns.c
··· 2414 2414 skb_pull(chunk->skb, sizeof(sctp_shutdownhdr_t)); 2415 2415 chunk->subh.shutdown_hdr = sdh; 2416 2416 2417 + /* API 5.3.1.5 SCTP_SHUTDOWN_EVENT 2418 + * When a peer sends a SHUTDOWN, SCTP delivers this notification to 2419 + * inform the application that it should cease sending data. 2420 + */ 2421 + ev = sctp_ulpevent_make_shutdown_event(asoc, 0, GFP_ATOMIC); 2422 + if (!ev) { 2423 + disposition = SCTP_DISPOSITION_NOMEM; 2424 + goto out; 2425 + } 2426 + sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); 2427 + 2417 2428 /* Upon the reception of the SHUTDOWN, the peer endpoint shall 2418 2429 * - enter the SHUTDOWN-RECEIVED state, 2419 2430 * - stop accepting new data from its SCTP user ··· 2449 2438 */ 2450 2439 sctp_add_cmd_sf(commands, SCTP_CMD_PROCESS_CTSN, 2451 2440 SCTP_U32(chunk->subh.shutdown_hdr->cum_tsn_ack)); 2452 - 2453 - /* API 5.3.1.5 SCTP_SHUTDOWN_EVENT 2454 - * When a peer sends a SHUTDOWN, SCTP delivers this notification to 2455 - * inform the application that it should cease sending data. 2456 - */ 2457 - ev = sctp_ulpevent_make_shutdown_event(asoc, 0, GFP_ATOMIC); 2458 - if (!ev) { 2459 - disposition = SCTP_DISPOSITION_NOMEM; 2460 - goto out; 2461 - } 2462 - sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); 2463 2441 2464 2442 out: 2465 2443 return disposition;
+1 -1
sound/oss/au1000.c
··· 1295 1295 unsigned long size; 1296 1296 int ret = 0; 1297 1297 1298 - dbg(__FUNCTION__); 1298 + dbg("%s", __FUNCTION__); 1299 1299 1300 1300 lock_kernel(); 1301 1301 down(&s->sem);
+1 -1
sound/oss/ite8172.c
··· 1859 1859 struct it8172_state *s = (struct it8172_state *)file->private_data; 1860 1860 1861 1861 #ifdef IT8172_VERBOSE_DEBUG 1862 - dbg(__FUNCTION__); 1862 + dbg("%s", __FUNCTION__); 1863 1863 #endif 1864 1864 lock_kernel(); 1865 1865 if (file->f_mode & FMODE_WRITE)
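Both OSS fixes above (au1000.c and ite8172.c) are the same one-liner: dbg(__FUNCTION__) passed the function name as the printk format string, so any '%' in the message would be parsed as a conversion specifier; dbg("%s", __FUNCTION__) passes it as data. A minimal sketch with snprintf() stand-ins (the macro names are ours, not the drivers'):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for the OSS dbg() macro, which forwards its first argument
 * to printk. DBG_OLD uses the message itself as the format string, so
 * '%' sequences inside it are interpreted; DBG_NEW copies it literally. */
#define DBG_OLD(buf, msg) snprintf(buf, sizeof(buf), msg)       /* dbg(__FUNCTION__) */
#define DBG_NEW(buf, msg) snprintf(buf, sizeof(buf), "%s", msg) /* dbg("%s", __FUNCTION__) */
```

With a message containing two literal '%' characters, the old form collapses them into one (and a stray "%s" or "%n" would be far worse, reading or writing through garbage varargs); the fixed form reproduces the message unchanged.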
+8 -8
sound/pci/atiixp_modem.c
··· 405 405 406 406 while (atiixp_read(chip, PHYS_OUT_ADDR) & ATI_REG_PHYS_OUT_ADDR_EN) { 407 407 if (! timeout--) { 408 - snd_printk(KERN_WARNING "atiixp: codec acquire timeout\n"); 408 + snd_printk(KERN_WARNING "atiixp-modem: codec acquire timeout\n"); 409 409 return -EBUSY; 410 410 } 411 411 udelay(1); ··· 436 436 } while (--timeout); 437 437 /* time out may happen during reset */ 438 438 if (reg < 0x7c) 439 - snd_printk(KERN_WARNING "atiixp: codec read timeout (reg %x)\n", reg); 439 + snd_printk(KERN_WARNING "atiixp-modem: codec read timeout (reg %x)\n", reg); 440 440 return 0xffff; 441 441 } 442 442 ··· 498 498 do_delay(); 499 499 atiixp_update(chip, CMD, ATI_REG_CMD_AC_RESET, ATI_REG_CMD_AC_RESET); 500 500 if (--timeout) { 501 - snd_printk(KERN_ERR "atiixp: codec reset timeout\n"); 501 + snd_printk(KERN_ERR "atiixp-modem: codec reset timeout\n"); 502 502 break; 503 503 } 504 504 } ··· 552 552 atiixp_write(chip, IER, 0); /* disable irqs */ 553 553 554 554 if ((chip->codec_not_ready_bits & ALL_CODEC_NOT_READY) == ALL_CODEC_NOT_READY) { 555 - snd_printk(KERN_ERR "atiixp: no codec detected!\n"); 555 + snd_printk(KERN_ERR "atiixp-modem: no codec detected!\n"); 556 556 return -ENXIO; 557 557 } 558 558 return 0; ··· 635 635 { 636 636 if (! dma->substream || ! dma->running) 637 637 return; 638 - snd_printdd("atiixp: XRUN detected (DMA %d)\n", dma->ops->type); 638 + snd_printdd("atiixp-modem: XRUN detected (DMA %d)\n", dma->ops->type); 639 639 snd_pcm_stop(dma->substream, SNDRV_PCM_STATE_XRUN); 640 640 } 641 641 ··· 1081 1081 ac97.scaps = AC97_SCAP_SKIP_AUDIO; 1082 1082 if ((err = snd_ac97_mixer(pbus, &ac97, &chip->ac97[i])) < 0) { 1083 1083 chip->ac97[i] = NULL; /* to be sure */ 1084 - snd_printdd("atiixp: codec %d not available for modem\n", i); 1084 + snd_printdd("atiixp-modem: codec %d not available for modem\n", i); 1085 1085 continue; 1086 1086 } 1087 1087 codec_count++; 1088 1088 } 1089 1089 1090 1090 if (! 
codec_count) { 1091 - snd_printk(KERN_ERR "atiixp: no codec available\n"); 1091 + snd_printk(KERN_ERR "atiixp-modem: no codec available\n"); 1092 1092 return -ENODEV; 1093 1093 } 1094 1094 ··· 1159 1159 { 1160 1160 snd_info_entry_t *entry; 1161 1161 1162 - if (! snd_card_proc_new(chip->card, "atiixp", &entry)) 1162 + if (! snd_card_proc_new(chip->card, "atiixp-modem", &entry)) 1163 1163 snd_info_set_text_ops(entry, chip, 1024, snd_atiixp_proc_read); 1164 1164 } 1165 1165
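Every message in this file was re-tagged by hand from "atiixp:" to "atiixp-modem:". A common guard against this kind of churn is to bake the driver name into a single macro used by a logging wrapper, so a rename touches one line instead of every call site. The sketch below is illustrative only; DRV_NAME and drv_log() are our names, not the kernel's API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical pattern: one definition of the message prefix.
 * String-literal concatenation splices DRV_NAME into the format
 * at compile time; ##__VA_ARGS__ (GNU extension, supported by
 * gcc/clang) drops the comma when no arguments follow. */
#define DRV_NAME "atiixp-modem"
#define drv_log(buf, len, fmt, ...) \
    snprintf(buf, len, DRV_NAME ": " fmt, ##__VA_ARGS__)
```

Renaming the driver then means editing DRV_NAME once rather than producing an eight-hunk diff like the one above.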
+166 -157
sound/sparc/cs4231.c
··· 173 173 174 174 #define CS4231_GLOBALIRQ 0x01 /* IRQ is active */ 175 175 176 - /* definitions for codec irq status */ 176 + /* definitions for codec irq status - CS4231_IRQ_STATUS */ 177 177 178 178 #define CS4231_PLAYBACK_IRQ 0x10 179 179 #define CS4231_RECORD_IRQ 0x20 ··· 402 402 udelay(100); 403 403 #ifdef CONFIG_SND_DEBUG 404 404 if (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) 405 - snd_printk("outm: auto calibration time out - reg = 0x%x, value = 0x%x\n", reg, value); 405 + snd_printdd("outm: auto calibration time out - reg = 0x%x, value = 0x%x\n", reg, value); 406 406 #endif 407 407 if (chip->calibrate_mute) { 408 408 chip->image[reg] &= mask; ··· 425 425 timeout > 0 && (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT); 426 426 timeout--) 427 427 udelay(100); 428 + #ifdef CONFIG_SND_DEBUG 429 + if (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) 430 + snd_printdd("out: auto calibration time out - reg = 0x%x, value = 0x%x\n", reg, value); 431 + #endif 428 432 __cs4231_writeb(chip, chip->mce_bit | reg, CS4231P(chip, REGSEL)); 429 433 __cs4231_writeb(chip, value, CS4231P(chip, REG)); 430 434 mb(); ··· 444 440 udelay(100); 445 441 #ifdef CONFIG_SND_DEBUG 446 442 if (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) 447 - snd_printk("out: auto calibration time out - reg = 0x%x, value = 0x%x\n", reg, value); 443 + snd_printdd("out: auto calibration time out - reg = 0x%x, value = 0x%x\n", reg, value); 448 444 #endif 449 445 __cs4231_writeb(chip, chip->mce_bit | reg, CS4231P(chip, REGSEL)); 450 446 __cs4231_writeb(chip, value, CS4231P(chip, REG)); 451 447 chip->image[reg] = value; 452 448 mb(); 453 - #if 0 454 - printk("codec out - reg 0x%x = 0x%x\n", chip->mce_bit | reg, value); 455 - #endif 456 449 } 457 450 458 451 static unsigned char snd_cs4231_in(cs4231_t *chip, unsigned char reg) ··· 463 462 udelay(100); 464 463 #ifdef CONFIG_SND_DEBUG 465 464 if (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) 466 - 
snd_printk("in: auto calibration time out - reg = 0x%x\n", reg); 465 + snd_printdd("in: auto calibration time out - reg = 0x%x\n", reg); 467 466 #endif 468 467 __cs4231_writeb(chip, chip->mce_bit | reg, CS4231P(chip, REGSEL)); 469 468 mb(); 470 469 ret = __cs4231_readb(chip, CS4231P(chip, REG)); 471 - #if 0 472 - printk("codec in - reg 0x%x = 0x%x\n", chip->mce_bit | reg, ret); 473 - #endif 474 470 return ret; 475 471 } 476 - 477 - #if 0 478 - 479 - static void snd_cs4231_debug(cs4231_t *chip) 480 - { 481 - printk("CS4231 REGS: INDEX = 0x%02x ", 482 - __cs4231_readb(chip, CS4231P(chip, REGSEL))); 483 - printk(" STATUS = 0x%02x\n", 484 - __cs4231_readb(chip, CS4231P(chip, STATUS))); 485 - printk(" 0x00: left input = 0x%02x ", snd_cs4231_in(chip, 0x00)); 486 - printk(" 0x10: alt 1 (CFIG 2) = 0x%02x\n", snd_cs4231_in(chip, 0x10)); 487 - printk(" 0x01: right input = 0x%02x ", snd_cs4231_in(chip, 0x01)); 488 - printk(" 0x11: alt 2 (CFIG 3) = 0x%02x\n", snd_cs4231_in(chip, 0x11)); 489 - printk(" 0x02: GF1 left input = 0x%02x ", snd_cs4231_in(chip, 0x02)); 490 - printk(" 0x12: left line in = 0x%02x\n", snd_cs4231_in(chip, 0x12)); 491 - printk(" 0x03: GF1 right input = 0x%02x ", snd_cs4231_in(chip, 0x03)); 492 - printk(" 0x13: right line in = 0x%02x\n", snd_cs4231_in(chip, 0x13)); 493 - printk(" 0x04: CD left input = 0x%02x ", snd_cs4231_in(chip, 0x04)); 494 - printk(" 0x14: timer low = 0x%02x\n", snd_cs4231_in(chip, 0x14)); 495 - printk(" 0x05: CD right input = 0x%02x ", snd_cs4231_in(chip, 0x05)); 496 - printk(" 0x15: timer high = 0x%02x\n", snd_cs4231_in(chip, 0x15)); 497 - printk(" 0x06: left output = 0x%02x ", snd_cs4231_in(chip, 0x06)); 498 - printk(" 0x16: left MIC (PnP) = 0x%02x\n", snd_cs4231_in(chip, 0x16)); 499 - printk(" 0x07: right output = 0x%02x ", snd_cs4231_in(chip, 0x07)); 500 - printk(" 0x17: right MIC (PnP) = 0x%02x\n", snd_cs4231_in(chip, 0x17)); 501 - printk(" 0x08: playback format = 0x%02x ", snd_cs4231_in(chip, 0x08)); 502 - printk(" 0x18: IRQ 
status = 0x%02x\n", snd_cs4231_in(chip, 0x18)); 503 - printk(" 0x09: iface (CFIG 1) = 0x%02x ", snd_cs4231_in(chip, 0x09)); 504 - printk(" 0x19: left line out = 0x%02x\n", snd_cs4231_in(chip, 0x19)); 505 - printk(" 0x0a: pin control = 0x%02x ", snd_cs4231_in(chip, 0x0a)); 506 - printk(" 0x1a: mono control = 0x%02x\n", snd_cs4231_in(chip, 0x1a)); 507 - printk(" 0x0b: init & status = 0x%02x ", snd_cs4231_in(chip, 0x0b)); 508 - printk(" 0x1b: right line out = 0x%02x\n", snd_cs4231_in(chip, 0x1b)); 509 - printk(" 0x0c: revision & mode = 0x%02x ", snd_cs4231_in(chip, 0x0c)); 510 - printk(" 0x1c: record format = 0x%02x\n", snd_cs4231_in(chip, 0x1c)); 511 - printk(" 0x0d: loopback = 0x%02x ", snd_cs4231_in(chip, 0x0d)); 512 - printk(" 0x1d: var freq (PnP) = 0x%02x\n", snd_cs4231_in(chip, 0x1d)); 513 - printk(" 0x0e: ply upr count = 0x%02x ", snd_cs4231_in(chip, 0x0e)); 514 - printk(" 0x1e: rec upr count = 0x%02x\n", snd_cs4231_in(chip, 0x1e)); 515 - printk(" 0x0f: ply lwr count = 0x%02x ", snd_cs4231_in(chip, 0x0f)); 516 - printk(" 0x1f: rec lwr count = 0x%02x\n", snd_cs4231_in(chip, 0x1f)); 517 - } 518 - 519 - #endif 520 472 521 473 /* 522 474 * CS4231 detection / MCE routines ··· 482 528 /* huh.. 
looks like this sequence is proper for CS4231A chip (GUS MAX) */ 483 529 for (timeout = 5; timeout > 0; timeout--) 484 530 __cs4231_readb(chip, CS4231P(chip, REGSEL)); 531 + 485 532 /* end of cleanup sequence */ 486 - for (timeout = 250; 533 + for (timeout = 500; 487 534 timeout > 0 && (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT); 488 535 timeout--) 489 - udelay(100); 536 + udelay(1000); 490 537 } 491 538 492 539 static void snd_cs4231_mce_up(cs4231_t *chip) ··· 500 545 udelay(100); 501 546 #ifdef CONFIG_SND_DEBUG 502 547 if (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) 503 - snd_printk("mce_up - auto calibration time out (0)\n"); 548 + snd_printdd("mce_up - auto calibration time out (0)\n"); 504 549 #endif 505 550 chip->mce_bit |= CS4231_MCE; 506 551 timeout = __cs4231_readb(chip, CS4231P(chip, REGSEL)); 507 552 if (timeout == 0x80) 508 - snd_printk("mce_up [%p]: serious init problem - codec still busy\n", chip->port); 553 + snd_printdd("mce_up [%p]: serious init problem - codec still busy\n", chip->port); 509 554 if (!(timeout & CS4231_MCE)) 510 555 __cs4231_writeb(chip, chip->mce_bit | (timeout & 0x1f), CS4231P(chip, REGSEL)); 511 556 spin_unlock_irqrestore(&chip->lock, flags); ··· 518 563 519 564 spin_lock_irqsave(&chip->lock, flags); 520 565 snd_cs4231_busy_wait(chip); 521 - #if 0 522 - printk("(1) timeout = %i\n", timeout); 523 - #endif 524 566 #ifdef CONFIG_SND_DEBUG 525 567 if (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) 526 - snd_printk("mce_down [%p] - auto calibration time out (0)\n", CS4231P(chip, REGSEL)); 568 + snd_printdd("mce_down [%p] - auto calibration time out (0)\n", CS4231P(chip, REGSEL)); 527 569 #endif 528 570 chip->mce_bit &= ~CS4231_MCE; 529 571 timeout = __cs4231_readb(chip, CS4231P(chip, REGSEL)); 530 572 __cs4231_writeb(chip, chip->mce_bit | (timeout & 0x1f), CS4231P(chip, REGSEL)); 531 573 if (timeout == 0x80) 532 - snd_printk("mce_down [%p]: serious init problem - codec still busy\n", 
chip->port); 574 + snd_printdd("mce_down [%p]: serious init problem - codec still busy\n", chip->port); 533 575 if ((timeout & CS4231_MCE) == 0) { 534 576 spin_unlock_irqrestore(&chip->lock, flags); 535 577 return; ··· 542 590 spin_unlock_irqrestore(&chip->lock, flags); 543 591 return; 544 592 } 545 - #if 0 546 - printk("(2) timeout = %i, jiffies = %li\n", timeout, jiffies); 547 - #endif 593 + 548 594 /* in 10ms increments, check condition, up to 250ms */ 549 595 timeout = 25; 550 596 while (snd_cs4231_in(chip, CS4231_TEST_INIT) & CS4231_CALIB_IN_PROGRESS) { ··· 554 604 msleep(10); 555 605 spin_lock_irqsave(&chip->lock, flags); 556 606 } 557 - #if 0 558 - printk("(3) jiffies = %li\n", jiffies); 559 - #endif 607 + 560 608 /* in 10ms increments, check condition, up to 100ms */ 561 609 timeout = 10; 562 610 while (__cs4231_readb(chip, CS4231P(chip, REGSEL)) & CS4231_INIT) { ··· 567 619 spin_lock_irqsave(&chip->lock, flags); 568 620 } 569 621 spin_unlock_irqrestore(&chip->lock, flags); 570 - #if 0 571 - printk("(4) jiffies = %li\n", jiffies); 572 - snd_printk("mce_down - exit = 0x%x\n", __cs4231_readb(chip, CS4231P(chip, REGSEL))); 573 - #endif 574 622 } 575 - 576 - #if 0 /* Unused for now... 
*/ 577 - static unsigned int snd_cs4231_get_count(unsigned char format, unsigned int size) 578 - { 579 - switch (format & 0xe0) { 580 - case CS4231_LINEAR_16: 581 - case CS4231_LINEAR_16_BIG: 582 - size >>= 1; 583 - break; 584 - case CS4231_ADPCM_16: 585 - return size >> 2; 586 - } 587 - if (format & CS4231_STEREO) 588 - size >>= 1; 589 - return size; 590 - } 591 - #endif 592 623 593 624 #ifdef EBUS_SUPPORT 594 625 static void snd_cs4231_ebus_advance_dma(struct ebus_dma_info *p, snd_pcm_substream_t *substream, unsigned int *periods_sent) ··· 575 648 snd_pcm_runtime_t *runtime = substream->runtime; 576 649 577 650 while (1) { 578 - unsigned int dma_size = snd_pcm_lib_period_bytes(substream); 579 - unsigned int offset = dma_size * (*periods_sent); 651 + unsigned int period_size = snd_pcm_lib_period_bytes(substream); 652 + unsigned int offset = period_size * (*periods_sent); 580 653 581 - if (dma_size >= (1 << 24)) 654 + if (period_size >= (1 << 24)) 582 655 BUG(); 583 656 584 - if (ebus_dma_request(p, runtime->dma_addr + offset, dma_size)) 657 + if (ebus_dma_request(p, runtime->dma_addr + offset, period_size)) 585 658 return; 586 - #if 0 587 - printk("ebus_advance: Sent period %u (size[%x] offset[%x])\n", 588 - (*periods_sent), dma_size, offset); 589 - #endif 590 659 (*periods_sent) = ((*periods_sent) + 1) % runtime->periods; 591 660 } 592 661 } 593 662 #endif 594 663 595 - static void cs4231_dma_trigger(cs4231_t *chip, unsigned int what, int on) 664 + #ifdef SBUS_SUPPORT 665 + static void snd_cs4231_sbus_advance_dma(snd_pcm_substream_t *substream, unsigned int *periods_sent) 596 666 { 667 + cs4231_t *chip = snd_pcm_substream_chip(substream); 668 + snd_pcm_runtime_t *runtime = substream->runtime; 669 + 670 + unsigned int period_size = snd_pcm_lib_period_bytes(substream); 671 + unsigned int offset = period_size * (*periods_sent % runtime->periods); 672 + 673 + if (runtime->period_size > 0xffff + 1) 674 + BUG(); 675 + 676 + switch (substream->stream) { 677 + case 
SNDRV_PCM_STREAM_PLAYBACK: 678 + sbus_writel(runtime->dma_addr + offset, chip->port + APCPNVA); 679 + sbus_writel(period_size, chip->port + APCPNC); 680 + break; 681 + case SNDRV_PCM_STREAM_CAPTURE: 682 + sbus_writel(runtime->dma_addr + offset, chip->port + APCCNVA); 683 + sbus_writel(period_size, chip->port + APCCNC); 684 + break; 685 + } 686 + 687 + (*periods_sent) = (*periods_sent + 1) % runtime->periods; 688 + } 689 + #endif 690 + 691 + static void cs4231_dma_trigger(snd_pcm_substream_t *substream, unsigned int what, int on) 692 + { 693 + cs4231_t *chip = snd_pcm_substream_chip(substream); 694 + 597 695 #ifdef EBUS_SUPPORT 598 696 if (chip->flags & CS4231_FLAG_EBUS) { 599 697 if (what & CS4231_PLAYBACK_ENABLE) { ··· 646 694 } else { 647 695 #endif 648 696 #ifdef SBUS_SUPPORT 697 + u32 csr = sbus_readl(chip->port + APCCSR); 698 + /* I don't know why, but on sbus the period counter must 699 + * only start counting after the first period is sent. 700 + * Therefore this dummy thing. 701 + */ 702 + unsigned int dummy = 0; 703 + 704 + switch (what) { 705 + case CS4231_PLAYBACK_ENABLE: 706 + if (on) { 707 + csr &= ~APC_XINT_PLAY; 708 + sbus_writel(csr, chip->port + APCCSR); 709 + 710 + csr &= ~APC_PPAUSE; 711 + sbus_writel(csr, chip->port + APCCSR); 712 + 713 + snd_cs4231_sbus_advance_dma(substream, &dummy); 714 + 715 + csr |= APC_GENL_INT | APC_PLAY_INT | APC_XINT_ENA | 716 + APC_XINT_PLAY | APC_XINT_EMPT | APC_XINT_GENL | 717 + APC_XINT_PENA | APC_PDMA_READY; 718 + sbus_writel(csr, chip->port + APCCSR); 719 + } else { 720 + csr |= APC_PPAUSE; 721 + sbus_writel(csr, chip->port + APCCSR); 722 + 723 + csr &= ~APC_PDMA_READY; 724 + sbus_writel(csr, chip->port + APCCSR); 725 + } 726 + break; 727 + case CS4231_RECORD_ENABLE: 728 + if (on) { 729 + csr &= ~APC_XINT_CAPT; 730 + sbus_writel(csr, chip->port + APCCSR); 731 + 732 + csr &= ~APC_CPAUSE; 733 + sbus_writel(csr, chip->port + APCCSR); 734 + 735 + snd_cs4231_sbus_advance_dma(substream, &dummy); 736 + 737 + csr |= 
APC_GENL_INT | APC_CAPT_INT | APC_XINT_ENA | 738 + APC_XINT_CAPT | APC_XINT_CEMP | APC_XINT_GENL | 739 + APC_CDMA_READY; 740 + 741 + sbus_writel(csr, chip->port + APCCSR); 742 + } else { 743 + csr |= APC_CPAUSE; 744 + sbus_writel(csr, chip->port + APCCSR); 745 + 746 + csr &= ~APC_CDMA_READY; 747 + sbus_writel(csr, chip->port + APCCSR); 748 + } 749 + break; 750 + } 649 751 #endif 650 752 #ifdef EBUS_SUPPORT 651 753 } ··· 731 725 } 732 726 } 733 727 734 - #if 0 735 - printk("TRIGGER: what[%x] on(%d)\n", 736 - what, (cmd == SNDRV_PCM_TRIGGER_START)); 737 - #endif 738 - 739 728 spin_lock_irqsave(&chip->lock, flags); 740 729 if (cmd == SNDRV_PCM_TRIGGER_START) { 741 - cs4231_dma_trigger(chip, what, 1); 730 + cs4231_dma_trigger(substream, what, 1); 742 731 chip->image[CS4231_IFACE_CTRL] |= what; 743 - if (what & CS4231_PLAYBACK_ENABLE) { 744 - snd_cs4231_out(chip, CS4231_PLY_LWR_CNT, 0xff); 745 - snd_cs4231_out(chip, CS4231_PLY_UPR_CNT, 0xff); 746 - } 747 - if (what & CS4231_RECORD_ENABLE) { 748 - snd_cs4231_out(chip, CS4231_REC_LWR_CNT, 0xff); 749 - snd_cs4231_out(chip, CS4231_REC_UPR_CNT, 0xff); 750 - } 751 732 } else { 752 - cs4231_dma_trigger(chip, what, 0); 733 + cs4231_dma_trigger(substream, what, 0); 753 734 chip->image[CS4231_IFACE_CTRL] &= ~what; 754 735 } 755 736 snd_cs4231_out(chip, CS4231_IFACE_CTRL, ··· 748 755 result = -EINVAL; 749 756 break; 750 757 } 751 - #if 0 752 - snd_cs4231_debug(chip); 753 - #endif 758 + 754 759 return result; 755 760 } 756 761 ··· 781 790 } 782 791 if (channels > 1) 783 792 rformat |= CS4231_STEREO; 784 - #if 0 785 - snd_printk("get_format: 0x%x (mode=0x%x)\n", format, mode); 786 - #endif 787 793 return rformat; 788 794 } 789 795 ··· 932 944 snd_cs4231_mce_down(chip); 933 945 934 946 #ifdef SNDRV_DEBUG_MCE 935 - snd_printk("init: (1)\n"); 947 + snd_printdd("init: (1)\n"); 936 948 #endif 937 949 snd_cs4231_mce_up(chip); 938 950 spin_lock_irqsave(&chip->lock, flags); ··· 945 957 snd_cs4231_mce_down(chip); 946 958 947 959 #ifdef 
SNDRV_DEBUG_MCE 948 - snd_printk("init: (2)\n"); 960 + snd_printdd("init: (2)\n"); 949 961 #endif 950 962 951 963 snd_cs4231_mce_up(chip); ··· 955 967 snd_cs4231_mce_down(chip); 956 968 957 969 #ifdef SNDRV_DEBUG_MCE 958 - snd_printk("init: (3) - afei = 0x%x\n", chip->image[CS4231_ALT_FEATURE_1]); 970 + snd_printdd("init: (3) - afei = 0x%x\n", chip->image[CS4231_ALT_FEATURE_1]); 959 971 #endif 960 972 961 973 spin_lock_irqsave(&chip->lock, flags); ··· 969 981 snd_cs4231_mce_down(chip); 970 982 971 983 #ifdef SNDRV_DEBUG_MCE 972 - snd_printk("init: (4)\n"); 984 + snd_printdd("init: (4)\n"); 973 985 #endif 974 986 975 987 snd_cs4231_mce_up(chip); ··· 979 991 snd_cs4231_mce_down(chip); 980 992 981 993 #ifdef SNDRV_DEBUG_MCE 982 - snd_printk("init: (5)\n"); 994 + snd_printdd("init: (5)\n"); 983 995 #endif 984 996 } 985 997 ··· 1010 1022 CS4231_RECORD_IRQ | 1011 1023 CS4231_TIMER_IRQ); 1012 1024 snd_cs4231_out(chip, CS4231_IRQ_STATUS, 0); 1025 + 1013 1026 spin_unlock_irqrestore(&chip->lock, flags); 1014 1027 1015 1028 chip->mode = mode; ··· 1125 1136 static int snd_cs4231_playback_prepare(snd_pcm_substream_t *substream) 1126 1137 { 1127 1138 cs4231_t *chip = snd_pcm_substream_chip(substream); 1139 + snd_pcm_runtime_t *runtime = substream->runtime; 1128 1140 unsigned long flags; 1129 1141 1130 1142 spin_lock_irqsave(&chip->lock, flags); 1143 + 1131 1144 chip->image[CS4231_IFACE_CTRL] &= ~(CS4231_PLAYBACK_ENABLE | 1132 1145 CS4231_PLAYBACK_PIO); 1146 + 1147 + if (runtime->period_size > 0xffff + 1) 1148 + BUG(); 1149 + 1150 + snd_cs4231_out(chip, CS4231_PLY_LWR_CNT, (runtime->period_size - 1) & 0x00ff); 1151 + snd_cs4231_out(chip, CS4231_PLY_UPR_CNT, (runtime->period_size - 1) >> 8 & 0x00ff); 1152 + chip->p_periods_sent = 0; 1153 + 1133 1154 spin_unlock_irqrestore(&chip->lock, flags); 1134 1155 1135 1156 return 0; ··· 1171 1172 static int snd_cs4231_capture_prepare(snd_pcm_substream_t *substream) 1172 1173 { 1173 1174 cs4231_t *chip = snd_pcm_substream_chip(substream); 
1175 + snd_pcm_runtime_t *runtime = substream->runtime; 1174 1176 unsigned long flags; 1175 1177 1176 1178 spin_lock_irqsave(&chip->lock, flags); 1177 1179 chip->image[CS4231_IFACE_CTRL] &= ~(CS4231_RECORD_ENABLE | 1178 1180 CS4231_RECORD_PIO); 1181 + 1182 + snd_cs4231_out(chip, CS4231_REC_LWR_CNT, (runtime->period_size - 1) & 0x00ff); 1183 + snd_cs4231_out(chip, CS4231_REC_UPR_CNT, (runtime->period_size - 1) >> 8 & 0x00ff); 1179 1184 1180 1185 spin_unlock_irqrestore(&chip->lock, flags); 1181 1186 ··· 1199 1196 chip->capture_substream->runtime->overrange++; 1200 1197 } 1201 1198 1202 - static void snd_cs4231_generic_interrupt(cs4231_t *chip) 1199 + static irqreturn_t snd_cs4231_generic_interrupt(cs4231_t *chip) 1203 1200 { 1204 1201 unsigned long flags; 1205 1202 unsigned char status; 1206 1203 1204 + /* This IRQ was not raised by the cs4231. */ 1205 + if (!(__cs4231_readb(chip, CS4231P(chip, STATUS)) & CS4231_GLOBALIRQ)) 1206 + return IRQ_NONE; 1207 + 1207 1208 status = snd_cs4231_in(chip, CS4231_IRQ_STATUS); 1208 - if (!status) 1209 - return; 1210 1209 1211 1210 if (status & CS4231_TIMER_IRQ) { 1212 1211 if (chip->timer) 1213 1212 snd_timer_interrupt(chip->timer, chip->timer->sticks); 1214 1213 } 1215 - if (status & CS4231_PLAYBACK_IRQ) 1216 - snd_pcm_period_elapsed(chip->playback_substream); 1217 - if (status & CS4231_RECORD_IRQ) { 1214 + 1215 + if (status & CS4231_RECORD_IRQ) 1218 1216 snd_cs4231_overrange(chip); 1219 - snd_pcm_period_elapsed(chip->capture_substream); 1220 - } 1221 1217 1222 1218 /* ACK the CS4231 interrupt.
*/ 1223 1219 spin_lock_irqsave(&chip->lock, flags); 1224 1220 snd_cs4231_outm(chip, CS4231_IRQ_STATUS, ~CS4231_ALL_IRQS | ~status, 0); 1225 1221 spin_unlock_irqrestore(&chip->lock, flags); 1222 + 1223 + return IRQ_HANDLED; 1224 1226 1224 1227 #ifdef SBUS_SUPPORT 1225 1228 static irqreturn_t snd_cs4231_sbus_interrupt(int irq, void *dev_id, struct pt_regs *regs) 1226 1229 { 1227 1230 cs4231_t *chip = dev_id; 1232 - u32 csr; 1233 - 1234 - csr = sbus_readl(chip->port + APCCSR); 1235 - if (!(csr & (APC_INT_PENDING | 1236 - APC_PLAY_INT | 1237 - APC_CAPT_INT | 1238 - APC_GENL_INT | 1239 - APC_XINT_PEMP | 1240 - APC_XINT_CEMP))) 1241 - return IRQ_NONE; 1242 1230 1243 1231 /* ACK the APC interrupt. */ 1232 + u32 csr = sbus_readl(chip->port + APCCSR); 1233 + 1244 1234 sbus_writel(csr, chip->port + APCCSR); 1245 1235 1246 - snd_cs4231_generic_interrupt(chip); 1236 + if ((chip->image[CS4231_IFACE_CTRL] & CS4231_PLAYBACK_ENABLE) && 1237 + (csr & APC_PLAY_INT) && 1238 + (csr & APC_XINT_PNVA) && 1239 + !(csr & APC_XINT_EMPT)) { 1240 + snd_cs4231_sbus_advance_dma(chip->playback_substream, 1241 + &chip->p_periods_sent); 1242 + snd_pcm_period_elapsed(chip->playback_substream); 1243 + } 1247 1244 1248 - return IRQ_HANDLED; 1245 + if ((chip->image[CS4231_IFACE_CTRL] & CS4231_RECORD_ENABLE) && 1246 + (csr & APC_CAPT_INT) && 1247 + (csr & APC_XINT_CNVA)) { 1248 + snd_cs4231_sbus_advance_dma(chip->capture_substream, 1249 + &chip->c_periods_sent); 1250 + snd_pcm_period_elapsed(chip->capture_substream); 1251 + } 1252 + 1253 + return snd_cs4231_generic_interrupt(chip); 1249 1254 } 1250 1255 #endif 1251 1256 ···
bytes_to_frames(substream->runtime, ptr); 1331 1319 } 1332 1320 ··· 1340 1328 int i, id, vers; 1341 1329 unsigned char *ptr; 1342 1330 1343 - #if 0 1344 - snd_cs4231_debug(chip); 1345 - #endif 1346 1331 id = vers = 0; 1347 1332 for (i = 0; i < 50; i++) { 1348 1333 mb(); ··· 1994 1985 chip->port = sbus_ioremap(&sdev->resource[0], 0, 1995 1986 chip->regs_size, "cs4231"); 1996 1987 if (!chip->port) { 1997 - snd_printk("cs4231-%d: Unable to map chip registers.\n", dev); 1988 + snd_printdd("cs4231-%d: Unable to map chip registers.\n", dev); 1998 1989 return -EIO; 1999 1990 } 2000 1991 2001 1992 if (request_irq(sdev->irqs[0], snd_cs4231_sbus_interrupt, 2002 1993 SA_SHIRQ, "cs4231", chip)) { 2003 - snd_printk("cs4231-%d: Unable to grab SBUS IRQ %s\n", 1994 + snd_printdd("cs4231-%d: Unable to grab SBUS IRQ %s\n", 2004 1995 dev, 2005 1996 __irq_itoa(sdev->irqs[0])); 2006 1997 snd_cs4231_sbus_free(chip); ··· 2122 2113 chip->eb2c.regs = ioremap(edev->resource[2].start, 0x10); 2123 2114 if (!chip->port || !chip->eb2p.regs || !chip->eb2c.regs) { 2124 2115 snd_cs4231_ebus_free(chip); 2125 - snd_printk("cs4231-%d: Unable to map chip registers.\n", dev); 2116 + snd_printdd("cs4231-%d: Unable to map chip registers.\n", dev); 2126 2117 return -EIO; 2127 2118 } 2128 2119 2129 2120 if (ebus_dma_register(&chip->eb2c)) { 2130 2121 snd_cs4231_ebus_free(chip); 2131 - snd_printk("cs4231-%d: Unable to register EBUS capture DMA\n", dev); 2122 + snd_printdd("cs4231-%d: Unable to register EBUS capture DMA\n", dev); 2132 2123 return -EBUSY; 2133 2124 } 2134 2125 if (ebus_dma_irq_enable(&chip->eb2c, 1)) { 2135 2126 snd_cs4231_ebus_free(chip); 2136 - snd_printk("cs4231-%d: Unable to enable EBUS capture IRQ\n", dev); 2127 + snd_printdd("cs4231-%d: Unable to enable EBUS capture IRQ\n", dev); 2137 2128 return -EBUSY; 2138 2129 } 2139 2130 2140 2131 if (ebus_dma_register(&chip->eb2p)) { 2141 2132 snd_cs4231_ebus_free(chip); 2142 - snd_printk("cs4231-%d: Unable to register EBUS play DMA\n", dev); 2133 
+ snd_printdd("cs4231-%d: Unable to register EBUS play DMA\n", dev); 2143 2134 return -EBUSY; 2144 2135 } 2145 2136 if (ebus_dma_irq_enable(&chip->eb2p, 1)) { 2146 2137 snd_cs4231_ebus_free(chip); 2147 - snd_printk("cs4231-%d: Unable to enable EBUS play IRQ\n", dev); 2138 + snd_printdd("cs4231-%d: Unable to enable EBUS play IRQ\n", dev); 2148 2139 return -EBUSY; 2149 2140 } 2150 2141
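The cs4231 prepare() hunks above stop the trigger path from loading the sample-count registers with a fixed 0xff/0xff and instead program them with period_size - 1, split across the chip's 8-bit lower and upper count registers so an interrupt fires once per period. A standalone sketch of that arithmetic (helper names are ours, not the driver's):

```c
#include <assert.h>

/* The CS4231 count register pair is 8 bits each and holds (count - 1),
 * so periods of up to 0x10000 frames fit; this mirrors the
 * snd_cs4231_out(..., CS4231_PLY_LWR_CNT/UPR_CNT, ...) writes. */
static unsigned char count_lwr(unsigned int period_size)
{
    return (period_size - 1) & 0x00ff;
}

static unsigned char count_upr(unsigned int period_size)
{
    return ((period_size - 1) >> 8) & 0x00ff;
}

/* What the hardware effectively counts: the 16-bit value plus one. */
static unsigned int count_total(unsigned char lwr, unsigned char upr)
{
    return (((unsigned int)upr << 8) | lwr) + 1;
}
```

The off-by-one encoding is why both writes subtract 1, and why the `period_size > 0xffff + 1` check in the playback path BUG()s rather than silently truncating.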