Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'sh/urgent' into sh-latest

+4856 -2351
+25
Documentation/ABI/testing/sysfs-platform-at91
···
+ What:		/sys/devices/platform/at91_can/net/<iface>/mb0_id
+ Date:		January 2011
+ KernelVersion:	2.6.38
+ Contact:	Marc Kleine-Budde <kernel@pengutronix.de>
+ Description:
+ 	Value representing the can_id of mailbox 0.
+ 
+ 	Default: 0x7ff (standard frame)
+ 
+ 	Due to a chip bug (errata 50.2.6.3 & 50.3.5.3 in
+ 	"AT91SAM9263 Preliminary 6249H-ATARM-27-Jul-09") the
+ 	contents of mailbox 0 may be sent under certain
+ 	conditions (even if disabled or in rx mode).
+ 
+ 	The workaround in the errata suggests not to use the
+ 	mailbox and load it with an unused identifier.
+ 
+ 	In order to use an extended can_id add the
+ 	CAN_EFF_FLAG (0x80000000U) to the can_id. Example:
+ 
+ 	- standard id 0x7ff:
+ 	echo 0x7ff > /sys/class/net/can0/mb0_id
+ 
+ 	- extended id 0x1fffffff:
+ 	echo 0x9fffffff > /sys/class/net/can0/mb0_id
+2
Documentation/filesystems/ntfs.txt
···
  2.1.30:
  	- Fix writev() (it kept writing the first segment over and over again
  	  instead of moving onto subsequent segments).
+ 	- Fix crash in ntfs_mft_record_alloc() when mapping the new extent mft
+ 	  record failed.
  2.1.29:
  	- Fix a deadlock when mounting read-write.
  2.1.28:
+71 -12
Documentation/networking/bonding.txt
···
  3.3 Configuring Bonding Manually with Ifenslave
  3.3.1 Configuring Multiple Bonds Manually
  3.4 Configuring Bonding Manually via Sysfs
- 3.5 Overriding Configuration for Special Cases
+ 3.5 Configuration with Interfaces Support
+ 3.6 Overriding Configuration for Special Cases
 
  4. Querying Bonding Configuration
  4.1 Bonding Configuration
···
  default kernel source include directory.
 
  SECOND IMPORTANT NOTE:
- If you plan to configure bonding using sysfs, you do not need
- to use ifenslave.
+ If you plan to configure bonding using sysfs or using the
+ /etc/network/interfaces file, you do not need to use ifenslave.
 
  2. Bonding Driver Options
  =========================
···
  	You can configure bonding using either your distro's network
  initialization scripts, or manually using either ifenslave or the
- sysfs interface.  Distros generally use one of two packages for the
- network initialization scripts: initscripts or sysconfig.  Recent
- versions of these packages have support for bonding, while older
+ sysfs interface.  Distros generally use one of three packages for the
+ network initialization scripts: initscripts, sysconfig or interfaces.
+ Recent versions of these packages have support for bonding, while older
  versions do not.
 
  	We will first describe the options for configuring bonding for
- distros using versions of initscripts and sysconfig with full or
- partial support for bonding, then provide information on enabling
+ distros using versions of initscripts, sysconfig and interfaces with full
+ or partial support for bonding, then provide information on enabling
  bonding without support from the network initialization scripts (i.e.,
  older versions of initscripts or sysconfig).
 
- 	If you're unsure whether your distro uses sysconfig or
- initscripts, or don't know if it's new enough, have no fear.
+ 	If you're unsure whether your distro uses sysconfig,
+ initscripts or interfaces, or don't know if it's new enough, have no fear.
  Determining this is fairly straightforward.
 
- 	First, issue the command:
+ 	First, look for a file called interfaces in the /etc/network
+ directory.  If this file is present on your system, then your system uses
+ interfaces.  See Configuration with Interfaces Support.
+ 
+ 	Otherwise, issue the command:
 
  $ rpm -qf /sbin/ifup
···
  echo +eth2 > /sys/class/net/bond1/bonding/slaves
  echo +eth3 > /sys/class/net/bond1/bonding/slaves
 
- 3.5 Overriding Configuration for Special Cases
+ 3.5 Configuration with Interfaces Support
+ -----------------------------------------
+ 
+ 	This section applies to distros which use the /etc/network/interfaces
+ file to describe network interface configuration, most notably Debian and its
+ derivatives.
+ 
+ 	The ifup and ifdown commands on Debian don't support bonding out of
+ the box.  The ifenslave-2.6 package should be installed to provide bonding
+ support.  Once installed, this package provides the bond-* options to be used
+ in /etc/network/interfaces.
+ 
+ 	Note that the ifenslave-2.6 package will load the bonding module and
+ use the ifenslave command when appropriate.
+ 
+ Example Configurations
+ ----------------------
+ 
+ 	In /etc/network/interfaces, the following stanza will configure bond0,
+ in active-backup mode, with eth0 and eth1 as slaves.
+ 
+ auto bond0
+ iface bond0 inet dhcp
+ 	bond-slaves eth0 eth1
+ 	bond-mode active-backup
+ 	bond-miimon 100
+ 	bond-primary eth0 eth1
+ 
+ 	If the above configuration doesn't work, you might have a system using
+ upstart for system startup.  This is most notably true for recent
+ Ubuntu versions.  The following stanza in /etc/network/interfaces will
+ produce the same result on those systems.
+ 
+ auto bond0
+ iface bond0 inet dhcp
+ 	bond-slaves none
+ 	bond-mode active-backup
+ 	bond-miimon 100
+ 
+ auto eth0
+ iface eth0 inet manual
+ 	bond-master bond0
+ 	bond-primary eth0 eth1
+ 
+ auto eth1
+ iface eth1 inet manual
+ 	bond-master bond0
+ 	bond-primary eth0 eth1
+ 
+ 	For a full list of bond-* options supported in /etc/network/interfaces
+ and some more advanced examples tailored to your particular distro, see the
+ files in /usr/share/doc/ifenslave-2.6.
+ 
+ 3.6 Overriding Configuration for Special Cases
  ----------------------------------------------
+ 
  When using the bonding driver, the physical port which transmits a frame is
  typically selected by the bonding driver, and is not relevant to the user or
  system administrator.  The output port is simply selected using the policies of
+16 -1
MAINTAINERS
···
  F:	net/ieee802154/
  F:	drivers/ieee802154/
 
+ IKANOS/ADI EAGLE ADSL USB DRIVER
+ M:	Matthieu Castet <castet.matthieu@free.fr>
+ M:	Stanislaw Gruszka <stf_xl@wp.pl>
+ S:	Maintained
+ F:	drivers/usb/atm/ueagle-atm.c
+ 
  INTEGRITY MEASUREMENT ARCHITECTURE (IMA)
  M:	Mimi Zohar <zohar@us.ibm.com>
  S:	Supported
···
  F:	include/linux/wimax/i2400m.h
 
  INTEL WIRELESS WIFI LINK (iwlwifi)
- M:	Reinette Chatre <reinette.chatre@intel.com>
  M:	Wey-Yi Guy <wey-yi.w.guy@intel.com>
  M:	Intel Linux Wireless <ilw@linux.intel.com>
  L:	linux-wireless@vger.kernel.org
···
  S:	Maintained
  F:	drivers/char/virtio_console.c
  F:	include/linux/virtio_console.h
+ 
+ VIRTIO CORE, NET AND BLOCK DRIVERS
+ M:	Rusty Russell <rusty@rustcorp.com.au>
+ M:	"Michael S. Tsirkin" <mst@redhat.com>
+ L:	virtualization@lists.linux-foundation.org
+ S:	Maintained
+ F:	drivers/virtio/
+ F:	drivers/net/virtio_net.c
+ F:	drivers/block/virtio_blk.c
+ F:	include/linux/virtio_*.h
 
  VIRTIO HOST (VHOST)
  M:	"Michael S. Tsirkin" <mst@redhat.com>
+1 -1
Makefile
···
  VERSION = 2
  PATCHLEVEL = 6
  SUBLEVEL = 38
- EXTRAVERSION = -rc2
+ EXTRAVERSION = -rc3
  NAME = Flesh-Eating Bats with Fangs
 
  # *DOCUMENTATION*
+17 -16
arch/arm/include/asm/io.h
···
  	return (void __iomem *)addr;
  }
 
+ /* IO barriers */
+ #ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
+ #define __iormb()		rmb()
+ #define __iowmb()		wmb()
+ #else
+ #define __iormb()		do { } while (0)
+ #define __iowmb()		do { } while (0)
+ #endif
+ 
  /*
   * Now, pick up the machine-defined IO definitions
   */
···
   * The {in,out}[bwl] macros are for emulating x86-style PCI/ISA IO space.
   */
  #ifdef __io
- #define outb(v,p)	__raw_writeb(v,__io(p))
- #define outw(v,p)	__raw_writew((__force __u16) \
- 				cpu_to_le16(v),__io(p))
- #define outl(v,p)	__raw_writel((__force __u32) \
- 				cpu_to_le32(v),__io(p))
+ #define outb(v,p)	({ __iowmb(); __raw_writeb(v,__io(p)); })
+ #define outw(v,p)	({ __iowmb(); __raw_writew((__force __u16) \
+ 				cpu_to_le16(v),__io(p)); })
+ #define outl(v,p)	({ __iowmb(); __raw_writel((__force __u32) \
+ 				cpu_to_le32(v),__io(p)); })
 
- #define inb(p)	({ __u8 __v = __raw_readb(__io(p)); __v; })
- #define inw(p)	({ __u16 __v = le16_to_cpu((__force __le16) \
- 			__raw_readw(__io(p))); __v; })
- #define inl(p)	({ __u32 __v = le32_to_cpu((__force __le32) \
- 			__raw_readl(__io(p))); __v; })
+ #define inb(p)	({ __u8 __v = __raw_readb(__io(p)); __iormb(); __v; })
+ #define inw(p)	({ __u16 __v = le16_to_cpu((__force __le16) \
+ 			__raw_readw(__io(p))); __iormb(); __v; })
+ #define inl(p)	({ __u32 __v = le32_to_cpu((__force __le32) \
+ 			__raw_readl(__io(p))); __iormb(); __v; })
 
  #define outsb(p,d,l)	__raw_writesb(__io(p),d,l)
  #define outsw(p,d,l)	__raw_writesw(__io(p),d,l)
···
  				cpu_to_le16(v),__mem_pci(c)))
  #define writel_relaxed(v,c)	((void)__raw_writel((__force u32) \
  				cpu_to_le32(v),__mem_pci(c)))
- 
- #ifdef CONFIG_ARM_DMA_MEM_BUFFERABLE
- #define __iormb()		rmb()
- #define __iowmb()		wmb()
- #else
- #define __iormb()		do { } while (0)
- #define __iowmb()		do { } while (0)
- #endif
 
  #define readb(c)	({ u8 __v = readb_relaxed(c); __iormb(); __v; })
  #define readw(c)	({ u16 __v = readw_relaxed(c); __iormb(); __v; })
+10 -12
arch/arm/kernel/head.S
···
  #ifdef CONFIG_SMP_ON_UP
  __fixup_smp:
- 	mov	r4, #0x00070000
- 	orr	r3, r4, #0xff000000	@ mask 0xff070000
- 	orr	r4, r4, #0x41000000	@ val 0x41070000
- 	and	r0, r9, r3
- 	teq	r0, r4			@ ARM CPU and ARMv6/v7?
+ 	and	r3, r9, #0x000f0000	@ architecture version
+ 	teq	r3, #0x000f0000		@ CPU ID supported?
  	bne	__fixup_smp_on_up	@ no, assume UP
 
- 	orr	r3, r3, #0x0000ff00
- 	orr	r3, r3, #0x000000f0	@ mask 0xff07fff0
+ 	bic	r3, r9, #0x00ff0000
+ 	bic	r3, r3, #0x0000000f	@ mask 0xff00fff0
+ 	mov	r4, #0x41000000
  	orr	r4, r4, #0x0000b000
- 	orr	r4, r4, #0x00000020	@ val 0x4107b020
- 	and	r0, r9, r3
- 	teq	r0, r4			@ ARM 11MPCore?
+ 	orr	r4, r4, #0x00000020	@ val 0x4100b020
+ 	teq	r3, r4			@ ARM 11MPCore?
  	moveq	pc, lr			@ yes, assume SMP
 
  	mrc	p15, 0, r0, c0, c0, 5	@ read MPIDR
- 	tst	r0, #1 << 31
- 	movne	pc, lr			@ bit 31 => SMP
+ 	and	r0, r0, #0xc0000000	@ multiprocessing extensions and
+ 	teq	r0, #0x80000000		@ not part of a uniprocessor system?
+ 	moveq	pc, lr			@ yes, assume SMP
 
  __fixup_smp_on_up:
  	adr	r0, 1f
+2 -2
arch/arm/mach-footbridge/include/mach/debug-macro.S
···
  /* For NetWinder debugging */
  		.macro	addruart, rp, rv
  		mov	\rp, #0x000003f8
- 		orr	\rv, \rp, #0x7c000000	@ physical
- 		orr	\rp, \rp, #0xff000000	@ virtual
+ 		orr	\rv, \rp, #0xff000000	@ virtual
+ 		orr	\rp, \rp, #0x7c000000	@ physical
  		.endm
 
  #define UART_SHIFT	0
-13
arch/arm/mach-omap1/include/mach/entry-macro.S
···
  #include <mach/irqs.h>
  #include <asm/hardware/gic.h>
 
- /*
-  * We use __glue to avoid errors with multiple definitions of
-  * .globl omap_irq_flags as it's included from entry-armv.S but not
-  * from entry-common.S.
-  */
- #ifdef __glue
- 	.pushsection .data
- 	.globl	omap_irq_flags
- omap_irq_flags:
- 	.word	0
- 	.popsection
- #endif
- 
  		.macro	disable_fiq
  		.endm
+1 -1
arch/arm/mach-omap1/irq.c
···
  	unsigned long wake_enable;
  };
 
+ u32 omap_irq_flags;
  static unsigned int irq_bank_count;
  static struct omap_irq_bank *irq_banks;
···
  void __init omap_init_irq(void)
  {
- 	extern unsigned int omap_irq_flags;
  	int i, j;
 
  #if defined(CONFIG_ARCH_OMAP730) || defined(CONFIG_ARCH_OMAP850)
+1 -1
arch/arm/mach-omap2/dma.c
···
  	if (IS_ERR(od)) {
  		pr_err("%s: Cant build omap_device for %s:%s.\n",
  			__func__, name, oh->name);
- 		return IS_ERR(od);
+ 		return PTR_ERR(od);
  	}
 
  	mem = platform_get_resource(&od->pdev, IORESOURCE_MEM, 0);
-14
arch/arm/mach-omap2/include/mach/entry-macro.S
···
   */
 
  #ifdef MULTI_OMAP2
- 
- /*
-  * We use __glue to avoid errors with multiple definitions of
-  * .globl omap_irq_base as it's included from entry-armv.S but not
-  * from entry-common.S.
-  */
- #ifdef __glue
- 	.pushsection .data
- 	.globl	omap_irq_base
- omap_irq_base:
- 	.word	0
- 	.popsection
- #endif
- 
  /*
   * Configure the interrupt base on the first interrupt.
   * See also omap_irq_base_init for setting omap_irq_base.
+2 -4
arch/arm/mach-omap2/io.c
···
  	return omap_hwmod_set_postsetup_state(oh, *(u8 *)data);
  }
 
+ void __iomem *omap_irq_base;
+ 
  /*
   * Initialize asm_irq_base for entry-macro.S
   */
  static inline void omap_irq_base_init(void)
  {
- 	extern void __iomem *omap_irq_base;
- 
- #ifdef MULTI_OMAP2
  	if (cpu_is_omap24xx())
  		omap_irq_base = OMAP2_L4_IO_ADDRESS(OMAP24XX_IC_BASE);
  	else if (cpu_is_omap34xx())
···
  		omap_irq_base = OMAP2_L4_IO_ADDRESS(OMAP44XX_GIC_CPU_BASE);
  	else
  		pr_err("Could not initialize omap_irq_base\n");
- #endif
  }
 
  void __init omap2_init_common_infrastructure(void)
+1 -1
arch/arm/mach-omap2/mux.c
···
  	struct omap_mux *mux = NULL;
  	struct omap_mux_entry *e;
  	const char *mode_name;
- 	int found = 0, found_mode, mode0_len = 0;
+ 	int found = 0, found_mode = 0, mode0_len = 0;
  	struct list_head *muxmodes = &partition->muxmodes;
 
  	mode_name = strchr(muxname, '.');
+2 -2
arch/arm/mach-tegra/gpio.c
···
  	spin_unlock_irqrestore(&bank->lvl_lock[port], flags);
 
  	if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
- 		__set_irq_handler_unlocked(irq, handle_level_irq);
+ 		__set_irq_handler_unlocked(d->irq, handle_level_irq);
  	else if (type & (IRQ_TYPE_EDGE_FALLING | IRQ_TYPE_EDGE_RISING))
- 		__set_irq_handler_unlocked(irq, handle_edge_irq);
+ 		__set_irq_handler_unlocked(d->irq, handle_edge_irq);
 
  	return 0;
  }
+2
arch/arm/mach-tegra/include/mach/clk.h
···
  #ifndef __MACH_CLK_H
  #define __MACH_CLK_H
 
+ struct clk;
+ 
  void tegra_periph_reset_deassert(struct clk *c);
  void tegra_periph_reset_assert(struct clk *c);
+2
arch/arm/mach-tegra/include/mach/clkdev.h
···
  #ifndef __MACH_CLKDEV_H
  #define __MACH_CLKDEV_H
 
+ struct clk;
+ 
  static inline int __clk_get(struct clk *clk)
  {
  	return 1;
+9 -9
arch/arm/mach-tegra/irq.c
···
  #define ICTLR_COP_IER_CLR	0x38
  #define ICTLR_COP_IEP_CLASS	0x3c
 
- static void (*gic_mask_irq)(struct irq_data *d);
- static void (*gic_unmask_irq)(struct irq_data *d);
+ static void (*tegra_gic_mask_irq)(struct irq_data *d);
+ static void (*tegra_gic_unmask_irq)(struct irq_data *d);
 
- #define irq_to_ictlr(irq) (((irq)-32) >> 5)
+ #define irq_to_ictlr(irq) (((irq) - 32) >> 5)
  static void __iomem *tegra_ictlr_base = IO_ADDRESS(TEGRA_PRIMARY_ICTLR_BASE);
- #define ictlr_to_virt(ictlr) (tegra_ictlr_base + (ictlr)*0x100)
+ #define ictlr_to_virt(ictlr) (tegra_ictlr_base + (ictlr) * 0x100)
 
  static void tegra_mask(struct irq_data *d)
  {
  	void __iomem *addr = ictlr_to_virt(irq_to_ictlr(d->irq));
- 	gic_mask_irq(d);
- 	writel(1<<(d->irq&31), addr+ICTLR_CPU_IER_CLR);
+ 	tegra_gic_mask_irq(d);
+ 	writel(1 << (d->irq & 31), addr+ICTLR_CPU_IER_CLR);
  }
 
  static void tegra_unmask(struct irq_data *d)
  {
  	void __iomem *addr = ictlr_to_virt(irq_to_ictlr(d->irq));
- 	gic_unmask_irq(d);
+ 	tegra_gic_unmask_irq(d);
  	writel(1<<(d->irq&31), addr+ICTLR_CPU_IER_SET);
  }
···
  		IO_ADDRESS(TEGRA_ARM_PERIF_BASE + 0x100));
 
  	gic = get_irq_chip(29);
- 	gic_unmask_irq = gic->irq_unmask;
- 	gic_mask_irq = gic->irq_mask;
+ 	tegra_gic_unmask_irq = gic->irq_unmask;
+ 	tegra_gic_mask_irq = gic->irq_mask;
  	tegra_irq.irq_ack = gic->irq_ack;
  #ifdef CONFIG_SMP
  	tegra_irq.irq_set_affinity = gic->irq_set_affinity;
+6
arch/arm/mm/init.c
···
  	memblock_reserve(__pa(_stext), _end - _stext);
  #endif
  #ifdef CONFIG_BLK_DEV_INITRD
+ 	if (phys_initrd_size &&
+ 	    memblock_is_region_reserved(phys_initrd_start, phys_initrd_size)) {
+ 		pr_err("INITRD: 0x%08lx+0x%08lx overlaps in-use memory region - disabling initrd\n",
+ 		       phys_initrd_start, phys_initrd_size);
+ 		phys_initrd_start = phys_initrd_size = 0;
+ 	}
  	if (phys_initrd_size) {
  		memblock_reserve(phys_initrd_start, phys_initrd_size);
+1
arch/avr32/include/asm/pgalloc.h
···
  #ifndef __ASM_AVR32_PGALLOC_H
  #define __ASM_AVR32_PGALLOC_H
 
+ #include <linux/mm.h>
  #include <linux/quicklist.h>
  #include <asm/page.h>
  #include <asm/pgtable.h>
+9 -4
arch/sh/kernel/cpu/sh4/setup-sh7750.c
···
  #include <linux/io.h>
  #include <linux/sh_timer.h>
  #include <linux/serial_sci.h>
- #include <asm/machtypes.h>
+ #include <generated/machtypes.h>
 
  static struct resource rtc_resources[] = {
  	[0] = {
···
  void __init plat_early_device_setup(void)
  {
+ 	struct platform_device *dev[1];
+ 
  	if (mach_is_rts7751r2d()) {
  		scif_platform_data.scscr |= SCSCR_CKE1;
- 		early_platform_add_devices(&scif_device, 1);
+ 		dev[0] = &scif_device;
+ 		early_platform_add_devices(dev, 1);
  	} else {
- 		early_platform_add_devices(&sci_device, 1);
- 		early_platform_add_devices(&scif_device, 1);
+ 		dev[0] = &sci_device;
+ 		early_platform_add_devices(dev, 1);
+ 		dev[0] = &scif_device;
+ 		early_platform_add_devices(dev, 1);
  	}
 
  	early_platform_add_devices(sh7750_early_devices,
+12 -12
arch/x86/include/asm/percpu.h
···
  	typeof(var) pxo_new__ = (nval);					\
  	switch (sizeof(var)) {						\
  	case 1:								\
- 		asm("\n1:mov "__percpu_arg(1)",%%al"			\
- 		    "\n\tcmpxchgb %2, "__percpu_arg(1)			\
+ 		asm("\n\tmov "__percpu_arg(1)",%%al"			\
+ 		    "\n1:\tcmpxchgb %2, "__percpu_arg(1)		\
  		    "\n\tjnz 1b"					\
- 			    : "=a" (pxo_ret__), "+m" (var)		\
+ 			    : "=&a" (pxo_ret__), "+m" (var)		\
  			    : "q" (pxo_new__)				\
  			    : "memory");				\
  		break;							\
  	case 2:								\
- 		asm("\n1:mov "__percpu_arg(1)",%%ax"			\
- 		    "\n\tcmpxchgw %2, "__percpu_arg(1)			\
+ 		asm("\n\tmov "__percpu_arg(1)",%%ax"			\
+ 		    "\n1:\tcmpxchgw %2, "__percpu_arg(1)		\
  		    "\n\tjnz 1b"					\
- 			    : "=a" (pxo_ret__), "+m" (var)		\
+ 			    : "=&a" (pxo_ret__), "+m" (var)		\
  			    : "r" (pxo_new__)				\
  			    : "memory");				\
  		break;							\
  	case 4:								\
- 		asm("\n1:mov "__percpu_arg(1)",%%eax"			\
- 		    "\n\tcmpxchgl %2, "__percpu_arg(1)			\
+ 		asm("\n\tmov "__percpu_arg(1)",%%eax"			\
+ 		    "\n1:\tcmpxchgl %2, "__percpu_arg(1)		\
  		    "\n\tjnz 1b"					\
- 			    : "=a" (pxo_ret__), "+m" (var)		\
+ 			    : "=&a" (pxo_ret__), "+m" (var)		\
  			    : "r" (pxo_new__)				\
  			    : "memory");				\
  		break;							\
  	case 8:								\
- 		asm("\n1:mov "__percpu_arg(1)",%%rax"			\
- 		    "\n\tcmpxchgq %2, "__percpu_arg(1)			\
+ 		asm("\n\tmov "__percpu_arg(1)",%%rax"			\
+ 		    "\n1:\tcmpxchgq %2, "__percpu_arg(1)		\
  		    "\n\tjnz 1b"					\
- 			    : "=a" (pxo_ret__), "+m" (var)		\
+ 			    : "=&a" (pxo_ret__), "+m" (var)		\
  			    : "r" (pxo_new__)				\
  			    : "memory");				\
  		break;							\
-22
arch/x86/include/asm/system_64.h
···
- #ifndef _ASM_X86_SYSTEM_64_H
- #define _ASM_X86_SYSTEM_64_H
- 
- #include <asm/segment.h>
- #include <asm/cmpxchg.h>
- 
- 
- static inline unsigned long read_cr8(void)
- {
- 	unsigned long cr8;
- 	asm volatile("movq %%cr8,%0" : "=r" (cr8));
- 	return cr8;
- }
- 
- static inline void write_cr8(unsigned long val)
- {
- 	asm volatile("movq %0,%%cr8" :: "r" (val) : "memory");
- }
- 
- #include <linux/irqflags.h>
- 
- #endif /* _ASM_X86_SYSTEM_64_H */
+1 -1
arch/x86/kernel/dumpstack_64.c
···
  	unsigned used = 0;
  	struct thread_info *tinfo;
  	int graph = 0;
+ 	unsigned long dummy;
  	unsigned long bp;
 
  	if (!task)
  		task = current;
 
  	if (!stack) {
- 		unsigned long dummy;
  		stack = &dummy;
  		if (task && task != current)
  			stack = (unsigned long *)task->thread.sp;
+5 -11
arch/x86/xen/p2m.c
···
  	 * As long as the mfn_list has enough entries to completely
  	 * fill a p2m page, pointing into the array is ok. But if
  	 * not the entries beyond the last pfn will be undefined.
- 	 * And guessing that the 'what-ever-there-is' does not take it
- 	 * too kindly when changing it to invalid markers, a new page
- 	 * is allocated, initialized and filled with the valid part.
  	 */
  	if (unlikely(pfn + P2M_PER_PAGE > max_pfn)) {
  		unsigned long p2midx;
- 		unsigned long *p2m = extend_brk(PAGE_SIZE, PAGE_SIZE);
- 		p2m_init(p2m);
 
- 		for (p2midx = 0; pfn + p2midx < max_pfn; p2midx++) {
- 			p2m[p2midx] = mfn_list[pfn + p2midx];
- 		}
- 		p2m_top[topidx][mididx] = p2m;
- 	} else
- 		p2m_top[topidx][mididx] = &mfn_list[pfn];
+ 		p2midx = max_pfn % P2M_PER_PAGE;
+ 		for ( ; p2midx < P2M_PER_PAGE; p2midx++)
+ 			mfn_list[pfn + p2midx] = INVALID_P2M_ENTRY;
+ 	}
+ 	p2m_top[topidx][mididx] = &mfn_list[pfn];
 
  	m2p_override_init();
+7 -1
arch/x86/xen/setup.c
···
  	e820.nr_map = 0;
  	xen_extra_mem_start = mem_end;
  	for (i = 0; i < memmap.nr_entries; i++) {
- 		unsigned long long end = map[i].addr + map[i].size;
+ 		unsigned long long end;
 
+ 		/* Guard against non-page aligned E820 entries. */
+ 		if (map[i].type == E820_RAM)
+ 			map[i].size -= (map[i].size + map[i].addr) % PAGE_SIZE;
+ 
+ 		end = map[i].addr + map[i].size;
  		if (map[i].type == E820_RAM && end > mem_end) {
  			/* RAM off the end - may be partially included */
  			u64 delta = min(map[i].size, end - mem_end);
···
  	boot_cpu_data.hlt_works_ok = 1;
  #endif
  	pm_idle = default_idle;
+ 	boot_option_idle_override = IDLE_HALT;
 
  	fiddle_vdso();
  }
+3
drivers/ata/ahci.c
···
  	{ PCI_VDEVICE(INTEL, 0x1d02), board_ahci }, /* PBG AHCI */
  	{ PCI_VDEVICE(INTEL, 0x1d04), board_ahci }, /* PBG RAID */
  	{ PCI_VDEVICE(INTEL, 0x1d06), board_ahci }, /* PBG RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x2323), board_ahci }, /* DH89xxCC AHCI */
 
  	/* JMicron 360/1/3/5/6, match class to avoid IDE function */
  	{ PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
···
  	{ PCI_VDEVICE(MARVELL, 0x6145), board_ahci_mv },	/* 6145 */
  	{ PCI_VDEVICE(MARVELL, 0x6121), board_ahci_mv },	/* 6121 */
  	{ PCI_DEVICE(0x1b4b, 0x9123),
+ 	  .class = PCI_CLASS_STORAGE_SATA_AHCI,
+ 	  .class_mask = 0xffffff,
  	  .driver_data = board_ahci_yes_fbs },			/* 88se9128 */
 
  	/* Promise */
+1
drivers/ata/libata-core.c
···
  	 * device and controller are SATA.
  	 */
  	{ "PIONEER DVD-RW DVRTD08", "1.00", ATA_HORKAGE_NOSETXFER },
+ 	{ "PIONEER DVD-RW DVR-212D", "1.28", ATA_HORKAGE_NOSETXFER },
 
  	/* End Marker */
  	{ }
+18 -6
drivers/ata/libata-scsi.c
···
  		struct request_queue *q = sdev->request_queue;
  		void *buf;
 
- 		/* set the min alignment and padding */
- 		blk_queue_update_dma_alignment(sdev->request_queue,
- 					       ATA_DMA_PAD_SZ - 1);
+ 		sdev->sector_size = ATA_SECT_SIZE;
+ 
+ 		/* set DMA padding */
  		blk_queue_update_dma_pad(sdev->request_queue,
  					 ATA_DMA_PAD_SZ - 1);
···
  		blk_queue_dma_drain(q, atapi_drain_needed, buf, ATAPI_MAX_DRAIN);
  	} else {
- 		/* ATA devices must be sector aligned */
  		sdev->sector_size = ata_id_logical_sector_size(dev->id);
- 		blk_queue_update_dma_alignment(sdev->request_queue,
- 					       sdev->sector_size - 1);
  		sdev->manage_start_stop = 1;
  	}
+ 
+ 	/*
+ 	 * ata_pio_sectors() expects buffer for each sector to not cross
+ 	 * page boundary.  Enforce it by requiring buffers to be sector
+ 	 * aligned, which works iff sector_size is not larger than
+ 	 * PAGE_SIZE.  ATAPI devices also need the alignment as
+ 	 * IDENTIFY_PACKET is executed as ATA_PROT_PIO.
+ 	 */
+ 	if (sdev->sector_size > PAGE_SIZE)
+ 		ata_dev_printk(dev, KERN_WARNING,
+ 			"sector_size=%u > PAGE_SIZE, PIO may malfunction\n",
+ 			sdev->sector_size);
+ 
+ 	blk_queue_update_dma_alignment(sdev->request_queue,
+ 				       sdev->sector_size - 1);
 
  	if (dev->flags & ATA_DFLAG_AN)
  		set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);
+3 -3
drivers/ata/pata_hpt366.c
···
  #include <linux/libata.h>
 
  #define DRV_NAME	"pata_hpt366"
- #define DRV_VERSION	"0.6.9"
+ #define DRV_VERSION	"0.6.10"
 
  struct hpt_clock {
  	u8	xfer_mode;
···
  	while (list[i] != NULL) {
  		if (!strcmp(list[i], model_num)) {
- 			printk(KERN_WARNING DRV_NAME ": %s is not supported for %s.\n",
- 				modestr, list[i]);
+ 			pr_warning(DRV_NAME ": %s is not supported for %s.\n",
+ 				   modestr, list[i]);
  			return 1;
  		}
  		i++;
+54 -58
drivers/ata/pata_hpt37x.c
···
  #include <linux/libata.h>
 
  #define DRV_NAME	"pata_hpt37x"
- #define DRV_VERSION	"0.6.18"
+ #define DRV_VERSION	"0.6.22"
 
  struct hpt_clock {
  	u8	xfer_speed;
···
  	while (list[i] != NULL) {
  		if (!strcmp(list[i], model_num)) {
- 			printk(KERN_WARNING DRV_NAME ": %s is not supported for %s.\n",
- 				modestr, list[i]);
+ 			pr_warning(DRV_NAME ": %s is not supported for %s.\n",
+ 				   modestr, list[i]);
  			return 1;
  		}
  		i++;
···
  static struct ata_port_operations hpt374_fn1_port_ops = {
  	.inherits	= &hpt372_port_ops,
  	.cable_detect	= hpt374_fn1_cable_detect,
- 	.prereset	= hpt37x_pre_reset,
  };
 
  /**
···
  	.udma_mask = ATA_UDMA6,
  	.port_ops = &hpt302_port_ops
  };
- /* HPT374 - UDMA100, function 1 uses different prereset method */
+ /* HPT374 - UDMA100, function 1 uses different cable_detect method */
  static const struct ata_port_info info_hpt374_fn0 = {
  	.flags = ATA_FLAG_SLAVE_POSS,
  	.pio_mask = ATA_PIO4,
···
  	if (rc)
  		return rc;
 
- 	if (dev->device == PCI_DEVICE_ID_TTI_HPT366) {
+ 	switch (dev->device) {
+ 	case PCI_DEVICE_ID_TTI_HPT366:
  		/* May be a later chip in disguise. Check */
  		/* Older chips are in the HPT366 driver. Ignore them */
  		if (rev < 3)
···
  			chip_table = &hpt372;
  			break;
  		default:
- 			printk(KERN_ERR "pata_hpt37x: Unknown HPT366 subtype, "
+ 			pr_err(DRV_NAME ": Unknown HPT366 subtype, "
  			       "please report (%d).\n", rev);
  			return -ENODEV;
  		}
- 	} else {
- 		switch (dev->device) {
- 		case PCI_DEVICE_ID_TTI_HPT372:
- 			/* 372N if rev >= 2 */
- 			if (rev >= 2)
- 				return -ENODEV;
- 			ppi[0] = &info_hpt372;
- 			chip_table = &hpt372a;
- 			break;
- 		case PCI_DEVICE_ID_TTI_HPT302:
- 			/* 302N if rev > 1 */
- 			if (rev > 1)
- 				return -ENODEV;
- 			ppi[0] = &info_hpt302;
- 			/* Check this */
- 			chip_table = &hpt302;
- 			break;
- 		case PCI_DEVICE_ID_TTI_HPT371:
- 			if (rev > 1)
- 				return -ENODEV;
- 			ppi[0] = &info_hpt302;
- 			chip_table = &hpt371;
- 			/*
- 			 * Single channel device, master is not present
- 			 * but the BIOS (or us for non x86) must mark it
- 			 * absent
- 			 */
- 			pci_read_config_byte(dev, 0x50, &mcr1);
- 			mcr1 &= ~0x04;
- 			pci_write_config_byte(dev, 0x50, mcr1);
- 			break;
- 		case PCI_DEVICE_ID_TTI_HPT374:
- 			chip_table = &hpt374;
- 			if (!(PCI_FUNC(dev->devfn) & 1))
- 				*ppi = &info_hpt374_fn0;
- 			else
- 				*ppi = &info_hpt374_fn1;
- 			break;
- 		default:
- 			printk(KERN_ERR
- 			       "pata_hpt37x: PCI table is bogus, please report (%d).\n",
- 			       dev->device);
- 			return -ENODEV;
- 		}
+ 		break;
+ 	case PCI_DEVICE_ID_TTI_HPT372:
+ 		/* 372N if rev >= 2 */
+ 		if (rev >= 2)
+ 			return -ENODEV;
+ 		ppi[0] = &info_hpt372;
+ 		chip_table = &hpt372a;
+ 		break;
+ 	case PCI_DEVICE_ID_TTI_HPT302:
+ 		/* 302N if rev > 1 */
+ 		if (rev > 1)
+ 			return -ENODEV;
+ 		ppi[0] = &info_hpt302;
+ 		/* Check this */
+ 		chip_table = &hpt302;
+ 		break;
+ 	case PCI_DEVICE_ID_TTI_HPT371:
+ 		if (rev > 1)
+ 			return -ENODEV;
+ 		ppi[0] = &info_hpt302;
+ 		chip_table = &hpt371;
+ 		/*
+ 		 * Single channel device, master is not present but the BIOS
+ 		 * (or us for non x86) must mark it absent
+ 		 */
+ 		pci_read_config_byte(dev, 0x50, &mcr1);
+ 		mcr1 &= ~0x04;
+ 		pci_write_config_byte(dev, 0x50, mcr1);
+ 		break;
+ 	case PCI_DEVICE_ID_TTI_HPT374:
+ 		chip_table = &hpt374;
+ 		if (!(PCI_FUNC(dev->devfn) & 1))
+ 			*ppi = &info_hpt374_fn0;
+ 		else
+ 			*ppi = &info_hpt374_fn1;
+ 		break;
+ 	default:
+ 		pr_err(DRV_NAME ": PCI table is bogus, please report (%d).\n",
+ 		       dev->device);
+ 		return -ENODEV;
  	}
  	/* Ok so this is a chip we support */
···
  	u8 sr;
  	u32 total = 0;
 
- 	printk(KERN_WARNING
- 	       "pata_hpt37x: BIOS has not set timing clocks.\n");
+ 	pr_warning(DRV_NAME ": BIOS has not set timing clocks.\n");
 
  	/* This is the process the HPT371 BIOS is reported to use */
  	for (i = 0; i < 128; i++) {
···
  			(f_high << 16) | f_low | 0x100);
  	}
  	if (adjust == 8) {
- 		printk(KERN_ERR "pata_hpt37x: DPLL did not stabilize!\n");
+ 		pr_err(DRV_NAME ": DPLL did not stabilize!\n");
  		return -ENODEV;
  	}
  	if (dpll == 3)
···
  	else
  		private_data = (void *)hpt37x_timings_50;
 
- 	printk(KERN_INFO "pata_hpt37x: bus clock %dMHz, using %dMHz DPLL.\n",
- 	       MHz[clock_slot], MHz[dpll]);
+ 	pr_info(DRV_NAME ": bus clock %dMHz, using %dMHz DPLL.\n",
+ 		MHz[clock_slot], MHz[dpll]);
  	} else {
  		private_data = (void *)chip_table->clocks[clock_slot];
  		/*
···
  		ppi[0] = &info_hpt370_33;
  	if (clock_slot < 2 && ppi[0] == &info_hpt370a)
  		ppi[0] = &info_hpt370a_33;
- 	printk(KERN_INFO "pata_hpt37x: %s using %dMHz bus clock.\n",
- 	       chip_table->name, MHz[clock_slot]);
+ 
+ 	pr_info(DRV_NAME ": %s using %dMHz bus clock.\n",
+ 		chip_table->name, MHz[clock_slot]);
  	}
 
  	/* Now kick off ATA set up */
+5 -7
drivers/ata/pata_hpt3x2n.c
···
  #include <linux/libata.h>
 
  #define DRV_NAME	"pata_hpt3x2n"
- #define DRV_VERSION	"0.3.13"
+ #define DRV_VERSION	"0.3.14"
 
  enum {
  	HPT_PCI_FAST	= (1 << 31),
···
  	u16 sr;
  	u32 total = 0;
 
- 	printk(KERN_WARNING "pata_hpt3x2n: BIOS clock data not set.\n");
+ 	pr_warning(DRV_NAME ": BIOS clock data not set.\n");
 
  	/* This is the process the HPT371 BIOS is reported to use */
  	for (i = 0; i < 128; i++) {
···
  		ppi[0] = &info_hpt372n;
  		break;
  	default:
- 		printk(KERN_ERR
- 		       "pata_hpt3x2n: PCI table is bogus please report (%d).\n",
+ 		pr_err(DRV_NAME ": PCI table is bogus, please report (%d).\n",
  		       dev->device);
  		return -ENODEV;
  	}
···
  		pci_write_config_dword(dev, 0x5C, (f_high << 16) | f_low);
  	}
  	if (adjust == 8) {
- 		printk(KERN_ERR "pata_hpt3x2n: DPLL did not stabilize!\n");
+ 		pr_err(DRV_NAME ": DPLL did not stabilize!\n");
  		return -ENODEV;
  	}
 
- 	printk(KERN_INFO "pata_hpt37x: bus clock %dMHz, using 66MHz DPLL.\n",
- 	       pci_mhz);
+ 	pr_info(DRV_NAME ": bus clock %dMHz, using 66MHz DPLL.\n", pci_mhz);
 
  	/*
  	 * Set our private data up. We only need a few flags
+1 -1
drivers/ata/pata_mpc52xx.c
··· 610 610 }; 611 611 612 612 static struct ata_port_operations mpc52xx_ata_port_ops = { 613 - .inherits = &ata_sff_port_ops, 613 + .inherits = &ata_bmdma_port_ops, 614 614 .sff_dev_select = mpc52xx_ata_dev_select, 615 615 .set_piomode = mpc52xx_ata_set_piomode, 616 616 .set_dmamode = mpc52xx_ata_set_dmamode,
+1 -1
drivers/atm/idt77105.c
··· 151 151 spin_unlock_irqrestore(&idt77105_priv_lock, flags); 152 152 if (arg == NULL) 153 153 return 0; 154 - return copy_to_user(arg, &PRIV(dev)->stats, 154 + return copy_to_user(arg, &stats, 155 155 sizeof(struct idt77105_stats)) ? -EFAULT : 0; 156 156 } 157 157
+6 -3
drivers/base/power/runtime.c
··· 407 407 goto out; 408 408 } 409 409 410 + /* Maybe the parent is now able to suspend. */ 410 411 if (parent && !parent->power.ignore_children && !dev->power.irq_safe) { 411 - spin_unlock_irq(&dev->power.lock); 412 + spin_unlock(&dev->power.lock); 412 413 413 - pm_request_idle(parent); 414 + spin_lock(&parent->power.lock); 415 + rpm_idle(parent, RPM_ASYNC); 416 + spin_unlock(&parent->power.lock); 414 417 415 - spin_lock_irq(&dev->power.lock); 418 + spin_lock(&dev->power.lock); 416 419 } 417 420 418 421 out:
+21 -56
drivers/bluetooth/ath3k.c
··· 47 47 #define USB_REQ_DFU_DNLOAD 1 48 48 #define BULK_SIZE 4096 49 49 50 - struct ath3k_data { 51 - struct usb_device *udev; 52 - u8 *fw_data; 53 - u32 fw_size; 54 - u32 fw_sent; 55 - }; 56 - 57 - static int ath3k_load_firmware(struct ath3k_data *data, 58 - unsigned char *firmware, 59 - int count) 50 + static int ath3k_load_firmware(struct usb_device *udev, 51 + const struct firmware *firmware) 60 52 { 61 53 u8 *send_buf; 62 54 int err, pipe, len, size, sent = 0; 55 + int count = firmware->size; 63 56 64 - BT_DBG("ath3k %p udev %p", data, data->udev); 57 + BT_DBG("udev %p", udev); 65 58 66 - pipe = usb_sndctrlpipe(data->udev, 0); 67 - 68 - if ((usb_control_msg(data->udev, pipe, 69 - USB_REQ_DFU_DNLOAD, 70 - USB_TYPE_VENDOR, 0, 0, 71 - firmware, 20, USB_CTRL_SET_TIMEOUT)) < 0) { 72 - BT_ERR("Can't change to loading configuration err"); 73 - return -EBUSY; 74 - } 75 - sent += 20; 76 - count -= 20; 59 + pipe = usb_sndctrlpipe(udev, 0); 77 60 78 61 send_buf = kmalloc(BULK_SIZE, GFP_ATOMIC); 79 62 if (!send_buf) { ··· 64 81 return -ENOMEM; 65 82 } 66 83 84 + memcpy(send_buf, firmware->data, 20); 85 + if ((err = usb_control_msg(udev, pipe, 86 + USB_REQ_DFU_DNLOAD, 87 + USB_TYPE_VENDOR, 0, 0, 88 + send_buf, 20, USB_CTRL_SET_TIMEOUT)) < 0) { 89 + BT_ERR("Can't change to loading configuration err"); 90 + goto error; 91 + } 92 + sent += 20; 93 + count -= 20; 94 + 67 95 while (count) { 68 96 size = min_t(uint, count, BULK_SIZE); 69 - pipe = usb_sndbulkpipe(data->udev, 0x02); 70 - memcpy(send_buf, firmware + sent, size); 97 + pipe = usb_sndbulkpipe(udev, 0x02); 98 + memcpy(send_buf, firmware->data + sent, size); 71 99 72 - err = usb_bulk_msg(data->udev, pipe, send_buf, size, 100 + err = usb_bulk_msg(udev, pipe, send_buf, size, 73 101 &len, 3000); 74 102 75 103 if (err || (len != size)) { ··· 106 112 { 107 113 const struct firmware *firmware; 108 114 struct usb_device *udev = interface_to_usbdev(intf); 109 - struct ath3k_data *data; 110 - int size; 111 115 112 116 BT_DBG("intf %p id %p", intf, id); 113 117 114 118 if (intf->cur_altsetting->desc.bInterfaceNumber != 0) 115 119 return -ENODEV; 116 120 117 - data = kzalloc(sizeof(*data), GFP_KERNEL); 118 - if (!data) 119 - return -ENOMEM; 120 - 121 - data->udev = udev; 122 - 123 121 if (request_firmware(&firmware, "ath3k-1.fw", &udev->dev) < 0) { 124 - kfree(data); 125 122 return -EIO; 126 123 } 127 124 128 - size = max_t(uint, firmware->size, 4096); 129 - data->fw_data = kmalloc(size, GFP_KERNEL); 130 - if (!data->fw_data) { 125 + if (ath3k_load_firmware(udev, firmware)) { 131 126 release_firmware(firmware); 132 - kfree(data); 133 - return -ENOMEM; 134 - } 135 - 136 - memcpy(data->fw_data, firmware->data, firmware->size); 137 - data->fw_size = firmware->size; 138 - data->fw_sent = 0; 139 - release_firmware(firmware); 140 - 141 - usb_set_intfdata(intf, data); 142 - if (ath3k_load_firmware(data, data->fw_data, data->fw_size)) { 143 - usb_set_intfdata(intf, NULL); 144 - kfree(data->fw_data); 145 - kfree(data); 146 127 return -EIO; 147 128 } 129 + release_firmware(firmware); 148 130 149 131 return 0; 150 132 } 151 133 152 134 static void ath3k_disconnect(struct usb_interface *intf) 153 135 { 154 - struct ath3k_data *data = usb_get_intfdata(intf); 155 - 156 136 BT_DBG("ath3k_disconnect intf %p", intf); 157 - 158 - kfree(data->fw_data); 159 - kfree(data); 160 137 } 161 138 162 139 static struct usb_driver ath3k_driver = {
+15
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 6310 6310 static bool 6311 6311 apply_dcb_encoder_quirks(struct drm_device *dev, int idx, u32 *conn, u32 *conf) 6312 6312 { 6313 + struct drm_nouveau_private *dev_priv = dev->dev_private; 6314 + struct dcb_table *dcb = &dev_priv->vbios.dcb; 6315 + 6313 6316 /* Dell Precision M6300 6314 6317 * DCB entry 2: 02025312 00000010 6315 6318 * DCB entry 3: 02026312 00000020 ··· 6328 6325 if (nv_match_device(dev, 0x040d, 0x1028, 0x019b)) { 6329 6326 if (*conn == 0x02026312 && *conf == 0x00000020) 6330 6327 return false; 6328 + } 6329 + 6330 + /* GeForce3 Ti 200 6331 + * 6332 + * DCB reports an LVDS output that should be TMDS: 6333 + * DCB entry 1: f2005014 ffffffff 6334 + */ 6335 + if (nv_match_device(dev, 0x0201, 0x1462, 0x8851)) { 6336 + if (*conn == 0xf2005014 && *conf == 0xffffffff) { 6337 + fabricate_dcb_output(dcb, OUTPUT_TMDS, 1, 1, 1); 6338 + return false; 6339 + } 6331 6340 } 6332 6341 6333 6342 return true;
-3
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 848 848 struct nouveau_fence *fence); 849 849 extern const struct ttm_mem_type_manager_func nouveau_vram_manager; 850 850 851 - /* nvc0_vram.c */ 852 - extern const struct ttm_mem_type_manager_func nvc0_vram_manager; 853 - 854 851 /* nouveau_notifier.c */ 855 852 extern int nouveau_notifier_init_channel(struct nouveau_channel *); 856 853 extern void nouveau_notifier_takedown_channel(struct nouveau_channel *);
+1 -1
drivers/gpu/drm/nouveau/nouveau_temp.c
··· 265 265 struct i2c_board_info info[] = { 266 266 { I2C_BOARD_INFO("w83l785ts", 0x2d) }, 267 267 { I2C_BOARD_INFO("w83781d", 0x2d) }, 268 - { I2C_BOARD_INFO("f75375", 0x2e) }, 269 268 { I2C_BOARD_INFO("adt7473", 0x2e) }, 269 + { I2C_BOARD_INFO("f75375", 0x2e) }, 270 270 { I2C_BOARD_INFO("lm99", 0x4c) }, 271 271 { } 272 272 };
+3
drivers/gpu/drm/nouveau/nv50_graph.c
··· 256 256 struct drm_device *dev = chan->dev; 257 257 struct drm_nouveau_private *dev_priv = dev->dev_private; 258 258 struct nouveau_pgraph_engine *pgraph = &dev_priv->engine.graph; 259 + struct nouveau_fifo_engine *pfifo = &dev_priv->engine.fifo; 259 260 int i, hdr = (dev_priv->chipset == 0x50) ? 0x200 : 0x20; 260 261 unsigned long flags; 261 262 ··· 266 265 return; 267 266 268 267 spin_lock_irqsave(&dev_priv->context_switch_lock, flags); 268 + pfifo->reassign(dev, false); 269 269 pgraph->fifo_access(dev, false); 270 270 271 271 if (pgraph->channel(dev) == chan) ··· 277 275 dev_priv->engine.instmem.flush(dev); 278 276 279 277 pgraph->fifo_access(dev, true); 278 + pfifo->reassign(dev, true); 280 279 spin_unlock_irqrestore(&dev_priv->context_switch_lock, flags); 281 280 282 281 nouveau_gpuobj_ref(NULL, &chan->ramin_grctx);
-5
drivers/gpu/drm/nouveau/nv50_vm.c
··· 45 45 } 46 46 47 47 if (phys & 1) { 48 - if (dev_priv->vram_sys_base) { 49 - phys += dev_priv->vram_sys_base; 50 - phys |= 0x30; 51 - } 52 - 53 48 if (coverage <= 32 * 1024 * 1024) 54 49 phys |= 0x60; 55 50 else if (coverage <= 64 * 1024 * 1024)
+21 -2
drivers/gpu/drm/nouveau/nvc0_graph.c
··· 31 31 #include "nvc0_graph.h" 32 32 33 33 static void nvc0_graph_isr(struct drm_device *); 34 + static void nvc0_runk140_isr(struct drm_device *); 34 35 static int nvc0_graph_unload_context_to(struct drm_device *dev, u64 chan); 35 36 36 37 void ··· 282 281 return; 283 282 284 283 nouveau_irq_unregister(dev, 12); 284 + nouveau_irq_unregister(dev, 25); 285 285 286 286 nouveau_gpuobj_ref(NULL, &priv->unk4188b8); 287 287 nouveau_gpuobj_ref(NULL, &priv->unk4188b4); ··· 392 390 } 393 391 394 392 nouveau_irq_register(dev, 12, nvc0_graph_isr); 393 + nouveau_irq_register(dev, 25, nvc0_runk140_isr); 395 394 NVOBJ_CLASS(dev, 0x902d, GR); /* 2D */ 396 395 NVOBJ_CLASS(dev, 0x9039, GR); /* M2MF */ 397 396 NVOBJ_CLASS(dev, 0x9097, GR); /* 3D */ ··· 515 512 nv_wr32(dev, TP_UNIT(gpc, tp, 0x224), 0xc0000000); 516 513 nv_wr32(dev, TP_UNIT(gpc, tp, 0x48c), 0xc0000000); 517 514 nv_wr32(dev, TP_UNIT(gpc, tp, 0x084), 0xc0000000); 518 - nv_wr32(dev, TP_UNIT(gpc, tp, 0xe44), 0x001ffffe); 519 - nv_wr32(dev, TP_UNIT(gpc, tp, 0xe4c), 0x0000000f); 515 + nv_wr32(dev, TP_UNIT(gpc, tp, 0x644), 0x001ffffe); 516 + nv_wr32(dev, TP_UNIT(gpc, tp, 0x64c), 0x0000000f); 520 517 } 521 518 nv_wr32(dev, GPC_UNIT(gpc, 0x2c90), 0xffffffff); 522 519 nv_wr32(dev, GPC_UNIT(gpc, 0x2c94), 0xffffffff); ··· 779 776 } 780 777 781 778 nv_wr32(dev, 0x400500, 0x00010001); 779 + } 780 + 781 + static void 782 + nvc0_runk140_isr(struct drm_device *dev) 783 + { 784 + u32 units = nv_rd32(dev, 0x00017c) & 0x1f; 785 + 786 + while (units) { 787 + u32 unit = ffs(units) - 1; 788 + u32 reg = 0x140000 + unit * 0x2000; 789 + u32 st0 = nv_mask(dev, reg + 0x1020, 0, 0); 790 + u32 st1 = nv_mask(dev, reg + 0x1420, 0, 0); 791 + 792 + NV_INFO(dev, "PRUNK140: %d 0x%08x 0x%08x\n", unit, st0, st1); 793 + units &= ~(1 << unit); 794 + } 782 795 }
+1 -1
drivers/gpu/drm/nouveau/nvc0_grctx.c
··· 1830 1830 1831 1831 for (tp = 0, id = 0; tp < 4; tp++) { 1832 1832 for (gpc = 0; gpc < priv->gpc_nr; gpc++) { 1833 - if (tp <= priv->tp_nr[gpc]) { 1833 + if (tp < priv->tp_nr[gpc]) { 1834 1834 nv_wr32(dev, TP_UNIT(gpc, tp, 0x698), id); 1835 1835 nv_wr32(dev, TP_UNIT(gpc, tp, 0x4e8), id); 1836 1836 nv_wr32(dev, GPC_UNIT(gpc, 0x0c10 + tp * 4), id);
+17
drivers/gpu/drm/radeon/atombios_crtc.c
··· 994 994 struct radeon_bo *rbo; 995 995 uint64_t fb_location; 996 996 uint32_t fb_format, fb_pitch_pixels, tiling_flags; 997 + u32 fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_NONE); 997 998 int r; 998 999 999 1000 /* no fb bound */ ··· 1046 1045 case 16: 1047 1046 fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_16BPP) | 1048 1047 EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB565)); 1048 + #ifdef __BIG_ENDIAN 1049 + fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN16); 1050 + #endif 1049 1051 break; 1050 1052 case 24: 1051 1053 case 32: 1052 1054 fb_format = (EVERGREEN_GRPH_DEPTH(EVERGREEN_GRPH_DEPTH_32BPP) | 1053 1055 EVERGREEN_GRPH_FORMAT(EVERGREEN_GRPH_FORMAT_ARGB8888)); 1056 + #ifdef __BIG_ENDIAN 1057 + fb_swap = EVERGREEN_GRPH_ENDIAN_SWAP(EVERGREEN_GRPH_ENDIAN_8IN32); 1058 + #endif 1054 1059 break; 1055 1060 default: 1056 1061 DRM_ERROR("Unsupported screen depth %d\n", ··· 1101 1094 WREG32(EVERGREEN_GRPH_SECONDARY_SURFACE_ADDRESS + radeon_crtc->crtc_offset, 1102 1095 (u32) fb_location & EVERGREEN_GRPH_SURFACE_ADDRESS_MASK); 1103 1096 WREG32(EVERGREEN_GRPH_CONTROL + radeon_crtc->crtc_offset, fb_format); 1097 + WREG32(EVERGREEN_GRPH_SWAP_CONTROL + radeon_crtc->crtc_offset, fb_swap); 1104 1098 1105 1099 WREG32(EVERGREEN_GRPH_SURFACE_OFFSET_X + radeon_crtc->crtc_offset, 0); 1106 1100 WREG32(EVERGREEN_GRPH_SURFACE_OFFSET_Y + radeon_crtc->crtc_offset, 0); ··· 1158 1150 struct drm_framebuffer *target_fb; 1159 1151 uint64_t fb_location; 1160 1152 uint32_t fb_format, fb_pitch_pixels, tiling_flags; 1153 + u32 fb_swap = R600_D1GRPH_SWAP_ENDIAN_NONE; 1161 1154 int r; 1162 1155 1163 1156 /* no fb bound */ ··· 1212 1203 fb_format = 1213 1204 AVIVO_D1GRPH_CONTROL_DEPTH_16BPP | 1214 1205 AVIVO_D1GRPH_CONTROL_16BPP_RGB565; 1206 + #ifdef __BIG_ENDIAN 1207 + fb_swap = R600_D1GRPH_SWAP_ENDIAN_16BIT; 1208 + #endif 1215 1209 break; 1216 1210 case 24: 1217 1211 case 32: 1218 1212 fb_format = 1219 1213 AVIVO_D1GRPH_CONTROL_DEPTH_32BPP | 1220 1214 AVIVO_D1GRPH_CONTROL_32BPP_ARGB8888; 1215 + #ifdef __BIG_ENDIAN 1216 + fb_swap = R600_D1GRPH_SWAP_ENDIAN_32BIT; 1217 + #endif 1221 1218 break; 1222 1219 default: 1223 1220 DRM_ERROR("Unsupported screen depth %d\n", ··· 1263 1248 WREG32(AVIVO_D1GRPH_SECONDARY_SURFACE_ADDRESS + 1264 1249 radeon_crtc->crtc_offset, (u32) fb_location); 1265 1250 WREG32(AVIVO_D1GRPH_CONTROL + radeon_crtc->crtc_offset, fb_format); 1251 + if (rdev->family >= CHIP_R600) 1252 + WREG32(R600_D1GRPH_SWAP_CONTROL + radeon_crtc->crtc_offset, fb_swap); 1266 1253 1267 1254 WREG32(AVIVO_D1GRPH_SURFACE_OFFSET_X + radeon_crtc->crtc_offset, 0); 1268 1255 WREG32(AVIVO_D1GRPH_SURFACE_OFFSET_Y + radeon_crtc->crtc_offset, 0);
+2 -2
drivers/gpu/drm/radeon/atombios_dp.c
··· 187 187 int dp_mode_valid(u8 dpcd[DP_DPCD_SIZE], int mode_clock) 188 188 { 189 189 int lanes = dp_lanes_for_mode_clock(dpcd, mode_clock); 190 - int bw = dp_lanes_for_mode_clock(dpcd, mode_clock); 190 + int dp_clock = dp_link_clock_for_mode_clock(dpcd, mode_clock); 191 191 192 - if ((lanes == 0) || (bw == 0)) 192 + if ((lanes == 0) || (dp_clock == 0)) 193 193 return MODE_CLOCK_HIGH; 194 194 195 195 return MODE_OK;
+33 -6
drivers/gpu/drm/radeon/evergreen_blit_kms.c
··· 232 232 233 233 } 234 234 235 - /* emits 30 */ 235 + /* emits 34 */ 236 236 static void 237 237 set_default_state(struct radeon_device *rdev) 238 238 { ··· 245 245 int num_hs_threads, num_ls_threads; 246 246 int num_ps_stack_entries, num_vs_stack_entries, num_gs_stack_entries, num_es_stack_entries; 247 247 int num_hs_stack_entries, num_ls_stack_entries; 248 + u64 gpu_addr; 249 + int dwords; 248 250 249 251 switch (rdev->family) { 250 252 case CHIP_CEDAR: ··· 499 497 radeon_ring_write(rdev, 0x00000000); 500 498 radeon_ring_write(rdev, 0x00000000); 501 499 500 + /* emit an IB pointing at default state */ 501 + dwords = ALIGN(rdev->r600_blit.state_len, 0x10); 502 + gpu_addr = rdev->r600_blit.shader_gpu_addr + rdev->r600_blit.state_offset; 503 + radeon_ring_write(rdev, PACKET3(PACKET3_INDIRECT_BUFFER, 2)); 504 + radeon_ring_write(rdev, gpu_addr & 0xFFFFFFFC); 505 + radeon_ring_write(rdev, upper_32_bits(gpu_addr) & 0xFF); 506 + radeon_ring_write(rdev, dwords); 507 + 502 508 } 503 509 504 510 static inline uint32_t i2f(uint32_t input) ··· 537 527 int evergreen_blit_init(struct radeon_device *rdev) 538 528 { 539 529 u32 obj_size; 540 - int r; 530 + int r, dwords; 541 531 void *ptr; 532 + u32 packet2s[16]; 533 + int num_packet2s = 0; 542 534 543 535 /* pin copy shader into vram if already initialized */ 544 536 if (rdev->r600_blit.shader_obj) ··· 548 536 549 537 mutex_init(&rdev->r600_blit.mutex); 550 538 rdev->r600_blit.state_offset = 0; 551 - rdev->r600_blit.state_len = 0; 552 - obj_size = 0; 539 + 540 + rdev->r600_blit.state_len = evergreen_default_size; 541 + 542 + dwords = rdev->r600_blit.state_len; 543 + while (dwords & 0xf) { 544 + packet2s[num_packet2s++] = PACKET2(0); 545 + dwords++; 546 + } 547 + 548 + obj_size = dwords * 4; 549 + obj_size = ALIGN(obj_size, 256); 553 550 554 551 rdev->r600_blit.vs_offset = obj_size; 555 552 obj_size += evergreen_vs_size * 4; ··· 588 567 return r; 589 568 } 590 570 + memcpy_toio(ptr + rdev->r600_blit.state_offset, 571 + evergreen_default_state, rdev->r600_blit.state_len * 4); 572 + 573 + if (num_packet2s) 574 + memcpy_toio(ptr + rdev->r600_blit.state_offset + (rdev->r600_blit.state_len * 4), 575 + packet2s, num_packet2s * 4); 591 576 memcpy(ptr + rdev->r600_blit.vs_offset, evergreen_vs, evergreen_vs_size * 4); 592 577 memcpy(ptr + rdev->r600_blit.ps_offset, evergreen_ps, evergreen_ps_size * 4); 593 578 radeon_bo_kunmap(rdev->r600_blit.shader_obj); ··· 679 652 /* calculate number of loops correctly */ 680 653 ring_size = num_loops * dwords_per_loop; 681 654 /* set default + shaders */ 682 - ring_size += 46; /* shaders + def state */ 655 + ring_size += 50; /* shaders + def state */ 683 656 ring_size += 10; /* fence emit for VB IB */ 684 657 ring_size += 5; /* done copy */ 685 658 ring_size += 10; /* fence emit for done copy */ ··· 687 660 if (r) 688 661 return r; 689 662 690 - set_default_state(rdev); /* 30 */ 663 + set_default_state(rdev); /* 34 */ 691 664 set_shaders(rdev); /* 16 */ 692 665 return 0; 693 666 }
+5 -5
drivers/gpu/drm/radeon/r100.c
··· 1031 1031 WREG32(RADEON_CP_CSQ_MODE, 1032 1032 REG_SET(RADEON_INDIRECT2_START, indirect2_start) | 1033 1033 REG_SET(RADEON_INDIRECT1_START, indirect1_start)); 1034 - WREG32(0x718, 0); 1035 - WREG32(0x744, 0x00004D4D); 1034 + WREG32(RADEON_CP_RB_WPTR_DELAY, 0); 1035 + WREG32(RADEON_CP_CSQ_MODE, 0x00004D4D); 1036 1036 WREG32(RADEON_CP_CSQ_CNTL, RADEON_CSQ_PRIBM_INDBM); 1037 1037 radeon_ring_start(rdev); 1038 1038 r = radeon_ring_test(rdev); ··· 2347 2347 2348 2348 temp = RREG32(RADEON_CONFIG_CNTL); 2349 2349 if (state == false) { 2350 - temp &= ~(1<<8); 2351 - temp |= (1<<9); 2350 + temp &= ~RADEON_CFG_VGA_RAM_EN; 2351 + temp |= RADEON_CFG_VGA_IO_DIS; 2352 2352 } else { 2353 - temp &= ~(1<<9); 2353 + temp &= ~RADEON_CFG_VGA_IO_DIS; 2354 2354 } 2355 2355 WREG32(RADEON_CONFIG_CNTL, temp); 2356 2356 }
+5 -2
drivers/gpu/drm/radeon/r300.c
··· 69 69 mb(); 70 70 } 71 71 72 + #define R300_PTE_WRITEABLE (1 << 2) 73 + #define R300_PTE_READABLE (1 << 3) 74 + 72 75 int rv370_pcie_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) 73 76 { 74 77 void __iomem *ptr = (void *)rdev->gart.table.vram.ptr; ··· 81 78 } 82 79 addr = (lower_32_bits(addr) >> 8) | 83 80 ((upper_32_bits(addr) & 0xff) << 24) | 84 - 0xc; 81 + R300_PTE_WRITEABLE | R300_PTE_READABLE; 85 82 /* on x86 we want this to be CPU endian, on powerpc 86 83 * on powerpc without HW swappers, it'll get swapped on way 87 84 * into VRAM - so no need for cpu_to_le32 on VRAM tables */ ··· 138 135 WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_LO, rdev->mc.vram_start); 139 136 WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_HI, 0); 140 137 /* Clear error */ 141 - WREG32_PCIE(0x18, 0); 138 + WREG32_PCIE(RADEON_PCIE_TX_GART_ERROR, 0); 142 139 tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL); 143 140 tmp |= RADEON_PCIE_TX_GART_EN; 144 141 tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
+1 -1
drivers/gpu/drm/radeon/r420.c
··· 96 96 "programming pipes. Bad things might happen.\n"); 97 97 } 98 98 /* get max number of pipes */ 99 - gb_pipe_select = RREG32(0x402C); 99 + gb_pipe_select = RREG32(R400_GB_PIPE_SELECT); 100 100 num_pipes = ((gb_pipe_select >> 12) & 3) + 1; 101 101 102 102 /* SE chips have 1 pipe */
+2 -2
drivers/gpu/drm/radeon/r520.c
··· 79 79 WREG32(0x4128, 0xFF); 80 80 } 81 81 r420_pipes_init(rdev); 82 - gb_pipe_select = RREG32(0x402C); 83 - tmp = RREG32(0x170C); 82 + gb_pipe_select = RREG32(R400_GB_PIPE_SELECT); 83 + tmp = RREG32(R300_DST_PIPE_CONFIG); 84 84 pipe_select_current = (tmp >> 2) & 3; 85 85 tmp = (1 << pipe_select_current) | 86 86 (((gb_pipe_select >> 8) & 0xF) << 4);
+5 -1
drivers/gpu/drm/radeon/r600_reg.h
··· 81 81 #define R600_MEDIUM_VID_LOWER_GPIO_CNTL 0x720 82 82 #define R600_LOW_VID_LOWER_GPIO_CNTL 0x724 83 83 84 - 84 + #define R600_D1GRPH_SWAP_CONTROL 0x610C 85 + # define R600_D1GRPH_SWAP_ENDIAN_NONE (0 << 0) 86 + # define R600_D1GRPH_SWAP_ENDIAN_16BIT (1 << 0) 87 + # define R600_D1GRPH_SWAP_ENDIAN_32BIT (2 << 0) 88 + # define R600_D1GRPH_SWAP_ENDIAN_64BIT (3 << 0) 85 89 86 90 #define R600_HDP_NONSURFACE_BASE 0x2c04 87 91
+3 -3
drivers/gpu/drm/radeon/radeon_encoders.c
··· 641 641 switch (connector->connector_type) { 642 642 case DRM_MODE_CONNECTOR_DVII: 643 643 case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */ 644 - if (drm_detect_monitor_audio(radeon_connector->edid)) { 644 + if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { 645 645 /* fix me */ 646 646 if (ASIC_IS_DCE4(rdev)) 647 647 return ATOM_ENCODER_MODE_DVI; ··· 655 655 case DRM_MODE_CONNECTOR_DVID: 656 656 case DRM_MODE_CONNECTOR_HDMIA: 657 657 default: 658 - if (drm_detect_monitor_audio(radeon_connector->edid)) { 658 + if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { 659 659 /* fix me */ 660 660 if (ASIC_IS_DCE4(rdev)) 661 661 return ATOM_ENCODER_MODE_DVI; ··· 673 673 if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) || 674 674 (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) 675 675 return ATOM_ENCODER_MODE_DP; 676 - else if (drm_detect_monitor_audio(radeon_connector->edid)) { 676 + else if (drm_detect_monitor_audio(radeon_connector->edid) && radeon_audio) { 677 677 /* fix me */ 678 678 if (ASIC_IS_DCE4(rdev)) 679 679 return ATOM_ENCODER_MODE_DVI;
+2
drivers/gpu/drm/radeon/radeon_kms.c
··· 247 247 struct radeon_device *rdev = dev->dev_private; 248 248 if (rdev->hyperz_filp == file_priv) 249 249 rdev->hyperz_filp = NULL; 250 + if (rdev->cmask_filp == file_priv) 251 + rdev->cmask_filp = NULL; 250 252 } 251 253 252 254 /*
+2
drivers/gpu/drm/radeon/radeon_reg.h
··· 375 375 #define RADEON_CONFIG_APER_SIZE 0x0108 376 376 #define RADEON_CONFIG_BONDS 0x00e8 377 377 #define RADEON_CONFIG_CNTL 0x00e0 378 + # define RADEON_CFG_VGA_RAM_EN (1 << 8) 379 + # define RADEON_CFG_VGA_IO_DIS (1 << 9) 378 380 # define RADEON_CFG_ATI_REV_A11 (0 << 16) 379 381 # define RADEON_CFG_ATI_REV_A12 (1 << 16) 380 382 # define RADEON_CFG_ATI_REV_A13 (2 << 16)
+9 -6
drivers/gpu/drm/radeon/rs400.c
··· 203 203 radeon_gart_table_ram_free(rdev); 204 204 } 205 205 206 + #define RS400_PTE_WRITEABLE (1 << 2) 207 + #define RS400_PTE_READABLE (1 << 3) 208 + 206 209 int rs400_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr) 207 210 { 208 211 uint32_t entry; ··· 216 213 217 214 entry = (lower_32_bits(addr) & PAGE_MASK) | 218 215 ((upper_32_bits(addr) & 0xff) << 4) | 219 - 0xc; 216 + RS400_PTE_WRITEABLE | RS400_PTE_READABLE; 220 217 entry = cpu_to_le32(entry); 221 218 rdev->gart.table.ram.ptr[i] = entry; 222 219 return 0; ··· 229 226 230 227 for (i = 0; i < rdev->usec_timeout; i++) { 231 228 /* read MC_STATUS */ 232 - tmp = RREG32(0x0150); 233 - if (tmp & (1 << 2)) { 229 + tmp = RREG32(RADEON_MC_STATUS); 230 + if (tmp & RADEON_MC_IDLE) { 234 231 return 0; 235 232 } 236 233 DRM_UDELAY(1); ··· 244 241 r420_pipes_init(rdev); 245 242 if (rs400_mc_wait_for_idle(rdev)) { 246 243 printk(KERN_WARNING "rs400: Failed to wait MC idle while " 247 - "programming pipes. Bad things might happen. %08x\n", RREG32(0x150)); 244 + "programming pipes. Bad things might happen. %08x\n", RREG32(RADEON_MC_STATUS)); 248 245 } 249 246 } 250 247 ··· 303 300 seq_printf(m, "MCCFG_AGP_BASE_2 0x%08x\n", tmp); 304 301 tmp = RREG32_MC(RS690_MCCFG_AGP_LOCATION); 305 302 seq_printf(m, "MCCFG_AGP_LOCATION 0x%08x\n", tmp); 306 - tmp = RREG32_MC(0x100); 303 + tmp = RREG32_MC(RS690_MCCFG_FB_LOCATION); 307 304 seq_printf(m, "MCCFG_FB_LOCATION 0x%08x\n", tmp); 308 - tmp = RREG32(0x134); 305 + tmp = RREG32(RS690_HDP_FB_LOCATION); 309 306 seq_printf(m, "HDP_FB_LOCATION 0x%08x\n", tmp); 310 307 } else { 311 308 tmp = RREG32(RADEON_AGP_BASE);
+5 -5
drivers/gpu/drm/radeon/rv515.c
··· 69 69 ISYNC_CPSCRATCH_IDLEGUI); 70 70 radeon_ring_write(rdev, PACKET0(WAIT_UNTIL, 0)); 71 71 radeon_ring_write(rdev, WAIT_2D_IDLECLEAN | WAIT_3D_IDLECLEAN); 72 - radeon_ring_write(rdev, PACKET0(0x170C, 0)); 73 - radeon_ring_write(rdev, 1 << 31); 72 + radeon_ring_write(rdev, PACKET0(R300_DST_PIPE_CONFIG, 0)); 73 + radeon_ring_write(rdev, R300_PIPE_AUTO_CONFIG); 74 74 radeon_ring_write(rdev, PACKET0(GB_SELECT, 0)); 75 75 radeon_ring_write(rdev, 0); 76 76 radeon_ring_write(rdev, PACKET0(GB_ENABLE, 0)); 77 77 radeon_ring_write(rdev, 0); 78 - radeon_ring_write(rdev, PACKET0(0x42C8, 0)); 78 + radeon_ring_write(rdev, PACKET0(R500_SU_REG_DEST, 0)); 79 79 radeon_ring_write(rdev, (1 << rdev->num_gb_pipes) - 1); 80 80 radeon_ring_write(rdev, PACKET0(VAP_INDEX_OFFSET, 0)); 81 81 radeon_ring_write(rdev, 0); ··· 153 153 } 154 154 rv515_vga_render_disable(rdev); 155 155 r420_pipes_init(rdev); 156 - gb_pipe_select = RREG32(0x402C); 157 - tmp = RREG32(0x170C); 156 + gb_pipe_select = RREG32(R400_GB_PIPE_SELECT); 157 + tmp = RREG32(R300_DST_PIPE_CONFIG); 158 158 pipe_select_current = (tmp >> 2) & 3; 159 159 tmp = (1 << pipe_select_current) | 160 160 (((gb_pipe_select >> 8) & 0xF) << 4);
+1
drivers/hwmon/applesmc.c
··· 1072 1072 node->sda.dev_attr.show = grp->show; 1073 1073 node->sda.dev_attr.store = grp->store; 1074 1074 attr = &node->sda.dev_attr.attr; 1075 + sysfs_attr_init(attr); 1075 1076 attr->name = node->name; 1076 1077 attr->mode = S_IRUGO | (grp->store ? S_IWUSR : 0); 1077 1078 ret = sysfs_create_file(&pdev->dev.kobj, attr);
+22 -1
drivers/hwmon/asus_atk0110.c
··· 13 13 #include <linux/list.h> 14 14 #include <linux/module.h> 15 15 #include <linux/slab.h> 16 + #include <linux/dmi.h> 16 17 17 18 #include <acpi/acpi.h> 18 19 #include <acpi/acpixf.h> ··· 22 21 23 22 24 23 #define ATK_HID "ATK0110" 24 + 25 + static bool new_if; 26 + module_param(new_if, bool, 0); 27 + MODULE_PARM_DESC(new_if, "Override detection heuristic and force the use of the new ATK0110 interface"); 28 + 29 + static const struct dmi_system_id __initconst atk_force_new_if[] = { 30 + { 31 + /* Old interface has broken MCH temp monitoring */ 32 + .ident = "Asus Sabertooth X58", 33 + .matches = { 34 + DMI_MATCH(DMI_BOARD_NAME, "SABERTOOTH X58") 35 + } 36 + }, 37 + { } 38 + }; 25 39 26 40 /* Minimum time between readings, enforced in order to avoid 27 41 * hogging the CPU. ··· 1318 1302 * analysis of multiple DSDTs indicates that when both interfaces 1319 1303 * are present the new one (GGRP/GITM) is not functional. 1320 1304 */ 1321 - if (data->rtmp_handle && data->rvlt_handle && data->rfan_handle) 1305 + if (new_if) 1306 + dev_info(dev, "Overriding interface detection\n"); 1307 + if (data->rtmp_handle && data->rvlt_handle && data->rfan_handle && !new_if) 1322 1308 data->old_interface = true; 1323 1309 else if (data->enumerate_handle && data->read_handle && 1324 1310 data->write_handle) ··· 1437 1419 pr_err("Resources not safely usable due to acpi_enforce_resources kernel parameter\n"); 1438 1420 return -EBUSY; 1439 1421 } 1422 + 1423 + if (dmi_check_system(atk_force_new_if)) 1424 + new_if = true; 1440 1425 1441 1426 ret = acpi_bus_register_driver(&atk_driver); 1442 1427 if (ret)
+1 -1
drivers/hwmon/lis3lv02d.c
··· 957 957 958 958 /* bail if we did not get an IRQ from the bus layer */ 959 959 if (!dev->irq) { 960 - pr_err("No IRQ. Disabling /dev/freefall\n"); 960 + pr_debug("No IRQ. Disabling /dev/freefall\n"); 961 961 goto out; 962 962 } 963 963
+3 -3
drivers/input/keyboard/tegra-kbc.c
··· 86 86 KEY(0, 5, KEY_Z), 87 87 KEY(0, 7, KEY_FN), 88 88 89 - KEY(1, 7, KEY_MENU), 89 + KEY(1, 7, KEY_LEFTMETA), 90 90 91 91 KEY(2, 6, KEY_RIGHTALT), 92 92 KEY(2, 7, KEY_LEFTALT), ··· 355 355 for (i = 0; i < KBC_MAX_GPIO; i++) { 356 356 u32 r_shft = 5 * (i % 6); 357 357 u32 c_shft = 4 * (i % 8); 358 - u32 r_mask = 0x1f << r_shift; 359 - u32 c_mask = 0x0f << c_shift; 358 + u32 r_mask = 0x1f << r_shft; 359 + u32 c_mask = 0x0f << c_shft; 360 360 u32 r_offs = (i / 6) * 4 + KBC_ROW_CFG0_0; 361 361 u32 c_offs = (i / 8) * 4 + KBC_COL_CFG0_0; 362 362 u32 row_cfg = readl(kbc->mmio + r_offs);
+24 -8
drivers/input/mouse/synaptics.c
··· 755 755 { 756 756 struct synaptics_data *priv = psmouse->private; 757 757 struct synaptics_data old_priv = *priv; 758 + int retry = 0; 759 + int error; 758 760 759 - psmouse_reset(psmouse); 761 + do { 762 + psmouse_reset(psmouse); 763 + error = synaptics_detect(psmouse, 0); 764 + } while (error && ++retry < 3); 760 765 761 - if (synaptics_detect(psmouse, 0)) 766 + if (error) 762 767 return -1; 768 + 769 + if (retry > 1) 770 + printk(KERN_DEBUG "Synaptics reconnected after %d tries\n", 771 + retry); 763 772 764 773 if (synaptics_query_hardware(psmouse)) { 765 774 printk(KERN_ERR "Unable to query Synaptics hardware.\n"); 766 775 return -1; 767 776 } 768 - 769 - if (old_priv.identity != priv->identity || 770 - old_priv.model_id != priv->model_id || 771 - old_priv.capabilities != priv->capabilities || 772 - old_priv.ext_cap != priv->ext_cap) 773 - return -1; 774 777 775 778 if (synaptics_set_absolute_mode(psmouse)) { 776 779 printk(KERN_ERR "Unable to initialize Synaptics hardware.\n"); ··· 782 779 783 780 if (synaptics_set_advanced_gesture_mode(psmouse)) { 784 781 printk(KERN_ERR "Advanced gesture mode reconnect failed.\n"); 782 + return -1; 783 + } 784 + 785 + if (old_priv.identity != priv->identity || 786 + old_priv.model_id != priv->model_id || 787 + old_priv.capabilities != priv->capabilities || 788 + old_priv.ext_cap != priv->ext_cap) { 789 + printk(KERN_ERR "Synaptics hardware appears to be different: " 790 + "id(%ld-%ld), model(%ld-%ld), caps(%lx-%lx), ext(%lx-%lx).\n", 791 + old_priv.identity, priv->identity, 792 + old_priv.model_id, priv->model_id, 793 + old_priv.capabilities, priv->capabilities, 794 + old_priv.ext_cap, priv->ext_cap); 785 795 return -1; 786 796 } 787 797
+17 -11
drivers/media/rc/rc-main.c
··· 458 458 index = ir_lookup_by_scancode(rc_map, scancode); 459 459 } 460 460 461 - if (index >= rc_map->len) { 462 - if (!(ke->flags & INPUT_KEYMAP_BY_INDEX)) 463 - IR_dprintk(1, "unknown key for scancode 0x%04x\n", 464 - scancode); 461 + if (index < rc_map->len) { 462 + entry = &rc_map->scan[index]; 463 + 464 + ke->index = index; 465 + ke->keycode = entry->keycode; 466 + ke->len = sizeof(entry->scancode); 467 + memcpy(ke->scancode, &entry->scancode, sizeof(entry->scancode)); 468 + 469 + } else if (!(ke->flags & INPUT_KEYMAP_BY_INDEX)) { 470 + /* 471 + * We do not really know the valid range of scancodes 472 + * so let's respond with KEY_RESERVED to anything we 473 + * do not have mapping for [yet]. 474 + */ 475 + ke->index = index; 476 + ke->keycode = KEY_RESERVED; 477 + } else { 465 478 retval = -EINVAL; 466 479 goto out; 467 480 } 468 - 469 - entry = &rc_map->scan[index]; 470 - 471 - ke->index = index; 472 - ke->keycode = entry->keycode; 473 - ke->len = sizeof(entry->scancode); 474 - memcpy(ke->scancode, &entry->scancode, sizeof(entry->scancode)); 475 481 476 482 retval = 0; 477 483
+1 -1
drivers/mmc/host/bfin_sdh.c
··· 462 462 goto out; 463 463 } 464 464 465 - mmc = mmc_alloc_host(sizeof(*mmc), &pdev->dev); 465 + mmc = mmc_alloc_host(sizeof(struct sdh_host), &pdev->dev); 466 466 if (!mmc) { 467 467 ret = -ENOMEM; 468 468 goto out;
+3 -2
drivers/mmc/host/jz4740_mmc.c
··· 14 14 */ 15 15 16 16 #include <linux/mmc/host.h> 17 + #include <linux/err.h> 17 18 #include <linux/io.h> 18 19 #include <linux/irq.h> 19 20 #include <linux/interrupt.h> ··· 828 827 } 829 828 830 829 host->clk = clk_get(&pdev->dev, "mmc"); 831 - if (!host->clk) { 832 - ret = -ENOENT; 830 + if (IS_ERR(host->clk)) { 831 + ret = PTR_ERR(host->clk); 833 832 dev_err(&pdev->dev, "Failed to get mmc clock\n"); 834 833 goto err_free_host; 835 834 }
+11 -10
drivers/mmc/host/mmci.c
··· 14 14 #include <linux/ioport.h> 15 15 #include <linux/device.h> 16 16 #include <linux/interrupt.h> 17 + #include <linux/kernel.h> 17 18 #include <linux/delay.h> 18 19 #include <linux/err.h> 19 20 #include <linux/highmem.h> ··· 284 283 u32 remain, success; 285 284 286 285 /* Calculate how far we are into the transfer */ 287 - remain = readl(host->base + MMCIDATACNT) << 2; 286 + remain = readl(host->base + MMCIDATACNT); 288 287 success = data->blksz * data->blocks - remain; 289 288 290 289 dev_dbg(mmc_dev(host->mmc), "MCI ERROR IRQ (status %08x)\n", status); 291 290 if (status & MCI_DATACRCFAIL) { 292 291 /* Last block was not successful */ 293 - host->data_xfered = ((success / data->blksz) - 1 * data->blksz); 292 + host->data_xfered = round_down(success - 1, data->blksz); 294 293 data->error = -EILSEQ; 295 294 } else if (status & MCI_DATATIMEOUT) { 296 - host->data_xfered = success; 295 + host->data_xfered = round_down(success, data->blksz); 297 296 data->error = -ETIMEDOUT; 298 297 } else if (status & (MCI_TXUNDERRUN|MCI_RXOVERRUN)) { 299 - host->data_xfered = success; 298 + host->data_xfered = round_down(success, data->blksz); 300 299 data->error = -EIO; 301 300 } 302 301 ··· 320 319 if (status & MCI_DATABLOCKEND) 321 320 dev_err(mmc_dev(host->mmc), "stray MCI_DATABLOCKEND interrupt\n"); 322 321 323 - if (status & MCI_DATAEND) { 322 + if (status & MCI_DATAEND || data->error) { 324 323 mmci_stop_data(host); 325 324 326 325 if (!data->error) ··· 343 342 344 343 host->cmd = NULL; 345 344 346 - cmd->resp[0] = readl(base + MMCIRESPONSE0); 347 - cmd->resp[1] = readl(base + MMCIRESPONSE1); 348 - cmd->resp[2] = readl(base + MMCIRESPONSE2); 349 - cmd->resp[3] = readl(base + MMCIRESPONSE3); 350 - 351 345 if (status & MCI_CMDTIMEOUT) { 352 346 cmd->error = -ETIMEDOUT; 353 347 } else if (status & MCI_CMDCRCFAIL && cmd->flags & MMC_RSP_CRC) { 354 348 cmd->error = -EILSEQ; 349 + } else { 350 + cmd->resp[0] = readl(base + MMCIRESPONSE0); 351 + cmd->resp[1] = readl(base + 
MMCIRESPONSE1); 352 + cmd->resp[2] = readl(base + MMCIRESPONSE2); 353 + cmd->resp[3] = readl(base + MMCIRESPONSE3); 355 354 } 356 355 357 356 if (!cmd->data || cmd->error) {
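The substance of the first mmci.c hunk is replacing the open-coded `((success / data->blksz) - 1 * data->blksz)` — which, because multiplication binds tighter, computed a block *count* minus a block *size* — with `round_down()`. A standalone sketch of the corrected accounting; it uses a generic modulo-based round-down in place of the kernel macro (which requires a power-of-two divisor), and the function names are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Generic stand-in for the kernel's round_down() macro; the macro
 * itself requires a power-of-two divisor, this version does not. */
static uint32_t round_down_u32(uint32_t x, uint32_t multiple)
{
	return x - (x % multiple);
}

/* Bytes to report after a CRC error: the last block is known bad,
 * so round down past it (success - 1 lands inside that block). */
static uint32_t xfered_after_crc(uint32_t success, uint32_t blksz)
{
	return round_down_u32(success - 1, blksz);
}

/* Bytes to report after a timeout or FIFO under/overrun: count only
 * the whole blocks that completed. */
static uint32_t xfered_after_timeout(uint32_t success, uint32_t blksz)
{
	return round_down_u32(success, blksz);
}
```

With 512-byte blocks, a transfer that failed with a CRC error after 1536 successful bytes reports 1024 bytes (the third block is discarded), while a timeout after the same count keeps all three whole blocks.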
+36
drivers/mmc/host/sdhci-s3c.c
··· 277 277 host->clock = clock; 278 278 } 279 279 280 + /** 281 + * sdhci_s3c_platform_8bit_width - support 8bit buswidth 282 + * @host: The SDHCI host being queried 283 + * @width: MMC_BUS_WIDTH_ macro for the bus width being requested 284 + * 285 + * We have 8-bit width support but is not a v3 controller. 286 + * So we add platform_8bit_width() and support 8bit width. 287 + */ 288 + static int sdhci_s3c_platform_8bit_width(struct sdhci_host *host, int width) 289 + { 290 + u8 ctrl; 291 + 292 + ctrl = sdhci_readb(host, SDHCI_HOST_CONTROL); 293 + 294 + switch (width) { 295 + case MMC_BUS_WIDTH_8: 296 + ctrl |= SDHCI_CTRL_8BITBUS; 297 + ctrl &= ~SDHCI_CTRL_4BITBUS; 298 + break; 299 + case MMC_BUS_WIDTH_4: 300 + ctrl |= SDHCI_CTRL_4BITBUS; 301 + ctrl &= ~SDHCI_CTRL_8BITBUS; 302 + break; 303 + default: 304 + break; 305 + } 306 + 307 + sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL); 308 + 309 + return 0; 310 + } 311 + 280 312 static struct sdhci_ops sdhci_s3c_ops = { 281 313 .get_max_clock = sdhci_s3c_get_max_clk, 282 314 .set_clock = sdhci_s3c_set_clock, 283 315 .get_min_clock = sdhci_s3c_get_min_clock, 316 + .platform_8bit_width = sdhci_s3c_platform_8bit_width, 284 317 }; 285 318 286 319 static void sdhci_s3c_notify_change(struct platform_device *dev, int state) ··· 505 472 506 473 if (pdata->cd_type == S3C_SDHCI_CD_PERMANENT) 507 474 host->mmc->caps = MMC_CAP_NONREMOVABLE; 475 + 476 + if (pdata->host_caps) 477 + host->mmc->caps |= pdata->host_caps; 508 478 509 479 host->quirks |= (SDHCI_QUIRK_32BIT_DMA_ADDR | 510 480 SDHCI_QUIRK_32BIT_DMA_SIZE);
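The new `platform_8bit_width` callback boils down to a mutually-exclusive bit toggle in the host-control register: setting one bus-width bit always clears the other. A userspace sketch of that logic — the two bit values mirror what `SDHCI_CTRL_4BITBUS`/`SDHCI_CTRL_8BITBUS` are commonly defined as, but are assumptions here, not quoted from `sdhci.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed register bits, standing in for SDHCI_CTRL_4BITBUS and
 * SDHCI_CTRL_8BITBUS from drivers/mmc/host/sdhci.h. */
#define CTRL_4BITBUS 0x02
#define CTRL_8BITBUS 0x20

/* Mutually-exclusive width selection, as in the patch: exactly one
 * of the two bus-width bits may be set at a time. */
static uint8_t set_bus_width(uint8_t ctrl, int width8)
{
	if (width8) {
		ctrl |= CTRL_8BITBUS;
		ctrl &= ~CTRL_4BITBUS;
	} else {
		ctrl |= CTRL_4BITBUS;
		ctrl &= ~CTRL_8BITBUS;
	}
	return ctrl;
}
```

The read-modify-write shape matters: other bits in the host-control byte must survive the width change, which is why the real callback reads the register first rather than writing a fresh value.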
-1
drivers/mmc/host/ushc.c
··· 19 19 #include <linux/module.h> 20 20 #include <linux/usb.h> 21 21 #include <linux/kernel.h> 22 - #include <linux/usb.h> 23 22 #include <linux/slab.h> 24 23 #include <linux/dma-mapping.h> 25 24 #include <linux/mmc/host.h>
+1 -27
drivers/mtd/ubi/build.c
··· 672 672 ubi->nor_flash = 1; 673 673 } 674 674 675 - /* 676 - * Set UBI min. I/O size (@ubi->min_io_size). We use @mtd->writebufsize 677 - * for these purposes, not @mtd->writesize. At the moment this does not 678 - * matter for NAND, because currently @mtd->writebufsize is equivalent to 679 - * @mtd->writesize for all NANDs. However, some CFI NOR flashes may 680 - * have @mtd->writebufsize which is multiple of @mtd->writesize. 681 - * 682 - * The reason we use @mtd->writebufsize for @ubi->min_io_size is that 683 - * UBI and UBIFS recovery algorithms rely on the fact that if there was 684 - * an unclean power cut, then we can find offset of the last corrupted 685 - * node, align the offset to @ubi->min_io_size, read the rest of the 686 - * eraseblock starting from this offset, and check whether there are 687 - * only 0xFF bytes. If yes, then we are probably dealing with a 688 - * corruption caused by a power cut, if not, then this is probably some 689 - * severe corruption. 690 - * 691 - * Thus, we have to use the maximum write unit size of the flash, which 692 - * is @mtd->writebufsize, because @mtd->writesize is the minimum write 693 - * size, not the maximum. 694 - */ 695 - if (ubi->mtd->type == MTD_NANDFLASH) 696 - ubi_assert(ubi->mtd->writebufsize == ubi->mtd->writesize); 697 - else if (ubi->mtd->type == MTD_NORFLASH) 698 - ubi_assert(ubi->mtd->writebufsize % ubi->mtd->writesize == 0); 699 - 700 - ubi->min_io_size = ubi->mtd->writebufsize; 701 - 675 + ubi->min_io_size = ubi->mtd->writesize; 702 676 ubi->hdrs_min_io_size = ubi->mtd->writesize >> ubi->mtd->subpage_sft; 703 677 704 678 /*
+13 -8
drivers/net/bnx2.c
··· 7553 7553 !(data & ETH_FLAG_RXVLAN)) 7554 7554 return -EINVAL; 7555 7555 7556 + /* TSO with VLAN tag won't work with current firmware */ 7557 + if (!(data & ETH_FLAG_TXVLAN)) 7558 + return -EINVAL; 7559 + 7556 7560 rc = ethtool_op_set_flags(dev, data, ETH_FLAG_RXHASH | ETH_FLAG_RXVLAN | 7557 7561 ETH_FLAG_TXVLAN); 7558 7562 if (rc) ··· 7966 7962 7967 7963 /* AER (Advanced Error Reporting) hooks */ 7968 7964 err = pci_enable_pcie_error_reporting(pdev); 7969 - if (err) { 7970 - dev_err(&pdev->dev, "pci_enable_pcie_error_reporting " 7971 - "failed 0x%x\n", err); 7972 - /* non-fatal, continue */ 7973 - } 7965 + if (!err) 7966 + bp->flags |= BNX2_FLAG_AER_ENABLED; 7974 7967 7975 7968 } else { 7976 7969 bp->pcix_cap = pci_find_capability(pdev, PCI_CAP_ID_PCIX); ··· 8230 8229 return 0; 8231 8230 8232 8231 err_out_unmap: 8233 - if (bp->flags & BNX2_FLAG_PCIE) 8232 + if (bp->flags & BNX2_FLAG_AER_ENABLED) { 8234 8233 pci_disable_pcie_error_reporting(pdev); 8234 + bp->flags &= ~BNX2_FLAG_AER_ENABLED; 8235 + } 8235 8236 8236 8237 if (bp->regview) { 8237 8238 iounmap(bp->regview); ··· 8421 8418 8422 8419 kfree(bp->temp_stats_blk); 8423 8420 8424 - if (bp->flags & BNX2_FLAG_PCIE) 8421 + if (bp->flags & BNX2_FLAG_AER_ENABLED) { 8425 8422 pci_disable_pcie_error_reporting(pdev); 8423 + bp->flags &= ~BNX2_FLAG_AER_ENABLED; 8424 + } 8426 8425 8427 8426 free_netdev(dev); 8428 8427 ··· 8540 8535 } 8541 8536 rtnl_unlock(); 8542 8537 8543 - if (!(bp->flags & BNX2_FLAG_PCIE)) 8538 + if (!(bp->flags & BNX2_FLAG_AER_ENABLED)) 8544 8539 return result; 8545 8540 8546 8541 err = pci_cleanup_aer_uncorrect_error_status(pdev);
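The bnx2 hunks establish a pairing: the new `BNX2_FLAG_AER_ENABLED` flag is set only when `pci_enable_pcie_error_reporting()` succeeded, and every teardown path disables AER only if that flag is set (clearing it as it goes), instead of keying off the unrelated `BNX2_FLAG_PCIE` capability flag. A minimal sketch of that guard logic, with the flag value taken from the bnx2.h hunk and the helper names purely illustrative:

```c
#include <assert.h>

#define BNX2_FLAG_AER_ENABLED 0x00004000	/* value from the bnx2.h hunk */

/* Record the feature as enabled only when the enable call succeeded
 * (enable_err == 0), mirroring the probe-path change. */
static unsigned int mark_aer_enabled(unsigned int flags, int enable_err)
{
	return enable_err ? flags : (flags | BNX2_FLAG_AER_ENABLED);
}

/* Teardown paths disable AER only when this returns nonzero, so
 * disable is never called for a device where enable failed. */
static int should_disable_aer(unsigned int flags)
{
	return !!(flags & BNX2_FLAG_AER_ENABLED);
}
```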
+1
drivers/net/bnx2.h
··· 6741 6741 #define BNX2_FLAG_JUMBO_BROKEN 0x00000800 6742 6742 #define BNX2_FLAG_CAN_KEEP_VLAN 0x00001000 6743 6743 #define BNX2_FLAG_BROKEN_STATS 0x00002000 6744 + #define BNX2_FLAG_AER_ENABLED 0x00004000 6744 6745 6745 6746 struct bnx2_napi bnx2_napi[BNX2_MAX_MSIX_VEC]; 6746 6747
+4
drivers/net/bonding/bond_3ad.c
··· 2470 2470 if (!(dev->flags & IFF_MASTER)) 2471 2471 goto out; 2472 2472 2473 + skb = skb_share_check(skb, GFP_ATOMIC); 2474 + if (!skb) 2475 + goto out; 2476 + 2473 2477 if (!pskb_may_pull(skb, sizeof(struct lacpdu))) 2474 2478 goto out; 2475 2479
+4
drivers/net/bonding/bond_alb.c
··· 326 326 goto out; 327 327 } 328 328 329 + skb = skb_share_check(skb, GFP_ATOMIC); 330 + if (!skb) 331 + goto out; 332 + 329 333 if (!pskb_may_pull(skb, arp_hdr_len(bond_dev))) 330 334 goto out; 331 335
+4
drivers/net/bonding/bond_main.c
··· 2733 2733 if (!slave || !slave_do_arp_validate(bond, slave)) 2734 2734 goto out_unlock; 2735 2735 2736 + skb = skb_share_check(skb, GFP_ATOMIC); 2737 + if (!skb) 2738 + goto out_unlock; 2739 + 2736 2740 if (!pskb_may_pull(skb, arp_hdr_len(dev))) 2737 2741 goto out_unlock; 2738 2742
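All three bonding receive paths gain the same guard: `skb_share_check()` before `pskb_may_pull()`, so a buffer shared with other users is cloned before the handler modifies it, and an allocation failure makes the handler bail out early. A userspace model of that idiom under assumed types (`struct buf` and the helpers stand in for the skb machinery, they are not kernel API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy buffer with a user count, modeling a shared skb. */
struct buf {
	int users;
	char data[64];
};

/* Model of skb_share_check(): sole owner passes through, a shared
 * buffer is replaced by a private copy, NULL signals OOM. */
static struct buf *share_check(struct buf *b)
{
	struct buf *clone;

	if (b->users == 1)
		return b;		/* sole owner: safe to modify */
	clone = malloc(sizeof(*clone));
	if (!clone)
		return NULL;
	memcpy(clone->data, b->data, sizeof(clone->data));
	clone->users = 1;
	b->users--;			/* drop our ref on the shared buffer */
	return clone;
}

/* Scenario: a shared buffer is cloned; the original keeps one user. */
static int shared_gets_cloned(void)
{
	struct buf shared = { .users = 2 };
	struct buf *priv = share_check(&shared);
	int ok = priv && priv != &shared && shared.users == 1;

	if (priv && priv != &shared)
		free(priv);
	return ok;
}

/* Scenario: a sole-owner buffer is returned unchanged. */
static int sole_owner_kept(void)
{
	struct buf b = { .users = 1 };

	return share_check(&b) == &b;
}
```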
+2
drivers/net/can/Kconfig
··· 117 117 118 118 source "drivers/net/can/usb/Kconfig" 119 119 120 + source "drivers/net/can/softing/Kconfig" 121 + 120 122 config CAN_DEBUG_DEVICES 121 123 bool "CAN devices debugging messages" 122 124 depends on CAN
+1
drivers/net/can/Makefile
··· 9 9 can-dev-y := dev.o 10 10 11 11 obj-y += usb/ 12 + obj-y += softing/ 12 13 13 14 obj-$(CONFIG_CAN_SJA1000) += sja1000/ 14 15 obj-$(CONFIG_CAN_MSCAN) += mscan/
+112 -26
drivers/net/can/at91_can.c
··· 2 2 * at91_can.c - CAN network driver for AT91 SoC CAN controller 3 3 * 4 4 * (C) 2007 by Hans J. Koch <hjk@hansjkoch.de> 5 - * (C) 2008, 2009, 2010 by Marc Kleine-Budde <kernel@pengutronix.de> 5 + * (C) 2008, 2009, 2010, 2011 by Marc Kleine-Budde <kernel@pengutronix.de> 6 6 * 7 7 * This software may be distributed under the terms of the GNU General 8 8 * Public License ("GPL") version 2 as distributed in the 'COPYING' ··· 30 30 #include <linux/module.h> 31 31 #include <linux/netdevice.h> 32 32 #include <linux/platform_device.h> 33 + #include <linux/rtnetlink.h> 33 34 #include <linux/skbuff.h> 34 35 #include <linux/spinlock.h> 35 36 #include <linux/string.h> ··· 41 40 42 41 #include <mach/board.h> 43 42 44 - #define AT91_NAPI_WEIGHT 12 43 + #define AT91_NAPI_WEIGHT 11 45 44 46 45 /* 47 46 * RX/TX Mailbox split 48 47 * don't dare to touch 49 48 */ 50 - #define AT91_MB_RX_NUM 12 49 + #define AT91_MB_RX_NUM 11 51 50 #define AT91_MB_TX_SHIFT 2 52 51 53 - #define AT91_MB_RX_FIRST 0 52 + #define AT91_MB_RX_FIRST 1 54 53 #define AT91_MB_RX_LAST (AT91_MB_RX_FIRST + AT91_MB_RX_NUM - 1) 55 54 56 55 #define AT91_MB_RX_MASK(i) ((1 << (i)) - 1) 57 56 #define AT91_MB_RX_SPLIT 8 58 57 #define AT91_MB_RX_LOW_LAST (AT91_MB_RX_SPLIT - 1) 59 - #define AT91_MB_RX_LOW_MASK (AT91_MB_RX_MASK(AT91_MB_RX_SPLIT)) 58 + #define AT91_MB_RX_LOW_MASK (AT91_MB_RX_MASK(AT91_MB_RX_SPLIT) & \ 59 + ~AT91_MB_RX_MASK(AT91_MB_RX_FIRST)) 60 60 61 61 #define AT91_MB_TX_NUM (1 << AT91_MB_TX_SHIFT) 62 62 #define AT91_MB_TX_FIRST (AT91_MB_RX_LAST + 1) ··· 170 168 171 169 struct clk *clk; 172 170 struct at91_can_data *pdata; 171 + 172 + canid_t mb0_id; 173 173 }; 174 174 175 175 static struct can_bittiming_const at91_bittiming_const = { ··· 224 220 set_mb_mode_prio(priv, mb, mode, 0); 225 221 } 226 222 223 + static inline u32 at91_can_id_to_reg_mid(canid_t can_id) 224 + { 225 + u32 reg_mid; 226 + 227 + if (can_id & CAN_EFF_FLAG) 228 + reg_mid = (can_id & CAN_EFF_MASK) | AT91_MID_MIDE; 229 + else 230 + 
reg_mid = (can_id & CAN_SFF_MASK) << 18; 231 + 232 + return reg_mid; 233 + } 234 + 227 235 /* 228 236 * Swtich transceiver on or off 229 237 */ ··· 249 233 { 250 234 struct at91_priv *priv = netdev_priv(dev); 251 235 unsigned int i; 236 + u32 reg_mid; 252 237 253 238 /* 254 - * The first 12 mailboxes are used as a reception FIFO. The 255 - * last mailbox is configured with overwrite option. The 256 - * overwrite flag indicates a FIFO overflow. 239 + * Due to a chip bug (errata 50.2.6.3 & 50.3.5.3) the first 240 + * mailbox is disabled. The next 11 mailboxes are used as a 241 + * reception FIFO. The last mailbox is configured with 242 + * overwrite option. The overwrite flag indicates a FIFO 243 + * overflow. 257 244 */ 245 + reg_mid = at91_can_id_to_reg_mid(priv->mb0_id); 246 + for (i = 0; i < AT91_MB_RX_FIRST; i++) { 247 + set_mb_mode(priv, i, AT91_MB_MODE_DISABLED); 248 + at91_write(priv, AT91_MID(i), reg_mid); 249 + at91_write(priv, AT91_MCR(i), 0x0); /* clear dlc */ 250 + } 251 + 258 252 for (i = AT91_MB_RX_FIRST; i < AT91_MB_RX_LAST; i++) 259 253 set_mb_mode(priv, i, AT91_MB_MODE_RX); 260 254 set_mb_mode(priv, AT91_MB_RX_LAST, AT91_MB_MODE_RX_OVRWR); ··· 280 254 set_mb_mode_prio(priv, i, AT91_MB_MODE_TX, 0); 281 255 282 256 /* Reset tx and rx helper pointers */ 283 - priv->tx_next = priv->tx_echo = priv->rx_next = 0; 257 + priv->tx_next = priv->tx_echo = 0; 258 + priv->rx_next = AT91_MB_RX_FIRST; 284 259 } 285 260 286 261 static int at91_set_bittiming(struct net_device *dev) ··· 399 372 netdev_err(dev, "BUG! TX buffer full when queue awake!\n"); 400 373 return NETDEV_TX_BUSY; 401 374 } 402 - 403 - if (cf->can_id & CAN_EFF_FLAG) 404 - reg_mid = (cf->can_id & CAN_EFF_MASK) | AT91_MID_MIDE; 405 - else 406 - reg_mid = (cf->can_id & CAN_SFF_MASK) << 18; 407 - 375 + reg_mid = at91_can_id_to_reg_mid(cf->can_id); 408 376 reg_mcr = ((cf->can_id & CAN_RTR_FLAG) ? 
AT91_MCR_MRTR : 0) | 409 377 (cf->can_dlc << 16) | AT91_MCR_MTCR; 410 378 ··· 561 539 * 562 540 * Theory of Operation: 563 541 * 564 - * 12 of the 16 mailboxes on the chip are reserved for RX. we split 565 - * them into 2 groups. The lower group holds 8 and upper 4 mailboxes. 542 + * 11 of the 16 mailboxes on the chip are reserved for RX. we split 543 + * them into 2 groups. The lower group holds 7 and upper 4 mailboxes. 566 544 * 567 545 * Like it or not, but the chip always saves a received CAN message 568 546 * into the first free mailbox it finds (starting with the 569 547 * lowest). This makes it very difficult to read the messages in the 570 548 * right order from the chip. This is how we work around that problem: 571 549 * 572 - * The first message goes into mb nr. 0 and issues an interrupt. All 550 + * The first message goes into mb nr. 1 and issues an interrupt. All 573 551 * rx ints are disabled in the interrupt handler and a napi poll is 574 552 * scheduled. We read the mailbox, but do _not_ reenable the mb (to 575 553 * receive another message). 576 554 * 577 555 * lower mbxs upper 578 - * ______^______ __^__ 579 - * / \ / \ 556 + * ____^______ __^__ 557 + * / \ / \ 580 558 * +-+-+-+-+-+-+-+-++-+-+-+-+ 581 - * |x|x|x|x|x|x|x|x|| | | | | 559 + * | |x|x|x|x|x|x|x|| | | | | 582 560 * +-+-+-+-+-+-+-+-++-+-+-+-+ 583 561 * 0 0 0 0 0 0 0 0 0 0 1 1 \ mail 584 562 * 0 1 2 3 4 5 6 7 8 9 0 1 / box 563 + * ^ 564 + * | 565 + * \ 566 + * unused, due to chip bug 585 567 * 586 568 * The variable priv->rx_next points to the next mailbox to read a 587 569 * message from. 
As long we're in the lower mailboxes we just read the ··· 616 590 "order of incoming frames cannot be guaranteed\n"); 617 591 618 592 again: 619 - for (mb = find_next_bit(addr, AT91_MB_RX_NUM, priv->rx_next); 620 - mb < AT91_MB_RX_NUM && quota > 0; 593 + for (mb = find_next_bit(addr, AT91_MB_RX_LAST + 1, priv->rx_next); 594 + mb < AT91_MB_RX_LAST + 1 && quota > 0; 621 595 reg_sr = at91_read(priv, AT91_SR), 622 - mb = find_next_bit(addr, AT91_MB_RX_NUM, ++priv->rx_next)) { 596 + mb = find_next_bit(addr, AT91_MB_RX_LAST + 1, ++priv->rx_next)) { 623 597 at91_read_msg(dev, mb); 624 598 625 599 /* reactivate mailboxes */ ··· 636 610 637 611 /* upper group completed, look again in lower */ 638 612 if (priv->rx_next > AT91_MB_RX_LOW_LAST && 639 - quota > 0 && mb >= AT91_MB_RX_NUM) { 640 - priv->rx_next = 0; 613 + quota > 0 && mb > AT91_MB_RX_LAST) { 614 + priv->rx_next = AT91_MB_RX_FIRST; 641 615 goto again; 642 616 } 643 617 ··· 1063 1037 .ndo_start_xmit = at91_start_xmit, 1064 1038 }; 1065 1039 1040 + static ssize_t at91_sysfs_show_mb0_id(struct device *dev, 1041 + struct device_attribute *attr, char *buf) 1042 + { 1043 + struct at91_priv *priv = netdev_priv(to_net_dev(dev)); 1044 + 1045 + if (priv->mb0_id & CAN_EFF_FLAG) 1046 + return snprintf(buf, PAGE_SIZE, "0x%08x\n", priv->mb0_id); 1047 + else 1048 + return snprintf(buf, PAGE_SIZE, "0x%03x\n", priv->mb0_id); 1049 + } 1050 + 1051 + static ssize_t at91_sysfs_set_mb0_id(struct device *dev, 1052 + struct device_attribute *attr, const char *buf, size_t count) 1053 + { 1054 + struct net_device *ndev = to_net_dev(dev); 1055 + struct at91_priv *priv = netdev_priv(ndev); 1056 + unsigned long can_id; 1057 + ssize_t ret; 1058 + int err; 1059 + 1060 + rtnl_lock(); 1061 + 1062 + if (ndev->flags & IFF_UP) { 1063 + ret = -EBUSY; 1064 + goto out; 1065 + } 1066 + 1067 + err = strict_strtoul(buf, 0, &can_id); 1068 + if (err) { 1069 + ret = err; 1070 + goto out; 1071 + } 1072 + 1073 + if (can_id & CAN_EFF_FLAG) 1074 + can_id &= 
CAN_EFF_MASK | CAN_EFF_FLAG; 1075 + else 1076 + can_id &= CAN_SFF_MASK; 1077 + 1078 + priv->mb0_id = can_id; 1079 + ret = count; 1080 + 1081 + out: 1082 + rtnl_unlock(); 1083 + return ret; 1084 + } 1085 + 1086 + static DEVICE_ATTR(mb0_id, S_IWUGO | S_IRUGO, 1087 + at91_sysfs_show_mb0_id, at91_sysfs_set_mb0_id); 1088 + 1089 + static struct attribute *at91_sysfs_attrs[] = { 1090 + &dev_attr_mb0_id.attr, 1091 + NULL, 1092 + }; 1093 + 1094 + static struct attribute_group at91_sysfs_attr_group = { 1095 + .attrs = at91_sysfs_attrs, 1096 + }; 1097 + 1066 1098 static int __devinit at91_can_probe(struct platform_device *pdev) 1067 1099 { 1068 1100 struct net_device *dev; ··· 1166 1082 dev->netdev_ops = &at91_netdev_ops; 1167 1083 dev->irq = irq; 1168 1084 dev->flags |= IFF_ECHO; 1085 + dev->sysfs_groups[0] = &at91_sysfs_attr_group; 1169 1086 1170 1087 priv = netdev_priv(dev); 1171 1088 priv->can.clock.freq = clk_get_rate(clk); ··· 1178 1093 priv->dev = dev; 1179 1094 priv->clk = clk; 1180 1095 priv->pdata = pdev->dev.platform_data; 1096 + priv->mb0_id = 0x7ff; 1181 1097 1182 1098 netif_napi_add(dev, &priv->napi, at91_poll, AT91_NAPI_WEIGHT); 1183 1099
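The factored-out `at91_can_id_to_reg_mid()` helper is now used both on the TX path and for loading the unusable mailbox 0 with the sysfs-configurable `mb0_id`. A standalone version of the conversion; the CAN constants are as defined in `<linux/can.h>`, and the MIDE bit position matches the driver's `AT91_MID_MIDE`:

```c
#include <assert.h>
#include <stdint.h>

/* Constants from <linux/can.h>; AT91_MID_MIDE is the extended-frame
 * bit in the chip's mailbox ID register. */
#define CAN_EFF_FLAG	0x80000000U
#define CAN_EFF_MASK	0x1fffffffU
#define CAN_SFF_MASK	0x000007ffU
#define AT91_MID_MIDE	(1U << 29)

/* Standard 11-bit IDs occupy bits 28..18 of the MID register;
 * extended 29-bit IDs occupy the low bits and set MIDE. */
static uint32_t can_id_to_reg_mid(uint32_t can_id)
{
	if (can_id & CAN_EFF_FLAG)
		return (can_id & CAN_EFF_MASK) | AT91_MID_MIDE;
	return (can_id & CAN_SFF_MASK) << 18;
}
```

This matches the sysfs examples in the ABI document above: writing `0x7ff` configures a standard-frame ID, while `0x9fffffff` (extended ID `0x1fffffff` plus `CAN_EFF_FLAG`) configures an extended-frame ID with MIDE set.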
+30
drivers/net/can/softing/Kconfig
··· 1 + config CAN_SOFTING 2 + tristate "Softing GmbH CAN generic support" 3 + depends on CAN_DEV 4 + ---help--- 5 + Support for CAN cards from Softing GmbH and some cards 6 + from Vector GmbH. 7 + Softing GmbH CAN cards come with 1 or 2 physical buses. 8 + Those cards typically use dual-port RAM to communicate 9 + with the host CPU. The interface is then identical for PCI 10 + and PCMCIA cards. This driver operates on a platform device, 11 + which has been created by the softing_cs or softing_pci driver. 12 + Warning: 13 + The API of the card does not allow fine control per bus, but 14 + controls the 2 buses on the card together. 15 + As such, some actions (start/stop/busoff recovery) on 1 bus 16 + must temporarily bring down the other bus too. 17 + 18 + config CAN_SOFTING_CS 19 + tristate "Softing GmbH CAN PCMCIA cards" 20 + depends on PCMCIA 21 + select CAN_SOFTING 22 + ---help--- 23 + Support for PCMCIA cards from Softing GmbH and some cards 24 + from Vector GmbH. 25 + You need firmware for these, which you can get at 26 + http://developer.berlios.de/projects/socketcan/ 27 + This version of the driver is written against 28 + firmware version 4.6 (softing-fw-4.6-binaries.tar.gz). 29 + In order to use the card as a CAN device, you need the Softing generic 30 + support too.
+6
drivers/net/can/softing/Makefile
··· 1 + 2 + softing-y := softing_main.o softing_fw.o 3 + obj-$(CONFIG_CAN_SOFTING) += softing.o 4 + obj-$(CONFIG_CAN_SOFTING_CS) += softing_cs.o 5 + 6 + ccflags-$(CONFIG_CAN_DEBUG_DEVICES) := -DDEBUG
+167
drivers/net/can/softing/softing.h
··· 1 + /* 2 + * softing common interfaces 3 + * 4 + * by Kurt Van Dijck, 2008-2010 5 + */ 6 + 7 + #include <linux/atomic.h> 8 + #include <linux/netdevice.h> 9 + #include <linux/ktime.h> 10 + #include <linux/mutex.h> 11 + #include <linux/spinlock.h> 12 + #include <linux/can.h> 13 + #include <linux/can/dev.h> 14 + 15 + #include "softing_platform.h" 16 + 17 + struct softing; 18 + 19 + struct softing_priv { 20 + struct can_priv can; /* must be the first member! */ 21 + struct net_device *netdev; 22 + struct softing *card; 23 + struct { 24 + int pending; 25 + /* variables wich hold the circular buffer */ 26 + int echo_put; 27 + int echo_get; 28 + } tx; 29 + struct can_bittiming_const btr_const; 30 + int index; 31 + uint8_t output; 32 + uint16_t chip; 33 + }; 34 + #define netdev2softing(netdev) ((struct softing_priv *)netdev_priv(netdev)) 35 + 36 + struct softing { 37 + const struct softing_platform_data *pdat; 38 + struct platform_device *pdev; 39 + struct net_device *net[2]; 40 + spinlock_t spin; /* protect this structure & DPRAM access */ 41 + ktime_t ts_ref; 42 + ktime_t ts_overflow; /* timestamp overflow value, in ktime */ 43 + 44 + struct { 45 + /* indication of firmware status */ 46 + int up; 47 + /* protection of the 'up' variable */ 48 + struct mutex lock; 49 + } fw; 50 + struct { 51 + int nr; 52 + int requested; 53 + int svc_count; 54 + unsigned int dpram_position; 55 + } irq; 56 + struct { 57 + int pending; 58 + int last_bus; 59 + /* 60 + * keep the bus that last tx'd a message, 61 + * in order to let every netdev queue resume 62 + */ 63 + } tx; 64 + __iomem uint8_t *dpram; 65 + unsigned long dpram_phys; 66 + unsigned long dpram_size; 67 + struct { 68 + uint16_t fw_version, hw_version, license, serial; 69 + uint16_t chip[2]; 70 + unsigned int freq; /* remote cpu's operating frequency */ 71 + } id; 72 + }; 73 + 74 + extern int softing_default_output(struct net_device *netdev); 75 + 76 + extern ktime_t softing_raw2ktime(struct softing *card, u32 raw); 77 + 78 + 
extern int softing_chip_poweron(struct softing *card); 79 + 80 + extern int softing_bootloader_command(struct softing *card, int16_t cmd, 81 + const char *msg); 82 + 83 + /* Load firmware after reset */ 84 + extern int softing_load_fw(const char *file, struct softing *card, 85 + __iomem uint8_t *virt, unsigned int size, int offset); 86 + 87 + /* Load final application firmware after bootloader */ 88 + extern int softing_load_app_fw(const char *file, struct softing *card); 89 + 90 + /* 91 + * enable or disable irq 92 + * only called with fw.lock locked 93 + */ 94 + extern int softing_enable_irq(struct softing *card, int enable); 95 + 96 + /* start/stop 1 bus on card */ 97 + extern int softing_startstop(struct net_device *netdev, int up); 98 + 99 + /* netif_rx() */ 100 + extern int softing_netdev_rx(struct net_device *netdev, 101 + const struct can_frame *msg, ktime_t ktime); 102 + 103 + /* SOFTING DPRAM mappings */ 104 + #define DPRAM_RX 0x0000 105 + #define DPRAM_RX_SIZE 32 106 + #define DPRAM_RX_CNT 16 107 + #define DPRAM_RX_RD 0x0201 /* uint8_t */ 108 + #define DPRAM_RX_WR 0x0205 /* uint8_t */ 109 + #define DPRAM_RX_LOST 0x0207 /* uint8_t */ 110 + 111 + #define DPRAM_FCT_PARAM 0x0300 /* int16_t [20] */ 112 + #define DPRAM_FCT_RESULT 0x0328 /* int16_t */ 113 + #define DPRAM_FCT_HOST 0x032b /* uint16_t */ 114 + 115 + #define DPRAM_INFO_BUSSTATE 0x0331 /* uint16_t */ 116 + #define DPRAM_INFO_BUSSTATE2 0x0335 /* uint16_t */ 117 + #define DPRAM_INFO_ERRSTATE 0x0339 /* uint16_t */ 118 + #define DPRAM_INFO_ERRSTATE2 0x033d /* uint16_t */ 119 + #define DPRAM_RESET 0x0341 /* uint16_t */ 120 + #define DPRAM_CLR_RECV_FIFO 0x0345 /* uint16_t */ 121 + #define DPRAM_RESET_TIME 0x034d /* uint16_t */ 122 + #define DPRAM_TIME 0x0350 /* uint64_t */ 123 + #define DPRAM_WR_START 0x0358 /* uint8_t */ 124 + #define DPRAM_WR_END 0x0359 /* uint8_t */ 125 + #define DPRAM_RESET_RX_FIFO 0x0361 /* uint16_t */ 126 + #define DPRAM_RESET_TX_FIFO 0x0364 /* uint8_t */ 127 + #define 
DPRAM_READ_FIFO_LEVEL 0x0365 /* uint8_t */ 128 + #define DPRAM_RX_FIFO_LEVEL 0x0366 /* uint16_t */ 129 + #define DPRAM_TX_FIFO_LEVEL 0x0366 /* uint16_t */ 130 + 131 + #define DPRAM_TX 0x0400 /* uint16_t */ 132 + #define DPRAM_TX_SIZE 16 133 + #define DPRAM_TX_CNT 32 134 + #define DPRAM_TX_RD 0x0601 /* uint8_t */ 135 + #define DPRAM_TX_WR 0x0605 /* uint8_t */ 136 + 137 + #define DPRAM_COMMAND 0x07e0 /* uint16_t */ 138 + #define DPRAM_RECEIPT 0x07f0 /* uint16_t */ 139 + #define DPRAM_IRQ_TOHOST 0x07fe /* uint8_t */ 140 + #define DPRAM_IRQ_TOCARD 0x07ff /* uint8_t */ 141 + 142 + #define DPRAM_V2_RESET 0x0e00 /* uint8_t */ 143 + #define DPRAM_V2_IRQ_TOHOST 0x0e02 /* uint8_t */ 144 + 145 + #define TXMAX (DPRAM_TX_CNT - 1) 146 + 147 + /* DPRAM return codes */ 148 + #define RES_NONE 0 149 + #define RES_OK 1 150 + #define RES_NOK 2 151 + #define RES_UNKNOWN 3 152 + /* DPRAM flags */ 153 + #define CMD_TX 0x01 154 + #define CMD_ACK 0x02 155 + #define CMD_XTD 0x04 156 + #define CMD_RTR 0x08 157 + #define CMD_ERR 0x10 158 + #define CMD_BUS2 0x80 159 + 160 + /* returned fifo entry bus state masks */ 161 + #define SF_MASK_BUSOFF 0x80 162 + #define SF_MASK_EPASSIVE 0x60 163 + 164 + /* bus states */ 165 + #define STATE_BUSOFF 2 166 + #define STATE_EPASSIVE 1 167 + #define STATE_EACTIVE 0
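Several of the DPRAM offsets above are odd (for instance `DPRAM_FCT_HOST` at 0x032b), which is why the firmware code further down assembles 16-bit values from two 8-bit accesses instead of a single 16-bit read. A userspace model of that access pattern over a plain byte array standing in for the dual-port RAM window (the helpers are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Little-endian 16-bit read from an unaligned offset, byte by byte,
 * as the driver does for DPRAM_FCT_HOST. */
static uint16_t read16_unaligned(const uint8_t *dpram, unsigned int off)
{
	return dpram[off] | (uint16_t)(dpram[off + 1] << 8);
}

/* Matching write: high byte first, then low byte, mirroring the
 * order used when posting a command vector to the card. */
static void write16_unaligned(uint8_t *dpram, unsigned int off, uint16_t v)
{
	dpram[off + 1] = v >> 8;
	dpram[off] = v & 0xff;
}

/* Round-trip a value through an odd offset in a fake DPRAM window. */
static uint16_t roundtrip(uint16_t v)
{
	uint8_t mem[8] = {0};

	write16_unaligned(mem, 3, v);
	return read16_unaligned(mem, 3);
}
```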
+359
drivers/net/can/softing/softing_cs.c
··· 1 + /* 2 + * Copyright (C) 2008-2010 3 + * 4 + * - Kurt Van Dijck, EIA Electronics 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the version 2 of the GNU General Public License 8 + * as published by the Free Software Foundation 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program; if not, write to the Free Software 17 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 + */ 19 + 20 + #include <linux/module.h> 21 + #include <linux/kernel.h> 22 + 23 + #include <pcmcia/cistpl.h> 24 + #include <pcmcia/ds.h> 25 + 26 + #include "softing_platform.h" 27 + 28 + static int softingcs_index; 29 + static spinlock_t softingcs_index_lock; 30 + 31 + static int softingcs_reset(struct platform_device *pdev, int v); 32 + static int softingcs_enable_irq(struct platform_device *pdev, int v); 33 + 34 + /* 35 + * platform_data descriptions 36 + */ 37 + #define MHZ (1000*1000) 38 + static const struct softing_platform_data softingcs_platform_data[] = { 39 + { 40 + .name = "CANcard", 41 + .manf = 0x0168, .prod = 0x001, 42 + .generation = 1, 43 + .nbus = 2, 44 + .freq = 16 * MHZ, .max_brp = 32, .max_sjw = 4, 45 + .dpram_size = 0x0800, 46 + .boot = {0x0000, 0x000000, fw_dir "bcard.bin",}, 47 + .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",}, 48 + .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",}, 49 + .reset = softingcs_reset, 50 + .enable_irq = softingcs_enable_irq, 51 + }, { 52 + .name = "CANcard-NEC", 53 + .manf = 0x0168, .prod = 0x002, 54 + .generation = 1, 55 + .nbus = 2, 56 + .freq = 16 * MHZ, .max_brp = 32, .max_sjw = 4, 57 + .dpram_size = 0x0800, 58 + .boot = {0x0000, 
0x000000, fw_dir "bcard.bin",}, 59 + .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",}, 60 + .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",}, 61 + .reset = softingcs_reset, 62 + .enable_irq = softingcs_enable_irq, 63 + }, { 64 + .name = "CANcard-SJA", 65 + .manf = 0x0168, .prod = 0x004, 66 + .generation = 1, 67 + .nbus = 2, 68 + .freq = 20 * MHZ, .max_brp = 32, .max_sjw = 4, 69 + .dpram_size = 0x0800, 70 + .boot = {0x0000, 0x000000, fw_dir "bcard.bin",}, 71 + .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",}, 72 + .app = {0x0010, 0x0d0000, fw_dir "cansja.bin",}, 73 + .reset = softingcs_reset, 74 + .enable_irq = softingcs_enable_irq, 75 + }, { 76 + .name = "CANcard-2", 77 + .manf = 0x0168, .prod = 0x005, 78 + .generation = 2, 79 + .nbus = 2, 80 + .freq = 24 * MHZ, .max_brp = 64, .max_sjw = 4, 81 + .dpram_size = 0x1000, 82 + .boot = {0x0000, 0x000000, fw_dir "bcard2.bin",}, 83 + .load = {0x0120, 0x00f600, fw_dir "ldcard2.bin",}, 84 + .app = {0x0010, 0x0d0000, fw_dir "cancrd2.bin",}, 85 + .reset = softingcs_reset, 86 + .enable_irq = NULL, 87 + }, { 88 + .name = "Vector-CANcard", 89 + .manf = 0x0168, .prod = 0x081, 90 + .generation = 1, 91 + .nbus = 2, 92 + .freq = 16 * MHZ, .max_brp = 64, .max_sjw = 4, 93 + .dpram_size = 0x0800, 94 + .boot = {0x0000, 0x000000, fw_dir "bcard.bin",}, 95 + .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",}, 96 + .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",}, 97 + .reset = softingcs_reset, 98 + .enable_irq = softingcs_enable_irq, 99 + }, { 100 + .name = "Vector-CANcard-SJA", 101 + .manf = 0x0168, .prod = 0x084, 102 + .generation = 1, 103 + .nbus = 2, 104 + .freq = 20 * MHZ, .max_brp = 32, .max_sjw = 4, 105 + .dpram_size = 0x0800, 106 + .boot = {0x0000, 0x000000, fw_dir "bcard.bin",}, 107 + .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",}, 108 + .app = {0x0010, 0x0d0000, fw_dir "cansja.bin",}, 109 + .reset = softingcs_reset, 110 + .enable_irq = softingcs_enable_irq, 111 + }, { 112 + .name = "Vector-CANcard-2", 113 + .manf = 0x0168, .prod = 
0x085, 114 + .generation = 2, 115 + .nbus = 2, 116 + .freq = 24 * MHZ, .max_brp = 64, .max_sjw = 4, 117 + .dpram_size = 0x1000, 118 + .boot = {0x0000, 0x000000, fw_dir "bcard2.bin",}, 119 + .load = {0x0120, 0x00f600, fw_dir "ldcard2.bin",}, 120 + .app = {0x0010, 0x0d0000, fw_dir "cancrd2.bin",}, 121 + .reset = softingcs_reset, 122 + .enable_irq = NULL, 123 + }, { 124 + .name = "EDICcard-NEC", 125 + .manf = 0x0168, .prod = 0x102, 126 + .generation = 1, 127 + .nbus = 2, 128 + .freq = 16 * MHZ, .max_brp = 64, .max_sjw = 4, 129 + .dpram_size = 0x0800, 130 + .boot = {0x0000, 0x000000, fw_dir "bcard.bin",}, 131 + .load = {0x0120, 0x00f600, fw_dir "ldcard.bin",}, 132 + .app = {0x0010, 0x0d0000, fw_dir "cancard.bin",}, 133 + .reset = softingcs_reset, 134 + .enable_irq = softingcs_enable_irq, 135 + }, { 136 + .name = "EDICcard-2", 137 + .manf = 0x0168, .prod = 0x105, 138 + .generation = 2, 139 + .nbus = 2, 140 + .freq = 24 * MHZ, .max_brp = 64, .max_sjw = 4, 141 + .dpram_size = 0x1000, 142 + .boot = {0x0000, 0x000000, fw_dir "bcard2.bin",}, 143 + .load = {0x0120, 0x00f600, fw_dir "ldcard2.bin",}, 144 + .app = {0x0010, 0x0d0000, fw_dir "cancrd2.bin",}, 145 + .reset = softingcs_reset, 146 + .enable_irq = NULL, 147 + }, { 148 + 0, 0, 149 + }, 150 + }; 151 + 152 + MODULE_FIRMWARE(fw_dir "bcard.bin"); 153 + MODULE_FIRMWARE(fw_dir "ldcard.bin"); 154 + MODULE_FIRMWARE(fw_dir "cancard.bin"); 155 + MODULE_FIRMWARE(fw_dir "cansja.bin"); 156 + 157 + MODULE_FIRMWARE(fw_dir "bcard2.bin"); 158 + MODULE_FIRMWARE(fw_dir "ldcard2.bin"); 159 + MODULE_FIRMWARE(fw_dir "cancrd2.bin"); 160 + 161 + static __devinit const struct softing_platform_data 162 + *softingcs_find_platform_data(unsigned int manf, unsigned int prod) 163 + { 164 + const struct softing_platform_data *lp; 165 + 166 + for (lp = softingcs_platform_data; lp->manf; ++lp) { 167 + if ((lp->manf == manf) && (lp->prod == prod)) 168 + return lp; 169 + } 170 + return NULL; 171 + } 172 + 173 + /* 174 + * platformdata callbacks 175 + */ 
176 + static int softingcs_reset(struct platform_device *pdev, int v) 177 + { 178 + struct pcmcia_device *pcmcia = to_pcmcia_dev(pdev->dev.parent); 179 + 180 + dev_dbg(&pdev->dev, "pcmcia config [2] %02x\n", v ? 0 : 0x20); 181 + return pcmcia_write_config_byte(pcmcia, 2, v ? 0 : 0x20); 182 + } 183 + 184 + static int softingcs_enable_irq(struct platform_device *pdev, int v) 185 + { 186 + struct pcmcia_device *pcmcia = to_pcmcia_dev(pdev->dev.parent); 187 + 188 + dev_dbg(&pdev->dev, "pcmcia config [0] %02x\n", v ? 0x60 : 0); 189 + return pcmcia_write_config_byte(pcmcia, 0, v ? 0x60 : 0); 190 + } 191 + 192 + /* 193 + * pcmcia check 194 + */ 195 + static __devinit int softingcs_probe_config(struct pcmcia_device *pcmcia, 196 + void *priv_data) 197 + { 198 + struct softing_platform_data *pdat = priv_data; 199 + struct resource *pres; 200 + int memspeed = 0; 201 + 202 + WARN_ON(!pdat); 203 + pres = pcmcia->resource[PCMCIA_IOMEM_0]; 204 + if (resource_size(pres) < 0x1000) 205 + return -ERANGE; 206 + 207 + pres->flags |= WIN_MEMORY_TYPE_CM | WIN_ENABLE; 208 + if (pdat->generation < 2) { 209 + pres->flags |= WIN_USE_WAIT | WIN_DATA_WIDTH_8; 210 + memspeed = 3; 211 + } else { 212 + pres->flags |= WIN_DATA_WIDTH_16; 213 + } 214 + return pcmcia_request_window(pcmcia, pres, memspeed); 215 + } 216 + 217 + static __devexit void softingcs_remove(struct pcmcia_device *pcmcia) 218 + { 219 + struct platform_device *pdev = pcmcia->priv; 220 + 221 + /* free bits */ 222 + platform_device_unregister(pdev); 223 + /* release pcmcia stuff */ 224 + pcmcia_disable_device(pcmcia); 225 + } 226 + 227 + /* 228 + * platform_device wrapper 229 + * pdev->resource has 2 entries: io & irq 230 + */ 231 + static void softingcs_pdev_release(struct device *dev) 232 + { 233 + struct platform_device *pdev = to_platform_device(dev); 234 + kfree(pdev); 235 + } 236 + 237 + static __devinit int softingcs_probe(struct pcmcia_device *pcmcia) 238 + { 239 + int ret; 240 + struct platform_device *pdev; 241 + const 
struct softing_platform_data *pdat; 242 + struct resource *pres; 243 + struct dev { 244 + struct platform_device pdev; 245 + struct resource res[2]; 246 + } *dev; 247 + 248 + /* find matching platform_data */ 249 + pdat = softingcs_find_platform_data(pcmcia->manf_id, pcmcia->card_id); 250 + if (!pdat) 251 + return -ENOTTY; 252 + 253 + /* setup pcmcia device */ 254 + pcmcia->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IOMEM | 255 + CONF_AUTO_SET_VPP | CONF_AUTO_CHECK_VCC; 256 + ret = pcmcia_loop_config(pcmcia, softingcs_probe_config, (void *)pdat); 257 + if (ret) 258 + goto pcmcia_failed; 259 + 260 + ret = pcmcia_enable_device(pcmcia); 261 + if (ret < 0) 262 + goto pcmcia_failed; 263 + 264 + pres = pcmcia->resource[PCMCIA_IOMEM_0]; 265 + if (!pres) { 266 + ret = -EBADF; 267 + goto pcmcia_bad; 268 + } 269 + 270 + /* create softing platform device */ 271 + dev = kzalloc(sizeof(*dev), GFP_KERNEL); 272 + if (!dev) { 273 + ret = -ENOMEM; 274 + goto mem_failed; 275 + } 276 + dev->pdev.resource = dev->res; 277 + dev->pdev.num_resources = ARRAY_SIZE(dev->res); 278 + dev->pdev.dev.release = softingcs_pdev_release; 279 + 280 + pdev = &dev->pdev; 281 + pdev->dev.platform_data = (void *)pdat; 282 + pdev->dev.parent = &pcmcia->dev; 283 + pcmcia->priv = pdev; 284 + 285 + /* platform device resources */ 286 + pdev->resource[0].flags = IORESOURCE_MEM; 287 + pdev->resource[0].start = pres->start; 288 + pdev->resource[0].end = pres->end; 289 + 290 + pdev->resource[1].flags = IORESOURCE_IRQ; 291 + pdev->resource[1].start = pcmcia->irq; 292 + pdev->resource[1].end = pdev->resource[1].start; 293 + 294 + /* platform device setup */ 295 + spin_lock(&softingcs_index_lock); 296 + pdev->id = softingcs_index++; 297 + spin_unlock(&softingcs_index_lock); 298 + pdev->name = "softing"; 299 + dev_set_name(&pdev->dev, "softingcs.%i", pdev->id); 300 + ret = platform_device_register(pdev); 301 + if (ret < 0) 302 + goto platform_failed; 303 + 304 + dev_info(&pcmcia->dev, "created %s\n", 
dev_name(&pdev->dev)); 305 + return 0; 306 + 307 + platform_failed: 308 + kfree(dev); 309 + mem_failed: 310 + pcmcia_bad: 311 + pcmcia_failed: 312 + pcmcia_disable_device(pcmcia); 313 + pcmcia->priv = NULL; 314 + return ret ?: -ENODEV; 315 + } 316 + 317 + static /*const*/ struct pcmcia_device_id softingcs_ids[] = { 318 + /* softing */ 319 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0001), 320 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0002), 321 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0004), 322 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0005), 323 + /* vector, manufacturer? */ 324 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0081), 325 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0084), 326 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0085), 327 + /* EDIC */ 328 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0102), 329 + PCMCIA_DEVICE_MANF_CARD(0x0168, 0x0105), 330 + PCMCIA_DEVICE_NULL, 331 + }; 332 + 333 + MODULE_DEVICE_TABLE(pcmcia, softingcs_ids); 334 + 335 + static struct pcmcia_driver softingcs_driver = { 336 + .owner = THIS_MODULE, 337 + .name = "softingcs", 338 + .id_table = softingcs_ids, 339 + .probe = softingcs_probe, 340 + .remove = __devexit_p(softingcs_remove), 341 + }; 342 + 343 + static int __init softingcs_start(void) 344 + { 345 + spin_lock_init(&softingcs_index_lock); 346 + return pcmcia_register_driver(&softingcs_driver); 347 + } 348 + 349 + static void __exit softingcs_stop(void) 350 + { 351 + pcmcia_unregister_driver(&softingcs_driver); 352 + } 353 + 354 + module_init(softingcs_start); 355 + module_exit(softingcs_stop); 356 + 357 + MODULE_DESCRIPTION("softing CANcard driver" 358 + ", links PCMCIA card to softing driver"); 359 + MODULE_LICENSE("GPL v2");
+691
drivers/net/can/softing/softing_fw.c
··· 1 + /* 2 + * Copyright (C) 2008-2010 3 + * 4 + * - Kurt Van Dijck, EIA Electronics 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the version 2 of the GNU General Public License 8 + * as published by the Free Software Foundation 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program; if not, write to the Free Software 17 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 + */ 19 + 20 + #include <linux/firmware.h> 21 + #include <linux/sched.h> 22 + #include <asm/div64.h> 23 + 24 + #include "softing.h" 25 + 26 + /* 27 + * low level DPRAM command. 28 + * Make sure that card->dpram[DPRAM_FCT_HOST] is preset 29 + */ 30 + static int _softing_fct_cmd(struct softing *card, int16_t cmd, uint16_t vector, 31 + const char *msg) 32 + { 33 + int ret; 34 + unsigned long stamp; 35 + 36 + iowrite16(cmd, &card->dpram[DPRAM_FCT_PARAM]); 37 + iowrite8(vector >> 8, &card->dpram[DPRAM_FCT_HOST + 1]); 38 + iowrite8(vector, &card->dpram[DPRAM_FCT_HOST]); 39 + /* be sure to flush this to the card */ 40 + wmb(); 41 + stamp = jiffies + 1 * HZ; 42 + /* wait for card */ 43 + do { 44 + /* DPRAM_FCT_HOST is _not_ aligned */ 45 + ret = ioread8(&card->dpram[DPRAM_FCT_HOST]) + 46 + (ioread8(&card->dpram[DPRAM_FCT_HOST + 1]) << 8); 47 + /* don't have any cached variables */ 48 + rmb(); 49 + if (ret == RES_OK) 50 + /* read return-value now */ 51 + return ioread16(&card->dpram[DPRAM_FCT_RESULT]); 52 + 53 + if ((ret != vector) || time_after(jiffies, stamp)) 54 + break; 55 + /* process context => relax */ 56 + usleep_range(500, 10000); 57 + } while (1); 58 + 59 + ret = (ret == RES_NONE) ? 
-ETIMEDOUT : -ECANCELED; 60 + dev_alert(&card->pdev->dev, "firmware %s failed (%i)\n", msg, ret); 61 + return ret; 62 + } 63 + 64 + static int softing_fct_cmd(struct softing *card, int16_t cmd, const char *msg) 65 + { 66 + int ret; 67 + 68 + ret = _softing_fct_cmd(card, cmd, 0, msg); 69 + if (ret > 0) { 70 + dev_alert(&card->pdev->dev, "%s returned %u\n", msg, ret); 71 + ret = -EIO; 72 + } 73 + return ret; 74 + } 75 + 76 + int softing_bootloader_command(struct softing *card, int16_t cmd, 77 + const char *msg) 78 + { 79 + int ret; 80 + unsigned long stamp; 81 + 82 + iowrite16(RES_NONE, &card->dpram[DPRAM_RECEIPT]); 83 + iowrite16(cmd, &card->dpram[DPRAM_COMMAND]); 84 + /* be sure to flush this to the card */ 85 + wmb(); 86 + stamp = jiffies + 3 * HZ; 87 + /* wait for card */ 88 + do { 89 + ret = ioread16(&card->dpram[DPRAM_RECEIPT]); 90 + /* don't have any cached variables */ 91 + rmb(); 92 + if (ret == RES_OK) 93 + return 0; 94 + if (time_after(jiffies, stamp)) 95 + break; 96 + /* process context => relax */ 97 + usleep_range(500, 10000); 98 + } while (!signal_pending(current)); 99 + 100 + ret = (ret == RES_NONE) ? -ETIMEDOUT : -ECANCELED; 101 + dev_alert(&card->pdev->dev, "bootloader %s failed (%i)\n", msg, ret); 102 + return ret; 103 + } 104 + 105 + static int fw_parse(const uint8_t **pmem, uint16_t *ptype, uint32_t *paddr, 106 + uint16_t *plen, const uint8_t **pdat) 107 + { 108 + uint16_t checksum[2]; 109 + const uint8_t *mem; 110 + const uint8_t *end; 111 + 112 + /* 113 + * firmware records are a binary, unaligned stream composed of: 114 + * uint16_t type; 115 + * uint32_t addr; 116 + * uint16_t len; 117 + * uint8_t dat[len]; 118 + * uint16_t checksum; 119 + * all values in little endian. 120 + * We could define a struct for this, with __attribute__((packed)), 121 + * but would that solve the alignment in _all_ cases (cf. the 122 + * struct itself may be at an odd address)? 
123 + * 124 + * I chose to use leXX_to_cpup() since this solves both 125 + * endianness & alignment. 126 + */ 127 + mem = *pmem; 128 + *ptype = le16_to_cpup((void *)&mem[0]); 129 + *paddr = le32_to_cpup((void *)&mem[2]); 130 + *plen = le16_to_cpup((void *)&mem[6]); 131 + *pdat = &mem[8]; 132 + /* verify checksum */ 133 + end = &mem[8 + *plen]; 134 + checksum[0] = le16_to_cpup((void *)end); 135 + for (checksum[1] = 0; mem < end; ++mem) 136 + checksum[1] += *mem; 137 + if (checksum[0] != checksum[1]) 138 + return -EINVAL; 139 + /* increment */ 140 + *pmem += 10 + *plen; 141 + return 0; 142 + } 143 + 144 + int softing_load_fw(const char *file, struct softing *card, 145 + __iomem uint8_t *dpram, unsigned int size, int offset) 146 + { 147 + const struct firmware *fw; 148 + int ret; 149 + const uint8_t *mem, *end, *dat; 150 + uint16_t type, len; 151 + uint32_t addr; 152 + uint8_t *buf = NULL; 153 + int buflen = 0; 154 + int8_t type_end = 0; 155 + 156 + ret = request_firmware(&fw, file, &card->pdev->dev); 157 + if (ret < 0) 158 + return ret; 159 + dev_dbg(&card->pdev->dev, "%s, firmware(%s) got %u bytes" 160 + ", offset %c0x%04x\n", 161 + card->pdat->name, file, (unsigned int)fw->size, 162 + (offset >= 0) ? 
'+' : '-', (unsigned int)abs(offset)); 163 + /* parse the firmware */ 164 + mem = fw->data; 165 + end = &mem[fw->size]; 166 + /* look for header record */ 167 + ret = fw_parse(&mem, &type, &addr, &len, &dat); 168 + if (ret < 0) 169 + goto failed; 170 + if (type != 0xffff) 171 + goto failed; 172 + if (strncmp("Structured Binary Format, Softing GmbH" , dat, len)) { 173 + ret = -EINVAL; 174 + goto failed; 175 + } 176 + /* ok, we had a header */ 177 + while (mem < end) { 178 + ret = fw_parse(&mem, &type, &addr, &len, &dat); 179 + if (ret < 0) 180 + goto failed; 181 + if (type == 3) { 182 + /* start address, not used here */ 183 + continue; 184 + } else if (type == 1) { 185 + /* eof */ 186 + type_end = 1; 187 + break; 188 + } else if (type != 0) { 189 + ret = -EINVAL; 190 + goto failed; 191 + } 192 + 193 + if ((addr + len + offset) > size) 194 + goto failed; 195 + memcpy_toio(&dpram[addr + offset], dat, len); 196 + /* be sure to flush caches from IO space */ 197 + mb(); 198 + if (len > buflen) { 199 + /* align buflen */ 200 + buflen = (len + (1024-1)) & ~(1024-1); 201 + buf = krealloc(buf, buflen, GFP_KERNEL); 202 + if (!buf) { 203 + ret = -ENOMEM; 204 + goto failed; 205 + } 206 + } 207 + /* verify record data */ 208 + memcpy_fromio(buf, &dpram[addr + offset], len); 209 + if (memcmp(buf, dat, len)) { 210 + /* is not ok */ 211 + dev_alert(&card->pdev->dev, "DPRAM readback failed\n"); 212 + ret = -EIO; 213 + goto failed; 214 + } 215 + } 216 + if (!type_end) 217 + /* no end record seen */ 218 + goto failed; 219 + ret = 0; 220 + failed: 221 + kfree(buf); 222 + release_firmware(fw); 223 + if (ret < 0) 224 + dev_info(&card->pdev->dev, "firmware %s failed\n", file); 225 + return ret; 226 + } 227 + 228 + int softing_load_app_fw(const char *file, struct softing *card) 229 + { 230 + const struct firmware *fw; 231 + const uint8_t *mem, *end, *dat; 232 + int ret, j; 233 + uint16_t type, len; 234 + uint32_t addr, start_addr = 0; 235 + unsigned int sum, rx_sum; 236 + int8_t type_end 
= 0, type_entrypoint = 0; 237 + 238 + ret = request_firmware(&fw, file, &card->pdev->dev); 239 + if (ret) { 240 + dev_alert(&card->pdev->dev, "request_firmware(%s) got %i\n", 241 + file, ret); 242 + return ret; 243 + } 244 + dev_dbg(&card->pdev->dev, "firmware(%s) got %lu bytes\n", 245 + file, (unsigned long)fw->size); 246 + /* parse the firmware */ 247 + mem = fw->data; 248 + end = &mem[fw->size]; 249 + /* look for header record */ 250 + ret = fw_parse(&mem, &type, &addr, &len, &dat); 251 + if (ret) 252 + goto failed; 253 + ret = -EINVAL; 254 + if (type != 0xffff) { 255 + dev_alert(&card->pdev->dev, "firmware starts with type 0x%x\n", 256 + type); 257 + goto failed; 258 + } 259 + if (strncmp("Structured Binary Format, Softing GmbH", dat, len)) { 260 + dev_alert(&card->pdev->dev, "firmware string '%.*s' fault\n", 261 + len, dat); 262 + goto failed; 263 + } 264 + /* ok, we had a header */ 265 + while (mem < end) { 266 + ret = fw_parse(&mem, &type, &addr, &len, &dat); 267 + if (ret) 268 + goto failed; 269 + 270 + if (type == 3) { 271 + /* start address */ 272 + start_addr = addr; 273 + type_entrypoint = 1; 274 + continue; 275 + } else if (type == 1) { 276 + /* eof */ 277 + type_end = 1; 278 + break; 279 + } else if (type != 0) { 280 + dev_alert(&card->pdev->dev, 281 + "unknown record type 0x%04x\n", type); 282 + ret = -EINVAL; 283 + goto failed; 284 + } 285 + 286 + /* regular data */ 287 + for (sum = 0, j = 0; j < len; ++j) 288 + sum += dat[j]; 289 + /* work in 16bit (target) */ 290 + sum &= 0xffff; 291 + 292 + memcpy_toio(&card->dpram[card->pdat->app.offs], dat, len); 293 + iowrite32(card->pdat->app.offs + card->pdat->app.addr, 294 + &card->dpram[DPRAM_COMMAND + 2]); 295 + iowrite32(addr, &card->dpram[DPRAM_COMMAND + 6]); 296 + iowrite16(len, &card->dpram[DPRAM_COMMAND + 10]); 297 + iowrite8(1, &card->dpram[DPRAM_COMMAND + 12]); 298 + ret = softing_bootloader_command(card, 1, "loading app."); 299 + if (ret < 0) 300 + goto failed; 301 + /* verify checksum */ 302 + 
rx_sum = ioread16(&card->dpram[DPRAM_RECEIPT + 2]); 303 + if (rx_sum != sum) { 304 + dev_alert(&card->pdev->dev, "SRAM seems to be damaged" 305 + ", wanted 0x%04x, got 0x%04x\n", sum, rx_sum); 306 + ret = -EIO; 307 + goto failed; 308 + } 309 + } 310 + if (!type_end || !type_entrypoint) 311 + goto failed; 312 + /* start application in card */ 313 + iowrite32(start_addr, &card->dpram[DPRAM_COMMAND + 2]); 314 + iowrite8(1, &card->dpram[DPRAM_COMMAND + 6]); 315 + ret = softing_bootloader_command(card, 3, "start app."); 316 + if (ret < 0) 317 + goto failed; 318 + ret = 0; 319 + failed: 320 + release_firmware(fw); 321 + if (ret < 0) 322 + dev_info(&card->pdev->dev, "firmware %s failed\n", file); 323 + return ret; 324 + } 325 + 326 + static int softing_reset_chip(struct softing *card) 327 + { 328 + int ret; 329 + 330 + do { 331 + /* reset chip */ 332 + iowrite8(0, &card->dpram[DPRAM_RESET_RX_FIFO]); 333 + iowrite8(0, &card->dpram[DPRAM_RESET_RX_FIFO+1]); 334 + iowrite8(1, &card->dpram[DPRAM_RESET]); 335 + iowrite8(0, &card->dpram[DPRAM_RESET+1]); 336 + 337 + ret = softing_fct_cmd(card, 0, "reset_can"); 338 + if (!ret) 339 + break; 340 + if (signal_pending(current)) 341 + /* don't wait any longer */ 342 + break; 343 + } while (1); 344 + card->tx.pending = 0; 345 + return ret; 346 + } 347 + 348 + int softing_chip_poweron(struct softing *card) 349 + { 350 + int ret; 351 + /* sync */ 352 + ret = _softing_fct_cmd(card, 99, 0x55, "sync-a"); 353 + if (ret < 0) 354 + goto failed; 355 + 356 + ret = _softing_fct_cmd(card, 99, 0xaa, "sync-b"); 357 + if (ret < 0) 358 + goto failed; 359 + 360 + ret = softing_reset_chip(card); 361 + if (ret < 0) 362 + goto failed; 363 + /* get_serial */ 364 + ret = softing_fct_cmd(card, 43, "get_serial_number"); 365 + if (ret < 0) 366 + goto failed; 367 + card->id.serial = ioread32(&card->dpram[DPRAM_FCT_PARAM]); 368 + /* get_version */ 369 + ret = softing_fct_cmd(card, 12, "get_version"); 370 + if (ret < 0) 371 + goto failed; 372 + card->id.fw_version 
= ioread16(&card->dpram[DPRAM_FCT_PARAM + 2]); 373 + card->id.hw_version = ioread16(&card->dpram[DPRAM_FCT_PARAM + 4]); 374 + card->id.license = ioread16(&card->dpram[DPRAM_FCT_PARAM + 6]); 375 + card->id.chip[0] = ioread16(&card->dpram[DPRAM_FCT_PARAM + 8]); 376 + card->id.chip[1] = ioread16(&card->dpram[DPRAM_FCT_PARAM + 10]); 377 + return 0; 378 + failed: 379 + return ret; 380 + } 381 + 382 + static void softing_initialize_timestamp(struct softing *card) 383 + { 384 + uint64_t ovf; 385 + 386 + card->ts_ref = ktime_get(); 387 + 388 + /* 16MHz is the reference */ 389 + ovf = 0x100000000ULL * 16; 390 + do_div(ovf, card->pdat->freq ?: 16); 391 + 392 + card->ts_overflow = ktime_add_us(ktime_set(0, 0), ovf); 393 + } 394 + 395 + ktime_t softing_raw2ktime(struct softing *card, u32 raw) 396 + { 397 + uint64_t rawl; 398 + ktime_t now, real_offset; 399 + ktime_t target; 400 + ktime_t tmp; 401 + 402 + now = ktime_get(); 403 + real_offset = ktime_sub(ktime_get_real(), now); 404 + 405 + /* find nsec from card */ 406 + rawl = raw * 16; 407 + do_div(rawl, card->pdat->freq ?: 16); 408 + target = ktime_add_us(card->ts_ref, rawl); 409 + /* test for overflows */ 410 + tmp = ktime_add(target, card->ts_overflow); 411 + while (unlikely(ktime_to_ns(tmp) > ktime_to_ns(now))) { 412 + card->ts_ref = ktime_add(card->ts_ref, card->ts_overflow); 413 + target = tmp; 414 + tmp = ktime_add(target, card->ts_overflow); 415 + } 416 + return ktime_add(target, real_offset); 417 + } 418 + 419 + static inline int softing_error_reporting(struct net_device *netdev) 420 + { 421 + struct softing_priv *priv = netdev_priv(netdev); 422 + 423 + return (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING) 424 + ? 
1 : 0; 425 + } 426 + 427 + int softing_startstop(struct net_device *dev, int up) 428 + { 429 + int ret; 430 + struct softing *card; 431 + struct softing_priv *priv; 432 + struct net_device *netdev; 433 + int bus_bitmask_start; 434 + int j, error_reporting; 435 + struct can_frame msg; 436 + const struct can_bittiming *bt; 437 + 438 + priv = netdev_priv(dev); 439 + card = priv->card; 440 + 441 + if (!card->fw.up) 442 + return -EIO; 443 + 444 + ret = mutex_lock_interruptible(&card->fw.lock); 445 + if (ret) 446 + return ret; 447 + 448 + bus_bitmask_start = 0; 449 + if (dev && up) 450 + /* prepare to start this bus as well */ 451 + bus_bitmask_start |= (1 << priv->index); 452 + /* bring netdevs down */ 453 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) { 454 + netdev = card->net[j]; 455 + if (!netdev) 456 + continue; 457 + priv = netdev_priv(netdev); 458 + 459 + if (dev != netdev) 460 + netif_stop_queue(netdev); 461 + 462 + if (netif_running(netdev)) { 463 + if (dev != netdev) 464 + bus_bitmask_start |= (1 << j); 465 + priv->tx.pending = 0; 466 + priv->tx.echo_put = 0; 467 + priv->tx.echo_get = 0; 468 + /* 469 + * this bus may just have called open_candev(), 470 + * in which case calling close_candev() now 471 + * seems wasteful, 472 + * but we may also come here from busoff recovery, 473 + * in which case the echo_skb queue _needs_ flushing. 
474 + * just be sure to call open_candev() again 475 + */ 476 + close_candev(netdev); 477 + } 478 + priv->can.state = CAN_STATE_STOPPED; 479 + } 480 + card->tx.pending = 0; 481 + 482 + softing_enable_irq(card, 0); 483 + ret = softing_reset_chip(card); 484 + if (ret) 485 + goto failed; 486 + if (!bus_bitmask_start) 487 + /* no busses to be brought up */ 488 + goto card_done; 489 + 490 + if ((bus_bitmask_start & 1) && (bus_bitmask_start & 2) 491 + && (softing_error_reporting(card->net[0]) 492 + != softing_error_reporting(card->net[1]))) { 493 + dev_alert(&card->pdev->dev, 494 + "err_reporting flag differs for busses\n"); 495 + goto invalid; 496 + } 497 + error_reporting = 0; 498 + if (bus_bitmask_start & 1) { 499 + netdev = card->net[0]; 500 + priv = netdev_priv(netdev); 501 + error_reporting += softing_error_reporting(netdev); 502 + /* init chip 1 */ 503 + bt = &priv->can.bittiming; 504 + iowrite16(bt->brp, &card->dpram[DPRAM_FCT_PARAM + 2]); 505 + iowrite16(bt->sjw, &card->dpram[DPRAM_FCT_PARAM + 4]); 506 + iowrite16(bt->phase_seg1 + bt->prop_seg, 507 + &card->dpram[DPRAM_FCT_PARAM + 6]); 508 + iowrite16(bt->phase_seg2, &card->dpram[DPRAM_FCT_PARAM + 8]); 509 + iowrite16((priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES) ? 
1 : 0, 510 + &card->dpram[DPRAM_FCT_PARAM + 10]); 511 + ret = softing_fct_cmd(card, 1, "initialize_chip[0]"); 512 + if (ret < 0) 513 + goto failed; 514 + /* set mode */ 515 + iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 2]); 516 + iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 4]); 517 + ret = softing_fct_cmd(card, 3, "set_mode[0]"); 518 + if (ret < 0) 519 + goto failed; 520 + /* set filter */ 521 + /* 11bit id & mask */ 522 + iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 2]); 523 + iowrite16(0x07ff, &card->dpram[DPRAM_FCT_PARAM + 4]); 524 + /* 29bit id.lo & mask.lo & id.hi & mask.hi */ 525 + iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 6]); 526 + iowrite16(0xffff, &card->dpram[DPRAM_FCT_PARAM + 8]); 527 + iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 10]); 528 + iowrite16(0x1fff, &card->dpram[DPRAM_FCT_PARAM + 12]); 529 + ret = softing_fct_cmd(card, 7, "set_filter[0]"); 530 + if (ret < 0) 531 + goto failed; 532 + /* set output control */ 533 + iowrite16(priv->output, &card->dpram[DPRAM_FCT_PARAM + 2]); 534 + ret = softing_fct_cmd(card, 5, "set_output[0]"); 535 + if (ret < 0) 536 + goto failed; 537 + } 538 + if (bus_bitmask_start & 2) { 539 + netdev = card->net[1]; 540 + priv = netdev_priv(netdev); 541 + error_reporting += softing_error_reporting(netdev); 542 + /* init chip2 */ 543 + bt = &priv->can.bittiming; 544 + iowrite16(bt->brp, &card->dpram[DPRAM_FCT_PARAM + 2]); 545 + iowrite16(bt->sjw, &card->dpram[DPRAM_FCT_PARAM + 4]); 546 + iowrite16(bt->phase_seg1 + bt->prop_seg, 547 + &card->dpram[DPRAM_FCT_PARAM + 6]); 548 + iowrite16(bt->phase_seg2, &card->dpram[DPRAM_FCT_PARAM + 8]); 549 + iowrite16((priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES) ? 
1 : 0, 550 + &card->dpram[DPRAM_FCT_PARAM + 10]); 551 + ret = softing_fct_cmd(card, 2, "initialize_chip[1]"); 552 + if (ret < 0) 553 + goto failed; 554 + /* set mode2 */ 555 + iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 2]); 556 + iowrite16(0, &card->dpram[DPRAM_FCT_PARAM + 4]); 557 + ret = softing_fct_cmd(card, 4, "set_mode[1]"); 558 + if (ret < 0) 559 + goto failed; 560 + /* set filter2 */ 561 + /* 11bit id & mask */ 562 + iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 2]); 563 + iowrite16(0x07ff, &card->dpram[DPRAM_FCT_PARAM + 4]); 564 + /* 29bit id.lo & mask.lo & id.hi & mask.hi */ 565 + iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 6]); 566 + iowrite16(0xffff, &card->dpram[DPRAM_FCT_PARAM + 8]); 567 + iowrite16(0x0000, &card->dpram[DPRAM_FCT_PARAM + 10]); 568 + iowrite16(0x1fff, &card->dpram[DPRAM_FCT_PARAM + 12]); 569 + ret = softing_fct_cmd(card, 8, "set_filter[1]"); 570 + if (ret < 0) 571 + goto failed; 572 + /* set output control2 */ 573 + iowrite16(priv->output, &card->dpram[DPRAM_FCT_PARAM + 2]); 574 + ret = softing_fct_cmd(card, 6, "set_output[1]"); 575 + if (ret < 0) 576 + goto failed; 577 + } 578 + /* enable_error_frame */ 579 + /* 580 + * Error reporting is switched off at the moment since 581 + * the receiving of them is not yet 100% verified 582 + * This should be enabled sooner or later 583 + * 584 + if (error_reporting) { 585 + ret = softing_fct_cmd(card, 51, "enable_error_frame"); 586 + if (ret < 0) 587 + goto failed; 588 + } 589 + */ 590 + /* initialize interface */ 591 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 2]); 592 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 4]); 593 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 6]); 594 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 8]); 595 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 10]); 596 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 12]); 597 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 14]); 598 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 16]); 599 + iowrite16(1, 
&card->dpram[DPRAM_FCT_PARAM + 18]); 600 + iowrite16(1, &card->dpram[DPRAM_FCT_PARAM + 20]); 601 + ret = softing_fct_cmd(card, 17, "initialize_interface"); 602 + if (ret < 0) 603 + goto failed; 604 + /* enable_fifo */ 605 + ret = softing_fct_cmd(card, 36, "enable_fifo"); 606 + if (ret < 0) 607 + goto failed; 608 + /* enable fifo tx ack */ 609 + ret = softing_fct_cmd(card, 13, "fifo_tx_ack[0]"); 610 + if (ret < 0) 611 + goto failed; 612 + /* enable fifo tx ack2 */ 613 + ret = softing_fct_cmd(card, 14, "fifo_tx_ack[1]"); 614 + if (ret < 0) 615 + goto failed; 616 + /* start_chip */ 617 + ret = softing_fct_cmd(card, 11, "start_chip"); 618 + if (ret < 0) 619 + goto failed; 620 + iowrite8(0, &card->dpram[DPRAM_INFO_BUSSTATE]); 621 + iowrite8(0, &card->dpram[DPRAM_INFO_BUSSTATE2]); 622 + if (card->pdat->generation < 2) { 623 + iowrite8(0, &card->dpram[DPRAM_V2_IRQ_TOHOST]); 624 + /* flush the DPRAM caches */ 625 + wmb(); 626 + } 627 + 628 + softing_initialize_timestamp(card); 629 + 630 + /* 631 + * do socketcan notifications/status changes 632 + * from here, no errors should occur, or the failed: part 633 + * must be reviewed 634 + */ 635 + memset(&msg, 0, sizeof(msg)); 636 + msg.can_id = CAN_ERR_FLAG | CAN_ERR_RESTARTED; 637 + msg.can_dlc = CAN_ERR_DLC; 638 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) { 639 + if (!(bus_bitmask_start & (1 << j))) 640 + continue; 641 + netdev = card->net[j]; 642 + if (!netdev) 643 + continue; 644 + priv = netdev_priv(netdev); 645 + priv->can.state = CAN_STATE_ERROR_ACTIVE; 646 + open_candev(netdev); 647 + if (dev != netdev) { 648 + /* notify other busses on the restart */ 649 + softing_netdev_rx(netdev, &msg, ktime_set(0, 0)); 650 + ++priv->can.can_stats.restarts; 651 + } 652 + netif_wake_queue(netdev); 653 + } 654 + 655 + /* enable interrupts */ 656 + ret = softing_enable_irq(card, 1); 657 + if (ret) 658 + goto failed; 659 + card_done: 660 + mutex_unlock(&card->fw.lock); 661 + return 0; 662 + invalid: 663 + ret = -EINVAL; 664 + failed: 
665 + softing_enable_irq(card, 0); 666 + softing_reset_chip(card); 667 + mutex_unlock(&card->fw.lock); 668 + /* bring all other interfaces down */ 669 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) { 670 + netdev = card->net[j]; 671 + if (!netdev) 672 + continue; 673 + dev_close(netdev); 674 + } 675 + return ret; 676 + } 677 + 678 + int softing_default_output(struct net_device *netdev) 679 + { 680 + struct softing_priv *priv = netdev_priv(netdev); 681 + struct softing *card = priv->card; 682 + 683 + switch (priv->chip) { 684 + case 1000: 685 + return (card->pdat->generation < 2) ? 0xfb : 0xfa; 686 + case 5: 687 + return 0x60; 688 + default: 689 + return 0x40; 690 + } 691 + }
+893
drivers/net/can/softing/softing_main.c
··· 1 + /* 2 + * Copyright (C) 2008-2010 3 + * 4 + * - Kurt Van Dijck, EIA Electronics 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the version 2 of the GNU General Public License 8 + * as published by the Free Software Foundation 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program; if not, write to the Free Software 17 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 + */ 19 + 20 + #include <linux/version.h> 21 + #include <linux/module.h> 22 + #include <linux/init.h> 23 + #include <linux/interrupt.h> 24 + 25 + #include "softing.h" 26 + 27 + #define TX_ECHO_SKB_MAX (((TXMAX+1)/2)-1) 28 + 29 + /* 30 + * test whether a specific CAN netdev 31 + * is online (i.e. 
up and running, not sleeping, not bus-off) 32 + */ 33 + static inline int canif_is_active(struct net_device *netdev) 34 + { 35 + struct can_priv *can = netdev_priv(netdev); 36 + 37 + if (!netif_running(netdev)) 38 + return 0; 39 + return (can->state <= CAN_STATE_ERROR_PASSIVE); 40 + } 41 + 42 + /* reset DPRAM */ 43 + static inline void softing_set_reset_dpram(struct softing *card) 44 + { 45 + if (card->pdat->generation >= 2) { 46 + spin_lock_bh(&card->spin); 47 + iowrite8(ioread8(&card->dpram[DPRAM_V2_RESET]) & ~1, 48 + &card->dpram[DPRAM_V2_RESET]); 49 + spin_unlock_bh(&card->spin); 50 + } 51 + } 52 + 53 + static inline void softing_clr_reset_dpram(struct softing *card) 54 + { 55 + if (card->pdat->generation >= 2) { 56 + spin_lock_bh(&card->spin); 57 + iowrite8(ioread8(&card->dpram[DPRAM_V2_RESET]) | 1, 58 + &card->dpram[DPRAM_V2_RESET]); 59 + spin_unlock_bh(&card->spin); 60 + } 61 + } 62 + 63 + /* trigger the tx queue-ing */ 64 + static netdev_tx_t softing_netdev_start_xmit(struct sk_buff *skb, 65 + struct net_device *dev) 66 + { 67 + struct softing_priv *priv = netdev_priv(dev); 68 + struct softing *card = priv->card; 69 + int ret; 70 + uint8_t *ptr; 71 + uint8_t fifo_wr, fifo_rd; 72 + struct can_frame *cf = (struct can_frame *)skb->data; 73 + uint8_t buf[DPRAM_TX_SIZE]; 74 + 75 + if (can_dropped_invalid_skb(dev, skb)) 76 + return NETDEV_TX_OK; 77 + 78 + spin_lock(&card->spin); 79 + 80 + ret = NETDEV_TX_BUSY; 81 + if (!card->fw.up || 82 + (card->tx.pending >= TXMAX) || 83 + (priv->tx.pending >= TX_ECHO_SKB_MAX)) 84 + goto xmit_done; 85 + fifo_wr = ioread8(&card->dpram[DPRAM_TX_WR]); 86 + fifo_rd = ioread8(&card->dpram[DPRAM_TX_RD]); 87 + if (fifo_wr == fifo_rd) 88 + /* fifo full */ 89 + goto xmit_done; 90 + memset(buf, 0, sizeof(buf)); 91 + ptr = buf; 92 + *ptr = CMD_TX; 93 + if (cf->can_id & CAN_RTR_FLAG) 94 + *ptr |= CMD_RTR; 95 + if (cf->can_id & CAN_EFF_FLAG) 96 + *ptr |= CMD_XTD; 97 + if (priv->index) 98 + *ptr |= CMD_BUS2; 99 + ++ptr; 100 + *ptr++ = 
cf->can_dlc; 101 + *ptr++ = (cf->can_id >> 0); 102 + *ptr++ = (cf->can_id >> 8); 103 + if (cf->can_id & CAN_EFF_FLAG) { 104 + *ptr++ = (cf->can_id >> 16); 105 + *ptr++ = (cf->can_id >> 24); 106 + } else { 107 + /* increment 1, not 2 as you might think */ 108 + ptr += 1; 109 + } 110 + if (!(cf->can_id & CAN_RTR_FLAG)) 111 + memcpy(ptr, &cf->data[0], cf->can_dlc); 112 + memcpy_toio(&card->dpram[DPRAM_TX + DPRAM_TX_SIZE * fifo_wr], 113 + buf, DPRAM_TX_SIZE); 114 + if (++fifo_wr >= DPRAM_TX_CNT) 115 + fifo_wr = 0; 116 + iowrite8(fifo_wr, &card->dpram[DPRAM_TX_WR]); 117 + card->tx.last_bus = priv->index; 118 + ++card->tx.pending; 119 + ++priv->tx.pending; 120 + can_put_echo_skb(skb, dev, priv->tx.echo_put); 121 + ++priv->tx.echo_put; 122 + if (priv->tx.echo_put >= TX_ECHO_SKB_MAX) 123 + priv->tx.echo_put = 0; 124 + /* can_put_echo_skb() saves the skb, safe to return TX_OK */ 125 + ret = NETDEV_TX_OK; 126 + xmit_done: 127 + spin_unlock(&card->spin); 128 + if (card->tx.pending >= TXMAX) { 129 + int j; 130 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) { 131 + if (card->net[j]) 132 + netif_stop_queue(card->net[j]); 133 + } 134 + } 135 + if (ret != NETDEV_TX_OK) 136 + netif_stop_queue(dev); 137 + 138 + return ret; 139 + } 140 + 141 + /* 142 + * shortcut for skb delivery 143 + */ 144 + int softing_netdev_rx(struct net_device *netdev, const struct can_frame *msg, 145 + ktime_t ktime) 146 + { 147 + struct sk_buff *skb; 148 + struct can_frame *cf; 149 + 150 + skb = alloc_can_skb(netdev, &cf); 151 + if (!skb) 152 + return -ENOMEM; 153 + memcpy(cf, msg, sizeof(*msg)); 154 + skb->tstamp = ktime; 155 + return netif_rx(skb); 156 + } 157 + 158 + /* 159 + * softing_handle_1 160 + * pop 1 entry from the DPRAM queue, and process 161 + */ 162 + static int softing_handle_1(struct softing *card) 163 + { 164 + struct net_device *netdev; 165 + struct softing_priv *priv; 166 + ktime_t ktime; 167 + struct can_frame msg; 168 + int cnt = 0, lost_msg; 169 + uint8_t fifo_rd, fifo_wr, cmd; 170 + 
uint8_t *ptr; 171 + uint32_t tmp_u32; 172 + uint8_t buf[DPRAM_RX_SIZE]; 173 + 174 + memset(&msg, 0, sizeof(msg)); 175 + /* test for lost msgs */ 176 + lost_msg = ioread8(&card->dpram[DPRAM_RX_LOST]); 177 + if (lost_msg) { 178 + int j; 179 + /* reset condition */ 180 + iowrite8(0, &card->dpram[DPRAM_RX_LOST]); 181 + /* prepare msg */ 182 + msg.can_id = CAN_ERR_FLAG | CAN_ERR_CRTL; 183 + msg.can_dlc = CAN_ERR_DLC; 184 + msg.data[1] = CAN_ERR_CRTL_RX_OVERFLOW; 185 + /* 186 + * service all busses; we don't know which one it applied to, 187 + * but only service busses that are online 188 + */ 189 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) { 190 + netdev = card->net[j]; 191 + if (!netdev) 192 + continue; 193 + if (!canif_is_active(netdev)) 194 + /* a dead bus has no overflows */ 195 + continue; 196 + ++netdev->stats.rx_over_errors; 197 + softing_netdev_rx(netdev, &msg, ktime_set(0, 0)); 198 + } 199 + /* prepare for other use */ 200 + memset(&msg, 0, sizeof(msg)); 201 + ++cnt; 202 + } 203 + 204 + fifo_rd = ioread8(&card->dpram[DPRAM_RX_RD]); 205 + fifo_wr = ioread8(&card->dpram[DPRAM_RX_WR]); 206 + 207 + if (++fifo_rd >= DPRAM_RX_CNT) 208 + fifo_rd = 0; 209 + if (fifo_wr == fifo_rd) 210 + return cnt; 211 + 212 + memcpy_fromio(buf, &card->dpram[DPRAM_RX + DPRAM_RX_SIZE*fifo_rd], 213 + DPRAM_RX_SIZE); 214 + mb(); 215 + /* trigger dual port RAM */ 216 + iowrite8(fifo_rd, &card->dpram[DPRAM_RX_RD]); 217 + 218 + ptr = buf; 219 + cmd = *ptr++; 220 + if (cmd == 0xff) 221 + /* not quite useful, probably the card has gone */ 222 + return 0; 223 + netdev = card->net[0]; 224 + if (cmd & CMD_BUS2) 225 + netdev = card->net[1]; 226 + priv = netdev_priv(netdev); 227 + 228 + if (cmd & CMD_ERR) { 229 + uint8_t can_state, state; 230 + 231 + state = *ptr++; 232 + 233 + msg.can_id = CAN_ERR_FLAG; 234 + msg.can_dlc = CAN_ERR_DLC; 235 + 236 + if (state & SF_MASK_BUSOFF) { 237 + can_state = CAN_STATE_BUS_OFF; 238 + msg.can_id |= CAN_ERR_BUSOFF; 239 + state = STATE_BUSOFF; 240 + } else 
240 + if (state & SF_MASK_EPASSIVE) {
241 + can_state = CAN_STATE_ERROR_PASSIVE;
242 + msg.can_id |= CAN_ERR_CRTL;
243 + msg.data[1] = CAN_ERR_CRTL_TX_PASSIVE;
244 + state = STATE_EPASSIVE;
245 + } else {
246 + can_state = CAN_STATE_ERROR_ACTIVE;
247 + msg.can_id |= CAN_ERR_CRTL;
248 + state = STATE_EACTIVE;
249 + }
250 + /* update DPRAM */
251 + iowrite8(state, &card->dpram[priv->index ?
252 + DPRAM_INFO_BUSSTATE2 : DPRAM_INFO_BUSSTATE]);
253 + /* timestamp */
254 + tmp_u32 = le32_to_cpup((void *)ptr);
255 + ptr += 4;
256 + ktime = softing_raw2ktime(card, tmp_u32);
257 +
258 + ++netdev->stats.rx_errors;
259 + /* update internal status */
260 + if (can_state != priv->can.state) {
261 + priv->can.state = can_state;
262 + if (can_state == CAN_STATE_ERROR_PASSIVE)
263 + ++priv->can.can_stats.error_passive;
264 + else if (can_state == CAN_STATE_BUS_OFF) {
265 + /* this calls can_close_cleanup() */
266 + can_bus_off(netdev);
267 + netif_stop_queue(netdev);
268 + }
269 + /* trigger socketcan */
270 + softing_netdev_rx(netdev, &msg, ktime);
271 + }
272 +
273 + } else {
274 + if (cmd & CMD_RTR)
275 + msg.can_id |= CAN_RTR_FLAG;
276 + msg.can_dlc = get_can_dlc(*ptr++);
277 + if (cmd & CMD_XTD) {
278 + msg.can_id |= CAN_EFF_FLAG;
279 + msg.can_id |= le32_to_cpup((void *)ptr);
280 + ptr += 4;
281 + } else {
282 + msg.can_id |= le16_to_cpup((void *)ptr);
283 + ptr += 2;
284 + }
285 + /* timestamp */
286 + tmp_u32 = le32_to_cpup((void *)ptr);
287 + ptr += 4;
288 + ktime = softing_raw2ktime(card, tmp_u32);
289 + if (!(msg.can_id & CAN_RTR_FLAG))
290 + memcpy(&msg.data[0], ptr, 8);
291 + ptr += 8;
292 + /* update socket */
293 + if (cmd & CMD_ACK) {
294 + /* acknowledge, was tx msg */
295 + struct sk_buff *skb;
296 + skb = priv->can.echo_skb[priv->tx.echo_get];
297 + if (skb)
298 + skb->tstamp = ktime;
299 + can_get_echo_skb(netdev, priv->tx.echo_get);
300 + ++priv->tx.echo_get;
301 + if (priv->tx.echo_get >= TX_ECHO_SKB_MAX)
302 + priv->tx.echo_get = 0;
303 + if (priv->tx.pending)
304 + --priv->tx.pending;
305 + if (card->tx.pending)
306 + --card->tx.pending;
307 + ++netdev->stats.tx_packets;
308 + if (!(msg.can_id & CAN_RTR_FLAG))
309 + netdev->stats.tx_bytes += msg.can_dlc;
310 + } else {
311 + int ret;
312 +
313 + ret = softing_netdev_rx(netdev, &msg, ktime);
314 + if (ret == NET_RX_SUCCESS) {
315 + ++netdev->stats.rx_packets;
316 + if (!(msg.can_id & CAN_RTR_FLAG))
317 + netdev->stats.rx_bytes += msg.can_dlc;
318 + } else {
319 + ++netdev->stats.rx_dropped;
320 + }
321 + }
322 + }
323 + ++cnt;
324 + return cnt;
325 + }
326 +
327 + /*
328 + * real interrupt handler
329 + */
330 + static irqreturn_t softing_irq_thread(int irq, void *dev_id)
331 + {
332 + struct softing *card = (struct softing *)dev_id;
333 + struct net_device *netdev;
334 + struct softing_priv *priv;
335 + int j, offset, work_done;
336 +
337 + work_done = 0;
338 + spin_lock_bh(&card->spin);
339 + while (softing_handle_1(card) > 0) {
340 + ++card->irq.svc_count;
341 + ++work_done;
342 + }
343 + spin_unlock_bh(&card->spin);
344 + /* resume tx queues */
345 + offset = card->tx.last_bus;
346 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
347 + if (card->tx.pending >= TXMAX)
348 + break;
349 + netdev = card->net[(j + offset + 1) % card->pdat->nbus];
350 + if (!netdev)
351 + continue;
352 + priv = netdev_priv(netdev);
353 + if (!canif_is_active(netdev))
354 + /* it makes no sense to wake dead busses */
355 + continue;
356 + if (priv->tx.pending >= TX_ECHO_SKB_MAX)
357 + continue;
358 + ++work_done;
359 + netif_wake_queue(netdev);
360 + }
361 + return work_done ? IRQ_HANDLED : IRQ_NONE;
362 + }
363 +
364 + /*
365 + * interrupt routines:
366 + * schedule the 'real interrupt handler'
367 + */
368 + static irqreturn_t softing_irq_v2(int irq, void *dev_id)
369 + {
370 + struct softing *card = (struct softing *)dev_id;
371 + uint8_t ir;
372 +
373 + ir = ioread8(&card->dpram[DPRAM_V2_IRQ_TOHOST]);
374 + iowrite8(0, &card->dpram[DPRAM_V2_IRQ_TOHOST]);
375 + return (1 == ir) ? IRQ_WAKE_THREAD : IRQ_NONE;
376 + }
377 +
378 + static irqreturn_t softing_irq_v1(int irq, void *dev_id)
379 + {
380 + struct softing *card = (struct softing *)dev_id;
381 + uint8_t ir;
382 +
383 + ir = ioread8(&card->dpram[DPRAM_IRQ_TOHOST]);
384 + iowrite8(0, &card->dpram[DPRAM_IRQ_TOHOST]);
385 + return ir ? IRQ_WAKE_THREAD : IRQ_NONE;
386 + }
387 +
388 + /*
389 + * netdev/candev inter-operability
390 + */
391 + static int softing_netdev_open(struct net_device *ndev)
392 + {
393 + int ret;
394 +
395 + /* check or determine and set bittime */
396 + ret = open_candev(ndev);
397 + if (!ret)
398 + ret = softing_startstop(ndev, 1);
399 + return ret;
400 + }
401 +
402 + static int softing_netdev_stop(struct net_device *ndev)
403 + {
404 + int ret;
405 +
406 + netif_stop_queue(ndev);
407 +
408 + /* softing cycle does close_candev() */
409 + ret = softing_startstop(ndev, 0);
410 + return ret;
411 + }
412 +
413 + static int softing_candev_set_mode(struct net_device *ndev, enum can_mode mode)
414 + {
415 + int ret;
416 +
417 + switch (mode) {
418 + case CAN_MODE_START:
419 + /* softing_startstop does close_candev() */
420 + ret = softing_startstop(ndev, 1);
421 + return ret;
422 + case CAN_MODE_STOP:
423 + case CAN_MODE_SLEEP:
424 + return -EOPNOTSUPP;
425 + }
426 + return 0;
427 + }
428 +
429 + /*
430 + * Softing device management helpers
431 + */
432 + int softing_enable_irq(struct softing *card, int enable)
433 + {
434 + int ret;
435 +
436 + if (!card->irq.nr) {
437 + return 0;
438 + } else if (card->irq.requested && !enable) {
439 + free_irq(card->irq.nr, card);
440 + card->irq.requested = 0;
441 + } else if (!card->irq.requested && enable) {
442 + ret = request_threaded_irq(card->irq.nr,
443 + (card->pdat->generation >= 2) ?
444 + softing_irq_v2 : softing_irq_v1,
445 + softing_irq_thread, IRQF_SHARED,
446 + dev_name(&card->pdev->dev), card);
447 + if (ret) {
448 + dev_alert(&card->pdev->dev,
449 + "request_threaded_irq(%u) failed\n",
450 + card->irq.nr);
451 + return ret;
452 + }
453 + card->irq.requested = 1;
454 + }
455 + return 0;
456 + }
457 +
458 + static void softing_card_shutdown(struct softing *card)
459 + {
460 + int fw_up = 0;
461 +
462 + if (mutex_lock_interruptible(&card->fw.lock))
463 + /* return -ERESTARTSYS */;
464 + fw_up = card->fw.up;
465 + card->fw.up = 0;
466 +
467 + if (card->irq.requested && card->irq.nr) {
468 + free_irq(card->irq.nr, card);
469 + card->irq.requested = 0;
470 + }
471 + if (fw_up) {
472 + if (card->pdat->enable_irq)
473 + card->pdat->enable_irq(card->pdev, 0);
474 + softing_set_reset_dpram(card);
475 + if (card->pdat->reset)
476 + card->pdat->reset(card->pdev, 1);
477 + }
478 + mutex_unlock(&card->fw.lock);
479 + }
480 +
481 + static __devinit int softing_card_boot(struct softing *card)
482 + {
483 + int ret, j;
484 + static const uint8_t stream[] = {
485 + 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, };
486 + unsigned char back[sizeof(stream)];
487 +
488 + if (mutex_lock_interruptible(&card->fw.lock))
489 + return -ERESTARTSYS;
490 + if (card->fw.up) {
491 + mutex_unlock(&card->fw.lock);
492 + return 0;
493 + }
494 + /* reset board */
495 + if (card->pdat->enable_irq)
496 + card->pdat->enable_irq(card->pdev, 1);
497 + /* boot card */
498 + softing_set_reset_dpram(card);
499 + if (card->pdat->reset)
500 + card->pdat->reset(card->pdev, 1);
501 + for (j = 0; (j + sizeof(stream)) < card->dpram_size;
502 + j += sizeof(stream)) {
503 +
504 + memcpy_toio(&card->dpram[j], stream, sizeof(stream));
505 + /* flush IO cache */
506 + mb();
507 + memcpy_fromio(back, &card->dpram[j], sizeof(stream));
508 +
509 + if (!memcmp(back, stream, sizeof(stream)))
510 + continue;
511 + /* memory is not equal */
512 + dev_alert(&card->pdev->dev, "dpram failed at 0x%04x\n", j);
513 + ret = -EIO;
514 + goto failed;
515 + }
516 + wmb();
517 + /* load boot firmware */
518 + ret = softing_load_fw(card->pdat->boot.fw, card, card->dpram,
519 + card->dpram_size,
520 + card->pdat->boot.offs - card->pdat->boot.addr);
521 + if (ret < 0)
522 + goto failed;
523 + /* load loader firmware */
524 + ret = softing_load_fw(card->pdat->load.fw, card, card->dpram,
525 + card->dpram_size,
526 + card->pdat->load.offs - card->pdat->load.addr);
527 + if (ret < 0)
528 + goto failed;
529 +
530 + if (card->pdat->reset)
531 + card->pdat->reset(card->pdev, 0);
532 + softing_clr_reset_dpram(card);
533 + ret = softing_bootloader_command(card, 0, "card boot");
534 + if (ret < 0)
535 + goto failed;
536 + ret = softing_load_app_fw(card->pdat->app.fw, card);
537 + if (ret < 0)
538 + goto failed;
539 +
540 + ret = softing_chip_poweron(card);
541 + if (ret < 0)
542 + goto failed;
543 +
544 + card->fw.up = 1;
545 + mutex_unlock(&card->fw.lock);
546 + return 0;
547 + failed:
548 + card->fw.up = 0;
549 + if (card->pdat->enable_irq)
550 + card->pdat->enable_irq(card->pdev, 0);
551 + softing_set_reset_dpram(card);
552 + if (card->pdat->reset)
553 + card->pdat->reset(card->pdev, 1);
554 + mutex_unlock(&card->fw.lock);
555 + return ret;
556 + }
557 +
558 + /*
559 + * netdev sysfs
560 + */
561 + static ssize_t show_channel(struct device *dev, struct device_attribute *attr,
562 + char *buf)
563 + {
564 + struct net_device *ndev = to_net_dev(dev);
565 + struct softing_priv *priv = netdev2softing(ndev);
566 +
567 + return sprintf(buf, "%i\n", priv->index);
568 + }
569 +
570 + static ssize_t show_chip(struct device *dev, struct device_attribute *attr,
571 + char *buf)
572 + {
573 + struct net_device *ndev = to_net_dev(dev);
574 + struct softing_priv *priv = netdev2softing(ndev);
575 +
576 + return sprintf(buf, "%i\n", priv->chip);
577 + }
578 +
579 + static ssize_t show_output(struct device *dev, struct device_attribute *attr,
580 + char *buf)
581 + {
582 + struct net_device *ndev = to_net_dev(dev);
583 + struct softing_priv *priv = netdev2softing(ndev);
584 +
585 + return sprintf(buf, "0x%02x\n", priv->output);
586 + }
587 +
588 + static ssize_t store_output(struct device *dev, struct device_attribute *attr,
589 + const char *buf, size_t count)
590 + {
591 + struct net_device *ndev = to_net_dev(dev);
592 + struct softing_priv *priv = netdev2softing(ndev);
593 + struct softing *card = priv->card;
594 + unsigned long val;
595 + int ret;
596 +
597 + ret = strict_strtoul(buf, 0, &val);
598 + if (ret < 0)
599 + return ret;
600 + val &= 0xFF;
601 +
602 + ret = mutex_lock_interruptible(&card->fw.lock);
603 + if (ret)
604 + return -ERESTARTSYS;
605 + if (netif_running(ndev)) {
606 + mutex_unlock(&card->fw.lock);
607 + return -EBUSY;
608 + }
609 + priv->output = val;
610 + mutex_unlock(&card->fw.lock);
611 + return count;
612 + }
613 +
614 + static const DEVICE_ATTR(channel, S_IRUGO, show_channel, NULL);
615 + static const DEVICE_ATTR(chip, S_IRUGO, show_chip, NULL);
616 + static const DEVICE_ATTR(output, S_IRUGO | S_IWUSR, show_output, store_output);
617 +
618 + static const struct attribute *const netdev_sysfs_attrs[] = {
619 + &dev_attr_channel.attr,
620 + &dev_attr_chip.attr,
621 + &dev_attr_output.attr,
622 + NULL,
623 + };
624 + static const struct attribute_group netdev_sysfs_group = {
625 + .name = NULL,
626 + .attrs = (struct attribute **)netdev_sysfs_attrs,
627 + };
628 +
629 + static const struct net_device_ops softing_netdev_ops = {
630 + .ndo_open = softing_netdev_open,
631 + .ndo_stop = softing_netdev_stop,
632 + .ndo_start_xmit = softing_netdev_start_xmit,
633 + };
634 +
635 + static const struct can_bittiming_const softing_btr_const = {
636 + .tseg1_min = 1,
637 + .tseg1_max = 16,
638 + .tseg2_min = 1,
639 + .tseg2_max = 8,
640 + .sjw_max = 4, /* overruled */
641 + .brp_min = 1,
642 + .brp_max = 32, /* overruled */
643 + .brp_inc = 1,
644 + };
645 +
646 +
647 + static __devinit struct net_device *softing_netdev_create(struct softing *card,
648 + uint16_t chip_id)
649 + {
650 + struct net_device *netdev;
651 + struct softing_priv *priv;
652 +
653 + netdev = alloc_candev(sizeof(*priv), TX_ECHO_SKB_MAX);
654 + if (!netdev) {
655 + dev_alert(&card->pdev->dev, "alloc_candev failed\n");
656 + return NULL;
657 + }
658 + priv = netdev_priv(netdev);
659 + priv->netdev = netdev;
660 + priv->card = card;
661 + memcpy(&priv->btr_const, &softing_btr_const, sizeof(priv->btr_const));
662 + priv->btr_const.brp_max = card->pdat->max_brp;
663 + priv->btr_const.sjw_max = card->pdat->max_sjw;
664 + priv->can.bittiming_const = &priv->btr_const;
665 + priv->can.clock.freq = 8000000;
666 + priv->chip = chip_id;
667 + priv->output = softing_default_output(netdev);
668 + SET_NETDEV_DEV(netdev, &card->pdev->dev);
669 +
670 + netdev->flags |= IFF_ECHO;
671 + netdev->netdev_ops = &softing_netdev_ops;
672 + priv->can.do_set_mode = softing_candev_set_mode;
673 + priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES;
674 +
675 + return netdev;
676 + }
677 +
678 + static __devinit int softing_netdev_register(struct net_device *netdev)
679 + {
680 + int ret;
681 +
682 + netdev->sysfs_groups[0] = &netdev_sysfs_group;
683 + ret = register_candev(netdev);
684 + if (ret) {
685 + dev_alert(&netdev->dev, "register failed\n");
686 + return ret;
687 + }
688 + return 0;
689 + }
690 +
691 + static void softing_netdev_cleanup(struct net_device *netdev)
692 + {
693 + unregister_candev(netdev);
694 + free_candev(netdev);
695 + }
696 +
697 + /*
698 + * sysfs for Platform device
699 + */
700 + #define DEV_ATTR_RO(name, member) \
701 + static ssize_t show_##name(struct device *dev, \
702 + struct device_attribute *attr, char *buf) \
703 + { \
704 + struct softing *card = platform_get_drvdata(to_platform_device(dev)); \
705 + return sprintf(buf, "%u\n", card->member); \
706 + } \
707 + static DEVICE_ATTR(name, 0444, show_##name, NULL)
708 +
709 + #define DEV_ATTR_RO_STR(name, member) \
710 + static ssize_t show_##name(struct device *dev, \
711 + struct device_attribute *attr, char *buf) \
712 + { \
713 + struct softing *card = platform_get_drvdata(to_platform_device(dev)); \
714 + return sprintf(buf, "%s\n", card->member); \
715 + } \
716 + static DEVICE_ATTR(name, 0444, show_##name, NULL)
717 +
718 + DEV_ATTR_RO(serial, id.serial);
719 + DEV_ATTR_RO_STR(firmware, pdat->app.fw);
720 + DEV_ATTR_RO(firmware_version, id.fw_version);
721 + DEV_ATTR_RO_STR(hardware, pdat->name);
722 + DEV_ATTR_RO(hardware_version, id.hw_version);
723 + DEV_ATTR_RO(license, id.license);
724 + DEV_ATTR_RO(frequency, id.freq);
725 + DEV_ATTR_RO(txpending, tx.pending);
726 +
727 + static struct attribute *softing_pdev_attrs[] = {
728 + &dev_attr_serial.attr,
729 + &dev_attr_firmware.attr,
730 + &dev_attr_firmware_version.attr,
731 + &dev_attr_hardware.attr,
732 + &dev_attr_hardware_version.attr,
733 + &dev_attr_license.attr,
734 + &dev_attr_frequency.attr,
735 + &dev_attr_txpending.attr,
736 + NULL,
737 + };
738 +
739 + static const struct attribute_group softing_pdev_group = {
740 + .name = NULL,
741 + .attrs = softing_pdev_attrs,
742 + };
743 +
744 + /*
745 + * platform driver
746 + */
747 + static __devexit int softing_pdev_remove(struct platform_device *pdev)
748 + {
749 + struct softing *card = platform_get_drvdata(pdev);
750 + int j;
751 +
752 + /* first, disable card */
753 + softing_card_shutdown(card);
754 +
755 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
756 + if (!card->net[j])
757 + continue;
758 + softing_netdev_cleanup(card->net[j]);
759 + card->net[j] = NULL;
760 + }
761 + sysfs_remove_group(&pdev->dev.kobj, &softing_pdev_group);
762 +
763 + iounmap(card->dpram);
764 + kfree(card);
765 + return 0;
766 + }
767 +
768 + static __devinit int softing_pdev_probe(struct platform_device *pdev)
769 + {
770 + const struct softing_platform_data *pdat = pdev->dev.platform_data;
771 + struct softing *card;
772 + struct net_device *netdev;
773 + struct softing_priv *priv;
774 + struct resource *pres;
775 + int ret;
776 + int j;
777 +
778 + if (!pdat) {
779 + dev_warn(&pdev->dev, "no platform data\n");
780 + return -EINVAL;
781 + }
782 + if (pdat->nbus > ARRAY_SIZE(card->net)) {
783 + dev_warn(&pdev->dev, "%u nets??\n", pdat->nbus);
784 + return -EINVAL;
785 + }
786 +
787 + card = kzalloc(sizeof(*card), GFP_KERNEL);
788 + if (!card)
789 + return -ENOMEM;
790 + card->pdat = pdat;
791 + card->pdev = pdev;
792 + platform_set_drvdata(pdev, card);
793 + mutex_init(&card->fw.lock);
794 + spin_lock_init(&card->spin);
795 +
796 + ret = -EINVAL;
797 + pres = platform_get_resource(pdev, IORESOURCE_MEM, 0);
798 + if (!pres)
799 + goto platform_resource_failed;
800 + card->dpram_phys = pres->start;
801 + card->dpram_size = pres->end - pres->start + 1;
802 + card->dpram = ioremap_nocache(card->dpram_phys, card->dpram_size);
803 + if (!card->dpram) {
804 + dev_alert(&card->pdev->dev, "dpram ioremap failed\n");
805 + goto ioremap_failed;
806 + }
807 +
808 + pres = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
809 + if (pres)
810 + card->irq.nr = pres->start;
811 +
812 + /* reset card */
813 + ret = softing_card_boot(card);
814 + if (ret < 0) {
815 + dev_alert(&pdev->dev, "failed to boot\n");
816 + goto boot_failed;
817 + }
818 +
819 + /* only now, the chips are known */
820 + card->id.freq = card->pdat->freq;
821 +
822 + ret = sysfs_create_group(&pdev->dev.kobj, &softing_pdev_group);
823 + if (ret < 0) {
824 + dev_alert(&card->pdev->dev, "sysfs failed\n");
825 + goto sysfs_failed;
826 + }
827 +
828 + ret = -ENOMEM;
829 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
830 + card->net[j] = netdev =
831 + softing_netdev_create(card, card->id.chip[j]);
832 + if (!netdev) {
833 + dev_alert(&pdev->dev, "failed to make can[%i]", j);
834 + goto netdev_failed;
835 + }
836 + priv = netdev_priv(card->net[j]);
837 + priv->index = j;
838 + ret = softing_netdev_register(netdev);
839 + if (ret) {
840 + free_candev(netdev);
841 + card->net[j] = NULL;
842 + dev_alert(&card->pdev->dev,
843 + "failed to register can[%i]\n", j);
844 + goto netdev_failed;
845 + }
846 + }
847 + dev_info(&card->pdev->dev, "%s ready.\n", card->pdat->name);
848 + return 0;
849 +
850 + netdev_failed:
851 + for (j = 0; j < ARRAY_SIZE(card->net); ++j) {
852 + if (!card->net[j])
853 + continue;
854 + softing_netdev_cleanup(card->net[j]);
855 + }
856 + sysfs_remove_group(&pdev->dev.kobj, &softing_pdev_group);
857 + sysfs_failed:
858 + softing_card_shutdown(card);
859 + boot_failed:
860 + iounmap(card->dpram);
861 + ioremap_failed:
862 + platform_resource_failed:
863 + kfree(card);
864 + return ret;
865 + }
866 +
867 + static struct platform_driver softing_driver = {
868 + .driver = {
869 + .name = "softing",
870 + .owner = THIS_MODULE,
871 + },
872 + .probe = softing_pdev_probe,
873 + .remove = __devexit_p(softing_pdev_remove),
874 + };
875 +
876 + MODULE_ALIAS("platform:softing");
877 +
878 + static int __init softing_start(void)
879 + {
880 + return platform_driver_register(&softing_driver);
881 + }
882 +
883 + static void __exit softing_stop(void)
884 + {
885 + platform_driver_unregister(&softing_driver);
886 + }
887 +
888 + module_init(softing_start);
889 + module_exit(softing_stop);
890 +
891 + MODULE_DESCRIPTION("Softing DPRAM CAN driver");
892 + MODULE_AUTHOR("Kurt Van Dijck <kurt.van.dijck@eia.be>");
893 + MODULE_LICENSE("GPL v2");
+40
drivers/net/can/softing/softing_platform.h
···
1 +
2 + #include <linux/platform_device.h>
3 +
4 + #ifndef _SOFTING_DEVICE_H_
5 + #define _SOFTING_DEVICE_H_
6 +
7 + /* softing firmware directory prefix */
8 + #define fw_dir "softing-4.6/"
9 +
10 + struct softing_platform_data {
11 + unsigned int manf;
12 + unsigned int prod;
13 + /*
14 + * generation
15 + * 1st with NEC or SJA1000
16 + * 8bit, exclusive interrupt, ...
17 + * 2nd only SJA1000
18 + * 16bit, shared interrupt
19 + */
20 + int generation;
21 + int nbus; /* # busses on device */
22 + unsigned int freq; /* operating frequency in Hz */
23 + unsigned int max_brp;
24 + unsigned int max_sjw;
25 + unsigned long dpram_size;
26 + const char *name;
27 + struct {
28 + unsigned long offs;
29 + unsigned long addr;
30 + const char *fw;
31 + } boot, load, app;
32 + /*
33 + * reset() function
34 + * bring pdev in or out of reset, depending on value
35 + */
36 + int (*reset)(struct platform_device *pdev, int value);
37 + int (*enable_irq)(struct platform_device *pdev, int value);
38 + };
39 +
40 + #endif
+6 -6
drivers/net/cnic.c
···
699 699 static void cnic_setup_page_tbl(struct cnic_dev *dev, struct cnic_dma *dma)
700 700 {
701 701 int i;
702 - u32 *page_table = dma->pgtbl;
702 + __le32 *page_table = (__le32 *) dma->pgtbl;
703 703
704 704 for (i = 0; i < dma->num_pages; i++) {
705 705 /* Each entry needs to be in big endian format. */
706 - *page_table = (u32) ((u64) dma->pg_map_arr[i] >> 32);
706 + *page_table = cpu_to_le32((u64) dma->pg_map_arr[i] >> 32);
707 707 page_table++;
708 - *page_table = (u32) dma->pg_map_arr[i];
708 + *page_table = cpu_to_le32(dma->pg_map_arr[i] & 0xffffffff);
709 709 page_table++;
710 710 }
711 711 }
···
713 713 static void cnic_setup_page_tbl_le(struct cnic_dev *dev, struct cnic_dma *dma)
714 714 {
715 715 int i;
716 - u32 *page_table = dma->pgtbl;
716 + __le32 *page_table = (__le32 *) dma->pgtbl;
717 717
718 718 for (i = 0; i < dma->num_pages; i++) {
719 719 /* Each entry needs to be in little endian format. */
720 - *page_table = dma->pg_map_arr[i] & 0xffffffff;
720 + *page_table = cpu_to_le32(dma->pg_map_arr[i] & 0xffffffff);
721 721 page_table++;
722 - *page_table = (u32) ((u64) dma->pg_map_arr[i] >> 32);
722 + *page_table = cpu_to_le32((u64) dma->pg_map_arr[i] >> 32);
723 723 page_table++;
724 724 }
725 725 }
+2 -1
drivers/net/cxgb4/cxgb4_main.c
···
2710 2710 struct port_info *pi = netdev_priv(dev);
2711 2711 struct adapter *adapter = pi->adapter;
2712 2712
2713 + netif_carrier_off(dev);
2714 +
2713 2715 if (!(adapter->flags & FULL_INIT_DONE)) {
2714 2716 err = cxgb_up(adapter);
2715 2717 if (err < 0)
···
3663 3661 pi->xact_addr_filt = -1;
3664 3662 pi->rx_offload = RX_CSO;
3665 3663 pi->port_id = i;
3666 - netif_carrier_off(netdev);
3667 3664 netdev->irq = pdev->irq;
3668 3665
3669 3666 netdev->features |= NETIF_F_SG | TSO_FLAGS;
+1 -1
drivers/net/pch_gbe/pch_gbe_main.c
···
2247 2247 struct net_device *netdev = pci_get_drvdata(pdev);
2248 2248 struct pch_gbe_adapter *adapter = netdev_priv(netdev);
2249 2249
2250 - flush_scheduled_work();
2250 + cancel_work_sync(&adapter->reset_task);
2251 2251 unregister_netdev(netdev);
2252 2252
2253 2253 pch_gbe_hal_phy_hw_reset(&adapter->hw);
+10 -85
drivers/net/tg3.c
···
60 60 #define BAR_0 0
61 61 #define BAR_2 2
62 62
63 - #if defined(CONFIG_VLAN_8021Q) || defined(CONFIG_VLAN_8021Q_MODULE)
64 - #define TG3_VLAN_TAG_USED 1
65 - #else
66 - #define TG3_VLAN_TAG_USED 0
67 - #endif
68 -
69 63 #include "tg3.h"
70 64
71 65 #define DRV_MODULE_NAME "tg3"
···
127 133 #define TG3_TX_RING_BYTES (sizeof(struct tg3_tx_buffer_desc) * \
128 134 TG3_TX_RING_SIZE)
129 135 #define NEXT_TX(N) (((N) + 1) & (TG3_TX_RING_SIZE - 1))
130 -
131 - #define TG3_RX_DMA_ALIGN 16
132 - #define TG3_RX_HEADROOM ALIGN(VLAN_HLEN, TG3_RX_DMA_ALIGN)
133 136
134 137 #define TG3_DMA_BYTE_ENAB 64
135 138
···
4713 4722 struct sk_buff *skb;
4714 4723 dma_addr_t dma_addr;
4715 4724 u32 opaque_key, desc_idx, *post_ptr;
4716 - bool hw_vlan __maybe_unused = false;
4717 - u16 vtag __maybe_unused = 0;
4718 4725
4719 4726 desc_idx = desc->opaque & RXD_OPAQUE_INDEX_MASK;
4720 4727 opaque_key = desc->opaque & RXD_OPAQUE_RING_MASK;
···
4771 4782 tg3_recycle_rx(tnapi, tpr, opaque_key,
4772 4783 desc_idx, *post_ptr);
4773 4784
4774 - copy_skb = netdev_alloc_skb(tp->dev, len + VLAN_HLEN +
4785 + copy_skb = netdev_alloc_skb(tp->dev, len +
4775 4786 TG3_RAW_IP_ALIGN);
4776 4787 if (copy_skb == NULL)
4777 4788 goto drop_it_no_recycle;
4778 4789
4779 - skb_reserve(copy_skb, TG3_RAW_IP_ALIGN + VLAN_HLEN);
4790 + skb_reserve(copy_skb, TG3_RAW_IP_ALIGN);
4780 4791 skb_put(copy_skb, len);
4781 4792 pci_dma_sync_single_for_cpu(tp->pdev, dma_addr, len, PCI_DMA_FROMDEVICE);
4782 4793 skb_copy_from_linear_data(skb, copy_skb->data, len);
···
4803 4814 }
4804 4815
4805 4816 if (desc->type_flags & RXD_FLAG_VLAN &&
4806 - !(tp->rx_mode & RX_MODE_KEEP_VLAN_TAG)) {
4807 - vtag = desc->err_vlan & RXD_VLAN_MASK;
4808 - #if TG3_VLAN_TAG_USED
4809 - if (tp->vlgrp)
4810 - hw_vlan = true;
4811 - else
4812 - #endif
4813 - {
4814 - struct vlan_ethhdr *ve = (struct vlan_ethhdr *)
4815 - __skb_push(skb, VLAN_HLEN);
4817 + !(tp->rx_mode & RX_MODE_KEEP_VLAN_TAG))
4818 + __vlan_hwaccel_put_tag(skb,
4819 + desc->err_vlan & RXD_VLAN_MASK);
4816 4820
4817 - memmove(ve, skb->data + VLAN_HLEN,
4818 - ETH_ALEN * 2);
4819 - ve->h_vlan_proto = htons(ETH_P_8021Q);
4820 - ve->h_vlan_TCI = htons(vtag);
4821 - }
4822 - }
4823 -
4824 - #if TG3_VLAN_TAG_USED
4825 - if (hw_vlan)
4826 - vlan_gro_receive(&tnapi->napi, tp->vlgrp, vtag, skb);
4827 - else
4828 - #endif
4829 - napi_gro_receive(&tnapi->napi, skb);
4821 + napi_gro_receive(&tnapi->napi, skb);
4830 4822
4831 4823 received++;
4832 4824 budget--;
···
5710 5740 base_flags |= TXD_FLAG_TCPUDP_CSUM;
5711 5741 }
5712 5742
5713 - #if TG3_VLAN_TAG_USED
5714 5743 if (vlan_tx_tag_present(skb))
5715 5744 base_flags |= (TXD_FLAG_VLAN |
5716 5745 (vlan_tx_tag_get(skb) << 16));
5717 - #endif
5718 5746
5719 5747 len = skb_headlen(skb);
5720 5748
···
5954 5986 }
5955 5987 }
5956 5988 }
5957 - #if TG3_VLAN_TAG_USED
5989 +
5958 5990 if (vlan_tx_tag_present(skb))
5959 5991 base_flags |= (TXD_FLAG_VLAN |
5960 5992 (vlan_tx_tag_get(skb) << 16));
5961 - #endif
5962 5993
5963 5994 if ((tp->tg3_flags3 & TG3_FLG3_USE_JUMBO_BDFLAG) &&
5964 5995 !mss && skb->len > VLAN_ETH_FRAME_LEN)
···
9499 9532 rx_mode = tp->rx_mode & ~(RX_MODE_PROMISC |
9500 9533 RX_MODE_KEEP_VLAN_TAG);
9501 9534
9535 + #if !defined(CONFIG_VLAN_8021Q) && !defined(CONFIG_VLAN_8021Q_MODULE)
9502 9536 /* When ASF is in use, we always keep the RX_MODE_KEEP_VLAN_TAG
9503 9537 * flag clear.
9504 - */
9505 - #if TG3_VLAN_TAG_USED
9506 - if (!tp->vlgrp &&
9507 - !(tp->tg3_flags & TG3_FLAG_ENABLE_ASF))
9508 - rx_mode |= RX_MODE_KEEP_VLAN_TAG;
9509 - #else
9510 - /* By definition, VLAN is disabled always in this
9511 - * case.
9512 9538 */
9513 9539 if (!(tp->tg3_flags & TG3_FLAG_ENABLE_ASF))
9514 9540 rx_mode |= RX_MODE_KEEP_VLAN_TAG;
···
11189 11229 }
11190 11230 return -EOPNOTSUPP;
11191 11231 }
11192 -
11193 - #if TG3_VLAN_TAG_USED
11194 - static void tg3_vlan_rx_register(struct net_device *dev, struct vlan_group *grp)
11195 - {
11196 - struct tg3 *tp = netdev_priv(dev);
11197 -
11198 - if (!netif_running(dev)) {
11199 - tp->vlgrp = grp;
11200 - return;
11201 - }
11202 -
11203 - tg3_netif_stop(tp);
11204 -
11205 - tg3_full_lock(tp, 0);
11206 -
11207 - tp->vlgrp = grp;
11208 -
11209 - /* Update RX_MODE_KEEP_VLAN_TAG bit in RX_MODE register. */
11210 - __tg3_set_rx_mode(dev);
11211 -
11212 - tg3_netif_start(tp);
11213 -
11214 - tg3_full_unlock(tp);
11215 - }
11216 - #endif
11217 11232
11218 11233 static int tg3_get_coalesce(struct net_device *dev, struct ethtool_coalesce *ec)
11219 11234 {
···
13001 13066
13002 13067 static void inline vlan_features_add(struct net_device *dev, unsigned long flags)
13003 13068 {
13004 - #if TG3_VLAN_TAG_USED
13005 13069 dev->vlan_features |= flags;
13006 - #endif
13007 13070 }
13008 13071
13009 13072 static inline u32 tg3_rx_ret_ring_size(struct tg3 *tp)
···
13794 13861 else
13795 13862 tp->tg3_flags &= ~TG3_FLAG_POLL_SERDES;
13796 13863
13797 - tp->rx_offset = NET_IP_ALIGN + TG3_RX_HEADROOM;
13864 + tp->rx_offset = NET_IP_ALIGN;
13798 13865 tp->rx_copy_thresh = TG3_RX_COPY_THRESHOLD;
13799 13866 if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5701 &&
13800 13867 (tp->tg3_flags & TG3_FLAG_PCIX_MODE) != 0) {
13801 - tp->rx_offset -= NET_IP_ALIGN;
13868 + tp->rx_offset = 0;
13802 13869 #ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
13803 13870 tp->rx_copy_thresh = ~(u16)0;
13804 13871 #endif
···
14562 14629 .ndo_do_ioctl = tg3_ioctl,
14563 14630 .ndo_tx_timeout = tg3_tx_timeout,
14564 14631 .ndo_change_mtu = tg3_change_mtu,
14565 - #if TG3_VLAN_TAG_USED
14566 - .ndo_vlan_rx_register = tg3_vlan_rx_register,
14567 - #endif
14568 14632 #ifdef CONFIG_NET_POLL_CONTROLLER
14569 14633 .ndo_poll_controller = tg3_poll_controller,
14570 14634 #endif
···
14578 14648 .ndo_do_ioctl = tg3_ioctl,
14579 14649 .ndo_tx_timeout = tg3_tx_timeout,
14580 14650 .ndo_change_mtu = tg3_change_mtu,
14581 - #if TG3_VLAN_TAG_USED
14582 - .ndo_vlan_rx_register = tg3_vlan_rx_register,
14583 - #endif
14584 14651 #ifdef CONFIG_NET_POLL_CONTROLLER
14585 14652 .ndo_poll_controller = tg3_poll_controller,
14586 14653 #endif
···
14627 14700
14628 14701 SET_NETDEV_DEV(dev, &pdev->dev);
14629 14702
14630 - #if TG3_VLAN_TAG_USED
14631 14703 dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
14632 - #endif
14633 14704
14634 14705 tp = netdev_priv(dev);
14635 14706 tp->pdev = pdev;
-3
drivers/net/tg3.h
···
2808 2808 u32 rx_std_max_post;
2809 2809 u32 rx_offset;
2810 2810 u32 rx_pkt_map_sz;
2811 - #if TG3_VLAN_TAG_USED
2812 - struct vlan_group *vlgrp;
2813 - #endif
2814 2811
2815 2812
2816 2813 /* begin "everything else" cacheline(s) section */
+1
drivers/net/usb/kaweth.c
···
406 406
407 407 if (fw->size > KAWETH_FIRMWARE_BUF_SIZE) {
408 408 err("Firmware too big: %zu", fw->size);
409 + release_firmware(fw);
409 410 return -ENOSPC;
410 411 }
411 412 data_len = fw->size;
+5 -1
drivers/net/wireless/ath/ath9k/hw.c
···
369 369 else
370 370 ah->config.ht_enable = 0;
371 371
372 + /* PAPRD needs some more work to be enabled */
373 + ah->config.paprd_disable = 1;
374 +
372 375 ah->config.rx_intr_mitigation = true;
373 376 ah->config.pcieSerDesWrite = true;
374 377
···
1936 1933 pCap->rx_status_len = sizeof(struct ar9003_rxs);
1937 1934 pCap->tx_desc_len = sizeof(struct ar9003_txc);
1938 1935 pCap->txs_len = sizeof(struct ar9003_txs);
1939 - if (ah->eep_ops->get_eeprom(ah, EEP_PAPRD))
1936 + if (!ah->config.paprd_disable &&
1937 + ah->eep_ops->get_eeprom(ah, EEP_PAPRD))
1940 1938 pCap->hw_caps |= ATH9K_HW_CAP_PAPRD;
1941 1939 } else {
1942 1940 pCap->tx_desc_len = sizeof(struct ath_desc);
+1
drivers/net/wireless/ath/ath9k/hw.h
···
225 225 u32 pcie_waen;
226 226 u8 analog_shiftreg;
227 227 u8 ht_enable;
228 + u8 paprd_disable;
228 229 u32 ofdm_trig_low;
229 230 u32 ofdm_trig_high;
230 231 u32 cck_trig_high;
+5 -3
drivers/net/wireless/ath/ath9k/main.c
···
592 592 u32 status = sc->intrstatus;
593 593 u32 rxmask;
594 594
595 - ath9k_ps_wakeup(sc);
596 -
597 595 if (status & ATH9K_INT_FATAL) {
598 596 ath_reset(sc, true);
599 - ath9k_ps_restore(sc);
600 597 return;
601 598 }
602 599
600 + ath9k_ps_wakeup(sc);
603 601 spin_lock(&sc->sc_pcu_lock);
604 602
605 603 if (!ath9k_hw_check_alive(ah))
···
967 969 /* Stop ANI */
968 970 del_timer_sync(&common->ani.timer);
969 971
972 + ath9k_ps_wakeup(sc);
970 973 spin_lock_bh(&sc->sc_pcu_lock);
971 974
972 975 ieee80211_stop_queues(hw);
···
1014 1015
1015 1016 /* Start ANI */
1016 1017 ath_start_ani(common);
1018 + ath9k_ps_restore(sc);
1017 1019
1018 1020 return r;
1019 1021 }
···
1701 1701 skip_chan_change:
1702 1702 if (changed & IEEE80211_CONF_CHANGE_POWER) {
1703 1703 sc->config.txpowlimit = 2 * conf->power_level;
1704 + ath9k_ps_wakeup(sc);
1704 1705 ath_update_txpow(sc);
1706 + ath9k_ps_restore(sc);
1705 1707 }
1706 1708
1707 1709 spin_lock_bh(&sc->wiphy_lock);
-2
drivers/net/wireless/ath/ath9k/xmit.c
···
2113 2113 if (needreset) {
2114 2114 ath_dbg(ath9k_hw_common(sc->sc_ah), ATH_DBG_RESET,
2115 2115 "tx hung, resetting the chip\n");
2116 - ath9k_ps_wakeup(sc);
2117 2116 ath_reset(sc, true);
2118 - ath9k_ps_restore(sc);
2119 2117 }
2120 2118
2121 2119 ieee80211_queue_delayed_work(sc->hw, &sc->tx_complete_work,
+1
drivers/net/wireless/iwlwifi/iwl-4965.c
···
2624 2624 .fw_name_pre = IWL4965_FW_PRE,
2625 2625 .ucode_api_max = IWL4965_UCODE_API_MAX,
2626 2626 .ucode_api_min = IWL4965_UCODE_API_MIN,
2627 + .sku = IWL_SKU_A|IWL_SKU_G|IWL_SKU_N,
2627 2628 .valid_tx_ant = ANT_AB,
2628 2629 .valid_rx_ant = ANT_ABC,
2629 2630 .eeprom_ver = EEPROM_4965_EEPROM_VERSION,
+7 -4
drivers/net/wireless/iwlwifi/iwl-agn-eeprom.c
···
152 152
153 153 eeprom_sku = iwl_eeprom_query16(priv, EEPROM_SKU_CAP);
154 154
155 - priv->cfg->sku = ((eeprom_sku & EEPROM_SKU_CAP_BAND_SELECTION) >>
155 + if (!priv->cfg->sku) {
156 + /* not using sku overwrite */
157 + priv->cfg->sku =
158 + ((eeprom_sku & EEPROM_SKU_CAP_BAND_SELECTION) >>
156 159 EEPROM_SKU_CAP_BAND_POS);
157 - if (eeprom_sku & EEPROM_SKU_CAP_11N_ENABLE)
158 - priv->cfg->sku |= IWL_SKU_N;
159 -
160 + if (eeprom_sku & EEPROM_SKU_CAP_11N_ENABLE)
161 + priv->cfg->sku |= IWL_SKU_N;
162 + }
160 163 if (!priv->cfg->sku) {
161 164 IWL_ERR(priv, "Invalid device sku\n");
162 165 return -EINVAL;
+1
drivers/net/wireless/rt2x00/rt73usb.c
···
2446 2446 { USB_DEVICE(0x04bb, 0x093d), USB_DEVICE_DATA(&rt73usb_ops) },
2447 2447 { USB_DEVICE(0x148f, 0x2573), USB_DEVICE_DATA(&rt73usb_ops) },
2448 2448 { USB_DEVICE(0x148f, 0x2671), USB_DEVICE_DATA(&rt73usb_ops) },
2449 + { USB_DEVICE(0x0812, 0x3101), USB_DEVICE_DATA(&rt73usb_ops) },
2449 2450 /* Qcom */
2450 2451 { USB_DEVICE(0x18e8, 0x6196), USB_DEVICE_DATA(&rt73usb_ops) },
2451 2452 { USB_DEVICE(0x18e8, 0x6229), USB_DEVICE_DATA(&rt73usb_ops) },
+9 -2
drivers/net/wireless/rtlwifi/pci.c
···
619 619 struct sk_buff *uskb = NULL;
620 620 u8 *pdata;
621 621 uskb = dev_alloc_skb(skb->len + 128);
622 + if (!uskb) {
623 + RT_TRACE(rtlpriv,
624 + (COMP_INTR | COMP_RECV),
625 + DBG_EMERG,
626 + ("can't alloc rx skb\n"));
627 + goto done;
628 + }
622 629 memcpy(IEEE80211_SKB_RXCB(uskb),
623 630 &rx_status,
624 631 sizeof(rx_status));
···
648 641 new_skb = dev_alloc_skb(rtlpci->rxbuffersize);
649 642 if (unlikely(!new_skb)) {
650 643 RT_TRACE(rtlpriv, (COMP_INTR | COMP_RECV),
651 - DBG_DMESG,
644 + DBG_EMERG,
652 645 ("can't alloc skb for rx\n"));
653 646 goto done;
654 647 }
···
1073 1066 struct sk_buff *skb =
1074 1067 dev_alloc_skb(rtlpci->rxbuffersize);
1075 1068 u32 bufferaddress;
1076 - entry = &rtlpci->rx_ring[rx_queue_idx].desc[i];
1077 1069 if (!skb)
1078 1070 return 0;
1071 + entry = &rtlpci->rx_ring[rx_queue_idx].desc[i];
1079 1072
1080 1073 /*skb->dev = dev; */
1081 1074
-1
drivers/platform/x86/intel_scu_ipc.c
···
26 26 #include <linux/sfi.h>
27 27 #include <asm/mrst.h>
28 28 #include <asm/intel_scu_ipc.h>
29 - #include <asm/mrst.h>
30 29
31 30 /* IPC defines the following message types */
32 31 #define IPCMSG_WATCHDOG_TIMER 0xF8 /* Set Kernel Watchdog Threshold */
+2 -2
drivers/staging/ath6kl/miscdrv/ar3kps/ar3kpsconfig.c
··· 360 360 status = 1; 361 361 goto complete; 362 362 } 363 - len = (firmware->size > MAX_BDADDR_FORMAT_LENGTH)? MAX_BDADDR_FORMAT_LENGTH: firmware->size; 364 - memcpy(config_bdaddr, firmware->data,len); 363 + len = min(firmware->size, MAX_BDADDR_FORMAT_LENGTH - 1); 364 + memcpy(config_bdaddr, firmware->data, len); 365 365 config_bdaddr[len] = '\0'; 366 366 write_bdaddr(hdev,config_bdaddr,BDADDR_TYPE_STRING); 367 367 A_RELEASE_FIRMWARE(firmware);
+23 -22
drivers/staging/brcm80211/sys/wl_mac80211.c
··· 209 209 struct wl_info *wl = hw->priv; 210 210 ASSERT(wl); 211 211 WL_LOCK(wl); 212 - wl_down(wl); 213 212 ieee80211_stop_queues(hw); 214 213 WL_UNLOCK(wl); 215 - 216 - return; 217 214 } 218 215 219 216 static int ··· 243 246 static void 244 247 wl_ops_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif) 245 248 { 246 - return; 249 + struct wl_info *wl; 250 + 251 + wl = HW_TO_WL(hw); 252 + 253 + /* put driver in down state */ 254 + WL_LOCK(wl); 255 + wl_down(wl); 256 + WL_UNLOCK(wl); 247 257 } 248 258 249 259 static int ··· 783 779 wl_found++; 784 780 return wl; 785 781 786 - fail: 782 + fail: 787 783 wl_free(wl); 788 784 fail1: 789 785 return NULL; ··· 1094 1090 return 0; 1095 1091 } 1096 1092 1097 - #ifdef LINUXSTA_PS 1098 1093 static int wl_suspend(struct pci_dev *pdev, pm_message_t state) 1099 1094 { 1100 1095 struct wl_info *wl; ··· 1108 1105 return -ENODEV; 1109 1106 } 1110 1107 1108 + /* only need to flag hw is down for proper resume */ 1111 1109 WL_LOCK(wl); 1112 - wl_down(wl); 1113 1110 wl->pub->hw_up = false; 1114 1111 WL_UNLOCK(wl); 1115 - pci_save_state(pdev, wl->pci_psstate); 1112 + 1113 + pci_save_state(pdev); 1116 1114 pci_disable_device(pdev); 1117 1115 return pci_set_power_state(pdev, PCI_D3hot); 1118 1116 } ··· 1137 1133 if (err) 1138 1134 return err; 1139 1135 1140 - pci_restore_state(pdev, wl->pci_psstate); 1136 + pci_restore_state(pdev); 1141 1137 1142 1138 err = pci_enable_device(pdev); 1143 1139 if (err) ··· 1149 1145 if ((val & 0x0000ff00) != 0) 1150 1146 pci_write_config_dword(pdev, 0x40, val & 0xffff00ff); 1151 1147 1152 - WL_LOCK(wl); 1153 - err = wl_up(wl); 1154 - WL_UNLOCK(wl); 1155 - 1148 + /* 1149 + * done. driver will be put in up state 1150 + * in wl_ops_add_interface() call. 
1151 + */ 1156 1152 return err; 1157 1153 } 1158 - #endif /* LINUXSTA_PS */ 1159 1154 1160 1155 static void wl_remove(struct pci_dev *pdev) 1161 1156 { ··· 1187 1184 } 1188 1185 1189 1186 static struct pci_driver wl_pci_driver = { 1190 - .name = "brcm80211", 1191 - .probe = wl_pci_probe, 1192 - #ifdef LINUXSTA_PS 1193 - .suspend = wl_suspend, 1194 - .resume = wl_resume, 1195 - #endif /* LINUXSTA_PS */ 1196 - .remove = __devexit_p(wl_remove), 1197 - .id_table = wl_id_table, 1187 + .name = "brcm80211", 1188 + .probe = wl_pci_probe, 1189 + .suspend = wl_suspend, 1190 + .resume = wl_resume, 1191 + .remove = __devexit_p(wl_remove), 1192 + .id_table = wl_id_table, 1198 1193 }; 1199 1194 1200 1195 /**
-1
drivers/staging/brcm80211/sys/wlc_mac80211.c
··· 5126 5126 fifo = prio2fifo[prio]; 5127 5127 5128 5128 ASSERT((uint) skb_headroom(sdu) >= TXOFF); 5129 - ASSERT(!(sdu->cloned)); 5130 5129 ASSERT(!(sdu->next)); 5131 5130 ASSERT(!(sdu->prev)); 5132 5131 ASSERT(fifo < NFIFO);
+2 -1
drivers/staging/comedi/drivers/ni_labpc.c
··· 575 575 /* grab our IRQ */ 576 576 if (irq) { 577 577 isr_flags = 0; 578 - if (thisboard->bustype == pci_bustype) 578 + if (thisboard->bustype == pci_bustype 579 + || thisboard->bustype == pcmcia_bustype) 579 580 isr_flags |= IRQF_SHARED; 580 581 if (request_irq(irq, labpc_interrupt, isr_flags, 581 582 driver_labpc.driver_name, dev)) {
+1
drivers/staging/hv/blkvsc_drv.c
··· 368 368 blkdev->gd->first_minor = 0; 369 369 blkdev->gd->fops = &block_ops; 370 370 blkdev->gd->private_data = blkdev; 371 + blkdev->gd->driverfs_dev = &(blkdev->device_ctx->device); 371 372 sprintf(blkdev->gd->disk_name, "hd%c", 'a' + devnum); 372 373 373 374 blkvsc_do_inquiry(blkdev);
+1 -1
drivers/staging/hv/netvsc.c
··· 1279 1279 /* ASSERT(device); */ 1280 1280 1281 1281 packet = kzalloc(NETVSC_PACKET_SIZE * sizeof(unsigned char), 1282 - GFP_KERNEL); 1282 + GFP_ATOMIC); 1283 1283 if (!packet) 1284 1284 return; 1285 1285 buffer = packet;
-1
drivers/staging/hv/netvsc_drv.c
··· 358 358 359 359 /* Set initial state */ 360 360 netif_carrier_off(net); 361 - netif_stop_queue(net); 362 361 363 362 net_device_ctx = netdev_priv(net); 364 363 net_device_ctx->device_ctx = device_ctx;
+1 -1
drivers/staging/iio/adc/ad7476_core.c
··· 68 68 /* Corresponds to Vref / 2^(bits) */ 69 69 unsigned int scale_uv = (st->int_vref_mv * 1000) >> st->chip_info->bits; 70 70 71 - return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000); 71 + return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000); 72 72 } 73 73 static IIO_DEVICE_ATTR(in_scale, S_IRUGO, ad7476_show_scale, NULL, 0); 74 74
+1 -1
drivers/staging/iio/adc/ad7887_core.c
··· 68 68 /* Corresponds to Vref / 2^(bits) */ 69 69 unsigned int scale_uv = (st->int_vref_mv * 1000) >> st->chip_info->bits; 70 70 71 - return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000); 71 + return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000); 72 72 } 73 73 static IIO_DEVICE_ATTR(in_scale, S_IRUGO, ad7887_show_scale, NULL, 0); 74 74
+1 -1
drivers/staging/iio/adc/ad799x_core.c
··· 432 432 /* Corresponds to Vref / 2^(bits) */ 433 433 unsigned int scale_uv = (st->int_vref_mv * 1000) >> st->chip_info->bits; 434 434 435 - return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000); 435 + return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000); 436 436 } 437 437 438 438 static IIO_DEVICE_ATTR(in_scale, S_IRUGO, ad799x_show_scale, NULL, 0);
+1 -1
drivers/staging/iio/dac/ad5446.c
··· 87 87 /* Corresponds to Vref / 2^(bits) */ 88 88 unsigned int scale_uv = (st->vref_mv * 1000) >> st->chip_info->bits; 89 89 90 - return sprintf(buf, "%d.%d\n", scale_uv / 1000, scale_uv % 1000); 90 + return sprintf(buf, "%d.%03d\n", scale_uv / 1000, scale_uv % 1000); 91 91 } 92 92 static IIO_DEVICE_ATTR(out_scale, S_IRUGO, ad5446_show_scale, NULL, 0); 93 93
-2
drivers/staging/rt2860/rt_main_dev.c
··· 484 484 net_dev->ml_priv = (void *)pAd; 485 485 pAd->net_dev = net_dev; 486 486 487 - netif_stop_queue(net_dev); 488 - 489 487 return net_dev; 490 488 491 489 }
+1
drivers/staging/rt2860/usb_main_dev.c
··· 106 106 {USB_DEVICE(0x0411, 0x016f)}, /* MelCo.,Inc. WLI-UC-G301N */ 107 107 {USB_DEVICE(0x1737, 0x0070)}, /* Linksys WUSB100 */ 108 108 {USB_DEVICE(0x1737, 0x0071)}, /* Linksys WUSB600N */ 109 + {USB_DEVICE(0x1737, 0x0078)}, /* Linksys WUSB100v2 */ 109 110 {USB_DEVICE(0x0411, 0x00e8)}, /* Buffalo WLI-UC-G300N */ 110 111 {USB_DEVICE(0x050d, 0x815c)}, /* Belkin F5D8053 */ 111 112 {USB_DEVICE(0x100D, 0x9031)}, /* Motorola 2770 */
+7 -4
drivers/staging/rtl8712/hal_init.c
··· 128 128 u8 *ptmpchar = NULL, *ppayload, *ptr; 129 129 struct tx_desc *ptx_desc; 130 130 u32 txdscp_sz = sizeof(struct tx_desc); 131 + u8 ret = _FAIL; 131 132 132 133 ulfilelength = rtl871x_open_fw(padapter, &phfwfile_hdl, &pmappedfw); 133 134 if (pmappedfw && (ulfilelength > 0)) { 134 135 update_fwhdr(&fwhdr, pmappedfw); 135 136 if (chk_fwhdr(&fwhdr, ulfilelength) == _FAIL) 136 - goto exit_fail; 137 + goto firmware_rel; 137 138 fill_fwpriv(padapter, &fwhdr.fwpriv); 138 139 /* firmware check ok */ 139 140 maxlen = (fwhdr.img_IMEM_size > fwhdr.img_SRAM_size) ? ··· 142 141 maxlen += txdscp_sz; 143 142 ptmpchar = _malloc(maxlen + FWBUFF_ALIGN_SZ); 144 143 if (ptmpchar == NULL) 145 - return _FAIL; 144 + goto firmware_rel; 146 145 147 146 ptx_desc = (struct tx_desc *)(ptmpchar + FWBUFF_ALIGN_SZ - 148 147 ((addr_t)(ptmpchar) & (FWBUFF_ALIGN_SZ - 1))); ··· 274 273 goto exit_fail; 275 274 } else 276 275 goto exit_fail; 277 - return _SUCCESS; 276 + ret = _SUCCESS; 278 277 279 278 exit_fail: 280 279 kfree(ptmpchar); 281 - return _FAIL; 280 + firmware_rel: 281 + release_firmware((struct firmware *)phfwfile_hdl); 282 + return ret; 282 283 } 283 284 284 285 uint rtl8712_hal_init(struct _adapter *padapter)
+114 -33
drivers/staging/rtl8712/usb_intf.c
··· 47 47 static void r871xu_dev_remove(struct usb_interface *pusb_intf); 48 48 49 49 static struct usb_device_id rtl871x_usb_id_tbl[] = { 50 - /*92SU 51 - * Realtek */ 52 - {USB_DEVICE(0x0bda, 0x8171)}, 53 - {USB_DEVICE(0x0bda, 0x8172)}, 50 + 51 + /* RTL8188SU */ 52 + /* Realtek */ 53 + {USB_DEVICE(0x0BDA, 0x8171)}, 54 54 {USB_DEVICE(0x0bda, 0x8173)}, 55 - {USB_DEVICE(0x0bda, 0x8174)}, 56 55 {USB_DEVICE(0x0bda, 0x8712)}, 57 56 {USB_DEVICE(0x0bda, 0x8713)}, 58 57 {USB_DEVICE(0x0bda, 0xC512)}, 59 - /* Abocom */ 58 + /* Abocom */ 60 59 {USB_DEVICE(0x07B8, 0x8188)}, 61 - /* Corega */ 62 - {USB_DEVICE(0x07aa, 0x0047)}, 63 - /* Dlink */ 64 - {USB_DEVICE(0x07d1, 0x3303)}, 65 - {USB_DEVICE(0x07d1, 0x3302)}, 66 - {USB_DEVICE(0x07d1, 0x3300)}, 67 - /* Dlink for Skyworth */ 68 - {USB_DEVICE(0x14b2, 0x3300)}, 69 - {USB_DEVICE(0x14b2, 0x3301)}, 70 - {USB_DEVICE(0x14b2, 0x3302)}, 71 - /* EnGenius */ 72 - {USB_DEVICE(0x1740, 0x9603)}, 73 - {USB_DEVICE(0x1740, 0x9605)}, 60 + /* ASUS */ 61 + {USB_DEVICE(0x0B05, 0x1786)}, 62 + {USB_DEVICE(0x0B05, 0x1791)}, /* 11n mode disable */ 74 63 /* Belkin */ 75 - {USB_DEVICE(0x050d, 0x815F)}, 76 - {USB_DEVICE(0x050d, 0x945A)}, 77 - {USB_DEVICE(0x050d, 0x845A)}, 78 - /* Guillemot */ 79 - {USB_DEVICE(0x06f8, 0xe031)}, 64 + {USB_DEVICE(0x050D, 0x945A)}, 65 + /* Corega */ 66 + {USB_DEVICE(0x07AA, 0x0047)}, 67 + /* D-Link */ 68 + {USB_DEVICE(0x2001, 0x3306)}, 69 + {USB_DEVICE(0x07D1, 0x3306)}, /* 11n mode disable */ 80 70 /* Edimax */ 81 71 {USB_DEVICE(0x7392, 0x7611)}, 82 - {USB_DEVICE(0x7392, 0x7612)}, 83 - {USB_DEVICE(0x7392, 0x7622)}, 72 + /* EnGenius */ 73 + {USB_DEVICE(0x1740, 0x9603)}, 74 + /* Hawking */ 75 + {USB_DEVICE(0x0E66, 0x0016)}, 76 + /* Hercules */ 77 + {USB_DEVICE(0x06F8, 0xE034)}, 78 + {USB_DEVICE(0x06F8, 0xE032)}, 79 + /* Logitec */ 80 + {USB_DEVICE(0x0789, 0x0167)}, 81 + /* PCI */ 82 + {USB_DEVICE(0x2019, 0xAB28)}, 83 + {USB_DEVICE(0x2019, 0xED16)}, 84 84 /* Sitecom */ 85 + {USB_DEVICE(0x0DF6, 0x0057)}, 85 86 
{USB_DEVICE(0x0DF6, 0x0045)}, 87 + {USB_DEVICE(0x0DF6, 0x0059)}, /* 11n mode disable */ 88 + {USB_DEVICE(0x0DF6, 0x004B)}, 89 + {USB_DEVICE(0x0DF6, 0x0063)}, 90 + /* Sweex */ 91 + {USB_DEVICE(0x177F, 0x0154)}, 92 + /* Thinkware */ 93 + {USB_DEVICE(0x0BDA, 0x5077)}, 94 + /* Toshiba */ 95 + {USB_DEVICE(0x1690, 0x0752)}, 96 + /* - */ 97 + {USB_DEVICE(0x20F4, 0x646B)}, 98 + {USB_DEVICE(0x083A, 0xC512)}, 99 + 100 + /* RTL8191SU */ 101 + /* Realtek */ 102 + {USB_DEVICE(0x0BDA, 0x8172)}, 103 + /* Amigo */ 104 + {USB_DEVICE(0x0EB0, 0x9061)}, 105 + /* ASUS/EKB */ 106 + {USB_DEVICE(0x0BDA, 0x8172)}, 107 + {USB_DEVICE(0x13D3, 0x3323)}, 108 + {USB_DEVICE(0x13D3, 0x3311)}, /* 11n mode disable */ 109 + {USB_DEVICE(0x13D3, 0x3342)}, 110 + /* ASUS/EKBLenovo */ 111 + {USB_DEVICE(0x13D3, 0x3333)}, 112 + {USB_DEVICE(0x13D3, 0x3334)}, 113 + {USB_DEVICE(0x13D3, 0x3335)}, /* 11n mode disable */ 114 + {USB_DEVICE(0x13D3, 0x3336)}, /* 11n mode disable */ 115 + /* ASUS/Media BOX */ 116 + {USB_DEVICE(0x13D3, 0x3309)}, 117 + /* Belkin */ 118 + {USB_DEVICE(0x050D, 0x815F)}, 119 + /* D-Link */ 120 + {USB_DEVICE(0x07D1, 0x3302)}, 121 + {USB_DEVICE(0x07D1, 0x3300)}, 122 + {USB_DEVICE(0x07D1, 0x3303)}, 123 + /* Edimax */ 124 + {USB_DEVICE(0x7392, 0x7612)}, 125 + /* EnGenius */ 126 + {USB_DEVICE(0x1740, 0x9605)}, 127 + /* Guillemot */ 128 + {USB_DEVICE(0x06F8, 0xE031)}, 86 129 /* Hawking */ 87 130 {USB_DEVICE(0x0E66, 0x0015)}, 88 - {USB_DEVICE(0x0E66, 0x0016)}, 89 - {USB_DEVICE(0x0b05, 0x1786)}, 90 - {USB_DEVICE(0x0b05, 0x1791)}, /* 11n mode disable */ 91 - 131 + /* Mediao */ 92 132 {USB_DEVICE(0x13D3, 0x3306)}, 93 - {USB_DEVICE(0x13D3, 0x3309)}, 133 + /* PCI */ 134 + {USB_DEVICE(0x2019, 0xED18)}, 135 + {USB_DEVICE(0x2019, 0x4901)}, 136 + /* Sitecom */ 137 + {USB_DEVICE(0x0DF6, 0x0058)}, 138 + {USB_DEVICE(0x0DF6, 0x0049)}, 139 + {USB_DEVICE(0x0DF6, 0x004C)}, 140 + {USB_DEVICE(0x0DF6, 0x0064)}, 141 + /* Skyworth */ 142 + {USB_DEVICE(0x14b2, 0x3300)}, 143 + {USB_DEVICE(0x14b2, 0x3301)}, 144 + 
{USB_DEVICE(0x14B2, 0x3302)}, 145 + /* - */ 146 + {USB_DEVICE(0x04F2, 0xAFF2)}, 147 + {USB_DEVICE(0x04F2, 0xAFF5)}, 148 + {USB_DEVICE(0x04F2, 0xAFF6)}, 149 + {USB_DEVICE(0x13D3, 0x3339)}, 150 + {USB_DEVICE(0x13D3, 0x3340)}, /* 11n mode disable */ 151 + {USB_DEVICE(0x13D3, 0x3341)}, /* 11n mode disable */ 94 152 {USB_DEVICE(0x13D3, 0x3310)}, 95 - {USB_DEVICE(0x13D3, 0x3311)}, /* 11n mode disable */ 96 153 {USB_DEVICE(0x13D3, 0x3325)}, 97 - {USB_DEVICE(0x083A, 0xC512)}, 154 + 155 + /* RTL8192SU */ 156 + /* Realtek */ 157 + {USB_DEVICE(0x0BDA, 0x8174)}, 158 + {USB_DEVICE(0x0BDA, 0x8174)}, 159 + /* Belkin */ 160 + {USB_DEVICE(0x050D, 0x845A)}, 161 + /* Corega */ 162 + {USB_DEVICE(0x07AA, 0x0051)}, 163 + /* Edimax */ 164 + {USB_DEVICE(0x7392, 0x7622)}, 165 + /* NEC */ 166 + {USB_DEVICE(0x0409, 0x02B6)}, 98 167 {} 99 168 }; 100 169 ··· 172 103 static struct specific_device_id specific_device_id_tbl[] = { 173 104 {.idVendor = 0x0b05, .idProduct = 0x1791, 174 105 .flags = SPEC_DEV_ID_DISABLE_HT}, 106 + {.idVendor = 0x0df6, .idProduct = 0x0059, 107 + .flags = SPEC_DEV_ID_DISABLE_HT}, 108 + {.idVendor = 0x13d3, .idProduct = 0x3306, 109 + .flags = SPEC_DEV_ID_DISABLE_HT}, 175 110 {.idVendor = 0x13D3, .idProduct = 0x3311, 111 + .flags = SPEC_DEV_ID_DISABLE_HT}, 112 + {.idVendor = 0x13d3, .idProduct = 0x3335, 113 + .flags = SPEC_DEV_ID_DISABLE_HT}, 114 + {.idVendor = 0x13d3, .idProduct = 0x3336, 115 + .flags = SPEC_DEV_ID_DISABLE_HT}, 116 + {.idVendor = 0x13d3, .idProduct = 0x3340, 117 + .flags = SPEC_DEV_ID_DISABLE_HT}, 118 + {.idVendor = 0x13d3, .idProduct = 0x3341, 176 119 .flags = SPEC_DEV_ID_DISABLE_HT}, 177 120 {} 178 121 };
+1 -1
drivers/staging/speakup/kobjects.c
··· 332 332 unsigned long flags; 333 333 334 334 len = strlen(buf); 335 - if (len > 0 || len < 3) { 335 + if (len > 0 && len < 3) { 336 336 ch = buf[0]; 337 337 if (ch == '\n') 338 338 ch = '0';
+9 -10
drivers/staging/ste_rmi4/synaptics_i2c_rmi4.c
··· 986 986 input_set_abs_params(rmi4_data->input_dev, ABS_MT_TOUCH_MAJOR, 0, 987 987 MAX_TOUCH_MAJOR, 0, 0); 988 988 989 - retval = input_register_device(rmi4_data->input_dev); 990 - if (retval) { 991 - dev_err(&client->dev, "%s:input register failed\n", __func__); 992 - goto err_input_register; 993 - } 994 - 995 989 /* Clear interrupts */ 996 990 synaptics_rmi4_i2c_block_read(rmi4_data, 997 991 rmi4_data->fn01_data_base_addr + 1, intr_status, ··· 997 1003 if (retval) { 998 1004 dev_err(&client->dev, "%s:Unable to get attn irq %d\n", 999 1005 __func__, platformdata->irq_number); 1000 - goto err_request_irq; 1006 + goto err_unset_clientdata; 1007 + } 1008 + 1009 + retval = input_register_device(rmi4_data->input_dev); 1010 + if (retval) { 1011 + dev_err(&client->dev, "%s:input register failed\n", __func__); 1012 + goto err_free_irq; 1001 1013 } 1002 1014 1003 1015 return retval; 1004 1016 1005 - err_request_irq: 1017 + err_free_irq: 1006 1018 free_irq(platformdata->irq_number, rmi4_data); 1007 - input_unregister_device(rmi4_data->input_dev); 1008 - err_input_register: 1019 + err_unset_clientdata: 1009 1020 i2c_set_clientdata(client, NULL); 1010 1021 err_query_dev: 1011 1022 if (platformdata->regulator_en) {
+4 -4
drivers/staging/tidspbridge/core/io_sm.c
··· 949 949 * Calls the Bridge's CHNL_ISR to determine if this interrupt is ours, then 950 950 * schedules a DPC to dispatch I/O. 951 951 */ 952 - void io_mbox_msg(u32 msg) 952 + int io_mbox_msg(struct notifier_block *self, unsigned long len, void *msg) 953 953 { 954 954 struct io_mgr *pio_mgr; 955 955 struct dev_object *dev_obj; ··· 959 959 dev_get_io_mgr(dev_obj, &pio_mgr); 960 960 961 961 if (!pio_mgr) 962 - return; 962 + return NOTIFY_BAD; 963 963 964 - pio_mgr->intr_val = (u16)msg; 964 + pio_mgr->intr_val = (u16)((u32)msg); 965 965 if (pio_mgr->intr_val & MBX_PM_CLASS) 966 966 io_dispatch_pm(pio_mgr); 967 967 ··· 973 973 spin_unlock_irqrestore(&pio_mgr->dpc_lock, flags); 974 974 tasklet_schedule(&pio_mgr->dpc_tasklet); 975 975 } 976 - return; 976 + return NOTIFY_OK; 977 977 } 978 978 979 979 /*
+7 -8
drivers/staging/tidspbridge/core/tiomap3430.c
··· 223 223 bridge_msg_set_queue_id, 224 224 }; 225 225 226 + static struct notifier_block dsp_mbox_notifier = { 227 + .notifier_call = io_mbox_msg, 228 + }; 229 + 226 230 static inline void flush_all(struct bridge_dev_context *dev_context) 227 231 { 228 232 if (dev_context->dw_brd_state == BRD_DSP_HIBERNATION || ··· 557 553 * Enable Mailbox events and also drain any pending 558 554 * stale messages. 559 555 */ 560 - dev_context->mbox = omap_mbox_get("dsp"); 556 + dev_context->mbox = omap_mbox_get("dsp", &dsp_mbox_notifier); 561 557 if (IS_ERR(dev_context->mbox)) { 562 558 dev_context->mbox = NULL; 563 559 pr_err("%s: Failed to get dsp mailbox handle\n", ··· 567 563 568 564 } 569 565 if (!status) { 570 - dev_context->mbox->rxq->callback = (int (*)(void *))io_mbox_msg; 571 - 572 566 /*PM_IVA2GRPSEL_PER = 0xC0;*/ 573 567 temp = readl(resources->dw_per_pm_base + 0xA8); 574 568 temp = (temp & 0xFFFFFF30) | 0xC0; ··· 687 685 /* Disable the mailbox interrupts */ 688 686 if (dev_context->mbox) { 689 687 omap_mbox_disable_irq(dev_context->mbox, IRQ_RX); 690 - omap_mbox_put(dev_context->mbox); 688 + omap_mbox_put(dev_context->mbox, &dsp_mbox_notifier); 691 689 dev_context->mbox = NULL; 692 690 } 693 691 /* Reset IVA2 clocks*/ ··· 788 786 789 787 pt_attrs = kzalloc(sizeof(struct pg_table_attrs), GFP_KERNEL); 790 788 if (pt_attrs != NULL) { 791 - /* Assuming that we use only DSP's memory map 792 - * until 0x4000:0000 , we would need only 1024 793 - * L1 enties i.e L1 size = 4K */ 794 - pt_attrs->l1_size = 0x1000; 789 + pt_attrs->l1_size = SZ_16K; /* 4096 entries of 32 bits */ 795 790 align_size = pt_attrs->l1_size; 796 791 /* Align sizes are expected to be power of 2 */ 797 792 /* we like to get aligned on L1 table size */
+8 -13
drivers/staging/tidspbridge/include/dspbridge/io_sm.h
··· 72 72 /* 73 73 * ======== io_mbox_msg ======== 74 74 * Purpose: 75 - * Main interrupt handler for the shared memory Bridge channel manager. 76 - * Calls the Bridge's chnlsm_isr to determine if this interrupt is ours, 77 - * then schedules a DPC to dispatch I/O. 75 + * Main message handler for the shared memory Bridge channel manager. 76 + * Determine if this message is ours, then schedules a DPC to 77 + * dispatch I/O. 78 78 * Parameters: 79 - * ref_data: Pointer to the channel manager object for this board. 80 - * Set in an initial call to ISR_Install(). 79 + * self: Pointer to its own notifier_block struct. 80 + * len: Length of message. 81 + * msg: Message code received. 81 82 * Returns: 82 - * TRUE if interrupt handled; FALSE otherwise. 83 - * Requires: 84 - * Must be in locked memory if executing in kernel mode. 85 - * Must only call functions which are in locked memory if Kernel mode. 86 - * Must only call asynchronous services. 87 - * Interrupts are disabled and EOI for this interrupt has been sent. 88 - * Ensures: 83 + * NOTIFY_OK if handled; NOTIFY_BAD otherwise. 89 84 */ 90 - void io_mbox_msg(u32 msg); 85 + int io_mbox_msg(struct notifier_block *self, unsigned long len, void *msg); 91 86 92 87 /* 93 88 * ======== io_request_chnl ========
+1
drivers/staging/usbip/stub.h
··· 32 32 33 33 struct stub_device { 34 34 struct usb_interface *interface; 35 + struct usb_device *udev; 35 36 struct list_head list; 36 37 37 38 struct usbip_device ud;
+14 -4
drivers/staging/usbip/stub_dev.c
··· 258 258 static void stub_device_reset(struct usbip_device *ud) 259 259 { 260 260 struct stub_device *sdev = container_of(ud, struct stub_device, ud); 261 - struct usb_device *udev = interface_to_usbdev(sdev->interface); 261 + struct usb_device *udev = sdev->udev; 262 262 int ret; 263 263 264 264 usbip_udbg("device reset"); 265 + 265 266 ret = usb_lock_device_for_reset(udev, sdev->interface); 266 267 if (ret < 0) { 267 268 dev_err(&udev->dev, "lock for reset\n"); ··· 310 309 * 311 310 * Allocates and initializes a new stub_device struct. 312 311 */ 313 - static struct stub_device *stub_device_alloc(struct usb_interface *interface) 312 + static struct stub_device *stub_device_alloc(struct usb_device *udev, 313 + struct usb_interface *interface) 314 314 { 315 315 struct stub_device *sdev; 316 316 int busnum = interface_to_busnum(interface); ··· 326 324 return NULL; 327 325 } 328 326 329 - sdev->interface = interface; 327 + sdev->interface = usb_get_intf(interface); 328 + sdev->udev = usb_get_dev(udev); 330 329 331 330 /* 332 331 * devid is defined with devnum when this driver is first allocated. ··· 453 450 return err; 454 451 } 455 452 453 + usb_get_intf(interface); 456 454 return 0; 457 455 } 458 456 459 457 /* ok. this is my device. */ 460 - sdev = stub_device_alloc(interface); 458 + sdev = stub_device_alloc(udev, interface); 461 459 if (!sdev) 462 460 return -ENOMEM; 463 461 ··· 480 476 dev_err(&interface->dev, "create sysfs files for %s\n", 481 477 udev_busid); 482 478 usb_set_intfdata(interface, NULL); 479 + usb_put_intf(interface); 480 + 483 481 busid_priv->interf_count = 0; 484 482 485 483 busid_priv->sdev = NULL; ··· 551 545 if (busid_priv->interf_count > 1) { 552 546 busid_priv->interf_count--; 553 547 shutdown_busid(busid_priv); 548 + usb_put_intf(interface); 554 549 return; 555 550 } 556 551 ··· 560 553 561 554 /* 1. 
shutdown the current connection */ 562 555 shutdown_busid(busid_priv); 556 + 557 + usb_put_dev(sdev->udev); 558 + usb_put_intf(interface); 563 559 564 560 /* 3. free sdev */ 565 561 busid_priv->sdev = NULL;
+2 -2
drivers/staging/usbip/stub_rx.c
··· 364 364 365 365 static int get_pipe(struct stub_device *sdev, int epnum, int dir) 366 366 { 367 - struct usb_device *udev = interface_to_usbdev(sdev->interface); 367 + struct usb_device *udev = sdev->udev; 368 368 struct usb_host_endpoint *ep; 369 369 struct usb_endpoint_descriptor *epd = NULL; 370 370 ··· 484 484 int ret; 485 485 struct stub_priv *priv; 486 486 struct usbip_device *ud = &sdev->ud; 487 - struct usb_device *udev = interface_to_usbdev(sdev->interface); 487 + struct usb_device *udev = sdev->udev; 488 488 int pipe = get_pipe(sdev, pdu->base.ep, pdu->base.direction); 489 489 490 490
+3 -3
drivers/staging/usbip/vhci.h
··· 100 100 * But, the index of this array begins from 0. 101 101 */ 102 102 struct vhci_device vdev[VHCI_NPORTS]; 103 - 104 - /* vhci_device which has not been assiged its address yet */ 105 - int pending_port; 106 103 }; 107 104 108 105 ··· 115 118 void rh_port_disconnect(int rhport); 116 119 void vhci_rx_loop(struct usbip_task *ut); 117 120 void vhci_tx_loop(struct usbip_task *ut); 121 + 122 + struct urb *pickup_urb_and_free_priv(struct vhci_device *vdev, 123 + __u32 seqnum); 118 124 119 125 #define hardware (&the_controller->pdev.dev) 120 126
+46 -8
drivers/staging/usbip/vhci_hcd.c
··· 138 138 * the_controller->vdev[rhport].ud.status = VDEV_CONNECT; 139 139 * spin_unlock(&the_controller->vdev[rhport].ud.lock); */ 140 140 141 - the_controller->pending_port = rhport; 142 - 143 141 spin_unlock_irqrestore(&the_controller->lock, flags); 144 142 145 143 usb_hcd_poll_rh_status(vhci_to_hcd(the_controller)); ··· 557 559 struct device *dev = &urb->dev->dev; 558 560 int ret = 0; 559 561 unsigned long flags; 562 + struct vhci_device *vdev; 560 563 561 564 usbip_dbg_vhci_hc("enter, usb_hcd %p urb %p mem_flags %d\n", 562 565 hcd, urb, mem_flags); ··· 572 573 spin_unlock_irqrestore(&the_controller->lock, flags); 573 574 return urb->status; 574 575 } 576 + 577 + vdev = port_to_vdev(urb->dev->portnum-1); 578 + 579 + /* refuse enqueue for dead connection */ 580 + spin_lock(&vdev->ud.lock); 581 + if (vdev->ud.status == VDEV_ST_NULL || vdev->ud.status == VDEV_ST_ERROR) { 582 + usbip_uerr("enqueue for inactive port %d\n", vdev->rhport); 583 + spin_unlock(&vdev->ud.lock); 584 + spin_unlock_irqrestore(&the_controller->lock, flags); 585 + return -ENODEV; 586 + } 587 + spin_unlock(&vdev->ud.lock); 575 588 576 589 ret = usb_hcd_link_urb_to_ep(hcd, urb); 577 590 if (ret) ··· 603 592 __u8 type = usb_pipetype(urb->pipe); 604 593 struct usb_ctrlrequest *ctrlreq = 605 594 (struct usb_ctrlrequest *) urb->setup_packet; 606 - struct vhci_device *vdev = 607 - port_to_vdev(the_controller->pending_port); 608 595 609 596 if (type != PIPE_CONTROL || !ctrlreq) { 610 597 dev_err(dev, "invalid request to devnum 0\n"); ··· 616 607 dev_info(dev, "SetAddress Request (%d) to port %d\n", 617 608 ctrlreq->wValue, vdev->rhport); 618 609 619 - vdev->udev = urb->dev; 610 + if (vdev->udev) 611 + usb_put_dev(vdev->udev); 612 + vdev->udev = usb_get_dev(urb->dev); 620 613 621 614 spin_lock(&vdev->ud.lock); 622 615 vdev->ud.status = VDEV_ST_USED; ··· 638 627 "Get_Descriptor to device 0 " 639 628 "(get max pipe size)\n"); 640 629 641 - /* FIXME: reference count? 
(usb_get_dev()) */ 642 - vdev->udev = urb->dev; 630 + if (vdev->udev) 631 + usb_put_dev(vdev->udev); 632 + vdev->udev = usb_get_dev(urb->dev); 643 633 goto out; 644 634 645 635 default: ··· 817 805 return 0; 818 806 } 819 807 820 - 821 808 static void vhci_device_unlink_cleanup(struct vhci_device *vdev) 822 809 { 823 810 struct vhci_unlink *unlink, *tmp; ··· 824 813 spin_lock(&vdev->priv_lock); 825 814 826 815 list_for_each_entry_safe(unlink, tmp, &vdev->unlink_tx, list) { 816 + usbip_uinfo("unlink cleanup tx %lu\n", unlink->unlink_seqnum); 827 817 list_del(&unlink->list); 828 818 kfree(unlink); 829 819 } 830 820 831 821 list_for_each_entry_safe(unlink, tmp, &vdev->unlink_rx, list) { 822 + struct urb *urb; 823 + 824 + /* give back URB of unanswered unlink request */ 825 + usbip_uinfo("unlink cleanup rx %lu\n", unlink->unlink_seqnum); 826 + 827 + urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum); 828 + if (!urb) { 829 + usbip_uinfo("the urb (seqnum %lu) was already given back\n", 830 + unlink->unlink_seqnum); 831 + list_del(&unlink->list); 832 + kfree(unlink); 833 + continue; 834 + } 835 + 836 + urb->status = -ENODEV; 837 + 838 + spin_lock(&the_controller->lock); 839 + usb_hcd_unlink_urb_from_ep(vhci_to_hcd(the_controller), urb); 840 + spin_unlock(&the_controller->lock); 841 + 842 + usb_hcd_giveback_urb(vhci_to_hcd(the_controller), urb, urb->status); 843 + 832 844 list_del(&unlink->list); 833 845 kfree(unlink); 834 846 } ··· 920 886 921 887 vdev->speed = 0; 922 888 vdev->devid = 0; 889 + 890 + if (vdev->udev) 891 + usb_put_dev(vdev->udev); 892 + vdev->udev = NULL; 923 893 924 894 ud->tcp_socket = NULL; 925 895
+42 -8
drivers/staging/usbip/vhci_rx.c
··· 23 23 #include "vhci.h" 24 24 25 25 26 - /* get URB from transmitted urb queue */ 27 - static struct urb *pickup_urb_and_free_priv(struct vhci_device *vdev, 26 + /* get URB from transmitted urb queue. caller must hold vdev->priv_lock */ 27 + struct urb *pickup_urb_and_free_priv(struct vhci_device *vdev, 28 28 __u32 seqnum) 29 29 { 30 30 struct vhci_priv *priv, *tmp; 31 31 struct urb *urb = NULL; 32 32 int status; 33 - 34 - spin_lock(&vdev->priv_lock); 35 33 36 34 list_for_each_entry_safe(priv, tmp, &vdev->priv_rx, list) { 37 35 if (priv->seqnum == seqnum) { ··· 61 63 } 62 64 } 63 65 64 - spin_unlock(&vdev->priv_lock); 65 - 66 66 return urb; 67 67 } 68 68 ··· 70 74 struct usbip_device *ud = &vdev->ud; 71 75 struct urb *urb; 72 76 77 + spin_lock(&vdev->priv_lock); 73 78 74 79 urb = pickup_urb_and_free_priv(vdev, pdu->base.seqnum); 75 80 81 + spin_unlock(&vdev->priv_lock); 76 82 77 83 if (!urb) { 78 84 usbip_uerr("cannot find a urb of seqnum %u\n", ··· 159 161 return; 160 162 } 161 163 164 + spin_lock(&vdev->priv_lock); 165 + 162 166 urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum); 167 + 168 + spin_unlock(&vdev->priv_lock); 169 + 163 170 if (!urb) { 164 171 /* 165 172 * I get the result of a unlink request. But, it seems that I ··· 193 190 return; 194 191 } 195 192 193 + static int vhci_priv_tx_empty(struct vhci_device *vdev) 194 + { 195 + int empty = 0; 196 + 197 + spin_lock(&vdev->priv_lock); 198 + 199 + empty = list_empty(&vdev->priv_rx); 200 + 201 + spin_unlock(&vdev->priv_lock); 202 + 203 + return empty; 204 + } 205 + 196 206 /* recv a pdu */ 197 207 static void vhci_rx_pdu(struct usbip_device *ud) 198 208 { ··· 218 202 219 203 memset(&pdu, 0, sizeof(pdu)); 220 204 221 - 222 205 /* 1. 
receive a pdu header */ 223 206 ret = usbip_xmit(0, ud->tcp_socket, (char *) &pdu, sizeof(pdu), 0); 207 + if (ret < 0) { 208 + if (ret == -ECONNRESET) 209 + usbip_uinfo("connection reset by peer\n"); 210 + else if (ret == -EAGAIN) { 211 + /* ignore if connection was idle */ 212 + if (vhci_priv_tx_empty(vdev)) 213 + return; 214 + usbip_uinfo("connection timed out with pending urbs\n"); 215 + } else if (ret != -ERESTARTSYS) 216 + usbip_uinfo("xmit failed %d\n", ret); 217 + 218 + usbip_event_add(ud, VDEV_EVENT_ERROR_TCP); 219 + return; 220 + } 221 + if (ret == 0) { 222 + usbip_uinfo("connection closed"); 223 + usbip_event_add(ud, VDEV_EVENT_DOWN); 224 + return; 225 + } 224 226 if (ret != sizeof(pdu)) { 225 - usbip_uerr("receiving pdu failed! size is %d, should be %d\n", 227 + usbip_uerr("received pdu size is %d, should be %d\n", 226 228 ret, (unsigned int)sizeof(pdu)); 227 229 usbip_event_add(ud, VDEV_EVENT_ERROR_TCP); 228 230 return;
drivers/staging/vme/bridges/Module.symvers
+3 -3
drivers/staging/xgifb/vb_setmode.c
··· 3954 3954 unsigned char XGI_IsLCDDualLink(struct vb_device_info *pVBInfo) 3955 3955 { 3956 3956 3957 - if ((((pVBInfo->VBInfo & SetCRT2ToLCD) | SetCRT2ToLCDA)) 3958 - && (pVBInfo->LCDInfo & SetLCDDualLink)) /* shampoo0129 */ 3957 + if ((pVBInfo->VBInfo & (SetCRT2ToLCD | SetCRT2ToLCDA)) && 3958 + (pVBInfo->LCDInfo & SetLCDDualLink)) /* shampoo0129 */ 3959 3959 return 1; 3960 3960 3961 3961 return 0; ··· 8773 8773 8774 8774 if (pVBInfo->IF_DEF_LVDS == 0) { 8775 8775 CRT2Index = CRT2Index >> 6; /* for LCD */ 8776 - if (((pVBInfo->VBInfo & SetCRT2ToLCD) | SetCRT2ToLCDA)) { /*301b*/ 8776 + if (pVBInfo->VBInfo & (SetCRT2ToLCD | SetCRT2ToLCDA)) { /*301b*/ 8777 8777 if (pVBInfo->LCDResInfo != Panel1024x768) 8778 8778 VCLKIndex = LCDXlat2VCLK[CRT2Index]; 8779 8779 else
+45 -45
drivers/tty/n_hdlc.c
··· 581 581 __u8 __user *buf, size_t nr) 582 582 { 583 583 struct n_hdlc *n_hdlc = tty2n_hdlc(tty); 584 - int ret; 584 + int ret = 0; 585 585 struct n_hdlc_buf *rbuf; 586 + DECLARE_WAITQUEUE(wait, current); 586 587 587 588 if (debuglevel >= DEBUG_LEVEL_INFO) 588 589 printk("%s(%d)n_hdlc_tty_read() called\n",__FILE__,__LINE__); ··· 599 598 return -EFAULT; 600 599 } 601 600 602 - tty_lock(); 601 + add_wait_queue(&tty->read_wait, &wait); 603 602 604 603 for (;;) { 605 604 if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) { 606 - tty_unlock(); 607 - return -EIO; 605 + ret = -EIO; 606 + break; 608 607 } 608 + if (tty_hung_up_p(file)) 609 + break; 609 610 610 - n_hdlc = tty2n_hdlc (tty); 611 - if (!n_hdlc || n_hdlc->magic != HDLC_MAGIC || 612 - tty != n_hdlc->tty) { 613 - tty_unlock(); 614 - return 0; 615 - } 611 + set_current_state(TASK_INTERRUPTIBLE); 616 612 617 613 rbuf = n_hdlc_buf_get(&n_hdlc->rx_buf_list); 618 - if (rbuf) 614 + if (rbuf) { 615 + if (rbuf->count > nr) { 616 + /* too large for caller's buffer */ 617 + ret = -EOVERFLOW; 618 + } else { 619 + if (copy_to_user(buf, rbuf->buf, rbuf->count)) 620 + ret = -EFAULT; 621 + else 622 + ret = rbuf->count; 623 + } 624 + 625 + if (n_hdlc->rx_free_buf_list.count > 626 + DEFAULT_RX_BUF_COUNT) 627 + kfree(rbuf); 628 + else 629 + n_hdlc_buf_put(&n_hdlc->rx_free_buf_list, rbuf); 619 630 break; 631 + } 620 632 621 633 /* no data */ 622 634 if (file->f_flags & O_NONBLOCK) { 623 - tty_unlock(); 624 - return -EAGAIN; 635 + ret = -EAGAIN; 636 + break; 625 637 } 626 - 627 - interruptible_sleep_on (&tty->read_wait); 638 + 639 + schedule(); 640 + 628 641 if (signal_pending(current)) { 629 - tty_unlock(); 630 - return -EINTR; 642 + ret = -EINTR; 643 + break; 631 644 } 632 645 } 633 - 634 - if (rbuf->count > nr) 635 - /* frame too large for caller's buffer (discard frame) */ 636 - ret = -EOVERFLOW; 637 - else { 638 - /* Copy the data to the caller's buffer */ 639 - if (copy_to_user(buf, rbuf->buf, rbuf->count)) 640 - ret = -EFAULT; 
641 - else 642 - ret = rbuf->count; 643 - } 644 - 645 - /* return HDLC buffer to free list unless the free list */ 646 - /* count has exceeded the default value, in which case the */ 647 - /* buffer is freed back to the OS to conserve memory */ 648 - if (n_hdlc->rx_free_buf_list.count > DEFAULT_RX_BUF_COUNT) 649 - kfree(rbuf); 650 - else 651 - n_hdlc_buf_put(&n_hdlc->rx_free_buf_list,rbuf); 652 - tty_unlock(); 646 + 647 + remove_wait_queue(&tty->read_wait, &wait); 648 + __set_current_state(TASK_RUNNING); 649 + 653 650 return ret; 654 651 655 652 } /* end of n_hdlc_tty_read() */ ··· 690 691 count = maxframe; 691 692 } 692 693 693 - tty_lock(); 694 - 695 694 add_wait_queue(&tty->write_wait, &wait); 696 - set_current_state(TASK_INTERRUPTIBLE); 695 + 696 + for (;;) { 697 + set_current_state(TASK_INTERRUPTIBLE); 697 698 698 - /* Allocate transmit buffer */ 699 - /* sleep until transmit buffer available */ 700 - while (!(tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list))) { 699 + tbuf = n_hdlc_buf_get(&n_hdlc->tx_free_buf_list); 700 + if (tbuf) 701 + break; 702 + 701 703 if (file->f_flags & O_NONBLOCK) { 702 704 error = -EAGAIN; 703 705 break; ··· 719 719 } 720 720 } 721 721 722 - set_current_state(TASK_RUNNING); 722 + __set_current_state(TASK_RUNNING); 723 723 remove_wait_queue(&tty->write_wait, &wait); 724 724 725 725 if (!error) { ··· 731 731 n_hdlc_buf_put(&n_hdlc->tx_buf_list,tbuf); 732 732 n_hdlc_send_frames(n_hdlc,tty); 733 733 } 734 - tty_unlock(); 734 + 735 735 return error; 736 736 737 737 } /* end of n_hdlc_tty_write() */
+2 -1
drivers/tty/serial/8250.c
··· 236 236 .fifo_size = 128, 237 237 .tx_loadsz = 128, 238 238 .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10, 239 - .flags = UART_CAP_FIFO | UART_CAP_EFR | UART_CAP_SLEEP, 239 + /* UART_CAP_EFR breaks billionon CF bluetooth card. */ 240 + .flags = UART_CAP_FIFO | UART_CAP_SLEEP, 240 241 }, 241 242 [PORT_16654] = { 242 243 .name = "ST16654",
+1
drivers/tty/serial/Kconfig
··· 1518 1518 config SERIAL_GRLIB_GAISLER_APBUART 1519 1519 tristate "GRLIB APBUART serial support" 1520 1520 depends on OF 1521 + select SERIAL_CORE 1521 1522 ---help--- 1522 1523 Add support for the GRLIB APBUART serial port. 1523 1524
+2 -2
drivers/tty/tty_io.c
··· 3257 3257 ssize_t count = 0; 3258 3258 3259 3259 console_lock(); 3260 - for (c = console_drivers; c; c = c->next) { 3260 + for_each_console(c) { 3261 3261 if (!c->device) 3262 3262 continue; 3263 3263 if (!c->write) ··· 3306 3306 if (IS_ERR(consdev)) 3307 3307 consdev = NULL; 3308 3308 else 3309 - device_create_file(consdev, &dev_attr_active); 3309 + WARN_ON(device_create_file(consdev, &dev_attr_active) < 0); 3310 3310 3311 3311 #ifdef CONFIG_VT 3312 3312 vty_init(&console_fops);
+8 -3
drivers/tty/vt/vt.c
··· 2994 2994 if (IS_ERR(tty0dev)) 2995 2995 tty0dev = NULL; 2996 2996 else 2997 - device_create_file(tty0dev, &dev_attr_active); 2997 + WARN_ON(device_create_file(tty0dev, &dev_attr_active) < 0); 2998 2998 2999 2999 vcs_init(); 3000 3000 ··· 3545 3545 3546 3546 /* already registered */ 3547 3547 if (con_driver->con == csw) 3548 - retval = -EINVAL; 3548 + retval = -EBUSY; 3549 3549 } 3550 3550 3551 3551 if (retval) ··· 3656 3656 int err; 3657 3657 3658 3658 err = register_con_driver(csw, first, last); 3659 - 3659 + /* if we get a busy error we still want to bind the console driver 3660 + * and return success, as we may have unbound the console driver 3661 + * but not unregistered it. 3662 + */ 3663 + if (err == -EBUSY) 3664 + err = 0; 3660 3665 if (!err) 3661 3666 bind_con_driver(csw, first, last, deflt); 3662 3667
+1 -1
drivers/usb/class/cdc-wdm.c
··· 342 342 goto outnp; 343 343 } 344 344 345 - if (!file->f_flags && O_NONBLOCK) 345 + if (!(file->f_flags & O_NONBLOCK)) 346 346 r = wait_event_interruptible(desc->wait, !test_bit(WDM_IN_USE, 347 347 &desc->flags)); 348 348 else
+1 -1
drivers/usb/core/endpoint.c
··· 192 192 ep_dev->dev.parent = parent; 193 193 ep_dev->dev.release = ep_device_release; 194 194 dev_set_name(&ep_dev->dev, "ep_%02x", endpoint->desc.bEndpointAddress); 195 - device_enable_async_suspend(&ep_dev->dev); 196 195 197 196 retval = device_register(&ep_dev->dev); 198 197 if (retval) 199 198 goto error_register; 200 199 200 + device_enable_async_suspend(&ep_dev->dev); 201 201 endpoint->ep_dev = ep_dev; 202 202 return retval; 203 203
+6 -1
drivers/usb/core/hcd-pci.c
··· 405 405 return retval; 406 406 } 407 407 408 - synchronize_irq(pci_dev->irq); 408 + /* If MSI-X is enabled, the driver will have synchronized all vectors 409 + * in pci_suspend(). If MSI or legacy PCI is enabled, that will be 410 + * synchronized here. 411 + */ 412 + if (!hcd->msix_enabled) 413 + synchronize_irq(pci_dev->irq); 409 414 410 415 /* Downstream ports from this root hub should already be quiesced, so 411 416 * there will be no DMA activity. Now we can shut down the upstream
+21
drivers/usb/core/hub.c
··· 676 676 static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) 677 677 { 678 678 struct usb_device *hdev = hub->hdev; 679 + struct usb_hcd *hcd; 680 + int ret; 679 681 int port1; 680 682 int status; 681 683 bool need_debounce_delay = false; ··· 716 714 usb_autopm_get_interface_no_resume( 717 715 to_usb_interface(hub->intfdev)); 718 716 return; /* Continues at init2: below */ 717 + } else if (type == HUB_RESET_RESUME) { 718 + /* The internal host controller state for the hub device 719 + * may be gone after a host power loss on system resume. 720 + * Update the device's info so the HW knows it's a hub. 721 + */ 722 + hcd = bus_to_hcd(hdev->bus); 723 + if (hcd->driver->update_hub_device) { 724 + ret = hcd->driver->update_hub_device(hcd, hdev, 725 + &hub->tt, GFP_NOIO); 726 + if (ret < 0) { 727 + dev_err(hub->intfdev, "Host not " 728 + "accepting hub info " 729 + "update.\n"); 730 + dev_err(hub->intfdev, "LS/FS devices " 731 + "and hubs may not work " 732 + "under this hub.\n"); 733 + } 734 + } 735 + hub_power_on(hub, true); 719 736 } else { 720 737 hub_power_on(hub, true); 721 738 }
+6 -1
drivers/usb/gadget/Kconfig
··· 509 509 select USB_GADGET_SELECTED 510 510 511 511 config USB_GADGET_EG20T 512 - boolean "Intel EG20T(Topcliff) USB Device controller" 512 + boolean "Intel EG20T PCH/OKI SEMICONDUCTOR ML7213 IOH UDC" 513 513 depends on PCI 514 514 select USB_GADGET_DUALSPEED 515 515 help ··· 524 524 This driver supports both control transfer and bulk transfer modes. 525 525 This driver does not support interrupt transfer or isochronous 526 526 transfer modes. 527 + 528 + This driver can also be used for OKI SEMICONDUCTOR's ML7213, which is 529 + for IVI (In-Vehicle Infotainment) use. 530 + ML7213 is a companion chip for the Intel Atom E6xx series. 531 + ML7213 is completely compatible with the Intel EG20T PCH. 527 532 528 533 config USB_EG20T 529 534 tristate
+137 -131
drivers/usb/gadget/ci13xxx_udc.c
··· 76 76 77 77 /* control endpoint description */ 78 78 static const struct usb_endpoint_descriptor 79 - ctrl_endpt_desc = { 79 + ctrl_endpt_out_desc = { 80 80 .bLength = USB_DT_ENDPOINT_SIZE, 81 81 .bDescriptorType = USB_DT_ENDPOINT, 82 82 83 + .bEndpointAddress = USB_DIR_OUT, 84 + .bmAttributes = USB_ENDPOINT_XFER_CONTROL, 85 + .wMaxPacketSize = cpu_to_le16(CTRL_PAYLOAD_MAX), 86 + }; 87 + 88 + static const struct usb_endpoint_descriptor 89 + ctrl_endpt_in_desc = { 90 + .bLength = USB_DT_ENDPOINT_SIZE, 91 + .bDescriptorType = USB_DT_ENDPOINT, 92 + 93 + .bEndpointAddress = USB_DIR_IN, 83 94 .bmAttributes = USB_ENDPOINT_XFER_CONTROL, 84 95 .wMaxPacketSize = cpu_to_le16(CTRL_PAYLOAD_MAX), 85 96 }; ··· 276 265 hw_bank.size /= sizeof(u32); 277 266 278 267 reg = hw_aread(ABS_DCCPARAMS, DCCPARAMS_DEN) >> ffs_nr(DCCPARAMS_DEN); 279 - if (reg == 0 || reg > ENDPT_MAX) 280 - return -ENODEV; 268 + hw_ep_max = reg * 2; /* cache hw ENDPT_MAX */ 281 269 282 - hw_ep_max = reg; /* cache hw ENDPT_MAX */ 270 + if (hw_ep_max == 0 || hw_ep_max > ENDPT_MAX) 271 + return -ENODEV; 283 272 284 273 /* setup lock mode ? 
*/ 285 274 ··· 1208 1197 } 1209 1198 1210 1199 spin_lock_irqsave(udc->lock, flags); 1211 - for (i = 0; i < hw_ep_max; i++) { 1212 - struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[i]; 1200 + for (i = 0; i < hw_ep_max/2; i++) { 1201 + struct ci13xxx_ep *mEpRx = &udc->ci13xxx_ep[i]; 1202 + struct ci13xxx_ep *mEpTx = &udc->ci13xxx_ep[i + hw_ep_max/2]; 1213 1203 n += scnprintf(buf + n, PAGE_SIZE - n, 1214 1204 "EP=%02i: RX=%08X TX=%08X\n", 1215 - i, (u32)mEp->qh[RX].dma, (u32)mEp->qh[TX].dma); 1205 + i, (u32)mEpRx->qh.dma, (u32)mEpTx->qh.dma); 1216 1206 for (j = 0; j < (sizeof(struct ci13xxx_qh)/sizeof(u32)); j++) { 1217 1207 n += scnprintf(buf + n, PAGE_SIZE - n, 1218 1208 " %04X: %08X %08X\n", j, 1219 - *((u32 *)mEp->qh[RX].ptr + j), 1220 - *((u32 *)mEp->qh[TX].ptr + j)); 1209 + *((u32 *)mEpRx->qh.ptr + j), 1210 + *((u32 *)mEpTx->qh.ptr + j)); 1221 1211 } 1222 1212 } 1223 1213 spin_unlock_irqrestore(udc->lock, flags); ··· 1305 1293 unsigned long flags; 1306 1294 struct list_head *ptr = NULL; 1307 1295 struct ci13xxx_req *req = NULL; 1308 - unsigned i, j, k, n = 0, qSize = sizeof(struct ci13xxx_td)/sizeof(u32); 1296 + unsigned i, j, n = 0, qSize = sizeof(struct ci13xxx_td)/sizeof(u32); 1309 1297 1310 1298 dbg_trace("[%s] %p\n", __func__, buf); 1311 1299 if (attr == NULL || buf == NULL) { ··· 1315 1303 1316 1304 spin_lock_irqsave(udc->lock, flags); 1317 1305 for (i = 0; i < hw_ep_max; i++) 1318 - for (k = RX; k <= TX; k++) 1319 - list_for_each(ptr, &udc->ci13xxx_ep[i].qh[k].queue) 1320 - { 1321 - req = list_entry(ptr, 1322 - struct ci13xxx_req, queue); 1306 + list_for_each(ptr, &udc->ci13xxx_ep[i].qh.queue) 1307 + { 1308 + req = list_entry(ptr, struct ci13xxx_req, queue); 1323 1309 1310 + n += scnprintf(buf + n, PAGE_SIZE - n, 1311 + "EP=%02i: TD=%08X %s\n", 1312 + i % hw_ep_max/2, (u32)req->dma, 1313 + ((i < hw_ep_max/2) ? 
"RX" : "TX")); 1314 + 1315 + for (j = 0; j < qSize; j++) 1324 1316 n += scnprintf(buf + n, PAGE_SIZE - n, 1325 - "EP=%02i: TD=%08X %s\n", 1326 - i, (u32)req->dma, 1327 - ((k == RX) ? "RX" : "TX")); 1328 - 1329 - for (j = 0; j < qSize; j++) 1330 - n += scnprintf(buf + n, PAGE_SIZE - n, 1331 - " %04X: %08X\n", j, 1332 - *((u32 *)req->ptr + j)); 1333 - } 1317 + " %04X: %08X\n", j, 1318 + *((u32 *)req->ptr + j)); 1319 + } 1334 1320 spin_unlock_irqrestore(udc->lock, flags); 1335 1321 1336 1322 return n; ··· 1477 1467 * At this point it's guaranteed exclusive access to qhead 1478 1468 * (endpt is not primed) so it's no need to use tripwire 1479 1469 */ 1480 - mEp->qh[mEp->dir].ptr->td.next = mReq->dma; /* TERMINATE = 0 */ 1481 - mEp->qh[mEp->dir].ptr->td.token &= ~TD_STATUS; /* clear status */ 1470 + mEp->qh.ptr->td.next = mReq->dma; /* TERMINATE = 0 */ 1471 + mEp->qh.ptr->td.token &= ~TD_STATUS; /* clear status */ 1482 1472 if (mReq->req.zero == 0) 1483 - mEp->qh[mEp->dir].ptr->cap |= QH_ZLT; 1473 + mEp->qh.ptr->cap |= QH_ZLT; 1484 1474 else 1485 - mEp->qh[mEp->dir].ptr->cap &= ~QH_ZLT; 1475 + mEp->qh.ptr->cap &= ~QH_ZLT; 1486 1476 1487 1477 wmb(); /* synchronize before ep prime */ 1488 1478 ··· 1552 1542 1553 1543 hw_ep_flush(mEp->num, mEp->dir); 1554 1544 1555 - while (!list_empty(&mEp->qh[mEp->dir].queue)) { 1545 + while (!list_empty(&mEp->qh.queue)) { 1556 1546 1557 1547 /* pop oldest request */ 1558 1548 struct ci13xxx_req *mReq = \ 1559 - list_entry(mEp->qh[mEp->dir].queue.next, 1549 + list_entry(mEp->qh.queue.next, 1560 1550 struct ci13xxx_req, queue); 1561 1551 list_del_init(&mReq->queue); 1562 1552 mReq->req.status = -ESHUTDOWN; ··· 1581 1571 { 1582 1572 struct usb_ep *ep; 1583 1573 struct ci13xxx *udc = container_of(gadget, struct ci13xxx, gadget); 1584 - struct ci13xxx_ep *mEp = container_of(gadget->ep0, 1585 - struct ci13xxx_ep, ep); 1586 1574 1587 1575 trace("%p", gadget); 1588 1576 ··· 1591 1583 gadget_for_each_ep(ep, gadget) { 1592 1584 
usb_ep_fifo_flush(ep); 1593 1585 } 1594 - usb_ep_fifo_flush(gadget->ep0); 1586 + usb_ep_fifo_flush(&udc->ep0out.ep); 1587 + usb_ep_fifo_flush(&udc->ep0in.ep); 1595 1588 1596 1589 udc->driver->disconnect(gadget); 1597 1590 ··· 1600 1591 gadget_for_each_ep(ep, gadget) { 1601 1592 usb_ep_disable(ep); 1602 1593 } 1603 - usb_ep_disable(gadget->ep0); 1594 + usb_ep_disable(&udc->ep0out.ep); 1595 + usb_ep_disable(&udc->ep0in.ep); 1604 1596 1605 - if (mEp->status != NULL) { 1606 - usb_ep_free_request(gadget->ep0, mEp->status); 1607 - mEp->status = NULL; 1597 + if (udc->status != NULL) { 1598 + usb_ep_free_request(&udc->ep0in.ep, udc->status); 1599 + udc->status = NULL; 1608 1600 } 1609 1601 1610 1602 return 0; ··· 1624 1614 __releases(udc->lock) 1625 1615 __acquires(udc->lock) 1626 1616 { 1627 - struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[0]; 1628 1617 int retval; 1629 1618 1630 1619 trace("%p", udc); ··· 1644 1635 if (retval) 1645 1636 goto done; 1646 1637 1647 - retval = usb_ep_enable(&mEp->ep, &ctrl_endpt_desc); 1638 + retval = usb_ep_enable(&udc->ep0out.ep, &ctrl_endpt_out_desc); 1639 + if (retval) 1640 + goto done; 1641 + 1642 + retval = usb_ep_enable(&udc->ep0in.ep, &ctrl_endpt_in_desc); 1648 1643 if (!retval) { 1649 - mEp->status = usb_ep_alloc_request(&mEp->ep, GFP_ATOMIC); 1650 - if (mEp->status == NULL) { 1651 - usb_ep_disable(&mEp->ep); 1644 + udc->status = usb_ep_alloc_request(&udc->ep0in.ep, GFP_ATOMIC); 1645 + if (udc->status == NULL) { 1646 + usb_ep_disable(&udc->ep0out.ep); 1652 1647 retval = -ENOMEM; 1653 1648 } 1654 1649 } ··· 1685 1672 1686 1673 /** 1687 1674 * isr_get_status_response: get_status request response 1688 - * @ep: endpoint 1675 + * @udc: udc struct 1689 1676 * @setup: setup request packet 1690 1677 * 1691 1678 * This function returns an error code 1692 1679 */ 1693 - static int isr_get_status_response(struct ci13xxx_ep *mEp, 1680 + static int isr_get_status_response(struct ci13xxx *udc, 1694 1681 struct usb_ctrlrequest *setup) 1695 1682 
__releases(mEp->lock) 1696 1683 __acquires(mEp->lock) 1697 1684 { 1685 + struct ci13xxx_ep *mEp = &udc->ep0in; 1698 1686 struct usb_request *req = NULL; 1699 1687 gfp_t gfp_flags = GFP_ATOMIC; 1700 1688 int dir, num, retval; ··· 1750 1736 1751 1737 /** 1752 1738 * isr_setup_status_phase: queues the status phase of a setup transaction 1753 - * @mEp: endpoint 1739 + * @udc: udc struct 1754 1740 * 1755 1741 * This function returns an error code 1756 1742 */ 1757 - static int isr_setup_status_phase(struct ci13xxx_ep *mEp) 1743 + static int isr_setup_status_phase(struct ci13xxx *udc) 1758 1744 __releases(mEp->lock) 1759 1745 __acquires(mEp->lock) 1760 1746 { 1761 1747 int retval; 1748 + struct ci13xxx_ep *mEp; 1762 1749 1763 - trace("%p", mEp); 1750 + trace("%p", udc); 1764 1751 1765 - /* mEp is always valid & configured */ 1766 - 1767 - if (mEp->type == USB_ENDPOINT_XFER_CONTROL) 1768 - mEp->dir = (mEp->dir == TX) ? RX : TX; 1769 - 1770 - mEp->status->no_interrupt = 1; 1752 + mEp = (udc->ep0_dir == TX) ? 
&udc->ep0out : &udc->ep0in; 1771 1753 1772 1754 spin_unlock(mEp->lock); 1773 - retval = usb_ep_queue(&mEp->ep, mEp->status, GFP_ATOMIC); 1755 + retval = usb_ep_queue(&mEp->ep, udc->status, GFP_ATOMIC); 1774 1756 spin_lock(mEp->lock); 1775 1757 1776 1758 return retval; ··· 1788 1778 1789 1779 trace("%p", mEp); 1790 1780 1791 - if (list_empty(&mEp->qh[mEp->dir].queue)) 1781 + if (list_empty(&mEp->qh.queue)) 1792 1782 return -EINVAL; 1793 1783 1794 1784 /* pop oldest request */ 1795 - mReq = list_entry(mEp->qh[mEp->dir].queue.next, 1785 + mReq = list_entry(mEp->qh.queue.next, 1796 1786 struct ci13xxx_req, queue); 1797 1787 list_del_init(&mReq->queue); 1798 1788 ··· 1804 1794 1805 1795 dbg_done(_usb_addr(mEp), mReq->ptr->token, retval); 1806 1796 1807 - if (!list_empty(&mEp->qh[mEp->dir].queue)) { 1797 + if (!list_empty(&mEp->qh.queue)) { 1808 1798 struct ci13xxx_req* mReqEnq; 1809 1799 1810 - mReqEnq = list_entry(mEp->qh[mEp->dir].queue.next, 1800 + mReqEnq = list_entry(mEp->qh.queue.next, 1811 1801 struct ci13xxx_req, queue); 1812 1802 _hardware_enqueue(mEp, mReqEnq); 1813 1803 } ··· 1846 1836 int type, num, err = -EINVAL; 1847 1837 struct usb_ctrlrequest req; 1848 1838 1849 - 1850 1839 if (mEp->desc == NULL) 1851 1840 continue; /* not configured */ 1852 1841 1853 - if ((mEp->dir == RX && hw_test_and_clear_complete(i)) || 1854 - (mEp->dir == TX && hw_test_and_clear_complete(i + 16))) { 1842 + if (hw_test_and_clear_complete(i)) { 1855 1843 err = isr_tr_complete_low(mEp); 1856 1844 if (mEp->type == USB_ENDPOINT_XFER_CONTROL) { 1857 1845 if (err > 0) /* needs status phase */ 1858 - err = isr_setup_status_phase(mEp); 1846 + err = isr_setup_status_phase(udc); 1859 1847 if (err < 0) { 1860 1848 dbg_event(_usb_addr(mEp), 1861 1849 "ERROR", err); ··· 1874 1866 continue; 1875 1867 } 1876 1868 1869 + /* 1870 + * Flush data and handshake transactions of previous 1871 + * setup packet. 
1872 + */ 1873 + _ep_nuke(&udc->ep0out); 1874 + _ep_nuke(&udc->ep0in); 1875 + 1877 1876 /* read_setup_packet */ 1878 1877 do { 1879 1878 hw_test_and_set_setup_guard(); 1880 - memcpy(&req, &mEp->qh[RX].ptr->setup, sizeof(req)); 1879 + memcpy(&req, &mEp->qh.ptr->setup, sizeof(req)); 1881 1880 } while (!hw_test_and_clear_setup_guard()); 1882 1881 1883 1882 type = req.bRequestType; 1884 1883 1885 - mEp->dir = (type & USB_DIR_IN) ? TX : RX; 1884 + udc->ep0_dir = (type & USB_DIR_IN) ? TX : RX; 1886 1885 1887 1886 dbg_setup(_usb_addr(mEp), &req); 1888 1887 ··· 1910 1895 if (err) 1911 1896 break; 1912 1897 } 1913 - err = isr_setup_status_phase(mEp); 1898 + err = isr_setup_status_phase(udc); 1914 1899 break; 1915 1900 case USB_REQ_GET_STATUS: 1916 1901 if (type != (USB_DIR_IN|USB_RECIP_DEVICE) && ··· 1920 1905 if (le16_to_cpu(req.wLength) != 2 || 1921 1906 le16_to_cpu(req.wValue) != 0) 1922 1907 break; 1923 - err = isr_get_status_response(mEp, &req); 1908 + err = isr_get_status_response(udc, &req); 1924 1909 break; 1925 1910 case USB_REQ_SET_ADDRESS: 1926 1911 if (type != (USB_DIR_OUT|USB_RECIP_DEVICE)) ··· 1931 1916 err = hw_usb_set_address((u8)le16_to_cpu(req.wValue)); 1932 1917 if (err) 1933 1918 break; 1934 - err = isr_setup_status_phase(mEp); 1919 + err = isr_setup_status_phase(udc); 1935 1920 break; 1936 1921 case USB_REQ_SET_FEATURE: 1937 1922 if (type != (USB_DIR_OUT|USB_RECIP_ENDPOINT) && ··· 1947 1932 spin_lock(udc->lock); 1948 1933 if (err) 1949 1934 break; 1950 - err = isr_setup_status_phase(mEp); 1935 + err = isr_setup_status_phase(udc); 1951 1936 break; 1952 1937 default: 1953 1938 delegate: 1954 1939 if (req.wLength == 0) /* no data phase */ 1955 - mEp->dir = TX; 1940 + udc->ep0_dir = TX; 1956 1941 1957 1942 spin_unlock(udc->lock); 1958 1943 err = udc->driver->setup(&udc->gadget, &req); ··· 1983 1968 const struct usb_endpoint_descriptor *desc) 1984 1969 { 1985 1970 struct ci13xxx_ep *mEp = container_of(ep, struct ci13xxx_ep, ep); 1986 - int direction, retval 
= 0; 1971 + int retval = 0; 1987 1972 unsigned long flags; 1988 1973 1989 1974 trace("%p, %p", ep, desc); ··· 1997 1982 1998 1983 mEp->desc = desc; 1999 1984 2000 - if (!list_empty(&mEp->qh[mEp->dir].queue)) 1985 + if (!list_empty(&mEp->qh.queue)) 2001 1986 warn("enabling a non-empty endpoint!"); 2002 1987 2003 1988 mEp->dir = usb_endpoint_dir_in(desc) ? TX : RX; ··· 2006 1991 2007 1992 mEp->ep.maxpacket = __constant_le16_to_cpu(desc->wMaxPacketSize); 2008 1993 2009 - direction = mEp->dir; 2010 - do { 2011 - dbg_event(_usb_addr(mEp), "ENABLE", 0); 1994 + dbg_event(_usb_addr(mEp), "ENABLE", 0); 2012 1995 2013 - mEp->qh[mEp->dir].ptr->cap = 0; 1996 + mEp->qh.ptr->cap = 0; 2014 1997 2015 - if (mEp->type == USB_ENDPOINT_XFER_CONTROL) 2016 - mEp->qh[mEp->dir].ptr->cap |= QH_IOS; 2017 - else if (mEp->type == USB_ENDPOINT_XFER_ISOC) 2018 - mEp->qh[mEp->dir].ptr->cap &= ~QH_MULT; 2019 - else 2020 - mEp->qh[mEp->dir].ptr->cap &= ~QH_ZLT; 1998 + if (mEp->type == USB_ENDPOINT_XFER_CONTROL) 1999 + mEp->qh.ptr->cap |= QH_IOS; 2000 + else if (mEp->type == USB_ENDPOINT_XFER_ISOC) 2001 + mEp->qh.ptr->cap &= ~QH_MULT; 2002 + else 2003 + mEp->qh.ptr->cap &= ~QH_ZLT; 2021 2004 2022 - mEp->qh[mEp->dir].ptr->cap |= 2023 - (mEp->ep.maxpacket << ffs_nr(QH_MAX_PKT)) & QH_MAX_PKT; 2024 - mEp->qh[mEp->dir].ptr->td.next |= TD_TERMINATE; /* needed? */ 2005 + mEp->qh.ptr->cap |= 2006 + (mEp->ep.maxpacket << ffs_nr(QH_MAX_PKT)) & QH_MAX_PKT; 2007 + mEp->qh.ptr->td.next |= TD_TERMINATE; /* needed? */ 2025 2008 2026 - retval |= hw_ep_enable(mEp->num, mEp->dir, mEp->type); 2027 - 2028 - if (mEp->type == USB_ENDPOINT_XFER_CONTROL) 2029 - mEp->dir = (mEp->dir == TX) ? 
RX : TX; 2030 - 2031 - } while (mEp->dir != direction); 2009 + retval |= hw_ep_enable(mEp->num, mEp->dir, mEp->type); 2032 2010 2033 2011 spin_unlock_irqrestore(mEp->lock, flags); 2034 2012 return retval; ··· 2154 2146 spin_lock_irqsave(mEp->lock, flags); 2155 2147 2156 2148 if (mEp->type == USB_ENDPOINT_XFER_CONTROL && 2157 - !list_empty(&mEp->qh[mEp->dir].queue)) { 2149 + !list_empty(&mEp->qh.queue)) { 2158 2150 _ep_nuke(mEp); 2159 2151 retval = -EOVERFLOW; 2160 2152 warn("endpoint ctrl %X nuked", _usb_addr(mEp)); ··· 2178 2170 /* push request */ 2179 2171 mReq->req.status = -EINPROGRESS; 2180 2172 mReq->req.actual = 0; 2181 - list_add_tail(&mReq->queue, &mEp->qh[mEp->dir].queue); 2173 + list_add_tail(&mReq->queue, &mEp->qh.queue); 2182 2174 2183 - if (list_is_singular(&mEp->qh[mEp->dir].queue)) 2175 + if (list_is_singular(&mEp->qh.queue)) 2184 2176 retval = _hardware_enqueue(mEp, mReq); 2185 2177 2186 2178 if (retval == -EALREADY) { ··· 2207 2199 trace("%p, %p", ep, req); 2208 2200 2209 2201 if (ep == NULL || req == NULL || mEp->desc == NULL || 2210 - list_empty(&mReq->queue) || list_empty(&mEp->qh[mEp->dir].queue)) 2202 + list_empty(&mReq->queue) || list_empty(&mEp->qh.queue)) 2211 2203 return -EINVAL; 2212 2204 2213 2205 spin_lock_irqsave(mEp->lock, flags); ··· 2252 2244 #ifndef STALL_IN 2253 2245 /* g_file_storage MS compliant but g_zero fails chapter 9 compliance */ 2254 2246 if (value && mEp->type == USB_ENDPOINT_XFER_BULK && mEp->dir == TX && 2255 - !list_empty(&mEp->qh[mEp->dir].queue)) { 2247 + !list_empty(&mEp->qh.queue)) { 2256 2248 spin_unlock_irqrestore(mEp->lock, flags); 2257 2249 return -EAGAIN; 2258 2250 } ··· 2363 2355 if (is_active) { 2364 2356 pm_runtime_get_sync(&_gadget->dev); 2365 2357 hw_device_reset(udc); 2366 - hw_device_state(udc->ci13xxx_ep[0].qh[RX].dma); 2358 + hw_device_state(udc->ep0out.qh.dma); 2367 2359 } else { 2368 2360 hw_device_state(0); 2369 2361 if (udc->udc_driver->notify_event) ··· 2398 2390 int (*bind)(struct usb_gadget 
*)) 2399 2391 { 2400 2392 struct ci13xxx *udc = _udc; 2401 - unsigned long i, k, flags; 2393 + unsigned long flags; 2394 + int i, j; 2402 2395 int retval = -ENOMEM; 2403 2396 2404 2397 trace("%p", driver); ··· 2436 2427 2437 2428 info("hw_ep_max = %d", hw_ep_max); 2438 2429 2439 - udc->driver = driver; 2440 2430 udc->gadget.dev.driver = NULL; 2441 2431 2442 2432 retval = 0; 2443 - for (i = 0; i < hw_ep_max; i++) { 2444 - struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[i]; 2433 + for (i = 0; i < hw_ep_max/2; i++) { 2434 + for (j = RX; j <= TX; j++) { 2435 + int k = i + j * hw_ep_max/2; 2436 + struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[k]; 2445 2437 2446 - scnprintf(mEp->name, sizeof(mEp->name), "ep%i", (int)i); 2438 + scnprintf(mEp->name, sizeof(mEp->name), "ep%i%s", i, 2439 + (j == TX) ? "in" : "out"); 2447 2440 2448 - mEp->lock = udc->lock; 2449 - mEp->device = &udc->gadget.dev; 2450 - mEp->td_pool = udc->td_pool; 2441 + mEp->lock = udc->lock; 2442 + mEp->device = &udc->gadget.dev; 2443 + mEp->td_pool = udc->td_pool; 2451 2444 2452 - mEp->ep.name = mEp->name; 2453 - mEp->ep.ops = &usb_ep_ops; 2454 - mEp->ep.maxpacket = CTRL_PAYLOAD_MAX; 2445 + mEp->ep.name = mEp->name; 2446 + mEp->ep.ops = &usb_ep_ops; 2447 + mEp->ep.maxpacket = CTRL_PAYLOAD_MAX; 2455 2448 2456 - /* this allocation cannot be random */ 2457 - for (k = RX; k <= TX; k++) { 2458 - INIT_LIST_HEAD(&mEp->qh[k].queue); 2449 + INIT_LIST_HEAD(&mEp->qh.queue); 2459 2450 spin_unlock_irqrestore(udc->lock, flags); 2460 - mEp->qh[k].ptr = dma_pool_alloc(udc->qh_pool, 2461 - GFP_KERNEL, 2462 - &mEp->qh[k].dma); 2451 + mEp->qh.ptr = dma_pool_alloc(udc->qh_pool, GFP_KERNEL, 2452 + &mEp->qh.dma); 2463 2453 spin_lock_irqsave(udc->lock, flags); 2464 - if (mEp->qh[k].ptr == NULL) 2454 + if (mEp->qh.ptr == NULL) 2465 2455 retval = -ENOMEM; 2466 2456 else 2467 - memset(mEp->qh[k].ptr, 0, 2468 - sizeof(*mEp->qh[k].ptr)); 2469 - } 2470 - if (i == 0) 2471 - udc->gadget.ep0 = &mEp->ep; 2472 - else 2457 + memset(mEp->qh.ptr, 0, 
sizeof(*mEp->qh.ptr)); 2458 + 2459 + /* skip ep0 out and in endpoints */ 2460 + if (i == 0) 2461 + continue; 2462 + 2473 2463 list_add_tail(&mEp->ep.ep_list, &udc->gadget.ep_list); 2464 + } 2474 2465 } 2475 2466 if (retval) 2476 2467 goto done; 2477 2468 2469 + udc->gadget.ep0 = &udc->ep0in.ep; 2478 2470 /* bind gadget */ 2479 2471 driver->driver.bus = NULL; 2480 2472 udc->gadget.dev.driver = &driver->driver; ··· 2489 2479 goto done; 2490 2480 } 2491 2481 2482 + udc->driver = driver; 2492 2483 pm_runtime_get_sync(&udc->gadget.dev); 2493 2484 if (udc->udc_driver->flags & CI13XXX_PULLUP_ON_VBUS) { 2494 2485 if (udc->vbus_active) { ··· 2501 2490 } 2502 2491 } 2503 2492 2504 - retval = hw_device_state(udc->ci13xxx_ep[0].qh[RX].dma); 2493 + retval = hw_device_state(udc->ep0out.qh.dma); 2505 2494 if (retval) 2506 2495 pm_runtime_put_sync(&udc->gadget.dev); 2507 2496 2508 2497 done: 2509 2498 spin_unlock_irqrestore(udc->lock, flags); 2510 - if (retval) 2511 - usb_gadget_unregister_driver(driver); 2512 2499 return retval; 2513 2500 } 2514 2501 EXPORT_SYMBOL(usb_gadget_probe_driver); ··· 2519 2510 int usb_gadget_unregister_driver(struct usb_gadget_driver *driver) 2520 2511 { 2521 2512 struct ci13xxx *udc = _udc; 2522 - unsigned long i, k, flags; 2513 + unsigned long i, flags; 2523 2514 2524 2515 trace("%p", driver); 2525 2516 ··· 2555 2546 for (i = 0; i < hw_ep_max; i++) { 2556 2547 struct ci13xxx_ep *mEp = &udc->ci13xxx_ep[i]; 2557 2548 2558 - if (i == 0) 2559 - udc->gadget.ep0 = NULL; 2560 - else if (!list_empty(&mEp->ep.ep_list)) 2549 + if (!list_empty(&mEp->ep.ep_list)) 2561 2550 list_del_init(&mEp->ep.ep_list); 2562 2551 2563 - for (k = RX; k <= TX; k++) 2564 - if (mEp->qh[k].ptr != NULL) 2565 - dma_pool_free(udc->qh_pool, 2566 - mEp->qh[k].ptr, mEp->qh[k].dma); 2552 + if (mEp->qh.ptr != NULL) 2553 + dma_pool_free(udc->qh_pool, mEp->qh.ptr, mEp->qh.dma); 2567 2554 } 2568 2555 2556 + udc->gadget.ep0 = NULL; 2569 2557 udc->driver = NULL; 2570 2558 2571 2559 
spin_unlock_irqrestore(udc->lock, flags);
+6 -3
drivers/usb/gadget/ci13xxx_udc.h
··· 20 20 * DEFINE 21 21 *****************************************************************************/ 22 22 #define CI13XXX_PAGE_SIZE 4096ul /* page size for TD's */ 23 - #define ENDPT_MAX (16) 23 + #define ENDPT_MAX (32) 24 24 #define CTRL_PAYLOAD_MAX (64) 25 25 #define RX (0) /* similar to USB_DIR_OUT but can be used as an index */ 26 26 #define TX (1) /* similar to USB_DIR_IN but can be used as an index */ ··· 88 88 struct list_head queue; 89 89 struct ci13xxx_qh *ptr; 90 90 dma_addr_t dma; 91 - } qh[2]; 92 - struct usb_request *status; 91 + } qh; 93 92 int wedge; 94 93 95 94 /* global resources */ ··· 118 119 119 120 struct dma_pool *qh_pool; /* DMA pool for queue heads */ 120 121 struct dma_pool *td_pool; /* DMA pool for transfer descs */ 122 + struct usb_request *status; /* ep0 status request */ 121 123 122 124 struct usb_gadget gadget; /* USB slave device */ 123 125 struct ci13xxx_ep ci13xxx_ep[ENDPT_MAX]; /* extended endpts */ 126 + u32 ep0_dir; /* ep0 direction */ 127 + #define ep0out ci13xxx_ep[0] 128 + #define ep0in ci13xxx_ep[16] 124 129 125 130 struct usb_gadget_driver *driver; /* 3rd party gadget driver */ 126 131 struct ci13xxx_udc_driver *udc_driver; /* device controller driver */
+3 -2
drivers/usb/gadget/composite.c
··· 928 928 */ 929 929 switch (ctrl->bRequestType & USB_RECIP_MASK) { 930 930 case USB_RECIP_INTERFACE: 931 - if (cdev->config) 932 - f = cdev->config->interface[intf]; 931 + if (!cdev->config || w_index >= MAX_CONFIG_INTERFACES) 932 + break; 933 + f = cdev->config->interface[intf]; 933 934 break; 934 935 935 936 case USB_RECIP_ENDPOINT:
+76 -51
drivers/usb/gadget/pch_udc.c
··· 198 198 #define PCH_UDC_BRLEN 0x0F /* Burst length */ 199 199 #define PCH_UDC_THLEN 0x1F /* Threshold length */ 200 200 /* Value of EP Buffer Size */ 201 - #define UDC_EP0IN_BUFF_SIZE 64 202 - #define UDC_EPIN_BUFF_SIZE 512 203 - #define UDC_EP0OUT_BUFF_SIZE 64 204 - #define UDC_EPOUT_BUFF_SIZE 512 201 + #define UDC_EP0IN_BUFF_SIZE 16 202 + #define UDC_EPIN_BUFF_SIZE 256 203 + #define UDC_EP0OUT_BUFF_SIZE 16 204 + #define UDC_EPOUT_BUFF_SIZE 256 205 205 /* Value of EP maximum packet size */ 206 206 #define UDC_EP0IN_MAX_PKT_SIZE 64 207 207 #define UDC_EP0OUT_MAX_PKT_SIZE 64 ··· 351 351 struct pci_pool *data_requests; 352 352 struct pci_pool *stp_requests; 353 353 dma_addr_t dma_addr; 354 - unsigned long ep0out_buf[64]; 354 + void *ep0out_buf; 355 355 struct usb_ctrlrequest setup_data; 356 356 unsigned long phys_addr; 357 357 void __iomem *base_addr; ··· 361 361 362 362 #define PCH_UDC_PCI_BAR 1 363 363 #define PCI_DEVICE_ID_INTEL_EG20T_UDC 0x8808 364 + #define PCI_VENDOR_ID_ROHM 0x10DB 365 + #define PCI_DEVICE_ID_ML7213_IOH_UDC 0x801D 364 366 365 367 static const char ep0_string[] = "ep0in"; 366 368 static DEFINE_SPINLOCK(udc_stall_spinlock); /* stall spin lock */ ··· 1221 1219 dev = ep->dev; 1222 1220 if (req->dma_mapped) { 1223 1221 if (ep->in) 1224 - pci_unmap_single(dev->pdev, req->req.dma, 1225 - req->req.length, PCI_DMA_TODEVICE); 1222 + dma_unmap_single(&dev->pdev->dev, req->req.dma, 1223 + req->req.length, DMA_TO_DEVICE); 1226 1224 else 1227 - pci_unmap_single(dev->pdev, req->req.dma, 1228 - req->req.length, PCI_DMA_FROMDEVICE); 1225 + dma_unmap_single(&dev->pdev->dev, req->req.dma, 1226 + req->req.length, DMA_FROM_DEVICE); 1229 1227 req->dma_mapped = 0; 1230 1228 req->req.dma = DMA_ADDR_INVALID; 1231 1229 } ··· 1416 1414 1417 1415 pch_udc_clear_dma(ep->dev, DMA_DIR_RX); 1418 1416 td_data = req->td_data; 1419 - ep->td_data = req->td_data; 1420 1417 /* Set the status bits for all descriptors */ 1421 1418 while (1) { 1422 1419 td_data->status = 
(td_data->status & ~PCH_UDC_BUFF_STS) | ··· 1614 1613 if (usbreq->length && 1615 1614 ((usbreq->dma == DMA_ADDR_INVALID) || !usbreq->dma)) { 1616 1615 if (ep->in) 1617 - usbreq->dma = pci_map_single(dev->pdev, usbreq->buf, 1618 - usbreq->length, PCI_DMA_TODEVICE); 1616 + usbreq->dma = dma_map_single(&dev->pdev->dev, 1617 + usbreq->buf, 1618 + usbreq->length, 1619 + DMA_TO_DEVICE); 1619 1620 else 1620 - usbreq->dma = pci_map_single(dev->pdev, usbreq->buf, 1621 - usbreq->length, PCI_DMA_FROMDEVICE); 1621 + usbreq->dma = dma_map_single(&dev->pdev->dev, 1622 + usbreq->buf, 1623 + usbreq->length, 1624 + DMA_FROM_DEVICE); 1622 1625 req->dma_mapped = 1; 1623 1626 } 1624 1627 if (usbreq->length > 0) { 1625 - retval = prepare_dma(ep, req, gfp); 1628 + retval = prepare_dma(ep, req, GFP_ATOMIC); 1626 1629 if (retval) 1627 1630 goto probe_end; 1628 1631 } ··· 1651 1646 pch_udc_wait_ep_stall(ep); 1652 1647 pch_udc_ep_clear_nak(ep); 1653 1648 pch_udc_enable_ep_interrupts(ep->dev, (1 << ep->num)); 1654 - pch_udc_set_dma(dev, DMA_DIR_TX); 1655 1649 } 1656 1650 } 1657 1651 /* Now add this request to the ep's pending requests */ ··· 1930 1926 PCH_UDC_BS_DMA_DONE) 1931 1927 return; 1932 1928 pch_udc_clear_dma(ep->dev, DMA_DIR_RX); 1929 + pch_udc_ep_set_ddptr(ep, 0); 1933 1930 if ((req->td_data_last->status & PCH_UDC_RXTX_STS) != 1934 1931 PCH_UDC_RTS_SUCC) { 1935 1932 dev_err(&dev->pdev->dev, "Invalid RXTX status (0x%08x) " ··· 1968 1963 u32 epsts; 1969 1964 struct pch_udc_ep *ep; 1970 1965 1971 - ep = &dev->ep[2*ep_num]; 1966 + ep = &dev->ep[UDC_EPIN_IDX(ep_num)]; 1972 1967 epsts = ep->epsts; 1973 1968 ep->epsts = 0; 1974 1969 ··· 2013 2008 struct pch_udc_ep *ep; 2014 2009 struct pch_udc_request *req = NULL; 2015 2010 2016 - ep = &dev->ep[2*ep_num + 1]; 2011 + ep = &dev->ep[UDC_EPOUT_IDX(ep_num)]; 2017 2012 epsts = ep->epsts; 2018 2013 ep->epsts = 0; 2019 2014 ··· 2030 2025 } 2031 2026 if (epsts & UDC_EPSTS_HE) 2032 2027 return; 2033 - if (epsts & UDC_EPSTS_RSS) 2028 + if (epsts & 
UDC_EPSTS_RSS) { 2034 2029 pch_udc_ep_set_stall(ep); 2035 2030 pch_udc_enable_ep_interrupts(ep->dev, 2036 2031 PCH_UDC_EPINT(ep->in, ep->num)); 2032 + } 2037 2033 if (epsts & UDC_EPSTS_RCS) { 2038 2034 if (!dev->prot_stall) { 2039 2035 pch_udc_ep_clear_stall(ep); ··· 2066 2060 { 2067 2061 u32 epsts; 2068 2062 struct pch_udc_ep *ep; 2063 + struct pch_udc_ep *ep_out; 2069 2064 2070 2065 ep = &dev->ep[UDC_EP0IN_IDX]; 2066 + ep_out = &dev->ep[UDC_EP0OUT_IDX]; 2071 2067 epsts = ep->epsts; 2072 2068 ep->epsts = 0; 2073 2069 ··· 2081 2073 return; 2082 2074 if (epsts & UDC_EPSTS_HE) 2083 2075 return; 2084 - if ((epsts & UDC_EPSTS_TDC) && (!dev->stall)) 2076 + if ((epsts & UDC_EPSTS_TDC) && (!dev->stall)) { 2085 2077 pch_udc_complete_transfer(ep); 2078 + pch_udc_clear_dma(dev, DMA_DIR_RX); 2079 + ep_out->td_data->status = (ep_out->td_data->status & 2080 + ~PCH_UDC_BUFF_STS) | 2081 + PCH_UDC_BS_HST_RDY; 2082 + pch_udc_ep_clear_nak(ep_out); 2083 + pch_udc_set_dma(dev, DMA_DIR_RX); 2084 + pch_udc_ep_set_rrdy(ep_out); 2085 + } 2086 2086 /* On IN interrupt, provide data if we have any */ 2087 2087 if ((epsts & UDC_EPSTS_IN) && !(epsts & UDC_EPSTS_TDC) && 2088 2088 !(epsts & UDC_EPSTS_TXEMPTY)) ··· 2118 2102 dev->stall = 0; 2119 2103 dev->ep[UDC_EP0IN_IDX].halted = 0; 2120 2104 dev->ep[UDC_EP0OUT_IDX].halted = 0; 2121 - /* In data not ready */ 2122 - pch_udc_ep_set_nak(&(dev->ep[UDC_EP0IN_IDX])); 2123 2105 dev->setup_data = ep->td_stp->request; 2124 2106 pch_udc_init_setup_buff(ep->td_stp); 2125 - pch_udc_clear_dma(dev, DMA_DIR_TX); 2107 + pch_udc_clear_dma(dev, DMA_DIR_RX); 2126 2108 pch_udc_ep_fifo_flush(&(dev->ep[UDC_EP0IN_IDX]), 2127 2109 dev->ep[UDC_EP0IN_IDX].in); 2128 2110 if ((dev->setup_data.bRequestType & USB_DIR_IN)) ··· 2136 2122 setup_supported = dev->driver->setup(&dev->gadget, 2137 2123 &dev->setup_data); 2138 2124 spin_lock(&dev->lock); 2125 + 2126 + if (dev->setup_data.bRequestType & USB_DIR_IN) { 2127 + ep->td_data->status = (ep->td_data->status & 2128 + 
~PCH_UDC_BUFF_STS) | 2129 + PCH_UDC_BS_HST_RDY; 2130 + pch_udc_ep_set_ddptr(ep, ep->td_data_phys); 2131 + } 2139 2132 /* ep0 in returns data on IN phase */ 2140 2133 if (setup_supported >= 0 && setup_supported < 2141 2134 UDC_EP0IN_MAX_PKT_SIZE) { 2142 2135 pch_udc_ep_clear_nak(&(dev->ep[UDC_EP0IN_IDX])); 2143 2136 /* Gadget would have queued a request when 2144 2137 * we called the setup */ 2145 - pch_udc_set_dma(dev, DMA_DIR_RX); 2146 - pch_udc_ep_clear_nak(ep); 2138 + if (!(dev->setup_data.bRequestType & USB_DIR_IN)) { 2139 + pch_udc_set_dma(dev, DMA_DIR_RX); 2140 + pch_udc_ep_clear_nak(ep); 2141 + } 2147 2142 } else if (setup_supported < 0) { 2148 2143 /* if unsupported request, then stall */ 2149 2144 pch_udc_ep_set_stall(&(dev->ep[UDC_EP0IN_IDX])); ··· 2165 2142 } 2166 2143 } else if ((((stat & UDC_EPSTS_OUT_MASK) >> UDC_EPSTS_OUT_SHIFT) == 2167 2144 UDC_EPSTS_OUT_DATA) && !dev->stall) { 2168 - if (list_empty(&ep->queue)) { 2169 - dev_err(&dev->pdev->dev, "%s: No request\n", __func__); 2170 - ep->td_data->status = (ep->td_data->status & 2171 - ~PCH_UDC_BUFF_STS) | 2172 - PCH_UDC_BS_HST_RDY; 2173 - pch_udc_set_dma(dev, DMA_DIR_RX); 2174 - } else { 2175 - /* control write */ 2176 - /* next function will pickuo an clear the status */ 2145 + pch_udc_clear_dma(dev, DMA_DIR_RX); 2146 + pch_udc_ep_set_ddptr(ep, 0); 2147 + if (!list_empty(&ep->queue)) { 2177 2148 ep->epsts = stat; 2178 - 2179 - pch_udc_svc_data_out(dev, 0); 2180 - /* re-program desc. 
pointer for possible ZLPs */ 2181 - pch_udc_ep_set_ddptr(ep, ep->td_data_phys); 2182 - pch_udc_set_dma(dev, DMA_DIR_RX); 2149 + pch_udc_svc_data_out(dev, PCH_UDC_EP0); 2183 2150 } 2151 + pch_udc_set_dma(dev, DMA_DIR_RX); 2184 2152 } 2185 2153 pch_udc_ep_set_rrdy(ep); 2186 2154 } ··· 2188 2174 struct pch_udc_ep *ep; 2189 2175 struct pch_udc_request *req; 2190 2176 2191 - ep = &dev->ep[2*ep_num]; 2177 + ep = &dev->ep[UDC_EPIN_IDX(ep_num)]; 2192 2178 if (!list_empty(&ep->queue)) { 2193 2179 req = list_entry(ep->queue.next, struct pch_udc_request, queue); 2194 2180 pch_udc_enable_ep_interrupts(ep->dev, ··· 2210 2196 for (i = 0; i < PCH_UDC_USED_EP_NUM; i++) { 2211 2197 /* IN */ 2212 2198 if (ep_intr & (0x1 << i)) { 2213 - ep = &dev->ep[2*i]; 2199 + ep = &dev->ep[UDC_EPIN_IDX(i)]; 2214 2200 ep->epsts = pch_udc_read_ep_status(ep); 2215 2201 pch_udc_clear_ep_status(ep, ep->epsts); 2216 2202 } 2217 2203 /* OUT */ 2218 2204 if (ep_intr & (0x10000 << i)) { 2219 - ep = &dev->ep[2*i+1]; 2205 + ep = &dev->ep[UDC_EPOUT_IDX(i)]; 2220 2206 ep->epsts = pch_udc_read_ep_status(ep); 2221 2207 pch_udc_clear_ep_status(ep, ep->epsts); 2222 2208 } ··· 2577 2563 dev->ep[UDC_EP0IN_IDX].ep.maxpacket = UDC_EP0IN_MAX_PKT_SIZE; 2578 2564 dev->ep[UDC_EP0OUT_IDX].ep.maxpacket = UDC_EP0OUT_MAX_PKT_SIZE; 2579 2565 2580 - dev->dma_addr = pci_map_single(dev->pdev, dev->ep0out_buf, 256, 2581 - PCI_DMA_FROMDEVICE); 2582 - 2583 2566 /* remove ep0 in and out from the list. 
They have own pointer */ 2584 2567 list_del_init(&dev->ep[UDC_EP0IN_IDX].ep.ep_list); 2585 2568 list_del_init(&dev->ep[UDC_EP0OUT_IDX].ep.ep_list); ··· 2648 2637 dev->ep[UDC_EP0IN_IDX].td_stp_phys = 0; 2649 2638 dev->ep[UDC_EP0IN_IDX].td_data = NULL; 2650 2639 dev->ep[UDC_EP0IN_IDX].td_data_phys = 0; 2640 + 2641 + dev->ep0out_buf = kzalloc(UDC_EP0OUT_BUFF_SIZE * 4, GFP_KERNEL); 2642 + if (!dev->ep0out_buf) 2643 + return -ENOMEM; 2644 + dev->dma_addr = dma_map_single(&dev->pdev->dev, dev->ep0out_buf, 2645 + UDC_EP0OUT_BUFF_SIZE * 4, 2646 + DMA_FROM_DEVICE); 2651 2647 return 0; 2652 2648 } 2653 2649 ··· 2718 2700 2719 2701 pch_udc_disable_interrupts(dev, UDC_DEVINT_MSK); 2720 2702 2721 - /* Assues that there are no pending requets with this driver */ 2703 + /* Assures that there are no pending requests with this driver */ 2704 + driver->disconnect(&dev->gadget); 2722 2705 driver->unbind(&dev->gadget); 2723 2706 dev->gadget.dev.driver = NULL; 2724 2707 dev->driver = NULL; ··· 2769 2750 pci_pool_destroy(dev->stp_requests); 2770 2751 } 2771 2752 2753 + if (dev->dma_addr) 2754 + dma_unmap_single(&dev->pdev->dev, dev->dma_addr, 2755 + UDC_EP0OUT_BUFF_SIZE * 4, DMA_FROM_DEVICE); 2756 + kfree(dev->ep0out_buf); 2757 + 2772 2758 pch_udc_exit(dev); 2773 2759 2774 2760 if (dev->irq_registered) ··· 2816 2792 int ret; 2817 2793 2818 2794 pci_set_power_state(pdev, PCI_D0); 2819 - ret = pci_restore_state(pdev); 2820 - if (ret) { 2821 - dev_err(&pdev->dev, "%s: pci_restore_state failed\n", __func__); 2822 - return ret; 2823 - } 2795 + pci_restore_state(pdev); 2824 2796 ret = pci_enable_device(pdev); 2825 2797 if (ret) { 2826 2798 dev_err(&pdev->dev, "%s: pci_enable_device failed\n", __func__); ··· 2931 2911 static DEFINE_PCI_DEVICE_TABLE(pch_udc_pcidev_id) = { 2932 2912 { 2933 2913 PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_EG20T_UDC), 2914 + .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe, 2915 + .class_mask = 0xffffffff, 2916 + }, 2917 + { 2918 + 
PCI_DEVICE(PCI_VENDOR_ID_ROHM, PCI_DEVICE_ID_ML7213_IOH_UDC), 2934 2919 .class = (PCI_CLASS_SERIAL_USB << 8) | 0xfe, 2935 2920 .class_mask = 0xffffffff, 2936 2921 },
+9 -10
drivers/usb/gadget/printer.c
··· 131 131 * parameters are in UTF-8 (superset of ASCII's 7 bit characters). 132 132 */ 133 133 134 - static ushort __initdata idVendor; 134 + static ushort idVendor; 135 135 module_param(idVendor, ushort, S_IRUGO); 136 136 MODULE_PARM_DESC(idVendor, "USB Vendor ID"); 137 137 138 - static ushort __initdata idProduct; 138 + static ushort idProduct; 139 139 module_param(idProduct, ushort, S_IRUGO); 140 140 MODULE_PARM_DESC(idProduct, "USB Product ID"); 141 141 142 - static ushort __initdata bcdDevice; 142 + static ushort bcdDevice; 143 143 module_param(bcdDevice, ushort, S_IRUGO); 144 144 MODULE_PARM_DESC(bcdDevice, "USB Device version (BCD)"); 145 145 146 - static char *__initdata iManufacturer; 146 + static char *iManufacturer; 147 147 module_param(iManufacturer, charp, S_IRUGO); 148 148 MODULE_PARM_DESC(iManufacturer, "USB Manufacturer string"); 149 149 150 - static char *__initdata iProduct; 150 + static char *iProduct; 151 151 module_param(iProduct, charp, S_IRUGO); 152 152 MODULE_PARM_DESC(iProduct, "USB Product string"); 153 153 154 - static char *__initdata iSerialNum; 154 + static char *iSerialNum; 155 155 module_param(iSerialNum, charp, S_IRUGO); 156 156 MODULE_PARM_DESC(iSerialNum, "1"); 157 157 158 - static char *__initdata iPNPstring; 158 + static char *iPNPstring; 159 159 module_param(iPNPstring, charp, S_IRUGO); 160 160 MODULE_PARM_DESC(iPNPstring, "MFG:linux;MDL:g_printer;CLS:PRINTER;SN:1;"); 161 161 ··· 1596 1596 int status; 1597 1597 1598 1598 mutex_lock(&usb_printer_gadget.lock_printer_io); 1599 - class_destroy(usb_gadget_class); 1600 - unregister_chrdev_region(g_printer_devno, 2); 1601 - 1602 1599 status = usb_gadget_unregister_driver(&printer_driver); 1603 1600 if (status) 1604 1601 ERROR(dev, "usb_gadget_unregister_driver %x\n", status); 1605 1602 1603 + unregister_chrdev_region(g_printer_devno, 2); 1604 + class_destroy(usb_gadget_class); 1606 1605 mutex_unlock(&usb_printer_gadget.lock_printer_io); 1607 1606 } 1608 1607 module_exit(cleanup);
-13
drivers/usb/host/ehci-fsl.c
··· 52 52 struct resource *res; 53 53 int irq; 54 54 int retval; 55 - unsigned int temp; 56 55 57 56 pr_debug("initializing FSL-SOC USB Controller\n"); 58 57 ··· 124 125 retval = -ENODEV; 125 126 goto err3; 126 127 } 127 - 128 - /* 129 - * Check if it is MPC5121 SoC, otherwise set pdata->have_sysif_regs 130 - * flag for 83xx or 8536 system interface registers. 131 - */ 132 - if (pdata->big_endian_mmio) 133 - temp = in_be32(hcd->regs + FSL_SOC_USB_ID); 134 - else 135 - temp = in_le32(hcd->regs + FSL_SOC_USB_ID); 136 - 137 - if ((temp & ID_MSK) != (~((temp & NID_MSK) >> 8) & ID_MSK)) 138 - pdata->have_sysif_regs = 1; 139 128 140 129 /* Enable USB controller, 83xx or 8536 */ 141 130 if (pdata->have_sysif_regs)
-3
drivers/usb/host/ehci-fsl.h
··· 19 19 #define _EHCI_FSL_H 20 20 21 21 /* offsets for the non-ehci registers in the FSL SOC USB controller */ 22 - #define FSL_SOC_USB_ID 0x0 23 - #define ID_MSK 0x3f 24 - #define NID_MSK 0x3f00 25 22 #define FSL_SOC_USB_ULPIVP 0x170 26 23 #define FSL_SOC_USB_PORTSC1 0x184 27 24 #define PORT_PTS_MSK (3<<30)
+12 -7
drivers/usb/host/ehci-hcd.c
··· 572 572 ehci->iaa_watchdog.function = ehci_iaa_watchdog; 573 573 ehci->iaa_watchdog.data = (unsigned long) ehci; 574 574 575 + hcc_params = ehci_readl(ehci, &ehci->caps->hcc_params); 576 + 575 577 /* 576 578 * hw default: 1K periodic list heads, one per frame. 577 579 * periodic_size can shrink by USBCMD update if hcc_params allows. ··· 581 579 ehci->periodic_size = DEFAULT_I_TDPS; 582 580 INIT_LIST_HEAD(&ehci->cached_itd_list); 583 581 INIT_LIST_HEAD(&ehci->cached_sitd_list); 582 + 583 + if (HCC_PGM_FRAMELISTLEN(hcc_params)) { 584 + /* periodic schedule size can be smaller than default */ 585 + switch (EHCI_TUNE_FLS) { 586 + case 0: ehci->periodic_size = 1024; break; 587 + case 1: ehci->periodic_size = 512; break; 588 + case 2: ehci->periodic_size = 256; break; 589 + default: BUG(); 590 + } 591 + } 584 592 if ((retval = ehci_mem_init(ehci, GFP_KERNEL)) < 0) 585 593 return retval; 586 594 587 595 /* controllers may cache some of the periodic schedule ... */ 588 - hcc_params = ehci_readl(ehci, &ehci->caps->hcc_params); 589 596 if (HCC_ISOC_CACHE(hcc_params)) // full frame cache 590 597 ehci->i_thresh = 2 + 8; 591 598 else // N microframes cached ··· 648 637 /* periodic schedule size can be smaller than default */ 649 638 temp &= ~(3 << 2); 650 639 temp |= (EHCI_TUNE_FLS << 2); 651 - switch (EHCI_TUNE_FLS) { 652 - case 0: ehci->periodic_size = 1024; break; 653 - case 1: ehci->periodic_size = 512; break; 654 - case 2: ehci->periodic_size = 256; break; 655 - default: BUG(); 656 - } 657 640 } 658 641 if (HCC_LPM(hcc_params)) { 659 642 /* support link power management EHCI 1.1 addendum */
+23 -2
drivers/usb/host/ehci-mxc.c
··· 21 21 #include <linux/clk.h> 22 22 #include <linux/delay.h> 23 23 #include <linux/usb/otg.h> 24 + #include <linux/usb/ulpi.h> 24 25 #include <linux/slab.h> 25 26 26 27 #include <mach/mxc_ehci.h> 28 + 29 + #include <asm/mach-types.h> 27 30 28 31 #define ULPI_VIEWPORT_OFFSET 0x170 29 32 ··· 117 114 struct usb_hcd *hcd; 118 115 struct resource *res; 119 116 int irq, ret; 117 + unsigned int flags; 120 118 struct ehci_mxc_priv *priv; 121 119 struct device *dev = &pdev->dev; 122 120 struct ehci_hcd *ehci; ··· 181 177 clk_enable(priv->ahbclk); 182 178 } 183 179 184 - /* "dr" device has its own clock */ 185 - if (pdev->id == 0) { 180 + /* "dr" device has its own clock on i.MX51 */ 181 + if (cpu_is_mx51() && (pdev->id == 0)) { 186 182 priv->phy1clk = clk_get(dev, "usb_phy1"); 187 183 if (IS_ERR(priv->phy1clk)) { 188 184 ret = PTR_ERR(priv->phy1clk); ··· 243 239 ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | IRQF_SHARED); 244 240 if (ret) 245 241 goto err_add; 242 + 243 + if (pdata->otg) { 244 + /* 245 + * efikamx and efikasb have some hardware bug which is 246 + * preventing usb to work unless CHRGVBUS is set. 247 + * It's in violation of USB specs 248 + */ 249 + if (machine_is_mx51_efikamx() || machine_is_mx51_efikasb()) { 250 + flags = otg_io_read(pdata->otg, ULPI_OTG_CTRL); 251 + flags |= ULPI_OTG_CTRL_CHRGVBUS; 252 + ret = otg_io_write(pdata->otg, flags, ULPI_OTG_CTRL); 253 + if (ret) { 254 + dev_err(dev, "unable to set CHRVBUS\n"); 255 + goto err_add; 256 + } 257 + } 258 + } 246 259 247 260 return 0; 248 261
+20 -13
drivers/usb/host/ehci-pci.c
··· 44 44 return 0; 45 45 } 46 46 47 - static int ehci_quirk_amd_SB800(struct ehci_hcd *ehci) 47 + static int ehci_quirk_amd_hudson(struct ehci_hcd *ehci) 48 48 { 49 49 struct pci_dev *amd_smbus_dev; 50 50 u8 rev = 0; 51 51 52 52 amd_smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL); 53 - if (!amd_smbus_dev) 54 - return 0; 55 - 56 - pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev); 57 - if (rev < 0x40) { 58 - pci_dev_put(amd_smbus_dev); 59 - amd_smbus_dev = NULL; 60 - return 0; 53 + if (amd_smbus_dev) { 54 + pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev); 55 + if (rev < 0x40) { 56 + pci_dev_put(amd_smbus_dev); 57 + amd_smbus_dev = NULL; 58 + return 0; 59 + } 60 + } else { 61 + amd_smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x780b, NULL); 62 + if (!amd_smbus_dev) 63 + return 0; 64 + pci_read_config_byte(amd_smbus_dev, PCI_REVISION_ID, &rev); 65 + if (rev < 0x11 || rev > 0x18) { 66 + pci_dev_put(amd_smbus_dev); 67 + amd_smbus_dev = NULL; 68 + return 0; 69 + } 61 70 } 62 71 63 72 if (!amd_nb_dev) 64 73 amd_nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x1510, NULL); 65 - if (!amd_nb_dev) 66 - ehci_err(ehci, "QUIRK: unable to get AMD NB device\n"); 67 74 68 - ehci_info(ehci, "QUIRK: Enable AMD SB800 L1 fix\n"); 75 + ehci_info(ehci, "QUIRK: Enable exception for AMD Hudson ASPM\n"); 69 76 70 77 pci_dev_put(amd_smbus_dev); 71 78 amd_smbus_dev = NULL; ··· 138 131 /* cache this readonly data; minimize chip reads */ 139 132 ehci->hcs_params = ehci_readl(ehci, &ehci->caps->hcs_params); 140 133 141 - if (ehci_quirk_amd_SB800(ehci)) 134 + if (ehci_quirk_amd_hudson(ehci)) 142 135 ehci->amd_l1_fix = 1; 143 136 144 137 retval = ehci_halt(ehci);
+8 -3
drivers/usb/host/fsl-mph-dr-of.c
··· 262 262 } 263 263 } 264 264 265 - struct fsl_usb2_platform_data fsl_usb2_mpc5121_pd = { 265 + static struct fsl_usb2_platform_data fsl_usb2_mpc5121_pd = { 266 266 .big_endian_desc = 1, 267 267 .big_endian_mmio = 1, 268 268 .es = 1, 269 + .have_sysif_regs = 0, 269 270 .le_setup_buf = 1, 270 271 .init = fsl_usb2_mpc5121_init, 271 272 .exit = fsl_usb2_mpc5121_exit, 272 273 }; 273 274 #endif /* CONFIG_PPC_MPC512x */ 274 275 276 + static struct fsl_usb2_platform_data fsl_usb2_mpc8xxx_pd = { 277 + .have_sysif_regs = 1, 278 + }; 279 + 275 280 static const struct of_device_id fsl_usb2_mph_dr_of_match[] = { 276 - { .compatible = "fsl-usb2-mph", }, 277 - { .compatible = "fsl-usb2-dr", }, 281 + { .compatible = "fsl-usb2-mph", .data = &fsl_usb2_mpc8xxx_pd, }, 282 + { .compatible = "fsl-usb2-dr", .data = &fsl_usb2_mpc8xxx_pd, }, 278 283 #ifdef CONFIG_PPC_MPC512x 279 284 { .compatible = "fsl,mpc5121-usb2-dr", .data = &fsl_usb2_mpc5121_pd, }, 280 285 #endif
+51 -40
drivers/usb/host/xhci-ring.c
··· 308 308 /* Ring the host controller doorbell after placing a command on the ring */ 309 309 void xhci_ring_cmd_db(struct xhci_hcd *xhci) 310 310 { 311 - u32 temp; 312 - 313 311 xhci_dbg(xhci, "// Ding dong!\n"); 314 - temp = xhci_readl(xhci, &xhci->dba->doorbell[0]) & DB_MASK; 315 - xhci_writel(xhci, temp | DB_TARGET_HOST, &xhci->dba->doorbell[0]); 312 + xhci_writel(xhci, DB_VALUE_HOST, &xhci->dba->doorbell[0]); 316 313 /* Flush PCI posted writes */ 317 314 xhci_readl(xhci, &xhci->dba->doorbell[0]); 318 315 } ··· 319 322 unsigned int ep_index, 320 323 unsigned int stream_id) 321 324 { 322 - struct xhci_virt_ep *ep; 323 - unsigned int ep_state; 324 - u32 field; 325 325 __u32 __iomem *db_addr = &xhci->dba->doorbell[slot_id]; 326 + struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index]; 327 + unsigned int ep_state = ep->ep_state; 326 328 327 - ep = &xhci->devs[slot_id]->eps[ep_index]; 328 - ep_state = ep->ep_state; 329 329 /* Don't ring the doorbell for this endpoint if there are pending 330 - * cancellations because the we don't want to interrupt processing. 330 + * cancellations because we don't want to interrupt processing. 331 331 * We don't want to restart any stream rings if there's a set dequeue 332 332 * pointer command pending because the device can choose to start any 333 333 * stream once the endpoint is on the HW schedule. 334 334 * FIXME - check all the stream rings for pending cancellations. 
335 335 */ 336 - if (!(ep_state & EP_HALT_PENDING) && !(ep_state & SET_DEQ_PENDING) 337 - && !(ep_state & EP_HALTED)) { 338 - field = xhci_readl(xhci, db_addr) & DB_MASK; 339 - field |= EPI_TO_DB(ep_index) | STREAM_ID_TO_DB(stream_id); 340 - xhci_writel(xhci, field, db_addr); 341 - } 336 + if ((ep_state & EP_HALT_PENDING) || (ep_state & SET_DEQ_PENDING) || 337 + (ep_state & EP_HALTED)) 338 + return; 339 + xhci_writel(xhci, DB_VALUE(ep_index, stream_id), db_addr); 340 + /* The CPU has better things to do at this point than wait for a 341 + * write-posting flush. It'll get there soon enough. 342 + */ 342 343 } 343 344 344 345 /* Ring the doorbell for any rings with pending URBs */ ··· 1183 1188 1184 1189 addr = &xhci->op_regs->port_status_base + NUM_PORT_REGS * (port_id - 1); 1185 1190 temp = xhci_readl(xhci, addr); 1186 - if ((temp & PORT_CONNECT) && (hcd->state == HC_STATE_SUSPENDED)) { 1191 + if (hcd->state == HC_STATE_SUSPENDED) { 1187 1192 xhci_dbg(xhci, "resume root hub\n"); 1188 1193 usb_hcd_resume_root_hub(hcd); 1189 1194 } ··· 1705 1710 /* Others already handled above */ 1706 1711 break; 1707 1712 } 1708 - dev_dbg(&td->urb->dev->dev, 1709 - "ep %#x - asked for %d bytes, " 1713 + xhci_dbg(xhci, "ep %#x - asked for %d bytes, " 1710 1714 "%d bytes untransferred\n", 1711 1715 td->urb->ep->desc.bEndpointAddress, 1712 1716 td->urb->transfer_buffer_length, ··· 2383 2389 } 2384 2390 xhci_dbg(xhci, "\n"); 2385 2391 if (!in_interrupt()) 2386 - dev_dbg(&urb->dev->dev, "ep %#x - urb len = %d, sglist used, num_trbs = %d\n", 2392 + xhci_dbg(xhci, "ep %#x - urb len = %d, sglist used, " 2393 + "num_trbs = %d\n", 2387 2394 urb->ep->desc.bEndpointAddress, 2388 2395 urb->transfer_buffer_length, 2389 2396 num_trbs); ··· 2409 2414 2410 2415 static void giveback_first_trb(struct xhci_hcd *xhci, int slot_id, 2411 2416 unsigned int ep_index, unsigned int stream_id, int start_cycle, 2412 - struct xhci_generic_trb *start_trb, struct xhci_td *td) 2417 + struct xhci_generic_trb 
*start_trb) 2413 2418 { 2414 2419 /* 2415 2420 * Pass all the TRBs to the hardware at once and make sure this write 2416 2421 * isn't reordered. 2417 2422 */ 2418 2423 wmb(); 2419 - start_trb->field[3] |= start_cycle; 2424 + if (start_cycle) 2425 + start_trb->field[3] |= start_cycle; 2426 + else 2427 + start_trb->field[3] &= ~0x1; 2420 2428 xhci_ring_ep_doorbell(xhci, slot_id, ep_index, stream_id); 2421 2429 } 2422 2430 ··· 2447 2449 * to set the polling interval (once the API is added). 2448 2450 */ 2449 2451 if (xhci_interval != ep_interval) { 2450 - if (!printk_ratelimit()) 2452 + if (printk_ratelimit()) 2451 2453 dev_dbg(&urb->dev->dev, "Driver uses different interval" 2452 2454 " (%d microframe%s) than xHCI " 2453 2455 "(%d microframe%s)\n", ··· 2549 2551 u32 remainder = 0; 2550 2552 2551 2553 /* Don't change the cycle bit of the first TRB until later */ 2552 - if (first_trb) 2554 + if (first_trb) { 2553 2555 first_trb = false; 2554 - else 2556 + if (start_cycle == 0) 2557 + field |= 0x1; 2558 + } else 2555 2559 field |= ep_ring->cycle_state; 2556 2560 2557 2561 /* Chain all the TRBs together; clear the chain bit in the last ··· 2625 2625 2626 2626 check_trb_math(urb, num_trbs, running_total); 2627 2627 giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id, 2628 - start_cycle, start_trb, td); 2628 + start_cycle, start_trb); 2629 2629 return 0; 2630 2630 } 2631 2631 ··· 2671 2671 /* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */ 2672 2672 2673 2673 if (!in_interrupt()) 2674 - dev_dbg(&urb->dev->dev, "ep %#x - urb len = %#x (%d), addr = %#llx, num_trbs = %d\n", 2674 + xhci_dbg(xhci, "ep %#x - urb len = %#x (%d), " 2675 + "addr = %#llx, num_trbs = %d\n", 2675 2676 urb->ep->desc.bEndpointAddress, 2676 2677 urb->transfer_buffer_length, 2677 2678 urb->transfer_buffer_length, ··· 2712 2711 field = 0; 2713 2712 2714 2713 /* Don't change the cycle bit of the first TRB until later */ 2715 - if (first_trb) 2714 + if (first_trb) { 2716 2715 
first_trb = false; 2717 - else 2716 + if (start_cycle == 0) 2717 + field |= 0x1; 2718 + } else 2718 2719 field |= ep_ring->cycle_state; 2719 2720 2720 2721 /* Chain all the TRBs together; clear the chain bit in the last ··· 2760 2757 2761 2758 check_trb_math(urb, num_trbs, running_total); 2762 2759 giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id, 2763 - start_cycle, start_trb, td); 2760 + start_cycle, start_trb); 2764 2761 return 0; 2765 2762 } 2766 2763 ··· 2821 2818 /* Queue setup TRB - see section 6.4.1.2.1 */ 2822 2819 /* FIXME better way to translate setup_packet into two u32 fields? */ 2823 2820 setup = (struct usb_ctrlrequest *) urb->setup_packet; 2821 + field = 0; 2822 + field |= TRB_IDT | TRB_TYPE(TRB_SETUP); 2823 + if (start_cycle == 0) 2824 + field |= 0x1; 2824 2825 queue_trb(xhci, ep_ring, false, true, 2825 2826 /* FIXME endianness is probably going to bite my ass here. */ 2826 2827 setup->bRequestType | setup->bRequest << 8 | setup->wValue << 16, 2827 2828 setup->wIndex | setup->wLength << 16, 2828 2829 TRB_LEN(8) | TRB_INTR_TARGET(0), 2829 2830 /* Immediate data in pointer */ 2830 - TRB_IDT | TRB_TYPE(TRB_SETUP)); 2831 + field); 2831 2832 2832 2833 /* If there's data, queue data TRBs */ 2833 2834 field = 0; ··· 2866 2859 field | TRB_IOC | TRB_TYPE(TRB_STATUS) | ep_ring->cycle_state); 2867 2860 2868 2861 giveback_first_trb(xhci, slot_id, ep_index, 0, 2869 - start_cycle, start_trb, td); 2862 + start_cycle, start_trb); 2870 2863 return 0; 2871 2864 } 2872 2865 ··· 2907 2900 int running_total, trb_buff_len, td_len, td_remain_len, ret; 2908 2901 u64 start_addr, addr; 2909 2902 int i, j; 2903 + bool more_trbs_coming; 2910 2904 2911 2905 ep_ring = xhci->devs[slot_id]->eps[ep_index].ring; 2912 2906 ··· 2918 2910 } 2919 2911 2920 2912 if (!in_interrupt()) 2921 - dev_dbg(&urb->dev->dev, "ep %#x - urb len = %#x (%d)," 2913 + xhci_dbg(xhci, "ep %#x - urb len = %#x (%d)," 2922 2914 " addr = %#llx, num_tds = %d\n", 2923 2915 
urb->ep->desc.bEndpointAddress, 2924 2916 urb->transfer_buffer_length, ··· 2958 2950 field |= TRB_TYPE(TRB_ISOC); 2959 2951 /* Assume URB_ISO_ASAP is set */ 2960 2952 field |= TRB_SIA; 2961 - if (i > 0) 2953 + if (i == 0) { 2954 + if (start_cycle == 0) 2955 + field |= 0x1; 2956 + } else 2962 2957 field |= ep_ring->cycle_state; 2963 2958 first_trb = false; 2964 2959 } else { ··· 2976 2965 */ 2977 2966 if (j < trbs_per_td - 1) { 2978 2967 field |= TRB_CHAIN; 2968 + more_trbs_coming = true; 2979 2969 } else { 2980 2970 td->last_trb = ep_ring->enqueue; 2981 2971 field |= TRB_IOC; 2972 + more_trbs_coming = false; 2982 2973 } 2983 2974 2984 2975 /* Calculate TRB length */ ··· 2993 2980 length_field = TRB_LEN(trb_buff_len) | 2994 2981 remainder | 2995 2982 TRB_INTR_TARGET(0); 2996 - queue_trb(xhci, ep_ring, false, false, 2983 + queue_trb(xhci, ep_ring, false, more_trbs_coming, 2997 2984 lower_32_bits(addr), 2998 2985 upper_32_bits(addr), 2999 2986 length_field, ··· 3016 3003 } 3017 3004 } 3018 3005 3019 - wmb(); 3020 - start_trb->field[3] |= start_cycle; 3021 - 3022 - xhci_ring_ep_doorbell(xhci, slot_id, ep_index, urb->stream_id); 3006 + giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id, 3007 + start_cycle, start_trb); 3023 3008 return 0; 3024 3009 } 3025 3010 ··· 3075 3064 * to set the polling interval (once the API is added). 3076 3065 */ 3077 3066 if (xhci_interval != ep_interval) { 3078 - if (!printk_ratelimit()) 3067 + if (printk_ratelimit()) 3079 3068 dev_dbg(&urb->dev->dev, "Driver uses different interval" 3080 3069 " (%d microframe%s) than xHCI " 3081 3070 "(%d microframe%s)\n",
+25 -35
drivers/usb/host/xhci.c
··· 226 226 static int xhci_setup_msix(struct xhci_hcd *xhci) 227 227 { 228 228 int i, ret = 0; 229 - struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller); 229 + struct usb_hcd *hcd = xhci_to_hcd(xhci); 230 + struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 230 231 231 232 /* 232 233 * calculate number of msi-x vectors supported. ··· 266 265 goto disable_msix; 267 266 } 268 267 268 + hcd->msix_enabled = 1; 269 269 return ret; 270 270 271 271 disable_msix: ··· 282 280 /* Free any IRQs and disable MSI-X */ 283 281 static void xhci_cleanup_msix(struct xhci_hcd *xhci) 284 282 { 285 - struct pci_dev *pdev = to_pci_dev(xhci_to_hcd(xhci)->self.controller); 283 + struct usb_hcd *hcd = xhci_to_hcd(xhci); 284 + struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 286 285 287 286 xhci_free_irq(xhci); 288 287 ··· 295 292 pci_disable_msi(pdev); 296 293 } 297 294 295 + hcd->msix_enabled = 0; 298 296 return; 299 297 } 300 298 ··· 512 508 spin_lock_irq(&xhci->lock); 513 509 xhci_halt(xhci); 514 510 xhci_reset(xhci); 515 - xhci_cleanup_msix(xhci); 516 511 spin_unlock_irq(&xhci->lock); 512 + 513 + xhci_cleanup_msix(xhci); 517 514 518 515 #ifdef CONFIG_USB_XHCI_HCD_DEBUGGING 519 516 /* Tell the event ring poll function not to reschedule */ ··· 549 544 550 545 spin_lock_irq(&xhci->lock); 551 546 xhci_halt(xhci); 552 - xhci_cleanup_msix(xhci); 553 547 spin_unlock_irq(&xhci->lock); 548 + 549 + xhci_cleanup_msix(xhci); 554 550 555 551 xhci_dbg(xhci, "xhci_shutdown completed - status = %x\n", 556 552 xhci_readl(xhci, &xhci->op_regs->status)); ··· 653 647 int rc = 0; 654 648 struct usb_hcd *hcd = xhci_to_hcd(xhci); 655 649 u32 command; 650 + int i; 656 651 657 652 spin_lock_irq(&xhci->lock); 658 653 clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags); ··· 684 677 spin_unlock_irq(&xhci->lock); 685 678 return -ETIMEDOUT; 686 679 } 687 - /* step 5: remove core well power */ 688 - xhci_cleanup_msix(xhci); 689 680 spin_unlock_irq(&xhci->lock); 681 + 682 + /* step 5: 
remove core well power */ 683 + /* synchronize irq when using MSI-X */ 684 + if (xhci->msix_entries) { 685 + for (i = 0; i < xhci->msix_count; i++) 686 + synchronize_irq(xhci->msix_entries[i].vector); 687 + } 690 688 691 689 return rc; 692 690 } ··· 706 694 { 707 695 u32 command, temp = 0; 708 696 struct usb_hcd *hcd = xhci_to_hcd(xhci); 709 - struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 710 697 int old_state, retval; 711 698 712 699 old_state = hcd->state; ··· 740 729 xhci_dbg(xhci, "Stop HCD\n"); 741 730 xhci_halt(xhci); 742 731 xhci_reset(xhci); 743 - if (hibernated) 744 - xhci_cleanup_msix(xhci); 745 732 spin_unlock_irq(&xhci->lock); 733 + xhci_cleanup_msix(xhci); 746 734 747 735 #ifdef CONFIG_USB_XHCI_HCD_DEBUGGING 748 736 /* Tell the event ring poll function not to reschedule */ ··· 775 765 return retval; 776 766 } 777 767 778 - spin_unlock_irq(&xhci->lock); 779 - /* Re-setup MSI-X */ 780 - if (hcd->irq) 781 - free_irq(hcd->irq, hcd); 782 - hcd->irq = -1; 783 - 784 - retval = xhci_setup_msix(xhci); 785 - if (retval) 786 - /* fall back to msi*/ 787 - retval = xhci_setup_msi(xhci); 788 - 789 - if (retval) { 790 - /* fall back to legacy interrupt*/ 791 - retval = request_irq(pdev->irq, &usb_hcd_irq, IRQF_SHARED, 792 - hcd->irq_descr, hcd); 793 - if (retval) { 794 - xhci_err(xhci, "request interrupt %d failed\n", 795 - pdev->irq); 796 - return retval; 797 - } 798 - hcd->irq = pdev->irq; 799 - } 800 - 801 - spin_lock_irq(&xhci->lock); 802 768 /* step 4: set Run/Stop bit */ 803 769 command = xhci_readl(xhci, &xhci->op_regs->command); 804 770 command |= CMD_RUN; ··· 2431 2445 xhci_err(xhci, "Error while assigning device slot ID\n"); 2432 2446 return 0; 2433 2447 } 2434 - /* xhci_alloc_virt_device() does not touch rings; no need to lock */ 2435 - if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_KERNEL)) { 2448 + /* xhci_alloc_virt_device() does not touch rings; no need to lock. 
2449 + * Use GFP_NOIO, since this function can be called from 2450 + * xhci_discover_or_reset_device(), which may be called as part of 2451 + * mass storage driver error handling. 2452 + */ 2453 + if (!xhci_alloc_virt_device(xhci, xhci->slot_id, udev, GFP_NOIO)) { 2436 2454 /* Disable slot, if we can do it without mem alloc */ 2437 2455 xhci_warn(xhci, "Could not allocate xHCI USB device data structures\n"); 2438 2456 spin_lock_irqsave(&xhci->lock, flags);
+6 -10
drivers/usb/host/xhci.h
··· 436 436 /** 437 437 * struct doorbell_array 438 438 * 439 + * Bits 0 - 7: Endpoint target 440 + * Bits 8 - 15: RsvdZ 441 + * Bits 16 - 31: Stream ID 442 + * 439 443 * Section 5.6 440 444 */ 441 445 struct xhci_doorbell_array { 442 446 u32 doorbell[256]; 443 447 }; 444 448 445 - #define DB_TARGET_MASK 0xFFFFFF00 446 - #define DB_STREAM_ID_MASK 0x0000FFFF 447 - #define DB_TARGET_HOST 0x0 448 - #define DB_STREAM_ID_HOST 0x0 449 - #define DB_MASK (0xff << 8) 450 - 451 - /* Endpoint Target - bits 0:7 */ 452 - #define EPI_TO_DB(p) (((p) + 1) & 0xff) 453 - #define STREAM_ID_TO_DB(p) (((p) & 0xffff) << 16) 454 - 449 + #define DB_VALUE(ep, stream) ((((ep) + 1) & 0xff) | ((stream) << 16)) 450 + #define DB_VALUE_HOST 0x00000000 455 451 456 452 /** 457 453 * struct xhci_protocol_caps
+1 -1
drivers/usb/misc/usbled.c
··· 45 45 46 46 static void change_color(struct usb_led *led) 47 47 { 48 - int retval; 48 + int retval = 0; 49 49 unsigned char *buffer; 50 50 51 51 buffer = kmalloc(8, GFP_KERNEL);
-1
drivers/usb/misc/uss720.c
··· 776 776 { USB_DEVICE(0x0557, 0x2001) }, 777 777 { USB_DEVICE(0x0729, 0x1284) }, 778 778 { USB_DEVICE(0x1293, 0x0002) }, 779 - { USB_DEVICE(0x1293, 0x0002) }, 780 779 { USB_DEVICE(0x050d, 0x0002) }, 781 780 { } /* Terminating entry */ 782 781 };
+2
drivers/usb/otg/nop-usb-xceiv.c
··· 132 132 133 133 platform_set_drvdata(pdev, nop); 134 134 135 + BLOCKING_INIT_NOTIFIER_HEAD(&nop->otg.notifier); 136 + 135 137 return 0; 136 138 exit: 137 139 kfree(nop);
+1 -1
drivers/usb/otg/ulpi.c
··· 45 45 /* ULPI hardcoded IDs, used for probing */ 46 46 static struct ulpi_info ulpi_ids[] = { 47 47 ULPI_INFO(ULPI_ID(0x04cc, 0x1504), "NXP ISP1504"), 48 - ULPI_INFO(ULPI_ID(0x0424, 0x0006), "SMSC USB3319"), 48 + ULPI_INFO(ULPI_ID(0x0424, 0x0006), "SMSC USB331x"), 49 49 }; 50 50 51 51 static int ulpi_set_otg_flags(struct otg_transceiver *otg)
+10
drivers/usb/serial/ch341.c
··· 486 486 if (actual_length >= 4) { 487 487 struct ch341_private *priv = usb_get_serial_port_data(port); 488 488 unsigned long flags; 489 + u8 prev_line_status = priv->line_status; 489 490 490 491 spin_lock_irqsave(&priv->lock, flags); 491 492 priv->line_status = (~(data[2])) & CH341_BITS_MODEM_STAT; 492 493 if ((data[1] & CH341_MULT_STAT)) 493 494 priv->multi_status_change = 1; 494 495 spin_unlock_irqrestore(&priv->lock, flags); 496 + 497 + if ((priv->line_status ^ prev_line_status) & CH341_BIT_DCD) { 498 + struct tty_struct *tty = tty_port_tty_get(&port->port); 499 + if (tty) 500 + usb_serial_handle_dcd_change(port, tty, 501 + priv->line_status & CH341_BIT_DCD); 502 + tty_kref_put(tty); 503 + } 504 + 495 505 wake_up_interruptible(&priv->delta_msr_wait); 496 506 } 497 507
+3 -13
drivers/usb/serial/cp210x.c
··· 49 49 static void cp210x_break_ctl(struct tty_struct *, int); 50 50 static int cp210x_startup(struct usb_serial *); 51 51 static void cp210x_dtr_rts(struct usb_serial_port *p, int on); 52 - static int cp210x_carrier_raised(struct usb_serial_port *p); 53 52 54 53 static int debug; 55 54 ··· 86 87 { USB_DEVICE(0x10C4, 0x8115) }, /* Arygon NFC/Mifare Reader */ 87 88 { USB_DEVICE(0x10C4, 0x813D) }, /* Burnside Telecom Deskmobile */ 88 89 { USB_DEVICE(0x10C4, 0x813F) }, /* Tams Master Easy Control */ 89 - { USB_DEVICE(0x10C4, 0x8149) }, /* West Mountain Radio Computerized Battery Analyzer */ 90 90 { USB_DEVICE(0x10C4, 0x814A) }, /* West Mountain Radio RIGblaster P&P */ 91 91 { USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */ 92 92 { USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */ ··· 108 110 { USB_DEVICE(0x10C4, 0x8341) }, /* Siemens MC35PU GPRS Modem */ 109 111 { USB_DEVICE(0x10C4, 0x8382) }, /* Cygnal Integrated Products, Inc. */ 110 112 { USB_DEVICE(0x10C4, 0x83A8) }, /* Amber Wireless AMB2560 */ 113 + { USB_DEVICE(0x10C4, 0x83D8) }, /* DekTec DTA Plus VHF/UHF Booster/Attenuator */ 111 114 { USB_DEVICE(0x10C4, 0x8411) }, /* Kyocera GPS Module */ 115 + { USB_DEVICE(0x10C4, 0x8418) }, /* IRZ Automation Teleport SG-10 GSM/GPRS Modem */ 112 116 { USB_DEVICE(0x10C4, 0x846E) }, /* BEI USB Sensor Interface (VCP) */ 113 117 { USB_DEVICE(0x10C4, 0x8477) }, /* Balluff RFID */ 114 118 { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */ ··· 165 165 .tiocmget = cp210x_tiocmget, 166 166 .tiocmset = cp210x_tiocmset, 167 167 .attach = cp210x_startup, 168 - .dtr_rts = cp210x_dtr_rts, 169 - .carrier_raised = cp210x_carrier_raised 168 + .dtr_rts = cp210x_dtr_rts 170 169 }; 171 170 172 171 /* Config request types */ ··· 762 763 dbg("%s - control = 0x%.2x", __func__, control); 763 764 764 765 return result; 765 - } 766 - 767 - static int cp210x_carrier_raised(struct usb_serial_port *p) 768 - { 769 - unsigned int control; 770 - 
cp210x_get_config(p, CP210X_GET_MDMSTS, &control, 1); 771 - if (control & CONTROL_DCD) 772 - return 1; 773 - return 0; 774 766 } 775 767 776 768 static void cp210x_break_ctl (struct tty_struct *tty, int break_state)
-10
drivers/usb/serial/digi_acceleport.c
··· 455 455 static int digi_chars_in_buffer(struct tty_struct *tty); 456 456 static int digi_open(struct tty_struct *tty, struct usb_serial_port *port); 457 457 static void digi_close(struct usb_serial_port *port); 458 - static int digi_carrier_raised(struct usb_serial_port *port); 459 458 static void digi_dtr_rts(struct usb_serial_port *port, int on); 460 459 static int digi_startup_device(struct usb_serial *serial); 461 460 static int digi_startup(struct usb_serial *serial); ··· 510 511 .open = digi_open, 511 512 .close = digi_close, 512 513 .dtr_rts = digi_dtr_rts, 513 - .carrier_raised = digi_carrier_raised, 514 514 .write = digi_write, 515 515 .write_room = digi_write_room, 516 516 .write_bulk_callback = digi_write_bulk_callback, ··· 1335 1337 { 1336 1338 /* Adjust DTR and RTS */ 1337 1339 digi_set_modem_signals(port, on * (TIOCM_DTR|TIOCM_RTS), 1); 1338 - } 1339 - 1340 - static int digi_carrier_raised(struct usb_serial_port *port) 1341 - { 1342 - struct digi_port *priv = usb_get_serial_port_data(port); 1343 - if (priv->dp_modem_signals & TIOCM_CD) 1344 - return 1; 1345 - return 0; 1346 1340 } 1347 1341 1348 1342 static int digi_open(struct tty_struct *tty, struct usb_serial_port *port)
+11 -1
drivers/usb/serial/ftdi_sio.c
··· 676 676 { USB_DEVICE(FTDI_VID, FTDI_PCDJ_DAC2_PID) }, 677 677 { USB_DEVICE(FTDI_VID, FTDI_RRCIRKITS_LOCOBUFFER_PID) }, 678 678 { USB_DEVICE(FTDI_VID, FTDI_ASK_RDR400_PID) }, 679 - { USB_DEVICE(ICOM_ID1_VID, ICOM_ID1_PID) }, 679 + { USB_DEVICE(ICOM_VID, ICOM_ID_1_PID) }, 680 + { USB_DEVICE(ICOM_VID, ICOM_OPC_U_UC_PID) }, 681 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2C1_PID) }, 682 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2C2_PID) }, 683 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2D_PID) }, 684 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2VT_PID) }, 685 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2VR_PID) }, 686 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP4KVT_PID) }, 687 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP4KVR_PID) }, 688 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2KVT_PID) }, 689 + { USB_DEVICE(ICOM_VID, ICOM_ID_RP2KVR_PID) }, 680 690 { USB_DEVICE(FTDI_VID, FTDI_ACG_HFDUAL_PID) }, 681 691 { USB_DEVICE(FTDI_VID, FTDI_YEI_SERVOCENTER31_PID) }, 682 692 { USB_DEVICE(FTDI_VID, FTDI_THORLABS_PID) },
+16 -4
drivers/usb/serial/ftdi_sio_ids.h
··· 569 569 #define OCT_US101_PID 0x0421 /* OCT US101 USB to RS-232 */ 570 570 571 571 /* 572 - * Icom ID-1 digital transceiver 572 + * Definitions for Icom Inc. devices 573 573 */ 574 - 575 - #define ICOM_ID1_VID 0x0C26 576 - #define ICOM_ID1_PID 0x0004 574 + #define ICOM_VID 0x0C26 /* Icom vendor ID */ 575 + /* Note: ID-1 is a communications tranceiver for HAM-radio operators */ 576 + #define ICOM_ID_1_PID 0x0004 /* ID-1 USB to RS-232 */ 577 + /* Note: OPC is an Optional cable to connect an Icom Tranceiver */ 578 + #define ICOM_OPC_U_UC_PID 0x0018 /* OPC-478UC, OPC-1122U cloning cable */ 579 + /* Note: ID-RP* devices are Icom Repeater Devices for HAM-radio */ 580 + #define ICOM_ID_RP2C1_PID 0x0009 /* ID-RP2C Asset 1 to RS-232 */ 581 + #define ICOM_ID_RP2C2_PID 0x000A /* ID-RP2C Asset 2 to RS-232 */ 582 + #define ICOM_ID_RP2D_PID 0x000B /* ID-RP2D configuration port*/ 583 + #define ICOM_ID_RP2VT_PID 0x000C /* ID-RP2V Transmit config port */ 584 + #define ICOM_ID_RP2VR_PID 0x000D /* ID-RP2V Receive config port */ 585 + #define ICOM_ID_RP4KVT_PID 0x0010 /* ID-RP4000V Transmit config port */ 586 + #define ICOM_ID_RP4KVR_PID 0x0011 /* ID-RP4000V Receive config port */ 587 + #define ICOM_ID_RP2KVT_PID 0x0012 /* ID-RP2000V Transmit config port */ 588 + #define ICOM_ID_RP2KVR_PID 0x0013 /* ID-RP2000V Receive config port */ 577 589 578 590 /* 579 591 * GN Otometrics (http://www.otometrics.com)
+20
drivers/usb/serial/generic.c
··· 479 479 } 480 480 EXPORT_SYMBOL_GPL(usb_serial_handle_break); 481 481 482 + /** 483 + * usb_serial_handle_dcd_change - handle a change of carrier detect state 484 + * @port: usb_serial_port structure for the open port 485 + * @tty: tty_struct structure for the port 486 + * @status: new carrier detect status, nonzero if active 487 + */ 488 + void usb_serial_handle_dcd_change(struct usb_serial_port *usb_port, 489 + struct tty_struct *tty, unsigned int status) 490 + { 491 + struct tty_port *port = &usb_port->port; 492 + 493 + dbg("%s - port %d, status %d", __func__, usb_port->number, status); 494 + 495 + if (status) 496 + wake_up_interruptible(&port->open_wait); 497 + else if (tty && !C_CLOCAL(tty)) 498 + tty_hangup(tty); 499 + } 500 + EXPORT_SYMBOL_GPL(usb_serial_handle_dcd_change); 501 + 482 502 int usb_serial_generic_resume(struct usb_serial *serial) 483 503 { 484 504 struct usb_serial_port *port;
+1
drivers/usb/serial/io_tables.h
··· 199 199 .name = "epic", 200 200 }, 201 201 .description = "EPiC device", 202 + .usb_driver = &io_driver, 202 203 .id_table = Epic_port_id_table, 203 204 .num_ports = 1, 204 205 .open = edge_open,
+1
drivers/usb/serial/iuu_phoenix.c
··· 1275 1275 .name = "iuu_phoenix", 1276 1276 }, 1277 1277 .id_table = id_table, 1278 + .usb_driver = &iuu_driver, 1278 1279 .num_ports = 1, 1279 1280 .bulk_in_size = 512, 1280 1281 .bulk_out_size = 512,
+4
drivers/usb/serial/keyspan.h
··· 546 546 .name = "keyspan_no_firm", 547 547 }, 548 548 .description = "Keyspan - (without firmware)", 549 + .usb_driver = &keyspan_driver, 549 550 .id_table = keyspan_pre_ids, 550 551 .num_ports = 1, 551 552 .attach = keyspan_fake_startup, ··· 558 557 .name = "keyspan_1", 559 558 }, 560 559 .description = "Keyspan 1 port adapter", 560 + .usb_driver = &keyspan_driver, 561 561 .id_table = keyspan_1port_ids, 562 562 .num_ports = 1, 563 563 .open = keyspan_open, ··· 581 579 .name = "keyspan_2", 582 580 }, 583 581 .description = "Keyspan 2 port adapter", 582 + .usb_driver = &keyspan_driver, 584 583 .id_table = keyspan_2port_ids, 585 584 .num_ports = 2, 586 585 .open = keyspan_open, ··· 604 601 .name = "keyspan_4", 605 602 }, 606 603 .description = "Keyspan 4 port adapter", 604 + .usb_driver = &keyspan_driver, 607 605 .id_table = keyspan_4port_ids, 608 606 .num_ports = 4, 609 607 .open = keyspan_open,
-17
drivers/usb/serial/keyspan_pda.c
··· 679 679 } 680 680 } 681 681 682 - static int keyspan_pda_carrier_raised(struct usb_serial_port *port) 683 - { 684 - struct usb_serial *serial = port->serial; 685 - unsigned char modembits; 686 - 687 - /* If we can read the modem status and the DCD is low then 688 - carrier is not raised yet */ 689 - if (keyspan_pda_get_modem_info(serial, &modembits) >= 0) { 690 - if (!(modembits & (1>>6))) 691 - return 0; 692 - } 693 - /* Carrier raised, or we failed (eg disconnected) so 694 - progress accordingly */ 695 - return 1; 696 - } 697 - 698 682 699 683 static int keyspan_pda_open(struct tty_struct *tty, 700 684 struct usb_serial_port *port) ··· 865 881 .id_table = id_table_std, 866 882 .num_ports = 1, 867 883 .dtr_rts = keyspan_pda_dtr_rts, 868 - .carrier_raised = keyspan_pda_carrier_raised, 869 884 .open = keyspan_pda_open, 870 885 .close = keyspan_pda_close, 871 886 .write = keyspan_pda_write,
+1
drivers/usb/serial/moto_modem.c
··· 44 44 .name = "moto-modem", 45 45 }, 46 46 .id_table = id_table, 47 + .usb_driver = &moto_driver, 47 48 .num_ports = 1, 48 49 }; 49 50
+21 -2
drivers/usb/serial/option.c
··· 382 382 #define HAIER_VENDOR_ID 0x201e 383 383 #define HAIER_PRODUCT_CE100 0x2009 384 384 385 - #define CINTERION_VENDOR_ID 0x0681 385 + /* Cinterion (formerly Siemens) products */ 386 + #define SIEMENS_VENDOR_ID 0x0681 387 + #define CINTERION_VENDOR_ID 0x1e2d 388 + #define CINTERION_PRODUCT_HC25_MDM 0x0047 389 + #define CINTERION_PRODUCT_HC25_MDMNET 0x0040 390 + #define CINTERION_PRODUCT_HC28_MDM 0x004C 391 + #define CINTERION_PRODUCT_HC28_MDMNET 0x004A /* same for HC28J */ 392 + #define CINTERION_PRODUCT_EU3_E 0x0051 393 + #define CINTERION_PRODUCT_EU3_P 0x0052 394 + #define CINTERION_PRODUCT_PH8 0x0053 386 395 387 396 /* Olivetti products */ 388 397 #define OLIVETTI_VENDOR_ID 0x0b3c ··· 953 944 { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_100F) }, 954 945 { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1011)}, 955 946 { USB_DEVICE(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_1012)}, 956 - { USB_DEVICE(CINTERION_VENDOR_ID, 0x0047) }, 947 + /* Cinterion */ 948 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_E) }, 949 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_EU3_P) }, 950 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_PH8) }, 951 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, 952 + { USB_DEVICE(CINTERION_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, 953 + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDM) }, 954 + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) }, 955 + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */ 956 + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, 957 + 957 958 { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) }, 958 959 { USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */ 959 960 { USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */
+1
drivers/usb/serial/oti6858.c
··· 157 157 .name = "oti6858", 158 158 }, 159 159 .id_table = id_table, 160 + .usb_driver = &oti6858_driver, 160 161 .num_ports = 1, 161 162 .open = oti6858_open, 162 163 .close = oti6858_close,
+12
drivers/usb/serial/pl2303.c
··· 50 50 { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MMX) }, 51 51 { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_GPRS) }, 52 52 { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_HCR331) }, 53 + { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MOTOROLA) }, 53 54 { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID) }, 54 55 { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID_RSAQ5) }, 55 56 { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID) }, ··· 678 677 { 679 678 680 679 struct pl2303_private *priv = usb_get_serial_port_data(port); 680 + struct tty_struct *tty; 681 681 unsigned long flags; 682 682 u8 status_idx = UART_STATE; 683 683 u8 length = UART_STATE + 1; 684 + u8 prev_line_status; 684 685 u16 idv, idp; 685 686 686 687 idv = le16_to_cpu(port->serial->dev->descriptor.idVendor); ··· 704 701 705 702 /* Save off the uart status for others to look at */ 706 703 spin_lock_irqsave(&priv->lock, flags); 704 + prev_line_status = priv->line_status; 707 705 priv->line_status = data[status_idx]; 708 706 spin_unlock_irqrestore(&priv->lock, flags); 709 707 if (priv->line_status & UART_BREAK_ERROR) 710 708 usb_serial_handle_break(port); 711 709 wake_up_interruptible(&priv->delta_msr_wait); 710 + 711 + tty = tty_port_tty_get(&port->port); 712 + if (!tty) 713 + return; 714 + if ((priv->line_status ^ prev_line_status) & UART_DCD) 715 + usb_serial_handle_dcd_change(port, tty, 716 + priv->line_status & UART_DCD); 717 + tty_kref_put(tty); 712 718 } 713 719 714 720 static void pl2303_read_int_callback(struct urb *urb)
+1
drivers/usb/serial/pl2303.h
··· 21 21 #define PL2303_PRODUCT_ID_MMX 0x0612 22 22 #define PL2303_PRODUCT_ID_GPRS 0x0609 23 23 #define PL2303_PRODUCT_ID_HCR331 0x331a 24 + #define PL2303_PRODUCT_ID_MOTOROLA 0x0307 24 25 25 26 #define ATEN_VENDOR_ID 0x0557 26 27 #define ATEN_VENDOR_ID2 0x0547
+3
drivers/usb/serial/qcaux.c
··· 36 36 #define UTSTARCOM_PRODUCT_UM175_V1 0x3712 37 37 #define UTSTARCOM_PRODUCT_UM175_V2 0x3714 38 38 #define UTSTARCOM_PRODUCT_UM175_ALLTEL 0x3715 39 + #define PANTECH_PRODUCT_UML290_VZW 0x3718 39 40 40 41 /* CMOTECH devices */ 41 42 #define CMOTECH_VENDOR_ID 0x16d8 ··· 67 66 { USB_DEVICE_AND_INTERFACE_INFO(LG_VENDOR_ID, LG_PRODUCT_VX4400_6000, 0xff, 0xff, 0x00) }, 68 67 { USB_DEVICE_AND_INTERFACE_INFO(SANYO_VENDOR_ID, SANYO_PRODUCT_KATANA_LX, 0xff, 0xff, 0x00) }, 69 68 { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_U520, 0xff, 0x00, 0x00) }, 69 + { USB_DEVICE_AND_INTERFACE_INFO(UTSTARCOM_VENDOR_ID, PANTECH_PRODUCT_UML290_VZW, 0xff, 0xff, 0xff) }, 70 70 { }, 71 71 }; 72 72 MODULE_DEVICE_TABLE(usb, id_table); ··· 86 84 .name = "qcaux", 87 85 }, 88 86 .id_table = id_table, 87 + .usb_driver = &qcaux_driver, 89 88 .num_ports = 1, 90 89 }; 91 90
+1
drivers/usb/serial/siemens_mpi.c
··· 42 42 .name = "siemens_mpi", 43 43 }, 44 44 .id_table = id_table, 45 + .usb_driver = &siemens_usb_mpi_driver, 45 46 .num_ports = 1, 46 47 }; 47 48
+6 -1
drivers/usb/serial/spcp8x5.c
··· 133 133 134 134 /* how come ??? */ 135 135 #define UART_STATE 0x08 136 - #define UART_STATE_TRANSIENT_MASK 0x74 136 + #define UART_STATE_TRANSIENT_MASK 0x75 137 137 #define UART_DCD 0x01 138 138 #define UART_DSR 0x02 139 139 #define UART_BREAK_ERROR 0x04 ··· 525 525 /* overrun is special, not associated with a char */ 526 526 if (status & UART_OVERRUN_ERROR) 527 527 tty_insert_flip_char(tty, 0, TTY_OVERRUN); 528 + 529 + if (status & UART_DCD) 530 + usb_serial_handle_dcd_change(port, tty, 531 + priv->line_status & MSR_STATUS_LINE_DCD); 528 532 } 529 533 530 534 tty_insert_flip_string_fixed_flag(tty, data, tty_flag, ··· 649 645 .name = "SPCP8x5", 650 646 }, 651 647 .id_table = id_table, 648 + .usb_driver = &spcp8x5_driver, 652 649 .num_ports = 1, 653 650 .open = spcp8x5_open, 654 651 .dtr_rts = spcp8x5_dtr_rts,
+6 -2
drivers/usb/serial/usb-serial.c
··· 1344 1344 return -ENODEV; 1345 1345 1346 1346 fixup_generic(driver); 1347 - if (driver->usb_driver) 1348 - driver->usb_driver->supports_autosuspend = 1; 1349 1347 1350 1348 if (!driver->description) 1351 1349 driver->description = driver->driver.name; 1350 + if (!driver->usb_driver) { 1351 + WARN(1, "Serial driver %s has no usb_driver\n", 1352 + driver->description); 1353 + return -EINVAL; 1354 + } 1355 + driver->usb_driver->supports_autosuspend = 1; 1352 1356 1353 1357 /* Add this device to our list of devices */ 1354 1358 mutex_lock(&table_lock);
+1
drivers/usb/serial/usb_debug.c
··· 75 75 .name = "debug", 76 76 }, 77 77 .id_table = id_table, 78 + .usb_driver = &debug_driver, 78 79 .num_ports = 1, 79 80 .bulk_out_size = USB_DEBUG_MAX_PACKET_SIZE, 80 81 .break_ctl = usb_debug_break_ctl,
+5
drivers/usb/storage/unusual_cypress.h
··· 31 31 "Cypress ISD-300LP", 32 32 USB_SC_CYP_ATACB, USB_PR_DEVICE, NULL, 0), 33 33 34 + UNUSUAL_DEV( 0x14cd, 0x6116, 0x0000, 0x9999, 35 + "Super Top", 36 + "USB 2.0 SATA BRIDGE", 37 + USB_SC_CYP_ATACB, USB_PR_DEVICE, NULL, 0), 38 + 34 39 #endif /* defined(CONFIG_USB_STORAGE_CYPRESS_ATACB) || ... */
+18
drivers/usb/storage/unusual_devs.h
··· 1044 1044 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1045 1045 US_FL_BULK32), 1046 1046 1047 + /* Reported by <ttkspam@free.fr> 1048 + * The device reports a vendor-specific device class, requiring an 1049 + * explicit vendor/product match. 1050 + */ 1051 + UNUSUAL_DEV( 0x0851, 0x1542, 0x0002, 0x0002, 1052 + "MagicPixel", 1053 + "FW_Omega2", 1054 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 0), 1055 + 1047 1056 /* Andrew Lunn <andrew@lunn.ch> 1048 1057 * PanDigital Digital Picture Frame. Does not like ALLOW_MEDIUM_REMOVAL 1049 1058 * on LUN 4. ··· 1880 1871 "Photo Frame", 1881 1872 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1882 1873 US_FL_NO_READ_DISC_INFO ), 1874 + 1875 + /* Patch by Richard Schütz <r.schtz@t-online.de> 1876 + * This external hard drive enclosure uses a JMicron chip which 1877 + * needs the US_FL_IGNORE_RESIDUE flag to work properly. */ 1878 + UNUSUAL_DEV( 0x1e68, 0x001b, 0x0000, 0x0000, 1879 + "TrekStor GmbH & Co. KG", 1880 + "DataStation maxi g.u", 1881 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 1882 + US_FL_IGNORE_RESIDUE | US_FL_SANE_SENSE ), 1883 1883 1884 1884 UNUSUAL_DEV( 0x2116, 0x0320, 0x0001, 0x0001, 1885 1885 "ST",
+37 -6
fs/ceph/caps.c
··· 1560 1560 /* NOTE: no side-effects allowed, until we take s_mutex */ 1561 1561 1562 1562 revoking = cap->implemented & ~cap->issued; 1563 - if (revoking) 1564 - dout(" mds%d revoking %s\n", cap->mds, 1565 - ceph_cap_string(revoking)); 1563 + dout(" mds%d cap %p issued %s implemented %s revoking %s\n", 1564 + cap->mds, cap, ceph_cap_string(cap->issued), 1565 + ceph_cap_string(cap->implemented), 1566 + ceph_cap_string(revoking)); 1566 1567 1567 1568 if (cap == ci->i_auth_cap && 1568 1569 (cap->issued & CEPH_CAP_FILE_WR)) { ··· 1659 1658 1660 1659 if (cap == ci->i_auth_cap && ci->i_dirty_caps) 1661 1660 flushing = __mark_caps_flushing(inode, session); 1661 + else 1662 + flushing = 0; 1662 1663 1663 1664 mds = cap->mds; /* remember mds, so we don't repeat */ 1664 1665 sent++; ··· 1940 1937 cap, session->s_mds); 1941 1938 spin_unlock(&inode->i_lock); 1942 1939 } 1940 + } 1941 + } 1942 + 1943 + static void kick_flushing_inode_caps(struct ceph_mds_client *mdsc, 1944 + struct ceph_mds_session *session, 1945 + struct inode *inode) 1946 + { 1947 + struct ceph_inode_info *ci = ceph_inode(inode); 1948 + struct ceph_cap *cap; 1949 + int delayed = 0; 1950 + 1951 + spin_lock(&inode->i_lock); 1952 + cap = ci->i_auth_cap; 1953 + dout("kick_flushing_inode_caps %p flushing %s flush_seq %lld\n", inode, 1954 + ceph_cap_string(ci->i_flushing_caps), ci->i_cap_flush_seq); 1955 + __ceph_flush_snaps(ci, &session, 1); 1956 + if (ci->i_flushing_caps) { 1957 + delayed = __send_cap(mdsc, cap, CEPH_CAP_OP_FLUSH, 1958 + __ceph_caps_used(ci), 1959 + __ceph_caps_wanted(ci), 1960 + cap->issued | cap->implemented, 1961 + ci->i_flushing_caps, NULL); 1962 + if (delayed) { 1963 + spin_lock(&inode->i_lock); 1964 + __cap_delay_requeue(mdsc, ci); 1965 + spin_unlock(&inode->i_lock); 1966 + } 1967 + } else { 1968 + spin_unlock(&inode->i_lock); 1943 1969 } 1944 1970 } 1945 1971 ··· 2719 2687 ceph_add_cap(inode, session, cap_id, -1, 2720 2688 issued, wanted, seq, mseq, realmino, CEPH_CAP_FLAG_AUTH, 2721 
2689 NULL /* no caps context */); 2722 - try_flush_caps(inode, session, NULL); 2690 + kick_flushing_inode_caps(mdsc, session, inode); 2723 2691 up_read(&mdsc->snap_rwsem); 2724 2692 2725 2693 /* make sure we re-request max_size, if necessary */ ··· 2817 2785 case CEPH_CAP_OP_IMPORT: 2818 2786 handle_cap_import(mdsc, inode, h, session, 2819 2787 snaptrace, snaptrace_len); 2820 - ceph_check_caps(ceph_inode(inode), CHECK_CAPS_NODELAY, 2821 - session); 2788 + ceph_check_caps(ceph_inode(inode), 0, session); 2822 2789 goto done_unlocked; 2823 2790 } 2824 2791
+5 -5
fs/ceph/inode.c
··· 710 710 ci->i_ceph_flags |= CEPH_I_COMPLETE; 711 711 ci->i_max_offset = 2; 712 712 } 713 - 714 - /* it may be better to set st_size in getattr instead? */ 715 - if (ceph_test_mount_opt(ceph_sb_to_client(inode->i_sb), RBYTES)) 716 - inode->i_size = ci->i_rbytes; 717 713 break; 718 714 default: 719 715 pr_err("fill_inode %llx.%llx BAD mode 0%o\n", ··· 1815 1819 else 1816 1820 stat->dev = 0; 1817 1821 if (S_ISDIR(inode->i_mode)) { 1818 - stat->size = ci->i_rbytes; 1822 + if (ceph_test_mount_opt(ceph_sb_to_client(inode->i_sb), 1823 + RBYTES)) 1824 + stat->size = ci->i_rbytes; 1825 + else 1826 + stat->size = ci->i_files + ci->i_subdirs; 1819 1827 stat->blocks = 0; 1820 1828 stat->blksize = 65536; 1821 1829 }
+7 -3
fs/ceph/mds_client.c
··· 693 693 dout("choose_mds %p %llx.%llx " 694 694 "frag %u mds%d (%d/%d)\n", 695 695 inode, ceph_vinop(inode), 696 - frag.frag, frag.mds, 696 + frag.frag, mds, 697 697 (int)r, frag.ndist); 698 - return mds; 698 + if (ceph_mdsmap_get_state(mdsc->mdsmap, mds) >= 699 + CEPH_MDS_STATE_ACTIVE) 700 + return mds; 699 701 } 700 702 701 703 /* since this file/dir wasn't known to be ··· 710 708 dout("choose_mds %p %llx.%llx " 711 709 "frag %u mds%d (auth)\n", 712 710 inode, ceph_vinop(inode), frag.frag, mds); 713 - return mds; 711 + if (ceph_mdsmap_get_state(mdsc->mdsmap, mds) >= 712 + CEPH_MDS_STATE_ACTIVE) 713 + return mds; 714 714 } 715 715 } 716 716 }
+2
fs/ceph/super.c
··· 290 290 291 291 fsopt->rsize = CEPH_MOUNT_RSIZE_DEFAULT; 292 292 fsopt->snapdir_name = kstrdup(CEPH_SNAPDIRNAME_DEFAULT, GFP_KERNEL); 293 + fsopt->caps_wanted_delay_min = CEPH_CAPS_WANTED_DELAY_MIN_DEFAULT; 294 + fsopt->caps_wanted_delay_max = CEPH_CAPS_WANTED_DELAY_MAX_DEFAULT; 293 295 fsopt->cap_release_safety = CEPH_CAP_RELEASE_SAFETY_DEFAULT; 294 296 fsopt->max_readdir = CEPH_MAX_READDIR_DEFAULT; 295 297 fsopt->max_readdir_bytes = CEPH_MAX_READDIR_BYTES_DEFAULT;
+3
fs/ceph/xattr.c
··· 219 219 struct rb_node **p; 220 220 struct rb_node *parent = NULL; 221 221 struct ceph_inode_xattr *xattr = NULL; 222 + int name_len = strlen(name); 222 223 int c; 223 224 224 225 p = &ci->i_xattrs.index.rb_node; ··· 227 226 parent = *p; 228 227 xattr = rb_entry(parent, struct ceph_inode_xattr, node); 229 228 c = strncmp(name, xattr->name, xattr->name_len); 229 + if (c == 0 && name_len > xattr->name_len) 230 + c = 1; 230 231 if (c < 0) 231 232 p = &(*p)->rb_left; 232 233 else if (c > 0)
+1 -1
fs/cifs/Makefile
··· 5 5 6 6 cifs-y := cifsfs.o cifssmb.o cifs_debug.o connect.o dir.o file.o inode.o \ 7 7 link.o misc.o netmisc.o smbdes.o smbencrypt.o transport.o asn1.o \ 8 - md4.o md5.o cifs_unicode.o nterr.o xattr.o cifsencrypt.o \ 8 + cifs_unicode.o nterr.o xattr.o cifsencrypt.o \ 9 9 readdir.o ioctl.o sess.o export.o 10 10 11 11 cifs-$(CONFIG_CIFS_ACL) += cifsacl.o
+5
fs/cifs/README
··· 452 452 if oplock (caching token) is granted and held. Note that 453 453 direct allows write operations larger than page size 454 454 to be sent to the server. 455 + strictcache Switch on strict cache mode. In this mode the client 456 + reads from the cache as long as it holds Oplock Level II; 457 + otherwise it reads from the server. All written data is 458 + stored in the cache, but if the client does not hold an 459 + Exclusive Oplock, it also writes the data to the server. 455 460 acl Allow setfacl and getfacl to manage posix ACLs if server 456 461 supports them. (default) 457 462 noacl Do not allow setfacl and getfacl calls on this mount
+20 -13
fs/cifs/cifsencrypt.c
··· 24 24 #include "cifspdu.h" 25 25 #include "cifsglob.h" 26 26 #include "cifs_debug.h" 27 - #include "md5.h" 28 27 #include "cifs_unicode.h" 29 28 #include "cifsproto.h" 30 29 #include "ntlmssp.h" ··· 35 36 /* Note we only use the 1st eight bytes */ 36 37 /* Note that the smb header signature field on input contains the 37 38 sequence number before this function is called */ 38 - 39 - extern void mdfour(unsigned char *out, unsigned char *in, int n); 40 - extern void E_md4hash(const unsigned char *passwd, unsigned char *p16); 41 - extern void SMBencrypt(unsigned char *passwd, const unsigned char *c8, 42 - unsigned char *p24); 43 39 44 40 static int cifs_calculate_signature(const struct smb_hdr *cifs_pdu, 45 41 struct TCP_Server_Info *server, char *signature) ··· 228 234 /* first calculate 24 bytes ntlm response and then 16 byte session key */ 229 235 int setup_ntlm_response(struct cifsSesInfo *ses) 230 236 { 237 + int rc = 0; 231 238 unsigned int temp_len = CIFS_SESS_KEY_SIZE + CIFS_AUTH_RESP_SIZE; 232 239 char temp_key[CIFS_SESS_KEY_SIZE]; 233 240 ··· 242 247 } 243 248 ses->auth_key.len = temp_len; 244 249 245 - SMBNTencrypt(ses->password, ses->server->cryptkey, 250 + rc = SMBNTencrypt(ses->password, ses->server->cryptkey, 246 251 ses->auth_key.response + CIFS_SESS_KEY_SIZE); 252 + if (rc) { 253 + cFYI(1, "%s Can't generate NTLM response, error: %d", 254 + __func__, rc); 255 + return rc; 256 + } 247 257 248 - E_md4hash(ses->password, temp_key); 249 - mdfour(ses->auth_key.response, temp_key, CIFS_SESS_KEY_SIZE); 258 + rc = E_md4hash(ses->password, temp_key); 259 + if (rc) { 260 + cFYI(1, "%s Can't generate NT hash, error: %d", __func__, rc); 261 + return rc; 262 + } 250 263 251 - return 0; 264 + rc = mdfour(ses->auth_key.response, temp_key, CIFS_SESS_KEY_SIZE); 265 + if (rc) 266 + cFYI(1, "%s Can't generate NTLM session key, error: %d", 267 + __func__, rc); 268 + 269 + return rc; 252 270 } 253 271 254 272 #ifdef CONFIG_CIFS_WEAK_PW_HASH ··· 708 700 unsigned int 
size; 709 701 710 702 server->secmech.hmacmd5 = crypto_alloc_shash("hmac(md5)", 0, 0); 711 - if (!server->secmech.hmacmd5 || 712 - IS_ERR(server->secmech.hmacmd5)) { 703 + if (IS_ERR(server->secmech.hmacmd5)) { 713 704 cERROR(1, "could not allocate crypto hmacmd5\n"); 714 705 return PTR_ERR(server->secmech.hmacmd5); 715 706 } 716 707 717 708 server->secmech.md5 = crypto_alloc_shash("md5", 0, 0); 718 - if (!server->secmech.md5 || IS_ERR(server->secmech.md5)) { 709 + if (IS_ERR(server->secmech.md5)) { 719 710 cERROR(1, "could not allocate crypto md5\n"); 720 711 rc = PTR_ERR(server->secmech.md5); 721 712 goto crypto_allocate_md5_fail;
-33
fs/cifs/cifsencrypt.h
··· 1 - /* 2 - * fs/cifs/cifsencrypt.h 3 - * 4 - * Copyright (c) International Business Machines Corp., 2005 5 - * Author(s): Steve French (sfrench@us.ibm.com) 6 - * 7 - * Externs for misc. small encryption routines 8 - * so we do not have to put them in cifsproto.h 9 - * 10 - * This library is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU Lesser General Public License as published 12 - * by the Free Software Foundation; either version 2.1 of the License, or 13 - * (at your option) any later version. 14 - * 15 - * This library is distributed in the hope that it will be useful, 16 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See 18 - * the GNU Lesser General Public License for more details. 19 - * 20 - * You should have received a copy of the GNU Lesser General Public License 21 - * along with this library; if not, write to the Free Software 22 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 23 - */ 24 - 25 - /* md4.c */ 26 - extern void mdfour(unsigned char *out, unsigned char *in, int n); 27 - /* smbdes.c */ 28 - extern void E_P16(unsigned char *p14, unsigned char *p16); 29 - extern void E_P24(unsigned char *p21, const unsigned char *c8, 30 - unsigned char *p24); 31 - 32 - 33 -
+11 -4
fs/cifs/cifsfs.c
··· 600 600 { 601 601 struct inode *inode = iocb->ki_filp->f_path.dentry->d_inode; 602 602 ssize_t written; 603 + int rc; 603 604 604 605 written = generic_file_aio_write(iocb, iov, nr_segs, pos); 605 - if (!CIFS_I(inode)->clientCanCacheAll) 606 - filemap_fdatawrite(inode->i_mapping); 606 + 607 + if (CIFS_I(inode)->clientCanCacheAll) 608 + return written; 609 + 610 + rc = filemap_fdatawrite(inode->i_mapping); 611 + if (rc) 612 + cFYI(1, "cifs_file_aio_write: %d rc on %p inode", rc, inode); 613 + 607 614 return written; 608 615 } 609 616 ··· 744 737 .read = do_sync_read, 745 738 .write = do_sync_write, 746 739 .aio_read = cifs_strict_readv, 747 - .aio_write = cifs_file_aio_write, 740 + .aio_write = cifs_strict_writev, 748 741 .open = cifs_open, 749 742 .release = cifs_close, 750 743 .lock = cifs_lock, ··· 800 793 .read = do_sync_read, 801 794 .write = do_sync_write, 802 795 .aio_read = cifs_strict_readv, 803 - .aio_write = cifs_file_aio_write, 796 + .aio_write = cifs_strict_writev, 804 797 .open = cifs_open, 805 798 .release = cifs_close, 806 799 .fsync = cifs_strict_fsync,
+3 -1
fs/cifs/cifsfs.h
··· 85 85 extern ssize_t cifs_strict_readv(struct kiocb *iocb, const struct iovec *iov, 86 86 unsigned long nr_segs, loff_t pos); 87 87 extern ssize_t cifs_user_write(struct file *file, const char __user *write_data, 88 - size_t write_size, loff_t *poffset); 88 + size_t write_size, loff_t *poffset); 89 + extern ssize_t cifs_strict_writev(struct kiocb *iocb, const struct iovec *iov, 90 + unsigned long nr_segs, loff_t pos); 89 91 extern int cifs_lock(struct file *, int, struct file_lock *); 90 92 extern int cifs_fsync(struct file *, int); 91 93 extern int cifs_strict_fsync(struct file *, int);
+10 -1
fs/cifs/cifsproto.h
··· 85 85 extern bool is_valid_oplock_break(struct smb_hdr *smb, 86 86 struct TCP_Server_Info *); 87 87 extern bool is_size_safe_to_change(struct cifsInodeInfo *, __u64 eof); 88 + extern void cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, 89 + unsigned int bytes_written); 88 90 extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *, bool); 89 91 extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *, bool); 90 92 extern unsigned int smbCalcSize(struct smb_hdr *ptr); ··· 375 373 extern int cifs_verify_signature(struct smb_hdr *, 376 374 struct TCP_Server_Info *server, 377 375 __u32 expected_sequence_number); 378 - extern void SMBNTencrypt(unsigned char *, unsigned char *, unsigned char *); 376 + extern int SMBNTencrypt(unsigned char *, unsigned char *, unsigned char *); 379 377 extern int setup_ntlm_response(struct cifsSesInfo *); 380 378 extern int setup_ntlmv2_rsp(struct cifsSesInfo *, const struct nls_table *); 381 379 extern int cifs_crypto_shash_allocate(struct TCP_Server_Info *); ··· 425 423 extern int CIFSCheckMFSymlink(struct cifs_fattr *fattr, 426 424 const unsigned char *path, 427 425 struct cifs_sb_info *cifs_sb, int xid); 426 + extern int mdfour(unsigned char *, unsigned char *, int); 427 + extern int E_md4hash(const unsigned char *passwd, unsigned char *p16); 428 + extern void SMBencrypt(unsigned char *passwd, const unsigned char *c8, 429 + unsigned char *p24); 430 + extern void E_P16(unsigned char *p14, unsigned char *p16); 431 + extern void E_P24(unsigned char *p21, const unsigned char *c8, 432 + unsigned char *p24); 428 433 #endif /* _CIFSPROTO_H */
+7 -4
fs/cifs/connect.c
··· 55 55 /* SMB echo "timeout" -- FIXME: tunable? */ 56 56 #define SMB_ECHO_INTERVAL (60 * HZ) 57 57 58 - extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, 59 - unsigned char *p24); 60 - 61 58 extern mempool_t *cifs_req_poolp; 62 59 63 60 struct smb_vol { ··· 84 87 bool no_xattr:1; /* set if xattr (EA) support should be disabled*/ 85 88 bool server_ino:1; /* use inode numbers from server ie UniqueId */ 86 89 bool direct_io:1; 90 + bool strict_io:1; /* strict cache behavior */ 87 91 bool remap:1; /* set to remap seven reserved chars in filenames */ 88 92 bool posix_paths:1; /* unset to not ask for posix pathnames. */ 89 93 bool no_linux_ext:1; ··· 1342 1344 vol->direct_io = 1; 1343 1345 } else if (strnicmp(data, "forcedirectio", 13) == 0) { 1344 1346 vol->direct_io = 1; 1347 + } else if (strnicmp(data, "strictcache", 11) == 0) { 1348 + vol->strict_io = 1; 1345 1349 } else if (strnicmp(data, "noac", 4) == 0) { 1346 1350 printk(KERN_WARNING "CIFS: Mount option noac not " 1347 1351 "supported. Instead set " ··· 2584 2584 if (pvolume_info->multiuser) 2585 2585 cifs_sb->mnt_cifs_flags |= (CIFS_MOUNT_MULTIUSER | 2586 2586 CIFS_MOUNT_NO_PERM); 2587 + if (pvolume_info->strict_io) 2588 + cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_STRICT_IO; 2587 2589 if (pvolume_info->direct_io) { 2588 2590 cFYI(1, "mounting share using direct i/o"); 2589 2591 cifs_sb->mnt_cifs_flags |= CIFS_MOUNT_DIRECT_IO; ··· 2987 2985 bcc_ptr); 2988 2986 else 2989 2987 #endif /* CIFS_WEAK_PW_HASH */ 2990 - SMBNTencrypt(tcon->password, ses->server->cryptkey, bcc_ptr); 2988 + rc = SMBNTencrypt(tcon->password, ses->server->cryptkey, 2989 + bcc_ptr); 2991 2990 2992 2991 bcc_ptr += CIFS_AUTH_RESP_SIZE; 2993 2992 if (ses->capabilities & CAP_UNICODE) {
+201 -1
fs/cifs/file.c
··· 848 848 } 849 849 850 850 /* update the file size (if needed) after a write */ 851 - static void 851 + void 852 852 cifs_update_eof(struct cifsInodeInfo *cifsi, loff_t offset, 853 853 unsigned int bytes_written) 854 854 { ··· 1617 1617 cFYI(1, "Flush inode %p file %p rc %d", inode, file, rc); 1618 1618 1619 1619 return rc; 1620 + } 1621 + 1622 + static int 1623 + cifs_write_allocate_pages(struct page **pages, unsigned long num_pages) 1624 + { 1625 + int rc = 0; 1626 + unsigned long i; 1627 + 1628 + for (i = 0; i < num_pages; i++) { 1629 + pages[i] = alloc_page(__GFP_HIGHMEM); 1630 + if (!pages[i]) { 1631 + /* 1632 + * save number of pages we have already allocated and 1633 + * return with ENOMEM error 1634 + */ 1635 + num_pages = i; 1636 + rc = -ENOMEM; 1637 + goto error; 1638 + } 1639 + } 1640 + 1641 + return rc; 1642 + 1643 + error: 1644 + for (i = 0; i < num_pages; i++) 1645 + put_page(pages[i]); 1646 + return rc; 1647 + } 1648 + 1649 + static inline 1650 + size_t get_numpages(const size_t wsize, const size_t len, size_t *cur_len) 1651 + { 1652 + size_t num_pages; 1653 + size_t clen; 1654 + 1655 + clen = min_t(const size_t, len, wsize); 1656 + num_pages = clen / PAGE_CACHE_SIZE; 1657 + if (clen % PAGE_CACHE_SIZE) 1658 + num_pages++; 1659 + 1660 + if (cur_len) 1661 + *cur_len = clen; 1662 + 1663 + return num_pages; 1664 + } 1665 + 1666 + static ssize_t 1667 + cifs_iovec_write(struct file *file, const struct iovec *iov, 1668 + unsigned long nr_segs, loff_t *poffset) 1669 + { 1670 + size_t total_written = 0, written = 0; 1671 + unsigned long num_pages, npages; 1672 + size_t copied, len, cur_len, i; 1673 + struct kvec *to_send; 1674 + struct page **pages; 1675 + struct iov_iter it; 1676 + struct inode *inode; 1677 + struct cifsFileInfo *open_file; 1678 + struct cifsTconInfo *pTcon; 1679 + struct cifs_sb_info *cifs_sb; 1680 + int xid, rc; 1681 + 1682 + len = iov_length(iov, nr_segs); 1683 + if (!len) 1684 + return 0; 1685 + 1686 + rc = generic_write_checks(file, 
poffset, &len, 0); 1687 + if (rc) 1688 + return rc; 1689 + 1690 + cifs_sb = CIFS_SB(file->f_path.dentry->d_sb); 1691 + num_pages = get_numpages(cifs_sb->wsize, len, &cur_len); 1692 + 1693 + pages = kmalloc(sizeof(struct pages *)*num_pages, GFP_KERNEL); 1694 + if (!pages) 1695 + return -ENOMEM; 1696 + 1697 + to_send = kmalloc(sizeof(struct kvec)*(num_pages + 1), GFP_KERNEL); 1698 + if (!to_send) { 1699 + kfree(pages); 1700 + return -ENOMEM; 1701 + } 1702 + 1703 + rc = cifs_write_allocate_pages(pages, num_pages); 1704 + if (rc) { 1705 + kfree(pages); 1706 + kfree(to_send); 1707 + return rc; 1708 + } 1709 + 1710 + xid = GetXid(); 1711 + open_file = file->private_data; 1712 + pTcon = tlink_tcon(open_file->tlink); 1713 + inode = file->f_path.dentry->d_inode; 1714 + 1715 + iov_iter_init(&it, iov, nr_segs, len, 0); 1716 + npages = num_pages; 1717 + 1718 + do { 1719 + size_t save_len = cur_len; 1720 + for (i = 0; i < npages; i++) { 1721 + copied = min_t(const size_t, cur_len, PAGE_CACHE_SIZE); 1722 + copied = iov_iter_copy_from_user(pages[i], &it, 0, 1723 + copied); 1724 + cur_len -= copied; 1725 + iov_iter_advance(&it, copied); 1726 + to_send[i+1].iov_base = kmap(pages[i]); 1727 + to_send[i+1].iov_len = copied; 1728 + } 1729 + 1730 + cur_len = save_len - cur_len; 1731 + 1732 + do { 1733 + if (open_file->invalidHandle) { 1734 + rc = cifs_reopen_file(open_file, false); 1735 + if (rc != 0) 1736 + break; 1737 + } 1738 + rc = CIFSSMBWrite2(xid, pTcon, open_file->netfid, 1739 + cur_len, *poffset, &written, 1740 + to_send, npages, 0); 1741 + } while (rc == -EAGAIN); 1742 + 1743 + for (i = 0; i < npages; i++) 1744 + kunmap(pages[i]); 1745 + 1746 + if (written) { 1747 + len -= written; 1748 + total_written += written; 1749 + cifs_update_eof(CIFS_I(inode), *poffset, written); 1750 + *poffset += written; 1751 + } else if (rc < 0) { 1752 + if (!total_written) 1753 + total_written = rc; 1754 + break; 1755 + } 1756 + 1757 + /* get length and number of kvecs of the next write */ 1758 + 
npages = get_numpages(cifs_sb->wsize, len, &cur_len); 1759 + } while (len > 0); 1760 + 1761 + if (total_written > 0) { 1762 + spin_lock(&inode->i_lock); 1763 + if (*poffset > inode->i_size) 1764 + i_size_write(inode, *poffset); 1765 + spin_unlock(&inode->i_lock); 1766 + } 1767 + 1768 + cifs_stats_bytes_written(pTcon, total_written); 1769 + mark_inode_dirty_sync(inode); 1770 + 1771 + for (i = 0; i < num_pages; i++) 1772 + put_page(pages[i]); 1773 + kfree(to_send); 1774 + kfree(pages); 1775 + FreeXid(xid); 1776 + return total_written; 1777 + } 1778 + 1779 + static ssize_t cifs_user_writev(struct kiocb *iocb, const struct iovec *iov, 1780 + unsigned long nr_segs, loff_t pos) 1781 + { 1782 + ssize_t written; 1783 + struct inode *inode; 1784 + 1785 + inode = iocb->ki_filp->f_path.dentry->d_inode; 1786 + 1787 + /* 1788 + * BB - optimize the way when signing is disabled. We can drop this 1789 + * extra memory-to-memory copying and use iovec buffers for constructing 1790 + * write request. 1791 + */ 1792 + 1793 + written = cifs_iovec_write(iocb->ki_filp, iov, nr_segs, &pos); 1794 + if (written > 0) { 1795 + CIFS_I(inode)->invalid_mapping = true; 1796 + iocb->ki_pos = pos; 1797 + } 1798 + 1799 + return written; 1800 + } 1801 + 1802 + ssize_t cifs_strict_writev(struct kiocb *iocb, const struct iovec *iov, 1803 + unsigned long nr_segs, loff_t pos) 1804 + { 1805 + struct inode *inode; 1806 + 1807 + inode = iocb->ki_filp->f_path.dentry->d_inode; 1808 + 1809 + if (CIFS_I(inode)->clientCanCacheAll) 1810 + return generic_file_aio_write(iocb, iov, nr_segs, pos); 1811 + 1812 + /* 1813 + * In strict cache mode we need to write the data to the server exactly 1814 + * from the pos to pos+len-1 rather than flush all affected pages 1815 + * because it may cause a error with mandatory locks on these pages but 1816 + * not on the region from pos to ppos+len-1. 1817 + */ 1818 + 1819 + return cifs_user_writev(iocb, iov, nr_segs, pos); 1620 1820 } 1621 1821 1622 1822 static ssize_t
+49 -9
fs/cifs/link.c
··· 28 28 #include "cifsproto.h" 29 29 #include "cifs_debug.h" 30 30 #include "cifs_fs_sb.h" 31 - #include "md5.h" 32 31 33 32 #define CIFS_MF_SYMLINK_LEN_OFFSET (4+1) 34 33 #define CIFS_MF_SYMLINK_MD5_OFFSET (CIFS_MF_SYMLINK_LEN_OFFSET+(4+1)) ··· 46 47 md5_hash[12], md5_hash[13], md5_hash[14], md5_hash[15] 47 48 48 49 static int 50 + symlink_hash(unsigned int link_len, const char *link_str, u8 *md5_hash) 51 + { 52 + int rc; 53 + unsigned int size; 54 + struct crypto_shash *md5; 55 + struct sdesc *sdescmd5; 56 + 57 + md5 = crypto_alloc_shash("md5", 0, 0); 58 + if (IS_ERR(md5)) { 59 + cERROR(1, "%s: Crypto md5 allocation error %d\n", __func__, rc); 60 + return PTR_ERR(md5); 61 + } 62 + size = sizeof(struct shash_desc) + crypto_shash_descsize(md5); 63 + sdescmd5 = kmalloc(size, GFP_KERNEL); 64 + if (!sdescmd5) { 65 + rc = -ENOMEM; 66 + cERROR(1, "%s: Memory allocation failure\n", __func__); 67 + goto symlink_hash_err; 68 + } 69 + sdescmd5->shash.tfm = md5; 70 + sdescmd5->shash.flags = 0x0; 71 + 72 + rc = crypto_shash_init(&sdescmd5->shash); 73 + if (rc) { 74 + cERROR(1, "%s: Could not init md5 shash\n", __func__); 75 + goto symlink_hash_err; 76 + } 77 + crypto_shash_update(&sdescmd5->shash, link_str, link_len); 78 + rc = crypto_shash_final(&sdescmd5->shash, md5_hash); 79 + 80 + symlink_hash_err: 81 + crypto_free_shash(md5); 82 + kfree(sdescmd5); 83 + 84 + return rc; 85 + } 86 + 87 + static int 49 88 CIFSParseMFSymlink(const u8 *buf, 50 89 unsigned int buf_len, 51 90 unsigned int *_link_len, ··· 93 56 unsigned int link_len; 94 57 const char *md5_str1; 95 58 const char *link_str; 96 - struct MD5Context md5_ctx; 97 59 u8 md5_hash[16]; 98 60 char md5_str2[34]; 99 61 ··· 106 70 if (rc != 1) 107 71 return -EINVAL; 108 72 109 - cifs_MD5_init(&md5_ctx); 110 - cifs_MD5_update(&md5_ctx, (const u8 *)link_str, link_len); 111 - cifs_MD5_final(md5_hash, &md5_ctx); 73 + rc = symlink_hash(link_len, link_str, md5_hash); 74 + if (rc) { 75 + cFYI(1, "%s: MD5 hash failure: %d\n", 
__func__, rc); 76 + return rc; 77 + } 112 78 113 79 snprintf(md5_str2, sizeof(md5_str2), 114 80 CIFS_MF_SYMLINK_MD5_FORMAT, ··· 132 94 static int 133 95 CIFSFormatMFSymlink(u8 *buf, unsigned int buf_len, const char *link_str) 134 96 { 97 + int rc; 135 98 unsigned int link_len; 136 99 unsigned int ofs; 137 - struct MD5Context md5_ctx; 138 100 u8 md5_hash[16]; 139 101 140 102 if (buf_len != CIFS_MF_SYMLINK_FILE_SIZE) ··· 145 107 if (link_len > CIFS_MF_SYMLINK_LINK_MAXLEN) 146 108 return -ENAMETOOLONG; 147 109 148 - cifs_MD5_init(&md5_ctx); 149 - cifs_MD5_update(&md5_ctx, (const u8 *)link_str, link_len); 150 - cifs_MD5_final(md5_hash, &md5_ctx); 110 + rc = symlink_hash(link_len, link_str, md5_hash); 111 + if (rc) { 112 + cFYI(1, "%s: MD5 hash failure: %d\n", __func__, rc); 113 + return rc; 114 + } 151 115 152 116 snprintf(buf, buf_len, 153 117 CIFS_MF_SYMLINK_LEN_FORMAT CIFS_MF_SYMLINK_MD5_FORMAT,
-205
fs/cifs/md4.c
··· 1 - /* 2 - Unix SMB/Netbios implementation. 3 - Version 1.9. 4 - a implementation of MD4 designed for use in the SMB authentication protocol 5 - Copyright (C) Andrew Tridgell 1997-1998. 6 - Modified by Steve French (sfrench@us.ibm.com) 2002-2003 7 - 8 - This program is free software; you can redistribute it and/or modify 9 - it under the terms of the GNU General Public License as published by 10 - the Free Software Foundation; either version 2 of the License, or 11 - (at your option) any later version. 12 - 13 - This program is distributed in the hope that it will be useful, 14 - but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - GNU General Public License for more details. 17 - 18 - You should have received a copy of the GNU General Public License 19 - along with this program; if not, write to the Free Software 20 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 21 - */ 22 - #include <linux/module.h> 23 - #include <linux/fs.h> 24 - #include "cifsencrypt.h" 25 - 26 - /* NOTE: This code makes no attempt to be fast! 
*/ 27 - 28 - static __u32 29 - F(__u32 X, __u32 Y, __u32 Z) 30 - { 31 - return (X & Y) | ((~X) & Z); 32 - } 33 - 34 - static __u32 35 - G(__u32 X, __u32 Y, __u32 Z) 36 - { 37 - return (X & Y) | (X & Z) | (Y & Z); 38 - } 39 - 40 - static __u32 41 - H(__u32 X, __u32 Y, __u32 Z) 42 - { 43 - return X ^ Y ^ Z; 44 - } 45 - 46 - static __u32 47 - lshift(__u32 x, int s) 48 - { 49 - x &= 0xFFFFFFFF; 50 - return ((x << s) & 0xFFFFFFFF) | (x >> (32 - s)); 51 - } 52 - 53 - #define ROUND1(a,b,c,d,k,s) (*a) = lshift((*a) + F(*b,*c,*d) + X[k], s) 54 - #define ROUND2(a,b,c,d,k,s) (*a) = lshift((*a) + G(*b,*c,*d) + X[k] + (__u32)0x5A827999,s) 55 - #define ROUND3(a,b,c,d,k,s) (*a) = lshift((*a) + H(*b,*c,*d) + X[k] + (__u32)0x6ED9EBA1,s) 56 - 57 - /* this applies md4 to 64 byte chunks */ 58 - static void 59 - mdfour64(__u32 *M, __u32 *A, __u32 *B, __u32 *C, __u32 *D) 60 - { 61 - int j; 62 - __u32 AA, BB, CC, DD; 63 - __u32 X[16]; 64 - 65 - 66 - for (j = 0; j < 16; j++) 67 - X[j] = M[j]; 68 - 69 - AA = *A; 70 - BB = *B; 71 - CC = *C; 72 - DD = *D; 73 - 74 - ROUND1(A, B, C, D, 0, 3); 75 - ROUND1(D, A, B, C, 1, 7); 76 - ROUND1(C, D, A, B, 2, 11); 77 - ROUND1(B, C, D, A, 3, 19); 78 - ROUND1(A, B, C, D, 4, 3); 79 - ROUND1(D, A, B, C, 5, 7); 80 - ROUND1(C, D, A, B, 6, 11); 81 - ROUND1(B, C, D, A, 7, 19); 82 - ROUND1(A, B, C, D, 8, 3); 83 - ROUND1(D, A, B, C, 9, 7); 84 - ROUND1(C, D, A, B, 10, 11); 85 - ROUND1(B, C, D, A, 11, 19); 86 - ROUND1(A, B, C, D, 12, 3); 87 - ROUND1(D, A, B, C, 13, 7); 88 - ROUND1(C, D, A, B, 14, 11); 89 - ROUND1(B, C, D, A, 15, 19); 90 - 91 - ROUND2(A, B, C, D, 0, 3); 92 - ROUND2(D, A, B, C, 4, 5); 93 - ROUND2(C, D, A, B, 8, 9); 94 - ROUND2(B, C, D, A, 12, 13); 95 - ROUND2(A, B, C, D, 1, 3); 96 - ROUND2(D, A, B, C, 5, 5); 97 - ROUND2(C, D, A, B, 9, 9); 98 - ROUND2(B, C, D, A, 13, 13); 99 - ROUND2(A, B, C, D, 2, 3); 100 - ROUND2(D, A, B, C, 6, 5); 101 - ROUND2(C, D, A, B, 10, 9); 102 - ROUND2(B, C, D, A, 14, 13); 103 - ROUND2(A, B, C, D, 3, 3); 104 - ROUND2(D, A, 
B, C, 7, 5); 105 - ROUND2(C, D, A, B, 11, 9); 106 - ROUND2(B, C, D, A, 15, 13); 107 - 108 - ROUND3(A, B, C, D, 0, 3); 109 - ROUND3(D, A, B, C, 8, 9); 110 - ROUND3(C, D, A, B, 4, 11); 111 - ROUND3(B, C, D, A, 12, 15); 112 - ROUND3(A, B, C, D, 2, 3); 113 - ROUND3(D, A, B, C, 10, 9); 114 - ROUND3(C, D, A, B, 6, 11); 115 - ROUND3(B, C, D, A, 14, 15); 116 - ROUND3(A, B, C, D, 1, 3); 117 - ROUND3(D, A, B, C, 9, 9); 118 - ROUND3(C, D, A, B, 5, 11); 119 - ROUND3(B, C, D, A, 13, 15); 120 - ROUND3(A, B, C, D, 3, 3); 121 - ROUND3(D, A, B, C, 11, 9); 122 - ROUND3(C, D, A, B, 7, 11); 123 - ROUND3(B, C, D, A, 15, 15); 124 - 125 - *A += AA; 126 - *B += BB; 127 - *C += CC; 128 - *D += DD; 129 - 130 - *A &= 0xFFFFFFFF; 131 - *B &= 0xFFFFFFFF; 132 - *C &= 0xFFFFFFFF; 133 - *D &= 0xFFFFFFFF; 134 - 135 - for (j = 0; j < 16; j++) 136 - X[j] = 0; 137 - } 138 - 139 - static void 140 - copy64(__u32 *M, unsigned char *in) 141 - { 142 - int i; 143 - 144 - for (i = 0; i < 16; i++) 145 - M[i] = (in[i * 4 + 3] << 24) | (in[i * 4 + 2] << 16) | 146 - (in[i * 4 + 1] << 8) | (in[i * 4 + 0] << 0); 147 - } 148 - 149 - static void 150 - copy4(unsigned char *out, __u32 x) 151 - { 152 - out[0] = x & 0xFF; 153 - out[1] = (x >> 8) & 0xFF; 154 - out[2] = (x >> 16) & 0xFF; 155 - out[3] = (x >> 24) & 0xFF; 156 - } 157 - 158 - /* produce a md4 message digest from data of length n bytes */ 159 - void 160 - mdfour(unsigned char *out, unsigned char *in, int n) 161 - { 162 - unsigned char buf[128]; 163 - __u32 M[16]; 164 - __u32 b = n * 8; 165 - int i; 166 - __u32 A = 0x67452301; 167 - __u32 B = 0xefcdab89; 168 - __u32 C = 0x98badcfe; 169 - __u32 D = 0x10325476; 170 - 171 - while (n > 64) { 172 - copy64(M, in); 173 - mdfour64(M, &A, &B, &C, &D); 174 - in += 64; 175 - n -= 64; 176 - } 177 - 178 - for (i = 0; i < 128; i++) 179 - buf[i] = 0; 180 - memcpy(buf, in, n); 181 - buf[n] = 0x80; 182 - 183 - if (n <= 55) { 184 - copy4(buf + 56, b); 185 - copy64(M, buf); 186 - mdfour64(M, &A, &B, &C, &D); 187 - } else { 188 
- copy4(buf + 120, b); 189 - copy64(M, buf); 190 - mdfour64(M, &A, &B, &C, &D); 191 - copy64(M, buf + 64); 192 - mdfour64(M, &A, &B, &C, &D); 193 - } 194 - 195 - for (i = 0; i < 128; i++) 196 - buf[i] = 0; 197 - copy64(M, buf); 198 - 199 - copy4(out, A); 200 - copy4(out + 4, B); 201 - copy4(out + 8, C); 202 - copy4(out + 12, D); 203 - 204 - A = B = C = D = 0; 205 - }
-366
fs/cifs/md5.c
··· 1 - /* 2 - * This code implements the MD5 message-digest algorithm. 3 - * The algorithm is due to Ron Rivest. This code was 4 - * written by Colin Plumb in 1993, no copyright is claimed. 5 - * This code is in the public domain; do with it what you wish. 6 - * 7 - * Equivalent code is available from RSA Data Security, Inc. 8 - * This code has been tested against that, and is equivalent, 9 - * except that you don't need to include two pages of legalese 10 - * with every copy. 11 - * 12 - * To compute the message digest of a chunk of bytes, declare an 13 - * MD5Context structure, pass it to cifs_MD5_init, call cifs_MD5_update as 14 - * needed on buffers full of bytes, and then call cifs_MD5_final, which 15 - * will fill a supplied 16-byte array with the digest. 16 - */ 17 - 18 - /* This code slightly modified to fit into Samba by 19 - abartlet@samba.org Jun 2001 20 - and to fit the cifs vfs by 21 - Steve French sfrench@us.ibm.com */ 22 - 23 - #include <linux/string.h> 24 - #include "md5.h" 25 - 26 - static void MD5Transform(__u32 buf[4], __u32 const in[16]); 27 - 28 - /* 29 - * Note: this code is harmless on little-endian machines. 30 - */ 31 - static void 32 - byteReverse(unsigned char *buf, unsigned longs) 33 - { 34 - __u32 t; 35 - do { 36 - t = (__u32) ((unsigned) buf[3] << 8 | buf[2]) << 16 | 37 - ((unsigned) buf[1] << 8 | buf[0]); 38 - *(__u32 *) buf = t; 39 - buf += 4; 40 - } while (--longs); 41 - } 42 - 43 - /* 44 - * Start MD5 accumulation. Set bit count to 0 and buffer to mysterious 45 - * initialization constants. 46 - */ 47 - void 48 - cifs_MD5_init(struct MD5Context *ctx) 49 - { 50 - ctx->buf[0] = 0x67452301; 51 - ctx->buf[1] = 0xefcdab89; 52 - ctx->buf[2] = 0x98badcfe; 53 - ctx->buf[3] = 0x10325476; 54 - 55 - ctx->bits[0] = 0; 56 - ctx->bits[1] = 0; 57 - } 58 - 59 - /* 60 - * Update context to reflect the concatenation of another buffer full 61 - * of bytes. 
62 - */ 63 - void 64 - cifs_MD5_update(struct MD5Context *ctx, unsigned char const *buf, unsigned len) 65 - { 66 - register __u32 t; 67 - 68 - /* Update bitcount */ 69 - 70 - t = ctx->bits[0]; 71 - if ((ctx->bits[0] = t + ((__u32) len << 3)) < t) 72 - ctx->bits[1]++; /* Carry from low to high */ 73 - ctx->bits[1] += len >> 29; 74 - 75 - t = (t >> 3) & 0x3f; /* Bytes already in shsInfo->data */ 76 - 77 - /* Handle any leading odd-sized chunks */ 78 - 79 - if (t) { 80 - unsigned char *p = (unsigned char *) ctx->in + t; 81 - 82 - t = 64 - t; 83 - if (len < t) { 84 - memmove(p, buf, len); 85 - return; 86 - } 87 - memmove(p, buf, t); 88 - byteReverse(ctx->in, 16); 89 - MD5Transform(ctx->buf, (__u32 *) ctx->in); 90 - buf += t; 91 - len -= t; 92 - } 93 - /* Process data in 64-byte chunks */ 94 - 95 - while (len >= 64) { 96 - memmove(ctx->in, buf, 64); 97 - byteReverse(ctx->in, 16); 98 - MD5Transform(ctx->buf, (__u32 *) ctx->in); 99 - buf += 64; 100 - len -= 64; 101 - } 102 - 103 - /* Handle any remaining bytes of data. */ 104 - 105 - memmove(ctx->in, buf, len); 106 - } 107 - 108 - /* 109 - * Final wrapup - pad to 64-byte boundary with the bit pattern 110 - * 1 0* (64-bit count of bits processed, MSB-first) 111 - */ 112 - void 113 - cifs_MD5_final(unsigned char digest[16], struct MD5Context *ctx) 114 - { 115 - unsigned int count; 116 - unsigned char *p; 117 - 118 - /* Compute number of bytes mod 64 */ 119 - count = (ctx->bits[0] >> 3) & 0x3F; 120 - 121 - /* Set the first char of padding to 0x80. 
This is safe since there is 122 - always at least one byte free */ 123 - p = ctx->in + count; 124 - *p++ = 0x80; 125 - 126 - /* Bytes of padding needed to make 64 bytes */ 127 - count = 64 - 1 - count; 128 - 129 - /* Pad out to 56 mod 64 */ 130 - if (count < 8) { 131 - /* Two lots of padding: Pad the first block to 64 bytes */ 132 - memset(p, 0, count); 133 - byteReverse(ctx->in, 16); 134 - MD5Transform(ctx->buf, (__u32 *) ctx->in); 135 - 136 - /* Now fill the next block with 56 bytes */ 137 - memset(ctx->in, 0, 56); 138 - } else { 139 - /* Pad block to 56 bytes */ 140 - memset(p, 0, count - 8); 141 - } 142 - byteReverse(ctx->in, 14); 143 - 144 - /* Append length in bits and transform */ 145 - ((__u32 *) ctx->in)[14] = ctx->bits[0]; 146 - ((__u32 *) ctx->in)[15] = ctx->bits[1]; 147 - 148 - MD5Transform(ctx->buf, (__u32 *) ctx->in); 149 - byteReverse((unsigned char *) ctx->buf, 4); 150 - memmove(digest, ctx->buf, 16); 151 - memset(ctx, 0, sizeof(*ctx)); /* In case it's sensitive */ 152 - } 153 - 154 - /* The four core functions - F1 is optimized somewhat */ 155 - 156 - /* #define F1(x, y, z) (x & y | ~x & z) */ 157 - #define F1(x, y, z) (z ^ (x & (y ^ z))) 158 - #define F2(x, y, z) F1(z, x, y) 159 - #define F3(x, y, z) (x ^ y ^ z) 160 - #define F4(x, y, z) (y ^ (x | ~z)) 161 - 162 - /* This is the central step in the MD5 algorithm. */ 163 - #define MD5STEP(f, w, x, y, z, data, s) \ 164 - (w += f(x, y, z) + data, w = w<<s | w>>(32-s), w += x) 165 - 166 - /* 167 - * The core of the MD5 algorithm, this alters an existing MD5 hash to 168 - * reflect the addition of 16 longwords of new data. cifs_MD5_update blocks 169 - * the data and converts bytes into longwords for this routine. 
170 - */ 171 - static void 172 - MD5Transform(__u32 buf[4], __u32 const in[16]) 173 - { 174 - register __u32 a, b, c, d; 175 - 176 - a = buf[0]; 177 - b = buf[1]; 178 - c = buf[2]; 179 - d = buf[3]; 180 - 181 - MD5STEP(F1, a, b, c, d, in[0] + 0xd76aa478, 7); 182 - MD5STEP(F1, d, a, b, c, in[1] + 0xe8c7b756, 12); 183 - MD5STEP(F1, c, d, a, b, in[2] + 0x242070db, 17); 184 - MD5STEP(F1, b, c, d, a, in[3] + 0xc1bdceee, 22); 185 - MD5STEP(F1, a, b, c, d, in[4] + 0xf57c0faf, 7); 186 - MD5STEP(F1, d, a, b, c, in[5] + 0x4787c62a, 12); 187 - MD5STEP(F1, c, d, a, b, in[6] + 0xa8304613, 17); 188 - MD5STEP(F1, b, c, d, a, in[7] + 0xfd469501, 22); 189 - MD5STEP(F1, a, b, c, d, in[8] + 0x698098d8, 7); 190 - MD5STEP(F1, d, a, b, c, in[9] + 0x8b44f7af, 12); 191 - MD5STEP(F1, c, d, a, b, in[10] + 0xffff5bb1, 17); 192 - MD5STEP(F1, b, c, d, a, in[11] + 0x895cd7be, 22); 193 - MD5STEP(F1, a, b, c, d, in[12] + 0x6b901122, 7); 194 - MD5STEP(F1, d, a, b, c, in[13] + 0xfd987193, 12); 195 - MD5STEP(F1, c, d, a, b, in[14] + 0xa679438e, 17); 196 - MD5STEP(F1, b, c, d, a, in[15] + 0x49b40821, 22); 197 - 198 - MD5STEP(F2, a, b, c, d, in[1] + 0xf61e2562, 5); 199 - MD5STEP(F2, d, a, b, c, in[6] + 0xc040b340, 9); 200 - MD5STEP(F2, c, d, a, b, in[11] + 0x265e5a51, 14); 201 - MD5STEP(F2, b, c, d, a, in[0] + 0xe9b6c7aa, 20); 202 - MD5STEP(F2, a, b, c, d, in[5] + 0xd62f105d, 5); 203 - MD5STEP(F2, d, a, b, c, in[10] + 0x02441453, 9); 204 - MD5STEP(F2, c, d, a, b, in[15] + 0xd8a1e681, 14); 205 - MD5STEP(F2, b, c, d, a, in[4] + 0xe7d3fbc8, 20); 206 - MD5STEP(F2, a, b, c, d, in[9] + 0x21e1cde6, 5); 207 - MD5STEP(F2, d, a, b, c, in[14] + 0xc33707d6, 9); 208 - MD5STEP(F2, c, d, a, b, in[3] + 0xf4d50d87, 14); 209 - MD5STEP(F2, b, c, d, a, in[8] + 0x455a14ed, 20); 210 - MD5STEP(F2, a, b, c, d, in[13] + 0xa9e3e905, 5); 211 - MD5STEP(F2, d, a, b, c, in[2] + 0xfcefa3f8, 9); 212 - MD5STEP(F2, c, d, a, b, in[7] + 0x676f02d9, 14); 213 - MD5STEP(F2, b, c, d, a, in[12] + 0x8d2a4c8a, 20); 214 - 215 - MD5STEP(F3, a, 
b, c, d, in[5] + 0xfffa3942, 4); 216 - MD5STEP(F3, d, a, b, c, in[8] + 0x8771f681, 11); 217 - MD5STEP(F3, c, d, a, b, in[11] + 0x6d9d6122, 16); 218 - MD5STEP(F3, b, c, d, a, in[14] + 0xfde5380c, 23); 219 - MD5STEP(F3, a, b, c, d, in[1] + 0xa4beea44, 4); 220 - MD5STEP(F3, d, a, b, c, in[4] + 0x4bdecfa9, 11); 221 - MD5STEP(F3, c, d, a, b, in[7] + 0xf6bb4b60, 16); 222 - MD5STEP(F3, b, c, d, a, in[10] + 0xbebfbc70, 23); 223 - MD5STEP(F3, a, b, c, d, in[13] + 0x289b7ec6, 4); 224 - MD5STEP(F3, d, a, b, c, in[0] + 0xeaa127fa, 11); 225 - MD5STEP(F3, c, d, a, b, in[3] + 0xd4ef3085, 16); 226 - MD5STEP(F3, b, c, d, a, in[6] + 0x04881d05, 23); 227 - MD5STEP(F3, a, b, c, d, in[9] + 0xd9d4d039, 4); 228 - MD5STEP(F3, d, a, b, c, in[12] + 0xe6db99e5, 11); 229 - MD5STEP(F3, c, d, a, b, in[15] + 0x1fa27cf8, 16); 230 - MD5STEP(F3, b, c, d, a, in[2] + 0xc4ac5665, 23); 231 - 232 - MD5STEP(F4, a, b, c, d, in[0] + 0xf4292244, 6); 233 - MD5STEP(F4, d, a, b, c, in[7] + 0x432aff97, 10); 234 - MD5STEP(F4, c, d, a, b, in[14] + 0xab9423a7, 15); 235 - MD5STEP(F4, b, c, d, a, in[5] + 0xfc93a039, 21); 236 - MD5STEP(F4, a, b, c, d, in[12] + 0x655b59c3, 6); 237 - MD5STEP(F4, d, a, b, c, in[3] + 0x8f0ccc92, 10); 238 - MD5STEP(F4, c, d, a, b, in[10] + 0xffeff47d, 15); 239 - MD5STEP(F4, b, c, d, a, in[1] + 0x85845dd1, 21); 240 - MD5STEP(F4, a, b, c, d, in[8] + 0x6fa87e4f, 6); 241 - MD5STEP(F4, d, a, b, c, in[15] + 0xfe2ce6e0, 10); 242 - MD5STEP(F4, c, d, a, b, in[6] + 0xa3014314, 15); 243 - MD5STEP(F4, b, c, d, a, in[13] + 0x4e0811a1, 21); 244 - MD5STEP(F4, a, b, c, d, in[4] + 0xf7537e82, 6); 245 - MD5STEP(F4, d, a, b, c, in[11] + 0xbd3af235, 10); 246 - MD5STEP(F4, c, d, a, b, in[2] + 0x2ad7d2bb, 15); 247 - MD5STEP(F4, b, c, d, a, in[9] + 0xeb86d391, 21); 248 - 249 - buf[0] += a; 250 - buf[1] += b; 251 - buf[2] += c; 252 - buf[3] += d; 253 - } 254 - 255 - #if 0 /* currently unused */ 256 - /*********************************************************************** 257 - the rfc 2104 version of hmac_md5 
initialisation. 258 - ***********************************************************************/ 259 - static void 260 - hmac_md5_init_rfc2104(unsigned char *key, int key_len, 261 - struct HMACMD5Context *ctx) 262 - { 263 - int i; 264 - 265 - /* if key is longer than 64 bytes reset it to key=MD5(key) */ 266 - if (key_len > 64) { 267 - unsigned char tk[16]; 268 - struct MD5Context tctx; 269 - 270 - cifs_MD5_init(&tctx); 271 - cifs_MD5_update(&tctx, key, key_len); 272 - cifs_MD5_final(tk, &tctx); 273 - 274 - key = tk; 275 - key_len = 16; 276 - } 277 - 278 - /* start out by storing key in pads */ 279 - memset(ctx->k_ipad, 0, sizeof(ctx->k_ipad)); 280 - memset(ctx->k_opad, 0, sizeof(ctx->k_opad)); 281 - memcpy(ctx->k_ipad, key, key_len); 282 - memcpy(ctx->k_opad, key, key_len); 283 - 284 - /* XOR key with ipad and opad values */ 285 - for (i = 0; i < 64; i++) { 286 - ctx->k_ipad[i] ^= 0x36; 287 - ctx->k_opad[i] ^= 0x5c; 288 - } 289 - 290 - cifs_MD5_init(&ctx->ctx); 291 - cifs_MD5_update(&ctx->ctx, ctx->k_ipad, 64); 292 - } 293 - #endif 294 - 295 - /*********************************************************************** 296 - the microsoft version of hmac_md5 initialisation. 
297 - ***********************************************************************/ 298 - void 299 - hmac_md5_init_limK_to_64(const unsigned char *key, int key_len, 300 - struct HMACMD5Context *ctx) 301 - { 302 - int i; 303 - 304 - /* if key is longer than 64 bytes truncate it */ 305 - if (key_len > 64) 306 - key_len = 64; 307 - 308 - /* start out by storing key in pads */ 309 - memset(ctx->k_ipad, 0, sizeof(ctx->k_ipad)); 310 - memset(ctx->k_opad, 0, sizeof(ctx->k_opad)); 311 - memcpy(ctx->k_ipad, key, key_len); 312 - memcpy(ctx->k_opad, key, key_len); 313 - 314 - /* XOR key with ipad and opad values */ 315 - for (i = 0; i < 64; i++) { 316 - ctx->k_ipad[i] ^= 0x36; 317 - ctx->k_opad[i] ^= 0x5c; 318 - } 319 - 320 - cifs_MD5_init(&ctx->ctx); 321 - cifs_MD5_update(&ctx->ctx, ctx->k_ipad, 64); 322 - } 323 - 324 - /*********************************************************************** 325 - update hmac_md5 "inner" buffer 326 - ***********************************************************************/ 327 - void 328 - hmac_md5_update(const unsigned char *text, int text_len, 329 - struct HMACMD5Context *ctx) 330 - { 331 - cifs_MD5_update(&ctx->ctx, text, text_len); /* then text of datagram */ 332 - } 333 - 334 - /*********************************************************************** 335 - finish off hmac_md5 "inner" buffer and generate outer one. 336 - ***********************************************************************/ 337 - void 338 - hmac_md5_final(unsigned char *digest, struct HMACMD5Context *ctx) 339 - { 340 - struct MD5Context ctx_o; 341 - 342 - cifs_MD5_final(digest, &ctx->ctx); 343 - 344 - cifs_MD5_init(&ctx_o); 345 - cifs_MD5_update(&ctx_o, ctx->k_opad, 64); 346 - cifs_MD5_update(&ctx_o, digest, 16); 347 - cifs_MD5_final(digest, &ctx_o); 348 - } 349 - 350 - /*********************************************************** 351 - single function to calculate an HMAC MD5 digest from data. 352 - use the microsoft hmacmd5 init method because the key is 16 bytes. 
353 - ************************************************************/ 354 - #if 0 /* currently unused */ 355 - static void 356 - hmac_md5(unsigned char key[16], unsigned char *data, int data_len, 357 - unsigned char *digest) 358 - { 359 - struct HMACMD5Context ctx; 360 - hmac_md5_init_limK_to_64(key, 16, &ctx); 361 - if (data_len != 0) 362 - hmac_md5_update(data, data_len, &ctx); 363 - 364 - hmac_md5_final(digest, &ctx); 365 - } 366 - #endif
-38
fs/cifs/md5.h
··· 1 - #ifndef MD5_H 2 - #define MD5_H 3 - #ifndef HEADER_MD5_H 4 - /* Try to avoid clashes with OpenSSL */ 5 - #define HEADER_MD5_H 6 - #endif 7 - 8 - struct MD5Context { 9 - __u32 buf[4]; 10 - __u32 bits[2]; 11 - unsigned char in[64]; 12 - }; 13 - #endif /* !MD5_H */ 14 - 15 - #ifndef _HMAC_MD5_H 16 - struct HMACMD5Context { 17 - struct MD5Context ctx; 18 - unsigned char k_ipad[65]; 19 - unsigned char k_opad[65]; 20 - }; 21 - #endif /* _HMAC_MD5_H */ 22 - 23 - void cifs_MD5_init(struct MD5Context *context); 24 - void cifs_MD5_update(struct MD5Context *context, unsigned char const *buf, 25 - unsigned len); 26 - void cifs_MD5_final(unsigned char digest[16], struct MD5Context *context); 27 - 28 - /* The following definitions come from lib/hmacmd5.c */ 29 - 30 - /* void hmac_md5_init_rfc2104(unsigned char *key, int key_len, 31 - struct HMACMD5Context *ctx);*/ 32 - void hmac_md5_init_limK_to_64(const unsigned char *key, int key_len, 33 - struct HMACMD5Context *ctx); 34 - void hmac_md5_update(const unsigned char *text, int text_len, 35 - struct HMACMD5Context *ctx); 36 - void hmac_md5_final(unsigned char *digest, struct HMACMD5Context *ctx); 37 - /* void hmac_md5(unsigned char key[16], unsigned char *data, int data_len, 38 - unsigned char *digest);*/
-1
fs/cifs/smbdes.c
··· 45 45 up with a different answer to the one above) 46 46 */ 47 47 #include <linux/slab.h> 48 - #include "cifsencrypt.h" 49 48 #define uchar unsigned char 50 49 51 50 static uchar perm1[56] = { 57, 49, 41, 33, 25, 17, 9,
+64 -27
fs/cifs/smbencrypt.c
··· 32 32 #include "cifs_unicode.h" 33 33 #include "cifspdu.h" 34 34 #include "cifsglob.h" 35 - #include "md5.h" 36 35 #include "cifs_debug.h" 37 - #include "cifsencrypt.h" 36 + #include "cifsproto.h" 38 37 39 38 #ifndef false 40 39 #define false 0 ··· 47 48 #define SSVALX(buf,pos,val) (CVAL(buf,pos)=(val)&0xFF,CVAL(buf,pos+1)=(val)>>8) 48 49 #define SSVAL(buf,pos,val) SSVALX((buf),(pos),((__u16)(val))) 49 50 50 - /*The following definitions come from libsmb/smbencrypt.c */ 51 + /* produce a md4 message digest from data of length n bytes */ 52 + int 53 + mdfour(unsigned char *md4_hash, unsigned char *link_str, int link_len) 54 + { 55 + int rc; 56 + unsigned int size; 57 + struct crypto_shash *md4; 58 + struct sdesc *sdescmd4; 51 59 52 - void SMBencrypt(unsigned char *passwd, const unsigned char *c8, 53 - unsigned char *p24); 54 - void E_md4hash(const unsigned char *passwd, unsigned char *p16); 55 - static void SMBOWFencrypt(unsigned char passwd[16], const unsigned char *c8, 56 - unsigned char p24[24]); 57 - void SMBNTencrypt(unsigned char *passwd, unsigned char *c8, unsigned char *p24); 60 + md4 = crypto_alloc_shash("md4", 0, 0); 61 + if (IS_ERR(md4)) { 62 + cERROR(1, "%s: Crypto md4 allocation error %d\n", __func__, rc); 63 + return PTR_ERR(md4); 64 + } 65 + size = sizeof(struct shash_desc) + crypto_shash_descsize(md4); 66 + sdescmd4 = kmalloc(size, GFP_KERNEL); 67 + if (!sdescmd4) { 68 + rc = -ENOMEM; 69 + cERROR(1, "%s: Memory allocation failure\n", __func__); 70 + goto mdfour_err; 71 + } 72 + sdescmd4->shash.tfm = md4; 73 + sdescmd4->shash.flags = 0x0; 74 + 75 + rc = crypto_shash_init(&sdescmd4->shash); 76 + if (rc) { 77 + cERROR(1, "%s: Could not init md4 shash\n", __func__); 78 + goto mdfour_err; 79 + } 80 + crypto_shash_update(&sdescmd4->shash, link_str, link_len); 81 + rc = crypto_shash_final(&sdescmd4->shash, md4_hash); 82 + 83 + mdfour_err: 84 + crypto_free_shash(md4); 85 + kfree(sdescmd4); 86 + 87 + return rc; 88 + } 89 + 90 + /* Does the des encryption 
from the NT or LM MD4 hash. */ 91 + static void 92 + SMBOWFencrypt(unsigned char passwd[16], const unsigned char *c8, 93 + unsigned char p24[24]) 94 + { 95 + unsigned char p21[21]; 96 + 97 + memset(p21, '\0', 21); 98 + 99 + memcpy(p21, passwd, 16); 100 + E_P24(p21, c8, p24); 101 + } 58 102 59 103 /* 60 104 This implements the X/Open SMB password encryption ··· 160 118 * Creates the MD4 Hash of the users password in NT UNICODE. 161 119 */ 162 120 163 - void 121 + int 164 122 E_md4hash(const unsigned char *passwd, unsigned char *p16) 165 123 { 124 + int rc; 166 125 int len; 167 126 __u16 wpwd[129]; 168 127 ··· 182 139 /* Calculate length in bytes */ 183 140 len = _my_wcslen(wpwd) * sizeof(__u16); 184 141 185 - mdfour(p16, (unsigned char *) wpwd, len); 142 + rc = mdfour(p16, (unsigned char *) wpwd, len); 186 143 memset(wpwd, 0, 129 * 2); 144 + 145 + return rc; 187 146 } 188 147 189 148 #if 0 /* currently unused */ ··· 257 212 } 258 213 #endif 259 214 260 - /* Does the des encryption from the NT or LM MD4 hash. */ 261 - static void 262 - SMBOWFencrypt(unsigned char passwd[16], const unsigned char *c8, 263 - unsigned char p24[24]) 264 - { 265 - unsigned char p21[21]; 266 - 267 - memset(p21, '\0', 21); 268 - 269 - memcpy(p21, passwd, 16); 270 - E_P24(p21, c8, p24); 271 - } 272 - 273 215 /* Does the des encryption from the FIRST 8 BYTES of the NT or LM MD4 hash. */ 274 216 #if 0 /* currently unused */ 275 217 static void ··· 274 242 #endif 275 243 276 244 /* Does the NT MD4 hash then des encryption. */ 277 - 278 - void 245 + int 279 246 SMBNTencrypt(unsigned char *passwd, unsigned char *c8, unsigned char *p24) 280 247 { 248 + int rc; 281 249 unsigned char p21[21]; 282 250 283 251 memset(p21, '\0', 21); 284 252 285 - E_md4hash(passwd, p21); 253 + rc = E_md4hash(passwd, p21); 254 + if (rc) { 255 + cFYI(1, "%s Can't generate NT hash, error: %d", __func__, rc); 256 + return rc; 257 + } 286 258 SMBOWFencrypt(p21, c8, p24); 259 + return rc; 287 260 } 288 261 289 262
+5 -4
fs/lockd/host.c
··· 520 520 struct nsm_handle *nsm, 521 521 const struct nlm_reboot *info) 522 522 { 523 - struct nlm_host *host = NULL; 523 + struct nlm_host *host; 524 524 struct hlist_head *chain; 525 525 struct hlist_node *pos; 526 526 ··· 532 532 host->h_state++; 533 533 534 534 nlm_get_host(host); 535 - goto out; 535 + mutex_unlock(&nlm_host_mutex); 536 + return host; 536 537 } 537 538 } 538 - out: 539 + 539 540 mutex_unlock(&nlm_host_mutex); 540 - return host; 541 + return NULL; 541 542 } 542 543 543 544 /**
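The lockd/host.c hunk above replaces the shared `out:` label with an unlock-and-return on the match path, so the not-found path always returns NULL rather than whatever the loop cursor last pointed at. A simplified model of that control-flow shape (locking shown only as comments, types are stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the fixed search pattern: return the match immediately
 * (after unlocking), and let the fall-through path return NULL, instead
 * of funnelling both outcomes through one label with a shared pointer. */
struct item { int key; int state; };

static struct item *find_and_bump(struct item *tbl, int n, int key)
{
    /* mutex_lock(&tbl_mutex); */
    for (int i = 0; i < n; i++) {
        if (tbl[i].key == key) {
            tbl[i].state++;
            /* mutex_unlock(&tbl_mutex); */
            return &tbl[i];
        }
    }
    /* mutex_unlock(&tbl_mutex); */
    return NULL;    /* no match: never a stale pointer */
}
```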
+29 -80
fs/nfs/callback.c
··· 135 135 136 136 #if defined(CONFIG_NFS_V4_1) 137 137 /* 138 - * * CB_SEQUENCE operations will fail until the callback sessionid is set. 139 - * */ 140 - int nfs4_set_callback_sessionid(struct nfs_client *clp) 141 - { 142 - struct svc_serv *serv = clp->cl_rpcclient->cl_xprt->bc_serv; 143 - struct nfs4_sessionid *bc_sid; 144 - 145 - if (!serv->sv_bc_xprt) 146 - return -EINVAL; 147 - 148 - /* on success freed in xprt_free */ 149 - bc_sid = kmalloc(sizeof(struct nfs4_sessionid), GFP_KERNEL); 150 - if (!bc_sid) 151 - return -ENOMEM; 152 - memcpy(bc_sid->data, &clp->cl_session->sess_id.data, 153 - NFS4_MAX_SESSIONID_LEN); 154 - spin_lock_bh(&serv->sv_cb_lock); 155 - serv->sv_bc_xprt->xpt_bc_sid = bc_sid; 156 - spin_unlock_bh(&serv->sv_cb_lock); 157 - dprintk("%s set xpt_bc_sid=%u:%u:%u:%u for sv_bc_xprt %p\n", __func__, 158 - ((u32 *)bc_sid->data)[0], ((u32 *)bc_sid->data)[1], 159 - ((u32 *)bc_sid->data)[2], ((u32 *)bc_sid->data)[3], 160 - serv->sv_bc_xprt); 161 - return 0; 162 - } 163 - 164 - /* 165 138 * The callback service for NFSv4.1 callbacks 166 139 */ 167 140 static int ··· 239 266 struct nfs_callback_data *cb_info) 240 267 { 241 268 } 242 - int nfs4_set_callback_sessionid(struct nfs_client *clp) 243 - { 244 - return 0; 245 - } 246 269 #endif /* CONFIG_NFS_V4_1 */ 247 270 248 271 /* ··· 328 359 mutex_unlock(&nfs_callback_mutex); 329 360 } 330 361 331 - static int check_gss_callback_principal(struct nfs_client *clp, 332 - struct svc_rqst *rqstp) 362 + /* Boolean check of RPC_AUTH_GSS principal */ 363 + int 364 + check_gss_callback_principal(struct nfs_client *clp, struct svc_rqst *rqstp) 333 365 { 334 366 struct rpc_clnt *r = clp->cl_rpcclient; 335 367 char *p = svc_gss_principal(rqstp); 336 368 369 + if (rqstp->rq_authop->flavour != RPC_AUTH_GSS) 370 + return 1; 371 + 337 372 /* No RPC_AUTH_GSS on NFSv4.1 back channel yet */ 338 373 if (clp->cl_minorversion != 0) 339 - return SVC_DROP; 374 + return 0; 340 375 /* 341 376 * It might just be a normal user 
principal, in which case 342 377 * userspace won't bother to tell us the name at all. 343 378 */ 344 379 if (p == NULL) 345 - return SVC_DENIED; 380 + return 0; 346 381 347 382 /* Expect a GSS_C_NT_HOSTBASED_NAME like "nfs@serverhostname" */ 348 383 349 384 if (memcmp(p, "nfs@", 4) != 0) 350 - return SVC_DENIED; 385 + return 0; 351 386 p += 4; 352 387 if (strcmp(p, r->cl_server) != 0) 353 - return SVC_DENIED; 354 - return SVC_OK; 388 + return 0; 389 + return 1; 355 390 } 356 391 357 - /* pg_authenticate method helper */ 358 - static struct nfs_client *nfs_cb_find_client(struct svc_rqst *rqstp) 359 - { 360 - struct nfs4_sessionid *sessionid = bc_xprt_sid(rqstp); 361 - int is_cb_compound = rqstp->rq_proc == CB_COMPOUND ? 1 : 0; 362 - 363 - dprintk("--> %s rq_proc %d\n", __func__, rqstp->rq_proc); 364 - if (svc_is_backchannel(rqstp)) 365 - /* Sessionid (usually) set after CB_NULL ping */ 366 - return nfs4_find_client_sessionid(svc_addr(rqstp), sessionid, 367 - is_cb_compound); 368 - else 369 - /* No callback identifier in pg_authenticate */ 370 - return nfs4_find_client_no_ident(svc_addr(rqstp)); 371 - } 372 - 373 - /* pg_authenticate method for nfsv4 callback threads. */ 392 + /* 393 + * pg_authenticate method for nfsv4 callback threads. 394 + * 395 + * The authflavor has been negotiated, so an incorrect flavor is a server 396 + * bug. Drop packets with incorrect authflavor. 
397 + * 398 + * All other checking done after NFS decoding where the nfs_client can be 399 + * found in nfs4_callback_compound 400 + */ 374 401 static int nfs_callback_authenticate(struct svc_rqst *rqstp) 375 402 { 376 - struct nfs_client *clp; 377 - RPC_IFDEBUG(char buf[RPC_MAX_ADDRBUFLEN]); 378 - int ret = SVC_OK; 379 - 380 - /* Don't talk to strangers */ 381 - clp = nfs_cb_find_client(rqstp); 382 - if (clp == NULL) 383 - return SVC_DROP; 384 - 385 - dprintk("%s: %s NFSv4 callback!\n", __func__, 386 - svc_print_addr(rqstp, buf, sizeof(buf))); 387 - 388 403 switch (rqstp->rq_authop->flavour) { 389 - case RPC_AUTH_NULL: 390 - if (rqstp->rq_proc != CB_NULL) 391 - ret = SVC_DENIED; 392 - break; 393 - case RPC_AUTH_UNIX: 394 - break; 395 - case RPC_AUTH_GSS: 396 - ret = check_gss_callback_principal(clp, rqstp); 397 - break; 398 - default: 399 - ret = SVC_DENIED; 404 + case RPC_AUTH_NULL: 405 + if (rqstp->rq_proc != CB_NULL) 406 + return SVC_DROP; 407 + break; 408 + case RPC_AUTH_GSS: 409 + /* No RPC_AUTH_GSS support yet in NFSv4.1 */ 410 + if (svc_is_backchannel(rqstp)) 411 + return SVC_DROP; 400 412 } 401 - nfs_put_client(clp); 402 - return ret; 413 + return SVC_OK; 403 414 } 404 415 405 416 /*
+2 -2
fs/nfs/callback.h
··· 7 7 */ 8 8 #ifndef __LINUX_FS_NFS_CALLBACK_H 9 9 #define __LINUX_FS_NFS_CALLBACK_H 10 + #include <linux/sunrpc/svc.h> 10 11 11 12 #define NFS4_CALLBACK 0x40000000 12 13 #define NFS4_CALLBACK_XDRSIZE 2048 ··· 38 37 struct cb_process_state { 39 38 __be32 drc_status; 40 39 struct nfs_client *clp; 41 - struct nfs4_sessionid *svc_sid; /* v4.1 callback service sessionid */ 42 40 }; 43 41 44 42 struct cb_compound_hdr_arg { ··· 168 168 extern void nfs4_check_drain_bc_complete(struct nfs4_session *ses); 169 169 extern void nfs4_cb_take_slot(struct nfs_client *clp); 170 170 #endif /* CONFIG_NFS_V4_1 */ 171 - 171 + extern int check_gss_callback_principal(struct nfs_client *, struct svc_rqst *); 172 172 extern __be32 nfs4_callback_getattr(struct cb_getattrargs *args, 173 173 struct cb_getattrres *res, 174 174 struct cb_process_state *cps);
+3 -9
fs/nfs/callback_proc.c
··· 373 373 { 374 374 struct nfs_client *clp; 375 375 int i; 376 - __be32 status; 376 + __be32 status = htonl(NFS4ERR_BADSESSION); 377 377 378 378 cps->clp = NULL; 379 379 380 - status = htonl(NFS4ERR_BADSESSION); 381 - /* Incoming session must match the callback session */ 382 - if (memcmp(&args->csa_sessionid, cps->svc_sid, NFS4_MAX_SESSIONID_LEN)) 383 - goto out; 384 - 385 - clp = nfs4_find_client_sessionid(args->csa_addr, 386 - &args->csa_sessionid, 1); 380 + clp = nfs4_find_client_sessionid(args->csa_addr, &args->csa_sessionid); 387 381 if (clp == NULL) 388 382 goto out; 389 383 ··· 408 414 res->csr_highestslotid = NFS41_BC_MAX_CALLBACKS - 1; 409 415 res->csr_target_highestslotid = NFS41_BC_MAX_CALLBACKS - 1; 410 416 nfs4_cb_take_slot(clp); 411 - cps->clp = clp; /* put in nfs4_callback_compound */ 412 417 413 418 out: 419 + cps->clp = clp; /* put in nfs4_callback_compound */ 414 420 for (i = 0; i < args->csa_nrclists; i++) 415 421 kfree(args->csa_rclists[i].rcl_refcalls); 416 422 kfree(args->csa_rclists);
+2 -3
fs/nfs/callback_xdr.c
··· 794 794 795 795 if (hdr_arg.minorversion == 0) { 796 796 cps.clp = nfs4_find_client_ident(hdr_arg.cb_ident); 797 - if (!cps.clp) 797 + if (!cps.clp || !check_gss_callback_principal(cps.clp, rqstp)) 798 798 return rpc_drop_reply; 799 - } else 800 - cps.svc_sid = bc_xprt_sid(rqstp); 799 + } 801 800 802 801 hdr_res.taglen = hdr_arg.taglen; 803 802 hdr_res.tag = hdr_arg.tag;
+5 -10
fs/nfs/client.c
··· 1206 1206 * For CB_COMPOUND calls, find a client by IP address, protocol version, 1207 1207 * minorversion, and sessionID 1208 1208 * 1209 - * CREATE_SESSION triggers a CB_NULL ping from servers. The callback service 1210 - * sessionid can only be set after the CREATE_SESSION return, so a CB_NULL 1211 - * can arrive before the callback sessionid is set. For CB_NULL calls, 1212 - * find a client by IP address protocol version, and minorversion. 1213 - * 1214 1209 * Returns NULL if no such client 1215 1210 */ 1216 1211 struct nfs_client * 1217 1212 nfs4_find_client_sessionid(const struct sockaddr *addr, 1218 - struct nfs4_sessionid *sid, int is_cb_compound) 1213 + struct nfs4_sessionid *sid) 1219 1214 { 1220 1215 struct nfs_client *clp; 1221 1216 ··· 1222 1227 if (!nfs4_has_session(clp)) 1223 1228 continue; 1224 1229 1225 - /* Match sessionid unless cb_null call*/ 1226 - if (is_cb_compound && (memcmp(clp->cl_session->sess_id.data, 1227 - sid->data, NFS4_MAX_SESSIONID_LEN) != 0)) 1230 + /* Match sessionid*/ 1231 + if (memcmp(clp->cl_session->sess_id.data, 1232 + sid->data, NFS4_MAX_SESSIONID_LEN) != 0) 1228 1233 continue; 1229 1234 1230 1235 atomic_inc(&clp->cl_count); ··· 1239 1244 1240 1245 struct nfs_client * 1241 1246 nfs4_find_client_sessionid(const struct sockaddr *addr, 1242 - struct nfs4_sessionid *sid, int is_cb_compound) 1247 + struct nfs4_sessionid *sid) 1243 1248 { 1244 1249 return NULL; 1245 1250 }
+4 -2
fs/nfs/delegation.c
··· 23 23 24 24 static void nfs_do_free_delegation(struct nfs_delegation *delegation) 25 25 { 26 - if (delegation->cred) 27 - put_rpccred(delegation->cred); 28 26 kfree(delegation); 29 27 } 30 28 ··· 35 37 36 38 static void nfs_free_delegation(struct nfs_delegation *delegation) 37 39 { 40 + if (delegation->cred) { 41 + put_rpccred(delegation->cred); 42 + delegation->cred = NULL; 43 + } 38 44 call_rcu(&delegation->rcu, nfs_free_delegation_callback); 39 45 } 40 46
+20 -14
fs/nfs/direct.c
··· 407 407 pos += vec->iov_len; 408 408 } 409 409 410 + /* 411 + * If no bytes were started, return the error, and let the 412 + * generic layer handle the completion. 413 + */ 414 + if (requested_bytes == 0) { 415 + nfs_direct_req_release(dreq); 416 + return result < 0 ? result : -EIO; 417 + } 418 + 410 419 if (put_dreq(dreq)) 411 420 nfs_direct_complete(dreq); 412 - 413 - if (requested_bytes != 0) 414 - return 0; 415 - 416 - if (result < 0) 417 - return result; 418 - return -EIO; 421 + return 0; 419 422 } 420 423 421 424 static ssize_t nfs_direct_read(struct kiocb *iocb, const struct iovec *iov, ··· 844 841 pos += vec->iov_len; 845 842 } 846 843 844 + /* 845 + * If no bytes were started, return the error, and let the 846 + * generic layer handle the completion. 847 + */ 848 + if (requested_bytes == 0) { 849 + nfs_direct_req_release(dreq); 850 + return result < 0 ? result : -EIO; 851 + } 852 + 847 853 if (put_dreq(dreq)) 848 854 nfs_direct_write_complete(dreq, dreq->inode); 849 - 850 - if (requested_bytes != 0) 851 - return 0; 852 - 853 - if (result < 0) 854 - return result; 855 - return -EIO; 855 + return 0; 856 856 } 857 857 858 858 static ssize_t nfs_direct_write(struct kiocb *iocb, const struct iovec *iov,
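Both nfs/direct.c hunks above add the same early exit: if no bytes were started at all, release the request and return the error synchronously instead of sending a zero-byte request down the async completion path. A minimal model of that decision (the release call is shown only as a comment):

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the new error path in the nfs_direct_* schedule helpers:
 * with nothing in flight, hand back the first error (or -EIO) directly. */
static int finish_schedule(long requested_bytes, int result)
{
    if (requested_bytes == 0) {
        /* nfs_direct_req_release(dreq) would go here */
        return result < 0 ? result : -EIO;
    }
    /* with bytes in flight, completion runs asynchronously */
    return 0;
}
```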
+17 -9
fs/nfs/inode.c
··· 881 881 return ret; 882 882 } 883 883 884 - static void nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr) 884 + static unsigned long nfs_wcc_update_inode(struct inode *inode, struct nfs_fattr *fattr) 885 885 { 886 886 struct nfs_inode *nfsi = NFS_I(inode); 887 + unsigned long ret = 0; 887 888 888 889 if ((fattr->valid & NFS_ATTR_FATTR_PRECHANGE) 889 890 && (fattr->valid & NFS_ATTR_FATTR_CHANGE) ··· 892 891 nfsi->change_attr = fattr->change_attr; 893 892 if (S_ISDIR(inode->i_mode)) 894 893 nfsi->cache_validity |= NFS_INO_INVALID_DATA; 894 + ret |= NFS_INO_INVALID_ATTR; 895 895 } 896 896 /* If we have atomic WCC data, we may update some attributes */ 897 897 if ((fattr->valid & NFS_ATTR_FATTR_PRECTIME) 898 898 && (fattr->valid & NFS_ATTR_FATTR_CTIME) 899 - && timespec_equal(&inode->i_ctime, &fattr->pre_ctime)) 900 - memcpy(&inode->i_ctime, &fattr->ctime, sizeof(inode->i_ctime)); 899 + && timespec_equal(&inode->i_ctime, &fattr->pre_ctime)) { 900 + memcpy(&inode->i_ctime, &fattr->ctime, sizeof(inode->i_ctime)); 901 + ret |= NFS_INO_INVALID_ATTR; 902 + } 901 903 902 904 if ((fattr->valid & NFS_ATTR_FATTR_PREMTIME) 903 905 && (fattr->valid & NFS_ATTR_FATTR_MTIME) 904 906 && timespec_equal(&inode->i_mtime, &fattr->pre_mtime)) { 905 - memcpy(&inode->i_mtime, &fattr->mtime, sizeof(inode->i_mtime)); 906 - if (S_ISDIR(inode->i_mode)) 907 - nfsi->cache_validity |= NFS_INO_INVALID_DATA; 907 + memcpy(&inode->i_mtime, &fattr->mtime, sizeof(inode->i_mtime)); 908 + if (S_ISDIR(inode->i_mode)) 909 + nfsi->cache_validity |= NFS_INO_INVALID_DATA; 910 + ret |= NFS_INO_INVALID_ATTR; 908 911 } 909 912 if ((fattr->valid & NFS_ATTR_FATTR_PRESIZE) 910 913 && (fattr->valid & NFS_ATTR_FATTR_SIZE) 911 914 && i_size_read(inode) == nfs_size_to_loff_t(fattr->pre_size) 912 - && nfsi->npages == 0) 913 - i_size_write(inode, nfs_size_to_loff_t(fattr->size)); 915 + && nfsi->npages == 0) { 916 + i_size_write(inode, nfs_size_to_loff_t(fattr->size)); 917 + ret |= 
NFS_INO_INVALID_ATTR; 918 + } 919 + return ret; 914 920 } 915 921 916 922 /** ··· 1231 1223 | NFS_INO_REVAL_PAGECACHE); 1232 1224 1233 1225 /* Do atomic weak cache consistency updates */ 1234 - nfs_wcc_update_inode(inode, fattr); 1226 + invalid |= nfs_wcc_update_inode(inode, fattr); 1235 1227 1236 1228 /* More cache consistency checks */ 1237 1229 if (fattr->valid & NFS_ATTR_FATTR_CHANGE) {
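The nfs/inode.c change above turns nfs_wcc_update_inode() from a void helper into one that accumulates the attributes it updated and returns them as a bitmask the caller ORs into `invalid`. A small model of that accumulation pattern, with illustrative flag values rather than the kernel's NFS_INO_* constants:

```c
#include <assert.h>

/* Illustrative flag accumulation: each applied WCC update contributes
 * bits to the return value instead of being silently absorbed. */
#define INVALID_ATTR 0x1UL
#define INVALID_DATA 0x2UL

static unsigned long wcc_update(int change_matched, int is_dir)
{
    unsigned long ret = 0;

    if (change_matched) {        /* pre-op attribute matched: apply update */
        if (is_dir)
            ret |= INVALID_DATA; /* directory contents need revalidation */
        ret |= INVALID_ATTR;
    }
    return ret;                  /* caller: invalid |= wcc_update(...) */
}
```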
+1 -2
fs/nfs/internal.h
··· 133 133 extern struct nfs_client *nfs4_find_client_no_ident(const struct sockaddr *); 134 134 extern struct nfs_client *nfs4_find_client_ident(int); 135 135 extern struct nfs_client * 136 - nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *, 137 - int); 136 + nfs4_find_client_sessionid(const struct sockaddr *, struct nfs4_sessionid *); 138 137 extern struct nfs_server *nfs_create_server( 139 138 const struct nfs_parsed_mount_data *, 140 139 struct nfs_fh *);
+2 -2
fs/nfs/nfs3acl.c
··· 311 311 if (!nfs_server_capable(inode, NFS_CAP_ACLS)) 312 312 goto out; 313 313 314 - /* We are doing this here, because XDR marshalling can only 315 - return -ENOMEM. */ 314 + /* We are doing this here because XDR marshalling does not 315 + * return any results, it BUGs. */ 316 316 status = -ENOSPC; 317 317 if (acl != NULL && acl->a_count > NFS_ACL_MAX_ENTRIES) 318 318 goto out;
+5 -2
fs/nfs/nfs3xdr.c
··· 1328 1328 1329 1329 encode_nfs_fh3(xdr, NFS_FH(args->inode)); 1330 1330 encode_uint32(xdr, args->mask); 1331 - if (args->npages != 0) 1332 - xdr_write_pages(xdr, args->pages, 0, args->len); 1333 1331 1334 1332 base = req->rq_slen; 1333 + if (args->npages != 0) 1334 + xdr_write_pages(xdr, args->pages, 0, args->len); 1335 + else 1336 + xdr_reserve_space(xdr, NFS_ACL_INLINE_BUFSIZE); 1337 + 1335 1338 error = nfsacl_encode(xdr->buf, base, args->inode, 1336 1339 (args->mask & NFS_ACL) ? 1337 1340 args->acl_access : NULL, 1, 0);
+7 -2
fs/nfs/nfs4filelayoutdev.c
··· 214 214 215 215 /* ipv6 length plus port is legal */ 216 216 if (rlen > INET6_ADDRSTRLEN + 8) { 217 - dprintk("%s Invalid address, length %d\n", __func__, 217 + dprintk("%s: Invalid address, length %d\n", __func__, 218 218 rlen); 219 219 goto out_err; 220 220 } ··· 225 225 /* replace the port dots with dashes for the in4_pton() delimiter*/ 226 226 for (i = 0; i < 2; i++) { 227 227 char *res = strrchr(buf, '.'); 228 + if (!res) { 229 + dprintk("%s: Failed finding expected dots in port\n", 230 + __func__); 231 + goto out_free; 232 + } 228 233 *res = '-'; 229 234 } 230 235 ··· 245 240 port = htons((tmp[0] << 8) | (tmp[1])); 246 241 247 242 ds = nfs4_pnfs_ds_add(inode, ip_addr, port); 248 - dprintk("%s Decoded address and port %s\n", __func__, buf); 243 + dprintk("%s: Decoded address and port %s\n", __func__, buf); 249 244 out_free: 250 245 kfree(buf); 251 246 out_err:
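The nfs4filelayoutdev.c fix above guards the strrchr() result before rewriting the last two dots of the "h1.h2.h3.h4.p1.p2" address as dashes for in4_pton(). A standalone sketch of that loop with the NULL check (error value is illustrative):

```c
#include <assert.h>
#include <string.h>

/* Sketch of the guarded dot-to-dash rewrite: a malformed address with
 * too few dots now fails cleanly instead of dereferencing NULL. */
static int dots_to_dashes(char *buf)
{
    for (int i = 0; i < 2; i++) {
        char *res = strrchr(buf, '.');
        if (!res)
            return -1;   /* malformed input: missing port separators */
        *res = '-';
    }
    return 0;
}
```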
+10 -20
fs/nfs/nfs4proc.c
··· 50 50 #include <linux/module.h> 51 51 #include <linux/sunrpc/bc_xprt.h> 52 52 #include <linux/xattr.h> 53 + #include <linux/utsname.h> 53 54 54 55 #include "nfs4_fs.h" 55 56 #include "delegation.h" ··· 4573 4572 *p = htonl((u32)clp->cl_boot_time.tv_nsec); 4574 4573 args.verifier = &verifier; 4575 4574 4576 - while (1) { 4577 - args.id_len = scnprintf(args.id, sizeof(args.id), 4578 - "%s/%s %u", 4579 - clp->cl_ipaddr, 4580 - rpc_peeraddr2str(clp->cl_rpcclient, 4581 - RPC_DISPLAY_ADDR), 4582 - clp->cl_id_uniquifier); 4575 + args.id_len = scnprintf(args.id, sizeof(args.id), 4576 + "%s/%s.%s/%u", 4577 + clp->cl_ipaddr, 4578 + init_utsname()->nodename, 4579 + init_utsname()->domainname, 4580 + clp->cl_rpcclient->cl_auth->au_flavor); 4583 4581 4584 - status = rpc_call_sync(clp->cl_rpcclient, &msg, 0); 4585 - 4586 - if (status != -NFS4ERR_CLID_INUSE) 4587 - break; 4588 - 4589 - if (signalled()) 4590 - break; 4591 - 4592 - if (++clp->cl_id_uniquifier == 0) 4593 - break; 4594 - } 4595 - 4596 - status = nfs4_check_cl_exchange_flags(clp->cl_exchange_flags); 4582 + status = rpc_call_sync(clp->cl_rpcclient, &msg, 0); 4583 + if (!status) 4584 + status = nfs4_check_cl_exchange_flags(clp->cl_exchange_flags); 4597 4585 dprintk("<-- %s status= %d\n", __func__, status); 4598 4586 return status; 4599 4587 }
-6
fs/nfs/nfs4state.c
··· 232 232 status = nfs4_proc_create_session(clp); 233 233 if (status != 0) 234 234 goto out; 235 - status = nfs4_set_callback_sessionid(clp); 236 - if (status != 0) { 237 - printk(KERN_WARNING "Sessionid not set. No callback service\n"); 238 - nfs_callback_down(1); 239 - status = 0; 240 - } 241 235 nfs41_setup_state_renewal(clp); 242 236 nfs_mark_client_ready(clp, NFS_CS_READY); 243 237 out:
+3 -6
fs/nfs/nfs4xdr.c
··· 6086 6086 __be32 *p = xdr_inline_decode(xdr, 4); 6087 6087 if (unlikely(!p)) 6088 6088 goto out_overflow; 6089 - if (!ntohl(*p++)) { 6089 + if (*p == xdr_zero) { 6090 6090 p = xdr_inline_decode(xdr, 4); 6091 6091 if (unlikely(!p)) 6092 6092 goto out_overflow; 6093 - if (!ntohl(*p++)) 6093 + if (*p == xdr_zero) 6094 6094 return -EAGAIN; 6095 6095 entry->eof = 1; 6096 6096 return -EBADCOOKIE; ··· 6101 6101 goto out_overflow; 6102 6102 entry->prev_cookie = entry->cookie; 6103 6103 p = xdr_decode_hyper(p, &entry->cookie); 6104 - entry->len = ntohl(*p++); 6104 + entry->len = be32_to_cpup(p); 6105 6105 6106 6106 p = xdr_inline_decode(xdr, entry->len); 6107 6107 if (unlikely(!p)) ··· 6131 6131 entry->d_type = DT_UNKNOWN; 6132 6132 if (entry->fattr->valid & NFS_ATTR_FATTR_TYPE) 6133 6133 entry->d_type = nfs_umode_to_dtype(entry->fattr->mode); 6134 - 6135 - if (verify_attr_len(xdr, p, len) < 0) 6136 - goto out_overflow; 6137 6134 6138 6135 return 0; 6139 6136
+1 -1
fs/nfs/pnfs.c
··· 951 951 { 952 952 struct pnfs_deviceid_cache *local = clp->cl_devid_cache; 953 953 954 - dprintk("--> %s cl_devid_cache %p\n", __func__, clp->cl_devid_cache); 954 + dprintk("--> %s ({%d})\n", __func__, atomic_read(&local->dc_ref)); 955 955 if (atomic_dec_and_lock(&local->dc_ref, &clp->cl_lock)) { 956 956 int i; 957 957 /* Verify cache is empty */
+1 -1
fs/nfs/write.c
··· 932 932 while (!list_empty(&list)) { 933 933 data = list_entry(list.next, struct nfs_write_data, pages); 934 934 list_del(&data->pages); 935 - nfs_writedata_release(data); 935 + nfs_writedata_free(data); 936 936 } 937 937 nfs_redirty_request(req); 938 938 return -ENOMEM;
+41 -13
fs/nfs_common/nfsacl.c
··· 42 42 gid_t gid; 43 43 }; 44 44 45 + struct nfsacl_simple_acl { 46 + struct posix_acl acl; 47 + struct posix_acl_entry ace[4]; 48 + }; 49 + 45 50 static int 46 51 xdr_nfsace_encode(struct xdr_array2_desc *desc, void *elem) 47 52 { ··· 77 72 return 0; 78 73 } 79 74 80 - unsigned int 81 - nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode, 82 - struct posix_acl *acl, int encode_entries, int typeflag) 75 + /** 76 + * nfsacl_encode - Encode an NFSv3 ACL 77 + * 78 + * @buf: destination xdr_buf to contain XDR encoded ACL 79 + * @base: byte offset in xdr_buf where XDR'd ACL begins 80 + * @inode: inode of file whose ACL this is 81 + * @acl: posix_acl to encode 82 + * @encode_entries: whether to encode ACEs as well 83 + * @typeflag: ACL type: NFS_ACL_DEFAULT or zero 84 + * 85 + * Returns size of encoded ACL in bytes or a negative errno value. 86 + */ 87 + int nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode, 88 + struct posix_acl *acl, int encode_entries, int typeflag) 83 89 { 84 90 int entries = (acl && acl->a_count) ? max_t(int, acl->a_count, 4) : 0; 85 91 struct nfsacl_encode_desc nfsacl_desc = { ··· 104 88 .uid = inode->i_uid, 105 89 .gid = inode->i_gid, 106 90 }; 91 + struct nfsacl_simple_acl aclbuf; 107 92 int err; 108 - struct posix_acl *acl2 = NULL; 109 93 110 94 if (entries > NFS_ACL_MAX_ENTRIES || 111 95 xdr_encode_word(buf, base, entries)) 112 96 return -EINVAL; 113 97 if (encode_entries && acl && acl->a_count == 3) { 114 - /* Fake up an ACL_MASK entry. */ 115 - acl2 = posix_acl_alloc(4, GFP_KERNEL); 116 - if (!acl2) 117 - return -ENOMEM; 98 + struct posix_acl *acl2 = &aclbuf.acl; 99 + 100 + /* Avoid the use of posix_acl_alloc(). nfsacl_encode() is 101 + * invoked in contexts where a memory allocation failure is 102 + * fatal. Fortunately this fake ACL is small enough to 103 + * construct on the stack. 
*/ 104 + memset(acl2, 0, sizeof(acl2)); 105 + posix_acl_init(acl2, 4); 106 + 118 107 /* Insert entries in canonical order: other orders seem 119 108 to confuse Solaris VxFS. */ 120 109 acl2->a_entries[0] = acl->a_entries[0]; /* ACL_USER_OBJ */ ··· 130 109 nfsacl_desc.acl = acl2; 131 110 } 132 111 err = xdr_encode_array2(buf, base + 4, &nfsacl_desc.desc); 133 - if (acl2) 134 - posix_acl_release(acl2); 135 112 if (!err) 136 113 err = 8 + nfsacl_desc.desc.elem_size * 137 114 nfsacl_desc.desc.array_len; ··· 243 224 return 0; 244 225 } 245 226 246 - unsigned int 247 - nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt, 248 - struct posix_acl **pacl) 227 + /** 228 + * nfsacl_decode - Decode an NFSv3 ACL 229 + * 230 + * @buf: xdr_buf containing XDR'd ACL data to decode 231 + * @base: byte offset in xdr_buf where XDR'd ACL begins 232 + * @aclcnt: count of ACEs in decoded posix_acl 233 + * @pacl: buffer in which to place decoded posix_acl 234 + * 235 + * Returns the length of the decoded ACL in bytes, or a negative errno value. 236 + */ 237 + int nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt, 238 + struct posix_acl **pacl) 249 239 { 250 240 struct nfsacl_decode_desc nfsacl_desc = { 251 241 .desc = {
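The nfsacl.c change above introduces `struct nfsacl_simple_acl`, which places fixed backing storage for four entries directly after the ACL header so the fake ACL_MASK entry can be built on the stack, avoiding an allocation on a path where ENOMEM would be fatal. A simplified userspace model of that layout trick (stand-in types for posix_acl; the trailing `[0]` member is the GNU zero-length-array idiom the kernel relies on):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative model: the header's trailing array "runs into" the
 * fixed storage placed immediately after it in the wrapper struct. */
struct ace { short tag; unsigned int perm; };
struct acl_hdr { int count; struct ace entries[0]; };
struct simple_acl {
    struct acl_hdr acl;
    struct ace ace[4];   /* backing storage for acl.entries */
};

static struct acl_hdr *simple_acl_init(struct simple_acl *buf)
{
    memset(buf, 0, sizeof(*buf));   /* stack buffer: no allocation needed */
    buf->acl.count = 4;
    return &buf->acl;
}
```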
+7 -4
fs/ntfs/mft.c
··· 1 1 /** 2 2 * mft.c - NTFS kernel mft record operations. Part of the Linux-NTFS project. 3 3 * 4 - * Copyright (c) 2001-2006 Anton Altaparmakov 4 + * Copyright (c) 2001-2011 Anton Altaparmakov and Tuxera Inc. 5 5 * Copyright (c) 2002 Richard Russon 6 6 * 7 7 * This program/include file is free software; you can redistribute it and/or ··· 2576 2576 flush_dcache_page(page); 2577 2577 SetPageUptodate(page); 2578 2578 if (base_ni) { 2579 + MFT_RECORD *m_tmp; 2580 + 2579 2581 /* 2580 2582 * Setup the base mft record in the extent mft record. This 2581 2583 * completes initialization of the allocated extent mft record ··· 2590 2588 * attach it to the base inode @base_ni and map, pin, and lock 2591 2589 * its, i.e. the allocated, mft record. 2592 2590 */ 2593 - m = map_extent_mft_record(base_ni, bit, &ni); 2594 - if (IS_ERR(m)) { 2591 + m_tmp = map_extent_mft_record(base_ni, bit, &ni); 2592 + if (IS_ERR(m_tmp)) { 2595 2593 ntfs_error(vol->sb, "Failed to map allocated extent " 2596 2594 "mft record 0x%llx.", (long long)bit); 2597 - err = PTR_ERR(m); 2595 + err = PTR_ERR(m_tmp); 2598 2596 /* Set the mft record itself not in use. */ 2599 2597 m->flags &= cpu_to_le16( 2600 2598 ~le16_to_cpu(MFT_RECORD_IN_USE)); ··· 2605 2603 ntfs_unmap_page(page); 2606 2604 goto undo_mftbmp_alloc; 2607 2605 } 2606 + BUG_ON(m != m_tmp); 2608 2607 /* 2609 2608 * Make sure the allocated mft record is written out to disk. 2610 2609 * No need to set the inode dirty because the caller is going
+13 -4
fs/posix_acl.c
··· 22 22 23 23 #include <linux/errno.h> 24 24 25 + EXPORT_SYMBOL(posix_acl_init); 25 26 EXPORT_SYMBOL(posix_acl_alloc); 26 27 EXPORT_SYMBOL(posix_acl_clone); 27 28 EXPORT_SYMBOL(posix_acl_valid); ··· 33 32 EXPORT_SYMBOL(posix_acl_permission); 34 33 35 34 /* 35 + * Init a fresh posix_acl 36 + */ 37 + void 38 + posix_acl_init(struct posix_acl *acl, int count) 39 + { 40 + atomic_set(&acl->a_refcount, 1); 41 + acl->a_count = count; 42 + } 43 + 44 + /* 36 45 * Allocate a new ACL with the specified number of entries. 37 46 */ 38 47 struct posix_acl * ··· 51 40 const size_t size = sizeof(struct posix_acl) + 52 41 count * sizeof(struct posix_acl_entry); 53 42 struct posix_acl *acl = kmalloc(size, flags); 54 - if (acl) { 55 - atomic_set(&acl->a_refcount, 1); 56 - acl->a_count = count; 57 - } 43 + if (acl) 44 + posix_acl_init(acl, count); 58 45 return acl; 59 46 } 60 47
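The posix_acl.c refactor above extracts the refcount/count setup from posix_acl_alloc() into posix_acl_init(), so callers with preallocated (for example on-stack) storage can share it. A minimal userspace model of that split (simplified types, no kernel atomics):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the alloc/init split: init is usable on any storage,
 * and the allocator becomes a thin wrapper around it. */
struct acl { int refcount; int count; };

static void acl_init(struct acl *a, int count)
{
    a->refcount = 1;
    a->count = count;
}

static struct acl *acl_alloc(int count)
{
    struct acl *a = malloc(sizeof(*a));
    if (a)
        acl_init(a, count);   /* same path as stack-based callers */
    return a;
}
```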
+18 -2
fs/xfs/linux-2.6/xfs_ioctl.c
··· 985 985 986 986 /* 987 987 * Extent size must be a multiple of the appropriate block 988 - * size, if set at all. 988 + * size, if set at all. It must also be smaller than the 989 + * maximum extent size supported by the filesystem. 990 + * 991 + * Also, for non-realtime files, limit the extent size hint to 992 + * half the size of the AGs in the filesystem so alignment 993 + * doesn't result in extents larger than an AG. 989 994 */ 990 995 if (fa->fsx_extsize != 0) { 991 - xfs_extlen_t size; 996 + xfs_extlen_t size; 997 + xfs_fsblock_t extsize_fsb; 998 + 999 + extsize_fsb = XFS_B_TO_FSB(mp, fa->fsx_extsize); 1000 + if (extsize_fsb > MAXEXTLEN) { 1001 + code = XFS_ERROR(EINVAL); 1002 + goto error_return; 1003 + } 992 1004 993 1005 if (XFS_IS_REALTIME_INODE(ip) || 994 1006 ((mask & FSX_XFLAGS) && ··· 1009 997 mp->m_sb.sb_blocklog; 1010 998 } else { 1011 999 size = mp->m_sb.sb_blocksize; 1000 + if (extsize_fsb > mp->m_sb.sb_agblocks / 2) { 1001 + code = XFS_ERROR(EINVAL); 1002 + goto error_return; 1003 + } 1012 1004 } 1013 1005 1014 1006 if (fa->fsx_extsize % size) {
+21 -25
fs/xfs/quota/xfs_qm.c
··· 1863 1863 xfs_dquot_t *dqpout; 1864 1864 xfs_dquot_t *dqp; 1865 1865 int restarts; 1866 + int startagain; 1866 1867 1867 1868 restarts = 0; 1868 1869 dqpout = NULL; 1869 1870 1870 1871 /* lockorder: hashchainlock, freelistlock, mplistlock, dqlock, dqflock */ 1871 - startagain: 1872 + again: 1873 + startagain = 0; 1872 1874 mutex_lock(&xfs_Gqm->qm_dqfrlist_lock); 1873 1875 1874 1876 list_for_each_entry(dqp, &xfs_Gqm->qm_dqfrlist, q_freelist) { ··· 1887 1885 ASSERT(! (dqp->dq_flags & XFS_DQ_INACTIVE)); 1888 1886 1889 1887 trace_xfs_dqreclaim_want(dqp); 1890 - 1891 - xfs_dqunlock(dqp); 1892 - mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock); 1893 - if (++restarts >= XFS_QM_RECLAIM_MAX_RESTARTS) 1894 - return NULL; 1895 1888 XQM_STATS_INC(xqmstats.xs_qm_dqwants); 1896 - goto startagain; 1889 + restarts++; 1890 + startagain = 1; 1891 + goto dqunlock; 1897 1892 } 1898 1893 1899 1894 /* ··· 1905 1906 ASSERT(list_empty(&dqp->q_mplist)); 1906 1907 list_del_init(&dqp->q_freelist); 1907 1908 xfs_Gqm->qm_dqfrlist_cnt--; 1908 - xfs_dqunlock(dqp); 1909 1909 dqpout = dqp; 1910 1910 XQM_STATS_INC(xqmstats.xs_qm_dqinact_reclaims); 1911 - break; 1911 + goto dqunlock; 1912 1912 } 1913 1913 1914 1914 ASSERT(dqp->q_hash); 1915 1915 ASSERT(!list_empty(&dqp->q_mplist)); 1916 1916 1917 1917 /* 1918 - * Try to grab the flush lock. If this dquot is in the process of 1919 - * getting flushed to disk, we don't want to reclaim it. 1918 + * Try to grab the flush lock. If this dquot is in the process 1919 + * of getting flushed to disk, we don't want to reclaim it. 
1920 1920 */ 1921 - if (!xfs_dqflock_nowait(dqp)) { 1922 - xfs_dqunlock(dqp); 1923 - continue; 1924 - } 1921 + if (!xfs_dqflock_nowait(dqp)) 1922 + goto dqunlock; 1925 1923 1926 1924 /* 1927 1925 * We have the flush lock so we know that this is not in the ··· 1940 1944 xfs_fs_cmn_err(CE_WARN, mp, 1941 1945 "xfs_qm_dqreclaim: dquot %p flush failed", dqp); 1942 1946 } 1943 - xfs_dqunlock(dqp); /* dqflush unlocks dqflock */ 1944 - continue; 1947 + goto dqunlock; 1945 1948 } 1946 1949 1947 1950 /* ··· 1962 1967 */ 1963 1968 if (!mutex_trylock(&mp->m_quotainfo->qi_dqlist_lock)) { 1964 1969 restarts++; 1965 - mutex_unlock(&dqp->q_hash->qh_lock); 1966 - xfs_dqfunlock(dqp); 1967 - xfs_dqunlock(dqp); 1968 - mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock); 1969 - if (restarts++ >= XFS_QM_RECLAIM_MAX_RESTARTS) 1970 - return NULL; 1971 - goto startagain; 1970 + startagain = 1; 1971 + goto qhunlock; 1972 1972 } 1973 1973 1974 1974 ASSERT(dqp->q_nrefs == 0); ··· 1976 1986 xfs_Gqm->qm_dqfrlist_cnt--; 1977 1987 dqpout = dqp; 1978 1988 mutex_unlock(&mp->m_quotainfo->qi_dqlist_lock); 1989 + qhunlock: 1979 1990 mutex_unlock(&dqp->q_hash->qh_lock); 1980 1991 dqfunlock: 1981 1992 xfs_dqfunlock(dqp); 1993 + dqunlock: 1982 1994 xfs_dqunlock(dqp); 1983 1995 if (dqpout) 1984 1996 break; 1985 1997 if (restarts >= XFS_QM_RECLAIM_MAX_RESTARTS) 1986 - return NULL; 1998 + break; 1999 + if (startagain) { 2000 + mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock); 2001 + goto again; 2002 + } 1987 2003 } 1988 2004 mutex_unlock(&xfs_Gqm->qm_dqfrlist_lock); 1989 2005 return dqpout;
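The xfs_qm.c rework above funnels the loop's many exit points through common unlock labels and uses a `startagain` flag checked at the bottom to decide whether to rescan, instead of duplicating the unlock sequence at each `goto startagain` site. A much-simplified model of that restart shape (the list and locks are stand-ins):

```c
#include <assert.h>

/* Illustrative restart pattern: drop-and-rescan is signalled by a flag
 * and handled in one place at the bottom of the loop body. */
static int reclaim_first_idle(int *busy, int n, int max_restarts)
{
    int restarts = 0;
again:
    /* lock(freelist); */
    for (int i = 0; i < n; i++) {
        int startagain = 0;

        if (busy[i] > 1) {        /* still referenced: drop ref, rescan */
            busy[i]--;
            restarts++;
            startagain = 1;
        } else if (busy[i] == 1) {
            busy[i] = 0;          /* reclaimed */
            /* unlock(freelist); */
            return i;
        }
        if (restarts >= max_restarts)
            break;
        if (startagain) {
            /* unlock(freelist); */
            goto again;
        }
    }
    /* unlock(freelist); */
    return -1;
}
```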
+16
fs/xfs/xfs_alloc.h
··· 75 75 #define XFS_ALLOC_SET_ASIDE(mp) (4 + ((mp)->m_sb.sb_agcount * 4)) 76 76 77 77 /* 78 + * When deciding how much space to allocate out of an AG, we limit the 79 + * allocation maximum size to the size the AG. However, we cannot use all the 80 + * blocks in the AG - some are permanently used by metadata. These 81 + * blocks are generally: 82 + * - the AG superblock, AGF, AGI and AGFL 83 + * - the AGF (bno and cnt) and AGI btree root blocks 84 + * - 4 blocks on the AGFL according to XFS_ALLOC_SET_ASIDE() limits 85 + * 86 + * The AG headers are sector sized, so the amount of space they take up is 87 + * dependent on filesystem geometry. The others are all single blocks. 88 + */ 89 + #define XFS_ALLOC_AG_MAX_USABLE(mp) \ 90 + ((mp)->m_sb.sb_agblocks - XFS_BB_TO_FSB(mp, XFS_FSS_TO_BB(mp, 4)) - 7) 91 + 92 + 93 + /* 78 94 * Argument structure for xfs_alloc routines. 79 95 * This is turned into a structure to avoid having 20 arguments passed 80 96 * down several levels of the stack.
+45 -16
fs/xfs/xfs_bmap.c
··· 1038 1038 * Filling in the middle part of a previous delayed allocation. 1039 1039 * Contiguity is impossible here. 1040 1040 * This case is avoided almost all the time. 1041 + * 1042 + * We start with a delayed allocation: 1043 + * 1044 + * +ddddddddddddddddddddddddddddddddddddddddddddddddddddddd+ 1045 + * PREV @ idx 1046 + * 1047 + * and we are allocating: 1048 + * +rrrrrrrrrrrrrrrrr+ 1049 + * new 1050 + * 1051 + * and we set it up for insertion as: 1052 + * +ddddddddddddddddddd+rrrrrrrrrrrrrrrrr+ddddddddddddddddd+ 1053 + * new 1054 + * PREV @ idx LEFT RIGHT 1055 + * inserted at idx + 1 1041 1056 */ 1042 1057 temp = new->br_startoff - PREV.br_startoff; 1043 - trace_xfs_bmap_pre_update(ip, idx, 0, _THIS_IP_); 1044 - xfs_bmbt_set_blockcount(ep, temp); 1045 - r[0] = *new; 1046 - r[1].br_state = PREV.br_state; 1047 - r[1].br_startblock = 0; 1048 - r[1].br_startoff = new_endoff; 1049 1058 temp2 = PREV.br_startoff + PREV.br_blockcount - new_endoff; 1050 - r[1].br_blockcount = temp2; 1051 - xfs_iext_insert(ip, idx + 1, 2, &r[0], state); 1059 + trace_xfs_bmap_pre_update(ip, idx, 0, _THIS_IP_); 1060 + xfs_bmbt_set_blockcount(ep, temp); /* truncate PREV */ 1061 + LEFT = *new; 1062 + RIGHT.br_state = PREV.br_state; 1063 + RIGHT.br_startblock = nullstartblock( 1064 + (int)xfs_bmap_worst_indlen(ip, temp2)); 1065 + RIGHT.br_startoff = new_endoff; 1066 + RIGHT.br_blockcount = temp2; 1067 + /* insert LEFT (r[0]) and RIGHT (r[1]) at the same time */ 1068 + xfs_iext_insert(ip, idx + 1, 2, &LEFT, state); 1052 1069 ip->i_df.if_lastex = idx + 1; 1053 1070 ip->i_d.di_nextents++; 1054 1071 if (cur == NULL) ··· 2447 2430 startag = ag = 0; 2448 2431 2449 2432 pag = xfs_perag_get(mp, ag); 2450 - while (*blen < ap->alen) { 2433 + while (*blen < args->maxlen) { 2451 2434 if (!pag->pagf_init) { 2452 2435 error = xfs_alloc_pagf_init(mp, args->tp, ag, 2453 2436 XFS_ALLOC_FLAG_TRYLOCK); ··· 2469 2452 notinit = 1; 2470 2453 2471 2454 if (xfs_inode_is_filestream(ap->ip)) { 2472 - if (*blen >= 
ap->alen) 2455 + if (*blen >= args->maxlen) 2473 2456 break; 2474 2457 2475 2458 if (ap->userdata) { ··· 2515 2498 * If the best seen length is less than the request 2516 2499 * length, use the best as the minimum. 2517 2500 */ 2518 - else if (*blen < ap->alen) 2501 + else if (*blen < args->maxlen) 2519 2502 args->minlen = *blen; 2520 2503 /* 2521 - * Otherwise we've seen an extent as big as alen, 2504 + * Otherwise we've seen an extent as big as maxlen, 2522 2505 * use that as the minimum. 2523 2506 */ 2524 2507 else 2525 - args->minlen = ap->alen; 2508 + args->minlen = args->maxlen; 2526 2509 2527 2510 /* 2528 2511 * set the failure fallback case to look in the selected ··· 2590 2573 args.tp = ap->tp; 2591 2574 args.mp = mp; 2592 2575 args.fsbno = ap->rval; 2593 - args.maxlen = MIN(ap->alen, mp->m_sb.sb_agblocks); 2576 + 2577 + /* Trim the allocation back to the maximum an AG can fit. */ 2578 + args.maxlen = MIN(ap->alen, XFS_ALLOC_AG_MAX_USABLE(mp)); 2594 2579 args.firstblock = ap->firstblock; 2595 2580 blen = 0; 2596 2581 if (nullfb) { ··· 2640 2621 /* 2641 2622 * Adjust for alignment 2642 2623 */ 2643 - if (blen > args.alignment && blen <= ap->alen) 2624 + if (blen > args.alignment && blen <= args.maxlen) 2644 2625 args.minlen = blen - args.alignment; 2645 2626 args.minalignslop = 0; 2646 2627 } else { ··· 2659 2640 * of minlen+alignment+slop doesn't go up 2660 2641 * between the calls. 2661 2642 */ 2662 - if (blen > mp->m_dalign && blen <= ap->alen) 2643 + if (blen > mp->m_dalign && blen <= args.maxlen) 2663 2644 nextminlen = blen - mp->m_dalign; 2664 2645 else 2665 2646 nextminlen = args.minlen; ··· 4504 4485 /* Figure out the extent size, adjust alen */ 4505 4486 extsz = xfs_get_extsz_hint(ip); 4506 4487 if (extsz) { 4488 + /* 4489 + * make sure we don't exceed a single 4490 + * extent length when we align the 4491 + * extent by reducing length we are 4492 + * going to allocate by the maximum 4493 + * amount extent size aligment may 4494 + * require. 
4495 + */ 4496 + alen = XFS_FILBLKS_MIN(len, 4497 + MAXEXTLEN - (2 * extsz - 1)); 4507 4498 error = xfs_bmap_extsize_align(mp, 4508 4499 &got, &prev, extsz, 4509 4500 rt, eof,
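The xfs_bmap.c comment above explains why the requested length is trimmed to `MAXEXTLEN - (2 * extsz - 1)`: rounding an extent outward to extent-size boundaries can grow it by almost two extent sizes. A hypothetical userspace sketch of that arithmetic (illustrative names only, not XFS APIs):

```c
#include <assert.h>

/* Rounding [off, off + len) outward to 'ext'-block boundaries can grow the
 * range by at most (ext - 1) at the start plus (ext - 1) at the end, i.e.
 * 2 * ext - 2 blocks total. Reserving 2 * extsz - 1 of headroom therefore
 * keeps the aligned result within MAXEXTLEN. */
unsigned long long aligned_extent_len(unsigned long long off,
                                      unsigned long long len,
                                      unsigned long long ext)
{
        unsigned long long start = off - (off % ext);                 /* round start down */
        unsigned long long end = ((off + len + ext - 1) / ext) * ext; /* round end up */

        return end - start;
}
```

In the worst case (`off` one block short of a boundary, `off + len` one block past one), the aligned length is `len + 2 * ext - 2`, which the reserved slack covers.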
+7 -5
fs/xfs/xfs_buf_item.c
··· 427 427 428 428 if (remove) { 429 429 /* 430 - * We have to remove the log item from the transaction 431 - * as we are about to release our reference to the 432 - * buffer. If we don't, the unlock that occurs later 433 - * in xfs_trans_uncommit() will ry to reference the 430 + * If we are in a transaction context, we have to 431 + * remove the log item from the transaction as we are 432 + * about to release our reference to the buffer. If we 433 + * don't, the unlock that occurs later in 434 + * xfs_trans_uncommit() will try to reference the 434 435 * buffer which we no longer have a hold on. 435 436 */ 436 - xfs_trans_del_item(lip); 437 + if (lip->li_desc) 438 + xfs_trans_del_item(lip); 437 439 438 440 /* 439 441 * Since the transaction no longer refers to the buffer,
+2 -1
fs/xfs/xfs_extfree_item.c
··· 138 138 139 139 if (remove) { 140 140 ASSERT(!(lip->li_flags & XFS_LI_IN_AIL)); 141 - xfs_trans_del_item(lip); 141 + if (lip->li_desc) 142 + xfs_trans_del_item(lip); 142 143 xfs_efi_item_free(efip); 143 144 return; 144 145 }
+6 -1
fs/xfs/xfs_iomap.c
··· 337 337 int shift = 0; 338 338 int64_t freesp; 339 339 340 - alloc_blocks = XFS_B_TO_FSB(mp, ip->i_size); 340 + /* 341 + * rounddown_pow_of_two() returns an undefined result 342 + * if we pass in alloc_blocks = 0. Hence the "+ 1" to 343 + * ensure we always pass in a non-zero value. 344 + */ 345 + alloc_blocks = XFS_B_TO_FSB(mp, ip->i_size) + 1; 341 346 alloc_blocks = XFS_FILEOFF_MIN(MAXEXTLEN, 342 347 rounddown_pow_of_two(alloc_blocks)); 343 348
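The xfs_iomap.c comment above is the whole story: a round-down-to-power-of-two helper has no meaningful answer for 0, so the patch biases the input with `+ 1`. A plain userspace sketch of the behaviour being guarded (not the kernel's bit-twiddling implementation):

```c
/* Largest power of two <= x. The loop never terminates sensibly for the
 * "x == 0" case conceptually; like the kernel helper, the caller must
 * guarantee a non-zero argument, hence the "+ 1" in the patch above. */
unsigned long rounddown_pow2_sketch(unsigned long x)
{
        unsigned long p = 1;

        while (p <= x / 2)      /* assumes x != 0 */
                p *= 2;
        return p;
}
```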
+1 -1
fs/xfs/xfs_log.h
··· 191 191 192 192 xlog_tid_t xfs_log_get_trans_ident(struct xfs_trans *tp); 193 193 194 - int xfs_log_commit_cil(struct xfs_mount *mp, struct xfs_trans *tp, 194 + void xfs_log_commit_cil(struct xfs_mount *mp, struct xfs_trans *tp, 195 195 struct xfs_log_vec *log_vector, 196 196 xfs_lsn_t *commit_lsn, int flags); 197 197 bool xfs_log_item_in_current_chkpt(struct xfs_log_item *lip);
+6 -9
fs/xfs/xfs_log_cil.c
··· 543 543 544 544 error = xlog_write(log, &lvhdr, tic, &ctx->start_lsn, NULL, 0); 545 545 if (error) 546 - goto out_abort; 546 + goto out_abort_free_ticket; 547 547 548 548 /* 549 549 * now that we've written the checkpoint into the log, strictly ··· 569 569 } 570 570 spin_unlock(&cil->xc_cil_lock); 571 571 572 + /* xfs_log_done always frees the ticket on error. */ 572 573 commit_lsn = xfs_log_done(log->l_mp, tic, &commit_iclog, 0); 573 - if (error || commit_lsn == -1) 574 + if (commit_lsn == -1) 574 575 goto out_abort; 575 576 576 577 /* attach all the transactions w/ busy extents to iclog */ ··· 601 600 kmem_free(new_ctx); 602 601 return 0; 603 602 603 + out_abort_free_ticket: 604 + xfs_log_ticket_put(tic); 604 605 out_abort: 605 606 xlog_cil_committed(ctx, XFS_LI_ABORTED); 606 607 return XFS_ERROR(EIO); ··· 625 622 * background commit, returns without it held once background commits are 626 623 * allowed again. 627 624 */ 628 - int 625 + void 629 626 xfs_log_commit_cil( 630 627 struct xfs_mount *mp, 631 628 struct xfs_trans *tp, ··· 639 636 640 637 if (flags & XFS_TRANS_RELEASE_LOG_RES) 641 638 log_flags = XFS_LOG_REL_PERM_RESERV; 642 - 643 - if (XLOG_FORCED_SHUTDOWN(log)) { 644 - xlog_cil_free_logvec(log_vector); 645 - return XFS_ERROR(EIO); 646 - } 647 639 648 640 /* 649 641 * do all the hard work of formatting items (including memory ··· 699 701 */ 700 702 if (push) 701 703 xlog_cil_push(log, 0); 702 - return 0; 703 704 } 704 705 705 706 /*
+30 -11
fs/xfs/xfs_trans.c
··· 1446 1446 * Bulk operation version of xfs_trans_committed that takes a log vector of 1447 1447 * items to insert into the AIL. This uses bulk AIL insertion techniques to 1448 1448 * minimise lock traffic. 1449 + * 1450 + * If we are called with the aborted flag set, it is because a log write during 1451 + * a CIL checkpoint commit has failed. In this case, all the items in the 1452 + * checkpoint have already gone through IOP_COMMITED and IOP_UNLOCK, which 1453 + * means that checkpoint commit abort handling is treated exactly the same 1454 + * as an iclog write error even though we haven't started any IO yet. Hence in 1455 + * this case all we need to do is IOP_COMMITTED processing, followed by an 1456 + * IOP_UNPIN(aborted) call. 1449 1457 */ 1450 1458 void 1451 1459 xfs_trans_committed_bulk( ··· 1479 1471 /* item_lsn of -1 means the item was freed */ 1480 1472 if (XFS_LSN_CMP(item_lsn, (xfs_lsn_t)-1) == 0) 1481 1473 continue; 1474 + 1475 + /* 1476 + * if we are aborting the operation, no point in inserting the 1477 + * object into the AIL as we are in a shutdown situation. 1478 + */ 1479 + if (aborted) { 1480 + ASSERT(XFS_FORCED_SHUTDOWN(ailp->xa_mount)); 1481 + IOP_UNPIN(lip, 1); 1482 + continue; 1483 + } 1482 1484 1483 1485 if (item_lsn != commit_lsn) { 1484 1486 ··· 1521 1503 } 1522 1504 1523 1505 /* 1524 - * Called from the trans_commit code when we notice that 1525 - * the filesystem is in the middle of a forced shutdown. 1506 + * Called from the trans_commit code when we notice that the filesystem is in 1507 + * the middle of a forced shutdown. 1508 + * 1509 + * When we are called here, we have already pinned all the items in the 1510 + * transaction. However, neither IOP_COMMITTING or IOP_UNLOCK has been called 1511 + * so we can simply walk the items in the transaction, unpin them with an abort 1512 + * flag and then free the items. 
Note that unpinning the items can result in 1513 + * them being freed immediately, so we need to use a safe list traversal method 1514 + * here. 1526 1515 */ 1527 1516 STATIC void 1528 1517 xfs_trans_uncommit( 1529 1518 struct xfs_trans *tp, 1530 1519 uint flags) 1531 1520 { 1532 - struct xfs_log_item_desc *lidp; 1521 + struct xfs_log_item_desc *lidp, *n; 1533 1522 1534 - list_for_each_entry(lidp, &tp->t_items, lid_trans) { 1535 - /* 1536 - * Unpin all but those that aren't dirty. 1537 - */ 1523 + list_for_each_entry_safe(lidp, n, &tp->t_items, lid_trans) { 1538 1524 if (lidp->lid_flags & XFS_LID_DIRTY) 1539 1525 IOP_UNPIN(lidp->lid_item, 1); 1540 1526 } ··· 1755 1733 int flags) 1756 1734 { 1757 1735 struct xfs_log_vec *log_vector; 1758 - int error; 1759 1736 1760 1737 /* 1761 1738 * Get each log item to allocate a vector structure for ··· 1765 1744 if (!log_vector) 1766 1745 return ENOMEM; 1767 1746 1768 - error = xfs_log_commit_cil(mp, tp, log_vector, commit_lsn, flags); 1769 - if (error) 1770 - return error; 1747 + xfs_log_commit_cil(mp, tp, log_vector, commit_lsn, flags); 1771 1748 1772 1749 current_restore_flags_nested(&tp->t_pflags, PF_FSTRANS); 1773 1750 xfs_trans_free(tp);
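The xfs_trans.c change above switches to `list_for_each_entry_safe()` precisely because unpinning can free the current entry. A minimal standalone illustration of the same pattern (generic linked list, not the kernel's list API):

```c
#include <stdlib.h>

struct node {
        int v;
        struct node *next;
};

/* When the loop body may free the current entry, the successor must be
 * cached first: writing "n = n->next" in the for-step after free(n) would
 * read freed memory. Returns the number of nodes freed. */
int free_all(struct node *head)
{
        struct node *n, *next;
        int freed = 0;

        for (n = head; n; n = next) {
                next = n->next;         /* grab the successor before freeing */
                free(n);
                freed++;
        }
        return freed;
}
```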
+1 -1
include/linux/kernel.h
··· 588 588 589 589 /** 590 590 * BUILD_BUG_ON - break compile if a condition is true. 591 - * @cond: the condition which the compiler should know is false. 591 + * @condition: the condition which the compiler should know is false. 592 592 * 593 593 * If you have some code which relies on certain constants being equal, or 594 594 * other compile-time-evaluated condition, you should use BUILD_BUG_ON to
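The kernel-doc fix above concerns `BUILD_BUG_ON`, whose trick is worth a sketch: make the compiler reject the build when a condition that should be false turns out true. A minimal stand-in (not the kernel's actual macro, which has extra diagnostics):

```c
/* When cond is true, the array size becomes negative and compilation
 * fails; when it is false, the whole expression compiles away, so the
 * check costs nothing at run time. */
#define BUILD_BUG_ON_SKETCH(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

int packed_size_ok(void)
{
        BUILD_BUG_ON_SKETCH(sizeof(int) > sizeof(long)); /* false, so this compiles */
        return 1;
}
```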
+2 -2
include/linux/nfsacl.h
··· 51 51 return w; 52 52 } 53 53 54 - extern unsigned int 54 + extern int 55 55 nfsacl_encode(struct xdr_buf *buf, unsigned int base, struct inode *inode, 56 56 struct posix_acl *acl, int encode_entries, int typeflag); 57 - extern unsigned int 57 + extern int 58 58 nfsacl_decode(struct xdr_buf *buf, unsigned int base, unsigned int *aclcnt, 59 59 struct posix_acl **pacl); 60 60
+1
include/linux/posix_acl.h
··· 71 71 72 72 /* posix_acl.c */ 73 73 74 + extern void posix_acl_init(struct posix_acl *, int); 74 75 extern struct posix_acl *posix_acl_alloc(int, gfp_t); 75 76 extern struct posix_acl *posix_acl_clone(const struct posix_acl *, gfp_t); 76 77 extern int posix_acl_valid(const struct posix_acl *);
-13
include/linux/sunrpc/bc_xprt.h
··· 47 47 return 1; 48 48 return 0; 49 49 } 50 - static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp) 51 - { 52 - if (svc_is_backchannel(rqstp)) 53 - return (struct nfs4_sessionid *) 54 - rqstp->rq_server->sv_bc_xprt->xpt_bc_sid; 55 - return NULL; 56 - } 57 - 58 50 #else /* CONFIG_NFS_V4_1 */ 59 51 static inline int xprt_setup_backchannel(struct rpc_xprt *xprt, 60 52 unsigned int min_reqs) ··· 57 65 static inline int svc_is_backchannel(const struct svc_rqst *rqstp) 58 66 { 59 67 return 0; 60 - } 61 - 62 - static inline struct nfs4_sessionid *bc_xprt_sid(struct svc_rqst *rqstp) 63 - { 64 - return NULL; 65 68 } 66 69 67 70 static inline void xprt_free_bc_request(struct rpc_rqst *req)
-1
include/linux/sunrpc/svc_xprt.h
··· 77 77 size_t xpt_remotelen; /* length of address */ 78 78 struct rpc_wait_queue xpt_bc_pending; /* backchannel wait queue */ 79 79 struct list_head xpt_users; /* callbacks on free */ 80 - void *xpt_bc_sid; /* back channel session ID */ 81 80 82 81 struct net *xpt_net; 83 82 struct rpc_xprt *xpt_bc_xprt; /* NFSv4.1 backchannel */
+1
include/linux/usb/hcd.h
··· 112 112 /* Flags that get set only during HCD registration or removal. */ 113 113 unsigned rh_registered:1;/* is root hub registered? */ 114 114 unsigned rh_pollable:1; /* may we poll the root hub? */ 115 + unsigned msix_enabled:1; /* driver has MSI-X enabled? */ 115 116 116 117 /* The next flag is a stopgap, to be removed when all the HCDs 117 118 * support the new root-hub polling mechanism. */
+3
include/linux/usb/serial.h
··· 347 347 extern int usb_serial_handle_sysrq_char(struct usb_serial_port *port, 348 348 unsigned int ch); 349 349 extern int usb_serial_handle_break(struct usb_serial_port *port); 350 + extern void usb_serial_handle_dcd_change(struct usb_serial_port *usb_port, 351 + struct tty_struct *tty, 352 + unsigned int status); 350 353 351 354 352 355 extern int usb_serial_bus_register(struct usb_serial_driver *device);
+1
include/net/bluetooth/hci_core.h
··· 184 184 __u32 link_mode; 185 185 __u8 auth_type; 186 186 __u8 sec_level; 187 + __u8 pending_sec_level; 187 188 __u8 power_save; 188 189 __u16 disc_timeout; 189 190 unsigned long pend;
+5 -3
include/net/sch_generic.h
··· 445 445 { 446 446 __skb_queue_tail(list, skb); 447 447 sch->qstats.backlog += qdisc_pkt_len(skb); 448 - qdisc_bstats_update(sch, skb); 449 448 450 449 return NET_XMIT_SUCCESS; 451 450 } ··· 459 460 { 460 461 struct sk_buff *skb = __skb_dequeue(list); 461 462 462 - if (likely(skb != NULL)) 463 + if (likely(skb != NULL)) { 463 464 sch->qstats.backlog -= qdisc_pkt_len(skb); 465 + qdisc_bstats_update(sch, skb); 466 + } 464 467 465 468 return skb; 466 469 } ··· 475 474 static inline unsigned int __qdisc_queue_drop_head(struct Qdisc *sch, 476 475 struct sk_buff_head *list) 477 476 { 478 - struct sk_buff *skb = __qdisc_dequeue_head(sch, list); 477 + struct sk_buff *skb = __skb_dequeue(list); 479 478 480 479 if (likely(skb != NULL)) { 481 480 unsigned int len = qdisc_pkt_len(skb); 481 + sch->qstats.backlog -= len; 482 482 kfree_skb(skb); 483 483 return len; 484 484 }
+5 -8
kernel/sched_fair.c
··· 722 722 u64 now, delta; 723 723 unsigned long load = cfs_rq->load.weight; 724 724 725 - if (!cfs_rq) 725 + if (cfs_rq->tg == &root_task_group) 726 726 return; 727 727 728 - now = rq_of(cfs_rq)->clock; 728 + now = rq_of(cfs_rq)->clock_task; 729 729 delta = now - cfs_rq->load_stamp; 730 730 731 731 /* truncate load history at 4 idle periods */ ··· 829 829 struct task_group *tg; 830 830 struct sched_entity *se; 831 831 long shares; 832 - 833 - if (!cfs_rq) 834 - return; 835 832 836 833 tg = cfs_rq->tg; 837 834 se = tg->se[cpu_of(rq_of(cfs_rq))]; ··· 1429 1432 1430 1433 static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync) 1431 1434 { 1432 - unsigned long this_load, load; 1435 + s64 this_load, load; 1433 1436 int idx, this_cpu, prev_cpu; 1434 1437 unsigned long tl_per_task; 1435 1438 struct task_group *tg; ··· 1468 1471 * Otherwise check if either cpus are near enough in load to allow this 1469 1472 * task to be woken on this_cpu. 1470 1473 */ 1471 - if (this_load) { 1472 - unsigned long this_eff_load, prev_eff_load; 1474 + if (this_load > 0) { 1475 + s64 this_eff_load, prev_eff_load; 1473 1476 1474 1477 this_eff_load = 100; 1475 1478 this_eff_load *= power_of(prev_cpu);
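The `wake_affine()` change above moves the load variables from `unsigned long` to `s64`. The reason, illustrated generically: subtracting a larger unsigned value from a smaller one wraps to a huge positive number, so the old `if (this_load)` test fired even for effectively negative loads.

```c
#include <stdint.h>

int positive_unsigned(unsigned long a, unsigned long b)
{
        unsigned long diff = a - b;     /* wraps when b > a */

        return diff > 0;                /* "true" even for a < b */
}

int positive_signed(long a, long b)
{
        int64_t diff = (int64_t)a - (int64_t)b;

        return diff > 0;                /* behaves as intended */
}
```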
+2 -1
kernel/sys.c
··· 1385 1385 const struct cred *cred = current_cred(), *tcred; 1386 1386 1387 1387 tcred = __task_cred(task); 1388 - if ((cred->uid != tcred->euid || 1388 + if (current != task && 1389 + (cred->uid != tcred->euid || 1389 1390 cred->uid != tcred->suid || 1390 1391 cred->uid != tcred->uid || 1391 1392 cred->gid != tcred->egid ||
+3
lib/rbtree.c
··· 315 315 316 316 rb_augment_path(node, func, data); 317 317 } 318 + EXPORT_SYMBOL(rb_augment_insert); 318 319 319 320 /* 320 321 * before removing the node, find the deepest node on the rebalance path ··· 341 340 342 341 return deepest; 343 342 } 343 + EXPORT_SYMBOL(rb_augment_erase_begin); 344 344 345 345 /* 346 346 * after removal, update the tree to account for the removed entry ··· 352 350 if (node) 353 351 rb_augment_path(node, func, data); 354 352 } 353 + EXPORT_SYMBOL(rb_augment_erase_end); 355 354 356 355 /* 357 356 * This function returns the first node (in sort order) of the tree.
+5 -5
lib/textsearch.c
··· 13 13 * 14 14 * INTRODUCTION 15 15 * 16 - * The textsearch infrastructure provides text searching facitilies for 16 + * The textsearch infrastructure provides text searching facilities for 17 17 * both linear and non-linear data. Individual search algorithms are 18 18 * implemented in modules and chosen by the user. 19 19 * ··· 43 43 * to the algorithm to store persistent variables. 44 44 * (4) Core eventually resets the search offset and forwards the find() 45 45 * request to the algorithm. 46 - * (5) Algorithm calls get_next_block() provided by the user continously 46 + * (5) Algorithm calls get_next_block() provided by the user continuously 47 47 * to fetch the data to be searched in block by block. 48 48 * (6) Algorithm invokes finish() after the last call to get_next_block 49 49 * to clean up any leftovers from get_next_block. (Optional) ··· 58 58 * the pattern to look for and flags. As a flag, you can set TS_IGNORECASE 59 59 * to perform case insensitive matching. But it might slow down 60 60 * performance of algorithm, so you should use it at own your risk. 61 - * The returned configuration may then be used for an arbitary 61 + * The returned configuration may then be used for an arbitrary 62 62 * amount of times and even in parallel as long as a separate struct 63 63 * ts_state variable is provided to every instance. 64 64 * 65 65 * The actual search is performed by either calling textsearch_find_- 66 66 * continuous() for linear data or by providing an own get_next_block() 67 67 * implementation and calling textsearch_find(). Both functions return 68 - * the position of the first occurrence of the patern or UINT_MAX if 69 - * no match was found. Subsequent occurences can be found by calling 68 + * the position of the first occurrence of the pattern or UINT_MAX if 69 + * no match was found. Subsequent occurrences can be found by calling 70 70 * textsearch_next() regardless of the linearity of the data. 
71 71 * 72 72 * Once you're done using a configuration it must be given back via
+2 -4
mm/kmemleak-test.c
··· 75 75 * after the module is removed. 76 76 */ 77 77 for (i = 0; i < 10; i++) { 78 - elem = kmalloc(sizeof(*elem), GFP_KERNEL); 79 - pr_info("kmemleak: kmalloc(sizeof(*elem)) = %p\n", elem); 78 + elem = kzalloc(sizeof(*elem), GFP_KERNEL); 79 + pr_info("kmemleak: kzalloc(sizeof(*elem)) = %p\n", elem); 80 80 if (!elem) 81 81 return -ENOMEM; 82 - memset(elem, 0, sizeof(*elem)); 83 82 INIT_LIST_HEAD(&elem->list); 84 - 85 83 list_add_tail(&elem->list, &test_list); 86 84 } 87 85
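The kmemleak-test cleanup above folds `kmalloc()` plus `memset()` into a single `kzalloc()`. The userspace analogue, for illustration, is `calloc()`:

```c
#include <stdlib.h>

struct elem {
        int value;
        char buf[16];
};

/* calloc() hands back zero-filled memory, so the separate memset()
 * disappears -- the same shape as the kzalloc() change above. */
struct elem *alloc_elem(void)
{
        return calloc(1, sizeof(struct elem));
}
```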
+8 -5
mm/kmemleak.c
··· 113 113 #define BYTES_PER_POINTER sizeof(void *) 114 114 115 115 /* GFP bitmask for kmemleak internal allocations */ 116 - #define GFP_KMEMLEAK_MASK (GFP_KERNEL | GFP_ATOMIC) 116 + #define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \ 117 + __GFP_NORETRY | __GFP_NOMEMALLOC | \ 118 + __GFP_NOWARN) 117 119 118 120 /* scanning area inside a memory block */ 119 121 struct kmemleak_scan_area { ··· 513 511 struct kmemleak_object *object; 514 512 struct prio_tree_node *node; 515 513 516 - object = kmem_cache_alloc(object_cache, gfp & GFP_KMEMLEAK_MASK); 514 + object = kmem_cache_alloc(object_cache, gfp_kmemleak_mask(gfp)); 517 515 if (!object) { 518 - kmemleak_stop("Cannot allocate a kmemleak_object structure\n"); 516 + pr_warning("Cannot allocate a kmemleak_object structure\n"); 517 + kmemleak_disable(); 519 518 return NULL; 520 519 } 521 520 ··· 737 734 return; 738 735 } 739 736 740 - area = kmem_cache_alloc(scan_area_cache, gfp & GFP_KMEMLEAK_MASK); 737 + area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp)); 741 738 if (!area) { 742 - kmemleak_warn("Cannot allocate a scan area\n"); 739 + pr_warning("Cannot allocate a scan area\n"); 743 740 goto out; 744 741 } 745 742
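The kmemleak change above turns a fixed AND-mask into a function-like macro that also ORs in "fail fast, stay quiet" allocation flags. A sketch of that shape, with made-up bit values rather than the real `GFP_*` constants:

```c
/* Keep only the caller's allowed bits, then force the extra flags on.
 * The flag values here are illustrative, not the kernel's GFP bits. */
#define F_KERNEL     0x01u
#define F_ATOMIC     0x02u
#define F_NORETRY    0x10u
#define F_NOMEMALLOC 0x20u
#define F_NOWARN     0x40u

#define mask_gfp(gfp) ((((unsigned int)(gfp)) & (F_KERNEL | F_ATOMIC)) | \
                       F_NORETRY | F_NOMEMALLOC | F_NOWARN)
```

Unknown caller bits (the `0x80u` below) are stripped, while the forced flags always survive.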
+9 -7
net/bluetooth/hci_conn.c
··· 379 379 hci_conn_hold(acl); 380 380 381 381 if (acl->state == BT_OPEN || acl->state == BT_CLOSED) { 382 - acl->sec_level = sec_level; 382 + acl->sec_level = BT_SECURITY_LOW; 383 + acl->pending_sec_level = sec_level; 383 384 acl->auth_type = auth_type; 384 385 hci_acl_connect(acl); 385 - } else { 386 - if (acl->sec_level < sec_level) 387 - acl->sec_level = sec_level; 388 - if (acl->auth_type < auth_type) 389 - acl->auth_type = auth_type; 390 386 } 391 387 392 388 if (type == ACL_LINK) ··· 438 442 { 439 443 BT_DBG("conn %p", conn); 440 444 445 + if (conn->pending_sec_level > sec_level) 446 + sec_level = conn->pending_sec_level; 447 + 441 448 if (sec_level > conn->sec_level) 442 - conn->sec_level = sec_level; 449 + conn->pending_sec_level = sec_level; 443 450 else if (conn->link_mode & HCI_LM_AUTH) 444 451 return 1; 452 + 453 + /* Make sure we preserve an existing MITM requirement*/ 454 + auth_type |= (conn->auth_type & 0x01); 445 455 446 456 conn->auth_type = auth_type; 447 457
+4
net/bluetooth/hci_core.c
··· 1011 1011 1012 1012 destroy_workqueue(hdev->workqueue); 1013 1013 1014 + hci_dev_lock_bh(hdev); 1015 + hci_blacklist_clear(hdev); 1016 + hci_dev_unlock_bh(hdev); 1017 + 1014 1018 __hci_dev_put(hdev); 1015 1019 1016 1020 return 0;
+5 -4
net/bluetooth/hci_event.c
··· 692 692 if (conn->state != BT_CONFIG || !conn->out) 693 693 return 0; 694 694 695 - if (conn->sec_level == BT_SECURITY_SDP) 695 + if (conn->pending_sec_level == BT_SECURITY_SDP) 696 696 return 0; 697 697 698 698 /* Only request authentication for SSP connections or non-SSP 699 699 * devices with sec_level HIGH */ 700 700 if (!(hdev->ssp_mode > 0 && conn->ssp_mode > 0) && 701 - conn->sec_level != BT_SECURITY_HIGH) 701 + conn->pending_sec_level != BT_SECURITY_HIGH) 702 702 return 0; 703 703 704 704 return 1; ··· 1095 1095 1096 1096 conn = hci_conn_hash_lookup_handle(hdev, __le16_to_cpu(ev->handle)); 1097 1097 if (conn) { 1098 - if (!ev->status) 1098 + if (!ev->status) { 1099 1099 conn->link_mode |= HCI_LM_AUTH; 1100 - else 1100 + conn->sec_level = conn->pending_sec_level; 1101 + } else 1101 1102 conn->sec_level = BT_SECURITY_LOW; 1102 1103 1103 1104 clear_bit(HCI_CONN_AUTH_PEND, &conn->pend);
+37 -57
net/bluetooth/l2cap.c
··· 305 305 } 306 306 } 307 307 308 + static inline u8 l2cap_get_auth_type(struct sock *sk) 309 + { 310 + if (sk->sk_type == SOCK_RAW) { 311 + switch (l2cap_pi(sk)->sec_level) { 312 + case BT_SECURITY_HIGH: 313 + return HCI_AT_DEDICATED_BONDING_MITM; 314 + case BT_SECURITY_MEDIUM: 315 + return HCI_AT_DEDICATED_BONDING; 316 + default: 317 + return HCI_AT_NO_BONDING; 318 + } 319 + } else if (l2cap_pi(sk)->psm == cpu_to_le16(0x0001)) { 320 + if (l2cap_pi(sk)->sec_level == BT_SECURITY_LOW) 321 + l2cap_pi(sk)->sec_level = BT_SECURITY_SDP; 322 + 323 + if (l2cap_pi(sk)->sec_level == BT_SECURITY_HIGH) 324 + return HCI_AT_NO_BONDING_MITM; 325 + else 326 + return HCI_AT_NO_BONDING; 327 + } else { 328 + switch (l2cap_pi(sk)->sec_level) { 329 + case BT_SECURITY_HIGH: 330 + return HCI_AT_GENERAL_BONDING_MITM; 331 + case BT_SECURITY_MEDIUM: 332 + return HCI_AT_GENERAL_BONDING; 333 + default: 334 + return HCI_AT_NO_BONDING; 335 + } 336 + } 337 + } 338 + 308 339 /* Service level security */ 309 340 static inline int l2cap_check_security(struct sock *sk) 310 341 { 311 342 struct l2cap_conn *conn = l2cap_pi(sk)->conn; 312 343 __u8 auth_type; 313 344 314 - if (l2cap_pi(sk)->psm == cpu_to_le16(0x0001)) { 315 - if (l2cap_pi(sk)->sec_level == BT_SECURITY_HIGH) 316 - auth_type = HCI_AT_NO_BONDING_MITM; 317 - else 318 - auth_type = HCI_AT_NO_BONDING; 319 - 320 - if (l2cap_pi(sk)->sec_level == BT_SECURITY_LOW) 321 - l2cap_pi(sk)->sec_level = BT_SECURITY_SDP; 322 - } else { 323 - switch (l2cap_pi(sk)->sec_level) { 324 - case BT_SECURITY_HIGH: 325 - auth_type = HCI_AT_GENERAL_BONDING_MITM; 326 - break; 327 - case BT_SECURITY_MEDIUM: 328 - auth_type = HCI_AT_GENERAL_BONDING; 329 - break; 330 - default: 331 - auth_type = HCI_AT_NO_BONDING; 332 - break; 333 - } 334 - } 345 + auth_type = l2cap_get_auth_type(sk); 335 346 336 347 return hci_conn_security(conn->hcon, l2cap_pi(sk)->sec_level, 337 348 auth_type); ··· 1079 1068 1080 1069 err = -ENOMEM; 1081 1070 1082 - if (sk->sk_type == SOCK_RAW) { 
1083 - switch (l2cap_pi(sk)->sec_level) { 1084 - case BT_SECURITY_HIGH: 1085 - auth_type = HCI_AT_DEDICATED_BONDING_MITM; 1086 - break; 1087 - case BT_SECURITY_MEDIUM: 1088 - auth_type = HCI_AT_DEDICATED_BONDING; 1089 - break; 1090 - default: 1091 - auth_type = HCI_AT_NO_BONDING; 1092 - break; 1093 - } 1094 - } else if (l2cap_pi(sk)->psm == cpu_to_le16(0x0001)) { 1095 - if (l2cap_pi(sk)->sec_level == BT_SECURITY_HIGH) 1096 - auth_type = HCI_AT_NO_BONDING_MITM; 1097 - else 1098 - auth_type = HCI_AT_NO_BONDING; 1099 - 1100 - if (l2cap_pi(sk)->sec_level == BT_SECURITY_LOW) 1101 - l2cap_pi(sk)->sec_level = BT_SECURITY_SDP; 1102 - } else { 1103 - switch (l2cap_pi(sk)->sec_level) { 1104 - case BT_SECURITY_HIGH: 1105 - auth_type = HCI_AT_GENERAL_BONDING_MITM; 1106 - break; 1107 - case BT_SECURITY_MEDIUM: 1108 - auth_type = HCI_AT_GENERAL_BONDING; 1109 - break; 1110 - default: 1111 - auth_type = HCI_AT_NO_BONDING; 1112 - break; 1113 - } 1114 - } 1071 + auth_type = l2cap_get_auth_type(sk); 1115 1072 1116 1073 hcon = hci_connect(hdev, ACL_LINK, dst, 1117 1074 l2cap_pi(sk)->sec_level, auth_type); ··· 1106 1127 if (sk->sk_type != SOCK_SEQPACKET && 1107 1128 sk->sk_type != SOCK_STREAM) { 1108 1129 l2cap_sock_clear_timer(sk); 1109 - sk->sk_state = BT_CONNECTED; 1130 + if (l2cap_check_security(sk)) 1131 + sk->sk_state = BT_CONNECTED; 1110 1132 } else 1111 1133 l2cap_do_start(sk); 1112 1134 } ··· 1873 1893 if (pi->mode == L2CAP_MODE_STREAMING) { 1874 1894 l2cap_streaming_send(sk); 1875 1895 } else { 1876 - if (pi->conn_state & L2CAP_CONN_REMOTE_BUSY && 1877 - pi->conn_state && L2CAP_CONN_WAIT_F) { 1896 + if ((pi->conn_state & L2CAP_CONN_REMOTE_BUSY) && 1897 + (pi->conn_state & L2CAP_CONN_WAIT_F)) { 1878 1898 err = len; 1879 1899 break; 1880 1900 }
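Among the l2cap.c changes above is a one-character bug fix: `pi->conn_state && L2CAP_CONN_WAIT_F` became `pi->conn_state & L2CAP_CONN_WAIT_F`. With logical `&&`, any nonzero state reduces the right-hand side to "is the flag constant nonzero", which is always true. A standalone demonstration with illustrative flag values:

```c
#define CONN_REMOTE_BUSY 0x01u
#define CONN_WAIT_F      0x02u

/* Buggy form: for any nonzero state, "state && CONN_WAIT_F" is 1, so the
 * branch fires even when CONN_WAIT_F is clear. */
int both_flags_buggy(unsigned int state)
{
        return (state & CONN_REMOTE_BUSY) && (state && CONN_WAIT_F);
}

/* Fixed form: tests the actual bit. */
int both_flags_fixed(unsigned int state)
{
        return (state & CONN_REMOTE_BUSY) && (state & CONN_WAIT_F);
}
```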
+2 -1
net/bluetooth/rfcomm/core.c
··· 1164 1164 * initiator rfcomm_process_rx already calls 1165 1165 * rfcomm_session_put() */ 1166 1166 if (s->sock->sk->sk_state != BT_CLOSED) 1167 - rfcomm_session_put(s); 1167 + if (list_empty(&s->dlcs)) 1168 + rfcomm_session_put(s); 1168 1169 break; 1169 1170 } 1170 1171 }
+2 -1
net/core/dev.c
··· 749 749 * @ha: hardware address 750 750 * 751 751 * Search for an interface by MAC address. Returns NULL if the device 752 - * is not found or a pointer to the device. The caller must hold RCU 752 + * is not found or a pointer to the device. 753 + * The caller must hold RCU or RTNL. 753 754 * The returned device has not had its ref count increased 754 755 * and the caller must therefore be careful about locking 755 756 *
+1 -1
net/core/ethtool.c
··· 817 817 if (regs.len > reglen) 818 818 regs.len = reglen; 819 819 820 - regbuf = vmalloc(reglen); 820 + regbuf = vzalloc(reglen); 821 821 if (!regbuf) 822 822 return -ENOMEM; 823 823
+6 -2
net/core/skbuff.c
··· 2744 2744 2745 2745 merge: 2746 2746 if (offset > headlen) { 2747 - skbinfo->frags[0].page_offset += offset - headlen; 2748 - skbinfo->frags[0].size -= offset - headlen; 2747 + unsigned int eat = offset - headlen; 2748 + 2749 + skbinfo->frags[0].page_offset += eat; 2750 + skbinfo->frags[0].size -= eat; 2751 + skb->data_len -= eat; 2752 + skb->len -= eat; 2749 2753 offset = headlen; 2750 2754 } 2751 2755
+11 -2
net/dcb/dcbnl.c
··· 583 583 u8 up, idtype; 584 584 int ret = -EINVAL; 585 585 586 - if (!tb[DCB_ATTR_APP] || !netdev->dcbnl_ops->getapp) 586 + if (!tb[DCB_ATTR_APP]) 587 587 goto out; 588 588 589 589 ret = nla_parse_nested(app_tb, DCB_APP_ATTR_MAX, tb[DCB_ATTR_APP], ··· 604 604 goto out; 605 605 606 606 id = nla_get_u16(app_tb[DCB_APP_ATTR_ID]); 607 - up = netdev->dcbnl_ops->getapp(netdev, idtype, id); 607 + 608 + if (netdev->dcbnl_ops->getapp) { 609 + up = netdev->dcbnl_ops->getapp(netdev, idtype, id); 610 + } else { 611 + struct dcb_app app = { 612 + .selector = idtype, 613 + .protocol = id, 614 + }; 615 + up = dcb_getapp(netdev, &app); 616 + } 608 617 609 618 /* send this back */ 610 619 dcbnl_skb = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
+5 -6
net/ipv4/arp.c
··· 1017 1017 IPV4_DEVCONF_ALL(net, PROXY_ARP) = on; 1018 1018 return 0; 1019 1019 } 1020 - if (__in_dev_get_rcu(dev)) { 1021 - IN_DEV_CONF_SET(__in_dev_get_rcu(dev), PROXY_ARP, on); 1020 + if (__in_dev_get_rtnl(dev)) { 1021 + IN_DEV_CONF_SET(__in_dev_get_rtnl(dev), PROXY_ARP, on); 1022 1022 return 0; 1023 1023 } 1024 1024 return -ENXIO; 1025 1025 } 1026 1026 1027 - /* must be called with rcu_read_lock() */ 1028 1027 static int arp_req_set_public(struct net *net, struct arpreq *r, 1029 1028 struct net_device *dev) 1030 1029 { ··· 1232 1233 if (!(r.arp_flags & ATF_NETMASK)) 1233 1234 ((struct sockaddr_in *)&r.arp_netmask)->sin_addr.s_addr = 1234 1235 htonl(0xFFFFFFFFUL); 1235 - rcu_read_lock(); 1236 + rtnl_lock(); 1236 1237 if (r.arp_dev[0]) { 1237 1238 err = -ENODEV; 1238 - dev = dev_get_by_name_rcu(net, r.arp_dev); 1239 + dev = __dev_get_by_name(net, r.arp_dev); 1239 1240 if (dev == NULL) 1240 1241 goto out; 1241 1242 ··· 1262 1263 break; 1263 1264 } 1264 1265 out: 1265 - rcu_read_unlock(); 1266 + rtnl_unlock(); 1266 1267 if (cmd == SIOCGARP && !err && copy_to_user(arg, &r, sizeof(r))) 1267 1268 err = -EFAULT; 1268 1269 return err;
+1 -1
net/ipv4/inetpeer.c
··· 475 475 struct inet_peer *inet_getpeer(struct inetpeer_addr *daddr, int create) 476 476 { 477 477 struct inet_peer __rcu **stack[PEER_MAXDEPTH], ***stackptr; 478 - struct inet_peer_base *base = family_to_base(AF_INET); 478 + struct inet_peer_base *base = family_to_base(daddr->family); 479 479 struct inet_peer *p; 480 480 481 481 /* Look up for the address quickly, lockless.
+1 -1
net/ipv4/tcp_input.c
··· 4399 4399 if (!skb_copy_datagram_iovec(skb, 0, tp->ucopy.iov, chunk)) { 4400 4400 tp->ucopy.len -= chunk; 4401 4401 tp->copied_seq += chunk; 4402 - eaten = (chunk == skb->len && !th->fin); 4402 + eaten = (chunk == skb->len); 4403 4403 tcp_rcv_space_adjust(sk); 4404 4404 } 4405 4405 local_bh_disable();
-1
net/ipv4/tcp_ipv4.c
··· 1994 1994 } 1995 1995 req = req->dl_next; 1996 1996 } 1997 - st->offset = 0; 1998 1997 if (++st->sbucket >= icsk->icsk_accept_queue.listen_opt->nr_table_entries) 1999 1998 break; 2000 1999 get_req:
+33 -48
net/ipv6/addrconf.c
··· 2661 2661 struct net *net = dev_net(dev); 2662 2662 struct inet6_dev *idev; 2663 2663 struct inet6_ifaddr *ifa; 2664 - LIST_HEAD(keep_list); 2665 - int state; 2664 + int state, i; 2666 2665 2667 2666 ASSERT_RTNL(); 2668 2667 2669 - /* Flush routes if device is being removed or it is not loopback */ 2670 - if (how || !(dev->flags & IFF_LOOPBACK)) 2671 - rt6_ifdown(net, dev); 2668 + rt6_ifdown(net, dev); 2669 + neigh_ifdown(&nd_tbl, dev); 2672 2670 2673 2671 idev = __in6_dev_get(dev); 2674 2672 if (idev == NULL) ··· 2685 2687 /* Step 1.5: remove snmp6 entry */ 2686 2688 snmp6_unregister_dev(idev); 2687 2689 2690 + } 2691 + 2692 + /* Step 2: clear hash table */ 2693 + for (i = 0; i < IN6_ADDR_HSIZE; i++) { 2694 + struct hlist_head *h = &inet6_addr_lst[i]; 2695 + struct hlist_node *n; 2696 + 2697 + spin_lock_bh(&addrconf_hash_lock); 2698 + restart: 2699 + hlist_for_each_entry_rcu(ifa, n, h, addr_lst) { 2700 + if (ifa->idev == idev) { 2701 + hlist_del_init_rcu(&ifa->addr_lst); 2702 + addrconf_del_timer(ifa); 2703 + goto restart; 2704 + } 2705 + } 2706 + spin_unlock_bh(&addrconf_hash_lock); 2688 2707 } 2689 2708 2690 2709 write_lock_bh(&idev->lock); ··· 2737 2722 struct inet6_ifaddr, if_list); 2738 2723 addrconf_del_timer(ifa); 2739 2724 2740 - /* If just doing link down, and address is permanent 2741 - and not link-local, then retain it. */ 2742 - if (!how && 2743 - (ifa->flags&IFA_F_PERMANENT) && 2744 - !(ipv6_addr_type(&ifa->addr) & IPV6_ADDR_LINKLOCAL)) { 2745 - list_move_tail(&ifa->if_list, &keep_list); 2725 + list_del(&ifa->if_list); 2746 2726 2747 - /* If not doing DAD on this address, just keep it. 
*/ 2748 - if ((dev->flags&(IFF_NOARP|IFF_LOOPBACK)) || 2749 - idev->cnf.accept_dad <= 0 || 2750 - (ifa->flags & IFA_F_NODAD)) 2751 - continue; 2727 + write_unlock_bh(&idev->lock); 2752 2728 2753 - /* If it was tentative already, no need to notify */ 2754 - if (ifa->flags & IFA_F_TENTATIVE) 2755 - continue; 2729 + spin_lock_bh(&ifa->state_lock); 2730 + state = ifa->state; 2731 + ifa->state = INET6_IFADDR_STATE_DEAD; 2732 + spin_unlock_bh(&ifa->state_lock); 2756 2733 2757 - /* Flag it for later restoration when link comes up */ 2758 - ifa->flags |= IFA_F_TENTATIVE; 2759 - ifa->state = INET6_IFADDR_STATE_DAD; 2760 - } else { 2761 - list_del(&ifa->if_list); 2762 - 2763 - /* clear hash table */ 2764 - spin_lock_bh(&addrconf_hash_lock); 2765 - hlist_del_init_rcu(&ifa->addr_lst); 2766 - spin_unlock_bh(&addrconf_hash_lock); 2767 - 2768 - write_unlock_bh(&idev->lock); 2769 - spin_lock_bh(&ifa->state_lock); 2770 - state = ifa->state; 2771 - ifa->state = INET6_IFADDR_STATE_DEAD; 2772 - spin_unlock_bh(&ifa->state_lock); 2773 - 2774 - if (state != INET6_IFADDR_STATE_DEAD) { 2775 - __ipv6_ifa_notify(RTM_DELADDR, ifa); 2776 - atomic_notifier_call_chain(&inet6addr_chain, 2777 - NETDEV_DOWN, ifa); 2778 - } 2779 - 2780 - in6_ifa_put(ifa); 2781 - write_lock_bh(&idev->lock); 2734 + if (state != INET6_IFADDR_STATE_DEAD) { 2735 + __ipv6_ifa_notify(RTM_DELADDR, ifa); 2736 + atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifa); 2782 2737 } 2783 - } 2738 + in6_ifa_put(ifa); 2784 2739 2785 - list_splice(&keep_list, &idev->addr_list); 2740 + write_lock_bh(&idev->lock); 2741 + } 2786 2742 2787 2743 write_unlock_bh(&idev->lock); 2788 2744 ··· 4142 4156 addrconf_leave_solict(ifp->idev, &ifp->addr); 4143 4157 dst_hold(&ifp->rt->dst); 4144 4158 4145 - if (ifp->state == INET6_IFADDR_STATE_DEAD && 4146 - ip6_del_rt(ifp->rt)) 4159 + if (ip6_del_rt(ifp->rt)) 4147 4160 dst_free(&ifp->rt->dst); 4148 4161 break; 4149 4162 }
+1 -8
net/ipv6/route.c
··· 72 72 #define RT6_TRACE(x...) do { ; } while (0) 73 73 #endif 74 74 75 - #define CLONE_OFFLINK_ROUTE 0 76 - 77 75 static struct rt6_info * ip6_rt_copy(struct rt6_info *ort); 78 76 static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie); 79 77 static unsigned int ip6_default_advmss(const struct dst_entry *dst); ··· 736 738 737 739 if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP)) 738 740 nrt = rt6_alloc_cow(rt, &fl->fl6_dst, &fl->fl6_src); 739 - else { 740 - #if CLONE_OFFLINK_ROUTE 741 + else 741 742 nrt = rt6_alloc_clone(rt, &fl->fl6_dst); 742 - #else 743 - goto out2; 744 - #endif 745 - } 746 743 747 744 dst_release(&rt->dst); 748 745 rt = nrt ? : net->ipv6.ip6_null_entry;
+6
net/ipv6/xfrm6_policy.c
··· 98 98 if (!xdst->u.rt6.rt6i_idev) 99 99 return -ENODEV; 100 100 101 + xdst->u.rt6.rt6i_peer = rt->rt6i_peer; 102 + if (rt->rt6i_peer) 103 + atomic_inc(&rt->rt6i_peer->refcnt); 104 + 101 105 /* Sheit... I remember I did this right. Apparently, 102 106 * it was magically lost, so this code needs audit */ 103 107 xdst->u.rt6.rt6i_flags = rt->rt6i_flags & (RTF_ANYCAST | ··· 220 216 221 217 if (likely(xdst->u.rt6.rt6i_idev)) 222 218 in6_dev_put(xdst->u.rt6.rt6i_idev); 219 + if (likely(xdst->u.rt6.rt6i_peer)) 220 + inet_putpeer(xdst->u.rt6.rt6i_peer); 223 221 xfrm_dst_destroy(xdst); 224 222 } 225 223
+3
net/mac80211/tx.c
··· 2230 2230 
2231 2231 	sdata = vif_to_sdata(vif);
2232 2232 
2233 + 	if (!ieee80211_sdata_running(sdata))
2234 + 		goto out;
2235 + 
2233 2236 	if (tim_offset)
2234 2237 		*tim_offset = 0;
2235 2238 	if (tim_length)
+1 -2
net/sched/sch_cbq.c
··· 390 390 	ret = qdisc_enqueue(skb, cl->q);
391 391 	if (ret == NET_XMIT_SUCCESS) {
392 392 		sch->q.qlen++;
393 - 		qdisc_bstats_update(sch, skb);
394 393 		cbq_mark_toplevel(q, cl);
395 394 		if (!cl->next_alive)
396 395 			cbq_activate_class(cl);
··· 648 649 	ret = qdisc_enqueue(skb, cl->q);
649 650 	if (ret == NET_XMIT_SUCCESS) {
650 651 		sch->q.qlen++;
651 - 		qdisc_bstats_update(sch, skb);
652 652 		if (!cl->next_alive)
653 653 			cbq_activate_class(cl);
654 654 		return 0;
··· 969 971 
970 972 	skb = cbq_dequeue_1(sch);
971 973 	if (skb) {
974 + 		qdisc_bstats_update(sch, skb);
972 975 		sch->q.qlen--;
973 976 		sch->flags &= ~TCQ_F_THROTTLED;
974 977 		return skb;
+1 -1
net/sched/sch_drr.c
··· 376 376 	}
377 377 
378 378 	bstats_update(&cl->bstats, skb);
379 - 	qdisc_bstats_update(sch, skb);
380 379 
381 380 	sch->q.qlen++;
382 381 	return err;
··· 402 403 	skb = qdisc_dequeue_peeked(cl->qdisc);
403 404 	if (cl->qdisc->q.qlen == 0)
404 405 		list_del(&cl->alist);
406 + 	qdisc_bstats_update(sch, skb);
405 407 	sch->q.qlen--;
406 408 	return skb;
407 409 }
+1 -1
net/sched/sch_dsmark.c
··· 260 260 		return err;
261 261 	}
262 262 
263 - 	qdisc_bstats_update(sch, skb);
264 263 	sch->q.qlen++;
265 264 
266 265 	return NET_XMIT_SUCCESS;
··· 282 283 	if (skb == NULL)
283 284 		return NULL;
284 285 
286 + 	qdisc_bstats_update(sch, skb);
285 287 	sch->q.qlen--;
286 288 
287 289 	index = skb->tc_index & (p->indices - 1);
+1 -4
net/sched/sch_fifo.c
··· 46 46 
47 47 static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc* sch)
48 48 {
49 - 	struct sk_buff *skb_head;
50 49 	struct fifo_sched_data *q = qdisc_priv(sch);
51 50 
52 51 	if (likely(skb_queue_len(&sch->q) < q->limit))
53 52 		return qdisc_enqueue_tail(skb, sch);
54 53 
55 54 	/* queue full, remove one skb to fulfill the limit */
56 - 	skb_head = qdisc_dequeue_head(sch);
55 + 	__qdisc_queue_drop_head(sch, &sch->q);
57 56 	sch->qstats.drops++;
58 - 	kfree_skb(skb_head);
59 - 
60 57 	qdisc_enqueue_tail(skb, sch);
61 58 
62 59 	return NET_XMIT_CN;
+1 -1
net/sched/sch_hfsc.c
··· 1600 1600 	set_active(cl, qdisc_pkt_len(skb));
1601 1601 
1602 1602 	bstats_update(&cl->bstats, skb);
1603 - 	qdisc_bstats_update(sch, skb);
1604 1603 	sch->q.qlen++;
1605 1604 
1606 1605 	return NET_XMIT_SUCCESS;
··· 1665 1666 	}
1666 1667 
1667 1668 	sch->flags &= ~TCQ_F_THROTTLED;
1669 + 	qdisc_bstats_update(sch, skb);
1668 1670 	sch->q.qlen--;
1669 1671 
1670 1672 	return skb;
+5 -7
net/sched/sch_htb.c
··· 574 574 	}
575 575 
576 576 	sch->q.qlen++;
577 - 	qdisc_bstats_update(sch, skb);
578 577 	return NET_XMIT_SUCCESS;
579 578 }
580 579 
··· 841 842 
842 843 static struct sk_buff *htb_dequeue(struct Qdisc *sch)
843 844 {
844 - 	struct sk_buff *skb = NULL;
845 + 	struct sk_buff *skb;
845 846 	struct htb_sched *q = qdisc_priv(sch);
846 847 	int level;
847 848 	psched_time_t next_event;
··· 850 851 	/* try to dequeue direct packets as high prio (!) to minimize cpu work */
851 852 	skb = __skb_dequeue(&q->direct_queue);
852 853 	if (skb != NULL) {
854 + ok:
855 + 		qdisc_bstats_update(sch, skb);
853 856 		sch->flags &= ~TCQ_F_THROTTLED;
854 857 		sch->q.qlen--;
855 858 		return skb;
··· 885 884 			int prio = ffz(m);
886 885 			m |= 1 << prio;
887 886 			skb = htb_dequeue_tree(q, prio, level);
888 - 			if (likely(skb != NULL)) {
889 - 				sch->q.qlen--;
890 - 				sch->flags &= ~TCQ_F_THROTTLED;
891 - 				goto fin;
892 - 			}
887 + 			if (likely(skb != NULL))
888 + 				goto ok;
893 889 		}
894 890 	}
895 891 	sch->qstats.overlimits++;
+1 -1
net/sched/sch_multiq.c
··· 83 83 
84 84 	ret = qdisc_enqueue(skb, qdisc);
85 85 	if (ret == NET_XMIT_SUCCESS) {
86 - 		qdisc_bstats_update(sch, skb);
87 86 		sch->q.qlen++;
88 87 		return NET_XMIT_SUCCESS;
89 88 	}
··· 111 112 		qdisc = q->queues[q->curband];
112 113 		skb = qdisc->dequeue(qdisc);
113 114 		if (skb) {
115 + 			qdisc_bstats_update(sch, skb);
114 116 			sch->q.qlen--;
115 117 			return skb;
116 118 		}
+1 -2
net/sched/sch_netem.c
··· 240 240 
241 241 	if (likely(ret == NET_XMIT_SUCCESS)) {
242 242 		sch->q.qlen++;
243 - 		qdisc_bstats_update(sch, skb);
244 243 	} else if (net_xmit_drop_count(ret)) {
245 244 		sch->qstats.drops++;
246 245 	}
··· 288 289 		skb->tstamp.tv64 = 0;
289 290 #endif
290 291 		pr_debug("netem_dequeue: return skb=%p\n", skb);
292 + 		qdisc_bstats_update(sch, skb);
291 293 		sch->q.qlen--;
292 294 		return skb;
293 295 	}
··· 476 476 	__skb_queue_after(list, skb, nskb);
477 477 
478 478 	sch->qstats.backlog += qdisc_pkt_len(nskb);
479 - 	qdisc_bstats_update(sch, nskb);
480 479 
481 480 	return NET_XMIT_SUCCESS;
482 481 }
+1 -1
net/sched/sch_prio.c
··· 84 84 
85 85 	ret = qdisc_enqueue(skb, qdisc);
86 86 	if (ret == NET_XMIT_SUCCESS) {
87 - 		qdisc_bstats_update(sch, skb);
88 87 		sch->q.qlen++;
89 88 		return NET_XMIT_SUCCESS;
90 89 	}
··· 115 116 		struct Qdisc *qdisc = q->queues[prio];
116 117 		struct sk_buff *skb = qdisc->dequeue(qdisc);
117 118 		if (skb) {
119 + 			qdisc_bstats_update(sch, skb);
118 120 			sch->q.qlen--;
119 121 			return skb;
120 122 		}
+6 -5
net/sched/sch_red.c
··· 94 94 
95 95 	ret = qdisc_enqueue(skb, child);
96 96 	if (likely(ret == NET_XMIT_SUCCESS)) {
97 - 		qdisc_bstats_update(sch, skb);
98 97 		sch->q.qlen++;
99 98 	} else if (net_xmit_drop_count(ret)) {
100 99 		q->stats.pdrop++;
··· 113 114 	struct Qdisc *child = q->qdisc;
114 115 
115 116 	skb = child->dequeue(child);
116 - 	if (skb)
117 + 	if (skb) {
118 + 		qdisc_bstats_update(sch, skb);
117 119 		sch->q.qlen--;
118 - 	else if (!red_is_idling(&q->parms))
119 - 		red_start_of_idle_period(&q->parms);
120 - 
120 + 	} else {
121 + 		if (!red_is_idling(&q->parms))
122 + 			red_start_of_idle_period(&q->parms);
123 + 	}
121 124 	return skb;
122 125 }
123 126 
+2 -3
net/sched/sch_sfq.c
··· 402 402 		q->tail = slot;
403 403 		slot->allot = q->scaled_quantum;
404 404 	}
405 - 	if (++sch->q.qlen <= q->limit) {
406 - 		qdisc_bstats_update(sch, skb);
405 + 	if (++sch->q.qlen <= q->limit)
407 406 		return NET_XMIT_SUCCESS;
408 - 	}
409 407 
410 408 	sfq_drop(sch);
411 409 	return NET_XMIT_CN;
··· 443 445 	}
444 446 	skb = slot_dequeue_head(slot);
445 447 	sfq_dec(q, a);
448 + 	qdisc_bstats_update(sch, skb);
446 449 	sch->q.qlen--;
447 450 	sch->qstats.backlog -= qdisc_pkt_len(skb);
448 451 
+1 -1
net/sched/sch_tbf.c
··· 134 134 	}
135 135 
136 136 	sch->q.qlen++;
137 - 	qdisc_bstats_update(sch, skb);
138 137 	return NET_XMIT_SUCCESS;
139 138 }
140 139 
··· 186 187 			q->ptokens = ptoks;
187 188 			sch->q.qlen--;
188 189 			sch->flags &= ~TCQ_F_THROTTLED;
190 + 			qdisc_bstats_update(sch, skb);
189 191 			return skb;
190 192 		}
191 193 
+2 -1
net/sched/sch_teql.c
··· 87 87 
88 88 	if (q->q.qlen < dev->tx_queue_len) {
89 89 		__skb_queue_tail(&q->q, skb);
90 - 		qdisc_bstats_update(sch, skb);
91 90 		return NET_XMIT_SUCCESS;
92 91 	}
93 92 
··· 110 111 			dat->m->slaves = sch;
111 112 			netif_wake_queue(m);
112 113 		}
114 + 	} else {
115 + 		qdisc_bstats_update(sch, skb);
113 116 	}
114 117 	sch->q.qlen = dat->q.qlen + dat_queue->qdisc->q.qlen;
115 118 	return skb;
+1 -3
net/sunrpc/svcsock.c
··· 1609 1609  */
1610 1610 static void svc_bc_sock_free(struct svc_xprt *xprt)
1611 1611 {
1612 - 	if (xprt) {
1613 - 		kfree(xprt->xpt_bc_sid);
1612 + 	if (xprt)
1614 1613 		kfree(container_of(xprt, struct svc_sock, sk_xprt));
1615 - 	}
1616 1614 }
1617 1615 #endif /* CONFIG_NFS_V4_1 */
+4 -1
sound/atmel/ac97c.c
··· 33 33 #include <linux/dw_dmac.h>
34 34 
35 35 #include <mach/cpu.h>
36 - #include <mach/hardware.h>
37 36 #include <mach/gpio.h>
37 + 
38 + #ifdef CONFIG_ARCH_AT91
39 + #include <mach/hardware.h>
40 + #endif
38 41 
39 42 #include "ac97c.h"
40 43 
+16 -22
sound/pci/azt3328.c
··· 979 979 
980 980 	snd_azf3328_dbgcallenter();
981 981 	switch (bitrate) {
982 - #define AZF_FMT_XLATE(in_freq, out_bits) \
983 - 	do { \
984 - 		case AZF_FREQ_ ## in_freq: \
985 - 			freq = SOUNDFORMAT_FREQ_ ## out_bits; \
986 - 			break; \
987 - 	} while (0);
988 - 	AZF_FMT_XLATE(4000, SUSPECTED_4000)
989 - 	AZF_FMT_XLATE(4800, SUSPECTED_4800)
990 - 	/* the AZF3328 names it "5510" for some strange reason: */
991 - 	AZF_FMT_XLATE(5512, 5510)
992 - 	AZF_FMT_XLATE(6620, 6620)
993 - 	AZF_FMT_XLATE(8000, 8000)
994 - 	AZF_FMT_XLATE(9600, 9600)
995 - 	AZF_FMT_XLATE(11025, 11025)
996 - 	AZF_FMT_XLATE(13240, SUSPECTED_13240)
997 - 	AZF_FMT_XLATE(16000, 16000)
998 - 	AZF_FMT_XLATE(22050, 22050)
999 - 	AZF_FMT_XLATE(32000, 32000)
982 + 	case AZF_FREQ_4000: freq = SOUNDFORMAT_FREQ_SUSPECTED_4000; break;
983 + 	case AZF_FREQ_4800: freq = SOUNDFORMAT_FREQ_SUSPECTED_4800; break;
984 + 	case AZF_FREQ_5512:
985 + 		/* the AZF3328 names it "5510" for some strange reason */
986 + 		freq = SOUNDFORMAT_FREQ_5510; break;
987 + 	case AZF_FREQ_6620: freq = SOUNDFORMAT_FREQ_6620; break;
988 + 	case AZF_FREQ_8000: freq = SOUNDFORMAT_FREQ_8000; break;
989 + 	case AZF_FREQ_9600: freq = SOUNDFORMAT_FREQ_9600; break;
990 + 	case AZF_FREQ_11025: freq = SOUNDFORMAT_FREQ_11025; break;
991 + 	case AZF_FREQ_13240: freq = SOUNDFORMAT_FREQ_SUSPECTED_13240; break;
992 + 	case AZF_FREQ_16000: freq = SOUNDFORMAT_FREQ_16000; break;
993 + 	case AZF_FREQ_22050: freq = SOUNDFORMAT_FREQ_22050; break;
994 + 	case AZF_FREQ_32000: freq = SOUNDFORMAT_FREQ_32000; break;
1000 995 	default:
1001 996 		snd_printk(KERN_WARNING "unknown bitrate %d, assuming 44.1kHz!\n", bitrate);
1002 997 		/* fall-through */
1003 - 	AZF_FMT_XLATE(44100, 44100)
1004 - 	AZF_FMT_XLATE(48000, 48000)
1005 - 	AZF_FMT_XLATE(66200, SUSPECTED_66200)
1006 - #undef AZF_FMT_XLATE
998 + 	case AZF_FREQ_44100: freq = SOUNDFORMAT_FREQ_44100; break;
999 + 	case AZF_FREQ_48000: freq = SOUNDFORMAT_FREQ_48000; break;
1000 + 	case AZF_FREQ_66200: freq = SOUNDFORMAT_FREQ_SUSPECTED_66200; break;
1007 1001 	}
1008 1002 	/* val = 0xff07; 3m27.993s (65301Hz; -> 64000Hz???) hmm, 66120, 65967, 66123 */
1009 1003 	/* val = 0xff09; 17m15.098s (13123,478Hz; -> 12000Hz???) hmm, 13237.2Hz? */
+1 -1
sound/pci/hda/hda_eld.c
··· 381 381 	snd_print_pcm_rates(a->rates, buf, sizeof(buf));
382 382 
383 383 	if (a->format == AUDIO_CODING_TYPE_LPCM)
384 - 		snd_print_pcm_bits(a->sample_bits, buf2 + 8, sizeof(buf2 - 8));
384 + 		snd_print_pcm_bits(a->sample_bits, buf2 + 8, sizeof(buf2) - 8);
385 385 	else if (a->max_bitrate)
386 386 		snprintf(buf2, sizeof(buf2),
387 387 			", max bitrate = %d", a->max_bitrate);
+4 -2
sound/pci/hda/patch_realtek.c
··· 14954 14954 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
14955 14955 	SND_PCI_QUIRK_VENDOR(0x104d, "Sony VAIO", ALC269_FIXUP_SONY_VAIO),
14956 14956 	SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
14957 - 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
14958 - 	SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
14959 14957 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
14958 + 	SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
14959 + 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
14960 + 	SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
14961 + 	SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
14960 14962 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
14961 14963 	SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
14962 14964 	{}
+1 -1
sound/pci/oxygen/xonar_cs43xx.c
··· 392 392 	unsigned int i;
393 393 
394 394 	snd_iprintf(buffer, "\nCS4398: 7?");
395 - 	for (i = 2; i <= 8; ++i)
395 + 	for (i = 2; i < 8; ++i)
396 396 		snd_iprintf(buffer, " %02x", data->cs4398_regs[i]);
397 397 	snd_iprintf(buffer, "\n");
398 398 	dump_cs4362a_registers(data, buffer);
+1 -1
sound/soc/atmel/snd-soc-afeb9260.c
··· 129 129 	.cpu_dai_name = "atmel-ssc-dai.0",
130 130 	.codec_dai_name = "tlv320aic23-hifi",
131 131 	.platform_name = "atmel_pcm-audio",
132 - 	.codec_name = "tlv320aic23-codec.0-0x1a",
132 + 	.codec_name = "tlv320aic23-codec.0-001a",
133 133 	.init = afeb9260_tlv320aic23_init,
134 134 	.ops = &afeb9260_ops,
135 135 };
+1 -1
sound/soc/blackfin/bf5xx-ssm2602.c
··· 119 119 	.cpu_dai_name = "bf5xx-i2s",
120 120 	.codec_dai_name = "ssm2602-hifi",
121 121 	.platform_name = "bf5xx-pcm-audio",
122 - 	.codec_name = "ssm2602-codec.0-0x1b",
122 + 	.codec_name = "ssm2602-codec.0-001b",
123 123 	.ops = &bf5xx_ssm2602_ops,
124 124 };
125 125 
+1 -1
sound/soc/codecs/wm8994.c
··· 2386 2386 	else
2387 2387 		val = 0;
2388 2388 
2389 - 	return snd_soc_update_bits(codec, reg, mask, reg);
2389 + 	return snd_soc_update_bits(codec, reg, mask, val);
2390 2390 
2391 2391 
2392 2392 #define WM8994_RATES SNDRV_PCM_RATE_8000_96000
+1 -1
sound/soc/codecs/wm8995.c
··· 1223 1223 	else
1224 1224 		val = 0;
1225 1225 
1226 - 	return snd_soc_update_bits(codec, reg, mask, reg);
1226 + 	return snd_soc_update_bits(codec, reg, mask, val);
1227 1227 
1228 1228 
1229 1229 /* The size in bits of the FLL divide multiplied by 10
+7 -8
sound/soc/codecs/wm_hubs.c
··· 91 91 static void calibrate_dc_servo(struct snd_soc_codec *codec)
92 92 {
93 93 	struct wm_hubs_data *hubs = snd_soc_codec_get_drvdata(codec);
94 + 	s8 offset;
94 95 	u16 reg, reg_l, reg_r, dcs_cfg;
95 96 
96 97 	/* If we're using a digital only path and have a previously
··· 150 149 		hubs->dcs_codes);
151 150 
152 151 	/* HPOUT1L */
153 - 	if (reg_l + hubs->dcs_codes > 0 &&
154 - 	    reg_l + hubs->dcs_codes < 0xff)
155 - 		reg_l += hubs->dcs_codes;
156 - 	dcs_cfg = reg_l << WM8993_DCS_DAC_WR_VAL_1_SHIFT;
152 + 	offset = reg_l;
153 + 	offset += hubs->dcs_codes;
154 + 	dcs_cfg = (u8)offset << WM8993_DCS_DAC_WR_VAL_1_SHIFT;
157 155 
158 156 	/* HPOUT1R */
159 - 	if (reg_r + hubs->dcs_codes > 0 &&
160 - 	    reg_r + hubs->dcs_codes < 0xff)
161 - 		reg_r += hubs->dcs_codes;
162 - 	dcs_cfg |= reg_r;
157 + 	offset = reg_r;
158 + 	offset += hubs->dcs_codes;
159 + 	dcs_cfg |= (u8)offset;
163 160 
164 161 	dev_dbg(codec->dev, "DCS result: %x\n", dcs_cfg);
165 162 
+1 -1
sound/soc/davinci/davinci-evm.c
··· 223 223 	.stream_name = "AIC3X",
224 224 	.cpu_dai_name= "davinci-mcasp.0",
225 225 	.codec_dai_name = "tlv320aic3x-hifi",
226 - 	.codec_name = "tlv320aic3x-codec.0-001a",
226 + 	.codec_name = "tlv320aic3x-codec.1-0018",
227 227 	.platform_name = "davinci-pcm-audio",
228 228 	.init = evm_aic3x_init,
229 229 	.ops = &evm_ops,
+2 -2
sound/soc/pxa/corgi.c
··· 307 307 static struct snd_soc_dai_link corgi_dai = {
308 308 	.name = "WM8731",
309 309 	.stream_name = "WM8731",
310 - 	.cpu_dai_name = "pxa-is2-dai",
310 + 	.cpu_dai_name = "pxa2xx-i2s",
311 311 	.codec_dai_name = "wm8731-hifi",
312 312 	.platform_name = "pxa-pcm-audio",
313 - 	.codec_name = "wm8731-codec-0.001a",
313 + 	.codec_name = "wm8731-codec-0.001b",
314 314 	.init = corgi_wm8731_init,
315 315 	.ops = &corgi_ops,
316 316 };
+1 -1
sound/soc/pxa/poodle.c
··· 276 276 	.cpu_dai_name = "pxa2xx-i2s",
277 277 	.codec_dai_name = "wm8731-hifi",
278 278 	.platform_name = "pxa-pcm-audio",
279 - 	.codec_name = "wm8731-codec.0-001a",
279 + 	.codec_name = "wm8731-codec.0-001b",
280 280 	.init = poodle_wm8731_init,
281 281 	.ops = &poodle_ops,
282 282 };
+2 -2
sound/soc/pxa/spitz.c
··· 315 315 static struct snd_soc_dai_link spitz_dai = {
316 316 	.name = "wm8750",
317 317 	.stream_name = "WM8750",
318 - 	.cpu_dai_name = "pxa-is2",
318 + 	.cpu_dai_name = "pxa2xx-i2s",
319 319 	.codec_dai_name = "wm8750-hifi",
320 320 	.platform_name = "pxa-pcm-audio",
321 - 	.codec_name = "wm8750-codec.0-001a",
321 + 	.codec_name = "wm8750-codec.0-001b",
322 322 	.init = spitz_wm8750_init,
323 323 	.ops = &spitz_ops,
324 324 };
+3 -3
sound/soc/samsung/neo1973_gta02_wm8753.c
··· 397 397 { /* Hifi Playback - for similatious use with voice below */
398 398 	.name = "WM8753",
399 399 	.stream_name = "WM8753 HiFi",
400 - 	.cpu_dai_name = "s3c24xx-i2s",
400 + 	.cpu_dai_name = "s3c24xx-iis",
401 401 	.codec_dai_name = "wm8753-hifi",
402 402 	.init = neo1973_gta02_wm8753_init,
403 403 	.platform_name = "samsung-audio",
404 - 	.codec_name = "wm8753-codec.0-0x1a",
404 + 	.codec_name = "wm8753-codec.0-001a",
405 405 	.ops = &neo1973_gta02_hifi_ops,
406 406 },
407 407 { /* Voice via BT */
··· 410 410 	.cpu_dai_name = "bluetooth-dai",
411 411 	.codec_dai_name = "wm8753-voice",
412 412 	.ops = &neo1973_gta02_voice_ops,
413 - 	.codec_name = "wm8753-codec.0-0x1a",
413 + 	.codec_name = "wm8753-codec.0-001a",
414 414 	.platform_name = "samsung-audio",
415 415 },
416 416 };
+3 -3
sound/soc/samsung/neo1973_wm8753.c
··· 559 559 	.name = "WM8753",
560 560 	.stream_name = "WM8753 HiFi",
561 561 	.platform_name = "samsung-audio",
562 - 	.cpu_dai_name = "s3c24xx-i2s",
562 + 	.cpu_dai_name = "s3c24xx-iis",
563 563 	.codec_dai_name = "wm8753-hifi",
564 - 	.codec_name = "wm8753-codec.0-0x1a",
564 + 	.codec_name = "wm8753-codec.0-001a",
565 565 	.init = neo1973_wm8753_init,
566 566 	.ops = &neo1973_hifi_ops,
567 567 },
··· 571 571 	.platform_name = "samsung-audio",
572 572 	.cpu_dai_name = "bluetooth-dai",
573 573 	.codec_dai_name = "wm8753-voice",
574 - 	.codec_name = "wm8753-codec.0-0x1a",
574 + 	.codec_name = "wm8753-codec.0-001a",
575 575 	.ops = &neo1973_voice_ops,
576 576 },
577 577 };
+2 -2
sound/soc/samsung/s3c24xx_simtec_hermes.c
··· 94 94 static struct snd_soc_dai_link simtec_dai_aic33 = {
95 95 	.name = "tlv320aic33",
96 96 	.stream_name = "TLV320AIC33",
97 - 	.codec_name = "tlv320aic3x-codec.0-0x1a",
98 - 	.cpu_dai_name = "s3c24xx-i2s",
97 + 	.codec_name = "tlv320aic3x-codec.0-001a",
98 + 	.cpu_dai_name = "s3c24xx-iis",
99 99 	.codec_dai_name = "tlv320aic3x-hifi",
100 100 	.platform_name = "samsung-audio",
101 101 	.init = simtec_hermes_init,
+2 -2
sound/soc/samsung/s3c24xx_simtec_tlv320aic23.c
··· 85 85 static struct snd_soc_dai_link simtec_dai_aic23 = {
86 86 	.name = "tlv320aic23",
87 87 	.stream_name = "TLV320AIC23",
88 - 	.codec_name = "tlv320aic3x-codec.0-0x1a",
89 - 	.cpu_dai_name = "s3c24xx-i2s",
88 + 	.codec_name = "tlv320aic3x-codec.0-001a",
89 + 	.cpu_dai_name = "s3c24xx-iis",
90 90 	.codec_dai_name = "tlv320aic3x-hifi",
91 91 	.platform_name = "samsung-audio",
92 92 	.init = simtec_tlv320aic23_init,
+1 -1
sound/soc/samsung/s3c24xx_uda134x.c
··· 228 228 	.stream_name = "UDA134X",
229 229 	.codec_name = "uda134x-hifi",
230 230 	.codec_dai_name = "uda134x-hifi",
231 - 	.cpu_dai_name = "s3c24xx-i2s",
231 + 	.cpu_dai_name = "s3c24xx-iis",
232 232 	.ops = &s3c24xx_uda134x_ops,
233 233 	.platform_name = "samsung-audio",
234 234 };