		 Asynchronous Transfers/Transforms API

1 INTRODUCTION

2 GENEALOGY

3 USAGE
3.1 General format of the API
3.2 Supported operations
3.3 Descriptor management
3.4 When does the operation execute?
3.5 When does the operation complete?
3.6 Constraints
3.7 Example

4 DRIVER DEVELOPMENT NOTES
4.1 Conformance points
4.2 "My application needs finer control of hardware channels"

5 SOURCE

---

1 INTRODUCTION

The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies.  It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations.  Code
that is written to the API can optimize for asynchronous operation and
the API will fit the chain of operations to the available offload
resources.

2 GENEALOGY

The API was initially designed to offload the memory copy and
xor-parity-calculations of the md-raid5 driver using the offload engines
present in the Intel(R) Xscale series of I/O processors.  It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines.  The following design
features surfaced as a result:
1/ implicit synchronous path: users of the API do not need to know if
   the platform they are running on has offload capabilities.  The
   operation will be offloaded when an engine is available and carried
   out in software otherwise.
2/ cross channel dependency chains: the API allows a chain of dependent
   operations to be submitted, like xor->copy->xor in the raid5 case.
   The API automatically handles cases where the transition from one
   operation to another implies a hardware channel switch.
3/ dmaengine extensions to support multiple clients and operation types
   beyond 'memcpy'

3 USAGE

3.1 General format of the API:
struct dma_async_tx_descriptor *
async_<operation>(<op specific parameters>,
		  enum async_tx_flags flags,
		  struct dma_async_tx_descriptor *dependency,
		  dma_async_tx_callback callback_routine,
		  void *callback_parameter);

3.2 Supported operations:
memcpy       - memory copy between a source and a destination buffer
memset       - fill a destination buffer with a byte value
xor          - xor a series of source buffers and write the result to a
               destination buffer
xor_zero_sum - xor a series of source buffers and set a flag if the
               result is zero.  The implementation attempts to prevent
               writes to memory

3.3 Descriptor management:
The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously.  Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete.  When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted.  This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor.  A descriptor can be acked by one of the
following methods:
1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
   descriptor of a new operation
3/ calling async_tx_ack() on the descriptor

3.4 When does the operation execute?
Operations do not immediately issue after return from the
async_<operation> call.
Offload engine drivers batch operations to
improve performance by reducing the number of mmio cycles needed to
manage the channel.  Once a driver-specific threshold is met the driver
automatically issues pending operations.  An application can force this
event by calling async_tx_issue_pending_all().  This operates on all
channels since the application has no knowledge of channel to operation
mapping.

3.5 When does the operation complete?
There are two methods for an application to learn about the completion
of an operation.
1/ Call dma_wait_for_async_tx().  This call causes the CPU to spin while
   it polls for the completion of the operation.  It handles dependency
   chains and issuing pending operations.
2/ Specify a completion callback.  The callback routine runs in tasklet
   context if the offload engine driver supports interrupts, or it is
   called in application context if the operation is carried out
   synchronously in software.  The callback can be set in the call to
   async_<operation>, or when the application needs to submit a chain of
   unknown length it can use the async_trigger_callback() routine to set
   a completion interrupt/callback at the end of the chain.

3.6 Constraints:
1/ Calls to async_<operation> are not permitted in IRQ context.  Other
   contexts are permitted provided constraint #2 is not violated.
2/ Completion callback routines cannot submit new operations.
This115115+ results in recursion in the synchronous case and spin_locks being116116+ acquired twice in the asynchronous case.117117+118118+3.7 Example:119119+Perform a xor->copy->xor operation where each operation depends on the120120+result from the previous operation:121121+122122+void complete_xor_copy_xor(void *param)123123+{124124+ printk("complete\n");125125+}126126+127127+int run_xor_copy_xor(struct page **xor_srcs,128128+ int xor_src_cnt,129129+ struct page *xor_dest,130130+ size_t xor_len,131131+ struct page *copy_src,132132+ struct page *copy_dest,133133+ size_t copy_len)134134+{135135+ struct dma_async_tx_descriptor *tx;136136+137137+ tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,138138+ ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);139139+ tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,140140+ ASYNC_TX_DEP_ACK, tx, NULL, NULL);141141+ tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,142142+ ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,143143+ tx, complete_xor_copy_xor, NULL);144144+145145+ async_tx_issue_pending_all();146146+}147147+148148+See include/linux/async_tx.h for more information on the flags. See the149149+ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more150150+implementation examples.151151+152152+4 DRIVER DEVELOPMENT NOTES153153+4.1 Conformance points:154154+There are a few conformance points required in dmaengine drivers to155155+accommodate assumptions made by applications using the async_tx API:156156+1/ Completion callbacks are expected to happen in tasklet context157157+2/ dma_async_tx_descriptor fields are never manipulated in IRQ context158158+3/ Use async_tx_run_dependencies() in the descriptor clean up path to159159+ handle submission of dependent operations160160+161161+4.2 "My application needs finer control of hardware channels"162162+This requirement seems to arise from cases where a DMA engine driver is163163+trying to support device-to-memory DMA. 
The dmaengine and async_tx
implementations were designed for offloading memory-to-memory
operations; however, there are some capabilities of the dmaengine layer
that can be used for platform-specific channel management.
Platform-specific constraints can be handled by registering the
application as a 'dma_client' and implementing a 'dma_event_callback' to
apply a filter to the available channels in the system.  Before showing
how to implement a custom dma_event callback some background of
dmaengine's client support is required.

The following routines in dmaengine support multiple clients requesting
use of a channel:
- dma_async_client_register(struct dma_client *client)
- dma_async_client_chan_request(struct dma_client *client)

dma_async_client_register takes a pointer to an initialized dma_client
structure.  It expects that the 'event_callback' and 'cap_mask' fields
are already initialized.

dma_async_client_chan_request triggers dmaengine to notify the client of
all channels that satisfy the capability mask.  It is up to the client's
event_callback routine to track how many channels the client needs and
how many it is currently using.
The dma_event_callback routine returns a
dma_state_client code to let dmaengine know the status of the
allocation.

Below is an example of how to extend this functionality for
platform-specific filtering of the available channels beyond the
standard capability mask:

static enum dma_state_client
my_dma_client_callback(struct dma_client *client,
                       struct dma_chan *chan, enum dma_state state)
{
        struct dma_device *dma_dev;
        struct my_platform_specific_dma *plat_dma_dev;

        dma_dev = chan->device;
        plat_dma_dev = container_of(dma_dev,
                                    struct my_platform_specific_dma,
                                    dma_dev);

        if (!plat_dma_dev->platform_specific_capability)
                return DMA_DUP;

        . . .
}

5 SOURCE
include/linux/dmaengine.h: core header file for DMA drivers and clients
drivers/dma/dmaengine.c: offload engine channel management routines
drivers/dma/: location for offload engine drivers
include/linux/async_tx.h: core header file for the async_tx api
crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c: copy offload
crypto/async_tx/async_memset.c: memory fill offload
crypto/async_tx/async_xor.c: xor and xor zero sum offload
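The ownership rule in section 3.3 (a descriptor may only be recycled once the application has acked it) can be modeled outside the kernel. The following is a minimal user-space sketch for illustration only; `tx_desc`, `tx_ack` and `driver_try_recycle` are invented names standing in for the descriptor, async_tx_ack() and the driver's cleanup path, not kernel APIs:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: the "driver" may recycle a descriptor only after the
 * application has acknowledged it, mirroring the ASYNC_TX_ACK /
 * ASYNC_TX_DEP_ACK / async_tx_ack() rule. */
struct tx_desc {
        bool acked;     /* set by the application */
        bool recycled;  /* set by the driver on cleanup */
};

/* Application-side acknowledgement, analogous to async_tx_ack(). */
static void tx_ack(struct tx_desc *d)
{
        d->acked = true;
}

/* Driver cleanup path: recycling is only legal after the ack;
 * an unacked descriptor may still gain a dependent child. */
static bool driver_try_recycle(struct tx_desc *d)
{
        if (!d->acked)
                return false;
        d->recycled = true;
        return true;
}
```

The point of the model is the ordering guarantee: until the ack, the descriptor remains valid as a potential `dependency` argument to a later async_<operation> call.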
+2
Documentation/devices.txt
···
    9 = /dev/urandom	Faster, less secure random number gen.
   10 = /dev/aio	Asynchronous I/O notification interface
   11 = /dev/kmsg	Writes to this come out as printk's
+  12 = /dev/oldmem	Used by crashdump kernels to access
+			the memory of the kernel that crashed.

  1 block	RAM disk
    0 = /dev/ram0	First RAM disk
+8-8
Documentation/input/iforce-protocol.txt
···
 Val 40	Spring (Force = f(pos))
 Val 41	Friction (Force = f(velocity)) and Inertia (Force = f(acceleration))

-
+
 02	Axes affected and trigger
 Bits 4-7: Val 2 = effect along one axis. Byte 05 indicates direction
	Val 4 = X axis only. Byte 05 must contain 5a
···
 Query the product id (2 bytes)

 **** Open device ****
-QUERY = 4f ('O'pen) 
+QUERY = 4f ('O'pen)
 No data returned.

 **** Close device *****
···
 No data returned.

 **** Query effect ****
-QUERY = 45 ('E') 
+QUERY = 45 ('E')
 Send effect type.
 Returns nonzero if supported (2 bytes)
···
 OP=  40 <idx> <val> [<val>]
 LEN= 2 or 3
 00 Idx
- Idx 00 Set dead zone (0..2048) 
- Idx 01 Ignore Deadman sensor (0..1) 
- Idx 02 Enable comm watchdog (0..1) 
- Idx 03 Set the strength of the spring (0..100) 
+ Idx 00 Set dead zone (0..2048)
+ Idx 01 Ignore Deadman sensor (0..1)
+ Idx 02 Enable comm watchdog (0..1)
+ Idx 03 Set the strength of the spring (0..100)
 Idx 04 Enable or disable the spring (0/1)
- Idx 05 Set axis saturation threshold (0..2048) 
+ Idx 05 Set axis saturation threshold (0..2048)

 **** Set Effect State ****
 OP=  42 <val>
+1-1
Documentation/lguest/lguest.c
···
	 * of the block file (possibly extending it). */
	if (off + len > device_len) {
		/* Trim it back to the correct length */
-		ftruncate(dev->fd, device_len);
+		ftruncate64(dev->fd, device_len);
		/* Die, bad Guest, die. */
		errx(1, "Write past end %llu+%u", off, len);
	}
Makefile
···
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 23
-EXTRAVERSION =-rc6
-NAME = Pink Farting Weasel
+EXTRAVERSION =-rc9
+NAME = Arr Matey! A Hairy Bilge Rat!

 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+2-2
arch/arm/kernel/bios32.c
···
  * pcibios_fixup_bus - Called after each bus is probed,
  * but before its children are examined.
  */
-void __devinit pcibios_fixup_bus(struct pci_bus *bus)
+void pcibios_fixup_bus(struct pci_bus *bus)
 {
	struct pci_sys_data *root = bus->sysdata;
	struct pci_dev *dev;
···
 /*
  * Convert from Linux-centric to bus-centric addresses for bridge devices.
  */
-void __devinit
+void
 pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region,
			struct resource *res)
 {
+1-1
arch/arm/mach-ep93xx/core.c
···
	if (line >= 0 && line < 16) {
		gpio_line_config(line, GPIO_IN);
	} else {
-		gpio_line_config(EP93XX_GPIO_LINE_F(line), GPIO_IN);
+		gpio_line_config(EP93XX_GPIO_LINE_F(line-16), GPIO_IN);
	}

	port = line >> 3;
···

 static int detect_memory_e820(void)
 {
+	int count = 0;
	u32 next = 0;
	u32 size, id;
	u8 err;
···
	do {
		size = sizeof(struct e820entry);
-		id = SMAP;
-		asm("int $0x15; setc %0"
-		    : "=am" (err), "+b" (next), "+d" (id), "+c" (size),
-		      "=m" (*desc)
-		    : "D" (desc), "a" (0xe820));
-
-		if (err || id != SMAP)
+		/* Important: %edx is clobbered by some BIOSes,
+		   so it must be either used for the error output
+		   or explicitly marked clobbered. */
+		asm("int $0x15; setc %0"
+		    : "=d" (err), "+b" (next), "=a" (id), "+c" (size),
+		      "=m" (*desc)
+		    : "D" (desc), "d" (SMAP), "a" (0xe820));
+
+		/* Some BIOSes stop returning SMAP in the middle of
+		   the search loop.  We don't know exactly how the BIOS
+		   screwed up the map at that point, we might have a
+		   partial map, the full map, or complete garbage, so
+		   just return failure. */
+		if (id != SMAP) {
+			count = 0;
+			break;
+		}
+
+		if (err)
			break;

-		boot_params.e820_entries++;
+		count++;
		desc++;
-	} while (next && boot_params.e820_entries < E820MAX);
+	} while (next && count < E820MAX);

-	return boot_params.e820_entries;
+	return boot_params.e820_entries = count;
 }

 static int detect_memory_e801(void)
···
 int detect_memory(void)
 {
+	int err = -1;
+
	if (detect_memory_e820() > 0)
-		return 0;
+		err = 0;

	if (!detect_memory_e801())
-		return 0;
+		err = 0;

-	return detect_memory_88();
+	if (!detect_memory_88())
+		err = 0;
+
+	return err;
 }
+10-4
arch/i386/boot/video.c
···
 }

 /* Set mode (without recalc) */
-static int raw_set_mode(u16 mode)
+static int raw_set_mode(u16 mode, u16 *real_mode)
 {
	int nmode, i;
	struct card_info *card;
···

		if ((mode == nmode && visible) ||
		    mode == mi->mode ||
-		    mode == (mi->y << 8)+mi->x)
+		    mode == (mi->y << 8)+mi->x) {
+			*real_mode = mi->mode;
			return card->set_mode(mi);
+		}

		if (visible)
			nmode++;
···
	if (mode >= card->xmode_first &&
	    mode < card->xmode_first+card->xmode_n) {
		struct mode_info mix;
-		mix.mode = mode;
+		*real_mode = mix.mode = mode;
		mix.x = mix.y = 0;
		return card->set_mode(&mix);
	}
···
 static int set_mode(u16 mode)
 {
	int rv;
+	u16 real_mode;

	/* Very special mode numbers... */
	if (mode == VIDEO_CURRENT_MODE)
···
	else if (mode == EXTENDED_VGA)
		mode = VIDEO_8POINT;

-	rv = raw_set_mode(mode);
+	rv = raw_set_mode(mode, &real_mode);
	if (rv)
		return rv;

	if (mode & VIDEO_RECALC)
		vga_recalc_vertical();

+	/* Save the canonical mode number for the kernel, not
+	   an alias, size specification or menu position */
+	boot_params.hdr.vid_mode = real_mode;
	return 0;
 }
+10-31
arch/i386/kernel/acpi/wakeup.S
···
 #define VIDEO_FIRST_V7 0x0900

 # Setting of user mode (AX=mode ID) => CF=success
+
+# For now, we only handle VESA modes (0x0200..0x03ff).  To handle other
+# modes, we should probably compile in the video code from the boot
+# directory.
 mode_set:
	movw	%ax, %bx
-#if 0
-	cmpb	$0xff, %ah
-	jz	setalias
+	subb	$VIDEO_FIRST_VESA>>8, %bh
+	cmpb	$2, %bh
+	jb	check_vesa

-	testb	$VIDEO_RECALC>>8, %ah
-	jnz	_setrec
-
-	cmpb	$VIDEO_FIRST_RESOLUTION>>8, %ah
-	jnc	setres
-
-	cmpb	$VIDEO_FIRST_SPECIAL>>8, %ah
-	jz	setspc
-
-	cmpb	$VIDEO_FIRST_V7>>8, %ah
-	jz	setv7
-#endif
-
-	cmpb	$VIDEO_FIRST_VESA>>8, %ah
-	jnc	check_vesa
-#if 0
-	orb	%ah, %ah
-	jz	setmenu
-#endif
-
-	decb	%ah
-#	jz	setbios		Add bios modes later
-
-setbad:	clc
+setbad:
+	clc
	ret

 check_vesa:
-	subb	$VIDEO_FIRST_VESA>>8, %bh
	orw	$0x4000, %bx	# Use linear frame buffer
	movw	$0x4f02, %ax	# VESA BIOS mode set call
	int	$0x10
	cmpw	$0x004f, %ax	# AL=4f if implemented
-	jnz	_setbad		# AH=0 if OK
+	jnz	setbad		# AH=0 if OK

	stc
	ret
-
-_setbad: jmp	setbad

 .code32
 ALIGN
+4-1
arch/i386/xen/mmu.c
···
	put_cpu();

	spin_lock(&mm->page_table_lock);
-	xen_pgd_unpin(mm->pgd);
+
+	/* pgd may not be pinned in the error exit path of execve */
+	if (PagePinned(virt_to_page(mm->pgd)))
+		xen_pgd_unpin(mm->pgd);
	spin_unlock(&mm->page_table_lock);
 }
···
	mask_msc_irq(irq);
	if (!cpu_has_veic)
		MSCIC_WRITE(MSC01_IC_EOI, 0);
-#ifdef CONFIG_MIPS_MT_SMTC
	/* This actually needs to be a call into platform code */
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }

 /*
···
		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r | ~MSC01_IC_SUP_EDGE_BIT);
		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r);
	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }

 /*
+1-9
arch/mips/kernel/irq.c
···
  */
 void ack_bad_irq(unsigned int irq)
 {
+	smtc_im_ack_irq(irq);
	printk("unexpected IRQ # %d\n", irq);
 }

 atomic_t irq_err_count;
-
-#ifdef CONFIG_MIPS_MT_SMTC
-/*
- * SMTC Kernel needs to manipulate low-level CPU interrupt mask
- * in do_IRQ.  These are passed in setup_irq_smtc() and stored
- * in this table.
- */
-unsigned long irq_hwmask[NR_IRQS];
-#endif /* CONFIG_MIPS_MT_SMTC */

 /*
  * Generic, controller-independent functions:
···
 #include <asm/smtc_proc.h>

 /*
- * This file should be built into the kernel only if CONFIG_MIPS_MT_SMTC is set.
+ * SMTC Kernel needs to manipulate low-level CPU interrupt mask
+ * in do_IRQ.  These are passed in setup_irq_smtc() and stored
+ * in this table.
  */
+unsigned long irq_hwmask[NR_IRQS];

 #define LOCK_MT_PRA() \
	local_irq_save(flags); \
···
  * along with this program; if not, write to the Free Software
  * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
  */
+#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/reboot.h>
···
 EXPORT_SYMBOL(soc_type);
 unsigned int periph_rev;
 unsigned int zbbus_mhz;
+EXPORT_SYMBOL(zbbus_mhz);

 static unsigned int part_type;
···
	regs->ccr = 0;
	regs->gpr[1] = sp;

+	/*
+	 * We have just cleared all the nonvolatile GPRs, so make
+	 * FULL_REGS(regs) return true.  This is necessary to allow
+	 * ptrace to examine the thread immediately after exec.
+	 */
+	regs->trap &= ~1UL;
+
 #ifdef CONFIG_PPC32
	regs->mq = 0;
	regs->nip = start;
+2-2
arch/powerpc/platforms/83xx/usb.c
···
		if (port0_is_dr)
			printk(KERN_WARNING
				"834x USB port0 can't be used by both DR and MPH!\n");
-		sicrl |= MPC834X_SICRL_USB0;
+		sicrl &= ~MPC834X_SICRL_USB0;
	}
	prop = of_get_property(np, "port1", NULL);
	if (prop) {
		if (port1_is_dr)
			printk(KERN_WARNING
				"834x USB port1 can't be used by both DR and MPH!\n");
-		sicrl |= MPC834X_SICRL_USB1;
+		sicrl &= ~MPC834X_SICRL_USB1;
	}
	of_node_put(np);
 }
···
	 * For the moment only implement delivery to all cpus or one cpu.
	 * Get current irq_server for the given irq
	 */
-	irq_server = get_irq_server(irq, 1);
+	irq_server = get_irq_server(virq, 1);
	if (irq_server == -1) {
		char cpulist[128];
		cpumask_scnprintf(cpulist, sizeof(cpulist), cpumask);
···
	dev->prom_node = dp;

	regs = of_get_property(dp, "reg", &len);
+	if (!regs)
+		len = 0;
	if (len % sizeof(struct linux_prom_registers)) {
		prom_printf("UGH: proplen for %s was %d, need multiple of %d\n",
			    dev->prom_node->name, len,
···
	movq	%rax,R8(%rsp)
	.endm

+	.macro LOAD_ARGS32 offset
+	movl	\offset(%rsp),%r11d
+	movl	\offset+8(%rsp),%r10d
+	movl	\offset+16(%rsp),%r9d
+	movl	\offset+24(%rsp),%r8d
+	movl	\offset+40(%rsp),%ecx
+	movl	\offset+48(%rsp),%edx
+	movl	\offset+56(%rsp),%esi
+	movl	\offset+64(%rsp),%edi
+	movl	\offset+72(%rsp),%eax
+	.endm
+
	.macro CFI_STARTPROC32 simple
	CFI_STARTPROC	\simple
	CFI_UNDEFINED	r8
···
	movq	$-ENOSYS,RAX(%rsp)	/* really needed? */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
-	LOAD_ARGS ARGOFFSET	/* reload args from stack in case ptrace changed it */
+	LOAD_ARGS32 ARGOFFSET	/* reload args from stack in case ptrace changed it */
	RESTORE_REST
	movl	%ebp, %ebp
	/* no need to do an access_ok check here because rbp has been
···
	movq	$-ENOSYS,RAX(%rsp)	/* really needed? */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
-	LOAD_ARGS ARGOFFSET	/* reload args from stack in case ptrace changed it */
+	LOAD_ARGS32 ARGOFFSET	/* reload args from stack in case ptrace changed it */
	RESTORE_REST
	movl	RSP-ARGOFFSET(%rsp), %r8d
	/* no need to do an access_ok check here because r8 has been
···
	movq	$-ENOSYS,RAX(%rsp)	/* really needed? */
	movq	%rsp,%rdi		/* &pt_regs -> arg1 */
	call	syscall_trace_enter
-	LOAD_ARGS ARGOFFSET	/* reload args from stack in case ptrace changed it */
+	LOAD_ARGS32 ARGOFFSET	/* reload args from stack in case ptrace changed it */
	RESTORE_REST
	jmp	ia32_do_syscall
 END(ia32_syscall)
+13-34
arch/x86_64/kernel/acpi/wakeup.S
···
	testl	$2, realmode_flags - wakeup_code
	jz	1f
	mov	video_mode - wakeup_code, %ax
-	call	mode_seta
+	call	mode_set
 1:

	movw	$0xb800, %ax
···
 #define VIDEO_FIRST_V7 0x0900

 # Setting of user mode (AX=mode ID) => CF=success
+
+# For now, we only handle VESA modes (0x0200..0x03ff).  To handle other
+# modes, we should probably compile in the video code from the boot
+# directory.
 .code16
-mode_seta:
+mode_set:
	movw	%ax, %bx
-#if 0
-	cmpb	$0xff, %ah
-	jz	setalias
+	subb	$VIDEO_FIRST_VESA>>8, %bh
+	cmpb	$2, %bh
+	jb	check_vesa

-	testb	$VIDEO_RECALC>>8, %ah
-	jnz	_setrec
-
-	cmpb	$VIDEO_FIRST_RESOLUTION>>8, %ah
-	jnc	setres
-
-	cmpb	$VIDEO_FIRST_SPECIAL>>8, %ah
-	jz	setspc
-
-	cmpb	$VIDEO_FIRST_V7>>8, %ah
-	jz	setv7
-#endif
-
-	cmpb	$VIDEO_FIRST_VESA>>8, %ah
-	jnc	check_vesaa
-#if 0
-	orb	%ah, %ah
-	jz	setmenu
-#endif
-
-	decb	%ah
-#	jz	setbios		Add bios modes later
-
-setbada: clc
+setbad:
+	clc
	ret

-check_vesaa:
-	subb	$VIDEO_FIRST_VESA>>8, %bh
+check_vesa:
	orw	$0x4000, %bx	# Use linear frame buffer
	movw	$0x4f02, %ax	# VESA BIOS mode set call
	int	$0x10
	cmpw	$0x004f, %ax	# AL=4f if implemented
-	jnz	_setbada	# AH=0 if OK
+	jnz	setbad		# AH=0 if OK

	stc
	ret
-
-_setbada: jmp	setbada

 wakeup_stack_begin:	# Stack grows down
-1
arch/x86_64/kernel/process.c
···
		if (__get_cpu_var(cpu_idle_state))
			__get_cpu_var(cpu_idle_state) = 0;

-		check_pgt_cache();
		rmb();
		idle = pm_idle;
		if (!idle)
-4
arch/x86_64/kernel/ptrace.c
···
 {
	unsigned long tmp;

-	/* Some code in the 64bit emulation may not be 64bit clean.
-	   Don't take any chances. */
-	if (test_tsk_thread_flag(child, TIF_IA32))
-		value &= 0xffffffff;
	switch (regno) {
	case offsetof(struct user_regs_struct,fs):
		if (value && (value & 3) != 3)
+1-1
arch/x86_64/kernel/smp.c
···
	}
	if (!cpus_empty(cpu_mask))
		flush_tlb_others(cpu_mask, mm, FLUSH_ALL);
-	check_pgt_cache();
+
	preempt_enable();
 }
 EXPORT_SYMBOL(flush_tlb_mm);
crypto/async_tx/async_tx.c
···
 {
	enum dma_status status;
	struct dma_async_tx_descriptor *iter;
+	struct dma_async_tx_descriptor *parent;

	if (!tx)
		return DMA_SUCCESS;
···
	/* poll through the dependency chain, return when tx is complete */
	do {
		iter = tx;
-		while (iter->cookie == -EBUSY)
-			iter = iter->parent;
+
+		/* find the root of the unsubmitted dependency chain */
+		while (iter->cookie == -EBUSY) {
+			parent = iter->parent;
+			if (parent && parent->cookie == -EBUSY)
+				iter = iter->parent;
+			else
+				break;
+		}

		status = dma_sync_wait(iter->chan, iter->cookie);
	} while (status == DMA_IN_PROGRESS || (iter != tx));
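The dma_wait_for_async_tx() change above tightens the walk up the dependency chain: it now stops at the last not-yet-submitted descriptor instead of stepping onto a parent that has already been submitted. The loop's logic can be illustrated with a self-contained sketch; `COOKIE_EBUSY` and `struct desc` are stand-ins for the kernel's -EBUSY cookie and dma_async_tx_descriptor, not real definitions:

```c
#include <assert.h>
#include <stddef.h>

#define COOKIE_EBUSY (-2)  /* stand-in for -EBUSY: not yet submitted */

struct desc {
        int cookie;
        struct desc *parent;
};

/* Walk toward the oldest not-yet-submitted ancestor, but never step
 * past a parent that is absent or already submitted: the returned
 * descriptor is the root of the unsubmitted part of the chain. */
static struct desc *find_unsubmitted_root(struct desc *tx)
{
        struct desc *iter = tx;

        while (iter->cookie == COOKIE_EBUSY) {
                struct desc *parent = iter->parent;

                if (parent && parent->cookie == COOKIE_EBUSY)
                        iter = parent;
                else
                        break;
        }
        return iter;
}
```

A submitted descriptor (any cookie other than `COOKIE_EBUSY`) is returned as-is, which matches the fixed loop's behavior of waiting on the deepest descriptor that can actually make progress.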
···
 #include <linux/dmi.h>
 #include <linux/device.h>
 #include <linux/suspend.h>
+
+#include <asm/io.h>
+
 #include <acpi/acpi_bus.h>
 #include <acpi/acpi_drivers.h>
 #include "sleep.h"

 u8 sleep_states[ACPI_S_STATE_COUNT];

+#ifdef CONFIG_PM_SLEEP
 static u32 acpi_target_sleep_state = ACPI_STATE_S0;
+#endif
+
+int acpi_sleep_prepare(u32 acpi_state)
+{
+#ifdef CONFIG_ACPI_SLEEP
+	/* do we have a wakeup address for S2 and S3? */
+	if (acpi_state == ACPI_STATE_S3) {
+		if (!acpi_wakeup_address) {
+			return -EFAULT;
+		}
+		acpi_set_firmware_waking_vector((acpi_physical_address)
+				virt_to_phys((void *)acpi_wakeup_address));
+
+	}
+	ACPI_FLUSH_CPU_CACHE();
+	acpi_enable_wakeup_device_prep(acpi_state);
+#endif
+	acpi_gpe_sleep_prepare(acpi_state);
+	acpi_enter_sleep_state_prep(acpi_state);
+	return 0;
+}

 #ifdef CONFIG_SUSPEND
 static struct pm_ops acpi_pm_ops;
···
	return -EINVAL;
 }

+#ifdef CONFIG_PM_SLEEP
 /**
  * acpi_pm_device_sleep_state - return preferred power state of ACPI device
  *	in the system sleep state given by %acpi_target_sleep_state
···
	*d_min_p = d_min;
	return d_max;
 }
+#endif
+
+static void acpi_power_off_prepare(void)
+{
+	/* Prepare to power off the system */
+	acpi_sleep_prepare(ACPI_STATE_S5);
+}
+
+static void acpi_power_off(void)
+{
+	/* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */
+	printk("%s called\n", __FUNCTION__);
+	local_irq_disable();
+	acpi_enter_sleep_state(ACPI_STATE_S5);
+}

 int __init acpi_sleep_init(void)
 {
···
	if (acpi_disabled)
		return 0;

+	sleep_states[ACPI_STATE_S0] = 1;
+	printk(KERN_INFO PREFIX "(supports S0");
+
 #ifdef CONFIG_SUSPEND
-	printk(KERN_INFO PREFIX "(supports");
-	for (i = ACPI_STATE_S0; i < ACPI_STATE_S4; i++) {
+	for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) {
		status = acpi_get_sleep_type_data(i, &type_a, &type_b);
		if (ACPI_SUCCESS(status)) {
			sleep_states[i] = 1;
			printk(" S%d", i);
		}
	}
-	printk(")\n");

	pm_set_ops(&acpi_pm_ops);
 #endif
···
	if (ACPI_SUCCESS(status)) {
		hibernation_set_ops(&acpi_hibernation_ops);
		sleep_states[ACPI_STATE_S4] = 1;
+		printk(" S4");
	}
-#else
-	sleep_states[ACPI_STATE_S4] = 0;
 #endif
-
+	status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b);
+	if (ACPI_SUCCESS(status)) {
+		sleep_states[ACPI_STATE_S5] = 1;
+		printk(" S5");
+		pm_power_off_prepare = acpi_power_off_prepare;
+		pm_power_off = acpi_power_off;
+	}
+	printk(")\n");
	return 0;
 }
-75
drivers/acpi/sleep/poweroff.c
···
-/*
- * poweroff.c - ACPI handler for powering off the system.
- *
- * AKA S5, but it is independent of whether or not the kernel supports
- * any other sleep support in the system.
- *
- * Copyright (c) 2005 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com>
- *
- * This file is released under the GPLv2.
- */
-
-#include <linux/pm.h>
-#include <linux/init.h>
-#include <acpi/acpi_bus.h>
-#include <linux/sysdev.h>
-#include <asm/io.h>
-#include "sleep.h"
-
-int acpi_sleep_prepare(u32 acpi_state)
-{
-#ifdef CONFIG_ACPI_SLEEP
-	/* do we have a wakeup address for S2 and S3? */
-	if (acpi_state == ACPI_STATE_S3) {
-		if (!acpi_wakeup_address) {
-			return -EFAULT;
-		}
-		acpi_set_firmware_waking_vector((acpi_physical_address)
-				virt_to_phys((void *)acpi_wakeup_address));
-
-	}
-	ACPI_FLUSH_CPU_CACHE();
-	acpi_enable_wakeup_device_prep(acpi_state);
-#endif
-	acpi_gpe_sleep_prepare(acpi_state);
-	acpi_enter_sleep_state_prep(acpi_state);
-	return 0;
-}
-
-#ifdef CONFIG_PM
-
-static void acpi_power_off_prepare(void)
-{
-	/* Prepare to power off the system */
-	acpi_sleep_prepare(ACPI_STATE_S5);
-}
-
-static void acpi_power_off(void)
-{
-	/* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */
-	printk("%s called\n", __FUNCTION__);
-	local_irq_disable();
-	/* Some SMP machines only can poweroff in boot CPU */
-	acpi_enter_sleep_state(ACPI_STATE_S5);
-}
-
-static int acpi_poweroff_init(void)
-{
-	if (!acpi_disabled) {
-		u8 type_a, type_b;
-		acpi_status status;
-
-		status =
-		    acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b);
-		if (ACPI_SUCCESS(status)) {
-			pm_power_off_prepare = acpi_power_off_prepare;
-			pm_power_off = acpi_power_off;
-		}
-	}
-	return 0;
-}
-
-late_initcall(acpi_poweroff_init);
-
-#endif	/* CONFIG_PM */
···
	dmactl = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_CMD);
	iowrite8(dmactl | ATA_DMA_START, ap->ioaddr.bmdma_addr + ATA_DMA_CMD);

-	/* Strictly, one may wish to issue a readb() here, to
+	/* Strictly, one may wish to issue an ioread8() here, to
	 * flush the mmio write.  However, control also passes
	 * to the hardware at this point, and it will interrupt
	 * us when we are to resume control.  So, in effect,
···
	 * is expected, so I think it is best to not add a readb()
	 * without first all the MMIO ATA cards/mobos.
	 * Or maybe I'm just being paranoid.
+	 *
+	 * FIXME: The posting of this write means I/O starts are
+	 * unnecessarily delayed for MMIO
	 */
 }
···
	u32 slot_stat, qc_active;
	int rc;

+	/* If PCIX_IRQ_WOC, there's an inherent race window between
+	 * clearing IRQ pending status and reading PORT_SLOT_STAT
+	 * which may cause spurious interrupts afterwards.  This is
+	 * unavoidable and much better than losing interrupts which
+	 * happens if IRQ pending is cleared after reading
+	 * PORT_SLOT_STAT.
+	 */
+	if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC)
+		writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT);
+
	slot_stat = readl(port + PORT_SLOT_STAT);

	if (unlikely(slot_stat & HOST_SSTAT_ATTN)) {
		sil24_error_intr(ap);
		return;
	}
-
-	if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC)
-		writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT);

	qc_active = slot_stat & ~HOST_SSTAT_ATTN;
	rc = ata_qc_complete_multiple(ap, qc_active, sil24_finish_qc);
···
		return;
	}

-	if (ata_ratelimit())
+	/* spurious interrupts are expected if PCIX_IRQ_WOC */
+	if (!(ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) && ata_ratelimit())
		ata_port_printk(ap, KERN_INFO, "spurious interrupt "
			"(slot_stat 0x%x active_tag %d sactive 0x%x)\n",
			slot_stat, ap->active_tag, ap->sactive);
drivers/base/core.c | +1
@@ -284 +284 @@
 
 	/* let the kset specific function add its keys */
 	pos = data;
+	memset(envp, 0, sizeof(envp));
 	retval = kset->uevent_ops->uevent(kset, &dev->kobj,
 					  envp, ARRAY_SIZE(envp),
 					  pos, PAGE_SIZE);
@@ -62 +62 @@
 
 static u32 hpet_nhpet, hpet_max_freq = HPET_USER_FREQ;
 
+/* This clocksource driver currently only works on ia64 */
+#ifdef CONFIG_IA64
 static void __iomem *hpet_mctr;
 
 static cycle_t read_hpet(void)
@@ -81 +79 @@
 	.flags = CLOCK_SOURCE_IS_CONTINUOUS,
 };
 static struct clocksource *hpet_clocksource;
+#endif
 
 /* A lock for concurrent access by app and isr hpet activity. */
 static DEFINE_SPINLOCK(hpet_lock);
@@ -946 +943 @@
 			printk(KERN_DEBUG "%s: 0x%lx is busy\n",
 				__FUNCTION__, hdp->hd_phys_address);
 			iounmap(hdp->hd_address);
-			return -EBUSY;
+			return AE_ALREADY_EXISTS;
 		}
 	} else if (res->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32) {
 		struct acpi_resource_fixed_memory32 *fixmem32;
 
 		fixmem32 = &res->data.fixed_memory32;
 		if (!fixmem32)
-			return -EINVAL;
+			return AE_NO_MEMORY;
 
 		hdp->hd_phys_address = fixmem32->address;
 		hdp->hd_address = ioremap(fixmem32->address,
@@ -963 +960 @@
 			printk(KERN_DEBUG "%s: 0x%lx is busy\n",
 				__FUNCTION__, hdp->hd_phys_address);
 			iounmap(hdp->hd_address);
-			return -EBUSY;
+			return AE_ALREADY_EXISTS;
 		}
 	} else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) {
 		struct acpi_resource_extended_irq *irqp;
drivers/char/mspec.c | +8 -18
@@ -155 +155 @@
  * mspec_close
  *
  * Called when unmapping a device mapping. Frees all mspec pages
- * belonging to the vma.
+ * belonging to all the vma's sharing this vma_data structure.
  */
 static void
 mspec_close(struct vm_area_struct *vma)
 {
 	struct vma_data *vdata;
-	int index, last_index, result;
+	int index, last_index;
 	unsigned long my_page;
 
 	vdata = vma->vm_private_data;
 
-	BUG_ON(vma->vm_start < vdata->vm_start || vma->vm_end > vdata->vm_end);
+	if (!atomic_dec_and_test(&vdata->refcnt))
+		return;
 
-	spin_lock(&vdata->lock);
-	index = (vma->vm_start - vdata->vm_start) >> PAGE_SHIFT;
-	last_index = (vma->vm_end - vdata->vm_start) >> PAGE_SHIFT;
-	for (; index < last_index; index++) {
+	last_index = (vdata->vm_end - vdata->vm_start) >> PAGE_SHIFT;
+	for (index = 0; index < last_index; index++) {
 		if (vdata->maddr[index] == 0)
 			continue;
 		/*
@@ -179 +180 @@
 		 */
 		my_page = vdata->maddr[index];
 		vdata->maddr[index] = 0;
-		spin_unlock(&vdata->lock);
-		result = mspec_zero_block(my_page, PAGE_SIZE);
-		if (!result)
+		if (!mspec_zero_block(my_page, PAGE_SIZE))
 			uncached_free_page(my_page);
 		else
 			printk(KERN_WARNING "mspec_close(): "
-			       "failed to zero page %i\n",
-			       result);
-		spin_lock(&vdata->lock);
+			       "failed to zero page %ld\n", my_page);
 	}
-	spin_unlock(&vdata->lock);
-
-	if (!atomic_dec_and_test(&vdata->refcnt))
-		return;
 
 	if (vdata->flags & VMD_VMALLOCED)
 		vfree(vdata);
 	else
 		kfree(vdata);
 }
-
 
 /*
  * mspec_nopfn
drivers/char/random.c | +6 -4
@@ -1550 +1550 @@
 	 *	As close as possible to RFC 793, which
 	 *	suggests using a 250 kHz clock.
 	 *	Further reading shows this assumes 2 Mb/s networks.
-	 *	For 10 Gb/s Ethernet, a 1 GHz clock is appropriate.
-	 *	That's funny, Linux has one built in!  Use it!
-	 *	(Networks are faster now - should this be increased?)
+	 *	For 10 Mb/s Ethernet, a 1 MHz clock is appropriate.
+	 *	For 10 Gb/s Ethernet, a 1 GHz clock should be ok, but
+	 *	we also need to limit the resolution so that the u32 seq
+	 *	overlaps less than one time per MSL (2 minutes).
+	 *	Choosing a clock of 64 ns period is OK. (period of 274 s)
 	 */
-	seq += ktime_get_real().tv64;
+	seq += ktime_get_real().tv64 >> 6;
 #if 0
 	printk("init_seq(%lx, %lx, %d, %d) = %d\n",
 	       saddr, daddr, sport, dport, seq);
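The arithmetic in the patch comment above checks out: shifting the nanosecond-resolution clock right by 6 bits gives a 64 ns tick, so a 32-bit sequence counter wraps every 2^32 * 64 ns, which is comfortably longer than the 2-minute MSL the comment cites. A quick sketch (plain Python, purely to verify the numbers, not kernel code):

```python
# Wrap period of a u32 sequence counter driven by a 64 ns clock tick,
# as described in the comment above (ktime >> 6 gives ~64 ns resolution).
TICK_NS = 64
wrap_period_s = (2**32 * TICK_NS) / 1e9
MSL_S = 120  # maximum segment lifetime assumed in the comment (2 minutes)

print(round(wrap_period_s, 1))   # about 274.9 seconds, matching "period of 274 s"
print(wrap_period_s > MSL_S)     # True: the u32 wraps less than once per MSL
```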
drivers/char/vt_ioctl.c | +10 -5
@@ -770 +770 @@
 	/*
 	 * Switching-from response
 	 */
+	acquire_console_sem();
 	if (vc->vt_newvt >= 0) {
 		if (arg == 0)
 			/*
@@ -785 +784 @@
 			 * complete the switch.
 			 */
 			int newvt;
-			acquire_console_sem();
 			newvt = vc->vt_newvt;
 			vc->vt_newvt = -1;
 			i = vc_allocate(newvt);
@@ -798 +798 @@
 			 * other console switches..
 			 */
 			complete_change_console(vc_cons[newvt].d);
-			release_console_sem();
 		}
 	}
 
@@ -809 +810 @@
 		/*
 		 * If it's just an ACK, ignore it
 		 */
-		if (arg != VT_ACKACQ)
+		if (arg != VT_ACKACQ) {
+			release_console_sem();
 			return -EINVAL;
+		}
 	}
+	release_console_sem();
 
 	return 0;
 
@@ -1210 +1208 @@
 		/*
 		 * Send the signal as privileged - kill_pid() will
 		 * tell us if the process has gone or something else
-		 * is awry
+		 * is awry.
+		 *
+		 * We need to set vt_newvt *before* sending the signal or we
+		 * have a race.
 		 */
+		vc->vt_newvt = new_vc->vc_num;
 		if (kill_pid(vc->vt_pid, vc->vt_mode.relsig, 1) == 0) {
 			/*
 			 * It worked. Mark the vt to switch to and
 			 * return. The process needs to send us a
 			 * VT_RELDISP ioctl to complete the switch.
 			 */
-			vc->vt_newvt = new_vc->vc_num;
 			return;
 		}
 
drivers/ieee1394/ieee1394_core.c | +1 -1
@@ -1273 +1273 @@
 	unregister_chrdev_region(IEEE1394_CORE_DEV, 256);
 }
 
-fs_initcall(ieee1394_init); /* same as ohci1394 */
+module_init(ieee1394_init);
 module_exit(ieee1394_cleanup);
 
 /* Exported symbols */
drivers/ieee1394/ohci1394.c | +1 -3
@@ -3537 +3537 @@
 	return pci_register_driver(&ohci1394_pci_driver);
 }
 
-/* Register before most other device drivers.
- * Useful for remote debugging via physical DMA, e.g. using firescope. */
-fs_initcall(ohci1394_init);
+module_init(ohci1394_init);
 module_exit(ohci1394_cleanup);
drivers/infiniband/hw/mlx4/qp.c | +49 -13
@@ -1211 +1211 @@
 	dseg->qkey = cpu_to_be32(wr->wr.ud.remote_qkey);
 }
 
-static void set_data_seg(struct mlx4_wqe_data_seg *dseg,
-			 struct ib_sge *sg)
+static void set_mlx_icrc_seg(void *dseg)
 {
-	dseg->byte_count = cpu_to_be32(sg->length);
+	u32 *t = dseg;
+	struct mlx4_wqe_inline_seg *iseg = dseg;
+
+	t[1] = 0;
+
+	/*
+	 * Need a barrier here before writing the byte_count field to
+	 * make sure that all the data is visible before the
+	 * byte_count field is set.  Otherwise, if the segment begins
+	 * a new cacheline, the HCA prefetcher could grab the 64-byte
+	 * chunk and get a valid (!= 0xffffffff) byte count but
+	 * stale data, and end up sending the wrong data.
+	 */
+	wmb();
+
+	iseg->byte_count = cpu_to_be32((1 << 31) | 4);
+}
+
+static void set_data_seg(struct mlx4_wqe_data_seg *dseg, struct ib_sge *sg)
+{
 	dseg->lkey = cpu_to_be32(sg->lkey);
 	dseg->addr = cpu_to_be64(sg->addr);
+
+	/*
+	 * Need a barrier here before writing the byte_count field to
+	 * make sure that all the data is visible before the
+	 * byte_count field is set.  Otherwise, if the segment begins
+	 * a new cacheline, the HCA prefetcher could grab the 64-byte
+	 * chunk and get a valid (!= 0xffffffff) byte count but
+	 * stale data, and end up sending the wrong data.
+	 */
+	wmb();
+
+	dseg->byte_count = cpu_to_be32(sg->length);
 }
 
 int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
@@ -1255 +1225 @@
 	struct mlx4_ib_qp *qp = to_mqp(ibqp);
 	void *wqe;
 	struct mlx4_wqe_ctrl_seg *ctrl;
+	struct mlx4_wqe_data_seg *dseg;
 	unsigned long flags;
 	int nreq;
 	int err = 0;
@@ -1355 +1324 @@
 			break;
 		}
 
-		for (i = 0; i < wr->num_sge; ++i) {
-			set_data_seg(wqe, wr->sg_list + i);
+		/*
+		 * Write data segments in reverse order, so as to
+		 * overwrite cacheline stamp last within each
+		 * cacheline.  This avoids issues with WQE
+		 * prefetching.
+		 */
 
-			wqe += sizeof (struct mlx4_wqe_data_seg);
-			size += sizeof (struct mlx4_wqe_data_seg) / 16;
-		}
+		dseg = wqe;
+		dseg += wr->num_sge - 1;
+		size += wr->num_sge * (sizeof (struct mlx4_wqe_data_seg) / 16);
 
 		/* Add one more inline data segment for ICRC for MLX sends */
-		if (qp->ibqp.qp_type == IB_QPT_SMI || qp->ibqp.qp_type == IB_QPT_GSI) {
-			((struct mlx4_wqe_inline_seg *) wqe)->byte_count =
-				cpu_to_be32((1 << 31) | 4);
-			((u32 *) wqe)[1] = 0;
-			wqe += sizeof (struct mlx4_wqe_data_seg);
+		if (unlikely(qp->ibqp.qp_type == IB_QPT_SMI ||
+			     qp->ibqp.qp_type == IB_QPT_GSI)) {
+			set_mlx_icrc_seg(dseg + 1);
 			size += sizeof (struct mlx4_wqe_data_seg) / 16;
 		}
+
+		for (i = wr->num_sge - 1; i >= 0; --i, --dseg)
+			set_data_seg(dseg, wr->sg_list + i);
 
 		ctrl->fence_size = (wr->send_flags & IB_SEND_FENCE ?
 				    MLX4_WQE_CTRL_FENCE : 0) | size;
drivers/input/joystick/Kconfig | +1 -1
@@ -277 +277 @@
 
 config JOYSTICK_XPAD_LEDS
 	bool "LED Support for Xbox360 controller 'BigX' LED"
-	depends on LEDS_CLASS && JOYSTICK_XPAD
+	depends on JOYSTICK_XPAD && (LEDS_CLASS=y || LEDS_CLASS=JOYSTICK_XPAD)
 	---help---
 	  This option enables support for the LED which surrounds the Big X on
 	  XBox 360 controller.
drivers/input/mouse/appletouch.c | +4 -2
@@ -328 +328 @@
 {
 	int x, y, x_z, y_z, x_f, y_f;
 	int retval, i, j;
+	int key;
 	struct atp *dev = urb->context;
 
 	switch (urb->status) {
@@ -469 +468 @@
 			      ATP_XFACT, &x_z, &x_f);
 	y = atp_calculate_abs(dev->xy_acc + ATP_XSENSORS, ATP_YSENSORS,
 			      ATP_YFACT, &y_z, &y_f);
+	key = dev->data[dev->datalen - 1] & 1;
 
 	if (x && y) {
 		if (dev->x_old != -1) {
@@ -507 +505 @@
 	   the first touch unless reinitialised. Do so if it's been
 	   idle for a while in order to avoid waking the kernel up
 	   several hundred times a second */
-	if (atp_is_geyser_3(dev)) {
+	if (!key && atp_is_geyser_3(dev)) {
 		dev->idlecount++;
 		if (dev->idlecount == 10) {
 			dev->valid = 0;
@@ -516 +514 @@
 		}
 	}
 
-	input_report_key(dev->input, BTN_LEFT, dev->data[dev->datalen - 1] & 1);
+	input_report_key(dev->input, BTN_LEFT, key);
 	input_sync(dev->input);
 
 exit:
drivers/kvm/Kconfig | +2 -1
@@ -6 +6 @@
 	depends on X86
 	default y
 	---help---
-	  Say Y here to get to see options for virtualization guest drivers.
+	  Say Y here to get to see options for using your Linux host to run other
+	  operating systems inside virtual machines (guests).
 	  This option alone does not add any kernel code.
 
 	  If you say N, all options in this submenu will be skipped and disabled.
drivers/lguest/lguest_asm.S | +3 -3
@@ -22 +22 @@
 	jmp lguest_init
 
 /*G:055 We create a macro which puts the assembler code between lgstart_ and
- * lgend_ markers.  These templates end up in the .init.text section, so they
- * are discarded after boot. */
+ * lgend_ markers.  These templates are put in the .text section: they can't be
+ * discarded after boot as we may need to patch modules, too. */
+.text
 #define LGUEST_PATCH(name, insns...)			\
 	lgstart_##name: insns; lgend_##name:;		\
 	.globl lgstart_##name; .globl lgend_##name
@@ -35 +34 @@
 LGUEST_PATCH(pushf, movl lguest_data+LGUEST_DATA_irq_enabled, %eax)
 /*:*/
 
-.text
 /* These demark the EIP range where host should never deliver interrupts. */
 .global lguest_noirq_start
 .global lguest_noirq_end
drivers/md/raid5.c | +7 -10
@@ -514 +514 @@
 	struct stripe_head *sh = stripe_head_ref;
 	struct bio *return_bi = NULL;
 	raid5_conf_t *conf = sh->raid_conf;
-	int i, more_to_read = 0;
+	int i;
 
 	pr_debug("%s: stripe %llu\n", __FUNCTION__,
 		(unsigned long long)sh->sector);
@@ -522 +522 @@
 	/* clear completed biofills */
 	for (i = sh->disks; i--; ) {
 		struct r5dev *dev = &sh->dev[i];
-		/* check if this stripe has new incoming reads */
-		if (dev->toread)
-			more_to_read++;
 
 		/* acknowledge completion of a biofill operation */
-		/* and check if we need to reply to a read request
-		 */
-		if (test_bit(R5_Wantfill, &dev->flags) && !dev->toread) {
+		/* and check if we need to reply to a read request,
+		 * new R5_Wantfill requests are held off until
+		 * !test_bit(STRIPE_OP_BIOFILL, &sh->ops.pending)
+		 */
+		if (test_and_clear_bit(R5_Wantfill, &dev->flags)) {
 			struct bio *rbi, *rbi2;
-			clear_bit(R5_Wantfill, &dev->flags);
 
 			/* The access to dev->read is outside of the
 			 * spin_lock_irq(&conf->device_lock), but is protected
@@ -556 +558 @@
 
 	return_io(return_bi);
 
-	if (more_to_read)
-		set_bit(STRIPE_HANDLE, &sh->state);
+	set_bit(STRIPE_HANDLE, &sh->state);
 	release_stripe(sh);
 }
 
@@ -1726 +1726 @@
 	case E1000_DEV_ID_82571EB_QUAD_COPPER:
 	case E1000_DEV_ID_82571EB_QUAD_FIBER:
 	case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE:
+	case E1000_DEV_ID_82571PT_QUAD_COPPER:
 	case E1000_DEV_ID_82546GB_QUAD_COPPER_KSP3:
 		/* quad port adapters only support WoL on port A */
 		if (!adapter->quad_port_a) {
drivers/net/e1000/e1000_hw.c | +1
@@ -387 +387 @@
 	case E1000_DEV_ID_82571EB_SERDES_DUAL:
 	case E1000_DEV_ID_82571EB_SERDES_QUAD:
 	case E1000_DEV_ID_82571EB_QUAD_COPPER:
+	case E1000_DEV_ID_82571PT_QUAD_COPPER:
 	case E1000_DEV_ID_82571EB_QUAD_FIBER:
 	case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE:
 		hw->mac_type = e1000_82571;
@@ -108 +108 @@
 	INTEL_E1000_ETHERNET_DEVICE(0x10BC),
 	INTEL_E1000_ETHERNET_DEVICE(0x10C4),
 	INTEL_E1000_ETHERNET_DEVICE(0x10C5),
+	INTEL_E1000_ETHERNET_DEVICE(0x10D5),
 	INTEL_E1000_ETHERNET_DEVICE(0x10D9),
 	INTEL_E1000_ETHERNET_DEVICE(0x10DA),
 	/* required last entry */
@@ -1102 +1101 @@
 	case E1000_DEV_ID_82571EB_QUAD_COPPER:
 	case E1000_DEV_ID_82571EB_QUAD_FIBER:
 	case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE:
+	case E1000_DEV_ID_82571PT_QUAD_COPPER:
 		/* if quad port adapter, disable WoL on all but port A */
 		if (global_quad_port_a != 0)
 			adapter->eeprom_wol = 0;
@@ -491 +491 @@
 	u16 hdrflags;
 	u16 tunnel_id, session_id;
 	int length;
-	struct udphdr *uh;
+	int offset;
 
 	tunnel = pppol2tp_sock_to_tunnel(sock);
 	if (tunnel == NULL)
 		goto error;
 
+	/* UDP always verifies the packet length. */
+	__skb_pull(skb, sizeof(struct udphdr));
+
 	/* Short packet? */
-	if (skb->len < sizeof(struct udphdr)) {
+	if (!pskb_may_pull(skb, 12)) {
 		PRINTK(tunnel->debug, PPPOL2TP_MSG_DATA, KERN_INFO,
 		       "%s: recv short packet (len=%d)\n", tunnel->name, skb->len);
 		goto error;
 	}
 
 	/* Point to L2TP header */
-	ptr = skb->data + sizeof(struct udphdr);
+	ptr = skb->data;
 
 	/* Get L2TP header flags */
 	hdrflags = ntohs(*(__be16*)ptr);
 
 	/* Trace packet contents, if enabled */
 	if (tunnel->debug & PPPOL2TP_MSG_DATA) {
+		length = min(16u, skb->len);
+		if (!pskb_may_pull(skb, length))
+			goto error;
+
 		printk(KERN_DEBUG "%s: recv: ", tunnel->name);
 
-		for (length = 0; length < 16; length++)
-			printk(" %02X", ptr[length]);
+		offset = 0;
+		do {
+			printk(" %02X", ptr[offset]);
+		} while (++offset < length);
+
 		printk("\n");
 	}
 
 	/* Get length of L2TP packet */
-	uh = (struct udphdr *) skb_transport_header(skb);
-	length = ntohs(uh->len) - sizeof(struct udphdr);
-
-	/* Too short? */
-	if (length < 12) {
-		PRINTK(tunnel->debug, PPPOL2TP_MSG_DATA, KERN_INFO,
-		       "%s: recv short L2TP packet (len=%d)\n", tunnel->name, length);
-		goto error;
-	}
+	length = skb->len;
 
 	/* If type is control packet, it is handled by userspace. */
 	if (hdrflags & L2TP_HDRFLAG_T) {
@@ -608 +606 @@
 			       "%s: recv data has no seq numbers when required. "
 			       "Discarding\n", session->name);
 			session->stats.rx_seq_discards++;
-			session->stats.rx_errors++;
 			goto discard;
 		}
@@ -626 +625 @@
 			       "%s: recv data has no seq numbers when required. "
 			       "Discarding\n", session->name);
 			session->stats.rx_seq_discards++;
-			session->stats.rx_errors++;
 			goto discard;
 		}
@@ -634 +634 @@
 	}
 
 	/* If offset bit set, skip it. */
-	if (hdrflags & L2TP_HDRFLAG_O)
-		ptr += 2 + ntohs(*(__be16 *) ptr);
+	if (hdrflags & L2TP_HDRFLAG_O) {
+		offset = ntohs(*(__be16 *)ptr);
+		skb->transport_header += 2 + offset;
+		if (!pskb_may_pull(skb, skb_transport_offset(skb) + 2))
+			goto discard;
+	}
 
-	skb_pull(skb, ptr - skb->data);
+	__skb_pull(skb, skb_transport_offset(skb));
 
 	/* Skip PPP header, if present. In testing, Microsoft L2TP clients
 	 * don't send the PPP header (PPP header compression enabled), but
@@ -677 +673 @@
 	 */
 	if (PPPOL2TP_SKB_CB(skb)->ns != session->nr) {
 		session->stats.rx_seq_discards++;
-		session->stats.rx_errors++;
 		PRINTK(session->debug, PPPOL2TP_MSG_SEQ, KERN_DEBUG,
 		       "%s: oos pkt %hu len %d discarded, "
 		       "waiting for %hu, reorder_q_len=%d\n",
@@ -701 +698 @@
 	return 0;
 
 discard:
+	session->stats.rx_errors++;
 	kfree_skb(skb);
 	sock_put(session->sock);
@@ -962 +958 @@
 	int data_len = skb->len;
 	struct inet_sock *inet;
 	__wsum csum = 0;
-	struct sk_buff *skb2 = NULL;
 	struct udphdr *uh;
 	unsigned int len;
@@ -992 +989 @@
 	 */
 	headroom = NET_SKB_PAD + sizeof(struct iphdr) +
 		sizeof(struct udphdr) + hdr_len + sizeof(ppph);
-	if (skb_headroom(skb) < headroom) {
-		skb2 = skb_realloc_headroom(skb, headroom);
-		if (skb2 == NULL)
-			goto abort;
-	} else
-		skb2 = skb;
-
-	/* Check that the socket has room */
-	if (atomic_read(&sk_tun->sk_wmem_alloc) < sk_tun->sk_sndbuf)
-		skb_set_owner_w(skb2, sk_tun);
-	else
-		goto discard;
+	if (skb_cow_head(skb, headroom))
+		goto abort;
 
 	/* Setup PPP header */
-	skb_push(skb2, sizeof(ppph));
-	skb2->data[0] = ppph[0];
-	skb2->data[1] = ppph[1];
+	__skb_push(skb, sizeof(ppph));
+	skb->data[0] = ppph[0];
+	skb->data[1] = ppph[1];
 
 	/* Setup L2TP header */
-	skb_push(skb2, hdr_len);
-	pppol2tp_build_l2tp_header(session, skb2->data);
+	pppol2tp_build_l2tp_header(session, __skb_push(skb, hdr_len));
 
 	/* Setup UDP header */
 	inet = inet_sk(sk_tun);
-	skb_push(skb2, sizeof(struct udphdr));
-	skb_reset_transport_header(skb2);
-	uh = (struct udphdr *) skb2->data;
+	__skb_push(skb, sizeof(*uh));
+	skb_reset_transport_header(skb);
+	uh = udp_hdr(skb);
 	uh->source = inet->sport;
 	uh->dest = inet->dport;
 	uh->len = htons(sizeof(struct udphdr) + hdr_len + sizeof(ppph) + data_len);
 	uh->check = 0;
 
-	/* Calculate UDP checksum if configured to do so */
+	/* *BROKEN* Calculate UDP checksum if configured to do so */
 	if (sk_tun->sk_no_check != UDP_CSUM_NOXMIT)
-		csum = udp_csum_outgoing(sk_tun, skb2);
+		csum = udp_csum_outgoing(sk_tun, skb);
 
 	/* Debug */
 	if (session->send_seq)
@@ -1028 +1036 @@
 
 	if (session->debug & PPPOL2TP_MSG_DATA) {
 		int i;
-		unsigned char *datap = skb2->data;
+		unsigned char *datap = skb->data;
 
 		printk(KERN_DEBUG "%s: xmit:", session->name);
 		for (i = 0; i < data_len; i++) {
@@ -1041 +1049 @@
 		printk("\n");
 	}
 
-	memset(&(IPCB(skb2)->opt), 0, sizeof(IPCB(skb2)->opt));
-	IPCB(skb2)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED |
-			       IPSKB_REROUTED);
-	nf_reset(skb2);
+	memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
+	IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED |
+			      IPSKB_REROUTED);
+	nf_reset(skb);
 
 	/* Get routing info from the tunnel socket */
-	dst_release(skb2->dst);
-	skb2->dst = sk_dst_get(sk_tun);
+	dst_release(skb->dst);
+	skb->dst = sk_dst_get(sk_tun);
 
 	/* Queue the packet to IP for output */
-	len = skb2->len;
-	rc = ip_queue_xmit(skb2, 1);
+	len = skb->len;
+	rc = ip_queue_xmit(skb, 1);
 
 	/* Update stats */
 	if (rc >= 0) {
@@ -1065 +1073 @@
 		session->stats.tx_errors++;
 	}
 
-	/* Free the original skb */
-	kfree_skb(skb);
-
 	return 1;
 
-discard:
-	/* Free the new skb. Caller will free original skb. */
-	if (skb2 != skb)
-		kfree_skb(skb2);
 abort:
-	return 0;
+	/* Free the original skb */
+	kfree_skb(skb);
+	return 1;
 }
 
/*****************************************************************************
@@ -1313 +1326 @@
 		goto err;
 	}
 
+	sk = sock->sk;
+
 	/* Quick sanity checks */
-	err = -ESOCKTNOSUPPORT;
-	if (sock->type != SOCK_DGRAM) {
+	err = -EPROTONOSUPPORT;
+	if (sk->sk_protocol != IPPROTO_UDP) {
 		PRINTK(-1, PPPOL2TP_MSG_CONTROL, KERN_ERR,
-		       "tunl %hu: fd %d wrong type, got %d, expected %d\n",
-		       tunnel_id, fd, sock->type, SOCK_DGRAM);
+		       "tunl %hu: fd %d wrong protocol, got %d, expected %d\n",
+		       tunnel_id, fd, sk->sk_protocol, IPPROTO_UDP);
 		goto err;
 	}
 	err = -EAFNOSUPPORT;
@@ -1332 +1343 @@
 	}
 
 	err = -ENOTCONN;
-	sk = sock->sk;
 
 	/* Check if this socket has already been prepped */
 	tunnel = (struct pppol2tp_tunnel *)sk->sk_user_data;
drivers/net/qla3xxx.c | +7
@@ -2248 +2248 @@
 		qdev->rsp_consumer_index) && (work_done < work_to_do)) {
 
 		net_rsp = qdev->rsp_current;
+		rmb();
+		/*
+		 * Fix 4032 chipe undocumented "feature" where bit-8 is set if the
+		 * inbound completion is for a VLAN.
+		 */
+		if (qdev->device_id == QL3032_DEVICE_ID)
+			net_rsp->opcode &= 0x7f;
 		switch (net_rsp->opcode) {
 
 		case OPCODE_OB_MAC_IOCB_FN0:
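The masking in the hunk above clears the high bit of the 8-bit opcode field: when the 4032 flags a VLAN completion it sets 0x80, and `opcode &= 0x7f` strips it so the `switch` still matches the documented opcode values. A small sketch of the same masking (plain Python; the opcode value below is hypothetical, chosen only for illustration):

```python
# The 4032 sets the top bit (0x80) of the 8-bit opcode for VLAN
# completions; masking with 0x7f restores the plain opcode so normal
# dispatch still works. The opcode constant here is hypothetical.
OPCODE_EXAMPLE = 0x10            # hypothetical example opcode

vlan_tagged = OPCODE_EXAMPLE | 0x80   # what the chip reports for a VLAN frame
restored = vlan_tagged & 0x7f         # what the driver dispatches on

print(hex(vlan_tagged))  # 0x90
print(hex(restored))     # 0x10
```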
drivers/net/r8169.c | +13 -1
@@ -1228 +1228 @@
 		return;
 	}
 
-	/* phy config for RTL8169s mac_version C chip */
+	if ((tp->mac_version != RTL_GIGA_MAC_VER_02) &&
+	    (tp->mac_version != RTL_GIGA_MAC_VER_03))
+		return;
+
 	mdio_write(ioaddr, 31, 0x0001);			//w 31 2 0 1
 	mdio_write(ioaddr, 21, 0x1000);			//w 21 15 0 1000
 	mdio_write(ioaddr, 24, 0x65c7);			//w 24 15 0 65c7
@@ -2570 +2567 @@
 		    (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) {
 			netif_wake_queue(dev);
 		}
+		/*
+		 * 8168 hack: TxPoll requests are lost when the Tx packets are
+		 * too close. Let's kick an extra TxPoll request when a burst
+		 * of start_xmit activity is detected (if it is not detected,
+		 * it is slow enough). -- FR
+		 */
+		smp_rmb();
+		if (tp->cur_tx != dirty_tx)
+			RTL_W8(TxPoll, NPQ);
 	}
 }
 
drivers/net/sky2.c | +294 -116
@@ -51 +51 @@
 #include "sky2.h"
 
 #define DRV_NAME		"sky2"
-#define DRV_VERSION		"1.17"
+#define DRV_VERSION		"1.18"
 #define PFX			DRV_NAME " "
 
 /*
@@ -118 +118 @@
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4351) }, /* 88E8036 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4352) }, /* 88E8038 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4353) }, /* 88E8039 */
+	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4354) }, /* 88E8040 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4356) }, /* 88EC033 */
+	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x435A) }, /* 88E8048 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4360) }, /* 88E8052 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4361) }, /* 88E8050 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4362) }, /* 88E8053 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4363) }, /* 88E8055 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4364) }, /* 88E8056 */
+	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4365) }, /* 88E8070 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4366) }, /* 88EC036 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4367) }, /* 88EC032 */
 	{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4368) }, /* 88EC034 */
@@ -150 +147 @@
 	"Extreme",	/* 0xb5 */
 	"EC",		/* 0xb6 */
 	"FE",		/* 0xb7 */
+	"FE+",		/* 0xb8 */
 };
 
 static void sky2_set_multicast(struct net_device *dev);
@@ -221 +217 @@
 	else
 		sky2_write8(hw, B2_Y2_CLK_GATE, 0);
 
-	if (hw->chip_id == CHIP_ID_YUKON_EC_U ||
-	    hw->chip_id == CHIP_ID_YUKON_EX) {
+	if (hw->flags & SKY2_HW_ADV_POWER_CTL) {
 		u32 reg;
 
 		sky2_pci_write32(hw, PCI_DEV_REG3, 0);
@@ -314 +311 @@
 	struct sky2_port *sky2 = netdev_priv(hw->dev[port]);
 	u16 ctrl, ct1000, adv, pg, ledctrl, ledover, reg;
 
-	if (sky2->autoneg == AUTONEG_ENABLE
-	    && !(hw->chip_id == CHIP_ID_YUKON_XL
-		 || hw->chip_id == CHIP_ID_YUKON_EC_U
-		 || hw->chip_id == CHIP_ID_YUKON_EX)) {
+	if (sky2->autoneg == AUTONEG_ENABLE &&
+	    !(hw->flags & SKY2_HW_NEWER_PHY)) {
 		u16 ectrl = gm_phy_read(hw, port, PHY_MARV_EXT_CTRL);
 
 		ectrl &= ~(PHY_M_EC_M_DSC_MSK | PHY_M_EC_S_DSC_MSK |
@@ -335 +334 @@
 
 	ctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL);
 	if (sky2_is_copper(hw)) {
-		if (hw->chip_id == CHIP_ID_YUKON_FE) {
+		if (!(hw->flags & SKY2_HW_GIGABIT)) {
 			/* enable automatic crossover */
 			ctrl |= PHY_M_PC_MDI_XMODE(PHY_M_PC_ENA_AUTO) >> 1;
+
+			if (hw->chip_id == CHIP_ID_YUKON_FE_P &&
+			    hw->chip_rev == CHIP_REV_YU_FE2_A0) {
+				u16 spec;
+
+				/* Enable Class A driver for FE+ A0 */
+				spec = gm_phy_read(hw, port, PHY_MARV_FE_SPEC_2);
+				spec |= PHY_M_FESC_SEL_CL_A;
+				gm_phy_write(hw, port, PHY_MARV_FE_SPEC_2, spec);
+			}
 		} else {
 			/* disable energy detect */
 			ctrl &= ~PHY_M_PC_EN_DET_MSK;
@@ -357 +346 @@
 
 		/* downshift on PHY 88E1112 and 88E1149 is changed */
 		if (sky2->autoneg == AUTONEG_ENABLE
-		    && (hw->chip_id == CHIP_ID_YUKON_XL
-			|| hw->chip_id == CHIP_ID_YUKON_EC_U
-			|| hw->chip_id == CHIP_ID_YUKON_EX)) {
+		    && (hw->flags & SKY2_HW_NEWER_PHY)) {
 			/* set downshift counter to 3x and enable downshift */
 			ctrl &= ~PHY_M_PC_DSC_MSK;
 			ctrl |= PHY_M_PC_DSC(2) | PHY_M_PC_DOWN_S_ENA;
@@ -373 +364 @@
 	gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ctrl);
 
 	/* special setup for PHY 88E1112 Fiber */
-	if (hw->chip_id == CHIP_ID_YUKON_XL && !sky2_is_copper(hw)) {
+	if (hw->chip_id == CHIP_ID_YUKON_XL && (hw->flags & SKY2_HW_FIBRE_PHY)) {
 		pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);
 
 		/* Fiber: select 1000BASE-X only mode MAC Specific Ctrl Reg. */
@@ -464 +455 @@
 
 	gma_write16(hw, port, GM_GP_CTRL, reg);
 
-	if (hw->chip_id != CHIP_ID_YUKON_FE)
+	if (hw->flags & SKY2_HW_GIGABIT)
 		gm_phy_write(hw, port, PHY_MARV_1000T_CTRL, ct1000);
 
 	gm_phy_write(hw, port, PHY_MARV_AUNE_ADV, adv);
@@ -485 +476 @@
 		ctrl &= ~PHY_M_FELP_LED1_MSK;
 		/* change ACT LED control to blink mode */
 		ctrl |= PHY_M_FELP_LED1_CTRL(LED_PAR_CTRL_ACT_BL);
+		gm_phy_write(hw, port, PHY_MARV_FE_LED_PAR, ctrl);
+		break;
+
+	case CHIP_ID_YUKON_FE_P:
+		/* Enable Link Partner Next Page */
+		ctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL);
+		ctrl |= PHY_M_PC_ENA_LIP_NP;
+
+		/* disable Energy Detect and enable scrambler */
+		ctrl &= ~(PHY_M_PC_ENA_ENE_DT | PHY_M_PC_DIS_SCRAMB);
+		gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ctrl);
+
+		/* set LED2 -> ACT, LED1 -> LINK, LED0 -> SPEED */
+		ctrl = PHY_M_FELP_LED2_CTRL(LED_PAR_CTRL_ACT_BL) |
+			PHY_M_FELP_LED1_CTRL(LED_PAR_CTRL_LINK) |
+			PHY_M_FELP_LED0_CTRL(LED_PAR_CTRL_SPEED);
+
 		gm_phy_write(hw, port, PHY_MARV_FE_LED_PAR, ctrl);
 		break;
@@ -574 +548 @@
 
 		/* set page register to 0 */
 		gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 0);
+	} else if (hw->chip_id == CHIP_ID_YUKON_FE_P &&
+		   hw->chip_rev == CHIP_REV_YU_FE2_A0) {
+		/* apply workaround for integrated resistors calibration */
+		gm_phy_write(hw, port, PHY_MARV_PAGE_ADDR, 17);
+		gm_phy_write(hw, port, PHY_MARV_PAGE_DATA, 0x3f60);
 	} else if (hw->chip_id != CHIP_ID_YUKON_EX) {
+		/* no effect on Yukon-XL */
 		gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl);
 
 		if (sky2->autoneg == AUTONEG_DISABLE || sky2->speed == SPEED_100) {
@@ -701 +669 @@
 
 static void sky2_set_tx_stfwd(struct sky2_hw *hw, unsigned port)
 {
-	if (hw->chip_id == CHIP_ID_YUKON_EX && hw->chip_rev != CHIP_REV_YU_EX_A0) {
+	struct net_device *dev = hw->dev[port];
+
+	if (dev->mtu <= ETH_DATA_LEN)
 		sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
-			     TX_STFW_ENA |
-			     (hw->dev[port]->mtu > ETH_DATA_LEN) ? TX_JUMBO_ENA : TX_JUMBO_DIS);
-	} else {
-		if (hw->dev[port]->mtu > ETH_DATA_LEN) {
-			/* set Tx GMAC FIFO Almost Empty Threshold */
-			sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR),
-				     (ECU_JUMBO_WM << 16) | ECU_AE_THR);
+			     TX_JUMBO_DIS | TX_STFW_ENA);
 
-			sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
-				     TX_JUMBO_ENA | TX_STFW_DIS);
+	else if (hw->chip_id != CHIP_ID_YUKON_EC_U)
+		sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+			     TX_STFW_ENA | TX_JUMBO_ENA);
+	else {
+		/* set Tx GMAC FIFO Almost Empty Threshold */
+		sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR),
+			     (ECU_JUMBO_WM << 16) | ECU_AE_THR);
 
-			/* Can't do offload because of lack of store/forward */
-			hw->dev[port]->features &= ~(NETIF_F_TSO | NETIF_F_SG
-						     | NETIF_F_ALL_CSUM);
-		} else
-			sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
-				     TX_JUMBO_DIS | TX_STFW_ENA);
+		sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T),
+			     TX_JUMBO_ENA | TX_STFW_DIS);
+
+		/* Can't do offload because of lack of store/forward */
+		dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | NETIF_F_ALL_CSUM);
 	}
 }
@@ -805 +773 @@
 	/* Configure Rx MAC FIFO */
 	sky2_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_CLR);
 	rx_reg = GMF_OPER_ON | GMF_RX_F_FL_ON;
-	if (hw->chip_id == CHIP_ID_YUKON_EX)
+	if (hw->chip_id == CHIP_ID_YUKON_EX ||
+	    hw->chip_id == CHIP_ID_YUKON_FE_P)
 		rx_reg |= GMF_RX_OVER_ON;
 
 	sky2_write32(hw, SK_REG(port, RX_GMF_CTRL_T), rx_reg);
@@ -815 +782 @@
 	sky2_write16(hw, SK_REG(port, RX_GMF_FL_MSK), GMR_FS_ANY_ERR);
 
 	/* Set threshold to 0xa (64 bytes) + 1 to workaround pause bug */
-	sky2_write16(hw, SK_REG(port, RX_GMF_FL_THR), RX_GMF_FL_THR_DEF+1);
+	reg = RX_GMF_FL_THR_DEF + 1;
+	/* Another magic mystery workaround from sk98lin */
+	if (hw->chip_id == CHIP_ID_YUKON_FE_P &&
+	    hw->chip_rev == CHIP_REV_YU_FE2_A0)
+		reg = 0x178;
+	sky2_write16(hw, SK_REG(port, RX_GMF_FL_THR), reg);
 
 	/* Configure Tx MAC FIFO */
 	sky2_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR);
 	sky2_write16(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_OPER_ON);
 
-	if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) {
+	/* On chips without ram buffer, pause is controled by MAC level */
+	if (sky2_read8(hw, B2_E_0) == 0) {
 		sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8);
 		sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8);
@@ -908 +869 @@
 	sky2->tx_prod = RING_NEXT(sky2->tx_prod, TX_RING_SIZE);
 	le->ctrl = 0;
 	return le;
+}
+
+static void tx_init(struct sky2_port *sky2)
+{
+	struct sky2_tx_le *le;
+
+	sky2->tx_prod = sky2->tx_cons = 0;
+	sky2->tx_tcpsum = 0;
+	sky2->tx_last_mss = 0;
+
+	le = get_tx_le(sky2);
+	le->addr = 0;
+	le->opcode = OP_ADDR64 | HW_OWNER;
+	sky2->tx_addr64 = 0;
 }
 
 static inline struct tx_ring_info *tx_le_re(struct sky2_port *sky2,
@@ -1020 +967 @@
  */
 static void rx_set_checksum(struct sky2_port *sky2)
 {
-	struct sky2_rx_le *le;
+	struct sky2_rx_le *le = sky2_next_rx(sky2);
 
-	if (sky2->hw->chip_id != CHIP_ID_YUKON_EX) {
-		le = sky2_next_rx(sky2);
-		le->addr = cpu_to_le32((ETH_HLEN << 16) | ETH_HLEN);
-		le->ctrl = 0;
-		le->opcode = OP_TCPSTART | HW_OWNER;
+	le->addr = cpu_to_le32((ETH_HLEN << 16) | ETH_HLEN);
+	le->ctrl = 0;
+	le->opcode = OP_TCPSTART | HW_OWNER;
 
-		sky2_write32(sky2->hw,
-			     Q_ADDR(rxqaddr[sky2->port], Q_CSR),
-			     sky2->rx_csum ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
-	}
-
+	sky2_write32(sky2->hw,
+		     Q_ADDR(rxqaddr[sky2->port], Q_CSR),
+		     sky2->rx_csum ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
 }
 
 /*
@@ -1224 +1175 @@
 
 	sky2_prefetch_init(hw, rxq, sky2->rx_le_map, RX_LE_SIZE - 1);
 
-	rx_set_checksum(sky2);
+	if (!(hw->flags & SKY2_HW_NEW_LE))
+		rx_set_checksum(sky2);
 
 	/* Space needed for frame data + headers rounded up */
 	size = roundup(sky2->netdev->mtu + ETH_HLEN + VLAN_HLEN, 8);
@@ -1296 +1246 @@
 	struct sky2_port *sky2 = netdev_priv(dev);
 	struct sky2_hw *hw = sky2->hw;
 	unsigned port = sky2->port;
-	u32 ramsize, imask;
+	u32 imask, ramsize;
 	int cap, err = -ENOMEM;
 	struct net_device *otherdev = hw->dev[sky2->port^1];
@@ -1334 +1284 @@
 			       GFP_KERNEL);
 	if (!sky2->tx_ring)
 		goto err_out;
-	sky2->tx_prod = sky2->tx_cons = 0;
+
+	tx_init(sky2);
 
 	sky2->rx_le = pci_alloc_consistent(hw->pdev, RX_LE_BYTES,
 					   &sky2->rx_le_map);
@@ -1354 +1303 @@
 
 	/* Register is number of 4K blocks on internal RAM buffer. */
 	ramsize = sky2_read8(hw, B2_E_0) * 4;
-	printk(KERN_INFO PFX "%s: ram buffer %dK\n", dev->name, ramsize);
-
 	if (ramsize > 0) {
 		u32 rxspace;
 
+		pr_debug(PFX "%s: ram buffer %dK\n", dev->name, ramsize);
 		if (ramsize < 16)
 			rxspace = ramsize / 2;
 		else
@@ -1486 +1436 @@
 	/* Check for TCP Segmentation Offload */
 	mss = skb_shinfo(skb)->gso_size;
 	if (mss != 0) {
-		if (hw->chip_id != CHIP_ID_YUKON_EX)
+
+		if (!(hw->flags & SKY2_HW_NEW_LE))
 			mss += ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb);
 
 		if (mss != sky2->tx_last_mss) {
 			le = get_tx_le(sky2);
 			le->addr = cpu_to_le32(mss);
-			if (hw->chip_id == CHIP_ID_YUKON_EX)
+
+			if (hw->flags & SKY2_HW_NEW_LE)
 				le->opcode = OP_MSS | HW_OWNER;
 			else
 				le->opcode = OP_LRGLEN | HW_OWNER;
@@ -1520 +1468 @@
 	/* Handle TCP checksum offload */
 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
 		/* On Yukon EX (some versions) encoding change. */
-		if (hw->chip_id == CHIP_ID_YUKON_EX
-		    && hw->chip_rev != CHIP_REV_YU_EX_B0)
+		if (hw->flags & SKY2_HW_AUTO_TX_SUM)
 			ctrl |= CALSUM;	/* auto checksum */
 		else {
 			const unsigned offset = skb_transport_offset(skb);
@@ -1673 +1622 @@
 	if (netif_msg_ifdown(sky2))
 		printk(KERN_INFO PFX "%s: disabling interface\n", dev->name);
 
-	if (netif_carrier_ok(dev) && --hw->active == 0)
-		del_timer(&hw->watchdog_timer);
-
 	/* Stop more packets from being queued */
 	netif_stop_queue(dev);
@@ -1756 +1708 @@
 
 static u16 sky2_phy_speed(const struct sky2_hw *hw, u16 aux)
 {
-	if (!sky2_is_copper(hw))
+	if (hw->flags & SKY2_HW_FIBRE_PHY)
 		return SPEED_1000;
 
-	if (hw->chip_id == CHIP_ID_YUKON_FE)
-	return (aux & PHY_M_PS_SPEED_100) ?
SPEED_100 : SPEED_10;17141714+ if (!(hw->flags & SKY2_HW_GIGABIT)) {17151715+ if (aux & PHY_M_PS_SPEED_100)17161716+ return SPEED_100;17171717+ else17181718+ return SPEED_10;17191719+ }1764172017651721 switch (aux & PHY_M_PS_SPEED_MSK) {17661722 case PHY_M_PS_SPEED_1000:···1797174517981746 netif_carrier_on(sky2->netdev);1799174718001800- if (hw->active++ == 0)18011801- mod_timer(&hw->watchdog_timer, jiffies + 1);18021802-17481748+ mod_timer(&hw->watchdog_timer, jiffies + 1);1803174918041750 /* Turn on link LED */18051751 sky2_write8(hw, SK_REG(port, LNK_LED_REG),18061752 LINKLED_ON | LINKLED_BLINK_OFF | LINKLED_LINKSYNC_OFF);1807175318081808- if (hw->chip_id == CHIP_ID_YUKON_XL18091809- || hw->chip_id == CHIP_ID_YUKON_EC_U18101810- || hw->chip_id == CHIP_ID_YUKON_EX) {17541754+ if (hw->flags & SKY2_HW_NEWER_PHY) {18111755 u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR);18121756 u16 led = PHY_M_LEDC_LOS_CTRL(1); /* link active */18131757···1848180018491801 netif_carrier_off(sky2->netdev);1850180218511851- /* Stop watchdog if both ports are not active */18521852- if (--hw->active == 0)18531853- del_timer(&hw->watchdog_timer);18541854-18551855-18561803 /* Turn on link LED */18571804 sky2_write8(hw, SK_REG(port, LNK_LED_REG), LINKLED_OFF);18581805···18901847 /* Since the pause result bits seem to in different positions on18911848 * different chips. 
look at registers.18921849 */18931893- if (!sky2_is_copper(hw)) {18501850+ if (hw->flags & SKY2_HW_FIBRE_PHY) {18941851 /* Shift for bits in fiber PHY */18951852 advert &= ~(ADVERTISE_PAUSE_CAP|ADVERTISE_PAUSE_ASYM);18961853 lpa &= ~(LPA_PAUSE_CAP|LPA_PAUSE_ASYM);···20011958 if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU)20021959 return -EINVAL;2003196020042004- if (new_mtu > ETH_DATA_LEN && hw->chip_id == CHIP_ID_YUKON_FE)19611961+ if (new_mtu > ETH_DATA_LEN &&19621962+ (hw->chip_id == CHIP_ID_YUKON_FE ||19631963+ hw->chip_id == CHIP_ID_YUKON_FE_P))20051964 return -EINVAL;2006196520071966 if (!netif_running(dev)) {···2020197520211976 synchronize_irq(hw->pdev->irq);2022197720232023- if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX)19781978+ if (sky2_read8(hw, B2_E_0) == 0)20241979 sky2_set_tx_stfwd(hw, port);2025198020261981 ctl = gma_read16(hw, port, GM_GP_CTRL);···21482103 struct sky2_port *sky2 = netdev_priv(dev);21492104 struct rx_ring_info *re = sky2->rx_ring + sky2->rx_next;21502105 struct sk_buff *skb = NULL;21062106+ u16 count = (status & GMR_FS_LEN) >> 16;21072107+21082108+#ifdef SKY2_VLAN_TAG_USED21092109+ /* Account for vlan tag */21102110+ if (sky2->vlgrp && (status & GMR_FS_VLAN))21112111+ count -= VLAN_HLEN;21122112+#endif2151211321522114 if (unlikely(netif_msg_rx_status(sky2)))21532115 printk(KERN_DEBUG PFX "%s: rx slot %u status 0x%x len %d\n",···21632111 sky2->rx_next = (sky2->rx_next + 1) % sky2->rx_pending;21642112 prefetch(sky2->rx_ring + sky2->rx_next);2165211321142114+ if (length < ETH_ZLEN || length > sky2->rx_data_size)21152115+ goto len_error;21162116+21172117+ /* This chip has hardware problems that generates bogus status.21182118+ * So do only marginal checking and expect higher level protocols21192119+ * to handle crap frames.21202120+ */21212121+ if (sky2->hw->chip_id == CHIP_ID_YUKON_FE_P &&21222122+ sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0 &&21232123+ length != count)21242124+ goto okay;21252125+21662126 if 
(status & GMR_FS_ANY_ERR)21672127 goto error;2168212821692129 if (!(status & GMR_FS_RX_OK))21702130 goto resubmit;2171213121722172- if (status >> 16 != length)21732173- goto len_mismatch;21322132+ /* if length reported by DMA does not match PHY, packet was truncated */21332133+ if (length != count)21342134+ goto len_error;2174213521362136+okay:21752137 if (length < copybreak)21762138 skb = receive_copy(sky2, re, length);21772139 else···2195212921962130 return skb;2197213121982198-len_mismatch:21322132+len_error:21992133 /* Truncation of overlength packets22002134 causes PHY length to not match MAC length */22012135 ++sky2->net_stats.rx_length_errors;21362136+ if (netif_msg_rx_err(sky2) && net_ratelimit())21372137+ pr_info(PFX "%s: rx length error: status %#x length %d\n",21382138+ dev->name, status, length);21392139+ goto resubmit;2202214022032141error:22042142 ++sky2->net_stats.rx_errors;···22722202 }2273220322742204 /* This chip reports checksum status differently */22752275- if (hw->chip_id == CHIP_ID_YUKON_EX) {22052205+ if (hw->flags & SKY2_HW_NEW_LE) {22762206 if (sky2->rx_csum &&22772207 (le->css & (CSS_ISIPV4 | CSS_ISIPV6)) &&22782208 (le->css & CSS_TCPUDPCSOK))···23132243 if (!sky2->rx_csum)23142244 break;2315224523162316- if (hw->chip_id == CHIP_ID_YUKON_EX)22462246+ /* If this happens then driver assuming wrong format */22472247+ if (unlikely(hw->flags & SKY2_HW_NEW_LE)) {22482248+ if (net_ratelimit())22492249+ printk(KERN_NOTICE "%s: unexpected"22502250+ " checksum status\n",22512251+ dev->name);23172252 break;22532253+ }2318225423192255 /* Both checksum counters are programmed to start at23202256 * the same offset, so unless there is a problem they···25122436 sky2_write32(hw, Q_ADDR(q, Q_CSR), BMU_CLR_IRQ_CHK);25132437}2514243825152515-/* Check for lost IRQ once a second */24392439+static int sky2_rx_hung(struct net_device *dev)24402440+{24412441+ struct sky2_port *sky2 = netdev_priv(dev);24422442+ struct sky2_hw *hw = sky2->hw;24432443+ unsigned port 
= sky2->port;24442444+ unsigned rxq = rxqaddr[port];24452445+ u32 mac_rp = sky2_read32(hw, SK_REG(port, RX_GMF_RP));24462446+ u8 mac_lev = sky2_read8(hw, SK_REG(port, RX_GMF_RLEV));24472447+ u8 fifo_rp = sky2_read8(hw, Q_ADDR(rxq, Q_RP));24482448+ u8 fifo_lev = sky2_read8(hw, Q_ADDR(rxq, Q_RL));24492449+24502450+ /* If idle and MAC or PCI is stuck */24512451+ if (sky2->check.last == dev->last_rx &&24522452+ ((mac_rp == sky2->check.mac_rp &&24532453+ mac_lev != 0 && mac_lev >= sky2->check.mac_lev) ||24542454+ /* Check if the PCI RX hang */24552455+ (fifo_rp == sky2->check.fifo_rp &&24562456+ fifo_lev != 0 && fifo_lev >= sky2->check.fifo_lev))) {24572457+ printk(KERN_DEBUG PFX "%s: hung mac %d:%d fifo %d (%d:%d)\n",24582458+ dev->name, mac_lev, mac_rp, fifo_lev, fifo_rp,24592459+ sky2_read8(hw, Q_ADDR(rxq, Q_WP)));24602460+ return 1;24612461+ } else {24622462+ sky2->check.last = dev->last_rx;24632463+ sky2->check.mac_rp = mac_rp;24642464+ sky2->check.mac_lev = mac_lev;24652465+ sky2->check.fifo_rp = fifo_rp;24662466+ sky2->check.fifo_lev = fifo_lev;24672467+ return 0;24682468+ }24692469+}24702470+25162471static void sky2_watchdog(unsigned long arg)25172472{25182473 struct sky2_hw *hw = (struct sky2_hw *) arg;24742474+ struct net_device *dev;2519247524762476+ /* Check for lost IRQ once a second */25202477 if (sky2_read32(hw, B0_ISRC)) {25212521- struct net_device *dev = hw->dev[0];25222522-24782478+ dev = hw->dev[0];25232479 if (__netif_rx_schedule_prep(dev))25242480 __netif_rx_schedule(dev);24812481+ } else {24822482+ int i, active = 0;24832483+24842484+ for (i = 0; i < hw->ports; i++) {24852485+ dev = hw->dev[i];24862486+ if (!netif_running(dev))24872487+ continue;24882488+ ++active;24892489+24902490+ /* For chips with Rx FIFO, check if stuck */24912491+ if ((hw->flags & SKY2_HW_FIFO_HANG_CHECK) &&24922492+ sky2_rx_hung(dev)) {24932493+ pr_info(PFX "%s: receiver hang detected\n",24942494+ dev->name);24952495+ schedule_work(&hw->restart_work);24962496+ 
return;24972497+ }24982498+ }24992499+25002500+ if (active == 0)25012501+ return;25252502 }2526250325272527- if (hw->active > 0)25282528- mod_timer(&hw->watchdog_timer, round_jiffies(jiffies + HZ));25042504+ mod_timer(&hw->watchdog_timer, round_jiffies(jiffies + HZ));25292505}2530250625312507/* Hardware/software error handling */···26742546#endif2675254726762548/* Chip internal frequency for clock calculations */26772677-static inline u32 sky2_mhz(const struct sky2_hw *hw)25492549+static u32 sky2_mhz(const struct sky2_hw *hw)26782550{26792551 switch (hw->chip_id) {26802552 case CHIP_ID_YUKON_EC:26812553 case CHIP_ID_YUKON_EC_U:26822554 case CHIP_ID_YUKON_EX:26832683- return 125; /* 125 Mhz */25552555+ return 125;25562556+26842557 case CHIP_ID_YUKON_FE:26852685- return 100; /* 100 Mhz */26862686- default: /* YUKON_XL */26872687- return 156; /* 156 Mhz */25582558+ return 100;25592559+25602560+ case CHIP_ID_YUKON_FE_P:25612561+ return 50;25622562+25632563+ case CHIP_ID_YUKON_XL:25642564+ return 156;25652565+25662566+ default:25672567+ BUG();26882568 }26892569}26902570···27172581 sky2_write8(hw, B0_CTST, CS_RST_CLR);2718258227192583 hw->chip_id = sky2_read8(hw, B2_CHIP_ID);27202720- if (hw->chip_id < CHIP_ID_YUKON_XL || hw->chip_id > CHIP_ID_YUKON_FE) {25842584+ hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4;25852585+25862586+ switch(hw->chip_id) {25872587+ case CHIP_ID_YUKON_XL:25882588+ hw->flags = SKY2_HW_GIGABIT25892589+ | SKY2_HW_NEWER_PHY;25902590+ if (hw->chip_rev < 3)25912591+ hw->flags |= SKY2_HW_FIFO_HANG_CHECK;25922592+25932593+ break;25942594+25952595+ case CHIP_ID_YUKON_EC_U:25962596+ hw->flags = SKY2_HW_GIGABIT25972597+ | SKY2_HW_NEWER_PHY25982598+ | SKY2_HW_ADV_POWER_CTL;25992599+ break;26002600+26012601+ case CHIP_ID_YUKON_EX:26022602+ hw->flags = SKY2_HW_GIGABIT26032603+ | SKY2_HW_NEWER_PHY26042604+ | SKY2_HW_NEW_LE26052605+ | SKY2_HW_ADV_POWER_CTL;26062606+26072607+ /* New transmit checksum */26082608+ if (hw->chip_rev != 
CHIP_REV_YU_EX_B0)26092609+ hw->flags |= SKY2_HW_AUTO_TX_SUM;26102610+ break;26112611+26122612+ case CHIP_ID_YUKON_EC:26132613+ /* This rev is really old, and requires untested workarounds */26142614+ if (hw->chip_rev == CHIP_REV_YU_EC_A1) {26152615+ dev_err(&hw->pdev->dev, "unsupported revision Yukon-EC rev A1\n");26162616+ return -EOPNOTSUPP;26172617+ }26182618+ hw->flags = SKY2_HW_GIGABIT | SKY2_HW_FIFO_HANG_CHECK;26192619+ break;26202620+26212621+ case CHIP_ID_YUKON_FE:26222622+ break;26232623+26242624+ case CHIP_ID_YUKON_FE_P:26252625+ hw->flags = SKY2_HW_NEWER_PHY26262626+ | SKY2_HW_NEW_LE26272627+ | SKY2_HW_AUTO_TX_SUM26282628+ | SKY2_HW_ADV_POWER_CTL;26292629+ break;26302630+ default:27212631 dev_err(&hw->pdev->dev, "unsupported chip type 0x%x\n",27222632 hw->chip_id);27232633 return -EOPNOTSUPP;27242634 }2725263527262726- hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4;27272727-27282728- /* This rev is really old, and requires untested workarounds */27292729- if (hw->chip_id == CHIP_ID_YUKON_EC && hw->chip_rev == CHIP_REV_YU_EC_A1) {27302730- dev_err(&hw->pdev->dev, "unsupported revision Yukon-%s (0x%x) rev %d\n",27312731- yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL],27322732- hw->chip_id, hw->chip_rev);27332733- return -EOPNOTSUPP;27342734- }27352735-27362636 hw->pmd_type = sky2_read8(hw, B2_PMD_TYP);26372637+ if (hw->pmd_type == 'L' || hw->pmd_type == 'S' || hw->pmd_type == 'P')26382638+ hw->flags |= SKY2_HW_FIBRE_PHY;26392639+26402640+27372641 hw->ports = 1;27382642 t8 = sky2_read8(hw, B2_Y2_HW_RES);27392643 if ((t8 & CFG_DUAL_MAC_MSK) == CFG_DUAL_MAC_MSK) {···2967279129682792 sky2->wol = wol->wolopts;2969279329702970- if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX)27942794+ if (hw->chip_id == CHIP_ID_YUKON_EC_U ||27952795+ hw->chip_id == CHIP_ID_YUKON_EX ||27962796+ hw->chip_id == CHIP_ID_YUKON_FE_P)29712797 sky2_write32(hw, B0_CTST, sky2->wol29722798 ? 
Y2_HW_WOL_ON : Y2_HW_WOL_OFF);29732799···29872809 | SUPPORTED_100baseT_Full29882810 | SUPPORTED_Autoneg | SUPPORTED_TP;2989281129902990- if (hw->chip_id != CHIP_ID_YUKON_FE)28122812+ if (hw->flags & SKY2_HW_GIGABIT)29912813 modes |= SUPPORTED_1000baseT_Half29922814 | SUPPORTED_1000baseT_Full;29932815 return modes;···30072829 ecmd->supported = sky2_supported_modes(hw);30082830 ecmd->phy_address = PHY_ADDR_MARV;30092831 if (sky2_is_copper(hw)) {30103010- ecmd->supported = SUPPORTED_10baseT_Half30113011- | SUPPORTED_10baseT_Full30123012- | SUPPORTED_100baseT_Half30133013- | SUPPORTED_100baseT_Full30143014- | SUPPORTED_1000baseT_Half30153015- | SUPPORTED_1000baseT_Full30163016- | SUPPORTED_Autoneg | SUPPORTED_TP;30172832 ecmd->port = PORT_TP;30182833 ecmd->speed = sky2->speed;30192834 } else {···39853814 dev->features |= NETIF_F_HIGHDMA;3986381539873816#ifdef SKY2_VLAN_TAG_USED39883988- dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;39893989- dev->vlan_rx_register = sky2_vlan_rx_register;38173817+ /* The workaround for FE+ status conflicts with VLAN tag detection. 
*/38183818+ if (!(sky2->hw->chip_id == CHIP_ID_YUKON_FE_P &&38193819+ sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0)) {38203820+ dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;38213821+ dev->vlan_rx_register = sky2_vlan_rx_register;38223822+ }39903823#endif3991382439923825 /* read the mac address */···40213846 return IRQ_NONE;4022384740233848 if (status & Y2_IS_IRQ_SW) {40244024- hw->msi = 1;38493849+ hw->flags |= SKY2_HW_USE_MSI;40253850 wake_up(&hw->msi_wait);40263851 sky2_write8(hw, B0_CTST, CS_CL_SW_IRQ);40273852 }···40493874 sky2_write8(hw, B0_CTST, CS_ST_SW_IRQ);40503875 sky2_read8(hw, B0_CTST);4051387640524052- wait_event_timeout(hw->msi_wait, hw->msi, HZ/10);38773877+ wait_event_timeout(hw->msi_wait, (hw->flags & SKY2_HW_USE_MSI), HZ/10);4053387840544054- if (!hw->msi) {38793879+ if (!(hw->flags & SKY2_HW_USE_MSI)) {40553880 /* MSI test failed, go back to INTx mode */40563881 dev_info(&pdev->dev, "No interrupt generated using MSI, "40573882 "switching to INTx mode.\n");···41844009 goto err_out_free_netdev;41854010 }4186401141874187- err = request_irq(pdev->irq, sky2_intr, hw->msi ? 0 : IRQF_SHARED,40124012+ err = request_irq(pdev->irq, sky2_intr,40134013+ (hw->flags & SKY2_HW_USE_MSI) ? 
0 : IRQF_SHARED,41884014 dev->name, hw);41894015 if (err) {41904016 dev_err(&pdev->dev, "cannot assign irq %d\n", pdev->irq);···42184042 return 0;4219404342204044err_out_unregister:42214221- if (hw->msi)40454045+ if (hw->flags & SKY2_HW_USE_MSI)42224046 pci_disable_msi(pdev);42234047 unregister_netdev(dev);42244048err_out_free_netdev:···42674091 sky2_read8(hw, B0_CTST);4268409242694093 free_irq(pdev->irq, hw);42704270- if (hw->msi)40944094+ if (hw->flags & SKY2_HW_USE_MSI)42714095 pci_disable_msi(pdev);42724096 pci_free_consistent(pdev, STATUS_LE_BYTES, hw->st_le, hw->st_dma);42734097 pci_release_regions(pdev);···43354159 pci_enable_wake(pdev, PCI_D0, 0);4336416043374161 /* Re-enable all clocks */43384338- if (hw->chip_id == CHIP_ID_YUKON_EX || hw->chip_id == CHIP_ID_YUKON_EC_U)41624162+ if (hw->chip_id == CHIP_ID_YUKON_EX ||41634163+ hw->chip_id == CHIP_ID_YUKON_EC_U ||41644164+ hw->chip_id == CHIP_ID_YUKON_FE_P)43394165 sky2_pci_write32(hw, PCI_DEV_REG3, 0);4340416643414167 sky2_reset(hw);
···
 	struct scsi_target *starget = sdev->sdev_target;
 	struct Scsi_Host *shost = sdev->host;
 	int len = sdev->inquiry_len;
+	int min_period = spi_min_period(starget);
+	int max_width = spi_max_width(starget);
 	/* first set us up for narrow async */
 	DV_SET(offset, 0);
 	DV_SET(width, 0);
-	
+
 	if (spi_dv_device_compare_inquiry(sdev, buffer, buffer, DV_LOOPS)
 	    != SPI_COMPARE_SUCCESS) {
 		starget_printk(KERN_ERR, starget, "Domain Validation Initial Inquiry Failed\n");
···
 		return;
 	}

+	if (!scsi_device_wide(sdev)) {
+		spi_max_width(starget) = 0;
+		max_width = 0;
+	}
+
 	/* test width */
-	if (i->f->set_width && spi_max_width(starget) &&
-	    scsi_device_wide(sdev)) {
+	if (i->f->set_width && max_width) {
 		i->f->set_width(starget, 1);

 		if (spi_dv_device_compare_inquiry(sdev, buffer,
···
 		    != SPI_COMPARE_SUCCESS) {
 			starget_printk(KERN_ERR, starget, "Wide Transfers Fail\n");
 			i->f->set_width(starget, 0);
+			/* Make sure we don't force wide back on by asking
+			 * for a transfer period that requires it */
+			max_width = 0;
+			if (min_period < 10)
+				min_period = 10;
 		}
 	}
···

 	/* now set up to the maximum */
 	DV_SET(offset, spi_max_offset(starget));
-	DV_SET(period, spi_min_period(starget));
+	DV_SET(period, min_period);
+
 	/* try QAS requests; this should be harmless to set if the
 	 * target supports it */
 	if (scsi_device_qas(sdev)) {
···
 		DV_SET(qas, 0);
 	}

-	if (scsi_device_ius(sdev) && spi_min_period(starget) < 9) {
+	if (scsi_device_ius(sdev) && min_period < 9) {
 		/* This u320 (or u640). Set IU transfers */
 		DV_SET(iu, 1);
 		/* Then set the optional parameters */
 		DV_SET(rd_strm, 1);
 		DV_SET(wr_flow, 1);
 		DV_SET(rti, 1);
-		if (spi_min_period(starget) == 8)
+		if (min_period == 8)
 			DV_SET(pcomp_en, 1);
 	} else {
 		DV_SET(iu, 0);
···
 	} else {
 		DV_SET(dt, 1);
 	}
+	/* set width last because it will pull all the other
+	 * parameters down to required values */
+	DV_SET(width, max_width);
+
 	/* Do the read only INQUIRY tests */
 	spi_dv_retrain(sdev, buffer, buffer + sdev->inquiry_len,
 		       spi_dv_device_compare_inquiry);
···
 #include <linux/tsacct_kern.h>
 #include <linux/cn_proc.h>
 #include <linux/audit.h>
-#include <linux/signalfd.h>

 #include <asm/uaccess.h>
 #include <asm/mmu_context.h>
···
 	 * and we can just re-use it all.
 	 */
 	if (atomic_read(&oldsighand->count) <= 1) {
-		signalfd_detach(tsk);
 		exit_itimers(sig);
 		return 0;
 	}
···
 	sig->flags = 0;

 no_thread_group:
-	signalfd_detach(tsk);
 	exit_itimers(sig);
 	if (leader)
 		release_task(leader);
···
  * contig. allocation, set to '1' to indicate we can deal with extents
  * of any size.
  */
-int ocfs2_claim_clusters(struct ocfs2_super *osb,
-			 handle_t *handle,
-			 struct ocfs2_alloc_context *ac,
-			 u32 min_clusters,
-			 u32 *cluster_start,
-			 u32 *num_clusters)
+int __ocfs2_claim_clusters(struct ocfs2_super *osb,
+			   handle_t *handle,
+			   struct ocfs2_alloc_context *ac,
+			   u32 min_clusters,
+			   u32 max_clusters,
+			   u32 *cluster_start,
+			   u32 *num_clusters)
 {
 	int status;
-	unsigned int bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given;
+	unsigned int bits_wanted = max_clusters;
 	u64 bg_blkno = 0;
 	u16 bg_bit_off;

 	mlog_entry_void();

-	BUG_ON(!ac);
 	BUG_ON(ac->ac_bits_given >= ac->ac_bits_wanted);

 	BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL
···
 bail:
 	mlog_exit(status);
 	return status;
+}
+
+int ocfs2_claim_clusters(struct ocfs2_super *osb,
+			 handle_t *handle,
+			 struct ocfs2_alloc_context *ac,
+			 u32 min_clusters,
+			 u32 *cluster_start,
+			 u32 *num_clusters)
+{
+	unsigned int bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given;
+
+	return __ocfs2_claim_clusters(osb, handle, ac, min_clusters,
+				      bits_wanted, cluster_start, num_clusters);
 }

 static inline int ocfs2_block_group_clear_bits(handle_t *handle,
fs/ocfs2/suballoc.h | +11
···
 			 u32 min_clusters,
 			 u32 *cluster_start,
 			 u32 *num_clusters);
+/*
+ * Use this variant of ocfs2_claim_clusters to specify a maxiumum
+ * number of clusters smaller than the allocation reserved.
+ */
+int __ocfs2_claim_clusters(struct ocfs2_super *osb,
+			   handle_t *handle,
+			   struct ocfs2_alloc_context *ac,
+			   u32 min_clusters,
+			   u32 max_clusters,
+			   u32 *cluster_start,
+			   u32 *num_clusters);

 int ocfs2_free_suballoc_bits(handle_t *handle,
 			     struct inode *alloc_inode,
fs/ocfs2/vote.c | +2 -2
···
 {
 	struct ocfs2_msg_hdr v_hdr;
 	__be32 v_reserved1;
-};
+} __attribute__ ((packed));

 /* Responses are given these values to maintain backwards
  * compatibility with older ocfs2 versions */
···
 {
 	struct ocfs2_msg_hdr r_hdr;
 	__be32 r_response;
-};
+} __attribute__ ((packed));

 struct ocfs2_vote_work {
 	struct list_head w_list;
fs/signalfd.c | +29 -161
···
  *  Now using anonymous inode source.
  *  Thanks to Oleg Nesterov for useful code review and suggestions.
  *  More comments and suggestions from Arnd Bergmann.
- * Sat May 19, 2007: Davi E. M. Arnaut <davi@haxent.com.br>
+ *  Sat May 19, 2007: Davi E. M. Arnaut <davi@haxent.com.br>
  *      Retrieve multiple signals with one read() call
+ *  Sun Jul 15, 2007: Davide Libenzi <davidel@xmailserver.org>
+ *      Attach to the sighand only during read() and poll().
  */

 #include <linux/file.h>
···
 #include <linux/signalfd.h>

 struct signalfd_ctx {
-	struct list_head lnk;
-	wait_queue_head_t wqh;
 	sigset_t sigmask;
-	struct task_struct *tsk;
 };
-
-struct signalfd_lockctx {
-	struct task_struct *tsk;
-	unsigned long flags;
-};
-
-/*
- * Tries to acquire the sighand lock. We do not increment the sighand
- * use count, and we do not even pin the task struct, so we need to
- * do it inside an RCU read lock, and we must be prepared for the
- * ctx->tsk going to NULL (in signalfd_deliver()), and for the sighand
- * being detached. We return 0 if the sighand has been detached, or
- * 1 if we were able to pin the sighand lock.
- */
-static int signalfd_lock(struct signalfd_ctx *ctx, struct signalfd_lockctx *lk)
-{
-	struct sighand_struct *sighand = NULL;
-
-	rcu_read_lock();
-	lk->tsk = rcu_dereference(ctx->tsk);
-	if (likely(lk->tsk != NULL))
-		sighand = lock_task_sighand(lk->tsk, &lk->flags);
-	rcu_read_unlock();
-
-	if (!sighand)
-		return 0;
-
-	if (!ctx->tsk) {
-		unlock_task_sighand(lk->tsk, &lk->flags);
-		return 0;
-	}
-
-	if (lk->tsk->tgid == current->tgid)
-		lk->tsk = current;
-
-	return 1;
-}
-
-static void signalfd_unlock(struct signalfd_lockctx *lk)
-{
-	unlock_task_sighand(lk->tsk, &lk->flags);
-}
-
-/*
- * This must be called with the sighand lock held.
- */
-void signalfd_deliver(struct task_struct *tsk, int sig)
-{
-	struct sighand_struct *sighand = tsk->sighand;
-	struct signalfd_ctx *ctx, *tmp;
-
-	BUG_ON(!sig);
-	list_for_each_entry_safe(ctx, tmp, &sighand->signalfd_list, lnk) {
-		/*
-		 * We use a negative signal value as a way to broadcast that the
-		 * sighand has been orphaned, so that we can notify all the
-		 * listeners about this. Remember the ctx->sigmask is inverted,
-		 * so if the user is interested in a signal, that corresponding
-		 * bit will be zero.
-		 */
-		if (sig < 0) {
-			if (ctx->tsk == tsk) {
-				ctx->tsk = NULL;
-				list_del_init(&ctx->lnk);
-				wake_up(&ctx->wqh);
-			}
-		} else {
-			if (!sigismember(&ctx->sigmask, sig))
-				wake_up(&ctx->wqh);
-		}
-	}
-}
-
-static void signalfd_cleanup(struct signalfd_ctx *ctx)
-{
-	struct signalfd_lockctx lk;
-
-	/*
-	 * This is tricky. If the sighand is gone, we do not need to remove
-	 * context from the list, the list itself won't be there anymore.
-	 */
-	if (signalfd_lock(ctx, &lk)) {
-		list_del(&ctx->lnk);
-		signalfd_unlock(&lk);
-	}
-	kfree(ctx);
-}

 static int signalfd_release(struct inode *inode, struct file *file)
 {
-	signalfd_cleanup(file->private_data);
+	kfree(file->private_data);
 	return 0;
 }

···
 {
 	struct signalfd_ctx *ctx = file->private_data;
 	unsigned int events = 0;
-	struct signalfd_lockctx lk;

-	poll_wait(file, &ctx->wqh, wait);
+	poll_wait(file, &current->sighand->signalfd_wqh, wait);

-	/*
-	 * Let the caller get a POLLIN in this case, ala socket recv() when
-	 * the peer disconnects.
-	 */
-	if (signalfd_lock(ctx, &lk)) {
-		if ((lk.tsk == current &&
-		     next_signal(&lk.tsk->pending, &ctx->sigmask) > 0) ||
-		    next_signal(&lk.tsk->signal->shared_pending,
-				&ctx->sigmask) > 0)
-			events |= POLLIN;
-		signalfd_unlock(&lk);
-	} else
+	spin_lock_irq(&current->sighand->siglock);
+	if (next_signal(&current->pending, &ctx->sigmask) ||
+	    next_signal(&current->signal->shared_pending,
+			&ctx->sigmask))
 		events |= POLLIN;
+	spin_unlock_irq(&current->sighand->siglock);

 	return events;
 }
···
 			     int nonblock)
 {
 	ssize_t ret;
-	struct signalfd_lockctx lk;
 	DECLARE_WAITQUEUE(wait, current);

-	if (!signalfd_lock(ctx, &lk))
-		return 0;
-
-	ret = dequeue_signal(lk.tsk, &ctx->sigmask, info);
+	spin_lock_irq(&current->sighand->siglock);
+	ret = dequeue_signal(current, &ctx->sigmask, info);
 	switch (ret) {
 	case 0:
 		if (!nonblock)
 			break;
 		ret = -EAGAIN;
 	default:
-		signalfd_unlock(&lk);
+		spin_unlock_irq(&current->sighand->siglock);
 		return ret;
 	}

-	add_wait_queue(&ctx->wqh, &wait);
+	add_wait_queue(&current->sighand->signalfd_wqh, &wait);
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE);
-		ret = dequeue_signal(lk.tsk, &ctx->sigmask, info);
-		signalfd_unlock(&lk);
+		ret = dequeue_signal(current, &ctx->sigmask, info);
 		if (ret != 0)
 			break;
 		if (signal_pending(current)) {
 			ret = -ERESTARTSYS;
 			break;
 		}
+		spin_unlock_irq(&current->sighand->siglock);
 		schedule();
-		ret = signalfd_lock(ctx, &lk);
-		if (unlikely(!ret)) {
-			/*
-			 * Let the caller read zero byte, ala socket
-			 * recv() when the peer disconnect. This test
-			 * must be done before doing a dequeue_signal(),
-			 * because if the sighand has been orphaned,
-			 * the dequeue_signal() call is going to crash
-			 * because ->sighand will be long gone.
-			 */
-			break;
-		}
+		spin_lock_irq(&current->sighand->siglock);
 	}
+	spin_unlock_irq(&current->sighand->siglock);

-	remove_wait_queue(&ctx->wqh, &wait);
+	remove_wait_queue(&current->sighand->signalfd_wqh, &wait);
 	__set_current_state(TASK_RUNNING);

 	return ret;
 }

 /*
- * Returns either the size of a "struct signalfd_siginfo", or zero if the
- * sighand we are attached to, has been orphaned. The "count" parameter
- * must be at least the size of a "struct signalfd_siginfo".
+ * Returns a multiple of the size of a "struct signalfd_siginfo", or a negative
+ * error code. The "count" parameter must be at least the size of a
+ * "struct signalfd_siginfo".
  */
 static ssize_t signalfd_read(struct file *file, char __user *buf, size_t count,
 			     loff_t *ppos)
···
 		return -EINVAL;

 	siginfo = (struct signalfd_siginfo __user *) buf;
-
 	do {
 		ret = signalfd_dequeue(ctx, &info, nonblock);
 		if (unlikely(ret <= 0))
···
 		nonblock = 1;
 	} while (--count);

-	return total ? total : ret;
+	return total ? total: ret;
 }

 static const struct file_operations signalfd_fops = {
···
 	.read		= signalfd_read,
 };

-/*
- * Create a file descriptor that is associated with our signal
- * state. We can pass it around to others if we want to, but
- * it will always be _our_ signal state.
- */
 asmlinkage long sys_signalfd(int ufd, sigset_t __user *user_mask, size_t sizemask)
 {
 	int error;
 	sigset_t sigmask;
 	struct signalfd_ctx *ctx;
-	struct sighand_struct *sighand;
 	struct file *file;
 	struct inode *inode;
-	struct signalfd_lockctx lk;

 	if (sizemask != sizeof(sigset_t) ||
 	    copy_from_user(&sigmask, user_mask, sizeof(sigmask)))
···
 	if (!ctx)
 		return -ENOMEM;

-	init_waitqueue_head(&ctx->wqh);
 	ctx->sigmask = sigmask;
-	ctx->tsk = current->group_leader;
-
-	sighand = current->sighand;
-	/*
-	 * Add this fd to the list of signal listeners.
-	 */
-	spin_lock_irq(&sighand->siglock);
-	list_add_tail(&ctx->lnk, &sighand->signalfd_list);
-	spin_unlock_irq(&sighand->siglock);

 	/*
 	 * When we call this, the initialization must be complete, since
···
 			fput(file);
 			return -EINVAL;
 		}
-		/*
-		 * We need to be prepared of the fact that the sighand this fd
-		 * is attached to, has been detched. In that case signalfd_lock()
-		 * will return 0, and we'll just skip setting the new mask.
-		 */
-		if (signalfd_lock(ctx, &lk)) {
-			ctx->sigmask = sigmask;
-			signalfd_unlock(&lk);
-		}
-		wake_up(&ctx->wqh);
+		spin_lock_irq(&current->sighand->siglock);
+		ctx->sigmask = sigmask;
+		spin_unlock_irq(&current->sighand->siglock);
+
+		wake_up(&current->sighand->signalfd_wqh);
 		fput(file);
 	}

 	return ufd;

 err_fdalloc:
-	signalfd_cleanup(ctx);
+	kfree(ctx);
 	return error;
 }
fs/splice.c | +34 -12
···12241224}1225122512261226/*12271227+ * Do a copy-from-user while holding the mmap_semaphore for reading, in a12281228+ * manner safe from deadlocking with simultaneous mmap() (grabbing mmap_sem12291229+ * for writing) and page faulting on the user memory pointed to by src.12301230+ * This assumes that we will very rarely hit the partial != 0 path, or this12311231+ * will not be a win.12321232+ */12331233+static int copy_from_user_mmap_sem(void *dst, const void __user *src, size_t n)12341234+{12351235+ int partial;12361236+12371237+ pagefault_disable();12381238+ partial = __copy_from_user_inatomic(dst, src, n);12391239+ pagefault_enable();12401240+12411241+ /*12421242+ * Didn't copy everything, drop the mmap_sem and do a faulting copy12431243+ */12441244+ if (unlikely(partial)) {12451245+ up_read(¤t->mm->mmap_sem);12461246+ partial = copy_from_user(dst, src, n);12471247+ down_read(¤t->mm->mmap_sem);12481248+ }12491249+12501250+ return partial;12511251+}12521252+12531253+/*12271254 * Map an iov into an array of pages and offset/length tupples. 
With the12281255 * partial_page structure, we can map several non-contiguous ranges into12291256 * our ones pages[] map instead of splitting that operation into pieces.···12631236{12641237 int buffers = 0, error = 0;1265123812661266- /*12671267- * It's ok to take the mmap_sem for reading, even12681268- * across a "get_user()".12691269- */12701239 down_read(¤t->mm->mmap_sem);1271124012721241 while (nr_vecs) {12731242 unsigned long off, npages;12431243+ struct iovec entry;12741244 void __user *base;12751245 size_t len;12761246 int i;1277124712781278- /*12791279- * Get user address base and length for this iovec.12801280- */12811281- error = get_user(base, &iov->iov_base);12821282- if (unlikely(error))12481248+ error = -EFAULT;12491249+ if (copy_from_user_mmap_sem(&entry, iov, sizeof(entry)))12831250 break;12841284- error = get_user(len, &iov->iov_len);12851285- if (unlikely(error))12861286- break;12511251+12521252+ base = entry.iov_base;12531253+ len = entry.iov_len;1287125412881255 /*12891256 * Sanity check this iovec. 0 read succeeds.12901257 */12581258+ error = 0;12911259 if (unlikely(!len))12921260 break;12931261 error = -EFAULT;
fs/ufs/super.c (+1 -3)

···
		goto again;
	}

-
+	sbi->s_flags = flags;/*after that line some functions use s_flags*/
	ufs_print_super_stuff(sb, usb1, usb2, usb3);

	/*
···
			UFS_MOUNT_UFSTYPE_44BSD)
		uspi->s_maxsymlinklen =
		    fs32_to_cpu(sb, usb3->fs_un2.fs_44.fs_maxsymlinklen);
-
-	sbi->s_flags = flags;

	inode = iget(sb, UFS_ROOTINO);
	if (!inode || is_bad_inode(inode))
fs/xfs/xfs_buf_item.h (-5)

···
#define	XFS_BLI_UDQUOT_BUF	0x4
#define XFS_BLI_PDQUOT_BUF	0x8
#define	XFS_BLI_GDQUOT_BUF	0x10
-/*
- * This flag indicates that the buffer contains newly allocated
- * inodes.
- */
-#define	XFS_BLI_INODE_NEW_BUF	0x20

#define	XFS_BLI_CHUNK		128
#define	XFS_BLI_SHIFT		7

fs/xfs/xfs_log_recover.c

···
/*ARGSUSED*/
STATIC void
xlog_recover_do_reg_buffer(
-	xfs_mount_t		*mp,
	xlog_recover_item_t	*item,
	xfs_buf_t		*bp,
	xfs_buf_log_format_t	*buf_f)
···
	unsigned int		*data_map = NULL;
	unsigned int		map_size = 0;
	int			error;
-	int			stale_buf = 1;
-
-	/*
-	 * Scan through the on-disk inode buffer and attempt to
-	 * determine if it has been written to since it was logged.
-	 *
-	 * - If any of the magic numbers are incorrect then the buffer is stale
-	 * - If any of the modes are non-zero then the buffer is not stale
-	 * - If all of the modes are zero and at least one of the generation
-	 *   counts is non-zero then the buffer is stale
-	 *
-	 * If the end result is a stale buffer then the log buffer is replayed
-	 * otherwise it is skipped.
-	 *
-	 * This heuristic is not perfect. It can be improved by scanning the
-	 * entire inode chunk for evidence that any of the inode clusters have
-	 * been updated. To fix this problem completely we will need a major
-	 * architectural change to the logging system.
-	 */
-	if (buf_f->blf_flags & XFS_BLI_INODE_NEW_BUF) {
-		xfs_dinode_t	*dip;
-		int		inodes_per_buf;
-		int		mode_count = 0;
-		int		gen_count = 0;
-
-		stale_buf = 0;
-		inodes_per_buf = XFS_BUF_COUNT(bp) >> mp->m_sb.sb_inodelog;
-		for (i = 0; i < inodes_per_buf; i++) {
-			dip = (xfs_dinode_t *)xfs_buf_offset(bp,
-				i * mp->m_sb.sb_inodesize);
-			if (be16_to_cpu(dip->di_core.di_magic) !=
-					XFS_DINODE_MAGIC) {
-				stale_buf = 1;
-				break;
-			}
-			if (be16_to_cpu(dip->di_core.di_mode))
-				mode_count++;
-			if (be16_to_cpu(dip->di_core.di_gen))
-				gen_count++;
-		}
-
-		if (!mode_count && gen_count)
-			stale_buf = 1;
-	}

	switch (buf_f->blf_type) {
	case XFS_LI_BUF:
···
					-1, 0, XFS_QMOPT_DOWARN,
					"dquot_buf_recover");
		}
-		if (!error && stale_buf)
+		if (!error)
			memcpy(xfs_buf_offset(bp,
				(uint)bit << XFS_BLI_SHIFT),	/* dest */
				item->ri_buf[i].i_addr,		/* source */
···
	if (log->l_quotaoffs_flag & type)
		return;

-	xlog_recover_do_reg_buffer(mp, item, bp, buf_f);
+	xlog_recover_do_reg_buffer(item, bp, buf_f);
}

/*
···
	    (XFS_BLI_UDQUOT_BUF|XFS_BLI_PDQUOT_BUF|XFS_BLI_GDQUOT_BUF)) {
		xlog_recover_do_dquot_buffer(mp, log, item, bp, buf_f);
	} else {
-		xlog_recover_do_reg_buffer(mp, item, bp, buf_f);
+		xlog_recover_do_reg_buffer(item, bp, buf_f);
	}
	if (error)
		return XFS_ERROR(error);
include/asm-i386/system.h

···
 */

-/* 
- * Actually only lfence would be needed for mb() because all stores done 
- * by the kernel should be already ordered. But keep a full barrier for now. 
- */
-
#define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
#define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
include/asm-mips/fcntl.h (+1)

···
#define O_SYNC		0x0010
#define O_NONBLOCK	0x0080
#define O_CREAT		0x0100	/* not fcntl */
+#define O_TRUNC		0x0200	/* not fcntl */
#define O_EXCL		0x0400	/* not fcntl */
#define O_NOCTTY	0x0800	/* not fcntl */
#define FASYNC		0x1000	/* fcntl, for BSD compatibility */
include/asm-mips/irq.h (+24 -8)

···
#define irq_canonicalize(irq) (irq)	/* Sane hardware, sane code ... */
#endif

+#ifdef CONFIG_MIPS_MT_SMTC
+
+struct irqaction;
+
+extern unsigned long irq_hwmask[];
+extern int setup_irq_smtc(unsigned int irq, struct irqaction * new,
+                          unsigned long hwmask);
+
+static inline void smtc_im_ack_irq(unsigned int irq)
+{
+	if (irq_hwmask[irq] & ST0_IM)
+		set_c0_status(irq_hwmask[irq] & ST0_IM);
+}
+
+#else
+
+static inline void smtc_im_ack_irq(unsigned int irq)
+{
+}
+
+#endif /* CONFIG_MIPS_MT_SMTC */
+
#ifdef CONFIG_MIPS_MT_SMTC_IM_BACKSTOP
+
/*
 * Clear interrupt mask handling "backstop" if irq_hwmask
 * entry so indicates. This implies that the ack() or end()
···
		~(irq_hwmask[irq] & 0x0000ff00));	\
} while (0)
#else
+
#define __DO_IRQ_SMTC_HOOK(irq) do { } while (0)
#endif

···
extern void arch_init_irq(void);
extern void spurious_interrupt(void);
-
-#ifdef CONFIG_MIPS_MT_SMTC
-struct irqaction;
-
-extern unsigned long irq_hwmask[];
-extern int setup_irq_smtc(unsigned int irq, struct irqaction * new,
-                          unsigned long hwmask);
-#endif /* CONFIG_MIPS_MT_SMTC */

extern int allocate_irqno(void);
extern void alloc_legacy_irqno(void);
include/asm-mips/page.h (+1 -1)

···
/*
 * __pa()/__va() should be used only during mem init.
 */
-#if defined(CONFIG_64BIT) && !defined(CONFIG_BUILD_ELF64)
+#ifdef CONFIG_64BIT
#define __pa(x)							\
({								\
	unsigned long __x = (unsigned long)(x);			\

include/asm-x86_64/pgtable.h

···
#define HAVE_ARCH_UNMAPPED_AREA

#define pgtable_cache_init()   do { } while (0)
+#define check_pgt_cache()      do { } while (0)

#define PAGE_AGP    PAGE_KERNEL_NOCACHE
#define HAVE_PAGE_AGP 1
include/linux/cpufreq.h (+3 -16)

···
 *                      CPUFREQ NOTIFIER INTERFACE                    *
 *********************************************************************/

-#ifdef CONFIG_CPU_FREQ
int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list);
-#else
-static inline int cpufreq_register_notifier(struct notifier_block *nb,
-						unsigned int list)
-{
-	return 0;
-}
-#endif
int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list);

#define CPUFREQ_TRANSITION_NOTIFIER	(0)
···
int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu);
int cpufreq_update_policy(unsigned int cpu);

+/* query the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */
+unsigned int cpufreq_get(unsigned int cpu);

-/*
- * query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it
- */
+/* query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it */
#ifdef CONFIG_CPU_FREQ
unsigned int cpufreq_quick_get(unsigned int cpu);
-unsigned int cpufreq_get(unsigned int cpu);
#else
static inline unsigned int cpufreq_quick_get(unsigned int cpu)
-{
-	return 0;
-}
-static inline unsigned int cpufreq_get(unsigned int cpu)
{
	return 0;
}

include/linux/sched.h

···
	atomic_t		count;
	struct k_sigaction	action[_NSIG];
	spinlock_t		siglock;
-	struct list_head	signalfd_list;
+	wait_queue_head_t	signalfd_wqh;
};

struct pacct_struct {
···
extern unsigned int sysctl_sched_batch_wakeup_granularity;
extern unsigned int sysctl_sched_stat_granularity;
extern unsigned int sysctl_sched_runtime_limit;
+extern unsigned int sysctl_sched_compat_yield;
extern unsigned int sysctl_sched_child_runs_first;
extern unsigned int sysctl_sched_features;
include/linux/signalfd.h (+4 -36)

···
#ifdef CONFIG_SIGNALFD

/*
- * Deliver the signal to listening signalfd. This must be called
- * with the sighand lock held. Same are the following that end up
- * calling signalfd_deliver().
- */
-void signalfd_deliver(struct task_struct *tsk, int sig);
-
-/*
- * No need to fall inside signalfd_deliver() if no signal listeners
- * are available.
+ * Deliver the signal to listening signalfd.
 */
static inline void signalfd_notify(struct task_struct *tsk, int sig)
{
-	if (unlikely(!list_empty(&tsk->sighand->signalfd_list)))
-		signalfd_deliver(tsk, sig);
-}
-
-/*
- * The signal -1 is used to notify the signalfd that the sighand
- * is on its way to be detached.
- */
-static inline void signalfd_detach_locked(struct task_struct *tsk)
-{
-	if (unlikely(!list_empty(&tsk->sighand->signalfd_list)))
-		signalfd_deliver(tsk, -1);
-}
-
-static inline void signalfd_detach(struct task_struct *tsk)
-{
-	struct sighand_struct *sighand = tsk->sighand;
-
-	if (unlikely(!list_empty(&sighand->signalfd_list))) {
-		spin_lock_irq(&sighand->siglock);
-		signalfd_deliver(tsk, -1);
-		spin_unlock_irq(&sighand->siglock);
-	}
+	if (unlikely(waitqueue_active(&tsk->sighand->signalfd_wqh)))
+		wake_up(&tsk->sighand->signalfd_wqh);
}

#else /* CONFIG_SIGNALFD */

-#define signalfd_deliver(t, s) do { } while (0)
-#define signalfd_notify(t, s) do { } while (0)
-#define signalfd_detach_locked(t) do { } while (0)
-#define signalfd_detach(t) do { } while (0)
+static inline void signalfd_notify(struct task_struct *tsk, int sig) { }

#endif /* CONFIG_SIGNALFD */

kernel/exit.c

···
#include <linux/pid_namespace.h>
#include <linux/ptrace.h>
#include <linux/profile.h>
-#include <linux/signalfd.h>
#include <linux/mount.h>
#include <linux/proc_fs.h>
#include <linux/kthread.h>
···
	rcu_read_lock();
	sighand = rcu_dereference(tsk->sighand);
	spin_lock(&sighand->siglock);
-
-	/*
-	 * Notify that this sighand has been detached. This must
-	 * be called with the tsk->sighand lock held. Also, this
-	 * access tsk->sighand internally, so it must be called
-	 * before tsk->sighand is reset.
-	 */
-	signalfd_detach_locked(tsk);

	posix_cpu_timers_exit(tsk);
	if (atomic_dec_and_test(&sig->count))
kernel/futex.c

···
void exit_robust_list(struct task_struct *curr)
{
	struct robust_list_head __user *head = curr->robust_list;
-	struct robust_list __user *entry, *pending;
-	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
+	struct robust_list __user *entry, *next_entry, *pending;
+	unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip;
	unsigned long futex_offset;
+	int rc;

	/*
	 * Fetch the list head (which was registered earlier, via
···
	if (fetch_robust_entry(&pending, &head->list_op_pending, &pip))
		return;

-	if (pending)
-		handle_futex_death((void __user *)pending + futex_offset,
-				   curr, pip);
-
+	next_entry = NULL;	/* avoid warning with gcc */
	while (entry != &head->list) {
+		/*
+		 * Fetch the next entry in the list before calling
+		 * handle_futex_death:
+		 */
+		rc = fetch_robust_entry(&next_entry, &entry->next, &next_pi);
		/*
		 * A pending lock might already be on the list, so
		 * don't process it twice:
···
			if (handle_futex_death((void __user *)entry + futex_offset,
						curr, pi))
				return;
-		/*
-		 * Fetch the next entry in the list:
-		 */
-		if (fetch_robust_entry(&entry, &entry->next, &pi))
+		if (rc)
			return;
+		entry = next_entry;
+		pi = next_pi;
		/*
		 * Avoid excessively long or circular lists:
		 */
···

		cond_resched();
	}
+
+	if (pending)
+		handle_futex_death((void __user *)pending + futex_offset,
+				   curr, pip);
}

long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
kernel/futex_compat.c (+18 -10)

···
void compat_exit_robust_list(struct task_struct *curr)
{
	struct compat_robust_list_head __user *head = curr->compat_robust_list;
-	struct robust_list __user *entry, *pending;
-	unsigned int limit = ROBUST_LIST_LIMIT, pi, pip;
-	compat_uptr_t uentry, upending;
+	struct robust_list __user *entry, *next_entry, *pending;
+	unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip;
+	compat_uptr_t uentry, next_uentry, upending;
	compat_long_t futex_offset;
+	int rc;

	/*
	 * Fetch the list head (which was registered earlier, via
···
	if (fetch_robust_entry(&upending, &pending,
			       &head->list_op_pending, &pip))
		return;
-	if (pending)
-		handle_futex_death((void __user *)pending + futex_offset, curr, pip);

+	next_entry = NULL;	/* avoid warning with gcc */
	while (entry != (struct robust_list __user *) &head->list) {
+		/*
+		 * Fetch the next entry in the list before calling
+		 * handle_futex_death:
+		 */
+		rc = fetch_robust_entry(&next_uentry, &next_entry,
+			(compat_uptr_t __user *)&entry->next, &next_pi);
		/*
		 * A pending lock might already be on the list, so
		 * dont process it twice:
···
					curr, pi))
				return;

-		/*
-		 * Fetch the next entry in the list:
-		 */
-		if (fetch_robust_entry(&uentry, &entry,
-				       (compat_uptr_t __user *)&entry->next, &pi))
+		if (rc)
			return;
+		uentry = next_uentry;
+		entry = next_entry;
+		pi = next_pi;
		/*
		 * Avoid excessively long or circular lists:
		 */
···
		cond_resched();
	}
+	if (pending)
+		handle_futex_death((void __user *)pending + futex_offset,
+				   curr, pip);
}

asmlinkage long
kernel/power/Kconfig (+1 -1)

···
config HIBERNATION_UP_POSSIBLE
	bool
-	depends on X86 || PPC64_SWSUSP || FRV || PPC32
+	depends on X86 || PPC64_SWSUSP || PPC32
	depends on !SMP
	default y
kernel/sched.c (+6 -4)

···
	p->prio = effective_prio(p);

+	if (rt_prio(p->prio))
+		p->sched_class = &rt_sched_class;
+	else
+		p->sched_class = &fair_sched_class;
+
	if (!p->sched_class->task_new || !sysctl_sched_child_runs_first ||
			(clone_flags & CLONE_VM) || task_cpu(p) != this_cpu ||
			!current->se.on_rq) {
···
	struct rq *rq = this_rq_lock();

	schedstat_inc(rq, yld_cnt);
-	if (unlikely(rq->nr_running == 1))
-		schedstat_inc(rq, yld_act_empty);
-	else
-		current->sched_class->yield_task(rq, current);
+	current->sched_class->yield_task(rq, current);

	/*
	 * Since we are going to call schedule() anyway, there's
kernel/sched_fair.c (+67 -6)

···
unsigned int sysctl_sched_min_granularity __read_mostly = 2000000ULL;

/*
+ * sys_sched_yield() compat mode
+ *
+ * This option switches the agressive yield implementation of the
+ * old scheduler back on.
+ */
+unsigned int __read_mostly sysctl_sched_compat_yield;
+
+/*
 * SCHED_BATCH wake-up granularity.
 * (default: 25 msec, units: nanoseconds)
 *
···
		se->block_start = 0;
		se->sum_sleep_runtime += delta;
+
+		/*
+		 * Blocking time is in units of nanosecs, so shift by 20 to
+		 * get a milliseconds-range estimation of the amount of
+		 * time that the task spent sleeping:
+		 */
+		if (unlikely(prof_on == SLEEP_PROFILING)) {
+			profile_hits(SLEEP_PROFILING, (void *)get_wchan(tsk),
+				     delta >> 20);
+		}
	}
#endif
}
···
}

/*
- * sched_yield() support is very simple - we dequeue and enqueue
+ * sched_yield() support is very simple - we dequeue and enqueue.
+ *
+ * If compat_yield is turned on then we requeue to the end of the tree.
 */
static void yield_task_fair(struct rq *rq, struct task_struct *p)
{
	struct cfs_rq *cfs_rq = task_cfs_rq(p);
+	struct rb_node **link = &cfs_rq->tasks_timeline.rb_node;
+	struct sched_entity *rightmost, *se = &p->se;
+	struct rb_node *parent;

-	__update_rq_clock(rq);
	/*
-	 * Dequeue and enqueue the task to update its
-	 * position within the tree:
+	 * Are we the only task in the tree?
	 */
-	dequeue_entity(cfs_rq, &p->se, 0);
-	enqueue_entity(cfs_rq, &p->se, 0);
+	if (unlikely(cfs_rq->nr_running == 1))
+		return;
+
+	if (likely(!sysctl_sched_compat_yield)) {
+		__update_rq_clock(rq);
+		/*
+		 * Dequeue and enqueue the task to update its
+		 * position within the tree:
+		 */
+		dequeue_entity(cfs_rq, &p->se, 0);
+		enqueue_entity(cfs_rq, &p->se, 0);
+
+		return;
+	}
+	/*
+	 * Find the rightmost entry in the rbtree:
+	 */
+	do {
+		parent = *link;
+		link = &parent->rb_right;
+	} while (*link);
+
+	rightmost = rb_entry(parent, struct sched_entity, run_node);
+	/*
+	 * Already in the rightmost position?
+	 */
+	if (unlikely(rightmost == se))
+		return;
+
+	/*
+	 * Minimally necessary key value to be last in the tree:
+	 */
+	se->fair_key = rightmost->fair_key + 1;
+
+	if (cfs_rq->rb_leftmost == &se->run_node)
+		cfs_rq->rb_leftmost = rb_next(&se->run_node);
+	/*
+	 * Relink the task to the rightmost position:
+	 */
+	rb_erase(&se->run_node, &cfs_rq->tasks_timeline);
+	rb_link_node(&se->run_node, parent, link);
+	rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline);
}

/*
kernel/signal.c (+3 -5)

···
	/* We only dequeue private signals from ourselves, we don't let
	 * signalfd steal them
	 */
-	if (likely(tsk == current))
-		signr = __dequeue_signal(&tsk->pending, mask, info);
+	signr = __dequeue_signal(&tsk->pending, mask, info);
	if (!signr) {
		signr = __dequeue_signal(&tsk->signal->shared_pending,
					 mask, info);
···
		}
	}
	}
-	if (likely(tsk == current))
-		recalc_sigpending();
+	recalc_sigpending();
	if (signr && unlikely(sig_kernel_stop(signr))) {
		/*
		 * Set a marker that we have dequeued a stop signal. Our
···
		if (!(tsk->signal->flags & SIGNAL_GROUP_EXIT))
			tsk->signal->flags |= SIGNAL_STOP_DEQUEUED;
	}
-	if (signr && likely(tsk == current) &&
+	if (signr &&
	    ((info->si_code & __SI_MASK) == __SI_TIMER) &&
	    info->si_sys_private){
		/*
kernel/time/tick-broadcast.c

···
int tick_resume_broadcast_oneshot(struct clock_event_device *bc)
{
-	int cpu = smp_processor_id();
-
-	/*
-	 * If the CPU is marked for broadcast, enforce oneshot
-	 * broadcast mode. The jinxed VAIO does not resume otherwise.
-	 * No idea why it ends up in a lower C State during resume
-	 * without notifying the clock events layer.
-	 */
-	if (cpu_isset(cpu, tick_broadcast_mask))
-		cpu_set(cpu, tick_broadcast_oneshot_mask);
-
	clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT);
-
-	if(!cpus_empty(tick_broadcast_oneshot_mask))
-		tick_broadcast_set_event(ktime_get(), 1);
-
-	return cpu_isset(cpu, tick_broadcast_oneshot_mask);
+	return 0;
}

/*
mm/hugetlb.c

···
	might_sleep();
	for (i = 0; i < (HPAGE_SIZE/PAGE_SIZE); i++) {
		cond_resched();
-		clear_user_highpage(page + i, addr);
+		clear_user_highpage(page + i, addr + i * PAGE_SIZE);
	}
}
net/ieee80211/ieee80211_rx.c (+6)

···
	frag = WLAN_GET_SEQ_FRAG(sc);
	hdrlen = ieee80211_get_hdrlen(fc);

+	if (skb->len < hdrlen) {
+		printk(KERN_INFO "%s: invalid SKB length %d\n",
+		       dev->name, skb->len);
+		goto rx_dropped;
+	}
+
	/* Put this code here so that we avoid duplicating it in all
	 * Rx paths. - Jean II */
#ifdef CONFIG_WIRELESS_EXT
net/ieee80211/softmac/ieee80211softmac_assoc.c (-2)

···
		ieee80211softmac_notify(mac->dev, IEEE80211SOFTMAC_EVENT_SCAN_FINISHED, ieee80211softmac_assoc_notify_scan, NULL);
		if (ieee80211softmac_start_scan(mac)) {
			dprintk(KERN_INFO PFX "Associate: failed to initiate scan. Is device up?\n");
-			mac->associnfo.associating = 0;
-			mac->associnfo.associated = 0;
		}
		goto out;
	} else {
net/ieee80211/softmac/ieee80211softmac_wx.c (+20 -34)

···
			      char *extra)
{
	struct ieee80211softmac_device *sm = ieee80211_priv(net_dev);
-	struct ieee80211softmac_network *n;
	struct ieee80211softmac_auth_queue_item *authptr;
	int length = 0;

check_assoc_again:
	mutex_lock(&sm->associnfo.mutex);
-	/* Check if we're already associating to this or another network
-	 * If it's another network, cancel and start over with our new network
-	 * If it's our network, ignore the change, we're already doing it!
-	 */
	if((sm->associnfo.associating || sm->associnfo.associated) &&
	   (data->essid.flags && data->essid.length)) {
-		/* Get the associating network */
-		n = ieee80211softmac_get_network_by_bssid(sm, sm->associnfo.bssid);
-		if(n && n->essid.len == data->essid.length &&
-		   !memcmp(n->essid.data, extra, n->essid.len)) {
-			dprintk(KERN_INFO PFX "Already associating or associated to "MAC_FMT"\n",
-				MAC_ARG(sm->associnfo.bssid));
-			goto out;
-		} else {
-			dprintk(KERN_INFO PFX "Canceling existing associate request!\n");
-			/* Cancel assoc work */
-			cancel_delayed_work(&sm->associnfo.work);
-			/* We don't have to do this, but it's a little cleaner */
-			list_for_each_entry(authptr, &sm->auth_queue, list)
-				cancel_delayed_work(&authptr->work);
-			sm->associnfo.bssvalid = 0;
-			sm->associnfo.bssfixed = 0;
-			sm->associnfo.associating = 0;
-			sm->associnfo.associated = 0;
-			/* We must unlock to avoid deadlocks with the assoc workqueue
-			 * on the associnfo.mutex */
-			mutex_unlock(&sm->associnfo.mutex);
-			flush_scheduled_work();
-			/* Avoid race! Check assoc status again. Maybe someone started an
-			 * association while we flushed. */
-			goto check_assoc_again;
-		}
+		dprintk(KERN_INFO PFX "Canceling existing associate request!\n");
+		/* Cancel assoc work */
+		cancel_delayed_work(&sm->associnfo.work);
+		/* We don't have to do this, but it's a little cleaner */
+		list_for_each_entry(authptr, &sm->auth_queue, list)
+			cancel_delayed_work(&authptr->work);
+		sm->associnfo.bssvalid = 0;
+		sm->associnfo.bssfixed = 0;
+		sm->associnfo.associating = 0;
+		sm->associnfo.associated = 0;
+		/* We must unlock to avoid deadlocks with the assoc workqueue
+		 * on the associnfo.mutex */
+		mutex_unlock(&sm->associnfo.mutex);
+		flush_scheduled_work();
+		/* Avoid race! Check assoc status again. Maybe someone started an
+		 * association while we flushed. */
+		goto check_assoc_again;
	}

	sm->associnfo.static_essid = 0;
···
		data->essid.length = sm->associnfo.req_essid.len;
		data->essid.flags = 1;  /* active */
		memcpy(extra, sm->associnfo.req_essid.data, sm->associnfo.req_essid.len);
-	}
-
+		dprintk(KERN_INFO PFX "Getting essid from req_essid\n");
+	} else if (sm->associnfo.associated || sm->associnfo.associating) {
		/* If we're associating/associated, return that */
-	if (sm->associnfo.associated || sm->associnfo.associating) {
		data->essid.length = sm->associnfo.associate_essid.len;
		data->essid.flags = 1;  /* active */
		memcpy(extra, sm->associnfo.associate_essid.data, sm->associnfo.associate_essid.len);
+		dprintk(KERN_INFO PFX "Getting essid from associate_essid\n");
	}
	mutex_unlock(&sm->associnfo.mutex);
net/ipv4/tcp_ipv4.c (+9 -10)

···
		return NULL;
	for (i = 0; i < tp->md5sig_info->entries4; i++) {
		if (tp->md5sig_info->keys4[i].addr == addr)
-			return (struct tcp_md5sig_key *)
-						&tp->md5sig_info->keys4[i];
+			return &tp->md5sig_info->keys4[i].base;
	}
	return NULL;
}
···
	key = (struct tcp4_md5sig_key *)tcp_v4_md5_do_lookup(sk, addr);
	if (key) {
		/* Pre-existing entry - just update that one. */
-		kfree(key->key);
-		key->key = newkey;
-		key->keylen = newkeylen;
+		kfree(key->base.key);
+		key->base.key = newkey;
+		key->base.keylen = newkeylen;
	} else {
		struct tcp_md5sig_info *md5sig;

···
			md5sig->alloced4++;
		}
		md5sig->entries4++;
-		md5sig->keys4[md5sig->entries4 - 1].addr = addr;
-		md5sig->keys4[md5sig->entries4 - 1].key = newkey;
-		md5sig->keys4[md5sig->entries4 - 1].keylen = newkeylen;
+		md5sig->keys4[md5sig->entries4 - 1].addr        = addr;
+		md5sig->keys4[md5sig->entries4 - 1].base.key    = newkey;
+		md5sig->keys4[md5sig->entries4 - 1].base.keylen = newkeylen;
	}
	return 0;
}
···
	for (i = 0; i < tp->md5sig_info->entries4; i++) {
		if (tp->md5sig_info->keys4[i].addr == addr) {
			/* Free the key */
-			kfree(tp->md5sig_info->keys4[i].key);
+			kfree(tp->md5sig_info->keys4[i].base.key);
			tp->md5sig_info->entries4--;

			if (tp->md5sig_info->entries4 == 0) {
···
	if (tp->md5sig_info->entries4) {
		int i;
		for (i = 0; i < tp->md5sig_info->entries4; i++)
-			kfree(tp->md5sig_info->keys4[i].key);
+			kfree(tp->md5sig_info->keys4[i].base.key);
		tp->md5sig_info->entries4 = 0;
		tcp_free_md5sig_pool();
	}
net/ipv6/tcp_ipv6.c (+9 -9)

···
	for (i = 0; i < tp->md5sig_info->entries6; i++) {
		if (ipv6_addr_cmp(&tp->md5sig_info->keys6[i].addr, addr) == 0)
-			return (struct tcp_md5sig_key *)&tp->md5sig_info->keys6[i];
+			return &tp->md5sig_info->keys6[i].base;
	}
	return NULL;
}
···
	key = (struct tcp6_md5sig_key*) tcp_v6_md5_do_lookup(sk, peer);
	if (key) {
		/* modify existing entry - just update that one */
-		kfree(key->key);
-		key->key = newkey;
-		key->keylen = newkeylen;
+		kfree(key->base.key);
+		key->base.key = newkey;
+		key->base.keylen = newkeylen;
	} else {
		/* reallocate new list if current one is full. */
		if (!tp->md5sig_info) {
···
		ipv6_addr_copy(&tp->md5sig_info->keys6[tp->md5sig_info->entries6].addr,
			       peer);
-		tp->md5sig_info->keys6[tp->md5sig_info->entries6].key = newkey;
-		tp->md5sig_info->keys6[tp->md5sig_info->entries6].keylen = newkeylen;
+		tp->md5sig_info->keys6[tp->md5sig_info->entries6].base.key = newkey;
+		tp->md5sig_info->keys6[tp->md5sig_info->entries6].base.keylen = newkeylen;

		tp->md5sig_info->entries6++;
	}
···
	for (i = 0; i < tp->md5sig_info->entries6; i++) {
		if (ipv6_addr_cmp(&tp->md5sig_info->keys6[i].addr, peer) == 0) {
			/* Free the key */
-			kfree(tp->md5sig_info->keys6[i].key);
+			kfree(tp->md5sig_info->keys6[i].base.key);
			tp->md5sig_info->entries6--;

			if (tp->md5sig_info->entries6 == 0) {
···
	if (tp->md5sig_info->entries6) {
		for (i = 0; i < tp->md5sig_info->entries6; i++)
-			kfree(tp->md5sig_info->keys6[i].key);
+			kfree(tp->md5sig_info->keys6[i].base.key);
		tp->md5sig_info->entries6 = 0;
		tcp_free_md5sig_pool();
	}
···
	if (tp->md5sig_info->entries4) {
		for (i = 0; i < tp->md5sig_info->entries4; i++)
-			kfree(tp->md5sig_info->keys4[i].key);
+			kfree(tp->md5sig_info->keys4[i].base.key);
		tp->md5sig_info->entries4 = 0;
		tcp_free_md5sig_pool();
	}
net/mac80211/rc80211_simple.c

···
}

-module_init(rate_control_simple_init);
+subsys_initcall(rate_control_simple_init);
module_exit(rate_control_simple_exit);

MODULE_DESCRIPTION("Simple rate control algorithm for ieee80211");
net/netfilter/nfnetlink_log.c

···
	unsigned int qlen;		/* number of nlmsgs in skb */
	struct sk_buff *skb;		/* pre-allocatd skb */
-	struct nlmsghdr *lastnlh;	/* netlink header of last msg in skb */
	struct timer_list timer;
	int peer_pid;			/* PID of the peer process */
···
static int
__nfulnl_send(struct nfulnl_instance *inst)
{
-	int status;
+	int status = -1;

	if (inst->qlen > 1)
-		inst->lastnlh->nlmsg_type = NLMSG_DONE;
+		NLMSG_PUT(inst->skb, 0, 0,
+			  NLMSG_DONE,
+			  sizeof(struct nfgenmsg));

	status = nfnetlink_unicast(inst->skb, inst->peer_pid, MSG_DONTWAIT);
	if (status < 0) {
···

	inst->qlen = 0;
	inst->skb = NULL;
-	inst->lastnlh = NULL;

+nlmsg_failure:
	return status;
}
···
	}

	nlh->nlmsg_len = inst->skb->tail - old_tail;
-	inst->lastnlh = nlh;
	return 0;

nlmsg_failure:
···
	}

	if (inst->qlen >= qthreshold ||
-	    (inst->skb && size > skb_tailroom(inst->skb))) {
+	    (inst->skb && size >
+	     skb_tailroom(inst->skb) - sizeof(struct nfgenmsg))) {
		/* either the queue len is too high or we don't have
		 * enough room in the skb left. flush to userspace. */
		UDEBUG("flushing old skb\n");
+34-19
net/sched/sch_sfq.c
···
 #include <linux/init.h>
 #include <linux/ipv6.h>
 #include <linux/skbuff.h>
+#include <linux/jhash.h>
 #include <net/ip.h>
 #include <net/netlink.h>
 #include <net/pkt_sched.h>
···
 /* Variables */
 	struct timer_list perturb_timer;
-	int perturbation;
+	u32 perturbation;
 	sfq_index tail;		/* Index of current slot in round */
 	sfq_index max_depth;	/* Maximal depth */
···
 
 static __inline__ unsigned sfq_fold_hash(struct sfq_sched_data *q, u32 h, u32 h1)
 {
-	int pert = q->perturbation;
-
-	/* Have we any rotation primitives? If not, WHY? */
-	h ^= (h1<<pert) ^ (h1>>(0x1F - pert));
-	h ^= h>>10;
-	return h & 0x3FF;
+	return jhash_2words(h, h1, q->perturbation) & (SFQ_HASH_DIVISOR - 1);
 }
 
 static unsigned sfq_hash(struct sfq_sched_data *q, struct sk_buff *skb)
···
 		q->ht[hash] = x = q->dep[SFQ_DEPTH].next;
 		q->hash[x] = hash;
 	}
+	/* If selected queue has length q->limit, this means that
+	 * all another queues are empty and that we do simple tail drop,
+	 * i.e. drop _this_ packet.
+	 */
+	if (q->qs[x].qlen >= q->limit)
+		return qdisc_drop(skb, sch);
+
 	sch->qstats.backlog += skb->len;
 	__skb_queue_tail(&q->qs[x], skb);
 	sfq_inc(q, x);
···
 			q->tail = x;
 		}
 	}
-	if (++sch->q.qlen < q->limit-1) {
+	if (++sch->q.qlen <= q->limit) {
 		sch->bstats.bytes += skb->len;
 		sch->bstats.packets++;
 		return 0;
···
 	}
 	sch->qstats.backlog += skb->len;
 	__skb_queue_head(&q->qs[x], skb);
+	/* If selected queue has length q->limit+1, this means that
+	 * all another queues are empty and we do simple tail drop.
+	 * This packet is still requeued at head of queue, tail packet
+	 * is dropped.
+	 */
+	if (q->qs[x].qlen > q->limit) {
+		skb = q->qs[x].prev;
+		__skb_unlink(skb, &q->qs[x]);
+		sch->qstats.drops++;
+		sch->qstats.backlog -= skb->len;
+		kfree_skb(skb);
+		return NET_XMIT_CN;
+	}
 	sfq_inc(q, x);
 	if (q->qs[x].qlen == 1) {	/* The flow is new */
 		if (q->tail == SFQ_DEPTH) {	/* It is the first flow */
···
 			q->tail = x;
 		}
 	}
-	if (++sch->q.qlen < q->limit - 1) {
+	if (++sch->q.qlen <= q->limit) {
 		sch->qstats.requeues++;
 		return 0;
···
 	struct Qdisc *sch = (struct Qdisc*)arg;
 	struct sfq_sched_data *q = qdisc_priv(sch);
 
-	q->perturbation = net_random()&0x1F;
+	get_random_bytes(&q->perturbation, 4);
 
-	if (q->perturb_period) {
-		q->perturb_timer.expires = jiffies + q->perturb_period;
-		add_timer(&q->perturb_timer);
-	}
+	if (q->perturb_period)
+		mod_timer(&q->perturb_timer, jiffies + q->perturb_period);
 }
 
 static int sfq_change(struct Qdisc *sch, struct rtattr *opt)
···
 	q->quantum = ctl->quantum ? : psched_mtu(sch->dev);
 	q->perturb_period = ctl->perturb_period*HZ;
 	if (ctl->limit)
-		q->limit = min_t(u32, ctl->limit, SFQ_DEPTH);
+		q->limit = min_t(u32, ctl->limit, SFQ_DEPTH - 1);
 
 	qlen = sch->q.qlen;
-	while (sch->q.qlen >= q->limit-1)
+	while (sch->q.qlen > q->limit)
 		sfq_drop(sch);
 	qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen);
 
 	del_timer(&q->perturb_timer);
 	if (q->perturb_period) {
-		q->perturb_timer.expires = jiffies + q->perturb_period;
-		add_timer(&q->perturb_timer);
+		mod_timer(&q->perturb_timer, jiffies + q->perturb_period);
+		get_random_bytes(&q->perturbation, 4);
 	}
 	sch_tree_unlock(sch);
 	return 0;
···
 		q->dep[i+SFQ_DEPTH].next = i+SFQ_DEPTH;
 		q->dep[i+SFQ_DEPTH].prev = i+SFQ_DEPTH;
 	}
-	q->limit = SFQ_DEPTH;
+	q->limit = SFQ_DEPTH - 1;
 	q->max_depth = 0;
 	q->tail = SFQ_DEPTH;
 	if (opt == NULL) {
 		q->quantum = psched_mtu(sch->dev);
 		q->perturb_period = 0;
+		get_random_bytes(&q->perturbation, 4);
 	} else {
 		int err = sfq_change(sch, opt);
 		if (err)
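The sch_sfq patch above clamps the configured limit to SFQ_DEPTH - 1 and tail-drops as soon as the selected flow already holds `limit` packets. A toy model of that bound, with illustrative names (`MODEL_SFQ_DEPTH`, `model_enqueue` and the 4-flow array are inventions for the sketch, not kernel code):

```c
#include <assert.h>

/* Toy model of the SFQ enqueue bound: with MODEL_SFQ_DEPTH slots per
 * flow queue, the qdisc limit must be at most MODEL_SFQ_DEPTH - 1, and
 * a packet is tail-dropped once the selected flow holds `limit`
 * packets. */
#define MODEL_SFQ_DEPTH 128

struct model_sfq {
	unsigned limit;		/* clamped to MODEL_SFQ_DEPTH - 1 */
	unsigned qlen[4];	/* per-flow queue lengths (toy: 4 flows) */
	unsigned total;		/* total packets queued */
};

static unsigned clamp_limit(unsigned requested)
{
	/* mirrors: q->limit = min_t(u32, ctl->limit, SFQ_DEPTH - 1) */
	return requested < MODEL_SFQ_DEPTH - 1 ? requested : MODEL_SFQ_DEPTH - 1;
}

/* Returns 1 if the packet is accepted, 0 if tail-dropped. */
static int model_enqueue(struct model_sfq *q, unsigned flow)
{
	/* If the selected queue already holds `limit` packets, every
	 * other queue must be empty: do a plain tail drop of this one. */
	if (q->qlen[flow] >= q->limit)
		return 0;
	q->qlen[flow]++;
	q->total++;
	return 1;
}
```

The off-by-one the patch fixes is visible here: with the old `limit = SFQ_DEPTH` and `qlen < limit-1` checks, a limit of 2 could index past the flow queue.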
···
 		if (SCTP_CID_SHUTDOWN_COMPLETE == ch->type)
 			goto discard;
 
+		/* RFC 4460, 2.11.2
+		 * This will discard packets with INIT chunk bundled as
+		 * subsequent chunks in the packet.  When INIT is first,
+		 * the normal INIT processing will discard the chunk.
+		 */
+		if (SCTP_CID_INIT == ch->type && (void *)ch != skb->data)
+			goto discard;
+
 		/* RFC 8.4, 7) If the packet contains a "Stale cookie" ERROR
 		 * or a COOKIE ACK the SCTP Packet should be silently
 		 * discarded.
net/sctp/inqueue.c (+8)
···
 		/* Force chunk->skb->data to chunk->chunk_end. */
 		skb_pull(chunk->skb,
 			 chunk->chunk_end - chunk->skb->data);
+
+		/* Verify that we have at least chunk headers
+		 * worth of buffer left.
+		 */
+		if (skb_headlen(chunk->skb) < sizeof(sctp_chunkhdr_t)) {
+			sctp_chunk_free(chunk);
+			chunk = queue->in_progress = NULL;
+		}
 	}
 }
net/sctp/sm_make_chunk.c (+46)
···
 	return SCTP_ERROR_NO_ERROR;
 }
 
+/* Verify the ASCONF packet before we process it.  */
+int sctp_verify_asconf(const struct sctp_association *asoc,
+		       struct sctp_paramhdr *param_hdr, void *chunk_end,
+		       struct sctp_paramhdr **errp) {
+	sctp_addip_param_t *asconf_param;
+	union sctp_params param;
+	int length, plen;
+
+	param.v = (sctp_paramhdr_t *) param_hdr;
+	while (param.v <= chunk_end - sizeof(sctp_paramhdr_t)) {
+		length = ntohs(param.p->length);
+		*errp = param.p;
+
+		if (param.v > chunk_end - length ||
+		    length < sizeof(sctp_paramhdr_t))
+			return 0;
+
+		switch (param.p->type) {
+		case SCTP_PARAM_ADD_IP:
+		case SCTP_PARAM_DEL_IP:
+		case SCTP_PARAM_SET_PRIMARY:
+			asconf_param = (sctp_addip_param_t *)param.v;
+			plen = ntohs(asconf_param->param_hdr.length);
+			if (plen < sizeof(sctp_addip_param_t) +
+			    sizeof(sctp_paramhdr_t))
+				return 0;
+			break;
+		case SCTP_PARAM_SUCCESS_REPORT:
+		case SCTP_PARAM_ADAPTATION_LAYER_IND:
+			if (length != sizeof(sctp_addip_param_t))
+				return 0;
+
+			break;
+		default:
+			break;
+		}
+
+		param.v += WORD_ROUND(length);
+	}
+
+	if (param.v != chunk_end)
+		return 0;
+
+	return 1;
+}
+
 /* Process an incoming ASCONF chunk with the next expected serial no. and
  * return an ASCONF_ACK chunk to be sent in response.
  */
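The `sctp_verify_asconf` addition above is a classic bounds-checked TLV walk: each parameter's length must cover its own header, must not overflow the chunk, and the walk must land exactly on `chunk_end`. A self-contained userspace sketch of the same pattern, with simplified stand-in types (`tlv_hdr`, `tlv_walk_ok`, and `TLV_ROUND` are not the kernel's names):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Userspace sketch of the bounds-checked TLV walk sctp_verify_asconf()
 * performs.  Lengths are assumed already in host byte order here. */
struct tlv_hdr {
	uint16_t type;
	uint16_t length;
};

#define TLV_ROUND(len) (((len) + 3u) & ~3u)	/* mirrors WORD_ROUND */

/* Return 1 if the buffer is a well-formed sequence of TLVs, 0 otherwise. */
static int tlv_walk_ok(const uint8_t *p, const uint8_t *end)
{
	while (p + sizeof(struct tlv_hdr) <= end) {
		struct tlv_hdr hdr;

		memcpy(&hdr, p, sizeof(hdr));
		/* the length must cover the header itself and must not
		 * run past the end of the chunk */
		if (hdr.length < sizeof(struct tlv_hdr) ||
		    (size_t)(end - p) < hdr.length)
			return 0;
		p += TLV_ROUND(hdr.length);
	}
	/* a valid walk consumes the buffer exactly */
	return p == end;
}
```

Both failure modes the patch guards against (a parameter length smaller than its header, and one that overflows the chunk) fall out of the same two comparisons.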
net/sctp/sm_statefuns.c (+204 -39)
···
 				     const sctp_subtype_t type,
 				     void *arg,
 				     sctp_cmd_seq_t *commands);
+static sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands);
 static struct sctp_sackhdr *sctp_sm_pull_sack(struct sctp_chunk *chunk);
 
 static sctp_disposition_t sctp_stop_t1_and_abort(sctp_cmd_seq_t *commands,
···
 				       struct sctp_transport *transport);
 
 static sctp_disposition_t sctp_sf_abort_violation(
+				     const struct sctp_endpoint *ep,
 				     const struct sctp_association *asoc,
 				     void *arg,
 				     sctp_cmd_seq_t *commands,
···
 				     void *arg,
 				     sctp_cmd_seq_t *commands);
 
+static sctp_disposition_t sctp_sf_violation_paramlen(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands);
+
 static sctp_disposition_t sctp_sf_violation_ctsn(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands);
+
+static sctp_disposition_t sctp_sf_violation_chunk(
 				     const struct sctp_endpoint *ep,
 				     const struct sctp_association *asoc,
 				     const sctp_subtype_t type,
···
 	struct sctp_chunk *chunk = arg;
 	struct sctp_ulpevent *ev;
 
+	if (!sctp_vtag_verify_either(chunk, asoc))
+		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+
 	/* RFC 2960 6.10 Bundling
 	 *
 	 * An endpoint MUST NOT bundle INIT, INIT ACK or
 	 * SHUTDOWN COMPLETE with any other chunks.
 	 */
 	if (!chunk->singleton)
-		return SCTP_DISPOSITION_VIOLATION;
+		return sctp_sf_violation_chunk(ep, asoc, type, arg, commands);
 
-	if (!sctp_vtag_verify_either(chunk, asoc))
-		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+	/* Make sure that the SHUTDOWN_COMPLETE chunk has a valid length. */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
 
 	/* RFC 2960 10.2 SCTP-to-ULP
 	 *
···
 	if (!sctp_vtag_verify(chunk, asoc))
 		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 
-	/* Make sure that the INIT-ACK chunk has a valid length */
-	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_initack_chunk_t)))
-		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
-						  commands);
 	/* 6.10 Bundling
 	 * An endpoint MUST NOT bundle INIT, INIT ACK or
 	 * SHUTDOWN COMPLETE with any other chunks.
 	 */
 	if (!chunk->singleton)
-		return SCTP_DISPOSITION_VIOLATION;
+		return sctp_sf_violation_chunk(ep, asoc, type, arg, commands);
 
+	/* Make sure that the INIT-ACK chunk has a valid length */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_initack_chunk_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
 	/* Grab the INIT header.  */
 	chunk->subh.init_hdr = (sctp_inithdr_t *) chunk->skb->data;
···
 	 * control endpoint, respond with an ABORT.
 	 */
 	if (ep == sctp_sk((sctp_get_ctl_sock()))->ep)
-		return sctp_sf_ootb(ep, asoc, type, arg, commands);
+		return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
 
 	/* Make sure that the COOKIE_ECHO chunk has a valid length.
 	 * In this case, we check that we have enough for at least a
···
 	struct sctp_chunk *chunk = (struct sctp_chunk *) arg;
 	struct sctp_chunk *reply;
 
+	/* Make sure that the chunk has a valid length */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	/* Since we are not going to really process this INIT, there
 	 * is no point in verifying chunk boundries.  Just generate
 	 * the SHUTDOWN ACK.
···
  *
  * The return value is the disposition of the chunk.
  */
-sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep,
+static sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep,
 					const struct sctp_association *asoc,
 					const sctp_subtype_t type,
 					void *arg,
···
 
 		SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
 
+		sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 		return SCTP_DISPOSITION_CONSUME;
 	}
···
 
 	ch = (sctp_chunkhdr_t *) chunk->chunk_hdr;
 	do {
-		/* Break out if chunk length is less then minimal. */
+		/* Report violation if the chunk is less then minimal */
 		if (ntohs(ch->length) < sizeof(sctp_chunkhdr_t))
-			break;
-
-		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
-		if (ch_end > skb_tail_pointer(skb))
-			break;
+			return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+							  commands);
 
+		/* Now that we know we at least have a chunk header,
+		 * do things that are type appropriate.
+		 */
 		if (SCTP_CID_SHUTDOWN_ACK == ch->type)
 			ootb_shut_ack = 1;
···
 		if (SCTP_CID_ABORT == ch->type)
 			return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 
+		/* Report violation if chunk len overflows */
+		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
+		if (ch_end > skb_tail_pointer(skb))
+			return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+							  commands);
+
 		ch = (sctp_chunkhdr_t *) ch_end;
 	} while (ch_end < skb_tail_pointer(skb));
 
 	if (ootb_shut_ack)
-		sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands);
+		return sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands);
 	else
-		sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
-
-	return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+		return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
 }
···
 		if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
 			return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 
-		return SCTP_DISPOSITION_CONSUME;
+		/* We need to discard the rest of the packet to prevent
+		 * potential bomming attacks from additional bundled chunks.
+		 * This is documented in SCTP Threats ID.
+		 */
+		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 	}
 
 	return SCTP_DISPOSITION_NOMEM;
···
 				      void *arg,
 				      sctp_cmd_seq_t *commands)
 {
+	struct sctp_chunk *chunk = arg;
+
+	/* Make sure that the SHUTDOWN_ACK chunk has a valid length. */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	/* Although we do have an association in this case, it corresponds
 	 * to a restarted association. So the packet is treated as an OOTB
 	 * packet and the state function that handles OOTB SHUTDOWN_ACK is
···
 {
 	struct sctp_chunk	*chunk = arg;
 	struct sctp_chunk	*asconf_ack = NULL;
+	struct sctp_paramhdr	*err_param = NULL;
 	sctp_addiphdr_t		*hdr;
+	union sctp_addr_param	*addr_param;
 	__u32			serial;
+	int			length;
 
 	if (!sctp_vtag_verify(chunk, asoc)) {
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_BAD_TAG,
···
 
 	hdr = (sctp_addiphdr_t *)chunk->skb->data;
 	serial = ntohl(hdr->serial);
+
+	addr_param = (union sctp_addr_param *)hdr->params;
+	length = ntohs(addr_param->p.length);
+	if (length < sizeof(sctp_paramhdr_t))
+		return sctp_sf_violation_paramlen(ep, asoc, type,
+			   (void *)addr_param, commands);
+
+	/* Verify the ASCONF chunk before processing it. */
+	if (!sctp_verify_asconf(asoc,
+	    (sctp_paramhdr_t *)((void *)addr_param + length),
+	    (void *)chunk->chunk_end,
+	    &err_param))
+		return sctp_sf_violation_paramlen(ep, asoc, type,
+			   (void *)&err_param, commands);
 
 	/* ADDIP 4.2 C1) Compare the value of the serial number to the value
 	 * the endpoint stored in a new association variable
···
 	struct sctp_chunk	*asconf_ack = arg;
 	struct sctp_chunk	*last_asconf = asoc->addip_last_asconf;
 	struct sctp_chunk	*abort;
+	struct sctp_paramhdr	*err_param = NULL;
 	sctp_addiphdr_t		*addip_hdr;
 	__u32			sent_serial, rcvd_serial;
···
 
 	addip_hdr = (sctp_addiphdr_t *)asconf_ack->skb->data;
 	rcvd_serial = ntohl(addip_hdr->serial);
+
+	/* Verify the ASCONF-ACK chunk before processing it. */
+	if (!sctp_verify_asconf(asoc,
+	    (sctp_paramhdr_t *)addip_hdr->params,
+	    (void *)asconf_ack->chunk_end,
+	    &err_param))
+		return sctp_sf_violation_paramlen(ep, asoc, type,
+			   (void *)&err_param, commands);
 
 	if (last_asconf) {
 		addip_hdr = (sctp_addiphdr_t *)last_asconf->subh.addip_hdr;
···
 				     void *arg,
 				     sctp_cmd_seq_t *commands)
 {
+	struct sctp_chunk *chunk = arg;
+
+	/* Make sure that the chunk has a valid length.
+	 * Since we don't know the chunk type, we use a general
+	 * chunkhdr structure to make a comparison.
+	 */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	SCTP_DEBUG_PRINTK("Chunk %d is discarded\n", type.chunk);
 	return SCTP_DISPOSITION_DISCARD;
 }
···
 				     void *arg,
 				     sctp_cmd_seq_t *commands)
 {
+	struct sctp_chunk *chunk = arg;
+
+	/* Make sure that the chunk has a valid length. */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	return SCTP_DISPOSITION_VIOLATION;
 }
···
  * Common function to handle a protocol violation.
  */
 static sctp_disposition_t sctp_sf_abort_violation(
+				     const struct sctp_endpoint *ep,
 				     const struct sctp_association *asoc,
 				     void *arg,
 				     sctp_cmd_seq_t *commands,
 				     const __u8 *payload,
 				     const size_t paylen)
 {
+	struct sctp_packet *packet = NULL;
 	struct sctp_chunk *chunk =  arg;
 	struct sctp_chunk *abort = NULL;
···
 	if (!abort)
 		goto nomem;
 
-	sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
-	SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
+	if (asoc) {
+		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
+		SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
 
-	if (asoc->state <= SCTP_STATE_COOKIE_ECHOED) {
-		sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
-				SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT));
-		sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
-				SCTP_ERROR(ECONNREFUSED));
-		sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED,
-				SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
+		if (asoc->state <= SCTP_STATE_COOKIE_ECHOED) {
+			sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
+					SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT));
+			sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
+					SCTP_ERROR(ECONNREFUSED));
+			sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED,
+					SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
+		} else {
+			sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
+					SCTP_ERROR(ECONNABORTED));
+			sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
+					SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
+			SCTP_DEC_STATS(SCTP_MIB_CURRESTAB);
+		}
 	} else {
-		sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
-				SCTP_ERROR(ECONNABORTED));
-		sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
-				SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
-		SCTP_DEC_STATS(SCTP_MIB_CURRESTAB);
+		packet = sctp_ootb_pkt_new(asoc, chunk);
+
+		if (!packet)
+			goto nomem_pkt;
+
+		if (sctp_test_T_bit(abort))
+			packet->vtag = ntohl(chunk->sctp_hdr->vtag);
+
+		abort->skb->sk = ep->base.sk;
+
+		sctp_packet_append_chunk(packet, abort);
+
+		sctp_add_cmd_sf(commands, SCTP_CMD_SEND_PKT,
+			SCTP_PACKET(packet));
+
+		SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
 	}
 
-	sctp_add_cmd_sf(commands, SCTP_CMD_DISCARD_PACKET, SCTP_NULL());
+	sctp_sf_pdiscard(ep, asoc, SCTP_ST_CHUNK(0), arg, commands);
 
 	SCTP_INC_STATS(SCTP_MIB_ABORTEDS);
 
 	return SCTP_DISPOSITION_ABORT;
 
+nomem_pkt:
+	sctp_chunk_free(abort);
 nomem:
 	return SCTP_DISPOSITION_NOMEM;
 }
···
 {
 	char err_str[]="The following chunk had invalid length:";
 
-	return sctp_sf_abort_violation(asoc, arg, commands, err_str,
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
+					sizeof(err_str));
+}
+
+/*
+ * Handle a protocol violation when the parameter length is invalid.
+ * "Invalid" length is identified as smaller then the minimal length a
+ * given parameter can be.
+ */
+static sctp_disposition_t sctp_sf_violation_paramlen(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands) {
+	char err_str[] = "The following parameter had invalid length:";
+
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
 					sizeof(err_str));
 }
···
 {
 	char err_str[]="The cumulative tsn ack beyond the max tsn currently sent:";
 
-	return sctp_sf_abort_violation(asoc, arg, commands, err_str,
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
 					sizeof(err_str));
 }
 
+/* Handle protocol violation of an invalid chunk bundling.  For example,
+ * when we have an association and we recieve bundled INIT-ACK, or
+ * SHUDOWN-COMPLETE, our peer is clearly violationg the "MUST NOT bundle"
+ * statement from the specs.  Additinally, there might be an attacker
+ * on the path and we may not want to continue this communication.
+ */
+static sctp_disposition_t sctp_sf_violation_chunk(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands)
+{
+	char err_str[]="The following chunk violates protocol:";
+
+	if (!asoc)
+		return sctp_sf_violation(ep, asoc, type, arg, commands);
+
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
+					sizeof(err_str));
+}
 /***************************************************************************
  * These are the state functions for handling primitive (Section 10) events.
  ***************************************************************************/
···
 	 * association exists, otherwise, use the peer's vtag.
 	 */
 	if (asoc) {
-		vtag = asoc->peer.i.init_tag;
+		/* Special case the INIT-ACK as there is no peer's vtag
+		 * yet.
+		 */
+		switch(chunk->chunk_hdr->type) {
+		case SCTP_CID_INIT_ACK:
+		{
+			sctp_initack_chunk_t *initack;
+
+			initack = (sctp_initack_chunk_t *)chunk->chunk_hdr;
+			vtag = ntohl(initack->init_hdr.init_tag);
+			break;
+		}
+		default:
+			vtag = asoc->peer.i.init_tag;
+			break;
+		}
 	} else {
 		/* Special case the INIT and stale COOKIE_ECHO as there is no
 		 * vtag yet.
···
 	cfg80211_dev_free(rdev);
 }
 
+#ifdef CONFIG_HOTPLUG
 static int wiphy_uevent(struct device *dev, char **envp,
 			int num_envp, char *buf, int size)
 {
 	/* TODO, we probably need stuff here */
 	return 0;
 }
+#endif
 
 struct class ieee80211_class = {
 	.name = "ieee80211",