Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'linux-2.6' into for-2.6.24

+2085 -1361
+219
Documentation/crypto/async-tx-api.txt
···
		 Asynchronous Transfers/Transforms API

1 INTRODUCTION

2 GENEALOGY

3 USAGE
3.1 General format of the API
3.2 Supported operations
3.3 Descriptor management
3.4 When does the operation execute?
3.5 When does the operation complete?
3.6 Constraints
3.7 Example

4 DRIVER DEVELOPER NOTES
4.1 Conformance points
4.2 "My application needs finer control of hardware channels"

5 SOURCE

---

1 INTRODUCTION

The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies.  It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations.  Code
that is written to the API can optimize for asynchronous operation and
the API will fit the chain of operations to the available offload
resources.

2 GENEALOGY

The API was initially designed to offload the memory copy and
xor-parity-calculations of the md-raid5 driver using the offload engines
present in the Intel(R) Xscale series of I/O processors.  It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines.  The following design
features surfaced as a result:
1/ implicit synchronous path: users of the API do not need to know if
   the platform they are running on has offload capabilities.  The
   operation will be offloaded when an engine is available and carried
   out in software otherwise.
2/ cross channel dependency chains: the API allows a chain of dependent
   operations to be submitted, like xor->copy->xor in the raid5 case.
   The API automatically handles cases where the transition from one
   operation to another implies a hardware channel switch.
3/ dmaengine extensions to support multiple clients and operation types
   beyond 'memcpy'

3 USAGE

3.1 General format of the API:
struct dma_async_tx_descriptor *
async_<operation>(<op specific parameters>,
		  enum async_tx_flags flags,
		  struct dma_async_tx_descriptor *dependency,
		  dma_async_tx_callback callback_routine,
		  void *callback_parameter);

3.2 Supported operations:
memcpy       - memory copy between a source and a destination buffer
memset       - fill a destination buffer with a byte value
xor          - xor a series of source buffers and write the result to a
	       destination buffer
xor_zero_sum - xor a series of source buffers and set a flag if the
	       result is zero.  The implementation attempts to prevent
	       writes to memory

3.3 Descriptor management:
The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously.  Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete.  When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted.  This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor.  A descriptor can be acked by one of the
following methods:
1/ setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2/ setting the ASYNC_TX_DEP_ACK flag to acknowledge the parent
   descriptor of a new operation.
3/ calling async_tx_ack() on the descriptor.

3.4 When does the operation execute?
Operations do not immediately issue after return from the
async_<operation> call.
Offload engine drivers batch operations to
improve performance by reducing the number of mmio cycles needed to
manage the channel.  Once a driver-specific threshold is met the driver
automatically issues pending operations.  An application can force this
event by calling async_tx_issue_pending_all().  This operates on all
channels since the application has no knowledge of channel to operation
mapping.

3.5 When does the operation complete?
There are two methods for an application to learn about the completion
of an operation.
1/ Call dma_wait_for_async_tx().  This call causes the CPU to spin while
   it polls for the completion of the operation.  It handles dependency
   chains and issuing pending operations.
2/ Specify a completion callback.  The callback routine runs in tasklet
   context if the offload engine driver supports interrupts, or it is
   called in application context if the operation is carried out
   synchronously in software.  The callback can be set in the call to
   async_<operation>, or when the application needs to submit a chain of
   unknown length it can use the async_trigger_callback() routine to set a
   completion interrupt/callback at the end of the chain.

3.6 Constraints:
1/ Calls to async_<operation> are not permitted in IRQ context.  Other
   contexts are permitted provided constraint #2 is not violated.
2/ Completion callback routines cannot submit new operations.  This
   results in recursion in the synchronous case and spin_locks being
   acquired twice in the asynchronous case.

3.7 Example:
Perform a xor->copy->xor operation where each operation depends on the
result from the previous operation:

void complete_xor_copy_xor(void *param)
{
	printk("complete\n");
}

int run_xor_copy_xor(struct page **xor_srcs,
		     int xor_src_cnt,
		     struct page *xor_dest,
		     size_t xor_len,
		     struct page *copy_src,
		     struct page *copy_dest,
		     size_t copy_len)
{
	struct dma_async_tx_descriptor *tx;

	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
		       ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL);
	tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len,
			  ASYNC_TX_DEP_ACK, tx, NULL, NULL);
	tx = async_xor(xor_dest, xor_srcs, 0, xor_src_cnt, xor_len,
		       ASYNC_TX_XOR_DROP_DST | ASYNC_TX_DEP_ACK | ASYNC_TX_ACK,
		       tx, complete_xor_copy_xor, NULL);

	async_tx_issue_pending_all();
}

See include/linux/async_tx.h for more information on the flags.  See the
ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
implementation examples.

4 DRIVER DEVELOPMENT NOTES
4.1 Conformance points:
There are a few conformance points required in dmaengine drivers to
accommodate assumptions made by applications using the async_tx API:
1/ Completion callbacks are expected to happen in tasklet context
2/ dma_async_tx_descriptor fields are never manipulated in IRQ context
3/ Use async_tx_run_dependencies() in the descriptor clean up path to
   handle submission of dependent operations

4.2 "My application needs finer control of hardware channels"
This requirement seems to arise from cases where a DMA engine driver is
trying to support device-to-memory DMA.
The dmaengine and async_tx
implementations were designed for offloading memory-to-memory
operations; however, there are some capabilities of the dmaengine layer
that can be used for platform-specific channel management.
Platform-specific constraints can be handled by registering the
application as a 'dma_client' and implementing a 'dma_event_callback' to
apply a filter to the available channels in the system.  Before showing
how to implement a custom dma_event callback some background of
dmaengine's client support is required.

The following routines in dmaengine support multiple clients requesting
use of a channel:
- dma_async_client_register(struct dma_client *client)
- dma_async_client_chan_request(struct dma_client *client)

dma_async_client_register takes a pointer to an initialized dma_client
structure.  It expects that the 'event_callback' and 'cap_mask' fields
are already initialized.

dma_async_client_chan_request triggers dmaengine to notify the client of
all channels that satisfy the capability mask.  It is up to the client's
event_callback routine to track how many channels the client needs and
how many it is currently using.  The dma_event_callback routine returns a
dma_state_client code to let dmaengine know the status of the
allocation.

Below is the example of how to extend this functionality for
platform-specific filtering of the available channels beyond the
standard capability mask:

static enum dma_state_client
my_dma_client_callback(struct dma_client *client,
		       struct dma_chan *chan, enum dma_state state)
{
	struct dma_device *dma_dev;
	struct my_platform_specific_dma *plat_dma_dev;

	dma_dev = chan->device;
	plat_dma_dev = container_of(dma_dev,
				    struct my_platform_specific_dma,
				    dma_dev);

	if (!plat_dma_dev->platform_specific_capability)
		return DMA_DUP;

	. . .
}

5 SOURCE
include/linux/dmaengine.h: core header file for DMA drivers and clients
drivers/dma/dmaengine.c: offload engine channel management routines
drivers/dma/: location for offload engine drivers
include/linux/async_tx.h: core header file for the async_tx api
crypto/async_tx/async_tx.c: async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c: copy offload
crypto/async_tx/async_memset.c: memory fill offload
crypto/async_tx/async_xor.c: xor and xor zero sum offload
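The documentation shows the filter callback but not the registration
step it plugs into.  As a hedged, non-compilable sketch of that glue,
built only from the interfaces the text itself names
(dma_async_client_register, dma_async_client_chan_request, the
'event_callback' and 'cap_mask' fields); the client variable, init
function, and use of dma_cap_set here are this sketch's assumptions, not
part of the commit:

```c
/* Hypothetical registration glue for my_dma_client_callback.
 * Per the text above, dma_async_client_register() expects the
 * event_callback and cap_mask fields to be initialized first. */
static struct dma_client my_dma_client = {
	.event_callback = my_dma_client_callback,
};

static int __init my_platform_dma_init(void)
{
	/* Assumed capability setup: ask only for memcpy-capable
	 * channels; the platform filter in the callback narrows
	 * the set further. */
	dma_cap_set(DMA_MEMCPY, my_dma_client.cap_mask);

	dma_async_client_register(&my_dma_client);
	dma_async_client_chan_request(&my_dma_client);
	return 0;
}
```

The callback then sees every channel matching cap_mask and answers with
a dma_state_client code (e.g. DMA_DUP to decline, as in the example
above).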
+2
Documentation/devices.txt
···
  9 = /dev/urandom	Faster, less secure random number gen.
 10 = /dev/aio		Asynchronous I/O notification interface
 11 = /dev/kmsg		Writes to this come out as printk's
+12 = /dev/oldmem	Used by crashdump kernels to access
+			the memory of the kernel that crashed.

  1 block	RAM disk
  0 = /dev/ram0		First RAM disk
+8 -8
Documentation/input/iforce-protocol.txt
···
 Val 40 Spring (Force = f(pos))
 Val 41 Friction (Force = f(velocity)) and Inertia (Force = f(acceleration))
-
+
 02 Axes affected and trigger
 Bits 4-7: Val 2 = effect along one axis. Byte 05 indicates direction
	Val 4 = X axis only. Byte 05 must contain 5a
···
 Query the product id (2 bytes)

 **** Open device ****
-QUERY = 4f ('O'pen)
+QUERY = 4f ('O'pen)
 No data returned.

 **** Close device *****
···
 No data returned.

 **** Query effect ****
-QUERY = 45 ('E')
+QUERY = 45 ('E')
 Send effect type.
 Returns nonzero if supported (2 bytes)
···
 OP=  40 <idx> <val> [<val>]
 LEN= 2 or 3
 00 Idx
-   Idx 00 Set dead zone (0..2048)
+   Idx 00 Set dead zone (0..2048)
-   Idx 01 Ignore Deadman sensor (0..1)
+   Idx 01 Ignore Deadman sensor (0..1)
-   Idx 02 Enable comm watchdog (0..1)
+   Idx 02 Enable comm watchdog (0..1)
-   Idx 03 Set the strength of the spring (0..100)
+   Idx 03 Set the strength of the spring (0..100)
    Idx 04 Enable or disable the spring (0/1)
-   Idx 05 Set axis saturation threshold (0..2048)
+   Idx 05 Set axis saturation threshold (0..2048)

 **** Set Effect State ****
 OP=  42 <val>
+1 -1
Documentation/lguest/lguest.c
···
	 * of the block file (possibly extending it). */
	if (off + len > device_len) {
		/* Trim it back to the correct length */
-		ftruncate(dev->fd, device_len);
+		ftruncate64(dev->fd, device_len);
		/* Die, bad Guest, die. */
		errx(1, "Write past end %llu+%u", off, len);
	}
+3 -3
MAINTAINERS
···
 P:	Jozsef Kadlecsik
 P:	Patrick McHardy
 M:	kaber@trash.net
-L:	netfilter-devel@lists.netfilter.org
-L:	netfilter@lists.netfilter.org (subscribers-only)
+L:	netfilter-devel@vger.kernel.org
+L:	netfilter@vger.kernel.org
 L:	coreteam@netfilter.org
 W:	http://www.netfilter.org/
 W:	http://www.iptables.org/
···
 P:	Hideaki YOSHIFUJI
 M:	yoshfuji@linux-ipv6.org
 P:	Patrick McHardy
-M:	kaber@coreworks.de
+M:	kaber@trash.net
 L:	netdev@vger.kernel.org
 T:	git kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6.git
 S:	Maintained
+2 -2
Makefile
···
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 23
-EXTRAVERSION =-rc6
-NAME = Pink Farting Weasel
+EXTRAVERSION =-rc9
+NAME = Arr Matey! A Hairy Bilge Rat!

 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+2 -2
arch/arm/kernel/bios32.c
···
 * pcibios_fixup_bus - Called after each bus is probed,
 * but before its children are examined.
 */
-void __devinit pcibios_fixup_bus(struct pci_bus *bus)
+void pcibios_fixup_bus(struct pci_bus *bus)
 {
	struct pci_sys_data *root = bus->sysdata;
	struct pci_dev *dev;
···
 /*
  * Convert from Linux-centric to bus-centric addresses for bridge devices.
  */
-void __devinit
+void
 pcibios_resource_to_bus(struct pci_dev *dev, struct pci_bus_region *region,
			 struct resource *res)
 {
+1 -1
arch/arm/mach-ep93xx/core.c
···
	if (line >= 0 && line < 16) {
		gpio_line_config(line, GPIO_IN);
	} else {
-		gpio_line_config(EP93XX_GPIO_LINE_F(line), GPIO_IN);
+		gpio_line_config(EP93XX_GPIO_LINE_F(line-16), GPIO_IN);
	}

	port = line >> 3;
+11 -1
arch/arm/mm/cache-l2x0.c
···
 {
	unsigned long addr;

-	start &= ~(CACHE_LINE_SIZE - 1);
+	if (start & (CACHE_LINE_SIZE - 1)) {
+		start &= ~(CACHE_LINE_SIZE - 1);
+		sync_writel(start, L2X0_CLEAN_INV_LINE_PA, 1);
+		start += CACHE_LINE_SIZE;
+	}
+
+	if (end & (CACHE_LINE_SIZE - 1)) {
+		end &= ~(CACHE_LINE_SIZE - 1);
+		sync_writel(end, L2X0_CLEAN_INV_LINE_PA, 1);
+	}
+
	for (addr = start; addr < end; addr += CACHE_LINE_SIZE)
		sync_writel(addr, L2X0_INV_LINE_PA, 1);
	cache_sync();
+1 -1
arch/i386/boot/header.S
···
	hlt
	jmp	die

-	.size	die, .-due
+	.size	die, .-die

	.section ".initdata", "a"
 setup_corrupt:
+31 -12
arch/i386/boot/memory.c
···
 static int detect_memory_e820(void)
 {
+	int count = 0;
	u32 next = 0;
	u32 size, id;
	u8 err;
···
	do {
		size = sizeof(struct e820entry);
-		id = SMAP;
-		asm("int $0x15; setc %0"
-		    : "=am" (err), "+b" (next), "+d" (id), "+c" (size),
-		      "=m" (*desc)
-		    : "D" (desc), "a" (0xe820));
-
-		if (err || id != SMAP)
+
+		/* Important: %edx is clobbered by some BIOSes,
+		   so it must be either used for the error output
+		   or explicitly marked clobbered. */
+		asm("int $0x15; setc %0"
+		    : "=d" (err), "+b" (next), "=a" (id), "+c" (size),
+		      "=m" (*desc)
+		    : "D" (desc), "d" (SMAP), "a" (0xe820));
+
+		/* Some BIOSes stop returning SMAP in the middle of
+		   the search loop.  We don't know exactly how the BIOS
+		   screwed up the map at that point, we might have a
+		   partial map, the full map, or complete garbage, so
+		   just return failure. */
+		if (id != SMAP) {
+			count = 0;
+			break;
+		}
+
+		if (err)
			break;

-		boot_params.e820_entries++;
+		count++;
		desc++;
-	} while (next && boot_params.e820_entries < E820MAX);
+	} while (next && count < E820MAX);

-	return boot_params.e820_entries;
+	return boot_params.e820_entries = count;
 }

 static int detect_memory_e801(void)
···

 int detect_memory(void)
 {
+	int err = -1;
+
	if (detect_memory_e820() > 0)
-		return 0;
+		err = 0;

	if (!detect_memory_e801())
-		return 0;
+		err = 0;

-	return detect_memory_88();
+	if (!detect_memory_88())
+		err = 0;
+
+	return err;
 }
+10 -4
arch/i386/boot/video.c
···
 }

 /* Set mode (without recalc) */
-static int raw_set_mode(u16 mode)
+static int raw_set_mode(u16 mode, u16 *real_mode)
 {
	int nmode, i;
	struct card_info *card;
···

		if ((mode == nmode && visible) ||
		    mode == mi->mode ||
-		    mode == (mi->y << 8)+mi->x)
+		    mode == (mi->y << 8)+mi->x) {
+			*real_mode = mi->mode;
			return card->set_mode(mi);
+		}

		if (visible)
			nmode++;
···
	if (mode >= card->xmode_first &&
	    mode < card->xmode_first+card->xmode_n) {
		struct mode_info mix;
-		mix.mode = mode;
+		*real_mode = mix.mode = mode;
		mix.x = mix.y = 0;
		return card->set_mode(&mix);
	}
···
 static int set_mode(u16 mode)
 {
	int rv;
+	u16 real_mode;

	/* Very special mode numbers... */
	if (mode == VIDEO_CURRENT_MODE)
···
	else if (mode == EXTENDED_VGA)
		mode = VIDEO_8POINT;

-	rv = raw_set_mode(mode);
+	rv = raw_set_mode(mode, &real_mode);
	if (rv)
		return rv;

	if (mode & VIDEO_RECALC)
		vga_recalc_vertical();

+	/* Save the canonical mode number for the kernel, not
+	   an alias, size specification or menu position */
+	boot_params.hdr.vid_mode = real_mode;
	return 0;
 }
+10 -31
arch/i386/kernel/acpi/wakeup.S
···
 #define VIDEO_FIRST_V7 0x0900

 # Setting of user mode (AX=mode ID) => CF=success
+
+# For now, we only handle VESA modes (0x0200..0x03ff).  To handle other
+# modes, we should probably compile in the video code from the boot
+# directory.
 mode_set:
	movw	%ax, %bx
-#if 0
-	cmpb	$0xff, %ah
-	jz	setalias
+	subb	$VIDEO_FIRST_VESA>>8, %bh
+	cmpb	$2, %bh
+	jb	check_vesa

-	testb	$VIDEO_RECALC>>8, %ah
-	jnz	_setrec
-
-	cmpb	$VIDEO_FIRST_RESOLUTION>>8, %ah
-	jnc	setres
-
-	cmpb	$VIDEO_FIRST_SPECIAL>>8, %ah
-	jz	setspc
-
-	cmpb	$VIDEO_FIRST_V7>>8, %ah
-	jz	setv7
-#endif
-
-	cmpb	$VIDEO_FIRST_VESA>>8, %ah
-	jnc	check_vesa
-#if 0
-	orb	%ah, %ah
-	jz	setmenu
-#endif
-
-	decb	%ah
-#	jz	setbios		  Add bios modes later
-
-setbad:	clc
+setbad:
+	clc
	ret

 check_vesa:
-	subb	$VIDEO_FIRST_VESA>>8, %bh
	orw	$0x4000, %bx			# Use linear frame buffer
	movw	$0x4f02, %ax			# VESA BIOS mode set call
	int	$0x10
	cmpw	$0x004f, %ax			# AL=4f if implemented
-	jnz	_setbad				# AH=0 if OK
+	jnz	setbad				# AH=0 if OK

	stc
	ret
-
-_setbad: jmp setbad

 .code32
 ALIGN
+4 -1
arch/i386/xen/mmu.c
···
	put_cpu();

	spin_lock(&mm->page_table_lock);
-	xen_pgd_unpin(mm->pgd);
+
+	/* pgd may not be pinned in the error exit path of execve */
+	if (PagePinned(virt_to_page(mm->pgd)))
+		xen_pgd_unpin(mm->pgd);
	spin_unlock(&mm->page_table_lock);
 }
+1 -4
arch/mips/kernel/i8259.c
···
		outb(cached_master_mask, PIC_MASTER_IMR);
		outb(0x60+irq,PIC_MASTER_CMD);	/* 'Specific EOI to master */
	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
	spin_unlock_irqrestore(&i8259A_lock, flags);
	return;
+2 -8
arch/mips/kernel/irq-msc01.c
···
	mask_msc_irq(irq);
	if (!cpu_has_veic)
		MSCIC_WRITE(MSC01_IC_EOI, 0);
-#ifdef CONFIG_MIPS_MT_SMTC
	/* This actually needs to be a call into platform code */
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }

 /*
···
		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r | ~MSC01_IC_SUP_EDGE_BIT);
		MSCIC_WRITE(MSC01_IC_SUP+irq*8, r);
	}
-#ifdef CONFIG_MIPS_MT_SMTC
-	if (irq_hwmask[irq] & ST0_IM)
-		set_c0_status(irq_hwmask[irq] & ST0_IM);
-#endif /* CONFIG_MIPS_MT_SMTC */
+	smtc_im_ack_irq(irq);
 }

 /*
+1 -9
arch/mips/kernel/irq.c
···
 */
 void ack_bad_irq(unsigned int irq)
 {
+	smtc_im_ack_irq(irq);
	printk("unexpected IRQ # %d\n", irq);
 }

 atomic_t irq_err_count;
-
-#ifdef CONFIG_MIPS_MT_SMTC
-/*
- * SMTC Kernel needs to manipulate low-level CPU interrupt mask
- * in do_IRQ.  These are passed in setup_irq_smtc() and stored
- * in this table.
- */
-unsigned long irq_hwmask[NR_IRQS];
-#endif /* CONFIG_MIPS_MT_SMTC */

 /*
  * Generic, controller-independent functions:
+1 -1
arch/mips/kernel/scall64-o32.S
···
	PTR	compat_sys_signalfd
	PTR	compat_sys_timerfd
	PTR	sys_eventfd
-	PTR	sys_fallocate			/* 4320 */
+	PTR	sys32_fallocate			/* 4320 */
	.size	sys_call_table,.-sys_call_table
+4 -1
arch/mips/kernel/smtc.c
···
 #include <asm/smtc_proc.h>

 /*
- * This file should be built into the kernel only if CONFIG_MIPS_MT_SMTC is set.
+ * SMTC Kernel needs to manipulate low-level CPU interrupt mask
+ * in do_IRQ.  These are passed in setup_irq_smtc() and stored
+ * in this table.
  */
+unsigned long irq_hwmask[NR_IRQS];

 #define LOCK_MT_PRA() \
	local_irq_save(flags); \
+2
arch/mips/kernel/vmlinux.lds.S
···
	__dbe_table : { *(__dbe_table) }
	__stop___dbe_table = .;

+	NOTES
+
	RODATA

	/* writeable */
+2 -2
arch/mips/sgi-ip32/ip32-platform.c
···
 static int __init uart8250_init(void)
 {
-	uart8250_data[0].iobase = (unsigned long) &mace->isa.serial1;
-	uart8250_data[1].iobase = (unsigned long) &mace->isa.serial1;
+	uart8250_data[0].membase = (void __iomem *) &mace->isa.serial1;
+	uart8250_data[1].membase = (void __iomem *) &mace->isa.serial1;

	return platform_device_register(&uart8250_device);
 }
+2
arch/mips/sibyte/bcm1480/setup.c
···
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 */
+#include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/reboot.h>
···
 EXPORT_SYMBOL(soc_type);
 unsigned int periph_rev;
 unsigned int zbbus_mhz;
+EXPORT_SYMBOL(zbbus_mhz);

 static unsigned int part_type;
+1
arch/powerpc/boot/dts/mpc8349emitx.dts
···
			#size-cells = <0>;
			interrupt-parent = < &ipic >;
			interrupts = <26 8>;
+			dr_mode = "peripheral";
			phy_type = "ulpi";
		};
+7
arch/powerpc/kernel/process.c
···
	regs->ccr = 0;
	regs->gpr[1] = sp;

+	/*
+	 * We have just cleared all the nonvolatile GPRs, so make
+	 * FULL_REGS(regs) return true.  This is necessary to allow
+	 * ptrace to examine the thread immediately after exec.
+	 */
+	regs->trap &= ~1UL;
+
 #ifdef CONFIG_PPC32
	regs->mq = 0;
	regs->nip = start;
+2 -2
arch/powerpc/platforms/83xx/usb.c
···
		if (port0_is_dr)
			printk(KERN_WARNING
				"834x USB port0 can't be used by both DR and MPH!\n");
-		sicrl |= MPC834X_SICRL_USB0;
+		sicrl &= ~MPC834X_SICRL_USB0;
	}
	prop = of_get_property(np, "port1", NULL);
	if (prop) {
		if (port1_is_dr)
			printk(KERN_WARNING
				"834x USB port1 can't be used by both DR and MPH!\n");
-		sicrl |= MPC834X_SICRL_USB1;
+		sicrl &= ~MPC834X_SICRL_USB1;
	}
	of_node_put(np);
 }
+2 -2
arch/powerpc/platforms/cell/spufs/file.c
···
	{ "mbox_stat", &spufs_mbox_stat_fops, 0444, },
	{ "ibox_stat", &spufs_ibox_stat_fops, 0444, },
	{ "wbox_stat", &spufs_wbox_stat_fops, 0444, },
-	{ "signal1", &spufs_signal1_nosched_fops, 0222, },
-	{ "signal2", &spufs_signal2_nosched_fops, 0222, },
+	{ "signal1", &spufs_signal1_fops, 0666, },
+	{ "signal2", &spufs_signal2_fops, 0666, },
	{ "signal1_type", &spufs_signal1_type, 0666, },
	{ "signal2_type", &spufs_signal2_type, 0666, },
	{ "cntl", &spufs_cntl_fops, 0666, },
+1 -1
arch/powerpc/platforms/pseries/xics.c
···
	 * For the moment only implement delivery to all cpus or one cpu.
	 * Get current irq_server for the given irq
	 */
-	irq_server = get_irq_server(irq, 1);
+	irq_server = get_irq_server(virq, 1);
	if (irq_server == -1) {
		char cpulist[128];
		cpumask_scnprintf(cpulist, sizeof(cpulist), cpumask);
+1 -1
arch/powerpc/sysdev/commproc.c
···
 {
	return (dpram_pbase + (uint)(addr - dpram_vbase));
 }
-EXPORT_SYMBOL(cpm_dpram_addr);
+EXPORT_SYMBOL(cpm_dpram_phys);
+1 -1
arch/ppc/8xx_io/commproc.c
···

 void *cpm_dpram_addr(unsigned long offset)
 {
-	return ((immap_t *)IMAP_ADDR)->im_cpm.cp_dpmem + offset;
+	return (void *)(dpram_vbase + offset);
 }
 EXPORT_SYMBOL(cpm_dpram_addr);
+2
arch/sparc/kernel/ebus.c
···
	dev->prom_node = dp;

	regs = of_get_property(dp, "reg", &len);
+	if (!regs)
+		len = 0;
	if (len % sizeof(struct linux_prom_registers)) {
		prom_printf("UGH: proplen for %s was %d, need multiple of %d\n",
			    dev->prom_node->name, len,
+2 -2
arch/sparc64/kernel/binfmt_aout32.c
···
			get_user(c,p++);
		} while (c);
	}
-	put_user(NULL,argv);
+	put_user(0,argv);
	current->mm->arg_end = current->mm->env_start = (unsigned long) p;
	while (envc-->0) {
		char c;
···
			get_user(c,p++);
		} while (c);
	}
-	put_user(NULL,envp);
+	put_user(0,envp);
	current->mm->env_end = (unsigned long) p;
	return sp;
 }
+4 -1
arch/sparc64/kernel/ebus.c
···
		dev->num_addrs = 0;
		dev->num_irqs = 0;
	} else {
-		(void) of_get_property(dp, "reg", &len);
+		const int *regs = of_get_property(dp, "reg", &len);
+
+		if (!regs)
+			len = 0;
		dev->num_addrs = len / sizeof(struct linux_prom_registers);

		for (i = 0; i < dev->num_addrs; i++)
+4 -4
arch/sparc64/lib/NGcopy_from_user.S
···
 /* NGcopy_from_user.S: Niagara optimized copy from userspace.
  *
- * Copyright (C) 2006 David S. Miller (davem@davemloft.net)
+ * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net)
  */

 #define EX_LD(x) \
···
	.section .fixup; \
	.align 4; \
 99:	wr %g0, ASI_AIUS, %asi;\
-	retl; \
-	 mov 1, %o0; \
+	ret; \
+	 restore %g0, 1, %o0; \
	.section __ex_table,"a";\
	.align 4; \
	.word 98b, 99b; \
···
 #define LOAD(type,addr,dest)	type##a [addr] ASI_AIUS, dest
 #define LOAD_TWIN(addr_reg,dest0,dest1) \
	ldda [addr_reg] ASI_BLK_INIT_QUAD_LDD_AIUS, dest0
-#define EX_RETVAL(x)	0
+#define EX_RETVAL(x)	%g0

 #ifdef __KERNEL__
 #define PREAMBLE \
+4 -4
arch/sparc64/lib/NGcopy_to_user.S
···
 /* NGcopy_to_user.S: Niagara optimized copy to userspace.
  *
- * Copyright (C) 2006 David S. Miller (davem@davemloft.net)
+ * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net)
  */

 #define EX_ST(x) \
···
	.section .fixup; \
	.align 4; \
 99:	wr %g0, ASI_AIUS, %asi;\
-	retl; \
-	 mov 1, %o0; \
+	ret; \
+	 restore %g0, 1, %o0; \
	.section __ex_table,"a";\
	.align 4; \
	.word 98b, 99b; \
···
 #define FUNC_NAME		NGcopy_to_user
 #define STORE(type,src,addr)	type##a src, [addr] ASI_AIUS
 #define STORE_ASI		ASI_BLK_INIT_QUAD_LDD_AIUS
-#define EX_RETVAL(x)	0
+#define EX_RETVAL(x)	%g0

 #ifdef __KERNEL__
 /* Writing to %asi is _expensive_ so we hardcode it.
+213 -158
arch/sparc64/lib/NGmemcpy.S
···
 /* NGmemcpy.S: Niagara optimized memcpy.
  *
- * Copyright (C) 2006 David S. Miller (davem@davemloft.net)
+ * Copyright (C) 2006, 2007 David S. Miller (davem@davemloft.net)
  */

 #ifdef __KERNEL__
···
 #define GLOBAL_SPARE	%g5
 #define RESTORE_ASI(TMP) \
	wr	%g0, ASI_PNF, %asi
+#endif
+
+#ifdef __sparc_v9__
+#define SAVE_AMOUNT	128
+#else
+#define SAVE_AMOUNT	64
 #endif

 #ifndef STORE_ASI
···
 #endif

 #ifndef STORE_INIT
+#ifndef SIMULATE_NIAGARA_ON_NON_NIAGARA
 #define STORE_INIT(src,addr)	stxa src, [addr] %asi
+#else
+#define STORE_INIT(src,addr)	stx src, [addr + 0x00]
+#endif
 #endif

 #ifndef FUNC_NAME
···

	.globl	FUNC_NAME
	.type	FUNC_NAME,#function
-FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
-	srlx		%o2, 31, %g2
+FUNC_NAME:	/* %i0=dst, %i1=src, %i2=len */
+	PREAMBLE
+	save		%sp, -SAVE_AMOUNT, %sp
+	srlx		%i2, 31, %g2
	cmp		%g2, 0
	tne		%xcc, 5
-	PREAMBLE
-	mov		%o0, GLOBAL_SPARE
-	cmp		%o2, 0
+	mov		%i0, %o0
+	cmp		%i2, 0
	be,pn		%XCC, 85f
-	 or		%o0, %o1, %o3
-	cmp		%o2, 16
+	 or		%o0, %i1, %i3
+	cmp		%i2, 16
	blu,a,pn	%XCC, 80f
-	 or		%o3, %o2, %o3
+	 or		%i3, %i2, %i3

	/* 2 blocks (128 bytes) is the minimum we can do the block
	 * copy with.  We need to ensure that we'll iterate at least
···
	 * to (64 - 1) bytes from the length before we perform the
	 * block copy loop.
	 */
-	cmp		%o2, (2 * 64)
+	cmp		%i2, (2 * 64)
	blu,pt		%XCC, 70f
-	 andcc		%o3, 0x7, %g0
+	 andcc		%i3, 0x7, %g0

	/* %o0:	dst
-	 * %o1:	src
-	 * %o2:	len  (known to be >= 128)
+	 * %i1:	src
+	 * %i2:	len  (known to be >= 128)
	 *
-	 * The block copy loops will use %o4/%o5,%g2/%g3 as
+	 * The block copy loops will use %i4/%i5,%g2/%g3 as
	 * temporaries while copying the data.
	 */

-	LOAD(prefetch, %o1, #one_read)
+	LOAD(prefetch, %i1, #one_read)
	wr		%g0, STORE_ASI, %asi

	/* Align destination on 64-byte boundary. */
-	andcc		%o0, (64 - 1), %o4
+	andcc		%o0, (64 - 1), %i4
	be,pt		%XCC, 2f
-	 sub		%o4, 64, %o4
-	sub		%g0, %o4, %o4	! bytes to align dst
-	sub		%o2, %o4, %o2
-1:	subcc		%o4, 1, %o4
-	EX_LD(LOAD(ldub, %o1, %g1))
+	 sub		%i4, 64, %i4
+	sub		%g0, %i4, %i4	! bytes to align dst
+	sub		%i2, %i4, %i2
+1:	subcc		%i4, 1, %i4
+	EX_LD(LOAD(ldub, %i1, %g1))
	EX_ST(STORE(stb, %g1, %o0))
-	add		%o1, 1, %o1
+	add		%i1, 1, %i1
	bne,pt		%XCC, 1b
	 add		%o0, 1, %o0
···
	 * aligned store data at a time, this is easy to ensure.
	 */
 2:
-	andcc		%o1, (16 - 1), %o4
-	andn		%o2, (64 - 1), %g1	! block copy loop iterator
-	sub		%o2, %g1, %o2		! final sub-block copy bytes
+	andcc		%i1, (16 - 1), %i4
+	andn		%i2, (64 - 1), %g1	! block copy loop iterator
	be,pt		%XCC, 50f
-	 cmp		%o4, 8
-	be,a,pt		%XCC, 10f
-	 sub		%o1, 0x8, %o1
+	 sub		%i2, %g1, %i2		! final sub-block copy bytes
+
+	cmp		%i4, 8
+	be,pt		%XCC, 10f
+	 sub		%i1, %i4, %i1

	/* Neither 8-byte nor 16-byte aligned, shift and mask. */
-	mov		%g1, %o4
-	and		%o1, 0x7, %g1
-	sll		%g1, 3, %g1
-	mov		64, %o3
-	andn		%o1, 0x7, %o1
-	EX_LD(LOAD(ldx, %o1, %g2))
-	sub		%o3, %g1, %o3
-	sllx		%g2, %g1, %g2
+	and		%i4, 0x7, GLOBAL_SPARE
+	sll		GLOBAL_SPARE, 3, GLOBAL_SPARE
+	mov		64, %i5
+	EX_LD(LOAD_TWIN(%i1, %g2, %g3))
+	sub		%i5, GLOBAL_SPARE, %i5
+	mov		16, %o4
+	mov		32, %o5
+	mov		48, %o7
+	mov		64, %i3

-#define SWIVEL_ONE_DWORD(SRC, TMP1, TMP2, PRE_VAL, PRE_SHIFT, POST_SHIFT, DST)\
-	EX_LD(LOAD(ldx, SRC, TMP1)); \
-	srlx		TMP1, PRE_SHIFT, TMP2; \
-	or		TMP2, PRE_VAL, TMP2; \
-	EX_ST(STORE_INIT(TMP2, DST)); \
-	sllx		TMP1, POST_SHIFT, PRE_VAL;
+	bg,pn		%XCC, 9f
+	 nop

-1:	add		%o1, 0x8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x00)
-	add		%o1, 0x8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x08)
-	add		%o1, 0x8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x10)
-	add		%o1, 0x8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x18)
-	add		%o1, 32, %o1
-	LOAD(prefetch, %o1, #one_read)
-	sub		%o1, 32 - 8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x20)
-	add		%o1, 8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x28)
-	add		%o1, 8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x30)
-	add		%o1, 8, %o1
-	SWIVEL_ONE_DWORD(%o1, %g3, %o5, %g2, %o3, %g1, %o0 + 0x38)
-	subcc		%o4, 64, %o4
-	bne,pt		%XCC, 1b
+#define MIX_THREE_WORDS(WORD1, WORD2, WORD3, PRE_SHIFT, POST_SHIFT, TMP) \
+	sllx		WORD1, POST_SHIFT, WORD1; \
+	srlx		WORD2, PRE_SHIFT, TMP; \
+	sllx		WORD2, POST_SHIFT, WORD2; \
+	or		WORD1, TMP, WORD1; \
+	srlx		WORD3, PRE_SHIFT, TMP; \
+	or		WORD2, TMP, WORD2;
+
+8:	EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3))
+	MIX_THREE_WORDS(%g2, %g3, %o2, %i5, GLOBAL_SPARE, %o1)
+	LOAD(prefetch, %i1 + %i3, #one_read)
+
+	EX_ST(STORE_INIT(%g2, %o0 + 0x00))
+	EX_ST(STORE_INIT(%g3, %o0 + 0x08))
+
+	EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3))
+	MIX_THREE_WORDS(%o2, %o3, %g2, %i5, GLOBAL_SPARE, %o1)
+
+	EX_ST(STORE_INIT(%o2, %o0 + 0x10))
+	EX_ST(STORE_INIT(%o3, %o0 + 0x18))
+
+	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3))
+	MIX_THREE_WORDS(%g2, %g3, %o2, %i5, GLOBAL_SPARE, %o1)
+
+	EX_ST(STORE_INIT(%g2, %o0 + 0x20))
+	EX_ST(STORE_INIT(%g3, %o0 + 0x28))
+
+	EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3))
+	add		%i1, 64, %i1
+	MIX_THREE_WORDS(%o2, %o3, %g2, %i5, GLOBAL_SPARE, %o1)
+
+	EX_ST(STORE_INIT(%o2, %o0 + 0x30))
+	EX_ST(STORE_INIT(%o3, %o0 + 0x38))
+
+	subcc		%g1, 64, %g1
+	bne,pt		%XCC, 8b
	 add		%o0, 64, %o0

-#undef SWIVEL_ONE_DWORD
-
-	srl		%g1, 3, %g1
	ba,pt		%XCC, 60f
-	 add		%o1, %g1, %o1
+	 add		%i1, %i4, %i1
+
+9:	EX_LD(LOAD_TWIN(%i1 + %o4, %o2, %o3))
+	MIX_THREE_WORDS(%g3, %o2, %o3, %i5, GLOBAL_SPARE, %o1)
+	LOAD(prefetch, %i1 + %i3, #one_read)
+
+	EX_ST(STORE_INIT(%g3, %o0 + 0x00))
+	EX_ST(STORE_INIT(%o2, %o0 + 0x08))
+
+	EX_LD(LOAD_TWIN(%i1 + %o5, %g2, %g3))
+	MIX_THREE_WORDS(%o3, %g2, %g3, %i5, GLOBAL_SPARE, %o1)
+
+	EX_ST(STORE_INIT(%o3, %o0 + 0x10))
+	EX_ST(STORE_INIT(%g2, %o0 + 0x18))
+
+	EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3))
+	MIX_THREE_WORDS(%g3, %o2, %o3, %i5, GLOBAL_SPARE, %o1)
+
+	EX_ST(STORE_INIT(%g3, %o0 + 0x20))
+	EX_ST(STORE_INIT(%o2, %o0 + 0x28))
+
+	EX_LD(LOAD_TWIN(%i1 + %i3, %g2, %g3))
+	add		%i1, 64, %i1
+	MIX_THREE_WORDS(%o3, %g2, %g3, %i5, GLOBAL_SPARE, %o1)
+
+	EX_ST(STORE_INIT(%o3, %o0 + 0x30))
+	EX_ST(STORE_INIT(%g2, %o0 + 0x38))
+
+	subcc		%g1, 64, %g1
+	bne,pt		%XCC, 9b
+	 add		%o0, 64, %o0
+
+	ba,pt		%XCC, 60f
+	 add		%i1, %i4, %i1

 10:	/* Destination is 64-byte aligned, source was only
8-byte 204 237 * aligned but it has been subtracted by 8 and we perform 205 238 * one twin load ahead, then add 8 back into source when 206 239 * we finish the loop. 207 240 */ 208 - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) 209 - 1: add %o1, 16, %o1 210 - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) 211 - add %o1, 16 + 32, %o1 212 - LOAD(prefetch, %o1, #one_read) 213 - sub %o1, 32, %o1 241 + EX_LD(LOAD_TWIN(%i1, %o4, %o5)) 242 + mov 16, %o7 243 + mov 32, %g2 244 + mov 48, %g3 245 + mov 64, %o1 246 + 1: EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3)) 247 + LOAD(prefetch, %i1 + %o1, #one_read) 214 248 EX_ST(STORE_INIT(%o5, %o0 + 0x00)) ! initializes cache line 215 - EX_ST(STORE_INIT(%g2, %o0 + 0x08)) 216 - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) 217 - add %o1, 16, %o1 218 - EX_ST(STORE_INIT(%g3, %o0 + 0x10)) 249 + EX_ST(STORE_INIT(%o2, %o0 + 0x08)) 250 + EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5)) 251 + EX_ST(STORE_INIT(%o3, %o0 + 0x10)) 219 252 EX_ST(STORE_INIT(%o4, %o0 + 0x18)) 220 - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) 221 - add %o1, 16, %o1 253 + EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3)) 222 254 EX_ST(STORE_INIT(%o5, %o0 + 0x20)) 223 - EX_ST(STORE_INIT(%g2, %o0 + 0x28)) 224 - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) 225 - EX_ST(STORE_INIT(%g3, %o0 + 0x30)) 255 + EX_ST(STORE_INIT(%o2, %o0 + 0x28)) 256 + EX_LD(LOAD_TWIN(%i1 + %o1, %o4, %o5)) 257 + add %i1, 64, %i1 258 + EX_ST(STORE_INIT(%o3, %o0 + 0x30)) 226 259 EX_ST(STORE_INIT(%o4, %o0 + 0x38)) 227 260 subcc %g1, 64, %g1 228 261 bne,pt %XCC, 1b 229 262 add %o0, 64, %o0 230 263 231 264 ba,pt %XCC, 60f 232 - add %o1, 0x8, %o1 265 + add %i1, 0x8, %i1 233 266 234 267 50: /* Destination is 64-byte aligned, and source is 16-byte 235 268 * aligned. 
236 269 */ 237 - 1: EX_LD(LOAD_TWIN(%o1, %o4, %o5)) 238 - add %o1, 16, %o1 239 - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) 240 - add %o1, 16 + 32, %o1 241 - LOAD(prefetch, %o1, #one_read) 242 - sub %o1, 32, %o1 270 + mov 16, %o7 271 + mov 32, %g2 272 + mov 48, %g3 273 + mov 64, %o1 274 + 1: EX_LD(LOAD_TWIN(%i1 + %g0, %o4, %o5)) 275 + EX_LD(LOAD_TWIN(%i1 + %o7, %o2, %o3)) 276 + LOAD(prefetch, %i1 + %o1, #one_read) 243 277 EX_ST(STORE_INIT(%o4, %o0 + 0x00)) ! initializes cache line 244 278 EX_ST(STORE_INIT(%o5, %o0 + 0x08)) 245 - EX_LD(LOAD_TWIN(%o1, %o4, %o5)) 246 - add %o1, 16, %o1 247 - EX_ST(STORE_INIT(%g2, %o0 + 0x10)) 248 - EX_ST(STORE_INIT(%g3, %o0 + 0x18)) 249 - EX_LD(LOAD_TWIN(%o1, %g2, %g3)) 250 - add %o1, 16, %o1 279 + EX_LD(LOAD_TWIN(%i1 + %g2, %o4, %o5)) 280 + EX_ST(STORE_INIT(%o2, %o0 + 0x10)) 281 + EX_ST(STORE_INIT(%o3, %o0 + 0x18)) 282 + EX_LD(LOAD_TWIN(%i1 + %g3, %o2, %o3)) 283 + add %i1, 64, %i1 251 284 EX_ST(STORE_INIT(%o4, %o0 + 0x20)) 252 285 EX_ST(STORE_INIT(%o5, %o0 + 0x28)) 253 - EX_ST(STORE_INIT(%g2, %o0 + 0x30)) 254 - EX_ST(STORE_INIT(%g3, %o0 + 0x38)) 286 + EX_ST(STORE_INIT(%o2, %o0 + 0x30)) 287 + EX_ST(STORE_INIT(%o3, %o0 + 0x38)) 255 288 subcc %g1, 64, %g1 256 289 bne,pt %XCC, 1b 257 290 add %o0, 64, %o0 ··· 304 249 60: 305 250 membar #Sync 306 251 307 - /* %o2 contains any final bytes still needed to be copied 252 + /* %i2 contains any final bytes still needed to be copied 308 253 * over. If anything is left, we copy it one byte at a time. 
309 254 */ 310 - RESTORE_ASI(%o3) 311 - brz,pt %o2, 85f 312 - sub %o0, %o1, %o3 255 + RESTORE_ASI(%i3) 256 + brz,pt %i2, 85f 257 + sub %o0, %i1, %i3 313 258 ba,a,pt %XCC, 90f 314 259 315 260 .align 64 316 261 70: /* 16 < len <= 64 */ 317 262 bne,pn %XCC, 75f 318 - sub %o0, %o1, %o3 263 + sub %o0, %i1, %i3 319 264 320 265 72: 321 - andn %o2, 0xf, %o4 322 - and %o2, 0xf, %o2 323 - 1: subcc %o4, 0x10, %o4 324 - EX_LD(LOAD(ldx, %o1, %o5)) 325 - add %o1, 0x08, %o1 326 - EX_LD(LOAD(ldx, %o1, %g1)) 327 - sub %o1, 0x08, %o1 328 - EX_ST(STORE(stx, %o5, %o1 + %o3)) 329 - add %o1, 0x8, %o1 330 - EX_ST(STORE(stx, %g1, %o1 + %o3)) 266 + andn %i2, 0xf, %i4 267 + and %i2, 0xf, %i2 268 + 1: subcc %i4, 0x10, %i4 269 + EX_LD(LOAD(ldx, %i1, %i5)) 270 + add %i1, 0x08, %i1 271 + EX_LD(LOAD(ldx, %i1, %g1)) 272 + sub %i1, 0x08, %i1 273 + EX_ST(STORE(stx, %i5, %i1 + %i3)) 274 + add %i1, 0x8, %i1 275 + EX_ST(STORE(stx, %g1, %i1 + %i3)) 331 276 bgu,pt %XCC, 1b 332 - add %o1, 0x8, %o1 333 - 73: andcc %o2, 0x8, %g0 277 + add %i1, 0x8, %i1 278 + 73: andcc %i2, 0x8, %g0 334 279 be,pt %XCC, 1f 335 280 nop 336 - sub %o2, 0x8, %o2 337 - EX_LD(LOAD(ldx, %o1, %o5)) 338 - EX_ST(STORE(stx, %o5, %o1 + %o3)) 339 - add %o1, 0x8, %o1 340 - 1: andcc %o2, 0x4, %g0 281 + sub %i2, 0x8, %i2 282 + EX_LD(LOAD(ldx, %i1, %i5)) 283 + EX_ST(STORE(stx, %i5, %i1 + %i3)) 284 + add %i1, 0x8, %i1 285 + 1: andcc %i2, 0x4, %g0 341 286 be,pt %XCC, 1f 342 287 nop 343 - sub %o2, 0x4, %o2 344 - EX_LD(LOAD(lduw, %o1, %o5)) 345 - EX_ST(STORE(stw, %o5, %o1 + %o3)) 346 - add %o1, 0x4, %o1 347 - 1: cmp %o2, 0 288 + sub %i2, 0x4, %i2 289 + EX_LD(LOAD(lduw, %i1, %i5)) 290 + EX_ST(STORE(stw, %i5, %i1 + %i3)) 291 + add %i1, 0x4, %i1 292 + 1: cmp %i2, 0 348 293 be,pt %XCC, 85f 349 294 nop 350 295 ba,pt %xcc, 90f ··· 355 300 sub %g1, 0x8, %g1 356 301 be,pn %icc, 2f 357 302 sub %g0, %g1, %g1 358 - sub %o2, %g1, %o2 303 + sub %i2, %g1, %i2 359 304 360 305 1: subcc %g1, 1, %g1 361 - EX_LD(LOAD(ldub, %o1, %o5)) 362 - EX_ST(STORE(stb, %o5, 
%o1 + %o3)) 306 + EX_LD(LOAD(ldub, %i1, %i5)) 307 + EX_ST(STORE(stb, %i5, %i1 + %i3)) 363 308 bgu,pt %icc, 1b 364 - add %o1, 1, %o1 309 + add %i1, 1, %i1 365 310 366 - 2: add %o1, %o3, %o0 367 - andcc %o1, 0x7, %g1 311 + 2: add %i1, %i3, %o0 312 + andcc %i1, 0x7, %g1 368 313 bne,pt %icc, 8f 369 314 sll %g1, 3, %g1 370 315 371 - cmp %o2, 16 316 + cmp %i2, 16 372 317 bgeu,pt %icc, 72b 373 318 nop 374 319 ba,a,pt %xcc, 73b 375 320 376 - 8: mov 64, %o3 377 - andn %o1, 0x7, %o1 378 - EX_LD(LOAD(ldx, %o1, %g2)) 379 - sub %o3, %g1, %o3 380 - andn %o2, 0x7, %o4 321 + 8: mov 64, %i3 322 + andn %i1, 0x7, %i1 323 + EX_LD(LOAD(ldx, %i1, %g2)) 324 + sub %i3, %g1, %i3 325 + andn %i2, 0x7, %i4 381 326 sllx %g2, %g1, %g2 382 - 1: add %o1, 0x8, %o1 383 - EX_LD(LOAD(ldx, %o1, %g3)) 384 - subcc %o4, 0x8, %o4 385 - srlx %g3, %o3, %o5 386 - or %o5, %g2, %o5 387 - EX_ST(STORE(stx, %o5, %o0)) 327 + 1: add %i1, 0x8, %i1 328 + EX_LD(LOAD(ldx, %i1, %g3)) 329 + subcc %i4, 0x8, %i4 330 + srlx %g3, %i3, %i5 331 + or %i5, %g2, %i5 332 + EX_ST(STORE(stx, %i5, %o0)) 388 333 add %o0, 0x8, %o0 389 334 bgu,pt %icc, 1b 390 335 sllx %g3, %g1, %g2 391 336 392 337 srl %g1, 3, %g1 393 - andcc %o2, 0x7, %o2 338 + andcc %i2, 0x7, %i2 394 339 be,pn %icc, 85f 395 - add %o1, %g1, %o1 340 + add %i1, %g1, %i1 396 341 ba,pt %xcc, 90f 397 - sub %o0, %o1, %o3 342 + sub %o0, %i1, %i3 398 343 399 344 .align 64 400 345 80: /* 0 < len <= 16 */ 401 - andcc %o3, 0x3, %g0 346 + andcc %i3, 0x3, %g0 402 347 bne,pn %XCC, 90f 403 - sub %o0, %o1, %o3 348 + sub %o0, %i1, %i3 404 349 405 350 1: 406 - subcc %o2, 4, %o2 407 - EX_LD(LOAD(lduw, %o1, %g1)) 408 - EX_ST(STORE(stw, %g1, %o1 + %o3)) 351 + subcc %i2, 4, %i2 352 + EX_LD(LOAD(lduw, %i1, %g1)) 353 + EX_ST(STORE(stw, %g1, %i1 + %i3)) 409 354 bgu,pt %XCC, 1b 410 - add %o1, 4, %o1 355 + add %i1, 4, %i1 411 356 412 - 85: retl 413 - mov EX_RETVAL(GLOBAL_SPARE), %o0 357 + 85: ret 358 + restore EX_RETVAL(%i0), %g0, %o0 414 359 415 360 .align 32 416 361 90: 417 - subcc %o2, 1, %o2 
418 - EX_LD(LOAD(ldub, %o1, %g1)) 419 - EX_ST(STORE(stb, %g1, %o1 + %o3)) 362 + subcc %i2, 1, %i2 363 + EX_LD(LOAD(ldub, %i1, %g1)) 364 + EX_ST(STORE(stb, %g1, %i1 + %i3)) 420 365 bgu,pt %XCC, 90b 421 - add %o1, 1, %o1 422 - retl 423 - mov EX_RETVAL(GLOBAL_SPARE), %o0 366 + add %i1, 1, %i1 367 + ret 368 + restore EX_RETVAL(%i0), %g0, %o0 424 369 425 370 .size FUNC_NAME, .-FUNC_NAME
-8
arch/x86_64/Kconfig
··· 60 60 bool 61 61 default y 62 62 63 - config QUICKLIST 64 - bool 65 - default y 66 - 67 - config NR_QUICK 68 - int 69 - default 2 70 - 71 63 config ISA 72 64 bool 73 65
+15 -3
arch/x86_64/ia32/ia32entry.S
··· 38 38 movq %rax,R8(%rsp) 39 39 .endm 40 40 41 + .macro LOAD_ARGS32 offset 42 + movl \offset(%rsp),%r11d 43 + movl \offset+8(%rsp),%r10d 44 + movl \offset+16(%rsp),%r9d 45 + movl \offset+24(%rsp),%r8d 46 + movl \offset+40(%rsp),%ecx 47 + movl \offset+48(%rsp),%edx 48 + movl \offset+56(%rsp),%esi 49 + movl \offset+64(%rsp),%edi 50 + movl \offset+72(%rsp),%eax 51 + .endm 52 + 41 53 .macro CFI_STARTPROC32 simple 42 54 CFI_STARTPROC \simple 43 55 CFI_UNDEFINED r8 ··· 164 152 movq $-ENOSYS,RAX(%rsp) /* really needed? */ 165 153 movq %rsp,%rdi /* &pt_regs -> arg1 */ 166 154 call syscall_trace_enter 167 - LOAD_ARGS ARGOFFSET /* reload args from stack in case ptrace changed it */ 155 + LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */ 168 156 RESTORE_REST 169 157 movl %ebp, %ebp 170 158 /* no need to do an access_ok check here because rbp has been ··· 267 255 movq $-ENOSYS,RAX(%rsp) /* really needed? */ 268 256 movq %rsp,%rdi /* &pt_regs -> arg1 */ 269 257 call syscall_trace_enter 270 - LOAD_ARGS ARGOFFSET /* reload args from stack in case ptrace changed it */ 258 + LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */ 271 259 RESTORE_REST 272 260 movl RSP-ARGOFFSET(%rsp), %r8d 273 261 /* no need to do an access_ok check here because r8 has been ··· 346 334 movq $-ENOSYS,RAX(%rsp) /* really needed? */ 347 335 movq %rsp,%rdi /* &pt_regs -> arg1 */ 348 336 call syscall_trace_enter 349 - LOAD_ARGS ARGOFFSET /* reload args from stack in case ptrace changed it */ 337 + LOAD_ARGS32 ARGOFFSET /* reload args from stack in case ptrace changed it */ 350 338 RESTORE_REST 351 339 jmp ia32_do_syscall 352 340 END(ia32_syscall)
+13 -34
arch/x86_64/kernel/acpi/wakeup.S
··· 81 81 testl $2, realmode_flags - wakeup_code 82 82 jz 1f 83 83 mov video_mode - wakeup_code, %ax 84 - call mode_seta 84 + call mode_set 85 85 1: 86 86 87 87 movw $0xb800, %ax ··· 291 291 #define VIDEO_FIRST_V7 0x0900 292 292 293 293 # Setting of user mode (AX=mode ID) => CF=success 294 + 295 + # For now, we only handle VESA modes (0x0200..0x03ff). To handle other 296 + # modes, we should probably compile in the video code from the boot 297 + # directory. 294 298 .code16 295 - mode_seta: 299 + mode_set: 296 300 movw %ax, %bx 297 - #if 0 298 - cmpb $0xff, %ah 299 - jz setalias 301 + subb $VIDEO_FIRST_VESA>>8, %bh 302 + cmpb $2, %bh 303 + jb check_vesa 300 304 301 - testb $VIDEO_RECALC>>8, %ah 302 - jnz _setrec 303 - 304 - cmpb $VIDEO_FIRST_RESOLUTION>>8, %ah 305 - jnc setres 306 - 307 - cmpb $VIDEO_FIRST_SPECIAL>>8, %ah 308 - jz setspc 309 - 310 - cmpb $VIDEO_FIRST_V7>>8, %ah 311 - jz setv7 312 - #endif 313 - 314 - cmpb $VIDEO_FIRST_VESA>>8, %ah 315 - jnc check_vesaa 316 - #if 0 317 - orb %ah, %ah 318 - jz setmenu 319 - #endif 320 - 321 - decb %ah 322 - # jz setbios Add bios modes later 323 - 324 - setbada: clc 305 + setbad: 306 + clc 325 307 ret 326 308 327 - check_vesaa: 328 - subb $VIDEO_FIRST_VESA>>8, %bh 309 + check_vesa: 329 310 orw $0x4000, %bx # Use linear frame buffer 330 311 movw $0x4f02, %ax # VESA BIOS mode set call 331 312 int $0x10 332 313 cmpw $0x004f, %ax # AL=4f if implemented 333 - jnz _setbada # AH=0 if OK 314 + jnz setbad # AH=0 if OK 334 315 335 316 stc 336 317 ret 337 - 338 - _setbada: jmp setbada 339 318 340 319 wakeup_stack_begin: # Stack grows down 341 320
-1
arch/x86_64/kernel/process.c
··· 208 208 if (__get_cpu_var(cpu_idle_state)) 209 209 __get_cpu_var(cpu_idle_state) = 0; 210 210 211 - check_pgt_cache(); 212 211 rmb(); 213 212 idle = pm_idle; 214 213 if (!idle)
-4
arch/x86_64/kernel/ptrace.c
··· 232 232 { 233 233 unsigned long tmp; 234 234 235 - /* Some code in the 64bit emulation may not be 64bit clean. 236 - Don't take any chances. */ 237 - if (test_tsk_thread_flag(child, TIF_IA32)) 238 - value &= 0xffffffff; 239 235 switch (regno) { 240 236 case offsetof(struct user_regs_struct,fs): 241 237 if (value && (value & 3) != 3)
+1 -1
arch/x86_64/kernel/smp.c
··· 241 241 } 242 242 if (!cpus_empty(cpu_mask)) 243 243 flush_tlb_others(cpu_mask, mm, FLUSH_ALL); 244 - check_pgt_cache(); 244 + 245 245 preempt_enable(); 246 246 } 247 247 EXPORT_SYMBOL(flush_tlb_mm);
+1 -1
arch/x86_64/vdso/voffset.h
··· 1 - #define VDSO_TEXT_OFFSET 0x500 1 + #define VDSO_TEXT_OFFSET 0x600
+10 -2
crypto/async_tx/async_tx.c
··· 80 80 { 81 81 enum dma_status status; 82 82 struct dma_async_tx_descriptor *iter; 83 + struct dma_async_tx_descriptor *parent; 83 84 84 85 if (!tx) 85 86 return DMA_SUCCESS; ··· 88 87 /* poll through the dependency chain, return when tx is complete */ 89 88 do { 90 89 iter = tx; 91 - while (iter->cookie == -EBUSY) 92 - iter = iter->parent; 90 + 91 + /* find the root of the unsubmitted dependency chain */ 92 + while (iter->cookie == -EBUSY) { 93 + parent = iter->parent; 94 + if (parent && parent->cookie == -EBUSY) 95 + iter = iter->parent; 96 + else 97 + break; 98 + } 93 99 94 100 status = dma_sync_wait(iter->chan, iter->cookie); 95 101 } while (status == DMA_IN_PROGRESS || (iter != tx));
+2
drivers/acpi/processor_core.c
··· 102 102 .add = acpi_processor_add, 103 103 .remove = acpi_processor_remove, 104 104 .start = acpi_processor_start, 105 + .suspend = acpi_processor_suspend, 106 + .resume = acpi_processor_resume, 105 107 }, 106 108 }; 107 109
+18 -1
drivers/acpi/processor_idle.c
··· 325 325 326 326 #endif 327 327 328 + /* 329 + * Suspend / resume control 330 + */ 331 + static int acpi_idle_suspend; 332 + 333 + int acpi_processor_suspend(struct acpi_device * device, pm_message_t state) 334 + { 335 + acpi_idle_suspend = 1; 336 + return 0; 337 + } 338 + 339 + int acpi_processor_resume(struct acpi_device * device) 340 + { 341 + acpi_idle_suspend = 0; 342 + return 0; 343 + } 344 + 328 345 static void acpi_processor_idle(void) 329 346 { 330 347 struct acpi_processor *pr = NULL; ··· 372 355 } 373 356 374 357 cx = pr->power.state; 375 - if (!cx) { 358 + if (!cx || acpi_idle_suspend) { 376 359 if (pm_idle_save) 377 360 pm_idle_save(); 378 361 else
+2 -2
drivers/acpi/sleep/Makefile
··· 1 - obj-y := poweroff.o wakeup.o 2 - obj-$(CONFIG_ACPI_SLEEP) += main.o 1 + obj-y := wakeup.o 2 + obj-y += main.o 3 3 obj-$(CONFIG_ACPI_SLEEP) += proc.o 4 4 5 5 EXTRA_CFLAGS += $(ACPI_CFLAGS)
+55 -6
drivers/acpi/sleep/main.c
··· 15 15 #include <linux/dmi.h> 16 16 #include <linux/device.h> 17 17 #include <linux/suspend.h> 18 + 19 + #include <asm/io.h> 20 + 18 21 #include <acpi/acpi_bus.h> 19 22 #include <acpi/acpi_drivers.h> 20 23 #include "sleep.h" 21 24 22 25 u8 sleep_states[ACPI_S_STATE_COUNT]; 23 26 27 + #ifdef CONFIG_PM_SLEEP 24 28 static u32 acpi_target_sleep_state = ACPI_STATE_S0; 29 + #endif 30 + 31 + int acpi_sleep_prepare(u32 acpi_state) 32 + { 33 + #ifdef CONFIG_ACPI_SLEEP 34 + /* do we have a wakeup address for S2 and S3? */ 35 + if (acpi_state == ACPI_STATE_S3) { 36 + if (!acpi_wakeup_address) { 37 + return -EFAULT; 38 + } 39 + acpi_set_firmware_waking_vector((acpi_physical_address) 40 + virt_to_phys((void *) 41 + acpi_wakeup_address)); 42 + 43 + } 44 + ACPI_FLUSH_CPU_CACHE(); 45 + acpi_enable_wakeup_device_prep(acpi_state); 46 + #endif 47 + acpi_gpe_sleep_prepare(acpi_state); 48 + acpi_enter_sleep_state_prep(acpi_state); 49 + return 0; 50 + } 25 51 26 52 #ifdef CONFIG_SUSPEND 27 53 static struct pm_ops acpi_pm_ops; ··· 301 275 return -EINVAL; 302 276 } 303 277 278 + #ifdef CONFIG_PM_SLEEP 304 279 /** 305 280 * acpi_pm_device_sleep_state - return preferred power state of ACPI device 306 281 * in the system sleep state given by %acpi_target_sleep_state ··· 376 349 *d_min_p = d_min; 377 350 return d_max; 378 351 } 352 + #endif 353 + 354 + static void acpi_power_off_prepare(void) 355 + { 356 + /* Prepare to power off the system */ 357 + acpi_sleep_prepare(ACPI_STATE_S5); 358 + } 359 + 360 + static void acpi_power_off(void) 361 + { 362 + /* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */ 363 + printk("%s called\n", __FUNCTION__); 364 + local_irq_disable(); 365 + acpi_enter_sleep_state(ACPI_STATE_S5); 366 + } 379 367 380 368 int __init acpi_sleep_init(void) 381 369 { ··· 405 363 if (acpi_disabled) 406 364 return 0; 407 365 366 + sleep_states[ACPI_STATE_S0] = 1; 367 + printk(KERN_INFO PREFIX "(supports S0"); 368 + 408 369 #ifdef CONFIG_SUSPEND 409 - 
printk(KERN_INFO PREFIX "(supports"); 410 - for (i = ACPI_STATE_S0; i < ACPI_STATE_S4; i++) { 370 + for (i = ACPI_STATE_S1; i < ACPI_STATE_S4; i++) { 411 371 status = acpi_get_sleep_type_data(i, &type_a, &type_b); 412 372 if (ACPI_SUCCESS(status)) { 413 373 sleep_states[i] = 1; 414 374 printk(" S%d", i); 415 375 } 416 376 } 417 - printk(")\n"); 418 377 419 378 pm_set_ops(&acpi_pm_ops); 420 379 #endif ··· 425 382 if (ACPI_SUCCESS(status)) { 426 383 hibernation_set_ops(&acpi_hibernation_ops); 427 384 sleep_states[ACPI_STATE_S4] = 1; 385 + printk(" S4"); 428 386 } 429 - #else 430 - sleep_states[ACPI_STATE_S4] = 0; 431 387 #endif 432 - 388 + status = acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); 389 + if (ACPI_SUCCESS(status)) { 390 + sleep_states[ACPI_STATE_S5] = 1; 391 + printk(" S5"); 392 + pm_power_off_prepare = acpi_power_off_prepare; 393 + pm_power_off = acpi_power_off; 394 + } 395 + printk(")\n"); 433 396 return 0; 434 397 }
-75
drivers/acpi/sleep/poweroff.c
··· 1 - /* 2 - * poweroff.c - ACPI handler for powering off the system. 3 - * 4 - * AKA S5, but it is independent of whether or not the kernel supports 5 - * any other sleep support in the system. 6 - * 7 - * Copyright (c) 2005 Alexey Starikovskiy <alexey.y.starikovskiy@intel.com> 8 - * 9 - * This file is released under the GPLv2. 10 - */ 11 - 12 - #include <linux/pm.h> 13 - #include <linux/init.h> 14 - #include <acpi/acpi_bus.h> 15 - #include <linux/sysdev.h> 16 - #include <asm/io.h> 17 - #include "sleep.h" 18 - 19 - int acpi_sleep_prepare(u32 acpi_state) 20 - { 21 - #ifdef CONFIG_ACPI_SLEEP 22 - /* do we have a wakeup address for S2 and S3? */ 23 - if (acpi_state == ACPI_STATE_S3) { 24 - if (!acpi_wakeup_address) { 25 - return -EFAULT; 26 - } 27 - acpi_set_firmware_waking_vector((acpi_physical_address) 28 - virt_to_phys((void *) 29 - acpi_wakeup_address)); 30 - 31 - } 32 - ACPI_FLUSH_CPU_CACHE(); 33 - acpi_enable_wakeup_device_prep(acpi_state); 34 - #endif 35 - acpi_gpe_sleep_prepare(acpi_state); 36 - acpi_enter_sleep_state_prep(acpi_state); 37 - return 0; 38 - } 39 - 40 - #ifdef CONFIG_PM 41 - 42 - static void acpi_power_off_prepare(void) 43 - { 44 - /* Prepare to power off the system */ 45 - acpi_sleep_prepare(ACPI_STATE_S5); 46 - } 47 - 48 - static void acpi_power_off(void) 49 - { 50 - /* acpi_sleep_prepare(ACPI_STATE_S5) should have already been called */ 51 - printk("%s called\n", __FUNCTION__); 52 - local_irq_disable(); 53 - /* Some SMP machines only can poweroff in boot CPU */ 54 - acpi_enter_sleep_state(ACPI_STATE_S5); 55 - } 56 - 57 - static int acpi_poweroff_init(void) 58 - { 59 - if (!acpi_disabled) { 60 - u8 type_a, type_b; 61 - acpi_status status; 62 - 63 - status = 64 - acpi_get_sleep_type_data(ACPI_STATE_S5, &type_a, &type_b); 65 - if (ACPI_SUCCESS(status)) { 66 - pm_power_off_prepare = acpi_power_off_prepare; 67 - pm_power_off = acpi_power_off; 68 - } 69 - } 70 - return 0; 71 - } 72 - 73 - late_initcall(acpi_poweroff_init); 74 - 75 - #endif /* 
CONFIG_PM */
+1 -2
drivers/acpi/video.c
··· 417 417 arg0.integer.value = level; 418 418 status = acpi_evaluate_object(device->dev->handle, "_BCM", &args, NULL); 419 419 420 - printk(KERN_DEBUG "set_level status: %x\n", status); 421 420 return status; 422 421 } 423 422 ··· 1753 1754 1754 1755 static int acpi_video_bus_start_devices(struct acpi_video_bus *video) 1755 1756 { 1756 - return acpi_video_bus_DOS(video, 1, 0); 1757 + return acpi_video_bus_DOS(video, 0, 0); 1757 1758 } 1758 1759 1759 1760 static int acpi_video_bus_stop_devices(struct acpi_video_bus *video)
+6 -4
drivers/ata/ahci.c
··· 418 418 419 419 /* ATI */ 420 420 { PCI_VDEVICE(ATI, 0x4380), board_ahci_sb600 }, /* ATI SB600 */ 421 - { PCI_VDEVICE(ATI, 0x4390), board_ahci_sb600 }, /* ATI SB700 IDE */ 422 - { PCI_VDEVICE(ATI, 0x4391), board_ahci_sb600 }, /* ATI SB700 AHCI */ 423 - { PCI_VDEVICE(ATI, 0x4392), board_ahci_sb600 }, /* ATI SB700 nraid5 */ 424 - { PCI_VDEVICE(ATI, 0x4393), board_ahci_sb600 }, /* ATI SB700 raid5 */ 421 + { PCI_VDEVICE(ATI, 0x4390), board_ahci_sb600 }, /* ATI SB700/800 */ 422 + { PCI_VDEVICE(ATI, 0x4391), board_ahci_sb600 }, /* ATI SB700/800 */ 423 + { PCI_VDEVICE(ATI, 0x4392), board_ahci_sb600 }, /* ATI SB700/800 */ 424 + { PCI_VDEVICE(ATI, 0x4393), board_ahci_sb600 }, /* ATI SB700/800 */ 425 + { PCI_VDEVICE(ATI, 0x4394), board_ahci_sb600 }, /* ATI SB700/800 */ 426 + { PCI_VDEVICE(ATI, 0x4395), board_ahci_sb600 }, /* ATI SB700/800 */ 425 427 426 428 /* VIA */ 427 429 { PCI_VDEVICE(VIA, 0x3349), board_ahci_vt8251 }, /* VIA VT8251 */
+7
drivers/ata/ata_piix.c
··· 921 921 { 922 922 static struct dmi_system_id sysids[] = { 923 923 { 924 + .ident = "TECRA M3", 925 + .matches = { 926 + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), 927 + DMI_MATCH(DMI_PRODUCT_NAME, "TECRA M3"), 928 + }, 929 + }, 930 + { 924 931 .ident = "TECRA M5", 925 932 .matches = { 926 933 DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+4
drivers/ata/libata-core.c
··· 3778 3778 { "Maxtor 6L250S0", "BANC1G10", ATA_HORKAGE_NONCQ }, 3779 3779 { "Maxtor 6B200M0", "BANC1BM0", ATA_HORKAGE_NONCQ }, 3780 3780 { "Maxtor 6B200M0", "BANC1B10", ATA_HORKAGE_NONCQ }, 3781 + { "Maxtor 7B250S0", "BANC1B70", ATA_HORKAGE_NONCQ, }, 3782 + { "Maxtor 7B300S0", "BANC1B70", ATA_HORKAGE_NONCQ }, 3783 + { "Maxtor 7V300F0", "VA111630", ATA_HORKAGE_NONCQ }, 3781 3784 { "HITACHI HDS7250SASUN500G 0621KTAWSD", "K2AOAJ0AHITACHI", 3782 3785 ATA_HORKAGE_NONCQ }, 3783 3786 /* NCQ hard hangs device under heavier load, needs hard power cycle */ ··· 3797 3794 { "WDC WD740ADFD-00NLR1", NULL, ATA_HORKAGE_NONCQ, }, 3798 3795 { "FUJITSU MHV2080BH", "00840028", ATA_HORKAGE_NONCQ, }, 3799 3796 { "ST9160821AS", "3.CLF", ATA_HORKAGE_NONCQ, }, 3797 + { "ST3160812AS", "3.AD", ATA_HORKAGE_NONCQ, }, 3800 3798 { "SAMSUNG HD401LJ", "ZZ100-15", ATA_HORKAGE_NONCQ, }, 3801 3799 3802 3800 /* devices which puke on READ_NATIVE_MAX */
+4 -1
drivers/ata/libata-sff.c
··· 297 297 dmactl = ioread8(ap->ioaddr.bmdma_addr + ATA_DMA_CMD); 298 298 iowrite8(dmactl | ATA_DMA_START, ap->ioaddr.bmdma_addr + ATA_DMA_CMD); 299 299 300 - /* Strictly, one may wish to issue a readb() here, to 300 + /* Strictly, one may wish to issue an ioread8() here, to 301 301 * flush the mmio write. However, control also passes 302 302 * to the hardware at this point, and it will interrupt 303 303 * us when we are to resume control. So, in effect, ··· 307 307 * is expected, so I think it is best to not add a readb() 308 308 * without first all the MMIO ATA cards/mobos. 309 309 * Or maybe I'm just being paranoid. 310 + * 311 + * FIXME: The posting of this write means I/O starts are 312 + * unnecessarily delayed for MMIO 310 313 */ 311 314 }
+2 -1
drivers/ata/pata_sis.c
··· 375 375 int drive_pci = sis_old_port_base(adev); 376 376 u16 timing; 377 377 378 + /* MWDMA 0-2 and UDMA 0-5 */ 378 379 const u16 mwdma_bits[] = { 0x008, 0x302, 0x301 }; 379 - const u16 udma_bits[] = { 0xF000, 0xD000, 0xB000, 0xA000, 0x9000}; 380 + const u16 udma_bits[] = { 0xF000, 0xD000, 0xB000, 0xA000, 0x9000, 0x8000 }; 380 381 381 382 pci_read_config_word(pdev, drive_pci, &timing); 382 383
+12 -4
drivers/ata/sata_sil24.c
··· 888 888 u32 slot_stat, qc_active; 889 889 int rc; 890 890 891 + /* If PCIX_IRQ_WOC, there's an inherent race window between 892 + * clearing IRQ pending status and reading PORT_SLOT_STAT 893 + * which may cause spurious interrupts afterwards. This is 894 + * unavoidable and much better than losing interrupts which 895 + * happens if IRQ pending is cleared after reading 896 + * PORT_SLOT_STAT. 897 + */ 898 + if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) 899 + writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT); 900 + 891 901 slot_stat = readl(port + PORT_SLOT_STAT); 892 902 893 903 if (unlikely(slot_stat & HOST_SSTAT_ATTN)) { 894 904 sil24_error_intr(ap); 895 905 return; 896 906 } 897 - 898 - if (ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) 899 - writel(PORT_IRQ_COMPLETE, port + PORT_IRQ_STAT); 900 907 901 908 qc_active = slot_stat & ~HOST_SSTAT_ATTN; 902 909 rc = ata_qc_complete_multiple(ap, qc_active, sil24_finish_qc); ··· 917 910 return; 918 911 } 919 912 920 - if (ata_ratelimit()) 913 + /* spurious interrupts are expected if PCIX_IRQ_WOC */ 914 + if (!(ap->flags & SIL24_FLAG_PCIX_IRQ_WOC) && ata_ratelimit()) 921 915 ata_port_printk(ap, KERN_INFO, "spurious interrupt " 922 916 "(slot_stat 0x%x active_tag %d sactive 0x%x)\n", 923 917 slot_stat, ap->active_tag, ap->sactive);
+1
drivers/base/core.c
··· 284 284 285 285 /* let the kset specific function add its keys */ 286 286 pos = data; 287 + memset(envp, 0, sizeof(envp)); 287 288 retval = kset->uevent_ops->uevent(kset, &dev->kobj, 288 289 envp, ARRAY_SIZE(envp), 289 290 pos, PAGE_SIZE);
+4
drivers/cdrom/cdrom.c
··· 1032 1032 check_disk_change(ip->i_bdev); 1033 1033 return 0; 1034 1034 err_release: 1035 + if (CDROM_CAN(CDC_LOCK) && cdi->options & CDO_LOCK) { 1036 + cdi->ops->lock_door(cdi, 0); 1037 + cdinfo(CD_OPEN, "door unlocked.\n"); 1038 + } 1035 1039 cdi->ops->release(cdi); 1036 1040 err: 1037 1041 cdi->use_count--;
+6
drivers/char/drm/i915_drv.h
··· 210 210 #define I915REG_INT_MASK_R 0x020a8 211 211 #define I915REG_INT_ENABLE_R 0x020a0 212 212 213 + #define I915REG_PIPEASTAT 0x70024 214 + #define I915REG_PIPEBSTAT 0x71024 215 + 216 + #define I915_VBLANK_INTERRUPT_ENABLE (1UL<<17) 217 + #define I915_VBLANK_CLEAR (1UL<<1) 218 + 213 219 #define SRX_INDEX 0x3c4 214 220 #define SRX_DATA 0x3c5 215 221 #define SR01 1
+12
drivers/char/drm/i915_irq.c
··· 214 214 struct drm_device *dev = (struct drm_device *) arg; 215 215 drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private; 216 216 u16 temp; 217 + u32 pipea_stats, pipeb_stats; 218 + 219 + pipea_stats = I915_READ(I915REG_PIPEASTAT); 220 + pipeb_stats = I915_READ(I915REG_PIPEBSTAT); 217 221 218 222 temp = I915_READ16(I915REG_INT_IDENTITY_R); 219 223 ··· 229 225 return IRQ_NONE; 230 226 231 227 I915_WRITE16(I915REG_INT_IDENTITY_R, temp); 228 + (void) I915_READ16(I915REG_INT_IDENTITY_R); 229 + DRM_READMEMORYBARRIER(); 232 230 233 231 dev_priv->sarea_priv->last_dispatch = READ_BREADCRUMB(dev_priv); 234 232 ··· 258 252 259 253 if (dev_priv->swaps_pending > 0) 260 254 drm_locked_tasklet(dev, i915_vblank_tasklet); 255 + I915_WRITE(I915REG_PIPEASTAT, 256 + pipea_stats|I915_VBLANK_INTERRUPT_ENABLE| 257 + I915_VBLANK_CLEAR); 258 + I915_WRITE(I915REG_PIPEBSTAT, 259 + pipeb_stats|I915_VBLANK_INTERRUPT_ENABLE| 260 + I915_VBLANK_CLEAR); 261 261 } 262 262 263 263 return IRQ_HANDLED;

+6 -3
drivers/char/hpet.c
··· 62 62 63 63 static u32 hpet_nhpet, hpet_max_freq = HPET_USER_FREQ; 64 64 65 + /* This clocksource driver currently only works on ia64 */ 66 + #ifdef CONFIG_IA64 65 67 static void __iomem *hpet_mctr; 66 68 67 69 static cycle_t read_hpet(void) ··· 81 79 .flags = CLOCK_SOURCE_IS_CONTINUOUS, 82 80 }; 83 81 static struct clocksource *hpet_clocksource; 82 + #endif 84 83 85 84 /* A lock for concurrent access by app and isr hpet activity. */ 86 85 static DEFINE_SPINLOCK(hpet_lock); ··· 946 943 printk(KERN_DEBUG "%s: 0x%lx is busy\n", 947 944 __FUNCTION__, hdp->hd_phys_address); 948 945 iounmap(hdp->hd_address); 949 - return -EBUSY; 946 + return AE_ALREADY_EXISTS; 950 947 } 951 948 } else if (res->type == ACPI_RESOURCE_TYPE_FIXED_MEMORY32) { 952 949 struct acpi_resource_fixed_memory32 *fixmem32; 953 950 954 951 fixmem32 = &res->data.fixed_memory32; 955 952 if (!fixmem32) 956 - return -EINVAL; 953 + return AE_NO_MEMORY; 957 954 958 955 hdp->hd_phys_address = fixmem32->address; 959 956 hdp->hd_address = ioremap(fixmem32->address, ··· 963 960 printk(KERN_DEBUG "%s: 0x%lx is busy\n", 964 961 __FUNCTION__, hdp->hd_phys_address); 965 962 iounmap(hdp->hd_address); 966 - return -EBUSY; 963 + return AE_ALREADY_EXISTS; 967 964 } 968 965 } else if (res->type == ACPI_RESOURCE_TYPE_EXTENDED_IRQ) { 969 966 struct acpi_resource_extended_irq *irqp;
+8 -18
drivers/char/mspec.c
··· 155 155 * mspec_close 156 156 * 157 157 * Called when unmapping a device mapping. Frees all mspec pages 158 - * belonging to the vma. 158 + * belonging to all the vma's sharing this vma_data structure. 159 159 */ 160 160 static void 161 161 mspec_close(struct vm_area_struct *vma) 162 162 { 163 163 struct vma_data *vdata; 164 - int index, last_index, result; 164 + int index, last_index; 165 165 unsigned long my_page; 166 166 167 167 vdata = vma->vm_private_data; 168 168 169 - BUG_ON(vma->vm_start < vdata->vm_start || vma->vm_end > vdata->vm_end); 169 + if (!atomic_dec_and_test(&vdata->refcnt)) 170 + return; 170 171 171 - spin_lock(&vdata->lock); 172 - index = (vma->vm_start - vdata->vm_start) >> PAGE_SHIFT; 173 - last_index = (vma->vm_end - vdata->vm_start) >> PAGE_SHIFT; 174 - for (; index < last_index; index++) { 172 + last_index = (vdata->vm_end - vdata->vm_start) >> PAGE_SHIFT; 173 + for (index = 0; index < last_index; index++) { 175 174 if (vdata->maddr[index] == 0) 176 175 continue; 177 176 /* ··· 179 180 */ 180 181 my_page = vdata->maddr[index]; 181 182 vdata->maddr[index] = 0; 182 - spin_unlock(&vdata->lock); 183 - result = mspec_zero_block(my_page, PAGE_SIZE); 184 - if (!result) 183 + if (!mspec_zero_block(my_page, PAGE_SIZE)) 185 184 uncached_free_page(my_page); 186 185 else 187 186 printk(KERN_WARNING "mspec_close(): " 188 - "failed to zero page %i\n", 189 - result); 190 - spin_lock(&vdata->lock); 187 + "failed to zero page %ld\n", my_page); 191 188 } 192 - spin_unlock(&vdata->lock); 193 - 194 - if (!atomic_dec_and_test(&vdata->refcnt)) 195 - return; 196 189 197 190 if (vdata->flags & VMD_VMALLOCED) 198 191 vfree(vdata); 199 192 else 200 193 kfree(vdata); 201 194 } 202 - 203 195 204 196 /* 205 197 * mspec_nopfn
+6 -4
drivers/char/random.c
··· 1550 1550 * As close as possible to RFC 793, which 1551 1551 * suggests using a 250 kHz clock. 1552 1552 * Further reading shows this assumes 2 Mb/s networks. 1553 - * For 10 Gb/s Ethernet, a 1 GHz clock is appropriate. 1554 - * That's funny, Linux has one built in! Use it! 1555 - * (Networks are faster now - should this be increased?) 1553 + * For 10 Mb/s Ethernet, a 1 MHz clock is appropriate. 1554 + * For 10 Gb/s Ethernet, a 1 GHz clock should be ok, but 1555 + * we also need to limit the resolution so that the u32 seq 1556 + * overlaps less than one time per MSL (2 minutes). 1557 + * Choosing a clock of 64 ns period is OK. (period of 274 s) 1556 1558 */ 1557 - seq += ktime_get_real().tv64; 1559 + seq += ktime_get_real().tv64 >> 6; 1558 1560 #if 0 1559 1561 printk("init_seq(%lx, %lx, %d, %d) = %d\n", 1560 1562 saddr, daddr, sport, dport, seq);
+10 -5
drivers/char/vt_ioctl.c
··· 770 770 /* 771 771 * Switching-from response 772 772 */ 773 + acquire_console_sem(); 773 774 if (vc->vt_newvt >= 0) { 774 775 if (arg == 0) 775 776 /* ··· 785 784 * complete the switch. 786 785 */ 787 786 int newvt; 788 - acquire_console_sem(); 789 787 newvt = vc->vt_newvt; 790 788 vc->vt_newvt = -1; 791 789 i = vc_allocate(newvt); ··· 798 798 * other console switches.. 799 799 */ 800 800 complete_change_console(vc_cons[newvt].d); 801 - release_console_sem(); 802 801 } 803 802 } 804 803 ··· 809 810 /* 810 811 * If it's just an ACK, ignore it 811 812 */ 812 - if (arg != VT_ACKACQ) 813 + if (arg != VT_ACKACQ) { 814 + release_console_sem(); 813 815 return -EINVAL; 816 + } 814 817 } 818 + release_console_sem(); 815 819 816 820 return 0; 817 821 ··· 1210 1208 /* 1211 1209 * Send the signal as privileged - kill_pid() will 1212 1210 * tell us if the process has gone or something else 1213 - * is awry 1211 + * is awry. 1212 + * 1213 + * We need to set vt_newvt *before* sending the signal or we 1214 + * have a race. 1214 1215 */ 1216 + vc->vt_newvt = new_vc->vc_num; 1215 1217 if (kill_pid(vc->vt_pid, vc->vt_mode.relsig, 1) == 0) { 1216 1218 /* 1217 1219 * It worked. Mark the vt to switch to and 1218 1220 * return. The process needs to send us a 1219 1221 * VT_RELDISP ioctl to complete the switch. 1220 1222 */ 1221 - vc->vt_newvt = new_vc->vc_num; 1222 1223 return; 1223 1224 } 1224 1225
+1 -1
drivers/ieee1394/ieee1394_core.c
··· 1273 1273 unregister_chrdev_region(IEEE1394_CORE_DEV, 256); 1274 1274 } 1275 1275 1276 - fs_initcall(ieee1394_init); /* same as ohci1394 */ 1276 + module_init(ieee1394_init); 1277 1277 module_exit(ieee1394_cleanup); 1278 1278 1279 1279 /* Exported symbols */
+1 -3
drivers/ieee1394/ohci1394.c
··· 3537 3537 return pci_register_driver(&ohci1394_pci_driver); 3538 3538 } 3539 3539 3540 - /* Register before most other device drivers. 3541 - * Useful for remote debugging via physical DMA, e.g. using firescope. */ 3542 - fs_initcall(ohci1394_init); 3540 + module_init(ohci1394_init); 3543 3541 module_exit(ohci1394_cleanup);
+49 -13
drivers/infiniband/hw/mlx4/qp.c
··· 1211 1211 dseg->qkey = cpu_to_be32(wr->wr.ud.remote_qkey); 1212 1212 } 1213 1213 1214 - static void set_data_seg(struct mlx4_wqe_data_seg *dseg, 1215 - struct ib_sge *sg) 1214 + static void set_mlx_icrc_seg(void *dseg) 1216 1215 { 1217 - dseg->byte_count = cpu_to_be32(sg->length); 1216 + u32 *t = dseg; 1217 + struct mlx4_wqe_inline_seg *iseg = dseg; 1218 + 1219 + t[1] = 0; 1220 + 1221 + /* 1222 + * Need a barrier here before writing the byte_count field to 1223 + * make sure that all the data is visible before the 1224 + * byte_count field is set. Otherwise, if the segment begins 1225 + * a new cacheline, the HCA prefetcher could grab the 64-byte 1226 + * chunk and get a valid (!= * 0xffffffff) byte count but 1227 + * stale data, and end up sending the wrong data. 1228 + */ 1229 + wmb(); 1230 + 1231 + iseg->byte_count = cpu_to_be32((1 << 31) | 4); 1232 + } 1233 + 1234 + static void set_data_seg(struct mlx4_wqe_data_seg *dseg, struct ib_sge *sg) 1235 + { 1218 1236 dseg->lkey = cpu_to_be32(sg->lkey); 1219 1237 dseg->addr = cpu_to_be64(sg->addr); 1238 + 1239 + /* 1240 + * Need a barrier here before writing the byte_count field to 1241 + * make sure that all the data is visible before the 1242 + * byte_count field is set. Otherwise, if the segment begins 1243 + * a new cacheline, the HCA prefetcher could grab the 64-byte 1244 + * chunk and get a valid (!= * 0xffffffff) byte count but 1245 + * stale data, and end up sending the wrong data. 
1246 + */ 1247 + wmb(); 1248 + 1249 + dseg->byte_count = cpu_to_be32(sg->length); 1220 1250 } 1221 1251 1222 1252 int mlx4_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr, ··· 1255 1225 struct mlx4_ib_qp *qp = to_mqp(ibqp); 1256 1226 void *wqe; 1257 1227 struct mlx4_wqe_ctrl_seg *ctrl; 1228 + struct mlx4_wqe_data_seg *dseg; 1258 1229 unsigned long flags; 1259 1230 int nreq; 1260 1231 int err = 0; ··· 1355 1324 break; 1356 1325 } 1357 1326 1358 - for (i = 0; i < wr->num_sge; ++i) { 1359 - set_data_seg(wqe, wr->sg_list + i); 1327 + /* 1328 + * Write data segments in reverse order, so as to 1329 + * overwrite cacheline stamp last within each 1330 + * cacheline. This avoids issues with WQE 1331 + * prefetching. 1332 + */ 1360 1333 1361 - wqe += sizeof (struct mlx4_wqe_data_seg); 1362 - size += sizeof (struct mlx4_wqe_data_seg) / 16; 1363 - } 1334 + dseg = wqe; 1335 + dseg += wr->num_sge - 1; 1336 + size += wr->num_sge * (sizeof (struct mlx4_wqe_data_seg) / 16); 1364 1337 1365 1338 /* Add one more inline data segment for ICRC for MLX sends */ 1366 - if (qp->ibqp.qp_type == IB_QPT_SMI || qp->ibqp.qp_type == IB_QPT_GSI) { 1367 - ((struct mlx4_wqe_inline_seg *) wqe)->byte_count = 1368 - cpu_to_be32((1 << 31) | 4); 1369 - ((u32 *) wqe)[1] = 0; 1370 - wqe += sizeof (struct mlx4_wqe_data_seg); 1339 + if (unlikely(qp->ibqp.qp_type == IB_QPT_SMI || 1340 + qp->ibqp.qp_type == IB_QPT_GSI)) { 1341 + set_mlx_icrc_seg(dseg + 1); 1371 1342 size += sizeof (struct mlx4_wqe_data_seg) / 16; 1372 1343 } 1344 + 1345 + for (i = wr->num_sge - 1; i >= 0; --i, --dseg) 1346 + set_data_seg(dseg, wr->sg_list + i); 1373 1347 1374 1348 ctrl->fence_size = (wr->send_flags & IB_SEND_FENCE ? 1375 1349 MLX4_WQE_CTRL_FENCE : 0) | size;
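The wmb() added in both mlx4 helpers enforces a publish pattern: make the payload globally visible first, then write the byte_count field that the HCA polls. A userspace analogue using C11 release/acquire semantics (struct desc, publish, consume and the 0xffffffff "invalid" marker are illustrative, not the real mlx4 descriptor layout):

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

/* Hypothetical descriptor: the consumer polls byte_count, so the
 * payload must be visible before byte_count becomes valid. */
struct desc {
    char payload[60];
    atomic_uint byte_count;   /* 0xffffffff while invalid */
};

static void publish(struct desc *d, const char *data, unsigned len)
{
    memcpy(d->payload, data, len);
    /* release store: orders the payload writes before the flag
     * write, the userspace analogue of the driver's wmb() */
    atomic_store_explicit(&d->byte_count, len, memory_order_release);
}

static unsigned consume(struct desc *d, char *out)
{
    unsigned len = atomic_load_explicit(&d->byte_count,
                                        memory_order_acquire);
    if (len == 0xffffffffu)
        return 0;                 /* not published yet */
    memcpy(out, d->payload, len);
    return len;
}
```

Writing the data segments in reverse order in the patch serves the same goal at cacheline granularity: the word the prefetcher keys on is overwritten last.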
+1 -1
drivers/input/joystick/Kconfig
··· 277 277 278 278 config JOYSTICK_XPAD_LEDS 279 279 bool "LED Support for Xbox360 controller 'BigX' LED" 280 - depends on LEDS_CLASS && JOYSTICK_XPAD 280 + depends on JOYSTICK_XPAD && (LEDS_CLASS=y || LEDS_CLASS=JOYSTICK_XPAD) 281 281 ---help--- 282 282 This option enables support for the LED which surrounds the Big X on 283 283 XBox 360 controller.
+4 -2
drivers/input/mouse/appletouch.c
··· 328 328 { 329 329 int x, y, x_z, y_z, x_f, y_f; 330 330 int retval, i, j; 331 + int key; 331 332 struct atp *dev = urb->context; 332 333 333 334 switch (urb->status) { ··· 469 468 ATP_XFACT, &x_z, &x_f); 470 469 y = atp_calculate_abs(dev->xy_acc + ATP_XSENSORS, ATP_YSENSORS, 471 470 ATP_YFACT, &y_z, &y_f); 471 + key = dev->data[dev->datalen - 1] & 1; 472 472 473 473 if (x && y) { 474 474 if (dev->x_old != -1) { ··· 507 505 the first touch unless reinitialised. Do so if it's been 508 506 idle for a while in order to avoid waking the kernel up 509 507 several hundred times a second */ 510 - if (atp_is_geyser_3(dev)) { 508 + if (!key && atp_is_geyser_3(dev)) { 511 509 dev->idlecount++; 512 510 if (dev->idlecount == 10) { 513 511 dev->valid = 0; ··· 516 514 } 517 515 } 518 516 519 - input_report_key(dev->input, BTN_LEFT, dev->data[dev->datalen - 1] & 1); 517 + input_report_key(dev->input, BTN_LEFT, key); 520 518 input_sync(dev->input); 521 519 522 520 exit:
+2 -1
drivers/kvm/Kconfig
··· 6 6 depends on X86 7 7 default y 8 8 ---help--- 9 - Say Y here to get to see options for virtualization guest drivers. 9 + Say Y here to get to see options for using your Linux host to run other 10 + operating systems inside virtual machines (guests). 10 11 This option alone does not add any kernel code. 11 12 12 13 If you say N, all options in this submenu will be skipped and disabled.
+3 -3
drivers/lguest/lguest_asm.S
··· 22 22 jmp lguest_init 23 23 24 24 /*G:055 We create a macro which puts the assembler code between lgstart_ and 25 - * lgend_ markers. These templates end up in the .init.text section, so they 26 - * are discarded after boot. */ 25 + * lgend_ markers. These templates are put in the .text section: they can't be 26 + * discarded after boot as we may need to patch modules, too. */ 27 + .text 27 28 #define LGUEST_PATCH(name, insns...) \ 28 29 lgstart_##name: insns; lgend_##name:; \ 29 30 .globl lgstart_##name; .globl lgend_##name ··· 35 34 LGUEST_PATCH(pushf, movl lguest_data+LGUEST_DATA_irq_enabled, %eax) 36 35 /*:*/ 37 36 38 - .text 39 37 /* These demark the EIP range where host should never deliver interrupts. */ 40 38 .global lguest_noirq_start 41 39 .global lguest_noirq_end
+7 -10
drivers/md/raid5.c
··· 514 514 struct stripe_head *sh = stripe_head_ref; 515 515 struct bio *return_bi = NULL; 516 516 raid5_conf_t *conf = sh->raid_conf; 517 - int i, more_to_read = 0; 517 + int i; 518 518 519 519 pr_debug("%s: stripe %llu\n", __FUNCTION__, 520 520 (unsigned long long)sh->sector); ··· 522 522 /* clear completed biofills */ 523 523 for (i = sh->disks; i--; ) { 524 524 struct r5dev *dev = &sh->dev[i]; 525 - /* check if this stripe has new incoming reads */ 526 - if (dev->toread) 527 - more_to_read++; 528 525 529 526 /* acknowledge completion of a biofill operation */ 530 - /* and check if we need to reply to a read request 531 - */ 532 - if (test_bit(R5_Wantfill, &dev->flags) && !dev->toread) { 527 + /* and check if we need to reply to a read request, 528 + * new R5_Wantfill requests are held off until 529 + * !test_bit(STRIPE_OP_BIOFILL, &sh->ops.pending) 530 + */ 531 + if (test_and_clear_bit(R5_Wantfill, &dev->flags)) { 533 532 struct bio *rbi, *rbi2; 534 - clear_bit(R5_Wantfill, &dev->flags); 535 533 536 534 /* The access to dev->read is outside of the 537 535 * spin_lock_irq(&conf->device_lock), but is protected ··· 556 558 557 559 return_io(return_bi); 558 560 559 - if (more_to_read) 560 - set_bit(STRIPE_HANDLE, &sh->state); 561 + set_bit(STRIPE_HANDLE, &sh->state); 561 562 release_stripe(sh); 562 563 } 563 564
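The raid5 fix above collapses a separate check and clear of R5_Wantfill into one atomic test_and_clear_bit(), so a concurrently arriving request cannot slip between the test and the clear. A sketch of the same primitive over a C11 atomic word (test_and_clear and R5_WANTFILL here are illustrative stand-ins, not the kernel bitops):

```c
#include <assert.h>
#include <stdatomic.h>

#define R5_WANTFILL (1u << 0)    /* illustrative flag bit */

static atomic_uint flags;

/* Analogue of test_and_clear_bit(): atomically clear the bit and
 * report whether it was set, so check-then-clear cannot race. */
static int test_and_clear(atomic_uint *v, unsigned bit)
{
    return (atomic_fetch_and(v, ~bit) & bit) != 0;
}
```

Exactly one caller observes the bit as set, which is the property the biofill completion path relies on.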
+4 -2
drivers/media/video/ivtv/ivtv-fileops.c
··· 754 754 ivtv_yuv_close(itv); 755 755 } 756 756 if (s->type == IVTV_DEC_STREAM_TYPE_YUV && itv->output_mode == OUT_YUV) 757 - itv->output_mode = OUT_NONE; 757 + itv->output_mode = OUT_NONE; 758 + else if (s->type == IVTV_DEC_STREAM_TYPE_YUV && itv->output_mode == OUT_UDMA_YUV) 759 + itv->output_mode = OUT_NONE; 758 760 else if (s->type == IVTV_DEC_STREAM_TYPE_MPG && itv->output_mode == OUT_MPG) 759 - itv->output_mode = OUT_NONE; 761 + itv->output_mode = OUT_NONE; 760 762 761 763 itv->speed = 0; 762 764 clear_bit(IVTV_F_I_DEC_PAUSED, &itv->i_flags);
+2 -3
drivers/media/video/usbvision/usbvision-video.c
··· 1387 1387 .ioctl = video_ioctl2, 1388 1388 .llseek = no_llseek, 1389 1389 /* .poll = video_poll, */ 1390 - .mmap = usbvision_v4l2_mmap, 1391 1390 .compat_ioctl = v4l_compat_ioctl32, 1392 1391 }; 1393 1392 static struct video_device usbvision_video_template = { ··· 1412 1413 .vidioc_s_input = vidioc_s_input, 1413 1414 .vidioc_queryctrl = vidioc_queryctrl, 1414 1415 .vidioc_g_audio = vidioc_g_audio, 1415 - .vidioc_g_audio = vidioc_s_audio, 1416 + .vidioc_s_audio = vidioc_s_audio, 1416 1417 .vidioc_g_ctrl = vidioc_g_ctrl, 1417 1418 .vidioc_s_ctrl = vidioc_s_ctrl, 1418 1419 .vidioc_streamon = vidioc_streamon, ··· 1458 1459 .vidioc_s_input = vidioc_s_input, 1459 1460 .vidioc_queryctrl = vidioc_queryctrl, 1460 1461 .vidioc_g_audio = vidioc_g_audio, 1461 - .vidioc_g_audio = vidioc_s_audio, 1462 + .vidioc_s_audio = vidioc_s_audio, 1462 1463 .vidioc_g_ctrl = vidioc_g_ctrl, 1463 1464 .vidioc_s_ctrl = vidioc_s_ctrl, 1464 1465 .vidioc_g_tuner = vidioc_g_tuner,
+4 -3
drivers/net/bnx2.c
··· 54 54 55 55 #define DRV_MODULE_NAME "bnx2" 56 56 #define PFX DRV_MODULE_NAME ": " 57 - #define DRV_MODULE_VERSION "1.6.4" 58 - #define DRV_MODULE_RELDATE "August 3, 2007" 57 + #define DRV_MODULE_VERSION "1.6.5" 58 + #define DRV_MODULE_RELDATE "September 20, 2007" 59 59 60 60 #define RUN_AT(x) (jiffies + (x)) 61 61 ··· 6727 6727 } else if (CHIP_NUM(bp) == CHIP_NUM_5706 || 6728 6728 CHIP_NUM(bp) == CHIP_NUM_5708) 6729 6729 bp->phy_flags |= PHY_CRC_FIX_FLAG; 6730 - else if (CHIP_ID(bp) == CHIP_ID_5709_A0) 6730 + else if (CHIP_ID(bp) == CHIP_ID_5709_A0 || 6731 + CHIP_ID(bp) == CHIP_ID_5709_A1) 6731 6732 bp->phy_flags |= PHY_DIS_EARLY_DAC_FLAG; 6732 6733 6733 6734 if ((CHIP_ID(bp) == CHIP_ID_5708_A0) ||
+1
drivers/net/e1000/e1000_ethtool.c
··· 1726 1726 case E1000_DEV_ID_82571EB_QUAD_COPPER: 1727 1727 case E1000_DEV_ID_82571EB_QUAD_FIBER: 1728 1728 case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE: 1729 + case E1000_DEV_ID_82571PT_QUAD_COPPER: 1729 1730 case E1000_DEV_ID_82546GB_QUAD_COPPER_KSP3: 1730 1731 /* quad port adapters only support WoL on port A */ 1731 1732 if (!adapter->quad_port_a) {
+1
drivers/net/e1000/e1000_hw.c
··· 387 387 case E1000_DEV_ID_82571EB_SERDES_DUAL: 388 388 case E1000_DEV_ID_82571EB_SERDES_QUAD: 389 389 case E1000_DEV_ID_82571EB_QUAD_COPPER: 390 + case E1000_DEV_ID_82571PT_QUAD_COPPER: 390 391 case E1000_DEV_ID_82571EB_QUAD_FIBER: 391 392 case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE: 392 393 hw->mac_type = e1000_82571;
+1
drivers/net/e1000/e1000_hw.h
··· 475 475 #define E1000_DEV_ID_82571EB_FIBER 0x105F 476 476 #define E1000_DEV_ID_82571EB_SERDES 0x1060 477 477 #define E1000_DEV_ID_82571EB_QUAD_COPPER 0x10A4 478 + #define E1000_DEV_ID_82571PT_QUAD_COPPER 0x10D5 478 479 #define E1000_DEV_ID_82571EB_QUAD_FIBER 0x10A5 479 480 #define E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE 0x10BC 480 481 #define E1000_DEV_ID_82571EB_SERDES_DUAL 0x10D9
+2
drivers/net/e1000/e1000_main.c
··· 108 108 INTEL_E1000_ETHERNET_DEVICE(0x10BC), 109 109 INTEL_E1000_ETHERNET_DEVICE(0x10C4), 110 110 INTEL_E1000_ETHERNET_DEVICE(0x10C5), 111 + INTEL_E1000_ETHERNET_DEVICE(0x10D5), 111 112 INTEL_E1000_ETHERNET_DEVICE(0x10D9), 112 113 INTEL_E1000_ETHERNET_DEVICE(0x10DA), 113 114 /* required last entry */ ··· 1102 1101 case E1000_DEV_ID_82571EB_QUAD_COPPER: 1103 1102 case E1000_DEV_ID_82571EB_QUAD_FIBER: 1104 1103 case E1000_DEV_ID_82571EB_QUAD_COPPER_LOWPROFILE: 1104 + case E1000_DEV_ID_82571PT_QUAD_COPPER: 1105 1105 /* if quad port adapter, disable WoL on all but port A */ 1106 1106 if (global_quad_port_a != 0) 1107 1107 adapter->eeprom_wol = 0;
+1 -4
drivers/net/mv643xx_eth.c
··· 534 534 } 535 535 536 536 /* PHY status changed */ 537 - if (eth_int_cause_ext & ETH_INT_CAUSE_PHY) { 537 + if (eth_int_cause_ext & (ETH_INT_CAUSE_PHY | ETH_INT_CAUSE_STATE)) { 538 538 struct ethtool_cmd cmd; 539 539 540 540 if (mii_link_ok(&mp->mii)) { ··· 1357 1357 #endif 1358 1358 1359 1359 dev->watchdog_timeo = 2 * HZ; 1360 - dev->tx_queue_len = mp->tx_ring_size; 1361 1360 dev->base_addr = 0; 1362 1361 dev->change_mtu = mv643xx_eth_change_mtu; 1363 1362 dev->do_ioctl = mv643xx_eth_do_ioctl; ··· 2767 2768 .get_stats_count = mv643xx_get_stats_count, 2768 2769 .get_ethtool_stats = mv643xx_get_ethtool_stats, 2769 2770 .get_strings = mv643xx_get_strings, 2770 - .get_stats_count = mv643xx_get_stats_count, 2771 - .get_ethtool_stats = mv643xx_get_ethtool_stats, 2772 2771 .nway_reset = mv643xx_eth_nway_restart, 2773 2772 }; 2774 2773
+3 -1
drivers/net/mv643xx_eth.h
··· 64 64 #define ETH_INT_CAUSE_TX_ERROR (ETH_TX_QUEUES_ENABLED << 8) 65 65 #define ETH_INT_CAUSE_TX (ETH_INT_CAUSE_TX_DONE | ETH_INT_CAUSE_TX_ERROR) 66 66 #define ETH_INT_CAUSE_PHY 0x00010000 67 - #define ETH_INT_UNMASK_ALL_EXT (ETH_INT_CAUSE_TX | ETH_INT_CAUSE_PHY) 67 + #define ETH_INT_CAUSE_STATE 0x00100000 68 + #define ETH_INT_UNMASK_ALL_EXT (ETH_INT_CAUSE_TX | ETH_INT_CAUSE_PHY | \ 69 + ETH_INT_CAUSE_STATE) 68 70 69 71 #define ETH_INT_MASK_ALL 0x00000000 70 72 #define ETH_INT_MASK_ALL_EXT 0x00000000
+3
drivers/net/myri10ge/myri10ge.c
··· 3094 3094 } 3095 3095 3096 3096 #define PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E 0x0008 3097 + #define PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E_9 0x0009 3097 3098 3098 3099 static struct pci_device_id myri10ge_pci_tbl[] = { 3099 3100 {PCI_DEVICE(PCI_VENDOR_ID_MYRICOM, PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E)}, 3101 + {PCI_DEVICE 3102 + (PCI_VENDOR_ID_MYRICOM, PCI_DEVICE_ID_MYRICOM_MYRI10GE_Z8E_9)}, 3100 3103 {0}, 3101 3104 }; 3102 3105
+1 -1
drivers/net/pcmcia/3c589_cs.c
··· 116 116 spinlock_t lock; 117 117 }; 118 118 119 - static const char *if_names[] = { "auto", "10base2", "10baseT", "AUI" }; 119 + static const char *if_names[] = { "auto", "10baseT", "10base2", "AUI" }; 120 120 121 121 /*====================================================================*/ 122 122
+1
drivers/net/phy/phy.c
··· 409 409 410 410 return 0; 411 411 } 412 + EXPORT_SYMBOL(phy_mii_ioctl); 412 413 413 414 /** 414 415 * phy_start_aneg - start auto-negotiation for this PHY device
+6 -8
drivers/net/ppp_mppe.c
··· 136 136 * Key Derivation, from RFC 3078, RFC 3079. 137 137 * Equivalent to Get_Key() for MS-CHAP as described in RFC 3079. 138 138 */ 139 - static void get_new_key_from_sha(struct ppp_mppe_state * state, unsigned char *InterimKey) 139 + static void get_new_key_from_sha(struct ppp_mppe_state * state) 140 140 { 141 141 struct hash_desc desc; 142 142 struct scatterlist sg[4]; ··· 153 153 desc.flags = 0; 154 154 155 155 crypto_hash_digest(&desc, sg, nbytes, state->sha1_digest); 156 - 157 - memcpy(InterimKey, state->sha1_digest, state->keylen); 158 156 } 159 157 160 158 /* ··· 161 163 */ 162 164 static void mppe_rekey(struct ppp_mppe_state * state, int initial_key) 163 165 { 164 - unsigned char InterimKey[MPPE_MAX_KEY_LEN]; 165 166 struct scatterlist sg_in[1], sg_out[1]; 166 167 struct blkcipher_desc desc = { .tfm = state->arc4 }; 167 168 168 - get_new_key_from_sha(state, InterimKey); 169 + get_new_key_from_sha(state); 169 170 if (!initial_key) { 170 - crypto_blkcipher_setkey(state->arc4, InterimKey, state->keylen); 171 - setup_sg(sg_in, InterimKey, state->keylen); 171 + crypto_blkcipher_setkey(state->arc4, state->sha1_digest, 172 + state->keylen); 173 + setup_sg(sg_in, state->sha1_digest, state->keylen); 172 174 setup_sg(sg_out, state->session_key, state->keylen); 173 175 if (crypto_blkcipher_encrypt(&desc, sg_out, sg_in, 174 176 state->keylen) != 0) { 175 177 printk(KERN_WARNING "mppe_rekey: cipher_encrypt failed\n"); 176 178 } 177 179 } else { 178 - memcpy(state->session_key, InterimKey, state->keylen); 180 + memcpy(state->session_key, state->sha1_digest, state->keylen); 179 181 } 180 182 if (state->keylen == 8) { 181 183 /* See RFC 3078 */
+1 -2
drivers/net/pppoe.c
··· 879 879 dev->hard_header(skb, dev, ETH_P_PPP_SES, 880 880 po->pppoe_pa.remote, NULL, data_len); 881 881 882 - if (dev_queue_xmit(skb) < 0) 883 - goto abort; 882 + dev_queue_xmit(skb); 884 883 885 884 return 1; 886 885
+53 -65
drivers/net/pppol2tp.c
··· 491 491 u16 hdrflags; 492 492 u16 tunnel_id, session_id; 493 493 int length; 494 - struct udphdr *uh; 494 + int offset; 495 495 496 496 tunnel = pppol2tp_sock_to_tunnel(sock); 497 497 if (tunnel == NULL) 498 498 goto error; 499 499 500 + /* UDP always verifies the packet length. */ 501 + __skb_pull(skb, sizeof(struct udphdr)); 502 + 500 503 /* Short packet? */ 501 - if (skb->len < sizeof(struct udphdr)) { 504 + if (!pskb_may_pull(skb, 12)) { 502 505 PRINTK(tunnel->debug, PPPOL2TP_MSG_DATA, KERN_INFO, 503 506 "%s: recv short packet (len=%d)\n", tunnel->name, skb->len); 504 507 goto error; 505 508 } 506 509 507 510 /* Point to L2TP header */ 508 - ptr = skb->data + sizeof(struct udphdr); 511 + ptr = skb->data; 509 512 510 513 /* Get L2TP header flags */ 511 514 hdrflags = ntohs(*(__be16*)ptr); 512 515 513 516 /* Trace packet contents, if enabled */ 514 517 if (tunnel->debug & PPPOL2TP_MSG_DATA) { 518 + length = min(16u, skb->len); 519 + if (!pskb_may_pull(skb, length)) 520 + goto error; 521 + 515 522 printk(KERN_DEBUG "%s: recv: ", tunnel->name); 516 523 517 - for (length = 0; length < 16; length++) 518 - printk(" %02X", ptr[length]); 524 + offset = 0; 525 + do { 526 + printk(" %02X", ptr[offset]); 527 + } while (++offset < length); 528 + 519 529 printk("\n"); 520 530 } 521 531 522 532 /* Get length of L2TP packet */ 523 - uh = (struct udphdr *) skb_transport_header(skb); 524 - length = ntohs(uh->len) - sizeof(struct udphdr); 525 - 526 - /* Too short? */ 527 - if (length < 12) { 528 - PRINTK(tunnel->debug, PPPOL2TP_MSG_DATA, KERN_INFO, 529 - "%s: recv short L2TP packet (len=%d)\n", tunnel->name, length); 530 - goto error; 531 - } 533 + length = skb->len; 532 534 533 535 /* If type is control packet, it is handled by userspace. */ 534 536 if (hdrflags & L2TP_HDRFLAG_T) { ··· 608 606 "%s: recv data has no seq numbers when required. " 609 607 "Discarding\n", session->name); 610 608 session->stats.rx_seq_discards++; 611 - session->stats.rx_errors++; 612 609 goto discard; 613 610 } 614 611 ··· 626 625 "%s: recv data has no seq numbers when required. " 627 626 "Discarding\n", session->name); 628 627 session->stats.rx_seq_discards++; 629 - session->stats.rx_errors++; 630 628 goto discard; 631 629 } 632 630 ··· 634 634 } 635 635 636 636 /* If offset bit set, skip it. */ 637 - if (hdrflags & L2TP_HDRFLAG_O) 638 - ptr += 2 + ntohs(*(__be16 *) ptr); 637 + if (hdrflags & L2TP_HDRFLAG_O) { 638 + offset = ntohs(*(__be16 *)ptr); 639 + skb->transport_header += 2 + offset; 640 + if (!pskb_may_pull(skb, skb_transport_offset(skb) + 2)) 641 + goto discard; 642 + } 639 643 640 - skb_pull(skb, ptr - skb->data); 644 + __skb_pull(skb, skb_transport_offset(skb)); 641 645 642 646 /* Skip PPP header, if present. In testing, Microsoft L2TP clients 643 647 * don't send the PPP header (PPP header compression enabled), but ··· 677 673 */ 678 674 if (PPPOL2TP_SKB_CB(skb)->ns != session->nr) { 679 675 session->stats.rx_seq_discards++; 680 - session->stats.rx_errors++; 681 676 PRINTK(session->debug, PPPOL2TP_MSG_SEQ, KERN_DEBUG, 682 677 "%s: oos pkt %hu len %d discarded, " 683 678 "waiting for %hu, reorder_q_len=%d\n", ··· 701 698 return 0; 702 699 703 700 discard: 701 + session->stats.rx_errors++; 704 702 kfree_skb(skb); 705 703 sock_put(session->sock); 706 704 ··· 962 958 int data_len = skb->len; 963 959 struct inet_sock *inet; 964 960 __wsum csum = 0; 965 - struct sk_buff *skb2 = NULL; 966 961 struct udphdr *uh; 967 962 unsigned int len; 968 963 ··· 992 989 */ 993 990 headroom = NET_SKB_PAD + sizeof(struct iphdr) + 994 991 sizeof(struct udphdr) + hdr_len + sizeof(ppph); 995 - if (skb_headroom(skb) < headroom) { 996 - skb2 = skb_realloc_headroom(skb, headroom); 997 - if (skb2 == NULL) 998 - goto abort; 999 - } else 1000 - skb2 = skb; 1001 - 1002 - /* Check that the socket has room */ 1003 - if (atomic_read(&sk_tun->sk_wmem_alloc) < sk_tun->sk_sndbuf) 1004 - skb_set_owner_w(skb2, sk_tun); 1005 - else 1006 - goto discard; 992 + if (skb_cow_head(skb, headroom)) 993 + goto abort; 1007 994 1008 995 /* Setup PPP header */ 1009 - skb_push(skb2, sizeof(ppph)); 1010 - skb2->data[0] = ppph[0]; 1011 - skb2->data[1] = ppph[1]; 996 + __skb_push(skb, sizeof(ppph)); 997 + skb->data[0] = ppph[0]; 998 + skb->data[1] = ppph[1]; 1012 999 1013 1000 /* Setup L2TP header */ 1014 - skb_push(skb2, hdr_len); 1015 - pppol2tp_build_l2tp_header(session, skb2->data); 1001 + pppol2tp_build_l2tp_header(session, __skb_push(skb, hdr_len)); 1016 1002 1017 1003 /* Setup UDP header */ 1018 1004 inet = inet_sk(sk_tun); 1019 - skb_push(skb2, sizeof(struct udphdr)); 1020 - skb_reset_transport_header(skb2); 1021 - uh = (struct udphdr *) skb2->data; 1005 + __skb_push(skb, sizeof(*uh)); 1006 + skb_reset_transport_header(skb); 1007 + uh = udp_hdr(skb); 1022 1008 uh->source = inet->sport; 1023 1009 uh->dest = inet->dport; 1024 1010 uh->len = htons(sizeof(struct udphdr) + hdr_len + sizeof(ppph) + data_len); 1025 1011 uh->check = 0; 1026 1012 1027 - /* Calculate UDP checksum if configured to do so */ 1013 + /* *BROKEN* Calculate UDP checksum if configured to do so */ 1028 1014 if (sk_tun->sk_no_check != UDP_CSUM_NOXMIT) 1029 - csum = udp_csum_outgoing(sk_tun, skb2); 1015 + csum = udp_csum_outgoing(sk_tun, skb); 1030 1016 1031 1017 /* Debug */ 1032 1018 if (session->send_seq) ··· 1028 1036 1029 1037 if (session->debug & PPPOL2TP_MSG_DATA) { 1030 1038 int i; 1031 - unsigned char *datap = skb2->data; 1039 + unsigned char *datap = skb->data; 1032 1040 1033 1041 printk(KERN_DEBUG "%s: xmit:", session->name); 1034 1042 for (i = 0; i < data_len; i++) { ··· 1041 1049 printk("\n"); 1042 1050 } 1043 1051 1044 - memset(&(IPCB(skb2)->opt), 0, sizeof(IPCB(skb2)->opt)); 1045 - IPCB(skb2)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | 1046 - IPSKB_REROUTED); 1047 - nf_reset(skb2); 1052 + memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt)); 1053 + IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | 1054 + IPSKB_REROUTED); 1055 + nf_reset(skb); 1048 1056 1049 1057 /* Get routing info from the tunnel socket */ 1050 - dst_release(skb2->dst); 1051 - skb2->dst = sk_dst_get(sk_tun); 1058 + dst_release(skb->dst); 1059 + skb->dst = sk_dst_get(sk_tun); 1052 1060 1053 1061 /* Queue the packet to IP for output */ 1054 - len = skb2->len; 1055 - rc = ip_queue_xmit(skb2, 1); 1062 + len = skb->len; 1063 + rc = ip_queue_xmit(skb, 1); 1056 1064 1057 1065 /* Update stats */ 1058 1066 if (rc >= 0) { ··· 1065 1073 session->stats.tx_errors++; 1066 1074 } 1067 1075 1068 - /* Free the original skb */ 1069 - kfree_skb(skb); 1070 - 1071 1076 return 1; 1072 1077 1073 - discard: 1074 - /* Free the new skb. Caller will free original skb. */ 1075 - if (skb2 != skb) 1076 - kfree_skb(skb2); 1077 1078 abort: 1078 - return 0; 1079 + /* Free the original skb */ 1080 + kfree_skb(skb); 1081 + return 1; 1079 1082 } 1080 1083 1081 1084 /***************************************************************************** ··· 1313 1326 goto err; 1314 1327 } 1315 1328 1329 + sk = sock->sk; 1330 + 1316 1331 /* Quick sanity checks */ 1317 - err = -ESOCKTNOSUPPORT; 1318 - if (sock->type != SOCK_DGRAM) { 1332 + err = -EPROTONOSUPPORT; 1333 + if (sk->sk_protocol != IPPROTO_UDP) { 1319 1334 PRINTK(-1, PPPOL2TP_MSG_CONTROL, KERN_ERR, 1320 - "tunl %hu: fd %d wrong type, got %d, expected %d\n", 1321 - tunnel_id, fd, sock->type, SOCK_DGRAM); 1335 + "tunl %hu: fd %d wrong protocol, got %d, expected %d\n", 1336 + tunnel_id, fd, sk->sk_protocol, IPPROTO_UDP); 1322 1337 goto err; 1323 1338 } 1324 1339 err = -EAFNOSUPPORT; ··· 1332 1343 } 1333 1344 1334 1345 err = -ENOTCONN; 1335 - sk = sock->sk; 1336 1346 1337 1347 /* Check if this socket has already been prepped */ 1338 1348 tunnel = (struct pppol2tp_tunnel *)sk->sk_user_data;
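The pppol2tp receive-path changes above replace trust in the UDP length field with pskb_may_pull() checks before every header read. The same discipline can be sketched on a plain buffer; may_pull and parse_hdr are hypothetical names, and the 0x0002 "offset present" bit is illustrative, not the real L2TP flag layout:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct pbuf {
    const uint8_t *data;
    size_t len;
};

/* Analogue of pskb_may_pull(): is it safe to read n bytes? */
static int may_pull(const struct pbuf *b, size_t n)
{
    return b->len >= n;
}

/* Parse a 2-byte big-endian flags word plus an optional 2-byte
 * offset field, checking the remaining length before each read;
 * returns -1 on a short packet. */
static int parse_hdr(struct pbuf *b, uint16_t *flags)
{
    size_t need = 2;

    if (!may_pull(b, need))
        return -1;
    *flags = (uint16_t)((b->data[0] << 8) | b->data[1]);

    if (*flags & 0x0002) {        /* hypothetical "offset present" bit */
        need = 4;
        if (!may_pull(b, need))
            return -1;
    }
    b->data += need;
    b->len -= need;
    return 0;
}
```

The key point mirrored from the patch: the length check happens before the dereference, not after, so a truncated packet is rejected instead of read out of bounds.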
+7
drivers/net/qla3xxx.c
··· 2248 2248 qdev->rsp_consumer_index) && (work_done < work_to_do)) { 2249 2249 2250 2250 net_rsp = qdev->rsp_current; 2251 + rmb(); 2252 + /* 2253 + * Fix 4032 chipe undocumented "feature" where bit-8 is set if the 2254 + * inbound completion is for a VLAN. 2255 + */ 2256 + if (qdev->device_id == QL3032_DEVICE_ID) 2257 + net_rsp->opcode &= 0x7f; 2251 2258 switch (net_rsp->opcode) { 2252 2259 2253 2260 case OPCODE_OB_MAC_IOCB_FN0:
+13 -1
drivers/net/r8169.c
··· 1228 1228 return; 1229 1229 } 1230 1230 1231 - /* phy config for RTL8169s mac_version C chip */ 1231 + if ((tp->mac_version != RTL_GIGA_MAC_VER_02) && 1232 + (tp->mac_version != RTL_GIGA_MAC_VER_03)) 1233 + return; 1234 + 1232 1235 mdio_write(ioaddr, 31, 0x0001); //w 31 2 0 1 1233 1236 mdio_write(ioaddr, 21, 0x1000); //w 21 15 0 1000 1234 1237 mdio_write(ioaddr, 24, 0x65c7); //w 24 15 0 65c7 ··· 2570 2567 (TX_BUFFS_AVAIL(tp) >= MAX_SKB_FRAGS)) { 2571 2568 netif_wake_queue(dev); 2572 2569 } 2570 + /* 2571 + * 8168 hack: TxPoll requests are lost when the Tx packets are 2572 + * too close. Let's kick an extra TxPoll request when a burst 2573 + * of start_xmit activity is detected (if it is not detected, 2574 + * it is slow enough). -- FR 2575 + */ 2576 + smp_rmb(); 2577 + if (tp->cur_tx != dirty_tx) 2578 + RTL_W8(TxPoll, NPQ); 2573 2579 } 2574 2580 } 2575 2581
+294 -116
drivers/net/sky2.c
··· 51 51 #include "sky2.h" 52 52 53 53 #define DRV_NAME "sky2" 54 - #define DRV_VERSION "1.17" 54 + #define DRV_VERSION "1.18" 55 55 #define PFX DRV_NAME " " 56 56 57 57 /* ··· 118 118 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4351) }, /* 88E8036 */ 119 119 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4352) }, /* 88E8038 */ 120 120 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4353) }, /* 88E8039 */ 121 + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4354) }, /* 88E8040 */ 121 122 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4356) }, /* 88EC033 */ 123 + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x435A) }, /* 88E8048 */ 122 124 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4360) }, /* 88E8052 */ 123 125 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4361) }, /* 88E8050 */ 124 126 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4362) }, /* 88E8053 */ 125 127 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4363) }, /* 88E8055 */ 126 128 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4364) }, /* 88E8056 */ 129 + { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4365) }, /* 88E8070 */ 127 130 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4366) }, /* 88EC036 */ 128 131 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4367) }, /* 88EC032 */ 129 132 { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4368) }, /* 88EC034 */ ··· 150 147 "Extreme", /* 0xb5 */ 151 148 "EC", /* 0xb6 */ 152 149 "FE", /* 0xb7 */ 150 + "FE+", /* 0xb8 */ 153 151 }; 154 152 155 153 static void sky2_set_multicast(struct net_device *dev); ··· 221 217 else 222 218 sky2_write8(hw, B2_Y2_CLK_GATE, 0); 223 219 224 - if (hw->chip_id == CHIP_ID_YUKON_EC_U || 225 - hw->chip_id == CHIP_ID_YUKON_EX) { 220 + if (hw->flags & SKY2_HW_ADV_POWER_CTL) { 226 221 u32 reg; 227 222 228 223 sky2_pci_write32(hw, PCI_DEV_REG3, 0); ··· 314 311 struct sky2_port *sky2 = netdev_priv(hw->dev[port]); 315 312 u16 ctrl, ct1000, adv, pg, ledctrl, ledover, reg; 316 313 317 - if (sky2->autoneg == AUTONEG_ENABLE 318 - && !(hw->chip_id == CHIP_ID_YUKON_XL 319 - || hw->chip_id == CHIP_ID_YUKON_EC_U 320 - || hw->chip_id == CHIP_ID_YUKON_EX)) { 314 + if (sky2->autoneg == AUTONEG_ENABLE && 315 + !(hw->flags & SKY2_HW_NEWER_PHY)) { 321 316 u16 ectrl = gm_phy_read(hw, port, PHY_MARV_EXT_CTRL); 322 317 323 318 ectrl &= ~(PHY_M_EC_M_DSC_MSK | PHY_M_EC_S_DSC_MSK | ··· 335 334 336 335 ctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL); 337 336 if (sky2_is_copper(hw)) { 338 - if (hw->chip_id == CHIP_ID_YUKON_FE) { 337 + if (!(hw->flags & SKY2_HW_GIGABIT)) { 339 338 /* enable automatic crossover */ 340 339 ctrl |= PHY_M_PC_MDI_XMODE(PHY_M_PC_ENA_AUTO) >> 1; 340 + 341 + if (hw->chip_id == CHIP_ID_YUKON_FE_P && 342 + hw->chip_rev == CHIP_REV_YU_FE2_A0) { 343 + u16 spec; 344 + 345 + /* Enable Class A driver for FE+ A0 */ 346 + spec = gm_phy_read(hw, port, PHY_MARV_FE_SPEC_2); 347 + spec |= PHY_M_FESC_SEL_CL_A; 348 + gm_phy_write(hw, port, PHY_MARV_FE_SPEC_2, spec); 349 + } 341 350 } else { 342 351 /* disable energy detect */ 343 352 ctrl &= ~PHY_M_PC_EN_DET_MSK; ··· 357 346 358 347 /* downshift on PHY 88E1112 and 88E1149 is changed */ 359 348 if (sky2->autoneg == AUTONEG_ENABLE 360 - && (hw->chip_id == CHIP_ID_YUKON_XL 361 - || hw->chip_id == CHIP_ID_YUKON_EC_U 362 - || hw->chip_id == CHIP_ID_YUKON_EX)) { 349 + && (hw->flags & SKY2_HW_NEWER_PHY)) { 363 350 /* set downshift counter to 3x and enable downshift */ 364 351 ctrl &= ~PHY_M_PC_DSC_MSK; 365 352 ctrl |= PHY_M_PC_DSC(2) | PHY_M_PC_DOWN_S_ENA; ··· 373 364 gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ctrl); 374 365 375 366 /* special setup for PHY 88E1112 Fiber */ 376 - if (hw->chip_id == CHIP_ID_YUKON_XL && !sky2_is_copper(hw)) { 367 + if (hw->chip_id == CHIP_ID_YUKON_XL && (hw->flags & SKY2_HW_FIBRE_PHY)) { 377 368 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); 378 369 379 370 /* Fiber: select 1000BASE-X only mode MAC Specific Ctrl Reg. */ ··· 464 455 465 456 gma_write16(hw, port, GM_GP_CTRL, reg); 466 457 467 - if (hw->chip_id != CHIP_ID_YUKON_FE) 458 + if (hw->flags & SKY2_HW_GIGABIT) 468 459 gm_phy_write(hw, port, PHY_MARV_1000T_CTRL, ct1000); 469 460 470 461 gm_phy_write(hw, port, PHY_MARV_AUNE_ADV, adv); ··· 485 476 ctrl &= ~PHY_M_FELP_LED1_MSK; 486 477 /* change ACT LED control to blink mode */ 487 478 ctrl |= PHY_M_FELP_LED1_CTRL(LED_PAR_CTRL_ACT_BL); 479 + gm_phy_write(hw, port, PHY_MARV_FE_LED_PAR, ctrl); 480 + break; 481 + 482 + case CHIP_ID_YUKON_FE_P: 483 + /* Enable Link Partner Next Page */ 484 + ctrl = gm_phy_read(hw, port, PHY_MARV_PHY_CTRL); 485 + ctrl |= PHY_M_PC_ENA_LIP_NP; 486 + 487 + /* disable Energy Detect and enable scrambler */ 488 + ctrl &= ~(PHY_M_PC_ENA_ENE_DT | PHY_M_PC_DIS_SCRAMB); 489 + gm_phy_write(hw, port, PHY_MARV_PHY_CTRL, ctrl); 490 + 491 + /* set LED2 -> ACT, LED1 -> LINK, LED0 -> SPEED */ 492 + ctrl = PHY_M_FELP_LED2_CTRL(LED_PAR_CTRL_ACT_BL) | 493 + PHY_M_FELP_LED1_CTRL(LED_PAR_CTRL_LINK) | 494 + PHY_M_FELP_LED0_CTRL(LED_PAR_CTRL_SPEED); 495 + 488 496 gm_phy_write(hw, port, PHY_MARV_FE_LED_PAR, ctrl); 489 497 break; ··· 574 548 575 549 /* set page register to 0 */ 576 550 gm_phy_write(hw, port, PHY_MARV_EXT_ADR, 0); 551 + } else if (hw->chip_id == CHIP_ID_YUKON_FE_P && 552 + hw->chip_rev == CHIP_REV_YU_FE2_A0) { 553 + /* apply workaround for integrated resistors calibration */ 554 + gm_phy_write(hw, port, PHY_MARV_PAGE_ADDR, 17); 555 + gm_phy_write(hw, port, PHY_MARV_PAGE_DATA, 0x3f60); 577 556 } else if (hw->chip_id != CHIP_ID_YUKON_EX) { 557 + /* no effect on Yukon-XL */ 578 558 gm_phy_write(hw, port, PHY_MARV_LED_CTRL, ledctrl); 579 559 580 560 if (sky2->autoneg == AUTONEG_DISABLE || sky2->speed == SPEED_100) { ··· 701 669 702 670 static void sky2_set_tx_stfwd(struct sky2_hw *hw, unsigned port) 703 671 { 704 - if (hw->chip_id == CHIP_ID_YUKON_EX && hw->chip_rev != CHIP_REV_YU_EX_A0) { 672 + struct net_device *dev = hw->dev[port]; 673 + 674 + if (dev->mtu <= ETH_DATA_LEN) 705 675 sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), 706 - TX_STFW_ENA | 707 - (hw->dev[port]->mtu > ETH_DATA_LEN) ? TX_JUMBO_ENA : TX_JUMBO_DIS); 708 - } else { 709 - if (hw->dev[port]->mtu > ETH_DATA_LEN) { 710 - /* set Tx GMAC FIFO Almost Empty Threshold */ 711 - sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR), 712 - (ECU_JUMBO_WM << 16) | ECU_AE_THR); 676 + TX_JUMBO_DIS | TX_STFW_ENA); 713 677 714 - sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), 715 - TX_JUMBO_ENA | TX_STFW_DIS); 678 + else if (hw->chip_id != CHIP_ID_YUKON_EC_U) 679 + sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), 680 + TX_STFW_ENA | TX_JUMBO_ENA); 681 + else { 682 + /* set Tx GMAC FIFO Almost Empty Threshold */ 683 + sky2_write32(hw, SK_REG(port, TX_GMF_AE_THR), 684 + (ECU_JUMBO_WM << 16) | ECU_AE_THR); 716 685 717 - /* Can't do offload because of lack of store/forward */ 718 - hw->dev[port]->features &= ~(NETIF_F_TSO | NETIF_F_SG 719 - | NETIF_F_ALL_CSUM); 720 - } else 721 - sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), 722 - TX_JUMBO_DIS | TX_STFW_ENA); 686 + sky2_write32(hw, SK_REG(port, TX_GMF_CTRL_T), 687 + TX_JUMBO_ENA | TX_STFW_DIS); 688 + 689 + /* Can't do offload because of lack of store/forward */ 690 + dev->features &= ~(NETIF_F_TSO | NETIF_F_SG | NETIF_F_ALL_CSUM); 723 691 } 724 692 725 693 ··· 805 773 /* Configure Rx MAC FIFO */ 806 774 sky2_write8(hw, SK_REG(port, RX_GMF_CTRL_T), GMF_RST_CLR); 807 775 rx_reg = GMF_OPER_ON | GMF_RX_F_FL_ON; 808 - if (hw->chip_id == CHIP_ID_YUKON_EX) 776 + if (hw->chip_id == CHIP_ID_YUKON_EX || 777 + hw->chip_id == CHIP_ID_YUKON_FE_P) 809 778 rx_reg |= GMF_RX_OVER_ON; 810 779 811 780 sky2_write32(hw, SK_REG(port, RX_GMF_CTRL_T), rx_reg); ··· 815 782 sky2_write16(hw, SK_REG(port, RX_GMF_FL_MSK), GMR_FS_ANY_ERR); 816 783 817 784 /* Set threshold to 0xa (64 bytes) + 1 to workaround pause bug */ 818 - sky2_write16(hw, SK_REG(port, RX_GMF_FL_THR), RX_GMF_FL_THR_DEF+1); 785 + reg = RX_GMF_FL_THR_DEF + 1; 786 + /* Another magic mystery workaround from sk98lin */ 787 + if (hw->chip_id == CHIP_ID_YUKON_FE_P && 788 + hw->chip_rev == CHIP_REV_YU_FE2_A0) 789 + reg = 0x178; 790 + sky2_write16(hw, SK_REG(port, RX_GMF_FL_THR), reg); 819 791 820 792 /* Configure Tx MAC FIFO */ 821 793 sky2_write8(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_RST_CLR); 822 794 sky2_write16(hw, SK_REG(port, TX_GMF_CTRL_T), GMF_OPER_ON); 823 795 824 - if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) { 796 + /* On chips without ram buffer, pause is controled by MAC level */ 797 + if (sky2_read8(hw, B2_E_0) == 0) { 825 798 sky2_write8(hw, SK_REG(port, RX_GMF_LP_THR), 768/8); 826 799 sky2_write8(hw, SK_REG(port, RX_GMF_UP_THR), 1024/8); ··· 908 869 sky2->tx_prod = RING_NEXT(sky2->tx_prod, TX_RING_SIZE); 909 870 le->ctrl = 0; 910 871 return le; 872 + } 873 + 874 + static void tx_init(struct sky2_port *sky2) 875 + { 876 + struct sky2_tx_le *le; 877 + 878 + sky2->tx_prod = sky2->tx_cons = 0; 879 + sky2->tx_tcpsum = 0; 880 + sky2->tx_last_mss = 0; 881 + 882 + le = get_tx_le(sky2); 883 + le->addr = 0; 884 + le->opcode = OP_ADDR64 | HW_OWNER; 885 + sky2->tx_addr64 = 0; 911 886 } 912 887 913 888 static inline struct tx_ring_info *tx_le_re(struct sky2_port *sky2, ··· 1020 967 */ 1021 968 static void rx_set_checksum(struct sky2_port *sky2) 1022 969 { 1023 - struct sky2_rx_le *le; 970 + struct sky2_rx_le *le = sky2_next_rx(sky2); 1024 971 1025 - if (sky2->hw->chip_id != CHIP_ID_YUKON_EX) { 1026 - le = sky2_next_rx(sky2); 1027 - le->addr = cpu_to_le32((ETH_HLEN << 16) | ETH_HLEN); 1028 - le->ctrl = 0; 1029 - le->opcode = OP_TCPSTART | HW_OWNER; 972 + le->addr = cpu_to_le32((ETH_HLEN << 16) | ETH_HLEN); 973 + le->ctrl = 0; 974 + le->opcode = OP_TCPSTART | HW_OWNER; 1030 975 1031 - sky2_write32(sky2->hw, 1032 - Q_ADDR(rxqaddr[sky2->port], Q_CSR), 1033 - sky2->rx_csum ?
BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM); 1034 - } 1035 - 976 + sky2_write32(sky2->hw, 977 + Q_ADDR(rxqaddr[sky2->port], Q_CSR), 978 + sky2->rx_csum ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM); 1036 979 } 1037 980 1038 981 /* ··· 1224 1175 1225 1176 sky2_prefetch_init(hw, rxq, sky2->rx_le_map, RX_LE_SIZE - 1); 1226 1177 1227 - rx_set_checksum(sky2); 1178 + if (!(hw->flags & SKY2_HW_NEW_LE)) 1179 + rx_set_checksum(sky2); 1228 1180 1229 1181 /* Space needed for frame data + headers rounded up */ 1230 1182 size = roundup(sky2->netdev->mtu + ETH_HLEN + VLAN_HLEN, 8); ··· 1296 1246 struct sky2_port *sky2 = netdev_priv(dev); 1297 1247 struct sky2_hw *hw = sky2->hw; 1298 1248 unsigned port = sky2->port; 1299 - u32 ramsize, imask; 1249 + u32 imask, ramsize; 1300 1250 int cap, err = -ENOMEM; 1301 1251 struct net_device *otherdev = hw->dev[sky2->port^1]; 1302 1252 ··· 1334 1284 GFP_KERNEL); 1335 1285 if (!sky2->tx_ring) 1336 1286 goto err_out; 1337 - sky2->tx_prod = sky2->tx_cons = 0; 1287 + 1288 + tx_init(sky2); 1338 1289 1339 1290 sky2->rx_le = pci_alloc_consistent(hw->pdev, RX_LE_BYTES, 1340 1291 &sky2->rx_le_map); ··· 1354 1303 1355 1304 /* Register is number of 4K blocks on internal RAM buffer. 
*/ 1356 1305 ramsize = sky2_read8(hw, B2_E_0) * 4; 1357 - printk(KERN_INFO PFX "%s: ram buffer %dK\n", dev->name, ramsize); 1358 - 1359 1306 if (ramsize > 0) { 1360 1307 u32 rxspace; 1361 1308 1309 + pr_debug(PFX "%s: ram buffer %dK\n", dev->name, ramsize); 1362 1310 if (ramsize < 16) 1363 1311 rxspace = ramsize / 2; 1364 1312 else ··· 1486 1436 /* Check for TCP Segmentation Offload */ 1487 1437 mss = skb_shinfo(skb)->gso_size; 1488 1438 if (mss != 0) { 1489 - if (hw->chip_id != CHIP_ID_YUKON_EX) 1439 + 1440 + if (!(hw->flags & SKY2_HW_NEW_LE)) 1490 1441 mss += ETH_HLEN + ip_hdrlen(skb) + tcp_hdrlen(skb); 1491 1442 1492 1443 if (mss != sky2->tx_last_mss) { 1493 1444 le = get_tx_le(sky2); 1494 1445 le->addr = cpu_to_le32(mss); 1495 - if (hw->chip_id == CHIP_ID_YUKON_EX) 1446 + 1447 + if (hw->flags & SKY2_HW_NEW_LE) 1496 1448 le->opcode = OP_MSS | HW_OWNER; 1497 1449 else 1498 1450 le->opcode = OP_LRGLEN | HW_OWNER; ··· 1520 1468 /* Handle TCP checksum offload */ 1521 1469 if (skb->ip_summed == CHECKSUM_PARTIAL) { 1522 1470 /* On Yukon EX (some versions) encoding change. */ 1523 - if (hw->chip_id == CHIP_ID_YUKON_EX 1524 - && hw->chip_rev != CHIP_REV_YU_EX_B0) 1471 + if (hw->flags & SKY2_HW_AUTO_TX_SUM) 1525 1472 ctrl |= CALSUM; /* auto checksum */ 1526 1473 else { 1527 1474 const unsigned offset = skb_transport_offset(skb); ··· 1673 1622 if (netif_msg_ifdown(sky2)) 1674 1623 printk(KERN_INFO PFX "%s: disabling interface\n", dev->name); 1675 1624 1676 - if (netif_carrier_ok(dev) && --hw->active == 0) 1677 - del_timer(&hw->watchdog_timer); 1678 - 1679 1625 /* Stop more packets from being queued */ 1680 1626 netif_stop_queue(dev); 1681 1627 ··· 1756 1708 1757 1709 static u16 sky2_phy_speed(const struct sky2_hw *hw, u16 aux) 1758 1710 { 1759 - if (!sky2_is_copper(hw)) 1711 + if (hw->flags & SKY2_HW_FIBRE_PHY) 1760 1712 return SPEED_1000; 1761 1713 1762 - if (hw->chip_id == CHIP_ID_YUKON_FE) 1763 - return (aux & PHY_M_PS_SPEED_100) ? 
SPEED_100 : SPEED_10; 1714 + if (!(hw->flags & SKY2_HW_GIGABIT)) { 1715 + if (aux & PHY_M_PS_SPEED_100) 1716 + return SPEED_100; 1717 + else 1718 + return SPEED_10; 1719 + } 1764 1720 1765 1721 switch (aux & PHY_M_PS_SPEED_MSK) { 1766 1722 case PHY_M_PS_SPEED_1000: ··· 1797 1745 1798 1746 netif_carrier_on(sky2->netdev); 1799 1747 1800 - if (hw->active++ == 0) 1801 - mod_timer(&hw->watchdog_timer, jiffies + 1); 1802 - 1748 + mod_timer(&hw->watchdog_timer, jiffies + 1); 1803 1749 1804 1750 /* Turn on link LED */ 1805 1751 sky2_write8(hw, SK_REG(port, LNK_LED_REG), 1806 1752 LINKLED_ON | LINKLED_BLINK_OFF | LINKLED_LINKSYNC_OFF); 1807 1753 1808 - if (hw->chip_id == CHIP_ID_YUKON_XL 1809 - || hw->chip_id == CHIP_ID_YUKON_EC_U 1810 - || hw->chip_id == CHIP_ID_YUKON_EX) { 1754 + if (hw->flags & SKY2_HW_NEWER_PHY) { 1811 1755 u16 pg = gm_phy_read(hw, port, PHY_MARV_EXT_ADR); 1812 1756 u16 led = PHY_M_LEDC_LOS_CTRL(1); /* link active */ 1813 1757 ··· 1848 1800 1849 1801 netif_carrier_off(sky2->netdev); 1850 1802 1851 - /* Stop watchdog if both ports are not active */ 1852 - if (--hw->active == 0) 1853 - del_timer(&hw->watchdog_timer); 1854 - 1855 - 1856 1803 /* Turn on link LED */ 1857 1804 sky2_write8(hw, SK_REG(port, LNK_LED_REG), LINKLED_OFF); 1858 1805 ··· 1890 1847 /* Since the pause result bits seem to in different positions on 1891 1848 * different chips. look at registers. 
1892 1849 */ 1893 - if (!sky2_is_copper(hw)) { 1850 + if (hw->flags & SKY2_HW_FIBRE_PHY) { 1894 1851 /* Shift for bits in fiber PHY */ 1895 1852 advert &= ~(ADVERTISE_PAUSE_CAP|ADVERTISE_PAUSE_ASYM); 1896 1853 lpa &= ~(LPA_PAUSE_CAP|LPA_PAUSE_ASYM); ··· 2001 1958 if (new_mtu < ETH_ZLEN || new_mtu > ETH_JUMBO_MTU) 2002 1959 return -EINVAL; 2003 1960 2004 - if (new_mtu > ETH_DATA_LEN && hw->chip_id == CHIP_ID_YUKON_FE) 1961 + if (new_mtu > ETH_DATA_LEN && 1962 + (hw->chip_id == CHIP_ID_YUKON_FE || 1963 + hw->chip_id == CHIP_ID_YUKON_FE_P)) 2005 1964 return -EINVAL; 2006 1965 2007 1966 if (!netif_running(dev)) { ··· 2020 1975 2021 1976 synchronize_irq(hw->pdev->irq); 2022 1977 2023 - if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) 1978 + if (sky2_read8(hw, B2_E_0) == 0) 2024 1979 sky2_set_tx_stfwd(hw, port); 2025 1980 2026 1981 ctl = gma_read16(hw, port, GM_GP_CTRL); ··· 2148 2103 struct sky2_port *sky2 = netdev_priv(dev); 2149 2104 struct rx_ring_info *re = sky2->rx_ring + sky2->rx_next; 2150 2105 struct sk_buff *skb = NULL; 2106 + u16 count = (status & GMR_FS_LEN) >> 16; 2107 + 2108 + #ifdef SKY2_VLAN_TAG_USED 2109 + /* Account for vlan tag */ 2110 + if (sky2->vlgrp && (status & GMR_FS_VLAN)) 2111 + count -= VLAN_HLEN; 2112 + #endif 2151 2113 2152 2114 if (unlikely(netif_msg_rx_status(sky2))) 2153 2115 printk(KERN_DEBUG PFX "%s: rx slot %u status 0x%x len %d\n", ··· 2163 2111 sky2->rx_next = (sky2->rx_next + 1) % sky2->rx_pending; 2164 2112 prefetch(sky2->rx_ring + sky2->rx_next); 2165 2113 2114 + if (length < ETH_ZLEN || length > sky2->rx_data_size) 2115 + goto len_error; 2116 + 2117 + /* This chip has hardware problems that generates bogus status. 2118 + * So do only marginal checking and expect higher level protocols 2119 + * to handle crap frames. 
2120 + */ 2121 + if (sky2->hw->chip_id == CHIP_ID_YUKON_FE_P && 2122 + sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0 && 2123 + length != count) 2124 + goto okay; 2125 + 2166 2126 if (status & GMR_FS_ANY_ERR) 2167 2127 goto error; 2168 2128 2169 2129 if (!(status & GMR_FS_RX_OK)) 2170 2130 goto resubmit; 2171 2131 2172 - if (status >> 16 != length) 2173 - goto len_mismatch; 2132 + /* if length reported by DMA does not match PHY, packet was truncated */ 2133 + if (length != count) 2134 + goto len_error; 2174 2135 2136 + okay: 2175 2137 if (length < copybreak) 2176 2138 skb = receive_copy(sky2, re, length); 2177 2139 else ··· 2195 2129 2196 2130 return skb; 2197 2131 2198 - len_mismatch: 2132 + len_error: 2199 2133 /* Truncation of overlength packets 2200 2134 causes PHY length to not match MAC length */ 2201 2135 ++sky2->net_stats.rx_length_errors; 2136 + if (netif_msg_rx_err(sky2) && net_ratelimit()) 2137 + pr_info(PFX "%s: rx length error: status %#x length %d\n", 2138 + dev->name, status, length); 2139 + goto resubmit; 2202 2140 2203 2141 error: 2204 2142 ++sky2->net_stats.rx_errors; ··· 2272 2202 } 2273 2203 2274 2204 /* This chip reports checksum status differently */ 2275 - if (hw->chip_id == CHIP_ID_YUKON_EX) { 2205 + if (hw->flags & SKY2_HW_NEW_LE) { 2276 2206 if (sky2->rx_csum && 2277 2207 (le->css & (CSS_ISIPV4 | CSS_ISIPV6)) && 2278 2208 (le->css & CSS_TCPUDPCSOK)) ··· 2313 2243 if (!sky2->rx_csum) 2314 2244 break; 2315 2245 2316 - if (hw->chip_id == CHIP_ID_YUKON_EX) 2246 + /* If this happens then driver assuming wrong format */ 2247 + if (unlikely(hw->flags & SKY2_HW_NEW_LE)) { 2248 + if (net_ratelimit()) 2249 + printk(KERN_NOTICE "%s: unexpected" 2250 + " checksum status\n", 2251 + dev->name); 2317 2252 break; 2253 + } 2318 2254 2319 2255 /* Both checksum counters are programmed to start at 2320 2256 * the same offset, so unless there is a problem they ··· 2512 2436 sky2_write32(hw, Q_ADDR(q, Q_CSR), BMU_CLR_IRQ_CHK); 2513 2437 } 2514 2438 2515 - /* Check 
for lost IRQ once a second */ 2439 + static int sky2_rx_hung(struct net_device *dev) 2440 + { 2441 + struct sky2_port *sky2 = netdev_priv(dev); 2442 + struct sky2_hw *hw = sky2->hw; 2443 + unsigned port = sky2->port; 2444 + unsigned rxq = rxqaddr[port]; 2445 + u32 mac_rp = sky2_read32(hw, SK_REG(port, RX_GMF_RP)); 2446 + u8 mac_lev = sky2_read8(hw, SK_REG(port, RX_GMF_RLEV)); 2447 + u8 fifo_rp = sky2_read8(hw, Q_ADDR(rxq, Q_RP)); 2448 + u8 fifo_lev = sky2_read8(hw, Q_ADDR(rxq, Q_RL)); 2449 + 2450 + /* If idle and MAC or PCI is stuck */ 2451 + if (sky2->check.last == dev->last_rx && 2452 + ((mac_rp == sky2->check.mac_rp && 2453 + mac_lev != 0 && mac_lev >= sky2->check.mac_lev) || 2454 + /* Check if the PCI RX hang */ 2455 + (fifo_rp == sky2->check.fifo_rp && 2456 + fifo_lev != 0 && fifo_lev >= sky2->check.fifo_lev))) { 2457 + printk(KERN_DEBUG PFX "%s: hung mac %d:%d fifo %d (%d:%d)\n", 2458 + dev->name, mac_lev, mac_rp, fifo_lev, fifo_rp, 2459 + sky2_read8(hw, Q_ADDR(rxq, Q_WP))); 2460 + return 1; 2461 + } else { 2462 + sky2->check.last = dev->last_rx; 2463 + sky2->check.mac_rp = mac_rp; 2464 + sky2->check.mac_lev = mac_lev; 2465 + sky2->check.fifo_rp = fifo_rp; 2466 + sky2->check.fifo_lev = fifo_lev; 2467 + return 0; 2468 + } 2469 + } 2470 + 2516 2471 static void sky2_watchdog(unsigned long arg) 2517 2472 { 2518 2473 struct sky2_hw *hw = (struct sky2_hw *) arg; 2474 + struct net_device *dev; 2519 2475 2476 + /* Check for lost IRQ once a second */ 2520 2477 if (sky2_read32(hw, B0_ISRC)) { 2521 - struct net_device *dev = hw->dev[0]; 2522 - 2478 + dev = hw->dev[0]; 2523 2479 if (__netif_rx_schedule_prep(dev)) 2524 2480 __netif_rx_schedule(dev); 2481 + } else { 2482 + int i, active = 0; 2483 + 2484 + for (i = 0; i < hw->ports; i++) { 2485 + dev = hw->dev[i]; 2486 + if (!netif_running(dev)) 2487 + continue; 2488 + ++active; 2489 + 2490 + /* For chips with Rx FIFO, check if stuck */ 2491 + if ((hw->flags & SKY2_HW_FIFO_HANG_CHECK) && 2492 + sky2_rx_hung(dev)) { 2493 + 
pr_info(PFX "%s: receiver hang detected\n", 2494 + dev->name); 2495 + schedule_work(&hw->restart_work); 2496 + return; 2497 + } 2498 + } 2499 + 2500 + if (active == 0) 2501 + return; 2525 2502 } 2526 2503 2527 - if (hw->active > 0) 2528 - mod_timer(&hw->watchdog_timer, round_jiffies(jiffies + HZ)); 2504 + mod_timer(&hw->watchdog_timer, round_jiffies(jiffies + HZ)); 2529 2505 } 2530 2506 2531 2507 /* Hardware/software error handling */ ··· 2674 2546 #endif 2675 2547 2676 2548 /* Chip internal frequency for clock calculations */ 2677 - static inline u32 sky2_mhz(const struct sky2_hw *hw) 2549 + static u32 sky2_mhz(const struct sky2_hw *hw) 2678 2550 { 2679 2551 switch (hw->chip_id) { 2680 2552 case CHIP_ID_YUKON_EC: 2681 2553 case CHIP_ID_YUKON_EC_U: 2682 2554 case CHIP_ID_YUKON_EX: 2683 - return 125; /* 125 Mhz */ 2555 + return 125; 2556 + 2684 2557 case CHIP_ID_YUKON_FE: 2685 - return 100; /* 100 Mhz */ 2686 - default: /* YUKON_XL */ 2687 - return 156; /* 156 Mhz */ 2558 + return 100; 2559 + 2560 + case CHIP_ID_YUKON_FE_P: 2561 + return 50; 2562 + 2563 + case CHIP_ID_YUKON_XL: 2564 + return 156; 2565 + 2566 + default: 2567 + BUG(); 2688 2568 } 2689 2569 } 2690 2570 ··· 2717 2581 sky2_write8(hw, B0_CTST, CS_RST_CLR); 2718 2582 2719 2583 hw->chip_id = sky2_read8(hw, B2_CHIP_ID); 2720 - if (hw->chip_id < CHIP_ID_YUKON_XL || hw->chip_id > CHIP_ID_YUKON_FE) { 2584 + hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4; 2585 + 2586 + switch(hw->chip_id) { 2587 + case CHIP_ID_YUKON_XL: 2588 + hw->flags = SKY2_HW_GIGABIT 2589 + | SKY2_HW_NEWER_PHY; 2590 + if (hw->chip_rev < 3) 2591 + hw->flags |= SKY2_HW_FIFO_HANG_CHECK; 2592 + 2593 + break; 2594 + 2595 + case CHIP_ID_YUKON_EC_U: 2596 + hw->flags = SKY2_HW_GIGABIT 2597 + | SKY2_HW_NEWER_PHY 2598 + | SKY2_HW_ADV_POWER_CTL; 2599 + break; 2600 + 2601 + case CHIP_ID_YUKON_EX: 2602 + hw->flags = SKY2_HW_GIGABIT 2603 + | SKY2_HW_NEWER_PHY 2604 + | SKY2_HW_NEW_LE 2605 + | SKY2_HW_ADV_POWER_CTL; 2606 + 2607 + /* New 
transmit checksum */ 2608 + if (hw->chip_rev != CHIP_REV_YU_EX_B0) 2609 + hw->flags |= SKY2_HW_AUTO_TX_SUM; 2610 + break; 2611 + 2612 + case CHIP_ID_YUKON_EC: 2613 + /* This rev is really old, and requires untested workarounds */ 2614 + if (hw->chip_rev == CHIP_REV_YU_EC_A1) { 2615 + dev_err(&hw->pdev->dev, "unsupported revision Yukon-EC rev A1\n"); 2616 + return -EOPNOTSUPP; 2617 + } 2618 + hw->flags = SKY2_HW_GIGABIT | SKY2_HW_FIFO_HANG_CHECK; 2619 + break; 2620 + 2621 + case CHIP_ID_YUKON_FE: 2622 + break; 2623 + 2624 + case CHIP_ID_YUKON_FE_P: 2625 + hw->flags = SKY2_HW_NEWER_PHY 2626 + | SKY2_HW_NEW_LE 2627 + | SKY2_HW_AUTO_TX_SUM 2628 + | SKY2_HW_ADV_POWER_CTL; 2629 + break; 2630 + default: 2721 2631 dev_err(&hw->pdev->dev, "unsupported chip type 0x%x\n", 2722 2632 hw->chip_id); 2723 2633 return -EOPNOTSUPP; 2724 2634 } 2725 2635 2726 - hw->chip_rev = (sky2_read8(hw, B2_MAC_CFG) & CFG_CHIP_R_MSK) >> 4; 2727 - 2728 - /* This rev is really old, and requires untested workarounds */ 2729 - if (hw->chip_id == CHIP_ID_YUKON_EC && hw->chip_rev == CHIP_REV_YU_EC_A1) { 2730 - dev_err(&hw->pdev->dev, "unsupported revision Yukon-%s (0x%x) rev %d\n", 2731 - yukon2_name[hw->chip_id - CHIP_ID_YUKON_XL], 2732 - hw->chip_id, hw->chip_rev); 2733 - return -EOPNOTSUPP; 2734 - } 2735 - 2736 2636 hw->pmd_type = sky2_read8(hw, B2_PMD_TYP); 2637 + if (hw->pmd_type == 'L' || hw->pmd_type == 'S' || hw->pmd_type == 'P') 2638 + hw->flags |= SKY2_HW_FIBRE_PHY; 2639 + 2640 + 2737 2641 hw->ports = 1; 2738 2642 t8 = sky2_read8(hw, B2_Y2_HW_RES); 2739 2643 if ((t8 & CFG_DUAL_MAC_MSK) == CFG_DUAL_MAC_MSK) { ··· 2967 2791 2968 2792 sky2->wol = wol->wolopts; 2969 2793 2970 - if (hw->chip_id == CHIP_ID_YUKON_EC_U || hw->chip_id == CHIP_ID_YUKON_EX) 2794 + if (hw->chip_id == CHIP_ID_YUKON_EC_U || 2795 + hw->chip_id == CHIP_ID_YUKON_EX || 2796 + hw->chip_id == CHIP_ID_YUKON_FE_P) 2971 2797 sky2_write32(hw, B0_CTST, sky2->wol 2972 2798 ? 
Y2_HW_WOL_ON : Y2_HW_WOL_OFF); 2973 2799 ··· 2987 2809 | SUPPORTED_100baseT_Full 2988 2810 | SUPPORTED_Autoneg | SUPPORTED_TP; 2989 2811 2990 - if (hw->chip_id != CHIP_ID_YUKON_FE) 2812 + if (hw->flags & SKY2_HW_GIGABIT) 2991 2813 modes |= SUPPORTED_1000baseT_Half 2992 2814 | SUPPORTED_1000baseT_Full; 2993 2815 return modes; ··· 3007 2829 ecmd->supported = sky2_supported_modes(hw); 3008 2830 ecmd->phy_address = PHY_ADDR_MARV; 3009 2831 if (sky2_is_copper(hw)) { 3010 - ecmd->supported = SUPPORTED_10baseT_Half 3011 - | SUPPORTED_10baseT_Full 3012 - | SUPPORTED_100baseT_Half 3013 - | SUPPORTED_100baseT_Full 3014 - | SUPPORTED_1000baseT_Half 3015 - | SUPPORTED_1000baseT_Full 3016 - | SUPPORTED_Autoneg | SUPPORTED_TP; 3017 2832 ecmd->port = PORT_TP; 3018 2833 ecmd->speed = sky2->speed; 3019 2834 } else { ··· 3985 3814 dev->features |= NETIF_F_HIGHDMA; 3986 3815 3987 3816 #ifdef SKY2_VLAN_TAG_USED 3988 - dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; 3989 - dev->vlan_rx_register = sky2_vlan_rx_register; 3817 + /* The workaround for FE+ status conflicts with VLAN tag detection. 
*/ 3818 + if (!(sky2->hw->chip_id == CHIP_ID_YUKON_FE_P && 3819 + sky2->hw->chip_rev == CHIP_REV_YU_FE2_A0)) { 3820 + dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; 3821 + dev->vlan_rx_register = sky2_vlan_rx_register; 3822 + } 3990 3823 #endif 3991 3824 3992 3825 /* read the mac address */ ··· 4021 3846 return IRQ_NONE; 4022 3847 4023 3848 if (status & Y2_IS_IRQ_SW) { 4024 - hw->msi = 1; 3849 + hw->flags |= SKY2_HW_USE_MSI; 4025 3850 wake_up(&hw->msi_wait); 4026 3851 sky2_write8(hw, B0_CTST, CS_CL_SW_IRQ); 4027 3852 } ··· 4049 3874 sky2_write8(hw, B0_CTST, CS_ST_SW_IRQ); 4050 3875 sky2_read8(hw, B0_CTST); 4051 3876 4052 - wait_event_timeout(hw->msi_wait, hw->msi, HZ/10); 3877 + wait_event_timeout(hw->msi_wait, (hw->flags & SKY2_HW_USE_MSI), HZ/10); 4053 3878 4054 - if (!hw->msi) { 3879 + if (!(hw->flags & SKY2_HW_USE_MSI)) { 4055 3880 /* MSI test failed, go back to INTx mode */ 4056 3881 dev_info(&pdev->dev, "No interrupt generated using MSI, " 4057 3882 "switching to INTx mode.\n"); ··· 4184 4009 goto err_out_free_netdev; 4185 4010 } 4186 4011 4187 - err = request_irq(pdev->irq, sky2_intr, hw->msi ? 0 : IRQF_SHARED, 4012 + err = request_irq(pdev->irq, sky2_intr, 4013 + (hw->flags & SKY2_HW_USE_MSI) ? 
0 : IRQF_SHARED, 4188 4014 dev->name, hw); 4189 4015 if (err) { 4190 4016 dev_err(&pdev->dev, "cannot assign irq %d\n", pdev->irq); ··· 4218 4042 return 0; 4219 4043 4220 4044 err_out_unregister: 4221 - if (hw->msi) 4045 + if (hw->flags & SKY2_HW_USE_MSI) 4222 4046 pci_disable_msi(pdev); 4223 4047 unregister_netdev(dev); 4224 4048 err_out_free_netdev: ··· 4267 4091 sky2_read8(hw, B0_CTST); 4268 4092 4269 4093 free_irq(pdev->irq, hw); 4270 - if (hw->msi) 4094 + if (hw->flags & SKY2_HW_USE_MSI) 4271 4095 pci_disable_msi(pdev); 4272 4096 pci_free_consistent(pdev, STATUS_LE_BYTES, hw->st_le, hw->st_dma); 4273 4097 pci_release_regions(pdev); ··· 4335 4159 pci_enable_wake(pdev, PCI_D0, 0); 4336 4160 4337 4161 /* Re-enable all clocks */ 4338 - if (hw->chip_id == CHIP_ID_YUKON_EX || hw->chip_id == CHIP_ID_YUKON_EC_U) 4162 + if (hw->chip_id == CHIP_ID_YUKON_EX || 4163 + hw->chip_id == CHIP_ID_YUKON_EC_U || 4164 + hw->chip_id == CHIP_ID_YUKON_FE_P) 4339 4165 sky2_pci_write32(hw, PCI_DEV_REG3, 0); 4340 4166 4341 4167 sky2_reset(hw);
+33 -8
drivers/net/sky2.h
··· 470 470 CHIP_ID_YUKON_EX = 0xb5, /* Chip ID for YUKON-2 Extreme */ 471 471 CHIP_ID_YUKON_EC = 0xb6, /* Chip ID for YUKON-2 EC */ 472 472 CHIP_ID_YUKON_FE = 0xb7, /* Chip ID for YUKON-2 FE */ 473 - 473 + CHIP_ID_YUKON_FE_P = 0xb8, /* Chip ID for YUKON-2 FE+ */ 474 + }; 475 + enum yukon_ec_rev { 474 476 CHIP_REV_YU_EC_A1 = 0, /* Chip Rev. for Yukon-EC A1/A0 */ 475 477 CHIP_REV_YU_EC_A2 = 1, /* Chip Rev. for Yukon-EC A2 */ 476 478 CHIP_REV_YU_EC_A3 = 2, /* Chip Rev. for Yukon-EC A3 */ 477 - 479 + }; 480 + enum yukon_ec_u_rev { 478 481 CHIP_REV_YU_EC_U_A0 = 1, 479 482 CHIP_REV_YU_EC_U_A1 = 2, 480 483 CHIP_REV_YU_EC_U_B0 = 3, 481 - 484 + }; 485 + enum yukon_fe_rev { 482 486 CHIP_REV_YU_FE_A1 = 1, 483 487 CHIP_REV_YU_FE_A2 = 2, 484 - 488 + }; 489 + enum yukon_fe_p_rev { 490 + CHIP_REV_YU_FE2_A0 = 0, 485 491 }; 486 492 enum yukon_ex_rev { 487 493 CHIP_REV_YU_EX_A0 = 1, ··· 1674 1668 1675 1669 /* Receive Frame Status Encoding */ 1676 1670 enum { 1677 - GMR_FS_LEN = 0xffff<<16, /* Bit 31..16: Rx Frame Length */ 1671 + GMR_FS_LEN = 0x7fff<<16, /* Bit 30..16: Rx Frame Length */ 1678 1672 GMR_FS_VLAN = 1<<13, /* VLAN Packet */ 1679 1673 GMR_FS_JABBER = 1<<12, /* Jabber Packet */ 1680 1674 GMR_FS_UN_SIZE = 1<<11, /* Undersize Packet */ ··· 1735 1729 GMF_RX_CTRL_DEF = GMF_OPER_ON | GMF_RX_F_FL_ON, 1736 1730 }; 1737 1731 1732 + /* TX_GMF_EA 32 bit Tx GMAC FIFO End Address */ 1733 + enum { 1734 + TX_DYN_WM_ENA = 3, /* Yukon-FE+ specific */ 1735 + }; 1738 1736 1739 1737 /* TX_GMF_CTRL_T 32 bit Tx GMAC FIFO Control/Test */ 1740 1738 enum { ··· 2027 2017 u16 rx_tag; 2028 2018 struct vlan_group *vlgrp; 2029 2019 #endif 2020 + struct { 2021 + unsigned long last; 2022 + u32 mac_rp; 2023 + u8 mac_lev; 2024 + u8 fifo_rp; 2025 + u8 fifo_lev; 2026 + } check; 2027 + 2030 2028 2031 2029 dma_addr_t rx_le_map; 2032 2030 dma_addr_t tx_le_map; ··· 2058 2040 void __iomem *regs; 2059 2041 struct pci_dev *pdev; 2060 2042 struct net_device *dev[2]; 2043 + unsigned long flags; 2044 + #define 
SKY2_HW_USE_MSI 0x00000001 2045 + #define SKY2_HW_FIBRE_PHY 0x00000002 2046 + #define SKY2_HW_GIGABIT 0x00000004 2047 + #define SKY2_HW_NEWER_PHY 0x00000008 2048 + #define SKY2_HW_FIFO_HANG_CHECK 0x00000010 2049 + #define SKY2_HW_NEW_LE 0x00000020 /* new LSOv2 format */ 2050 + #define SKY2_HW_AUTO_TX_SUM 0x00000040 /* new IP decode for Tx */ 2051 + #define SKY2_HW_ADV_POWER_CTL 0x00000080 /* additional PHY power regs */ 2061 2052 2062 2053 u8 chip_id; 2063 2054 u8 chip_rev; 2064 2055 u8 pmd_type; 2065 2056 u8 ports; 2066 - u8 active; 2067 2057 2068 2058 struct sky2_status_le *st_le; 2069 2059 u32 st_idx; ··· 2079 2053 2080 2054 struct timer_list watchdog_timer; 2081 2055 struct work_struct restart_work; 2082 - int msi; 2083 2056 wait_queue_head_t msi_wait; 2084 2057 }; 2085 2058 2086 2059 static inline int sky2_is_copper(const struct sky2_hw *hw) 2087 2060 { 2088 - return !(hw->pmd_type == 'L' || hw->pmd_type == 'S' || hw->pmd_type == 'P'); 2061 + return !(hw->flags & SKY2_HW_FIBRE_PHY); 2089 2062 } 2090 2063 2091 2064 /* Register accessor for memory mapped device */
+1 -1
drivers/net/usb/dm9601.c
··· 405 405 dev->net->ethtool_ops = &dm9601_ethtool_ops; 406 406 dev->net->hard_header_len += DM_TX_OVERHEAD; 407 407 dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; 408 - dev->rx_urb_size = dev->net->mtu + DM_RX_OVERHEAD; 408 + dev->rx_urb_size = dev->net->mtu + ETH_HLEN + DM_RX_OVERHEAD; 409 409 410 410 dev->mii.dev = dev->net; 411 411 dev->mii.mdio_read = dm9601_mdio_read;
+1 -1
drivers/net/wireless/Makefile
··· 43 43 obj-$(CONFIG_PCMCIA_WL3501) += wl3501_cs.o 44 44 45 45 obj-$(CONFIG_USB_ZD1201) += zd1201.o 46 - obj-$(CONFIG_LIBERTAS_USB) += libertas/ 46 + obj-$(CONFIG_LIBERTAS) += libertas/ 47 47 48 48 rtl8187-objs := rtl8187_dev.o rtl8187_rtl8225.o 49 49 obj-$(CONFIG_RTL8187) += rtl8187.o
+3 -4
drivers/pci/quirks.c
··· 1444 1444 static void __devinit quirk_e100_interrupt(struct pci_dev *dev) 1445 1445 { 1446 1446 u16 command; 1447 - u32 bar; 1448 1447 u8 __iomem *csr; 1449 1448 u8 cmd_hi; 1450 1449 ··· 1475 1476 * re-enable them when it's ready. 1476 1477 */ 1477 1478 pci_read_config_word(dev, PCI_COMMAND, &command); 1478 - pci_read_config_dword(dev, PCI_BASE_ADDRESS_0, &bar); 1479 1479 1480 - if (!(command & PCI_COMMAND_MEMORY) || !bar) 1480 + if (!(command & PCI_COMMAND_MEMORY) || !pci_resource_start(dev, 0)) 1481 1481 return; 1482 1482 1483 - csr = ioremap(bar, 8); 1483 + /* Convert from PCI bus to resource space. */ 1484 + csr = ioremap(pci_resource_start(dev, 0), 8); 1484 1485 if (!csr) { 1485 1486 printk(KERN_WARNING "PCI: Can't map %s e100 registers\n", 1486 1487 pci_name(dev));
+1
drivers/power/power_supply_sysfs.c
··· 289 289 if (ret) 290 290 goto out; 291 291 } 292 + envp[i] = NULL; 292 293 293 294 out: 294 295 free_page((unsigned long)prop_buf);
+2 -2
drivers/scsi/aic94xx/aic94xx_task.c
··· 451 451 struct scb *scb; 452 452 453 453 pci_map_sg(asd_ha->pcidev, &task->smp_task.smp_req, 1, 454 - PCI_DMA_FROMDEVICE); 454 + PCI_DMA_TODEVICE); 455 455 pci_map_sg(asd_ha->pcidev, &task->smp_task.smp_resp, 1, 456 456 PCI_DMA_FROMDEVICE); 457 457 ··· 486 486 487 487 BUG_ON(!task); 488 488 pci_unmap_sg(a->ha->pcidev, &task->smp_task.smp_req, 1, 489 - PCI_DMA_FROMDEVICE); 489 + PCI_DMA_TODEVICE); 490 490 pci_unmap_sg(a->ha->pcidev, &task->smp_task.smp_resp, 1, 491 491 PCI_DMA_FROMDEVICE); 492 492 }
+2 -1
drivers/scsi/esp_scsi.c
··· 2314 2314 esp->host->transportt = esp_transport_template; 2315 2315 esp->host->max_lun = ESP_MAX_LUN; 2316 2316 esp->host->cmd_per_lun = 2; 2317 + esp->host->unique_id = instance; 2317 2318 2318 2319 esp_set_clock_params(esp); 2319 2320 ··· 2338 2337 if (err) 2339 2338 return err; 2340 2339 2341 - esp->host->unique_id = instance++; 2340 + instance++; 2342 2341 2343 2342 scsi_scan_host(esp->host); 2344 2343
+22 -6
drivers/scsi/scsi_transport_spi.c
··· 787 787 struct scsi_target *starget = sdev->sdev_target; 788 788 struct Scsi_Host *shost = sdev->host; 789 789 int len = sdev->inquiry_len; 790 + int min_period = spi_min_period(starget); 791 + int max_width = spi_max_width(starget); 790 792 /* first set us up for narrow async */ 791 793 DV_SET(offset, 0); 792 794 DV_SET(width, 0); 793 - 795 + 794 796 if (spi_dv_device_compare_inquiry(sdev, buffer, buffer, DV_LOOPS) 795 797 != SPI_COMPARE_SUCCESS) { 796 798 starget_printk(KERN_ERR, starget, "Domain Validation Initial Inquiry Failed\n"); ··· 800 798 return; 801 799 } 802 800 801 + if (!scsi_device_wide(sdev)) { 802 + spi_max_width(starget) = 0; 803 + max_width = 0; 804 + } 805 + 803 806 /* test width */ 804 - if (i->f->set_width && spi_max_width(starget) && 805 - scsi_device_wide(sdev)) { 807 + if (i->f->set_width && max_width) { 806 808 i->f->set_width(starget, 1); 807 809 808 810 if (spi_dv_device_compare_inquiry(sdev, buffer, ··· 815 809 != SPI_COMPARE_SUCCESS) { 816 810 starget_printk(KERN_ERR, starget, "Wide Transfers Fail\n"); 817 811 i->f->set_width(starget, 0); 812 + /* Make sure we don't force wide back on by asking 813 + * for a transfer period that requires it */ 814 + max_width = 0; 815 + if (min_period < 10) 816 + min_period = 10; 818 817 } 819 818 } 820 819 ··· 839 828 840 829 /* now set up to the maximum */ 841 830 DV_SET(offset, spi_max_offset(starget)); 842 - DV_SET(period, spi_min_period(starget)); 831 + DV_SET(period, min_period); 832 + 843 833 /* try QAS requests; this should be harmless to set if the 844 834 * target supports it */ 845 835 if (scsi_device_qas(sdev)) { ··· 849 837 DV_SET(qas, 0); 850 838 } 851 839 852 - if (scsi_device_ius(sdev) && spi_min_period(starget) < 9) { 840 + if (scsi_device_ius(sdev) && min_period < 9) { 853 841 /* This u320 (or u640). 
Set IU transfers */ 854 842 DV_SET(iu, 1); 855 843 /* Then set the optional parameters */ 856 844 DV_SET(rd_strm, 1); 857 845 DV_SET(wr_flow, 1); 858 846 DV_SET(rti, 1); 859 - if (spi_min_period(starget) == 8) 847 + if (min_period == 8) 860 848 DV_SET(pcomp_en, 1); 861 849 } else { 862 850 DV_SET(iu, 0); ··· 874 862 } else { 875 863 DV_SET(dt, 1); 876 864 } 865 + /* set width last because it will pull all the other 866 + * parameters down to required values */ 867 + DV_SET(width, max_width); 868 + 877 869 /* Do the read only INQUIRY tests */ 878 870 spi_dv_retrain(sdev, buffer, buffer + sdev->inquiry_len, 879 871 spi_dv_device_compare_inquiry);
+1 -1
drivers/serial/cpm_uart/cpm_uart_cpm1.h
··· 37 37 up->smc_tfcr = SMC_EB; 38 38 } 39 39 40 - #define DPRAM_BASE ((unsigned char *)&cpmp->cp_dpmem[0]) 40 + #define DPRAM_BASE ((unsigned char *)cpm_dpram_addr(0)) 41 41 42 42 #endif
+1 -1
drivers/serial/sunsab.c
··· 38 38 #include <asm/prom.h> 39 39 #include <asm/of_device.h> 40 40 41 - #if defined(CONFIG_SERIAL_SUNZILOG_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) 41 + #if defined(CONFIG_SERIAL_SUNSAB_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ) 42 42 #define SUPPORT_SYSRQ 43 43 #endif 44 44
+1
drivers/w1/w1.c
··· 431 431 err = add_uevent_var(envp, num_envp, &cur_index, buffer, buffer_size, 432 432 &cur_len, "W1_SLAVE_ID=%024LX", 433 433 (unsigned long long)sl->reg_num.id); 434 + envp[cur_index] = NULL; 434 435 if (err) 435 436 return err; 436 437
+2
fs/compat_ioctl.c
··· 3190 3190 COMPATIBLE_IOCTL(SIOCGIWRETRY) 3191 3191 COMPATIBLE_IOCTL(SIOCSIWPOWER) 3192 3192 COMPATIBLE_IOCTL(SIOCGIWPOWER) 3193 + COMPATIBLE_IOCTL(SIOCSIWAUTH) 3194 + COMPATIBLE_IOCTL(SIOCGIWAUTH) 3193 3195 /* hiddev */ 3194 3196 COMPATIBLE_IOCTL(HIDIOCGVERSION) 3195 3197 COMPATIBLE_IOCTL(HIDIOCAPPLICATION)
-3
fs/exec.c
··· 50 50 #include <linux/tsacct_kern.h> 51 51 #include <linux/cn_proc.h> 52 52 #include <linux/audit.h> 53 - #include <linux/signalfd.h> 54 53 55 54 #include <asm/uaccess.h> 56 55 #include <asm/mmu_context.h> ··· 783 784 * and we can just re-use it all. 784 785 */ 785 786 if (atomic_read(&oldsighand->count) <= 1) { 786 - signalfd_detach(tsk); 787 787 exit_itimers(sig); 788 788 return 0; 789 789 } ··· 921 923 sig->flags = 0; 922 924 923 925 no_thread_group: 924 - signalfd_detach(tsk); 925 926 exit_itimers(sig); 926 927 if (leader) 927 928 release_task(leader);
+18 -11
fs/lockd/svclock.c
··· 171 171 * GRANTED_RES message by cookie, without having to rely on the client's IP 172 172 * address. --okir 173 173 */ 174 - static inline struct nlm_block * 175 - nlmsvc_create_block(struct svc_rqst *rqstp, struct nlm_file *file, 176 - struct nlm_lock *lock, struct nlm_cookie *cookie) 174 + static struct nlm_block * 175 + nlmsvc_create_block(struct svc_rqst *rqstp, struct nlm_host *host, 176 + struct nlm_file *file, struct nlm_lock *lock, 177 + struct nlm_cookie *cookie) 177 178 { 178 179 struct nlm_block *block; 179 - struct nlm_host *host; 180 180 struct nlm_rqst *call = NULL; 181 - 182 - /* Create host handle for callback */ 183 - host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len); 184 - if (host == NULL) 185 - return NULL; 186 181 187 182 call = nlm_alloc_call(host); 188 183 if (call == NULL) ··· 361 366 struct nlm_lock *lock, int wait, struct nlm_cookie *cookie) 362 367 { 363 368 struct nlm_block *block = NULL; 369 + struct nlm_host *host; 364 370 int error; 365 371 __be32 ret; 366 372 ··· 373 377 (long long)lock->fl.fl_end, 374 378 wait); 375 379 380 + /* Create host handle for callback */ 381 + host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len); 382 + if (host == NULL) 383 + return nlm_lck_denied_nolocks; 376 384 377 385 /* Lock file against concurrent access */ 378 386 mutex_lock(&file->f_mutex); ··· 385 385 */ 386 386 block = nlmsvc_lookup_block(file, lock); 387 387 if (block == NULL) { 388 - block = nlmsvc_create_block(rqstp, file, lock, cookie); 388 + block = nlmsvc_create_block(rqstp, nlm_get_host(host), file, 389 + lock, cookie); 389 390 ret = nlm_lck_denied_nolocks; 390 391 if (block == NULL) 391 392 goto out; ··· 450 449 out: 451 450 mutex_unlock(&file->f_mutex); 452 451 nlmsvc_release_block(block); 452 + nlm_release_host(host); 453 453 dprintk("lockd: nlmsvc_lock returned %u\n", ret); 454 454 return ret; 455 455 } ··· 479 477 480 478 if (block == NULL) { 481 479 struct file_lock *conf = kzalloc(sizeof(*conf), GFP_KERNEL); 480 + 
struct nlm_host *host; 482 481 483 482 if (conf == NULL) 484 483 return nlm_granted; 485 - block = nlmsvc_create_block(rqstp, file, lock, cookie); 484 + /* Create host handle for callback */ 485 + host = nlmsvc_lookup_host(rqstp, lock->caller, lock->len); 486 + if (host == NULL) 487 + return nlm_lck_denied_nolocks; 488 + block = nlmsvc_create_block(rqstp, host, file, lock, cookie); 486 489 if (block == NULL) { 487 490 kfree(conf); 488 491 return nlm_granted;
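The host reference now travels with the block: the caller takes an extra reference (`nlm_get_host`) before handing the host to `nlmsvc_create_block()`, and drops its own at `out:`. A minimal sketch of that handoff, with made-up names rather than the real lockd API:

```c
#include <assert.h>

/* Illustrative refcount handoff (hypothetical types/functions, not lockd).
 * The caller that looked the host up keeps its own reference and passes
 * a *second* one to the constructor, so each side releases exactly what
 * it took -- even on the constructor's error path. */
struct host { int refcount; };

static struct host *host_get(struct host *h) { h->refcount++; return h; }
static void host_put(struct host *h)         { h->refcount--; }

/* Consumes the reference passed in, mirroring nlmsvc_create_block(). */
static void create_block(struct host *h, int fail)
{
        if (fail)
                host_put(h);    /* constructor drops its ref on failure */
        /* ...otherwise the new block owns this reference from now on. */
}
```

With this discipline, the caller can unconditionally release its own reference after the call, regardless of whether block creation succeeded.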
+19 -10
fs/nfs/client.c
··· 588 588 server->namelen = data->namlen; 589 589 /* Create a client RPC handle for the NFSv3 ACL management interface */ 590 590 nfs_init_server_aclclient(server); 591 - if (clp->cl_nfsversion == 3) { 592 - if (server->namelen == 0 || server->namelen > NFS3_MAXNAMLEN) 593 - server->namelen = NFS3_MAXNAMLEN; 594 - if (!(data->flags & NFS_MOUNT_NORDIRPLUS)) 595 - server->caps |= NFS_CAP_READDIRPLUS; 596 - } else { 597 - if (server->namelen == 0 || server->namelen > NFS2_MAXNAMLEN) 598 - server->namelen = NFS2_MAXNAMLEN; 599 - } 600 - 601 591 dprintk("<-- nfs_init_server() = 0 [new %p]\n", clp); 602 592 return 0; 603 593 ··· 784 794 error = nfs_probe_fsinfo(server, mntfh, &fattr); 785 795 if (error < 0) 786 796 goto error; 797 + if (server->nfs_client->rpc_ops->version == 3) { 798 + if (server->namelen == 0 || server->namelen > NFS3_MAXNAMLEN) 799 + server->namelen = NFS3_MAXNAMLEN; 800 + if (!(data->flags & NFS_MOUNT_NORDIRPLUS)) 801 + server->caps |= NFS_CAP_READDIRPLUS; 802 + } else { 803 + if (server->namelen == 0 || server->namelen > NFS2_MAXNAMLEN) 804 + server->namelen = NFS2_MAXNAMLEN; 805 + } 806 + 787 807 if (!(fattr.valid & NFS_ATTR_FATTR)) { 788 808 error = server->nfs_client->rpc_ops->getattr(server, mntfh, &fattr); 789 809 if (error < 0) { ··· 984 984 if (error < 0) 985 985 goto error; 986 986 987 + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN) 988 + server->namelen = NFS4_MAXNAMLEN; 989 + 987 990 BUG_ON(!server->nfs_client); 988 991 BUG_ON(!server->nfs_client->rpc_ops); 989 992 BUG_ON(!server->nfs_client->rpc_ops->file_inode_ops); ··· 1059 1056 if (error < 0) 1060 1057 goto error; 1061 1058 1059 + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN) 1060 + server->namelen = NFS4_MAXNAMLEN; 1061 + 1062 1062 dprintk("Referral FSID: %llx:%llx\n", 1063 1063 (unsigned long long) server->fsid.major, 1064 1064 (unsigned long long) server->fsid.minor); ··· 1120 1114 error = nfs_probe_fsinfo(server, fh, &fattr_fsinfo); 1121 1115 if 
(error < 0) 1122 1116 goto out_free_server; 1117 + 1118 + if (server->namelen == 0 || server->namelen > NFS4_MAXNAMLEN) 1119 + server->namelen = NFS4_MAXNAMLEN; 1123 1120 1124 1121 dprintk("Cloned FSID: %llx:%llx\n", 1125 1122 (unsigned long long) server->fsid.major,
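The namelen fixup that moves after `nfs_probe_fsinfo()` is the same clamp repeated in every setup path; pulled out as a standalone helper (hypothetical name, not an actual NFS client function), it would read:

```c
#include <assert.h>

/* A server-reported name length of zero (unknown) or beyond the
 * protocol limit falls back to the protocol maximum -- NFS2_MAXNAMLEN,
 * NFS3_MAXNAMLEN or NFS4_MAXNAMLEN depending on the mount. */
static unsigned int nfs_clamp_namelen(unsigned int namelen, unsigned int max)
{
        if (namelen == 0 || namelen > max)
                return max;
        return namelen;
}
```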
+2
fs/nfs/dir.c
··· 1162 1162 } 1163 1163 if (!desc->plus || !(entry->fattr->valid & NFS_ATTR_FATTR)) 1164 1164 return NULL; 1165 + if (name.len > NFS_SERVER(dir)->namelen) 1166 + return NULL; 1165 1167 /* Note: caller is already holding the dir->i_mutex! */ 1166 1168 dentry = d_alloc(parent, &name); 1167 1169 if (dentry == NULL)
+3
fs/nfs/getroot.c
··· 175 175 path++; 176 176 name.len = path - (const char *) name.name; 177 177 178 + if (name.len > NFS4_MAXNAMLEN) 179 + return -ENAMETOOLONG; 180 + 178 181 eat_dot_dir: 179 182 while (*path == '/') 180 183 path++;
+19 -14
fs/ocfs2/aops.c
··· 930 930 loff_t user_pos, unsigned user_len) 931 931 { 932 932 int i; 933 - unsigned from, to; 933 + unsigned from = user_pos & (PAGE_CACHE_SIZE - 1), 934 + to = user_pos + user_len; 934 935 struct page *tmppage; 935 936 936 - ocfs2_zero_new_buffers(wc->w_target_page, user_pos, user_len); 937 - 938 - if (wc->w_large_pages) { 939 - from = wc->w_target_from; 940 - to = wc->w_target_to; 941 - } else { 942 - from = 0; 943 - to = PAGE_CACHE_SIZE; 944 - } 937 + ocfs2_zero_new_buffers(wc->w_target_page, from, to); 945 938 946 939 for(i = 0; i < wc->w_num_pages; i++) { 947 940 tmppage = wc->w_pages[i]; ··· 984 991 map_from = cluster_start; 985 992 map_to = cluster_end; 986 993 } 987 - 988 - wc->w_target_from = map_from; 989 - wc->w_target_to = map_to; 990 994 } else { 991 995 /* 992 996 * If we haven't allocated the new page yet, we ··· 1201 1211 loff_t pos, unsigned len) 1202 1212 { 1203 1213 int ret, i; 1214 + loff_t cluster_off; 1215 + unsigned int local_len = len; 1204 1216 struct ocfs2_write_cluster_desc *desc; 1217 + struct ocfs2_super *osb = OCFS2_SB(mapping->host->i_sb); 1205 1218 1206 1219 for (i = 0; i < wc->w_clen; i++) { 1207 1220 desc = &wc->w_desc[i]; 1208 1221 1222 + /* 1223 + * We have to make sure that the total write passed in 1224 + * doesn't extend past a single cluster. 1225 + */ 1226 + local_len = len; 1227 + cluster_off = pos & (osb->s_clustersize - 1); 1228 + if ((cluster_off + local_len) > osb->s_clustersize) 1229 + local_len = osb->s_clustersize - cluster_off; 1230 + 1209 1231 ret = ocfs2_write_cluster(mapping, desc->c_phys, 1210 1232 desc->c_unwritten, data_ac, meta_ac, 1211 - wc, desc->c_cpos, pos, len); 1233 + wc, desc->c_cpos, pos, local_len); 1212 1234 if (ret) { 1213 1235 mlog_errno(ret); 1214 1236 goto out; 1215 1237 } 1238 + 1239 + len -= local_len; 1240 + pos += local_len; 1216 1241 } 1217 1242 1218 1243 ret = 0;
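The new loop in `ocfs2_write_cluster_by_desc()` caps each pass at a cluster boundary before advancing `pos` and `len`. The clamping arithmetic can be read as a helper (illustrative name; assumes a power-of-two cluster size, as in ocfs2):

```c
#include <assert.h>
#include <stdint.h>

/* Clamp a write so a single chunk never crosses a cluster boundary:
 * take everything up to the end of the current cluster, or the whole
 * remaining length if it already fits. */
static uint32_t chunk_len(uint64_t pos, uint32_t len, uint32_t cluster_size)
{
        uint64_t cluster_off = pos & (cluster_size - 1);

        if (cluster_off + len > cluster_size)
                return (uint32_t)(cluster_size - cluster_off);
        return len;
}
```

Each iteration then consumes `chunk_len()` bytes and moves `pos` forward by the same amount, exactly as the `local_len`/`cluster_off` logic above does.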
+2 -2
fs/ocfs2/file.c
··· 491 491 goto leave; 492 492 } 493 493 494 - status = ocfs2_claim_clusters(osb, handle, data_ac, 1, 495 - &bit_off, &num_bits); 494 + status = __ocfs2_claim_clusters(osb, handle, data_ac, 1, 495 + clusters_to_add, &bit_off, &num_bits); 496 496 if (status < 0) { 497 497 if (status != -ENOSPC) 498 498 mlog_errno(status);
+1 -3
fs/ocfs2/localalloc.c
··· 524 524 int ocfs2_claim_local_alloc_bits(struct ocfs2_super *osb, 525 525 handle_t *handle, 526 526 struct ocfs2_alloc_context *ac, 527 - u32 min_bits, 527 + u32 bits_wanted, 528 528 u32 *bit_off, 529 529 u32 *num_bits) 530 530 { 531 531 int status, start; 532 532 struct inode *local_alloc_inode; 533 - u32 bits_wanted; 534 533 void *bitmap; 535 534 struct ocfs2_dinode *alloc; 536 535 struct ocfs2_local_alloc *la; ··· 537 538 mlog_entry_void(); 538 539 BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL); 539 540 540 - bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given; 541 541 local_alloc_inode = ac->ac_inode; 542 542 alloc = (struct ocfs2_dinode *) osb->local_alloc_bh->b_data; 543 543 la = OCFS2_LOCAL_ALLOC(alloc);
+1 -1
fs/ocfs2/localalloc.h
··· 48 48 int ocfs2_claim_local_alloc_bits(struct ocfs2_super *osb, 49 49 handle_t *handle, 50 50 struct ocfs2_alloc_context *ac, 51 - u32 min_bits, 51 + u32 bits_wanted, 52 52 u32 *bit_off, 53 53 u32 *num_bits); 54 54
+21 -8
fs/ocfs2/suballoc.c
··· 1486 1486 * contig. allocation, set to '1' to indicate we can deal with extents 1487 1487 * of any size. 1488 1488 */ 1489 - int ocfs2_claim_clusters(struct ocfs2_super *osb, 1490 - handle_t *handle, 1491 - struct ocfs2_alloc_context *ac, 1492 - u32 min_clusters, 1493 - u32 *cluster_start, 1494 - u32 *num_clusters) 1489 + int __ocfs2_claim_clusters(struct ocfs2_super *osb, 1490 + handle_t *handle, 1491 + struct ocfs2_alloc_context *ac, 1492 + u32 min_clusters, 1493 + u32 max_clusters, 1494 + u32 *cluster_start, 1495 + u32 *num_clusters) 1495 1496 { 1496 1497 int status; 1497 - unsigned int bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given; 1498 + unsigned int bits_wanted = max_clusters; 1498 1499 u64 bg_blkno = 0; 1499 1500 u16 bg_bit_off; 1500 1501 1501 1502 mlog_entry_void(); 1502 1503 1503 - BUG_ON(!ac); 1504 1504 BUG_ON(ac->ac_bits_given >= ac->ac_bits_wanted); 1505 1505 1506 1506 BUG_ON(ac->ac_which != OCFS2_AC_USE_LOCAL ··· 1555 1555 bail: 1556 1556 mlog_exit(status); 1557 1557 return status; 1558 + } 1559 + 1560 + int ocfs2_claim_clusters(struct ocfs2_super *osb, 1561 + handle_t *handle, 1562 + struct ocfs2_alloc_context *ac, 1563 + u32 min_clusters, 1564 + u32 *cluster_start, 1565 + u32 *num_clusters) 1566 + { 1567 + unsigned int bits_wanted = ac->ac_bits_wanted - ac->ac_bits_given; 1568 + 1569 + return __ocfs2_claim_clusters(osb, handle, ac, min_clusters, 1570 + bits_wanted, cluster_start, num_clusters); 1558 1571 } 1559 1572 1560 1573 static inline int ocfs2_block_group_clear_bits(handle_t *handle,
+11
fs/ocfs2/suballoc.h
··· 85 85 u32 min_clusters, 86 86 u32 *cluster_start, 87 87 u32 *num_clusters); 88 + /* 89 + * Use this variant of ocfs2_claim_clusters to specify a maximum 90 + * number of clusters smaller than the allocation reserved. 91 + */ 92 + int __ocfs2_claim_clusters(struct ocfs2_super *osb, 93 + handle_t *handle, 94 + struct ocfs2_alloc_context *ac, 95 + u32 min_clusters, 96 + u32 max_clusters, 97 + u32 *cluster_start, 98 + u32 *num_clusters); 88 99 89 100 int ocfs2_free_suballoc_bits(handle_t *handle, 90 101 struct inode *alloc_inode,

+2 -2
fs/ocfs2/vote.c
··· 66 66 { 67 67 struct ocfs2_msg_hdr v_hdr; 68 68 __be32 v_reserved1; 69 - }; 69 + } __attribute__ ((packed)); 70 70 71 71 /* Responses are given these values to maintain backwards 72 72 * compatibility with older ocfs2 versions */ ··· 78 78 { 79 79 struct ocfs2_msg_hdr r_hdr; 80 80 __be32 r_response; 81 - }; 81 + } __attribute__ ((packed)); 82 82 83 83 struct ocfs2_vote_work { 84 84 struct list_head w_list;
+29 -161
fs/signalfd.c
··· 11 11 * Now using anonymous inode source. 12 12 * Thanks to Oleg Nesterov for useful code review and suggestions. 13 13 * More comments and suggestions from Arnd Bergmann. 14 - * Sat May 19, 2007: Davi E. M. Arnaut <davi@haxent.com.br> 14 + * Sat May 19, 2007: Davi E. M. Arnaut <davi@haxent.com.br> 15 15 * Retrieve multiple signals with one read() call 16 + * Sun Jul 15, 2007: Davide Libenzi <davidel@xmailserver.org> 17 + * Attach to the sighand only during read() and poll(). 16 18 */ 17 19 18 20 #include <linux/file.h> ··· 29 27 #include <linux/signalfd.h> 30 28 31 29 struct signalfd_ctx { 32 - struct list_head lnk; 33 - wait_queue_head_t wqh; 34 30 sigset_t sigmask; 35 - struct task_struct *tsk; 36 31 }; 37 - 38 - struct signalfd_lockctx { 39 - struct task_struct *tsk; 40 - unsigned long flags; 41 - }; 42 - 43 - /* 44 - * Tries to acquire the sighand lock. We do not increment the sighand 45 - * use count, and we do not even pin the task struct, so we need to 46 - * do it inside an RCU read lock, and we must be prepared for the 47 - * ctx->tsk going to NULL (in signalfd_deliver()), and for the sighand 48 - * being detached. We return 0 if the sighand has been detached, or 49 - * 1 if we were able to pin the sighand lock. 50 - */ 51 - static int signalfd_lock(struct signalfd_ctx *ctx, struct signalfd_lockctx *lk) 52 - { 53 - struct sighand_struct *sighand = NULL; 54 - 55 - rcu_read_lock(); 56 - lk->tsk = rcu_dereference(ctx->tsk); 57 - if (likely(lk->tsk != NULL)) 58 - sighand = lock_task_sighand(lk->tsk, &lk->flags); 59 - rcu_read_unlock(); 60 - 61 - if (!sighand) 62 - return 0; 63 - 64 - if (!ctx->tsk) { 65 - unlock_task_sighand(lk->tsk, &lk->flags); 66 - return 0; 67 - } 68 - 69 - if (lk->tsk->tgid == current->tgid) 70 - lk->tsk = current; 71 - 72 - return 1; 73 - } 74 - 75 - static void signalfd_unlock(struct signalfd_lockctx *lk) 76 - { 77 - unlock_task_sighand(lk->tsk, &lk->flags); 78 - } 79 - 80 - /* 81 - * This must be called with the sighand lock held. 
82 - */ 83 - void signalfd_deliver(struct task_struct *tsk, int sig) 84 - { 85 - struct sighand_struct *sighand = tsk->sighand; 86 - struct signalfd_ctx *ctx, *tmp; 87 - 88 - BUG_ON(!sig); 89 - list_for_each_entry_safe(ctx, tmp, &sighand->signalfd_list, lnk) { 90 - /* 91 - * We use a negative signal value as a way to broadcast that the 92 - * sighand has been orphaned, so that we can notify all the 93 - * listeners about this. Remember the ctx->sigmask is inverted, 94 - * so if the user is interested in a signal, that corresponding 95 - * bit will be zero. 96 - */ 97 - if (sig < 0) { 98 - if (ctx->tsk == tsk) { 99 - ctx->tsk = NULL; 100 - list_del_init(&ctx->lnk); 101 - wake_up(&ctx->wqh); 102 - } 103 - } else { 104 - if (!sigismember(&ctx->sigmask, sig)) 105 - wake_up(&ctx->wqh); 106 - } 107 - } 108 - } 109 - 110 - static void signalfd_cleanup(struct signalfd_ctx *ctx) 111 - { 112 - struct signalfd_lockctx lk; 113 - 114 - /* 115 - * This is tricky. If the sighand is gone, we do not need to remove 116 - * context from the list, the list itself won't be there anymore. 117 - */ 118 - if (signalfd_lock(ctx, &lk)) { 119 - list_del(&ctx->lnk); 120 - signalfd_unlock(&lk); 121 - } 122 - kfree(ctx); 123 - } 124 32 125 33 static int signalfd_release(struct inode *inode, struct file *file) 126 34 { 127 - signalfd_cleanup(file->private_data); 35 + kfree(file->private_data); 128 36 return 0; 129 37 } 130 38 ··· 42 130 { 43 131 struct signalfd_ctx *ctx = file->private_data; 44 132 unsigned int events = 0; 45 - struct signalfd_lockctx lk; 46 133 47 - poll_wait(file, &ctx->wqh, wait); 134 + poll_wait(file, &current->sighand->signalfd_wqh, wait); 48 135 49 - /* 50 - * Let the caller get a POLLIN in this case, ala socket recv() when 51 - * the peer disconnects. 
52 - */ 53 - if (signalfd_lock(ctx, &lk)) { 54 - if ((lk.tsk == current && 55 - next_signal(&lk.tsk->pending, &ctx->sigmask) > 0) || 56 - next_signal(&lk.tsk->signal->shared_pending, 57 - &ctx->sigmask) > 0) 58 - events |= POLLIN; 59 - signalfd_unlock(&lk); 60 - } else 136 + spin_lock_irq(&current->sighand->siglock); 137 + if (next_signal(&current->pending, &ctx->sigmask) || 138 + next_signal(&current->signal->shared_pending, 139 + &ctx->sigmask)) 61 140 events |= POLLIN; 141 + spin_unlock_irq(&current->sighand->siglock); 62 142 63 143 return events; 64 144 } ··· 123 219 int nonblock) 124 220 { 125 221 ssize_t ret; 126 - struct signalfd_lockctx lk; 127 222 DECLARE_WAITQUEUE(wait, current); 128 223 129 - if (!signalfd_lock(ctx, &lk)) 130 - return 0; 131 - 132 - ret = dequeue_signal(lk.tsk, &ctx->sigmask, info); 224 + spin_lock_irq(&current->sighand->siglock); 225 + ret = dequeue_signal(current, &ctx->sigmask, info); 133 226 switch (ret) { 134 227 case 0: 135 228 if (!nonblock) 136 229 break; 137 230 ret = -EAGAIN; 138 231 default: 139 - signalfd_unlock(&lk); 232 + spin_unlock_irq(&current->sighand->siglock); 140 233 return ret; 141 234 } 142 235 143 - add_wait_queue(&ctx->wqh, &wait); 236 + add_wait_queue(&current->sighand->signalfd_wqh, &wait); 144 237 for (;;) { 145 238 set_current_state(TASK_INTERRUPTIBLE); 146 - ret = dequeue_signal(lk.tsk, &ctx->sigmask, info); 147 - signalfd_unlock(&lk); 239 + ret = dequeue_signal(current, &ctx->sigmask, info); 148 240 if (ret != 0) 149 241 break; 150 242 if (signal_pending(current)) { 151 243 ret = -ERESTARTSYS; 152 244 break; 153 245 } 246 + spin_unlock_irq(&current->sighand->siglock); 154 247 schedule(); 155 - ret = signalfd_lock(ctx, &lk); 156 - if (unlikely(!ret)) { 157 - /* 158 - * Let the caller read zero byte, ala socket 159 - * recv() when the peer disconnect. 
This test 160 - * must be done before doing a dequeue_signal(), 161 - * because if the sighand has been orphaned, 162 - * the dequeue_signal() call is going to crash 163 - * because ->sighand will be long gone. 164 - */ 165 - break; 166 - } 248 + spin_lock_irq(&current->sighand->siglock); 167 249 } 250 + spin_unlock_irq(&current->sighand->siglock); 168 251 169 - remove_wait_queue(&ctx->wqh, &wait); 252 + remove_wait_queue(&current->sighand->signalfd_wqh, &wait); 170 253 __set_current_state(TASK_RUNNING); 171 254 172 255 return ret; 173 256 } 174 257 175 258 /* 176 - * Returns either the size of a "struct signalfd_siginfo", or zero if the 177 - * sighand we are attached to, has been orphaned. The "count" parameter 178 - * must be at least the size of a "struct signalfd_siginfo". 259 + * Returns a multiple of the size of a "struct signalfd_siginfo", or a negative 260 + * error code. The "count" parameter must be at least the size of a 261 + * "struct signalfd_siginfo". 179 262 */ 180 263 static ssize_t signalfd_read(struct file *file, char __user *buf, size_t count, 181 264 loff_t *ppos) ··· 178 287 return -EINVAL; 179 288 180 289 siginfo = (struct signalfd_siginfo __user *) buf; 181 - 182 290 do { 183 291 ret = signalfd_dequeue(ctx, &info, nonblock); 184 292 if (unlikely(ret <= 0)) ··· 190 300 nonblock = 1; 191 301 } while (--count); 192 302 193 - return total ? total : ret; 303 + return total ? total: ret; 194 304 } 195 305 196 306 static const struct file_operations signalfd_fops = { ··· 199 309 .read = signalfd_read, 200 310 }; 201 311 202 - /* 203 - * Create a file descriptor that is associated with our signal 204 - * state. We can pass it around to others if we want to, but 205 - * it will always be _our_ signal state. 
206 - */ 207 312 asmlinkage long sys_signalfd(int ufd, sigset_t __user *user_mask, size_t sizemask) 208 313 { 209 314 int error; 210 315 sigset_t sigmask; 211 316 struct signalfd_ctx *ctx; 212 - struct sighand_struct *sighand; 213 317 struct file *file; 214 318 struct inode *inode; 215 - struct signalfd_lockctx lk; 216 319 217 320 if (sizemask != sizeof(sigset_t) || 218 321 copy_from_user(&sigmask, user_mask, sizeof(sigmask))) ··· 218 335 if (!ctx) 219 336 return -ENOMEM; 220 337 221 - init_waitqueue_head(&ctx->wqh); 222 338 ctx->sigmask = sigmask; 223 - ctx->tsk = current->group_leader; 224 - 225 - sighand = current->sighand; 226 - /* 227 - * Add this fd to the list of signal listeners. 228 - */ 229 - spin_lock_irq(&sighand->siglock); 230 - list_add_tail(&ctx->lnk, &sighand->signalfd_list); 231 - spin_unlock_irq(&sighand->siglock); 232 339 233 340 /* 234 341 * When we call this, the initialization must be complete, since ··· 237 364 fput(file); 238 365 return -EINVAL; 239 366 } 240 - /* 241 - * We need to be prepared of the fact that the sighand this fd 242 - * is attached to, has been detched. In that case signalfd_lock() 243 - * will return 0, and we'll just skip setting the new mask. 244 - */ 245 - if (signalfd_lock(ctx, &lk)) { 246 - ctx->sigmask = sigmask; 247 - signalfd_unlock(&lk); 248 - } 249 - wake_up(&ctx->wqh); 367 + spin_lock_irq(&current->sighand->siglock); 368 + ctx->sigmask = sigmask; 369 + spin_unlock_irq(&current->sighand->siglock); 370 + 371 + wake_up(&current->sighand->signalfd_wqh); 250 372 fput(file); 251 373 } 252 374 253 375 return ufd; 254 376 255 377 err_fdalloc: 256 - signalfd_cleanup(ctx); 378 + kfree(ctx); 257 379 return error; 258 380 } 259 381
+34 -12
fs/splice.c
··· 1224 1224 } 1225 1225 1226 1226 /* 1227 + * Do a copy-from-user while holding the mmap_semaphore for reading, in a 1228 + * manner safe from deadlocking with simultaneous mmap() (grabbing mmap_sem 1229 + * for writing) and page faulting on the user memory pointed to by src. 1230 + * This assumes that we will very rarely hit the partial != 0 path, or this 1231 + * will not be a win. 1232 + */ 1233 + static int copy_from_user_mmap_sem(void *dst, const void __user *src, size_t n) 1234 + { 1235 + int partial; 1236 + 1237 + pagefault_disable(); 1238 + partial = __copy_from_user_inatomic(dst, src, n); 1239 + pagefault_enable(); 1240 + 1241 + /* 1242 + * Didn't copy everything, drop the mmap_sem and do a faulting copy 1243 + */ 1244 + if (unlikely(partial)) { 1245 + up_read(&current->mm->mmap_sem); 1246 + partial = copy_from_user(dst, src, n); 1247 + down_read(&current->mm->mmap_sem); 1248 + } 1249 + 1250 + return partial; 1251 + } 1252 + 1253 + /* 1227 1254 * Map an iov into an array of pages and offset/length tupples. With the 1228 1255 * partial_page structure, we can map several non-contiguous ranges into 1229 1256 * our ones pages[] map instead of splitting that operation into pieces. ··· 1263 1236 { 1264 1237 int buffers = 0, error = 0; 1265 1238 1266 - /* 1267 - * It's ok to take the mmap_sem for reading, even 1268 - * across a "get_user()". 1269 - */ 1270 1239 down_read(&current->mm->mmap_sem); 1271 1240 1272 1241 while (nr_vecs) { 1273 1242 unsigned long off, npages; 1243 + struct iovec entry; 1274 1244 void __user *base; 1275 1245 size_t len; 1276 1246 int i; 1277 1247 1278 - /* 1279 - * Get user address base and length for this iovec. 
1280 - */ 1281 - error = get_user(base, &iov->iov_base); 1282 - if (unlikely(error)) 1248 + error = -EFAULT; 1249 + if (copy_from_user_mmap_sem(&entry, iov, sizeof(entry))) 1283 1250 break; 1284 - error = get_user(len, &iov->iov_len); 1285 - if (unlikely(error)) 1286 - break; 1251 + 1252 + base = entry.iov_base; 1253 + len = entry.iov_len; 1287 1254 1288 1255 /* 1289 1256 * Sanity check this iovec. 0 read succeeds. 1290 1257 */ 1258 + error = 0; 1291 1259 if (unlikely(!len)) 1292 1260 break; 1293 1261 error = -EFAULT;
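`copy_from_user_mmap_sem()` attempts the copy atomically and only drops and retakes the semaphore on the rare fault, avoiding a deadlock against a concurrent `mmap()` that wants `mmap_sem` for writing. A userspace model of the same shape (all names here are illustrative: a held-count stands in for `mmap_sem`, and a stub stands in for `__copy_from_user_inatomic()`):

```c
#include <assert.h>
#include <string.h>

/* Toy read-side "semaphore": just a held-count for illustration. */
struct rwsem { int readers; };

static void down_read(struct rwsem *s) { s->readers++; }
static void up_read(struct rwsem *s)   { assert(s->readers > 0); s->readers--; }

/* Stand-in for __copy_from_user_inatomic(): refuses to copy when the
 * source is flagged non-resident, the way an atomic copy fails when it
 * would have to take a page fault. Returns bytes NOT copied. */
static size_t try_copy_atomic(void *dst, const void *src, size_t n, int resident)
{
        if (!resident)
                return n;               /* nothing copied: would have faulted */
        memcpy(dst, src, n);
        return 0;
}

/* Same shape as copy_from_user_mmap_sem(): fast path under the lock,
 * slow path drops it, does the blocking copy, and retakes it. */
static size_t copy_dropping_lock(struct rwsem *sem, void *dst,
                                 const void *src, size_t n, int resident)
{
        size_t partial = try_copy_atomic(dst, src, n, resident);

        if (partial) {                  /* rare slow path */
                up_read(sem);           /* can't fault while holding the lock */
                memcpy(dst, src, n);    /* the blocking, faulting copy */
                partial = 0;
                down_read(sem);         /* retake before returning */
        }
        return partial;
}
```

As the comment in the patch notes, this only wins if the slow path is rare: every fault costs an unlock/lock round trip on top of the copy itself.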
+1 -3
fs/ufs/super.c
··· 894 894 goto again; 895 895 } 896 896 897 - 897 + sbi->s_flags = flags; /* after that line some functions use s_flags */ 898 898 ufs_print_super_stuff(sb, usb1, usb2, usb3); 899 899 900 900 /* ··· 1025 1025 UFS_MOUNT_UFSTYPE_44BSD) 1026 1026 uspi->s_maxsymlinklen = 1027 1027 fs32_to_cpu(sb, usb3->fs_un2.fs_44.fs_maxsymlinklen); 1028 - 1029 - sbi->s_flags = flags; 1030 1028 1031 1029 inode = iget(sb, UFS_ROOTINO); 1032 1030 if (!inode || is_bad_inode(inode))
-5
fs/xfs/xfs_buf_item.h
··· 52 52 #define XFS_BLI_UDQUOT_BUF 0x4 53 53 #define XFS_BLI_PDQUOT_BUF 0x8 54 54 #define XFS_BLI_GDQUOT_BUF 0x10 55 - /* 56 - * This flag indicates that the buffer contains newly allocated 57 - * inodes. 58 - */ 59 - #define XFS_BLI_INODE_NEW_BUF 0x20 60 55 61 56 #define XFS_BLI_CHUNK 128 62 57 #define XFS_BLI_SHIFT 7
+4 -3
fs/xfs/xfs_filestream.c
··· 350 350 /* xfs_fstrm_free_func(): callback for freeing cached stream items. */ 351 351 void 352 352 xfs_fstrm_free_func( 353 - xfs_ino_t ino, 354 - fstrm_item_t *item) 353 + unsigned long ino, 354 + void *data) 355 355 { 356 + fstrm_item_t *item = (fstrm_item_t *)data; 356 357 xfs_inode_t *ip = item->ip; 357 358 int ref; 358 359 ··· 439 438 grp_count = 10; 440 439 441 440 err = xfs_mru_cache_create(&mp->m_filestream, lifetime, grp_count, 442 - (xfs_mru_cache_free_func_t)xfs_fstrm_free_func); 441 + xfs_fstrm_free_func); 443 442 444 443 return err; 445 444 }
+3 -48
fs/xfs/xfs_log_recover.c
··· 1874 1874 /*ARGSUSED*/ 1875 1875 STATIC void 1876 1876 xlog_recover_do_reg_buffer( 1877 - xfs_mount_t *mp, 1878 1877 xlog_recover_item_t *item, 1879 1878 xfs_buf_t *bp, 1880 1879 xfs_buf_log_format_t *buf_f) ··· 1884 1885 unsigned int *data_map = NULL; 1885 1886 unsigned int map_size = 0; 1886 1887 int error; 1887 - int stale_buf = 1; 1888 - 1889 - /* 1890 - * Scan through the on-disk inode buffer and attempt to 1891 - * determine if it has been written to since it was logged. 1892 - * 1893 - * - If any of the magic numbers are incorrect then the buffer is stale 1894 - * - If any of the modes are non-zero then the buffer is not stale 1895 - * - If all of the modes are zero and at least one of the generation 1896 - * counts is non-zero then the buffer is stale 1897 - * 1898 - * If the end result is a stale buffer then the log buffer is replayed 1899 - * otherwise it is skipped. 1900 - * 1901 - * This heuristic is not perfect. It can be improved by scanning the 1902 - * entire inode chunk for evidence that any of the inode clusters have 1903 - * been updated. To fix this problem completely we will need a major 1904 - * architectural change to the logging system. 
1905 - */ 1906 - if (buf_f->blf_flags & XFS_BLI_INODE_NEW_BUF) { 1907 - xfs_dinode_t *dip; 1908 - int inodes_per_buf; 1909 - int mode_count = 0; 1910 - int gen_count = 0; 1911 - 1912 - stale_buf = 0; 1913 - inodes_per_buf = XFS_BUF_COUNT(bp) >> mp->m_sb.sb_inodelog; 1914 - for (i = 0; i < inodes_per_buf; i++) { 1915 - dip = (xfs_dinode_t *)xfs_buf_offset(bp, 1916 - i * mp->m_sb.sb_inodesize); 1917 - if (be16_to_cpu(dip->di_core.di_magic) != 1918 - XFS_DINODE_MAGIC) { 1919 - stale_buf = 1; 1920 - break; 1921 - } 1922 - if (be16_to_cpu(dip->di_core.di_mode)) 1923 - mode_count++; 1924 - if (be16_to_cpu(dip->di_core.di_gen)) 1925 - gen_count++; 1926 - } 1927 - 1928 - if (!mode_count && gen_count) 1929 - stale_buf = 1; 1930 - } 1931 1888 1932 1889 switch (buf_f->blf_type) { 1933 1890 case XFS_LI_BUF: ··· 1917 1962 -1, 0, XFS_QMOPT_DOWARN, 1918 1963 "dquot_buf_recover"); 1919 1964 } 1920 - if (!error && stale_buf) 1965 + if (!error) 1921 1966 memcpy(xfs_buf_offset(bp, 1922 1967 (uint)bit << XFS_BLI_SHIFT), /* dest */ 1923 1968 item->ri_buf[i].i_addr, /* source */ ··· 2089 2134 if (log->l_quotaoffs_flag & type) 2090 2135 return; 2091 2136 2092 - xlog_recover_do_reg_buffer(mp, item, bp, buf_f); 2137 + xlog_recover_do_reg_buffer(item, bp, buf_f); 2093 2138 } 2094 2139 2095 2140 /* ··· 2190 2235 (XFS_BLI_UDQUOT_BUF|XFS_BLI_PDQUOT_BUF|XFS_BLI_GDQUOT_BUF)) { 2191 2236 xlog_recover_do_dquot_buffer(mp, log, item, bp, buf_f); 2192 2237 } else { 2193 - xlog_recover_do_reg_buffer(mp, item, bp, buf_f); 2238 + xlog_recover_do_reg_buffer(item, bp, buf_f); 2194 2239 } 2195 2240 if (error) 2196 2241 return XFS_ERROR(error);
-1
fs/xfs/xfs_trans_buf.c
··· 966 966 ASSERT(atomic_read(&bip->bli_refcount) > 0); 967 967 968 968 bip->bli_flags |= XFS_BLI_INODE_ALLOC_BUF; 969 - bip->bli_format.blf_flags |= XFS_BLI_INODE_NEW_BUF; 970 969 } 971 970 972 971
-4
include/acpi/acpi_drivers.h
··· 147 147 /*-------------------------------------------------------------------------- 148 148 Suspend/Resume 149 149 -------------------------------------------------------------------------- */ 150 - #ifdef CONFIG_ACPI_SLEEP 151 150 extern int acpi_sleep_init(void); 152 - #else 153 - static inline int acpi_sleep_init(void) { return 0; } 154 - #endif 155 151 156 152 #endif /*__ACPI_DRIVERS_H__*/
+2
include/acpi/processor.h
··· 320 320 int acpi_processor_cst_has_changed(struct acpi_processor *pr); 321 321 int acpi_processor_power_exit(struct acpi_processor *pr, 322 322 struct acpi_device *device); 323 + int acpi_processor_suspend(struct acpi_device * device, pm_message_t state); 324 + int acpi_processor_resume(struct acpi_device * device); 323 325 324 326 /* in processor_thermal.c */ 325 327 int acpi_processor_get_limit_info(struct acpi_processor *pr);
-5
include/asm-i386/system.h
··· 214 214 */ 215 215 216 216 217 - /* 218 - * Actually only lfence would be needed for mb() because all stores done 219 - * by the kernel should be already ordered. But keep a full barrier for now. 220 - */ 221 - 222 217 #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2) 223 218 #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2) 224 219
+1
include/asm-mips/fcntl.h
··· 13 13 #define O_SYNC 0x0010 14 14 #define O_NONBLOCK 0x0080 15 15 #define O_CREAT 0x0100 /* not fcntl */ 16 + #define O_TRUNC 0x0200 /* not fcntl */ 16 17 #define O_EXCL 0x0400 /* not fcntl */ 17 18 #define O_NOCTTY 0x0800 /* not fcntl */ 18 19 #define FASYNC 0x1000 /* fcntl, for BSD compatibility */
+24 -8
include/asm-mips/irq.h
··· 24 24 #define irq_canonicalize(irq) (irq) /* Sane hardware, sane code ... */ 25 25 #endif 26 26 27 + #ifdef CONFIG_MIPS_MT_SMTC 28 + 29 + struct irqaction; 30 + 31 + extern unsigned long irq_hwmask[]; 32 + extern int setup_irq_smtc(unsigned int irq, struct irqaction * new, 33 + unsigned long hwmask); 34 + 35 + static inline void smtc_im_ack_irq(unsigned int irq) 36 + { 37 + if (irq_hwmask[irq] & ST0_IM) 38 + set_c0_status(irq_hwmask[irq] & ST0_IM); 39 + } 40 + 41 + #else 42 + 43 + static inline void smtc_im_ack_irq(unsigned int irq) 44 + { 45 + } 46 + 47 + #endif /* CONFIG_MIPS_MT_SMTC */ 48 + 27 49 #ifdef CONFIG_MIPS_MT_SMTC_IM_BACKSTOP 50 + 28 51 /* 29 52 * Clear interrupt mask handling "backstop" if irq_hwmask 30 53 * entry so indicates. This implies that the ack() or end() ··· 61 38 ~(irq_hwmask[irq] & 0x0000ff00)); \ 62 39 } while (0) 63 40 #else 41 + 64 42 #define __DO_IRQ_SMTC_HOOK(irq) do { } while (0) 65 43 #endif 66 44 ··· 83 59 84 60 extern void arch_init_irq(void); 85 61 extern void spurious_interrupt(void); 86 - 87 - #ifdef CONFIG_MIPS_MT_SMTC 88 - struct irqaction; 89 - 90 - extern unsigned long irq_hwmask[]; 91 - extern int setup_irq_smtc(unsigned int irq, struct irqaction * new, 92 - unsigned long hwmask); 93 - #endif /* CONFIG_MIPS_MT_SMTC */ 94 62 95 63 extern int allocate_irqno(void); 96 64 extern void alloc_legacy_irqno(void);
+1 -1
include/asm-mips/page.h
··· 142 142 /* 143 143 * __pa()/__va() should be used only during mem init. 144 144 */ 145 - #if defined(CONFIG_64BIT) && !defined(CONFIG_BUILD_ELF64) 145 + #ifdef CONFIG_64BIT 146 146 #define __pa(x) \ 147 147 ({ \ 148 148 unsigned long __x = (unsigned long)(x); \
+29 -54
include/asm-x86_64/pgalloc.h
··· 4 4 #include <asm/pda.h> 5 5 #include <linux/threads.h> 6 6 #include <linux/mm.h> 7 - #include <linux/quicklist.h> 8 - 9 - #define QUICK_PGD 0 /* We preserve special mappings over free */ 10 - #define QUICK_PT 1 /* Other page table pages that are zero on free */ 11 7 12 8 #define pmd_populate_kernel(mm, pmd, pte) \ 13 9 set_pmd(pmd, __pmd(_PAGE_TABLE | __pa(pte))) ··· 20 24 static inline void pmd_free(pmd_t *pmd) 21 25 { 22 26 BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); 23 - quicklist_free(QUICK_PT, NULL, pmd); 27 + free_page((unsigned long)pmd); 24 28 } 25 29 26 30 static inline pmd_t *pmd_alloc_one (struct mm_struct *mm, unsigned long addr) 27 31 { 28 - return (pmd_t *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); 32 + return (pmd_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); 29 33 } 30 34 31 35 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) 32 36 { 33 - return (pud_t *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); 37 + return (pud_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); 34 38 } 35 39 36 40 static inline void pud_free (pud_t *pud) 37 41 { 38 42 BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); 39 - quicklist_free(QUICK_PT, NULL, pud); 43 + free_page((unsigned long)pud); 40 44 } 41 45 42 46 static inline void pgd_list_add(pgd_t *pgd) ··· 57 61 spin_unlock(&pgd_lock); 58 62 } 59 63 60 - static inline void pgd_ctor(void *x) 61 - { 62 - unsigned boundary; 63 - pgd_t *pgd = x; 64 - struct page *page = virt_to_page(pgd); 65 - 66 - /* 67 - * Copy kernel pointers in from init. 
68 - */ 69 - boundary = pgd_index(__PAGE_OFFSET); 70 - memcpy(pgd + boundary, 71 - init_level4_pgt + boundary, 72 - (PTRS_PER_PGD - boundary) * sizeof(pgd_t)); 73 - 74 - spin_lock(&pgd_lock); 75 - list_add(&page->lru, &pgd_list); 76 - spin_unlock(&pgd_lock); 77 - } 78 - 79 - static inline void pgd_dtor(void *x) 80 - { 81 - pgd_t *pgd = x; 82 - struct page *page = virt_to_page(pgd); 83 - 84 - spin_lock(&pgd_lock); 85 - list_del(&page->lru); 86 - spin_unlock(&pgd_lock); 87 - } 88 - 89 64 static inline pgd_t *pgd_alloc(struct mm_struct *mm) 90 65 { 91 - pgd_t *pgd = (pgd_t *)quicklist_alloc(QUICK_PGD, 92 - GFP_KERNEL|__GFP_REPEAT, pgd_ctor); 66 + unsigned boundary; 67 + pgd_t *pgd = (pgd_t *)__get_free_page(GFP_KERNEL|__GFP_REPEAT); 68 + if (!pgd) 69 + return NULL; 70 + pgd_list_add(pgd); 71 + /* 72 + * Copy kernel pointers in from init. 73 + * Could keep a freelist or slab cache of those because the kernel 74 + * part never changes. 75 + */ 76 + boundary = pgd_index(__PAGE_OFFSET); 77 + memset(pgd, 0, boundary * sizeof(pgd_t)); 78 + memcpy(pgd + boundary, 79 + init_level4_pgt + boundary, 80 + (PTRS_PER_PGD - boundary) * sizeof(pgd_t)); 93 81 return pgd; 94 82 } 95 83 96 84 static inline void pgd_free(pgd_t *pgd) 97 85 { 98 86 BUG_ON((unsigned long)pgd & (PAGE_SIZE-1)); 99 - quicklist_free(QUICK_PGD, pgd_dtor, pgd); 87 + pgd_list_del(pgd); 88 + free_page((unsigned long)pgd); 100 89 } 101 90 102 91 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address) 103 92 { 104 - return (pte_t *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); 93 + return (pte_t *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); 105 94 } 106 95 107 96 static inline struct page *pte_alloc_one(struct mm_struct *mm, unsigned long address) 108 97 { 109 - void *p = (void *)quicklist_alloc(QUICK_PT, GFP_KERNEL|__GFP_REPEAT, NULL); 110 - 98 + void *p = (void *)get_zeroed_page(GFP_KERNEL|__GFP_REPEAT); 111 99 if (!p) 112 100 return NULL; 113 101 return virt_to_page(p); ··· 
103 123 static inline void pte_free_kernel(pte_t *pte) 104 124 { 105 125 BUG_ON((unsigned long)pte & (PAGE_SIZE-1)); 106 - quicklist_free(QUICK_PT, NULL, pte); 126 + free_page((unsigned long)pte); 107 127 } 108 128 109 129 static inline void pte_free(struct page *pte) 110 130 { 111 - quicklist_free_page(QUICK_PT, NULL, pte); 112 - } 131 + __free_page(pte); 132 + } 113 133 114 - #define __pte_free_tlb(tlb,pte) quicklist_free_page(QUICK_PT, NULL,(pte)) 134 + #define __pte_free_tlb(tlb,pte) tlb_remove_page((tlb),(pte)) 115 135 116 - #define __pmd_free_tlb(tlb,x) quicklist_free(QUICK_PT, NULL, (x)) 117 - #define __pud_free_tlb(tlb,x) quicklist_free(QUICK_PT, NULL, (x)) 136 + #define __pmd_free_tlb(tlb,x) tlb_remove_page((tlb),virt_to_page(x)) 137 + #define __pud_free_tlb(tlb,x) tlb_remove_page((tlb),virt_to_page(x)) 118 138 119 - static inline void check_pgt_cache(void) 120 - { 121 - quicklist_trim(QUICK_PGD, pgd_dtor, 25, 16); 122 - quicklist_trim(QUICK_PT, NULL, 25, 16); 123 - } 124 139 #endif /* _X86_64_PGALLOC_H */
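The restored `pgd_alloc()` again zeroes the user half of the new top-level table and clones the kernel half from `init_level4_pgt`, so every process shares the kernel mappings. The split copy, shrunk to a toy table size (names and sizes here are illustrative; `PTRS_PER_PGD` is really 512 on x86_64):

```c
#include <assert.h>
#include <string.h>

#define PTRS_PER_PGD 8   /* shrunk for illustration */

/* Entries below the kernel boundary (user space) start out zero;
 * entries at and above it are cloned from the reference table
 * (init_level4_pgt in the real code). */
static void pgd_init_from_kernel_pgt(unsigned long *pgd,
                                     const unsigned long *kernel_pgt,
                                     unsigned int boundary)
{
        memset(pgd, 0, boundary * sizeof(*pgd));
        memcpy(pgd + boundary, kernel_pgt + boundary,
               (PTRS_PER_PGD - boundary) * sizeof(*pgd));
}
```

Because the kernel part never changes, the patch's own comment suggests a freelist or slab cache of pre-initialized tables would also work; the quicklist scheme being reverted here was one attempt at exactly that.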
+1
include/asm-x86_64/pgtable.h
··· 411 411 #define HAVE_ARCH_UNMAPPED_AREA 412 412 413 413 #define pgtable_cache_init() do { } while (0) 414 + #define check_pgt_cache() do { } while (0) 414 415 415 416 #define PAGE_AGP PAGE_KERNEL_NOCACHE 416 417 #define HAVE_PAGE_AGP 1
+3 -16
include/linux/cpufreq.h
··· 32 32 * CPUFREQ NOTIFIER INTERFACE * 33 33 *********************************************************************/ 34 34 35 - #ifdef CONFIG_CPU_FREQ 36 35 int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list); 37 - #else 38 - static inline int cpufreq_register_notifier(struct notifier_block *nb, 39 - unsigned int list) 40 - { 41 - return 0; 42 - } 43 - #endif 44 36 int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list); 45 37 46 38 #define CPUFREQ_TRANSITION_NOTIFIER (0) ··· 260 268 int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu); 261 269 int cpufreq_update_policy(unsigned int cpu); 262 270 271 + /* query the current CPU frequency (in kHz). If zero, cpufreq couldn't detect it */ 272 + unsigned int cpufreq_get(unsigned int cpu); 263 273 264 - /* 265 - * query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it 266 - */ 274 + /* query the last known CPU freq (in kHz). If zero, cpufreq couldn't detect it */ 267 275 #ifdef CONFIG_CPU_FREQ 268 276 unsigned int cpufreq_quick_get(unsigned int cpu); 269 - unsigned int cpufreq_get(unsigned int cpu); 270 277 #else 271 278 static inline unsigned int cpufreq_quick_get(unsigned int cpu) 272 - { 273 - return 0; 274 - } 275 - static inline unsigned int cpufreq_get(unsigned int cpu) 276 279 { 277 280 return 0; 278 281 }
+1 -1
include/linux/init_task.h
··· 86 86 .count = ATOMIC_INIT(1), \ 87 87 .action = { { { .sa_handler = NULL, } }, }, \ 88 88 .siglock = __SPIN_LOCK_UNLOCKED(sighand.siglock), \ 89 - .signalfd_list = LIST_HEAD_INIT(sighand.signalfd_list), \ 89 + .signalfd_wqh = __WAIT_QUEUE_HEAD_INITIALIZER(sighand.signalfd_wqh), \ 90 90 } 91 91 92 92 extern struct group_info init_groups;
+2 -1
include/linux/sched.h
··· 438 438 atomic_t count; 439 439 struct k_sigaction action[_NSIG]; 440 440 spinlock_t siglock; 441 - struct list_head signalfd_list; 441 + wait_queue_head_t signalfd_wqh; 442 442 }; 443 443 444 444 struct pacct_struct { ··· 1406 1406 extern unsigned int sysctl_sched_batch_wakeup_granularity; 1407 1407 extern unsigned int sysctl_sched_stat_granularity; 1408 1408 extern unsigned int sysctl_sched_runtime_limit; 1409 + extern unsigned int sysctl_sched_compat_yield; 1409 1410 extern unsigned int sysctl_sched_child_runs_first; 1410 1411 extern unsigned int sysctl_sched_features; 1411 1412
+4 -36
include/linux/signalfd.h
··· 45 45 #ifdef CONFIG_SIGNALFD 46 46 47 47 /* 48 - * Deliver the signal to listening signalfd. This must be called 49 - * with the sighand lock held. Same are the following that end up 50 - * calling signalfd_deliver(). 51 - */ 52 - void signalfd_deliver(struct task_struct *tsk, int sig); 53 - 54 - /* 55 - * No need to fall inside signalfd_deliver() if no signal listeners 56 - * are available. 48 + * Deliver the signal to listening signalfd. 57 49 */ 58 50 static inline void signalfd_notify(struct task_struct *tsk, int sig) 59 51 { 60 - if (unlikely(!list_empty(&tsk->sighand->signalfd_list))) 61 - signalfd_deliver(tsk, sig); 62 - } 63 - 64 - /* 65 - * The signal -1 is used to notify the signalfd that the sighand 66 - * is on its way to be detached. 67 - */ 68 - static inline void signalfd_detach_locked(struct task_struct *tsk) 69 - { 70 - if (unlikely(!list_empty(&tsk->sighand->signalfd_list))) 71 - signalfd_deliver(tsk, -1); 72 - } 73 - 74 - static inline void signalfd_detach(struct task_struct *tsk) 75 - { 76 - struct sighand_struct *sighand = tsk->sighand; 77 - 78 - if (unlikely(!list_empty(&sighand->signalfd_list))) { 79 - spin_lock_irq(&sighand->siglock); 80 - signalfd_deliver(tsk, -1); 81 - spin_unlock_irq(&sighand->siglock); 82 - } 52 + if (unlikely(waitqueue_active(&tsk->sighand->signalfd_wqh))) 53 + wake_up(&tsk->sighand->signalfd_wqh); 83 54 } 84 55 85 56 #else /* CONFIG_SIGNALFD */ 86 57 87 - #define signalfd_deliver(t, s) do { } while (0) 88 - #define signalfd_notify(t, s) do { } while (0) 89 - #define signalfd_detach_locked(t) do { } while (0) 90 - #define signalfd_detach(t) do { } while (0) 58 + static inline void signalfd_notify(struct task_struct *tsk, int sig) { } 91 59 92 60 #endif /* CONFIG_SIGNALFD */ 93 61
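The signalfd hunk above replaces the per-sighand list with a wait queue, and the new signalfd_notify() only pays for a wakeup when waitqueue_active() reports a sleeper. A toy user-space sketch of that check-before-wake pattern (toy_wqh is a stand-in counter, not the kernel wait_queue_head_t):

```c
/* Stand-in for wait_queue_head_t: a waiter count plus a counter
 * of wakeups actually issued. */
struct toy_wqh {
	int nr_waiters;
	int wakeups;
};

/* Stand-in for waitqueue_active(): a cheap check, no locking. */
static int toy_waitqueue_active(const struct toy_wqh *wqh)
{
	return wqh->nr_waiters != 0;
}

/* Mirrors the new signalfd_notify(): skip the wakeup entirely in
 * the common case where no signalfd reader is waiting. */
static void toy_notify(struct toy_wqh *wqh)
{
	if (toy_waitqueue_active(wqh))
		wqh->wakeups++;		/* wake_up() in the real code */
}
```

In the kernel the unlikely() annotation makes the same point: most signal deliveries have no signalfd listener, so the fast path should cost one read.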
+3 -1
include/net/sctp/sm.h
··· 114 114 sctp_state_fn_t sctp_sf_eat_data_6_2; 115 115 sctp_state_fn_t sctp_sf_eat_data_fast_4_4; 116 116 sctp_state_fn_t sctp_sf_eat_sack_6_2; 117 - sctp_state_fn_t sctp_sf_tabort_8_4_8; 118 117 sctp_state_fn_t sctp_sf_operr_notify; 119 118 sctp_state_fn_t sctp_sf_t1_init_timer_expire; 120 119 sctp_state_fn_t sctp_sf_t1_cookie_timer_expire; ··· 246 247 int, __be16); 247 248 struct sctp_chunk *sctp_make_asconf_set_prim(struct sctp_association *asoc, 248 249 union sctp_addr *addr); 250 + int sctp_verify_asconf(const struct sctp_association *asoc, 251 + struct sctp_paramhdr *param_hdr, void *chunk_end, 252 + struct sctp_paramhdr **errp); 249 253 struct sctp_chunk *sctp_process_asconf(struct sctp_association *asoc, 250 254 struct sctp_chunk *asconf); 251 255 int sctp_process_asconf_ack(struct sctp_association *asoc,
+2 -1
include/net/sctp/structs.h
··· 421 421 * internally. 422 422 */ 423 423 union sctp_addr_param { 424 + struct sctp_paramhdr p; 424 425 struct sctp_ipv4addr_param v4; 425 426 struct sctp_ipv6addr_param v6; 426 427 }; ··· 1157 1156 int sctp_add_bind_addr(struct sctp_bind_addr *, union sctp_addr *, 1158 1157 __u8 use_as_src, gfp_t gfp); 1159 1158 int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *, 1160 - void (*rcu_call)(struct rcu_head *, 1159 + void fastcall (*rcu_call)(struct rcu_head *, 1161 1160 void (*func)(struct rcu_head *))); 1162 1161 int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *, 1163 1162 struct sctp_sock *);
+2 -4
include/net/tcp.h
··· 1059 1059 }; 1060 1060 1061 1061 struct tcp4_md5sig_key { 1062 - u8 *key; 1063 - u16 keylen; 1062 + struct tcp_md5sig_key base; 1064 1063 __be32 addr; 1065 1064 }; 1066 1065 1067 1066 struct tcp6_md5sig_key { 1068 - u8 *key; 1069 - u16 keylen; 1067 + struct tcp_md5sig_key base; 1070 1068 #if 0 1071 1069 u32 scope_id; /* XXX */ 1072 1070 #endif
-9
kernel/exit.c
··· 24 24 #include <linux/pid_namespace.h> 25 25 #include <linux/ptrace.h> 26 26 #include <linux/profile.h> 27 - #include <linux/signalfd.h> 28 27 #include <linux/mount.h> 29 28 #include <linux/proc_fs.h> 30 29 #include <linux/kthread.h> ··· 84 85 rcu_read_lock(); 85 86 sighand = rcu_dereference(tsk->sighand); 86 87 spin_lock(&sighand->siglock); 87 - 88 - /* 89 - * Notify that this sighand has been detached. This must 90 - * be called with the tsk->sighand lock held. Also, this 91 - * access tsk->sighand internally, so it must be called 92 - * before tsk->sighand is reset. 93 - */ 94 - signalfd_detach_locked(tsk); 95 88 96 89 posix_cpu_timers_exit(tsk); 97 90 if (atomic_dec_and_test(&sig->count))
+1 -1
kernel/fork.c
··· 1438 1438 struct sighand_struct *sighand = data; 1439 1439 1440 1440 spin_lock_init(&sighand->siglock); 1441 - INIT_LIST_HEAD(&sighand->signalfd_list); 1441 + init_waitqueue_head(&sighand->signalfd_wqh); 1442 1442 } 1443 1443 1444 1444 void __init proc_caches_init(void)
+16 -10
kernel/futex.c
··· 1943 1943 void exit_robust_list(struct task_struct *curr) 1944 1944 { 1945 1945 struct robust_list_head __user *head = curr->robust_list; 1946 - struct robust_list __user *entry, *pending; 1947 - unsigned int limit = ROBUST_LIST_LIMIT, pi, pip; 1946 + struct robust_list __user *entry, *next_entry, *pending; 1947 + unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip; 1948 1948 unsigned long futex_offset; 1949 + int rc; 1949 1950 1950 1951 /* 1951 1952 * Fetch the list head (which was registered earlier, via ··· 1966 1965 if (fetch_robust_entry(&pending, &head->list_op_pending, &pip)) 1967 1966 return; 1968 1967 1969 - if (pending) 1970 - handle_futex_death((void __user *)pending + futex_offset, 1971 - curr, pip); 1972 - 1968 + next_entry = NULL; /* avoid warning with gcc */ 1973 1969 while (entry != &head->list) { 1970 + /* 1971 + * Fetch the next entry in the list before calling 1972 + * handle_futex_death: 1973 + */ 1974 + rc = fetch_robust_entry(&next_entry, &entry->next, &next_pi); 1974 1975 /* 1975 1976 * A pending lock might already be on the list, so 1976 1977 * don't process it twice: ··· 1981 1978 if (handle_futex_death((void __user *)entry + futex_offset, 1982 1979 curr, pi)) 1983 1980 return; 1984 - /* 1985 - * Fetch the next entry in the list: 1986 - */ 1987 - if (fetch_robust_entry(&entry, &entry->next, &pi)) 1981 + if (rc) 1988 1982 return; 1983 + entry = next_entry; 1984 + pi = next_pi; 1989 1985 /* 1990 1986 * Avoid excessively long or circular lists: 1991 1987 */ ··· 1993 1991 1994 1992 cond_resched(); 1995 1993 } 1994 + 1995 + if (pending) 1996 + handle_futex_death((void __user *)pending + futex_offset, 1997 + curr, pip); 1996 1998 } 1997 1999 1998 2000 long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
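The exit_robust_list() rework above fetches the next robust-list entry before calling handle_futex_death(), because once the dead lock is handled, user space may free or reuse the current entry and its next pointer can no longer be trusted. A minimal user-space sketch of that fetch-ahead traversal (toy types and a simulated clobber, not the kernel API):

```c
#include <stddef.h>

struct toy_entry {
	struct toy_entry *next;
	int handled;
};

/* Simulates handle_futex_death(): after this call the entry may be
 * reused by its owner, so 'next' is deliberately clobbered. */
static void toy_handle_death(struct toy_entry *e)
{
	e->handled = 1;
	e->next = NULL;
}

/* Walk the list, always fetching 'next' before handling the current
 * entry, mirroring the reordered loop in exit_robust_list(). */
static int toy_walk(struct toy_entry *head)
{
	struct toy_entry *entry = head, *next_entry;
	int handled = 0, limit = 16;	/* cf. ROBUST_LIST_LIMIT */

	while (entry && limit--) {
		next_entry = entry->next;	/* fetch first */
		toy_handle_death(entry);	/* may clobber 'entry' */
		handled++;
		entry = next_entry;
	}
	return handled;
}
```

Reading next before processing is what lets the kernel loop survive an entry that vanishes under it; the same reordering is applied to the compat path in the following hunk.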
+18 -10
kernel/futex_compat.c
··· 38 38 void compat_exit_robust_list(struct task_struct *curr) 39 39 { 40 40 struct compat_robust_list_head __user *head = curr->compat_robust_list; 41 - struct robust_list __user *entry, *pending; 42 - unsigned int limit = ROBUST_LIST_LIMIT, pi, pip; 43 - compat_uptr_t uentry, upending; 41 + struct robust_list __user *entry, *next_entry, *pending; 42 + unsigned int limit = ROBUST_LIST_LIMIT, pi, next_pi, pip; 43 + compat_uptr_t uentry, next_uentry, upending; 44 44 compat_long_t futex_offset; 45 + int rc; 45 46 46 47 /* 47 48 * Fetch the list head (which was registered earlier, via ··· 62 61 if (fetch_robust_entry(&upending, &pending, 63 62 &head->list_op_pending, &pip)) 64 63 return; 65 - if (pending) 66 - handle_futex_death((void __user *)pending + futex_offset, curr, pip); 67 64 65 + next_entry = NULL; /* avoid warning with gcc */ 68 66 while (entry != (struct robust_list __user *) &head->list) { 67 + /* 68 + * Fetch the next entry in the list before calling 69 + * handle_futex_death: 70 + */ 71 + rc = fetch_robust_entry(&next_uentry, &next_entry, 72 + (compat_uptr_t __user *)&entry->next, &next_pi); 69 73 /* 70 74 * A pending lock might already be on the list, so 71 75 * dont process it twice: ··· 80 74 curr, pi)) 81 75 return; 82 76 83 - /* 84 - * Fetch the next entry in the list: 85 - */ 86 - if (fetch_robust_entry(&uentry, &entry, 87 - (compat_uptr_t __user *)&entry->next, &pi)) 77 + if (rc) 88 78 return; 79 + uentry = next_uentry; 80 + entry = next_entry; 81 + pi = next_pi; 89 82 /* 90 83 * Avoid excessively long or circular lists: 91 84 */ ··· 93 88 94 89 cond_resched(); 95 90 } 91 + if (pending) 92 + handle_futex_death((void __user *)pending + futex_offset, 93 + curr, pip); 96 94 } 97 95 98 96 asmlinkage long
+1 -1
kernel/power/Kconfig
··· 110 110 111 111 config HIBERNATION_UP_POSSIBLE 112 112 bool 113 - depends on X86 || PPC64_SWSUSP || FRV || PPC32 113 + depends on X86 || PPC64_SWSUSP || PPC32 114 114 depends on !SMP 115 115 default y 116 116
+6 -4
kernel/sched.c
··· 1682 1682 1683 1683 p->prio = effective_prio(p); 1684 1684 1685 + if (rt_prio(p->prio)) 1686 + p->sched_class = &rt_sched_class; 1687 + else 1688 + p->sched_class = &fair_sched_class; 1689 + 1685 1690 if (!p->sched_class->task_new || !sysctl_sched_child_runs_first || 1686 1691 (clone_flags & CLONE_VM) || task_cpu(p) != this_cpu || 1687 1692 !current->se.on_rq) { ··· 4555 4550 struct rq *rq = this_rq_lock(); 4556 4551 4557 4552 schedstat_inc(rq, yld_cnt); 4558 - if (unlikely(rq->nr_running == 1)) 4559 - schedstat_inc(rq, yld_act_empty); 4560 - else 4561 - current->sched_class->yield_task(rq, current); 4553 + current->sched_class->yield_task(rq, current); 4562 4554 4563 4555 /* 4564 4556 * Since we are going to call schedule() anyway, there's
+67 -6
kernel/sched_fair.c
··· 43 43 unsigned int sysctl_sched_min_granularity __read_mostly = 2000000ULL; 44 44 45 45 /* 46 + * sys_sched_yield() compat mode 47 + * 48 + * This option switches the agressive yield implementation of the 49 + * old scheduler back on. 50 + */ 51 + unsigned int __read_mostly sysctl_sched_compat_yield; 52 + 53 + /* 46 54 * SCHED_BATCH wake-up granularity. 47 55 * (default: 25 msec, units: nanoseconds) 48 56 * ··· 639 631 640 632 se->block_start = 0; 641 633 se->sum_sleep_runtime += delta; 634 + 635 + /* 636 + * Blocking time is in units of nanosecs, so shift by 20 to 637 + * get a milliseconds-range estimation of the amount of 638 + * time that the task spent sleeping: 639 + */ 640 + if (unlikely(prof_on == SLEEP_PROFILING)) { 641 + profile_hits(SLEEP_PROFILING, (void *)get_wchan(tsk), 642 + delta >> 20); 643 + } 642 644 } 643 645 #endif 644 646 } ··· 915 897 } 916 898 917 899 /* 918 - * sched_yield() support is very simple - we dequeue and enqueue 900 + * sched_yield() support is very simple - we dequeue and enqueue. 901 + * 902 + * If compat_yield is turned on then we requeue to the end of the tree. 919 903 */ 920 904 static void yield_task_fair(struct rq *rq, struct task_struct *p) 921 905 { 922 906 struct cfs_rq *cfs_rq = task_cfs_rq(p); 907 + struct rb_node **link = &cfs_rq->tasks_timeline.rb_node; 908 + struct sched_entity *rightmost, *se = &p->se; 909 + struct rb_node *parent; 923 910 924 - __update_rq_clock(rq); 925 911 /* 926 - * Dequeue and enqueue the task to update its 927 - * position within the tree: 912 + * Are we the only task in the tree? 928 913 */ 929 - dequeue_entity(cfs_rq, &p->se, 0); 930 - enqueue_entity(cfs_rq, &p->se, 0); 914 + if (unlikely(cfs_rq->nr_running == 1)) 915 + return; 916 + 917 + if (likely(!sysctl_sched_compat_yield)) { 918 + __update_rq_clock(rq); 919 + /* 920 + * Dequeue and enqueue the task to update its 921 + * position within the tree: 922 + */ 923 + dequeue_entity(cfs_rq, &p->se, 0); 924 + enqueue_entity(cfs_rq, &p->se, 0); 925 + 926 + return; 927 + } 928 + /* 929 + * Find the rightmost entry in the rbtree: 930 + */ 931 + do { 932 + parent = *link; 933 + link = &parent->rb_right; 934 + } while (*link); 935 + 936 + rightmost = rb_entry(parent, struct sched_entity, run_node); 937 + /* 938 + * Already in the rightmost position? 939 + */ 940 + if (unlikely(rightmost == se)) 941 + return; 942 + 943 + /* 944 + * Minimally necessary key value to be last in the tree: 945 + */ 946 + se->fair_key = rightmost->fair_key + 1; 947 + 948 + if (cfs_rq->rb_leftmost == &se->run_node) 949 + cfs_rq->rb_leftmost = rb_next(&se->run_node); 950 + /* 951 + * Relink the task to the rightmost position: 952 + */ 953 + rb_erase(&se->run_node, &cfs_rq->tasks_timeline); 954 + rb_link_node(&se->run_node, parent, link); 955 + rb_insert_color(&se->run_node, &cfs_rq->tasks_timeline); 931 956 } 932 957 933 958 /*
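In the compat path above, yield_task_fair() walks rb_right links to find the tree's last entity and then requeues itself behind it. A stripped-down sketch of that rightmost walk on a plain binary search tree (toy_node stands in for struct rb_node; no coloring or rebalancing shown):

```c
#include <stddef.h>

struct toy_node {
	struct toy_node *left, *right;
	unsigned long key;
};

/* Follow right links down to the largest-key node, the same loop the
 * compat-yield path uses to locate the rightmost sched_entity. */
static struct toy_node *toy_rightmost(struct toy_node *root)
{
	struct toy_node *n = root;

	if (!n)
		return NULL;
	while (n->right)
		n = n->right;
	return n;
}
```

Once the rightmost entity is known, giving the yielder a key one larger than it is sufficient to sort it last, which is exactly what the `se->fair_key = rightmost->fair_key + 1` line relies on.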
+3 -5
kernel/signal.c
··· 378 378 /* We only dequeue private signals from ourselves, we don't let 379 379 * signalfd steal them 380 380 */ 381 - if (likely(tsk == current)) 382 - signr = __dequeue_signal(&tsk->pending, mask, info); 381 + signr = __dequeue_signal(&tsk->pending, mask, info); 383 382 if (!signr) { 384 383 signr = __dequeue_signal(&tsk->signal->shared_pending, 385 384 mask, info); ··· 406 407 } 407 408 } 408 409 } 409 - if (likely(tsk == current)) 410 - recalc_sigpending(); 410 + recalc_sigpending(); 411 411 if (signr && unlikely(sig_kernel_stop(signr))) { 412 412 /* 413 413 * Set a marker that we have dequeued a stop signal. Our ··· 423 425 if (!(tsk->signal->flags & SIGNAL_GROUP_EXIT)) 424 426 tsk->signal->flags |= SIGNAL_STOP_DEQUEUED; 425 427 } 426 - if (signr && likely(tsk == current) && 428 + if (signr && 427 429 ((info->si_code & __SI_MASK) == __SI_TIMER) && 428 430 info->si_sys_private){ 429 431 /*
+2
kernel/sys.c
··· 32 32 #include <linux/getcpu.h> 33 33 #include <linux/task_io_accounting_ops.h> 34 34 #include <linux/seccomp.h> 35 + #include <linux/cpu.h> 35 36 36 37 #include <linux/compat.h> 37 38 #include <linux/syscalls.h> ··· 879 878 kernel_shutdown_prepare(SYSTEM_POWER_OFF); 880 879 if (pm_power_off_prepare) 881 880 pm_power_off_prepare(); 881 + disable_nonboot_cpus(); 882 882 sysdev_shutdown(); 883 883 printk(KERN_EMERG "Power down.\n"); 884 884 machine_power_off();
+8
kernel/sysctl.c
··· 303 303 .proc_handler = &proc_dointvec, 304 304 }, 305 305 #endif 306 + { 307 + .ctl_name = CTL_UNNUMBERED, 308 + .procname = "sched_compat_yield", 309 + .data = &sysctl_sched_compat_yield, 310 + .maxlen = sizeof(unsigned int), 311 + .mode = 0644, 312 + .proc_handler = &proc_dointvec, 313 + }, 306 314 #ifdef CONFIG_PROVE_LOCKING 307 315 { 308 316 .ctl_name = CTL_UNNUMBERED,
+1 -16
kernel/time/tick-broadcast.c
··· 382 382 383 383 int tick_resume_broadcast_oneshot(struct clock_event_device *bc) 384 384 { 385 - int cpu = smp_processor_id(); 386 - 387 - /* 388 - * If the CPU is marked for broadcast, enforce oneshot 389 - * broadcast mode. The jinxed VAIO does not resume otherwise. 390 - * No idea why it ends up in a lower C State during resume 391 - * without notifying the clock events layer. 392 - */ 393 - if (cpu_isset(cpu, tick_broadcast_mask)) 394 - cpu_set(cpu, tick_broadcast_oneshot_mask); 395 - 396 385 clockevents_set_mode(bc, CLOCK_EVT_MODE_ONESHOT); 397 - 398 - if(!cpus_empty(tick_broadcast_oneshot_mask)) 399 - tick_broadcast_set_event(ktime_get(), 1); 400 - 401 - return cpu_isset(cpu, tick_broadcast_oneshot_mask); 386 + return 0; 402 387 } 403 388 404 389 /*
+1 -1
lib/Kconfig.debug
··· 284 284 select KALLSYMS_ALL 285 285 286 286 config LOCK_STAT 287 - bool "Lock usage statisitics" 287 + bool "Lock usage statistics" 288 288 depends on DEBUG_KERNEL && TRACE_IRQFLAGS_SUPPORT && STACKTRACE_SUPPORT && LOCKDEP_SUPPORT 289 289 select LOCKDEP 290 290 select DEBUG_SPINLOCK
+1 -1
mm/hugetlb.c
··· 42 42 might_sleep(); 43 43 for (i = 0; i < (HPAGE_SIZE/PAGE_SIZE); i++) { 44 44 cond_resched(); 45 - clear_user_highpage(page + i, addr); 45 + clear_user_highpage(page + i, addr + i * PAGE_SIZE); 46 46 } 47 47 } 48 48
+6
net/ieee80211/ieee80211_rx.c
··· 366 366 frag = WLAN_GET_SEQ_FRAG(sc); 367 367 hdrlen = ieee80211_get_hdrlen(fc); 368 368 369 + if (skb->len < hdrlen) { 370 + printk(KERN_INFO "%s: invalid SKB length %d\n", 371 + dev->name, skb->len); 372 + goto rx_dropped; 373 + } 374 + 369 375 /* Put this code here so that we avoid duplicating it in all 370 376 * Rx paths. - Jean II */ 371 377 #ifdef CONFIG_WIRELESS_EXT
-2
net/ieee80211/softmac/ieee80211softmac_assoc.c
··· 273 273 ieee80211softmac_notify(mac->dev, IEEE80211SOFTMAC_EVENT_SCAN_FINISHED, ieee80211softmac_assoc_notify_scan, NULL); 274 274 if (ieee80211softmac_start_scan(mac)) { 275 275 dprintk(KERN_INFO PFX "Associate: failed to initiate scan. Is device up?\n"); 276 - mac->associnfo.associating = 0; 277 - mac->associnfo.associated = 0; 278 276 } 279 277 goto out; 280 278 } else {
+20 -34
net/ieee80211/softmac/ieee80211softmac_wx.c
··· 70 70 char *extra) 71 71 { 72 72 struct ieee80211softmac_device *sm = ieee80211_priv(net_dev); 73 - struct ieee80211softmac_network *n; 74 73 struct ieee80211softmac_auth_queue_item *authptr; 75 74 int length = 0; 76 75 77 76 check_assoc_again: 78 77 mutex_lock(&sm->associnfo.mutex); 79 - /* Check if we're already associating to this or another network 80 - * If it's another network, cancel and start over with our new network 81 - * If it's our network, ignore the change, we're already doing it! 82 - */ 83 78 if((sm->associnfo.associating || sm->associnfo.associated) && 84 79 (data->essid.flags && data->essid.length)) { 85 - /* Get the associating network */ 86 - n = ieee80211softmac_get_network_by_bssid(sm, sm->associnfo.bssid); 87 - if(n && n->essid.len == data->essid.length && 88 - !memcmp(n->essid.data, extra, n->essid.len)) { 89 - dprintk(KERN_INFO PFX "Already associating or associated to "MAC_FMT"\n", 90 - MAC_ARG(sm->associnfo.bssid)); 91 - goto out; 92 - } else { 93 - dprintk(KERN_INFO PFX "Canceling existing associate request!\n"); 94 - /* Cancel assoc work */ 95 - cancel_delayed_work(&sm->associnfo.work); 96 - /* We don't have to do this, but it's a little cleaner */ 97 - list_for_each_entry(authptr, &sm->auth_queue, list) 98 - cancel_delayed_work(&authptr->work); 99 - sm->associnfo.bssvalid = 0; 100 - sm->associnfo.bssfixed = 0; 101 - sm->associnfo.associating = 0; 102 - sm->associnfo.associated = 0; 103 - /* We must unlock to avoid deadlocks with the assoc workqueue 104 - * on the associnfo.mutex */ 105 - mutex_unlock(&sm->associnfo.mutex); 106 - flush_scheduled_work(); 107 - /* Avoid race! Check assoc status again. Maybe someone started an 108 - * association while we flushed. */ 109 - goto check_assoc_again; 110 - } 80 + dprintk(KERN_INFO PFX "Canceling existing associate request!\n"); 81 + /* Cancel assoc work */ 82 + cancel_delayed_work(&sm->associnfo.work); 83 + /* We don't have to do this, but it's a little cleaner */ 84 + list_for_each_entry(authptr, &sm->auth_queue, list) 85 + cancel_delayed_work(&authptr->work); 86 + sm->associnfo.bssvalid = 0; 87 + sm->associnfo.bssfixed = 0; 88 + sm->associnfo.associating = 0; 89 + sm->associnfo.associated = 0; 90 + /* We must unlock to avoid deadlocks with the assoc workqueue 91 + * on the associnfo.mutex */ 92 + mutex_unlock(&sm->associnfo.mutex); 93 + flush_scheduled_work(); 94 + /* Avoid race! Check assoc status again. Maybe someone started an 95 + * association while we flushed. */ 96 + goto check_assoc_again; 111 97 } 112 98 113 99 sm->associnfo.static_essid = 0; ··· 139 153 data->essid.length = sm->associnfo.req_essid.len; 140 154 data->essid.flags = 1; /* active */ 141 155 memcpy(extra, sm->associnfo.req_essid.data, sm->associnfo.req_essid.len); 142 - } 143 - 156 + dprintk(KERN_INFO PFX "Getting essid from req_essid\n"); 157 + } else if (sm->associnfo.associated || sm->associnfo.associating) { 144 158 /* If we're associating/associated, return that */ 145 - if (sm->associnfo.associated || sm->associnfo.associating) { 146 159 data->essid.length = sm->associnfo.associate_essid.len; 147 160 data->essid.flags = 1; /* active */ 148 161 memcpy(extra, sm->associnfo.associate_essid.data, sm->associnfo.associate_essid.len); 162 + dprintk(KERN_INFO PFX "Getting essid from associate_essid\n"); 149 163 } 150 164 mutex_unlock(&sm->associnfo.mutex); 151 165
+9 -10
net/ipv4/tcp_ipv4.c
··· 833 833 return NULL; 834 834 for (i = 0; i < tp->md5sig_info->entries4; i++) { 835 835 if (tp->md5sig_info->keys4[i].addr == addr) 836 - return (struct tcp_md5sig_key *) 837 - &tp->md5sig_info->keys4[i]; 836 + return &tp->md5sig_info->keys4[i].base; 838 837 } 839 838 return NULL; 840 839 } ··· 864 865 key = (struct tcp4_md5sig_key *)tcp_v4_md5_do_lookup(sk, addr); 865 866 if (key) { 866 867 /* Pre-existing entry - just update that one. */ 867 - kfree(key->key); 868 - key->key = newkey; 869 - key->keylen = newkeylen; 868 + kfree(key->base.key); 869 + key->base.key = newkey; 870 + key->base.keylen = newkeylen; 870 871 } else { 871 872 struct tcp_md5sig_info *md5sig; 872 873 ··· 905 906 md5sig->alloced4++; 906 907 } 907 908 md5sig->entries4++; 908 - md5sig->keys4[md5sig->entries4 - 1].addr = addr; 909 - md5sig->keys4[md5sig->entries4 - 1].key = newkey; 910 - md5sig->keys4[md5sig->entries4 - 1].keylen = newkeylen; 909 + md5sig->keys4[md5sig->entries4 - 1].addr = addr; 910 + md5sig->keys4[md5sig->entries4 - 1].base.key = newkey; 911 + md5sig->keys4[md5sig->entries4 - 1].base.keylen = newkeylen; 911 912 } 912 913 return 0; 913 914 } ··· 929 930 for (i = 0; i < tp->md5sig_info->entries4; i++) { 930 931 if (tp->md5sig_info->keys4[i].addr == addr) { 931 932 /* Free the key */ 932 - kfree(tp->md5sig_info->keys4[i].key); 933 + kfree(tp->md5sig_info->keys4[i].base.key); 933 934 tp->md5sig_info->entries4--; 934 935 935 936 if (tp->md5sig_info->entries4 == 0) { ··· 963 964 if (tp->md5sig_info->entries4) { 964 965 int i; 965 966 for (i = 0; i < tp->md5sig_info->entries4; i++) 966 - kfree(tp->md5sig_info->keys4[i].key); 967 + kfree(tp->md5sig_info->keys4[i].base.key); 967 968 tp->md5sig_info->entries4 = 0; 968 969 tcp_free_md5sig_pool(); 969 970 }
+9 -9
net/ipv6/tcp_ipv6.c
··· 539 539 540 540 for (i = 0; i < tp->md5sig_info->entries6; i++) { 541 541 if (ipv6_addr_cmp(&tp->md5sig_info->keys6[i].addr, addr) == 0) 542 - return (struct tcp_md5sig_key *)&tp->md5sig_info->keys6[i]; 542 + return &tp->md5sig_info->keys6[i].base; 543 543 } 544 544 return NULL; 545 545 } ··· 567 567 key = (struct tcp6_md5sig_key*) tcp_v6_md5_do_lookup(sk, peer); 568 568 if (key) { 569 569 /* modify existing entry - just update that one */ 570 - kfree(key->key); 571 - key->key = newkey; 572 - key->keylen = newkeylen; 570 + kfree(key->base.key); 571 + key->base.key = newkey; 572 + key->base.keylen = newkeylen; 573 573 } else { 574 574 /* reallocate new list if current one is full. */ 575 575 if (!tp->md5sig_info) { ··· 603 603 604 604 ipv6_addr_copy(&tp->md5sig_info->keys6[tp->md5sig_info->entries6].addr, 605 605 peer); 606 - tp->md5sig_info->keys6[tp->md5sig_info->entries6].key = newkey; 607 - tp->md5sig_info->keys6[tp->md5sig_info->entries6].keylen = newkeylen; 606 + tp->md5sig_info->keys6[tp->md5sig_info->entries6].base.key = newkey; 607 + tp->md5sig_info->keys6[tp->md5sig_info->entries6].base.keylen = newkeylen; 608 608 609 609 tp->md5sig_info->entries6++; 610 610 } ··· 626 626 for (i = 0; i < tp->md5sig_info->entries6; i++) { 627 627 if (ipv6_addr_cmp(&tp->md5sig_info->keys6[i].addr, peer) == 0) { 628 628 /* Free the key */ 629 - kfree(tp->md5sig_info->keys6[i].key); 629 + kfree(tp->md5sig_info->keys6[i].base.key); 630 630 tp->md5sig_info->entries6--; 631 631 632 632 if (tp->md5sig_info->entries6 == 0) { ··· 657 657 658 658 if (tp->md5sig_info->entries6) { 659 659 for (i = 0; i < tp->md5sig_info->entries6; i++) 660 - kfree(tp->md5sig_info->keys6[i].key); 660 + kfree(tp->md5sig_info->keys6[i].base.key); 661 661 tp->md5sig_info->entries6 = 0; 662 662 tcp_free_md5sig_pool(); 663 663 } ··· 668 668 669 669 if (tp->md5sig_info->entries4) { 670 670 for (i = 0; i < tp->md5sig_info->entries4; i++) 671 - kfree(tp->md5sig_info->keys4[i].key); 671 + kfree(tp->md5sig_info->keys4[i].base.key); 672 672 tp->md5sig_info->entries4 = 0; 673 673 tcp_free_md5sig_pool(); 674 674 }
+1 -1
net/mac80211/ieee80211.c
··· 5259 5259 } 5260 5260 5261 5261 5262 - module_init(ieee80211_init); 5262 + subsys_initcall(ieee80211_init); 5263 5263 module_exit(ieee80211_exit); 5264 5264 5265 5265 MODULE_DESCRIPTION("IEEE 802.11 subsystem");
+1 -1
net/mac80211/rc80211_simple.c
··· 431 431 } 432 432 433 433 434 - module_init(rate_control_simple_init); 434 + subsys_initcall(rate_control_simple_init); 435 435 module_exit(rate_control_simple_exit); 436 436 437 437 MODULE_DESCRIPTION("Simple rate control algorithm for ieee80211");
+1 -1
net/mac80211/wme.c
··· 424 424 skb_queue_head_init(&q->requeued[i]); 425 425 q->queues[i] = qdisc_create_dflt(qd->dev, &pfifo_qdisc_ops, 426 426 qd->handle); 427 - if (q->queues[i] == 0) { 427 + if (!q->queues[i]) { 428 428 q->queues[i] = &noop_qdisc; 429 429 printk(KERN_ERR "%s child qdisc %i creation failed", dev->name, i); 430 430 }
+34 -19
net/sched/sch_sfq.c
··· 19 19 #include <linux/init.h> 20 20 #include <linux/ipv6.h> 21 21 #include <linux/skbuff.h> 22 + #include <linux/jhash.h> 22 23 #include <net/ip.h> 23 24 #include <net/netlink.h> 24 25 #include <net/pkt_sched.h> ··· 96 95 97 96 /* Variables */ 98 97 struct timer_list perturb_timer; 99 - int perturbation; 98 + u32 perturbation; 100 99 sfq_index tail; /* Index of current slot in round */ 101 100 sfq_index max_depth; /* Maximal depth */ 102 101 ··· 110 109 111 110 static __inline__ unsigned sfq_fold_hash(struct sfq_sched_data *q, u32 h, u32 h1) 112 111 { 113 - int pert = q->perturbation; 114 - 115 - /* Have we any rotation primitives? If not, WHY? */ 116 - h ^= (h1<<pert) ^ (h1>>(0x1F - pert)); 117 - h ^= h>>10; 118 - return h & 0x3FF; 112 + return jhash_2words(h, h1, q->perturbation) & (SFQ_HASH_DIVISOR - 1); 119 113 } 120 114 121 115 static unsigned sfq_hash(struct sfq_sched_data *q, struct sk_buff *skb) ··· 252 256 q->ht[hash] = x = q->dep[SFQ_DEPTH].next; 253 257 q->hash[x] = hash; 254 258 } 259 + /* If selected queue has length q->limit, this means that 260 + * all another queues are empty and that we do simple tail drop, 261 + * i.e. drop _this_ packet. 262 + */ 263 + if (q->qs[x].qlen >= q->limit) 264 + return qdisc_drop(skb, sch); 265 + 255 266 sch->qstats.backlog += skb->len; 256 267 __skb_queue_tail(&q->qs[x], skb); 257 268 sfq_inc(q, x); ··· 273 270 q->tail = x; 274 271 } 275 272 } 276 - if (++sch->q.qlen < q->limit-1) { 273 + if (++sch->q.qlen <= q->limit) { 277 274 sch->bstats.bytes += skb->len; 278 275 sch->bstats.packets++; 279 276 return 0; ··· 297 294 } 298 295 sch->qstats.backlog += skb->len; 299 296 __skb_queue_head(&q->qs[x], skb); 297 + /* If selected queue has length q->limit+1, this means that 298 + * all another queues are empty and we do simple tail drop. 299 + * This packet is still requeued at head of queue, tail packet 300 + * is dropped. 301 + */ 302 + if (q->qs[x].qlen > q->limit) { 303 + skb = q->qs[x].prev; 304 + __skb_unlink(skb, &q->qs[x]); 305 + sch->qstats.drops++; 306 + sch->qstats.backlog -= skb->len; 307 + kfree_skb(skb); 308 + return NET_XMIT_CN; 309 + } 300 310 sfq_inc(q, x); 301 311 if (q->qs[x].qlen == 1) { /* The flow is new */ 302 312 if (q->tail == SFQ_DEPTH) { /* It is the first flow */ ··· 322 306 q->tail = x; 323 307 } 324 308 } 325 - if (++sch->q.qlen < q->limit - 1) { 309 + if (++sch->q.qlen <= q->limit) { 326 310 sch->qstats.requeues++; 327 311 return 0; 328 312 } ··· 386 370 struct Qdisc *sch = (struct Qdisc*)arg; 387 371 struct sfq_sched_data *q = qdisc_priv(sch); 388 372 389 - q->perturbation = net_random()&0x1F; 373 + get_random_bytes(&q->perturbation, 4); 390 374 391 - if (q->perturb_period) { 392 - q->perturb_timer.expires = jiffies + q->perturb_period; 393 - add_timer(&q->perturb_timer); 394 - } 375 + if (q->perturb_period) 376 + mod_timer(&q->perturb_timer, jiffies + q->perturb_period); 395 377 } 396 378 397 379 static int sfq_change(struct Qdisc *sch, struct rtattr *opt) ··· 405 391 q->quantum = ctl->quantum ? : psched_mtu(sch->dev); 406 392 q->perturb_period = ctl->perturb_period*HZ; 407 393 if (ctl->limit) 408 - q->limit = min_t(u32, ctl->limit, SFQ_DEPTH); 394 + q->limit = min_t(u32, ctl->limit, SFQ_DEPTH - 1); 409 395 410 396 qlen = sch->q.qlen; 411 - while (sch->q.qlen >= q->limit-1) 397 + while (sch->q.qlen > q->limit) 412 398 sfq_drop(sch); 413 399 qdisc_tree_decrease_qlen(sch, qlen - sch->q.qlen); 414 400 415 401 del_timer(&q->perturb_timer); 416 402 if (q->perturb_period) { 417 - q->perturb_timer.expires = jiffies + q->perturb_period; 418 - add_timer(&q->perturb_timer); 403 + mod_timer(&q->perturb_timer, jiffies + q->perturb_period); 404 + get_random_bytes(&q->perturbation, 4); 419 405 } 420 406 sch_tree_unlock(sch); 421 407 return 0; ··· 437 423 q->dep[i+SFQ_DEPTH].next = i+SFQ_DEPTH; 438 424 q->dep[i+SFQ_DEPTH].prev = i+SFQ_DEPTH; 439 425 } 440 - q->limit = SFQ_DEPTH; 426 + q->limit = SFQ_DEPTH - 1; 441 427 q->max_depth = 0; 442 428 q->tail = SFQ_DEPTH; 443 429 if (opt == NULL) { 444 430 q->quantum = psched_mtu(sch->dev); 445 431 q->perturb_period = 0; 432 + get_random_bytes(&q->perturbation, 4); 446 433 } else { 447 434 int err = sfq_change(sch, opt); 448 435 if (err)
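The sch_sfq.c changes above make the limit handling exact: q->limit is clamped to SFQ_DEPTH - 1 and a packet is tail-dropped as soon as the selected slot already holds limit packets. A tiny sketch of that bounded-slot enqueue decision (toy counters only, not the qdisc API):

```c
#define TOY_DEPTH	4		/* cf. SFQ_DEPTH */
#define TOY_LIMIT	(TOY_DEPTH - 1)	/* limit may be at most SFQ_DEPTH - 1 */

struct toy_slot {
	int qlen;
};

/* Returns 1 if the packet is accepted, 0 if tail-dropped, mirroring
 * the new "qlen >= q->limit => qdisc_drop()" check in sfq_enqueue(). */
static int toy_enqueue(struct toy_slot *slot)
{
	if (slot->qlen >= TOY_LIMIT)
		return 0;
	slot->qlen++;
	return 1;
}
```

Keeping the limit strictly below SFQ_DEPTH is what guarantees a free slot index always exists, so the old off-by-one `< q->limit-1` checks could become the simpler `<= q->limit` comparisons.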
+1 -1
net/sctp/bind_addr.c
··· 181 181 * structure. 182 182 */ 183 183 int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr, 184 - void (*rcu_call)(struct rcu_head *head, 184 + void fastcall (*rcu_call)(struct rcu_head *head, 185 185 void (*func)(struct rcu_head *head))) 186 186 { 187 187 struct sctp_sockaddr_entry *addr, *temp;
+8
net/sctp/input.c
··· 622 622 if (SCTP_CID_SHUTDOWN_COMPLETE == ch->type) 623 623 goto discard; 624 624 625 + /* RFC 4460, 2.11.2 626 + * This will discard packets with INIT chunk bundled as 627 + * subsequent chunks in the packet. When INIT is first, 628 + * the normal INIT processing will discard the chunk. 629 + */ 630 + if (SCTP_CID_INIT == ch->type && (void *)ch != skb->data) 631 + goto discard; 632 + 625 633 /* RFC 8.4, 7) If the packet contains a "Stale cookie" ERROR 626 634 * or a COOKIE ACK the SCTP Packet should be silently 627 635 * discarded.
+8
net/sctp/inqueue.c
 			/* Force chunk->skb->data to chunk->chunk_end. */
 			skb_pull(chunk->skb,
 				 chunk->chunk_end - chunk->skb->data);
+
+			/* Verify that we have at least chunk headers
+			 * worth of buffer left.
+			 */
+			if (skb_headlen(chunk->skb) < sizeof(sctp_chunkhdr_t)) {
+				sctp_chunk_free(chunk);
+				chunk = queue->in_progress = NULL;
+			}
 		}
 	}
 
+46
net/sctp/sm_make_chunk.c
···
 	return SCTP_ERROR_NO_ERROR;
 }
 
+/* Verify the ASCONF packet before we process it. */
+int sctp_verify_asconf(const struct sctp_association *asoc,
+		       struct sctp_paramhdr *param_hdr, void *chunk_end,
+		       struct sctp_paramhdr **errp) {
+	sctp_addip_param_t *asconf_param;
+	union sctp_params param;
+	int length, plen;
+
+	param.v = (sctp_paramhdr_t *) param_hdr;
+	while (param.v <= chunk_end - sizeof(sctp_paramhdr_t)) {
+		length = ntohs(param.p->length);
+		*errp = param.p;
+
+		if (param.v > chunk_end - length ||
+		    length < sizeof(sctp_paramhdr_t))
+			return 0;
+
+		switch (param.p->type) {
+		case SCTP_PARAM_ADD_IP:
+		case SCTP_PARAM_DEL_IP:
+		case SCTP_PARAM_SET_PRIMARY:
+			asconf_param = (sctp_addip_param_t *)param.v;
+			plen = ntohs(asconf_param->param_hdr.length);
+			if (plen < sizeof(sctp_addip_param_t) +
+			    sizeof(sctp_paramhdr_t))
+				return 0;
+			break;
+		case SCTP_PARAM_SUCCESS_REPORT:
+		case SCTP_PARAM_ADAPTATION_LAYER_IND:
+			if (length != sizeof(sctp_addip_param_t))
+				return 0;
+
+			break;
+		default:
+			break;
+		}
+
+		param.v += WORD_ROUND(length);
+	}
+
+	if (param.v != chunk_end)
+		return 0;
+
+	return 1;
+}
+
 /* Process an incoming ASCONF chunk with the next expected serial no. and
  * return an ASCONF_ACK chunk to be sent in response.
  */
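The new `sctp_verify_asconf()` above is a bounds-checked TLV walk: every parameter's declared length must cover at least a header, must not run past the chunk end, and the walk must land exactly on the chunk boundary. A hedged userspace sketch of those three checks on a generic type-length-value buffer; `tlv_hdr`, `tlv_walk_ok`, and `TLV_ROUND` are illustrative names, with the length field kept in host order for simplicity:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct tlv_hdr {
	uint16_t type;
	uint16_t length;	/* host order here; counts the header too */
};

/* Parameters are padded to a 4-byte boundary, like WORD_ROUND(). */
#define TLV_ROUND(len) (((len) + 3u) & ~3u)

static int tlv_walk_ok(const uint8_t *buf, size_t total)
{
	size_t off = 0;

	while (off + sizeof(struct tlv_hdr) <= total) {
		struct tlv_hdr h;

		memcpy(&h, buf + off, sizeof(h));
		/* Reject truncated or overflowing parameters before
		 * trusting their contents, as the kernel check does. */
		if (h.length < sizeof(struct tlv_hdr) ||
		    h.length > total - off)
			return 0;
		off += TLV_ROUND(h.length);
	}
	return off == total;	/* must land exactly on the chunk end */
}
```

The final `off == total` test mirrors the kernel's `if (param.v != chunk_end) return 0;`, which catches trailing garbage that is too short to be a parameter.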
+204 -39
net/sctp/sm_statefuns.c
···
 				     const sctp_subtype_t type,
 				     void *arg,
 				     sctp_cmd_seq_t *commands);
+static sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands);
 static struct sctp_sackhdr *sctp_sm_pull_sack(struct sctp_chunk *chunk);
 
 static sctp_disposition_t sctp_stop_t1_and_abort(sctp_cmd_seq_t *commands,
···
 					   struct sctp_transport *transport);
 
 static sctp_disposition_t sctp_sf_abort_violation(
+				     const struct sctp_endpoint *ep,
 				     const struct sctp_association *asoc,
 				     void *arg,
 				     sctp_cmd_seq_t *commands,
···
 				     void *arg,
 				     sctp_cmd_seq_t *commands);
 
+static sctp_disposition_t sctp_sf_violation_paramlen(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands);
+
 static sctp_disposition_t sctp_sf_violation_ctsn(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands);
+
+static sctp_disposition_t sctp_sf_violation_chunk(
 				     const struct sctp_endpoint *ep,
 				     const struct sctp_association *asoc,
 				     const sctp_subtype_t type,
···
 	struct sctp_chunk *chunk = arg;
 	struct sctp_ulpevent *ev;
 
+	if (!sctp_vtag_verify_either(chunk, asoc))
+		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+
 	/* RFC 2960 6.10 Bundling
 	 *
 	 * An endpoint MUST NOT bundle INIT, INIT ACK or
 	 * SHUTDOWN COMPLETE with any other chunks.
 	 */
 	if (!chunk->singleton)
-		return SCTP_DISPOSITION_VIOLATION;
+		return sctp_sf_violation_chunk(ep, asoc, type, arg, commands);
 
-	if (!sctp_vtag_verify_either(chunk, asoc))
-		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+	/* Make sure that the SHUTDOWN_COMPLETE chunk has a valid length. */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
 
 	/* RFC 2960 10.2 SCTP-to-ULP
 	 *
···
 	if (!sctp_vtag_verify(chunk, asoc))
 		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 
-	/* Make sure that the INIT-ACK chunk has a valid length */
-	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_initack_chunk_t)))
-		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
-						  commands);
 	/* 6.10 Bundling
 	 * An endpoint MUST NOT bundle INIT, INIT ACK or
 	 * SHUTDOWN COMPLETE with any other chunks.
 	 */
 	if (!chunk->singleton)
-		return SCTP_DISPOSITION_VIOLATION;
+		return sctp_sf_violation_chunk(ep, asoc, type, arg, commands);
 
+	/* Make sure that the INIT-ACK chunk has a valid length */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_initack_chunk_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
 	/* Grab the INIT header. */
 	chunk->subh.init_hdr = (sctp_inithdr_t *) chunk->skb->data;
 
···
 	 * control endpoint, respond with an ABORT.
 	 */
 	if (ep == sctp_sk((sctp_get_ctl_sock()))->ep)
-		return sctp_sf_ootb(ep, asoc, type, arg, commands);
+		return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
 
 	/* Make sure that the COOKIE_ECHO chunk has a valid length.
 	 * In this case, we check that we have enough for at least a
···
 	struct sctp_chunk *chunk = (struct sctp_chunk *) arg;
 	struct sctp_chunk *reply;
 
+	/* Make sure that the chunk has a valid length */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	/* Since we are not going to really process this INIT, there
 	 * is no point in verifying chunk boundries.  Just generate
 	 * the SHUTDOWN ACK.
···
 *
 * The return value is the disposition of the chunk.
 */
-sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep,
+static sctp_disposition_t sctp_sf_tabort_8_4_8(const struct sctp_endpoint *ep,
 					const struct sctp_association *asoc,
 					const sctp_subtype_t type,
 					void *arg,
···
 
 		SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
 
+		sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 		return SCTP_DISPOSITION_CONSUME;
 	}
 
···
 
 	ch = (sctp_chunkhdr_t *) chunk->chunk_hdr;
 	do {
-		/* Break out if chunk length is less then minimal. */
+		/* Report violation if the chunk is less than minimal */
 		if (ntohs(ch->length) < sizeof(sctp_chunkhdr_t))
-			break;
+			return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+							  commands);
 
-		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
-		if (ch_end > skb_tail_pointer(skb))
-			break;
-
+		/* Now that we know we at least have a chunk header,
+		 * do things that are type appropriate.
+		 */
 		if (SCTP_CID_SHUTDOWN_ACK == ch->type)
 			ootb_shut_ack = 1;
 
···
 		if (SCTP_CID_ABORT == ch->type)
 			return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 
+		/* Report violation if chunk len overflows */
+		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
+		if (ch_end > skb_tail_pointer(skb))
+			return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+							  commands);
+
 		ch = (sctp_chunkhdr_t *) ch_end;
 	} while (ch_end < skb_tail_pointer(skb));
 
 	if (ootb_shut_ack)
-		sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands);
+		return sctp_sf_shut_8_4_5(ep, asoc, type, arg, commands);
 	else
-		sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
-
-	return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
+		return sctp_sf_tabort_8_4_8(ep, asoc, type, arg, commands);
 }
 
 /*
···
 		if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
 			return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 
-		return SCTP_DISPOSITION_CONSUME;
+		/* We need to discard the rest of the packet to prevent
+		 * potential bombing attacks from additional bundled chunks.
+		 * This is documented in SCTP Threats ID.
+		 */
+		return sctp_sf_pdiscard(ep, asoc, type, arg, commands);
 	}
 
 	return SCTP_DISPOSITION_NOMEM;
···
 				      void *arg,
 				      sctp_cmd_seq_t *commands)
 {
+	struct sctp_chunk *chunk = arg;
+
+	/* Make sure that the SHUTDOWN_ACK chunk has a valid length. */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	/* Although we do have an association in this case, it corresponds
 	 * to a restarted association.  So the packet is treated as an OOTB
 	 * packet and the state function that handles OOTB SHUTDOWN_ACK is
···
 {
 	struct sctp_chunk	*chunk = arg;
 	struct sctp_chunk	*asconf_ack = NULL;
+	struct sctp_paramhdr	*err_param = NULL;
 	sctp_addiphdr_t		*hdr;
+	union sctp_addr_param	*addr_param;
 	__u32			serial;
+	int			length;
 
 	if (!sctp_vtag_verify(chunk, asoc)) {
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_BAD_TAG,
···
 
 	hdr = (sctp_addiphdr_t *)chunk->skb->data;
 	serial = ntohl(hdr->serial);
+
+	addr_param = (union sctp_addr_param *)hdr->params;
+	length = ntohs(addr_param->p.length);
+	if (length < sizeof(sctp_paramhdr_t))
+		return sctp_sf_violation_paramlen(ep, asoc, type,
+			   (void *)addr_param, commands);
+
+	/* Verify the ASCONF chunk before processing it. */
+	if (!sctp_verify_asconf(asoc,
+			    (sctp_paramhdr_t *)((void *)addr_param + length),
+			    (void *)chunk->chunk_end,
+			    &err_param))
+		return sctp_sf_violation_paramlen(ep, asoc, type,
+			   (void *)&err_param, commands);
 
 	/* ADDIP 4.2 C1) Compare the value of the serial number to the value
 	 * the endpoint stored in a new association variable
···
 	struct sctp_chunk	*asconf_ack = arg;
 	struct sctp_chunk	*last_asconf = asoc->addip_last_asconf;
 	struct sctp_chunk	*abort;
+	struct sctp_paramhdr	*err_param = NULL;
 	sctp_addiphdr_t		*addip_hdr;
 	__u32			sent_serial, rcvd_serial;
 
···
 
 	addip_hdr = (sctp_addiphdr_t *)asconf_ack->skb->data;
 	rcvd_serial = ntohl(addip_hdr->serial);
+
+	/* Verify the ASCONF-ACK chunk before processing it. */
+	if (!sctp_verify_asconf(asoc,
+	    (sctp_paramhdr_t *)addip_hdr->params,
+	    (void *)asconf_ack->chunk_end,
+	    &err_param))
+		return sctp_sf_violation_paramlen(ep, asoc, type,
+			   (void *)&err_param, commands);
 
 	if (last_asconf) {
 		addip_hdr = (sctp_addiphdr_t *)last_asconf->subh.addip_hdr;
···
 				     void *arg,
 				     sctp_cmd_seq_t *commands)
 {
+	struct sctp_chunk *chunk = arg;
+
+	/* Make sure that the chunk has a valid length.
+	 * Since we don't know the chunk type, we use a general
+	 * chunkhdr structure to make a comparison.
+	 */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	SCTP_DEBUG_PRINTK("Chunk %d is discarded\n", type.chunk);
 	return SCTP_DISPOSITION_DISCARD;
 }
···
 				     void *arg,
 				     sctp_cmd_seq_t *commands)
 {
+	struct sctp_chunk *chunk = arg;
+
+	/* Make sure that the chunk has a valid length. */
+	if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t)))
+		return sctp_sf_violation_chunklen(ep, asoc, type, arg,
+						  commands);
+
 	return SCTP_DISPOSITION_VIOLATION;
 }
 
···
 * Common function to handle a protocol violation.
 */
 static sctp_disposition_t sctp_sf_abort_violation(
+				     const struct sctp_endpoint *ep,
 				     const struct sctp_association *asoc,
 				     void *arg,
 				     sctp_cmd_seq_t *commands,
 				     const __u8 *payload,
 				     const size_t paylen)
 {
+	struct sctp_packet *packet = NULL;
 	struct sctp_chunk *chunk = arg;
 	struct sctp_chunk *abort = NULL;
 
···
 	if (!abort)
 		goto nomem;
 
-	sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
-	SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
+	if (asoc) {
+		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(abort));
+		SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
 
-	if (asoc->state <= SCTP_STATE_COOKIE_ECHOED) {
-		sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
-				SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT));
-		sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
-				SCTP_ERROR(ECONNREFUSED));
-		sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED,
-				SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
+		if (asoc->state <= SCTP_STATE_COOKIE_ECHOED) {
+			sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP,
+					SCTP_TO(SCTP_EVENT_TIMEOUT_T1_INIT));
+			sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
+					SCTP_ERROR(ECONNREFUSED));
+			sctp_add_cmd_sf(commands, SCTP_CMD_INIT_FAILED,
+					SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
+		} else {
+			sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
+					SCTP_ERROR(ECONNABORTED));
+			sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
+					SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
+			SCTP_DEC_STATS(SCTP_MIB_CURRESTAB);
+		}
 	} else {
-		sctp_add_cmd_sf(commands, SCTP_CMD_SET_SK_ERR,
-				SCTP_ERROR(ECONNABORTED));
-		sctp_add_cmd_sf(commands, SCTP_CMD_ASSOC_FAILED,
-				SCTP_PERR(SCTP_ERROR_PROTO_VIOLATION));
-		SCTP_DEC_STATS(SCTP_MIB_CURRESTAB);
+		packet = sctp_ootb_pkt_new(asoc, chunk);
+
+		if (!packet)
+			goto nomem_pkt;
+
+		if (sctp_test_T_bit(abort))
+			packet->vtag = ntohl(chunk->sctp_hdr->vtag);
+
+		abort->skb->sk = ep->base.sk;
+
+		sctp_packet_append_chunk(packet, abort);
+
+		sctp_add_cmd_sf(commands, SCTP_CMD_SEND_PKT,
+			SCTP_PACKET(packet));
+
+		SCTP_INC_STATS(SCTP_MIB_OUTCTRLCHUNKS);
 	}
 
-	sctp_add_cmd_sf(commands, SCTP_CMD_DISCARD_PACKET, SCTP_NULL());
+	sctp_sf_pdiscard(ep, asoc, SCTP_ST_CHUNK(0), arg, commands);
 
 	SCTP_INC_STATS(SCTP_MIB_ABORTEDS);
 
 	return SCTP_DISPOSITION_ABORT;
 
+nomem_pkt:
+	sctp_chunk_free(abort);
 nomem:
 	return SCTP_DISPOSITION_NOMEM;
 }
···
 {
 	char err_str[]="The following chunk had invalid length:";
 
-	return sctp_sf_abort_violation(asoc, arg, commands, err_str,
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
+					sizeof(err_str));
+}
+
+/*
+ * Handle a protocol violation when the parameter length is invalid.
+ * "Invalid" length is identified as smaller than the minimal length a
+ * given parameter can be.
+ */
+static sctp_disposition_t sctp_sf_violation_paramlen(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands) {
+	char err_str[] = "The following parameter had invalid length:";
+
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
 					sizeof(err_str));
 }
 
···
 {
 	char err_str[]="The cumulative tsn ack beyond the max tsn currently sent:";
 
-	return sctp_sf_abort_violation(asoc, arg, commands, err_str,
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
 					sizeof(err_str));
 }
 
+/* Handle protocol violation of an invalid chunk bundling.  For example,
+ * when we have an association and we receive bundled INIT-ACK, or
+ * SHUTDOWN-COMPLETE, our peer is clearly violating the "MUST NOT bundle"
+ * statement from the specs.  Additionally, there might be an attacker
+ * on the path and we may not want to continue this communication.
+ */
+static sctp_disposition_t sctp_sf_violation_chunk(
+				     const struct sctp_endpoint *ep,
+				     const struct sctp_association *asoc,
+				     const sctp_subtype_t type,
+				     void *arg,
+				     sctp_cmd_seq_t *commands)
+{
+	char err_str[]="The following chunk violates protocol:";
+
+	if (!asoc)
+		return sctp_sf_violation(ep, asoc, type, arg, commands);
+
+	return sctp_sf_abort_violation(ep, asoc, arg, commands, err_str,
+					sizeof(err_str));
+}
 /***************************************************************************
 * These are the state functions for handling primitive (Section 10) events.
 ***************************************************************************/
···
 	 * association exists, otherwise, use the peer's vtag.
 	 */
 	if (asoc) {
-		vtag = asoc->peer.i.init_tag;
+		/* Special case the INIT-ACK as there is no peer's vtag
+		 * yet.
+		 */
+		switch(chunk->chunk_hdr->type) {
+		case SCTP_CID_INIT_ACK:
+		{
+			sctp_initack_chunk_t *initack;
+
+			initack = (sctp_initack_chunk_t *)chunk->chunk_hdr;
+			vtag = ntohl(initack->init_hdr.init_tag);
+			break;
+		}
+		default:
+			vtag = asoc->peer.i.init_tag;
+			break;
+		}
 	} else {
 		/* Special case the INIT and stale COOKIE_ECHO as there is no
 		 * vtag yet.
+8 -8
net/sctp/sm_statetable.c
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_violation), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */ \
 	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_CLOSED */ \
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8), \
+	TYPE_SCTP_FUNC(sctp_sf_ootb), \
 	/* SCTP_STATE_COOKIE_WAIT */ \
 	TYPE_SCTP_FUNC(sctp_sf_discard_chunk), \
 	/* SCTP_STATE_COOKIE_ECHOED */ \
···
 	/* SCTP_STATE_EMPTY */
 	TYPE_SCTP_FUNC(sctp_sf_ootb),
 	/* SCTP_STATE_CLOSED */
-	TYPE_SCTP_FUNC(sctp_sf_tabort_8_4_8),
+	TYPE_SCTP_FUNC(sctp_sf_ootb),
 	/* SCTP_STATE_COOKIE_WAIT */
 	TYPE_SCTP_FUNC(sctp_sf_unk_chunk),
 	/* SCTP_STATE_COOKIE_ECHOED */
-3
net/socket.c
···
 	if (pos != 0)
 		return -ESPIPE;
 
-	if (iocb->ki_left == 0)	/* Match SYS5 behaviour */
-		return 0;
-
 	x = alloc_sock_iocb(iocb, &siocb);
 	if (!x)
 		return -ENOMEM;
+2 -1
net/sunrpc/svcsock.c
···
 			       serv->sv_name);
 			printk(KERN_NOTICE
 			       "%s: last TCP connect from %s\n",
-			       serv->sv_name, buf);
+			       serv->sv_name, __svc_print_addr(sin,
+							buf, sizeof(buf)));
 		}
 		/*
 		 * Always select the oldest socket. It's not fair,
+1 -1
net/wireless/core.c
···
 out_fail_sysfs:
 	return err;
 }
-module_init(cfg80211_init);
+subsys_initcall(cfg80211_init);
 
 static void cfg80211_exit(void)
 {
+2
net/wireless/sysfs.c
···
 	cfg80211_dev_free(rdev);
 }
 
+#ifdef CONFIG_HOTPLUG
 static int wiphy_uevent(struct device *dev, char **envp,
 			int num_envp, char *buf, int size)
 {
 	/* TODO, we probably need stuff here */
 	return 0;
 }
+#endif
 
 struct class ieee80211_class = {
 	.name = "ieee80211",
+2
security/selinux/hooks.c
···
 }
 
 enum {
+	Opt_error = -1,
 	Opt_context = 1,
 	Opt_fscontext = 2,
 	Opt_defcontext = 4,
···
 	{Opt_fscontext, "fscontext=%s"},
 	{Opt_defcontext, "defcontext=%s"},
 	{Opt_rootcontext, "rootcontext=%s"},
+	{Opt_error, NULL},
 };
 
 #define SEL_MOUNT_FAIL_MSG "SELinux: duplicate or incompatible mount options\n"
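The selinux/hooks.c hunk above adds an explicit `Opt_error` sentinel to the mount-option match table. The token parser walks entries until it reaches the terminating NULL pattern, so without the sentinel the scan can run past the end of the array. A minimal userspace sketch of such a sentinel-terminated lookup; `match_entry` and `lookup_token` are illustrative names, not the kernel's `match_token` API, and the patterns here are plain prefixes rather than `%s` templates:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

enum { Opt_error = -1, Opt_context = 1, Opt_fscontext = 2 };

struct match_entry {
	int token;
	const char *pattern;
};

static const struct match_entry tokens[] = {
	{ Opt_context,   "context=" },
	{ Opt_fscontext, "fscontext=" },
	{ Opt_error,     NULL },	/* sentinel: terminates the scan */
};

static int lookup_token(const char *opt)
{
	const struct match_entry *e;

	/* The NULL pattern is the loop's stop condition; a table without
	 * it would read past the end of the array. */
	for (e = tokens; e->pattern; e++)
		if (strncmp(opt, e->pattern, strlen(e->pattern)) == 0)
			return e->token;
	return Opt_error;	/* unknown mount option */
}
```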
+39 -29
sound/core/memalloc.c
···
 #include <linux/pci.h>
 #include <linux/slab.h>
 #include <linux/mm.h>
+#include <linux/seq_file.h>
 #include <asm/uaccess.h>
 #include <linux/dma-mapping.h>
 #include <linux/moduleparam.h>
···
 #define SND_MEM_PROC_FILE	"driver/snd-page-alloc"
 static struct proc_dir_entry *snd_mem_proc;
 
-static int snd_mem_proc_read(char *page, char **start, off_t off,
-			     int count, int *eof, void *data)
+static int snd_mem_proc_read(struct seq_file *seq, void *offset)
 {
-	int len = 0;
 	long pages = snd_allocated_pages >> (PAGE_SHIFT-12);
 	struct snd_mem_list *mem;
 	int devno;
 	static char *types[] = { "UNKNOWN", "CONT", "DEV", "DEV-SG", "SBUS" };
 
 	mutex_lock(&list_mutex);
-	len += snprintf(page + len, count - len,
-			"pages  : %li bytes (%li pages per %likB)\n",
-			pages * PAGE_SIZE, pages, PAGE_SIZE / 1024);
+	seq_printf(seq, "pages  : %li bytes (%li pages per %likB)\n",
+		   pages * PAGE_SIZE, pages, PAGE_SIZE / 1024);
 	devno = 0;
 	list_for_each_entry(mem, &mem_list_head, list) {
 		devno++;
-		len += snprintf(page + len, count - len,
-				"buffer %d : ID %08x : type %s\n",
-				devno, mem->id, types[mem->buffer.dev.type]);
-		len += snprintf(page + len, count - len,
-				"  addr = 0x%lx, size = %d bytes\n",
-				(unsigned long)mem->buffer.addr, (int)mem->buffer.bytes);
+		seq_printf(seq, "buffer %d : ID %08x : type %s\n",
+			   devno, mem->id, types[mem->buffer.dev.type]);
+		seq_printf(seq, "  addr = 0x%lx, size = %d bytes\n",
+			   (unsigned long)mem->buffer.addr,
+			   (int)mem->buffer.bytes);
 	}
 	mutex_unlock(&list_mutex);
-	return len;
+	return 0;
+}
+
+static int snd_mem_proc_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, snd_mem_proc_read, NULL);
 }
 
 /* FIXME: for pci only - other bus? */
 #ifdef CONFIG_PCI
 #define gettoken(bufp) strsep(bufp, " \t\n")
 
-static int snd_mem_proc_write(struct file *file, const char __user *buffer,
-			      unsigned long count, void *data)
+static ssize_t snd_mem_proc_write(struct file *file, const char __user * buffer,
+				  size_t count, loff_t * ppos)
 {
 	char buf[128];
 	char *token, *p;
 
-	if (count > ARRAY_SIZE(buf) - 1)
-		count = ARRAY_SIZE(buf) - 1;
+	if (count > sizeof(buf) - 1)
+		return -EINVAL;
 	if (copy_from_user(buf, buffer, count))
 		return -EFAULT;
-	buf[ARRAY_SIZE(buf) - 1] = '\0';
+	buf[count] = '\0';
 
 	p = buf;
 	token = gettoken(&p);
 	if (! token || *token == '#')
-		return (int)count;
+		return count;
 	if (strcmp(token, "add") == 0) {
 		char *endp;
 		int vendor, device, size, buffers;
···
 		    (buffers = simple_strtol(token, NULL, 0)) <= 0 ||
 		    buffers > 4) {
 			printk(KERN_ERR "snd-page-alloc: invalid proc write format\n");
-			return (int)count;
+			return count;
 		}
 		vendor &= 0xffff;
 		device &= 0xffff;
···
 			if (pci_set_dma_mask(pci, mask) < 0 ||
 			    pci_set_consistent_dma_mask(pci, mask) < 0) {
 				printk(KERN_ERR "snd-page-alloc: cannot set DMA mask %lx for pci %04x:%04x\n", mask, vendor, device);
-				return (int)count;
+				return count;
 			}
 		}
 		for (i = 0; i < buffers; i++) {
···
 					     size, &dmab) < 0) {
 				printk(KERN_ERR "snd-page-alloc: cannot allocate buffer pages (size = %d)\n", size);
 				pci_dev_put(pci);
-				return (int)count;
+				return count;
 			}
 			snd_dma_reserve_buf(&dmab, snd_dma_pci_buf_id(pci));
 		}
···
 		free_all_reserved_pages();
 	else
 		printk(KERN_ERR "snd-page-alloc: invalid proc cmd\n");
-	return (int)count;
+	return count;
 }
 #endif /* CONFIG_PCI */
+
+static const struct file_operations snd_mem_proc_fops = {
+	.owner		= THIS_MODULE,
+	.open		= snd_mem_proc_open,
+	.read		= seq_read,
+#ifdef CONFIG_PCI
+	.write		= snd_mem_proc_write,
+#endif
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
 #endif /* CONFIG_PROC_FS */
 
 /*
···
 {
 #ifdef CONFIG_PROC_FS
 	snd_mem_proc = create_proc_entry(SND_MEM_PROC_FILE, 0644, NULL);
-	if (snd_mem_proc) {
-		snd_mem_proc->read_proc = snd_mem_proc_read;
-#ifdef CONFIG_PCI
-		snd_mem_proc->write_proc = snd_mem_proc_write;
-#endif
-	}
+	if (snd_mem_proc)
+		snd_mem_proc->proc_fops = &snd_mem_proc_fops;
 #endif
 	return 0;
 }