Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'board-specific' of git://github.com/hzhuang1/linux into next/boards

* 'board-specific' of git://github.com/hzhuang1/linux:
  pxa/hx4700: Remove pcmcia platform_device structure
  ARM: pxa/hx4700: Reduce sleep mode battery discharge by 35%
  ARM: pxa/hx4700: Remove unwanted request for GPIO105

(update to 3.3-rc7)

Signed-off-by: Arnd Bergmann <arnd@arndb.de>

+1229 -744
Documentation/devicetree/bindings/gpio/led.txt | +3 -3
 node's name represents the name of the corresponding LED.
 
 LED sub-node properties:
-- gpios : Should specify the LED's GPIO, see "Specifying GPIO information
-  for devices" in Documentation/devicetree/booting-without-of.txt. Active
-  low LEDs should be indicated using flags in the GPIO specifier.
+- gpios : Should specify the LED's GPIO, see "gpios property" in
+  Documentation/devicetree/gpio.txt. Active low LEDs should be
+  indicated using flags in the GPIO specifier.
 - label : (optional) The label for this LED. If omitted, the label is
   taken from the node name (excluding the unit address).
 - linux,default-trigger : (optional) This parameter, if present, is a
Documentation/devicetree/bindings/vendor-prefixes.txt | +1
 nintendo	Nintendo
 nvidia	NVIDIA
 nxp	NXP Semiconductors
+picochip	Picochip Ltd
 powervr	Imagination Technologies
 qcom	Qualcomm, Inc.
 ramtron	Ramtron International
Documentation/hwmon/jc42 | +20 -6
   Addresses scanned: I2C 0x18 - 0x1f
   Datasheets:
 	http://www.analog.com/static/imported-files/data_sheets/ADT7408.pdf
-* IDT TSE2002B3, TS3000B3
-  Prefix: 'tse2002b3', 'ts3000b3'
+* Atmel AT30TS00
+  Prefix: 'at30ts00'
   Addresses scanned: I2C 0x18 - 0x1f
   Datasheets:
-	http://www.idt.com/products/getdoc.cfm?docid=18715691
-	http://www.idt.com/products/getdoc.cfm?docid=18715692
+	http://www.atmel.com/Images/doc8585.pdf
+* IDT TSE2002B3, TSE2002GB2, TS3000B3, TS3000GB2
+  Prefix: 'tse2002', 'ts3000'
+  Addresses scanned: I2C 0x18 - 0x1f
+  Datasheets:
+	http://www.idt.com/sites/default/files/documents/IDT_TSE2002B3C_DST_20100512_120303152056.pdf
+	http://www.idt.com/sites/default/files/documents/IDT_TSE2002GB2A1_DST_20111107_120303145914.pdf
+	http://www.idt.com/sites/default/files/documents/IDT_TS3000B3A_DST_20101129_120303152013.pdf
+	http://www.idt.com/sites/default/files/documents/IDT_TS3000GB2A1_DST_20111104_120303151012.pdf
 * Maxim MAX6604
   Prefix: 'max6604'
   Addresses scanned: I2C 0x18 - 0x1f
   Datasheets:
 	http://datasheets.maxim-ic.com/en/ds/MAX6604.pdf
-* Microchip MCP9805, MCP98242, MCP98243, MCP9843
-  Prefixes: 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843'
+* Microchip MCP9804, MCP9805, MCP98242, MCP98243, MCP9843
+  Prefixes: 'mcp9804', 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843'
   Addresses scanned: I2C 0x18 - 0x1f
   Datasheets:
+	http://ww1.microchip.com/downloads/en/DeviceDoc/22203C.pdf
 	http://ww1.microchip.com/downloads/en/DeviceDoc/21977b.pdf
 	http://ww1.microchip.com/downloads/en/DeviceDoc/21996a.pdf
 	http://ww1.microchip.com/downloads/en/DeviceDoc/22153c.pdf
···
   Datasheets:
 	http://www.st.com/stonline/products/literature/ds/13447/stts424.pdf
 	http://www.st.com/stonline/products/literature/ds/13448/stts424e02.pdf
+* ST Microelectronics STTS2002, STTS3000
+  Prefix: 'stts2002', 'stts3000'
+  Addresses scanned: I2C 0x18 - 0x1f
+  Datasheets:
+	http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00225278.pdf
+	http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATA_BRIEF/CD00270920.pdf
 * JEDEC JC 42.4 compliant temperature sensor chips
   Prefix: 'jc42'
   Addresses scanned: I2C 0x18 - 0x1f
Documentation/input/alps.txt | +2 -1
 
 All ALPS touchpads should respond to the "E6 report" command sequence:
 E8-E6-E6-E6-E9. An ALPS touchpad should respond with either 00-00-0A or
-00-00-64.
+00-00-64 if no buttons are pressed. The bits 0-2 of the first byte will be 1s
+if some buttons are pressed.
 
 If the E6 report is successful, the touchpad model is identified using the "E7
 report" sequence: E8-E7-E7-E7-E9. The response is the model signature and is
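The E6-report check described in that hunk can be sketched as a small standalone C helper. This is an illustration of the documented byte protocol only, not the in-kernel detection code; the function name is hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * A valid E6 response is 00-00-0A or 00-00-64 when no buttons are
 * pressed; bits 0-2 of the first byte may additionally be set while a
 * button is held down, so mask them off before comparing.
 */
static bool alps_e6_response_ok(const uint8_t r[3])
{
	uint8_t first = r[0] & ~0x07;	/* ignore button bits 0-2 */

	if (first != 0x00 || r[1] != 0x00)
		return false;
	return r[2] == 0x0a || r[2] == 0x64;
}
```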
Documentation/kernel-parameters.txt | +6
 
 			default: off.
 
+	printk.always_kmsg_dump=
+			Trigger kmsg_dump for cases other than kernel oops or
+			panics
+			Format: <bool>  (1/Y/y=enable, 0/N/n=disable)
+			default: disabled
+
 	printk.time=	Show timing data prefixed to each printk message line
 			Format: <bool>  (1/Y/y=enable, 0/N/n=disable)
 
MAINTAINERS | +3 -3
 F:	drivers/platform/msm/
 F:	drivers/*/pm8???-*
 F:	include/linux/mfd/pm8xxx/
-T:	git git://codeaurora.org/quic/kernel/davidb/linux-msm.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/davidb/linux-msm.git
 S:	Maintained
 
 ARM/TOSA MACHINE SUPPORT
···
 F:	include/linux/atm*
 
 ATMEL AT91 MCI DRIVER
-M:	Nicolas Ferre <nicolas.ferre@atmel.com>
+M:	Ludovic Desroches <ludovic.desroches@atmel.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 W:	http://www.atmel.com/products/AT91/
 W:	http://www.at91.com/
···
 F:	drivers/mmc/host/at91_mci.c
 
 ATMEL AT91 / AT32 MCI DRIVER
-M:	Nicolas Ferre <nicolas.ferre@atmel.com>
+M:	Ludovic Desroches <ludovic.desroches@atmel.com>
 S:	Maintained
 F:	drivers/mmc/host/atmel-mci.c
 F:	drivers/mmc/host/atmel-mci-regs.h
Makefile | +1 -1
 VERSION = 3
 PATCHLEVEL = 3
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Saber-toothed Squirrel
 
 # *DOCUMENTATION*
arch/alpha/include/asm/futex.h | +1 -1
 	"	lda	$31,3b-2b(%0)\n"
 	"	.previous\n"
 	:	"+r"(ret), "=&r"(prev), "=&r"(cmp)
-	:	"r"(uaddr), "r"((long)oldval), "r"(newval)
+	:	"r"(uaddr), "r"((long)(int)oldval), "r"(newval)
 	:	"memory");
 
 	*uval = prev;
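The cast change above matters because casting an unsigned 32-bit value straight to `long` zero-extends it, while Alpha's 32-bit loads sign-extend into 64-bit registers, so the comparison operand must be sign-extended too. A standalone sketch of the two conversions (assuming a 64-bit `long`, as on Alpha; helper names are hypothetical):

```c
#include <stdint.h>

/* (long)v on a u32 zero-extends: the high 32 bits become zero. */
static long futex_zero_extend(uint32_t v)
{
	return (long)v;
}

/*
 * (long)(int)v first reinterprets the value as a signed 32-bit int,
 * then sign-extends it, matching what a sign-extending 32-bit load
 * (alpha "ldl") leaves in a register.
 */
static long futex_sign_extend(uint32_t v)
{
	return (long)(int)v;
}
```

For any value with bit 31 set, the two results differ, which is exactly why the original comparison could spuriously fail.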
arch/arm/Kconfig | +1 -1
 	depends on CPU_V7
 	help
 	  This option enables the workaround for the 743622 Cortex-A9
-	  (r2p0..r2p2) erratum. Under very rare conditions, a faulty
+	  (r2p*) erratum. Under very rare conditions, a faulty
 	  optimisation in the Cortex-A9 Store Buffer may lead to data
 	  corruption. This workaround sets a specific bit in the diagnostic
 	  register of the Cortex-A9 which disables the Store Buffer
arch/arm/boot/.gitignore | +1
 xipImage
 bootpImage
 uImage
+*.dtb
arch/arm/include/asm/pmu.h | +1 -1
 
 u64 armpmu_event_update(struct perf_event *event,
 			struct hw_perf_event *hwc,
-			int idx, int overflow);
+			int idx);
 
 int armpmu_event_set_period(struct perf_event *event,
 			    struct hw_perf_event *hwc,
arch/arm/kernel/ecard.c | +1
 
 	memcpy(dst_pgd, src_pgd, sizeof(pgd_t) * (EASI_SIZE / PGDIR_SIZE));
 
+	vma.vm_flags = VM_EXEC;
 	vma.vm_mm = mm;
 
 	flush_tlb_range(&vma, IO_START, IO_START + IO_SIZE);
arch/arm/kernel/perf_event.c | +34 -11
 u64
 armpmu_event_update(struct perf_event *event,
 		    struct hw_perf_event *hwc,
-		    int idx, int overflow)
+		    int idx)
 {
 	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
 	u64 delta, prev_raw_count, new_raw_count;
···
 			     new_raw_count) != prev_raw_count)
 		goto again;
 
-	new_raw_count &= armpmu->max_period;
-	prev_raw_count &= armpmu->max_period;
-
-	if (overflow)
-		delta = armpmu->max_period - prev_raw_count + new_raw_count + 1;
-	else
-		delta = new_raw_count - prev_raw_count;
+	delta = (new_raw_count - prev_raw_count) & armpmu->max_period;
 
 	local64_add(delta, &event->count);
 	local64_sub(delta, &hwc->period_left);
···
 	if (hwc->idx < 0)
 		return;
 
-	armpmu_event_update(event, hwc, hwc->idx, 0);
+	armpmu_event_update(event, hwc, hwc->idx);
 }
 
 static void
···
 	if (!(hwc->state & PERF_HES_STOPPED)) {
 		armpmu->disable(hwc, hwc->idx);
 		barrier(); /* why? */
-		armpmu_event_update(event, hwc, hwc->idx, 0);
+		armpmu_event_update(event, hwc, hwc->idx);
 		hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE;
 	}
 }
···
 	hwc->config_base	|= (unsigned long)mapping;
 
 	if (!hwc->sample_period) {
-		hwc->sample_period  = armpmu->max_period;
+		/*
+		 * For non-sampling runs, limit the sample_period to half
+		 * of the counter width. That way, the new counter value
+		 * is far less likely to overtake the previous one unless
+		 * you have some serious IRQ latency issues.
+		 */
+		hwc->sample_period  = armpmu->max_period >> 1;
 		hwc->last_period    = hwc->sample_period;
 		local64_set(&hwc->period_left, hwc->sample_period);
 	}
···
 }
 
 /*
+ * PMU hardware loses all context when a CPU goes offline.
+ * When a CPU is hotplugged back in, since some hardware registers are
+ * UNKNOWN at reset, the PMU must be explicitly reset to avoid reading
+ * junk values out of them.
+ */
+static int __cpuinit pmu_cpu_notify(struct notifier_block *b,
+				    unsigned long action, void *hcpu)
+{
+	if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING)
+		return NOTIFY_DONE;
+
+	if (cpu_pmu && cpu_pmu->reset)
+		cpu_pmu->reset(NULL);
+
+	return NOTIFY_OK;
+}
+
+static struct notifier_block __cpuinitdata pmu_cpu_notifier = {
+	.notifier_call = pmu_cpu_notify,
+};
+
+/*
  * CPU PMU identification and registration.
  */
 static int __init
···
 		pr_info("enabled with %s PMU driver, %d counters available\n",
 			cpu_pmu->name, cpu_pmu->num_events);
 		cpu_pmu_init(cpu_pmu);
+		register_cpu_notifier(&pmu_cpu_notifier);
 		armpmu_register(cpu_pmu, "cpu", PERF_TYPE_RAW);
 	} else {
 		pr_info("no hardware support available\n");
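The simplified delta computation in that hunk relies on modular arithmetic: with a counter that wraps at `max_period + 1` (a power of two minus one mask), one unsigned subtraction followed by the mask gives the elapsed ticks whether or not the counter wrapped between the two reads, provided it wrapped at most once. A standalone illustration (not the kernel function itself):

```c
#include <stdint.h>

/*
 * Elapsed ticks between two reads of a wrapping counter.
 * max_period is an all-ones mask, e.g. 0xffffffff for a 32-bit counter.
 * The wrap-around is absorbed by the unsigned subtraction + mask, so no
 * separate "overflow" flag is needed.
 */
static uint64_t counter_delta(uint64_t prev, uint64_t cur,
			      uint64_t max_period)
{
	return (cur - prev) & max_period;
}
```

This is also why the same change halves the default `sample_period`: keeping reads within half the counter width guarantees the "wrapped at most once" assumption holds in practice.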
arch/arm/kernel/perf_event_v6.c | +3 -19
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
-static int counter_is_active(unsigned long pmcr, int idx)
-{
-	unsigned long mask = 0;
-	if (idx == ARMV6_CYCLE_COUNTER)
-		mask = ARMV6_PMCR_CCOUNT_IEN;
-	else if (idx == ARMV6_COUNTER0)
-		mask = ARMV6_PMCR_COUNT0_IEN;
-	else if (idx == ARMV6_COUNTER1)
-		mask = ARMV6_PMCR_COUNT1_IEN;
-
-	if (mask)
-		return pmcr & mask;
-
-	WARN_ONCE(1, "invalid counter number (%d)\n", idx);
-	return 0;
-}
-
 static irqreturn_t
 armv6pmu_handle_irq(int irq_num,
 		    void *dev)
···
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
 
-		if (!counter_is_active(pmcr, idx))
+		/* Ignore if we don't have an event. */
+		if (!event)
 			continue;
 
 		/*
···
 			continue;
 
 		hwc = &event->hw;
-		armpmu_event_update(event, hwc, idx, 1);
+		armpmu_event_update(event, hwc, idx);
 		data.period = event->hw.last_period;
 		if (!armpmu_event_set_period(event, hwc, idx))
 			continue;
arch/arm/kernel/perf_event_v7.c | +10 -1
 
 	counter = ARMV7_IDX_TO_COUNTER(idx);
 	asm volatile("mcr p15, 0, %0, c9, c14, 2" : : "r" (BIT(counter)));
+	isb();
+	/* Clear the overflow flag in case an interrupt is pending. */
+	asm volatile("mcr p15, 0, %0, c9, c12, 3" : : "r" (BIT(counter)));
+	isb();
+
 	return idx;
 }
 
···
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
 
+		/* Ignore if we don't have an event. */
+		if (!event)
+			continue;
+
 		/*
 		 * We have a single interrupt for all counters. Check that
 		 * each counter has overflowed before we process it.
···
 			continue;
 
 		hwc = &event->hw;
-		armpmu_event_update(event, hwc, idx, 1);
+		armpmu_event_update(event, hwc, idx);
 		data.period = event->hw.last_period;
 		if (!armpmu_event_set_period(event, hwc, idx))
 			continue;
arch/arm/kernel/perf_event_xscale.c | +16 -4
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
 
+		if (!event)
+			continue;
+
 		if (!xscale1_pmnc_counter_has_overflowed(pmnc, idx))
 			continue;
 
 		hwc = &event->hw;
-		armpmu_event_update(event, hwc, idx, 1);
+		armpmu_event_update(event, hwc, idx);
 		data.period = event->hw.last_period;
 		if (!armpmu_event_set_period(event, hwc, idx))
 			continue;
···
 		struct perf_event *event = cpuc->events[idx];
 		struct hw_perf_event *hwc;
 
-		if (!xscale2_pmnc_counter_has_overflowed(pmnc, idx))
+		if (!event)
+			continue;
+
+		if (!xscale2_pmnc_counter_has_overflowed(of_flags, idx))
 			continue;
 
 		hwc = &event->hw;
-		armpmu_event_update(event, hwc, idx, 1);
+		armpmu_event_update(event, hwc, idx);
 		data.period = event->hw.last_period;
 		if (!armpmu_event_set_period(event, hwc, idx))
 			continue;
···
 static void
 xscale2pmu_disable_event(struct hw_perf_event *hwc, int idx)
 {
-	unsigned long flags, ien, evtsel;
+	unsigned long flags, ien, evtsel, of_flags;
 	struct pmu_hw_events *events = cpu_pmu->get_hw_events();
 
 	ien = xscale2pmu_read_int_enable();
···
 	switch (idx) {
 	case XSCALE_CYCLE_COUNTER:
 		ien &= ~XSCALE2_CCOUNT_INT_EN;
+		of_flags = XSCALE2_CCOUNT_OVERFLOW;
 		break;
 	case XSCALE_COUNTER0:
 		ien &= ~XSCALE2_COUNT0_INT_EN;
 		evtsel &= ~XSCALE2_COUNT0_EVT_MASK;
 		evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT0_EVT_SHFT;
+		of_flags = XSCALE2_COUNT0_OVERFLOW;
 		break;
 	case XSCALE_COUNTER1:
 		ien &= ~XSCALE2_COUNT1_INT_EN;
 		evtsel &= ~XSCALE2_COUNT1_EVT_MASK;
 		evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT1_EVT_SHFT;
+		of_flags = XSCALE2_COUNT1_OVERFLOW;
 		break;
 	case XSCALE_COUNTER2:
 		ien &= ~XSCALE2_COUNT2_INT_EN;
 		evtsel &= ~XSCALE2_COUNT2_EVT_MASK;
 		evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT2_EVT_SHFT;
+		of_flags = XSCALE2_COUNT2_OVERFLOW;
 		break;
 	case XSCALE_COUNTER3:
 		ien &= ~XSCALE2_COUNT3_INT_EN;
 		evtsel &= ~XSCALE2_COUNT3_EVT_MASK;
 		evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT3_EVT_SHFT;
+		of_flags = XSCALE2_COUNT3_OVERFLOW;
 		break;
 	default:
 		WARN_ONCE(1, "invalid counter number (%d)\n", idx);
···
 	raw_spin_lock_irqsave(&events->pmu_lock, flags);
 	xscale2pmu_write_event_select(evtsel);
 	xscale2pmu_write_int_enable(ien);
+	xscale2pmu_write_overflow_flags(of_flags);
 	raw_spin_unlock_irqrestore(&events->pmu_lock, flags);
 }
 
arch/arm/mach-at91/at91sam9g45_devices.c | +10 -9
 #if defined(CONFIG_AT_HDMAC) || defined(CONFIG_AT_HDMAC_MODULE)
 static u64 hdmac_dmamask = DMA_BIT_MASK(32);
 
-static struct at_dma_platform_data atdma_pdata = {
-	.nr_channels	= 8,
-};
-
 static struct resource hdmac_resources[] = {
 	[0] = {
 		.start	= AT91SAM9G45_BASE_DMA,
···
 };
 
 static struct platform_device at_hdmac_device = {
-	.name		= "at_hdmac",
+	.name		= "at91sam9g45_dma",
 	.id		= -1,
 	.dev		= {
 				.dma_mask		= &hdmac_dmamask,
 				.coherent_dma_mask	= DMA_BIT_MASK(32),
-				.platform_data		= &atdma_pdata,
 	},
 	.resource	= hdmac_resources,
 	.num_resources	= ARRAY_SIZE(hdmac_resources),
···
 
 void __init at91_add_device_hdmac(void)
 {
-	dma_cap_set(DMA_MEMCPY, atdma_pdata.cap_mask);
-	dma_cap_set(DMA_SLAVE, atdma_pdata.cap_mask);
-	platform_device_register(&at_hdmac_device);
+#if defined(CONFIG_OF)
+	struct device_node *of_node =
+		of_find_node_by_name(NULL, "dma-controller");
+
+	if (of_node)
+		of_node_put(of_node);
+	else
+#endif
+		platform_device_register(&at_hdmac_device);
 }
 #else
 void __init at91_add_device_hdmac(void) {}
arch/arm/mach-at91/at91sam9rl_devices.c | +1 -7
 #if defined(CONFIG_AT_HDMAC) || defined(CONFIG_AT_HDMAC_MODULE)
 static u64 hdmac_dmamask = DMA_BIT_MASK(32);
 
-static struct at_dma_platform_data atdma_pdata = {
-	.nr_channels	= 2,
-};
-
 static struct resource hdmac_resources[] = {
 	[0] = {
 		.start	= AT91SAM9RL_BASE_DMA,
···
 };
 
 static struct platform_device at_hdmac_device = {
-	.name		= "at_hdmac",
+	.name		= "at91sam9rl_dma",
 	.id		= -1,
 	.dev		= {
 				.dma_mask		= &hdmac_dmamask,
 				.coherent_dma_mask	= DMA_BIT_MASK(32),
-				.platform_data		= &atdma_pdata,
 	},
 	.resource	= hdmac_resources,
 	.num_resources	= ARRAY_SIZE(hdmac_resources),
···
 
 void __init at91_add_device_hdmac(void)
 {
-	dma_cap_set(DMA_MEMCPY, atdma_pdata.cap_mask);
 	platform_device_register(&at_hdmac_device);
 }
 #else
arch/arm/mach-ep93xx/vision_ep9307.c | +2
 #include <mach/ep93xx_spi.h>
 #include <mach/gpio-ep93xx.h>
 
+#include <asm/hardware/vic.h>
 #include <asm/mach-types.h>
 #include <asm/mach/map.h>
 #include <asm/mach/arch.h>
···
 	.atag_offset	= 0x100,
 	.map_io		= vision_map_io,
 	.init_irq	= ep93xx_init_irq,
+	.handle_irq	= vic_handle_irq,
 	.timer		= &ep93xx_timer,
 	.init_machine	= vision_init_machine,
 	.restart	= ep93xx_restart,
arch/arm/mach-exynos/mach-universal_c210.c | +2
 #include <linux/i2c.h>
 #include <linux/gpio_keys.h>
 #include <linux/gpio.h>
+#include <linux/interrupt.h>
 #include <linux/fb.h>
 #include <linux/mfd/max8998.h>
 #include <linux/regulator/machine.h>
···
 	.threshold	= 0x28,
 	.voltage	= 2800000,		/* 2.8V */
 	.orient		= MXT_DIAGONAL,
+	.irqflags	= IRQF_TRIGGER_FALLING,
 };
 
 static struct i2c_board_info i2c3_devs[] __initdata = {
arch/arm/mach-omap2/id.c | +1
 	case 0xb944:
 		omap_revision = AM335X_REV_ES1_0;
 		*cpu_rev = "1.0";
+		break;
 	case 0xb8f2:
 		switch (rev) {
 		case 0:
arch/arm/mach-omap2/mailbox.c | +1 -2
 	platform_driver_unregister(&omap2_mbox_driver);
 }
 
-/* must be ready before omap3isp is probed */
-subsys_initcall(omap2_mbox_init);
+module_init(omap2_mbox_init);
 module_exit(omap2_mbox_exit);
 
 MODULE_LICENSE("GPL v2");
arch/arm/mach-omap2/omap-iommu.c | +2 -1
 		platform_device_put(omap_iommu_pdev[i]);
 	return err;
 }
-module_init(omap_iommu_init);
+/* must be ready before omap3isp is probed */
+subsys_initcall(omap_iommu_init);
 
 static void __exit omap_iommu_exit(void)
 {
arch/arm/mach-omap2/omap4-common.c | +2
 
 #include "common.h"
 #include "omap4-sar-layout.h"
+#include <linux/export.h>
 
 #ifdef CONFIG_CACHE_L2X0
 static void __iomem *l2cache_base;
···
 		isb();
 	}
 }
+EXPORT_SYMBOL(omap_bus_sync);
 
 /* Steal one page physical memory for barrier implementation */
 int __init omap_barrier_reserve_memblock(void)
arch/arm/mach-omap2/twl-common.c | -1
 	.constraints = {
 		.min_uV			= 3300000,
 		.max_uV			= 3300000,
-		.apply_uV		= true,
 		.valid_modes_mask	= REGULATOR_MODE_NORMAL
 					| REGULATOR_MODE_STANDBY,
 		.valid_ops_mask		= REGULATOR_CHANGE_MODE
arch/arm/mach-pxa/generic.h | -1
 #endif
 
 extern struct syscore_ops pxa_irq_syscore_ops;
-extern struct syscore_ops pxa_gpio_syscore_ops;
 extern struct syscore_ops pxa2xx_mfp_syscore_ops;
 extern struct syscore_ops pxa3xx_mfp_syscore_ops;
 
arch/arm/mach-pxa/hx4700.c | +2 -15
 
 	/* BTUART */
 	GPIO42_BTUART_RXD,
-	GPIO43_BTUART_TXD,
+	GPIO43_BTUART_TXD_LPM_LOW,
 	GPIO44_BTUART_CTS,
-	GPIO45_BTUART_RTS,
+	GPIO45_BTUART_RTS_LPM_LOW,
 
 	/* PWM 1 (Backlight) */
 	GPIO17_PWM1_OUT,
···
 
 
 /*
- * PCMCIA
- */
-
-static struct platform_device pcmcia = {
-	.name		= "hx4700-pcmcia",
-	.dev		= {
-		.parent = &asic3.dev,
-	},
-};
-
-/*
  * Platform devices
  */
 
···
 	&power_supply,
 	&strataflash,
 	&audio,
-	&pcmcia,
 };
 
 static struct gpio global_gpios[] = {
···
 	{ GPIO32_HX4700_RS232_ON,      GPIOF_OUT_INIT_HIGH, "RS232_ON" },
 	{ GPIO71_HX4700_ASIC3_nRESET,  GPIOF_OUT_INIT_HIGH, "ASIC3_nRESET" },
 	{ GPIO82_HX4700_EUART_RESET,   GPIOF_OUT_INIT_HIGH, "EUART_RESET" },
-	{ GPIO105_HX4700_nIR_ON,       GPIOF_OUT_INIT_HIGH, "nIR_EN" },
 };
 
 static void __init hx4700_init(void)
arch/arm/mach-pxa/include/mach/mfp-pxa27x.h | +2
 #define GPIO44_BTUART_CTS	MFP_CFG_IN(GPIO44, AF1)
 #define GPIO42_BTUART_RXD	MFP_CFG_IN(GPIO42, AF1)
 #define GPIO45_BTUART_RTS	MFP_CFG_OUT(GPIO45, AF2, DRIVE_HIGH)
+#define GPIO45_BTUART_RTS_LPM_LOW	MFP_CFG_OUT(GPIO45, AF2, DRIVE_LOW)
 #define GPIO43_BTUART_TXD	MFP_CFG_OUT(GPIO43, AF2, DRIVE_HIGH)
+#define GPIO43_BTUART_TXD_LPM_LOW	MFP_CFG_OUT(GPIO43, AF2, DRIVE_LOW)
 
 /* STUART */
 #define GPIO46_STUART_RXD	MFP_CFG_IN(GPIO46, AF2)
arch/arm/mach-pxa/mfp-pxa2xx.c | +7
 {
 	int i;
 
+	/* running before pxa_gpio_probe() */
+#ifdef CONFIG_CPU_PXA26x
+	pxa_last_gpio = 89;
+#else
+	pxa_last_gpio = 84;
+#endif
 	for (i = 0; i <= pxa_last_gpio; i++)
 		gpio_desc[i].valid = 1;
 
···
 {
 	int i, gpio;
 
+	pxa_last_gpio = 120;	/* running before pxa_gpio_probe() */
 	for (i = 0; i <= pxa_last_gpio; i++) {
 		/* skip GPIO2, 5, 6, 7, 8, they are not
 		 * valid pins allow configuration
arch/arm/mach-pxa/pxa25x.c | -1
 
 	register_syscore_ops(&pxa_irq_syscore_ops);
 	register_syscore_ops(&pxa2xx_mfp_syscore_ops);
-	register_syscore_ops(&pxa_gpio_syscore_ops);
 	register_syscore_ops(&pxa2xx_clock_syscore_ops);
 
 	ret = platform_add_devices(pxa25x_devices,
arch/arm/mach-pxa/pxa27x.c | -1
 
 	register_syscore_ops(&pxa_irq_syscore_ops);
 	register_syscore_ops(&pxa2xx_mfp_syscore_ops);
-	register_syscore_ops(&pxa_gpio_syscore_ops);
 	register_syscore_ops(&pxa2xx_clock_syscore_ops);
 
 	ret = platform_add_devices(devices, ARRAY_SIZE(devices));
arch/arm/mach-pxa/pxa3xx.c | -1
 
 	register_syscore_ops(&pxa_irq_syscore_ops);
 	register_syscore_ops(&pxa3xx_mfp_syscore_ops);
-	register_syscore_ops(&pxa_gpio_syscore_ops);
 	register_syscore_ops(&pxa3xx_clock_syscore_ops);
 
 	ret = platform_add_devices(devices, ARRAY_SIZE(devices));
arch/arm/mach-pxa/pxa95x.c | -1
 		return ret;
 
 	register_syscore_ops(&pxa_irq_syscore_ops);
-	register_syscore_ops(&pxa_gpio_syscore_ops);
 	register_syscore_ops(&pxa3xx_clock_syscore_ops);
 
 	ret = platform_add_devices(devices, ARRAY_SIZE(devices));
arch/arm/mach-s3c2440/common.h | +1 -1
 #ifndef __ARCH_ARM_MACH_S3C2440_COMMON_H
 #define __ARCH_ARM_MACH_S3C2440_COMMON_H
 
-void s3c2440_restart(char mode, const char *cmd);
+void s3c244x_restart(char mode, const char *cmd);
 
 #endif /* __ARCH_ARM_MACH_S3C2440_COMMON_H */
arch/arm/mach-s3c2440/mach-anubis.c | +1 -1
 	.init_machine	= anubis_init,
 	.init_irq	= s3c24xx_init_irq,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-at2440evb.c | +1 -1
 	.init_machine	= at2440evb_init,
 	.init_irq	= s3c24xx_init_irq,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-gta02.c | +1 -1
 	.init_irq	= s3c24xx_init_irq,
 	.init_machine	= gta02_machine_init,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-mini2440.c | +1 -1
 	.init_machine	= mini2440_init,
 	.init_irq	= s3c24xx_init_irq,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-nexcoder.c | +1 -1
 	.init_machine	= nexcoder_init,
 	.init_irq	= s3c24xx_init_irq,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-osiris.c | +1 -1
 	.init_irq	= s3c24xx_init_irq,
 	.init_machine	= osiris_init,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-rx1950.c | +1 -1
 	.init_irq	= s3c24xx_init_irq,
 	.init_machine	= rx1950_init_machine,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-rx3715.c | +1 -1
 	.init_irq	= rx3715_init_irq,
 	.init_machine	= rx3715_init_machine,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/mach-smdk2440.c | +1 -1
 	.map_io		= smdk2440_map_io,
 	.init_machine	= smdk2440_machine_init,
 	.timer		= &s3c24xx_timer,
-	.restart	= s3c2440_restart,
+	.restart	= s3c244x_restart,
 MACHINE_END
arch/arm/mach-s3c2440/s3c2440.c | -13
 #include <plat/cpu.h>
 #include <plat/s3c244x.h>
 #include <plat/pm.h>
-#include <plat/watchdog-reset.h>
 
 #include <plat/gpio-core.h>
 #include <plat/gpio-cfg.h>
···
 
 	s3c24xx_gpiocfg_default.set_pull = s3c24xx_gpio_setpull_1up;
 	s3c24xx_gpiocfg_default.get_pull = s3c24xx_gpio_getpull_1up;
-}
-
-void s3c2440_restart(char mode, const char *cmd)
-{
-	if (mode == 's') {
-		soft_restart(0);
-	}
-
-	arch_wdt_reset();
-
-	/* we'll take a jump through zero as a poor second */
-	soft_restart(0);
 }
arch/arm/mach-s3c2440/s3c244x.c | +12
 #include <plat/pm.h>
 #include <plat/pll.h>
 #include <plat/nand-core.h>
+#include <plat/watchdog-reset.h>
 
 static struct map_desc s3c244x_iodesc[] __initdata = {
 	IODESC_ENT(CLKPWR),
···
 	.suspend	= s3c244x_suspend,
 	.resume		= s3c244x_resume,
 };
+
+void s3c244x_restart(char mode, const char *cmd)
+{
+	if (mode == 's')
+		soft_restart(0);
+
+	arch_wdt_reset();
+
+	/* we'll take a jump through zero as a poor second */
+	soft_restart(0);
+}
arch/arm/mach-ux500/Kconfig | +1 -1
 	default y
 	select ARM_GIC
 	select HAS_MTU
-	select ARM_ERRATA_753970
+	select PL310_ERRATA_753970
 	select ARM_ERRATA_754322
 	select ARM_ERRATA_764369
 
arch/arm/mach-vexpress/Kconfig | +1 -1
 	select ARM_GIC
 	select ARM_ERRATA_720789
 	select ARM_ERRATA_751472
-	select ARM_ERRATA_753970
+	select PL310_ERRATA_753970
 	select HAVE_SMP
 	select MIGHT_HAVE_CACHE_L2X0
 
arch/arm/mm/proc-v7.S | +1 -3
 	mcreq	p15, 0, r10, c15, c0, 1		@ write diagnostic register
 #endif
 #ifdef CONFIG_ARM_ERRATA_743622
-	teq	r6, #0x20			@ present in r2p0
-	teqne	r6, #0x21			@ present in r2p1
-	teqne	r6, #0x22			@ present in r2p2
+	teq	r5, #0x00200000			@ only present in r2p*
 	mrceq	p15, 0, r10, c15, c0, 1		@ read diagnostic register
 	orreq	r10, r10, #1 << 6		@ set bit #6
 	mcreq	p15, 0, r10, c15, c0, 1		@ write diagnostic register
arch/arm/plat-omap/include/plat/irqs.h | +9 -1
 #define OMAP_GPMC_NR_IRQS	8
 #define OMAP_GPMC_IRQ_END	(OMAP_GPMC_IRQ_BASE + OMAP_GPMC_NR_IRQS)
 
+/* PRCM IRQ handler */
+#ifdef CONFIG_ARCH_OMAP2PLUS
+#define OMAP_PRCM_IRQ_BASE	(OMAP_GPMC_IRQ_END)
+#define OMAP_PRCM_NR_IRQS	64
+#define OMAP_PRCM_IRQ_END	(OMAP_PRCM_IRQ_BASE + OMAP_PRCM_NR_IRQS)
+#else
+#define OMAP_PRCM_IRQ_END	OMAP_GPMC_IRQ_END
+#endif
 
-#define NR_IRQS			OMAP_GPMC_IRQ_END
+#define NR_IRQS			OMAP_PRCM_IRQ_END
 
 #define OMAP_IRQ_BIT(irq)	(1 << ((irq) % 32))
 
arch/arm/plat-s3c24xx/dma.c | +1 -1
 	struct s3c2410_dma_chan *cp = s3c2410_chans + dma_channels - 1;
 	int channel;
 
-	for (channel = dma_channels - 1; channel >= 0; cp++, channel--)
+	for (channel = dma_channels - 1; channel >= 0; cp--, channel--)
 		s3c2410_dma_resume_chan(cp);
 }
 
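The one-character fix above is the classic reverse-iteration bug: the cursor pointer starts at the last element and must move backwards with the index, yet the old code incremented it, walking off the end of the array. A minimal standalone illustration with a hypothetical `int` array standing in for the channel array:

```c
#include <stddef.h>

#define NCHANS 4

/* Records the values in the order they were visited. */
static int visit_order[NCHANS];

/* Walk chans[] from the last element back to the first (the corrected
 * pattern): both cp and channel are decremented each iteration. */
static void resume_all(const int *chans, size_t nchans)
{
	const int *cp = chans + nchans - 1;
	int channel;

	for (channel = (int)nchans - 1; channel >= 0; cp--, channel--)
		visit_order[(nchans - 1) - channel] = *cp;
}
```

With the original `cp++`, the first iteration would already dereference one past the end of the array.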
arch/arm/plat-samsung/devs.c | +1 -1
 
 #ifdef CONFIG_S3C_DEV_USB_HSOTG
 static struct resource s3c_usb_hsotg_resources[] = {
-	[0] = DEFINE_RES_MEM(S3C_PA_USB_HSOTG, SZ_16K),
+	[0] = DEFINE_RES_MEM(S3C_PA_USB_HSOTG, SZ_128K),
 	[1] = DEFINE_RES_IRQ(IRQ_OTG),
 };
 
arch/arm/plat-spear/time.c | +4 -2
 static int clockevent_next_event(unsigned long cycles,
 				 struct clock_event_device *clk_event_dev)
 {
-	u16 val;
+	u16 val = readw(gpt_base + CR(CLKEVT));
+
+	if (val & CTRL_ENABLE)
+		writew(val & ~CTRL_ENABLE, gpt_base + CR(CLKEVT));
 
 	writew(cycles, gpt_base + LOAD(CLKEVT));
 
-	val = readw(gpt_base + CR(CLKEVT));
 	val |= CTRL_ENABLE | CTRL_INT_ENABLE;
 	writew(val, gpt_base + CR(CLKEVT));
 
arch/c6x/include/asm/processor.h | +2 -2
 
 extern unsigned long get_wchan(struct task_struct *p);
 
-#define KSTK_EIP(tsk)	(task_pt_regs(task)->pc)
-#define KSTK_ESP(tsk)	(task_pt_regs(task)->sp)
+#define KSTK_EIP(task)	(task_pt_regs(task)->pc)
+#define KSTK_ESP(task)	(task_pt_regs(task)->sp)
 
 #define cpu_relax()		do { } while (0)
 
arch/mips/alchemy/common/time.c | +1 -1
 	cd->shift = 32;
 	cd->mult = div_sc(32768, NSEC_PER_SEC, cd->shift);
 	cd->max_delta_ns = clockevent_delta2ns(0xffffffff, cd);
-	cd->min_delta_ns = clockevent_delta2ns(8, cd);	/* ~0.25ms */
+	cd->min_delta_ns = clockevent_delta2ns(9, cd);	/* ~0.28ms */
 	clockevents_register_device(cd);
 	setup_irq(m2int, &au1x_rtcmatch2_irqaction);
 
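The comment values in that hunk follow from the 32.768 kHz clock: one tick is 10^9 / 32768 ns, so 8 ticks is about 244 µs and 9 ticks about 275 µs, which the new comment rounds to "~0.28ms". A quick check of that arithmetic (the kernel itself computes this via `clockevent_delta2ns()` with the mult/shift pair set up above; this helper is a simplified exact-division stand-in):

```c
#include <stdint.h>

/* Convert ticks of a 32768 Hz clock to nanoseconds (truncating). */
static uint64_t ticks_to_ns(uint64_t ticks)
{
	return ticks * 1000000000ULL / 32768;
}
```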
arch/mips/ath79/dev-wmac.c | +1 -1
 {
 	if (soc_is_ar913x())
 		ar913x_wmac_setup();
-	if (soc_is_ar933x())
+	else if (soc_is_ar933x())
 		ar933x_wmac_setup();
 	else
 		BUG();
arch/mips/configs/nlm_xlp_defconfig | +2 -2
 # CONFIG_SECCOMP is not set
 CONFIG_USE_OF=y
 CONFIG_EXPERIMENTAL=y
-CONFIG_CROSS_COMPILE="mips-linux-gnu-"
+CONFIG_CROSS_COMPILE=""
 # CONFIG_LOCALVERSION_AUTO is not set
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
···
 CONFIG_CGROUPS=y
 CONFIG_NAMESPACES=y
 CONFIG_BLK_DEV_INITRD=y
-CONFIG_INITRAMFS_SOURCE="usr/dev_file_list usr/rootfs.xlp"
+CONFIG_INITRAMFS_SOURCE=""
 CONFIG_RD_BZIP2=y
 CONFIG_RD_LZMA=y
 CONFIG_INITRAMFS_COMPRESSION_LZMA=y
arch/mips/configs/nlm_xlr_defconfig | +2 -2
 CONFIG_PREEMPT_VOLUNTARY=y
 CONFIG_KEXEC=y
 CONFIG_EXPERIMENTAL=y
-CONFIG_CROSS_COMPILE="mips-linux-gnu-"
+CONFIG_CROSS_COMPILE=""
 # CONFIG_LOCALVERSION_AUTO is not set
 CONFIG_SYSVIPC=y
 CONFIG_POSIX_MQUEUE=y
···
 CONFIG_NAMESPACES=y
 CONFIG_SCHED_AUTOGROUP=y
 CONFIG_BLK_DEV_INITRD=y
-CONFIG_INITRAMFS_SOURCE="usr/dev_file_list usr/rootfs.xlr"
+CONFIG_INITRAMFS_SOURCE=""
 CONFIG_RD_BZIP2=y
 CONFIG_RD_LZMA=y
 CONFIG_INITRAMFS_COMPRESSION_GZIP=y
+1 -1
arch/mips/configs/powertv_defconfig
··· 6 6 CONFIG_PREEMPT=y 7 7 # CONFIG_SECCOMP is not set 8 8 CONFIG_EXPERIMENTAL=y 9 - CONFIG_CROSS_COMPILE="mips-linux-" 9 + CONFIG_CROSS_COMPILE="" 10 10 # CONFIG_SWAP is not set 11 11 CONFIG_SYSVIPC=y 12 12 CONFIG_LOG_BUF_SHIFT=16
+19 -1
arch/mips/include/asm/mach-au1x00/gpio-au1300.h
··· 11 11 #include <asm/io.h> 12 12 #include <asm/mach-au1x00/au1000.h> 13 13 14 + struct gpio; 15 + struct gpio_chip; 16 + 14 17 /* with the current GPIC design, up to 128 GPIOs are possible. 15 18 * The only implementation so far is in the Au1300, which has 75 externally 16 19 * available GPIOs. ··· 206 203 return 0; 207 204 } 208 205 209 - static inline void gpio_free(unsigned int gpio) 206 + static inline int gpio_request_one(unsigned gpio, 207 + unsigned long flags, const char *label) 208 + { 209 + return 0; 210 + } 211 + 212 + static inline int gpio_request_array(struct gpio *array, size_t num) 213 + { 214 + return 0; 215 + } 216 + 217 + static inline void gpio_free(unsigned gpio) 218 + { 219 + } 220 + 221 + static inline void gpio_free_array(struct gpio *array, size_t num) 210 222 { 211 223 } 212 224
-3
arch/mips/include/asm/page.h
··· 39 39 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 40 40 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) 41 41 #else /* !CONFIG_HUGETLB_PAGE */ 42 - # ifndef BUILD_BUG 43 - # define BUILD_BUG() do { extern void __build_bug(void); __build_bug(); } while (0) 44 - # endif 45 42 #define HPAGE_SHIFT ({BUILD_BUG(); 0; }) 46 43 #define HPAGE_SIZE ({BUILD_BUG(); 0; }) 47 44 #define HPAGE_MASK ({BUILD_BUG(); 0; })
-1
arch/mips/kernel/smp-bmips.c
··· 8 8 * SMP support for BMIPS 9 9 */ 10 10 11 - #include <linux/version.h> 12 11 #include <linux/init.h> 13 12 #include <linux/sched.h> 14 13 #include <linux/mm.h>
+1 -1
arch/mips/kernel/traps.c
··· 1135 1135 printk(KERN_DEBUG "YIELD Scheduler Exception\n"); 1136 1136 break; 1137 1137 case 5: 1138 - printk(KERN_DEBUG "Gating Storage Schedulier Exception\n"); 1138 + printk(KERN_DEBUG "Gating Storage Scheduler Exception\n"); 1139 1139 break; 1140 1140 default: 1141 1141 printk(KERN_DEBUG "*** UNKNOWN THREAD EXCEPTION %d ***\n",
-1
arch/mips/kernel/vmlinux.lds.S
··· 69 69 RODATA 70 70 71 71 /* writeable */ 72 - _sdata = .; /* Start of data section */ 73 72 .data : { /* Data */ 74 73 . = . + DATAOFFSET; /* for CONFIG_MAPPED_KERNEL */ 75 74
+29 -7
arch/mips/mm/fault.c
··· 42 42 const int field = sizeof(unsigned long) * 2; 43 43 siginfo_t info; 44 44 int fault; 45 + unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE | 46 + (write ? FAULT_FLAG_WRITE : 0); 45 47 46 48 #if 0 47 49 printk("Cpu%d[%s:%d:%0*lx:%ld:%0*lx]\n", raw_smp_processor_id(), ··· 93 91 if (in_atomic() || !mm) 94 92 goto bad_area_nosemaphore; 95 93 94 + retry: 96 95 down_read(&mm->mmap_sem); 97 96 vma = find_vma(mm, address); 98 97 if (!vma) ··· 147 144 * make sure we exit gracefully rather than endlessly redo 148 145 * the fault. 149 146 */ 150 - fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0); 147 + fault = handle_mm_fault(mm, vma, address, flags); 148 + 149 + if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) 150 + return; 151 + 151 152 perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); 152 153 if (unlikely(fault & VM_FAULT_ERROR)) { 153 154 if (fault & VM_FAULT_OOM) ··· 160 153 goto do_sigbus; 161 154 BUG(); 162 155 } 163 - if (fault & VM_FAULT_MAJOR) { 164 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); 165 - tsk->maj_flt++; 166 - } else { 167 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); 168 - tsk->min_flt++; 156 + if (flags & FAULT_FLAG_ALLOW_RETRY) { 157 + if (fault & VM_FAULT_MAJOR) { 158 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, 159 + regs, address); 160 + tsk->maj_flt++; 161 + } else { 162 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, 163 + regs, address); 164 + tsk->min_flt++; 165 + } 166 + if (fault & VM_FAULT_RETRY) { 167 + flags &= ~FAULT_FLAG_ALLOW_RETRY; 168 + 169 + /* 170 + * No need to up_read(&mm->mmap_sem) as we would 171 + * have already released it in __lock_page_or_retry 172 + * in mm/filemap.c. 173 + */ 174 + 175 + goto retry; 176 + } 169 177 } 170 178 171 179 up_read(&mm->mmap_sem);
+1 -4
arch/mips/pci/pci.c
··· 279 279 { 280 280 /* Propagate hose info into the subordinate devices. */ 281 281 282 - struct list_head *ln; 283 282 struct pci_dev *dev = bus->self; 284 283 285 284 if (pci_probe_only && dev && ··· 287 288 pcibios_fixup_device_resources(dev, bus); 288 289 } 289 290 290 - for (ln = bus->devices.next; ln != &bus->devices; ln = ln->next) { 291 - dev = pci_dev_b(ln); 292 - 291 + list_for_each_entry(dev, &bus->devices, bus_list) { 293 292 if ((dev->class >> 8) != PCI_CLASS_BRIDGE_PCI) 294 293 pcibios_fixup_device_resources(dev, bus); 295 294 }
-10
arch/mips/pmc-sierra/yosemite/ht-irq.c
··· 35 35 */ 36 36 void __init titan_ht_pcibios_fixup_bus(struct pci_bus *bus) 37 37 { 38 - struct pci_bus *current_bus = bus; 39 - struct pci_dev *devices; 40 - struct list_head *devices_link; 41 - 42 - list_for_each(devices_link, &(current_bus->devices)) { 43 - devices = pci_dev_b(devices_link); 44 - if (devices == NULL) 45 - continue; 46 - } 47 - 48 38 /* 49 39 * PLX and SPKT related changes go here 50 40 */
+1 -1
arch/mips/txx9/generic/7segled.c
··· 102 102 break; 103 103 } 104 104 dev->id = i; 105 - dev->dev = &tx_7segled_subsys; 105 + dev->bus = &tx_7segled_subsys; 106 106 error = device_register(dev); 107 107 if (!error) { 108 108 device_create_file(dev, &dev_attr_ascii);
+7 -7
arch/x86/ia32/ia32_aout.c
··· 315 315 current->mm->free_area_cache = TASK_UNMAPPED_BASE; 316 316 current->mm->cached_hole_size = 0; 317 317 318 + retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT); 319 + if (retval < 0) { 320 + /* Someone check-me: is this error path enough? */ 321 + send_sig(SIGKILL, current, 0); 322 + return retval; 323 + } 324 + 318 325 install_exec_creds(bprm); 319 326 current->flags &= ~PF_FORKNOEXEC; 320 327 ··· 416 409 set_binfmt(&aout_format); 417 410 418 411 set_brk(current->mm->start_brk, current->mm->brk); 419 - 420 - retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT); 421 - if (retval < 0) { 422 - /* Someone check-me: is this error path enough? */ 423 - send_sig(SIGKILL, current, 0); 424 - return retval; 425 - } 426 412 427 413 current->mm->start_stack = 428 414 (unsigned long)create_aout_tables((char __user *)bprm->p, bprm);
+2 -2
arch/x86/lib/delay.c
··· 48 48 } 49 49 50 50 /* TSC based delay: */ 51 - static void delay_tsc(unsigned long loops) 51 + static void delay_tsc(unsigned long __loops) 52 52 { 53 - unsigned long bclock, now; 53 + u32 bclock, now, loops = __loops; 54 54 int cpu; 55 55 56 56 preempt_disable();
+3 -1
arch/x86/mm/hugetlbpage.c
··· 333 333 * Lookup failure means no vma is above this address, 334 334 * i.e. return with success: 335 335 */ 336 - if (!(vma = find_vma_prev(mm, addr, &prev_vma))) 336 + vma = find_vma(mm, addr); 337 + if (!vma) 337 338 return addr; 338 339 339 340 /* 340 341 * new region fits between prev_vma->vm_end and 341 342 * vma->vm_start, use it: 342 343 */ 344 + prev_vma = vma->vm_prev; 343 345 if (addr + len <= vma->vm_start && 344 346 (!prev_vma || (addr >= prev_vma->vm_end))) { 345 347 /* remember the address as a hint for next time */
+17 -5
arch/x86/pci/acpi.c
··· 60 60 DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."), 61 61 }, 62 62 }, 63 + /* https://bugzilla.kernel.org/show_bug.cgi?id=42619 */ 64 + { 65 + .callback = set_use_crs, 66 + .ident = "MSI MS-7253", 67 + .matches = { 68 + DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"), 69 + DMI_MATCH(DMI_BOARD_NAME, "MS-7253"), 70 + DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"), 71 + }, 72 + }, 63 73 64 74 /* Now for the blacklist.. */ 65 75 ··· 292 282 int i; 293 283 struct resource *res, *root, *conflict; 294 284 295 - if (!pci_use_crs) 296 - return; 297 - 298 285 coalesce_windows(info, IORESOURCE_MEM); 299 286 coalesce_windows(info, IORESOURCE_IO); 300 287 ··· 343 336 acpi_walk_resources(device->handle, METHOD_NAME__CRS, setup_resource, 344 337 &info); 345 338 346 - add_resources(&info); 347 - return; 339 + if (pci_use_crs) { 340 + add_resources(&info); 341 + 342 + return; 343 + } 344 + 345 + kfree(info.name); 348 346 349 347 name_alloc_fail: 350 348 kfree(info.res);
+1 -1
drivers/block/floppy.c
··· 3832 3832 bio.bi_size = size; 3833 3833 bio.bi_bdev = bdev; 3834 3834 bio.bi_sector = 0; 3835 - bio.bi_flags = BIO_QUIET; 3835 + bio.bi_flags = (1 << BIO_QUIET); 3836 3836 init_completion(&complete); 3837 3837 bio.bi_private = &complete; 3838 3838 bio.bi_end_io = floppy_rb0_complete;
+2
drivers/gpu/drm/gma500/cdv_device.c
··· 321 321 cdv_get_core_freq(dev); 322 322 gma_intel_opregion_init(dev); 323 323 psb_intel_init_bios(dev); 324 + REG_WRITE(PORT_HOTPLUG_EN, 0); 325 + REG_WRITE(PORT_HOTPLUG_STAT, REG_READ(PORT_HOTPLUG_STAT)); 324 326 return 0; 325 327 } 326 328
-1
drivers/gpu/drm/gma500/framebuffer.c
··· 247 247 .fb_imageblit = cfb_imageblit, 248 248 .fb_pan_display = psbfb_pan, 249 249 .fb_mmap = psbfb_mmap, 250 - .fb_sync = psbfb_sync, 251 250 .fb_ioctl = psbfb_ioctl, 252 251 }; 253 252
+4 -5
drivers/gpu/drm/gma500/gtt.c
··· 446 446 pg->gtt_start = pci_resource_start(dev->pdev, PSB_GTT_RESOURCE); 447 447 gtt_pages = pci_resource_len(dev->pdev, PSB_GTT_RESOURCE) 448 448 >> PAGE_SHIFT; 449 - /* Some CDV firmware doesn't report this currently. In which case the 450 - system has 64 gtt pages */ 449 + /* CDV doesn't report this. In which case the system has 64 gtt pages */ 451 450 if (pg->gtt_start == 0 || gtt_pages == 0) { 452 - dev_err(dev->dev, "GTT PCI BAR not initialized.\n"); 451 + dev_dbg(dev->dev, "GTT PCI BAR not initialized.\n"); 453 452 gtt_pages = 64; 454 453 pg->gtt_start = dev_priv->pge_ctl; 455 454 } ··· 460 461 461 462 if (pg->gatt_pages == 0 || pg->gatt_start == 0) { 462 463 static struct resource fudge; /* Preferably peppermint */ 463 - /* This can occur on CDV SDV systems. Fudge it in this case. 464 + /* This can occur on CDV systems. Fudge it in this case. 464 465 We really don't care what imaginary space is being allocated 465 466 at this point */ 466 - dev_err(dev->dev, "GATT PCI BAR not initialized.\n"); 467 + dev_dbg(dev->dev, "GATT PCI BAR not initialized.\n"); 467 468 pg->gatt_start = 0x40000000; 468 469 pg->gatt_pages = (128 * 1024 * 1024) >> PAGE_SHIFT; 469 470 /* This is a little confusing but in fact the GTT is providing
+3
drivers/gpu/drm/radeon/r600.c
··· 2362 2362 uint64_t addr = semaphore->gpu_addr; 2363 2363 unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : PACKET3_SEM_SEL_SIGNAL; 2364 2364 2365 + if (rdev->family < CHIP_CAYMAN) 2366 + sel |= PACKET3_SEM_WAIT_ON_SIGNAL; 2367 + 2365 2368 radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1)); 2366 2369 radeon_ring_write(ring, addr & 0xffffffff); 2367 2370 radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+8
drivers/gpu/drm/radeon/r600_blit_shaders.c
··· 314 314 0x00000000, /* VGT_VTX_CNT_EN */ 315 315 316 316 0xc0016900, 317 + 0x000000d4, 318 + 0x00000000, /* SX_MISC */ 319 + 320 + 0xc0016900, 317 321 0x000002c8, 318 322 0x00000000, /* VGT_STRMOUT_BUFFER_EN */ 319 323 ··· 628 624 0x00000000, /* VGT_STRMOUT_EN */ 629 625 0x00000000, /* VGT_REUSE_OFF */ 630 626 0x00000000, /* VGT_VTX_CNT_EN */ 627 + 628 + 0xc0016900, 629 + 0x000000d4, 630 + 0x00000000, /* SX_MISC */ 631 631 632 632 0xc0016900, 633 633 0x000002c8,
+1
drivers/gpu/drm/radeon/r600d.h
··· 831 831 #define PACKET3_STRMOUT_BUFFER_UPDATE 0x34 832 832 #define PACKET3_INDIRECT_BUFFER_MP 0x38 833 833 #define PACKET3_MEM_SEMAPHORE 0x39 834 + # define PACKET3_SEM_WAIT_ON_SIGNAL (0x1 << 12) 834 835 # define PACKET3_SEM_SEL_SIGNAL (0x6 << 29) 835 836 # define PACKET3_SEM_SEL_WAIT (0x7 << 29) 836 837 #define PACKET3_MPEG_INDEX 0x3A
+1 -1
drivers/gpu/drm/radeon/radeon_connectors.c
··· 1057 1057 (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_HDMI_TYPE_B)) 1058 1058 return MODE_OK; 1059 1059 else if (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_HDMI_TYPE_A) { 1060 - if (ASIC_IS_DCE3(rdev)) { 1060 + if (0) { 1061 1061 /* HDMI 1.3+ supports max clock of 340 Mhz */ 1062 1062 if (mode->clock > 340000) 1063 1063 return MODE_CLOCK_HIGH;
+15 -3
drivers/gpu/drm/radeon/radeon_display.c
··· 1078 1078 .create_handle = radeon_user_framebuffer_create_handle, 1079 1079 }; 1080 1080 1081 - void 1081 + int 1082 1082 radeon_framebuffer_init(struct drm_device *dev, 1083 1083 struct radeon_framebuffer *rfb, 1084 1084 struct drm_mode_fb_cmd2 *mode_cmd, 1085 1085 struct drm_gem_object *obj) 1086 1086 { 1087 + int ret; 1087 1088 rfb->obj = obj; 1088 - drm_framebuffer_init(dev, &rfb->base, &radeon_fb_funcs); 1089 + ret = drm_framebuffer_init(dev, &rfb->base, &radeon_fb_funcs); 1090 + if (ret) { 1091 + rfb->obj = NULL; 1092 + return ret; 1093 + } 1089 1094 drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd); 1095 + return 0; 1090 1096 } 1091 1097 1092 1098 static struct drm_framebuffer * ··· 1102 1096 { 1103 1097 struct drm_gem_object *obj; 1104 1098 struct radeon_framebuffer *radeon_fb; 1099 + int ret; 1105 1100 1106 1101 obj = drm_gem_object_lookup(dev, file_priv, mode_cmd->handles[0]); 1107 1102 if (obj == NULL) { ··· 1115 1108 if (radeon_fb == NULL) 1116 1109 return ERR_PTR(-ENOMEM); 1117 1110 1118 - radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj); 1111 + ret = radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj); 1112 + if (ret) { 1113 + kfree(radeon_fb); 1114 + drm_gem_object_unreference_unlocked(obj); 1115 + return NULL; 1116 + } 1119 1117 1120 1118 return &radeon_fb->base; 1121 1119 }
+2 -4
drivers/gpu/drm/radeon/radeon_encoders.c
··· 307 307 bool radeon_dig_monitor_is_duallink(struct drm_encoder *encoder, 308 308 u32 pixel_clock) 309 309 { 310 - struct drm_device *dev = encoder->dev; 311 - struct radeon_device *rdev = dev->dev_private; 312 310 struct drm_connector *connector; 313 311 struct radeon_connector *radeon_connector; 314 312 struct radeon_connector_atom_dig *dig_connector; ··· 324 326 case DRM_MODE_CONNECTOR_HDMIB: 325 327 if (radeon_connector->use_digital) { 326 328 /* HDMI 1.3 supports up to 340 Mhz over single link */ 327 - if (ASIC_IS_DCE3(rdev) && drm_detect_hdmi_monitor(radeon_connector->edid)) { 329 + if (0 && drm_detect_hdmi_monitor(radeon_connector->edid)) { 328 330 if (pixel_clock > 340000) 329 331 return true; 330 332 else ··· 346 348 return false; 347 349 else { 348 350 /* HDMI 1.3 supports up to 340 Mhz over single link */ 349 - if (ASIC_IS_DCE3(rdev) && drm_detect_hdmi_monitor(radeon_connector->edid)) { 351 + if (0 && drm_detect_hdmi_monitor(radeon_connector->edid)) { 350 352 if (pixel_clock > 340000) 351 353 return true; 352 354 else
+10 -1
drivers/gpu/drm/radeon/radeon_fb.c
··· 209 209 sizes->surface_depth); 210 210 211 211 ret = radeonfb_create_pinned_object(rfbdev, &mode_cmd, &gobj); 212 + if (ret) { 213 + DRM_ERROR("failed to create fbcon object %d\n", ret); 214 + return ret; 215 + } 216 + 212 217 rbo = gem_to_radeon_bo(gobj); 213 218 214 219 /* okay we have an object now allocate the framebuffer */ ··· 225 220 226 221 info->par = rfbdev; 227 222 228 - radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj); 223 + ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj); 224 + if (ret) { 225 + DRM_ERROR("failed to initalise framebuffer %d\n", ret); 226 + goto out_unref; 227 + } 229 228 230 229 fb = &rfbdev->rfb.base; 231 230
+1 -1
drivers/gpu/drm/radeon/radeon_mode.h
··· 649 649 u16 blue, int regno); 650 650 extern void radeon_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green, 651 651 u16 *blue, int regno); 652 - void radeon_framebuffer_init(struct drm_device *dev, 652 + int radeon_framebuffer_init(struct drm_device *dev, 653 653 struct radeon_framebuffer *rfb, 654 654 struct drm_mode_fb_cmd2 *mode_cmd, 655 655 struct drm_gem_object *obj);
+3
drivers/hid/hid-ids.h
··· 59 59 #define USB_VENDOR_ID_AIRCABLE 0x16CA 60 60 #define USB_DEVICE_ID_AIRCABLE1 0x1502 61 61 62 + #define USB_VENDOR_ID_AIREN 0x1a2c 63 + #define USB_DEVICE_ID_AIREN_SLIMPLUS 0x0002 64 + 62 65 #define USB_VENDOR_ID_ALCOR 0x058f 63 66 #define USB_DEVICE_ID_ALCOR_USBRS232 0x9720 64 67
+7 -2
drivers/hid/hid-input.c
··· 986 986 return; 987 987 } 988 988 989 - /* Ignore out-of-range values as per HID specification, section 5.10 */ 990 - if (value < field->logical_minimum || value > field->logical_maximum) { 989 + /* 990 + * Ignore out-of-range values as per HID specification, 991 + * section 5.10 and 6.2.25 992 + */ 993 + if ((field->flags & HID_MAIN_ITEM_VARIABLE) && 994 + (value < field->logical_minimum || 995 + value > field->logical_maximum)) { 991 996 dbg_hid("Ignoring out-of-range value %x\n", value); 992 997 return; 993 998 }
+1
drivers/hid/usbhid/hid-quirks.c
··· 54 54 { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, 55 55 { USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS, HID_QUIRK_MULTI_INPUT }, 56 56 57 + { USB_VENDOR_ID_AIREN, USB_DEVICE_ID_AIREN_SLIMPLUS, HID_QUIRK_NOGET }, 57 58 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_UC100KM, HID_QUIRK_NOGET }, 58 59 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS124U, HID_QUIRK_NOGET }, 59 60 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM, HID_QUIRK_NOGET },
+3 -2
drivers/hwmon/Kconfig
··· 497 497 If you say yes here, you get support for JEDEC JC42.4 compliant 498 498 temperature sensors, which are used on many DDR3 memory modules for 499 499 mobile devices and servers. Support will include, but not be limited 500 - to, ADT7408, CAT34TS02, CAT6095, MAX6604, MCP9805, MCP98242, MCP98243, 501 - MCP9843, SE97, SE98, STTS424(E), TSE2002B3, and TS3000B3. 500 + to, ADT7408, AT30TS00, CAT34TS02, CAT6095, MAX6604, MCP9804, MCP9805, 501 + MCP98242, MCP98243, MCP9843, SE97, SE98, STTS424(E), STTS2002, 502 + STTS3000, TSE2002B3, TSE2002GB2, TS3000B3, and TS3000GB2. 502 503 503 504 This driver can also be built as a module. If so, the module 504 505 will be called jc42.
+28 -2
drivers/hwmon/jc42.c
··· 64 64 65 65 /* Manufacturer IDs */ 66 66 #define ADT_MANID 0x11d4 /* Analog Devices */ 67 + #define ATMEL_MANID 0x001f /* Atmel */ 67 68 #define MAX_MANID 0x004d /* Maxim */ 68 69 #define IDT_MANID 0x00b3 /* IDT */ 69 70 #define MCP_MANID 0x0054 /* Microchip */ ··· 78 77 #define ADT7408_DEVID 0x0801 79 78 #define ADT7408_DEVID_MASK 0xffff 80 79 80 + /* Atmel */ 81 + #define AT30TS00_DEVID 0x8201 82 + #define AT30TS00_DEVID_MASK 0xffff 83 + 81 84 /* IDT */ 82 85 #define TS3000B3_DEVID 0x2903 /* Also matches TSE2002B3 */ 83 86 #define TS3000B3_DEVID_MASK 0xffff 87 + 88 + #define TS3000GB2_DEVID 0x2912 /* Also matches TSE2002GB2 */ 89 + #define TS3000GB2_DEVID_MASK 0xffff 84 90 85 91 /* Maxim */ 86 92 #define MAX6604_DEVID 0x3e00 87 93 #define MAX6604_DEVID_MASK 0xffff 88 94 89 95 /* Microchip */ 96 + #define MCP9804_DEVID 0x0200 97 + #define MCP9804_DEVID_MASK 0xfffc 98 + 90 99 #define MCP98242_DEVID 0x2000 91 100 #define MCP98242_DEVID_MASK 0xfffc 92 101 ··· 124 113 #define STTS424E_DEVID 0x0000 125 114 #define STTS424E_DEVID_MASK 0xfffe 126 115 116 + #define STTS2002_DEVID 0x0300 117 + #define STTS2002_DEVID_MASK 0xffff 118 + 119 + #define STTS3000_DEVID 0x0200 120 + #define STTS3000_DEVID_MASK 0xffff 121 + 127 122 static u16 jc42_hysteresis[] = { 0, 1500, 3000, 6000 }; 128 123 129 124 struct jc42_chips { ··· 140 123 141 124 static struct jc42_chips jc42_chips[] = { 142 125 { ADT_MANID, ADT7408_DEVID, ADT7408_DEVID_MASK }, 126 + { ATMEL_MANID, AT30TS00_DEVID, AT30TS00_DEVID_MASK }, 143 127 { IDT_MANID, TS3000B3_DEVID, TS3000B3_DEVID_MASK }, 128 + { IDT_MANID, TS3000GB2_DEVID, TS3000GB2_DEVID_MASK }, 144 129 { MAX_MANID, MAX6604_DEVID, MAX6604_DEVID_MASK }, 130 + { MCP_MANID, MCP9804_DEVID, MCP9804_DEVID_MASK }, 145 131 { MCP_MANID, MCP98242_DEVID, MCP98242_DEVID_MASK }, 146 132 { MCP_MANID, MCP98243_DEVID, MCP98243_DEVID_MASK }, 147 133 { MCP_MANID, MCP9843_DEVID, MCP9843_DEVID_MASK }, ··· 153 133 { NXP_MANID, SE98_DEVID, SE98_DEVID_MASK }, 154 134 { STM_MANID, 
STTS424_DEVID, STTS424_DEVID_MASK }, 155 135 { STM_MANID, STTS424E_DEVID, STTS424E_DEVID_MASK }, 136 + { STM_MANID, STTS2002_DEVID, STTS2002_DEVID_MASK }, 137 + { STM_MANID, STTS3000_DEVID, STTS3000_DEVID_MASK }, 156 138 }; 157 139 158 140 /* Each client has this additional data */ ··· 181 159 182 160 static const struct i2c_device_id jc42_id[] = { 183 161 { "adt7408", 0 }, 162 + { "at30ts00", 0 }, 184 163 { "cat94ts02", 0 }, 185 164 { "cat6095", 0 }, 186 165 { "jc42", 0 }, 187 166 { "max6604", 0 }, 167 + { "mcp9804", 0 }, 188 168 { "mcp9805", 0 }, 189 169 { "mcp98242", 0 }, 190 170 { "mcp98243", 0 }, ··· 195 171 { "se97b", 0 }, 196 172 { "se98", 0 }, 197 173 { "stts424", 0 }, 198 - { "tse2002b3", 0 }, 199 - { "ts3000b3", 0 }, 174 + { "stts2002", 0 }, 175 + { "stts3000", 0 }, 176 + { "tse2002", 0 }, 177 + { "ts3000", 0 }, 200 178 { } 201 179 }; 202 180 MODULE_DEVICE_TABLE(i2c, jc42_id);
+2 -1
drivers/hwmon/pmbus/pmbus_core.c
··· 54 54 lcrit_alarm, crit_alarm */ 55 55 #define PMBUS_IOUT_BOOLEANS_PER_PAGE 3 /* alarm, lcrit_alarm, 56 56 crit_alarm */ 57 - #define PMBUS_POUT_BOOLEANS_PER_PAGE 2 /* alarm, crit_alarm */ 57 + #define PMBUS_POUT_BOOLEANS_PER_PAGE 3 /* cap_alarm, alarm, crit_alarm 58 + */ 58 59 #define PMBUS_MAX_BOOLEANS_PER_FAN 2 /* alarm, fault */ 59 60 #define PMBUS_MAX_BOOLEANS_PER_TEMP 4 /* min_alarm, max_alarm, 60 61 lcrit_alarm, crit_alarm */
+6 -4
drivers/hwmon/pmbus/zl6100.c
··· 33 33 struct zl6100_data { 34 34 int id; 35 35 ktime_t access; /* chip access time */ 36 + int delay; /* Delay between chip accesses in uS */ 36 37 struct pmbus_driver_info info; 37 38 }; 38 39 ··· 53 52 /* Some chips need a delay between accesses */ 54 53 static inline void zl6100_wait(const struct zl6100_data *data) 55 54 { 56 - if (delay) { 55 + if (data->delay) { 57 56 s64 delta = ktime_us_delta(ktime_get(), data->access); 58 - if (delta < delay) 59 - udelay(delay - delta); 57 + if (delta < data->delay) 58 + udelay(data->delay - delta); 60 59 } 61 60 } 62 61 ··· 208 207 * can be cleared later for additional chips if tests show that it 209 208 * is not needed (in other words, better be safe than sorry). 210 209 */ 210 + data->delay = delay; 211 211 if (data->id == zl2004 || data->id == zl6105) 212 - delay = 0; 212 + data->delay = 0; 213 213 214 214 /* 215 215 * Since there was a direct I2C device access above, wait before
+1 -1
drivers/input/evdev.c
··· 332 332 struct evdev_client *client = file->private_data; 333 333 struct evdev *evdev = client->evdev; 334 334 struct input_event event; 335 - int retval; 335 + int retval = 0; 336 336 337 337 if (count < input_event_size()) 338 338 return -EINVAL;
+2 -4
drivers/input/misc/twl4030-vibra.c
··· 172 172 } 173 173 174 174 /*** Module ***/ 175 - #if CONFIG_PM 175 + #if CONFIG_PM_SLEEP 176 176 static int twl4030_vibra_suspend(struct device *dev) 177 177 { 178 178 struct platform_device *pdev = to_platform_device(dev); ··· 189 189 vibra_disable_leds(); 190 190 return 0; 191 191 } 192 + #endif 192 193 193 194 static SIMPLE_DEV_PM_OPS(twl4030_vibra_pm_ops, 194 195 twl4030_vibra_suspend, twl4030_vibra_resume); 195 - #endif 196 196 197 197 static int __devinit twl4030_vibra_probe(struct platform_device *pdev) 198 198 { ··· 273 273 .driver = { 274 274 .name = "twl4030-vibra", 275 275 .owner = THIS_MODULE, 276 - #ifdef CONFIG_PM 277 276 .pm = &twl4030_vibra_pm_ops, 278 - #endif 279 277 }, 280 278 }; 281 279 module_platform_driver(twl4030_vibra_driver);
+5 -2
drivers/input/mouse/alps.c
··· 952 952 953 953 /* 954 954 * First try "E6 report". 955 - * ALPS should return 0,0,10 or 0,0,100 955 + * ALPS should return 0,0,10 or 0,0,100 if no buttons are pressed. 956 + * The bits 0-2 of the first byte will be 1s if some buttons are 957 + * pressed. 956 958 */ 957 959 param[0] = 0; 958 960 if (ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES) || ··· 970 968 psmouse_dbg(psmouse, "E6 report: %2.2x %2.2x %2.2x", 971 969 param[0], param[1], param[2]); 972 970 973 - if (param[0] != 0 || param[1] != 0 || (param[2] != 10 && param[2] != 100)) 971 + if ((param[0] & 0xf8) != 0 || param[1] != 0 || 972 + (param[2] != 10 && param[2] != 100)) 974 973 return NULL; 975 974 976 975 /*
+2
drivers/input/tablet/Kconfig
··· 77 77 tristate "Wacom Intuos/Graphire tablet support (USB)" 78 78 depends on USB_ARCH_HAS_HCD 79 79 select USB 80 + select NEW_LEDS 81 + select LEDS_CLASS 80 82 help 81 83 Say Y here if you want to use the USB version of the Wacom Intuos 82 84 or Graphire tablet. Make sure to say Y to "Mouse support"
+1 -1
drivers/input/tablet/wacom_wac.c
··· 926 926 { 927 927 struct input_dev *input = wacom->input; 928 928 unsigned char *data = wacom->data; 929 - int count = data[1] & 0x03; 929 + int count = data[1] & 0x07; 930 930 int i; 931 931 932 932 if (data[0] != 0x02)
+1 -1
drivers/iommu/amd_iommu_init.c
··· 275 275 } 276 276 277 277 /* Programs the physical address of the device table into the IOMMU hardware */ 278 - static void __init iommu_set_device_table(struct amd_iommu *iommu) 278 + static void iommu_set_device_table(struct amd_iommu *iommu) 279 279 { 280 280 u64 entry; 281 281
+1 -1
drivers/md/dm-flakey.c
··· 323 323 * Corrupt successful READs while in down state. 324 324 * If flags were specified, only corrupt those that match. 325 325 */ 326 - if (!error && bio_submitted_while_down && 326 + if (fc->corrupt_bio_byte && !error && bio_submitted_while_down && 327 327 (bio_data_dir(bio) == READ) && (fc->corrupt_bio_rw == READ) && 328 328 all_corrupt_bio_flags_match(bio, fc)) 329 329 corrupt_bio_data(bio, fc);
+16 -7
drivers/md/dm-io.c
··· 296 296 unsigned offset; 297 297 unsigned num_bvecs; 298 298 sector_t remaining = where->count; 299 + struct request_queue *q = bdev_get_queue(where->bdev); 300 + sector_t discard_sectors; 299 301 300 302 /* 301 303 * where->count may be zero if rw holds a flush and we need to ··· 307 305 /* 308 306 * Allocate a suitably sized-bio. 309 307 */ 310 - num_bvecs = dm_sector_div_up(remaining, 311 - (PAGE_SIZE >> SECTOR_SHIFT)); 312 - num_bvecs = min_t(int, bio_get_nr_vecs(where->bdev), num_bvecs); 308 + if (rw & REQ_DISCARD) 309 + num_bvecs = 1; 310 + else 311 + num_bvecs = min_t(int, bio_get_nr_vecs(where->bdev), 312 + dm_sector_div_up(remaining, (PAGE_SIZE >> SECTOR_SHIFT))); 313 + 313 314 bio = bio_alloc_bioset(GFP_NOIO, num_bvecs, io->client->bios); 314 315 bio->bi_sector = where->sector + (where->count - remaining); 315 316 bio->bi_bdev = where->bdev; ··· 320 315 bio->bi_destructor = dm_bio_destructor; 321 316 store_io_and_region_in_bio(bio, io, region); 322 317 323 - /* 324 - * Try and add as many pages as possible. 325 - */ 326 - while (remaining) { 318 + if (rw & REQ_DISCARD) { 319 + discard_sectors = min_t(sector_t, q->limits.max_discard_sectors, remaining); 320 + bio->bi_size = discard_sectors << SECTOR_SHIFT; 321 + remaining -= discard_sectors; 322 + } else while (remaining) { 323 + /* 324 + * Try and add as many pages as possible. 325 + */ 327 326 dp->get_page(dp, &page, &len, &offset); 328 327 len = min(len, to_bytes(remaining)); 329 328 if (!bio_add_page(bio, page, len, offset))
+1 -1
drivers/md/dm-ioctl.c
··· 1437 1437 1438 1438 if (!argc) { 1439 1439 DMWARN("Empty message received."); 1440 - goto out; 1440 + goto out_argv; 1441 1441 } 1442 1442 1443 1443 table = dm_get_live_table(md);
+11 -6
drivers/md/dm-raid.c
··· 668 668 return ret; 669 669 670 670 sb = page_address(rdev->sb_page); 671 - if (sb->magic != cpu_to_le32(DM_RAID_MAGIC)) { 671 + 672 + /* 673 + * Two cases that we want to write new superblocks and rebuild: 674 + * 1) New device (no matching magic number) 675 + * 2) Device specified for rebuild (!In_sync w/ offset == 0) 676 + */ 677 + if ((sb->magic != cpu_to_le32(DM_RAID_MAGIC)) || 678 + (!test_bit(In_sync, &rdev->flags) && !rdev->recovery_offset)) { 672 679 super_sync(rdev->mddev, rdev); 673 680 674 681 set_bit(FirstUse, &rdev->flags); ··· 752 745 */ 753 746 rdev_for_each(r, t, mddev) { 754 747 if (!test_bit(In_sync, &r->flags)) { 755 - if (!test_bit(FirstUse, &r->flags)) 756 - DMERR("Superblock area of " 757 - "rebuild device %d should have been " 758 - "cleared.", r->raid_disk); 759 - set_bit(FirstUse, &r->flags); 748 + DMINFO("Device %d specified for rebuild: " 749 + "Clearing superblock", r->raid_disk); 760 750 rebuilds++; 761 751 } else if (test_bit(FirstUse, &r->flags)) 762 752 new_devs++; ··· 975 971 976 972 INIT_WORK(&rs->md.event_work, do_table_event); 977 973 ti->private = rs; 974 + ti->num_flush_requests = 1; 978 975 979 976 mutex_lock(&rs->md.reconfig_mutex); 980 977 ret = md_run(&rs->md);
+20 -5
drivers/md/dm-thin-metadata.c
··· 385 385 data_sm = dm_sm_disk_create(tm, nr_blocks); 386 386 if (IS_ERR(data_sm)) { 387 387 DMERR("sm_disk_create failed"); 388 + dm_tm_unlock(tm, sblock); 388 389 r = PTR_ERR(data_sm); 389 390 goto bad; 390 391 } ··· 790 789 return 0; 791 790 } 792 791 792 + /* 793 + * __open_device: Returns @td corresponding to device with id @dev, 794 + * creating it if @create is set and incrementing @td->open_count. 795 + * On failure, @td is undefined. 796 + */ 793 797 static int __open_device(struct dm_pool_metadata *pmd, 794 798 dm_thin_id dev, int create, 795 799 struct dm_thin_device **td) ··· 805 799 struct disk_device_details details_le; 806 800 807 801 /* 808 - * Check the device isn't already open. 802 + * If the device is already open, return it. 809 803 */ 810 804 list_for_each_entry(td2, &pmd->thin_devices, list) 811 805 if (td2->id == dev) { 806 + /* 807 + * May not create an already-open device. 808 + */ 809 + if (create) 810 + return -EEXIST; 811 + 812 812 td2->open_count++; 813 813 *td = td2; 814 814 return 0; ··· 829 817 if (r != -ENODATA || !create) 830 818 return r; 831 819 820 + /* 821 + * Create new device. 
822 + */ 832 823 changed = 1; 833 824 details_le.mapped_blocks = 0; 834 825 details_le.transaction_id = cpu_to_le64(pmd->trans_id); ··· 897 882 898 883 r = __open_device(pmd, dev, 1, &td); 899 884 if (r) { 900 - __close_device(td); 901 885 dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root); 902 886 dm_btree_del(&pmd->bl_info, dev_root); 903 887 return r; 904 888 } 905 - td->changed = 1; 906 889 __close_device(td); 907 890 908 891 return r; ··· 980 967 goto bad; 981 968 982 969 r = __set_snapshot_details(pmd, td, origin, pmd->time); 970 + __close_device(td); 971 + 983 972 if (r) 984 973 goto bad; 985 974 986 - __close_device(td); 987 975 return 0; 988 976 989 977 bad: 990 - __close_device(td); 991 978 dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root); 992 979 dm_btree_remove(&pmd->details_info, pmd->details_root, 993 980 &key, &pmd->details_root); ··· 1224 1211 if (r) 1225 1212 return r; 1226 1213 1214 + td->mapped_blocks--; 1215 + td->changed = 1; 1227 1216 pmd->need_commit = 1; 1228 1217 1229 1218 return 0;
+1 -1
drivers/md/raid1.c
··· 624 624 return 1; 625 625 626 626 rcu_read_lock(); 627 - for (i = 0; i < conf->raid_disks; i++) { 627 + for (i = 0; i < conf->raid_disks * 2; i++) { 628 628 struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev); 629 629 if (rdev && !test_bit(Faulty, &rdev->flags)) { 630 630 struct request_queue *q = bdev_get_queue(rdev->bdev);
+27 -11
drivers/md/raid10.c
··· 67 67 68 68 static void allow_barrier(struct r10conf *conf); 69 69 static void lower_barrier(struct r10conf *conf); 70 + static int enough(struct r10conf *conf, int ignore); 70 71 71 72 static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data) 72 73 { ··· 348 347 * wait for the 'master' bio. 349 348 */ 350 349 set_bit(R10BIO_Uptodate, &r10_bio->state); 350 + } else { 351 + /* If all other devices that store this block have 352 + * failed, we want to return the error upwards rather 353 + * than fail the last device. Here we redefine 354 + * "uptodate" to mean "Don't want to retry" 355 + */ 356 + unsigned long flags; 357 + spin_lock_irqsave(&conf->device_lock, flags); 358 + if (!enough(conf, rdev->raid_disk)) 359 + uptodate = 1; 360 + spin_unlock_irqrestore(&conf->device_lock, flags); 361 + } 362 + if (uptodate) { 351 363 raid_end_bio_io(r10_bio); 352 364 rdev_dec_pending(rdev, conf->mddev); 353 365 } else { ··· 2066 2052 "md/raid10:%s: %s: Failing raid device\n", 2067 2053 mdname(mddev), b); 2068 2054 md_error(mddev, conf->mirrors[d].rdev); 2055 + r10_bio->devs[r10_bio->read_slot].bio = IO_BLOCKED; 2069 2056 return; 2070 2057 } 2071 2058 ··· 2120 2105 rdev, 2121 2106 r10_bio->devs[r10_bio->read_slot].addr 2122 2107 + sect, 2123 - s, 0)) 2108 + s, 0)) { 2124 2109 md_error(mddev, rdev); 2110 + r10_bio->devs[r10_bio->read_slot].bio 2111 + = IO_BLOCKED; 2112 + } 2125 2113 break; 2126 2114 } 2127 2115 ··· 2317 2299 * This is all done synchronously while the array is 2318 2300 * frozen. 
2319 2301 */ 2302 + bio = r10_bio->devs[slot].bio; 2303 + bdevname(bio->bi_bdev, b); 2304 + bio_put(bio); 2305 + r10_bio->devs[slot].bio = NULL; 2306 + 2320 2307 if (mddev->ro == 0) { 2321 2308 freeze_array(conf); 2322 2309 fix_read_error(conf, mddev, r10_bio); 2323 2310 unfreeze_array(conf); 2324 - } 2311 + } else 2312 + r10_bio->devs[slot].bio = IO_BLOCKED; 2313 + 2325 2314 rdev_dec_pending(rdev, mddev); 2326 2315 2327 - bio = r10_bio->devs[slot].bio; 2328 - bdevname(bio->bi_bdev, b); 2329 - r10_bio->devs[slot].bio = 2330 - mddev->ro ? IO_BLOCKED : NULL; 2331 2316 read_more: 2332 2317 rdev = read_balance(conf, r10_bio, &max_sectors); 2333 2318 if (rdev == NULL) { ··· 2339 2318 mdname(mddev), b, 2340 2319 (unsigned long long)r10_bio->sector); 2341 2320 raid_end_bio_io(r10_bio); 2342 - bio_put(bio); 2343 2321 return; 2344 2322 } 2345 2323 2346 2324 do_sync = (r10_bio->master_bio->bi_rw & REQ_SYNC); 2347 - if (bio) 2348 - bio_put(bio); 2349 2325 slot = r10_bio->read_slot; 2350 2326 printk_ratelimited( 2351 2327 KERN_ERR ··· 2378 2360 mbio->bi_phys_segments++; 2379 2361 spin_unlock_irq(&conf->device_lock); 2380 2362 generic_make_request(bio); 2381 - bio = NULL; 2382 2363 2383 2364 r10_bio = mempool_alloc(conf->r10bio_pool, 2384 2365 GFP_NOIO); ··· 3260 3243 disk->rdev = rdev; 3261 3244 } 3262 3245 3263 - disk->rdev = rdev; 3264 3246 disk_stack_limits(mddev->gendisk, rdev->bdev, 3265 3247 rdev->data_offset << 9); 3266 3248 /* as we don't honour merge_bvec_fn, we must never risk
+3 -2
drivers/mfd/ab8500-core.c
··· 956 956 return ret; 957 957 958 958 out_freeirq: 959 - if (ab8500->irq_base) { 959 + if (ab8500->irq_base) 960 960 free_irq(ab8500->irq, ab8500); 961 961 out_removeirq: 962 + if (ab8500->irq_base) 962 963 ab8500_irq_remove(ab8500); 963 - } 964 + 964 965 return ret; 965 966 } 966 967
+1 -1
drivers/mfd/mfd-core.c
··· 123 123 } 124 124 125 125 if (!cell->ignore_resource_conflicts) { 126 - ret = acpi_check_resource_conflict(res); 126 + ret = acpi_check_resource_conflict(&res[r]); 127 127 if (ret) 128 128 goto fail_res; 129 129 }
+1 -1
drivers/mfd/s5m-core.c
··· 105 105 s5m87xx->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR); 106 106 i2c_set_clientdata(s5m87xx->rtc, s5m87xx); 107 107 108 - if (pdata->cfg_pmic_irq) 108 + if (pdata && pdata->cfg_pmic_irq) 109 109 pdata->cfg_pmic_irq(); 110 110 111 111 s5m_irq_init(s5m87xx);
+1 -1
drivers/mfd/tps65910.c
··· 168 168 goto err; 169 169 170 170 init_data->irq = pmic_plat_data->irq; 171 - init_data->irq_base = pmic_plat_data->irq; 171 + init_data->irq_base = pmic_plat_data->irq_base; 172 172 173 173 tps65910_gpio_init(tps65910, pmic_plat_data->gpio_base); 174 174
+1 -1
drivers/mfd/tps65912-core.c
··· 151 151 goto err; 152 152 153 153 init_data->irq = pmic_plat_data->irq; 154 - init_data->irq_base = pmic_plat_data->irq; 154 + init_data->irq_base = pmic_plat_data->irq_base; 155 155 ret = tps65912_irq_init(tps65912, init_data->irq, init_data); 156 156 if (ret < 0) 157 157 goto err;
-1
drivers/mfd/wm8350-irq.c
··· 496 496 497 497 mutex_init(&wm8350->irq_lock); 498 498 wm8350->chip_irq = irq; 499 - wm8350->irq_base = pdata->irq_base; 500 499 501 500 if (pdata && pdata->irq_base > 0) 502 501 irq_base = pdata->irq_base;
+14
drivers/mfd/wm8994-core.c
··· 256 256 break; 257 257 } 258 258 259 + switch (wm8994->type) { 260 + case WM1811: 261 + ret = wm8994_reg_read(wm8994, WM8994_ANTIPOP_2); 262 + if (ret < 0) { 263 + dev_err(dev, "Failed to read jackdet: %d\n", ret); 264 + } else if (ret & WM1811_JACKDET_MODE_MASK) { 265 + dev_dbg(dev, "CODEC still active, ignoring suspend\n"); 266 + return 0; 267 + } 268 + break; 269 + default: 270 + break; 271 + } 272 + 259 273 /* Disable LDO pulldowns while the device is suspended if we 260 274 * don't know that something will be driving them. */ 261 275 if (!wm8994->ldo_ena_always_driven)
+1
drivers/mfd/wm8994-regmap.c
··· 806 806 case WM8994_DC_SERVO_2: 807 807 case WM8994_DC_SERVO_READBACK: 808 808 case WM8994_DC_SERVO_4: 809 + case WM8994_DC_SERVO_4E: 809 810 case WM8994_ANALOGUE_HP_1: 810 811 case WM8958_MIC_DETECT_1: 811 812 case WM8958_MIC_DETECT_2:
+2 -2
drivers/misc/c2port/core.c
··· 984 984 " - (C) 2007 Rodolfo Giometti\n"); 985 985 986 986 c2port_class = class_create(THIS_MODULE, "c2port"); 987 - if (!c2port_class) { 987 + if (IS_ERR(c2port_class)) { 988 988 printk(KERN_ERR "c2port: failed to allocate class\n"); 989 - return -ENOMEM; 989 + return PTR_ERR(c2port_class); 990 990 } 991 991 c2port_class->dev_attrs = c2port_attrs; 992 992
+3
drivers/mmc/core/core.c
··· 2068 2068 */ 2069 2069 mmc_hw_reset_for_init(host); 2070 2070 2071 + /* Initialization should be done at 3.3 V I/O voltage. */ 2072 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 2073 + 2071 2074 /* 2072 2075 * sdio_reset sends CMD52 to reset card. Since we do not know 2073 2076 * if the card is being re-initialized, just send it. CMD52
+2 -2
drivers/mmc/core/host.c
··· 238 238 /* Hold MCI clock for 8 cycles by default */ 239 239 host->clk_delay = 8; 240 240 /* 241 - * Default clock gating delay is 200ms. 241 + * Default clock gating delay is 0ms to avoid wasting power. 242 242 * This value can be tuned by writing into sysfs entry. 243 243 */ 244 - host->clkgate_delay = 200; 244 + host->clkgate_delay = 0; 245 245 host->clk_gated = false; 246 246 INIT_DELAYED_WORK(&host->clk_gate_work, mmc_host_clk_gate_work); 247 247 spin_lock_init(&host->clk_lock);
+3
drivers/mmc/core/mmc.c
··· 816 816 if (!mmc_host_is_spi(host)) 817 817 mmc_set_bus_mode(host, MMC_BUSMODE_OPENDRAIN); 818 818 819 + /* Initialization should be done at 3.3 V I/O voltage. */ 820 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 821 + 819 822 /* 820 823 * Since we're changing the OCR value, we seem to 821 824 * need to tell some cards to go back to the idle
+3 -5
drivers/mmc/core/sd.c
··· 911 911 BUG_ON(!host); 912 912 WARN_ON(!host->claimed); 913 913 914 + /* The initialization should be done at 3.3 V I/O voltage. */ 915 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 916 + 914 917 err = mmc_sd_get_cid(host, ocr, cid, &rocr); 915 918 if (err) 916 919 return err; ··· 1158 1155 1159 1156 BUG_ON(!host); 1160 1157 WARN_ON(!host->claimed); 1161 - 1162 - /* Make sure we are at 3.3V signalling voltage */ 1163 - err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, false); 1164 - if (err) 1165 - return err; 1166 1158 1167 1159 /* Disable preset value enable if already set since last time */ 1168 1160 if (host->ops->enable_preset_value) {
+8
drivers/mmc/core/sdio.c
··· 585 585 * Inform the card of the voltage 586 586 */ 587 587 if (!powered_resume) { 588 + /* The initialization should be done at 3.3 V I/O voltage. */ 589 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 590 + 588 591 err = mmc_send_io_op_cond(host, host->ocr, &ocr); 589 592 if (err) 590 593 goto err; ··· 999 996 * With these steps taken, mmc_select_voltage() is also required to 1000 997 * restore the correct voltage setting of the card. 1001 998 */ 999 + 1000 + /* The initialization should be done at 3.3 V I/O voltage. */ 1001 + if (!mmc_card_keep_power(host)) 1002 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 1003 + 1002 1004 sdio_reset(host); 1003 1005 mmc_go_idle(host); 1004 1006 mmc_send_if_cond(host, host->ocr_avail);
+10 -11
drivers/mmc/host/atmel-mci.c
··· 1948 1948 } 1949 1949 } 1950 1950 1951 - static void atmci_configure_dma(struct atmel_mci *host) 1951 + static bool atmci_configure_dma(struct atmel_mci *host) 1952 1952 { 1953 1953 struct mci_platform_data *pdata; 1954 1954 1955 1955 if (host == NULL) 1956 - return; 1956 + return false; 1957 1957 1958 1958 pdata = host->pdev->dev.platform_data; 1959 1959 ··· 1970 1970 host->dma.chan = 1971 1971 dma_request_channel(mask, atmci_filter, pdata->dma_slave); 1972 1972 } 1973 - if (!host->dma.chan) 1974 - dev_notice(&host->pdev->dev, "DMA not available, using PIO\n"); 1975 - else 1973 + if (!host->dma.chan) { 1974 + dev_warn(&host->pdev->dev, "no DMA channel available\n"); 1975 + return false; 1976 + } else { 1976 1977 dev_info(&host->pdev->dev, 1977 1978 "Using %s for DMA transfers\n", 1978 1979 dma_chan_name(host->dma.chan)); 1980 + return true; 1981 + } 1979 1982 } 1980 1983 1981 1984 static inline unsigned int atmci_get_version(struct atmel_mci *host) ··· 2088 2085 2089 2086 /* Get MCI capabilities and set operations according to it */ 2090 2087 atmci_get_cap(host); 2091 - if (host->caps.has_dma) { 2092 - dev_info(&pdev->dev, "using DMA\n"); 2088 + if (host->caps.has_dma && atmci_configure_dma(host)) { 2093 2089 host->prepare_data = &atmci_prepare_data_dma; 2094 2090 host->submit_data = &atmci_submit_data_dma; 2095 2091 host->stop_transfer = &atmci_stop_transfer_dma; ··· 2098 2096 host->submit_data = &atmci_submit_data_pdc; 2099 2097 host->stop_transfer = &atmci_stop_transfer_pdc; 2100 2098 } else { 2101 - dev_info(&pdev->dev, "no DMA, no PDC\n"); 2099 + dev_info(&pdev->dev, "using PIO\n"); 2102 2100 host->prepare_data = &atmci_prepare_data; 2103 2101 host->submit_data = &atmci_submit_data; 2104 2102 host->stop_transfer = &atmci_stop_transfer; 2105 2103 } 2106 - 2107 - if (host->caps.has_dma) 2108 - atmci_configure_dma(host); 2109 2104 2110 2105 platform_set_drvdata(pdev, host); 2111 2106
+4 -3
drivers/mmc/host/mmci.c
··· 1271 1271 /* 1272 1272 * Block size can be up to 2048 bytes, but must be a power of two. 1273 1273 */ 1274 - mmc->max_blk_size = 2048; 1274 + mmc->max_blk_size = 1 << 11; 1275 1275 1276 1276 /* 1277 - * No limit on the number of blocks transferred. 1277 + * Limit the number of blocks transferred so that we don't overflow 1278 + * the maximum request size. 1278 1279 */ 1279 - mmc->max_blk_count = mmc->max_req_size; 1280 + mmc->max_blk_count = mmc->max_req_size >> 11; 1280 1281 1281 1282 spin_lock_init(&host->lock); 1282 1283
+3 -2
drivers/mmc/host/sdhci-esdhc-imx.c
··· 269 269 imx_data->scratchpad = val; 270 270 return; 271 271 case SDHCI_COMMAND: 272 - if ((host->cmd->opcode == MMC_STOP_TRANSMISSION) 273 - && (imx_data->flags & ESDHC_FLAG_MULTIBLK_NO_INT)) 272 + if ((host->cmd->opcode == MMC_STOP_TRANSMISSION || 273 + host->cmd->opcode == MMC_SET_BLOCK_COUNT) && 274 + (imx_data->flags & ESDHC_FLAG_MULTIBLK_NO_INT)) 274 275 val |= SDHCI_CMD_ABORTCMD; 275 276 276 277 if (is_imx6q_usdhc(imx_data)) {
+1 -1
drivers/net/caif/caif_hsi.c
··· 978 978 dev->netdev_ops = &cfhsi_ops; 979 979 dev->type = ARPHRD_CAIF; 980 980 dev->flags = IFF_POINTOPOINT | IFF_NOARP; 981 - dev->mtu = CFHSI_MAX_PAYLOAD_SZ; 981 + dev->mtu = CFHSI_MAX_CAIF_FRAME_SZ; 982 982 dev->tx_queue_len = 0; 983 983 dev->destructor = free_netdev; 984 984 skb_queue_head_init(&cfhsi->qhead);
+1 -1
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
··· 1710 1710 "atl1c hardware error (status = 0x%x)\n", 1711 1711 status & ISR_ERROR); 1712 1712 /* reset MAC */ 1713 - adapter->work_event |= ATL1C_WORK_EVENT_RESET; 1713 + set_bit(ATL1C_WORK_EVENT_RESET, &adapter->work_event); 1714 1714 schedule_work(&adapter->common_task); 1715 1715 return IRQ_HANDLED; 1716 1716 }
+26 -25
drivers/net/ethernet/broadcom/tg3.c
··· 5352 5352 } 5353 5353 } 5354 5354 5355 - netdev_completed_queue(tp->dev, pkts_compl, bytes_compl); 5355 + netdev_tx_completed_queue(txq, pkts_compl, bytes_compl); 5356 5356 5357 5357 tnapi->tx_cons = sw_idx; 5358 5358 ··· 6793 6793 } 6794 6794 6795 6795 skb_tx_timestamp(skb); 6796 - netdev_sent_queue(tp->dev, skb->len); 6796 + netdev_tx_sent_queue(txq, skb->len); 6797 6797 6798 6798 /* Packets are ready, update Tx producer idx local and on card. */ 6799 6799 tw32_tx_mbox(tnapi->prodmbox, entry); ··· 7275 7275 7276 7276 dev_kfree_skb_any(skb); 7277 7277 } 7278 + netdev_tx_reset_queue(netdev_get_tx_queue(tp->dev, j)); 7278 7279 } 7279 - netdev_reset_queue(tp->dev); 7280 7280 } 7281 7281 7282 7282 /* Initialize tx/rx rings for packet processing. ··· 7886 7886 return 0; 7887 7887 } 7888 7888 7889 - static struct rtnl_link_stats64 *tg3_get_stats64(struct net_device *, 7890 - struct rtnl_link_stats64 *); 7891 - static struct tg3_ethtool_stats *tg3_get_estats(struct tg3 *, 7892 - struct tg3_ethtool_stats *); 7889 + static void tg3_get_nstats(struct tg3 *, struct rtnl_link_stats64 *); 7890 + static void tg3_get_estats(struct tg3 *, struct tg3_ethtool_stats *); 7893 7891 7894 7892 /* tp->lock is held. */ 7895 7893 static int tg3_halt(struct tg3 *tp, int kind, int silent) ··· 7908 7910 7909 7911 if (tp->hw_stats) { 7910 7912 /* Save the stats across chip resets... 
*/ 7911 - tg3_get_stats64(tp->dev, &tp->net_stats_prev), 7913 + tg3_get_nstats(tp, &tp->net_stats_prev), 7912 7914 tg3_get_estats(tp, &tp->estats_prev); 7913 7915 7914 7916 /* And make sure the next sample is new data */ ··· 9845 9847 return ((u64)val->high << 32) | ((u64)val->low); 9846 9848 } 9847 9849 9848 - static u64 calc_crc_errors(struct tg3 *tp) 9850 + static u64 tg3_calc_crc_errors(struct tg3 *tp) 9849 9851 { 9850 9852 struct tg3_hw_stats *hw_stats = tp->hw_stats; 9851 9853 ··· 9854 9856 GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5701)) { 9855 9857 u32 val; 9856 9858 9857 - spin_lock_bh(&tp->lock); 9858 9859 if (!tg3_readphy(tp, MII_TG3_TEST1, &val)) { 9859 9860 tg3_writephy(tp, MII_TG3_TEST1, 9860 9861 val | MII_TG3_TEST1_CRC_EN); 9861 9862 tg3_readphy(tp, MII_TG3_RXR_COUNTERS, &val); 9862 9863 } else 9863 9864 val = 0; 9864 - spin_unlock_bh(&tp->lock); 9865 9865 9866 9866 tp->phy_crc_errors += val; 9867 9867 ··· 9873 9877 estats->member = old_estats->member + \ 9874 9878 get_stat64(&hw_stats->member) 9875 9879 9876 - static struct tg3_ethtool_stats *tg3_get_estats(struct tg3 *tp, 9877 - struct tg3_ethtool_stats *estats) 9880 + static void tg3_get_estats(struct tg3 *tp, struct tg3_ethtool_stats *estats) 9878 9881 { 9879 9882 struct tg3_ethtool_stats *old_estats = &tp->estats_prev; 9880 9883 struct tg3_hw_stats *hw_stats = tp->hw_stats; 9881 9884 9882 9885 if (!hw_stats) 9883 - return old_estats; 9886 + return; 9884 9887 9885 9888 ESTAT_ADD(rx_octets); 9886 9889 ESTAT_ADD(rx_fragments); ··· 9958 9963 ESTAT_ADD(nic_tx_threshold_hit); 9959 9964 9960 9965 ESTAT_ADD(mbuf_lwm_thresh_hit); 9961 - 9962 - return estats; 9963 9966 } 9964 9967 9965 - static struct rtnl_link_stats64 *tg3_get_stats64(struct net_device *dev, 9966 - struct rtnl_link_stats64 *stats) 9968 + static void tg3_get_nstats(struct tg3 *tp, struct rtnl_link_stats64 *stats) 9967 9969 { 9968 - struct tg3 *tp = netdev_priv(dev); 9969 9970 struct rtnl_link_stats64 *old_stats = &tp->net_stats_prev; 
9970 9971 struct tg3_hw_stats *hw_stats = tp->hw_stats; 9971 - 9972 - if (!hw_stats) 9973 - return old_stats; 9974 9972 9975 9973 stats->rx_packets = old_stats->rx_packets + 9976 9974 get_stat64(&hw_stats->rx_ucast_packets) + ··· 10007 10019 get_stat64(&hw_stats->tx_carrier_sense_errors); 10008 10020 10009 10021 stats->rx_crc_errors = old_stats->rx_crc_errors + 10010 - calc_crc_errors(tp); 10022 + tg3_calc_crc_errors(tp); 10011 10023 10012 10024 stats->rx_missed_errors = old_stats->rx_missed_errors + 10013 10025 get_stat64(&hw_stats->rx_discards); 10014 10026 10015 10027 stats->rx_dropped = tp->rx_dropped; 10016 10028 stats->tx_dropped = tp->tx_dropped; 10017 - 10018 - return stats; 10019 10029 } 10020 10030 10021 10031 static inline u32 calc_crc(unsigned char *buf, int len) ··· 15393 15407 ec->tx_coalesce_usecs_irq = 0; 15394 15408 ec->stats_block_coalesce_usecs = 0; 15395 15409 } 15410 + } 15411 + 15412 + static struct rtnl_link_stats64 *tg3_get_stats64(struct net_device *dev, 15413 + struct rtnl_link_stats64 *stats) 15414 + { 15415 + struct tg3 *tp = netdev_priv(dev); 15416 + 15417 + if (!tp->hw_stats) 15418 + return &tp->net_stats_prev; 15419 + 15420 + spin_lock_bh(&tp->lock); 15421 + tg3_get_nstats(tp, stats); 15422 + spin_unlock_bh(&tp->lock); 15423 + 15424 + return stats; 15396 15425 } 15397 15426 15398 15427 static const struct net_device_ops tg3_netdev_ops = {
+2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 196 196 CH_DEVICE(0x4408, 4), 197 197 CH_DEVICE(0x4409, 4), 198 198 CH_DEVICE(0x440a, 4), 199 + CH_DEVICE(0x440d, 4), 200 + CH_DEVICE(0x440e, 4), 199 201 { 0, } 200 202 }; 201 203
+2
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
··· 2892 2892 CH_DEVICE(0x4808, 0), /* T420-cx */ 2893 2893 CH_DEVICE(0x4809, 0), /* T420-bt */ 2894 2894 CH_DEVICE(0x480a, 0), /* T404-bt */ 2895 + CH_DEVICE(0x480d, 0), /* T480-cr */ 2896 + CH_DEVICE(0x480e, 0), /* T440-lp-cr */ 2895 2897 { 0, } 2896 2898 }; 2897 2899
+1 -1
drivers/net/ethernet/cisco/enic/enic.h
··· 94 94 u32 rx_coalesce_usecs; 95 95 u32 tx_coalesce_usecs; 96 96 #ifdef CONFIG_PCI_IOV 97 - u32 num_vfs; 97 + u16 num_vfs; 98 98 #endif 99 99 struct enic_port_profile *pp; 100 100
+1 -1
drivers/net/ethernet/cisco/enic/enic_main.c
··· 2370 2370 pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV); 2371 2371 if (pos) { 2372 2372 pci_read_config_word(pdev, pos + PCI_SRIOV_TOTAL_VF, 2373 - (u16 *)&enic->num_vfs); 2373 + &enic->num_vfs); 2374 2374 if (enic->num_vfs) { 2375 2375 err = pci_enable_sriov(pdev, enic->num_vfs); 2376 2376 if (err) {
+3 -1
drivers/net/ethernet/ibm/ehea/ehea_main.c
··· 336 336 stats->tx_bytes = tx_bytes; 337 337 stats->rx_packets = rx_packets; 338 338 339 - return &port->stats; 339 + stats->multicast = port->stats.multicast; 340 + stats->rx_errors = port->stats.rx_errors; 341 + return stats; 340 342 } 341 343 342 344 static void ehea_update_stats(struct work_struct *work)
-5
drivers/net/ethernet/mellanox/mlx4/qp.c
··· 151 151 context->log_page_size = mtt->page_shift - MLX4_ICM_PAGE_SHIFT; 152 152 } 153 153 154 - port = ((context->pri_path.sched_queue >> 6) & 1) + 1; 155 - if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH) 156 - context->pri_path.sched_queue = (context->pri_path.sched_queue & 157 - 0xc3); 158 - 159 154 *(__be32 *) mailbox->buf = cpu_to_be32(optpar); 160 155 memcpy(mailbox->buf + 8, context, sizeof *context); 161 156
+1 -2
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 2255 2255 2256 2256 if (vhcr->op_modifier == 0) { 2257 2257 err = handle_resize(dev, slave, vhcr, inbox, outbox, cmd, cq); 2258 - if (err) 2259 - goto ex_put; 2258 + goto ex_put; 2260 2259 } 2261 2260 2262 2261 err = mlx4_DMA_wrapper(dev, slave, vhcr, inbox, outbox, cmd);
+8 -7
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.c
··· 321 321 pr_debug("AutoNeg specified along with Speed or Duplex, AutoNeg parameter ignored\n"); 322 322 hw->phy.autoneg_advertised = opt.def; 323 323 } else { 324 - hw->phy.autoneg_advertised = AutoNeg; 325 - pch_gbe_validate_option( 326 - (int *)(&hw->phy.autoneg_advertised), 327 - &opt, adapter); 324 + int tmp = AutoNeg; 325 + 326 + pch_gbe_validate_option(&tmp, &opt, adapter); 327 + hw->phy.autoneg_advertised = tmp; 328 328 } 329 329 } 330 330 ··· 495 495 .arg = { .l = { .nr = (int)ARRAY_SIZE(fc_list), 496 496 .p = fc_list } } 497 497 }; 498 - hw->mac.fc = FlowControl; 499 - pch_gbe_validate_option((int *)(&hw->mac.fc), 500 - &opt, adapter); 498 + int tmp = FlowControl; 499 + 500 + pch_gbe_validate_option(&tmp, &opt, adapter); 501 + hw->mac.fc = tmp; 501 502 } 502 503 503 504 pch_gbe_check_copper_options(adapter);
+1
drivers/net/ethernet/packetengines/Kconfig
··· 4 4 5 5 config NET_PACKET_ENGINE 6 6 bool "Packet Engine devices" 7 + default y 7 8 depends on PCI 8 9 ---help--- 9 10 If you have a network (Ethernet) card belonging to this class, say Y
+2 -3
drivers/net/ethernet/qlogic/qla3xxx.c
··· 3017 3017 (void __iomem *)port_regs; 3018 3018 u32 delay = 10; 3019 3019 int status = 0; 3020 - unsigned long hw_flags = 0; 3021 3020 3022 3021 if (ql_mii_setup(qdev)) 3023 3022 return -1; ··· 3227 3228 value = ql_read_page0_reg(qdev, &port_regs->portStatus); 3228 3229 if (value & PORT_STATUS_IC) 3229 3230 break; 3230 - spin_unlock_irqrestore(&qdev->hw_lock, hw_flags); 3231 + spin_unlock_irq(&qdev->hw_lock); 3231 3232 msleep(500); 3232 - spin_lock_irqsave(&qdev->hw_lock, hw_flags); 3233 + spin_lock_irq(&qdev->hw_lock); 3233 3234 } while (--delay); 3234 3235 3235 3236 if (delay == 0) {
+13
drivers/net/ethernet/realtek/r8169.c
··· 3781 3781 3782 3782 static void rtl_hw_jumbo_enable(struct rtl8169_private *tp) 3783 3783 { 3784 + void __iomem *ioaddr = tp->mmio_addr; 3785 + 3786 + RTL_W8(Cfg9346, Cfg9346_Unlock); 3784 3787 rtl_generic_op(tp, tp->jumbo_ops.enable); 3788 + RTL_W8(Cfg9346, Cfg9346_Lock); 3785 3789 } 3786 3790 3787 3791 static void rtl_hw_jumbo_disable(struct rtl8169_private *tp) 3788 3792 { 3793 + void __iomem *ioaddr = tp->mmio_addr; 3794 + 3795 + RTL_W8(Cfg9346, Cfg9346_Unlock); 3789 3796 rtl_generic_op(tp, tp->jumbo_ops.disable); 3797 + RTL_W8(Cfg9346, Cfg9346_Lock); 3790 3798 } 3791 3799 3792 3800 static void r8168c_hw_jumbo_enable(struct rtl8169_private *tp) ··· 6194 6186 { 6195 6187 struct net_device *dev = pci_get_drvdata(pdev); 6196 6188 struct rtl8169_private *tp = netdev_priv(dev); 6189 + struct device *d = &pdev->dev; 6190 + 6191 + pm_runtime_get_sync(d); 6197 6192 6198 6193 rtl8169_net_suspend(dev); 6199 6194 ··· 6218 6207 pci_wake_from_d3(pdev, true); 6219 6208 pci_set_power_state(pdev, PCI_D3hot); 6220 6209 } 6210 + 6211 + pm_runtime_put_noidle(d); 6221 6212 } 6222 6213 6223 6214 static struct pci_driver rtl8169_pci_driver = {
+2 -2
drivers/net/hyperv/netvsc_drv.c
··· 313 313 static void netvsc_get_drvinfo(struct net_device *net, 314 314 struct ethtool_drvinfo *info) 315 315 { 316 - strcpy(info->driver, "hv_netvsc"); 316 + strcpy(info->driver, KBUILD_MODNAME); 317 317 strcpy(info->version, HV_DRV_VERSION); 318 318 strcpy(info->fw_version, "N/A"); 319 319 } ··· 485 485 486 486 /* The one and only one */ 487 487 static struct hv_driver netvsc_drv = { 488 - .name = "netvsc", 488 + .name = KBUILD_MODNAME, 489 489 .id_table = id_table, 490 490 .probe = netvsc_probe, 491 491 .remove = netvsc_remove,
+2
drivers/net/usb/usbnet.c
··· 589 589 entry = (struct skb_data *) skb->cb; 590 590 urb = entry->urb; 591 591 592 + spin_unlock_irqrestore(&q->lock, flags); 592 593 // during some PM-driven resume scenarios, 593 594 // these (async) unlinks complete immediately 594 595 retval = usb_unlink_urb (urb); ··· 597 596 netdev_dbg(dev->net, "unlink urb err, %d\n", retval); 598 597 else 599 598 count++; 599 + spin_lock_irqsave(&q->lock, flags); 600 600 } 601 601 spin_unlock_irqrestore (&q->lock, flags); 602 602 return count;
+1 -6
drivers/net/vmxnet3/vmxnet3_drv.c
··· 830 830 ctx->l4_hdr_size = ((struct tcphdr *) 831 831 skb_transport_header(skb))->doff * 4; 832 832 else if (iph->protocol == IPPROTO_UDP) 833 - /* 834 - * Use tcp header size so that bytes to 835 - * be copied are more than required by 836 - * the device. 837 - */ 838 833 ctx->l4_hdr_size = 839 - sizeof(struct tcphdr); 834 + sizeof(struct udphdr); 840 835 else 841 836 ctx->l4_hdr_size = 0; 842 837 } else {
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
··· 70 70 /* 71 71 * Version numbers 72 72 */ 73 - #define VMXNET3_DRIVER_VERSION_STRING "1.1.18.0-k" 73 + #define VMXNET3_DRIVER_VERSION_STRING "1.1.29.0-k" 74 74 75 75 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */ 76 - #define VMXNET3_DRIVER_VERSION_NUM 0x01011200 76 + #define VMXNET3_DRIVER_VERSION_NUM 0x01011D00 77 77 78 78 #if defined(CONFIG_PCI_MSI) 79 79 /* RSS only makes sense if MSI-X is supported. */
+1 -24
drivers/net/wireless/ath/ath9k/ar5008_phy.c
··· 489 489 ATH_ALLOC_BANK(ah->analogBank6Data, ah->iniBank6.ia_rows); 490 490 ATH_ALLOC_BANK(ah->analogBank6TPCData, ah->iniBank6TPC.ia_rows); 491 491 ATH_ALLOC_BANK(ah->analogBank7Data, ah->iniBank7.ia_rows); 492 - ATH_ALLOC_BANK(ah->addac5416_21, 493 - ah->iniAddac.ia_rows * ah->iniAddac.ia_columns); 494 492 ATH_ALLOC_BANK(ah->bank6Temp, ah->iniBank6.ia_rows); 495 493 496 494 return 0; ··· 517 519 ATH_FREE_BANK(ah->analogBank6Data); 518 520 ATH_FREE_BANK(ah->analogBank6TPCData); 519 521 ATH_FREE_BANK(ah->analogBank7Data); 520 - ATH_FREE_BANK(ah->addac5416_21); 521 522 ATH_FREE_BANK(ah->bank6Temp); 522 523 523 524 #undef ATH_FREE_BANK ··· 802 805 if (ah->eep_ops->set_addac) 803 806 ah->eep_ops->set_addac(ah, chan); 804 807 805 - if (AR_SREV_5416_22_OR_LATER(ah)) { 806 - REG_WRITE_ARRAY(&ah->iniAddac, 1, regWrites); 807 - } else { 808 - struct ar5416IniArray temp; 809 - u32 addacSize = 810 - sizeof(u32) * ah->iniAddac.ia_rows * 811 - ah->iniAddac.ia_columns; 812 - 813 - /* For AR5416 2.0/2.1 */ 814 - memcpy(ah->addac5416_21, 815 - ah->iniAddac.ia_array, addacSize); 816 - 817 - /* override CLKDRV value at [row, column] = [31, 1] */ 818 - (ah->addac5416_21)[31 * ah->iniAddac.ia_columns + 1] = 0; 819 - 820 - temp.ia_array = ah->addac5416_21; 821 - temp.ia_columns = ah->iniAddac.ia_columns; 822 - temp.ia_rows = ah->iniAddac.ia_rows; 823 - REG_WRITE_ARRAY(&temp, 1, regWrites); 824 - } 825 - 808 + REG_WRITE_ARRAY(&ah->iniAddac, 1, regWrites); 826 809 REG_WRITE(ah, AR_PHY_ADC_SERIAL_CTL, AR_PHY_SEL_INTERNAL_ADDAC); 827 810 828 811 ENABLE_REGWRITE_BUFFER(ah);
+19
drivers/net/wireless/ath/ath9k/ar9002_hw.c
··· 180 180 INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac, 181 181 ARRAY_SIZE(ar5416Addac), 2); 182 182 } 183 + 184 + /* iniAddac needs to be modified for these chips */ 185 + if (AR_SREV_9160(ah) || !AR_SREV_5416_22_OR_LATER(ah)) { 186 + struct ar5416IniArray *addac = &ah->iniAddac; 187 + u32 size = sizeof(u32) * addac->ia_rows * addac->ia_columns; 188 + u32 *data; 189 + 190 + data = kmalloc(size, GFP_KERNEL); 191 + if (!data) 192 + return; 193 + 194 + memcpy(data, addac->ia_array, size); 195 + addac->ia_array = data; 196 + 197 + if (!AR_SREV_5416_22_OR_LATER(ah)) { 198 + /* override CLKDRV value */ 199 + INI_RA(addac, 31,1) = 0; 200 + } 201 + } 183 202 } 184 203 185 204 /* Support for Japan ch.14 (2484) spread */
-1
drivers/net/wireless/ath/ath9k/hw.h
··· 940 940 u32 *analogBank6Data; 941 941 u32 *analogBank6TPCData; 942 942 u32 *analogBank7Data; 943 - u32 *addac5416_21; 944 943 u32 *bank6Temp; 945 944 946 945 u8 txpower_limit;
+6 -3
drivers/net/wireless/ath/carl9170/tx.c
··· 1234 1234 { 1235 1235 struct ieee80211_sta *sta; 1236 1236 struct carl9170_sta_info *sta_info; 1237 + struct ieee80211_tx_info *tx_info; 1237 1238 1238 1239 rcu_read_lock(); 1239 1240 sta = __carl9170_get_tx_sta(ar, skb); ··· 1242 1241 goto out_rcu; 1243 1242 1244 1243 sta_info = (void *) sta->drv_priv; 1245 - if (unlikely(sta_info->sleeping)) { 1246 - struct ieee80211_tx_info *tx_info; 1244 + tx_info = IEEE80211_SKB_CB(skb); 1247 1245 1246 + if (unlikely(sta_info->sleeping) && 1247 + !(tx_info->flags & (IEEE80211_TX_CTL_POLL_RESPONSE | 1248 + IEEE80211_TX_CTL_CLEAR_PS_FILT))) { 1248 1249 rcu_read_unlock(); 1249 1250 1250 - tx_info = IEEE80211_SKB_CB(skb); 1251 1251 if (tx_info->flags & IEEE80211_TX_CTL_AMPDU) 1252 1252 atomic_dec(&ar->tx_ampdu_upload); 1253 1253 1254 1254 tx_info->flags |= IEEE80211_TX_STAT_TX_FILTERED; 1255 + carl9170_release_dev_space(ar, skb); 1255 1256 carl9170_tx_status(ar, skb, false); 1256 1257 return true; 1257 1258 }
+4 -8
drivers/net/wireless/brcm80211/brcmsmac/ampdu.c
··· 1051 1051 } 1052 1052 /* either retransmit or send bar if ack not recd */ 1053 1053 if (!ack_recd) { 1054 - struct ieee80211_tx_rate *txrate = 1055 - tx_info->status.rates; 1056 - if (retry && (txrate[0].count < (int)retry_limit)) { 1054 + if (retry && (ini->txretry[index] < (int)retry_limit)) { 1057 1055 ini->txretry[index]++; 1058 1056 ini->tx_in_transit--; 1059 1057 /* 1060 1058 * Use high prededence for retransmit to 1061 1059 * give some punch 1062 1060 */ 1063 - /* brcms_c_txq_enq(wlc, scb, p, 1064 - * BRCMS_PRIO_TO_PREC(tid)); */ 1065 1061 brcms_c_txq_enq(wlc, scb, p, 1066 1062 BRCMS_PRIO_TO_HI_PREC(tid)); 1067 1063 } else { ··· 1070 1074 IEEE80211_TX_STAT_AMPDU_NO_BACK; 1071 1075 skb_pull(p, D11_PHY_HDR_LEN); 1072 1076 skb_pull(p, D11_TXH_LEN); 1073 - wiphy_err(wiphy, "%s: BA Timeout, seq %d, in_" 1074 - "transit %d\n", "AMPDU status", seq, 1075 - ini->tx_in_transit); 1077 + BCMMSG(wiphy, 1078 + "BA Timeout, seq %d, in_transit %d\n", 1079 + seq, ini->tx_in_transit); 1076 1080 ieee80211_tx_status_irqsafe(wlc->pub->ieee_hw, 1077 1081 p); 1078 1082 }
+1 -1
drivers/net/wireless/iwlwifi/iwl-agn-lib.c
··· 1240 1240 .flags = CMD_SYNC, 1241 1241 .data[0] = key_data.rsc_tsc, 1242 1242 .dataflags[0] = IWL_HCMD_DFL_NOCOPY, 1243 - .len[0] = sizeof(key_data.rsc_tsc), 1243 + .len[0] = sizeof(*key_data.rsc_tsc), 1244 1244 }; 1245 1245 1246 1246 ret = iwl_trans_send_cmd(trans(priv), &rsc_tsc_cmd);
+9 -1
drivers/net/wireless/iwlwifi/iwl-agn-sta.c
··· 1187 1187 unsigned long flags; 1188 1188 struct iwl_addsta_cmd sta_cmd; 1189 1189 u8 sta_id = iwlagn_key_sta_id(priv, ctx->vif, sta); 1190 + __le16 key_flags; 1190 1191 1191 1192 /* if station isn't there, neither is the key */ 1192 1193 if (sta_id == IWL_INVALID_STATION) ··· 1213 1212 IWL_ERR(priv, "offset %d not used in uCode key table.\n", 1214 1213 keyconf->hw_key_idx); 1215 1214 1216 - sta_cmd.key.key_flags = STA_KEY_FLG_NO_ENC | STA_KEY_FLG_INVALID; 1215 + key_flags = cpu_to_le16(keyconf->keyidx << STA_KEY_FLG_KEYID_POS); 1216 + key_flags |= STA_KEY_FLG_MAP_KEY_MSK | STA_KEY_FLG_NO_ENC | 1217 + STA_KEY_FLG_INVALID; 1218 + 1219 + if (!(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE)) 1220 + key_flags |= STA_KEY_MULTICAST_MSK; 1221 + 1222 + sta_cmd.key.key_flags = key_flags; 1217 1223 sta_cmd.key.key_offset = WEP_INVALID_OFFSET; 1218 1224 sta_cmd.sta.modify_mask = STA_MODIFY_KEY_MASK; 1219 1225 sta_cmd.mode = STA_CONTROL_MODIFY_MSK;
+1
drivers/net/wireless/mwifiex/cfg80211.c
··· 846 846 priv->sec_info.wpa_enabled = false; 847 847 priv->sec_info.wpa2_enabled = false; 848 848 priv->wep_key_curr_index = 0; 849 + priv->sec_info.encryption_mode = 0; 849 850 ret = mwifiex_set_encode(priv, NULL, 0, 0, 1); 850 851 851 852 if (mode == NL80211_IFTYPE_ADHOC) {
+2 -1
drivers/net/wireless/rt2x00/rt2x00dev.c
··· 1220 1220 cancel_work_sync(&rt2x00dev->rxdone_work); 1221 1221 cancel_work_sync(&rt2x00dev->txdone_work); 1222 1222 } 1223 - destroy_workqueue(rt2x00dev->workqueue); 1223 + if (rt2x00dev->workqueue) 1224 + destroy_workqueue(rt2x00dev->workqueue); 1224 1225 1225 1226 /* 1226 1227 * Free the tx status fifo.
-1
drivers/of/fdt.c
··· 23 23 #include <asm/machdep.h> 24 24 #endif /* CONFIG_PPC */ 25 25 26 - #include <asm/setup.h> 27 26 #include <asm/page.h> 28 27 29 28 char *of_fdt_get_string(struct boot_param_header *blob, u32 offset)
+1 -1
drivers/of/of_mdio.c
··· 182 182 if (!phy_id || sz < sizeof(*phy_id)) 183 183 return NULL; 184 184 185 - sprintf(bus_id, PHY_ID_FMT, "0", be32_to_cpu(phy_id[0])); 185 + sprintf(bus_id, PHY_ID_FMT, "fixed-0", be32_to_cpu(phy_id[0])); 186 186 187 187 phy = phy_connect(dev, bus_id, hndlr, 0, iface); 188 188 return IS_ERR(phy) ? NULL : phy;
+2 -2
drivers/pps/pps.c
··· 369 369 int err; 370 370 371 371 pps_class = class_create(THIS_MODULE, "pps"); 372 - if (!pps_class) { 372 + if (IS_ERR(pps_class)) { 373 373 pr_err("failed to allocate class\n"); 374 - return -ENOMEM; 374 + return PTR_ERR(pps_class); 375 375 } 376 376 pps_class->dev_attrs = pps_attrs; 377 377
+3 -2
drivers/rapidio/devices/tsi721.c
··· 410 410 */ 411 411 mport = priv->mport; 412 412 413 - wr_ptr = ioread32(priv->regs + TSI721_IDQ_WP(IDB_QUEUE)); 414 - rd_ptr = ioread32(priv->regs + TSI721_IDQ_RP(IDB_QUEUE)); 413 + wr_ptr = ioread32(priv->regs + TSI721_IDQ_WP(IDB_QUEUE)) % IDB_QSIZE; 414 + rd_ptr = ioread32(priv->regs + TSI721_IDQ_RP(IDB_QUEUE)) % IDB_QSIZE; 415 415 416 416 while (wr_ptr != rd_ptr) { 417 417 idb_entry = (u64 *)(priv->idb_base + 418 418 (TSI721_IDB_ENTRY_SIZE * rd_ptr)); 419 419 rd_ptr++; 420 + rd_ptr %= IDB_QSIZE; 420 421 idb.msg = *idb_entry; 421 422 *idb_entry = 0; 422 423
+4 -4
drivers/regulator/da9052-regulator.c
··· 260 260 * the LDO activate bit to implment the changes on the 261 261 * LDO output. 262 262 */ 263 - return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 0, 264 - info->activate_bit); 263 + return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 264 + info->activate_bit, info->activate_bit); 265 265 } 266 266 267 267 static int da9052_set_dcdc_voltage(struct regulator_dev *rdev, ··· 280 280 * the DCDC activate bit to implment the changes on the 281 281 * DCDC output. 282 282 */ 283 - return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 0, 284 - info->activate_bit); 283 + return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 284 + info->activate_bit, info->activate_bit); 285 285 } 286 286 287 287 static int da9052_get_regulator_voltage_sel(struct regulator_dev *rdev)
+1 -1
drivers/regulator/tps65910-regulator.c
··· 662 662 tps65910_reg_write(pmic, TPS65910_VDD2_OP, vsel); 663 663 break; 664 664 case TPS65911_REG_VDDCTRL: 665 - vsel = selector; 665 + vsel = selector + 3; 666 666 tps65910_reg_write(pmic, TPS65911_VDDCTRL_OP, vsel); 667 667 } 668 668
+7 -7
drivers/rtc/rtc-r9701.c
··· 125 125 unsigned char tmp; 126 126 int res; 127 127 128 + tmp = R100CNT; 129 + res = read_regs(&spi->dev, &tmp, 1); 130 + if (res || tmp != 0x20) { 131 + dev_err(&spi->dev, "cannot read RTC register\n"); 132 + return -ENODEV; 133 + } 134 + 128 135 rtc = rtc_device_register("r9701", 129 136 &spi->dev, &r9701_rtc_ops, THIS_MODULE); 130 137 if (IS_ERR(rtc)) 131 138 return PTR_ERR(rtc); 132 139 133 140 dev_set_drvdata(&spi->dev, rtc); 134 - 135 - tmp = R100CNT; 136 - res = read_regs(&spi->dev, &tmp, 1); 137 - if (res || tmp != 0x20) { 138 - rtc_device_unregister(rtc); 139 - return res; 140 - } 141 141 142 142 return 0; 143 143 }
+2 -2
drivers/s390/cio/qdio_main.c
··· 167 167 DBF_ERROR("%4x EQBS ERROR", SCH_NO(q)); 168 168 DBF_ERROR("%3d%3d%2d", count, tmp_count, nr); 169 169 q->handler(q->irq_ptr->cdev, QDIO_ERROR_ACTIVATE_CHECK_CONDITION, 170 - 0, -1, -1, q->irq_ptr->int_parm); 170 + q->nr, q->first_to_kick, count, q->irq_ptr->int_parm); 171 171 return 0; 172 172 } 173 173 ··· 215 215 DBF_ERROR("%4x SQBS ERROR", SCH_NO(q)); 216 216 DBF_ERROR("%3d%3d%2d", count, tmp_count, nr); 217 217 q->handler(q->irq_ptr->cdev, QDIO_ERROR_ACTIVATE_CHECK_CONDITION, 218 - 0, -1, -1, q->irq_ptr->int_parm); 218 + q->nr, q->first_to_kick, count, q->irq_ptr->int_parm); 219 219 return 0; 220 220 } 221 221
+1 -1
drivers/scsi/sd_dif.c
··· 408 408 kunmap_atomic(sdt, KM_USER0); 409 409 } 410 410 411 - bio->bi_flags |= BIO_MAPPED_INTEGRITY; 411 + bio->bi_flags |= (1 << BIO_MAPPED_INTEGRITY); 412 412 } 413 413 414 414 return 0;
+1 -1
drivers/spi/spi-pl022.c
··· 1083 1083 return -ENOMEM; 1084 1084 } 1085 1085 1086 - static int __init pl022_dma_probe(struct pl022 *pl022) 1086 + static int __devinit pl022_dma_probe(struct pl022 *pl022) 1087 1087 { 1088 1088 dma_cap_mask_t mask; 1089 1089
+1 -1
drivers/tty/Kconfig
··· 365 365 366 366 config PPC_EARLY_DEBUG_EHV_BC 367 367 bool "Early console (udbg) support for ePAPR hypervisors" 368 - depends on PPC_EPAPR_HV_BYTECHAN 368 + depends on PPC_EPAPR_HV_BYTECHAN=y 369 369 help 370 370 Select this option to enable early console (a.k.a. "udbg") support 371 371 via an ePAPR byte channel. You also need to choose the byte channel
+2 -9
drivers/usb/host/ehci-fsl.c
··· 239 239 ehci_writel(ehci, portsc, &ehci->regs->port_status[port_offset]); 240 240 } 241 241 242 - static int ehci_fsl_usb_setup(struct ehci_hcd *ehci) 242 + static void ehci_fsl_usb_setup(struct ehci_hcd *ehci) 243 243 { 244 244 struct usb_hcd *hcd = ehci_to_hcd(ehci); 245 245 struct fsl_usb2_platform_data *pdata; ··· 299 299 #endif 300 300 out_be32(non_ehci + FSL_SOC_USB_SICTRL, 0x00000001); 301 301 } 302 - 303 - if (!(in_be32(non_ehci + FSL_SOC_USB_CTRL) & CTRL_PHY_CLK_VALID)) { 304 - printk(KERN_WARNING "fsl-ehci: USB PHY clock invalid\n"); 305 - return -ENODEV; 306 - } 307 - return 0; 308 302 } 309 303 310 304 /* called after powerup, by probe or system-pm "wakeup" */ 311 305 static int ehci_fsl_reinit(struct ehci_hcd *ehci) 312 306 { 313 - if (ehci_fsl_usb_setup(ehci)) 314 - return -ENODEV; 307 + ehci_fsl_usb_setup(ehci); 315 308 ehci_port_power(ehci, 0); 316 309 317 310 return 0;
-1
drivers/usb/host/ehci-fsl.h
··· 45 45 #define FSL_SOC_USB_PRICTRL 0x40c /* NOTE: big-endian */ 46 46 #define FSL_SOC_USB_SICTRL 0x410 /* NOTE: big-endian */ 47 47 #define FSL_SOC_USB_CTRL 0x500 /* NOTE: big-endian */ 48 - #define CTRL_PHY_CLK_VALID (1 << 17) 49 48 #define SNOOP_SIZE_2GB 0x1e 50 49 #endif /* _EHCI_FSL_H */
+12 -12
fs/aio.c
··· 228 228 call_rcu(&ctx->rcu_head, ctx_rcu_free); 229 229 } 230 230 231 - static inline void get_ioctx(struct kioctx *kioctx) 232 - { 233 - BUG_ON(atomic_read(&kioctx->users) <= 0); 234 - atomic_inc(&kioctx->users); 235 - } 236 - 237 231 static inline int try_get_ioctx(struct kioctx *kioctx) 238 232 { 239 233 return atomic_inc_not_zero(&kioctx->users); ··· 267 273 mm = ctx->mm = current->mm; 268 274 atomic_inc(&mm->mm_count); 269 275 270 - atomic_set(&ctx->users, 1); 276 + atomic_set(&ctx->users, 2); 271 277 spin_lock_init(&ctx->ctx_lock); 272 278 spin_lock_init(&ctx->ring_info.ring_lock); 273 279 init_waitqueue_head(&ctx->wait); ··· 484 490 kmem_cache_free(kiocb_cachep, req); 485 491 ctx->reqs_active--; 486 492 } 493 + if (unlikely(!ctx->reqs_active && ctx->dead)) 494 + wake_up_all(&ctx->wait); 487 495 spin_unlock_irq(&ctx->ctx_lock); 488 496 } 489 497 ··· 603 607 fput(req->ki_filp); 604 608 605 609 /* Link the iocb into the context's free list */ 610 + rcu_read_lock(); 606 611 spin_lock_irq(&ctx->ctx_lock); 607 612 really_put_req(ctx, req); 613 + /* 614 + * at that point ctx might've been killed, but actual 615 + * freeing is RCU'd 616 + */ 608 617 spin_unlock_irq(&ctx->ctx_lock); 618 + rcu_read_unlock(); 609 619 610 - put_ioctx(ctx); 611 620 spin_lock_irq(&fput_lock); 612 621 } 613 622 spin_unlock_irq(&fput_lock); ··· 643 642 * this function will be executed w/out any aio kthread wakeup. 644 643 */ 645 644 if (unlikely(!fput_atomic(req->ki_filp))) { 646 - get_ioctx(ctx); 647 645 spin_lock(&fput_lock); 648 646 list_add(&req->ki_list, &fput_head); 649 647 spin_unlock(&fput_lock); ··· 1336 1336 ret = PTR_ERR(ioctx); 1337 1337 if (!IS_ERR(ioctx)) { 1338 1338 ret = put_user(ioctx->user_id, ctxp); 1339 - if (!ret) 1339 + if (!ret) { 1340 + put_ioctx(ioctx); 1340 1341 return 0; 1341 - 1342 - get_ioctx(ioctx); /* io_destroy() expects us to hold a ref */ 1342 + } 1343 1343 io_destroy(ioctx); 1344 1344 } 1345 1345
+7 -7
fs/binfmt_aout.c
··· 259 259 current->mm->free_area_cache = current->mm->mmap_base; 260 260 current->mm->cached_hole_size = 0; 261 261 262 + retval = setup_arg_pages(bprm, STACK_TOP, EXSTACK_DEFAULT); 263 + if (retval < 0) { 264 + /* Someone check-me: is this error path enough? */ 265 + send_sig(SIGKILL, current, 0); 266 + return retval; 267 + } 268 + 262 269 install_exec_creds(bprm); 263 270 current->flags &= ~PF_FORKNOEXEC; 264 271 ··· 356 349 retval = set_brk(current->mm->start_brk, current->mm->brk); 357 350 if (retval < 0) { 358 351 send_sig(SIGKILL, current, 0); 359 - return retval; 360 - } 361 - 362 - retval = setup_arg_pages(bprm, STACK_TOP, EXSTACK_DEFAULT); 363 - if (retval < 0) { 364 - /* Someone check-me: is this error path enough? */ 365 - send_sig(SIGKILL, current, 0); 366 352 return retval; 367 353 } 368 354
+6 -2
fs/btrfs/backref.c
··· 583 583 struct btrfs_path *path; 584 584 struct btrfs_key info_key = { 0 }; 585 585 struct btrfs_delayed_ref_root *delayed_refs = NULL; 586 - struct btrfs_delayed_ref_head *head = NULL; 586 + struct btrfs_delayed_ref_head *head; 587 587 int info_level = 0; 588 588 int ret; 589 589 struct list_head prefs_delayed; ··· 607 607 * at a specified point in time 608 608 */ 609 609 again: 610 + head = NULL; 611 + 610 612 ret = btrfs_search_slot(trans, fs_info->extent_root, &key, path, 0, 0); 611 613 if (ret < 0) 612 614 goto out; ··· 637 635 goto again; 638 636 } 639 637 ret = __add_delayed_refs(head, seq, &info_key, &prefs_delayed); 640 - if (ret) 638 + if (ret) { 639 + spin_unlock(&delayed_refs->lock); 641 640 goto out; 641 + } 642 642 } 643 643 spin_unlock(&delayed_refs->lock); 644 644
+1 -1
fs/btrfs/reada.c
··· 305 305 306 306 spin_lock(&fs_info->reada_lock); 307 307 ret = radix_tree_insert(&dev->reada_zones, 308 - (unsigned long)zone->end >> PAGE_CACHE_SHIFT, 308 + (unsigned long)(zone->end >> PAGE_CACHE_SHIFT), 309 309 zone); 310 310 spin_unlock(&fs_info->reada_lock); 311 311
+18 -2
fs/cifs/dir.c
··· 584 584 * If either that or op not supported returned, follow 585 585 * the normal lookup. 586 586 */ 587 - if ((rc == 0) || (rc == -ENOENT)) 587 + switch (rc) { 588 + case 0: 589 + /* 590 + * The server may allow us to open things like 591 + * FIFOs, but the client isn't set up to deal 592 + * with that. If it's not a regular file, just 593 + * close it and proceed as if it were a normal 594 + * lookup. 595 + */ 596 + if (newInode && !S_ISREG(newInode->i_mode)) { 597 + CIFSSMBClose(xid, pTcon, fileHandle); 598 + break; 599 + } 600 + case -ENOENT: 588 601 posix_open = true; 589 - else if ((rc == -EINVAL) || (rc != -EOPNOTSUPP)) 602 + case -EOPNOTSUPP: 603 + break; 604 + default: 590 605 pTcon->broken_posix_open = true; 606 + } 591 607 } 592 608 if (!posix_open) 593 609 rc = cifs_get_inode_info_unix(&newInode, full_path,
+19 -9
fs/cifs/inode.c
··· 534 534 if (fattr->cf_cifsattrs & ATTR_DIRECTORY) { 535 535 fattr->cf_mode = S_IFDIR | cifs_sb->mnt_dir_mode; 536 536 fattr->cf_dtype = DT_DIR; 537 + /* 538 + * Server can return wrong NumberOfLinks value for directories 539 + * when Unix extensions are disabled - fake it. 540 + */ 541 + fattr->cf_nlink = 2; 537 542 } else { 538 543 fattr->cf_mode = S_IFREG | cifs_sb->mnt_file_mode; 539 544 fattr->cf_dtype = DT_REG; ··· 546 541 /* clear write bits if ATTR_READONLY is set */ 547 542 if (fattr->cf_cifsattrs & ATTR_READONLY) 548 543 fattr->cf_mode &= ~(S_IWUGO); 549 - } 550 544 551 - fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 545 + fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 546 + } 552 547 553 548 fattr->cf_uid = cifs_sb->mnt_uid; 554 549 fattr->cf_gid = cifs_sb->mnt_gid; ··· 1327 1322 } 1328 1323 /*BB check (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID ) to see if need 1329 1324 to set uid/gid */ 1330 - inc_nlink(inode); 1331 1325 1332 1326 cifs_unix_basic_to_fattr(&fattr, pInfo, cifs_sb); 1333 1327 cifs_fill_uniqueid(inode->i_sb, &fattr); ··· 1359 1355 d_drop(direntry); 1360 1356 } else { 1361 1357 mkdir_get_info: 1362 - inc_nlink(inode); 1363 1358 if (pTcon->unix_ext) 1364 1359 rc = cifs_get_inode_info_unix(&newinode, full_path, 1365 1360 inode->i_sb, xid); ··· 1439 1436 } 1440 1437 } 1441 1438 mkdir_out: 1439 + /* 1440 + * Force revalidate to get parent dir info when needed since cached 1441 + * attributes are invalid now. 
1442 + */ 1443 + CIFS_I(inode)->time = 0; 1442 1444 kfree(full_path); 1443 1445 FreeXid(xid); 1444 1446 cifs_put_tlink(tlink); ··· 1483 1475 cifs_put_tlink(tlink); 1484 1476 1485 1477 if (!rc) { 1486 - drop_nlink(inode); 1487 1478 spin_lock(&direntry->d_inode->i_lock); 1488 1479 i_size_write(direntry->d_inode, 0); 1489 1480 clear_nlink(direntry->d_inode); ··· 1490 1483 } 1491 1484 1492 1485 cifsInode = CIFS_I(direntry->d_inode); 1493 - cifsInode->time = 0; /* force revalidate to go get info when 1494 - needed */ 1486 + /* force revalidate to go get info when needed */ 1487 + cifsInode->time = 0; 1495 1488 1496 1489 cifsInode = CIFS_I(inode); 1497 - cifsInode->time = 0; /* force revalidate to get parent dir info 1498 - since cached search results now invalid */ 1490 + /* 1491 + * Force revalidate to get parent dir info when needed since cached 1492 + * attributes are invalid now. 1493 + */ 1494 + cifsInode->time = 0; 1499 1495 1500 1496 direntry->d_inode->i_ctime = inode->i_ctime = inode->i_mtime = 1501 1497 current_fs_time(inode->i_sb);
+20
fs/dcache.c
··· 137 137 } 138 138 #endif 139 139 140 + /* 141 + * Compare 2 name strings, return 0 if they match, otherwise non-zero. 142 + * The strings are both count bytes long, and count is non-zero. 143 + */ 144 + static inline int dentry_cmp(const unsigned char *cs, size_t scount, 145 + const unsigned char *ct, size_t tcount) 146 + { 147 + if (scount != tcount) 148 + return 1; 149 + 150 + do { 151 + if (*cs != *ct) 152 + return 1; 153 + cs++; 154 + ct++; 155 + tcount--; 156 + } while (tcount); 157 + return 0; 158 + } 159 + 140 160 static void __d_free(struct rcu_head *head) 141 161 { 142 162 struct dentry *dentry = container_of(head, struct dentry, d_u.d_rcu);
+2 -16
fs/exec.c
··· 1915 1915 { 1916 1916 struct task_struct *tsk = current; 1917 1917 struct mm_struct *mm = tsk->mm; 1918 - struct completion *vfork_done; 1919 1918 int core_waiters = -EBUSY; 1920 1919 1921 1920 init_completion(&core_state->startup); ··· 1926 1927 core_waiters = zap_threads(tsk, mm, core_state, exit_code); 1927 1928 up_write(&mm->mmap_sem); 1928 1929 1929 - if (unlikely(core_waiters < 0)) 1930 - goto fail; 1931 - 1932 - /* 1933 - * Make sure nobody is waiting for us to release the VM, 1934 - * otherwise we can deadlock when we wait on each other 1935 - */ 1936 - vfork_done = tsk->vfork_done; 1937 - if (vfork_done) { 1938 - tsk->vfork_done = NULL; 1939 - complete(vfork_done); 1940 - } 1941 - 1942 - if (core_waiters) 1930 + if (core_waiters > 0) 1943 1931 wait_for_completion(&core_state->startup); 1944 - fail: 1932 + 1945 1933 return core_waiters; 1946 1934 } 1947 1935
+2
include/linux/amba/serial.h
··· 23 23 #ifndef ASM_ARM_HARDWARE_SERIAL_AMBA_H 24 24 #define ASM_ARM_HARDWARE_SERIAL_AMBA_H 25 25 26 + #include <linux/types.h> 27 + 26 28 /* ------------------------------------------------------------------------------- 27 29 * From AMBA UART (PL010) Block Specification 28 30 * -------------------------------------------------------------------------------
-20
include/linux/dcache.h
··· 47 47 }; 48 48 extern struct dentry_stat_t dentry_stat; 49 49 50 - /* 51 - * Compare 2 name strings, return 0 if they match, otherwise non-zero. 52 - * The strings are both count bytes long, and count is non-zero. 53 - */ 54 - static inline int dentry_cmp(const unsigned char *cs, size_t scount, 55 - const unsigned char *ct, size_t tcount) 56 - { 57 - if (scount != tcount) 58 - return 1; 59 - 60 - do { 61 - if (*cs != *ct) 62 - return 1; 63 - cs++; 64 - ct++; 65 - tcount--; 66 - } while (tcount); 67 - return 0; 68 - } 69 - 70 50 /* Name hashing routines. Initial hash value */ 71 51 /* Hash courtesy of the R5 hash in reiserfs modulo sign bits */ 72 52 #define init_name_hash() 0
+7 -2
include/linux/kmsg_dump.h
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/list.h> 17 17 18 + /* 19 + * Keep this list arranged in rough order of priority. Anything listed after 20 + * KMSG_DUMP_OOPS will not be logged by default unless printk.always_kmsg_dump 21 + * is passed to the kernel. 22 + */ 18 23 enum kmsg_dump_reason { 19 - KMSG_DUMP_OOPS, 20 24 KMSG_DUMP_PANIC, 25 + KMSG_DUMP_OOPS, 26 + KMSG_DUMP_EMERG, 21 27 KMSG_DUMP_RESTART, 22 28 KMSG_DUMP_HALT, 23 29 KMSG_DUMP_POWEROFF, 24 - KMSG_DUMP_EMERG, 25 30 }; 26 31 27 32 /**
-5
include/linux/memcontrol.h
··· 129 129 extern void mem_cgroup_replace_page_cache(struct page *oldpage, 130 130 struct page *newpage); 131 131 132 - extern void mem_cgroup_reset_owner(struct page *page); 133 132 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP 134 133 extern int do_swap_account; 135 134 #endif ··· 389 390 } 390 391 static inline void mem_cgroup_replace_page_cache(struct page *oldpage, 391 392 struct page *newpage) 392 - { 393 - } 394 - 395 - static inline void mem_cgroup_reset_owner(struct page *page) 396 393 { 397 394 } 398 395 #endif /* CONFIG_CGROUP_MEM_CONT */
+8
include/linux/of.h
··· 281 281 return NULL; 282 282 } 283 283 284 + static inline struct device_node *of_find_compatible_node( 285 + struct device_node *from, 286 + const char *type, 287 + const char *compat) 288 + { 289 + return NULL; 290 + } 291 + 284 292 static inline int of_property_read_u32_array(const struct device_node *np, 285 293 const char *propname, 286 294 u32 *out_values, size_t sz)
+15 -14
include/linux/percpu.h
··· 348 348 #define _this_cpu_generic_to_op(pcp, val, op) \
349 349 do { \
350 350 unsigned long flags; \
351 - local_irq_save(flags); \
351 + raw_local_irq_save(flags); \
352 352 *__this_cpu_ptr(&(pcp)) op val; \
353 - local_irq_restore(flags); \
353 + raw_local_irq_restore(flags); \
354 354 } while (0)
355 355
356 356 #ifndef this_cpu_write
··· 449 449 ({ \
450 450 typeof(pcp) ret__; \
451 451 unsigned long flags; \
452 - local_irq_save(flags); \
452 + raw_local_irq_save(flags); \
453 453 __this_cpu_add(pcp, val); \
454 454 ret__ = __this_cpu_read(pcp); \
455 - local_irq_restore(flags); \
455 + raw_local_irq_restore(flags); \
456 456 ret__; \
457 457 })
458 458
··· 479 479 #define _this_cpu_generic_xchg(pcp, nval) \
480 480 ({ typeof(pcp) ret__; \
481 481 unsigned long flags; \
482 - local_irq_save(flags); \
482 + raw_local_irq_save(flags); \
483 483 ret__ = __this_cpu_read(pcp); \
484 484 __this_cpu_write(pcp, nval); \
485 - local_irq_restore(flags); \
485 + raw_local_irq_restore(flags); \
486 486 ret__; \
487 487 })
488 488
··· 507 507 ({ \
508 508 typeof(pcp) ret__; \
509 509 unsigned long flags; \
510 - local_irq_save(flags); \
510 + raw_local_irq_save(flags); \
511 511 ret__ = __this_cpu_read(pcp); \
512 512 if (ret__ == (oval)) \
513 513 __this_cpu_write(pcp, nval); \
514 - local_irq_restore(flags); \
514 + raw_local_irq_restore(flags); \
515 515 ret__; \
516 516 })
··· 544 544 ({ \
545 545 int ret__; \
546 546 unsigned long flags; \
547 - local_irq_save(flags); \
547 + raw_local_irq_save(flags); \
548 548 ret__ = __this_cpu_generic_cmpxchg_double(pcp1, pcp2, \
549 549 oval1, oval2, nval1, nval2); \
550 - local_irq_restore(flags); \
550 + raw_local_irq_restore(flags); \
551 551 ret__; \
552 552 })
··· 718 718 # ifndef __this_cpu_add_return_8
719 719 # define __this_cpu_add_return_8(pcp, val) __this_cpu_generic_add_return(pcp, val)
720 720 # endif
721 - # define __this_cpu_add_return(pcp, val) __pcpu_size_call_return2(this_cpu_add_return_, pcp, val)
721 + # define __this_cpu_add_return(pcp, val) \
722 + __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val)
722 723 #endif
723 724
724 - #define __this_cpu_sub_return(pcp, val) this_cpu_add_return(pcp, -(val))
725 - #define __this_cpu_inc_return(pcp) this_cpu_add_return(pcp, 1)
726 - #define __this_cpu_dec_return(pcp) this_cpu_add_return(pcp, -1)
725 + #define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(val))
726 + #define __this_cpu_inc_return(pcp) __this_cpu_add_return(pcp, 1)
727 + #define __this_cpu_dec_return(pcp) __this_cpu_add_return(pcp, -1)
727 728
728 729 #define __this_cpu_generic_xchg(pcp, nval) \
729 730 ({ typeof(pcp) ret__; \
+1 -2
include/linux/sched.h
··· 1777 1777 /* 1778 1778 * Per process flags 1779 1779 */ 1780 - #define PF_STARTING 0x00000002 /* being created */ 1781 1780 #define PF_EXITING 0x00000004 /* getting shut down */ 1782 1781 #define PF_EXITPIDONE 0x00000008 /* pi exit done on shut down */ 1783 1782 #define PF_VCPU 0x00000010 /* I'm a virtual CPU */ ··· 2370 2371 * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring 2371 2372 * subscriptions and synchronises with wait4(). Also used in procfs. Also 2372 2373 * pins the final release of task.io_context. Also protects ->cpuset and 2373 - * ->cgroup.subsys[]. 2374 + * ->cgroup.subsys[]. And ->vfork_done. 2374 2375 * 2375 2376 * Nests both inside and outside of read_lock(&tasklist_lock). 2376 2377 * It must not be nested with write_lock_irq(&tasklist_lock),
+2 -1
include/linux/tcp.h
··· 412 412 413 413 struct tcp_sack_block recv_sack_cache[4]; 414 414 415 - struct sk_buff *highest_sack; /* highest skb with SACK received 415 + struct sk_buff *highest_sack; /* skb just after the highest 416 + * skb with SACKed bit set 416 417 * (validity guaranteed only if 417 418 * sacked_out > 0) 418 419 */
+3 -1
include/net/inetpeer.h
··· 35 35 36 36 u32 metrics[RTAX_MAX]; 37 37 u32 rate_tokens; /* rate limiting for ICMP */ 38 - int redirect_genid; 39 38 unsigned long rate_last; 40 39 unsigned long pmtu_expires; 41 40 u32 pmtu_orig; 42 41 u32 pmtu_learned; 43 42 struct inetpeer_addr_base redirect_learned; 43 + struct list_head gc_list; 44 44 /* 45 45 * Once inet_peer is queued for deletion (refcnt == -1), following fields 46 46 * are not available: rid, ip_id_count, tcp_ts, tcp_ts_stamp ··· 95 95 /* can be called from BH context or outside */ 96 96 extern void inet_putpeer(struct inet_peer *p); 97 97 extern bool inet_peer_xrlim_allow(struct inet_peer *peer, int timeout); 98 + 99 + extern void inetpeer_invalidate_tree(int family); 98 100 99 101 /* 100 102 * temporary check to make sure we dont access rid, ip_id_count, tcp_ts,
+3 -2
include/net/tcp.h
··· 1364 1364 } 1365 1365 } 1366 1366 1367 - /* Start sequence of the highest skb with SACKed bit, valid only if 1368 - * sacked > 0 or when the caller has ensured validity by itself. 1367 + /* Start sequence of the skb just after the highest skb with SACKed 1368 + * bit, valid only if sacked_out > 0 or when the caller has ensured 1369 + * validity by itself. 1369 1370 */ 1370 1371 static inline u32 tcp_highest_sack_seq(struct tcp_sock *tp) 1371 1372 {
+39 -21
kernel/fork.c
··· 668 668 return mm; 669 669 } 670 670 671 + static void complete_vfork_done(struct task_struct *tsk) 672 + { 673 + struct completion *vfork; 674 + 675 + task_lock(tsk); 676 + vfork = tsk->vfork_done; 677 + if (likely(vfork)) { 678 + tsk->vfork_done = NULL; 679 + complete(vfork); 680 + } 681 + task_unlock(tsk); 682 + } 683 + 684 + static int wait_for_vfork_done(struct task_struct *child, 685 + struct completion *vfork) 686 + { 687 + int killed; 688 + 689 + freezer_do_not_count(); 690 + killed = wait_for_completion_killable(vfork); 691 + freezer_count(); 692 + 693 + if (killed) { 694 + task_lock(child); 695 + child->vfork_done = NULL; 696 + task_unlock(child); 697 + } 698 + 699 + put_task_struct(child); 700 + return killed; 701 + } 702 + 671 703 /* Please note the differences between mmput and mm_release. 672 704 * mmput is called whenever we stop holding onto a mm_struct, 673 705 * error success whatever. ··· 715 683 */ 716 684 void mm_release(struct task_struct *tsk, struct mm_struct *mm) 717 685 { 718 - struct completion *vfork_done = tsk->vfork_done; 719 - 720 686 /* Get rid of any futexes when releasing the mm */ 721 687 #ifdef CONFIG_FUTEX 722 688 if (unlikely(tsk->robust_list)) { ··· 734 704 /* Get rid of any cached register state */ 735 705 deactivate_mm(tsk, mm); 736 706 737 - /* notify parent sleeping on vfork() */ 738 - if (vfork_done) { 739 - tsk->vfork_done = NULL; 740 - complete(vfork_done); 741 - } 707 + if (tsk->vfork_done) 708 + complete_vfork_done(tsk); 742 709 743 710 /* 744 711 * If we're exiting normally, clear a user-space tid field if 745 712 * requested. We leave this alone when dying by signal, to leave 746 713 * the value intact in a core dump, and to save the unnecessary 747 - * trouble otherwise. Userland only wants this done for a sys_exit. 714 + * trouble, say, a killed vfork parent shouldn't touch this mm. 715 + * Userland only wants this done for a sys_exit. 
748 716 */ 749 717 if (tsk->clear_child_tid) { 750 718 if (!(tsk->flags & PF_SIGNALED) && ··· 1046 1018 1047 1019 new_flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER); 1048 1020 new_flags |= PF_FORKNOEXEC; 1049 - new_flags |= PF_STARTING; 1050 1021 p->flags = new_flags; 1051 1022 } 1052 1023 ··· 1575 1548 if (clone_flags & CLONE_VFORK) { 1576 1549 p->vfork_done = &vfork; 1577 1550 init_completion(&vfork); 1551 + get_task_struct(p); 1578 1552 } 1579 - 1580 - /* 1581 - * We set PF_STARTING at creation in case tracing wants to 1582 - * use this to distinguish a fully live task from one that 1583 - * hasn't finished SIGSTOP raising yet. Now we clear it 1584 - * and set the child going. 1585 - */ 1586 - p->flags &= ~PF_STARTING; 1587 1553 1588 1554 wake_up_new_task(p); 1589 1555 ··· 1585 1565 ptrace_event(trace, nr); 1586 1566 1587 1567 if (clone_flags & CLONE_VFORK) { 1588 - freezer_do_not_count(); 1589 - wait_for_completion(&vfork); 1590 - freezer_count(); 1591 - ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); 1568 + if (!wait_for_vfork_done(p, &vfork)) 1569 + ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); 1592 1570 } 1593 1571 } else { 1594 1572 nr = PTR_ERR(p);
+7 -4
kernel/hung_task.c
··· 119 119 * For preemptible RCU it is sufficient to call rcu_read_unlock in order 120 120 * to exit the grace period. For classic RCU, a reschedule is required. 121 121 */ 122 - static void rcu_lock_break(struct task_struct *g, struct task_struct *t) 122 + static bool rcu_lock_break(struct task_struct *g, struct task_struct *t) 123 123 { 124 + bool can_cont; 125 + 124 126 get_task_struct(g); 125 127 get_task_struct(t); 126 128 rcu_read_unlock(); 127 129 cond_resched(); 128 130 rcu_read_lock(); 131 + can_cont = pid_alive(g) && pid_alive(t); 129 132 put_task_struct(t); 130 133 put_task_struct(g); 134 + 135 + return can_cont; 131 136 } 132 137 133 138 /* ··· 159 154 goto unlock; 160 155 if (!--batch_count) { 161 156 batch_count = HUNG_TASK_BATCHING; 162 - rcu_lock_break(g, t); 163 - /* Exit if t or g was unhashed during refresh. */ 164 - if (t->state == TASK_DEAD || g->state == TASK_DEAD) 157 + if (!rcu_lock_break(g, t)) 165 158 goto unlock; 166 159 } 167 160 /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
+38 -6
kernel/irq/manage.c
··· 985 985 986 986 /* add new interrupt at end of irq queue */ 987 987 do { 988 + /* 989 + * Or all existing action->thread_mask bits, 990 + * so we can find the next zero bit for this 991 + * new action. 992 + */ 988 993 thread_mask |= old->thread_mask; 989 994 old_ptr = &old->next; 990 995 old = *old_ptr; ··· 998 993 } 999 994 1000 995 /* 1001 - * Setup the thread mask for this irqaction. Unlikely to have 1002 - * 32 resp 64 irqs sharing one line, but who knows. 996 + * Setup the thread mask for this irqaction for ONESHOT. For 997 + * !ONESHOT irqs the thread mask is 0 so we can avoid a 998 + * conditional in irq_wake_thread(). 1003 999 */ 1004 - if (new->flags & IRQF_ONESHOT && thread_mask == ~0UL) { 1005 - ret = -EBUSY; 1006 - goto out_mask; 1000 + if (new->flags & IRQF_ONESHOT) { 1001 + /* 1002 + * Unlikely to have 32 resp 64 irqs sharing one line, 1003 + * but who knows. 1004 + */ 1005 + if (thread_mask == ~0UL) { 1006 + ret = -EBUSY; 1007 + goto out_mask; 1008 + } 1009 + /* 1010 + * The thread_mask for the action is or'ed to 1011 + * desc->thread_active to indicate that the 1012 + * IRQF_ONESHOT thread handler has been woken, but not 1013 + * yet finished. The bit is cleared when a thread 1014 + * completes. When all threads of a shared interrupt 1015 + * line have completed desc->threads_active becomes 1016 + * zero and the interrupt line is unmasked. See 1017 + * handle.c:irq_wake_thread() for further information. 1018 + * 1019 + * If no thread is woken by primary (hard irq context) 1020 + * interrupt handlers, then desc->threads_active is 1021 + * also checked for zero to unmask the irq line in the 1022 + * affected hard irq flow handlers 1023 + * (handle_[fasteoi|level]_irq). 1024 + * 1025 + * The new action gets the first zero bit of 1026 + * thread_mask assigned. See the loop above which or's 1027 + * all existing action->thread_mask bits. 
1028 + */ 1029 + new->thread_mask = 1 << ffz(thread_mask); 1007 1030 } 1008 - new->thread_mask = 1 << ffz(thread_mask); 1009 1031 1010 1032 if (!shared) { 1011 1033 init_waitqueue_head(&desc->wait_for_threads);
+7 -5
kernel/kprobes.c
··· 1334 1334 if (!kernel_text_address((unsigned long) p->addr) || 1335 1335 in_kprobes_functions((unsigned long) p->addr) || 1336 1336 ftrace_text_reserved(p->addr, p->addr) || 1337 - jump_label_text_reserved(p->addr, p->addr)) 1338 - goto fail_with_jump_label; 1337 + jump_label_text_reserved(p->addr, p->addr)) { 1338 + ret = -EINVAL; 1339 + goto cannot_probe; 1340 + } 1339 1341 1340 1342 /* User can pass only KPROBE_FLAG_DISABLED to register_kprobe */ 1341 1343 p->flags &= KPROBE_FLAG_DISABLED; ··· 1354 1352 * its code to prohibit unexpected unloading. 1355 1353 */ 1356 1354 if (unlikely(!try_module_get(probed_mod))) 1357 - goto fail_with_jump_label; 1355 + goto cannot_probe; 1358 1356 1359 1357 /* 1360 1358 * If the module freed .init.text, we couldn't insert ··· 1363 1361 if (within_module_init((unsigned long)p->addr, probed_mod) && 1364 1362 probed_mod->state != MODULE_STATE_COMING) { 1365 1363 module_put(probed_mod); 1366 - goto fail_with_jump_label; 1364 + goto cannot_probe; 1367 1365 } 1368 1366 /* ret will be updated by following code */ 1369 1367 } ··· 1411 1409 1412 1410 return ret; 1413 1411 1414 - fail_with_jump_label: 1412 + cannot_probe: 1415 1413 preempt_enable(); 1416 1414 jump_label_unlock(); 1417 1415 return ret;
+6
kernel/printk.c
··· 702 702 #endif 703 703 module_param_named(time, printk_time, bool, S_IRUGO | S_IWUSR); 704 704 705 + static bool always_kmsg_dump; 706 + module_param_named(always_kmsg_dump, always_kmsg_dump, bool, S_IRUGO | S_IWUSR); 707 + 705 708 /* Check if we have any console registered that can be called early in boot. */ 706 709 static int have_callable_console(void) 707 710 { ··· 1734 1731 const char *s1, *s2; 1735 1732 unsigned long l1, l2; 1736 1733 unsigned long flags; 1734 + 1735 + if ((reason > KMSG_DUMP_OOPS) && !always_kmsg_dump) 1736 + return; 1737 1737 1738 1738 /* Theoretically, the log could move on after we do this, but 1739 1739 there's not a lot we can do about that. The new messages
+2 -2
kernel/sched/core.c
··· 6728 6728 static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action, 6729 6729 void *hcpu) 6730 6730 { 6731 - switch (action) { 6731 + switch (action & ~CPU_TASKS_FROZEN) { 6732 6732 case CPU_ONLINE: 6733 6733 case CPU_DOWN_FAILED: 6734 6734 cpuset_update_active_cpus(); ··· 6741 6741 static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action, 6742 6742 void *hcpu) 6743 6743 { 6744 - switch (action) { 6744 + switch (action & ~CPU_TASKS_FROZEN) { 6745 6745 case CPU_DOWN_PREPARE: 6746 6746 cpuset_update_active_cpus(); 6747 6747 return NOTIFY_OK;
+3 -11
lib/debugobjects.c
··· 818 818 if (obj->static_init == 1) { 819 819 debug_object_init(obj, &descr_type_test); 820 820 debug_object_activate(obj, &descr_type_test); 821 - /* 822 - * Real code should return 0 here ! This is 823 - * not a fixup of some bad behaviour. We 824 - * merily call the debug_init function to keep 825 - * track of the object. 826 - */ 827 - return 1; 828 - } else { 829 - /* Real code needs to emit a warning here */ 821 + return 0; 830 822 } 831 - return 0; 823 + return 1; 832 824 833 825 case ODEBUG_STATE_ACTIVE: 834 826 debug_object_deactivate(obj, &descr_type_test); ··· 959 967 960 968 obj.static_init = 1; 961 969 debug_object_activate(&obj, &descr_type_test); 962 - if (check_results(&obj, ODEBUG_STATE_ACTIVE, ++fixups, warnings)) 970 + if (check_results(&obj, ODEBUG_STATE_ACTIVE, fixups, warnings)) 963 971 goto out; 964 972 debug_object_init(&obj, &descr_type_test); 965 973 if (check_results(&obj, ODEBUG_STATE_INIT, ++fixups, ++warnings))
+9 -3
lib/vsprintf.c
··· 891 891 case 'U': 892 892 return uuid_string(buf, end, ptr, spec, fmt); 893 893 case 'V': 894 - return buf + vsnprintf(buf, end > buf ? end - buf : 0, 895 - ((struct va_format *)ptr)->fmt, 896 - *(((struct va_format *)ptr)->va)); 894 + { 895 + va_list va; 896 + 897 + va_copy(va, *((struct va_format *)ptr)->va); 898 + buf += vsnprintf(buf, end > buf ? end - buf : 0, 899 + ((struct va_format *)ptr)->fmt, va); 900 + va_end(va); 901 + return buf; 902 + } 897 903 case 'K': 898 904 /* 899 905 * %pK cannot be used in IRQ context because its test
+3 -3
mm/huge_memory.c
··· 671 671 set_pmd_at(mm, haddr, pmd, entry); 672 672 prepare_pmd_huge_pte(pgtable, mm); 673 673 add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR); 674 + mm->nr_ptes++; 674 675 spin_unlock(&mm->page_table_lock); 675 676 } 676 677 ··· 790 789 pmd = pmd_mkold(pmd_wrprotect(pmd)); 791 790 set_pmd_at(dst_mm, addr, dst_pmd, pmd); 792 791 prepare_pmd_huge_pte(pgtable, dst_mm); 792 + dst_mm->nr_ptes++; 793 793 794 794 ret = 0; 795 795 out_unlock: ··· 889 887 } 890 888 kfree(pages); 891 889 892 - mm->nr_ptes++; 893 890 smp_wmb(); /* make pte visible before pmd */ 894 891 pmd_populate(mm, pmd, pgtable); 895 892 page_remove_rmap(page); ··· 1048 1047 VM_BUG_ON(page_mapcount(page) < 0); 1049 1048 add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR); 1050 1049 VM_BUG_ON(!PageHead(page)); 1050 + tlb->mm->nr_ptes--; 1051 1051 spin_unlock(&tlb->mm->page_table_lock); 1052 1052 tlb_remove_page(tlb, page); 1053 1053 pte_free(tlb->mm, pgtable); ··· 1377 1375 pte_unmap(pte); 1378 1376 } 1379 1377 1380 - mm->nr_ptes++; 1381 1378 smp_wmb(); /* make pte visible before pmd */ 1382 1379 /* 1383 1380 * Up to this point the pmd is present and huge and ··· 1989 1988 set_pmd_at(mm, address, pmd, _pmd); 1990 1989 update_mmu_cache(vma, address, _pmd); 1991 1990 prepare_pmd_huge_pte(pgtable, mm); 1992 - mm->nr_ptes--; 1993 1991 spin_unlock(&mm->page_table_lock); 1994 1992 1995 1993 #ifndef CONFIG_NUMA
+1 -1
mm/hugetlb.c
··· 2277 2277 set_page_dirty(page); 2278 2278 list_add(&page->lru, &page_list); 2279 2279 } 2280 - spin_unlock(&mm->page_table_lock); 2281 2280 flush_tlb_range(vma, start, end); 2281 + spin_unlock(&mm->page_table_lock); 2282 2282 mmu_notifier_invalidate_range_end(mm, start, end); 2283 2283 list_for_each_entry_safe(page, tmp, &page_list, lru) { 2284 2284 page_remove_rmap(page);
-11
mm/ksm.c
··· 28 28 #include <linux/kthread.h> 29 29 #include <linux/wait.h> 30 30 #include <linux/slab.h> 31 - #include <linux/memcontrol.h> 32 31 #include <linux/rbtree.h> 33 32 #include <linux/memory.h> 34 33 #include <linux/mmu_notifier.h> ··· 1571 1572 1572 1573 new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address); 1573 1574 if (new_page) { 1574 - /* 1575 - * The memcg-specific accounting when moving 1576 - * pages around the LRU lists relies on the 1577 - * page's owner (memcg) to be valid. Usually, 1578 - * pages are assigned to a new owner before 1579 - * being put on the LRU list, but since this 1580 - * is not the case here, the stale owner from 1581 - * a previous allocation cycle must be reset. 1582 - */ 1583 - mem_cgroup_reset_owner(new_page); 1584 1575 copy_user_highpage(new_page, page, address, vma); 1585 1576 1586 1577 SetPageDirty(new_page);
+50 -52
mm/memcontrol.c
··· 1042 1042 1043 1043 pc = lookup_page_cgroup(page); 1044 1044 memcg = pc->mem_cgroup; 1045 + 1046 + /* 1047 + * Surreptitiously switch any uncharged page to root: 1048 + * an uncharged page off lru does nothing to secure 1049 + * its former mem_cgroup from sudden removal. 1050 + * 1051 + * Our caller holds lru_lock, and PageCgroupUsed is updated 1052 + * under page_cgroup lock: between them, they make all uses 1053 + * of pc->mem_cgroup safe. 1054 + */ 1055 + if (!PageCgroupUsed(pc) && memcg != root_mem_cgroup) 1056 + pc->mem_cgroup = memcg = root_mem_cgroup; 1057 + 1045 1058 mz = page_cgroup_zoneinfo(memcg, page); 1046 1059 /* compound_order() is stabilized through lru_lock */ 1047 1060 MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page); ··· 2421 2408 struct page *page, 2422 2409 unsigned int nr_pages, 2423 2410 struct page_cgroup *pc, 2424 - enum charge_type ctype) 2411 + enum charge_type ctype, 2412 + bool lrucare) 2425 2413 { 2414 + struct zone *uninitialized_var(zone); 2415 + bool was_on_lru = false; 2416 + 2426 2417 lock_page_cgroup(pc); 2427 2418 if (unlikely(PageCgroupUsed(pc))) { 2428 2419 unlock_page_cgroup(pc); ··· 2437 2420 * we don't need page_cgroup_lock about tail pages, becase they are not 2438 2421 * accessed by any other context at this point. 2439 2422 */ 2423 + 2424 + /* 2425 + * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page 2426 + * may already be on some other mem_cgroup's LRU. Take care of it. 2427 + */ 2428 + if (lrucare) { 2429 + zone = page_zone(page); 2430 + spin_lock_irq(&zone->lru_lock); 2431 + if (PageLRU(page)) { 2432 + ClearPageLRU(page); 2433 + del_page_from_lru_list(zone, page, page_lru(page)); 2434 + was_on_lru = true; 2435 + } 2436 + } 2437 + 2440 2438 pc->mem_cgroup = memcg; 2441 2439 /* 2442 2440 * We access a page_cgroup asynchronously without lock_page_cgroup(). 
··· 2475 2443 break; 2476 2444 } 2477 2445 2446 + if (lrucare) { 2447 + if (was_on_lru) { 2448 + VM_BUG_ON(PageLRU(page)); 2449 + SetPageLRU(page); 2450 + add_page_to_lru_list(zone, page, page_lru(page)); 2451 + } 2452 + spin_unlock_irq(&zone->lru_lock); 2453 + } 2454 + 2478 2455 mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), nr_pages); 2479 2456 unlock_page_cgroup(pc); 2480 - WARN_ON_ONCE(PageLRU(page)); 2457 + 2481 2458 /* 2482 2459 * "charge_statistics" updated event counter. Then, check it. 2483 2460 * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree. ··· 2684 2643 ret = __mem_cgroup_try_charge(mm, gfp_mask, nr_pages, &memcg, oom); 2685 2644 if (ret == -ENOMEM) 2686 2645 return ret; 2687 - __mem_cgroup_commit_charge(memcg, page, nr_pages, pc, ctype); 2646 + __mem_cgroup_commit_charge(memcg, page, nr_pages, pc, ctype, false); 2688 2647 return 0; 2689 2648 } 2690 2649 ··· 2703 2662 static void 2704 2663 __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *ptr, 2705 2664 enum charge_type ctype); 2706 - 2707 - static void 2708 - __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg, 2709 - enum charge_type ctype) 2710 - { 2711 - struct page_cgroup *pc = lookup_page_cgroup(page); 2712 - struct zone *zone = page_zone(page); 2713 - unsigned long flags; 2714 - bool removed = false; 2715 - 2716 - /* 2717 - * In some case, SwapCache, FUSE(splice_buf->radixtree), the page 2718 - * is already on LRU. It means the page may on some other page_cgroup's 2719 - * LRU. Take care of it. 
2720 - */ 2721 - spin_lock_irqsave(&zone->lru_lock, flags); 2722 - if (PageLRU(page)) { 2723 - del_page_from_lru_list(zone, page, page_lru(page)); 2724 - ClearPageLRU(page); 2725 - removed = true; 2726 - } 2727 - __mem_cgroup_commit_charge(memcg, page, 1, pc, ctype); 2728 - if (removed) { 2729 - add_page_to_lru_list(zone, page, page_lru(page)); 2730 - SetPageLRU(page); 2731 - } 2732 - spin_unlock_irqrestore(&zone->lru_lock, flags); 2733 - return; 2734 - } 2735 2665 2736 2666 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm, 2737 2667 gfp_t gfp_mask) ··· 2781 2769 __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg, 2782 2770 enum charge_type ctype) 2783 2771 { 2772 + struct page_cgroup *pc; 2773 + 2784 2774 if (mem_cgroup_disabled()) 2785 2775 return; 2786 2776 if (!memcg) 2787 2777 return; 2788 2778 cgroup_exclude_rmdir(&memcg->css); 2789 2779 2790 - __mem_cgroup_commit_charge_lrucare(page, memcg, ctype); 2780 + pc = lookup_page_cgroup(page); 2781 + __mem_cgroup_commit_charge(memcg, page, 1, pc, ctype, true); 2791 2782 /* 2792 2783 * Now swap is on-memory. This means this page may be 2793 2784 * counted both as mem and swap....double count. ··· 3042 3027 batch->memcg = NULL; 3043 3028 } 3044 3029 3045 - /* 3046 - * A function for resetting pc->mem_cgroup for newly allocated pages. 3047 - * This function should be called if the newpage will be added to LRU 3048 - * before start accounting. 3049 - */ 3050 - void mem_cgroup_reset_owner(struct page *newpage) 3051 - { 3052 - struct page_cgroup *pc; 3053 - 3054 - if (mem_cgroup_disabled()) 3055 - return; 3056 - 3057 - pc = lookup_page_cgroup(newpage); 3058 - VM_BUG_ON(PageCgroupUsed(pc)); 3059 - pc->mem_cgroup = root_mem_cgroup; 3060 - } 3061 - 3062 3030 #ifdef CONFIG_SWAP 3063 3031 /* 3064 3032 * called after __delete_from_swap_cache() and drop "page" account. 
··· 3246 3248 ctype = MEM_CGROUP_CHARGE_TYPE_CACHE; 3247 3249 else 3248 3250 ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM; 3249 - __mem_cgroup_commit_charge(memcg, newpage, 1, pc, ctype); 3251 + __mem_cgroup_commit_charge(memcg, newpage, 1, pc, ctype, false); 3250 3252 return ret; 3251 3253 } 3252 3254 ··· 3330 3332 * the newpage may be on LRU(or pagevec for LRU) already. We lock 3331 3333 * LRU while we overwrite pc->mem_cgroup. 3332 3334 */ 3333 - __mem_cgroup_commit_charge_lrucare(newpage, memcg, type); 3335 + __mem_cgroup_commit_charge(memcg, newpage, 1, pc, type, true); 3334 3336 } 3335 3337 3336 3338 #ifdef CONFIG_DEBUG_VM
+2 -1
mm/mempolicy.c
··· 640 640 unsigned long vmstart; 641 641 unsigned long vmend; 642 642 643 - vma = find_vma_prev(mm, start, &prev); 643 + vma = find_vma(mm, start); 644 644 if (!vma || vma->vm_start > start) 645 645 return -EFAULT; 646 646 647 + prev = vma->vm_prev; 647 648 if (start > vma->vm_start) 648 649 prev = vma; 649 650
-2
mm/migrate.c
··· 839 839 if (!newpage) 840 840 return -ENOMEM; 841 841 842 - mem_cgroup_reset_owner(newpage); 843 - 844 842 if (page_count(page) == 1) { 845 843 /* page was freed from under us. So we are done. */ 846 844 goto out;
+2 -1
mm/mlock.c
··· 385 385 return -EINVAL; 386 386 if (end == start) 387 387 return 0; 388 - vma = find_vma_prev(current->mm, start, &prev); 388 + vma = find_vma(current->mm, start); 389 389 if (!vma || vma->vm_start > start) 390 390 return -ENOMEM; 391 391 392 + prev = vma->vm_prev; 392 393 if (start > vma->vm_start) 393 394 prev = vma; 394 395
+14 -3
mm/mmap.c
··· 1266 1266 vma->vm_pgoff = pgoff; 1267 1267 INIT_LIST_HEAD(&vma->anon_vma_chain); 1268 1268 1269 + error = -EINVAL; /* when rejecting VM_GROWSDOWN|VM_GROWSUP */ 1270 + 1269 1271 if (file) { 1270 - error = -EINVAL; 1271 1272 if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP)) 1272 1273 goto free_vma; 1273 1274 if (vm_flags & VM_DENYWRITE) { ··· 1294 1293 pgoff = vma->vm_pgoff; 1295 1294 vm_flags = vma->vm_flags; 1296 1295 } else if (vm_flags & VM_SHARED) { 1296 + if (unlikely(vm_flags & (VM_GROWSDOWN|VM_GROWSUP))) 1297 + goto free_vma; 1297 1298 error = shmem_zero_setup(vma); 1298 1299 if (error) 1299 1300 goto free_vma; ··· 1608 1605 1609 1606 /* 1610 1607 * Same as find_vma, but also return a pointer to the previous VMA in *pprev. 1611 - * Note: pprev is set to NULL when return value is NULL. 1612 1608 */ 1613 1609 struct vm_area_struct * 1614 1610 find_vma_prev(struct mm_struct *mm, unsigned long addr, ··· 1616 1614 struct vm_area_struct *vma; 1617 1615 1618 1616 vma = find_vma(mm, addr); 1619 - *pprev = vma ? vma->vm_prev : NULL; 1617 + if (vma) { 1618 + *pprev = vma->vm_prev; 1619 + } else { 1620 + struct rb_node *rb_node = mm->mm_rb.rb_node; 1621 + *pprev = NULL; 1622 + while (rb_node) { 1623 + *pprev = rb_entry(rb_node, struct vm_area_struct, vm_rb); 1624 + rb_node = rb_node->rb_right; 1625 + } 1626 + } 1620 1627 return vma; 1621 1628 } 1622 1629
+2 -1
mm/mprotect.c
··· 262 262 263 263 down_write(&current->mm->mmap_sem); 264 264 265 - vma = find_vma_prev(current->mm, start, &prev); 265 + vma = find_vma(current->mm, start); 266 266 error = -ENOMEM; 267 267 if (!vma) 268 268 goto out; 269 + prev = vma->vm_prev; 269 270 if (unlikely(grows & PROT_GROWSDOWN)) { 270 271 if (vma->vm_start >= end) 271 272 goto out;
+3 -1
mm/page_cgroup.c
··· 379 379 pgoff_t offset = swp_offset(ent); 380 380 struct swap_cgroup_ctrl *ctrl; 381 381 struct page *mappage; 382 + struct swap_cgroup *sc; 382 383 383 384 ctrl = &swap_cgroup_ctrl[swp_type(ent)]; 384 385 if (ctrlp) 385 386 *ctrlp = ctrl; 386 387 387 388 mappage = ctrl->map[offset / SC_PER_PAGE]; 388 - return page_address(mappage) + offset % SC_PER_PAGE; 389 + sc = page_address(mappage); 390 + return sc + offset % SC_PER_PAGE; 389 391 } 390 392 391 393 /**
+1 -2
mm/percpu-vm.c
··· 184 184 page_end - page_start); 185 185 } 186 186 187 - for (i = page_start; i < page_end; i++) 188 - __clear_bit(i, populated); 187 + bitmap_clear(populated, page_start, page_end - page_start); 189 188 } 190 189 191 190 /**
+5 -3
mm/swap.c
··· 652 652 void lru_add_page_tail(struct zone* zone, 653 653 struct page *page, struct page *page_tail) 654 654 { 655 - int active; 655 + int uninitialized_var(active); 656 656 enum lru_list lru; 657 657 const int file = 0; 658 658 ··· 672 672 active = 0; 673 673 lru = LRU_INACTIVE_ANON; 674 674 } 675 - update_page_reclaim_stat(zone, page_tail, file, active); 676 675 } else { 677 676 SetPageUnevictable(page_tail); 678 677 lru = LRU_UNEVICTABLE; ··· 692 693 list_head = page_tail->lru.prev; 693 694 list_move_tail(&page_tail->lru, list_head); 694 695 } 696 + 697 + if (!PageUnevictable(page)) 698 + update_page_reclaim_stat(zone, page_tail, file, active); 695 699 } 696 700 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 697 701 ··· 712 710 SetPageLRU(page); 713 711 if (active) 714 712 SetPageActive(page); 715 - update_page_reclaim_stat(zone, page, file, active); 716 713 add_page_to_lru_list(zone, page, lru); 714 + update_page_reclaim_stat(zone, page, file, active); 717 715 } 718 716 719 717 /*
-10
mm/swap_state.c
··· 300 300 new_page = alloc_page_vma(gfp_mask, vma, addr); 301 301 if (!new_page) 302 302 break; /* Out of memory */ 303 - /* 304 - * The memcg-specific accounting when moving 305 - * pages around the LRU lists relies on the 306 - * page's owner (memcg) to be valid. Usually, 307 - * pages are assigned to a new owner before 308 - * being put on the LRU list, but since this 309 - * is not the case here, the stale owner from 310 - * a previous allocation cycle must be reset. 311 - */ 312 - mem_cgroup_reset_owner(new_page); 313 303 } 314 304 315 305 /*
+5 -2
net/bridge/br_multicast.c
··· 446 446 ip6h->nexthdr = IPPROTO_HOPOPTS; 447 447 ip6h->hop_limit = 1; 448 448 ipv6_addr_set(&ip6h->daddr, htonl(0xff020000), 0, 0, htonl(1)); 449 - ipv6_dev_get_saddr(dev_net(br->dev), br->dev, &ip6h->daddr, 0, 450 - &ip6h->saddr); 449 + if (ipv6_dev_get_saddr(dev_net(br->dev), br->dev, &ip6h->daddr, 0, 450 + &ip6h->saddr)) { 451 + kfree_skb(skb); 452 + return NULL; 453 + } 451 454 ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest); 452 455 453 456 hopopt = (u8 *)(ip6h + 1);
+18 -14
net/bridge/br_netfilter.c
··· 62 62 #define brnf_filter_pppoe_tagged 0 63 63 #endif 64 64 65 + #define IS_IP(skb) \ 66 + (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_IP)) 67 + 68 + #define IS_IPV6(skb) \ 69 + (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_IPV6)) 70 + 71 + #define IS_ARP(skb) \ 72 + (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_ARP)) 73 + 65 74 static inline __be16 vlan_proto(const struct sk_buff *skb) 66 75 { 67 76 if (vlan_tx_tag_present(skb)) ··· 648 639 return NF_DROP; 649 640 br = p->br; 650 641 651 - if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) || 652 - IS_PPPOE_IPV6(skb)) { 642 + if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb)) { 653 643 if (!brnf_call_ip6tables && !br->nf_call_ip6tables) 654 644 return NF_ACCEPT; 655 645 ··· 659 651 if (!brnf_call_iptables && !br->nf_call_iptables) 660 652 return NF_ACCEPT; 661 653 662 - if (skb->protocol != htons(ETH_P_IP) && !IS_VLAN_IP(skb) && 663 - !IS_PPPOE_IP(skb)) 654 + if (!IS_IP(skb) && !IS_VLAN_IP(skb) && !IS_PPPOE_IP(skb)) 664 655 return NF_ACCEPT; 665 656 666 657 nf_bridge_pull_encap_header_rcsum(skb); ··· 708 701 struct nf_bridge_info *nf_bridge = skb->nf_bridge; 709 702 struct net_device *in; 710 703 711 - if (skb->protocol != htons(ETH_P_ARP) && !IS_VLAN_ARP(skb)) { 704 + if (!IS_ARP(skb) && !IS_VLAN_ARP(skb)) { 712 705 in = nf_bridge->physindev; 713 706 if (nf_bridge->mask & BRNF_PKT_TYPE) { 714 707 skb->pkt_type = PACKET_OTHERHOST; ··· 724 717 skb->dev, br_forward_finish, 1); 725 718 return 0; 726 719 } 720 + 727 721 728 722 /* This is the 'purely bridged' case. 
For IP, we pass the packet to 729 723 * netfilter with indev and outdev set to the bridge device, ··· 752 744 if (!parent) 753 745 return NF_DROP; 754 746 755 - if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb) || 756 - IS_PPPOE_IP(skb)) 747 + if (IS_IP(skb) || IS_VLAN_IP(skb) || IS_PPPOE_IP(skb)) 757 748 pf = PF_INET; 758 - else if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) || 759 - IS_PPPOE_IPV6(skb)) 749 + else if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb)) 760 750 pf = PF_INET6; 761 751 else 762 752 return NF_ACCEPT; ··· 801 795 if (!brnf_call_arptables && !br->nf_call_arptables) 802 796 return NF_ACCEPT; 803 797 804 - if (skb->protocol != htons(ETH_P_ARP)) { 798 + if (!IS_ARP(skb)) { 805 799 if (!IS_VLAN_ARP(skb)) 806 800 return NF_ACCEPT; 807 801 nf_bridge_pull_encap_header(skb); ··· 859 853 if (!realoutdev) 860 854 return NF_DROP; 861 855 862 - if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb) || 863 - IS_PPPOE_IP(skb)) 856 + if (IS_IP(skb) || IS_VLAN_IP(skb) || IS_PPPOE_IP(skb)) 864 857 pf = PF_INET; 865 - else if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) || 866 - IS_PPPOE_IPV6(skb)) 858 + else if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb)) 867 859 pf = PF_INET6; 868 860 else 869 861 return NF_ACCEPT;
+4 -4
net/bridge/br_stp.c
··· 17 17 #include "br_private_stp.h" 18 18 19 19 /* since time values in bpdu are in jiffies and then scaled (1/256) 20 - * before sending, make sure that is at least one. 20 + * before sending, make sure that is at least one STP tick. 21 21 */ 22 - #define MESSAGE_AGE_INCR ((HZ < 256) ? 1 : (HZ/256)) 22 + #define MESSAGE_AGE_INCR ((HZ / 256) + 1) 23 23 24 24 static const char *const br_port_state_names[] = { 25 25 [BR_STATE_DISABLED] = "disabled", ··· 31 31 32 32 void br_log_state(const struct net_bridge_port *p) 33 33 { 34 - br_info(p->br, "port %u(%s) entering %s state\n", 34 + br_info(p->br, "port %u(%s) entered %s state\n", 35 35 (unsigned) p->port_no, p->dev->name, 36 36 br_port_state_names[p->state]); 37 37 } ··· 186 186 p->designated_cost = bpdu->root_path_cost; 187 187 p->designated_bridge = bpdu->bridge_id; 188 188 p->designated_port = bpdu->port_id; 189 - p->designated_age = jiffies + bpdu->message_age; 189 + p->designated_age = jiffies - bpdu->message_age; 190 190 191 191 mod_timer(&p->message_age_timer, jiffies 192 192 + (p->br->max_age - bpdu->message_age));
+1 -2
net/bridge/br_stp_if.c
··· 98 98 struct net_bridge *br = p->br; 99 99 int wasroot; 100 100 101 - br_log_state(p); 102 - 103 101 wasroot = br_is_root_bridge(br); 104 102 br_become_designated_port(p); 105 103 p->state = BR_STATE_DISABLED; 106 104 p->topology_change_ack = 0; 107 105 p->config_pending = 0; 108 106 107 + br_log_state(p); 109 108 br_ifinfo_notify(RTM_NEWLINK, p); 110 109 111 110 del_timer(&p->message_age_timer);
+15 -11
net/bridge/netfilter/ebtables.c
··· 1335 1335 const char *base, char __user *ubase) 1336 1336 { 1337 1337 char __user *hlp = ubase + ((char *)m - base); 1338 - if (copy_to_user(hlp, m->u.match->name, EBT_FUNCTION_MAXNAMELEN)) 1338 + char name[EBT_FUNCTION_MAXNAMELEN] = {}; 1339 + 1340 + /* ebtables expects 32 bytes long names but xt_match names are 29 bytes 1341 + long. Copy 29 bytes and fill remaining bytes with zeroes. */ 1342 + strncpy(name, m->u.match->name, sizeof(name)); 1343 + if (copy_to_user(hlp, name, EBT_FUNCTION_MAXNAMELEN)) 1339 1344 return -EFAULT; 1340 1345 return 0; 1341 1346 } ··· 1349 1344 const char *base, char __user *ubase) 1350 1345 { 1351 1346 char __user *hlp = ubase + ((char *)w - base); 1352 - if (copy_to_user(hlp , w->u.watcher->name, EBT_FUNCTION_MAXNAMELEN)) 1347 + char name[EBT_FUNCTION_MAXNAMELEN] = {}; 1348 + 1349 + strncpy(name, w->u.watcher->name, sizeof(name)); 1350 + if (copy_to_user(hlp , name, EBT_FUNCTION_MAXNAMELEN)) 1353 1351 return -EFAULT; 1354 1352 return 0; 1355 1353 } ··· 1363 1355 int ret; 1364 1356 char __user *hlp; 1365 1357 const struct ebt_entry_target *t; 1358 + char name[EBT_FUNCTION_MAXNAMELEN] = {}; 1366 1359 1367 1360 if (e->bitmask == 0) 1368 1361 return 0; ··· 1377 1368 ret = EBT_WATCHER_ITERATE(e, ebt_make_watchername, base, ubase); 1378 1369 if (ret != 0) 1379 1370 return ret; 1380 - if (copy_to_user(hlp, t->u.target->name, EBT_FUNCTION_MAXNAMELEN)) 1371 + strncpy(name, t->u.target->name, sizeof(name)); 1372 + if (copy_to_user(hlp, name, EBT_FUNCTION_MAXNAMELEN)) 1381 1373 return -EFAULT; 1382 1374 return 0; 1383 1375 } ··· 1903 1893 1904 1894 switch (compat_mwt) { 1905 1895 case EBT_COMPAT_MATCH: 1906 - match = try_then_request_module(xt_find_match(NFPROTO_BRIDGE, 1907 - name, 0), "ebt_%s", name); 1908 - if (match == NULL) 1909 - return -ENOENT; 1896 + match = xt_request_find_match(NFPROTO_BRIDGE, name, 0); 1910 1897 if (IS_ERR(match)) 1911 1898 return PTR_ERR(match); 1912 1899 ··· 1922 1915 break;
1923 1916 case EBT_COMPAT_WATCHER: /* fallthrough */ 1924 1917 case EBT_COMPAT_TARGET: 1925 - wt = try_then_request_module(xt_find_target(NFPROTO_BRIDGE, 1926 - name, 0), "ebt_%s", name); 1927 - if (wt == NULL) 1928 - return -ENOENT; 1918 + wt = xt_request_find_target(NFPROTO_BRIDGE, name, 0); 1929 1919 if (IS_ERR(wt)) 1930 1920 return PTR_ERR(wt); 1931 1921 off = xt_compat_target_offset(wt);
+10 -8
net/core/rtnetlink.c
··· 1060 1060 rcu_read_lock(); 1061 1061 cb->seq = net->dev_base_seq; 1062 1062 1063 - nlmsg_parse(cb->nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1064 - ifla_policy); 1063 + if (nlmsg_parse(cb->nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1064 + ifla_policy) >= 0) { 1065 1065 1066 - if (tb[IFLA_EXT_MASK]) 1067 - ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1066 + if (tb[IFLA_EXT_MASK]) 1067 + ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1068 + } 1068 1069 1069 1070 for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) { 1070 1071 idx = 0; ··· 1901 1900 u32 ext_filter_mask = 0; 1902 1901 u16 min_ifinfo_dump_size = 0; 1903 1902 1904 - nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, ifla_policy); 1905 - 1906 - if (tb[IFLA_EXT_MASK]) 1907 - ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1903 + if (nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1904 + ifla_policy) >= 0) { 1905 + if (tb[IFLA_EXT_MASK]) 1906 + ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1907 + } 1908 1908 1909 1909 if (!ext_filter_mask) 1910 1910 return NLMSG_GOODSIZE;
+79 -2
net/ipv4/inetpeer.c
··· 17 17 #include <linux/kernel.h> 18 18 #include <linux/mm.h> 19 19 #include <linux/net.h> 20 + #include <linux/workqueue.h> 20 21 #include <net/ip.h> 21 22 #include <net/inetpeer.h> 22 23 #include <net/secure_seq.h> ··· 67 66 68 67 static struct kmem_cache *peer_cachep __read_mostly; 69 68 69 + static LIST_HEAD(gc_list); 70 + static const int gc_delay = 60 * HZ; 71 + static struct delayed_work gc_work; 72 + static DEFINE_SPINLOCK(gc_lock); 73 + 70 74 #define node_height(x) x->avl_height 71 75 72 76 #define peer_avl_empty ((struct inet_peer *)&peer_fake_node) ··· 108 102 int inet_peer_minttl __read_mostly = 120 * HZ; /* TTL under high load: 120 sec */ 109 103 int inet_peer_maxttl __read_mostly = 10 * 60 * HZ; /* usual time to live: 10 min */ 110 104 105 + static void inetpeer_gc_worker(struct work_struct *work) 106 + { 107 + struct inet_peer *p, *n; 108 + LIST_HEAD(list); 109 + 110 + spin_lock_bh(&gc_lock); 111 + list_replace_init(&gc_list, &list); 112 + spin_unlock_bh(&gc_lock); 113 + 114 + if (list_empty(&list)) 115 + return; 116 + 117 + list_for_each_entry_safe(p, n, &list, gc_list) { 118 + 119 + if(need_resched()) 120 + cond_resched(); 121 + 122 + if (p->avl_left != peer_avl_empty) { 123 + list_add_tail(&p->avl_left->gc_list, &list); 124 + p->avl_left = peer_avl_empty; 125 + } 126 + 127 + if (p->avl_right != peer_avl_empty) { 128 + list_add_tail(&p->avl_right->gc_list, &list); 129 + p->avl_right = peer_avl_empty; 130 + } 131 + 132 + n = list_entry(p->gc_list.next, struct inet_peer, gc_list); 133 + 134 + if (!atomic_read(&p->refcnt)) { 135 + list_del(&p->gc_list); 136 + kmem_cache_free(peer_cachep, p); 137 + } 138 + } 139 + 140 + if (list_empty(&list)) 141 + return; 142 + 143 + spin_lock_bh(&gc_lock); 144 + list_splice(&list, &gc_list); 145 + spin_unlock_bh(&gc_lock); 146 + 147 + schedule_delayed_work(&gc_work, gc_delay); 148 + } 111 149 112 150 /* Called from ip_output.c:ip_init */ 113 151 void __init inet_initpeers(void) ··· 176 126 0, SLAB_HWCACHE_ALIGN | 
SLAB_PANIC, 177 127 NULL); 178 128 129 + INIT_DELAYED_WORK_DEFERRABLE(&gc_work, inetpeer_gc_worker); 179 130 } 180 131 181 132 static int addr_compare(const struct inetpeer_addr *a, ··· 498 447 p->rate_last = 0; 499 448 p->pmtu_expires = 0; 500 449 p->pmtu_orig = 0; 501 - p->redirect_genid = 0; 502 450 memset(&p->redirect_learned, 0, sizeof(p->redirect_learned)); 503 - 451 + INIT_LIST_HEAD(&p->gc_list); 504 452 505 453 /* Link the node. */ 506 454 link_to_pool(p, base); ··· 559 509 return rc; 560 510 } 561 511 EXPORT_SYMBOL(inet_peer_xrlim_allow); 512 + 513 + void inetpeer_invalidate_tree(int family) 514 + { 515 + struct inet_peer *old, *new, *prev; 516 + struct inet_peer_base *base = family_to_base(family); 517 + 518 + write_seqlock_bh(&base->lock); 519 + 520 + old = base->root; 521 + if (old == peer_avl_empty_rcu) 522 + goto out; 523 + 524 + new = peer_avl_empty_rcu; 525 + 526 + prev = cmpxchg(&base->root, old, new); 527 + if (prev == old) { 528 + base->total = 0; 529 + spin_lock(&gc_lock); 530 + list_add_tail(&prev->gc_list, &gc_list); 531 + spin_unlock(&gc_lock); 532 + schedule_delayed_work(&gc_work, gc_delay); 533 + } 534 + 535 + out: 536 + write_sequnlock_bh(&base->lock); 537 + } 538 + EXPORT_SYMBOL(inetpeer_invalidate_tree);
+3 -9
net/ipv4/route.c
··· 132 132 static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20; 133 133 static int ip_rt_min_advmss __read_mostly = 256; 134 134 static int rt_chain_length_max __read_mostly = 20; 135 - static int redirect_genid; 136 135 137 136 static struct delayed_work expires_work; 138 137 static unsigned long expires_ljiffies; ··· 936 937 937 938 get_random_bytes(&shuffle, sizeof(shuffle)); 938 939 atomic_add(shuffle + 1U, &net->ipv4.rt_genid); 939 - redirect_genid++; 940 + inetpeer_invalidate_tree(AF_INET); 940 941 } 941 942 942 943 /* ··· 1484 1485 1485 1486 peer = rt->peer; 1486 1487 if (peer) { 1487 - if (peer->redirect_learned.a4 != new_gw || 1488 - peer->redirect_genid != redirect_genid) { 1488 + if (peer->redirect_learned.a4 != new_gw) { 1489 1489 peer->redirect_learned.a4 = new_gw; 1490 - peer->redirect_genid = redirect_genid; 1491 1490 atomic_inc(&__rt_peer_genid); 1492 1491 } 1493 1492 check_peer_redir(&rt->dst, peer); ··· 1790 1793 if (peer) { 1791 1794 check_peer_pmtu(&rt->dst, peer); 1792 1795 1793 - if (peer->redirect_genid != redirect_genid) 1794 - peer->redirect_learned.a4 = 0; 1795 1796 if (peer->redirect_learned.a4 && 1796 1797 peer->redirect_learned.a4 != rt->rt_gateway) 1797 1798 check_peer_redir(&rt->dst, peer); ··· 1953 1958 dst_init_metrics(&rt->dst, peer->metrics, false); 1954 1959 1955 1960 check_peer_pmtu(&rt->dst, peer); 1956 - if (peer->redirect_genid != redirect_genid) 1957 - peer->redirect_learned.a4 = 0; 1961 + 1958 1962 if (peer->redirect_learned.a4 && 1959 1963 peer->redirect_learned.a4 != rt->rt_gateway) { 1960 1964 rt->rt_gateway = peer->redirect_learned.a4;
+15 -8
net/ipv4/tcp_input.c
··· 1403 1403 1404 1404 BUG_ON(!pcount); 1405 1405 1406 - /* Adjust hint for FACK. Non-FACK is handled in tcp_sacktag_one(). */ 1407 - if (tcp_is_fack(tp) && (skb == tp->lost_skb_hint)) 1406 + /* Adjust counters and hints for the newly sacked sequence 1407 + * range but discard the return value since prev is already 1408 + * marked. We must tag the range first because the seq 1409 + * advancement below implicitly advances 1410 + * tcp_highest_sack_seq() when skb is highest_sack. 1411 + */ 1412 + tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked, 1413 + start_seq, end_seq, dup_sack, pcount); 1414 + 1415 + if (skb == tp->lost_skb_hint) 1408 1416 tp->lost_cnt_hint += pcount; 1409 1417 1410 1418 TCP_SKB_CB(prev)->end_seq += shifted; ··· 1437 1429 skb_shinfo(skb)->gso_size = 0; 1438 1430 skb_shinfo(skb)->gso_type = 0; 1439 1431 } 1440 - 1441 - /* Adjust counters and hints for the newly sacked sequence range but 1442 - * discard the return value since prev is already marked. 1443 - */ 1444 - tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked, 1445 - start_seq, end_seq, dup_sack, pcount); 1446 1432 1447 1433 /* Difference in this won't matter, both ACKed by the same cumul. ACK */ 1448 1434 TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS); ··· 1584 1582 len = pcount * mss; 1585 1583 } 1586 1584 } 1585 + 1586 + /* tcp_sacktag_one() won't SACK-tag ranges below snd_una */ 1587 + if (!after(TCP_SKB_CB(skb)->seq + len, tp->snd_una)) 1588 + goto fallback; 1587 1589 1588 1590 if (!skb_shift(prev, skb, len)) 1589 1591 goto fallback; ··· 2573 2567 2574 2568 if (cnt > packets) { 2575 2569 if ((tcp_is_sack(tp) && !tcp_is_fack(tp)) || 2570 + (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) || 2576 2571 (oldcnt >= packets)) 2577 2572 break; 2578 2573
+4
net/ipv6/addrconf.c
··· 434 434 /* Join all-node multicast group */ 435 435 ipv6_dev_mc_inc(dev, &in6addr_linklocal_allnodes); 436 436 437 + /* Join all-router multicast group if forwarding is set */ 438 + if (ndev->cnf.forwarding && dev && (dev->flags & IFF_MULTICAST)) 439 + ipv6_dev_mc_inc(dev, &in6addr_linklocal_allrouters); 440 + 437 441 return ndev; 438 442 } 439 443
+3
net/mac80211/iface.c
··· 1332 1332 hw_roc = true; 1333 1333 1334 1334 list_for_each_entry(sdata, &local->interfaces, list) { 1335 + if (sdata->vif.type == NL80211_IFTYPE_MONITOR || 1336 + sdata->vif.type == NL80211_IFTYPE_AP_VLAN) 1337 + continue; 1335 1338 if (sdata->old_idle == sdata->vif.bss_conf.idle) 1336 1339 continue; 1337 1340 if (!ieee80211_sdata_running(sdata))
+1 -1
net/mac80211/rate.c
··· 344 344 for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) { 345 345 info->control.rates[i].idx = -1; 346 346 info->control.rates[i].flags = 0; 347 - info->control.rates[i].count = 1; 347 + info->control.rates[i].count = 0; 348 348 } 349 349 350 350 if (sdata->local->hw.flags & IEEE80211_HW_HAS_RATE_CONTROL)
+6 -2
net/netfilter/nf_conntrack_core.c
··· 635 635 636 636 if (del_timer(&ct->timeout)) { 637 637 death_by_timeout((unsigned long)ct); 638 - dropped = 1; 639 - NF_CT_STAT_INC_ATOMIC(net, early_drop); 638 + /* Check if we indeed killed this entry. Reliable event 639 + delivery may have inserted it into the dying list. */ 640 + if (test_bit(IPS_DYING_BIT, &ct->status)) { 641 + dropped = 1; 642 + NF_CT_STAT_INC_ATOMIC(net, early_drop); 643 + } 640 644 } 641 645 nf_ct_put(ct); 642 646 return dropped;
-3
net/netfilter/nf_conntrack_netlink.c
··· 1041 1041 if (!parse_nat_setup) { 1042 1042 #ifdef CONFIG_MODULES 1043 1043 rcu_read_unlock(); 1044 - spin_unlock_bh(&nf_conntrack_lock); 1045 1044 nfnl_unlock(); 1046 1045 if (request_module("nf-nat-ipv4") < 0) { 1047 1046 nfnl_lock(); 1048 - spin_lock_bh(&nf_conntrack_lock); 1049 1047 rcu_read_lock(); 1050 1048 return -EOPNOTSUPP; 1051 1049 } 1052 1050 nfnl_lock(); 1053 - spin_lock_bh(&nf_conntrack_lock); 1054 1051 rcu_read_lock(); 1055 1052 if (nfnetlink_parse_nat_setup_hook) 1056 1053 return -EAGAIN;
+32 -12
net/openvswitch/actions.c
··· 1 1 /* 2 - * Copyright (c) 2007-2011 Nicira Networks. 2 + * Copyright (c) 2007-2012 Nicira Networks. 3 3 * 4 4 * This program is free software; you can redistribute it and/or 5 5 * modify it under the terms of version 2 of the GNU General Public ··· 145 145 inet_proto_csum_replace4(&tcp_hdr(skb)->check, skb, 146 146 *addr, new_addr, 1); 147 147 } else if (nh->protocol == IPPROTO_UDP) { 148 - if (likely(transport_len >= sizeof(struct udphdr))) 149 - inet_proto_csum_replace4(&udp_hdr(skb)->check, skb, 150 - *addr, new_addr, 1); 148 + if (likely(transport_len >= sizeof(struct udphdr))) { 149 + struct udphdr *uh = udp_hdr(skb); 150 + 151 + if (uh->check || skb->ip_summed == CHECKSUM_PARTIAL) { 152 + inet_proto_csum_replace4(&uh->check, skb, 153 + *addr, new_addr, 1); 154 + if (!uh->check) 155 + uh->check = CSUM_MANGLED_0; 156 + } 157 + } 151 158 } 152 159 153 160 csum_replace4(&nh->check, *addr, new_addr); ··· 204 197 skb->rxhash = 0; 205 198 } 206 199 207 - static int set_udp_port(struct sk_buff *skb, 208 - const struct ovs_key_udp *udp_port_key) 200 + static void set_udp_port(struct sk_buff *skb, __be16 *port, __be16 new_port) 201 + { 202 + struct udphdr *uh = udp_hdr(skb); 203 + 204 + if (uh->check && skb->ip_summed != CHECKSUM_PARTIAL) { 205 + set_tp_port(skb, port, new_port, &uh->check); 206 + 207 + if (!uh->check) 208 + uh->check = CSUM_MANGLED_0; 209 + } else { 210 + *port = new_port; 211 + skb->rxhash = 0; 212 + } 213 + } 214 + 215 + static int set_udp(struct sk_buff *skb, const struct ovs_key_udp *udp_port_key) 209 216 { 210 217 struct udphdr *uh; 211 218 int err; ··· 231 210 232 211 uh = udp_hdr(skb); 233 212 if (udp_port_key->udp_src != uh->source) 234 - set_tp_port(skb, &uh->source, udp_port_key->udp_src, &uh->check); 213 + set_udp_port(skb, &uh->source, udp_port_key->udp_src); 235 214 236 215 if (udp_port_key->udp_dst != uh->dest) 237 - set_tp_port(skb, &uh->dest, udp_port_key->udp_dst, &uh->check); 216 + set_udp_port(skb, &uh->dest, udp_port_key->udp_dst);
238 217 239 218 return 0; 240 219 } 241 220 242 - static int set_tcp_port(struct sk_buff *skb, 243 - const struct ovs_key_tcp *tcp_port_key) 221 + static int set_tcp(struct sk_buff *skb, const struct ovs_key_tcp *tcp_port_key) 244 222 { 245 223 struct tcphdr *th; 246 224 int err; ··· 348 328 break; 349 329 350 330 case OVS_KEY_ATTR_TCP: 351 - err = set_tcp_port(skb, nla_data(nested_attr)); 331 + err = set_tcp(skb, nla_data(nested_attr)); 352 332 break; 353 333 354 334 case OVS_KEY_ATTR_UDP: 355 - err = set_udp_port(skb, nla_data(nested_attr)); 335 + err = set_udp(skb, nla_data(nested_attr)); 356 336 break; 357 337 } 358 338
+3
net/openvswitch/datapath.c
··· 1521 1521 vport = ovs_vport_locate(nla_data(a[OVS_VPORT_ATTR_NAME])); 1522 1522 if (!vport) 1523 1523 return ERR_PTR(-ENODEV); 1524 + if (ovs_header->dp_ifindex && 1525 + ovs_header->dp_ifindex != get_dpifindex(vport->dp)) 1526 + return ERR_PTR(-ENODEV); 1524 1527 return vport; 1525 1528 } else if (a[OVS_VPORT_ATTR_PORT_NO]) { 1526 1529 u32 port_no = nla_get_u32(a[OVS_VPORT_ATTR_PORT_NO]);
+17
sound/pci/hda/patch_realtek.c
··· 2068 2068 */ 2069 2069 2070 2070 static void alc_init_special_input_src(struct hda_codec *codec); 2071 + static int alc269_fill_coef(struct hda_codec *codec); 2071 2072 2072 2073 static int alc_init(struct hda_codec *codec) 2073 2074 { 2074 2075 struct alc_spec *spec = codec->spec; 2075 2076 unsigned int i; 2077 + 2078 + if (codec->vendor_id == 0x10ec0269) 2079 + alc269_fill_coef(codec); 2076 2080 2077 2081 alc_fix_pll(codec); 2078 2082 alc_auto_init_amp(codec, spec->init_amp); ··· 4371 4367 ALC882_FIXUP_PB_M5210, 4372 4368 ALC882_FIXUP_ACER_ASPIRE_7736, 4373 4369 ALC882_FIXUP_ASUS_W90V, 4370 + ALC889_FIXUP_CD, 4374 4371 ALC889_FIXUP_VAIO_TT, 4375 4372 ALC888_FIXUP_EEE1601, 4376 4373 ALC882_FIXUP_EAPD, ··· 4496 4491 .type = ALC_FIXUP_PINS, 4497 4492 .v.pins = (const struct alc_pincfg[]) { 4498 4493 { 0x16, 0x99130110 }, /* fix sequence for CLFE */ 4494 + { } 4495 + } 4496 + }, 4497 + [ALC889_FIXUP_CD] = { 4498 + .type = ALC_FIXUP_PINS, 4499 + .v.pins = (const struct alc_pincfg[]) { 4500 + { 0x1c, 0x993301f0 }, /* CD */ 4499 4501 { } 4500 4502 } 4501 4503 }, ··· 4662 4650 4663 4651 SND_PCI_QUIRK(0x1071, 0x8258, "Evesham Voyaeger", ALC882_FIXUP_EAPD), 4664 4652 SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3), 4653 + SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3", ALC889_FIXUP_CD), 4665 4654 SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX), 4666 4655 SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD), 4667 4656 SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD), ··· 5480 5467 5481 5468 static int alc269_fill_coef(struct hda_codec *codec) 5482 5469 { 5470 + struct alc_spec *spec = codec->spec; 5483 5471 int val; 5472 + 5473 + if (spec->codec_variant != ALC269_TYPE_ALC269VB) 5474 + return 0; 5484 5475 5485 5476 if ((alc_get_coef0(codec) & 0x00ff) < 0x015) { 5486 5477 alc_write_coef_idx(codec, 0xf, 0x960b);
+1
sound/pci/rme9652/hdspm.c
··· 6333 6333 6334 6334 hw->ops.open = snd_hdspm_hwdep_dummy_op; 6335 6335 hw->ops.ioctl = snd_hdspm_hwdep_ioctl; 6336 + hw->ops.ioctl_compat = snd_hdspm_hwdep_ioctl; 6336 6337 hw->ops.release = snd_hdspm_hwdep_dummy_op; 6337 6338 6338 6339 return 0;
+2 -2
sound/soc/samsung/neo1973_wm8753.c
··· 367 367 .platform_name = "samsung-audio", 368 368 .cpu_dai_name = "s3c24xx-iis", 369 369 .codec_dai_name = "wm8753-hifi", 370 - .codec_name = "wm8753-codec.0-001a", 370 + .codec_name = "wm8753.0-001a", 371 371 .init = neo1973_wm8753_init, 372 372 .ops = &neo1973_hifi_ops, 373 373 }, ··· 376 376 .stream_name = "Voice", 377 377 .cpu_dai_name = "dfbmcs320-pcm", 378 378 .codec_dai_name = "wm8753-voice", 379 - .codec_name = "wm8753-codec.0-001a", 379 + .codec_name = "wm8753.0-001a", 380 380 .ops = &neo1973_voice_ops, 381 381 }, 382 382 };
+21 -10
tools/perf/builtin-record.c
··· 204 204 205 205 if (opts->group && pos != first) 206 206 group_fd = first->fd; 207 + fallback_missing_features: 208 + if (opts->exclude_guest_missing) 209 + attr->exclude_guest = attr->exclude_host = 0; 207 210 retry_sample_id: 208 211 attr->sample_id_all = opts->sample_id_all_avail ? 1 : 0; 209 212 try_again: ··· 220 217 } else if (err == ENODEV && opts->cpu_list) { 221 218 die("No such device - did you specify" 222 219 " an out-of-range profile CPU?\n"); 223 - } else if (err == EINVAL && opts->sample_id_all_avail) { 224 - /* 225 - * Old kernel, no attr->sample_id_type_all field 226 - */ 227 - opts->sample_id_all_avail = false; 228 - if (!opts->sample_time && !opts->raw_samples && !time_needed) 229 - attr->sample_type &= ~PERF_SAMPLE_TIME; 220 + } else if (err == EINVAL) { 221 + if (!opts->exclude_guest_missing && 222 + (attr->exclude_guest || attr->exclude_host)) { 223 + pr_debug("Old kernel, cannot exclude " 224 + "guest or host samples.\n"); 225 + opts->exclude_guest_missing = true; 226 + goto fallback_missing_features; 227 + } else if (opts->sample_id_all_avail) { 228 + /* 229 + * Old kernel, no attr->sample_id_type_all field 230 + */ 231 + opts->sample_id_all_avail = false; 232 + if (!opts->sample_time && !opts->raw_samples && !time_needed) 233 + attr->sample_type &= ~PERF_SAMPLE_TIME; 230 234 231 - goto retry_sample_id; 235 + goto retry_sample_id; 236 + } 232 237 } 233 238 234 239 /* ··· 514 503 return err; 515 504 } 516 505 517 - if (!!rec->no_buildid 506 + if (!rec->no_buildid 518 507 && !perf_header__has_feat(&session->header, HEADER_BUILD_ID)) { 519 - pr_err("Couldn't generating buildids. " 508 + pr_err("Couldn't generate buildids. " 520 509 "Use --no-buildid to profile anyway.\n"); 521 510 return -1; 522 511 }
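The builtin-record.c change above (and the matching builtin-top.c one below) uses a common perf idiom for probing kernel support: attempt the event open with all requested attr bits, and on EINVAL drop one optional feature at a time, record that it is missing, and retry from a fallback label. A minimal sketch of the pattern, with a fake stand-in for sys_perf_event_open() (the struct, function names, and return values here are invented for illustration):

```c
#include <errno.h>
#include <stdbool.h>

/* Stand-in for perf_event_attr: only the bits the sketch needs. */
struct fake_attr {
    bool exclude_guest;
    bool exclude_host;
};

/* Stand-in for sys_perf_event_open(): an "old kernel" rejects attr
 * bits it does not know about with EINVAL. */
static int fake_event_open(const struct fake_attr *attr, bool kernel_is_old)
{
    if (kernel_is_old && (attr->exclude_guest || attr->exclude_host))
        return -EINVAL;
    return 42;                      /* any non-negative value = a valid fd */
}

/* The patch's pattern: on EINVAL, clear the suspect feature, remember
 * that the kernel lacks it, and retry from the fallback label. */
static int open_with_fallback(struct fake_attr *attr, bool kernel_is_old,
                              bool *exclude_guest_missing)
{
    int fd;

fallback_missing_features:
    if (*exclude_guest_missing)
        attr->exclude_guest = attr->exclude_host = false;

    fd = fake_event_open(attr, kernel_is_old);
    if (fd == -EINVAL && !*exclude_guest_missing &&
        (attr->exclude_guest || attr->exclude_host)) {
        *exclude_guest_missing = true;
        goto fallback_missing_features;
    }
    return fd;
}
```

Remembering the failure in a flag (exclude_guest_missing) matters because the open is performed per event and per CPU: later opens skip straight to the degraded attr instead of re-probing.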
+17 -6
tools/perf/builtin-top.c
··· 857 857 attr->mmap = 1; 858 858 attr->comm = 1; 859 859 attr->inherit = top->inherit; 860 + fallback_missing_features: 861 + if (top->exclude_guest_missing) 862 + attr->exclude_guest = attr->exclude_host = 0; 860 863 retry_sample_id: 861 864 attr->sample_id_all = top->sample_id_all_avail ? 1 : 0; 862 865 try_again: ··· 871 868 if (err == EPERM || err == EACCES) { 872 869 ui__error_paranoid(); 873 870 goto out_err; 874 - } else if (err == EINVAL && top->sample_id_all_avail) { 875 - /* 876 - * Old kernel, no attr->sample_id_type_all field 877 - */ 878 - top->sample_id_all_avail = false; 879 - goto retry_sample_id; 871 + } else if (err == EINVAL) { 872 + if (!top->exclude_guest_missing && 873 + (attr->exclude_guest || attr->exclude_host)) { 874 + pr_debug("Old kernel, cannot exclude " 875 + "guest or host samples.\n"); 876 + top->exclude_guest_missing = true; 877 + goto fallback_missing_features; 878 + } else if (top->sample_id_all_avail) { 879 + /* 880 + * Old kernel, no attr->sample_id_type_all field 881 + */ 882 + top->sample_id_all_avail = false; 883 + goto retry_sample_id; 884 + } 880 885 } 881 886 /* 882 887 * If it's cycles then fall back to hrtimer
+1
tools/perf/perf.h
··· 199 199 bool sample_address; 200 200 bool sample_time; 201 201 bool sample_id_all_avail; 202 + bool exclude_guest_missing; 202 203 bool system_wide; 203 204 bool period; 204 205 unsigned int freq;
+1
tools/perf/util/top.h
··· 34 34 bool inherit; 35 35 bool group; 36 36 bool sample_id_all_avail; 37 + bool exclude_guest_missing; 37 38 bool dump_symtab; 38 39 const char *cpu_list; 39 40 struct hist_entry *sym_filter_entry;
+1 -1
tools/perf/util/util.c
··· 6 6 * XXX We need to find a better place for these things... 7 7 */ 8 8 bool perf_host = true; 9 - bool perf_guest = true; 9 + bool perf_guest = false; 10 10 11 11 void event_attr_init(struct perf_event_attr *attr) 12 12 {