Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'perf/urgent' into perf/core

Merge reason: We are going to queue up a dependent patch.

Signed-off-by: Ingo Molnar <mingo@elte.hu>

+1888 -997
+3 -3
Documentation/devicetree/bindings/gpio/led.txt
··· 7 7 node's name represents the name of the corresponding LED. 8 8 9 9 LED sub-node properties: 10 - - gpios : Should specify the LED's GPIO, see "Specifying GPIO information 11 - for devices" in Documentation/devicetree/booting-without-of.txt. Active 12 - low LEDs should be indicated using flags in the GPIO specifier. 10 + - gpios : Should specify the LED's GPIO, see "gpios property" in 11 + Documentation/devicetree/gpio.txt. Active low LEDs should be 12 + indicated using flags in the GPIO specifier. 13 13 - label : (optional) The label for this LED. If omitted, the label is 14 14 taken from the node name (excluding the unit address). 15 15 - linux,default-trigger : (optional) This parameter, if present, is a
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 30 30 nintendo Nintendo 31 31 nvidia NVIDIA 32 32 nxp NXP Semiconductors 33 + picochip Picochip Ltd 33 34 powervr Imagination Technologies 34 35 qcom Qualcomm, Inc. 35 36 ramtron Ramtron International
+20 -6
Documentation/hwmon/jc42
··· 7 7 Addresses scanned: I2C 0x18 - 0x1f 8 8 Datasheets: 9 9 http://www.analog.com/static/imported-files/data_sheets/ADT7408.pdf 10 - * IDT TSE2002B3, TS3000B3 11 - Prefix: 'tse2002b3', 'ts3000b3' 10 + * Atmel AT30TS00 11 + Prefix: 'at30ts00' 12 12 Addresses scanned: I2C 0x18 - 0x1f 13 13 Datasheets: 14 - http://www.idt.com/products/getdoc.cfm?docid=18715691 15 - http://www.idt.com/products/getdoc.cfm?docid=18715692 14 + http://www.atmel.com/Images/doc8585.pdf 15 + * IDT TSE2002B3, TSE2002GB2, TS3000B3, TS3000GB2 16 + Prefix: 'tse2002', 'ts3000' 17 + Addresses scanned: I2C 0x18 - 0x1f 18 + Datasheets: 19 + http://www.idt.com/sites/default/files/documents/IDT_TSE2002B3C_DST_20100512_120303152056.pdf 20 + http://www.idt.com/sites/default/files/documents/IDT_TSE2002GB2A1_DST_20111107_120303145914.pdf 21 + http://www.idt.com/sites/default/files/documents/IDT_TS3000B3A_DST_20101129_120303152013.pdf 22 + http://www.idt.com/sites/default/files/documents/IDT_TS3000GB2A1_DST_20111104_120303151012.pdf 16 23 * Maxim MAX6604 17 24 Prefix: 'max6604' 18 25 Addresses scanned: I2C 0x18 - 0x1f 19 26 Datasheets: 20 27 http://datasheets.maxim-ic.com/en/ds/MAX6604.pdf 21 - * Microchip MCP9805, MCP98242, MCP98243, MCP9843 22 - Prefixes: 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843' 28 + * Microchip MCP9804, MCP9805, MCP98242, MCP98243, MCP9843 29 + Prefixes: 'mcp9804', 'mcp9805', 'mcp98242', 'mcp98243', 'mcp9843' 23 30 Addresses scanned: I2C 0x18 - 0x1f 24 31 Datasheets: 32 + http://ww1.microchip.com/downloads/en/DeviceDoc/22203C.pdf 25 33 http://ww1.microchip.com/downloads/en/DeviceDoc/21977b.pdf 26 34 http://ww1.microchip.com/downloads/en/DeviceDoc/21996a.pdf 27 35 http://ww1.microchip.com/downloads/en/DeviceDoc/22153c.pdf ··· 56 48 Datasheets: 57 49 http://www.st.com/stonline/products/literature/ds/13447/stts424.pdf 58 50 http://www.st.com/stonline/products/literature/ds/13448/stts424e02.pdf 51 + * ST Microelectronics STTS2002, STTS3000 52 + Prefix: 'stts2002', 'stts3000' 53 + Addresses scanned: I2C 0x18 - 0x1f 54 + Datasheets: 55 + http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATASHEET/CD00225278.pdf 56 + http://www.st.com/internet/com/TECHNICAL_RESOURCES/TECHNICAL_LITERATURE/DATA_BRIEF/CD00270920.pdf 59 57 * JEDEC JC 42.4 compliant temperature sensor chips 60 58 Prefix: 'jc42' 61 59 Addresses scanned: I2C 0x18 - 0x1f
+2 -1
Documentation/input/alps.txt
··· 13 13 14 14 All ALPS touchpads should respond to the "E6 report" command sequence: 15 15 E8-E6-E6-E6-E9. An ALPS touchpad should respond with either 00-00-0A or 16 - 00-00-64. 16 + 00-00-64 if no buttons are pressed. The bits 0-2 of the first byte will be 1s 17 + if some buttons are pressed. 17 18 18 19 If the E6 report is successful, the touchpad model is identified using the "E7 19 20 report" sequence: E8-E7-E7-E7-E9. The response is the model signature and is
+6
Documentation/kernel-parameters.txt
··· 2211 2211 2212 2212 default: off. 2213 2213 2214 + printk.always_kmsg_dump= 2215 + Trigger kmsg_dump for cases other than kernel oops or 2216 + panics 2217 + Format: <bool> (1/Y/y=enable, 0/N/n=disable) 2218 + default: disabled 2219 + 2214 2220 printk.time= Show timing data prefixed to each printk message line 2215 2221 Format: <bool> (1/Y/y=enable, 0/N/n=disable) 2216 2222
+4 -4
MAINTAINERS
··· 962 962 F: drivers/platform/msm/ 963 963 F: drivers/*/pm8???-* 964 964 F: include/linux/mfd/pm8xxx/ 965 - T: git git://codeaurora.org/quic/kernel/davidb/linux-msm.git 965 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/davidb/linux-msm.git 966 966 S: Maintained 967 967 968 968 ARM/TOSA MACHINE SUPPORT ··· 1310 1310 F: include/linux/atm* 1311 1311 1312 1312 ATMEL AT91 MCI DRIVER 1313 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 1313 + M: Ludovic Desroches <ludovic.desroches@atmel.com> 1314 1314 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 1315 1315 W: http://www.atmel.com/products/AT91/ 1316 1316 W: http://www.at91.com/ ··· 1318 1318 F: drivers/mmc/host/at91_mci.c 1319 1319 1320 1320 ATMEL AT91 / AT32 MCI DRIVER 1321 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 1321 + M: Ludovic Desroches <ludovic.desroches@atmel.com> 1322 1322 S: Maintained 1323 1323 F: drivers/mmc/host/atmel-mci.c 1324 1324 F: drivers/mmc/host/atmel-mci-regs.h ··· 7271 7271 M: Wim Van Sebroeck <wim@iguana.be> 7272 7272 L: linux-watchdog@vger.kernel.org 7273 7273 W: http://www.linux-watchdog.org/ 7274 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/wim/linux-2.6-watchdog.git 7274 + T: git git://www.linux-watchdog.org/linux-watchdog.git 7275 7275 S: Maintained 7276 7276 F: Documentation/watchdog/ 7277 7277 F: drivers/watchdog/
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 3 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc7 5 5 NAME = Saber-toothed Squirrel 6 6 7 7 # *DOCUMENTATION*
+1 -1
arch/alpha/include/asm/futex.h
··· 108 108 " lda $31,3b-2b(%0)\n" 109 109 " .previous\n" 110 110 : "+r"(ret), "=&r"(prev), "=&r"(cmp) 111 - : "r"(uaddr), "r"((long)oldval), "r"(newval) 111 + : "r"(uaddr), "r"((long)(int)oldval), "r"(newval) 112 112 : "memory"); 113 113 114 114 *uval = prev;
+1 -1
arch/arm/Kconfig
··· 1280 1280 depends on CPU_V7 1281 1281 help 1282 1282 This option enables the workaround for the 743622 Cortex-A9 1283 - (r2p0..r2p2) erratum. Under very rare conditions, a faulty 1283 + (r2p*) erratum. Under very rare conditions, a faulty 1284 1284 optimisation in the Cortex-A9 Store Buffer may lead to data 1285 1285 corruption. This workaround sets a specific bit in the diagnostic 1286 1286 register of the Cortex-A9 which disables the Store Buffer
+1
arch/arm/boot/.gitignore
··· 3 3 xipImage 4 4 bootpImage 5 5 uImage 6 + *.dtb
+1 -1
arch/arm/include/asm/pmu.h
··· 134 134 135 135 u64 armpmu_event_update(struct perf_event *event, 136 136 struct hw_perf_event *hwc, 137 - int idx, int overflow); 137 + int idx); 138 138 139 139 int armpmu_event_set_period(struct perf_event *event, 140 140 struct hw_perf_event *hwc,
+1
arch/arm/kernel/ecard.c
··· 242 242 243 243 memcpy(dst_pgd, src_pgd, sizeof(pgd_t) * (EASI_SIZE / PGDIR_SIZE)); 244 244 245 + vma.vm_flags = VM_EXEC; 245 246 vma.vm_mm = mm; 246 247 247 248 flush_tlb_range(&vma, IO_START, IO_START + IO_SIZE);
+34 -11
arch/arm/kernel/perf_event.c
··· 180 180 u64 181 181 armpmu_event_update(struct perf_event *event, 182 182 struct hw_perf_event *hwc, 183 - int idx, int overflow) 183 + int idx) 184 184 { 185 185 struct arm_pmu *armpmu = to_arm_pmu(event->pmu); 186 186 u64 delta, prev_raw_count, new_raw_count; ··· 193 193 new_raw_count) != prev_raw_count) 194 194 goto again; 195 195 196 - new_raw_count &= armpmu->max_period; 197 - prev_raw_count &= armpmu->max_period; 198 - 199 - if (overflow) 200 - delta = armpmu->max_period - prev_raw_count + new_raw_count + 1; 201 - else 202 - delta = new_raw_count - prev_raw_count; 196 + delta = (new_raw_count - prev_raw_count) & armpmu->max_period; 203 197 204 198 local64_add(delta, &event->count); 205 199 local64_sub(delta, &hwc->period_left); ··· 210 216 if (hwc->idx < 0) 211 217 return; 212 218 213 - armpmu_event_update(event, hwc, hwc->idx, 0); 219 + armpmu_event_update(event, hwc, hwc->idx); 214 220 } 215 221 216 222 static void ··· 226 232 if (!(hwc->state & PERF_HES_STOPPED)) { 227 233 armpmu->disable(hwc, hwc->idx); 228 234 barrier(); /* why? */ 229 - armpmu_event_update(event, hwc, hwc->idx, 0); 235 + armpmu_event_update(event, hwc, hwc->idx); 230 236 hwc->state |= PERF_HES_STOPPED | PERF_HES_UPTODATE; 231 237 } 232 238 } ··· 512 518 hwc->config_base |= (unsigned long)mapping; 513 519 514 520 if (!hwc->sample_period) { 515 - hwc->sample_period = armpmu->max_period; 521 + /* 522 + * For non-sampling runs, limit the sample_period to half 523 + * of the counter width. That way, the new counter value 524 + * is far less likely to overtake the previous one unless 525 + * you have some serious IRQ latency issues. 526 + */ 527 + hwc->sample_period = armpmu->max_period >> 1; 516 528 hwc->last_period = hwc->sample_period; 517 529 local64_set(&hwc->period_left, hwc->sample_period); 518 530 } ··· 680 680 } 681 681 682 682 /* 683 + * PMU hardware loses all context when a CPU goes offline. 684 + * When a CPU is hotplugged back in, since some hardware registers are 685 + * UNKNOWN at reset, the PMU must be explicitly reset to avoid reading 686 + * junk values out of them. 687 + */ 688 + static int __cpuinit pmu_cpu_notify(struct notifier_block *b, 689 + unsigned long action, void *hcpu) 690 + { 691 + if ((action & ~CPU_TASKS_FROZEN) != CPU_STARTING) 692 + return NOTIFY_DONE; 693 + 694 + if (cpu_pmu && cpu_pmu->reset) 695 + cpu_pmu->reset(NULL); 696 + 697 + return NOTIFY_OK; 698 + } 699 + 700 + static struct notifier_block __cpuinitdata pmu_cpu_notifier = { 701 + .notifier_call = pmu_cpu_notify, 702 + }; 703 + 704 + /* 683 705 * CPU PMU identification and registration. 684 706 */ 685 707 static int __init ··· 752 730 pr_info("enabled with %s PMU driver, %d counters available\n", 753 731 cpu_pmu->name, cpu_pmu->num_events); 754 732 cpu_pmu_init(cpu_pmu); 733 + register_cpu_notifier(&pmu_cpu_notifier); 755 734 armpmu_register(cpu_pmu, "cpu", PERF_TYPE_RAW); 756 735 } else { 757 736 pr_info("no hardware support available\n");
+3 -19
arch/arm/kernel/perf_event_v6.c
··· 467 467 raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 468 468 } 469 469 470 - static int counter_is_active(unsigned long pmcr, int idx) 471 - { 472 - unsigned long mask = 0; 473 - if (idx == ARMV6_CYCLE_COUNTER) 474 - mask = ARMV6_PMCR_CCOUNT_IEN; 475 - else if (idx == ARMV6_COUNTER0) 476 - mask = ARMV6_PMCR_COUNT0_IEN; 477 - else if (idx == ARMV6_COUNTER1) 478 - mask = ARMV6_PMCR_COUNT1_IEN; 479 - 480 - if (mask) 481 - return pmcr & mask; 482 - 483 - WARN_ONCE(1, "invalid counter number (%d)\n", idx); 484 - return 0; 485 - } 486 - 487 470 static irqreturn_t 488 471 armv6pmu_handle_irq(int irq_num, 489 472 void *dev) ··· 496 513 struct perf_event *event = cpuc->events[idx]; 497 514 struct hw_perf_event *hwc; 498 515 499 - if (!counter_is_active(pmcr, idx)) 516 + /* Ignore if we don't have an event. */ 517 + if (!event) 500 518 continue; 501 519 502 520 /* ··· 508 524 continue; 509 525 510 526 hwc = &event->hw; 511 - armpmu_event_update(event, hwc, idx, 1); 527 + armpmu_event_update(event, hwc, idx); 512 528 data.period = event->hw.last_period; 513 529 if (!armpmu_event_set_period(event, hwc, idx)) 514 530 continue;
+10 -1
arch/arm/kernel/perf_event_v7.c
··· 809 809 810 810 counter = ARMV7_IDX_TO_COUNTER(idx); 811 811 asm volatile("mcr p15, 0, %0, c9, c14, 2" : : "r" (BIT(counter))); 812 + isb(); 813 + /* Clear the overflow flag in case an interrupt is pending. */ 814 + asm volatile("mcr p15, 0, %0, c9, c12, 3" : : "r" (BIT(counter))); 815 + isb(); 816 + 812 817 return idx; 813 818 } 814 819 ··· 960 955 struct perf_event *event = cpuc->events[idx]; 961 956 struct hw_perf_event *hwc; 962 957 958 + /* Ignore if we don't have an event. */ 959 + if (!event) 960 + continue; 961 + 963 962 /* 964 963 * We have a single interrupt for all counters. Check that 965 964 * each counter has overflowed before we process it. ··· 972 963 continue; 973 964 974 965 hwc = &event->hw; 975 - armpmu_event_update(event, hwc, idx, 1); 966 + armpmu_event_update(event, hwc, idx); 976 967 data.period = event->hw.last_period; 977 968 if (!armpmu_event_set_period(event, hwc, idx)) 978 969 continue;
+16 -4
arch/arm/kernel/perf_event_xscale.c
··· 255 255 struct perf_event *event = cpuc->events[idx]; 256 256 struct hw_perf_event *hwc; 257 257 258 + if (!event) 259 + continue; 260 + 258 261 if (!xscale1_pmnc_counter_has_overflowed(pmnc, idx)) 259 262 continue; 260 263 261 264 hwc = &event->hw; 262 - armpmu_event_update(event, hwc, idx, 1); 265 + armpmu_event_update(event, hwc, idx); 263 266 data.period = event->hw.last_period; 264 267 if (!armpmu_event_set_period(event, hwc, idx)) 265 268 continue; ··· 595 592 struct perf_event *event = cpuc->events[idx]; 596 593 struct hw_perf_event *hwc; 597 594 598 - if (!xscale2_pmnc_counter_has_overflowed(pmnc, idx)) 595 + if (!event) 596 + continue; 597 + 598 + if (!xscale2_pmnc_counter_has_overflowed(of_flags, idx)) 599 599 continue; 600 600 601 601 hwc = &event->hw; 602 - armpmu_event_update(event, hwc, idx, 1); 602 + armpmu_event_update(event, hwc, idx); 603 603 data.period = event->hw.last_period; 604 604 if (!armpmu_event_set_period(event, hwc, idx)) 605 605 continue; ··· 669 663 static void 670 664 xscale2pmu_disable_event(struct hw_perf_event *hwc, int idx) 671 665 { 672 - unsigned long flags, ien, evtsel; 666 + unsigned long flags, ien, evtsel, of_flags; 673 667 struct pmu_hw_events *events = cpu_pmu->get_hw_events(); 674 668 675 669 ien = xscale2pmu_read_int_enable(); ··· 678 672 switch (idx) { 679 673 case XSCALE_CYCLE_COUNTER: 680 674 ien &= ~XSCALE2_CCOUNT_INT_EN; 675 + of_flags = XSCALE2_CCOUNT_OVERFLOW; 681 676 break; 682 677 case XSCALE_COUNTER0: 683 678 ien &= ~XSCALE2_COUNT0_INT_EN; 684 679 evtsel &= ~XSCALE2_COUNT0_EVT_MASK; 685 680 evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT0_EVT_SHFT; 681 + of_flags = XSCALE2_COUNT0_OVERFLOW; 686 682 break; 687 683 case XSCALE_COUNTER1: 688 684 ien &= ~XSCALE2_COUNT1_INT_EN; 689 685 evtsel &= ~XSCALE2_COUNT1_EVT_MASK; 690 686 evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT1_EVT_SHFT; 687 + of_flags = XSCALE2_COUNT1_OVERFLOW; 691 688 break; 692 689 case XSCALE_COUNTER2: 693 690 ien &= ~XSCALE2_COUNT2_INT_EN; 694 691 evtsel &= ~XSCALE2_COUNT2_EVT_MASK; 695 692 evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT2_EVT_SHFT; 693 + of_flags = XSCALE2_COUNT2_OVERFLOW; 696 694 break; 697 695 case XSCALE_COUNTER3: 698 696 ien &= ~XSCALE2_COUNT3_INT_EN; 699 697 evtsel &= ~XSCALE2_COUNT3_EVT_MASK; 700 698 evtsel |= XSCALE_PERFCTR_UNUSED << XSCALE2_COUNT3_EVT_SHFT; 699 + of_flags = XSCALE2_COUNT3_OVERFLOW; 701 700 break; 702 701 default: 703 702 WARN_ONCE(1, "invalid counter number (%d)\n", idx); ··· 712 701 raw_spin_lock_irqsave(&events->pmu_lock, flags); 713 702 xscale2pmu_write_event_select(evtsel); 714 703 xscale2pmu_write_int_enable(ien); 704 + xscale2pmu_write_overflow_flags(of_flags); 715 705 raw_spin_unlock_irqrestore(&events->pmu_lock, flags); 716 706 } 717 707
+10 -9
arch/arm/mach-at91/at91sam9g45_devices.c
··· 38 38 #if defined(CONFIG_AT_HDMAC) || defined(CONFIG_AT_HDMAC_MODULE) 39 39 static u64 hdmac_dmamask = DMA_BIT_MASK(32); 40 40 41 - static struct at_dma_platform_data atdma_pdata = { 42 - .nr_channels = 8, 43 - }; 44 - 45 41 static struct resource hdmac_resources[] = { 46 42 [0] = { 47 43 .start = AT91SAM9G45_BASE_DMA, ··· 52 56 }; 53 57 54 58 static struct platform_device at_hdmac_device = { 55 - .name = "at_hdmac", 59 + .name = "at91sam9g45_dma", 56 60 .id = -1, 57 61 .dev = { 58 62 .dma_mask = &hdmac_dmamask, 59 63 .coherent_dma_mask = DMA_BIT_MASK(32), 60 - .platform_data = &atdma_pdata, 61 64 }, 62 65 .resource = hdmac_resources, 63 66 .num_resources = ARRAY_SIZE(hdmac_resources), ··· 64 69 65 70 void __init at91_add_device_hdmac(void) 66 71 { 67 - dma_cap_set(DMA_MEMCPY, atdma_pdata.cap_mask); 68 - dma_cap_set(DMA_SLAVE, atdma_pdata.cap_mask); 69 - platform_device_register(&at_hdmac_device); 72 + #if defined(CONFIG_OF) 73 + struct device_node *of_node = 74 + of_find_node_by_name(NULL, "dma-controller"); 75 + 76 + if (of_node) 77 + of_node_put(of_node); 78 + else 79 + #endif 80 + platform_device_register(&at_hdmac_device); 70 81 } 71 82 #else 72 83 void __init at91_add_device_hdmac(void) {}
+1 -7
arch/arm/mach-at91/at91sam9rl_devices.c
··· 33 33 #if defined(CONFIG_AT_HDMAC) || defined(CONFIG_AT_HDMAC_MODULE) 34 34 static u64 hdmac_dmamask = DMA_BIT_MASK(32); 35 35 36 - static struct at_dma_platform_data atdma_pdata = { 37 - .nr_channels = 2, 38 - }; 39 - 40 36 static struct resource hdmac_resources[] = { 41 37 [0] = { 42 38 .start = AT91SAM9RL_BASE_DMA, ··· 47 51 }; 48 52 49 53 static struct platform_device at_hdmac_device = { 50 - .name = "at_hdmac", 54 + .name = "at91sam9rl_dma", 51 55 .id = -1, 52 56 .dev = { 53 57 .dma_mask = &hdmac_dmamask, 54 58 .coherent_dma_mask = DMA_BIT_MASK(32), 55 - .platform_data = &atdma_pdata, 56 59 }, 57 60 .resource = hdmac_resources, 58 61 .num_resources = ARRAY_SIZE(hdmac_resources), ··· 59 64 60 65 void __init at91_add_device_hdmac(void) 61 66 { 62 - dma_cap_set(DMA_MEMCPY, atdma_pdata.cap_mask); 63 67 platform_device_register(&at_hdmac_device); 64 68 } 65 69 #else
+2
arch/arm/mach-ep93xx/vision_ep9307.c
··· 34 34 #include <mach/ep93xx_spi.h> 35 35 #include <mach/gpio-ep93xx.h> 36 36 37 + #include <asm/hardware/vic.h> 37 38 #include <asm/mach-types.h> 38 39 #include <asm/mach/map.h> 39 40 #include <asm/mach/arch.h> ··· 362 361 .atag_offset = 0x100, 363 362 .map_io = vision_map_io, 364 363 .init_irq = ep93xx_init_irq, 364 + .handle_irq = vic_handle_irq, 365 365 .timer = &ep93xx_timer, 366 366 .init_machine = vision_init_machine, 367 367 .restart = ep93xx_restart,
+2
arch/arm/mach-exynos/mach-universal_c210.c
··· 13 13 #include <linux/i2c.h> 14 14 #include <linux/gpio_keys.h> 15 15 #include <linux/gpio.h> 16 + #include <linux/interrupt.h> 16 17 #include <linux/fb.h> 17 18 #include <linux/mfd/max8998.h> 18 19 #include <linux/regulator/machine.h> ··· 596 595 .threshold = 0x28, 597 596 .voltage = 2800000, /* 2.8V */ 598 597 .orient = MXT_DIAGONAL, 598 + .irqflags = IRQF_TRIGGER_FALLING, 599 599 }; 600 600 601 601 static struct i2c_board_info i2c3_devs[] __initdata = {
+1 -1
arch/arm/mach-lpc32xx/include/mach/irqs.h
··· 61 61 */ 62 62 #define IRQ_LPC32XX_JTAG_COMM_TX LPC32XX_SIC1_IRQ(1) 63 63 #define IRQ_LPC32XX_JTAG_COMM_RX LPC32XX_SIC1_IRQ(2) 64 - #define IRQ_LPC32XX_GPI_11 LPC32XX_SIC1_IRQ(4) 64 + #define IRQ_LPC32XX_GPI_28 LPC32XX_SIC1_IRQ(4) 65 65 #define IRQ_LPC32XX_TS_P LPC32XX_SIC1_IRQ(6) 66 66 #define IRQ_LPC32XX_TS_IRQ LPC32XX_SIC1_IRQ(7) 67 67 #define IRQ_LPC32XX_TS_AUX LPC32XX_SIC1_IRQ(8)
+20 -5
arch/arm/mach-lpc32xx/irq.c
··· 118 118 .event_group = &lpc32xx_event_pin_regs, 119 119 .mask = LPC32XX_CLKPWR_EXTSRC_GPI_06_BIT, 120 120 }, 121 + [IRQ_LPC32XX_GPI_28] = { 122 + .event_group = &lpc32xx_event_pin_regs, 123 + .mask = LPC32XX_CLKPWR_EXTSRC_GPI_28_BIT, 124 + }, 121 125 [IRQ_LPC32XX_GPIO_00] = { 122 126 .event_group = &lpc32xx_event_int_regs, 123 127 .mask = LPC32XX_CLKPWR_INTSRC_GPIO_00_BIT, ··· 309 305 310 306 if (state) 311 307 eventreg |= lpc32xx_events[d->irq].mask; 312 - else 308 + else { 313 309 eventreg &= ~lpc32xx_events[d->irq].mask; 310 + 311 + /* 312 + * When disabling the wakeup, clear the latched 313 + * event 314 + */ 315 + __raw_writel(lpc32xx_events[d->irq].mask, 316 + lpc32xx_events[d->irq]. 317 + event_group->rawstat_reg); 318 + } 314 319 315 320 __raw_writel(eventreg, 316 321 lpc32xx_events[d->irq].event_group->enab_reg); ··· 393 380 394 381 /* Setup SIC1 */ 395 382 __raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_SIC1_BASE)); 396 - __raw_writel(MIC_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC1_BASE)); 397 - __raw_writel(MIC_ATR_DEFAULT, LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC1_BASE)); 383 + __raw_writel(SIC1_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC1_BASE)); 384 + __raw_writel(SIC1_ATR_DEFAULT, 385 + LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC1_BASE)); 398 386 399 387 /* Setup SIC2 */ 400 388 __raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_SIC2_BASE)); 401 - __raw_writel(MIC_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC2_BASE)); 402 - __raw_writel(MIC_ATR_DEFAULT, LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC2_BASE)); 389 + __raw_writel(SIC2_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC2_BASE)); 390 + __raw_writel(SIC2_ATR_DEFAULT, 391 + LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC2_BASE)); 403 392 404 393 /* Configure supported IRQ's */ 405 394 for (i = 0; i < NR_IRQS; i++) {
+19 -1
arch/arm/mach-lpc32xx/serial.c
··· 88 88 char *uart_ck_name; 89 89 u32 ck_mode_mask; 90 90 void __iomem *pdiv_clk_reg; 91 + resource_size_t mapbase; 91 92 }; 92 93 93 94 static struct uartinit uartinit_data[] __initdata = { ··· 98 97 .ck_mode_mask = 99 98 LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 5), 100 99 .pdiv_clk_reg = LPC32XX_CLKPWR_UART5_CLK_CTRL, 100 + .mapbase = LPC32XX_UART5_BASE, 101 101 }, 102 102 #endif 103 103 #ifdef CONFIG_ARCH_LPC32XX_UART3_SELECT ··· 107 105 .ck_mode_mask = 108 106 LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 3), 109 107 .pdiv_clk_reg = LPC32XX_CLKPWR_UART3_CLK_CTRL, 108 + .mapbase = LPC32XX_UART3_BASE, 110 109 }, 111 110 #endif 112 111 #ifdef CONFIG_ARCH_LPC32XX_UART4_SELECT ··· 116 113 .ck_mode_mask = 117 114 LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 4), 118 115 .pdiv_clk_reg = LPC32XX_CLKPWR_UART4_CLK_CTRL, 116 + .mapbase = LPC32XX_UART4_BASE, 119 117 }, 120 118 #endif 121 119 #ifdef CONFIG_ARCH_LPC32XX_UART6_SELECT ··· 125 121 .ck_mode_mask = 126 122 LPC32XX_UART_CLKMODE_LOAD(LPC32XX_UART_CLKMODE_ON, 6), 127 123 .pdiv_clk_reg = LPC32XX_CLKPWR_UART6_CLK_CTRL, 124 + .mapbase = LPC32XX_UART6_BASE, 128 125 }, 129 126 #endif 130 127 }; ··· 170 165 171 166 /* pre-UART clock divider set to 1 */ 172 167 __raw_writel(0x0101, uartinit_data[i].pdiv_clk_reg); 168 + 169 + /* 170 + * Force a flush of the RX FIFOs to work around a 171 + * HW bug 172 + */ 173 + puart = uartinit_data[i].mapbase; 174 + __raw_writel(0xC1, LPC32XX_UART_IIR_FCR(puart)); 175 + __raw_writel(0x00, LPC32XX_UART_DLL_FIFO(puart)); 176 + j = LPC32XX_SUART_FIFO_SIZE; 177 + while (j--) 178 + tmp = __raw_readl( 179 + LPC32XX_UART_DLL_FIFO(puart)); 180 + __raw_writel(0, LPC32XX_UART_IIR_FCR(puart)); 173 181 } 174 182 175 183 /* This needs to be done after all UART clocks are setup */ 176 184 __raw_writel(clkmodes, LPC32XX_UARTCTL_CLKMODE); 177 - for (i = 0; i < ARRAY_SIZE(uartinit_data) - 1; i++) { 185 + for (i = 0; i < ARRAY_SIZE(uartinit_data); i++) { 178 186 /* Force a flush of the RX FIFOs to work around a HW bug */ 179 187 puart = serial_std_platform_data[i].mapbase; 180 188 __raw_writel(0xC1, LPC32XX_UART_IIR_FCR(puart));
-1
arch/arm/mach-mmp/aspenite.c
··· 17 17 #include <linux/mtd/partitions.h> 18 18 #include <linux/mtd/nand.h> 19 19 #include <linux/interrupt.h> 20 - #include <linux/gpio.h> 21 20 22 21 #include <asm/mach-types.h> 23 22 #include <asm/mach/arch.h>
-1
arch/arm/mach-mmp/pxa168.c
··· 24 24 #include <mach/dma.h> 25 25 #include <mach/devices.h> 26 26 #include <mach/mfp.h> 27 - #include <linux/platform_device.h> 28 27 #include <linux/dma-mapping.h> 29 28 #include <mach/pxa168.h> 30 29
-1
arch/arm/mach-mmp/tavorevb.c
··· 12 12 #include <linux/kernel.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/smc91x.h> 15 - #include <linux/gpio.h> 16 15 17 16 #include <asm/mach-types.h> 18 17 #include <asm/mach/arch.h>
+2 -2
arch/arm/mach-omap1/board-innovator.c
··· 416 416 #ifdef CONFIG_ARCH_OMAP15XX 417 417 if (cpu_is_omap1510()) { 418 418 omap1_usb_init(&innovator1510_usb_config); 419 - innovator_config[1].data = &innovator1510_lcd_config; 419 + innovator_config[0].data = &innovator1510_lcd_config; 420 420 } 421 421 #endif 422 422 #ifdef CONFIG_ARCH_OMAP16XX 423 423 if (cpu_is_omap1610()) { 424 424 omap1_usb_init(&h2_usb_config); 425 - innovator_config[1].data = &innovator1610_lcd_config; 425 + innovator_config[0].data = &innovator1610_lcd_config; 426 426 } 427 427 #endif 428 428 omap_board_config = innovator_config;
+2 -2
arch/arm/mach-omap2/Kconfig
··· 364 364 going on could result in system crashes; 365 365 366 366 config OMAP4_ERRATA_I688 367 - bool "OMAP4 errata: Async Bridge Corruption (BROKEN)" 368 - depends on ARCH_OMAP4 && BROKEN 367 + bool "OMAP4 errata: Async Bridge Corruption" 368 + depends on ARCH_OMAP4 369 369 select ARCH_HAS_BARRIERS 370 370 help 371 371 If a data is stalled inside asynchronous bridge because of back
+4
arch/arm/mach-omap2/board-n8x0.c
··· 371 371 else 372 372 *openp = 0; 373 373 374 + #ifdef CONFIG_MMC_OMAP 374 375 omap_mmc_notify_cover_event(mmc_device, index, *openp); 376 + #else 377 + pr_warn("MMC: notify cover event not available\n"); 378 + #endif 375 379 } 376 380 377 381 static int n8x0_mmc_late_init(struct device *dev)
+1 -1
arch/arm/mach-omap2/board-omap3evm.c
··· 381 381 gpio_request_one(gpio + 7, GPIOF_OUT_INIT_LOW, "EN_DVI"); 382 382 383 383 /* TWL4030_GPIO_MAX + 1 == ledB (out, active low LED) */ 384 - gpio_leds[2].gpio = gpio + TWL4030_GPIO_MAX + 1; 384 + gpio_leds[0].gpio = gpio + TWL4030_GPIO_MAX + 1; 385 385 386 386 platform_device_register(&leds_gpio); 387 387
+1
arch/arm/mach-omap2/common.h
··· 132 132 void am33xx_map_io(void); 133 133 void omap4_map_io(void); 134 134 void ti81xx_map_io(void); 135 + void omap_barriers_init(void); 135 136 136 137 /** 137 138 * omap_test_timeout - busy-loop, testing a condition
+2 -3
arch/arm/mach-omap2/cpuidle44xx.c
··· 65 65 struct timespec ts_preidle, ts_postidle, ts_idle; 66 66 u32 cpu1_state; 67 67 int idle_time; 68 - int new_state_idx; 69 68 int cpu_id = smp_processor_id(); 70 69 71 70 /* Used to keep track of the total time in idle */ ··· 83 84 */ 84 85 cpu1_state = pwrdm_read_pwrst(cpu1_pd); 85 86 if (cpu1_state != PWRDM_POWER_OFF) { 86 - new_state_idx = drv->safe_state_index; 87 - cx = cpuidle_get_statedata(&dev->states_usage[new_state_idx]); 87 + index = drv->safe_state_index; 88 + cx = cpuidle_get_statedata(&dev->states_usage[index]); 88 89 } 89 90 90 91 if (index > 0)
+52
arch/arm/mach-omap2/gpmc-smsc911x.c
··· 19 19 #include <linux/interrupt.h> 20 20 #include <linux/io.h> 21 21 #include <linux/smsc911x.h> 22 + #include <linux/regulator/fixed.h> 23 + #include <linux/regulator/machine.h> 22 24 25 #include <plat/board.h> 24 26 #include <plat/gpmc.h> ··· 44 42 .flags = SMSC911X_USE_16BIT, 45 43 }; 46 44 45 + static struct regulator_consumer_supply gpmc_smsc911x_supply[] = { 46 + REGULATOR_SUPPLY("vddvario", "smsc911x.0"), 47 + REGULATOR_SUPPLY("vdd33a", "smsc911x.0"), 48 + }; 49 + 50 + /* Generic regulator definition to satisfy smsc911x */ 51 + static struct regulator_init_data gpmc_smsc911x_reg_init_data = { 52 + .constraints = { 53 + .min_uV = 3300000, 54 + .max_uV = 3300000, 55 + .valid_modes_mask = REGULATOR_MODE_NORMAL 56 + | REGULATOR_MODE_STANDBY, 57 + .valid_ops_mask = REGULATOR_CHANGE_MODE 58 + | REGULATOR_CHANGE_STATUS, 59 + }, 60 + .num_consumer_supplies = ARRAY_SIZE(gpmc_smsc911x_supply), 61 + .consumer_supplies = gpmc_smsc911x_supply, 62 + }; 63 + 64 + static struct fixed_voltage_config gpmc_smsc911x_fixed_reg_data = { 65 + .supply_name = "gpmc_smsc911x", 66 + .microvolts = 3300000, 67 + .gpio = -EINVAL, 68 + .startup_delay = 0, 69 + .enable_high = 0, 70 + .enabled_at_boot = 1, 71 + .init_data = &gpmc_smsc911x_reg_init_data, 72 + }; 73 + 74 + /* 75 + * Platform device id of 42 is a temporary fix to avoid conflicts 76 + * with other reg-fixed-voltage devices. The real fix should 77 + * involve the driver core providing a way of dynamically 78 + * assigning a unique id on registration for platform devices 79 + * in the same name space. 80 + */ 81 + static struct platform_device gpmc_smsc911x_regulator = { 82 + .name = "reg-fixed-voltage", 83 + .id = 42, 84 + .dev = { 85 + .platform_data = &gpmc_smsc911x_fixed_reg_data, 86 + }, 87 + }; 88 + 47 89 /* 48 90 * Initialize smsc911x device connected to the GPMC. Note that we 49 91 * assume that pin multiplexing is done in the board-*.c file, ··· 100 54 int ret; 101 55 102 56 gpmc_cfg = board_data; 57 + 58 + ret = platform_device_register(&gpmc_smsc911x_regulator); 59 + if (ret < 0) { 60 + pr_err("Unable to register smsc911x regulators: %d\n", ret); 61 + return; 62 + } 103 63 104 64 if (gpmc_cs_request(gpmc_cfg->cs, SZ_16M, &cs_mem_base) < 0) { 105 65 pr_err("Failed to request GPMC mem region\n");
+6
arch/arm/mach-omap2/hsmmc.c
··· 428 428 return 0; 429 429 } 430 430 431 + static int omap_hsmmc_done; 431 432 #define MAX_OMAP_MMC_HWMOD_NAME_LEN 16 432 433 433 434 void omap_init_hsmmc(struct omap2_hsmmc_info *hsmmcinfo, int ctrl_nr) ··· 491 490 void omap2_hsmmc_init(struct omap2_hsmmc_info *controllers) 492 491 { 493 492 u32 reg; 493 + 494 + if (omap_hsmmc_done) 495 + return; 496 + 497 + omap_hsmmc_done = 1; 494 498 495 499 if (!cpu_is_omap44xx()) { 496 500 if (cpu_is_omap2430()) {
+1
arch/arm/mach-omap2/id.c
··· 343 343 case 0xb944: 344 344 omap_revision = AM335X_REV_ES1_0; 345 345 *cpu_rev = "1.0"; 346 + break; 346 347 case 0xb8f2: 347 348 switch (rev) { 348 349 case 0:
+1
arch/arm/mach-omap2/io.c
··· 307 307 void __init omap44xx_map_common_io(void) 308 308 { 309 309 iotable_init(omap44xx_io_desc, ARRAY_SIZE(omap44xx_io_desc)); 310 + omap_barriers_init(); 310 311 } 311 312 #endif 312 313
+9 -1
arch/arm/mach-omap2/mailbox.c
··· 281 281 .ops = &omap2_mbox_ops, 282 282 .priv = &omap2_mbox_iva_priv, 283 283 }; 284 + #endif 284 285 285 - struct omap_mbox *omap2_mboxes[] = { &mbox_dsp_info, &mbox_iva_info, NULL }; 286 + #ifdef CONFIG_ARCH_OMAP2 287 + struct omap_mbox *omap2_mboxes[] = { 288 + &mbox_dsp_info, 289 + #ifdef CONFIG_SOC_OMAP2420 290 + &mbox_iva_info, 291 + #endif 292 + NULL 293 + }; 286 294 #endif 287 295 288 296 #if defined(CONFIG_ARCH_OMAP4)
+1 -1
arch/arm/mach-omap2/mux.c
··· 218 218 return -ENODEV; 219 219 } 220 220 221 - static int __init 221 + static int 222 222 omap_mux_get_by_name(const char *muxname, 223 223 struct omap_mux_partition **found_partition, 224 224 struct omap_mux **found_mux)
+2 -1
arch/arm/mach-omap2/omap-iommu.c
··· 150 150 platform_device_put(omap_iommu_pdev[i]); 151 151 return err; 152 152 } 153 - module_init(omap_iommu_init); 153 + /* must be ready before omap3isp is probed */ 154 + subsys_initcall(omap_iommu_init); 154 155 155 156 static void __exit omap_iommu_exit(void) 156 157 {
+18 -9
arch/arm/mach-omap2/omap4-common.c
··· 24 24 25 25 #include <plat/irqs.h> 26 26 #include <plat/sram.h> 27 + #include <plat/omap-secure.h> 27 28 28 29 #include <mach/hardware.h> 29 30 #include <mach/omap-wakeupgen.h> 30 31 31 32 #include "common.h" 32 33 #include "omap4-sar-layout.h" 34 + #include <linux/export.h> 33 35 34 36 #ifdef CONFIG_CACHE_L2X0 35 37 static void __iomem *l2cache_base; ··· 45 43 46 44 void __iomem *dram_sync, *sram_sync; 47 45 46 + static phys_addr_t paddr; 47 + static u32 size; 48 + 48 49 void omap_bus_sync(void) 49 50 { 50 51 if (dram_sync && sram_sync) { ··· 56 51 isb(); 57 52 } 58 53 } 54 + EXPORT_SYMBOL(omap_bus_sync); 59 55 60 - static int __init omap_barriers_init(void) 56 + /* Steal one page physical memory for barrier implementation */ 57 + int __init omap_barrier_reserve_memblock(void) 61 58 { 62 - struct map_desc dram_io_desc[1]; 63 - phys_addr_t paddr; 64 - u32 size; 65 - 66 - if (!cpu_is_omap44xx()) 67 - return -ENODEV; 68 59 69 60 size = ALIGN(PAGE_SIZE, SZ_1M); 70 61 paddr = arm_memblock_steal(size, SZ_1M); 62 + 63 + return 0; 64 + } 65 + 66 + void __init omap_barriers_init(void) 67 + { 68 + struct map_desc dram_io_desc[1]; 71 69 72 70 dram_io_desc[0].virtual = OMAP4_DRAM_BARRIER_VA; 73 71 dram_io_desc[0].pfn = __phys_to_pfn(paddr); ··· 83 75 pr_info("OMAP4: Map 0x%08llx to 0x%08lx for dram barrier\n", 84 76 (long long) paddr, dram_io_desc[0].virtual); 85 77 86 - return 0; 87 78 } 88 - core_initcall(omap_barriers_init); 79 + #else 80 + void __init omap_barriers_init(void) 81 + {} 89 82 #endif 90 83 91 84 void __init gic_init_irq(void)
+3
arch/arm/mach-omap2/pm.c
··· 174 174 freq = clk->rate; 175 175 clk_put(clk); 176 176 177 + rcu_read_lock(); 177 178 opp = opp_find_freq_ceil(dev, &freq); 178 179 if (IS_ERR(opp)) { 180 + rcu_read_unlock(); 179 181 pr_err("%s: unable to find boot up OPP for vdd_%s\n", 180 182 __func__, vdd_name); 181 183 goto exit; 182 184 } 183 185 184 186 bootup_volt = opp_get_voltage(opp); 187 + rcu_read_unlock(); 185 188 if (!bootup_volt) { 186 189 pr_err("%s: unable to find voltage corresponding " 187 190 "to the bootup OPP for vdd_%s\n", __func__, vdd_name);
-1
arch/arm/mach-omap2/twl-common.c
··· 270 270 .constraints = { 271 271 .min_uV = 3300000, 272 272 .max_uV = 3300000, 273 - .apply_uV = true, 274 273 .valid_modes_mask = REGULATOR_MODE_NORMAL 275 274 | REGULATOR_MODE_STANDBY, 276 275 .valid_ops_mask = REGULATOR_CHANGE_MODE
+3 -3
arch/arm/mach-omap2/usb-host.c
··· 486 486 void __init usbhs_init(const struct usbhs_omap_board_data *pdata) 487 487 { 488 488 struct omap_hwmod *oh[2]; 489 - struct omap_device *od; 489 + struct platform_device *pdev; 490 490 int bus_id = -1; 491 491 int i; 492 492 ··· 522 522 return; 523 523 } 524 524 525 - od = omap_device_build_ss(OMAP_USBHS_DEVICE, bus_id, oh, 2, 525 + pdev = omap_device_build_ss(OMAP_USBHS_DEVICE, bus_id, oh, 2, 526 526 (void *)&usbhs_data, sizeof(usbhs_data), 527 527 omap_uhhtll_latency, 528 528 ARRAY_SIZE(omap_uhhtll_latency), false); 529 - if (IS_ERR(od)) { 529 + if (IS_ERR(pdev)) { 530 530 pr_err("Could not build hwmod devices %s,%s\n", 531 531 USBHS_UHH_HWMODNAME, USBHS_TLL_HWMODNAME); 532 532 return;
-1
arch/arm/mach-pxa/generic.h
··· 49 49 #endif 50 50 51 51 extern struct syscore_ops pxa_irq_syscore_ops; 52 - extern struct syscore_ops pxa_gpio_syscore_ops; 53 52 extern struct syscore_ops pxa2xx_mfp_syscore_ops; 54 53 extern struct syscore_ops pxa3xx_mfp_syscore_ops; 55 54
+25
arch/arm/mach-pxa/hx4700.c
··· 45 45 #include <mach/hx4700.h> 46 46 #include <mach/irda.h> 47 47 48 + #include <sound/ak4641.h> 48 49 #include <video/platform_lcd.h> 49 50 #include <video/w100fb.h> 50 51 ··· 766 765 }; 767 766 768 767 /* 768 + * Asahi Kasei AK4641 on I2C 769 + */ 770 + 771 + static struct ak4641_platform_data ak4641_info = { 772 + .gpio_power = GPIO27_HX4700_CODEC_ON, 773 + .gpio_npdn = GPIO109_HX4700_CODEC_nPDN, 774 + }; 775 + 776 + static struct i2c_board_info i2c_board_info[] __initdata = { 777 + { 778 + I2C_BOARD_INFO("ak4641", 0x12), 779 + .platform_data = &ak4641_info, 780 + }, 781 + }; 782 + 783 + static struct platform_device audio = { 784 + .name = "hx4700-audio", 785 + .id = -1, 786 + }; 787 + 788 + 789 + /* 769 790 * PCMCIA 770 791 */ 771 792 ··· 813 790 &gpio_vbus, 814 791 &power_supply, 815 792 &strataflash, 793 + &audio, 816 794 &pcmcia, 817 795 }; 818 796 ··· 851 827 pxa_set_ficp_info(&ficp_info); 852 828 pxa27x_set_i2c_power_info(NULL); 853 829 pxa_set_i2c_info(NULL); 830 + i2c_register_board_info(0, ARRAY_AND_SIZE(i2c_board_info)); 854 831 i2c_register_board_info(1, ARRAY_AND_SIZE(pi2c_board_info)); 855 832 pxa2xx_set_spi_info(2, &pxa_ssp2_master_info); 856 833 spi_register_board_info(ARRAY_AND_SIZE(tsc2046_board_info));
+7
arch/arm/mach-pxa/mfp-pxa2xx.c
··· 226 226 { 227 227 int i; 228 228 229 + /* running before pxa_gpio_probe() */ 230 + #ifdef CONFIG_CPU_PXA26x 231 + pxa_last_gpio = 89; 232 + #else 233 + pxa_last_gpio = 84; 234 + #endif 229 235 for (i = 0; i <= pxa_last_gpio; i++) 230 236 gpio_desc[i].valid = 1; 231 237 ··· 301 295 { 302 296 int i, gpio; 303 297 298 + pxa_last_gpio = 120; /* running before pxa_gpio_probe() */ 304 299 for (i = 0; i <= pxa_last_gpio; i++) { 305 300 /* skip GPIO2, 5, 6, 7, 8, they are not 306 301 * valid pins allow configuration
+1 -2
arch/arm/mach-pxa/pxa25x.c
··· 25 25 #include <linux/suspend.h> 26 26 #include <linux/syscore_ops.h> 27 27 #include <linux/irq.h> 28 - #include <linux/gpio.h> 29 28 30 29 #include <asm/mach/map.h> 31 30 #include <asm/suspend.h> ··· 208 209 INIT_CLKREG(&clk_pxa25x_gpio11, NULL, "GPIO11_CLK"), 209 210 INIT_CLKREG(&clk_pxa25x_gpio12, NULL, "GPIO12_CLK"), 210 211 INIT_CLKREG(&clk_pxa25x_mem, "pxa2xx-pcmcia", NULL), 212 + INIT_CLKREG(&clk_dummy, "pxa-gpio", NULL), 211 213 }; 212 214 213 215 static struct clk_lookup pxa25x_hwuart_clkreg = ··· 368 368 369 369 register_syscore_ops(&pxa_irq_syscore_ops); 370 370 register_syscore_ops(&pxa2xx_mfp_syscore_ops); 371 - register_syscore_ops(&pxa_gpio_syscore_ops); 372 371 register_syscore_ops(&pxa2xx_clock_syscore_ops); 373 372 374 373 ret = platform_add_devices(pxa25x_devices,
+1 -2
arch/arm/mach-pxa/pxa27x.c
··· 22 22 #include <linux/io.h> 23 23 #include <linux/irq.h> 24 24 #include <linux/i2c/pxa-i2c.h> 25 - #include <linux/gpio.h> 26 25 27 26 #include <asm/mach/map.h> 28 27 #include <mach/hardware.h> ··· 229 230 INIT_CLKREG(&clk_pxa27x_im, NULL, "IMCLK"), 230 231 INIT_CLKREG(&clk_pxa27x_memc, NULL, "MEMCLK"), 231 232 INIT_CLKREG(&clk_pxa27x_mem, "pxa2xx-pcmcia", NULL), 233 + INIT_CLKREG(&clk_dummy, "pxa-gpio", NULL), 232 234 }; 233 235 234 236 #ifdef CONFIG_PM ··· 456 456 457 457 register_syscore_ops(&pxa_irq_syscore_ops); 458 458 register_syscore_ops(&pxa2xx_mfp_syscore_ops); 459 - register_syscore_ops(&pxa_gpio_syscore_ops); 460 459 register_syscore_ops(&pxa2xx_clock_syscore_ops); 461 460 462 461 ret = platform_add_devices(devices, ARRAY_SIZE(devices));
-1
arch/arm/mach-pxa/pxa3xx.c
··· 462 462 463 463 register_syscore_ops(&pxa_irq_syscore_ops); 464 464 register_syscore_ops(&pxa3xx_mfp_syscore_ops); 465 - register_syscore_ops(&pxa_gpio_syscore_ops); 466 465 register_syscore_ops(&pxa3xx_clock_syscore_ops); 467 466 468 467 ret = platform_add_devices(devices, ARRAY_SIZE(devices));
-1
arch/arm/mach-pxa/pxa95x.c
··· 283 283 return ret; 284 284 285 285 register_syscore_ops(&pxa_irq_syscore_ops); 286 - register_syscore_ops(&pxa_gpio_syscore_ops); 287 286 register_syscore_ops(&pxa3xx_clock_syscore_ops); 288 287 289 288 ret = platform_add_devices(devices, ARRAY_SIZE(devices));
-1
arch/arm/mach-pxa/saarb.c
··· 15 15 #include <linux/i2c.h> 16 16 #include <linux/i2c/pxa-i2c.h> 17 17 #include <linux/mfd/88pm860x.h> 18 - #include <linux/gpio.h> 19 18 20 19 #include <asm/mach-types.h> 21 20 #include <asm/mach/arch.h>
+1 -2
arch/arm/mach-pxa/sharpsl_pm.c
··· 168 168 #define MAXCTRL_SEL_SH 4 169 169 #define MAXCTRL_STR (1u << 7) 170 170 171 + extern int max1111_read_channel(int); 171 172 /* 172 173 * Read MAX1111 ADC 173 174 */ ··· 177 176 /* Ugly, better move this function into another module */ 178 177 if (machine_is_tosa()) 179 178 return 0; 180 - 181 - extern int max1111_read_channel(int); 182 179 183 180 /* max1111 accepts channels from 0-3, however, 184 181 * it is encoded from 0-7 here in the code.
+2 -3
arch/arm/mach-pxa/spitz_pm.c
··· 172 172 static unsigned long spitz_charger_wakeup(void) 173 173 { 174 174 unsigned long ret; 175 - ret = (!gpio_get_value(SPITZ_GPIO_KEY_INT) 175 + ret = ((!gpio_get_value(SPITZ_GPIO_KEY_INT) 176 176 << GPIO_bit(SPITZ_GPIO_KEY_INT)) 177 - | (!gpio_get_value(SPITZ_GPIO_SYNC) 178 - << GPIO_bit(SPITZ_GPIO_SYNC)); 177 + | gpio_get_value(SPITZ_GPIO_SYNC)); 179 178 return ret; 180 179 } 181 180
+1 -1
arch/arm/mach-s3c2440/common.h
··· 12 12 #ifndef __ARCH_ARM_MACH_S3C2440_COMMON_H 13 13 #define __ARCH_ARM_MACH_S3C2440_COMMON_H 14 14 15 - void s3c2440_restart(char mode, const char *cmd); 15 + void s3c244x_restart(char mode, const char *cmd); 16 16 17 17 #endif /* __ARCH_ARM_MACH_S3C2440_COMMON_H */
+1 -1
arch/arm/mach-s3c2440/mach-anubis.c
··· 487 487 .init_machine = anubis_init, 488 488 .init_irq = s3c24xx_init_irq, 489 489 .timer = &s3c24xx_timer, 490 - .restart = s3c2440_restart, 490 + .restart = s3c244x_restart, 491 491 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-at2440evb.c
··· 222 222 .init_machine = at2440evb_init, 223 223 .init_irq = s3c24xx_init_irq, 224 224 .timer = &s3c24xx_timer, 225 - .restart = s3c2440_restart, 225 + .restart = s3c244x_restart, 226 226 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-gta02.c
··· 601 601 .init_irq = s3c24xx_init_irq, 602 602 .init_machine = gta02_machine_init, 603 603 .timer = &s3c24xx_timer, 604 - .restart = s3c2440_restart, 604 + .restart = s3c244x_restart, 605 605 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-mini2440.c
··· 701 701 .init_machine = mini2440_init, 702 702 .init_irq = s3c24xx_init_irq, 703 703 .timer = &s3c24xx_timer, 704 - .restart = s3c2440_restart, 704 + .restart = s3c244x_restart, 705 705 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-nexcoder.c
··· 158 158 .init_machine = nexcoder_init, 159 159 .init_irq = s3c24xx_init_irq, 160 160 .timer = &s3c24xx_timer, 161 - .restart = s3c2440_restart, 161 + .restart = s3c244x_restart, 162 162 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-osiris.c
··· 436 436 .init_irq = s3c24xx_init_irq, 437 437 .init_machine = osiris_init, 438 438 .timer = &s3c24xx_timer, 439 - .restart = s3c2440_restart, 439 + .restart = s3c244x_restart, 440 440 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-rx1950.c
··· 822 822 .init_irq = s3c24xx_init_irq, 823 823 .init_machine = rx1950_init_machine, 824 824 .timer = &s3c24xx_timer, 825 - .restart = s3c2440_restart, 825 + .restart = s3c244x_restart, 826 826 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-rx3715.c
··· 213 213 .init_irq = rx3715_init_irq, 214 214 .init_machine = rx3715_init_machine, 215 215 .timer = &s3c24xx_timer, 216 - .restart = s3c2440_restart, 216 + .restart = s3c244x_restart, 217 217 MACHINE_END
+1 -1
arch/arm/mach-s3c2440/mach-smdk2440.c
··· 183 183 .map_io = smdk2440_map_io, 184 184 .init_machine = smdk2440_machine_init, 185 185 .timer = &s3c24xx_timer, 186 - .restart = s3c2440_restart, 186 + .restart = s3c244x_restart, 187 187 MACHINE_END
-13
arch/arm/mach-s3c2440/s3c2440.c
··· 35 35 #include <plat/cpu.h> 36 36 #include <plat/s3c244x.h> 37 37 #include <plat/pm.h> 38 - #include <plat/watchdog-reset.h> 39 38 40 39 #include <plat/gpio-core.h> 41 40 #include <plat/gpio-cfg.h> ··· 72 73 73 74 s3c24xx_gpiocfg_default.set_pull = s3c24xx_gpio_setpull_1up; 74 75 s3c24xx_gpiocfg_default.get_pull = s3c24xx_gpio_getpull_1up; 75 - } 76 - 77 - void s3c2440_restart(char mode, const char *cmd) 78 - { 79 - if (mode == 's') { 80 - soft_restart(0); 81 - } 82 - 83 - arch_wdt_reset(); 84 - 85 - /* we'll take a jump through zero as a poor second */ 86 - soft_restart(0); 87 76 }
+12
arch/arm/mach-s3c2440/s3c244x.c
··· 46 46 #include <plat/pm.h> 47 47 #include <plat/pll.h> 48 48 #include <plat/nand-core.h> 49 + #include <plat/watchdog-reset.h> 49 50 50 51 static struct map_desc s3c244x_iodesc[] __initdata = { 51 52 IODESC_ENT(CLKPWR), ··· 197 196 .suspend = s3c244x_suspend, 198 197 .resume = s3c244x_resume, 199 198 }; 199 + 200 + void s3c244x_restart(char mode, const char *cmd) 201 + { 202 + if (mode == 's') 203 + soft_restart(0); 204 + 205 + arch_wdt_reset(); 206 + 207 + /* we'll take a jump through zero as a poor second */ 208 + soft_restart(0); 209 + }
+1 -1
arch/arm/mach-ux500/Kconfig
··· 5 5 default y 6 6 select ARM_GIC 7 7 select HAS_MTU 8 - select ARM_ERRATA_753970 8 + select PL310_ERRATA_753970 9 9 select ARM_ERRATA_754322 10 10 select ARM_ERRATA_764369 11 11
+1 -1
arch/arm/mach-vexpress/Kconfig
··· 7 7 select ARM_GIC 8 8 select ARM_ERRATA_720789 9 9 select ARM_ERRATA_751472 10 - select ARM_ERRATA_753970 10 + select PL310_ERRATA_753970 11 11 select HAVE_SMP 12 12 select MIGHT_HAVE_CACHE_L2X0 13 13
+1 -3
arch/arm/mm/proc-v7.S
··· 230 230 mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register 231 231 #endif 232 232 #ifdef CONFIG_ARM_ERRATA_743622 233 - teq r6, #0x20 @ present in r2p0 234 - teqne r6, #0x21 @ present in r2p1 235 - teqne r6, #0x22 @ present in r2p2 233 + teq r5, #0x00200000 @ only present in r2p* 236 234 mrceq p15, 0, r10, c15, c0, 1 @ read diagnostic register 237 235 orreq r10, r10, #1 << 6 @ set bit #6 238 236 mcreq p15, 0, r10, c15, c0, 1 @ write diagnostic register
+1
arch/arm/plat-omap/common.c
··· 69 69 omap_vram_reserve_sdram_memblock(); 70 70 omap_dsp_reserve_sdram_memblock(); 71 71 omap_secure_ram_reserve_memblock(); 72 + omap_barrier_reserve_memblock(); 72 73 } 73 74 74 75 void __init omap_init_consistent_dma_size(void)
+9 -1
arch/arm/plat-omap/include/plat/irqs.h
··· 428 428 #define OMAP_GPMC_NR_IRQS 8 429 429 #define OMAP_GPMC_IRQ_END (OMAP_GPMC_IRQ_BASE + OMAP_GPMC_NR_IRQS) 430 430 431 + /* PRCM IRQ handler */ 432 + #ifdef CONFIG_ARCH_OMAP2PLUS 433 + #define OMAP_PRCM_IRQ_BASE (OMAP_GPMC_IRQ_END) 434 + #define OMAP_PRCM_NR_IRQS 64 435 + #define OMAP_PRCM_IRQ_END (OMAP_PRCM_IRQ_BASE + OMAP_PRCM_NR_IRQS) 436 + #else 437 + #define OMAP_PRCM_IRQ_END OMAP_GPMC_IRQ_END 438 + #endif 431 439 432 - #define NR_IRQS OMAP_GPMC_IRQ_END 440 + #define NR_IRQS OMAP_PRCM_IRQ_END 433 441 434 442 #define OMAP_IRQ_BIT(irq) (1 << ((irq) % 32)) 435 443
+6
arch/arm/plat-omap/include/plat/omap-secure.h
··· 10 10 { } 11 11 #endif 12 12 13 + #ifdef CONFIG_OMAP4_ERRATA_I688 14 + extern int omap_barrier_reserve_memblock(void); 15 + #else 16 + static inline void omap_barrier_reserve_memblock(void) 17 + { } 18 + #endif 13 19 #endif /* __OMAP_SECURE_H__ */
+1 -1
arch/arm/plat-s3c24xx/dma.c
··· 1249 1249 struct s3c2410_dma_chan *cp = s3c2410_chans + dma_channels - 1; 1250 1250 int channel; 1251 1251 1252 - for (channel = dma_channels - 1; channel >= 0; cp++, channel--) 1252 + for (channel = dma_channels - 1; channel >= 0; cp--, channel--) 1253 1253 s3c2410_dma_resume_chan(cp); 1254 1254 } 1255 1255
+1 -1
arch/arm/plat-samsung/devs.c
··· 1409 1409 1410 1410 #ifdef CONFIG_S3C_DEV_USB_HSOTG 1411 1411 static struct resource s3c_usb_hsotg_resources[] = { 1412 - [0] = DEFINE_RES_MEM(S3C_PA_USB_HSOTG, SZ_16K), 1412 + [0] = DEFINE_RES_MEM(S3C_PA_USB_HSOTG, SZ_128K), 1413 1413 [1] = DEFINE_RES_IRQ(IRQ_OTG), 1414 1414 }; 1415 1415
+4 -2
arch/arm/plat-spear/time.c
··· 145 145 static int clockevent_next_event(unsigned long cycles, 146 146 struct clock_event_device *clk_event_dev) 147 147 { 148 - u16 val; 148 + u16 val = readw(gpt_base + CR(CLKEVT)); 149 + 150 + if (val & CTRL_ENABLE) 151 + writew(val & ~CTRL_ENABLE, gpt_base + CR(CLKEVT)); 149 152 150 153 writew(cycles, gpt_base + LOAD(CLKEVT)); 151 154 152 - val = readw(gpt_base + CR(CLKEVT)); 153 155 val |= CTRL_ENABLE | CTRL_INT_ENABLE; 154 156 writew(val, gpt_base + CR(CLKEVT)); 155 157
+2 -2
arch/c6x/include/asm/processor.h
··· 122 122 123 123 extern unsigned long get_wchan(struct task_struct *p); 124 124 125 - #define KSTK_EIP(tsk) (task_pt_regs(task)->pc) 126 - #define KSTK_ESP(tsk) (task_pt_regs(task)->sp) 125 + #define KSTK_EIP(task) (task_pt_regs(task)->pc) 126 + #define KSTK_ESP(task) (task_pt_regs(task)->sp) 127 127 128 128 #define cpu_relax() do { } while (0) 129 129
+1 -1
arch/mips/alchemy/common/time.c
··· 146 146 cd->shift = 32; 147 147 cd->mult = div_sc(32768, NSEC_PER_SEC, cd->shift); 148 148 cd->max_delta_ns = clockevent_delta2ns(0xffffffff, cd); 149 - cd->min_delta_ns = clockevent_delta2ns(8, cd); /* ~0.25ms */ 149 + cd->min_delta_ns = clockevent_delta2ns(9, cd); /* ~0.28ms */ 150 150 clockevents_register_device(cd); 151 151 setup_irq(m2int, &au1x_rtcmatch2_irqaction); 152 152
+1 -1
arch/mips/ath79/dev-wmac.c
··· 96 96 { 97 97 if (soc_is_ar913x()) 98 98 ar913x_wmac_setup(); 99 - if (soc_is_ar933x()) 99 + else if (soc_is_ar933x()) 100 100 ar933x_wmac_setup(); 101 101 else 102 102 BUG();
+2 -2
arch/mips/configs/nlm_xlp_defconfig
··· 8 8 # CONFIG_SECCOMP is not set 9 9 CONFIG_USE_OF=y 10 10 CONFIG_EXPERIMENTAL=y 11 - CONFIG_CROSS_COMPILE="mips-linux-gnu-" 11 + CONFIG_CROSS_COMPILE="" 12 12 # CONFIG_LOCALVERSION_AUTO is not set 13 13 CONFIG_SYSVIPC=y 14 14 CONFIG_POSIX_MQUEUE=y ··· 22 22 CONFIG_CGROUPS=y 23 23 CONFIG_NAMESPACES=y 24 24 CONFIG_BLK_DEV_INITRD=y 25 - CONFIG_INITRAMFS_SOURCE="usr/dev_file_list usr/rootfs.xlp" 25 + CONFIG_INITRAMFS_SOURCE="" 26 26 CONFIG_RD_BZIP2=y 27 27 CONFIG_RD_LZMA=y 28 28 CONFIG_INITRAMFS_COMPRESSION_LZMA=y
+2 -2
arch/mips/configs/nlm_xlr_defconfig
··· 8 8 CONFIG_PREEMPT_VOLUNTARY=y 9 9 CONFIG_KEXEC=y 10 10 CONFIG_EXPERIMENTAL=y 11 - CONFIG_CROSS_COMPILE="mips-linux-gnu-" 11 + CONFIG_CROSS_COMPILE="" 12 12 # CONFIG_LOCALVERSION_AUTO is not set 13 13 CONFIG_SYSVIPC=y 14 14 CONFIG_POSIX_MQUEUE=y ··· 22 22 CONFIG_NAMESPACES=y 23 23 CONFIG_SCHED_AUTOGROUP=y 24 24 CONFIG_BLK_DEV_INITRD=y 25 - CONFIG_INITRAMFS_SOURCE="usr/dev_file_list usr/rootfs.xlr" 25 + CONFIG_INITRAMFS_SOURCE="" 26 26 CONFIG_RD_BZIP2=y 27 27 CONFIG_RD_LZMA=y 28 28 CONFIG_INITRAMFS_COMPRESSION_GZIP=y
+1 -1
arch/mips/configs/powertv_defconfig
··· 6 6 CONFIG_PREEMPT=y 7 7 # CONFIG_SECCOMP is not set 8 8 CONFIG_EXPERIMENTAL=y 9 - CONFIG_CROSS_COMPILE="mips-linux-" 9 + CONFIG_CROSS_COMPILE="" 10 10 # CONFIG_SWAP is not set 11 11 CONFIG_SYSVIPC=y 12 12 CONFIG_LOG_BUF_SHIFT=16
+19 -1
arch/mips/include/asm/mach-au1x00/gpio-au1300.h
··· 11 11 #include <asm/io.h> 12 12 #include <asm/mach-au1x00/au1000.h> 13 13 14 + struct gpio; 15 + struct gpio_chip; 16 + 14 17 /* with the current GPIC design, up to 128 GPIOs are possible. 15 18 * The only implementation so far is in the Au1300, which has 75 externally 16 19 * available GPIOs. ··· 206 203 return 0; 207 204 } 208 205 209 - static inline void gpio_free(unsigned int gpio) 206 + static inline int gpio_request_one(unsigned gpio, 207 + unsigned long flags, const char *label) 208 + { 209 + return 0; 210 + } 211 + 212 + static inline int gpio_request_array(struct gpio *array, size_t num) 213 + { 214 + return 0; 215 + } 216 + 217 + static inline void gpio_free(unsigned gpio) 218 + { 219 + } 220 + 221 + static inline void gpio_free_array(struct gpio *array, size_t num) 210 222 { 211 223 } 212 224
-3
arch/mips/include/asm/page.h
··· 39 39 #define HPAGE_MASK (~(HPAGE_SIZE - 1)) 40 40 #define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT) 41 41 #else /* !CONFIG_HUGETLB_PAGE */ 42 - # ifndef BUILD_BUG 43 - # define BUILD_BUG() do { extern void __build_bug(void); __build_bug(); } while (0) 44 - # endif 45 42 #define HPAGE_SHIFT ({BUILD_BUG(); 0; }) 46 43 #define HPAGE_SIZE ({BUILD_BUG(); 0; }) 47 44 #define HPAGE_MASK ({BUILD_BUG(); 0; })
-1
arch/mips/kernel/smp-bmips.c
··· 8 8 * SMP support for BMIPS 9 9 */ 10 10 11 - #include <linux/version.h> 12 11 #include <linux/init.h> 13 12 #include <linux/sched.h> 14 13 #include <linux/mm.h>
+1 -1
arch/mips/kernel/traps.c
··· 1135 1135 printk(KERN_DEBUG "YIELD Scheduler Exception\n"); 1136 1136 break; 1137 1137 case 5: 1138 - printk(KERN_DEBUG "Gating Storage Schedulier Exception\n"); 1138 + printk(KERN_DEBUG "Gating Storage Scheduler Exception\n"); 1139 1139 break; 1140 1140 default: 1141 1141 printk(KERN_DEBUG "*** UNKNOWN THREAD EXCEPTION %d ***\n",
-1
arch/mips/kernel/vmlinux.lds.S
··· 69 69 RODATA 70 70 71 71 /* writeable */ 72 - _sdata = .; /* Start of data section */ 73 72 .data : { /* Data */ 74 73 . = . + DATAOFFSET; /* for CONFIG_MAPPED_KERNEL */ 75 74
+29 -7
arch/mips/mm/fault.c
··· 42 42 const int field = sizeof(unsigned long) * 2; 43 43 siginfo_t info; 44 44 int fault; 45 + unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE | 46 + (write ? FAULT_FLAG_WRITE : 0); 45 47 46 48 #if 0 47 49 printk("Cpu%d[%s:%d:%0*lx:%ld:%0*lx]\n", raw_smp_processor_id(), ··· 93 91 if (in_atomic() || !mm) 94 92 goto bad_area_nosemaphore; 95 93 94 + retry: 96 95 down_read(&mm->mmap_sem); 97 96 vma = find_vma(mm, address); 98 97 if (!vma) ··· 147 144 * make sure we exit gracefully rather than endlessly redo 148 145 * the fault. 149 146 */ 150 - fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0); 147 + fault = handle_mm_fault(mm, vma, address, flags); 148 + 149 + if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) 150 + return; 151 + 151 152 perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); 152 153 if (unlikely(fault & VM_FAULT_ERROR)) { 153 154 if (fault & VM_FAULT_OOM) ··· 160 153 goto do_sigbus; 161 154 BUG(); 162 155 } 163 - if (fault & VM_FAULT_MAJOR) { 164 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, regs, address); 165 - tsk->maj_flt++; 166 - } else { 167 - perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, regs, address); 168 - tsk->min_flt++; 156 + if (flags & FAULT_FLAG_ALLOW_RETRY) { 157 + if (fault & VM_FAULT_MAJOR) { 158 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MAJ, 1, 159 + regs, address); 160 + tsk->maj_flt++; 161 + } else { 162 + perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS_MIN, 1, 163 + regs, address); 164 + tsk->min_flt++; 165 + } 166 + if (fault & VM_FAULT_RETRY) { 167 + flags &= ~FAULT_FLAG_ALLOW_RETRY; 168 + 169 + /* 170 + * No need to up_read(&mm->mmap_sem) as we would 171 + * have already released it in __lock_page_or_retry 172 + * in mm/filemap.c. 173 + */ 174 + 175 + goto retry; 176 + } 169 177 } 170 178 171 179 up_read(&mm->mmap_sem);
+1 -4
arch/mips/pci/pci.c
··· 279 279 { 280 280 /* Propagate hose info into the subordinate devices. */ 281 281 282 - struct list_head *ln; 283 282 struct pci_dev *dev = bus->self; 284 283 285 284 if (pci_probe_only && dev && ··· 287 288 pcibios_fixup_device_resources(dev, bus); 288 289 } 289 290 290 - for (ln = bus->devices.next; ln != &bus->devices; ln = ln->next) { 291 - dev = pci_dev_b(ln); 292 - 291 + list_for_each_entry(dev, &bus->devices, bus_list) { 293 292 if ((dev->class >> 8) != PCI_CLASS_BRIDGE_PCI) 294 293 pcibios_fixup_device_resources(dev, bus); 295 294 }
-10
arch/mips/pmc-sierra/yosemite/ht-irq.c
··· 35 35 */ 36 36 void __init titan_ht_pcibios_fixup_bus(struct pci_bus *bus) 37 37 { 38 - struct pci_bus *current_bus = bus; 39 - struct pci_dev *devices; 40 - struct list_head *devices_link; 41 - 42 - list_for_each(devices_link, &(current_bus->devices)) { 43 - devices = pci_dev_b(devices_link); 44 - if (devices == NULL) 45 - continue; 46 - } 47 - 48 38 /* 49 39 * PLX and SPKT related changes go here 50 40 */
+1 -1
arch/mips/txx9/generic/7segled.c
··· 102 102 break; 103 103 } 104 104 dev->id = i; 105 - dev->dev = &tx_7segled_subsys; 105 + dev->bus = &tx_7segled_subsys; 106 106 error = device_register(dev); 107 107 if (!error) { 108 108 device_create_file(dev, &dev_attr_ascii);
+7 -1
arch/openrisc/include/asm/ptrace.h
··· 77 77 long syscallno; /* Syscall number (used by strace) */ 78 78 long dummy; /* Cheap alignment fix */ 79 79 }; 80 - #endif /* __ASSEMBLY__ */ 81 80 82 81 /* TODO: Rename this to REDZONE because that's what it is */ 83 82 #define STACK_FRAME_OVERHEAD 128 /* size of minimum stack frame */ ··· 85 86 #define user_mode(regs) (((regs)->sr & SPR_SR_SM) == 0) 86 87 #define user_stack_pointer(regs) ((unsigned long)(regs)->sp) 87 88 #define profile_pc(regs) instruction_pointer(regs) 89 + 90 + static inline long regs_return_value(struct pt_regs *regs) 91 + { 92 + return regs->gpr[11]; 93 + } 94 + 95 + #endif /* __ASSEMBLY__ */ 88 96 89 97 /* 90 98 * Offsets used by 'ptrace' system call interface.
+1
arch/openrisc/kernel/init_task.c
··· 17 17 18 18 #include <linux/init_task.h> 19 19 #include <linux/mqueue.h> 20 + #include <linux/export.h> 20 21 21 22 static struct signal_struct init_signals = INIT_SIGNALS(init_signals); 22 23 static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
+1
arch/openrisc/kernel/irq.c
··· 23 23 #include <linux/irq.h> 24 24 #include <linux/seq_file.h> 25 25 #include <linux/kernel_stat.h> 26 + #include <linux/export.h> 26 27 27 28 #include <linux/irqflags.h> 28 29
+4 -8
arch/openrisc/kernel/ptrace.c
··· 188 188 */ 189 189 ret = -1L; 190 190 191 - /* Are these regs right??? */ 192 - if (unlikely(current->audit_context)) 193 - audit_syscall_entry(audit_arch(), regs->syscallno, 194 - regs->gpr[3], regs->gpr[4], 195 - regs->gpr[5], regs->gpr[6]); 191 + audit_syscall_entry(audit_arch(), regs->syscallno, 192 + regs->gpr[3], regs->gpr[4], 193 + regs->gpr[5], regs->gpr[6]); 196 194 197 195 return ret ? : regs->syscallno; 198 196 } ··· 199 201 { 200 202 int step; 201 203 202 - if (unlikely(current->audit_context)) 203 - audit_syscall_exit(AUDITSC_RESULT(regs->gpr[11]), 204 - regs->gpr[11]); 204 + audit_syscall_exit(regs); 205 205 206 206 step = test_thread_flag(TIF_SINGLESTEP); 207 207 if (step || test_thread_flag(TIF_SYSCALL_TRACE))
+4
arch/parisc/Makefile
··· 31 31 UTS_MACHINE := parisc64 32 32 CHECKFLAGS += -D__LP64__=1 -m64 33 33 WIDTH := 64 34 + 35 + # FIXME: if no default set, should really try to locate dynamically 36 + ifeq ($(CROSS_COMPILE),) 34 37 CROSS_COMPILE := hppa64-linux-gnu- 38 + endif 35 39 else # 32-bit 36 40 WIDTH := 37 41 endif
+3
arch/s390/Kconfig
··· 227 227 config SYSVIPC_COMPAT 228 228 def_bool y if COMPAT && SYSVIPC 229 229 230 + config KEYS_COMPAT 231 + def_bool y if COMPAT && KEYS 232 + 230 233 config AUDIT_ARCH 231 234 def_bool y 232 235
-1
arch/s390/kernel/crash_dump.c
··· 11 11 #include <linux/module.h> 12 12 #include <linux/gfp.h> 13 13 #include <linux/slab.h> 14 - #include <linux/crash_dump.h> 15 14 #include <linux/bootmem.h> 16 15 #include <linux/elf.h> 17 16 #include <asm/ipl.h>
+26 -4
arch/s390/mm/init.c
··· 223 223 #ifdef CONFIG_MEMORY_HOTPLUG 224 224 int arch_add_memory(int nid, u64 start, u64 size) 225 225 { 226 - struct pglist_data *pgdat; 226 + unsigned long zone_start_pfn, zone_end_pfn, nr_pages; 227 + unsigned long start_pfn = PFN_DOWN(start); 228 + unsigned long size_pages = PFN_DOWN(size); 227 229 struct zone *zone; 228 230 int rc; 229 231 230 - pgdat = NODE_DATA(nid); 231 - zone = pgdat->node_zones + ZONE_MOVABLE; 232 232 rc = vmem_add_mapping(start, size); 233 233 if (rc) 234 234 return rc; 235 - rc = __add_pages(nid, zone, PFN_DOWN(start), PFN_DOWN(size)); 235 + for_each_zone(zone) { 236 + if (zone_idx(zone) != ZONE_MOVABLE) { 237 + /* Add range within existing zone limits */ 238 + zone_start_pfn = zone->zone_start_pfn; 239 + zone_end_pfn = zone->zone_start_pfn + 240 + zone->spanned_pages; 241 + } else { 242 + /* Add remaining range to ZONE_MOVABLE */ 243 + zone_start_pfn = start_pfn; 244 + zone_end_pfn = start_pfn + size_pages; 245 + } 246 + if (start_pfn < zone_start_pfn || start_pfn >= zone_end_pfn) 247 + continue; 248 + nr_pages = (start_pfn + size_pages > zone_end_pfn) ? 249 + zone_end_pfn - start_pfn : size_pages; 250 + rc = __add_pages(nid, zone, start_pfn, nr_pages); 251 + if (rc) 252 + break; 253 + start_pfn += nr_pages; 254 + size_pages -= nr_pages; 255 + if (!size_pages) 256 + break; 257 + } 236 258 if (rc) 237 259 vmem_remove_mapping(start, size); 238 260 return rc;
+7 -7
arch/x86/ia32/ia32_aout.c
··· 315 315 current->mm->free_area_cache = TASK_UNMAPPED_BASE; 316 316 current->mm->cached_hole_size = 0; 317 317 318 + retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT); 319 + if (retval < 0) { 320 + /* Someone check-me: is this error path enough? */ 321 + send_sig(SIGKILL, current, 0); 322 + return retval; 323 + } 324 + 318 325 install_exec_creds(bprm); 319 326 current->flags &= ~PF_FORKNOEXEC; 320 327 ··· 416 409 set_binfmt(&aout_format); 417 410 418 411 set_brk(current->mm->start_brk, current->mm->brk); 419 - 420 - retval = setup_arg_pages(bprm, IA32_STACK_TOP, EXSTACK_DEFAULT); 421 - if (retval < 0) { 422 - /* Someone check-me: is this error path enough? */ 423 - send_sig(SIGKILL, current, 0); 424 - return retval; 425 - } 426 412 427 413 current->mm->start_stack = 428 414 (unsigned long)create_aout_tables((char __user *)bprm->p, bprm);
+9 -8
arch/x86/kernel/cpu/perf_event_intel.c
··· 385 385 #define NHM_LOCAL_DRAM (1 << 14) 386 386 #define NHM_NON_DRAM (1 << 15) 387 387 388 - #define NHM_ALL_DRAM (NHM_REMOTE_DRAM|NHM_LOCAL_DRAM) 388 + #define NHM_LOCAL (NHM_LOCAL_DRAM|NHM_REMOTE_CACHE_FWD) 389 + #define NHM_REMOTE (NHM_REMOTE_DRAM) 389 390 390 391 #define NHM_DMND_READ (NHM_DMND_DATA_RD) 391 392 #define NHM_DMND_WRITE (NHM_DMND_RFO|NHM_DMND_WB) 392 393 #define NHM_DMND_PREFETCH (NHM_PF_DATA_RD|NHM_PF_DATA_RFO) 393 394 394 395 #define NHM_L3_HIT (NHM_UNCORE_HIT|NHM_OTHER_CORE_HIT_SNP|NHM_OTHER_CORE_HITM) 395 - #define NHM_L3_MISS (NHM_NON_DRAM|NHM_ALL_DRAM|NHM_REMOTE_CACHE_FWD) 396 + #define NHM_L3_MISS (NHM_NON_DRAM|NHM_LOCAL_DRAM|NHM_REMOTE_DRAM|NHM_REMOTE_CACHE_FWD) 396 397 #define NHM_L3_ACCESS (NHM_L3_HIT|NHM_L3_MISS) 397 398 398 399 static __initconst const u64 nehalem_hw_cache_extra_regs ··· 417 416 }, 418 417 [ C(NODE) ] = { 419 418 [ C(OP_READ) ] = { 420 - [ C(RESULT_ACCESS) ] = NHM_DMND_READ|NHM_ALL_DRAM, 421 - [ C(RESULT_MISS) ] = NHM_DMND_READ|NHM_REMOTE_DRAM, 419 + [ C(RESULT_ACCESS) ] = NHM_DMND_READ|NHM_LOCAL|NHM_REMOTE, 420 + [ C(RESULT_MISS) ] = NHM_DMND_READ|NHM_REMOTE, 422 421 }, 423 422 [ C(OP_WRITE) ] = { 424 - [ C(RESULT_ACCESS) ] = NHM_DMND_WRITE|NHM_ALL_DRAM, 425 - [ C(RESULT_MISS) ] = NHM_DMND_WRITE|NHM_REMOTE_DRAM, 423 + [ C(RESULT_ACCESS) ] = NHM_DMND_WRITE|NHM_LOCAL|NHM_REMOTE, 424 + [ C(RESULT_MISS) ] = NHM_DMND_WRITE|NHM_REMOTE, 426 425 }, 427 426 [ C(OP_PREFETCH) ] = { 428 - [ C(RESULT_ACCESS) ] = NHM_DMND_PREFETCH|NHM_ALL_DRAM, 429 - [ C(RESULT_MISS) ] = NHM_DMND_PREFETCH|NHM_REMOTE_DRAM, 427 + [ C(RESULT_ACCESS) ] = NHM_DMND_PREFETCH|NHM_LOCAL|NHM_REMOTE, 428 + [ C(RESULT_MISS) ] = NHM_DMND_PREFETCH|NHM_REMOTE, 430 429 }, 431 430 }, 432 431 };
+2 -2
arch/x86/lib/delay.c
··· 48 48 } 49 49 50 50 /* TSC based delay: */ 51 - static void delay_tsc(unsigned long loops) 51 + static void delay_tsc(unsigned long __loops) 52 52 { 53 - unsigned long bclock, now; 53 + u32 bclock, now, loops = __loops; 54 54 int cpu; 55 55 56 56 preempt_disable();
+3 -1
arch/x86/mm/hugetlbpage.c
··· 333 333 * Lookup failure means no vma is above this address, 334 334 * i.e. return with success: 335 335 */ 336 - if (!(vma = find_vma_prev(mm, addr, &prev_vma))) 336 + vma = find_vma(mm, addr); 337 + if (!vma) 337 338 return addr; 338 339 339 340 /* 340 341 * new region fits between prev_vma->vm_end and 341 342 * vma->vm_start, use it: 342 343 */ 344 + prev_vma = vma->vm_prev; 343 345 if (addr + len <= vma->vm_start && 344 346 (!prev_vma || (addr >= prev_vma->vm_end))) { 345 347 /* remember the address as a hint for next time */
+17 -5
arch/x86/pci/acpi.c
··· 60 60 DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."), 61 61 }, 62 62 }, 63 + /* https://bugzilla.kernel.org/show_bug.cgi?id=42619 */ 64 + { 65 + .callback = set_use_crs, 66 + .ident = "MSI MS-7253", 67 + .matches = { 68 + DMI_MATCH(DMI_BOARD_VENDOR, "MICRO-STAR INTERNATIONAL CO., LTD"), 69 + DMI_MATCH(DMI_BOARD_NAME, "MS-7253"), 70 + DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies, LTD"), 71 + }, 72 + }, 63 73 64 74 /* Now for the blacklist.. */ 65 75 ··· 292 282 int i; 293 283 struct resource *res, *root, *conflict; 294 284 295 - if (!pci_use_crs) 296 - return; 297 - 298 285 coalesce_windows(info, IORESOURCE_MEM); 299 286 coalesce_windows(info, IORESOURCE_IO); 300 287 ··· 343 336 acpi_walk_resources(device->handle, METHOD_NAME__CRS, setup_resource, 344 337 &info); 345 338 346 - add_resources(&info); 347 - return; 339 + if (pci_use_crs) { 340 + add_resources(&info); 341 + 342 + return; 343 + } 344 + 345 + kfree(info.name); 348 346 349 347 name_alloc_fail: 350 348 kfree(info.res);
+1 -1
drivers/block/floppy.c
··· 3832 3832 bio.bi_size = size; 3833 3833 bio.bi_bdev = bdev; 3834 3834 bio.bi_sector = 0; 3835 - bio.bi_flags = BIO_QUIET; 3835 + bio.bi_flags = (1 << BIO_QUIET); 3836 3836 init_completion(&complete); 3837 3837 bio.bi_private = &complete; 3838 3838 bio.bi_end_io = floppy_rb0_complete;
+1
drivers/crypto/mv_cesa.c
··· 714 714 { 715 715 struct mv_req_hash_ctx *ctx = ahash_request_ctx(req); 716 716 717 + ahash_request_set_crypt(req, NULL, req->result, 0); 717 718 mv_update_hash_req_ctx(ctx, 1, 0); 718 719 return mv_handle_req(&req->base); 719 720 }
+11 -5
drivers/gpu/drm/exynos/exynos_drm_connector.c
··· 28 28 #include "drmP.h" 29 29 #include "drm_crtc_helper.h" 30 30 31 + #include <drm/exynos_drm.h> 31 32 #include "exynos_drm_drv.h" 32 33 #include "exynos_drm_encoder.h" 33 34 ··· 45 44 /* convert exynos_video_timings to drm_display_mode */ 46 45 static inline void 47 46 convert_to_display_mode(struct drm_display_mode *mode, 48 - struct fb_videomode *timing) 47 + struct exynos_drm_panel_info *panel) 49 48 { 49 + struct fb_videomode *timing = &panel->timing; 50 50 DRM_DEBUG_KMS("%s\n", __FILE__); 51 51 52 52 mode->clock = timing->pixclock / 1000; ··· 62 60 mode->vsync_start = mode->vdisplay + timing->upper_margin; 63 61 mode->vsync_end = mode->vsync_start + timing->vsync_len; 64 62 mode->vtotal = mode->vsync_end + timing->lower_margin; 63 + mode->width_mm = panel->width_mm; 64 + mode->height_mm = panel->height_mm; 65 65 66 66 if (timing->vmode & FB_VMODE_INTERLACED) 67 67 mode->flags |= DRM_MODE_FLAG_INTERLACE; ··· 152 148 connector->display_info.raw_edid = edid; 153 149 } else { 154 150 struct drm_display_mode *mode = drm_mode_create(connector->dev); 155 - struct fb_videomode *timing; 151 + struct exynos_drm_panel_info *panel; 156 152 157 - if (display_ops->get_timing) 158 - timing = display_ops->get_timing(manager->dev); 153 + if (display_ops->get_panel) 154 + panel = display_ops->get_panel(manager->dev); 159 155 else { 160 156 drm_mode_destroy(connector->dev, mode); 161 157 return 0; 162 158 } 163 159 164 - convert_to_display_mode(mode, timing); 160 + convert_to_display_mode(mode, panel); 161 + connector->display_info.width_mm = mode->width_mm; 162 + connector->display_info.height_mm = mode->height_mm; 165 163 166 164 mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED; 167 165 drm_mode_set_name(mode);
+2 -2
drivers/gpu/drm/exynos/exynos_drm_drv.h
··· 136 136 * @type: one of EXYNOS_DISPLAY_TYPE_LCD and HDMI. 137 137 * @is_connected: check for that display is connected or not. 138 138 * @get_edid: get edid modes from display driver. 139 - * @get_timing: get timing object from display driver. 139 + * @get_panel: get panel object from display driver. 140 140 * @check_timing: check if timing is valid or not. 141 141 * @power_on: display device on or off. 142 142 */ ··· 145 145 bool (*is_connected)(struct device *dev); 146 146 int (*get_edid)(struct device *dev, struct drm_connector *connector, 147 147 u8 *edid, int len); 148 - void *(*get_timing)(struct device *dev); 148 + void *(*get_panel)(struct device *dev); 149 149 int (*check_timing)(struct device *dev, void *timing); 150 150 int (*power_on)(struct device *dev, int mode); 151 151 };
+14 -13
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 89 89 bool suspended; 90 90 struct mutex lock; 91 91 92 - struct fb_videomode *timing; 92 + struct exynos_drm_panel_info *panel; 93 93 }; 94 94 95 95 static bool fimd_display_is_connected(struct device *dev) ··· 101 101 return true; 102 102 } 103 103 104 - static void *fimd_get_timing(struct device *dev) 104 + static void *fimd_get_panel(struct device *dev) 105 105 { 106 106 struct fimd_context *ctx = get_fimd_context(dev); 107 107 108 108 DRM_DEBUG_KMS("%s\n", __FILE__); 109 109 110 - return ctx->timing; 110 + return ctx->panel; 111 111 } 112 112 113 113 static int fimd_check_timing(struct device *dev, void *timing) ··· 131 131 static struct exynos_drm_display_ops fimd_display_ops = { 132 132 .type = EXYNOS_DISPLAY_TYPE_LCD, 133 133 .is_connected = fimd_display_is_connected, 134 - .get_timing = fimd_get_timing, 134 + .get_panel = fimd_get_panel, 135 135 .check_timing = fimd_check_timing, 136 136 .power_on = fimd_display_power_on, 137 137 }; ··· 193 193 static void fimd_commit(struct device *dev) 194 194 { 195 195 struct fimd_context *ctx = get_fimd_context(dev); 196 - struct fb_videomode *timing = ctx->timing; 196 + struct exynos_drm_panel_info *panel = ctx->panel; 197 + struct fb_videomode *timing = &panel->timing; 197 198 u32 val; 198 199 199 200 if (ctx->suspended) ··· 787 786 struct fimd_context *ctx; 788 787 struct exynos_drm_subdrv *subdrv; 789 788 struct exynos_drm_fimd_pdata *pdata; 790 - struct fb_videomode *timing; 789 + struct exynos_drm_panel_info *panel; 791 790 struct resource *res; 792 791 int win; 793 792 int ret = -EINVAL; ··· 800 799 return -EINVAL; 801 800 } 802 801 803 - timing = &pdata->timing; 804 - if (!timing) { 805 - dev_err(dev, "timing is null.\n"); 802 + panel = &pdata->panel; 803 + if (!panel) { 804 + dev_err(dev, "panel is null.\n"); 806 805 return -EINVAL; 807 806 } 808 807 ··· 864 863 goto err_req_irq; 865 864 } 866 865 867 - ctx->clkdiv = fimd_calc_clkdiv(ctx, timing); 866 + ctx->clkdiv = fimd_calc_clkdiv(ctx, &panel->timing); 
868 867 ctx->vidcon0 = pdata->vidcon0; 869 868 ctx->vidcon1 = pdata->vidcon1; 870 869 ctx->default_win = pdata->default_win; 871 - ctx->timing = timing; 870 + ctx->panel = panel; 872 871 873 - timing->pixclock = clk_get_rate(ctx->lcd_clk) / ctx->clkdiv; 872 + panel->timing.pixclock = clk_get_rate(ctx->lcd_clk) / ctx->clkdiv; 874 873 875 874 DRM_DEBUG_KMS("pixel clock = %d, clkdiv = %d\n", 876 - timing->pixclock, ctx->clkdiv); 875 + panel->timing.pixclock, ctx->clkdiv); 877 876 878 877 subdrv = &ctx->subdrv; 879 878
+2
drivers/gpu/drm/gma500/cdv_device.c
··· 321 321 cdv_get_core_freq(dev); 322 322 gma_intel_opregion_init(dev); 323 323 psb_intel_init_bios(dev); 324 + REG_WRITE(PORT_HOTPLUG_EN, 0); 325 + REG_WRITE(PORT_HOTPLUG_STAT, REG_READ(PORT_HOTPLUG_STAT)); 324 326 return 0; 325 327 } 326 328
-1
drivers/gpu/drm/gma500/framebuffer.c
··· 247 247 .fb_imageblit = cfb_imageblit, 248 248 .fb_pan_display = psbfb_pan, 249 249 .fb_mmap = psbfb_mmap, 250 - .fb_sync = psbfb_sync, 251 250 .fb_ioctl = psbfb_ioctl, 252 251 }; 253 252
+4 -5
drivers/gpu/drm/gma500/gtt.c
··· 446 446 pg->gtt_start = pci_resource_start(dev->pdev, PSB_GTT_RESOURCE); 447 447 gtt_pages = pci_resource_len(dev->pdev, PSB_GTT_RESOURCE) 448 448 >> PAGE_SHIFT; 449 - /* Some CDV firmware doesn't report this currently. In which case the 450 - system has 64 gtt pages */ 449 + /* CDV doesn't report this. In which case the system has 64 gtt pages */ 451 450 if (pg->gtt_start == 0 || gtt_pages == 0) { 452 - dev_err(dev->dev, "GTT PCI BAR not initialized.\n"); 451 + dev_dbg(dev->dev, "GTT PCI BAR not initialized.\n"); 453 452 gtt_pages = 64; 454 453 pg->gtt_start = dev_priv->pge_ctl; 455 454 } ··· 460 461 461 462 if (pg->gatt_pages == 0 || pg->gatt_start == 0) { 462 463 static struct resource fudge; /* Preferably peppermint */ 463 - /* This can occur on CDV SDV systems. Fudge it in this case. 464 + /* This can occur on CDV systems. Fudge it in this case. 464 465 We really don't care what imaginary space is being allocated 465 466 at this point */ 466 - dev_err(dev->dev, "GATT PCI BAR not initialized.\n"); 467 + dev_dbg(dev->dev, "GATT PCI BAR not initialized.\n"); 467 468 pg->gatt_start = 0x40000000; 468 469 pg->gatt_pages = (128 * 1024 * 1024) >> PAGE_SHIFT; 469 470 /* This is a little confusing but in fact the GTT is providing
+12 -3
drivers/gpu/drm/i915/intel_display.c
··· 4680 4680 4681 4681 crtc = intel_get_crtc_for_plane(dev, plane); 4682 4682 clock = crtc->mode.clock; 4683 + if (!clock) { 4684 + *sprite_wm = 0; 4685 + return false; 4686 + } 4683 4687 4684 4688 line_time_us = (sprite_width * 1000) / clock; 4689 + if (!line_time_us) { 4690 + *sprite_wm = 0; 4691 + return false; 4692 + } 4693 + 4685 4694 line_count = (latency_ns / line_time_us + 1000) / 1000; 4686 4695 line_size = sprite_width * pixel_size; 4687 4696 ··· 6184 6175 int i; 6185 6176 6186 6177 /* The clocks have to be on to load the palette. */ 6187 - if (!crtc->enabled) 6178 + if (!crtc->enabled || !intel_crtc->active) 6188 6179 return; 6189 6180 6190 6181 /* use legacy palette for Ironlake */ ··· 6570 6561 mode_cmd.height = mode->vdisplay; 6571 6562 mode_cmd.pitches[0] = intel_framebuffer_pitch_for_width(mode_cmd.width, 6572 6563 bpp); 6573 - mode_cmd.pixel_format = 0; 6564 + mode_cmd.pixel_format = drm_mode_legacy_fb_format(bpp, depth); 6574 6565 6575 6566 return intel_framebuffer_create(dev, &mode_cmd, obj); 6576 6567 } ··· 8194 8185 8195 8186 if (intel_enable_rc6(dev_priv->dev)) 8196 8187 rc6_mask = GEN6_RC_CTL_RC6_ENABLE | 8197 - (IS_GEN7(dev_priv->dev)) ? GEN6_RC_CTL_RC6p_ENABLE : 0; 8188 + ((IS_GEN7(dev_priv->dev)) ? GEN6_RC_CTL_RC6p_ENABLE : 0); 8198 8189 8199 8190 I915_WRITE(GEN6_RC_CONTROL, 8200 8191 rc6_mask |
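Note on the last intel_display.c hunk: it fixes a C operator-precedence bug. `|` binds tighter than `?:`, so without the added parentheses the whole `rc6_mask | IS_GEN7(...)` expression became the ternary's condition. A standalone sketch of the two parses (the bit values below are placeholders, not the real `GEN6_RC_CTL_*` layout):

```c
#include <stdint.h>

/* Placeholder bits; the real GEN6_RC_CTL_* values are not reproduced here. */
#define RC6_ENABLE  (1u << 18)
#define RC6P_ENABLE (1u << 17)

/* Pre-patch form: parses as (RC6_ENABLE | is_gen7) ? RC6P_ENABLE : 0,
 * which is always true, so RC6p is set unconditionally and the
 * RC6_ENABLE bit itself is lost from the result. */
static uint32_t rc6_mask_buggy(int is_gen7)
{
	return RC6_ENABLE | is_gen7 ? RC6P_ENABLE : 0;
}

/* Patched form: the parentheses keep both bits intact. */
static uint32_t rc6_mask_fixed(int is_gen7)
{
	return RC6_ENABLE | (is_gen7 ? RC6P_ENABLE : 0);
}
```

The same hunk's zero checks on `clock` and `line_time_us` guard the two divisions in the sprite watermark path against a disabled pipe reporting a zero mode clock.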
+1 -13
drivers/gpu/drm/i915/intel_ringbuffer.c
··· 301 301 302 302 I915_WRITE_CTL(ring, 303 303 ((ring->size - PAGE_SIZE) & RING_NR_PAGES) 304 - | RING_REPORT_64K | RING_VALID); 304 + | RING_VALID); 305 305 306 306 /* If the head is still not zero, the ring is dead */ 307 307 if ((I915_READ_CTL(ring) & RING_VALID) == 0 || ··· 1132 1132 struct drm_device *dev = ring->dev; 1133 1133 struct drm_i915_private *dev_priv = dev->dev_private; 1134 1134 unsigned long end; 1135 - u32 head; 1136 - 1137 - /* If the reported head position has wrapped or hasn't advanced, 1138 - * fallback to the slow and accurate path. 1139 - */ 1140 - head = intel_read_status_page(ring, 4); 1141 - if (head > ring->head) { 1142 - ring->head = head; 1143 - ring->space = ring_space(ring); 1144 - if (ring->space >= n) 1145 - return 0; 1146 - } 1147 1135 1148 1136 trace_i915_ring_wait_begin(ring); 1149 1137 if (drm_core_check_feature(dev, DRIVER_GEM))
+3
drivers/gpu/drm/radeon/r600.c
··· 2362 2362 uint64_t addr = semaphore->gpu_addr; 2363 2363 unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : PACKET3_SEM_SEL_SIGNAL; 2364 2364 2365 + if (rdev->family < CHIP_CAYMAN) 2366 + sel |= PACKET3_SEM_WAIT_ON_SIGNAL; 2367 + 2365 2368 radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1)); 2366 2369 radeon_ring_write(ring, addr & 0xffffffff); 2367 2370 radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+8
drivers/gpu/drm/radeon/r600_blit_shaders.c
··· 314 314 0x00000000, /* VGT_VTX_CNT_EN */ 315 315 316 316 0xc0016900, 317 + 0x000000d4, 318 + 0x00000000, /* SX_MISC */ 319 + 320 + 0xc0016900, 317 321 0x000002c8, 318 322 0x00000000, /* VGT_STRMOUT_BUFFER_EN */ 319 323 ··· 628 624 0x00000000, /* VGT_STRMOUT_EN */ 629 625 0x00000000, /* VGT_REUSE_OFF */ 630 626 0x00000000, /* VGT_VTX_CNT_EN */ 627 + 628 + 0xc0016900, 629 + 0x000000d4, 630 + 0x00000000, /* SX_MISC */ 631 631 632 632 0xc0016900, 633 633 0x000002c8,
+1
drivers/gpu/drm/radeon/r600_cs.c
··· 1304 1304 h0 = G_038004_TEX_HEIGHT(word1) + 1; 1305 1305 d0 = G_038004_TEX_DEPTH(word1); 1306 1306 nfaces = 1; 1307 + array = 0; 1307 1308 switch (G_038000_DIM(word0)) { 1308 1309 case V_038000_SQ_TEX_DIM_1D: 1309 1310 case V_038000_SQ_TEX_DIM_2D:
+1
drivers/gpu/drm/radeon/r600d.h
··· 831 831 #define PACKET3_STRMOUT_BUFFER_UPDATE 0x34 832 832 #define PACKET3_INDIRECT_BUFFER_MP 0x38 833 833 #define PACKET3_MEM_SEMAPHORE 0x39 834 + # define PACKET3_SEM_WAIT_ON_SIGNAL (0x1 << 12) 834 835 # define PACKET3_SEM_SEL_SIGNAL (0x6 << 29) 835 836 # define PACKET3_SEM_SEL_WAIT (0x7 << 29) 836 837 #define PACKET3_MPEG_INDEX 0x3A
+18 -9
drivers/gpu/drm/radeon/radeon_connectors.c
··· 1057 1057 (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_HDMI_TYPE_B)) 1058 1058 return MODE_OK; 1059 1059 else if (radeon_connector->connector_object_id == CONNECTOR_OBJECT_ID_HDMI_TYPE_A) { 1060 - if (ASIC_IS_DCE3(rdev)) { 1060 + if (0) { 1061 1061 /* HDMI 1.3+ supports max clock of 340 Mhz */ 1062 1062 if (mode->clock > 340000) 1063 1063 return MODE_CLOCK_HIGH; ··· 1117 1117 (connector->connector_type == DRM_MODE_CONNECTOR_LVDS)) { 1118 1118 struct drm_display_mode *mode; 1119 1119 1120 - if (!radeon_dig_connector->edp_on) 1121 - atombios_set_edp_panel_power(connector, 1122 - ATOM_TRANSMITTER_ACTION_POWER_ON); 1123 - ret = radeon_ddc_get_modes(radeon_connector); 1124 - if (!radeon_dig_connector->edp_on) 1125 - atombios_set_edp_panel_power(connector, 1126 - ATOM_TRANSMITTER_ACTION_POWER_OFF); 1120 + if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 1121 + if (!radeon_dig_connector->edp_on) 1122 + atombios_set_edp_panel_power(connector, 1123 + ATOM_TRANSMITTER_ACTION_POWER_ON); 1124 + ret = radeon_ddc_get_modes(radeon_connector); 1125 + if (!radeon_dig_connector->edp_on) 1126 + atombios_set_edp_panel_power(connector, 1127 + ATOM_TRANSMITTER_ACTION_POWER_OFF); 1128 + } else { 1129 + /* need to setup ddc on the bridge */ 1130 + if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) != 1131 + ENCODER_OBJECT_ID_NONE) { 1132 + if (encoder) 1133 + radeon_atom_ext_encoder_setup_ddc(encoder); 1134 + } 1135 + ret = radeon_ddc_get_modes(radeon_connector); 1136 + } 1127 1137 1128 1138 if (ret > 0) { 1129 1139 if (encoder) { ··· 1144 1134 return ret; 1145 1135 } 1146 1136 1147 - encoder = radeon_best_single_encoder(connector); 1148 1137 if (!encoder) 1149 1138 return 0; 1150 1139
+15 -3
drivers/gpu/drm/radeon/radeon_display.c
··· 1078 1078 .create_handle = radeon_user_framebuffer_create_handle, 1079 1079 }; 1080 1080 1081 - void 1081 + int 1082 1082 radeon_framebuffer_init(struct drm_device *dev, 1083 1083 struct radeon_framebuffer *rfb, 1084 1084 struct drm_mode_fb_cmd2 *mode_cmd, 1085 1085 struct drm_gem_object *obj) 1086 1086 { 1087 + int ret; 1087 1088 rfb->obj = obj; 1088 - drm_framebuffer_init(dev, &rfb->base, &radeon_fb_funcs); 1089 + ret = drm_framebuffer_init(dev, &rfb->base, &radeon_fb_funcs); 1090 + if (ret) { 1091 + rfb->obj = NULL; 1092 + return ret; 1093 + } 1089 1094 drm_helper_mode_fill_fb_struct(&rfb->base, mode_cmd); 1095 + return 0; 1090 1096 } 1091 1097 1092 1098 static struct drm_framebuffer * ··· 1102 1096 { 1103 1097 struct drm_gem_object *obj; 1104 1098 struct radeon_framebuffer *radeon_fb; 1099 + int ret; 1105 1100 1106 1101 obj = drm_gem_object_lookup(dev, file_priv, mode_cmd->handles[0]); 1107 1102 if (obj == NULL) { ··· 1115 1108 if (radeon_fb == NULL) 1116 1109 return ERR_PTR(-ENOMEM); 1117 1110 1118 - radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj); 1111 + ret = radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj); 1112 + if (ret) { 1113 + kfree(radeon_fb); 1114 + drm_gem_object_unreference_unlocked(obj); 1115 + return NULL; 1116 + } 1119 1117 1120 1118 return &radeon_fb->base; 1121 1119 }
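The radeon_display.c hunk converts `radeon_framebuffer_init()` from `void` to `int` so a `drm_framebuffer_init()` failure propagates instead of being silently dropped, and it undoes the partial state (`rfb->obj`) before returning. A minimal model of that pattern; every name here is an illustrative stand-in, not the real DRM API:

```c
#include <stddef.h>

struct fb_sketch {
	void *obj;
};

/* Stands in for the core init call that may fail. */
static int core_init_sketch(struct fb_sketch *rfb, int simulate_failure)
{
	return simulate_failure ? -22 /* -EINVAL */ : 0;
}

/* Init helper: assign state first, then unwind it on core failure so
 * the caller never sees a half-initialized object. */
static int framebuffer_init_sketch(struct fb_sketch *rfb, void *obj,
				   int simulate_failure)
{
	int ret;

	rfb->obj = obj;
	ret = core_init_sketch(rfb, simulate_failure);
	if (ret) {
		rfb->obj = NULL;
		return ret;
	}
	return 0;
}
```

The matching hunks in radeon_fb.c and radeon_mode.h update the callers and the prototype to check the new return value.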
+2 -4
drivers/gpu/drm/radeon/radeon_encoders.c
··· 307 307 bool radeon_dig_monitor_is_duallink(struct drm_encoder *encoder, 308 308 u32 pixel_clock) 309 309 { 310 - struct drm_device *dev = encoder->dev; 311 - struct radeon_device *rdev = dev->dev_private; 312 310 struct drm_connector *connector; 313 311 struct radeon_connector *radeon_connector; 314 312 struct radeon_connector_atom_dig *dig_connector; ··· 324 326 case DRM_MODE_CONNECTOR_HDMIB: 325 327 if (radeon_connector->use_digital) { 326 328 /* HDMI 1.3 supports up to 340 Mhz over single link */ 327 - if (ASIC_IS_DCE3(rdev) && drm_detect_hdmi_monitor(radeon_connector->edid)) { 329 + if (0 && drm_detect_hdmi_monitor(radeon_connector->edid)) { 328 330 if (pixel_clock > 340000) 329 331 return true; 330 332 else ··· 346 348 return false; 347 349 else { 348 350 /* HDMI 1.3 supports up to 340 Mhz over single link */ 349 - if (ASIC_IS_DCE3(rdev) && drm_detect_hdmi_monitor(radeon_connector->edid)) { 351 + if (0 && drm_detect_hdmi_monitor(radeon_connector->edid)) { 350 352 if (pixel_clock > 340000) 351 353 return true; 352 354 else
+10 -1
drivers/gpu/drm/radeon/radeon_fb.c
··· 209 209 sizes->surface_depth); 210 210 211 211 ret = radeonfb_create_pinned_object(rfbdev, &mode_cmd, &gobj); 212 + if (ret) { 213 + DRM_ERROR("failed to create fbcon object %d\n", ret); 214 + return ret; 215 + } 216 + 212 217 rbo = gem_to_radeon_bo(gobj); 213 218 214 219 /* okay we have an object now allocate the framebuffer */ ··· 225 220 226 221 info->par = rfbdev; 227 222 228 - radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj); 223 + ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj); 224 + if (ret) { 225 + DRM_ERROR("failed to initalise framebuffer %d\n", ret); 226 + goto out_unref; 227 + } 229 228 230 229 fb = &rfbdev->rfb.base; 231 230
+1 -1
drivers/gpu/drm/radeon/radeon_gart.c
··· 597 597 if (bo_va == NULL) 598 598 return 0; 599 599 600 - list_del(&bo_va->bo_list); 601 600 mutex_lock(&vm->mutex); 602 601 radeon_mutex_lock(&rdev->cs_mutex); 603 602 radeon_vm_bo_update_pte(rdev, vm, bo, NULL); 604 603 radeon_mutex_unlock(&rdev->cs_mutex); 605 604 list_del(&bo_va->vm_list); 606 605 mutex_unlock(&vm->mutex); 606 + list_del(&bo_va->bo_list); 607 607 608 608 kfree(bo_va); 609 609 return 0;
+1 -1
drivers/gpu/drm/radeon/radeon_mode.h
··· 649 649 u16 blue, int regno); 650 650 extern void radeon_crtc_fb_gamma_get(struct drm_crtc *crtc, u16 *red, u16 *green, 651 651 u16 *blue, int regno); 652 - void radeon_framebuffer_init(struct drm_device *dev, 652 + int radeon_framebuffer_init(struct drm_device *dev, 653 653 struct radeon_framebuffer *rfb, 654 654 struct drm_mode_fb_cmd2 *mode_cmd, 655 655 struct drm_gem_object *obj);
+3
drivers/hid/hid-ids.h
··· 59 59 #define USB_VENDOR_ID_AIRCABLE 0x16CA 60 60 #define USB_DEVICE_ID_AIRCABLE1 0x1502 61 61 62 + #define USB_VENDOR_ID_AIREN 0x1a2c 63 + #define USB_DEVICE_ID_AIREN_SLIMPLUS 0x0002 64 + 62 65 #define USB_VENDOR_ID_ALCOR 0x058f 63 66 #define USB_DEVICE_ID_ALCOR_USBRS232 0x9720 64 67
+7 -2
drivers/hid/hid-input.c
··· 986 986 return; 987 987 } 988 988 989 - /* Ignore out-of-range values as per HID specification, section 5.10 */ 990 - if (value < field->logical_minimum || value > field->logical_maximum) { 989 + /* 990 + * Ignore out-of-range values as per HID specification, 991 + * section 5.10 and 6.2.25 992 + */ 993 + if ((field->flags & HID_MAIN_ITEM_VARIABLE) && 994 + (value < field->logical_minimum || 995 + value > field->logical_maximum)) { 991 996 dbg_hid("Ignoring out-of-range value %x\n", value); 992 997 return; 993 998 }
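The hid-input.c change narrows the out-of-range rejection to Variable fields only: Array fields report usage indices, and a value outside the logical range can legitimately mean "no event there". A sketch of the patched predicate, using a reduced stand-in for `struct hid_field` (the flag value mirrors the kernel's `HID_MAIN_ITEM_VARIABLE` define):

```c
#include <stdbool.h>
#include <stdint.h>

#define HID_MAIN_ITEM_VARIABLE 0x002	/* as in include/linux/hid.h */

/* Reduced stand-in for struct hid_field. */
struct hid_field_sketch {
	unsigned flags;
	int32_t logical_minimum;
	int32_t logical_maximum;
};

/* Drop a value only when the field is Variable AND the value falls
 * outside [logical_minimum, logical_maximum]; Array fields pass. */
static bool value_should_be_ignored(const struct hid_field_sketch *f,
				    int32_t value)
{
	return (f->flags & HID_MAIN_ITEM_VARIABLE) &&
	       (value < f->logical_minimum || value > f->logical_maximum);
}
```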
+1
drivers/hid/usbhid/hid-quirks.c
··· 54 54 { USB_VENDOR_ID_PLAYDOTCOM, USB_DEVICE_ID_PLAYDOTCOM_EMS_USBII, HID_QUIRK_MULTI_INPUT }, 55 55 { USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS, HID_QUIRK_MULTI_INPUT }, 56 56 57 + { USB_VENDOR_ID_AIREN, USB_DEVICE_ID_AIREN_SLIMPLUS, HID_QUIRK_NOGET }, 57 58 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_UC100KM, HID_QUIRK_NOGET }, 58 59 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS124U, HID_QUIRK_NOGET }, 59 60 { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM, HID_QUIRK_NOGET },
+3 -2
drivers/hwmon/Kconfig
··· 497 497 If you say yes here, you get support for JEDEC JC42.4 compliant 498 498 temperature sensors, which are used on many DDR3 memory modules for 499 499 mobile devices and servers. Support will include, but not be limited 500 - to, ADT7408, CAT34TS02, CAT6095, MAX6604, MCP9805, MCP98242, MCP98243, 501 - MCP9843, SE97, SE98, STTS424(E), TSE2002B3, and TS3000B3. 500 + to, ADT7408, AT30TS00, CAT34TS02, CAT6095, MAX6604, MCP9804, MCP9805, 501 + MCP98242, MCP98243, MCP9843, SE97, SE98, STTS424(E), STTS2002, 502 + STTS3000, TSE2002B3, TSE2002GB2, TS3000B3, and TS3000GB2. 502 503 503 504 This driver can also be built as a module. If so, the module 504 505 will be called jc42.
+75 -14
drivers/hwmon/f75375s.c
··· 178 178 i2c_smbus_write_byte_data(client, reg + 1, (value & 0xFF)); 179 179 } 180 180 181 + static void f75375_write_pwm(struct i2c_client *client, int nr) 182 + { 183 + struct f75375_data *data = i2c_get_clientdata(client); 184 + if (data->kind == f75387) 185 + f75375_write16(client, F75375_REG_FAN_EXP(nr), data->pwm[nr]); 186 + else 187 + f75375_write8(client, F75375_REG_FAN_PWM_DUTY(nr), 188 + data->pwm[nr]); 189 + } 190 + 181 191 static struct f75375_data *f75375_update_device(struct device *dev) 182 192 { 183 193 struct i2c_client *client = to_i2c_client(dev); ··· 264 254 return 1500000 / rpm; 265 255 } 266 256 257 + static bool duty_mode_enabled(u8 pwm_enable) 258 + { 259 + switch (pwm_enable) { 260 + case 0: /* Manual, duty mode (full speed) */ 261 + case 1: /* Manual, duty mode */ 262 + case 4: /* Auto, duty mode */ 263 + return true; 264 + case 2: /* Auto, speed mode */ 265 + case 3: /* Manual, speed mode */ 266 + return false; 267 + default: 268 + BUG(); 269 + } 270 + } 271 + 272 + static bool auto_mode_enabled(u8 pwm_enable) 273 + { 274 + switch (pwm_enable) { 275 + case 0: /* Manual, duty mode (full speed) */ 276 + case 1: /* Manual, duty mode */ 277 + case 3: /* Manual, speed mode */ 278 + return false; 279 + case 2: /* Auto, speed mode */ 280 + case 4: /* Auto, duty mode */ 281 + return true; 282 + default: 283 + BUG(); 284 + } 285 + } 286 + 267 287 static ssize_t set_fan_min(struct device *dev, struct device_attribute *attr, 268 288 const char *buf, size_t count) 269 289 { ··· 327 287 if (err < 0) 328 288 return err; 329 289 290 + if (auto_mode_enabled(data->pwm_enable[nr])) 291 + return -EINVAL; 292 + if (data->kind == f75387 && duty_mode_enabled(data->pwm_enable[nr])) 293 + return -EINVAL; 294 + 330 295 mutex_lock(&data->update_lock); 331 296 data->fan_target[nr] = rpm_to_reg(val); 332 297 f75375_write16(client, F75375_REG_FAN_EXP(nr), data->fan_target[nr]); ··· 352 307 if (err < 0) 353 308 return err; 354 309 310 + if (auto_mode_enabled(data->pwm_enable[nr]) || 311 + !duty_mode_enabled(data->pwm_enable[nr])) 312 + return -EINVAL; 313 + 355 314 mutex_lock(&data->update_lock); 356 315 data->pwm[nr] = SENSORS_LIMIT(val, 0, 255); 357 - f75375_write8(client, F75375_REG_FAN_PWM_DUTY(nr), data->pwm[nr]); 316 + f75375_write_pwm(client, nr); 358 317 mutex_unlock(&data->update_lock); 359 318 return count; 360 319 } ··· 376 327 struct f75375_data *data = i2c_get_clientdata(client); 377 328 u8 fanmode; 378 329 379 - if (val < 0 || val > 3) 330 + if (val < 0 || val > 4) 380 331 return -EINVAL; 381 332 382 333 fanmode = f75375_read8(client, F75375_REG_FAN_TIMER); 383 334 if (data->kind == f75387) { 335 + /* For now, deny dangerous toggling of duty mode */ 336 + if (duty_mode_enabled(data->pwm_enable[nr]) != 337 + duty_mode_enabled(val)) 338 + return -EOPNOTSUPP; 384 339 /* clear each fanX_mode bit before setting them properly */ 385 340 fanmode &= ~(1 << F75387_FAN_DUTY_MODE(nr)); 386 341 fanmode &= ~(1 << F75387_FAN_MANU_MODE(nr)); ··· 398 345 fanmode |= (1 << F75387_FAN_MANU_MODE(nr)); 399 346 fanmode |= (1 << F75387_FAN_DUTY_MODE(nr)); 400 347 break; 401 - case 2: /* AUTOMATIC*/ 402 - fanmode |= (1 << F75387_FAN_DUTY_MODE(nr)); 348 + case 2: /* Automatic, speed mode */ 403 349 break; 404 350 case 3: /* fan speed */ 405 351 fanmode |= (1 << F75387_FAN_MANU_MODE(nr)); 352 + break; 353 + case 4: /* Automatic, pwm */ 354 + fanmode |= (1 << F75387_FAN_DUTY_MODE(nr)); 406 355 break; 407 356 } 408 357 } else { ··· 423 368 break; 424 369 case 3: /* fan speed */ 425 370 break; 371 + case 4: /* Automatic pwm */ 372 + return -EINVAL; 426 373 } 427 374 } 428 375 429 376 f75375_write8(client, F75375_REG_FAN_TIMER, fanmode); 430 377 data->pwm_enable[nr] = val; 431 378 if (val == 0) 432 - f75375_write8(client, F75375_REG_FAN_PWM_DUTY(nr), 433 - data->pwm[nr]); 379 + f75375_write_pwm(client, nr); 434 380 return 0; 435 381 } 436 382 ··· 782 726 783 727 manu = ((mode >> F75387_FAN_MANU_MODE(nr)) & 1); 784 
728 duty = ((mode >> F75387_FAN_DUTY_MODE(nr)) & 1); 785 - if (manu && duty) 786 - /* speed */ 729 + if (!manu && duty) 730 + /* auto, pwm */ 731 + data->pwm_enable[nr] = 4; 732 + else if (manu && !duty) 733 + /* manual, speed */ 787 734 data->pwm_enable[nr] = 3; 788 - else if (!manu && duty) 789 - /* automatic */ 735 + else if (!manu && !duty) 736 + /* automatic, speed */ 790 737 data->pwm_enable[nr] = 2; 791 738 else 792 - /* manual */ 739 + /* manual, pwm */ 793 740 data->pwm_enable[nr] = 1; 794 741 } else { 795 742 if (!(conf & (1 << F75375_FAN_CTRL_LINEAR(nr)))) ··· 817 758 set_pwm_enable_direct(client, 0, f75375s_pdata->pwm_enable[0]); 818 759 set_pwm_enable_direct(client, 1, f75375s_pdata->pwm_enable[1]); 819 760 for (nr = 0; nr < 2; nr++) { 761 + if (auto_mode_enabled(f75375s_pdata->pwm_enable[nr]) || 762 + !duty_mode_enabled(f75375s_pdata->pwm_enable[nr])) 763 + continue; 820 764 data->pwm[nr] = SENSORS_LIMIT(f75375s_pdata->pwm[nr], 0, 255); 821 - f75375_write8(client, F75375_REG_FAN_PWM_DUTY(nr), 822 - data->pwm[nr]); 765 + f75375_write_pwm(client, nr); 823 766 } 824 767 825 768 } ··· 848 787 if (err) 849 788 goto exit_free; 850 789 851 - if (data->kind == f75375) { 790 + if (data->kind != f75373) { 852 791 err = sysfs_chmod_file(&client->dev.kobj, 853 792 &sensor_dev_attr_pwm1_mode.dev_attr.attr, 854 793 S_IRUGO | S_IWUSR);
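The probe hunk above replaces the old (incorrect) decode of the f75387's two per-fan mode bits with one covering all four manu/duty combinations. As a sketch, the mapping reduces to a pure function; the numeric values follow the driver's own `pwm_enable` convention (1 manual/duty, 2 auto/speed, 3 manual/speed, 4 auto/duty):

```c
/* Decode (manu, duty) mode bits into the sysfs pwm_enable value,
 * following the patched probe logic above. */
static int f75387_decode_pwm_enable(int manu, int duty)
{
	if (!manu && duty)
		return 4;	/* automatic, duty (pwm) mode */
	if (manu && !duty)
		return 3;	/* manual, speed mode */
	if (!manu && !duty)
		return 2;	/* automatic, speed mode */
	return 1;		/* manual, duty (pwm) mode */
}
```

The new `duty_mode_enabled()`/`auto_mode_enabled()` helpers in the same patch are the inverse views of this table, used to reject writes that don't apply in the current mode.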
+28 -2
drivers/hwmon/jc42.c
··· 64 64 65 65 /* Manufacturer IDs */ 66 66 #define ADT_MANID 0x11d4 /* Analog Devices */ 67 + #define ATMEL_MANID 0x001f /* Atmel */ 67 68 #define MAX_MANID 0x004d /* Maxim */ 68 69 #define IDT_MANID 0x00b3 /* IDT */ 69 70 #define MCP_MANID 0x0054 /* Microchip */ ··· 78 77 #define ADT7408_DEVID 0x0801 79 78 #define ADT7408_DEVID_MASK 0xffff 80 79 80 + /* Atmel */ 81 + #define AT30TS00_DEVID 0x8201 82 + #define AT30TS00_DEVID_MASK 0xffff 83 + 81 84 /* IDT */ 82 85 #define TS3000B3_DEVID 0x2903 /* Also matches TSE2002B3 */ 83 86 #define TS3000B3_DEVID_MASK 0xffff 87 + 88 + #define TS3000GB2_DEVID 0x2912 /* Also matches TSE2002GB2 */ 89 + #define TS3000GB2_DEVID_MASK 0xffff 84 90 85 91 /* Maxim */ 86 92 #define MAX6604_DEVID 0x3e00 87 93 #define MAX6604_DEVID_MASK 0xffff 88 94 89 95 /* Microchip */ 96 + #define MCP9804_DEVID 0x0200 97 + #define MCP9804_DEVID_MASK 0xfffc 98 + 90 99 #define MCP98242_DEVID 0x2000 91 100 #define MCP98242_DEVID_MASK 0xfffc 92 101 ··· 124 113 #define STTS424E_DEVID 0x0000 125 114 #define STTS424E_DEVID_MASK 0xfffe 126 115 116 + #define STTS2002_DEVID 0x0300 117 + #define STTS2002_DEVID_MASK 0xffff 118 + 119 + #define STTS3000_DEVID 0x0200 120 + #define STTS3000_DEVID_MASK 0xffff 121 + 127 122 static u16 jc42_hysteresis[] = { 0, 1500, 3000, 6000 }; 128 123 129 124 struct jc42_chips { ··· 140 123 141 124 static struct jc42_chips jc42_chips[] = { 142 125 { ADT_MANID, ADT7408_DEVID, ADT7408_DEVID_MASK }, 126 + { ATMEL_MANID, AT30TS00_DEVID, AT30TS00_DEVID_MASK }, 143 127 { IDT_MANID, TS3000B3_DEVID, TS3000B3_DEVID_MASK }, 128 + { IDT_MANID, TS3000GB2_DEVID, TS3000GB2_DEVID_MASK }, 144 129 { MAX_MANID, MAX6604_DEVID, MAX6604_DEVID_MASK }, 130 + { MCP_MANID, MCP9804_DEVID, MCP9804_DEVID_MASK }, 145 131 { MCP_MANID, MCP98242_DEVID, MCP98242_DEVID_MASK }, 146 132 { MCP_MANID, MCP98243_DEVID, MCP98243_DEVID_MASK }, 147 133 { MCP_MANID, MCP9843_DEVID, MCP9843_DEVID_MASK }, ··· 153 133 { NXP_MANID, SE98_DEVID, SE98_DEVID_MASK }, 154 134 { STM_MANID, STTS424_DEVID, STTS424_DEVID_MASK }, 155 135 { STM_MANID, STTS424E_DEVID, STTS424E_DEVID_MASK }, 136 + { STM_MANID, STTS2002_DEVID, STTS2002_DEVID_MASK }, 137 + { STM_MANID, STTS3000_DEVID, STTS3000_DEVID_MASK }, 156 138 }; 157 139 158 140 /* Each client has this additional data */ ··· 181 159 182 160 static const struct i2c_device_id jc42_id[] = { 183 161 { "adt7408", 0 }, 162 + { "at30ts00", 0 }, 184 163 { "cat94ts02", 0 }, 185 164 { "cat6095", 0 }, 186 165 { "jc42", 0 }, 187 166 { "max6604", 0 }, 167 + { "mcp9804", 0 }, 188 168 { "mcp9805", 0 }, 189 169 { "mcp98242", 0 }, 190 170 { "mcp98243", 0 }, ··· 195 171 { "se97b", 0 }, 196 172 { "se98", 0 }, 197 173 { "stts424", 0 }, 198 - { "tse2002b3", 0 }, 199 - { "ts3000b3", 0 }, 174 + { "stts2002", 0 }, 175 + { "stts3000", 0 }, 176 + { "tse2002", 0 }, 177 + { "ts3000", 0 }, 200 178 { } 201 179 }; 202 180 MODULE_DEVICE_TABLE(i2c, jc42_id);
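The jc42 detection table works by masked device-ID comparison, which is how one entry such as MCP9804 (devid 0x0200, mask 0xfffc) covers several silicon revisions at once. A minimal sketch of that match, with a reduced stand-in for the driver's table entry:

```c
#include <stdbool.h>
#include <stdint.h>

/* Reduced stand-in for the driver's struct jc42_chips entry. */
struct jc42_chip_id {
	uint16_t manid;
	uint16_t devid;
	uint16_t devid_mask;
};

/* A chip matches when the manufacturer ID is exact and the device ID
 * agrees in the bits the mask cares about (revision bits masked off). */
static bool jc42_chip_matches(const struct jc42_chip_id *chip,
			      uint16_t manid, uint16_t devid)
{
	return manid == chip->manid &&
	       (devid & chip->devid_mask) == chip->devid;
}
```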
+2 -1
drivers/hwmon/pmbus/pmbus_core.c
··· 54 54 lcrit_alarm, crit_alarm */ 55 55 #define PMBUS_IOUT_BOOLEANS_PER_PAGE 3 /* alarm, lcrit_alarm, 56 56 crit_alarm */ 57 - #define PMBUS_POUT_BOOLEANS_PER_PAGE 2 /* alarm, crit_alarm */ 57 + #define PMBUS_POUT_BOOLEANS_PER_PAGE 3 /* cap_alarm, alarm, crit_alarm 58 + */ 58 59 #define PMBUS_MAX_BOOLEANS_PER_FAN 2 /* alarm, fault */ 59 60 #define PMBUS_MAX_BOOLEANS_PER_TEMP 4 /* min_alarm, max_alarm, 60 61 lcrit_alarm, crit_alarm */
+6 -4
drivers/hwmon/pmbus/zl6100.c
··· 33 33 struct zl6100_data { 34 34 int id; 35 35 ktime_t access; /* chip access time */ 36 + int delay; /* Delay between chip accesses in uS */ 36 37 struct pmbus_driver_info info; 37 38 }; 38 39 ··· 53 52 /* Some chips need a delay between accesses */ 54 53 static inline void zl6100_wait(const struct zl6100_data *data) 55 54 { 56 - if (delay) { 55 + if (data->delay) { 57 56 s64 delta = ktime_us_delta(ktime_get(), data->access); 58 - if (delta < delay) 59 - udelay(delay - delta); 57 + if (delta < data->delay) 58 + udelay(data->delay - delta); 60 59 } 61 60 } 62 61 ··· 208 207 * can be cleared later for additional chips if tests show that it 209 208 * is not needed (in other words, better be safe than sorry). 210 209 */ 210 + data->delay = delay; 211 211 if (data->id == zl2004 || data->id == zl6105) 212 - delay = 0; 212 + data->delay = 0; 213 213 214 214 /* 215 215 * Since there was a direct I2C device access above, wait before
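The zl6100.c fix moves the module-wide `delay` into per-instance data, so clearing it for a zl2004/zl6105 no longer disables throttling for every other chip sharing the module. The throttle itself is a minimum-interval check; a testable sketch with the clock value passed in (names are illustrative, not the pmbus API):

```c
#include <stdint.h>

/* Per-instance throttle state, mirroring the patched zl6100_data. */
struct zl_sketch {
	int64_t last_access_us;	/* time of previous chip access */
	int delay_us;		/* 0 disables throttling */
};

/* How long the caller must still wait before touching the chip again;
 * 0 means it may proceed immediately. */
static int64_t wait_needed_us(const struct zl_sketch *d, int64_t now_us)
{
	int64_t delta = now_us - d->last_access_us;

	if (d->delay_us && delta < d->delay_us)
		return d->delay_us - delta;
	return 0;
}
```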
+10 -3
drivers/i2c/busses/i2c-mxs.c
··· 72 72 73 73 #define MXS_I2C_QUEUESTAT (0x70) 74 74 #define MXS_I2C_QUEUESTAT_RD_QUEUE_EMPTY 0x00002000 75 + #define MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK 0x0000001F 75 76 76 77 #define MXS_I2C_QUEUECMD (0x80) 77 78 ··· 220 219 int ret; 221 220 int flags; 222 221 223 - init_completion(&i2c->cmd_complete); 224 - 225 222 dev_dbg(i2c->dev, "addr: 0x%04x, len: %d, flags: 0x%x, stop: %d\n", 226 223 msg->addr, msg->len, msg->flags, stop); 227 224 228 225 if (msg->len == 0) 229 226 return -EINVAL; 227 + 228 + init_completion(&i2c->cmd_complete); 230 229 231 230 flags = stop ? MXS_I2C_CTRL0_POST_SEND_STOP : 0; 232 231 ··· 287 286 { 288 287 struct mxs_i2c_dev *i2c = dev_id; 289 288 u32 stat = readl(i2c->regs + MXS_I2C_CTRL1) & MXS_I2C_IRQ_MASK; 289 + bool is_last_cmd; 290 290 291 291 if (!stat) 292 292 return IRQ_NONE; ··· 302 300 else 303 301 i2c->cmd_err = 0; 304 302 305 - complete(&i2c->cmd_complete); 303 + is_last_cmd = (readl(i2c->regs + MXS_I2C_QUEUESTAT) & 304 + MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK) == 0; 305 + 306 + if (is_last_cmd || i2c->cmd_err) 307 + complete(&i2c->cmd_complete); 306 308 307 309 writel(stat, i2c->regs + MXS_I2C_CTRL1_CLR); 310 + 308 311 return IRQ_HANDLED; 309 312 } 310 313
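The i2c-mxs interrupt handler now signals the waiter only once the write queue has drained (or an error was latched), instead of on every per-command interrupt. That decision reduces to a masked read of QUEUESTAT; a sketch using the mask the patch adds:

```c
#include <stdbool.h>
#include <stdint.h>

#define MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK 0x0000001F

/* Complete the transfer when the write-queue count field is zero
 * (last queued command finished) or a command error was recorded. */
static bool should_complete(uint32_t queuestat, int cmd_err)
{
	bool is_last_cmd =
		(queuestat & MXS_I2C_QUEUESTAT_WRITE_QUEUE_CNT_MASK) == 0;

	return is_last_cmd || cmd_err;
}
```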
+1 -1
drivers/input/evdev.c
··· 332 332 struct evdev_client *client = file->private_data; 333 333 struct evdev *evdev = client->evdev; 334 334 struct input_event event; 335 - int retval; 335 + int retval = 0; 336 336 337 337 if (count < input_event_size()) 338 338 return -EINVAL;
+2 -4
drivers/input/misc/twl4030-vibra.c
··· 172 172 } 173 173 174 174 /*** Module ***/ 175 - #if CONFIG_PM 175 + #if CONFIG_PM_SLEEP 176 176 static int twl4030_vibra_suspend(struct device *dev) 177 177 { 178 178 struct platform_device *pdev = to_platform_device(dev); ··· 189 189 vibra_disable_leds(); 190 190 return 0; 191 191 } 192 + #endif 192 193 193 194 static SIMPLE_DEV_PM_OPS(twl4030_vibra_pm_ops, 194 195 twl4030_vibra_suspend, twl4030_vibra_resume); 195 - #endif 196 196 197 197 static int __devinit twl4030_vibra_probe(struct platform_device *pdev) 198 198 { ··· 273 273 .driver = { 274 274 .name = "twl4030-vibra", 275 275 .owner = THIS_MODULE, 276 - #ifdef CONFIG_PM 277 276 .pm = &twl4030_vibra_pm_ops, 278 - #endif 279 277 }, 280 278 }; 281 279 module_platform_driver(twl4030_vibra_driver);
+5 -2
drivers/input/mouse/alps.c
··· 952 952 953 953 /* 954 954 * First try "E6 report". 955 - * ALPS should return 0,0,10 or 0,0,100 955 + * ALPS should return 0,0,10 or 0,0,100 if no buttons are pressed. 956 + * The bits 0-2 of the first byte will be 1s if some buttons are 957 + * pressed. 956 958 */ 957 959 param[0] = 0; 958 960 if (ps2_command(ps2dev, param, PSMOUSE_CMD_SETRES) || ··· 970 968 psmouse_dbg(psmouse, "E6 report: %2.2x %2.2x %2.2x", 971 969 param[0], param[1], param[2]); 972 970 973 - if (param[0] != 0 || param[1] != 0 || (param[2] != 10 && param[2] != 100)) 971 + if ((param[0] & 0xf8) != 0 || param[1] != 0 || 972 + (param[2] != 10 && param[2] != 100)) 974 973 return NULL; 975 974 976 975 /*
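The alps.c change relaxes the E6-report signature check: bits 0-2 of the first byte reflect held buttons, so only the upper bits must be zero, while bytes 1 and 2 keep the strict 0 and 10/100 expectation. The patched test as a standalone predicate:

```c
#include <stdbool.h>
#include <stdint.h>

/* Validate an ALPS E6 report per the patched check: mask off the
 * button bits (0-2) of byte 0 before comparing against zero. */
static bool e6_report_valid(const uint8_t param[3])
{
	if ((param[0] & 0xf8) != 0 || param[1] != 0)
		return false;
	return param[2] == 10 || param[2] == 100;
}
```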
+2
drivers/input/tablet/Kconfig
··· 77 77 tristate "Wacom Intuos/Graphire tablet support (USB)" 78 78 depends on USB_ARCH_HAS_HCD 79 79 select USB 80 + select NEW_LEDS 81 + select LEDS_CLASS 80 82 help 81 83 Say Y here if you want to use the USB version of the Wacom Intuos 82 84 or Graphire tablet. Make sure to say Y to "Mouse support"
+1 -1
drivers/input/tablet/wacom_wac.c
··· 926 926 { 927 927 struct input_dev *input = wacom->input; 928 928 unsigned char *data = wacom->data; 929 - int count = data[1] & 0x03; 929 + int count = data[1] & 0x07; 930 930 int i; 931 931 932 932 if (data[0] != 0x02)
+1 -1
drivers/iommu/amd_iommu_init.c
··· 275 275 } 276 276 277 277 /* Programs the physical address of the device table into the IOMMU hardware */ 278 - static void __init iommu_set_device_table(struct amd_iommu *iommu) 278 + static void iommu_set_device_table(struct amd_iommu *iommu) 279 279 { 280 280 u64 entry; 281 281
+45 -14
drivers/iommu/omap-iommu-debug.c
··· 44 44 static ssize_t debug_read_regs(struct file *file, char __user *userbuf, 45 45 size_t count, loff_t *ppos) 46 46 { 47 - struct omap_iommu *obj = file->private_data; 47 + struct device *dev = file->private_data; 48 + struct omap_iommu *obj = dev_to_omap_iommu(dev); 48 49 char *p, *buf; 49 50 ssize_t bytes; 50 51 ··· 68 67 static ssize_t debug_read_tlb(struct file *file, char __user *userbuf, 69 68 size_t count, loff_t *ppos) 70 69 { 71 - struct omap_iommu *obj = file->private_data; 70 + struct device *dev = file->private_data; 71 + struct omap_iommu *obj = dev_to_omap_iommu(dev); 72 72 char *p, *buf; 73 73 ssize_t bytes, rest; 74 74 ··· 99 97 struct iotlb_entry e; 100 98 struct cr_regs cr; 101 99 int err; 102 - struct omap_iommu *obj = file->private_data; 100 + struct device *dev = file->private_data; 101 + struct omap_iommu *obj = dev_to_omap_iommu(dev); 103 102 char buf[MAXCOLUMN], *p = buf; 104 103 105 104 count = min(count, sizeof(buf)); ··· 187 184 static ssize_t debug_read_pagetable(struct file *file, char __user *userbuf, 188 185 size_t count, loff_t *ppos) 189 186 { 190 - struct omap_iommu *obj = file->private_data; 187 + struct device *dev = file->private_data; 188 + struct omap_iommu *obj = dev_to_omap_iommu(dev); 191 189 char *p, *buf; 192 190 size_t bytes; 193 191 ··· 216 212 static ssize_t debug_read_mmap(struct file *file, char __user *userbuf, 217 213 size_t count, loff_t *ppos) 218 214 { 219 - struct omap_iommu *obj = file->private_data; 215 + struct device *dev = file->private_data; 216 + struct omap_iommu *obj = dev_to_omap_iommu(dev); 220 217 char *p, *buf; 221 218 struct iovm_struct *tmp; 222 219 int uninitialized_var(i); ··· 259 254 static ssize_t debug_read_mem(struct file *file, char __user *userbuf, 260 255 size_t count, loff_t *ppos) 261 256 { 262 - struct omap_iommu *obj = file->private_data; 257 + struct device *dev = file->private_data; 263 258 char *p, *buf; 264 259 struct iovm_struct *area; 265 260 ssize_t bytes; ··· 273 268 
274 269 mutex_lock(&iommu_debug_lock); 275 270 276 - area = omap_find_iovm_area(obj, (u32)ppos); 277 - if (IS_ERR(area)) { 271 + area = omap_find_iovm_area(dev, (u32)ppos); 272 + if (!area) { 278 273 bytes = -EINVAL; 279 274 goto err_out; 280 275 } ··· 292 287 static ssize_t debug_write_mem(struct file *file, const char __user *userbuf, 293 288 size_t count, loff_t *ppos) 294 289 { 295 - struct omap_iommu *obj = file->private_data; 290 + struct device *dev = file->private_data; 296 291 struct iovm_struct *area; 297 292 char *p, *buf; 298 293 ··· 310 305 goto err_out; 311 306 } 312 307 313 - area = omap_find_iovm_area(obj, (u32)ppos); 314 - if (IS_ERR(area)) { 308 + area = omap_find_iovm_area(dev, (u32)ppos); 309 + if (!area) { 315 310 count = -EINVAL; 316 311 goto err_out; 317 312 } ··· 355 350 { \ 356 351 struct dentry *dent; \ 357 352 dent = debugfs_create_file(#attr, mode, parent, \ 358 - obj, &debug_##attr##_fops); \ 353 + dev, &debug_##attr##_fops); \ 359 354 if (!dent) \ 360 355 return -ENOMEM; \ 361 356 } ··· 367 362 { 368 363 struct platform_device *pdev = to_platform_device(dev); 369 364 struct omap_iommu *obj = platform_get_drvdata(pdev); 365 + struct omap_iommu_arch_data *arch_data; 370 366 struct dentry *d, *parent; 371 367 372 368 if (!obj || !obj->dev) 373 369 return -EINVAL; 374 370 371 + arch_data = kzalloc(sizeof(*arch_data), GFP_KERNEL); 372 + if (!arch_data) 373 + return -ENOMEM; 374 + 375 + arch_data->iommu_dev = obj; 376 + 377 + dev->archdata.iommu = arch_data; 378 + 375 379 d = debugfs_create_dir(obj->name, iommu_debug_root); 376 380 if (!d) 377 - return -ENOMEM; 381 + goto nomem; 378 382 parent = d; 379 383 380 384 d = debugfs_create_u8("nr_tlb_entries", 400, parent, 381 385 (u8 *)&obj->nr_tlb_entries); 382 386 if (!d) 383 - return -ENOMEM; 387 + goto nomem; 384 388 385 389 DEBUG_ADD_FILE_RO(ver); 386 390 DEBUG_ADD_FILE_RO(regs); ··· 397 383 DEBUG_ADD_FILE(pagetable); 398 384 DEBUG_ADD_FILE_RO(mmap); 399 385 DEBUG_ADD_FILE(mem); 386 + 387 + 
return 0; 388 + 389 + nomem: 390 + kfree(arch_data); 391 + return -ENOMEM; 392 + } 393 + 394 + static int iommu_debug_unregister(struct device *dev, void *data) 395 + { 396 + if (!dev->archdata.iommu) 397 + return 0; 398 + 399 + kfree(dev->archdata.iommu); 400 + 401 + dev->archdata.iommu = NULL; 400 402 401 403 return 0; 402 404 } ··· 441 411 static void __exit iommu_debugfs_exit(void) 442 412 { 443 413 debugfs_remove_recursive(iommu_debug_root); 414 + omap_foreach_iommu_device(NULL, iommu_debug_unregister); 444 415 } 445 416 module_exit(iommu_debugfs_exit) 446 417
+2 -1
drivers/iommu/omap-iommu.c
···
1223 1223
1224 1224		return platform_driver_register(&omap_iommu_driver);
1225 1225	}
1226      -	module_init(omap_iommu_init);
1226      +	/* must be ready before omap3isp is probed */
1227      +	subsys_initcall(omap_iommu_init);
1227 1228
1228 1229	static void __exit omap_iommu_exit(void)
1229 1230	{
+1 -1
drivers/md/dm-flakey.c
···
 323  323		 * Corrupt successful READs while in down state.
 324  324		 * If flags were specified, only corrupt those that match.
 325  325		 */
 326       -	if (!error && bio_submitted_while_down &&
 326       +	if (fc->corrupt_bio_byte && !error && bio_submitted_while_down &&
 327  327		    (bio_data_dir(bio) == READ) && (fc->corrupt_bio_rw == READ) &&
 328  328		    all_corrupt_bio_flags_match(bio, fc))
 329  329			corrupt_bio_data(bio, fc);
+16 -7
drivers/md/dm-io.c
··· 296 296 unsigned offset; 297 297 unsigned num_bvecs; 298 298 sector_t remaining = where->count; 299 + struct request_queue *q = bdev_get_queue(where->bdev); 300 + sector_t discard_sectors; 299 301 300 302 /* 301 303 * where->count may be zero if rw holds a flush and we need to ··· 307 305 /* 308 306 * Allocate a suitably sized-bio. 309 307 */ 310 - num_bvecs = dm_sector_div_up(remaining, 311 - (PAGE_SIZE >> SECTOR_SHIFT)); 312 - num_bvecs = min_t(int, bio_get_nr_vecs(where->bdev), num_bvecs); 308 + if (rw & REQ_DISCARD) 309 + num_bvecs = 1; 310 + else 311 + num_bvecs = min_t(int, bio_get_nr_vecs(where->bdev), 312 + dm_sector_div_up(remaining, (PAGE_SIZE >> SECTOR_SHIFT))); 313 + 313 314 bio = bio_alloc_bioset(GFP_NOIO, num_bvecs, io->client->bios); 314 315 bio->bi_sector = where->sector + (where->count - remaining); 315 316 bio->bi_bdev = where->bdev; ··· 320 315 bio->bi_destructor = dm_bio_destructor; 321 316 store_io_and_region_in_bio(bio, io, region); 322 317 323 - /* 324 - * Try and add as many pages as possible. 325 - */ 326 - while (remaining) { 318 + if (rw & REQ_DISCARD) { 319 + discard_sectors = min_t(sector_t, q->limits.max_discard_sectors, remaining); 320 + bio->bi_size = discard_sectors << SECTOR_SHIFT; 321 + remaining -= discard_sectors; 322 + } else while (remaining) { 323 + /* 324 + * Try and add as many pages as possible. 325 + */ 327 326 dp->get_page(dp, &page, &len, &offset); 328 327 len = min(len, to_bytes(remaining)); 329 328 if (!bio_add_page(bio, page, len, offset))
+1 -1
drivers/md/dm-ioctl.c
···
1437 1437
1438 1438		if (!argc) {
1439 1439			DMWARN("Empty message received.");
1440      -		goto out;
1440      +		goto out_argv;
1441 1441		}
1442 1442
1443 1443		table = dm_get_live_table(md);
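The dm-ioctl hunk above retargets an error path from `out` to `out_argv` so the argument vector is freed before returning. The goto-unwind idiom it relies on can be sketched in plain C; everything below (names, the allocation counter, return codes) is illustrative, not dm-ioctl's actual code:

```c
#include <assert.h>
#include <stdlib.h>

static int allocs_outstanding;

static char **get_args(void)
{
	char **p = malloc(sizeof(char *));
	if (p)
		allocs_outstanding++;
	return p;
}

static void free_args(char **argv)
{
	allocs_outstanding--;
	free(argv);
}

/*
 * Sketch of the idiom: once argv is live, every failure path must
 * jump to a label that unwinds it.  Jumping to a later label (the
 * bug fixed above) would leak the allocation.
 */
static int do_message(int argc)
{
	int r = 0;
	char **argv = get_args();

	if (!argv)
		return -1;

	if (!argc) {
		r = -22;		/* -EINVAL */
		goto out_argv;		/* the fix: free argv on this path too */
	}

	/* ... normal message handling would go here ... */

out_argv:
	free_args(argv);
	return r;
}
```

The same pattern generalizes: cleanup labels are stacked in reverse order of acquisition, and each failure jumps to the label matching what has been acquired so far.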
+11 -6
drivers/md/dm-raid.c
··· 668 668 return ret; 669 669 670 670 sb = page_address(rdev->sb_page); 671 - if (sb->magic != cpu_to_le32(DM_RAID_MAGIC)) { 671 + 672 + /* 673 + * Two cases that we want to write new superblocks and rebuild: 674 + * 1) New device (no matching magic number) 675 + * 2) Device specified for rebuild (!In_sync w/ offset == 0) 676 + */ 677 + if ((sb->magic != cpu_to_le32(DM_RAID_MAGIC)) || 678 + (!test_bit(In_sync, &rdev->flags) && !rdev->recovery_offset)) { 672 679 super_sync(rdev->mddev, rdev); 673 680 674 681 set_bit(FirstUse, &rdev->flags); ··· 752 745 */ 753 746 rdev_for_each(r, t, mddev) { 754 747 if (!test_bit(In_sync, &r->flags)) { 755 - if (!test_bit(FirstUse, &r->flags)) 756 - DMERR("Superblock area of " 757 - "rebuild device %d should have been " 758 - "cleared.", r->raid_disk); 759 - set_bit(FirstUse, &r->flags); 748 + DMINFO("Device %d specified for rebuild: " 749 + "Clearing superblock", r->raid_disk); 760 750 rebuilds++; 761 751 } else if (test_bit(FirstUse, &r->flags)) 762 752 new_devs++; ··· 975 971 976 972 INIT_WORK(&rs->md.event_work, do_table_event); 977 973 ti->private = rs; 974 + ti->num_flush_requests = 1; 978 975 979 976 mutex_lock(&rs->md.reconfig_mutex); 980 977 ret = md_run(&rs->md);
+20 -5
drivers/md/dm-thin-metadata.c
··· 385 385 data_sm = dm_sm_disk_create(tm, nr_blocks); 386 386 if (IS_ERR(data_sm)) { 387 387 DMERR("sm_disk_create failed"); 388 + dm_tm_unlock(tm, sblock); 388 389 r = PTR_ERR(data_sm); 389 390 goto bad; 390 391 } ··· 790 789 return 0; 791 790 } 792 791 792 + /* 793 + * __open_device: Returns @td corresponding to device with id @dev, 794 + * creating it if @create is set and incrementing @td->open_count. 795 + * On failure, @td is undefined. 796 + */ 793 797 static int __open_device(struct dm_pool_metadata *pmd, 794 798 dm_thin_id dev, int create, 795 799 struct dm_thin_device **td) ··· 805 799 struct disk_device_details details_le; 806 800 807 801 /* 808 - * Check the device isn't already open. 802 + * If the device is already open, return it. 809 803 */ 810 804 list_for_each_entry(td2, &pmd->thin_devices, list) 811 805 if (td2->id == dev) { 806 + /* 807 + * May not create an already-open device. 808 + */ 809 + if (create) 810 + return -EEXIST; 811 + 812 812 td2->open_count++; 813 813 *td = td2; 814 814 return 0; ··· 829 817 if (r != -ENODATA || !create) 830 818 return r; 831 819 820 + /* 821 + * Create new device. 
822 + */ 832 823 changed = 1; 833 824 details_le.mapped_blocks = 0; 834 825 details_le.transaction_id = cpu_to_le64(pmd->trans_id); ··· 897 882 898 883 r = __open_device(pmd, dev, 1, &td); 899 884 if (r) { 900 - __close_device(td); 901 885 dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root); 902 886 dm_btree_del(&pmd->bl_info, dev_root); 903 887 return r; 904 888 } 905 - td->changed = 1; 906 889 __close_device(td); 907 890 908 891 return r; ··· 980 967 goto bad; 981 968 982 969 r = __set_snapshot_details(pmd, td, origin, pmd->time); 970 + __close_device(td); 971 + 983 972 if (r) 984 973 goto bad; 985 974 986 - __close_device(td); 987 975 return 0; 988 976 989 977 bad: 990 - __close_device(td); 991 978 dm_btree_remove(&pmd->tl_info, pmd->root, &key, &pmd->root); 992 979 dm_btree_remove(&pmd->details_info, pmd->details_root, 993 980 &key, &pmd->details_root); ··· 1224 1211 if (r) 1225 1212 return r; 1226 1213 1214 + td->mapped_blocks--; 1215 + td->changed = 1; 1227 1216 pmd->need_commit = 1; 1228 1217 1229 1218 return 0;
+1 -1
drivers/md/raid1.c
···
 624  624			return 1;
 625  625
 626  626		rcu_read_lock();
 627       -	for (i = 0; i < conf->raid_disks; i++) {
 627       +	for (i = 0; i < conf->raid_disks * 2; i++) {
 628  628			struct md_rdev *rdev = rcu_dereference(conf->mirrors[i].rdev);
 629  629			if (rdev && !test_bit(Faulty, &rdev->flags)) {
 630  630				struct request_queue *q = bdev_get_queue(rdev->bdev);
+27 -11
drivers/md/raid10.c
··· 67 67 68 68 static void allow_barrier(struct r10conf *conf); 69 69 static void lower_barrier(struct r10conf *conf); 70 + static int enough(struct r10conf *conf, int ignore); 70 71 71 72 static void * r10bio_pool_alloc(gfp_t gfp_flags, void *data) 72 73 { ··· 348 347 * wait for the 'master' bio. 349 348 */ 350 349 set_bit(R10BIO_Uptodate, &r10_bio->state); 350 + } else { 351 + /* If all other devices that store this block have 352 + * failed, we want to return the error upwards rather 353 + * than fail the last device. Here we redefine 354 + * "uptodate" to mean "Don't want to retry" 355 + */ 356 + unsigned long flags; 357 + spin_lock_irqsave(&conf->device_lock, flags); 358 + if (!enough(conf, rdev->raid_disk)) 359 + uptodate = 1; 360 + spin_unlock_irqrestore(&conf->device_lock, flags); 361 + } 362 + if (uptodate) { 351 363 raid_end_bio_io(r10_bio); 352 364 rdev_dec_pending(rdev, conf->mddev); 353 365 } else { ··· 2066 2052 "md/raid10:%s: %s: Failing raid device\n", 2067 2053 mdname(mddev), b); 2068 2054 md_error(mddev, conf->mirrors[d].rdev); 2055 + r10_bio->devs[r10_bio->read_slot].bio = IO_BLOCKED; 2069 2056 return; 2070 2057 } 2071 2058 ··· 2120 2105 rdev, 2121 2106 r10_bio->devs[r10_bio->read_slot].addr 2122 2107 + sect, 2123 - s, 0)) 2108 + s, 0)) { 2124 2109 md_error(mddev, rdev); 2110 + r10_bio->devs[r10_bio->read_slot].bio 2111 + = IO_BLOCKED; 2112 + } 2125 2113 break; 2126 2114 } 2127 2115 ··· 2317 2299 * This is all done synchronously while the array is 2318 2300 * frozen. 
2319 2301 */ 2302 + bio = r10_bio->devs[slot].bio; 2303 + bdevname(bio->bi_bdev, b); 2304 + bio_put(bio); 2305 + r10_bio->devs[slot].bio = NULL; 2306 + 2320 2307 if (mddev->ro == 0) { 2321 2308 freeze_array(conf); 2322 2309 fix_read_error(conf, mddev, r10_bio); 2323 2310 unfreeze_array(conf); 2324 - } 2311 + } else 2312 + r10_bio->devs[slot].bio = IO_BLOCKED; 2313 + 2325 2314 rdev_dec_pending(rdev, mddev); 2326 2315 2327 - bio = r10_bio->devs[slot].bio; 2328 - bdevname(bio->bi_bdev, b); 2329 - r10_bio->devs[slot].bio = 2330 - mddev->ro ? IO_BLOCKED : NULL; 2331 2316 read_more: 2332 2317 rdev = read_balance(conf, r10_bio, &max_sectors); 2333 2318 if (rdev == NULL) { ··· 2339 2318 mdname(mddev), b, 2340 2319 (unsigned long long)r10_bio->sector); 2341 2320 raid_end_bio_io(r10_bio); 2342 - bio_put(bio); 2343 2321 return; 2344 2322 } 2345 2323 2346 2324 do_sync = (r10_bio->master_bio->bi_rw & REQ_SYNC); 2347 - if (bio) 2348 - bio_put(bio); 2349 2325 slot = r10_bio->read_slot; 2350 2326 printk_ratelimited( 2351 2327 KERN_ERR ··· 2378 2360 mbio->bi_phys_segments++; 2379 2361 spin_unlock_irq(&conf->device_lock); 2380 2362 generic_make_request(bio); 2381 - bio = NULL; 2382 2363 2383 2364 r10_bio = mempool_alloc(conf->r10bio_pool, 2384 2365 GFP_NOIO); ··· 3260 3243 disk->rdev = rdev; 3261 3244 } 3262 3245 3263 - disk->rdev = rdev; 3264 3246 disk_stack_limits(mddev->gendisk, rdev->bdev, 3265 3247 rdev->data_offset << 9); 3266 3248 /* as we don't honour merge_bvec_fn, we must never risk
+3 -2
drivers/mfd/ab8500-core.c
···
 956  956		return ret;
 957  957
 958  958	out_freeirq:
 959       -	if (ab8500->irq_base) {
 959       +	if (ab8500->irq_base)
 960  960			free_irq(ab8500->irq, ab8500);
 961  961	out_removeirq:
 962       +	if (ab8500->irq_base)
 962  963			ab8500_irq_remove(ab8500);
 963       -	}
 964       +
 964  965		return ret;
 965  966	}
 966  967
+1 -1
drivers/mfd/mfd-core.c
···
 123  123	}
 124  124
 125  125		if (!cell->ignore_resource_conflicts) {
 126       -		ret = acpi_check_resource_conflict(res);
 126       +		ret = acpi_check_resource_conflict(&res[r]);
 127  127			if (ret)
 128  128				goto fail_res;
 129  129		}
+1 -1
drivers/mfd/s5m-core.c
···
 105  105		s5m87xx->rtc = i2c_new_dummy(i2c->adapter, RTC_I2C_ADDR);
 106  106		i2c_set_clientdata(s5m87xx->rtc, s5m87xx);
 107  107
 108       -	if (pdata->cfg_pmic_irq)
 108       +	if (pdata && pdata->cfg_pmic_irq)
 109  109			pdata->cfg_pmic_irq();
 110  110
 111  111		s5m_irq_init(s5m87xx);
+1 -1
drivers/mfd/tps65910.c
···
 168  168			goto err;
 169  169
 170  170		init_data->irq = pmic_plat_data->irq;
 171       -	init_data->irq_base = pmic_plat_data->irq;
 171       +	init_data->irq_base = pmic_plat_data->irq_base;
 172  172
 173  173		tps65910_gpio_init(tps65910, pmic_plat_data->gpio_base);
 174  174
+1 -1
drivers/mfd/tps65912-core.c
···
 151  151			goto err;
 152  152
 153  153		init_data->irq = pmic_plat_data->irq;
 154       -	init_data->irq_base = pmic_plat_data->irq;
 154       +	init_data->irq_base = pmic_plat_data->irq_base;
 155  155		ret = tps65912_irq_init(tps65912, init_data->irq, init_data);
 156  156		if (ret < 0)
 157  157			goto err;
-1
drivers/mfd/wm8350-irq.c
···
 496  496
 497  497		mutex_init(&wm8350->irq_lock);
 498  498		wm8350->chip_irq = irq;
 499       -	wm8350->irq_base = pdata->irq_base;
 500  499
 501  500		if (pdata && pdata->irq_base > 0)
 502  501			irq_base = pdata->irq_base;
+14
drivers/mfd/wm8994-core.c
··· 256 256 break; 257 257 } 258 258 259 + switch (wm8994->type) { 260 + case WM1811: 261 + ret = wm8994_reg_read(wm8994, WM8994_ANTIPOP_2); 262 + if (ret < 0) { 263 + dev_err(dev, "Failed to read jackdet: %d\n", ret); 264 + } else if (ret & WM1811_JACKDET_MODE_MASK) { 265 + dev_dbg(dev, "CODEC still active, ignoring suspend\n"); 266 + return 0; 267 + } 268 + break; 269 + default: 270 + break; 271 + } 272 + 259 273 /* Disable LDO pulldowns while the device is suspended if we 260 274 * don't know that something will be driving them. */ 261 275 if (!wm8994->ldo_ena_always_driven)
+1
drivers/mfd/wm8994-regmap.c
···
 806  806		case WM8994_DC_SERVO_2:
 807  807		case WM8994_DC_SERVO_READBACK:
 808  808		case WM8994_DC_SERVO_4:
 809       +	case WM8994_DC_SERVO_4E:
 809  810		case WM8994_ANALOGUE_HP_1:
 810  811		case WM8958_MIC_DETECT_1:
 811  812		case WM8958_MIC_DETECT_2:
+2 -2
drivers/misc/c2port/core.c
···
 984  984			" - (C) 2007 Rodolfo Giometti\n");
 985  985
 986  986		c2port_class = class_create(THIS_MODULE, "c2port");
 987       -	if (!c2port_class) {
 987       +	if (IS_ERR(c2port_class)) {
 988  988			printk(KERN_ERR "c2port: failed to allocate class\n");
 989       -		return -ENOMEM;
 989       +		return PTR_ERR(c2port_class);
 990  990		}
 991  991		c2port_class->dev_attrs = c2port_attrs;
 992  992
+3
drivers/mmc/core/core.c
···
2068 2068		 */
2069 2069		mmc_hw_reset_for_init(host);
2070 2070
2071      +	/* Initialization should be done at 3.3 V I/O voltage. */
2072      +	mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0);
2073      +
2071 2074		/*
2072 2075		 * sdio_reset sends CMD52 to reset card. Since we do not know
2073 2076		 * if the card is being re-initialized, just send it. CMD52
+2 -2
drivers/mmc/core/host.c
···
 238  238		/* Hold MCI clock for 8 cycles by default */
 239  239		host->clk_delay = 8;
 240  240		/*
 241       -	 * Default clock gating delay is 200ms.
 241       +	 * Default clock gating delay is 0ms to avoid wasting power.
 242  242		 * This value can be tuned by writing into sysfs entry.
 243  243		 */
 244       -	host->clkgate_delay = 200;
 244       +	host->clkgate_delay = 0;
 245  245		host->clk_gated = false;
 246  246		INIT_DELAYED_WORK(&host->clk_gate_work, mmc_host_clk_gate_work);
 247  247		spin_lock_init(&host->clk_lock);
+3
drivers/mmc/core/mmc.c
···
 816  816		if (!mmc_host_is_spi(host))
 817  817			mmc_set_bus_mode(host, MMC_BUSMODE_OPENDRAIN);
 818  818
 819       +	/* Initialization should be done at 3.3 V I/O voltage. */
 820       +	mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0);
 821       +
 819  822		/*
 820  823		 * Since we're changing the OCR value, we seem to
 821  824		 * need to tell some cards to go back to the idle
+3 -5
drivers/mmc/core/sd.c
··· 911 911 BUG_ON(!host); 912 912 WARN_ON(!host->claimed); 913 913 914 + /* The initialization should be done at 3.3 V I/O voltage. */ 915 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 916 + 914 917 err = mmc_sd_get_cid(host, ocr, cid, &rocr); 915 918 if (err) 916 919 return err; ··· 1158 1155 1159 1156 BUG_ON(!host); 1160 1157 WARN_ON(!host->claimed); 1161 - 1162 - /* Make sure we are at 3.3V signalling voltage */ 1163 - err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, false); 1164 - if (err) 1165 - return err; 1166 1158 1167 1159 /* Disable preset value enable if already set since last time */ 1168 1160 if (host->ops->enable_preset_value) {
+8
drivers/mmc/core/sdio.c
··· 585 585 * Inform the card of the voltage 586 586 */ 587 587 if (!powered_resume) { 588 + /* The initialization should be done at 3.3 V I/O voltage. */ 589 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 590 + 588 591 err = mmc_send_io_op_cond(host, host->ocr, &ocr); 589 592 if (err) 590 593 goto err; ··· 999 996 * With these steps taken, mmc_select_voltage() is also required to 1000 997 * restore the correct voltage setting of the card. 1001 998 */ 999 + 1000 + /* The initialization should be done at 3.3 V I/O voltage. */ 1001 + if (!mmc_card_keep_power(host)) 1002 + mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_330, 0); 1003 + 1002 1004 sdio_reset(host); 1003 1005 mmc_go_idle(host); 1004 1006 mmc_send_if_cond(host, host->ocr_avail);
+10 -11
drivers/mmc/host/atmel-mci.c
··· 1948 1948 } 1949 1949 } 1950 1950 1951 - static void atmci_configure_dma(struct atmel_mci *host) 1951 + static bool atmci_configure_dma(struct atmel_mci *host) 1952 1952 { 1953 1953 struct mci_platform_data *pdata; 1954 1954 1955 1955 if (host == NULL) 1956 - return; 1956 + return false; 1957 1957 1958 1958 pdata = host->pdev->dev.platform_data; 1959 1959 ··· 1970 1970 host->dma.chan = 1971 1971 dma_request_channel(mask, atmci_filter, pdata->dma_slave); 1972 1972 } 1973 - if (!host->dma.chan) 1974 - dev_notice(&host->pdev->dev, "DMA not available, using PIO\n"); 1975 - else 1973 + if (!host->dma.chan) { 1974 + dev_warn(&host->pdev->dev, "no DMA channel available\n"); 1975 + return false; 1976 + } else { 1976 1977 dev_info(&host->pdev->dev, 1977 1978 "Using %s for DMA transfers\n", 1978 1979 dma_chan_name(host->dma.chan)); 1980 + return true; 1981 + } 1979 1982 } 1980 1983 1981 1984 static inline unsigned int atmci_get_version(struct atmel_mci *host) ··· 2088 2085 2089 2086 /* Get MCI capabilities and set operations according to it */ 2090 2087 atmci_get_cap(host); 2091 - if (host->caps.has_dma) { 2092 - dev_info(&pdev->dev, "using DMA\n"); 2088 + if (host->caps.has_dma && atmci_configure_dma(host)) { 2093 2089 host->prepare_data = &atmci_prepare_data_dma; 2094 2090 host->submit_data = &atmci_submit_data_dma; 2095 2091 host->stop_transfer = &atmci_stop_transfer_dma; ··· 2098 2096 host->submit_data = &atmci_submit_data_pdc; 2099 2097 host->stop_transfer = &atmci_stop_transfer_pdc; 2100 2098 } else { 2101 - dev_info(&pdev->dev, "no DMA, no PDC\n"); 2099 + dev_info(&pdev->dev, "using PIO\n"); 2102 2100 host->prepare_data = &atmci_prepare_data; 2103 2101 host->submit_data = &atmci_submit_data; 2104 2102 host->stop_transfer = &atmci_stop_transfer; 2105 2103 } 2106 - 2107 - if (host->caps.has_dma) 2108 - atmci_configure_dma(host); 2109 2104 2110 2105 platform_set_drvdata(pdev, host); 2111 2106
+4 -3
drivers/mmc/host/mmci.c
···
1271 1271		/*
1272 1272		 * Block size can be up to 2048 bytes, but must be a power of two.
1273 1273		 */
1274      -	mmc->max_blk_size = 2048;
1274      +	mmc->max_blk_size = 1 << 11;
1275 1275
1276 1276		/*
1277      -	 * No limit on the number of blocks transferred.
1277      +	 * Limit the number of blocks transferred so that we don't overflow
1278      +	 * the maximum request size.
1278 1279		 */
1279      -	mmc->max_blk_count = mmc->max_req_size;
1280      +	mmc->max_blk_count = mmc->max_req_size >> 11;
1280 1281
1281 1282		spin_lock_init(&host->lock);
1282 1283
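The mmci hunk above caps `max_blk_count` at `max_req_size >> 11`, so the 2048-byte (`1 << 11`) maximum block size times the block count can no longer exceed the maximum request size. The invariant can be checked in isolation; the values below are illustrative, not taken from the driver:

```c
#include <assert.h>

#define BLK_SHIFT	11			/* 2048-byte blocks */
#define MAX_BLK_SIZE	(1u << BLK_SHIFT)

/*
 * Largest whole-block count that still fits in max_req_size:
 * the corrected expression from the hunk above.
 */
static unsigned int max_blk_count(unsigned int max_req_size)
{
	return max_req_size >> BLK_SHIFT;
}
```

For any `max_req_size`, `MAX_BLK_SIZE * max_blk_count(max_req_size)` rounds down to a whole number of blocks and never exceeds the request size, which is exactly what the old `max_blk_count = max_req_size` assignment failed to guarantee.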
+3 -2
drivers/mmc/host/sdhci-esdhc-imx.c
···
 269  269			imx_data->scratchpad = val;
 270  270			return;
 271  271		case SDHCI_COMMAND:
 272       -		if ((host->cmd->opcode == MMC_STOP_TRANSMISSION)
 273       -			&& (imx_data->flags & ESDHC_FLAG_MULTIBLK_NO_INT))
 272       +		if ((host->cmd->opcode == MMC_STOP_TRANSMISSION ||
 273       +		     host->cmd->opcode == MMC_SET_BLOCK_COUNT) &&
 274       +		    (imx_data->flags & ESDHC_FLAG_MULTIBLK_NO_INT))
 274  275				val |= SDHCI_CMD_ABORTCMD;
 275  276
 276  277			if (is_imx6q_usdhc(imx_data)) {
+1 -1
drivers/net/caif/caif_hsi.c
···
 978  978		dev->netdev_ops = &cfhsi_ops;
 979  979		dev->type = ARPHRD_CAIF;
 980  980		dev->flags = IFF_POINTOPOINT | IFF_NOARP;
 981       -	dev->mtu = CFHSI_MAX_PAYLOAD_SZ;
 981       +	dev->mtu = CFHSI_MAX_CAIF_FRAME_SZ;
 982  982		dev->tx_queue_len = 0;
 983  983		dev->destructor = free_netdev;
 984  984		skb_queue_head_init(&cfhsi->qhead);
+1 -1
drivers/net/ethernet/atheros/atl1c/atl1c_main.c
···
1710 1710			"atl1c hardware error (status = 0x%x)\n",
1711 1711			status & ISR_ERROR);
1712 1712		/* reset MAC */
1713      -	adapter->work_event |= ATL1C_WORK_EVENT_RESET;
1713      +	set_bit(ATL1C_WORK_EVENT_RESET, &adapter->work_event);
1714 1714		schedule_work(&adapter->common_task);
1715 1715		return IRQ_HANDLED;
1716 1716	}
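The atl1c hunk above replaces a plain `|=` on `work_event` with `set_bit()`, whose read-modify-write is atomic against the worker concurrently clearing bits; a non-atomic `|=` can lose either side's update. A rough user-space analogue using C11 atomics (the names and bit layout below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdatomic.h>

#define WORK_EVENT_RESET	0

static atomic_ulong work_event;

/*
 * Analogue of the kernel's set_bit(): the fetch-or is a single
 * atomic read-modify-write, so a concurrent updater cannot be
 * overwritten the way a plain "word |= mask" can.
 */
static void set_bit_atomic(int nr, atomic_ulong *addr)
{
	atomic_fetch_or(addr, 1ul << nr);
}

/* Analogue of test_and_clear_bit(): returns the bit's old value. */
static int test_and_clear_bit_atomic(int nr, atomic_ulong *addr)
{
	return (int)((atomic_fetch_and(addr, ~(1ul << nr)) >> nr) & 1);
}
```

In the driver, the interrupt handler sets the bit and the work function test-and-clears it; with `|=` the handler could silently undo a clear (or vice versa) that raced between its load and store.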
+26 -25
drivers/net/ethernet/broadcom/tg3.c
··· 5352 5352 } 5353 5353 } 5354 5354 5355 - netdev_completed_queue(tp->dev, pkts_compl, bytes_compl); 5355 + netdev_tx_completed_queue(txq, pkts_compl, bytes_compl); 5356 5356 5357 5357 tnapi->tx_cons = sw_idx; 5358 5358 ··· 6793 6793 } 6794 6794 6795 6795 skb_tx_timestamp(skb); 6796 - netdev_sent_queue(tp->dev, skb->len); 6796 + netdev_tx_sent_queue(txq, skb->len); 6797 6797 6798 6798 /* Packets are ready, update Tx producer idx local and on card. */ 6799 6799 tw32_tx_mbox(tnapi->prodmbox, entry); ··· 7275 7275 7276 7276 dev_kfree_skb_any(skb); 7277 7277 } 7278 + netdev_tx_reset_queue(netdev_get_tx_queue(tp->dev, j)); 7278 7279 } 7279 - netdev_reset_queue(tp->dev); 7280 7280 } 7281 7281 7282 7282 /* Initialize tx/rx rings for packet processing. ··· 7886 7886 return 0; 7887 7887 } 7888 7888 7889 - static struct rtnl_link_stats64 *tg3_get_stats64(struct net_device *, 7890 - struct rtnl_link_stats64 *); 7891 - static struct tg3_ethtool_stats *tg3_get_estats(struct tg3 *, 7892 - struct tg3_ethtool_stats *); 7889 + static void tg3_get_nstats(struct tg3 *, struct rtnl_link_stats64 *); 7890 + static void tg3_get_estats(struct tg3 *, struct tg3_ethtool_stats *); 7893 7891 7894 7892 /* tp->lock is held. */ 7895 7893 static int tg3_halt(struct tg3 *tp, int kind, int silent) ··· 7908 7910 7909 7911 if (tp->hw_stats) { 7910 7912 /* Save the stats across chip resets... 
*/ 7911 - tg3_get_stats64(tp->dev, &tp->net_stats_prev), 7913 + tg3_get_nstats(tp, &tp->net_stats_prev), 7912 7914 tg3_get_estats(tp, &tp->estats_prev); 7913 7915 7914 7916 /* And make sure the next sample is new data */ ··· 9845 9847 return ((u64)val->high << 32) | ((u64)val->low); 9846 9848 } 9847 9849 9848 - static u64 calc_crc_errors(struct tg3 *tp) 9850 + static u64 tg3_calc_crc_errors(struct tg3 *tp) 9849 9851 { 9850 9852 struct tg3_hw_stats *hw_stats = tp->hw_stats; 9851 9853 ··· 9854 9856 GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5701)) { 9855 9857 u32 val; 9856 9858 9857 - spin_lock_bh(&tp->lock); 9858 9859 if (!tg3_readphy(tp, MII_TG3_TEST1, &val)) { 9859 9860 tg3_writephy(tp, MII_TG3_TEST1, 9860 9861 val | MII_TG3_TEST1_CRC_EN); 9861 9862 tg3_readphy(tp, MII_TG3_RXR_COUNTERS, &val); 9862 9863 } else 9863 9864 val = 0; 9864 - spin_unlock_bh(&tp->lock); 9865 9865 9866 9866 tp->phy_crc_errors += val; 9867 9867 ··· 9873 9877 estats->member = old_estats->member + \ 9874 9878 get_stat64(&hw_stats->member) 9875 9879 9876 - static struct tg3_ethtool_stats *tg3_get_estats(struct tg3 *tp, 9877 - struct tg3_ethtool_stats *estats) 9880 + static void tg3_get_estats(struct tg3 *tp, struct tg3_ethtool_stats *estats) 9878 9881 { 9879 9882 struct tg3_ethtool_stats *old_estats = &tp->estats_prev; 9880 9883 struct tg3_hw_stats *hw_stats = tp->hw_stats; 9881 9884 9882 9885 if (!hw_stats) 9883 - return old_estats; 9886 + return; 9884 9887 9885 9888 ESTAT_ADD(rx_octets); 9886 9889 ESTAT_ADD(rx_fragments); ··· 9958 9963 ESTAT_ADD(nic_tx_threshold_hit); 9959 9964 9960 9965 ESTAT_ADD(mbuf_lwm_thresh_hit); 9961 - 9962 - return estats; 9963 9966 } 9964 9967 9965 - static struct rtnl_link_stats64 *tg3_get_stats64(struct net_device *dev, 9966 - struct rtnl_link_stats64 *stats) 9968 + static void tg3_get_nstats(struct tg3 *tp, struct rtnl_link_stats64 *stats) 9967 9969 { 9968 - struct tg3 *tp = netdev_priv(dev); 9969 9970 struct rtnl_link_stats64 *old_stats = &tp->net_stats_prev; 
9970 9971 struct tg3_hw_stats *hw_stats = tp->hw_stats; 9971 - 9972 - if (!hw_stats) 9973 - return old_stats; 9974 9972 9975 9973 stats->rx_packets = old_stats->rx_packets + 9976 9974 get_stat64(&hw_stats->rx_ucast_packets) + ··· 10007 10019 get_stat64(&hw_stats->tx_carrier_sense_errors); 10008 10020 10009 10021 stats->rx_crc_errors = old_stats->rx_crc_errors + 10010 - calc_crc_errors(tp); 10022 + tg3_calc_crc_errors(tp); 10011 10023 10012 10024 stats->rx_missed_errors = old_stats->rx_missed_errors + 10013 10025 get_stat64(&hw_stats->rx_discards); 10014 10026 10015 10027 stats->rx_dropped = tp->rx_dropped; 10016 10028 stats->tx_dropped = tp->tx_dropped; 10017 - 10018 - return stats; 10019 10029 } 10020 10030 10021 10031 static inline u32 calc_crc(unsigned char *buf, int len) ··· 15393 15407 ec->tx_coalesce_usecs_irq = 0; 15394 15408 ec->stats_block_coalesce_usecs = 0; 15395 15409 } 15410 + } 15411 + 15412 + static struct rtnl_link_stats64 *tg3_get_stats64(struct net_device *dev, 15413 + struct rtnl_link_stats64 *stats) 15414 + { 15415 + struct tg3 *tp = netdev_priv(dev); 15416 + 15417 + if (!tp->hw_stats) 15418 + return &tp->net_stats_prev; 15419 + 15420 + spin_lock_bh(&tp->lock); 15421 + tg3_get_nstats(tp, stats); 15422 + spin_unlock_bh(&tp->lock); 15423 + 15424 + return stats; 15396 15425 } 15397 15426 15398 15427 static const struct net_device_ops tg3_netdev_ops = {
+2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
 196  196		CH_DEVICE(0x4408, 4),
 197  197		CH_DEVICE(0x4409, 4),
 198  198		CH_DEVICE(0x440a, 4),
 199       +	CH_DEVICE(0x440d, 4),
 200       +	CH_DEVICE(0x440e, 4),
 199  201		{ 0, }
 200  202	};
 201  203
+2
drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
···
2892 2892		CH_DEVICE(0x4808, 0),	/* T420-cx */
2893 2893		CH_DEVICE(0x4809, 0),	/* T420-bt */
2894 2894		CH_DEVICE(0x480a, 0),	/* T404-bt */
2895      +	CH_DEVICE(0x480d, 0),	/* T480-cr */
2896      +	CH_DEVICE(0x480e, 0),	/* T440-lp-cr */
2895 2897		{ 0, }
2896 2898	};
2897 2899
+1 -1
drivers/net/ethernet/cisco/enic/enic.h
···
  94   94		u32 rx_coalesce_usecs;
  95   95		u32 tx_coalesce_usecs;
  96   96	#ifdef CONFIG_PCI_IOV
  97        -	u32 num_vfs;
  97        +	u16 num_vfs;
  98   98	#endif
  99   99		struct enic_port_profile *pp;
 100  100
+1 -1
drivers/net/ethernet/cisco/enic/enic_main.c
···
2370 2370		pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
2371 2371		if (pos) {
2372 2372			pci_read_config_word(pdev, pos + PCI_SRIOV_TOTAL_VF,
2373      -			(u16 *)&enic->num_vfs);
2373      +			&enic->num_vfs);
2374 2374			if (enic->num_vfs) {
2375 2375				err = pci_enable_sriov(pdev, enic->num_vfs);
2376 2376				if (err) {
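The enic hunks above size `num_vfs` as `u16` rather than casting `&enic->num_vfs` (a `u32`) to `u16 *` for `pci_read_config_word()`. A 16-bit store through such a cast fills only half of a 32-bit field, leaving stale bytes in the other half, and on big-endian it lands in the wrong half entirely (it is also an aliasing violation). A user-space sketch of the hazard; `read_config_word` below is a made-up stand-in, not the PCI API:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a config read: writes exactly 16 bits, like
 * pci_read_config_word() does through its u16 * argument. */
static void read_config_word(uint16_t *out)
{
	*out = 0x1234;
}

/* The pre-fix pattern: a 32-bit field written through a u16 cast.
 * Two of the four bytes keep whatever was there before. */
static uint32_t buggy_read(void)
{
	uint32_t num_vfs = 0xFFFFFFFFu;		/* stale contents */
	read_config_word((uint16_t *)&num_vfs);	/* fills only 2 bytes */
	return num_vfs;
}

/* The fix: size the field to match the access. */
static uint16_t fixed_read(void)
{
	uint16_t num_vfs;
	read_config_word(&num_vfs);
	return num_vfs;
}
```

On either endianness `buggy_read()` returns a value contaminated by the stale `0xFFFF` half, while `fixed_read()` returns exactly what the device reported.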
+3 -1
drivers/net/ethernet/ibm/ehea/ehea_main.c
···
 336  336		stats->tx_bytes = tx_bytes;
 337  337		stats->rx_packets = rx_packets;
 338  338
 339       -	return &port->stats;
 339       +	stats->multicast = port->stats.multicast;
 340       +	stats->rx_errors = port->stats.rx_errors;
 341       +	return stats;
 340  342	}
 341  343
 342  344	static void ehea_update_stats(struct work_struct *work)
-5
drivers/net/ethernet/mellanox/mlx4/qp.c
···
 151  151			context->log_page_size = mtt->page_shift - MLX4_ICM_PAGE_SHIFT;
 152  152		}
 153  153
 154       -	port = ((context->pri_path.sched_queue >> 6) & 1) + 1;
 155       -	if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH)
 156       -		context->pri_path.sched_queue = (context->pri_path.sched_queue &
 157       -						 0xc3);
 158       -
 159  154		*(__be32 *) mailbox->buf = cpu_to_be32(optpar);
 160  155		memcpy(mailbox->buf + 8, context, sizeof *context);
 161  156
+1 -2
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
···
2255 2255
2256 2256		if (vhcr->op_modifier == 0) {
2257 2257			err = handle_resize(dev, slave, vhcr, inbox, outbox, cmd, cq);
2258      -		if (err)
2259      -			goto ex_put;
2258      +		goto ex_put;
2260 2259		}
2261 2260
2262 2261		err = mlx4_DMA_wrapper(dev, slave, vhcr, inbox, outbox, cmd);
+8 -7
drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_param.c
··· 321 321 pr_debug("AutoNeg specified along with Speed or Duplex, AutoNeg parameter ignored\n"); 322 322 hw->phy.autoneg_advertised = opt.def; 323 323 } else { 324 - hw->phy.autoneg_advertised = AutoNeg; 325 - pch_gbe_validate_option( 326 - (int *)(&hw->phy.autoneg_advertised), 327 - &opt, adapter); 324 + int tmp = AutoNeg; 325 + 326 + pch_gbe_validate_option(&tmp, &opt, adapter); 327 + hw->phy.autoneg_advertised = tmp; 328 328 } 329 329 } 330 330 ··· 495 495 .arg = { .l = { .nr = (int)ARRAY_SIZE(fc_list), 496 496 .p = fc_list } } 497 497 }; 498 - hw->mac.fc = FlowControl; 499 - pch_gbe_validate_option((int *)(&hw->mac.fc), 500 - &opt, adapter); 498 + int tmp = FlowControl; 499 + 500 + pch_gbe_validate_option(&tmp, &opt, adapter); 501 + hw->mac.fc = tmp; 501 502 } 502 503 503 504 pch_gbe_check_copper_options(adapter);
+1
drivers/net/ethernet/packetengines/Kconfig
···
   4    4
   5    5	config NET_PACKET_ENGINE
   6    6		bool "Packet Engine devices"
   7        +	default y
   7    8		depends on PCI
   8    9		---help---
   9   10		  If you have a network (Ethernet) card belonging to this class, say Y
+2 -3
drivers/net/ethernet/qlogic/qla3xxx.c
··· 3017 3017 (void __iomem *)port_regs; 3018 3018 u32 delay = 10; 3019 3019 int status = 0; 3020 - unsigned long hw_flags = 0; 3021 3020 3022 3021 if (ql_mii_setup(qdev)) 3023 3022 return -1; ··· 3227 3228 value = ql_read_page0_reg(qdev, &port_regs->portStatus); 3228 3229 if (value & PORT_STATUS_IC) 3229 3230 break; 3230 - spin_unlock_irqrestore(&qdev->hw_lock, hw_flags); 3231 + spin_unlock_irq(&qdev->hw_lock); 3231 3232 msleep(500); 3232 - spin_lock_irqsave(&qdev->hw_lock, hw_flags); 3233 + spin_lock_irq(&qdev->hw_lock); 3233 3234 } while (--delay); 3234 3235 3235 3236 if (delay == 0) {
+13
drivers/net/ethernet/realtek/r8169.c
··· 3781 3781 3782 3782 static void rtl_hw_jumbo_enable(struct rtl8169_private *tp) 3783 3783 { 3784 + void __iomem *ioaddr = tp->mmio_addr; 3785 + 3786 + RTL_W8(Cfg9346, Cfg9346_Unlock); 3784 3787 rtl_generic_op(tp, tp->jumbo_ops.enable); 3788 + RTL_W8(Cfg9346, Cfg9346_Lock); 3785 3789 } 3786 3790 3787 3791 static void rtl_hw_jumbo_disable(struct rtl8169_private *tp) 3788 3792 { 3793 + void __iomem *ioaddr = tp->mmio_addr; 3794 + 3795 + RTL_W8(Cfg9346, Cfg9346_Unlock); 3789 3796 rtl_generic_op(tp, tp->jumbo_ops.disable); 3797 + RTL_W8(Cfg9346, Cfg9346_Lock); 3790 3798 } 3791 3799 3792 3800 static void r8168c_hw_jumbo_enable(struct rtl8169_private *tp) ··· 6194 6186 { 6195 6187 struct net_device *dev = pci_get_drvdata(pdev); 6196 6188 struct rtl8169_private *tp = netdev_priv(dev); 6189 + struct device *d = &pdev->dev; 6190 + 6191 + pm_runtime_get_sync(d); 6197 6192 6198 6193 rtl8169_net_suspend(dev); 6199 6194 ··· 6218 6207 pci_wake_from_d3(pdev, true); 6219 6208 pci_set_power_state(pdev, PCI_D3hot); 6220 6209 } 6210 + 6211 + pm_runtime_put_noidle(d); 6221 6212 } 6222 6213 6223 6214 static struct pci_driver rtl8169_pci_driver = {
+2 -2
drivers/net/hyperv/netvsc_drv.c
···
 313  313	static void netvsc_get_drvinfo(struct net_device *net,
 314  314				       struct ethtool_drvinfo *info)
 315  315	{
 316       -	strcpy(info->driver, "hv_netvsc");
 316       +	strcpy(info->driver, KBUILD_MODNAME);
 317  317		strcpy(info->version, HV_DRV_VERSION);
 318  318		strcpy(info->fw_version, "N/A");
 319  319	}
···
 485  485
 486  486	/* The one and only one */
 487  487	static struct hv_driver netvsc_drv = {
 488       -	.name = "netvsc",
 488       +	.name = KBUILD_MODNAME,
 489  489		.id_table = id_table,
 490  490		.probe = netvsc_probe,
 491  491		.remove = netvsc_remove,
+2
drivers/net/usb/usbnet.c
··· 589 589 entry = (struct skb_data *) skb->cb; 590 590 urb = entry->urb; 591 591 592 + spin_unlock_irqrestore(&q->lock, flags); 592 593 // during some PM-driven resume scenarios, 593 594 // these (async) unlinks complete immediately 594 595 retval = usb_unlink_urb (urb); ··· 597 596 netdev_dbg(dev->net, "unlink urb err, %d\n", retval); 598 597 else 599 598 count++; 599 + spin_lock_irqsave(&q->lock, flags); 600 600 } 601 601 spin_unlock_irqrestore (&q->lock, flags); 602 602 return count;
+1 -6
drivers/net/vmxnet3/vmxnet3_drv.c
··· 830 830 ctx->l4_hdr_size = ((struct tcphdr *) 831 831 skb_transport_header(skb))->doff * 4; 832 832 else if (iph->protocol == IPPROTO_UDP) 833 - /* 834 - * Use tcp header size so that bytes to 835 - * be copied are more than required by 836 - * the device. 837 - */ 838 833 ctx->l4_hdr_size = 839 - sizeof(struct tcphdr); 834 + sizeof(struct udphdr); 840 835 else 841 836 ctx->l4_hdr_size = 0; 842 837 } else {
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
··· 70 70 /* 71 71 * Version numbers 72 72 */ 73 - #define VMXNET3_DRIVER_VERSION_STRING "1.1.18.0-k" 73 + #define VMXNET3_DRIVER_VERSION_STRING "1.1.29.0-k" 74 74 75 75 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */ 76 - #define VMXNET3_DRIVER_VERSION_NUM 0x01011200 76 + #define VMXNET3_DRIVER_VERSION_NUM 0x01011D00 77 77 78 78 #if defined(CONFIG_PCI_MSI) 79 79 /* RSS only makes sense if MSI-X is supported. */
+1 -24
drivers/net/wireless/ath/ath9k/ar5008_phy.c
··· 489 489 ATH_ALLOC_BANK(ah->analogBank6Data, ah->iniBank6.ia_rows); 490 490 ATH_ALLOC_BANK(ah->analogBank6TPCData, ah->iniBank6TPC.ia_rows); 491 491 ATH_ALLOC_BANK(ah->analogBank7Data, ah->iniBank7.ia_rows); 492 - ATH_ALLOC_BANK(ah->addac5416_21, 493 - ah->iniAddac.ia_rows * ah->iniAddac.ia_columns); 494 492 ATH_ALLOC_BANK(ah->bank6Temp, ah->iniBank6.ia_rows); 495 493 496 494 return 0; ··· 517 519 ATH_FREE_BANK(ah->analogBank6Data); 518 520 ATH_FREE_BANK(ah->analogBank6TPCData); 519 521 ATH_FREE_BANK(ah->analogBank7Data); 520 - ATH_FREE_BANK(ah->addac5416_21); 521 522 ATH_FREE_BANK(ah->bank6Temp); 522 523 523 524 #undef ATH_FREE_BANK ··· 802 805 if (ah->eep_ops->set_addac) 803 806 ah->eep_ops->set_addac(ah, chan); 804 807 805 - if (AR_SREV_5416_22_OR_LATER(ah)) { 806 - REG_WRITE_ARRAY(&ah->iniAddac, 1, regWrites); 807 - } else { 808 - struct ar5416IniArray temp; 809 - u32 addacSize = 810 - sizeof(u32) * ah->iniAddac.ia_rows * 811 - ah->iniAddac.ia_columns; 812 - 813 - /* For AR5416 2.0/2.1 */ 814 - memcpy(ah->addac5416_21, 815 - ah->iniAddac.ia_array, addacSize); 816 - 817 - /* override CLKDRV value at [row, column] = [31, 1] */ 818 - (ah->addac5416_21)[31 * ah->iniAddac.ia_columns + 1] = 0; 819 - 820 - temp.ia_array = ah->addac5416_21; 821 - temp.ia_columns = ah->iniAddac.ia_columns; 822 - temp.ia_rows = ah->iniAddac.ia_rows; 823 - REG_WRITE_ARRAY(&temp, 1, regWrites); 824 - } 825 - 808 + REG_WRITE_ARRAY(&ah->iniAddac, 1, regWrites); 826 809 REG_WRITE(ah, AR_PHY_ADC_SERIAL_CTL, AR_PHY_SEL_INTERNAL_ADDAC); 827 810 828 811 ENABLE_REGWRITE_BUFFER(ah);
+19
drivers/net/wireless/ath/ath9k/ar9002_hw.c
··· 180 180 INIT_INI_ARRAY(&ah->iniAddac, ar5416Addac, 181 181 ARRAY_SIZE(ar5416Addac), 2); 182 182 } 183 + 184 + /* iniAddac needs to be modified for these chips */ 185 + if (AR_SREV_9160(ah) || !AR_SREV_5416_22_OR_LATER(ah)) { 186 + struct ar5416IniArray *addac = &ah->iniAddac; 187 + u32 size = sizeof(u32) * addac->ia_rows * addac->ia_columns; 188 + u32 *data; 189 + 190 + data = kmalloc(size, GFP_KERNEL); 191 + if (!data) 192 + return; 193 + 194 + memcpy(data, addac->ia_array, size); 195 + addac->ia_array = data; 196 + 197 + if (!AR_SREV_5416_22_OR_LATER(ah)) { 198 + /* override CLKDRV value */ 199 + INI_RA(addac, 31,1) = 0; 200 + } 201 + } 183 202 } 184 203 185 204 /* Support for Japan ch.14 (2484) spread */
-1
drivers/net/wireless/ath/ath9k/hw.h
··· 940 940 u32 *analogBank6Data; 941 941 u32 *analogBank6TPCData; 942 942 u32 *analogBank7Data; 943 - u32 *addac5416_21; 944 943 u32 *bank6Temp; 945 944 946 945 u8 txpower_limit;
+6 -3
drivers/net/wireless/ath/carl9170/tx.c
··· 1234 1234 { 1235 1235 struct ieee80211_sta *sta; 1236 1236 struct carl9170_sta_info *sta_info; 1237 + struct ieee80211_tx_info *tx_info; 1237 1238 1238 1239 rcu_read_lock(); 1239 1240 sta = __carl9170_get_tx_sta(ar, skb); ··· 1242 1241 goto out_rcu; 1243 1242 1244 1243 sta_info = (void *) sta->drv_priv; 1245 - if (unlikely(sta_info->sleeping)) { 1246 - struct ieee80211_tx_info *tx_info; 1244 + tx_info = IEEE80211_SKB_CB(skb); 1247 1245 1246 + if (unlikely(sta_info->sleeping) && 1247 + !(tx_info->flags & (IEEE80211_TX_CTL_POLL_RESPONSE | 1248 + IEEE80211_TX_CTL_CLEAR_PS_FILT))) { 1248 1249 rcu_read_unlock(); 1249 1250 1250 - tx_info = IEEE80211_SKB_CB(skb); 1251 1251 if (tx_info->flags & IEEE80211_TX_CTL_AMPDU) 1252 1252 atomic_dec(&ar->tx_ampdu_upload); 1253 1253 1254 1254 tx_info->flags |= IEEE80211_TX_STAT_TX_FILTERED; 1255 + carl9170_release_dev_space(ar, skb); 1255 1256 carl9170_tx_status(ar, skb, false); 1256 1257 return true; 1257 1258 }
+4 -8
drivers/net/wireless/brcm80211/brcmsmac/ampdu.c
··· 1051 1051 } 1052 1052 /* either retransmit or send bar if ack not recd */ 1053 1053 if (!ack_recd) { 1054 - struct ieee80211_tx_rate *txrate = 1055 - tx_info->status.rates; 1056 - if (retry && (txrate[0].count < (int)retry_limit)) { 1054 + if (retry && (ini->txretry[index] < (int)retry_limit)) { 1057 1055 ini->txretry[index]++; 1058 1056 ini->tx_in_transit--; 1059 1057 /* 1060 1058 * Use high prededence for retransmit to 1061 1059 * give some punch 1062 1060 */ 1063 - /* brcms_c_txq_enq(wlc, scb, p, 1064 - * BRCMS_PRIO_TO_PREC(tid)); */ 1065 1061 brcms_c_txq_enq(wlc, scb, p, 1066 1062 BRCMS_PRIO_TO_HI_PREC(tid)); 1067 1063 } else { ··· 1070 1074 IEEE80211_TX_STAT_AMPDU_NO_BACK; 1071 1075 skb_pull(p, D11_PHY_HDR_LEN); 1072 1076 skb_pull(p, D11_TXH_LEN); 1073 - wiphy_err(wiphy, "%s: BA Timeout, seq %d, in_" 1074 - "transit %d\n", "AMPDU status", seq, 1075 - ini->tx_in_transit); 1077 + BCMMSG(wiphy, 1078 + "BA Timeout, seq %d, in_transit %d\n", 1079 + seq, ini->tx_in_transit); 1076 1080 ieee80211_tx_status_irqsafe(wlc->pub->ieee_hw, 1077 1081 p); 1078 1082 }
+1 -1
drivers/net/wireless/iwlwifi/iwl-agn-lib.c
··· 1240 1240 .flags = CMD_SYNC, 1241 1241 .data[0] = key_data.rsc_tsc, 1242 1242 .dataflags[0] = IWL_HCMD_DFL_NOCOPY, 1243 - .len[0] = sizeof(key_data.rsc_tsc), 1243 + .len[0] = sizeof(*key_data.rsc_tsc), 1244 1244 }; 1245 1245 1246 1246 ret = iwl_trans_send_cmd(trans(priv), &rsc_tsc_cmd);
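Review note on the one-liner above: `sizeof(key_data.rsc_tsc)` measures the pointer itself, while `sizeof(*key_data.rsc_tsc)` measures the structure it points at, which is what the command length needs. A standalone sketch of the distinction (the struct layout and the 240-byte size here are illustrative, not iwlwifi's real command layout):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the command payload; the real
 * iwlwifi rsc_tsc structure layout differs. */
struct rsc_tsc_params {
	uint8_t bytes[240];
};

struct key_data {
	struct rsc_tsc_params *rsc_tsc;
};

/* sizeof is evaluated at compile time, so taking sizeof(*ptr)
 * never dereferences the pointer, even when it is NULL. */
```

With the old code, the host was told the payload was only pointer-sized (4 or 8 bytes) instead of the full structure.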
+9 -1
drivers/net/wireless/iwlwifi/iwl-agn-sta.c
··· 1187 1187 unsigned long flags; 1188 1188 struct iwl_addsta_cmd sta_cmd; 1189 1189 u8 sta_id = iwlagn_key_sta_id(priv, ctx->vif, sta); 1190 + __le16 key_flags; 1190 1191 1191 1192 /* if station isn't there, neither is the key */ 1192 1193 if (sta_id == IWL_INVALID_STATION) ··· 1213 1212 IWL_ERR(priv, "offset %d not used in uCode key table.\n", 1214 1213 keyconf->hw_key_idx); 1215 1214 1216 - sta_cmd.key.key_flags = STA_KEY_FLG_NO_ENC | STA_KEY_FLG_INVALID; 1215 + key_flags = cpu_to_le16(keyconf->keyidx << STA_KEY_FLG_KEYID_POS); 1216 + key_flags |= STA_KEY_FLG_MAP_KEY_MSK | STA_KEY_FLG_NO_ENC | 1217 + STA_KEY_FLG_INVALID; 1218 + 1219 + if (!(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE)) 1220 + key_flags |= STA_KEY_MULTICAST_MSK; 1221 + 1222 + sta_cmd.key.key_flags = key_flags; 1217 1223 sta_cmd.key.key_offset = WEP_INVALID_OFFSET; 1218 1224 sta_cmd.sta.modify_mask = STA_MODIFY_KEY_MASK; 1219 1225 sta_cmd.mode = STA_CONTROL_MODIFY_MSK;
+1
drivers/net/wireless/mwifiex/cfg80211.c
··· 846 846 priv->sec_info.wpa_enabled = false; 847 847 priv->sec_info.wpa2_enabled = false; 848 848 priv->wep_key_curr_index = 0; 849 + priv->sec_info.encryption_mode = 0; 849 850 ret = mwifiex_set_encode(priv, NULL, 0, 0, 1); 850 851 851 852 if (mode == NL80211_IFTYPE_ADHOC) {
+2 -1
drivers/net/wireless/rt2x00/rt2x00dev.c
··· 1220 1220 cancel_work_sync(&rt2x00dev->rxdone_work); 1221 1221 cancel_work_sync(&rt2x00dev->txdone_work); 1222 1222 } 1223 - destroy_workqueue(rt2x00dev->workqueue); 1223 + if (rt2x00dev->workqueue) 1224 + destroy_workqueue(rt2x00dev->workqueue); 1224 1225 1225 1226 /* 1226 1227 * Free the tx status fifo.
-1
drivers/of/fdt.c
··· 23 23 #include <asm/machdep.h> 24 24 #endif /* CONFIG_PPC */ 25 25 26 - #include <asm/setup.h> 27 26 #include <asm/page.h> 28 27 29 28 char *of_fdt_get_string(struct boot_param_header *blob, u32 offset)
+1 -1
drivers/of/of_mdio.c
··· 182 182 if (!phy_id || sz < sizeof(*phy_id)) 183 183 return NULL; 184 184 185 - sprintf(bus_id, PHY_ID_FMT, "0", be32_to_cpu(phy_id[0])); 185 + sprintf(bus_id, PHY_ID_FMT, "fixed-0", be32_to_cpu(phy_id[0])); 186 186 187 187 phy = phy_connect(dev, bus_id, hndlr, 0, iface); 188 188 return IS_ERR(phy) ? NULL : phy;
+2
drivers/parisc/iommu-helpers.h
··· 1 + #include <linux/prefetch.h> 2 + 1 3 /** 2 4 * iommu_fill_pdir - Insert coalesced scatter/gather chunks into the I/O Pdir. 3 5 * @ioc: The I/O Controller.
+3 -9
drivers/pcmcia/pxa2xx_base.c
··· 328 328 goto err1; 329 329 } 330 330 331 - if (ret) { 332 - while (--i >= 0) 333 - soc_pcmcia_remove_one(&sinfo->skt[i]); 334 - kfree(sinfo); 335 - clk_put(clk); 336 - } else { 337 - pxa2xx_configure_sockets(&dev->dev); 338 - dev_set_drvdata(&dev->dev, sinfo); 339 - } 331 + pxa2xx_configure_sockets(&dev->dev); 332 + dev_set_drvdata(&dev->dev, sinfo); 340 333 341 334 return 0; 342 335 343 336 err1: 344 337 while (--i >= 0) 345 338 soc_pcmcia_remove_one(&sinfo->skt[i]); 339 + clk_put(clk); 346 340 kfree(sinfo); 347 341 err0: 348 342 return ret;
+2 -2
drivers/pps/pps.c
··· 369 369 int err; 370 370 371 371 pps_class = class_create(THIS_MODULE, "pps"); 372 - if (!pps_class) { 372 + if (IS_ERR(pps_class)) { 373 373 pr_err("failed to allocate class\n"); 374 - return -ENOMEM; 374 + return PTR_ERR(pps_class); 375 375 } 376 376 pps_class->dev_attrs = pps_attrs; 377 377
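The pps.c fix matters because `class_create()` never returns NULL on failure; it returns an error pointer, so the old `if (!pps_class)` check could never fire. The kernel packs a small negative errno into the top of the address range, where no valid object can live. A minimal userspace sketch of that encoding, assuming a flat address space as the kernel's err.h does:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ERRNO 4095

/* Userspace re-implementation of the kernel's err.h trick: the last
 * MAX_ERRNO values of the address space never hold valid objects,
 * so an errno can be smuggled through a pointer return value. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

Under this encoding `ERR_PTR(-ENOMEM)` is non-NULL, which is exactly why the NULL test in the old code was dead and `IS_ERR()`/`PTR_ERR()` are required.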
+3 -2
drivers/rapidio/devices/tsi721.c
··· 410 410 */ 411 411 mport = priv->mport; 412 412 413 - wr_ptr = ioread32(priv->regs + TSI721_IDQ_WP(IDB_QUEUE)); 414 - rd_ptr = ioread32(priv->regs + TSI721_IDQ_RP(IDB_QUEUE)); 413 + wr_ptr = ioread32(priv->regs + TSI721_IDQ_WP(IDB_QUEUE)) % IDB_QSIZE; 414 + rd_ptr = ioread32(priv->regs + TSI721_IDQ_RP(IDB_QUEUE)) % IDB_QSIZE; 415 415 416 416 while (wr_ptr != rd_ptr) { 417 417 idb_entry = (u64 *)(priv->idb_base + 418 418 (TSI721_IDB_ENTRY_SIZE * rd_ptr)); 419 419 rd_ptr++; 420 + rd_ptr %= IDB_QSIZE; 420 421 idb.msg = *idb_entry; 421 422 *idb_entry = 0; 422 423
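The tsi721 fix clamps both doorbell queue pointers into [0, IDB_QSIZE) and wraps the read pointer as it advances, since the hardware counters can run past the queue depth. The wrapping walk in isolation (the queue depth value is illustrative; the real driver defines its own):

```c
#include <assert.h>

/* Illustrative queue depth for the sketch. */
#define IDB_QSIZE 512U

/* Count entries between a write and read pointer after both are
 * reduced modulo the queue size, advancing the read pointer with
 * wraparound as the driver's loop does. */
static unsigned int idb_pending(unsigned int wr_raw, unsigned int rd_raw)
{
	unsigned int wr = wr_raw % IDB_QSIZE;
	unsigned int rd = rd_raw % IDB_QSIZE;
	unsigned int count = 0;

	while (wr != rd) {
		rd = (rd + 1) % IDB_QSIZE;	/* consume one entry, wrapping */
		count++;
	}
	return count;
}
```

Without the modulo on the raw registers, `wr_ptr != rd_ptr` could spin forever once a counter exceeded the queue size.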
+3 -3
drivers/regulator/88pm8607.c
··· 196 196 }; 197 197 198 198 static const unsigned int LDO13_table[] = { 199 - 1300000, 1800000, 2000000, 2500000, 2800000, 3000000, 0, 0, 199 + 1200000, 1300000, 1800000, 2000000, 2500000, 2800000, 3000000, 0, 200 200 }; 201 201 202 202 static const unsigned int LDO13_suspend_table[] = { ··· 389 389 PM8607_LDO( 7, LDO7, 0, 3, SUPPLIES_EN12, 1), 390 390 PM8607_LDO( 8, LDO8, 0, 3, SUPPLIES_EN12, 2), 391 391 PM8607_LDO( 9, LDO9, 0, 3, SUPPLIES_EN12, 3), 392 - PM8607_LDO(10, LDO10, 0, 3, SUPPLIES_EN12, 4), 392 + PM8607_LDO(10, LDO10, 0, 4, SUPPLIES_EN12, 4), 393 393 PM8607_LDO(12, LDO12, 0, 4, SUPPLIES_EN12, 5), 394 394 PM8607_LDO(13, VIBRATOR_SET, 1, 3, VIBRATOR_SET, 0), 395 - PM8607_LDO(14, LDO14, 0, 4, SUPPLIES_EN12, 6), 395 + PM8607_LDO(14, LDO14, 0, 3, SUPPLIES_EN12, 6), 396 396 }; 397 397 398 398 static int __devinit pm8607_regulator_probe(struct platform_device *pdev)
+4 -4
drivers/regulator/da9052-regulator.c
··· 260 260 * the LDO activate bit to implment the changes on the 261 261 * LDO output. 262 262 */ 263 - return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 0, 264 - info->activate_bit); 263 + return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 264 + info->activate_bit, info->activate_bit); 265 265 } 266 266 267 267 static int da9052_set_dcdc_voltage(struct regulator_dev *rdev, ··· 280 280 * the DCDC activate bit to implment the changes on the 281 281 * DCDC output. 282 282 */ 283 - return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 0, 284 - info->activate_bit); 283 + return da9052_reg_update(regulator->da9052, DA9052_SUPPLY_REG, 284 + info->activate_bit, info->activate_bit); 285 285 } 286 286 287 287 static int da9052_get_regulator_voltage_sel(struct regulator_dev *rdev)
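The da9052 calls were passing 0 as the mask, so the read-modify-write helper never touched the activate bit. The usual (mask, value) contract behind such helpers looks like this (standalone sketch; `da9052_reg_update()`'s real prototype also takes the device handle and register address):

```c
#include <assert.h>

/* Only bits set in `mask` are changed; `val` supplies their new
 * state. Bits outside the mask pass through untouched. */
static unsigned int masked_update(unsigned int reg, unsigned int mask,
				  unsigned int val)
{
	return (reg & ~mask) | (val & mask);
}
```

With `mask == 0` the register value comes back unchanged, which is why the old `(…, 0, activate_bit)` call was a silent no-op and the fix passes `activate_bit` in both positions.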
+1 -1
drivers/regulator/tps65910-regulator.c
··· 662 662 tps65910_reg_write(pmic, TPS65910_VDD2_OP, vsel); 663 663 break; 664 664 case TPS65911_REG_VDDCTRL: 665 - vsel = selector; 665 + vsel = selector + 3; 666 666 tps65910_reg_write(pmic, TPS65911_VDDCTRL_OP, vsel); 667 667 } 668 668
+7 -7
drivers/rtc/rtc-r9701.c
··· 125 125 unsigned char tmp; 126 126 int res; 127 127 128 + tmp = R100CNT; 129 + res = read_regs(&spi->dev, &tmp, 1); 130 + if (res || tmp != 0x20) { 131 + dev_err(&spi->dev, "cannot read RTC register\n"); 132 + return -ENODEV; 133 + } 134 + 128 135 rtc = rtc_device_register("r9701", 129 136 &spi->dev, &r9701_rtc_ops, THIS_MODULE); 130 137 if (IS_ERR(rtc)) 131 138 return PTR_ERR(rtc); 132 139 133 140 dev_set_drvdata(&spi->dev, rtc); 134 - 135 - tmp = R100CNT; 136 - res = read_regs(&spi->dev, &tmp, 1); 137 - if (res || tmp != 0x20) { 138 - rtc_device_unregister(rtc); 139 - return res; 140 - } 141 141 142 142 return 0; 143 143 }
+2 -2
drivers/s390/cio/qdio_main.c
··· 167 167 DBF_ERROR("%4x EQBS ERROR", SCH_NO(q)); 168 168 DBF_ERROR("%3d%3d%2d", count, tmp_count, nr); 169 169 q->handler(q->irq_ptr->cdev, QDIO_ERROR_ACTIVATE_CHECK_CONDITION, 170 - 0, -1, -1, q->irq_ptr->int_parm); 170 + q->nr, q->first_to_kick, count, q->irq_ptr->int_parm); 171 171 return 0; 172 172 } 173 173 ··· 215 215 DBF_ERROR("%4x SQBS ERROR", SCH_NO(q)); 216 216 DBF_ERROR("%3d%3d%2d", count, tmp_count, nr); 217 217 q->handler(q->irq_ptr->cdev, QDIO_ERROR_ACTIVATE_CHECK_CONDITION, 218 - 0, -1, -1, q->irq_ptr->int_parm); 218 + q->nr, q->first_to_kick, count, q->irq_ptr->int_parm); 219 219 return 0; 220 220 } 221 221
+2 -2
drivers/scsi/osd/osd_uld.c
··· 69 69 #ifndef SCSI_OSD_MAJOR 70 70 # define SCSI_OSD_MAJOR 260 71 71 #endif 72 - #define SCSI_OSD_MAX_MINOR 64 72 + #define SCSI_OSD_MAX_MINOR MINORMASK 73 73 74 74 static const char osd_name[] = "osd"; 75 - static const char *osd_version_string = "open-osd 0.2.0"; 75 + static const char *osd_version_string = "open-osd 0.2.1"; 76 76 77 77 MODULE_AUTHOR("Boaz Harrosh <bharrosh@panasas.com>"); 78 78 MODULE_DESCRIPTION("open-osd Upper-Layer-Driver osd.ko");
+1 -1
drivers/scsi/sd_dif.c
··· 408 408 kunmap_atomic(sdt, KM_USER0); 409 409 } 410 410 411 - bio->bi_flags |= BIO_MAPPED_INTEGRITY; 411 + bio->bi_flags |= (1 << BIO_MAPPED_INTEGRITY); 412 412 } 413 413 414 414 return 0;
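The sd_dif fix is the classic bit-number-versus-bit-mask confusion: `BIO_MAPPED_INTEGRITY` is a bit *position*, so OR-ing it in directly sets whatever low bits happen to be in its numeric value. A standalone illustration (the bit position 11 is made up; the real BIO_* values live in the bio headers):

```c
#include <assert.h>

/* Illustrative bit number, not the real kernel value. */
enum { MAPPED_INTEGRITY = 11 };

static unsigned long set_flag_wrong(unsigned long flags)
{
	return flags | MAPPED_INTEGRITY;	/* ORs in 0b1011, wrong bits */
}

static unsigned long set_flag_right(unsigned long flags)
{
	return flags | (1UL << MAPPED_INTEGRITY);	/* ORs in bit 11 */
}
```

The buggy form not only fails to set the intended bit but corrupts unrelated low flag bits.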
+1 -1
drivers/spi/spi-pl022.c
··· 1083 1083 return -ENOMEM; 1084 1084 } 1085 1085 1086 - static int __init pl022_dma_probe(struct pl022 *pl022) 1086 + static int __devinit pl022_dma_probe(struct pl022 *pl022) 1087 1087 { 1088 1088 dma_cap_mask_t mask; 1089 1089
+1 -1
drivers/tty/Kconfig
··· 365 365 366 366 config PPC_EARLY_DEBUG_EHV_BC 367 367 bool "Early console (udbg) support for ePAPR hypervisors" 368 - depends on PPC_EPAPR_HV_BYTECHAN 368 + depends on PPC_EPAPR_HV_BYTECHAN=y 369 369 help 370 370 Select this option to enable early console (a.k.a. "udbg") support 371 371 via an ePAPR byte channel. You also need to choose the byte channel
+2 -9
drivers/usb/host/ehci-fsl.c
··· 239 239 ehci_writel(ehci, portsc, &ehci->regs->port_status[port_offset]); 240 240 } 241 241 242 - static int ehci_fsl_usb_setup(struct ehci_hcd *ehci) 242 + static void ehci_fsl_usb_setup(struct ehci_hcd *ehci) 243 243 { 244 244 struct usb_hcd *hcd = ehci_to_hcd(ehci); 245 245 struct fsl_usb2_platform_data *pdata; ··· 299 299 #endif 300 300 out_be32(non_ehci + FSL_SOC_USB_SICTRL, 0x00000001); 301 301 } 302 - 303 - if (!(in_be32(non_ehci + FSL_SOC_USB_CTRL) & CTRL_PHY_CLK_VALID)) { 304 - printk(KERN_WARNING "fsl-ehci: USB PHY clock invalid\n"); 305 - return -ENODEV; 306 - } 307 - return 0; 308 302 } 309 303 310 304 /* called after powerup, by probe or system-pm "wakeup" */ 311 305 static int ehci_fsl_reinit(struct ehci_hcd *ehci) 312 306 { 313 - if (ehci_fsl_usb_setup(ehci)) 314 - return -ENODEV; 307 + ehci_fsl_usb_setup(ehci); 315 308 ehci_port_power(ehci, 0); 316 309 317 310 return 0;
-1
drivers/usb/host/ehci-fsl.h
··· 45 45 #define FSL_SOC_USB_PRICTRL 0x40c /* NOTE: big-endian */ 46 46 #define FSL_SOC_USB_SICTRL 0x410 /* NOTE: big-endian */ 47 47 #define FSL_SOC_USB_CTRL 0x500 /* NOTE: big-endian */ 48 - #define CTRL_PHY_CLK_VALID (1 << 17) 49 48 #define SNOOP_SIZE_2GB 0x1e 50 49 #endif /* _EHCI_FSL_H */
+1 -1
drivers/video/omap2/displays/Kconfig
··· 12 12 13 13 config PANEL_DVI 14 14 tristate "DVI output" 15 - depends on OMAP2_DSS_DPI 15 + depends on OMAP2_DSS_DPI && I2C 16 16 help 17 17 Driver for external monitors, connected via DVI. The driver uses i2c 18 18 to read EDID information from the monitor.
+6
drivers/video/omap2/dss/apply.c
··· 1276 1276 1277 1277 spin_unlock_irqrestore(&data_lock, flags); 1278 1278 1279 + /* wait for overlay to be enabled */ 1280 + wait_pending_extra_info_updates(); 1281 + 1279 1282 mutex_unlock(&apply_lock); 1280 1283 1281 1284 return 0; ··· 1315 1312 dss_set_go_bits(); 1316 1313 1317 1314 spin_unlock_irqrestore(&data_lock, flags); 1315 + 1316 + /* wait for the overlay to be disabled */ 1317 + wait_pending_extra_info_updates(); 1318 1318 1319 1319 mutex_unlock(&apply_lock); 1320 1320
+23 -1
drivers/video/omap2/dss/hdmi.c
··· 165 165 166 166 DSSDBG("hdmi_runtime_get\n"); 167 167 168 + /* 169 + * HACK: Add dss_runtime_get() to ensure DSS clock domain is enabled. 170 + * This should be removed later. 171 + */ 172 + r = dss_runtime_get(); 173 + if (r < 0) 174 + goto err_get_dss; 175 + 168 176 r = pm_runtime_get_sync(&hdmi.pdev->dev); 169 177 WARN_ON(r < 0); 170 - return r < 0 ? r : 0; 178 + if (r < 0) 179 + goto err_get_hdmi; 180 + 181 + return 0; 182 + 183 + err_get_hdmi: 184 + dss_runtime_put(); 185 + err_get_dss: 186 + return r; 171 187 } 172 188 173 189 static void hdmi_runtime_put(void) ··· 194 178 195 179 r = pm_runtime_put_sync(&hdmi.pdev->dev); 196 180 WARN_ON(r < 0); 181 + 182 + /* 183 + * HACK: This is added to complement the dss_runtime_get() call in 184 + * hdmi_runtime_get(). This should be removed later. 185 + */ 186 + dss_runtime_put(); 197 187 } 198 188 199 189 int hdmi_init_display(struct omap_dss_device *dssdev)
+1 -8
drivers/video/omap2/dss/ti_hdmi_4xxx_ip.c
··· 479 479 480 480 bool ti_hdmi_4xxx_detect(struct hdmi_ip_data *ip_data) 481 481 { 482 - int r; 483 - 484 - void __iomem *base = hdmi_core_sys_base(ip_data); 485 - 486 - /* HPD */ 487 - r = REG_GET(base, HDMI_CORE_SYS_SYS_STAT, 1, 1); 488 - 489 - return r == 1; 482 + return gpio_get_value(ip_data->hpd_gpio); 490 483 } 491 484 492 485 static void hdmi_core_init(struct hdmi_core_video_config *video_cfg,
+4
drivers/video/via/hw.c
··· 1810 1810 break; 1811 1811 } 1812 1812 1813 + /* magic required on VX900 for correct modesetting on IGA1 */ 1814 + via_write_reg_mask(VIACR, 0x45, 0x00, 0x01); 1815 + 1813 1816 /* probably this should go to the scaling code one day */ 1817 + via_write_reg_mask(VIACR, 0xFD, 0, 0x80); /* VX900 hw scale on IGA2 */ 1814 1818 viafb_write_regx(scaling_parameters, ARRAY_SIZE(scaling_parameters)); 1815 1819 1816 1820 /* Fill VPIT Parameters */
+22 -11
drivers/virtio/virtio_balloon.c
··· 367 367 #ifdef CONFIG_PM 368 368 static int virtballoon_freeze(struct virtio_device *vdev) 369 369 { 370 + struct virtio_balloon *vb = vdev->priv; 371 + 370 372 /* 371 373 * The kthread is already frozen by the PM core before this 372 374 * function is called. 373 375 */ 376 + 377 + while (vb->num_pages) 378 + leak_balloon(vb, vb->num_pages); 379 + update_balloon_size(vb); 374 380 375 381 /* Ensure we don't get any more requests from the host */ 376 382 vdev->config->reset(vdev); ··· 384 378 return 0; 385 379 } 386 380 381 + static int restore_common(struct virtio_device *vdev) 382 + { 383 + struct virtio_balloon *vb = vdev->priv; 384 + int ret; 385 + 386 + ret = init_vqs(vdev->priv); 387 + if (ret) 388 + return ret; 389 + 390 + fill_balloon(vb, towards_target(vb)); 391 + update_balloon_size(vb); 392 + return 0; 393 + } 394 + 387 395 static int virtballoon_thaw(struct virtio_device *vdev) 388 396 { 389 - return init_vqs(vdev->priv); 397 + return restore_common(vdev); 390 398 } 391 399 392 400 static int virtballoon_restore(struct virtio_device *vdev) 393 401 { 394 402 struct virtio_balloon *vb = vdev->priv; 395 - struct page *page, *page2; 396 - 397 - /* We're starting from a clean slate */ 398 - vb->num_pages = 0; 399 403 400 404 /* 401 405 * If a request wasn't complete at the time of freezing, this ··· 413 397 */ 414 398 vb->need_stats_update = 0; 415 399 416 - /* We don't have these pages in the balloon anymore! */ 417 - list_for_each_entry_safe(page, page2, &vb->pages, lru) { 418 - list_del(&page->lru); 419 - totalram_pages++; 420 - } 421 - return init_vqs(vdev->priv); 400 + return restore_common(vdev); 422 401 } 423 402 #endif 424 403
+1 -1
drivers/watchdog/Kconfig
··· 1098 1098 For Freescale Book-E processors, this is a number between 0 and 63. 1099 1099 For other Book-E processors, this is a number between 0 and 3. 1100 1100 1101 - The value can be overidden by the wdt_period command-line parameter. 1101 + The value can be overridden by the wdt_period command-line parameter. 1102 1102 1103 1103 # PPC64 Architecture 1104 1104
+5 -1
drivers/watchdog/booke_wdt.c
··· 198 198 booke_wdt_period = tmp; 199 199 #endif 200 200 booke_wdt_set(); 201 - return 0; 201 + /* Fall */ 202 202 case WDIOC_GETTIMEOUT: 203 + #ifdef CONFIG_FSL_BOOKE 204 + return put_user(period_to_sec(booke_wdt_period), p); 205 + #else 203 206 return put_user(booke_wdt_period, p); 207 + #endif 204 208 default: 205 209 return -ENOTTY; 206 210 }
+3 -2
drivers/watchdog/hpwdt.c
··· 231 231 232 232 cmn_regs.u1.reax = CRU_BIOS_SIGNATURE_VALUE; 233 233 234 - set_memory_x((unsigned long)bios32_entrypoint, (2 * PAGE_SIZE)); 234 + set_memory_x((unsigned long)bios32_map, 2); 235 235 asminline_call(&cmn_regs, bios32_entrypoint); 236 236 237 237 if (cmn_regs.u1.ral != 0) { ··· 250 250 cru_rom_addr = 251 251 ioremap(cru_physical_address, cru_length); 252 252 if (cru_rom_addr) { 253 - set_memory_x((unsigned long)cru_rom_addr, cru_length); 253 + set_memory_x((unsigned long)cru_rom_addr & PAGE_MASK, 254 + (cru_length + PAGE_SIZE - 1) >> PAGE_SHIFT); 254 255 retval = 0; 255 256 } 256 257 }
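Context for the hpwdt change: `set_memory_x()` takes a *page count* and operates on whole pages, so the fix masks the address down to a page boundary and converts the byte length to pages with round-up. The rounding arithmetic, generalized to unaligned starts (4 KiB pages assumed):

```c
#include <assert.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Pages needed to cover `len` bytes starting at `addr`, after
 * aligning the start down to a page boundary. */
static unsigned long pages_to_cover(unsigned long addr, unsigned long len)
{
	unsigned long start = addr & PAGE_MASK;
	unsigned long end = addr + len;

	return (end - start + PAGE_SIZE - 1) >> PAGE_SHIFT;
}
```

The old code passed `2 * PAGE_SIZE` (a byte count) where a page count was expected, making a region thousands of pages long executable.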
+1 -1
drivers/watchdog/pnx4008_wdt.c
··· 264 264 wdt_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 265 265 if (wdt_mem == NULL) { 266 266 printk(KERN_INFO MODULE_NAME 267 - "failed to get memory region resouce\n"); 267 + "failed to get memory region resource\n"); 268 268 return -ENOENT; 269 269 } 270 270
+31 -26
drivers/watchdog/s3c2410_wdt.c
··· 312 312 dev = &pdev->dev; 313 313 wdt_dev = &pdev->dev; 314 314 315 - /* get the memory region for the watchdog timer */ 316 - 317 315 wdt_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 318 316 if (wdt_mem == NULL) { 319 317 dev_err(dev, "no memory resource specified\n"); 320 318 return -ENOENT; 321 319 } 322 320 321 + wdt_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 322 + if (wdt_irq == NULL) { 323 + dev_err(dev, "no irq resource specified\n"); 324 + ret = -ENOENT; 325 + goto err; 326 + } 327 + 328 + /* get the memory region for the watchdog timer */ 329 + 323 330 size = resource_size(wdt_mem); 324 331 if (!request_mem_region(wdt_mem->start, size, pdev->name)) { 325 332 dev_err(dev, "failed to get memory region\n"); 326 - return -EBUSY; 333 + ret = -EBUSY; 334 + goto err; 327 335 } 328 336 329 337 wdt_base = ioremap(wdt_mem->start, size); ··· 343 335 344 336 DBG("probe: mapped wdt_base=%p\n", wdt_base); 345 337 346 - wdt_irq = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 347 - if (wdt_irq == NULL) { 348 - dev_err(dev, "no irq resource specified\n"); 349 - ret = -ENOENT; 350 - goto err_map; 351 - } 352 - 353 - ret = request_irq(wdt_irq->start, s3c2410wdt_irq, 0, pdev->name, pdev); 354 - if (ret != 0) { 355 - dev_err(dev, "failed to install irq (%d)\n", ret); 356 - goto err_map; 357 - } 358 - 359 338 wdt_clock = clk_get(&pdev->dev, "watchdog"); 360 339 if (IS_ERR(wdt_clock)) { 361 340 dev_err(dev, "failed to find watchdog clock source\n"); 362 341 ret = PTR_ERR(wdt_clock); 363 - goto err_irq; 342 + goto err_map; 364 343 } 365 344 366 345 clk_enable(wdt_clock); 367 346 368 - if (s3c2410wdt_cpufreq_register() < 0) { 347 + ret = s3c2410wdt_cpufreq_register(); 348 + if (ret < 0) { 369 349 printk(KERN_ERR PFX "failed to register cpufreq\n"); 370 350 goto err_clk; 371 351 } ··· 374 378 "cannot start\n"); 375 379 } 376 380 381 + ret = request_irq(wdt_irq->start, s3c2410wdt_irq, 0, pdev->name, pdev); 382 + if (ret != 0) { 383 + dev_err(dev, "failed to install irq (%d)\n", ret); 384 + goto err_cpufreq; 385 + } 386 + 377 387 watchdog_set_nowayout(&s3c2410_wdd, nowayout); 378 388 379 389 ret = watchdog_register_device(&s3c2410_wdd); 380 390 if (ret) { 381 391 dev_err(dev, "cannot register watchdog (%d)\n", ret); 382 - goto err_cpufreq; 392 + goto err_irq; 383 393 } 384 394 385 395 if (tmr_atboot && started == 0) { ··· 410 408 411 409 return 0; 412 410 411 + err_irq: 412 + free_irq(wdt_irq->start, pdev); 413 + 413 414 err_cpufreq: 414 415 s3c2410wdt_cpufreq_deregister(); 415 416 416 417 err_clk: 417 418 clk_disable(wdt_clock); 418 419 clk_put(wdt_clock); 419 - 420 + wdt_clock = NULL; 421 + 422 421 err_map: 423 422 iounmap(wdt_base); 424 423 425 424 err_req: 426 425 release_mem_region(wdt_mem->start, size); 427 - wdt_mem = NULL; 428 + 429 + err: 430 + wdt_irq = NULL; 431 + wdt_mem = NULL; 430 431 return ret; 431 432 } 432 433 ··· 437 432 { 438 433 watchdog_unregister_device(&s3c2410_wdd); 439 434 435 + free_irq(wdt_irq->start, dev); 436 + 440 437 s3c2410wdt_cpufreq_deregister(); 441 438 442 439 clk_disable(wdt_clock); 443 440 clk_put(wdt_clock); 444 441 wdt_clock = NULL; 445 442 446 - free_irq(wdt_irq->start, dev); 447 - wdt_irq = NULL; 448 - 449 443 iounmap(wdt_base); 450 444 451 445 release_mem_region(wdt_mem->start, resource_size(wdt_mem)); 446 + wdt_irq = NULL; 452 447 wdt_mem = NULL; 453 448 return 0; 454 449 }

+12 -12
fs/aio.c
··· 228 228 call_rcu(&ctx->rcu_head, ctx_rcu_free); 229 229 } 230 230 231 - static inline void get_ioctx(struct kioctx *kioctx) 232 - { 233 - BUG_ON(atomic_read(&kioctx->users) <= 0); 234 - atomic_inc(&kioctx->users); 235 - } 236 - 237 231 static inline int try_get_ioctx(struct kioctx *kioctx) 238 232 { 239 233 return atomic_inc_not_zero(&kioctx->users); ··· 267 273 mm = ctx->mm = current->mm; 268 274 atomic_inc(&mm->mm_count); 269 275 270 - atomic_set(&ctx->users, 1); 276 + atomic_set(&ctx->users, 2); 271 277 spin_lock_init(&ctx->ctx_lock); 272 278 spin_lock_init(&ctx->ring_info.ring_lock); 273 279 init_waitqueue_head(&ctx->wait); ··· 484 490 kmem_cache_free(kiocb_cachep, req); 485 491 ctx->reqs_active--; 486 492 } 493 + if (unlikely(!ctx->reqs_active && ctx->dead)) 494 + wake_up_all(&ctx->wait); 487 495 spin_unlock_irq(&ctx->ctx_lock); 488 496 } 489 497 ··· 603 607 fput(req->ki_filp); 604 608 605 609 /* Link the iocb into the context's free list */ 610 + rcu_read_lock(); 606 611 spin_lock_irq(&ctx->ctx_lock); 607 612 really_put_req(ctx, req); 613 + /* 614 + * at that point ctx might've been killed, but actual 615 + * freeing is RCU'd 616 + */ 608 617 spin_unlock_irq(&ctx->ctx_lock); 618 + rcu_read_unlock(); 609 619 610 - put_ioctx(ctx); 611 620 spin_lock_irq(&fput_lock); 612 621 } 613 622 spin_unlock_irq(&fput_lock); ··· 643 642 * this function will be executed w/out any aio kthread wakeup. 644 643 */ 645 644 if (unlikely(!fput_atomic(req->ki_filp))) { 646 - get_ioctx(ctx); 647 645 spin_lock(&fput_lock); 648 646 list_add(&req->ki_list, &fput_head); 649 647 spin_unlock(&fput_lock); ··· 1336 1336 ret = PTR_ERR(ioctx); 1337 1337 if (!IS_ERR(ioctx)) { 1338 1338 ret = put_user(ioctx->user_id, ctxp); 1339 - if (!ret) 1339 + if (!ret) { 1340 + put_ioctx(ioctx); 1340 1341 return 0; 1341 - 1342 - get_ioctx(ioctx); /* io_destroy() expects us to hold a ref */ 1342 + } 1343 1343 io_destroy(ioctx); 1344 1344 } 1345 1345
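The aio refcount rework leans on `try_get_ioctx()`, i.e. `atomic_inc_not_zero()`: a new reference is taken only if the count has not already dropped to zero, so a dying ioctx cannot be resurrected. With C11 atomics the primitive looks like this (userspace sketch of the kernel helper):

```c
#include <assert.h>
#include <stdatomic.h>

/* Increment *v unless it is zero; returns nonzero on success.
 * Mirrors the compare-and-swap loop behind atomic_inc_not_zero(). */
static int atomic_inc_not_zero(atomic_int *v)
{
	int old = atomic_load(v);

	do {
		if (old == 0)
			return 0;	/* object already dying, refuse new ref */
	} while (!atomic_compare_exchange_weak(v, &old, old + 1));

	return 1;
}
```

A plain `atomic_inc()` here would race with the final `put`, handing out a reference to memory about to be freed.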
+7 -7
fs/binfmt_aout.c
··· 259 259 current->mm->free_area_cache = current->mm->mmap_base; 260 260 current->mm->cached_hole_size = 0; 261 261 262 + retval = setup_arg_pages(bprm, STACK_TOP, EXSTACK_DEFAULT); 263 + if (retval < 0) { 264 + /* Someone check-me: is this error path enough? */ 265 + send_sig(SIGKILL, current, 0); 266 + return retval; 267 + } 268 + 262 269 install_exec_creds(bprm); 263 270 current->flags &= ~PF_FORKNOEXEC; 264 271 ··· 356 349 retval = set_brk(current->mm->start_brk, current->mm->brk); 357 350 if (retval < 0) { 358 351 send_sig(SIGKILL, current, 0); 359 - return retval; 360 - } 361 - 362 - retval = setup_arg_pages(bprm, STACK_TOP, EXSTACK_DEFAULT); 363 - if (retval < 0) { 364 - /* Someone check-me: is this error path enough? */ 365 - send_sig(SIGKILL, current, 0); 366 352 return retval; 367 353 } 368 354
+1 -1
fs/binfmt_elf.c
··· 1421 1421 for (i = 1; i < view->n; ++i) { 1422 1422 const struct user_regset *regset = &view->regsets[i]; 1423 1423 do_thread_regset_writeback(t->task, regset); 1424 - if (regset->core_note_type && 1424 + if (regset->core_note_type && regset->get && 1425 1425 (!regset->active || regset->active(t->task, regset))) { 1426 1426 int ret; 1427 1427 size_t size = regset->n * regset->size;
+6 -2
fs/btrfs/backref.c
··· 583 583 struct btrfs_path *path; 584 584 struct btrfs_key info_key = { 0 }; 585 585 struct btrfs_delayed_ref_root *delayed_refs = NULL; 586 - struct btrfs_delayed_ref_head *head = NULL; 586 + struct btrfs_delayed_ref_head *head; 587 587 int info_level = 0; 588 588 int ret; 589 589 struct list_head prefs_delayed; ··· 607 607 * at a specified point in time 608 608 */ 609 609 again: 610 + head = NULL; 611 + 610 612 ret = btrfs_search_slot(trans, fs_info->extent_root, &key, path, 0, 0); 611 613 if (ret < 0) 612 614 goto out; ··· 637 635 goto again; 638 636 } 639 637 ret = __add_delayed_refs(head, seq, &info_key, &prefs_delayed); 640 - if (ret) 638 + if (ret) { 639 + spin_unlock(&delayed_refs->lock); 641 640 goto out; 641 + } 642 642 } 643 643 spin_unlock(&delayed_refs->lock); 644 644
+1 -1
fs/btrfs/reada.c
··· 305 305 306 306 spin_lock(&fs_info->reada_lock); 307 307 ret = radix_tree_insert(&dev->reada_zones, 308 - (unsigned long)zone->end >> PAGE_CACHE_SHIFT, 308 + (unsigned long)(zone->end >> PAGE_CACHE_SHIFT), 309 309 zone); 310 310 spin_unlock(&fs_info->reada_lock); 311 311
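The reada.c one-liner is pure operator precedence: a cast binds tighter than `>>`, so `(unsigned long)zone->end >> PAGE_CACHE_SHIFT` truncates the 64-bit `end` *before* shifting, discarding the high bits on 32-bit kernels where `unsigned long` is 32 bits. A host-independent demonstration using fixed-width types:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_CACHE_SHIFT 12

/* Truncate-then-shift: what the old code did on a 32-bit kernel. */
static uint32_t index_wrong(uint64_t end)
{
	return (uint32_t)end >> PAGE_CACHE_SHIFT;
}

/* Shift-then-truncate: the fixed parenthesization. */
static uint32_t index_right(uint64_t end)
{
	return (uint32_t)(end >> PAGE_CACHE_SHIFT);
}
```

Any zone ending at or above 4 GiB collapses to a wrong (often zero) radix-tree index with the old ordering.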
+18 -2
fs/cifs/dir.c
··· 584 584 * If either that or op not supported returned, follow 585 585 * the normal lookup. 586 586 */ 587 - if ((rc == 0) || (rc == -ENOENT)) 587 + switch (rc) { 588 + case 0: 589 + /* 590 + * The server may allow us to open things like 591 + * FIFOs, but the client isn't set up to deal 592 + * with that. If it's not a regular file, just 593 + * close it and proceed as if it were a normal 594 + * lookup. 595 + */ 596 + if (newInode && !S_ISREG(newInode->i_mode)) { 597 + CIFSSMBClose(xid, pTcon, fileHandle); 598 + break; 599 + } 600 + case -ENOENT: 588 601 posix_open = true; 589 - else if ((rc == -EINVAL) || (rc != -EOPNOTSUPP)) 602 + case -EOPNOTSUPP: 603 + break; 604 + default: 590 605 pTcon->broken_posix_open = true; 606 + } 591 607 } 592 608 if (!posix_open) 593 609 rc = cifs_get_inode_info_unix(&newInode, full_path,
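The rewritten cifs branch relies on deliberate switch fallthrough: a successful open of a regular file falls into the `-ENOENT` arm (both enable `posix_open`), which falls into the `-EOPNOTSUPP` arm (plain break), while any other error marks the tcon's posix open path broken. The control flow extracted into a testable sketch (Linux errno values hard-coded for illustration):

```c
#include <assert.h>
#include <stdbool.h>

struct open_decision {
	bool posix_open;
	bool broken_posix_open;
};

static struct open_decision classify_posix_open(int rc, bool is_regular)
{
	struct open_decision d = { false, false };

	switch (rc) {
	case 0:
		/* Opened something the client can't handle (FIFO etc.):
		 * bail out and fall back to a normal lookup. */
		if (!is_regular)
			break;
		/* fall through */
	case -2:	/* -ENOENT */
		d.posix_open = true;
		/* fall through */
	case -95:	/* -EOPNOTSUPP */
		break;
	default:
		d.broken_posix_open = true;
	}
	return d;
}
```

The original `if ((rc == -EINVAL) || (rc != -EOPNOTSUPP))` condition was tautological for `-EINVAL`; the switch makes each case explicit.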
+19 -9
fs/cifs/inode.c
··· 534 534 if (fattr->cf_cifsattrs & ATTR_DIRECTORY) { 535 535 fattr->cf_mode = S_IFDIR | cifs_sb->mnt_dir_mode; 536 536 fattr->cf_dtype = DT_DIR; 537 + /* 538 + * Server can return wrong NumberOfLinks value for directories 539 + * when Unix extensions are disabled - fake it. 540 + */ 541 + fattr->cf_nlink = 2; 537 542 } else { 538 543 fattr->cf_mode = S_IFREG | cifs_sb->mnt_file_mode; 539 544 fattr->cf_dtype = DT_REG; ··· 546 541 /* clear write bits if ATTR_READONLY is set */ 547 542 if (fattr->cf_cifsattrs & ATTR_READONLY) 548 543 fattr->cf_mode &= ~(S_IWUGO); 549 - } 550 544 551 - fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 545 + fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks); 546 + } 552 547 553 548 fattr->cf_uid = cifs_sb->mnt_uid; 554 549 fattr->cf_gid = cifs_sb->mnt_gid; ··· 1327 1322 } 1328 1323 /*BB check (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SET_UID ) to see if need 1329 1324 to set uid/gid */ 1330 - inc_nlink(inode); 1331 1325 1332 1326 cifs_unix_basic_to_fattr(&fattr, pInfo, cifs_sb); 1333 1327 cifs_fill_uniqueid(inode->i_sb, &fattr); ··· 1359 1355 d_drop(direntry); 1360 1356 } else { 1361 1357 mkdir_get_info: 1362 - inc_nlink(inode); 1363 1358 if (pTcon->unix_ext) 1364 1359 rc = cifs_get_inode_info_unix(&newinode, full_path, 1365 1360 inode->i_sb, xid); ··· 1439 1436 } 1440 1437 } 1441 1438 mkdir_out: 1439 + /* 1440 + * Force revalidate to get parent dir info when needed since cached 1441 + * attributes are invalid now. 1442 + */ 1443 + CIFS_I(inode)->time = 0; 1442 1444 kfree(full_path); 1443 1445 FreeXid(xid); 1444 1446 cifs_put_tlink(tlink); ··· 1483 1475 cifs_put_tlink(tlink); 1484 1476 1485 1477 if (!rc) { 1486 - drop_nlink(inode); 1487 1478 spin_lock(&direntry->d_inode->i_lock); 1488 1479 i_size_write(direntry->d_inode, 0); 1489 1480 clear_nlink(direntry->d_inode); ··· 1490 1483 } 1491 1484 1492 1485 cifsInode = CIFS_I(direntry->d_inode); 1493 - cifsInode->time = 0; /* force revalidate to go get info when 1494 - needed */ 1486 + /* force revalidate to go get info when needed */ 1487 + cifsInode->time = 0; 1495 1488 1496 1489 cifsInode = CIFS_I(inode); 1497 - cifsInode->time = 0; /* force revalidate to get parent dir info 1498 - since cached search results now invalid */ 1490 + /* 1491 + * Force revalidate to get parent dir info when needed since cached 1492 + * attributes are invalid now. 1493 + */ 1494 + cifsInode->time = 0; 1499 1495 1500 1496 direntry->d_inode->i_ctime = inode->i_ctime = inode->i_mtime = 1501 1497 current_fs_time(inode->i_sb);
+28 -5
fs/dcache.c
··· 104 104 105 105 static struct hlist_bl_head *dentry_hashtable __read_mostly; 106 106 107 - static inline struct hlist_bl_head *d_hash(struct dentry *parent, 107 + static inline struct hlist_bl_head *d_hash(const struct dentry *parent, 108 108 unsigned long hash) 109 109 { 110 110 hash += ((unsigned long) parent ^ GOLDEN_RATIO_PRIME) / L1_CACHE_BYTES; ··· 136 136 return proc_dointvec(table, write, buffer, lenp, ppos); 137 137 } 138 138 #endif 139 + 140 + /* 141 + * Compare 2 name strings, return 0 if they match, otherwise non-zero. 142 + * The strings are both count bytes long, and count is non-zero. 143 + */ 144 + static inline int dentry_cmp(const unsigned char *cs, size_t scount, 145 + const unsigned char *ct, size_t tcount) 146 + { 147 + if (scount != tcount) 148 + return 1; 149 + 150 + do { 151 + if (*cs != *ct) 152 + return 1; 153 + cs++; 154 + ct++; 155 + tcount--; 156 + } while (tcount); 157 + return 0; 158 + } 139 159 140 160 static void __d_free(struct rcu_head *head) 141 161 { ··· 1737 1717 * child is looked up. Thus, an interlocking stepping of sequence lock checks 1738 1718 * is formed, giving integrity down the path walk. 1739 1719 */ 1740 - struct dentry *__d_lookup_rcu(struct dentry *parent, struct qstr *name, 1741 - unsigned *seq, struct inode **inode) 1720 + struct dentry *__d_lookup_rcu(const struct dentry *parent, 1721 + const struct qstr *name, 1722 + unsigned *seqp, struct inode **inode) 1742 1723 { 1743 1724 unsigned int len = name->len; 1744 1725 unsigned int hash = name->hash; ··· 1769 1748 * See Documentation/filesystems/path-lookup.txt for more details. 
1770 1749 */ 1771 1750 hlist_bl_for_each_entry_rcu(dentry, node, b, d_hash) { 1751 + unsigned seq; 1772 1752 struct inode *i; 1773 1753 const char *tname; 1774 1754 int tlen; ··· 1778 1756 continue; 1779 1757 1780 1758 seqretry: 1781 - *seq = read_seqcount_begin(&dentry->d_seq); 1759 + seq = read_seqcount_begin(&dentry->d_seq); 1782 1760 if (dentry->d_parent != parent) 1783 1761 continue; 1784 1762 if (d_unhashed(dentry)) ··· 1793 1771 * edge of memory when walking. If we could load this 1794 1772 * atomically some other way, we could drop this check. 1795 1773 */ 1796 - if (read_seqcount_retry(&dentry->d_seq, *seq)) 1774 + if (read_seqcount_retry(&dentry->d_seq, seq)) 1797 1775 goto seqretry; 1798 1776 if (unlikely(parent->d_flags & DCACHE_OP_COMPARE)) { 1799 1777 if (parent->d_op->d_compare(parent, *inode, ··· 1810 1788 * order to do anything useful with the returned dentry 1811 1789 * anyway. 1812 1790 */ 1791 + *seqp = seq; 1813 1792 *inode = i; 1814 1793 return dentry; 1815 1794 }
+1 -1
fs/ecryptfs/miscdev.c
··· 429 429 goto memdup; 430 430 } else if (count < MIN_MSG_PKT_SIZE || count > MAX_MSG_PKT_SIZE) { 431 431 printk(KERN_WARNING "%s: Acceptable packet size range is " 432 - "[%d-%lu], but amount of data written is [%zu].", 432 + "[%d-%zu], but amount of data written is [%zu].", 433 433 __func__, MIN_MSG_PKT_SIZE, MAX_MSG_PKT_SIZE, count); 434 434 return -EINVAL; 435 435 }
+2 -16
fs/exec.c
··· 1918 1918 { 1919 1919 struct task_struct *tsk = current; 1920 1920 struct mm_struct *mm = tsk->mm; 1921 - struct completion *vfork_done; 1922 1921 int core_waiters = -EBUSY; 1923 1922 1924 1923 init_completion(&core_state->startup); ··· 1929 1930 core_waiters = zap_threads(tsk, mm, core_state, exit_code); 1930 1931 up_write(&mm->mmap_sem); 1931 1932 1932 - if (unlikely(core_waiters < 0)) 1933 - goto fail; 1934 - 1935 - /* 1936 - * Make sure nobody is waiting for us to release the VM, 1937 - * otherwise we can deadlock when we wait on each other 1938 - */ 1939 - vfork_done = tsk->vfork_done; 1940 - if (vfork_done) { 1941 - tsk->vfork_done = NULL; 1942 - complete(vfork_done); 1943 - } 1944 - 1945 - if (core_waiters) 1933 + if (core_waiters > 0) 1946 1934 wait_for_completion(&core_state->startup); 1947 - fail: 1935 + 1948 1936 return core_waiters; 1949 1937 } 1950 1938
+10 -4
fs/gfs2/glock.c
··· 167 167 spin_unlock(&lru_lock); 168 168 } 169 169 170 - static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl) 170 + static void __gfs2_glock_remove_from_lru(struct gfs2_glock *gl) 171 171 { 172 - spin_lock(&lru_lock); 173 172 if (!list_empty(&gl->gl_lru)) { 174 173 list_del_init(&gl->gl_lru); 175 174 atomic_dec(&lru_count); 176 175 clear_bit(GLF_LRU, &gl->gl_flags); 177 176 } 177 + } 178 + 179 + static void gfs2_glock_remove_from_lru(struct gfs2_glock *gl) 180 + { 181 + spin_lock(&lru_lock); 182 + __gfs2_glock_remove_from_lru(gl); 178 183 spin_unlock(&lru_lock); 179 184 } 180 185 ··· 222 217 struct gfs2_sbd *sdp = gl->gl_sbd; 223 218 struct address_space *mapping = gfs2_glock2aspace(gl); 224 219 225 - if (atomic_dec_and_test(&gl->gl_ref)) { 220 + if (atomic_dec_and_lock(&gl->gl_ref, &lru_lock)) { 221 + __gfs2_glock_remove_from_lru(gl); 222 + spin_unlock(&lru_lock); 226 223 spin_lock_bucket(gl->gl_hash); 227 224 hlist_bl_del_rcu(&gl->gl_list); 228 225 spin_unlock_bucket(gl->gl_hash); 229 - gfs2_glock_remove_from_lru(gl); 230 226 GLOCK_BUG_ON(gl, !list_empty(&gl->gl_holders)); 231 227 GLOCK_BUG_ON(gl, mapping && mapping->nrpages); 232 228 trace_gfs2_glock_put(gl);
+1 -4
fs/gfs2/inode.c
··· 391 391 int error; 392 392 int dblocks = 1; 393 393 394 - error = gfs2_rindex_update(sdp); 395 - if (error) 396 - fs_warn(sdp, "rindex update returns %d\n", error); 397 - 398 394 error = gfs2_inplace_reserve(dip, RES_DINODE); 399 395 if (error) 400 396 goto out; ··· 1039 1043 rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr); 1040 1044 if (!rgd) 1041 1045 goto out_inodes; 1046 + 1042 1047 gfs2_holder_init(rgd->rd_gl, LM_ST_EXCLUSIVE, 0, ghs + 2); 1043 1048 1044 1049
+5
fs/gfs2/ops_fstype.c
··· 800 800 fs_err(sdp, "can't get quota file inode: %d\n", error); 801 801 goto fail_rindex; 802 802 } 803 + 804 + error = gfs2_rindex_update(sdp); 805 + if (error) 806 + goto fail_qinode; 807 + 803 808 return 0; 804 809 805 810 fail_qinode:
+9 -4
fs/gfs2/rgrp.c
··· 683 683 struct gfs2_glock *gl = ip->i_gl; 684 684 struct gfs2_holder ri_gh; 685 685 int error = 0; 686 + int unlock_required = 0; 686 687 687 688 /* Read new copy from disk if we don't have the latest */ 688 689 if (!sdp->sd_rindex_uptodate) { 689 690 mutex_lock(&sdp->sd_rindex_mutex); 690 - error = gfs2_glock_nq_init(gl, LM_ST_SHARED, 0, &ri_gh); 691 - if (error) 692 - return error; 691 + if (!gfs2_glock_is_locked_by_me(gl)) { 692 + error = gfs2_glock_nq_init(gl, LM_ST_SHARED, 0, &ri_gh); 693 + if (error) 694 + return error; 695 + unlock_required = 1; 696 + } 693 697 if (!sdp->sd_rindex_uptodate) 694 698 error = gfs2_ri_update(ip); 695 - gfs2_glock_dq_uninit(&ri_gh); 699 + if (unlock_required) 700 + gfs2_glock_dq_uninit(&ri_gh); 696 701 mutex_unlock(&sdp->sd_rindex_mutex); 697 702 } 698 703
+44 -22
fs/namei.c
··· 1374 1374 return 1; 1375 1375 } 1376 1376 1377 + unsigned int full_name_hash(const unsigned char *name, unsigned int len) 1378 + { 1379 + unsigned long hash = init_name_hash(); 1380 + while (len--) 1381 + hash = partial_name_hash(*name++, hash); 1382 + return end_name_hash(hash); 1383 + } 1384 + EXPORT_SYMBOL(full_name_hash); 1385 + 1386 + /* 1387 + * We know there's a real path component here of at least 1388 + * one character. 1389 + */ 1390 + static inline unsigned long hash_name(const char *name, unsigned int *hashp) 1391 + { 1392 + unsigned long hash = init_name_hash(); 1393 + unsigned long len = 0, c; 1394 + 1395 + c = (unsigned char)*name; 1396 + do { 1397 + len++; 1398 + hash = partial_name_hash(c, hash); 1399 + c = (unsigned char)name[len]; 1400 + } while (c && c != '/'); 1401 + *hashp = end_name_hash(hash); 1402 + return len; 1403 + } 1404 + 1377 1405 /* 1378 1406 * Name resolution. 1379 1407 * This is the basic name resolution function, turning a pathname into ··· 1422 1394 1423 1395 /* At this point we know we have a real path component. 
*/ 1424 1396 for(;;) { 1425 - unsigned long hash; 1426 1397 struct qstr this; 1427 - unsigned int c; 1398 + long len; 1428 1399 int type; 1429 1400 1430 1401 err = may_lookup(nd); 1431 1402 if (err) 1432 1403 break; 1433 1404 1405 + len = hash_name(name, &this.hash); 1434 1406 this.name = name; 1435 - c = *(const unsigned char *)name; 1436 - 1437 - hash = init_name_hash(); 1438 - do { 1439 - name++; 1440 - hash = partial_name_hash(c, hash); 1441 - c = *(const unsigned char *)name; 1442 - } while (c && (c != '/')); 1443 - this.len = name - (const char *) this.name; 1444 - this.hash = end_name_hash(hash); 1407 + this.len = len; 1445 1408 1446 1409 type = LAST_NORM; 1447 - if (this.name[0] == '.') switch (this.len) { 1410 + if (name[0] == '.') switch (len) { 1448 1411 case 2: 1449 - if (this.name[1] == '.') { 1412 + if (name[1] == '.') { 1450 1413 type = LAST_DOTDOT; 1451 1414 nd->flags |= LOOKUP_JUMPED; 1452 1415 } ··· 1456 1437 } 1457 1438 } 1458 1439 1459 - /* remove trailing slashes? */ 1460 - if (!c) 1440 + if (!name[len]) 1461 1441 goto last_component; 1462 - while (*++name == '/'); 1463 - if (!*name) 1442 + /* 1443 + * If it wasn't NUL, we know it was '/'. Skip that 1444 + * slash, and continue until no more slashes. 
1445 + */ 1446 + do { 1447 + len++; 1448 + } while (unlikely(name[len] == '/')); 1449 + if (!name[len]) 1464 1450 goto last_component; 1451 + name += len; 1465 1452 1466 1453 err = walk_component(nd, &next, &this, type, LOOKUP_FOLLOW); 1467 1454 if (err < 0) ··· 1800 1775 struct dentry *lookup_one_len(const char *name, struct dentry *base, int len) 1801 1776 { 1802 1777 struct qstr this; 1803 - unsigned long hash; 1804 1778 unsigned int c; 1805 1779 1806 1780 WARN_ON_ONCE(!mutex_is_locked(&base->d_inode->i_mutex)); 1807 1781 1808 1782 this.name = name; 1809 1783 this.len = len; 1784 + this.hash = full_name_hash(name, len); 1810 1785 if (!len) 1811 1786 return ERR_PTR(-EACCES); 1812 1787 1813 - hash = init_name_hash(); 1814 1788 while (len--) { 1815 1789 c = *(const unsigned char *)name++; 1816 1790 if (c == '/' || c == '\0') 1817 1791 return ERR_PTR(-EACCES); 1818 - hash = partial_name_hash(c, hash); 1819 1792 } 1820 - this.hash = end_name_hash(hash); 1821 1793 /* 1822 1794 * See if the low-level filesystem might want 1823 1795 * to use its own hash..
+1 -1
include/asm-generic/iomap.h
··· 70 70 /* Destroy a virtual mapping cookie for a PCI BAR (memory or IO) */ 71 71 struct pci_dev; 72 72 extern void pci_iounmap(struct pci_dev *dev, void __iomem *); 73 - #else 73 + #elif defined(CONFIG_GENERIC_IOMAP) 74 74 struct pci_dev; 75 75 static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr) 76 76 { }
+1 -1
include/asm-generic/pci_iomap.h
··· 25 25 #define __pci_ioport_map(dev, port, nr) ioport_map((port), (nr)) 26 26 #endif 27 27 28 - #else 28 + #elif defined(CONFIG_GENERIC_PCI_IOMAP) 29 29 static inline void __iomem *pci_iomap(struct pci_dev *dev, int bar, unsigned long max) 30 30 { 31 31 return NULL;
+1
include/drm/Kbuild
··· 2 2 header-y += drm_fourcc.h 3 3 header-y += drm_mode.h 4 4 header-y += drm_sarea.h 5 + header-y += exynos_drm.h 5 6 header-y += i810_drm.h 6 7 header-y += i915_drm.h 7 8 header-y += mga_drm.h
+19 -3
include/drm/exynos_drm.h
··· 97 97 #define DRM_IOCTL_EXYNOS_PLANE_SET_ZPOS DRM_IOWR(DRM_COMMAND_BASE + \ 98 98 DRM_EXYNOS_PLANE_SET_ZPOS, struct drm_exynos_plane_set_zpos) 99 99 100 + #ifdef __KERNEL__ 101 + 102 + /** 103 + * A structure for lcd panel information. 104 + * 105 + * @timing: default video mode for initializing 106 + * @width_mm: physical size of lcd width. 107 + * @height_mm: physical size of lcd height. 108 + */ 109 + struct exynos_drm_panel_info { 110 + struct fb_videomode timing; 111 + u32 width_mm; 112 + u32 height_mm; 113 + }; 114 + 100 115 /** 101 116 * Platform Specific Structure for DRM based FIMD. 102 117 * 103 - * @timing: default video mode for initializing 118 + * @panel: default panel info for initializing 104 119 * @default_win: default window layer number to be used for UI. 105 120 * @bpp: default bit per pixel. 106 121 */ 107 122 struct exynos_drm_fimd_pdata { 108 - struct fb_videomode timing; 123 + struct exynos_drm_panel_info panel; 109 124 u32 vidcon0; 110 125 u32 vidcon1; 111 126 unsigned int default_win; ··· 154 139 unsigned int bpp; 155 140 }; 156 141 157 - #endif 142 + #endif /* __KERNEL__ */ 143 + #endif /* _EXYNOS_DRM_H_ */
+2
include/linux/amba/serial.h
··· 23 23 #ifndef ASM_ARM_HARDWARE_SERIAL_AMBA_H 24 24 #define ASM_ARM_HARDWARE_SERIAL_AMBA_H 25 25 26 + #include <linux/types.h> 27 + 26 28 /* ------------------------------------------------------------------------------- 27 29 * From AMBA UART (PL010) Block Specification 28 30 * -------------------------------------------------------------------------------
+3 -30
include/linux/dcache.h
··· 47 47 }; 48 48 extern struct dentry_stat_t dentry_stat; 49 49 50 - /* 51 - * Compare 2 name strings, return 0 if they match, otherwise non-zero. 52 - * The strings are both count bytes long, and count is non-zero. 53 - */ 54 - static inline int dentry_cmp(const unsigned char *cs, size_t scount, 55 - const unsigned char *ct, size_t tcount) 56 - { 57 - int ret; 58 - if (scount != tcount) 59 - return 1; 60 - do { 61 - ret = (*cs != *ct); 62 - if (ret) 63 - break; 64 - cs++; 65 - ct++; 66 - tcount--; 67 - } while (tcount); 68 - return ret; 69 - } 70 - 71 50 /* Name hashing routines. Initial hash value */ 72 51 /* Hash courtesy of the R5 hash in reiserfs modulo sign bits */ 73 52 #define init_name_hash() 0 ··· 68 89 } 69 90 70 91 /* Compute the hash for a name string. */ 71 - static inline unsigned int 72 - full_name_hash(const unsigned char *name, unsigned int len) 73 - { 74 - unsigned long hash = init_name_hash(); 75 - while (len--) 76 - hash = partial_name_hash(*name++, hash); 77 - return end_name_hash(hash); 78 - } 92 + extern unsigned int full_name_hash(const unsigned char *, unsigned int); 79 93 80 94 /* 81 95 * Try to keep struct dentry aligned on 64 byte cachelines (this will ··· 281 309 extern struct dentry *d_lookup(struct dentry *, struct qstr *); 282 310 extern struct dentry *d_hash_and_lookup(struct dentry *, struct qstr *); 283 311 extern struct dentry *__d_lookup(struct dentry *, struct qstr *); 284 - extern struct dentry *__d_lookup_rcu(struct dentry *parent, struct qstr *name, 312 + extern struct dentry *__d_lookup_rcu(const struct dentry *parent, 313 + const struct qstr *name, 285 314 unsigned *seq, struct inode **inode); 286 315 287 316 /**
+7 -2
include/linux/kmsg_dump.h
··· 15 15 #include <linux/errno.h> 16 16 #include <linux/list.h> 17 17 18 + /* 19 + * Keep this list arranged in rough order of priority. Anything listed after 20 + * KMSG_DUMP_OOPS will not be logged by default unless printk.always_kmsg_dump 21 + * is passed to the kernel. 22 + */ 18 23 enum kmsg_dump_reason { 19 - KMSG_DUMP_OOPS, 20 24 KMSG_DUMP_PANIC, 25 + KMSG_DUMP_OOPS, 26 + KMSG_DUMP_EMERG, 21 27 KMSG_DUMP_RESTART, 22 28 KMSG_DUMP_HALT, 23 29 KMSG_DUMP_POWEROFF, 24 - KMSG_DUMP_EMERG, 25 30 }; 26 31 27 32 /**
-5
include/linux/memcontrol.h
··· 129 129 extern void mem_cgroup_replace_page_cache(struct page *oldpage, 130 130 struct page *newpage); 131 131 132 - extern void mem_cgroup_reset_owner(struct page *page); 133 132 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP 134 133 extern int do_swap_account; 135 134 #endif ··· 389 390 } 390 391 static inline void mem_cgroup_replace_page_cache(struct page *oldpage, 391 392 struct page *newpage) 392 - { 393 - } 394 - 395 - static inline void mem_cgroup_reset_owner(struct page *page) 396 393 { 397 394 } 398 395 #endif /* CONFIG_CGROUP_MEM_CONT */
+8
include/linux/of.h
··· 281 281 return NULL; 282 282 } 283 283 284 + static inline struct device_node *of_find_compatible_node( 285 + struct device_node *from, 286 + const char *type, 287 + const char *compat) 288 + { 289 + return NULL; 290 + } 291 + 284 292 static inline int of_property_read_u32_array(const struct device_node *np, 285 293 const char *propname, 286 294 u32 *out_values, size_t sz)
+15 -14
include/linux/percpu.h
··· 348 348 #define _this_cpu_generic_to_op(pcp, val, op) \ 349 349 do { \ 350 350 unsigned long flags; \ 351 - local_irq_save(flags); \ 351 + raw_local_irq_save(flags); \ 352 352 *__this_cpu_ptr(&(pcp)) op val; \ 353 - local_irq_restore(flags); \ 353 + raw_local_irq_restore(flags); \ 354 354 } while (0) 355 355 356 356 #ifndef this_cpu_write ··· 449 449 ({ \ 450 450 typeof(pcp) ret__; \ 451 451 unsigned long flags; \ 452 - local_irq_save(flags); \ 452 + raw_local_irq_save(flags); \ 453 453 __this_cpu_add(pcp, val); \ 454 454 ret__ = __this_cpu_read(pcp); \ 455 - local_irq_restore(flags); \ 455 + raw_local_irq_restore(flags); \ 456 456 ret__; \ 457 457 }) 458 458 ··· 479 479 #define _this_cpu_generic_xchg(pcp, nval) \ 480 480 ({ typeof(pcp) ret__; \ 481 481 unsigned long flags; \ 482 - local_irq_save(flags); \ 482 + raw_local_irq_save(flags); \ 483 483 ret__ = __this_cpu_read(pcp); \ 484 484 __this_cpu_write(pcp, nval); \ 485 - local_irq_restore(flags); \ 485 + raw_local_irq_restore(flags); \ 486 486 ret__; \ 487 487 }) 488 488 ··· 507 507 ({ \ 508 508 typeof(pcp) ret__; \ 509 509 unsigned long flags; \ 510 - local_irq_save(flags); \ 510 + raw_local_irq_save(flags); \ 511 511 ret__ = __this_cpu_read(pcp); \ 512 512 if (ret__ == (oval)) \ 513 513 __this_cpu_write(pcp, nval); \ 514 - local_irq_restore(flags); \ 514 + raw_local_irq_restore(flags); \ 515 515 ret__; \ 516 516 }) 517 517 ··· 544 544 ({ \ 545 545 int ret__; \ 546 546 unsigned long flags; \ 547 - local_irq_save(flags); \ 547 + raw_local_irq_save(flags); \ 548 548 ret__ = __this_cpu_generic_cmpxchg_double(pcp1, pcp2, \ 549 549 oval1, oval2, nval1, nval2); \ 550 - local_irq_restore(flags); \ 550 + raw_local_irq_restore(flags); \ 551 551 ret__; \ 552 552 }) 553 553 ··· 718 718 # ifndef __this_cpu_add_return_8 719 719 # define __this_cpu_add_return_8(pcp, val) __this_cpu_generic_add_return(pcp, val) 720 720 # endif 721 - # define __this_cpu_add_return(pcp, val) __pcpu_size_call_return2(this_cpu_add_return_, 
pcp, val) 721 + # define __this_cpu_add_return(pcp, val) \ 722 + __pcpu_size_call_return2(__this_cpu_add_return_, pcp, val) 722 723 #endif 723 724 724 - #define __this_cpu_sub_return(pcp, val) this_cpu_add_return(pcp, -(val)) 725 - #define __this_cpu_inc_return(pcp) this_cpu_add_return(pcp, 1) 726 - #define __this_cpu_dec_return(pcp) this_cpu_add_return(pcp, -1) 725 + #define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(val)) 726 + #define __this_cpu_inc_return(pcp) __this_cpu_add_return(pcp, 1) 727 + #define __this_cpu_dec_return(pcp) __this_cpu_add_return(pcp, -1) 727 728 728 729 #define __this_cpu_generic_xchg(pcp, nval) \ 729 730 ({ typeof(pcp) ret__; \
+8 -2
include/linux/regset.h
··· 335 335 { 336 336 const struct user_regset *regset = &view->regsets[setno]; 337 337 338 + if (!regset->get) 339 + return -EOPNOTSUPP; 340 + 338 341 if (!access_ok(VERIFY_WRITE, data, size)) 339 - return -EIO; 342 + return -EFAULT; 340 343 341 344 return regset->get(target, regset, offset, size, NULL, data); 342 345 } ··· 361 358 { 362 359 const struct user_regset *regset = &view->regsets[setno]; 363 360 361 + if (!regset->set) 362 + return -EOPNOTSUPP; 363 + 364 364 if (!access_ok(VERIFY_READ, data, size)) 365 - return -EIO; 365 + return -EFAULT; 366 366 367 367 return regset->set(target, regset, offset, size, NULL, data); 368 368 }
+1 -2
include/linux/sched.h
··· 1777 1777 /* 1778 1778 * Per process flags 1779 1779 */ 1780 - #define PF_STARTING 0x00000002 /* being created */ 1781 1780 #define PF_EXITING 0x00000004 /* getting shut down */ 1782 1781 #define PF_EXITPIDONE 0x00000008 /* pi exit done on shut down */ 1783 1782 #define PF_VCPU 0x00000010 /* I'm a virtual CPU */ ··· 2370 2371 * Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring 2371 2372 * subscriptions and synchronises with wait4(). Also used in procfs. Also 2372 2373 * pins the final release of task.io_context. Also protects ->cpuset and 2373 - * ->cgroup.subsys[]. 2374 + * ->cgroup.subsys[]. And ->vfork_done. 2374 2375 * 2375 2376 * Nests both inside and outside of read_lock(&tasklist_lock). 2376 2377 * It must not be nested with write_lock_irq(&tasklist_lock),
+2 -1
include/linux/tcp.h
··· 412 412 413 413 struct tcp_sack_block recv_sack_cache[4]; 414 414 415 - struct sk_buff *highest_sack; /* highest skb with SACK received 415 + struct sk_buff *highest_sack; /* skb just after the highest 416 + * skb with SACKed bit set 416 417 * (validity guaranteed only if 417 418 * sacked_out > 0) 418 419 */
+3 -1
include/net/inetpeer.h
··· 35 35 36 36 u32 metrics[RTAX_MAX]; 37 37 u32 rate_tokens; /* rate limiting for ICMP */ 38 - int redirect_genid; 39 38 unsigned long rate_last; 40 39 unsigned long pmtu_expires; 41 40 u32 pmtu_orig; 42 41 u32 pmtu_learned; 43 42 struct inetpeer_addr_base redirect_learned; 43 + struct list_head gc_list; 44 44 /* 45 45 * Once inet_peer is queued for deletion (refcnt == -1), following fields 46 46 * are not available: rid, ip_id_count, tcp_ts, tcp_ts_stamp ··· 95 95 /* can be called from BH context or outside */ 96 96 extern void inet_putpeer(struct inet_peer *p); 97 97 extern bool inet_peer_xrlim_allow(struct inet_peer *peer, int timeout); 98 + 99 + extern void inetpeer_invalidate_tree(int family); 98 100 99 101 /* 100 102 * temporary check to make sure we dont access rid, ip_id_count, tcp_ts,
+3 -2
include/net/tcp.h
··· 1364 1364 } 1365 1365 } 1366 1366 1367 - /* Start sequence of the highest skb with SACKed bit, valid only if 1368 - * sacked > 0 or when the caller has ensured validity by itself. 1367 + /* Start sequence of the skb just after the highest skb with SACKed 1368 + * bit, valid only if sacked_out > 0 or when the caller has ensured 1369 + * validity by itself. 1369 1370 */ 1370 1371 static inline u32 tcp_highest_sack_seq(struct tcp_sock *tp) 1371 1372 {
+39 -21
kernel/fork.c
··· 668 668 return mm; 669 669 } 670 670 671 + static void complete_vfork_done(struct task_struct *tsk) 672 + { 673 + struct completion *vfork; 674 + 675 + task_lock(tsk); 676 + vfork = tsk->vfork_done; 677 + if (likely(vfork)) { 678 + tsk->vfork_done = NULL; 679 + complete(vfork); 680 + } 681 + task_unlock(tsk); 682 + } 683 + 684 + static int wait_for_vfork_done(struct task_struct *child, 685 + struct completion *vfork) 686 + { 687 + int killed; 688 + 689 + freezer_do_not_count(); 690 + killed = wait_for_completion_killable(vfork); 691 + freezer_count(); 692 + 693 + if (killed) { 694 + task_lock(child); 695 + child->vfork_done = NULL; 696 + task_unlock(child); 697 + } 698 + 699 + put_task_struct(child); 700 + return killed; 701 + } 702 + 671 703 /* Please note the differences between mmput and mm_release. 672 704 * mmput is called whenever we stop holding onto a mm_struct, 673 705 * error success whatever. ··· 715 683 */ 716 684 void mm_release(struct task_struct *tsk, struct mm_struct *mm) 717 685 { 718 - struct completion *vfork_done = tsk->vfork_done; 719 - 720 686 /* Get rid of any futexes when releasing the mm */ 721 687 #ifdef CONFIG_FUTEX 722 688 if (unlikely(tsk->robust_list)) { ··· 734 704 /* Get rid of any cached register state */ 735 705 deactivate_mm(tsk, mm); 736 706 737 - /* notify parent sleeping on vfork() */ 738 - if (vfork_done) { 739 - tsk->vfork_done = NULL; 740 - complete(vfork_done); 741 - } 707 + if (tsk->vfork_done) 708 + complete_vfork_done(tsk); 742 709 743 710 /* 744 711 * If we're exiting normally, clear a user-space tid field if 745 712 * requested. We leave this alone when dying by signal, to leave 746 713 * the value intact in a core dump, and to save the unnecessary 747 - * trouble otherwise. Userland only wants this done for a sys_exit. 714 + * trouble, say, a killed vfork parent shouldn't touch this mm. 715 + * Userland only wants this done for a sys_exit. 
748 716 */ 749 717 if (tsk->clear_child_tid) { 750 718 if (!(tsk->flags & PF_SIGNALED) && ··· 1046 1018 1047 1019 new_flags &= ~(PF_SUPERPRIV | PF_WQ_WORKER); 1048 1020 new_flags |= PF_FORKNOEXEC; 1049 - new_flags |= PF_STARTING; 1050 1021 p->flags = new_flags; 1051 1022 } 1052 1023 ··· 1575 1548 if (clone_flags & CLONE_VFORK) { 1576 1549 p->vfork_done = &vfork; 1577 1550 init_completion(&vfork); 1551 + get_task_struct(p); 1578 1552 } 1579 - 1580 - /* 1581 - * We set PF_STARTING at creation in case tracing wants to 1582 - * use this to distinguish a fully live task from one that 1583 - * hasn't finished SIGSTOP raising yet. Now we clear it 1584 - * and set the child going. 1585 - */ 1586 - p->flags &= ~PF_STARTING; 1587 1553 1588 1554 wake_up_new_task(p); 1589 1555 ··· 1585 1565 ptrace_event(trace, nr); 1586 1566 1587 1567 if (clone_flags & CLONE_VFORK) { 1588 - freezer_do_not_count(); 1589 - wait_for_completion(&vfork); 1590 - freezer_count(); 1591 - ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); 1568 + if (!wait_for_vfork_done(p, &vfork)) 1569 + ptrace_event(PTRACE_EVENT_VFORK_DONE, nr); 1592 1570 } 1593 1571 } else { 1594 1572 nr = PTR_ERR(p);
+7 -4
kernel/hung_task.c
··· 119 119 * For preemptible RCU it is sufficient to call rcu_read_unlock in order 120 120 * to exit the grace period. For classic RCU, a reschedule is required. 121 121 */ 122 - static void rcu_lock_break(struct task_struct *g, struct task_struct *t) 122 + static bool rcu_lock_break(struct task_struct *g, struct task_struct *t) 123 123 { 124 + bool can_cont; 125 + 124 126 get_task_struct(g); 125 127 get_task_struct(t); 126 128 rcu_read_unlock(); 127 129 cond_resched(); 128 130 rcu_read_lock(); 131 + can_cont = pid_alive(g) && pid_alive(t); 129 132 put_task_struct(t); 130 133 put_task_struct(g); 134 + 135 + return can_cont; 131 136 } 132 137 133 138 /* ··· 159 154 goto unlock; 160 155 if (!--batch_count) { 161 156 batch_count = HUNG_TASK_BATCHING; 162 - rcu_lock_break(g, t); 163 - /* Exit if t or g was unhashed during refresh. */ 164 - if (t->state == TASK_DEAD || g->state == TASK_DEAD) 157 + if (!rcu_lock_break(g, t)) 165 158 goto unlock; 166 159 } 167 160 /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
+38 -6
kernel/irq/manage.c
··· 985 985 986 986 /* add new interrupt at end of irq queue */ 987 987 do { 988 + /* 989 + * Or all existing action->thread_mask bits, 990 + * so we can find the next zero bit for this 991 + * new action. 992 + */ 988 993 thread_mask |= old->thread_mask; 989 994 old_ptr = &old->next; 990 995 old = *old_ptr; ··· 998 993 } 999 994 1000 995 /* 1001 - * Setup the thread mask for this irqaction. Unlikely to have 1002 - * 32 resp 64 irqs sharing one line, but who knows. 996 + * Setup the thread mask for this irqaction for ONESHOT. For 997 + * !ONESHOT irqs the thread mask is 0 so we can avoid a 998 + * conditional in irq_wake_thread(). 1003 999 */ 1004 - if (new->flags & IRQF_ONESHOT && thread_mask == ~0UL) { 1005 - ret = -EBUSY; 1006 - goto out_mask; 1000 + if (new->flags & IRQF_ONESHOT) { 1001 + /* 1002 + * Unlikely to have 32 resp 64 irqs sharing one line, 1003 + * but who knows. 1004 + */ 1005 + if (thread_mask == ~0UL) { 1006 + ret = -EBUSY; 1007 + goto out_mask; 1008 + } 1009 + /* 1010 + * The thread_mask for the action is or'ed to 1011 + * desc->thread_active to indicate that the 1012 + * IRQF_ONESHOT thread handler has been woken, but not 1013 + * yet finished. The bit is cleared when a thread 1014 + * completes. When all threads of a shared interrupt 1015 + * line have completed desc->threads_active becomes 1016 + * zero and the interrupt line is unmasked. See 1017 + * handle.c:irq_wake_thread() for further information. 1018 + * 1019 + * If no thread is woken by primary (hard irq context) 1020 + * interrupt handlers, then desc->threads_active is 1021 + * also checked for zero to unmask the irq line in the 1022 + * affected hard irq flow handlers 1023 + * (handle_[fasteoi|level]_irq). 1024 + * 1025 + * The new action gets the first zero bit of 1026 + * thread_mask assigned. See the loop above which or's 1027 + * all existing action->thread_mask bits. 
1028 + */ 1029 + new->thread_mask = 1 << ffz(thread_mask); 1007 1030 } 1008 - new->thread_mask = 1 << ffz(thread_mask); 1009 1031 1010 1032 if (!shared) { 1011 1033 init_waitqueue_head(&desc->wait_for_threads);
+7 -5
kernel/kprobes.c
··· 1334 1334 if (!kernel_text_address((unsigned long) p->addr) || 1335 1335 in_kprobes_functions((unsigned long) p->addr) || 1336 1336 ftrace_text_reserved(p->addr, p->addr) || 1337 - jump_label_text_reserved(p->addr, p->addr)) 1338 - goto fail_with_jump_label; 1337 + jump_label_text_reserved(p->addr, p->addr)) { 1338 + ret = -EINVAL; 1339 + goto cannot_probe; 1340 + } 1339 1341 1340 1342 /* User can pass only KPROBE_FLAG_DISABLED to register_kprobe */ 1341 1343 p->flags &= KPROBE_FLAG_DISABLED; ··· 1354 1352 * its code to prohibit unexpected unloading. 1355 1353 */ 1356 1354 if (unlikely(!try_module_get(probed_mod))) 1357 - goto fail_with_jump_label; 1355 + goto cannot_probe; 1358 1356 1359 1357 /* 1360 1358 * If the module freed .init.text, we couldn't insert ··· 1363 1361 if (within_module_init((unsigned long)p->addr, probed_mod) && 1364 1362 probed_mod->state != MODULE_STATE_COMING) { 1365 1363 module_put(probed_mod); 1366 - goto fail_with_jump_label; 1364 + goto cannot_probe; 1367 1365 } 1368 1366 /* ret will be updated by following code */ 1369 1367 } ··· 1411 1409 1412 1410 return ret; 1413 1411 1414 - fail_with_jump_label: 1412 + cannot_probe: 1415 1413 preempt_enable(); 1416 1414 jump_label_unlock(); 1417 1415 return ret;
+6
kernel/printk.c
··· 707 707 #endif 708 708 module_param_named(time, printk_time, bool, S_IRUGO | S_IWUSR); 709 709 710 + static bool always_kmsg_dump; 711 + module_param_named(always_kmsg_dump, always_kmsg_dump, bool, S_IRUGO | S_IWUSR); 712 + 710 713 /* Check if we have any console registered that can be called early in boot. */ 711 714 static int have_callable_console(void) 712 715 { ··· 1739 1736 const char *s1, *s2; 1740 1737 unsigned long l1, l2; 1741 1738 unsigned long flags; 1739 + 1740 + if ((reason > KMSG_DUMP_OOPS) && !always_kmsg_dump) 1741 + return; 1742 1742 1743 1743 /* Theoretically, the log could move on after we do this, but 1744 1744 there's not a lot we can do about that. The new messages
+3 -11
lib/debugobjects.c
··· 818 818 if (obj->static_init == 1) { 819 819 debug_object_init(obj, &descr_type_test); 820 820 debug_object_activate(obj, &descr_type_test); 821 - /* 822 - * Real code should return 0 here ! This is 823 - * not a fixup of some bad behaviour. We 824 - * merily call the debug_init function to keep 825 - * track of the object. 826 - */ 827 - return 1; 828 - } else { 829 - /* Real code needs to emit a warning here */ 821 + return 0; 830 822 } 831 - return 0; 823 + return 1; 832 824 833 825 case ODEBUG_STATE_ACTIVE: 834 826 debug_object_deactivate(obj, &descr_type_test); ··· 959 967 960 968 obj.static_init = 1; 961 969 debug_object_activate(&obj, &descr_type_test); 962 - if (check_results(&obj, ODEBUG_STATE_ACTIVE, ++fixups, warnings)) 970 + if (check_results(&obj, ODEBUG_STATE_ACTIVE, fixups, warnings)) 963 971 goto out; 964 972 debug_object_init(&obj, &descr_type_test); 965 973 if (check_results(&obj, ODEBUG_STATE_INIT, ++fixups, ++warnings))
+9 -3
lib/vsprintf.c
··· 891 891 case 'U': 892 892 return uuid_string(buf, end, ptr, spec, fmt); 893 893 case 'V': 894 - return buf + vsnprintf(buf, end > buf ? end - buf : 0, 895 - ((struct va_format *)ptr)->fmt, 896 - *(((struct va_format *)ptr)->va)); 894 + { 895 + va_list va; 896 + 897 + va_copy(va, *((struct va_format *)ptr)->va); 898 + buf += vsnprintf(buf, end > buf ? end - buf : 0, 899 + ((struct va_format *)ptr)->fmt, va); 900 + va_end(va); 901 + return buf; 902 + } 897 903 case 'K': 898 904 /* 899 905 * %pK cannot be used in IRQ context because its test
+3 -3
mm/huge_memory.c
··· 671 671 set_pmd_at(mm, haddr, pmd, entry); 672 672 prepare_pmd_huge_pte(pgtable, mm); 673 673 add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR); 674 + mm->nr_ptes++; 674 675 spin_unlock(&mm->page_table_lock); 675 676 } 676 677 ··· 790 789 pmd = pmd_mkold(pmd_wrprotect(pmd)); 791 790 set_pmd_at(dst_mm, addr, dst_pmd, pmd); 792 791 prepare_pmd_huge_pte(pgtable, dst_mm); 792 + dst_mm->nr_ptes++; 793 793 794 794 ret = 0; 795 795 out_unlock: ··· 889 887 } 890 888 kfree(pages); 891 889 892 - mm->nr_ptes++; 893 890 smp_wmb(); /* make pte visible before pmd */ 894 891 pmd_populate(mm, pmd, pgtable); 895 892 page_remove_rmap(page); ··· 1048 1047 VM_BUG_ON(page_mapcount(page) < 0); 1049 1048 add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR); 1050 1049 VM_BUG_ON(!PageHead(page)); 1050 + tlb->mm->nr_ptes--; 1051 1051 spin_unlock(&tlb->mm->page_table_lock); 1052 1052 tlb_remove_page(tlb, page); 1053 1053 pte_free(tlb->mm, pgtable); ··· 1377 1375 pte_unmap(pte); 1378 1376 } 1379 1377 1380 - mm->nr_ptes++; 1381 1378 smp_wmb(); /* make pte visible before pmd */ 1382 1379 /* 1383 1380 * Up to this point the pmd is present and huge and ··· 1989 1988 set_pmd_at(mm, address, pmd, _pmd); 1990 1989 update_mmu_cache(vma, address, _pmd); 1991 1990 prepare_pmd_huge_pte(pgtable, mm); 1992 - mm->nr_ptes--; 1993 1991 spin_unlock(&mm->page_table_lock); 1994 1992 1995 1993 #ifndef CONFIG_NUMA
+1 -1
mm/hugetlb.c
··· 2277 2277 set_page_dirty(page); 2278 2278 list_add(&page->lru, &page_list); 2279 2279 } 2280 - spin_unlock(&mm->page_table_lock); 2281 2280 flush_tlb_range(vma, start, end); 2281 + spin_unlock(&mm->page_table_lock); 2282 2282 mmu_notifier_invalidate_range_end(mm, start, end); 2283 2283 list_for_each_entry_safe(page, tmp, &page_list, lru) { 2284 2284 page_remove_rmap(page);
-11
mm/ksm.c
··· 28 28 #include <linux/kthread.h> 29 29 #include <linux/wait.h> 30 30 #include <linux/slab.h> 31 - #include <linux/memcontrol.h> 32 31 #include <linux/rbtree.h> 33 32 #include <linux/memory.h> 34 33 #include <linux/mmu_notifier.h> ··· 1571 1572 1572 1573 new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address); 1573 1574 if (new_page) { 1574 - /* 1575 - * The memcg-specific accounting when moving 1576 - * pages around the LRU lists relies on the 1577 - * page's owner (memcg) to be valid. Usually, 1578 - * pages are assigned to a new owner before 1579 - * being put on the LRU list, but since this 1580 - * is not the case here, the stale owner from 1581 - * a previous allocation cycle must be reset. 1582 - */ 1583 - mem_cgroup_reset_owner(new_page); 1584 1575 copy_user_highpage(new_page, page, address, vma); 1585 1576 1586 1577 SetPageDirty(new_page);
+3 -3
mm/memblock.c
··· 99 99 phys_addr_t this_start, this_end, cand; 100 100 u64 i; 101 101 102 - /* align @size to avoid excessive fragmentation on reserved array */ 103 - size = round_up(size, align); 104 - 105 102 /* pump up @end */ 106 103 if (end == MEMBLOCK_ALLOC_ACCESSIBLE) 107 104 end = memblock.current_limit; ··· 727 730 int nid) 728 731 { 729 732 phys_addr_t found; 733 + 734 + /* align @size to avoid excessive fragmentation on reserved array */ 735 + size = round_up(size, align); 730 736 731 737 found = memblock_find_in_range_node(0, max_addr, size, align, nid); 732 738 if (found && !memblock_reserve(found, size))
+50 -52
mm/memcontrol.c
··· 1042 1042 1043 1043 pc = lookup_page_cgroup(page); 1044 1044 memcg = pc->mem_cgroup; 1045 + 1046 + /* 1047 + * Surreptitiously switch any uncharged page to root: 1048 + * an uncharged page off lru does nothing to secure 1049 + * its former mem_cgroup from sudden removal. 1050 + * 1051 + * Our caller holds lru_lock, and PageCgroupUsed is updated 1052 + * under page_cgroup lock: between them, they make all uses 1053 + * of pc->mem_cgroup safe. 1054 + */ 1055 + if (!PageCgroupUsed(pc) && memcg != root_mem_cgroup) 1056 + pc->mem_cgroup = memcg = root_mem_cgroup; 1057 + 1045 1058 mz = page_cgroup_zoneinfo(memcg, page); 1046 1059 /* compound_order() is stabilized through lru_lock */ 1047 1060 MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page); ··· 2421 2408 struct page *page, 2422 2409 unsigned int nr_pages, 2423 2410 struct page_cgroup *pc, 2424 - enum charge_type ctype) 2411 + enum charge_type ctype, 2412 + bool lrucare) 2425 2413 { 2414 + struct zone *uninitialized_var(zone); 2415 + bool was_on_lru = false; 2416 + 2426 2417 lock_page_cgroup(pc); 2427 2418 if (unlikely(PageCgroupUsed(pc))) { 2428 2419 unlock_page_cgroup(pc); ··· 2437 2420 * we don't need page_cgroup_lock about tail pages, becase they are not 2438 2421 * accessed by any other context at this point. 2439 2422 */ 2423 + 2424 + /* 2425 + * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page 2426 + * may already be on some other mem_cgroup's LRU. Take care of it. 2427 + */ 2428 + if (lrucare) { 2429 + zone = page_zone(page); 2430 + spin_lock_irq(&zone->lru_lock); 2431 + if (PageLRU(page)) { 2432 + ClearPageLRU(page); 2433 + del_page_from_lru_list(zone, page, page_lru(page)); 2434 + was_on_lru = true; 2435 + } 2436 + } 2437 + 2440 2438 pc->mem_cgroup = memcg; 2441 2439 /* 2442 2440 * We access a page_cgroup asynchronously without lock_page_cgroup(). 
··· 2475 2443 break; 2476 2444 } 2477 2445 2446 + if (lrucare) { 2447 + if (was_on_lru) { 2448 + VM_BUG_ON(PageLRU(page)); 2449 + SetPageLRU(page); 2450 + add_page_to_lru_list(zone, page, page_lru(page)); 2451 + } 2452 + spin_unlock_irq(&zone->lru_lock); 2453 + } 2454 + 2478 2455 mem_cgroup_charge_statistics(memcg, PageCgroupCache(pc), nr_pages); 2479 2456 unlock_page_cgroup(pc); 2480 - WARN_ON_ONCE(PageLRU(page)); 2457 + 2481 2458 /* 2482 2459 * "charge_statistics" updated event counter. Then, check it. 2483 2460 * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree. ··· 2684 2643 ret = __mem_cgroup_try_charge(mm, gfp_mask, nr_pages, &memcg, oom); 2685 2644 if (ret == -ENOMEM) 2686 2645 return ret; 2687 - __mem_cgroup_commit_charge(memcg, page, nr_pages, pc, ctype); 2646 + __mem_cgroup_commit_charge(memcg, page, nr_pages, pc, ctype, false); 2688 2647 return 0; 2689 2648 } 2690 2649 ··· 2703 2662 static void 2704 2663 __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *ptr, 2705 2664 enum charge_type ctype); 2706 - 2707 - static void 2708 - __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg, 2709 - enum charge_type ctype) 2710 - { 2711 - struct page_cgroup *pc = lookup_page_cgroup(page); 2712 - struct zone *zone = page_zone(page); 2713 - unsigned long flags; 2714 - bool removed = false; 2715 - 2716 - /* 2717 - * In some case, SwapCache, FUSE(splice_buf->radixtree), the page 2718 - * is already on LRU. It means the page may on some other page_cgroup's 2719 - * LRU. Take care of it. 
2720 - */ 2721 - spin_lock_irqsave(&zone->lru_lock, flags); 2722 - if (PageLRU(page)) { 2723 - del_page_from_lru_list(zone, page, page_lru(page)); 2724 - ClearPageLRU(page); 2725 - removed = true; 2726 - } 2727 - __mem_cgroup_commit_charge(memcg, page, 1, pc, ctype); 2728 - if (removed) { 2729 - add_page_to_lru_list(zone, page, page_lru(page)); 2730 - SetPageLRU(page); 2731 - } 2732 - spin_unlock_irqrestore(&zone->lru_lock, flags); 2733 - return; 2734 - } 2735 2665 2736 2666 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm, 2737 2667 gfp_t gfp_mask) ··· 2781 2769 __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg, 2782 2770 enum charge_type ctype) 2783 2771 { 2772 + struct page_cgroup *pc; 2773 + 2784 2774 if (mem_cgroup_disabled()) 2785 2775 return; 2786 2776 if (!memcg) 2787 2777 return; 2788 2778 cgroup_exclude_rmdir(&memcg->css); 2789 2779 2790 - __mem_cgroup_commit_charge_lrucare(page, memcg, ctype); 2780 + pc = lookup_page_cgroup(page); 2781 + __mem_cgroup_commit_charge(memcg, page, 1, pc, ctype, true); 2791 2782 /* 2792 2783 * Now swap is on-memory. This means this page may be 2793 2784 * counted both as mem and swap....double count. ··· 3042 3027 batch->memcg = NULL; 3043 3028 } 3044 3029 3045 - /* 3046 - * A function for resetting pc->mem_cgroup for newly allocated pages. 3047 - * This function should be called if the newpage will be added to LRU 3048 - * before start accounting. 3049 - */ 3050 - void mem_cgroup_reset_owner(struct page *newpage) 3051 - { 3052 - struct page_cgroup *pc; 3053 - 3054 - if (mem_cgroup_disabled()) 3055 - return; 3056 - 3057 - pc = lookup_page_cgroup(newpage); 3058 - VM_BUG_ON(PageCgroupUsed(pc)); 3059 - pc->mem_cgroup = root_mem_cgroup; 3060 - } 3061 - 3062 3030 #ifdef CONFIG_SWAP 3063 3031 /* 3064 3032 * called after __delete_from_swap_cache() and drop "page" account. 
··· 3246 3248 ctype = MEM_CGROUP_CHARGE_TYPE_CACHE; 3247 3249 else 3248 3250 ctype = MEM_CGROUP_CHARGE_TYPE_SHMEM; 3249 - __mem_cgroup_commit_charge(memcg, newpage, 1, pc, ctype); 3251 + __mem_cgroup_commit_charge(memcg, newpage, 1, pc, ctype, false); 3250 3252 return ret; 3251 3253 } 3252 3254 ··· 3330 3332 * the newpage may be on LRU(or pagevec for LRU) already. We lock 3331 3333 * LRU while we overwrite pc->mem_cgroup. 3332 3334 */ 3333 - __mem_cgroup_commit_charge_lrucare(newpage, memcg, type); 3335 + __mem_cgroup_commit_charge(memcg, newpage, 1, pc, type, true); 3334 3336 } 3335 3337 3336 3338 #ifdef CONFIG_DEBUG_VM
+2 -1
mm/mempolicy.c
··· 640 640 unsigned long vmstart; 641 641 unsigned long vmend; 642 642 643 - vma = find_vma_prev(mm, start, &prev); 643 + vma = find_vma(mm, start); 644 644 if (!vma || vma->vm_start > start) 645 645 return -EFAULT; 646 646 647 + prev = vma->vm_prev; 647 648 if (start > vma->vm_start) 648 649 prev = vma; 649 650
-2
mm/migrate.c
··· 839 839 if (!newpage) 840 840 return -ENOMEM; 841 841 842 - mem_cgroup_reset_owner(newpage); 843 - 844 842 if (page_count(page) == 1) { 845 843 /* page was freed from under us. So we are done. */ 846 844 goto out;
+2 -1
mm/mlock.c
··· 385 385 return -EINVAL; 386 386 if (end == start) 387 387 return 0; 388 - vma = find_vma_prev(current->mm, start, &prev); 388 + vma = find_vma(current->mm, start); 389 389 if (!vma || vma->vm_start > start) 390 390 return -ENOMEM; 391 391 392 + prev = vma->vm_prev; 392 393 if (start > vma->vm_start) 393 394 prev = vma; 394 395
+14 -3
mm/mmap.c
··· 1266 1266 vma->vm_pgoff = pgoff; 1267 1267 INIT_LIST_HEAD(&vma->anon_vma_chain); 1268 1268 1269 + error = -EINVAL; /* when rejecting VM_GROWSDOWN|VM_GROWSUP */ 1270 + 1269 1271 if (file) { 1270 - error = -EINVAL; 1271 1272 if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP)) 1272 1273 goto free_vma; 1273 1274 if (vm_flags & VM_DENYWRITE) { ··· 1294 1293 pgoff = vma->vm_pgoff; 1295 1294 vm_flags = vma->vm_flags; 1296 1295 } else if (vm_flags & VM_SHARED) { 1296 + if (unlikely(vm_flags & (VM_GROWSDOWN|VM_GROWSUP))) 1297 + goto free_vma; 1297 1298 error = shmem_zero_setup(vma); 1298 1299 if (error) 1299 1300 goto free_vma; ··· 1608 1605 1609 1606 /* 1610 1607 * Same as find_vma, but also return a pointer to the previous VMA in *pprev. 1611 - * Note: pprev is set to NULL when return value is NULL. 1612 1608 */ 1613 1609 struct vm_area_struct * 1614 1610 find_vma_prev(struct mm_struct *mm, unsigned long addr, ··· 1616 1614 struct vm_area_struct *vma; 1617 1615 1618 1616 vma = find_vma(mm, addr); 1619 - *pprev = vma ? vma->vm_prev : NULL; 1617 + if (vma) { 1618 + *pprev = vma->vm_prev; 1619 + } else { 1620 + struct rb_node *rb_node = mm->mm_rb.rb_node; 1621 + *pprev = NULL; 1622 + while (rb_node) { 1623 + *pprev = rb_entry(rb_node, struct vm_area_struct, vm_rb); 1624 + rb_node = rb_node->rb_right; 1625 + } 1626 + } 1620 1627 return vma; 1621 1628 } 1622 1629
+2 -1
mm/mprotect.c
··· 262 262 263 263 down_write(&current->mm->mmap_sem); 264 264 265 - vma = find_vma_prev(current->mm, start, &prev); 265 + vma = find_vma(current->mm, start); 266 266 error = -ENOMEM; 267 267 if (!vma) 268 268 goto out; 269 + prev = vma->vm_prev; 269 270 if (unlikely(grows & PROT_GROWSDOWN)) { 270 271 if (vma->vm_start >= end) 271 272 goto out;
+3 -1
mm/page_cgroup.c
··· 379 379 pgoff_t offset = swp_offset(ent); 380 380 struct swap_cgroup_ctrl *ctrl; 381 381 struct page *mappage; 382 + struct swap_cgroup *sc; 382 383 383 384 ctrl = &swap_cgroup_ctrl[swp_type(ent)]; 384 385 if (ctrlp) 385 386 *ctrlp = ctrl; 386 387 387 388 mappage = ctrl->map[offset / SC_PER_PAGE]; 388 - return page_address(mappage) + offset % SC_PER_PAGE; 389 + sc = page_address(mappage); 390 + return sc + offset % SC_PER_PAGE; 389 391 } 390 392 391 393 /**
+1 -2
mm/percpu-vm.c
··· 184 184 page_end - page_start); 185 185 } 186 186 187 - for (i = page_start; i < page_end; i++) 188 - __clear_bit(i, populated); 187 + bitmap_clear(populated, page_start, page_end - page_start); 189 188 } 190 189 191 190 /**
+5 -3
mm/swap.c
··· 652 652 void lru_add_page_tail(struct zone* zone, 653 653 struct page *page, struct page *page_tail) 654 654 { 655 - int active; 655 + int uninitialized_var(active); 656 656 enum lru_list lru; 657 657 const int file = 0; 658 658 ··· 672 672 active = 0; 673 673 lru = LRU_INACTIVE_ANON; 674 674 } 675 - update_page_reclaim_stat(zone, page_tail, file, active); 676 675 } else { 677 676 SetPageUnevictable(page_tail); 678 677 lru = LRU_UNEVICTABLE; ··· 692 693 list_head = page_tail->lru.prev; 693 694 list_move_tail(&page_tail->lru, list_head); 694 695 } 696 + 697 + if (!PageUnevictable(page)) 698 + update_page_reclaim_stat(zone, page_tail, file, active); 695 699 } 696 700 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 697 701 ··· 712 710 SetPageLRU(page); 713 711 if (active) 714 712 SetPageActive(page); 715 - update_page_reclaim_stat(zone, page, file, active); 716 713 add_page_to_lru_list(zone, page, lru); 714 + update_page_reclaim_stat(zone, page, file, active); 717 715 } 718 716 719 717 /*
-10
mm/swap_state.c
··· 300 300 new_page = alloc_page_vma(gfp_mask, vma, addr); 301 301 if (!new_page) 302 302 break; /* Out of memory */ 303 - /* 304 - * The memcg-specific accounting when moving 305 - * pages around the LRU lists relies on the 306 - * page's owner (memcg) to be valid. Usually, 307 - * pages are assigned to a new owner before 308 - * being put on the LRU list, but since this 309 - * is not the case here, the stale owner from 310 - * a previous allocation cycle must be reset. 311 - */ 312 - mem_cgroup_reset_owner(new_page); 313 303 } 314 304 315 305 /*
+5 -2
net/bridge/br_multicast.c
··· 446 446 ip6h->nexthdr = IPPROTO_HOPOPTS; 447 447 ip6h->hop_limit = 1; 448 448 ipv6_addr_set(&ip6h->daddr, htonl(0xff020000), 0, 0, htonl(1)); 449 - ipv6_dev_get_saddr(dev_net(br->dev), br->dev, &ip6h->daddr, 0, 450 - &ip6h->saddr); 449 + if (ipv6_dev_get_saddr(dev_net(br->dev), br->dev, &ip6h->daddr, 0, 450 + &ip6h->saddr)) { 451 + kfree_skb(skb); 452 + return NULL; 453 + } 451 454 ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest); 452 455 453 456 hopopt = (u8 *)(ip6h + 1);
+18 -14
net/bridge/br_netfilter.c
··· 62 62 #define brnf_filter_pppoe_tagged 0
63 63 #endif
64 64 
65 + #define IS_IP(skb) \
66 + (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_IP))
67 + 
68 + #define IS_IPV6(skb) \
69 + (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_IPV6))
70 + 
71 + #define IS_ARP(skb) \
72 + (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_ARP))
73 + 
65 74 static inline __be16 vlan_proto(const struct sk_buff *skb)
66 75 {
67 76 if (vlan_tx_tag_present(skb))
··· 648 639 return NF_DROP;
649 640 br = p->br;
650 641 
651 - if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) ||
652 - IS_PPPOE_IPV6(skb)) {
642 + if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb)) {
653 643 if (!brnf_call_ip6tables && !br->nf_call_ip6tables)
654 644 return NF_ACCEPT;
655 645 
··· 659 651 if (!brnf_call_iptables && !br->nf_call_iptables)
660 652 return NF_ACCEPT;
661 653 
662 - if (skb->protocol != htons(ETH_P_IP) && !IS_VLAN_IP(skb) &&
663 - !IS_PPPOE_IP(skb))
654 + if (!IS_IP(skb) && !IS_VLAN_IP(skb) && !IS_PPPOE_IP(skb))
664 655 return NF_ACCEPT;
665 656 
666 657 nf_bridge_pull_encap_header_rcsum(skb);
··· 708 701 struct nf_bridge_info *nf_bridge = skb->nf_bridge;
709 702 struct net_device *in;
710 703 
711 - if (skb->protocol != htons(ETH_P_ARP) && !IS_VLAN_ARP(skb)) {
704 + if (!IS_ARP(skb) && !IS_VLAN_ARP(skb)) {
712 705 in = nf_bridge->physindev;
713 706 if (nf_bridge->mask & BRNF_PKT_TYPE) {
714 707 skb->pkt_type = PACKET_OTHERHOST;
··· 724 717 skb->dev, br_forward_finish, 1);
725 718 return 0;
726 719 }
720 + 
727 721 
728 722 /* This is the 'purely bridged' case.  For IP, we pass the packet to
729 723 * netfilter with indev and outdev set to the bridge device,
··· 752 744 if (!parent)
753 745 return NF_DROP;
754 746 
755 - if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb) ||
756 - IS_PPPOE_IP(skb))
747 + if (IS_IP(skb) || IS_VLAN_IP(skb) || IS_PPPOE_IP(skb))
757 748 pf = PF_INET;
758 - else if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) ||
759 - IS_PPPOE_IPV6(skb))
749 + else if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb))
760 750 pf = PF_INET6;
761 751 else
762 752 return NF_ACCEPT;
··· 801 795 if (!brnf_call_arptables && !br->nf_call_arptables)
802 796 return NF_ACCEPT;
803 797 
804 - if (skb->protocol != htons(ETH_P_ARP)) {
798 + if (!IS_ARP(skb)) {
805 799 if (!IS_VLAN_ARP(skb))
806 800 return NF_ACCEPT;
807 801 nf_bridge_pull_encap_header(skb);
··· 859 853 if (!realoutdev)
860 854 return NF_DROP;
861 855 
862 - if (skb->protocol == htons(ETH_P_IP) || IS_VLAN_IP(skb) ||
863 - IS_PPPOE_IP(skb))
856 + if (IS_IP(skb) || IS_VLAN_IP(skb) || IS_PPPOE_IP(skb))
864 857 pf = PF_INET;
865 - else if (skb->protocol == htons(ETH_P_IPV6) || IS_VLAN_IPV6(skb) ||
866 - IS_PPPOE_IPV6(skb))
858 + else if (IS_IPV6(skb) || IS_VLAN_IPV6(skb) || IS_PPPOE_IPV6(skb))
867 859 pf = PF_INET6;
868 860 else
869 861 return NF_ACCEPT;
+4 -4
net/bridge/br_stp.c
··· 17 17 #include "br_private_stp.h" 18 18 19 19 /* since time values in bpdu are in jiffies and then scaled (1/256) 20 - * before sending, make sure that is at least one. 20 + * before sending, make sure that is at least one STP tick. 21 21 */ 22 - #define MESSAGE_AGE_INCR ((HZ < 256) ? 1 : (HZ/256)) 22 + #define MESSAGE_AGE_INCR ((HZ / 256) + 1) 23 23 24 24 static const char *const br_port_state_names[] = { 25 25 [BR_STATE_DISABLED] = "disabled", ··· 31 31 32 32 void br_log_state(const struct net_bridge_port *p) 33 33 { 34 - br_info(p->br, "port %u(%s) entering %s state\n", 34 + br_info(p->br, "port %u(%s) entered %s state\n", 35 35 (unsigned) p->port_no, p->dev->name, 36 36 br_port_state_names[p->state]); 37 37 } ··· 186 186 p->designated_cost = bpdu->root_path_cost; 187 187 p->designated_bridge = bpdu->bridge_id; 188 188 p->designated_port = bpdu->port_id; 189 - p->designated_age = jiffies + bpdu->message_age; 189 + p->designated_age = jiffies - bpdu->message_age; 190 190 191 191 mod_timer(&p->message_age_timer, jiffies 192 192 + (p->br->max_age - bpdu->message_age));
+1 -2
net/bridge/br_stp_if.c
··· 98 98 struct net_bridge *br = p->br; 99 99 int wasroot; 100 100 101 - br_log_state(p); 102 - 103 101 wasroot = br_is_root_bridge(br); 104 102 br_become_designated_port(p); 105 103 p->state = BR_STATE_DISABLED; 106 104 p->topology_change_ack = 0; 107 105 p->config_pending = 0; 108 106 107 + br_log_state(p); 109 108 br_ifinfo_notify(RTM_NEWLINK, p); 110 109 111 110 del_timer(&p->message_age_timer);
+15 -11
net/bridge/netfilter/ebtables.c
··· 1335 1335 const char *base, char __user *ubase)
1336 1336 {
1337 1337 char __user *hlp = ubase + ((char *)m - base);
1338 - if (copy_to_user(hlp, m->u.match->name, EBT_FUNCTION_MAXNAMELEN))
1338 + char name[EBT_FUNCTION_MAXNAMELEN] = {};
1339 + 
1340 + /* ebtables expects 32 bytes long names but xt_match names are 29 bytes
1341 + long. Copy 29 bytes and fill remaining bytes with zeroes. */
1342 + strncpy(name, m->u.match->name, sizeof(name));
1343 + if (copy_to_user(hlp, name, EBT_FUNCTION_MAXNAMELEN))
1339 1344 return -EFAULT;
1340 1345 return 0;
1341 1346 }
··· 1349 1344 const char *base, char __user *ubase)
1350 1345 {
1351 1346 char __user *hlp = ubase + ((char *)w - base);
1352 - if (copy_to_user(hlp , w->u.watcher->name, EBT_FUNCTION_MAXNAMELEN))
1347 + char name[EBT_FUNCTION_MAXNAMELEN] = {};
1348 + 
1349 + strncpy(name, w->u.watcher->name, sizeof(name));
1350 + if (copy_to_user(hlp , name, EBT_FUNCTION_MAXNAMELEN))
1353 1351 return -EFAULT;
1354 1352 return 0;
1355 1353 }
··· 1363 1355 int ret;
1364 1356 char __user *hlp;
1365 1357 const struct ebt_entry_target *t;
1358 + char name[EBT_FUNCTION_MAXNAMELEN] = {};
1366 1359 
1367 1360 if (e->bitmask == 0)
1368 1361 return 0;
··· 1377 1368 ret = EBT_WATCHER_ITERATE(e, ebt_make_watchername, base, ubase);
1378 1369 if (ret != 0)
1379 1370 return ret;
1380 - if (copy_to_user(hlp, t->u.target->name, EBT_FUNCTION_MAXNAMELEN))
1371 + strncpy(name, t->u.target->name, sizeof(name));
1372 + if (copy_to_user(hlp, name, EBT_FUNCTION_MAXNAMELEN))
1381 1373 return -EFAULT;
1382 1374 return 0;
1383 1375 }
··· 1903 1893 
1904 1894 switch (compat_mwt) {
1905 1895 case EBT_COMPAT_MATCH:
1906 - match = try_then_request_module(xt_find_match(NFPROTO_BRIDGE,
1907 - name, 0), "ebt_%s", name);
1908 - if (match == NULL)
1909 - return -ENOENT;
1896 + match = xt_request_find_match(NFPROTO_BRIDGE, name, 0);
1910 1897 if (IS_ERR(match))
1911 1898 return PTR_ERR(match);
1912 1899 
··· 1922 1915 break;
1923 1916 case EBT_COMPAT_WATCHER: /* fallthrough */
1924 1917 case EBT_COMPAT_TARGET:
1925 - wt = try_then_request_module(xt_find_target(NFPROTO_BRIDGE,
1926 - name, 0), "ebt_%s", name);
1927 - if (wt == NULL)
1928 - return -ENOENT;
1918 + wt = xt_request_find_target(NFPROTO_BRIDGE, name, 0);
1929 1919 if (IS_ERR(wt))
1930 1920 return PTR_ERR(wt);
1931 1921 off = xt_compat_target_offset(wt);
+10 -8
net/core/rtnetlink.c
··· 1060 1060 rcu_read_lock(); 1061 1061 cb->seq = net->dev_base_seq; 1062 1062 1063 - nlmsg_parse(cb->nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1064 - ifla_policy); 1063 + if (nlmsg_parse(cb->nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1064 + ifla_policy) >= 0) { 1065 1065 1066 - if (tb[IFLA_EXT_MASK]) 1067 - ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1066 + if (tb[IFLA_EXT_MASK]) 1067 + ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1068 + } 1068 1069 1069 1070 for (h = s_h; h < NETDEV_HASHENTRIES; h++, s_idx = 0) { 1070 1071 idx = 0; ··· 1901 1900 u32 ext_filter_mask = 0; 1902 1901 u16 min_ifinfo_dump_size = 0; 1903 1902 1904 - nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, ifla_policy); 1905 - 1906 - if (tb[IFLA_EXT_MASK]) 1907 - ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1903 + if (nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, IFLA_MAX, 1904 + ifla_policy) >= 0) { 1905 + if (tb[IFLA_EXT_MASK]) 1906 + ext_filter_mask = nla_get_u32(tb[IFLA_EXT_MASK]); 1907 + } 1908 1908 1909 1909 if (!ext_filter_mask) 1910 1910 return NLMSG_GOODSIZE;
+79 -2
net/ipv4/inetpeer.c
··· 17 17 #include <linux/kernel.h>
18 18 #include <linux/mm.h>
19 19 #include <linux/net.h>
20 + #include <linux/workqueue.h>
20 21 #include <net/ip.h>
21 22 #include <net/inetpeer.h>
22 23 #include <net/secure_seq.h>
··· 67 66 
68 67 static struct kmem_cache *peer_cachep __read_mostly;
69 68 
69 + static LIST_HEAD(gc_list);
70 + static const int gc_delay = 60 * HZ;
71 + static struct delayed_work gc_work;
72 + static DEFINE_SPINLOCK(gc_lock);
73 + 
70 74 #define node_height(x) x->avl_height
71 75 
72 76 #define peer_avl_empty ((struct inet_peer *)&peer_fake_node)
··· 108 102 int inet_peer_minttl __read_mostly = 120 * HZ; /* TTL under high load: 120 sec */
109 103 int inet_peer_maxttl __read_mostly = 10 * 60 * HZ; /* usual time to live: 10 min */
110 104 
105 + static void inetpeer_gc_worker(struct work_struct *work)
106 + {
107 + struct inet_peer *p, *n;
108 + LIST_HEAD(list);
109 + 
110 + spin_lock_bh(&gc_lock);
111 + list_replace_init(&gc_list, &list);
112 + spin_unlock_bh(&gc_lock);
113 + 
114 + if (list_empty(&list))
115 + return;
116 + 
117 + list_for_each_entry_safe(p, n, &list, gc_list) {
118 + 
119 + if(need_resched())
120 + cond_resched();
121 + 
122 + if (p->avl_left != peer_avl_empty) {
123 + list_add_tail(&p->avl_left->gc_list, &list);
124 + p->avl_left = peer_avl_empty;
125 + }
126 + 
127 + if (p->avl_right != peer_avl_empty) {
128 + list_add_tail(&p->avl_right->gc_list, &list);
129 + p->avl_right = peer_avl_empty;
130 + }
131 + 
132 + n = list_entry(p->gc_list.next, struct inet_peer, gc_list);
133 + 
134 + if (!atomic_read(&p->refcnt)) {
135 + list_del(&p->gc_list);
136 + kmem_cache_free(peer_cachep, p);
137 + }
138 + }
139 + 
140 + if (list_empty(&list))
141 + return;
142 + 
143 + spin_lock_bh(&gc_lock);
144 + list_splice(&list, &gc_list);
145 + spin_unlock_bh(&gc_lock);
146 + 
147 + schedule_delayed_work(&gc_work, gc_delay);
148 + }
111 149 
112 150 /* Called from ip_output.c:ip_init */
113 151 void __init inet_initpeers(void)
··· 176 126 0, SLAB_HWCACHE_ALIGN | SLAB_PANIC,
177 127 NULL);
178 128 
129 + INIT_DELAYED_WORK_DEFERRABLE(&gc_work, inetpeer_gc_worker);
179 130 }
180 131 
181 132 static int addr_compare(const struct inetpeer_addr *a,
··· 498 447 p->rate_last = 0;
499 448 p->pmtu_expires = 0;
500 449 p->pmtu_orig = 0;
501 - p->redirect_genid = 0;
502 450 memset(&p->redirect_learned, 0, sizeof(p->redirect_learned));
503 - 
451 + INIT_LIST_HEAD(&p->gc_list);
504 452 
505 453 /* Link the node. */
506 454 link_to_pool(p, base);
··· 559 509 return rc;
560 510 }
561 511 EXPORT_SYMBOL(inet_peer_xrlim_allow);
512 + 
513 + void inetpeer_invalidate_tree(int family)
514 + {
515 + struct inet_peer *old, *new, *prev;
516 + struct inet_peer_base *base = family_to_base(family);
517 + 
518 + write_seqlock_bh(&base->lock);
519 + 
520 + old = base->root;
521 + if (old == peer_avl_empty_rcu)
522 + goto out;
523 + 
524 + new = peer_avl_empty_rcu;
525 + 
526 + prev = cmpxchg(&base->root, old, new);
527 + if (prev == old) {
528 + base->total = 0;
529 + spin_lock(&gc_lock);
530 + list_add_tail(&prev->gc_list, &gc_list);
531 + spin_unlock(&gc_lock);
532 + schedule_delayed_work(&gc_work, gc_delay);
533 + }
534 + 
535 + out:
536 + write_sequnlock_bh(&base->lock);
537 + }
538 + EXPORT_SYMBOL(inetpeer_invalidate_tree);
+3 -9
net/ipv4/route.c
··· 132 132 static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20; 133 133 static int ip_rt_min_advmss __read_mostly = 256; 134 134 static int rt_chain_length_max __read_mostly = 20; 135 - static int redirect_genid; 136 135 137 136 static struct delayed_work expires_work; 138 137 static unsigned long expires_ljiffies; ··· 936 937 937 938 get_random_bytes(&shuffle, sizeof(shuffle)); 938 939 atomic_add(shuffle + 1U, &net->ipv4.rt_genid); 939 - redirect_genid++; 940 + inetpeer_invalidate_tree(AF_INET); 940 941 } 941 942 942 943 /* ··· 1484 1485 1485 1486 peer = rt->peer; 1486 1487 if (peer) { 1487 - if (peer->redirect_learned.a4 != new_gw || 1488 - peer->redirect_genid != redirect_genid) { 1488 + if (peer->redirect_learned.a4 != new_gw) { 1489 1489 peer->redirect_learned.a4 = new_gw; 1490 - peer->redirect_genid = redirect_genid; 1491 1490 atomic_inc(&__rt_peer_genid); 1492 1491 } 1493 1492 check_peer_redir(&rt->dst, peer); ··· 1790 1793 if (peer) { 1791 1794 check_peer_pmtu(&rt->dst, peer); 1792 1795 1793 - if (peer->redirect_genid != redirect_genid) 1794 - peer->redirect_learned.a4 = 0; 1795 1796 if (peer->redirect_learned.a4 && 1796 1797 peer->redirect_learned.a4 != rt->rt_gateway) 1797 1798 check_peer_redir(&rt->dst, peer); ··· 1953 1958 dst_init_metrics(&rt->dst, peer->metrics, false); 1954 1959 1955 1960 check_peer_pmtu(&rt->dst, peer); 1956 - if (peer->redirect_genid != redirect_genid) 1957 - peer->redirect_learned.a4 = 0; 1961 + 1958 1962 if (peer->redirect_learned.a4 && 1959 1963 peer->redirect_learned.a4 != rt->rt_gateway) { 1960 1964 rt->rt_gateway = peer->redirect_learned.a4;
+15 -8
net/ipv4/tcp_input.c
··· 1403 1403 1404 1404 BUG_ON(!pcount); 1405 1405 1406 - /* Adjust hint for FACK. Non-FACK is handled in tcp_sacktag_one(). */ 1407 - if (tcp_is_fack(tp) && (skb == tp->lost_skb_hint)) 1406 + /* Adjust counters and hints for the newly sacked sequence 1407 + * range but discard the return value since prev is already 1408 + * marked. We must tag the range first because the seq 1409 + * advancement below implicitly advances 1410 + * tcp_highest_sack_seq() when skb is highest_sack. 1411 + */ 1412 + tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked, 1413 + start_seq, end_seq, dup_sack, pcount); 1414 + 1415 + if (skb == tp->lost_skb_hint) 1408 1416 tp->lost_cnt_hint += pcount; 1409 1417 1410 1418 TCP_SKB_CB(prev)->end_seq += shifted; ··· 1437 1429 skb_shinfo(skb)->gso_size = 0; 1438 1430 skb_shinfo(skb)->gso_type = 0; 1439 1431 } 1440 - 1441 - /* Adjust counters and hints for the newly sacked sequence range but 1442 - * discard the return value since prev is already marked. 1443 - */ 1444 - tcp_sacktag_one(sk, state, TCP_SKB_CB(skb)->sacked, 1445 - start_seq, end_seq, dup_sack, pcount); 1446 1432 1447 1433 /* Difference in this won't matter, both ACKed by the same cumul. ACK */ 1448 1434 TCP_SKB_CB(prev)->sacked |= (TCP_SKB_CB(skb)->sacked & TCPCB_EVER_RETRANS); ··· 1584 1582 len = pcount * mss; 1585 1583 } 1586 1584 } 1585 + 1586 + /* tcp_sacktag_one() won't SACK-tag ranges below snd_una */ 1587 + if (!after(TCP_SKB_CB(skb)->seq + len, tp->snd_una)) 1588 + goto fallback; 1587 1589 1588 1590 if (!skb_shift(prev, skb, len)) 1589 1591 goto fallback; ··· 2573 2567 2574 2568 if (cnt > packets) { 2575 2569 if ((tcp_is_sack(tp) && !tcp_is_fack(tp)) || 2570 + (TCP_SKB_CB(skb)->sacked & TCPCB_SACKED_ACKED) || 2576 2571 (oldcnt >= packets)) 2577 2572 break; 2578 2573
+4
net/ipv6/addrconf.c
··· 434 434 /* Join all-node multicast group */ 435 435 ipv6_dev_mc_inc(dev, &in6addr_linklocal_allnodes); 436 436 437 + /* Join all-router multicast group if forwarding is set */ 438 + if (ndev->cnf.forwarding && dev && (dev->flags & IFF_MULTICAST)) 439 + ipv6_dev_mc_inc(dev, &in6addr_linklocal_allrouters); 440 + 437 441 return ndev; 438 442 } 439 443
+3
net/mac80211/iface.c
··· 1332 1332 hw_roc = true; 1333 1333 1334 1334 list_for_each_entry(sdata, &local->interfaces, list) { 1335 + if (sdata->vif.type == NL80211_IFTYPE_MONITOR || 1336 + sdata->vif.type == NL80211_IFTYPE_AP_VLAN) 1337 + continue; 1335 1338 if (sdata->old_idle == sdata->vif.bss_conf.idle) 1336 1339 continue; 1337 1340 if (!ieee80211_sdata_running(sdata))
+1 -1
net/mac80211/rate.c
··· 344 344 for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) { 345 345 info->control.rates[i].idx = -1; 346 346 info->control.rates[i].flags = 0; 347 - info->control.rates[i].count = 1; 347 + info->control.rates[i].count = 0; 348 348 } 349 349 350 350 if (sdata->local->hw.flags & IEEE80211_HW_HAS_RATE_CONTROL)
+6 -2
net/netfilter/nf_conntrack_core.c
··· 635 635 636 636 if (del_timer(&ct->timeout)) { 637 637 death_by_timeout((unsigned long)ct); 638 - dropped = 1; 639 - NF_CT_STAT_INC_ATOMIC(net, early_drop); 638 + /* Check if we indeed killed this entry. Reliable event 639 + delivery may have inserted it into the dying list. */ 640 + if (test_bit(IPS_DYING_BIT, &ct->status)) { 641 + dropped = 1; 642 + NF_CT_STAT_INC_ATOMIC(net, early_drop); 643 + } 640 644 } 641 645 nf_ct_put(ct); 642 646 return dropped;
-3
net/netfilter/nf_conntrack_netlink.c
··· 1041 1041 if (!parse_nat_setup) { 1042 1042 #ifdef CONFIG_MODULES 1043 1043 rcu_read_unlock(); 1044 - spin_unlock_bh(&nf_conntrack_lock); 1045 1044 nfnl_unlock(); 1046 1045 if (request_module("nf-nat-ipv4") < 0) { 1047 1046 nfnl_lock(); 1048 - spin_lock_bh(&nf_conntrack_lock); 1049 1047 rcu_read_lock(); 1050 1048 return -EOPNOTSUPP; 1051 1049 } 1052 1050 nfnl_lock(); 1053 - spin_lock_bh(&nf_conntrack_lock); 1054 1051 rcu_read_lock(); 1055 1052 if (nfnetlink_parse_nat_setup_hook) 1056 1053 return -EAGAIN;
+32 -12
net/openvswitch/actions.c
···
1 1 /*
2 - * Copyright (c) 2007-2011 Nicira Networks.
2 + * Copyright (c) 2007-2012 Nicira Networks.
3 3  *
4 4  * This program is free software; you can redistribute it and/or
5 5  * modify it under the terms of version 2 of the GNU General Public
···
145 145 		inet_proto_csum_replace4(&tcp_hdr(skb)->check, skb,
146 146 					 *addr, new_addr, 1);
147 147 	} else if (nh->protocol == IPPROTO_UDP) {
148 -		if (likely(transport_len >= sizeof(struct udphdr)))
149 -			inet_proto_csum_replace4(&udp_hdr(skb)->check, skb,
150 -						 *addr, new_addr, 1);
148 +		if (likely(transport_len >= sizeof(struct udphdr))) {
149 +			struct udphdr *uh = udp_hdr(skb);
150 +
151 +			if (uh->check || skb->ip_summed == CHECKSUM_PARTIAL) {
152 +				inet_proto_csum_replace4(&uh->check, skb,
153 +							 *addr, new_addr, 1);
154 +				if (!uh->check)
155 +					uh->check = CSUM_MANGLED_0;
156 +			}
157 +		}
151 158 	}
152 159
153 160 	csum_replace4(&nh->check, *addr, new_addr);
···
204 197 	skb->rxhash = 0;
205 198 }
206 199
207 -static int set_udp_port(struct sk_buff *skb,
208 -			const struct ovs_key_udp *udp_port_key)
200 +static void set_udp_port(struct sk_buff *skb, __be16 *port, __be16 new_port)
201 +{
202 +	struct udphdr *uh = udp_hdr(skb);
203 +
204 +	if (uh->check && skb->ip_summed != CHECKSUM_PARTIAL) {
205 +		set_tp_port(skb, port, new_port, &uh->check);
206 +
207 +		if (!uh->check)
208 +			uh->check = CSUM_MANGLED_0;
209 +	} else {
210 +		*port = new_port;
211 +		skb->rxhash = 0;
212 +	}
213 +}
214 +
215 +static int set_udp(struct sk_buff *skb, const struct ovs_key_udp *udp_port_key)
209 216 {
210 217 	struct udphdr *uh;
211 218 	int err;
···
231 210
232 211 	uh = udp_hdr(skb);
233 212 	if (udp_port_key->udp_src != uh->source)
234 -		set_tp_port(skb, &uh->source, udp_port_key->udp_src, &uh->check);
213 +		set_udp_port(skb, &uh->source, udp_port_key->udp_src);
235 214
236 215 	if (udp_port_key->udp_dst != uh->dest)
237 -		set_tp_port(skb, &uh->dest, udp_port_key->udp_dst, &uh->check);
216 +		set_udp_port(skb, &uh->dest, udp_port_key->udp_dst);
238 217
239 218 	return 0;
240 219 }
241 220
242 -static int set_tcp_port(struct sk_buff *skb,
243 -			const struct ovs_key_tcp *tcp_port_key)
221 +static int set_tcp(struct sk_buff *skb, const struct ovs_key_tcp *tcp_port_key)
244 222 {
245 223 	struct tcphdr *th;
246 224 	int err;
···
348 328 		break;
349 329
350 330 	case OVS_KEY_ATTR_TCP:
351 -		err = set_tcp_port(skb, nla_data(nested_attr));
331 +		err = set_tcp(skb, nla_data(nested_attr));
352 332 		break;
353 333
354 334 	case OVS_KEY_ATTR_UDP:
355 -		err = set_udp_port(skb, nla_data(nested_attr));
335 +		err = set_udp(skb, nla_data(nested_attr));
356 336 		break;
357 337 	}
358 338
+3
net/openvswitch/datapath.c
···
1521 1521 		vport = ovs_vport_locate(nla_data(a[OVS_VPORT_ATTR_NAME]));
1522 1522 		if (!vport)
1523 1523 			return ERR_PTR(-ENODEV);
1524 +		if (ovs_header->dp_ifindex &&
1525 +		    ovs_header->dp_ifindex != get_dpifindex(vport->dp))
1526 +			return ERR_PTR(-ENODEV);
1524 1527 		return vport;
1525 1528 	} else if (a[OVS_VPORT_ATTR_PORT_NO]) {
1526 1529 		u32 port_no = nla_get_u32(a[OVS_VPORT_ATTR_PORT_NO]);
+1 -2
sound/pci/azt3328.c
···
2684 2684 		err = snd_opl3_hwdep_new(opl3, 0, 1, NULL);
2685 2685 		if (err < 0)
2686 2686 			goto out_err;
2687 +		opl3->private_data = chip;
2687 2688 	}
2688 -
2689 -	opl3->private_data = chip;
2690 2689
2691 2690 	sprintf(card->longname, "%s at 0x%lx, irq %i",
2692 2691 		card->shortname, chip->ctrl_io, chip->irq);
+8 -4
sound/pci/hda/hda_codec.c
···
1759 1759 	parm = ch ? AC_AMP_SET_RIGHT : AC_AMP_SET_LEFT;
1760 1760 	parm |= direction == HDA_OUTPUT ? AC_AMP_SET_OUTPUT : AC_AMP_SET_INPUT;
1761 1761 	parm |= index << AC_AMP_SET_INDEX_SHIFT;
1762 -	parm |= val;
1762 +	if ((val & HDA_AMP_MUTE) && !(info->amp_caps & AC_AMPCAP_MUTE) &&
1763 +	    (info->amp_caps & AC_AMPCAP_MIN_MUTE))
1764 +		; /* set the zero value as a fake mute */
1765 +	else
1766 +		parm |= val;
1763 1767 	snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_AMP_GAIN_MUTE, parm);
1764 1768 	info->vol[ch] = val;
1765 1769 }
···
2030 2026 		val1 = -((caps & AC_AMPCAP_OFFSET) >> AC_AMPCAP_OFFSET_SHIFT);
2031 2027 	val1 += ofs;
2032 2028 	val1 = ((int)val1) * ((int)val2);
2033 -	if (min_mute)
2029 +	if (min_mute || (caps & AC_AMPCAP_MIN_MUTE))
2034 2030 		val2 |= TLV_DB_SCALE_MUTE;
2035 2031 	if (put_user(SNDRV_CTL_TLVT_DB_SCALE, _tlv))
2036 2032 		return -EFAULT;
···
5118 5114 	const char *pfx = "", *sfx = "";
5119 5115
5120 5116 	/* handle as a speaker if it's a fixed line-out */
5121 -	if (!strcmp(name, "Line-Out") && attr == INPUT_PIN_ATTR_INT)
5117 +	if (!strcmp(name, "Line Out") && attr == INPUT_PIN_ATTR_INT)
5122 5118 		name = "Speaker";
5123 5119 	/* check the location */
5124 5120 	switch (attr) {
···
5177 5173
5178 5174 	switch (get_defcfg_device(def_conf)) {
5179 5175 	case AC_JACK_LINE_OUT:
5180 -		return fill_audio_out_name(codec, nid, cfg, "Line-Out",
5176 +		return fill_audio_out_name(codec, nid, cfg, "Line Out",
5181 5177 					   label, maxlen, indexp);
5182 5178 	case AC_JACK_SPEAKER:
5183 5179 		return fill_audio_out_name(codec, nid, cfg, "Speaker",
+3
sound/pci/hda/hda_codec.h
···
298 298 #define AC_AMPCAP_MUTE			(1<<31)	/* mute capable */
299 299 #define AC_AMPCAP_MUTE_SHIFT		31
300 300
301 +/* driver-specific amp-caps: using bits 24-30 */
302 +#define AC_AMPCAP_MIN_MUTE		(1 << 30) /* min-volume = mute */
303 +
301 304 /* Connection list */
302 305 #define AC_CLIST_LENGTH			(0x7f<<0)
303 306 #define AC_CLIST_LONG			(1<<7)
+2 -2
sound/pci/hda/patch_cirrus.c
···
609 609 		"Front Speaker", "Surround Speaker", "Bass Speaker"
610 610 	};
611 611 	static const char * const line_outs[] = {
612 -		"Front Line-Out", "Surround Line-Out", "Bass Line-Out"
612 +		"Front Line Out", "Surround Line Out", "Bass Line Out"
613 613 	};
614 614
615 615 	fix_volume_caps(codec, dac);
···
635 635 		if (num_ctls > 1)
636 636 			name = line_outs[idx];
637 637 		else
638 -			name = "Line-Out";
638 +			name = "Line Out";
639 639 		break;
640 640 	}
641 641
+22 -2
sound/pci/hda/patch_conexant.c
···
3482 3482 		"Disabled", "Enabled"
3483 3483 	};
3484 3484 	static const char * const texts3[] = {
3485 -		"Disabled", "Speaker Only", "Line-Out+Speaker"
3485 +		"Disabled", "Speaker Only", "Line Out+Speaker"
3486 3486 	};
3487 3487 	const char * const *texts;
3488 3488
···
4079 4079 		err = snd_hda_ctl_add(codec, nid, kctl);
4080 4080 		if (err < 0)
4081 4081 			return err;
4082 -		if (!(query_amp_caps(codec, nid, hda_dir) & AC_AMPCAP_MUTE))
4082 +		if (!(query_amp_caps(codec, nid, hda_dir) &
4083 +		      (AC_AMPCAP_MUTE | AC_AMPCAP_MIN_MUTE)))
4083 4084 			break;
4084 4085 	}
4085 4086 	return 0;
···
4380 4379 	{}
4381 4380 };
4382 4381
4382 +/* add "fake" mute amp-caps to DACs on cx5051 so that mixer mute switches
4383 + * can be created (bko#42825)
4384 + */
4385 +static void add_cx5051_fake_mutes(struct hda_codec *codec)
4386 +{
4387 +	static hda_nid_t out_nids[] = {
4388 +		0x10, 0x11, 0
4389 +	};
4390 +	hda_nid_t *p;
4391 +
4392 +	for (p = out_nids; *p; p++)
4393 +		snd_hda_override_amp_caps(codec, *p, HDA_OUTPUT,
4394 +					  AC_AMPCAP_MIN_MUTE |
4395 +					  query_amp_caps(codec, *p, HDA_OUTPUT));
4396 +}
4397 +
4383 4398 static int patch_conexant_auto(struct hda_codec *codec)
4384 4399 {
4385 4400 	struct conexant_spec *spec;
···
4413 4396 	switch (codec->vendor_id) {
4414 4397 	case 0x14f15045:
4415 4398 		spec->single_adc_amp = 1;
4399 +		break;
4400 +	case 0x14f15051:
4401 +		add_cx5051_fake_mutes(codec);
4416 4402 		break;
4417 4403 	}
4418 4404
+21 -4
sound/pci/hda/patch_realtek.c
···
802 802 		"Disabled", "Enabled"
803 803 	};
804 804 	static const char * const texts3[] = {
805 -		"Disabled", "Speaker Only", "Line-Out+Speaker"
805 +		"Disabled", "Speaker Only", "Line Out+Speaker"
806 806 	};
807 807 	const char * const *texts;
808 808
···
1856 1856 	"Headphone Playback Volume",
1857 1857 	"Speaker Playback Volume",
1858 1858 	"Mono Playback Volume",
1859 -	"Line-Out Playback Volume",
1859 +	"Line Out Playback Volume",
1860 1860 	"CLFE Playback Volume",
1861 1861 	"Bass Speaker Playback Volume",
1862 1862 	"PCM Playback Volume",
···
1873 1873 	"Speaker Playback Switch",
1874 1874 	"Mono Playback Switch",
1875 1875 	"IEC958 Playback Switch",
1876 -	"Line-Out Playback Switch",
1876 +	"Line Out Playback Switch",
1877 1877 	"CLFE Playback Switch",
1878 1878 	"Bass Speaker Playback Switch",
1879 1879 	"PCM Playback Switch",
···
2068 2068  */
2069 2069
2070 2070 static void alc_init_special_input_src(struct hda_codec *codec);
2071 +static int alc269_fill_coef(struct hda_codec *codec);
2071 2072
2072 2073 static int alc_init(struct hda_codec *codec)
2073 2074 {
2074 2075 	struct alc_spec *spec = codec->spec;
2075 2076 	unsigned int i;
2077 +
2078 +	if (codec->vendor_id == 0x10ec0269)
2079 +		alc269_fill_coef(codec);
2076 2080
2077 2081 	alc_fix_pll(codec);
2078 2082 	alc_auto_init_amp(codec, spec->init_amp);
···
3801 3797 	else
3802 3798 		nums = spec->num_adc_nids;
3803 3799 	for (c = 0; c < nums; c++)
3804 -		alc_mux_select(codec, 0, spec->cur_mux[c], true);
3800 +		alc_mux_select(codec, c, spec->cur_mux[c], true);
3805 3801 }
3806 3802
3807 3803 /* add mic boosts if needed */
···
4371 4367 	ALC882_FIXUP_PB_M5210,
4372 4368 	ALC882_FIXUP_ACER_ASPIRE_7736,
4373 4369 	ALC882_FIXUP_ASUS_W90V,
4370 +	ALC889_FIXUP_CD,
4374 4371 	ALC889_FIXUP_VAIO_TT,
4375 4372 	ALC888_FIXUP_EEE1601,
4376 4373 	ALC882_FIXUP_EAPD,
···
4496 4491 		.type = ALC_FIXUP_PINS,
4497 4492 		.v.pins = (const struct alc_pincfg[]) {
4498 4493 			{ 0x16, 0x99130110 }, /* fix sequence for CLFE */
4494 +			{ }
4495 +		}
4496 +	},
4497 +	[ALC889_FIXUP_CD] = {
4498 +		.type = ALC_FIXUP_PINS,
4499 +		.v.pins = (const struct alc_pincfg[]) {
4500 +			{ 0x1c, 0x993301f0 }, /* CD */
4499 4501 			{ }
4500 4502 		}
4501 4503 	},
···
4662 4650
4663 4651 	SND_PCI_QUIRK(0x1071, 0x8258, "Evesham Voyaeger", ALC882_FIXUP_EAPD),
4664 4652 	SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
4653 +	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3", ALC889_FIXUP_CD),
4665 4654 	SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
4666 4655 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
4667 4656 	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
···
5480 5467
5481 5468 static int alc269_fill_coef(struct hda_codec *codec)
5482 5469 {
5470 +	struct alc_spec *spec = codec->spec;
5483 5471 	int val;
5472 +
5473 +	if (spec->codec_variant != ALC269_TYPE_ALC269VB)
5474 +		return 0;
5484 5475
5485 5476 	if ((alc_get_coef0(codec) & 0x00ff) < 0x015) {
5486 5477 		alc_write_coef_idx(codec, 0xf, 0x960b);
+1 -1
sound/pci/hda/patch_sigmatel.c
···
4629 4629 		unsigned int val = AC_PINCTL_OUT_EN | AC_PINCTL_HP_EN;
4630 4630 		if (no_hp_sensing(spec, i))
4631 4631 			continue;
4632 -		if (presence)
4632 +		if (1 /*presence*/)
4633 4633 			stac92xx_set_pinctl(codec, cfg->hp_pins[i], val);
4634 4634 #if 0 /* FIXME */
4635 4635 		/* Resetting the pinctl like below may lead to (a sort of) regressions
+1
sound/pci/rme9652/hdspm.c
···
6333 6333
6334 6334 	hw->ops.open = snd_hdspm_hwdep_dummy_op;
6335 6335 	hw->ops.ioctl = snd_hdspm_hwdep_ioctl;
6336 +	hw->ops.ioctl_compat = snd_hdspm_hwdep_ioctl;
6336 6337 	hw->ops.release = snd_hdspm_hwdep_dummy_op;
6337 6338
6338 6339 	return 0;
+1 -1
sound/soc/imx/imx-ssi.c
···
112 112 		break;
113 113 	case SND_SOC_DAIFMT_DSP_A:
114 114 		/* data on rising edge of bclk, frame high 1clk before data */
115 -		strcr |= SSI_STCR_TFSL | SSI_STCR_TEFS;
115 +		strcr |= SSI_STCR_TFSL | SSI_STCR_TXBIT0 | SSI_STCR_TEFS;
116 116 		break;
117 117 	}
118 118
+2 -2
sound/soc/samsung/neo1973_wm8753.c
···
367 367 	.platform_name = "samsung-audio",
368 368 	.cpu_dai_name = "s3c24xx-iis",
369 369 	.codec_dai_name = "wm8753-hifi",
370 -	.codec_name = "wm8753-codec.0-001a",
370 +	.codec_name = "wm8753.0-001a",
371 371 	.init = neo1973_wm8753_init,
372 372 	.ops = &neo1973_hifi_ops,
373 373 },
···
376 376 	.stream_name = "Voice",
377 377 	.cpu_dai_name = "dfbmcs320-pcm",
378 378 	.codec_dai_name = "wm8753-voice",
379 -	.codec_name = "wm8753-codec.0-001a",
379 +	.codec_name = "wm8753.0-001a",
380 380 	.ops = &neo1973_voice_ops,
381 381 },
382 382 };
+9 -3
sound/soc/soc-dapm.c
···
3068 3068 	 * standby.
3069 3069 	 */
3070 3070 	if (powerdown) {
3071 -		snd_soc_dapm_set_bias_level(dapm, SND_SOC_BIAS_PREPARE);
3071 +		if (dapm->bias_level == SND_SOC_BIAS_ON)
3072 +			snd_soc_dapm_set_bias_level(dapm,
3073 +						    SND_SOC_BIAS_PREPARE);
3072 3074 		dapm_seq_run(dapm, &down_list, 0, false);
3073 -		snd_soc_dapm_set_bias_level(dapm, SND_SOC_BIAS_STANDBY);
3075 +		if (dapm->bias_level == SND_SOC_BIAS_PREPARE)
3076 +			snd_soc_dapm_set_bias_level(dapm,
3077 +						    SND_SOC_BIAS_STANDBY);
3074 3078 	}
3075 3079 }
···
3087 3083
3088 3084 	list_for_each_entry(codec, &card->codec_dev_list, list) {
3089 3085 		soc_dapm_shutdown_codec(&codec->dapm);
3090 -		snd_soc_dapm_set_bias_level(&codec->dapm, SND_SOC_BIAS_OFF);
3086 +		if (codec->dapm.bias_level == SND_SOC_BIAS_STANDBY)
3087 +			snd_soc_dapm_set_bias_level(&codec->dapm,
3088 +						    SND_SOC_BIAS_OFF);
3091 3089 	}
3092 3090 }
3093 3091
+5 -3
tools/testing/ktest/ktest.pl
···
3244 3244 	$in_bisect = 1;
3245 3245
3246 3246 	my $failed = 0;
3247 -	build "oldconfig";
3248 -	start_monitor_and_boot or $failed = 1;
3249 -	end_monitor;
3247 +	build "oldconfig" or $failed = 1;
3248 +	if (!$failed) {
3249 +		start_monitor_and_boot or $failed = 1;
3250 +		end_monitor;
3251 +	}
3250 3252
3251 3253 	$in_bisect = 0;
3252 3254