Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tags 'omap-devel-gpmc-fixed-for-v3.7' and 'cleanup-omap-tags-for-v3.7' into cleanup-sparseirq

Changes for GPMC (General Purpose Memory Controller) that take it
closer to being just a regular device driver.

Remove the ancient omap-specific atags that are no longer needed.

At some point we were planning to pass bootloader information with
custom atags, but that did not work out too well.

There's no need for these any longer as the kernel has been booting
fine without them for quite some time, and now we have device tree
support that can be used instead.

+3306 -2196
+12
Documentation/ABI/testing/sysfs-bus-pci
··· 210 210 firmware assigned instance number of the PCI 211 211 device that can help in understanding the firmware 212 212 intended order of the PCI device. 213 + 214 + What: /sys/bus/pci/devices/.../d3cold_allowed 215 + Date: July 2012 216 + Contact: Huang Ying <ying.huang@intel.com> 217 + Description: 218 + d3cold_allowed is a bit that controls whether the corresponding PCI 219 + device can be put into the D3Cold state. If it is cleared, the 220 + device will never be put into the D3Cold state. If it is set, the 221 + device may be put into the D3Cold state if other requirements are 222 + satisfied too. Reading this attribute will show the current 223 + value of the d3cold_allowed bit. Writing this attribute will set 224 + the value of the d3cold_allowed bit.
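For illustration (not part of the patch), the attribute is used like any other single-bit sysfs file; the device address below is hypothetical, so substitute one that exists on your system:

```shell
# Hypothetical PCI address; pick a real one from /sys/bus/pci/devices/.
dev=/sys/bus/pci/devices/0000:00:1c.0
cat "$dev/d3cold_allowed"        # read the current bit
echo 0 > "$dev/d3cold_allowed"   # forbid D3cold for this device
echo 1 > "$dev/d3cold_allowed"   # permit it again (the default)
```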
+8 -2
Documentation/block/00-INDEX
··· 3 3 biodoc.txt 4 4 - Notes on the Generic Block Layer Rewrite in Linux 2.5 5 5 capability.txt 6 - - Generic Block Device Capability (/sys/block/<disk>/capability) 6 + - Generic Block Device Capability (/sys/block/<device>/capability) 7 + cfq-iosched.txt 8 + - CFQ IO scheduler tunables 9 + data-integrity.txt 10 + - Block data integrity 7 11 deadline-iosched.txt 8 12 - Deadline IO scheduler tunables 9 13 ioprio.txt 10 14 - Block io priorities (in CFQ scheduler) 15 + queue-sysfs.txt 16 + - Queue's sysfs entries 11 17 request.txt 12 18 - The members of struct request (in include/linux/blkdev.h) 13 19 stat.txt 14 - - Block layer statistics in /sys/block/<dev>/stat 20 + - Block layer statistics in /sys/block/<device>/stat 15 21 switching-sched.txt 16 22 - Switching I/O schedulers at runtime 17 23 writeback_cache_control.txt
+77
Documentation/block/cfq-iosched.txt
··· 1 + CFQ (Complete Fairness Queueing) 2 + =============================== 3 + 4 + The main aim of CFQ scheduler is to provide a fair allocation of the disk 5 + I/O bandwidth for all the processes which request an I/O operation. 6 + 7 + CFQ maintains the per-process queue for the processes which request I/O 8 + operations (synchronous requests). In case of asynchronous requests, all the 9 + requests from all the processes are batched together according to their 10 + process's I/O priority. 11 + 1 12 CFQ ioscheduler tunables 2 13 ======================== 3 14 ··· 35 24 there are multiple spindles behind single LUN (Host based hardware RAID 36 25 controller or for storage arrays), setting slice_idle=0 might end up in better 37 26 throughput and acceptable latencies. 27 + 28 + back_seek_max 29 + ------------- 30 + This specifies, given in Kbytes, the maximum "distance" for backward seeking. 31 + The distance is the amount of space from the current head location to the 32 + sectors that are backward in terms of distance. 33 + 34 + This parameter allows the scheduler to anticipate requests in the "backward" 35 + direction and consider them as being the "next" if they are within this 36 + distance from the current head location. 37 + 38 + back_seek_penalty 39 + ----------------- 40 + This parameter is used to compute the cost of backward seeking. If the 41 + backward distance of a request is just 1/back_seek_penalty from a "front" 42 + request, then the seeking cost of the two requests is considered equivalent. 43 + 44 + So the scheduler will not bias toward one or the other request (otherwise the 45 + scheduler will bias toward the front request). Default value of back_seek_penalty is 2. 46 + 47 + fifo_expire_async 48 + ----------------- 49 + This parameter is used to set the timeout of asynchronous requests. Default 50 + value of this is 248ms. 51 + 52 + fifo_expire_sync 53 + ---------------- 54 + This parameter is used to set the timeout of synchronous requests.
Default 55 + value of this is 124ms. To favor synchronous requests over asynchronous 56 + ones, this value should be decreased relative to fifo_expire_async. 57 + 58 + slice_async 59 + ----------- 60 + This parameter is the same as slice_sync but for the asynchronous queue. The 61 + default value is 40ms. 62 + 63 + slice_async_rq 64 + -------------- 65 + This parameter is used to limit the dispatching of asynchronous requests to 66 + the device request queue in the queue's slice time. The maximum number of requests that 67 + are allowed to be dispatched also depends upon the io priority. Default value 68 + for this is 2. 69 + 70 + slice_sync 71 + ---------- 72 + When a queue is selected for execution, the queue's IO requests are only 73 + executed for a certain amount of time (time_slice) before switching to another 74 + queue. This parameter is used to calculate the time slice of the synchronous 75 + queue. 76 + 77 + time_slice is computed using the below equation:- 78 + time_slice = slice_sync + (slice_sync/5 * (4 - prio)). To increase the 79 + time_slice of the synchronous queue, increase the value of slice_sync. Default 80 + value is 100ms. 81 + 82 + quantum 83 + ------- 84 + This specifies the number of requests dispatched to the device queue. In a 85 + queue's time slice, a request will not be dispatched if the number of requests 86 + in the device exceeds this parameter. This parameter is used for synchronous 87 + requests. 88 + 89 + In case of storage with several disks, this setting can limit the parallel 90 + processing of requests. Therefore, increasing the value can improve the 91 + performance, although this can cause the latency of some I/O to increase due 92 + to the larger number of requests. 38 93 39 94 CFQ IOPS Mode for group scheduling 40 95 ===================================
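As an aside (not part of the patch), the slice_sync formula quoted above can be checked with plain shell arithmetic; the value is the documented default and the priorities are picked arbitrarily:

```shell
# time_slice = slice_sync + (slice_sync/5 * (4 - prio)), per the doc above
slice_sync=100   # ms, the documented default
for prio in 0 4 7; do
    echo "prio=$prio -> time_slice=$(( slice_sync + slice_sync / 5 * (4 - prio) ))ms"
done
# prio=0 gives 180ms, prio=4 gives 100ms, prio=7 gives 40ms
```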
+64
Documentation/block/queue-sysfs.txt
··· 9 9 Files denoted with a RO postfix are readonly and the RW postfix means 10 10 read-write. 11 11 12 + add_random (RW) 13 + ---------------- 14 + This file allows turning off the disk entropy contribution. Default 15 + value of this file is '1' (on). 16 + 17 + discard_granularity (RO) 18 + ----------------------- 19 + This shows the size of internal allocation of the device in bytes, if 20 + reported by the device. A value of '0' means the device does not support 21 + the discard functionality. 22 + 23 + discard_max_bytes (RO) 24 + ---------------------- 25 + Devices that support discard functionality may have internal limits on 26 + the number of bytes that can be trimmed or unmapped in a single operation. 27 + The discard_max_bytes parameter is set by the device driver to the maximum 28 + number of bytes that can be discarded in a single operation. Discard 29 + requests issued to the device must not exceed this limit. A discard_max_bytes 30 + value of 0 means that the device does not support discard functionality. 31 + 32 + discard_zeroes_data (RO) 33 + ------------------------ 34 + When read, this file will show if the discarded blocks are zeroed by the 35 + device or not. If its value is '1', the blocks are zeroed; otherwise they are not. 36 + 12 37 hw_sector_size (RO) 13 38 ------------------- 14 39 This is the hardware sector size of the device, in bytes. 15 40 41 + iostats (RW) 42 + ------------- 43 + This file is used to control (on/off) the iostats accounting of the 44 + disk. 45 + 46 + logical_block_size (RO) 47 + ----------------------- 48 + This is the logical block size of the device, in bytes. 49 + 16 50 max_hw_sectors_kb (RO) 17 51 ---------------------- 18 52 This is the maximum number of kilobytes supported in a single data transfer. 53 + 54 + max_integrity_segments (RO) 55 + --------------------------- 56 + When read, this file shows the max limit of integrity segments, as 57 + set by the block layer, which a hardware controller can handle.
19 58 20 59 max_sectors_kb (RW) 21 60 ------------------- 22 61 This is the maximum number of kilobytes that the block layer will allow 23 62 for a filesystem request. Must be smaller than or equal to the maximum 24 63 size allowed by the hardware. 64 + 65 + max_segments (RO) 66 + ----------------- 67 + Maximum number of segments of the device. 68 + 69 + max_segment_size (RO) 70 + --------------------- 71 + Maximum segment size of the device. 72 + 73 + minimum_io_size (RO) 74 + -------------------- 75 + This is the smallest preferred io size reported by the device. 25 76 26 77 nomerges (RW) 27 78 ------------- ··· 96 45 each request queue may have up to N request pools, each independently 97 46 regulated by nr_requests. 98 47 48 + optimal_io_size (RO) 49 + -------------------- 50 + This is the optimal io size reported by the device. 51 + 52 + physical_block_size (RO) 53 + ------------------------ 54 + This is the physical block size of the device, in bytes. 55 + 99 56 read_ahead_kb (RW) 100 57 ------------------ 101 58 Maximum number of kilobytes to read-ahead for filesystems on this block 102 59 device. 60 + 61 + rotational (RW) 62 + --------------- 63 + This file is used to state whether the device is of rotational or 64 + non-rotational type. 103 65 104 66 rq_affinity (RW) 105 67 ----------------
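For reference (not part of the patch), these queue attributes can be inspected from a shell; "sda" below is a hypothetical device name, so substitute a disk that exists on your system:

```shell
# Hypothetical device; adjust "sda" to a disk present on your machine.
q=/sys/block/sda/queue
for attr in rotational add_random logical_block_size physical_block_size max_sectors_kb; do
    # skip attributes the running kernel does not expose
    [ -r "$q/$attr" ] && printf '%-22s %s\n' "$attr" "$(cat "$q/$attr")"
done
```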
+4 -4
Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
··· 10 10 - compatible : Should be "fsl,<chip>-esdhc" 11 11 12 12 Optional properties: 13 - - fsl,cd-internal : Indicate to use controller internal card detection 14 - - fsl,wp-internal : Indicate to use controller internal write protection 13 + - fsl,cd-controller : Indicate to use controller internal card detection 14 + - fsl,wp-controller : Indicate to use controller internal write protection 15 15 16 16 Examples: 17 17 ··· 19 19 compatible = "fsl,imx51-esdhc"; 20 20 reg = <0x70004000 0x4000>; 21 21 interrupts = <1>; 22 - fsl,cd-internal; 23 - fsl,wp-internal; 22 + fsl,cd-controller; 23 + fsl,wp-controller; 24 24 }; 25 25 26 26 esdhc@70008000 {
+1 -1
Documentation/feature-removal-schedule.txt
··· 579 579 ---------------------------- 580 580 581 581 What: at91-mci driver ("CONFIG_MMC_AT91") 582 - When: 3.7 582 + When: 3.8 583 583 Why: There are two mci drivers: at91-mci and atmel-mci. The PDC support 584 584 was added to atmel-mci as a first step to support more chips. 585 585 Then at91-mci was kept only for old IP versions (on at91rm9200 and
+1 -1
Documentation/watchdog/src/watchdog-test.c
··· 31 31 * or "-e" to enable the card. 32 32 */ 33 33 34 - void term(int sig) 34 + static void term(int sig) 35 35 { 36 36 close(fd); 37 37 fprintf(stderr, "Stopping watchdog ticks...\n");
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc3 4 + EXTRAVERSION = -rc5 5 5 NAME = Saber-toothed Squirrel 6 6 7 7 # *DOCUMENTATION*
+2 -1
arch/arm/Kconfig
··· 6 6 select HAVE_DMA_API_DEBUG 7 7 select HAVE_IDE if PCI || ISA || PCMCIA 8 8 select HAVE_DMA_ATTRS 9 - select HAVE_DMA_CONTIGUOUS if (CPU_V6 || CPU_V6K || CPU_V7) 9 + select HAVE_DMA_CONTIGUOUS if MMU 10 10 select HAVE_MEMBLOCK 11 11 select RTC_LIB 12 12 select SYS_SUPPORTS_APM_EMULATION ··· 2144 2144 config CPU_FREQ_IMX 2145 2145 tristate "CPUfreq driver for i.MX CPUs" 2146 2146 depends on ARCH_MXC && CPU_FREQ 2147 + select CPU_FREQ_TABLE 2147 2148 help 2148 2149 This enables the CPUfreq driver for i.MX CPUs. 2149 2150
+5
arch/arm/boot/dts/am33xx.dtsi
··· 154 154 #size-cells = <0>; 155 155 ti,hwmods = "i2c3"; 156 156 }; 157 + 158 + wdt2: wdt@44e35000 { 159 + compatible = "ti,omap3-wdt"; 160 + ti,hwmods = "wd_timer2"; 161 + }; 157 162 }; 158 163 };
+1 -1
arch/arm/boot/dts/at91sam9g25ek.dts
··· 15 15 compatible = "atmel,at91sam9g25ek", "atmel,at91sam9x5ek", "atmel,at91sam9x5", "atmel,at91sam9"; 16 16 17 17 chosen { 18 - bootargs = "128M console=ttyS0,115200 root=/dev/mtdblock1 rw rootfstype=ubifs ubi.mtd=1 root=ubi0:rootfs"; 18 + bootargs = "console=ttyS0,115200 root=/dev/mtdblock1 rw rootfstype=ubifs ubi.mtd=1 root=ubi0:rootfs"; 19 19 }; 20 20 21 21 ahb {
+2 -2
arch/arm/boot/dts/imx51-babbage.dts
··· 25 25 aips@70000000 { /* aips-1 */ 26 26 spba@70000000 { 27 27 esdhc@70004000 { /* ESDHC1 */ 28 - fsl,cd-internal; 29 - fsl,wp-internal; 28 + fsl,cd-controller; 29 + fsl,wp-controller; 30 30 status = "okay"; 31 31 }; 32 32
+5 -1
arch/arm/boot/dts/kirkwood-iconnect.dts
··· 41 41 }; 42 42 power-blue { 43 43 label = "power:blue"; 44 - gpios = <&gpio1 11 0>; 44 + gpios = <&gpio1 10 0>; 45 45 linux,default-trigger = "timer"; 46 + }; 47 + power-red { 48 + label = "power:red"; 49 + gpios = <&gpio1 11 0>; 46 50 }; 47 51 usb1 { 48 52 label = "usb1:blue";
+3
arch/arm/boot/dts/twl6030.dtsi
··· 66 66 67 67 vcxio: regulator@8 { 68 68 compatible = "ti,twl6030-vcxio"; 69 + regulator-always-on; 69 70 }; 70 71 71 72 vusb: regulator@9 { ··· 75 74 76 75 v1v8: regulator@10 { 77 76 compatible = "ti,twl6030-v1v8"; 77 + regulator-always-on; 78 78 }; 79 79 80 80 v2v1: regulator@11 { 81 81 compatible = "ti,twl6030-v2v1"; 82 + regulator-always-on; 82 83 }; 83 84 84 85 clk32kg: regulator@12 {
+1 -1
arch/arm/configs/armadillo800eva_defconfig
··· 33 33 CONFIG_FORCE_MAX_ZONEORDER=13 34 34 CONFIG_ZBOOT_ROM_TEXT=0x0 35 35 CONFIG_ZBOOT_ROM_BSS=0x0 36 - CONFIG_CMDLINE="console=tty0 console=ttySC1,115200 earlyprintk=sh-sci.1,115200 ignore_loglevel root=/dev/nfs ip=dhcp nfsroot=,rsize=4096,wsize=4096" 36 + CONFIG_CMDLINE="console=tty0 console=ttySC1,115200 earlyprintk=sh-sci.1,115200 ignore_loglevel root=/dev/nfs ip=dhcp nfsroot=,rsize=4096,wsize=4096 rw" 37 37 CONFIG_CMDLINE_FORCE=y 38 38 CONFIG_KEXEC=y 39 39 CONFIG_VFP=y
+1
arch/arm/configs/u8500_defconfig
··· 86 86 CONFIG_LEDS_CLASS=y 87 87 CONFIG_LEDS_LM3530=y 88 88 CONFIG_LEDS_LP5521=y 89 + CONFIG_LEDS_GPIO=y 89 90 CONFIG_RTC_CLASS=y 90 91 CONFIG_RTC_DRV_AB8500=y 91 92 CONFIG_RTC_DRV_PL031=y
+7
arch/arm/include/asm/dma-mapping.h
··· 203 203 } 204 204 205 205 /* 206 + * This can be called during early boot to increase the size of the atomic 207 + * coherent DMA pool above the default value of 256KiB. It must be called 208 + * before postcore_initcall. 209 + */ 210 + extern void __init init_dma_coherent_pool_size(unsigned long size); 211 + 212 + /* 206 213 * This can be called during boot to increase the size of the consistent 207 214 * DMA region above it's default value of 2MB. It must be called before the 208 215 * memory allocator is initialised, i.e. before any core_initcall.
+1 -1
arch/arm/mach-at91/at91rm9200_time.c
··· 197 197 at91_st_read(AT91_ST_SR); 198 198 199 199 /* Make IRQs happen for the system timer */ 200 - setup_irq(AT91_ID_SYS, &at91rm9200_timer_irq); 200 + setup_irq(NR_IRQS_LEGACY + AT91_ID_SYS, &at91rm9200_timer_irq); 201 201 202 202 /* The 32KiHz "Slow Clock" (tick every 30517.58 nanoseconds) is used 203 203 * directly for the clocksource and all clockevents, after adjusting
+5 -1
arch/arm/mach-at91/at91sam9260_devices.c
··· 726 726 .flags = IORESOURCE_MEM, 727 727 }, { 728 728 .flags = IORESOURCE_MEM, 729 + }, { 730 + .flags = IORESOURCE_IRQ, 729 731 }, 730 732 }; 731 733 ··· 746 744 * The second resource is needed: 747 745 * GPBR will serve as the storage for RTC time offset 748 746 */ 749 - at91sam9260_rtt_device.num_resources = 2; 747 + at91sam9260_rtt_device.num_resources = 3; 750 748 rtt_resources[1].start = AT91SAM9260_BASE_GPBR + 751 749 4 * CONFIG_RTC_DRV_AT91SAM9_GPBR; 752 750 rtt_resources[1].end = rtt_resources[1].start + 3; 751 + rtt_resources[2].start = NR_IRQS_LEGACY + AT91_ID_SYS; 752 + rtt_resources[2].end = NR_IRQS_LEGACY + AT91_ID_SYS; 753 753 } 754 754 #else 755 755 static void __init at91_add_device_rtt_rtc(void)
+5 -1
arch/arm/mach-at91/at91sam9261_devices.c
··· 609 609 .flags = IORESOURCE_MEM, 610 610 }, { 611 611 .flags = IORESOURCE_MEM, 612 + }, { 613 + .flags = IORESOURCE_IRQ, 612 614 } 613 615 }; 614 616 ··· 628 626 * The second resource is needed: 629 627 * GPBR will serve as the storage for RTC time offset 630 628 */ 631 - at91sam9261_rtt_device.num_resources = 2; 629 + at91sam9261_rtt_device.num_resources = 3; 632 630 rtt_resources[1].start = AT91SAM9261_BASE_GPBR + 633 631 4 * CONFIG_RTC_DRV_AT91SAM9_GPBR; 634 632 rtt_resources[1].end = rtt_resources[1].start + 3; 633 + rtt_resources[2].start = NR_IRQS_LEGACY + AT91_ID_SYS; 634 + rtt_resources[2].end = NR_IRQS_LEGACY + AT91_ID_SYS; 635 635 } 636 636 #else 637 637 static void __init at91_add_device_rtt_rtc(void)
+8 -2
arch/arm/mach-at91/at91sam9263_devices.c
··· 990 990 .flags = IORESOURCE_MEM, 991 991 }, { 992 992 .flags = IORESOURCE_MEM, 993 + }, { 994 + .flags = IORESOURCE_IRQ, 993 995 } 994 996 }; 995 997 ··· 1008 1006 .flags = IORESOURCE_MEM, 1009 1007 }, { 1010 1008 .flags = IORESOURCE_MEM, 1009 + }, { 1010 + .flags = IORESOURCE_IRQ, 1011 1011 } 1012 1012 }; 1013 1013 ··· 1031 1027 * The second resource is needed only for the chosen RTT: 1032 1028 * GPBR will serve as the storage for RTC time offset 1033 1029 */ 1034 - at91sam9263_rtt0_device.num_resources = 2; 1030 + at91sam9263_rtt0_device.num_resources = 3; 1035 1031 at91sam9263_rtt1_device.num_resources = 1; 1036 1032 pdev = &at91sam9263_rtt0_device; 1037 1033 r = rtt0_resources; 1038 1034 break; 1039 1035 case 1: 1040 1036 at91sam9263_rtt0_device.num_resources = 1; 1041 - at91sam9263_rtt1_device.num_resources = 2; 1037 + at91sam9263_rtt1_device.num_resources = 3; 1042 1038 pdev = &at91sam9263_rtt1_device; 1043 1039 r = rtt1_resources; 1044 1040 break; ··· 1051 1047 pdev->name = "rtc-at91sam9"; 1052 1048 r[1].start = AT91SAM9263_BASE_GPBR + 4 * CONFIG_RTC_DRV_AT91SAM9_GPBR; 1053 1049 r[1].end = r[1].start + 3; 1050 + r[2].start = NR_IRQS_LEGACY + AT91_ID_SYS; 1051 + r[2].end = NR_IRQS_LEGACY + AT91_ID_SYS; 1054 1052 } 1055 1053 #else 1056 1054 static void __init at91_add_device_rtt_rtc(void)
+5 -1
arch/arm/mach-at91/at91sam9g45_devices.c
··· 1293 1293 .flags = IORESOURCE_MEM, 1294 1294 }, { 1295 1295 .flags = IORESOURCE_MEM, 1296 + }, { 1297 + .flags = IORESOURCE_IRQ, 1296 1298 } 1297 1299 }; 1298 1300 ··· 1312 1310 * The second resource is needed: 1313 1311 * GPBR will serve as the storage for RTC time offset 1314 1312 */ 1315 - at91sam9g45_rtt_device.num_resources = 2; 1313 + at91sam9g45_rtt_device.num_resources = 3; 1316 1314 rtt_resources[1].start = AT91SAM9G45_BASE_GPBR + 1317 1315 4 * CONFIG_RTC_DRV_AT91SAM9_GPBR; 1318 1316 rtt_resources[1].end = rtt_resources[1].start + 3; 1317 + rtt_resources[2].start = NR_IRQS_LEGACY + AT91_ID_SYS; 1318 + rtt_resources[2].end = NR_IRQS_LEGACY + AT91_ID_SYS; 1319 1319 } 1320 1320 #else 1321 1321 static void __init at91_add_device_rtt_rtc(void)
+5 -1
arch/arm/mach-at91/at91sam9rl_devices.c
··· 688 688 .flags = IORESOURCE_MEM, 689 689 }, { 690 690 .flags = IORESOURCE_MEM, 691 + }, { 692 + .flags = IORESOURCE_IRQ, 691 693 } 692 694 }; 693 695 ··· 707 705 * The second resource is needed: 708 706 * GPBR will serve as the storage for RTC time offset 709 707 */ 710 - at91sam9rl_rtt_device.num_resources = 2; 708 + at91sam9rl_rtt_device.num_resources = 3; 711 709 rtt_resources[1].start = AT91SAM9RL_BASE_GPBR + 712 710 4 * CONFIG_RTC_DRV_AT91SAM9_GPBR; 713 711 rtt_resources[1].end = rtt_resources[1].start + 3; 712 + rtt_resources[2].start = NR_IRQS_LEGACY + AT91_ID_SYS; 713 + rtt_resources[2].end = NR_IRQS_LEGACY + AT91_ID_SYS; 714 714 } 715 715 #else 716 716 static void __init at91_add_device_rtt_rtc(void)
+12
arch/arm/mach-at91/clock.c
··· 63 63 64 64 #define cpu_has_300M_plla() (cpu_is_at91sam9g10()) 65 65 66 + #define cpu_has_240M_plla() (cpu_is_at91sam9261() \ 67 + || cpu_is_at91sam9263() \ 68 + || cpu_is_at91sam9rl()) 69 + 70 + #define cpu_has_210M_plla() (cpu_is_at91sam9260()) 71 + 66 72 #define cpu_has_pllb() (!(cpu_is_at91sam9rl() \ 67 73 || cpu_is_at91sam9g45() \ 68 74 || cpu_is_at91sam9x5() \ ··· 711 705 pll_overclock = true; 712 706 } else if (cpu_has_800M_plla()) { 713 707 if (plla.rate_hz > 800000000) 708 + pll_overclock = true; 709 + } else if (cpu_has_240M_plla()) { 710 + if (plla.rate_hz > 240000000) 711 + pll_overclock = true; 712 + } else if (cpu_has_210M_plla()) { 713 + if (plla.rate_hz > 210000000) 714 714 pll_overclock = true; 715 715 } else { 716 716 if (plla.rate_hz > 209000000)
+2 -1
arch/arm/mach-dove/common.c
··· 102 102 void __init dove_ge00_init(struct mv643xx_eth_platform_data *eth_data) 103 103 { 104 104 orion_ge00_init(eth_data, DOVE_GE00_PHYS_BASE, 105 - IRQ_DOVE_GE00_SUM, IRQ_DOVE_GE00_ERR); 105 + IRQ_DOVE_GE00_SUM, IRQ_DOVE_GE00_ERR, 106 + 1600); 106 107 } 107 108 108 109 /*****************************************************************************
+7
arch/arm/mach-exynos/mach-origen.c
··· 42 42 #include <plat/backlight.h> 43 43 #include <plat/fb.h> 44 44 #include <plat/mfc.h> 45 + #include <plat/hdmi.h> 45 46 46 47 #include <mach/ohci.h> 47 48 #include <mach/map.h> ··· 735 734 s3c_gpio_setpull(EXYNOS4_GPX2(2), S3C_GPIO_PULL_NONE); 736 735 } 737 736 737 + /* I2C module and id for HDMIPHY */ 738 + static struct i2c_board_info hdmiphy_info = { 739 + I2C_BOARD_INFO("hdmiphy-exynos4210", 0x38), 740 + }; 741 + 738 742 static void s5p_tv_setup(void) 739 743 { 740 744 /* Direct HPD to HDMI chip */ ··· 787 781 788 782 s5p_tv_setup(); 789 783 s5p_i2c_hdmiphy_set_platdata(NULL); 784 + s5p_hdmi_set_platdata(&hdmiphy_info, NULL, 0); 790 785 791 786 #ifdef CONFIG_DRM_EXYNOS 792 787 s5p_device_fimd0.dev.platform_data = &drm_fimd_pdata;
+7
arch/arm/mach-exynos/mach-smdkv310.c
··· 40 40 #include <plat/mfc.h> 41 41 #include <plat/ehci.h> 42 42 #include <plat/clock.h> 43 + #include <plat/hdmi.h> 43 44 44 45 #include <mach/map.h> 45 46 #include <mach/ohci.h> ··· 355 354 .pwm_period_ns = 1000, 356 355 }; 357 356 357 + /* I2C module and id for HDMIPHY */ 358 + static struct i2c_board_info hdmiphy_info = { 359 + I2C_BOARD_INFO("hdmiphy-exynos4210", 0x38), 360 + }; 361 + 358 362 static void s5p_tv_setup(void) 359 363 { 360 364 /* direct HPD to HDMI chip */ ··· 394 388 395 389 s5p_tv_setup(); 396 390 s5p_i2c_hdmiphy_set_platdata(NULL); 391 + s5p_hdmi_set_platdata(&hdmiphy_info, NULL, 0); 397 392 398 393 samsung_keypad_set_platdata(&smdkv310_keypad_data); 399 394
+1
arch/arm/mach-gemini/irq.c
··· 17 17 #include <linux/sched.h> 18 18 #include <asm/irq.h> 19 19 #include <asm/mach/irq.h> 20 + #include <asm/system_misc.h> 20 21 #include <mach/hardware.h> 21 22 22 23 #define IRQ_SOURCE(base_addr) (base_addr + 0x00)
+5 -5
arch/arm/mach-imx/Makefile
··· 9 9 obj-$(CONFIG_SOC_IMX31) += mm-imx3.o cpu-imx31.o clk-imx31.o iomux-imx31.o ehci-imx31.o pm-imx3.o 10 10 obj-$(CONFIG_SOC_IMX35) += mm-imx3.o cpu-imx35.o clk-imx35.o ehci-imx35.o pm-imx3.o 11 11 12 - obj-$(CONFIG_SOC_IMX5) += cpu-imx5.o mm-imx5.o clk-imx51-imx53.o ehci-imx5.o pm-imx5.o cpu_op-mx51.o 12 + imx5-pm-$(CONFIG_PM) += pm-imx5.o 13 + obj-$(CONFIG_SOC_IMX5) += cpu-imx5.o mm-imx5.o clk-imx51-imx53.o ehci-imx5.o $(imx5-pm-y) cpu_op-mx51.o 13 14 14 15 obj-$(CONFIG_COMMON_CLK) += clk-pllv1.o clk-pllv2.o clk-pllv3.o clk-gate2.o \ 15 16 clk-pfd.o clk-busy.o ··· 71 70 obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o 72 71 obj-$(CONFIG_HAVE_IMX_MMDC) += mmdc.o 73 72 obj-$(CONFIG_HAVE_IMX_SRC) += src.o 74 - obj-$(CONFIG_CPU_V7) += head-v7.o 75 - AFLAGS_head-v7.o :=-Wa,-march=armv7-a 76 - obj-$(CONFIG_SMP) += platsmp.o 73 + AFLAGS_headsmp.o :=-Wa,-march=armv7-a 74 + obj-$(CONFIG_SMP) += headsmp.o platsmp.o 77 75 obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o 78 76 obj-$(CONFIG_SOC_IMX6Q) += clk-imx6q.o mach-imx6q.o 79 77 80 78 ifeq ($(CONFIG_PM),y) 81 - obj-$(CONFIG_SOC_IMX6Q) += pm-imx6q.o 79 + obj-$(CONFIG_SOC_IMX6Q) += pm-imx6q.o headsmp.o 82 80 endif 83 81 84 82 # i.MX5 based machines
+5 -3
arch/arm/mach-imx/clk-imx6q.c
··· 152 152 ssi2, ssi3, uart_ipg, uart_serial, usboh3, usdhc1, usdhc2, usdhc3, 153 153 usdhc4, vdo_axi, vpu_axi, cko1, pll1_sys, pll2_bus, pll3_usb_otg, 154 154 pll4_audio, pll5_video, pll6_mlb, pll7_usb_host, pll8_enet, ssi1_ipg, 155 - ssi2_ipg, ssi3_ipg, rom, usbphy1, usbphy2, 155 + ssi2_ipg, ssi3_ipg, rom, usbphy1, usbphy2, ldb_di0_div_3_5, ldb_di1_div_3_5, 156 156 clk_max 157 157 }; 158 158 ··· 288 288 clk[gpu3d_shader] = imx_clk_divider("gpu3d_shader", "gpu3d_shader_sel", base + 0x18, 29, 3); 289 289 clk[ipu1_podf] = imx_clk_divider("ipu1_podf", "ipu1_sel", base + 0x3c, 11, 3); 290 290 clk[ipu2_podf] = imx_clk_divider("ipu2_podf", "ipu2_sel", base + 0x3c, 16, 3); 291 - clk[ldb_di0_podf] = imx_clk_divider("ldb_di0_podf", "ldb_di0_sel", base + 0x20, 10, 1); 292 - clk[ldb_di1_podf] = imx_clk_divider("ldb_di1_podf", "ldb_di1_sel", base + 0x20, 11, 1); 291 + clk[ldb_di0_div_3_5] = imx_clk_fixed_factor("ldb_di0_div_3_5", "ldb_di0_sel", 2, 7); 292 + clk[ldb_di0_podf] = imx_clk_divider("ldb_di0_podf", "ldb_di0_div_3_5", base + 0x20, 10, 1); 293 + clk[ldb_di1_div_3_5] = imx_clk_fixed_factor("ldb_di1_div_3_5", "ldb_di1_sel", 2, 7); 294 + clk[ldb_di1_podf] = imx_clk_divider("ldb_di1_podf", "ldb_di1_div_3_5", base + 0x20, 11, 1); 293 295 clk[ipu1_di0_pre] = imx_clk_divider("ipu1_di0_pre", "ipu1_di0_pre_sel", base + 0x34, 3, 3); 294 296 clk[ipu1_di1_pre] = imx_clk_divider("ipu1_di1_pre", "ipu1_di1_pre_sel", base + 0x34, 12, 3); 295 297 clk[ipu2_di0_pre] = imx_clk_divider("ipu2_di0_pre", "ipu2_di0_pre_sel", base + 0x38, 3, 3);
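The two new fixed-factor clocks exist because the LDB display clock runs at the parent rate divided by 3.5, which the integer-only clock framework expresses as mult=2, div=7. A quick sketch of that arithmetic, with a made-up parent rate:

```shell
# Made-up parent rate, purely for illustration.
parent_hz=910000000
ldb_hz=$(( parent_hz * 2 / 7 ))   # fixed factor 2/7, i.e. divide by 3.5
echo "$ldb_hz"                    # → 260000000
```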
arch/arm/mach-imx/head-v7.S arch/arm/mach-imx/headsmp.S
+3 -20
arch/arm/mach-imx/hotplug.c
··· 42 42 : "cc"); 43 43 } 44 44 45 - static inline void cpu_leave_lowpower(void) 46 - { 47 - unsigned int v; 48 - 49 - asm volatile( 50 - "mrc p15, 0, %0, c1, c0, 0\n" 51 - " orr %0, %0, %1\n" 52 - " mcr p15, 0, %0, c1, c0, 0\n" 53 - " mrc p15, 0, %0, c1, c0, 1\n" 54 - " orr %0, %0, %2\n" 55 - " mcr p15, 0, %0, c1, c0, 1\n" 56 - : "=&r" (v) 57 - : "Ir" (CR_C), "Ir" (0x40) 58 - : "cc"); 59 - } 60 - 61 45 /* 62 46 * platform-specific code to shutdown a CPU 63 47 * ··· 51 67 { 52 68 cpu_enter_lowpower(); 53 69 imx_enable_cpu(cpu, false); 54 - cpu_do_idle(); 55 - cpu_leave_lowpower(); 56 70 57 - /* We should never return from idle */ 58 - panic("cpu %d unexpectedly exit from shutdown\n", cpu); 71 + /* spin here until hardware takes it down */ 72 + while (1) 73 + ; 59 74 } 60 75 61 76 int platform_cpu_disable(unsigned int cpu)
+2 -2
arch/arm/mach-imx/mach-imx6q.c
··· 71 71 /* For imx6q sabrelite board: set KSZ9021RN RGMII pad skew */ 72 72 static int ksz9021rn_phy_fixup(struct phy_device *phydev) 73 73 { 74 - if (IS_ENABLED(CONFIG_PHYLIB)) { 74 + if (IS_BUILTIN(CONFIG_PHYLIB)) { 75 75 /* min rx data delay */ 76 76 phy_write(phydev, 0x0b, 0x8105); 77 77 phy_write(phydev, 0x0c, 0x0000); ··· 112 112 113 113 static void __init imx6q_sabrelite_init(void) 114 114 { 115 - if (IS_ENABLED(CONFIG_PHYLIB)) 115 + if (IS_BUILTIN(CONFIG_PHYLIB)) 116 116 phy_register_fixup_for_uid(PHY_ID_KSZ9021, MICREL_PHY_ID_MASK, 117 117 ksz9021rn_phy_fixup); 118 118 imx6q_sabrelite_cko1_setup();
+2 -1
arch/arm/mach-kirkwood/Makefile.boot
··· 7 7 dtb-$(CONFIG_MACH_DLINK_KIRKWOOD_DT) += kirkwood-dns325.dtb 8 8 dtb-$(CONFIG_MACH_ICONNECT_DT) += kirkwood-iconnect.dtb 9 9 dtb-$(CONFIG_MACH_IB62X0_DT) += kirkwood-ib62x0.dtb 10 - dtb-$(CONFIG_MACH_TS219_DT) += kirkwood-qnap-ts219.dtb 10 + dtb-$(CONFIG_MACH_TS219_DT) += kirkwood-ts219-6281.dtb 11 + dtb-$(CONFIG_MACH_TS219_DT) += kirkwood-ts219-6282.dtb 11 12 dtb-$(CONFIG_MACH_GOFLEXNET_DT) += kirkwood-goflexnet.dtb 12 13 dtb-$(CONFIG_MACH_LSXL_DT) += kirkwood-lschlv2.dtb 13 14 dtb-$(CONFIG_MACH_LSXL_DT) += kirkwood-lsxhl.dtb
+9 -2
arch/arm/mach-kirkwood/common.c
··· 301 301 { 302 302 orion_ge00_init(eth_data, 303 303 GE00_PHYS_BASE, IRQ_KIRKWOOD_GE00_SUM, 304 - IRQ_KIRKWOOD_GE00_ERR); 304 + IRQ_KIRKWOOD_GE00_ERR, 1600); 305 305 /* The interface forgets the MAC address assigned by u-boot if 306 306 the clock is turned off, so claim the clk now. */ 307 307 clk_prepare_enable(ge0); ··· 315 315 { 316 316 orion_ge01_init(eth_data, 317 317 GE01_PHYS_BASE, IRQ_KIRKWOOD_GE01_SUM, 318 - IRQ_KIRKWOOD_GE01_ERR); 318 + IRQ_KIRKWOOD_GE01_ERR, 1600); 319 319 clk_prepare_enable(ge1); 320 320 } 321 321 ··· 517 517 void __init kirkwood_init_early(void) 518 518 { 519 519 orion_time_set_base(TIMER_VIRT_BASE); 520 + 521 + /* 522 + * Some Kirkwood devices allocate their coherent buffers from atomic 523 + * context. Increase size of atomic coherent pool to make sure such 524 + * the allocations won't fail. 525 + */ 526 + init_dma_coherent_pool_size(SZ_1M); 520 527 } 521 528 522 529 int kirkwood_tclk;
+1
arch/arm/mach-kirkwood/db88f6281-bp-setup.c
··· 10 10 11 11 #include <linux/kernel.h> 12 12 #include <linux/init.h> 13 + #include <linux/sizes.h> 13 14 #include <linux/platform_device.h> 14 15 #include <linux/mtd/partitions.h> 15 16 #include <linux/ata_platform.h>
+1 -1
arch/arm/mach-mmp/sram.c
··· 68 68 struct resource *res; 69 69 int ret = 0; 70 70 71 - if (!pdata && !pdata->pool_name) 71 + if (!pdata || !pdata->pool_name) 72 72 return -ENODEV; 73 73 74 74 info = kzalloc(sizeof(*info), GFP_KERNEL);
+1 -1
arch/arm/mach-mv78xx0/addr-map.c
··· 37 37 #define WIN0_OFF(n) (BRIDGE_VIRT_BASE + 0x0000 + ((n) << 4)) 38 38 #define WIN8_OFF(n) (BRIDGE_VIRT_BASE + 0x0900 + (((n) - 8) << 4)) 39 39 40 - static void __init __iomem *win_cfg_base(int win) 40 + static void __init __iomem *win_cfg_base(const struct orion_addr_map_cfg *cfg, int win) 41 41 { 42 42 /* 43 43 * Find the control register base address for this window.
+4 -2
arch/arm/mach-mv78xx0/common.c
··· 213 213 { 214 214 orion_ge00_init(eth_data, 215 215 GE00_PHYS_BASE, IRQ_MV78XX0_GE00_SUM, 216 - IRQ_MV78XX0_GE_ERR); 216 + IRQ_MV78XX0_GE_ERR, 217 + MV643XX_TX_CSUM_DEFAULT_LIMIT); 217 218 } 218 219 219 220 ··· 225 224 { 226 225 orion_ge01_init(eth_data, 227 226 GE01_PHYS_BASE, IRQ_MV78XX0_GE01_SUM, 228 - NO_IRQ); 227 + NO_IRQ, 228 + MV643XX_TX_CSUM_DEFAULT_LIMIT); 229 229 } 230 230 231 231
-1
arch/arm/mach-omap1/board-ams-delta.c
··· 37 37 #include <plat/board-ams-delta.h> 38 38 #include <plat/keypad.h> 39 39 #include <plat/mux.h> 40 - #include <plat/board.h> 41 40 42 41 #include <mach/hardware.h> 43 42 #include <mach/ams-delta-fiq.h>
-1
arch/arm/mach-omap1/board-fsample.c
··· 32 32 #include <plat/flash.h> 33 33 #include <plat/fpga.h> 34 34 #include <plat/keypad.h> 35 - #include <plat/board.h> 36 35 37 36 #include <mach/hardware.h> 38 37
-6
arch/arm/mach-omap1/board-generic.c
··· 23 23 #include <asm/mach/map.h> 24 24 25 25 #include <plat/mux.h> 26 - #include <plat/board.h> 27 26 28 27 #include <mach/usb.h> 29 28 ··· 51 52 }; 52 53 #endif 53 54 54 - static struct omap_board_config_kernel generic_config[] __initdata = { 55 - }; 56 - 57 55 static void __init omap_generic_init(void) 58 56 { 59 57 #ifdef CONFIG_ARCH_OMAP15XX ··· 72 76 } 73 77 #endif 74 78 75 - omap_board_config = generic_config; 76 - omap_board_config_size = ARRAY_SIZE(generic_config); 77 79 omap_serial_init(); 78 80 omap_register_i2c_bus(1, 100, NULL, 0); 79 81 }
-1
arch/arm/mach-omap1/board-htcherald.c
··· 42 42 #include <asm/mach/arch.h> 43 43 44 44 #include <plat/omap7xx.h> 45 - #include <plat/board.h> 46 45 #include <plat/keypad.h> 47 46 #include <plat/mmc.h> 48 47
-1
arch/arm/mach-omap1/board-nokia770.c
··· 26 26 #include <asm/mach/map.h> 27 27 28 28 #include <plat/mux.h> 29 - #include <plat/board.h> 30 29 #include <plat/keypad.h> 31 30 #include <plat/lcd_mipid.h> 32 31 #include <plat/mmc.h>
+1
arch/arm/mach-omap1/board-osk.c
··· 39 39 #include <linux/mtd/partitions.h> 40 40 #include <linux/mtd/physmap.h> 41 41 #include <linux/i2c/tps65010.h> 42 + #include <linux/platform_data/omap1_bl.h> 42 43 43 44 #include <asm/mach-types.h> 44 45 #include <asm/mach/arch.h>
+1 -1
arch/arm/mach-omap1/board-palmte.c
··· 28 28 #include <linux/interrupt.h> 29 29 #include <linux/apm-emulation.h> 30 30 #include <linux/omapfb.h> 31 + #include <linux/platform_data/omap1_bl.h> 31 32 32 33 #include <asm/mach-types.h> 33 34 #include <asm/mach/arch.h> ··· 38 37 #include <plat/mux.h> 39 38 #include <plat/tc.h> 40 39 #include <plat/dma.h> 41 - #include <plat/board.h> 42 40 #include <plat/irda.h> 43 41 #include <plat/keypad.h> 44 42
+1 -1
arch/arm/mach-omap1/board-palmtt.c
··· 27 27 #include <linux/omapfb.h> 28 28 #include <linux/spi/spi.h> 29 29 #include <linux/spi/ads7846.h> 30 + #include <linux/platform_data/omap1_bl.h> 30 31 31 32 #include <asm/mach-types.h> 32 33 #include <asm/mach/arch.h> ··· 38 37 #include <plat/mux.h> 39 38 #include <plat/dma.h> 40 39 #include <plat/tc.h> 41 - #include <plat/board.h> 42 40 #include <plat/irda.h> 43 41 #include <plat/keypad.h> 44 42
+1 -1
arch/arm/mach-omap1/board-palmz71.c
··· 30 30 #include <linux/omapfb.h> 31 31 #include <linux/spi/spi.h> 32 32 #include <linux/spi/ads7846.h> 33 + #include <linux/platform_data/omap1_bl.h> 33 34 34 35 #include <asm/mach-types.h> 35 36 #include <asm/mach/arch.h> ··· 40 39 #include <plat/mux.h> 41 40 #include <plat/dma.h> 42 41 #include <plat/tc.h> 43 - #include <plat/board.h> 44 42 #include <plat/irda.h> 45 43 #include <plat/keypad.h> 46 44
-1
arch/arm/mach-omap1/board-perseus2.c
··· 32 32 #include <plat/fpga.h> 33 33 #include <plat/flash.h> 34 34 #include <plat/keypad.h> 35 - #include <plat/board.h> 36 35 37 36 #include <mach/hardware.h> 38 37
-1
arch/arm/mach-omap1/board-sx1.c
··· 38 38 #include <plat/dma.h> 39 39 #include <plat/irda.h> 40 40 #include <plat/tc.h> 41 - #include <plat/board.h> 42 41 #include <plat/keypad.h> 43 42 #include <plat/board-sx1.h> 44 43
-6
arch/arm/mach-omap1/board-voiceblue.c
··· 35 35 #include <plat/flash.h> 36 36 #include <plat/mux.h> 37 37 #include <plat/tc.h> 38 - #include <plat/board.h> 39 38 40 39 #include <mach/hardware.h> 41 40 #include <mach/usb.h> ··· 154 155 .pins[2] = 6, 155 156 }; 156 157 157 - static struct omap_board_config_kernel voiceblue_config[] = { 158 - }; 159 - 160 158 #define MACHINE_PANICED 1 161 159 #define MACHINE_REBOOTING 2 162 160 #define MACHINE_REBOOT 4 ··· 271 275 voiceblue_smc91x_resources[1].start = gpio_to_irq(8); 272 276 voiceblue_smc91x_resources[1].end = gpio_to_irq(8); 273 277 platform_add_devices(voiceblue_devices, ARRAY_SIZE(voiceblue_devices)); 274 - omap_board_config = voiceblue_config; 275 - omap_board_config_size = ARRAY_SIZE(voiceblue_config); 276 278 omap_serial_init(); 277 279 omap1_usb_init(&voiceblue_usb_config); 278 280 omap_register_i2c_bus(1, 100, NULL, 0);
-8
arch/arm/mach-omap1/clock_data.c
··· 25 25 #include <plat/clock.h> 26 26 #include <plat/cpu.h> 27 27 #include <plat/clkdev_omap.h> 28 - #include <plat/board.h> 29 28 #include <plat/sram.h> /* for omap_sram_reprogram_clock() */ 30 29 31 30 #include <mach/hardware.h> ··· 787 788 int __init omap1_clk_init(void) 788 789 { 789 790 struct omap_clk *c; 790 - const struct omap_clock_config *info; 791 791 int crystal_type = 0; /* Default 12 MHz */ 792 792 u32 reg; 793 793 ··· 834 836 api_ck_p = clk_get(NULL, "api_ck"); 835 837 ck_dpll1_p = clk_get(NULL, "ck_dpll1"); 836 838 ck_ref_p = clk_get(NULL, "ck_ref"); 837 - 838 - info = omap_get_config(OMAP_TAG_CLOCK, struct omap_clock_config); 839 - if (info != NULL) { 840 - if (!cpu_is_omap15xx()) 841 - crystal_type = info->system_clock_type; 842 - } 843 839 844 840 if (cpu_is_omap7xx()) 845 841 ck_ref.rate = 13000000;
-1
arch/arm/mach-omap1/devices.c
··· 20 20 #include <asm/mach/map.h> 21 21 22 22 #include <plat/tc.h> 23 - #include <plat/board.h> 24 23 #include <plat/mux.h> 25 24 #include <plat/dma.h> 26 25 #include <plat/mmc.h>
-1
arch/arm/mach-omap1/serial.c
··· 22 22 23 23 #include <asm/mach-types.h> 24 24 25 - #include <plat/board.h> 26 25 #include <plat/mux.h> 27 26 #include <plat/fpga.h> 28 27
+2 -1
arch/arm/mach-omap2/Kconfig
··· 62 62 select PM_OPP if PM 63 63 select USB_ARCH_HAS_EHCI if USB_SUPPORT 64 64 select ARM_CPU_SUSPEND if PM 65 - select ARCH_NEEDS_CPU_IDLE_COUPLED 65 + select ARCH_NEEDS_CPU_IDLE_COUPLED if SMP 66 66 67 67 config SOC_OMAP5 68 68 bool "TI OMAP5" 69 69 select CPU_V7 70 70 select ARM_GIC 71 71 select HAVE_SMP 72 + select ARM_CPU_SUSPEND if PM 72 73 73 74 comment "OMAP Core Type" 74 75 depends on ARCH_OMAP2
-1
arch/arm/mach-omap2/board-2430sdp.c
··· 33 33 #include <asm/mach/arch.h> 34 34 #include <asm/mach/map.h> 35 35 36 - #include <plat/board.h> 37 36 #include "common.h" 38 37 #include <plat/gpmc.h> 39 38 #include <plat/usb.h>
-6
arch/arm/mach-omap2/board-3430sdp.c
··· 31 31 #include <asm/mach/map.h> 32 32 33 33 #include <plat/mcspi.h> 34 - #include <plat/board.h> 35 34 #include <plat/usb.h> 36 35 #include "common.h" 37 36 #include <plat/dma.h> ··· 188 189 .num_devices = ARRAY_SIZE(sdp3430_dss_devices), 189 190 .devices = sdp3430_dss_devices, 190 191 .default_device = &sdp3430_lcd_device, 191 - }; 192 - 193 - static struct omap_board_config_kernel sdp3430_config[] __initdata = { 194 192 }; 195 193 196 194 static struct omap2_hsmmc_info mmc[] = { ··· 572 576 int gpio_pendown; 573 577 574 578 omap3_mux_init(board_mux, OMAP_PACKAGE_CBB); 575 - omap_board_config = sdp3430_config; 576 - omap_board_config_size = ARRAY_SIZE(sdp3430_config); 577 579 omap_hsmmc_init(mmc); 578 580 omap3430_i2c_init(); 579 581 omap_display_init(&sdp3430_dss_data);
-6
arch/arm/mach-omap2/board-3630sdp.c
··· 17 17 #include <asm/mach/arch.h> 18 18 19 19 #include "common.h" 20 - #include <plat/board.h> 21 20 #include <plat/gpmc-smc91x.h> 22 21 #include <plat/usb.h> 23 22 ··· 64 65 .reset_gpio_port[0] = 126, 65 66 .reset_gpio_port[1] = 61, 66 67 .reset_gpio_port[2] = -EINVAL 67 - }; 68 - 69 - static struct omap_board_config_kernel sdp_config[] __initdata = { 70 68 }; 71 69 72 70 #ifdef CONFIG_OMAP_MUX ··· 193 197 static void __init omap_sdp_init(void) 194 198 { 195 199 omap3_mux_init(board_mux, OMAP_PACKAGE_CBP); 196 - omap_board_config = sdp_config; 197 - omap_board_config_size = ARRAY_SIZE(sdp_config); 198 200 zoom_peripherals_init(); 199 201 omap_sdrc_init(h8mbx00u0mer0em_sdrc_params, 200 202 h8mbx00u0mer0em_sdrc_params);
-1
arch/arm/mach-omap2/board-4430sdp.c
··· 34 34 #include <asm/mach/arch.h> 35 35 #include <asm/mach/map.h> 36 36 37 - #include <plat/board.h> 38 37 #include "common.h" 39 38 #include <plat/usb.h> 40 39 #include <plat/mmc.h>
-9
arch/arm/mach-omap2/board-am3517crane.c
··· 26 26 #include <asm/mach/arch.h> 27 27 #include <asm/mach/map.h> 28 28 29 - #include <plat/board.h> 30 29 #include "common.h" 31 30 #include <plat/usb.h> 32 31 ··· 35 36 36 37 #define GPIO_USB_POWER 35 37 38 #define GPIO_USB_NRESET 38 38 - 39 - 40 - /* Board initialization */ 41 - static struct omap_board_config_kernel am3517_crane_config[] __initdata = { 42 - }; 43 39 44 40 #ifdef CONFIG_OMAP_MUX 45 41 static struct omap_board_mux board_mux[] __initdata = { ··· 60 66 omap3_mux_init(board_mux, OMAP_PACKAGE_CBB); 61 67 omap_serial_init(); 62 68 omap_sdrc_init(NULL, NULL); 63 - 64 - omap_board_config = am3517_crane_config; 65 - omap_board_config_size = ARRAY_SIZE(am3517_crane_config); 66 69 67 70 /* Configure GPIO for EHCI port */ 68 71 if (omap_mux_init_gpio(GPIO_USB_NRESET, OMAP_PIN_OUTPUT)) {
-6
arch/arm/mach-omap2/board-am3517evm.c
··· 32 32 #include <asm/mach/arch.h> 33 33 #include <asm/mach/map.h> 34 34 35 - #include <plat/board.h> 36 35 #include "common.h" 37 36 #include <plat/usb.h> 38 37 #include <video/omapdss.h> ··· 323 324 platform_device_register(&am3517_hecc_device); 324 325 } 325 326 326 - static struct omap_board_config_kernel am3517_evm_config[] __initdata = { 327 - }; 328 - 329 327 static struct omap2_hsmmc_info mmc[] = { 330 328 { 331 329 .mmc = 1, ··· 342 346 343 347 static void __init am3517_evm_init(void) 344 348 { 345 - omap_board_config = am3517_evm_config; 346 - omap_board_config_size = ARRAY_SIZE(am3517_evm_config); 347 349 omap3_mux_init(board_mux, OMAP_PACKAGE_CBB); 348 350 349 351 am3517_evm_i2c_init();
-1
arch/arm/mach-omap2/board-apollon.c
··· 35 35 #include <asm/mach/flash.h> 36 36 37 37 #include <plat/led.h> 38 - #include <plat/board.h> 39 38 #include "common.h" 40 39 #include <plat/gpmc.h> 41 40
-6
arch/arm/mach-omap2/board-cm-t35.c
··· 37 37 #include <asm/mach/arch.h> 38 38 #include <asm/mach/map.h> 39 39 40 - #include <plat/board.h> 41 40 #include "common.h" 42 41 #include <plat/nand.h> 43 42 #include <plat/gpmc.h> ··· 713 714 static inline void cm_t3730_init_mux(void) {} 714 715 #endif 715 716 716 - static struct omap_board_config_kernel cm_t35_config[] __initdata = { 717 - }; 718 - 719 717 static void __init cm_t3x_common_init(void) 720 718 { 721 - omap_board_config = cm_t35_config; 722 - omap_board_config_size = ARRAY_SIZE(cm_t35_config); 723 719 omap3_mux_init(board_mux, OMAP_PACKAGE_CUS); 724 720 omap_serial_init(); 725 721 omap_sdrc_init(mt46h32m32lf6_sdrc_params,
-6
arch/arm/mach-omap2/board-cm-t3517.c
··· 38 38 #include <asm/mach/arch.h> 39 39 #include <asm/mach/map.h> 40 40 41 - #include <plat/board.h> 42 41 #include "common.h" 43 42 #include <plat/usb.h> 44 43 #include <plat/nand.h> ··· 248 249 static inline void cm_t3517_init_nand(void) {} 249 250 #endif 250 251 251 - static struct omap_board_config_kernel cm_t3517_config[] __initdata = { 252 - }; 253 - 254 252 #ifdef CONFIG_OMAP_MUX 255 253 static struct omap_board_mux board_mux[] __initdata = { 256 254 /* GPIO186 - Green LED */ ··· 281 285 omap3_mux_init(board_mux, OMAP_PACKAGE_CBB); 282 286 omap_serial_init(); 283 287 omap_sdrc_init(NULL, NULL); 284 - omap_board_config = cm_t3517_config; 285 - omap_board_config_size = ARRAY_SIZE(cm_t3517_config); 286 288 cm_t3517_init_leds(); 287 289 cm_t3517_init_nand(); 288 290 cm_t3517_init_rtc();
-1
arch/arm/mach-omap2/board-devkit8000.c
··· 40 40 #include <asm/mach/map.h> 41 41 #include <asm/mach/flash.h> 42 42 43 - #include <plat/board.h> 44 43 #include "common.h" 45 44 #include <plat/gpmc.h> 46 45 #include <plat/nand.h>
-1
arch/arm/mach-omap2/board-generic.c
··· 20 20 #include <asm/hardware/gic.h> 21 21 #include <asm/mach/arch.h> 22 22 23 - #include <plat/board.h> 24 23 #include "common.h" 25 24 #include "common-board-devices.h" 26 25
+1 -1
arch/arm/mach-omap2/board-h4.c
··· 32 32 #include <asm/mach/arch.h> 33 33 #include <asm/mach/map.h> 34 34 35 - #include <plat/board.h> 36 35 #include "common.h" 37 36 #include <plat/menelaus.h> 38 37 #include <plat/dma.h> 39 38 #include <plat/gpmc.h> 39 + #include <plat/debug-devices.h> 40 40 41 41 #include <video/omapdss.h> 42 42 #include <video/omap-panel-generic-dpi.h>
+2 -1
arch/arm/mach-omap2/board-igep0020.c
··· 29 29 #include <asm/mach-types.h> 30 30 #include <asm/mach/arch.h> 31 31 32 - #include <plat/board.h> 33 32 #include "common.h" 34 33 #include <plat/gpmc.h> 35 34 #include <plat/usb.h> ··· 553 554 554 555 #ifdef CONFIG_OMAP_MUX 555 556 static struct omap_board_mux board_mux[] __initdata = { 557 + /* SMSC9221 LAN Controller ETH IRQ (GPIO_176) */ 558 + OMAP3_MUX(MCSPI1_CS2, OMAP_MUX_MODE4 | OMAP_PIN_INPUT), 556 559 { .reg_offset = OMAP_MUX_TERMINATOR }, 557 560 }; 558 561 #endif
-1
arch/arm/mach-omap2/board-ldp.c
··· 35 35 #include <asm/mach/map.h> 36 36 37 37 #include <plat/mcspi.h> 38 - #include <plat/board.h> 39 38 #include "common.h" 40 39 #include <plat/gpmc.h> 41 40 #include <mach/board-zoom.h>
-1
arch/arm/mach-omap2/board-n8x0.c
··· 25 25 #include <asm/mach/arch.h> 26 26 #include <asm/mach-types.h> 27 27 28 - #include <plat/board.h> 29 28 #include "common.h" 30 29 #include <plat/menelaus.h> 31 30 #include <mach/irqs.h>
-1
arch/arm/mach-omap2/board-omap3beagle.c
··· 39 39 #include <asm/mach/map.h> 40 40 #include <asm/mach/flash.h> 41 41 42 - #include <plat/board.h> 43 42 #include "common.h" 44 43 #include <video/omapdss.h> 45 44 #include <video/omap-panel-tfp410.h>
+13 -7
arch/arm/mach-omap2/board-omap3evm.c
··· 45 45 #include <asm/mach/arch.h> 46 46 #include <asm/mach/map.h> 47 47 48 - #include <plat/board.h> 49 48 #include <plat/usb.h> 50 49 #include <plat/nand.h> 51 50 #include "common.h" ··· 57 58 #include "hsmmc.h" 58 59 #include "common-board-devices.h" 59 60 61 + #define OMAP3_EVM_TS_GPIO 175 60 62 #define OMAP3_EVM_EHCI_VBUS 22 61 63 #define OMAP3_EVM_EHCI_SELECT 61 62 64 ··· 73 73 */ 74 74 #define OMAP3EVM_GEN1_ETHR_GPIO_RST 64 75 75 #define OMAP3EVM_GEN2_ETHR_GPIO_RST 7 76 + 77 + /* 78 + * OMAP35x EVM revision 79 + * Run time detection of EVM revision is done by reading Ethernet 80 + * PHY ID - 81 + * GEN_1 = 0x01150000 82 + * GEN_2 = 0x92200000 83 + */ 84 + enum { 85 + OMAP3EVM_BOARD_GEN_1 = 0, /* EVM Rev between A - D */ 86 + OMAP3EVM_BOARD_GEN_2, /* EVM Rev >= Rev E */ 87 + }; 76 88 77 89 static u8 omap3_evm_version; 78 90 ··· 537 525 return 0; 538 526 } 539 527 540 - static struct omap_board_config_kernel omap3_evm_config[] __initdata = { 541 - }; 542 - 543 528 static struct usbhs_omap_board_data usbhs_bdata __initdata = { 544 529 545 530 .port_mode[0] = OMAP_USBHS_PORT_MODE_UNUSED, ··· 695 686 696 687 obm = (cpu_is_omap3630()) ? omap36x_board_mux : omap35x_board_mux; 697 688 omap3_mux_init(obm, OMAP_PACKAGE_CBB); 698 - 699 - omap_board_config = omap3_evm_config; 700 - omap_board_config_size = ARRAY_SIZE(omap3_evm_config); 701 689 702 690 omap_mux_init_gpio(63, OMAP_PIN_INPUT); 703 691 omap_hsmmc_init(mmc);
-1
arch/arm/mach-omap2/board-omap3logic.c
··· 41 41 #include "common-board-devices.h" 42 42 43 43 #include <plat/mux.h> 44 - #include <plat/board.h> 45 44 #include "common.h" 46 45 #include <plat/gpmc-smsc911x.h> 47 46 #include <plat/gpmc.h>
-1
arch/arm/mach-omap2/board-omap3pandora.c
··· 40 40 #include <asm/mach/arch.h> 41 41 #include <asm/mach/map.h> 42 42 43 - #include <plat/board.h> 44 43 #include "common.h" 45 44 #include <mach/hardware.h> 46 45 #include <plat/mcspi.h>
-6
arch/arm/mach-omap2/board-omap3stalker.c
··· 35 35 #include <asm/mach/map.h> 36 36 #include <asm/mach/flash.h> 37 37 38 - #include <plat/board.h> 39 38 #include "common.h" 40 39 #include <plat/gpmc.h> 41 40 #include <plat/nand.h> ··· 361 362 362 363 #define OMAP3_STALKER_TS_GPIO 175 363 364 364 - static struct omap_board_config_kernel omap3_stalker_config[] __initdata = { 365 - }; 366 - 367 365 static struct platform_device *omap3_stalker_devices[] __initdata = { 368 366 &keys_gpio, 369 367 }; ··· 395 399 { 396 400 regulator_register_fixed(0, dummy_supplies, ARRAY_SIZE(dummy_supplies)); 397 401 omap3_mux_init(board_mux, OMAP_PACKAGE_CUS); 398 - omap_board_config = omap3_stalker_config; 399 - omap_board_config_size = ARRAY_SIZE(omap3_stalker_config); 400 402 401 403 omap_mux_init_gpio(23, OMAP_PIN_INPUT); 402 404 omap_hsmmc_init(mmc);
-1
arch/arm/mach-omap2/board-omap3touchbook.c
··· 44 44 #include <asm/mach/flash.h> 45 45 #include <asm/system_info.h> 46 46 47 - #include <plat/board.h> 48 47 #include "common.h" 49 48 #include <plat/gpmc.h> 50 49 #include <plat/nand.h>
-1
arch/arm/mach-omap2/board-omap4panda.c
··· 39 39 #include <asm/mach/map.h> 40 40 #include <video/omapdss.h> 41 41 42 - #include <plat/board.h> 43 42 #include "common.h" 44 43 #include <plat/usb.h> 45 44 #include <plat/mmc.h>
-1
arch/arm/mach-omap2/board-overo.c
··· 42 42 #include <asm/mach/flash.h> 43 43 #include <asm/mach/map.h> 44 44 45 - #include <plat/board.h> 46 45 #include "common.h" 47 46 #include <video/omapdss.h> 48 47 #include <video/omap-panel-generic-dpi.h>
-1
arch/arm/mach-omap2/board-rx51-peripherals.c
··· 28 28 #include <asm/system_info.h> 29 29 30 30 #include <plat/mcspi.h> 31 - #include <plat/board.h> 32 31 #include "common.h" 33 32 #include <plat/dma.h> 34 33 #include <plat/gpmc.h>
-1
arch/arm/mach-omap2/board-rx51.c
··· 24 24 #include <asm/mach/map.h> 25 25 26 26 #include <plat/mcspi.h> 27 - #include <plat/board.h> 28 27 #include "common.h" 29 28 #include <plat/dma.h> 30 29 #include <plat/gpmc.h>
-6
arch/arm/mach-omap2/board-ti8168evm.c
··· 21 21 #include <asm/mach/map.h> 22 22 23 23 #include <plat/irqs.h> 24 - #include <plat/board.h> 25 24 #include "common.h" 26 25 #include <plat/usb.h> 27 26 ··· 31 32 .power = 500, 32 33 }; 33 34 34 - static struct omap_board_config_kernel ti81xx_evm_config[] __initdata = { 35 - }; 36 - 37 35 static void __init ti81xx_evm_init(void) 38 36 { 39 37 omap_serial_init(); 40 38 omap_sdrc_init(NULL, NULL); 41 - omap_board_config = ti81xx_evm_config; 42 - omap_board_config_size = ARRAY_SIZE(ti81xx_evm_config); 43 39 usb_musb_init(&musb_board_data); 44 40 } 45 41
-1
arch/arm/mach-omap2/board-zoom.c
··· 22 22 #include <asm/mach/arch.h> 23 23 24 24 #include "common.h" 25 - #include <plat/board.h> 26 25 #include <plat/usb.h> 27 26 28 27 #include <mach/board-zoom.h>
-11
arch/arm/mach-omap2/common-board-devices.c
··· 35 35 .turbo_mode = 0, 36 36 }; 37 37 38 - /* 39 - * ADS7846 driver maybe request a gpio according to the value 40 - * of pdata->get_pendown_state, but we have done this. So set 41 - * get_pendown_state to avoid twice gpio requesting. 42 - */ 43 - static int omap3_get_pendown_state(void) 44 - { 45 - return !gpio_get_value(OMAP3_EVM_TS_GPIO); 46 - } 47 - 48 38 static struct ads7846_platform_data ads7846_config = { 49 39 .x_max = 0x0fff, 50 40 .y_max = 0x0fff, ··· 45 55 .debounce_rep = 1, 46 56 .gpio_pendown = -EINVAL, 47 57 .keep_vref_on = 1, 48 - .get_pendown_state = &omap3_get_pendown_state, 49 58 }; 50 59 51 60 static struct spi_board_info ads7846_spi_board_info __initdata = {
-1
arch/arm/mach-omap2/common-board-devices.h
··· 4 4 #include "twl-common.h" 5 5 6 6 #define NAND_BLOCK_SIZE SZ_128K 7 - #define OMAP3_EVM_TS_GPIO 175 8 7 9 8 struct mtd_partition; 10 9 struct ads7846_platform_data;
-1
arch/arm/mach-omap2/common.c
··· 18 18 #include <linux/io.h> 19 19 20 20 #include <plat/hardware.h> 21 - #include <plat/board.h> 22 21 #include <plat/mux.h> 23 22 #include <plat/clock.h> 24 23
+2 -1
arch/arm/mach-omap2/cpuidle44xx.c
··· 238 238 for_each_cpu(cpu_id, cpu_online_mask) { 239 239 dev = &per_cpu(omap4_idle_dev, cpu_id); 240 240 dev->cpu = cpu_id; 241 + #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED 241 242 dev->coupled_cpus = *cpu_online_mask; 242 - 243 + #endif 243 244 cpuidle_register_driver(&omap4_idle_driver); 244 245 245 246 if (cpuidle_register_device(dev)) {
-1
arch/arm/mach-omap2/devices.c
··· 26 26 #include <asm/pmu.h> 27 27 28 28 #include "iomap.h" 29 - #include <plat/board.h> 30 29 #include <plat/dma.h> 31 30 #include <plat/omap_hwmod.h> 32 31 #include <plat/omap_device.h>
+23 -6
arch/arm/mach-omap2/gpmc-nand.c
··· 18 18 19 19 #include <plat/cpu.h> 20 20 #include <plat/nand.h> 21 - #include <plat/board.h> 22 21 #include <plat/gpmc.h> 23 22 24 - static struct resource gpmc_nand_resource = { 25 - .flags = IORESOURCE_MEM, 23 + static struct resource gpmc_nand_resource[] = { 24 + { 25 + .flags = IORESOURCE_MEM, 26 + }, 27 + { 28 + .flags = IORESOURCE_IRQ, 29 + }, 30 + { 31 + .flags = IORESOURCE_IRQ, 32 + }, 26 33 }; 27 34 28 35 static struct platform_device gpmc_nand_device = { 29 36 .name = "omap2-nand", 30 37 .id = 0, 31 - .num_resources = 1, 32 - .resource = &gpmc_nand_resource, 38 + .num_resources = ARRAY_SIZE(gpmc_nand_resource), 39 + .resource = gpmc_nand_resource, 33 40 }; 34 41 35 42 static int omap2_nand_gpmc_retime(struct omap_nand_platform_data *gpmc_nand_data) ··· 82 75 gpmc_cs_configure(gpmc_nand_data->cs, GPMC_CONFIG_DEV_SIZE, 0); 83 76 gpmc_cs_configure(gpmc_nand_data->cs, 84 77 GPMC_CONFIG_DEV_TYPE, GPMC_DEVICETYPE_NAND); 78 + gpmc_cs_configure(gpmc_nand_data->cs, GPMC_CONFIG_WP, 0); 85 79 err = gpmc_cs_set_timings(gpmc_nand_data->cs, &t); 86 80 if (err) 87 81 return err; ··· 98 90 gpmc_nand_device.dev.platform_data = gpmc_nand_data; 99 91 100 92 err = gpmc_cs_request(gpmc_nand_data->cs, NAND_IO_SIZE, 101 - &gpmc_nand_data->phys_base); 93 + (unsigned long *)&gpmc_nand_resource[0].start); 102 94 if (err < 0) { 103 95 dev_err(dev, "Cannot request GPMC CS\n"); 104 96 return err; 105 97 } 106 98 99 + gpmc_nand_resource[0].end = gpmc_nand_resource[0].start + 100 + NAND_IO_SIZE - 1; 101 + 102 + gpmc_nand_resource[1].start = 103 + gpmc_get_client_irq(GPMC_IRQ_FIFOEVENTENABLE); 104 + gpmc_nand_resource[2].start = 105 + gpmc_get_client_irq(GPMC_IRQ_COUNT_EVENT); 107 106 /* Set timings in GPMC */ 108 107 err = omap2_nand_gpmc_retime(gpmc_nand_data); 109 108 if (err < 0) { ··· 122 107 if (gpmc_nand_data->dev_ready) { 123 108 gpmc_cs_configure(gpmc_nand_data->cs, GPMC_CONFIG_RDY_BSY, 1); 124 109 } 110 + 111 + gpmc_update_nand_reg(&gpmc_nand_data->reg, gpmc_nand_data->cs); 125 112 126 113 err = platform_device_register(&gpmc_nand_device); 127 114 if (err < 0) {
+22 -2
arch/arm/mach-omap2/gpmc-onenand.c
··· 20 20 21 21 #include <plat/cpu.h> 22 22 #include <plat/onenand.h> 23 - #include <plat/board.h> 24 23 #include <plat/gpmc.h> 25 24 25 + #define ONENAND_IO_SIZE SZ_128K 26 + 26 27 static struct omap_onenand_platform_data *gpmc_onenand_data; 28 + 29 + static struct resource gpmc_onenand_resource = { 30 + .flags = IORESOURCE_MEM, 31 + }; 27 32 28 33 static struct platform_device gpmc_onenand_device = { 29 34 .name = "omap2-onenand", 30 35 .id = -1, 36 + .num_resources = 1, 37 + .resource = &gpmc_onenand_resource, 31 38 }; 32 39 33 40 static int omap2_onenand_set_async_mode(int cs, void __iomem *onenand_base) ··· 397 390 398 391 void __init gpmc_onenand_init(struct omap_onenand_platform_data *_onenand_data) 399 392 { 393 + int err; 394 + 400 395 gpmc_onenand_data = _onenand_data; 401 396 gpmc_onenand_data->onenand_setup = gpmc_onenand_setup; 402 397 gpmc_onenand_device.dev.platform_data = gpmc_onenand_data; ··· 410 401 gpmc_onenand_data->flags |= ONENAND_SYNC_READ; 411 402 } 412 403 404 + err = gpmc_cs_request(gpmc_onenand_data->cs, ONENAND_IO_SIZE, 405 + (unsigned long *)&gpmc_onenand_resource.start); 406 + if (err < 0) { 407 + pr_err("%s: Cannot request GPMC CS\n", __func__); 408 + return; 409 + } 410 + 411 + gpmc_onenand_resource.end = gpmc_onenand_resource.start + 412 + ONENAND_IO_SIZE - 1; 413 + 413 414 if (platform_device_register(&gpmc_onenand_device) < 0) { 414 - printk(KERN_ERR "Unable to register OneNAND device\n"); 415 + pr_err("%s: Unable to register OneNAND device\n", __func__); 416 + gpmc_cs_free(gpmc_onenand_data->cs); 415 417 return; 416 418 } 417 419 }
-1
arch/arm/mach-omap2/gpmc-smc91x.c
··· 17 17 #include <linux/io.h> 18 18 #include <linux/smc91x.h> 19 19 20 - #include <plat/board.h> 21 20 #include <plat/gpmc.h> 22 21 #include <plat/gpmc-smc91x.h> 23 22
-1
arch/arm/mach-omap2/gpmc-smsc911x.c
··· 20 20 #include <linux/io.h> 21 21 #include <linux/smsc911x.h> 22 22 23 - #include <plat/board.h> 24 23 #include <plat/gpmc.h> 25 24 #include <plat/gpmc-smsc911x.h> 26 25
+139 -17
arch/arm/mach-omap2/gpmc.c
··· 78 78 #define ENABLE_PREFETCH (0x1 << 7) 79 79 #define DMA_MPU_MODE 2 80 80 81 + /* XXX: Only NAND irq has been considered,currently these are the only ones used 82 + */ 83 + #define GPMC_NR_IRQ 2 84 + 85 + struct gpmc_client_irq { 86 + unsigned irq; 87 + u32 bitmask; 88 + }; 89 + 81 90 /* Structure to save gpmc cs context */ 82 91 struct gpmc_cs_config { 83 92 u32 config1; ··· 113 104 u32 prefetch_control; 114 105 struct gpmc_cs_config cs_context[GPMC_CS_NUM]; 115 106 }; 107 + 108 + static struct gpmc_client_irq gpmc_client_irq[GPMC_NR_IRQ]; 109 + static struct irq_chip gpmc_irq_chip; 110 + static unsigned gpmc_irq_start; 116 111 117 112 static struct resource gpmc_mem_root; 118 113 static struct resource gpmc_cs_mem[GPMC_CS_NUM]; ··· 695 682 } 696 683 EXPORT_SYMBOL(gpmc_prefetch_reset); 697 684 685 + void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs) 686 + { 687 + reg->gpmc_status = gpmc_base + GPMC_STATUS; 688 + reg->gpmc_nand_command = gpmc_base + GPMC_CS0_OFFSET + 689 + GPMC_CS_NAND_COMMAND + GPMC_CS_SIZE * cs; 690 + reg->gpmc_nand_address = gpmc_base + GPMC_CS0_OFFSET + 691 + GPMC_CS_NAND_ADDRESS + GPMC_CS_SIZE * cs; 692 + reg->gpmc_nand_data = gpmc_base + GPMC_CS0_OFFSET + 693 + GPMC_CS_NAND_DATA + GPMC_CS_SIZE * cs; 694 + reg->gpmc_prefetch_config1 = gpmc_base + GPMC_PREFETCH_CONFIG1; 695 + reg->gpmc_prefetch_config2 = gpmc_base + GPMC_PREFETCH_CONFIG2; 696 + reg->gpmc_prefetch_control = gpmc_base + GPMC_PREFETCH_CONTROL; 697 + reg->gpmc_prefetch_status = gpmc_base + GPMC_PREFETCH_STATUS; 698 + reg->gpmc_ecc_config = gpmc_base + GPMC_ECC_CONFIG; 699 + reg->gpmc_ecc_control = gpmc_base + GPMC_ECC_CONTROL; 700 + reg->gpmc_ecc_size_config = gpmc_base + GPMC_ECC_SIZE_CONFIG; 701 + reg->gpmc_ecc1_result = gpmc_base + GPMC_ECC1_RESULT; 702 + reg->gpmc_bch_result0 = gpmc_base + GPMC_ECC_BCH_RESULT_0; 703 + } 704 + 705 + int gpmc_get_client_irq(unsigned irq_config) 706 + { 707 + int i; 708 + 709 + if (hweight32(irq_config) > 1) 710 + return 0; 711 + 712 + for (i = 0; i < GPMC_NR_IRQ; i++) 713 + if (gpmc_client_irq[i].bitmask & irq_config) 714 + return gpmc_client_irq[i].irq; 715 + 716 + return 0; 717 + } 718 + 719 + static int gpmc_irq_endis(unsigned irq, bool endis) 720 + { 721 + int i; 722 + u32 regval; 723 + 724 + for (i = 0; i < GPMC_NR_IRQ; i++) 725 + if (irq == gpmc_client_irq[i].irq) { 726 + regval = gpmc_read_reg(GPMC_IRQENABLE); 727 + if (endis) 728 + regval |= gpmc_client_irq[i].bitmask; 729 + else 730 + regval &= ~gpmc_client_irq[i].bitmask; 731 + gpmc_write_reg(GPMC_IRQENABLE, regval); 732 + break; 733 + } 734 + 735 + return 0; 736 + } 737 + 738 + static void gpmc_irq_disable(struct irq_data *p) 739 + { 740 + gpmc_irq_endis(p->irq, false); 741 + } 742 + 743 + static void gpmc_irq_enable(struct irq_data *p) 744 + { 745 + gpmc_irq_endis(p->irq, true); 746 + } 747 + 748 + static void gpmc_irq_noop(struct irq_data *data) { } 749 + 750 + static unsigned int gpmc_irq_noop_ret(struct irq_data *data) { return 0; } 751 + 752 + static int gpmc_setup_irq(int gpmc_irq) 753 + { 754 + int i; 755 + u32 regval; 756 + 757 + if (!gpmc_irq) 758 + return -EINVAL; 759 + 760 + gpmc_irq_start = irq_alloc_descs(-1, 0, GPMC_NR_IRQ, 0); 761 + if (IS_ERR_VALUE(gpmc_irq_start)) { 762 + pr_err("irq_alloc_descs failed\n"); 763 + return gpmc_irq_start; 764 + } 765 + 766 + gpmc_irq_chip.name = "gpmc"; 767 + gpmc_irq_chip.irq_startup = gpmc_irq_noop_ret; 768 + gpmc_irq_chip.irq_enable = gpmc_irq_enable; 769 + gpmc_irq_chip.irq_disable = gpmc_irq_disable; 770 + gpmc_irq_chip.irq_shutdown = gpmc_irq_noop; 771 + gpmc_irq_chip.irq_ack = gpmc_irq_noop; 772 + gpmc_irq_chip.irq_mask = gpmc_irq_noop; 773 + gpmc_irq_chip.irq_unmask = gpmc_irq_noop; 774 + 775 + gpmc_client_irq[0].bitmask = GPMC_IRQ_FIFOEVENTENABLE; 776 + gpmc_client_irq[1].bitmask = GPMC_IRQ_COUNT_EVENT; 777 + 778 + for (i = 0; i < GPMC_NR_IRQ; i++) { 779 + gpmc_client_irq[i].irq = gpmc_irq_start + i; 780 + irq_set_chip_and_handler(gpmc_client_irq[i].irq, 781 + &gpmc_irq_chip, handle_simple_irq); 782 + set_irq_flags(gpmc_client_irq[i].irq, 783 + IRQF_VALID | IRQF_NOAUTOEN); 784 + } 785 + 786 + /* Disable interrupts */ 787 + gpmc_write_reg(GPMC_IRQENABLE, 0); 788 + 789 + /* clear interrupts */ 790 + regval = gpmc_read_reg(GPMC_IRQSTATUS); 791 + gpmc_write_reg(GPMC_IRQSTATUS, regval); 792 + 793 + return request_irq(gpmc_irq, gpmc_handle_irq, 0, "gpmc", NULL); 794 + } 795 + 698 796 static void __init gpmc_mem_init(void) 699 797 { 700 798 int cs; ··· 835 711 836 712 static int __init gpmc_init(void) 837 713 { 838 - u32 l, irq; 839 - int cs, ret = -EINVAL; 714 + u32 l; 715 + int ret = -EINVAL; 840 716 int gpmc_irq; 841 717 char *ck = NULL; 842 718 ··· 885 761 gpmc_write_reg(GPMC_SYSCONFIG, l); 886 762 gpmc_mem_init(); 887 763 888 - /* initalize the irq_chained */ 889 - irq = OMAP_GPMC_IRQ_BASE; 890 - for (cs = 0; cs < GPMC_CS_NUM; cs++) { 891 - irq_set_chip_and_handler(irq, &dummy_irq_chip, 892 - handle_simple_irq); 893 - set_irq_flags(irq, IRQF_VALID); 894 - irq++; 895 - } 896 - 897 - ret = request_irq(gpmc_irq, gpmc_handle_irq, IRQF_SHARED, "gpmc", NULL); 764 + ret = gpmc_setup_irq(gpmc_irq); 898 765 if (ret) 899 766 pr_err("gpmc: irq-%d could not claim: err %d\n", 900 767 gpmc_irq, ret); ··· 895 780 896 781 static irqreturn_t gpmc_handle_irq(int irq, void *dev) 897 782 { 898 - u8 cs; 783 + int i; 784 + u32 regval; 899 785 900 - /* check cs to invoke the irq */ 901 - cs = ((gpmc_read_reg(GPMC_PREFETCH_CONFIG1)) >> CS_NUM_SHIFT) & 0x7; 902 - if (OMAP_GPMC_IRQ_BASE+cs <= OMAP_GPMC_IRQ_END) 903 - generic_handle_irq(OMAP_GPMC_IRQ_BASE+cs); 786 + regval = gpmc_read_reg(GPMC_IRQSTATUS); 787 + 788 + if (!regval) 789 + return IRQ_NONE; 790 + 791 + for (i = 0; i < GPMC_NR_IRQ; i++) 792 + if (regval & gpmc_client_irq[i].bitmask) 793 + generic_handle_irq(gpmc_client_irq[i].irq); 794 + 795 + gpmc_write_reg(GPMC_IRQSTATUS, regval); 796 + 797 + return IRQ_HANDLED; 798 + }
-1
arch/arm/mach-omap2/mux.h
··· 127 127 * @gpio: GPIO number 128 128 * @muxnames: available signal modes for a ball 129 129 * @balls: available balls on the package 130 - * @partition: mux partition 131 130 */ 132 131 struct omap_mux { 133 132 u16 reg_offset;
+1 -1
arch/arm/mach-omap2/opp4xxx_data.c
··· 94 94 { 95 95 int r = -ENODEV; 96 96 97 - if (!cpu_is_omap44xx()) 97 + if (!cpu_is_omap443x()) 98 98 return r; 99 99 100 100 r = omap_init_opp_table(omap44xx_opp_def_list,
-1
arch/arm/mach-omap2/pm-debug.c
··· 28 28 #include <linux/slab.h> 29 29 30 30 #include <plat/clock.h> 31 - #include <plat/board.h> 32 31 #include "powerdomain.h" 33 32 #include "clockdomain.h" 34 33 #include <plat/dmtimer.h>
-11
arch/arm/mach-omap2/pm24xx.c
··· 38 38 #include <plat/clock.h> 39 39 #include <plat/sram.h> 40 40 #include <plat/dma.h> 41 - #include <plat/board.h> 42 41 43 42 #include <mach/irqs.h> 44 43 ··· 350 351 } 351 352 352 353 prcm_setup_regs(); 353 - 354 - /* Hack to prevent MPU retention when STI console is enabled. */ 355 - { 356 - const struct omap_sti_console_config *sti; 357 - 358 - sti = omap_get_config(OMAP_TAG_STI_CONSOLE, 359 - struct omap_sti_console_config); 360 - if (sti != NULL && sti->enable) 361 - sti_console_enabled = 1; 362 - } 363 354 364 355 /* 365 356 * We copy the assembler sleep/wakeup routines to SRAM.
+5 -16
arch/arm/mach-omap2/pm34xx.c
··· 272 272 per_next_state = pwrdm_read_next_pwrst(per_pwrdm); 273 273 core_next_state = pwrdm_read_next_pwrst(core_pwrdm); 274 274 275 - if (mpu_next_state < PWRDM_POWER_ON) { 276 - pwrdm_pre_transition(mpu_pwrdm); 277 - pwrdm_pre_transition(neon_pwrdm); 278 - } 275 + pwrdm_pre_transition(NULL); 279 276 280 277 /* PER */ 281 278 if (per_next_state < PWRDM_POWER_ON) { 282 - pwrdm_pre_transition(per_pwrdm); 283 279 per_going_off = (per_next_state == PWRDM_POWER_OFF) ? 1 : 0; 284 280 omap2_gpio_prepare_for_idle(per_going_off); 285 281 } 286 282 287 283 /* CORE */ 288 284 if (core_next_state < PWRDM_POWER_ON) { 289 - pwrdm_pre_transition(core_pwrdm); 290 285 if (core_next_state == PWRDM_POWER_OFF) { 291 286 omap3_core_save_context(); 292 287 omap3_cm_save_context(); ··· 334 339 omap2_prm_clear_mod_reg_bits(OMAP3430_AUTO_OFF_MASK, 335 340 OMAP3430_GR_MOD, 336 341 OMAP3_PRM_VOLTCTRL_OFFSET); 337 - pwrdm_post_transition(core_pwrdm); 338 342 } 339 343 omap3_intc_resume_idle(); 340 344 341 - /* PER */ 342 - if (per_next_state < PWRDM_POWER_ON) { 343 - omap2_gpio_resume_after_idle(); 344 - pwrdm_post_transition(per_pwrdm); 345 - } 345 + pwrdm_post_transition(NULL); 346 346 347 - if (mpu_next_state < PWRDM_POWER_ON) { 348 - pwrdm_post_transition(mpu_pwrdm); 349 - pwrdm_post_transition(neon_pwrdm); 350 - } 347 + /* PER */ 348 + if (per_next_state < PWRDM_POWER_ON) 349 + omap2_gpio_resume_after_idle(); 351 350 } 352 351 353 352 static void omap3_pm_idle(void)
-1
arch/arm/mach-omap2/serial.c
··· 29 29 30 30 #include <plat/omap-serial.h> 31 31 #include "common.h" 32 - #include <plat/board.h> 33 32 #include <plat/dma.h> 34 33 #include <plat/omap_hwmod.h> 35 34 #include <plat/omap_device.h>
+6 -2
arch/arm/mach-omap2/sleep44xx.S
··· 56 56 * The restore function pointer is stored at CPUx_WAKEUP_NS_PA_ADDR_OFFSET. 57 57 * It returns to the caller for CPU INACTIVE and ON power states or in case 58 58 * CPU failed to transition to targeted OFF/DORMANT state. 59 + * 60 + * omap4_finish_suspend() calls v7_flush_dcache_all() which doesn't save 61 + * stack frame and it expects the caller to take care of it. Hence the entire 62 + * stack frame is saved to avoid possible stack corruption. 59 63 */ 60 64 ENTRY(omap4_finish_suspend) 61 - stmfd sp!, {lr} 65 + stmfd sp!, {r4-r12, lr} 62 66 cmp r0, #0x0 63 67 beq do_WFI @ No lowpower state, jump to WFI 64 68 ··· 230 226 skip_scu_gp_clear: 231 227 isb 232 228 dsb 233 - ldmfd sp!, {pc} 229 + ldmfd sp!, {r4-r12, pc} 234 230 ENDPROC(omap4_finish_suspend) 235 231 236 232 /*
+1
arch/arm/mach-omap2/twl-common.c
··· 67 67 const char *pmic_type, int pmic_irq, 68 68 struct twl4030_platform_data *pmic_data) 69 69 { 70 + omap_mux_init_signal("sys_nirq", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE); 70 71 strncpy(pmic_i2c_board_info.type, pmic_type, 71 72 sizeof(pmic_i2c_board_info.type)); 72 73 pmic_i2c_board_info.irq = pmic_irq;
+2 -1
arch/arm/mach-orion5x/common.c
··· 109 109 { 110 110 orion_ge00_init(eth_data, 111 111 ORION5X_ETH_PHYS_BASE, IRQ_ORION5X_ETH_SUM, 112 - IRQ_ORION5X_ETH_ERR); 112 + IRQ_ORION5X_ETH_ERR, 113 + MV643XX_TX_CSUM_DEFAULT_LIMIT); 113 114 } 114 115 115 116
+2 -1
arch/arm/mach-s3c24xx/include/mach/dma.h
··· 24 24 */ 25 25 26 26 enum dma_ch { 27 - DMACH_XD0, 27 + DMACH_DT_PROP = -1, /* not yet supported, do not use */ 28 + DMACH_XD0 = 0, 28 29 DMACH_XD1, 29 30 DMACH_SDI, 30 31 DMACH_SPI0,
+7 -6
arch/arm/mach-shmobile/board-armadillo800eva.c
··· 520 520 }; 521 521 522 522 /* GPIO KEY */ 523 - #define GPIO_KEY(c, g, d) { .code = c, .gpio = g, .desc = d, .active_low = 1 } 523 + #define GPIO_KEY(c, g, d, ...) \ 524 + { .code = c, .gpio = g, .desc = d, .active_low = 1, __VA_ARGS__ } 524 525 525 526 static struct gpio_keys_button gpio_buttons[] = { 526 - GPIO_KEY(KEY_POWER, GPIO_PORT99, "SW1"), 527 - GPIO_KEY(KEY_BACK, GPIO_PORT100, "SW2"), 528 - GPIO_KEY(KEY_MENU, GPIO_PORT97, "SW3"), 529 - GPIO_KEY(KEY_HOME, GPIO_PORT98, "SW4"), 527 + GPIO_KEY(KEY_POWER, GPIO_PORT99, "SW3", .wakeup = 1), 528 + GPIO_KEY(KEY_BACK, GPIO_PORT100, "SW4"), 529 + GPIO_KEY(KEY_MENU, GPIO_PORT97, "SW5"), 530 + GPIO_KEY(KEY_HOME, GPIO_PORT98, "SW6"), 530 531 }; 531 532 532 533 static struct gpio_keys_platform_data gpio_key_info = { ··· 902 901 &camera_device, 903 902 &ceu0_device, 904 903 &fsi_device, 905 - &fsi_hdmi_device, 906 904 &fsi_wm8978_device, 905 + &fsi_hdmi_device, 907 906 }; 908 907 909 908 static void __init eva_clock_init(void)
+2 -1
arch/arm/mach-shmobile/board-mackerel.c
··· 695 695 * - J30 "open" 696 696 * - modify usbhs1_get_id() USBHS_HOST -> USBHS_GADGET 697 697 * - add .get_vbus = usbhs_get_vbus in usbhs1_private 698 + * - check usbhs0_device(pio)/usbhs1_device(irq) order in mackerel_devices. 698 699 */ 699 700 #define IRQ8 evt2irq(0x0300) 700 701 #define USB_PHY_MODE (1 << 4) ··· 1326 1325 &nor_flash_device, 1327 1326 &smc911x_device, 1328 1327 &lcdc_device, 1329 - &usbhs1_device, 1330 1328 &usbhs0_device, 1329 + &usbhs1_device, 1331 1330 &leds_device, 1332 1331 &fsi_device, 1333 1332 &fsi_ak4643_device,
+1 -1
arch/arm/mach-shmobile/board-marzen.c
··· 67 67 68 68 static struct platform_device eth_device = { 69 69 .name = "smsc911x", 70 - .id = 0, 70 + .id = -1, 71 71 .dev = { 72 72 .platform_data = &smsc911x_platdata, 73 73 },
+2 -2
arch/arm/mach-shmobile/intc-sh73a0.c
··· 259 259 return 0; /* always allow wakeup */ 260 260 } 261 261 262 - #define RELOC_BASE 0x1000 262 + #define RELOC_BASE 0x1200 263 263 264 - /* INTCA IRQ pins at INTCS + 0x1000 to make space for GIC+INTC handling */ 264 + /* INTCA IRQ pins at INTCS + RELOC_BASE to make space for GIC+INTC handling */ 265 265 #define INTCS_VECT_RELOC(n, vect) INTCS_VECT((n), (vect) + RELOC_BASE) 266 266 267 267 INTC_IRQ_PINS_32(intca_irq_pins, 0xe6900000,
-1
arch/arm/mach-ux500/Kconfig
··· 41 41 config MACH_SNOWBALL 42 42 bool "U8500 Snowball platform" 43 43 select MACH_MOP500 44 - select LEDS_GPIO 45 44 help 46 45 Include support for the snowball development platform. 47 46
+5 -5
arch/arm/mach-ux500/board-mop500-msp.c
··· 191 191 return pdev; 192 192 } 193 193 194 - /* Platform device for ASoC U8500 machine */ 195 - static struct platform_device snd_soc_u8500 = { 196 - .name = "snd-soc-u8500", 194 + /* Platform device for ASoC MOP500 machine */ 195 + static struct platform_device snd_soc_mop500 = { 196 + .name = "snd-soc-mop500", 197 197 .id = 0, 198 198 .dev = { 199 199 .platform_data = NULL, ··· 227 227 { 228 228 struct platform_device *msp1; 229 229 230 - pr_info("%s: Register platform-device 'snd-soc-u8500'.\n", __func__); 231 - platform_device_register(&snd_soc_u8500); 230 + pr_info("%s: Register platform-device 'snd-soc-mop500'.\n", __func__); 231 + platform_device_register(&snd_soc_mop500); 232 232 233 233 pr_info("Initialize MSP I2S-devices.\n"); 234 234 db8500_add_msp_i2s(parent, 0, U8500_MSP0_BASE, IRQ_DB8500_MSP0,
+4
arch/arm/mach-ux500/board-mop500.c
··· 776 776 ARRAY_SIZE(mop500_platform_devs)); 777 777 778 778 mop500_sdi_init(parent); 779 + mop500_msp_init(parent); 779 780 i2c0_devs = ARRAY_SIZE(mop500_i2c0_devices); 780 781 i2c_register_board_info(0, mop500_i2c0_devices, i2c0_devs); 781 782 i2c_register_board_info(2, mop500_i2c2_devices, ··· 784 783 785 784 mop500_uib_init(); 786 785 786 + } else if (of_machine_is_compatible("calaosystems,snowball-a9500")) { 787 + mop500_msp_init(parent); 787 788 } else if (of_machine_is_compatible("st-ericsson,hrefv60+")) { 788 789 /* 789 790 * The HREFv60 board removed a GPIO expander and routed ··· 797 794 ARRAY_SIZE(mop500_platform_devs)); 798 795 799 796 hrefv60_sdi_init(parent); 797 + mop500_msp_init(parent); 800 798 801 799 i2c0_devs = ARRAY_SIZE(mop500_i2c0_devices); 802 800 i2c0_devs -= NUM_PRE_V60_I2C0_DEVICES;
+104 -10
arch/arm/mm/dma-mapping.c
··· 267 267 vunmap(cpu_addr); 268 268 } 269 269 270 + #define DEFAULT_DMA_COHERENT_POOL_SIZE SZ_256K 271 + 270 272 struct dma_pool { 271 273 size_t size; 272 274 spinlock_t lock; 273 275 unsigned long *bitmap; 274 276 unsigned long nr_pages; 275 277 void *vaddr; 276 - struct page *page; 278 + struct page **pages; 277 279 }; 278 280 279 281 static struct dma_pool atomic_pool = { 280 - .size = SZ_256K, 282 + .size = DEFAULT_DMA_COHERENT_POOL_SIZE, 281 283 }; 282 284 283 285 static int __init early_coherent_pool(char *p) ··· 288 286 return 0; 289 287 } 290 288 early_param("coherent_pool", early_coherent_pool); 289 + 290 + void __init init_dma_coherent_pool_size(unsigned long size) 291 + { 292 + /* 293 + * Catch any attempt to set the pool size too late. 294 + */ 295 + BUG_ON(atomic_pool.vaddr); 296 + 297 + /* 298 + * Set architecture specific coherent pool size only if 299 + * it has not been changed by kernel command line parameter. 300 + */ 301 + if (atomic_pool.size == DEFAULT_DMA_COHERENT_POOL_SIZE) 302 + atomic_pool.size = size; 303 + } 291 304 292 305 /* 293 306 * Initialise the coherent pool for atomic allocations. 
··· 314 297 unsigned long nr_pages = pool->size >> PAGE_SHIFT; 315 298 unsigned long *bitmap; 316 299 struct page *page; 300 + struct page **pages; 317 301 void *ptr; 318 302 int bitmap_size = BITS_TO_LONGS(nr_pages) * sizeof(long); 319 303 ··· 322 304 if (!bitmap) 323 305 goto no_bitmap; 324 306 307 + pages = kzalloc(nr_pages * sizeof(struct page *), GFP_KERNEL); 308 + if (!pages) 309 + goto no_pages; 310 + 325 311 if (IS_ENABLED(CONFIG_CMA)) 326 312 ptr = __alloc_from_contiguous(NULL, pool->size, prot, &page); 327 313 else 328 314 ptr = __alloc_remap_buffer(NULL, pool->size, GFP_KERNEL, prot, 329 315 &page, NULL); 330 316 if (ptr) { 317 + int i; 318 + 319 + for (i = 0; i < nr_pages; i++) 320 + pages[i] = page + i; 321 + 331 322 spin_lock_init(&pool->lock); 332 323 pool->vaddr = ptr; 333 - pool->page = page; 324 + pool->pages = pages; 334 325 pool->bitmap = bitmap; 335 326 pool->nr_pages = nr_pages; 336 327 pr_info("DMA: preallocated %u KiB pool for atomic coherent allocations\n", 337 328 (unsigned)pool->size / 1024); 338 329 return 0; 339 330 } 331 + no_pages: 340 332 kfree(bitmap); 341 333 no_bitmap: 342 334 pr_err("DMA: failed to allocate %u KiB pool for atomic coherent allocation\n", ··· 471 443 if (pageno < pool->nr_pages) { 472 444 bitmap_set(pool->bitmap, pageno, count); 473 445 ptr = pool->vaddr + PAGE_SIZE * pageno; 474 - *ret_page = pool->page + pageno; 446 + *ret_page = pool->pages[pageno]; 447 + } else { 448 + pr_err_once("ERROR: %u KiB atomic DMA coherent pool is too small!\n" 449 + "Please increase it with coherent_pool= kernel parameter!\n", 450 + (unsigned)pool->size / 1024); 475 451 } 476 452 spin_unlock_irqrestore(&pool->lock, flags); 477 453 478 454 return ptr; 455 + } 456 +
457 + static bool __in_atomic_pool(void *start, size_t size) 458 + { 459 + struct dma_pool *pool = &atomic_pool; 460 + void *end = start + size; 461 + void *pool_start = pool->vaddr; 462 + void *pool_end = pool->vaddr + pool->size; 463 + 464 + if (start < pool_start || start > pool_end) 465 + return false; 466 + 467 + if (end <= pool_end) 468 + return true; 469 + 470 + WARN(1, "Wrong coherent size(%p-%p) from atomic pool(%p-%p)\n", 471 + start, end - 1, pool_start, pool_end - 1); 472 + 473 + return false; 479 474 } 480 475 481 476 static int __free_from_pool(void *start, size_t size) ··· 507 456 unsigned long pageno, count; 508 457 unsigned long flags; 509 458 510 - if (start < pool->vaddr || start > pool->vaddr + pool->size) 459 + if (!__in_atomic_pool(start, size)) 511 460 return 0; 512 - 513 - if (start + size > pool->vaddr + pool->size) { 514 - WARN(1, "freeing wrong coherent size from pool\n"); 515 - return 0; 516 - } 517 461 518 462 pageno = (start - pool->vaddr) >> PAGE_SHIFT; 519 463 count = size >> PAGE_SHIFT; ··· 1136 1090 return 0; 1137 1091 } 1138 1092 1093 + static struct page **__atomic_get_pages(void *addr) 1094 + { 1095 + struct dma_pool *pool = &atomic_pool; 1096 + struct page **pages = pool->pages; 1097 + int offs = (addr - pool->vaddr) >> PAGE_SHIFT; 1098 + 1099 + return pages + offs; 1100 + } 1101 + 1139 1102 static struct page **__iommu_get_pages(void *cpu_addr, struct dma_attrs *attrs) 1140 1103 { 1141 1104 struct vm_struct *area; 1105 + 1106 + if (__in_atomic_pool(cpu_addr, PAGE_SIZE)) 1107 + return __atomic_get_pages(cpu_addr); 1142 1108 1143 1109 if (dma_get_attr(DMA_ATTR_NO_KERNEL_MAPPING, attrs)) 1144 1110 return cpu_addr; ··· 1159 1101 if (area && (area->flags & VM_ARM_DMA_CONSISTENT)) 1160 1102 return area->pages; 1161 1103 return NULL; 1104 + } 1105 + 1106 + static void *__iommu_alloc_atomic(struct device *dev, size_t size, 1107 + dma_addr_t *handle) 1108 + { 1109 + struct page *page; 1110 + void *addr; 1111 + 1112 + addr = __alloc_from_pool(size, &page); 1113 + if (!addr) 1114 + return NULL; 1115 + 1116 + *handle = __iommu_create_mapping(dev, &page, size); 1117 + if (*handle == DMA_ERROR_CODE) 1118 + goto err_mapping; 1119 + 1120 + return addr; 1121 + 1122 + err_mapping: 1123 + __free_from_pool(addr, size);
1124 + return NULL; 1125 + } 1126 + 1127 + static void __iommu_free_atomic(struct device *dev, struct page **pages, 1128 + dma_addr_t handle, size_t size) 1129 + { 1130 + __iommu_remove_mapping(dev, handle, size); 1131 + __free_from_pool(page_address(pages[0]), size); 1162 1132 } 1163 1133 1164 1134 static void *arm_iommu_alloc_attrs(struct device *dev, size_t size, ··· 1198 1112 1199 1113 *handle = DMA_ERROR_CODE; 1200 1114 size = PAGE_ALIGN(size); 1115 + 1116 + if (gfp & GFP_ATOMIC) 1117 + return __iommu_alloc_atomic(dev, size, handle); 1201 1118 1202 1119 pages = __iommu_alloc_buffer(dev, size, gfp); 1203 1120 if (!pages) ··· 1265 1176 1266 1177 if (!pages) { 1267 1178 WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr); 1179 + return; 1180 + } 1181 + 1182 + if (__in_atomic_pool(cpu_addr, size)) { 1183 + __iommu_free_atomic(dev, pages, handle, size); 1268 1184 return; 1269 1185 } 1270 1186
-40
arch/arm/plat-omap/common.c
··· 17 17 #include <linux/dma-mapping.h> 18 18 19 19 #include <plat/common.h> 20 - #include <plat/board.h> 21 20 #include <plat/vram.h> 22 21 #include <plat/dsp.h> 23 22 #include <plat/dma.h> 24 23 25 24 #include <plat/omap-secure.h> 26 - 27 - 28 - #define NO_LENGTH_CHECK 0xffffffff 29 - 30 - struct omap_board_config_kernel *omap_board_config __initdata; 31 - int omap_board_config_size; 32 - 33 - static const void *__init get_config(u16 tag, size_t len, 34 - int skip, size_t *len_out) 35 - { 36 - struct omap_board_config_kernel *kinfo = NULL; 37 - int i; 38 - 39 - /* Try to find the config from the board-specific structures 40 - * in the kernel. */ 41 - for (i = 0; i < omap_board_config_size; i++) { 42 - if (omap_board_config[i].tag == tag) { 43 - if (skip == 0) { 44 - kinfo = &omap_board_config[i]; 45 - break; 46 - } else { 47 - skip--; 48 - } 49 - } 50 - } 51 - if (kinfo == NULL) 52 - return NULL; 53 - return kinfo->data; 54 - } 55 - 56 - const void *__init __omap_get_config(u16 tag, size_t len, int nr) 57 - { 58 - return get_config(tag, len, nr, NULL); 59 - } 60 - 61 - const void *__init omap_get_var_config(u16 tag, size_t *len) 62 - { 63 - return get_config(tag, NO_LENGTH_CHECK, 0, len); 64 - } 65 25 66 26 void __init omap_reserve(void) 67 27 {
-1
arch/arm/plat-omap/counter_32k.c
··· 24 24 25 25 #include <plat/hardware.h> 26 26 #include <plat/common.h> 27 - #include <plat/board.h> 28 27 29 28 #include <plat/clock.h> 30 29
-3
arch/arm/plat-omap/debug-devices.c
··· 17 17 18 18 #include <mach/hardware.h> 19 19 20 - #include <plat/board.h> 21 - 22 - 23 20 /* Many OMAP development platforms reuse the same "debug board"; these 24 21 * platforms include H2, H3, H4, and Perseus2. 25 22 */
-1
arch/arm/plat-omap/devices.c
··· 23 23 #include <asm/memblock.h> 24 24 25 25 #include <plat/tc.h> 26 - #include <plat/board.h> 27 26 #include <plat/mmc.h> 28 27 #include <plat/menelaus.h> 29 28 #include <plat/omap44xx.h>
+3 -3
arch/arm/plat-omap/dmtimer.c
··· 189 189 timer->reserved = 1; 190 190 break; 191 191 } 192 + spin_unlock_irqrestore(&dm_timer_lock, flags); 192 193 193 194 if (timer) { 194 195 ret = omap_dm_timer_prepare(timer); ··· 198 197 timer = NULL; 199 198 } 200 199 } 201 - spin_unlock_irqrestore(&dm_timer_lock, flags); 202 200 203 201 if (!timer) 204 202 pr_debug("%s: timer request failed!\n", __func__); ··· 220 220 break; 221 221 } 222 222 } 223 + spin_unlock_irqrestore(&dm_timer_lock, flags); 223 224 224 225 if (timer) { 225 226 ret = omap_dm_timer_prepare(timer); ··· 229 228 timer = NULL; 230 229 } 231 230 } 232 - spin_unlock_irqrestore(&dm_timer_lock, flags); 233 231 234 232 if (!timer) 235 233 pr_debug("%s: timer%d request failed!\n", __func__, id); ··· 258 258 259 259 void omap_dm_timer_disable(struct omap_dm_timer *timer) 260 260 { 261 - pm_runtime_put(&timer->pdev->dev); 261 + pm_runtime_put_sync(&timer->pdev->dev); 262 262 } 263 263 EXPORT_SYMBOL_GPL(omap_dm_timer_disable); 264 264
-2
arch/arm/plat-omap/fb.c
··· 33 33 #include <mach/hardware.h> 34 34 #include <asm/mach/map.h> 35 35 36 - #include <plat/board.h> 37 - 38 36 #if defined(CONFIG_FB_OMAP) || defined(CONFIG_FB_OMAP_MODULE) 39 37 40 38 static bool omapfb_lcd_configured;
-138
arch/arm/plat-omap/include/plat/board.h
··· 1 - /* 2 - * arch/arm/plat-omap/include/mach/board.h 3 - * 4 - * Information structures for board-specific data 5 - * 6 - * Copyright (C) 2004 Nokia Corporation 7 - * Written by Juha Yrjölä <juha.yrjola@nokia.com> 8 - */ 9 - 10 - #ifndef _OMAP_BOARD_H 11 - #define _OMAP_BOARD_H 12 - 13 - #include <linux/types.h> 14 - 15 - #include <plat/gpio-switch.h> 16 - 17 - /* 18 - * OMAP35x EVM revision 19 - * Run time detection of EVM revision is done by reading Ethernet 20 - * PHY ID - 21 - * GEN_1 = 0x01150000 22 - * GEN_2 = 0x92200000 23 - */ 24 - enum { 25 - OMAP3EVM_BOARD_GEN_1 = 0, /* EVM Rev between A - D */ 26 - OMAP3EVM_BOARD_GEN_2, /* EVM Rev >= Rev E */ 27 - }; 28 - 29 - /* Different peripheral ids */ 30 - #define OMAP_TAG_CLOCK 0x4f01 31 - #define OMAP_TAG_GPIO_SWITCH 0x4f06 32 - #define OMAP_TAG_STI_CONSOLE 0x4f09 33 - #define OMAP_TAG_CAMERA_SENSOR 0x4f0a 34 - 35 - #define OMAP_TAG_BOOT_REASON 0x4f80 36 - #define OMAP_TAG_FLASH_PART 0x4f81 37 - #define OMAP_TAG_VERSION_STR 0x4f82 38 - 39 - struct omap_clock_config { 40 - /* 0 for 12 MHz, 1 for 13 MHz and 2 for 19.2 MHz */ 41 - u8 system_clock_type; 42 - }; 43 - 44 - struct omap_serial_console_config { 45 - u8 console_uart; 46 - u32 console_speed; 47 - }; 48 - 49 - struct omap_sti_console_config { 50 - unsigned enable:1; 51 - u8 channel; 52 - }; 53 - 54 - struct omap_camera_sensor_config { 55 - u16 reset_gpio; 56 - int (*power_on)(void * data); 57 - int (*power_off)(void * data); 58 - }; 59 - 60 - struct omap_lcd_config { 61 - char panel_name[16]; 62 - char ctrl_name[16]; 63 - s16 nreset_gpio; 64 - u8 data_lines; 65 - }; 66 - 67 - struct device; 68 - struct fb_info; 69 - struct omap_backlight_config { 70 - int default_intensity; 71 - int (*set_power)(struct device *dev, int state); 72 - }; 73 - 74 - struct omap_fbmem_config { 75 - u32 start; 76 - u32 size; 77 - }; 78 -
79 - struct omap_pwm_led_platform_data { 80 - const char *name; 81 - int intensity_timer; 82 - int blink_timer; 83 - void (*set_power)(struct omap_pwm_led_platform_data *self, int on_off); 84 - }; 85 - 86 - struct omap_uart_config { 87 - /* Bit field of UARTs present; bit 0 --> UART1 */ 88 - unsigned int enabled_uarts; 89 - }; 90 - 91 - 92 - struct omap_flash_part_config { 93 - char part_table[0]; 94 - }; 95 - 96 - struct omap_boot_reason_config { 97 - char reason_str[12]; 98 - }; 99 - 100 - struct omap_version_config { 101 - char component[12]; 102 - char version[12]; 103 - }; 104 - 105 - struct omap_board_config_entry { 106 - u16 tag; 107 - u16 len; 108 - u8 data[0]; 109 - }; 110 - 111 - struct omap_board_config_kernel { 112 - u16 tag; 113 - const void *data; 114 - }; 115 - 116 - extern const void *__init __omap_get_config(u16 tag, size_t len, int nr); 117 - 118 - #define omap_get_config(tag, type) \ 119 - ((const type *) __omap_get_config((tag), sizeof(type), 0)) 120 - #define omap_get_nr_config(tag, type, nr) \ 121 - ((const type *) __omap_get_config((tag), sizeof(type), (nr))) 122 - 123 - extern const void *__init omap_get_var_config(u16 tag, size_t *len); 124 - 125 - extern struct omap_board_config_kernel *omap_board_config; 126 - extern int omap_board_config_size; 127 - 128 - 129 - /* for TI reference platforms sharing the same debug card */ 130 - extern int debug_card_init(u32 addr, unsigned gpio); 131 - 132 - /* OMAP3EVM revision */ 133 - #if defined(CONFIG_MACH_OMAP3EVM) 134 - u8 get_omap3_evm_rev(void); 135 - #else 136 - #define get_omap3_evm_rev() (-EINVAL) 137 - #endif 138 - #endif
+2 -1
arch/arm/plat-omap/include/plat/cpu.h
··· 372 372 #define cpu_class_is_omap1() (cpu_is_omap7xx() || cpu_is_omap15xx() || \ 373 373 cpu_is_omap16xx()) 374 374 #define cpu_class_is_omap2() (cpu_is_omap24xx() || cpu_is_omap34xx() || \ 375 - cpu_is_omap44xx() || soc_is_omap54xx()) 375 + cpu_is_omap44xx() || soc_is_omap54xx() || \ 376 + soc_is_am33xx()) 376 377 377 378 /* Various silicon revisions for omap2 */ 378 379 #define OMAP242X_CLASS 0x24200024
+9
arch/arm/plat-omap/include/plat/debug-devices.h
··· 1 + #ifndef _OMAP_DEBUG_DEVICES_H 2 + #define _OMAP_DEBUG_DEVICES_H 3 + 4 + #include <linux/types.h> 5 + 6 + /* for TI reference platforms sharing the same debug card */ 7 + extern int debug_card_init(u32 addr, unsigned gpio); 8 + 9 + #endif
+19
arch/arm/plat-omap/include/plat/gpmc.h
··· 133 133 u16 wr_data_mux_bus; /* WRDATAONADMUXBUS */ 134 134 }; 135 135 136 + struct gpmc_nand_regs { 137 + void __iomem *gpmc_status; 138 + void __iomem *gpmc_nand_command; 139 + void __iomem *gpmc_nand_address; 140 + void __iomem *gpmc_nand_data; 141 + void __iomem *gpmc_prefetch_config1; 142 + void __iomem *gpmc_prefetch_config2; 143 + void __iomem *gpmc_prefetch_control; 144 + void __iomem *gpmc_prefetch_status; 145 + void __iomem *gpmc_ecc_config; 146 + void __iomem *gpmc_ecc_control; 147 + void __iomem *gpmc_ecc_size_config; 148 + void __iomem *gpmc_ecc1_result; 149 + void __iomem *gpmc_bch_result0; 150 + }; 151 + 152 + extern void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs); 153 + extern int gpmc_get_client_irq(unsigned irq_config); 154 + 136 155 extern unsigned int gpmc_ns_to_ticks(unsigned int time_ns); 137 156 extern unsigned int gpmc_ps_to_ticks(unsigned int time_ps); 138 157 extern unsigned int gpmc_ticks_to_ns(unsigned int ticks);
-1
arch/arm/plat-omap/include/plat/mmc.h
··· 15 15 #include <linux/device.h> 16 16 #include <linux/mmc/host.h> 17 17 18 - #include <plat/board.h> 19 18 #include <plat/omap_hwmod.h> 20 19 21 20 #define OMAP15XX_NR_MMC 1
+9
arch/arm/plat-omap/include/plat/multi.h
··· 108 108 # endif 109 109 #endif 110 110 111 + #ifdef CONFIG_SOC_AM33XX 112 + # ifdef OMAP_NAME 113 + # undef MULTI_OMAP2 114 + # define MULTI_OMAP2 115 + # else 116 + # define OMAP_NAME am33xx 117 + # endif 118 + #endif 119 + 111 120 #endif /* __PLAT_OMAP_MULTI_H */
+1 -1
arch/arm/plat-omap/include/plat/nand.h
··· 26 26 bool dev_ready; 27 27 int gpmc_irq; 28 28 enum nand_io xfer_type; 29 - unsigned long phys_base; 30 29 int devsize; 31 30 enum omap_ecc ecc_opt; 31 + struct gpmc_nand_regs reg; 32 32 }; 33 33 34 34 /* minimum size for IO mapping */
+1 -3
arch/arm/plat-omap/include/plat/uncompress.h
··· 110 110 _DEBUG_LL_ENTRY(mach, AM33XX_UART##p##_BASE, OMAP_PORT_SHIFT, \ 111 111 AM33XXUART##p) 112 112 113 - static inline void __arch_decomp_setup(unsigned long arch_id) 113 + static inline void arch_decomp_setup(void) 114 114 { 115 115 int port = 0; 116 116 ··· 197 197 DEBUG_LL_AM33XX(1, am335xevm); 198 198 } while (0); 199 199 } 200 - 201 - #define arch_decomp_setup() __arch_decomp_setup(arch_id) 202 200 203 201 /* 204 202 * nothing to do
-1
arch/arm/plat-omap/include/plat/usb.h
··· 5 5 6 6 #include <linux/io.h> 7 7 #include <linux/usb/musb.h> 8 - #include <plat/board.h> 9 8 10 9 #define OMAP3_HS_USB_PORTS 3 11 10
-1
arch/arm/plat-omap/sram.c
··· 26 26 #include <asm/mach/map.h> 27 27 28 28 #include <plat/sram.h> 29 - #include <plat/board.h> 30 29 #include <plat/cpu.h> 31 30 32 31 #include "sram.h"
+6 -2
arch/arm/plat-orion/common.c
··· 291 291 void __init orion_ge00_init(struct mv643xx_eth_platform_data *eth_data, 292 292 unsigned long mapbase, 293 293 unsigned long irq, 294 - unsigned long irq_err) 294 + unsigned long irq_err, 295 + unsigned int tx_csum_limit) 295 296 { 296 297 fill_resources(&orion_ge00_shared, orion_ge00_shared_resources, 297 298 mapbase + 0x2000, SZ_16K - 1, irq_err); 299 + orion_ge00_shared_data.tx_csum_limit = tx_csum_limit; 298 300 ge_complete(&orion_ge00_shared_data, 299 301 orion_ge00_resources, irq, &orion_ge00_shared, 300 302 eth_data, &orion_ge00); ··· 345 343 void __init orion_ge01_init(struct mv643xx_eth_platform_data *eth_data, 346 344 unsigned long mapbase, 347 345 unsigned long irq, 348 - unsigned long irq_err) 346 + unsigned long irq_err, 347 + unsigned int tx_csum_limit) 349 348 { 350 349 fill_resources(&orion_ge01_shared, orion_ge01_shared_resources, 351 350 mapbase + 0x2000, SZ_16K - 1, irq_err); 351 + orion_ge01_shared_data.tx_csum_limit = tx_csum_limit; 352 352 ge_complete(&orion_ge01_shared_data, 353 353 orion_ge01_resources, irq, &orion_ge01_shared, 354 354 eth_data, &orion_ge01);
+4 -2
arch/arm/plat-orion/include/plat/common.h
··· 39 39 void __init orion_ge00_init(struct mv643xx_eth_platform_data *eth_data, 40 40 unsigned long mapbase, 41 41 unsigned long irq, 42 - unsigned long irq_err); 42 + unsigned long irq_err, 43 + unsigned int tx_csum_limit); 43 44 44 45 void __init orion_ge01_init(struct mv643xx_eth_platform_data *eth_data, 45 46 unsigned long mapbase, 46 47 unsigned long irq, 47 - unsigned long irq_err); 48 + unsigned long irq_err, 49 + unsigned int tx_csum_limit); 48 50 49 51 void __init orion_ge10_init(struct mv643xx_eth_platform_data *eth_data, 50 52 unsigned long mapbase,
+1 -1
arch/arm/plat-s3c24xx/dma.c
··· 430 430 * when necessary. 431 431 */ 432 432 433 - int s3c2410_dma_enqueue(unsigned int channel, void *id, 433 + int s3c2410_dma_enqueue(enum dma_ch channel, void *id, 434 434 dma_addr_t data, int size) 435 435 { 436 436 struct s3c2410_dma_chan *chan = s3c_dma_lookup_channel(channel);
+28 -1
arch/arm/plat-samsung/devs.c
··· 32 32 #include <linux/platform_data/s3c-hsudc.h> 33 33 #include <linux/platform_data/s3c-hsotg.h> 34 34 35 + #include <media/s5p_hdmi.h> 36 + 35 37 #include <asm/irq.h> 36 38 #include <asm/pmu.h> 37 39 #include <asm/mach/arch.h> ··· 750 748 if (!pd) { 751 749 pd = &default_i2c_data; 752 750 753 - if (soc_is_exynos4210()) 751 + if (soc_is_exynos4210() || 752 + soc_is_exynos4212() || soc_is_exynos4412()) 754 753 pd->bus_num = 8; 755 754 else if (soc_is_s5pv210()) 756 755 pd->bus_num = 3; ··· 762 759 npd = s3c_set_platdata(pd, sizeof(struct s3c2410_platform_i2c), 763 760 &s5p_device_i2c_hdmiphy); 764 761 } 762 + 763 + struct s5p_hdmi_platform_data s5p_hdmi_def_platdata; 764 + 765 + void __init s5p_hdmi_set_platdata(struct i2c_board_info *hdmiphy_info, 766 + struct i2c_board_info *mhl_info, int mhl_bus) 767 + { 768 + struct s5p_hdmi_platform_data *pd = &s5p_hdmi_def_platdata; 769 + 770 + if (soc_is_exynos4210() || 771 + soc_is_exynos4212() || soc_is_exynos4412()) 772 + pd->hdmiphy_bus = 8; 773 + else if (soc_is_s5pv210()) 774 + pd->hdmiphy_bus = 3; 775 + else 776 + pd->hdmiphy_bus = 0; 777 + 778 + pd->hdmiphy_info = hdmiphy_info; 779 + pd->mhl_info = mhl_info; 780 + pd->mhl_bus = mhl_bus; 781 + 782 + s3c_set_platdata(pd, sizeof(struct s5p_hdmi_platform_data), 783 + &s5p_device_hdmi); 784 + } 785 + 765 786 #endif /* CONFIG_S5P_DEV_I2C_HDMIPHY */ 766 787 767 788 /* I2S */
+16
arch/arm/plat-samsung/include/plat/hdmi.h
··· 1 + /* 2 + * Copyright (C) 2012 Samsung Electronics Co.Ltd 3 + * 4 + * This program is free software; you can redistribute it and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation; either version 2 of the License, or (at your 7 + * option) any later version. 8 + */ 9 + 10 + #ifndef __PLAT_SAMSUNG_HDMI_H 11 + #define __PLAT_SAMSUNG_HDMI_H __FILE__ 12 + 13 + extern void s5p_hdmi_set_platdata(struct i2c_board_info *hdmiphy_info, 14 + struct i2c_board_info *mhl_info, int mhl_bus); 15 + 16 + #endif /* __PLAT_SAMSUNG_HDMI_H */
+1 -1
arch/arm/plat-samsung/pm.c
··· 74 74 75 75 #ifdef CONFIG_SAMSUNG_PM_DEBUG 76 76 77 - struct pm_uart_save uart_save[CONFIG_SERIAL_SAMSUNG_UARTS]; 77 + static struct pm_uart_save uart_save[CONFIG_SERIAL_SAMSUNG_UARTS]; 78 78 79 79 static void s3c_pm_save_uart(unsigned int uart, struct pm_uart_save *save) 80 80 {
+1
arch/mips/Kconfig
··· 89 89 select CEVT_R4K 90 90 select CSRC_R4K 91 91 select DMA_NONCOHERENT 92 + select HAVE_CLK 92 93 select IRQ_CPU 93 94 select MIPS_MACHINE 94 95 select SYS_HAS_CPU_MIPS32_R2
+2
arch/mips/alchemy/board-mtx1.c
··· 228 228 * adapter on the mtx-1 "singleboard" variant. It triggers a custom 229 229 * logic chip connected to EXT_IO3 (GPIO1) to suppress IDSEL signals. 230 230 */ 231 + udelay(1); 232 + 231 233 if (assert && devsel != 0) 232 234 /* Suppress signal to Cardbus */ 233 235 alchemy_gpio_set_value(1, 0); /* set EXT_IO3 OFF */
+2
arch/mips/ath79/dev-usb.c
··· 145 145 146 146 ath79_ohci_resources[0].start = AR7240_OHCI_BASE; 147 147 ath79_ohci_resources[0].end = AR7240_OHCI_BASE + AR7240_OHCI_SIZE - 1; 148 + ath79_ohci_resources[1].start = ATH79_CPU_IRQ_USB; 149 + ath79_ohci_resources[1].end = ATH79_CPU_IRQ_USB; 148 150 platform_device_register(&ath79_ohci_device); 149 151 } 150 152
+4 -2
arch/mips/ath79/gpio.c
··· 188 188 189 189 if (soc_is_ar71xx()) 190 190 ath79_gpio_count = AR71XX_GPIO_COUNT; 191 - else if (soc_is_ar724x()) 192 - ath79_gpio_count = AR724X_GPIO_COUNT; 191 + else if (soc_is_ar7240()) 192 + ath79_gpio_count = AR7240_GPIO_COUNT; 193 + else if (soc_is_ar7241() || soc_is_ar7242()) 194 + ath79_gpio_count = AR7241_GPIO_COUNT; 193 195 else if (soc_is_ar913x()) 194 196 ath79_gpio_count = AR913X_GPIO_COUNT; 195 197 else if (soc_is_ar933x())
+4
arch/mips/bcm63xx/dev-spi.c
··· 106 106 if (BCMCPU_IS_6338() || BCMCPU_IS_6348()) { 107 107 spi_resources[0].end += BCM_6338_RSET_SPI_SIZE - 1; 108 108 spi_pdata.fifo_size = SPI_6338_MSG_DATA_SIZE; 109 + spi_pdata.msg_type_shift = SPI_6338_MSG_TYPE_SHIFT; 110 + spi_pdata.msg_ctl_width = SPI_6338_MSG_CTL_WIDTH; 109 111 } 110 112 111 113 if (BCMCPU_IS_6358() || BCMCPU_IS_6368()) { 112 114 spi_resources[0].end += BCM_6358_RSET_SPI_SIZE - 1; 113 115 spi_pdata.fifo_size = SPI_6358_MSG_DATA_SIZE; 116 + spi_pdata.msg_type_shift = SPI_6358_MSG_TYPE_SHIFT; 117 + spi_pdata.msg_ctl_width = SPI_6358_MSG_CTL_WIDTH; 114 118 } 115 119 116 120 bcm63xx_spi_regs_init();
+43 -46
arch/mips/cavium-octeon/octeon-irq.c
··· 61 61 octeon_irq_ciu_to_irq[line][bit] = irq; 62 62 } 63 63 64 + static void octeon_irq_force_ciu_mapping(struct irq_domain *domain, 65 + int irq, int line, int bit) 66 + { 67 + irq_domain_associate(domain, irq, line << 6 | bit); 68 + } 69 + 64 70 static int octeon_coreid_for_cpu(int cpu) 65 71 { 66 72 #ifdef CONFIG_SMP ··· 189 183 mutex_init(&cd->core_irq_mutex); 190 184 191 185 irq = OCTEON_IRQ_SW0 + i; 192 - switch (irq) { 193 - case OCTEON_IRQ_TIMER: 194 - case OCTEON_IRQ_SW0: 195 - case OCTEON_IRQ_SW1: 196 - case OCTEON_IRQ_5: 197 - case OCTEON_IRQ_PERF: 198 - irq_set_chip_data(irq, cd); 199 - irq_set_chip_and_handler(irq, &octeon_irq_chip_core, 200 - handle_percpu_irq); 201 - break; 202 - default: 203 - break; 204 - } 186 + irq_set_chip_data(irq, cd); 187 + irq_set_chip_and_handler(irq, &octeon_irq_chip_core, 188 + handle_percpu_irq); 205 189 } 206 190 } 207 191 ··· 886 890 unsigned int type; 887 891 unsigned int pin; 888 892 unsigned int trigger; 889 - struct octeon_irq_gpio_domain_data *gpiod; 890 893 891 894 if (d->of_node != node) 892 895 return -EINVAL; ··· 920 925 break; 921 926 } 922 927 *out_type = type; 923 - gpiod = d->host_data; 924 - *out_hwirq = gpiod->base_hwirq + pin; 928 + *out_hwirq = pin; 925 929 926 930 return 0; 927 931 } ··· 990 996 static int octeon_irq_gpio_map(struct irq_domain *d, 991 997 unsigned int virq, irq_hw_number_t hw) 992 998 { 993 - unsigned int line = hw >> 6; 994 - unsigned int bit = hw & 63; 999 + struct octeon_irq_gpio_domain_data *gpiod = d->host_data; 1000 + unsigned int line, bit; 995 1001 996 1002 if (!octeon_irq_virq_in_range(virq)) 997 1003 return -EINVAL; 998 1004 1005 + hw += gpiod->base_hwirq; 1006 + line = hw >> 6; 1007 + bit = hw & 63; 999 1008 if (line > 1 || octeon_irq_ciu_to_irq[line][bit] != 0) 1000 1009 return -EINVAL; 1001 1010 1002 1011 octeon_irq_set_ciu_mapping(virq, line, bit, 1003 1012 octeon_irq_gpio_chip, 1004 1013 octeon_irq_handle_gpio); 1005 - 1006 1014 return 0; 1007 1015 } 1008 1016 ··· 
1145 1149 struct irq_chip *chip_wd; 1146 1150 struct device_node *gpio_node; 1147 1151 struct device_node *ciu_node; 1152 + struct irq_domain *ciu_domain = NULL; 1148 1153 1149 1154 octeon_irq_init_ciu_percpu(); 1150 1155 octeon_irq_setup_secondary = octeon_irq_setup_secondary_ciu; ··· 1174 1177 /* Mips internal */ 1175 1178 octeon_irq_init_core(); 1176 1179 1177 - /* CIU_0 */ 1178 - for (i = 0; i < 16; i++) 1179 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_WORKQ0, 0, i + 0, chip, handle_level_irq); 1180 - 1181 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX0, 0, 32, chip_mbox, handle_percpu_irq); 1182 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX1, 0, 33, chip_mbox, handle_percpu_irq); 1183 - 1184 - for (i = 0; i < 4; i++) 1185 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_PCI_INT0, 0, i + 36, chip, handle_level_irq); 1186 - for (i = 0; i < 4; i++) 1187 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_PCI_MSI0, 0, i + 40, chip, handle_level_irq); 1188 - 1189 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_RML, 0, 46, chip, handle_level_irq); 1190 - for (i = 0; i < 4; i++) 1191 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_TIMER0, 0, i + 52, chip, handle_edge_irq); 1192 - 1193 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_USB0, 0, 56, chip, handle_level_irq); 1194 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_BOOTDMA, 0, 63, chip, handle_level_irq); 1195 - 1196 - /* CIU_1 */ 1197 - for (i = 0; i < 16; i++) 1198 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_WDOG0, 1, i + 0, chip_wd, handle_level_irq); 1199 - 1200 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_USB1, 1, 17, chip, handle_level_irq); 1201 - 1202 1180 gpio_node = of_find_compatible_node(NULL, NULL, "cavium,octeon-3860-gpio"); 1203 1181 if (gpio_node) { 1204 1182 struct octeon_irq_gpio_domain_data *gpiod; ··· 1191 1219 1192 1220 ciu_node = of_find_compatible_node(NULL, NULL, "cavium,octeon-3860-ciu"); 1193 1221 if (ciu_node) {
1194 - irq_domain_add_tree(ciu_node, &octeon_irq_domain_ciu_ops, NULL); 1222 + ciu_domain = irq_domain_add_tree(ciu_node, &octeon_irq_domain_ciu_ops, NULL); 1195 1223 of_node_put(ciu_node); 1196 1224 } else 1197 - pr_warn("Cannot find device node for cavium,octeon-3860-ciu.\n"); 1225 + panic("Cannot find device node for cavium,octeon-3860-ciu."); 1226 + 1227 + /* CIU_0 */ 1228 + for (i = 0; i < 16; i++) 1229 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_WORKQ0, 0, i + 0); 1230 + 1231 + octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX0, 0, 32, chip_mbox, handle_percpu_irq); 1232 + octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX1, 0, 33, chip_mbox, handle_percpu_irq); 1233 + 1234 + for (i = 0; i < 4; i++) 1235 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_PCI_INT0, 0, i + 36); 1236 + for (i = 0; i < 4; i++) 1237 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_PCI_MSI0, 0, i + 40); 1238 + 1239 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_RML, 0, 46); 1240 + for (i = 0; i < 4; i++) 1241 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_TIMER0, 0, i + 52); 1242 + 1243 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_USB0, 0, 56); 1244 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_BOOTDMA, 0, 63); 1245 + 1246 + /* CIU_1 */ 1247 + for (i = 0; i < 16; i++) 1248 + octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_WDOG0, 1, i + 0, chip_wd, handle_level_irq); 1249 + 1250 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_USB1, 1, 17); 1198 1251 1199 1252 /* Enable the CIU lines */ 1200 1253 set_c0_status(STATUSF_IP3 | STATUSF_IP2);
+2 -1
arch/mips/include/asm/mach-ath79/ar71xx_regs.h
··· 393 393 #define AR71XX_GPIO_REG_FUNC 0x28 394 394 395 395 #define AR71XX_GPIO_COUNT 16 396 - #define AR724X_GPIO_COUNT 18 396 + #define AR7240_GPIO_COUNT 18 397 + #define AR7241_GPIO_COUNT 20 397 398 #define AR913X_GPIO_COUNT 22 398 399 #define AR933X_GPIO_COUNT 30 399 400 #define AR934X_GPIO_COUNT 23
-1
arch/mips/include/asm/mach-ath79/cpu-feature-overrides.h
··· 42 42 #define cpu_has_mips64r1 0 43 43 #define cpu_has_mips64r2 0 44 44 45 - #define cpu_has_dsp 0 46 45 #define cpu_has_mipsmt 0 47 46 48 47 #define cpu_has_64bits 0
+2
arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_spi.h
··· 9 9 10 10 struct bcm63xx_spi_pdata { 11 11 unsigned int fifo_size; 12 + unsigned int msg_type_shift; 13 + unsigned int msg_ctl_width; 12 14 int bus_num; 13 15 int num_chipselect; 14 16 u32 speed_hz;
+10 -3
arch/mips/include/asm/mach-bcm63xx/bcm63xx_regs.h
··· 1054 1054 #define SPI_6338_FILL_BYTE 0x07 1055 1055 #define SPI_6338_MSG_TAIL 0x09 1056 1056 #define SPI_6338_RX_TAIL 0x0b 1057 - #define SPI_6338_MSG_CTL 0x40 1057 + #define SPI_6338_MSG_CTL 0x40 /* 8-bits register */ 1058 + #define SPI_6338_MSG_CTL_WIDTH 8 1058 1059 #define SPI_6338_MSG_DATA 0x41 1059 1060 #define SPI_6338_MSG_DATA_SIZE 0x3f 1060 1061 #define SPI_6338_RX_DATA 0x80 ··· 1071 1070 #define SPI_6348_FILL_BYTE 0x07 1072 1071 #define SPI_6348_MSG_TAIL 0x09 1073 1072 #define SPI_6348_RX_TAIL 0x0b 1074 - #define SPI_6348_MSG_CTL 0x40 1073 + #define SPI_6348_MSG_CTL 0x40 /* 8-bits register */ 1074 + #define SPI_6348_MSG_CTL_WIDTH 8 1075 1075 #define SPI_6348_MSG_DATA 0x41 1076 1076 #define SPI_6348_MSG_DATA_SIZE 0x3f 1077 1077 #define SPI_6348_RX_DATA 0x80 ··· 1080 1078 1081 1079 /* BCM 6358 SPI core */ 1082 1080 #define SPI_6358_MSG_CTL 0x00 /* 16-bits register */ 1081 + #define SPI_6358_MSG_CTL_WIDTH 16 1083 1082 #define SPI_6358_MSG_DATA 0x02 1084 1083 #define SPI_6358_MSG_DATA_SIZE 0x21e 1085 1084 #define SPI_6358_RX_DATA 0x400 ··· 1097 1094 1098 1095 /* BCM 6358 SPI core */ 1099 1096 #define SPI_6368_MSG_CTL 0x00 /* 16-bits register */ 1097 + #define SPI_6368_MSG_CTL_WIDTH 16 1100 1098 #define SPI_6368_MSG_DATA 0x02 1101 1099 #define SPI_6368_MSG_DATA_SIZE 0x21e 1102 1100 #define SPI_6368_RX_DATA 0x400 ··· 1119 1115 #define SPI_HD_W 0x01 1120 1116 #define SPI_HD_R 0x02 1121 1117 #define SPI_BYTE_CNT_SHIFT 0 1122 - #define SPI_MSG_TYPE_SHIFT 14 1118 + #define SPI_6338_MSG_TYPE_SHIFT 6 1119 + #define SPI_6348_MSG_TYPE_SHIFT 6 1120 + #define SPI_6358_MSG_TYPE_SHIFT 14 1121 + #define SPI_6368_MSG_TYPE_SHIFT 14 1123 1122 1124 1123 /* Command */ 1125 1124 #define SPI_CMD_NOOP 0x00
+1 -9
arch/mips/include/asm/mach-cavium-octeon/irq.h
··· 21 21 OCTEON_IRQ_TIMER, 22 22 /* sources in CIU_INTX_EN0 */ 23 23 OCTEON_IRQ_WORKQ0, 24 - OCTEON_IRQ_GPIO0 = OCTEON_IRQ_WORKQ0 + 16, 25 - OCTEON_IRQ_WDOG0 = OCTEON_IRQ_GPIO0 + 16, 24 + OCTEON_IRQ_WDOG0 = OCTEON_IRQ_WORKQ0 + 16, 26 25 OCTEON_IRQ_WDOG15 = OCTEON_IRQ_WDOG0 + 15, 27 26 OCTEON_IRQ_MBOX0 = OCTEON_IRQ_WDOG0 + 16, 28 27 OCTEON_IRQ_MBOX1, 29 - OCTEON_IRQ_UART0, 30 - OCTEON_IRQ_UART1, 31 - OCTEON_IRQ_UART2, 32 28 OCTEON_IRQ_PCI_INT0, 33 29 OCTEON_IRQ_PCI_INT1, 34 30 OCTEON_IRQ_PCI_INT2, ··· 34 38 OCTEON_IRQ_PCI_MSI2, 35 39 OCTEON_IRQ_PCI_MSI3, 36 40 37 - OCTEON_IRQ_TWSI, 38 - OCTEON_IRQ_TWSI2, 39 41 OCTEON_IRQ_RML, 40 42 OCTEON_IRQ_TIMER0, 41 43 OCTEON_IRQ_TIMER1, ··· 41 47 OCTEON_IRQ_TIMER3, 42 48 OCTEON_IRQ_USB0, 43 49 OCTEON_IRQ_USB1, 44 - OCTEON_IRQ_MII0, 45 - OCTEON_IRQ_MII1, 46 50 OCTEON_IRQ_BOOTDMA, 47 51 #ifndef CONFIG_PCI_MSI 48 52 OCTEON_IRQ_LAST = 127
+1
arch/mips/include/asm/module.h
··· 10 10 struct list_head dbe_list; 11 11 const struct exception_table_entry *dbe_start; 12 12 const struct exception_table_entry *dbe_end; 13 + struct mips_hi16 *r_mips_hi16_list; 13 14 }; 14 15 15 16 typedef uint8_t Elf64_Byte; /* Type for a 8-bit quantity. */
+4 -4
arch/mips/include/asm/r4k-timer.h
··· 12 12 13 13 #ifdef CONFIG_SYNC_R4K 14 14 15 - extern void synchronise_count_master(void); 16 - extern void synchronise_count_slave(void); 15 + extern void synchronise_count_master(int cpu); 16 + extern void synchronise_count_slave(int cpu); 17 17 18 18 #else 19 19 20 - static inline void synchronise_count_master(void) 20 + static inline void synchronise_count_master(int cpu) 21 21 { 22 22 } 23 23 24 - static inline void synchronise_count_slave(void) 24 + static inline void synchronise_count_slave(int cpu) 25 25 { 26 26 } 27 27
+34 -9
arch/mips/kernel/module.c
··· 39 39 Elf_Addr value; 40 40 }; 41 41 42 - static struct mips_hi16 *mips_hi16_list; 43 - 44 42 static LIST_HEAD(dbe_list); 45 43 static DEFINE_SPINLOCK(dbe_lock); 46 44 ··· 126 128 127 129 n->addr = (Elf_Addr *)location; 128 130 n->value = v; 129 - n->next = mips_hi16_list; 130 - mips_hi16_list = n; 131 + n->next = me->arch.r_mips_hi16_list; 132 + me->arch.r_mips_hi16_list = n; 131 133 132 134 return 0; 133 135 } ··· 140 142 return 0; 141 143 } 142 144 145 + static void free_relocation_chain(struct mips_hi16 *l) 146 + { 147 + struct mips_hi16 *next; 148 + 149 + while (l) { 150 + next = l->next; 151 + kfree(l); 152 + l = next; 153 + } 154 + } 155 + 143 156 static int apply_r_mips_lo16_rel(struct module *me, u32 *location, Elf_Addr v) 144 157 { 145 158 unsigned long insnlo = *location; 159 + struct mips_hi16 *l; 146 160 Elf_Addr val, vallo; 147 161 148 162 /* Sign extend the addend we extract from the lo insn. */ 149 163 vallo = ((insnlo & 0xffff) ^ 0x8000) - 0x8000; 150 164 151 - if (mips_hi16_list != NULL) { 152 - struct mips_hi16 *l; 153 - 154 - l = mips_hi16_list; 165 + if (me->arch.r_mips_hi16_list != NULL) { 166 + l = me->arch.r_mips_hi16_list; 155 167 while (l != NULL) { 156 168 struct mips_hi16 *next; 157 169 unsigned long insn; ··· 196 188 l = next; 197 189 } 198 190 199 - mips_hi16_list = NULL; 191 + me->arch.r_mips_hi16_list = NULL; 200 192 } 201 193 202 194 /* ··· 209 201 return 0; 210 202 211 203 out_danger: 204 + free_relocation_chain(l); 205 + me->arch.r_mips_hi16_list = NULL; 206 + 212 207 pr_err("module %s: dangerous R_MIPS_LO16 REL relocation\n", me->name); 213 208 214 209 return -ENOEXEC; ··· 284 273 pr_debug("Applying relocate section %u to %u\n", relsec, 285 274 sechdrs[relsec].sh_info); 286 275 276 + me->arch.r_mips_hi16_list = NULL; 287 277 for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) { 288 278 /* This is where to make the change */ 289 279 location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr ··· 306 294 res = 
reloc_handlers_rel[ELF_MIPS_R_TYPE(rel[i])](me, location, v); 307 295 if (res) 308 296 return res; 297 + } 298 + 299 + /* 300 + * Normally the hi16 list should be deallocated at this point. A 301 + * malformed binary however could contain a series of R_MIPS_HI16 302 + * relocations not followed by a R_MIPS_LO16 relocation. In that 303 + * case, free up the list and return an error. 304 + */ 305 + if (me->arch.r_mips_hi16_list) { 306 + free_relocation_chain(me->arch.r_mips_hi16_list); 307 + me->arch.r_mips_hi16_list = NULL; 308 + 309 + return -ENOEXEC; 309 310 } 310 311 311 312 return 0;
+2 -2
arch/mips/kernel/smp.c
··· 130 130 131 131 cpu_set(cpu, cpu_callin_map); 132 132 133 - synchronise_count_slave(); 133 + synchronise_count_slave(cpu); 134 134 135 135 /* 136 136 * irq will be enabled in ->smp_finish(), enabling it too early ··· 173 173 void __init smp_cpus_done(unsigned int max_cpus) 174 174 { 175 175 mp_ops->cpus_done(); 176 - synchronise_count_master(); 177 176 } 178 177 179 178 /* called from main before smp_init() */ ··· 205 206 while (!cpu_isset(cpu, cpu_callin_map)) 206 207 udelay(100); 207 208 209 + synchronise_count_master(cpu); 208 210 return 0; 209 211 } 210 212
+11 -15
arch/mips/kernel/sync-r4k.c
··· 28 28 #define COUNTON 100 29 29 #define NR_LOOPS 5 30 30 31 - void __cpuinit synchronise_count_master(void) 31 + void __cpuinit synchronise_count_master(int cpu) 32 32 { 33 33 int i; 34 34 unsigned long flags; 35 35 unsigned int initcount; 36 - int nslaves; 37 36 38 37 #ifdef CONFIG_MIPS_MT_SMTC 39 38 /* ··· 42 43 return; 43 44 #endif 44 45 45 - printk(KERN_INFO "Synchronize counters across %u CPUs: ", 46 - num_online_cpus()); 46 + printk(KERN_INFO "Synchronize counters for CPU %u: ", cpu); 47 47 48 48 local_irq_save(flags); 49 49 ··· 50 52 * Notify the slaves that it's time to start 51 53 */ 52 54 atomic_set(&count_reference, read_c0_count()); 53 - atomic_set(&count_start_flag, 1); 55 + atomic_set(&count_start_flag, cpu); 54 56 smp_wmb(); 55 57 56 58 /* Count will be initialised to current timer for all CPU's */ ··· 67 69 * two CPUs. 68 70 */ 69 71 70 - nslaves = num_online_cpus()-1; 71 72 for (i = 0; i < NR_LOOPS; i++) { 72 - /* slaves loop on '!= ncpus' */ 73 - while (atomic_read(&count_count_start) != nslaves) 73 + /* slaves loop on '!= 2' */ 74 + while (atomic_read(&count_count_start) != 1) 74 75 mb(); 75 76 atomic_set(&count_count_stop, 0); 76 77 smp_wmb(); ··· 86 89 /* 87 90 * Wait for all slaves to leave the synchronization point: 88 91 */ 89 - while (atomic_read(&count_count_stop) != nslaves) 92 + while (atomic_read(&count_count_stop) != 1) 90 93 mb(); 91 94 atomic_set(&count_count_start, 0); 92 95 smp_wmb(); ··· 94 97 } 95 98 /* Arrange for an interrupt in a short while */ 96 99 write_c0_compare(read_c0_count() + COUNTON); 100 + atomic_set(&count_start_flag, 0); 97 101 98 102 local_irq_restore(flags); 99 103 ··· 106 108 printk("done.\n"); 107 109 } 108 110 109 - void __cpuinit synchronise_count_slave(void) 111 + void __cpuinit synchronise_count_slave(int cpu) 110 112 { 111 113 int i; 112 114 unsigned int initcount; 113 - int ncpus; 114 115 115 116 #ifdef CONFIG_MIPS_MT_SMTC 116 117 /* ··· 124 127 * so we first wait for the master to say everyone is 
ready 125 128 */ 126 129 127 - while (!atomic_read(&count_start_flag)) 130 + while (atomic_read(&count_start_flag) != cpu) 128 131 mb(); 129 132 130 133 /* Count will be initialised to next expire for all CPU's */ 131 134 initcount = atomic_read(&count_reference); 132 135 133 - ncpus = num_online_cpus(); 134 136 for (i = 0; i < NR_LOOPS; i++) { 135 137 atomic_inc(&count_count_start); 136 - while (atomic_read(&count_count_start) != ncpus) 138 + while (atomic_read(&count_count_start) != 2) 137 139 mb(); 138 140 139 141 /* ··· 142 146 write_c0_count(initcount); 143 147 144 148 atomic_inc(&count_count_stop); 145 - while (atomic_read(&count_count_stop) != ncpus) 149 + while (atomic_read(&count_count_stop) != 2) 146 150 mb(); 147 151 } 148 152 /* Arrange for an interrupt in a short while */
-13
arch/mips/mti-malta/malta-pci.c
··· 252 252 253 253 register_pci_controller(controller); 254 254 } 255 - 256 - /* Enable PCI 2.1 compatibility in PIIX4 */ 257 - static void __devinit quirk_dlcsetup(struct pci_dev *dev) 258 - { 259 - u8 odlc, ndlc; 260 - (void) pci_read_config_byte(dev, 0x82, &odlc); 261 - /* Enable passive releases and delayed transaction */ 262 - ndlc = odlc | 7; 263 - (void) pci_write_config_byte(dev, 0x82, ndlc); 264 - } 265 - 266 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_0, 267 - quirk_dlcsetup);
+22
arch/mips/pci/pci-ar724x.c
··· 23 23 #define AR724X_PCI_MEM_BASE 0x10000000 24 24 #define AR724X_PCI_MEM_SIZE 0x08000000 25 25 26 + #define AR724X_PCI_REG_RESET 0x18 26 27 #define AR724X_PCI_REG_INT_STATUS 0x4c 27 28 #define AR724X_PCI_REG_INT_MASK 0x50 29 + 30 + #define AR724X_PCI_RESET_LINK_UP BIT(0) 28 31 29 32 #define AR724X_PCI_INT_DEV0 BIT(14) 30 33 ··· 41 38 42 39 static u32 ar724x_pci_bar0_value; 43 40 static bool ar724x_pci_bar0_is_cached; 41 + static bool ar724x_pci_link_up; 42 + 43 + static inline bool ar724x_pci_check_link(void) 44 + { 45 + u32 reset; 46 + 47 + reset = __raw_readl(ar724x_pci_ctrl_base + AR724X_PCI_REG_RESET); 48 + return reset & AR724X_PCI_RESET_LINK_UP; 49 + } 44 50 45 51 static int ar724x_pci_read(struct pci_bus *bus, unsigned int devfn, int where, 46 52 int size, uint32_t *value) ··· 57 45 unsigned long flags; 58 46 void __iomem *base; 59 47 u32 data; 48 + 49 + if (!ar724x_pci_link_up) 50 + return PCIBIOS_DEVICE_NOT_FOUND; 60 51 61 52 if (devfn) 62 53 return PCIBIOS_DEVICE_NOT_FOUND; ··· 110 95 void __iomem *base; 111 96 u32 data; 112 97 int s; 98 + 99 + if (!ar724x_pci_link_up) 100 + return PCIBIOS_DEVICE_NOT_FOUND; 113 101 114 102 if (devfn) 115 103 return PCIBIOS_DEVICE_NOT_FOUND; ··· 297 279 AR724X_PCI_CTRL_SIZE); 298 280 if (ar724x_pci_ctrl_base == NULL) 299 281 goto err_unmap_devcfg; 282 + 283 + ar724x_pci_link_up = ar724x_pci_check_link(); 284 + if (!ar724x_pci_link_up) 285 + pr_warn("ar724x: PCIe link is down\n"); 300 286 301 287 ar724x_pci_irq_init(irq); 302 288 register_pci_controller(&ar724x_pci_controller);
+2 -2
arch/parisc/include/asm/atomic.h
··· 141 141 142 142 #define atomic_sub_and_test(i,v) (atomic_sub_return((i),(v)) == 0) 143 143 144 - #define ATOMIC_INIT(i) ((atomic_t) { (i) }) 144 + #define ATOMIC_INIT(i) { (i) } 145 145 146 146 #define smp_mb__before_atomic_dec() smp_mb() 147 147 #define smp_mb__after_atomic_dec() smp_mb() ··· 150 150 151 151 #ifdef CONFIG_64BIT 152 152 153 - #define ATOMIC64_INIT(i) ((atomic64_t) { (i) }) 153 + #define ATOMIC64_INIT(i) { (i) } 154 154 155 155 static __inline__ s64 156 156 __atomic64_add_return(s64 i, atomic64_t *v)
+1 -1
arch/parisc/kernel/process.c
··· 309 309 cregs->ksp = (unsigned long)stack 310 310 + (pregs->gr[21] & (THREAD_SIZE - 1)); 311 311 cregs->gr[30] = usp; 312 - if (p->personality == PER_HPUX) { 312 + if (personality(p->personality) == PER_HPUX) { 313 313 #ifdef CONFIG_HPUX 314 314 cregs->kpc = (unsigned long) &hpux_child_return; 315 315 #else
+4 -4
arch/parisc/kernel/sys_parisc.c
··· 225 225 long err; 226 226 227 227 if (personality(current->personality) == PER_LINUX32 228 - && personality == PER_LINUX) 229 - personality = PER_LINUX32; 228 + && personality(personality) == PER_LINUX) 229 + personality = (personality & ~PER_MASK) | PER_LINUX32; 230 230 231 231 err = sys_personality(personality); 232 - if (err == PER_LINUX32) 233 - err = PER_LINUX; 232 + if (personality(err) == PER_LINUX32) 233 + err = (err & ~PER_MASK) | PER_LINUX; 234 234 235 235 return err; 236 236 }
+7
arch/powerpc/boot/dts/fsl/p4080si-post.dtsi
··· 345 345 /include/ "qoriq-duart-1.dtsi" 346 346 /include/ "qoriq-gpio-0.dtsi" 347 347 /include/ "qoriq-usb2-mph-0.dtsi" 348 + usb@210000 { 349 + compatible = "fsl-usb2-mph-v1.6", "fsl,mpc85xx-usb2-mph", "fsl-usb2-mph"; 350 + port0; 351 + }; 348 352 /include/ "qoriq-usb2-dr-0.dtsi" 353 + usb@211000 { 354 + compatible = "fsl-usb2-dr-v1.6", "fsl,mpc85xx-usb2-dr", "fsl-usb2-dr"; 355 + }; 349 356 /include/ "qoriq-sec4.0-0.dtsi" 350 357 };
+10 -21
arch/powerpc/configs/85xx/p1023rds_defconfig
··· 6 6 CONFIG_POSIX_MQUEUE=y 7 7 CONFIG_BSD_PROCESS_ACCT=y 8 8 CONFIG_AUDIT=y 9 - CONFIG_SPARSE_IRQ=y 9 + CONFIG_IRQ_DOMAIN_DEBUG=y 10 + CONFIG_NO_HZ=y 11 + CONFIG_HIGH_RES_TIMERS=y 10 12 CONFIG_IKCONFIG=y 11 13 CONFIG_IKCONFIG_PROC=y 12 14 CONFIG_LOG_BUF_SHIFT=14 13 15 CONFIG_BLK_DEV_INITRD=y 14 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 15 16 CONFIG_KALLSYMS_ALL=y 16 - CONFIG_KALLSYMS_EXTRA_PASS=y 17 17 CONFIG_EMBEDDED=y 18 18 CONFIG_MODULES=y 19 19 CONFIG_MODULE_UNLOAD=y 20 20 CONFIG_MODULE_FORCE_UNLOAD=y 21 21 CONFIG_MODVERSIONS=y 22 22 # CONFIG_BLK_DEV_BSG is not set 23 + CONFIG_PARTITION_ADVANCED=y 24 + CONFIG_MAC_PARTITION=y 23 25 CONFIG_P1023_RDS=y 24 26 CONFIG_QUICC_ENGINE=y 25 27 CONFIG_QE_GPIO=y 26 28 CONFIG_CPM2=y 27 - CONFIG_GPIO_MPC8XXX=y 28 29 CONFIG_HIGHMEM=y 29 - CONFIG_NO_HZ=y 30 - CONFIG_HIGH_RES_TIMERS=y 31 30 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 32 31 CONFIG_BINFMT_MISC=m 33 32 CONFIG_MATH_EMULATION=y ··· 62 63 CONFIG_IPV6=y 63 64 CONFIG_IP_SCTP=m 64 65 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 66 + CONFIG_DEVTMPFS=y 65 67 CONFIG_PROC_DEVICETREE=y 66 68 CONFIG_BLK_DEV_LOOP=y 67 69 CONFIG_BLK_DEV_RAM=y 68 70 CONFIG_BLK_DEV_RAM_SIZE=131072 69 - CONFIG_MISC_DEVICES=y 70 71 CONFIG_EEPROM_LEGACY=y 71 72 CONFIG_BLK_DEV_SD=y 72 73 CONFIG_CHR_DEV_ST=y ··· 79 80 CONFIG_SATA_SIL24=y 80 81 CONFIG_NETDEVICES=y 81 82 CONFIG_DUMMY=y 83 + CONFIG_FS_ENET=y 84 + CONFIG_FSL_PQ_MDIO=y 85 + CONFIG_E1000E=y 82 86 CONFIG_MARVELL_PHY=y 83 87 CONFIG_DAVICOM_PHY=y 84 88 CONFIG_CICADA_PHY=y 85 89 CONFIG_VITESSE_PHY=y 86 90 CONFIG_FIXED_PHY=y 87 - CONFIG_NET_ETHERNET=y 88 - CONFIG_FS_ENET=y 89 - CONFIG_E1000E=y 90 - CONFIG_FSL_PQ_MDIO=y 91 91 CONFIG_INPUT_FF_MEMLESS=m 92 92 # CONFIG_INPUT_MOUSEDEV is not set 93 93 # CONFIG_INPUT_KEYBOARD is not set ··· 96 98 CONFIG_SERIAL_8250_CONSOLE=y 97 99 CONFIG_SERIAL_8250_NR_UARTS=2 98 100 CONFIG_SERIAL_8250_RUNTIME_UARTS=2 99 - CONFIG_SERIAL_8250_EXTENDED=y 100 101 CONFIG_SERIAL_8250_MANY_PORTS=y 101 102 
CONFIG_SERIAL_8250_DETECT_IRQ=y 102 103 CONFIG_SERIAL_8250_RSA=y 103 104 CONFIG_SERIAL_QE=m 104 - CONFIG_HW_RANDOM=y 105 105 CONFIG_NVRAM=y 106 106 CONFIG_I2C=y 107 107 CONFIG_I2C_CPM=m 108 108 CONFIG_I2C_MPC=y 109 + CONFIG_GPIO_MPC8XXX=y 109 110 # CONFIG_HWMON is not set 110 111 CONFIG_VIDEO_OUTPUT_CONTROL=y 111 112 CONFIG_SOUND=y ··· 120 123 CONFIG_FSL_DMA=y 121 124 # CONFIG_NET_DMA is not set 122 125 CONFIG_STAGING=y 123 - # CONFIG_STAGING_EXCLUDE_BUILD is not set 124 126 CONFIG_EXT2_FS=y 125 127 CONFIG_EXT3_FS=y 126 128 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set ··· 146 150 CONFIG_SYSV_FS=m 147 151 CONFIG_UFS_FS=m 148 152 CONFIG_NFS_FS=y 149 - CONFIG_NFS_V3=y 150 153 CONFIG_NFS_V4=y 151 154 CONFIG_ROOT_NFS=y 152 155 CONFIG_NFSD=y 153 - CONFIG_PARTITION_ADVANCED=y 154 - CONFIG_MAC_PARTITION=y 155 156 CONFIG_CRC_T10DIF=y 156 157 CONFIG_FRAME_WARN=8092 157 158 CONFIG_DEBUG_FS=y 158 - CONFIG_DEBUG_KERNEL=y 159 159 CONFIG_DETECT_HUNG_TASK=y 160 160 # CONFIG_DEBUG_BUGVERBOSE is not set 161 161 CONFIG_DEBUG_INFO=y 162 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 163 - CONFIG_SYSCTL_SYSCALL_CHECK=y 164 - CONFIG_IRQ_DOMAIN_DEBUG=y 165 162 CONFIG_CRYPTO_PCBC=m 166 163 CONFIG_CRYPTO_SHA256=y 167 164 CONFIG_CRYPTO_SHA512=y
+10 -21
arch/powerpc/configs/corenet32_smp_defconfig
··· 6 6 CONFIG_POSIX_MQUEUE=y 7 7 CONFIG_BSD_PROCESS_ACCT=y 8 8 CONFIG_AUDIT=y 9 - CONFIG_SPARSE_IRQ=y 10 - CONFIG_RCU_TRACE=y 9 + CONFIG_NO_HZ=y 10 + CONFIG_HIGH_RES_TIMERS=y 11 11 CONFIG_IKCONFIG=y 12 12 CONFIG_IKCONFIG_PROC=y 13 13 CONFIG_LOG_BUF_SHIFT=14 ··· 21 21 CONFIG_MODULE_FORCE_UNLOAD=y 22 22 CONFIG_MODVERSIONS=y 23 23 # CONFIG_BLK_DEV_BSG is not set 24 + CONFIG_PARTITION_ADVANCED=y 25 + CONFIG_MAC_PARTITION=y 24 26 CONFIG_P2041_RDB=y 25 27 CONFIG_P3041_DS=y 26 28 CONFIG_P4080_DS=y 27 29 CONFIG_P5020_DS=y 28 30 CONFIG_HIGHMEM=y 29 - CONFIG_NO_HZ=y 30 - CONFIG_HIGH_RES_TIMERS=y 31 31 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 32 32 CONFIG_BINFMT_MISC=m 33 33 CONFIG_KEXEC=y 34 34 CONFIG_IRQ_ALL_CPUS=y 35 35 CONFIG_FORCE_MAX_ZONEORDER=13 36 - CONFIG_FSL_LBC=y 37 36 CONFIG_PCI=y 38 37 CONFIG_PCIEPORTBUS=y 39 - CONFIG_PCI_MSI=y 40 38 # CONFIG_PCIEASPM is not set 39 + CONFIG_PCI_MSI=y 41 40 CONFIG_RAPIDIO=y 42 41 CONFIG_FSL_RIO=y 43 42 CONFIG_NET=y ··· 69 70 CONFIG_IPV6=y 70 71 CONFIG_IP_SCTP=m 71 72 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 73 + CONFIG_DEVTMPFS=y 72 74 CONFIG_MTD=y 73 75 CONFIG_MTD_CMDLINE_PARTS=y 74 76 CONFIG_MTD_CHAR=y ··· 77 77 CONFIG_MTD_CFI=y 78 78 CONFIG_MTD_CFI_AMDSTD=y 79 79 CONFIG_MTD_PHYSMAP_OF=y 80 - CONFIG_MTD_NAND=y 81 - CONFIG_MTD_NAND_ECC=y 82 - CONFIG_MTD_NAND_IDS=y 83 - CONFIG_MTD_NAND_FSL_IFC=y 84 - CONFIG_MTD_NAND_FSL_ELBC=y 85 80 CONFIG_MTD_M25P80=y 81 + CONFIG_MTD_NAND=y 82 + CONFIG_MTD_NAND_FSL_ELBC=y 83 + CONFIG_MTD_NAND_FSL_IFC=y 86 84 CONFIG_PROC_DEVICETREE=y 87 85 CONFIG_BLK_DEV_LOOP=y 88 86 CONFIG_BLK_DEV_RAM=y 89 87 CONFIG_BLK_DEV_RAM_SIZE=131072 90 - CONFIG_MISC_DEVICES=y 91 88 CONFIG_BLK_DEV_SD=y 92 89 CONFIG_CHR_DEV_ST=y 93 90 CONFIG_BLK_DEV_SR=y ··· 112 115 CONFIG_PPC_EPAPR_HV_BYTECHAN=y 113 116 CONFIG_SERIAL_8250=y 114 117 CONFIG_SERIAL_8250_CONSOLE=y 115 - CONFIG_SERIAL_8250_EXTENDED=y 116 118 CONFIG_SERIAL_8250_MANY_PORTS=y 117 119 CONFIG_SERIAL_8250_DETECT_IRQ=y 118 120 CONFIG_SERIAL_8250_RSA=y 119 - 
CONFIG_HW_RANDOM=y 120 121 CONFIG_NVRAM=y 121 122 CONFIG_I2C=y 122 123 CONFIG_I2C_CHARDEV=y ··· 127 132 CONFIG_VIDEO_OUTPUT_CONTROL=y 128 133 CONFIG_USB_HID=m 129 134 CONFIG_USB=y 130 - CONFIG_USB_DEVICEFS=y 131 135 CONFIG_USB_MON=y 132 136 CONFIG_USB_EHCI_HCD=y 133 137 CONFIG_USB_EHCI_FSL=y ··· 136 142 CONFIG_USB_STORAGE=y 137 143 CONFIG_MMC=y 138 144 CONFIG_MMC_SDHCI=y 139 - CONFIG_MMC_SDHCI_OF=y 140 - CONFIG_MMC_SDHCI_OF_ESDHC=y 141 145 CONFIG_EDAC=y 142 146 CONFIG_EDAC_MM_EDAC=y 143 147 CONFIG_EDAC_MPC85XX=y ··· 162 170 CONFIG_JFFS2_FS=y 163 171 CONFIG_CRAMFS=y 164 172 CONFIG_NFS_FS=y 165 - CONFIG_NFS_V3=y 166 173 CONFIG_NFS_V4=y 167 174 CONFIG_ROOT_NFS=y 168 175 CONFIG_NFSD=m 169 - CONFIG_PARTITION_ADVANCED=y 170 - CONFIG_MAC_PARTITION=y 171 176 CONFIG_NLS_ISO8859_1=y 172 177 CONFIG_NLS_UTF8=m 173 178 CONFIG_MAGIC_SYSRQ=y 174 179 CONFIG_DEBUG_SHIRQ=y 175 180 CONFIG_DETECT_HUNG_TASK=y 176 181 CONFIG_DEBUG_INFO=y 177 - CONFIG_SYSCTL_SYSCALL_CHECK=y 182 + CONFIG_RCU_TRACE=y 178 183 CONFIG_CRYPTO_NULL=y 179 184 CONFIG_CRYPTO_PCBC=m 180 185 CONFIG_CRYPTO_MD4=y
+1
arch/powerpc/configs/corenet64_smp_defconfig
··· 56 56 CONFIG_IPV6=y 57 57 CONFIG_IP_SCTP=m 58 58 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 59 + CONFIG_DEVTMPFS=y 59 60 CONFIG_MTD=y 60 61 CONFIG_MTD_CMDLINE_PARTS=y 61 62 CONFIG_MTD_CHAR=y
+73 -30
arch/powerpc/configs/g5_defconfig
··· 1 + CONFIG_PPC64=y 2 + CONFIG_ALTIVEC=y 3 + CONFIG_SMP=y 4 + CONFIG_NR_CPUS=4 1 5 CONFIG_EXPERIMENTAL=y 2 6 CONFIG_SYSVIPC=y 3 7 CONFIG_POSIX_MQUEUE=y 4 - CONFIG_NO_HZ=y 5 - CONFIG_HIGH_RES_TIMERS=y 6 8 CONFIG_IKCONFIG=y 7 9 CONFIG_IKCONFIG_PROC=y 8 10 CONFIG_BLK_DEV_INITRD=y ··· 15 13 CONFIG_MODULE_UNLOAD=y 16 14 CONFIG_MODVERSIONS=y 17 15 CONFIG_MODULE_SRCVERSION_ALL=y 18 - CONFIG_PARTITION_ADVANCED=y 19 - CONFIG_MAC_PARTITION=y 20 - CONFIG_SMP=y 21 - CONFIG_NR_CPUS=4 22 - CONFIG_KEXEC=y 23 - # CONFIG_RELOCATABLE is not set 16 + # CONFIG_PPC_PSERIES is not set 24 17 CONFIG_CPU_FREQ=y 25 18 CONFIG_CPU_FREQ_GOV_POWERSAVE=y 26 19 CONFIG_CPU_FREQ_GOV_USERSPACE=y 20 + CONFIG_CPU_FREQ_PMAC64=y 21 + CONFIG_NO_HZ=y 22 + CONFIG_HIGH_RES_TIMERS=y 23 + CONFIG_KEXEC=y 24 + CONFIG_IRQ_ALL_CPUS=y 25 + # CONFIG_MIGRATION is not set 27 26 CONFIG_PCI_MSI=y 28 27 CONFIG_NET=y 29 28 CONFIG_PACKET=y ··· 52 49 CONFIG_NF_CONNTRACK_IPV4=m 53 50 CONFIG_IP_NF_QUEUE=m 54 51 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 52 + CONFIG_PROC_DEVICETREE=y 55 53 CONFIG_BLK_DEV_LOOP=y 56 54 CONFIG_BLK_DEV_NBD=m 57 55 CONFIG_BLK_DEV_RAM=y ··· 60 56 CONFIG_CDROM_PKTCDVD=m 61 57 CONFIG_IDE=y 62 58 CONFIG_BLK_DEV_IDECD=y 59 + CONFIG_BLK_DEV_IDE_PMAC=y 60 + CONFIG_BLK_DEV_IDE_PMAC_ATA100FIRST=y 63 61 CONFIG_BLK_DEV_SD=y 64 62 CONFIG_CHR_DEV_ST=y 65 63 CONFIG_BLK_DEV_SR=y ··· 85 79 CONFIG_DM_SNAPSHOT=m 86 80 CONFIG_DM_MIRROR=m 87 81 CONFIG_DM_ZERO=m 88 - CONFIG_MACINTOSH_DRIVERS=y 82 + CONFIG_IEEE1394=y 83 + CONFIG_IEEE1394_OHCI1394=y 84 + CONFIG_IEEE1394_SBP2=m 85 + CONFIG_IEEE1394_ETH1394=m 86 + CONFIG_IEEE1394_RAWIO=y 87 + CONFIG_IEEE1394_VIDEO1394=m 88 + CONFIG_IEEE1394_DV1394=m 89 + CONFIG_ADB_PMU=y 90 + CONFIG_PMAC_SMU=y 89 91 CONFIG_MAC_EMUMOUSEBTN=y 92 + CONFIG_THERM_PM72=y 93 + CONFIG_WINDFARM=y 94 + CONFIG_WINDFARM_PM81=y 95 + CONFIG_WINDFARM_PM91=y 96 + CONFIG_WINDFARM_PM112=y 97 + CONFIG_WINDFARM_PM121=y 90 98 CONFIG_NETDEVICES=y 91 - CONFIG_BONDING=m 92 99 CONFIG_DUMMY=m 93 - CONFIG_MII=y 
100 + CONFIG_BONDING=m 94 101 CONFIG_TUN=m 102 + CONFIG_NET_ETHERNET=y 103 + CONFIG_MII=y 104 + CONFIG_SUNGEM=y 95 105 CONFIG_ACENIC=m 96 106 CONFIG_ACENIC_OMIT_TIGON_I=y 97 - CONFIG_TIGON3=y 98 107 CONFIG_E1000=y 99 - CONFIG_SUNGEM=y 100 - CONFIG_PPP=m 101 - CONFIG_PPP_BSDCOMP=m 102 - CONFIG_PPP_DEFLATE=m 103 - CONFIG_PPPOE=m 104 - CONFIG_PPP_ASYNC=m 105 - CONFIG_PPP_SYNC_TTY=m 108 + CONFIG_TIGON3=y 106 109 CONFIG_USB_CATC=m 107 110 CONFIG_USB_KAWETH=m 108 111 CONFIG_USB_PEGASUS=m ··· 121 106 # CONFIG_USB_NET_NET1080 is not set 122 107 # CONFIG_USB_NET_CDC_SUBSET is not set 123 108 # CONFIG_USB_NET_ZAURUS is not set 109 + CONFIG_PPP=m 110 + CONFIG_PPP_ASYNC=m 111 + CONFIG_PPP_SYNC_TTY=m 112 + CONFIG_PPP_DEFLATE=m 113 + CONFIG_PPP_BSDCOMP=m 114 + CONFIG_PPPOE=m 124 115 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 125 116 CONFIG_INPUT_JOYDEV=m 126 117 CONFIG_INPUT_EVDEV=y 118 + # CONFIG_KEYBOARD_ATKBD is not set 127 119 # CONFIG_MOUSE_PS2 is not set 120 + # CONFIG_SERIO_I8042 is not set 128 121 # CONFIG_SERIO_SERPORT is not set 129 - CONFIG_VT_HW_CONSOLE_BINDING=y 130 122 # CONFIG_HW_RANDOM is not set 131 123 CONFIG_GEN_RTC=y 132 124 CONFIG_RAW_DRIVER=y 133 125 CONFIG_I2C_CHARDEV=y 134 126 # CONFIG_HWMON is not set 135 - CONFIG_AGP=y 136 - CONFIG_DRM=y 137 - CONFIG_DRM_NOUVEAU=y 127 + CONFIG_AGP=m 128 + CONFIG_AGP_UNINORTH=m 138 129 CONFIG_VIDEO_OUTPUT_CONTROL=m 130 + CONFIG_FB=y 139 131 CONFIG_FIRMWARE_EDID=y 140 132 CONFIG_FB_TILEBLITTING=y 133 + CONFIG_FB_OF=y 134 + CONFIG_FB_NVIDIA=y 135 + CONFIG_FB_NVIDIA_I2C=y 141 136 CONFIG_FB_RADEON=y 137 + # CONFIG_VGA_CONSOLE is not set 138 + CONFIG_FRAMEBUFFER_CONSOLE=y 142 139 CONFIG_LOGO=y 143 140 CONFIG_SOUND=m 144 141 CONFIG_SND=m ··· 158 131 CONFIG_SND_MIXER_OSS=m 159 132 CONFIG_SND_PCM_OSS=m 160 133 CONFIG_SND_SEQUENCER_OSS=y 134 + CONFIG_SND_POWERMAC=m 135 + CONFIG_SND_AOA=m 136 + CONFIG_SND_AOA_FABRIC_LAYOUT=m 137 + CONFIG_SND_AOA_ONYX=m 138 + CONFIG_SND_AOA_TAS=m 139 + CONFIG_SND_AOA_TOONIE=m 161 140 
CONFIG_SND_USB_AUDIO=m 141 + CONFIG_HID_PID=y 142 + CONFIG_USB_HIDDEV=y 162 143 CONFIG_HID_GYRATION=y 163 144 CONFIG_LOGITECH_FF=y 164 145 CONFIG_HID_PANTHERLORD=y ··· 174 139 CONFIG_HID_SAMSUNG=y 175 140 CONFIG_HID_SONY=y 176 141 CONFIG_HID_SUNPLUS=y 177 - CONFIG_HID_PID=y 178 - CONFIG_USB_HIDDEV=y 179 142 CONFIG_USB=y 143 + CONFIG_USB_DEVICEFS=y 180 144 CONFIG_USB_MON=y 181 145 CONFIG_USB_EHCI_HCD=y 146 + # CONFIG_USB_EHCI_HCD_PPC_OF is not set 182 147 CONFIG_USB_OHCI_HCD=y 148 + CONFIG_USB_OHCI_HCD_PPC_OF_BE=y 183 149 CONFIG_USB_ACM=m 184 150 CONFIG_USB_PRINTER=y 185 151 CONFIG_USB_STORAGE=y ··· 244 208 CONFIG_REISERFS_FS_SECURITY=y 245 209 CONFIG_XFS_FS=m 246 210 CONFIG_XFS_POSIX_ACL=y 211 + CONFIG_INOTIFY=y 212 + CONFIG_AUTOFS_FS=m 247 213 CONFIG_ISO9660_FS=y 248 214 CONFIG_JOLIET=y 249 215 CONFIG_ZISOFS=y ··· 259 221 CONFIG_HFSPLUS_FS=m 260 222 CONFIG_CRAMFS=y 261 223 CONFIG_NFS_FS=y 224 + CONFIG_NFS_V3=y 262 225 CONFIG_NFS_V3_ACL=y 263 226 CONFIG_NFS_V4=y 264 227 CONFIG_NFSD=y 265 228 CONFIG_NFSD_V3_ACL=y 266 229 CONFIG_NFSD_V4=y 267 230 CONFIG_CIFS=m 231 + CONFIG_PARTITION_ADVANCED=y 268 232 CONFIG_NLS_CODEPAGE_437=y 269 233 CONFIG_NLS_CODEPAGE_1250=y 270 234 CONFIG_NLS_CODEPAGE_1251=y ··· 274 234 CONFIG_NLS_ISO8859_1=y 275 235 CONFIG_NLS_ISO8859_15=y 276 236 CONFIG_NLS_UTF8=y 237 + CONFIG_CRC_T10DIF=y 238 + CONFIG_LIBCRC32C=m 277 239 CONFIG_MAGIC_SYSRQ=y 278 - # CONFIG_UNUSED_SYMBOLS is not set 279 240 CONFIG_DEBUG_FS=y 280 241 CONFIG_DEBUG_KERNEL=y 281 242 CONFIG_DEBUG_MUTEXES=y 243 + # CONFIG_RCU_CPU_STALL_DETECTOR is not set 282 244 CONFIG_LATENCYTOP=y 283 - CONFIG_STRICT_DEVMEM=y 245 + CONFIG_SYSCTL_SYSCALL_CHECK=y 246 + CONFIG_BOOTX_TEXT=y 284 247 CONFIG_CRYPTO_NULL=m 285 248 CONFIG_CRYPTO_TEST=m 249 + CONFIG_CRYPTO_ECB=m 286 250 CONFIG_CRYPTO_PCBC=m 287 251 CONFIG_CRYPTO_HMAC=y 252 + CONFIG_CRYPTO_MD4=m 288 253 CONFIG_CRYPTO_MICHAEL_MIC=m 289 254 CONFIG_CRYPTO_SHA256=m 290 255 CONFIG_CRYPTO_SHA512=m 291 256 CONFIG_CRYPTO_WP512=m 292 257 
CONFIG_CRYPTO_AES=m 293 258 CONFIG_CRYPTO_ANUBIS=m 259 + CONFIG_CRYPTO_ARC4=m 294 260 CONFIG_CRYPTO_BLOWFISH=m 295 261 CONFIG_CRYPTO_CAST5=m 296 262 CONFIG_CRYPTO_CAST6=m ··· 306 260 CONFIG_CRYPTO_TWOFISH=m 307 261 # CONFIG_CRYPTO_ANSI_CPRNG is not set 308 262 # CONFIG_CRYPTO_HW is not set 309 - # CONFIG_VIRTUALIZATION is not set 310 - CONFIG_CRC_T10DIF=y 311 - CONFIG_LIBCRC32C=m
+5 -13
arch/powerpc/configs/mpc83xx_defconfig
··· 2 2 CONFIG_SYSVIPC=y 3 3 CONFIG_LOG_BUF_SHIFT=14 4 4 CONFIG_BLK_DEV_INITRD=y 5 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 6 5 CONFIG_EXPERT=y 7 6 CONFIG_SLAB=y 8 7 CONFIG_MODULES=y 9 8 CONFIG_MODULE_UNLOAD=y 10 9 # CONFIG_BLK_DEV_BSG is not set 10 + CONFIG_PARTITION_ADVANCED=y 11 11 # CONFIG_PPC_CHRP is not set 12 12 # CONFIG_PPC_PMAC is not set 13 13 CONFIG_PPC_83xx=y ··· 25 25 CONFIG_QUICC_ENGINE=y 26 26 CONFIG_QE_GPIO=y 27 27 CONFIG_MATH_EMULATION=y 28 - CONFIG_SPARSE_IRQ=y 29 28 CONFIG_PCI=y 30 29 CONFIG_NET=y 31 30 CONFIG_PACKET=y ··· 41 42 # CONFIG_INET_LRO is not set 42 43 # CONFIG_IPV6 is not set 43 44 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 45 + CONFIG_DEVTMPFS=y 44 46 # CONFIG_FW_LOADER is not set 45 47 CONFIG_MTD=y 46 - CONFIG_MTD_PARTITIONS=y 47 - CONFIG_MTD_OF_PARTS=y 48 48 CONFIG_MTD_CHAR=y 49 49 CONFIG_MTD_BLOCK=y 50 50 CONFIG_MTD_CFI=y ··· 62 64 CONFIG_SATA_FSL=y 63 65 CONFIG_SATA_SIL=y 64 66 CONFIG_NETDEVICES=y 67 + CONFIG_MII=y 68 + CONFIG_UCC_GETH=y 69 + CONFIG_GIANFAR=y 65 70 CONFIG_MARVELL_PHY=y 66 71 CONFIG_DAVICOM_PHY=y 67 72 CONFIG_VITESSE_PHY=y 68 73 CONFIG_ICPLUS_PHY=y 69 74 CONFIG_FIXED_PHY=y 70 - CONFIG_NET_ETHERNET=y 71 - CONFIG_MII=y 72 - CONFIG_GIANFAR=y 73 - CONFIG_UCC_GETH=y 74 75 CONFIG_INPUT_FF_MEMLESS=m 75 76 # CONFIG_INPUT_MOUSEDEV is not set 76 77 # CONFIG_INPUT_KEYBOARD is not set ··· 109 112 CONFIG_EXT2_FS=y 110 113 CONFIG_EXT3_FS=y 111 114 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 112 - CONFIG_INOTIFY=y 113 115 CONFIG_PROC_KCORE=y 114 116 CONFIG_TMPFS=y 115 117 CONFIG_NFS_FS=y 116 - CONFIG_NFS_V3=y 117 118 CONFIG_NFS_V4=y 118 119 CONFIG_ROOT_NFS=y 119 - CONFIG_PARTITION_ADVANCED=y 120 120 CONFIG_CRC_T10DIF=y 121 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 122 - CONFIG_SYSCTL_SYSCALL_CHECK=y 123 121 CONFIG_CRYPTO_ECB=m 124 122 CONFIG_CRYPTO_PCBC=m 125 123 CONFIG_CRYPTO_SHA256=y
+9 -24
arch/powerpc/configs/mpc85xx_defconfig
··· 5 5 CONFIG_POSIX_MQUEUE=y 6 6 CONFIG_BSD_PROCESS_ACCT=y 7 7 CONFIG_AUDIT=y 8 - CONFIG_SPARSE_IRQ=y 8 + CONFIG_IRQ_DOMAIN_DEBUG=y 9 + CONFIG_NO_HZ=y 10 + CONFIG_HIGH_RES_TIMERS=y 9 11 CONFIG_IKCONFIG=y 10 12 CONFIG_IKCONFIG_PROC=y 11 13 CONFIG_LOG_BUF_SHIFT=14 ··· 19 17 CONFIG_MODULE_FORCE_UNLOAD=y 20 18 CONFIG_MODVERSIONS=y 21 19 # CONFIG_BLK_DEV_BSG is not set 20 + CONFIG_PARTITION_ADVANCED=y 21 + CONFIG_MAC_PARTITION=y 22 22 CONFIG_MPC8540_ADS=y 23 23 CONFIG_MPC8560_ADS=y 24 24 CONFIG_MPC85xx_CDS=y ··· 44 40 CONFIG_QUICC_ENGINE=y 45 41 CONFIG_QE_GPIO=y 46 42 CONFIG_HIGHMEM=y 47 - CONFIG_NO_HZ=y 48 - CONFIG_HIGH_RES_TIMERS=y 49 43 CONFIG_BINFMT_MISC=m 50 44 CONFIG_MATH_EMULATION=y 51 45 CONFIG_FORCE_MAX_ZONEORDER=12 ··· 76 74 CONFIG_IPV6=y 77 75 CONFIG_IP_SCTP=m 78 76 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 77 + CONFIG_DEVTMPFS=y 79 78 CONFIG_MTD=y 80 79 CONFIG_MTD_CMDLINE_PARTS=y 81 80 CONFIG_MTD_CHAR=y 82 81 CONFIG_MTD_BLOCK=y 83 - CONFIG_MTD_CFI=y 84 82 CONFIG_FTL=y 85 - CONFIG_MTD_GEN_PROBE=y 86 - CONFIG_MTD_MAP_BANK_WIDTH_1=y 87 - CONFIG_MTD_MAP_BANK_WIDTH_2=y 88 - CONFIG_MTD_MAP_BANK_WIDTH_4=y 89 - CONFIG_MTD_CFI_I1=y 90 - CONFIG_MTD_CFI_I2=y 83 + CONFIG_MTD_CFI=y 91 84 CONFIG_MTD_CFI_INTELEXT=y 92 85 CONFIG_MTD_CFI_AMDSTD=y 93 - CONFIG_MTD_CFI_UTIL=y 94 86 CONFIG_MTD_PHYSMAP_OF=y 95 - CONFIG_MTD_PARTITIONS=y 96 - CONFIG_MTD_OF_PARTS=y 87 + CONFIG_MTD_M25P80=y 97 88 CONFIG_MTD_NAND=y 98 89 CONFIG_MTD_NAND_FSL_ELBC=y 99 90 CONFIG_MTD_NAND_FSL_IFC=y 100 - CONFIG_MTD_NAND_IDS=y 101 - CONFIG_MTD_NAND_ECC=y 102 - CONFIG_MTD_M25P80=y 103 91 CONFIG_PROC_DEVICETREE=y 104 92 CONFIG_BLK_DEV_LOOP=y 105 93 CONFIG_BLK_DEV_NBD=y 106 94 CONFIG_BLK_DEV_RAM=y 107 95 CONFIG_BLK_DEV_RAM_SIZE=131072 108 - CONFIG_MISC_DEVICES=y 109 96 CONFIG_EEPROM_LEGACY=y 110 97 CONFIG_BLK_DEV_SD=y 111 98 CONFIG_CHR_DEV_ST=y ··· 106 115 CONFIG_SATA_AHCI=y 107 116 CONFIG_SATA_FSL=y 108 117 CONFIG_PATA_ALI=y 118 + CONFIG_PATA_VIA=y 109 119 CONFIG_NETDEVICES=y 110 120 CONFIG_DUMMY=y 111 121 
CONFIG_FS_ENET=y ··· 126 134 CONFIG_SERIAL_8250_CONSOLE=y 127 135 CONFIG_SERIAL_8250_NR_UARTS=2 128 136 CONFIG_SERIAL_8250_RUNTIME_UARTS=2 129 - CONFIG_SERIAL_8250_EXTENDED=y 130 137 CONFIG_SERIAL_8250_MANY_PORTS=y 131 138 CONFIG_SERIAL_8250_DETECT_IRQ=y 132 139 CONFIG_SERIAL_8250_RSA=y ··· 174 183 CONFIG_HID_SONY=y 175 184 CONFIG_HID_SUNPLUS=y 176 185 CONFIG_USB=y 177 - CONFIG_USB_DEVICEFS=y 178 186 CONFIG_USB_MON=y 179 187 CONFIG_USB_EHCI_HCD=y 180 188 CONFIG_USB_EHCI_FSL=y ··· 219 229 CONFIG_SYSV_FS=m 220 230 CONFIG_UFS_FS=m 221 231 CONFIG_NFS_FS=y 222 - CONFIG_NFS_V3=y 223 232 CONFIG_NFS_V4=y 224 233 CONFIG_ROOT_NFS=y 225 234 CONFIG_NFSD=y 226 - CONFIG_PARTITION_ADVANCED=y 227 - CONFIG_MAC_PARTITION=y 228 235 CONFIG_CRC_T10DIF=y 229 236 CONFIG_DEBUG_FS=y 230 237 CONFIG_DETECT_HUNG_TASK=y 231 238 CONFIG_DEBUG_INFO=y 232 - CONFIG_SYSCTL_SYSCALL_CHECK=y 233 - CONFIG_IRQ_DOMAIN_DEBUG=y 234 239 CONFIG_CRYPTO_PCBC=m 235 240 CONFIG_CRYPTO_SHA256=y 236 241 CONFIG_CRYPTO_SHA512=y
+8 -24
arch/powerpc/configs/mpc85xx_smp_defconfig
··· 7 7 CONFIG_POSIX_MQUEUE=y 8 8 CONFIG_BSD_PROCESS_ACCT=y 9 9 CONFIG_AUDIT=y 10 - CONFIG_SPARSE_IRQ=y 10 + CONFIG_IRQ_DOMAIN_DEBUG=y 11 + CONFIG_NO_HZ=y 12 + CONFIG_HIGH_RES_TIMERS=y 11 13 CONFIG_IKCONFIG=y 12 14 CONFIG_IKCONFIG_PROC=y 13 15 CONFIG_LOG_BUF_SHIFT=14 ··· 21 19 CONFIG_MODULE_FORCE_UNLOAD=y 22 20 CONFIG_MODVERSIONS=y 23 21 # CONFIG_BLK_DEV_BSG is not set 22 + CONFIG_PARTITION_ADVANCED=y 23 + CONFIG_MAC_PARTITION=y 24 24 CONFIG_MPC8540_ADS=y 25 25 CONFIG_MPC8560_ADS=y 26 26 CONFIG_MPC85xx_CDS=y ··· 46 42 CONFIG_QUICC_ENGINE=y 47 43 CONFIG_QE_GPIO=y 48 44 CONFIG_HIGHMEM=y 49 - CONFIG_NO_HZ=y 50 - CONFIG_HIGH_RES_TIMERS=y 51 45 CONFIG_BINFMT_MISC=m 52 46 CONFIG_MATH_EMULATION=y 53 47 CONFIG_IRQ_ALL_CPUS=y ··· 79 77 CONFIG_IPV6=y 80 78 CONFIG_IP_SCTP=m 81 79 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 80 + CONFIG_DEVTMPFS=y 82 81 CONFIG_MTD=y 83 82 CONFIG_MTD_CMDLINE_PARTS=y 84 83 CONFIG_MTD_CHAR=y 85 84 CONFIG_MTD_BLOCK=y 86 - CONFIG_MTD_CFI=y 87 85 CONFIG_FTL=y 88 - CONFIG_MTD_GEN_PROBE=y 89 - CONFIG_MTD_MAP_BANK_WIDTH_1=y 90 - CONFIG_MTD_MAP_BANK_WIDTH_2=y 91 - CONFIG_MTD_MAP_BANK_WIDTH_4=y 92 - CONFIG_MTD_CFI_I1=y 93 - CONFIG_MTD_CFI_I2=y 86 + CONFIG_MTD_CFI=y 94 87 CONFIG_MTD_CFI_INTELEXT=y 95 88 CONFIG_MTD_CFI_AMDSTD=y 96 - CONFIG_MTD_CFI_UTIL=y 97 89 CONFIG_MTD_PHYSMAP_OF=y 98 - CONFIG_MTD_PARTITIONS=y 99 - CONFIG_MTD_OF_PARTS=y 90 + CONFIG_MTD_M25P80=y 100 91 CONFIG_MTD_NAND=y 101 92 CONFIG_MTD_NAND_FSL_ELBC=y 102 93 CONFIG_MTD_NAND_FSL_IFC=y 103 - CONFIG_MTD_NAND_IDS=y 104 - CONFIG_MTD_NAND_ECC=y 105 - CONFIG_MTD_M25P80=y 106 94 CONFIG_PROC_DEVICETREE=y 107 95 CONFIG_BLK_DEV_LOOP=y 108 96 CONFIG_BLK_DEV_NBD=y 109 97 CONFIG_BLK_DEV_RAM=y 110 98 CONFIG_BLK_DEV_RAM_SIZE=131072 111 - CONFIG_MISC_DEVICES=y 112 99 CONFIG_EEPROM_LEGACY=y 113 100 CONFIG_BLK_DEV_SD=y 114 101 CONFIG_CHR_DEV_ST=y ··· 128 137 CONFIG_SERIAL_8250_CONSOLE=y 129 138 CONFIG_SERIAL_8250_NR_UARTS=2 130 139 CONFIG_SERIAL_8250_RUNTIME_UARTS=2 131 - CONFIG_SERIAL_8250_EXTENDED=y 132 
140 CONFIG_SERIAL_8250_MANY_PORTS=y 133 141 CONFIG_SERIAL_8250_DETECT_IRQ=y 134 142 CONFIG_SERIAL_8250_RSA=y ··· 176 186 CONFIG_HID_SONY=y 177 187 CONFIG_HID_SUNPLUS=y 178 188 CONFIG_USB=y 179 - CONFIG_USB_DEVICEFS=y 180 189 CONFIG_USB_MON=y 181 190 CONFIG_USB_EHCI_HCD=y 182 191 CONFIG_USB_EHCI_FSL=y ··· 221 232 CONFIG_SYSV_FS=m 222 233 CONFIG_UFS_FS=m 223 234 CONFIG_NFS_FS=y 224 - CONFIG_NFS_V3=y 225 235 CONFIG_NFS_V4=y 226 236 CONFIG_ROOT_NFS=y 227 237 CONFIG_NFSD=y 228 - CONFIG_PARTITION_ADVANCED=y 229 - CONFIG_MAC_PARTITION=y 230 238 CONFIG_CRC_T10DIF=y 231 239 CONFIG_DEBUG_FS=y 232 240 CONFIG_DETECT_HUNG_TASK=y 233 241 CONFIG_DEBUG_INFO=y 234 - CONFIG_SYSCTL_SYSCALL_CHECK=y 235 - CONFIG_IRQ_DOMAIN_DEBUG=y 236 242 CONFIG_CRYPTO_PCBC=m 237 243 CONFIG_CRYPTO_SHA256=y 238 244 CONFIG_CRYPTO_SHA512=y
-2
arch/powerpc/include/asm/cputable.h
··· 553 553 & feature); 554 554 } 555 555 556 - #ifdef CONFIG_HAVE_HW_BREAKPOINT 557 556 #define HBP_NUM 1 558 - #endif /* CONFIG_HAVE_HW_BREAKPOINT */ 559 557 560 558 #endif /* !__ASSEMBLY__ */ 561 559
+1
arch/powerpc/include/asm/kvm_host.h
··· 33 33 #include <asm/kvm_asm.h> 34 34 #include <asm/processor.h> 35 35 #include <asm/page.h> 36 + #include <asm/cacheflush.h> 36 37 37 38 #define KVM_MAX_VCPUS NR_CPUS 38 39 #define KVM_MAX_VCORES NR_CPUS
+12
arch/powerpc/include/asm/kvm_ppc.h
··· 219 219 void kvmppc_free_lpid(long lpid); 220 220 void kvmppc_init_lpid(unsigned long nr_lpids); 221 221 222 + static inline void kvmppc_mmu_flush_icache(pfn_t pfn) 223 + { 224 + /* Clear i-cache for new pages */ 225 + struct page *page; 226 + page = pfn_to_page(pfn); 227 + if (!test_bit(PG_arch_1, &page->flags)) { 228 + flush_dcache_icache_page(page); 229 + set_bit(PG_arch_1, &page->flags); 230 + } 231 + } 232 + 233 + 222 234 #endif /* __POWERPC_KVM_PPC_H__ */
+1
arch/powerpc/include/asm/mpic_msgr.h
··· 14 14 #include <linux/types.h> 15 15 #include <linux/spinlock.h> 16 16 #include <asm/smp.h> 17 + #include <asm/io.h> 17 18 18 19 struct mpic_msgr { 19 20 u32 __iomem *base;
+1
arch/powerpc/include/asm/processor.h
··· 386 386 enum idle_boot_override {IDLE_NO_OVERRIDE = 0, IDLE_POWERSAVE_OFF}; 387 387 388 388 extern int powersave_nap; /* set if nap mode can be used in idle loop */ 389 + extern void power7_nap(void); 389 390 390 391 #ifdef CONFIG_PSERIES_IDLE 391 392 extern void update_smt_snooze_delay(int snooze);
+1
arch/powerpc/kernel/asm-offsets.c
··· 76 76 DEFINE(SIGSEGV, SIGSEGV); 77 77 DEFINE(NMI_MASK, NMI_MASK); 78 78 DEFINE(THREAD_DSCR, offsetof(struct thread_struct, dscr)); 79 + DEFINE(THREAD_DSCR_INHERIT, offsetof(struct thread_struct, dscr_inherit)); 79 80 #else 80 81 DEFINE(THREAD_INFO, offsetof(struct task_struct, stack)); 81 82 #endif /* CONFIG_PPC64 */
+2
arch/powerpc/kernel/dbell.c
··· 28 28 29 29 void doorbell_cause_ipi(int cpu, unsigned long data) 30 30 { 31 + /* Order previous accesses vs. msgsnd, which is treated as a store */ 32 + mb(); 31 33 ppc_msgsnd(PPC_DBELL, 0, data); 32 34 } 33 35
+4 -5
arch/powerpc/kernel/dma-iommu.c
··· 83 83 return 0; 84 84 } 85 85 86 - if ((tbl->it_offset + tbl->it_size) > (mask >> IOMMU_PAGE_SHIFT)) { 87 - dev_info(dev, "Warning: IOMMU window too big for device mask\n"); 88 - dev_info(dev, "mask: 0x%08llx, table end: 0x%08lx\n", 89 - mask, (tbl->it_offset + tbl->it_size) << 90 - IOMMU_PAGE_SHIFT); 86 + if (tbl->it_offset > (mask >> IOMMU_PAGE_SHIFT)) { 87 + dev_info(dev, "Warning: IOMMU offset too big for device mask\n"); 88 + dev_info(dev, "mask: 0x%08llx, table offset: 0x%08lx\n", 89 + mask, tbl->it_offset << IOMMU_PAGE_SHIFT); 91 90 return 0; 92 91 } else 93 92 return 1;
+17 -6
arch/powerpc/kernel/entry_64.S
··· 370 370 li r3,0 371 371 b syscall_exit 372 372 373 + .section ".toc","aw" 374 + DSCR_DEFAULT: 375 + .tc dscr_default[TC],dscr_default 376 + 377 + .section ".text" 378 + 373 379 /* 374 380 * This routine switches between two different tasks. The process 375 381 * state of one is saved on its kernel stack. Then the state ··· 515 509 mr r1,r8 /* start using new stack pointer */ 516 510 std r7,PACAKSAVE(r13) 517 511 518 - ld r6,_CCR(r1) 519 - mtcrf 0xFF,r6 520 - 521 512 #ifdef CONFIG_ALTIVEC 522 513 BEGIN_FTR_SECTION 523 514 ld r0,THREAD_VRSAVE(r4) ··· 523 520 #endif /* CONFIG_ALTIVEC */ 524 521 #ifdef CONFIG_PPC64 525 522 BEGIN_FTR_SECTION 523 + lwz r6,THREAD_DSCR_INHERIT(r4) 524 + ld r7,DSCR_DEFAULT@toc(2) 526 525 ld r0,THREAD_DSCR(r4) 527 - cmpd r0,r25 528 - beq 1f 526 + cmpwi r6,0 527 + bne 1f 528 + ld r0,0(r7) 529 + 1: cmpd r0,r25 530 + beq 2f 529 531 mtspr SPRN_DSCR,r0 530 - 1: 532 + 2: 531 533 END_FTR_SECTION_IFSET(CPU_FTR_DSCR) 532 534 #endif 535 + 536 + ld r6,_CCR(r1) 537 + mtcrf 0xFF,r6 533 538 534 539 /* r3-r13 are destroyed -- Cort */ 535 540 REST_8GPRS(14, r1)
+2 -1
arch/powerpc/kernel/exceptions-64s.S
··· 186 186 KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x800) 187 187 188 188 MASKABLE_EXCEPTION_PSERIES(0x900, 0x900, decrementer) 189 - MASKABLE_EXCEPTION_HV(0x980, 0x982, decrementer) 189 + STD_EXCEPTION_HV(0x980, 0x982, hdecrementer) 190 190 191 191 STD_EXCEPTION_PSERIES(0xa00, 0xa00, trap_0a) 192 192 KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xa00) ··· 486 486 487 487 STD_EXCEPTION_COMMON_ASYNC(0x500, hardware_interrupt, do_IRQ) 488 488 STD_EXCEPTION_COMMON_ASYNC(0x900, decrementer, .timer_interrupt) 489 + STD_EXCEPTION_COMMON(0x980, hdecrementer, .hdec_interrupt) 489 490 STD_EXCEPTION_COMMON(0xa00, trap_0a, .unknown_exception) 490 491 STD_EXCEPTION_COMMON(0xb00, trap_0b, .unknown_exception) 491 492 STD_EXCEPTION_COMMON(0xd00, single_step, .single_step_exception)
+1 -1
arch/powerpc/kernel/hw_breakpoint.c
··· 253 253 254 254 /* Do not emulate user-space instructions, instead single-step them */ 255 255 if (user_mode(regs)) { 256 - bp->ctx->task->thread.last_hit_ubp = bp; 256 + current->thread.last_hit_ubp = bp; 257 257 regs->msr |= MSR_SE; 258 258 goto out; 259 259 }
+2
arch/powerpc/kernel/idle_power7.S
··· 28 28 lwz r4,ADDROFF(powersave_nap)(r3) 29 29 cmpwi 0,r4,0 30 30 beqlr 31 + /* fall through */ 31 32 33 + _GLOBAL(power7_nap) 32 34 /* NAP is a state loss, we create a regs frame on the 33 35 * stack, fill it up with the state we care about and 34 36 * stick a pointer to it in PACAR1. We really only
+24 -3
arch/powerpc/kernel/kgdb.c
··· 25 25 #include <asm/processor.h> 26 26 #include <asm/machdep.h> 27 27 #include <asm/debug.h> 28 + #include <linux/slab.h> 28 29 29 30 /* 30 31 * This table contains the mapping between PowerPC hardware trap types, and ··· 102 101 return SIGHUP; /* default for things we don't know about */ 103 102 } 104 103 104 + /** 105 + * 106 + * kgdb_skipexception - Bail out of KGDB when we've been triggered. 107 + * @exception: Exception vector number 108 + * @regs: Current &struct pt_regs. 109 + * 110 + * On some architectures we need to skip a breakpoint exception when 111 + * it occurs after a breakpoint has been removed. 112 + * 113 + */ 114 + int kgdb_skipexception(int exception, struct pt_regs *regs) 115 + { 116 + return kgdb_isremovedbreak(regs->nip); 117 + } 118 + 105 119 static int kgdb_call_nmi_hook(struct pt_regs *regs) 106 120 { 107 121 kgdb_nmicallback(raw_smp_processor_id(), regs); ··· 154 138 static int kgdb_singlestep(struct pt_regs *regs) 155 139 { 156 140 struct thread_info *thread_info, *exception_thread_info; 141 + struct thread_info *backup_current_thread_info = \ 142 + (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL); 157 143 158 144 if (user_mode(regs)) 159 145 return 0; ··· 173 155 thread_info = (struct thread_info *)(regs->gpr[1] & ~(THREAD_SIZE-1)); 174 156 exception_thread_info = current_thread_info(); 175 157 176 - if (thread_info != exception_thread_info) 158 + if (thread_info != exception_thread_info) { 159 + /* Save the original current_thread_info. */ 160 + memcpy(backup_current_thread_info, exception_thread_info, sizeof *thread_info); 177 161 memcpy(exception_thread_info, thread_info, sizeof *thread_info); 162 + } 178 163 179 164 kgdb_handle_exception(0, SIGTRAP, 0, regs); 180 165 181 166 if (thread_info != exception_thread_info) 182 - memcpy(thread_info, exception_thread_info, sizeof *thread_info); 167 + /* Restore current_thread_info lastly. */ 168 + memcpy(exception_thread_info, backup_current_thread_info, sizeof *thread_info); 183 169 184 170 return 1; 185 171 } ··· 432 410 #else 433 411 linux_regs->msr |= MSR_SE; 434 412 #endif 435 - kgdb_single_step = 1; 436 413 atomic_set(&kgdb_cpu_doing_single_step, 437 414 raw_smp_processor_id()); 438 415 }
+2 -10
arch/powerpc/kernel/process.c
··· 802 802 #endif /* CONFIG_PPC_STD_MMU_64 */ 803 803 #ifdef CONFIG_PPC64 804 804 if (cpu_has_feature(CPU_FTR_DSCR)) { 805 - if (current->thread.dscr_inherit) { 806 - p->thread.dscr_inherit = 1; 807 - p->thread.dscr = current->thread.dscr; 808 - } else if (0 != dscr_default) { 809 - p->thread.dscr_inherit = 1; 810 - p->thread.dscr = dscr_default; 811 - } else { 812 - p->thread.dscr_inherit = 0; 813 - p->thread.dscr = 0; 814 - } 805 + p->thread.dscr_inherit = current->thread.dscr_inherit; 806 + p->thread.dscr = current->thread.dscr; 815 807 } 816 808 #endif 817 809
+9 -2
arch/powerpc/kernel/smp.c
··· 198 198 struct cpu_messages *info = &per_cpu(ipi_message, cpu); 199 199 char *message = (char *)&info->messages; 200 200 201 + /* 202 + * Order previous accesses before accesses in the IPI handler. 203 + */ 204 + smp_mb(); 201 205 message[msg] = 1; 202 - mb(); 206 + /* 207 + * cause_ipi functions are required to include a full barrier 208 + * before doing whatever causes the IPI. 209 + */ 203 210 smp_ops->cause_ipi(cpu, info->data); 204 211 } 205 212 ··· 218 211 mb(); /* order any irq clear */ 219 212 220 213 do { 221 - all = xchg_local(&info->messages, 0); 214 + all = xchg(&info->messages, 0); 222 215 223 216 #ifdef __BIG_ENDIAN 224 217 if (all & (1 << (24 - 8 * PPC_MSG_CALL_FUNCTION)))
+4 -4
arch/powerpc/kernel/syscalls.c
··· 107 107 long ret; 108 108 109 109 if (personality(current->personality) == PER_LINUX32 110 - && personality == PER_LINUX) 111 - personality = PER_LINUX32; 110 + && personality(personality) == PER_LINUX) 111 + personality = (personality & ~PER_MASK) | PER_LINUX32; 112 112 ret = sys_personality(personality); 113 - if (ret == PER_LINUX32) 114 - ret = PER_LINUX; 113 + if (personality(ret) == PER_LINUX32) 114 + ret = (ret & ~PER_MASK) | PER_LINUX; 115 115 return ret; 116 116 } 117 117 #endif
+10
arch/powerpc/kernel/sysfs.c
··· 194 194 return sprintf(buf, "%lx\n", dscr_default); 195 195 } 196 196 197 + static void update_dscr(void *dummy) 198 + { 199 + if (!current->thread.dscr_inherit) { 200 + current->thread.dscr = dscr_default; 201 + mtspr(SPRN_DSCR, dscr_default); 202 + } 203 + } 204 + 197 205 static ssize_t __used store_dscr_default(struct device *dev, 198 206 struct device_attribute *attr, const char *buf, 199 207 size_t count) ··· 213 205 if (ret != 1) 214 206 return -EINVAL; 215 207 dscr_default = val; 208 + 209 + on_each_cpu(update_dscr, NULL, 1); 216 210 217 211 return count; 218 212 }
+9
arch/powerpc/kernel/time.c
··· 535 535 trace_timer_interrupt_exit(regs); 536 536 } 537 537 538 + /* 539 + * Hypervisor decrementer interrupts shouldn't occur but are sometimes 540 + * left pending on exit from a KVM guest. We don't need to do anything 541 + * to clear them, as they are edge-triggered. 542 + */ 543 + void hdec_interrupt(struct pt_regs *regs) 544 + { 545 + } 546 + 538 547 #ifdef CONFIG_SUSPEND 539 548 static void generic_suspend_disable_irqs(void) 540 549 {
+2 -1
arch/powerpc/kernel/traps.c
··· 972 972 cpu_has_feature(CPU_FTR_DSCR)) { 973 973 PPC_WARN_EMULATED(mtdscr, regs); 974 974 rd = (instword >> 21) & 0x1f; 975 - mtspr(SPRN_DSCR, regs->gpr[rd]); 975 + current->thread.dscr = regs->gpr[rd]; 976 976 current->thread.dscr_inherit = 1; 977 + mtspr(SPRN_DSCR, current->thread.dscr); 977 978 return 0; 978 979 } 979 980 #endif
+3
arch/powerpc/kvm/book3s_32_mmu_host.c
··· 211 211 pteg1 |= PP_RWRX; 212 212 } 213 213 214 + if (orig_pte->may_execute) 215 + kvmppc_mmu_flush_icache(hpaddr >> PAGE_SHIFT); 216 + 214 217 local_irq_disable(); 215 218 216 219 if (pteg[rr]) {
+2
arch/powerpc/kvm/book3s_64_mmu_host.c
··· 126 126 127 127 if (!orig_pte->may_execute) 128 128 rflags |= HPTE_R_N; 129 + else 130 + kvmppc_mmu_flush_icache(hpaddr >> PAGE_SHIFT); 129 131 130 132 hash = hpt_hash(va, PTE_SIZE, MMU_SEGSIZE_256M); 131 133
+7 -5
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1421 1421 sync /* order setting ceded vs. testing prodded */ 1422 1422 lbz r5,VCPU_PRODDED(r3) 1423 1423 cmpwi r5,0 1424 - bne 1f 1424 + bne kvm_cede_prodded 1425 1425 li r0,0 /* set trap to 0 to say hcall is handled */ 1426 1426 stw r0,VCPU_TRAP(r3) 1427 1427 li r0,H_SUCCESS 1428 1428 std r0,VCPU_GPR(R3)(r3) 1429 1429 BEGIN_FTR_SECTION 1430 - b 2f /* just send it up to host on 970 */ 1430 + b kvm_cede_exit /* just send it up to host on 970 */ 1431 1431 END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_206) 1432 1432 1433 1433 /* ··· 1446 1446 or r4,r4,r0 1447 1447 PPC_POPCNTW(R7,R4) 1448 1448 cmpw r7,r8 1449 - bge 2f 1449 + bge kvm_cede_exit 1450 1450 stwcx. r4,0,r6 1451 1451 bne 31b 1452 1452 li r0,1 ··· 1555 1555 b hcall_real_fallback 1556 1556 1557 1557 /* cede when already previously prodded case */ 1558 - 1: li r0,0 1558 + kvm_cede_prodded: 1559 + li r0,0 1559 1560 stb r0,VCPU_PRODDED(r3) 1560 1561 sync /* order testing prodded vs. clearing ceded */ 1561 1562 stb r0,VCPU_CEDED(r3) ··· 1564 1563 blr 1565 1564 1566 1565 /* we've ceded but we want to give control to the host */ 1567 - 2: li r3,H_TOO_HARD 1566 + kvm_cede_exit: 1567 + li r3,H_TOO_HARD 1568 1568 blr 1569 1569 1570 1570 secondary_too_late:
+7 -4
arch/powerpc/kvm/e500_tlb.c
··· 322 322 static void clear_tlb1_bitmap(struct kvmppc_vcpu_e500 *vcpu_e500) 323 323 { 324 324 if (vcpu_e500->g2h_tlb1_map) 325 - memset(vcpu_e500->g2h_tlb1_map, 326 - sizeof(u64) * vcpu_e500->gtlb_params[1].entries, 0); 325 + memset(vcpu_e500->g2h_tlb1_map, 0, 326 + sizeof(u64) * vcpu_e500->gtlb_params[1].entries); 327 327 if (vcpu_e500->h2g_tlb1_rmap) 328 - memset(vcpu_e500->h2g_tlb1_rmap, 329 - sizeof(unsigned int) * host_tlb_params[1].entries, 0); 328 + memset(vcpu_e500->h2g_tlb1_rmap, 0, 329 + sizeof(unsigned int) * host_tlb_params[1].entries); 330 330 } 331 331 332 332 static void clear_tlb_privs(struct kvmppc_vcpu_e500 *vcpu_e500) ··· 539 539 540 540 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, 541 541 ref, gvaddr, stlbe); 542 + 543 + /* Clear i-cache for new pages */ 544 + kvmppc_mmu_flush_icache(pfn); 542 545 } 543 546 544 547 /* XXX only map the one-one case, for now use TLB0 */
+1 -1
arch/powerpc/lib/code-patching.c
··· 20 20 { 21 21 int err; 22 22 23 - err = __put_user(instr, addr); 23 + __put_user_size(instr, addr, 4, err); 24 24 if (err) 25 25 return err; 26 26 asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" : : "r" (addr));
+2 -33
arch/powerpc/lib/copyuser_power7.S
··· 288 288 std r0,16(r1) 289 289 stdu r1,-STACKFRAMESIZE(r1) 290 290 bl .enter_vmx_usercopy 291 - cmpwi r3,0 291 + cmpwi cr1,r3,0 292 292 ld r0,STACKFRAMESIZE+16(r1) 293 293 ld r3,STACKFRAMESIZE+48(r1) 294 294 ld r4,STACKFRAMESIZE+56(r1) ··· 326 326 dcbt r0,r8,0b01010 /* GO */ 327 327 .machine pop 328 328 329 - /* 330 - * We prefetch both the source and destination using enhanced touch 331 - * instructions. We use a stream ID of 0 for the load side and 332 - * 1 for the store side. 333 - */ 334 - clrrdi r6,r4,7 335 - clrrdi r9,r3,7 336 - ori r9,r9,1 /* stream=1 */ 337 - 338 - srdi r7,r5,7 /* length in cachelines, capped at 0x3FF */ 339 - cmpldi cr1,r7,0x3FF 340 - ble cr1,1f 341 - li r7,0x3FF 342 - 1: lis r0,0x0E00 /* depth=7 */ 343 - sldi r7,r7,7 344 - or r7,r7,r0 345 - ori r10,r7,1 /* stream=1 */ 346 - 347 - lis r8,0x8000 /* GO=1 */ 348 - clrldi r8,r8,32 349 - 350 - .machine push 351 - .machine "power4" 352 - dcbt r0,r6,0b01000 353 - dcbt r0,r7,0b01010 354 - dcbtst r0,r9,0b01000 355 - dcbtst r0,r10,0b01010 356 - eieio 357 - dcbt r0,r8,0b01010 /* GO */ 358 - .machine pop 359 - 360 - beq .Lunwind_stack_nonvmx_copy 329 + beq cr1,.Lunwind_stack_nonvmx_copy 361 330 362 331 /* 363 332 * If source and destination are not relatively aligned we use a
+2 -2
arch/powerpc/lib/memcpy_power7.S
··· 222 222 std r0,16(r1) 223 223 stdu r1,-STACKFRAMESIZE(r1) 224 224 bl .enter_vmx_copy 225 - cmpwi r3,0 225 + cmpwi cr1,r3,0 226 226 ld r0,STACKFRAMESIZE+16(r1) 227 227 ld r3,STACKFRAMESIZE+48(r1) 228 228 ld r4,STACKFRAMESIZE+56(r1) ··· 260 260 dcbt r0,r8,0b01010 /* GO */ 261 261 .machine pop 262 262 263 - beq .Lunwind_stack_nonvmx_copy 263 + beq cr1,.Lunwind_stack_nonvmx_copy 264 264 265 265 /* 266 266 * If source and destination are not relatively aligned we use a
+1
arch/powerpc/mm/mem.c
··· 469 469 __flush_dcache_icache_phys(page_to_pfn(page) << PAGE_SHIFT); 470 470 #endif 471 471 } 472 + EXPORT_SYMBOL(flush_dcache_icache_page); 472 473 473 474 void clear_user_page(void *page, unsigned long vaddr, struct page *pg) 474 475 {
+4 -3
arch/powerpc/mm/numa.c
··· 1436 1436 1437 1437 /* 1438 1438 * Update the node maps and sysfs entries for each cpu whose home node 1439 - * has changed. 1439 + * has changed. Returns 1 when the topology has changed, and 0 otherwise. 1440 1440 */ 1441 1441 int arch_update_cpu_topology(void) 1442 1442 { 1443 - int cpu, nid, old_nid; 1443 + int cpu, nid, old_nid, changed = 0; 1444 1444 unsigned int associativity[VPHN_ASSOC_BUFSIZE] = {0}; 1445 1445 struct device *dev; 1446 1446 ··· 1466 1466 dev = get_cpu_device(cpu); 1467 1467 if (dev) 1468 1468 kobject_uevent(&dev->kobj, KOBJ_CHANGE); 1469 + changed = 1; 1469 1470 } 1470 1471 1471 - return 1; 1472 + return changed; 1472 1473 } 1473 1474 1474 1475 static void topology_work_fn(struct work_struct *work)
+1 -1
arch/powerpc/perf/core-book3s.c
··· 1431 1431 if (!event->hw.idx || is_limited_pmc(event->hw.idx)) 1432 1432 continue; 1433 1433 val = read_pmc(event->hw.idx); 1434 - if ((int)val < 0) { 1434 + if (pmc_overflow(val)) { 1435 1435 /* event has overflowed */ 1436 1436 found = 1; 1437 1437 record_and_restart(event, val, regs);
+1 -9
arch/powerpc/platforms/powernv/smp.c
··· 106 106 { 107 107 unsigned int cpu; 108 108 109 - /* If powersave_nap is enabled, use NAP mode, else just 110 - * spin aimlessly 111 - */ 112 - if (!powersave_nap) { 113 - generic_mach_cpu_die(); 114 - return; 115 - } 116 - 117 109 /* Standard hot unplug procedure */ 118 110 local_irq_disable(); 119 111 idle_task_exit(); ··· 120 128 */ 121 129 mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1); 122 130 while (!generic_check_cpu_restart(cpu)) { 123 - power7_idle(); 131 + power7_nap(); 124 132 if (!generic_check_cpu_restart(cpu)) { 125 133 DBG("CPU%d Unexpected exit while offline !\n", cpu); 126 134 /* We may be getting an IPI, so we re-enable
+8 -5
arch/powerpc/sysdev/fsl_pci.c
··· 465 465 iounmap(hose->cfg_data); 466 466 iounmap(hose->cfg_addr); 467 467 pcibios_free_controller(hose); 468 - return 0; 468 + return -ENODEV; 469 469 } 470 470 471 471 setup_pci_cmd(hose); ··· 827 827 828 828 void __devinit fsl_pci_init(void) 829 829 { 830 + int ret; 830 831 struct device_node *node; 831 832 struct pci_controller *hose; 832 833 dma_addr_t max = 0xffffffff; ··· 856 855 if (!fsl_pci_primary) 857 856 fsl_pci_primary = node; 858 857 859 - fsl_add_bridge(node, fsl_pci_primary == node); 860 - hose = pci_find_hose_for_OF_device(node); 861 - max = min(max, hose->dma_window_base_cur + 862 - hose->dma_window_size); 858 + ret = fsl_add_bridge(node, fsl_pci_primary == node); 859 + if (ret == 0) { 860 + hose = pci_find_hose_for_OF_device(node); 861 + max = min(max, hose->dma_window_base_cur + 862 + hose->dma_window_size); 863 + } 863 864 } 864 865 } 865 866
+3
arch/powerpc/sysdev/mpic_msgr.c
··· 14 14 #include <linux/list.h> 15 15 #include <linux/of_platform.h> 16 16 #include <linux/errno.h> 17 + #include <linux/err.h> 18 + #include <linux/export.h> 19 + #include <linux/slab.h> 17 20 #include <asm/prom.h> 18 21 #include <asm/hw_irq.h> 19 22 #include <asm/ppc-pci.h>
+5 -1
arch/powerpc/sysdev/xics/icp-hv.c
··· 65 65 static inline void icp_hv_set_qirr(int n_cpu , u8 value) 66 66 { 67 67 int hw_cpu = get_hard_smp_processor_id(n_cpu); 68 - long rc = plpar_hcall_norets(H_IPI, hw_cpu, value); 68 + long rc; 69 + 70 + /* Make sure all previous accesses are ordered before IPI sending */ 71 + mb(); 72 + rc = plpar_hcall_norets(H_IPI, hw_cpu, value); 69 73 if (rc != H_SUCCESS) { 70 74 pr_err("%s: bad return code qirr cpu=%d hw_cpu=%d mfrr=0x%x " 71 75 "returned %ld\n", __func__, n_cpu, hw_cpu, value, rc);
+30 -46
arch/powerpc/xmon/xmon.c
··· 17 17 #include <linux/reboot.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/kallsyms.h> 20 + #include <linux/kmsg_dump.h> 20 21 #include <linux/cpumask.h> 21 22 #include <linux/export.h> 22 23 #include <linux/sysrq.h> ··· 895 894 #endif 896 895 default: 897 896 printf("Unrecognized command: "); 898 - do { 897 + do { 899 898 if (' ' < cmd && cmd <= '~') 900 899 putchar(cmd); 901 900 else 902 901 printf("\\x%x", cmd); 903 902 cmd = inchar(); 904 - } while (cmd != '\n'); 903 + } while (cmd != '\n'); 905 904 printf(" (type ? for help)\n"); 906 905 break; 907 906 } ··· 1098 1097 return 1; 1099 1098 } 1100 1099 1101 - static char *breakpoint_help_string = 1100 + static char *breakpoint_help_string = 1102 1101 "Breakpoint command usage:\n" 1103 1102 "b show breakpoints\n" 1104 1103 "b <addr> [cnt] set breakpoint at given instr addr\n" ··· 1194 1193 1195 1194 default: 1196 1195 termch = cmd; 1197 - cmd = skipbl(); 1196 + cmd = skipbl(); 1198 1197 if (cmd == '?') { 1199 1198 printf(breakpoint_help_string); 1200 1199 break; ··· 1360 1359 sp + REGS_OFFSET); 1361 1360 break; 1362 1361 } 1363 - printf("--- Exception: %lx %s at ", regs.trap, 1362 + printf("--- Exception: %lx %s at ", regs.trap, 1364 1363 getvecname(TRAP(&regs))); 1365 1364 pc = regs.nip; 1366 1365 lr = regs.link; ··· 1624 1623 1625 1624 cmd = skipbl(); 1626 1625 if (cmd == '\n') { 1627 - unsigned long sp, toc; 1626 + unsigned long sp, toc; 1628 1627 asm("mr %0,1" : "=r" (sp) :); 1629 1628 asm("mr %0,2" : "=r" (toc) :); 1630 1629 1631 1630 printf("msr = "REG" sprg0= "REG"\n", 1632 1631 mfmsr(), mfspr(SPRN_SPRG0)); 1633 1632 printf("pvr = "REG" sprg1= "REG"\n", 1634 - mfspr(SPRN_PVR), mfspr(SPRN_SPRG1)); 1633 + mfspr(SPRN_PVR), mfspr(SPRN_SPRG1)); 1635 1634 printf("dec = "REG" sprg2= "REG"\n", 1636 1635 mfspr(SPRN_DEC), mfspr(SPRN_SPRG2)); 1637 1636 printf("sp = "REG" sprg3= "REG"\n", sp, mfspr(SPRN_SPRG3)); ··· 1784 1783 static int brev; 1785 1784 static int mnoread; 1786 1785 1787 - static char 
*memex_help_string = 1786 + static char *memex_help_string = 1788 1787 "Memory examine command usage:\n" 1789 1788 "m [addr] [flags] examine/change memory\n" 1790 1789 " addr is optional. will start where left off.\n" ··· 1799 1798 "NOTE: flags are saved as defaults\n" 1800 1799 ""; 1801 1800 1802 - static char *memex_subcmd_help_string = 1801 + static char *memex_subcmd_help_string = 1803 1802 "Memory examine subcommands:\n" 1804 1803 " hexval write this val to current location\n" 1805 1804 " 'string' write chars from string to this location\n" ··· 2065 2064 nr = mread(adrs, temp, r); 2066 2065 adrs += nr; 2067 2066 for (m = 0; m < r; ++m) { 2068 - if ((m & (sizeof(long) - 1)) == 0 && m > 0) 2067 + if ((m & (sizeof(long) - 1)) == 0 && m > 0) 2069 2068 putchar(' '); 2070 2069 if (m < nr) 2071 2070 printf("%.2x", temp[m]); ··· 2073 2072 printf("%s", fault_chars[fault_type]); 2074 2073 } 2075 2074 for (; m < 16; ++m) { 2076 - if ((m & (sizeof(long) - 1)) == 0) 2075 + if ((m & (sizeof(long) - 1)) == 0) 2077 2076 putchar(' '); 2078 2077 printf(" "); 2079 2078 } ··· 2149 2148 void 2150 2149 dump_log_buf(void) 2151 2150 { 2152 - const unsigned long size = 128; 2153 - unsigned long end, addr; 2154 - unsigned char buf[size + 1]; 2151 + struct kmsg_dumper dumper = { .active = 1 }; 2152 + unsigned char buf[128]; 2153 + size_t len; 2155 2154 2156 - addr = 0; 2157 - buf[size] = '\0'; 2155 + if (setjmp(bus_error_jmp) != 0) { 2156 + printf("Error dumping printk buffer!\n"); 2157 + return; 2158 + } 2158 2159 2159 - if (setjmp(bus_error_jmp) != 0) { 2160 - printf("Unable to lookup symbol __log_buf!\n"); 2161 - return; 2162 - } 2160 + catch_memory_errors = 1; 2161 + sync(); 2163 2162 2164 - catch_memory_errors = 1; 2165 - sync(); 2166 - addr = kallsyms_lookup_name("__log_buf"); 2163 + kmsg_dump_rewind_nolock(&dumper); 2164 + while (kmsg_dump_get_line_nolock(&dumper, false, buf, sizeof(buf), &len)) { 2165 + buf[len] = '\0'; 2166 + printf("%s", buf); 2167 + } 2167 2168 2168 - if (!addr) 2169 - printf("Symbol __log_buf not found!\n"); 2170 - else { 2171 - end = addr + (1 << CONFIG_LOG_BUF_SHIFT); 2172 - while (addr < end) { 2173 - if (! mread(addr, buf, size)) { 2174 - printf("Can't read memory at address 0x%lx\n", addr); 2175 - break; 2176 - } 2177 - 2178 - printf("%s", buf); 2179 - 2180 - if (strlen(buf) < size) 2181 - break; 2182 - 2183 - addr += size; 2184 - } 2185 - } 2186 - 2187 - sync(); 2188 - /* wait a little while to see if we get a machine check */ 2189 - __delay(200); 2190 - catch_memory_errors = 0; 2169 + sync(); 2170 + /* wait a little while to see if we get a machine check */ 2171 + __delay(200); 2172 + catch_memory_errors = 0; 2191 2173 2192 2174 } 2193 2175 2194 2176 /*
+2 -1
arch/s390/include/asm/elf.h
··· 180 180 #define ELF_PLATFORM (elf_platform) 181 181 182 182 #ifndef CONFIG_64BIT 183 - #define SET_PERSONALITY(ex) set_personality(PER_LINUX) 183 + #define SET_PERSONALITY(ex) \ 184 + set_personality(PER_LINUX | (current->personality & (~PER_MASK))) 184 185 #else /* CONFIG_64BIT */ 185 186 #define SET_PERSONALITY(ex) \ 186 187 do { \
+1 -2
arch/s390/include/asm/posix_types.h
··· 13 13 */ 14 14 15 15 typedef unsigned long __kernel_size_t; 16 + typedef long __kernel_ssize_t; 16 17 #define __kernel_size_t __kernel_size_t 17 18 18 19 typedef unsigned short __kernel_old_dev_t; ··· 26 25 typedef unsigned short __kernel_ipc_pid_t; 27 26 typedef unsigned short __kernel_uid_t; 28 27 typedef unsigned short __kernel_gid_t; 29 - typedef int __kernel_ssize_t; 30 28 typedef int __kernel_ptrdiff_t; 31 29 32 30 #else /* __s390x__ */ ··· 35 35 typedef int __kernel_ipc_pid_t; 36 36 typedef unsigned int __kernel_uid_t; 37 37 typedef unsigned int __kernel_gid_t; 38 - typedef long __kernel_ssize_t; 39 38 typedef long __kernel_ptrdiff_t; 40 39 typedef unsigned long __kernel_sigset_t; /* at least 32 bits */ 41 40
+1
arch/s390/include/asm/smp.h
··· 44 44 } 45 45 46 46 static inline int smp_find_processor_id(int address) { return 0; } 47 + static inline int smp_store_status(int cpu) { return 0; } 47 48 static inline int smp_vcpu_scheduled(int cpu) { return 1; } 48 49 static inline void smp_yield_cpu(int cpu) { } 49 50 static inline void smp_yield(void) { }
+1 -1
arch/um/os-Linux/time.c
··· 114 114 skew += this_tick - last_tick; 115 115 116 116 while (skew >= one_tick) { 117 - alarm_handler(SIGVTALRM, NULL); 117 + alarm_handler(SIGVTALRM, NULL, NULL); 118 118 skew -= one_tick; 119 119 } 120 120
+1 -2
arch/x86/include/asm/spinlock.h
··· 12 12 * Simple spin lock operations. There are two variants, one clears IRQ's 13 13 * on the local processor, one does not. 14 14 * 15 - * These are fair FIFO ticket locks, which are currently limited to 256 16 - * CPUs. 15 + * These are fair FIFO ticket locks, which support up to 2^16 CPUs. 17 16 * 18 17 * (the type definitions are in asm/spinlock_types.h) 19 18 */
+1 -1
arch/x86/kernel/alternative.c
··· 165 165 #endif 166 166 167 167 #ifdef P6_NOP1 168 - static const unsigned char __initconst_or_module p6nops[] = 168 + static const unsigned char p6nops[] = 169 169 { 170 170 P6_NOP1, 171 171 P6_NOP2,
+1 -1
arch/x86/kernel/irq.c
··· 270 270 271 271 if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) { 272 272 break_affinity = 1; 273 - affinity = cpu_all_mask; 273 + affinity = cpu_online_mask; 274 274 } 275 275 276 276 chip = irq_data_get_irq_chip(data);
+4 -3
arch/x86/kernel/microcode_amd.c
··· 143 143 unsigned int *current_size) 144 144 { 145 145 struct microcode_header_amd *mc_hdr; 146 - unsigned int actual_size; 146 + unsigned int actual_size, patch_size; 147 147 u16 equiv_cpu_id; 148 148 149 149 /* size of the current patch we're staring at */ 150 - *current_size = *(u32 *)(ucode_ptr + 4) + SECTION_HDR_SIZE; 150 + patch_size = *(u32 *)(ucode_ptr + 4); 151 + *current_size = patch_size + SECTION_HDR_SIZE; 151 152 152 153 equiv_cpu_id = find_equiv_id(); 153 154 if (!equiv_cpu_id) ··· 175 174 /* 176 175 * now that the header looks sane, verify its size 177 176 */ 178 - actual_size = verify_ucode_size(cpu, *current_size, leftover_size); 177 + actual_size = verify_ucode_size(cpu, patch_size, leftover_size); 179 178 if (!actual_size) 180 179 return 0; 181 180
+21 -9
arch/x86/kvm/emulate.c
··· 475 475 return address_mask(ctxt, reg); 476 476 } 477 477 478 + static void masked_increment(ulong *reg, ulong mask, int inc) 479 + { 480 + assign_masked(reg, *reg + inc, mask); 481 + } 482 + 478 483 static inline void 479 484 register_address_increment(struct x86_emulate_ctxt *ctxt, unsigned long *reg, int inc) 480 485 { 486 + ulong mask; 487 + 481 488 if (ctxt->ad_bytes == sizeof(unsigned long)) 482 - *reg += inc; 489 + mask = ~0UL; 483 490 else 484 - *reg = (*reg & ~ad_mask(ctxt)) | ((*reg + inc) & ad_mask(ctxt)); 491 + mask = ad_mask(ctxt); 492 + masked_increment(reg, mask, inc); 493 + } 494 + 495 + static void rsp_increment(struct x86_emulate_ctxt *ctxt, int inc) 496 + { 497 + masked_increment(&ctxt->regs[VCPU_REGS_RSP], stack_mask(ctxt), inc); 485 498 } 486 499 487 500 static inline void jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) ··· 1535 1522 { 1536 1523 struct segmented_address addr; 1537 1524 1538 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], -bytes); 1539 - addr.ea = register_address(ctxt, ctxt->regs[VCPU_REGS_RSP]); 1525 + rsp_increment(ctxt, -bytes); 1526 + addr.ea = ctxt->regs[VCPU_REGS_RSP] & stack_mask(ctxt); 1540 1527 addr.seg = VCPU_SREG_SS; 1541 1528 1542 1529 return segmented_write(ctxt, addr, data, bytes); ··· 1555 1542 int rc; 1556 1543 struct segmented_address addr; 1557 1544 1558 - addr.ea = register_address(ctxt, ctxt->regs[VCPU_REGS_RSP]); 1545 + addr.ea = ctxt->regs[VCPU_REGS_RSP] & stack_mask(ctxt); 1559 1546 addr.seg = VCPU_SREG_SS; 1560 1547 rc = segmented_read(ctxt, addr, dest, len); 1561 1548 if (rc != X86EMUL_CONTINUE) 1562 1549 return rc; 1563 1550 1564 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], len); 1551 + rsp_increment(ctxt, len); 1565 1552 return rc; 1566 1553 } ··· 1701 1688 1702 1689 while (reg >= VCPU_REGS_RAX) { 1703 1690 if (reg == VCPU_REGS_RSP) { 1704 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], 1705 - ctxt->op_bytes); 1691 + rsp_increment(ctxt, ctxt->op_bytes); 1706 1692 --reg; 1707 1693 1708 1694 ··· 2837 2825 rc = emulate_pop(ctxt, &ctxt->dst.val, ctxt->op_bytes); 2838 2826 if (rc != X86EMUL_CONTINUE) 2839 2827 return rc; 2840 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], ctxt->src.val); 2828 + rsp_increment(ctxt, ctxt->src.val); 2841 2829 return X86EMUL_CONTINUE; 2842 2830 } 2843 2831
+9 -4
arch/x86/kvm/mmu.c
··· 4113 4113 LIST_HEAD(invalid_list); 4114 4114 4115 4115 /* 4116 + * Never scan more than sc->nr_to_scan VM instances. 4117 + * Will not hit this condition practically since we do not try 4118 + * to shrink more than one VM and it is very unlikely to see 4119 + * !n_used_mmu_pages so many times. 4120 + */ 4121 + if (!nr_to_scan--) 4122 + break; 4123 + /* 4116 4124 * n_used_mmu_pages is accessed without holding kvm->mmu_lock 4117 4125 * here. We may skip a VM instance errorneosly, but we do not 4118 4126 * want to shrink a VM that only started to populate its MMU 4119 4127 * anyway. 4120 4128 */ 4121 - if (kvm->arch.n_used_mmu_pages > 0) { 4122 - if (!nr_to_scan--) 4123 - break; 4129 + if (!kvm->arch.n_used_mmu_pages) 4124 4130 continue; 4125 - } 4126 4131 4127 4132 idx = srcu_read_lock(&kvm->srcu); 4128 4133 spin_lock(&kvm->mmu_lock);
+4 -1
arch/x86/kvm/x86.c
··· 806 806 * kvm-specific. Those are put in the beginning of the list. 807 807 */ 808 808 809 - #define KVM_SAVE_MSRS_BEGIN 9 809 + #define KVM_SAVE_MSRS_BEGIN 10 810 810 static u32 msrs_to_save[] = { 811 811 MSR_KVM_SYSTEM_TIME, MSR_KVM_WALL_CLOCK, 812 812 MSR_KVM_SYSTEM_TIME_NEW, MSR_KVM_WALL_CLOCK_NEW, ··· 1999 1999 break; 2000 2000 case MSR_KVM_STEAL_TIME: 2001 2001 data = vcpu->arch.st.msr_val; 2002 + break; 2003 + case MSR_KVM_PV_EOI_EN: 2004 + data = vcpu->arch.pv_eoi.msr_val; 2002 2005 break; 2003 2006 case MSR_IA32_P5_MC_ADDR: 2004 2007 case MSR_IA32_P5_MC_TYPE:
+11 -107
arch/x86/xen/enlighten.c
··· 31 31 #include <linux/pci.h> 32 32 #include <linux/gfp.h> 33 33 #include <linux/memblock.h> 34 - #include <linux/syscore_ops.h> 35 34 36 35 #include <xen/xen.h> 37 36 #include <xen/interface/xen.h> ··· 1469 1470 #endif 1470 1471 } 1471 1472 1472 - #ifdef CONFIG_XEN_PVHVM 1473 - /* 1474 - * The pfn containing the shared_info is located somewhere in RAM. This 1475 - * will cause trouble if the current kernel is doing a kexec boot into a 1476 - * new kernel. The new kernel (and its startup code) can not know where 1477 - * the pfn is, so it can not reserve the page. The hypervisor will 1478 - * continue to update the pfn, and as a result memory corruption occours 1479 - * in the new kernel. 1480 - * 1481 - * One way to work around this issue is to allocate a page in the 1482 - * xen-platform pci device's BAR memory range. But pci init is done very 1483 - * late and the shared_info page is already in use very early to read 1484 - * the pvclock. So moving the pfn from RAM to MMIO is racy because some 1485 - * code paths on other vcpus could access the pfn during the small 1486 - * window when the old pfn is moved to the new pfn. There is even a 1487 - * small window were the old pfn is not backed by a mfn, and during that 1488 - * time all reads return -1. 1489 - * 1490 - * Because it is not known upfront where the MMIO region is located it 1491 - * can not be used right from the start in xen_hvm_init_shared_info. 1492 - * 1493 - * To minimise trouble the move of the pfn is done shortly before kexec. 1494 - * This does not eliminate the race because all vcpus are still online 1495 - * when the syscore_ops will be called. But hopefully there is no work 1496 - * pending at this point in time. Also the syscore_op is run last which 1497 - * reduces the risk further. 
1498 - */ 1499 - 1500 - static struct shared_info *xen_hvm_shared_info; 1501 - 1502 - static void xen_hvm_connect_shared_info(unsigned long pfn) 1473 + void __ref xen_hvm_init_shared_info(void) 1503 1474 { 1475 + int cpu; 1504 1476 struct xen_add_to_physmap xatp; 1477 + static struct shared_info *shared_info_page = 0; 1505 1478 1479 + if (!shared_info_page) 1480 + shared_info_page = (struct shared_info *) 1481 + extend_brk(PAGE_SIZE, PAGE_SIZE); 1506 1482 xatp.domid = DOMID_SELF; 1507 1483 xatp.idx = 0; 1508 1484 xatp.space = XENMAPSPACE_shared_info; 1509 - xatp.gpfn = pfn; 1485 + xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT; 1510 1486 if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp)) 1511 1487 BUG(); 1512 1488 1513 - } 1514 - static void xen_hvm_set_shared_info(struct shared_info *sip) 1515 - { 1516 - int cpu; 1517 - 1518 - HYPERVISOR_shared_info = sip; 1489 + HYPERVISOR_shared_info = (struct shared_info *)shared_info_page; 1519 1490 1520 1491 /* xen_vcpu is a pointer to the vcpu_info struct in the shared_info 1521 1492 * page, we use it in the event channel upcall and in some pvclock 1522 1493 * related functions. We don't need the vcpu_info placement 1523 1494 * optimizations because we don't use any pv_mmu or pv_irq op on 1524 1495 * HVM. 1525 - * When xen_hvm_set_shared_info is run at boot time only vcpu 0 is 1526 - * online but xen_hvm_set_shared_info is run at resume time too and 1496 + * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is 1497 + * online but xen_hvm_init_shared_info is run at resume time too and 1527 1498 * in that case multiple vcpus might be online. 
*/ 1528 1499 for_each_online_cpu(cpu) { 1529 1500 per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu]; 1530 1501 } 1531 1502 } 1532 1503 1533 - /* Reconnect the shared_info pfn to a mfn */ 1534 - void xen_hvm_resume_shared_info(void) 1535 - { 1536 - xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT); 1537 - } 1538 - 1539 - #ifdef CONFIG_KEXEC 1540 - static struct shared_info *xen_hvm_shared_info_kexec; 1541 - static unsigned long xen_hvm_shared_info_pfn_kexec; 1542 - 1543 - /* Remember a pfn in MMIO space for kexec reboot */ 1544 - void __devinit xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn) 1545 - { 1546 - xen_hvm_shared_info_kexec = sip; 1547 - xen_hvm_shared_info_pfn_kexec = pfn; 1548 - } 1549 - 1550 - static void xen_hvm_syscore_shutdown(void) 1551 - { 1552 - struct xen_memory_reservation reservation = { 1553 - .domid = DOMID_SELF, 1554 - .nr_extents = 1, 1555 - }; 1556 - unsigned long prev_pfn; 1557 - int rc; 1558 - 1559 - if (!xen_hvm_shared_info_kexec) 1560 - return; 1561 - 1562 - prev_pfn = __pa(xen_hvm_shared_info) >> PAGE_SHIFT; 1563 - set_xen_guest_handle(reservation.extent_start, &prev_pfn); 1564 - 1565 - /* Move pfn to MMIO, disconnects previous pfn from mfn */ 1566 - xen_hvm_connect_shared_info(xen_hvm_shared_info_pfn_kexec); 1567 - 1568 - /* Update pointers, following hypercall is also a memory barrier */ 1569 - xen_hvm_set_shared_info(xen_hvm_shared_info_kexec); 1570 - 1571 - /* Allocate new mfn for previous pfn */ 1572 - do { 1573 - rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation); 1574 - if (rc == 0) 1575 - msleep(123); 1576 - } while (rc == 0); 1577 - 1578 - /* Make sure the previous pfn is really connected to a (new) mfn */ 1579 - BUG_ON(rc != 1); 1580 - } 1581 - 1582 - static struct syscore_ops xen_hvm_syscore_ops = { 1583 - .shutdown = xen_hvm_syscore_shutdown, 1584 - }; 1585 - #endif 1586 - 1587 - /* Use a pfn in RAM, may move to MMIO before kexec. 
*/ 1588 - static void __init xen_hvm_init_shared_info(void) 1589 - { 1590 - /* Remember pointer for resume */ 1591 - xen_hvm_shared_info = extend_brk(PAGE_SIZE, PAGE_SIZE); 1592 - xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT); 1593 - xen_hvm_set_shared_info(xen_hvm_shared_info); 1594 - } 1595 - 1504 + #ifdef CONFIG_XEN_PVHVM 1596 1505 static void __init init_hvm_pv_info(void) 1597 1506 { 1598 1507 int major, minor; ··· 1551 1644 init_hvm_pv_info(); 1552 1645 1553 1646 xen_hvm_init_shared_info(); 1554 - #ifdef CONFIG_KEXEC 1555 - register_syscore_ops(&xen_hvm_syscore_ops); 1556 - #endif 1557 1647 1558 1648 if (xen_feature(XENFEAT_hvm_callback_vector)) 1559 1649 xen_have_vector_callback = 1;
+1 -1
arch/x86/xen/mmu.c
··· 1283 1283 cpumask_clear_cpu(smp_processor_id(), to_cpumask(args->mask)); 1284 1284 1285 1285 args->op.cmd = MMUEXT_TLB_FLUSH_MULTI; 1286 - if (start != TLB_FLUSH_ALL && (end - start) <= PAGE_SIZE) { 1286 + if (end != TLB_FLUSH_ALL && (end - start) <= PAGE_SIZE) { 1287 1287 args->op.cmd = MMUEXT_INVLPG_MULTI; 1288 1288 args->op.arg1.linear_addr = start; 1289 1289 }
+92 -3
arch/x86/xen/p2m.c
··· 196 196 197 197 /* When we populate back during bootup, the amount of pages can vary. The 198 198 * max we have is seen is 395979, but that does not mean it can't be more. 199 - * But some machines can have 3GB I/O holes even. So lets reserve enough 200 - * for 4GB of I/O and E820 holes. */ 201 - RESERVE_BRK(p2m_populated, PMD_SIZE * 4); 199 + * Some machines can have 3GB I/O holes even. With early_can_reuse_p2m_middle 200 + * it can re-use Xen provided mfn_list array, so we only need to allocate at 201 + * most three P2M top nodes. */ 202 + RESERVE_BRK(p2m_populated, PAGE_SIZE * 3); 203 + 202 204 static inline unsigned p2m_top_index(unsigned long pfn) 203 205 { 204 206 BUG_ON(pfn >= MAX_P2M_PFN); ··· 577 575 } 578 576 return true; 579 577 } 578 + 579 + /* 580 + * Skim over the P2M tree looking at pages that are either filled with 581 + * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and 582 + * replace the P2M leaf with a p2m_missing or p2m_identity. 583 + * Stick the old page in the new P2M tree location. 
584 + */ 585 + bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn) 586 + { 587 + unsigned topidx; 588 + unsigned mididx; 589 + unsigned ident_pfns; 590 + unsigned inv_pfns; 591 + unsigned long *p2m; 592 + unsigned long *mid_mfn_p; 593 + unsigned idx; 594 + unsigned long pfn; 595 + 596 + /* We only look when this entails a P2M middle layer */ 597 + if (p2m_index(set_pfn)) 598 + return false; 599 + 600 + for (pfn = 0; pfn < MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) { 601 + topidx = p2m_top_index(pfn); 602 + 603 + if (!p2m_top[topidx]) 604 + continue; 605 + 606 + if (p2m_top[topidx] == p2m_mid_missing) 607 + continue; 608 + 609 + mididx = p2m_mid_index(pfn); 610 + p2m = p2m_top[topidx][mididx]; 611 + if (!p2m) 612 + continue; 613 + 614 + if ((p2m == p2m_missing) || (p2m == p2m_identity)) 615 + continue; 616 + 617 + if ((unsigned long)p2m == INVALID_P2M_ENTRY) 618 + continue; 619 + 620 + ident_pfns = 0; 621 + inv_pfns = 0; 622 + for (idx = 0; idx < P2M_PER_PAGE; idx++) { 623 + /* IDENTITY_PFNs are 1:1 */ 624 + if (p2m[idx] == IDENTITY_FRAME(pfn + idx)) 625 + ident_pfns++; 626 + else if (p2m[idx] == INVALID_P2M_ENTRY) 627 + inv_pfns++; 628 + else 629 + break; 630 + } 631 + if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE)) 632 + goto found; 633 + } 634 + return false; 635 + found: 636 + /* Found one, replace old with p2m_identity or p2m_missing */ 637 + p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing); 638 + /* And the other for save/restore.. */ 639 + mid_mfn_p = p2m_top_mfn_p[topidx]; 640 + /* NOTE: Even if it is a p2m_identity it should still be point to 641 + * a page filled with INVALID_P2M_ENTRY entries. */ 642 + mid_mfn_p[mididx] = virt_to_mfn(p2m_missing); 643 + 644 + /* Reset where we want to stick the old page in. 
*/ 645 + topidx = p2m_top_index(set_pfn); 646 + mididx = p2m_mid_index(set_pfn); 647 + 648 + /* This shouldn't happen */ 649 + if (WARN_ON(p2m_top[topidx] == p2m_mid_missing)) 650 + early_alloc_p2m(set_pfn); 651 + 652 + if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing)) 653 + return false; 654 + 655 + p2m_init(p2m); 656 + p2m_top[topidx][mididx] = p2m; 657 + mid_mfn_p = p2m_top_mfn_p[topidx]; 658 + mid_mfn_p[mididx] = virt_to_mfn(p2m); 659 + 660 + return true; 661 + } 580 662 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn) 581 663 { 582 664 if (unlikely(!__set_phys_to_machine(pfn, mfn))) { 583 665 if (!early_alloc_p2m(pfn)) 584 666 return false; 667 + 668 + if (early_can_reuse_p2m_middle(pfn, mfn)) 669 + return __set_phys_to_machine(pfn, mfn); 585 670 586 671 if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/)) 587 672 return false;
+8 -1
arch/x86/xen/setup.c
··· 78 78 memblock_reserve(start, size); 79 79 80 80 xen_max_p2m_pfn = PFN_DOWN(start + size); 81 + for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) { 82 + unsigned long mfn = pfn_to_mfn(pfn); 81 83 82 - for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++) 84 + if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn)) 85 + continue; 86 + WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n", 87 + pfn, mfn); 88 + 83 89 __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); 90 + } 84 91 } 85 92 86 93 static unsigned long __init xen_do_chunk(unsigned long start,
+1 -1
arch/x86/xen/suspend.c
··· 30 30 { 31 31 #ifdef CONFIG_XEN_PVHVM 32 32 int cpu; 33 - xen_hvm_resume_shared_info(); 33 + xen_hvm_init_shared_info(); 34 34 xen_callback_vector(); 35 35 xen_unplug_emulated_devices(); 36 36 if (xen_feature(XENFEAT_hvm_safe_pvclock)) {
+1 -1
arch/x86/xen/xen-ops.h
··· 41 41 void xen_vcpu_restore(void); 42 42 43 43 void xen_callback_vector(void); 44 - void xen_hvm_resume_shared_info(void); 44 + void xen_hvm_init_shared_info(void); 45 45 void xen_unplug_emulated_devices(void); 46 46 47 47 void __init xen_build_dynamic_phys_to_machine(void);
+28 -13
block/blk-lib.c
··· 44 44 struct request_queue *q = bdev_get_queue(bdev); 45 45 int type = REQ_WRITE | REQ_DISCARD; 46 46 unsigned int max_discard_sectors; 47 + unsigned int granularity, alignment, mask; 47 48 struct bio_batch bb; 48 49 struct bio *bio; 49 50 int ret = 0; ··· 55 54 if (!blk_queue_discard(q)) 56 55 return -EOPNOTSUPP; 57 56 57 + /* Zero-sector (unknown) and one-sector granularities are the same. */ 58 + granularity = max(q->limits.discard_granularity >> 9, 1U); 59 + mask = granularity - 1; 60 + alignment = (bdev_discard_alignment(bdev) >> 9) & mask; 61 + 58 62 /* 59 63 * Ensure that max_discard_sectors is of the proper 60 - * granularity 64 + * granularity, so that requests stay aligned after a split. 61 65 */ 62 66 max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9); 67 + max_discard_sectors = round_down(max_discard_sectors, granularity); 63 68 if (unlikely(!max_discard_sectors)) { 64 69 /* Avoid infinite loop below. Being cautious never hurts. */ 65 70 return -EOPNOTSUPP; 66 - } else if (q->limits.discard_granularity) { 67 - unsigned int disc_sects = q->limits.discard_granularity >> 9; 68 - 69 - max_discard_sectors &= ~(disc_sects - 1); 70 71 } 71 72 72 73 if (flags & BLKDEV_DISCARD_SECURE) { ··· 82 79 bb.wait = &wait; 83 80 84 81 while (nr_sects) { 82 + unsigned int req_sects; 83 + sector_t end_sect; 84 + 85 85 bio = bio_alloc(gfp_mask, 1); 86 86 if (!bio) { 87 87 ret = -ENOMEM; 88 88 break; 89 + } 90 + 91 + req_sects = min_t(sector_t, nr_sects, max_discard_sectors); 92 + 93 + /* 94 + * If splitting a request, and the next starting sector would be 95 + * misaligned, stop the discard at the previous aligned sector. 
96 + */ 97 + end_sect = sector + req_sects; 98 + if (req_sects < nr_sects && (end_sect & mask) != alignment) { 99 + end_sect = 100 + round_down(end_sect - alignment, granularity) 101 + + alignment; 102 + req_sects = end_sect - sector; 89 103 } 90 104 91 105 bio->bi_sector = sector; ··· 110 90 bio->bi_bdev = bdev; 111 91 bio->bi_private = &bb; 112 92 113 - if (nr_sects > max_discard_sectors) { 114 - bio->bi_size = max_discard_sectors << 9; 115 - nr_sects -= max_discard_sectors; 116 - sector += max_discard_sectors; 117 - } else { 118 - bio->bi_size = nr_sects << 9; 119 - nr_sects = 0; 120 - } 93 + bio->bi_size = req_sects << 9; 94 + nr_sects -= req_sects; 95 + sector = end_sect; 121 96 122 97 atomic_inc(&bb.done); 123 98 submit_bio(type, bio);
+82 -35
block/blk-merge.c
··· 110 110 return 0; 111 111 } 112 112 113 + static void 114 + __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, 115 + struct scatterlist *sglist, struct bio_vec **bvprv, 116 + struct scatterlist **sg, int *nsegs, int *cluster) 117 + { 118 + 119 + int nbytes = bvec->bv_len; 120 + 121 + if (*bvprv && *cluster) { 122 + if ((*sg)->length + nbytes > queue_max_segment_size(q)) 123 + goto new_segment; 124 + 125 + if (!BIOVEC_PHYS_MERGEABLE(*bvprv, bvec)) 126 + goto new_segment; 127 + if (!BIOVEC_SEG_BOUNDARY(q, *bvprv, bvec)) 128 + goto new_segment; 129 + 130 + (*sg)->length += nbytes; 131 + } else { 132 + new_segment: 133 + if (!*sg) 134 + *sg = sglist; 135 + else { 136 + /* 137 + * If the driver previously mapped a shorter 138 + * list, we could see a termination bit 139 + * prematurely unless it fully inits the sg 140 + * table on each mapping. We KNOW that there 141 + * must be more entries here or the driver 142 + * would be buggy, so force clear the 143 + * termination bit to avoid doing a full 144 + * sg_init_table() in drivers for each command. 145 + */ 146 + (*sg)->page_link &= ~0x02; 147 + *sg = sg_next(*sg); 148 + } 149 + 150 + sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset); 151 + (*nsegs)++; 152 + } 153 + *bvprv = bvec; 154 + } 155 + 113 156 /* 114 157 * map a request to scatterlist, return number of sg entries setup. 
Caller 115 158 * must make sure sg can hold rq->nr_phys_segments entries ··· 174 131 bvprv = NULL; 175 132 sg = NULL; 176 133 rq_for_each_segment(bvec, rq, iter) { 177 - int nbytes = bvec->bv_len; 178 - 179 - if (bvprv && cluster) { 180 - if (sg->length + nbytes > queue_max_segment_size(q)) 181 - goto new_segment; 182 - 183 - if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) 184 - goto new_segment; 185 - if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec)) 186 - goto new_segment; 187 - 188 - sg->length += nbytes; 189 - } else { 190 - new_segment: 191 - if (!sg) 192 - sg = sglist; 193 - else { 194 - /* 195 - * If the driver previously mapped a shorter 196 - * list, we could see a termination bit 197 - * prematurely unless it fully inits the sg 198 - * table on each mapping. We KNOW that there 199 - * must be more entries here or the driver 200 - * would be buggy, so force clear the 201 - * termination bit to avoid doing a full 202 - * sg_init_table() in drivers for each command. 203 - */ 204 - sg->page_link &= ~0x02; 205 - sg = sg_next(sg); 206 - } 207 - 208 - sg_set_page(sg, bvec->bv_page, nbytes, bvec->bv_offset); 209 - nsegs++; 210 - } 211 - bvprv = bvec; 134 + __blk_segment_map_sg(q, bvec, sglist, &bvprv, &sg, 135 + &nsegs, &cluster); 212 136 } /* segments in rq */ 213 137 214 138 ··· 208 198 return nsegs; 209 199 } 210 200 EXPORT_SYMBOL(blk_rq_map_sg); 201 + 202 + /** 203 + * blk_bio_map_sg - map a bio to a scatterlist 204 + * @q: request_queue in question 205 + * @bio: bio being mapped 206 + * @sglist: scatterlist being mapped 207 + * 208 + * Note: 209 + * Caller must make sure sg can hold bio->bi_phys_segments entries 210 + * 211 + * Will return the number of sg entries setup 212 + */ 213 + int blk_bio_map_sg(struct request_queue *q, struct bio *bio, 214 + struct scatterlist *sglist) 215 + { 216 + struct bio_vec *bvec, *bvprv; 217 + struct scatterlist *sg; 218 + int nsegs, cluster; 219 + unsigned long i; 220 + 221 + nsegs = 0; 222 + cluster = blk_queue_cluster(q); 223 + 224 + 
bvprv = NULL; 225 + sg = NULL; 226 + bio_for_each_segment(bvec, bio, i) { 227 + __blk_segment_map_sg(q, bvec, sglist, &bvprv, &sg, 228 + &nsegs, &cluster); 229 + } /* segments in bio */ 230 + 231 + if (sg) 232 + sg_mark_end(sg); 233 + 234 + BUG_ON(bio->bi_phys_segments && nsegs > bio->bi_phys_segments); 235 + return nsegs; 236 + } 237 + EXPORT_SYMBOL(blk_bio_map_sg); 211 238 212 239 static inline int ll_new_hw_segment(struct request_queue *q, 213 240 struct request *req,
+1 -1
block/genhd.c
··· 835 835 836 836 static void *show_partition_start(struct seq_file *seqf, loff_t *pos) 837 837 { 838 - static void *p; 838 + void *p; 839 839 840 840 p = disk_seqf_start(seqf, pos); 841 841 if (!IS_ERR_OR_NULL(p) && !*pos)
+1 -1
drivers/ata/Kconfig
··· 115 115 If unsure, say N. 116 116 117 117 config ATA_SFF 118 - bool "ATA SFF support" 118 + bool "ATA SFF support (for legacy IDE and PATA)" 119 119 default y 120 120 help 121 121 This option adds support for ATA controllers with SFF
+8
drivers/ata/ahci.c
··· 256 256 { PCI_VDEVICE(INTEL, 0x8c07), board_ahci }, /* Lynx Point RAID */ 257 257 { PCI_VDEVICE(INTEL, 0x8c0e), board_ahci }, /* Lynx Point RAID */ 258 258 { PCI_VDEVICE(INTEL, 0x8c0f), board_ahci }, /* Lynx Point RAID */ 259 + { PCI_VDEVICE(INTEL, 0x9c02), board_ahci }, /* Lynx Point-LP AHCI */ 260 + { PCI_VDEVICE(INTEL, 0x9c03), board_ahci }, /* Lynx Point-LP AHCI */ 261 + { PCI_VDEVICE(INTEL, 0x9c04), board_ahci }, /* Lynx Point-LP RAID */ 262 + { PCI_VDEVICE(INTEL, 0x9c05), board_ahci }, /* Lynx Point-LP RAID */ 263 + { PCI_VDEVICE(INTEL, 0x9c06), board_ahci }, /* Lynx Point-LP RAID */ 264 + { PCI_VDEVICE(INTEL, 0x9c07), board_ahci }, /* Lynx Point-LP RAID */ 265 + { PCI_VDEVICE(INTEL, 0x9c0e), board_ahci }, /* Lynx Point-LP RAID */ 266 + { PCI_VDEVICE(INTEL, 0x9c0f), board_ahci }, /* Lynx Point-LP RAID */ 259 267 260 268 /* JMicron 360/1/3/5/6, match class to avoid IDE function */ 261 269 { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+1
drivers/ata/ahci.h
··· 320 320 extern struct ata_port_operations ahci_ops; 321 321 extern struct ata_port_operations ahci_pmp_retry_srst_ops; 322 322 323 + unsigned int ahci_dev_classify(struct ata_port *ap); 323 324 void ahci_fill_cmd_slot(struct ahci_port_priv *pp, unsigned int tag, 324 325 u32 opts); 325 326 void ahci_save_initial_config(struct device *dev,
+8
drivers/ata/ata_piix.c
··· 329 329 { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 330 330 /* SATA Controller IDE (Lynx Point) */ 331 331 { 0x8086, 0x8c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 332 + /* SATA Controller IDE (Lynx Point-LP) */ 333 + { 0x8086, 0x9c00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb }, 334 + /* SATA Controller IDE (Lynx Point-LP) */ 335 + { 0x8086, 0x9c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb }, 336 + /* SATA Controller IDE (Lynx Point-LP) */ 337 + { 0x8086, 0x9c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 338 + /* SATA Controller IDE (Lynx Point-LP) */ 339 + { 0x8086, 0x9c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 332 340 /* SATA Controller IDE (DH89xxCC) */ 333 341 { 0x8086, 0x2326, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 334 342 { } /* terminate list */
+2 -1
drivers/ata/libahci.c
··· 1139 1139 } 1140 1140 } 1141 1141 1142 - static unsigned int ahci_dev_classify(struct ata_port *ap) 1142 + unsigned int ahci_dev_classify(struct ata_port *ap) 1143 1143 { 1144 1144 void __iomem *port_mmio = ahci_port_base(ap); 1145 1145 struct ata_taskfile tf; ··· 1153 1153 1154 1154 return ata_dev_classify(&tf); 1155 1155 } 1156 + EXPORT_SYMBOL_GPL(ahci_dev_classify); 1156 1157 1157 1158 void ahci_fill_cmd_slot(struct ahci_port_priv *pp, unsigned int tag, 1158 1159 u32 opts)
+4 -11
drivers/ata/libata-acpi.c
··· 60 60 if (ap->flags & ATA_FLAG_ACPI_SATA) 61 61 return NULL; 62 62 63 - /* 64 - * If acpi bind operation has already happened, we can get the handle 65 - * for the port by checking the corresponding scsi_host device's 66 - * firmware node, otherwise we will need to find out the handle from 67 - * its parent's acpi node. 68 - */ 69 - if (ap->scsi_host) 70 - return DEVICE_ACPI_HANDLE(&ap->scsi_host->shost_gendev); 71 - else 72 - return acpi_get_child(DEVICE_ACPI_HANDLE(ap->host->dev), 73 - ap->port_no); 63 + return acpi_get_child(DEVICE_ACPI_HANDLE(ap->host->dev), ap->port_no); 74 64 } 75 65 EXPORT_SYMBOL(ata_ap_acpi_handle); 76 66 ··· 1090 1100 1091 1101 if (!*handle) 1092 1102 return -ENODEV; 1103 + 1104 + if (ata_acpi_gtm(ap, &ap->__acpi_init_gtm) == 0) 1105 + ap->pflags |= ATA_PFLAG_INIT_GTM_VALID; 1093 1106 1094 1107 return 0; 1095 1108 }
+2 -1
drivers/ata/libata-core.c
··· 4062 4062 { "_NEC DV5800A", NULL, ATA_HORKAGE_NODMA }, 4063 4063 { "SAMSUNG CD-ROM SN-124", "N001", ATA_HORKAGE_NODMA }, 4064 4064 { "Seagate STT20000A", NULL, ATA_HORKAGE_NODMA }, 4065 - { "2GB ATA Flash Disk", "ADMA428M", ATA_HORKAGE_NODMA }, 4065 + { " 2GB ATA Flash Disk", "ADMA428M", ATA_HORKAGE_NODMA }, 4066 4066 /* Odd clown on sil3726/4726 PMPs */ 4067 4067 { "Config Disk", NULL, ATA_HORKAGE_DISABLE }, 4068 4068 ··· 4128 4128 4129 4129 /* Devices that do not need bridging limits applied */ 4130 4130 { "MTRON MSP-SATA*", NULL, ATA_HORKAGE_BRIDGE_OK, }, 4131 + { "BUFFALO HD-QSU2/R5", NULL, ATA_HORKAGE_BRIDGE_OK, }, 4131 4132 4132 4133 /* Devices which aren't very happy with higher link speeds */ 4133 4134 { "WD My Book", NULL, ATA_HORKAGE_1_5_GBPS, },
+16
drivers/ata/pata_atiixp.c
··· 20 20 #include <linux/delay.h> 21 21 #include <scsi/scsi_host.h> 22 22 #include <linux/libata.h> 23 + #include <linux/dmi.h> 23 24 24 25 #define DRV_NAME "pata_atiixp" 25 26 #define DRV_VERSION "0.4.6" ··· 34 33 ATIIXP_IDE_UDMA_MODE = 0x56 35 34 }; 36 35 36 + static const struct dmi_system_id attixp_cable_override_dmi_table[] = { 37 + { 38 + /* Board has onboard PATA<->SATA converters */ 39 + .ident = "MSI E350DM-E33", 40 + .matches = { 41 + DMI_MATCH(DMI_BOARD_VENDOR, "MSI"), 42 + DMI_MATCH(DMI_BOARD_NAME, "E350DM-E33(MS-7720)"), 43 + }, 44 + }, 45 + { } 46 + }; 47 + 37 48 static int atiixp_cable_detect(struct ata_port *ap) 38 49 { 39 50 struct pci_dev *pdev = to_pci_dev(ap->host->dev); 40 51 u8 udma; 52 + 53 + if (dmi_check_system(attixp_cable_override_dmi_table)) 54 + return ATA_CBL_PATA40_SHORT; 41 55 42 56 /* Hack from drivers/ide/pci. Really we want to know how to do the 43 57 raw detection not play follow the bios mode guess */
+1 -1
drivers/base/dma-contiguous.c
··· 250 250 return -EINVAL; 251 251 252 252 /* Sanitise input arguments */ 253 - alignment = PAGE_SIZE << max(MAX_ORDER, pageblock_order); 253 + alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order); 254 254 base = ALIGN(base, alignment); 255 255 size = ALIGN(size, alignment); 256 256 limit &= ~(alignment - 1);
+14 -1
drivers/block/drbd/drbd_bitmap.c
··· 889 889 unsigned int done; 890 890 unsigned flags; 891 891 #define BM_AIO_COPY_PAGES 1 892 + #define BM_WRITE_ALL_PAGES 2 892 893 int error; 893 894 struct kref kref; 894 895 }; ··· 1060 1059 if (lazy_writeout_upper_idx && i == lazy_writeout_upper_idx) 1061 1060 break; 1062 1061 if (rw & WRITE) { 1063 - if (bm_test_page_unchanged(b->bm_pages[i])) { 1062 + if (!(flags & BM_WRITE_ALL_PAGES) && 1063 + bm_test_page_unchanged(b->bm_pages[i])) { 1064 1064 dynamic_dev_dbg(DEV, "skipped bm write for idx %u\n", i); 1065 1065 continue; 1066 1066 } ··· 1140 1138 int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local) 1141 1139 { 1142 1140 return bm_rw(mdev, WRITE, 0, 0); 1141 + } 1142 + 1143 + /** 1144 + * drbd_bm_write_all() - Write the whole bitmap to its on disk location. 1145 + * @mdev: DRBD device. 1146 + * 1147 + * Will write all pages. 1148 + */ 1149 + int drbd_bm_write_all(struct drbd_conf *mdev) __must_hold(local) 1150 + { 1151 + return bm_rw(mdev, WRITE, BM_WRITE_ALL_PAGES, 0); 1143 1152 } 1144 1153 1145 1154 /**
+1
drivers/block/drbd/drbd_int.h
··· 1469 1469 extern int drbd_bm_write_page(struct drbd_conf *mdev, unsigned int idx) __must_hold(local); 1470 1470 extern int drbd_bm_read(struct drbd_conf *mdev) __must_hold(local); 1471 1471 extern int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local); 1472 + extern int drbd_bm_write_all(struct drbd_conf *mdev) __must_hold(local); 1472 1473 extern int drbd_bm_write_copy_pages(struct drbd_conf *mdev) __must_hold(local); 1473 1474 extern unsigned long drbd_bm_ALe_set_all(struct drbd_conf *mdev, 1474 1475 unsigned long al_enr);
+12 -16
drivers/block/drbd/drbd_main.c
··· 79 79 static void md_sync_timer_fn(unsigned long data); 80 80 static int w_bitmap_io(struct drbd_conf *mdev, struct drbd_work *w, int unused); 81 81 static int w_go_diskless(struct drbd_conf *mdev, struct drbd_work *w, int unused); 82 + static void _tl_clear(struct drbd_conf *mdev); 82 83 83 84 MODULE_AUTHOR("Philipp Reisner <phil@linbit.com>, " 84 85 "Lars Ellenberg <lars@linbit.com>"); ··· 433 432 434 433 /* Actions operating on the disk state, also want to work on 435 434 requests that got barrier acked. */ 436 - switch (what) { 437 - case fail_frozen_disk_io: 438 - case restart_frozen_disk_io: 439 - list_for_each_safe(le, tle, &mdev->barrier_acked_requests) { 440 - req = list_entry(le, struct drbd_request, tl_requests); 441 - _req_mod(req, what); 442 - } 443 435 444 - case connection_lost_while_pending: 445 - case resend: 446 - break; 447 - default: 448 - dev_err(DEV, "what = %d in _tl_restart()\n", what); 436 + list_for_each_safe(le, tle, &mdev->barrier_acked_requests) { 437 + req = list_entry(le, struct drbd_request, tl_requests); 438 + _req_mod(req, what); 449 439 } 450 440 } 451 441 ··· 451 459 */ 452 460 void tl_clear(struct drbd_conf *mdev) 453 461 { 462 + spin_lock_irq(&mdev->req_lock); 463 + _tl_clear(mdev); 464 + spin_unlock_irq(&mdev->req_lock); 465 + } 466 + 467 + static void _tl_clear(struct drbd_conf *mdev) 468 + { 454 469 struct list_head *le, *tle; 455 470 struct drbd_request *r; 456 - 457 - spin_lock_irq(&mdev->req_lock); 458 471 459 472 _tl_restart(mdev, connection_lost_while_pending); 460 473 ··· 479 482 480 483 memset(mdev->app_reads_hash, 0, APP_R_HSIZE*sizeof(void *)); 481 484 482 - spin_unlock_irq(&mdev->req_lock); 483 485 } 484 486 485 487 void tl_restart(struct drbd_conf *mdev, enum drbd_req_event what) ··· 1472 1476 if (ns.susp_fen) { 1473 1477 /* case1: The outdate peer handler is successful: */ 1474 1478 if (os.pdsk > D_OUTDATED && ns.pdsk <= D_OUTDATED) { 1475 - tl_clear(mdev); 1476 1479 if (test_bit(NEW_CUR_UUID, &mdev->flags)) 
{ 1477 1480 drbd_uuid_new_current(mdev); 1478 1481 clear_bit(NEW_CUR_UUID, &mdev->flags); 1479 1482 } 1480 1483 spin_lock_irq(&mdev->req_lock); 1484 + _tl_clear(mdev); 1481 1485 _drbd_set_state(_NS(mdev, susp_fen, 0), CS_VERBOSE, NULL); 1482 1486 spin_unlock_irq(&mdev->req_lock); 1483 1487 }
+2 -2
drivers/block/drbd/drbd_nl.c
··· 674 674 la_size_changed && md_moved ? "size changed and md moved" : 675 675 la_size_changed ? "size changed" : "md moved"); 676 676 /* next line implicitly does drbd_suspend_io()+drbd_resume_io() */ 677 - err = drbd_bitmap_io(mdev, &drbd_bm_write, 678 - "size changed", BM_LOCKED_MASK); 677 + err = drbd_bitmap_io(mdev, md_moved ? &drbd_bm_write_all : &drbd_bm_write, 678 + "size changed", BM_LOCKED_MASK); 679 679 if (err) { 680 680 rv = dev_size_error; 681 681 goto out;
+32 -4
drivers/block/drbd/drbd_req.c
··· 695 695 break; 696 696 697 697 case resend: 698 + /* Simply complete (local only) READs. */ 699 + if (!(req->rq_state & RQ_WRITE) && !req->w.cb) { 700 + _req_may_be_done(req, m); 701 + break; 702 + } 703 + 698 704 /* If RQ_NET_OK is already set, we got a P_WRITE_ACK or P_RECV_ACK 699 705 before the connection loss (B&C only); only P_BARRIER_ACK was missing. 700 706 Trowing them out of the TL here by pretending we got a BARRIER_ACK ··· 840 834 req->private_bio = NULL; 841 835 } 842 836 if (rw == WRITE) { 843 - remote = 1; 837 + /* Need to replicate writes. Unless it is an empty flush, 838 + * which is better mapped to a DRBD P_BARRIER packet, 839 + * also for drbd wire protocol compatibility reasons. */ 840 + if (unlikely(size == 0)) { 841 + /* The only size==0 bios we expect are empty flushes. */ 842 + D_ASSERT(bio->bi_rw & REQ_FLUSH); 843 + remote = 0; 844 + } else 845 + remote = 1; 844 846 } else { 845 847 /* READ || READA */ 846 848 if (local) { ··· 884 870 * extent. This waits for any resync activity in the corresponding 885 871 * resync extent to finish, and, if necessary, pulls in the target 886 872 * extent into the activity log, which involves further disk io because 887 - * of transactional on-disk meta data updates. */ 888 - if (rw == WRITE && local && !test_bit(AL_SUSPENDED, &mdev->flags)) { 873 + * of transactional on-disk meta data updates. 874 + * Empty flushes don't need to go into the activity log, they can only 875 + * flush data for pending writes which are already in there. */ 876 + if (rw == WRITE && local && size 877 + && !test_bit(AL_SUSPENDED, &mdev->flags)) { 889 878 req->rq_state |= RQ_IN_ACT_LOG; 890 879 drbd_al_begin_io(mdev, sector); 891 880 } ··· 1011 994 if (rw == WRITE && _req_conflicts(req)) 1012 995 goto fail_conflicting; 1013 996 1014 - list_add_tail(&req->tl_requests, &mdev->newest_tle->requests); 997 + /* no point in adding empty flushes to the transfer log, 998 + * they are mapped to drbd barriers already. */ 999 + if (likely(size!=0)) 1000 + list_add_tail(&req->tl_requests, &mdev->newest_tle->requests); 1015 1001 1016 1002 /* NOTE remote first: to get the concurrent write detection right, 1017 1003 * we must register the request before start of local IO. */ ··· 1033 1013 if (remote && 1034 1014 mdev->net_conf->on_congestion != OC_BLOCK && mdev->agreed_pro_version >= 96) 1035 1015 maybe_pull_ahead(mdev); 1016 + 1017 + /* If this was a flush, queue a drbd barrier/start a new epoch. 1018 + * Unless the current epoch was empty anyways, or we are not currently 1019 + * replicating, in which case there is no point. */ 1020 + if (unlikely(bio->bi_rw & REQ_FLUSH) 1021 + && mdev->newest_tle->n_writes 1022 + && drbd_should_do_remote(mdev->state)) 1023 + queue_barrier(mdev); 1036 1024 1037 1025 spin_unlock_irq(&mdev->req_lock); 1038 1026 kfree(b); /* if someone else has beaten us to it... */
+3 -1
drivers/cpufreq/omap-cpufreq.c
··· 218 218 219 219 policy->cur = policy->min = policy->max = omap_getspeed(policy->cpu); 220 220 221 - if (atomic_inc_return(&freq_table_users) == 1) 221 + if (!freq_table) 222 222 result = opp_init_cpufreq_table(mpu_dev, &freq_table); 223 223 224 224 if (result) { ··· 226 226 __func__, policy->cpu, result); 227 227 goto fail_ck; 228 228 } 229 + 230 + atomic_inc_return(&freq_table_users); 229 231 230 232 result = cpufreq_frequency_table_cpuinfo(policy, freq_table); 231 233 if (result)
+5 -5
drivers/crypto/caam/jr.c
··· 63 63 64 64 head = ACCESS_ONCE(jrp->head); 65 65 66 - spin_lock_bh(&jrp->outlock); 66 + spin_lock(&jrp->outlock); 67 67 68 68 sw_idx = tail = jrp->tail; 69 69 hw_idx = jrp->out_ring_read_index; ··· 115 115 jrp->tail = tail; 116 116 } 117 117 118 - spin_unlock_bh(&jrp->outlock); 118 + spin_unlock(&jrp->outlock); 119 119 120 120 /* Finally, execute user's callback */ 121 121 usercall(dev, userdesc, userstatus, userarg); ··· 236 236 return -EIO; 237 237 } 238 238 239 - spin_lock(&jrp->inplock); 239 + spin_lock_bh(&jrp->inplock); 240 240 241 241 head = jrp->head; 242 242 tail = ACCESS_ONCE(jrp->tail); 243 243 244 244 if (!rd_reg32(&jrp->rregs->inpring_avail) || 245 245 CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) { 246 - spin_unlock(&jrp->inplock); 246 + spin_unlock_bh(&jrp->inplock); 247 247 dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE); 248 248 return -EBUSY; 249 249 } ··· 265 265 266 266 wr_reg32(&jrp->rregs->inpring_jobadd, 1); 267 267 268 - spin_unlock(&jrp->inplock); 268 + spin_unlock_bh(&jrp->inplock); 269 269 270 270 return 0; 271 271 }
+2 -2
drivers/crypto/hifn_795x.c
··· 821 821 /* 822 822 * We must wait at least 256 Pk_clk cycles between two reads of the rng. 823 823 */ 824 - dev->rng_wait_time = DIV_ROUND_UP(NSEC_PER_SEC, dev->pk_clk_freq) * 825 - 256; 824 + dev->rng_wait_time = DIV_ROUND_UP_ULL(NSEC_PER_SEC, 825 + dev->pk_clk_freq) * 256; 826 826 827 827 dev->rng.name = dev->name; 828 828 dev->rng.data_present = hifn_rng_data_present,
+1 -1
drivers/gpio/Kconfig
··· 294 294 295 295 config GPIO_MC9S08DZ60 296 296 bool "MX35 3DS BOARD MC9S08DZ60 GPIO functions" 297 - depends on I2C && MACH_MX35_3DS 297 + depends on I2C=y && MACH_MX35_3DS 298 298 help 299 299 Select this to enable the MC9S08DZ60 GPIO driver 300 300
+2 -2
drivers/gpio/gpio-em.c
··· 247 247 248 248 p->irq_base = irq_alloc_descs(pdata->irq_base, 0, 249 249 pdata->number_of_pins, numa_node_id()); 250 - if (IS_ERR_VALUE(p->irq_base)) { 250 + if (p->irq_base < 0) { 251 251 dev_err(&pdev->dev, "cannot get irq_desc\n"); 252 - return -ENXIO; 252 + return p->irq_base; 253 253 } 254 254 pr_debug("gio: hw base = %d, nr = %d, sw base = %d\n", 255 255 pdata->gpio_base, pdata->number_of_pins, p->irq_base);
+1
drivers/gpio/gpio-rdc321x.c
··· 170 170 rdc321x_gpio_dev->reg2_data_base = r->start + 0x4; 171 171 172 172 rdc321x_gpio_dev->chip.label = "rdc321x-gpio"; 173 + rdc321x_gpio_dev->chip.owner = THIS_MODULE; 173 174 rdc321x_gpio_dev->chip.direction_input = rdc_gpio_direction_input; 174 175 rdc321x_gpio_dev->chip.direction_output = rdc_gpio_config; 175 176 rdc321x_gpio_dev->chip.get = rdc_gpio_get_value;
+1 -1
drivers/gpio/gpiolib-of.c
··· 82 82 gpiochip_find(&gg_data, of_gpiochip_find_and_xlate); 83 83 84 84 of_node_put(gg_data.gpiospec.np); 85 - pr_debug("%s exited with status %d\n", __func__, ret); 85 + pr_debug("%s exited with status %d\n", __func__, gg_data.out_gpio); 86 86 return gg_data.out_gpio; 87 87 } 88 88 EXPORT_SYMBOL(of_get_named_gpio_flags);
+1 -1
drivers/gpu/drm/drm_crtc.c
··· 1981 1981 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 1982 1982 return -EINVAL; 1983 1983 1984 - if (!req->flags) 1984 + if (!req->flags || (~DRM_MODE_CURSOR_FLAGS & req->flags)) 1985 1985 return -EINVAL; 1986 1986 1987 1987 mutex_lock(&dev->mode_config.mutex);
+3
drivers/gpu/drm/drm_edid.c
··· 87 87 int product_id; 88 88 u32 quirks; 89 89 } edid_quirk_list[] = { 90 + /* ASUS VW222S */ 91 + { "ACI", 0x22a2, EDID_QUIRK_FORCE_REDUCED_BLANKING }, 92 + 90 93 /* Acer AL1706 */ 91 94 { "ACR", 44358, EDID_QUIRK_PREFER_LARGE_60 }, 92 95 /* Acer F51 */
+3
drivers/gpu/drm/gma500/psb_intel_display.c
··· 1362 1362 (struct drm_connector **) (psb_intel_crtc + 1); 1363 1363 psb_intel_crtc->mode_set.num_connectors = 0; 1364 1364 psb_intel_cursor_init(dev, psb_intel_crtc); 1365 + 1366 + /* Set to true so that the pipe is forced off on initial config. */ 1367 + psb_intel_crtc->active = true; 1365 1368 } 1366 1369 1367 1370 int psb_intel_get_pipe_from_crtc_id(struct drm_device *dev, void *data,
+1 -1
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 72 72 /* ppgtt PDEs reside in the global gtt pagetable, which has 512*1024 73 73 * entries. For aliasing ppgtt support we just steal them at the end for 74 74 * now. */ 75 - first_pd_entry_in_global_pt = 512*1024 - I915_PPGTT_PD_ENTRIES; 75 + first_pd_entry_in_global_pt = dev_priv->mm.gtt->gtt_total_entries - I915_PPGTT_PD_ENTRIES; 76 76 77 77 ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL); 78 78 if (!ppgtt)
+6 -6
drivers/gpu/drm/i915/intel_display.c
··· 1384 1384 enum pipe pipe, int reg) 1385 1385 { 1386 1386 u32 val = I915_READ(reg); 1387 - WARN(hdmi_pipe_enabled(dev_priv, val, pipe), 1387 + WARN(hdmi_pipe_enabled(dev_priv, pipe, val), 1388 1388 "PCH HDMI (0x%08x) enabled on transcoder %c, should be disabled\n", 1389 1389 reg, pipe_name(pipe)); 1390 1390 ··· 1404 1404 1405 1405 reg = PCH_ADPA; 1406 1406 val = I915_READ(reg); 1407 - WARN(adpa_pipe_enabled(dev_priv, val, pipe), 1407 + WARN(adpa_pipe_enabled(dev_priv, pipe, val), 1408 1408 "PCH VGA enabled on transcoder %c, should be disabled\n", 1409 1409 pipe_name(pipe)); 1410 1410 1411 1411 reg = PCH_LVDS; 1412 1412 val = I915_READ(reg); 1413 - WARN(lvds_pipe_enabled(dev_priv, val, pipe), 1413 + WARN(lvds_pipe_enabled(dev_priv, pipe, val), 1414 1414 "PCH LVDS enabled on transcoder %c, should be disabled\n", 1415 1415 pipe_name(pipe)); 1416 1416 ··· 1872 1872 enum pipe pipe, int reg) 1873 1873 { 1874 1874 u32 val = I915_READ(reg); 1875 - if (hdmi_pipe_enabled(dev_priv, val, pipe)) { 1875 + if (hdmi_pipe_enabled(dev_priv, pipe, val)) { 1876 1876 DRM_DEBUG_KMS("Disabling pch HDMI %x on pipe %d\n", 1877 1877 reg, pipe); 1878 1878 I915_WRITE(reg, val & ~PORT_ENABLE); ··· 1894 1894 1895 1895 reg = PCH_ADPA; 1896 1896 val = I915_READ(reg); 1897 - if (adpa_pipe_enabled(dev_priv, val, pipe)) 1897 + if (adpa_pipe_enabled(dev_priv, pipe, val)) 1898 1898 I915_WRITE(reg, val & ~ADPA_DAC_ENABLE); 1899 1899 1900 1900 reg = PCH_LVDS; 1901 1901 val = I915_READ(reg); 1902 - if (lvds_pipe_enabled(dev_priv, val, pipe)) { 1902 + if (lvds_pipe_enabled(dev_priv, pipe, val)) { 1903 1903 DRM_DEBUG_KMS("disable lvds on pipe %d val 0x%08x\n", pipe, val); 1904 1904 I915_WRITE(reg, val & ~LVDS_PORT_EN); 1905 1905 POSTING_READ(reg);
+8
drivers/gpu/drm/i915/intel_lvds.c
··· 780 780 DMI_MATCH(DMI_BOARD_NAME, "ZBOXSD-ID12/ID13"), 781 781 }, 782 782 }, 783 + { 784 + .callback = intel_no_lvds_dmi_callback, 785 + .ident = "Gigabyte GA-D525TUD", 786 + .matches = { 787 + DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."), 788 + DMI_MATCH(DMI_BOARD_NAME, "D525TUD"), 789 + }, 790 + }, 783 791 784 792 { } /* terminating entry */ 785 793 };
+2 -2
drivers/gpu/drm/i915/intel_sprite.c
··· 60 60 61 61 switch (fb->pixel_format) { 62 62 case DRM_FORMAT_XBGR8888: 63 - sprctl |= SPRITE_FORMAT_RGBX888; 63 + sprctl |= SPRITE_FORMAT_RGBX888 | SPRITE_RGB_ORDER_RGBX; 64 64 pixel_size = 4; 65 65 break; 66 66 case DRM_FORMAT_XRGB8888: 67 - sprctl |= SPRITE_FORMAT_RGBX888 | SPRITE_RGB_ORDER_RGBX; 67 + sprctl |= SPRITE_FORMAT_RGBX888; 68 68 pixel_size = 4; 69 69 break; 70 70 case DRM_FORMAT_YUYV:
+4 -2
drivers/gpu/drm/nouveau/nouveau_state.c
··· 736 736 } 737 737 break; 738 738 case NV_C0: 739 - nvc0_copy_create(dev, 1); 739 + if (!(nv_rd32(dev, 0x022500) & 0x00000200)) 740 + nvc0_copy_create(dev, 1); 740 741 case NV_D0: 741 - nvc0_copy_create(dev, 0); 742 + if (!(nv_rd32(dev, 0x022500) & 0x00000100)) 743 + nvc0_copy_create(dev, 0); 742 744 break; 743 745 default: 744 746 break;
+16 -20
drivers/gpu/drm/radeon/atombios_crtc.c
··· 258 258 radeon_crtc->enabled = true; 259 259 /* adjust pm to dpms changes BEFORE enabling crtcs */ 260 260 radeon_pm_compute_clocks(rdev); 261 - /* disable crtc pair power gating before programming */ 262 261 if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set) 263 262 atombios_powergate_crtc(crtc, ATOM_DISABLE); 264 263 atombios_enable_crtc(crtc, ATOM_ENABLE); ··· 277 278 atombios_enable_crtc_memreq(crtc, ATOM_DISABLE); 278 279 atombios_enable_crtc(crtc, ATOM_DISABLE); 279 280 radeon_crtc->enabled = false; 280 - /* power gating is per-pair */ 281 - if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set) { 282 - struct drm_crtc *other_crtc; 283 - struct radeon_crtc *other_radeon_crtc; 284 - list_for_each_entry(other_crtc, &rdev->ddev->mode_config.crtc_list, head) { 285 - other_radeon_crtc = to_radeon_crtc(other_crtc); 286 - if (((radeon_crtc->crtc_id == 0) && (other_radeon_crtc->crtc_id == 1)) || 287 - ((radeon_crtc->crtc_id == 1) && (other_radeon_crtc->crtc_id == 0)) || 288 - ((radeon_crtc->crtc_id == 2) && (other_radeon_crtc->crtc_id == 3)) || 289 - ((radeon_crtc->crtc_id == 3) && (other_radeon_crtc->crtc_id == 2)) || 290 - ((radeon_crtc->crtc_id == 4) && (other_radeon_crtc->crtc_id == 5)) || 291 - ((radeon_crtc->crtc_id == 5) && (other_radeon_crtc->crtc_id == 4))) { 292 - /* if both crtcs in the pair are off, enable power gating */ 293 - if (other_radeon_crtc->enabled == false) 294 - atombios_powergate_crtc(crtc, ATOM_ENABLE); 295 - break; 296 - } 297 - } 298 - } 281 + if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set) 282 + atombios_powergate_crtc(crtc, ATOM_ENABLE); 299 283 /* adjust pm to dpms changes AFTER disabling crtcs */ 300 284 radeon_pm_compute_clocks(rdev); 301 285 break; ··· 1664 1682 struct drm_device *dev = crtc->dev; 1665 1683 struct radeon_device *rdev = dev->dev_private; 1666 1684 struct radeon_atom_ss ss; 1685 + int i; 1667 1686 1668 1687 atombios_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 1688 + 1689 + for (i = 0; i < rdev->num_crtc; i++) { 1690 + if (rdev->mode_info.crtcs[i] && 1691 + rdev->mode_info.crtcs[i]->enabled && 1692 + i != radeon_crtc->crtc_id && 1693 + radeon_crtc->pll_id == rdev->mode_info.crtcs[i]->pll_id) { 1694 + /* one other crtc is using this pll don't turn 1695 + * off the pll 1696 + */ 1697 + goto done; 1698 + } 1699 + } 1669 1700 1670 1701 switch (radeon_crtc->pll_id) { 1671 1702 case ATOM_PPLL1: ··· 1696 1701 default: 1697 1702 break; 1698 1703 } 1704 + done: 1699 1705 radeon_crtc->pll_id = -1; 1700 1706 }
+12 -17
drivers/gpu/drm/radeon/atombios_dp.c
··· 577 577 struct radeon_device *rdev = dev->dev_private; 578 578 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 579 579 int panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 580 + u16 dp_bridge = radeon_connector_encoder_get_dp_bridge_encoder_id(connector); 581 + u8 tmp; 580 582 581 583 if (!ASIC_IS_DCE4(rdev)) 582 584 return panel_mode; 583 585 584 - if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == 585 - ENCODER_OBJECT_ID_NUTMEG) 586 - panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; 587 - else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == 588 - ENCODER_OBJECT_ID_TRAVIS) { 589 - u8 id[6]; 590 - int i; 591 - for (i = 0; i < 6; i++) 592 - id[i] = radeon_read_dpcd_reg(radeon_connector, 0x503 + i); 593 - if (id[0] == 0x73 && 594 - id[1] == 0x69 && 595 - id[2] == 0x76 && 596 - id[3] == 0x61 && 597 - id[4] == 0x72 && 598 - id[5] == 0x54) 586 + if (dp_bridge != ENCODER_OBJECT_ID_NONE) { 587 + /* DP bridge chips */ 588 + tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 589 + if (tmp & 1) 590 + panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 591 + else if ((dp_bridge == ENCODER_OBJECT_ID_NUTMEG) || 592 + (dp_bridge == ENCODER_OBJECT_ID_TRAVIS)) 599 593 panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; 600 594 else 601 - panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 595 + panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 602 596 } else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 603 - u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 597 + /* eDP */ 598 + tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 604 599 if (tmp & 1) 605 600 panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 606 601 }
+73 -67
drivers/gpu/drm/radeon/atombios_encoders.c
··· 1379 1379 struct drm_device *dev = encoder->dev; 1380 1380 struct radeon_device *rdev = dev->dev_private; 1381 1381 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1382 + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); 1383 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 1382 1384 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1383 1385 struct radeon_connector *radeon_connector = NULL; 1384 1386 struct radeon_connector_atom_dig *radeon_dig_connector = NULL; ··· 1392 1390 1393 1391 switch (mode) { 1394 1392 case DRM_MODE_DPMS_ON: 1395 - /* some early dce3.2 boards have a bug in their transmitter control table */ 1396 - if ((rdev->family == CHIP_RV710) || (rdev->family == CHIP_RV730) || 1397 - ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1398 - if (ASIC_IS_DCE6(rdev)) { 1399 - /* It seems we need to call ATOM_ENCODER_CMD_SETUP again 1400 - * before reenabling encoder on DPMS ON, otherwise we never 1401 - * get picture 1402 - */ 1403 - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1393 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1394 + if (!connector) 1395 + dig->panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 1396 + else 1397 + dig->panel_mode = radeon_dp_get_panel_mode(encoder, connector); 1398 + 1399 + /* setup and enable the encoder */ 1400 + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1401 + atombios_dig_encoder_setup(encoder, 1402 + ATOM_ENCODER_CMD_SETUP_PANEL_MODE, 1403 + dig->panel_mode); 1404 + if (ext_encoder) { 1405 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev)) 1406 + atombios_external_encoder_setup(encoder, ext_encoder, 1407 + EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP); 1404 1408 } 1405 1409 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1406 - } else { 1410 + } else if (ASIC_IS_DCE4(rdev)) { 1411 + /* setup and enable the encoder */ 1412 + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1413 + /* enable the transmitter */ 1414 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1407 1415 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); 1416 + } else { 1417 + /* setup and enable the encoder and transmitter */ 1418 + atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0); 1419 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); 1420 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1421 + /* some early dce3.2 boards have a bug in their transmitter control table */ 1422 + if ((rdev->family != CHIP_RV710) || (rdev->family != CHIP_RV730)) 1423 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); 1408 1424 } 1409 1425 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) { 1410 1426 if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { ··· 1440 1420 case DRM_MODE_DPMS_STANDBY: 1441 1421 case DRM_MODE_DPMS_SUSPEND: 1442 1422 case DRM_MODE_DPMS_OFF: 1443 - if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) 1423 - if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1424 + /* disable the transmitter */ 1444 1425 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1445 - else 1426 + } else if (ASIC_IS_DCE4(rdev)) { 1427 + /* disable the transmitter */ 1446 1428 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); 1429 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1430 + } else { 1431 + /* disable the encoder and transmitter */ 1432 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); 1433 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1434 + atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); 1435 + } 1447 1436 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) { 1448 1437 if (ASIC_IS_DCE4(rdev)) 1449 1438 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); ··· 1769 1740 struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); 1770 1741 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1771 1742 struct drm_encoder *test_encoder; 1772 - struct radeon_encoder_atom_dig *dig; 1743 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 1773 1744 uint32_t dig_enc_in_use = 0; 1774 1745 1775 - /* DCE4/5 */ 1776 - if (ASIC_IS_DCE4(rdev)) { 1777 - dig = radeon_encoder->enc_priv; 1778 - if (ASIC_IS_DCE41(rdev)) { 1746 + if (ASIC_IS_DCE6(rdev)) { 1747 + /* DCE6 */ 1748 + switch (radeon_encoder->encoder_id) { 1749 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 1750 + if (dig->linkb) 1751 + return 1; 1752 + else 1753 + return 0; 1754 + break; 1755 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 1756 + if (dig->linkb) 1757 + return 3; 1758 + else 1759 + return 2; 1760 + break; 1761 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 1762 + if (dig->linkb) 1763 + return 5; 1764 + else 1765 + return 4; 1766 + break; 1767 + } 1768 + } else if (ASIC_IS_DCE4(rdev)) { 1769 + /* DCE4/5 */ 1770 + if (ASIC_IS_DCE41(rdev) && !ASIC_IS_DCE61(rdev)) { 1779 1771 /* ontario follows DCE4 */ 1780 1772 if (rdev->family == CHIP_PALM) { 1781 1773 if (dig->linkb) ··· 1898 1848 struct drm_device *dev = encoder->dev; 1899 1849 struct radeon_device *rdev = dev->dev_private; 1900 1850 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1901 - struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); 1902 1851 1903 1852 radeon_encoder->pixel_clock = adjusted_mode->clock; 1853 + 1854 + /* need to call this here rather than in prepare() since we need some crtc info */ 1855 + radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); 1904 1856 1905 1857 if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE4(rdev)) { 1906 1858 if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT | ATOM_DEVICE_TV_SUPPORT)) ··· 1922 1870 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 1923 1871 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 1924 1872 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 1925 - if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1926 - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1927 - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 1928 - 1929 - if (!connector) 1930 - dig->panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 1931 - else 1932 - dig->panel_mode = radeon_dp_get_panel_mode(encoder, connector); 1933 - 1934 - /* setup and enable the encoder */ 1935 - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1936 - atombios_dig_encoder_setup(encoder, 1937 - ATOM_ENCODER_CMD_SETUP_PANEL_MODE, 1938 - dig->panel_mode); 1939 - } else if (ASIC_IS_DCE4(rdev)) { 1940 - /* disable the transmitter */ 1941 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1942 - /* setup and enable the encoder */ 1943 - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1944 - 1945 - /* enable the transmitter */ 1946 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1947 - } else { 1948 - /* disable the encoder and transmitter */ 1949 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1950 - atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); 1951 - 1952 - /* setup and enable the encoder and transmitter */ 1953 - atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0); 1954 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); 1955 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1956 - } 1873 + /* handled in dpms */ 1957 1874 break; 1958 1875 case ENCODER_OBJECT_ID_INTERNAL_DDI: 1959 1876 case ENCODER_OBJECT_ID_INTERNAL_DVO1: ··· 1941 1920 atombios_tv_setup(encoder, ATOM_DISABLE); 1942 1921 } 1943 1922 break; 1944 - } 1945 - 1946 - if (ext_encoder) { 1947 - if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev)) 1948 - atombios_external_encoder_setup(encoder, ext_encoder, 1949 - EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP); 1950 - else 1951 - atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); 1952 1923 } 1953 1924 1954 1925 atombios_apply_encoder_quirks(encoder, adjusted_mode); ··· 2129 2116 } 2130 2117 2131 2118 radeon_atom_output_lock(encoder, true); 2132 - radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); 2133 2119 2134 2120 if (connector) { 2135 2121 struct radeon_connector *radeon_connector = to_radeon_connector(connector); ··· 2149 2137 2150 2138 static void radeon_atom_encoder_commit(struct drm_encoder *encoder) 2151 2139 { 2140 + /* need to call this here as we need the crtc set up */ 2152 2141 radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_ON); 2153 2142 radeon_atom_output_lock(encoder, false); 2154 2143 } ··· 2190 2177 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 2191 2178 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 2192 2179 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 2193 - if (ASIC_IS_DCE4(rdev)) 2194 - /* disable the transmitter */ 2195 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 2196 - else { 2197 - /* disable the encoder and transmitter */ 2198 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 2199 - atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); 2200 - } 2180 + /* handled in dpms */ 2201 2181 break; 2202 2182 case ENCODER_OBJECT_ID_INTERNAL_DDI: 2203 2183 case ENCODER_OBJECT_ID_INTERNAL_DVO1:
+25 -3
drivers/gpu/drm/radeon/r600_cs.c
··· 63 63 u32 cb_color_size_idx[8]; /* unused */ 64 64 u32 cb_target_mask; 65 65 u32 cb_shader_mask; /* unused */ 66 + bool is_resolve; 66 67 u32 cb_color_size[8]; 67 68 u32 vgt_strmout_en; 68 69 u32 vgt_strmout_buffer_en; ··· 316 315 track->cb_color_bo[i] = NULL; 317 316 track->cb_color_bo_offset[i] = 0xFFFFFFFF; 318 317 track->cb_color_bo_mc[i] = 0xFFFFFFFF; 318 + track->cb_color_frag_bo[i] = NULL; 319 + track->cb_color_frag_offset[i] = 0xFFFFFFFF; 320 + track->cb_color_tile_bo[i] = NULL; 321 + track->cb_color_tile_offset[i] = 0xFFFFFFFF; 322 + track->cb_color_mask[i] = 0xFFFFFFFF; 319 323 } 324 + track->is_resolve = false; 325 + track->nsamples = 16; 326 + track->log_nsamples = 4; 320 327 track->cb_target_mask = 0xFFFFFFFF; 321 328 track->cb_shader_mask = 0xFFFFFFFF; 322 329 track->cb_dirty = true; ··· 361 352 volatile u32 *ib = p->ib.ptr; 362 353 unsigned array_mode; 363 354 u32 format; 355 + /* When resolve is used, the second colorbuffer has always 1 sample. */ 356 + unsigned nsamples = track->is_resolve && i == 1 ? 1 : track->nsamples; 364 357 365 358 size = radeon_bo_size(track->cb_color_bo[i]) - track->cb_color_bo_offset[i]; 366 359 format = G_0280A0_FORMAT(track->cb_color_info[i]); ··· 386 375 array_check.group_size = track->group_size; 387 376 array_check.nbanks = track->nbanks; 388 377 array_check.npipes = track->npipes; 389 - array_check.nsamples = track->nsamples; 378 + array_check.nsamples = nsamples; 390 379 array_check.blocksize = r600_fmt_get_blocksize(format); 391 380 if (r600_get_array_mode_alignment(&array_check, 392 381 &pitch_align, &height_align, &depth_align, &base_align)) { ··· 432 421 433 422 /* check offset */ 434 423 tmp = r600_fmt_get_nblocksy(format, height) * r600_fmt_get_nblocksx(format, pitch) * 435 - r600_fmt_get_blocksize(format) * track->nsamples; 424 + r600_fmt_get_blocksize(format) * nsamples; 436 425 switch (array_mode) { 437 426 default: 438 427 case V_0280A0_ARRAY_LINEAR_GENERAL: ··· 803 792 */ 804 793 if (track->cb_dirty) { 805 794 tmp = track->cb_target_mask; 795 + 796 + /* We must check both colorbuffers for RESOLVE. */ 797 + if (track->is_resolve) { 798 + tmp |= 0xff; 799 + } 800 + 806 801 for (i = 0; i < 8; i++) { 807 802 if ((tmp >> (i * 4)) & 0xF) { 808 803 /* at least one component is enabled */ ··· 1298 1281 track->nsamples = 1 << tmp; 1299 1282 track->cb_dirty = true; 1300 1283 break; 1284 + case R_028808_CB_COLOR_CONTROL: 1285 + tmp = G_028808_SPECIAL_OP(radeon_get_ib_value(p, idx)); 1286 + track->is_resolve = tmp == V_028808_SPECIAL_RESOLVE_BOX; 1287 + track->cb_dirty = true; 1288 + break; 1289 + case R_0280A0_CB_COLOR0_INFO: 1302 1290 case R_0280A4_CB_COLOR1_INFO: 1303 1291 case R_0280A8_CB_COLOR2_INFO: ··· 1438 1416 case R_028118_CB_COLOR6_MASK: 1439 1417 case R_02811C_CB_COLOR7_MASK: 1440 1418 tmp = (reg - R_028100_CB_COLOR0_MASK) / 4; 1441 - track->cb_color_mask[tmp] = ib[idx]; 1419 + track->cb_color_mask[tmp] = radeon_get_ib_value(p, idx); 1442 1420 if (G_0280A0_TILE_MODE(track->cb_color_info[tmp])) { 1443 1421 track->cb_dirty = true; 1444 1422 }
+8
drivers/gpu/drm/radeon/r600d.h
··· 66 66 #define CC_RB_BACKEND_DISABLE 0x98F4 67 67 #define BACKEND_DISABLE(x) ((x) << 16) 68 68 69 + #define R_028808_CB_COLOR_CONTROL 0x28808 70 + #define S_028808_SPECIAL_OP(x) (((x) & 0x7) << 4) 71 + #define G_028808_SPECIAL_OP(x) (((x) >> 4) & 0x7) 72 + #define C_028808_SPECIAL_OP 0xFFFFFF8F 73 + #define V_028808_SPECIAL_NORMAL 0x00 74 + #define V_028808_SPECIAL_DISABLE 0x01 75 + #define V_028808_SPECIAL_RESOLVE_BOX 0x07 76 + 69 77 #define CB_COLOR0_BASE 0x28040 70 78 #define CB_COLOR1_BASE 0x28044 71 79 #define CB_COLOR2_BASE 0x28048
+4 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 1051 1051 if (rdev->flags & RADEON_IS_AGP) 1052 1052 rdev->need_dma32 = true; 1053 1053 if ((rdev->flags & RADEON_IS_PCI) && 1054 - (rdev->family < CHIP_RS400)) 1054 + (rdev->family <= CHIP_RS740)) 1055 1055 rdev->need_dma32 = true; 1056 1056 1057 1057 dma_bits = rdev->need_dma32 ? 32 : 40; ··· 1346 1346 for (i = 0; i < RADEON_NUM_RINGS; ++i) { 1347 1347 radeon_ring_restore(rdev, &rdev->ring[i], 1348 1348 ring_sizes[i], ring_data[i]); 1349 + ring_sizes[i] = 0; 1350 + ring_data[i] = NULL; 1349 1351 } 1350 1352 1351 1353 r = radeon_ib_ring_tests(rdev); 1352 1354 if (r) { 1353 1355 dev_err(rdev->dev, "ib ring test failed (%d).\n", r); 1354 1356 if (saved) { 1357 + saved = false; 1355 1358 radeon_suspend(rdev); 1356 1359 goto retry; 1357 1360 }
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 63 63 * 2.19.0 - r600-eg: MSAA textures 64 64 * 2.20.0 - r600-si: RADEON_INFO_TIMESTAMP query 65 65 * 2.21.0 - r600-r700: FMASK and CMASK 66 + * 2.22.0 - r600 only: RESOLVE_BOX allowed 66 67 */ 67 68 #define KMS_DRIVER_MAJOR 2 68 - #define KMS_DRIVER_MINOR 21 69 + #define KMS_DRIVER_MINOR 22 69 70 #define KMS_DRIVER_PATCHLEVEL 0 70 71 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 71 72 int radeon_driver_unload_kms(struct drm_device *dev);
-1
drivers/gpu/drm/radeon/reg_srcs/r600
··· 744 744 0x00028C38 CB_CLRCMP_DST 745 745 0x00028C3C CB_CLRCMP_MSK 746 746 0x00028C34 CB_CLRCMP_SRC 747 - 0x00028808 CB_COLOR_CONTROL 748 747 0x0002842C CB_FOG_BLUE 749 748 0x00028428 CB_FOG_GREEN 750 749 0x00028424 CB_FOG_RED
+5 -3
drivers/hid/hid-core.c
··· 996 996 struct hid_driver *hdrv = hid->driver; 997 997 int ret; 998 998 999 - hid_dump_input(hid, usage, value); 999 + if (!list_empty(&hid->debug_list)) 1000 + hid_dump_input(hid, usage, value); 1000 1001 1001 1002 if (hdrv && hdrv->event && hid_match_usage(hid, usage)) { 1002 1003 ret = hdrv->event(hid, field, usage, value); ··· 1559 1558 { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X) }, 1560 1559 { HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD) }, 1561 1560 { HID_USB_DEVICE(USB_VENDOR_ID_LCPOWER, USB_DEVICE_ID_LCPOWER_LC1000 ) }, 1562 - { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_TPKBD) }, 1561 + #if IS_ENABLED(CONFIG_HID_LENOVO_TPKBD) 1562 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_TPKBD) }, 1563 + #endif 1563 1564 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_MX3000_RECEIVER) }, 1564 1565 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_S510_RECEIVER) }, 1565 1566 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_S510_RECEIVER_2) }, ··· 1627 1624 { HID_USB_DEVICE(USB_VENDOR_ID_ORTEK, USB_DEVICE_ID_ORTEK_WKB2000) }, 1628 1625 { HID_USB_DEVICE(USB_VENDOR_ID_PETALYNX, USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE) }, 1629 1626 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_KEYBOARD) }, 1630 - { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_PIXART_IMAGING_INC_OPTICAL_TOUCH_SCREEN) }, 1631 1627 { HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_KONE) }, 1632 1628 { HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_ARVO) }, 1633 1629 { HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_ISKU) },
+2 -2
drivers/hid/hid-logitech-dj.c
··· 439 439 struct dj_report *dj_report; 440 440 int retval; 441 441 442 - dj_report = kzalloc(sizeof(dj_report), GFP_KERNEL); 442 + dj_report = kzalloc(sizeof(struct dj_report), GFP_KERNEL); 443 443 if (!dj_report) 444 444 return -ENOMEM; 445 445 dj_report->report_id = REPORT_ID_DJ_SHORT; ··· 456 456 struct dj_report *dj_report; 457 457 int retval; 458 458 459 - dj_report = kzalloc(sizeof(dj_report), GFP_KERNEL); 459 + dj_report = kzalloc(sizeof(struct dj_report), GFP_KERNEL); 460 460 if (!dj_report) 461 461 return -ENOMEM; 462 462 dj_report->report_id = REPORT_ID_DJ_SHORT;
+1
drivers/hid/usbhid/hid-quirks.c
··· 70 70 { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET }, 71 71 { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, 72 72 { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, 73 + { USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET }, 73 74 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS }, 74 75 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS }, 75 76 { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS },
+6
drivers/hwmon/asus_atk0110.c
··· 34 34 .matches = { 35 35 DMI_MATCH(DMI_BOARD_NAME, "SABERTOOTH X58") 36 36 } 37 + }, { 38 + /* Old interface reads the same sensor for fan0 and fan1 */ 39 + .ident = "Asus M5A78L", 40 + .matches = { 41 + DMI_MATCH(DMI_BOARD_NAME, "M5A78L") 42 + } 37 43 }, 38 44 { } 39 45 };
+2 -2
drivers/ide/ide-pm.c
··· 4 4 5 5 int generic_ide_suspend(struct device *dev, pm_message_t mesg) 6 6 { 7 - ide_drive_t *drive = dev_get_drvdata(dev); 7 + ide_drive_t *drive = to_ide_device(dev); 8 8 ide_drive_t *pair = ide_get_pair_dev(drive); 9 9 ide_hwif_t *hwif = drive->hwif; 10 10 struct request *rq; ··· 40 40 41 41 int generic_ide_resume(struct device *dev) 42 42 { 43 - ide_drive_t *drive = dev_get_drvdata(dev); 43 + ide_drive_t *drive = to_ide_device(dev); 44 44 ide_drive_t *pair = ide_get_pair_dev(drive); 45 45 ide_hwif_t *hwif = drive->hwif; 46 46 struct request *rq;
+3
drivers/input/keyboard/imx_keypad.c
··· 358 358 /* Inhibit KDI and KRI interrupts. */ 359 359 reg_val = readw(keypad->mmio_base + KPSR); 360 360 reg_val &= ~(KBD_STAT_KRIE | KBD_STAT_KDIE); 361 + reg_val |= KBD_STAT_KPKR | KBD_STAT_KPKD; 361 362 writew(reg_val, keypad->mmio_base + KPSR); 362 363 363 364 /* Colums as open drain and disable all rows */ ··· 516 515 input_set_drvdata(input_dev, keypad); 517 516 518 517 /* Ensure that the keypad will stay dormant until opened */ 518 + clk_enable(keypad->clk); 519 519 imx_keypad_inhibit(keypad); 520 + clk_disable(keypad->clk); 520 521 521 522 error = request_irq(irq, imx_keypad_irq_handler, 0, 522 523 pdev->name, keypad);
+14
drivers/input/serio/i8042-x86ia64io.h
··· 177 177 }, 178 178 }, 179 179 { 180 + /* Gigabyte T1005 - defines wrong chassis type ("Other") */ 181 + .matches = { 182 + DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"), 183 + DMI_MATCH(DMI_PRODUCT_NAME, "T1005"), 184 + }, 185 + }, 186 + { 187 + /* Gigabyte T1005M/P - defines wrong chassis type ("Other") */ 188 + .matches = { 189 + DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"), 190 + DMI_MATCH(DMI_PRODUCT_NAME, "T1005M/P"), 191 + }, 192 + }, 193 + { 180 194 .matches = { 181 195 DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"), 182 196 DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion dv9700"),
+5 -1
drivers/input/tablet/wacom_wac.c
··· 1848 1848 { "Wacom Intuos5 M", WACOM_PKGLEN_INTUOS, 44704, 27940, 2047, 1849 1849 63, INTUOS5, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES }; 1850 1850 static const struct wacom_features wacom_features_0xF4 = 1851 - { "Wacom Cintiq 24HD", WACOM_PKGLEN_INTUOS, 104480, 65600, 2047, 1851 + { "Wacom Cintiq 24HD", WACOM_PKGLEN_INTUOS, 104480, 65600, 2047, 1852 + 63, WACOM_24HD, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES }; 1853 + static const struct wacom_features wacom_features_0xF8 = 1854 + { "Wacom Cintiq 24HD touch", WACOM_PKGLEN_INTUOS, 104480, 65600, 2047, 1852 1855 63, WACOM_24HD, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES }; 1853 1856 static const struct wacom_features wacom_features_0x3F = 1854 1857 { "Wacom Cintiq 21UX", WACOM_PKGLEN_INTUOS, 87200, 65600, 1023, ··· 2094 2091 { USB_DEVICE_WACOM(0xEF) }, 2095 2092 { USB_DEVICE_WACOM(0x47) }, 2096 2093 { USB_DEVICE_WACOM(0xF4) }, 2094 + { USB_DEVICE_WACOM(0xF8) }, 2097 2095 { USB_DEVICE_WACOM(0xFA) }, 2098 2096 { USB_DEVICE_LENOVO(0x6004) }, 2099 2097 { }
+1 -1
drivers/input/touchscreen/edt-ft5x06.c
··· 602 602 { 603 603 if (tsdata->debug_dir) 604 604 debugfs_remove_recursive(tsdata->debug_dir); 605 + kfree(tsdata->raw_buffer); 605 606 } 606 607 607 608 #else ··· 844 843 if (gpio_is_valid(pdata->reset_pin)) 845 844 gpio_free(pdata->reset_pin); 846 845 847 - kfree(tsdata->raw_buffer); 848 846 kfree(tsdata); 849 847 850 848 return 0;
+25 -1
drivers/mmc/card/block.c
··· 1411 1411 /* complete ongoing async transfer before issuing discard */ 1412 1412 if (card->host->areq) 1413 1413 mmc_blk_issue_rw_rq(mq, NULL); 1414 - if (req->cmd_flags & REQ_SECURE) 1414 + if (req->cmd_flags & REQ_SECURE && 1415 + !(card->quirks & MMC_QUIRK_SEC_ERASE_TRIM_BROKEN)) 1415 1416 ret = mmc_blk_issue_secdiscard_rq(mq, req); 1416 1417 else 1417 1418 ret = mmc_blk_issue_discard_rq(mq, req); ··· 1717 1716 #define CID_MANFID_SANDISK 0x2 1718 1717 #define CID_MANFID_TOSHIBA 0x11 1719 1718 #define CID_MANFID_MICRON 0x13 1719 + #define CID_MANFID_SAMSUNG 0x15 1720 1720 1721 1721 static const struct mmc_fixup blk_fixups[] = 1722 1722 { ··· 1753 1751 */ 1754 1752 MMC_FIXUP(CID_NAME_ANY, CID_MANFID_MICRON, 0x200, add_quirk_mmc, 1755 1753 MMC_QUIRK_LONG_READ_TIME), 1754 + 1755 + /* 1756 + * On these Samsung MoviNAND parts, performing secure erase or 1757 + * secure trim can result in unrecoverable corruption due to a 1758 + * firmware bug. 1759 + */ 1760 + MMC_FIXUP("M8G2FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1761 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1762 + MMC_FIXUP("MAG4FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1763 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1764 + MMC_FIXUP("MBG8FA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1765 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1766 + MMC_FIXUP("MCGAFA", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1767 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1768 + MMC_FIXUP("VAL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1769 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1770 + MMC_FIXUP("VYL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1771 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1772 + MMC_FIXUP("KYL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1773 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1774 + MMC_FIXUP("VZL00M", CID_MANFID_SAMSUNG, CID_OEMID_ANY, add_quirk_mmc, 1775 + MMC_QUIRK_SEC_ERASE_TRIM_BROKEN), 1756 1776 1757 1777 END_FIXUP 1758 1778 };
+5 -1
drivers/mmc/host/atmel-mci.c
··· 81 81 bool has_bad_data_ordering; 82 82 bool need_reset_after_xfer; 83 83 bool need_blksz_mul_4; 84 + bool need_notbusy_for_read_ops; 84 85 }; 85 86 86 87 struct atmel_mci_dma { ··· 1626 1625 __func__); 1627 1626 atmci_set_completed(host, EVENT_XFER_COMPLETE); 1628 1627 1629 - if (host->data->flags & MMC_DATA_WRITE) { 1628 + if (host->caps.need_notbusy_for_read_ops || 1629 + (host->data->flags & MMC_DATA_WRITE)) { 1630 1630 atmci_writel(host, ATMCI_IER, ATMCI_NOTBUSY); 1631 1631 state = STATE_WAITING_NOTBUSY; 1632 1632 } else if (host->mrq->stop) { ··· 2220 2218 host->caps.has_bad_data_ordering = 1; 2221 2219 host->caps.need_reset_after_xfer = 1; 2222 2220 host->caps.need_blksz_mul_4 = 1; 2221 + host->caps.need_notbusy_for_read_ops = 0; 2223 2222 2224 2223 /* keep only major version number */ 2225 2224 switch (version & 0xf00) { ··· 2241 2238 case 0x200: 2242 2239 host->caps.has_rwproof = 1; 2243 2240 host->caps.need_blksz_mul_4 = 0; 2241 + host->caps.need_notbusy_for_read_ops = 1; 2244 2242 case 0x100: 2245 2243 host->caps.has_bad_data_ordering = 0; 2246 2244 host->caps.need_reset_after_xfer = 0;
-7
drivers/mmc/host/bfin_sdh.c
··· 49 49 #define bfin_write_SDH_CFG bfin_write_RSI_CFG 50 50 #endif 51 51 52 - struct dma_desc_array { 53 - unsigned long start_addr; 54 - unsigned short cfg; 55 - unsigned short x_count; 56 - short x_modify; 57 - } __packed; 58 - 59 52 struct sdh_host { 60 53 struct mmc_host *mmc; 61 54 spinlock_t lock;
+46 -39
drivers/mmc/host/dw_mmc.c
··· 627 627 {
628 628 struct dw_mci *host = slot->host;
629 629 u32 div;
630 + u32 clk_en_a;
630 631
631 632 if (slot->clock != host->current_speed) {
632 633 div = host->bus_hz / slot->clock;
··· 660 659 mci_send_cmd(slot,
661 660 SDMMC_CMD_UPD_CLK | SDMMC_CMD_PRV_DAT_WAIT, 0);
662 661
663 - /* enable clock */
664 - mci_writel(host, CLKENA, ((SDMMC_CLKEN_ENABLE |
665 - SDMMC_CLKEN_LOW_PWR) << slot->id));
662 + /* enable clock; only low power if no SDIO */
663 + clk_en_a = SDMMC_CLKEN_ENABLE << slot->id;
664 + if (!(mci_readl(host, INTMASK) & SDMMC_INT_SDIO(slot->id)))
665 + clk_en_a |= SDMMC_CLKEN_LOW_PWR << slot->id;
666 + mci_writel(host, CLKENA, clk_en_a);
666 667
667 668 /* inform CIU */
668 669 mci_send_cmd(slot,
··· 865 862 return present;
866 863 }
867 864
865 + /*
866 + * Disable lower power mode.
867 + *
868 + * Low power mode will stop the card clock when idle. According to the
869 + * description of the CLKENA register we should disable low power mode
870 + * for SDIO cards if we need SDIO interrupts to work.
871 + *
872 + * This function is fast if low power mode is already disabled.
873 + */
874 + static void dw_mci_disable_low_power(struct dw_mci_slot *slot)
875 + {
876 + struct dw_mci *host = slot->host;
877 + u32 clk_en_a;
878 + const u32 clken_low_pwr = SDMMC_CLKEN_LOW_PWR << slot->id;
879 +
880 + clk_en_a = mci_readl(host, CLKENA);
881 +
882 + if (clk_en_a & clken_low_pwr) {
883 + mci_writel(host, CLKENA, clk_en_a & ~clken_low_pwr);
884 + mci_send_cmd(slot, SDMMC_CMD_UPD_CLK |
885 + SDMMC_CMD_PRV_DAT_WAIT, 0);
886 + }
887 + }
888 +
868 889 static void dw_mci_enable_sdio_irq(struct mmc_host *mmc, int enb)
869 890 {
870 891 struct dw_mci_slot *slot = mmc_priv(mmc);
··· 898 871 /* Enable/disable Slot Specific SDIO interrupt */
899 872 int_mask = mci_readl(host, INTMASK);
900 873 if (enb) {
874 + /*
875 + * Turn off low power mode if it was enabled. This is a bit of
876 + * a heavy operation and we disable / enable IRQs a lot, so
877 + * we'll leave low power mode disabled and it will get
878 + * re-enabled again in dw_mci_setup_bus().
879 + */
880 + dw_mci_disable_low_power(slot);
881 +
901 882 mci_writel(host, INTMASK,
902 883 (int_mask | SDMMC_INT_SDIO(slot->id)));
903 884 } else {
··· 1464 1429 nbytes += len;
1465 1430 remain -= len;
1466 1431 } while (remain);
1467 - sg_miter->consumed = offset;
1468 1432
1433 + sg_miter->consumed = offset;
1469 1434 status = mci_readl(host, MINTSTS);
1470 1435 mci_writel(host, RINTSTS, SDMMC_INT_RXDR);
1471 - if (status & DW_MCI_DATA_ERROR_FLAGS) {
1472 - host->data_status = status;
1473 - data->bytes_xfered += nbytes;
1474 - sg_miter_stop(sg_miter);
1475 - host->sg = NULL;
1476 - smp_wmb();
1477 -
1478 - set_bit(EVENT_DATA_ERROR, &host->pending_events);
1479 -
1480 - tasklet_schedule(&host->tasklet);
1481 - return;
1482 - }
1483 1436 } while (status & SDMMC_INT_RXDR); /*if the RXDR is ready read again*/
1484 1437 data->bytes_xfered += nbytes;
1485 1438
··· 1520 1497 nbytes += len;
1521 1498 remain -= len;
1522 1499 } while (remain);
1523 - sg_miter->consumed = offset;
1524 1500
1501 + sg_miter->consumed = offset;
1525 1502 status = mci_readl(host, MINTSTS);
1526 1503 mci_writel(host, RINTSTS, SDMMC_INT_TXDR);
1527 - if (status & DW_MCI_DATA_ERROR_FLAGS) {
1528 - host->data_status = status;
1529 - data->bytes_xfered += nbytes;
1530 - sg_miter_stop(sg_miter);
1531 - host->sg = NULL;
1532 -
1533 - smp_wmb();
1534 -
1535 - set_bit(EVENT_DATA_ERROR, &host->pending_events);
1536 -
1537 - tasklet_schedule(&host->tasklet);
1538 - return;
1539 - }
1540 1504 } while (status & SDMMC_INT_TXDR); /* if TXDR write again */
1541 1505 data->bytes_xfered += nbytes;
1542 1506
··· 1557 1547 static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
1558 1548 {
1559 1549 struct dw_mci *host = dev_id;
1560 - u32 status, pending;
1550 + u32 pending;
1561 1551 unsigned int pass_count = 0;
1562 1552 int i;
1563 1553
1564 1554 do {
1565 - status = mci_readl(host, RINTSTS);
1566 1555 pending = mci_readl(host, MINTSTS); /* read-only mask reg */
1567 1556
1568 1557 /*
··· 1579 1570
1580 1571 if (pending & DW_MCI_CMD_ERROR_FLAGS) {
1581 1572 mci_writel(host, RINTSTS, DW_MCI_CMD_ERROR_FLAGS);
1582 - host->cmd_status = status;
1573 + host->cmd_status = pending;
1583 1574 smp_wmb();
1584 1575 set_bit(EVENT_CMD_COMPLETE, &host->pending_events);
1585 1576 }
··· 1587 1578 if (pending & DW_MCI_DATA_ERROR_FLAGS) {
1588 1579 /* if there is an error report DATA_ERROR */
1589 1580 mci_writel(host, RINTSTS, DW_MCI_DATA_ERROR_FLAGS);
1590 - host->data_status = status;
1581 + host->data_status = pending;
1591 1582 smp_wmb();
1592 1583 set_bit(EVENT_DATA_ERROR, &host->pending_events);
1593 - if (!(pending & (SDMMC_INT_DTO | SDMMC_INT_DCRC |
1594 - SDMMC_INT_SBE | SDMMC_INT_EBE)))
1595 - tasklet_schedule(&host->tasklet);
1584 + tasklet_schedule(&host->tasklet);
1596 1585 }
1597 1586
1598 1587 if (pending & SDMMC_INT_DATA_OVER) {
1599 1588 mci_writel(host, RINTSTS, SDMMC_INT_DATA_OVER);
1600 1589 if (!host->data_status)
1601 - host->data_status = status;
1590 + host->data_status = pending;
1602 1591 smp_wmb();
1603 1592 if (host->dir_status == DW_MCI_RECV_STATUS) {
1604 1593 if (host->sg != NULL)
··· 1620 1613
1621 1614 if (pending & SDMMC_INT_CMD_DONE) {
1622 1615 mci_writel(host, RINTSTS, SDMMC_INT_CMD_DONE);
1623 - dw_mci_cmd_interrupt(host, status);
1616 + dw_mci_cmd_interrupt(host, pending);
1624 1617 }
1625 1618
1626 1619 if (pending & SDMMC_INT_CD) {
+7 -7
drivers/mmc/host/mxs-mmc.c
··· 285 285 writel(stat & MXS_MMC_IRQ_BITS, 286 286 host->base + HW_SSP_CTRL1(host) + STMP_OFFSET_REG_CLR); 287 287 288 + spin_unlock(&host->lock); 289 + 288 290 if ((stat & BM_SSP_CTRL1_SDIO_IRQ) && (stat & BM_SSP_CTRL1_SDIO_IRQ_EN)) 289 291 mmc_signal_sdio_irq(host->mmc); 290 - 291 - spin_unlock(&host->lock); 292 292 293 293 if (stat & BM_SSP_CTRL1_RESP_TIMEOUT_IRQ) 294 294 cmd->error = -ETIMEDOUT; ··· 644 644 host->base + HW_SSP_CTRL0 + STMP_OFFSET_REG_SET); 645 645 writel(BM_SSP_CTRL1_SDIO_IRQ_EN, 646 646 host->base + HW_SSP_CTRL1(host) + STMP_OFFSET_REG_SET); 647 - 648 - if (readl(host->base + HW_SSP_STATUS(host)) & 649 - BM_SSP_STATUS_SDIO_IRQ) 650 - mmc_signal_sdio_irq(host->mmc); 651 - 652 647 } else { 653 648 writel(BM_SSP_CTRL0_SDIO_IRQ_CHECK, 654 649 host->base + HW_SSP_CTRL0 + STMP_OFFSET_REG_CLR); ··· 652 657 } 653 658 654 659 spin_unlock_irqrestore(&host->lock, flags); 660 + 661 + if (enable && readl(host->base + HW_SSP_STATUS(host)) & 662 + BM_SSP_STATUS_SDIO_IRQ) 663 + mmc_signal_sdio_irq(host->mmc); 664 + 655 665 } 656 666 657 667 static const struct mmc_host_ops mxs_mmc_ops = {
+11 -4
drivers/mmc/host/omap.c
··· 33 33 #include <asm/io.h> 34 34 #include <asm/irq.h> 35 35 36 - #include <plat/board.h> 37 36 #include <plat/mmc.h> 38 37 #include <asm/gpio.h> 39 38 #include <plat/dma.h> ··· 667 668 static void 668 669 mmc_omap_xfer_data(struct mmc_omap_host *host, int write) 669 670 { 670 - int n; 671 + int n, nwords; 671 672 672 673 if (host->buffer_bytes_left == 0) { 673 674 host->sg_idx++; ··· 677 678 n = 64; 678 679 if (n > host->buffer_bytes_left) 679 680 n = host->buffer_bytes_left; 681 + 682 + nwords = n / 2; 683 + nwords += n & 1; /* handle odd number of bytes to transfer */ 684 + 680 685 host->buffer_bytes_left -= n; 681 686 host->total_bytes_left -= n; 682 687 host->data->bytes_xfered += n; 683 688 684 689 if (write) { 685 - __raw_writesw(host->virt_base + OMAP_MMC_REG(host, DATA), host->buffer, n); 690 + __raw_writesw(host->virt_base + OMAP_MMC_REG(host, DATA), 691 + host->buffer, nwords); 686 692 } else { 687 - __raw_readsw(host->virt_base + OMAP_MMC_REG(host, DATA), host->buffer, n); 693 + __raw_readsw(host->virt_base + OMAP_MMC_REG(host, DATA), 694 + host->buffer, nwords); 688 695 } 696 + 697 + host->buffer += nwords; 689 698 } 690 699 691 700 static inline void mmc_omap_report_irq(u16 status)
-1
drivers/mmc/host/omap_hsmmc.c
··· 40 40 #include <linux/regulator/consumer.h> 41 41 #include <linux/pm_runtime.h> 42 42 #include <mach/hardware.h> 43 - #include <plat/board.h> 44 43 #include <plat/mmc.h> 45 44 #include <plat/cpu.h> 46 45
+3 -3
drivers/mmc/host/sdhci-esdhc.h
··· 48 48 int div = 1; 49 49 u32 temp; 50 50 51 + if (clock == 0) 52 + goto out; 53 + 51 54 temp = sdhci_readl(host, ESDHC_SYSTEM_CONTROL); 52 55 temp &= ~(ESDHC_CLOCK_IPGEN | ESDHC_CLOCK_HCKEN | ESDHC_CLOCK_PEREN 53 56 | ESDHC_CLOCK_MASK); 54 57 sdhci_writel(host, temp, ESDHC_SYSTEM_CONTROL); 55 - 56 - if (clock == 0) 57 - goto out; 58 58 59 59 while (host->max_clk / pre_div / 16 > clock && pre_div < 256) 60 60 pre_div *= 2;
+227 -78
drivers/mtd/nand/omap2.c
··· 101 101 #define P4e_s(a) (TF(a & NAND_Ecc_P4e) << 0) 102 102 #define P4o_s(a) (TF(a & NAND_Ecc_P4o) << 1) 103 103 104 + #define PREFETCH_CONFIG1_CS_SHIFT 24 105 + #define ECC_CONFIG_CS_SHIFT 1 106 + #define CS_MASK 0x7 107 + #define ENABLE_PREFETCH (0x1 << 7) 108 + #define DMA_MPU_MODE_SHIFT 2 109 + #define ECCSIZE1_SHIFT 22 110 + #define ECC1RESULTSIZE 0x1 111 + #define ECCCLEAR 0x100 112 + #define ECC1 0x1 113 + 104 114 /* oob info generated runtime depending on ecc algorithm and layout selected */ 105 115 static struct nand_ecclayout omap_oobinfo; 106 116 /* Define some generic bad / good block scan pattern which are used ··· 134 124 135 125 int gpmc_cs; 136 126 unsigned long phys_base; 127 + unsigned long mem_size; 137 128 struct completion comp; 138 129 struct dma_chan *dma; 139 - int gpmc_irq; 130 + int gpmc_irq_fifo; 131 + int gpmc_irq_count; 140 132 enum { 141 133 OMAP_NAND_IO_READ = 0, /* read */ 142 134 OMAP_NAND_IO_WRITE, /* write */ 143 135 } iomode; 144 136 u_char *buf; 145 137 int buf_len; 138 + struct gpmc_nand_regs reg; 146 139 147 140 #ifdef CONFIG_MTD_NAND_OMAP_BCH 148 141 struct bch_control *bch; 149 142 struct nand_ecclayout ecclayout; 150 143 #endif 151 144 }; 145 + 146 + /** 147 + * omap_prefetch_enable - configures and starts prefetch transfer 148 + * @cs: cs (chip select) number 149 + * @fifo_th: fifo threshold to be used for read/ write 150 + * @dma_mode: dma mode enable (1) or disable (0) 151 + * @u32_count: number of bytes to be transferred 152 + * @is_write: prefetch read(0) or write post(1) mode 153 + */ 154 + static int omap_prefetch_enable(int cs, int fifo_th, int dma_mode, 155 + unsigned int u32_count, int is_write, struct omap_nand_info *info) 156 + { 157 + u32 val; 158 + 159 + if (fifo_th > PREFETCH_FIFOTHRESHOLD_MAX) 160 + return -1; 161 + 162 + if (readl(info->reg.gpmc_prefetch_control)) 163 + return -EBUSY; 164 + 165 + /* Set the amount of bytes to be prefetched */ 166 + writel(u32_count, info->reg.gpmc_prefetch_config2); 
167 +
168 + /* Set dma/mpu mode, the prefetch read / post write and
169 + * enable the engine. Set which cs is has requested for.
170 + */
171 + val = ((cs << PREFETCH_CONFIG1_CS_SHIFT) |
172 + PREFETCH_FIFOTHRESHOLD(fifo_th) | ENABLE_PREFETCH |
173 + (dma_mode << DMA_MPU_MODE_SHIFT) | (0x1 & is_write));
174 + writel(val, info->reg.gpmc_prefetch_config1);
175 +
176 + /* Start the prefetch engine */
177 + writel(0x1, info->reg.gpmc_prefetch_control);
178 +
179 + return 0;
180 + }
181 +
182 + /**
183 + * omap_prefetch_reset - disables and stops the prefetch engine
184 + */
185 + static int omap_prefetch_reset(int cs, struct omap_nand_info *info)
186 + {
187 + u32 config1;
188 +
189 + /* check if the same module/cs is trying to reset */
190 + config1 = readl(info->reg.gpmc_prefetch_config1);
191 + if (((config1 >> PREFETCH_CONFIG1_CS_SHIFT) & CS_MASK) != cs)
192 + return -EINVAL;
193 +
194 + /* Stop the PFPW engine */
195 + writel(0x0, info->reg.gpmc_prefetch_control);
196 +
197 + /* Reset/disable the PFPW engine */
198 + writel(0x0, info->reg.gpmc_prefetch_config1);
199 +
200 + return 0;
201 + }
152 202
153 203 /**
154 204 * omap_hwcontrol - hardware specific access to control-lines
··· 228 158
229 159 if (cmd != NAND_CMD_NONE) {
230 160 if (ctrl & NAND_CLE)
231 - gpmc_nand_write(info->gpmc_cs, GPMC_NAND_COMMAND, cmd);
161 + writeb(cmd, info->reg.gpmc_nand_command);
232 162
233 163 else if (ctrl & NAND_ALE)
234 - gpmc_nand_write(info->gpmc_cs, GPMC_NAND_ADDRESS, cmd);
164 + writeb(cmd, info->reg.gpmc_nand_address);
235 165
236 166 else /* NAND_NCE */
237 - gpmc_nand_write(info->gpmc_cs, GPMC_NAND_DATA, cmd);
167 + writeb(cmd, info->reg.gpmc_nand_data);
238 168 }
239 169 }
240 170
··· 268 198 iowrite8(*p++, info->nand.IO_ADDR_W);
269 199 /* wait until buffer is available for write */
270 200 do {
271 - status = gpmc_read_status(GPMC_STATUS_BUFFER);
201 + status = readl(info->reg.gpmc_status) &
202 + GPMC_STATUS_BUFF_EMPTY;
272 203 } while (!status);
273 204 }
274 205 }
··· 306 235 iowrite16(*p++, info->nand.IO_ADDR_W);
307 236 /* wait until buffer is available for write */
308 237 do {
309 - status = gpmc_read_status(GPMC_STATUS_BUFFER);
238 + status = readl(info->reg.gpmc_status) &
239 + GPMC_STATUS_BUFF_EMPTY;
310 240 } while (!status);
311 241 }
312 242 }
··· 337 265 }
338 266
339 267 /* configure and start prefetch transfer */
340 - ret = gpmc_prefetch_enable(info->gpmc_cs,
341 - PREFETCH_FIFOTHRESHOLD_MAX, 0x0, len, 0x0);
268 + ret = omap_prefetch_enable(info->gpmc_cs,
269 + PREFETCH_FIFOTHRESHOLD_MAX, 0x0, len, 0x0, info);
342 270 if (ret) {
343 271 /* PFPW engine is busy, use cpu copy method */
344 272 if (info->nand.options & NAND_BUSWIDTH_16)
··· 347 275 omap_read_buf8(mtd, (u_char *)p, len);
348 276 } else {
349 277 do {
350 - r_count = gpmc_read_status(GPMC_PREFETCH_FIFO_CNT);
278 + r_count = readl(info->reg.gpmc_prefetch_status);
279 + r_count = GPMC_PREFETCH_STATUS_FIFO_CNT(r_count);
351 280 r_count = r_count >> 2;
352 281 ioread32_rep(info->nand.IO_ADDR_R, p, r_count);
353 282 p += r_count;
354 283 len -= r_count << 2;
355 284 } while (len);
356 285 /* disable and stop the PFPW engine */
357 - gpmc_prefetch_reset(info->gpmc_cs);
286 + omap_prefetch_reset(info->gpmc_cs, info);
358 287 }
359 288 }
360 289
··· 374 301 int i = 0, ret = 0;
375 302 u16 *p = (u16 *)buf;
376 303 unsigned long tim, limit;
304 + u32 val;
377 305
378 306 /* take care of subpage writes */
379 307 if (len % 2 != 0) {
··· 384 310 }
385 311
386 312 /* configure and start prefetch transfer */
387 - ret = gpmc_prefetch_enable(info->gpmc_cs,
388 - PREFETCH_FIFOTHRESHOLD_MAX, 0x0, len, 0x1);
313 + ret = omap_prefetch_enable(info->gpmc_cs,
314 + PREFETCH_FIFOTHRESHOLD_MAX, 0x0, len, 0x1, info);
389 315 if (ret) {
390 316 /* PFPW engine is busy, use cpu copy method */
391 317 if (info->nand.options & NAND_BUSWIDTH_16)
··· 394 320 omap_write_buf8(mtd, (u_char *)p, len);
395 321 } else {
396 322 while (len) {
397 - w_count = gpmc_read_status(GPMC_PREFETCH_FIFO_CNT);
323 + w_count = readl(info->reg.gpmc_prefetch_status);
324 + w_count = GPMC_PREFETCH_STATUS_FIFO_CNT(w_count);
398 325 w_count = w_count >> 1;
399 326 for (i = 0; (i < w_count) && len; i++, len -= 2)
400 327 iowrite16(*p++, info->nand.IO_ADDR_W);
··· 404 329 tim = 0;
405 330 limit = (loops_per_jiffy *
406 331 msecs_to_jiffies(OMAP_NAND_TIMEOUT_MS));
407 - while (gpmc_read_status(GPMC_PREFETCH_COUNT) && (tim++ < limit))
332 + do {
408 333 cpu_relax();
334 + val = readl(info->reg.gpmc_prefetch_status);
335 + val = GPMC_PREFETCH_STATUS_COUNT(val);
336 + } while (val && (tim++ < limit));
409 337
410 338 /* disable and stop the PFPW engine */
411 - gpmc_prefetch_reset(info->gpmc_cs);
339 + omap_prefetch_reset(info->gpmc_cs, info);
412 340 }
413 341 }
414 342
··· 443 365 unsigned long tim, limit;
444 366 unsigned n;
445 367 int ret;
368 + u32 val;
446 369
447 370 if (addr >= high_memory) {
448 371 struct page *p1;
··· 475 396 tx->callback_param = &info->comp;
476 397 dmaengine_submit(tx);
477 398
478 - /* configure and start prefetch transfer */
479 - ret = gpmc_prefetch_enable(info->gpmc_cs,
480 - PREFETCH_FIFOTHRESHOLD_MAX, 0x1, len, is_write);
399 + /* configure and start prefetch transfer */
400 + ret = omap_prefetch_enable(info->gpmc_cs,
401 + PREFETCH_FIFOTHRESHOLD_MAX, 0x1, len, is_write, info);
481 402 if (ret)
482 403 /* PFPW engine is busy, use cpu copy method */
483 404 goto out_copy_unmap;
··· 489 410 wait_for_completion(&info->comp);
490 411 tim = 0;
491 412 limit = (loops_per_jiffy * msecs_to_jiffies(OMAP_NAND_TIMEOUT_MS));
492 - while (gpmc_read_status(GPMC_PREFETCH_COUNT) && (tim++ < limit))
413 +
414 + do {
493 415 cpu_relax();
416 + val = readl(info->reg.gpmc_prefetch_status);
417 + val = GPMC_PREFETCH_STATUS_COUNT(val);
418 + } while (val && (tim++ < limit));
494 419
495 420 /* disable and stop the PFPW engine */
496 - gpmc_prefetch_reset(info->gpmc_cs);
421 + omap_prefetch_reset(info->gpmc_cs, info);
497 422
498 423 dma_unmap_sg(info->dma->device->dev, &sg, 1, dir);
499 424 return 0;
··· 554 471 {
555 472 struct omap_nand_info *info = (struct omap_nand_info *) dev;
556 473 u32 bytes;
557 - u32 irq_stat;
558 474
559 - irq_stat = gpmc_read_status(GPMC_GET_IRQ_STATUS);
560 - bytes = gpmc_read_status(GPMC_PREFETCH_FIFO_CNT);
475 + bytes = readl(info->reg.gpmc_prefetch_status);
476 + bytes = GPMC_PREFETCH_STATUS_FIFO_CNT(bytes);
561 477 bytes = bytes & 0xFFFC; /* io in multiple of 4 bytes */
562 478 if (info->iomode == OMAP_NAND_IO_WRITE) { /* checks for write io */
563 - if (irq_stat & 0x2)
479 + if (this_irq == info->gpmc_irq_count)
564 480 goto done;
565 481
566 482 if (info->buf_len && (info->buf_len < bytes))
··· 576 494 (u32 *)info->buf, bytes >> 2);
577 495 info->buf = info->buf + bytes;
578 496
579 - if (irq_stat & 0x2)
497 + if (this_irq == info->gpmc_irq_count)
580 498 goto done;
581 499 }
582 - gpmc_cs_configure(info->gpmc_cs, GPMC_SET_IRQ_STATUS, irq_stat);
500
583 501
584 502 return IRQ_HANDLED;
585 503
586 504 done:
587 505 complete(&info->comp);
588 - /* disable irq */
589 - gpmc_cs_configure(info->gpmc_cs, GPMC_ENABLE_IRQ, 0);
590 506
591 - /* clear status */
592 - gpmc_cs_configure(info->gpmc_cs, GPMC_SET_IRQ_STATUS, irq_stat);
507 + disable_irq_nosync(info->gpmc_irq_fifo);
508 + disable_irq_nosync(info->gpmc_irq_count);
593 509
594 510 return IRQ_HANDLED;
595 511 }
··· 613 534 init_completion(&info->comp);
614 535
615 536 /* configure and start prefetch transfer */
616 - ret = gpmc_prefetch_enable(info->gpmc_cs,
617 - PREFETCH_FIFOTHRESHOLD_MAX/2, 0x0, len, 0x0);
537 + ret = omap_prefetch_enable(info->gpmc_cs,
538 + PREFETCH_FIFOTHRESHOLD_MAX/2, 0x0, len, 0x0, info);
618 539 if (ret)
619 540 /* PFPW engine is busy, use cpu copy method */
620 541 goto out_copy;
621 542
622 543 info->buf_len = len;
623 - /* enable irq */
624 - gpmc_cs_configure(info->gpmc_cs, GPMC_ENABLE_IRQ,
625 - (GPMC_IRQ_FIFOEVENTENABLE | GPMC_IRQ_COUNT_EVENT));
544 +
545 + enable_irq(info->gpmc_irq_count);
546 + enable_irq(info->gpmc_irq_fifo);
626 547
627 548 /* waiting for read to complete */
628 549 wait_for_completion(&info->comp);
629 550
630 551 /* disable and stop the PFPW engine */
631 - gpmc_prefetch_reset(info->gpmc_cs);
552 + omap_prefetch_reset(info->gpmc_cs, info);
632 553 return;
633 554
634 555 out_copy:
··· 651 572 struct omap_nand_info, mtd);
652 573 int ret = 0;
653 574 unsigned long tim, limit;
575 + u32 val;
654 576
655 577 if (len <= mtd->oobsize) {
656 578 omap_write_buf_pref(mtd, buf, len);
··· 663 583 init_completion(&info->comp);
664 584
665 585 /* configure and start prefetch transfer : size=24 */
666 - ret = gpmc_prefetch_enable(info->gpmc_cs,
667 - (PREFETCH_FIFOTHRESHOLD_MAX * 3) / 8, 0x0, len, 0x1);
586 + ret = omap_prefetch_enable(info->gpmc_cs,
587 + (PREFETCH_FIFOTHRESHOLD_MAX * 3) / 8, 0x0, len, 0x1, info);
668 588 if (ret)
669 589 /* PFPW engine is busy, use cpu copy method */
670 590 goto out_copy;
671 591
672 592 info->buf_len = len;
673 - /* enable irq */
674 - gpmc_cs_configure(info->gpmc_cs, GPMC_ENABLE_IRQ,
675 - (GPMC_IRQ_FIFOEVENTENABLE | GPMC_IRQ_COUNT_EVENT));
593 +
594 + enable_irq(info->gpmc_irq_count);
595 + enable_irq(info->gpmc_irq_fifo);
676 596
677 597 /* waiting for write to complete */
678 598 wait_for_completion(&info->comp);
599 +
679 600 /* wait for data to flushed-out before reset the prefetch */
680 601 tim = 0;
681 602 limit = (loops_per_jiffy * msecs_to_jiffies(OMAP_NAND_TIMEOUT_MS));
682 - while (gpmc_read_status(GPMC_PREFETCH_COUNT) && (tim++ < limit))
603 + do {
604 + val = readl(info->reg.gpmc_prefetch_status);
605 + val = GPMC_PREFETCH_STATUS_COUNT(val);
683 606 cpu_relax();
607 + } while (val && (tim++ < limit));
684 608
685 609 /* disable and stop the PFPW engine */
686 610 omap_prefetch_reset(info->gpmc_cs, info);
687 611 return;
688 612
689 613 out_copy:
··· 927 843 {
928 844 struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
929 845 mtd);
930 - return gpmc_calculate_ecc(info->gpmc_cs, dat, ecc_code);
846 + u32 val;
847 +
848 + val = readl(info->reg.gpmc_ecc_config);
849 + if (((val >> ECC_CONFIG_CS_SHIFT) & ~CS_MASK) != info->gpmc_cs)
850 + return -EINVAL;
851 +
852 + /* read ecc result */
853 + val = readl(info->reg.gpmc_ecc1_result);
854 + *ecc_code++ = val; /* P128e, ..., P1e */
855 + *ecc_code++ = val >> 16; /* P128o, ..., P1o */
856 + /* P2048o, P1024o, P512o, P256o, P2048e, P1024e, P512e, P256e */
857 + *ecc_code++ = ((val >> 8) & 0x0f) | ((val >> 20) & 0xf0);
858 +
859 + return 0;
931 860 }
932 861
933 862 /**
··· 954 857 mtd);
955 858 struct nand_chip *chip = mtd->priv;
956 859 unsigned int dev_width = (chip->options & NAND_BUSWIDTH_16) ? 1 : 0;
860 + u32 val;
957 861
958 - gpmc_enable_hwecc(info->gpmc_cs, mode, dev_width, info->nand.ecc.size);
862 + /* clear ecc and enable bits */
863 + val = ECCCLEAR | ECC1;
864 + writel(val, info->reg.gpmc_ecc_control);
865 +
866 + /* program ecc and result sizes */
867 + val = ((((info->nand.ecc.size >> 1) - 1) << ECCSIZE1_SHIFT) |
868 + ECC1RESULTSIZE);
869 + writel(val, info->reg.gpmc_ecc_size_config);
870 +
871 + switch (mode) {
872 + case NAND_ECC_READ:
873 + case NAND_ECC_WRITE:
874 + writel(ECCCLEAR | ECC1, info->reg.gpmc_ecc_control);
875 + break;
876 + case NAND_ECC_READSYN:
877 + writel(ECCCLEAR, info->reg.gpmc_ecc_control);
878 + break;
879 + default:
880 + dev_info(&info->pdev->dev,
881 + "error: unrecognized Mode[%d]!\n", mode);
882 + break;
883 + }
884 +
885 + /* (ECC 16 or 8 bit col) | ( CS ) | ECC Enable */
886 + val = (dev_width << 7) | (info->gpmc_cs << 1) | (0x1);
887 + writel(val, info->reg.gpmc_ecc_config);
959 888 }
960 889
961 890 /**
··· 1009 886 else
1010 887 timeo += (HZ * 20) / 1000;
1011 888
1012 - gpmc_nand_write(info->gpmc_cs,
1013 - GPMC_NAND_COMMAND, (NAND_CMD_STATUS & 0xFF));
889 + writeb(NAND_CMD_STATUS & 0xFF, info->reg.gpmc_nand_command);
1014 890 while (time_before(jiffies, timeo)) {
1015 - status = gpmc_nand_read(info->gpmc_cs, GPMC_NAND_DATA);
891 + status = readb(info->reg.gpmc_nand_data);
1016 892 if (status & NAND_STATUS_READY)
1017 893 break;
1018 894 cond_resched();
··· 1031 909 struct omap_nand_info *info = container_of(mtd, struct omap_nand_info,
1032 910 mtd);
1033 911
1034 - val = gpmc_read_status(GPMC_GET_IRQ_STATUS);
1035 - if ((val & 0x100) == 0x100) {
1036 - /* Clear IRQ Interrupt */
1037 - val |= 0x100;
1038 - val &= ~(0x0);
1039 - gpmc_cs_configure(info->gpmc_cs, GPMC_SET_IRQ_STATUS, val);
1040 - } else {
1041 - unsigned int cnt = 0;
1042 - while (cnt++ < 0x1FF) {
1043 - if ((val & 0x100) == 0x100)
1044 - return 0;
1045 - val = gpmc_read_status(GPMC_GET_IRQ_STATUS);
1046 - }
1047 - }
912 + val = readl(info->reg.gpmc_status);
1048 913
1049 - return 1;
914 + if ((val & 0x100) == 0x100) {
915 + return 1;
916 + } else {
917 + return 0;
918 + }
1050 919 }
1051 920
1052 921 #ifdef CONFIG_MTD_NAND_OMAP_BCH
··· 1268 1155 int i, offset;
1269 1156 dma_cap_mask_t mask;
1270 1157 unsigned sig;
1158 + struct resource *res;
1271 1159
1272 1160 pdata = pdev->dev.platform_data;
1273 1161 if (pdata == NULL) {
··· 1288 1174 info->pdev = pdev;
1289 1175
1290 1176 info->gpmc_cs = pdata->cs;
1291 - info->phys_base = pdata->phys_base;
1177 + info->reg = pdata->reg;
1292 1178
1293 1179 info->mtd.priv = &info->nand;
1294 1180 info->mtd.name = dev_name(&pdev->dev);
··· 1297 1183 info->nand.options = pdata->devsize;
1298 1184 info->nand.options |= NAND_SKIP_BBTSCAN;
1299 1185
1300 - /* NAND write protect off */
1301 - gpmc_cs_configure(info->gpmc_cs, GPMC_CONFIG_WP, 0);
1186 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
1187 + if (res == NULL) {
1188 + err = -EINVAL;
1189 + dev_err(&pdev->dev, "error getting memory resource\n");
1190 + goto out_free_info;
1191 + }
1302 1192
1193 + info->phys_base = res->start;
1194 + info->mem_size = resource_size(res);
1195 +
1303 - if (!request_mem_region(info->phys_base, NAND_IO_SIZE,
1196 + if (!request_mem_region(info->phys_base, info->mem_size,
1304 1197 pdev->dev.driver->name)) {
1305 1198 err = -EBUSY;
1306 1199 goto out_free_info;
1307 1200 }
1308 1201
1309 - info->nand.IO_ADDR_R = ioremap(info->phys_base, NAND_IO_SIZE);
1202 + info->nand.IO_ADDR_R = ioremap(info->phys_base, info->mem_size);
1310 1203 if (!info->nand.IO_ADDR_R) {
1311 1204 err = -ENOMEM;
1312 1205 goto out_release_mem_region;
··· 1386 1265 break;
1387 1266
1388 1267 case NAND_OMAP_PREFETCH_IRQ:
1389 - err = request_irq(pdata->gpmc_irq,
1390 - omap_nand_irq, IRQF_SHARED, "gpmc-nand", info);
1268 + info->gpmc_irq_fifo = platform_get_irq(pdev, 0);
1269 + if (info->gpmc_irq_fifo <= 0) {
1270 + dev_err(&pdev->dev, "error getting fifo irq\n");
1271 + err = -ENODEV;
1272 + goto out_release_mem_region;
1273 + }
1274 + err = request_irq(info->gpmc_irq_fifo, omap_nand_irq,
1275 + IRQF_SHARED, "gpmc-nand-fifo", info);
1391 1276 if (err) {
1392 1277 dev_err(&pdev->dev, "requesting irq(%d) error:%d",
1393 - pdata->gpmc_irq, err);
1278 + info->gpmc_irq_fifo, err);
1279 + info->gpmc_irq_fifo = 0;
1394 1280 goto out_release_mem_region;
1395 - } else {
1396 - info->gpmc_irq = pdata->gpmc_irq;
1397 - info->nand.read_buf = omap_read_buf_irq_pref;
1398 - info->nand.write_buf = omap_write_buf_irq_pref;
1399 1281 }
1282 +
1283 + info->gpmc_irq_count = platform_get_irq(pdev, 1);
1284 + if (info->gpmc_irq_count <= 0) {
1285 + dev_err(&pdev->dev, "error getting count irq\n");
1286 + err = -ENODEV;
1287 + goto out_release_mem_region;
1288 + }
1289 + err = request_irq(info->gpmc_irq_count, omap_nand_irq,
1290 + IRQF_SHARED, "gpmc-nand-count", info);
1291 + if (err) {
1292 + dev_err(&pdev->dev, "requesting irq(%d) error:%d",
1293 + info->gpmc_irq_count, err);
1294 + info->gpmc_irq_count = 0;
1295 + goto out_release_mem_region;
1296 + }
1297 +
1298 + info->nand.read_buf = omap_read_buf_irq_pref;
1299 + info->nand.write_buf = omap_write_buf_irq_pref;
1300 +
1400 1301 break;
1401 1302
1402 1303 default:
··· 1506 1363 out_release_mem_region:
1507 1364 if (info->dma)
1508 1365 dma_release_channel(info->dma);
1509 - release_mem_region(info->phys_base, NAND_IO_SIZE);
1366 + if (info->gpmc_irq_count > 0)
1367 + free_irq(info->gpmc_irq_count, info);
1368 + if (info->gpmc_irq_fifo > 0)
1369 + free_irq(info->gpmc_irq_fifo, info);
1370 + release_mem_region(info->phys_base, info->mem_size);
1510 1371 out_free_info:
1511 1372 kfree(info);
1512 1373
··· 1528 1381 if (info->dma)
1529 1382 dma_release_channel(info->dma);
1530 1383
1531 - if (info->gpmc_irq)
1532 - free_irq(info->gpmc_irq, info);
1384 + if (info->gpmc_irq_count > 0)
1385 + free_irq(info->gpmc_irq_count, info);
1386 + if (info->gpmc_irq_fifo > 0)
1387 + free_irq(info->gpmc_irq_fifo, info);
1533 1388
1534 1389 /* Release NAND device, its internal structures and partitions */
1535 1390 nand_release(&info->mtd);
+16 -15
drivers/mtd/onenand/omap2.c
··· 44 44 45 45 #include <plat/dma.h> 46 46 47 - #include <plat/board.h> 48 - 49 47 #define DRIVER_NAME "omap2-onenand" 50 48 51 - #define ONENAND_IO_SIZE SZ_128K 52 49 #define ONENAND_BUFRAM_SIZE (1024 * 5) 53 50 54 51 struct omap2_onenand { 55 52 struct platform_device *pdev; 56 53 int gpmc_cs; 57 54 unsigned long phys_base; 55 + unsigned int mem_size; 58 56 int gpio_irq; 59 57 struct mtd_info mtd; 60 58 struct onenand_chip onenand; ··· 624 626 struct omap2_onenand *c; 625 627 struct onenand_chip *this; 626 628 int r; 629 + struct resource *res; 627 630 628 631 pdata = pdev->dev.platform_data; 629 632 if (pdata == NULL) { ··· 646 647 c->gpio_irq = 0; 647 648 } 648 649 649 - r = gpmc_cs_request(c->gpmc_cs, ONENAND_IO_SIZE, &c->phys_base); 650 - if (r < 0) { 651 - dev_err(&pdev->dev, "Cannot request GPMC CS\n"); 650 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 651 + if (res == NULL) { 652 + r = -EINVAL; 653 + dev_err(&pdev->dev, "error getting memory resource\n"); 652 654 goto err_kfree; 653 655 } 654 656 655 - if (request_mem_region(c->phys_base, ONENAND_IO_SIZE, 657 + c->phys_base = res->start; 658 + c->mem_size = resource_size(res); 659 + 660 + if (request_mem_region(c->phys_base, c->mem_size, 656 661 pdev->dev.driver->name) == NULL) { 657 - dev_err(&pdev->dev, "Cannot reserve memory region at 0x%08lx, " 658 - "size: 0x%x\n", c->phys_base, ONENAND_IO_SIZE); 662 + dev_err(&pdev->dev, "Cannot reserve memory region at 0x%08lx, size: 0x%x\n", 663 + c->phys_base, c->mem_size); 659 664 r = -EBUSY; 660 - goto err_free_cs; 665 + goto err_kfree; 661 666 } 662 - c->onenand.base = ioremap(c->phys_base, ONENAND_IO_SIZE); 667 + c->onenand.base = ioremap(c->phys_base, c->mem_size); 663 668 if (c->onenand.base == NULL) { 664 669 r = -ENOMEM; 665 670 goto err_release_mem_region; ··· 779 776 err_iounmap: 780 777 iounmap(c->onenand.base); 781 778 err_release_mem_region: 782 - release_mem_region(c->phys_base, ONENAND_IO_SIZE); 783 - err_free_cs: 784 - 
gpmc_cs_free(c->gpmc_cs); 779 + release_mem_region(c->phys_base, c->mem_size); 785 780 err_kfree: 786 781 kfree(c); 787 782 ··· 801 800 gpio_free(c->gpio_irq); 802 801 } 803 802 iounmap(c->onenand.base); 804 - release_mem_region(c->phys_base, ONENAND_IO_SIZE); 803 + release_mem_region(c->phys_base, c->mem_size); 805 804 gpmc_cs_free(c->gpmc_cs); 806 805 kfree(c); 807 806
+2 -2
drivers/mtd/ubi/vtbl.c
··· 340 340 * of this LEB as it will be deleted and freed in 'ubi_add_to_av()'. 341 341 */ 342 342 err = ubi_add_to_av(ubi, ai, new_aeb->pnum, new_aeb->ec, vid_hdr, 0); 343 - kfree(new_aeb); 343 + kmem_cache_free(ai->aeb_slab_cache, new_aeb); 344 344 ubi_free_vid_hdr(ubi, vid_hdr); 345 345 return err; 346 346 ··· 353 353 list_add(&new_aeb->u.list, &ai->erase); 354 354 goto retry; 355 355 } 356 - kfree(new_aeb); 356 + kmem_cache_free(ai->aeb_slab_cache, new_aeb); 357 357 out_free: 358 358 ubi_free_vid_hdr(ubi, vid_hdr); 359 359 return err;
+3 -1
drivers/net/can/sja1000/sja1000_platform.c
··· 109 109 priv = netdev_priv(dev); 110 110 111 111 dev->irq = res_irq->start; 112 - priv->irq_flags = res_irq->flags & (IRQF_TRIGGER_MASK | IRQF_SHARED); 112 + priv->irq_flags = res_irq->flags & IRQF_TRIGGER_MASK; 113 + if (res_irq->flags & IORESOURCE_IRQ_SHAREABLE) 114 + priv->irq_flags |= IRQF_SHARED; 113 115 priv->reg_base = addr; 114 116 /* The CAN clock frequency is half the oscillator clock frequency */ 115 117 priv->can.clock.freq = pdata->osc_freq / 2;
+4 -3
drivers/net/can/softing/softing_fw.c
··· 150 150 const uint8_t *mem, *end, *dat; 151 151 uint16_t type, len; 152 152 uint32_t addr; 153 - uint8_t *buf = NULL; 153 + uint8_t *buf = NULL, *new_buf; 154 154 int buflen = 0; 155 155 int8_t type_end = 0; 156 156 ··· 199 199 if (len > buflen) { 200 200 /* align buflen */ 201 201 buflen = (len + (1024-1)) & ~(1024-1); 202 - buf = krealloc(buf, buflen, GFP_KERNEL); 203 - if (!buf) { 202 + new_buf = krealloc(buf, buflen, GFP_KERNEL); 203 + if (!new_buf) { 204 204 ret = -ENOMEM; 205 205 goto failed; 206 206 } 207 + buf = new_buf; 207 208 } 208 209 /* verify record data */ 209 210 memcpy_fromio(buf, &dpram[addr + offset], len);
-3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1708 1708 continue; \ 1709 1709 else 1710 1710 1711 - #define for_each_napi_rx_queue(bp, var) \ 1712 - for ((var) = 0; (var) < bp->num_napi_queues; (var)++) 1713 - 1714 1711 /* Skip OOO FP */ 1715 1712 #define for_each_tx_queue(bp, var) \ 1716 1713 for ((var) = 0; (var) < BNX2X_NUM_QUEUES(bp); (var)++) \
+4
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 2046 2046 */ 2047 2047 bnx2x_setup_tc(bp->dev, bp->max_cos); 2048 2048 2049 + /* Add all NAPI objects */ 2050 + bnx2x_add_all_napi(bp); 2049 2051 bnx2x_napi_enable(bp); 2050 2052 2051 2053 /* set pf load just before approaching the MCP */ ··· 2410 2408 2411 2409 /* Disable HW interrupts, NAPI */ 2412 2410 bnx2x_netif_stop(bp, 1); 2411 + /* Delete all NAPI objects */ 2412 + bnx2x_del_all_napi(bp); 2413 2413 2414 2414 /* Release IRQs */ 2415 2415 bnx2x_free_irq(bp);
+2 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
··· 792 792 bp->num_napi_queues = bp->num_queues; 793 793 794 794 /* Add NAPI objects */ 795 - for_each_napi_rx_queue(bp, i) 795 + for_each_rx_queue(bp, i) 796 796 netif_napi_add(bp->dev, &bnx2x_fp(bp, i, napi), 797 797 bnx2x_poll, BNX2X_NAPI_WEIGHT); 798 798 } ··· 801 801 { 802 802 int i; 803 803 804 - for_each_napi_rx_queue(bp, i) 804 + for_each_rx_queue(bp, i) 805 805 netif_napi_del(&bnx2x_fp(bp, i, napi)); 806 806 } 807 807
-2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
··· 2888 2888 */ 2889 2889 static void bnx2x_change_num_queues(struct bnx2x *bp, int num_rss) 2890 2890 { 2891 - bnx2x_del_all_napi(bp); 2892 2891 bnx2x_disable_msi(bp); 2893 2892 BNX2X_NUM_QUEUES(bp) = num_rss + NON_ETH_CONTEXT_USE; 2894 2893 bnx2x_set_int_mode(bp); 2895 - bnx2x_add_all_napi(bp); 2896 2894 } 2897 2895 2898 2896 /**
+9 -9
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 8427 8427 8428 8428 /* Disable HW interrupts, NAPI */ 8429 8429 bnx2x_netif_stop(bp, 1); 8430 + /* Delete all NAPI objects */ 8431 + bnx2x_del_all_napi(bp); 8430 8432 8431 8433 /* Release IRQs */ 8432 8434 bnx2x_free_irq(bp); ··· 11231 11229 static void poll_bnx2x(struct net_device *dev) 11232 11230 { 11233 11231 struct bnx2x *bp = netdev_priv(dev); 11232 + int i; 11234 11233 11235 - disable_irq(bp->pdev->irq); 11236 - bnx2x_interrupt(bp->pdev->irq, dev); 11237 - enable_irq(bp->pdev->irq); 11234 + for_each_eth_queue(bp, i) { 11235 + struct bnx2x_fastpath *fp = &bp->fp[i]; 11236 + napi_schedule(&bnx2x_fp(bp, fp->index, napi)); 11237 + } 11238 11238 } 11239 11239 #endif 11240 11240 ··· 11903 11899 */ 11904 11900 bnx2x_set_int_mode(bp); 11905 11901 11906 - /* Add all NAPI objects */ 11907 - bnx2x_add_all_napi(bp); 11908 - 11909 11902 rc = register_netdev(dev); 11910 11903 if (rc) { 11911 11904 dev_err(&pdev->dev, "Cannot register net device\n"); ··· 11977 11976 11978 11977 unregister_netdev(dev); 11979 11978 11980 - /* Delete all NAPI objects */ 11981 - bnx2x_del_all_napi(bp); 11982 - 11983 11979 /* Power on: we can't let PCI layer write to us while we are in D3 */ 11984 11980 bnx2x_set_power_state(bp, PCI_D0); 11985 11981 ··· 12023 12025 bnx2x_tx_disable(bp); 12024 12026 12025 12027 bnx2x_netif_stop(bp, 0); 12028 + /* Delete all NAPI objects */ 12029 + bnx2x_del_all_napi(bp); 12026 12030 12027 12031 del_timer_sync(&bp->timer); 12028 12032
+5 -5
drivers/net/ethernet/cirrus/cs89x0.c
··· 1243 1243 { 1244 1244 struct net_local *lp = netdev_priv(dev); 1245 1245 unsigned long flags; 1246 + u16 cfg; 1246 1247 1247 1248 spin_lock_irqsave(&lp->lock, flags); 1248 1249 if (dev->flags & IFF_PROMISC) ··· 1261 1260 /* in promiscuous mode, we accept errored packets, 1262 1261 * so we have to enable interrupts on them also 1263 1262 */ 1264 - writereg(dev, PP_RxCFG, 1265 - (lp->curr_rx_cfg | 1266 - (lp->rx_mode == RX_ALL_ACCEPT) 1267 - ? (RX_CRC_ERROR_ENBL | RX_RUNT_ENBL | RX_EXTRA_DATA_ENBL) 1268 - : 0)); 1263 + cfg = lp->curr_rx_cfg; 1264 + if (lp->rx_mode == RX_ALL_ACCEPT) 1265 + cfg |= RX_CRC_ERROR_ENBL | RX_RUNT_ENBL | RX_EXTRA_DATA_ENBL; 1266 + writereg(dev, PP_RxCFG, cfg); 1269 1267 spin_unlock_irqrestore(&lp->lock, flags); 1270 1268 } 1271 1269
+4 -2
drivers/net/ethernet/emulex/benet/be_cmds.c
··· 259 259 int num = 0, status = 0; 260 260 struct be_mcc_obj *mcc_obj = &adapter->mcc_obj; 261 261 262 - spin_lock_bh(&adapter->mcc_cq_lock); 262 + spin_lock(&adapter->mcc_cq_lock); 263 263 while ((compl = be_mcc_compl_get(adapter))) { 264 264 if (compl->flags & CQE_FLAGS_ASYNC_MASK) { 265 265 /* Interpret flags as an async trailer */ ··· 280 280 if (num) 281 281 be_cq_notify(adapter, mcc_obj->cq.id, mcc_obj->rearm_cq, num); 282 282 283 - spin_unlock_bh(&adapter->mcc_cq_lock); 283 + spin_unlock(&adapter->mcc_cq_lock); 284 284 return status; 285 285 } 286 286 ··· 295 295 if (be_error(adapter)) 296 296 return -EIO; 297 297 298 + local_bh_disable(); 298 299 status = be_process_mcc(adapter); 300 + local_bh_enable(); 299 301 300 302 if (atomic_read(&mcc_obj->q.used) == 0) 301 303 break;
+2
drivers/net/ethernet/emulex/benet/be_main.c
··· 3763 3763 /* when interrupts are not yet enabled, just reap any pending 3764 3764 * mcc completions */ 3765 3765 if (!netif_running(adapter->netdev)) { 3766 + local_bh_disable(); 3766 3767 be_process_mcc(adapter); 3768 + local_bh_enable(); 3767 3769 goto reschedule; 3768 3770 } 3769 3771
+1 -1
drivers/net/ethernet/freescale/gianfar.c
··· 1041 1041 1042 1042 if (priv->device_flags & FSL_GIANFAR_DEV_HAS_VLAN) { 1043 1043 dev->hw_features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; 1044 - dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX; 1044 + dev->features |= NETIF_F_HW_VLAN_RX; 1045 1045 } 1046 1046 1047 1047 if (priv->device_flags & FSL_GIANFAR_DEV_HAS_EXTENDED_HASH) {
+1
drivers/net/ethernet/intel/e1000e/e1000.h
··· 310 310 */ 311 311 struct e1000_ring *tx_ring /* One per active queue */ 312 312 ____cacheline_aligned_in_smp; 313 + u32 tx_fifo_limit; 313 314 314 315 struct napi_struct napi; 315 316
+23 -25
drivers/net/ethernet/intel/e1000e/netdev.c
··· 3517 3517 } 3518 3518 3519 3519 /* 3520 + * Alignment of Tx data is on an arbitrary byte boundary with the 3521 + * maximum size per Tx descriptor limited only to the transmit 3522 + * allocation of the packet buffer minus 96 bytes with an upper 3523 + * limit of 24KB due to receive synchronization limitations. 3524 + */ 3525 + adapter->tx_fifo_limit = min_t(u32, ((er32(PBA) >> 16) << 10) - 96, 3526 + 24 << 10); 3527 + 3528 + /* 3520 3529 * Disable Adaptive Interrupt Moderation if 2 full packets cannot 3521 3530 * fit in receive buffer. 3522 3531 */ ··· 4794 4785 return 1; 4795 4786 } 4796 4787 4797 - #define E1000_MAX_PER_TXD 8192 4798 - #define E1000_MAX_TXD_PWR 12 4799 - 4800 4788 static int e1000_tx_map(struct e1000_ring *tx_ring, struct sk_buff *skb, 4801 4789 unsigned int first, unsigned int max_per_txd, 4802 - unsigned int nr_frags, unsigned int mss) 4790 + unsigned int nr_frags) 4803 4791 { 4804 4792 struct e1000_adapter *adapter = tx_ring->adapter; 4805 4793 struct pci_dev *pdev = adapter->pdev; ··· 5029 5023 5030 5024 static int e1000_maybe_stop_tx(struct e1000_ring *tx_ring, int size) 5031 5025 { 5026 + BUG_ON(size > tx_ring->count); 5027 + 5032 5028 if (e1000_desc_unused(tx_ring) >= size) 5033 5029 return 0; 5034 5030 return __e1000_maybe_stop_tx(tx_ring, size); 5035 5031 } 5036 5032 5037 - #define TXD_USE_COUNT(S, X) (((S) >> (X)) + 1) 5038 5033 static netdev_tx_t e1000_xmit_frame(struct sk_buff *skb, 5039 5034 struct net_device *netdev) 5040 5035 { 5041 5036 struct e1000_adapter *adapter = netdev_priv(netdev); 5042 5037 struct e1000_ring *tx_ring = adapter->tx_ring; 5043 5038 unsigned int first; 5044 - unsigned int max_per_txd = E1000_MAX_PER_TXD; 5045 - unsigned int max_txd_pwr = E1000_MAX_TXD_PWR; 5046 5039 unsigned int tx_flags = 0; 5047 5040 unsigned int len = skb_headlen(skb); 5048 5041 unsigned int nr_frags; ··· 5061 5056 } 5062 5057 5063 5058 mss = skb_shinfo(skb)->gso_size; 5064 - /* 5065 - * The controller does a simple calculation to 
5066 - * make sure there is enough room in the FIFO before 5067 - * initiating the DMA for each buffer. The calc is: 5068 - * 4 = ceil(buffer len/mss). To make sure we don't 5069 - * overrun the FIFO, adjust the max buffer len if mss 5070 - * drops. 5071 - */ 5072 5059 if (mss) { 5073 5060 u8 hdr_len; 5074 - max_per_txd = min(mss << 2, max_per_txd); 5075 - max_txd_pwr = fls(max_per_txd) - 1; 5076 5061 5077 5062 /* 5078 5063 * TSO Workaround for 82571/2/3 Controllers -- if skb->data ··· 5092 5097 count++; 5093 5098 count++; 5094 5099 5095 - count += TXD_USE_COUNT(len, max_txd_pwr); 5100 + count += DIV_ROUND_UP(len, adapter->tx_fifo_limit); 5096 5101 5097 5102 nr_frags = skb_shinfo(skb)->nr_frags; 5098 5103 for (f = 0; f < nr_frags; f++) 5099 - count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->frags[f]), 5100 - max_txd_pwr); 5104 + count += DIV_ROUND_UP(skb_frag_size(&skb_shinfo(skb)->frags[f]), 5105 + adapter->tx_fifo_limit); 5101 5106 5102 5107 if (adapter->hw.mac.tx_pkt_filtering) 5103 5108 e1000_transfer_dhcp_info(adapter, skb); ··· 5139 5144 tx_flags |= E1000_TX_FLAGS_NO_FCS; 5140 5145 5141 5146 /* if count is 0 then mapping error has occurred */ 5142 - count = e1000_tx_map(tx_ring, skb, first, max_per_txd, nr_frags, mss); 5147 + count = e1000_tx_map(tx_ring, skb, first, adapter->tx_fifo_limit, 5148 + nr_frags); 5143 5149 if (count) { 5144 5150 skb_tx_timestamp(skb); 5145 5151 5146 5152 netdev_sent_queue(netdev, skb->len); 5147 5153 e1000_tx_queue(tx_ring, tx_flags, count); 5148 5154 /* Make sure there is space in the ring for the next send. 
*/ 5149 - e1000_maybe_stop_tx(tx_ring, MAX_SKB_FRAGS + 2); 5150 - 5155 + e1000_maybe_stop_tx(tx_ring, 5156 + (MAX_SKB_FRAGS * 5157 + DIV_ROUND_UP(PAGE_SIZE, 5158 + adapter->tx_fifo_limit) + 2)); 5151 5159 } else { 5152 5160 dev_kfree_skb_any(skb); 5153 5161 tx_ring->buffer_info[first].time_stamp = 0; ··· 6325 6327 adapter->hw.phy.autoneg_advertised = 0x2f; 6326 6328 6327 6329 /* ring size defaults */ 6328 - adapter->rx_ring->count = 256; 6329 - adapter->tx_ring->count = 256; 6330 + adapter->rx_ring->count = E1000_DEFAULT_RXD; 6331 + adapter->tx_ring->count = E1000_DEFAULT_TXD; 6330 6332 6331 6333 /* 6332 6334 * Initial Wake on LAN setting - If APM wake is enabled in
+2 -2
drivers/net/ethernet/sfc/ethtool.c
··· 863 863 &ip_entry->ip4dst, &ip_entry->pdst); 864 864 if (rc != 0) { 865 865 rc = efx_filter_get_ipv4_full( 866 - &spec, &proto, &ip_entry->ip4src, &ip_entry->psrc, 867 - &ip_entry->ip4dst, &ip_entry->pdst); 866 + &spec, &proto, &ip_entry->ip4dst, &ip_entry->pdst, 867 + &ip_entry->ip4src, &ip_entry->psrc); 868 868 EFX_WARN_ON_PARANOID(rc); 869 869 ip_mask->ip4src = ~0; 870 870 ip_mask->psrc = ~0;
+5
drivers/net/ethernet/stmicro/stmmac/common.h
··· 22 22 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 23 23 *******************************************************************************/ 24 24 25 + #ifndef __COMMON_H__ 26 + #define __COMMON_H__ 27 + 25 28 #include <linux/etherdevice.h> 26 29 #include <linux/netdevice.h> 27 30 #include <linux/phy.h> ··· 369 366 370 367 extern void dwmac_dma_flush_tx_fifo(void __iomem *ioaddr); 371 368 extern const struct stmmac_ring_mode_ops ring_mode_ops; 369 + 370 + #endif /* __COMMON_H__ */
+6
drivers/net/ethernet/stmicro/stmmac/descs.h
··· 20 20 21 21 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 22 22 *******************************************************************************/ 23 + 24 + #ifndef __DESCS_H__ 25 + #define __DESCS_H__ 26 + 23 27 struct dma_desc { 24 28 /* Receive descriptor */ 25 29 union { ··· 170 166 * is not calculated */ 171 167 cic_full = 3, /* IP header and pseudoheader */ 172 168 }; 169 + 170 + #endif /* __DESCS_H__ */
+5
drivers/net/ethernet/stmicro/stmmac/descs_com.h
··· 27 27 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 28 28 *******************************************************************************/ 29 29 30 + #ifndef __DESC_COM_H__ 31 + #define __DESC_COM_H__ 32 + 30 33 #if defined(CONFIG_STMMAC_RING) 31 34 static inline void ehn_desc_rx_set_on_ring_chain(struct dma_desc *p, int end) 32 35 { ··· 127 124 p->des01.tx.buffer1_size = len; 128 125 } 129 126 #endif 127 + 128 + #endif /* __DESC_COM_H__ */
+5
drivers/net/ethernet/stmicro/stmmac/dwmac100.h
··· 22 22 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 23 23 *******************************************************************************/ 24 24 25 + #ifndef __DWMAC100_H__ 26 + #define __DWMAC100_H__ 27 + 25 28 #include <linux/phy.h> 26 29 #include "common.h" 27 30 ··· 122 119 #define DMA_MISSED_FRAME_M_CNTR 0x0000ffff /* Missed Frame Couinter */ 123 120 124 121 extern const struct stmmac_dma_ops dwmac100_dma_ops; 122 + 123 + #endif /* __DWMAC100_H__ */
+4 -1
drivers/net/ethernet/stmicro/stmmac/dwmac1000.h
··· 19 19 20 20 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 21 21 *******************************************************************************/ 22 + #ifndef __DWMAC1000_H__ 23 + #define __DWMAC1000_H__ 22 24 23 25 #include <linux/phy.h> 24 26 #include "common.h" ··· 231 229 #define GMAC_MMC_RX_CSUM_OFFLOAD 0x208 232 230 233 231 /* Synopsys Core versions */ 234 - #define DWMAC_CORE_3_40 34 232 + #define DWMAC_CORE_3_40 0x34 235 233 236 234 extern const struct stmmac_dma_ops dwmac1000_dma_ops; 235 + #endif /* __DWMAC1000_H__ */
+5
drivers/net/ethernet/stmicro/stmmac/dwmac_dma.h
··· 22 22 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 23 23 *******************************************************************************/ 24 24 25 + #ifndef __DWMAC_DMA_H__ 26 + #define __DWMAC_DMA_H__ 27 + 25 28 /* DMA CRS Control and Status Register Mapping */ 26 29 #define DMA_BUS_MODE 0x00001000 /* Bus Mode */ 27 30 #define DMA_XMT_POLL_DEMAND 0x00001004 /* Transmit Poll Demand */ ··· 112 109 extern void dwmac_dma_stop_rx(void __iomem *ioaddr); 113 110 extern int dwmac_dma_interrupt(void __iomem *ioaddr, 114 111 struct stmmac_extra_stats *x); 112 + 113 + #endif /* __DWMAC_DMA_H__ */
+5
drivers/net/ethernet/stmicro/stmmac/mmc.h
··· 22 22 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 23 23 *******************************************************************************/ 24 24 25 + #ifndef __MMC_H__ 26 + #define __MMC_H__ 27 + 25 28 /* MMC control register */ 26 29 /* When set, all counter are reset */ 27 30 #define MMC_CNTRL_COUNTER_RESET 0x1 ··· 132 129 extern void dwmac_mmc_ctrl(void __iomem *ioaddr, unsigned int mode); 133 130 extern void dwmac_mmc_intr_all_mask(void __iomem *ioaddr); 134 131 extern void dwmac_mmc_read(void __iomem *ioaddr, struct stmmac_counters *mmc); 132 + 133 + #endif /* __MMC_H__ */
+3 -3
drivers/net/ethernet/stmicro/stmmac/mmc_core.c
··· 33 33 #define MMC_TX_INTR 0x00000108 /* MMC TX Interrupt */ 34 34 #define MMC_RX_INTR_MASK 0x0000010c /* MMC Interrupt Mask */ 35 35 #define MMC_TX_INTR_MASK 0x00000110 /* MMC Interrupt Mask */ 36 - #define MMC_DEFAUL_MASK 0xffffffff 36 + #define MMC_DEFAULT_MASK 0xffffffff 37 37 38 38 /* MMC TX counter registers */ 39 39 ··· 147 147 /* To mask all all interrupts.*/ 148 148 void dwmac_mmc_intr_all_mask(void __iomem *ioaddr) 149 149 { 150 - writel(MMC_DEFAUL_MASK, ioaddr + MMC_RX_INTR_MASK); 151 - writel(MMC_DEFAUL_MASK, ioaddr + MMC_TX_INTR_MASK); 150 + writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_INTR_MASK); 151 + writel(MMC_DEFAULT_MASK, ioaddr + MMC_TX_INTR_MASK); 152 152 } 153 153 154 154 /* This reads the MAC core counters (if actaully supported).
+5
drivers/net/ethernet/stmicro/stmmac/stmmac.h
··· 20 20 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 21 21 *******************************************************************************/ 22 22 23 + #ifndef __STMMAC_H__ 24 + #define __STMMAC_H__ 25 + 23 26 #define STMMAC_RESOURCE_NAME "stmmaceth" 24 27 #define DRV_MODULE_VERSION "March_2012" 25 28 ··· 169 166 { 170 167 } 171 168 #endif /* CONFIG_STMMAC_PCI */ 169 + 170 + #endif /* __STMMAC_H__ */
+4
drivers/net/ethernet/stmicro/stmmac/stmmac_timer.h
··· 21 21 22 22 Author: Giuseppe Cavallaro <peppe.cavallaro@st.com> 23 23 *******************************************************************************/ 24 + #ifndef __STMMAC_TIMER_H__ 25 + #define __STMMAC_TIMER_H__ 24 26 25 27 struct stmmac_timer { 26 28 void (*timer_start) (unsigned int new_freq); ··· 42 40 extern int tmu2_register_user(void *fnt, void *data); 43 41 extern void tmu2_unregister_user(void); 44 42 #endif 43 + 44 + #endif /* __STMMAC_TIMER_H__ */
+3 -1
drivers/net/ethernet/ti/davinci_mdio.c
··· 394 394 struct device *dev = &pdev->dev; 395 395 struct davinci_mdio_data *data = dev_get_drvdata(dev); 396 396 397 - if (data->bus) 397 + if (data->bus) { 398 + mdiobus_unregister(data->bus); 398 399 mdiobus_free(data->bus); 400 + } 399 401 400 402 if (data->clk) 401 403 clk_put(data->clk);
+1 -1
drivers/net/fddi/skfp/pmf.c
··· 673 673 sm_pm_get_ls(smc,port_to_mib(smc,port))) ; 674 674 break ; 675 675 case SMT_P_REASON : 676 - * (u_long *) to = 0 ; 676 + *(u32 *)to = 0 ; 677 677 sp_len = 4 ; 678 678 goto sp_done ; 679 679 case SMT_P1033 : /* time stamp */
+4
drivers/net/usb/qmi_wwan.c
··· 413 413 414 414 /* 5. Gobi 2000 and 3000 devices */ 415 415 {QMI_GOBI_DEVICE(0x413c, 0x8186)}, /* Dell Gobi 2000 Modem device (N0218, VU936) */ 416 + {QMI_GOBI_DEVICE(0x413c, 0x8194)}, /* Dell Gobi 3000 Composite */ 416 417 {QMI_GOBI_DEVICE(0x05c6, 0x920b)}, /* Generic Gobi 2000 Modem device */ 418 + {QMI_GOBI_DEVICE(0x05c6, 0x920d)}, /* Gobi 3000 Composite */ 417 419 {QMI_GOBI_DEVICE(0x05c6, 0x9225)}, /* Sony Gobi 2000 Modem device (N0279, VU730) */ 418 420 {QMI_GOBI_DEVICE(0x05c6, 0x9245)}, /* Samsung Gobi 2000 Modem device (VL176) */ 419 421 {QMI_GOBI_DEVICE(0x03f0, 0x251d)}, /* HP Gobi 2000 Modem device (VP412) */ ··· 443 441 {QMI_GOBI_DEVICE(0x1199, 0x9015)}, /* Sierra Wireless Gobi 3000 Modem device */ 444 442 {QMI_GOBI_DEVICE(0x1199, 0x9019)}, /* Sierra Wireless Gobi 3000 Modem device */ 445 443 {QMI_GOBI_DEVICE(0x1199, 0x901b)}, /* Sierra Wireless MC7770 */ 444 + {QMI_GOBI_DEVICE(0x12d1, 0x14f1)}, /* Sony Gobi 3000 Composite */ 445 + {QMI_GOBI_DEVICE(0x1410, 0xa021)}, /* Foxconn Gobi 3000 Modem device (Novatel E396) */ 446 446 447 447 { } /* END */ 448 448 };
+1 -1
drivers/net/usb/usbnet.c
··· 1573 1573 netif_device_present(dev->net) && 1574 1574 !timer_pending(&dev->delay) && 1575 1575 !test_bit(EVENT_RX_HALT, &dev->flags)) 1576 - rx_alloc_submit(dev, GFP_KERNEL); 1576 + rx_alloc_submit(dev, GFP_NOIO); 1577 1577 1578 1578 if (!(dev->txq.qlen >= TX_QLEN(dev))) 1579 1579 netif_tx_wake_all_queues(dev->net);
+1 -1
drivers/net/wireless/ath/ath5k/eeprom.c
··· 1482 1482 case AR5K_EEPROM_MODE_11A: 1483 1483 offset += AR5K_EEPROM_TARGET_PWR_OFF_11A(ee->ee_version); 1484 1484 rate_pcal_info = ee->ee_rate_tpwr_a; 1485 - ee->ee_rate_target_pwr_num[mode] = AR5K_EEPROM_N_5GHZ_CHAN; 1485 + ee->ee_rate_target_pwr_num[mode] = AR5K_EEPROM_N_5GHZ_RATE_CHAN; 1486 1486 break; 1487 1487 case AR5K_EEPROM_MODE_11B: 1488 1488 offset += AR5K_EEPROM_TARGET_PWR_OFF_11B(ee->ee_version);
+1
drivers/net/wireless/ath/ath5k/eeprom.h
··· 182 182 #define AR5K_EEPROM_EEP_DELTA 10 183 183 #define AR5K_EEPROM_N_MODES 3 184 184 #define AR5K_EEPROM_N_5GHZ_CHAN 10 185 + #define AR5K_EEPROM_N_5GHZ_RATE_CHAN 8 185 186 #define AR5K_EEPROM_N_2GHZ_CHAN 3 186 187 #define AR5K_EEPROM_N_2GHZ_CHAN_2413 4 187 188 #define AR5K_EEPROM_N_2GHZ_CHAN_MAX 4
+3
drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c
··· 1233 1233 /* dpc will not be rescheduled */ 1234 1234 wl->resched = false; 1235 1235 1236 + /* inform publicly that interface is down */ 1237 + wl->pub->up = false; 1238 + 1236 1239 return 0; 1237 1240 } 1238 1241
+2 -1
drivers/net/wireless/ipw2x00/ipw2100.c
··· 2042 2042 return; 2043 2043 } 2044 2044 len = ETH_ALEN; 2045 - ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_AP_BSSID, &bssid, &len); 2045 + ret = ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_AP_BSSID, bssid, 2046 + &len); 2046 2047 if (ret) { 2047 2048 IPW_DEBUG_INFO("failed querying ordinals at line %d\n", 2048 2049 __LINE__);
+3
drivers/net/wireless/iwlwifi/dvm/debugfs.c
··· 124 124 const struct fw_img *img; 125 125 size_t bufsz; 126 126 127 + if (!iwl_is_ready_rf(priv)) 128 + return -EAGAIN; 129 + 127 130 /* default is to dump the entire data segment */ 128 131 if (!priv->dbgfs_sram_offset && !priv->dbgfs_sram_len) { 129 132 priv->dbgfs_sram_offset = 0x800000;
+1 -1
drivers/net/wireless/iwlwifi/pcie/internal.h
··· 350 350 /***************************************************** 351 351 * Error handling 352 352 ******************************************************/ 353 - int iwl_dump_fh(struct iwl_trans *trans, char **buf, bool display); 353 + int iwl_dump_fh(struct iwl_trans *trans, char **buf); 354 354 void iwl_dump_csr(struct iwl_trans *trans); 355 355 356 356 /*****************************************************
+1 -1
drivers/net/wireless/iwlwifi/pcie/rx.c
··· 555 555 } 556 556 557 557 iwl_dump_csr(trans); 558 - iwl_dump_fh(trans, NULL, false); 558 + iwl_dump_fh(trans, NULL); 559 559 560 560 iwl_op_mode_nic_error(trans->op_mode); 561 561 }
+16 -14
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 1649 1649 #undef IWL_CMD 1650 1650 } 1651 1651 1652 - int iwl_dump_fh(struct iwl_trans *trans, char **buf, bool display) 1652 + int iwl_dump_fh(struct iwl_trans *trans, char **buf) 1653 1653 { 1654 1654 int i; 1655 - #ifdef CONFIG_IWLWIFI_DEBUG 1656 - int pos = 0; 1657 - size_t bufsz = 0; 1658 - #endif 1659 1655 static const u32 fh_tbl[] = { 1660 1656 FH_RSCSR_CHNL0_STTS_WPTR_REG, 1661 1657 FH_RSCSR_CHNL0_RBDCB_BASE_REG, ··· 1663 1667 FH_TSSR_TX_STATUS_REG, 1664 1668 FH_TSSR_TX_ERROR_REG 1665 1669 }; 1666 - #ifdef CONFIG_IWLWIFI_DEBUG 1667 - if (display) { 1668 - bufsz = ARRAY_SIZE(fh_tbl) * 48 + 40; 1670 + 1671 + #ifdef CONFIG_IWLWIFI_DEBUGFS 1672 + if (buf) { 1673 + int pos = 0; 1674 + size_t bufsz = ARRAY_SIZE(fh_tbl) * 48 + 40; 1675 + 1669 1676 *buf = kmalloc(bufsz, GFP_KERNEL); 1670 1677 if (!*buf) 1671 1678 return -ENOMEM; 1679 + 1672 1680 pos += scnprintf(*buf + pos, bufsz - pos, 1673 1681 "FH register values:\n"); 1674 - for (i = 0; i < ARRAY_SIZE(fh_tbl); i++) { 1682 + 1683 + for (i = 0; i < ARRAY_SIZE(fh_tbl); i++) 1675 1684 pos += scnprintf(*buf + pos, bufsz - pos, 1676 1685 " %34s: 0X%08x\n", 1677 1686 get_fh_string(fh_tbl[i]), 1678 1687 iwl_read_direct32(trans, fh_tbl[i])); 1679 - } 1688 + 1680 1689 return pos; 1681 1690 } 1682 1691 #endif 1692 + 1683 1693 IWL_ERR(trans, "FH register values:\n"); 1684 - for (i = 0; i < ARRAY_SIZE(fh_tbl); i++) { 1694 + for (i = 0; i < ARRAY_SIZE(fh_tbl); i++) 1685 1695 IWL_ERR(trans, " %34s: 0X%08x\n", 1686 1696 get_fh_string(fh_tbl[i]), 1687 1697 iwl_read_direct32(trans, fh_tbl[i])); 1688 - } 1698 + 1689 1699 return 0; 1690 1700 } 1691 1701 ··· 1984 1982 size_t count, loff_t *ppos) 1985 1983 { 1986 1984 struct iwl_trans *trans = file->private_data; 1987 - char *buf; 1985 + char *buf = NULL; 1988 1986 int pos = 0; 1989 1987 ssize_t ret = -EFAULT; 1990 1988 1991 - ret = pos = iwl_dump_fh(trans, &buf, true); 1989 + ret = pos = iwl_dump_fh(trans, &buf); 1992 1990 if (buf) { 1993 1991 ret = 
simple_read_from_buffer(user_buf, 1994 1992 count, ppos, buf, pos);
+10 -29
drivers/net/xen-netfront.c
··· 57 57 static const struct ethtool_ops xennet_ethtool_ops; 58 58 59 59 struct netfront_cb { 60 - struct page *page; 61 - unsigned offset; 60 + int pull_to; 62 61 }; 63 62 64 63 #define NETFRONT_SKB_CB(skb) ((struct netfront_cb *)((skb)->cb)) ··· 866 867 struct sk_buff *skb; 867 868 868 869 while ((skb = __skb_dequeue(rxq)) != NULL) { 869 - struct page *page = NETFRONT_SKB_CB(skb)->page; 870 - void *vaddr = page_address(page); 871 - unsigned offset = NETFRONT_SKB_CB(skb)->offset; 870 + int pull_to = NETFRONT_SKB_CB(skb)->pull_to; 872 871 873 - memcpy(skb->data, vaddr + offset, 874 - skb_headlen(skb)); 875 - 876 - if (page != skb_frag_page(&skb_shinfo(skb)->frags[0])) 877 - __free_page(page); 872 + __pskb_pull_tail(skb, pull_to - skb_headlen(skb)); 878 873 879 874 /* Ethernet work: Delayed to here as it peeks the header. */ 880 875 skb->protocol = eth_type_trans(skb, dev); ··· 906 913 struct sk_buff_head errq; 907 914 struct sk_buff_head tmpq; 908 915 unsigned long flags; 909 - unsigned int len; 910 916 int err; 911 917 912 918 spin_lock(&np->rx_lock); ··· 947 955 } 948 956 } 949 957 950 - NETFRONT_SKB_CB(skb)->page = 951 - skb_frag_page(&skb_shinfo(skb)->frags[0]); 952 - NETFRONT_SKB_CB(skb)->offset = rx->offset; 958 + NETFRONT_SKB_CB(skb)->pull_to = rx->status; 959 + if (NETFRONT_SKB_CB(skb)->pull_to > RX_COPY_THRESHOLD) 960 + NETFRONT_SKB_CB(skb)->pull_to = RX_COPY_THRESHOLD; 953 961 954 - len = rx->status; 955 - if (len > RX_COPY_THRESHOLD) 956 - len = RX_COPY_THRESHOLD; 957 - skb_put(skb, len); 958 - 959 - if (rx->status > len) { 960 - skb_shinfo(skb)->frags[0].page_offset = 961 - rx->offset + len; 962 - skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status - len); 963 - skb->data_len = rx->status - len; 964 - } else { 965 - __skb_fill_page_desc(skb, 0, NULL, 0, 0); 966 - skb_shinfo(skb)->nr_frags = 0; 967 - } 962 + skb_shinfo(skb)->frags[0].page_offset = rx->offset; 963 + skb_frag_size_set(&skb_shinfo(skb)->frags[0], rx->status); 964 + skb->data_len = 
rx->status; 968 965 969 966 i = xennet_fill_frags(np, skb, &tmpq); 970 967 ··· 980 999 * receive throughout using the standard receive 981 1000 * buffer size was cut by 25%(!!!). 982 1001 */ 983 - skb->truesize += skb->data_len - (RX_COPY_THRESHOLD - len); 1002 + skb->truesize += skb->data_len - RX_COPY_THRESHOLD; 984 1003 skb->len += skb->data_len; 985 1004 986 1005 if (rx->flags & XEN_NETRXF_csum_blank)
+6
drivers/pci/pci-driver.c
··· 280 280 { 281 281 struct drv_dev_and_id *ddi = _ddi; 282 282 struct device *dev = &ddi->dev->dev; 283 + struct device *parent = dev->parent; 283 284 int rc; 284 285 286 + /* The parent bridge must be in active state when probing */ 287 + if (parent) 288 + pm_runtime_get_sync(parent); 285 289 /* Unbound PCI devices are always set to disabled and suspended. 286 290 * During probe, the device is set to enabled and active and the 287 291 * usage count is incremented. If the driver supports runtime PM, ··· 302 298 pm_runtime_set_suspended(dev); 303 299 pm_runtime_put_noidle(dev); 304 300 } 301 + if (parent) 302 + pm_runtime_put(parent); 305 303 return rc; 306 304 } 307 305
+42
drivers/pci/pci-sysfs.c
··· 458 458 } 459 459 struct device_attribute vga_attr = __ATTR_RO(boot_vga); 460 460 461 + static void 462 + pci_config_pm_runtime_get(struct pci_dev *pdev) 463 + { 464 + struct device *dev = &pdev->dev; 465 + struct device *parent = dev->parent; 466 + 467 + if (parent) 468 + pm_runtime_get_sync(parent); 469 + pm_runtime_get_noresume(dev); 470 + /* 471 + * pdev->current_state is set to PCI_D3cold during suspending, 472 + * so wait until suspending completes 473 + */ 474 + pm_runtime_barrier(dev); 475 + /* 476 + * Only need to resume devices in D3cold, because config 477 + * registers are still accessible for devices suspended but 478 + * not in D3cold. 479 + */ 480 + if (pdev->current_state == PCI_D3cold) 481 + pm_runtime_resume(dev); 482 + } 483 + 484 + static void 485 + pci_config_pm_runtime_put(struct pci_dev *pdev) 486 + { 487 + struct device *dev = &pdev->dev; 488 + struct device *parent = dev->parent; 489 + 490 + pm_runtime_put(dev); 491 + if (parent) 492 + pm_runtime_put_sync(parent); 493 + } 494 + 461 495 static ssize_t 462 496 pci_read_config(struct file *filp, struct kobject *kobj, 463 497 struct bin_attribute *bin_attr, ··· 517 483 } else { 518 484 size = count; 519 485 } 486 + 487 + pci_config_pm_runtime_get(dev); 520 488 521 489 if ((off & 1) && size) { 522 490 u8 val; ··· 565 529 --size; 566 530 } 567 531 532 + pci_config_pm_runtime_put(dev); 533 + 568 534 return count; 569 535 } 570 536 ··· 587 549 count = size; 588 550 } 589 551 552 + pci_config_pm_runtime_get(dev); 553 + 590 554 if ((off & 1) && size) { 591 555 pci_user_write_config_byte(dev, off, data[off - init_off]); 592 556 off++; ··· 626 586 off++; 627 587 --size; 628 588 } 589 + 590 + pci_config_pm_runtime_put(dev); 629 591 630 592 return count; 631 593 }
+1
drivers/pci/pci.c
··· 1941 1941 dev->pm_cap = pm; 1942 1942 dev->d3_delay = PCI_PM_D3_WAIT; 1943 1943 dev->d3cold_delay = PCI_PM_D3COLD_WAIT; 1944 + dev->d3cold_allowed = true; 1944 1945 1945 1946 dev->d1_support = false; 1946 1947 dev->d2_support = false;
+14
drivers/pci/pcie/portdrv_pci.c
··· 140 140 { 141 141 return 0; 142 142 } 143 + 144 + static int pcie_port_runtime_idle(struct device *dev) 145 + { 146 + /* Delay for a short while to prevent too frequent suspend/resume */ 147 + pm_schedule_suspend(dev, 10); 148 + return -EBUSY; 149 + } 143 150 #else 144 151 #define pcie_port_runtime_suspend NULL 145 152 #define pcie_port_runtime_resume NULL 153 + #define pcie_port_runtime_idle NULL 146 154 #endif 147 155 148 156 static const struct dev_pm_ops pcie_portdrv_pm_ops = { ··· 163 155 .resume_noirq = pcie_port_resume_noirq, 164 156 .runtime_suspend = pcie_port_runtime_suspend, 165 157 .runtime_resume = pcie_port_runtime_resume, 158 + .runtime_idle = pcie_port_runtime_idle, 166 159 }; 167 160 168 161 #define PCIE_PORTDRV_PM_OPS (&pcie_portdrv_pm_ops) ··· 209 200 return status; 210 201 211 202 pci_save_state(dev); 203 + /* 204 + * D3cold may not work properly on some PCIe port, so disable 205 + * it by default. 206 + */ 207 + dev->d3cold_allowed = false; 212 208 if (!pci_match_id(port_runtime_pm_black_list, dev)) 213 209 pm_runtime_put_noidle(&dev->dev); 214 210
+17 -14
drivers/pci/probe.c
··· 144 144 case PCI_BASE_ADDRESS_MEM_TYPE_32: 145 145 break; 146 146 case PCI_BASE_ADDRESS_MEM_TYPE_1M: 147 - dev_info(&dev->dev, "1M mem BAR treated as 32-bit BAR\n"); 147 + /* 1M mem BAR treated as 32-bit BAR */ 148 148 break; 149 149 case PCI_BASE_ADDRESS_MEM_TYPE_64: 150 150 flags |= IORESOURCE_MEM_64; 151 151 break; 152 152 default: 153 - dev_warn(&dev->dev, 154 - "mem unknown type %x treated as 32-bit BAR\n", 155 - mem_type); 153 + /* mem unknown type treated as 32-bit BAR */ 156 154 break; 157 155 } 158 156 return flags; ··· 171 173 u32 l, sz, mask; 172 174 u16 orig_cmd; 173 175 struct pci_bus_region region; 176 + bool bar_too_big = false, bar_disabled = false; 174 177 175 178 mask = type ? PCI_ROM_ADDRESS_MASK : ~0; 176 179 180 + /* No printks while decoding is disabled! */ 177 181 if (!dev->mmio_always_on) { 178 182 pci_read_config_word(dev, PCI_COMMAND, &orig_cmd); 179 183 pci_write_config_word(dev, PCI_COMMAND, ··· 240 240 goto fail; 241 241 242 242 if ((sizeof(resource_size_t) < 8) && (sz64 > 0x100000000ULL)) { 243 - dev_err(&dev->dev, "reg %x: can't handle 64-bit BAR\n", 244 - pos); 243 + bar_too_big = true; 245 244 goto fail; 246 245 } 247 246 ··· 251 252 region.start = 0; 252 253 region.end = sz64; 253 254 pcibios_bus_to_resource(dev, res, &region); 255 + bar_disabled = true; 254 256 } else { 255 257 region.start = l64; 256 258 region.end = l64 + sz64; 257 259 pcibios_bus_to_resource(dev, res, &region); 258 - dev_printk(KERN_DEBUG, &dev->dev, "reg %x: %pR\n", 259 - pos, res); 260 260 } 261 261 } else { 262 262 sz = pci_size(l, sz, mask); ··· 266 268 region.start = l; 267 269 region.end = l + sz; 268 270 pcibios_bus_to_resource(dev, res, &region); 269 - 270 - dev_printk(KERN_DEBUG, &dev->dev, "reg %x: %pR\n", pos, res); 271 271 } 272 272 273 - out: 273 + goto out; 274 + 275 + 276 + fail: 277 + res->flags = 0; 278 + out: 274 279 if (!dev->mmio_always_on) 275 280 pci_write_config_word(dev, PCI_COMMAND, orig_cmd); 276 281 282 + if (bar_too_big) 283 + dev_err(&dev->dev, "reg %x: can't handle 64-bit BAR\n", pos); 284 + if (res->flags && !bar_disabled) 285 + dev_printk(KERN_DEBUG, &dev->dev, "reg %x: %pR\n", pos, res); 286 + 277 287 return (res->flags & IORESOURCE_MEM_64) ? 1 : 0; 278 - fail: 279 - res->flags = 0; 280 - goto out; 281 288 }
+15 -7
drivers/rtc/rtc-at91sam9.c
··· 58 58 struct rtc_device *rtcdev; 59 59 u32 imr; 60 60 void __iomem *gpbr; 61 + int irq; 61 62 }; 62 63 63 64 #define rtt_readl(rtc, field) \ ··· 293 292 { 294 293 struct resource *r, *r_gpbr; 295 294 struct sam9_rtc *rtc; 296 - int ret; 295 + int ret, irq; 297 296 u32 mr; 298 297 299 298 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); ··· 303 302 return -ENODEV; 304 303 } 305 304 305 + irq = platform_get_irq(pdev, 0); 306 + if (irq < 0) { 307 + dev_err(&pdev->dev, "failed to get interrupt resource\n"); 308 + return irq; 309 + } 310 + 306 311 rtc = kzalloc(sizeof *rtc, GFP_KERNEL); 307 312 if (!rtc) 308 313 return -ENOMEM; 314 + 315 + rtc->irq = irq; 309 316 310 317 /* platform setup code should have handled this; sigh */ 311 318 if (!device_can_wakeup(&pdev->dev)) ··· 354 345 } 355 346 356 347 /* register irq handler after we know what name we'll use */ 357 - ret = request_irq(AT91_ID_SYS, at91_rtc_interrupt, 358 - IRQF_SHARED, 348 + ret = request_irq(rtc->irq, at91_rtc_interrupt, IRQF_SHARED, 359 349 dev_name(&rtc->rtcdev->dev), rtc); 360 350 if (ret) { 361 - dev_dbg(&pdev->dev, "can't share IRQ %d?\n", AT91_ID_SYS); 351 + dev_dbg(&pdev->dev, "can't share IRQ %d?\n", rtc->irq); 362 352 rtc_device_unregister(rtc->rtcdev); 363 353 goto fail_register; 364 354 } ··· 394 386 395 387 /* disable all interrupts */ 396 388 rtt_writel(rtc, MR, mr & ~(AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN)); 397 - free_irq(AT91_ID_SYS, rtc); 389 + free_irq(rtc->irq, rtc); 398 390 399 391 rtc_device_unregister(rtc->rtcdev); 400 392 ··· 431 423 rtc->imr = mr & (AT91_RTT_ALMIEN | AT91_RTT_RTTINCIEN); 432 424 if (rtc->imr) { 433 425 if (device_may_wakeup(&pdev->dev) && (mr & AT91_RTT_ALMIEN)) { 434 - enable_irq_wake(AT91_ID_SYS); 426 + enable_irq_wake(rtc->irq); 435 427 /* don't let RTTINC cause wakeups */ 436 428 if (mr & AT91_RTT_RTTINCIEN) 437 429 rtt_writel(rtc, MR, mr & ~AT91_RTT_RTTINCIEN); ··· 449 441 450 442 if (rtc->imr) { 451 443 if (device_may_wakeup(&pdev->dev)) 452 - disable_irq_wake(AT91_ID_SYS); 444 + disable_irq_wake(rtc->irq); 453 445 mr = rtt_readl(rtc, MR); 454 446 rtt_writel(rtc, MR, mr | rtc->imr); 455 447 }
+1 -1
drivers/s390/block/dasd_eckd.c
··· 3804 3804 case BIODASDSYMMIO: 3805 3805 return dasd_symm_io(device, argp); 3806 3806 default: 3807 - return -ENOIOCTLCMD; 3807 + return -ENOTTY; 3808 3808 } 3809 3809 } 3810 3810
+2 -5
drivers/s390/block/dasd_ioctl.c
··· 498 498 break; 499 499 default: 500 500 /* if the discipline has an ioctl method try it. */ 501 - if (base->discipline->ioctl) { 501 + rc = -ENOTTY; 502 + if (base->discipline->ioctl) 502 503 rc = base->discipline->ioctl(block, cmd, argp); 503 - if (rc == -ENOIOCTLCMD) 504 - rc = -EINVAL; 505 - } else 506 - rc = -EINVAL; 507 504 } 508 505 dasd_put_device(base); 509 506 return rc;
+25 -4
drivers/spi/spi-bcm63xx.c
··· 47 47 /* Platform data */ 48 48 u32 speed_hz; 49 49 unsigned fifo_size; 50 + unsigned int msg_type_shift; 51 + unsigned int msg_ctl_width; 50 52 51 53 /* Data buffers */ 52 54 const unsigned char *tx_ptr; ··· 223 221 msg_ctl = (t->len << SPI_BYTE_CNT_SHIFT); 224 222 225 223 if (t->rx_buf && t->tx_buf) 226 - msg_ctl |= (SPI_FD_RW << SPI_MSG_TYPE_SHIFT); 224 + msg_ctl |= (SPI_FD_RW << bs->msg_type_shift); 227 225 else if (t->rx_buf) 228 - msg_ctl |= (SPI_HD_R << SPI_MSG_TYPE_SHIFT); 226 + msg_ctl |= (SPI_HD_R << bs->msg_type_shift); 229 227 else if (t->tx_buf) 230 - msg_ctl |= (SPI_HD_W << SPI_MSG_TYPE_SHIFT); 228 + msg_ctl |= (SPI_HD_W << bs->msg_type_shift); 231 229 232 - bcm_spi_writew(bs, msg_ctl, SPI_MSG_CTL); 230 + switch (bs->msg_ctl_width) { 231 + case 8: 232 + bcm_spi_writeb(bs, msg_ctl, SPI_MSG_CTL); 233 + break; 234 + case 16: 235 + bcm_spi_writew(bs, msg_ctl, SPI_MSG_CTL); 236 + break; 237 + } 233 238 234 239 /* Issue the transfer */ 235 240 cmd = SPI_CMD_START_IMMEDIATE; ··· 415 406 master->transfer_one_message = bcm63xx_spi_transfer_one; 416 407 master->mode_bits = MODEBITS; 417 408 bs->speed_hz = pdata->speed_hz; 409 + bs->msg_type_shift = pdata->msg_type_shift; 410 + bs->msg_ctl_width = pdata->msg_ctl_width; 418 411 bs->tx_io = (u8 *)(bs->regs + bcm63xx_spireg(SPI_MSG_DATA)); 419 412 bs->rx_io = (const u8 *)(bs->regs + bcm63xx_spireg(SPI_RX_DATA)); 413 + 414 + switch (bs->msg_ctl_width) { 415 + case 8: 416 + case 16: 417 + break; 418 + default: 419 + dev_err(dev, "unsupported MSG_CTL width: %d\n", 420 + bs->msg_ctl_width); 421 + goto out_clk_disable; 422 + } 420 423 421 424 /* Initialize hardware */ 422 425 clk_enable(bs->clk);
-2
drivers/video/auo_k190x.c
··· 987 987 fb_dealloc_cmap(&info->cmap); 988 988 err_cmap: 989 989 fb_deferred_io_cleanup(info); 990 - kfree(info->fbdefio); 991 990 err_defio: 992 991 vfree((void *)info->screen_base); 993 992 err_irq: ··· 1021 1022 fb_dealloc_cmap(&info->cmap); 1022 1023 1023 1024 fb_deferred_io_cleanup(info); 1024 - kfree(info->fbdefio); 1025 1025 1026 1026 vfree((void *)info->screen_base); 1027 1027
+1 -1
drivers/video/backlight/omap1_bl.c
··· 27 27 #include <linux/fb.h> 28 28 #include <linux/backlight.h> 29 29 #include <linux/slab.h> 30 + #include <linux/platform_data/omap1_bl.h> 30 31 31 32 #include <mach/hardware.h> 32 - #include <plat/board.h> 33 33 #include <plat/mux.h> 34 34 35 35 #define OMAPBL_MAX_INTENSITY 0xff
+1 -1
drivers/video/console/bitblit.c
··· 162 162 image.depth = 1; 163 163 164 164 if (attribute) { 165 - buf = kmalloc(cellsize, GFP_KERNEL); 165 + buf = kmalloc(cellsize, GFP_ATOMIC); 166 166 if (!buf) 167 167 return; 168 168 }
+1 -1
drivers/video/console/fbcon.c
··· 449 449 450 450 while ((options = strsep(&this_opt, ",")) != NULL) { 451 451 if (!strncmp(options, "font:", 5)) 452 - strcpy(fontname, options + 5); 452 + strlcpy(fontname, options + 5, sizeof(fontname)); 453 453 454 454 if (!strncmp(options, "scrollback:", 11)) { 455 455 options += 11;
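The fbcon hunk above replaces an unbounded strcpy() of the `font:` boot option with strlcpy() into the fixed-size fontname buffer, so an overlong option can no longer overrun it. strlcpy() is a kernel/BSD function rather than standard C; a portable sketch of the same guarantee (truncate, always NUL-terminate, report the untruncated length) can be built on snprintf(). The bounded_copy helper below is hypothetical, not kernel code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bounded copy that, like the kernel's strlcpy(), always NUL-terminates
 * and never writes past the destination buffer. snprintf() returns the
 * length it *would* have written, so a return >= size signals truncation. */
static size_t bounded_copy(char *dst, const char *src, size_t size)
{
	return (size_t)snprintf(dst, size, "%s", src);
}
```

Comparing the return value against the buffer size is how a caller detects that the source was truncated, which plain strncpy() cannot report (and strncpy() may also leave the buffer unterminated).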
+2
drivers/video/mb862xx/mb862xxfbdrv.c
··· 328 328 case MB862XX_L1_SET_CFG: 329 329 if (copy_from_user(l1_cfg, argp, sizeof(*l1_cfg))) 330 330 return -EFAULT; 331 + if (l1_cfg->dh == 0 || l1_cfg->dw == 0) 332 + return -EINVAL; 331 333 if ((l1_cfg->sw >= l1_cfg->dw) && (l1_cfg->sh >= l1_cfg->dh)) { 332 334 /* downscaling */ 333 335 outreg(cap, GC_CAP_CSC,
+14
drivers/video/omap2/dss/sdi.c
··· 105 105 106 106 sdi_config_lcd_manager(dssdev); 107 107 108 + /* 109 + * LCLK and PCLK divisors are located in shadow registers, and we 110 + * normally write them to DISPC registers when enabling the output. 111 + * However, SDI uses pck-free as source clock for its PLL, and pck-free 112 + * is affected by the divisors. And as we need the PLL before enabling 113 + * the output, we need to write the divisors early. 114 + * 115 + * It seems just writing to the DISPC register is enough, and we don't 116 + * need to care about the shadow register mechanism for pck-free. The 117 + * exact reason for this is unknown. 118 + */ 119 + dispc_mgr_set_clock_div(dssdev->manager->id, 120 + &sdi.mgr_config.clock_info); 121 + 108 122 dss_sdi_init(dssdev->phy.sdi.datapairs); 109 123 r = dss_sdi_enable(); 110 124 if (r)
+1 -1
drivers/video/omap2/omapfb/omapfb-main.c
··· 1192 1192 break; 1193 1193 1194 1194 if (regno < 16) { 1195 - u16 pal; 1195 + u32 pal; 1196 1196 pal = ((red >> (16 - var->red.length)) << 1197 1197 var->red.offset) | 1198 1198 ((green >> (16 - var->green.length)) <<
+3 -4
drivers/watchdog/booke_wdt.c
··· 166 166 167 167 switch (cmd) { 168 168 case WDIOC_GETSUPPORT: 169 - if (copy_to_user((void *)arg, &ident, sizeof(ident))) 170 - return -EFAULT; 169 + return copy_to_user(p, &ident, sizeof(ident)) ? -EFAULT : 0; 171 170 case WDIOC_GETSTATUS: 172 171 return put_user(0, p); 173 172 case WDIOC_GETBOOTSTATUS: 174 173 /* XXX: something is clearing TSR */ 175 174 tmp = mfspr(SPRN_TSR) & TSR_WRS(3); 176 175 /* returns CARDRESET if last reset was caused by the WDT */ 177 - return (tmp ? WDIOF_CARDRESET : 0); 176 + return put_user((tmp ? WDIOF_CARDRESET : 0), p); 178 177 case WDIOC_SETOPTIONS: 179 178 if (get_user(tmp, p)) 180 - return -EINVAL; 179 + return -EFAULT; 181 180 if (tmp == WDIOS_ENABLECARD) { 182 181 booke_wdt_ping(); 183 182 break;
-1
drivers/watchdog/da9052_wdt.c
··· 21 21 #include <linux/types.h> 22 22 #include <linux/kernel.h> 23 23 #include <linux/jiffies.h> 24 - #include <linux/delay.h> 25 24 26 25 #include <linux/mfd/da9052/reg.h> 27 26 #include <linux/mfd/da9052/da9052.h>
-15
drivers/xen/platform-pci.c
··· 101 101 return 0; 102 102 } 103 103 104 - static void __devinit prepare_shared_info(void) 105 - { 106 - #ifdef CONFIG_KEXEC 107 - unsigned long addr; 108 - struct shared_info *hvm_shared_info; 109 - 110 - addr = alloc_xen_mmio(PAGE_SIZE); 111 - hvm_shared_info = ioremap(addr, PAGE_SIZE); 112 - memset(hvm_shared_info, 0, PAGE_SIZE); 113 - xen_hvm_prepare_kexec(hvm_shared_info, addr >> PAGE_SHIFT); 114 - #endif 115 - } 116 - 117 104 static int __devinit platform_pci_init(struct pci_dev *pdev, 118 105 const struct pci_device_id *ent) 119 106 { ··· 137 150 138 151 platform_mmio = mmio_addr; 139 152 platform_mmiolen = mmio_len; 140 - 141 - prepare_shared_info(); 142 153 143 154 if (!xen_have_vector_callback) { 144 155 ret = xen_allocate_irq(pdev);
+1 -1
drivers/xen/swiotlb-xen.c
··· 232 232 return ret; 233 233 234 234 if (hwdev && hwdev->coherent_dma_mask) 235 - dma_mask = hwdev->coherent_dma_mask; 235 + dma_mask = dma_alloc_coherent_mask(hwdev, flags); 236 236 237 237 phys = virt_to_phys(ret); 238 238 dev_addr = xen_phys_to_bus(phys);
+4 -4
drivers/xen/xen-pciback/pci_stub.c
··· 353 353 if (err) 354 354 goto config_release; 355 355 356 - dev_dbg(&dev->dev, "reseting (FLR, D3, etc) the device\n"); 357 - __pci_reset_function_locked(dev); 358 - 359 356 /* We need the device active to save the state. */ 360 357 dev_dbg(&dev->dev, "save state of device\n"); 361 358 pci_save_state(dev); 362 359 dev_data->pci_saved_state = pci_store_saved_state(dev); 363 360 if (!dev_data->pci_saved_state) 364 361 dev_err(&dev->dev, "Could not store PCI conf saved state!\n"); 365 - 362 + else { 363 + dev_dbg(&dev->dev, "reseting (FLR, D3, etc) the device\n"); 364 + __pci_reset_function_locked(dev); 365 + } 366 366 /* Now disable the device (this also ensures some private device 367 367 * data is setup before we export) 368 368 */
+6 -5
fs/bio.c
··· 73 73 { 74 74 unsigned int sz = sizeof(struct bio) + extra_size; 75 75 struct kmem_cache *slab = NULL; 76 - struct bio_slab *bslab; 76 + struct bio_slab *bslab, *new_bio_slabs; 77 77 unsigned int i, entry = -1; 78 78 79 79 mutex_lock(&bio_slab_lock); ··· 97 97 98 98 if (bio_slab_nr == bio_slab_max && entry == -1) { 99 99 bio_slab_max <<= 1; 100 - bio_slabs = krealloc(bio_slabs, 101 - bio_slab_max * sizeof(struct bio_slab), 102 - GFP_KERNEL); 103 - if (!bio_slabs) 100 + new_bio_slabs = krealloc(bio_slabs, 101 + bio_slab_max * sizeof(struct bio_slab), 102 + GFP_KERNEL); 103 + if (!new_bio_slabs) 104 104 goto out_unlock; 105 + bio_slabs = new_bio_slabs; 105 106 } 106 107 if (entry == -1) 107 108 entry = bio_slab_nr++;
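The fs/bio.c hunk above assigns krealloc()'s result to a temporary (new_bio_slabs) and only commits it to bio_slabs on success, so a failed reallocation no longer overwrites the only pointer to the old array with NULL and leaks it. The same rule applies to userspace realloc(); grow_array below is a hypothetical helper illustrating the pattern:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Grow an int array safely: realloc()'s result goes into a temporary, so
 * on failure *arr is untouched and the caller still owns a valid buffer
 * instead of leaking it by overwriting its only pointer with NULL. */
static int grow_array(int **arr, size_t *cap, size_t new_cap)
{
	int *tmp = realloc(*arr, new_cap * sizeof(**arr));

	if (!tmp)
		return -1;	/* old buffer is still valid and owned by caller */
	if (new_cap > *cap)
		memset(tmp + *cap, 0, (new_cap - *cap) * sizeof(*tmp));
	*arr = tmp;
	*cap = new_cap;
	return 0;
}
```

Writing `p = realloc(p, n);` directly is the classic bug this commit fixes: when realloc() returns NULL, the original allocation is still live but unreachable.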
+3
fs/block_dev.c
··· 1578 1578 unsigned long nr_segs, loff_t pos) 1579 1579 { 1580 1580 struct file *file = iocb->ki_filp; 1581 + struct blk_plug plug; 1581 1582 ssize_t ret; 1582 1583 1583 1584 BUG_ON(iocb->ki_pos != pos); 1584 1585 1586 + blk_start_plug(&plug); 1585 1587 ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos); 1586 1588 if (ret > 0 || ret == -EIOCBQUEUED) { 1587 1589 ssize_t err; ··· 1592 1590 if (err < 0 && ret > 0) 1593 1591 ret = err; 1594 1592 } 1593 + blk_finish_plug(&plug); 1595 1594 return ret; 1596 1595 } 1597 1596 EXPORT_SYMBOL_GPL(blkdev_aio_write);
+2 -2
fs/btrfs/backref.c
··· 1438 1438 ret = extent_from_logical(fs_info, logical, path, 1439 1439 &found_key); 1440 1440 btrfs_release_path(path); 1441 - if (ret & BTRFS_EXTENT_FLAG_TREE_BLOCK) 1442 - ret = -EINVAL; 1443 1441 if (ret < 0) 1444 1442 return ret; 1443 + if (ret & BTRFS_EXTENT_FLAG_TREE_BLOCK) 1444 + return -EINVAL; 1445 1445 1446 1446 extent_item_pos = logical - found_key.objectid; 1447 1447 ret = iterate_extent_inodes(fs_info, found_key.objectid,
+1
fs/btrfs/compression.c
··· 818 818 btrfs_compress_op[idx]->free_workspace(workspace); 819 819 atomic_dec(alloc_workspace); 820 820 wake: 821 + smp_mb(); 821 822 if (waitqueue_active(workspace_wait)) 822 823 wake_up(workspace_wait); 823 824 }
+3 -6
fs/btrfs/ctree.c
··· 421 421 spin_unlock(&fs_info->tree_mod_seq_lock); 422 422 423 423 /* 424 - * we removed the lowest blocker from the blocker list, so there may be 425 - * more processible delayed refs. 426 - */ 427 - wake_up(&fs_info->tree_mod_seq_wait); 428 - 429 - /* 430 424 * anything that's lower than the lowest existing (read: blocked) 431 425 * sequence number can be removed from the tree. 432 426 */ ··· 624 630 int i; 625 631 u32 nritems; 626 632 int ret; 633 + 634 + if (btrfs_header_level(eb) == 0) 635 + return; 627 636 628 637 nritems = btrfs_header_nritems(eb); 629 638 for (i = nritems - 1; i >= 0; i--) {
+1 -2
fs/btrfs/ctree.h
··· 1252 1252 atomic_t tree_mod_seq; 1253 1253 struct list_head tree_mod_seq_list; 1254 1254 struct seq_list tree_mod_seq_elem; 1255 - wait_queue_head_t tree_mod_seq_wait; 1256 1255 1257 1256 /* this protects tree_mod_log */ 1258 1257 rwlock_t tree_mod_log_lock; ··· 3191 3192 int btrfs_lookup_bio_sums(struct btrfs_root *root, struct inode *inode, 3192 3193 struct bio *bio, u32 *dst); 3193 3194 int btrfs_lookup_bio_sums_dio(struct btrfs_root *root, struct inode *inode, 3194 - struct bio *bio, u64 logical_offset, u32 *dst); 3195 + struct bio *bio, u64 logical_offset); 3195 3196 int btrfs_insert_file_extent(struct btrfs_trans_handle *trans, 3196 3197 struct btrfs_root *root, 3197 3198 u64 objectid, u64 pos,
+6 -6
fs/btrfs/delayed-inode.c
··· 512 512 513 513 rb_erase(&delayed_item->rb_node, root); 514 514 delayed_item->delayed_node->count--; 515 - atomic_dec(&delayed_root->items); 516 - if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND && 515 + if (atomic_dec_return(&delayed_root->items) < 516 + BTRFS_DELAYED_BACKGROUND && 517 517 waitqueue_active(&delayed_root->wait)) 518 518 wake_up(&delayed_root->wait); 519 519 } ··· 1028 1028 btrfs_release_delayed_item(prev); 1029 1029 ret = 0; 1030 1030 btrfs_release_path(path); 1031 - if (curr) 1031 + if (curr) { 1032 + mutex_unlock(&node->mutex); 1032 1033 goto do_again; 1033 - else 1034 + } else 1034 1035 goto delete_fail; 1035 1036 } 1036 1037 ··· 1056 1055 delayed_node->count--; 1057 1056 1058 1057 delayed_root = delayed_node->root->fs_info->delayed_root; 1059 - atomic_dec(&delayed_root->items); 1060 - if (atomic_read(&delayed_root->items) < 1058 + if (atomic_dec_return(&delayed_root->items) < 1061 1059 BTRFS_DELAYED_BACKGROUND && 1062 1060 waitqueue_active(&delayed_root->wait)) 1063 1061 wake_up(&delayed_root->wait);
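The delayed-inode.c hunk above folds a separate atomic_dec() + atomic_read() into a single atomic_dec_return(), so the threshold comparison sees the value this CPU actually produced rather than one that other threads may have changed between the two operations. A C11 <stdatomic.h> sketch of the single-step form (the function name and threshold are illustrative, not kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

#define BACKGROUND_THRESHOLD 16

/* Decrement and test in one atomic step: atomic_fetch_sub() returns the
 * value held *before* the subtraction, so prev - 1 is exactly the count
 * this thread produced, immune to concurrent updates racing in between. */
static int dec_and_check_below(atomic_int *items)
{
	int prev = atomic_fetch_sub(items, 1);

	return (prev - 1) < BACKGROUND_THRESHOLD;	/* like atomic_dec_return() < N */
}
```

With the old dec-then-read sequence, two racing decrementers could both read a post-race value and, for example, both miss (or both take) the wake-up threshold; returning the result of the decrement itself closes that window.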
+128 -35
fs/btrfs/delayed-ref.c
··· 38 38 static int comp_tree_refs(struct btrfs_delayed_tree_ref *ref2, 39 39 struct btrfs_delayed_tree_ref *ref1) 40 40 { 41 - if (ref1->node.type == BTRFS_TREE_BLOCK_REF_KEY) { 42 - if (ref1->root < ref2->root) 43 - return -1; 44 - if (ref1->root > ref2->root) 45 - return 1; 46 - } else { 47 - if (ref1->parent < ref2->parent) 48 - return -1; 49 - if (ref1->parent > ref2->parent) 50 - return 1; 51 - } 41 + if (ref1->root < ref2->root) 42 + return -1; 43 + if (ref1->root > ref2->root) 44 + return 1; 45 + if (ref1->parent < ref2->parent) 46 + return -1; 47 + if (ref1->parent > ref2->parent) 48 + return 1; 52 49 return 0; 53 50 } 54 51 ··· 82 85 * type of the delayed backrefs and content of delayed backrefs. 83 86 */ 84 87 static int comp_entry(struct btrfs_delayed_ref_node *ref2, 85 - struct btrfs_delayed_ref_node *ref1) 88 + struct btrfs_delayed_ref_node *ref1, 89 + bool compare_seq) 86 90 { 87 91 if (ref1->bytenr < ref2->bytenr) 88 92 return -1; ··· 100 102 if (ref1->type > ref2->type) 101 103 return 1; 102 104 /* merging of sequenced refs is not allowed */ 103 - if (ref1->seq < ref2->seq) 104 - return -1; 105 - if (ref1->seq > ref2->seq) 106 - return 1; 105 + if (compare_seq) { 106 + if (ref1->seq < ref2->seq) 107 + return -1; 108 + if (ref1->seq > ref2->seq) 109 + return 1; 110 + } 107 111 if (ref1->type == BTRFS_TREE_BLOCK_REF_KEY || 108 112 ref1->type == BTRFS_SHARED_BLOCK_REF_KEY) { 109 113 return comp_tree_refs(btrfs_delayed_node_to_tree_ref(ref2), ··· 139 139 entry = rb_entry(parent_node, struct btrfs_delayed_ref_node, 140 140 rb_node); 141 141 142 - cmp = comp_entry(entry, ins); 142 + cmp = comp_entry(entry, ins, 1); 143 143 if (cmp < 0) 144 144 p = &(*p)->rb_left; 145 145 else if (cmp > 0) ··· 231 231 } 232 232 btrfs_put_delayed_ref(&head->node); 233 233 return 0; 234 + } 235 + 236 + static void inline drop_delayed_ref(struct btrfs_trans_handle *trans, 237 + struct btrfs_delayed_ref_root *delayed_refs, 238 + struct btrfs_delayed_ref_node *ref) 239 + { 240 + rb_erase(&ref->rb_node, &delayed_refs->root); 241 + ref->in_tree = 0; 242 + btrfs_put_delayed_ref(ref); 243 + delayed_refs->num_entries--; 244 + if (trans->delayed_ref_updates) 245 + trans->delayed_ref_updates--; 246 + } 247 + 248 + static int merge_ref(struct btrfs_trans_handle *trans, 249 + struct btrfs_delayed_ref_root *delayed_refs, 250 + struct btrfs_delayed_ref_node *ref, u64 seq) 251 + { 252 + struct rb_node *node; 253 + int merged = 0; 254 + int mod = 0; 255 + int done = 0; 256 + 257 + node = rb_prev(&ref->rb_node); 258 + while (node) { 259 + struct btrfs_delayed_ref_node *next; 260 + 261 + next = rb_entry(node, struct btrfs_delayed_ref_node, rb_node); 262 + node = rb_prev(node); 263 + if (next->bytenr != ref->bytenr) 264 + break; 265 + if (seq && next->seq >= seq) 266 + break; 267 + if (comp_entry(ref, next, 0)) 268 + continue; 269 + 270 + if (ref->action == next->action) { 271 + mod = next->ref_mod; 272 + } else { 273 + if (ref->ref_mod < next->ref_mod) { 274 + struct btrfs_delayed_ref_node *tmp; 275 + 276 + tmp = ref; 277 + ref = next; 278 + next = tmp; 279 + done = 1; 280 + } 281 + mod = -next->ref_mod; 282 + } 283 + 284 + merged++; 285 + drop_delayed_ref(trans, delayed_refs, next); 286 + ref->ref_mod += mod; 287 + if (ref->ref_mod == 0) { 288 + drop_delayed_ref(trans, delayed_refs, ref); 289 + break; 290 + } else { 291 + /* 292 + * You can't have multiples of the same ref on a tree 293 + * block. 294 + */ 295 + WARN_ON(ref->type == BTRFS_TREE_BLOCK_REF_KEY || 296 + ref->type == BTRFS_SHARED_BLOCK_REF_KEY); 297 + } 298 + 299 + if (done) 300 + break; 301 + node = rb_prev(&ref->rb_node); 302 + } 303 + 304 + return merged; 305 + } 306 + 307 + void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans, 308 + struct btrfs_fs_info *fs_info, 309 + struct btrfs_delayed_ref_root *delayed_refs, 310 + struct btrfs_delayed_ref_head *head) 311 + { 312 + struct rb_node *node; 313 + u64 seq = 0; 314 + 315 + spin_lock(&fs_info->tree_mod_seq_lock); 316 + if (!list_empty(&fs_info->tree_mod_seq_list)) { 317 + struct seq_list *elem; 318 + 319 + elem = list_first_entry(&fs_info->tree_mod_seq_list, 320 + struct seq_list, list); 321 + seq = elem->seq; 322 + } 323 + spin_unlock(&fs_info->tree_mod_seq_lock); 324 + 325 + node = rb_prev(&head->node.rb_node); 326 + while (node) { 327 + struct btrfs_delayed_ref_node *ref; 328 + 329 + ref = rb_entry(node, struct btrfs_delayed_ref_node, 330 + rb_node); 331 + if (ref->bytenr != head->node.bytenr) 332 + break; 333 + 334 + /* We can't merge refs that are outside of our seq count */ 335 + if (seq && ref->seq >= seq) 336 + break; 337 + if (merge_ref(trans, delayed_refs, ref, seq)) 338 + node = rb_prev(&head->node.rb_node); 339 + else 340 + node = rb_prev(node); 341 + } 234 342 } 235 343 236 344 int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info, ··· 444 336 * every changing the extent allocation tree.
445 337 */ 446 338 existing->ref_mod--; 447 - if (existing->ref_mod == 0) { 448 - rb_erase(&existing->rb_node, 449 - &delayed_refs->root); 450 - existing->in_tree = 0; 451 - btrfs_put_delayed_ref(existing); 452 - delayed_refs->num_entries--; 453 - if (trans->delayed_ref_updates) 454 - trans->delayed_ref_updates--; 455 - } else { 339 + if (existing->ref_mod == 0) 340 + drop_delayed_ref(trans, delayed_refs, existing); 341 + else 456 342 WARN_ON(existing->type == BTRFS_TREE_BLOCK_REF_KEY || 457 343 existing->type == BTRFS_SHARED_BLOCK_REF_KEY); 458 - } 459 344 } else { 460 345 WARN_ON(existing->type == BTRFS_TREE_BLOCK_REF_KEY || 461 346 existing->type == BTRFS_SHARED_BLOCK_REF_KEY); ··· 763 662 add_delayed_tree_ref(fs_info, trans, &ref->node, bytenr, 764 663 num_bytes, parent, ref_root, level, action, 765 664 for_cow); 766 - if (!need_ref_seq(for_cow, ref_root) && 767 - waitqueue_active(&fs_info->tree_mod_seq_wait)) 768 - wake_up(&fs_info->tree_mod_seq_wait); 769 665 spin_unlock(&delayed_refs->lock); 770 666 if (need_ref_seq(for_cow, ref_root)) 771 667 btrfs_qgroup_record_ref(trans, &ref->node, extent_op); ··· 811 713 add_delayed_data_ref(fs_info, trans, &ref->node, bytenr, 812 714 num_bytes, parent, ref_root, owner, offset, 813 715 action, for_cow); 814 - if (!need_ref_seq(for_cow, ref_root) && 815 - waitqueue_active(&fs_info->tree_mod_seq_wait)) 816 - wake_up(&fs_info->tree_mod_seq_wait); 817 716 spin_unlock(&delayed_refs->lock); 818 717 if (need_ref_seq(for_cow, ref_root)) 819 718 btrfs_qgroup_record_ref(trans, &ref->node, extent_op); ··· 839 744 num_bytes, BTRFS_UPDATE_DELAYED_HEAD, 840 745 extent_op->is_data); 841 746 842 - if (waitqueue_active(&fs_info->tree_mod_seq_wait)) 843 - wake_up(&fs_info->tree_mod_seq_wait); 844 747 spin_unlock(&delayed_refs->lock); 845 748 return 0; 846 749 }
+4
fs/btrfs/delayed-ref.h
··· 167 167 struct btrfs_trans_handle *trans, 168 168 u64 bytenr, u64 num_bytes, 169 169 struct btrfs_delayed_extent_op *extent_op); 170 + void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans, 171 + struct btrfs_fs_info *fs_info, 172 + struct btrfs_delayed_ref_root *delayed_refs, 173 + struct btrfs_delayed_ref_head *head); 170 174 171 175 struct btrfs_delayed_ref_head * 172 176 btrfs_find_delayed_ref_head(struct btrfs_trans_handle *trans, u64 bytenr);
+14 -39
fs/btrfs/disk-io.c
··· 377 377 ret = read_extent_buffer_pages(io_tree, eb, start, 378 378 WAIT_COMPLETE, 379 379 btree_get_extent, mirror_num); 380 - if (!ret && !verify_parent_transid(io_tree, eb, 380 + if (!ret) { 381 + if (!verify_parent_transid(io_tree, eb, 381 382 parent_transid, 0)) 382 - break; 383 + break; 384 + else 385 + ret = -EIO; 386 + } 383 387 384 388 /* 385 389 * This buffer's crc is fine, but its contents are corrupted, so ··· 758 754 limit = btrfs_async_submit_limit(fs_info); 759 755 limit = limit * 2 / 3; 760 756 761 - atomic_dec(&fs_info->nr_async_submits); 762 - 763 - if (atomic_read(&fs_info->nr_async_submits) < limit && 757 + if (atomic_dec_return(&fs_info->nr_async_submits) < limit && 764 758 waitqueue_active(&fs_info->async_submit_wait)) 765 759 wake_up(&fs_info->async_submit_wait); 766 760 ··· 2034 2032 fs_info->free_chunk_space = 0; 2035 2033 fs_info->tree_mod_log = RB_ROOT; 2036 2034 2037 - init_waitqueue_head(&fs_info->tree_mod_seq_wait); 2038 - 2039 2035 /* readahead state */ 2040 2036 INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_WAIT); 2041 2037 spin_lock_init(&fs_info->reada_lock); ··· 2528 2528 goto fail_trans_kthread; 2529 2529 2530 2530 /* do not make disk changes in broken FS */ 2531 - if (btrfs_super_log_root(disk_super) != 0 && 2532 - !(fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR)) { 2531 + if (btrfs_super_log_root(disk_super) != 0) { 2533 2532 u64 bytenr = btrfs_super_log_root(disk_super); 2534 2533 2535 2534 if (fs_devices->rw_devices == 0) { ··· 3188 3189 /* clear out the rbtree of defraggable inodes */ 3189 3190 btrfs_run_defrag_inodes(fs_info); 3190 3191 3191 - /* 3192 - * Here come 2 situations when btrfs is broken to flip readonly: 3193 - * 3194 - * 1. when btrfs flips readonly somewhere else before 3195 - * btrfs_commit_super, sb->s_flags has MS_RDONLY flag, 3196 - * and btrfs will skip to write sb directly to keep 3197 - * ERROR state on disk. 3198 - * 3199 - * 2. when btrfs flips readonly just in btrfs_commit_super, 3200 - * and in such case, btrfs cannot write sb via btrfs_commit_super, 3201 - * and since fs_state has been set BTRFS_SUPER_FLAG_ERROR flag, 3202 - * btrfs will cleanup all FS resources first and write sb then. 3203 - */ 3204 3192 if (!(fs_info->sb->s_flags & MS_RDONLY)) { 3205 3193 ret = btrfs_commit_super(root); 3206 3194 if (ret) 3207 3195 printk(KERN_ERR "btrfs: commit super ret %d\n", ret); 3208 3196 } 3209 3197 3210 - if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) { 3211 - ret = btrfs_error_commit_super(root); 3212 - if (ret) 3213 - printk(KERN_ERR "btrfs: commit super ret %d\n", ret); 3214 - } 3198 + if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) 3199 + btrfs_error_commit_super(root); 3215 3200 3216 3201 btrfs_put_block_group_cache(fs_info); 3217 3202 ··· 3417 3434 if (read_only) 3418 3435 return 0; 3419 3436 3420 - if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) { 3421 - printk(KERN_WARNING "warning: mount fs with errors, " 3422 - "running btrfsck is recommended\n"); 3423 - } 3424 - 3425 3437 return 0; 3426 3438 } 3427 3439 3428 - int btrfs_error_commit_super(struct btrfs_root *root) 3440 + void btrfs_error_commit_super(struct btrfs_root *root) 3429 3441 { 3430 - int ret; 3431 - 3432 3442 mutex_lock(&root->fs_info->cleaner_mutex); 3433 3443 btrfs_run_delayed_iputs(root); 3434 3444 mutex_unlock(&root->fs_info->cleaner_mutex); ··· 3431 3455 3432 3456 /* cleanup FS via transaction */ 3433 3457 btrfs_cleanup_transaction(root); 3434 - 3435 - ret = write_ctree_super(NULL, root, 0); 3436 - 3437 - return ret; 3438 3458 } 3439 3459 3440 3460 static void btrfs_destroy_ordered_operations(struct btrfs_root *root) ··· 3754 3782 /* FIXME: cleanup wait for commit */ 3755 3783 t->in_commit = 1; 3756 3784 t->blocked = 1; 3785 + smp_mb(); 3757 3786 if (waitqueue_active(&root->fs_info->transaction_blocked_wait)) 3758 3787 wake_up(&root->fs_info->transaction_blocked_wait); 3759 3788 3760 3789 t->blocked = 0; 3790 + smp_mb(); 3761 3791 if (waitqueue_active(&root->fs_info->transaction_wait)) 3762 3792 wake_up(&root->fs_info->transaction_wait); 3763 3793 3764 3794 t->commit_done = 1; 3795 + smp_mb(); 3765 3796 if (waitqueue_active(&t->commit_wait)) 3766 3797 wake_up(&t->commit_wait); 3767 3798
+1 -1
fs/btrfs/disk-io.h
··· 54 54 struct btrfs_root *root, int max_mirrors); 55 55 struct buffer_head *btrfs_read_dev_super(struct block_device *bdev); 56 56 int btrfs_commit_super(struct btrfs_root *root); 57 - int btrfs_error_commit_super(struct btrfs_root *root); 57 + void btrfs_error_commit_super(struct btrfs_root *root); 58 58 struct extent_buffer *btrfs_find_tree_block(struct btrfs_root *root, 59 59 u64 bytenr, u32 blocksize); 60 60 struct btrfs_root *btrfs_read_fs_root_no_radix(struct btrfs_root *tree_root,
+58 -65
fs/btrfs/extent-tree.c
··· 2252 2252 } 2253 2253 2254 2254 /* 2255 + * We need to try and merge add/drops of the same ref since we 2256 + * can run into issues with relocate dropping the implicit ref 2257 + * and then it being added back again before the drop can 2258 + * finish. If we merged anything we need to re-loop so we can 2259 + * get a good ref. 2260 + */ 2261 + btrfs_merge_delayed_refs(trans, fs_info, delayed_refs, 2262 + locked_ref); 2263 + 2264 + /* 2255 2265 * locked_ref is the head node, so we have to go one 2256 2266 * node back for any delayed ref updates 2257 2267 */ ··· 2328 2318 ref->in_tree = 0; 2329 2319 rb_erase(&ref->rb_node, &delayed_refs->root); 2330 2320 delayed_refs->num_entries--; 2331 - /* 2332 - * we modified num_entries, but as we're currently running 2333 - * delayed refs, skip 2334 - * wake_up(&delayed_refs->seq_wait); 2335 - * here. 2336 - */ 2321 + if (locked_ref) { 2322 + /* 2323 + * when we play the delayed ref, also correct the 2324 + * ref_mod on head 2325 + */ 2326 + switch (ref->action) { 2327 + case BTRFS_ADD_DELAYED_REF: 2328 + case BTRFS_ADD_DELAYED_EXTENT: 2329 + locked_ref->node.ref_mod -= ref->ref_mod; 2330 + break; 2331 + case BTRFS_DROP_DELAYED_REF: 2332 + locked_ref->node.ref_mod += ref->ref_mod; 2333 + break; 2334 + default: 2335 + WARN_ON(1); 2336 + } 2337 + } 2337 2338 spin_unlock(&delayed_refs->lock); 2338 2339 2339 2340 ret = run_one_delayed_ref(trans, root, ref, extent_op, ··· 2369 2348 spin_lock(&delayed_refs->lock); 2370 2349 } 2371 2350 return count; 2372 - } 2373 - 2374 - static void wait_for_more_refs(struct btrfs_fs_info *fs_info, 2375 - struct btrfs_delayed_ref_root *delayed_refs, 2376 - unsigned long num_refs, 2377 - struct list_head *first_seq) 2378 - { 2379 - spin_unlock(&delayed_refs->lock); 2380 - pr_debug("waiting for more refs (num %ld, first %p)\n", 2381 - num_refs, first_seq); 2382 - wait_event(fs_info->tree_mod_seq_wait, 2383 - num_refs != delayed_refs->num_entries || 2384 - fs_info->tree_mod_seq_list.next != first_seq); 2385 - pr_debug("done waiting for more refs (num %ld, first %p)\n", 2386 - delayed_refs->num_entries, fs_info->tree_mod_seq_list.next); 2387 - spin_lock(&delayed_refs->lock); 2388 2351 } 2389 2352 2390 2353 #ifdef SCRAMBLE_DELAYED_REFS ··· 2465 2460 struct btrfs_delayed_ref_root *delayed_refs; 2466 2461 struct btrfs_delayed_ref_node *ref; 2467 2462 struct list_head cluster; 2468 - struct list_head *first_seq = NULL; 2469 2463 int ret; 2470 2464 u64 delayed_start; 2471 2465 int run_all = count == (unsigned long)-1; 2472 2466 int run_most = 0; 2473 - unsigned long num_refs = 0; 2474 - int consider_waiting; 2467 + int loops; 2475 2468 2476 2469 /* We'll clean this up in btrfs_cleanup_transaction */ 2477 2470 if (trans->aborted) ··· 2487 2484 delayed_refs = &trans->transaction->delayed_refs; 2488 2485 INIT_LIST_HEAD(&cluster); 2489 2486 again: 2490 - consider_waiting = 0; 2487 + loops = 0; 2491 2488 spin_lock(&delayed_refs->lock); 2492 2489 2493 2490 #ifdef SCRAMBLE_DELAYED_REFS ··· 2515 2512 if (ret) 2516 2513 break; 2517 2514 2518 - if (delayed_start >= delayed_refs->run_delayed_start) { 2519 - if (consider_waiting == 0) { 2520 - /* 2521 - * btrfs_find_ref_cluster looped. let's do one 2522 - * more cycle. if we don't run any delayed ref 2523 - * during that cycle (because we can't because 2524 - * all of them are blocked) and if the number of 2525 - * refs doesn't change, we avoid busy waiting. 2526 - */ 2527 - consider_waiting = 1; 2528 - num_refs = delayed_refs->num_entries; 2529 - first_seq = root->fs_info->tree_mod_seq_list.next; 2530 - } else { 2531 - wait_for_more_refs(root->fs_info, delayed_refs, 2532 - num_refs, first_seq); 2533 - /* 2534 - * after waiting, things have changed. we 2535 - * dropped the lock and someone else might have 2536 - * run some refs, built new clusters and so on. 2537 - * therefore, we restart staleness detection.
2538 - */ 2539 - consider_waiting = 0; 2540 - } 2541 - } 2542 - 2543 2515 ret = run_clustered_refs(trans, root, &cluster); 2544 2516 if (ret < 0) { 2545 2517 spin_unlock(&delayed_refs->lock); ··· 2527 2549 if (count == 0) 2528 2550 break; 2529 2551 2530 - if (ret || delayed_refs->run_delayed_start == 0) { 2552 + if (delayed_start >= delayed_refs->run_delayed_start) { 2553 + if (loops == 0) { 2554 + /* 2555 + * btrfs_find_ref_cluster looped. let's do one 2556 + * more cycle. if we don't run any delayed ref 2557 + * during that cycle (because we can't because 2558 + * all of them are blocked), bail out. 2559 + */ 2560 + loops = 1; 2561 + } else { 2562 + /* 2563 + * no runnable refs left, stop trying 2564 + */ 2565 + BUG_ON(run_all); 2566 + break; 2567 + } 2568 + } 2569 + if (ret) { 2531 2570 /* refs were run, let's reset staleness detection */ 2532 - consider_waiting = 0; 2571 + loops = 0; 2533 2572 } 2534 2573 } 2535 2574 ··· 3002 3007 } 3003 3008 spin_unlock(&block_group->lock); 3004 3009 3005 - num_pages = (int)div64_u64(block_group->key.offset, 1024 * 1024 * 1024); 3010 + /* 3011 + * Try to preallocate enough space based on how big the block group is. 3012 + * Keep in mind this has to include any pinned space which could end up 3013 + * taking up quite a bit since it's not folded into the other space 3014 + * cache. 3015 + */ 3016 + num_pages = (int)div64_u64(block_group->key.offset, 256 * 1024 * 1024); 3006 3017 if (!num_pages) 3007 3018 num_pages = 1; 3008 3019 3009 - /* 3010 - * Just to make absolutely sure we have enough space, we're going to 3011 - * preallocate 12 pages worth of space for each block group. In 3012 - * practice we ought to use at most 8, but we need extra space so we can 3013 - * add our header and have a terminator between the extents and the 3014 - * bitmaps. 
3015 - */ 3016 3020 num_pages *= 16; 3017 3021 num_pages *= PAGE_CACHE_SIZE; 3018 3022 ··· 4565 4571 if (root->fs_info->quota_enabled) { 4566 4572 ret = btrfs_qgroup_reserve(root, num_bytes + 4567 4573 nr_extents * root->leafsize); 4568 - if (ret) 4574 + if (ret) { 4575 + mutex_unlock(&BTRFS_I(inode)->delalloc_mutex); 4569 4576 return ret; 4577 + } 4570 4578 } 4571 4579 4572 4580 ret = reserve_metadata_bytes(root, block_rsv, to_reserve, flush); ··· 5290 5294 rb_erase(&head->node.rb_node, &delayed_refs->root); 5291 5295 5292 5296 delayed_refs->num_entries--; 5293 - smp_mb(); 5294 - if (waitqueue_active(&root->fs_info->tree_mod_seq_wait)) 5295 - wake_up(&root->fs_info->tree_mod_seq_wait); 5296 5297 5297 5298 /* 5298 5299 * we don't take a ref on the node because we're removing it from the
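The reworked cache_save_setup() sizing above drops the old "12 pages per block group" comment in favor of scaling with block-group size: one 16-page unit per 256 MiB (minimum one unit), so pinned space also fits in the space cache. A small userspace model of that arithmetic — Python standing in for the kernel C, with PAGE_CACHE_SIZE assumed to be 4 KiB:

```python
# Model of the space-cache preallocation sizing from the diff above.
# Assumes a 4 KiB page; the kernel uses PAGE_CACHE_SIZE.
PAGE_CACHE_SIZE = 4096

def cache_prealloc_bytes(block_group_bytes):
    # One unit of 16 pages per 256 MiB of block group, minimum one unit.
    num_pages = block_group_bytes // (256 * 1024 * 1024)
    if num_pages == 0:
        num_pages = 1
    num_pages *= 16
    return num_pages * PAGE_CACHE_SIZE

# A 1 GiB block group now gets four 16-page units (256 KiB total) where
# the old 1 GiB divisor produced a single unit.
print(cache_prealloc_bytes(1024 * 1024 * 1024))  # 262144
```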
+2 -15
fs/btrfs/extent_io.c
··· 2330 2330 if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) { 2331 2331 ret = tree->ops->readpage_end_io_hook(page, start, end, 2332 2332 state, mirror); 2333 - if (ret) { 2334 - /* no IO indicated but software detected errors 2335 - * in the block, either checksum errors or 2336 - * issues with the contents */ 2337 - struct btrfs_root *root = 2338 - BTRFS_I(page->mapping->host)->root; 2339 - struct btrfs_device *device; 2340 - 2333 + if (ret) 2341 2334 uptodate = 0; 2342 - device = btrfs_find_device_for_logical( 2343 - root, start, mirror); 2344 - if (device) 2345 - btrfs_dev_stat_inc_and_print(device, 2346 - BTRFS_DEV_STAT_CORRUPTION_ERRS); 2347 - } else { 2335 + else 2348 2336 clean_io_failure(start, page); 2349 - } 2350 2337 } 2351 2338 2352 2339 if (!uptodate && tree->ops && tree->ops->readpage_io_failed_hook) {
+2 -2
fs/btrfs/file-item.c
··· 272 272 } 273 273 274 274 int btrfs_lookup_bio_sums_dio(struct btrfs_root *root, struct inode *inode, 275 - struct bio *bio, u64 offset, u32 *dst) 275 + struct bio *bio, u64 offset) 276 276 { 277 - return __btrfs_lookup_bio_sums(root, inode, bio, offset, dst, 1); 277 + return __btrfs_lookup_bio_sums(root, inode, bio, offset, NULL, 1); 278 278 } 279 279 280 280 int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end,
+164 -162
fs/btrfs/inode.c
··· 1008 1008 nr_pages = (async_cow->end - async_cow->start + PAGE_CACHE_SIZE) >> 1009 1009 PAGE_CACHE_SHIFT; 1010 1010 1011 - atomic_sub(nr_pages, &root->fs_info->async_delalloc_pages); 1012 - 1013 - if (atomic_read(&root->fs_info->async_delalloc_pages) < 1011 + if (atomic_sub_return(nr_pages, &root->fs_info->async_delalloc_pages) < 1014 1012 5 * 1024 * 1024 && 1015 1013 waitqueue_active(&root->fs_info->async_submit_wait)) 1016 1014 wake_up(&root->fs_info->async_submit_wait); ··· 1883 1885 trans = btrfs_join_transaction_nolock(root); 1884 1886 else 1885 1887 trans = btrfs_join_transaction(root); 1886 - if (IS_ERR(trans)) 1887 - return PTR_ERR(trans); 1888 + if (IS_ERR(trans)) { 1889 + ret = PTR_ERR(trans); 1890 + trans = NULL; 1891 + goto out; 1892 + } 1888 1893 trans->block_rsv = &root->fs_info->delalloc_block_rsv; 1889 1894 ret = btrfs_update_inode_fallback(trans, root, inode); 1890 1895 if (ret) /* -ENOMEM or corruption */ ··· 3175 3174 btrfs_i_size_write(dir, dir->i_size - name_len * 2); 3176 3175 inode_inc_iversion(dir); 3177 3176 dir->i_mtime = dir->i_ctime = CURRENT_TIME; 3178 - ret = btrfs_update_inode(trans, root, dir); 3177 + ret = btrfs_update_inode_fallback(trans, root, dir); 3179 3178 if (ret) 3180 3179 btrfs_abort_transaction(trans, root, ret); 3181 3180 out: ··· 5775 5774 return ret; 5776 5775 } 5777 5776 5777 + static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend, 5778 + struct extent_state **cached_state, int writing) 5779 + { 5780 + struct btrfs_ordered_extent *ordered; 5781 + int ret = 0; 5782 + 5783 + while (1) { 5784 + lock_extent_bits(&BTRFS_I(inode)->io_tree, lockstart, lockend, 5785 + 0, cached_state); 5786 + /* 5787 + * We're concerned with the entire range that we're going to be 5788 + * doing DIO to, so we need to make sure theres no ordered 5789 + * extents in this range. 
5790 + */ 5791 + ordered = btrfs_lookup_ordered_range(inode, lockstart, 5792 + lockend - lockstart + 1); 5793 + 5794 + /* 5795 + * We need to make sure there are no buffered pages in this 5796 + * range either, we could have raced between the invalidate in 5797 + * generic_file_direct_write and locking the extent. The 5798 + * invalidate needs to happen so that reads after a write do not 5799 + * get stale data. 5800 + */ 5801 + if (!ordered && (!writing || 5802 + !test_range_bit(&BTRFS_I(inode)->io_tree, 5803 + lockstart, lockend, EXTENT_UPTODATE, 0, 5804 + *cached_state))) 5805 + break; 5806 + 5807 + unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 5808 + cached_state, GFP_NOFS); 5809 + 5810 + if (ordered) { 5811 + btrfs_start_ordered_extent(inode, ordered, 1); 5812 + btrfs_put_ordered_extent(ordered); 5813 + } else { 5814 + /* Screw you mmap */ 5815 + ret = filemap_write_and_wait_range(inode->i_mapping, 5816 + lockstart, 5817 + lockend); 5818 + if (ret) 5819 + break; 5820 + 5821 + /* 5822 + * If we found a page that couldn't be invalidated just 5823 + * fall back to buffered. 
5824 + */ 5825 + ret = invalidate_inode_pages2_range(inode->i_mapping, 5826 + lockstart >> PAGE_CACHE_SHIFT, 5827 + lockend >> PAGE_CACHE_SHIFT); 5828 + if (ret) 5829 + break; 5830 + } 5831 + 5832 + cond_resched(); 5833 + } 5834 + 5835 + return ret; 5836 + } 5837 + 5778 5838 static int btrfs_get_blocks_direct(struct inode *inode, sector_t iblock, 5779 5839 struct buffer_head *bh_result, int create) 5780 5840 { 5781 5841 struct extent_map *em; 5782 5842 struct btrfs_root *root = BTRFS_I(inode)->root; 5843 + struct extent_state *cached_state = NULL; 5783 5844 u64 start = iblock << inode->i_blkbits; 5845 + u64 lockstart, lockend; 5784 5846 u64 len = bh_result->b_size; 5785 5847 struct btrfs_trans_handle *trans; 5848 + int unlock_bits = EXTENT_LOCKED; 5849 + int ret; 5850 + 5851 + if (create) { 5852 + ret = btrfs_delalloc_reserve_space(inode, len); 5853 + if (ret) 5854 + return ret; 5855 + unlock_bits |= EXTENT_DELALLOC | EXTENT_DIRTY; 5856 + } else { 5857 + len = min_t(u64, len, root->sectorsize); 5858 + } 5859 + 5860 + lockstart = start; 5861 + lockend = start + len - 1; 5862 + 5863 + /* 5864 + * If this errors out it's because we couldn't invalidate pagecache for 5865 + * this range and we need to fallback to buffered. 
5866 + */ 5867 + if (lock_extent_direct(inode, lockstart, lockend, &cached_state, create)) 5868 + return -ENOTBLK; 5869 + 5870 + if (create) { 5871 + ret = set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 5872 + lockend, EXTENT_DELALLOC, NULL, 5873 + &cached_state, GFP_NOFS); 5874 + if (ret) 5875 + goto unlock_err; 5876 + } 5786 5877 5787 5878 em = btrfs_get_extent(inode, NULL, 0, start, len, 0); 5788 - if (IS_ERR(em)) 5789 - return PTR_ERR(em); 5879 + if (IS_ERR(em)) { 5880 + ret = PTR_ERR(em); 5881 + goto unlock_err; 5882 + } 5790 5883 5791 5884 /* 5792 5885 * Ok for INLINE and COMPRESSED extents we need to fallback on buffered ··· 5899 5804 if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) || 5900 5805 em->block_start == EXTENT_MAP_INLINE) { 5901 5806 free_extent_map(em); 5902 - return -ENOTBLK; 5807 + ret = -ENOTBLK; 5808 + goto unlock_err; 5903 5809 } 5904 5810 5905 5811 /* Just a good old fashioned hole, return */ 5906 5812 if (!create && (em->block_start == EXTENT_MAP_HOLE || 5907 5813 test_bit(EXTENT_FLAG_PREALLOC, &em->flags))) { 5908 5814 free_extent_map(em); 5909 - /* DIO will do one hole at a time, so just unlock a sector */ 5910 - unlock_extent(&BTRFS_I(inode)->io_tree, start, 5911 - start + root->sectorsize - 1); 5912 - return 0; 5815 + ret = 0; 5816 + goto unlock_err; 5913 5817 } 5914 5818 5915 5819 /* ··· 5921 5827 * 5922 5828 */ 5923 5829 if (!create) { 5924 - len = em->len - (start - em->start); 5925 - goto map; 5830 + len = min(len, em->len - (start - em->start)); 5831 + lockstart = start + len; 5832 + goto unlock; 5926 5833 } 5927 5834 5928 5835 if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags) || ··· 5955 5860 btrfs_end_transaction(trans, root); 5956 5861 if (ret) { 5957 5862 free_extent_map(em); 5958 - return ret; 5863 + goto unlock_err; 5959 5864 } 5960 5865 goto unlock; 5961 5866 } ··· 5968 5873 */ 5969 5874 len = bh_result->b_size; 5970 5875 em = btrfs_new_extent_direct(inode, em, start, len); 5971 - if (IS_ERR(em)) 5972 - return 
PTR_ERR(em); 5876 + if (IS_ERR(em)) { 5877 + ret = PTR_ERR(em); 5878 + goto unlock_err; 5879 + } 5973 5880 len = min(len, em->len - (start - em->start)); 5974 5881 unlock: 5975 - clear_extent_bit(&BTRFS_I(inode)->io_tree, start, start + len - 1, 5976 - EXTENT_LOCKED | EXTENT_DELALLOC | EXTENT_DIRTY, 1, 5977 - 0, NULL, GFP_NOFS); 5978 - map: 5979 5882 bh_result->b_blocknr = (em->block_start + (start - em->start)) >> 5980 5883 inode->i_blkbits; 5981 5884 bh_result->b_size = len; ··· 5991 5898 i_size_write(inode, start + len); 5992 5899 } 5993 5900 5901 + /* 5902 + * In the case of write we need to clear and unlock the entire range, 5903 + * in the case of read we need to unlock only the end area that we 5904 + * aren't using if there is any left over space. 5905 + */ 5906 + if (lockstart < lockend) { 5907 + if (create && len < lockend - lockstart) { 5908 + clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 5909 + lockstart + len - 1, unlock_bits, 1, 0, 5910 + &cached_state, GFP_NOFS); 5911 + /* 5912 + * Beside unlock, we also need to cleanup reserved space 5913 + * for the left range by attaching EXTENT_DO_ACCOUNTING. 
5914 + */ 5915 + clear_extent_bit(&BTRFS_I(inode)->io_tree, 5916 + lockstart + len, lockend, 5917 + unlock_bits | EXTENT_DO_ACCOUNTING, 5918 + 1, 0, NULL, GFP_NOFS); 5919 + } else { 5920 + clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 5921 + lockend, unlock_bits, 1, 0, 5922 + &cached_state, GFP_NOFS); 5923 + } 5924 + } else { 5925 + free_extent_state(cached_state); 5926 + } 5927 + 5994 5928 free_extent_map(em); 5995 5929 5996 5930 return 0; 5931 + 5932 + unlock_err: 5933 + if (create) 5934 + unlock_bits |= EXTENT_DO_ACCOUNTING; 5935 + 5936 + clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend, 5937 + unlock_bits, 1, 0, &cached_state, GFP_NOFS); 5938 + return ret; 5997 5939 } 5998 5940 5999 5941 struct btrfs_dio_private { ··· 6036 5908 u64 logical_offset; 6037 5909 u64 disk_bytenr; 6038 5910 u64 bytes; 6039 - u32 *csums; 6040 5911 void *private; 6041 5912 6042 5913 /* number of bios pending for this dio */ ··· 6055 5928 struct inode *inode = dip->inode; 6056 5929 struct btrfs_root *root = BTRFS_I(inode)->root; 6057 5930 u64 start; 6058 - u32 *private = dip->csums; 6059 5931 6060 5932 start = dip->logical_offset; 6061 5933 do { ··· 6062 5936 struct page *page = bvec->bv_page; 6063 5937 char *kaddr; 6064 5938 u32 csum = ~(u32)0; 5939 + u64 private = ~(u32)0; 6065 5940 unsigned long flags; 6066 5941 5942 + if (get_state_private(&BTRFS_I(inode)->io_tree, 5943 + start, &private)) 5944 + goto failed; 6067 5945 local_irq_save(flags); 6068 5946 kaddr = kmap_atomic(page); 6069 5947 csum = btrfs_csum_data(root, kaddr + bvec->bv_offset, ··· 6077 5947 local_irq_restore(flags); 6078 5948 6079 5949 flush_dcache_page(bvec->bv_page); 6080 - if (csum != *private) { 5950 + if (csum != private) { 5951 + failed: 6081 5952 printk(KERN_ERR "btrfs csum failed ino %llu off" 6082 5953 " %llu csum %u private %u\n", 6083 5954 (unsigned long long)btrfs_ino(inode), 6084 5955 (unsigned long long)start, 6085 - csum, *private); 5956 + csum, (unsigned)private); 6086 5957 err = 
-EIO; 6087 5958 } 6088 5959 } 6089 5960 6090 5961 start += bvec->bv_len; 6091 - private++; 6092 5962 bvec++; 6093 5963 } while (bvec <= bvec_end); 6094 5964 ··· 6096 5966 dip->logical_offset + dip->bytes - 1); 6097 5967 bio->bi_private = dip->private; 6098 5968 6099 - kfree(dip->csums); 6100 5969 kfree(dip); 6101 5970 6102 5971 /* If we had a csum failure make sure to clear the uptodate flag */ ··· 6201 6072 6202 6073 static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode, 6203 6074 int rw, u64 file_offset, int skip_sum, 6204 - u32 *csums, int async_submit) 6075 + int async_submit) 6205 6076 { 6206 6077 int write = rw & REQ_WRITE; 6207 6078 struct btrfs_root *root = BTRFS_I(inode)->root; ··· 6234 6105 if (ret) 6235 6106 goto err; 6236 6107 } else if (!skip_sum) { 6237 - ret = btrfs_lookup_bio_sums_dio(root, inode, bio, 6238 - file_offset, csums); 6108 + ret = btrfs_lookup_bio_sums_dio(root, inode, bio, file_offset); 6239 6109 if (ret) 6240 6110 goto err; 6241 6111 } ··· 6260 6132 u64 submit_len = 0; 6261 6133 u64 map_length; 6262 6134 int nr_pages = 0; 6263 - u32 *csums = dip->csums; 6264 6135 int ret = 0; 6265 6136 int async_submit = 0; 6266 - int write = rw & REQ_WRITE; 6267 6137 6268 6138 map_length = orig_bio->bi_size; 6269 6139 ret = btrfs_map_block(map_tree, READ, start_sector << 9, ··· 6297 6171 atomic_inc(&dip->pending_bios); 6298 6172 ret = __btrfs_submit_dio_bio(bio, inode, rw, 6299 6173 file_offset, skip_sum, 6300 - csums, async_submit); 6174 + async_submit); 6301 6175 if (ret) { 6302 6176 bio_put(bio); 6303 6177 atomic_dec(&dip->pending_bios); 6304 6178 goto out_err; 6305 6179 } 6306 6180 6307 - /* Write's use the ordered csums */ 6308 - if (!write && !skip_sum) 6309 - csums = csums + nr_pages; 6310 6181 start_sector += submit_len >> 9; 6311 6182 file_offset += submit_len; 6312 6183 ··· 6333 6210 6334 6211 submit: 6335 6212 ret = __btrfs_submit_dio_bio(bio, inode, rw, file_offset, skip_sum, 6336 - csums, async_submit); 6213 + 
async_submit); 6337 6214 if (!ret) 6338 6215 return 0; 6339 6216 ··· 6368 6245 if (!dip) { 6369 6246 ret = -ENOMEM; 6370 6247 goto free_ordered; 6371 - } 6372 - dip->csums = NULL; 6373 - 6374 - /* Write's use the ordered csum stuff, so we don't need dip->csums */ 6375 - if (!write && !skip_sum) { 6376 - dip->csums = kmalloc(sizeof(u32) * bio->bi_vcnt, GFP_NOFS); 6377 - if (!dip->csums) { 6378 - kfree(dip); 6379 - ret = -ENOMEM; 6380 - goto free_ordered; 6381 - } 6382 6248 } 6383 6249 6384 6250 dip->private = bio->bi_private; ··· 6453 6341 out: 6454 6342 return retval; 6455 6343 } 6344 + 6456 6345 static ssize_t btrfs_direct_IO(int rw, struct kiocb *iocb, 6457 6346 const struct iovec *iov, loff_t offset, 6458 6347 unsigned long nr_segs) 6459 6348 { 6460 6349 struct file *file = iocb->ki_filp; 6461 6350 struct inode *inode = file->f_mapping->host; 6462 - struct btrfs_ordered_extent *ordered; 6463 - struct extent_state *cached_state = NULL; 6464 - u64 lockstart, lockend; 6465 - ssize_t ret; 6466 - int writing = rw & WRITE; 6467 - int write_bits = 0; 6468 - size_t count = iov_length(iov, nr_segs); 6469 6351 6470 6352 if (check_direct_IO(BTRFS_I(inode)->root, rw, iocb, iov, 6471 - offset, nr_segs)) { 6353 + offset, nr_segs)) 6472 6354 return 0; 6473 - } 6474 6355 6475 - lockstart = offset; 6476 - lockend = offset + count - 1; 6477 - 6478 - if (writing) { 6479 - ret = btrfs_delalloc_reserve_space(inode, count); 6480 - if (ret) 6481 - goto out; 6482 - } 6483 - 6484 - while (1) { 6485 - lock_extent_bits(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6486 - 0, &cached_state); 6487 - /* 6488 - * We're concerned with the entire range that we're going to be 6489 - * doing DIO to, so we need to make sure theres no ordered 6490 - * extents in this range. 
6491 - */ 6492 - ordered = btrfs_lookup_ordered_range(inode, lockstart, 6493 - lockend - lockstart + 1); 6494 - 6495 - /* 6496 - * We need to make sure there are no buffered pages in this 6497 - * range either, we could have raced between the invalidate in 6498 - * generic_file_direct_write and locking the extent. The 6499 - * invalidate needs to happen so that reads after a write do not 6500 - * get stale data. 6501 - */ 6502 - if (!ordered && (!writing || 6503 - !test_range_bit(&BTRFS_I(inode)->io_tree, 6504 - lockstart, lockend, EXTENT_UPTODATE, 0, 6505 - cached_state))) 6506 - break; 6507 - 6508 - unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6509 - &cached_state, GFP_NOFS); 6510 - 6511 - if (ordered) { 6512 - btrfs_start_ordered_extent(inode, ordered, 1); 6513 - btrfs_put_ordered_extent(ordered); 6514 - } else { 6515 - /* Screw you mmap */ 6516 - ret = filemap_write_and_wait_range(file->f_mapping, 6517 - lockstart, 6518 - lockend); 6519 - if (ret) 6520 - goto out; 6521 - 6522 - /* 6523 - * If we found a page that couldn't be invalidated just 6524 - * fall back to buffered. 
6525 - */ 6526 - ret = invalidate_inode_pages2_range(file->f_mapping, 6527 - lockstart >> PAGE_CACHE_SHIFT, 6528 - lockend >> PAGE_CACHE_SHIFT); 6529 - if (ret) { 6530 - if (ret == -EBUSY) 6531 - ret = 0; 6532 - goto out; 6533 - } 6534 - } 6535 - 6536 - cond_resched(); 6537 - } 6538 - 6539 - /* 6540 - * we don't use btrfs_set_extent_delalloc because we don't want 6541 - * the dirty or uptodate bits 6542 - */ 6543 - if (writing) { 6544 - write_bits = EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING; 6545 - ret = set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6546 - EXTENT_DELALLOC, NULL, &cached_state, 6547 - GFP_NOFS); 6548 - if (ret) { 6549 - clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 6550 - lockend, EXTENT_LOCKED | write_bits, 6551 - 1, 0, &cached_state, GFP_NOFS); 6552 - goto out; 6553 - } 6554 - } 6555 - 6556 - free_extent_state(cached_state); 6557 - cached_state = NULL; 6558 - 6559 - ret = __blockdev_direct_IO(rw, iocb, inode, 6356 + return __blockdev_direct_IO(rw, iocb, inode, 6560 6357 BTRFS_I(inode)->root->fs_info->fs_devices->latest_bdev, 6561 6358 iov, offset, nr_segs, btrfs_get_blocks_direct, NULL, 6562 6359 btrfs_submit_direct, 0); 6563 - 6564 - if (ret < 0 && ret != -EIOCBQUEUED) { 6565 - clear_extent_bit(&BTRFS_I(inode)->io_tree, offset, 6566 - offset + iov_length(iov, nr_segs) - 1, 6567 - EXTENT_LOCKED | write_bits, 1, 0, 6568 - &cached_state, GFP_NOFS); 6569 - } else if (ret >= 0 && ret < iov_length(iov, nr_segs)) { 6570 - /* 6571 - * We're falling back to buffered, unlock the section we didn't 6572 - * do IO on. 6573 - */ 6574 - clear_extent_bit(&BTRFS_I(inode)->io_tree, offset + ret, 6575 - offset + iov_length(iov, nr_segs) - 1, 6576 - EXTENT_LOCKED | write_bits, 1, 0, 6577 - &cached_state, GFP_NOFS); 6578 - } 6579 - out: 6580 - free_extent_state(cached_state); 6581 - return ret; 6582 6360 } 6583 6361 6584 6362 static int btrfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
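The DIO range locking that used to live inline in btrfs_direct_IO() now runs per get_blocks call via lock_extent_direct(). Its retry loop — lock, check for ordered extents and raced-in buffered pages, back off, and retry — can be modeled in userspace; the dict fields and return values here are toy stand-ins, not kernel APIs:

```python
def lock_extent_direct(state, writing):
    # Modeled on the lock_extent_direct() loop in the diff above; 'state'
    # is a toy stand-in for the inode's io_tree, ordered extents and
    # page cache.
    while True:
        state["locked"] = True                  # lock_extent_bits()
        ordered = state["ordered_extents"]      # btrfs_lookup_ordered_range()
        dirty = writing and state["uptodate_pages"]
        if not ordered and not dirty:
            return 0                            # range is quiet: do DIO
        state["locked"] = False                 # unlock_extent_cached()
        if ordered:
            state["ordered_extents"] = []       # btrfs_start_ordered_extent()
        elif state["invalidate_fails"]:
            return -1                           # fall back to buffered I/O
        else:
            state["uptodate_pages"] = False     # invalidate_inode_pages2_range()
```

On success the extent stays locked for the caller; a page that cannot be invalidated surfaces as a failure, which btrfs_get_blocks_direct() maps to -ENOTBLK to force the buffered path.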
+1 -1
fs/btrfs/ioctl.c
··· 424 424 uuid_le_gen(&new_uuid); 425 425 memcpy(root_item.uuid, new_uuid.b, BTRFS_UUID_SIZE); 426 426 root_item.otime.sec = cpu_to_le64(cur_time.tv_sec); 427 - root_item.otime.nsec = cpu_to_le64(cur_time.tv_nsec); 427 + root_item.otime.nsec = cpu_to_le32(cur_time.tv_nsec); 428 428 root_item.ctime = root_item.otime; 429 429 btrfs_set_root_ctransid(&root_item, trans->transid); 430 430 btrfs_set_root_otransid(&root_item, trans->transid);
+1 -1
fs/btrfs/locking.c
··· 67 67 { 68 68 if (eb->lock_nested) { 69 69 read_lock(&eb->lock); 70 - if (&eb->lock_nested && current->pid == eb->lock_owner) { 70 + if (eb->lock_nested && current->pid == eb->lock_owner) { 71 71 read_unlock(&eb->lock); 72 72 return; 73 73 }
+9 -3
fs/btrfs/qgroup.c
··· 1364 1364 spin_lock(&fs_info->qgroup_lock); 1365 1365 1366 1366 dstgroup = add_qgroup_rb(fs_info, objectid); 1367 - if (!dstgroup) 1367 + if (IS_ERR(dstgroup)) { 1368 + ret = PTR_ERR(dstgroup); 1368 1369 goto unlock; 1370 + } 1369 1371 1370 1372 if (srcid) { 1371 1373 srcgroup = find_qgroup_rb(fs_info, srcid); 1372 - if (!srcgroup) 1374 + if (!srcgroup) { 1375 + ret = -EINVAL; 1373 1376 goto unlock; 1377 + } 1374 1378 dstgroup->rfer = srcgroup->rfer - level_size; 1375 1379 dstgroup->rfer_cmpr = srcgroup->rfer_cmpr - level_size; 1376 1380 srcgroup->excl = level_size; ··· 1383 1379 qgroup_dirty(fs_info, srcgroup); 1384 1380 } 1385 1381 1386 - if (!inherit) 1382 + if (!inherit) { 1383 + ret = -EINVAL; 1387 1384 goto unlock; 1385 + } 1388 1386 1389 1387 i_qgroups = (u64 *)(inherit + 1); 1390 1388 for (i = 0; i < inherit->num_qgroups; ++i) {
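The qgroup fix matters because add_qgroup_rb() reports failure with an ERR_PTR-encoded pointer rather than NULL, so the old `if (!dstgroup)` check let errors slip through. A toy Python model of the ERR_PTR/IS_ERR convention (64-bit pointers assumed; the real macros live in include/linux/err.h):

```python
# Toy model: errors are encoded in the top 4095 "addresses" of the
# pointer space, so a failed allocation is non-NULL and a bare
# "if (!ptr)" check, like the one removed above, misses it.
MAX_ERRNO = 4095
ADDR_SPACE = 1 << 64          # assumes 64-bit pointers

def ERR_PTR(error):           # error is negative, e.g. -12 for -ENOMEM
    return ADDR_SPACE + error

def IS_ERR(ptr):
    return ptr >= ADDR_SPACE - MAX_ERRNO

def PTR_ERR(ptr):
    return ptr - ADDR_SPACE

p = ERR_PTR(-12)
assert p != 0                 # "if (!p)" would NOT catch this
assert IS_ERR(p) and PTR_ERR(p) == -12
```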
+2 -2
fs/btrfs/root-tree.c
··· 544 544 struct timespec ct = CURRENT_TIME; 545 545 546 546 spin_lock(&root->root_times_lock); 547 - item->ctransid = trans->transid; 547 + item->ctransid = cpu_to_le64(trans->transid); 548 548 item->ctime.sec = cpu_to_le64(ct.tv_sec); 549 - item->ctime.nsec = cpu_to_le64(ct.tv_nsec); 549 + item->ctime.nsec = cpu_to_le32(ct.tv_nsec); 550 550 spin_unlock(&root->root_times_lock); 551 551 }
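The cpu_to_le32() changes above follow the on-disk layout: btrfs timespecs store seconds as a little-endian 64-bit value and nanoseconds as 32 bits, so a 64-bit store into nsec writes the wrong width and spills past the field. A Python sketch of the packing (field widths taken from the fix itself):

```python
import struct

# On-disk btrfs timespec: __le64 seconds followed by __le32 nanoseconds,
# 12 bytes total. A 64-bit store for nsec would clobber the 4 bytes that
# follow the field in the containing structure.
def pack_timespec(sec, nsec):
    return struct.pack("<QI", sec, nsec)

blob = pack_timespec(1346000000, 500000000)
assert len(blob) == 12
```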
+11 -4
fs/btrfs/super.c
··· 838 838 struct btrfs_trans_handle *trans; 839 839 struct btrfs_fs_info *fs_info = btrfs_sb(sb); 840 840 struct btrfs_root *root = fs_info->tree_root; 841 - int ret; 842 841 843 842 trace_btrfs_sync_fs(wait); 844 843 ··· 848 849 849 850 btrfs_wait_ordered_extents(root, 0, 0); 850 851 851 - trans = btrfs_start_transaction(root, 0); 852 + spin_lock(&fs_info->trans_lock); 853 + if (!fs_info->running_transaction) { 854 + spin_unlock(&fs_info->trans_lock); 855 + return 0; 856 + } 857 + spin_unlock(&fs_info->trans_lock); 858 + 859 + trans = btrfs_join_transaction(root); 852 860 if (IS_ERR(trans)) 853 861 return PTR_ERR(trans); 854 - ret = btrfs_commit_transaction(trans, root); 855 - return ret; 862 + return btrfs_commit_transaction(trans, root); 856 863 } 857 864 858 865 static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) ··· 1535 1530 while (cur_devices) { 1536 1531 head = &cur_devices->devices; 1537 1532 list_for_each_entry(dev, head, dev_list) { 1533 + if (dev->missing) 1534 + continue; 1538 1535 if (!first_dev || dev->devid < first_dev->devid) 1539 1536 first_dev = dev; 1540 1537 }
+2 -1
fs/btrfs/transaction.c
··· 1031 1031 1032 1032 btrfs_i_size_write(parent_inode, parent_inode->i_size + 1033 1033 dentry->d_name.len * 2); 1034 + parent_inode->i_mtime = parent_inode->i_ctime = CURRENT_TIME; 1034 1035 ret = btrfs_update_inode(trans, parent_root, parent_inode); 1035 1036 if (ret) 1036 1037 goto abort_trans_dput; ··· 1067 1066 memcpy(new_root_item->parent_uuid, root->root_item.uuid, 1068 1067 BTRFS_UUID_SIZE); 1069 1068 new_root_item->otime.sec = cpu_to_le64(cur_time.tv_sec); 1070 - new_root_item->otime.nsec = cpu_to_le64(cur_time.tv_nsec); 1069 + new_root_item->otime.nsec = cpu_to_le32(cur_time.tv_nsec); 1071 1070 btrfs_set_root_otransid(new_root_item, trans->transid); 1072 1071 memset(&new_root_item->stime, 0, sizeof(new_root_item->stime)); 1073 1072 memset(&new_root_item->rtime, 0, sizeof(new_root_item->rtime));
+6 -27
fs/btrfs/volumes.c
··· 227 227 cur = pending; 228 228 pending = pending->bi_next; 229 229 cur->bi_next = NULL; 230 - atomic_dec(&fs_info->nr_async_bios); 231 230 232 - if (atomic_read(&fs_info->nr_async_bios) < limit && 231 + if (atomic_dec_return(&fs_info->nr_async_bios) < limit && 233 232 waitqueue_active(&fs_info->async_submit_wait)) 234 233 wake_up(&fs_info->async_submit_wait); 235 234 ··· 568 569 memcpy(new_device, device, sizeof(*new_device)); 569 570 570 571 /* Safe because we are under uuid_mutex */ 571 - name = rcu_string_strdup(device->name->str, GFP_NOFS); 572 - BUG_ON(device->name && !name); /* -ENOMEM */ 573 - rcu_assign_pointer(new_device->name, name); 572 + if (device->name) { 573 + name = rcu_string_strdup(device->name->str, GFP_NOFS); 574 + BUG_ON(device->name && !name); /* -ENOMEM */ 575 + rcu_assign_pointer(new_device->name, name); 576 + } 574 577 new_device->bdev = NULL; 575 578 new_device->writeable = 0; 576 579 new_device->in_fs_metadata = 0; ··· 4604 4603 } 4605 4604 free_extent_buffer(sb); 4606 4605 return ret; 4607 - } 4608 - 4609 - struct btrfs_device *btrfs_find_device_for_logical(struct btrfs_root *root, 4610 - u64 logical, int mirror_num) 4611 - { 4612 - struct btrfs_mapping_tree *map_tree = &root->fs_info->mapping_tree; 4613 - int ret; 4614 - u64 map_length = 0; 4615 - struct btrfs_bio *bbio = NULL; 4616 - struct btrfs_device *device; 4617 - 4618 - BUG_ON(mirror_num == 0); 4619 - ret = btrfs_map_block(map_tree, WRITE, logical, &map_length, &bbio, 4620 - mirror_num); 4621 - if (ret) { 4622 - BUG_ON(bbio != NULL); 4623 - return NULL; 4624 - } 4625 - BUG_ON(mirror_num != bbio->mirror_num); 4626 - device = bbio->stripes[mirror_num - 1].dev; 4627 - kfree(bbio); 4628 - return device; 4629 4606 } 4630 4607 4631 4608 int btrfs_read_chunk_tree(struct btrfs_root *root)
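Folding atomic_dec() plus a separate atomic_read() into atomic_dec_return() closes a window where another thread changes the counter between the two operations, so the decrementer never observes the value its own decrement produced. A deterministic Python model of that interleaving (plain ints stand in for atomics; the scripted "thread B" step replaces a real race):

```python
counter = {"v": 2}

def dec(c):             # atomic_dec(): no return value
    c["v"] -= 1

def dec_return(c):      # atomic_dec_return(): value after *this* decrement
    c["v"] -= 1
    return c["v"]

# Buggy pattern: A decrements 2 -> 1, but before A's separate re-read,
# B submits more bios and bumps the counter back above the limit.
dec(counter)            # A
counter["v"] += 3       # B, interleaved
assert counter["v"] == 4    # A's read misses the 1 it produced: no wakeup

# Fixed pattern: the returned value is the one A's decrement produced,
# regardless of later activity.
counter = {"v": 2}
seen = dec_return(counter)  # A
counter["v"] += 3           # B, interleaved
assert seen == 1            # A still knows it crossed the threshold
```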
-2
fs/btrfs/volumes.h
··· 289 289 int btrfs_chunk_readonly(struct btrfs_root *root, u64 chunk_offset); 290 290 int find_free_dev_extent(struct btrfs_device *device, u64 num_bytes, 291 291 u64 *start, u64 *max_avail); 292 - struct btrfs_device *btrfs_find_device_for_logical(struct btrfs_root *root, 293 - u64 logical, int mirror_num); 294 292 void btrfs_dev_stat_print_on_error(struct btrfs_device *device); 295 293 void btrfs_dev_stat_inc_and_print(struct btrfs_device *dev, int index); 296 294 int btrfs_get_dev_stats(struct btrfs_root *root,
+30 -36
fs/buffer.c
··· 914 914 /* 915 915 * Initialise the state of a blockdev page's buffers. 916 916 */ 917 - static void 917 + static sector_t 918 918 init_page_buffers(struct page *page, struct block_device *bdev, 919 919 sector_t block, int size) 920 920 { ··· 936 936 block++; 937 937 bh = bh->b_this_page; 938 938 } while (bh != head); 939 + 940 + /* 941 + * Caller needs to validate requested block against end of device. 942 + */ 943 + return end_block; 939 944 } 940 945 941 946 /* 942 947 * Create the page-cache page that contains the requested block. 943 948 * 944 - * This is user purely for blockdev mappings. 949 + * This is used purely for blockdev mappings. 945 950 */ 946 - static struct page * 951 + static int 947 952 grow_dev_page(struct block_device *bdev, sector_t block, 948 - pgoff_t index, int size) 953 + pgoff_t index, int size, int sizebits) 949 954 { 950 955 struct inode *inode = bdev->bd_inode; 951 956 struct page *page; 952 957 struct buffer_head *bh; 958 + sector_t end_block; 959 + int ret = 0; /* Will call free_more_memory() */ 953 960 954 961 page = find_or_create_page(inode->i_mapping, index, 955 962 (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE); 956 963 if (!page) 957 - return NULL; 964 + return ret; 958 965 959 966 BUG_ON(!PageLocked(page)); 960 967 961 968 if (page_has_buffers(page)) { 962 969 bh = page_buffers(page); 963 970 if (bh->b_size == size) { 964 - init_page_buffers(page, bdev, block, size); 965 - return page; 971 + end_block = init_page_buffers(page, bdev, 972 + index << sizebits, size); 973 + goto done; 966 974 } 967 975 if (!try_to_free_buffers(page)) 968 976 goto failed; ··· 990 982 */ 991 983 spin_lock(&inode->i_mapping->private_lock); 992 984 link_dev_buffers(page, bh); 993 - init_page_buffers(page, bdev, block, size); 985 + end_block = init_page_buffers(page, bdev, index << sizebits, size); 994 986 spin_unlock(&inode->i_mapping->private_lock); 995 - return page; 996 - 987 + done: 988 + ret = (block < end_block) ? 
1 : -ENXIO; 997 989 failed: 998 990 unlock_page(page); 999 991 page_cache_release(page); 1000 - return NULL; 992 + return ret; 1001 993 } 1002 994 1003 995 /* ··· 1007 999 static int 1008 1000 grow_buffers(struct block_device *bdev, sector_t block, int size) 1009 1001 { 1010 - struct page *page; 1011 1002 pgoff_t index; 1012 1003 int sizebits; 1013 1004 ··· 1030 1023 bdevname(bdev, b)); 1031 1024 return -EIO; 1032 1025 } 1033 - block = index << sizebits; 1026 + 1034 1027 /* Create a page with the proper size buffers.. */ 1035 - page = grow_dev_page(bdev, block, index, size); 1036 - if (!page) 1037 - return 0; 1038 - unlock_page(page); 1039 - page_cache_release(page); 1040 - return 1; 1028 + return grow_dev_page(bdev, block, index, size, sizebits); 1041 1029 } 1042 1030 1043 1031 static struct buffer_head * 1044 1032 __getblk_slow(struct block_device *bdev, sector_t block, int size) 1045 1033 { 1046 - int ret; 1047 - struct buffer_head *bh; 1048 - 1049 1034 /* Size must be multiple of hard sectorsize */ 1050 1035 if (unlikely(size & (bdev_logical_block_size(bdev)-1) || 1051 1036 (size < 512 || size > PAGE_SIZE))) { ··· 1050 1051 return NULL; 1051 1052 } 1052 1053 1053 - retry: 1054 - bh = __find_get_block(bdev, block, size); 1055 - if (bh) 1056 - return bh; 1054 + for (;;) { 1055 + struct buffer_head *bh; 1056 + int ret; 1057 1057 1058 - ret = grow_buffers(bdev, block, size); 1059 - if (ret == 0) { 1060 - free_more_memory(); 1061 - goto retry; 1062 - } else if (ret > 0) { 1063 1058 bh = __find_get_block(bdev, block, size); 1064 1059 if (bh) 1065 1060 return bh; 1061 + 1062 + ret = grow_buffers(bdev, block, size); 1063 + if (ret < 0) 1064 + return NULL; 1065 + if (ret == 0) 1066 + free_more_memory(); 1066 1067 } 1067 - return NULL; 1068 1068 } 1069 1069 1070 1070 /* ··· 1318 1320 * __getblk will locate (and, if necessary, create) the buffer_head 1319 1321 * which corresponds to the passed block_device, block and size. 
The 1320 1322 * returned buffer has its reference count incremented. 1321 - * 1322 - * __getblk() cannot fail - it just keeps trying. If you pass it an 1323 - * illegal block number, __getblk() will happily return a buffer_head 1324 - * which represents the non-existent block. Very weird. 1325 1323 * 1326 1324 * __getblk() will lock up the machine if grow_dev_page's try_to_free_buffers() 1327 1325 * attempt is failing. FIXME, perhaps?
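The `__getblk_slow()` rework above replaces the goto-based retry with a `for (;;)` loop in which the grow step returns a tri-state result. A minimal userspace sketch of that control flow, with all names made up (they only stand in for `__find_get_block()`/`grow_buffers()`):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the reworked __getblk_slow(): the grow step returns <0 for
 * a hard failure (e.g. -ENXIO past end of device), 0 for "reclaim
 * memory and retry", and >0 on success. */
struct buf { int ready; };

static struct buf storage;              /* pretend page-cache slot */
static int grow_calls;

static struct buf *find_block(void)     /* stands in for __find_get_block() */
{
	return storage.ready ? &storage : NULL;
}

static int grow_block(void)             /* stands in for grow_buffers() */
{
	if (++grow_calls < 2)
		return 0;               /* simulate transient memory pressure */
	storage.ready = 1;
	return 1;
}

static struct buf *getblk_slow(void)
{
	for (;;) {
		struct buf *bh = find_block();
		if (bh)
			return bh;

		int ret = grow_block();
		if (ret < 0)
			return NULL;    /* illegal block: fail instead of looping */
		/* ret == 0: would call free_more_memory(), then retry */
	}
}
```

The key difference from the old code is the `ret < 0` arm: an illegal block number now terminates the loop instead of returning a buffer_head for a non-existent block.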
+8 -3
fs/cifs/cifssmb.c
··· 1576 1576 /* result already set, check signature */ 1577 1577 if (server->sec_mode & 1578 1578 (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)) { 1579 - if (cifs_verify_signature(rdata->iov, rdata->nr_iov, 1580 - server, mid->sequence_number + 1)) 1581 - cERROR(1, "Unexpected SMB signature"); 1579 + int rc = 0; 1580 + 1581 + rc = cifs_verify_signature(rdata->iov, rdata->nr_iov, 1582 + server, 1583 + mid->sequence_number + 1); 1584 + if (rc) 1585 + cERROR(1, "SMB signature verification returned " 1586 + "error = %d", rc); 1582 1587 } 1583 1588 /* FIXME: should this be counted toward the initiating task? */ 1584 1589 task_io_account_read(rdata->bytes);
+1 -8
fs/cifs/dir.c
··· 356 356 cifs_create_set_dentry: 357 357 if (rc != 0) { 358 358 cFYI(1, "Create worked, get_inode_info failed rc = %d", rc); 359 + CIFSSMBClose(xid, tcon, *fileHandle); 359 360 goto out; 360 361 } 361 362 d_drop(direntry); 362 363 d_add(direntry, newinode); 363 - 364 - /* ENOENT for create? How weird... */ 365 - rc = -ENOENT; 366 - if (!newinode) { 367 - CIFSSMBClose(xid, tcon, *fileHandle); 368 - goto out; 369 - } 370 - rc = 0; 371 364 372 365 out: 373 366 kfree(buf);
+16 -8
fs/cifs/inode.c
··· 124 124 { 125 125 struct cifsInodeInfo *cifs_i = CIFS_I(inode); 126 126 struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb); 127 - unsigned long oldtime = cifs_i->time; 128 127 129 128 cifs_revalidate_cache(inode, fattr); 130 129 130 + spin_lock(&inode->i_lock); 131 131 inode->i_atime = fattr->cf_atime; 132 132 inode->i_mtime = fattr->cf_mtime; 133 133 inode->i_ctime = fattr->cf_ctime; ··· 148 148 else 149 149 cifs_i->time = jiffies; 150 150 151 - cFYI(1, "inode 0x%p old_time=%ld new_time=%ld", inode, 152 - oldtime, cifs_i->time); 153 - 154 151 cifs_i->delete_pending = fattr->cf_flags & CIFS_FATTR_DELETE_PENDING; 155 152 156 153 cifs_i->server_eof = fattr->cf_eof; ··· 155 158 * Can't safely change the file size here if the client is writing to 156 159 * it due to potential races. 157 160 */ 158 - spin_lock(&inode->i_lock); 159 161 if (is_size_safe_to_change(cifs_i, fattr->cf_eof)) { 160 162 i_size_write(inode, fattr->cf_eof); 161 163 ··· 855 859 856 860 if (rc && tcon->ipc) { 857 861 cFYI(1, "ipc connection - fake read inode"); 862 + spin_lock(&inode->i_lock); 858 863 inode->i_mode |= S_IFDIR; 859 864 set_nlink(inode, 2); 860 865 inode->i_op = &cifs_ipc_inode_ops; 861 866 inode->i_fop = &simple_dir_operations; 862 867 inode->i_uid = cifs_sb->mnt_uid; 863 868 inode->i_gid = cifs_sb->mnt_gid; 869 + spin_unlock(&inode->i_lock); 864 870 } else if (rc) { 865 871 iget_failed(inode); 866 872 inode = ERR_PTR(rc); ··· 1108 1110 goto out_close; 1109 1111 } 1110 1112 1113 + /* copied from fs/nfs/dir.c with small changes */ 1114 + static void 1115 + cifs_drop_nlink(struct inode *inode) 1116 + { 1117 + spin_lock(&inode->i_lock); 1118 + if (inode->i_nlink > 0) 1119 + drop_nlink(inode); 1120 + spin_unlock(&inode->i_lock); 1121 + } 1111 1122 1112 1123 /* 1113 1124 * If dentry->d_inode is null (usually meaning the cached dentry ··· 1173 1166 psx_del_no_retry: 1174 1167 if (!rc) { 1175 1168 if (inode) 1176 - drop_nlink(inode); 1169 + cifs_drop_nlink(inode); 1177 1170 } else if 
(rc == -ENOENT) { 1178 1171 d_drop(dentry); 1179 1172 } else if (rc == -ETXTBSY) { 1180 1173 rc = cifs_rename_pending_delete(full_path, dentry, xid); 1181 1174 if (rc == 0) 1182 - drop_nlink(inode); 1175 + cifs_drop_nlink(inode); 1183 1176 } else if ((rc == -EACCES) && (dosattr == 0) && inode) { 1184 1177 attrs = kzalloc(sizeof(*attrs), GFP_KERNEL); 1185 1178 if (attrs == NULL) { ··· 1248 1241 * setting nlink not necessary except in cases where we failed to get it 1249 1242 * from the server or was set bogus 1250 1243 */ 1244 + spin_lock(&dentry->d_inode->i_lock); 1251 1245 if ((dentry->d_inode) && (dentry->d_inode->i_nlink < 2)) 1252 1246 set_nlink(dentry->d_inode, 2); 1253 - 1247 + spin_unlock(&dentry->d_inode->i_lock); 1254 1248 mode &= ~current_umask(); 1255 1249 /* must turn on setgid bit if parent dir has it */ 1256 1250 if (inode->i_mode & S_ISGID)
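The `cifs_drop_nlink()` helper introduced above guards the decrement with the inode lock and a positivity check, so a racing second unlink cannot drive the link count below zero. A simplified userspace sketch of the pattern (the struct is a stand-in, not the kernel's `struct inode`):

```c
#include <assert.h>
#include <pthread.h>

/* Only decrement the link count under the lock and while it is still
 * positive, mirroring the cifs_drop_nlink() pattern. */
struct fake_inode {
	pthread_mutex_t i_lock;
	unsigned int i_nlink;
};

static void cifs_drop_nlink_sketch(struct fake_inode *inode)
{
	pthread_mutex_lock(&inode->i_lock);
	if (inode->i_nlink > 0)
		inode->i_nlink--;   /* an unconditional drop would underflow */
	pthread_mutex_unlock(&inode->i_lock);
}

/* Drop twice on a single-link inode; the second call is a no-op. */
static unsigned int demo_drop_twice(void)
{
	struct fake_inode inode = { PTHREAD_MUTEX_INITIALIZER, 1 };

	cifs_drop_nlink_sketch(&inode);
	cifs_drop_nlink_sketch(&inode);
	return inode.i_nlink;
}
```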
+2
fs/cifs/link.c
··· 433 433 if (old_file->d_inode) { 434 434 cifsInode = CIFS_I(old_file->d_inode); 435 435 if (rc == 0) { 436 + spin_lock(&old_file->d_inode->i_lock); 436 437 inc_nlink(old_file->d_inode); 438 + spin_unlock(&old_file->d_inode->i_lock); 437 439 /* BB should we make this contingent on superblock flag NOATIME? */ 438 440 /* old_file->d_inode->i_ctime = CURRENT_TIME;*/ 439 441 /* parent dir timestamps will update from srv
+9 -7
fs/cifs/smb2misc.c
··· 52 52 cERROR(1, "Bad protocol string signature header %x", 53 53 *(unsigned int *) hdr->ProtocolId); 54 54 if (mid != hdr->MessageId) 55 - cERROR(1, "Mids do not match"); 55 + cERROR(1, "Mids do not match: %llu and %llu", mid, 56 + hdr->MessageId); 56 57 } 57 58 cERROR(1, "Bad SMB detected. The Mid=%llu", hdr->MessageId); 58 59 return 1; ··· 108 107 * ie Validate the wct via smb2_struct_sizes table above 109 108 */ 110 109 111 - if (length < 2 + sizeof(struct smb2_hdr)) { 110 + if (length < sizeof(struct smb2_pdu)) { 112 111 if ((length >= sizeof(struct smb2_hdr)) && (hdr->Status != 0)) { 113 112 pdu->StructureSize2 = 0; 114 113 /* ··· 122 121 return 1; 123 122 } 124 123 if (len > CIFSMaxBufSize + MAX_SMB2_HDR_SIZE - 4) { 125 - cERROR(1, "SMB length greater than maximum, mid=%lld", mid); 124 + cERROR(1, "SMB length greater than maximum, mid=%llu", mid); 126 125 return 1; 127 126 } 128 127 129 128 if (check_smb2_hdr(hdr, mid)) 130 129 return 1; 131 130 132 - if (hdr->StructureSize != SMB2_HEADER_SIZE) { 133 - cERROR(1, "Illegal structure size %d", 131 + if (hdr->StructureSize != SMB2_HEADER_STRUCTURE_SIZE) { 132 + cERROR(1, "Illegal structure size %u", 134 133 le16_to_cpu(hdr->StructureSize)); 135 134 return 1; 136 135 } ··· 162 161 if (4 + len != clc_len) { 163 162 cFYI(1, "Calculated size %u length %u mismatch mid %llu", 164 163 clc_len, 4 + len, mid); 165 - if (clc_len == 4 + len + 1) /* BB FIXME (fix samba) */ 166 - return 0; /* BB workaround Samba 3 bug SessSetup rsp */ 164 + /* server can return one byte more */ 165 + if (clc_len == 4 + len + 1) 166 + return 0; 167 167 return 1; 168 168 } 169 169 return 0;
+6 -4
fs/cifs/smb2pdu.h
··· 87 87 88 88 #define SMB2_PROTO_NUMBER __constant_cpu_to_le32(0x424d53fe) 89 89 90 - #define SMB2_HEADER_SIZE __constant_le16_to_cpu(64) 91 - 92 - #define SMB2_ERROR_STRUCTURE_SIZE2 __constant_le16_to_cpu(9) 93 - 94 90 /* 95 91 * SMB2 Header Definition 96 92 * ··· 95 99 * "PDU" : "Protocol Data Unit" (ie a network "frame") 96 100 * 97 101 */ 102 + 103 + #define SMB2_HEADER_STRUCTURE_SIZE __constant_le16_to_cpu(64) 104 + 98 105 struct smb2_hdr { 99 106 __be32 smb2_buf_length; /* big endian on wire */ 100 107 /* length is only two or three bytes - with ··· 139 140 * command code name for the struct. Note that structures must be packed. 140 141 * 141 142 */ 143 + 144 + #define SMB2_ERROR_STRUCTURE_SIZE2 __constant_le16_to_cpu(9) 145 + 142 146 struct smb2_err_rsp { 143 147 struct smb2_hdr hdr; 144 148 __le16 StructureSize;
+6 -3
fs/cifs/transport.c
··· 503 503 /* convert the length into a more usable form */ 504 504 if (server->sec_mode & (SECMODE_SIGN_REQUIRED | SECMODE_SIGN_ENABLED)) { 505 505 struct kvec iov; 506 + int rc = 0; 506 507 507 508 iov.iov_base = mid->resp_buf; 508 509 iov.iov_len = len; 509 510 /* FIXME: add code to kill session */ 510 - if (cifs_verify_signature(&iov, 1, server, 511 - mid->sequence_number + 1) != 0) 512 - cERROR(1, "Unexpected SMB signature"); 511 + rc = cifs_verify_signature(&iov, 1, server, 512 + mid->sequence_number + 1); 513 + if (rc) 514 + cERROR(1, "SMB signature verification returned error = " 515 + "%d", rc); 513 516 } 514 517 515 518 /* BB special case reconnect tid and uid here? */
+5
fs/direct-io.c
··· 1062 1062 unsigned long user_addr; 1063 1063 size_t bytes; 1064 1064 struct buffer_head map_bh = { 0, }; 1065 + struct blk_plug plug; 1065 1066 1066 1067 if (rw & WRITE) 1067 1068 rw = WRITE_ODIRECT; ··· 1178 1177 PAGE_SIZE - user_addr / PAGE_SIZE); 1179 1178 } 1180 1179 1180 + blk_start_plug(&plug); 1181 + 1181 1182 for (seg = 0; seg < nr_segs; seg++) { 1182 1183 user_addr = (unsigned long)iov[seg].iov_base; 1183 1184 sdio.size += bytes = iov[seg].iov_len; ··· 1237 1234 } 1238 1235 if (sdio.bio) 1239 1236 dio_bio_submit(dio, &sdio); 1237 + 1238 + blk_finish_plug(&plug); 1240 1239 1241 1240 /* 1242 1241 * It is possible that, we return short IO due to end of file.
+5
fs/jbd/journal.c
··· 1113 1113 1114 1114 BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex)); 1115 1115 spin_lock(&journal->j_state_lock); 1116 + /* Is it already empty? */ 1117 + if (sb->s_start == 0) { 1118 + spin_unlock(&journal->j_state_lock); 1119 + return; 1120 + } 1116 1121 jbd_debug(1, "JBD: Marking journal as empty (seq %d)\n", 1117 1122 journal->j_tail_sequence); 1118 1123
+7 -8
fs/logfs/dev_bdev.c
··· 26 26 struct completion complete; 27 27 28 28 bio_init(&bio); 29 + bio.bi_max_vecs = 1; 29 30 bio.bi_io_vec = &bio_vec; 30 31 bio_vec.bv_page = page; 31 32 bio_vec.bv_len = PAGE_SIZE; ··· 96 95 struct address_space *mapping = super->s_mapping_inode->i_mapping; 97 96 struct bio *bio; 98 97 struct page *page; 99 - struct request_queue *q = bdev_get_queue(sb->s_bdev); 100 - unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9); 98 + unsigned int max_pages; 101 99 int i; 102 100 103 - if (max_pages > BIO_MAX_PAGES) 104 - max_pages = BIO_MAX_PAGES; 101 + max_pages = min(nr_pages, (size_t) bio_get_nr_vecs(super->s_bdev)); 102 + 105 103 bio = bio_alloc(GFP_NOFS, max_pages); 106 104 BUG_ON(!bio); 107 105 ··· 190 190 { 191 191 struct logfs_super *super = logfs_super(sb); 192 192 struct bio *bio; 193 - struct request_queue *q = bdev_get_queue(sb->s_bdev); 194 - unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9); 193 + unsigned int max_pages; 195 194 int i; 196 195 197 - if (max_pages > BIO_MAX_PAGES) 198 - max_pages = BIO_MAX_PAGES; 196 + max_pages = min(nr_pages, (size_t) bio_get_nr_vecs(super->s_bdev)); 197 + 199 198 bio = bio_alloc(GFP_NOFS, max_pages); 200 199 BUG_ON(!bio); 201 200
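The logfs change above sizes each bio as `min(nr_pages, bio_get_nr_vecs(bdev))` instead of deriving a limit from `queue_max_hw_sectors()` by hand. A generic sketch of that clamp (the device limit value here is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Clamp the requested page count to what the device can take in one
 * bio, as the patch does with min(). */
static size_t pages_for_bio(size_t nr_pages, size_t device_max_vecs)
{
	return nr_pages < device_max_vecs ? nr_pages : device_max_vecs;
}
```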
+17 -1
fs/logfs/inode.c
··· 156 156 call_rcu(&inode->i_rcu, logfs_i_callback); 157 157 } 158 158 159 + static void __logfs_destroy_meta_inode(struct inode *inode) 160 + { 161 + struct logfs_inode *li = logfs_inode(inode); 162 + BUG_ON(li->li_block); 163 + call_rcu(&inode->i_rcu, logfs_i_callback); 164 + } 165 + 159 166 static void logfs_destroy_inode(struct inode *inode) 160 167 { 161 168 struct logfs_inode *li = logfs_inode(inode); 169 + 170 + if (inode->i_ino < LOGFS_RESERVED_INOS) { 171 + /* 172 + * The reserved inodes are never destroyed unless we are in 173 + * unmount path. 174 + */ 175 + __logfs_destroy_meta_inode(inode); 176 + return; 177 + } 162 178 163 179 BUG_ON(list_empty(&li->li_freeing_list)); 164 180 spin_lock(&logfs_inode_lock); ··· 389 373 { 390 374 struct logfs_super *super = logfs_super(sb); 391 375 /* kill the meta-inodes */ 392 - iput(super->s_master_inode); 393 376 iput(super->s_segfile_inode); 377 + iput(super->s_master_inode); 394 378 iput(super->s_mapping_inode); 395 379 } 396 380

+1 -1
fs/logfs/journal.c
··· 565 565 index = ofs >> PAGE_SHIFT; 566 566 page_ofs = ofs & (PAGE_SIZE - 1); 567 567 568 - page = find_lock_page(mapping, index); 568 + page = find_or_create_page(mapping, index, GFP_NOFS); 569 569 BUG_ON(!page); 570 570 memcpy(wbuf, page_address(page) + page_ofs, super->s_writesize); 571 571 unlock_page(page);
-1
fs/logfs/readwrite.c
··· 2189 2189 return; 2190 2190 } 2191 2191 2192 - BUG_ON(inode->i_ino < LOGFS_RESERVED_INOS); 2193 2192 page = inode_to_page(inode); 2194 2193 BUG_ON(!page); /* FIXME: Use emergency page */ 2195 2194 logfs_put_write_page(page);
+1 -1
fs/logfs/segment.c
··· 886 886 887 887 static void map_invalidatepage(struct page *page, unsigned long l) 888 888 { 889 - BUG(); 889 + return; 890 890 } 891 891 892 892 static int map_releasepage(struct page *page, gfp_t g)
+2 -2
fs/nfsd/nfs4callback.c
··· 651 651 652 652 if (clp->cl_minorversion == 0) { 653 653 if (!clp->cl_cred.cr_principal && 654 - (clp->cl_flavor >= RPC_AUTH_GSS_KRB5)) 654 + (clp->cl_cred.cr_flavor >= RPC_AUTH_GSS_KRB5)) 655 655 return -EINVAL; 656 656 args.client_name = clp->cl_cred.cr_principal; 657 657 args.prognumber = conn->cb_prog, 658 658 args.protocol = XPRT_TRANSPORT_TCP; 659 - args.authflavor = clp->cl_flavor; 659 + args.authflavor = clp->cl_cred.cr_flavor; 660 660 clp->cl_cb_ident = conn->cb_ident; 661 661 } else { 662 662 if (!conn->cb_xprt)
-1
fs/nfsd/state.h
··· 231 231 nfs4_verifier cl_verifier; /* generated by client */ 232 232 time_t cl_time; /* time of last lease renewal */ 233 233 struct sockaddr_storage cl_addr; /* client ipaddress */ 234 - u32 cl_flavor; /* setclientid pseudoflavor */ 235 234 struct svc_cred cl_cred; /* setclientid principal */ 236 235 clientid_t cl_clientid; /* generated by server */ 237 236 nfs4_verifier cl_confirm; /* generated by server */
+1 -1
fs/quota/dquot.c
··· 1589 1589 goto out; 1590 1590 } 1591 1591 1592 - down_read(&sb_dqopt(inode->i_sb)->dqptr_sem); 1593 1592 for (cnt = 0; cnt < MAXQUOTAS; cnt++) 1594 1593 warn[cnt].w_type = QUOTA_NL_NOWARN; 1595 1594 1595 + down_read(&sb_dqopt(inode->i_sb)->dqptr_sem); 1596 1596 spin_lock(&dq_data_lock); 1597 1597 for (cnt = 0; cnt < MAXQUOTAS; cnt++) { 1598 1598 if (!dquots[cnt])
-2
fs/reiserfs/bitmap.c
··· 1334 1334 else if (bitmap == 0) 1335 1335 block = (REISERFS_DISK_OFFSET_IN_BYTES >> sb->s_blocksize_bits) + 1; 1336 1336 1337 - reiserfs_write_unlock(sb); 1338 1337 bh = sb_bread(sb, block); 1339 - reiserfs_write_lock(sb); 1340 1338 if (bh == NULL) 1341 1339 reiserfs_warning(sb, "sh-2029: %s: bitmap block (#%u) " 1342 1340 "reading failed", __func__, block);
+1 -1
fs/reiserfs/inode.c
··· 76 76 ; 77 77 } 78 78 out: 79 + reiserfs_write_unlock_once(inode->i_sb, depth); 79 80 clear_inode(inode); /* note this must go after the journal_end to prevent deadlock */ 80 81 dquot_drop(inode); 81 82 inode->i_blocks = 0; 82 - reiserfs_write_unlock_once(inode->i_sb, depth); 83 83 return; 84 84 85 85 no_delete:
+1 -1
fs/ubifs/debug.h
··· 167 167 #define ubifs_dbg_msg(type, fmt, ...) \ 168 168 pr_debug("UBIFS DBG " type ": " fmt "\n", ##__VA_ARGS__) 169 169 170 - #define DBG_KEY_BUF_LEN 32 170 + #define DBG_KEY_BUF_LEN 48 171 171 #define ubifs_dbg_msg_key(type, key, fmt, ...) do { \ 172 172 char __tmp_key_buf[DBG_KEY_BUF_LEN]; \ 173 173 pr_debug("UBIFS DBG " type ": " fmt "%s\n", ##__VA_ARGS__, \
+4 -1
fs/ubifs/lpt.c
··· 1749 1749 return 0; 1750 1750 1751 1751 out_err: 1752 - ubifs_lpt_free(c, 0); 1752 + if (wr) 1753 + ubifs_lpt_free(c, 1); 1754 + if (rd) 1755 + ubifs_lpt_free(c, 0); 1753 1756 return err; 1754 1757 } 1755 1758
+1 -1
fs/ubifs/recovery.c
··· 788 788 789 789 corrupted_rescan: 790 790 /* Re-scan the corrupted data with verbose messages */ 791 - ubifs_err("corruptio %d", ret); 791 + ubifs_err("corruption %d", ret); 792 792 ubifs_scan_a_node(c, buf, len, lnum, offs, 1); 793 793 corrupted: 794 794 ubifs_scanned_corruption(c, lnum, offs, buf);
+1 -2
fs/ubifs/replay.c
··· 1026 1026 c->replaying = 1; 1027 1027 lnum = c->ltail_lnum = c->lhead_lnum; 1028 1028 1029 - lnum = UBIFS_LOG_LNUM; 1030 1029 do { 1031 1030 err = replay_log_leb(c, lnum, 0, c->sbuf); 1032 1031 if (err == 1) ··· 1034 1035 if (err) 1035 1036 goto out; 1036 1037 lnum = ubifs_next_log_lnum(c, lnum); 1037 - } while (lnum != UBIFS_LOG_LNUM); 1038 + } while (lnum != c->ltail_lnum); 1038 1039 1039 1040 err = replay_buds(c); 1040 1041 if (err)
-3
fs/ubifs/super.c
··· 1157 1157 * 1158 1158 * This function mounts UBIFS file system. Returns zero in case of success and 1159 1159 * a negative error code in case of failure. 1160 - * 1161 - * Note, the function does not de-allocate resources it it fails half way 1162 - * through, and the caller has to do this instead. 1163 1160 */ 1164 1161 static int mount_ubifs(struct ubifs_info *c) 1165 1162 {
+4 -1
fs/udf/inode.c
··· 1124 1124 if (err) 1125 1125 return err; 1126 1126 down_write(&iinfo->i_data_sem); 1127 - } else 1127 + } else { 1128 1128 iinfo->i_lenAlloc = newsize; 1129 + goto set_size; 1130 + } 1129 1131 } 1130 1132 err = udf_extend_file(inode, newsize); 1131 1133 if (err) { 1132 1134 up_write(&iinfo->i_data_sem); 1133 1135 return err; 1134 1136 } 1137 + set_size: 1135 1138 truncate_setsize(inode, newsize); 1136 1139 up_write(&iinfo->i_data_sem); 1137 1140 } else {
+6 -1
fs/udf/super.c
··· 1344 1344 udf_err(sb, "error loading logical volume descriptor: " 1345 1345 "Partition table too long (%u > %lu)\n", table_len, 1346 1346 sb->s_blocksize - sizeof(*lvd)); 1347 + ret = 1; 1347 1348 goto out_bh; 1348 1349 } 1349 1350 ··· 1389 1388 UDF_ID_SPARABLE, 1390 1389 strlen(UDF_ID_SPARABLE))) { 1391 1390 if (udf_load_sparable_map(sb, map, 1392 - (struct sparablePartitionMap *)gpm) < 0) 1391 + (struct sparablePartitionMap *)gpm) < 0) { 1392 + ret = 1; 1393 1393 goto out_bh; 1394 + } 1394 1395 } else if (!strncmp(upm2->partIdent.ident, 1395 1396 UDF_ID_METADATA, 1396 1397 strlen(UDF_ID_METADATA))) { ··· 2003 2000 if (!silent) 2004 2001 pr_notice("Rescanning with blocksize %d\n", 2005 2002 UDF_DEFAULT_BLOCKSIZE); 2003 + brelse(sbi->s_lvid_bh); 2004 + sbi->s_lvid_bh = NULL; 2006 2005 uopt.blocksize = UDF_DEFAULT_BLOCKSIZE; 2007 2006 ret = udf_load_vrs(sb, &uopt, silent, &fileset); 2008 2007 }
+4 -2
fs/xfs/xfs_discard.c
··· 179 179 * used by the fstrim application. In the end it really doesn't 180 180 * matter as trimming blocks is an advisory interface. 181 181 */ 182 + if (range.start >= XFS_FSB_TO_B(mp, mp->m_sb.sb_dblocks) || 183 + range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp))) 184 + return -XFS_ERROR(EINVAL); 185 + 182 186 start = BTOBB(range.start); 183 187 end = start + BTOBBT(range.len) - 1; 184 188 minlen = BTOBB(max_t(u64, granularity, range.minlen)); 185 189 186 - if (XFS_BB_TO_FSB(mp, start) >= mp->m_sb.sb_dblocks) 187 - return -XFS_ERROR(EINVAL); 188 190 if (end > XFS_FSB_TO_BB(mp, mp->m_sb.sb_dblocks) - 1) 189 191 end = XFS_FSB_TO_BB(mp, mp->m_sb.sb_dblocks)- 1; 190 192
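The xfs_discard fix above validates the user-supplied byte range against the device size *before* converting bytes to basic blocks, so an out-of-range `start` or `minlen` is rejected with `EINVAL` instead of being silently clamped. A sketch of that check with made-up geometry constants:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative device geometry, not real XFS values. */
#define SKETCH_BLOCK_SIZE   4096ULL
#define SKETCH_DEVICE_BLKS  1024ULL

/* Returns 1 if the trim range is acceptable, 0 for the kernel's
 * -EINVAL case: start past end of device, or minlen larger than the
 * biggest usable extent. */
static int trim_range_valid(uint64_t start_bytes, uint64_t minlen_bytes)
{
	uint64_t dev_bytes = SKETCH_DEVICE_BLKS * SKETCH_BLOCK_SIZE;

	if (start_bytes >= dev_bytes)
		return 0;
	if (minlen_bytes > dev_bytes)
		return 0;
	return 1;
}
```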
+9 -8
fs/xfs/xfs_ialloc.c
··· 962 962 if (!pag->pagi_freecount && !okalloc) 963 963 goto nextag; 964 964 965 + /* 966 + * Then read in the AGI buffer and recheck with the AGI buffer 967 + * lock held. 968 + */ 965 969 error = xfs_ialloc_read_agi(mp, tp, agno, &agbp); 966 970 if (error) 967 971 goto out_error; 968 972 969 - /* 970 - * Once the AGI has been read in we have to recheck 971 - * pagi_freecount with the AGI buffer lock held. 972 - */ 973 973 if (pag->pagi_freecount) { 974 974 xfs_perag_put(pag); 975 975 goto out_alloc; 976 976 } 977 977 978 - if (!okalloc) { 979 - xfs_trans_brelse(tp, agbp); 980 - goto nextag; 981 - } 978 + if (!okalloc) 979 + goto nextag_relse_buffer; 980 + 982 981 983 982 error = xfs_ialloc_ag_alloc(tp, agbp, &ialloced); 984 983 if (error) { ··· 1006 1007 return 0; 1007 1008 } 1008 1009 1010 + nextag_relse_buffer: 1011 + xfs_trans_brelse(tp, agbp); 1009 1012 nextag: 1010 1013 xfs_perag_put(pag); 1011 1014 if (++agno == mp->m_sb.sb_agcount)
+1 -1
fs/xfs/xfs_rtalloc.c
··· 857 857 xfs_buf_t *bp; /* block buffer, result */ 858 858 xfs_inode_t *ip; /* bitmap or summary inode */ 859 859 xfs_bmbt_irec_t map; 860 - int nmap; 860 + int nmap = 1; 861 861 int error; /* error value */ 862 862 863 863 ip = issum ? mp->m_rsumip : mp->m_rbmip;
+2 -1
include/drm/drm_crtc.h
··· 118 118 .hdisplay = (hd), .hsync_start = (hss), .hsync_end = (hse), \ 119 119 .htotal = (ht), .hskew = (hsk), .vdisplay = (vd), \ 120 120 .vsync_start = (vss), .vsync_end = (vse), .vtotal = (vt), \ 121 - .vscan = (vs), .flags = (f), .vrefresh = 0 121 + .vscan = (vs), .flags = (f), .vrefresh = 0, \ 122 + .base.type = DRM_MODE_OBJECT_MODE 122 123 123 124 #define CRTC_INTERLACE_HALVE_V 0x1 /* halve V values for interlacing */ 124 125
+3 -2
include/drm/drm_mode.h
··· 359 359 struct drm_mode_modeinfo mode; 360 360 }; 361 361 362 - #define DRM_MODE_CURSOR_BO (1<<0) 363 - #define DRM_MODE_CURSOR_MOVE (1<<1) 362 + #define DRM_MODE_CURSOR_BO 0x01 363 + #define DRM_MODE_CURSOR_MOVE 0x02 364 + #define DRM_MODE_CURSOR_FLAGS 0x03 364 365 365 366 /* 366 367 * depending on the value in flags different members are used.
+13 -1
include/linux/blkdev.h
··· 601 601 * it already be started by driver. 602 602 */ 603 603 #define RQ_NOMERGE_FLAGS \ 604 - (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA) 604 + (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA | REQ_DISCARD) 605 605 #define rq_mergeable(rq) \ 606 606 (!((rq)->cmd_flags & RQ_NOMERGE_FLAGS) && \ 607 607 (((rq)->cmd_flags & REQ_DISCARD) || \ ··· 894 894 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev); 895 895 896 896 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *); 897 + extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio, 898 + struct scatterlist *sglist); 897 899 extern void blk_dump_rq_flags(struct request *, char *); 898 900 extern long nr_blockdev_pages(void); 899 901 ··· 1139 1137 1140 1138 return (lim->discard_granularity + lim->discard_alignment - alignment) 1141 1139 & (lim->discard_granularity - 1); 1140 + } 1141 + 1142 + static inline int bdev_discard_alignment(struct block_device *bdev) 1143 + { 1144 + struct request_queue *q = bdev_get_queue(bdev); 1145 + 1146 + if (bdev != bdev->bd_contains) 1147 + return bdev->bd_part->discard_alignment; 1148 + 1149 + return q->limits.discard_alignment; 1142 1150 } 1143 1151 1144 1152 static inline unsigned int queue_discard_zeroes_data(struct request_queue *q)
+4
include/linux/cpuidle.h
··· 194 194 195 195 #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED 196 196 void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a); 197 + #else 198 + static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a) 199 + { 200 + } 197 201 #endif 198 202 199 203 /******************************
+10 -2
include/linux/kernel.h
··· 82 82 __x - (__x % (y)); \ 83 83 } \ 84 84 ) 85 + 86 + /* 87 + * Divide positive or negative dividend by positive divisor and round 88 + * to closest integer. Result is undefined for negative divisors. 89 + */ 85 90 #define DIV_ROUND_CLOSEST(x, divisor)( \ 86 91 { \ 87 - typeof(divisor) __divisor = divisor; \ 88 - (((x) + ((__divisor) / 2)) / (__divisor)); \ 92 + typeof(x) __x = x; \ 93 + typeof(divisor) __d = divisor; \ 94 + (((typeof(x))-1) >= 0 || (__x) >= 0) ? \ 95 + (((__x) + ((__d) / 2)) / (__d)) : \ 96 + (((__x) - ((__d) / 2)) / (__d)); \ 89 97 } \ 90 98 ) 91 99
-7
include/linux/ktime.h
··· 58 58 59 59 typedef union ktime ktime_t; /* Kill this */ 60 60 61 - #define KTIME_MAX ((s64)~((u64)1 << 63)) 62 - #if (BITS_PER_LONG == 64) 63 - # define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC) 64 - #else 65 - # define KTIME_SEC_MAX LONG_MAX 66 - #endif 67 - 68 61 /* 69 62 * ktime_t definitions when using the 64-bit scalar representation: 70 63 */
+1
include/linux/mmc/card.h
··· 239 239 #define MMC_QUIRK_BLK_NO_CMD23 (1<<7) /* Avoid CMD23 for regular multiblock */ 240 240 #define MMC_QUIRK_BROKEN_BYTE_MODE_512 (1<<8) /* Avoid sending 512 bytes in */ 241 241 #define MMC_QUIRK_LONG_READ_TIME (1<<9) /* Data read time > CSD says */ 242 + #define MMC_QUIRK_SEC_ERASE_TRIM_BROKEN (1<<10) /* Skip secure for erase/trim */ 242 243 /* byte mode */ 243 244 unsigned int poweroff_notify_state; /* eMMC4.5 notify feature */ 244 245 #define MMC_NO_POWER_NOTIFICATION 0
+2
include/linux/mv643xx_eth.h
··· 15 15 #define MV643XX_ETH_SIZE_REG_4 0x2224 16 16 #define MV643XX_ETH_BASE_ADDR_ENABLE_REG 0x2290 17 17 18 + #define MV643XX_TX_CSUM_DEFAULT_LIMIT 0 19 + 18 20 struct mv643xx_eth_shared_platform_data { 19 21 struct mbus_dram_target_info *dram; 20 22 struct platform_device *shared_smi;
+6 -1
include/linux/omapfb.h
··· 220 220 221 221 #ifdef __KERNEL__ 222 222 223 - #include <plat/board.h> 223 + struct omap_lcd_config { 224 + char panel_name[16]; 225 + char ctrl_name[16]; 226 + s16 nreset_gpio; 227 + u8 data_lines; 228 + }; 224 229 225 230 struct omapfb_platform_data { 226 231 struct omap_lcd_config lcd;
+1 -1
include/linux/pci_ids.h
··· 2149 2149 #define PCI_DEVICE_ID_TIGON3_5704S 0x16a8 2150 2150 #define PCI_DEVICE_ID_NX2_57800_VF 0x16a9 2151 2151 #define PCI_DEVICE_ID_NX2_5706S 0x16aa 2152 - #define PCI_DEVICE_ID_NX2_57840_MF 0x16ab 2152 + #define PCI_DEVICE_ID_NX2_57840_MF 0x16a4 2153 2153 #define PCI_DEVICE_ID_NX2_5708S 0x16ac 2154 2154 #define PCI_DEVICE_ID_NX2_57840_VF 0x16ad 2155 2155 #define PCI_DEVICE_ID_NX2_57810_MF 0x16ae
+11
include/linux/platform_data/omap1_bl.h
··· 1 + #ifndef __OMAP1_BL_H__ 2 + #define __OMAP1_BL_H__ 3 + 4 + #include <linux/device.h> 5 + 6 + struct omap_backlight_config { 7 + int default_intensity; 8 + int (*set_power)(struct device *dev, int state); 9 + }; 10 + 11 + #endif
+27 -2
include/linux/time.h
··· 107 107 return ts_delta; 108 108 } 109 109 110 + #define KTIME_MAX ((s64)~((u64)1 << 63)) 111 + #if (BITS_PER_LONG == 64) 112 + # define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC) 113 + #else 114 + # define KTIME_SEC_MAX LONG_MAX 115 + #endif 116 + 110 117 /* 111 118 * Returns true if the timespec is norm, false if denorm: 112 119 */ 113 - #define timespec_valid(ts) \ 114 - (((ts)->tv_sec >= 0) && (((unsigned long) (ts)->tv_nsec) < NSEC_PER_SEC)) 120 + static inline bool timespec_valid(const struct timespec *ts) 121 + { 122 + /* Dates before 1970 are bogus */ 123 + if (ts->tv_sec < 0) 124 + return false; 125 + /* Can't have more nanoseconds then a second */ 126 + if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC) 127 + return false; 128 + return true; 129 + } 130 + 131 + static inline bool timespec_valid_strict(const struct timespec *ts) 132 + { 133 + if (!timespec_valid(ts)) 134 + return false; 135 + /* Disallow values that could overflow ktime_t */ 136 + if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX) 137 + return false; 138 + return true; 139 + } 115 140 116 141 extern void read_persistent_clock(struct timespec *ts); 117 142 extern void read_boot_clock(struct timespec *ts);
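The `timespec_valid_strict()` added above layers one extra check on top of `timespec_valid()`: seconds values large enough to overflow a 64-bit nanosecond `ktime_t` are rejected. A self-contained sketch of both checks (a local struct is used so the example does not depend on the platform's `time_t` width):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_SEC  1000000000LL
#define KTIME_MAX     ((int64_t)~((uint64_t)1 << 63))
#define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)

/* Local stand-in for struct timespec. */
struct kts {
	int64_t tv_sec;
	long    tv_nsec;
};

/* Mirror of timespec_valid(): no pre-1970 dates, nsec within range. */
static bool kts_valid(const struct kts *ts)
{
	if (ts->tv_sec < 0)
		return false;
	if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC)
		return false;
	return true;
}

/* Mirror of timespec_valid_strict(): additionally reject seconds
 * values that would overflow a 64-bit nanosecond ktime_t. */
static bool kts_valid_strict(const struct kts *ts)
{
	if (!kts_valid(ts))
		return false;
	if ((uint64_t)ts->tv_sec >= (uint64_t)KTIME_SEC_MAX)
		return false;
	return true;
}
```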
+1
include/net/netfilter/nf_conntrack_ecache.h
··· 18 18 u16 ctmask; /* bitmask of ct events to be delivered */ 19 19 u16 expmask; /* bitmask of expect events to be delivered */ 20 20 u32 pid; /* netlink pid of destroyer */ 21 + struct timer_list timeout; 21 22 }; 22 23 23 24 static inline struct nf_conntrack_ecache *
-2
include/xen/events.h
··· 58 58 59 59 void xen_irq_resume(void); 60 60 61 - void xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn); 62 - 63 61 /* Clear an irq's pending state, in preparation for polling on it */ 64 62 void xen_clear_irq_pending(int irq); 65 63 void xen_set_irq_pending(int irq);
+2 -2
kernel/fork.c
··· 455 455 if (retval) 456 456 goto out; 457 457 458 - if (file && uprobe_mmap(tmp)) 459 - goto out; 458 + if (file) 459 + uprobe_mmap(tmp); 460 460 } 461 461 /* a new mm has just been created */ 462 462 arch_dup_mmap(oldmm, mm);
+33 -6
kernel/time/timekeeping.c
··· 115 115 { 116 116 tk->xtime_sec += ts->tv_sec; 117 117 tk->xtime_nsec += (u64)ts->tv_nsec << tk->shift; 118 + tk_normalize_xtime(tk); 118 119 } 119 120 120 121 static void tk_set_wall_to_mono(struct timekeeper *tk, struct timespec wtm) ··· 277 276 tk->xtime_nsec += cycle_delta * tk->mult; 278 277 279 278 /* If arch requires, add in gettimeoffset() */ 280 - tk->xtime_nsec += arch_gettimeoffset() << tk->shift; 279 + tk->xtime_nsec += (u64)arch_gettimeoffset() << tk->shift; 281 280 282 281 tk_normalize_xtime(tk); 283 282 ··· 428 427 struct timespec ts_delta, xt; 429 428 unsigned long flags; 430 429 431 - if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC) 430 + if (!timespec_valid_strict(tv)) 432 431 return -EINVAL; 433 432 434 433 write_seqlock_irqsave(&tk->lock, flags); ··· 464 463 { 465 464 struct timekeeper *tk = &timekeeper; 466 465 unsigned long flags; 466 + struct timespec tmp; 467 + int ret = 0; 467 468 468 469 if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC) 469 470 return -EINVAL; ··· 474 471 475 472 timekeeping_forward_now(tk); 476 473 474 + /* Make sure the proposed value is valid */ 475 + tmp = timespec_add(tk_xtime(tk), *ts); 476 + if (!timespec_valid_strict(&tmp)) { 477 + ret = -EINVAL; 478 + goto error; 479 + } 477 480 478 481 tk_xtime_add(tk, ts); 479 482 tk_set_wall_to_mono(tk, timespec_sub(tk->wall_to_monotonic, *ts)); 480 483 484 + error: /* even if we error out, we forwarded the time, so call update */ 481 485 timekeeping_update(tk, true); 482 486 483 487 write_sequnlock_irqrestore(&tk->lock, flags); ··· 492 482 /* signal hrtimers about time change */ 493 483 clock_was_set(); 494 484 495 - return 0; 485 + return ret; 496 486 } 497 487 EXPORT_SYMBOL(timekeeping_inject_offset); 498 488 ··· 659 649 struct timespec now, boot, tmp; 660 650 661 651 read_persistent_clock(&now); 652 + if (!timespec_valid_strict(&now)) { 653 + pr_warn("WARNING: Persistent clock returned invalid value!\n" 654 + " Check your CMOS/BIOS settings.\n"); 655 + now.tv_sec = 0; 656 
+ now.tv_nsec = 0; 657 + } 658 + 662 659 read_boot_clock(&boot); 660 + if (!timespec_valid_strict(&boot)) { 661 + pr_warn("WARNING: Boot clock returned invalid value!\n" 662 + " Check your CMOS/BIOS settings.\n"); 663 + boot.tv_sec = 0; 664 + boot.tv_nsec = 0; 665 + } 663 666 664 667 seqlock_init(&tk->lock); 665 668 ··· 713 690 static void __timekeeping_inject_sleeptime(struct timekeeper *tk, 714 691 struct timespec *delta) 715 692 { 716 - if (!timespec_valid(delta)) { 693 + if (!timespec_valid_strict(delta)) { 717 694 printk(KERN_WARNING "__timekeeping_inject_sleeptime: Invalid " 718 695 "sleep delta value!\n"); 719 696 return; ··· 1152 1129 offset = (clock->read(clock) - clock->cycle_last) & clock->mask; 1153 1130 #endif 1154 1131 1132 + /* Check if there's really nothing to do */ 1133 + if (offset < tk->cycle_interval) 1134 + goto out; 1135 + 1155 1136 /* 1156 1137 * With NO_HZ we may have to accumulate many cycle_intervals 1157 1138 * (think "ticks") worth of time at once. To do this efficiently, ··· 1188 1161 * the vsyscall implementations are converted to use xtime_nsec 1189 1162 * (shifted nanoseconds), this can be killed. 1190 1163 */ 1191 - remainder = tk->xtime_nsec & ((1 << tk->shift) - 1); 1164 + remainder = tk->xtime_nsec & ((1ULL << tk->shift) - 1); 1192 1165 tk->xtime_nsec -= remainder; 1193 - tk->xtime_nsec += 1 << tk->shift; 1166 + tk->xtime_nsec += 1ULL << tk->shift; 1194 1167 tk->ntp_error += remainder << tk->ntp_error_shift; 1195 1168 1196 1169 /*
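The last timekeeping hunk switches the sub-nanosecond remainder mask from `1 << tk->shift` to `1ULL << tk->shift`. With a plain `int` constant the shift is computed in 32 bits, truncating the mask for large shifts (and shifting a 32-bit 1 by 32 or more is undefined behaviour). A minimal demonstration of the 64-bit form:

```c
#include <assert.h>
#include <stdint.h>

/* Build a mask of the low `shift` bits in 64-bit arithmetic, as the
 * fixed remainder computation does. Well-defined for shift <= 63;
 * (1 << 40) with a 32-bit int would be undefined. */
static uint64_t low_bits_mask(unsigned int shift)
{
	return (1ULL << shift) - 1;
}
```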
+4
kernel/trace/trace_syscalls.c
··· 506 506 int size; 507 507 508 508 syscall_nr = syscall_get_nr(current, regs); 509 + if (syscall_nr < 0) 510 + return; 509 511 if (!test_bit(syscall_nr, enabled_perf_enter_syscalls)) 510 512 return; 511 513 ··· 582 580 int size; 583 581 584 582 syscall_nr = syscall_get_nr(current, regs); 583 + if (syscall_nr < 0) 584 + return; 585 585 if (!test_bit(syscall_nr, enabled_perf_exit_syscalls)) 586 586 return; 587 587
-7
mm/filemap.c
··· 1412 1412 retval = filemap_write_and_wait_range(mapping, pos, 1413 1413 pos + iov_length(iov, nr_segs) - 1); 1414 1414 if (!retval) { 1415 - struct blk_plug plug; 1416 - 1417 - blk_start_plug(&plug); 1418 1415 retval = mapping->a_ops->direct_IO(READ, iocb, 1419 1416 iov, pos, nr_segs); 1420 - blk_finish_plug(&plug); 1421 1417 } 1422 1418 if (retval > 0) { 1423 1419 *ppos = pos + retval; ··· 2523 2527 { 2524 2528 struct file *file = iocb->ki_filp; 2525 2529 struct inode *inode = file->f_mapping->host; 2526 - struct blk_plug plug; 2527 2530 ssize_t ret; 2528 2531 2529 2532 BUG_ON(iocb->ki_pos != pos); 2530 2533 2531 2534 sb_start_write(inode->i_sb); 2532 2535 mutex_lock(&inode->i_mutex); 2533 - blk_start_plug(&plug); 2534 2536 ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos); 2535 2537 mutex_unlock(&inode->i_mutex); 2536 2538 ··· 2539 2545 if (err < 0 && ret > 0) 2540 2546 ret = err; 2541 2547 } 2542 - blk_finish_plug(&plug); 2543 2548 sb_end_write(inode->i_sb); 2544 2549 return ret; 2545 2550 }
+1 -1
mm/mempolicy.c
··· 2562 2562 break; 2563 2563 2564 2564 default: 2565 - BUG(); 2565 + return -EINVAL; 2566 2566 } 2567 2567 2568 2568 l = strlen(policy_modes[mode]);
+2 -3
mm/mmap.c
··· 1356 1356 } else if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK)) 1357 1357 make_pages_present(addr, addr + len); 1358 1358 1359 - if (file && uprobe_mmap(vma)) 1360 - /* matching probes but cannot insert */ 1361 - goto unmap_and_free_vma; 1359 + if (file) 1360 + uprobe_mmap(vma); 1362 1361 1363 1362 return addr; 1364 1363
+1
mm/slab.c
··· 3260 3260 3261 3261 /* cache_grow can reenable interrupts, then ac could change. */ 3262 3262 ac = cpu_cache_get(cachep); 3263 + node = numa_mem_id(); 3263 3264 3264 3265 /* no objects in sight? abort */ 3265 3266 if (!x && (ac->avail == 0 || force_refill))
+1 -9
net/core/netpoll.c
··· 168 168 struct napi_struct *napi; 169 169 int budget = 16; 170 170 171 - WARN_ON_ONCE(!irqs_disabled()); 172 - 173 171 list_for_each_entry(napi, &dev->napi_list, dev_list) { 174 - local_irq_enable(); 175 172 if (napi->poll_owner != smp_processor_id() && 176 173 spin_trylock(&napi->poll_lock)) { 177 - rcu_read_lock_bh(); 178 174 budget = poll_one_napi(rcu_dereference_bh(dev->npinfo), 179 175 napi, budget); 180 - rcu_read_unlock_bh(); 181 176 spin_unlock(&napi->poll_lock); 182 177 183 - if (!budget) { 184 - local_irq_disable(); 178 + if (!budget) 185 179 break; 186 - } 187 180 } 188 - local_irq_disable(); 189 181 } 190 182 } 191 183
+12 -2
net/ipv4/ipmr.c
··· 124 124 static struct kmem_cache *mrt_cachep __read_mostly; 125 125 126 126 static struct mr_table *ipmr_new_table(struct net *net, u32 id); 127 + static void ipmr_free_table(struct mr_table *mrt); 128 + 127 129 static int ip_mr_forward(struct net *net, struct mr_table *mrt, 128 130 struct sk_buff *skb, struct mfc_cache *cache, 129 131 int local); ··· 133 131 struct sk_buff *pkt, vifi_t vifi, int assert); 134 132 static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb, 135 133 struct mfc_cache *c, struct rtmsg *rtm); 134 + static void mroute_clean_tables(struct mr_table *mrt); 136 135 static void ipmr_expire_process(unsigned long arg); 137 136 138 137 #ifdef CONFIG_IP_MROUTE_MULTIPLE_TABLES ··· 274 271 275 272 list_for_each_entry_safe(mrt, next, &net->ipv4.mr_tables, list) { 276 273 list_del(&mrt->list); 277 - kfree(mrt); 274 + ipmr_free_table(mrt); 278 275 } 279 276 fib_rules_unregister(net->ipv4.mr_rules_ops); 280 277 } ··· 302 299 303 300 static void __net_exit ipmr_rules_exit(struct net *net) 304 301 { 305 - kfree(net->ipv4.mrt); 302 + ipmr_free_table(net->ipv4.mrt); 306 303 } 307 304 #endif 308 305 ··· 337 334 list_add_tail_rcu(&mrt->list, &net->ipv4.mr_tables); 338 335 #endif 339 336 return mrt; 337 + } 338 + 339 + static void ipmr_free_table(struct mr_table *mrt) 340 + { 341 + del_timer_sync(&mrt->ipmr_expire_timer); 342 + mroute_clean_tables(mrt); 343 + kfree(mrt); 340 344 } 341 345 342 346 /* Service routines creating virtual interfaces: DVMRP tunnels and PIMREG */
+4 -1
net/ipv4/netfilter/nf_nat_sip.c
··· 502 502 ret = nf_ct_expect_related(rtcp_exp); 503 503 if (ret == 0) 504 504 break; 505 - else if (ret != -EBUSY) { 505 + else if (ret == -EBUSY) { 506 + nf_ct_unexpect_related(rtp_exp); 507 + continue; 508 + } else if (ret < 0) { 506 509 nf_ct_unexpect_related(rtp_exp); 507 510 port = 0; 508 511 break;
+4 -2
net/ipv4/route.c
··· 934 934 if (mtu < ip_rt_min_pmtu) 935 935 mtu = ip_rt_min_pmtu; 936 936 937 + rcu_read_lock(); 937 938 if (fib_lookup(dev_net(rt->dst.dev), fl4, &res) == 0) { 938 939 struct fib_nh *nh = &FIB_RES_NH(res); 939 940 940 941 update_or_create_fnhe(nh, fl4->daddr, 0, mtu, 941 942 jiffies + ip_rt_mtu_expires); 942 943 } 944 + rcu_read_unlock(); 943 945 return mtu; 944 946 } 945 947 ··· 958 956 dst->obsolete = DST_OBSOLETE_KILL; 959 957 } else { 960 958 rt->rt_pmtu = mtu; 961 - dst_set_expires(&rt->dst, ip_rt_mtu_expires); 959 + rt->dst.expires = max(1UL, jiffies + ip_rt_mtu_expires); 962 960 } 963 961 } 964 962 ··· 1265 1263 { 1266 1264 struct rtable *rt = (struct rtable *) dst; 1267 1265 1268 - if (dst->flags & DST_NOCACHE) { 1266 + if (!list_empty(&rt->rt_uncached)) { 1269 1267 spin_lock_bh(&rt_uncached_lock); 1270 1268 list_del(&rt->rt_uncached); 1271 1269 spin_unlock_bh(&rt_uncached_lock);
+7 -8
net/ipv4/tcp_input.c
··· 2926 2926 * tcp_xmit_retransmit_queue(). 2927 2927 */ 2928 2928 static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked, 2929 - int newly_acked_sacked, bool is_dupack, 2929 + int prior_sacked, bool is_dupack, 2930 2930 int flag) 2931 2931 { 2932 2932 struct inet_connection_sock *icsk = inet_csk(sk); 2933 2933 struct tcp_sock *tp = tcp_sk(sk); 2934 2934 int do_lost = is_dupack || ((flag & FLAG_DATA_SACKED) && 2935 2935 (tcp_fackets_out(tp) > tp->reordering)); 2936 + int newly_acked_sacked = 0; 2936 2937 int fast_rexmit = 0; 2937 2938 2938 2939 if (WARN_ON(!tp->packets_out && tp->sacked_out)) ··· 2993 2992 tcp_add_reno_sack(sk); 2994 2993 } else 2995 2994 do_lost = tcp_try_undo_partial(sk, pkts_acked); 2995 + newly_acked_sacked = pkts_acked + tp->sacked_out - prior_sacked; 2996 2996 break; 2997 2997 case TCP_CA_Loss: 2998 2998 if (flag & FLAG_DATA_ACKED) ··· 3015 3013 if (is_dupack) 3016 3014 tcp_add_reno_sack(sk); 3017 3015 } 3016 + newly_acked_sacked = pkts_acked + tp->sacked_out - prior_sacked; 3018 3017 3019 3018 if (icsk->icsk_ca_state <= TCP_CA_Disorder) 3020 3019 tcp_try_undo_dsack(sk); ··· 3593 3590 int prior_packets; 3594 3591 int prior_sacked = tp->sacked_out; 3595 3592 int pkts_acked = 0; 3596 - int newly_acked_sacked = 0; 3597 3593 bool frto_cwnd = false; 3598 3594 3599 3595 /* If the ack is older than previous acks ··· 3668 3666 flag |= tcp_clean_rtx_queue(sk, prior_fackets, prior_snd_una); 3669 3667 3670 3668 pkts_acked = prior_packets - tp->packets_out; 3671 - newly_acked_sacked = (prior_packets - prior_sacked) - 3672 - (tp->packets_out - tp->sacked_out); 3673 3669 3674 3670 if (tp->frto_counter) 3675 3671 frto_cwnd = tcp_process_frto(sk, flag); ··· 3681 3681 tcp_may_raise_cwnd(sk, flag)) 3682 3682 tcp_cong_avoid(sk, ack, prior_in_flight); 3683 3683 is_dupack = !(flag & (FLAG_SND_UNA_ADVANCED | FLAG_NOT_DUP)); 3684 - tcp_fastretrans_alert(sk, pkts_acked, newly_acked_sacked, 3684 3684 + tcp_fastretrans_alert(sk, pkts_acked, prior_sacked, 3685 3685 is_dupack, flag); 3686 3686 } else { 3687 3687 if ((flag & FLAG_DATA_ACKED) && !frto_cwnd) ··· 3698 3698 no_queue: 3699 3699 /* If data was DSACKed, see if we can undo a cwnd reduction. */ 3700 3700 if (flag & FLAG_DSACKING_ACK) 3701 - tcp_fastretrans_alert(sk, pkts_acked, newly_acked_sacked, 3701 + tcp_fastretrans_alert(sk, pkts_acked, prior_sacked, 3702 3702 is_dupack, flag); 3703 3703 /* If this ack opens up a zero window, clear backoff. It was 3704 3704 * being used to time the probes, and is probably far higher than ··· 3718 3718 */ 3719 3719 if (TCP_SKB_CB(skb)->sacked) { 3720 3720 flag |= tcp_sacktag_write_queue(sk, skb, prior_snd_una); 3721 - newly_acked_sacked = tp->sacked_out - prior_sacked; 3722 - tcp_fastretrans_alert(sk, pkts_acked, newly_acked_sacked, 3721 + tcp_fastretrans_alert(sk, pkts_acked, prior_sacked, 3723 3722 is_dupack, flag); 3724 3723 } 3725 3724
+3 -3
net/ipv6/esp6.c
··· 167 167 struct esp_data *esp = x->data; 168 168 169 169 /* skb is pure payload to encrypt */ 170 - err = -ENOMEM; 171 - 172 170 aead = esp->aead; 173 171 alen = crypto_aead_authsize(aead); 174 172 ··· 201 203 } 202 204 203 205 tmp = esp_alloc_tmp(aead, nfrags + sglists, seqhilen); 204 - if (!tmp) 206 + if (!tmp) { 207 + err = -ENOMEM; 205 208 goto error; 209 + } 206 210 207 211 seqhi = esp_tmp_seqhi(tmp); 208 212 iv = esp_tmp_iv(aead, tmp, seqhilen);
+1 -2
net/l2tp/l2tp_core.c
··· 1347 1347 /* Remove from tunnel list */ 1348 1348 spin_lock_bh(&pn->l2tp_tunnel_list_lock); 1349 1349 list_del_rcu(&tunnel->list); 1350 + kfree_rcu(tunnel, rcu); 1350 1351 spin_unlock_bh(&pn->l2tp_tunnel_list_lock); 1351 - synchronize_rcu(); 1352 1352 1353 1353 atomic_dec(&l2tp_tunnel_count); 1354 - kfree(tunnel); 1355 1354 } 1356 1355 1357 1356 /* Create a socket for the tunnel, if one isn't set up by
+1
net/l2tp/l2tp_core.h
··· 163 163 164 164 struct l2tp_tunnel { 165 165 int magic; /* Should be L2TP_TUNNEL_MAGIC */ 166 + struct rcu_head rcu; 166 167 rwlock_t hlist_lock; /* protect session_hlist */ 167 168 struct hlist_head session_hlist[L2TP_HASH_SIZE]; 168 169 /* hashed list of sessions,
+16 -22
net/mac80211/tx.c
··· 1811 1811 meshhdrlen = ieee80211_new_mesh_header(&mesh_hdr, 1812 1812 sdata, NULL, NULL); 1813 1813 } else { 1814 - int is_mesh_mcast = 1; 1815 - const u8 *mesh_da; 1814 + /* DS -> MBSS (802.11-2012 13.11.3.3). 1815 + * For unicast with unknown forwarding information, 1816 + * destination might be in the MBSS or if that fails 1817 + * forwarded to another mesh gate. In either case 1818 + * resolution will be handled in ieee80211_xmit(), so 1819 + * leave the original DA. This also works for mcast */ 1820 + const u8 *mesh_da = skb->data; 1816 1821 1817 - if (is_multicast_ether_addr(skb->data)) 1818 - /* DA TA mSA AE:SA */ 1819 - mesh_da = skb->data; 1820 - else { 1821 - static const u8 bcast[ETH_ALEN] = 1822 - { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }; 1823 - if (mppath) { 1824 - /* RA TA mDA mSA AE:DA SA */ 1825 - mesh_da = mppath->mpp; 1826 - is_mesh_mcast = 0; 1827 - } else if (mpath) { 1828 - mesh_da = mpath->dst; 1829 - is_mesh_mcast = 0; 1830 - } else { 1831 - /* DA TA mSA AE:SA */ 1832 - mesh_da = bcast; 1833 - } 1834 - } 1822 + if (mppath) 1823 + mesh_da = mppath->mpp; 1824 + else if (mpath) 1825 + mesh_da = mpath->dst; 1826 + rcu_read_unlock(); 1827 + 1835 1828 hdrlen = ieee80211_fill_mesh_addresses(&hdr, &fc, 1836 1829 mesh_da, sdata->vif.addr); 1837 - rcu_read_unlock(); 1838 - if (is_mesh_mcast) 1830 + if (is_multicast_ether_addr(mesh_da)) 1831 + /* DA TA mSA AE:SA */ 1839 1832 meshhdrlen = 1840 1833 ieee80211_new_mesh_header(&mesh_hdr, 1841 1834 sdata, 1842 1835 skb->data + ETH_ALEN, 1843 1836 NULL); 1844 1837 else 1838 + /* RA TA mDA mSA AE:DA SA */ 1845 1839 meshhdrlen = 1846 1840 ieee80211_new_mesh_header(&mesh_hdr, 1847 1841 sdata,
+3 -1
net/netfilter/ipvs/ip_vs_ctl.c
··· 1171 1171 goto out_err; 1172 1172 } 1173 1173 svc->stats.cpustats = alloc_percpu(struct ip_vs_cpu_stats); 1174 - if (!svc->stats.cpustats) 1174 + if (!svc->stats.cpustats) { 1175 + ret = -ENOMEM; 1175 1176 goto out_err; 1177 + } 1176 1178 1177 1179 /* I'm the first user of the service */ 1178 1180 atomic_set(&svc->usecnt, 0);
+11 -5
net/netfilter/nf_conntrack_core.c
··· 249 249 { 250 250 struct nf_conn *ct = (void *)ul_conntrack; 251 251 struct net *net = nf_ct_net(ct); 252 + struct nf_conntrack_ecache *ecache = nf_ct_ecache_find(ct); 253 + 254 + BUG_ON(ecache == NULL); 252 255 253 256 if (nf_conntrack_event(IPCT_DESTROY, ct) < 0) { 254 257 /* bad luck, let's retry again */ 255 - ct->timeout.expires = jiffies + 258 + ecache->timeout.expires = jiffies + 256 259 (random32() % net->ct.sysctl_events_retry_timeout); 257 - add_timer(&ct->timeout); 260 + add_timer(&ecache->timeout); 258 261 return; 259 262 } 260 263 /* we've got the event delivered, now it's dying */ ··· 271 268 void nf_ct_insert_dying_list(struct nf_conn *ct) 272 269 { 273 270 struct net *net = nf_ct_net(ct); 271 + struct nf_conntrack_ecache *ecache = nf_ct_ecache_find(ct); 272 + 273 + BUG_ON(ecache == NULL); 274 274 275 275 /* add this conntrack to the dying list */ 276 276 spin_lock_bh(&nf_conntrack_lock); ··· 281 275 &net->ct.dying); 282 276 spin_unlock_bh(&nf_conntrack_lock); 283 277 /* set a new timer to retry event delivery */ 284 - setup_timer(&ct->timeout, death_by_event, (unsigned long)ct); 285 - ct->timeout.expires = jiffies + 278 + setup_timer(&ecache->timeout, death_by_event, (unsigned long)ct); 279 + ecache->timeout.expires = jiffies + 286 280 (random32() % net->ct.sysctl_events_retry_timeout); 287 - add_timer(&ct->timeout); 281 + add_timer(&ecache->timeout); 288 282 } 289 283 EXPORT_SYMBOL_GPL(nf_ct_insert_dying_list); 290 284
+2 -1
net/netfilter/nf_conntrack_netlink.c
··· 2790 2790 goto err_unreg_subsys; 2791 2791 } 2792 2792 2793 - if (register_pernet_subsys(&ctnetlink_net_ops)) { 2793 + ret = register_pernet_subsys(&ctnetlink_net_ops); 2794 + if (ret < 0) { 2794 2795 pr_err("ctnetlink_init: cannot register pernet operations\n"); 2795 2796 goto err_unreg_exp_subsys; 2796 2797 }
+3 -1
net/netlink/af_netlink.c
··· 1373 1373 dst_pid = addr->nl_pid; 1374 1374 dst_group = ffs(addr->nl_groups); 1375 1375 err = -EPERM; 1376 - if (dst_group && !netlink_capable(sock, NL_NONROOT_SEND)) 1376 + if ((dst_group || dst_pid) && 1377 + !netlink_capable(sock, NL_NONROOT_SEND)) 1377 1378 goto out; 1378 1379 } else { 1379 1380 dst_pid = nlk->dst_pid; ··· 2148 2147 rcu_assign_pointer(nl_table[NETLINK_USERSOCK].listeners, listeners); 2149 2148 nl_table[NETLINK_USERSOCK].module = THIS_MODULE; 2150 2149 nl_table[NETLINK_USERSOCK].registered = 1; 2150 + nl_table[NETLINK_USERSOCK].nl_nonroot = NL_NONROOT_SEND; 2151 2151 2152 2152 netlink_table_ungrab(); 2153 2153 }
+1 -1
net/packet/af_packet.c
··· 1273 1273 spin_unlock(&f->lock); 1274 1274 } 1275 1275 1276 - bool match_fanout_group(struct packet_type *ptype, struct sock * sk) 1276 + static bool match_fanout_group(struct packet_type *ptype, struct sock * sk) 1277 1277 { 1278 1278 if (ptype->af_packet_priv == (void*)((struct packet_sock *)sk)->fanout) 1279 1279 return true;
+2 -2
net/socket.c
··· 2604 2604 err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv); 2605 2605 set_fs(old_fs); 2606 2606 if (!err) 2607 - err = compat_put_timeval(up, &ktv); 2607 + err = compat_put_timeval(&ktv, up); 2608 2608 2609 2609 return err; 2610 2610 } ··· 2620 2620 err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts); 2621 2621 set_fs(old_fs); 2622 2622 if (!err) 2623 - err = compat_put_timespec(up, &kts); 2623 + err = compat_put_timespec(&kts, up); 2624 2624 2625 2625 return err; 2626 2626 }
+4 -6
net/sunrpc/svc_xprt.c
··· 316 316 */ 317 317 void svc_xprt_enqueue(struct svc_xprt *xprt) 318 318 { 319 - struct svc_serv *serv = xprt->xpt_server; 320 319 struct svc_pool *pool; 321 320 struct svc_rqst *rqstp; 322 321 int cpu; ··· 361 362 rqstp, rqstp->rq_xprt); 362 363 rqstp->rq_xprt = xprt; 363 364 svc_xprt_get(xprt); 364 - rqstp->rq_reserved = serv->sv_max_mesg; 365 - atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved); 366 365 pool->sp_stats.threads_woken++; 367 366 wake_up(&rqstp->rq_wait); 368 367 } else { ··· 637 640 if (xprt) { 638 641 rqstp->rq_xprt = xprt; 639 642 svc_xprt_get(xprt); 640 - rqstp->rq_reserved = serv->sv_max_mesg; 641 - atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved); 642 643 643 644 /* As there is a shortage of threads and this request 644 645 * had to be queued, don't allow the thread to wait so ··· 733 738 else 734 739 len = xprt->xpt_ops->xpo_recvfrom(rqstp); 735 740 dprintk("svc: got len=%d\n", len); 741 + rqstp->rq_reserved = serv->sv_max_mesg; 742 + atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved); 736 743 } 737 744 svc_xprt_received(xprt); 738 745 ··· 791 794 792 795 /* Grab mutex to serialize outgoing data. */ 793 796 mutex_lock(&xprt->xpt_mutex); 794 - if (test_bit(XPT_DEAD, &xprt->xpt_flags)) 797 + if (test_bit(XPT_DEAD, &xprt->xpt_flags) 798 + || test_bit(XPT_CLOSE, &xprt->xpt_flags)) 795 799 len = -ENOTCONN; 796 800 else 797 801 len = xprt->xpt_ops->xpo_sendto(rqstp);
+1 -1
net/sunrpc/svcsock.c
··· 1129 1129 if (len >= 0) 1130 1130 svsk->sk_tcplen += len; 1131 1131 if (len != want) { 1132 + svc_tcp_save_pages(svsk, rqstp); 1132 1133 if (len < 0 && len != -EAGAIN) 1133 1134 goto err_other; 1134 - svc_tcp_save_pages(svsk, rqstp); 1135 1135 dprintk("svc: incomplete TCP record (%d of %d)\n", 1136 1136 svsk->sk_tcplen, svsk->sk_reclen); 1137 1137 goto err_noclose;
+3 -1
net/xfrm/xfrm_state.c
··· 1994 1994 goto error; 1995 1995 1996 1996 x->outer_mode = xfrm_get_mode(x->props.mode, family); 1997 - if (x->outer_mode == NULL) 1997 + if (x->outer_mode == NULL) { 1998 + err = -EPROTONOSUPPORT; 1998 1999 goto error; 2000 + } 1999 2001 2000 2002 if (init_replay) { 2001 2003 err = xfrm_init_replay(x);
+1 -1
scripts/Makefile.fwinst
··· 42 42 $(installed-fw-dirs): 43 43 $(call cmd,mkdir) 44 44 45 - $(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $(INSTALL_FW_PATH)/$$(dir %) 45 + $(installed-fw): $(INSTALL_FW_PATH)/%: $(obj)/% | $$(dir $(INSTALL_FW_PATH)/%) 46 46 $(call cmd,install) 47 47 48 48 PHONY += __fw_install __fw_modinst FORCE
+8 -2
sound/pci/hda/hda_codec.c
··· 1209 1209 kfree(codec); 1210 1210 } 1211 1211 1212 + static bool snd_hda_codec_get_supported_ps(struct hda_codec *codec, 1213 + hda_nid_t fg, unsigned int power_state); 1214 + 1212 1215 static void hda_set_power_state(struct hda_codec *codec, hda_nid_t fg, 1213 1216 unsigned int power_state); 1214 1217 ··· 1319 1316 snd_hda_codec_read(codec, nid, 0, 1320 1317 AC_VERB_GET_SUBSYSTEM_ID, 0); 1321 1318 } 1319 + 1320 + codec->epss = snd_hda_codec_get_supported_ps(codec, 1321 + codec->afg ? codec->afg : codec->mfg, 1322 + AC_PWRST_EPSS); 1322 1323 1323 1324 /* power-up all before initialization */ 1324 1325 hda_set_power_state(codec, ··· 3550 3543 /* this delay seems necessary to avoid click noise at power-down */ 3551 3544 if (power_state == AC_PWRST_D3) { 3552 3545 /* transition time less than 10ms for power down */ 3553 - bool epss = snd_hda_codec_get_supported_ps(codec, fg, AC_PWRST_EPSS); 3554 - msleep(epss ? 10 : 100); 3546 + msleep(codec->epss ? 10 : 100); 3555 3547 } 3556 3548 3557 3549 /* repeat power states setting at most 10 times*/
+1
sound/pci/hda/hda_codec.h
··· 862 862 unsigned int ignore_misc_bit:1; /* ignore MISC_NO_PRESENCE bit */ 863 863 unsigned int no_jack_detect:1; /* Machine has no jack-detection */ 864 864 unsigned int pcm_format_first:1; /* PCM format must be set first */ 865 + unsigned int epss:1; /* supporting EPSS? */ 865 866 #ifdef CONFIG_SND_HDA_POWER_SAVE 866 867 unsigned int power_on :1; /* current (global) power-state */ 867 868 int power_transition; /* power-state in transition */
+4
sound/pci/hda/patch_sigmatel.c
··· 4543 4543 struct auto_pin_cfg *cfg = &spec->autocfg; 4544 4544 int i; 4545 4545 4546 + if (cfg->speaker_outs == 0) 4547 + return; 4548 + 4546 4549 for (i = 0; i < cfg->line_outs; i++) { 4547 4550 if (presence) 4548 4551 break; ··· 5534 5531 snd_hda_codec_set_pincfg(codec, 0xf, 0x2181205e); 5535 5532 } 5536 5533 5534 + codec->epss = 0; /* longer delay needed for D3 */ 5537 5535 codec->no_trigger_sense = 1; 5538 5536 codec->spec = spec; 5539 5537
+2 -2
sound/usb/card.c
··· 553 553 struct snd_usb_audio *chip) 554 554 { 555 555 struct snd_card *card; 556 - struct list_head *p; 556 + struct list_head *p, *n; 557 557 558 558 if (chip == (void *)-1L) 559 559 return; ··· 570 570 snd_usb_stream_disconnect(p); 571 571 } 572 572 /* release the endpoint resources */ 573 - list_for_each(p, &chip->ep_list) { 573 + list_for_each_safe(p, n, &chip->ep_list) { 574 574 snd_usb_endpoint_free(p); 575 575 } 576 576 /* release the midi resources */
+10 -14
sound/usb/endpoint.c
··· 141 141 * 142 142 * For implicit feedback, next_packet_size() is unused. 143 143 */ 144 - static int next_packet_size(struct snd_usb_endpoint *ep) 144 + int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep) 145 145 { 146 146 unsigned long flags; 147 147 int ret; ··· 175 175 176 176 if (ep->retire_data_urb) 177 177 ep->retire_data_urb(ep->data_subs, urb); 178 - } 179 - 180 - static void prepare_outbound_urb_sizes(struct snd_usb_endpoint *ep, 181 - struct snd_urb_ctx *ctx) 182 - { 183 - int i; 184 - 185 - for (i = 0; i < ctx->packets; ++i) 186 - ctx->packet_size[i] = next_packet_size(ep); 187 178 } 188 179 189 180 /* ··· 361 370 goto exit_clear; 362 371 } 363 372 364 - prepare_outbound_urb_sizes(ep, ctx); 365 373 prepare_outbound_urb(ep, ctx); 366 374 } else { 367 375 retire_inbound_urb(ep, ctx); ··· 789 799 /** 790 800 * snd_usb_endpoint_start: start an snd_usb_endpoint 791 801 * 792 - * @ep: the endpoint to start 802 + * @ep: the endpoint to start 803 + * @can_sleep: flag indicating whether the operation is executed in 804 + * non-atomic context 793 805 * 794 806 * A call to this function will increment the use count of the endpoint. 795 807 * In case it is not already running, the URBs for this endpoint will be ··· 801 809 * 802 810 * Returns an error if the URB submission failed, 0 in all other cases. 803 811 */ 804 - int snd_usb_endpoint_start(struct snd_usb_endpoint *ep) 812 + int snd_usb_endpoint_start(struct snd_usb_endpoint *ep, int can_sleep) 805 813 { 806 814 int err; 807 815 unsigned int i; ··· 812 820 /* already running? */ 813 821 if (++ep->use_count != 1) 814 822 return 0; 823 + 824 + /* just to be sure */ 825 + deactivate_urbs(ep, 0, can_sleep); 826 + if (can_sleep) 827 + wait_clear_urbs(ep); 815 828 816 829 ep->active_mask = 0; 817 830 ep->unlink_mask = 0; ··· 847 850 goto __error; 848 851 849 852 if (usb_pipeout(ep->pipe)) { 850 - prepare_outbound_urb_sizes(ep, urb->context); 851 853 prepare_outbound_urb(ep, urb->context); 852 854 } else { 853 855 prepare_inbound_urb(ep, urb->context);
+2 -1
sound/usb/endpoint.h
··· 13 13 struct audioformat *fmt, 14 14 struct snd_usb_endpoint *sync_ep); 15 15 16 - int snd_usb_endpoint_start(struct snd_usb_endpoint *ep); 16 + int snd_usb_endpoint_start(struct snd_usb_endpoint *ep, int can_sleep); 17 17 void snd_usb_endpoint_stop(struct snd_usb_endpoint *ep, 18 18 int force, int can_sleep, int wait); 19 19 int snd_usb_endpoint_activate(struct snd_usb_endpoint *ep); ··· 21 21 void snd_usb_endpoint_free(struct list_head *head); 22 22 23 23 int snd_usb_endpoint_implict_feedback_sink(struct snd_usb_endpoint *ep); 24 + int snd_usb_endpoint_next_packet_size(struct snd_usb_endpoint *ep); 24 25 25 26 void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep, 26 27 struct snd_usb_endpoint *sender,
+52 -12
sound/usb/pcm.c
··· 212 212 } 213 213 } 214 214 215 - static int start_endpoints(struct snd_usb_substream *subs) 215 + static int start_endpoints(struct snd_usb_substream *subs, int can_sleep) 216 216 { 217 217 int err; 218 218 ··· 225 225 snd_printdd(KERN_DEBUG "Starting data EP @%p\n", ep); 226 226 227 227 ep->data_subs = subs; 228 - err = snd_usb_endpoint_start(ep); 228 + err = snd_usb_endpoint_start(ep, can_sleep); 229 229 if (err < 0) { 230 230 clear_bit(SUBSTREAM_FLAG_DATA_EP_STARTED, &subs->flags); 231 231 return err; ··· 236 236 !test_and_set_bit(SUBSTREAM_FLAG_SYNC_EP_STARTED, &subs->flags)) { 237 237 struct snd_usb_endpoint *ep = subs->sync_endpoint; 238 238 239 + if (subs->data_endpoint->iface != subs->sync_endpoint->iface || 240 + subs->data_endpoint->alt_idx != subs->sync_endpoint->alt_idx) { 241 + err = usb_set_interface(subs->dev, 242 + subs->sync_endpoint->iface, 243 + subs->sync_endpoint->alt_idx); 244 + if (err < 0) { 245 + snd_printk(KERN_ERR 246 + "%d:%d:%d: cannot set interface (%d)\n", 247 + subs->dev->devnum, 248 + subs->sync_endpoint->iface, 249 + subs->sync_endpoint->alt_idx, err); 250 + return -EIO; 251 + } 252 + } 253 + 239 254 snd_printdd(KERN_DEBUG "Starting sync EP @%p\n", ep); 240 255 241 256 ep->sync_slave = subs->data_endpoint; 242 - err = snd_usb_endpoint_start(ep); 257 + err = snd_usb_endpoint_start(ep, can_sleep); 243 258 if (err < 0) { 244 259 clear_bit(SUBSTREAM_FLAG_SYNC_EP_STARTED, &subs->flags); 245 260 return err; ··· 559 544 subs->last_frame_number = 0; 560 545 runtime->delay = 0; 561 546 562 - /* clear the pending deactivation on the target EPs */ 563 - deactivate_endpoints(subs); 564 - 565 547 /* for playback, submit the URBs now; otherwise, the first hwptr_done 566 548 * updates for all URBs would happen at the same time when starting */ 567 549 if (subs->direction == SNDRV_PCM_STREAM_PLAYBACK) 568 - return start_endpoints(subs); 550 + return start_endpoints(subs, 1); 569 551 570 552 return 0; 571 553 } ··· 1044 1032 struct urb *urb) 
1045 1033 { 1046 1034 struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime; 1035 + struct snd_usb_endpoint *ep = subs->data_endpoint; 1047 1036 struct snd_urb_ctx *ctx = urb->context; 1048 1037 unsigned int counts, frames, bytes; 1049 1038 int i, stride, period_elapsed = 0; ··· 1056 1043 urb->number_of_packets = 0; 1057 1044 spin_lock_irqsave(&subs->lock, flags); 1058 1045 for (i = 0; i < ctx->packets; i++) { 1059 - counts = ctx->packet_size[i]; 1046 + if (ctx->packet_size[i]) 1047 + counts = ctx->packet_size[i]; 1048 + else 1049 + counts = snd_usb_endpoint_next_packet_size(ep); 1050 + 1060 1051 /* set up descriptor */ 1061 1052 urb->iso_frame_desc[i].offset = frames * stride; 1062 1053 urb->iso_frame_desc[i].length = counts * stride; ··· 1111 1094 subs->hwptr_done += bytes; 1112 1095 if (subs->hwptr_done >= runtime->buffer_size * stride) 1113 1096 subs->hwptr_done -= runtime->buffer_size * stride; 1097 + 1098 + /* update delay with exact number of samples queued */ 1099 + runtime->delay = subs->last_delay; 1114 1100 runtime->delay += frames; 1101 + subs->last_delay = runtime->delay; 1102 + 1103 + /* realign last_frame_number */ 1104 + subs->last_frame_number = usb_get_current_frame_number(subs->dev); 1105 + subs->last_frame_number &= 0xFF; /* keep 8 LSBs */ 1106 + 1115 1107 spin_unlock_irqrestore(&subs->lock, flags); 1116 1108 urb->transfer_buffer_length = bytes; 1117 1109 if (period_elapsed) ··· 1138 1112 struct snd_pcm_runtime *runtime = subs->pcm_substream->runtime; 1139 1113 int stride = runtime->frame_bits >> 3; 1140 1114 int processed = urb->transfer_buffer_length / stride; 1115 + int est_delay; 1141 1116 1142 1117 spin_lock_irqsave(&subs->lock, flags); 1143 1118 est_delay = snd_usb_pcm_delay(subs, runtime->rate); 1119 + /* update delay with exact number of samples played */ 1120 + if (processed > subs->last_delay) 1121 + subs->last_delay = 0; 1145 1122 else 1146 - runtime->delay -= processed; 1123 + subs->last_delay -= processed; 1124 + runtime->delay = subs->last_delay; 1125 + 1126 + /* 1127 + * Report when delay estimate is off by more than 2ms. 1128 + * The error should be lower than 2ms since the estimate relies 1129 + * on two reads of a counter updated every ms. 1130 + */ 1131 + if (abs(est_delay - subs->last_delay) * 1000 > runtime->rate * 2) 1132 + snd_printk(KERN_DEBUG "delay: estimated %d, actual %d\n", 1133 + est_delay, subs->last_delay); 1134 + 1147 1135 spin_unlock_irqrestore(&subs->lock, flags); 1148 1136 } 1149 1137 ··· 1215 1175 1216 1176 switch (cmd) { 1217 1177 case SNDRV_PCM_TRIGGER_START: 1218 - err = start_endpoints(subs); 1178 + err = start_endpoints(subs, 0); 1219 1179 if (err < 0) 1220 1180 return err; 1221 1181
+2
tools/perf/util/python-ext-sources
··· 10 10 util/evlist.c 11 11 util/evsel.c 12 12 util/cpumap.c 13 + util/hweight.c 13 14 util/thread_map.c 14 15 util/util.c 15 16 util/xyarray.c 16 17 util/cgroup.c 17 18 util/debugfs.c 19 + util/rblist.c 18 20 util/strlist.c 19 21 ../../lib/rbtree.c
+4 -3
virt/kvm/kvm_main.c
··· 1976 1976 if (copy_from_user(&csigset, sigmask_arg->sigset, 1977 1977 sizeof csigset)) 1978 1978 goto out; 1979 - } 1980 - sigset_from_compat(&sigset, &csigset); 1981 - r = kvm_vcpu_ioctl_set_sigmask(vcpu, &sigset); 1979 + sigset_from_compat(&sigset, &csigset); 1980 + r = kvm_vcpu_ioctl_set_sigmask(vcpu, &sigset); 1981 + } else 1982 + r = kvm_vcpu_ioctl_set_sigmask(vcpu, NULL); 1982 1983 break; 1983 1984 } 1984 1985 default: