Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'pinmux-dt' of git://gitorious.org/sirfprima2-kernel/sirfprima2-kernel into next/dt

This branch enables device tree bindings for the CSR SiRFprimaII (prima2) pin control
subsystem. All patches are acked by Linus Walleij (pinctrl maintainer).

* 'pinmux-dt' of git://gitorious.org/sirfprima2-kernel/sirfprima2-kernel:
pinctrl: sirf: add DT-binding pinmux mapping support
ARM: PRIMA2: rename prima2-cb.dts to prima2.dtsi as it only has SoC features
ARM: PRIMA2: add missing interrupts property for pinctrl node
+ sync to Linux 3.6-rc4

+2662 -1772
+8 -2
Documentation/block/00-INDEX
```diff
···
 biodoc.txt
 	- Notes on the Generic Block Layer Rewrite in Linux 2.5
 capability.txt
-	- Generic Block Device Capability (/sys/block/<disk>/capability)
+	- Generic Block Device Capability (/sys/block/<device>/capability)
+cfq-iosched.txt
+	- CFQ IO scheduler tunables
+data-integrity.txt
+	- Block data integrity
 deadline-iosched.txt
 	- Deadline IO scheduler tunables
 ioprio.txt
 	- Block io priorities (in CFQ scheduler)
+queue-sysfs.txt
+	- Queue's sysfs entries
 request.txt
 	- The members of struct request (in include/linux/blkdev.h)
 stat.txt
-	- Block layer statistics in /sys/block/<dev>/stat
+	- Block layer statistics in /sys/block/<device>/stat
 switching-sched.txt
 	- Switching I/O schedulers at runtime
 writeback_cache_control.txt
```
+77
Documentation/block/cfq-iosched.txt
```diff
···
+CFQ (Complete Fairness Queueing)
+===============================
+
+The main aim of the CFQ scheduler is to provide a fair allocation of disk
+I/O bandwidth among all processes that request I/O operations.
+
+CFQ maintains a per-process queue for processes that issue synchronous
+requests. Asynchronous requests from all processes are batched together
+according to their process's I/O priority.
+
 CFQ ioscheduler tunables
 ========================
···
 there are multiple spindles behind single LUN (Host based hardware RAID
 controller or for storage arrays), setting slice_idle=0 might end up in better
 throughput and acceptable latencies.
+
+back_seek_max
+-------------
+This specifies, in Kbytes, the maximum "distance" for backward seeking.
+The distance is the amount of space from the current head location to the
+sectors that are backward in terms of distance.
+
+This parameter allows the scheduler to anticipate requests in the "backward"
+direction and consider them as being the "next" if they are within this
+distance from the current head location.
+
+back_seek_penalty
+-----------------
+This parameter is used to compute the cost of backward seeking. If the
+backward distance of a request is just 1/back_seek_penalty from a "front"
+request, then the seek cost of the two requests is considered equivalent.
+
+So the scheduler will not bias toward one or the other request (otherwise it
+would bias toward the front request). The default value of back_seek_penalty
+is 2.
+
+fifo_expire_async
+-----------------
+This parameter is used to set the timeout of asynchronous requests. The
+default value is 248ms.
+
+fifo_expire_sync
+----------------
+This parameter is used to set the timeout of synchronous requests. The
+default value is 124ms. To favor synchronous requests over asynchronous
+ones, decrease this value relative to fifo_expire_async.
+
+slice_async
+-----------
+This parameter is the same as slice_sync but for the asynchronous queue.
+The default value is 40ms.
+
+slice_async_rq
+--------------
+This parameter limits the number of asynchronous requests dispatched to the
+device request queue within a queue's slice time. The maximum number of
+requests allowed to be dispatched also depends on the I/O priority. The
+default value is 2.
+
+slice_sync
+----------
+When a queue is selected for execution, its I/O requests are only executed
+for a certain amount of time (time_slice) before switching to another queue.
+This parameter is used to calculate the time slice of the synchronous queue.
+
+time_slice is computed using the equation:
+time_slice = slice_sync + (slice_sync/5 * (4 - prio)). To increase the
+time_slice of the synchronous queue, increase the value of slice_sync. The
+default value is 100ms.
+
+quantum
+-------
+This specifies the number of requests dispatched to the device queue. Within
+a queue's time slice, a request will not be dispatched if the number of
+requests in the device exceeds this parameter. This parameter is used for
+synchronous requests.
+
+For storage with several disks, this setting can limit the parallel
+processing of requests. Increasing the value can therefore improve
+performance, although it can also increase the latency of some I/O due to
+the larger number of requests.
 
 CFQ IOPS Mode for group scheduling
 ===================================
```
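The slice_sync formula in the patch text is plain integer arithmetic; a quick sketch (shell only, using the documented default slice_sync of 100ms and a few example ioprio values — not kernel code):

```shell
# Sketch: evaluate time_slice = slice_sync + (slice_sync/5 * (4 - prio))
# with integer arithmetic, as documented for the CFQ slice_sync tunable.
slice_sync=100   # default, in ms

for prio in 0 4 7; do
    time_slice=$(( slice_sync + slice_sync / 5 * (4 - prio) ))
    echo "prio=$prio -> time_slice=${time_slice}ms"
done
```

Higher-priority (numerically lower) ioprio values get longer slices: prio 0 yields 180ms, prio 4 the base 100ms.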
+64
Documentation/block/queue-sysfs.txt
```diff
···
 Files denoted with a RO postfix are readonly and the RW postfix means
 read-write.
 
+add_random (RW)
+----------------
+This file allows to turn off the disk entropy contribution. The default
+value of this file is '1' (on).
+
+discard_granularity (RO)
+-----------------------
+This shows the size of the internal allocation unit of the device in bytes,
+if reported by the device. A value of '0' means the device does not support
+the discard functionality.
+
+discard_max_bytes (RO)
+----------------------
+Devices that support discard functionality may have internal limits on
+the number of bytes that can be trimmed or unmapped in a single operation.
+The discard_max_bytes parameter is set by the device driver to the maximum
+number of bytes that can be discarded in a single operation. Discard
+requests issued to the device must not exceed this limit. A discard_max_bytes
+value of 0 means that the device does not support discard functionality.
+
+discard_zeroes_data (RO)
+------------------------
+When read, this file shows whether discarded blocks are zeroed by the
+device. If its value is '1' the blocks are zeroed, otherwise they are not.
+
 hw_sector_size (RO)
 -------------------
 This is the hardware sector size of the device, in bytes.
 
+iostats (RW)
+-------------
+This file is used to control (on/off) the iostats accounting of the
+disk.
+
+logical_block_size (RO)
+-----------------------
+This is the logical block size of the device, in bytes.
+
 max_hw_sectors_kb (RO)
 ----------------------
 This is the maximum number of kilobytes supported in a single data transfer.
+
+max_integrity_segments (RO)
+---------------------------
+When read, this file shows the maximum number of integrity segments, as
+set by the block layer, which a hardware controller can handle.
 
 max_sectors_kb (RW)
 -------------------
 This is the maximum number of kilobytes that the block layer will allow
 for a filesystem request. Must be smaller than or equal to the maximum
 size allowed by the hardware.
+
+max_segments (RO)
+-----------------
+Maximum number of segments of the device.
+
+max_segment_size (RO)
+---------------------
+Maximum segment size of the device.
+
+minimum_io_size (RO)
+--------------------
+This is the smallest preferred io size reported by the device.
 
 nomerges (RW)
 -------------
···
 each request queue may have up to N request pools, each independently
 regulated by nr_requests.
 
+optimal_io_size (RO)
+--------------------
+This is the optimal io size reported by the device.
+
+physical_block_size (RO)
+------------------------
+This is the physical block size of the device, in bytes.
+
 read_ahead_kb (RW)
 ------------------
 Maximum number of kilobytes to read-ahead for filesystems on this block
 device.
+
+rotational (RW)
+---------------
+This file is used to state whether the device is of rotational or
+non-rotational type.
 
 rq_affinity (RW)
 ----------------
```
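The attributes documented above live under /sys/block/<device>/queue. A minimal shell sketch for inspecting them (the device name sda is an assumption; attributes a given kernel or device does not expose are skipped):

```shell
# Sketch: dump a few request-queue attributes from a sysfs-style directory.
# On a real system the directory is /sys/block/<device>/queue.
show_queue_attrs() {
    q=$1
    for attr in rotational add_random iostats logical_block_size nr_requests; do
        if [ -r "$q/$attr" ]; then
            printf '%s = %s\n' "$attr" "$(cat "$q/$attr")"
        fi
    done
}

show_queue_attrs /sys/block/sda/queue   # "sda" is a hypothetical device name

# RW attributes are tuned by writing to them, typically as root, e.g.:
#   echo 0 > /sys/block/sda/queue/add_random   # stop disk entropy contribution
#   echo 0 > /sys/block/sda/queue/iostats      # disable iostats accounting
```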
+4 -4
Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
```diff
···
 - compatible : Should be "fsl,<chip>-esdhc"
 
 Optional properties:
-- fsl,cd-internal : Indicate to use controller internal card detection
-- fsl,wp-internal : Indicate to use controller internal write protection
+- fsl,cd-controller : Indicate to use controller internal card detection
+- fsl,wp-controller : Indicate to use controller internal write protection
 
 Examples:
···
 	compatible = "fsl,imx51-esdhc";
 	reg = <0x70004000 0x4000>;
 	interrupts = <1>;
-	fsl,cd-internal;
-	fsl,wp-internal;
+	fsl,cd-controller;
+	fsl,wp-controller;
 };
 
 esdhc@70008000 {
```
+1 -1
Documentation/watchdog/src/watchdog-test.c
```diff
···
  * or "-e" to enable the card.
  */
 
-void term(int sig)
+static void term(int sig)
 {
 	close(fd);
 	fprintf(stderr, "Stopping watchdog ticks...\n");
```
+1 -1
Makefile
```diff
 VERSION = 3
 PATCHLEVEL = 6
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Saber-toothed Squirrel
 
 # *DOCUMENTATION*
```
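These four Makefile variables compose into the kernel release string. A simplified shell sketch of the composition (the real Makefile's `kernelversion` rule conditionally includes each component; this shows the common case):

```shell
# Sketch: how the top-level Makefile variables form the release string.
VERSION=3
PATCHLEVEL=6
SUBLEVEL=0
EXTRAVERSION=-rc4

kernelversion="$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
echo "$kernelversion"   # -> 3.6.0-rc4
```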
+1
arch/arm/Kconfig
```diff
···
 config CPU_FREQ_IMX
 	tristate "CPUfreq driver for i.MX CPUs"
 	depends on ARCH_MXC && CPU_FREQ
+	select CPU_FREQ_TABLE
 	help
 	  This enables the CPUfreq driver for i.MX CPUs.
```
+5
arch/arm/boot/dts/am33xx.dtsi
```diff
···
 			#size-cells = <0>;
 			ti,hwmods = "i2c3";
 		};
+
+		wdt2: wdt@44e35000 {
+			compatible = "ti,omap3-wdt";
+			ti,hwmods = "wd_timer2";
+		};
 	};
 };
```
+2 -2
arch/arm/boot/dts/imx51-babbage.dts
```diff
···
 	aips@70000000 { /* aips-1 */
 		spba@70000000 {
 			esdhc@70004000 { /* ESDHC1 */
-				fsl,cd-internal;
-				fsl,wp-internal;
+				fsl,cd-controller;
+				fsl,wp-controller;
 				status = "okay";
 			};
```
+5 -1
arch/arm/boot/dts/kirkwood-iconnect.dts
```diff
···
 		};
 		power-blue {
 			label = "power:blue";
-			gpios = <&gpio1 11 0>;
+			gpios = <&gpio1 10 0>;
 			linux,default-trigger = "timer";
+		};
+		power-red {
+			label = "power:red";
+			gpios = <&gpio1 11 0>;
 		};
 		usb1 {
 			label = "usb1:blue";
```
-424
arch/arm/boot/dts/prima2-cb.dts
```diff
-/dts-v1/;
-/ {
-	model = "SiRF Prima2 eVB";
-	compatible = "sirf,prima2-cb", "sirf,prima2";
-	#address-cells = <1>;
-	#size-cells = <1>;
-	interrupt-parent = <&intc>;
-
-	memory {
-		reg = <0x00000000 0x20000000>;
-	};
-
-	chosen {
-		bootargs = "mem=512M real_root=/dev/mmcblk0p2 console=ttyS0 panel=1 bootsplash=true bpp=16 androidboot.console=ttyS1";
-		linux,stdout-path = &uart1;
-	};
···
-};
```

(Entire file removed. The SoC-level nodes — the cpus node and the axi, rtc-iobg, and uus-iobg buses with their peripherals — are carried over into prima2.dtsi, per the rename commit in the merge above.)
+37
arch/arm/boot/dts/prima2-evb.dts
```diff
+/*
+ * DTS file for CSR SiRFprimaII Evaluation Board
+ *
+ * Copyright (c) 2012 Cambridge Silicon Radio Limited, a CSR plc group company.
+ *
+ * Licensed under GPLv2 or later.
+ */
+
+/dts-v1/;
+
+/include/ "prima2.dtsi"
+
+/ {
+	model = "CSR SiRFprimaII Evaluation Board";
+	compatible = "sirf,prima2", "sirf,prima2-cb";
+
+	memory {
+		reg = <0x00000000 0x20000000>;
+	};
+
+	axi {
+		peri-iobg {
+			uart@b0060000 {
+				pinctrl-names = "default";
+				pinctrl-0 = <&uart1_pins_a>;
+			};
+			spi@b00d0000 {
+				pinctrl-names = "default";
+				pinctrl-0 = <&spi0_pins_a>;
+			};
+			spi@b0170000 {
+				pinctrl-names = "default";
+				pinctrl-0 = <&spi1_pins_a>;
+			};
+		};
+	};
+};
```
+640
arch/arm/boot/dts/prima2.dtsi
```diff
+/*
+ * DTS file for CSR SiRFprimaII SoC
+ *
+ * Copyright (c) 2012 Cambridge Silicon Radio Limited, a CSR plc group company.
+ *
+ * Licensed under GPLv2 or later.
+ */
+
+/include/ "skeleton.dtsi"
+/ {
+	compatible = "sirf,prima2";
+	#address-cells = <1>;
+	#size-cells = <1>;
+	interrupt-parent = <&intc>;
···
+		gpio: pinctrl@b0120000 {
+			#gpio-cells = <2>;
+			#interrupt-cells = <2>;
+			compatible = "sirf,prima2-pinctrl";
+			reg = <0xb0120000 0x10000>;
+			interrupts = <43 44 45 46 47>;
+			gpio-controller;
+			interrupt-controller;
+
+			uart1_pins_a: uart1@0 {
+				uart {
+					sirf,pins = "uart1grp";
+					sirf,function = "uart1";
+				};
+			};
+			spi0_pins_a: spi0@0 {
+				spi {
+					sirf,pins = "spi0grp";
+					sirf,function = "spi0";
+				};
+			};
+			spi1_pins_a: spi1@0 {
+				spi {
+					sirf,pins = "spi1grp";
+					sirf,function = "spi1";
+				};
+			};
···
```

(The bulk of the new file is the SoC description carried over from prima2-cb.dts — cpus, the axi bus with its sys/mem/disp/graphics/multimedia/dsp/peri/pci I/O block groups, rtc-iobg, and uus-iobg. The substantive change is the former gpio-controller node becoming the "sirf,prima2-pinctrl" node shown above, with interrupts added and sirf,pins/sirf,function groups defined for lcd, lcdrom, uart0–2, spi0–1, i2c0–1, pwm0–3, gps, vip, sdmmc0–5, i2s, ac97, nand, usp0–2, usb0/1 utmi drvbus, warm_rst, pulse_count, and cko0/cko1_rst. The diff reproduction is truncated here in the source.)
0x40000>; 614 + 615 + usb0: usb@b00e0000 { 616 + compatible = "chipidea,ci13611a-prima2"; 617 + reg = <0xb8000000 0x10000>; 618 + interrupts = <10>; 619 + }; 620 + 621 + usb1: usb@b00f0000 { 622 + compatible = "chipidea,ci13611a-prima2"; 623 + reg = <0xb8010000 0x10000>; 624 + interrupts = <11>; 625 + }; 626 + 627 + sata@b00f0000 { 628 + compatible = "synopsys,dwc-ahsata"; 629 + reg = <0xb8020000 0x10000>; 630 + interrupts = <37>; 631 + }; 632 + 633 + security@b00f0000 { 634 + compatible = "sirf,prima2-security"; 635 + reg = <0xb8030000 0x10000>; 636 + interrupts = <42>; 637 + }; 638 + }; 639 + }; 640 + };
+3
arch/arm/boot/dts/twl6030.dtsi
··· 66 66 67 67 vcxio: regulator@8 { 68 68 compatible = "ti,twl6030-vcxio"; 69 + regulator-always-on; 69 70 }; 70 71 71 72 vusb: regulator@9 { ··· 75 74 76 75 v1v8: regulator@10 { 77 76 compatible = "ti,twl6030-v1v8"; 77 + regulator-always-on; 78 78 }; 79 79 80 80 v2v1: regulator@11 { 81 81 compatible = "ti,twl6030-v2v1"; 82 + regulator-always-on; 82 83 }; 83 84 84 85 clk32kg: regulator@12 {
+1
arch/arm/configs/u8500_defconfig
··· 86 86 CONFIG_LEDS_CLASS=y 87 87 CONFIG_LEDS_LM3530=y 88 88 CONFIG_LEDS_LP5521=y 89 + CONFIG_LEDS_GPIO=y 89 90 CONFIG_RTC_CLASS=y 90 91 CONFIG_RTC_DRV_AB8500=y 91 92 CONFIG_RTC_DRV_PL031=y
+2 -1
arch/arm/mach-dove/common.c
··· 102 102 void __init dove_ge00_init(struct mv643xx_eth_platform_data *eth_data) 103 103 { 104 104 orion_ge00_init(eth_data, DOVE_GE00_PHYS_BASE, 105 - IRQ_DOVE_GE00_SUM, IRQ_DOVE_GE00_ERR); 105 + IRQ_DOVE_GE00_SUM, IRQ_DOVE_GE00_ERR, 106 + 1600); 106 107 } 107 108 108 109 /*****************************************************************************
+7
arch/arm/mach-exynos/mach-origen.c
··· 42 42 #include <plat/backlight.h> 43 43 #include <plat/fb.h> 44 44 #include <plat/mfc.h> 45 + #include <plat/hdmi.h> 45 46 46 47 #include <mach/ohci.h> 47 48 #include <mach/map.h> ··· 735 734 s3c_gpio_setpull(EXYNOS4_GPX2(2), S3C_GPIO_PULL_NONE); 736 735 } 737 736 737 + /* I2C module and id for HDMIPHY */ 738 + static struct i2c_board_info hdmiphy_info = { 739 + I2C_BOARD_INFO("hdmiphy-exynos4210", 0x38), 740 + }; 741 + 738 742 static void s5p_tv_setup(void) 739 743 { 740 744 /* Direct HPD to HDMI chip */ ··· 787 781 788 782 s5p_tv_setup(); 789 783 s5p_i2c_hdmiphy_set_platdata(NULL); 784 + s5p_hdmi_set_platdata(&hdmiphy_info, NULL, 0); 790 785 791 786 #ifdef CONFIG_DRM_EXYNOS 792 787 s5p_device_fimd0.dev.platform_data = &drm_fimd_pdata;
+7
arch/arm/mach-exynos/mach-smdkv310.c
··· 40 40 #include <plat/mfc.h> 41 41 #include <plat/ehci.h> 42 42 #include <plat/clock.h> 43 + #include <plat/hdmi.h> 43 44 44 45 #include <mach/map.h> 45 46 #include <mach/ohci.h> ··· 355 354 .pwm_period_ns = 1000, 356 355 }; 357 356 357 + /* I2C module and id for HDMIPHY */ 358 + static struct i2c_board_info hdmiphy_info = { 359 + I2C_BOARD_INFO("hdmiphy-exynos4210", 0x38), 360 + }; 361 + 358 362 static void s5p_tv_setup(void) 359 363 { 360 364 /* direct HPD to HDMI chip */ ··· 394 388 395 389 s5p_tv_setup(); 396 390 s5p_i2c_hdmiphy_set_platdata(NULL); 391 + s5p_hdmi_set_platdata(&hdmiphy_info, NULL, 0); 397 392 398 393 samsung_keypad_set_platdata(&smdkv310_keypad_data); 399 394
+5 -5
arch/arm/mach-imx/Makefile
··· 9 9 obj-$(CONFIG_SOC_IMX31) += mm-imx3.o cpu-imx31.o clk-imx31.o iomux-imx31.o ehci-imx31.o pm-imx3.o 10 10 obj-$(CONFIG_SOC_IMX35) += mm-imx3.o cpu-imx35.o clk-imx35.o ehci-imx35.o pm-imx3.o 11 11 12 - obj-$(CONFIG_SOC_IMX5) += cpu-imx5.o mm-imx5.o clk-imx51-imx53.o ehci-imx5.o pm-imx5.o cpu_op-mx51.o 12 + imx5-pm-$(CONFIG_PM) += pm-imx5.o 13 + obj-$(CONFIG_SOC_IMX5) += cpu-imx5.o mm-imx5.o clk-imx51-imx53.o ehci-imx5.o $(imx5-pm-y) cpu_op-mx51.o 13 14 14 15 obj-$(CONFIG_COMMON_CLK) += clk-pllv1.o clk-pllv2.o clk-pllv3.o clk-gate2.o \ 15 16 clk-pfd.o clk-busy.o ··· 71 70 obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o 72 71 obj-$(CONFIG_HAVE_IMX_MMDC) += mmdc.o 73 72 obj-$(CONFIG_HAVE_IMX_SRC) += src.o 74 - obj-$(CONFIG_CPU_V7) += head-v7.o 75 - AFLAGS_head-v7.o :=-Wa,-march=armv7-a 76 - obj-$(CONFIG_SMP) += platsmp.o 73 + AFLAGS_headsmp.o :=-Wa,-march=armv7-a 74 + obj-$(CONFIG_SMP) += headsmp.o platsmp.o 77 75 obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o 78 76 obj-$(CONFIG_SOC_IMX6Q) += clk-imx6q.o mach-imx6q.o 79 77 80 78 ifeq ($(CONFIG_PM),y) 81 - obj-$(CONFIG_SOC_IMX6Q) += pm-imx6q.o 79 + obj-$(CONFIG_SOC_IMX6Q) += pm-imx6q.o headsmp.o 82 80 endif 83 81 84 82 # i.MX5 based machines
+5 -3
arch/arm/mach-imx/clk-imx6q.c
··· 152 152 ssi2, ssi3, uart_ipg, uart_serial, usboh3, usdhc1, usdhc2, usdhc3, 153 153 usdhc4, vdo_axi, vpu_axi, cko1, pll1_sys, pll2_bus, pll3_usb_otg, 154 154 pll4_audio, pll5_video, pll6_mlb, pll7_usb_host, pll8_enet, ssi1_ipg, 155 - ssi2_ipg, ssi3_ipg, rom, usbphy1, usbphy2, 155 + ssi2_ipg, ssi3_ipg, rom, usbphy1, usbphy2, ldb_di0_div_3_5, ldb_di1_div_3_5, 156 156 clk_max 157 157 }; 158 158 ··· 288 288 clk[gpu3d_shader] = imx_clk_divider("gpu3d_shader", "gpu3d_shader_sel", base + 0x18, 29, 3); 289 289 clk[ipu1_podf] = imx_clk_divider("ipu1_podf", "ipu1_sel", base + 0x3c, 11, 3); 290 290 clk[ipu2_podf] = imx_clk_divider("ipu2_podf", "ipu2_sel", base + 0x3c, 16, 3); 291 - clk[ldb_di0_podf] = imx_clk_divider("ldb_di0_podf", "ldb_di0_sel", base + 0x20, 10, 1); 292 - clk[ldb_di1_podf] = imx_clk_divider("ldb_di1_podf", "ldb_di1_sel", base + 0x20, 11, 1); 291 + clk[ldb_di0_div_3_5] = imx_clk_fixed_factor("ldb_di0_div_3_5", "ldb_di0_sel", 2, 7); 292 + clk[ldb_di0_podf] = imx_clk_divider("ldb_di0_podf", "ldb_di0_div_3_5", base + 0x20, 10, 1); 293 + clk[ldb_di1_div_3_5] = imx_clk_fixed_factor("ldb_di1_div_3_5", "ldb_di1_sel", 2, 7); 294 + clk[ldb_di1_podf] = imx_clk_divider("ldb_di1_podf", "ldb_di1_div_3_5", base + 0x20, 11, 1); 293 295 clk[ipu1_di0_pre] = imx_clk_divider("ipu1_di0_pre", "ipu1_di0_pre_sel", base + 0x34, 3, 3); 294 296 clk[ipu1_di1_pre] = imx_clk_divider("ipu1_di1_pre", "ipu1_di1_pre_sel", base + 0x34, 12, 3); 295 297 clk[ipu2_di0_pre] = imx_clk_divider("ipu2_di0_pre", "ipu2_di0_pre_sel", base + 0x38, 3, 3);
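The clk-imx6q hunk above inserts a fixed-factor clock with mult = 2 and div = 7 between the LDB DI mux and its 1-bit post divider: the hardware divides by 3.5, which an integer mult/div pair can only express as 2/7. A minimal sketch of the rate math (the function names here are illustrative, not the kernel's clk framework API):

```c
/* Rate of a fixed-factor clock: parent * mult / div, computed in 64 bits
 * to avoid overflow, as the common clock framework does. */
unsigned long fixed_factor_rate(unsigned long parent_rate,
                                unsigned int mult, unsigned int div)
{
    return (unsigned long)(((unsigned long long)parent_rate * mult) / div);
}

/* The LDB DI path after the patch: /3.5 stage (modelled as 2/7), then the
 * existing 1-bit podf divider (divide by 1 or 2). */
unsigned long ldb_di_rate(unsigned long sel_rate, int podf_div_by_2)
{
    unsigned long r = fixed_factor_rate(sel_rate, 2, 7);
    return podf_div_by_2 ? r / 2 : r;
}
```

For a 455 MHz parent this yields 130 MHz, and 65 MHz with the podf engaged, matching a divide-by-3.5 (and by 7) of the selected rate.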
arch/arm/mach-imx/head-v7.S arch/arm/mach-imx/headsmp.S
+3 -20
arch/arm/mach-imx/hotplug.c
··· 42 42 : "cc"); 43 43 } 44 44 45 - static inline void cpu_leave_lowpower(void) 46 - { 47 - unsigned int v; 48 - 49 - asm volatile( 50 - "mrc p15, 0, %0, c1, c0, 0\n" 51 - " orr %0, %0, %1\n" 52 - " mcr p15, 0, %0, c1, c0, 0\n" 53 - " mrc p15, 0, %0, c1, c0, 1\n" 54 - " orr %0, %0, %2\n" 55 - " mcr p15, 0, %0, c1, c0, 1\n" 56 - : "=&r" (v) 57 - : "Ir" (CR_C), "Ir" (0x40) 58 - : "cc"); 59 - } 60 - 61 45 /* 62 46 * platform-specific code to shutdown a CPU 63 47 * ··· 51 67 { 52 68 cpu_enter_lowpower(); 53 69 imx_enable_cpu(cpu, false); 54 - cpu_do_idle(); 55 - cpu_leave_lowpower(); 56 70 57 - /* We should never return from idle */ 58 - panic("cpu %d unexpectedly exit from shutdown\n", cpu); 71 + /* spin here until hardware takes it down */ 72 + while (1) 73 + ; 59 74 } 60 75 61 76 int platform_cpu_disable(unsigned int cpu)
+2 -2
arch/arm/mach-imx/mach-imx6q.c
··· 71 71 /* For imx6q sabrelite board: set KSZ9021RN RGMII pad skew */ 72 72 static int ksz9021rn_phy_fixup(struct phy_device *phydev) 73 73 { 74 - if (IS_ENABLED(CONFIG_PHYLIB)) { 74 + if (IS_BUILTIN(CONFIG_PHYLIB)) { 75 75 /* min rx data delay */ 76 76 phy_write(phydev, 0x0b, 0x8105); 77 77 phy_write(phydev, 0x0c, 0x0000); ··· 112 112 113 113 static void __init imx6q_sabrelite_init(void) 114 114 { 115 - if (IS_ENABLED(CONFIG_PHYLIB)) 115 + if (IS_BUILTIN(CONFIG_PHYLIB)) 116 116 phy_register_fixup_for_uid(PHY_ID_KSZ9021, MICREL_PHY_ID_MASK, 117 117 ksz9021rn_phy_fixup); 118 118 imx6q_sabrelite_cko1_setup();
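The mach-imx6q hunk swaps IS_ENABLED() for IS_BUILTIN(): the phy fixup calls functions that are only reachable when PHYLIB is built in (=y), whereas IS_ENABLED() is also true for =m. A simplified model of the kernel's kconfig test macros (from include/linux/kconfig.h; CONFIG_FOO is defined to 1 for =y builds, CONFIG_FOO_MODULE for =m), with PHYLIB pretend-configured as a module to show the difference:

```c
/* Preprocessor trick: expands to 1 if the option macro is defined to 1,
 * else to 0, without needing #ifdef at the use site. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)

#define IS_BUILTIN(option) __is_defined(option)
#define IS_MODULE(option)  __is_defined(option##_MODULE)
#define IS_ENABLED(option) (IS_BUILTIN(option) || IS_MODULE(option))

/* Pretend CONFIG_PHYLIB=m for this sketch: */
#define CONFIG_PHYLIB_MODULE 1

int phylib_enabled(void) { return IS_ENABLED(CONFIG_PHYLIB); } /* =y or =m */
int phylib_builtin(void) { return IS_BUILTIN(CONFIG_PHYLIB); } /* =y only */
```

With PHYLIB as a module, IS_ENABLED() is true but IS_BUILTIN() is false, so the guarded fixup code is compiled out, which is the point of the fix.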
+2 -1
arch/arm/mach-kirkwood/Makefile.boot
··· 7 7 dtb-$(CONFIG_MACH_DLINK_KIRKWOOD_DT) += kirkwood-dns325.dtb 8 8 dtb-$(CONFIG_MACH_ICONNECT_DT) += kirkwood-iconnect.dtb 9 9 dtb-$(CONFIG_MACH_IB62X0_DT) += kirkwood-ib62x0.dtb 10 - dtb-$(CONFIG_MACH_TS219_DT) += kirkwood-qnap-ts219.dtb 10 + dtb-$(CONFIG_MACH_TS219_DT) += kirkwood-ts219-6281.dtb 11 + dtb-$(CONFIG_MACH_TS219_DT) += kirkwood-ts219-6282.dtb 11 12 dtb-$(CONFIG_MACH_GOFLEXNET_DT) += kirkwood-goflexnet.dtb 12 13 dtb-$(CONFIG_MACH_LSXL_DT) += kirkwood-lschlv2.dtb 13 14 dtb-$(CONFIG_MACH_LSXL_DT) += kirkwood-lsxhl.dtb
+2 -2
arch/arm/mach-kirkwood/common.c
··· 301 301 { 302 302 orion_ge00_init(eth_data, 303 303 GE00_PHYS_BASE, IRQ_KIRKWOOD_GE00_SUM, 304 - IRQ_KIRKWOOD_GE00_ERR); 304 + IRQ_KIRKWOOD_GE00_ERR, 1600); 305 305 /* The interface forgets the MAC address assigned by u-boot if 306 306 the clock is turned off, so claim the clk now. */ 307 307 clk_prepare_enable(ge0); ··· 315 315 { 316 316 orion_ge01_init(eth_data, 317 317 GE01_PHYS_BASE, IRQ_KIRKWOOD_GE01_SUM, 318 - IRQ_KIRKWOOD_GE01_ERR); 318 + IRQ_KIRKWOOD_GE01_ERR, 1600); 319 319 clk_prepare_enable(ge1); 320 320 } 321 321
+1 -1
arch/arm/mach-mmp/sram.c
··· 68 68 struct resource *res; 69 69 int ret = 0; 70 70 71 - if (!pdata && !pdata->pool_name) 71 + if (!pdata || !pdata->pool_name) 72 72 return -ENODEV; 73 73 74 74 info = kzalloc(sizeof(*info), GFP_KERNEL);
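The one-character sram.c fix above is the classic NULL-guard bug: with `&&`, a NULL `pdata` makes the left operand true and the right operand is then evaluated anyway, dereferencing NULL; conversely, a non-NULL `pdata` with a missing `pool_name` passes the check entirely. `||` short-circuits after `!pdata` and rejects both cases. A minimal standalone sketch (struct and function names are illustrative stand-ins for the driver's):

```c
#include <stddef.h>

struct sram_platdata { const char *pool_name; };

/* Fixed guard: fail if platform data is absent OR the pool name is absent.
 * Returns -1 as a stand-in for -ENODEV. */
int check_platdata(const struct sram_platdata *pdata)
{
    if (!pdata || !pdata->pool_name)
        return -1;
    return 0;
}
```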
+1 -1
arch/arm/mach-mv78xx0/addr-map.c
··· 37 37 #define WIN0_OFF(n) (BRIDGE_VIRT_BASE + 0x0000 + ((n) << 4)) 38 38 #define WIN8_OFF(n) (BRIDGE_VIRT_BASE + 0x0900 + (((n) - 8) << 4)) 39 39 40 - static void __init __iomem *win_cfg_base(int win) 40 + static void __init __iomem *win_cfg_base(const struct orion_addr_map_cfg *cfg, int win) 41 41 { 42 42 /* 43 43 * Find the control register base address for this window.
+4 -2
arch/arm/mach-mv78xx0/common.c
··· 213 213 { 214 214 orion_ge00_init(eth_data, 215 215 GE00_PHYS_BASE, IRQ_MV78XX0_GE00_SUM, 216 - IRQ_MV78XX0_GE_ERR); 216 + IRQ_MV78XX0_GE_ERR, 217 + MV643XX_TX_CSUM_DEFAULT_LIMIT); 217 218 } 218 219 219 220 ··· 225 224 { 226 225 orion_ge01_init(eth_data, 227 226 GE01_PHYS_BASE, IRQ_MV78XX0_GE01_SUM, 228 - NO_IRQ); 227 + NO_IRQ, 228 + MV643XX_TX_CSUM_DEFAULT_LIMIT); 229 229 } 230 230 231 231
+2 -1
arch/arm/mach-omap2/Kconfig
··· 62 62 select PM_OPP if PM 63 63 select USB_ARCH_HAS_EHCI if USB_SUPPORT 64 64 select ARM_CPU_SUSPEND if PM 65 - select ARCH_NEEDS_CPU_IDLE_COUPLED 65 + select ARCH_NEEDS_CPU_IDLE_COUPLED if SMP 66 66 67 67 config SOC_OMAP5 68 68 bool "TI OMAP5" 69 69 select CPU_V7 70 70 select ARM_GIC 71 71 select HAVE_SMP 72 + select ARM_CPU_SUSPEND if PM 72 73 73 74 comment "OMAP Core Type" 74 75 depends on ARCH_OMAP2
+2
arch/arm/mach-omap2/board-igep0020.c
··· 554 554 555 555 #ifdef CONFIG_OMAP_MUX 556 556 static struct omap_board_mux board_mux[] __initdata = { 557 + /* SMSC9221 LAN Controller ETH IRQ (GPIO_176) */ 558 + OMAP3_MUX(MCSPI1_CS2, OMAP_MUX_MODE4 | OMAP_PIN_INPUT), 557 559 { .reg_offset = OMAP_MUX_TERMINATOR }, 558 560 }; 559 561 #endif
+1
arch/arm/mach-omap2/board-omap3evm.c
··· 58 58 #include "hsmmc.h" 59 59 #include "common-board-devices.h" 60 60 61 + #define OMAP3_EVM_TS_GPIO 175 61 62 #define OMAP3_EVM_EHCI_VBUS 22 62 63 #define OMAP3_EVM_EHCI_SELECT 61 63 64
-11
arch/arm/mach-omap2/common-board-devices.c
··· 35 35 .turbo_mode = 0, 36 36 }; 37 37 38 - /* 39 - * ADS7846 driver maybe request a gpio according to the value 40 - * of pdata->get_pendown_state, but we have done this. So set 41 - * get_pendown_state to avoid twice gpio requesting. 42 - */ 43 - static int omap3_get_pendown_state(void) 44 - { 45 - return !gpio_get_value(OMAP3_EVM_TS_GPIO); 46 - } 47 - 48 38 static struct ads7846_platform_data ads7846_config = { 49 39 .x_max = 0x0fff, 50 40 .y_max = 0x0fff, ··· 45 55 .debounce_rep = 1, 46 56 .gpio_pendown = -EINVAL, 47 57 .keep_vref_on = 1, 48 - .get_pendown_state = &omap3_get_pendown_state, 49 58 }; 50 59 51 60 static struct spi_board_info ads7846_spi_board_info __initdata = {
-1
arch/arm/mach-omap2/common-board-devices.h
··· 4 4 #include "twl-common.h" 5 5 6 6 #define NAND_BLOCK_SIZE SZ_128K 7 - #define OMAP3_EVM_TS_GPIO 175 8 7 9 8 struct mtd_partition; 10 9 struct ads7846_platform_data;
+2 -1
arch/arm/mach-omap2/cpuidle44xx.c
··· 238 238 for_each_cpu(cpu_id, cpu_online_mask) { 239 239 dev = &per_cpu(omap4_idle_dev, cpu_id); 240 240 dev->cpu = cpu_id; 241 + #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED 241 242 dev->coupled_cpus = *cpu_online_mask; 242 - 243 + #endif 243 244 cpuidle_register_driver(&omap4_idle_driver); 244 245 245 246 if (cpuidle_register_device(dev)) {
-1
arch/arm/mach-omap2/mux.h
··· 127 127 * @gpio: GPIO number 128 128 * @muxnames: available signal modes for a ball 129 129 * @balls: available balls on the package 130 - * @partition: mux partition 131 130 */ 132 131 struct omap_mux { 133 132 u16 reg_offset;
+1 -1
arch/arm/mach-omap2/opp4xxx_data.c
··· 94 94 { 95 95 int r = -ENODEV; 96 96 97 - if (!cpu_is_omap44xx()) 97 + if (!cpu_is_omap443x()) 98 98 return r; 99 99 100 100 r = omap_init_opp_table(omap44xx_opp_def_list,
+5 -16
arch/arm/mach-omap2/pm34xx.c
··· 272 272 per_next_state = pwrdm_read_next_pwrst(per_pwrdm); 273 273 core_next_state = pwrdm_read_next_pwrst(core_pwrdm); 274 274 275 - if (mpu_next_state < PWRDM_POWER_ON) { 276 - pwrdm_pre_transition(mpu_pwrdm); 277 - pwrdm_pre_transition(neon_pwrdm); 278 - } 275 + pwrdm_pre_transition(NULL); 279 276 280 277 /* PER */ 281 278 if (per_next_state < PWRDM_POWER_ON) { 282 - pwrdm_pre_transition(per_pwrdm); 283 279 per_going_off = (per_next_state == PWRDM_POWER_OFF) ? 1 : 0; 284 280 omap2_gpio_prepare_for_idle(per_going_off); 285 281 } 286 282 287 283 /* CORE */ 288 284 if (core_next_state < PWRDM_POWER_ON) { 289 - pwrdm_pre_transition(core_pwrdm); 290 285 if (core_next_state == PWRDM_POWER_OFF) { 291 286 omap3_core_save_context(); 292 287 omap3_cm_save_context(); ··· 334 339 omap2_prm_clear_mod_reg_bits(OMAP3430_AUTO_OFF_MASK, 335 340 OMAP3430_GR_MOD, 336 341 OMAP3_PRM_VOLTCTRL_OFFSET); 337 - pwrdm_post_transition(core_pwrdm); 338 342 } 339 343 omap3_intc_resume_idle(); 340 344 341 - /* PER */ 342 - if (per_next_state < PWRDM_POWER_ON) { 343 - omap2_gpio_resume_after_idle(); 344 - pwrdm_post_transition(per_pwrdm); 345 - } 345 + pwrdm_post_transition(NULL); 346 346 347 - if (mpu_next_state < PWRDM_POWER_ON) { 348 - pwrdm_post_transition(mpu_pwrdm); 349 - pwrdm_post_transition(neon_pwrdm); 350 - } 347 + /* PER */ 348 + if (per_next_state < PWRDM_POWER_ON) 349 + omap2_gpio_resume_after_idle(); 351 350 } 352 351 353 352 static void omap3_pm_idle(void)
+6 -2
arch/arm/mach-omap2/sleep44xx.S
··· 56 56 * The restore function pointer is stored at CPUx_WAKEUP_NS_PA_ADDR_OFFSET. 57 57 * It returns to the caller for CPU INACTIVE and ON power states or in case 58 58 * CPU failed to transition to targeted OFF/DORMANT state. 59 + * 60 + * omap4_finish_suspend() calls v7_flush_dcache_all() which doesn't save 61 + * stack frame and it expects the caller to take care of it. Hence the entire 62 + * stack frame is saved to avoid possible stack corruption. 59 63 */ 60 64 ENTRY(omap4_finish_suspend) 61 - stmfd sp!, {lr} 65 + stmfd sp!, {r4-r12, lr} 62 66 cmp r0, #0x0 63 67 beq do_WFI @ No lowpower state, jump to WFI 64 68 ··· 230 226 skip_scu_gp_clear: 231 227 isb 232 228 dsb 233 - ldmfd sp!, {pc} 229 + ldmfd sp!, {r4-r12, pc} 234 230 ENDPROC(omap4_finish_suspend) 235 231 236 232 /*
+1
arch/arm/mach-omap2/twl-common.c
··· 67 67 const char *pmic_type, int pmic_irq, 68 68 struct twl4030_platform_data *pmic_data) 69 69 { 70 + omap_mux_init_signal("sys_nirq", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE); 70 71 strncpy(pmic_i2c_board_info.type, pmic_type, 71 72 sizeof(pmic_i2c_board_info.type)); 72 73 pmic_i2c_board_info.irq = pmic_irq;
+2 -1
arch/arm/mach-orion5x/common.c
··· 109 109 { 110 110 orion_ge00_init(eth_data, 111 111 ORION5X_ETH_PHYS_BASE, IRQ_ORION5X_ETH_SUM, 112 - IRQ_ORION5X_ETH_ERR); 112 + IRQ_ORION5X_ETH_ERR, 113 + MV643XX_TX_CSUM_DEFAULT_LIMIT); 113 114 } 114 115 115 116
+2
arch/arm/mach-prima2/Makefile.boot
··· 1 1 zreladdr-y += 0x00008000 2 2 params_phys-y := 0x00000100 3 3 initrd_phys-y := 0x00800000 4 + 5 + dtb-$(CONFIG_ARCH_PRIMA2) += prima2-evb.dtb
+2 -1
arch/arm/mach-s3c24xx/include/mach/dma.h
··· 24 24 */ 25 25 26 26 enum dma_ch { 27 - DMACH_XD0, 27 + DMACH_DT_PROP = -1, /* not yet supported, do not use */ 28 + DMACH_XD0 = 0, 28 29 DMACH_XD1, 29 30 DMACH_SDI, 30 31 DMACH_SPI0,
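The dma.h hunk reserves a negative sentinel in the channel enum while pinning the first real channel to an explicit 0, so every existing enumerator keeps its old value. The pattern in isolation (abbreviated; only the first few channels shown):

```c
/* Adding a sentinel below zero leaves the established numbering intact:
 * DMACH_XD0 stays 0, and the implicit sequence continues from there. */
enum dma_ch {
    DMACH_DT_PROP = -1, /* channel to be looked up from a DT property */
    DMACH_XD0 = 0,
    DMACH_XD1,
    DMACH_SDI,
};
```

Without the explicit `= 0`, DMACH_XD0 would implicitly follow the sentinel and become 0 anyway, but writing it out makes the ABI-stability intent unmistakable.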
-1
arch/arm/mach-ux500/Kconfig
··· 41 41 config MACH_SNOWBALL 42 42 bool "U8500 Snowball platform" 43 43 select MACH_MOP500 44 - select LEDS_GPIO 45 44 help 46 45 Include support for the snowball development platform. 47 46
+5 -5
arch/arm/mach-ux500/board-mop500-msp.c
··· 191 191 return pdev; 192 192 } 193 193 194 - /* Platform device for ASoC U8500 machine */ 195 - static struct platform_device snd_soc_u8500 = { 196 - .name = "snd-soc-u8500", 194 + /* Platform device for ASoC MOP500 machine */ 195 + static struct platform_device snd_soc_mop500 = { 196 + .name = "snd-soc-mop500", 197 197 .id = 0, 198 198 .dev = { 199 199 .platform_data = NULL, ··· 227 227 { 228 228 struct platform_device *msp1; 229 229 230 - pr_info("%s: Register platform-device 'snd-soc-u8500'.\n", __func__); 231 - platform_device_register(&snd_soc_u8500); 230 + pr_info("%s: Register platform-device 'snd-soc-mop500'.\n", __func__); 231 + platform_device_register(&snd_soc_mop500); 232 232 233 233 pr_info("Initialize MSP I2S-devices.\n"); 234 234 db8500_add_msp_i2s(parent, 0, U8500_MSP0_BASE, IRQ_DB8500_MSP0,
+4
arch/arm/mach-ux500/board-mop500.c
··· 797 797 ARRAY_SIZE(mop500_platform_devs)); 798 798 799 799 mop500_sdi_init(parent); 800 + mop500_msp_init(parent); 800 801 i2c0_devs = ARRAY_SIZE(mop500_i2c0_devices); 801 802 i2c_register_board_info(0, mop500_i2c0_devices, i2c0_devs); 802 803 i2c_register_board_info(2, mop500_i2c2_devices, ··· 805 804 806 805 mop500_uib_init(); 807 806 807 + } else if (of_machine_is_compatible("calaosystems,snowball-a9500")) { 808 + mop500_msp_init(parent); 808 809 } else if (of_machine_is_compatible("st-ericsson,hrefv60+")) { 809 810 /* 810 811 * The HREFv60 board removed a GPIO expander and routed ··· 818 815 ARRAY_SIZE(mop500_platform_devs)); 819 816 820 817 hrefv60_sdi_init(parent); 818 + mop500_msp_init(parent); 821 819 822 820 i2c0_devs = ARRAY_SIZE(mop500_i2c0_devices); 823 821 i2c0_devs -= NUM_PRE_V60_I2C0_DEVICES;
+3 -3
arch/arm/plat-omap/dmtimer.c
··· 189 189 timer->reserved = 1; 190 190 break; 191 191 } 192 + spin_unlock_irqrestore(&dm_timer_lock, flags); 192 193 193 194 if (timer) { 194 195 ret = omap_dm_timer_prepare(timer); ··· 198 197 timer = NULL; 199 198 } 200 199 } 201 - spin_unlock_irqrestore(&dm_timer_lock, flags); 202 200 203 201 if (!timer) 204 202 pr_debug("%s: timer request failed!\n", __func__); ··· 220 220 break; 221 221 } 222 222 } 223 + spin_unlock_irqrestore(&dm_timer_lock, flags); 223 224 224 225 if (timer) { 225 226 ret = omap_dm_timer_prepare(timer); ··· 229 228 timer = NULL; 230 229 } 231 230 } 232 - spin_unlock_irqrestore(&dm_timer_lock, flags); 233 231 234 232 if (!timer) 235 233 pr_debug("%s: timer%d request failed!\n", __func__, id); ··· 258 258 259 259 void omap_dm_timer_disable(struct omap_dm_timer *timer) 260 260 { 261 - pm_runtime_put(&timer->pdev->dev); 261 + pm_runtime_put_sync(&timer->pdev->dev); 262 262 } 263 263 EXPORT_SYMBOL_GPL(omap_dm_timer_disable); 264 264
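The dmtimer hunks above move `spin_unlock_irqrestore()` ahead of `omap_dm_timer_prepare()`, which may sleep through pm_runtime; sleeping with a spinlock held is forbidden. The resulting shape is: claim the resource under the lock, drop the lock, then run the slow preparation, rolling back the claim on failure. A userspace sketch of that ordering, using a pthread mutex as a stand-in for the kernel spinlock (all names here are illustrative):

```c
#include <pthread.h>
#include <stddef.h>

struct timer { int reserved; int prepared; };

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static struct timer timers[4];

/* Stand-in for omap_dm_timer_prepare(): may block, so must run unlocked. */
static int slow_prepare(struct timer *t) { t->prepared = 1; return 0; }

struct timer *request_timer(void)
{
    struct timer *t = NULL;
    int i;

    pthread_mutex_lock(&list_lock);
    for (i = 0; i < 4; i++) {
        if (!timers[i].reserved) {
            t = &timers[i];
            t->reserved = 1;          /* claim it while locked */
            break;
        }
    }
    pthread_mutex_unlock(&list_lock); /* drop the lock first... */

    if (t && slow_prepare(t) != 0) {  /* ...then do the slow part */
        t->reserved = 0;              /* roll back the claim on failure */
        t = NULL;
    }
    return t;
}
```

Marking the timer reserved before unlocking is what makes the late unlock safe: no other requester can pick the same timer while it is being prepared.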
+2 -1
arch/arm/plat-omap/include/plat/cpu.h
··· 372 372 #define cpu_class_is_omap1() (cpu_is_omap7xx() || cpu_is_omap15xx() || \ 373 373 cpu_is_omap16xx()) 374 374 #define cpu_class_is_omap2() (cpu_is_omap24xx() || cpu_is_omap34xx() || \ 375 - cpu_is_omap44xx() || soc_is_omap54xx()) 375 + cpu_is_omap44xx() || soc_is_omap54xx() || \ 376 + soc_is_am33xx()) 376 377 377 378 /* Various silicon revisions for omap2 */ 378 379 #define OMAP242X_CLASS 0x24200024
+9
arch/arm/plat-omap/include/plat/multi.h
··· 108 108 # endif 109 109 #endif 110 110 111 + #ifdef CONFIG_SOC_AM33XX 112 + # ifdef OMAP_NAME 113 + # undef MULTI_OMAP2 114 + # define MULTI_OMAP2 115 + # else 116 + # define OMAP_NAME am33xx 117 + # endif 118 + #endif 119 + 111 120 #endif /* __PLAT_OMAP_MULTI_H */
+1 -3
arch/arm/plat-omap/include/plat/uncompress.h
··· 110 110 _DEBUG_LL_ENTRY(mach, AM33XX_UART##p##_BASE, OMAP_PORT_SHIFT, \ 111 111 AM33XXUART##p) 112 112 113 - static inline void __arch_decomp_setup(unsigned long arch_id) 113 + static inline void arch_decomp_setup(void) 114 114 { 115 115 int port = 0; 116 116 ··· 197 197 DEBUG_LL_AM33XX(1, am335xevm); 198 198 } while (0); 199 199 } 200 - 201 - #define arch_decomp_setup() __arch_decomp_setup(arch_id) 202 200 203 201 /* 204 202 * nothing to do
+6 -2
arch/arm/plat-orion/common.c
··· 291 291 void __init orion_ge00_init(struct mv643xx_eth_platform_data *eth_data, 292 292 unsigned long mapbase, 293 293 unsigned long irq, 294 - unsigned long irq_err) 294 + unsigned long irq_err, 295 + unsigned int tx_csum_limit) 295 296 { 296 297 fill_resources(&orion_ge00_shared, orion_ge00_shared_resources, 297 298 mapbase + 0x2000, SZ_16K - 1, irq_err); 299 + orion_ge00_shared_data.tx_csum_limit = tx_csum_limit; 298 300 ge_complete(&orion_ge00_shared_data, 299 301 orion_ge00_resources, irq, &orion_ge00_shared, 300 302 eth_data, &orion_ge00); ··· 345 343 void __init orion_ge01_init(struct mv643xx_eth_platform_data *eth_data, 346 344 unsigned long mapbase, 347 345 unsigned long irq, 348 - unsigned long irq_err) 346 + unsigned long irq_err, 347 + unsigned int tx_csum_limit) 349 348 { 350 349 fill_resources(&orion_ge01_shared, orion_ge01_shared_resources, 351 350 mapbase + 0x2000, SZ_16K - 1, irq_err); 351 + orion_ge01_shared_data.tx_csum_limit = tx_csum_limit; 352 352 ge_complete(&orion_ge01_shared_data, 353 353 orion_ge01_resources, irq, &orion_ge01_shared, 354 354 eth_data, &orion_ge01);
+4 -2
arch/arm/plat-orion/include/plat/common.h
··· 39 39 void __init orion_ge00_init(struct mv643xx_eth_platform_data *eth_data, 40 40 unsigned long mapbase, 41 41 unsigned long irq, 42 - unsigned long irq_err); 42 + unsigned long irq_err, 43 + unsigned int tx_csum_limit); 43 44 44 45 void __init orion_ge01_init(struct mv643xx_eth_platform_data *eth_data, 45 46 unsigned long mapbase, 46 47 unsigned long irq, 47 - unsigned long irq_err); 48 + unsigned long irq_err, 49 + unsigned int tx_csum_limit); 48 50 49 51 void __init orion_ge10_init(struct mv643xx_eth_platform_data *eth_data, 50 52 unsigned long mapbase,
+1 -1
arch/arm/plat-s3c24xx/dma.c
··· 430 430 * when necessary. 431 431 */ 432 432 433 - int s3c2410_dma_enqueue(unsigned int channel, void *id, 433 + int s3c2410_dma_enqueue(enum dma_ch channel, void *id, 434 434 dma_addr_t data, int size) 435 435 { 436 436 struct s3c2410_dma_chan *chan = s3c_dma_lookup_channel(channel);
+28 -1
arch/arm/plat-samsung/devs.c
··· 32 32 #include <linux/platform_data/s3c-hsudc.h> 33 33 #include <linux/platform_data/s3c-hsotg.h> 34 34 35 + #include <media/s5p_hdmi.h> 36 + 35 37 #include <asm/irq.h> 36 38 #include <asm/pmu.h> 37 39 #include <asm/mach/arch.h> ··· 750 748 if (!pd) { 751 749 pd = &default_i2c_data; 752 750 753 - if (soc_is_exynos4210()) 751 + if (soc_is_exynos4210() || 752 + soc_is_exynos4212() || soc_is_exynos4412()) 754 753 pd->bus_num = 8; 755 754 else if (soc_is_s5pv210()) 756 755 pd->bus_num = 3; ··· 762 759 npd = s3c_set_platdata(pd, sizeof(struct s3c2410_platform_i2c), 763 760 &s5p_device_i2c_hdmiphy); 764 761 } 762 + 763 + struct s5p_hdmi_platform_data s5p_hdmi_def_platdata; 764 + 765 + void __init s5p_hdmi_set_platdata(struct i2c_board_info *hdmiphy_info, 766 + struct i2c_board_info *mhl_info, int mhl_bus) 767 + { 768 + struct s5p_hdmi_platform_data *pd = &s5p_hdmi_def_platdata; 769 + 770 + if (soc_is_exynos4210() || 771 + soc_is_exynos4212() || soc_is_exynos4412()) 772 + pd->hdmiphy_bus = 8; 773 + else if (soc_is_s5pv210()) 774 + pd->hdmiphy_bus = 3; 775 + else 776 + pd->hdmiphy_bus = 0; 777 + 778 + pd->hdmiphy_info = hdmiphy_info; 779 + pd->mhl_info = mhl_info; 780 + pd->mhl_bus = mhl_bus; 781 + 782 + s3c_set_platdata(pd, sizeof(struct s5p_hdmi_platform_data), 783 + &s5p_device_hdmi); 784 + } 785 + 765 786 #endif /* CONFIG_S5P_DEV_I2C_HDMIPHY */ 766 787 767 788 /* I2S */
+16
arch/arm/plat-samsung/include/plat/hdmi.h
··· 1 + /* 2 + * Copyright (C) 2012 Samsung Electronics Co.Ltd 3 + * 4 + * This program is free software; you can redistribute it and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation; either version 2 of the License, or (at your 7 + * option) any later version. 8 + */ 9 + 10 + #ifndef __PLAT_SAMSUNG_HDMI_H 11 + #define __PLAT_SAMSUNG_HDMI_H __FILE__ 12 + 13 + extern void s5p_hdmi_set_platdata(struct i2c_board_info *hdmiphy_info, 14 + struct i2c_board_info *mhl_info, int mhl_bus); 15 + 16 + #endif /* __PLAT_SAMSUNG_HDMI_H */
+1 -1
arch/arm/plat-samsung/pm.c
··· 74 74 75 75 #ifdef CONFIG_SAMSUNG_PM_DEBUG 76 76 77 - struct pm_uart_save uart_save[CONFIG_SERIAL_SAMSUNG_UARTS]; 77 + static struct pm_uart_save uart_save[CONFIG_SERIAL_SAMSUNG_UARTS]; 78 78 79 79 static void s3c_pm_save_uart(unsigned int uart, struct pm_uart_save *save) 80 80 {
+1
arch/mips/Kconfig
··· 89 89 select CEVT_R4K 90 90 select CSRC_R4K 91 91 select DMA_NONCOHERENT 92 + select HAVE_CLK 92 93 select IRQ_CPU 93 94 select MIPS_MACHINE 94 95 select SYS_HAS_CPU_MIPS32_R2
+2
arch/mips/alchemy/board-mtx1.c
··· 228 228 * adapter on the mtx-1 "singleboard" variant. It triggers a custom 229 229 * logic chip connected to EXT_IO3 (GPIO1) to suppress IDSEL signals. 230 230 */ 231 + udelay(1); 232 + 231 233 if (assert && devsel != 0) 232 234 /* Suppress signal to Cardbus */ 233 235 alchemy_gpio_set_value(1, 0); /* set EXT_IO3 OFF */
+2
arch/mips/ath79/dev-usb.c
··· 145 145 146 146 ath79_ohci_resources[0].start = AR7240_OHCI_BASE; 147 147 ath79_ohci_resources[0].end = AR7240_OHCI_BASE + AR7240_OHCI_SIZE - 1; 148 + ath79_ohci_resources[1].start = ATH79_CPU_IRQ_USB; 149 + ath79_ohci_resources[1].end = ATH79_CPU_IRQ_USB; 148 150 platform_device_register(&ath79_ohci_device); 149 151 } 150 152
+4 -2
arch/mips/ath79/gpio.c
··· 188 188 189 189 if (soc_is_ar71xx()) 190 190 ath79_gpio_count = AR71XX_GPIO_COUNT; 191 - else if (soc_is_ar724x()) 192 - ath79_gpio_count = AR724X_GPIO_COUNT; 191 + else if (soc_is_ar7240()) 192 + ath79_gpio_count = AR7240_GPIO_COUNT; 193 + else if (soc_is_ar7241() || soc_is_ar7242()) 194 + ath79_gpio_count = AR7241_GPIO_COUNT; 193 195 else if (soc_is_ar913x()) 194 196 ath79_gpio_count = AR913X_GPIO_COUNT; 195 197 else if (soc_is_ar933x())
+4
arch/mips/bcm63xx/dev-spi.c
··· 106 106 if (BCMCPU_IS_6338() || BCMCPU_IS_6348()) { 107 107 spi_resources[0].end += BCM_6338_RSET_SPI_SIZE - 1; 108 108 spi_pdata.fifo_size = SPI_6338_MSG_DATA_SIZE; 109 + spi_pdata.msg_type_shift = SPI_6338_MSG_TYPE_SHIFT; 110 + spi_pdata.msg_ctl_width = SPI_6338_MSG_CTL_WIDTH; 109 111 } 110 112 111 113 if (BCMCPU_IS_6358() || BCMCPU_IS_6368()) { 112 114 spi_resources[0].end += BCM_6358_RSET_SPI_SIZE - 1; 113 115 spi_pdata.fifo_size = SPI_6358_MSG_DATA_SIZE; 116 + spi_pdata.msg_type_shift = SPI_6358_MSG_TYPE_SHIFT; 117 + spi_pdata.msg_ctl_width = SPI_6358_MSG_CTL_WIDTH; 114 118 } 115 119 116 120 bcm63xx_spi_regs_init();
+43 -46
arch/mips/cavium-octeon/octeon-irq.c
··· 61 61 octeon_irq_ciu_to_irq[line][bit] = irq; 62 62 } 63 63 64 + static void octeon_irq_force_ciu_mapping(struct irq_domain *domain, 65 + int irq, int line, int bit) 66 + { 67 + irq_domain_associate(domain, irq, line << 6 | bit); 68 + } 69 + 64 70 static int octeon_coreid_for_cpu(int cpu) 65 71 { 66 72 #ifdef CONFIG_SMP ··· 189 183 mutex_init(&cd->core_irq_mutex); 190 184 191 185 irq = OCTEON_IRQ_SW0 + i; 192 - switch (irq) { 193 - case OCTEON_IRQ_TIMER: 194 - case OCTEON_IRQ_SW0: 195 - case OCTEON_IRQ_SW1: 196 - case OCTEON_IRQ_5: 197 - case OCTEON_IRQ_PERF: 198 - irq_set_chip_data(irq, cd); 199 - irq_set_chip_and_handler(irq, &octeon_irq_chip_core, 200 - handle_percpu_irq); 201 - break; 202 - default: 203 - break; 204 - } 186 + irq_set_chip_data(irq, cd); 187 + irq_set_chip_and_handler(irq, &octeon_irq_chip_core, 188 + handle_percpu_irq); 205 189 } 206 190 } 207 191 ··· 886 890 unsigned int type; 887 891 unsigned int pin; 888 892 unsigned int trigger; 889 - struct octeon_irq_gpio_domain_data *gpiod; 890 893 891 894 if (d->of_node != node) 892 895 return -EINVAL; ··· 920 925 break; 921 926 } 922 927 *out_type = type; 923 - gpiod = d->host_data; 924 - *out_hwirq = gpiod->base_hwirq + pin; 928 + *out_hwirq = pin; 925 929 926 930 return 0; 927 931 } ··· 990 996 static int octeon_irq_gpio_map(struct irq_domain *d, 991 997 unsigned int virq, irq_hw_number_t hw) 992 998 { 993 - unsigned int line = hw >> 6; 994 - unsigned int bit = hw & 63; 999 + struct octeon_irq_gpio_domain_data *gpiod = d->host_data; 1000 + unsigned int line, bit; 995 1001 996 1002 if (!octeon_irq_virq_in_range(virq)) 997 1003 return -EINVAL; 998 1004 1005 + hw += gpiod->base_hwirq; 1006 + line = hw >> 6; 1007 + bit = hw & 63; 999 1008 if (line > 1 || octeon_irq_ciu_to_irq[line][bit] != 0) 1000 1009 return -EINVAL; 1001 1010 1002 1011 octeon_irq_set_ciu_mapping(virq, line, bit, 1003 1012 octeon_irq_gpio_chip, 1004 1013 octeon_irq_handle_gpio); 1005 - 1006 1014 return 0; 1007 1015 } 1008 1016 ··· 
1145 1149 struct irq_chip *chip_wd; 1146 1150 struct device_node *gpio_node; 1147 1151 struct device_node *ciu_node; 1152 + struct irq_domain *ciu_domain = NULL; 1148 1153 1149 1154 octeon_irq_init_ciu_percpu(); 1150 1155 octeon_irq_setup_secondary = octeon_irq_setup_secondary_ciu; ··· 1174 1177 /* Mips internal */ 1175 1178 octeon_irq_init_core(); 1176 1179 1177 - /* CIU_0 */ 1178 - for (i = 0; i < 16; i++) 1179 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_WORKQ0, 0, i + 0, chip, handle_level_irq); 1180 - 1181 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX0, 0, 32, chip_mbox, handle_percpu_irq); 1182 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX1, 0, 33, chip_mbox, handle_percpu_irq); 1183 - 1184 - for (i = 0; i < 4; i++) 1185 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_PCI_INT0, 0, i + 36, chip, handle_level_irq); 1186 - for (i = 0; i < 4; i++) 1187 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_PCI_MSI0, 0, i + 40, chip, handle_level_irq); 1188 - 1189 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_RML, 0, 46, chip, handle_level_irq); 1190 - for (i = 0; i < 4; i++) 1191 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_TIMER0, 0, i + 52, chip, handle_edge_irq); 1192 - 1193 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_USB0, 0, 56, chip, handle_level_irq); 1194 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_BOOTDMA, 0, 63, chip, handle_level_irq); 1195 - 1196 - /* CIU_1 */ 1197 - for (i = 0; i < 16; i++) 1198 - octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_WDOG0, 1, i + 0, chip_wd, handle_level_irq); 1199 - 1200 - octeon_irq_set_ciu_mapping(OCTEON_IRQ_USB1, 1, 17, chip, handle_level_irq); 1201 - 1202 1180 gpio_node = of_find_compatible_node(NULL, NULL, "cavium,octeon-3860-gpio"); 1203 1181 if (gpio_node) { 1204 1182 struct octeon_irq_gpio_domain_data *gpiod; ··· 1191 1219 1192 1220 ciu_node = of_find_compatible_node(NULL, NULL, "cavium,octeon-3860-ciu"); 1193 1221 if (ciu_node) { 1194 - irq_domain_add_tree(ciu_node, &octeon_irq_domain_ciu_ops, NULL); 1222 + ciu_domain = irq_domain_add_tree(ciu_node, 
&octeon_irq_domain_ciu_ops, NULL); 1195 1223 of_node_put(ciu_node); 1196 1224 } else 1197 - pr_warn("Cannot find device node for cavium,octeon-3860-ciu.\n"); 1225 + panic("Cannot find device node for cavium,octeon-3860-ciu."); 1226 + 1227 + /* CIU_0 */ 1228 + for (i = 0; i < 16; i++) 1229 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_WORKQ0, 0, i + 0); 1230 + 1231 + octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX0, 0, 32, chip_mbox, handle_percpu_irq); 1232 + octeon_irq_set_ciu_mapping(OCTEON_IRQ_MBOX1, 0, 33, chip_mbox, handle_percpu_irq); 1233 + 1234 + for (i = 0; i < 4; i++) 1235 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_PCI_INT0, 0, i + 36); 1236 + for (i = 0; i < 4; i++) 1237 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_PCI_MSI0, 0, i + 40); 1238 + 1239 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_RML, 0, 46); 1240 + for (i = 0; i < 4; i++) 1241 + octeon_irq_force_ciu_mapping(ciu_domain, i + OCTEON_IRQ_TIMER0, 0, i + 52); 1242 + 1243 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_USB0, 0, 56); 1244 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_BOOTDMA, 0, 63); 1245 + 1246 + /* CIU_1 */ 1247 + for (i = 0; i < 16; i++) 1248 + octeon_irq_set_ciu_mapping(i + OCTEON_IRQ_WDOG0, 1, i + 0, chip_wd, handle_level_irq); 1249 + 1250 + octeon_irq_force_ciu_mapping(ciu_domain, OCTEON_IRQ_USB1, 1, 17); 1198 1251 1199 1252 /* Enable the CIU lines */ 1200 1253 set_c0_status(STATUSF_IP3 | STATUSF_IP2);
+2 -1
arch/mips/include/asm/mach-ath79/ar71xx_regs.h
··· 393 393 #define AR71XX_GPIO_REG_FUNC 0x28 394 394 395 395 #define AR71XX_GPIO_COUNT 16 396 - #define AR724X_GPIO_COUNT 18 396 + #define AR7240_GPIO_COUNT 18 397 + #define AR7241_GPIO_COUNT 20 397 398 #define AR913X_GPIO_COUNT 22 398 399 #define AR933X_GPIO_COUNT 30 399 400 #define AR934X_GPIO_COUNT 23
-1
arch/mips/include/asm/mach-ath79/cpu-feature-overrides.h
··· 42 42 #define cpu_has_mips64r1 0 43 43 #define cpu_has_mips64r2 0 44 44 45 - #define cpu_has_dsp 0 46 45 #define cpu_has_mipsmt 0 47 46 48 47 #define cpu_has_64bits 0
+2
arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_spi.h
··· 9 9 10 10 struct bcm63xx_spi_pdata { 11 11 unsigned int fifo_size; 12 + unsigned int msg_type_shift; 13 + unsigned int msg_ctl_width; 12 14 int bus_num; 13 15 int num_chipselect; 14 16 u32 speed_hz;
+10 -3
arch/mips/include/asm/mach-bcm63xx/bcm63xx_regs.h
··· 1054 1054 #define SPI_6338_FILL_BYTE 0x07 1055 1055 #define SPI_6338_MSG_TAIL 0x09 1056 1056 #define SPI_6338_RX_TAIL 0x0b 1057 - #define SPI_6338_MSG_CTL 0x40 1057 + #define SPI_6338_MSG_CTL 0x40 /* 8-bits register */ 1058 + #define SPI_6338_MSG_CTL_WIDTH 8 1058 1059 #define SPI_6338_MSG_DATA 0x41 1059 1060 #define SPI_6338_MSG_DATA_SIZE 0x3f 1060 1061 #define SPI_6338_RX_DATA 0x80 ··· 1071 1070 #define SPI_6348_FILL_BYTE 0x07 1072 1071 #define SPI_6348_MSG_TAIL 0x09 1073 1072 #define SPI_6348_RX_TAIL 0x0b 1074 - #define SPI_6348_MSG_CTL 0x40 1073 + #define SPI_6348_MSG_CTL 0x40 /* 8-bits register */ 1074 + #define SPI_6348_MSG_CTL_WIDTH 8 1075 1075 #define SPI_6348_MSG_DATA 0x41 1076 1076 #define SPI_6348_MSG_DATA_SIZE 0x3f 1077 1077 #define SPI_6348_RX_DATA 0x80 ··· 1080 1078 1081 1079 /* BCM 6358 SPI core */ 1082 1080 #define SPI_6358_MSG_CTL 0x00 /* 16-bits register */ 1081 + #define SPI_6358_MSG_CTL_WIDTH 16 1083 1082 #define SPI_6358_MSG_DATA 0x02 1084 1083 #define SPI_6358_MSG_DATA_SIZE 0x21e 1085 1084 #define SPI_6358_RX_DATA 0x400 ··· 1097 1094 1098 1095 /* BCM 6358 SPI core */ 1099 1096 #define SPI_6368_MSG_CTL 0x00 /* 16-bits register */ 1097 + #define SPI_6368_MSG_CTL_WIDTH 16 1100 1098 #define SPI_6368_MSG_DATA 0x02 1101 1099 #define SPI_6368_MSG_DATA_SIZE 0x21e 1102 1100 #define SPI_6368_RX_DATA 0x400 ··· 1119 1115 #define SPI_HD_W 0x01 1120 1116 #define SPI_HD_R 0x02 1121 1117 #define SPI_BYTE_CNT_SHIFT 0 1122 - #define SPI_MSG_TYPE_SHIFT 14 1118 + #define SPI_6338_MSG_TYPE_SHIFT 6 1119 + #define SPI_6348_MSG_TYPE_SHIFT 6 1120 + #define SPI_6358_MSG_TYPE_SHIFT 14 1121 + #define SPI_6368_MSG_TYPE_SHIFT 14 1123 1122 1124 1123 /* Command */ 1125 1124 #define SPI_CMD_NOOP 0x00
+1 -9
arch/mips/include/asm/mach-cavium-octeon/irq.h
··· 21 21 OCTEON_IRQ_TIMER, 22 22 /* sources in CIU_INTX_EN0 */ 23 23 OCTEON_IRQ_WORKQ0, 24 - OCTEON_IRQ_GPIO0 = OCTEON_IRQ_WORKQ0 + 16, 25 - OCTEON_IRQ_WDOG0 = OCTEON_IRQ_GPIO0 + 16, 24 + OCTEON_IRQ_WDOG0 = OCTEON_IRQ_WORKQ0 + 16, 26 25 OCTEON_IRQ_WDOG15 = OCTEON_IRQ_WDOG0 + 15, 27 26 OCTEON_IRQ_MBOX0 = OCTEON_IRQ_WDOG0 + 16, 28 27 OCTEON_IRQ_MBOX1, 29 - OCTEON_IRQ_UART0, 30 - OCTEON_IRQ_UART1, 31 - OCTEON_IRQ_UART2, 32 28 OCTEON_IRQ_PCI_INT0, 33 29 OCTEON_IRQ_PCI_INT1, 34 30 OCTEON_IRQ_PCI_INT2, ··· 34 38 OCTEON_IRQ_PCI_MSI2, 35 39 OCTEON_IRQ_PCI_MSI3, 36 40 37 - OCTEON_IRQ_TWSI, 38 - OCTEON_IRQ_TWSI2, 39 41 OCTEON_IRQ_RML, 40 42 OCTEON_IRQ_TIMER0, 41 43 OCTEON_IRQ_TIMER1, ··· 41 47 OCTEON_IRQ_TIMER3, 42 48 OCTEON_IRQ_USB0, 43 49 OCTEON_IRQ_USB1, 44 - OCTEON_IRQ_MII0, 45 - OCTEON_IRQ_MII1, 46 50 OCTEON_IRQ_BOOTDMA, 47 51 #ifndef CONFIG_PCI_MSI 48 52 OCTEON_IRQ_LAST = 127
+1
arch/mips/include/asm/module.h
··· 10 10 struct list_head dbe_list; 11 11 const struct exception_table_entry *dbe_start; 12 12 const struct exception_table_entry *dbe_end; 13 + struct mips_hi16 *r_mips_hi16_list; 13 14 }; 14 15 15 16 typedef uint8_t Elf64_Byte; /* Type for a 8-bit quantity. */
+4 -4
arch/mips/include/asm/r4k-timer.h
··· 12 12 13 13 #ifdef CONFIG_SYNC_R4K 14 14 15 - extern void synchronise_count_master(void); 16 - extern void synchronise_count_slave(void); 15 + extern void synchronise_count_master(int cpu); 16 + extern void synchronise_count_slave(int cpu); 17 17 18 18 #else 19 19 20 - static inline void synchronise_count_master(void) 20 + static inline void synchronise_count_master(int cpu) 21 21 { 22 22 } 23 23 24 - static inline void synchronise_count_slave(void) 24 + static inline void synchronise_count_slave(int cpu) 25 25 { 26 26 } 27 27
+34 -9
arch/mips/kernel/module.c
··· 39 39 Elf_Addr value; 40 40 }; 41 41 42 - static struct mips_hi16 *mips_hi16_list; 43 - 44 42 static LIST_HEAD(dbe_list); 45 43 static DEFINE_SPINLOCK(dbe_lock); 46 44 ··· 126 128 127 129 n->addr = (Elf_Addr *)location; 128 130 n->value = v; 129 - n->next = mips_hi16_list; 130 - mips_hi16_list = n; 131 + n->next = me->arch.r_mips_hi16_list; 132 + me->arch.r_mips_hi16_list = n; 131 133 132 134 return 0; 133 135 } ··· 140 142 return 0; 141 143 } 142 144 145 + static void free_relocation_chain(struct mips_hi16 *l) 146 + { 147 + struct mips_hi16 *next; 148 + 149 + while (l) { 150 + next = l->next; 151 + kfree(l); 152 + l = next; 153 + } 154 + } 155 + 143 156 static int apply_r_mips_lo16_rel(struct module *me, u32 *location, Elf_Addr v) 144 157 { 145 158 unsigned long insnlo = *location; 159 + struct mips_hi16 *l; 146 160 Elf_Addr val, vallo; 147 161 148 162 /* Sign extend the addend we extract from the lo insn. */ 149 163 vallo = ((insnlo & 0xffff) ^ 0x8000) - 0x8000; 150 164 151 - if (mips_hi16_list != NULL) { 152 - struct mips_hi16 *l; 153 - 154 - l = mips_hi16_list; 165 + if (me->arch.r_mips_hi16_list != NULL) { 166 + l = me->arch.r_mips_hi16_list; 155 167 while (l != NULL) { 156 168 struct mips_hi16 *next; 157 169 unsigned long insn; ··· 196 188 l = next; 197 189 } 198 190 199 - mips_hi16_list = NULL; 191 + me->arch.r_mips_hi16_list = NULL; 200 192 } 201 193 202 194 /* ··· 209 201 return 0; 210 202 211 203 out_danger: 204 + free_relocation_chain(l); 205 + me->arch.r_mips_hi16_list = NULL; 206 + 212 207 pr_err("module %s: dangerous R_MIPS_LO16 REL relocation\n", me->name); 213 208 214 209 return -ENOEXEC; ··· 284 273 pr_debug("Applying relocate section %u to %u\n", relsec, 285 274 sechdrs[relsec].sh_info); 286 275 276 + me->arch.r_mips_hi16_list = NULL; 287 277 for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) { 288 278 /* This is where to make the change */ 289 279 location = (void *)sechdrs[sechdrs[relsec].sh_info].sh_addr ··· 306 294 res = 
reloc_handlers_rel[ELF_MIPS_R_TYPE(rel[i])](me, location, v); 307 295 if (res) 308 296 return res; 297 + } 298 + 299 + /* 300 + * Normally the hi16 list should be deallocated at this point. A 301 + * malformed binary however could contain a series of R_MIPS_HI16 302 + * relocations not followed by a R_MIPS_LO16 relocation. In that 303 + * case, free up the list and return an error. 304 + */ 305 + if (me->arch.r_mips_hi16_list) { 306 + free_relocation_chain(me->arch.r_mips_hi16_list); 307 + me->arch.r_mips_hi16_list = NULL; 308 + 309 + return -ENOEXEC; 309 310 } 310 311 311 312 return 0;
+2 -2
arch/mips/kernel/smp.c
··· 130 130 131 131 cpu_set(cpu, cpu_callin_map); 132 132 133 - synchronise_count_slave(); 133 + synchronise_count_slave(cpu); 134 134 135 135 /* 136 136 * irq will be enabled in ->smp_finish(), enabling it too early ··· 173 173 void __init smp_cpus_done(unsigned int max_cpus) 174 174 { 175 175 mp_ops->cpus_done(); 176 - synchronise_count_master(); 177 176 } 178 177 179 178 /* called from main before smp_init() */ ··· 205 206 while (!cpu_isset(cpu, cpu_callin_map)) 206 207 udelay(100); 207 208 209 + synchronise_count_master(cpu); 208 210 return 0; 209 211 } 210 212
+11 -15
arch/mips/kernel/sync-r4k.c
··· 28 28 #define COUNTON 100 29 29 #define NR_LOOPS 5 30 30 31 - void __cpuinit synchronise_count_master(void) 31 + void __cpuinit synchronise_count_master(int cpu) 32 32 { 33 33 int i; 34 34 unsigned long flags; 35 35 unsigned int initcount; 36 - int nslaves; 37 36 38 37 #ifdef CONFIG_MIPS_MT_SMTC 39 38 /* ··· 42 43 return; 43 44 #endif 44 45 45 - printk(KERN_INFO "Synchronize counters across %u CPUs: ", 46 - num_online_cpus()); 46 + printk(KERN_INFO "Synchronize counters for CPU %u: ", cpu); 47 47 48 48 local_irq_save(flags); 49 49 ··· 50 52 * Notify the slaves that it's time to start 51 53 */ 52 54 atomic_set(&count_reference, read_c0_count()); 53 - atomic_set(&count_start_flag, 1); 55 + atomic_set(&count_start_flag, cpu); 54 56 smp_wmb(); 55 57 56 58 /* Count will be initialised to current timer for all CPU's */ ··· 67 69 * two CPUs. 68 70 */ 69 71 70 - nslaves = num_online_cpus()-1; 71 72 for (i = 0; i < NR_LOOPS; i++) { 72 - /* slaves loop on '!= ncpus' */ 73 - while (atomic_read(&count_count_start) != nslaves) 73 + /* slaves loop on '!= 2' */ 74 + while (atomic_read(&count_count_start) != 1) 74 75 mb(); 75 76 atomic_set(&count_count_stop, 0); 76 77 smp_wmb(); ··· 86 89 /* 87 90 * Wait for all slaves to leave the synchronization point: 88 91 */ 89 - while (atomic_read(&count_count_stop) != nslaves) 92 + while (atomic_read(&count_count_stop) != 1) 90 93 mb(); 91 94 atomic_set(&count_count_start, 0); 92 95 smp_wmb(); ··· 94 97 } 95 98 /* Arrange for an interrupt in a short while */ 96 99 write_c0_compare(read_c0_count() + COUNTON); 100 + atomic_set(&count_start_flag, 0); 97 101 98 102 local_irq_restore(flags); 99 103 ··· 106 108 printk("done.\n"); 107 109 } 108 110 109 - void __cpuinit synchronise_count_slave(void) 111 + void __cpuinit synchronise_count_slave(int cpu) 110 112 { 111 113 int i; 112 114 unsigned int initcount; 113 - int ncpus; 114 115 115 116 #ifdef CONFIG_MIPS_MT_SMTC 116 117 /* ··· 124 127 * so we first wait for the master to say everyone is 
ready 125 128 */ 126 129 127 - while (!atomic_read(&count_start_flag)) 130 + while (atomic_read(&count_start_flag) != cpu) 128 131 mb(); 129 132 130 133 /* Count will be initialised to next expire for all CPU's */ 131 134 initcount = atomic_read(&count_reference); 132 135 133 - ncpus = num_online_cpus(); 134 136 for (i = 0; i < NR_LOOPS; i++) { 135 137 atomic_inc(&count_count_start); 136 - while (atomic_read(&count_count_start) != ncpus) 138 + while (atomic_read(&count_count_start) != 2) 137 139 mb(); 138 140 139 141 /* ··· 142 146 write_c0_count(initcount); 143 147 144 148 atomic_inc(&count_count_stop); 145 - while (atomic_read(&count_count_stop) != ncpus) 149 + while (atomic_read(&count_count_stop) != 2) 146 150 mb(); 147 151 } 148 152 /* Arrange for an interrupt in a short while */
-13
arch/mips/mti-malta/malta-pci.c
··· 252 252 253 253 register_pci_controller(controller); 254 254 } 255 - 256 - /* Enable PCI 2.1 compatibility in PIIX4 */ 257 - static void __devinit quirk_dlcsetup(struct pci_dev *dev) 258 - { 259 - u8 odlc, ndlc; 260 - (void) pci_read_config_byte(dev, 0x82, &odlc); 261 - /* Enable passive releases and delayed transaction */ 262 - ndlc = odlc | 7; 263 - (void) pci_write_config_byte(dev, 0x82, ndlc); 264 - } 265 - 266 - DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82371AB_0, 267 - quirk_dlcsetup);
+22
arch/mips/pci/pci-ar724x.c
··· 23 23 #define AR724X_PCI_MEM_BASE 0x10000000 24 24 #define AR724X_PCI_MEM_SIZE 0x08000000 25 25 26 + #define AR724X_PCI_REG_RESET 0x18 26 27 #define AR724X_PCI_REG_INT_STATUS 0x4c 27 28 #define AR724X_PCI_REG_INT_MASK 0x50 29 + 30 + #define AR724X_PCI_RESET_LINK_UP BIT(0) 28 31 29 32 #define AR724X_PCI_INT_DEV0 BIT(14) 30 33 ··· 41 38 42 39 static u32 ar724x_pci_bar0_value; 43 40 static bool ar724x_pci_bar0_is_cached; 41 + static bool ar724x_pci_link_up; 42 + 43 + static inline bool ar724x_pci_check_link(void) 44 + { 45 + u32 reset; 46 + 47 + reset = __raw_readl(ar724x_pci_ctrl_base + AR724X_PCI_REG_RESET); 48 + return reset & AR724X_PCI_RESET_LINK_UP; 49 + } 44 50 45 51 static int ar724x_pci_read(struct pci_bus *bus, unsigned int devfn, int where, 46 52 int size, uint32_t *value) ··· 57 45 unsigned long flags; 58 46 void __iomem *base; 59 47 u32 data; 48 + 49 + if (!ar724x_pci_link_up) 50 + return PCIBIOS_DEVICE_NOT_FOUND; 60 51 61 52 if (devfn) 62 53 return PCIBIOS_DEVICE_NOT_FOUND; ··· 110 95 void __iomem *base; 111 96 u32 data; 112 97 int s; 98 + 99 + if (!ar724x_pci_link_up) 100 + return PCIBIOS_DEVICE_NOT_FOUND; 113 101 114 102 if (devfn) 115 103 return PCIBIOS_DEVICE_NOT_FOUND; ··· 297 279 AR724X_PCI_CTRL_SIZE); 298 280 if (ar724x_pci_ctrl_base == NULL) 299 281 goto err_unmap_devcfg; 282 + 283 + ar724x_pci_link_up = ar724x_pci_check_link(); 284 + if (!ar724x_pci_link_up) 285 + pr_warn("ar724x: PCIe link is down\n"); 300 286 301 287 ar724x_pci_irq_init(irq); 302 288 register_pci_controller(&ar724x_pci_controller);
+2 -2
arch/parisc/include/asm/atomic.h
··· 141 141 142 142 #define atomic_sub_and_test(i,v) (atomic_sub_return((i),(v)) == 0) 143 143 144 - #define ATOMIC_INIT(i) ((atomic_t) { (i) }) 144 + #define ATOMIC_INIT(i) { (i) } 145 145 146 146 #define smp_mb__before_atomic_dec() smp_mb() 147 147 #define smp_mb__after_atomic_dec() smp_mb() ··· 150 150 151 151 #ifdef CONFIG_64BIT 152 152 153 - #define ATOMIC64_INIT(i) ((atomic64_t) { (i) }) 153 + #define ATOMIC64_INIT(i) { (i) } 154 154 155 155 static __inline__ s64 156 156 __atomic64_add_return(s64 i, atomic64_t *v)
+1 -1
arch/parisc/kernel/process.c
··· 309 309 cregs->ksp = (unsigned long)stack 310 310 + (pregs->gr[21] & (THREAD_SIZE - 1)); 311 311 cregs->gr[30] = usp; 312 - if (p->personality == PER_HPUX) { 312 + if (personality(p->personality) == PER_HPUX) { 313 313 #ifdef CONFIG_HPUX 314 314 cregs->kpc = (unsigned long) &hpux_child_return; 315 315 #else
+4 -4
arch/parisc/kernel/sys_parisc.c
··· 225 225 long err; 226 226 227 227 if (personality(current->personality) == PER_LINUX32 228 - && personality == PER_LINUX) 229 - personality = PER_LINUX32; 228 + && personality(personality) == PER_LINUX) 229 + personality = (personality & ~PER_MASK) | PER_LINUX32; 230 230 231 231 err = sys_personality(personality); 232 - if (err == PER_LINUX32) 233 - err = PER_LINUX; 232 + if (personality(err) == PER_LINUX32) 233 + err = (err & ~PER_MASK) | PER_LINUX; 234 234 235 235 return err; 236 236 }
+7
arch/powerpc/boot/dts/fsl/p4080si-post.dtsi
··· 345 345 /include/ "qoriq-duart-1.dtsi" 346 346 /include/ "qoriq-gpio-0.dtsi" 347 347 /include/ "qoriq-usb2-mph-0.dtsi" 348 + usb@210000 { 349 + compatible = "fsl-usb2-mph-v1.6", "fsl,mpc85xx-usb2-mph", "fsl-usb2-mph"; 350 + port0; 351 + }; 348 352 /include/ "qoriq-usb2-dr-0.dtsi" 353 + usb@211000 { 354 + compatible = "fsl-usb2-dr-v1.6", "fsl,mpc85xx-usb2-dr", "fsl-usb2-dr"; 355 + }; 349 356 /include/ "qoriq-sec4.0-0.dtsi" 350 357 };
+10 -21
arch/powerpc/configs/85xx/p1023rds_defconfig
··· 6 6 CONFIG_POSIX_MQUEUE=y 7 7 CONFIG_BSD_PROCESS_ACCT=y 8 8 CONFIG_AUDIT=y 9 - CONFIG_SPARSE_IRQ=y 9 + CONFIG_IRQ_DOMAIN_DEBUG=y 10 + CONFIG_NO_HZ=y 11 + CONFIG_HIGH_RES_TIMERS=y 10 12 CONFIG_IKCONFIG=y 11 13 CONFIG_IKCONFIG_PROC=y 12 14 CONFIG_LOG_BUF_SHIFT=14 13 15 CONFIG_BLK_DEV_INITRD=y 14 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 15 16 CONFIG_KALLSYMS_ALL=y 16 - CONFIG_KALLSYMS_EXTRA_PASS=y 17 17 CONFIG_EMBEDDED=y 18 18 CONFIG_MODULES=y 19 19 CONFIG_MODULE_UNLOAD=y 20 20 CONFIG_MODULE_FORCE_UNLOAD=y 21 21 CONFIG_MODVERSIONS=y 22 22 # CONFIG_BLK_DEV_BSG is not set 23 + CONFIG_PARTITION_ADVANCED=y 24 + CONFIG_MAC_PARTITION=y 23 25 CONFIG_P1023_RDS=y 24 26 CONFIG_QUICC_ENGINE=y 25 27 CONFIG_QE_GPIO=y 26 28 CONFIG_CPM2=y 27 - CONFIG_GPIO_MPC8XXX=y 28 29 CONFIG_HIGHMEM=y 29 - CONFIG_NO_HZ=y 30 - CONFIG_HIGH_RES_TIMERS=y 31 30 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 32 31 CONFIG_BINFMT_MISC=m 33 32 CONFIG_MATH_EMULATION=y ··· 62 63 CONFIG_IPV6=y 63 64 CONFIG_IP_SCTP=m 64 65 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 66 + CONFIG_DEVTMPFS=y 65 67 CONFIG_PROC_DEVICETREE=y 66 68 CONFIG_BLK_DEV_LOOP=y 67 69 CONFIG_BLK_DEV_RAM=y 68 70 CONFIG_BLK_DEV_RAM_SIZE=131072 69 - CONFIG_MISC_DEVICES=y 70 71 CONFIG_EEPROM_LEGACY=y 71 72 CONFIG_BLK_DEV_SD=y 72 73 CONFIG_CHR_DEV_ST=y ··· 79 80 CONFIG_SATA_SIL24=y 80 81 CONFIG_NETDEVICES=y 81 82 CONFIG_DUMMY=y 83 + CONFIG_FS_ENET=y 84 + CONFIG_FSL_PQ_MDIO=y 85 + CONFIG_E1000E=y 82 86 CONFIG_MARVELL_PHY=y 83 87 CONFIG_DAVICOM_PHY=y 84 88 CONFIG_CICADA_PHY=y 85 89 CONFIG_VITESSE_PHY=y 86 90 CONFIG_FIXED_PHY=y 87 - CONFIG_NET_ETHERNET=y 88 - CONFIG_FS_ENET=y 89 - CONFIG_E1000E=y 90 - CONFIG_FSL_PQ_MDIO=y 91 91 CONFIG_INPUT_FF_MEMLESS=m 92 92 # CONFIG_INPUT_MOUSEDEV is not set 93 93 # CONFIG_INPUT_KEYBOARD is not set ··· 96 98 CONFIG_SERIAL_8250_CONSOLE=y 97 99 CONFIG_SERIAL_8250_NR_UARTS=2 98 100 CONFIG_SERIAL_8250_RUNTIME_UARTS=2 99 - CONFIG_SERIAL_8250_EXTENDED=y 100 101 CONFIG_SERIAL_8250_MANY_PORTS=y 101 102 
CONFIG_SERIAL_8250_DETECT_IRQ=y 102 103 CONFIG_SERIAL_8250_RSA=y 103 104 CONFIG_SERIAL_QE=m 104 - CONFIG_HW_RANDOM=y 105 105 CONFIG_NVRAM=y 106 106 CONFIG_I2C=y 107 107 CONFIG_I2C_CPM=m 108 108 CONFIG_I2C_MPC=y 109 + CONFIG_GPIO_MPC8XXX=y 109 110 # CONFIG_HWMON is not set 110 111 CONFIG_VIDEO_OUTPUT_CONTROL=y 111 112 CONFIG_SOUND=y ··· 120 123 CONFIG_FSL_DMA=y 121 124 # CONFIG_NET_DMA is not set 122 125 CONFIG_STAGING=y 123 - # CONFIG_STAGING_EXCLUDE_BUILD is not set 124 126 CONFIG_EXT2_FS=y 125 127 CONFIG_EXT3_FS=y 126 128 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set ··· 146 150 CONFIG_SYSV_FS=m 147 151 CONFIG_UFS_FS=m 148 152 CONFIG_NFS_FS=y 149 - CONFIG_NFS_V3=y 150 153 CONFIG_NFS_V4=y 151 154 CONFIG_ROOT_NFS=y 152 155 CONFIG_NFSD=y 153 - CONFIG_PARTITION_ADVANCED=y 154 - CONFIG_MAC_PARTITION=y 155 156 CONFIG_CRC_T10DIF=y 156 157 CONFIG_FRAME_WARN=8092 157 158 CONFIG_DEBUG_FS=y 158 - CONFIG_DEBUG_KERNEL=y 159 159 CONFIG_DETECT_HUNG_TASK=y 160 160 # CONFIG_DEBUG_BUGVERBOSE is not set 161 161 CONFIG_DEBUG_INFO=y 162 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 163 - CONFIG_SYSCTL_SYSCALL_CHECK=y 164 - CONFIG_IRQ_DOMAIN_DEBUG=y 165 162 CONFIG_CRYPTO_PCBC=m 166 163 CONFIG_CRYPTO_SHA256=y 167 164 CONFIG_CRYPTO_SHA512=y
+10 -21
arch/powerpc/configs/corenet32_smp_defconfig
··· 6 6 CONFIG_POSIX_MQUEUE=y 7 7 CONFIG_BSD_PROCESS_ACCT=y 8 8 CONFIG_AUDIT=y 9 - CONFIG_SPARSE_IRQ=y 10 - CONFIG_RCU_TRACE=y 9 + CONFIG_NO_HZ=y 10 + CONFIG_HIGH_RES_TIMERS=y 11 11 CONFIG_IKCONFIG=y 12 12 CONFIG_IKCONFIG_PROC=y 13 13 CONFIG_LOG_BUF_SHIFT=14 ··· 21 21 CONFIG_MODULE_FORCE_UNLOAD=y 22 22 CONFIG_MODVERSIONS=y 23 23 # CONFIG_BLK_DEV_BSG is not set 24 + CONFIG_PARTITION_ADVANCED=y 25 + CONFIG_MAC_PARTITION=y 24 26 CONFIG_P2041_RDB=y 25 27 CONFIG_P3041_DS=y 26 28 CONFIG_P4080_DS=y 27 29 CONFIG_P5020_DS=y 28 30 CONFIG_HIGHMEM=y 29 - CONFIG_NO_HZ=y 30 - CONFIG_HIGH_RES_TIMERS=y 31 31 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 32 32 CONFIG_BINFMT_MISC=m 33 33 CONFIG_KEXEC=y 34 34 CONFIG_IRQ_ALL_CPUS=y 35 35 CONFIG_FORCE_MAX_ZONEORDER=13 36 - CONFIG_FSL_LBC=y 37 36 CONFIG_PCI=y 38 37 CONFIG_PCIEPORTBUS=y 39 - CONFIG_PCI_MSI=y 40 38 # CONFIG_PCIEASPM is not set 39 + CONFIG_PCI_MSI=y 41 40 CONFIG_RAPIDIO=y 42 41 CONFIG_FSL_RIO=y 43 42 CONFIG_NET=y ··· 69 70 CONFIG_IPV6=y 70 71 CONFIG_IP_SCTP=m 71 72 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 73 + CONFIG_DEVTMPFS=y 72 74 CONFIG_MTD=y 73 75 CONFIG_MTD_CMDLINE_PARTS=y 74 76 CONFIG_MTD_CHAR=y ··· 77 77 CONFIG_MTD_CFI=y 78 78 CONFIG_MTD_CFI_AMDSTD=y 79 79 CONFIG_MTD_PHYSMAP_OF=y 80 - CONFIG_MTD_NAND=y 81 - CONFIG_MTD_NAND_ECC=y 82 - CONFIG_MTD_NAND_IDS=y 83 - CONFIG_MTD_NAND_FSL_IFC=y 84 - CONFIG_MTD_NAND_FSL_ELBC=y 85 80 CONFIG_MTD_M25P80=y 81 + CONFIG_MTD_NAND=y 82 + CONFIG_MTD_NAND_FSL_ELBC=y 83 + CONFIG_MTD_NAND_FSL_IFC=y 86 84 CONFIG_PROC_DEVICETREE=y 87 85 CONFIG_BLK_DEV_LOOP=y 88 86 CONFIG_BLK_DEV_RAM=y 89 87 CONFIG_BLK_DEV_RAM_SIZE=131072 90 - CONFIG_MISC_DEVICES=y 91 88 CONFIG_BLK_DEV_SD=y 92 89 CONFIG_CHR_DEV_ST=y 93 90 CONFIG_BLK_DEV_SR=y ··· 112 115 CONFIG_PPC_EPAPR_HV_BYTECHAN=y 113 116 CONFIG_SERIAL_8250=y 114 117 CONFIG_SERIAL_8250_CONSOLE=y 115 - CONFIG_SERIAL_8250_EXTENDED=y 116 118 CONFIG_SERIAL_8250_MANY_PORTS=y 117 119 CONFIG_SERIAL_8250_DETECT_IRQ=y 118 120 CONFIG_SERIAL_8250_RSA=y 119 - 
CONFIG_HW_RANDOM=y 120 121 CONFIG_NVRAM=y 121 122 CONFIG_I2C=y 122 123 CONFIG_I2C_CHARDEV=y ··· 127 132 CONFIG_VIDEO_OUTPUT_CONTROL=y 128 133 CONFIG_USB_HID=m 129 134 CONFIG_USB=y 130 - CONFIG_USB_DEVICEFS=y 131 135 CONFIG_USB_MON=y 132 136 CONFIG_USB_EHCI_HCD=y 133 137 CONFIG_USB_EHCI_FSL=y ··· 136 142 CONFIG_USB_STORAGE=y 137 143 CONFIG_MMC=y 138 144 CONFIG_MMC_SDHCI=y 139 - CONFIG_MMC_SDHCI_OF=y 140 - CONFIG_MMC_SDHCI_OF_ESDHC=y 141 145 CONFIG_EDAC=y 142 146 CONFIG_EDAC_MM_EDAC=y 143 147 CONFIG_EDAC_MPC85XX=y ··· 162 170 CONFIG_JFFS2_FS=y 163 171 CONFIG_CRAMFS=y 164 172 CONFIG_NFS_FS=y 165 - CONFIG_NFS_V3=y 166 173 CONFIG_NFS_V4=y 167 174 CONFIG_ROOT_NFS=y 168 175 CONFIG_NFSD=m 169 - CONFIG_PARTITION_ADVANCED=y 170 - CONFIG_MAC_PARTITION=y 171 176 CONFIG_NLS_ISO8859_1=y 172 177 CONFIG_NLS_UTF8=m 173 178 CONFIG_MAGIC_SYSRQ=y 174 179 CONFIG_DEBUG_SHIRQ=y 175 180 CONFIG_DETECT_HUNG_TASK=y 176 181 CONFIG_DEBUG_INFO=y 177 - CONFIG_SYSCTL_SYSCALL_CHECK=y 182 + CONFIG_RCU_TRACE=y 178 183 CONFIG_CRYPTO_NULL=y 179 184 CONFIG_CRYPTO_PCBC=m 180 185 CONFIG_CRYPTO_MD4=y
+1
arch/powerpc/configs/corenet64_smp_defconfig
··· 56 56 CONFIG_IPV6=y 57 57 CONFIG_IP_SCTP=m 58 58 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 59 + CONFIG_DEVTMPFS=y 59 60 CONFIG_MTD=y 60 61 CONFIG_MTD_CMDLINE_PARTS=y 61 62 CONFIG_MTD_CHAR=y
+73 -30
arch/powerpc/configs/g5_defconfig
··· 1 + CONFIG_PPC64=y 2 + CONFIG_ALTIVEC=y 3 + CONFIG_SMP=y 4 + CONFIG_NR_CPUS=4 1 5 CONFIG_EXPERIMENTAL=y 2 6 CONFIG_SYSVIPC=y 3 7 CONFIG_POSIX_MQUEUE=y 4 - CONFIG_NO_HZ=y 5 - CONFIG_HIGH_RES_TIMERS=y 6 8 CONFIG_IKCONFIG=y 7 9 CONFIG_IKCONFIG_PROC=y 8 10 CONFIG_BLK_DEV_INITRD=y ··· 15 13 CONFIG_MODULE_UNLOAD=y 16 14 CONFIG_MODVERSIONS=y 17 15 CONFIG_MODULE_SRCVERSION_ALL=y 18 - CONFIG_PARTITION_ADVANCED=y 19 - CONFIG_MAC_PARTITION=y 20 - CONFIG_SMP=y 21 - CONFIG_NR_CPUS=4 22 - CONFIG_KEXEC=y 23 - # CONFIG_RELOCATABLE is not set 16 + # CONFIG_PPC_PSERIES is not set 24 17 CONFIG_CPU_FREQ=y 25 18 CONFIG_CPU_FREQ_GOV_POWERSAVE=y 26 19 CONFIG_CPU_FREQ_GOV_USERSPACE=y 20 + CONFIG_CPU_FREQ_PMAC64=y 21 + CONFIG_NO_HZ=y 22 + CONFIG_HIGH_RES_TIMERS=y 23 + CONFIG_KEXEC=y 24 + CONFIG_IRQ_ALL_CPUS=y 25 + # CONFIG_MIGRATION is not set 27 26 CONFIG_PCI_MSI=y 28 27 CONFIG_NET=y 29 28 CONFIG_PACKET=y ··· 52 49 CONFIG_NF_CONNTRACK_IPV4=m 53 50 CONFIG_IP_NF_QUEUE=m 54 51 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 52 + CONFIG_PROC_DEVICETREE=y 55 53 CONFIG_BLK_DEV_LOOP=y 56 54 CONFIG_BLK_DEV_NBD=m 57 55 CONFIG_BLK_DEV_RAM=y ··· 60 56 CONFIG_CDROM_PKTCDVD=m 61 57 CONFIG_IDE=y 62 58 CONFIG_BLK_DEV_IDECD=y 59 + CONFIG_BLK_DEV_IDE_PMAC=y 60 + CONFIG_BLK_DEV_IDE_PMAC_ATA100FIRST=y 63 61 CONFIG_BLK_DEV_SD=y 64 62 CONFIG_CHR_DEV_ST=y 65 63 CONFIG_BLK_DEV_SR=y ··· 85 79 CONFIG_DM_SNAPSHOT=m 86 80 CONFIG_DM_MIRROR=m 87 81 CONFIG_DM_ZERO=m 88 - CONFIG_MACINTOSH_DRIVERS=y 82 + CONFIG_IEEE1394=y 83 + CONFIG_IEEE1394_OHCI1394=y 84 + CONFIG_IEEE1394_SBP2=m 85 + CONFIG_IEEE1394_ETH1394=m 86 + CONFIG_IEEE1394_RAWIO=y 87 + CONFIG_IEEE1394_VIDEO1394=m 88 + CONFIG_IEEE1394_DV1394=m 89 + CONFIG_ADB_PMU=y 90 + CONFIG_PMAC_SMU=y 89 91 CONFIG_MAC_EMUMOUSEBTN=y 92 + CONFIG_THERM_PM72=y 93 + CONFIG_WINDFARM=y 94 + CONFIG_WINDFARM_PM81=y 95 + CONFIG_WINDFARM_PM91=y 96 + CONFIG_WINDFARM_PM112=y 97 + CONFIG_WINDFARM_PM121=y 90 98 CONFIG_NETDEVICES=y 91 - CONFIG_BONDING=m 92 99 CONFIG_DUMMY=m 93 - CONFIG_MII=y 
100 + CONFIG_BONDING=m 94 101 CONFIG_TUN=m 102 + CONFIG_NET_ETHERNET=y 103 + CONFIG_MII=y 104 + CONFIG_SUNGEM=y 95 105 CONFIG_ACENIC=m 96 106 CONFIG_ACENIC_OMIT_TIGON_I=y 97 - CONFIG_TIGON3=y 98 107 CONFIG_E1000=y 99 - CONFIG_SUNGEM=y 100 - CONFIG_PPP=m 101 - CONFIG_PPP_BSDCOMP=m 102 - CONFIG_PPP_DEFLATE=m 103 - CONFIG_PPPOE=m 104 - CONFIG_PPP_ASYNC=m 105 - CONFIG_PPP_SYNC_TTY=m 108 + CONFIG_TIGON3=y 106 109 CONFIG_USB_CATC=m 107 110 CONFIG_USB_KAWETH=m 108 111 CONFIG_USB_PEGASUS=m ··· 121 106 # CONFIG_USB_NET_NET1080 is not set 122 107 # CONFIG_USB_NET_CDC_SUBSET is not set 123 108 # CONFIG_USB_NET_ZAURUS is not set 109 + CONFIG_PPP=m 110 + CONFIG_PPP_ASYNC=m 111 + CONFIG_PPP_SYNC_TTY=m 112 + CONFIG_PPP_DEFLATE=m 113 + CONFIG_PPP_BSDCOMP=m 114 + CONFIG_PPPOE=m 124 115 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set 125 116 CONFIG_INPUT_JOYDEV=m 126 117 CONFIG_INPUT_EVDEV=y 118 + # CONFIG_KEYBOARD_ATKBD is not set 127 119 # CONFIG_MOUSE_PS2 is not set 120 + # CONFIG_SERIO_I8042 is not set 128 121 # CONFIG_SERIO_SERPORT is not set 129 - CONFIG_VT_HW_CONSOLE_BINDING=y 130 122 # CONFIG_HW_RANDOM is not set 131 123 CONFIG_GEN_RTC=y 132 124 CONFIG_RAW_DRIVER=y 133 125 CONFIG_I2C_CHARDEV=y 134 126 # CONFIG_HWMON is not set 135 - CONFIG_AGP=y 136 - CONFIG_DRM=y 137 - CONFIG_DRM_NOUVEAU=y 127 + CONFIG_AGP=m 128 + CONFIG_AGP_UNINORTH=m 138 129 CONFIG_VIDEO_OUTPUT_CONTROL=m 130 + CONFIG_FB=y 139 131 CONFIG_FIRMWARE_EDID=y 140 132 CONFIG_FB_TILEBLITTING=y 133 + CONFIG_FB_OF=y 134 + CONFIG_FB_NVIDIA=y 135 + CONFIG_FB_NVIDIA_I2C=y 141 136 CONFIG_FB_RADEON=y 137 + # CONFIG_VGA_CONSOLE is not set 138 + CONFIG_FRAMEBUFFER_CONSOLE=y 142 139 CONFIG_LOGO=y 143 140 CONFIG_SOUND=m 144 141 CONFIG_SND=m ··· 158 131 CONFIG_SND_MIXER_OSS=m 159 132 CONFIG_SND_PCM_OSS=m 160 133 CONFIG_SND_SEQUENCER_OSS=y 134 + CONFIG_SND_POWERMAC=m 135 + CONFIG_SND_AOA=m 136 + CONFIG_SND_AOA_FABRIC_LAYOUT=m 137 + CONFIG_SND_AOA_ONYX=m 138 + CONFIG_SND_AOA_TAS=m 139 + CONFIG_SND_AOA_TOONIE=m 161 140 
CONFIG_SND_USB_AUDIO=m 141 + CONFIG_HID_PID=y 142 + CONFIG_USB_HIDDEV=y 162 143 CONFIG_HID_GYRATION=y 163 144 CONFIG_LOGITECH_FF=y 164 145 CONFIG_HID_PANTHERLORD=y ··· 174 139 CONFIG_HID_SAMSUNG=y 175 140 CONFIG_HID_SONY=y 176 141 CONFIG_HID_SUNPLUS=y 177 - CONFIG_HID_PID=y 178 - CONFIG_USB_HIDDEV=y 179 142 CONFIG_USB=y 143 + CONFIG_USB_DEVICEFS=y 180 144 CONFIG_USB_MON=y 181 145 CONFIG_USB_EHCI_HCD=y 146 + # CONFIG_USB_EHCI_HCD_PPC_OF is not set 182 147 CONFIG_USB_OHCI_HCD=y 148 + CONFIG_USB_OHCI_HCD_PPC_OF_BE=y 183 149 CONFIG_USB_ACM=m 184 150 CONFIG_USB_PRINTER=y 185 151 CONFIG_USB_STORAGE=y ··· 244 208 CONFIG_REISERFS_FS_SECURITY=y 245 209 CONFIG_XFS_FS=m 246 210 CONFIG_XFS_POSIX_ACL=y 211 + CONFIG_INOTIFY=y 212 + CONFIG_AUTOFS_FS=m 247 213 CONFIG_ISO9660_FS=y 248 214 CONFIG_JOLIET=y 249 215 CONFIG_ZISOFS=y ··· 259 221 CONFIG_HFSPLUS_FS=m 260 222 CONFIG_CRAMFS=y 261 223 CONFIG_NFS_FS=y 224 + CONFIG_NFS_V3=y 262 225 CONFIG_NFS_V3_ACL=y 263 226 CONFIG_NFS_V4=y 264 227 CONFIG_NFSD=y 265 228 CONFIG_NFSD_V3_ACL=y 266 229 CONFIG_NFSD_V4=y 267 230 CONFIG_CIFS=m 231 + CONFIG_PARTITION_ADVANCED=y 268 232 CONFIG_NLS_CODEPAGE_437=y 269 233 CONFIG_NLS_CODEPAGE_1250=y 270 234 CONFIG_NLS_CODEPAGE_1251=y ··· 274 234 CONFIG_NLS_ISO8859_1=y 275 235 CONFIG_NLS_ISO8859_15=y 276 236 CONFIG_NLS_UTF8=y 237 + CONFIG_CRC_T10DIF=y 238 + CONFIG_LIBCRC32C=m 277 239 CONFIG_MAGIC_SYSRQ=y 278 - # CONFIG_UNUSED_SYMBOLS is not set 279 240 CONFIG_DEBUG_FS=y 280 241 CONFIG_DEBUG_KERNEL=y 281 242 CONFIG_DEBUG_MUTEXES=y 243 + # CONFIG_RCU_CPU_STALL_DETECTOR is not set 282 244 CONFIG_LATENCYTOP=y 283 - CONFIG_STRICT_DEVMEM=y 245 + CONFIG_SYSCTL_SYSCALL_CHECK=y 246 + CONFIG_BOOTX_TEXT=y 284 247 CONFIG_CRYPTO_NULL=m 285 248 CONFIG_CRYPTO_TEST=m 249 + CONFIG_CRYPTO_ECB=m 286 250 CONFIG_CRYPTO_PCBC=m 287 251 CONFIG_CRYPTO_HMAC=y 252 + CONFIG_CRYPTO_MD4=m 288 253 CONFIG_CRYPTO_MICHAEL_MIC=m 289 254 CONFIG_CRYPTO_SHA256=m 290 255 CONFIG_CRYPTO_SHA512=m 291 256 CONFIG_CRYPTO_WP512=m 292 257 
CONFIG_CRYPTO_AES=m 293 258 CONFIG_CRYPTO_ANUBIS=m 259 + CONFIG_CRYPTO_ARC4=m 294 260 CONFIG_CRYPTO_BLOWFISH=m 295 261 CONFIG_CRYPTO_CAST5=m 296 262 CONFIG_CRYPTO_CAST6=m ··· 306 260 CONFIG_CRYPTO_TWOFISH=m 307 261 # CONFIG_CRYPTO_ANSI_CPRNG is not set 308 262 # CONFIG_CRYPTO_HW is not set 309 - # CONFIG_VIRTUALIZATION is not set 310 - CONFIG_CRC_T10DIF=y 311 - CONFIG_LIBCRC32C=m
+5 -13
arch/powerpc/configs/mpc83xx_defconfig
··· 2 2 CONFIG_SYSVIPC=y 3 3 CONFIG_LOG_BUF_SHIFT=14 4 4 CONFIG_BLK_DEV_INITRD=y 5 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 6 5 CONFIG_EXPERT=y 7 6 CONFIG_SLAB=y 8 7 CONFIG_MODULES=y 9 8 CONFIG_MODULE_UNLOAD=y 10 9 # CONFIG_BLK_DEV_BSG is not set 10 + CONFIG_PARTITION_ADVANCED=y 11 11 # CONFIG_PPC_CHRP is not set 12 12 # CONFIG_PPC_PMAC is not set 13 13 CONFIG_PPC_83xx=y ··· 25 25 CONFIG_QUICC_ENGINE=y 26 26 CONFIG_QE_GPIO=y 27 27 CONFIG_MATH_EMULATION=y 28 - CONFIG_SPARSE_IRQ=y 29 28 CONFIG_PCI=y 30 29 CONFIG_NET=y 31 30 CONFIG_PACKET=y ··· 41 42 # CONFIG_INET_LRO is not set 42 43 # CONFIG_IPV6 is not set 43 44 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 45 + CONFIG_DEVTMPFS=y 44 46 # CONFIG_FW_LOADER is not set 45 47 CONFIG_MTD=y 46 - CONFIG_MTD_PARTITIONS=y 47 - CONFIG_MTD_OF_PARTS=y 48 48 CONFIG_MTD_CHAR=y 49 49 CONFIG_MTD_BLOCK=y 50 50 CONFIG_MTD_CFI=y ··· 62 64 CONFIG_SATA_FSL=y 63 65 CONFIG_SATA_SIL=y 64 66 CONFIG_NETDEVICES=y 67 + CONFIG_MII=y 68 + CONFIG_UCC_GETH=y 69 + CONFIG_GIANFAR=y 65 70 CONFIG_MARVELL_PHY=y 66 71 CONFIG_DAVICOM_PHY=y 67 72 CONFIG_VITESSE_PHY=y 68 73 CONFIG_ICPLUS_PHY=y 69 74 CONFIG_FIXED_PHY=y 70 - CONFIG_NET_ETHERNET=y 71 - CONFIG_MII=y 72 - CONFIG_GIANFAR=y 73 - CONFIG_UCC_GETH=y 74 75 CONFIG_INPUT_FF_MEMLESS=m 75 76 # CONFIG_INPUT_MOUSEDEV is not set 76 77 # CONFIG_INPUT_KEYBOARD is not set ··· 109 112 CONFIG_EXT2_FS=y 110 113 CONFIG_EXT3_FS=y 111 114 # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 112 - CONFIG_INOTIFY=y 113 115 CONFIG_PROC_KCORE=y 114 116 CONFIG_TMPFS=y 115 117 CONFIG_NFS_FS=y 116 - CONFIG_NFS_V3=y 117 118 CONFIG_NFS_V4=y 118 119 CONFIG_ROOT_NFS=y 119 - CONFIG_PARTITION_ADVANCED=y 120 120 CONFIG_CRC_T10DIF=y 121 - # CONFIG_RCU_CPU_STALL_DETECTOR is not set 122 - CONFIG_SYSCTL_SYSCALL_CHECK=y 123 121 CONFIG_CRYPTO_ECB=m 124 122 CONFIG_CRYPTO_PCBC=m 125 123 CONFIG_CRYPTO_SHA256=y
+9 -24
arch/powerpc/configs/mpc85xx_defconfig
··· 5 5 CONFIG_POSIX_MQUEUE=y 6 6 CONFIG_BSD_PROCESS_ACCT=y 7 7 CONFIG_AUDIT=y 8 - CONFIG_SPARSE_IRQ=y 8 + CONFIG_IRQ_DOMAIN_DEBUG=y 9 + CONFIG_NO_HZ=y 10 + CONFIG_HIGH_RES_TIMERS=y 9 11 CONFIG_IKCONFIG=y 10 12 CONFIG_IKCONFIG_PROC=y 11 13 CONFIG_LOG_BUF_SHIFT=14 ··· 19 17 CONFIG_MODULE_FORCE_UNLOAD=y 20 18 CONFIG_MODVERSIONS=y 21 19 # CONFIG_BLK_DEV_BSG is not set 20 + CONFIG_PARTITION_ADVANCED=y 21 + CONFIG_MAC_PARTITION=y 22 22 CONFIG_MPC8540_ADS=y 23 23 CONFIG_MPC8560_ADS=y 24 24 CONFIG_MPC85xx_CDS=y ··· 44 40 CONFIG_QUICC_ENGINE=y 45 41 CONFIG_QE_GPIO=y 46 42 CONFIG_HIGHMEM=y 47 - CONFIG_NO_HZ=y 48 - CONFIG_HIGH_RES_TIMERS=y 49 43 CONFIG_BINFMT_MISC=m 50 44 CONFIG_MATH_EMULATION=y 51 45 CONFIG_FORCE_MAX_ZONEORDER=12 ··· 76 74 CONFIG_IPV6=y 77 75 CONFIG_IP_SCTP=m 78 76 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 77 + CONFIG_DEVTMPFS=y 79 78 CONFIG_MTD=y 80 79 CONFIG_MTD_CMDLINE_PARTS=y 81 80 CONFIG_MTD_CHAR=y 82 81 CONFIG_MTD_BLOCK=y 83 - CONFIG_MTD_CFI=y 84 82 CONFIG_FTL=y 85 - CONFIG_MTD_GEN_PROBE=y 86 - CONFIG_MTD_MAP_BANK_WIDTH_1=y 87 - CONFIG_MTD_MAP_BANK_WIDTH_2=y 88 - CONFIG_MTD_MAP_BANK_WIDTH_4=y 89 - CONFIG_MTD_CFI_I1=y 90 - CONFIG_MTD_CFI_I2=y 83 + CONFIG_MTD_CFI=y 91 84 CONFIG_MTD_CFI_INTELEXT=y 92 85 CONFIG_MTD_CFI_AMDSTD=y 93 - CONFIG_MTD_CFI_UTIL=y 94 86 CONFIG_MTD_PHYSMAP_OF=y 95 - CONFIG_MTD_PARTITIONS=y 96 - CONFIG_MTD_OF_PARTS=y 87 + CONFIG_MTD_M25P80=y 97 88 CONFIG_MTD_NAND=y 98 89 CONFIG_MTD_NAND_FSL_ELBC=y 99 90 CONFIG_MTD_NAND_FSL_IFC=y 100 - CONFIG_MTD_NAND_IDS=y 101 - CONFIG_MTD_NAND_ECC=y 102 - CONFIG_MTD_M25P80=y 103 91 CONFIG_PROC_DEVICETREE=y 104 92 CONFIG_BLK_DEV_LOOP=y 105 93 CONFIG_BLK_DEV_NBD=y 106 94 CONFIG_BLK_DEV_RAM=y 107 95 CONFIG_BLK_DEV_RAM_SIZE=131072 108 - CONFIG_MISC_DEVICES=y 109 96 CONFIG_EEPROM_LEGACY=y 110 97 CONFIG_BLK_DEV_SD=y 111 98 CONFIG_CHR_DEV_ST=y ··· 106 115 CONFIG_SATA_AHCI=y 107 116 CONFIG_SATA_FSL=y 108 117 CONFIG_PATA_ALI=y 118 + CONFIG_PATA_VIA=y 109 119 CONFIG_NETDEVICES=y 110 120 CONFIG_DUMMY=y 111 121 
CONFIG_FS_ENET=y ··· 126 134 CONFIG_SERIAL_8250_CONSOLE=y 127 135 CONFIG_SERIAL_8250_NR_UARTS=2 128 136 CONFIG_SERIAL_8250_RUNTIME_UARTS=2 129 - CONFIG_SERIAL_8250_EXTENDED=y 130 137 CONFIG_SERIAL_8250_MANY_PORTS=y 131 138 CONFIG_SERIAL_8250_DETECT_IRQ=y 132 139 CONFIG_SERIAL_8250_RSA=y ··· 174 183 CONFIG_HID_SONY=y 175 184 CONFIG_HID_SUNPLUS=y 176 185 CONFIG_USB=y 177 - CONFIG_USB_DEVICEFS=y 178 186 CONFIG_USB_MON=y 179 187 CONFIG_USB_EHCI_HCD=y 180 188 CONFIG_USB_EHCI_FSL=y ··· 219 229 CONFIG_SYSV_FS=m 220 230 CONFIG_UFS_FS=m 221 231 CONFIG_NFS_FS=y 222 - CONFIG_NFS_V3=y 223 232 CONFIG_NFS_V4=y 224 233 CONFIG_ROOT_NFS=y 225 234 CONFIG_NFSD=y 226 - CONFIG_PARTITION_ADVANCED=y 227 - CONFIG_MAC_PARTITION=y 228 235 CONFIG_CRC_T10DIF=y 229 236 CONFIG_DEBUG_FS=y 230 237 CONFIG_DETECT_HUNG_TASK=y 231 238 CONFIG_DEBUG_INFO=y 232 - CONFIG_SYSCTL_SYSCALL_CHECK=y 233 - CONFIG_IRQ_DOMAIN_DEBUG=y 234 239 CONFIG_CRYPTO_PCBC=m 235 240 CONFIG_CRYPTO_SHA256=y 236 241 CONFIG_CRYPTO_SHA512=y
+8 -24
arch/powerpc/configs/mpc85xx_smp_defconfig
··· 7 7 CONFIG_POSIX_MQUEUE=y 8 8 CONFIG_BSD_PROCESS_ACCT=y 9 9 CONFIG_AUDIT=y 10 - CONFIG_SPARSE_IRQ=y 10 + CONFIG_IRQ_DOMAIN_DEBUG=y 11 + CONFIG_NO_HZ=y 12 + CONFIG_HIGH_RES_TIMERS=y 11 13 CONFIG_IKCONFIG=y 12 14 CONFIG_IKCONFIG_PROC=y 13 15 CONFIG_LOG_BUF_SHIFT=14 ··· 21 19 CONFIG_MODULE_FORCE_UNLOAD=y 22 20 CONFIG_MODVERSIONS=y 23 21 # CONFIG_BLK_DEV_BSG is not set 22 + CONFIG_PARTITION_ADVANCED=y 23 + CONFIG_MAC_PARTITION=y 24 24 CONFIG_MPC8540_ADS=y 25 25 CONFIG_MPC8560_ADS=y 26 26 CONFIG_MPC85xx_CDS=y ··· 46 42 CONFIG_QUICC_ENGINE=y 47 43 CONFIG_QE_GPIO=y 48 44 CONFIG_HIGHMEM=y 49 - CONFIG_NO_HZ=y 50 - CONFIG_HIGH_RES_TIMERS=y 51 45 CONFIG_BINFMT_MISC=m 52 46 CONFIG_MATH_EMULATION=y 53 47 CONFIG_IRQ_ALL_CPUS=y ··· 79 77 CONFIG_IPV6=y 80 78 CONFIG_IP_SCTP=m 81 79 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 80 + CONFIG_DEVTMPFS=y 82 81 CONFIG_MTD=y 83 82 CONFIG_MTD_CMDLINE_PARTS=y 84 83 CONFIG_MTD_CHAR=y 85 84 CONFIG_MTD_BLOCK=y 86 - CONFIG_MTD_CFI=y 87 85 CONFIG_FTL=y 88 - CONFIG_MTD_GEN_PROBE=y 89 - CONFIG_MTD_MAP_BANK_WIDTH_1=y 90 - CONFIG_MTD_MAP_BANK_WIDTH_2=y 91 - CONFIG_MTD_MAP_BANK_WIDTH_4=y 92 - CONFIG_MTD_CFI_I1=y 93 - CONFIG_MTD_CFI_I2=y 86 + CONFIG_MTD_CFI=y 94 87 CONFIG_MTD_CFI_INTELEXT=y 95 88 CONFIG_MTD_CFI_AMDSTD=y 96 - CONFIG_MTD_CFI_UTIL=y 97 89 CONFIG_MTD_PHYSMAP_OF=y 98 - CONFIG_MTD_PARTITIONS=y 99 - CONFIG_MTD_OF_PARTS=y 90 + CONFIG_MTD_M25P80=y 100 91 CONFIG_MTD_NAND=y 101 92 CONFIG_MTD_NAND_FSL_ELBC=y 102 93 CONFIG_MTD_NAND_FSL_IFC=y 103 - CONFIG_MTD_NAND_IDS=y 104 - CONFIG_MTD_NAND_ECC=y 105 - CONFIG_MTD_M25P80=y 106 94 CONFIG_PROC_DEVICETREE=y 107 95 CONFIG_BLK_DEV_LOOP=y 108 96 CONFIG_BLK_DEV_NBD=y 109 97 CONFIG_BLK_DEV_RAM=y 110 98 CONFIG_BLK_DEV_RAM_SIZE=131072 111 - CONFIG_MISC_DEVICES=y 112 99 CONFIG_EEPROM_LEGACY=y 113 100 CONFIG_BLK_DEV_SD=y 114 101 CONFIG_CHR_DEV_ST=y ··· 128 137 CONFIG_SERIAL_8250_CONSOLE=y 129 138 CONFIG_SERIAL_8250_NR_UARTS=2 130 139 CONFIG_SERIAL_8250_RUNTIME_UARTS=2 131 - CONFIG_SERIAL_8250_EXTENDED=y 132 
140 CONFIG_SERIAL_8250_MANY_PORTS=y 133 141 CONFIG_SERIAL_8250_DETECT_IRQ=y 134 142 CONFIG_SERIAL_8250_RSA=y ··· 176 186 CONFIG_HID_SONY=y 177 187 CONFIG_HID_SUNPLUS=y 178 188 CONFIG_USB=y 179 - CONFIG_USB_DEVICEFS=y 180 189 CONFIG_USB_MON=y 181 190 CONFIG_USB_EHCI_HCD=y 182 191 CONFIG_USB_EHCI_FSL=y ··· 221 232 CONFIG_SYSV_FS=m 222 233 CONFIG_UFS_FS=m 223 234 CONFIG_NFS_FS=y 224 - CONFIG_NFS_V3=y 225 235 CONFIG_NFS_V4=y 226 236 CONFIG_ROOT_NFS=y 227 237 CONFIG_NFSD=y 228 - CONFIG_PARTITION_ADVANCED=y 229 - CONFIG_MAC_PARTITION=y 230 238 CONFIG_CRC_T10DIF=y 231 239 CONFIG_DEBUG_FS=y 232 240 CONFIG_DETECT_HUNG_TASK=y 233 241 CONFIG_DEBUG_INFO=y 234 - CONFIG_SYSCTL_SYSCALL_CHECK=y 235 - CONFIG_IRQ_DOMAIN_DEBUG=y 236 242 CONFIG_CRYPTO_PCBC=m 237 243 CONFIG_CRYPTO_SHA256=y 238 244 CONFIG_CRYPTO_SHA512=y
-2
arch/powerpc/include/asm/cputable.h
··· 553 553 & feature); 554 554 } 555 555 556 - #ifdef CONFIG_HAVE_HW_BREAKPOINT 557 556 #define HBP_NUM 1 558 - #endif /* CONFIG_HAVE_HW_BREAKPOINT */ 559 557 560 558 #endif /* !__ASSEMBLY__ */ 561 559
+1
arch/powerpc/include/asm/kvm_host.h
··· 33 33 #include <asm/kvm_asm.h> 34 34 #include <asm/processor.h> 35 35 #include <asm/page.h> 36 + #include <asm/cacheflush.h> 36 37 37 38 #define KVM_MAX_VCPUS NR_CPUS 38 39 #define KVM_MAX_VCORES NR_CPUS
+12
arch/powerpc/include/asm/kvm_ppc.h
··· 219 219 void kvmppc_free_lpid(long lpid); 220 220 void kvmppc_init_lpid(unsigned long nr_lpids); 221 221 222 + static inline void kvmppc_mmu_flush_icache(pfn_t pfn) 223 + { 224 + /* Clear i-cache for new pages */ 225 + struct page *page; 226 + page = pfn_to_page(pfn); 227 + if (!test_bit(PG_arch_1, &page->flags)) { 228 + flush_dcache_icache_page(page); 229 + set_bit(PG_arch_1, &page->flags); 230 + } 231 + } 232 + 233 + 222 234 #endif /* __POWERPC_KVM_PPC_H__ */
+1
arch/powerpc/include/asm/mpic_msgr.h
··· 14 14 #include <linux/types.h> 15 15 #include <linux/spinlock.h> 16 16 #include <asm/smp.h> 17 + #include <asm/io.h> 17 18 18 19 struct mpic_msgr { 19 20 u32 __iomem *base;
+4 -5
arch/powerpc/kernel/dma-iommu.c
··· 83 83 return 0; 84 84 } 85 85 86 - if ((tbl->it_offset + tbl->it_size) > (mask >> IOMMU_PAGE_SHIFT)) { 87 - dev_info(dev, "Warning: IOMMU window too big for device mask\n"); 88 - dev_info(dev, "mask: 0x%08llx, table end: 0x%08lx\n", 89 - mask, (tbl->it_offset + tbl->it_size) << 90 - IOMMU_PAGE_SHIFT); 86 + if (tbl->it_offset > (mask >> IOMMU_PAGE_SHIFT)) { 87 + dev_info(dev, "Warning: IOMMU offset too big for device mask\n"); 88 + dev_info(dev, "mask: 0x%08llx, table offset: 0x%08lx\n", 89 + mask, tbl->it_offset << IOMMU_PAGE_SHIFT); 91 90 return 0; 92 91 } else 93 92 return 1;
+1 -1
arch/powerpc/kernel/hw_breakpoint.c
··· 253 253 254 254 /* Do not emulate user-space instructions, instead single-step them */ 255 255 if (user_mode(regs)) { 256 - bp->ctx->task->thread.last_hit_ubp = bp; 256 + current->thread.last_hit_ubp = bp; 257 257 regs->msr |= MSR_SE; 258 258 goto out; 259 259 }
+24 -3
arch/powerpc/kernel/kgdb.c
··· 25 25 #include <asm/processor.h> 26 26 #include <asm/machdep.h> 27 27 #include <asm/debug.h> 28 + #include <linux/slab.h> 28 29 29 30 /* 30 31 * This table contains the mapping between PowerPC hardware trap types, and ··· 102 101 return SIGHUP; /* default for things we don't know about */ 103 102 } 104 103 104 + /** 105 + * 106 + * kgdb_skipexception - Bail out of KGDB when we've been triggered. 107 + * @exception: Exception vector number 108 + * @regs: Current &struct pt_regs. 109 + * 110 + * On some architectures we need to skip a breakpoint exception when 111 + * it occurs after a breakpoint has been removed. 112 + * 113 + */ 114 + int kgdb_skipexception(int exception, struct pt_regs *regs) 115 + { 116 + return kgdb_isremovedbreak(regs->nip); 117 + } 118 + 105 119 static int kgdb_call_nmi_hook(struct pt_regs *regs) 106 120 { 107 121 kgdb_nmicallback(raw_smp_processor_id(), regs); ··· 154 138 static int kgdb_singlestep(struct pt_regs *regs) 155 139 { 156 140 struct thread_info *thread_info, *exception_thread_info; 141 + struct thread_info *backup_current_thread_info = \ 142 + (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL); 157 143 158 144 if (user_mode(regs)) 159 145 return 0; ··· 173 155 thread_info = (struct thread_info *)(regs->gpr[1] & ~(THREAD_SIZE-1)); 174 156 exception_thread_info = current_thread_info(); 175 157 176 - if (thread_info != exception_thread_info) 158 + if (thread_info != exception_thread_info) { 159 + /* Save the original current_thread_info. */ 160 + memcpy(backup_current_thread_info, exception_thread_info, sizeof *thread_info); 177 161 memcpy(exception_thread_info, thread_info, sizeof *thread_info); 162 + } 178 163 179 164 kgdb_handle_exception(0, SIGTRAP, 0, regs); 180 165 181 166 if (thread_info != exception_thread_info) 182 - memcpy(thread_info, exception_thread_info, sizeof *thread_info); 167 + /* Restore current_thread_info lastly. 
*/ 168 + memcpy(exception_thread_info, backup_current_thread_info, sizeof *thread_info); 183 169 184 170 return 1; 185 171 } ··· 432 410 #else 433 411 linux_regs->msr |= MSR_SE; 434 412 #endif 435 - kgdb_single_step = 1; 436 413 atomic_set(&kgdb_cpu_doing_single_step, 437 414 raw_smp_processor_id()); 438 415 }
+4 -4
arch/powerpc/kernel/syscalls.c
··· 107 107 long ret; 108 108 109 109 if (personality(current->personality) == PER_LINUX32 110 - && personality == PER_LINUX) 111 - personality = PER_LINUX32; 110 + && personality(personality) == PER_LINUX) 111 + personality = (personality & ~PER_MASK) | PER_LINUX32; 112 112 ret = sys_personality(personality); 113 - if (ret == PER_LINUX32) 114 - ret = PER_LINUX; 113 + if (personality(ret) == PER_LINUX32) 114 + ret = (ret & ~PER_MASK) | PER_LINUX; 115 115 return ret; 116 116 } 117 117 #endif
+3
arch/powerpc/kvm/book3s_32_mmu_host.c
··· 211 211 pteg1 |= PP_RWRX; 212 212 } 213 213 214 + if (orig_pte->may_execute) 215 + kvmppc_mmu_flush_icache(hpaddr >> PAGE_SHIFT); 216 + 214 217 local_irq_disable(); 215 218 216 219 if (pteg[rr]) {
+2
arch/powerpc/kvm/book3s_64_mmu_host.c
··· 126 126 127 127 if (!orig_pte->may_execute) 128 128 rflags |= HPTE_R_N; 129 + else 130 + kvmppc_mmu_flush_icache(hpaddr >> PAGE_SHIFT); 129 131 130 132 hash = hpt_hash(va, PTE_SIZE, MMU_SEGSIZE_256M); 131 133
+7 -5
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 1421 1421 sync /* order setting ceded vs. testing prodded */ 1422 1422 lbz r5,VCPU_PRODDED(r3) 1423 1423 cmpwi r5,0 1424 - bne 1f 1424 + bne kvm_cede_prodded 1425 1425 li r0,0 /* set trap to 0 to say hcall is handled */ 1426 1426 stw r0,VCPU_TRAP(r3) 1427 1427 li r0,H_SUCCESS 1428 1428 std r0,VCPU_GPR(R3)(r3) 1429 1429 BEGIN_FTR_SECTION 1430 - b 2f /* just send it up to host on 970 */ 1430 + b kvm_cede_exit /* just send it up to host on 970 */ 1431 1431 END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_206) 1432 1432 1433 1433 /* ··· 1446 1446 or r4,r4,r0 1447 1447 PPC_POPCNTW(R7,R4) 1448 1448 cmpw r7,r8 1449 - bge 2f 1449 + bge kvm_cede_exit 1450 1450 stwcx. r4,0,r6 1451 1451 bne 31b 1452 1452 li r0,1 ··· 1555 1555 b hcall_real_fallback 1556 1556 1557 1557 /* cede when already previously prodded case */ 1558 - 1: li r0,0 1558 + kvm_cede_prodded: 1559 + li r0,0 1559 1560 stb r0,VCPU_PRODDED(r3) 1560 1561 sync /* order testing prodded vs. clearing ceded */ 1561 1562 stb r0,VCPU_CEDED(r3) ··· 1564 1563 blr 1565 1564 1566 1565 /* we've ceded but we want to give control to the host */ 1567 - 2: li r3,H_TOO_HARD 1566 + kvm_cede_exit: 1567 + li r3,H_TOO_HARD 1568 1568 blr 1569 1569 1570 1570 secondary_too_late:
+7 -4
arch/powerpc/kvm/e500_tlb.c
··· 322 322 static void clear_tlb1_bitmap(struct kvmppc_vcpu_e500 *vcpu_e500) 323 323 { 324 324 if (vcpu_e500->g2h_tlb1_map) 325 - memset(vcpu_e500->g2h_tlb1_map, 326 - sizeof(u64) * vcpu_e500->gtlb_params[1].entries, 0); 325 + memset(vcpu_e500->g2h_tlb1_map, 0, 326 + sizeof(u64) * vcpu_e500->gtlb_params[1].entries); 327 327 if (vcpu_e500->h2g_tlb1_rmap) 328 - memset(vcpu_e500->h2g_tlb1_rmap, 329 - sizeof(unsigned int) * host_tlb_params[1].entries, 0); 328 + memset(vcpu_e500->h2g_tlb1_rmap, 0, 329 + sizeof(unsigned int) * host_tlb_params[1].entries); 330 330 } 331 331 332 332 static void clear_tlb_privs(struct kvmppc_vcpu_e500 *vcpu_e500) ··· 539 539 540 540 kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize, 541 541 ref, gvaddr, stlbe); 542 + 543 + /* Clear i-cache for new pages */ 544 + kvmppc_mmu_flush_icache(pfn); 542 545 } 543 546 544 547 /* XXX only map the one-one case, for now use TLB0 */
+2 -33
arch/powerpc/lib/copyuser_power7.S
··· 288 288 std r0,16(r1) 289 289 stdu r1,-STACKFRAMESIZE(r1) 290 290 bl .enter_vmx_usercopy 291 - cmpwi r3,0 291 + cmpwi cr1,r3,0 292 292 ld r0,STACKFRAMESIZE+16(r1) 293 293 ld r3,STACKFRAMESIZE+48(r1) 294 294 ld r4,STACKFRAMESIZE+56(r1) ··· 326 326 dcbt r0,r8,0b01010 /* GO */ 327 327 .machine pop 328 328 329 - /* 330 - * We prefetch both the source and destination using enhanced touch 331 - * instructions. We use a stream ID of 0 for the load side and 332 - * 1 for the store side. 333 - */ 334 - clrrdi r6,r4,7 335 - clrrdi r9,r3,7 336 - ori r9,r9,1 /* stream=1 */ 337 - 338 - srdi r7,r5,7 /* length in cachelines, capped at 0x3FF */ 339 - cmpldi cr1,r7,0x3FF 340 - ble cr1,1f 341 - li r7,0x3FF 342 - 1: lis r0,0x0E00 /* depth=7 */ 343 - sldi r7,r7,7 344 - or r7,r7,r0 345 - ori r10,r7,1 /* stream=1 */ 346 - 347 - lis r8,0x8000 /* GO=1 */ 348 - clrldi r8,r8,32 349 - 350 - .machine push 351 - .machine "power4" 352 - dcbt r0,r6,0b01000 353 - dcbt r0,r7,0b01010 354 - dcbtst r0,r9,0b01000 355 - dcbtst r0,r10,0b01010 356 - eieio 357 - dcbt r0,r8,0b01010 /* GO */ 358 - .machine pop 359 - 360 - beq .Lunwind_stack_nonvmx_copy 329 + beq cr1,.Lunwind_stack_nonvmx_copy 361 330 362 331 /* 363 332 * If source and destination are not relatively aligned we use a
+2 -2
arch/powerpc/lib/memcpy_power7.S
··· 222 222 std r0,16(r1) 223 223 stdu r1,-STACKFRAMESIZE(r1) 224 224 bl .enter_vmx_copy 225 - cmpwi r3,0 225 + cmpwi cr1,r3,0 226 226 ld r0,STACKFRAMESIZE+16(r1) 227 227 ld r3,STACKFRAMESIZE+48(r1) 228 228 ld r4,STACKFRAMESIZE+56(r1) ··· 260 260 dcbt r0,r8,0b01010 /* GO */ 261 261 .machine pop 262 262 263 - beq .Lunwind_stack_nonvmx_copy 263 + beq cr1,.Lunwind_stack_nonvmx_copy 264 264 265 265 /* 266 266 * If source and destination are not relatively aligned we use a
+1
arch/powerpc/mm/mem.c
··· 469 469 __flush_dcache_icache_phys(page_to_pfn(page) << PAGE_SHIFT); 470 470 #endif 471 471 } 472 + EXPORT_SYMBOL(flush_dcache_icache_page); 472 473 473 474 void clear_user_page(void *page, unsigned long vaddr, struct page *pg) 474 475 {
+1 -1
arch/powerpc/perf/core-book3s.c
··· 1431 1431 if (!event->hw.idx || is_limited_pmc(event->hw.idx)) 1432 1432 continue; 1433 1433 val = read_pmc(event->hw.idx); 1434 - if ((int)val < 0) { 1434 + if (pmc_overflow(val)) { 1435 1435 /* event has overflowed */ 1436 1436 found = 1; 1437 1437 record_and_restart(event, val, regs);
+8 -5
arch/powerpc/sysdev/fsl_pci.c
··· 465 465 iounmap(hose->cfg_data); 466 466 iounmap(hose->cfg_addr); 467 467 pcibios_free_controller(hose); 468 - return 0; 468 + return -ENODEV; 469 469 } 470 470 471 471 setup_pci_cmd(hose); ··· 827 827 828 828 void __devinit fsl_pci_init(void) 829 829 { 830 + int ret; 830 831 struct device_node *node; 831 832 struct pci_controller *hose; 832 833 dma_addr_t max = 0xffffffff; ··· 856 855 if (!fsl_pci_primary) 857 856 fsl_pci_primary = node; 858 857 859 - fsl_add_bridge(node, fsl_pci_primary == node); 860 - hose = pci_find_hose_for_OF_device(node); 861 - max = min(max, hose->dma_window_base_cur + 862 - hose->dma_window_size); 858 + ret = fsl_add_bridge(node, fsl_pci_primary == node); 859 + if (ret == 0) { 860 + hose = pci_find_hose_for_OF_device(node); 861 + max = min(max, hose->dma_window_base_cur + 862 + hose->dma_window_size); 863 + } 863 864 } 864 865 } 865 866
+3
arch/powerpc/sysdev/mpic_msgr.c
··· 14 14 #include <linux/list.h> 15 15 #include <linux/of_platform.h> 16 16 #include <linux/errno.h> 17 + #include <linux/err.h> 18 + #include <linux/export.h> 19 + #include <linux/slab.h> 17 20 #include <asm/prom.h> 18 21 #include <asm/hw_irq.h> 19 22 #include <asm/ppc-pci.h>
+30 -46
arch/powerpc/xmon/xmon.c
··· 17 17 #include <linux/reboot.h> 18 18 #include <linux/delay.h> 19 19 #include <linux/kallsyms.h> 20 + #include <linux/kmsg_dump.h> 20 21 #include <linux/cpumask.h> 21 22 #include <linux/export.h> 22 23 #include <linux/sysrq.h> ··· 895 894 #endif 896 895 default: 897 896 printf("Unrecognized command: "); 898 - do { 897 + do { 899 898 if (' ' < cmd && cmd <= '~') 900 899 putchar(cmd); 901 900 else 902 901 printf("\\x%x", cmd); 903 902 cmd = inchar(); 904 - } while (cmd != '\n'); 903 + } while (cmd != '\n'); 905 904 printf(" (type ? for help)\n"); 906 905 break; 907 906 } ··· 1098 1097 return 1; 1099 1098 } 1100 1099 1101 - static char *breakpoint_help_string = 1100 + static char *breakpoint_help_string = 1102 1101 "Breakpoint command usage:\n" 1103 1102 "b show breakpoints\n" 1104 1103 "b <addr> [cnt] set breakpoint at given instr addr\n" ··· 1194 1193 1195 1194 default: 1196 1195 termch = cmd; 1197 - cmd = skipbl(); 1196 + cmd = skipbl(); 1198 1197 if (cmd == '?') { 1199 1198 printf(breakpoint_help_string); 1200 1199 break; ··· 1360 1359 sp + REGS_OFFSET); 1361 1360 break; 1362 1361 } 1363 - printf("--- Exception: %lx %s at ", regs.trap, 1362 + printf("--- Exception: %lx %s at ", regs.trap, 1364 1363 getvecname(TRAP(&regs))); 1365 1364 pc = regs.nip; 1366 1365 lr = regs.link; ··· 1624 1623 1625 1624 cmd = skipbl(); 1626 1625 if (cmd == '\n') { 1627 - unsigned long sp, toc; 1626 + unsigned long sp, toc; 1628 1627 asm("mr %0,1" : "=r" (sp) :); 1629 1628 asm("mr %0,2" : "=r" (toc) :); 1630 1629 1631 1630 printf("msr = "REG" sprg0= "REG"\n", 1632 1631 mfmsr(), mfspr(SPRN_SPRG0)); 1633 1632 printf("pvr = "REG" sprg1= "REG"\n", 1634 - mfspr(SPRN_PVR), mfspr(SPRN_SPRG1)); 1633 + mfspr(SPRN_PVR), mfspr(SPRN_SPRG1)); 1635 1634 printf("dec = "REG" sprg2= "REG"\n", 1636 1635 mfspr(SPRN_DEC), mfspr(SPRN_SPRG2)); 1637 1636 printf("sp = "REG" sprg3= "REG"\n", sp, mfspr(SPRN_SPRG3)); ··· 1784 1783 static int brev; 1785 1784 static int mnoread; 1786 1785 1787 - static char 
*memex_help_string = 1786 + static char *memex_help_string = 1788 1787 "Memory examine command usage:\n" 1789 1788 "m [addr] [flags] examine/change memory\n" 1790 1789 " addr is optional. will start where left off.\n" ··· 1799 1798 "NOTE: flags are saved as defaults\n" 1800 1799 ""; 1801 1800 1802 - static char *memex_subcmd_help_string = 1801 + static char *memex_subcmd_help_string = 1803 1802 "Memory examine subcommands:\n" 1804 1803 " hexval write this val to current location\n" 1805 1804 " 'string' write chars from string to this location\n" ··· 2065 2064 nr = mread(adrs, temp, r); 2066 2065 adrs += nr; 2067 2066 for (m = 0; m < r; ++m) { 2068 - if ((m & (sizeof(long) - 1)) == 0 && m > 0) 2067 + if ((m & (sizeof(long) - 1)) == 0 && m > 0) 2069 2068 putchar(' '); 2070 2069 if (m < nr) 2071 2070 printf("%.2x", temp[m]); ··· 2073 2072 printf("%s", fault_chars[fault_type]); 2074 2073 } 2075 2074 for (; m < 16; ++m) { 2076 - if ((m & (sizeof(long) - 1)) == 0) 2075 + if ((m & (sizeof(long) - 1)) == 0) 2077 2076 putchar(' '); 2078 2077 printf(" "); 2079 2078 } ··· 2149 2148 void 2150 2149 dump_log_buf(void) 2151 2150 { 2152 - const unsigned long size = 128; 2153 - unsigned long end, addr; 2154 - unsigned char buf[size + 1]; 2151 + struct kmsg_dumper dumper = { .active = 1 }; 2152 + unsigned char buf[128]; 2153 + size_t len; 2155 2154 2156 - addr = 0; 2157 - buf[size] = '\0'; 2155 + if (setjmp(bus_error_jmp) != 0) { 2156 + printf("Error dumping printk buffer!\n"); 2157 + return; 2158 + } 2158 2159 2159 - if (setjmp(bus_error_jmp) != 0) { 2160 - printf("Unable to lookup symbol __log_buf!\n"); 2161 - return; 2162 - } 2160 + catch_memory_errors = 1; 2161 + sync(); 2163 2162 2164 - catch_memory_errors = 1; 2165 - sync(); 2166 - addr = kallsyms_lookup_name("__log_buf"); 2163 + kmsg_dump_rewind_nolock(&dumper); 2164 + while (kmsg_dump_get_line_nolock(&dumper, false, buf, sizeof(buf), &len)) { 2165 + buf[len] = '\0'; 2166 + printf("%s", buf); 2167 + } 2167 2168 2168 - if (! 
addr) 2169 - printf("Symbol __log_buf not found!\n"); 2170 - else { 2171 - end = addr + (1 << CONFIG_LOG_BUF_SHIFT); 2172 - while (addr < end) { 2173 - if (! mread(addr, buf, size)) { 2174 - printf("Can't read memory at address 0x%lx\n", addr); 2175 - break; 2176 - } 2177 - 2178 - printf("%s", buf); 2179 - 2180 - if (strlen(buf) < size) 2181 - break; 2182 - 2183 - addr += size; 2184 - } 2185 - } 2186 - 2187 - sync(); 2188 - /* wait a little while to see if we get a machine check */ 2189 - __delay(200); 2190 - catch_memory_errors = 0; 2169 + sync(); 2170 + /* wait a little while to see if we get a machine check */ 2171 + __delay(200); 2172 + catch_memory_errors = 0; 2191 2173 } 2192 2174 2193 2175 /*
+2 -1
arch/s390/include/asm/elf.h
··· 180 180 #define ELF_PLATFORM (elf_platform) 181 181 182 182 #ifndef CONFIG_64BIT 183 - #define SET_PERSONALITY(ex) set_personality(PER_LINUX) 183 + #define SET_PERSONALITY(ex) \ 184 + set_personality(PER_LINUX | (current->personality & (~PER_MASK))) 184 185 #else /* CONFIG_64BIT */ 185 186 #define SET_PERSONALITY(ex) \ 186 187 do { \
+1 -2
arch/s390/include/asm/posix_types.h
··· 13 13 */ 14 14 15 15 typedef unsigned long __kernel_size_t; 16 + typedef long __kernel_ssize_t; 16 17 #define __kernel_size_t __kernel_size_t 17 18 18 19 typedef unsigned short __kernel_old_dev_t; ··· 26 25 typedef unsigned short __kernel_ipc_pid_t; 27 26 typedef unsigned short __kernel_uid_t; 28 27 typedef unsigned short __kernel_gid_t; 29 - typedef int __kernel_ssize_t; 30 28 typedef int __kernel_ptrdiff_t; 31 29 32 30 #else /* __s390x__ */ ··· 35 35 typedef int __kernel_ipc_pid_t; 36 36 typedef unsigned int __kernel_uid_t; 37 37 typedef unsigned int __kernel_gid_t; 38 - typedef long __kernel_ssize_t; 39 38 typedef long __kernel_ptrdiff_t; 40 39 typedef unsigned long __kernel_sigset_t; /* at least 32 bits */ 41 40
+1
arch/s390/include/asm/smp.h
··· 44 44 } 45 45 46 46 static inline int smp_find_processor_id(int address) { return 0; } 47 + static inline int smp_store_status(int cpu) { return 0; } 47 48 static inline int smp_vcpu_scheduled(int cpu) { return 1; } 48 49 static inline void smp_yield_cpu(int cpu) { } 49 50 static inline void smp_yield(void) { }
+1 -2
arch/x86/include/asm/spinlock.h
··· 12 12 * Simple spin lock operations. There are two variants, one clears IRQ's 13 13 * on the local processor, one does not. 14 14 * 15 - * These are fair FIFO ticket locks, which are currently limited to 256 16 - * CPUs. 15 + * These are fair FIFO ticket locks, which support up to 2^16 CPUs. 17 16 * 18 17 * (the type definitions are in asm/spinlock_types.h) 19 18 */
+1 -1
arch/x86/kernel/alternative.c
··· 165 165 #endif 166 166 167 167 #ifdef P6_NOP1 168 - static const unsigned char __initconst_or_module p6nops[] = 168 + static const unsigned char p6nops[] = 169 169 { 170 170 P6_NOP1, 171 171 P6_NOP2,
+1 -1
arch/x86/kernel/irq.c
··· 270 270 271 271 if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) { 272 272 break_affinity = 1; 273 - affinity = cpu_all_mask; 273 + affinity = cpu_online_mask; 274 274 } 275 275 276 276 chip = irq_data_get_irq_chip(data);
+4 -3
arch/x86/kernel/microcode_amd.c
··· 143 143 unsigned int *current_size) 144 144 { 145 145 struct microcode_header_amd *mc_hdr; 146 - unsigned int actual_size; 146 + unsigned int actual_size, patch_size; 147 147 u16 equiv_cpu_id; 148 148 149 149 /* size of the current patch we're staring at */ 150 - *current_size = *(u32 *)(ucode_ptr + 4) + SECTION_HDR_SIZE; 150 + patch_size = *(u32 *)(ucode_ptr + 4); 151 + *current_size = patch_size + SECTION_HDR_SIZE; 151 152 152 153 equiv_cpu_id = find_equiv_id(); 153 154 if (!equiv_cpu_id) ··· 175 174 /* 176 175 * now that the header looks sane, verify its size 177 176 */ 178 - actual_size = verify_ucode_size(cpu, *current_size, leftover_size); 177 + actual_size = verify_ucode_size(cpu, patch_size, leftover_size); 179 178 if (!actual_size) 180 179 return 0; 181 180
+21 -9
arch/x86/kvm/emulate.c
··· 475 475 return address_mask(ctxt, reg); 476 476 } 477 477 478 + static void masked_increment(ulong *reg, ulong mask, int inc) 479 + { 480 + assign_masked(reg, *reg + inc, mask); 481 + } 482 + 478 483 static inline void 479 484 register_address_increment(struct x86_emulate_ctxt *ctxt, unsigned long *reg, int inc) 480 485 { 486 + ulong mask; 487 + 481 488 if (ctxt->ad_bytes == sizeof(unsigned long)) 482 - *reg += inc; 489 + mask = ~0UL; 483 490 else 484 - *reg = (*reg & ~ad_mask(ctxt)) | ((*reg + inc) & ad_mask(ctxt)); 491 + mask = ad_mask(ctxt); 492 + masked_increment(reg, mask, inc); 493 + } 494 + 495 + static void rsp_increment(struct x86_emulate_ctxt *ctxt, int inc) 496 + { 497 + masked_increment(&ctxt->regs[VCPU_REGS_RSP], stack_mask(ctxt), inc); 485 498 } 486 499 487 500 static inline void jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) ··· 1535 1522 { 1536 1523 struct segmented_address addr; 1537 1524 1538 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], -bytes); 1539 - addr.ea = register_address(ctxt, ctxt->regs[VCPU_REGS_RSP]); 1525 + rsp_increment(ctxt, -bytes); 1526 + addr.ea = ctxt->regs[VCPU_REGS_RSP] & stack_mask(ctxt); 1540 1527 addr.seg = VCPU_SREG_SS; 1541 1528 1542 1529 return segmented_write(ctxt, addr, data, bytes); ··· 1555 1542 int rc; 1556 1543 struct segmented_address addr; 1557 1544 1558 - addr.ea = register_address(ctxt, ctxt->regs[VCPU_REGS_RSP]); 1545 + addr.ea = ctxt->regs[VCPU_REGS_RSP] & stack_mask(ctxt); 1559 1546 addr.seg = VCPU_SREG_SS; 1560 1547 rc = segmented_read(ctxt, addr, dest, len); 1561 1548 if (rc != X86EMUL_CONTINUE) 1562 1549 return rc; 1563 1550 1564 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], len); 1551 + rsp_increment(ctxt, len); 1565 1552 return rc; 1566 1553 } 1567 1554 ··· 1701 1688 1702 1689 while (reg >= VCPU_REGS_RAX) { 1703 1690 if (reg == VCPU_REGS_RSP) { 1704 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], 1705 - ctxt->op_bytes); 1691 + rsp_increment(ctxt, 
ctxt->op_bytes); 1706 1692 --reg; 1707 1693 } 1708 1694 ··· 2837 2825 rc = emulate_pop(ctxt, &ctxt->dst.val, ctxt->op_bytes); 2838 2826 if (rc != X86EMUL_CONTINUE) 2839 2827 return rc; 2840 - register_address_increment(ctxt, &ctxt->regs[VCPU_REGS_RSP], ctxt->src.val); 2828 + rsp_increment(ctxt, ctxt->src.val); 2841 2829 return X86EMUL_CONTINUE; 2842 2830 } 2843 2831
+9 -4
arch/x86/kvm/mmu.c
··· 4113 4113 LIST_HEAD(invalid_list); 4114 4114 4115 4115 /* 4116 + * Never scan more than sc->nr_to_scan VM instances. 4117 + * Will not hit this condition practically since we do not try 4118 + * to shrink more than one VM and it is very unlikely to see 4119 + * !n_used_mmu_pages so many times. 4120 + */ 4121 + if (!nr_to_scan--) 4122 + break; 4123 + /* 4116 4124 * n_used_mmu_pages is accessed without holding kvm->mmu_lock 4117 4125 * here. We may skip a VM instance errorneosly, but we do not 4118 4126 * want to shrink a VM that only started to populate its MMU 4119 4127 * anyway. 4120 4128 */ 4121 - if (kvm->arch.n_used_mmu_pages > 0) { 4122 - if (!nr_to_scan--) 4123 - break; 4129 + if (!kvm->arch.n_used_mmu_pages) 4124 4130 continue; 4125 - } 4126 4131 4127 4132 idx = srcu_read_lock(&kvm->srcu); 4128 4133 spin_lock(&kvm->mmu_lock);
+4 -1
arch/x86/kvm/x86.c
··· 806 806 * kvm-specific. Those are put in the beginning of the list. 807 807 */ 808 808 809 - #define KVM_SAVE_MSRS_BEGIN 9 809 + #define KVM_SAVE_MSRS_BEGIN 10 810 810 static u32 msrs_to_save[] = { 811 811 MSR_KVM_SYSTEM_TIME, MSR_KVM_WALL_CLOCK, 812 812 MSR_KVM_SYSTEM_TIME_NEW, MSR_KVM_WALL_CLOCK_NEW, ··· 1999 1999 break; 2000 2000 case MSR_KVM_STEAL_TIME: 2001 2001 data = vcpu->arch.st.msr_val; 2002 + break; 2003 + case MSR_KVM_PV_EOI_EN: 2004 + data = vcpu->arch.pv_eoi.msr_val; 2002 2005 break; 2003 2006 case MSR_IA32_P5_MC_ADDR: 2004 2007 case MSR_IA32_P5_MC_TYPE:
+11 -107
arch/x86/xen/enlighten.c
··· 31 31 #include <linux/pci.h> 32 32 #include <linux/gfp.h> 33 33 #include <linux/memblock.h> 34 - #include <linux/syscore_ops.h> 35 34 36 35 #include <xen/xen.h> 37 36 #include <xen/interface/xen.h> ··· 1469 1470 #endif 1470 1471 } 1471 1472 1472 - #ifdef CONFIG_XEN_PVHVM 1473 - /* 1474 - * The pfn containing the shared_info is located somewhere in RAM. This 1475 - * will cause trouble if the current kernel is doing a kexec boot into a 1476 - * new kernel. The new kernel (and its startup code) can not know where 1477 - * the pfn is, so it can not reserve the page. The hypervisor will 1478 - * continue to update the pfn, and as a result memory corruption occours 1479 - * in the new kernel. 1480 - * 1481 - * One way to work around this issue is to allocate a page in the 1482 - * xen-platform pci device's BAR memory range. But pci init is done very 1483 - * late and the shared_info page is already in use very early to read 1484 - * the pvclock. So moving the pfn from RAM to MMIO is racy because some 1485 - * code paths on other vcpus could access the pfn during the small 1486 - * window when the old pfn is moved to the new pfn. There is even a 1487 - * small window were the old pfn is not backed by a mfn, and during that 1488 - * time all reads return -1. 1489 - * 1490 - * Because it is not known upfront where the MMIO region is located it 1491 - * can not be used right from the start in xen_hvm_init_shared_info. 1492 - * 1493 - * To minimise trouble the move of the pfn is done shortly before kexec. 1494 - * This does not eliminate the race because all vcpus are still online 1495 - * when the syscore_ops will be called. But hopefully there is no work 1496 - * pending at this point in time. Also the syscore_op is run last which 1497 - * reduces the risk further. 
1498 - */ 1499 - 1500 - static struct shared_info *xen_hvm_shared_info; 1501 - 1502 - static void xen_hvm_connect_shared_info(unsigned long pfn) 1473 + void __ref xen_hvm_init_shared_info(void) 1503 1474 { 1475 + int cpu; 1504 1476 struct xen_add_to_physmap xatp; 1477 + static struct shared_info *shared_info_page = 0; 1505 1478 1479 + if (!shared_info_page) 1480 + shared_info_page = (struct shared_info *) 1481 + extend_brk(PAGE_SIZE, PAGE_SIZE); 1506 1482 xatp.domid = DOMID_SELF; 1507 1483 xatp.idx = 0; 1508 1484 xatp.space = XENMAPSPACE_shared_info; 1509 - xatp.gpfn = pfn; 1485 + xatp.gpfn = __pa(shared_info_page) >> PAGE_SHIFT; 1510 1486 if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp)) 1511 1487 BUG(); 1512 1488 1513 - } 1514 - static void xen_hvm_set_shared_info(struct shared_info *sip) 1515 - { 1516 - int cpu; 1517 - 1518 - HYPERVISOR_shared_info = sip; 1489 + HYPERVISOR_shared_info = (struct shared_info *)shared_info_page; 1519 1490 1520 1491 /* xen_vcpu is a pointer to the vcpu_info struct in the shared_info 1521 1492 * page, we use it in the event channel upcall and in some pvclock 1522 1493 * related functions. We don't need the vcpu_info placement 1523 1494 * optimizations because we don't use any pv_mmu or pv_irq op on 1524 1495 * HVM. 1525 - * When xen_hvm_set_shared_info is run at boot time only vcpu 0 is 1526 - * online but xen_hvm_set_shared_info is run at resume time too and 1496 + * When xen_hvm_init_shared_info is run at boot time only vcpu 0 is 1497 + * online but xen_hvm_init_shared_info is run at resume time too and 1527 1498 * in that case multiple vcpus might be online. 
*/ 1528 1499 for_each_online_cpu(cpu) { 1529 1500 per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu]; 1530 1501 } 1531 1502 } 1532 1503 1533 - /* Reconnect the shared_info pfn to a mfn */ 1534 - void xen_hvm_resume_shared_info(void) 1535 - { 1536 - xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT); 1537 - } 1538 - 1539 - #ifdef CONFIG_KEXEC 1540 - static struct shared_info *xen_hvm_shared_info_kexec; 1541 - static unsigned long xen_hvm_shared_info_pfn_kexec; 1542 - 1543 - /* Remember a pfn in MMIO space for kexec reboot */ 1544 - void __devinit xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn) 1545 - { 1546 - xen_hvm_shared_info_kexec = sip; 1547 - xen_hvm_shared_info_pfn_kexec = pfn; 1548 - } 1549 - 1550 - static void xen_hvm_syscore_shutdown(void) 1551 - { 1552 - struct xen_memory_reservation reservation = { 1553 - .domid = DOMID_SELF, 1554 - .nr_extents = 1, 1555 - }; 1556 - unsigned long prev_pfn; 1557 - int rc; 1558 - 1559 - if (!xen_hvm_shared_info_kexec) 1560 - return; 1561 - 1562 - prev_pfn = __pa(xen_hvm_shared_info) >> PAGE_SHIFT; 1563 - set_xen_guest_handle(reservation.extent_start, &prev_pfn); 1564 - 1565 - /* Move pfn to MMIO, disconnects previous pfn from mfn */ 1566 - xen_hvm_connect_shared_info(xen_hvm_shared_info_pfn_kexec); 1567 - 1568 - /* Update pointers, following hypercall is also a memory barrier */ 1569 - xen_hvm_set_shared_info(xen_hvm_shared_info_kexec); 1570 - 1571 - /* Allocate new mfn for previous pfn */ 1572 - do { 1573 - rc = HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation); 1574 - if (rc == 0) 1575 - msleep(123); 1576 - } while (rc == 0); 1577 - 1578 - /* Make sure the previous pfn is really connected to a (new) mfn */ 1579 - BUG_ON(rc != 1); 1580 - } 1581 - 1582 - static struct syscore_ops xen_hvm_syscore_ops = { 1583 - .shutdown = xen_hvm_syscore_shutdown, 1584 - }; 1585 - #endif 1586 - 1587 - /* Use a pfn in RAM, may move to MMIO before kexec. 
*/ 1588 - static void __init xen_hvm_init_shared_info(void) 1589 - { 1590 - /* Remember pointer for resume */ 1591 - xen_hvm_shared_info = extend_brk(PAGE_SIZE, PAGE_SIZE); 1592 - xen_hvm_connect_shared_info(__pa(xen_hvm_shared_info) >> PAGE_SHIFT); 1593 - xen_hvm_set_shared_info(xen_hvm_shared_info); 1594 - } 1595 - 1504 + #ifdef CONFIG_XEN_PVHVM 1596 1505 static void __init init_hvm_pv_info(void) 1597 1506 { 1598 1507 int major, minor; ··· 1551 1644 init_hvm_pv_info(); 1552 1645 1553 1646 xen_hvm_init_shared_info(); 1554 - #ifdef CONFIG_KEXEC 1555 - register_syscore_ops(&xen_hvm_syscore_ops); 1556 - #endif 1557 1647 1558 1648 if (xen_feature(XENFEAT_hvm_callback_vector)) 1559 1649 xen_have_vector_callback = 1;
+92 -3
arch/x86/xen/p2m.c
··· 196 196 197 197 /* When we populate back during bootup, the amount of pages can vary. The 198 198 * max we have seen is 395979, but that does not mean it can't be more. 199 - * But some machines can have 3GB I/O holes even. So lets reserve enough 200 - * for 4GB of I/O and E820 holes. */ 201 - RESERVE_BRK(p2m_populated, PMD_SIZE * 4); 199 + * Some machines can have 3GB I/O holes even. With early_can_reuse_p2m_middle 200 + * it can re-use the Xen-provided mfn_list array, so we only need to allocate at 201 + * most three P2M top nodes. */ 202 + RESERVE_BRK(p2m_populated, PAGE_SIZE * 3); 203 + 202 204 static inline unsigned p2m_top_index(unsigned long pfn) 203 205 { 204 206 BUG_ON(pfn >= MAX_P2M_PFN); ··· 577 575 } 578 576 return true; 579 577 } 578 + 579 + /* 580 + * Skim over the P2M tree looking at pages that are either filled with 581 + * INVALID_P2M_ENTRY or with 1:1 PFNs. If found, re-use that page and 582 + * replace the P2M leaf with a p2m_missing or p2m_identity. 583 + * Stick the old page in the new P2M tree location. 
584 + */ 585 + bool __init early_can_reuse_p2m_middle(unsigned long set_pfn, unsigned long set_mfn) 586 + { 587 + unsigned topidx; 588 + unsigned mididx; 589 + unsigned ident_pfns; 590 + unsigned inv_pfns; 591 + unsigned long *p2m; 592 + unsigned long *mid_mfn_p; 593 + unsigned idx; 594 + unsigned long pfn; 595 + 596 + /* We only look when this entails a P2M middle layer */ 597 + if (p2m_index(set_pfn)) 598 + return false; 599 + 600 + for (pfn = 0; pfn <= MAX_DOMAIN_PAGES; pfn += P2M_PER_PAGE) { 601 + topidx = p2m_top_index(pfn); 602 + 603 + if (!p2m_top[topidx]) 604 + continue; 605 + 606 + if (p2m_top[topidx] == p2m_mid_missing) 607 + continue; 608 + 609 + mididx = p2m_mid_index(pfn); 610 + p2m = p2m_top[topidx][mididx]; 611 + if (!p2m) 612 + continue; 613 + 614 + if ((p2m == p2m_missing) || (p2m == p2m_identity)) 615 + continue; 616 + 617 + if ((unsigned long)p2m == INVALID_P2M_ENTRY) 618 + continue; 619 + 620 + ident_pfns = 0; 621 + inv_pfns = 0; 622 + for (idx = 0; idx < P2M_PER_PAGE; idx++) { 623 + /* IDENTITY_PFNs are 1:1 */ 624 + if (p2m[idx] == IDENTITY_FRAME(pfn + idx)) 625 + ident_pfns++; 626 + else if (p2m[idx] == INVALID_P2M_ENTRY) 627 + inv_pfns++; 628 + else 629 + break; 630 + } 631 + if ((ident_pfns == P2M_PER_PAGE) || (inv_pfns == P2M_PER_PAGE)) 632 + goto found; 633 + } 634 + return false; 635 + found: 636 + /* Found one, replace old with p2m_identity or p2m_missing */ 637 + p2m_top[topidx][mididx] = (ident_pfns ? p2m_identity : p2m_missing); 638 + /* And the other for save/restore.. */ 639 + mid_mfn_p = p2m_top_mfn_p[topidx]; 640 + /* NOTE: Even if it is a p2m_identity it should still point to 641 + * a page filled with INVALID_P2M_ENTRY entries. */ 642 + mid_mfn_p[mididx] = virt_to_mfn(p2m_missing); 643 + 644 + /* Reset where we want to stick the old page in. 
*/ 645 + topidx = p2m_top_index(set_pfn); 646 + mididx = p2m_mid_index(set_pfn); 647 + 648 + /* This shouldn't happen */ 649 + if (WARN_ON(p2m_top[topidx] == p2m_mid_missing)) 650 + early_alloc_p2m(set_pfn); 651 + 652 + if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing)) 653 + return false; 654 + 655 + p2m_init(p2m); 656 + p2m_top[topidx][mididx] = p2m; 657 + mid_mfn_p = p2m_top_mfn_p[topidx]; 658 + mid_mfn_p[mididx] = virt_to_mfn(p2m); 659 + 660 + return true; 661 + } 580 662 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn) 581 663 { 582 664 if (unlikely(!__set_phys_to_machine(pfn, mfn))) { 583 665 if (!early_alloc_p2m(pfn)) 584 666 return false; 667 + 668 + if (early_can_reuse_p2m_middle(pfn, mfn)) 669 + return __set_phys_to_machine(pfn, mfn); 585 670 586 671 if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/)) 587 672 return false;
+8 -1
arch/x86/xen/setup.c
··· 78 78 memblock_reserve(start, size); 79 79 80 80 xen_max_p2m_pfn = PFN_DOWN(start + size); 81 + for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) { 82 + unsigned long mfn = pfn_to_mfn(pfn); 81 83 82 - for (pfn = PFN_DOWN(start); pfn <= xen_max_p2m_pfn; pfn++) 84 + if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn)) 85 + continue; 86 + WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n", 87 + pfn, mfn); 88 + 83 89 __set_phys_to_machine(pfn, INVALID_P2M_ENTRY); 90 + } 84 91 } 85 92 86 93 static unsigned long __init xen_do_chunk(unsigned long start,
+1 -1
arch/x86/xen/suspend.c
··· 30 30 { 31 31 #ifdef CONFIG_XEN_PVHVM 32 32 int cpu; 33 - xen_hvm_resume_shared_info(); 33 + xen_hvm_init_shared_info(); 34 34 xen_callback_vector(); 35 35 xen_unplug_emulated_devices(); 36 36 if (xen_feature(XENFEAT_hvm_safe_pvclock)) {
+1 -1
arch/x86/xen/xen-ops.h
··· 41 41 void xen_vcpu_restore(void); 42 42 43 43 void xen_callback_vector(void); 44 - void xen_hvm_resume_shared_info(void); 44 + void xen_hvm_init_shared_info(void); 45 45 void xen_unplug_emulated_devices(void); 46 46 47 47 void __init xen_build_dynamic_phys_to_machine(void);
+28 -13
block/blk-lib.c
··· 44 44 struct request_queue *q = bdev_get_queue(bdev); 45 45 int type = REQ_WRITE | REQ_DISCARD; 46 46 unsigned int max_discard_sectors; 47 + unsigned int granularity, alignment, mask; 47 48 struct bio_batch bb; 48 49 struct bio *bio; 49 50 int ret = 0; ··· 55 54 if (!blk_queue_discard(q)) 56 55 return -EOPNOTSUPP; 57 56 57 + /* Zero-sector (unknown) and one-sector granularities are the same. */ 58 + granularity = max(q->limits.discard_granularity >> 9, 1U); 59 + mask = granularity - 1; 60 + alignment = (bdev_discard_alignment(bdev) >> 9) & mask; 61 + 58 62 /* 59 63 * Ensure that max_discard_sectors is of the proper 60 - * granularity 64 + * granularity, so that requests stay aligned after a split. 61 65 */ 62 66 max_discard_sectors = min(q->limits.max_discard_sectors, UINT_MAX >> 9); 67 + max_discard_sectors = round_down(max_discard_sectors, granularity); 63 68 if (unlikely(!max_discard_sectors)) { 64 69 /* Avoid infinite loop below. Being cautious never hurts. */ 65 70 return -EOPNOTSUPP; 66 - } else if (q->limits.discard_granularity) { 67 - unsigned int disc_sects = q->limits.discard_granularity >> 9; 68 - 69 - max_discard_sectors &= ~(disc_sects - 1); 70 71 } 71 72 72 73 if (flags & BLKDEV_DISCARD_SECURE) { ··· 82 79 bb.wait = &wait; 83 80 84 81 while (nr_sects) { 82 + unsigned int req_sects; 83 + sector_t end_sect; 84 + 85 85 bio = bio_alloc(gfp_mask, 1); 86 86 if (!bio) { 87 87 ret = -ENOMEM; 88 88 break; 89 + } 90 + 91 + req_sects = min_t(sector_t, nr_sects, max_discard_sectors); 92 + 93 + /* 94 + * If splitting a request, and the next starting sector would be 95 + * misaligned, stop the discard at the previous aligned sector. 
96 + */ 97 + end_sect = sector + req_sects; 98 + if (req_sects < nr_sects && (end_sect & mask) != alignment) { 99 + end_sect = 100 + round_down(end_sect - alignment, granularity) 101 + + alignment; 102 + req_sects = end_sect - sector; 89 103 } 90 104 91 105 bio->bi_sector = sector; ··· 110 90 bio->bi_bdev = bdev; 111 91 bio->bi_private = &bb; 112 92 113 - if (nr_sects > max_discard_sectors) { 114 - bio->bi_size = max_discard_sectors << 9; 115 - nr_sects -= max_discard_sectors; 116 - sector += max_discard_sectors; 117 - } else { 118 - bio->bi_size = nr_sects << 9; 119 - nr_sects = 0; 120 - } 93 + bio->bi_size = req_sects << 9; 94 + nr_sects -= req_sects; 95 + sector = end_sect; 121 96 122 97 atomic_inc(&bb.done); 123 98 submit_bio(type, bio);
+82 -35
block/blk-merge.c
··· 110 110 return 0; 111 111 } 112 112 113 + static void 114 + __blk_segment_map_sg(struct request_queue *q, struct bio_vec *bvec, 115 + struct scatterlist *sglist, struct bio_vec **bvprv, 116 + struct scatterlist **sg, int *nsegs, int *cluster) 117 + { 118 + 119 + int nbytes = bvec->bv_len; 120 + 121 + if (*bvprv && *cluster) { 122 + if ((*sg)->length + nbytes > queue_max_segment_size(q)) 123 + goto new_segment; 124 + 125 + if (!BIOVEC_PHYS_MERGEABLE(*bvprv, bvec)) 126 + goto new_segment; 127 + if (!BIOVEC_SEG_BOUNDARY(q, *bvprv, bvec)) 128 + goto new_segment; 129 + 130 + (*sg)->length += nbytes; 131 + } else { 132 + new_segment: 133 + if (!*sg) 134 + *sg = sglist; 135 + else { 136 + /* 137 + * If the driver previously mapped a shorter 138 + * list, we could see a termination bit 139 + * prematurely unless it fully inits the sg 140 + * table on each mapping. We KNOW that there 141 + * must be more entries here or the driver 142 + * would be buggy, so force clear the 143 + * termination bit to avoid doing a full 144 + * sg_init_table() in drivers for each command. 145 + */ 146 + (*sg)->page_link &= ~0x02; 147 + *sg = sg_next(*sg); 148 + } 149 + 150 + sg_set_page(*sg, bvec->bv_page, nbytes, bvec->bv_offset); 151 + (*nsegs)++; 152 + } 153 + *bvprv = bvec; 154 + } 155 + 113 156 /* 114 157 * map a request to scatterlist, return number of sg entries setup. 
Caller 115 158 * must make sure sg can hold rq->nr_phys_segments entries ··· 174 131 bvprv = NULL; 175 132 sg = NULL; 176 133 rq_for_each_segment(bvec, rq, iter) { 177 - int nbytes = bvec->bv_len; 178 - 179 - if (bvprv && cluster) { 180 - if (sg->length + nbytes > queue_max_segment_size(q)) 181 - goto new_segment; 182 - 183 - if (!BIOVEC_PHYS_MERGEABLE(bvprv, bvec)) 184 - goto new_segment; 185 - if (!BIOVEC_SEG_BOUNDARY(q, bvprv, bvec)) 186 - goto new_segment; 187 - 188 - sg->length += nbytes; 189 - } else { 190 - new_segment: 191 - if (!sg) 192 - sg = sglist; 193 - else { 194 - /* 195 - * If the driver previously mapped a shorter 196 - * list, we could see a termination bit 197 - * prematurely unless it fully inits the sg 198 - * table on each mapping. We KNOW that there 199 - * must be more entries here or the driver 200 - * would be buggy, so force clear the 201 - * termination bit to avoid doing a full 202 - * sg_init_table() in drivers for each command. 203 - */ 204 - sg->page_link &= ~0x02; 205 - sg = sg_next(sg); 206 - } 207 - 208 - sg_set_page(sg, bvec->bv_page, nbytes, bvec->bv_offset); 209 - nsegs++; 210 - } 211 - bvprv = bvec; 134 + __blk_segment_map_sg(q, bvec, sglist, &bvprv, &sg, 135 + &nsegs, &cluster); 212 136 } /* segments in rq */ 213 137 214 138 ··· 208 198 return nsegs; 209 199 } 210 200 EXPORT_SYMBOL(blk_rq_map_sg); 201 + 202 + /** 203 + * blk_bio_map_sg - map a bio to a scatterlist 204 + * @q: request_queue in question 205 + * @bio: bio being mapped 206 + * @sglist: scatterlist being mapped 207 + * 208 + * Note: 209 + * Caller must make sure sg can hold bio->bi_phys_segments entries 210 + * 211 + * Will return the number of sg entries setup 212 + */ 213 + int blk_bio_map_sg(struct request_queue *q, struct bio *bio, 214 + struct scatterlist *sglist) 215 + { 216 + struct bio_vec *bvec, *bvprv; 217 + struct scatterlist *sg; 218 + int nsegs, cluster; 219 + unsigned long i; 220 + 221 + nsegs = 0; 222 + cluster = blk_queue_cluster(q); 223 + 224 + 
bvprv = NULL; 225 + sg = NULL; 226 + bio_for_each_segment(bvec, bio, i) { 227 + __blk_segment_map_sg(q, bvec, sglist, &bvprv, &sg, 228 + &nsegs, &cluster); 229 + } /* segments in bio */ 230 + 231 + if (sg) 232 + sg_mark_end(sg); 233 + 234 + BUG_ON(bio->bi_phys_segments && nsegs > bio->bi_phys_segments); 235 + return nsegs; 236 + } 237 + EXPORT_SYMBOL(blk_bio_map_sg); 211 238 212 239 static inline int ll_new_hw_segment(struct request_queue *q, 213 240 struct request *req,
+1 -1
block/genhd.c
··· 835 835 836 836 static void *show_partition_start(struct seq_file *seqf, loff_t *pos) 837 837 { 838 - static void *p; 838 + void *p; 839 839 840 840 p = disk_seqf_start(seqf, pos); 841 841 if (!IS_ERR_OR_NULL(p) && !*pos)
+1 -1
drivers/ata/Kconfig
··· 115 115 If unsure, say N. 116 116 117 117 config ATA_SFF 118 - bool "ATA SFF support" 118 + bool "ATA SFF support (for legacy IDE and PATA)" 119 119 default y 120 120 help 121 121 This option adds support for ATA controllers with SFF
+8
drivers/ata/ahci.c
··· 256 256 { PCI_VDEVICE(INTEL, 0x8c07), board_ahci }, /* Lynx Point RAID */ 257 257 { PCI_VDEVICE(INTEL, 0x8c0e), board_ahci }, /* Lynx Point RAID */ 258 258 { PCI_VDEVICE(INTEL, 0x8c0f), board_ahci }, /* Lynx Point RAID */ 259 + { PCI_VDEVICE(INTEL, 0x9c02), board_ahci }, /* Lynx Point-LP AHCI */ 260 + { PCI_VDEVICE(INTEL, 0x9c03), board_ahci }, /* Lynx Point-LP AHCI */ 261 + { PCI_VDEVICE(INTEL, 0x9c04), board_ahci }, /* Lynx Point-LP RAID */ 262 + { PCI_VDEVICE(INTEL, 0x9c05), board_ahci }, /* Lynx Point-LP RAID */ 263 + { PCI_VDEVICE(INTEL, 0x9c06), board_ahci }, /* Lynx Point-LP RAID */ 264 + { PCI_VDEVICE(INTEL, 0x9c07), board_ahci }, /* Lynx Point-LP RAID */ 265 + { PCI_VDEVICE(INTEL, 0x9c0e), board_ahci }, /* Lynx Point-LP RAID */ 266 + { PCI_VDEVICE(INTEL, 0x9c0f), board_ahci }, /* Lynx Point-LP RAID */ 259 267 260 268 /* JMicron 360/1/3/5/6, match class to avoid IDE function */ 261 269 { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+1
drivers/ata/ahci.h
··· 320 320 extern struct ata_port_operations ahci_ops; 321 321 extern struct ata_port_operations ahci_pmp_retry_srst_ops; 322 322 323 + unsigned int ahci_dev_classify(struct ata_port *ap); 323 324 void ahci_fill_cmd_slot(struct ahci_port_priv *pp, unsigned int tag, 324 325 u32 opts); 325 326 void ahci_save_initial_config(struct device *dev,
+8
drivers/ata/ata_piix.c
··· 329 329 { 0x8086, 0x8c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 330 330 /* SATA Controller IDE (Lynx Point) */ 331 331 { 0x8086, 0x8c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 332 + /* SATA Controller IDE (Lynx Point-LP) */ 333 + { 0x8086, 0x9c00, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb }, 334 + /* SATA Controller IDE (Lynx Point-LP) */ 335 + { 0x8086, 0x9c01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_sata_snb }, 336 + /* SATA Controller IDE (Lynx Point-LP) */ 337 + { 0x8086, 0x9c08, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 338 + /* SATA Controller IDE (Lynx Point-LP) */ 339 + { 0x8086, 0x9c09, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 332 340 /* SATA Controller IDE (DH89xxCC) */ 333 341 { 0x8086, 0x2326, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ich8_2port_sata }, 334 342 { } /* terminate list */
+2 -1
drivers/ata/libahci.c
··· 1139 1139 } 1140 1140 } 1141 1141 1142 - static unsigned int ahci_dev_classify(struct ata_port *ap) 1142 + unsigned int ahci_dev_classify(struct ata_port *ap) 1143 1143 { 1144 1144 void __iomem *port_mmio = ahci_port_base(ap); 1145 1145 struct ata_taskfile tf; ··· 1153 1153 1154 1154 return ata_dev_classify(&tf); 1155 1155 } 1156 + EXPORT_SYMBOL_GPL(ahci_dev_classify); 1156 1157 1157 1158 void ahci_fill_cmd_slot(struct ahci_port_priv *pp, unsigned int tag, 1158 1159 u32 opts)
+4 -11
drivers/ata/libata-acpi.c
··· 60 60 if (ap->flags & ATA_FLAG_ACPI_SATA) 61 61 return NULL; 62 62 63 - /* 64 - * If acpi bind operation has already happened, we can get the handle 65 - * for the port by checking the corresponding scsi_host device's 66 - * firmware node, otherwise we will need to find out the handle from 67 - * its parent's acpi node. 68 - */ 69 - if (ap->scsi_host) 70 - return DEVICE_ACPI_HANDLE(&ap->scsi_host->shost_gendev); 71 - else 72 - return acpi_get_child(DEVICE_ACPI_HANDLE(ap->host->dev), 73 - ap->port_no); 63 + return acpi_get_child(DEVICE_ACPI_HANDLE(ap->host->dev), ap->port_no); 74 64 } 75 65 EXPORT_SYMBOL(ata_ap_acpi_handle); 76 66 ··· 1090 1100 1091 1101 if (!*handle) 1092 1102 return -ENODEV; 1103 + 1104 + if (ata_acpi_gtm(ap, &ap->__acpi_init_gtm) == 0) 1105 + ap->pflags |= ATA_PFLAG_INIT_GTM_VALID; 1093 1106 1094 1107 return 0; 1095 1108 }
+2 -1
drivers/ata/libata-core.c
··· 4062 4062 { "_NEC DV5800A", NULL, ATA_HORKAGE_NODMA }, 4063 4063 { "SAMSUNG CD-ROM SN-124", "N001", ATA_HORKAGE_NODMA }, 4064 4064 { "Seagate STT20000A", NULL, ATA_HORKAGE_NODMA }, 4065 - { "2GB ATA Flash Disk", "ADMA428M", ATA_HORKAGE_NODMA }, 4065 + { " 2GB ATA Flash Disk", "ADMA428M", ATA_HORKAGE_NODMA }, 4066 4066 /* Odd clown on sil3726/4726 PMPs */ 4067 4067 { "Config Disk", NULL, ATA_HORKAGE_DISABLE }, 4068 4068 ··· 4128 4128 4129 4129 /* Devices that do not need bridging limits applied */ 4130 4130 { "MTRON MSP-SATA*", NULL, ATA_HORKAGE_BRIDGE_OK, }, 4131 + { "BUFFALO HD-QSU2/R5", NULL, ATA_HORKAGE_BRIDGE_OK, }, 4131 4132 4132 4133 /* Devices which aren't very happy with higher link speeds */ 4133 4134 { "WD My Book", NULL, ATA_HORKAGE_1_5_GBPS, },
+16
drivers/ata/pata_atiixp.c
··· 20 20 #include <linux/delay.h> 21 21 #include <scsi/scsi_host.h> 22 22 #include <linux/libata.h> 23 + #include <linux/dmi.h> 23 24 24 25 #define DRV_NAME "pata_atiixp" 25 26 #define DRV_VERSION "0.4.6" ··· 34 33 ATIIXP_IDE_UDMA_MODE = 0x56 35 34 }; 36 35 36 + static const struct dmi_system_id attixp_cable_override_dmi_table[] = { 37 + { 38 + /* Board has onboard PATA<->SATA converters */ 39 + .ident = "MSI E350DM-E33", 40 + .matches = { 41 + DMI_MATCH(DMI_BOARD_VENDOR, "MSI"), 42 + DMI_MATCH(DMI_BOARD_NAME, "E350DM-E33(MS-7720)"), 43 + }, 44 + }, 45 + { } 46 + }; 47 + 37 48 static int atiixp_cable_detect(struct ata_port *ap) 38 49 { 39 50 struct pci_dev *pdev = to_pci_dev(ap->host->dev); 40 51 u8 udma; 52 + 53 + if (dmi_check_system(attixp_cable_override_dmi_table)) 54 + return ATA_CBL_PATA40_SHORT; 41 55 42 56 /* Hack from drivers/ide/pci. Really we want to know how to do the 43 57 raw detection not play follow the bios mode guess */
+14 -1
drivers/block/drbd/drbd_bitmap.c
··· 889 889 unsigned int done; 890 890 unsigned flags; 891 891 #define BM_AIO_COPY_PAGES 1 892 + #define BM_WRITE_ALL_PAGES 2 892 893 int error; 893 894 struct kref kref; 894 895 }; ··· 1060 1059 if (lazy_writeout_upper_idx && i == lazy_writeout_upper_idx) 1061 1060 break; 1062 1061 if (rw & WRITE) { 1063 - if (bm_test_page_unchanged(b->bm_pages[i])) { 1062 + if (!(flags & BM_WRITE_ALL_PAGES) && 1063 + bm_test_page_unchanged(b->bm_pages[i])) { 1064 1064 dynamic_dev_dbg(DEV, "skipped bm write for idx %u\n", i); 1065 1065 continue; 1066 1066 } ··· 1140 1138 int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local) 1141 1139 { 1142 1140 return bm_rw(mdev, WRITE, 0, 0); 1141 + } 1142 + 1143 + /** 1144 + * drbd_bm_write_all() - Write the whole bitmap to its on disk location. 1145 + * @mdev: DRBD device. 1146 + * 1147 + * Will write all pages. 1148 + */ 1149 + int drbd_bm_write_all(struct drbd_conf *mdev) __must_hold(local) 1150 + { 1151 + return bm_rw(mdev, WRITE, BM_WRITE_ALL_PAGES, 0); 1143 1152 } 1144 1153 1145 1154 /**
+1
drivers/block/drbd/drbd_int.h
··· 1469 1469 extern int drbd_bm_write_page(struct drbd_conf *mdev, unsigned int idx) __must_hold(local); 1470 1470 extern int drbd_bm_read(struct drbd_conf *mdev) __must_hold(local); 1471 1471 extern int drbd_bm_write(struct drbd_conf *mdev) __must_hold(local); 1472 + extern int drbd_bm_write_all(struct drbd_conf *mdev) __must_hold(local); 1472 1473 extern int drbd_bm_write_copy_pages(struct drbd_conf *mdev) __must_hold(local); 1473 1474 extern unsigned long drbd_bm_ALe_set_all(struct drbd_conf *mdev, 1474 1475 unsigned long al_enr);
+12 -16
drivers/block/drbd/drbd_main.c
··· 79 79 static void md_sync_timer_fn(unsigned long data); 80 80 static int w_bitmap_io(struct drbd_conf *mdev, struct drbd_work *w, int unused); 81 81 static int w_go_diskless(struct drbd_conf *mdev, struct drbd_work *w, int unused); 82 + static void _tl_clear(struct drbd_conf *mdev); 82 83 83 84 MODULE_AUTHOR("Philipp Reisner <phil@linbit.com>, " 84 85 "Lars Ellenberg <lars@linbit.com>"); ··· 433 432 434 433 /* Actions operating on the disk state, also want to work on 435 434 requests that got barrier acked. */ 436 - switch (what) { 437 - case fail_frozen_disk_io: 438 - case restart_frozen_disk_io: 439 - list_for_each_safe(le, tle, &mdev->barrier_acked_requests) { 440 - req = list_entry(le, struct drbd_request, tl_requests); 441 - _req_mod(req, what); 442 - } 443 435 444 - case connection_lost_while_pending: 445 - case resend: 446 - break; 447 - default: 448 - dev_err(DEV, "what = %d in _tl_restart()\n", what); 436 + list_for_each_safe(le, tle, &mdev->barrier_acked_requests) { 437 + req = list_entry(le, struct drbd_request, tl_requests); 438 + _req_mod(req, what); 449 439 } 450 440 } 451 441 ··· 451 459 */ 452 460 void tl_clear(struct drbd_conf *mdev) 453 461 { 462 + spin_lock_irq(&mdev->req_lock); 463 + _tl_clear(mdev); 464 + spin_unlock_irq(&mdev->req_lock); 465 + } 466 + 467 + static void _tl_clear(struct drbd_conf *mdev) 468 + { 454 469 struct list_head *le, *tle; 455 470 struct drbd_request *r; 456 - 457 - spin_lock_irq(&mdev->req_lock); 458 471 459 472 _tl_restart(mdev, connection_lost_while_pending); 460 473 ··· 479 482 480 483 memset(mdev->app_reads_hash, 0, APP_R_HSIZE*sizeof(void *)); 481 484 482 - spin_unlock_irq(&mdev->req_lock); 483 485 } 484 486 485 487 void tl_restart(struct drbd_conf *mdev, enum drbd_req_event what) ··· 1472 1476 if (ns.susp_fen) { 1473 1477 /* case1: The outdate peer handler is successful: */ 1474 1478 if (os.pdsk > D_OUTDATED && ns.pdsk <= D_OUTDATED) { 1475 - tl_clear(mdev); 1476 1479 if (test_bit(NEW_CUR_UUID, &mdev->flags)) 
{ 1477 1480 drbd_uuid_new_current(mdev); 1478 1481 clear_bit(NEW_CUR_UUID, &mdev->flags); 1479 1482 } 1480 1483 spin_lock_irq(&mdev->req_lock); 1484 + _tl_clear(mdev); 1481 1485 _drbd_set_state(_NS(mdev, susp_fen, 0), CS_VERBOSE, NULL); 1482 1486 spin_unlock_irq(&mdev->req_lock); 1483 1487 }
+2 -2
drivers/block/drbd/drbd_nl.c
··· 674 674 la_size_changed && md_moved ? "size changed and md moved" : 675 675 la_size_changed ? "size changed" : "md moved"); 676 676 /* next line implicitly does drbd_suspend_io()+drbd_resume_io() */ 677 - err = drbd_bitmap_io(mdev, &drbd_bm_write, 678 - "size changed", BM_LOCKED_MASK); 677 + err = drbd_bitmap_io(mdev, md_moved ? &drbd_bm_write_all : &drbd_bm_write, 678 + "size changed", BM_LOCKED_MASK); 679 679 if (err) { 680 680 rv = dev_size_error; 681 681 goto out;
+32 -4
drivers/block/drbd/drbd_req.c
··· 695 695 break; 696 696 697 697 case resend: 698 + /* Simply complete (local only) READs. */ 699 + if (!(req->rq_state & RQ_WRITE) && !req->w.cb) { 700 + _req_may_be_done(req, m); 701 + break; 702 + } 703 + 698 704 /* If RQ_NET_OK is already set, we got a P_WRITE_ACK or P_RECV_ACK 699 705 before the connection loss (B&C only); only P_BARRIER_ACK was missing. 700 706 Throwing them out of the TL here by pretending we got a BARRIER_ACK ··· 840 834 req->private_bio = NULL; 841 835 } 842 836 if (rw == WRITE) { 843 - remote = 1; 837 + /* Need to replicate writes. Unless it is an empty flush, 838 + * which is better mapped to a DRBD P_BARRIER packet, 839 + * also for drbd wire protocol compatibility reasons. */ 840 + if (unlikely(size == 0)) { 841 + /* The only size==0 bios we expect are empty flushes. */ 842 + D_ASSERT(bio->bi_rw & REQ_FLUSH); 843 + remote = 0; 844 + } else 845 + remote = 1; 844 846 } else { 845 847 /* READ || READA */ 846 848 if (local) { ··· 884 870 * extent. This waits for any resync activity in the corresponding 885 871 * resync extent to finish, and, if necessary, pulls in the target 886 872 * extent into the activity log, which involves further disk io because 887 - * of transactional on-disk meta data updates. */ 888 - if (rw == WRITE && local && !test_bit(AL_SUSPENDED, &mdev->flags)) { 873 + * of transactional on-disk meta data updates. 874 + * Empty flushes don't need to go into the activity log, they can only 875 + * flush data for pending writes which are already in there. */ 876 + if (rw == WRITE && local && size 877 + && !test_bit(AL_SUSPENDED, &mdev->flags)) { 889 878 req->rq_state |= RQ_IN_ACT_LOG; 890 879 drbd_al_begin_io(mdev, sector); 891 880 } ··· 1011 994 if (rw == WRITE && _req_conflicts(req)) 1012 995 goto fail_conflicting; 1013 996 1014 - list_add_tail(&req->tl_requests, &mdev->newest_tle->requests); 997 + /* no point in adding empty flushes to the transfer log, 998 + * they are mapped to drbd barriers already. 
*/ 999 + if (likely(size!=0)) 1000 + list_add_tail(&req->tl_requests, &mdev->newest_tle->requests); 1015 1001 1016 1002 /* NOTE remote first: to get the concurrent write detection right, 1017 1003 * we must register the request before start of local IO. */ ··· 1033 1013 if (remote && 1034 1014 mdev->net_conf->on_congestion != OC_BLOCK && mdev->agreed_pro_version >= 96) 1035 1015 maybe_pull_ahead(mdev); 1016 + 1017 + /* If this was a flush, queue a drbd barrier/start a new epoch. 1018 + * Unless the current epoch was empty anyways, or we are not currently 1019 + * replicating, in which case there is no point. */ 1020 + if (unlikely(bio->bi_rw & REQ_FLUSH) 1021 + && mdev->newest_tle->n_writes 1022 + && drbd_should_do_remote(mdev->state)) 1023 + queue_barrier(mdev); 1036 1024 1037 1025 spin_unlock_irq(&mdev->req_lock); 1038 1026 kfree(b); /* if someone else has beaten us to it... */
+3 -1
drivers/cpufreq/omap-cpufreq.c
··· 218 218 219 219 policy->cur = policy->min = policy->max = omap_getspeed(policy->cpu); 220 220 221 - if (atomic_inc_return(&freq_table_users) == 1) 221 + if (!freq_table) 222 222 result = opp_init_cpufreq_table(mpu_dev, &freq_table); 223 223 224 224 if (result) { ··· 226 226 __func__, policy->cpu, result); 227 227 goto fail_ck; 228 228 } 229 + 230 + atomic_inc_return(&freq_table_users); 229 231 230 232 result = cpufreq_frequency_table_cpuinfo(policy, freq_table); 231 233 if (result)
+5 -5
drivers/crypto/caam/jr.c
··· 63 63 64 64 head = ACCESS_ONCE(jrp->head); 65 65 66 - spin_lock_bh(&jrp->outlock); 66 + spin_lock(&jrp->outlock); 67 67 68 68 sw_idx = tail = jrp->tail; 69 69 hw_idx = jrp->out_ring_read_index; ··· 115 115 jrp->tail = tail; 116 116 } 117 117 118 - spin_unlock_bh(&jrp->outlock); 118 + spin_unlock(&jrp->outlock); 119 119 120 120 /* Finally, execute user's callback */ 121 121 usercall(dev, userdesc, userstatus, userarg); ··· 236 236 return -EIO; 237 237 } 238 238 239 - spin_lock(&jrp->inplock); 239 + spin_lock_bh(&jrp->inplock); 240 240 241 241 head = jrp->head; 242 242 tail = ACCESS_ONCE(jrp->tail); 243 243 244 244 if (!rd_reg32(&jrp->rregs->inpring_avail) || 245 245 CIRC_SPACE(head, tail, JOBR_DEPTH) <= 0) { 246 - spin_unlock(&jrp->inplock); 246 + spin_unlock_bh(&jrp->inplock); 247 247 dma_unmap_single(dev, desc_dma, desc_size, DMA_TO_DEVICE); 248 248 return -EBUSY; 249 249 } ··· 265 265 266 266 wr_reg32(&jrp->rregs->inpring_jobadd, 1); 267 267 268 - spin_unlock(&jrp->inplock); 268 + spin_unlock_bh(&jrp->inplock); 269 269 270 270 return 0; 271 271 }
+2 -2
drivers/crypto/hifn_795x.c
··· 821 821 /* 822 822 * We must wait at least 256 Pk_clk cycles between two reads of the rng. 823 823 */ 824 - dev->rng_wait_time = DIV_ROUND_UP(NSEC_PER_SEC, dev->pk_clk_freq) * 825 - 256; 824 + dev->rng_wait_time = DIV_ROUND_UP_ULL(NSEC_PER_SEC, 825 + dev->pk_clk_freq) * 256; 826 826 827 827 dev->rng.name = dev->name; 828 828 dev->rng.data_present = hifn_rng_data_present,
+1 -1
drivers/gpu/drm/drm_crtc.c
··· 1981 1981 if (!drm_core_check_feature(dev, DRIVER_MODESET)) 1982 1982 return -EINVAL; 1983 1983 1984 - if (!req->flags) 1984 + if (!req->flags || (~DRM_MODE_CURSOR_FLAGS & req->flags)) 1985 1985 return -EINVAL; 1986 1986 1987 1987 mutex_lock(&dev->mode_config.mutex);
+3
drivers/gpu/drm/drm_edid.c
··· 87 87 int product_id; 88 88 u32 quirks; 89 89 } edid_quirk_list[] = { 90 + /* ASUS VW222S */ 91 + { "ACI", 0x22a2, EDID_QUIRK_FORCE_REDUCED_BLANKING }, 92 + 90 93 /* Acer AL1706 */ 91 94 { "ACR", 44358, EDID_QUIRK_PREFER_LARGE_60 }, 92 95 /* Acer F51 */
+3
drivers/gpu/drm/gma500/psb_intel_display.c
··· 1362 1362 (struct drm_connector **) (psb_intel_crtc + 1); 1363 1363 psb_intel_crtc->mode_set.num_connectors = 0; 1364 1364 psb_intel_cursor_init(dev, psb_intel_crtc); 1365 + 1366 + /* Set to true so that the pipe is forced off on initial config. */ 1367 + psb_intel_crtc->active = true; 1365 1368 } 1366 1369 1367 1370 int psb_intel_get_pipe_from_crtc_id(struct drm_device *dev, void *data,
+1 -1
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 72 72 /* ppgtt PDEs reside in the global gtt pagetable, which has 512*1024 73 73 * entries. For aliasing ppgtt support we just steal them at the end for 74 74 * now. */ 75 - first_pd_entry_in_global_pt = 512*1024 - I915_PPGTT_PD_ENTRIES; 75 + first_pd_entry_in_global_pt = dev_priv->mm.gtt->gtt_total_entries - I915_PPGTT_PD_ENTRIES; 76 76 77 77 ppgtt = kzalloc(sizeof(*ppgtt), GFP_KERNEL); 78 78 if (!ppgtt)
+6 -6
drivers/gpu/drm/i915/intel_display.c
··· 1384 1384 enum pipe pipe, int reg) 1385 1385 { 1386 1386 u32 val = I915_READ(reg); 1387 - WARN(hdmi_pipe_enabled(dev_priv, val, pipe), 1387 + WARN(hdmi_pipe_enabled(dev_priv, pipe, val), 1388 1388 "PCH HDMI (0x%08x) enabled on transcoder %c, should be disabled\n", 1389 1389 reg, pipe_name(pipe)); 1390 1390 ··· 1404 1404 1405 1405 reg = PCH_ADPA; 1406 1406 val = I915_READ(reg); 1407 - WARN(adpa_pipe_enabled(dev_priv, val, pipe), 1407 + WARN(adpa_pipe_enabled(dev_priv, pipe, val), 1408 1408 "PCH VGA enabled on transcoder %c, should be disabled\n", 1409 1409 pipe_name(pipe)); 1410 1410 1411 1411 reg = PCH_LVDS; 1412 1412 val = I915_READ(reg); 1413 - WARN(lvds_pipe_enabled(dev_priv, val, pipe), 1413 + WARN(lvds_pipe_enabled(dev_priv, pipe, val), 1414 1414 "PCH LVDS enabled on transcoder %c, should be disabled\n", 1415 1415 pipe_name(pipe)); 1416 1416 ··· 1872 1872 enum pipe pipe, int reg) 1873 1873 { 1874 1874 u32 val = I915_READ(reg); 1875 - if (hdmi_pipe_enabled(dev_priv, val, pipe)) { 1875 + if (hdmi_pipe_enabled(dev_priv, pipe, val)) { 1876 1876 DRM_DEBUG_KMS("Disabling pch HDMI %x on pipe %d\n", 1877 1877 reg, pipe); 1878 1878 I915_WRITE(reg, val & ~PORT_ENABLE); ··· 1894 1894 1895 1895 reg = PCH_ADPA; 1896 1896 val = I915_READ(reg); 1897 - if (adpa_pipe_enabled(dev_priv, val, pipe)) 1897 + if (adpa_pipe_enabled(dev_priv, pipe, val)) 1898 1898 I915_WRITE(reg, val & ~ADPA_DAC_ENABLE); 1899 1899 1900 1900 reg = PCH_LVDS; 1901 1901 val = I915_READ(reg); 1902 - if (lvds_pipe_enabled(dev_priv, val, pipe)) { 1902 + if (lvds_pipe_enabled(dev_priv, pipe, val)) { 1903 1903 DRM_DEBUG_KMS("disable lvds on pipe %d val 0x%08x\n", pipe, val); 1904 1904 I915_WRITE(reg, val & ~LVDS_PORT_EN); 1905 1905 POSTING_READ(reg);
+8
drivers/gpu/drm/i915/intel_lvds.c
··· 780 780 DMI_MATCH(DMI_BOARD_NAME, "ZBOXSD-ID12/ID13"), 781 781 }, 782 782 }, 783 + { 784 + .callback = intel_no_lvds_dmi_callback, 785 + .ident = "Gigabyte GA-D525TUD", 786 + .matches = { 787 + DMI_MATCH(DMI_BOARD_VENDOR, "Gigabyte Technology Co., Ltd."), 788 + DMI_MATCH(DMI_BOARD_NAME, "D525TUD"), 789 + }, 790 + }, 783 791 784 792 { } /* terminating entry */ 785 793 };
+2 -2
drivers/gpu/drm/i915/intel_sprite.c
··· 60 60 61 61 switch (fb->pixel_format) { 62 62 case DRM_FORMAT_XBGR8888: 63 - sprctl |= SPRITE_FORMAT_RGBX888; 63 + sprctl |= SPRITE_FORMAT_RGBX888 | SPRITE_RGB_ORDER_RGBX; 64 64 pixel_size = 4; 65 65 break; 66 66 case DRM_FORMAT_XRGB8888: 67 - sprctl |= SPRITE_FORMAT_RGBX888 | SPRITE_RGB_ORDER_RGBX; 67 + sprctl |= SPRITE_FORMAT_RGBX888; 68 68 pixel_size = 4; 69 69 break; 70 70 case DRM_FORMAT_YUYV:
+4 -2
drivers/gpu/drm/nouveau/nouveau_state.c
··· 736 736 } 737 737 break; 738 738 case NV_C0: 739 - nvc0_copy_create(dev, 1); 739 + if (!(nv_rd32(dev, 0x022500) & 0x00000200)) 740 + nvc0_copy_create(dev, 1); 740 741 case NV_D0: 741 - nvc0_copy_create(dev, 0); 742 + if (!(nv_rd32(dev, 0x022500) & 0x00000100)) 743 + nvc0_copy_create(dev, 0); 742 744 break; 743 745 default: 744 746 break;
+16 -20
drivers/gpu/drm/radeon/atombios_crtc.c
··· 258 258 radeon_crtc->enabled = true; 259 259 /* adjust pm to dpms changes BEFORE enabling crtcs */ 260 260 radeon_pm_compute_clocks(rdev); 261 - /* disable crtc pair power gating before programming */ 262 261 if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set) 263 262 atombios_powergate_crtc(crtc, ATOM_DISABLE); 264 263 atombios_enable_crtc(crtc, ATOM_ENABLE); ··· 277 278 atombios_enable_crtc_memreq(crtc, ATOM_DISABLE); 278 279 atombios_enable_crtc(crtc, ATOM_DISABLE); 279 280 radeon_crtc->enabled = false; 280 - /* power gating is per-pair */ 281 - if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set) { 282 - struct drm_crtc *other_crtc; 283 - struct radeon_crtc *other_radeon_crtc; 284 - list_for_each_entry(other_crtc, &rdev->ddev->mode_config.crtc_list, head) { 285 - other_radeon_crtc = to_radeon_crtc(other_crtc); 286 - if (((radeon_crtc->crtc_id == 0) && (other_radeon_crtc->crtc_id == 1)) || 287 - ((radeon_crtc->crtc_id == 1) && (other_radeon_crtc->crtc_id == 0)) || 288 - ((radeon_crtc->crtc_id == 2) && (other_radeon_crtc->crtc_id == 3)) || 289 - ((radeon_crtc->crtc_id == 3) && (other_radeon_crtc->crtc_id == 2)) || 290 - ((radeon_crtc->crtc_id == 4) && (other_radeon_crtc->crtc_id == 5)) || 291 - ((radeon_crtc->crtc_id == 5) && (other_radeon_crtc->crtc_id == 4))) { 292 - /* if both crtcs in the pair are off, enable power gating */ 293 - if (other_radeon_crtc->enabled == false) 294 - atombios_powergate_crtc(crtc, ATOM_ENABLE); 295 - break; 296 - } 297 - } 298 - } 281 + if (ASIC_IS_DCE6(rdev) && !radeon_crtc->in_mode_set) 282 + atombios_powergate_crtc(crtc, ATOM_ENABLE); 299 283 /* adjust pm to dpms changes AFTER disabling crtcs */ 300 284 radeon_pm_compute_clocks(rdev); 301 285 break; ··· 1664 1682 struct drm_device *dev = crtc->dev; 1665 1683 struct radeon_device *rdev = dev->dev_private; 1666 1684 struct radeon_atom_ss ss; 1685 + int i; 1667 1686 1668 1687 atombios_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 1688 + 1689 + for (i = 0; i < rdev->num_crtc; i++) { 1690 + 
if (rdev->mode_info.crtcs[i] && 1691 + rdev->mode_info.crtcs[i]->enabled && 1692 + i != radeon_crtc->crtc_id && 1693 + radeon_crtc->pll_id == rdev->mode_info.crtcs[i]->pll_id) { 1694 + /* one other crtc is using this pll don't turn 1695 + * off the pll 1696 + */ 1697 + goto done; 1698 + } 1699 + } 1669 1700 1670 1701 switch (radeon_crtc->pll_id) { 1671 1702 case ATOM_PPLL1: ··· 1696 1701 default: 1697 1702 break; 1698 1703 } 1704 + done: 1699 1705 radeon_crtc->pll_id = -1; 1700 1706 } 1701 1707
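The new check in `atombios_crtc_disable()` above scans the other crtcs before tearing down a PLL, keeping it alive if any enabled crtc shares the same `pll_id`. A minimal user-space sketch of that scan, with a plain struct standing in for the driver's crtc state (names here are illustrative, not the driver's):

```c
/* Stand-in for the per-crtc state consulted by the diff's loop. */
struct crtc { int enabled; int pll_id; };

/* Return 1 if some *other* enabled crtc still uses this pll_id,
 * in which case the PLL must not be switched off. */
static int pll_still_in_use(const struct crtc *crtcs, int n,
                            int self, int pll_id)
{
    for (int i = 0; i < n; i++)
        if (i != self && crtcs[i].enabled && crtcs[i].pll_id == pll_id)
            return 1;
    return 0;
}
```

The same shape applies to any shared resource without an explicit refcount: enumerate the possible users before releasing it.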
+12 -17
drivers/gpu/drm/radeon/atombios_dp.c
··· 577 577 struct radeon_device *rdev = dev->dev_private; 578 578 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 579 579 int panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 580 + u16 dp_bridge = radeon_connector_encoder_get_dp_bridge_encoder_id(connector); 581 + u8 tmp; 580 582 581 583 if (!ASIC_IS_DCE4(rdev)) 582 584 return panel_mode; 583 585 584 - if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == 585 - ENCODER_OBJECT_ID_NUTMEG) 586 - panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; 587 - else if (radeon_connector_encoder_get_dp_bridge_encoder_id(connector) == 588 - ENCODER_OBJECT_ID_TRAVIS) { 589 - u8 id[6]; 590 - int i; 591 - for (i = 0; i < 6; i++) 592 - id[i] = radeon_read_dpcd_reg(radeon_connector, 0x503 + i); 593 - if (id[0] == 0x73 && 594 - id[1] == 0x69 && 595 - id[2] == 0x76 && 596 - id[3] == 0x61 && 597 - id[4] == 0x72 && 598 - id[5] == 0x54) 586 + if (dp_bridge != ENCODER_OBJECT_ID_NONE) { 587 + /* DP bridge chips */ 588 + tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 589 + if (tmp & 1) 590 + panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 591 + else if ((dp_bridge == ENCODER_OBJECT_ID_NUTMEG) || 592 + (dp_bridge == ENCODER_OBJECT_ID_TRAVIS)) 599 593 panel_mode = DP_PANEL_MODE_INTERNAL_DP1_MODE; 600 594 else 601 - panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 595 + panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 602 596 } else if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { 603 - u8 tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 597 + /* eDP */ 598 + tmp = radeon_read_dpcd_reg(radeon_connector, DP_EDP_CONFIGURATION_CAP); 604 599 if (tmp & 1) 605 600 panel_mode = DP_PANEL_MODE_INTERNAL_DP2_MODE; 606 601 }
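The rewritten `radeon_dp_get_panel_mode()` replaces the six-byte DPCD signature sniff with one read of `DP_EDP_CONFIGURATION_CAP` plus the bridge encoder id. A hedged user-space sketch of the resulting decision table, using hypothetical enum values in place of the ATOM object ids and panel-mode constants:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the driver's constants; only the decision
 * structure mirrors the diff, the values are made up. */
enum panel_mode { EXTERNAL_DP, INTERNAL_DP1, INTERNAL_DP2 };
enum bridge_id  { BRIDGE_NONE, BRIDGE_NUTMEG, BRIDGE_TRAVIS, BRIDGE_OTHER };

/* cap is the byte read from the DP_EDP_CONFIGURATION_CAP DPCD register. */
static enum panel_mode pick_panel_mode(enum bridge_id bridge, bool is_edp,
                                       unsigned char cap)
{
    if (bridge != BRIDGE_NONE) {
        if (cap & 1)                       /* alternate framing advertised */
            return INTERNAL_DP2;
        if (bridge == BRIDGE_NUTMEG || bridge == BRIDGE_TRAVIS)
            return INTERNAL_DP1;
        return EXTERNAL_DP;
    }
    if (is_edp && (cap & 1))               /* eDP panel, alternate framing */
        return INTERNAL_DP2;
    return EXTERNAL_DP;
}
```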
+73 -67
drivers/gpu/drm/radeon/atombios_encoders.c
··· 1379 1379 struct drm_device *dev = encoder->dev; 1380 1380 struct radeon_device *rdev = dev->dev_private; 1381 1381 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1382 + struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); 1383 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 1382 1384 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1383 1385 struct radeon_connector *radeon_connector = NULL; 1384 1386 struct radeon_connector_atom_dig *radeon_dig_connector = NULL; ··· 1392 1390 1393 1391 switch (mode) { 1394 1392 case DRM_MODE_DPMS_ON: 1395 - /* some early dce3.2 boards have a bug in their transmitter control table */ 1396 - if ((rdev->family == CHIP_RV710) || (rdev->family == CHIP_RV730) || 1397 - ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1398 - if (ASIC_IS_DCE6(rdev)) { 1399 - /* It seems we need to call ATOM_ENCODER_CMD_SETUP again 1400 - * before reenabling encoder on DPMS ON, otherwise we never 1401 - * get picture 1402 - */ 1403 - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1393 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1394 + if (!connector) 1395 + dig->panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 1396 + else 1397 + dig->panel_mode = radeon_dp_get_panel_mode(encoder, connector); 1398 + 1399 + /* setup and enable the encoder */ 1400 + atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1401 + atombios_dig_encoder_setup(encoder, 1402 + ATOM_ENCODER_CMD_SETUP_PANEL_MODE, 1403 + dig->panel_mode); 1404 + if (ext_encoder) { 1405 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE61(rdev)) 1406 + atombios_external_encoder_setup(encoder, ext_encoder, 1407 + EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP); 1404 1408 } 1405 1409 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1406 - } else { 1410 + } else if (ASIC_IS_DCE4(rdev)) { 1411 + /* setup and enable the encoder */ 1412 + atombios_dig_encoder_setup(encoder, 
ATOM_ENCODER_CMD_SETUP, 0); 1413 + /* enable the transmitter */ 1414 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1407 1415 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); 1416 + } else { 1417 + /* setup and enable the encoder and transmitter */ 1418 + atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0); 1419 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); 1420 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1421 + /* some early dce3.2 boards have a bug in their transmitter control table */ 1422 + if ((rdev->family != CHIP_RV710) || (rdev->family != CHIP_RV730)) 1423 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0); 1408 1424 } 1409 1425 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) { 1410 1426 if (connector->connector_type == DRM_MODE_CONNECTOR_eDP) { ··· 1440 1420 case DRM_MODE_DPMS_STANDBY: 1441 1421 case DRM_MODE_DPMS_SUSPEND: 1442 1422 case DRM_MODE_DPMS_OFF: 1443 - if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) 1423 + if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1424 + /* disable the transmitter */ 1444 1425 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1445 - else 1426 + } else if (ASIC_IS_DCE4(rdev)) { 1427 + /* disable the transmitter */ 1446 1428 atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); 1429 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1430 + } else { 1431 + /* disable the encoder and transmitter */ 1432 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0); 1433 + atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1434 + atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); 1435 + } 1447 1436 if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) { 1448 
1437 if (ASIC_IS_DCE4(rdev)) 1449 1438 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF, 0); ··· 1769 1740 struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc); 1770 1741 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1771 1742 struct drm_encoder *test_encoder; 1772 - struct radeon_encoder_atom_dig *dig; 1743 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 1773 1744 uint32_t dig_enc_in_use = 0; 1774 1745 1775 - /* DCE4/5 */ 1776 - if (ASIC_IS_DCE4(rdev)) { 1777 - dig = radeon_encoder->enc_priv; 1778 - if (ASIC_IS_DCE41(rdev)) { 1746 + if (ASIC_IS_DCE6(rdev)) { 1747 + /* DCE6 */ 1748 + switch (radeon_encoder->encoder_id) { 1749 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 1750 + if (dig->linkb) 1751 + return 1; 1752 + else 1753 + return 0; 1754 + break; 1755 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 1756 + if (dig->linkb) 1757 + return 3; 1758 + else 1759 + return 2; 1760 + break; 1761 + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 1762 + if (dig->linkb) 1763 + return 5; 1764 + else 1765 + return 4; 1766 + break; 1767 + } 1768 + } else if (ASIC_IS_DCE4(rdev)) { 1769 + /* DCE4/5 */ 1770 + if (ASIC_IS_DCE41(rdev) && !ASIC_IS_DCE61(rdev)) { 1779 1771 /* ontario follows DCE4 */ 1780 1772 if (rdev->family == CHIP_PALM) { 1781 1773 if (dig->linkb) ··· 1898 1848 struct drm_device *dev = encoder->dev; 1899 1849 struct radeon_device *rdev = dev->dev_private; 1900 1850 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1901 - struct drm_encoder *ext_encoder = radeon_get_external_encoder(encoder); 1902 1851 1903 1852 radeon_encoder->pixel_clock = adjusted_mode->clock; 1853 + 1854 + /* need to call this here rather than in prepare() since we need some crtc info */ 1855 + radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); 1904 1856 1905 1857 if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE4(rdev)) { 1906 1858 if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT | ATOM_DEVICE_TV_SUPPORT)) ··· 1922 1870 
case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 1923 1871 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 1924 1872 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 1925 - if (ASIC_IS_DCE41(rdev) || ASIC_IS_DCE5(rdev)) { 1926 - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1927 - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 1928 - 1929 - if (!connector) 1930 - dig->panel_mode = DP_PANEL_MODE_EXTERNAL_DP_MODE; 1931 - else 1932 - dig->panel_mode = radeon_dp_get_panel_mode(encoder, connector); 1933 - 1934 - /* setup and enable the encoder */ 1935 - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1936 - atombios_dig_encoder_setup(encoder, 1937 - ATOM_ENCODER_CMD_SETUP_PANEL_MODE, 1938 - dig->panel_mode); 1939 - } else if (ASIC_IS_DCE4(rdev)) { 1940 - /* disable the transmitter */ 1941 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1942 - /* setup and enable the encoder */ 1943 - atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_SETUP, 0); 1944 - 1945 - /* enable the transmitter */ 1946 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1947 - } else { 1948 - /* disable the encoder and transmitter */ 1949 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 1950 - atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); 1951 - 1952 - /* setup and enable the encoder and transmitter */ 1953 - atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0); 1954 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0); 1955 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0); 1956 - } 1873 + /* handled in dpms */ 1957 1874 break; 1958 1875 case ENCODER_OBJECT_ID_INTERNAL_DDI: 1959 1876 case ENCODER_OBJECT_ID_INTERNAL_DVO1: ··· 1941 1920 atombios_tv_setup(encoder, ATOM_DISABLE); 1942 1921 } 1943 1922 break; 1944 - } 1945 - 1946 - if (ext_encoder) { 1947 - if (ASIC_IS_DCE41(rdev) || 
ASIC_IS_DCE61(rdev)) 1948 - atombios_external_encoder_setup(encoder, ext_encoder, 1949 - EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP); 1950 - else 1951 - atombios_external_encoder_setup(encoder, ext_encoder, ATOM_ENABLE); 1952 1923 } 1953 1924 1954 1925 atombios_apply_encoder_quirks(encoder, adjusted_mode); ··· 2129 2116 } 2130 2117 2131 2118 radeon_atom_output_lock(encoder, true); 2132 - radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF); 2133 2119 2134 2120 if (connector) { 2135 2121 struct radeon_connector *radeon_connector = to_radeon_connector(connector); ··· 2149 2137 2150 2138 static void radeon_atom_encoder_commit(struct drm_encoder *encoder) 2151 2139 { 2140 + /* need to call this here as we need the crtc set up */ 2152 2141 radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_ON); 2153 2142 radeon_atom_output_lock(encoder, false); 2154 2143 } ··· 2190 2177 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 2191 2178 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 2192 2179 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 2193 - if (ASIC_IS_DCE4(rdev)) 2194 - /* disable the transmitter */ 2195 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 2196 - else { 2197 - /* disable the encoder and transmitter */ 2198 - atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE, 0, 0); 2199 - atombios_dig_encoder_setup(encoder, ATOM_DISABLE, 0); 2200 - } 2180 + /* handled in dpms */ 2201 2181 break; 2202 2182 case ENCODER_OBJECT_ID_INTERNAL_DDI: 2203 2183 case ENCODER_OBJECT_ID_INTERNAL_DVO1:
+25 -3
drivers/gpu/drm/radeon/r600_cs.c
··· 63 63 u32 cb_color_size_idx[8]; /* unused */ 64 64 u32 cb_target_mask; 65 65 u32 cb_shader_mask; /* unused */ 66 + bool is_resolve; 66 67 u32 cb_color_size[8]; 67 68 u32 vgt_strmout_en; 68 69 u32 vgt_strmout_buffer_en; ··· 316 315 track->cb_color_bo[i] = NULL; 317 316 track->cb_color_bo_offset[i] = 0xFFFFFFFF; 318 317 track->cb_color_bo_mc[i] = 0xFFFFFFFF; 318 + track->cb_color_frag_bo[i] = NULL; 319 + track->cb_color_frag_offset[i] = 0xFFFFFFFF; 320 + track->cb_color_tile_bo[i] = NULL; 321 + track->cb_color_tile_offset[i] = 0xFFFFFFFF; 322 + track->cb_color_mask[i] = 0xFFFFFFFF; 319 323 } 324 + track->is_resolve = false; 325 + track->nsamples = 16; 326 + track->log_nsamples = 4; 320 327 track->cb_target_mask = 0xFFFFFFFF; 321 328 track->cb_shader_mask = 0xFFFFFFFF; 322 329 track->cb_dirty = true; ··· 361 352 volatile u32 *ib = p->ib.ptr; 362 353 unsigned array_mode; 363 354 u32 format; 355 + /* When resolve is used, the second colorbuffer has always 1 sample. */ 356 + unsigned nsamples = track->is_resolve && i == 1 ? 
1 : track->nsamples; 364 357 365 358 size = radeon_bo_size(track->cb_color_bo[i]) - track->cb_color_bo_offset[i]; 366 359 format = G_0280A0_FORMAT(track->cb_color_info[i]); ··· 386 375 array_check.group_size = track->group_size; 387 376 array_check.nbanks = track->nbanks; 388 377 array_check.npipes = track->npipes; 389 - array_check.nsamples = track->nsamples; 378 + array_check.nsamples = nsamples; 390 379 array_check.blocksize = r600_fmt_get_blocksize(format); 391 380 if (r600_get_array_mode_alignment(&array_check, 392 381 &pitch_align, &height_align, &depth_align, &base_align)) { ··· 432 421 433 422 /* check offset */ 434 423 tmp = r600_fmt_get_nblocksy(format, height) * r600_fmt_get_nblocksx(format, pitch) * 435 - r600_fmt_get_blocksize(format) * track->nsamples; 424 + r600_fmt_get_blocksize(format) * nsamples; 436 425 switch (array_mode) { 437 426 default: 438 427 case V_0280A0_ARRAY_LINEAR_GENERAL: ··· 803 792 */ 804 793 if (track->cb_dirty) { 805 794 tmp = track->cb_target_mask; 795 + 796 + /* We must check both colorbuffers for RESOLVE. 
*/ 797 + if (track->is_resolve) { 798 + tmp |= 0xff; 799 + } 800 + 806 801 for (i = 0; i < 8; i++) { 807 802 if ((tmp >> (i * 4)) & 0xF) { 808 803 /* at least one component is enabled */ ··· 1298 1281 track->nsamples = 1 << tmp; 1299 1282 track->cb_dirty = true; 1300 1283 break; 1284 + case R_028808_CB_COLOR_CONTROL: 1285 + tmp = G_028808_SPECIAL_OP(radeon_get_ib_value(p, idx)); 1286 + track->is_resolve = tmp == V_028808_SPECIAL_RESOLVE_BOX; 1287 + track->cb_dirty = true; 1288 + break; 1301 1289 case R_0280A0_CB_COLOR0_INFO: 1302 1290 case R_0280A4_CB_COLOR1_INFO: 1303 1291 case R_0280A8_CB_COLOR2_INFO: ··· 1438 1416 case R_028118_CB_COLOR6_MASK: 1439 1417 case R_02811C_CB_COLOR7_MASK: 1440 1418 tmp = (reg - R_028100_CB_COLOR0_MASK) / 4; 1441 - track->cb_color_mask[tmp] = ib[idx]; 1419 + track->cb_color_mask[tmp] = radeon_get_ib_value(p, idx); 1442 1420 if (G_0280A0_TILE_MODE(track->cb_color_info[tmp])) { 1443 1421 track->cb_dirty = true; 1444 1422 }
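The r600 CS checker change above forces one sample for the second colorbuffer whenever a `RESOLVE_BOX` is in flight, and feeds that into the size validation. A small sketch of just that size arithmetic, with illustrative names rather than the driver's:

```c
/* During a resolve, colorbuffer 1 is the single-sample destination, so
 * its required backing size uses nsamples = 1; every other buffer keeps
 * the tracked sample count. Mirrors the diff's nsamples selection. */
static unsigned cb_required_bytes(unsigned cb_index, int is_resolve,
                                  unsigned nsamples, unsigned nblocks,
                                  unsigned blocksize)
{
    unsigned samples = (is_resolve && cb_index == 1) ? 1 : nsamples;

    return nblocks * blocksize * samples;
}
```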
+8
drivers/gpu/drm/radeon/r600d.h
··· 66 66 #define CC_RB_BACKEND_DISABLE 0x98F4 67 67 #define BACKEND_DISABLE(x) ((x) << 16) 68 68 69 + #define R_028808_CB_COLOR_CONTROL 0x28808 70 + #define S_028808_SPECIAL_OP(x) (((x) & 0x7) << 4) 71 + #define G_028808_SPECIAL_OP(x) (((x) >> 4) & 0x7) 72 + #define C_028808_SPECIAL_OP 0xFFFFFF8F 73 + #define V_028808_SPECIAL_NORMAL 0x00 74 + #define V_028808_SPECIAL_DISABLE 0x01 75 + #define V_028808_SPECIAL_RESOLVE_BOX 0x07 76 + 69 77 #define CB_COLOR0_BASE 0x28040 70 78 #define CB_COLOR1_BASE 0x28044 71 79 #define CB_COLOR2_BASE 0x28048
+4 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 1051 1051 if (rdev->flags & RADEON_IS_AGP) 1052 1052 rdev->need_dma32 = true; 1053 1053 if ((rdev->flags & RADEON_IS_PCI) && 1054 - (rdev->family < CHIP_RS400)) 1054 + (rdev->family <= CHIP_RS740)) 1055 1055 rdev->need_dma32 = true; 1056 1056 1057 1057 dma_bits = rdev->need_dma32 ? 32 : 40; ··· 1346 1346 for (i = 0; i < RADEON_NUM_RINGS; ++i) { 1347 1347 radeon_ring_restore(rdev, &rdev->ring[i], 1348 1348 ring_sizes[i], ring_data[i]); 1349 + ring_sizes[i] = 0; 1350 + ring_data[i] = NULL; 1349 1351 } 1350 1352 1351 1353 r = radeon_ib_ring_tests(rdev); 1352 1354 if (r) { 1353 1355 dev_err(rdev->dev, "ib ring test failed (%d).\n", r); 1354 1356 if (saved) { 1357 + saved = false; 1355 1358 radeon_suspend(rdev); 1356 1359 goto retry; 1357 1360 }
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 63 63 * 2.19.0 - r600-eg: MSAA textures 64 64 * 2.20.0 - r600-si: RADEON_INFO_TIMESTAMP query 65 65 * 2.21.0 - r600-r700: FMASK and CMASK 66 + * 2.22.0 - r600 only: RESOLVE_BOX allowed 66 67 */ 67 68 #define KMS_DRIVER_MAJOR 2 68 - #define KMS_DRIVER_MINOR 21 69 + #define KMS_DRIVER_MINOR 22 69 70 #define KMS_DRIVER_PATCHLEVEL 0 70 71 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 71 72 int radeon_driver_unload_kms(struct drm_device *dev);
-1
drivers/gpu/drm/radeon/reg_srcs/r600
··· 744 744 0x00028C38 CB_CLRCMP_DST 745 745 0x00028C3C CB_CLRCMP_MSK 746 746 0x00028C34 CB_CLRCMP_SRC 747 - 0x00028808 CB_COLOR_CONTROL 748 747 0x0002842C CB_FOG_BLUE 749 748 0x00028428 CB_FOG_GREEN 750 749 0x00028424 CB_FOG_RED
-1
drivers/hid/hid-core.c
··· 1624 1624 { HID_USB_DEVICE(USB_VENDOR_ID_ORTEK, USB_DEVICE_ID_ORTEK_WKB2000) }, 1625 1625 { HID_USB_DEVICE(USB_VENDOR_ID_PETALYNX, USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE) }, 1626 1626 { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_KEYBOARD) }, 1627 - { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_PIXART_IMAGING_INC_OPTICAL_TOUCH_SCREEN) }, 1628 1627 { HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_KONE) }, 1629 1628 { HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_ARVO) }, 1630 1629 { HID_USB_DEVICE(USB_VENDOR_ID_ROCCAT, USB_DEVICE_ID_ROCCAT_ISKU) },
+6
drivers/hwmon/asus_atk0110.c
··· 34 34 .matches = { 35 35 DMI_MATCH(DMI_BOARD_NAME, "SABERTOOTH X58") 36 36 } 37 + }, { 38 + /* Old interface reads the same sensor for fan0 and fan1 */ 39 + .ident = "Asus M5A78L", 40 + .matches = { 41 + DMI_MATCH(DMI_BOARD_NAME, "M5A78L") 42 + } 37 43 }, 38 44 { } 39 45 };
+2 -2
drivers/ide/ide-pm.c
··· 4 4 5 5 int generic_ide_suspend(struct device *dev, pm_message_t mesg) 6 6 { 7 - ide_drive_t *drive = dev_get_drvdata(dev); 7 + ide_drive_t *drive = to_ide_device(dev); 8 8 ide_drive_t *pair = ide_get_pair_dev(drive); 9 9 ide_hwif_t *hwif = drive->hwif; 10 10 struct request *rq; ··· 40 40 41 41 int generic_ide_resume(struct device *dev) 42 42 { 43 - ide_drive_t *drive = dev_get_drvdata(dev); 43 + ide_drive_t *drive = to_ide_device(dev); 44 44 ide_drive_t *pair = ide_get_pair_dev(drive); 45 45 ide_hwif_t *hwif = drive->hwif; 46 46 struct request *rq;
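The ide-pm.c change swaps `dev_get_drvdata()` for `to_ide_device()`, which recovers the `ide_drive_t` from its embedded `struct device` via `container_of` instead of relying on drvdata having been set. A user-space sketch of that idiom, assuming a simplified stand-in for the drive structure:

```c
#include <stddef.h>

/* container_of: recover the enclosing struct from a pointer to one of
 * its members. This is the idiom behind helpers like to_ide_device(). */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct device { int id; };

struct drive {
    int unit;
    struct device gendev;   /* embedded, like ide_drive_t's gendev */
};

static struct drive *to_drive(struct device *dev)
{
    return container_of(dev, struct drive, gendev);
}
```

Because the pointer arithmetic is computed from the type layout, the conversion cannot silently return the wrong object the way a stale or unset drvdata pointer can.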
+57 -1
drivers/pinctrl/pinctrl-sirf.c
··· 17 17 #include <linux/pinctrl/pinctrl.h> 18 18 #include <linux/pinctrl/pinmux.h> 19 19 #include <linux/pinctrl/consumer.h> 20 + #include <linux/pinctrl/machine.h> 20 21 #include <linux/of.h> 21 22 #include <linux/of_address.h> 22 23 #include <linux/of_device.h> ··· 917 916 seq_printf(s, " " DRIVER_NAME); 918 917 } 919 918 919 + static int sirfsoc_dt_node_to_map(struct pinctrl_dev *pctldev, 920 + struct device_node *np_config, 921 + struct pinctrl_map **map, unsigned *num_maps) 922 + { 923 + struct sirfsoc_pmx *spmx = pinctrl_dev_get_drvdata(pctldev); 924 + struct device_node *np; 925 + struct property *prop; 926 + const char *function, *group; 927 + int ret, index = 0, count = 0; 928 + 929 + /* calculate number of maps required */ 930 + for_each_child_of_node(np_config, np) { 931 + ret = of_property_read_string(np, "sirf,function", &function); 932 + if (ret < 0) 933 + return ret; 934 + 935 + ret = of_property_count_strings(np, "sirf,pins"); 936 + if (ret < 0) 937 + return ret; 938 + 939 + count += ret; 940 + } 941 + 942 + if (!count) { 943 + dev_err(spmx->dev, "No child nodes passed via DT\n"); 944 + return -ENODEV; 945 + } 946 + 947 + *map = kzalloc(sizeof(**map) * count, GFP_KERNEL); 948 + if (!*map) 949 + return -ENOMEM; 950 + 951 + for_each_child_of_node(np_config, np) { 952 + of_property_read_string(np, "sirf,function", &function); 953 + of_property_for_each_string(np, "sirf,pins", prop, group) { 954 + (*map)[index].type = PIN_MAP_TYPE_MUX_GROUP; 955 + (*map)[index].data.mux.group = group; 956 + (*map)[index].data.mux.function = function; 957 + index++; 958 + } 959 + } 960 + 961 + *num_maps = count; 962 + 963 + return 0; 964 + } 965 + 966 + static void sirfsoc_dt_free_map(struct pinctrl_dev *pctldev, 967 + struct pinctrl_map *map, unsigned num_maps) 968 + { 969 + kfree(map); 970 + } 971 + 920 972 static struct pinctrl_ops sirfsoc_pctrl_ops = { 921 973 .get_groups_count = sirfsoc_get_groups_count, 922 974 .get_group_name = sirfsoc_get_group_name, 923 975 
.get_group_pins = sirfsoc_get_group_pins, 924 976 .pin_dbg_show = sirfsoc_pin_dbg_show, 977 + .dt_node_to_map = sirfsoc_dt_node_to_map, 978 + .dt_free_map = sirfsoc_dt_free_map, 925 979 }; 926 980 927 981 struct sirfsoc_pmx_func { ··· 1277 1221 } 1278 1222 1279 1223 static const struct of_device_id pinmux_ids[] __devinitconst = { 1280 - { .compatible = "sirf,prima2-gpio-pinmux" }, 1224 + { .compatible = "sirf,prima2-pinctrl" }, 1281 1225 {} 1282 1226 }; 1283 1227
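`sirfsoc_dt_node_to_map()` above uses a common two-pass shape: first walk the child nodes to count `(function, group)` pairs, then make a single allocation and fill it. A sketch of the same shape with plain string arrays standing in for the `sirf,function`/`sirf,pins` DT properties (all names here are illustrative):

```c
#include <stdlib.h>

struct mux_map { const char *function; const char *group; };

/* Stand-in for one DT child node's properties. */
struct child { const char *function; const char **pins; unsigned npins; };

static struct mux_map *build_map(const struct child *children,
                                 unsigned nchildren, unsigned *num_maps)
{
    unsigned count = 0, index = 0, i, j;

    for (i = 0; i < nchildren; i++)        /* pass 1: count entries */
        count += children[i].npins;
    if (!count)
        return NULL;                       /* no children: nothing to map */

    struct mux_map *map = calloc(count, sizeof(*map));
    if (!map)
        return NULL;

    for (i = 0; i < nchildren; i++)        /* pass 2: fill the allocation */
        for (j = 0; j < children[i].npins; j++) {
            map[index].function = children[i].function;
            map[index].group = children[i].pins[j];
            index++;
        }

    *num_maps = count;
    return map;
}
```

Counting first keeps the result in one contiguous allocation, which is why the matching `dt_free_map` callback can be a single `kfree()`.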
+1 -1
drivers/s390/block/dasd_eckd.c
··· 3804 3804 case BIODASDSYMMIO: 3805 3805 return dasd_symm_io(device, argp); 3806 3806 default: 3807 - return -ENOIOCTLCMD; 3807 + return -ENOTTY; 3808 3808 } 3809 3809 } 3810 3810
+2 -5
drivers/s390/block/dasd_ioctl.c
··· 498 498 break; 499 499 default: 500 500 /* if the discipline has an ioctl method try it. */ 501 - if (base->discipline->ioctl) { 501 + rc = -ENOTTY; 502 + if (base->discipline->ioctl) 502 503 rc = base->discipline->ioctl(block, cmd, argp); 503 - if (rc == -ENOIOCTLCMD) 504 - rc = -EINVAL; 505 - } else 506 - rc = -EINVAL; 507 504 } 508 505 dasd_put_device(base); 509 506 return rc;
+25 -4
drivers/spi/spi-bcm63xx.c
··· 47 47 /* Platform data */ 48 48 u32 speed_hz; 49 49 unsigned fifo_size; 50 + unsigned int msg_type_shift; 51 + unsigned int msg_ctl_width; 50 52 51 53 /* Data buffers */ 52 54 const unsigned char *tx_ptr; ··· 223 221 msg_ctl = (t->len << SPI_BYTE_CNT_SHIFT); 224 222 225 223 if (t->rx_buf && t->tx_buf) 226 - msg_ctl |= (SPI_FD_RW << SPI_MSG_TYPE_SHIFT); 224 + msg_ctl |= (SPI_FD_RW << bs->msg_type_shift); 227 225 else if (t->rx_buf) 228 - msg_ctl |= (SPI_HD_R << SPI_MSG_TYPE_SHIFT); 226 + msg_ctl |= (SPI_HD_R << bs->msg_type_shift); 229 227 else if (t->tx_buf) 230 - msg_ctl |= (SPI_HD_W << SPI_MSG_TYPE_SHIFT); 228 + msg_ctl |= (SPI_HD_W << bs->msg_type_shift); 231 229 232 - bcm_spi_writew(bs, msg_ctl, SPI_MSG_CTL); 230 + switch (bs->msg_ctl_width) { 231 + case 8: 232 + bcm_spi_writeb(bs, msg_ctl, SPI_MSG_CTL); 233 + break; 234 + case 16: 235 + bcm_spi_writew(bs, msg_ctl, SPI_MSG_CTL); 236 + break; 237 + } 233 238 234 239 /* Issue the transfer */ 235 240 cmd = SPI_CMD_START_IMMEDIATE; ··· 415 406 master->transfer_one_message = bcm63xx_spi_transfer_one; 416 407 master->mode_bits = MODEBITS; 417 408 bs->speed_hz = pdata->speed_hz; 409 + bs->msg_type_shift = pdata->msg_type_shift; 410 + bs->msg_ctl_width = pdata->msg_ctl_width; 418 411 bs->tx_io = (u8 *)(bs->regs + bcm63xx_spireg(SPI_MSG_DATA)); 419 412 bs->rx_io = (const u8 *)(bs->regs + bcm63xx_spireg(SPI_RX_DATA)); 413 + 414 + switch (bs->msg_ctl_width) { 415 + case 8: 416 + case 16: 417 + break; 418 + default: 419 + dev_err(dev, "unsupported MSG_CTL width: %d\n", 420 + bs->msg_ctl_width); 421 + goto out_clk_disable; 422 + } 420 423 421 424 /* Initialize hardware */ 422 425 clk_enable(bs->clk);
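The spi-bcm63xx patch makes the `SPI_MSG_CTL` access width come from platform data, dispatching to an 8- or 16-bit write and rejecting anything else at probe time. A sketch of that dispatch, writing into a byte buffer instead of an MMIO window (the helper name is made up):

```c
#include <stdint.h>
#include <string.h>

/* Width-dependent register write: the register's width is only known at
 * run time, so the access size is selected per call. A byte array stands
 * in for the device's register space; 16-bit writes use the host's
 * native endianness here, unlike a real MMIO accessor. */
static int write_msg_ctl(uint8_t *regs, unsigned offset,
                         unsigned width, uint16_t val)
{
    switch (width) {
    case 8:
        regs[offset] = (uint8_t)val;
        return 0;
    case 16:
        memcpy(&regs[offset], &val, sizeof(val));
        return 0;
    default:
        return -1;   /* unsupported width: fail, as probe() now does */
    }
}
```

Validating the width once at probe time, as the diff does, keeps the per-transfer path to a simple switch with no error handling.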
+3 -4
drivers/watchdog/booke_wdt.c
··· 166 166 167 167 switch (cmd) { 168 168 case WDIOC_GETSUPPORT: 169 - if (copy_to_user((void *)arg, &ident, sizeof(ident))) 170 - return -EFAULT; 169 + return copy_to_user(p, &ident, sizeof(ident)) ? -EFAULT : 0; 171 170 case WDIOC_GETSTATUS: 172 171 return put_user(0, p); 173 172 case WDIOC_GETBOOTSTATUS: 174 173 /* XXX: something is clearing TSR */ 175 174 tmp = mfspr(SPRN_TSR) & TSR_WRS(3); 176 175 /* returns CARDRESET if last reset was caused by the WDT */ 177 - return (tmp ? WDIOF_CARDRESET : 0); 176 + return put_user((tmp ? WDIOF_CARDRESET : 0), p); 178 177 case WDIOC_SETOPTIONS: 179 178 if (get_user(tmp, p)) 180 - return -EINVAL; 179 + return -EFAULT; 181 180 if (tmp == WDIOS_ENABLECARD) { 182 181 booke_wdt_ping(); 183 182 break;
-1
drivers/watchdog/da9052_wdt.c
··· 21 21 #include <linux/types.h> 22 22 #include <linux/kernel.h> 23 23 #include <linux/jiffies.h> 24 - #include <linux/delay.h> 25 24 26 25 #include <linux/mfd/da9052/reg.h> 27 26 #include <linux/mfd/da9052/da9052.h>
-15
drivers/xen/platform-pci.c
··· 101 101 return 0; 102 102 } 103 103 104 - static void __devinit prepare_shared_info(void) 105 - { 106 - #ifdef CONFIG_KEXEC 107 - unsigned long addr; 108 - struct shared_info *hvm_shared_info; 109 - 110 - addr = alloc_xen_mmio(PAGE_SIZE); 111 - hvm_shared_info = ioremap(addr, PAGE_SIZE); 112 - memset(hvm_shared_info, 0, PAGE_SIZE); 113 - xen_hvm_prepare_kexec(hvm_shared_info, addr >> PAGE_SHIFT); 114 - #endif 115 - } 116 - 117 104 static int __devinit platform_pci_init(struct pci_dev *pdev, 118 105 const struct pci_device_id *ent) 119 106 { ··· 137 150 138 151 platform_mmio = mmio_addr; 139 152 platform_mmiolen = mmio_len; 140 - 141 - prepare_shared_info(); 142 153 143 154 if (!xen_have_vector_callback) { 144 155 ret = xen_allocate_irq(pdev);
+6 -5
fs/bio.c
··· 73 73 { 74 74 unsigned int sz = sizeof(struct bio) + extra_size; 75 75 struct kmem_cache *slab = NULL; 76 - struct bio_slab *bslab; 76 + struct bio_slab *bslab, *new_bio_slabs; 77 77 unsigned int i, entry = -1; 78 78 79 79 mutex_lock(&bio_slab_lock); ··· 97 97 98 98 if (bio_slab_nr == bio_slab_max && entry == -1) { 99 99 bio_slab_max <<= 1; 100 - bio_slabs = krealloc(bio_slabs, 101 - bio_slab_max * sizeof(struct bio_slab), 102 - GFP_KERNEL); 103 - if (!bio_slabs) 100 + new_bio_slabs = krealloc(bio_slabs, 101 + bio_slab_max * sizeof(struct bio_slab), 102 + GFP_KERNEL); 103 + if (!new_bio_slabs) 104 104 goto out_unlock; 105 + bio_slabs = new_bio_slabs; 105 106 } 106 107 if (entry == -1) 107 108 entry = bio_slab_nr++;
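The fs/bio.c fix is the classic `krealloc` pitfall: assigning the result straight back to `bio_slabs` loses the only pointer to the old array when the reallocation fails. The fix assigns to a temporary first. The same pattern in user-space C with `realloc`:

```c
#include <stdlib.h>

/* Grow an array without leaking it on failure: realloc()'s result goes
 * into a temporary, exactly as the bio_slabs fix does with krealloc().
 * On failure the original block is untouched and still owned by *arr. */
static int grow_array(int **arr, size_t *cap)
{
    size_t new_cap = *cap ? *cap * 2 : 4;
    int *tmp = realloc(*arr, new_cap * sizeof(**arr));

    if (!tmp)
        return -1;      /* *arr remains valid and can still be freed */
    *arr = tmp;
    *cap = new_cap;
    return 0;
}
```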
+3
fs/block_dev.c
··· 1578 1578 unsigned long nr_segs, loff_t pos) 1579 1579 { 1580 1580 struct file *file = iocb->ki_filp; 1581 + struct blk_plug plug; 1581 1582 ssize_t ret; 1582 1583 1583 1584 BUG_ON(iocb->ki_pos != pos); 1584 1585 1586 + blk_start_plug(&plug); 1585 1587 ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos); 1586 1588 if (ret > 0 || ret == -EIOCBQUEUED) { 1587 1589 ssize_t err; ··· 1592 1590 if (err < 0 && ret > 0) 1593 1591 ret = err; 1594 1592 } 1593 + blk_finish_plug(&plug); 1595 1594 return ret; 1596 1595 } 1597 1596 EXPORT_SYMBOL_GPL(blkdev_aio_write);
+2 -2
fs/btrfs/backref.c
··· 1438 1438 ret = extent_from_logical(fs_info, logical, path, 1439 1439 &found_key); 1440 1440 btrfs_release_path(path); 1441 - if (ret & BTRFS_EXTENT_FLAG_TREE_BLOCK) 1442 - ret = -EINVAL; 1443 1441 if (ret < 0) 1444 1442 return ret; 1443 + if (ret & BTRFS_EXTENT_FLAG_TREE_BLOCK) 1444 + return -EINVAL; 1445 1445 1446 1446 extent_item_pos = logical - found_key.objectid; 1447 1447 ret = iterate_extent_inodes(fs_info, found_key.objectid,
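The backref.c fix reorders the checks: a return value that is either a negative errno or a bitmask of extent flags must be tested for error before any flag bit is examined, since a negative value can accidentally match a flag bit. A sketch of that ordering with a hypothetical flag constant:

```c
/* Illustrative stand-in for BTRFS_EXTENT_FLAG_TREE_BLOCK. */
#define FLAG_TREE_BLOCK 0x2

/* Mixed-meaning return values (negative errno OR flag bits) must be
 * error-checked first; in two's complement, e.g. -2 & 0x2 is nonzero,
 * so testing the flag first would misclassify an error. */
static int classify(int ret)
{
    if (ret < 0)
        return ret;          /* propagate the errno untouched */
    if (ret & FLAG_TREE_BLOCK)
        return -22;          /* -EINVAL: wrong kind of extent */
    return 0;
}
```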
+1
fs/btrfs/compression.c
··· 818 818 btrfs_compress_op[idx]->free_workspace(workspace); 819 819 atomic_dec(alloc_workspace); 820 820 wake: 821 + smp_mb(); 821 822 if (waitqueue_active(workspace_wait)) 822 823 wake_up(workspace_wait); 823 824 }
+3 -6
fs/btrfs/ctree.c
··· 421 421 spin_unlock(&fs_info->tree_mod_seq_lock); 422 422 423 423 /* 424 - * we removed the lowest blocker from the blocker list, so there may be 425 - * more processible delayed refs. 426 - */ 427 - wake_up(&fs_info->tree_mod_seq_wait); 428 - 429 - /* 430 424 * anything that's lower than the lowest existing (read: blocked) 431 425 * sequence number can be removed from the tree. 432 426 */ ··· 624 630 int i; 625 631 u32 nritems; 626 632 int ret; 633 + 634 + if (btrfs_header_level(eb) == 0) 635 + return; 627 636 628 637 nritems = btrfs_header_nritems(eb); 629 638 for (i = nritems - 1; i >= 0; i--) {
+1 -2
fs/btrfs/ctree.h
··· 1252 1252 atomic_t tree_mod_seq; 1253 1253 struct list_head tree_mod_seq_list; 1254 1254 struct seq_list tree_mod_seq_elem; 1255 - wait_queue_head_t tree_mod_seq_wait; 1256 1255 1257 1256 /* this protects tree_mod_log */ 1258 1257 rwlock_t tree_mod_log_lock; ··· 3191 3192 int btrfs_lookup_bio_sums(struct btrfs_root *root, struct inode *inode, 3192 3193 struct bio *bio, u32 *dst); 3193 3194 int btrfs_lookup_bio_sums_dio(struct btrfs_root *root, struct inode *inode, 3194 - struct bio *bio, u64 logical_offset, u32 *dst); 3195 + struct bio *bio, u64 logical_offset); 3195 3196 int btrfs_insert_file_extent(struct btrfs_trans_handle *trans, 3196 3197 struct btrfs_root *root, 3197 3198 u64 objectid, u64 pos,
+6 -6
fs/btrfs/delayed-inode.c
··· 512 512 513 513 rb_erase(&delayed_item->rb_node, root); 514 514 delayed_item->delayed_node->count--; 515 - atomic_dec(&delayed_root->items); 516 - if (atomic_read(&delayed_root->items) < BTRFS_DELAYED_BACKGROUND && 515 + if (atomic_dec_return(&delayed_root->items) < 516 + BTRFS_DELAYED_BACKGROUND && 517 517 waitqueue_active(&delayed_root->wait)) 518 518 wake_up(&delayed_root->wait); 519 519 } ··· 1028 1028 btrfs_release_delayed_item(prev); 1029 1029 ret = 0; 1030 1030 btrfs_release_path(path); 1031 - if (curr) 1031 + if (curr) { 1032 + mutex_unlock(&node->mutex); 1032 1033 goto do_again; 1033 - else 1034 + } else 1034 1035 goto delete_fail; 1035 1036 } 1036 1037 ··· 1056 1055 delayed_node->count--; 1057 1056 1058 1057 delayed_root = delayed_node->root->fs_info->delayed_root; 1059 - atomic_dec(&delayed_root->items); 1060 - if (atomic_read(&delayed_root->items) < 1058 + if (atomic_dec_return(&delayed_root->items) < 1061 1059 BTRFS_DELAYED_BACKGROUND && 1062 1060 waitqueue_active(&delayed_root->wait)) 1063 1061 wake_up(&delayed_root->wait);
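The delayed-inode change folds `atomic_dec()` plus a separate `atomic_read()` into `atomic_dec_return()`, so the threshold comparison sees the exact value this CPU's decrement produced instead of a later, possibly different, snapshot. A C11 sketch of the same single-step decrement-and-test:

```c
#include <stdatomic.h>

/* Decrement and test in one atomic step: atomic_fetch_sub() returns the
 * value *before* the subtraction, so (old - 1) is precisely the value
 * this decrement produced, with no racy re-read of the counter. */
static int dec_and_check_below(atomic_int *items, int threshold)
{
    int newval = atomic_fetch_sub(items, 1) - 1;

    return newval < threshold;   /* caller would wake waiters if true */
}
```

With the two-call version, another thread's increment or decrement can land between the `dec` and the `read`, making the wakeup decision depend on someone else's update.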
+128 -35
fs/btrfs/delayed-ref.c
··· 38 38 static int comp_tree_refs(struct btrfs_delayed_tree_ref *ref2, 39 39 struct btrfs_delayed_tree_ref *ref1) 40 40 { 41 - if (ref1->node.type == BTRFS_TREE_BLOCK_REF_KEY) { 42 - if (ref1->root < ref2->root) 43 - return -1; 44 - if (ref1->root > ref2->root) 45 - return 1; 46 - } else { 47 - if (ref1->parent < ref2->parent) 48 - return -1; 49 - if (ref1->parent > ref2->parent) 50 - return 1; 51 - } 41 + if (ref1->root < ref2->root) 42 + return -1; 43 + if (ref1->root > ref2->root) 44 + return 1; 45 + if (ref1->parent < ref2->parent) 46 + return -1; 47 + if (ref1->parent > ref2->parent) 48 + return 1; 52 49 return 0; 53 50 } 54 51 ··· 82 85 * type of the delayed backrefs and content of delayed backrefs. 83 86 */ 84 87 static int comp_entry(struct btrfs_delayed_ref_node *ref2, 85 - struct btrfs_delayed_ref_node *ref1) 88 + struct btrfs_delayed_ref_node *ref1, 89 + bool compare_seq) 86 90 { 87 91 if (ref1->bytenr < ref2->bytenr) 88 92 return -1; ··· 100 102 if (ref1->type > ref2->type) 101 103 return 1; 102 104 /* merging of sequenced refs is not allowed */ 103 - if (ref1->seq < ref2->seq) 104 - return -1; 105 - if (ref1->seq > ref2->seq) 106 - return 1; 105 + if (compare_seq) { 106 + if (ref1->seq < ref2->seq) 107 + return -1; 108 + if (ref1->seq > ref2->seq) 109 + return 1; 110 + } 107 111 if (ref1->type == BTRFS_TREE_BLOCK_REF_KEY || 108 112 ref1->type == BTRFS_SHARED_BLOCK_REF_KEY) { 109 113 return comp_tree_refs(btrfs_delayed_node_to_tree_ref(ref2), ··· 139 139 entry = rb_entry(parent_node, struct btrfs_delayed_ref_node, 140 140 rb_node); 141 141 142 - cmp = comp_entry(entry, ins); 142 + cmp = comp_entry(entry, ins, 1); 143 143 if (cmp < 0) 144 144 p = &(*p)->rb_left; 145 145 else if (cmp > 0) ··· 231 231 } 232 232 btrfs_put_delayed_ref(&head->node); 233 233 return 0; 234 + } 235 + 236 + static void inline drop_delayed_ref(struct btrfs_trans_handle *trans, 237 + struct btrfs_delayed_ref_root *delayed_refs, 238 + struct btrfs_delayed_ref_node *ref) 239 + { 
240 + rb_erase(&ref->rb_node, &delayed_refs->root); 241 + ref->in_tree = 0; 242 + btrfs_put_delayed_ref(ref); 243 + delayed_refs->num_entries--; 244 + if (trans->delayed_ref_updates) 245 + trans->delayed_ref_updates--; 246 + } 247 + 248 + static int merge_ref(struct btrfs_trans_handle *trans, 249 + struct btrfs_delayed_ref_root *delayed_refs, 250 + struct btrfs_delayed_ref_node *ref, u64 seq) 251 + { 252 + struct rb_node *node; 253 + int merged = 0; 254 + int mod = 0; 255 + int done = 0; 256 + 257 + node = rb_prev(&ref->rb_node); 258 + while (node) { 259 + struct btrfs_delayed_ref_node *next; 260 + 261 + next = rb_entry(node, struct btrfs_delayed_ref_node, rb_node); 262 + node = rb_prev(node); 263 + if (next->bytenr != ref->bytenr) 264 + break; 265 + if (seq && next->seq >= seq) 266 + break; 267 + if (comp_entry(ref, next, 0)) 268 + continue; 269 + 270 + if (ref->action == next->action) { 271 + mod = next->ref_mod; 272 + } else { 273 + if (ref->ref_mod < next->ref_mod) { 274 + struct btrfs_delayed_ref_node *tmp; 275 + 276 + tmp = ref; 277 + ref = next; 278 + next = tmp; 279 + done = 1; 280 + } 281 + mod = -next->ref_mod; 282 + } 283 + 284 + merged++; 285 + drop_delayed_ref(trans, delayed_refs, next); 286 + ref->ref_mod += mod; 287 + if (ref->ref_mod == 0) { 288 + drop_delayed_ref(trans, delayed_refs, ref); 289 + break; 290 + } else { 291 + /* 292 + * You can't have multiples of the same ref on a tree 293 + * block. 
294 + */ 295 + WARN_ON(ref->type == BTRFS_TREE_BLOCK_REF_KEY || 296 + ref->type == BTRFS_SHARED_BLOCK_REF_KEY); 297 + } 298 + 299 + if (done) 300 + break; 301 + node = rb_prev(&ref->rb_node); 302 + } 303 + 304 + return merged; 305 + } 306 + 307 + void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans, 308 + struct btrfs_fs_info *fs_info, 309 + struct btrfs_delayed_ref_root *delayed_refs, 310 + struct btrfs_delayed_ref_head *head) 311 + { 312 + struct rb_node *node; 313 + u64 seq = 0; 314 + 315 + spin_lock(&fs_info->tree_mod_seq_lock); 316 + if (!list_empty(&fs_info->tree_mod_seq_list)) { 317 + struct seq_list *elem; 318 + 319 + elem = list_first_entry(&fs_info->tree_mod_seq_list, 320 + struct seq_list, list); 321 + seq = elem->seq; 322 + } 323 + spin_unlock(&fs_info->tree_mod_seq_lock); 324 + 325 + node = rb_prev(&head->node.rb_node); 326 + while (node) { 327 + struct btrfs_delayed_ref_node *ref; 328 + 329 + ref = rb_entry(node, struct btrfs_delayed_ref_node, 330 + rb_node); 331 + if (ref->bytenr != head->node.bytenr) 332 + break; 333 + 334 + /* We can't merge refs that are outside of our seq count */ 335 + if (seq && ref->seq >= seq) 336 + break; 337 + if (merge_ref(trans, delayed_refs, ref, seq)) 338 + node = rb_prev(&head->node.rb_node); 339 + else 340 + node = rb_prev(node); 341 + } 234 342 } 235 343 236 344 int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info, ··· 444 336 * every changing the extent allocation tree. 
445 337 */ 446 338 existing->ref_mod--; 447 - if (existing->ref_mod == 0) { 448 - rb_erase(&existing->rb_node, 449 - &delayed_refs->root); 450 - existing->in_tree = 0; 451 - btrfs_put_delayed_ref(existing); 452 - delayed_refs->num_entries--; 453 - if (trans->delayed_ref_updates) 454 - trans->delayed_ref_updates--; 455 - } else { 339 + if (existing->ref_mod == 0) 340 + drop_delayed_ref(trans, delayed_refs, existing); 341 + else 456 342 WARN_ON(existing->type == BTRFS_TREE_BLOCK_REF_KEY || 457 343 existing->type == BTRFS_SHARED_BLOCK_REF_KEY); 458 - } 459 344 } else { 460 345 WARN_ON(existing->type == BTRFS_TREE_BLOCK_REF_KEY || 461 346 existing->type == BTRFS_SHARED_BLOCK_REF_KEY); ··· 763 662 add_delayed_tree_ref(fs_info, trans, &ref->node, bytenr, 764 663 num_bytes, parent, ref_root, level, action, 765 664 for_cow); 766 - if (!need_ref_seq(for_cow, ref_root) && 767 - waitqueue_active(&fs_info->tree_mod_seq_wait)) 768 - wake_up(&fs_info->tree_mod_seq_wait); 769 665 spin_unlock(&delayed_refs->lock); 770 666 if (need_ref_seq(for_cow, ref_root)) 771 667 btrfs_qgroup_record_ref(trans, &ref->node, extent_op); ··· 811 713 add_delayed_data_ref(fs_info, trans, &ref->node, bytenr, 812 714 num_bytes, parent, ref_root, owner, offset, 813 715 action, for_cow); 814 - if (!need_ref_seq(for_cow, ref_root) && 815 - waitqueue_active(&fs_info->tree_mod_seq_wait)) 816 - wake_up(&fs_info->tree_mod_seq_wait); 817 716 spin_unlock(&delayed_refs->lock); 818 717 if (need_ref_seq(for_cow, ref_root)) 819 718 btrfs_qgroup_record_ref(trans, &ref->node, extent_op); ··· 839 744 num_bytes, BTRFS_UPDATE_DELAYED_HEAD, 840 745 extent_op->is_data); 841 746 842 - if (waitqueue_active(&fs_info->tree_mod_seq_wait)) 843 - wake_up(&fs_info->tree_mod_seq_wait); 844 747 spin_unlock(&delayed_refs->lock); 845 748 return 0; 846 749 }
+4
fs/btrfs/delayed-ref.h
··· 167 167 struct btrfs_trans_handle *trans, 168 168 u64 bytenr, u64 num_bytes, 169 169 struct btrfs_delayed_extent_op *extent_op); 170 + void btrfs_merge_delayed_refs(struct btrfs_trans_handle *trans, 171 + struct btrfs_fs_info *fs_info, 172 + struct btrfs_delayed_ref_root *delayed_refs, 173 + struct btrfs_delayed_ref_head *head); 170 174 171 175 struct btrfs_delayed_ref_head * 172 176 btrfs_find_delayed_ref_head(struct btrfs_trans_handle *trans, u64 bytenr);
+14 -39
fs/btrfs/disk-io.c
··· 377 377 ret = read_extent_buffer_pages(io_tree, eb, start, 378 378 WAIT_COMPLETE, 379 379 btree_get_extent, mirror_num); 380 - if (!ret && !verify_parent_transid(io_tree, eb, 380 + if (!ret) { 381 + if (!verify_parent_transid(io_tree, eb, 381 382 parent_transid, 0)) 382 - break; 383 + break; 384 + else 385 + ret = -EIO; 386 + } 383 387 384 388 /* 385 389 * This buffer's crc is fine, but its contents are corrupted, so ··· 758 754 limit = btrfs_async_submit_limit(fs_info); 759 755 limit = limit * 2 / 3; 760 756 761 - atomic_dec(&fs_info->nr_async_submits); 762 - 763 - if (atomic_read(&fs_info->nr_async_submits) < limit && 757 + if (atomic_dec_return(&fs_info->nr_async_submits) < limit && 764 758 waitqueue_active(&fs_info->async_submit_wait)) 765 759 wake_up(&fs_info->async_submit_wait); 766 760 ··· 2034 2032 fs_info->free_chunk_space = 0; 2035 2033 fs_info->tree_mod_log = RB_ROOT; 2036 2034 2037 - init_waitqueue_head(&fs_info->tree_mod_seq_wait); 2038 - 2039 2035 /* readahead state */ 2040 2036 INIT_RADIX_TREE(&fs_info->reada_tree, GFP_NOFS & ~__GFP_WAIT); 2041 2037 spin_lock_init(&fs_info->reada_lock); ··· 2528 2528 goto fail_trans_kthread; 2529 2529 2530 2530 /* do not make disk changes in broken FS */ 2531 - if (btrfs_super_log_root(disk_super) != 0 && 2532 - !(fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR)) { 2531 + if (btrfs_super_log_root(disk_super) != 0) { 2533 2532 u64 bytenr = btrfs_super_log_root(disk_super); 2534 2533 2535 2534 if (fs_devices->rw_devices == 0) { ··· 3188 3189 /* clear out the rbtree of defraggable inodes */ 3189 3190 btrfs_run_defrag_inodes(fs_info); 3190 3191 3191 - /* 3192 - * Here come 2 situations when btrfs is broken to flip readonly: 3193 - * 3194 - * 1. when btrfs flips readonly somewhere else before 3195 - * btrfs_commit_super, sb->s_flags has MS_RDONLY flag, 3196 - * and btrfs will skip to write sb directly to keep 3197 - * ERROR state on disk. 3198 - * 3199 - * 2. 
when btrfs flips readonly just in btrfs_commit_super, 3200 - * and in such case, btrfs cannot write sb via btrfs_commit_super, 3201 - * and since fs_state has been set BTRFS_SUPER_FLAG_ERROR flag, 3202 - * btrfs will cleanup all FS resources first and write sb then. 3203 - */ 3204 3192 if (!(fs_info->sb->s_flags & MS_RDONLY)) { 3205 3193 ret = btrfs_commit_super(root); 3206 3194 if (ret) 3207 3195 printk(KERN_ERR "btrfs: commit super ret %d\n", ret); 3208 3196 } 3209 3197 3210 - if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) { 3211 - ret = btrfs_error_commit_super(root); 3212 - if (ret) 3213 - printk(KERN_ERR "btrfs: commit super ret %d\n", ret); 3214 - } 3198 + if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) 3199 + btrfs_error_commit_super(root); 3215 3200 3216 3201 btrfs_put_block_group_cache(fs_info); 3217 3202 ··· 3417 3434 if (read_only) 3418 3435 return 0; 3419 3436 3420 - if (fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR) { 3421 - printk(KERN_WARNING "warning: mount fs with errors, " 3422 - "running btrfsck is recommended\n"); 3423 - } 3424 - 3425 3437 return 0; 3426 3438 } 3427 3439 3428 - int btrfs_error_commit_super(struct btrfs_root *root) 3440 + void btrfs_error_commit_super(struct btrfs_root *root) 3429 3441 { 3430 - int ret; 3431 - 3432 3442 mutex_lock(&root->fs_info->cleaner_mutex); 3433 3443 btrfs_run_delayed_iputs(root); 3434 3444 mutex_unlock(&root->fs_info->cleaner_mutex); ··· 3431 3455 3432 3456 /* cleanup FS via transaction */ 3433 3457 btrfs_cleanup_transaction(root); 3434 - 3435 - ret = write_ctree_super(NULL, root, 0); 3436 - 3437 - return ret; 3438 3458 } 3439 3459 3440 3460 static void btrfs_destroy_ordered_operations(struct btrfs_root *root) ··· 3754 3782 /* FIXME: cleanup wait for commit */ 3755 3783 t->in_commit = 1; 3756 3784 t->blocked = 1; 3785 + smp_mb(); 3757 3786 if (waitqueue_active(&root->fs_info->transaction_blocked_wait)) 3758 3787 wake_up(&root->fs_info->transaction_blocked_wait); 3759 3788 3760 3789 t->blocked = 0; 3790 + 
smp_mb(); 3761 3791 if (waitqueue_active(&root->fs_info->transaction_wait)) 3762 3792 wake_up(&root->fs_info->transaction_wait); 3763 3793 3764 3794 t->commit_done = 1; 3795 + smp_mb(); 3765 3796 if (waitqueue_active(&t->commit_wait)) 3766 3797 wake_up(&t->commit_wait); 3767 3798
+1 -1
fs/btrfs/disk-io.h
··· 54 54 struct btrfs_root *root, int max_mirrors); 55 55 struct buffer_head *btrfs_read_dev_super(struct block_device *bdev); 56 56 int btrfs_commit_super(struct btrfs_root *root); 57 - int btrfs_error_commit_super(struct btrfs_root *root); 57 + void btrfs_error_commit_super(struct btrfs_root *root); 58 58 struct extent_buffer *btrfs_find_tree_block(struct btrfs_root *root, 59 59 u64 bytenr, u32 blocksize); 60 60 struct btrfs_root *btrfs_read_fs_root_no_radix(struct btrfs_root *tree_root,
+58 -65
fs/btrfs/extent-tree.c
··· 2252 2252 } 2253 2253 2254 2254 /* 2255 + * We need to try and merge add/drops of the same ref since we 2256 + * can run into issues with relocate dropping the implicit ref 2257 + * and then it being added back again before the drop can 2258 + * finish. If we merged anything we need to re-loop so we can 2259 + * get a good ref. 2260 + */ 2261 + btrfs_merge_delayed_refs(trans, fs_info, delayed_refs, 2262 + locked_ref); 2263 + 2264 + /* 2255 2265 * locked_ref is the head node, so we have to go one 2256 2266 * node back for any delayed ref updates 2257 2267 */ ··· 2328 2318 ref->in_tree = 0; 2329 2319 rb_erase(&ref->rb_node, &delayed_refs->root); 2330 2320 delayed_refs->num_entries--; 2331 - /* 2332 - * we modified num_entries, but as we're currently running 2333 - * delayed refs, skip 2334 - * wake_up(&delayed_refs->seq_wait); 2335 - * here. 2336 - */ 2321 + if (locked_ref) { 2322 + /* 2323 + * when we play the delayed ref, also correct the 2324 + * ref_mod on head 2325 + */ 2326 + switch (ref->action) { 2327 + case BTRFS_ADD_DELAYED_REF: 2328 + case BTRFS_ADD_DELAYED_EXTENT: 2329 + locked_ref->node.ref_mod -= ref->ref_mod; 2330 + break; 2331 + case BTRFS_DROP_DELAYED_REF: 2332 + locked_ref->node.ref_mod += ref->ref_mod; 2333 + break; 2334 + default: 2335 + WARN_ON(1); 2336 + } 2337 + } 2337 2338 spin_unlock(&delayed_refs->lock); 2338 2339 2339 2340 ret = run_one_delayed_ref(trans, root, ref, extent_op, ··· 2369 2348 spin_lock(&delayed_refs->lock); 2370 2349 } 2371 2350 return count; 2372 - } 2373 - 2374 - static void wait_for_more_refs(struct btrfs_fs_info *fs_info, 2375 - struct btrfs_delayed_ref_root *delayed_refs, 2376 - unsigned long num_refs, 2377 - struct list_head *first_seq) 2378 - { 2379 - spin_unlock(&delayed_refs->lock); 2380 - pr_debug("waiting for more refs (num %ld, first %p)\n", 2381 - num_refs, first_seq); 2382 - wait_event(fs_info->tree_mod_seq_wait, 2383 - num_refs != delayed_refs->num_entries || 2384 - fs_info->tree_mod_seq_list.next != 
first_seq); 2385 - pr_debug("done waiting for more refs (num %ld, first %p)\n", 2386 - delayed_refs->num_entries, fs_info->tree_mod_seq_list.next); 2387 - spin_lock(&delayed_refs->lock); 2388 2351 } 2389 2352 2390 2353 #ifdef SCRAMBLE_DELAYED_REFS ··· 2465 2460 struct btrfs_delayed_ref_root *delayed_refs; 2466 2461 struct btrfs_delayed_ref_node *ref; 2467 2462 struct list_head cluster; 2468 - struct list_head *first_seq = NULL; 2469 2463 int ret; 2470 2464 u64 delayed_start; 2471 2465 int run_all = count == (unsigned long)-1; 2472 2466 int run_most = 0; 2473 - unsigned long num_refs = 0; 2474 - int consider_waiting; 2467 + int loops; 2475 2468 2476 2469 /* We'll clean this up in btrfs_cleanup_transaction */ 2477 2470 if (trans->aborted) ··· 2487 2484 delayed_refs = &trans->transaction->delayed_refs; 2488 2485 INIT_LIST_HEAD(&cluster); 2489 2486 again: 2490 - consider_waiting = 0; 2487 + loops = 0; 2491 2488 spin_lock(&delayed_refs->lock); 2492 2489 2493 2490 #ifdef SCRAMBLE_DELAYED_REFS ··· 2515 2512 if (ret) 2516 2513 break; 2517 2514 2518 - if (delayed_start >= delayed_refs->run_delayed_start) { 2519 - if (consider_waiting == 0) { 2520 - /* 2521 - * btrfs_find_ref_cluster looped. let's do one 2522 - * more cycle. if we don't run any delayed ref 2523 - * during that cycle (because we can't because 2524 - * all of them are blocked) and if the number of 2525 - * refs doesn't change, we avoid busy waiting. 2526 - */ 2527 - consider_waiting = 1; 2528 - num_refs = delayed_refs->num_entries; 2529 - first_seq = root->fs_info->tree_mod_seq_list.next; 2530 - } else { 2531 - wait_for_more_refs(root->fs_info, delayed_refs, 2532 - num_refs, first_seq); 2533 - /* 2534 - * after waiting, things have changed. we 2535 - * dropped the lock and someone else might have 2536 - * run some refs, built new clusters and so on. 2537 - * therefore, we restart staleness detection. 
2538 - */ 2539 - consider_waiting = 0; 2540 - } 2541 - } 2542 - 2543 2515 ret = run_clustered_refs(trans, root, &cluster); 2544 2516 if (ret < 0) { 2545 2517 spin_unlock(&delayed_refs->lock); ··· 2527 2549 if (count == 0) 2528 2550 break; 2529 2551 2530 - if (ret || delayed_refs->run_delayed_start == 0) { 2552 + if (delayed_start >= delayed_refs->run_delayed_start) { 2553 + if (loops == 0) { 2554 + /* 2555 + * btrfs_find_ref_cluster looped. let's do one 2556 + * more cycle. if we don't run any delayed ref 2557 + * during that cycle (because we can't because 2558 + * all of them are blocked), bail out. 2559 + */ 2560 + loops = 1; 2561 + } else { 2562 + /* 2563 + * no runnable refs left, stop trying 2564 + */ 2565 + BUG_ON(run_all); 2566 + break; 2567 + } 2568 + } 2569 + if (ret) { 2531 2570 /* refs were run, let's reset staleness detection */ 2532 - consider_waiting = 0; 2571 + loops = 0; 2533 2572 } 2534 2573 } 2535 2574 ··· 3002 3007 } 3003 3008 spin_unlock(&block_group->lock); 3004 3009 3005 - num_pages = (int)div64_u64(block_group->key.offset, 1024 * 1024 * 1024); 3010 + /* 3011 + * Try to preallocate enough space based on how big the block group is. 3012 + * Keep in mind this has to include any pinned space which could end up 3013 + * taking up quite a bit since it's not folded into the other space 3014 + * cache. 3015 + */ 3016 + num_pages = (int)div64_u64(block_group->key.offset, 256 * 1024 * 1024); 3006 3017 if (!num_pages) 3007 3018 num_pages = 1; 3008 3019 3009 - /* 3010 - * Just to make absolutely sure we have enough space, we're going to 3011 - * preallocate 12 pages worth of space for each block group. In 3012 - * practice we ought to use at most 8, but we need extra space so we can 3013 - * add our header and have a terminator between the extents and the 3014 - * bitmaps. 
3015 - */ 3016 3020 num_pages *= 16; 3017 3021 num_pages *= PAGE_CACHE_SIZE; 3018 3022 ··· 4565 4571 if (root->fs_info->quota_enabled) { 4566 4572 ret = btrfs_qgroup_reserve(root, num_bytes + 4567 4573 nr_extents * root->leafsize); 4568 - if (ret) 4574 + if (ret) { 4575 + mutex_unlock(&BTRFS_I(inode)->delalloc_mutex); 4569 4576 return ret; 4577 + } 4570 4578 } 4571 4579 4572 4580 ret = reserve_metadata_bytes(root, block_rsv, to_reserve, flush); ··· 5290 5294 rb_erase(&head->node.rb_node, &delayed_refs->root); 5291 5295 5292 5296 delayed_refs->num_entries--; 5293 - smp_mb(); 5294 - if (waitqueue_active(&root->fs_info->tree_mod_seq_wait)) 5295 - wake_up(&root->fs_info->tree_mod_seq_wait); 5296 5297 5297 5298 /* 5298 5299 * we don't take a ref on the node because we're removing it from the
+2 -15
fs/btrfs/extent_io.c
··· 2330 2330 if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) { 2331 2331 ret = tree->ops->readpage_end_io_hook(page, start, end, 2332 2332 state, mirror); 2333 - if (ret) { 2334 - /* no IO indicated but software detected errors 2335 - * in the block, either checksum errors or 2336 - * issues with the contents */ 2337 - struct btrfs_root *root = 2338 - BTRFS_I(page->mapping->host)->root; 2339 - struct btrfs_device *device; 2340 - 2333 + if (ret) 2341 2334 uptodate = 0; 2342 - device = btrfs_find_device_for_logical( 2343 - root, start, mirror); 2344 - if (device) 2345 - btrfs_dev_stat_inc_and_print(device, 2346 - BTRFS_DEV_STAT_CORRUPTION_ERRS); 2347 - } else { 2335 + else 2348 2336 clean_io_failure(start, page); 2349 - } 2350 2337 } 2351 2338 2352 2339 if (!uptodate && tree->ops && tree->ops->readpage_io_failed_hook) {
+2 -2
fs/btrfs/file-item.c
··· 272 272 } 273 273 274 274 int btrfs_lookup_bio_sums_dio(struct btrfs_root *root, struct inode *inode, 275 - struct bio *bio, u64 offset, u32 *dst) 275 + struct bio *bio, u64 offset) 276 276 { 277 - return __btrfs_lookup_bio_sums(root, inode, bio, offset, dst, 1); 277 + return __btrfs_lookup_bio_sums(root, inode, bio, offset, NULL, 1); 278 278 } 279 279 280 280 int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end,
+164 -162
fs/btrfs/inode.c
··· 1008 1008 nr_pages = (async_cow->end - async_cow->start + PAGE_CACHE_SIZE) >> 1009 1009 PAGE_CACHE_SHIFT; 1010 1010 1011 - atomic_sub(nr_pages, &root->fs_info->async_delalloc_pages); 1012 - 1013 - if (atomic_read(&root->fs_info->async_delalloc_pages) < 1011 + if (atomic_sub_return(nr_pages, &root->fs_info->async_delalloc_pages) < 1014 1012 5 * 1024 * 1024 && 1015 1013 waitqueue_active(&root->fs_info->async_submit_wait)) 1016 1014 wake_up(&root->fs_info->async_submit_wait); ··· 1883 1885 trans = btrfs_join_transaction_nolock(root); 1884 1886 else 1885 1887 trans = btrfs_join_transaction(root); 1886 - if (IS_ERR(trans)) 1887 - return PTR_ERR(trans); 1888 + if (IS_ERR(trans)) { 1889 + ret = PTR_ERR(trans); 1890 + trans = NULL; 1891 + goto out; 1892 + } 1888 1893 trans->block_rsv = &root->fs_info->delalloc_block_rsv; 1889 1894 ret = btrfs_update_inode_fallback(trans, root, inode); 1890 1895 if (ret) /* -ENOMEM or corruption */ ··· 3175 3174 btrfs_i_size_write(dir, dir->i_size - name_len * 2); 3176 3175 inode_inc_iversion(dir); 3177 3176 dir->i_mtime = dir->i_ctime = CURRENT_TIME; 3178 - ret = btrfs_update_inode(trans, root, dir); 3177 + ret = btrfs_update_inode_fallback(trans, root, dir); 3179 3178 if (ret) 3180 3179 btrfs_abort_transaction(trans, root, ret); 3181 3180 out: ··· 5775 5774 return ret; 5776 5775 } 5777 5776 5777 + static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend, 5778 + struct extent_state **cached_state, int writing) 5779 + { 5780 + struct btrfs_ordered_extent *ordered; 5781 + int ret = 0; 5782 + 5783 + while (1) { 5784 + lock_extent_bits(&BTRFS_I(inode)->io_tree, lockstart, lockend, 5785 + 0, cached_state); 5786 + /* 5787 + * We're concerned with the entire range that we're going to be 5788 + * doing DIO to, so we need to make sure theres no ordered 5789 + * extents in this range. 
5790 + */ 5791 + ordered = btrfs_lookup_ordered_range(inode, lockstart, 5792 + lockend - lockstart + 1); 5793 + 5794 + /* 5795 + * We need to make sure there are no buffered pages in this 5796 + * range either, we could have raced between the invalidate in 5797 + * generic_file_direct_write and locking the extent. The 5798 + * invalidate needs to happen so that reads after a write do not 5799 + * get stale data. 5800 + */ 5801 + if (!ordered && (!writing || 5802 + !test_range_bit(&BTRFS_I(inode)->io_tree, 5803 + lockstart, lockend, EXTENT_UPTODATE, 0, 5804 + *cached_state))) 5805 + break; 5806 + 5807 + unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 5808 + cached_state, GFP_NOFS); 5809 + 5810 + if (ordered) { 5811 + btrfs_start_ordered_extent(inode, ordered, 1); 5812 + btrfs_put_ordered_extent(ordered); 5813 + } else { 5814 + /* Screw you mmap */ 5815 + ret = filemap_write_and_wait_range(inode->i_mapping, 5816 + lockstart, 5817 + lockend); 5818 + if (ret) 5819 + break; 5820 + 5821 + /* 5822 + * If we found a page that couldn't be invalidated just 5823 + * fall back to buffered. 
5824 + */ 5825 + ret = invalidate_inode_pages2_range(inode->i_mapping, 5826 + lockstart >> PAGE_CACHE_SHIFT, 5827 + lockend >> PAGE_CACHE_SHIFT); 5828 + if (ret) 5829 + break; 5830 + } 5831 + 5832 + cond_resched(); 5833 + } 5834 + 5835 + return ret; 5836 + } 5837 + 5778 5838 static int btrfs_get_blocks_direct(struct inode *inode, sector_t iblock, 5779 5839 struct buffer_head *bh_result, int create) 5780 5840 { 5781 5841 struct extent_map *em; 5782 5842 struct btrfs_root *root = BTRFS_I(inode)->root; 5843 + struct extent_state *cached_state = NULL; 5783 5844 u64 start = iblock << inode->i_blkbits; 5845 + u64 lockstart, lockend; 5784 5846 u64 len = bh_result->b_size; 5785 5847 struct btrfs_trans_handle *trans; 5848 + int unlock_bits = EXTENT_LOCKED; 5849 + int ret; 5850 + 5851 + if (create) { 5852 + ret = btrfs_delalloc_reserve_space(inode, len); 5853 + if (ret) 5854 + return ret; 5855 + unlock_bits |= EXTENT_DELALLOC | EXTENT_DIRTY; 5856 + } else { 5857 + len = min_t(u64, len, root->sectorsize); 5858 + } 5859 + 5860 + lockstart = start; 5861 + lockend = start + len - 1; 5862 + 5863 + /* 5864 + * If this errors out it's because we couldn't invalidate pagecache for 5865 + * this range and we need to fallback to buffered. 
5866 + */ 5867 + if (lock_extent_direct(inode, lockstart, lockend, &cached_state, create)) 5868 + return -ENOTBLK; 5869 + 5870 + if (create) { 5871 + ret = set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 5872 + lockend, EXTENT_DELALLOC, NULL, 5873 + &cached_state, GFP_NOFS); 5874 + if (ret) 5875 + goto unlock_err; 5876 + } 5786 5877 5787 5878 em = btrfs_get_extent(inode, NULL, 0, start, len, 0); 5788 - if (IS_ERR(em)) 5789 - return PTR_ERR(em); 5879 + if (IS_ERR(em)) { 5880 + ret = PTR_ERR(em); 5881 + goto unlock_err; 5882 + } 5790 5883 5791 5884 /* 5792 5885 * Ok for INLINE and COMPRESSED extents we need to fallback on buffered ··· 5899 5804 if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) || 5900 5805 em->block_start == EXTENT_MAP_INLINE) { 5901 5806 free_extent_map(em); 5902 - return -ENOTBLK; 5807 + ret = -ENOTBLK; 5808 + goto unlock_err; 5903 5809 } 5904 5810 5905 5811 /* Just a good old fashioned hole, return */ 5906 5812 if (!create && (em->block_start == EXTENT_MAP_HOLE || 5907 5813 test_bit(EXTENT_FLAG_PREALLOC, &em->flags))) { 5908 5814 free_extent_map(em); 5909 - /* DIO will do one hole at a time, so just unlock a sector */ 5910 - unlock_extent(&BTRFS_I(inode)->io_tree, start, 5911 - start + root->sectorsize - 1); 5912 - return 0; 5815 + ret = 0; 5816 + goto unlock_err; 5913 5817 } 5914 5818 5915 5819 /* ··· 5921 5827 * 5922 5828 */ 5923 5829 if (!create) { 5924 - len = em->len - (start - em->start); 5925 - goto map; 5830 + len = min(len, em->len - (start - em->start)); 5831 + lockstart = start + len; 5832 + goto unlock; 5926 5833 } 5927 5834 5928 5835 if (test_bit(EXTENT_FLAG_PREALLOC, &em->flags) || ··· 5955 5860 btrfs_end_transaction(trans, root); 5956 5861 if (ret) { 5957 5862 free_extent_map(em); 5958 - return ret; 5863 + goto unlock_err; 5959 5864 } 5960 5865 goto unlock; 5961 5866 } ··· 5968 5873 */ 5969 5874 len = bh_result->b_size; 5970 5875 em = btrfs_new_extent_direct(inode, em, start, len); 5971 - if (IS_ERR(em)) 5972 - return 
PTR_ERR(em); 5876 + if (IS_ERR(em)) { 5877 + ret = PTR_ERR(em); 5878 + goto unlock_err; 5879 + } 5973 5880 len = min(len, em->len - (start - em->start)); 5974 5881 unlock: 5975 - clear_extent_bit(&BTRFS_I(inode)->io_tree, start, start + len - 1, 5976 - EXTENT_LOCKED | EXTENT_DELALLOC | EXTENT_DIRTY, 1, 5977 - 0, NULL, GFP_NOFS); 5978 - map: 5979 5882 bh_result->b_blocknr = (em->block_start + (start - em->start)) >> 5980 5883 inode->i_blkbits; 5981 5884 bh_result->b_size = len; ··· 5991 5898 i_size_write(inode, start + len); 5992 5899 } 5993 5900 5901 + /* 5902 + * In the case of write we need to clear and unlock the entire range, 5903 + * in the case of read we need to unlock only the end area that we 5904 + * aren't using if there is any left over space. 5905 + */ 5906 + if (lockstart < lockend) { 5907 + if (create && len < lockend - lockstart) { 5908 + clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 5909 + lockstart + len - 1, unlock_bits, 1, 0, 5910 + &cached_state, GFP_NOFS); 5911 + /* 5912 + * Beside unlock, we also need to cleanup reserved space 5913 + * for the left range by attaching EXTENT_DO_ACCOUNTING. 
5914 + */ 5915 + clear_extent_bit(&BTRFS_I(inode)->io_tree, 5916 + lockstart + len, lockend, 5917 + unlock_bits | EXTENT_DO_ACCOUNTING, 5918 + 1, 0, NULL, GFP_NOFS); 5919 + } else { 5920 + clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 5921 + lockend, unlock_bits, 1, 0, 5922 + &cached_state, GFP_NOFS); 5923 + } 5924 + } else { 5925 + free_extent_state(cached_state); 5926 + } 5927 + 5994 5928 free_extent_map(em); 5995 5929 5996 5930 return 0; 5931 + 5932 + unlock_err: 5933 + if (create) 5934 + unlock_bits |= EXTENT_DO_ACCOUNTING; 5935 + 5936 + clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend, 5937 + unlock_bits, 1, 0, &cached_state, GFP_NOFS); 5938 + return ret; 5997 5939 } 5998 5940 5999 5941 struct btrfs_dio_private { ··· 6036 5908 u64 logical_offset; 6037 5909 u64 disk_bytenr; 6038 5910 u64 bytes; 6039 - u32 *csums; 6040 5911 void *private; 6041 5912 6042 5913 /* number of bios pending for this dio */ ··· 6055 5928 struct inode *inode = dip->inode; 6056 5929 struct btrfs_root *root = BTRFS_I(inode)->root; 6057 5930 u64 start; 6058 - u32 *private = dip->csums; 6059 5931 6060 5932 start = dip->logical_offset; 6061 5933 do { ··· 6062 5936 struct page *page = bvec->bv_page; 6063 5937 char *kaddr; 6064 5938 u32 csum = ~(u32)0; 5939 + u64 private = ~(u32)0; 6065 5940 unsigned long flags; 6066 5941 5942 + if (get_state_private(&BTRFS_I(inode)->io_tree, 5943 + start, &private)) 5944 + goto failed; 6067 5945 local_irq_save(flags); 6068 5946 kaddr = kmap_atomic(page); 6069 5947 csum = btrfs_csum_data(root, kaddr + bvec->bv_offset, ··· 6077 5947 local_irq_restore(flags); 6078 5948 6079 5949 flush_dcache_page(bvec->bv_page); 6080 - if (csum != *private) { 5950 + if (csum != private) { 5951 + failed: 6081 5952 printk(KERN_ERR "btrfs csum failed ino %llu off" 6082 5953 " %llu csum %u private %u\n", 6083 5954 (unsigned long long)btrfs_ino(inode), 6084 5955 (unsigned long long)start, 6085 - csum, *private); 5956 + csum, (unsigned)private); 6086 5957 err = 
-EIO; 6087 5958 } 6088 5959 } 6089 5960 6090 5961 start += bvec->bv_len; 6091 - private++; 6092 5962 bvec++; 6093 5963 } while (bvec <= bvec_end); 6094 5964 ··· 6096 5966 dip->logical_offset + dip->bytes - 1); 6097 5967 bio->bi_private = dip->private; 6098 5968 6099 - kfree(dip->csums); 6100 5969 kfree(dip); 6101 5970 6102 5971 /* If we had a csum failure make sure to clear the uptodate flag */ ··· 6201 6072 6202 6073 static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode, 6203 6074 int rw, u64 file_offset, int skip_sum, 6204 - u32 *csums, int async_submit) 6075 + int async_submit) 6205 6076 { 6206 6077 int write = rw & REQ_WRITE; 6207 6078 struct btrfs_root *root = BTRFS_I(inode)->root; ··· 6234 6105 if (ret) 6235 6106 goto err; 6236 6107 } else if (!skip_sum) { 6237 - ret = btrfs_lookup_bio_sums_dio(root, inode, bio, 6238 - file_offset, csums); 6108 + ret = btrfs_lookup_bio_sums_dio(root, inode, bio, file_offset); 6239 6109 if (ret) 6240 6110 goto err; 6241 6111 } ··· 6260 6132 u64 submit_len = 0; 6261 6133 u64 map_length; 6262 6134 int nr_pages = 0; 6263 - u32 *csums = dip->csums; 6264 6135 int ret = 0; 6265 6136 int async_submit = 0; 6266 - int write = rw & REQ_WRITE; 6267 6137 6268 6138 map_length = orig_bio->bi_size; 6269 6139 ret = btrfs_map_block(map_tree, READ, start_sector << 9, ··· 6297 6171 atomic_inc(&dip->pending_bios); 6298 6172 ret = __btrfs_submit_dio_bio(bio, inode, rw, 6299 6173 file_offset, skip_sum, 6300 - csums, async_submit); 6174 + async_submit); 6301 6175 if (ret) { 6302 6176 bio_put(bio); 6303 6177 atomic_dec(&dip->pending_bios); 6304 6178 goto out_err; 6305 6179 } 6306 6180 6307 - /* Write's use the ordered csums */ 6308 - if (!write && !skip_sum) 6309 - csums = csums + nr_pages; 6310 6181 start_sector += submit_len >> 9; 6311 6182 file_offset += submit_len; 6312 6183 ··· 6333 6210 6334 6211 submit: 6335 6212 ret = __btrfs_submit_dio_bio(bio, inode, rw, file_offset, skip_sum, 6336 - csums, async_submit); 6213 + 
async_submit); 6337 6214 if (!ret) 6338 6215 return 0; 6339 6216 ··· 6368 6245 if (!dip) { 6369 6246 ret = -ENOMEM; 6370 6247 goto free_ordered; 6371 - } 6372 - dip->csums = NULL; 6373 - 6374 - /* Write's use the ordered csum stuff, so we don't need dip->csums */ 6375 - if (!write && !skip_sum) { 6376 - dip->csums = kmalloc(sizeof(u32) * bio->bi_vcnt, GFP_NOFS); 6377 - if (!dip->csums) { 6378 - kfree(dip); 6379 - ret = -ENOMEM; 6380 - goto free_ordered; 6381 - } 6382 6248 } 6383 6249 6384 6250 dip->private = bio->bi_private; ··· 6453 6341 out: 6454 6342 return retval; 6455 6343 } 6344 + 6456 6345 static ssize_t btrfs_direct_IO(int rw, struct kiocb *iocb, 6457 6346 const struct iovec *iov, loff_t offset, 6458 6347 unsigned long nr_segs) 6459 6348 { 6460 6349 struct file *file = iocb->ki_filp; 6461 6350 struct inode *inode = file->f_mapping->host; 6462 - struct btrfs_ordered_extent *ordered; 6463 - struct extent_state *cached_state = NULL; 6464 - u64 lockstart, lockend; 6465 - ssize_t ret; 6466 - int writing = rw & WRITE; 6467 - int write_bits = 0; 6468 - size_t count = iov_length(iov, nr_segs); 6469 6351 6470 6352 if (check_direct_IO(BTRFS_I(inode)->root, rw, iocb, iov, 6471 - offset, nr_segs)) { 6353 + offset, nr_segs)) 6472 6354 return 0; 6473 - } 6474 6355 6475 - lockstart = offset; 6476 - lockend = offset + count - 1; 6477 - 6478 - if (writing) { 6479 - ret = btrfs_delalloc_reserve_space(inode, count); 6480 - if (ret) 6481 - goto out; 6482 - } 6483 - 6484 - while (1) { 6485 - lock_extent_bits(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6486 - 0, &cached_state); 6487 - /* 6488 - * We're concerned with the entire range that we're going to be 6489 - * doing DIO to, so we need to make sure theres no ordered 6490 - * extents in this range. 
6491 - */ 6492 - ordered = btrfs_lookup_ordered_range(inode, lockstart, 6493 - lockend - lockstart + 1); 6494 - 6495 - /* 6496 - * We need to make sure there are no buffered pages in this 6497 - * range either, we could have raced between the invalidate in 6498 - * generic_file_direct_write and locking the extent. The 6499 - * invalidate needs to happen so that reads after a write do not 6500 - * get stale data. 6501 - */ 6502 - if (!ordered && (!writing || 6503 - !test_range_bit(&BTRFS_I(inode)->io_tree, 6504 - lockstart, lockend, EXTENT_UPTODATE, 0, 6505 - cached_state))) 6506 - break; 6507 - 6508 - unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6509 - &cached_state, GFP_NOFS); 6510 - 6511 - if (ordered) { 6512 - btrfs_start_ordered_extent(inode, ordered, 1); 6513 - btrfs_put_ordered_extent(ordered); 6514 - } else { 6515 - /* Screw you mmap */ 6516 - ret = filemap_write_and_wait_range(file->f_mapping, 6517 - lockstart, 6518 - lockend); 6519 - if (ret) 6520 - goto out; 6521 - 6522 - /* 6523 - * If we found a page that couldn't be invalidated just 6524 - * fall back to buffered. 
6525 - */ 6526 - ret = invalidate_inode_pages2_range(file->f_mapping, 6527 - lockstart >> PAGE_CACHE_SHIFT, 6528 - lockend >> PAGE_CACHE_SHIFT); 6529 - if (ret) { 6530 - if (ret == -EBUSY) 6531 - ret = 0; 6532 - goto out; 6533 - } 6534 - } 6535 - 6536 - cond_resched(); 6537 - } 6538 - 6539 - /* 6540 - * we don't use btrfs_set_extent_delalloc because we don't want 6541 - * the dirty or uptodate bits 6542 - */ 6543 - if (writing) { 6544 - write_bits = EXTENT_DELALLOC | EXTENT_DO_ACCOUNTING; 6545 - ret = set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6546 - EXTENT_DELALLOC, NULL, &cached_state, 6547 - GFP_NOFS); 6548 - if (ret) { 6549 - clear_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, 6550 - lockend, EXTENT_LOCKED | write_bits, 6551 - 1, 0, &cached_state, GFP_NOFS); 6552 - goto out; 6553 - } 6554 - } 6555 - 6556 - free_extent_state(cached_state); 6557 - cached_state = NULL; 6558 - 6559 - ret = __blockdev_direct_IO(rw, iocb, inode, 6356 + return __blockdev_direct_IO(rw, iocb, inode, 6560 6357 BTRFS_I(inode)->root->fs_info->fs_devices->latest_bdev, 6561 6358 iov, offset, nr_segs, btrfs_get_blocks_direct, NULL, 6562 6359 btrfs_submit_direct, 0); 6563 - 6564 - if (ret < 0 && ret != -EIOCBQUEUED) { 6565 - clear_extent_bit(&BTRFS_I(inode)->io_tree, offset, 6566 - offset + iov_length(iov, nr_segs) - 1, 6567 - EXTENT_LOCKED | write_bits, 1, 0, 6568 - &cached_state, GFP_NOFS); 6569 - } else if (ret >= 0 && ret < iov_length(iov, nr_segs)) { 6570 - /* 6571 - * We're falling back to buffered, unlock the section we didn't 6572 - * do IO on. 6573 - */ 6574 - clear_extent_bit(&BTRFS_I(inode)->io_tree, offset + ret, 6575 - offset + iov_length(iov, nr_segs) - 1, 6576 - EXTENT_LOCKED | write_bits, 1, 0, 6577 - &cached_state, GFP_NOFS); 6578 - } 6579 - out: 6580 - free_extent_state(cached_state); 6581 - return ret; 6582 6360 } 6583 6361 6584 6362 static int btrfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+1 -1
fs/btrfs/ioctl.c
··· 424 424 uuid_le_gen(&new_uuid); 425 425 memcpy(root_item.uuid, new_uuid.b, BTRFS_UUID_SIZE); 426 426 root_item.otime.sec = cpu_to_le64(cur_time.tv_sec); 427 - root_item.otime.nsec = cpu_to_le64(cur_time.tv_nsec); 427 + root_item.otime.nsec = cpu_to_le32(cur_time.tv_nsec); 428 428 root_item.ctime = root_item.otime; 429 429 btrfs_set_root_ctransid(&root_item, trans->transid); 430 430 btrfs_set_root_otransid(&root_item, trans->transid);
+1 -1
fs/btrfs/locking.c
··· 67 67 { 68 68 if (eb->lock_nested) { 69 69 read_lock(&eb->lock); 70 - if (&eb->lock_nested && current->pid == eb->lock_owner) { 70 + if (eb->lock_nested && current->pid == eb->lock_owner) { 71 71 read_unlock(&eb->lock); 72 72 return; 73 73 }
+9 -3
fs/btrfs/qgroup.c
··· 1364 1364 spin_lock(&fs_info->qgroup_lock); 1365 1365 1366 1366 dstgroup = add_qgroup_rb(fs_info, objectid); 1367 - if (!dstgroup) 1367 + if (IS_ERR(dstgroup)) { 1368 + ret = PTR_ERR(dstgroup); 1368 1369 goto unlock; 1370 + } 1369 1371 1370 1372 if (srcid) { 1371 1373 srcgroup = find_qgroup_rb(fs_info, srcid); 1372 - if (!srcgroup) 1374 + if (!srcgroup) { 1375 + ret = -EINVAL; 1373 1376 goto unlock; 1377 + } 1374 1378 dstgroup->rfer = srcgroup->rfer - level_size; 1375 1379 dstgroup->rfer_cmpr = srcgroup->rfer_cmpr - level_size; 1376 1380 srcgroup->excl = level_size; ··· 1383 1379 qgroup_dirty(fs_info, srcgroup); 1384 1380 } 1385 1381 1386 - if (!inherit) 1382 + if (!inherit) { 1383 + ret = -EINVAL; 1387 1384 goto unlock; 1385 + } 1388 1386 1389 1387 i_qgroups = (u64 *)(inherit + 1); 1390 1388 for (i = 0; i < inherit->num_qgroups; ++i) {
+2 -2
fs/btrfs/root-tree.c
··· 544 544 struct timespec ct = CURRENT_TIME; 545 545 546 546 spin_lock(&root->root_times_lock); 547 - item->ctransid = trans->transid; 547 + item->ctransid = cpu_to_le64(trans->transid); 548 548 item->ctime.sec = cpu_to_le64(ct.tv_sec); 549 - item->ctime.nsec = cpu_to_le64(ct.tv_nsec); 549 + item->ctime.nsec = cpu_to_le32(ct.tv_nsec); 550 550 spin_unlock(&root->root_times_lock); 551 551 }
+11 -4
fs/btrfs/super.c
··· 838 838 struct btrfs_trans_handle *trans; 839 839 struct btrfs_fs_info *fs_info = btrfs_sb(sb); 840 840 struct btrfs_root *root = fs_info->tree_root; 841 - int ret; 842 841 843 842 trace_btrfs_sync_fs(wait); 844 843 ··· 848 849 849 850 btrfs_wait_ordered_extents(root, 0, 0); 850 851 851 - trans = btrfs_start_transaction(root, 0); 852 + spin_lock(&fs_info->trans_lock); 853 + if (!fs_info->running_transaction) { 854 + spin_unlock(&fs_info->trans_lock); 855 + return 0; 856 + } 857 + spin_unlock(&fs_info->trans_lock); 858 + 859 + trans = btrfs_join_transaction(root); 852 860 if (IS_ERR(trans)) 853 861 return PTR_ERR(trans); 854 - ret = btrfs_commit_transaction(trans, root); 855 - return ret; 862 + return btrfs_commit_transaction(trans, root); 856 863 } 857 864 858 865 static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry) ··· 1535 1530 while (cur_devices) { 1536 1531 head = &cur_devices->devices; 1537 1532 list_for_each_entry(dev, head, dev_list) { 1533 + if (dev->missing) 1534 + continue; 1538 1535 if (!first_dev || dev->devid < first_dev->devid) 1539 1536 first_dev = dev; 1540 1537 }
+2 -1
fs/btrfs/transaction.c
··· 1031 1031 1032 1032 btrfs_i_size_write(parent_inode, parent_inode->i_size + 1033 1033 dentry->d_name.len * 2); 1034 + parent_inode->i_mtime = parent_inode->i_ctime = CURRENT_TIME; 1034 1035 ret = btrfs_update_inode(trans, parent_root, parent_inode); 1035 1036 if (ret) 1036 1037 goto abort_trans_dput; ··· 1067 1066 memcpy(new_root_item->parent_uuid, root->root_item.uuid, 1068 1067 BTRFS_UUID_SIZE); 1069 1068 new_root_item->otime.sec = cpu_to_le64(cur_time.tv_sec); 1070 - new_root_item->otime.nsec = cpu_to_le64(cur_time.tv_nsec); 1069 + new_root_item->otime.nsec = cpu_to_le32(cur_time.tv_nsec); 1071 1070 btrfs_set_root_otransid(new_root_item, trans->transid); 1072 1071 memset(&new_root_item->stime, 0, sizeof(new_root_item->stime)); 1073 1072 memset(&new_root_item->rtime, 0, sizeof(new_root_item->rtime));
+6 -27
fs/btrfs/volumes.c
··· 227 227 cur = pending; 228 228 pending = pending->bi_next; 229 229 cur->bi_next = NULL; 230 - atomic_dec(&fs_info->nr_async_bios); 231 230 232 - if (atomic_read(&fs_info->nr_async_bios) < limit && 231 + if (atomic_dec_return(&fs_info->nr_async_bios) < limit && 233 232 waitqueue_active(&fs_info->async_submit_wait)) 234 233 wake_up(&fs_info->async_submit_wait); 235 234 ··· 568 569 memcpy(new_device, device, sizeof(*new_device)); 569 570 570 571 /* Safe because we are under uuid_mutex */ 571 - name = rcu_string_strdup(device->name->str, GFP_NOFS); 572 - BUG_ON(device->name && !name); /* -ENOMEM */ 573 - rcu_assign_pointer(new_device->name, name); 572 + if (device->name) { 573 + name = rcu_string_strdup(device->name->str, GFP_NOFS); 574 + BUG_ON(device->name && !name); /* -ENOMEM */ 575 + rcu_assign_pointer(new_device->name, name); 576 + } 574 577 new_device->bdev = NULL; 575 578 new_device->writeable = 0; 576 579 new_device->in_fs_metadata = 0; ··· 4604 4603 } 4605 4604 free_extent_buffer(sb); 4606 4605 return ret; 4607 - } 4608 - 4609 - struct btrfs_device *btrfs_find_device_for_logical(struct btrfs_root *root, 4610 - u64 logical, int mirror_num) 4611 - { 4612 - struct btrfs_mapping_tree *map_tree = &root->fs_info->mapping_tree; 4613 - int ret; 4614 - u64 map_length = 0; 4615 - struct btrfs_bio *bbio = NULL; 4616 - struct btrfs_device *device; 4617 - 4618 - BUG_ON(mirror_num == 0); 4619 - ret = btrfs_map_block(map_tree, WRITE, logical, &map_length, &bbio, 4620 - mirror_num); 4621 - if (ret) { 4622 - BUG_ON(bbio != NULL); 4623 - return NULL; 4624 - } 4625 - BUG_ON(mirror_num != bbio->mirror_num); 4626 - device = bbio->stripes[mirror_num - 1].dev; 4627 - kfree(bbio); 4628 - return device; 4629 4606 } 4630 4607 4631 4608 int btrfs_read_chunk_tree(struct btrfs_root *root)
-2
fs/btrfs/volumes.h
··· 289 289 int btrfs_chunk_readonly(struct btrfs_root *root, u64 chunk_offset); 290 290 int find_free_dev_extent(struct btrfs_device *device, u64 num_bytes, 291 291 u64 *start, u64 *max_avail); 292 - struct btrfs_device *btrfs_find_device_for_logical(struct btrfs_root *root, 293 - u64 logical, int mirror_num); 294 292 void btrfs_dev_stat_print_on_error(struct btrfs_device *device); 295 293 void btrfs_dev_stat_inc_and_print(struct btrfs_device *dev, int index); 296 294 int btrfs_get_dev_stats(struct btrfs_root *root,
+30 -36
fs/buffer.c
··· 914 914 /* 915 915 * Initialise the state of a blockdev page's buffers. 916 916 */ 917 - static void 917 + static sector_t 918 918 init_page_buffers(struct page *page, struct block_device *bdev, 919 919 sector_t block, int size) 920 920 { ··· 936 936 block++; 937 937 bh = bh->b_this_page; 938 938 } while (bh != head); 939 + 940 + /* 941 + * Caller needs to validate requested block against end of device. 942 + */ 943 + return end_block; 939 944 } 940 945 941 946 /* 942 947 * Create the page-cache page that contains the requested block. 943 948 * 944 - * This is user purely for blockdev mappings. 949 + * This is used purely for blockdev mappings. 945 950 */ 946 - static struct page * 951 + static int 947 952 grow_dev_page(struct block_device *bdev, sector_t block, 948 - pgoff_t index, int size) 953 + pgoff_t index, int size, int sizebits) 949 954 { 950 955 struct inode *inode = bdev->bd_inode; 951 956 struct page *page; 952 957 struct buffer_head *bh; 958 + sector_t end_block; 959 + int ret = 0; /* Will call free_more_memory() */ 953 960 954 961 page = find_or_create_page(inode->i_mapping, index, 955 962 (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE); 956 963 if (!page) 957 - return NULL; 964 + return ret; 958 965 959 966 BUG_ON(!PageLocked(page)); 960 967 961 968 if (page_has_buffers(page)) { 962 969 bh = page_buffers(page); 963 970 if (bh->b_size == size) { 964 - init_page_buffers(page, bdev, block, size); 965 - return page; 971 + end_block = init_page_buffers(page, bdev, 972 + index << sizebits, size); 973 + goto done; 966 974 } 967 975 if (!try_to_free_buffers(page)) 968 976 goto failed; ··· 990 982 */ 991 983 spin_lock(&inode->i_mapping->private_lock); 992 984 link_dev_buffers(page, bh); 993 - init_page_buffers(page, bdev, block, size); 985 + end_block = init_page_buffers(page, bdev, index << sizebits, size); 994 986 spin_unlock(&inode->i_mapping->private_lock); 995 - return page; 996 - 987 + done: 988 + ret = (block < end_block) ? 1 : -ENXIO; 997 989 failed: 998 990 unlock_page(page); 999 991 page_cache_release(page); 1000 - return NULL; 992 + return ret; 1001 993 } 1002 994 1003 995 /* ··· 1007 999 static int 1008 1000 grow_buffers(struct block_device *bdev, sector_t block, int size) 1009 1001 { 1010 - struct page *page; 1011 1002 pgoff_t index; 1012 1003 int sizebits; 1013 1004 ··· 1030 1023 bdevname(bdev, b)); 1031 1024 return -EIO; 1032 1025 } 1033 - block = index << sizebits; 1026 + 1034 1027 /* Create a page with the proper size buffers.. */ 1035 - page = grow_dev_page(bdev, block, index, size); 1036 - if (!page) 1037 - return 0; 1038 - unlock_page(page); 1039 - page_cache_release(page); 1040 - return 1; 1028 + return grow_dev_page(bdev, block, index, size, sizebits); 1041 1029 } 1042 1030 1043 1031 static struct buffer_head * 1044 1032 __getblk_slow(struct block_device *bdev, sector_t block, int size) 1045 1033 { 1046 - int ret; 1047 - struct buffer_head *bh; 1048 - 1049 1034 /* Size must be multiple of hard sectorsize */ 1050 1035 if (unlikely(size & (bdev_logical_block_size(bdev)-1) || 1051 1036 (size < 512 || size > PAGE_SIZE))) { ··· 1050 1051 return NULL; 1051 1052 } 1052 1053 1053 - retry: 1054 - bh = __find_get_block(bdev, block, size); 1055 - if (bh) 1056 - return bh; 1054 + for (;;) { 1055 + struct buffer_head *bh; 1056 + int ret; 1057 1057 1058 - ret = grow_buffers(bdev, block, size); 1059 - if (ret == 0) { 1060 - free_more_memory(); 1061 - goto retry; 1062 - } else if (ret > 0) { 1063 1058 bh = __find_get_block(bdev, block, size); 1064 1059 if (bh) 1065 1060 return bh; 1061 + 1062 + ret = grow_buffers(bdev, block, size); 1063 + if (ret < 0) 1064 + return NULL; 1065 + if (ret == 0) 1066 + free_more_memory(); 1066 1067 } 1067 - return NULL; 1068 1068 } 1069 1069 1070 1070 /* ··· 1318 1320 * __getblk will locate (and, if necessary, create) the buffer_head 1319 1321 * which corresponds to the passed block_device, block and size. The 1320 1322 * returned buffer has its reference count incremented. 1321 - * 1322 - * __getblk() cannot fail - it just keeps trying. If you pass it an 1323 - * illegal block number, __getblk() will happily return a buffer_head 1324 - * which represents the non-existent block. Very weird. 1325 1323 * 1326 1324 * __getblk() will lock up the machine if grow_dev_page's try_to_free_buffers() 1327 1325 * attempt is failing. FIXME, perhaps?
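The reworked __getblk_slow() above replaces the goto-based retry with a lookup-or-grow loop driven by grow_buffers()'s new three-way return value: negative for a hard error (block past end of device), zero for a transient allocation failure that should be retried after reclaim, positive for success. A self-contained toy model of that control flow; the tiny cache array, the error constant, and every toy_* name are invented for illustration and are not kernel API:

```c
#define TOY_NBLOCKS 4
#define TOY_ENXIO   (-6)

static int toy_cache[TOY_NBLOCKS];      /* 1 = buffer already present */
static int toy_transient_failures;      /* simulated memory pressure */

static int toy_find_get_block(int block)
{
	/* lookup step: return the cached block or a miss */
	return (block >= 0 && block < TOY_NBLOCKS && toy_cache[block]) ? block : -1;
}

static int toy_grow_buffers(int block)
{
	if (block < 0 || block >= TOY_NBLOCKS)
		return TOY_ENXIO;       /* past end of "device": hard error */
	if (toy_transient_failures > 0) {
		toy_transient_failures--;
		return 0;               /* caller should reclaim and retry */
	}
	toy_cache[block] = 1;
	return 1;                       /* buffers created */
}

static int toy_getblk_slow(int block)
{
	for (;;) {
		int found = toy_find_get_block(block);
		if (found >= 0)
			return found;   /* lookup hit */

		int ret = toy_grow_buffers(block);
		if (ret < 0)
			return ret;     /* hard error propagates to caller */
		/* ret == 0: pretend free_more_memory() ran, then retry */
	}
}
```

The behavioral change mirrored here is the point of the removed comment paragraph: an illegal block number now produces a failure instead of a buffer_head for a non-existent block.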
+5
fs/direct-io.c
··· 1062 1062 unsigned long user_addr; 1063 1063 size_t bytes; 1064 1064 struct buffer_head map_bh = { 0, }; 1065 + struct blk_plug plug; 1065 1066 1066 1067 if (rw & WRITE) 1067 1068 rw = WRITE_ODIRECT; ··· 1178 1177 PAGE_SIZE - user_addr / PAGE_SIZE); 1179 1178 } 1180 1179 1180 + blk_start_plug(&plug); 1181 + 1181 1182 for (seg = 0; seg < nr_segs; seg++) { 1182 1183 user_addr = (unsigned long)iov[seg].iov_base; 1183 1184 sdio.size += bytes = iov[seg].iov_len; ··· 1237 1234 } 1238 1235 if (sdio.bio) 1239 1236 dio_bio_submit(dio, &sdio); 1237 + 1238 + blk_finish_plug(&plug); 1240 1239 1241 1240 /* 1242 1241 * It is possible that, we return short IO due to end of file.
+5
fs/jbd/journal.c
··· 1113 1113 1114 1114 BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex)); 1115 1115 spin_lock(&journal->j_state_lock); 1116 + /* Is it already empty? */ 1117 + if (sb->s_start == 0) { 1118 + spin_unlock(&journal->j_state_lock); 1119 + return; 1120 + } 1116 1121 jbd_debug(1, "JBD: Marking journal as empty (seq %d)\n", 1117 1122 journal->j_tail_sequence); 1118 1123
+7 -8
fs/logfs/dev_bdev.c
··· 26 26 struct completion complete; 27 27 28 28 bio_init(&bio); 29 + bio.bi_max_vecs = 1; 29 30 bio.bi_io_vec = &bio_vec; 30 31 bio_vec.bv_page = page; 31 32 bio_vec.bv_len = PAGE_SIZE; ··· 96 95 struct address_space *mapping = super->s_mapping_inode->i_mapping; 97 96 struct bio *bio; 98 97 struct page *page; 99 - struct request_queue *q = bdev_get_queue(sb->s_bdev); 100 - unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9); 98 + unsigned int max_pages; 101 99 int i; 102 100 103 - if (max_pages > BIO_MAX_PAGES) 104 - max_pages = BIO_MAX_PAGES; 101 + max_pages = min(nr_pages, (size_t) bio_get_nr_vecs(super->s_bdev)); 102 + 105 103 bio = bio_alloc(GFP_NOFS, max_pages); 106 104 BUG_ON(!bio); 107 105 ··· 190 190 { 191 191 struct logfs_super *super = logfs_super(sb); 192 192 struct bio *bio; 193 - struct request_queue *q = bdev_get_queue(sb->s_bdev); 194 - unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9); 193 + unsigned int max_pages; 195 194 int i; 196 195 197 - if (max_pages > BIO_MAX_PAGES) 198 - max_pages = BIO_MAX_PAGES; 196 + max_pages = min(nr_pages, (size_t) bio_get_nr_vecs(super->s_bdev)); 197 + 199 198 bio = bio_alloc(GFP_NOFS, max_pages); 200 199 BUG_ON(!bio); 201 200
+17 -1
fs/logfs/inode.c
··· 156 156 call_rcu(&inode->i_rcu, logfs_i_callback); 157 157 } 158 158 159 + static void __logfs_destroy_meta_inode(struct inode *inode) 160 + { 161 + struct logfs_inode *li = logfs_inode(inode); 162 + BUG_ON(li->li_block); 163 + call_rcu(&inode->i_rcu, logfs_i_callback); 164 + } 165 + 159 166 static void logfs_destroy_inode(struct inode *inode) 160 167 { 161 168 struct logfs_inode *li = logfs_inode(inode); 169 + 170 + if (inode->i_ino < LOGFS_RESERVED_INOS) { 171 + /* 172 + * The reserved inodes are never destroyed unless we are in 173 + * unmount path. 174 + */ 175 + __logfs_destroy_meta_inode(inode); 176 + return; 177 + } 162 178 163 179 BUG_ON(list_empty(&li->li_freeing_list)); 164 180 spin_lock(&logfs_inode_lock); ··· 389 373 { 390 374 struct logfs_super *super = logfs_super(sb); 391 375 /* kill the meta-inodes */ 392 - iput(super->s_master_inode); 393 376 iput(super->s_segfile_inode); 377 + iput(super->s_master_inode); 394 378 iput(super->s_mapping_inode); 395 379 }
+1 -1
fs/logfs/journal.c
··· 565 565 index = ofs >> PAGE_SHIFT; 566 566 page_ofs = ofs & (PAGE_SIZE - 1); 567 567 568 - page = find_lock_page(mapping, index); 568 + page = find_or_create_page(mapping, index, GFP_NOFS); 569 569 BUG_ON(!page); 570 570 memcpy(wbuf, page_address(page) + page_ofs, super->s_writesize); 571 571 unlock_page(page);
-1
fs/logfs/readwrite.c
··· 2189 2189 return; 2190 2190 } 2191 2191 2192 - BUG_ON(inode->i_ino < LOGFS_RESERVED_INOS); 2193 2192 page = inode_to_page(inode); 2194 2193 BUG_ON(!page); /* FIXME: Use emergency page */ 2195 2194 logfs_put_write_page(page);
+1 -1
fs/logfs/segment.c
··· 886 886 887 887 static void map_invalidatepage(struct page *page, unsigned long l) 888 888 { 889 - BUG(); 889 + return; 890 890 } 891 891 892 892 static int map_releasepage(struct page *page, gfp_t g)
+2 -2
fs/nfsd/nfs4callback.c
··· 651 651 652 652 if (clp->cl_minorversion == 0) { 653 653 if (!clp->cl_cred.cr_principal && 654 - (clp->cl_flavor >= RPC_AUTH_GSS_KRB5)) 654 + (clp->cl_cred.cr_flavor >= RPC_AUTH_GSS_KRB5)) 655 655 return -EINVAL; 656 656 args.client_name = clp->cl_cred.cr_principal; 657 657 args.prognumber = conn->cb_prog, 658 658 args.protocol = XPRT_TRANSPORT_TCP; 659 - args.authflavor = clp->cl_flavor; 659 + args.authflavor = clp->cl_cred.cr_flavor; 660 660 clp->cl_cb_ident = conn->cb_ident; 661 661 } else { 662 662 if (!conn->cb_xprt)
-1
fs/nfsd/state.h
··· 231 231 nfs4_verifier cl_verifier; /* generated by client */ 232 232 time_t cl_time; /* time of last lease renewal */ 233 233 struct sockaddr_storage cl_addr; /* client ipaddress */ 234 - u32 cl_flavor; /* setclientid pseudoflavor */ 235 234 struct svc_cred cl_cred; /* setclientid principal */ 236 235 clientid_t cl_clientid; /* generated by server */ 237 236 nfs4_verifier cl_confirm; /* generated by server */
+1 -1
fs/quota/dquot.c
··· 1589 1589 goto out; 1590 1590 } 1591 1591 1592 - down_read(&sb_dqopt(inode->i_sb)->dqptr_sem); 1593 1592 for (cnt = 0; cnt < MAXQUOTAS; cnt++) 1594 1593 warn[cnt].w_type = QUOTA_NL_NOWARN; 1595 1594 1595 + down_read(&sb_dqopt(inode->i_sb)->dqptr_sem); 1596 1596 spin_lock(&dq_data_lock); 1597 1597 for (cnt = 0; cnt < MAXQUOTAS; cnt++) { 1598 1598 if (!dquots[cnt])
-2
fs/reiserfs/bitmap.c
··· 1334 1334 else if (bitmap == 0) 1335 1335 block = (REISERFS_DISK_OFFSET_IN_BYTES >> sb->s_blocksize_bits) + 1; 1336 1336 1337 - reiserfs_write_unlock(sb); 1338 1337 bh = sb_bread(sb, block); 1339 - reiserfs_write_lock(sb); 1340 1338 if (bh == NULL) 1341 1339 reiserfs_warning(sb, "sh-2029: %s: bitmap block (#%u) " 1342 1340 "reading failed", __func__, block);
+1 -1
fs/reiserfs/inode.c
··· 76 76 ; 77 77 } 78 78 out: 79 + reiserfs_write_unlock_once(inode->i_sb, depth); 79 80 clear_inode(inode); /* note this must go after the journal_end to prevent deadlock */ 80 81 dquot_drop(inode); 81 82 inode->i_blocks = 0; 82 - reiserfs_write_unlock_once(inode->i_sb, depth); 83 83 return; 84 84 85 85 no_delete:
+1 -1
fs/ubifs/debug.h
··· 167 167 #define ubifs_dbg_msg(type, fmt, ...) \ 168 168 pr_debug("UBIFS DBG " type ": " fmt "\n", ##__VA_ARGS__) 169 169 170 - #define DBG_KEY_BUF_LEN 32 170 + #define DBG_KEY_BUF_LEN 48 171 171 #define ubifs_dbg_msg_key(type, key, fmt, ...) do { \ 172 172 char __tmp_key_buf[DBG_KEY_BUF_LEN]; \ 173 173 pr_debug("UBIFS DBG " type ": " fmt "%s\n", ##__VA_ARGS__, \
+4 -1
fs/ubifs/lpt.c
··· 1749 1749 return 0; 1750 1750 1751 1751 out_err: 1752 - ubifs_lpt_free(c, 0); 1752 + if (wr) 1753 + ubifs_lpt_free(c, 1); 1754 + if (rd) 1755 + ubifs_lpt_free(c, 0); 1753 1756 return err; 1754 1757 } 1755 1758
+1 -1
fs/ubifs/recovery.c
··· 788 788 789 789 corrupted_rescan: 790 790 /* Re-scan the corrupted data with verbose messages */ 791 - ubifs_err("corruptio %d", ret); 791 + ubifs_err("corruption %d", ret); 792 792 ubifs_scan_a_node(c, buf, len, lnum, offs, 1); 793 793 corrupted: 794 794 ubifs_scanned_corruption(c, lnum, offs, buf);
+1 -2
fs/ubifs/replay.c
··· 1026 1026 c->replaying = 1; 1027 1027 lnum = c->ltail_lnum = c->lhead_lnum; 1028 1028 1029 - lnum = UBIFS_LOG_LNUM; 1030 1029 do { 1031 1030 err = replay_log_leb(c, lnum, 0, c->sbuf); 1032 1031 if (err == 1) ··· 1034 1035 if (err) 1035 1036 goto out; 1036 1037 lnum = ubifs_next_log_lnum(c, lnum); 1037 - } while (lnum != UBIFS_LOG_LNUM); 1038 + } while (lnum != c->ltail_lnum); 1038 1039 1039 1040 err = replay_buds(c); 1040 1041 if (err)
-3
fs/ubifs/super.c
··· 1157 1157 * 1158 1158 * This function mounts UBIFS file system. Returns zero in case of success and 1159 1159 * a negative error code in case of failure. 1160 - * 1161 - * Note, the function does not de-allocate resources it it fails half way 1162 - * through, and the caller has to do this instead. 1163 1160 */ 1164 1161 static int mount_ubifs(struct ubifs_info *c) 1165 1162 {
+4 -1
fs/udf/inode.c
··· 1124 1124 if (err) 1125 1125 return err; 1126 1126 down_write(&iinfo->i_data_sem); 1127 - } else 1127 + } else { 1128 1128 iinfo->i_lenAlloc = newsize; 1129 + goto set_size; 1130 + } 1129 1131 } 1130 1132 err = udf_extend_file(inode, newsize); 1131 1133 if (err) { 1132 1134 up_write(&iinfo->i_data_sem); 1133 1135 return err; 1134 1136 } 1137 + set_size: 1135 1138 truncate_setsize(inode, newsize); 1136 1139 up_write(&iinfo->i_data_sem); 1137 1140 } else {
+6 -1
fs/udf/super.c
··· 1344 1344 udf_err(sb, "error loading logical volume descriptor: " 1345 1345 "Partition table too long (%u > %lu)\n", table_len, 1346 1346 sb->s_blocksize - sizeof(*lvd)); 1347 + ret = 1; 1347 1348 goto out_bh; 1348 1349 } 1349 1350 ··· 1389 1388 UDF_ID_SPARABLE, 1390 1389 strlen(UDF_ID_SPARABLE))) { 1391 1390 if (udf_load_sparable_map(sb, map, 1392 - (struct sparablePartitionMap *)gpm) < 0) 1391 + (struct sparablePartitionMap *)gpm) < 0) { 1392 + ret = 1; 1393 1393 goto out_bh; 1394 + } 1394 1395 } else if (!strncmp(upm2->partIdent.ident, 1395 1396 UDF_ID_METADATA, 1396 1397 strlen(UDF_ID_METADATA))) { ··· 2003 2000 if (!silent) 2004 2001 pr_notice("Rescanning with blocksize %d\n", 2005 2002 UDF_DEFAULT_BLOCKSIZE); 2003 + brelse(sbi->s_lvid_bh); 2004 + sbi->s_lvid_bh = NULL; 2006 2005 uopt.blocksize = UDF_DEFAULT_BLOCKSIZE; 2007 2006 ret = udf_load_vrs(sb, &uopt, silent, &fileset); 2008 2007 }
+4 -2
fs/xfs/xfs_discard.c
··· 179 179 * used by the fstrim application. In the end it really doesn't 180 180 * matter as trimming blocks is an advisory interface. 181 181 */ 182 + if (range.start >= XFS_FSB_TO_B(mp, mp->m_sb.sb_dblocks) || 183 + range.minlen > XFS_FSB_TO_B(mp, XFS_ALLOC_AG_MAX_USABLE(mp))) 184 + return -XFS_ERROR(EINVAL); 185 + 182 186 start = BTOBB(range.start); 183 187 end = start + BTOBBT(range.len) - 1; 184 188 minlen = BTOBB(max_t(u64, granularity, range.minlen)); 185 189 186 - if (XFS_BB_TO_FSB(mp, start) >= mp->m_sb.sb_dblocks) 187 - return -XFS_ERROR(EINVAL); 188 190 if (end > XFS_FSB_TO_BB(mp, mp->m_sb.sb_dblocks) - 1) 189 191 end = XFS_FSB_TO_BB(mp, mp->m_sb.sb_dblocks)- 1; 190 192
+9 -8
fs/xfs/xfs_ialloc.c
··· 962 962 if (!pag->pagi_freecount && !okalloc) 963 963 goto nextag; 964 964 965 + /* 966 + * Then read in the AGI buffer and recheck with the AGI buffer 967 + * lock held. 968 + */ 965 969 error = xfs_ialloc_read_agi(mp, tp, agno, &agbp); 966 970 if (error) 967 971 goto out_error; 968 972 969 - /* 970 - * Once the AGI has been read in we have to recheck 971 - * pagi_freecount with the AGI buffer lock held. 972 - */ 973 973 if (pag->pagi_freecount) { 974 974 xfs_perag_put(pag); 975 975 goto out_alloc; 976 976 } 977 977 978 - if (!okalloc) { 979 - xfs_trans_brelse(tp, agbp); 980 - goto nextag; 981 - } 978 + if (!okalloc) 979 + goto nextag_relse_buffer; 980 + 982 981 983 982 error = xfs_ialloc_ag_alloc(tp, agbp, &ialloced); 984 983 if (error) { ··· 1006 1007 return 0; 1007 1008 } 1008 1009 1010 + nextag_relse_buffer: 1011 + xfs_trans_brelse(tp, agbp); 1009 1012 nextag: 1010 1013 xfs_perag_put(pag); 1011 1014 if (++agno == mp->m_sb.sb_agcount)
+1 -1
fs/xfs/xfs_rtalloc.c
··· 857 857 xfs_buf_t *bp; /* block buffer, result */ 858 858 xfs_inode_t *ip; /* bitmap or summary inode */ 859 859 xfs_bmbt_irec_t map; 860 - int nmap; 860 + int nmap = 1; 861 861 int error; /* error value */ 862 862 863 863 ip = issum ? mp->m_rsumip : mp->m_rbmip;
+2 -1
include/drm/drm_crtc.h
··· 118 118 .hdisplay = (hd), .hsync_start = (hss), .hsync_end = (hse), \ 119 119 .htotal = (ht), .hskew = (hsk), .vdisplay = (vd), \ 120 120 .vsync_start = (vss), .vsync_end = (vse), .vtotal = (vt), \ 121 - .vscan = (vs), .flags = (f), .vrefresh = 0 121 + .vscan = (vs), .flags = (f), .vrefresh = 0, \ 122 + .base.type = DRM_MODE_OBJECT_MODE 122 123 123 124 #define CRTC_INTERLACE_HALVE_V 0x1 /* halve V values for interlacing */ 124 125
+3 -2
include/drm/drm_mode.h
··· 359 359 struct drm_mode_modeinfo mode; 360 360 }; 361 361 362 - #define DRM_MODE_CURSOR_BO (1<<0) 363 - #define DRM_MODE_CURSOR_MOVE (1<<1) 362 + #define DRM_MODE_CURSOR_BO 0x01 363 + #define DRM_MODE_CURSOR_MOVE 0x02 364 + #define DRM_MODE_CURSOR_FLAGS 0x03 364 365 365 366 /* 366 367 * depending on the value in flags different members are used.
+13 -1
include/linux/blkdev.h
··· 601 601 * it already be started by driver. 602 602 */ 603 603 #define RQ_NOMERGE_FLAGS \ 604 - (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA) 604 + (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA | REQ_DISCARD) 605 605 #define rq_mergeable(rq) \ 606 606 (!((rq)->cmd_flags & RQ_NOMERGE_FLAGS) && \ 607 607 (((rq)->cmd_flags & REQ_DISCARD) || \ ··· 894 894 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev); 895 895 896 896 extern int blk_rq_map_sg(struct request_queue *, struct request *, struct scatterlist *); 897 + extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio, 898 + struct scatterlist *sglist); 897 899 extern void blk_dump_rq_flags(struct request *, char *); 898 900 extern long nr_blockdev_pages(void); 899 901 ··· 1139 1137 1140 1138 return (lim->discard_granularity + lim->discard_alignment - alignment) 1141 1139 & (lim->discard_granularity - 1); 1140 + } 1141 + 1142 + static inline int bdev_discard_alignment(struct block_device *bdev) 1143 + { 1144 + struct request_queue *q = bdev_get_queue(bdev); 1145 + 1146 + if (bdev != bdev->bd_contains) 1147 + return bdev->bd_part->discard_alignment; 1148 + 1149 + return q->limits.discard_alignment; 1142 1150 } 1143 1151 1144 1152 static inline unsigned int queue_discard_zeroes_data(struct request_queue *q)
+4
include/linux/cpuidle.h
··· 194 194 195 195 #ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED 196 196 void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a); 197 + #else 198 + static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a) 199 + { 200 + } 197 201 #endif 198 202 199 203 /******************************
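The cpuidle hunk above adds the standard kernel idiom for optional features: when the config symbol is off, supply an empty static inline stub so call sites compile without their own #ifdefs. A generic sketch of the pattern, with TOY_FEATURE standing in for a real config option such as CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED:

```c
/* When TOY_FEATURE is enabled, the real implementation lives elsewhere;
 * when it is disabled, the inline stub compiles to nothing. TOY_FEATURE
 * and do_feature_work() are hypothetical names for illustration. */
#ifdef TOY_FEATURE
int do_feature_work(int x);             /* real version, defined elsewhere */
#else
static inline int do_feature_work(int x)
{
	(void)x;                        /* feature compiled out: no-op */
	return 0;
}
#endif
```

Callers can then invoke do_feature_work() unconditionally, which is exactly what the added cpuidle_coupled_parallel_barrier() stub enables.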
-7
include/linux/ktime.h
··· 58 58 59 59 typedef union ktime ktime_t; /* Kill this */ 60 60 61 - #define KTIME_MAX ((s64)~((u64)1 << 63)) 62 - #if (BITS_PER_LONG == 64) 63 - # define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC) 64 - #else 65 - # define KTIME_SEC_MAX LONG_MAX 66 - #endif 67 - 68 61 /* 69 62 * ktime_t definitions when using the 64-bit scalar representation: 70 63 */
+2
include/linux/mv643xx_eth.h
··· 15 15 #define MV643XX_ETH_SIZE_REG_4 0x2224 16 16 #define MV643XX_ETH_BASE_ADDR_ENABLE_REG 0x2290 17 17 18 + #define MV643XX_TX_CSUM_DEFAULT_LIMIT 0 19 + 18 20 struct mv643xx_eth_shared_platform_data { 19 21 struct mbus_dram_target_info *dram; 20 22 struct platform_device *shared_smi;
+27 -2
include/linux/time.h
··· 107 107 return ts_delta; 108 108 } 109 109 110 + #define KTIME_MAX ((s64)~((u64)1 << 63)) 111 + #if (BITS_PER_LONG == 64) 112 + # define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC) 113 + #else 114 + # define KTIME_SEC_MAX LONG_MAX 115 + #endif 116 + 110 117 /* 111 118 * Returns true if the timespec is norm, false if denorm: 112 119 */ 113 - #define timespec_valid(ts) \ 114 - (((ts)->tv_sec >= 0) && (((unsigned long) (ts)->tv_nsec) < NSEC_PER_SEC)) 120 + static inline bool timespec_valid(const struct timespec *ts) 121 + { 122 + /* Dates before 1970 are bogus */ 123 + if (ts->tv_sec < 0) 124 + return false; 125 + /* Can't have more nanoseconds then a second */ 126 + if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC) 127 + return false; 128 + return true; 129 + } 130 + 131 + static inline bool timespec_valid_strict(const struct timespec *ts) 132 + { 133 + if (!timespec_valid(ts)) 134 + return false; 135 + /* Disallow values that could overflow ktime_t */ 136 + if ((unsigned long long)ts->tv_sec >= KTIME_SEC_MAX) 137 + return false; 138 + return true; 139 + } 115 140 116 141 extern void read_persistent_clock(struct timespec *ts); 117 142 extern void read_boot_clock(struct timespec *ts);
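The new timespec_valid_strict() layers a ktime_t overflow check on top of the basic validity test shown above. A userspace sketch of the two checks; the MODEL_/model_ prefix marks every name as an illustrative stand-in for the kernel definitions, not kernel code:

```c
#include <stdbool.h>
#include <stdint.h>

#define MODEL_NSEC_PER_SEC  1000000000LL
#define MODEL_KTIME_MAX     ((int64_t)~((uint64_t)1 << 63))
#define MODEL_KTIME_SEC_MAX (MODEL_KTIME_MAX / MODEL_NSEC_PER_SEC)

struct model_ts { int64_t tv_sec; long tv_nsec; };

static bool model_timespec_valid(const struct model_ts *ts)
{
	if (ts->tv_sec < 0)
		return false;           /* dates before 1970 are bogus */
	if ((unsigned long long)ts->tv_nsec >= MODEL_NSEC_PER_SEC)
		return false;           /* nsec must stay below one second */
	return true;
}

static bool model_timespec_valid_strict(const struct model_ts *ts)
{
	if (!model_timespec_valid(ts))
		return false;
	/* additionally reject seconds that would overflow a ktime_t */
	if ((uint64_t)ts->tv_sec >= (uint64_t)MODEL_KTIME_SEC_MAX)
		return false;
	return true;
}
```

With 64-bit nanosecond ktime_t, the strict cutoff lands around 9.2 billion seconds (roughly 292 years past the epoch), which is why a basically-valid but enormous tv_sec passes the loose check and fails the strict one.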
-2
include/xen/events.h
··· 58 58 59 59 void xen_irq_resume(void); 60 60 61 - void xen_hvm_prepare_kexec(struct shared_info *sip, unsigned long pfn); 62 - 63 61 /* Clear an irq's pending state, in preparation for polling on it */ 64 62 void xen_clear_irq_pending(int irq); 65 63 void xen_set_irq_pending(int irq);
+2 -2
kernel/fork.c
··· 455 455 if (retval) 456 456 goto out; 457 457 458 - if (file && uprobe_mmap(tmp)) 459 - goto out; 458 + if (file) 459 + uprobe_mmap(tmp); 460 460 } 461 461 /* a new mm has just been created */ 462 462 arch_dup_mmap(oldmm, mm);
+33 -6
kernel/time/timekeeping.c
··· 115 115 { 116 116 tk->xtime_sec += ts->tv_sec; 117 117 tk->xtime_nsec += (u64)ts->tv_nsec << tk->shift; 118 + tk_normalize_xtime(tk); 118 119 } 119 120 120 121 static void tk_set_wall_to_mono(struct timekeeper *tk, struct timespec wtm) ··· 277 276 tk->xtime_nsec += cycle_delta * tk->mult; 278 277 279 278 /* If arch requires, add in gettimeoffset() */ 280 - tk->xtime_nsec += arch_gettimeoffset() << tk->shift; 279 + tk->xtime_nsec += (u64)arch_gettimeoffset() << tk->shift; 281 280 282 281 tk_normalize_xtime(tk); 283 282 ··· 428 427 struct timespec ts_delta, xt; 429 428 unsigned long flags; 430 429 431 - if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC) 430 + if (!timespec_valid_strict(tv)) 432 431 return -EINVAL; 433 432 434 433 write_seqlock_irqsave(&tk->lock, flags); ··· 464 463 { 465 464 struct timekeeper *tk = &timekeeper; 466 465 unsigned long flags; 466 + struct timespec tmp; 467 + int ret = 0; 467 468 468 469 if ((unsigned long)ts->tv_nsec >= NSEC_PER_SEC) 469 470 return -EINVAL; ··· 474 471 475 472 timekeeping_forward_now(tk); 476 473 474 + /* Make sure the proposed value is valid */ 475 + tmp = timespec_add(tk_xtime(tk), *ts); 476 + if (!timespec_valid_strict(&tmp)) { 477 + ret = -EINVAL; 478 + goto error; 479 + } 477 480 478 481 tk_xtime_add(tk, ts); 479 482 tk_set_wall_to_mono(tk, timespec_sub(tk->wall_to_monotonic, *ts)); 480 483 484 + error: /* even if we error out, we forwarded the time, so call update */ 481 485 timekeeping_update(tk, true); 482 486 483 487 write_sequnlock_irqrestore(&tk->lock, flags); ··· 492 482 /* signal hrtimers about time change */ 493 483 clock_was_set(); 494 484 495 - return 0; 485 + return ret; 496 486 } 497 487 EXPORT_SYMBOL(timekeeping_inject_offset); 498 488 ··· 659 649 struct timespec now, boot, tmp; 660 650 661 651 read_persistent_clock(&now); 652 + if (!timespec_valid_strict(&now)) { 653 + pr_warn("WARNING: Persistent clock returned invalid value!\n" 654 + " Check your CMOS/BIOS settings.\n"); 655 + now.tv_sec = 0; 656 + now.tv_nsec = 0; 657 + } 658 + 662 659 read_boot_clock(&boot); 660 + if (!timespec_valid_strict(&boot)) { 661 + pr_warn("WARNING: Boot clock returned invalid value!\n" 662 + " Check your CMOS/BIOS settings.\n"); 663 + boot.tv_sec = 0; 664 + boot.tv_nsec = 0; 665 + } 663 666 664 667 seqlock_init(&tk->lock); 665 668 ··· 713 690 static void __timekeeping_inject_sleeptime(struct timekeeper *tk, 714 691 struct timespec *delta) 715 692 { 716 - if (!timespec_valid(delta)) { 693 + if (!timespec_valid_strict(delta)) { 717 694 printk(KERN_WARNING "__timekeeping_inject_sleeptime: Invalid " 718 695 "sleep delta value!\n"); 719 696 return; ··· 1152 1129 offset = (clock->read(clock) - clock->cycle_last) & clock->mask; 1153 1130 #endif 1154 1131 1132 + /* Check if there's really nothing to do */ 1133 + if (offset < tk->cycle_interval) 1134 + goto out; 1135 + 1155 1136 /* 1156 1137 * With NO_HZ we may have to accumulate many cycle_intervals 1157 1138 * (think "ticks") worth of time at once. To do this efficiently, ··· 1188 1161 * the vsyscall implementations are converted to use xtime_nsec 1189 1162 * (shifted nanoseconds), this can be killed. 1190 1163 */ 1191 - remainder = tk->xtime_nsec & ((1 << tk->shift) - 1); 1164 + remainder = tk->xtime_nsec & ((1ULL << tk->shift) - 1); 1192 1165 tk->xtime_nsec -= remainder; 1193 - tk->xtime_nsec += 1 << tk->shift; 1166 + tk->xtime_nsec += 1ULL << tk->shift; 1194 1167 tk->ntp_error += remainder << tk->ntp_error_shift; 1195 1168 1196 1169 /*
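Several timekeeping hunks above widen shifts to 64 bits ((u64)arch_gettimeoffset() << tk->shift, 1ULL << tk->shift). The reason: C evaluates a shift in the type of its promoted left operand, so a 32-bit left operand silently loses its high bits before the result is ever added to a 64-bit accumulator. A minimal illustration; the function names and values are arbitrary stand-ins, not real timekeeper state:

```c
#include <stdint.h>

static uint64_t scale_buggy(uint32_t ns, unsigned int shift)
{
	/* mirrors "arch_gettimeoffset() << tk->shift": the shift happens
	 * in 32-bit arithmetic and only then widens to 64 bits */
	return ns << shift;
}

static uint64_t scale_fixed(uint32_t ns, unsigned int shift)
{
	/* mirrors "(u64)arch_gettimeoffset() << tk->shift": widen first,
	 * so all bits of the shifted value survive */
	return (uint64_t)ns << shift;
}
```

For small inputs the two agree, which is why this class of bug tends to hide until a large offset or shift value makes the truncated bits matter.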
+4
kernel/trace/trace_syscalls.c
··· 506 506 int size; 507 507 508 508 syscall_nr = syscall_get_nr(current, regs); 509 + if (syscall_nr < 0) 510 + return; 509 511 if (!test_bit(syscall_nr, enabled_perf_enter_syscalls)) 510 512 return; 511 513 ··· 582 580 int size; 583 581 584 582 syscall_nr = syscall_get_nr(current, regs); 583 + if (syscall_nr < 0) 584 + return; 585 585 if (!test_bit(syscall_nr, enabled_perf_exit_syscalls)) 586 586 return; 587 587
-7
mm/filemap.c
··· 1412 1412 retval = filemap_write_and_wait_range(mapping, pos, 1413 1413 pos + iov_length(iov, nr_segs) - 1); 1414 1414 if (!retval) { 1415 - struct blk_plug plug; 1416 - 1417 - blk_start_plug(&plug); 1418 1415 retval = mapping->a_ops->direct_IO(READ, iocb, 1419 1416 iov, pos, nr_segs); 1420 - blk_finish_plug(&plug); 1421 1417 } 1422 1418 if (retval > 0) { 1423 1419 *ppos = pos + retval; ··· 2523 2527 { 2524 2528 struct file *file = iocb->ki_filp; 2525 2529 struct inode *inode = file->f_mapping->host; 2526 - struct blk_plug plug; 2527 2530 ssize_t ret; 2528 2531 2529 2532 BUG_ON(iocb->ki_pos != pos); 2530 2533 2531 2534 sb_start_write(inode->i_sb); 2532 2535 mutex_lock(&inode->i_mutex); 2533 - blk_start_plug(&plug); 2534 2536 ret = __generic_file_aio_write(iocb, iov, nr_segs, &iocb->ki_pos); 2535 2537 mutex_unlock(&inode->i_mutex); 2536 2538 ··· 2539 2545 if (err < 0 && ret > 0) 2540 2546 ret = err; 2541 2547 } 2542 - blk_finish_plug(&plug); 2543 2548 sb_end_write(inode->i_sb); 2544 2549 return ret; 2545 2550 }
+2 -3
mm/mmap.c
··· 1356 1356 } else if ((flags & MAP_POPULATE) && !(flags & MAP_NONBLOCK)) 1357 1357 make_pages_present(addr, addr + len); 1358 1358 1359 - if (file && uprobe_mmap(vma)) 1360 - /* matching probes but cannot insert */ 1361 - goto unmap_and_free_vma; 1359 + if (file) 1360 + uprobe_mmap(vma); 1362 1361 1363 1362 return addr; 1364 1363
+1
mm/slab.c
··· 3260 3260 3261 3261 /* cache_grow can reenable interrupts, then ac could change. */ 3262 3262 ac = cpu_cache_get(cachep); 3263 + node = numa_mem_id(); 3263 3264 3264 3265 /* no objects in sight? abort */ 3265 3266 if (!x && (ac->avail == 0 || force_refill))
+4 -6
net/sunrpc/svc_xprt.c
··· 316 316 */ 317 317 void svc_xprt_enqueue(struct svc_xprt *xprt) 318 318 { 319 - struct svc_serv *serv = xprt->xpt_server; 320 319 struct svc_pool *pool; 321 320 struct svc_rqst *rqstp; 322 321 int cpu; ··· 361 362 rqstp, rqstp->rq_xprt); 362 363 rqstp->rq_xprt = xprt; 363 364 svc_xprt_get(xprt); 364 - rqstp->rq_reserved = serv->sv_max_mesg; 365 - atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved); 366 365 pool->sp_stats.threads_woken++; 367 366 wake_up(&rqstp->rq_wait); 368 367 } else { ··· 637 640 if (xprt) { 638 641 rqstp->rq_xprt = xprt; 639 642 svc_xprt_get(xprt); 640 - rqstp->rq_reserved = serv->sv_max_mesg; 641 - atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved); 642 643 643 644 /* As there is a shortage of threads and this request 644 645 * had to be queued, don't allow the thread to wait so ··· 733 738 else 734 739 len = xprt->xpt_ops->xpo_recvfrom(rqstp); 735 740 dprintk("svc: got len=%d\n", len); 741 + rqstp->rq_reserved = serv->sv_max_mesg; 742 + atomic_add(rqstp->rq_reserved, &xprt->xpt_reserved); 736 743 } 737 744 svc_xprt_received(xprt); 738 745 ··· 791 794 792 795 /* Grab mutex to serialize outgoing data. */ 793 796 mutex_lock(&xprt->xpt_mutex); 794 - if (test_bit(XPT_DEAD, &xprt->xpt_flags)) 797 + if (test_bit(XPT_DEAD, &xprt->xpt_flags) 798 + || test_bit(XPT_CLOSE, &xprt->xpt_flags)) 795 799 len = -ENOTCONN; 796 800 else 797 801 len = xprt->xpt_ops->xpo_sendto(rqstp);
+1 -1
net/sunrpc/svcsock.c
··· 1129 1129 if (len >= 0) 1130 1130 svsk->sk_tcplen += len; 1131 1131 if (len != want) { 1132 + svc_tcp_save_pages(svsk, rqstp); 1132 1133 if (len < 0 && len != -EAGAIN) 1133 1134 goto err_other; 1134 - svc_tcp_save_pages(svsk, rqstp); 1135 1135 dprintk("svc: incomplete TCP record (%d of %d)\n", 1136 1136 svsk->sk_tcplen, svsk->sk_reclen); 1137 1137 goto err_noclose;
+2
tools/perf/util/python-ext-sources
··· 10 10 util/evlist.c 11 11 util/evsel.c 12 12 util/cpumap.c 13 + util/hweight.c 13 14 util/thread_map.c 14 15 util/util.c 15 16 util/xyarray.c 16 17 util/cgroup.c 17 18 util/debugfs.c 19 + util/rblist.c 18 20 util/strlist.c 19 21 ../../lib/rbtree.c
+4 -3
virt/kvm/kvm_main.c
··· 1976 1976 if (copy_from_user(&csigset, sigmask_arg->sigset, 1977 1977 sizeof csigset)) 1978 1978 goto out; 1979 - } 1980 - sigset_from_compat(&sigset, &csigset); 1981 - r = kvm_vcpu_ioctl_set_sigmask(vcpu, &sigset); 1979 + sigset_from_compat(&sigset, &csigset); 1980 + r = kvm_vcpu_ioctl_set_sigmask(vcpu, &sigset); 1981 + } else 1982 + r = kvm_vcpu_ioctl_set_sigmask(vcpu, NULL); 1982 1983 break; 1983 1984 } 1984 1985 default: