Linux kernel mirror (for testing)
git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'for-rmk' of git://git.marvell.com/orion into devel

Authored and committed by Russell King
7698fded 2d8d2493

+3622 -1516
+6  Documentation/hwmon/sysfs-interface

···
 		Unit: revolution/min (RPM)
 		RW
 
+fan[1-*]_max	Fan maximum value
+		Unit: revolution/min (RPM)
+		Only rarely supported by the hardware.
+		RW
+
 fan[1-*]_input	Fan input value.
 		Unit: revolution/min (RPM)
 		RO
···
 in[0-*]_min_alarm
 in[0-*]_max_alarm
 fan[1-*]_min_alarm
+fan[1-*]_max_alarm
 temp[1-*]_min_alarm
 temp[1-*]_max_alarm
 temp[1-*]_crit_alarm
+79 -24  Documentation/input/multi-touch-protocol.txt

···
 Anonymous finger details are sent sequentially as separate packets of ABS
 events. Only the ABS_MT events are recognized as part of a finger
 packet. The end of a packet is marked by calling the input_mt_sync()
-function, which generates a SYN_MT_REPORT event. The end of multi-touch
-transfer is marked by calling the usual input_sync() function.
+function, which generates a SYN_MT_REPORT event. This instructs the
+receiver to accept the data for the current finger and prepare to receive
+another. The end of a multi-touch transfer is marked by calling the usual
+input_sync() function. This instructs the receiver to act upon events
+accumulated since last EV_SYN/SYN_REPORT and prepare to receive a new
+set of events/packets.
 
 A set of ABS_MT events with the desired properties is defined. The events
 are divided into categories, to allow for partial implementation. The
···
 ABS_MT_POSITION_Y, which allows for multiple fingers to be tracked. If the
 device supports it, the ABS_MT_WIDTH_MAJOR may be used to provide the size
 of the approaching finger. Anisotropy and direction may be specified with
-ABS_MT_TOUCH_MINOR, ABS_MT_WIDTH_MINOR and ABS_MT_ORIENTATION. Devices with
-more granular information may specify general shapes as blobs, i.e., as a
-sequence of rectangular shapes grouped together by an
-ABS_MT_BLOB_ID. Finally, the ABS_MT_TOOL_TYPE may be used to specify
-whether the touching tool is a finger or a pen or something else.
+ABS_MT_TOUCH_MINOR, ABS_MT_WIDTH_MINOR and ABS_MT_ORIENTATION. The
+ABS_MT_TOOL_TYPE may be used to specify whether the touching tool is a
+finger or a pen or something else. Devices with more granular information
+may specify general shapes as blobs, i.e., as a sequence of rectangular
+shapes grouped together by an ABS_MT_BLOB_ID. Finally, for the few devices
+that currently support it, the ABS_MT_TRACKING_ID event may be used to
+report finger tracking from hardware [5].
+
+Here is what a minimal event sequence for a two-finger touch would look
+like:
+
+   ABS_MT_TOUCH_MAJOR
+   ABS_MT_POSITION_X
+   ABS_MT_POSITION_Y
+   SYN_MT_REPORT
+   ABS_MT_TOUCH_MAJOR
+   ABS_MT_POSITION_X
+   ABS_MT_POSITION_Y
+   SYN_MT_REPORT
+   SYN_REPORT
 
 
 Event Semantics
···
 The length of the major axis of the contact. The length should be given in
 surface units. If the surface has an X times Y resolution, the largest
-possible value of ABS_MT_TOUCH_MAJOR is sqrt(X^2 + Y^2), the diagonal.
+possible value of ABS_MT_TOUCH_MAJOR is sqrt(X^2 + Y^2), the diagonal [4].
 
 ABS_MT_TOUCH_MINOR
 
 The length, in surface units, of the minor axis of the contact. If the
-contact is circular, this event can be omitted.
+contact is circular, this event can be omitted [4].
 
 ABS_MT_WIDTH_MAJOR
 
 The length, in surface units, of the major axis of the approaching
 tool. This should be understood as the size of the tool itself. The
 orientation of the contact and the approaching tool are assumed to be the
-same.
+same [4].
 
 ABS_MT_WIDTH_MINOR
 
 The length, in surface units, of the minor axis of the approaching
-tool. Omit if circular.
+tool. Omit if circular [4].
 
 The above four values can be used to derive additional information about
 the contact. The ratio ABS_MT_TOUCH_MAJOR / ABS_MT_WIDTH_MAJOR approximates
···
 ABS_MT_ORIENTATION
 
-The orientation of the ellipse. The value should describe half a revolution
-clockwise around the touch center. The scale of the value is arbitrary, but
-zero should be returned for an ellipse aligned along the Y axis of the
-surface. As an example, an index finger placed straight onto the axis could
-return zero orientation, something negative when twisted to the left, and
-something positive when twisted to the right. This value can be omitted if
-the touching object is circular, or if the information is not available in
-the kernel driver.
+The orientation of the ellipse. The value should describe a signed quarter
+of a revolution clockwise around the touch center. The signed value range
+is arbitrary, but zero should be returned for a finger aligned along the Y
+axis of the surface, a negative value when finger is turned to the left, and
+a positive value when finger turned to the right. When completely aligned with
+the X axis, the range max should be returned. Orientation can be omitted
+if the touching object is circular, or if the information is not available
+in the kernel driver. Partial orientation support is possible if the device
+can distinguish between the two axis, but not (uniquely) any values in
+between. In such cases, the range of ABS_MT_ORIENTATION should be [0, 1]
+[4].
 
 ABS_MT_POSITION_X
···
 The BLOB_ID groups several packets together into one arbitrarily shaped
 contact. This is a low-level anonymous grouping, and should not be confused
-with the high-level contactID, explained below. Most kernel drivers will
-not have this capability, and can safely omit the event.
+with the high-level trackingID [5]. Most kernel drivers will not have blob
+capability, and can safely omit the event.
+
+ABS_MT_TRACKING_ID
+
+The TRACKING_ID identifies an initiated contact throughout its life cycle
+[5]. There are currently only a few devices that support it, so this event
+should normally be omitted.
+
+
+Event Computation
+-----------------
+
+The flora of different hardware unavoidably leads to some devices fitting
+better to the MT protocol than others. To simplify and unify the mapping,
+this section gives recipes for how to compute certain events.
+
+For devices reporting contacts as rectangular shapes, signed orientation
+cannot be obtained. Assuming X and Y are the lengths of the sides of the
+touching rectangle, here is a simple formula that retains the most
+information possible:
+
+   ABS_MT_TOUCH_MAJOR := max(X, Y)
+   ABS_MT_TOUCH_MINOR := min(X, Y)
+   ABS_MT_ORIENTATION := bool(X > Y)
+
+The range of ABS_MT_ORIENTATION should be set to [0, 1], to indicate that
+the device can distinguish between a finger along the Y axis (0) and a
+finger along the X axis (1).
 
 
 Finger Tracking
···
 anonymous contacts currently on the surface. The order in which the packets
 appear in the event stream is not important.
 
-The process of finger tracking, i.e., to assign a unique contactID to each
+The process of finger tracking, i.e., to assign a unique trackingID to each
 initiated contact on the surface, is left to user space; preferably the
-multi-touch X driver [3]. In that driver, the contactID stays the same and
+multi-touch X driver [3]. In that driver, the trackingID stays the same and
 unique until the contact vanishes (when the finger leaves the surface). The
 problem of assigning a set of anonymous fingers to a set of identified
 fingers is a euclidian bipartite matching problem at each event update, and
 relies on a sufficiently rapid update rate.
+
+There are a few devices that support trackingID in hardware. User space can
+make use of these native identifiers to reduce bandwidth and cpu usage.
+
 
 Notes
 -----
···
 time of writing (April 2009), the MT protocol is not yet merged, and the
 prototype implements finger matching, basic mouse support and two-finger
 scrolling. The project aims at improving the quality of current multi-touch
-functionality available in the synaptics X driver, and in addition
+functionality available in the Synaptics X driver, and in addition
 implement more advanced gestures.
+[4] See the section on event computation.
+[5] See the section on finger tracking.
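The rectangular-contact recipe in the patch's Event Computation section maps directly to code. Here is a minimal user-space sketch of it in plain C (not kernel code); the struct and function names `mt_shape` / `mt_rect_to_shape` are invented for illustration:

```c
#include <assert.h>

/* Hypothetical holder for the three computed MT values. */
struct mt_shape {
	int touch_major;	/* ABS_MT_TOUCH_MAJOR: max(X, Y) */
	int touch_minor;	/* ABS_MT_TOUCH_MINOR: min(X, Y) */
	int orientation;	/* ABS_MT_ORIENTATION in [0, 1] */
};

/* Apply the Event Computation recipe to a rectangular contact with
 * side lengths x and y, given in surface units. */
static struct mt_shape mt_rect_to_shape(int x, int y)
{
	struct mt_shape s;

	s.touch_major = x > y ? x : y;
	s.touch_minor = x > y ? y : x;
	s.orientation = x > y;	/* 0: along Y axis, 1: along X axis */
	return s;
}
```

As the patch notes, the [0, 1] orientation range signals that only the two axes can be distinguished, not angles in between.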
+4  Documentation/kernel-parameters.txt

···
 			register save and restore. The kernel will only save
 			legacy floating-point registers on task switch.
 
+	noxsave		[BUGS=X86] Disables x86 extended register state save
+			and restore using xsave. The kernel will fallback to
+			enabling legacy floating-point and sse state.
+
 	nohlt		[BUGS=ARM,SH] Tells the kernel that the sleep(SH) or
 			wfi(ARM) instruction doesn't work correctly and not to
 			use it. This is also useful when using JTAG debugger.
+1  Documentation/sound/alsa/HD-Audio-Models.txt

···
 	  ref-no-jd	Reference board without HP/Mic jack detection
 	  3stack	D965 3stack
 	  5stack	D965 5stack + SPDIF
+	  5stack-no-fp	D965 5stack without front panel
 	  dell-3stack	Dell Dimension E520
 	  dell-bios	Fixes with Dell BIOS setup
 	  auto		BIOS setup (default)
+5  Documentation/sound/alsa/Procfile.txt

···
 		When this value is greater than 1, the driver will show the
 		stack trace additionally. This may help the debugging.
 
+		Since 2.6.30, this option also enables the hwptr check using
+		jiffies. This detects spontaneous invalid pointer callback
+		values, but can be lead to too much corrections for a (mostly
+		buggy) hardware that doesn't give smooth pointer updates.
+
 	card*/pcm*/sub*/info
 		The general information of this PCM sub-stream.
+21 -12  MAINTAINERS

···
 
 AMD GEODE CS5536 USB DEVICE CONTROLLER DRIVER
 P:	Thomas Dahlmann
-M:	thomas.dahlmann@amd.com
+M:	dahlmann.thomas@arcor.de
 L:	linux-geode@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	drivers/usb/gadget/amd5536udc.*
···
 L:	linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
 T:	git git://gitorious.org/linux-gemini/mainline.git
 S:	Maintained
+F:	arch/arm/mach-gemini/
 
 ARM/EBSA110 MACHINE SUPPORT
 P:	Russell King
···
 M:	paulius.zaleckas@teltonika.lt
 L:	linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
 S:	Maintained
+F:	arch/arm/mm/*-fa*
 
 ARM/FOOTBRIDGE ARCHITECTURE
 P:	Russell King
···
 F:	include/linux/bfs_fs.h
 
 BLACKFIN ARCHITECTURE
-P:	Bryan Wu
-M:	cooloney@kernel.org
+P:	Mike Frysinger
+M:	vapier@gentoo.org
 L:	uclinux-dist-devel@blackfin.uclinux.org
 W:	http://blackfin.uclinux.org
 S:	Supported
 F:	arch/blackfin/
 
 BLACKFIN EMAC DRIVER
-P:	Bryan Wu
-M:	cooloney@kernel.org
-L:	uclinux-dist-devel@blackfin.uclinux.org (subscribers-only)
+P:	Michael Hennerich
+M:	michael.hennerich@analog.com
+L:	uclinux-dist-devel@blackfin.uclinux.org
 W:	http://blackfin.uclinux.org
 S:	Supported
 F:	drivers/net/bfin_mac.*
···
 BLACKFIN RTC DRIVER
 P:	Mike Frysinger
 M:	vapier.adi@gmail.com
-L:	uclinux-dist-devel@blackfin.uclinux.org (subscribers-only)
+L:	uclinux-dist-devel@blackfin.uclinux.org
 W:	http://blackfin.uclinux.org
 S:	Supported
 F:	drivers/rtc/rtc-bfin.c
···
 BLACKFIN SERIAL DRIVER
 P:	Sonic Zhang
 M:	sonic.zhang@analog.com
-L:	uclinux-dist-devel@blackfin.uclinux.org (subscribers-only)
+L:	uclinux-dist-devel@blackfin.uclinux.org
 W:	http://blackfin.uclinux.org
 S:	Supported
 F:	drivers/serial/bfin_5xx.c
···
 BLACKFIN WATCHDOG DRIVER
 P:	Mike Frysinger
 M:	vapier.adi@gmail.com
-L:	uclinux-dist-devel@blackfin.uclinux.org (subscribers-only)
+L:	uclinux-dist-devel@blackfin.uclinux.org
 W:	http://blackfin.uclinux.org
 S:	Supported
 F:	drivers/watchdog/bfin_wdt.c
···
 BLACKFIN I2C TWI DRIVER
 P:	Sonic Zhang
 M:	sonic.zhang@analog.com
-L:	uclinux-dist-devel@blackfin.uclinux.org (subscribers-only)
+L:	uclinux-dist-devel@blackfin.uclinux.org
 W:	http://blackfin.uclinux.org/
 S:	Supported
 F:	drivers/i2c/busses/i2c-bfin-twi.c
···
 W:	http://www.fi.muni.cz/~kas/cosa/
 S:	Maintained
 F:	drivers/net/wan/cosa*
+
+CPMAC ETHERNET DRIVER
+P:	Florian Fainelli
+M:	florian@openwrt.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	drivers/net/cpmac.c
 
 CPU FREQUENCY DRIVERS
 P:	Dave Jones
···
 
 EDAC-E752X
 P:	Mark Gross
-P:	Doug Thompson
 M:	mark.gross@intel.com
+P:	Doug Thompson
 M:	dougthompson@xmission.com
 L:	bluesmoke-devel@lists.sourceforge.net (moderated for non-subscribers)
 W:	bluesmoke.sourceforge.net
···
 M:	leoli@freescale.com
 P:	Zhang Wei
 M:	zw@zh-kernel.org
-L:	linuxppc-embedded@ozlabs.org
+L:	linuxppc-dev@ozlabs.org
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	drivers/dma/fsldma.*
+1 -1  Makefile

···
 VERSION = 2
 PATCHLEVEL = 6
 SUBLEVEL = 30
-EXTRAVERSION = -rc7
+EXTRAVERSION = -rc8
 NAME = Man-Eating Seals of Antiquity
 
 # *DOCUMENTATION*
+3  arch/arm/Kconfig

···
 	select CPU_FEROCEON
 	select PCI
 	select GENERIC_GPIO
+	select ARCH_REQUIRE_GPIOLIB
 	select GENERIC_TIME
 	select GENERIC_CLOCKEVENTS
 	select PLAT_ORION
···
 	select CPU_FEROCEON
 	select PCI
 	select GENERIC_GPIO
+	select ARCH_REQUIRE_GPIOLIB
 	select GENERIC_TIME
 	select GENERIC_CLOCKEVENTS
 	select PLAT_ORION
···
 	select CPU_FEROCEON
 	select PCI
 	select GENERIC_GPIO
+	select ARCH_REQUIRE_GPIOLIB
 	select GENERIC_TIME
 	select GENERIC_CLOCKEVENTS
 	select PLAT_ORION
+4 -1  arch/arm/configs/kirkwood_defconfig

···
 CONFIG_MACH_DB88F6281_BP=y
 CONFIG_MACH_RD88F6192_NAS=y
 CONFIG_MACH_RD88F6281=y
+CONFIG_MACH_MV88F6281GTW_GE=y
 CONFIG_MACH_SHEEVAPLUG=y
 CONFIG_MACH_TS219=y
 CONFIG_PLAT_ORION=y
···
 #
 # CPU Power Management
 #
-# CONFIG_CPU_IDLE is not set
+CONFIG_CPU_IDLE=y
+CONFIG_CPU_IDLE_GOV_LADDER=y
+CONFIG_CPU_IDLE_GOV_MENU=y
 
 #
 # Floating point emulation
+2 -1  arch/arm/configs/orion5x_defconfig

···
 CONFIG_LEGACY_PTYS=y
 CONFIG_LEGACY_PTY_COUNT=16
 # CONFIG_IPMI_HANDLER is not set
-# CONFIG_HW_RANDOM is not set
+CONFIG_HW_RANDOM=m
+CONFIG_HW_RANDOM_TIMERIOMEM=m
 # CONFIG_R3964 is not set
 # CONFIG_APPLICOM is not set
 # CONFIG_RAW_DRIVER is not set
+13  arch/arm/include/asm/assembler.h

···
 	.align	3;				\
 	.long	9999b,9001f;			\
 	.previous
+
+/*
+ * SMP data memory barrier
+ */
+	.macro	smp_dmb
+#ifdef CONFIG_SMP
+#if __LINUX_ARM_ARCH__ >= 7
+	dmb
+#elif __LINUX_ARM_ARCH__ == 6
+	mcr	p15, 0, r0, c7, c10, 5	@ dmb
+#endif
+#endif
+	.endm
+52 -9  arch/arm/include/asm/atomic.h

···
 	: "cc");
 }
 
+static inline void atomic_add(int i, atomic_t *v)
+{
+	unsigned long tmp;
+	int result;
+
+	__asm__ __volatile__("@ atomic_add\n"
+"1:	ldrex	%0, [%2]\n"
+"	add	%0, %0, %3\n"
+"	strex	%1, %0, [%2]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp)
+	: "r" (&v->counter), "Ir" (i)
+	: "cc");
+}
+
 static inline int atomic_add_return(int i, atomic_t *v)
 {
 	unsigned long tmp;
 	int result;
+
+	smp_mb();
 
 	__asm__ __volatile__("@ atomic_add_return\n"
 "1:	ldrex	%0, [%2]\n"
···
 	: "r" (&v->counter), "Ir" (i)
 	: "cc");
 
+	smp_mb();
+
 	return result;
+}
+
+static inline void atomic_sub(int i, atomic_t *v)
+{
+	unsigned long tmp;
+	int result;
+
+	__asm__ __volatile__("@ atomic_sub\n"
+"1:	ldrex	%0, [%2]\n"
+"	sub	%0, %0, %3\n"
+"	strex	%1, %0, [%2]\n"
+"	teq	%1, #0\n"
+"	bne	1b"
+	: "=&r" (result), "=&r" (tmp)
+	: "r" (&v->counter), "Ir" (i)
+	: "cc");
 }
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
 	unsigned long tmp;
 	int result;
+
+	smp_mb();
 
 	__asm__ __volatile__("@ atomic_sub_return\n"
 "1:	ldrex	%0, [%2]\n"
···
 	: "r" (&v->counter), "Ir" (i)
 	: "cc");
 
+	smp_mb();
+
 	return result;
 }
 
 static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 {
 	unsigned long oldval, res;
+
+	smp_mb();
 
 	do {
 		__asm__ __volatile__("@ atomic_cmpxchg\n"
···
 		: "r" (&ptr->counter), "Ir" (old), "r" (new)
 		: "cc");
 	} while (res);
+
+	smp_mb();
 
 	return oldval;
 }
···
 
 	return val;
 }
+#define atomic_add(i, v)	(void) atomic_add_return(i, v)
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
···
 
 	return val;
 }
+#define atomic_sub(i, v)	(void) atomic_sub_return(i, v)
 
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
···
 }
 #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
 
-#define atomic_add(i, v)	(void) atomic_add_return(i, v)
-#define atomic_inc(v)		(void) atomic_add_return(1, v)
-#define atomic_sub(i, v)	(void) atomic_sub_return(i, v)
-#define atomic_dec(v)		(void) atomic_sub_return(1, v)
+#define atomic_inc(v)		atomic_add(1, v)
+#define atomic_dec(v)		atomic_sub(1, v)
 
 #define atomic_inc_and_test(v)	(atomic_add_return(1, v) == 0)
 #define atomic_dec_and_test(v)	(atomic_sub_return(1, v) == 0)
···
 
 #define atomic_add_negative(i,v) (atomic_add_return(i, v) < 0)
 
-/* Atomic operations are already serializing on ARM */
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()
+#define smp_mb__before_atomic_dec()	smp_mb()
+#define smp_mb__after_atomic_dec()	smp_mb()
+#define smp_mb__before_atomic_inc()	smp_mb()
+#define smp_mb__after_atomic_inc()	smp_mb()
 
 #include <asm-generic/atomic.h>
 #endif
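The ldrex/strex retry loops above implement add-and-return-new-value semantics, now bracketed by smp_mb() so the operation is also a full memory barrier. The same contract can be mimicked in portable user-space C with the GCC/Clang `__atomic` builtins; this is a sketch of the semantics, not the kernel code (the name `atomic_add_return_user` is invented):

```c
#include <assert.h>

/* User-space analogue of the kernel's atomic_add_return(): atomically
 * add i to *v and return the resulting value.  __ATOMIC_SEQ_CST plays
 * the role of the smp_mb() barriers around the ldrex/strex loop. */
static int atomic_add_return_user(int i, int *v)
{
	return __atomic_add_fetch(v, i, __ATOMIC_SEQ_CST);
}
```

Note that, as in the patch, the non-returning `atomic_add`/`atomic_sub` variants need no barriers at all, which is why they were split out from the `*_return` forms.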
-3  arch/arm/include/asm/flat.h

···
 #ifndef __ARM_FLAT_H__
 #define __ARM_FLAT_H__
 
-/* An odd number of words will be pushed after this alignment, so
-   deliberately misalign the value. */
-#define flat_stack_align(sp)	sp = (void *)(((unsigned long)(sp) - 4) | 4)
 #define flat_argvp_envp_on_stack()	1
 #define flat_old_ram_flag(flags)	(flags)
 #define flat_reloc_valid(reloc, size)	((reloc) <= (size))
+1  arch/arm/include/asm/sizes.h

···
 #define SZ_512				0x00000200
 
 #define SZ_1K				0x00000400
+#define SZ_2K				0x00000800
 #define SZ_4K				0x00001000
 #define SZ_8K				0x00002000
 #define SZ_16K				0x00004000
+176  arch/arm/include/asm/system.h

···
 	unsigned int tmp;
 #endif
 
+	smp_mb();
+
 	switch (size) {
 #if __LINUX_ARM_ARCH__ >= 6
 	case 1:
···
 		__bad_xchg(ptr, size), ret = 0;
 		break;
 	}
+	smp_mb();
 
 	return ret;
 }
···
 extern void enable_hlt(void);
 
 #include <asm-generic/cmpxchg-local.h>
+
+#if __LINUX_ARM_ARCH__ < 6
+
+#ifdef CONFIG_SMP
+#error "SMP is not supported on this platform"
+#endif
 
 /*
  * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
···
 #ifndef CONFIG_SMP
 #include <asm-generic/cmpxchg.h>
 #endif
+
+#else	/* __LINUX_ARM_ARCH__ >= 6 */
+
+extern void __bad_cmpxchg(volatile void *ptr, int size);
+
+/*
+ * cmpxchg only support 32-bits operands on ARMv6.
+ */
+
+static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
+				      unsigned long new, int size)
+{
+	unsigned long oldval, res;
+
+	switch (size) {
+#ifdef CONFIG_CPU_32v6K
+	case 1:
+		do {
+			asm volatile("@ __cmpxchg1\n"
+			"	ldrexb	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexbeq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+	case 2:
+		do {
+			asm volatile("@ __cmpxchg1\n"
+			"	ldrexh	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexheq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+#endif /* CONFIG_CPU_32v6K */
+	case 4:
+		do {
+			asm volatile("@ __cmpxchg4\n"
+			"	ldrex	%1, [%2]\n"
+			"	mov	%0, #0\n"
+			"	teq	%1, %3\n"
+			"	strexeq %0, %4, [%2]\n"
+				: "=&r" (res), "=&r" (oldval)
+				: "r" (ptr), "Ir" (old), "r" (new)
+				: "memory", "cc");
+		} while (res);
+		break;
+	default:
+		__bad_cmpxchg(ptr, size);
+		oldval = 0;
+	}
+
+	return oldval;
+}
+
+static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
+					 unsigned long new, int size)
+{
+	unsigned long ret;
+
+	smp_mb();
+	ret = __cmpxchg(ptr, old, new, size);
+	smp_mb();
+
+	return ret;
+}
+
+#define cmpxchg(ptr,o,n)						\
+	((__typeof__(*(ptr)))__cmpxchg_mb((ptr),			\
+					  (unsigned long)(o),		\
+					  (unsigned long)(n),		\
+					  sizeof(*(ptr))))
+
+static inline unsigned long __cmpxchg_local(volatile void *ptr,
+					    unsigned long old,
+					    unsigned long new, int size)
+{
+	unsigned long ret;
+
+	switch (size) {
+#ifndef CONFIG_CPU_32v6K
+	case 1:
+	case 2:
+		ret = __cmpxchg_local_generic(ptr, old, new, size);
+		break;
+#endif	/* !CONFIG_CPU_32v6K */
+	default:
+		ret = __cmpxchg(ptr, old, new, size);
+	}
+
+	return ret;
+}
+
+#define cmpxchg_local(ptr,o,n)						\
+	((__typeof__(*(ptr)))__cmpxchg_local((ptr),			\
+					     (unsigned long)(o),	\
+					     (unsigned long)(n),	\
+					     sizeof(*(ptr))))
+
+#ifdef CONFIG_CPU_32v6K
+
+/*
+ * Note : ARMv7-M (currently unsupported by Linux) does not support
+ * ldrexd/strexd. If ARMv7-M is ever supported by the Linux kernel, it should
+ * not be allowed to use __cmpxchg64.
+ */
+static inline unsigned long long __cmpxchg64(volatile void *ptr,
+					     unsigned long long old,
+					     unsigned long long new)
+{
+	register unsigned long long oldval asm("r0");
+	register unsigned long long __old asm("r2") = old;
+	register unsigned long long __new asm("r4") = new;
+	unsigned long res;
+
+	do {
+		asm volatile(
+		"	@ __cmpxchg8\n"
+		"	ldrexd	%1, %H1, [%2]\n"
+		"	mov	%0, #0\n"
+		"	teq	%1, %3\n"
+		"	teqeq	%H1, %H3\n"
+		"	strexdeq %0, %4, %H4, [%2]\n"
+			: "=&r" (res), "=&r" (oldval)
+			: "r" (ptr), "Ir" (__old), "r" (__new)
+			: "memory", "cc");
+	} while (res);
+
+	return oldval;
+}
+
+static inline unsigned long long __cmpxchg64_mb(volatile void *ptr,
+						unsigned long long old,
+						unsigned long long new)
+{
+	unsigned long long ret;
+
+	smp_mb();
+	ret = __cmpxchg64(ptr, old, new);
+	smp_mb();
+
+	return ret;
+}
+
+#define cmpxchg64(ptr,o,n)						\
+	((__typeof__(*(ptr)))__cmpxchg64_mb((ptr),			\
+					    (unsigned long long)(o),	\
+					    (unsigned long long)(n)))
+
+#define cmpxchg64_local(ptr,o,n)					\
+	((__typeof__(*(ptr)))__cmpxchg64((ptr),				\
+					 (unsigned long long)(o),	\
+					 (unsigned long long)(n)))
+
+#else /* !CONFIG_CPU_32v6K */
+
+#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+
+#endif /* CONFIG_CPU_32v6K */
+
+#endif /* __LINUX_ARM_ARCH__ >= 6 */
 
 #endif /* __ASSEMBLY__ */
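The contract every `__cmpxchg` variant above implements is: if `*ptr` equals `old`, store `new`; in either case, return the value that was in `*ptr` beforehand (so the caller can tell success from failure by comparing the return value with `old`). A user-space sketch of that contract using the GCC/Clang `__atomic` builtin (the name `cmpxchg_user` is invented; this is not the kernel API):

```c
#include <assert.h>

/* If *ptr == old, atomically store new_val; return the previous value
 * of *ptr either way.  The ldrex/strexeq retry loop in the ARM code
 * achieves the same effect; __ATOMIC_SEQ_CST stands in for the smp_mb()
 * pair in __cmpxchg_mb(). */
static unsigned long cmpxchg_user(unsigned long *ptr, unsigned long old,
				  unsigned long new_val)
{
	unsigned long expected = old;

	/* On failure the builtin writes the current value into
	 * 'expected', so 'expected' holds the prior value in both
	 * the success and failure cases. */
	__atomic_compare_exchange_n(ptr, &expected, new_val, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return expected;
}
```

This also makes the `_local` variants' design visible: they skip the barriers because they only need atomicity with respect to the current CPU, not cross-CPU ordering.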
+9  arch/arm/kernel/elf.c

···
 		return 1;
 	if (cpu_architecture() < CPU_ARCH_ARMv6)
 		return 1;
+#if !defined(CONFIG_AEABI) || defined(CONFIG_OABI_COMPAT)
+	/*
+	 * If we have support for OABI programs, we can never allow NX
+	 * support - our signal syscall restart mechanism relies upon
+	 * being able to execute code placed on the user stack.
+	 */
+	return 1;
+#else
 	return 0;
+#endif
 }
 EXPORT_SYMBOL(arm_elf_read_implies_exec);
+1 -4  arch/arm/kernel/entry-armv.S

···
  */
 
 __kuser_memory_barrier:				@ 0xffff0fa0
-
-#if __LINUX_ARM_ARCH__ >= 6 && defined(CONFIG_SMP)
-	mcr	p15, 0, r0, c7, c10, 5	@ dmb
-#endif
+	smp_dmb
 	usr_ret	lr
 
 	.align	5
+2  arch/arm/lib/bitops.h

···
 	mov	r2, #1
 	add	r1, r1, r0, lsr #3	@ Get byte offset
 	mov	r3, r2, lsl r3		@ create mask
+	smp_dmb
1:	ldrexb	r2, [r1]
 	ands	r0, r2, r3		@ save old value of bit
 	\instr	r2, r2, r3		@ toggle bit
 	strexb	ip, r2, [r1]
 	cmp	ip, #0
 	bne	1b
+	smp_dmb
 	cmp	r0, #0
 	movne	r0, #1
2:	mov	pc, lr
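The assembler template above is the test-and-modify-bit family (`test_and_set_bit()` and friends): atomically apply `\instr` to one bit and return whether the bit was set beforehand, now fenced with `smp_dmb` on both sides. The test-and-set case can be sketched in portable user-space C (the name `test_and_set_bit_user` is invented; `__atomic_fetch_or` stands in for the ldrexb/strexb retry loop):

```c
#include <assert.h>

/* Atomically set bit nr in *addr and return the bit's previous value
 * (0 or 1).  __ATOMIC_SEQ_CST provides the ordering that the smp_dmb
 * barriers provide in the ARM assembler version. */
static int test_and_set_bit_user(int nr, unsigned long *addr)
{
	unsigned long mask = 1UL << nr;
	unsigned long old = __atomic_fetch_or(addr, mask, __ATOMIC_SEQ_CST);

	return (old & mask) != 0;
}
```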
+1 -2  arch/arm/mach-gemini/include/mach/hardware.h

···
 /*
  * Memory Map definitions
  */
-/* FIXME: Does it really swap SRAM like this? */
 #ifdef CONFIG_GEMINI_MEM_SWAP
 # define GEMINI_DRAM_BASE	0x00000000
-# define GEMINI_SRAM_BASE	0x20000000
+# define GEMINI_SRAM_BASE	0x70000000
 #else
 # define GEMINI_SRAM_BASE	0x00000000
 # define GEMINI_DRAM_BASE	0x10000000
+6  arch/arm/mach-kirkwood/Kconfig

···
 	  Say 'Y' here if you want your kernel to support the
 	  Marvell RD-88F6281 Reference Board.
 
+config MACH_MV88F6281GTW_GE
+	bool "Marvell 88F6281 GTW GE Board"
+	help
+	  Say 'Y' here if you want your kernel to support the
+	  Marvell 88F6281 GTW GE Board.
+
 config MACH_SHEEVAPLUG
 	bool "Marvell SheevaPlug Reference Board"
 	help
+3  arch/arm/mach-kirkwood/Makefile

···
 obj-$(CONFIG_MACH_DB88F6281_BP)		+= db88f6281-bp-setup.o
 obj-$(CONFIG_MACH_RD88F6192_NAS)	+= rd88f6192-nas-setup.o
 obj-$(CONFIG_MACH_RD88F6281)		+= rd88f6281-setup.o
+obj-$(CONFIG_MACH_MV88F6281GTW_GE)	+= mv88f6281gtw_ge-setup.o
 obj-$(CONFIG_MACH_SHEEVAPLUG)		+= sheevaplug-setup.o
 obj-$(CONFIG_MACH_TS219)		+= ts219-setup.o
+
+obj-$(CONFIG_CPU_IDLE)			+= cpuidle.o
+7 -7  arch/arm/mach-kirkwood/addr-map.c

···
  */
 #define TARGET_DDR		0
 #define TARGET_DEV_BUS		1
+#define TARGET_SRAM		3
 #define TARGET_PCIE		4
 #define ATTR_DEV_SPI_ROM	0x1e
 #define ATTR_DEV_BOOT		0x1d
···
 #define ATTR_DEV_CS0		0x3e
 #define ATTR_PCIE_IO		0xe0
 #define ATTR_PCIE_MEM		0xe8
+#define ATTR_SRAM		0x01
 
 /*
  * Helpers to get DDR bank info
···
 
 
 struct mbus_dram_target_info kirkwood_mbus_dram_info;
-static int __initdata win_alloc_count;
 
 static int __init cpu_win_can_remap(int win)
 {
···
 	setup_cpu_win(2, KIRKWOOD_NAND_MEM_PHYS_BASE, KIRKWOOD_NAND_MEM_SIZE,
 		      TARGET_DEV_BUS, ATTR_DEV_NAND, -1);
 
-	win_alloc_count = 3;
+	/*
+	 * Setup window for SRAM.
+	 */
+	setup_cpu_win(3, KIRKWOOD_SRAM_PHYS_BASE, KIRKWOOD_SRAM_SIZE,
+		      TARGET_SRAM, ATTR_SRAM, -1);
 
 	/*
 	 * Setup MBUS dram target info.
···
 		}
 	}
 	kirkwood_mbus_dram_info.num_cs = cs;
-}
-
-void __init kirkwood_setup_sram_win(u32 base, u32 size)
-{
-	setup_cpu_win(win_alloc_count++, base, size, 0x03, 0x00, -1);
 }
+165 -2  arch/arm/mach-kirkwood/common.c

···
 #include <linux/mv643xx_eth.h>
 #include <linux/mv643xx_i2c.h>
 #include <linux/ata_platform.h>
+#include <linux/mtd/nand.h>
 #include <linux/spi/orion_spi.h>
 #include <net/dsa.h>
 #include <asm/page.h>
···
 #include <plat/mvsdio.h>
 #include <plat/mv_xor.h>
 #include <plat/orion_nand.h>
+#include <plat/orion_wdt.h>
 #include <plat/time.h>
 #include "common.h"
···
 	iotable_init(kirkwood_io_desc, ARRAY_SIZE(kirkwood_io_desc));
 }
 
+/*
+ * Default clock control bits. Any bit _not_ set in this variable
+ * will be cleared from the hardware after platform devices have been
+ * registered. Some reserved bits must be set to 1.
+ */
+unsigned int kirkwood_clk_ctrl = CGC_DUNIT | CGC_RESERVED;
+
 
 /*****************************************************************************
  * EHCI
···
 
 void __init kirkwood_ehci_init(void)
 {
+	kirkwood_clk_ctrl |= CGC_USB0;
 	platform_device_register(&kirkwood_ehci);
 }
···
 	.id		= 0,
 	.num_resources	= 1,
 	.resource	= kirkwood_ge00_resources,
+	.dev		= {
+		.coherent_dma_mask	= 0xffffffff,
+	},
 };
 
 void __init kirkwood_ge00_init(struct mv643xx_eth_platform_data *eth_data)
 {
+	kirkwood_clk_ctrl |= CGC_GE0;
 	eth_data->shared = &kirkwood_ge00_shared;
 	kirkwood_ge00.dev.platform_data = eth_data;
···
 	.id		= 1,
 	.num_resources	= 1,
 	.resource	= kirkwood_ge01_resources,
+	.dev		= {
+		.coherent_dma_mask	= 0xffffffff,
+	},
 };
 
 void __init kirkwood_ge01_init(struct mv643xx_eth_platform_data *eth_data)
 {
+	kirkwood_clk_ctrl |= CGC_GE1;
 	eth_data->shared = &kirkwood_ge01_shared;
 	kirkwood_ge01.dev.platform_data = eth_data;
···
 	kirkwood_switch_device.dev.platform_data = d;
 
 	platform_device_register(&kirkwood_switch_device);
+}
+
+
+/*****************************************************************************
+ * NAND flash
+ ****************************************************************************/
+static struct resource kirkwood_nand_resource = {
+	.flags		= IORESOURCE_MEM,
+	.start		= KIRKWOOD_NAND_MEM_PHYS_BASE,
+	.end		= KIRKWOOD_NAND_MEM_PHYS_BASE +
+				KIRKWOOD_NAND_MEM_SIZE - 1,
+};
+
+static struct orion_nand_data kirkwood_nand_data = {
+	.cle		= 0,
+	.ale		= 1,
+	.width		= 8,
+};
+
+static struct platform_device kirkwood_nand_flash = {
+	.name		= "orion_nand",
+	.id		= -1,
+	.dev		= {
+		.platform_data	= &kirkwood_nand_data,
+	},
+	.resource	= &kirkwood_nand_resource,
+	.num_resources	= 1,
+};
+
+void __init kirkwood_nand_init(struct mtd_partition *parts, int nr_parts,
+			       int chip_delay)
+{
+	kirkwood_clk_ctrl |= CGC_RUNIT;
+	kirkwood_nand_data.parts = parts;
+	kirkwood_nand_data.nr_parts = nr_parts;
+	kirkwood_nand_data.chip_delay = chip_delay;
+	platform_device_register(&kirkwood_nand_flash);
 }
···
 
 void __init kirkwood_sata_init(struct mv_sata_platform_data *sata_data)
 {
+	kirkwood_clk_ctrl |= CGC_SATA0;
+	if (sata_data->n_ports > 1)
+		kirkwood_clk_ctrl |= CGC_SATA1;
 	sata_data->dram = &kirkwood_mbus_dram_info;
 	kirkwood_sata.dev.platform_data = sata_data;
 	platform_device_register(&kirkwood_sata);
···
 	else
 		mvsdio_data->clock = 200000000;
 	mvsdio_data->dram = &kirkwood_mbus_dram_info;
+	kirkwood_clk_ctrl |= CGC_SDIO;
 	kirkwood_sdio.dev.platform_data = mvsdio_data;
 	platform_device_register(&kirkwood_sdio);
 }
···
 
 void __init kirkwood_spi_init()
 {
+	kirkwood_clk_ctrl |= CGC_RUNIT;
 	platform_device_register(&kirkwood_spi);
 }
···
 
 static struct resource kirkwood_i2c_resources[] = {
 	{
-		.name	= "i2c",
 		.start	= I2C_PHYS_BASE,
 		.end	= I2C_PHYS_BASE + 0x1f,
 		.flags	= IORESOURCE_MEM,
 	}, {
-		.name	= "i2c",
 		.start	= IRQ_KIRKWOOD_TWSI,
 		.end	= IRQ_KIRKWOOD_TWSI,
 		.flags	= IORESOURCE_IRQ,
···
 
 
 /*****************************************************************************
+ * Cryptographic Engines and Security Accelerator (CESA)
+ ****************************************************************************/
+
+static struct resource kirkwood_crypto_res[] = {
+	{
+		.name	= "regs",
+		.start	= CRYPTO_PHYS_BASE,
+		.end	= CRYPTO_PHYS_BASE + 0xffff,
+		.flags	= IORESOURCE_MEM,
+	}, {
+		.name	= "sram",
+		.start	= KIRKWOOD_SRAM_PHYS_BASE,
+		.end	= KIRKWOOD_SRAM_PHYS_BASE + KIRKWOOD_SRAM_SIZE - 1,
+		.flags	= IORESOURCE_MEM,
+	}, {
+		.name	= "crypto interrupt",
+		.start	= IRQ_KIRKWOOD_CRYPTO,
+		.end	= IRQ_KIRKWOOD_CRYPTO,
+		.flags	= IORESOURCE_IRQ,
+	},
+};
+
+static struct platform_device kirkwood_crypto_device = {
+	.name		= "mv_crypto",
+	.id		= -1,
+	.num_resources	= ARRAY_SIZE(kirkwood_crypto_res),
+	.resource	= kirkwood_crypto_res,
+};
+
+void __init kirkwood_crypto_init(void)
+{
+	kirkwood_clk_ctrl |= CGC_CRYPTO;
+	platform_device_register(&kirkwood_crypto_device);
+}
+
+
+/*****************************************************************************
  * XOR
  ****************************************************************************/
 static struct mv_xor_platform_shared_data kirkwood_xor_shared_data = {
···
 
 static void __init kirkwood_xor0_init(void)
 {
+	kirkwood_clk_ctrl |= CGC_XOR0;
 	platform_device_register(&kirkwood_xor0_shared);
 
 	/*
···
 
 static void __init kirkwood_xor1_init(void)
 {
+	kirkwood_clk_ctrl |= CGC_XOR1;
 	platform_device_register(&kirkwood_xor1_shared);
 
 	/*
···
 	dma_cap_set(DMA_MEMSET, kirkwood_xor11_data.cap_mask);
 	dma_cap_set(DMA_XOR, kirkwood_xor11_data.cap_mask);
 	platform_device_register(&kirkwood_xor11_channel);
+}
+
+
+/*****************************************************************************
+ * Watchdog
+ ****************************************************************************/
+static struct orion_wdt_platform_data kirkwood_wdt_data = {
+	.tclk		= 0,
+};
+
+static struct platform_device kirkwood_wdt_device = {
+	.name		= "orion_wdt",
+	.id		= -1,
+	.dev		= {
+		.platform_data	= &kirkwood_wdt_data,
+	},
+	.num_resources	= 0,
+};
+
+static void __init kirkwood_wdt_init(void)
+{
+	kirkwood_wdt_data.tclk = kirkwood_tclk;
+	platform_device_register(&kirkwood_wdt_device);
 }
···
 
 	/* internal devices that every board has */
 	kirkwood_rtc_init();
+	kirkwood_wdt_init();
 	kirkwood_xor0_init();
 	kirkwood_xor1_init();
+	kirkwood_crypto_init();
 }
+
+static int __init kirkwood_clock_gate(void)
+{
+	unsigned int curr = readl(CLOCK_GATING_CTRL);
+
+	printk(KERN_DEBUG "Gating clock of unused units\n");
+	printk(KERN_DEBUG "before: 0x%08x\n", curr);
+
+	/* Make sure those units are accessible */
+	writel(curr | CGC_SATA0 | CGC_SATA1 | CGC_PEX0, CLOCK_GATING_CTRL);
+
+	/* For SATA: first shutdown the phy */
+	if (!(kirkwood_clk_ctrl & CGC_SATA0)) {
+		/* Disable PLL and IVREF */
+		writel(readl(SATA0_PHY_MODE_2) & ~0xf, SATA0_PHY_MODE_2);
+		/* Disable PHY */
+		writel(readl(SATA0_IF_CTRL) | 0x200, SATA0_IF_CTRL);
825 + } 826 + if (!(kirkwood_clk_ctrl & CGC_SATA1)) { 827 + /* Disable PLL and IVREF */ 828 + writel(readl(SATA1_PHY_MODE_2) & ~0xf, SATA1_PHY_MODE_2); 829 + /* Disable PHY */ 830 + writel(readl(SATA1_IF_CTRL) | 0x200, SATA1_IF_CTRL); 831 + } 832 + 833 + /* For PCIe: first shutdown the phy */ 834 + if (!(kirkwood_clk_ctrl & CGC_PEX0)) { 835 + writel(readl(PCIE_LINK_CTRL) | 0x10, PCIE_LINK_CTRL); 836 + while (1) 837 + if (readl(PCIE_STATUS) & 0x1) 838 + break; 839 + writel(readl(PCIE_LINK_CTRL) & ~0x10, PCIE_LINK_CTRL); 840 + } 841 + 842 + /* Now gate clock the required units */ 843 + writel(kirkwood_clk_ctrl, CLOCK_GATING_CTRL); 844 + printk(KERN_DEBUG " after: 0x%08x\n", readl(CLOCK_GATING_CTRL)); 845 + 846 + return 0; 847 + } 848 + late_initcall(kirkwood_clock_gate);
+3 -1
arch/arm/mach-kirkwood/common.h
··· 15 15 struct mv643xx_eth_platform_data; 16 16 struct mv_sata_platform_data; 17 17 struct mvsdio_platform_data; 18 + struct mtd_partition; 18 19 19 20 /* 20 21 * Basic Kirkwood init functions used early by machine-setup. ··· 26 25 27 26 extern struct mbus_dram_target_info kirkwood_mbus_dram_info; 28 27 void kirkwood_setup_cpu_mbus(void); 29 - void kirkwood_setup_sram_win(u32 base, u32 size); 30 28 31 29 void kirkwood_pcie_id(u32 *dev, u32 *rev); 32 30 ··· 40 40 void kirkwood_i2c_init(void); 41 41 void kirkwood_uart0_init(void); 42 42 void kirkwood_uart1_init(void); 43 + void kirkwood_nand_init(struct mtd_partition *parts, int nr_parts, int delay); 43 44 44 45 extern int kirkwood_tclk; 45 46 extern struct sys_timer kirkwood_timer; 46 47 48 + #define ARRAY_AND_SIZE(x) (x), ARRAY_SIZE(x) 47 49 48 50 #endif
+96
arch/arm/mach-kirkwood/cpuidle.c
··· 1 + /* 2 + * arch/arm/mach-kirkwood/cpuidle.c 3 + * 4 + * CPU idle Marvell Kirkwood SoCs 5 + * 6 + * This file is licensed under the terms of the GNU General Public 7 + * License version 2. This program is licensed "as is" without any 8 + * warranty of any kind, whether express or implied. 9 + * 10 + * The cpu idle uses wait-for-interrupt and DDR self refresh in order 11 + * to implement two idle states - 12 + * #1 wait-for-interrupt 13 + * #2 wait-for-interrupt and DDR self refresh 14 + */ 15 + 16 + #include <linux/kernel.h> 17 + #include <linux/init.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/cpuidle.h> 20 + #include <linux/io.h> 21 + #include <asm/proc-fns.h> 22 + #include <mach/kirkwood.h> 23 + 24 + #define KIRKWOOD_MAX_STATES 2 25 + 26 + static struct cpuidle_driver kirkwood_idle_driver = { 27 + .name = "kirkwood_idle", 28 + .owner = THIS_MODULE, 29 + }; 30 + 31 + static DEFINE_PER_CPU(struct cpuidle_device, kirkwood_cpuidle_device); 32 + 33 + /* Actual code that puts the SoC in different idle states */ 34 + static int kirkwood_enter_idle(struct cpuidle_device *dev, 35 + struct cpuidle_state *state) 36 + { 37 + struct timeval before, after; 38 + int idle_time; 39 + 40 + local_irq_disable(); 41 + do_gettimeofday(&before); 42 + if (state == &dev->states[0]) 43 + /* Wait for interrupt state */ 44 + cpu_do_idle(); 45 + else if (state == &dev->states[1]) { 46 + /* 47 + * Following write will put DDR in self refresh. 48 + * Note that we have 256 cycles before DDR puts it 49 + * self in self-refresh, so the wait-for-interrupt 50 + * call afterwards won't get the DDR from self refresh 51 + * mode. 
52 + */ 53 + writel(0x7, DDR_OPERATION_BASE); 54 + cpu_do_idle(); 55 + } 56 + do_gettimeofday(&after); 57 + local_irq_enable(); 58 + idle_time = (after.tv_sec - before.tv_sec) * USEC_PER_SEC + 59 + (after.tv_usec - before.tv_usec); 60 + return idle_time; 61 + } 62 + 63 + /* Initialize CPU idle by registering the idle states */ 64 + static int kirkwood_init_cpuidle(void) 65 + { 66 + struct cpuidle_device *device; 67 + 68 + cpuidle_register_driver(&kirkwood_idle_driver); 69 + 70 + device = &per_cpu(kirkwood_cpuidle_device, smp_processor_id()); 71 + device->state_count = KIRKWOOD_MAX_STATES; 72 + 73 + /* Wait for interrupt state */ 74 + device->states[0].enter = kirkwood_enter_idle; 75 + device->states[0].exit_latency = 1; 76 + device->states[0].target_residency = 10000; 77 + device->states[0].flags = CPUIDLE_FLAG_TIME_VALID; 78 + strcpy(device->states[0].name, "WFI"); 79 + strcpy(device->states[0].desc, "Wait for interrupt"); 80 + 81 + /* Wait for interrupt and DDR self refresh state */ 82 + device->states[1].enter = kirkwood_enter_idle; 83 + device->states[1].exit_latency = 10; 84 + device->states[1].target_residency = 10000; 85 + device->states[1].flags = CPUIDLE_FLAG_TIME_VALID; 86 + strcpy(device->states[1].name, "DDR SR"); 87 + strcpy(device->states[1].desc, "WFI and DDR Self Refresh"); 88 + 89 + if (cpuidle_register_device(device)) { 90 + printk(KERN_ERR "kirkwood_init_cpuidle: Failed registering\n"); 91 + return -EIO; 92 + } 93 + return 0; 94 + } 95 + 96 + device_initcall(kirkwood_init_cpuidle);
+1 -30
arch/arm/mach-kirkwood/db88f6281-bp-setup.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/init.h> 13 13 #include <linux/platform_device.h> 14 - #include <linux/mtd/nand.h> 15 14 #include <linux/mtd/partitions.h> 16 15 #include <linux/ata_platform.h> 17 16 #include <linux/mv643xx_eth.h> 18 17 #include <asm/mach-types.h> 19 18 #include <asm/mach/arch.h> 20 19 #include <mach/kirkwood.h> 21 - #include <plat/orion_nand.h> 22 20 #include <plat/mvsdio.h> 23 21 #include "common.h" 24 22 #include "mpp.h" ··· 35 37 .offset = MTDPART_OFS_NXTBLK, 36 38 .size = MTDPART_SIZ_FULL 37 39 }, 38 - }; 39 - 40 - static struct resource db88f6281_nand_resource = { 41 - .flags = IORESOURCE_MEM, 42 - .start = KIRKWOOD_NAND_MEM_PHYS_BASE, 43 - .end = KIRKWOOD_NAND_MEM_PHYS_BASE + 44 - KIRKWOOD_NAND_MEM_SIZE - 1, 45 - }; 46 - 47 - static struct orion_nand_data db88f6281_nand_data = { 48 - .parts = db88f6281_nand_parts, 49 - .nr_parts = ARRAY_SIZE(db88f6281_nand_parts), 50 - .cle = 0, 51 - .ale = 1, 52 - .width = 8, 53 - .chip_delay = 25, 54 - }; 55 - 56 - static struct platform_device db88f6281_nand_flash = { 57 - .name = "orion_nand", 58 - .id = -1, 59 - .dev = { 60 - .platform_data = &db88f6281_nand_data, 61 - }, 62 - .resource = &db88f6281_nand_resource, 63 - .num_resources = 1, 64 40 }; 65 41 66 42 static struct mv643xx_eth_platform_data db88f6281_ge00_data = { ··· 64 92 kirkwood_init(); 65 93 kirkwood_mpp_conf(db88f6281_mpp_config); 66 94 95 + kirkwood_nand_init(ARRAY_AND_SIZE(db88f6281_nand_parts), 25); 67 96 kirkwood_ehci_init(); 68 97 kirkwood_ge00_init(&db88f6281_ge00_data); 69 98 kirkwood_sata_init(&db88f6281_sata_data); 70 99 kirkwood_uart0_init(); 71 100 kirkwood_sdio_init(&db88f6281_mvsdio_data); 72 - 73 - platform_device_register(&db88f6281_nand_flash); 74 101 } 75 102 76 103 static int __init db88f6281_pci_init(void)
+21
arch/arm/mach-kirkwood/include/mach/bridge-regs.h
··· 17 17 #define CPU_RESET 0x00000002 18 18 19 19 #define RSTOUTn_MASK (BRIDGE_VIRT_BASE | 0x0108) 20 + #define WDT_RESET_OUT_EN 0x00000002 20 21 #define SOFT_RESET_OUT_EN 0x00000004 21 22 22 23 #define SYSTEM_SOFT_RESET (BRIDGE_VIRT_BASE | 0x010c) 23 24 #define SOFT_RESET 0x00000001 24 25 25 26 #define BRIDGE_CAUSE (BRIDGE_VIRT_BASE | 0x0110) 27 + #define WDT_INT_REQ 0x0008 28 + 26 29 #define BRIDGE_MASK (BRIDGE_VIRT_BASE | 0x0114) 27 30 #define BRIDGE_INT_TIMER0 0x0002 28 31 #define BRIDGE_INT_TIMER1 0x0004 ··· 41 38 42 39 #define L2_CONFIG_REG (BRIDGE_VIRT_BASE | 0x0128) 43 40 #define L2_WRITETHROUGH 0x00000010 41 + 42 + #define CLOCK_GATING_CTRL (BRIDGE_VIRT_BASE | 0x11c) 43 + #define CGC_GE0 (1 << 0) 44 + #define CGC_PEX0 (1 << 2) 45 + #define CGC_USB0 (1 << 3) 46 + #define CGC_SDIO (1 << 4) 47 + #define CGC_TSU (1 << 5) 48 + #define CGC_DUNIT (1 << 6) 49 + #define CGC_RUNIT (1 << 7) 50 + #define CGC_XOR0 (1 << 8) 51 + #define CGC_AUDIO (1 << 9) 52 + #define CGC_SATA0 (1 << 14) 53 + #define CGC_SATA1 (1 << 15) 54 + #define CGC_XOR1 (1 << 16) 55 + #define CGC_CRYPTO (1 << 17) 56 + #define CGC_GE1 (1 << 19) 57 + #define CGC_TDM (1 << 20) 58 + #define CGC_RESERVED ((1 << 18) | (0x6 << 21)) 44 59 45 60 #endif
+25
arch/arm/mach-kirkwood/include/mach/io.h
··· 19 19 + KIRKWOOD_PCIE_IO_VIRT_BASE); 20 20 } 21 21 22 + static inline void __iomem * 23 + __arch_ioremap(unsigned long paddr, size_t size, unsigned int mtype) 24 + { 25 + void __iomem *retval; 26 + unsigned long offs = paddr - KIRKWOOD_REGS_PHYS_BASE; 27 + if (mtype == MT_DEVICE && size && offs < KIRKWOOD_REGS_SIZE && 28 + size <= KIRKWOOD_REGS_SIZE && offs + size <= KIRKWOOD_REGS_SIZE) { 29 + retval = (void __iomem *)KIRKWOOD_REGS_VIRT_BASE + offs; 30 + } else { 31 + retval = __arm_ioremap(paddr, size, mtype); 32 + } 33 + 34 + return retval; 35 + } 36 + 37 + static inline void 38 + __arch_iounmap(void __iomem *addr) 39 + { 40 + if (addr < (void __iomem *)KIRKWOOD_REGS_VIRT_BASE || 41 + addr >= (void __iomem *)(KIRKWOOD_REGS_VIRT_BASE + KIRKWOOD_REGS_SIZE)) 42 + __iounmap(addr); 43 + } 44 + 45 + #define __arch_ioremap(p, s, m) __arch_ioremap(p, s, m) 46 + #define __arch_iounmap(a) __arch_iounmap(a) 22 47 #define __io(a) __io(a) 23 48 #define __mem_pci(a) (a) 24 49
+15 -3
arch/arm/mach-kirkwood/include/mach/kirkwood.h
··· 20 20 * f1000000 on-chip peripheral registers 21 21 * f2000000 PCIe I/O space 22 22 * f3000000 NAND controller address window 23 + * f4000000 Security Accelerator SRAM 23 24 * 24 25 * virt phys size 25 26 * fee00000 f1000000 1M on-chip peripheral registers 26 27 * fef00000 f2000000 1M PCIe I/O space 27 28 */ 28 29 30 + #define KIRKWOOD_SRAM_PHYS_BASE 0xf4000000 31 + #define KIRKWOOD_SRAM_SIZE SZ_2K 32 + 29 33 #define KIRKWOOD_NAND_MEM_PHYS_BASE 0xf3000000 30 - #define KIRKWOOD_NAND_MEM_SIZE SZ_64K /* 1K is sufficient, but 64K 31 - * is the minimal window size 32 - */ 34 + #define KIRKWOOD_NAND_MEM_SIZE SZ_1K 33 35 34 36 #define KIRKWOOD_PCIE_IO_PHYS_BASE 0xf2000000 35 37 #define KIRKWOOD_PCIE_IO_VIRT_BASE 0xfef00000 ··· 50 48 */ 51 49 #define DDR_VIRT_BASE (KIRKWOOD_REGS_VIRT_BASE | 0x00000) 52 50 #define DDR_WINDOW_CPU_BASE (DDR_VIRT_BASE | 0x1500) 51 + #define DDR_OPERATION_BASE (DDR_VIRT_BASE | 0x1418) 53 52 54 53 #define DEV_BUS_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x10000) 55 54 #define DEV_BUS_VIRT_BASE (KIRKWOOD_REGS_VIRT_BASE | 0x10000) ··· 66 63 67 64 #define BRIDGE_VIRT_BASE (KIRKWOOD_REGS_VIRT_BASE | 0x20000) 68 65 66 + #define CRYPTO_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x30000) 67 + 69 68 #define PCIE_VIRT_BASE (KIRKWOOD_REGS_VIRT_BASE | 0x40000) 69 + #define PCIE_LINK_CTRL (PCIE_VIRT_BASE | 0x70) 70 + #define PCIE_STATUS (PCIE_VIRT_BASE | 0x1a04) 70 71 71 72 #define USB_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x50000) 72 73 ··· 87 80 #define GE01_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x74000) 88 81 89 82 #define SATA_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x80000) 83 + #define SATA_VIRT_BASE (KIRKWOOD_REGS_VIRT_BASE | 0x80000) 84 + #define SATA0_IF_CTRL (SATA_VIRT_BASE | 0x2050) 85 + #define SATA0_PHY_MODE_2 (SATA_VIRT_BASE | 0x2330) 86 + #define SATA1_IF_CTRL (SATA_VIRT_BASE | 0x4050) 87 + #define SATA1_PHY_MODE_2 (SATA_VIRT_BASE | 0x4330) 90 88 91 89 #define SDIO_PHYS_BASE (KIRKWOOD_REGS_PHYS_BASE | 0x90000) 92 90
+3
arch/arm/mach-kirkwood/mpp.c
··· 48 48 if (!variant_mask) 49 49 return; 50 50 51 + /* Initialize gpiolib. */ 52 + orion_gpio_init(); 53 + 51 54 printk(KERN_DEBUG "initial MPP regs:"); 52 55 for (i = 0; i < MPP_NR_REGS; i++) { 53 56 mpp_ctrl[i] = readl(MPP_CTRL(i));
+173
arch/arm/mach-kirkwood/mv88f6281gtw_ge-setup.c
··· 1 + /* 2 + * arch/arm/mach-kirkwood/mv88f6281gtw_ge-setup.c 3 + * 4 + * Marvell 88F6281 GTW GE Board Setup 5 + * 6 + * This file is licensed under the terms of the GNU General Public 7 + * License version 2. This program is licensed "as is" without any 8 + * warranty of any kind, whether express or implied. 9 + */ 10 + 11 + #include <linux/kernel.h> 12 + #include <linux/init.h> 13 + #include <linux/platform_device.h> 14 + #include <linux/pci.h> 15 + #include <linux/irq.h> 16 + #include <linux/mtd/physmap.h> 17 + #include <linux/timer.h> 18 + #include <linux/mv643xx_eth.h> 19 + #include <linux/ethtool.h> 20 + #include <linux/gpio.h> 21 + #include <linux/leds.h> 22 + #include <linux/input.h> 23 + #include <linux/gpio_keys.h> 24 + #include <linux/spi/flash.h> 25 + #include <linux/spi/spi.h> 26 + #include <linux/spi/orion_spi.h> 27 + #include <net/dsa.h> 28 + #include <asm/mach-types.h> 29 + #include <asm/mach/arch.h> 30 + #include <asm/mach/pci.h> 31 + #include <mach/kirkwood.h> 32 + #include "common.h" 33 + #include "mpp.h" 34 + 35 + static struct mv643xx_eth_platform_data mv88f6281gtw_ge_ge00_data = { 36 + .phy_addr = MV643XX_ETH_PHY_NONE, 37 + .speed = SPEED_1000, 38 + .duplex = DUPLEX_FULL, 39 + }; 40 + 41 + static struct dsa_chip_data mv88f6281gtw_ge_switch_chip_data = { 42 + .port_names[0] = "lan1", 43 + .port_names[1] = "lan2", 44 + .port_names[2] = "lan3", 45 + .port_names[3] = "lan4", 46 + .port_names[4] = "wan", 47 + .port_names[5] = "cpu", 48 + }; 49 + 50 + static struct dsa_platform_data mv88f6281gtw_ge_switch_plat_data = { 51 + .nr_chips = 1, 52 + .chip = &mv88f6281gtw_ge_switch_chip_data, 53 + }; 54 + 55 + static const struct flash_platform_data mv88f6281gtw_ge_spi_slave_data = { 56 + .type = "mx25l12805d", 57 + }; 58 + 59 + static struct spi_board_info __initdata mv88f6281gtw_ge_spi_slave_info[] = { 60 + { 61 + .modalias = "m25p80", 62 + .platform_data = &mv88f6281gtw_ge_spi_slave_data, 63 + .irq = -1, 64 + .max_speed_hz = 50000000, 65 + .bus_num = 
0, 66 + .chip_select = 0, 67 + }, 68 + }; 69 + 70 + static struct gpio_keys_button mv88f6281gtw_ge_button_pins[] = { 71 + { 72 + .code = KEY_RESTART, 73 + .gpio = 47, 74 + .desc = "SWR Button", 75 + .active_low = 1, 76 + }, { 77 + .code = KEY_F1, 78 + .gpio = 46, 79 + .desc = "WPS Button(F1)", 80 + .active_low = 1, 81 + }, 82 + }; 83 + 84 + static struct gpio_keys_platform_data mv88f6281gtw_ge_button_data = { 85 + .buttons = mv88f6281gtw_ge_button_pins, 86 + .nbuttons = ARRAY_SIZE(mv88f6281gtw_ge_button_pins), 87 + }; 88 + 89 + static struct platform_device mv88f6281gtw_ge_buttons = { 90 + .name = "gpio-keys", 91 + .id = -1, 92 + .num_resources = 0, 93 + .dev = { 94 + .platform_data = &mv88f6281gtw_ge_button_data, 95 + }, 96 + }; 97 + 98 + static struct gpio_led mv88f6281gtw_ge_led_pins[] = { 99 + { 100 + .name = "gtw:green:Status", 101 + .gpio = 20, 102 + .active_low = 0, 103 + }, { 104 + .name = "gtw:red:Status", 105 + .gpio = 21, 106 + .active_low = 0, 107 + }, { 108 + .name = "gtw:green:USB", 109 + .gpio = 12, 110 + .active_low = 0, 111 + }, 112 + }; 113 + 114 + static struct gpio_led_platform_data mv88f6281gtw_ge_led_data = { 115 + .leds = mv88f6281gtw_ge_led_pins, 116 + .num_leds = ARRAY_SIZE(mv88f6281gtw_ge_led_pins), 117 + }; 118 + 119 + static struct platform_device mv88f6281gtw_ge_leds = { 120 + .name = "leds-gpio", 121 + .id = -1, 122 + .dev = { 123 + .platform_data = &mv88f6281gtw_ge_led_data, 124 + }, 125 + }; 126 + 127 + static unsigned int mv88f6281gtw_ge_mpp_config[] __initdata = { 128 + MPP12_GPO, /* Status#_USB pin */ 129 + MPP20_GPIO, /* Status#_GLED pin */ 130 + MPP21_GPIO, /* Status#_RLED pin */ 131 + MPP46_GPIO, /* WPS_Switch pin */ 132 + MPP47_GPIO, /* SW_Init pin */ 133 + 0 134 + }; 135 + 136 + static void __init mv88f6281gtw_ge_init(void) 137 + { 138 + /* 139 + * Basic setup. Needs to be called early. 
140 + */ 141 + kirkwood_init(); 142 + kirkwood_mpp_conf(mv88f6281gtw_ge_mpp_config); 143 + 144 + kirkwood_ehci_init(); 145 + kirkwood_ge00_init(&mv88f6281gtw_ge_ge00_data); 146 + kirkwood_ge00_switch_init(&mv88f6281gtw_ge_switch_plat_data, NO_IRQ); 147 + spi_register_board_info(mv88f6281gtw_ge_spi_slave_info, 148 + ARRAY_SIZE(mv88f6281gtw_ge_spi_slave_info)); 149 + kirkwood_spi_init(); 150 + kirkwood_uart0_init(); 151 + platform_device_register(&mv88f6281gtw_ge_leds); 152 + platform_device_register(&mv88f6281gtw_ge_buttons); 153 + } 154 + 155 + static int __init mv88f6281gtw_ge_pci_init(void) 156 + { 157 + if (machine_is_mv88f6281gtw_ge()) 158 + kirkwood_pcie_init(); 159 + 160 + return 0; 161 + } 162 + subsys_initcall(mv88f6281gtw_ge_pci_init); 163 + 164 + MACHINE_START(MV88F6281GTW_GE, "Marvell 88F6281 GTW GE Board") 165 + /* Maintainer: Lennert Buytenhek <buytenh@marvell.com> */ 166 + .phys_io = KIRKWOOD_REGS_PHYS_BASE, 167 + .io_pg_offst = ((KIRKWOOD_REGS_VIRT_BASE) >> 18) & 0xfffc, 168 + .boot_params = 0x00000100, 169 + .init_machine = mv88f6281gtw_ge_init, 170 + .map_io = kirkwood_map_io, 171 + .init_irq = kirkwood_init_irq, 172 + .timer = &kirkwood_timer, 173 + MACHINE_END
+4
arch/arm/mach-kirkwood/pcie.c
··· 14 14 #include <asm/irq.h> 15 15 #include <asm/mach/pci.h> 16 16 #include <plat/pcie.h> 17 + #include <mach/bridge-regs.h> 17 18 #include "common.h" 18 19 19 20 ··· 96 95 static int kirkwood_pcie_setup(int nr, struct pci_sys_data *sys) 97 96 { 98 97 struct resource *res; 98 + extern unsigned int kirkwood_clk_ctrl; 99 99 100 100 /* 101 101 * Generic PCIe unit setup. ··· 134 132 135 133 sys->resource[2] = NULL; 136 134 sys->io_offset = 0; 135 + 136 + kirkwood_clk_ctrl |= CGC_PEX0; 137 137 138 138 return 1; 139 139 }
-2
arch/arm/mach-kirkwood/rd88f6192-nas-setup.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/init.h> 13 13 #include <linux/platform_device.h> 14 - #include <linux/mtd/nand.h> 15 - #include <linux/mtd/partitions.h> 16 14 #include <linux/ata_platform.h> 17 15 #include <linux/mv643xx_eth.h> 18 16 #include <linux/spi/flash.h>
+1 -30
arch/arm/mach-kirkwood/rd88f6281-setup.c
··· 12 12 #include <linux/init.h> 13 13 #include <linux/platform_device.h> 14 14 #include <linux/irq.h> 15 - #include <linux/mtd/nand.h> 16 15 #include <linux/mtd/partitions.h> 17 16 #include <linux/ata_platform.h> 18 17 #include <linux/mv643xx_eth.h> ··· 21 22 #include <asm/mach/arch.h> 22 23 #include <mach/kirkwood.h> 23 24 #include <plat/mvsdio.h> 24 - #include <plat/orion_nand.h> 25 25 #include "common.h" 26 26 #include "mpp.h" 27 27 ··· 38 40 .offset = MTDPART_OFS_NXTBLK, 39 41 .size = MTDPART_SIZ_FULL 40 42 }, 41 - }; 42 - 43 - static struct resource rd88f6281_nand_resource = { 44 - .flags = IORESOURCE_MEM, 45 - .start = KIRKWOOD_NAND_MEM_PHYS_BASE, 46 - .end = KIRKWOOD_NAND_MEM_PHYS_BASE + 47 - KIRKWOOD_NAND_MEM_SIZE - 1, 48 - }; 49 - 50 - static struct orion_nand_data rd88f6281_nand_data = { 51 - .parts = rd88f6281_nand_parts, 52 - .nr_parts = ARRAY_SIZE(rd88f6281_nand_parts), 53 - .cle = 0, 54 - .ale = 1, 55 - .width = 8, 56 - .chip_delay = 25, 57 - }; 58 - 59 - static struct platform_device rd88f6281_nand_flash = { 60 - .name = "orion_nand", 61 - .id = -1, 62 - .dev = { 63 - .platform_data = &rd88f6281_nand_data, 64 - }, 65 - .resource = &rd88f6281_nand_resource, 66 - .num_resources = 1, 67 43 }; 68 44 69 45 static struct mv643xx_eth_platform_data rd88f6281_ge00_data = { ··· 86 114 kirkwood_init(); 87 115 kirkwood_mpp_conf(rd88f6281_mpp_config); 88 116 117 + kirkwood_nand_init(ARRAY_AND_SIZE(rd88f6281_nand_parts), 25); 89 118 kirkwood_ehci_init(); 90 119 91 120 kirkwood_ge00_init(&rd88f6281_ge00_data); ··· 102 129 kirkwood_sata_init(&rd88f6281_sata_data); 103 130 kirkwood_sdio_init(&rd88f6281_mvsdio_data); 104 131 kirkwood_uart0_init(); 105 - 106 - platform_device_register(&rd88f6281_nand_flash); 107 132 } 108 133 109 134 static int __init rd88f6281_pci_init(void)
+2 -30
arch/arm/mach-kirkwood/sheevaplug-setup.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/init.h> 13 13 #include <linux/platform_device.h> 14 - #include <linux/mtd/nand.h> 15 14 #include <linux/mtd/partitions.h> 16 15 #include <linux/mv643xx_eth.h> 17 16 #include <linux/gpio.h> ··· 19 20 #include <asm/mach/arch.h> 20 21 #include <mach/kirkwood.h> 21 22 #include <plat/mvsdio.h> 22 - #include <plat/orion_nand.h> 23 23 #include "common.h" 24 24 #include "mpp.h" 25 25 ··· 38 40 }, 39 41 }; 40 42 41 - static struct resource sheevaplug_nand_resource = { 42 - .flags = IORESOURCE_MEM, 43 - .start = KIRKWOOD_NAND_MEM_PHYS_BASE, 44 - .end = KIRKWOOD_NAND_MEM_PHYS_BASE + 45 - KIRKWOOD_NAND_MEM_SIZE - 1, 46 - }; 47 - 48 - static struct orion_nand_data sheevaplug_nand_data = { 49 - .parts = sheevaplug_nand_parts, 50 - .nr_parts = ARRAY_SIZE(sheevaplug_nand_parts), 51 - .cle = 0, 52 - .ale = 1, 53 - .width = 8, 54 - .chip_delay = 25, 55 - }; 56 - 57 - static struct platform_device sheevaplug_nand_flash = { 58 - .name = "orion_nand", 59 - .id = -1, 60 - .dev = { 61 - .platform_data = &sheevaplug_nand_data, 62 - }, 63 - .resource = &sheevaplug_nand_resource, 64 - .num_resources = 1, 65 - }; 66 - 67 43 static struct mv643xx_eth_platform_data sheevaplug_ge00_data = { 68 44 .phy_addr = MV643XX_ETH_PHY_ADDR(0), 69 45 }; 70 46 71 47 static struct mvsdio_platform_data sheevaplug_mvsdio_data = { 72 - // unfortunately the CD signal has not been connected */ 48 + /* unfortunately the CD signal has not been connected */ 73 49 }; 74 50 75 51 static struct gpio_led sheevaplug_led_pins[] = { ··· 83 111 kirkwood_mpp_conf(sheevaplug_mpp_config); 84 112 85 113 kirkwood_uart0_init(); 114 + kirkwood_nand_init(ARRAY_AND_SIZE(sheevaplug_nand_parts), 25); 86 115 87 116 if (gpio_request(29, "USB Power Enable") != 0 || 88 117 gpio_direction_output(29, 1) != 0) ··· 93 120 kirkwood_ge00_init(&sheevaplug_ge00_data); 94 121 kirkwood_sdio_init(&sheevaplug_mvsdio_data); 95 122 96 - platform_device_register(&sheevaplug_nand_flash); 97 123 
platform_device_register(&sheevaplug_leds); 98 124 } 99 125
+2 -4
arch/arm/mach-kirkwood/ts219-setup.c
··· 142 142 MPP1_SPI_MOSI, 143 143 MPP2_SPI_SCK, 144 144 MPP3_SPI_MISO, 145 + MPP4_SATA1_ACTn, 146 + MPP5_SATA0_ACTn, 145 147 MPP8_TW_SDA, 146 148 MPP9_TW_SCK, 147 149 MPP10_UART0_TXD, ··· 152 150 MPP14_UART1_RXD, /* PIC controller */ 153 151 MPP15_GPIO, /* USB Copy button */ 154 152 MPP16_GPIO, /* Reset button */ 155 - MPP20_SATA1_ACTn, 156 - MPP21_SATA0_ACTn, 157 - MPP22_SATA1_PRESENTn, 158 - MPP23_SATA0_PRESENTn, 159 153 0 160 154 }; 161 155
+6
arch/arm/mach-loki/common.c
··· 82 82 .id = 0, 83 83 .num_resources = 1, 84 84 .resource = loki_ge0_resources, 85 + .dev = { 86 + .coherent_dma_mask = 0xffffffff, 87 + }, 85 88 }; 86 89 87 90 void __init loki_ge0_init(struct mv643xx_eth_platform_data *eth_data) ··· 139 136 .id = 1, 140 137 .num_resources = 1, 141 138 .resource = loki_ge1_resources, 139 + .dev = { 140 + .coherent_dma_mask = 0xffffffff, 141 + }, 142 142 }; 143 143 144 144 void __init loki_ge1_init(struct mv643xx_eth_platform_data *eth_data)
+5
arch/arm/mach-mmp/include/mach/mfp-pxa168.h
··· 3 3 4 4 #include <mach/mfp.h> 5 5 6 + #define MFP_DRIVE_VERY_SLOW (0x0 << 13) 7 + #define MFP_DRIVE_SLOW (0x1 << 13) 8 + #define MFP_DRIVE_MEDIUM (0x2 << 13) 9 + #define MFP_DRIVE_FAST (0x3 << 13) 10 + 6 11 /* GPIO */ 7 12 #define GPIO0_GPIO MFP_CFG(GPIO0, AF5) 8 13 #define GPIO1_GPIO MFP_CFG(GPIO1, AF5)
+5
arch/arm/mach-mmp/include/mach/mfp-pxa910.h
··· 3 3 4 4 #include <mach/mfp.h> 5 5 6 + #define MFP_DRIVE_VERY_SLOW (0x0 << 13) 7 + #define MFP_DRIVE_SLOW (0x2 << 13) 8 + #define MFP_DRIVE_MEDIUM (0x4 << 13) 9 + #define MFP_DRIVE_FAST (0x8 << 13) 10 + 6 11 /* UART2 */ 7 12 #define GPIO47_UART2_RXD MFP_CFG(GPIO47, AF6) 8 13 #define GPIO48_UART2_TXD MFP_CFG(GPIO48, AF6)
+3 -6
arch/arm/mach-mmp/include/mach/mfp.h
··· 12 12 * possible, we make the following compromise: 13 13 * 14 14 * 1. SLEEP_OE_N will always be programmed to '1' (by MFP_LPM_FLOAT) 15 - * 2. DRIVE strength definitions redefined to include the reserved bit10 15 + * 2. DRIVE strength definitions redefined to include the reserved bit 16 + * - the reserved bit differs between pxa168 and pxa910, and the 17 + * MFP_DRIVE_* macros are individually defined in mfp-pxa{168,910}.h 16 18 * 3. Override MFP_CFG() and MFP_CFG_DRV() 17 19 * 4. Drop the use of MFP_CFG_LPM() and MFP_CFG_X() 18 20 */ 19 - 20 - #define MFP_DRIVE_VERY_SLOW (0x0 << 13) 21 - #define MFP_DRIVE_SLOW (0x2 << 13) 22 - #define MFP_DRIVE_MEDIUM (0x4 << 13) 23 - #define MFP_DRIVE_FAST (0x8 << 13) 24 21 25 22 #undef MFP_CFG 26 23 #undef MFP_CFG_DRV
+1 -1
arch/arm/mach-mmp/time.c
··· 136 136 .set_mode = timer_set_mode, 137 137 }; 138 138 139 - static cycle_t clksrc_read(void) 139 + static cycle_t clksrc_read(struct clocksource *cs) 140 140 { 141 141 return timer_read(); 142 142 }
+12 -4
arch/arm/mach-mv78xx0/common.c
··· 321 321 .id = 0, 322 322 .num_resources = 1, 323 323 .resource = mv78xx0_ge00_resources, 324 + .dev = { 325 + .coherent_dma_mask = 0xffffffff, 326 + }, 324 327 }; 325 328 326 329 void __init mv78xx0_ge00_init(struct mv643xx_eth_platform_data *eth_data) ··· 378 375 .id = 1, 379 376 .num_resources = 1, 380 377 .resource = mv78xx0_ge01_resources, 378 + .dev = { 379 + .coherent_dma_mask = 0xffffffff, 380 + }, 381 381 }; 382 382 383 383 void __init mv78xx0_ge01_init(struct mv643xx_eth_platform_data *eth_data) ··· 435 429 .id = 2, 436 430 .num_resources = 1, 437 431 .resource = mv78xx0_ge10_resources, 432 + .dev = { 433 + .coherent_dma_mask = 0xffffffff, 434 + }, 438 435 }; 439 436 440 437 void __init mv78xx0_ge10_init(struct mv643xx_eth_platform_data *eth_data) ··· 505 496 .id = 3, 506 497 .num_resources = 1, 507 498 .resource = mv78xx0_ge11_resources, 499 + .dev = { 500 + .coherent_dma_mask = 0xffffffff, 501 + }, 508 502 }; 509 503 510 504 void __init mv78xx0_ge11_init(struct mv643xx_eth_platform_data *eth_data) ··· 544 532 545 533 static struct resource mv78xx0_i2c_0_resources[] = { 546 534 { 547 - .name = "i2c 0 base", 548 535 .start = I2C_0_PHYS_BASE, 549 536 .end = I2C_0_PHYS_BASE + 0x1f, 550 537 .flags = IORESOURCE_MEM, 551 538 }, { 552 - .name = "i2c 0 irq", 553 539 .start = IRQ_MV78XX0_I2C_0, 554 540 .end = IRQ_MV78XX0_I2C_0, 555 541 .flags = IORESOURCE_IRQ, ··· 577 567 578 568 static struct resource mv78xx0_i2c_1_resources[] = { 579 569 { 580 - .name = "i2c 1 base", 581 570 .start = I2C_1_PHYS_BASE, 582 571 .end = I2C_1_PHYS_BASE + 0x1f, 583 572 .flags = IORESOURCE_MEM, 584 573 }, { 585 - .name = "i2c 1 irq", 586 574 .start = IRQ_MV78XX0_I2C_1, 587 575 .end = IRQ_MV78XX0_I2C_1, 588 576 .flags = IORESOURCE_IRQ,
+3
arch/arm/mach-mv78xx0/irq.c
··· 28 28 { 29 29 int i; 30 30 31 + /* Initialize gpiolib. */ 32 + orion_gpio_init(); 33 + 31 34 orion_irq_init(0, (void __iomem *)(IRQ_VIRT_BASE + IRQ_MASK_LOW_OFF)); 32 35 orion_irq_init(32, (void __iomem *)(IRQ_VIRT_BASE + IRQ_MASK_HIGH_OFF)); 33 36 orion_irq_init(64, (void __iomem *)(IRQ_VIRT_BASE + IRQ_MASK_ERR_OFF));
+12 -2
arch/arm/mach-orion5x/addr-map.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/mbus.h> 16 16 #include <linux/io.h> 17 + #include <linux/errno.h> 17 18 #include <mach/hardware.h> 18 19 #include "common.h" 19 20 ··· 45 44 #define TARGET_DEV_BUS 1 46 45 #define TARGET_PCI 3 47 46 #define TARGET_PCIE 4 47 + #define TARGET_SRAM 9 48 48 #define ATTR_PCIE_MEM 0x59 49 49 #define ATTR_PCIE_IO 0x51 50 50 #define ATTR_PCIE_WA 0x79 ··· 55 53 #define ATTR_DEV_CS1 0x1d 56 54 #define ATTR_DEV_CS2 0x1b 57 55 #define ATTR_DEV_BOOT 0xf 56 + #define ATTR_SRAM 0x0 58 57 59 58 /* 60 59 * Helpers to get DDR bank info ··· 90 87 return 0; 91 88 } 92 89 93 - static void __init setup_cpu_win(int win, u32 base, u32 size, 90 + static int __init setup_cpu_win(int win, u32 base, u32 size, 94 91 u8 target, u8 attr, int remap) 95 92 { 96 93 if (win >= 8) { 97 94 printk(KERN_ERR "setup_cpu_win: trying to allocate " 98 95 "window %d\n", win); 99 - return; 96 + return -ENOSPC; 100 97 } 101 98 102 99 writel(base & 0xffff0000, CPU_WIN_BASE(win)); ··· 110 107 writel(remap & 0xffff0000, CPU_WIN_REMAP_LO(win)); 111 108 writel(0, CPU_WIN_REMAP_HI(win)); 112 109 } 110 + return 0; 113 111 } 114 112 115 113 void __init orion5x_setup_cpu_mbus_bridge(void) ··· 196 192 { 197 193 setup_cpu_win(win_alloc_count++, base, size, 198 194 TARGET_PCIE, ATTR_PCIE_WA, -1); 195 + } 196 + 197 + int __init orion5x_setup_sram_win(void) 198 + { 199 + return setup_cpu_win(win_alloc_count, ORION5X_SRAM_PHYS_BASE, 200 + ORION5X_SRAM_SIZE, TARGET_SRAM, ATTR_SRAM, -1); 199 201 }
+42 -5
arch/arm/mach-orion5x/common.c
··· 31 31 #include <plat/ehci-orion.h> 32 32 #include <plat/mv_xor.h> 33 33 #include <plat/orion_nand.h> 34 - #include <plat/orion5x_wdt.h> 34 + #include <plat/orion_wdt.h> 35 35 #include <plat/time.h> 36 36 #include "common.h" 37 37 ··· 188 188 .id = 0, 189 189 .num_resources = 1, 190 190 .resource = orion5x_eth_resources, 191 + .dev = { 192 + .coherent_dma_mask = 0xffffffff, 193 + }, 191 194 }; 192 195 193 196 void __init orion5x_eth_init(struct mv643xx_eth_platform_data *eth_data) ··· 251 248 252 249 static struct resource orion5x_i2c_resources[] = { 253 250 { 254 - .name = "i2c base", 255 251 .start = I2C_PHYS_BASE, 256 252 .end = I2C_PHYS_BASE + 0x1f, 257 253 .flags = IORESOURCE_MEM, 258 254 }, { 259 - .name = "i2c irq", 260 255 .start = IRQ_ORION5X_I2C, 261 256 .end = IRQ_ORION5X_I2C, 262 257 .flags = IORESOURCE_IRQ, ··· 536 535 platform_device_register(&orion5x_xor1_channel); 537 536 } 538 537 538 + static struct resource orion5x_crypto_res[] = { 539 + { 540 + .name = "regs", 541 + .start = ORION5X_CRYPTO_PHYS_BASE, 542 + .end = ORION5X_CRYPTO_PHYS_BASE + 0xffff, 543 + .flags = IORESOURCE_MEM, 544 + }, { 545 + .name = "sram", 546 + .start = ORION5X_SRAM_PHYS_BASE, 547 + .end = ORION5X_SRAM_PHYS_BASE + SZ_8K - 1, 548 + .flags = IORESOURCE_MEM, 549 + }, { 550 + .name = "crypto interrupt", 551 + .start = IRQ_ORION5X_CESA, 552 + .end = IRQ_ORION5X_CESA, 553 + .flags = IORESOURCE_IRQ, 554 + }, 555 + }; 556 + 557 + static struct platform_device orion5x_crypto_device = { 558 + .name = "mv_crypto", 559 + .id = -1, 560 + .num_resources = ARRAY_SIZE(orion5x_crypto_res), 561 + .resource = orion5x_crypto_res, 562 + }; 563 + 564 + int __init orion5x_crypto_init(void) 565 + { 566 + int ret; 567 + 568 + ret = orion5x_setup_sram_win(); 569 + if (ret) 570 + return ret; 571 + 572 + return platform_device_register(&orion5x_crypto_device); 573 + } 539 574 540 575 /***************************************************************************** 541 576 * Watchdog 542 577 ****************************************************************************/
543 - static struct orion5x_wdt_platform_data orion5x_wdt_data = { 578 + static struct orion_wdt_platform_data orion5x_wdt_data = { 544 579 .tclk = 0, 545 580 }; 546 581 547 582 static struct platform_device orion5x_wdt_device = { 548 - .name = "orion5x_wdt", 583 + .name = "orion_wdt", 549 584 .id = -1, 550 585 .dev = { 551 586 .platform_data = &orion5x_wdt_data,
+2
arch/arm/mach-orion5x/common.h
··· 26 26 void orion5x_setup_dev1_win(u32 base, u32 size); 27 27 void orion5x_setup_dev2_win(u32 base, u32 size); 28 28 void orion5x_setup_pcie_wa_win(u32 base, u32 size); 29 + int orion5x_setup_sram_win(void); 29 30 30 31 void orion5x_ehci0_init(void); 31 32 void orion5x_ehci1_init(void); ··· 38 37 void orion5x_uart0_init(void); 39 38 void orion5x_uart1_init(void); 40 39 void orion5x_xor_init(void); 40 + int orion5x_crypto_init(void); 41 41 42 42 /* 43 43 * PCIe/PCI functions.
+2 -2
arch/arm/mach-orion5x/include/mach/bridge-regs.h
··· 17 17 18 18 #define CPU_CTRL (ORION5X_BRIDGE_VIRT_BASE | 0x104) 19 19 20 - #define CPU_RESET_MASK (ORION5X_BRIDGE_VIRT_BASE | 0x108) 21 - #define WDT_RESET 0x0002 20 + #define RSTOUTn_MASK (ORION5X_BRIDGE_VIRT_BASE | 0x108) 21 + #define WDT_RESET_OUT_EN 0x0002 22 22 23 23 #define CPU_SOFT_RESET (ORION5X_BRIDGE_VIRT_BASE | 0x10c) 24 24
+6
arch/arm/mach-orion5x/include/mach/orion5x.h
··· 24 24 * f1000000 on-chip peripheral registers 25 25 * f2000000 PCIe I/O space 26 26 * f2100000 PCI I/O space 27 + * f2200000 SRAM dedicated for the crypto unit 27 28 * f4000000 device bus mappings (boot) 28 29 * fa000000 device bus mappings (cs0) 29 30 * fa800000 device bus mappings (cs2) ··· 49 48 #define ORION5X_PCI_IO_VIRT_BASE 0xfdf00000 50 49 #define ORION5X_PCI_IO_BUS_BASE 0x00100000 51 50 #define ORION5X_PCI_IO_SIZE SZ_1M 51 + 52 + #define ORION5X_SRAM_PHYS_BASE (0xf2200000) 53 + #define ORION5X_SRAM_SIZE SZ_8K 52 54 53 55 /* Relevant only for Orion-1/Orion-NAS */ 54 56 #define ORION5X_PCIE_WA_PHYS_BASE 0xf0000000 ··· 97 93 98 94 #define ORION5X_SATA_PHYS_BASE (ORION5X_REGS_PHYS_BASE | 0x80000) 99 95 #define ORION5X_SATA_VIRT_BASE (ORION5X_REGS_VIRT_BASE | 0x80000) 96 + 97 + #define ORION5X_CRYPTO_PHYS_BASE (ORION5X_REGS_PHYS_BASE | 0x90000) 100 98 101 99 #define ORION5X_USB1_PHYS_BASE (ORION5X_REGS_PHYS_BASE | 0xa0000) 102 100 #define ORION5X_USB1_VIRT_BASE (ORION5X_REGS_VIRT_BASE | 0xa0000)
+1 -1
arch/arm/mach-orion5x/include/mach/system.h
··· 23 23 /* 24 24 * Enable and issue soft reset 25 25 */ 26 - orion5x_setbits(CPU_RESET_MASK, (1 << 2)); 26 + orion5x_setbits(RSTOUTn_MASK, (1 << 2)); 27 27 orion5x_setbits(CPU_SOFT_RESET, 1); 28 28 } 29 29
+3
arch/arm/mach-orion5x/mpp.c
··· 124 124 u32 mpp_8_15_ctrl = readl(MPP_8_15_CTRL); 125 125 u32 mpp_16_19_ctrl = readl(MPP_16_19_CTRL); 126 126 127 + /* Initialize gpiolib. */ 128 + orion_gpio_init(); 129 + 127 130 while (mode->mpp >= 0) { 128 131 u32 *reg; 129 132 int num_type;
+2 -2
arch/arm/mach-orion5x/mss2-setup.c
··· 181 181 /* 182 182 * Enable and issue soft reset 183 183 */ 184 - reg = readl(CPU_RESET_MASK); 184 + reg = readl(RSTOUTn_MASK); 185 185 reg |= 1 << 2; 186 - writel(reg, CPU_RESET_MASK); 186 + writel(reg, RSTOUTn_MASK); 187 187 188 188 reg = readl(CPU_SOFT_RESET); 189 189 reg |= 1;
+1
arch/arm/mach-orion5x/ts78xx-fpga.h
··· 25 25 /* Technologic Systems */ 26 26 struct fpga_device ts_rtc; 27 27 struct fpga_device ts_nand; 28 + struct fpga_device ts_rng; 28 29 }; 29 30 30 31 struct ts78xx_fpga_data {
+58
arch/arm/mach-orion5x/ts78xx-setup.c
··· 17 17 #include <linux/m48t86.h> 18 18 #include <linux/mtd/nand.h> 19 19 #include <linux/mtd/partitions.h> 20 + #include <linux/timeriomem-rng.h> 20 21 #include <asm/mach-types.h> 21 22 #include <asm/mach/arch.h> 22 23 #include <asm/mach/map.h> ··· 271 270 } 272 271 273 272 /***************************************************************************** 273 + * HW RNG 274 + ****************************************************************************/ 275 + #define TS_RNG_DATA (TS78XX_FPGA_REGS_PHYS_BASE | 0x044) 276 + 277 + static struct resource ts78xx_ts_rng_resource = { 278 + .flags = IORESOURCE_MEM, 279 + .start = TS_RNG_DATA, 280 + .end = TS_RNG_DATA + 4 - 1, 281 + }; 282 + 283 + static struct timeriomem_rng_data ts78xx_ts_rng_data = { 284 + .period = 1000000, /* one second */ 285 + }; 286 + 287 + static struct platform_device ts78xx_ts_rng_device = { 288 + .name = "timeriomem_rng", 289 + .id = -1, 290 + .dev = { 291 + .platform_data = &ts78xx_ts_rng_data, 292 + }, 293 + .resource = &ts78xx_ts_rng_resource, 294 + .num_resources = 1, 295 + }; 296 + 297 + static int ts78xx_ts_rng_load(void) 298 + { 299 + int rc; 300 + 301 + if (ts78xx_fpga.supports.ts_rng.init == 0) { 302 + rc = platform_device_register(&ts78xx_ts_rng_device); 303 + if (!rc) 304 + ts78xx_fpga.supports.ts_rng.init = 1; 305 + } else 306 + rc = platform_device_add(&ts78xx_ts_rng_device); 307 + 308 + return rc; 309 + }; 310 + 311 + static void ts78xx_ts_rng_unload(void) 312 + { 313 + platform_device_del(&ts78xx_ts_rng_device); 314 + } 315 + 316 + /***************************************************************************** 274 317 * FPGA 'hotplug' support code 275 318 ****************************************************************************/ 276 319 static void ts78xx_fpga_devices_zero_init(void) 277 320 { 278 321 ts78xx_fpga.supports.ts_rtc.init = 0; 279 322 ts78xx_fpga.supports.ts_nand.init = 0; 323 + ts78xx_fpga.supports.ts_rng.init = 0; 280 324 } 281 325 282 326 static void ts78xx_fpga_supports(void)
··· 335 289 case TS7800_REV_5: 336 290 ts78xx_fpga.supports.ts_rtc.present = 1; 337 291 ts78xx_fpga.supports.ts_nand.present = 1; 292 + ts78xx_fpga.supports.ts_rng.present = 1; 338 293 break; 339 294 default: 340 295 ts78xx_fpga.supports.ts_rtc.present = 0; 341 296 ts78xx_fpga.supports.ts_nand.present = 0; 297 + ts78xx_fpga.supports.ts_rng.present = 0; 342 298 } 343 299 } 344 300 ··· 364 316 } 365 317 ret |= tmp; 366 318 } 319 + if (ts78xx_fpga.supports.ts_rng.present == 1) { 320 + tmp = ts78xx_ts_rng_load(); 321 + if (tmp) { 322 + printk(KERN_INFO "TS-78xx: RNG not registered\n"); 323 + ts78xx_fpga.supports.ts_rng.present = 0; 324 + } 325 + ret |= tmp; 326 + } 367 327 368 328 return ret; 369 329 } ··· 384 328 ts78xx_ts_rtc_unload(); 385 329 if (ts78xx_fpga.supports.ts_nand.present == 1) 386 330 ts78xx_ts_nand_unload(); 331 + if (ts78xx_fpga.supports.ts_rng.present == 1) 332 + ts78xx_ts_rng_unload(); 387 333 388 334 return ret; 389 335 }
+16
arch/arm/mach-orion5x/wnr854t-setup.c
··· 15 15 #include <linux/mtd/physmap.h> 16 16 #include <linux/mv643xx_eth.h> 17 17 #include <linux/ethtool.h> 18 + #include <net/dsa.h> 18 19 #include <asm/mach-types.h> 19 20 #include <asm/gpio.h> 20 21 #include <asm/mach/arch.h> ··· 98 97 .duplex = DUPLEX_FULL, 99 98 }; 100 99 100 + static struct dsa_chip_data wnr854t_switch_chip_data = { 101 + .port_names[0] = "lan3", 102 + .port_names[1] = "lan4", 103 + .port_names[2] = "wan", 104 + .port_names[3] = "cpu", 105 + .port_names[5] = "lan1", 106 + .port_names[7] = "lan2", 107 + }; 108 + 109 + static struct dsa_platform_data wnr854t_switch_plat_data = { 110 + .nr_chips = 1, 111 + .chip = &wnr854t_switch_chip_data, 112 + }; 113 + 101 114 static void __init wnr854t_init(void) 102 115 { 103 116 /* ··· 125 110 * Configure peripherals. 126 111 */ 127 112 orion5x_eth_init(&wnr854t_eth_data); 113 + orion5x_eth_switch_init(&wnr854t_switch_plat_data, NO_IRQ); 128 114 orion5x_uart0_init(); 129 115 130 116 orion5x_setup_dev_boot_win(WNR854T_NOR_BOOT_BASE,
+18 -18
arch/arm/mach-pxa/ezx.c
··· 111 111 GPIO25_SSP1_TXD, 112 112 GPIO26_SSP1_RXD, 113 113 GPIO24_GPIO, /* pcap chip select */ 114 - GPIO1_GPIO, /* pcap interrupt */ 115 - GPIO4_GPIO, /* WDI_AP */ 116 - GPIO55_GPIO, /* SYS_RESTART */ 114 + GPIO1_GPIO | WAKEUP_ON_EDGE_RISE, /* pcap interrupt */ 115 + GPIO4_GPIO | MFP_LPM_DRIVE_HIGH, /* WDI_AP */ 116 + GPIO55_GPIO | MFP_LPM_DRIVE_HIGH, /* SYS_RESTART */ 117 117 118 118 /* MMC */ 119 119 GPIO32_MMC_CLK, ··· 144 144 #if defined(CONFIG_MACH_EZX_A780) || defined(CONFIG_MACH_EZX_E680) 145 145 static unsigned long gen1_pin_config[] __initdata = { 146 146 /* flip / lockswitch */ 147 - GPIO12_GPIO, 147 + GPIO12_GPIO | WAKEUP_ON_EDGE_BOTH, 148 148 149 149 /* bluetooth (bcm2035) */ 150 - GPIO14_GPIO | WAKEUP_ON_LEVEL_HIGH, /* HOSTWAKE */ 150 + GPIO14_GPIO | WAKEUP_ON_EDGE_RISE, /* HOSTWAKE */ 151 151 GPIO48_GPIO, /* RESET */ 152 152 GPIO28_GPIO, /* WAKEUP */ 153 153 154 154 /* Neptune handshake */ 155 - GPIO0_GPIO | WAKEUP_ON_LEVEL_HIGH, /* BP_RDY */ 156 - GPIO57_GPIO, /* AP_RDY */ 157 - GPIO13_GPIO | WAKEUP_ON_LEVEL_HIGH, /* WDI */ 158 - GPIO3_GPIO | WAKEUP_ON_LEVEL_HIGH, /* WDI2 */ 159 - GPIO82_GPIO, /* RESET */ 160 - GPIO99_GPIO, /* TC_MM_EN */ 155 + GPIO0_GPIO | WAKEUP_ON_EDGE_FALL, /* BP_RDY */ 156 + GPIO57_GPIO | MFP_LPM_DRIVE_HIGH, /* AP_RDY */ 157 + GPIO13_GPIO | WAKEUP_ON_EDGE_BOTH, /* WDI */ 158 + GPIO3_GPIO | WAKEUP_ON_EDGE_BOTH, /* WDI2 */ 159 + GPIO82_GPIO | MFP_LPM_DRIVE_HIGH, /* RESET */ 160 + GPIO99_GPIO | MFP_LPM_DRIVE_HIGH, /* TC_MM_EN */ 161 161 162 162 /* sound */ 163 163 GPIO52_SSP3_SCLK, ··· 199 199 defined(CONFIG_MACH_EZX_E2) || defined(CONFIG_MACH_EZX_E6) 200 200 static unsigned long gen2_pin_config[] __initdata = { 201 201 /* flip / lockswitch */ 202 - GPIO15_GPIO, 202 + GPIO15_GPIO | WAKEUP_ON_EDGE_BOTH, 203 203 204 204 /* EOC */ 205 - GPIO10_GPIO, 205 + GPIO10_GPIO | WAKEUP_ON_EDGE_RISE, 206 206 207 207 /* bluetooth (bcm2045) */ 208 - GPIO13_GPIO | WAKEUP_ON_LEVEL_HIGH, /* HOSTWAKE */ 208 + GPIO13_GPIO | WAKEUP_ON_EDGE_RISE, /* HOSTWAKE */
209 209 GPIO37_GPIO, /* RESET */ 210 210 GPIO57_GPIO, /* WAKEUP */ 211 211 212 212 /* Neptune handshake */ 213 - GPIO0_GPIO | WAKEUP_ON_LEVEL_HIGH, /* BP_RDY */ 214 - GPIO96_GPIO, /* AP_RDY */ 215 - GPIO3_GPIO | WAKEUP_ON_LEVEL_HIGH, /* WDI */ 216 - GPIO116_GPIO, /* RESET */ 213 + GPIO0_GPIO | WAKEUP_ON_EDGE_FALL, /* BP_RDY */ 214 + GPIO96_GPIO | MFP_LPM_DRIVE_HIGH, /* AP_RDY */ 215 + GPIO3_GPIO | WAKEUP_ON_EDGE_FALL, /* WDI */ 216 + GPIO116_GPIO | MFP_LPM_DRIVE_HIGH, /* RESET */ 217 217 GPIO41_GPIO, /* BP_FLASH */ 218 218 219 219 /* sound */
+3 -2
arch/arm/mach-pxa/include/mach/reset.h
··· 13 13 /** 14 14 * init_gpio_reset() - register GPIO as reset generator 15 15 * @gpio: gpio nr 16 - * @output: set gpio as out/low instead of input during normal work 16 + * @output: set gpio as output instead of input during normal work 17 + * @level: output level 17 18 */ 18 - extern int init_gpio_reset(int gpio, int output); 19 + extern int init_gpio_reset(int gpio, int output, int level); 19 20 20 21 #endif /* __ASM_ARCH_RESET_H */
+6
arch/arm/mach-pxa/mfp-pxa2xx.c
··· 322 322 #ifdef CONFIG_PM 323 323 static unsigned long saved_gafr[2][4]; 324 324 static unsigned long saved_gpdr[4]; 325 + static unsigned long saved_pgsr[4]; 325 326 326 327 static int pxa2xx_mfp_suspend(struct sys_device *d, pm_message_t state) 327 328 { ··· 333 332 saved_gafr[0][i] = GAFR_L(i); 334 333 saved_gafr[1][i] = GAFR_U(i); 335 334 saved_gpdr[i] = GPDR(i * 32); 335 + saved_pgsr[i] = PGSR(i); 336 336 337 337 GPDR(i * 32) = gpdr_lpm[i]; 338 338 } ··· 348 346 GAFR_L(i) = saved_gafr[0][i]; 349 347 GAFR_U(i) = saved_gafr[1][i]; 350 348 GPDR(i * 32) = saved_gpdr[i]; 349 + PGSR(i) = saved_pgsr[i]; 351 350 } 352 351 PSSR = PSSR_RDH | PSSR_PH; 353 352 return 0; ··· 376 373 377 374 if (cpu_is_pxa27x()) 378 375 pxa27x_mfp_init(); 376 + 377 + /* clear RDH bit to enable GPIO receivers after reset/sleep exit */ 378 + PSSR = PSSR_RDH; 379 379 380 380 /* initialize gafr_run[], pgsr_lpm[] from existing values */ 381 381 for (i = 0; i <= gpio_to_bank(pxa_last_gpio); i++)
+2
arch/arm/mach-pxa/palmld.c
··· 62 62 GPIO29_AC97_SDATA_IN_0, 63 63 GPIO30_AC97_SDATA_OUT, 64 64 GPIO31_AC97_SYNC, 65 + GPIO89_AC97_SYSCLK, 66 + GPIO95_AC97_nRESET, 65 67 66 68 /* IrDA */ 67 69 GPIO108_GPIO, /* ir disable */
+1
arch/arm/mach-pxa/palmt5.c
··· 64 64 GPIO29_AC97_SDATA_IN_0, 65 65 GPIO30_AC97_SDATA_OUT, 66 66 GPIO31_AC97_SYNC, 67 + GPIO89_AC97_SYSCLK, 67 68 GPIO95_AC97_nRESET, 68 69 69 70 /* IrDA */
+1
arch/arm/mach-pxa/palmtx.c
··· 65 65 GPIO29_AC97_SDATA_IN_0, 66 66 GPIO30_AC97_SDATA_OUT, 67 67 GPIO31_AC97_SYNC, 68 + GPIO89_AC97_SYSCLK, 68 69 GPIO95_AC97_nRESET, 69 70 70 71 /* IrDA */
+2 -2
arch/arm/mach-pxa/reset.c
··· 20 20 21 21 static int reset_gpio = -1; 22 22 23 - int init_gpio_reset(int gpio, int output) 23 + int init_gpio_reset(int gpio, int output, int level) 24 24 { 25 25 int rc; 26 26 ··· 31 31 } 32 32 33 33 if (output) 34 - rc = gpio_direction_output(gpio, 0); 34 + rc = gpio_direction_output(gpio, level); 35 35 else 36 36 rc = gpio_direction_input(gpio); 37 37 if (rc) {
+7 -1
arch/arm/mach-pxa/spitz.c
··· 531 531 return gpio_direction_output(SPITZ_GPIO_USB_HOST, 1); 532 532 } 533 533 534 + static void spitz_ohci_exit(struct device *dev) 535 + { 536 + gpio_free(SPITZ_GPIO_USB_HOST); 537 + } 538 + 534 539 static struct pxaohci_platform_data spitz_ohci_platform_data = { 535 540 .port_mode = PMM_NPS_MODE, 536 541 .init = spitz_ohci_init, 542 + .exit = spitz_ohci_exit, 537 543 .flags = ENABLE_PORT_ALL | NO_OC_PROTECTION, 538 544 .power_budget = 150, 539 545 }; ··· 737 731 738 732 static void __init common_init(void) 739 733 { 740 - init_gpio_reset(SPITZ_GPIO_ON_RESET, 1); 734 + init_gpio_reset(SPITZ_GPIO_ON_RESET, 1, 0); 741 735 pm_power_off = spitz_poweroff; 742 736 arm_pm_restart = spitz_restart; 743 737
+1 -1
arch/arm/mach-pxa/tosa.c
··· 897 897 gpio_set_wake(MFP_PIN_GPIO1, 1); 898 898 /* We can't pass to gpio-keys since it will drop the Reset altfunc */ 899 899 900 - init_gpio_reset(TOSA_GPIO_ON_RESET, 0); 900 + init_gpio_reset(TOSA_GPIO_ON_RESET, 0, 0); 901 901 902 902 pm_power_off = tosa_poweroff; 903 903 arm_pm_restart = tosa_restart;
+81 -125
arch/arm/plat-orion/gpio.c
··· 15 15 #include <linux/spinlock.h> 16 16 #include <linux/bitops.h> 17 17 #include <linux/io.h> 18 - #include <asm/gpio.h> 18 + #include <linux/gpio.h> 19 19 20 20 static DEFINE_SPINLOCK(gpio_lock); 21 - static const char *gpio_label[GPIO_MAX]; /* non null for allocated GPIOs */ 22 21 static unsigned long gpio_valid_input[BITS_TO_LONGS(GPIO_MAX)]; 23 22 static unsigned long gpio_valid_output[BITS_TO_LONGS(GPIO_MAX)]; 24 23 ··· 45 46 writel(u, GPIO_OUT(pin)); 46 47 } 47 48 49 + static inline void __set_blinking(unsigned pin, int blink) 50 + { 51 + u32 u; 52 + 53 + u = readl(GPIO_BLINK_EN(pin)); 54 + if (blink) 55 + u |= 1 << (pin & 31); 56 + else 57 + u &= ~(1 << (pin & 31)); 58 + writel(u, GPIO_BLINK_EN(pin)); 59 + } 60 + 61 + static inline int orion_gpio_is_valid(unsigned pin, int mode) 62 + { 63 + if (pin < GPIO_MAX) { 64 + if ((mode & GPIO_INPUT_OK) && !test_bit(pin, gpio_valid_input)) 65 + goto err_out; 66 + if ((mode & GPIO_OUTPUT_OK) && !test_bit(pin, gpio_valid_output)) 67 + goto err_out; 68 + return true; 69 + } 70 + 71 + err_out: 72 + pr_debug("%s: invalid GPIO %d\n", __func__, pin); 73 + return false; 74 + } 48 75 49 76 /* 50 77 * GENERIC_GPIO primitives. 51 78 */ 52 - int gpio_direction_input(unsigned pin) 79 + static int orion_gpio_direction_input(struct gpio_chip *chip, unsigned pin) 53 80 { 54 81 unsigned long flags; 55 82 56 - if (pin >= GPIO_MAX || !test_bit(pin, gpio_valid_input)) { 57 - pr_debug("%s: invalid GPIO %d\n", __func__, pin); 83 + if (!orion_gpio_is_valid(pin, GPIO_INPUT_OK)) 58 84 return -EINVAL; 59 - } 60 85 61 86 spin_lock_irqsave(&gpio_lock, flags); 62 87 63 - /* 64 - * Some callers might not have used gpio_request(), 65 - * so flag this pin as requested now. 66 - */ 67 - if (gpio_label[pin] == NULL) 68 - gpio_label[pin] = "?"; 69 - 70 - /* 71 - * Configure GPIO direction. 72 - */ 88 + /* Configure GPIO direction. */
73 89 __set_direction(pin, 1); 74 90 75 91 spin_unlock_irqrestore(&gpio_lock, flags); 76 92 77 93 return 0; 78 94 } 79 - EXPORT_SYMBOL(gpio_direction_input); 80 95 81 - int gpio_direction_output(unsigned pin, int value) 82 - { 83 - unsigned long flags; 84 - u32 u; 85 - 86 - if (pin >= GPIO_MAX || !test_bit(pin, gpio_valid_output)) { 87 - pr_debug("%s: invalid GPIO %d\n", __func__, pin); 88 - return -EINVAL; 89 - } 90 - 91 - spin_lock_irqsave(&gpio_lock, flags); 92 - 93 - /* 94 - * Some callers might not have used gpio_request(), 95 - * so flag this pin as requested now. 96 - */ 97 - if (gpio_label[pin] == NULL) 98 - gpio_label[pin] = "?"; 99 - 100 - /* 101 - * Disable blinking. 102 - */ 103 - u = readl(GPIO_BLINK_EN(pin)); 104 - u &= ~(1 << (pin & 31)); 105 - writel(u, GPIO_BLINK_EN(pin)); 106 - 107 - /* 108 - * Configure GPIO output value. 109 - */ 110 - __set_level(pin, value); 111 - 112 - /* 113 - * Configure GPIO direction. 114 - */ 115 - __set_direction(pin, 0); 116 - 117 - spin_unlock_irqrestore(&gpio_lock, flags); 118 - 119 - return 0; 120 - } 121 - EXPORT_SYMBOL(gpio_direction_output); 122 - 123 - int gpio_get_value(unsigned pin) 96 + static int orion_gpio_get_value(struct gpio_chip *chip, unsigned pin) 124 97 { 125 98 int val; 126 99 ··· 103 132 104 133 return (val >> (pin & 31)) & 1; 105 134 } 106 - EXPORT_SYMBOL(gpio_get_value); 107 135 108 - void gpio_set_value(unsigned pin, int value) 136 + static int orion_gpio_direction_output(struct gpio_chip *chip, unsigned pin, 137 + int value) 109 138 { 110 139 unsigned long flags; 111 - u32 u; 140 + 141 + if (!orion_gpio_is_valid(pin, GPIO_OUTPUT_OK)) 142 + return -EINVAL; 112 143 113 144 spin_lock_irqsave(&gpio_lock, flags); 114 145 115 - /* 116 - * Disable blinking. 117 - */ 118 - u = readl(GPIO_BLINK_EN(pin)); 119 - u &= ~(1 << (pin & 31)); 120 - writel(u, GPIO_BLINK_EN(pin)); 146 + /* Disable blinking. */ 147 + __set_blinking(pin, 0); 121 148 122 - /* 123 - * Configure GPIO output value. 
124 - */ 149 + /* Configure GPIO output value. */ 150 + __set_level(pin, value); 151 + 152 + /* Configure GPIO direction. */ 153 + __set_direction(pin, 0); 154 + 155 + spin_unlock_irqrestore(&gpio_lock, flags); 156 + 157 + return 0; 158 + } 159 + 160 + static void orion_gpio_set_value(struct gpio_chip *chip, unsigned pin, 161 + int value) 162 + { 163 + unsigned long flags; 164 + 165 + spin_lock_irqsave(&gpio_lock, flags); 166 + 167 + /* Configure GPIO output value. */ 125 168 __set_level(pin, value); 126 169 127 170 spin_unlock_irqrestore(&gpio_lock, flags); 128 171 } 129 - EXPORT_SYMBOL(gpio_set_value); 130 172 131 - int gpio_request(unsigned pin, const char *label) 173 + static int orion_gpio_request(struct gpio_chip *chip, unsigned pin) 132 174 { 133 - unsigned long flags; 134 - int ret; 135 - 136 - if (pin >= GPIO_MAX || 137 - !(test_bit(pin, gpio_valid_input) || 138 - test_bit(pin, gpio_valid_output))) { 139 - pr_debug("%s: invalid GPIO %d\n", __func__, pin); 140 - return -EINVAL; 141 - } 142 - 143 - spin_lock_irqsave(&gpio_lock, flags); 144 - if (gpio_label[pin] == NULL) { 145 - gpio_label[pin] = label ? label : "?";
146 - ret = 0; 147 - } else { 148 - pr_debug("%s: GPIO %d already used as %s\n", 149 - __func__, pin, gpio_label[pin]); 150 - ret = -EBUSY; 151 - } 152 - spin_unlock_irqrestore(&gpio_lock, flags); 153 - 154 - return ret; 175 + if (orion_gpio_is_valid(pin, GPIO_INPUT_OK) || 176 + orion_gpio_is_valid(pin, GPIO_OUTPUT_OK)) 177 + return 0; 178 + return -EINVAL; 155 179 } 156 - EXPORT_SYMBOL(gpio_request); 157 180 158 - void gpio_free(unsigned pin) 181 + static struct gpio_chip orion_gpiochip = { 182 + .label = "orion_gpio", 183 + .direction_input = orion_gpio_direction_input, 184 + .get = orion_gpio_get_value, 185 + .direction_output = orion_gpio_direction_output, 186 + .set = orion_gpio_set_value, 187 + .request = orion_gpio_request, 188 + .base = 0, 189 + .ngpio = GPIO_MAX, 190 + .can_sleep = 0, 191 + }; 192 + 193 + void __init orion_gpio_init(void) 159 194 { 160 - if (pin >= GPIO_MAX || 161 - !(test_bit(pin, gpio_valid_input) || 162 - test_bit(pin, gpio_valid_output))) { 163 - pr_debug("%s: invalid GPIO %d\n", __func__, pin); 164 - return; 165 - } 166 - 167 - if (gpio_label[pin] == NULL) 168 - pr_warning("%s: GPIO %d already freed\n", __func__, pin); 169 - else 170 - gpio_label[pin] = NULL; 195 + gpiochip_add(&orion_gpiochip); 171 196 } 172 - EXPORT_SYMBOL(gpio_free); 173 - 174 197 175 198 /* 176 199 * Orion-specific GPIO API extensions. 177 200 */ 178 201 void __init orion_gpio_set_unused(unsigned pin) 179 202 { 180 - /* 181 - * Configure as output, drive low. 182 - */ 203 + /* Configure as output, drive low. */ 183 204 __set_level(pin, 0); 184 205 __set_direction(pin, 0); 185 206 } ··· 193 230 void orion_gpio_set_blink(unsigned pin, int blink) 194 231 { 195 232 unsigned long flags; 196 - u32 u; 197 233 198 234 spin_lock_irqsave(&gpio_lock, flags); 199 235 200 - /* 201 - * Set output value to zero. */
203 237 __set_level(pin, 0); 204 238 205 - u = readl(GPIO_BLINK_EN(pin)); 206 - if (blink) 207 - u |= 1 << (pin & 31); 208 - else 209 - u &= ~(1 << (pin & 31)); 210 - writel(u, GPIO_BLINK_EN(pin)); 239 + /* Set blinking. */ 240 + __set_blinking(pin, blink); 211 241 212 242 spin_unlock_irqrestore(&gpio_lock, flags); 213 243 } ··· 324 368 } 325 369 326 370 struct irq_chip orion_gpio_irq_chip = { 327 - .name = "orion_gpio", 371 + .name = "orion_gpio_irq", 328 372 .ack = gpio_irq_ack, 329 373 .mask = gpio_irq_mask, 330 374 .unmask = gpio_irq_unmask,
+8 -9
arch/arm/plat-orion/include/plat/gpio.h
··· 14 14 /* 15 15 * GENERIC_GPIO primitives. 16 16 */ 17 - int gpio_request(unsigned pin, const char *label); 18 - void gpio_free(unsigned pin); 19 - int gpio_direction_input(unsigned pin); 20 - int gpio_direction_output(unsigned pin, int value); 21 - int gpio_get_value(unsigned pin); 22 - void gpio_set_value(unsigned pin, int value); 17 + #define gpio_get_value __gpio_get_value 18 + #define gpio_set_value __gpio_set_value 19 + #define gpio_cansleep __gpio_cansleep 23 20 24 21 /* 25 22 * Orion-specific GPIO API extensions. ··· 24 27 void orion_gpio_set_unused(unsigned pin); 25 28 void orion_gpio_set_blink(unsigned pin, int blink); 26 29 27 - #define GPIO_BIDI_OK (1 << 0) 28 - #define GPIO_INPUT_OK (1 << 1) 29 - #define GPIO_OUTPUT_OK (1 << 2) 30 + #define GPIO_INPUT_OK (1 << 0) 31 + #define GPIO_OUTPUT_OK (1 << 1) 30 32 void orion_gpio_set_valid(unsigned pin, int mode); 33 + 34 + /* Initialize gpiolib. */ 35 + void __init orion_gpio_init(void); 31 36 32 37 /* 33 38 * GPIO interrupt handling.
+4 -4
arch/arm/plat-orion/include/plat/orion5x_wdt.h arch/arm/plat-orion/include/plat/orion_wdt.h
··· 1 1 /* 2 - * arch/arm/plat-orion/include/plat/orion5x_wdt.h 2 + * arch/arm/plat-orion/include/plat/orion_wdt.h 3 3 * 4 4 * This file is licensed under the terms of the GNU General Public 5 5 * License version 2. This program is licensed "as is" without any 6 6 * warranty of any kind, whether express or implied. 7 7 */ 8 8 9 - #ifndef __PLAT_ORION5X_WDT_H 10 - #define __PLAT_ORION5X_WDT_H 9 + #ifndef __PLAT_ORION_WDT_H 10 + #define __PLAT_ORION_WDT_H 11 11 12 - struct orion5x_wdt_platform_data { 12 + struct orion_wdt_platform_data { 13 13 u32 tclk; /* no <linux/clk.h> support yet */ 14 14 }; 15 15
+58 -1
arch/arm/plat-orion/time.c
··· 12 12 */ 13 13 14 14 #include <linux/kernel.h> 15 + #include <linux/sched.h> 16 + #include <linux/cnt32_to_63.h> 17 + #include <linux/timer.h> 15 18 #include <linux/clockchips.h> 16 19 #include <linux/interrupt.h> 17 20 #include <linux/irq.h> 18 21 #include <asm/mach/time.h> 19 22 #include <mach/bridge-regs.h> 23 + #include <mach/hardware.h> 20 24 21 25 /* 22 26 * Number of timer ticks per jiffy. ··· 41 37 #define TIMER1_RELOAD (TIMER_VIRT_BASE + 0x0018) 42 38 #define TIMER1_VAL (TIMER_VIRT_BASE + 0x001c) 43 39 40 + 41 + /* 42 + * Orion's sched_clock implementation. It has a resolution of 43 + * at least 7.5ns (133MHz TCLK) and a maximum value of 834 days. 44 + * 45 + * Because the hardware timer period is quite short (21 secs if 46 + * 200MHz TCLK) and because cnt32_to_63() needs to be called at 47 + * least once per half period to work properly, a kernel timer is 48 + * set up to ensure this requirement is always met. 49 + */ 50 + #define TCLK2NS_SCALE_FACTOR 8 51 + 52 + static unsigned long tclk2ns_scale; 53 + 54 + unsigned long long sched_clock(void) 55 + { 56 + unsigned long long v = cnt32_to_63(0xffffffff - readl(TIMER0_VAL)); 57 + return (v * tclk2ns_scale) >> TCLK2NS_SCALE_FACTOR; 58 + } 59 + 60 + static struct timer_list cnt32_to_63_keepwarm_timer; 61 + 62 + static void cnt32_to_63_keepwarm(unsigned long data) 63 + { 64 + mod_timer(&cnt32_to_63_keepwarm_timer, round_jiffies(jiffies + data)); 65 + (void) sched_clock(); 66 + } 67 + 68 + static void __init setup_sched_clock(unsigned long tclk) 69 + { 70 + unsigned long long v; 71 + unsigned long data; 72 + 73 + v = NSEC_PER_SEC; 74 + v <<= TCLK2NS_SCALE_FACTOR; 75 + v += tclk/2; 76 + do_div(v, tclk); 77 + /* 78 + * We want an even value to automatically clear the top bit 79 + * returned by cnt32_to_63() without an additional run time 80 + * instruction. So if the LSB is 1 then round it up. 
81 + */ 82 + if (v & 1) 83 + v++; 84 + tclk2ns_scale = v; 85 + 86 + data = (0xffffffffUL / tclk / 2 - 2) * HZ; 87 + setup_timer(&cnt32_to_63_keepwarm_timer, cnt32_to_63_keepwarm, data); 88 + mod_timer(&cnt32_to_63_keepwarm_timer, round_jiffies(jiffies + data)); 89 + } 44 90 45 91 /* 46 92 * Clocksource handling. ··· 230 176 231 177 ticks_per_jiffy = (tclk + HZ/2) / HZ; 232 178 179 + /* 180 + * Set scale and timer for sched_clock 181 + */ 182 + setup_sched_clock(tclk); 233 183 234 184 /* 235 185 * Setup free-running clocksource timer (interrupts ··· 247 189 writel(u | TIMER0_EN | TIMER0_RELOAD_EN, TIMER_CTRL); 248 190 orion_clksrc.mult = clocksource_hz2mult(tclk, orion_clksrc.shift); 249 191 clocksource_register(&orion_clksrc); 250 - 251 192 252 193 /* 253 194 * Setup clockevent timer (interrupt-driven.)
+122 -9
arch/arm/tools/mach-types
··· 12 12 # 13 13 # http://www.arm.linux.org.uk/developer/machines/?action=new 14 14 # 15 - # Last update: Mon Mar 23 20:09:01 2009 15 + # Last update: Fri May 29 10:14:20 2009 16 16 # 17 17 # machine_is_xxx CONFIG_xxxx MACH_TYPE_xxx number 18 18 # ··· 916 916 apf9328 MACH_APF9328 APF9328 906 917 917 omap_wipoq MACH_OMAP_WIPOQ OMAP_WIPOQ 907 918 918 omap_twip MACH_OMAP_TWIP OMAP_TWIP 908 919 - palmt650 MACH_PALMT650 PALMT650 909 919 + treo650 MACH_TREO650 TREO650 909 920 920 acumen MACH_ACUMEN ACUMEN 910 921 921 xp100 MACH_XP100 XP100 911 922 922 fs2410 MACH_FS2410 FS2410 912 ··· 1232 1232 vpac270 MACH_VPAC270 VPAC270 1227 1233 1233 rd129 MACH_RD129 RD129 1228 1234 1234 htcwizard MACH_HTCWIZARD HTCWIZARD 1229 1235 - xscale_treo680 MACH_XSCALE_TREO680 XSCALE_TREO680 1230 1235 + treo680 MACH_TREO680 TREO680 1230 1236 1236 tecon_tmezon MACH_TECON_TMEZON TECON_TMEZON 1231 1237 1237 zylonite MACH_ZYLONITE ZYLONITE 1233 1238 1238 gene1270 MACH_GENE1270 GENE1270 1234 ··· 1418 1418 cnty_titan MACH_CNTY_TITAN CNTY_TITAN 1418 1419 1419 app3xx MACH_APP3XX APP3XX 1419 1420 1420 sideoatsgrama MACH_SIDEOATSGRAMA SIDEOATSGRAMA 1420 1421 - palmtreo700p MACH_PALMTREO700P PALMTREO700P 1421 1422 - palmtreo700w MACH_PALMTREO700W PALMTREO700W 1422 1423 - palmtreo750 MACH_PALMTREO750 PALMTREO750 1423 1424 - palmtreo755p MACH_PALMTREO755P PALMTREO755P 1424 1421 + treo700p MACH_TREO700P TREO700P 1421 1422 + treo700w MACH_TREO700W TREO700W 1422 1423 + treo750 MACH_TREO750 TREO750 1423 1424 + treo755p MACH_TREO755P TREO755P 1424 1425 1425 ezreganut9200 MACH_EZREGANUT9200 EZREGANUT9200 1425 1426 1426 sarge MACH_SARGE SARGE 1426 1427 1427 a696 MACH_A696 A696 1427 ··· 1721 1721 csb637xo MACH_CSB637XO CSB637XO 1730 1722 1722 evisiong MACH_EVISIONG EVISIONG 1731 1723 1723 stmp37xx MACH_STMP37XX STMP37XX 1732 1724 - stmp378x MACH_STMP38XX STMP38XX 1733 1724 + stmp378x MACH_STMP378X STMP378X 1733 1725 1725 tnt MACH_TNT TNT 1734 1726 1726 tbxt MACH_TBXT TBXT 1735 1727 1727 playmate MACH_PLAYMATE 
PLAYMATE 1736 ··· 1817 1817 tavorevb MACH_TAVOREVB TAVOREVB 1827 1818 1818 saar MACH_SAAR SAAR 1828 1819 1819 deister_eyecam MACH_DEISTER_EYECAM DEISTER_EYECAM 1829 1820 - at91sam9m10ek MACH_AT91SAM9M10EK AT91SAM9M10EK 1830 1820 + at91sam9m10g45ek MACH_AT91SAM9M10G45EK AT91SAM9M10G45EK 1830 1821 1821 linkstation_produo MACH_LINKSTATION_PRODUO LINKSTATION_PRODUO 1831 1822 1822 hit_b0 MACH_HIT_B0 HIT_B0 1832 1823 1823 adx_rmu MACH_ADX_RMU ADX_RMU 1833 ··· 2132 2132 at91cap9stk MACH_AT91CAP9STK AT91CAP9STK 2142 2133 2133 spc300 MACH_SPC300 SPC300 2143 2134 2134 eko MACH_EKO EKO 2144 2135 + ccw9m2443 MACH_CCW9M2443 CCW9M2443 2145 2136 + ccw9m2443js MACH_CCW9M2443JS CCW9M2443JS 2146 2137 + m2m_router_device MACH_M2M_ROUTER_DEVICE M2M_ROUTER_DEVICE 2147 2138 + str9104nas MACH_STAR9104NAS STAR9104NAS 2148 2139 + pca100 MACH_PCA100 PCA100 2149 2140 + z3_dm365_mod_01 MACH_Z3_DM365_MOD_01 Z3_DM365_MOD_01 2150 2141 + hipox MACH_HIPOX HIPOX 2151 2142 + omap3_piteds MACH_OMAP3_PITEDS OMAP3_PITEDS 2152 2143 + bm150r MACH_BM150R BM150R 2153 2144 + tbone MACH_TBONE TBONE 2154 2145 + merlin MACH_MERLIN MERLIN 2155 2146 + falcon MACH_FALCON FALCON 2156 2147 + davinci_da850_evm MACH_DAVINCI_DA850_EVM DAVINCI_DA850_EVM 2157 2148 + s5p6440 MACH_S5P6440 S5P6440 2158 2149 + at91sam9g10ek MACH_AT91SAM9G10EK AT91SAM9G10EK 2159 2150 + omap_4430sdp MACH_OMAP_4430SDP OMAP_4430SDP 2160 2151 + lpc313x MACH_LPC313X LPC313X 2161 2152 + magx_zn5 MACH_MAGX_ZN5 MAGX_ZN5 2162 2153 + magx_em30 MACH_MAGX_EM30 MAGX_EM30 2163 2154 + magx_ve66 MACH_MAGX_VE66 MAGX_VE66 2164 2155 + meesc MACH_MEESC MEESC 2165 2156 + otc570 MACH_OTC570 OTC570 2166 2157 + bcu2412 MACH_BCU2412 BCU2412 2167 2158 + beacon MACH_BEACON BEACON 2168 2159 + actia_tgw MACH_ACTIA_TGW ACTIA_TGW 2169 2160 + e4430 MACH_E4430 E4430 2170 2161 + ql300 MACH_QL300 QL300 2171 2162 + btmavb101 MACH_BTMAVB101 BTMAVB101 2172 2163 + btmawb101 MACH_BTMAWB101 BTMAWB101 2173 2164 + sq201 MACH_SQ201 SQ201 2174 2165 + quatro45xx MACH_QUATRO45XX 
QUATRO45XX 2175 2166 + openpad MACH_OPENPAD OPENPAD 2176 2167 + tx25 MACH_TX25 TX25 2177 2168 + omap3_torpedo MACH_OMAP3_TORPEDO OMAP3_TORPEDO 2178 2169 + htcraphael_k MACH_HTCRAPHAEL_K HTCRAPHAEL_K 2179 2170 + lal43 MACH_LAL43 LAL43 2181 2171 + htcraphael_cdma500 MACH_HTCRAPHAEL_CDMA500 HTCRAPHAEL_CDMA500 2182 2172 + anw6410 MACH_ANW6410 ANW6410 2183 2173 + htcprophet MACH_HTCPROPHET HTCPROPHET 2185 2174 + cfa_10022 MACH_CFA_10022 CFA_10022 2186 2175 + imx27_visstrim_m10 MACH_IMX27_VISSTRIM_M10 IMX27_VISSTRIM_M10 2187 2176 + px2imx27 MACH_PX2IMX27 PX2IMX27 2188 2177 + stm3210e_eval MACH_STM3210E_EVAL STM3210E_EVAL 2189 2178 + dvs10 MACH_DVS10 DVS10 2190 2179 + portuxg20 MACH_PORTUXG20 PORTUXG20 2191 2180 + arm_spv MACH_ARM_SPV ARM_SPV 2192 2181 + smdkc110 MACH_SMDKC110 SMDKC110 2193 2182 + cabespresso MACH_CABESPRESSO CABESPRESSO 2194 2183 + hmc800 MACH_HMC800 HMC800 2195 2184 + sholes MACH_SHOLES SHOLES 2196 2185 + btmxc31 MACH_BTMXC31 BTMXC31 2197 2186 + dt501 MACH_DT501 DT501 2198 2187 + ktx MACH_KTX KTX 2199 2188 + omap3517evm MACH_OMAP3517EVM OMAP3517EVM 2200 2189 + netspace_v2 MACH_NETSPACE_V2 NETSPACE_V2 2201 2190 + netspace_max_v2 MACH_NETSPACE_MAX_V2 NETSPACE_MAX_V2 2202 2191 + d2net_v2 MACH_D2NET_V2 D2NET_V2 2203 2192 + net2big_v2 MACH_NET2BIG_V2 NET2BIG_V2 2204 2193 + net4big_v2 MACH_NET4BIG_V2 NET4BIG_V2 2205 2194 + net5big_v2 MACH_NET5BIG_V2 NET5BIG_V2 2206 2195 + endb2443 MACH_ENDB2443 ENDB2443 2207 2196 + inetspace_v2 MACH_INETSPACE_V2 INETSPACE_V2 2208 2197 + tros MACH_TROS TROS 2209 2198 + pelco_homer MACH_PELCO_HOMER PELCO_HOMER 2210 2199 + ofsp8 MACH_OFSP8 OFSP8 2211 2200 + at91sam9g45ekes MACH_AT91SAM9G45EKES AT91SAM9G45EKES 2212 2201 + guf_cupid MACH_GUF_CUPID GUF_CUPID 2213 2202 + eab1r MACH_EAB1R EAB1R 2214 2203 + desirec MACH_DESIREC DESIREC 2215 2204 + cordoba MACH_CORDOBA CORDOBA 2216 2205 + irvine MACH_IRVINE IRVINE 2217 2206 + sff772 MACH_SFF772 SFF772 2218 2207 + pelco_milano MACH_PELCO_MILANO PELCO_MILANO 2219 2208 + pc7302 
MACH_PC7302 PC7302 2220 2209 + bip6000 MACH_BIP6000 BIP6000 2221 2210 + silvermoon MACH_SILVERMOON SILVERMOON 2222 2211 + vc0830 MACH_VC0830 VC0830 2223 2212 + dt430 MACH_DT430 DT430 2224 2213 + ji42pf MACH_JI42PF JI42PF 2225 2214 + gnet_ksm MACH_GNET_KSM GNET_KSM 2226 2215 + gnet_sgm MACH_GNET_SGM GNET_SGM 2227 2216 + gnet_sgr MACH_GNET_SGR GNET_SGR 2228 2217 + omap3_icetekevm MACH_OMAP3_ICETEKEVM OMAP3_ICETEKEVM 2229 2218 + pnp MACH_PNP PNP 2230 2219 + ctera_2bay_k MACH_CTERA_2BAY_K CTERA_2BAY_K 2231 2220 + ctera_2bay_u MACH_CTERA_2BAY_U CTERA_2BAY_U 2232 2221 + sas_c MACH_SAS_C SAS_C 2233 2222 + vma2315 MACH_VMA2315 VMA2315 2234 2223 + vcs MACH_VCS VCS 2235 2224 + spear600 MACH_SPEAR600 SPEAR600 2236 2225 + spear300 MACH_SPEAR300 SPEAR300 2237 2226 + spear1300 MACH_SPEAR1300 SPEAR1300 2238 2227 + lilly1131 MACH_LILLY1131 LILLY1131 2239 2228 + arvoo_ax301 MACH_ARVOO_AX301 ARVOO_AX301 2240 2229 + mapphone MACH_MAPPHONE MAPPHONE 2241 2230 + legend MACH_LEGEND LEGEND 2242 2231 + salsa MACH_SALSA SALSA 2243 2232 + lounge MACH_LOUNGE LOUNGE 2244 2233 + vision MACH_VISION VISION 2245 2234 + vmb20 MACH_VMB20 VMB20 2246 2235 + hy2410 MACH_HY2410 HY2410 2247 2236 + hy9315 MACH_HY9315 HY9315 2248 2237 + bullwinkle MACH_BULLWINKLE BULLWINKLE 2249 2238 + arm_ultimator2 MACH_ARM_ULTIMATOR2 ARM_ULTIMATOR2 2250 2239 + vs_v210 MACH_VS_V210 VS_V210 2252 2240 + vs_v212 MACH_VS_V212 VS_V212 2253 2241 + hmt MACH_HMT HMT 2254 2242 + suen3 MACH_SUEN3 SUEN3 2255 2243 + vesper MACH_VESPER VESPER 2256 2244 + str9 MACH_STR9 STR9 2257 2245 + omap3_wl_ff MACH_OMAP3_WL_FF OMAP3_WL_FF 2258 2246 + simcom MACH_SIMCOM SIMCOM 2259 2247 + mcwebio MACH_MCWEBIO MCWEBIO 2260
-1
arch/blackfin/include/asm/.gitignore
··· 1 - +mach
-1
arch/blackfin/include/asm/flat.h
···
10 10
11 11 #include <asm/unaligned.h>
12 12
13 - #define flat_stack_align(sp) /* nothing needed */
14 13 #define flat_argvp_envp_on_stack() 0
15 14 #define flat_old_ram_flag(flags) (flags)
16 15
+3 -1
arch/blackfin/include/asm/unistd.h
···
378 378 #define __NR_dup3 363
379 379 #define __NR_pipe2 364
380 380 #define __NR_inotify_init1 365
381 + #define __NR_preadv 366
382 + #define __NR_pwritev 367
381 383
382 - #define __NR_syscall 366
384 + #define __NR_syscall 368
383 385 #define NR_syscalls __NR_syscall
384 386
385 387 /* Old optional stuff no one actually uses */
+1
arch/blackfin/kernel/.gitignore
··· 1 + vmlinux.lds
+1 -2
arch/blackfin/lib/strncmp.c
···
8 8
9 9 #define strncmp __inline_strncmp
10 10 #include <asm/string.h>
11 - #undef strncmp
12 -
13 11 #include <linux/module.h>
12 + #undef strncmp
14 13
15 14 int strncmp(const char *cs, const char *ct, size_t count)
16 15 {
+2
arch/blackfin/mach-common/entry.S
···
1581 1581 .long _sys_dup3
1582 1582 .long _sys_pipe2
1583 1583 .long _sys_inotify_init1 /* 365 */
1584 + .long _sys_preadv
1585 + .long _sys_pwritev
1584 1586
1585 1587 .rept NR_syscalls-(.-_sys_call_table)/4
1586 1588 .long _sys_ni_syscall
-1
arch/h8300/include/asm/flat.h
···
5 5 #ifndef __H8300_FLAT_H__
6 6 #define __H8300_FLAT_H__
7 7
8 - #define flat_stack_align(sp) /* nothing needed */
9 8 #define flat_argvp_envp_on_stack() 1
10 9 #define flat_old_ram_flag(flags) 1
11 10 #define flat_reloc_valid(reloc, size) ((reloc) <= (size))
-1
arch/m32r/include/asm/flat.h
···
12 12 #ifndef __ASM_M32R_FLAT_H
13 13 #define __ASM_M32R_FLAT_H
14 14
15 - #define flat_stack_align(sp) (*sp += (*sp & 3 ? (4 - (*sp & 3)): 0))
16 15 #define flat_argvp_envp_on_stack() 0
17 16 #define flat_old_ram_flag(flags) (flags)
18 17 #define flat_set_persistent(relval, p) 0
-1
arch/m68k/include/asm/flat.h
···
5 5 #ifndef __M68KNOMMU_FLAT_H__
6 6 #define __M68KNOMMU_FLAT_H__
7 7
8 - #define flat_stack_align(sp) /* nothing needed */
9 8 #define flat_argvp_envp_on_stack() 1
10 9 #define flat_old_ram_flag(flags) (flags)
11 10 #define flat_reloc_valid(reloc, size) ((reloc) <= (size))
+12
arch/powerpc/Kconfig
···
868 868 default "0x80000000" if PPC_PREP || PPC_8xx
869 869 default "0xc0000000"
870 870
871 + config CONSISTENT_SIZE_BOOL
872 + 	bool "Set custom consistent memory pool size"
873 + 	depends on ADVANCED_OPTIONS && NOT_COHERENT_CACHE
874 + 	help
875 + 	  This option allows you to set the size of the
876 + 	  consistent memory pool. This pool of virtual memory
877 + 	  is used to make consistent memory allocations.
878 +
879 + config CONSISTENT_SIZE
880 + 	hex "Size of consistent memory pool" if CONSISTENT_SIZE_BOOL
881 + 	default "0x00200000" if NOT_COHERENT_CACHE
882 +
871 883 config PIN_TLB
872 884 bool "Pinned Kernel TLBs (860 ONLY)"
873 885 depends on ADVANCED_OPTIONS && 8xx
+195 -83
arch/powerpc/configs/pmac32_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.28-rc3 4 - # Tue Nov 11 19:36:51 2008 3 + # Linux kernel version: 2.6.30-rc7 4 + # Mon May 25 14:53:25 2009 5 5 # 6 6 # CONFIG_PPC64 is not set 7 7 ··· 14 14 # CONFIG_40x is not set 15 15 # CONFIG_44x is not set 16 16 # CONFIG_E200 is not set 17 + CONFIG_PPC_BOOK3S=y 17 18 CONFIG_PPC_FPU=y 18 19 CONFIG_ALTIVEC=y 19 20 CONFIG_PPC_STD_MMU=y ··· 44 43 CONFIG_PPC=y 45 44 CONFIG_EARLY_PRINTK=y 46 45 CONFIG_GENERIC_NVRAM=y 47 - CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y 46 + CONFIG_SCHED_OMIT_FRAME_POINTER=y 48 47 CONFIG_ARCH_MAY_HAVE_PC_FDC=y 49 48 CONFIG_PPC_OF=y 50 49 CONFIG_OF=y ··· 53 52 CONFIG_AUDIT_ARCH=y 54 53 CONFIG_GENERIC_BUG=y 55 54 CONFIG_SYS_SUPPORTS_APM_EMULATION=y 55 + CONFIG_DTC=y 56 56 # CONFIG_DEFAULT_UIMAGE is not set 57 57 CONFIG_HIBERNATE_32=y 58 58 CONFIG_ARCH_HIBERNATION_POSSIBLE=y 59 59 CONFIG_ARCH_SUSPEND_POSSIBLE=y 60 60 # CONFIG_PPC_DCR_NATIVE is not set 61 61 # CONFIG_PPC_DCR_MMIO is not set 62 + CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y 62 63 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 63 64 64 65 # ··· 75 72 CONFIG_SYSVIPC=y 76 73 CONFIG_SYSVIPC_SYSCTL=y 77 74 CONFIG_POSIX_MQUEUE=y 75 + CONFIG_POSIX_MQUEUE_SYSCTL=y 78 76 # CONFIG_BSD_PROCESS_ACCT is not set 79 77 # CONFIG_TASKSTATS is not set 80 78 # CONFIG_AUDIT is not set 79 + 80 + # 81 + # RCU Subsystem 82 + # 83 + CONFIG_CLASSIC_RCU=y 84 + # CONFIG_TREE_RCU is not set 85 + # CONFIG_PREEMPT_RCU is not set 86 + # CONFIG_TREE_RCU_TRACE is not set 87 + # CONFIG_PREEMPT_RCU_TRACE is not set 81 88 CONFIG_IKCONFIG=y 82 89 CONFIG_IKCONFIG_PROC=y 83 90 CONFIG_LOG_BUF_SHIFT=14 84 - # CONFIG_CGROUPS is not set 85 91 # CONFIG_GROUP_SCHED is not set 92 + # CONFIG_CGROUPS is not set 86 93 CONFIG_SYSFS_DEPRECATED=y 87 94 CONFIG_SYSFS_DEPRECATED_V2=y 88 95 # CONFIG_RELAY is not set ··· 101 88 # CONFIG_IPC_NS is not set 102 89 # CONFIG_USER_NS is not set 103 90 # CONFIG_PID_NS is not set 91 + # 
CONFIG_NET_NS is not set 104 92 CONFIG_BLK_DEV_INITRD=y 105 93 CONFIG_INITRAMFS_SOURCE="" 94 + CONFIG_RD_GZIP=y 95 + CONFIG_RD_BZIP2=y 96 + CONFIG_RD_LZMA=y 106 97 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 107 98 CONFIG_SYSCTL=y 99 + CONFIG_ANON_INODES=y 108 100 # CONFIG_EMBEDDED is not set 109 101 CONFIG_SYSCTL_SYSCALL=y 110 102 CONFIG_KALLSYMS=y 111 103 CONFIG_KALLSYMS_ALL=y 112 104 # CONFIG_KALLSYMS_EXTRA_PASS is not set 105 + # CONFIG_STRIP_ASM_SYMS is not set 113 106 CONFIG_HOTPLUG=y 114 107 CONFIG_PRINTK=y 115 108 CONFIG_BUG=y 116 109 CONFIG_ELF_CORE=y 117 - # CONFIG_COMPAT_BRK is not set 118 110 CONFIG_BASE_FULL=y 119 111 CONFIG_FUTEX=y 120 - CONFIG_ANON_INODES=y 121 112 CONFIG_EPOLL=y 122 113 CONFIG_SIGNALFD=y 123 114 CONFIG_TIMERFD=y ··· 131 114 CONFIG_VM_EVENT_COUNTERS=y 132 115 CONFIG_PCI_QUIRKS=y 133 116 CONFIG_SLUB_DEBUG=y 117 + # CONFIG_COMPAT_BRK is not set 134 118 # CONFIG_SLAB is not set 135 119 CONFIG_SLUB=y 136 120 # CONFIG_SLOB is not set 137 121 CONFIG_PROFILING=y 122 + CONFIG_TRACEPOINTS=y 138 123 # CONFIG_MARKERS is not set 139 124 CONFIG_OPROFILE=y 140 125 CONFIG_HAVE_OPROFILE=y ··· 146 127 CONFIG_HAVE_KPROBES=y 147 128 CONFIG_HAVE_KRETPROBES=y 148 129 CONFIG_HAVE_ARCH_TRACEHOOK=y 130 + # CONFIG_SLOW_WORK is not set 149 131 # CONFIG_HAVE_GENERIC_DMA_COHERENT is not set 150 132 CONFIG_SLABINFO=y 151 133 CONFIG_RT_MUTEXES=y 152 - # CONFIG_TINY_SHMEM is not set 153 134 CONFIG_BASE_SMALL=0 154 135 CONFIG_MODULES=y 155 136 # CONFIG_MODULE_FORCE_LOAD is not set ··· 157 138 CONFIG_MODULE_FORCE_UNLOAD=y 158 139 # CONFIG_MODVERSIONS is not set 159 140 # CONFIG_MODULE_SRCVERSION_ALL is not set 160 - CONFIG_KMOD=y 161 141 CONFIG_BLOCK=y 162 142 CONFIG_LBD=y 163 - # CONFIG_BLK_DEV_IO_TRACE is not set 164 - CONFIG_LSF=y 165 143 CONFIG_BLK_DEV_BSG=y 166 144 # CONFIG_BLK_DEV_INTEGRITY is not set 167 145 ··· 174 158 # CONFIG_DEFAULT_CFQ is not set 175 159 # CONFIG_DEFAULT_NOOP is not set 176 160 CONFIG_DEFAULT_IOSCHED="anticipatory" 177 - 
CONFIG_CLASSIC_RCU=y 178 161 CONFIG_FREEZER=y 179 162 180 163 # 181 164 # Platform support 182 165 # 183 - CONFIG_PPC_MULTIPLATFORM=y 184 - CONFIG_CLASSIC32=y 185 166 # CONFIG_PPC_CHRP is not set 186 167 # CONFIG_MPC5121_ADS is not set 187 168 # CONFIG_MPC5121_GENERIC is not set ··· 191 178 # CONFIG_PPC_83xx is not set 192 179 # CONFIG_PPC_86xx is not set 193 180 # CONFIG_EMBEDDED6xx is not set 181 + # CONFIG_AMIGAONE is not set 194 182 CONFIG_PPC_NATIVE=y 183 + CONFIG_PPC_OF_BOOT_TRAMPOLINE=y 195 184 # CONFIG_IPIC is not set 196 185 CONFIG_MPIC=y 197 186 # CONFIG_MPIC_WEIRD is not set ··· 227 212 CONFIG_PPC601_SYNC_FIX=y 228 213 # CONFIG_TAU is not set 229 214 # CONFIG_FSL_ULI1575 is not set 215 + # CONFIG_SIMPLE_GPIO is not set 230 216 231 217 # 232 218 # Kernel options 233 219 # 234 - # CONFIG_HIGHMEM is not set 220 + CONFIG_HIGHMEM=y 235 221 CONFIG_TICK_ONESHOT=y 236 222 CONFIG_NO_HZ=y 237 223 CONFIG_HIGH_RES_TIMERS=y ··· 255 239 CONFIG_ARCH_HAS_WALK_MEMORY=y 256 240 CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y 257 241 # CONFIG_KEXEC is not set 242 + # CONFIG_CRASH_DUMP is not set 258 243 CONFIG_ARCH_FLATMEM_ENABLE=y 259 244 CONFIG_ARCH_POPULATES_NODE_MAP=y 260 245 CONFIG_SELECT_MEMORY_MODEL=y ··· 267 250 CONFIG_PAGEFLAGS_EXTENDED=y 268 251 CONFIG_SPLIT_PTLOCK_CPUS=4 269 252 # CONFIG_MIGRATION is not set 270 - # CONFIG_RESOURCES_64BIT is not set 271 253 # CONFIG_PHYS_ADDR_T_64BIT is not set 272 254 CONFIG_ZONE_DMA_FLAG=1 273 255 CONFIG_BOUNCE=y 274 256 CONFIG_VIRT_TO_BUS=y 275 257 CONFIG_UNEVICTABLE_LRU=y 258 + CONFIG_HAVE_MLOCK=y 259 + CONFIG_HAVE_MLOCKED_PAGE_BIT=y 260 + CONFIG_PPC_4K_PAGES=y 261 + # CONFIG_PPC_16K_PAGES is not set 262 + # CONFIG_PPC_64K_PAGES is not set 263 + # CONFIG_PPC_256K_PAGES is not set 276 264 CONFIG_FORCE_MAX_ZONEORDER=11 277 265 CONFIG_PROC_DEVICETREE=y 278 266 # CONFIG_CMDLINE_BOOL is not set ··· 310 288 # CONFIG_PCI_MSI is not set 311 289 # CONFIG_PCI_LEGACY is not set 312 290 # CONFIG_PCI_DEBUG is not set 291 + # CONFIG_PCI_STUB is not 
set 292 + # CONFIG_PCI_IOV is not set 313 293 CONFIG_PCCARD=m 314 294 # CONFIG_PCMCIA_DEBUG is not set 315 295 CONFIG_PCMCIA=m ··· 421 397 CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m 422 398 # CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set 423 399 # CONFIG_NETFILTER_XT_TARGET_DSCP is not set 400 + CONFIG_NETFILTER_XT_TARGET_HL=m 401 + # CONFIG_NETFILTER_XT_TARGET_LED is not set 424 402 CONFIG_NETFILTER_XT_TARGET_MARK=m 425 403 CONFIG_NETFILTER_XT_TARGET_NFLOG=m 426 404 CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m ··· 431 405 CONFIG_NETFILTER_XT_TARGET_TRACE=m 432 406 CONFIG_NETFILTER_XT_TARGET_TCPMSS=m 433 407 CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m 408 + # CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set 434 409 CONFIG_NETFILTER_XT_MATCH_COMMENT=m 435 410 # CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set 436 411 CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m ··· 442 415 CONFIG_NETFILTER_XT_MATCH_ESP=m 443 416 # CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set 444 417 CONFIG_NETFILTER_XT_MATCH_HELPER=m 418 + CONFIG_NETFILTER_XT_MATCH_HL=m 445 419 CONFIG_NETFILTER_XT_MATCH_IPRANGE=m 446 420 CONFIG_NETFILTER_XT_MATCH_LENGTH=m 447 421 CONFIG_NETFILTER_XT_MATCH_LIMIT=m ··· 506 478 CONFIG_IP_NF_ARP_MANGLE=m 507 479 CONFIG_IP_DCCP=m 508 480 CONFIG_INET_DCCP_DIAG=m 509 - CONFIG_IP_DCCP_ACKVEC=y 510 481 511 482 # 512 483 # DCCP CCIDs Configuration (EXPERIMENTAL) 513 484 # 514 - CONFIG_IP_DCCP_CCID2=m 515 485 # CONFIG_IP_DCCP_CCID2_DEBUG is not set 516 - CONFIG_IP_DCCP_CCID3=m 486 + CONFIG_IP_DCCP_CCID3=y 517 487 # CONFIG_IP_DCCP_CCID3_DEBUG is not set 518 488 CONFIG_IP_DCCP_CCID3_RTO=100 519 - CONFIG_IP_DCCP_TFRC_LIB=m 489 + CONFIG_IP_DCCP_TFRC_LIB=y 520 490 521 491 # 522 492 # DCCP Kernel Hacking ··· 534 508 # CONFIG_LAPB is not set 535 509 # CONFIG_ECONET is not set 536 510 # CONFIG_WAN_ROUTER is not set 511 + # CONFIG_PHONET is not set 537 512 # CONFIG_NET_SCHED is not set 538 513 CONFIG_NET_CLS_ROUTE=y 514 + # CONFIG_DCB is not set 539 515 540 516 # 541 517 # Network testing 542 518 # 543 519 # 
CONFIG_NET_PKTGEN is not set 520 + # CONFIG_NET_DROP_MONITOR is not set 544 521 # CONFIG_HAMRADIO is not set 545 522 # CONFIG_CAN is not set 546 523 CONFIG_IRDA=m ··· 606 577 # 607 578 # Bluetooth device drivers 608 579 # 609 - CONFIG_BT_HCIUSB=m 610 - # CONFIG_BT_HCIUSB_SCO is not set 611 580 # CONFIG_BT_HCIBTUSB is not set 612 581 # CONFIG_BT_HCIUART is not set 613 582 CONFIG_BT_HCIBCM203X=m ··· 617 590 # CONFIG_BT_HCIBTUART is not set 618 591 # CONFIG_BT_HCIVHCI is not set 619 592 # CONFIG_AF_RXRPC is not set 620 - # CONFIG_PHONET is not set 621 593 CONFIG_WIRELESS=y 622 594 CONFIG_CFG80211=m 623 - CONFIG_NL80211=y 595 + # CONFIG_CFG80211_REG_DEBUG is not set 624 596 CONFIG_WIRELESS_OLD_REGULATORY=y 625 597 CONFIG_WIRELESS_EXT=y 626 598 CONFIG_WIRELESS_EXT_SYSFS=y 599 + # CONFIG_LIB80211 is not set 627 600 CONFIG_MAC80211=m 628 601 629 602 # 630 603 # Rate control algorithm selection 631 604 # 632 - CONFIG_MAC80211_RC_PID=y 633 - # CONFIG_MAC80211_RC_MINSTREL is not set 634 - CONFIG_MAC80211_RC_DEFAULT_PID=y 635 - # CONFIG_MAC80211_RC_DEFAULT_MINSTREL is not set 636 - CONFIG_MAC80211_RC_DEFAULT="pid" 605 + CONFIG_MAC80211_RC_MINSTREL=y 606 + # CONFIG_MAC80211_RC_DEFAULT_PID is not set 607 + CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y 608 + CONFIG_MAC80211_RC_DEFAULT="minstrel" 637 609 # CONFIG_MAC80211_MESH is not set 638 610 CONFIG_MAC80211_LEDS=y 611 + # CONFIG_MAC80211_DEBUGFS is not set 639 612 # CONFIG_MAC80211_DEBUG_MENU is not set 640 - CONFIG_IEEE80211=m 641 - # CONFIG_IEEE80211_DEBUG is not set 642 - CONFIG_IEEE80211_CRYPT_WEP=m 643 - CONFIG_IEEE80211_CRYPT_CCMP=m 644 - CONFIG_IEEE80211_CRYPT_TKIP=m 613 + # CONFIG_WIMAX is not set 645 614 # CONFIG_RFKILL is not set 646 615 # CONFIG_NET_9P is not set 647 616 ··· 685 662 # CONFIG_BLK_DEV_HD is not set 686 663 CONFIG_MISC_DEVICES=y 687 664 # CONFIG_PHANTOM is not set 688 - # CONFIG_EEPROM_93CX6 is not set 689 665 # CONFIG_SGI_IOC4 is not set 690 666 # CONFIG_TIFM_CORE is not set 667 + # CONFIG_ICS932S401 is not 
set 691 668 # CONFIG_ENCLOSURE_SERVICES is not set 692 669 # CONFIG_HP_ILO is not set 670 + # CONFIG_ISL29003 is not set 671 + # CONFIG_C2PORT is not set 672 + 673 + # 674 + # EEPROM support 675 + # 676 + # CONFIG_EEPROM_AT24 is not set 677 + # CONFIG_EEPROM_LEGACY is not set 678 + # CONFIG_EEPROM_93CX6 is not set 693 679 CONFIG_HAVE_IDE=y 694 680 CONFIG_IDE=y 695 681 696 682 # 697 683 # Please see Documentation/ide/ide.txt for help/info on IDE drives 698 684 # 685 + CONFIG_IDE_XFER_MODE=y 699 686 CONFIG_IDE_TIMINGS=y 700 687 CONFIG_IDE_ATAPI=y 701 688 # CONFIG_BLK_DEV_IDE_SATA is not set ··· 717 684 CONFIG_BLK_DEV_IDECD=y 718 685 CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y 719 686 # CONFIG_BLK_DEV_IDETAPE is not set 720 - CONFIG_BLK_DEV_IDESCSI=y 721 687 # CONFIG_IDE_TASK_IOCTL is not set 722 688 CONFIG_IDE_PROC_FS=y 723 689 ··· 746 714 # CONFIG_BLK_DEV_JMICRON is not set 747 715 # CONFIG_BLK_DEV_SC1200 is not set 748 716 # CONFIG_BLK_DEV_PIIX is not set 717 + # CONFIG_BLK_DEV_IT8172 is not set 749 718 # CONFIG_BLK_DEV_IT8213 is not set 750 719 # CONFIG_BLK_DEV_IT821X is not set 751 720 # CONFIG_BLK_DEV_NS87415 is not set ··· 761 728 # CONFIG_BLK_DEV_TC86C001 is not set 762 729 CONFIG_BLK_DEV_IDE_PMAC=y 763 730 CONFIG_BLK_DEV_IDE_PMAC_ATA100FIRST=y 764 - CONFIG_BLK_DEV_IDEDMA_PMAC=y 765 731 CONFIG_BLK_DEV_IDEDMA=y 766 732 767 733 # ··· 804 772 # CONFIG_SCSI_SRP_ATTRS is not set 805 773 CONFIG_SCSI_LOWLEVEL=y 806 774 # CONFIG_ISCSI_TCP is not set 775 + # CONFIG_SCSI_CXGB3_ISCSI is not set 807 776 # CONFIG_BLK_DEV_3W_XXXX_RAID is not set 808 777 # CONFIG_SCSI_3W_9XXX is not set 809 778 # CONFIG_SCSI_ACARD is not set ··· 824 791 # CONFIG_MEGARAID_NEWGEN is not set 825 792 # CONFIG_MEGARAID_LEGACY is not set 826 793 # CONFIG_MEGARAID_SAS is not set 794 + # CONFIG_SCSI_MPT2SAS is not set 827 795 # CONFIG_SCSI_HPTIOP is not set 828 796 # CONFIG_SCSI_BUSLOGIC is not set 797 + # CONFIG_LIBFC is not set 798 + # CONFIG_LIBFCOE is not set 799 + # CONFIG_FCOE is not set 829 800 # 
CONFIG_SCSI_DMX3191D is not set 830 801 # CONFIG_SCSI_EATA is not set 831 802 # CONFIG_SCSI_FUTURE_DOMAIN is not set ··· 859 822 # CONFIG_SCSI_SRP is not set 860 823 # CONFIG_SCSI_LOWLEVEL_PCMCIA is not set 861 824 # CONFIG_SCSI_DH is not set 825 + # CONFIG_SCSI_OSD_INITIATOR is not set 862 826 # CONFIG_ATA is not set 863 827 CONFIG_MD=y 864 828 CONFIG_BLK_DEV_MD=m ··· 919 881 # CONFIG_ANSLCD is not set 920 882 CONFIG_PMAC_RACKMETER=m 921 883 CONFIG_NETDEVICES=y 884 + CONFIG_COMPAT_NET_DEV_OPS=y 922 885 CONFIG_DUMMY=m 923 886 # CONFIG_BONDING is not set 924 887 # CONFIG_MACVLAN is not set ··· 937 898 CONFIG_SUNGEM=y 938 899 # CONFIG_CASSINI is not set 939 900 # CONFIG_NET_VENDOR_3COM is not set 901 + # CONFIG_ETHOC is not set 902 + # CONFIG_DNET is not set 940 903 # CONFIG_NET_TULIP is not set 941 904 # CONFIG_HP100 is not set 942 905 # CONFIG_IBM_NEW_EMAC_ZMII is not set ··· 954 913 # CONFIG_ADAPTEC_STARFIRE is not set 955 914 # CONFIG_B44 is not set 956 915 # CONFIG_FORCEDETH is not set 957 - # CONFIG_EEPRO100 is not set 958 916 # CONFIG_E100 is not set 959 917 # CONFIG_FEALNX is not set 960 918 # CONFIG_NATSEMI is not set ··· 963 923 # CONFIG_R6040 is not set 964 924 # CONFIG_SIS900 is not set 965 925 # CONFIG_EPIC100 is not set 926 + # CONFIG_SMSC9420 is not set 966 927 # CONFIG_SUNDANCE is not set 967 928 # CONFIG_TLAN is not set 968 929 # CONFIG_VIA_RHINE is not set ··· 976 935 # CONFIG_E1000E is not set 977 936 # CONFIG_IP1000 is not set 978 937 # CONFIG_IGB is not set 938 + # CONFIG_IGBVF is not set 979 939 # CONFIG_NS83820 is not set 980 940 # CONFIG_HAMACHI is not set 981 941 # CONFIG_YELLOWFIN is not set ··· 987 945 # CONFIG_VIA_VELOCITY is not set 988 946 # CONFIG_TIGON3 is not set 989 947 # CONFIG_BNX2 is not set 990 - # CONFIG_MV643XX_ETH is not set 991 948 # CONFIG_QLA3XXX is not set 992 949 # CONFIG_ATL1 is not set 993 950 # CONFIG_ATL1E is not set 951 + # CONFIG_ATL1C is not set 994 952 # CONFIG_JME is not set 995 953 CONFIG_NETDEV_10000=y 996 954 
# CONFIG_CHELSIO_T1 is not set 955 + CONFIG_CHELSIO_T3_DEPENDS=y 997 956 # CONFIG_CHELSIO_T3 is not set 998 957 # CONFIG_ENIC is not set 999 958 # CONFIG_IXGBE is not set 1000 959 # CONFIG_IXGB is not set 1001 960 # CONFIG_S2IO is not set 961 + # CONFIG_VXGE is not set 1002 962 # CONFIG_MYRI10GE is not set 1003 963 # CONFIG_NETXEN_NIC is not set 1004 964 # CONFIG_NIU is not set ··· 1010 966 # CONFIG_BNX2X is not set 1011 967 # CONFIG_QLGE is not set 1012 968 # CONFIG_SFC is not set 969 + # CONFIG_BE2NET is not set 1013 970 # CONFIG_TR is not set 1014 971 1015 972 # ··· 1019 974 # CONFIG_WLAN_PRE80211 is not set 1020 975 CONFIG_WLAN_80211=y 1021 976 # CONFIG_PCMCIA_RAYCS is not set 1022 - # CONFIG_IPW2100 is not set 1023 - # CONFIG_IPW2200 is not set 1024 977 # CONFIG_LIBERTAS is not set 1025 978 # CONFIG_LIBERTAS_THINFIRM is not set 1026 979 # CONFIG_AIRO is not set 1027 - CONFIG_HERMES=m 1028 - CONFIG_APPLE_AIRPORT=m 1029 - # CONFIG_PLX_HERMES is not set 1030 - # CONFIG_TMD_HERMES is not set 1031 - # CONFIG_NORTEL_HERMES is not set 1032 - CONFIG_PCI_HERMES=m 1033 - CONFIG_PCMCIA_HERMES=m 1034 - # CONFIG_PCMCIA_SPECTRUM is not set 1035 980 # CONFIG_ATMEL is not set 981 + # CONFIG_AT76C50X_USB is not set 1036 982 # CONFIG_AIRO_CS is not set 1037 983 # CONFIG_PCMCIA_WL3501 is not set 1038 984 CONFIG_PRISM54=m ··· 1033 997 # CONFIG_RTL8187 is not set 1034 998 # CONFIG_ADM8211 is not set 1035 999 # CONFIG_MAC80211_HWSIM is not set 1000 + # CONFIG_MWL8K is not set 1036 1001 CONFIG_P54_COMMON=m 1037 1002 # CONFIG_P54_USB is not set 1038 1003 # CONFIG_P54_PCI is not set 1004 + CONFIG_P54_LEDS=y 1039 1005 # CONFIG_ATH5K is not set 1040 1006 # CONFIG_ATH9K is not set 1041 - # CONFIG_IWLCORE is not set 1042 - # CONFIG_IWLWIFI_LEDS is not set 1043 - # CONFIG_IWLAGN is not set 1044 - # CONFIG_IWL3945 is not set 1007 + # CONFIG_AR9170_USB is not set 1008 + # CONFIG_IPW2100 is not set 1009 + # CONFIG_IPW2200 is not set 1010 + # CONFIG_IWLWIFI is not set 1045 1011 # CONFIG_HOSTAP 
is not set 1046 1012 CONFIG_B43=m 1047 1013 CONFIG_B43_PCI_AUTOSELECT=y ··· 1063 1025 # CONFIG_B43LEGACY_PIO_MODE is not set 1064 1026 # CONFIG_ZD1211RW is not set 1065 1027 # CONFIG_RT2X00 is not set 1028 + CONFIG_HERMES=m 1029 + CONFIG_HERMES_CACHE_FW_ON_INIT=y 1030 + CONFIG_APPLE_AIRPORT=m 1031 + # CONFIG_PLX_HERMES is not set 1032 + # CONFIG_TMD_HERMES is not set 1033 + # CONFIG_NORTEL_HERMES is not set 1034 + CONFIG_PCI_HERMES=m 1035 + CONFIG_PCMCIA_HERMES=m 1036 + # CONFIG_PCMCIA_SPECTRUM is not set 1037 + 1038 + # 1039 + # Enable WiMAX (Networking options) to see the WiMAX drivers 1040 + # 1066 1041 1067 1042 # 1068 1043 # USB Network Adapters ··· 1087 1036 CONFIG_USB_USBNET=m 1088 1037 CONFIG_USB_NET_AX8817X=m 1089 1038 CONFIG_USB_NET_CDCETHER=m 1039 + # CONFIG_USB_NET_CDC_EEM is not set 1090 1040 # CONFIG_USB_NET_DM9601 is not set 1091 1041 # CONFIG_USB_NET_SMSC95XX is not set 1092 1042 # CONFIG_USB_NET_GL620A is not set ··· 1151 1099 CONFIG_INPUT_MOUSE=y 1152 1100 # CONFIG_MOUSE_PS2 is not set 1153 1101 # CONFIG_MOUSE_SERIAL is not set 1154 - # CONFIG_MOUSE_APPLETOUCH is not set 1102 + CONFIG_MOUSE_APPLETOUCH=y 1155 1103 # CONFIG_MOUSE_BCM5974 is not set 1156 1104 # CONFIG_MOUSE_VSXXXAA is not set 1157 1105 # CONFIG_INPUT_JOYSTICK is not set ··· 1202 1150 # CONFIG_SERIAL_JSM is not set 1203 1151 # CONFIG_SERIAL_OF_PLATFORM is not set 1204 1152 CONFIG_UNIX98_PTYS=y 1153 + # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set 1205 1154 CONFIG_LEGACY_PTYS=y 1206 1155 CONFIG_LEGACY_PTY_COUNT=256 1156 + # CONFIG_HVC_UDBG is not set 1207 1157 # CONFIG_IPMI_HANDLER is not set 1208 1158 CONFIG_HW_RANDOM=m 1159 + # CONFIG_HW_RANDOM_TIMERIOMEM is not set 1209 1160 CONFIG_NVRAM=y 1210 1161 CONFIG_GEN_RTC=y 1211 1162 # CONFIG_GEN_RTC_X is not set ··· 1287 1232 # Miscellaneous I2C Chip support 1288 1233 # 1289 1234 # CONFIG_DS1682 is not set 1290 - # CONFIG_EEPROM_AT24 is not set 1291 - # CONFIG_EEPROM_LEGACY is not set 1292 1235 # CONFIG_SENSORS_PCF8574 is not set 1293 1236 
# CONFIG_PCF8575 is not set 1294 1237 # CONFIG_SENSORS_PCA9539 is not set 1295 - # CONFIG_SENSORS_PCF8591 is not set 1296 1238 # CONFIG_SENSORS_MAX6875 is not set 1297 1239 # CONFIG_SENSORS_TSL2550 is not set 1298 1240 # CONFIG_I2C_DEBUG_CORE is not set ··· 1311 1259 # CONFIG_THERMAL is not set 1312 1260 # CONFIG_THERMAL_HWMON is not set 1313 1261 # CONFIG_WATCHDOG is not set 1262 + CONFIG_SSB_POSSIBLE=y 1314 1263 1315 1264 # 1316 1265 # Sonics Silicon Backplane 1317 1266 # 1318 - CONFIG_SSB_POSSIBLE=y 1319 1267 CONFIG_SSB=m 1320 1268 CONFIG_SSB_SPROM=y 1321 1269 CONFIG_SSB_PCIHOST_POSSIBLE=y ··· 1333 1281 # CONFIG_MFD_CORE is not set 1334 1282 # CONFIG_MFD_SM501 is not set 1335 1283 # CONFIG_HTC_PASIC3 is not set 1284 + # CONFIG_TWL4030_CORE is not set 1336 1285 # CONFIG_MFD_TMIO is not set 1337 1286 # CONFIG_PMIC_DA903X is not set 1338 1287 # CONFIG_MFD_WM8400 is not set 1339 1288 # CONFIG_MFD_WM8350_I2C is not set 1340 - 1341 - # 1342 - # Voltage and Current regulators 1343 - # 1289 + # CONFIG_MFD_PCF50633 is not set 1344 1290 # CONFIG_REGULATOR is not set 1345 - # CONFIG_REGULATOR_FIXED_VOLTAGE is not set 1346 - # CONFIG_REGULATOR_VIRTUAL_CONSUMER is not set 1347 - # CONFIG_REGULATOR_BQ24022 is not set 1348 1291 1349 1292 # 1350 1293 # Multimedia devices ··· 1437 1390 # CONFIG_FB_KYRO is not set 1438 1391 CONFIG_FB_3DFX=y 1439 1392 # CONFIG_FB_3DFX_ACCEL is not set 1393 + CONFIG_FB_3DFX_I2C=y 1440 1394 # CONFIG_FB_VOODOO1 is not set 1441 1395 # CONFIG_FB_VT8623 is not set 1442 1396 # CONFIG_FB_TRIDENT is not set ··· 1447 1399 # CONFIG_FB_IBM_GXT4500 is not set 1448 1400 # CONFIG_FB_VIRTUAL is not set 1449 1401 # CONFIG_FB_METRONOME is not set 1402 + # CONFIG_FB_MB862XX is not set 1403 + # CONFIG_FB_BROADSHEET is not set 1450 1404 CONFIG_BACKLIGHT_LCD_SUPPORT=y 1451 1405 CONFIG_LCD_CLASS_DEVICE=m 1452 1406 # CONFIG_LCD_ILI9320 is not set 1453 1407 # CONFIG_LCD_PLATFORM is not set 1454 1408 CONFIG_BACKLIGHT_CLASS_DEVICE=y 1455 - # CONFIG_BACKLIGHT_CORGI is not 
set 1409 + CONFIG_BACKLIGHT_GENERIC=y 1456 1410 1457 1411 # 1458 1412 # Display device support ··· 1494 1444 CONFIG_SND_PCM_OSS=m 1495 1445 CONFIG_SND_PCM_OSS_PLUGINS=y 1496 1446 CONFIG_SND_SEQUENCER_OSS=y 1447 + # CONFIG_SND_HRTIMER is not set 1497 1448 # CONFIG_SND_DYNAMIC_MINORS is not set 1498 1449 CONFIG_SND_SUPPORT_OLD_API=y 1499 1450 CONFIG_SND_VERBOSE_PROCFS=y 1500 1451 # CONFIG_SND_VERBOSE_PRINTK is not set 1501 1452 # CONFIG_SND_DEBUG is not set 1453 + CONFIG_SND_VMASTER=y 1502 1454 CONFIG_SND_DRIVERS=y 1503 1455 CONFIG_SND_DUMMY=m 1504 1456 # CONFIG_SND_VIRMIDI is not set ··· 1538 1486 # CONFIG_SND_INDIGO is not set 1539 1487 # CONFIG_SND_INDIGOIO is not set 1540 1488 # CONFIG_SND_INDIGODJ is not set 1489 + # CONFIG_SND_INDIGOIOX is not set 1490 + # CONFIG_SND_INDIGODJX is not set 1541 1491 # CONFIG_SND_EMU10K1 is not set 1542 1492 # CONFIG_SND_EMU10K1X is not set 1543 1493 # CONFIG_SND_ENS1370 is not set ··· 1605 1551 # 1606 1552 # Special HID drivers 1607 1553 # 1608 - CONFIG_HID_COMPAT=y 1609 1554 CONFIG_HID_A4TECH=y 1610 1555 CONFIG_HID_APPLE=y 1611 1556 CONFIG_HID_BELKIN=y 1612 - CONFIG_HID_BRIGHT=y 1613 1557 CONFIG_HID_CHERRY=y 1614 1558 CONFIG_HID_CHICONY=y 1615 1559 CONFIG_HID_CYPRESS=y 1616 - CONFIG_HID_DELL=y 1560 + # CONFIG_DRAGONRISE_FF is not set 1617 1561 CONFIG_HID_EZKEY=y 1562 + CONFIG_HID_KYE=y 1618 1563 CONFIG_HID_GYRATION=y 1564 + CONFIG_HID_KENSINGTON=y 1619 1565 CONFIG_HID_LOGITECH=y 1620 1566 # CONFIG_LOGITECH_FF is not set 1621 1567 # CONFIG_LOGIRUMBLEPAD2_FF is not set 1622 1568 CONFIG_HID_MICROSOFT=y 1623 1569 CONFIG_HID_MONTEREY=y 1570 + CONFIG_HID_NTRIG=y 1624 1571 CONFIG_HID_PANTHERLORD=y 1625 1572 # CONFIG_PANTHERLORD_FF is not set 1626 1573 CONFIG_HID_PETALYNX=y 1627 1574 CONFIG_HID_SAMSUNG=y 1628 1575 CONFIG_HID_SONY=y 1629 1576 CONFIG_HID_SUNPLUS=y 1577 + # CONFIG_GREENASIA_FF is not set 1578 + CONFIG_HID_TOPSEED=y 1630 1579 # CONFIG_THRUSTMASTER_FF is not set 1631 1580 # CONFIG_ZEROPLUS_FF is not set 1632 1581 
CONFIG_USB_SUPPORT=y ··· 1660 1603 CONFIG_USB_EHCI_ROOT_HUB_TT=y 1661 1604 # CONFIG_USB_EHCI_TT_NEWSCHED is not set 1662 1605 # CONFIG_USB_EHCI_HCD_PPC_OF is not set 1606 + # CONFIG_USB_OXU210HP_HCD is not set 1663 1607 # CONFIG_USB_ISP116X_HCD is not set 1664 1608 # CONFIG_USB_ISP1760_HCD is not set 1665 1609 CONFIG_USB_OHCI_HCD=y ··· 1683 1625 # CONFIG_USB_TMC is not set 1684 1626 1685 1627 # 1686 - # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' 1628 + # NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may 1687 1629 # 1688 1630 1689 1631 # 1690 - # may also be needed; see USB_STORAGE Help for more information 1632 + # also be needed; see USB_STORAGE Help for more info 1691 1633 # 1692 1634 CONFIG_USB_STORAGE=m 1693 1635 # CONFIG_USB_STORAGE_DEBUG is not set 1694 1636 # CONFIG_USB_STORAGE_DATAFAB is not set 1695 1637 # CONFIG_USB_STORAGE_FREECOM is not set 1696 1638 # CONFIG_USB_STORAGE_ISD200 is not set 1697 - # CONFIG_USB_STORAGE_DPCM is not set 1698 1639 # CONFIG_USB_STORAGE_USBAT is not set 1699 1640 # CONFIG_USB_STORAGE_SDDR09 is not set 1700 1641 # CONFIG_USB_STORAGE_SDDR55 is not set 1701 1642 # CONFIG_USB_STORAGE_JUMPSHOT is not set 1702 1643 # CONFIG_USB_STORAGE_ALAUDA is not set 1703 - CONFIG_USB_STORAGE_ONETOUCH=y 1644 + CONFIG_USB_STORAGE_ONETOUCH=m 1704 1645 # CONFIG_USB_STORAGE_KARMA is not set 1705 1646 # CONFIG_USB_STORAGE_CYPRESS_ATACB is not set 1706 1647 # CONFIG_USB_LIBUSUAL is not set ··· 1722 1665 # CONFIG_USB_SERIAL_CH341 is not set 1723 1666 # CONFIG_USB_SERIAL_WHITEHEAT is not set 1724 1667 # CONFIG_USB_SERIAL_DIGI_ACCELEPORT is not set 1725 - # CONFIG_USB_SERIAL_CP2101 is not set 1668 + # CONFIG_USB_SERIAL_CP210X is not set 1726 1669 # CONFIG_USB_SERIAL_CYPRESS_M8 is not set 1727 1670 # CONFIG_USB_SERIAL_EMPEG is not set 1728 1671 # CONFIG_USB_SERIAL_FTDI_SIO is not set ··· 1758 1701 # CONFIG_USB_SERIAL_NAVMAN is not set 1759 1702 # CONFIG_USB_SERIAL_PL2303 is not set 1760 1703 # CONFIG_USB_SERIAL_OTI6858 is not set 1704 + # 
CONFIG_USB_SERIAL_QUALCOMM is not set 1761 1705 # CONFIG_USB_SERIAL_SPCP8X5 is not set 1762 1706 # CONFIG_USB_SERIAL_HP4X is not set 1763 1707 # CONFIG_USB_SERIAL_SAFE is not set 1708 + # CONFIG_USB_SERIAL_SIEMENS_MPI is not set 1764 1709 # CONFIG_USB_SERIAL_SIERRAWIRELESS is not set 1710 + # CONFIG_USB_SERIAL_SYMBOL is not set 1765 1711 # CONFIG_USB_SERIAL_TI is not set 1766 1712 # CONFIG_USB_SERIAL_CYBERJACK is not set 1767 1713 # CONFIG_USB_SERIAL_XIRCOM is not set 1768 1714 # CONFIG_USB_SERIAL_OPTION is not set 1769 1715 # CONFIG_USB_SERIAL_OMNINET is not set 1716 + # CONFIG_USB_SERIAL_OPTICON is not set 1770 1717 # CONFIG_USB_SERIAL_DEBUG is not set 1771 1718 1772 1719 # ··· 1787 1726 # CONFIG_USB_LED is not set 1788 1727 # CONFIG_USB_CYPRESS_CY7C63 is not set 1789 1728 # CONFIG_USB_CYTHERM is not set 1790 - # CONFIG_USB_PHIDGET is not set 1791 1729 # CONFIG_USB_IDMOUSE is not set 1792 1730 # CONFIG_USB_FTDI_ELAN is not set 1793 1731 CONFIG_USB_APPLEDISPLAY=m ··· 1798 1738 # CONFIG_USB_ISIGHTFW is not set 1799 1739 # CONFIG_USB_VST is not set 1800 1740 # CONFIG_USB_GADGET is not set 1741 + 1742 + # 1743 + # OTG and related infrastructure 1744 + # 1745 + # CONFIG_NOP_USB_XCEIV is not set 1801 1746 # CONFIG_UWB is not set 1802 1747 # CONFIG_MMC is not set 1803 1748 # CONFIG_MEMSTICK is not set ··· 1813 1748 # LED drivers 1814 1749 # 1815 1750 # CONFIG_LEDS_PCA9532 is not set 1751 + # CONFIG_LEDS_LP5521 is not set 1816 1752 # CONFIG_LEDS_PCA955X is not set 1753 + # CONFIG_LEDS_BD2802 is not set 1817 1754 1818 1755 # 1819 1756 # LED Triggers ··· 1826 1759 # CONFIG_LEDS_TRIGGER_HEARTBEAT is not set 1827 1760 # CONFIG_LEDS_TRIGGER_BACKLIGHT is not set 1828 1761 CONFIG_LEDS_TRIGGER_DEFAULT_ON=y 1762 + 1763 + # 1764 + # iptables trigger is under Netfilter config (LED target) 1765 + # 1829 1766 # CONFIG_ACCESSIBILITY is not set 1830 1767 # CONFIG_INFINIBAND is not set 1831 1768 # CONFIG_EDAC is not set 1832 1769 # CONFIG_RTC_CLASS is not set 1833 1770 # 
CONFIG_DMADEVICES is not set 1771 + # CONFIG_AUXDISPLAY is not set 1834 1772 # CONFIG_UIO is not set 1835 1773 # CONFIG_STAGING is not set 1836 1774 ··· 1846 1774 # CONFIG_EXT2_FS_XATTR is not set 1847 1775 # CONFIG_EXT2_FS_XIP is not set 1848 1776 CONFIG_EXT3_FS=y 1777 + # CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set 1849 1778 CONFIG_EXT3_FS_XATTR=y 1850 1779 CONFIG_EXT3_FS_POSIX_ACL=y 1851 1780 # CONFIG_EXT3_FS_SECURITY is not set ··· 1856 1783 # CONFIG_EXT4_FS_POSIX_ACL is not set 1857 1784 # CONFIG_EXT4_FS_SECURITY is not set 1858 1785 CONFIG_JBD=y 1786 + # CONFIG_JBD_DEBUG is not set 1859 1787 CONFIG_JBD2=y 1788 + # CONFIG_JBD2_DEBUG is not set 1860 1789 CONFIG_FS_MBCACHE=y 1861 1790 # CONFIG_REISERFS_FS is not set 1862 1791 # CONFIG_JFS_FS is not set ··· 1867 1792 # CONFIG_XFS_FS is not set 1868 1793 # CONFIG_GFS2_FS is not set 1869 1794 # CONFIG_OCFS2_FS is not set 1795 + # CONFIG_BTRFS_FS is not set 1870 1796 CONFIG_DNOTIFY=y 1871 1797 CONFIG_INOTIFY=y 1872 1798 CONFIG_INOTIFY_USER=y ··· 1875 1799 # CONFIG_AUTOFS_FS is not set 1876 1800 CONFIG_AUTOFS4_FS=m 1877 1801 CONFIG_FUSE_FS=m 1802 + 1803 + # 1804 + # Caches 1805 + # 1806 + # CONFIG_FSCACHE is not set 1878 1807 1879 1808 # 1880 1809 # CD-ROM/DVD Filesystems ··· 1912 1831 # CONFIG_TMPFS_POSIX_ACL is not set 1913 1832 # CONFIG_HUGETLB_PAGE is not set 1914 1833 # CONFIG_CONFIGFS_FS is not set 1915 - 1916 - # 1917 - # Miscellaneous filesystems 1918 - # 1834 + CONFIG_MISC_FILESYSTEMS=y 1919 1835 # CONFIG_ADFS_FS is not set 1920 1836 # CONFIG_AFFS_FS is not set 1921 1837 CONFIG_HFS_FS=m ··· 1921 1843 # CONFIG_BFS_FS is not set 1922 1844 # CONFIG_EFS_FS is not set 1923 1845 # CONFIG_CRAMFS is not set 1846 + # CONFIG_SQUASHFS is not set 1924 1847 # CONFIG_VXFS_FS is not set 1925 1848 # CONFIG_MINIX_FS is not set 1926 1849 # CONFIG_OMFS_FS is not set ··· 1930 1851 # CONFIG_ROMFS_FS is not set 1931 1852 # CONFIG_SYSV_FS is not set 1932 1853 # CONFIG_UFS_FS is not set 1854 + # CONFIG_NILFS2_FS is not set 1933 1855 
CONFIG_NETWORK_FILESYSTEMS=y 1934 1856 CONFIG_NFS_FS=y 1935 1857 CONFIG_NFS_V3=y ··· 1948 1868 CONFIG_NFS_COMMON=y 1949 1869 CONFIG_SUNRPC=y 1950 1870 CONFIG_SUNRPC_GSS=y 1951 - # CONFIG_SUNRPC_REGISTER_V4 is not set 1952 1871 CONFIG_RPCSEC_GSS_KRB5=y 1953 1872 # CONFIG_RPCSEC_GSS_SPKM3 is not set 1954 1873 CONFIG_SMB_FS=m ··· 2019 1940 # CONFIG_NLS_KOI8_U is not set 2020 1941 CONFIG_NLS_UTF8=m 2021 1942 # CONFIG_DLM is not set 1943 + CONFIG_BINARY_PRINTF=y 2022 1944 2023 1945 # 2024 1946 # Library routines 2025 1947 # 2026 1948 CONFIG_BITREVERSE=y 1949 + CONFIG_GENERIC_FIND_LAST_BIT=y 2027 1950 CONFIG_CRC_CCITT=y 2028 1951 CONFIG_CRC16=y 2029 1952 CONFIG_CRC_T10DIF=y ··· 2035 1954 CONFIG_LIBCRC32C=m 2036 1955 CONFIG_ZLIB_INFLATE=y 2037 1956 CONFIG_ZLIB_DEFLATE=y 1957 + CONFIG_DECOMPRESS_GZIP=y 1958 + CONFIG_DECOMPRESS_BZIP2=y 1959 + CONFIG_DECOMPRESS_LZMA=y 2038 1960 CONFIG_TEXTSEARCH=y 2039 1961 CONFIG_TEXTSEARCH_KMP=m 2040 1962 CONFIG_TEXTSEARCH_BM=m 2041 1963 CONFIG_TEXTSEARCH_FSM=m 2042 - CONFIG_PLIST=y 2043 1964 CONFIG_HAS_IOMEM=y 2044 1965 CONFIG_HAS_IOPORT=y 2045 1966 CONFIG_HAS_DMA=y 2046 1967 CONFIG_HAVE_LMB=y 1968 + CONFIG_NLATTR=y 2047 1969 2048 1970 # 2049 1971 # Kernel hacking ··· 2057 1973 CONFIG_FRAME_WARN=1024 2058 1974 CONFIG_MAGIC_SYSRQ=y 2059 1975 # CONFIG_UNUSED_SYMBOLS is not set 2060 - # CONFIG_DEBUG_FS is not set 1976 + CONFIG_DEBUG_FS=y 2061 1977 # CONFIG_HEADERS_CHECK is not set 2062 1978 CONFIG_DEBUG_KERNEL=y 2063 1979 # CONFIG_DEBUG_SHIRQ is not set 2064 1980 CONFIG_DETECT_SOFTLOCKUP=y 2065 1981 # CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set 2066 1982 CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0 1983 + CONFIG_DETECT_HUNG_TASK=y 1984 + # CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set 1985 + CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0 2067 1986 CONFIG_SCHED_DEBUG=y 2068 1987 CONFIG_SCHEDSTATS=y 2069 1988 # CONFIG_TIMER_STATS is not set ··· 2081 1994 # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set 2082 1995 CONFIG_STACKTRACE=y 2083 1996 # 
CONFIG_DEBUG_KOBJECT is not set 1997 + # CONFIG_DEBUG_HIGHMEM is not set 2084 1998 CONFIG_DEBUG_BUGVERBOSE=y 2085 1999 # CONFIG_DEBUG_INFO is not set 2086 2000 # CONFIG_DEBUG_VM is not set ··· 2089 2001 CONFIG_DEBUG_MEMORY_INIT=y 2090 2002 # CONFIG_DEBUG_LIST is not set 2091 2003 # CONFIG_DEBUG_SG is not set 2004 + # CONFIG_DEBUG_NOTIFIERS is not set 2092 2005 # CONFIG_BOOT_PRINTK_DELAY is not set 2093 2006 # CONFIG_RCU_TORTURE_TEST is not set 2094 2007 # CONFIG_RCU_CPU_STALL_DETECTOR is not set ··· 2098 2009 # CONFIG_FAULT_INJECTION is not set 2099 2010 CONFIG_LATENCYTOP=y 2100 2011 CONFIG_SYSCTL_SYSCALL_CHECK=y 2012 + CONFIG_NOP_TRACER=y 2101 2013 CONFIG_HAVE_FUNCTION_TRACER=y 2014 + CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y 2015 + CONFIG_HAVE_DYNAMIC_FTRACE=y 2016 + CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y 2017 + CONFIG_RING_BUFFER=y 2018 + CONFIG_TRACING=y 2019 + CONFIG_TRACING_SUPPORT=y 2102 2020 2103 2021 # 2104 2022 # Tracers ··· 2113 2017 # CONFIG_FUNCTION_TRACER is not set 2114 2018 # CONFIG_SCHED_TRACER is not set 2115 2019 # CONFIG_CONTEXT_SWITCH_TRACER is not set 2020 + # CONFIG_EVENT_TRACER is not set 2116 2021 # CONFIG_BOOT_TRACER is not set 2022 + # CONFIG_TRACE_BRANCH_PROFILING is not set 2117 2023 # CONFIG_STACK_TRACER is not set 2118 - # CONFIG_DYNAMIC_PRINTK_DEBUG is not set 2024 + # CONFIG_KMEMTRACE is not set 2025 + # CONFIG_WORKQUEUE_TRACER is not set 2026 + # CONFIG_BLK_DEV_IO_TRACE is not set 2027 + # CONFIG_FTRACE_STARTUP_TEST is not set 2028 + # CONFIG_DYNAMIC_DEBUG is not set 2119 2029 # CONFIG_SAMPLES is not set 2120 2030 CONFIG_HAVE_ARCH_KGDB=y 2121 2031 # CONFIG_KGDB is not set 2032 + CONFIG_PRINT_STACK_DEPTH=64 2122 2033 # CONFIG_DEBUG_STACKOVERFLOW is not set 2123 2034 # CONFIG_DEBUG_STACK_USAGE is not set 2124 2035 # CONFIG_CODE_PATCHING_SELFTEST is not set ··· 2136 2033 CONFIG_XMON_DISASSEMBLY=y 2137 2034 CONFIG_DEBUGGER=y 2138 2035 CONFIG_IRQSTACKS=y 2036 + # CONFIG_VIRQ_DEBUG is not set 2139 2037 # CONFIG_BDI_SWITCH is not set 2140 2038 
CONFIG_BOOTX_TEXT=y 2141 2039 # CONFIG_PPC_EARLY_DEBUG is not set ··· 2155 2051 # 2156 2052 # CONFIG_CRYPTO_FIPS is not set 2157 2053 CONFIG_CRYPTO_ALGAPI=y 2054 + CONFIG_CRYPTO_ALGAPI2=y 2158 2055 CONFIG_CRYPTO_AEAD=y 2056 + CONFIG_CRYPTO_AEAD2=y 2159 2057 CONFIG_CRYPTO_BLKCIPHER=y 2058 + CONFIG_CRYPTO_BLKCIPHER2=y 2160 2059 CONFIG_CRYPTO_HASH=y 2161 - CONFIG_CRYPTO_RNG=y 2060 + CONFIG_CRYPTO_HASH2=y 2061 + CONFIG_CRYPTO_RNG2=y 2062 + CONFIG_CRYPTO_PCOMP=y 2162 2063 CONFIG_CRYPTO_MANAGER=y 2064 + CONFIG_CRYPTO_MANAGER2=y 2163 2065 # CONFIG_CRYPTO_GF128MUL is not set 2164 2066 CONFIG_CRYPTO_NULL=m 2067 + CONFIG_CRYPTO_WORKQUEUE=y 2165 2068 # CONFIG_CRYPTO_CRYPTD is not set 2166 2069 CONFIG_CRYPTO_AUTHENC=y 2167 2070 # CONFIG_CRYPTO_TEST is not set ··· 2238 2127 # Compression 2239 2128 # 2240 2129 CONFIG_CRYPTO_DEFLATE=m 2130 + # CONFIG_CRYPTO_ZLIB is not set 2241 2131 # CONFIG_CRYPTO_LZO is not set 2242 2132 2243 2133 #
+4 -2
arch/powerpc/include/asm/dma-mapping.h
··· 26 26 * allocate the space "normally" and use the cache management functions 27 27 * to ensure it is consistent. 28 28 */ 29 - extern void *__dma_alloc_coherent(size_t size, dma_addr_t *handle, gfp_t gfp); 29 + struct device; 30 + extern void *__dma_alloc_coherent(struct device *dev, size_t size, 31 + dma_addr_t *handle, gfp_t gfp); 30 32 extern void __dma_free_coherent(size_t size, void *vaddr); 31 33 extern void __dma_sync(void *vaddr, size_t size, int direction); 32 34 extern void __dma_sync_page(struct page *page, unsigned long offset, ··· 39 37 * Cache coherent cores. 40 38 */ 41 39 42 - #define __dma_alloc_coherent(gfp, size, handle) NULL 40 + #define __dma_alloc_coherent(dev, gfp, size, handle) NULL 43 41 #define __dma_free_coherent(size, addr) ((void)0) 44 42 #define __dma_sync(addr, size, rw) ((void)0) 45 43 #define __dma_sync_page(pg, off, sz, rw) ((void)0)
+2 -2
arch/powerpc/include/asm/fixmap.h
··· 14 14 #ifndef _ASM_FIXMAP_H 15 15 #define _ASM_FIXMAP_H 16 16 17 - extern unsigned long FIXADDR_TOP; 18 - 19 17 #ifndef __ASSEMBLY__ 20 18 #include <linux/kernel.h> 21 19 #include <asm/page.h> ··· 21 23 #include <linux/threads.h> 22 24 #include <asm/kmap_types.h> 23 25 #endif 26 + 27 + #define FIXADDR_TOP ((unsigned long)(-PAGE_SIZE)) 24 28 25 29 /* 26 30 * Here we define all the compile-time 'special' virtual
+24 -2
arch/powerpc/include/asm/pgtable-ppc32.h
··· 10 10 11 11 extern unsigned long va_to_phys(unsigned long address); 12 12 extern pte_t *va_to_pte(unsigned long address); 13 - extern unsigned long ioremap_bot, ioremap_base; 13 + extern unsigned long ioremap_bot; 14 14 15 15 #ifdef CONFIG_44x 16 16 extern int icache_44x_need_flush; ··· 56 56 printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e)) 57 57 58 58 /* 59 + * This is the bottom of the PKMAP area with HIGHMEM or an arbitrary 60 + * value (for now) on others, from where we can start laying out kernel 61 + * virtual space that goes below PKMAP and FIXMAP 62 + */ 63 + #ifdef CONFIG_HIGHMEM 64 + #define KVIRT_TOP PKMAP_BASE 65 + #else 66 + #define KVIRT_TOP (0xfe000000UL) /* for now, could be FIXMAP_BASE ? */ 67 + #endif 68 + 69 + /* 70 + * ioremap_bot starts at that address. Early ioremaps move down from there, 71 + * until mem_init() at which point this becomes the top of the vmalloc 72 + * and ioremap space 73 + */ 74 + #ifdef CONFIG_NOT_COHERENT_CACHE 75 + #define IOREMAP_TOP ((KVIRT_TOP - CONFIG_CONSISTENT_SIZE) & PAGE_MASK) 76 + #else 77 + #define IOREMAP_TOP KVIRT_TOP 78 + #endif 79 + 80 + /* 59 81 * Just any arbitrary offset to the start of the vmalloc VM area: the 60 - * current 64MB value just means that there will be a 64MB "hole" after the 82 + * current 16MB value just means that there will be a 16MB "hole" after the 61 83 * physical memory until the kernel virtual memory starts. That means that 62 84 * any out-of-bounds memory accesses will hopefully be caught. 63 85 * The vmalloc() routines leaves a hole of 4kB between each vmalloced
+1 -1
arch/powerpc/kernel/dma.c
··· 32 32 { 33 33 void *ret; 34 34 #ifdef CONFIG_NOT_COHERENT_CACHE 35 - ret = __dma_alloc_coherent(size, dma_handle, flag); 35 + ret = __dma_alloc_coherent(dev, size, dma_handle, flag); 36 36 if (ret == NULL) 37 37 return NULL; 38 38 *dma_handle += get_dma_direct_offset(dev);
-1
arch/powerpc/lib/Makefile
··· 18 18 memcpy_64.o usercopy_64.o mem_64.o string.o 19 19 obj-$(CONFIG_XMON) += sstep.o 20 20 obj-$(CONFIG_KPROBES) += sstep.o 21 - obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o 22 21 23 22 ifeq ($(CONFIG_PPC64),y) 24 23 obj-$(CONFIG_SMP) += locks.o
-237
arch/powerpc/lib/dma-noncoherent.c
··· 1 - /* 2 - * PowerPC version derived from arch/arm/mm/consistent.c 3 - * Copyright (C) 2001 Dan Malek (dmalek@jlc.net) 4 - * 5 - * Copyright (C) 2000 Russell King 6 - * 7 - * Consistent memory allocators. Used for DMA devices that want to 8 - * share uncached memory with the processor core. The function return 9 - * is the virtual address and 'dma_handle' is the physical address. 10 - * Mostly stolen from the ARM port, with some changes for PowerPC. 11 - * -- Dan 12 - * 13 - * Reorganized to get rid of the arch-specific consistent_* functions 14 - * and provide non-coherent implementations for the DMA API. -Matt 15 - * 16 - * Added in_interrupt() safe dma_alloc_coherent()/dma_free_coherent() 17 - * implementation. This is pulled straight from ARM and barely 18 - * modified. -Matt 19 - * 20 - * This program is free software; you can redistribute it and/or modify 21 - * it under the terms of the GNU General Public License version 2 as 22 - * published by the Free Software Foundation. 23 - */ 24 - 25 - #include <linux/sched.h> 26 - #include <linux/kernel.h> 27 - #include <linux/errno.h> 28 - #include <linux/string.h> 29 - #include <linux/types.h> 30 - #include <linux/highmem.h> 31 - #include <linux/dma-mapping.h> 32 - #include <linux/vmalloc.h> 33 - 34 - #include <asm/tlbflush.h> 35 - 36 - /* 37 - * Allocate DMA-coherent memory space and return both the kernel remapped 38 - * virtual and bus address for that space. 
39 - */ 40 - void * 41 - __dma_alloc_coherent(size_t size, dma_addr_t *handle, gfp_t gfp) 42 - { 43 - struct page *page; 44 - unsigned long order; 45 - int i; 46 - unsigned int nr_pages = PAGE_ALIGN(size)>>PAGE_SHIFT; 47 - unsigned int array_size = nr_pages * sizeof(struct page *); 48 - struct page **pages; 49 - struct page *end; 50 - u64 mask = 0x00ffffff, limit; /* ISA default */ 51 - struct vm_struct *area; 52 - 53 - BUG_ON(!mem_init_done); 54 - size = PAGE_ALIGN(size); 55 - limit = (mask + 1) & ~mask; 56 - if (limit && size >= limit) { 57 - printk(KERN_WARNING "coherent allocation too big (requested " 58 - "%#x mask %#Lx)\n", size, mask); 59 - return NULL; 60 - } 61 - 62 - order = get_order(size); 63 - 64 - if (mask != 0xffffffff) 65 - gfp |= GFP_DMA; 66 - 67 - page = alloc_pages(gfp, order); 68 - if (!page) 69 - goto no_page; 70 - 71 - end = page + (1 << order); 72 - 73 - /* 74 - * Invalidate any data that might be lurking in the 75 - * kernel direct-mapped region for device DMA. 76 - */ 77 - { 78 - unsigned long kaddr = (unsigned long)page_address(page); 79 - memset(page_address(page), 0, size); 80 - flush_dcache_range(kaddr, kaddr + size); 81 - } 82 - 83 - split_page(page, order); 84 - 85 - /* 86 - * Set the "dma handle" 87 - */ 88 - *handle = page_to_phys(page); 89 - 90 - area = get_vm_area_caller(size, VM_IOREMAP, 91 - __builtin_return_address(1)); 92 - if (!area) 93 - goto out_free_pages; 94 - 95 - if (array_size > PAGE_SIZE) { 96 - pages = vmalloc(array_size); 97 - area->flags |= VM_VPAGES; 98 - } else { 99 - pages = kmalloc(array_size, GFP_KERNEL); 100 - } 101 - if (!pages) 102 - goto out_free_area; 103 - 104 - area->pages = pages; 105 - area->nr_pages = nr_pages; 106 - 107 - for (i = 0; i < nr_pages; i++) 108 - pages[i] = page + i; 109 - 110 - if (map_vm_area(area, pgprot_noncached(PAGE_KERNEL), &pages)) 111 - goto out_unmap; 112 - 113 - /* 114 - * Free the otherwise unused pages. 
115 - */ 116 - page += nr_pages; 117 - while (page < end) { 118 - __free_page(page); 119 - page++; 120 - } 121 - 122 - return area->addr; 123 - out_unmap: 124 - vunmap(area->addr); 125 - if (array_size > PAGE_SIZE) 126 - vfree(pages); 127 - else 128 - kfree(pages); 129 - goto out_free_pages; 130 - out_free_area: 131 - free_vm_area(area); 132 - out_free_pages: 133 - if (page) 134 - __free_pages(page, order); 135 - no_page: 136 - return NULL; 137 - } 138 - EXPORT_SYMBOL(__dma_alloc_coherent); 139 - 140 - /* 141 - * free a page as defined by the above mapping. 142 - */ 143 - void __dma_free_coherent(size_t size, void *vaddr) 144 - { 145 - vfree(vaddr); 146 - 147 - } 148 - EXPORT_SYMBOL(__dma_free_coherent); 149 - 150 - /* 151 - * make an area consistent. 152 - */ 153 - void __dma_sync(void *vaddr, size_t size, int direction) 154 - { 155 - unsigned long start = (unsigned long)vaddr; 156 - unsigned long end = start + size; 157 - 158 - switch (direction) { 159 - case DMA_NONE: 160 - BUG(); 161 - case DMA_FROM_DEVICE: 162 - /* 163 - * invalidate only when cache-line aligned otherwise there is 164 - * the potential for discarding uncommitted data from the cache 165 - */ 166 - if ((start & (L1_CACHE_BYTES - 1)) || (size & (L1_CACHE_BYTES - 1))) 167 - flush_dcache_range(start, end); 168 - else 169 - invalidate_dcache_range(start, end); 170 - break; 171 - case DMA_TO_DEVICE: /* writeback only */ 172 - clean_dcache_range(start, end); 173 - break; 174 - case DMA_BIDIRECTIONAL: /* writeback and invalidate */ 175 - flush_dcache_range(start, end); 176 - break; 177 - } 178 - } 179 - EXPORT_SYMBOL(__dma_sync); 180 - 181 - #ifdef CONFIG_HIGHMEM 182 - /* 183 - * __dma_sync_page() implementation for systems using highmem. 184 - * In this case, each page of a buffer must be kmapped/kunmapped 185 - * in order to have a virtual address for __dma_sync(). This must 186 - * not sleep so kmap_atomic()/kunmap_atomic() are used. 
187 - * 188 - * Note: yes, it is possible and correct to have a buffer extend 189 - * beyond the first page. 190 - */ 191 - static inline void __dma_sync_page_highmem(struct page *page, 192 - unsigned long offset, size_t size, int direction) 193 - { 194 - size_t seg_size = min((size_t)(PAGE_SIZE - offset), size); 195 - size_t cur_size = seg_size; 196 - unsigned long flags, start, seg_offset = offset; 197 - int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE; 198 - int seg_nr = 0; 199 - 200 - local_irq_save(flags); 201 - 202 - do { 203 - start = (unsigned long)kmap_atomic(page + seg_nr, 204 - KM_PPC_SYNC_PAGE) + seg_offset; 205 - 206 - /* Sync this buffer segment */ 207 - __dma_sync((void *)start, seg_size, direction); 208 - kunmap_atomic((void *)start, KM_PPC_SYNC_PAGE); 209 - seg_nr++; 210 - 211 - /* Calculate next buffer segment size */ 212 - seg_size = min((size_t)PAGE_SIZE, size - cur_size); 213 - 214 - /* Add the segment size to our running total */ 215 - cur_size += seg_size; 216 - seg_offset = 0; 217 - } while (seg_nr < nr_segs); 218 - 219 - local_irq_restore(flags); 220 - } 221 - #endif /* CONFIG_HIGHMEM */ 222 - 223 - /* 224 - * __dma_sync_page makes memory consistent. identical to __dma_sync, but 225 - * takes a struct page instead of a virtual address 226 - */ 227 - void __dma_sync_page(struct page *page, unsigned long offset, 228 - size_t size, int direction) 229 - { 230 - #ifdef CONFIG_HIGHMEM 231 - __dma_sync_page_highmem(page, offset, size, direction); 232 - #else 233 - unsigned long start = (unsigned long)page_address(page) + offset; 234 - __dma_sync((void *)start, size, direction); 235 - #endif 236 - } 237 - EXPORT_SYMBOL(__dma_sync_page);
+1
arch/powerpc/mm/Makefile
··· 26 26 obj-$(CONFIG_PPC_MM_SLICES) += slice.o 27 27 obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o 28 28 obj-$(CONFIG_PPC_SUBPAGE_PROT) += subpage-prot.o 29 + obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
+400
arch/powerpc/mm/dma-noncoherent.c
··· 1 + /* 2 + * PowerPC version derived from arch/arm/mm/consistent.c 3 + * Copyright (C) 2001 Dan Malek (dmalek@jlc.net) 4 + * 5 + * Copyright (C) 2000 Russell King 6 + * 7 + * Consistent memory allocators. Used for DMA devices that want to 8 + * share uncached memory with the processor core. The function return 9 + * is the virtual address and 'dma_handle' is the physical address. 10 + * Mostly stolen from the ARM port, with some changes for PowerPC. 11 + * -- Dan 12 + * 13 + * Reorganized to get rid of the arch-specific consistent_* functions 14 + * and provide non-coherent implementations for the DMA API. -Matt 15 + * 16 + * Added in_interrupt() safe dma_alloc_coherent()/dma_free_coherent() 17 + * implementation. This is pulled straight from ARM and barely 18 + * modified. -Matt 19 + * 20 + * This program is free software; you can redistribute it and/or modify 21 + * it under the terms of the GNU General Public License version 2 as 22 + * published by the Free Software Foundation. 23 + */ 24 + 25 + #include <linux/sched.h> 26 + #include <linux/kernel.h> 27 + #include <linux/errno.h> 28 + #include <linux/string.h> 29 + #include <linux/types.h> 30 + #include <linux/highmem.h> 31 + #include <linux/dma-mapping.h> 32 + 33 + #include <asm/tlbflush.h> 34 + 35 + #include "mmu_decl.h" 36 + 37 + /* 38 + * This address range defaults to a value that is safe for all 39 + * platforms which currently set CONFIG_NOT_COHERENT_CACHE. It 40 + * can be further configured for specific applications under 41 + * the "Advanced Setup" menu. -Matt 42 + */ 43 + #define CONSISTENT_BASE (IOREMAP_TOP) 44 + #define CONSISTENT_END (CONSISTENT_BASE + CONFIG_CONSISTENT_SIZE) 45 + #define CONSISTENT_OFFSET(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT) 46 + 47 + /* 48 + * This is the page table (2MB) covering uncached, DMA consistent allocations 49 + */ 50 + static DEFINE_SPINLOCK(consistent_lock); 51 + 52 + /* 53 + * VM region handling support. 
54 + * 55 + * This should become something generic, handling VM region allocations for 56 + * vmalloc and similar (ioremap, module space, etc). 57 + * 58 + * I envisage vmalloc()'s supporting vm_struct becoming: 59 + * 60 + * struct vm_struct { 61 + * struct vm_region region; 62 + * unsigned long flags; 63 + * struct page **pages; 64 + * unsigned int nr_pages; 65 + * unsigned long phys_addr; 66 + * }; 67 + * 68 + * get_vm_area() would then call vm_region_alloc with an appropriate 69 + * struct vm_region head (eg): 70 + * 71 + * struct vm_region vmalloc_head = { 72 + * .vm_list = LIST_HEAD_INIT(vmalloc_head.vm_list), 73 + * .vm_start = VMALLOC_START, 74 + * .vm_end = VMALLOC_END, 75 + * }; 76 + * 77 + * However, vmalloc_head.vm_start is variable (typically, it is dependent on 78 + * the amount of RAM found at boot time.) I would imagine that get_vm_area() 79 + * would have to initialise this each time prior to calling vm_region_alloc(). 80 + */ 81 + struct ppc_vm_region { 82 + struct list_head vm_list; 83 + unsigned long vm_start; 84 + unsigned long vm_end; 85 + }; 86 + 87 + static struct ppc_vm_region consistent_head = { 88 + .vm_list = LIST_HEAD_INIT(consistent_head.vm_list), 89 + .vm_start = CONSISTENT_BASE, 90 + .vm_end = CONSISTENT_END, 91 + }; 92 + 93 + static struct ppc_vm_region * 94 + ppc_vm_region_alloc(struct ppc_vm_region *head, size_t size, gfp_t gfp) 95 + { 96 + unsigned long addr = head->vm_start, end = head->vm_end - size; 97 + unsigned long flags; 98 + struct ppc_vm_region *c, *new; 99 + 100 + new = kmalloc(sizeof(struct ppc_vm_region), gfp); 101 + if (!new) 102 + goto out; 103 + 104 + spin_lock_irqsave(&consistent_lock, flags); 105 + 106 + list_for_each_entry(c, &head->vm_list, vm_list) { 107 + if ((addr + size) < addr) 108 + goto nospc; 109 + if ((addr + size) <= c->vm_start) 110 + goto found; 111 + addr = c->vm_end; 112 + if (addr > end) 113 + goto nospc; 114 + } 115 + 116 + found: 117 + /* 118 + * Insert this entry _before_ the one we found. 
119 + */ 120 + list_add_tail(&new->vm_list, &c->vm_list); 121 + new->vm_start = addr; 122 + new->vm_end = addr + size; 123 + 124 + spin_unlock_irqrestore(&consistent_lock, flags); 125 + return new; 126 + 127 + nospc: 128 + spin_unlock_irqrestore(&consistent_lock, flags); 129 + kfree(new); 130 + out: 131 + return NULL; 132 + } 133 + 134 + static struct ppc_vm_region *ppc_vm_region_find(struct ppc_vm_region *head, unsigned long addr) 135 + { 136 + struct ppc_vm_region *c; 137 + 138 + list_for_each_entry(c, &head->vm_list, vm_list) { 139 + if (c->vm_start == addr) 140 + goto out; 141 + } 142 + c = NULL; 143 + out: 144 + return c; 145 + } 146 + 147 + /* 148 + * Allocate DMA-coherent memory space and return both the kernel remapped 149 + * virtual and bus address for that space. 150 + */ 151 + void * 152 + __dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp) 153 + { 154 + struct page *page; 155 + struct ppc_vm_region *c; 156 + unsigned long order; 157 + u64 mask = ISA_DMA_THRESHOLD, limit; 158 + 159 + if (dev) { 160 + mask = dev->coherent_dma_mask; 161 + 162 + /* 163 + * Sanity check the DMA mask - it must be non-zero, and 164 + * must be able to be satisfied by a DMA allocation. 165 + */ 166 + if (mask == 0) { 167 + dev_warn(dev, "coherent DMA mask is unset\n"); 168 + goto no_page; 169 + } 170 + 171 + if ((~mask) & ISA_DMA_THRESHOLD) { 172 + dev_warn(dev, "coherent DMA mask %#llx is smaller " 173 + "than system GFP_DMA mask %#llx\n", 174 + mask, (unsigned long long)ISA_DMA_THRESHOLD); 175 + goto no_page; 176 + } 177 + } 178 + 179 + 180 + size = PAGE_ALIGN(size); 181 + limit = (mask + 1) & ~mask; 182 + if ((limit && size >= limit) || 183 + size >= (CONSISTENT_END - CONSISTENT_BASE)) { 184 + printk(KERN_WARNING "coherent allocation too big (requested %#x mask %#Lx)\n", 185 + size, mask); 186 + return NULL; 187 + } 188 + 189 + order = get_order(size); 190 + 191 + /* Might be useful if we ever have a real legacy DMA zone... 
*/ 192 + if (mask != 0xffffffff) 193 + gfp |= GFP_DMA; 194 + 195 + page = alloc_pages(gfp, order); 196 + if (!page) 197 + goto no_page; 198 + 199 + /* 200 + * Invalidate any data that might be lurking in the 201 + * kernel direct-mapped region for device DMA. 202 + */ 203 + { 204 + unsigned long kaddr = (unsigned long)page_address(page); 205 + memset(page_address(page), 0, size); 206 + flush_dcache_range(kaddr, kaddr + size); 207 + } 208 + 209 + /* 210 + * Allocate a virtual address in the consistent mapping region. 211 + */ 212 + c = ppc_vm_region_alloc(&consistent_head, size, 213 + gfp & ~(__GFP_DMA | __GFP_HIGHMEM)); 214 + if (c) { 215 + unsigned long vaddr = c->vm_start; 216 + struct page *end = page + (1 << order); 217 + 218 + split_page(page, order); 219 + 220 + /* 221 + * Set the "dma handle" 222 + */ 223 + *handle = page_to_phys(page); 224 + 225 + do { 226 + SetPageReserved(page); 227 + map_page(vaddr, page_to_phys(page), 228 + pgprot_noncached(PAGE_KERNEL)); 229 + page++; 230 + vaddr += PAGE_SIZE; 231 + } while (size -= PAGE_SIZE); 232 + 233 + /* 234 + * Free the otherwise unused pages. 235 + */ 236 + while (page < end) { 237 + __free_page(page); 238 + page++; 239 + } 240 + 241 + return (void *)c->vm_start; 242 + } 243 + 244 + if (page) 245 + __free_pages(page, order); 246 + no_page: 247 + return NULL; 248 + } 249 + EXPORT_SYMBOL(__dma_alloc_coherent); 250 + 251 + /* 252 + * free a page as defined by the above mapping. 
253 + */ 254 + void __dma_free_coherent(size_t size, void *vaddr) 255 + { 256 + struct ppc_vm_region *c; 257 + unsigned long flags, addr; 258 + 259 + size = PAGE_ALIGN(size); 260 + 261 + spin_lock_irqsave(&consistent_lock, flags); 262 + 263 + c = ppc_vm_region_find(&consistent_head, (unsigned long)vaddr); 264 + if (!c) 265 + goto no_area; 266 + 267 + if ((c->vm_end - c->vm_start) != size) { 268 + printk(KERN_ERR "%s: freeing wrong coherent size (%ld != %d)\n", 269 + __func__, c->vm_end - c->vm_start, size); 270 + dump_stack(); 271 + size = c->vm_end - c->vm_start; 272 + } 273 + 274 + addr = c->vm_start; 275 + do { 276 + pte_t *ptep; 277 + unsigned long pfn; 278 + 279 + ptep = pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(addr), 280 + addr), 281 + addr), 282 + addr); 283 + if (!pte_none(*ptep) && pte_present(*ptep)) { 284 + pfn = pte_pfn(*ptep); 285 + pte_clear(&init_mm, addr, ptep); 286 + if (pfn_valid(pfn)) { 287 + struct page *page = pfn_to_page(pfn); 288 + 289 + ClearPageReserved(page); 290 + __free_page(page); 291 + } 292 + } 293 + addr += PAGE_SIZE; 294 + } while (size -= PAGE_SIZE); 295 + 296 + flush_tlb_kernel_range(c->vm_start, c->vm_end); 297 + 298 + list_del(&c->vm_list); 299 + 300 + spin_unlock_irqrestore(&consistent_lock, flags); 301 + 302 + kfree(c); 303 + return; 304 + 305 + no_area: 306 + spin_unlock_irqrestore(&consistent_lock, flags); 307 + printk(KERN_ERR "%s: trying to free invalid coherent area: %p\n", 308 + __func__, vaddr); 309 + dump_stack(); 310 + } 311 + EXPORT_SYMBOL(__dma_free_coherent); 312 + 313 + /* 314 + * make an area consistent. 
315 + */ 316 + void __dma_sync(void *vaddr, size_t size, int direction) 317 + { 318 + unsigned long start = (unsigned long)vaddr; 319 + unsigned long end = start + size; 320 + 321 + switch (direction) { 322 + case DMA_NONE: 323 + BUG(); 324 + case DMA_FROM_DEVICE: 325 + /* 326 + * invalidate only when cache-line aligned otherwise there is 327 + * the potential for discarding uncommitted data from the cache 328 + */ 329 + if ((start & (L1_CACHE_BYTES - 1)) || (size & (L1_CACHE_BYTES - 1))) 330 + flush_dcache_range(start, end); 331 + else 332 + invalidate_dcache_range(start, end); 333 + break; 334 + case DMA_TO_DEVICE: /* writeback only */ 335 + clean_dcache_range(start, end); 336 + break; 337 + case DMA_BIDIRECTIONAL: /* writeback and invalidate */ 338 + flush_dcache_range(start, end); 339 + break; 340 + } 341 + } 342 + EXPORT_SYMBOL(__dma_sync); 343 + 344 + #ifdef CONFIG_HIGHMEM 345 + /* 346 + * __dma_sync_page() implementation for systems using highmem. 347 + * In this case, each page of a buffer must be kmapped/kunmapped 348 + * in order to have a virtual address for __dma_sync(). This must 349 + * not sleep so kmap_atomic()/kunmap_atomic() are used. 350 + * 351 + * Note: yes, it is possible and correct to have a buffer extend 352 + * beyond the first page. 
353 + */ 354 + static inline void __dma_sync_page_highmem(struct page *page, 355 + unsigned long offset, size_t size, int direction) 356 + { 357 + size_t seg_size = min((size_t)(PAGE_SIZE - offset), size); 358 + size_t cur_size = seg_size; 359 + unsigned long flags, start, seg_offset = offset; 360 + int nr_segs = 1 + ((size - seg_size) + PAGE_SIZE - 1)/PAGE_SIZE; 361 + int seg_nr = 0; 362 + 363 + local_irq_save(flags); 364 + 365 + do { 366 + start = (unsigned long)kmap_atomic(page + seg_nr, 367 + KM_PPC_SYNC_PAGE) + seg_offset; 368 + 369 + /* Sync this buffer segment */ 370 + __dma_sync((void *)start, seg_size, direction); 371 + kunmap_atomic((void *)start, KM_PPC_SYNC_PAGE); 372 + seg_nr++; 373 + 374 + /* Calculate next buffer segment size */ 375 + seg_size = min((size_t)PAGE_SIZE, size - cur_size); 376 + 377 + /* Add the segment size to our running total */ 378 + cur_size += seg_size; 379 + seg_offset = 0; 380 + } while (seg_nr < nr_segs); 381 + 382 + local_irq_restore(flags); 383 + } 384 + #endif /* CONFIG_HIGHMEM */ 385 + 386 + /* 387 + * __dma_sync_page makes memory consistent. identical to __dma_sync, but 388 + * takes a struct page instead of a virtual address 389 + */ 390 + void __dma_sync_page(struct page *page, unsigned long offset, 391 + size_t size, int direction) 392 + { 393 + #ifdef CONFIG_HIGHMEM 394 + __dma_sync_page_highmem(page, offset, size, direction); 395 + #else 396 + unsigned long start = (unsigned long)page_address(page) + offset; 397 + __dma_sync((void *)start, size, direction); 398 + #endif 399 + } 400 + EXPORT_SYMBOL(__dma_sync_page);
+2 -6
arch/powerpc/mm/init_32.c
··· 168 168 ppc_md.progress("MMU:mapin", 0x301); 169 169 mapin_ram(); 170 170 171 - #ifdef CONFIG_HIGHMEM 172 - ioremap_base = PKMAP_BASE; 173 - #else 174 - ioremap_base = 0xfe000000UL; /* for now, could be 0xfffff000 */ 175 - #endif /* CONFIG_HIGHMEM */ 176 - ioremap_bot = ioremap_base; 171 + /* Initialize early top-down ioremap allocator */ 172 + ioremap_bot = IOREMAP_TOP; 177 173 178 174 /* Map in I/O resources */ 179 175 if (ppc_md.progress)
+17
arch/powerpc/mm/mem.c
··· 380 380 bsssize >> 10, 381 381 initsize >> 10); 382 382 383 + #ifdef CONFIG_PPC32 384 + pr_info("Kernel virtual memory layout:\n"); 385 + pr_info(" * 0x%08lx..0x%08lx : fixmap\n", FIXADDR_START, FIXADDR_TOP); 386 + #ifdef CONFIG_HIGHMEM 387 + pr_info(" * 0x%08lx..0x%08lx : highmem PTEs\n", 388 + PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP)); 389 + #endif /* CONFIG_HIGHMEM */ 390 + #ifdef CONFIG_NOT_COHERENT_CACHE 391 + pr_info(" * 0x%08lx..0x%08lx : consistent mem\n", 392 + IOREMAP_TOP, IOREMAP_TOP + CONFIG_CONSISTENT_SIZE); 393 + #endif /* CONFIG_NOT_COHERENT_CACHE */ 394 + pr_info(" * 0x%08lx..0x%08lx : early ioremap\n", 395 + ioremap_bot, IOREMAP_TOP); 396 + pr_info(" * 0x%08lx..0x%08lx : vmalloc & ioremap\n", 397 + VMALLOC_START, VMALLOC_END); 398 + #endif /* CONFIG_PPC32 */ 399 + 383 400 mem_init_done = 1; 384 401 } 385 402
+3 -3
arch/powerpc/mm/mmu_context_nohash.c
··· 127 127 128 128 pr_debug("[%d] steal context %d from mm @%p\n", cpu, id, mm); 129 129 130 - /* Mark this mm has having no context anymore */ 131 - mm->context.id = MMU_NO_CONTEXT; 132 - 133 130 /* Flush the TLB for that context */ 134 131 local_flush_tlb_mm(mm); 132 + 133 + /* Mark this mm as having no context anymore */ 134 + mm->context.id = MMU_NO_CONTEXT; 135 135 136 136 /* XXX This clear should ultimately be part of local_flush_tlb_mm */ 137 137 __clear_bit(id, stale_map[cpu]);
-2
arch/powerpc/mm/pgtable_32.c
··· 399 399 #endif /* CONFIG_DEBUG_PAGEALLOC */ 400 400 401 401 static int fixmaps; 402 - unsigned long FIXADDR_TOP = (-PAGE_SIZE); 403 - EXPORT_SYMBOL(FIXADDR_TOP); 404 402 405 403 void __set_fixmap (enum fixed_addresses idx, phys_addr_t phys, pgprot_t flags) 406 404 {
-1
arch/sh/include/asm/flat.h
··· 12 12 #ifndef __ASM_SH_FLAT_H 13 13 #define __ASM_SH_FLAT_H 14 14 15 - #define flat_stack_align(sp) /* nothing needed */ 16 15 #define flat_argvp_envp_on_stack() 0 17 16 #define flat_old_ram_flag(flags) (flags) 18 17 #define flat_reloc_valid(reloc, size) ((reloc) <= (size))
+3 -2
arch/sparc/include/asm/elf_64.h
··· 208 208 else \ 209 209 clear_thread_flag(TIF_ABI_PENDING); \ 210 210 /* flush_thread will update pgd cache */ \ 211 - if (current->personality != PER_LINUX32) \ 212 - set_personality(PER_LINUX); \ 211 + if (personality(current->personality) != PER_LINUX32) \ 212 + set_personality(PER_LINUX | \ 213 + (current->personality & (~PER_MASK))); \ 213 214 } while (0) 214 215 215 216 #endif /* !(__ASM_SPARC64_ELF_H) */
+1 -1
arch/sparc/lib/csum_copy_from_user.S
··· 5 5 6 6 #define EX_LD(x) \ 7 7 98: x; \ 8 - .section .fixup; \ 8 + .section .fixup, "ax"; \ 9 9 .align 4; \ 10 10 99: retl; \ 11 11 mov -1, %o0; \
+1 -1
arch/sparc/lib/csum_copy_to_user.S
··· 5 5 6 6 #define EX_ST(x) \ 7 7 98: x; \ 8 - .section .fixup; \ 8 + .section .fixup,"ax"; \ 9 9 .align 4; \ 10 10 99: retl; \ 11 11 mov -1, %o0; \
+5 -2
arch/x86/boot/compressed/relocs.c
··· 504 504 if (sym->st_shndx == SHN_ABS) { 505 505 continue; 506 506 } 507 - if (r_type == R_386_PC32) { 508 - /* PC relative relocations don't need to be adjusted */ 507 + if (r_type == R_386_NONE || r_type == R_386_PC32) { 508 + /* 509 + * NONE can be ignored and PC relative 510 + * relocations don't need to be adjusted. 511 + */ 509 512 } 510 513 else if (r_type == R_386_32) { 511 514 /* Visit relocations that need to be adjusted */
+13 -16
arch/x86/boot/memory.c
··· 17 17 18 18 #define SMAP 0x534d4150 /* ASCII "SMAP" */ 19 19 20 - struct e820_ext_entry { 21 - struct e820entry std; 22 - u32 ext_flags; 23 - } __attribute__((packed)); 24 - 25 20 static int detect_memory_e820(void) 26 21 { 27 22 int count = 0; ··· 24 29 u32 size, id, edi; 25 30 u8 err; 26 31 struct e820entry *desc = boot_params.e820_map; 27 - static struct e820_ext_entry buf; /* static so it is zeroed */ 32 + static struct e820entry buf; /* static so it is zeroed */ 28 33 29 34 /* 30 - * Set this here so that if the BIOS doesn't change this field 31 - * but still doesn't change %ecx, we're still okay... 35 + * Note: at least one BIOS is known which assumes that the 36 + * buffer pointed to by one e820 call is the same one as 37 + * the previous call, and only changes modified fields. Therefore, 38 + * we use a temporary buffer and copy the results entry by entry. 39 + * 40 + * This routine deliberately does not try to account for 41 + * ACPI 3+ extended attributes. This is because there are 42 + * BIOSes in the field which report zero for the valid bit for 43 + * all ranges, and we don't currently make any use of the 44 + * other attribute bits. Revisit this if we see the extended 45 + * attribute bits deployed in a meaningful way in the future. 32 46 */ 33 - buf.ext_flags = 1; 34 47 35 48 do { 36 49 size = sizeof buf; ··· 69 66 break; 70 67 } 71 68 72 - /* ACPI 3.0 added the extended flags support. If bit 0 73 - in the extended flags is zero, we're supposed to simply 74 - ignore the entry -- a backwards incompatible change! */ 75 - if (size > 20 && !(buf.ext_flags & 1)) 76 - continue; 77 - 78 - *desc++ = buf.std; 69 + *desc++ = buf; 79 70 count++; 80 71 } while (next && count < ARRAY_SIZE(boot_params.e820_map)); 81 72
+7
arch/x86/kernel/cpu/common.c
··· 114 114 } }; 115 115 EXPORT_PER_CPU_SYMBOL_GPL(gdt_page); 116 116 117 + static int __init x86_xsave_setup(char *s) 118 + { 119 + setup_clear_cpu_cap(X86_FEATURE_XSAVE); 120 + return 1; 121 + } 122 + __setup("noxsave", x86_xsave_setup); 123 + 117 124 #ifdef CONFIG_X86_32 118 125 static int cachesize_override __cpuinitdata = -1; 119 126 static int disable_x86_serial_nr __cpuinitdata = 1;
+2 -2
arch/x86/kernel/cpu/cpufreq/acpi-cpufreq.c
··· 693 693 if (perf->control_register.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE && 694 694 policy->cpuinfo.transition_latency > 20 * 1000) { 695 695 policy->cpuinfo.transition_latency = 20 * 1000; 696 - printk_once(KERN_INFO "Capping off P-state tranision" 697 - " latency at 20 uS\n"); 696 + printk_once(KERN_INFO 697 + "P-state transition latency capped at 20 uS\n"); 698 698 } 699 699 700 700 /* table init */
+1
arch/x86/kernel/cpu/cpufreq/p4-clockmod.c
··· 168 168 case 0x0E: /* Core */ 169 169 case 0x0F: /* Core Duo */ 170 170 case 0x16: /* Celeron Core */ 171 + case 0x1C: /* Atom */ 171 172 p4clockmod_driver.flags |= CPUFREQ_CONST_LOOPS; 172 173 return speedstep_get_frequency(SPEEDSTEP_CPU_PCORE); 173 174 case 0x0D: /* Pentium M (Dothan) */
+2
arch/x86/kernel/cpu/cpufreq/powernow-k7.c
··· 168 168 return 1; 169 169 } 170 170 171 + #ifdef CONFIG_X86_POWERNOW_K7_ACPI 171 172 static void invalidate_entry(unsigned int entry) 172 173 { 173 174 powernow_table[entry].frequency = CPUFREQ_ENTRY_INVALID; 174 175 } 176 + #endif 175 177 176 178 static int get_ranges(unsigned char *pst) 177 179 {
+26 -16
arch/x86/kernel/cpu/cpufreq/powernow-k8.c
··· 649 649 data->batps); 650 650 } 651 651 652 + static u32 freq_from_fid_did(u32 fid, u32 did) 653 + { 654 + u32 mhz = 0; 655 + 656 + if (boot_cpu_data.x86 == 0x10) 657 + mhz = (100 * (fid + 0x10)) >> did; 658 + else if (boot_cpu_data.x86 == 0x11) 659 + mhz = (100 * (fid + 8)) >> did; 660 + else 661 + BUG(); 662 + 663 + return mhz * 1000; 664 + } 665 + 652 666 static int fill_powernow_table(struct powernow_k8_data *data, 653 667 struct pst_s *pst, u8 maxvid) 654 668 { ··· 937 923 938 924 powernow_table[i].index = index; 939 925 940 - powernow_table[i].frequency = 941 - data->acpi_data.states[i].core_frequency * 1000; 926 + /* Frequency may be rounded for these */ 927 + if (boot_cpu_data.x86 == 0x10 || boot_cpu_data.x86 == 0x11) { 928 + powernow_table[i].frequency = 929 + freq_from_fid_did(lo & 0x3f, (lo >> 6) & 7); 930 + } else 931 + powernow_table[i].frequency = 932 + data->acpi_data.states[i].core_frequency * 1000; 942 933 } 943 934 return 0; 944 935 } ··· 1234 1215 return cpufreq_frequency_table_verify(pol, data->powernow_table); 1235 1216 } 1236 1217 1218 + static const char ACPI_PSS_BIOS_BUG_MSG[] = 1219 + KERN_ERR FW_BUG PFX "No compatible ACPI _PSS objects found.\n" 1220 + KERN_ERR FW_BUG PFX "Try again with latest BIOS.\n"; 1221 + 1237 1222 /* per CPU init entry point to the driver */ 1238 1223 static int __cpuinit powernowk8_cpu_init(struct cpufreq_policy *pol) 1239 1224 { 1240 1225 struct powernow_k8_data *data; 1241 1226 cpumask_t oldmask; 1242 1227 int rc; 1243 - static int print_once; 1244 1228 1245 1229 if (!cpu_online(pol->cpu)) 1246 1230 return -ENODEV; ··· 1266 1244 * an UP version, and is deprecated by AMD. 1267 1245 */ 1268 1246 if (num_online_cpus() != 1) { 1269 - /* 1270 - * Replace this one with print_once as soon as such a 1271 - * thing gets introduced 1272 - */ 1273 - if (!print_once) { 1274 - WARN_ONCE(1, KERN_ERR FW_BUG PFX "Your BIOS " 1275 - "does not provide ACPI _PSS objects " 1276 - "in a way that Linux understands. " 1277 - "Please report this to the Linux ACPI" 1278 - " maintainers and complain to your " 1279 - "BIOS vendor.\n"); 1280 - print_once++; 1281 - } 1247 + printk_once(ACPI_PSS_BIOS_BUG_MSG); 1282 1248 goto err_out; 1283 1249 } 1284 1250 if (pol->cpu != 0) {
+8
arch/x86/kernel/reboot.c
··· 232 232 DMI_MATCH(DMI_PRODUCT_NAME, "Dell DXP061"), 233 233 }, 234 234 }, 235 + { /* Handle problems with rebooting on Sony VGN-Z540N */ 236 + .callback = set_bios_reboot, 237 + .ident = "Sony VGN-Z540N", 238 + .matches = { 239 + DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"), 240 + DMI_MATCH(DMI_PRODUCT_NAME, "VGN-Z540N"), 241 + }, 242 + }, 235 243 { } 236 244 }; 237 245
+3 -1
arch/x86/kernel/setup_percpu.c
··· 160 160 /* 161 161 * If large page isn't supported, there's no benefit in doing 162 162 * this. Also, on non-NUMA, embedding is better. 163 + * 164 + * NOTE: disabled for now. 163 165 */ 164 - if (!cpu_has_pse || !pcpu_need_numa()) 166 + if (true || !cpu_has_pse || !pcpu_need_numa()) 165 167 return -EINVAL; 166 168 167 169 /*
+1 -2
arch/x86/kvm/mmu.c
··· 2897 2897 2898 2898 static int kvm_pv_mmu_flush_tlb(struct kvm_vcpu *vcpu) 2899 2899 { 2900 - kvm_x86_ops->tlb_flush(vcpu); 2901 - set_bit(KVM_REQ_MMU_SYNC, &vcpu->requests); 2900 + kvm_set_cr3(vcpu, vcpu->arch.cr3); 2902 2901 return 1; 2903 2902 } 2904 2903
+5 -1
arch/x86/kvm/x86.c
··· 338 338 339 339 void kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4) 340 340 { 341 + unsigned long old_cr4 = vcpu->arch.cr4; 342 + unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE; 343 + 341 344 if (cr4 & CR4_RESERVED_BITS) { 342 345 printk(KERN_DEBUG "set_cr4: #GP, reserved bits\n"); 343 346 kvm_inject_gp(vcpu, 0); ··· 354 351 kvm_inject_gp(vcpu, 0); 355 352 return; 356 353 } 357 - } else if (is_paging(vcpu) && !is_pae(vcpu) && (cr4 & X86_CR4_PAE) 354 + } else if (is_paging(vcpu) && (cr4 & X86_CR4_PAE) 355 + && ((cr4 ^ old_cr4) & pdptr_bits) 358 356 && !load_pdptrs(vcpu, vcpu->arch.cr3)) { 359 357 printk(KERN_DEBUG "set_cr4: #GP, pdptrs reserved bits\n"); 360 358 kvm_inject_gp(vcpu, 0);
+5 -1
arch/x86/mm/hugetlbpage.c
··· 26 26 unsigned long sbase = saddr & PUD_MASK; 27 27 unsigned long s_end = sbase + PUD_SIZE; 28 28 29 + /* Allow segments to share if only one is marked locked */ 30 + unsigned long vm_flags = vma->vm_flags & ~VM_LOCKED; 31 + unsigned long svm_flags = svma->vm_flags & ~VM_LOCKED; 32 + 29 33 /* 30 34 * match the virtual addresses, permission and the alignment of the 31 35 * page table page. 32 36 */ 33 37 if (pmd_index(addr) != pmd_index(saddr) || 34 - vma->vm_flags != svma->vm_flags || 38 + vm_flags != svm_flags || 35 39 sbase < svma->vm_start || svma->vm_end < s_end) 36 40 return 0; 37 41
+4 -9
arch/x86/mm/pageattr.c
··· 153 153 */ 154 154 __flush_tlb_all(); 155 155 156 - if (cache && boot_cpu_data.x86_model >= 4) 156 + if (cache && boot_cpu_data.x86 >= 4) 157 157 wbinvd(); 158 158 } 159 159 ··· 208 208 int in_flags, struct page **pages) 209 209 { 210 210 unsigned int i, level; 211 + unsigned long do_wbinvd = cache && numpages >= 1024; /* 4M threshold */ 211 212 212 213 BUG_ON(irqs_disabled()); 213 214 214 - on_each_cpu(__cpa_flush_range, NULL, 1); 215 + on_each_cpu(__cpa_flush_all, (void *) do_wbinvd, 1); 215 216 216 - if (!cache) 217 + if (!cache || do_wbinvd) 217 218 return; 218 219 219 - /* 4M threshold */ 220 - if (numpages >= 1024) { 221 - if (boot_cpu_data.x86_model >= 4) 222 - wbinvd(); 223 - return; 224 - } 225 220 /* 226 221 * We only need to flush on one CPU, 227 222 * clflush is a MESI-coherent instruction that
+4 -3
crypto/ahash.c
··· 82 82 if (err) 83 83 return err; 84 84 85 - walk->offset = 0; 86 - 87 - if (nbytes) 85 + if (nbytes) { 86 + walk->offset = 0; 87 + walk->pg++; 88 88 return hash_walk_next(walk); 89 + } 89 90 90 91 if (!walk->total) 91 92 return 0;
+6 -18
drivers/acpi/pci_bind.c
··· 116 116 struct acpi_pci_data *pdata; 117 117 struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL }; 118 118 acpi_handle handle; 119 - struct pci_dev *dev; 120 - struct pci_bus *bus; 121 - 122 119 123 120 if (!device || !device->parent) 124 121 return -EINVAL; ··· 173 176 * Locate matching device in PCI namespace. If it doesn't exist 174 177 * this typically means that the device isn't currently inserted 175 178 * (e.g. docking station, port replicator, etc.). 176 - * We cannot simply search the global pci device list, since 177 - * PCI devices are added to the global pci list when the root 178 - * bridge start ops are run, which may not have happened yet. 179 179 */ 180 - bus = pci_find_bus(data->id.segment, data->id.bus); 181 - if (bus) { 182 - list_for_each_entry(dev, &bus->devices, bus_list) { 183 - if (dev->devfn == PCI_DEVFN(data->id.device, 184 - data->id.function)) { 185 - data->dev = dev; 186 - break; 187 - } 188 - } 189 - } 180 + data->dev = pci_get_slot(pdata->bus, 181 + PCI_DEVFN(data->id.device, data->id.function)); 190 182 if (!data->dev) { 191 183 ACPI_DEBUG_PRINT((ACPI_DB_INFO, 192 184 "Device %04x:%02x:%02x.%d not present in PCI namespace\n", ··· 245 259 246 260 end: 247 261 kfree(buffer.pointer); 248 - if (result) 262 + if (result) { 263 + pci_dev_put(data->dev); 249 264 kfree(data); 250 - 265 + } 251 266 return result; 252 267 } 253 268 ··· 290 303 if (data->dev->subordinate) { 291 304 acpi_pci_irq_del_prt(data->id.segment, data->bus->number); 292 305 } 306 + pci_dev_put(data->dev); 293 307 kfree(data); 294 308 295 309 end:
+7 -1
drivers/acpi/processor_idle.c
··· 148 148 if (cpu_has(&cpu_data(pr->id), X86_FEATURE_ARAT)) 149 149 return; 150 150 151 + if (boot_cpu_has(X86_FEATURE_AMDC1E)) 152 + type = ACPI_STATE_C1; 153 + 151 154 /* 152 155 * Check, if one of the previous states already marked the lapic 153 156 * unstable ··· 614 611 switch (cx->type) { 615 612 case ACPI_STATE_C1: 616 613 cx->valid = 1; 614 + acpi_timer_check_state(i, pr, cx); 617 615 break; 618 616 619 617 case ACPI_STATE_C2: ··· 834 830 835 831 /* Do not access any ACPI IO ports in suspend path */ 836 832 if (acpi_idle_suspend) { 837 - acpi_safe_halt(); 838 833 local_irq_enable(); 834 + cpu_relax(); 839 835 return 0; 840 836 } 841 837 838 + acpi_state_timer_broadcast(pr, cx, 1); 842 839 kt1 = ktime_get_real(); 843 840 acpi_idle_do_entry(cx); 844 841 kt2 = ktime_get_real(); ··· 847 842 848 843 local_irq_enable(); 849 844 cx->usage++; 845 + acpi_state_timer_broadcast(pr, cx, 0); 850 846 851 847 return idle_time; 852 848 }
+9 -3
drivers/acpi/processor_perflib.c
··· 309 309 (u32) px->bus_master_latency, 310 310 (u32) px->control, (u32) px->status)); 311 311 312 - if (!px->core_frequency) { 313 - printk(KERN_ERR PREFIX 314 - "Invalid _PSS data: freq is zero\n"); 312 + /* 313 + * Check that ACPI's u64 MHz will be valid as u32 KHz in cpufreq 314 + */ 315 + if (!px->core_frequency || 316 + ((u32)(px->core_frequency * 1000) != 317 + (px->core_frequency * 1000))) { 318 + printk(KERN_ERR FW_BUG PREFIX 319 + "Invalid BIOS _PSS frequency: 0x%llx MHz\n", 320 + px->core_frequency); 315 321 result = -EFAULT; 316 322 kfree(pr->performance->states); 317 323 goto end;
+1 -1
drivers/acpi/processor_throttling.c
··· 840 840 state = acpi_get_throttling_state(pr, value); 841 841 if (state == -1) { 842 842 ACPI_WARNING((AE_INFO, 843 - "Invalid throttling state, reset\n")); 843 + "Invalid throttling state, reset")); 844 844 state = 0; 845 845 ret = acpi_processor_set_throttling(pr, state); 846 846 if (ret)
+17 -1
drivers/acpi/video.c
··· 570 570 DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5710Z"), 571 571 }, 572 572 }, 573 + { 574 + .callback = video_set_bqc_offset, 575 + .ident = "eMachines E510", 576 + .matches = { 577 + DMI_MATCH(DMI_BOARD_VENDOR, "EMACHINES"), 578 + DMI_MATCH(DMI_PRODUCT_NAME, "eMachines E510"), 579 + }, 580 + }, 581 + { 582 + .callback = video_set_bqc_offset, 583 + .ident = "Acer Aspire 5315", 584 + .matches = { 585 + DMI_MATCH(DMI_BOARD_VENDOR, "Acer"), 586 + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5315"), 587 + }, 588 + }, 573 589 {} 574 590 }; 575 591 ··· 2350 2334 return acpi_video_register(); 2351 2335 } 2352 2336 2353 - void __exit acpi_video_exit(void) 2337 + void acpi_video_exit(void) 2354 2338 { 2355 2339 2356 2340 acpi_bus_unregister_driver(&acpi_video_bus);
+12 -1
drivers/ata/pata_netcell.c
··· 20 20 21 21 /* No PIO or DMA methods needed for this device */ 22 22 23 + static unsigned int netcell_read_id(struct ata_device *adev, 24 + struct ata_taskfile *tf, u16 *id) 25 + { 26 + unsigned int err_mask = ata_do_dev_read_id(adev, tf, id); 27 + /* Firmware forgets to mark words 85-87 valid */ 28 + if (err_mask == 0) 29 + id[ATA_ID_CSF_DEFAULT] |= 0x0400; 30 + return err_mask; 31 + } 32 + 23 33 static struct scsi_host_template netcell_sht = { 24 34 ATA_BMDMA_SHT(DRV_NAME), 25 35 }; 26 36 27 37 static struct ata_port_operations netcell_ops = { 28 38 .inherits = &ata_bmdma_port_ops, 29 - .cable_detect = ata_cable_80wire, 39 + .cable_detect = ata_cable_80wire, 40 + .read_id = netcell_read_id, 30 41 }; 31 42 32 43
+3 -1
drivers/base/bus.c
··· 700 700 } 701 701 702 702 kobject_uevent(&priv->kobj, KOBJ_ADD); 703 - return error; 703 + return 0; 704 704 out_unregister: 705 + kfree(drv->p); 706 + drv->p = NULL; 705 707 kobject_put(&priv->kobj); 706 708 out_put_bus: 707 709 bus_put(bus);
+4 -1
drivers/base/core.c
··· 879 879 } 880 880 881 881 if (!dev_name(dev)) 882 - goto done; 882 + goto name_error; 883 883 884 884 pr_debug("device: '%s': %s\n", dev_name(dev), __func__); 885 885 ··· 978 978 cleanup_device_parent(dev); 979 979 if (parent) 980 980 put_device(parent); 981 + name_error: 982 + kfree(dev->p); 983 + dev->p = NULL; 981 984 goto done; 982 985 } 983 986
+4
drivers/base/driver.c
··· 257 257 */ 258 258 void driver_unregister(struct device_driver *drv) 259 259 { 260 + if (!drv || !drv->p) { 261 + WARN(1, "Unexpected driver unregister!\n"); 262 + return; 263 + } 260 264 driver_remove_groups(drv, drv->groups); 261 265 bus_remove_driver(drv); 262 266 }
+4
drivers/base/power/main.c
··· 357 357 { 358 358 struct device *dev; 359 359 360 + mutex_lock(&dpm_list_mtx); 360 361 list_for_each_entry(dev, &dpm_list, power.entry) 361 362 if (dev->power.status > DPM_OFF) { 362 363 int error; ··· 367 366 if (error) 368 367 pm_dev_err(dev, state, " early", error); 369 368 } 369 + mutex_unlock(&dpm_list_mtx); 370 370 } 371 371 372 372 /** ··· 616 614 int error = 0; 617 615 618 616 suspend_device_irqs(); 617 + mutex_lock(&dpm_list_mtx); 619 618 list_for_each_entry_reverse(dev, &dpm_list, power.entry) { 620 619 error = suspend_device_noirq(dev, state); 621 620 if (error) { ··· 625 622 } 626 623 dev->power.status = DPM_OFF_IRQ; 627 624 } 625 + mutex_unlock(&dpm_list_mtx); 628 626 if (error) 629 627 device_power_up(resume_event(state)); 630 628 return error;
+2 -2
drivers/cpufreq/cpufreq.c
··· 1070 1070 spin_unlock_irqrestore(&cpufreq_driver_lock, flags); 1071 1071 #endif 1072 1072 1073 + unlock_policy_rwsem_write(cpu); 1074 + 1073 1075 if (cpufreq_driver->target) 1074 1076 __cpufreq_governor(data, CPUFREQ_GOV_STOP); 1075 - 1076 - unlock_policy_rwsem_write(cpu); 1077 1077 1078 1078 kobject_put(&data->kobj); 1079 1079
+4 -1
drivers/cpufreq/cpufreq_conservative.c
··· 91 91 * (like __cpufreq_driver_target()) is being called with dbs_mutex taken, then 92 92 * cpu_hotplug lock should be taken before that. Note that cpu_hotplug lock 93 93 * is recursive for the same process. -Venki 94 + * DEADLOCK ALERT! (2) : do_dbs_timer() must not take the dbs_mutex, because it 95 + * would deadlock with cancel_delayed_work_sync(), which is needed for proper 96 + * raceless workqueue teardown. 94 97 */ 95 98 static DEFINE_MUTEX(dbs_mutex); 96 99 ··· 545 542 static inline void dbs_timer_exit(struct cpu_dbs_info_s *dbs_info) 546 543 { 547 544 dbs_info->enable = 0; 548 - cancel_delayed_work(&dbs_info->work); 545 + cancel_delayed_work_sync(&dbs_info->work); 549 546 } 550 547 551 548 static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
+4 -1
drivers/cpufreq/cpufreq_ondemand.c
··· 98 98 * (like __cpufreq_driver_target()) is being called with dbs_mutex taken, then 99 99 * cpu_hotplug lock should be taken before that. Note that cpu_hotplug lock 100 100 * is recursive for the same process. -Venki 101 + * DEADLOCK ALERT! (2) : do_dbs_timer() must not take the dbs_mutex, because it 102 + * would deadlock with cancel_delayed_work_sync(), which is needed for proper 103 + * raceless workqueue teardown. 101 104 */ 102 105 static DEFINE_MUTEX(dbs_mutex); 103 106 ··· 565 562 static inline void dbs_timer_exit(struct cpu_dbs_info_s *dbs_info) 566 563 { 567 564 dbs_info->enable = 0; 568 - cancel_delayed_work(&dbs_info->work); 565 + cancel_delayed_work_sync(&dbs_info->work); 569 566 } 570 567 571 568 static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
+47 -24
drivers/dma/fsldma.c
··· 179 179 static void set_ld_eol(struct fsl_dma_chan *fsl_chan, 180 180 struct fsl_desc_sw *desc) 181 181 { 182 + u64 snoop_bits; 183 + 184 + snoop_bits = ((fsl_chan->feature & FSL_DMA_IP_MASK) == FSL_DMA_IP_83XX) 185 + ? FSL_DMA_SNEN : 0; 186 + 182 187 desc->hw.next_ln_addr = CPU_TO_DMA(fsl_chan, 183 - DMA_TO_CPU(fsl_chan, desc->hw.next_ln_addr, 64) | FSL_DMA_EOL, 184 - 64); 188 + DMA_TO_CPU(fsl_chan, desc->hw.next_ln_addr, 64) | FSL_DMA_EOL 189 + | snoop_bits, 64); 185 190 } 186 191 187 192 static void append_ld_queue(struct fsl_dma_chan *fsl_chan, ··· 318 313 319 314 static dma_cookie_t fsl_dma_tx_submit(struct dma_async_tx_descriptor *tx) 320 315 { 321 - struct fsl_desc_sw *desc = tx_to_fsl_desc(tx); 322 316 struct fsl_dma_chan *fsl_chan = to_fsl_chan(tx->chan); 317 + struct fsl_desc_sw *desc; 323 318 unsigned long flags; 324 319 dma_cookie_t cookie; 325 320 ··· 327 322 spin_lock_irqsave(&fsl_chan->desc_lock, flags); 328 323 329 324 cookie = fsl_chan->common.cookie; 330 - cookie++; 331 - if (cookie < 0) 332 - cookie = 1; 333 - desc->async_tx.cookie = cookie; 334 - fsl_chan->common.cookie = desc->async_tx.cookie; 325 + list_for_each_entry(desc, &tx->tx_list, node) { 326 + cookie++; 327 + if (cookie < 0) 328 + cookie = 1; 335 329 336 - append_ld_queue(fsl_chan, desc); 337 - list_splice_init(&desc->async_tx.tx_list, fsl_chan->ld_queue.prev); 330 + desc->async_tx.cookie = cookie; 331 + } 332 + 333 + fsl_chan->common.cookie = cookie; 334 + append_ld_queue(fsl_chan, tx_to_fsl_desc(tx)); 335 + list_splice_init(&tx->tx_list, fsl_chan->ld_queue.prev); 338 336 339 337 spin_unlock_irqrestore(&fsl_chan->desc_lock, flags); 340 338 ··· 462 454 { 463 455 struct fsl_dma_chan *fsl_chan; 464 456 struct fsl_desc_sw *first = NULL, *prev = NULL, *new; 457 + struct list_head *list; 465 458 size_t copy; 466 - LIST_HEAD(link_chain); 467 459 468 460 if (!chan) 469 461 return NULL; ··· 480 472 if (!new) { 481 473 dev_err(fsl_chan->dev, 482 474 "No free memory for link descriptor\n"); 483 - return NULL; 475 + goto fail; 484 476 } 485 477 #ifdef FSL_DMA_LD_DEBUG 486 478 dev_dbg(fsl_chan->dev, "new link desc alloc %p\n", new); ··· 515 507 /* Set End-of-link to the last link descriptor of new list*/ 516 508 set_ld_eol(fsl_chan, new); 517 509 518 - return first ? &first->async_tx : NULL; 510 + return &first->async_tx; 511 + 512 + fail: 513 + if (!first) 514 + return NULL; 515 + 516 + list = &first->async_tx.tx_list; 517 + list_for_each_entry_safe_reverse(new, prev, list, node) { 518 + list_del(&new->node); 519 + dma_pool_free(fsl_chan->desc_pool, new, new->async_tx.phys); 520 + } 521 + 522 + return NULL; 519 523 } 520 524 521 525 /** ··· 618 598 dma_addr_t next_dest_addr; 619 599 unsigned long flags; 620 600 601 + spin_lock_irqsave(&fsl_chan->desc_lock, flags); 602 + 621 603 if (!dma_is_idle(fsl_chan)) 622 - return; 604 + goto out_unlock; 623 605 624 606 dma_halt(fsl_chan); 625 607 626 608 /* If there are some link descriptors 627 609 * not transfered in queue. We need to start it. 628 610 */ 629 - spin_lock_irqsave(&fsl_chan->desc_lock, flags); 630 611 631 612 /* Find the first un-transfer desciptor */ 632 613 for (ld_node = fsl_chan->ld_queue.next; ··· 638 617 fsl_chan->common.cookie) == DMA_SUCCESS); 639 618 ld_node = ld_node->next); 640 619 641 - spin_unlock_irqrestore(&fsl_chan->desc_lock, flags); 642 - 643 620 if (ld_node != &fsl_chan->ld_queue) { 644 621 /* Get the ld start address from ld_queue */ 645 622 next_dest_addr = to_fsl_desc(ld_node)->async_tx.phys; 646 - dev_dbg(fsl_chan->dev, "xfer LDs staring from %p\n", 647 - (void *)next_dest_addr); 623 + dev_dbg(fsl_chan->dev, "xfer LDs staring from 0x%llx\n", 624 + (unsigned long long)next_dest_addr); 648 625 set_cdar(fsl_chan, next_dest_addr); 649 626 dma_start(fsl_chan); 650 627 } else { 651 628 set_cdar(fsl_chan, 0); 652 629 set_ndar(fsl_chan, 0); 653 630 } 631 + 632 + out_unlock: 633 + spin_unlock_irqrestore(&fsl_chan->desc_lock, flags); 654 634 } 655 635 ··· 756 734 */ 757 735 if (stat & FSL_DMA_SR_EOSI) { 758 736 dev_dbg(fsl_chan->dev, "event: End-of-segments INT\n"); 759 - dev_dbg(fsl_chan->dev, "event: clndar %p, nlndar %p\n", 760 - (void *)get_cdar(fsl_chan), (void *)get_ndar(fsl_chan)); 737 + dev_dbg(fsl_chan->dev, "event: clndar 0x%llx, nlndar 0x%llx\n", 738 + (unsigned long long)get_cdar(fsl_chan), 739 + (unsigned long long)get_ndar(fsl_chan)); 761 740 stat &= ~FSL_DMA_SR_EOSI; 762 741 update_cookie = 1; 763 742 } ··· 853 830 new_fsl_chan->reg.end - new_fsl_chan->reg.start + 1); 854 831 855 832 new_fsl_chan->id = ((new_fsl_chan->reg.start - 0x100) & 0xfff) >> 7; 856 - if (new_fsl_chan->id > FSL_DMA_MAX_CHANS_PER_DEVICE) { 833 + if (new_fsl_chan->id >= FSL_DMA_MAX_CHANS_PER_DEVICE) { 857 834 dev_err(fdev->dev, "There is no %d channel!\n", 858 835 new_fsl_chan->id); 859 836 err = -EINVAL; ··· 948 925 } 949 926 950 927 dev_info(&dev->dev, "Probe the Freescale DMA driver for %s " 951 - "controller at %p...\n", 952 - match->compatible, (void *)fdev->reg.start); 928 + "controller at 0x%llx...\n", 929 + match->compatible, (unsigned long long)fdev->reg.start); 953 930 fdev->reg_base = ioremap(fdev->reg.start, fdev->reg.end 954 931 - fdev->reg.start + 1); 955 932
+1 -1
drivers/dma/ioat_dma.c
··· 173 173 xfercap = (xfercap_scale == 0 ? -1 : (1UL << xfercap_scale)); 174 174 175 175 #ifdef CONFIG_I7300_IDLE_IOAT_CHANNEL 176 - if (i7300_idle_platform_probe(NULL, NULL) == 0) { 176 + if (i7300_idle_platform_probe(NULL, NULL, 1) == 0) { 177 177 device->common.chancnt--; 178 178 } 179 179 #endif
+6 -2
drivers/edac/Kconfig
··· 192 192 193 193 config EDAC_AMD8131 194 194 tristate "AMD8131 HyperTransport PCI-X Tunnel" 195 - depends on EDAC_MM_EDAC && PCI 195 + depends on EDAC_MM_EDAC && PCI && PPC_MAPLE 196 196 help 197 197 Support for error detection and correction on the 198 198 AMD8131 HyperTransport PCI-X Tunnel chip. 199 + Note, add more Kconfig dependency if it's adopted 200 + on some machine other than Maple. 199 201 200 202 config EDAC_AMD8111 201 203 tristate "AMD8111 HyperTransport I/O Hub" 202 - depends on EDAC_MM_EDAC && PCI 204 + depends on EDAC_MM_EDAC && PCI && PPC_MAPLE 203 205 help 204 206 Support for error detection and correction on the 205 207 AMD8111 HyperTransport I/O Hub chip. 208 + Note, add more Kconfig dependency if it's adopted 209 + on some machine other than Maple. 206 210 207 211 endif # EDAC
+2
drivers/edac/Makefile
··· 35 35 obj-$(CONFIG_EDAC_MV64X60) += mv64x60_edac.o 36 36 obj-$(CONFIG_EDAC_CELL) += cell_edac.o 37 37 obj-$(CONFIG_EDAC_PPC4XX) += ppc4xx_edac.o 38 + obj-$(CONFIG_EDAC_AMD8111) += amd8111_edac.o 39 + obj-$(CONFIG_EDAC_AMD8131) += amd8131_edac.o
+2 -2
drivers/edac/amd8111_edac.c
··· 389 389 dev_info->edac_dev->dev = &dev_info->dev->dev; 390 390 dev_info->edac_dev->mod_name = AMD8111_EDAC_MOD_STR; 391 391 dev_info->edac_dev->ctl_name = dev_info->ctl_name; 392 - dev_info->edac_dev->dev_name = dev_info->dev->dev.bus_id; 392 + dev_info->edac_dev->dev_name = dev_name(&dev_info->dev->dev); 393 393 394 394 if (edac_op_state == EDAC_OPSTATE_POLL) 395 395 dev_info->edac_dev->edac_check = dev_info->check; ··· 473 473 pci_info->edac_dev->dev = &pci_info->dev->dev; 474 474 pci_info->edac_dev->mod_name = AMD8111_EDAC_MOD_STR; 475 475 pci_info->edac_dev->ctl_name = pci_info->ctl_name; 476 - pci_info->edac_dev->dev_name = pci_info->dev->dev.bus_id; 476 + pci_info->edac_dev->dev_name = dev_name(&pci_info->dev->dev); 477 477 478 478 if (edac_op_state == EDAC_OPSTATE_POLL) 479 479 pci_info->edac_dev->edac_check = pci_info->check;
+1 -1
drivers/edac/amd8131_edac.c
··· 287 287 dev_info->edac_dev->dev = &dev_info->dev->dev; 288 288 dev_info->edac_dev->mod_name = AMD8131_EDAC_MOD_STR; 289 289 dev_info->edac_dev->ctl_name = dev_info->ctl_name; 290 - dev_info->edac_dev->dev_name = dev_info->dev->dev.bus_id; 290 + dev_info->edac_dev->dev_name = dev_name(&dev_info->dev->dev); 291 291 292 292 if (edac_op_state == EDAC_OPSTATE_POLL) 293 293 dev_info->edac_dev->edac_check = amd8131_chipset.check;
+7 -7
drivers/gpu/drm/Kconfig
··· 67 67 will load the correct one. 68 68 69 69 config DRM_I915 70 + tristate "i915 driver" 70 71 select FB_CFB_FILLRECT 71 72 select FB_CFB_COPYAREA 72 73 select FB_CFB_IMAGEBLIT 73 74 select FB 74 75 select FRAMEBUFFER_CONSOLE if !EMBEDDED 75 - tristate "i915 driver" 76 + # i915 depends on ACPI_VIDEO when ACPI is enabled 77 + # but for select to work, need to select ACPI_VIDEO's dependencies, ick 78 + select VIDEO_OUTPUT_CONTROL if ACPI 79 + select BACKLIGHT_CLASS_DEVICE if ACPI 80 + select INPUT if ACPI 81 + select ACPI_VIDEO if ACPI 76 82 help 77 83 Choose this option if you have a system that has Intel 830M, 845G, 78 84 852GM, 855GM 865G or 915G integrated graphics. If M is selected, the ··· 90 84 config DRM_I915_KMS 91 85 bool "Enable modesetting on intel by default" 92 86 depends on DRM_I915 93 - # i915 KMS depends on ACPI_VIDEO when ACPI is enabled 94 - # but for select to work, need to select ACPI_VIDEO's dependencies, ick 95 - select VIDEO_OUTPUT_CONTROL if ACPI 96 - select BACKLIGHT_CLASS_DEVICE if ACPI 97 - select INPUT if ACPI 98 - select ACPI_VIDEO if ACPI 99 87 help 100 88 Choose this option if you want kernel modesetting enabled by default, 101 89 and you have a new enough userspace to support this. Running old
+2 -1
drivers/gpu/drm/i915/i915_drv.h
··· 180 180 int backlight_duty_cycle; /* restore backlight to this value */ 181 181 bool panel_wants_dither; 182 182 struct drm_display_mode *panel_fixed_mode; 183 - struct drm_display_mode *vbt_mode; /* if any */ 183 + struct drm_display_mode *lfp_lvds_vbt_mode; /* if any */ 184 + struct drm_display_mode *sdvo_lvds_vbt_mode; /* if any */ 184 185 185 186 /* Feature bits from the VBIOS */ 186 187 unsigned int int_tv_support:1;
+38 -27
drivers/gpu/drm/i915/i915_gem.c
··· 349 349 last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE; 350 350 num_pages = last_data_page - first_data_page + 1; 351 351 352 - user_pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL); 352 + user_pages = drm_calloc_large(num_pages, sizeof(struct page *)); 353 353 if (user_pages == NULL) 354 354 return -ENOMEM; 355 355 ··· 429 429 SetPageDirty(user_pages[i]); 430 430 page_cache_release(user_pages[i]); 431 431 } 432 - kfree(user_pages); 432 + drm_free_large(user_pages); 433 433 434 434 return ret; 435 435 } ··· 649 649 last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE; 650 650 num_pages = last_data_page - first_data_page + 1; 651 651 652 - user_pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL); 652 + user_pages = drm_calloc_large(num_pages, sizeof(struct page *)); 653 653 if (user_pages == NULL) 654 654 return -ENOMEM; 655 655 ··· 719 719 out_unpin_pages: 720 720 for (i = 0; i < pinned_pages; i++) 721 721 page_cache_release(user_pages[i]); 722 - kfree(user_pages); 722 + drm_free_large(user_pages); 723 723 724 724 return ret; 725 725 } ··· 824 824 last_data_page = (data_ptr + args->size - 1) / PAGE_SIZE; 825 825 num_pages = last_data_page - first_data_page + 1; 826 826 827 - user_pages = kcalloc(num_pages, sizeof(struct page *), GFP_KERNEL); 827 + user_pages = drm_calloc_large(num_pages, sizeof(struct page *)); 828 828 if (user_pages == NULL) 829 829 return -ENOMEM; 830 830 ··· 902 902 fail_put_user_pages: 903 903 for (i = 0; i < pinned_pages; i++) 904 904 page_cache_release(user_pages[i]); 905 - kfree(user_pages); 905 + drm_free_large(user_pages); 906 906 907 907 return ret; 908 908 } ··· 1145 1145 mutex_unlock(&dev->struct_mutex); 1146 1146 return VM_FAULT_SIGBUS; 1147 1147 } 1148 - list_add(&obj_priv->list, &dev_priv->mm.inactive_list); 1148 + 1149 + ret = i915_gem_object_set_to_gtt_domain(obj, write); 1150 + if (ret) { 1151 + mutex_unlock(&dev->struct_mutex); 1152 + return VM_FAULT_SIGBUS; 1153 + } 1154 + 1155 + list_add_tail(&obj_priv->list, &dev_priv->mm.inactive_list); 1149 1156 } 1150 1157 1151 1158 /* Need a new fence register? */ ··· 1382 1375 mutex_unlock(&dev->struct_mutex); 1383 1376 return ret; 1384 1377 } 1385 - list_add(&obj_priv->list, &dev_priv->mm.inactive_list); 1378 + list_add_tail(&obj_priv->list, &dev_priv->mm.inactive_list); 1386 1379 } 1387 1380 1388 1381 drm_gem_object_unreference(obj); ··· 1415 1408 } 1416 1409 obj_priv->dirty = 0; 1417 1410 1418 - drm_free(obj_priv->pages, 1419 - page_count * sizeof(struct page *), 1420 - DRM_MEM_DRIVER); 1411 + drm_free_large(obj_priv->pages); 1421 1412 obj_priv->pages = NULL; 1422 1413 } 1423 1414 ··· 2029 2024 */ 2030 2025 page_count = obj->size / PAGE_SIZE; 2031 2026 BUG_ON(obj_priv->pages != NULL); 2032 - obj_priv->pages = drm_calloc(page_count, sizeof(struct page *), 2033 - DRM_MEM_DRIVER); 2027 + obj_priv->pages = drm_calloc_large(page_count, sizeof(struct page *)); 2034 2028 if (obj_priv->pages == NULL) { 2035 2029 DRM_ERROR("Faled to allocate page list\n"); 2036 2030 obj_priv->pages_refcount--; ··· 2135 2131 return; 2136 2132 } 2137 2133 2138 - pitch_val = (obj_priv->stride / 128) - 1; 2139 - WARN_ON(pitch_val & ~0x0000000f); 2134 + pitch_val = obj_priv->stride / 128; 2135 + pitch_val = ffs(pitch_val) - 1; 2136 + WARN_ON(pitch_val > I830_FENCE_MAX_PITCH_VAL); 2137 + 2140 2138 val = obj_priv->gtt_offset; 2141 2139 if (obj_priv->tiling_mode == I915_TILING_Y) 2142 2140 val |= 1 << I830_FENCE_TILING_Y_SHIFT; ··· 2429 2423 */ 2430 2424 if (obj_priv->pages == NULL) 2431 2425 return; 2426 + 2427 + /* XXX: The 865 in particular appears to be weird in how it handles 2428 + * cache flushing. We haven't figured it out, but the 2429 + * clflush+agp_chipset_flush doesn't appear to successfully get the 2430 + * data visible to the PGU, while wbinvd + agp_chipset_flush does. 2431 + */ 2432 + if (IS_I865G(obj->dev)) { 2433 + wbinvd(); 2434 + return; 2435 + } 2432 2436 2433 2437 drm_clflush_pages(obj_priv->pages, obj->size / PAGE_SIZE); 2434 2438 } ··· 3127 3111 reloc_count += exec_list[i].relocation_count; 3128 3112 } 3129 3113 3130 - *relocs = drm_calloc(reloc_count, sizeof(**relocs), DRM_MEM_DRIVER); 3114 + *relocs = drm_calloc_large(reloc_count, sizeof(**relocs)); 3131 3115 if (*relocs == NULL) 3132 3116 return -ENOMEM; 3133 3117 ··· 3141 3125 exec_list[i].relocation_count * 3142 3126 sizeof(**relocs)); 3143 3127 if (ret != 0) { 3144 - drm_free(*relocs, reloc_count * sizeof(**relocs), 3145 - DRM_MEM_DRIVER); 3128 + drm_free_large(*relocs); 3146 3129 *relocs = NULL; 3147 3130 return -EFAULT; 3148 3131 } ··· 3180 3165 } 3181 3166 3182 3167 err: 3183 - drm_free(relocs, reloc_count * sizeof(*relocs), DRM_MEM_DRIVER); 3168 + drm_free_large(relocs); 3184 3169 3185 3170 return ret; 3186 3171 } ··· 3213 3198 return -EINVAL; 3214 3199 } 3215 3200 /* Copy in the exec list from userland */ 3216 - exec_list = drm_calloc(sizeof(*exec_list), args->buffer_count, 3217 - DRM_MEM_DRIVER); 3218 - object_list = drm_calloc(sizeof(*object_list), args->buffer_count, 3219 - DRM_MEM_DRIVER); 3201 + exec_list = drm_calloc_large(sizeof(*exec_list), args->buffer_count); 3202 + object_list = drm_calloc_large(sizeof(*object_list), args->buffer_count); 3220 3203 if (exec_list == NULL || object_list == NULL) { 3221 3204 DRM_ERROR("Failed to allocate exec or object list " 3222 3205 "for %d buffers\n", ··· 3475 3462 } 3476 3463 3477 3464 pre_mutex_err: 3478 - drm_free(object_list, sizeof(*object_list) * args->buffer_count, 3479 - DRM_MEM_DRIVER); 3480 - drm_free(exec_list, sizeof(*exec_list) * args->buffer_count, 3481 - DRM_MEM_DRIVER); 3465 + drm_free_large(object_list); 3466 + drm_free_large(exec_list); 3482 3467 drm_free(cliprects, sizeof(*cliprects) * args->num_cliprects, 3483 3468 DRM_MEM_DRIVER); 3484 3469
+11 -3
drivers/gpu/drm/i915/i915_gem_tiling.c
··· 213 213 if (tiling_mode == I915_TILING_NONE) 214 214 return true; 215 215 216 - if (tiling_mode == I915_TILING_Y && HAS_128_BYTE_Y_TILING(dev)) 216 + if (!IS_I9XX(dev) || 217 + (tiling_mode == I915_TILING_Y && HAS_128_BYTE_Y_TILING(dev))) 217 218 tile_width = 128; 218 219 else 219 220 tile_width = 512; ··· 226 225 if (stride / 128 > I965_FENCE_MAX_PITCH_VAL) 227 226 return false; 228 227 } else if (IS_I9XX(dev)) { 229 - if (stride / tile_width > I830_FENCE_MAX_PITCH_VAL || 228 + uint32_t pitch_val = ffs(stride / tile_width) - 1; 229 + 230 + /* XXX: For Y tiling, FENCE_MAX_PITCH_VAL is actually 6 (8KB) 231 + * instead of 4 (2KB) on 945s. 232 + */ 233 + if (pitch_val > I915_FENCE_MAX_PITCH_VAL || 230 234 size > (I830_FENCE_MAX_SIZE_VAL << 20)) 231 235 return false; 232 236 } else { 233 - if (stride / 128 > I830_FENCE_MAX_PITCH_VAL || 237 + uint32_t pitch_val = ffs(stride / tile_width) - 1; 238 + 239 + if (pitch_val > I830_FENCE_MAX_PITCH_VAL || 234 240 size > (I830_FENCE_MAX_SIZE_VAL << 19)) 235 241 return false; 236 242 }
+19 -1
drivers/gpu/drm/i915/i915_reg.h
··· 190 190 #define I830_FENCE_SIZE_BITS(size) ((ffs((size) >> 19) - 1) << 8) 191 191 #define I830_FENCE_PITCH_SHIFT 4 192 192 #define I830_FENCE_REG_VALID (1<<0) 193 - #define I830_FENCE_MAX_PITCH_VAL 0x10 193 + #define I915_FENCE_MAX_PITCH_VAL 0x10 194 + #define I830_FENCE_MAX_PITCH_VAL 6 194 195 #define I830_FENCE_MAX_SIZE_VAL (1<<8) 195 196 196 197 #define I915_FENCE_START_MASK 0x0ff00000 ··· 1411 1410 1412 1411 /* Cursor A & B regs */ 1413 1412 #define CURACNTR 0x70080 1413 + /* Old style CUR*CNTR flags (desktop 8xx) */ 1414 + #define CURSOR_ENABLE 0x80000000 1415 + #define CURSOR_GAMMA_ENABLE 0x40000000 1416 + #define CURSOR_STRIDE_MASK 0x30000000 1417 + #define CURSOR_FORMAT_SHIFT 24 1418 + #define CURSOR_FORMAT_MASK (0x07 << CURSOR_FORMAT_SHIFT) 1419 + #define CURSOR_FORMAT_2C (0x00 << CURSOR_FORMAT_SHIFT) 1420 + #define CURSOR_FORMAT_3C (0x01 << CURSOR_FORMAT_SHIFT) 1421 + #define CURSOR_FORMAT_4C (0x02 << CURSOR_FORMAT_SHIFT) 1422 + #define CURSOR_FORMAT_ARGB (0x04 << CURSOR_FORMAT_SHIFT) 1423 + #define CURSOR_FORMAT_XRGB (0x05 << CURSOR_FORMAT_SHIFT) 1424 + /* New style CUR*CNTR flags */ 1425 + #define CURSOR_MODE 0x27 1414 1426 #define CURSOR_MODE_DISABLE 0x00 1415 1427 #define CURSOR_MODE_64_32B_AX 0x07 1416 1428 #define CURSOR_MODE_64_ARGB_AX ((1 << 5) | CURSOR_MODE_64_32B_AX) 1429 + #define MCURSOR_PIPE_SELECT (1 << 28) 1430 + #define MCURSOR_PIPE_A 0x00 1431 + #define MCURSOR_PIPE_B (1 << 28) 1417 1432 #define MCURSOR_GAMMA_ENABLE (1 << 26) 1418 1433 #define CURABASE 0x70084 1419 1434 #define CURAPOS 0x70088 ··· 1437 1420 #define CURSOR_POS_SIGN 0x8000 1438 1421 #define CURSOR_X_SHIFT 0 1439 1422 #define CURSOR_Y_SHIFT 16 1423 + #define CURSIZE 0x700a0 1440 1424 #define CURBCNTR 0x700c0 1441 1425 #define CURBBASE 0x700c4 1442 1426 #define CURBPOS 0x700c8
+73 -31
drivers/gpu/drm/i915/intel_bios.c
··· 57 57 return NULL; 58 58 } 59 59 60 - /* Try to find panel data */ 61 60 static void 62 - parse_panel_data(struct drm_i915_private *dev_priv, struct bdb_header *bdb) 61 + fill_detail_timing_data(struct drm_display_mode *panel_fixed_mode, 62 + struct lvds_dvo_timing *dvo_timing) 63 + { 64 + panel_fixed_mode->hdisplay = (dvo_timing->hactive_hi << 8) | 65 + dvo_timing->hactive_lo; 66 + panel_fixed_mode->hsync_start = panel_fixed_mode->hdisplay + 67 + ((dvo_timing->hsync_off_hi << 8) | dvo_timing->hsync_off_lo); 68 + panel_fixed_mode->hsync_end = panel_fixed_mode->hsync_start + 69 + dvo_timing->hsync_pulse_width; 70 + panel_fixed_mode->htotal = panel_fixed_mode->hdisplay + 71 + ((dvo_timing->hblank_hi << 8) | dvo_timing->hblank_lo); 72 + 73 + panel_fixed_mode->vdisplay = (dvo_timing->vactive_hi << 8) | 74 + dvo_timing->vactive_lo; 75 + panel_fixed_mode->vsync_start = panel_fixed_mode->vdisplay + 76 + dvo_timing->vsync_off; 77 + panel_fixed_mode->vsync_end = panel_fixed_mode->vsync_start + 78 + dvo_timing->vsync_pulse_width; 79 + panel_fixed_mode->vtotal = panel_fixed_mode->vdisplay + 80 + ((dvo_timing->vblank_hi << 8) | dvo_timing->vblank_lo); 81 + panel_fixed_mode->clock = dvo_timing->clock * 10; 82 + panel_fixed_mode->type = DRM_MODE_TYPE_PREFERRED; 83 + 84 + /* Some VBTs have bogus h/vtotal values */ 85 + if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal) 86 + panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1; 87 + if (panel_fixed_mode->vsync_end > panel_fixed_mode->vtotal) 88 + panel_fixed_mode->vtotal = panel_fixed_mode->vsync_end + 1; 89 + 90 + drm_mode_set_name(panel_fixed_mode); 91 + } 92 + 93 + /* Try to find integrated panel data */ 94 + static void 95 + parse_lfp_panel_data(struct drm_i915_private *dev_priv, 96 + struct bdb_header *bdb) 63 97 { 64 98 struct bdb_lvds_options *lvds_options; 65 99 struct bdb_lvds_lfp_data *lvds_lfp_data; ··· 125 91 panel_fixed_mode = drm_calloc(1, sizeof(*panel_fixed_mode), 126 92 DRM_MEM_DRIVER); 127 93 
128 - panel_fixed_mode->hdisplay = (dvo_timing->hactive_hi << 8) | 129 - dvo_timing->hactive_lo; 130 - panel_fixed_mode->hsync_start = panel_fixed_mode->hdisplay + 131 - ((dvo_timing->hsync_off_hi << 8) | dvo_timing->hsync_off_lo); 132 - panel_fixed_mode->hsync_end = panel_fixed_mode->hsync_start + 133 - dvo_timing->hsync_pulse_width; 134 - panel_fixed_mode->htotal = panel_fixed_mode->hdisplay + 135 - ((dvo_timing->hblank_hi << 8) | dvo_timing->hblank_lo); 94 + fill_detail_timing_data(panel_fixed_mode, dvo_timing); 136 95 137 - panel_fixed_mode->vdisplay = (dvo_timing->vactive_hi << 8) | 138 - dvo_timing->vactive_lo; 139 - panel_fixed_mode->vsync_start = panel_fixed_mode->vdisplay + 140 - dvo_timing->vsync_off; 141 - panel_fixed_mode->vsync_end = panel_fixed_mode->vsync_start + 142 - dvo_timing->vsync_pulse_width; 143 - panel_fixed_mode->vtotal = panel_fixed_mode->vdisplay + 144 - ((dvo_timing->vblank_hi << 8) | dvo_timing->vblank_lo); 145 - panel_fixed_mode->clock = dvo_timing->clock * 10; 146 - panel_fixed_mode->type = DRM_MODE_TYPE_PREFERRED; 147 - 148 - /* Some VBTs have bogus h/vtotal values */ 149 - if (panel_fixed_mode->hsync_end > panel_fixed_mode->htotal) 150 - panel_fixed_mode->htotal = panel_fixed_mode->hsync_end + 1; 151 - if (panel_fixed_mode->vsync_end > panel_fixed_mode->vtotal) 152 - panel_fixed_mode->vtotal = panel_fixed_mode->vsync_end + 1; 153 - 154 - drm_mode_set_name(panel_fixed_mode); 155 - 156 - dev_priv->vbt_mode = panel_fixed_mode; 96 + dev_priv->lfp_lvds_vbt_mode = panel_fixed_mode; 157 97 158 98 DRM_DEBUG("Found panel mode in BIOS VBT tables:\n"); 159 99 drm_mode_debug_printmodeline(panel_fixed_mode); 100 + 101 + return; 102 + } 103 + 104 + /* Try to find sdvo panel data */ 105 + static void 106 + parse_sdvo_panel_data(struct drm_i915_private *dev_priv, 107 + struct bdb_header *bdb) 108 + { 109 + struct bdb_sdvo_lvds_options *sdvo_lvds_options; 110 + struct lvds_dvo_timing *dvo_timing; 111 + struct drm_display_mode *panel_fixed_mode; 112 
+ 113 + dev_priv->sdvo_lvds_vbt_mode = NULL; 114 + 115 + sdvo_lvds_options = find_section(bdb, BDB_SDVO_LVDS_OPTIONS); 116 + if (!sdvo_lvds_options) 117 + return; 118 + 119 + dvo_timing = find_section(bdb, BDB_SDVO_PANEL_DTDS); 120 + if (!dvo_timing) 121 + return; 122 + 123 + panel_fixed_mode = drm_calloc(1, sizeof(*panel_fixed_mode), 124 + DRM_MEM_DRIVER); 125 + 126 + if (!panel_fixed_mode) 127 + return; 128 + 129 + fill_detail_timing_data(panel_fixed_mode, 130 + dvo_timing + sdvo_lvds_options->panel_type); 131 + 132 + dev_priv->sdvo_lvds_vbt_mode = panel_fixed_mode; 160 133 161 134 return; 162 135 } ··· 240 199 241 200 /* Grab useful general definitions */ 242 201 parse_general_features(dev_priv, bdb); 243 - parse_panel_data(dev_priv, bdb); 202 + parse_lfp_panel_data(dev_priv, bdb); 203 + parse_sdvo_panel_data(dev_priv, bdb); 244 204 245 205 pci_unmap_rom(pdev, bios); 246 206
+17
drivers/gpu/drm/i915/intel_bios.h
··· 279 279 struct vch_panel_data panels[16]; 280 280 } __attribute__((packed)); 281 281 282 + struct bdb_sdvo_lvds_options { 283 + u8 panel_backlight; 284 + u8 h40_set_panel_type; 285 + u8 panel_type; 286 + u8 ssc_clk_freq; 287 + u16 als_low_trip; 288 + u16 als_high_trip; 289 + u8 sclalarcoeff_tab_row_num; 290 + u8 sclalarcoeff_tab_row_size; 291 + u8 coefficient[8]; 292 + u8 panel_misc_bits_1; 293 + u8 panel_misc_bits_2; 294 + u8 panel_misc_bits_3; 295 + u8 panel_misc_bits_4; 296 + } __attribute__((packed)); 297 + 298 + 282 299 bool intel_init_bios(struct drm_device *dev); 283 300 284 301 /*
+147 -2
drivers/gpu/drm/i915/intel_crt.c
··· 198 198 return intel_ddc_probe(intel_output); 199 199 } 200 200 201 + static enum drm_connector_status 202 + intel_crt_load_detect(struct drm_crtc *crtc, struct intel_output *intel_output) 203 + { 204 + struct drm_encoder *encoder = &intel_output->enc; 205 + struct drm_device *dev = encoder->dev; 206 + struct drm_i915_private *dev_priv = dev->dev_private; 207 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 208 + uint32_t pipe = intel_crtc->pipe; 209 + uint32_t save_bclrpat; 210 + uint32_t save_vtotal; 211 + uint32_t vtotal, vactive; 212 + uint32_t vsample; 213 + uint32_t vblank, vblank_start, vblank_end; 214 + uint32_t dsl; 215 + uint32_t bclrpat_reg; 216 + uint32_t vtotal_reg; 217 + uint32_t vblank_reg; 218 + uint32_t vsync_reg; 219 + uint32_t pipeconf_reg; 220 + uint32_t pipe_dsl_reg; 221 + uint8_t st00; 222 + enum drm_connector_status status; 223 + 224 + if (pipe == 0) { 225 + bclrpat_reg = BCLRPAT_A; 226 + vtotal_reg = VTOTAL_A; 227 + vblank_reg = VBLANK_A; 228 + vsync_reg = VSYNC_A; 229 + pipeconf_reg = PIPEACONF; 230 + pipe_dsl_reg = PIPEADSL; 231 + } else { 232 + bclrpat_reg = BCLRPAT_B; 233 + vtotal_reg = VTOTAL_B; 234 + vblank_reg = VBLANK_B; 235 + vsync_reg = VSYNC_B; 236 + pipeconf_reg = PIPEBCONF; 237 + pipe_dsl_reg = PIPEBDSL; 238 + } 239 + 240 + save_bclrpat = I915_READ(bclrpat_reg); 241 + save_vtotal = I915_READ(vtotal_reg); 242 + vblank = I915_READ(vblank_reg); 243 + 244 + vtotal = ((save_vtotal >> 16) & 0xfff) + 1; 245 + vactive = (save_vtotal & 0x7ff) + 1; 246 + 247 + vblank_start = (vblank & 0xfff) + 1; 248 + vblank_end = ((vblank >> 16) & 0xfff) + 1; 249 + 250 + /* Set the border color to purple. 
*/ 251 + I915_WRITE(bclrpat_reg, 0x500050); 252 + 253 + if (IS_I9XX(dev)) { 254 + uint32_t pipeconf = I915_READ(pipeconf_reg); 255 + I915_WRITE(pipeconf_reg, pipeconf | PIPECONF_FORCE_BORDER); 256 + /* Wait for next Vblank to substitute 257 + * border color for Color info */ 258 + intel_wait_for_vblank(dev); 259 + st00 = I915_READ8(VGA_MSR_WRITE); 260 + status = ((st00 & (1 << 4)) != 0) ? 261 + connector_status_connected : 262 + connector_status_disconnected; 263 + 264 + I915_WRITE(pipeconf_reg, pipeconf); 265 + } else { 266 + bool restore_vblank = false; 267 + int count, detect; 268 + 269 + /* 270 + * If there isn't any border, add some. 271 + * Yes, this will flicker 272 + */ 273 + if (vblank_start <= vactive && vblank_end >= vtotal) { 274 + uint32_t vsync = I915_READ(vsync_reg); 275 + uint32_t vsync_start = (vsync & 0xffff) + 1; 276 + 277 + vblank_start = vsync_start; 278 + I915_WRITE(vblank_reg, 279 + (vblank_start - 1) | 280 + ((vblank_end - 1) << 16)); 281 + restore_vblank = true; 282 + } 283 + /* sample in the vertical border, selecting the larger one */ 284 + if (vblank_start - vactive >= vtotal - vblank_end) 285 + vsample = (vblank_start + vactive) >> 1; 286 + else 287 + vsample = (vtotal + vblank_end) >> 1; 288 + 289 + /* 290 + * Wait for the border to be displayed 291 + */ 292 + while (I915_READ(pipe_dsl_reg) >= vactive) 293 + ; 294 + while ((dsl = I915_READ(pipe_dsl_reg)) <= vsample) 295 + ; 296 + /* 297 + * Watch ST00 for an entire scanline 298 + */ 299 + detect = 0; 300 + count = 0; 301 + do { 302 + count++; 303 + /* Read the ST00 VGA status register */ 304 + st00 = I915_READ8(VGA_MSR_WRITE); 305 + if (st00 & (1 << 4)) 306 + detect++; 307 + } while ((I915_READ(pipe_dsl_reg) == dsl)); 308 + 309 + /* restore vblank if necessary */ 310 + if (restore_vblank) 311 + I915_WRITE(vblank_reg, vblank); 312 + /* 313 + * If more than 3/4 of the scanline detected a monitor, 314 + * then it is assumed to be present.
This works even on i830, 315 + * where there isn't any way to force the border color across 316 + * the screen 317 + */ 318 + status = detect * 4 > count * 3 ? 319 + connector_status_connected : 320 + connector_status_disconnected; 321 + } 322 + 323 + /* Restore previous settings */ 324 + I915_WRITE(bclrpat_reg, save_bclrpat); 325 + 326 + return status; 327 + } 328 + 201 329 static enum drm_connector_status intel_crt_detect(struct drm_connector *connector) 202 330 { 203 331 struct drm_device *dev = connector->dev; 332 + struct intel_output *intel_output = to_intel_output(connector); 333 + struct drm_encoder *encoder = &intel_output->enc; 334 + struct drm_crtc *crtc; 335 + int dpms_mode; 336 + enum drm_connector_status status; 204 337 205 338 if (IS_I9XX(dev) && !IS_I915G(dev) && !IS_I915GM(dev)) { 206 339 if (intel_crt_detect_hotplug(connector)) ··· 345 212 if (intel_crt_detect_ddc(connector)) 346 213 return connector_status_connected; 347 214 348 - /* TODO use load detect */ 349 - return connector_status_unknown; 215 + /* for pre-945g platforms use load detect */ 216 + if (encoder->crtc && encoder->crtc->enabled) { 217 + status = intel_crt_load_detect(encoder->crtc, intel_output); 218 + } else { 219 + crtc = intel_get_load_detect_pipe(intel_output, 220 + NULL, &dpms_mode); 221 + if (crtc) { 222 + status = intel_crt_load_detect(crtc, intel_output); 223 + intel_release_load_detect_pipe(intel_output, dpms_mode); 224 + } else 225 + status = connector_status_unknown; 226 + } 227 + 228 + return status; 350 229 } 351 230 352 231 static void intel_crt_destroy(struct drm_connector *connector)
+20 -6
drivers/gpu/drm/i915/intel_display.c
··· 1357 1357 int pipe = intel_crtc->pipe; 1358 1358 uint32_t control = (pipe == 0) ? CURACNTR : CURBCNTR; 1359 1359 uint32_t base = (pipe == 0) ? CURABASE : CURBBASE; 1360 - uint32_t temp; 1360 + uint32_t temp = I915_READ(control); 1361 1361 size_t addr; 1362 1362 int ret; 1363 1363 ··· 1366 1366 /* if we want to turn off the cursor ignore width and height */ 1367 1367 if (!handle) { 1368 1368 DRM_DEBUG("cursor off\n"); 1369 - temp = CURSOR_MODE_DISABLE; 1369 + if (IS_MOBILE(dev) || IS_I9XX(dev)) { 1370 + temp &= ~(CURSOR_MODE | MCURSOR_GAMMA_ENABLE); 1371 + temp |= CURSOR_MODE_DISABLE; 1372 + } else { 1373 + temp &= ~(CURSOR_ENABLE | CURSOR_GAMMA_ENABLE); 1374 + } 1370 1375 addr = 0; 1371 1376 bo = NULL; 1372 1377 mutex_lock(&dev->struct_mutex); ··· 1414 1409 addr = obj_priv->phys_obj->handle->busaddr; 1415 1410 } 1416 1411 1417 - temp = 0; 1418 - /* set the pipe for the cursor */ 1419 - temp |= (pipe << 28); 1420 - temp |= CURSOR_MODE_64_ARGB_AX | MCURSOR_GAMMA_ENABLE; 1412 + if (!IS_I9XX(dev)) 1413 + I915_WRITE(CURSIZE, (height << 12) | width); 1414 + 1415 + /* Hooray for CUR*CNTR differences */ 1416 + if (IS_MOBILE(dev) || IS_I9XX(dev)) { 1417 + temp &= ~(CURSOR_MODE | MCURSOR_PIPE_SELECT); 1418 + temp |= CURSOR_MODE_64_ARGB_AX | MCURSOR_GAMMA_ENABLE; 1419 + temp |= (pipe << 28); /* Connect to correct pipe */ 1420 + } else { 1421 + temp &= ~(CURSOR_FORMAT_MASK); 1422 + temp |= CURSOR_ENABLE; 1423 + temp |= CURSOR_FORMAT_ARGB | CURSOR_GAMMA_ENABLE; 1424 + } 1421 1425 1422 1426 finish: 1423 1427 I915_WRITE(control, temp);
+2 -2
drivers/gpu/drm/i915/intel_lvds.c
··· 511 511 } 512 512 513 513 /* Failed to get EDID, what about VBT? */ 514 - if (dev_priv->vbt_mode) { 514 + if (dev_priv->lfp_lvds_vbt_mode) { 515 515 mutex_lock(&dev->mode_config.mutex); 516 516 dev_priv->panel_fixed_mode = 517 - drm_mode_duplicate(dev, dev_priv->vbt_mode); 517 + drm_mode_duplicate(dev, dev_priv->lfp_lvds_vbt_mode); 518 518 mutex_unlock(&dev->mode_config.mutex); 519 519 if (dev_priv->panel_fixed_mode) { 520 520 dev_priv->panel_fixed_mode->type |=
+114 -23
drivers/gpu/drm/i915/intel_sdvo.c
··· 69 69 * This is set if we treat the device as HDMI, instead of DVI. 70 70 */ 71 71 bool is_hdmi; 72 + /** 73 + * This is set if we detect output of sdvo device as LVDS. 74 + */ 75 + bool is_lvds; 72 76 73 77 /** 74 78 * Returned SDTV resolutions allowed for the current format, if the ··· 1402 1398 static void intel_sdvo_get_ddc_modes(struct drm_connector *connector) 1403 1399 { 1404 1400 struct intel_output *intel_output = to_intel_output(connector); 1405 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1406 1401 1407 1402 /* set the bus switch and get the modes */ 1408 - intel_sdvo_set_control_bus_switch(intel_output, sdvo_priv->ddc_bus); 1409 1403 intel_ddc_get_modes(intel_output); 1410 1404 1411 1405 #if 0 ··· 1545 1543 } 1546 1544 } 1547 1545 1546 + static void intel_sdvo_get_lvds_modes(struct drm_connector *connector) 1547 + { 1548 + struct intel_output *intel_output = to_intel_output(connector); 1549 + struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1550 + struct drm_i915_private *dev_priv = connector->dev->dev_private; 1551 + 1552 + /* 1553 + * Attempt to get the mode list from DDC. 1554 + * Assume that the preferred modes are 1555 + * arranged in priority order. 
1556 + */ 1557 + /* set the bus switch and get the modes */ 1558 + intel_sdvo_set_control_bus_switch(intel_output, sdvo_priv->ddc_bus); 1559 + intel_ddc_get_modes(intel_output); 1560 + if (list_empty(&connector->probed_modes) == false) 1561 + return; 1562 + 1563 + /* Fetch modes from VBT */ 1564 + if (dev_priv->sdvo_lvds_vbt_mode != NULL) { 1565 + struct drm_display_mode *newmode; 1566 + newmode = drm_mode_duplicate(connector->dev, 1567 + dev_priv->sdvo_lvds_vbt_mode); 1568 + if (newmode != NULL) { 1569 + /* Guarantee the mode is preferred */ 1570 + newmode->type = (DRM_MODE_TYPE_PREFERRED | 1571 + DRM_MODE_TYPE_DRIVER); 1572 + drm_mode_probed_add(connector, newmode); 1573 + } 1574 + } 1575 + } 1576 + 1548 1577 static int intel_sdvo_get_modes(struct drm_connector *connector) 1549 1578 { 1550 1579 struct intel_output *output = to_intel_output(connector); ··· 1583 1550 1584 1551 if (sdvo_priv->is_tv) 1585 1552 intel_sdvo_get_tv_modes(connector); 1553 + else if (sdvo_priv->is_lvds == true) 1554 + intel_sdvo_get_lvds_modes(connector); 1586 1555 else 1587 1556 intel_sdvo_get_ddc_modes(connector); 1588 1557 ··· 1599 1564 1600 1565 if (intel_output->i2c_bus) 1601 1566 intel_i2c_destroy(intel_output->i2c_bus); 1567 + if (intel_output->ddc_bus) 1568 + intel_i2c_destroy(intel_output->ddc_bus); 1569 + 1602 1570 drm_sysfs_connector_remove(connector); 1603 1571 drm_connector_cleanup(connector); 1604 1572 kfree(intel_output); ··· 1698 1660 return true; 1699 1661 } 1700 1662 1663 + static struct intel_output * 1664 + intel_sdvo_chan_to_intel_output(struct intel_i2c_chan *chan) 1665 + { 1666 + struct drm_device *dev = chan->drm_dev; 1667 + struct drm_connector *connector; 1668 + struct intel_output *intel_output = NULL; 1669 + 1670 + list_for_each_entry(connector, 1671 + &dev->mode_config.connector_list, head) { 1672 + if (to_intel_output(connector)->ddc_bus == chan) { 1673 + intel_output = to_intel_output(connector); 1674 + break; 1675 + } 1676 + } 1677 + return intel_output; 
1678 + } 1679 + 1680 + static int intel_sdvo_master_xfer(struct i2c_adapter *i2c_adap, 1681 + struct i2c_msg msgs[], int num) 1682 + { 1683 + struct intel_output *intel_output; 1684 + struct intel_sdvo_priv *sdvo_priv; 1685 + struct i2c_algo_bit_data *algo_data; 1686 + struct i2c_algorithm *algo; 1687 + 1688 + algo_data = (struct i2c_algo_bit_data *)i2c_adap->algo_data; 1689 + intel_output = 1690 + intel_sdvo_chan_to_intel_output( 1691 + (struct intel_i2c_chan *)(algo_data->data)); 1692 + if (intel_output == NULL) 1693 + return -EINVAL; 1694 + 1695 + sdvo_priv = intel_output->dev_priv; 1696 + algo = (struct i2c_algorithm *)intel_output->i2c_bus->adapter.algo; 1697 + 1698 + intel_sdvo_set_control_bus_switch(intel_output, sdvo_priv->ddc_bus); 1699 + return algo->master_xfer(i2c_adap, msgs, num); 1700 + } 1701 + 1702 + static struct i2c_algorithm intel_sdvo_i2c_bit_algo = { 1703 + .master_xfer = intel_sdvo_master_xfer, 1704 + }; 1705 + 1701 1706 bool intel_sdvo_init(struct drm_device *dev, int output_device) 1702 1707 { 1703 1708 struct drm_connector *connector; 1704 1709 struct intel_output *intel_output; 1705 1710 struct intel_sdvo_priv *sdvo_priv; 1706 1711 struct intel_i2c_chan *i2cbus = NULL; 1712 + struct intel_i2c_chan *ddcbus = NULL; 1707 1713 int connector_type; 1708 1714 u8 ch[0x40]; 1709 1715 int i; ··· 1758 1676 return false; 1759 1677 } 1760 1678 1761 - connector = &intel_output->base; 1762 - 1763 - drm_connector_init(dev, connector, &intel_sdvo_connector_funcs, 1764 - DRM_MODE_CONNECTOR_Unknown); 1765 - drm_connector_helper_add(connector, &intel_sdvo_connector_helper_funcs); 1766 1679 sdvo_priv = (struct intel_sdvo_priv *)(intel_output + 1); 1767 1680 intel_output->type = INTEL_OUTPUT_SDVO; 1768 - 1769 - connector->interlace_allowed = 0; 1770 - connector->doublescan_allowed = 0; 1771 1681 1772 1682 /* setup the DDC bus. 
*/ 1773 1683 if (output_device == SDVOB) ··· 1768 1694 i2cbus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOC"); 1769 1695 1770 1696 if (!i2cbus) 1771 - goto err_connector; 1697 + goto err_inteloutput; 1772 1698 1773 1699 sdvo_priv->i2c_bus = i2cbus; 1774 1700 ··· 1784 1710 intel_output->i2c_bus = i2cbus; 1785 1711 intel_output->dev_priv = sdvo_priv; 1786 1712 1787 - 1788 1713 /* Read the regs to test if we can talk to the device */ 1789 1714 for (i = 0; i < 0x40; i++) { 1790 1715 if (!intel_sdvo_read_byte(intel_output, i, &ch[i])) { ··· 1793 1720 } 1794 1721 } 1795 1722 1723 + /* setup the DDC bus. */ 1724 + if (output_device == SDVOB) 1725 + ddcbus = intel_i2c_create(dev, GPIOE, "SDVOB DDC BUS"); 1726 + else 1727 + ddcbus = intel_i2c_create(dev, GPIOE, "SDVOC DDC BUS"); 1728 + 1729 + if (ddcbus == NULL) 1730 + goto err_i2c; 1731 + 1732 + intel_sdvo_i2c_bit_algo.functionality = 1733 + intel_output->i2c_bus->adapter.algo->functionality; 1734 + ddcbus->adapter.algo = &intel_sdvo_i2c_bit_algo; 1735 + intel_output->ddc_bus = ddcbus; 1736 + 1737 + /* In the default case sdvo lvds is false */ 1738 + sdvo_priv->is_lvds = false; 1796 1739 intel_sdvo_get_capabilities(intel_output, &sdvo_priv->caps); 1797 1740 1798 1741 if (sdvo_priv->caps.output_flags & ··· 1818 1729 else 1819 1730 sdvo_priv->controlled_output = SDVO_OUTPUT_TMDS1; 1820 1731 1821 - connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1822 1732 encoder_type = DRM_MODE_ENCODER_TMDS; 1823 1733 connector_type = DRM_MODE_CONNECTOR_DVID; 1824 1734 ··· 1835 1747 else if (sdvo_priv->caps.output_flags & SDVO_OUTPUT_SVID0) 1836 1748 { 1837 1749 sdvo_priv->controlled_output = SDVO_OUTPUT_SVID0; 1838 - connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1839 1750 encoder_type = DRM_MODE_ENCODER_TVDAC; 1840 1751 connector_type = DRM_MODE_CONNECTOR_SVIDEO; 1841 1752 sdvo_priv->is_tv = true; ··· 1843 1756 else if (sdvo_priv->caps.output_flags & SDVO_OUTPUT_RGB0) 1844 1757 {
sdvo_priv->controlled_output = SDVO_OUTPUT_RGB0; 1846 - connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1847 1759 encoder_type = DRM_MODE_ENCODER_DAC; 1848 1760 connector_type = DRM_MODE_CONNECTOR_VGA; 1849 1761 } 1850 1762 else if (sdvo_priv->caps.output_flags & SDVO_OUTPUT_RGB1) 1851 1763 { 1852 1764 sdvo_priv->controlled_output = SDVO_OUTPUT_RGB1; 1853 - connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1854 1765 encoder_type = DRM_MODE_ENCODER_DAC; 1855 1766 connector_type = DRM_MODE_CONNECTOR_VGA; 1856 1767 } 1857 1768 else if (sdvo_priv->caps.output_flags & SDVO_OUTPUT_LVDS0) 1858 1769 { 1859 1770 sdvo_priv->controlled_output = SDVO_OUTPUT_LVDS0; 1860 - connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1861 1771 encoder_type = DRM_MODE_ENCODER_LVDS; 1862 1772 connector_type = DRM_MODE_CONNECTOR_LVDS; 1773 + sdvo_priv->is_lvds = true; 1863 1774 } 1864 1775 else if (sdvo_priv->caps.output_flags & SDVO_OUTPUT_LVDS1) 1865 1776 { 1866 1777 sdvo_priv->controlled_output = SDVO_OUTPUT_LVDS1; 1867 - connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1868 1778 encoder_type = DRM_MODE_ENCODER_LVDS; 1869 1779 connector_type = DRM_MODE_CONNECTOR_LVDS; 1780 + sdvo_priv->is_lvds = true; 1870 1781 } 1871 1782 else 1872 1783 { ··· 1880 1795 goto err_i2c; 1881 1796 } 1882 1797 1798 + connector = &intel_output->base; 1799 + drm_connector_init(dev, connector, &intel_sdvo_connector_funcs, 1800 + connector_type); 1801 + drm_connector_helper_add(connector, &intel_sdvo_connector_helper_funcs); 1802 + connector->interlace_allowed = 0; 1803 + connector->doublescan_allowed = 0; 1804 + connector->display_info.subpixel_order = SubPixelHorizontalRGB; 1805 + 1883 1806 drm_encoder_init(dev, &intel_output->enc, &intel_sdvo_enc_funcs, encoder_type); 1884 1807 drm_encoder_helper_add(&intel_output->enc, &intel_sdvo_helper_funcs); 1885 - connector->connector_type = connector_type; 1886 1808 1887 1809 
drm_mode_connector_attach_encoder(&intel_output->base, &intel_output->enc); 1888 1810 drm_sysfs_connector_add(connector); ··· 1921 1829 sdvo_priv->caps.output_flags & 1922 1830 (SDVO_OUTPUT_TMDS1 | SDVO_OUTPUT_RGB1) ? 'Y' : 'N'); 1923 1831 1924 - intel_output->ddc_bus = i2cbus; 1925 - 1926 1832 return true; 1927 1833 1928 1834 err_i2c: 1835 + if (ddcbus != NULL) 1836 + intel_i2c_destroy(intel_output->ddc_bus); 1929 1837 intel_i2c_destroy(intel_output->i2c_bus); 1930 - err_connector: 1931 - drm_connector_cleanup(connector); 1838 + err_inteloutput: 1932 1839 kfree(intel_output); 1933 1840 1934 1841 return false;
+1 -1
drivers/hwmon/lm78.c
··· 182 182 .name = "lm78", 183 183 }, 184 184 .probe = lm78_isa_probe, 185 - .remove = lm78_isa_remove, 185 + .remove = __devexit_p(lm78_isa_remove), 186 186 }; 187 187 188 188
+11
drivers/ide/ide-pci-generic.c
··· 33 33 module_param_named(all_generic_ide, ide_generic_all, bool, 0444); 34 34 MODULE_PARM_DESC(all_generic_ide, "IDE generic will claim all unknown PCI IDE storage controllers."); 35 35 36 + static void netcell_quirkproc(ide_drive_t *drive) 37 + { 38 + /* mark words 85-87 as valid */ 39 + drive->id[ATA_ID_CSF_DEFAULT] |= 0x4000; 40 + } 41 + 42 + static const struct ide_port_ops netcell_port_ops = { 43 + .quirkproc = netcell_quirkproc, 44 + }; 45 + 36 46 #define DECLARE_GENERIC_PCI_DEV(extra_flags) \ 37 47 { \ 38 48 .name = DRV_NAME, \ ··· 84 74 85 75 { /* 6: Revolution */ 86 76 .name = DRV_NAME, 77 + .port_ops = &netcell_port_ops, 87 78 .host_flags = IDE_HFLAG_CLEAR_SIMPLEX | 88 79 IDE_HFLAG_TRUST_BIOS_FOR_DMA | 89 80 IDE_HFLAG_OFF_BOARD,
+5 -1
drivers/idle/i7300_idle.c
··· 41 41 module_param_named(debug, debug, uint, 0644); 42 42 MODULE_PARM_DESC(debug, "Enable debug printks in this driver"); 43 43 44 + static int forceload; 45 + module_param_named(forceload, forceload, uint, 0644); 46 + MODULE_PARM_DESC(forceload, "Enable driver testing on unvalidated i5000"); 47 + 44 48 #define dprintk(fmt, arg...) \ 45 49 do { if (debug) printk(KERN_INFO I7300_PRINT fmt, ##arg); } while (0) 46 50 ··· 556 552 cpus_clear(idle_cpumask); 557 553 total_us = 0; 558 554 559 - if (i7300_idle_platform_probe(&fbd_dev, &ioat_dev)) 555 + if (i7300_idle_platform_probe(&fbd_dev, &ioat_dev, forceload)) 560 556 return -ENODEV; 561 557 562 558 if (i7300_idle_thrt_save())
+1
drivers/input/input.c
··· 42 42 ABS_MT_POSITION_Y, 43 43 ABS_MT_TOOL_TYPE, 44 44 ABS_MT_BLOB_ID, 45 + ABS_MT_TRACKING_ID, 45 46 0 46 47 }; 47 48 static unsigned long input_abs_bypass[BITS_TO_LONGS(ABS_CNT)];
+1 -1
drivers/input/serio/libps2.c
··· 210 210 timeout = wait_event_timeout(ps2dev->wait, 211 211 !(ps2dev->flags & PS2_FLAG_CMD1), timeout); 212 212 213 - if (ps2dev->cmdcnt && timeout > 0) { 213 + if (ps2dev->cmdcnt && !(ps2dev->flags & PS2_FLAG_CMD1)) { 214 214 215 215 timeout = ps2_adjust_timeout(ps2dev, command, timeout); 216 216 wait_event_timeout(ps2dev->wait,
+1 -1
drivers/input/touchscreen/ucb1400_ts.c
··· 419 419 #ifdef CONFIG_PM 420 420 static int ucb1400_ts_resume(struct platform_device *dev) 421 421 { 422 - struct ucb1400_ts *ucb = platform_get_drvdata(dev); 422 + struct ucb1400_ts *ucb = dev->dev.platform_data; 423 423 424 424 if (ucb->ts_task) { 425 425 /*
+1 -1
drivers/isdn/gigaset/isocdata.c
··· 175 175 return -EINVAL; 176 176 } 177 177 src = iwb->read; 178 - if (unlikely(limit > BAS_OUTBUFSIZE + BAS_OUTBUFPAD || 178 + if (unlikely(limit >= BAS_OUTBUFSIZE + BAS_OUTBUFPAD || 179 179 (read < src && limit >= src))) { 180 180 pr_err("isoc write buffer frame reservation violated\n"); 181 181 return -EFAULT;
+10 -9
drivers/lguest/x86/core.c
··· 358 358 if (emulate_insn(cpu)) 359 359 return; 360 360 } 361 + /* If KVM is active, the vmcall instruction triggers a 362 + * General Protection Fault. Normally it triggers an 363 + * invalid opcode fault (6): */ 364 + case 6: 365 + /* We need to check if ring == GUEST_PL and 366 + * faulting instruction == vmcall. */ 367 + if (is_hypercall(cpu)) { 368 + rewrite_hypercall(cpu); 369 + return; 370 + } 361 371 break; 362 372 case 14: /* We've intercepted a Page Fault. */ 363 373 /* The Guest accessed a virtual address that wasn't mapped. ··· 413 403 * up the pointer now to indicate a hypercall is pending. */ 414 404 cpu->hcall = (struct hcall_args *)cpu->regs; 415 405 return; 416 - case 6: 417 - /* kvm hypercalls trigger an invalid opcode fault (6). 418 - * We need to check if ring == GUEST_PL and 419 - * faulting instruction == vmcall. */ 420 - if (is_hypercall(cpu)) { 421 - rewrite_hypercall(cpu); 422 - return; 423 - } 424 - break; 425 406 } 426 407 427 408 /* We didn't handle the trap, so it needs to go to the Guest. */
+7 -6
drivers/md/bitmap.c
··· 1097 1097 } 1098 1098 bitmap->allclean = 1; 1099 1099 1100 + spin_lock_irqsave(&bitmap->lock, flags); 1100 1101 for (j = 0; j < bitmap->chunks; j++) { 1101 1102 bitmap_counter_t *bmc; 1102 - spin_lock_irqsave(&bitmap->lock, flags); 1103 - if (!bitmap->filemap) { 1103 + if (!bitmap->filemap) 1104 1104 /* error or shutdown */ 1105 - spin_unlock_irqrestore(&bitmap->lock, flags); 1106 1105 break; 1107 - } 1108 1106 1109 1107 page = filemap_get_page(bitmap, j); 1110 1108 ··· 1119 1121 write_page(bitmap, page, 0); 1120 1122 bitmap->allclean = 0; 1121 1123 } 1124 + spin_lock_irqsave(&bitmap->lock, flags); 1125 + j |= (PAGE_BITS - 1); 1122 1126 continue; 1123 1127 } 1124 1128 ··· 1181 1181 ext2_clear_bit(file_page_offset(j), paddr); 1182 1182 kunmap_atomic(paddr, KM_USER0); 1183 1183 } 1184 - } 1185 - spin_unlock_irqrestore(&bitmap->lock, flags); 1184 + } else 1185 + j |= PAGE_COUNTER_MASK; 1186 1186 } 1187 + spin_unlock_irqrestore(&bitmap->lock, flags); 1187 1188 1188 1189 /* now sync the final page */ 1189 1190 if (lastpage != NULL) {
+21 -10
drivers/md/md.c
··· 1375 1375 1376 1376 sb->raid_disks = cpu_to_le32(mddev->raid_disks); 1377 1377 sb->size = cpu_to_le64(mddev->dev_sectors); 1378 + sb->chunksize = cpu_to_le32(mddev->chunk_size >> 9); 1379 + sb->level = cpu_to_le32(mddev->level); 1380 + sb->layout = cpu_to_le32(mddev->layout); 1378 1381 1379 1382 if (mddev->bitmap && mddev->bitmap_file == NULL) { 1380 1383 sb->bitmap_offset = cpu_to_le32((__u32)mddev->bitmap_offset); ··· 3306 3303 action_show(mddev_t *mddev, char *page) 3307 3304 { 3308 3305 char *type = "idle"; 3309 - if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) || 3306 + if (test_bit(MD_RECOVERY_FROZEN, &mddev->recovery)) 3307 + type = "frozen"; 3308 + else if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery) || 3310 3309 (!mddev->ro && test_bit(MD_RECOVERY_NEEDED, &mddev->recovery))) { 3311 3310 if (test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery)) 3312 3311 type = "reshape"; ··· 3331 3326 if (!mddev->pers || !mddev->pers->sync_request) 3332 3327 return -EINVAL; 3333 3328 3334 - if (cmd_match(page, "idle")) { 3329 + if (cmd_match(page, "frozen")) 3330 + set_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 3331 + else 3332 + clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 3333 + 3334 + if (cmd_match(page, "idle") || cmd_match(page, "frozen")) { 3335 3335 if (mddev->sync_thread) { 3336 3336 set_bit(MD_RECOVERY_INTR, &mddev->recovery); 3337 3337 md_unregister_thread(mddev->sync_thread); ··· 3690 3680 if (strict_blocks_to_sectors(buf, &sectors) < 0) 3691 3681 return -EINVAL; 3692 3682 if (mddev->pers && mddev->pers->size(mddev, 0, 0) < sectors) 3693 - return -EINVAL; 3683 + return -E2BIG; 3694 3684 3695 3685 mddev->external_size = 1; 3696 3686 } ··· 5567 5557 .owner = THIS_MODULE, 5568 5558 .open = md_open, 5569 5559 .release = md_release, 5570 - .locked_ioctl = md_ioctl, 5560 + .ioctl = md_ioctl, 5571 5561 .getgeo = md_getgeo, 5572 5562 .media_changed = md_media_changed, 5573 5563 .revalidate_disk= md_revalidate, ··· 6362 6352 6363 6353 skipped = 0; 6364 6354 
6365 - if ((mddev->curr_resync > mddev->curr_resync_completed && 6366 - (mddev->curr_resync - mddev->curr_resync_completed) 6367 - > (max_sectors >> 4)) || 6368 - (j - mddev->curr_resync_completed)*2 6369 - >= mddev->resync_max - mddev->curr_resync_completed 6370 - ) { 6355 + if (!test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) && 6356 + ((mddev->curr_resync > mddev->curr_resync_completed && 6357 + (mddev->curr_resync - mddev->curr_resync_completed) 6358 + > (max_sectors >> 4)) || 6359 + (j - mddev->curr_resync_completed)*2 6360 + >= mddev->resync_max - mddev->curr_resync_completed 6361 + )) { 6371 6362 /* time to update curr_resync_completed */ 6372 6363 blk_unplug(mddev->queue); 6373 6364 wait_event(mddev->recovery_wait,
+3 -3
drivers/md/raid5.c
··· 3811 3811 safepos = conf->reshape_safe; 3812 3812 sector_div(safepos, data_disks); 3813 3813 if (mddev->delta_disks < 0) { 3814 - writepos -= reshape_sectors; 3814 + writepos -= min_t(sector_t, reshape_sectors, writepos); 3815 3815 readpos += reshape_sectors; 3816 3816 safepos += reshape_sectors; 3817 3817 } else { 3818 3818 writepos += reshape_sectors; 3819 - readpos -= reshape_sectors; 3820 - safepos -= reshape_sectors; 3819 + readpos -= min_t(sector_t, reshape_sectors, readpos); 3820 + safepos -= min_t(sector_t, reshape_sectors, safepos); 3821 3821 } 3822 3822 3823 3823 /* 'writepos' is the most advanced device address we might write.
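The three `min_t()` clamps above guard against unsigned wrap-around: `sector_t` is unsigned, so subtracting `reshape_sectors` from a smaller position would wrap to an enormous sector number instead of going negative. A standalone sketch of the pattern (hypothetical helper name, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;	/* sector_t is unsigned in the kernel */

/* Clamp the step before subtracting, as the min_t(sector_t, ...) fix
 * does: pos - step would wrap around whenever step > pos. */
static sector_t step_back(sector_t pos, sector_t step)
{
	sector_t clamped = step < pos ? step : pos;
	return pos - clamped;
}
```

Without the clamp, `step_back(10, 64)` on a 64-bit `sector_t` would yield `2^64 - 54` rather than stopping at zero.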
+23 -20
drivers/mtd/nand/mxc_nand.c
··· 831 831 break; 832 832 833 833 case NAND_CMD_READID: 834 + host->col_addr = 0; 834 835 send_read_id(host); 835 836 break; 836 837 ··· 868 867 mtd->priv = this; 869 868 mtd->owner = THIS_MODULE; 870 869 mtd->dev.parent = &pdev->dev; 870 + mtd->name = "mxc_nand"; 871 871 872 872 /* 50 us command delay time */ 873 873 this->chip_delay = 5; ··· 884 882 this->verify_buf = mxc_nand_verify_buf; 885 883 886 884 host->clk = clk_get(&pdev->dev, "nfc"); 887 - if (IS_ERR(host->clk)) 885 + if (IS_ERR(host->clk)) { 886 + err = PTR_ERR(host->clk); 888 887 goto eclk; 888 + } 889 889 890 890 clk_enable(host->clk); 891 891 host->clk_act = 1; ··· 900 896 901 897 host->regs = ioremap(res->start, res->end - res->start + 1); 902 898 if (!host->regs) { 903 - err = -EIO; 899 + err = -ENOMEM; 904 900 goto eres; 905 901 } 906 902 ··· 1015 1011 #ifdef CONFIG_PM 1016 1012 static int mxcnd_suspend(struct platform_device *pdev, pm_message_t state) 1017 1013 { 1018 - struct mtd_info *info = platform_get_drvdata(pdev); 1014 + struct mtd_info *mtd = platform_get_drvdata(pdev); 1015 + struct nand_chip *nand_chip = mtd->priv; 1016 + struct mxc_nand_host *host = nand_chip->priv; 1019 1017 int ret = 0; 1020 1018 1021 1019 DEBUG(MTD_DEBUG_LEVEL0, "MXC_ND : NAND suspend\n"); 1022 - if (info) 1023 - ret = info->suspend(info); 1024 - 1025 - /* Disable the NFC clock */ 1026 - clk_disable(nfc_clk); /* FIXME */ 1020 + if (mtd) { 1021 + ret = mtd->suspend(mtd); 1022 + /* Disable the NFC clock */ 1023 + clk_disable(host->clk); 1024 + } 1027 1025 1028 1026 return ret; 1029 1027 } 1030 1028 1031 1029 static int mxcnd_resume(struct platform_device *pdev) 1032 1030 { 1033 - struct mtd_info *info = platform_get_drvdata(pdev); 1031 + struct mtd_info *mtd = platform_get_drvdata(pdev); 1032 + struct nand_chip *nand_chip = mtd->priv; 1033 + struct mxc_nand_host *host = nand_chip->priv; 1034 1034 int ret = 0; 1035 1035 1036 1036 DEBUG(MTD_DEBUG_LEVEL0, "MXC_ND : NAND resume\n"); 1037 - /* Enable the NFC clock */ 
1038 - clk_enable(nfc_clk); /* FIXME */ 1039 1037 1040 - if (info) 1041 - info->resume(info); 1038 + if (mtd) { 1039 + /* Enable the NFC clock */ 1040 + clk_enable(host->clk); 1041 + mtd->resume(mtd); 1042 + } 1042 1043 1043 1044 return ret; 1044 1045 } ··· 1064 1055 1065 1056 static int __init mxc_nd_init(void) 1066 1057 { 1067 - /* Register the device driver structure. */ 1068 - pr_info("MXC MTD nand Driver\n"); 1069 - if (platform_driver_probe(&mxcnd_driver, mxcnd_probe) != 0) { 1070 - printk(KERN_ERR "Driver register failed for mxcnd_driver\n"); 1071 - return -ENODEV; 1072 - } 1073 - return 0; 1058 + return platform_driver_probe(&mxcnd_driver, mxcnd_probe); 1074 1059 } 1075 1060 1076 1061 static void __exit mxc_nd_cleanup(void)
+4
drivers/net/3c509.c
··· 480 480 481 481 #ifdef CONFIG_EISA 482 482 static struct eisa_device_id el3_eisa_ids[] = { 483 + { "TCM5090" }, 484 + { "TCM5091" }, 483 485 { "TCM5092" }, 484 486 { "TCM5093" }, 487 + { "TCM5094" }, 485 488 { "TCM5095" }, 489 + { "TCM5098" }, 486 490 { "" } 487 491 }; 488 492 MODULE_DEVICE_TABLE(eisa, el3_eisa_ids);
+1 -1
drivers/net/Makefile
··· 102 102 obj-$(CONFIG_NET) += Space.o loopback.o 103 103 obj-$(CONFIG_SEEQ8005) += seeq8005.o 104 104 obj-$(CONFIG_NET_SB1000) += sb1000.o 105 - obj-$(CONFIG_MAC8390) += mac8390.o 8390.o 105 + obj-$(CONFIG_MAC8390) += mac8390.o 106 106 obj-$(CONFIG_APNE) += apne.o 8390.o 107 107 obj-$(CONFIG_PCMCIA_PCNET) += 8390.o 108 108 obj-$(CONFIG_HP100) += hp100.o
+1
drivers/net/atl1e/atl1e_main.c
··· 37 37 */ 38 38 static struct pci_device_id atl1e_pci_tbl[] = { 39 39 {PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, PCI_DEVICE_ID_ATTANSIC_L1E)}, 40 + {PCI_DEVICE(PCI_VENDOR_ID_ATTANSIC, 0x1066)}, 40 41 /* required last entry */ 41 42 { 0 } 42 43 };
+6
drivers/net/atlx/atl1.c
··· 82 82 83 83 #include "atl1.h" 84 84 85 + #define ATLX_DRIVER_VERSION "2.1.3" 86 + MODULE_AUTHOR("Xiong Huang <xiong.huang@atheros.com>, \ 87 + Chris Snook <csnook@redhat.com>, Jay Cliburn <jcliburn@gmail.com>"); 88 + MODULE_LICENSE("GPL"); 89 + MODULE_VERSION(ATLX_DRIVER_VERSION); 90 + 85 91 /* Temporary hack for merging atl1 and atl2 */ 86 92 #include "atlx.c" 87 93
-6
drivers/net/atlx/atlx.h
··· 29 29 #include <linux/module.h> 30 30 #include <linux/types.h> 31 31 32 - #define ATLX_DRIVER_VERSION "2.1.3" 33 - MODULE_AUTHOR("Xiong Huang <xiong.huang@atheros.com>, \ 34 - Chris Snook <csnook@redhat.com>, Jay Cliburn <jcliburn@gmail.com>"); 35 - MODULE_LICENSE("GPL"); 36 - MODULE_VERSION(ATLX_DRIVER_VERSION); 37 - 38 32 #define ATLX_ERR_PHY 2 39 33 #define ATLX_ERR_PHY_SPEED 7 40 34 #define ATLX_ERR_PHY_RES 8
+14 -15
drivers/net/bfin_mac.c
··· 979 979 return 0; 980 980 } 981 981 982 - static const struct net_device_ops bfin_mac_netdev_ops = { 983 - .ndo_open = bfin_mac_open, 984 - .ndo_stop = bfin_mac_close, 985 - .ndo_start_xmit = bfin_mac_hard_start_xmit, 986 - .ndo_set_mac_address = bfin_mac_set_mac_address, 987 - .ndo_tx_timeout = bfin_mac_timeout, 988 - .ndo_set_multicast_list = bfin_mac_set_multicast_list, 989 - .ndo_validate_addr = eth_validate_addr, 990 - .ndo_change_mtu = eth_change_mtu, 991 - #ifdef CONFIG_NET_POLL_CONTROLLER 992 - .ndo_poll_controller = bfin_mac_poll, 993 - #endif 994 - }; 995 - 996 982 /* 997 - * 998 983 * this makes the board clean up everything that it can 999 984 * and not talk to the outside world. Caused by 1000 985 * an 'ifconfig ethX down' ··· 1003 1018 1004 1019 return 0; 1005 1020 } 1021 + 1022 + static const struct net_device_ops bfin_mac_netdev_ops = { 1023 + .ndo_open = bfin_mac_open, 1024 + .ndo_stop = bfin_mac_close, 1025 + .ndo_start_xmit = bfin_mac_hard_start_xmit, 1026 + .ndo_set_mac_address = bfin_mac_set_mac_address, 1027 + .ndo_tx_timeout = bfin_mac_timeout, 1028 + .ndo_set_multicast_list = bfin_mac_set_multicast_list, 1029 + .ndo_validate_addr = eth_validate_addr, 1030 + .ndo_change_mtu = eth_change_mtu, 1031 + #ifdef CONFIG_NET_POLL_CONTROLLER 1032 + .ndo_poll_controller = bfin_mac_poll, 1033 + #endif 1034 + }; 1006 1035 1007 1036 static int __devinit bfin_mac_probe(struct platform_device *pdev) 1008 1037 {
+2 -2
drivers/net/cxgb3/adapter.h
··· 85 85 struct page *page; 86 86 void *va; 87 87 unsigned int offset; 88 - u64 *p_cnt; 89 - DECLARE_PCI_UNMAP_ADDR(mapping); 88 + unsigned long *p_cnt; 89 + dma_addr_t mapping; 90 90 }; 91 91 92 92 struct rx_desc;
+5 -3
drivers/net/cxgb3/cxgb3_main.c
··· 2496 2496 for_each_port(adapter, i) { 2497 2497 struct net_device *dev = adapter->port[i]; 2498 2498 struct port_info *p = netdev_priv(dev); 2499 + int link_fault; 2499 2500 2500 2501 spin_lock_irq(&adapter->work_lock); 2501 - if (p->link_fault) { 2502 + link_fault = p->link_fault; 2503 + spin_unlock_irq(&adapter->work_lock); 2504 + 2505 + if (link_fault) { 2502 2506 t3_link_fault(adapter, i); 2503 - spin_unlock_irq(&adapter->work_lock); 2504 2507 continue; 2505 2508 } 2506 - spin_unlock_irq(&adapter->work_lock); 2507 2509 2508 2510 if (!(p->phy.caps & SUPPORTED_IRQ) && netif_running(dev)) { 2509 2511 t3_xgm_intr_disable(adapter, i);
+5 -6
drivers/net/cxgb3/sge.c
··· 355 355 (*d->pg_chunk.p_cnt)--; 356 356 if (!*d->pg_chunk.p_cnt) 357 357 pci_unmap_page(pdev, 358 - pci_unmap_addr(&d->pg_chunk, mapping), 358 + d->pg_chunk.mapping, 359 359 q->alloc_size, PCI_DMA_FROMDEVICE); 360 360 361 361 put_page(d->pg_chunk.page); ··· 454 454 q->pg_chunk.offset = 0; 455 455 mapping = pci_map_page(adapter->pdev, q->pg_chunk.page, 456 456 0, q->alloc_size, PCI_DMA_FROMDEVICE); 457 - pci_unmap_addr_set(&q->pg_chunk, mapping, mapping); 457 + q->pg_chunk.mapping = mapping; 458 458 } 459 459 sd->pg_chunk = q->pg_chunk; 460 460 ··· 511 511 nomem: q->alloc_failed++; 512 512 break; 513 513 } 514 - mapping = pci_unmap_addr(&sd->pg_chunk, mapping) + 515 - sd->pg_chunk.offset; 514 + mapping = sd->pg_chunk.mapping + sd->pg_chunk.offset; 516 515 pci_unmap_addr_set(sd, dma_addr, mapping); 517 516 518 517 add_one_rx_chunk(mapping, d, q->gen); ··· 880 881 (*sd->pg_chunk.p_cnt)--; 881 882 if (!*sd->pg_chunk.p_cnt) 882 883 pci_unmap_page(adap->pdev, 883 - pci_unmap_addr(&sd->pg_chunk, mapping), 884 + sd->pg_chunk.mapping, 884 885 fl->alloc_size, 885 886 PCI_DMA_FROMDEVICE); 886 887 if (!skb) { ··· 2095 2096 (*sd->pg_chunk.p_cnt)--; 2096 2097 if (!*sd->pg_chunk.p_cnt) 2097 2098 pci_unmap_page(adap->pdev, 2098 - pci_unmap_addr(&sd->pg_chunk, mapping), 2099 + sd->pg_chunk.mapping, 2099 2100 fl->alloc_size, 2100 2101 PCI_DMA_FROMDEVICE); 2101 2102
+5
drivers/net/cxgb3/t3_hw.c
··· 1274 1274 A_XGM_INT_STATUS + mac->offset); 1275 1275 link_fault &= F_LINKFAULTCHANGE; 1276 1276 1277 + link_ok = lc->link_ok; 1278 + speed = lc->speed; 1279 + duplex = lc->duplex; 1280 + fc = lc->fc; 1281 + 1277 1282 phy->ops->get_link_status(phy, &link_ok, &speed, &duplex, &fc); 1278 1283 1279 1284 if (link_fault) {
+3 -2
drivers/net/e1000/e1000_main.c
··· 4027 4027 PCI_DMA_FROMDEVICE); 4028 4028 4029 4029 length = le16_to_cpu(rx_desc->length); 4030 - 4031 - if (unlikely(!(status & E1000_RXD_STAT_EOP))) { 4030 + /* !EOP means multiple descriptors were used to store a single 4031 + * packet; also make sure the frame isn't just the CRC */ 4032 + if (unlikely(!(status & E1000_RXD_STAT_EOP) || (length <= 4))) { 4032 4033 /* All receives must fit into a single buffer */ 4033 4034 E1000_DBG("%s: Receive packet consumed multiple" 4034 4035 " buffers\n", netdev->name);
+13 -2
drivers/net/forcedeth.c
··· 897 897 }; 898 898 static int phy_cross = NV_CROSSOVER_DETECTION_DISABLED; 899 899 900 + /* 901 + * Power down phy when interface is down (persists through reboot; 902 + * older Linux and other OSes may not power it up again) 903 + */ 904 + static int phy_power_down = 0; 905 + 900 906 static inline struct fe_priv *get_nvpriv(struct net_device *dev) 901 907 { 902 908 return netdev_priv(dev); ··· 1491 1485 1492 1486 /* restart auto negotiation, power down phy */ 1493 1487 mii_control = mii_rw(dev, np->phyaddr, MII_BMCR, MII_READ); 1494 - mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE | BMCR_PDOWN); 1488 + mii_control |= (BMCR_ANRESTART | BMCR_ANENABLE); 1489 + if (phy_power_down) { 1490 + mii_control |= BMCR_PDOWN; 1491 + } 1495 1492 if (mii_rw(dev, np->phyaddr, MII_BMCR, mii_control)) { 1496 1493 return PHY_ERROR; 1497 1494 } ··· 5522 5513 5523 5514 nv_drain_rxtx(dev); 5524 5515 5525 - if (np->wolenabled) { 5516 + if (np->wolenabled || !phy_power_down) { 5526 5517 writel(NVREG_PFF_ALWAYS|NVREG_PFF_MYADDR, base + NvRegPacketFilterFlags); 5527 5518 nv_start_rx(dev); 5528 5519 } else { ··· 6376 6367 MODULE_PARM_DESC(dma_64bit, "High DMA is enabled by setting to 1 and disabled by setting to 0."); 6377 6368 module_param(phy_cross, int, 0); 6378 6369 MODULE_PARM_DESC(phy_cross, "Phy crossover detection for Realtek 8201 phy is enabled by setting to 1 and disabled by setting to 0."); 6370 + module_param(phy_power_down, int, 0); 6371 + MODULE_PARM_DESC(phy_power_down, "Power down phy and disable link when interface is down (1), or leave phy powered up (0)."); 6379 6372 6380 6373 MODULE_AUTHOR("Manfred Spraul <manfred@colorfullife.com>"); 6381 6374 MODULE_DESCRIPTION("Reverse Engineered nForce ethernet driver");
+10 -1
drivers/net/gianfar.c
··· 1885 1885 1886 1886 if (unlikely(!newskb)) 1887 1887 newskb = skb; 1888 - else if (skb) 1888 + else if (skb) { 1889 + /* 1890 + * We need to reset ->data to what it 1891 + * was before gfar_new_skb() re-aligned 1892 + * it to an RXBUF_ALIGNMENT boundary 1893 + * before we put the skb back on the 1894 + * recycle list. 1895 + */ 1896 + skb->data = skb->head + NET_SKB_PAD; 1889 1897 __skb_queue_head(&priv->rx_recycle, skb); 1898 + } 1890 1899 } else { 1891 1900 /* Increment the number of packets */ 1892 1901 dev->stats.rx_packets++;
+1 -1
drivers/net/gianfar.h
··· 259 259 (IEVENT_RXC | IEVENT_BSY | IEVENT_EBERR | IEVENT_MSRO | \ 260 260 IEVENT_BABT | IEVENT_TXC | IEVENT_TXE | IEVENT_LC \ 261 261 | IEVENT_CRL | IEVENT_XFUN | IEVENT_DPE | IEVENT_PERR \ 262 - | IEVENT_MAG) 262 + | IEVENT_MAG | IEVENT_BABR) 263 263 264 264 #define IMASK_INIT_CLEAR 0x00000000 265 265 #define IMASK_BABR 0x80000000
+6 -6
drivers/net/mac8390.c
··· 304 304 if (!MACH_IS_MAC) 305 305 return ERR_PTR(-ENODEV); 306 306 307 - dev = alloc_ei_netdev(); 307 + dev = ____alloc_ei_netdev(0); 308 308 if (!dev) 309 309 return ERR_PTR(-ENOMEM); 310 310 ··· 481 481 static const struct net_device_ops mac8390_netdev_ops = { 482 482 .ndo_open = mac8390_open, 483 483 .ndo_stop = mac8390_close, 484 - .ndo_start_xmit = ei_start_xmit, 485 - .ndo_tx_timeout = ei_tx_timeout, 486 - .ndo_get_stats = ei_get_stats, 487 - .ndo_set_multicast_list = ei_set_multicast_list, 484 + .ndo_start_xmit = __ei_start_xmit, 485 + .ndo_tx_timeout = __ei_tx_timeout, 486 + .ndo_get_stats = __ei_get_stats, 487 + .ndo_set_multicast_list = __ei_set_multicast_list, 488 488 .ndo_validate_addr = eth_validate_addr, 489 489 .ndo_set_mac_address = eth_mac_addr, 490 490 .ndo_change_mtu = eth_change_mtu, 491 491 #ifdef CONFIG_NET_POLL_CONTROLLER 492 - .ndo_poll_controller = ei_poll, 492 + .ndo_poll_controller = __ei_poll, 493 493 #endif 494 494 }; 495 495
+4 -4
drivers/net/mlx4/en_tx.c
··· 426 426 427 427 INC_PERF_COUNTER(priv->pstats.tx_poll); 428 428 429 - if (!spin_trylock(&ring->comp_lock)) { 429 + if (!spin_trylock_irq(&ring->comp_lock)) { 430 430 mod_timer(&cq->timer, jiffies + MLX4_EN_TX_POLL_TIMEOUT); 431 431 return; 432 432 } ··· 439 439 if (inflight && priv->port_up) 440 440 mod_timer(&cq->timer, jiffies + MLX4_EN_TX_POLL_TIMEOUT); 441 441 442 - spin_unlock(&ring->comp_lock); 442 + spin_unlock_irq(&ring->comp_lock); 443 443 } 444 444 445 445 static struct mlx4_en_tx_desc *mlx4_en_bounce_to_desc(struct mlx4_en_priv *priv, ··· 482 482 483 483 /* Poll the CQ every mlx4_en_TX_MODER_POLL packets */ 484 484 if ((++ring->poll_cnt & (MLX4_EN_TX_POLL_MODER - 1)) == 0) 485 - if (spin_trylock(&ring->comp_lock)) { 485 + if (spin_trylock_irq(&ring->comp_lock)) { 486 486 mlx4_en_process_tx_cq(priv->dev, cq); 487 - spin_unlock(&ring->comp_lock); 487 + spin_unlock_irq(&ring->comp_lock); 488 488 } 489 489 } 490 490
+62 -50
drivers/net/r8169.c
··· 3554 3554 int handled = 0; 3555 3555 int status; 3556 3556 3557 + /* loop handling interrupts until we have no new ones or 3558 + * we hit an invalid/hotplug case. 3559 + */ 3557 3560 status = RTL_R16(IntrStatus); 3561 + while (status && status != 0xffff) { 3562 + handled = 1; 3558 3563 3559 - /* hotplug/major error/no more work/shared irq */ 3560 - if ((status == 0xffff) || !status) 3561 - goto out; 3562 - 3563 - handled = 1; 3564 - 3565 - if (unlikely(!netif_running(dev))) { 3566 - rtl8169_asic_down(ioaddr); 3567 - goto out; 3568 - } 3569 - 3570 - status &= tp->intr_mask; 3571 - RTL_W16(IntrStatus, 3572 - (status & RxFIFOOver) ? (status | RxOverflow) : status); 3573 - 3574 - if (!(status & tp->intr_event)) 3575 - goto out; 3576 - 3577 - /* Work around for rx fifo overflow */ 3578 - if (unlikely(status & RxFIFOOver) && 3579 - (tp->mac_version == RTL_GIGA_MAC_VER_11)) { 3580 - netif_stop_queue(dev); 3581 - rtl8169_tx_timeout(dev); 3582 - goto out; 3583 - } 3584 - 3585 - if (unlikely(status & SYSErr)) { 3586 - rtl8169_pcierr_interrupt(dev); 3587 - goto out; 3588 - } 3589 - 3590 - if (status & LinkChg) 3591 - rtl8169_check_link_status(dev, tp, ioaddr); 3592 - 3593 - if (status & tp->napi_event) { 3594 - RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event); 3595 - tp->intr_mask = ~tp->napi_event; 3596 - 3597 - if (likely(napi_schedule_prep(&tp->napi))) 3598 - __napi_schedule(&tp->napi); 3599 - else if (netif_msg_intr(tp)) { 3600 - printk(KERN_INFO "%s: interrupt %04x in poll\n", 3601 - dev->name, status); 3564 + /* Handle all of the error cases first. These will reset 3565 + * the chip, so just exit the loop. 
3566 + */ 3567 + if (unlikely(!netif_running(dev))) { 3568 + rtl8169_asic_down(ioaddr); 3569 + break; 3602 3570 } 3571 + 3572 + /* Work around for rx fifo overflow */ 3573 + if (unlikely(status & RxFIFOOver) && 3574 + (tp->mac_version == RTL_GIGA_MAC_VER_11)) { 3575 + netif_stop_queue(dev); 3576 + rtl8169_tx_timeout(dev); 3577 + break; 3578 + } 3579 + 3580 + if (unlikely(status & SYSErr)) { 3581 + rtl8169_pcierr_interrupt(dev); 3582 + break; 3583 + } 3584 + 3585 + if (status & LinkChg) 3586 + rtl8169_check_link_status(dev, tp, ioaddr); 3587 + 3588 + /* We need to see the latest version of tp->intr_mask to 3589 + * avoid ignoring an MSI interrupt and having to wait for 3590 + * another event which may never come. 3591 + */ 3592 + smp_rmb(); 3593 + if (status & tp->intr_mask & tp->napi_event) { 3594 + RTL_W16(IntrMask, tp->intr_event & ~tp->napi_event); 3595 + tp->intr_mask = ~tp->napi_event; 3596 + 3597 + if (likely(napi_schedule_prep(&tp->napi))) 3598 + __napi_schedule(&tp->napi); 3599 + else if (netif_msg_intr(tp)) { 3600 + printk(KERN_INFO "%s: interrupt %04x in poll\n", 3601 + dev->name, status); 3602 + } 3603 + } 3604 + 3605 + /* We only get a new MSI interrupt when all active irq 3606 + * sources on the chip have been acknowledged. So, ack 3607 + * everything we've seen and check if new sources have become 3608 + * active to avoid blocking all interrupts from the chip. 3609 + */ 3610 + RTL_W16(IntrStatus, 3611 + (status & RxFIFOOver) ? (status | RxOverflow) : status); 3612 + status = RTL_R16(IntrStatus); 3603 3613 } 3604 - out: 3614 + 3605 3615 return IRQ_RETVAL(handled); 3606 3616 } 3607 3617 ··· 3627 3617 3628 3618 if (work_done < budget) { 3629 3619 napi_complete(napi); 3630 - tp->intr_mask = 0xffff; 3631 - /* 3632 - * 20040426: the barrier is not strictly required but the 3633 - * behavior of the irq handler could be less predictable 3634 - * without it. 
Btw, the lack of flush for the posted pci 3635 - * write is safe - FR 3620 - 3621 + /* We need to force the visibility of tp->intr_mask 3622 + * for other CPUs, as we can lose an MSI interrupt 3623 + * and potentially wait for a retransmit timeout if we don't. 3624 + * The posted write to IntrMask is safe, as it will 3625 + * eventually make it to the chip and we won't lose anything 3626 + * until it does. 3636 3627 */ 3628 + tp->intr_mask = 0xffff; 3637 3629 smp_wmb(); 3638 3630 RTL_W16(IntrMask, tp->intr_event); 3639 3631 }
+30 -5
drivers/net/wimax/i2400m/usb.c
··· 505 505 #ifdef CONFIG_PM 506 506 struct usb_device *usb_dev = i2400mu->usb_dev; 507 507 #endif 508 + unsigned is_autosuspend = 0; 508 509 struct i2400m *i2400m = &i2400mu->i2400m; 510 + 511 + #ifdef CONFIG_PM 512 + if (usb_dev->auto_pm > 0) 513 + is_autosuspend = 1; 514 + #endif 509 515 510 516 d_fnstart(3, dev, "(iface %p pm_msg %u)\n", iface, pm_msg.event); 511 517 if (i2400m->updown == 0) 512 518 goto no_firmware; 513 - d_printf(1, dev, "fw up, requesting standby\n"); 519 + if (i2400m->state == I2400M_SS_DATA_PATH_CONNECTED && is_autosuspend) { 520 + /* ugh -- the device is connected and this suspend 521 + * request is an autosuspend one (not a system standby 522 + * / hibernate). 523 + * 524 + * The only way the device can go to standby is if the 525 + * link with the base station is in IDLE mode; were 526 + * that the case, we'd be in status 527 + * I2400M_SS_CONNECTED_IDLE. But we are not. 528 + * 529 + * If we *tell* him to go power save now, it'll reset 530 + * as a precautionary measure, so if this is an 531 + * autosuspend thing, say no and it'll come back 532 + * later, when the link is IDLE 533 + */ 534 + result = -EBADF; 535 + d_printf(1, dev, "fw up, link up, not-idle, autosuspend: " 536 + "not entering powersave\n"); 537 + goto error_not_now; 538 + } 539 + d_printf(1, dev, "fw up: entering powersave\n"); 514 540 atomic_dec(&i2400mu->do_autopm); 515 541 result = i2400m_cmd_enter_powersave(i2400m); 516 542 atomic_inc(&i2400mu->do_autopm); 517 - #ifdef CONFIG_PM 518 - if (result < 0 && usb_dev->auto_pm == 0) { 543 + if (result < 0 && !is_autosuspend) { 519 544 /* System suspend, can't fail */ 520 545 dev_err(dev, "failed to suspend, will reset on resume\n"); 521 546 result = 0; 522 547 } 523 - #endif 524 548 if (result < 0) 525 549 goto error_enter_powersave; 526 550 i2400mu_notification_release(i2400mu); 527 - d_printf(1, dev, "fw up, got standby\n"); 551 + d_printf(1, dev, "powersave requested\n"); 528 552 error_enter_powersave: 553 + 
error_not_now: 529 554 no_firmware: 530 555 d_fnend(3, dev, "(iface %p pm_msg %u) = %d\n", 531 556 iface, pm_msg.event, result);
+1
drivers/net/wireless/Kconfig
··· 430 430 ASUS P5B Deluxe 431 431 Toshiba Satellite Pro series of laptops 432 432 Asus Wireless Link 433 + Linksys WUSB54GC-EU 433 434 434 435 Thanks to Realtek for their support! 435 436
+14 -9
drivers/net/wireless/airo.c
··· 6467 6467 { 6468 6468 struct airo_info *local = dev->ml_priv; 6469 6469 int index = (dwrq->flags & IW_ENCODE_INDEX) - 1; 6470 + int wep_key_len; 6470 6471 u8 buf[16]; 6471 6472 6472 6473 if (!local->wep_capable) ··· 6501 6500 dwrq->flags |= index + 1; 6502 6501 6503 6502 /* Copy the key to the user buffer */ 6504 - dwrq->length = get_wep_key(local, index, &buf[0], sizeof(buf)); 6505 - if (dwrq->length != -1) 6506 - memcpy(extra, buf, dwrq->length); 6507 - else 6503 + wep_key_len = get_wep_key(local, index, &buf[0], sizeof(buf)); 6504 + if (wep_key_len < 0) { 6508 6505 dwrq->length = 0; 6506 + } else { 6507 + dwrq->length = wep_key_len; 6508 + memcpy(extra, buf, dwrq->length); 6509 + } 6509 6510 6510 6511 return 0; 6511 6512 } ··· 6620 6617 struct airo_info *local = dev->ml_priv; 6621 6618 struct iw_point *encoding = &wrqu->encoding; 6622 6619 struct iw_encode_ext *ext = (struct iw_encode_ext *)extra; 6623 - int idx, max_key_len; 6620 + int idx, max_key_len, wep_key_len; 6624 6621 u8 buf[16]; 6625 6622 6626 6623 if (!local->wep_capable) ··· 6664 6661 memset(extra, 0, 16); 6665 6662 6666 6663 /* Copy the key to the user buffer */ 6667 - ext->key_len = get_wep_key(local, idx, &buf[0], sizeof(buf)); 6668 - if (ext->key_len != -1) 6669 - memcpy(extra, buf, ext->key_len); 6670 - else 6664 + wep_key_len = get_wep_key(local, idx, &buf[0], sizeof(buf)); 6665 + if (wep_key_len < 0) { 6671 6666 ext->key_len = 0; 6667 + } else { 6668 + ext->key_len = wep_key_len; 6669 + memcpy(extra, buf, ext->key_len); 6670 + } 6672 6671 6673 6672 return 0; 6674 6673 }
+6 -6
drivers/net/wireless/at76c50x-usb.c
··· 1873 1873 if (ret != CMD_STATUS_COMPLETE) { 1874 1874 queue_delayed_work(priv->hw->workqueue, &priv->dwork_hw_scan, 1875 1875 SCAN_POLL_INTERVAL); 1876 - goto exit; 1876 + mutex_unlock(&priv->mtx); 1877 + return; 1877 1878 } 1878 - 1879 - ieee80211_scan_completed(priv->hw, false); 1880 1879 1881 1880 if (is_valid_ether_addr(priv->bssid)) 1882 1881 at76_join(priv); 1883 1882 1884 - ieee80211_wake_queues(priv->hw); 1885 - 1886 - exit: 1887 1883 mutex_unlock(&priv->mtx); 1884 + 1885 + ieee80211_scan_completed(priv->hw, false); 1886 + 1887 + ieee80211_wake_queues(priv->hw); 1888 1888 } 1889 1889 1890 1890 static int at76_hw_scan(struct ieee80211_hw *hw,
+25 -18
drivers/net/wireless/ath5k/phy.c
··· 1487 1487 { 1488 1488 s8 tmp; 1489 1489 s16 min_pwrL, min_pwrR; 1490 - s16 pwr_i = pwrL[0]; 1490 + s16 pwr_i; 1491 1491 1492 - do { 1493 - pwr_i--; 1494 - tmp = (s8) ath5k_get_interpolated_value(pwr_i, 1495 - pwrL[0], pwrL[1], 1496 - stepL[0], stepL[1]); 1492 + if (pwrL[0] == pwrL[1]) 1493 + min_pwrL = pwrL[0]; 1494 + else { 1495 + pwr_i = pwrL[0]; 1496 + do { 1497 + pwr_i--; 1498 + tmp = (s8) ath5k_get_interpolated_value(pwr_i, 1499 + pwrL[0], pwrL[1], 1500 + stepL[0], stepL[1]); 1501 + } while (tmp > 1); 1497 1502 1498 - } while (tmp > 1); 1503 + min_pwrL = pwr_i; 1504 + } 1499 1505 1500 - min_pwrL = pwr_i; 1506 + if (pwrR[0] == pwrR[1]) 1507 + min_pwrR = pwrR[0]; 1508 + else { 1509 + pwr_i = pwrR[0]; 1510 + do { 1511 + pwr_i--; 1512 + tmp = (s8) ath5k_get_interpolated_value(pwr_i, 1513 + pwrR[0], pwrR[1], 1514 + stepR[0], stepR[1]); 1515 + } while (tmp > 1); 1501 1516 1502 - pwr_i = pwrR[0]; 1503 - do { 1504 - pwr_i--; 1505 - tmp = (s8) ath5k_get_interpolated_value(pwr_i, 1506 - pwrR[0], pwrR[1], 1507 - stepR[0], stepR[1]); 1508 - 1509 - } while (tmp > 1); 1510 - 1511 - min_pwrR = pwr_i; 1517 + min_pwrR = pwr_i; 1518 + } 1512 1519 1513 1520 /* Keep the right boundary so that it works for both curves */ 1514 1521 return max(min_pwrL, min_pwrR);
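The ath5k change above adds a degenerate-interval guard: when the two calibration points carry equal power values (`pwrL[0] == pwrL[1]`), the interpolation slope is undefined and the backwards `do/while` search can never converge. A minimal sketch of the guarded search, with a simplified linear interpolation and hypothetical names standing in for the driver's curve data:

```c
/* y0 + (x - x0) * (y1 - y0) / (x1 - x0), in the spirit of
 * ath5k_get_interpolated_value() (integer arithmetic). */
static int interpolate(int x, int x0, int x1, int y0, int y1)
{
	return y0 + (x - x0) * (y1 - y0) / (x1 - x0);
}

/* Walk the power axis downwards until the interpolated step count
 * drops to 1, bailing out first when the interval is degenerate. */
static int find_min_pwr(int p0, int p1, int s0, int s1)
{
	int p;

	if (p0 == p1)
		return p0;	/* equal endpoints: slope undefined */

	p = p0;
	do {
		p--;
	} while (interpolate(p, p0, p1, s0, s1) > 1);

	return p;
}
```

Without the `p0 == p1` early return, the division in `interpolate()` would fault (or, with a guarded slope of zero, the loop would count down forever).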
+4 -4
drivers/net/wireless/ath5k/reset.c
··· 26 26 \*****************************/ 27 27 28 28 #include <linux/pci.h> /* To determine if a card is pci-e */ 29 - #include <linux/bitops.h> /* For get_bitmask_order */ 29 + #include <linux/log2.h> 30 30 #include "ath5k.h" 31 31 #include "reg.h" 32 32 #include "base.h" ··· 69 69 70 70 /* Get exponent 71 71 * ALGO: coef_exp = 14 - highest set bit position */ 72 - coef_exp = get_bitmask_order(coef_scaled); 72 + coef_exp = ilog2(coef_scaled); 73 73 74 74 /* Doesn't make sense if it's zero*/ 75 - if (!coef_exp) 75 + if (!coef_scaled || !coef_exp) 76 76 return -EINVAL; 77 77 78 78 /* Note: we've shifted coef_scaled by 24 */ ··· 359 359 mode |= AR5K_PHY_MODE_FREQ_5GHZ; 360 360 361 361 if (ah->ah_radio == AR5K_RF5413) 362 - clock |= AR5K_PHY_PLL_40MHZ_5413; 362 + clock = AR5K_PHY_PLL_40MHZ_5413; 363 363 else 364 364 clock |= AR5K_PHY_PLL_40MHZ; 365 365
+1 -1
drivers/net/wireless/iwlwifi/iwl-5000.c
··· 46 46 #include "iwl-6000-hw.h" 47 47 48 48 /* Highest firmware API version supported */ 49 - #define IWL5000_UCODE_API_MAX 1 49 + #define IWL5000_UCODE_API_MAX 2 50 50 #define IWL5150_UCODE_API_MAX 2 51 51 52 52 /* Lowest firmware API version supported */
-7
drivers/net/wireless/iwlwifi/iwl-agn.c
··· 669 669 if (!iwl_is_ready_rf(priv)) 670 670 return -EAGAIN; 671 671 672 - cancel_delayed_work(&priv->scan_check); 673 - if (iwl_scan_cancel_timeout(priv, 100)) { 674 - IWL_WARN(priv, "Aborted scan still in progress after 100ms\n"); 675 - IWL_DEBUG_MAC80211(priv, "leaving - scan abort failed.\n"); 676 - return -EAGAIN; 677 - } 678 - 679 672 iwl_commit_rxon(priv); 680 673 681 674 return 0;
+4 -3
drivers/net/wireless/iwlwifi/iwl-scan.c
··· 227 227 /* The HW is no longer scanning */ 228 228 clear_bit(STATUS_SCAN_HW, &priv->status); 229 229 230 - /* The scan completion notification came in, so kill that timer... */ 231 - cancel_delayed_work(&priv->scan_check); 232 - 233 230 IWL_DEBUG_INFO(priv, "Scan pass on %sGHz took %dms\n", 234 231 (priv->scan_bands & BIT(IEEE80211_BAND_2GHZ)) ? 235 232 "2.4" : "5.2", ··· 709 712 710 713 mutex_lock(&priv->mutex); 711 714 715 + cancel_delayed_work(&priv->scan_check); 716 + 712 717 if (!iwl_is_ready(priv)) { 713 718 IWL_WARN(priv, "request scan called when driver not ready.\n"); 714 719 goto done; ··· 923 924 container_of(work, struct iwl_priv, scan_completed); 924 925 925 926 IWL_DEBUG_SCAN(priv, "SCAN complete scan\n"); 927 + 928 + cancel_delayed_work(&priv->scan_check); 926 929 927 930 ieee80211_scan_completed(priv->hw, false); 928 931
+2 -7
drivers/net/wireless/iwlwifi/iwl3945-base.c
··· 782 782 if (!iwl_is_ready_rf(priv)) 783 783 return -EAGAIN; 784 784 785 - cancel_delayed_work(&priv->scan_check); 786 - if (iwl_scan_cancel_timeout(priv, 100)) { 787 - IWL_WARN(priv, "Aborted scan still in progress after 100ms\n"); 788 - IWL_DEBUG_MAC80211(priv, "leaving - scan abort failed.\n"); 789 - return -EAGAIN; 790 - } 791 - 792 785 iwl3945_commit_rxon(priv); 793 786 794 787 return 0; ··· 3290 3297 conf = ieee80211_get_hw_conf(priv->hw); 3291 3298 3292 3299 mutex_lock(&priv->mutex); 3300 + 3301 + cancel_delayed_work(&priv->scan_check); 3293 3302 3294 3303 if (!iwl_is_ready(priv)) { 3295 3304 IWL_WARN(priv, "request scan called when driver not ready.\n");
+1 -1
drivers/net/wireless/rt2x00/rt2x00debug.c
··· 138 138 139 139 if (cipher == CIPHER_TKIP_NO_MIC) 140 140 cipher = CIPHER_TKIP; 141 - if (cipher == CIPHER_NONE || cipher > CIPHER_MAX) 141 + if (cipher == CIPHER_NONE || cipher >= CIPHER_MAX) 142 142 return; 143 143 144 144 /* Remove CIPHER_NONE index */
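The one-character rt2x00 change above is a classic off-by-one: when `CIPHER_MAX` counts the valid entries, `cipher > CIPHER_MAX` still admits the out-of-range value `CIPHER_MAX` itself, so `>=` is the correct upper bound. A sketch with illustrative enum values (not the real rt2x00 cipher list):

```c
/* Illustrative cipher indices; CIPHER_MAX is the number of entries,
 * i.e. one past the last valid value. */
enum cipher {
	CIPHER_NONE = 0,
	CIPHER_WEP64,
	CIPHER_TKIP,
	CIPHER_MAX
};

/* Valid data ciphers lie strictly between CIPHER_NONE and CIPHER_MAX;
 * note the `<` rather than `<=` on the upper bound. */
static int cipher_valid(int cipher)
{
	return cipher != CIPHER_NONE && cipher < CIPHER_MAX;
}
```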
+2
drivers/net/wireless/rtl818x/rtl8187_dev.c
··· 71 71 {USB_DEVICE(0x18E8, 0x6232), .driver_info = DEVICE_RTL8187}, 72 72 /* AirLive */ 73 73 {USB_DEVICE(0x1b75, 0x8187), .driver_info = DEVICE_RTL8187}, 74 + /* Linksys */ 75 + {USB_DEVICE(0x1737, 0x0073), .driver_info = DEVICE_RTL8187B}, 74 76 {} 75 77 }; 76 78
+6 -2
drivers/oprofile/cpu_buffer.c
··· 78 78 op_ring_buffer_write = NULL; 79 79 } 80 80 81 + #define RB_EVENT_HDR_SIZE 4 82 + 81 83 int alloc_cpu_buffers(void) 82 84 { 83 85 int i; 84 86 85 87 unsigned long buffer_size = oprofile_cpu_buffer_size; 88 + unsigned long byte_size = buffer_size * (sizeof(struct op_sample) + 89 + RB_EVENT_HDR_SIZE); 86 90 87 - op_ring_buffer_read = ring_buffer_alloc(buffer_size, OP_BUFFER_FLAGS); 91 + op_ring_buffer_read = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS); 88 92 if (!op_ring_buffer_read) 89 93 goto fail; 90 - op_ring_buffer_write = ring_buffer_alloc(buffer_size, OP_BUFFER_FLAGS); 94 + op_ring_buffer_write = ring_buffer_alloc(byte_size, OP_BUFFER_FLAGS); 91 95 if (!op_ring_buffer_write) 92 96 goto fail; 93 97
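The oprofile fix above converts a sample count into an allocation size in bytes before calling `ring_buffer_alloc()`, accounting for the ring buffer's per-event header; passing the bare sample count shortchanged the buffer. A standalone sketch of the sizing arithmetic (struct layout illustrative, header size as in the patch):

```c
#include <stddef.h>

#define RB_EVENT_HDR_SIZE 4	/* per-event header, as in the patch */

/* Illustrative sample payload. */
struct op_sample {
	unsigned long eip;
	unsigned long event;
};

/* Bytes needed so that nr_samples events fit, each carrying its
 * payload plus the ring buffer's per-event header. */
static size_t op_buffer_bytes(size_t nr_samples)
{
	return nr_samples * (sizeof(struct op_sample) + RB_EVENT_HDR_SIZE);
}
```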
+2 -2
drivers/parport/parport_gsc.c
··· 352 352 unsigned long port; 353 353 354 354 if (!dev->irq) { 355 - printk(KERN_WARNING "IRQ not found for parallel device at 0x%lx\n", 356 - dev->hpa.start); 355 + printk(KERN_WARNING "IRQ not found for parallel device at 0x%llx\n", 356 + (unsigned long long)dev->hpa.start); 357 357 return -ENODEV; 358 358 } 359 359
+10 -3
drivers/parport/share.c
··· 614 614 * pardevice fields. -arca 615 615 */ 616 616 port->ops->init_state(tmp, tmp->state); 617 - parport_device_proc_register(tmp); 617 + if (!test_and_set_bit(PARPORT_DEVPROC_REGISTERED, &port->devflags)) { 618 + port->proc_device = tmp; 619 + parport_device_proc_register(tmp); 620 + } 618 621 return tmp; 619 622 620 623 out_free_all: ··· 649 646 } 650 647 #endif 651 648 652 - parport_device_proc_unregister(dev); 653 - 654 649 port = dev->port->physport; 650 + 651 + if (port->proc_device == dev) { 652 + port->proc_device = NULL; 653 + clear_bit(PARPORT_DEVPROC_REGISTERED, &port->devflags); 654 + parport_device_proc_unregister(dev); 655 + } 655 656 656 657 if (port->cad == dev) { 657 658 printk(KERN_DEBUG "%s: %s forgot to release port\n",
-1
drivers/pci/hotplug/acpiphp.h
··· 129 129 struct acpiphp_bridge *bridge; /* Ejectable PCI-to-PCI bridge */ 130 130 131 131 struct list_head sibling; 132 - struct pci_dev *pci_dev; 133 132 struct notifier_block nb; 134 133 acpi_handle handle; 135 134
+26 -37
drivers/pci/hotplug/acpiphp_glue.c
··· 32 32 33 33 /* 34 34 * Lifetime rules for pci_dev: 35 - * - The one in acpiphp_func has its refcount elevated by pci_get_slot() 36 - * when the driver is loaded or when an insertion event occurs. It loses 37 - * a refcount when its ejected or the driver unloads. 38 35 * - The one in acpiphp_bridge has its refcount elevated by pci_get_slot() 39 36 * when the bridge is scanned and it loses a refcount when the bridge 40 37 * is removed. ··· 127 130 unsigned long long adr, sun; 128 131 int device, function, retval; 129 132 struct pci_bus *pbus = bridge->pci_bus; 133 + struct pci_dev *pdev; 130 134 131 135 if (!acpi_pci_check_ejectable(pbus, handle) && !is_dock_device(handle)) 132 136 return AE_OK; ··· 211 213 newfunc->slot = slot; 212 214 list_add_tail(&newfunc->sibling, &slot->funcs); 213 215 214 - /* associate corresponding pci_dev */ 215 - newfunc->pci_dev = pci_get_slot(pbus, PCI_DEVFN(device, function)); 216 - if (newfunc->pci_dev) { 216 + pdev = pci_get_slot(pbus, PCI_DEVFN(device, function)); 217 + if (pdev) { 217 218 slot->flags |= (SLOT_ENABLED | SLOT_POWEREDON); 219 + pci_dev_put(pdev); 218 220 } 219 221 220 222 if (is_dock_device(handle)) { ··· 615 617 if (ACPI_FAILURE(status)) 616 618 err("failed to remove notify handler\n"); 617 619 } 618 - pci_dev_put(func->pci_dev); 619 620 list_del(list); 620 621 kfree(func); 621 622 } ··· 1098 1101 pci_enable_bridges(bus); 1099 1102 pci_bus_add_devices(bus); 1100 1103 1101 - /* associate pci_dev to our representation */ 1102 1104 list_for_each (l, &slot->funcs) { 1103 1105 func = list_entry(l, struct acpiphp_func, sibling); 1104 - func->pci_dev = pci_get_slot(bus, PCI_DEVFN(slot->device, 1105 - func->function)); 1106 - if (!func->pci_dev) 1106 + dev = pci_get_slot(bus, PCI_DEVFN(slot->device, 1107 + func->function)); 1108 + if (!dev) 1107 1109 continue; 1108 1110 1109 - if (func->pci_dev->hdr_type != PCI_HEADER_TYPE_BRIDGE && 1110 - func->pci_dev->hdr_type != PCI_HEADER_TYPE_CARDBUS) 1111 + if (dev->hdr_type != 
PCI_HEADER_TYPE_BRIDGE && 1112 + dev->hdr_type != PCI_HEADER_TYPE_CARDBUS) { 1113 + pci_dev_put(dev); 1111 1114 continue; 1115 + } 1112 1116 1113 1117 status = find_p2p_bridge(func->handle, (u32)1, bus, NULL); 1114 1118 if (ACPI_FAILURE(status)) 1115 1119 warn("find_p2p_bridge failed (error code = 0x%x)\n", 1116 1120 status); 1121 + pci_dev_put(dev); 1117 1122 } 1118 1123 1119 1124 slot->flags |= SLOT_ENABLED; ··· 1141 1142 */ 1142 1143 static int disable_device(struct acpiphp_slot *slot) 1143 1144 { 1144 - int retval = 0; 1145 1145 struct acpiphp_func *func; 1146 - struct list_head *l; 1146 + struct pci_dev *pdev; 1147 1147 1148 1148 /* is this slot already disabled? */ 1149 1149 if (!(slot->flags & SLOT_ENABLED)) 1150 1150 goto err_exit; 1151 1151 1152 - list_for_each (l, &slot->funcs) { 1153 - func = list_entry(l, struct acpiphp_func, sibling); 1154 - 1152 + list_for_each_entry(func, &slot->funcs, sibling) { 1155 1153 if (func->bridge) { 1156 1154 /* cleanup p2p bridges under this P2P bridge */ 1157 1155 cleanup_p2p_bridge(func->bridge->handle, ··· 1156 1160 func->bridge = NULL; 1157 1161 } 1158 1162 1159 - if (func->pci_dev) { 1160 - pci_stop_bus_device(func->pci_dev); 1161 - if (func->pci_dev->subordinate) { 1162 - disable_bridges(func->pci_dev->subordinate); 1163 - pci_disable_device(func->pci_dev); 1163 + pdev = pci_get_slot(slot->bridge->pci_bus, 1164 + PCI_DEVFN(slot->device, func->function)); 1165 + if (pdev) { 1166 + pci_stop_bus_device(pdev); 1167 + if (pdev->subordinate) { 1168 + disable_bridges(pdev->subordinate); 1169 + pci_disable_device(pdev); 1164 1170 } 1171 + pci_remove_bus_device(pdev); 1172 + pci_dev_put(pdev); 1165 1173 } 1166 1174 } 1167 1175 1168 - list_for_each (l, &slot->funcs) { 1169 - func = list_entry(l, struct acpiphp_func, sibling); 1170 - 1176 + list_for_each_entry(func, &slot->funcs, sibling) { 1171 1177 acpiphp_unconfigure_ioapics(func->handle); 1172 1178 acpiphp_bus_trim(func->handle); 1173 - /* try to remove anyway. 
1174 - * acpiphp_bus_add might have been failed */ 1175 - 1176 - if (!func->pci_dev) 1177 - continue; 1178 - 1179 - pci_remove_bus_device(func->pci_dev); 1180 - pci_dev_put(func->pci_dev); 1181 - func->pci_dev = NULL; 1182 1179 } 1183 1180 1184 1181 slot->flags &= (~SLOT_ENABLED); 1185 1182 1186 - err_exit: 1187 - return retval; 1183 + err_exit: 1184 + return 0; 1188 1185 } 1189 1186 1190 1187
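The acpiphp hunk above replaces a long-lived cached `pci_dev` pointer with short get/put windows: every `pci_get_slot()` now has a matching `pci_dev_put()` in the same scope, so the hotplug code no longer holds a reference across the device's lifetime. A toy userspace sketch of that discipline (the `dev_get`/`dev_put` names and the `struct dev` type are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy refcounted device, standing in for struct pci_dev. */
struct dev {
    int refcount;
};

/* Analogous to pci_get_slot(): hand back the device with a reference held. */
static struct dev *dev_get(struct dev *d)
{
    d->refcount++;
    return d;
}

/* Analogous to pci_dev_put(): drop the reference taken by dev_get(). */
static void dev_put(struct dev *d)
{
    d->refcount--;
}

/* Probe in the style of the patched enable_device(): take a reference
 * only for the duration of the check, never cache the pointer. */
static int slot_is_populated(struct dev *slot_dev)
{
    struct dev *d = dev_get(slot_dev);
    int populated = (d != NULL);

    dev_put(d);
    return populated;
}
```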
+15
drivers/serial/8250.c
··· 137 137 unsigned char mcr; 138 138 unsigned char mcr_mask; /* mask of user bits */ 139 139 unsigned char mcr_force; /* mask of forced bits */ 140 + unsigned char cur_iotype; /* Running I/O type */ 140 141 141 142 /* 142 143 * Some bits in registers are cleared on a read, so they must ··· 472 471 473 472 static void set_io_from_upio(struct uart_port *p) 474 473 { 474 + struct uart_8250_port *up = (struct uart_8250_port *)p; 475 475 switch (p->iotype) { 476 476 case UPIO_HUB6: 477 477 p->serial_in = hub6_serial_in; ··· 511 509 p->serial_out = io_serial_out; 512 510 break; 513 511 } 512 + /* Remember loaded iotype */ 513 + up->cur_iotype = p->iotype; 514 514 } 515 515 516 516 static void ··· 1941 1937 up->capabilities = uart_config[up->port.type].flags; 1942 1938 up->mcr = 0; 1943 1939 1940 + if (up->port.iotype != up->cur_iotype) 1941 + set_io_from_upio(port); 1942 + 1944 1943 if (up->port.type == PORT_16C950) { 1945 1944 /* Wake up and initialize UART */ 1946 1945 up->acr = 0; ··· 2570 2563 if (ret < 0) 2571 2564 probeflags &= ~PROBE_RSA; 2572 2565 2566 + if (up->port.iotype != up->cur_iotype) 2567 + set_io_from_upio(port); 2568 + 2573 2569 if (flags & UART_CONFIG_TYPE) 2574 2570 autoconfig(up, probeflags); 2575 2571 if (up->port.type != PORT_UNKNOWN && flags & UART_CONFIG_IRQ) ··· 2680 2670 serial8250_register_ports(struct uart_driver *drv, struct device *dev) 2681 2671 { 2682 2672 int i; 2673 + 2674 + for (i = 0; i < nr_uarts; i++) { 2675 + struct uart_8250_port *up = &serial8250_ports[i]; 2676 + up->cur_iotype = 0xFF; 2677 + } 2683 2678 2684 2679 serial8250_isa_init_ports(); 2685 2680
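The new `cur_iotype` field caches which accessor set (`serial_in`/`serial_out`) was last installed, and ports are pre-seeded with the impossible value `0xFF` so the first comparison always triggers a load. The sentinel-plus-compare idiom in isolation (struct and function names are illustrative stand-ins for the 8250 code):

```c
#include <assert.h>

#define IOTYPE_NONE 0xFF  /* impossible iotype: forces the first load */

static int accessor_loads;    /* counts simulated set_io_from_upio() calls */

struct port {
    unsigned char iotype;     /* requested I/O type   */
    unsigned char cur_iotype; /* currently installed  */
};

/* Stand-in for set_io_from_upio(): install accessors and remember
 * which iotype they belong to. */
static void load_accessors(struct port *p)
{
    accessor_loads++;
    p->cur_iotype = p->iotype;
}

/* Reload accessors only when the requested type differs from the one
 * installed, as the patched autoconfig/startup paths now do. */
static void sync_accessors(struct port *p)
{
    if (p->iotype != p->cur_iotype)
        load_accessors(p);
}
```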
+2 -2
drivers/serial/8250_gsc.c
··· 39 39 */ 40 40 if (parisc_parent(dev)->id.hw_type != HPHW_IOA) 41 41 printk(KERN_INFO 42 - "Serial: device 0x%lx not configured.\n" 42 + "Serial: device 0x%llx not configured.\n" 43 43 "Enable support for Wax, Lasi, Asp or Dino.\n", 44 - dev->hpa.start); 44 + (unsigned long long)dev->hpa.start); 45 45 return -ENODEV; 46 46 } 47 47
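The `%lx` to `%llx` change matters because `resource_size_t` can be 32 or 64 bits wide depending on kernel configuration; casting the value to `unsigned long long` and printing with `%llx` is correct on both. A minimal userspace illustration of the same idiom (the typedef width and `format_hpa` name are assumptions for the sketch):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for resource_size_t, whose width is config-dependent. */
typedef uint64_t resource_size_t;

/* Format an address portably: the explicit cast guarantees the argument
 * matches %llx no matter how wide resource_size_t actually is. */
static void format_hpa(char *buf, size_t len, resource_size_t hpa)
{
    snprintf(buf, len, "device 0x%llx not configured",
             (unsigned long long)hpa);
}
```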
+1 -1
drivers/serial/mpc52xx_uart.c
··· 988 988 pr_debug("mpc52xx_console_setup co=%p, co->index=%i, options=%s\n", 989 989 co, co->index, options); 990 990 991 - if ((co->index < 0) || (co->index > MPC52xx_PSC_MAXNUM)) { 991 + if ((co->index < 0) || (co->index >= MPC52xx_PSC_MAXNUM)) { 992 992 pr_debug("PSC%x out of range\n", co->index); 993 993 return -EINVAL; 994 994 }
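The one-character change from `>` to `>=` above is the classic off-by-one: with `MPC52xx_PSC_MAXNUM` ports, valid indices are `0 .. MAXNUM-1`, so index `MAXNUM` itself must be rejected. A minimal sketch of the corrected check (the constant value and helper name are illustrative):

```c
#include <assert.h>

#define PSC_MAXNUM 6  /* illustrative port count */

/* Valid console indices are 0 .. PSC_MAXNUM-1; PSC_MAXNUM itself is
 * one past the end of the port array and must be rejected. */
static int psc_index_valid(int index)
{
    return index >= 0 && index < PSC_MAXNUM;
}
```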
-1
drivers/usb/Makefile
··· 11 11 obj-$(CONFIG_PCI) += host/ 12 12 obj-$(CONFIG_USB_EHCI_HCD) += host/ 13 13 obj-$(CONFIG_USB_ISP116X_HCD) += host/ 14 - obj-$(CONFIG_USB_ISP1760_HCD) += host/ 15 14 obj-$(CONFIG_USB_OHCI_HCD) += host/ 16 15 obj-$(CONFIG_USB_UHCI_HCD) += host/ 17 16 obj-$(CONFIG_USB_FHCI_HCD) += host/
+3
drivers/usb/class/cdc-acm.c
··· 1375 1375 { USB_DEVICE(0x0572, 0x1324), /* Conexant USB MODEM RD02-D400 */ 1376 1376 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1377 1377 }, 1378 + { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */ 1379 + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ 1380 + }, 1378 1381 { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */ 1379 1382 }, 1380 1383 { USB_DEVICE(0x0572, 0x1329), /* Hummingbird huc56s (Conexant) */
+3 -2
drivers/usb/gadget/atmel_usba_udc.c
··· 794 794 if (ep->desc) { 795 795 list_add_tail(&req->queue, &ep->queue); 796 796 797 - if (ep->is_in || (ep_is_control(ep) 797 + if ((!ep_is_control(ep) && ep->is_in) || 798 + (ep_is_control(ep) 798 799 && (ep->state == DATA_STAGE_IN 799 800 || ep->state == STATUS_STAGE_IN))) 800 801 usba_ep_writel(ep, CTL_ENB, USBA_TX_PK_RDY); ··· 1941 1940 usba_writel(udc, CTRL, USBA_DISABLE_MASK); 1942 1941 clk_disable(pclk); 1943 1942 1944 - usba_ep = kmalloc(sizeof(struct usba_ep) * pdata->num_ep, 1943 + usba_ep = kzalloc(sizeof(struct usba_ep) * pdata->num_ep, 1945 1944 GFP_KERNEL); 1946 1945 if (!usba_ep) 1947 1946 goto err_alloc_ep;
+22 -2
drivers/usb/host/isp1760-hcd.c
··· 1658 1658 u32 reg_base, or_reg, skip_reg; 1659 1659 unsigned long flags; 1660 1660 struct ptd ptd; 1661 + packet_enqueue *pe; 1661 1662 1662 1663 switch (usb_pipetype(urb->pipe)) { 1663 1664 case PIPE_ISOCHRONOUS: ··· 1670 1669 reg_base = INT_REGS_OFFSET; 1671 1670 or_reg = HC_INT_IRQ_MASK_OR_REG; 1672 1671 skip_reg = HC_INT_PTD_SKIPMAP_REG; 1672 + pe = enqueue_an_INT_packet; 1673 1673 break; 1674 1674 1675 1675 default: ··· 1678 1676 reg_base = ATL_REGS_OFFSET; 1679 1677 or_reg = HC_ATL_IRQ_MASK_OR_REG; 1680 1678 skip_reg = HC_ATL_PTD_SKIPMAP_REG; 1679 + pe = enqueue_an_ATL_packet; 1681 1680 break; 1682 1681 } 1683 1682 ··· 1690 1687 u32 skip_map; 1691 1688 u32 or_map; 1692 1689 struct isp1760_qtd *qtd; 1690 + struct isp1760_qh *qh = ints->qh; 1693 1691 1694 1692 skip_map = isp1760_readl(hcd->regs + skip_reg); 1695 1693 skip_map |= 1 << i; ··· 1703 1699 priv_write_copy(priv, (u32 *)&ptd, hcd->regs + reg_base 1704 1700 + i * sizeof(ptd), sizeof(ptd)); 1705 1701 qtd = ints->qtd; 1706 - 1707 - clean_up_qtdlist(qtd); 1702 + qtd = clean_up_qtdlist(qtd); 1708 1703 1709 1704 free_mem(priv, ints->payload); 1710 1705 ··· 1714 1711 ints->payload = 0; 1715 1712 1716 1713 isp1760_urb_done(priv, urb, status); 1714 + if (qtd) 1715 + pe(hcd, qh, qtd); 1717 1716 break; 1717 + 1718 + } else if (ints->qtd) { 1719 + struct isp1760_qtd *qtd, *prev_qtd = ints->qtd; 1720 + 1721 + for (qtd = ints->qtd->hw_next; qtd; qtd = qtd->hw_next) { 1722 + if (qtd->urb == urb) { 1723 + prev_qtd->hw_next = clean_up_qtdlist(qtd); 1724 + isp1760_urb_done(priv, urb, status); 1725 + break; 1726 + } 1727 + prev_qtd = qtd; 1728 + } 1729 + /* we found the urb before the end of the list */ 1730 + if (qtd) 1731 + break; 1718 1732 } 1719 1733 ints++; 1720 1734 }
+1
drivers/usb/serial/usb-serial.c
··· 974 974 if (retval > 0) { 975 975 /* quietly accept this device, but don't bind to a 976 976 serial port as it's about to disappear */ 977 + serial->num_ports = 0; 977 978 goto exit; 978 979 } 979 980 }
+2 -8
drivers/video/atmel_lcdfb.c
··· 29 29 30 30 /* configurable parameters */ 31 31 #define ATMEL_LCDC_CVAL_DEFAULT 0xc8 32 - #define ATMEL_LCDC_DMA_BURST_LEN 8 33 - 34 - #if defined(CONFIG_ARCH_AT91SAM9263) || defined(CONFIG_ARCH_AT91CAP9) || \ 35 - defined(CONFIG_ARCH_AT91SAM9RL) 36 - #define ATMEL_LCDC_FIFO_SIZE 2048 37 - #else 38 - #define ATMEL_LCDC_FIFO_SIZE 512 39 - #endif 32 + #define ATMEL_LCDC_DMA_BURST_LEN 8 /* words */ 33 + #define ATMEL_LCDC_FIFO_SIZE 512 /* words */ 40 34 41 35 #if defined(CONFIG_ARCH_AT91) 42 36 #define ATMEL_LCDFB_FBINFO_DEFAULT (FBINFO_DEFAULT \
+11 -1
drivers/video/s3c-fb.c
··· 947 947 int win; 948 948 949 949 for (win = 0; win <= S3C_FB_MAX_WIN; win++) 950 - s3c_fb_release_win(sfb, sfb->windows[win]); 950 + if (sfb->windows[win]) 951 + s3c_fb_release_win(sfb, sfb->windows[win]); 951 952 952 953 iounmap(sfb->regs); 953 954 ··· 986 985 static int s3c_fb_resume(struct platform_device *pdev) 987 986 { 988 987 struct s3c_fb *sfb = platform_get_drvdata(pdev); 988 + struct s3c_fb_platdata *pd = sfb->pdata; 989 989 struct s3c_fb_win *win; 990 990 int win_no; 991 991 992 992 clk_enable(sfb->bus_clk); 993 993 994 + /* setup registers */ 995 + writel(pd->vidcon1, sfb->regs + VIDCON1); 996 + 997 + /* zero all windows before we do anything */ 998 + for (win_no = 0; win_no < S3C_FB_MAX_WIN; win_no++) 999 + s3c_fb_clear_win(sfb, win_no); 1000 + 1001 + /* restore framebuffers */ 994 1002 for (win_no = 0; win_no < S3C_FB_MAX_WIN; win_no++) { 995 1003 win = sfb->windows[win_no]; 996 1004 if (!win)
+5 -5
drivers/watchdog/Kconfig
··· 231 231 NOTE: once enabled, this timer cannot be disabled. 232 232 Say N if you are unsure. 233 233 234 - config ORION5X_WATCHDOG 235 - tristate "Orion5x watchdog" 236 - depends on ARCH_ORION5X 234 + config ORION_WATCHDOG 235 + tristate "Orion watchdog" 236 + depends on ARCH_ORION5X || ARCH_KIRKWOOD 237 237 help 238 238 Say Y here if to include support for the watchdog timer 239 - in the Orion5x ARM SoCs. 239 + in the Marvell Orion5x and Kirkwood ARM SoCs. 240 240 To compile this driver as a module, choose M here: the 241 - module will be called orion5x_wdt. 241 + module will be called orion_wdt. 242 242 243 243 # AVR32 Architecture 244 244
+1 -1
drivers/watchdog/Makefile
··· 40 40 obj-$(CONFIG_PNX4008_WATCHDOG) += pnx4008_wdt.o 41 41 obj-$(CONFIG_IOP_WATCHDOG) += iop_wdt.o 42 42 obj-$(CONFIG_DAVINCI_WATCHDOG) += davinci_wdt.o 43 - obj-$(CONFIG_ORION5X_WATCHDOG) += orion5x_wdt.o 43 + obj-$(CONFIG_ORION_WATCHDOG) += orion_wdt.o 44 44 45 45 # AVR32 Architecture 46 46 obj-$(CONFIG_AT32AP700X_WDT) += at32ap700x_wdt.o
+60 -60
drivers/watchdog/orion5x_wdt.c drivers/watchdog/orion_wdt.c
··· 1 1 /* 2 - * drivers/watchdog/orion5x_wdt.c 2 + * drivers/watchdog/orion_wdt.c 3 3 * 4 - * Watchdog driver for Orion5x processors 4 + * Watchdog driver for Orion/Kirkwood processors 5 5 * 6 6 * Author: Sylver Bruneau <sylver.bruneau@googlemail.com> 7 7 * ··· 23 23 #include <linux/io.h> 24 24 #include <linux/spinlock.h> 25 25 #include <mach/bridge-regs.h> 26 - #include <plat/orion5x_wdt.h> 26 + #include <plat/orion_wdt.h> 27 27 28 28 /* 29 29 * Watchdog timer block registers. ··· 43 43 static unsigned long wdt_status; 44 44 static spinlock_t wdt_lock; 45 45 46 - static void orion5x_wdt_ping(void) 46 + static void orion_wdt_ping(void) 47 47 { 48 48 spin_lock(&wdt_lock); 49 49 ··· 53 53 spin_unlock(&wdt_lock); 54 54 } 55 55 56 - static void orion5x_wdt_enable(void) 56 + static void orion_wdt_enable(void) 57 57 { 58 58 u32 reg; 59 59 ··· 73 73 writel(reg, TIMER_CTRL); 74 74 75 75 /* Enable reset on watchdog */ 76 - reg = readl(CPU_RESET_MASK); 77 - reg |= WDT_RESET; 78 - writel(reg, CPU_RESET_MASK); 76 + reg = readl(RSTOUTn_MASK); 77 + reg |= WDT_RESET_OUT_EN; 78 + writel(reg, RSTOUTn_MASK); 79 79 80 80 spin_unlock(&wdt_lock); 81 81 } 82 82 83 - static void orion5x_wdt_disable(void) 83 + static void orion_wdt_disable(void) 84 84 { 85 85 u32 reg; 86 86 87 87 spin_lock(&wdt_lock); 88 88 89 89 /* Disable reset on watchdog */ 90 - reg = readl(CPU_RESET_MASK); 91 - reg &= ~WDT_RESET; 92 - writel(reg, CPU_RESET_MASK); 90 + reg = readl(RSTOUTn_MASK); 91 + reg &= ~WDT_RESET_OUT_EN; 92 + writel(reg, RSTOUTn_MASK); 93 93 94 94 /* Disable watchdog timer */ 95 95 reg = readl(TIMER_CTRL); ··· 99 99 spin_unlock(&wdt_lock); 100 100 } 101 101 102 - static int orion5x_wdt_get_timeleft(int *time_left) 102 + static int orion_wdt_get_timeleft(int *time_left) 103 103 { 104 104 spin_lock(&wdt_lock); 105 105 *time_left = readl(WDT_VAL) / wdt_tclk; ··· 107 107 return 0; 108 108 } 109 109 110 - static int orion5x_wdt_open(struct inode *inode, struct file *file) 110 + static int 
orion_wdt_open(struct inode *inode, struct file *file) 111 111 { 112 112 if (test_and_set_bit(WDT_IN_USE, &wdt_status)) 113 113 return -EBUSY; 114 114 clear_bit(WDT_OK_TO_CLOSE, &wdt_status); 115 - orion5x_wdt_enable(); 115 + orion_wdt_enable(); 116 116 return nonseekable_open(inode, file); 117 117 } 118 118 119 - static ssize_t orion5x_wdt_write(struct file *file, const char *data, 119 + static ssize_t orion_wdt_write(struct file *file, const char *data, 120 120 size_t len, loff_t *ppos) 121 121 { 122 122 if (len) { ··· 133 133 set_bit(WDT_OK_TO_CLOSE, &wdt_status); 134 134 } 135 135 } 136 - orion5x_wdt_ping(); 136 + orion_wdt_ping(); 137 137 } 138 138 return len; 139 139 } 140 140 141 - static int orion5x_wdt_settimeout(int new_time) 141 + static int orion_wdt_settimeout(int new_time) 142 142 { 143 143 if ((new_time <= 0) || (new_time > wdt_max_duration)) 144 144 return -EINVAL; 145 145 146 146 /* Set new watchdog time to be used when 147 - * orion5x_wdt_enable() or orion5x_wdt_ping() is called. */ 147 + * orion_wdt_enable() or orion_wdt_ping() is called. 
*/ 148 148 heartbeat = new_time; 149 149 return 0; 150 150 } ··· 152 152 static const struct watchdog_info ident = { 153 153 .options = WDIOF_MAGICCLOSE | WDIOF_SETTIMEOUT | 154 154 WDIOF_KEEPALIVEPING, 155 - .identity = "Orion5x Watchdog", 155 + .identity = "Orion Watchdog", 156 156 }; 157 157 158 - static long orion5x_wdt_ioctl(struct file *file, unsigned int cmd, 158 + static long orion_wdt_ioctl(struct file *file, unsigned int cmd, 159 159 unsigned long arg) 160 160 { 161 161 int ret = -ENOTTY; ··· 173 173 break; 174 174 175 175 case WDIOC_KEEPALIVE: 176 - orion5x_wdt_ping(); 176 + orion_wdt_ping(); 177 177 ret = 0; 178 178 break; 179 179 ··· 182 182 if (ret) 183 183 break; 184 184 185 - if (orion5x_wdt_settimeout(time)) { 185 + if (orion_wdt_settimeout(time)) { 186 186 ret = -EINVAL; 187 187 break; 188 188 } 189 - orion5x_wdt_ping(); 189 + orion_wdt_ping(); 190 190 /* Fall through */ 191 191 192 192 case WDIOC_GETTIMEOUT: ··· 194 194 break; 195 195 196 196 case WDIOC_GETTIMELEFT: 197 - if (orion5x_wdt_get_timeleft(&time)) { 197 + if (orion_wdt_get_timeleft(&time)) { 198 198 ret = -EINVAL; 199 199 break; 200 200 } ··· 204 204 return ret; 205 205 } 206 206 207 - static int orion5x_wdt_release(struct inode *inode, struct file *file) 207 + static int orion_wdt_release(struct inode *inode, struct file *file) 208 208 { 209 209 if (test_bit(WDT_OK_TO_CLOSE, &wdt_status)) 210 - orion5x_wdt_disable(); 210 + orion_wdt_disable(); 211 211 else 212 212 printk(KERN_CRIT "WATCHDOG: Device closed unexpectedly - " 213 213 "timer will not stop\n"); ··· 218 218 } 219 219 220 220 221 - static const struct file_operations orion5x_wdt_fops = { 221 + static const struct file_operations orion_wdt_fops = { 222 222 .owner = THIS_MODULE, 223 223 .llseek = no_llseek, 224 - .write = orion5x_wdt_write, 225 - .unlocked_ioctl = orion5x_wdt_ioctl, 226 - .open = orion5x_wdt_open, 227 - .release = orion5x_wdt_release, 224 + .write = orion_wdt_write, 225 + .unlocked_ioctl = orion_wdt_ioctl, 226 
+ .open = orion_wdt_open, 227 + .release = orion_wdt_release, 228 228 }; 229 229 230 - static struct miscdevice orion5x_wdt_miscdev = { 230 + static struct miscdevice orion_wdt_miscdev = { 231 231 .minor = WATCHDOG_MINOR, 232 232 .name = "watchdog", 233 - .fops = &orion5x_wdt_fops, 233 + .fops = &orion_wdt_fops, 234 234 }; 235 235 236 - static int __devinit orion5x_wdt_probe(struct platform_device *pdev) 236 + static int __devinit orion_wdt_probe(struct platform_device *pdev) 237 237 { 238 - struct orion5x_wdt_platform_data *pdata = pdev->dev.platform_data; 238 + struct orion_wdt_platform_data *pdata = pdev->dev.platform_data; 239 239 int ret; 240 240 241 241 if (pdata) { 242 242 wdt_tclk = pdata->tclk; 243 243 } else { 244 - printk(KERN_ERR "Orion5x Watchdog misses platform data\n"); 244 + printk(KERN_ERR "Orion Watchdog misses platform data\n"); 245 245 return -ENODEV; 246 246 } 247 247 248 - if (orion5x_wdt_miscdev.parent) 248 + if (orion_wdt_miscdev.parent) 249 249 return -EBUSY; 250 - orion5x_wdt_miscdev.parent = &pdev->dev; 250 + orion_wdt_miscdev.parent = &pdev->dev; 251 251 252 252 wdt_max_duration = WDT_MAX_CYCLE_COUNT / wdt_tclk; 253 - if (orion5x_wdt_settimeout(heartbeat)) 253 + if (orion_wdt_settimeout(heartbeat)) 254 254 heartbeat = wdt_max_duration; 255 255 256 - ret = misc_register(&orion5x_wdt_miscdev); 256 + ret = misc_register(&orion_wdt_miscdev); 257 257 if (ret) 258 258 return ret; 259 259 260 - printk(KERN_INFO "Orion5x Watchdog Timer: Initial timeout %d sec%s\n", 260 + printk(KERN_INFO "Orion Watchdog Timer: Initial timeout %d sec%s\n", 261 261 heartbeat, nowayout ? 
", nowayout" : ""); 262 262 return 0; 263 263 } 264 264 265 - static int __devexit orion5x_wdt_remove(struct platform_device *pdev) 265 + static int __devexit orion_wdt_remove(struct platform_device *pdev) 266 266 { 267 267 int ret; 268 268 269 269 if (test_bit(WDT_IN_USE, &wdt_status)) { 270 - orion5x_wdt_disable(); 270 + orion_wdt_disable(); 271 271 clear_bit(WDT_IN_USE, &wdt_status); 272 272 } 273 273 274 - ret = misc_deregister(&orion5x_wdt_miscdev); 274 + ret = misc_deregister(&orion_wdt_miscdev); 275 275 if (!ret) 276 - orion5x_wdt_miscdev.parent = NULL; 276 + orion_wdt_miscdev.parent = NULL; 277 277 278 278 return ret; 279 279 } 280 280 281 - static void orion5x_wdt_shutdown(struct platform_device *pdev) 281 + static void orion_wdt_shutdown(struct platform_device *pdev) 282 282 { 283 283 if (test_bit(WDT_IN_USE, &wdt_status)) 284 - orion5x_wdt_disable(); 284 + orion_wdt_disable(); 285 285 } 286 286 287 - static struct platform_driver orion5x_wdt_driver = { 288 - .probe = orion5x_wdt_probe, 289 - .remove = __devexit_p(orion5x_wdt_remove), 290 - .shutdown = orion5x_wdt_shutdown, 287 + static struct platform_driver orion_wdt_driver = { 288 + .probe = orion_wdt_probe, 289 + .remove = __devexit_p(orion_wdt_remove), 290 + .shutdown = orion_wdt_shutdown, 291 291 .driver = { 292 292 .owner = THIS_MODULE, 293 - .name = "orion5x_wdt", 293 + .name = "orion_wdt", 294 294 }, 295 295 }; 296 296 297 - static int __init orion5x_wdt_init(void) 297 + static int __init orion_wdt_init(void) 298 298 { 299 299 spin_lock_init(&wdt_lock); 300 - return platform_driver_register(&orion5x_wdt_driver); 300 + return platform_driver_register(&orion_wdt_driver); 301 301 } 302 302 303 - static void __exit orion5x_wdt_exit(void) 303 + static void __exit orion_wdt_exit(void) 304 304 { 305 - platform_driver_unregister(&orion5x_wdt_driver); 305 + platform_driver_unregister(&orion_wdt_driver); 306 306 } 307 307 308 - module_init(orion5x_wdt_init); 309 - module_exit(orion5x_wdt_exit); 308 + 
module_init(orion_wdt_init); 309 + module_exit(orion_wdt_exit); 310 310 311 311 MODULE_AUTHOR("Sylver Bruneau <sylver.bruneau@googlemail.com>"); 312 - MODULE_DESCRIPTION("Orion5x Processor Watchdog"); 312 + MODULE_DESCRIPTION("Orion Processor Watchdog"); 313 313 314 314 module_param(heartbeat, int, 0); 315 315 MODULE_PARM_DESC(heartbeat, "Initial watchdog heartbeat in seconds");
+1
firmware/cis/.gitignore
··· 1 + *.cis
+31 -15
fs/binfmt_flat.c
··· 41 41 #include <asm/uaccess.h> 42 42 #include <asm/unaligned.h> 43 43 #include <asm/cacheflush.h> 44 + #include <asm/page.h> 44 45 45 46 /****************************************************************************/ 46 47 ··· 53 52 #define DBG_FLT(a...) printk(a) 54 53 #else 55 54 #define DBG_FLT(a...) 55 + #endif 56 + 57 + /* 58 + * User data (stack, data section and bss) needs to be aligned 59 + * for the same reasons as SLAB memory is, and to the same amount. 60 + * Avoid duplicating architecture specific code by using the same 61 + * macro as with SLAB allocation: 62 + */ 63 + #ifdef ARCH_SLAB_MINALIGN 64 + #define FLAT_DATA_ALIGN (ARCH_SLAB_MINALIGN) 65 + #else 66 + #define FLAT_DATA_ALIGN (sizeof(void *)) 56 67 #endif 57 68 58 69 #define RELOC_FAILED 0xff00ff01 /* Relocation incorrect somewhere */ ··· 127 114 int envc = bprm->envc; 128 115 char uninitialized_var(dummy); 129 116 130 - sp = (unsigned long *) ((-(unsigned long)sizeof(char *))&(unsigned long) p); 117 + sp = (unsigned long *)p; 118 + sp -= (envc + argc + 2) + 1 + (flat_argvp_envp_on_stack() ? 2 : 0); 119 + sp = (unsigned long *) ((unsigned long)sp & -FLAT_DATA_ALIGN); 120 + argv = sp + 1 + (flat_argvp_envp_on_stack() ? 
2 : 0); 121 + envp = argv + (argc + 1); 131 122 132 - sp -= envc+1; 133 - envp = sp; 134 - sp -= argc+1; 135 - argv = sp; 136 - 137 - flat_stack_align(sp); 138 123 if (flat_argvp_envp_on_stack()) { 139 - --sp; put_user((unsigned long) envp, sp); 140 - --sp; put_user((unsigned long) argv, sp); 124 + put_user((unsigned long) envp, sp + 2); 125 + put_user((unsigned long) argv, sp + 1); 141 126 } 142 127 143 - put_user(argc,--sp); 128 + put_user(argc, sp); 144 129 current->mm->arg_start = (unsigned long) p; 145 130 while (argc-->0) { 146 131 put_user((unsigned long) p, argv++); ··· 569 558 ret = realdatastart; 570 559 goto err; 571 560 } 572 - datapos = realdatastart + MAX_SHARED_LIBS * sizeof(unsigned long); 561 + datapos = ALIGN(realdatastart + 562 + MAX_SHARED_LIBS * sizeof(unsigned long), 563 + FLAT_DATA_ALIGN); 573 564 574 565 DBG_FLT("BINFMT_FLAT: Allocated data+bss+stack (%d bytes): %x\n", 575 566 (int)(data_len + bss_len + stack_len), (int)datapos); ··· 617 604 } 618 605 619 606 realdatastart = textpos + ntohl(hdr->data_start); 620 - datapos = realdatastart + MAX_SHARED_LIBS * sizeof(unsigned long); 621 - reloc = (unsigned long *) (textpos + ntohl(hdr->reloc_start) + 622 - MAX_SHARED_LIBS * sizeof(unsigned long)); 607 + datapos = ALIGN(realdatastart + 608 + MAX_SHARED_LIBS * sizeof(unsigned long), 609 + FLAT_DATA_ALIGN); 610 + 611 + reloc = (unsigned long *) 612 + (datapos + (ntohl(hdr->reloc_start) - text_len)); 623 613 memp = textpos; 624 614 memp_size = len; 625 615 #ifdef CONFIG_BINFMT_ZFLAT ··· 870 854 stack_len = TOP_OF_ARGS - bprm->p; /* the strings */ 871 855 stack_len += (bprm->argc + 1) * sizeof(char *); /* the argv array */ 872 856 stack_len += (bprm->envc + 1) * sizeof(char *); /* the envp array */ 873 - 857 + stack_len += FLAT_DATA_ALIGN - 1; /* reserve for upcoming alignment */ 874 858 875 859 res = load_flat_file(bprm, &libinfo, 0, &stack_len); 876 860 if (res > (unsigned long)-4096)
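Two alignment idioms appear in the binfmt_flat hunk: `ALIGN(x, a)` rounds up to the next multiple of `a`, while `(unsigned long)sp & -a` rounds the stack pointer down (the stack grows downward, so rounding down keeps it inside the allocated region). Both assume `a` is a power of two. A standalone sketch of the two operations:

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to the next multiple of a (a must be a power of two),
 * as the kernel's ALIGN() macro does. */
static uintptr_t align_up(uintptr_t x, uintptr_t a)
{
    return (x + a - 1) & ~(a - 1);
}

/* Round a downward-growing stack pointer down to an a-byte boundary,
 * matching the `sp & -FLAT_DATA_ALIGN` expression in the patch:
 * -a in unsigned arithmetic is the mask with the low bits cleared. */
static uintptr_t align_down(uintptr_t x, uintptr_t a)
{
    return x & -a;
}
```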
+9 -9
fs/cachefiles/internal.h
··· 122 122 } 123 123 124 124 /* 125 - * cf-bind.c 125 + * bind.c 126 126 */ 127 127 extern int cachefiles_daemon_bind(struct cachefiles_cache *cache, char *args); 128 128 extern void cachefiles_daemon_unbind(struct cachefiles_cache *cache); 129 129 130 130 /* 131 - * cf-daemon.c 131 + * daemon.c 132 132 */ 133 133 extern const struct file_operations cachefiles_daemon_fops; 134 134 ··· 136 136 unsigned fnr, unsigned bnr); 137 137 138 138 /* 139 - * cf-interface.c 139 + * interface.c 140 140 */ 141 141 extern const struct fscache_cache_ops cachefiles_cache_ops; 142 142 143 143 /* 144 - * cf-key.c 144 + * key.c 145 145 */ 146 146 extern char *cachefiles_cook_key(const u8 *raw, int keylen, uint8_t type); 147 147 148 148 /* 149 - * cf-namei.c 149 + * namei.c 150 150 */ 151 151 extern int cachefiles_delete_object(struct cachefiles_cache *cache, 152 152 struct cachefiles_object *object); ··· 165 165 struct dentry *dir, char *filename); 166 166 167 167 /* 168 - * cf-proc.c 168 + * proc.c 169 169 */ 170 170 #ifdef CONFIG_CACHEFILES_HISTOGRAM 171 171 extern atomic_t cachefiles_lookup_histogram[HZ]; ··· 190 190 #endif 191 191 192 192 /* 193 - * cf-rdwr.c 193 + * rdwr.c 194 194 */ 195 195 extern int cachefiles_read_or_alloc_page(struct fscache_retrieval *, 196 196 struct page *, gfp_t); ··· 205 205 extern void cachefiles_uncache_page(struct fscache_object *, struct page *); 206 206 207 207 /* 208 - * cf-security.c 208 + * security.c 209 209 */ 210 210 extern int cachefiles_get_security_ID(struct cachefiles_cache *cache); 211 211 extern int cachefiles_determine_cache_security(struct cachefiles_cache *cache, ··· 225 225 } 226 226 227 227 /* 228 - * cf-xattr.c 228 + * xattr.c 229 229 */ 230 230 extern int cachefiles_check_object_type(struct cachefiles_object *object); 231 231 extern int cachefiles_set_object_xattr(struct cachefiles_object *object,
+9 -9
fs/fscache/internal.h
··· 28 28 #define FSCACHE_MAX_THREADS 32 29 29 30 30 /* 31 - * fsc-cache.c 31 + * cache.c 32 32 */ 33 33 extern struct list_head fscache_cache_list; 34 34 extern struct rw_semaphore fscache_addremove_sem; ··· 37 37 struct fscache_cookie *); 38 38 39 39 /* 40 - * fsc-cookie.c 40 + * cookie.c 41 41 */ 42 42 extern struct kmem_cache *fscache_cookie_jar; 43 43 ··· 45 45 extern void __fscache_cookie_put(struct fscache_cookie *); 46 46 47 47 /* 48 - * fsc-fsdef.c 48 + * fsdef.c 49 49 */ 50 50 extern struct fscache_cookie fscache_fsdef_index; 51 51 extern struct fscache_cookie_def fscache_fsdef_netfs_def; 52 52 53 53 /* 54 - * fsc-histogram.c 54 + * histogram.c 55 55 */ 56 56 #ifdef CONFIG_FSCACHE_HISTOGRAM 57 57 extern atomic_t fscache_obj_instantiate_histogram[HZ]; ··· 75 75 #endif 76 76 77 77 /* 78 - * fsc-main.c 78 + * main.c 79 79 */ 80 80 extern unsigned fscache_defer_lookup; 81 81 extern unsigned fscache_defer_create; ··· 86 86 extern int fscache_wait_bit_interruptible(void *); 87 87 88 88 /* 89 - * fsc-object.c 89 + * object.c 90 90 */ 91 91 extern void fscache_withdrawing_object(struct fscache_cache *, 92 92 struct fscache_object *); 93 93 extern void fscache_enqueue_object(struct fscache_object *); 94 94 95 95 /* 96 - * fsc-operation.c 96 + * operation.c 97 97 */ 98 98 extern int fscache_submit_exclusive_op(struct fscache_object *, 99 99 struct fscache_operation *); ··· 104 104 extern void fscache_operation_gc(struct work_struct *); 105 105 106 106 /* 107 - * fsc-proc.c 107 + * proc.c 108 108 */ 109 109 #ifdef CONFIG_PROC_FS 110 110 extern int __init fscache_proc_init(void); ··· 115 115 #endif 116 116 117 117 /* 118 - * fsc-stats.c 118 + * stats.c 119 119 */ 120 120 #ifdef CONFIG_FSCACHE_STATS 121 121 extern atomic_t fscache_n_ops_processed[FSCACHE_MAX_THREADS];
-7
fs/jffs2/erase.c
··· 480 480 return; 481 481 482 482 filebad: 483 - mutex_lock(&c->erase_free_sem); 484 - spin_lock(&c->erase_completion_lock); 485 - /* Stick it on a list (any list) so erase_failed can take it 486 - right off again. Silly, but shouldn't happen often. */ 487 - list_move(&jeb->list, &c->erasing_list); 488 - spin_unlock(&c->erase_completion_lock); 489 - mutex_unlock(&c->erase_free_sem); 490 483 jffs2_erase_failed(c, jeb, bad_offset); 491 484 return; 492 485
+3 -6
fs/nfs/nfs4proc.c
··· 2594 2594 unsigned long timestamp = (unsigned long)data; 2595 2595 2596 2596 if (task->tk_status < 0) { 2597 - switch (task->tk_status) { 2598 - case -NFS4ERR_STALE_CLIENTID: 2599 - case -NFS4ERR_EXPIRED: 2600 - case -NFS4ERR_CB_PATH_DOWN: 2601 - nfs4_schedule_state_recovery(clp); 2602 - } 2597 + /* Unless we're shutting down, schedule state recovery! */ 2598 + if (test_bit(NFS_CS_RENEWD, &clp->cl_res_state) != 0) 2599 + nfs4_schedule_state_recovery(clp); 2603 2600 return; 2604 2601 } 2605 2602 spin_lock(&clp->cl_lock);
+1 -1
fs/nfs/nfsroot.c
··· 129 129 Opt_err 130 130 }; 131 131 132 - static match_table_t __initconst tokens = { 132 + static const match_table_t tokens __initconst = { 133 133 {Opt_port, "port=%u"}, 134 134 {Opt_rsize, "rsize=%u"}, 135 135 {Opt_wsize, "wsize=%u"},
+3 -3
fs/nfsd/vfs.c
··· 1015 1015 host_err = vfs_writev(file, (struct iovec __user *)vec, vlen, &offset); 1016 1016 set_fs(oldfs); 1017 1017 if (host_err >= 0) { 1018 + *cnt = host_err; 1018 1019 nfsdstats.io_write += host_err; 1019 1020 fsnotify_modify(file->f_path.dentry); 1020 1021 } ··· 1061 1060 } 1062 1061 1063 1062 dprintk("nfsd: write complete host_err=%d\n", host_err); 1064 - if (host_err >= 0) { 1063 + if (host_err >= 0) 1065 1064 err = 0; 1066 - *cnt = host_err; 1067 - } else 1065 + else 1068 1066 err = nfserrno(host_err); 1069 1067 out: 1070 1068 return err;
+4 -2
fs/nilfs2/cpfile.c
··· 311 311 ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 0, &cp_bh); 312 312 if (ret < 0) { 313 313 if (ret != -ENOENT) 314 - goto out_sem; 314 + goto out_header; 315 315 /* skip hole */ 316 316 ret = 0; 317 317 continue; ··· 344 344 continue; 345 345 printk(KERN_ERR "%s: cannot delete block\n", 346 346 __func__); 347 - goto out_sem; 347 + goto out_header; 348 348 } 349 349 } 350 350 ··· 361 361 nilfs_mdt_mark_dirty(cpfile); 362 362 kunmap_atomic(kaddr, KM_USER0); 363 363 } 364 + 365 + out_header: 364 366 brelse(header_bh); 365 367 366 368 out_sem:
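The nilfs2 fix redirects error exits through the new `out_header` label so `brelse(header_bh)` runs on every path that acquired the header block, not only on success. The staged-label cleanup idiom in a self-contained form (the resource counters and error value are illustrative):

```c
#include <assert.h>

static int header_refs, sem_held;

/* Error paths jump to the innermost label covering everything acquired
 * so far; labels fall through so earlier resources are released too. */
static int delete_checkpoints(int fail_midway)
{
    int ret = 0;

    sem_held = 1;              /* down_write(&sem)       */
    header_refs++;             /* get header block       */

    if (fail_midway) {
        ret = -5;              /* illustrative -EIO      */
        goto out_header;       /* header held: drop it, then the sem */
    }

 out_header:
    header_refs--;             /* brelse(header_bh)      */
    sem_held = 0;              /* up_write(&sem)         */
    return ret;
}
```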
+1 -1
fs/proc/base.c
··· 1956 1956 const struct pid_entry *p = ptr; 1957 1957 struct inode *inode; 1958 1958 struct proc_inode *ei; 1959 - struct dentry *error = ERR_PTR(-EINVAL); 1959 + struct dentry *error = ERR_PTR(-ENOENT); 1960 1960 1961 1961 inode = proc_pid_make_inode(dir->i_sb, task); 1962 1962 if (!inode)
+1 -1
fs/sysfs/file.c
··· 723 723 mutex_unlock(&sysfs_workq_mutex); 724 724 725 725 if (sysfs_workqueue == NULL) { 726 - sysfs_workqueue = create_workqueue("sysfsd"); 726 + sysfs_workqueue = create_singlethread_workqueue("sysfsd"); 727 727 if (sysfs_workqueue == NULL) { 728 728 module_put(owner); 729 729 return -ENOMEM;
+1 -1
fs/xfs/linux-2.6/kmem.h
··· 103 103 static inline int 104 104 kmem_shake_allow(gfp_t gfp_mask) 105 105 { 106 - return (gfp_mask & __GFP_WAIT) != 0; 106 + return ((gfp_mask & __GFP_WAIT) && (gfp_mask & __GFP_FS)); 107 107 } 108 108 109 109 #endif /* __XFS_SUPPORT_KMEM_H__ */
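The tightened `kmem_shake_allow()` requires both `__GFP_WAIT` and `__GFP_FS`: a reclaim caller that may sleep but must not re-enter filesystem code cannot be allowed to trigger the XFS shaker. The same predicate as a plain bitmask check (the flag values here are illustrative, not the kernel's real `__GFP_*` constants):

```c
#include <assert.h>

/* Illustrative flag bits; the real values live in gfp.h. */
#define GFP_WAIT 0x10u  /* allocation may sleep           */
#define GFP_FS   0x80u  /* allocation may re-enter the FS */

/* Shaking is allowed only when the caller can both sleep and safely
 * re-enter filesystem code, mirroring the patched kmem_shake_allow(). */
static int shake_allow(unsigned int gfp_mask)
{
    return (gfp_mask & GFP_WAIT) && (gfp_mask & GFP_FS);
}
```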
+5 -3
fs/xfs/xfs_dfrag.c
··· 347 347 348 348 error = xfs_trans_commit(tp, XFS_TRANS_SWAPEXT); 349 349 350 - out_unlock: 351 - xfs_iunlock(ip, XFS_ILOCK_EXCL | XFS_IOLOCK_EXCL); 352 - xfs_iunlock(tip, XFS_ILOCK_EXCL | XFS_IOLOCK_EXCL); 353 350 out: 354 351 kmem_free(tempifp); 355 352 return error; 353 + 354 + out_unlock: 355 + xfs_iunlock(ip, XFS_ILOCK_EXCL | XFS_IOLOCK_EXCL); 356 + xfs_iunlock(tip, XFS_ILOCK_EXCL | XFS_IOLOCK_EXCL); 357 + goto out; 356 358 357 359 out_trans_cancel: 358 360 xfs_trans_cancel(tp, 0);
+1 -1
fs/xfs/xfs_fsops.c
··· 160 160 nagcount = new + (nb_mod != 0); 161 161 if (nb_mod && nb_mod < XFS_MIN_AG_BLOCKS) { 162 162 nagcount--; 163 - nb = nagcount * mp->m_sb.sb_agblocks; 163 + nb = (xfs_rfsblock_t)nagcount * mp->m_sb.sb_agblocks; 164 164 if (nb < mp->m_sb.sb_dblocks) 165 165 return XFS_ERROR(EINVAL); 166 166 }
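Casting one operand to `xfs_rfsblock_t` (64-bit) before the multiply is what prevents the product from wrapping: without the cast, `nagcount * sb_agblocks` is evaluated in 32-bit arithmetic even though the result is assigned to a 64-bit variable. A minimal demonstration of the difference (function names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Total blocks = agcount * agblocks without 32-bit wraparound:
 * widening one operand promotes the whole multiply to 64 bits. */
static uint64_t total_blocks(uint32_t agcount, uint32_t agblocks)
{
    return (uint64_t)agcount * agblocks;
}

/* The buggy form: the multiply wraps modulo 2^32 *before* the widening
 * assignment, which is exactly what the patch fixes. */
static uint64_t total_blocks_buggy(uint32_t agcount, uint32_t agblocks)
{
    return agcount * agblocks;   /* 32-bit multiply, then widen */
}
```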
+24
include/drm/drmP.h
··· 1519 1519 { 1520 1520 return kcalloc(nmemb, size, GFP_KERNEL); 1521 1521 } 1522 + 1523 + static __inline__ void *drm_calloc_large(size_t nmemb, size_t size) 1524 + { 1525 + u8 *addr; 1526 + 1527 + if (size <= PAGE_SIZE) 1528 + return kcalloc(nmemb, size, GFP_KERNEL); 1529 + 1530 + addr = vmalloc(nmemb * size); 1531 + if (!addr) 1532 + return NULL; 1533 + 1534 + memset(addr, 0, nmemb * size); 1535 + 1536 + return addr; 1537 + } 1538 + 1539 + static __inline void drm_free_large(void *ptr) 1540 + { 1541 + if (!is_vmalloc_addr(ptr)) 1542 + return kfree(ptr); 1543 + 1544 + vfree(ptr); 1545 + } 1522 1546 #else 1523 1547 extern void *drm_alloc(size_t size, int area); 1524 1548 extern void drm_free(void *pt, size_t size, int area);
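`drm_calloc_large()` chooses its allocator by size: small requests use `kcalloc()` (physically contiguous), large ones fall back to `vmalloc()` plus an explicit `memset()`, and `drm_free_large()` inspects the address to pick the matching free. A userspace analogue of the size-dispatch idea (threshold and names are illustrative; userspace has no vmalloc, so both paths use libc):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SMALL_ALLOC_LIMIT 4096  /* stand-in for PAGE_SIZE */

/* Zeroed array allocation that switches strategy on element size,
 * mirroring the shape of drm_calloc_large(): the fallback path must
 * zero the memory itself, since plain malloc/vmalloc do not. */
static void *calloc_large(size_t nmemb, size_t size)
{
    unsigned char *addr;

    if (size <= SMALL_ALLOC_LIMIT)
        return calloc(nmemb, size); /* "small" path: kcalloc analogue */

    addr = malloc(nmemb * size);    /* "large" path: vmalloc analogue */
    if (!addr)
        return NULL;
    memset(addr, 0, nmemb * size);
    return addr;
}
```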
+1 -2
include/linux/auto_fs.h
··· 14 14 #ifndef _LINUX_AUTO_FS_H 15 15 #define _LINUX_AUTO_FS_H 16 16 17 + #include <linux/types.h> 17 18 #ifdef __KERNEL__ 18 19 #include <linux/fs.h> 19 20 #include <linux/limits.h> 20 - #include <linux/types.h> 21 21 #include <linux/ioctl.h> 22 22 #else 23 - #include <asm/types.h> 24 23 #include <sys/ioctl.h> 25 24 #endif /* __KERNEL__ */ 26 25
+1
include/linux/cred.h
··· 13 13 #define _LINUX_CRED_H 14 14 15 15 #include <linux/capability.h> 16 + #include <linux/init.h> 16 17 #include <linux/key.h> 17 18 #include <asm/atomic.h> 18 19
+10 -10
include/linux/i7300_idle.h
··· 16 16 struct fbd_ioat { 17 17 unsigned int vendor; 18 18 unsigned int ioat_dev; 19 + unsigned int enabled; 19 20 }; 20 21 21 22 /* 22 23 * The i5000 chip-set has the same hooks as the i7300 23 - * but support is disabled by default because this driver 24 - * has not been validated on that platform. 24 + * but it is not enabled by default and must be manually 25 + * manually enabled with "forceload=1" because it is 26 + * only lightly validated. 25 27 */ 26 - #define SUPPORT_I5000 0 27 28 28 29 static const struct fbd_ioat fbd_ioat_list[] = { 29 - {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB}, 30 - #if SUPPORT_I5000 31 - {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT}, 32 - #endif 30 + {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_CNB, 1}, 31 + {PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT, 0}, 33 32 {0, 0} 34 33 }; 35 34 36 35 /* table of devices that work with this driver */ 37 36 static const struct pci_device_id pci_tbl[] = { 38 37 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_FBD_CNB) }, 39 - #if SUPPORT_I5000 40 38 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_5000_ERR) }, 41 - #endif 42 39 { } /* Terminating entry */ 43 40 }; 44 41 45 42 /* Check for known platforms with I/O-AT */ 46 43 static inline int i7300_idle_platform_probe(struct pci_dev **fbd_dev, 47 - struct pci_dev **ioat_dev) 44 + struct pci_dev **ioat_dev, 45 + int enable_all) 48 46 { 49 47 int i; 50 48 struct pci_dev *memdev, *dmadev; ··· 67 69 for (i = 0; fbd_ioat_list[i].vendor != 0; i++) { 68 70 if (dmadev->vendor == fbd_ioat_list[i].vendor && 69 71 dmadev->device == fbd_ioat_list[i].ioat_dev) { 72 + if (!(fbd_ioat_list[i].enabled || enable_all)) 73 + continue; 70 74 if (fbd_dev) 71 75 *fbd_dev = memdev; 72 76 if (ioat_dev)
+1
include/linux/input.h
··· 656 656 #define ABS_MT_POSITION_Y 0x36 /* Center Y ellipse position */ 657 657 #define ABS_MT_TOOL_TYPE 0x37 /* Type of touching device */ 658 658 #define ABS_MT_BLOB_ID 0x38 /* Group a set of packets as a blob */ 659 + #define ABS_MT_TRACKING_ID 0x39 /* Unique ID of initiated contact */ 659 660 660 661 #define ABS_MAX 0x3f 661 662 #define ABS_CNT (ABS_MAX+1)
+1
include/linux/net_dropmon.h
··· 1 1 #ifndef __NET_DROPMON_H 2 2 #define __NET_DROPMON_H 3 3 4 + #include <linux/types.h> 4 5 #include <linux/netlink.h> 5 6 6 7 struct net_dm_drop_point {
+4
include/linux/netfilter/nf_conntrack_tcp.h
··· 35 35 /* Has unacknowledged data */ 36 36 #define IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED 0x10 37 37 38 + /* The field td_maxack has been set */ 39 + #define IP_CT_TCP_FLAG_MAXACK_SET 0x20 40 + 38 41 struct nf_ct_tcp_flags { 39 42 __u8 flags; 40 43 __u8 mask; ··· 49 46 u_int32_t td_end; /* max of seq + len */ 50 47 u_int32_t td_maxend; /* max of ack + max(win, 1) */ 51 48 u_int32_t td_maxwin; /* max(win) */ 49 + u_int32_t td_maxack; /* max of ack */ 52 50 u_int8_t td_scale; /* window scale factor */ 53 51 u_int8_t flags; /* per direction options */ 54 52 };
+4
include/linux/parport.h
··· 324 324 int spintime; 325 325 atomic_t ref_count; 326 326 327 + unsigned long devflags; 328 + #define PARPORT_DEVPROC_REGISTERED 0 329 + struct pardevice *proc_device; /* Currently register proc device */ 330 + 327 331 struct list_head full_list; 328 332 struct parport *slaves[3]; 329 333 };
+5
include/linux/swap.h
··· 437 437 return 0; 438 438 } 439 439 440 + static inline void 441 + mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent) 442 + { 443 + } 444 + 440 445 #endif /* CONFIG_SWAP */ 441 446 #endif /* __KERNEL__*/ 442 447 #endif /* _LINUX_SWAP_H */
+1 -2
init/main.c
··· 566 566 tick_init(); 567 567 boot_cpu_init(); 568 568 page_address_init(); 569 - printk(KERN_NOTICE); 570 - printk(linux_banner); 569 + printk(KERN_NOTICE "%s", linux_banner); 571 570 setup_arch(&command_line); 572 571 mm_init_owner(&init_mm, &init_task); 573 572 setup_command_line(command_line);
+12 -8
kernel/async.c
··· 92 92 static async_cookie_t __lowest_in_progress(struct list_head *running) 93 93 { 94 94 struct async_entry *entry; 95 + async_cookie_t ret = next_cookie; /* begin with "infinity" value */ 96 + 95 97 if (!list_empty(running)) { 96 98 entry = list_first_entry(running, 97 99 struct async_entry, list); 98 - return entry->cookie; 99 - } else if (!list_empty(&async_pending)) { 100 - entry = list_first_entry(&async_pending, 101 - struct async_entry, list); 102 - return entry->cookie; 103 - } else { 104 - /* nothing in progress... next_cookie is "infinity" */ 105 - return next_cookie; 100 + ret = entry->cookie; 106 101 } 107 102 103 + if (!list_empty(&async_pending)) { 104 + list_for_each_entry(entry, &async_pending, list) 105 + if (entry->running == running) { 106 + ret = entry->cookie; 107 + break; 108 + } 109 + } 110 + 111 + return ret; 108 112 } 109 113 110 114 static async_cookie_t lowest_in_progress(struct list_head *running)
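The reworked `__lowest_in_progress()` starts from the "infinity" value `next_cookie`, lowers it to the head of the running list if one exists, and then also scans the pending list for the first entry destined for that same running list. A userspace sketch of that scan — the `entry` type and field names here are illustrative stand-ins, not the kernel's `struct async_entry`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's async entries: a cookie plus a
 * pointer identifying which running list the entry belongs (or will
 * belong) to. */
struct entry {
    unsigned long long cookie;
    const void *running;        /* target running list */
    struct entry *next;
};

/* Mirror of the patched logic: begin with next_cookie ("infinity"),
 * take the head of the running list if non-empty, then let the first
 * pending entry aimed at the same running list override it. */
static unsigned long long lowest_in_progress(const struct entry *running_head,
                                             const struct entry *pending_head,
                                             const void *running,
                                             unsigned long long next_cookie)
{
    unsigned long long ret = next_cookie;

    if (running_head)
        ret = running_head->cookie;

    for (const struct entry *e = pending_head; e; e = e->next) {
        if (e->running == running) {
            ret = e->cookie;
            break;
        }
    }
    return ret;
}
```

The pending scan is the substance of the fix: before the patch, a pending entry destined for a *different* running list could be reported as the lowest cookie in progress on this one.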
-2
kernel/kexec.c
··· 1451 1451 error = device_suspend(PMSG_FREEZE); 1452 1452 if (error) 1453 1453 goto Resume_console; 1454 - device_pm_lock(); 1455 1454 /* At this point, device_suspend() has been called, 1456 1455 * but *not* device_power_down(). We *must* 1457 1456 * device_power_down() now. Otherwise, drivers for ··· 1488 1489 enable_nonboot_cpus(); 1489 1490 device_power_up(PMSG_RESTORE); 1490 1491 Resume_devices: 1491 - device_pm_unlock(); 1492 1492 device_resume(PMSG_RESTORE); 1493 1493 Resume_console: 1494 1494 resume_console();
+3 -1
kernel/kmod.c
··· 370 370 sub_info->argv = argv; 371 371 sub_info->envp = envp; 372 372 sub_info->cred = prepare_usermodehelper_creds(); 373 - if (!sub_info->cred) 373 + if (!sub_info->cred) { 374 + kfree(sub_info); 374 375 return NULL; 376 + } 375 377 376 378 out: 377 379 return sub_info;
+3 -18
kernel/power/disk.c
··· 215 215 if (error) 216 216 return error; 217 217 218 - device_pm_lock(); 219 - 220 218 /* At this point, device_suspend() has been called, but *not* 221 219 * device_power_down(). We *must* call device_power_down() now. 222 220 * Otherwise, drivers for some devices (e.g. interrupt controllers) ··· 225 227 if (error) { 226 228 printk(KERN_ERR "PM: Some devices failed to power down, " 227 229 "aborting hibernation\n"); 228 - goto Unlock; 230 + return error; 229 231 } 230 232 231 233 error = platform_pre_snapshot(platform_mode); ··· 277 279 278 280 device_power_up(in_suspend ? 279 281 (error ? PMSG_RECOVER : PMSG_THAW) : PMSG_RESTORE); 280 - 281 - Unlock: 282 - device_pm_unlock(); 283 282 284 283 return error; 285 284 } ··· 339 344 { 340 345 int error; 341 346 342 - device_pm_lock(); 343 - 344 347 error = device_power_down(PMSG_QUIESCE); 345 348 if (error) { 346 349 printk(KERN_ERR "PM: Some devices failed to power down, " 347 350 "aborting resume\n"); 348 - goto Unlock; 351 + return error; 349 352 } 350 353 351 354 error = platform_pre_restore(platform_mode); ··· 395 402 platform_restore_cleanup(platform_mode); 396 403 397 404 device_power_up(PMSG_RECOVER); 398 - 399 - Unlock: 400 - device_pm_unlock(); 401 405 402 406 return error; 403 407 } ··· 454 464 goto Resume_devices; 455 465 } 456 466 457 - device_pm_lock(); 458 - 459 467 error = device_power_down(PMSG_HIBERNATE); 460 468 if (error) 461 - goto Unlock; 469 + goto Resume_devices; 462 470 463 471 error = hibernation_ops->prepare(); 464 472 if (error) ··· 480 492 hibernation_ops->finish(); 481 493 482 494 device_power_up(PMSG_RESTORE); 483 - 484 - Unlock: 485 - device_pm_unlock(); 486 495 487 496 Resume_devices: 488 497 entering_platform_hibernation = false;
+1 -6
kernel/power/main.c
··· 289 289 { 290 290 int error; 291 291 292 - device_pm_lock(); 293 - 294 292 if (suspend_ops->prepare) { 295 293 error = suspend_ops->prepare(); 296 294 if (error) 297 - goto Done; 295 + return error; 298 296 } 299 297 300 298 error = device_power_down(PMSG_SUSPEND); ··· 340 342 Platfrom_finish: 341 343 if (suspend_ops->finish) 342 344 suspend_ops->finish(); 343 - 344 - Done: 345 - device_pm_unlock(); 346 345 347 346 return error; 348 347 }
+3 -3
mm/filemap.c
··· 121 121 mapping->nrpages--; 122 122 __dec_zone_page_state(page, NR_FILE_PAGES); 123 123 BUG_ON(page_mapped(page)); 124 - mem_cgroup_uncharge_cache_page(page); 125 124 126 125 /* 127 126 * Some filesystems seem to re-dirty the page even after ··· 144 145 spin_lock_irq(&mapping->tree_lock); 145 146 __remove_from_page_cache(page); 146 147 spin_unlock_irq(&mapping->tree_lock); 148 + mem_cgroup_uncharge_cache_page(page); 147 149 } 148 150 149 151 static int sync_page(void *word) ··· 476 476 if (likely(!error)) { 477 477 mapping->nrpages++; 478 478 __inc_zone_page_state(page, NR_FILE_PAGES); 479 + spin_unlock_irq(&mapping->tree_lock); 479 480 } else { 480 481 page->mapping = NULL; 482 + spin_unlock_irq(&mapping->tree_lock); 481 483 mem_cgroup_uncharge_cache_page(page); 482 484 page_cache_release(page); 483 485 } 484 - 485 - spin_unlock_irq(&mapping->tree_lock); 486 486 radix_tree_preload_end(); 487 487 } else 488 488 mem_cgroup_uncharge_cache_page(page);
+13 -13
mm/hugetlb.c
··· 316 316 static struct resv_map *vma_resv_map(struct vm_area_struct *vma) 317 317 { 318 318 VM_BUG_ON(!is_vm_hugetlb_page(vma)); 319 - if (!(vma->vm_flags & VM_SHARED)) 319 + if (!(vma->vm_flags & VM_MAYSHARE)) 320 320 return (struct resv_map *)(get_vma_private_data(vma) & 321 321 ~HPAGE_RESV_MASK); 322 322 return NULL; ··· 325 325 static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map) 326 326 { 327 327 VM_BUG_ON(!is_vm_hugetlb_page(vma)); 328 - VM_BUG_ON(vma->vm_flags & VM_SHARED); 328 + VM_BUG_ON(vma->vm_flags & VM_MAYSHARE); 329 329 330 330 set_vma_private_data(vma, (get_vma_private_data(vma) & 331 331 HPAGE_RESV_MASK) | (unsigned long)map); ··· 334 334 static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags) 335 335 { 336 336 VM_BUG_ON(!is_vm_hugetlb_page(vma)); 337 - VM_BUG_ON(vma->vm_flags & VM_SHARED); 337 + VM_BUG_ON(vma->vm_flags & VM_MAYSHARE); 338 338 339 339 set_vma_private_data(vma, get_vma_private_data(vma) | flags); 340 340 } ··· 353 353 if (vma->vm_flags & VM_NORESERVE) 354 354 return; 355 355 356 - if (vma->vm_flags & VM_SHARED) { 356 + if (vma->vm_flags & VM_MAYSHARE) { 357 357 /* Shared mappings always use reserves */ 358 358 h->resv_huge_pages--; 359 359 } else if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) { ··· 369 369 void reset_vma_resv_huge_pages(struct vm_area_struct *vma) 370 370 { 371 371 VM_BUG_ON(!is_vm_hugetlb_page(vma)); 372 - if (!(vma->vm_flags & VM_SHARED)) 372 + if (!(vma->vm_flags & VM_MAYSHARE)) 373 373 vma->vm_private_data = (void *)0; 374 374 } 375 375 376 376 /* Returns true if the VMA has associated reserve pages */ 377 377 static int vma_has_reserves(struct vm_area_struct *vma) 378 378 { 379 - if (vma->vm_flags & VM_SHARED) 379 + if (vma->vm_flags & VM_MAYSHARE) 380 380 return 1; 381 381 if (is_vma_resv_set(vma, HPAGE_RESV_OWNER)) 382 382 return 1; ··· 924 924 struct address_space *mapping = vma->vm_file->f_mapping; 925 925 struct inode *inode = mapping->host; 926 926 927 - if (vma->vm_flags & VM_SHARED) { 927 + if (vma->vm_flags & VM_MAYSHARE) { 928 928 pgoff_t idx = vma_hugecache_offset(h, vma, addr); 929 929 return region_chg(&inode->i_mapping->private_list, 930 930 idx, idx + 1); ··· 949 949 struct address_space *mapping = vma->vm_file->f_mapping; 950 950 struct inode *inode = mapping->host; 951 951 952 - if (vma->vm_flags & VM_SHARED) { 952 + if (vma->vm_flags & VM_MAYSHARE) { 953 953 pgoff_t idx = vma_hugecache_offset(h, vma, addr); 954 954 region_add(&inode->i_mapping->private_list, idx, idx + 1); 955 955 ··· 1893 1893 * at the time of fork() could consume its reserves on COW instead 1894 1894 * of the full address range. 1895 1895 */ 1896 - if (!(vma->vm_flags & VM_SHARED) && 1896 + if (!(vma->vm_flags & VM_MAYSHARE) && 1897 1897 is_vma_resv_set(vma, HPAGE_RESV_OWNER) && 1898 1898 old_page != pagecache_page) 1899 1899 outside_reserve = 1; ··· 2000 2000 clear_huge_page(page, address, huge_page_size(h)); 2001 2001 __SetPageUptodate(page); 2002 2002 2003 - if (vma->vm_flags & VM_SHARED) { 2003 + if (vma->vm_flags & VM_MAYSHARE) { 2004 2004 int err; 2005 2005 struct inode *inode = mapping->host; 2006 2006 ··· 2104 2104 goto out_mutex; 2105 2105 } 2106 2106 2107 - if (!(vma->vm_flags & VM_SHARED)) 2107 + if (!(vma->vm_flags & VM_MAYSHARE)) 2108 2108 pagecache_page = hugetlbfs_pagecache_page(h, 2109 2109 vma, address); 2110 2110 } ··· 2289 2289 * to reserve the full area even if read-only as mprotect() may be 2290 2290 * called to make the mapping read-write. Assume !vma is a shm mapping 2291 2291 */ 2292 - if (!vma || vma->vm_flags & VM_SHARED) 2292 + if (!vma || vma->vm_flags & VM_MAYSHARE) 2293 2293 chg = region_chg(&inode->i_mapping->private_list, from, to); 2294 2294 else { 2295 2295 struct resv_map *resv_map = resv_map_alloc(); ··· 2330 2330 * consumed reservations are stored in the map. Hence, nothing 2331 2331 * else has to be done for private mappings here 2332 2332 */ 2333 - if (!vma || vma->vm_flags & VM_SHARED) 2333 + if (!vma || vma->vm_flags & VM_MAYSHARE) 2334 2334 region_add(&inode->i_mapping->private_list, from, to); 2335 2335 return 0; 2336 2336 }
+4 -10
mm/memcontrol.c
··· 314 314 return mem; 315 315 } 316 316 317 - static bool mem_cgroup_is_obsolete(struct mem_cgroup *mem) 318 - { 319 - if (!mem) 320 - return true; 321 - return css_is_removed(&mem->css); 322 - } 323 - 324 - 325 317 /* 326 318 * Call callback function against all cgroup under hierarchy tree. 327 319 */ ··· 924 932 if (unlikely(!mem)) 925 933 return 0; 926 934 927 - VM_BUG_ON(!mem || mem_cgroup_is_obsolete(mem)); 935 + VM_BUG_ON(css_is_removed(&mem->css)); 928 936 929 937 while (1) { 930 938 int ret; ··· 1480 1488 __mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_CACHE); 1481 1489 } 1482 1490 1491 + #ifdef CONFIG_SWAP 1483 1492 /* 1484 - * called from __delete_from_swap_cache() and drop "page" account. 1493 + * called after __delete_from_swap_cache() and drop "page" account. 1485 1494 * memcg information is recorded to swap_cgroup of "ent" 1486 1495 */ 1487 1496 void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent) ··· 1499 1506 if (memcg) 1500 1507 css_put(&memcg->css); 1501 1508 } 1509 + #endif 1502 1510 1503 1511 #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP 1504 1512 /*
+15 -9
mm/oom_kill.c
··· 284 284 printk(KERN_INFO "[ pid ] uid tgid total_vm rss cpu oom_adj " 285 285 "name\n"); 286 286 do_each_thread(g, p) { 287 - /* 288 - * total_vm and rss sizes do not exist for tasks with a 289 - * detached mm so there's no need to report them. 290 - */ 291 - if (!p->mm) 292 - continue; 287 + struct mm_struct *mm; 288 + 293 289 if (mem && !task_in_mem_cgroup(p, mem)) 294 290 continue; 295 291 if (!thread_group_leader(p)) 296 292 continue; 297 293 298 294 task_lock(p); 295 + mm = p->mm; 296 + if (!mm) { 297 + /* 298 + * total_vm and rss sizes do not exist for tasks with no 299 + * mm so there's no need to report them; they can't be 300 + * oom killed anyway. 301 + */ 302 + task_unlock(p); 303 + continue; 304 + } 299 305 printk(KERN_INFO "[%5d] %5d %5d %8lu %8lu %3d %3d %s\n", 300 - p->pid, __task_cred(p)->uid, p->tgid, 301 - p->mm->total_vm, get_mm_rss(p->mm), (int)task_cpu(p), 302 - p->oomkilladj, p->comm); 306 + p->pid, __task_cred(p)->uid, p->tgid, mm->total_vm, 307 + get_mm_rss(mm), (int)task_cpu(p), p->oomkilladj, 308 + p->comm); 303 309 task_unlock(p); 304 310 } while_each_thread(g, p); 305 311 }
+1 -3
mm/swap_state.c
··· 109 109 */ 110 110 void __delete_from_swap_cache(struct page *page) 111 111 { 112 - swp_entry_t ent = {.val = page_private(page)}; 113 - 114 112 VM_BUG_ON(!PageLocked(page)); 115 113 VM_BUG_ON(!PageSwapCache(page)); 116 114 VM_BUG_ON(PageWriteback(page)); ··· 119 121 total_swapcache_pages--; 120 122 __dec_zone_page_state(page, NR_FILE_PAGES); 121 123 INC_CACHE_INFO(del_total); 122 - mem_cgroup_uncharge_swapcache(page, ent); 123 124 } 124 125 125 126 /** ··· 188 191 __delete_from_swap_cache(page); 189 192 spin_unlock_irq(&swapper_space.tree_lock); 190 193 194 + mem_cgroup_uncharge_swapcache(page, entry); 191 195 swap_free(entry); 192 196 page_cache_release(page); 193 197 }
+1
mm/truncate.c
··· 359 359 BUG_ON(page_has_private(page)); 360 360 __remove_from_page_cache(page); 361 361 spin_unlock_irq(&mapping->tree_lock); 362 + mem_cgroup_uncharge_cache_page(page); 362 363 page_cache_release(page); /* pagecache ref */ 363 364 return 1; 364 365 failed:
+2
mm/vmscan.c
··· 470 470 swp_entry_t swap = { .val = page_private(page) }; 471 471 __delete_from_swap_cache(page); 472 472 spin_unlock_irq(&mapping->tree_lock); 473 + mem_cgroup_uncharge_swapcache(page, swap); 473 474 swap_free(swap); 474 475 } else { 475 476 __remove_from_page_cache(page); 476 477 spin_unlock_irq(&mapping->tree_lock); 478 + mem_cgroup_uncharge_cache_page(page); 477 479 } 478 480 479 481 return 1;
-6
net/bluetooth/hci_sysfs.c
··· 90 90 struct hci_conn *conn = container_of(work, struct hci_conn, work_add); 91 91 struct hci_dev *hdev = conn->hdev; 92 92 93 - /* ensure previous del is complete */ 94 - flush_work(&conn->work_del); 95 - 96 93 dev_set_name(&conn->dev, "%s:%d", hdev->name, conn->handle); 97 94 98 95 if (device_add(&conn->dev) < 0) { ··· 114 117 { 115 118 struct hci_conn *conn = container_of(work, struct hci_conn, work_del); 116 119 struct hci_dev *hdev = conn->hdev; 117 - 118 - /* ensure previous add is complete */ 119 - flush_work(&conn->work_add); 120 120 121 121 if (!device_is_registered(&conn->dev)) 122 122 return;
+1 -1
net/core/pktgen.c
··· 2447 2447 if (pkt_dev->cflows) { 2448 2448 /* let go of the SAs if we have them */ 2449 2449 int i = 0; 2450 - for (; i < pkt_dev->nflows; i++){ 2450 + for (; i < pkt_dev->cflows; i++) { 2451 2451 struct xfrm_state *x = pkt_dev->flows[i].x; 2452 2452 if (x) { 2453 2453 xfrm_state_put(x);
+5 -1
net/ipv4/fib_trie.c
··· 986 986 static struct node *trie_rebalance(struct trie *t, struct tnode *tn) 987 987 { 988 988 int wasfull; 989 - t_key cindex, key = tn->key; 989 + t_key cindex, key; 990 990 struct tnode *tp; 991 + 992 + preempt_disable(); 993 + key = tn->key; 991 994 992 995 while (tn != NULL && (tp = node_parent((struct node *)tn)) != NULL) { 993 996 cindex = tkey_extract_bits(key, tp->pos, tp->bits); ··· 1010 1007 if (IS_TNODE(tn)) 1011 1008 tn = (struct tnode *)resize(t, (struct tnode *)tn); 1012 1009 1010 + preempt_enable(); 1013 1011 return (struct node *)tn; 1014 1012 } 1015 1013
+20 -40
net/ipv4/route.c
··· 784 784 { 785 785 static unsigned int rover; 786 786 unsigned int i = rover, goal; 787 - struct rtable *rth, **rthp; 788 - unsigned long length = 0, samples = 0; 787 + struct rtable *rth, *aux, **rthp; 788 + unsigned long samples = 0; 789 789 unsigned long sum = 0, sum2 = 0; 790 790 u64 mult; 791 791 ··· 795 795 goal = (unsigned int)mult; 796 796 if (goal > rt_hash_mask) 797 797 goal = rt_hash_mask + 1; 798 - length = 0; 799 798 for (; goal > 0; goal--) { 800 799 unsigned long tmo = ip_rt_gc_timeout; 800 + unsigned long length; 801 801 802 802 i = (i + 1) & rt_hash_mask; 803 803 rthp = &rt_hash_table[i].chain; ··· 809 809 810 810 if (*rthp == NULL) 811 811 continue; 812 + length = 0; 812 813 spin_lock_bh(rt_hash_lock_addr(i)); 813 814 while ((rth = *rthp) != NULL) { 815 + prefetch(rth->u.dst.rt_next); 814 816 if (rt_is_expired(rth)) { 815 817 *rthp = rth->u.dst.rt_next; 816 818 rt_free(rth); ··· 821 819 if (rth->u.dst.expires) { 822 820 /* Entry is expired even if it is in use */ 823 821 if (time_before_eq(jiffies, rth->u.dst.expires)) { 822 + nofree: 824 823 tmo >>= 1; 825 824 rthp = &rth->u.dst.rt_next; 826 825 /* 827 - * Only bump our length if the hash 828 - * inputs on entries n and n+1 are not 829 - * the same, we only count entries on 826 + * We only count entries on 830 827 * a chain with equal hash inputs once 831 828 * so that entries for different QOS 832 829 * levels, and other non-hash input 833 830 * attributes don't unfairly skew 834 831 * the length computation 835 832 */ 836 - if ((*rthp == NULL) || 837 - !compare_hash_inputs(&(*rthp)->fl, 838 - &rth->fl) 839 - length += ONE; 833 + for (aux = rt_hash_table[i].chain;;) { 834 + if (aux == rth) { 835 + length += ONE; 836 + break; 837 + } 838 + if (compare_hash_inputs(&aux->fl, &rth->fl)) 839 + break; 840 + aux = aux->u.dst.rt_next; 841 + } 840 842 continue; 841 843 } 842 - } else if (!rt_may_expire(rth, tmo, ip_rt_gc_timeout)) { 843 - tmo >>= 1; 844 - rthp = &rth->u.dst.rt_next; 845 - if ((*rthp == NULL) || 846 - !compare_hash_inputs(&(*rthp)->fl, 847 - &rth->fl)) 848 - length += ONE; 849 - continue; 850 - } 844 + } else if (!rt_may_expire(rth, tmo, ip_rt_gc_timeout)) 845 + goto nofree; 851 846 852 847 /* Cleanup aged off entries. */ 853 848 *rthp = rth->u.dst.rt_next; ··· 1067 1068 static int rt_intern_hash(unsigned hash, struct rtable *rt, struct rtable **rp) 1068 1069 { 1069 1070 struct rtable *rth, **rthp; 1070 - struct rtable *rthi; 1071 1071 unsigned long now; 1072 1072 struct rtable *cand, **candp; 1073 1073 u32 min_score; ··· 1086 1088 } 1087 1089 1088 1090 rthp = &rt_hash_table[hash].chain; 1089 - rthi = NULL; 1090 1091 1091 1092 spin_lock_bh(rt_hash_lock_addr(hash)); 1092 1093 while ((rth = *rthp) != NULL) { ··· 1131 1134 chain_length++; 1132 1135 1133 1136 rthp = &rth->u.dst.rt_next; 1134 - 1135 - /* 1136 - * check to see if the next entry in the chain 1137 - * contains the same hash input values as rt. If it does 1138 - * This is where we will insert into the list, instead of 1139 - * at the head. This groups entries that differ by aspects not 1140 - * relvant to the hash function together, which we use to adjust 1141 - * our chain length 1142 - */ 1143 - if (*rthp && compare_hash_inputs(&(*rthp)->fl, &rt->fl)) 1144 - rthi = rth; 1145 1137 } 1146 1138 1147 1139 if (cand) { ··· 1191 1205 } 1192 1206 } 1193 1207 1194 - if (rthi) 1195 - rt->u.dst.rt_next = rthi->u.dst.rt_next; 1196 - else 1197 - rt->u.dst.rt_next = rt_hash_table[hash].chain; 1208 + rt->u.dst.rt_next = rt_hash_table[hash].chain; 1198 1209 1199 1210 #if RT_CACHE_DEBUG >= 2 1200 1211 if (rt->u.dst.rt_next) { ··· 1207 1224 * previous writes to rt are comitted to memory 1208 1225 * before making rt visible to other CPUS. 1209 1226 */ 1210 - if (rthi) 1211 - rcu_assign_pointer(rthi->u.dst.rt_next, rt); 1212 - else 1213 - rcu_assign_pointer(rt_hash_table[hash].chain, rt); 1227 + rcu_assign_pointer(rt_hash_table[hash].chain, rt); 1214 1228 1215 1229 spin_unlock_bh(rt_hash_lock_addr(hash)); 1216 1230 *rp = rt;
+9 -2
net/ipv4/tcp_vegas.c
··· 158 158 } 159 159 EXPORT_SYMBOL_GPL(tcp_vegas_cwnd_event); 160 160 161 + static inline u32 tcp_vegas_ssthresh(struct tcp_sock *tp) 162 + { 163 + return min(tp->snd_ssthresh, tp->snd_cwnd-1); 164 + } 165 + 161 166 static void tcp_vegas_cong_avoid(struct sock *sk, u32 ack, u32 in_flight) 162 167 { 163 168 struct tcp_sock *tp = tcp_sk(sk); ··· 226 221 */ 227 222 diff = tp->snd_cwnd * (rtt-vegas->baseRTT) / vegas->baseRTT; 228 223 229 - if (diff > gamma && tp->snd_ssthresh > 2 ) { 224 + if (diff > gamma && tp->snd_cwnd <= tp->snd_ssthresh) { 230 225 /* Going too fast. Time to slow down 231 226 * and switch to congestion avoidance. 232 227 */ 233 - tp->snd_ssthresh = 2; 234 228 235 229 /* Set cwnd to match the actual rate 236 230 * exactly: ··· 239 235 * utilization. 240 236 */ 241 237 tp->snd_cwnd = min(tp->snd_cwnd, (u32)target_cwnd+1); 238 + tp->snd_ssthresh = tcp_vegas_ssthresh(tp); 242 239 243 240 } else if (tp->snd_cwnd <= tp->snd_ssthresh) { 244 241 /* Slow start. */ ··· 255 250 * we slow down. 256 251 */ 257 252 tp->snd_cwnd--; 253 + tp->snd_ssthresh 254 + = tcp_vegas_ssthresh(tp); 258 255 } else if (diff < alpha) { 259 256 /* We don't have enough extra packets 260 257 * in the network, so speed up.
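The tcp_vegas.c patch stops collapsing `snd_ssthresh` to 2 and instead clamps it to `min(ssthresh, cwnd - 1)` via the new `tcp_vegas_ssthresh()` helper, and it keys the "going too fast" branch on `snd_cwnd <= snd_ssthresh` rather than the old sentinel test. The two quantities involved can be sketched as plain helpers; this omits unit handling and the rest of the Vegas state machine, and assumes `cwnd >= 2` as the real code does:

```c
#include <assert.h>
#include <stdint.h>

/* min(ssthresh, cwnd - 1): the clamp the patch applies whenever Vegas
 * decides to leave slow start or back off. */
static uint32_t vegas_ssthresh(uint32_t snd_ssthresh, uint32_t snd_cwnd)
{
    uint32_t cap = snd_cwnd - 1;
    return snd_ssthresh < cap ? snd_ssthresh : cap;
}

/* diff = cwnd * (rtt - baseRTT) / baseRTT: Vegas' estimate of how many
 * segments the connection is keeping queued inside the network. */
static uint64_t vegas_diff(uint64_t snd_cwnd, uint64_t rtt, uint64_t base_rtt)
{
    return snd_cwnd * (rtt - base_rtt) / base_rtt;
}
```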
+3
net/ipv6/route.c
··· 137 137 } 138 138 }, 139 139 .rt6i_flags = (RTF_REJECT | RTF_NONEXTHOP), 140 + .rt6i_protocol = RTPROT_KERNEL, 140 141 .rt6i_metric = ~(u32) 0, 141 142 .rt6i_ref = ATOMIC_INIT(1), 142 143 }; ··· 160 159 } 161 160 }, 162 161 .rt6i_flags = (RTF_REJECT | RTF_NONEXTHOP), 162 + .rt6i_protocol = RTPROT_KERNEL, 163 163 .rt6i_metric = ~(u32) 0, 164 164 .rt6i_ref = ATOMIC_INIT(1), 165 165 }; ··· 178 176 } 179 177 }, 180 178 .rt6i_flags = (RTF_REJECT | RTF_NONEXTHOP), 179 + .rt6i_protocol = RTPROT_KERNEL, 181 180 .rt6i_metric = ~(u32) 0, 182 181 .rt6i_ref = ATOMIC_INIT(1), 183 182 };
+4
net/netfilter/nf_conntrack_proto_dccp.c
··· 22 22 #include <linux/netfilter/nfnetlink_conntrack.h> 23 23 #include <net/netfilter/nf_conntrack.h> 24 24 #include <net/netfilter/nf_conntrack_l4proto.h> 25 + #include <net/netfilter/nf_conntrack_ecache.h> 25 26 #include <net/netfilter/nf_log.h> 26 27 27 28 static DEFINE_RWLOCK(dccp_lock); ··· 553 552 ct->proto.dccp.last_pkt = type; 554 553 ct->proto.dccp.state = new_state; 555 554 write_unlock_bh(&dccp_lock); 555 + 556 + if (new_state != old_state) 557 + nf_conntrack_event_cache(IPCT_PROTOINFO, ct); 556 558 557 559 dn = dccp_pernet(net); 558 560 nf_ct_refresh_acct(ct, ctinfo, skb, dn->dccp_timeout[new_state]);
+18
net/netfilter/nf_conntrack_proto_tcp.c
··· 634 634 sender->td_end = end; 635 635 sender->flags |= IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED; 636 636 } 637 + if (tcph->ack) { 638 + if (!(sender->flags & IP_CT_TCP_FLAG_MAXACK_SET)) { 639 + sender->td_maxack = ack; 640 + sender->flags |= IP_CT_TCP_FLAG_MAXACK_SET; 641 + } else if (after(ack, sender->td_maxack)) 642 + sender->td_maxack = ack; 643 + } 644 + 637 645 /* 638 646 * Update receiver data. 639 647 */ ··· 926 918 "nf_ct_tcp: invalid state "); 927 919 return -NF_ACCEPT; 928 920 case TCP_CONNTRACK_CLOSE: 921 + if (index == TCP_RST_SET 922 + && (ct->proto.tcp.seen[!dir].flags & IP_CT_TCP_FLAG_MAXACK_SET) 923 + && before(ntohl(th->seq), ct->proto.tcp.seen[!dir].td_maxack)) { 924 + /* Invalid RST */ 925 + write_unlock_bh(&tcp_lock); 926 + if (LOG_INVALID(net, IPPROTO_TCP)) 927 + nf_log_packet(pf, 0, skb, NULL, NULL, NULL, 928 + "nf_ct_tcp: invalid RST "); 929 + return -NF_ACCEPT; 930 + } 929 931 if (index == TCP_RST_SET 930 932 && ((test_bit(IPS_SEEN_REPLY_BIT, &ct->status) 931 933 && ct->proto.tcp.last_index == TCP_SYN_SET)
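The conntrack change above tracks `td_maxack`, the highest ACK seen from each sender (with `IP_CT_TCP_FLAG_MAXACK_SET` so the first ACK always initialises the field), and then treats an RST whose sequence number lies before the peer's `td_maxack` as invalid. A miniature of that logic, with hypothetical names standing in for the conntrack structures and `seq_before()` playing the role of the kernel's `before()` serial-number comparison:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct sender_state {
    uint32_t td_maxack;   /* highest ACK seen from this direction */
    bool maxack_set;      /* mirrors IP_CT_TCP_FLAG_MAXACK_SET */
};

/* 32-bit wraparound-safe "a comes before b" comparison. */
static bool seq_before(uint32_t a, uint32_t b)
{
    return (int32_t)(a - b) < 0;
}

/* First ACK initialises td_maxack; later ACKs only raise it. */
static void record_ack(struct sender_state *s, uint32_t ack)
{
    if (!s->maxack_set) {
        s->td_maxack = ack;
        s->maxack_set = true;
    } else if (seq_before(s->td_maxack, ack)) {
        s->td_maxack = ack;
    }
}

/* An RST is invalid if the peer has already ACKed data past its
 * sequence number: real data was delivered after that point, so the
 * RST cannot be a legitimate teardown of this connection. */
static bool rst_is_valid(const struct sender_state *peer, uint32_t rst_seq)
{
    return !(peer->maxack_set && seq_before(rst_seq, peer->td_maxack));
}
```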
+1 -1
net/netfilter/xt_hashlimit.c
··· 926 926 if (!hlist_empty(&htable->hash[*bucket])) { 927 927 hlist_for_each_entry(ent, pos, &htable->hash[*bucket], node) 928 928 if (dl_seq_real_show(ent, htable->family, s)) 929 - return 1; 929 + return -1; 930 930 } 931 931 return 0; 932 932 }
+6 -6
net/rxrpc/ar-connection.c
··· 343 343 /* not yet present - create a candidate for a new connection 344 344 * and then redo the check */ 345 345 conn = rxrpc_alloc_connection(gfp); 346 - if (IS_ERR(conn)) { 347 - _leave(" = %ld", PTR_ERR(conn)); 348 - return PTR_ERR(conn); 346 + if (!conn) { 347 + _leave(" = -ENOMEM"); 348 + return -ENOMEM; 349 349 } 350 350 351 351 conn->trans = trans; ··· 508 508 /* not yet present - create a candidate for a new connection and then 509 509 * redo the check */ 510 510 candidate = rxrpc_alloc_connection(gfp); 511 - if (IS_ERR(candidate)) { 512 - _leave(" = %ld", PTR_ERR(candidate)); 513 - return PTR_ERR(candidate); 511 + if (!candidate) { 512 + _leave(" = -ENOMEM"); 513 + return -ENOMEM; 514 514 } 515 515 516 516 candidate->trans = trans;
+17 -6
net/sched/cls_api.c
··· 135 135 unsigned long cl; 136 136 unsigned long fh; 137 137 int err; 138 + int tp_created = 0; 138 139 139 140 if (net != &init_net) 140 141 return -EINVAL; ··· 267 266 goto errout; 268 267 } 269 268 270 - spin_lock_bh(root_lock); 271 - tp->next = *back; 272 - *back = tp; 273 - spin_unlock_bh(root_lock); 269 + tp_created = 1; 274 270 275 271 } else if (tca[TCA_KIND] && nla_strcmp(tca[TCA_KIND], tp->ops->kind)) 276 272 goto errout; ··· 294 296 switch (n->nlmsg_type) { 295 297 case RTM_NEWTFILTER: 296 298 err = -EEXIST; 297 - if (n->nlmsg_flags & NLM_F_EXCL) 299 + if (n->nlmsg_flags & NLM_F_EXCL) { 300 + if (tp_created) 301 + tcf_destroy(tp); 298 302 goto errout; 303 + } 299 304 break; 300 305 case RTM_DELTFILTER: 301 306 err = tp->ops->delete(tp, fh); ··· 315 314 } 316 315 317 316 err = tp->ops->change(tp, cl, t->tcm_handle, tca, &fh); 318 - if (err == 0) 317 + if (err == 0) { 318 + if (tp_created) { 319 + spin_lock_bh(root_lock); 320 + tp->next = *back; 321 + *back = tp; 322 + spin_unlock_bh(root_lock); 323 + } 319 324 tfilter_notify(skb, n, tp, fh, RTM_NEWTFILTER); 325 + } else { 326 + if (tp_created) 327 + tcf_destroy(tp); 328 + } 320 329 321 330 errout: 322 331 if (cl)
+11 -11
net/sched/cls_cgroup.c
··· 104 104 struct tcf_result *res) 105 105 { 106 106 struct cls_cgroup_head *head = tp->root; 107 - struct cgroup_cls_state *cs; 108 - int ret = 0; 107 + u32 classid; 109 108 110 109 /* 111 110 * Due to the nature of the classifier it is required to ignore all ··· 120 121 return -1; 121 122 122 123 rcu_read_lock(); 123 - cs = task_cls_state(current); 124 - if (cs->classid && tcf_em_tree_match(skb, &head->ematches, NULL)) { 125 - res->classid = cs->classid; 126 - res->class = 0; 127 - ret = tcf_exts_exec(skb, &head->exts, res); 128 - } else 129 - ret = -1; 130 - 124 + classid = task_cls_state(current)->classid; 131 125 rcu_read_unlock(); 132 126 133 - return ret; 127 + if (!classid) 128 + return -1; 129 + 130 + if (!tcf_em_tree_match(skb, &head->ematches, NULL)) 131 + return -1; 132 + 133 + res->classid = classid; 134 + res->class = 0; 135 + return tcf_exts_exec(skb, &head->exts, res); 134 136 } 135 137 136 138 static unsigned long cls_cgroup_get(struct tcf_proto *tp, u32 handle)
+28 -7
net/sunrpc/svcsock.c
··· 345 345 lock_sock(sock->sk); 346 346 sock->sk->sk_sndbuf = snd * 2; 347 347 sock->sk->sk_rcvbuf = rcv * 2; 348 + sock->sk->sk_userlocks |= SOCK_SNDBUF_LOCK|SOCK_RCVBUF_LOCK; 348 349 release_sock(sock->sk); 349 350 #endif 350 351 } ··· 797 796 test_bit(XPT_CONN, &svsk->sk_xprt.xpt_flags), 798 797 test_bit(XPT_CLOSE, &svsk->sk_xprt.xpt_flags)); 799 798 799 + if (test_and_clear_bit(XPT_CHNGBUF, &svsk->sk_xprt.xpt_flags)) 800 + /* sndbuf needs to have room for one request 801 + * per thread, otherwise we can stall even when the 802 + * network isn't a bottleneck. 803 + * 804 + * We count all threads rather than threads in a 805 + * particular pool, which provides an upper bound 806 + * on the number of threads which will access the socket. 807 + * 808 + * rcvbuf just needs to be able to hold a few requests. 809 + * Normally they will be removed from the queue 810 + * as soon a a complete request arrives. 811 + */ 812 + svc_sock_setbufsize(svsk->sk_sock, 813 + (serv->sv_nrthreads+3) * serv->sv_max_mesg, 814 + 3 * serv->sv_max_mesg); 815 + 800 816 clear_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags); 801 817 802 818 /* Receive data. If we haven't got the record length yet, get ··· 1061 1043 1062 1044 tcp_sk(sk)->nonagle |= TCP_NAGLE_OFF; 1063 1045 1046 + /* initialise setting must have enough space to 1047 + * receive and respond to one request. 1048 + * svc_tcp_recvfrom will re-adjust if necessary 1049 + */ 1050 + svc_sock_setbufsize(svsk->sk_sock, 1051 + 3 * svsk->sk_xprt.xpt_server->sv_max_mesg, 1052 + 3 * svsk->sk_xprt.xpt_server->sv_max_mesg); 1053 + 1054 + set_bit(XPT_CHNGBUF, &svsk->sk_xprt.xpt_flags); 1064 1055 set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags); 1065 1056 if (sk->sk_state != TCP_ESTABLISHED) 1066 1057 set_bit(XPT_CLOSE, &svsk->sk_xprt.xpt_flags); ··· 1139 1112 /* Initialize the socket */ 1140 1113 if (sock->type == SOCK_DGRAM) 1141 1114 svc_udp_init(svsk, serv); 1142 - else { 1143 - /* initialise setting must have enough space to 1144 - * receive and respond to one request. 1145 - */ 1146 - svc_sock_setbufsize(svsk->sk_sock, 4 * serv->sv_max_mesg, 1147 - 4 * serv->sv_max_mesg); 1115 + else 1148 1116 svc_tcp_init(svsk, serv); 1149 - } 1150 1117 1151 1118 dprintk("svc: svc_setup_socket created %p (inet %p)\n", 1152 1119 svsk, svsk->sk_sk);
+6 -6
net/sunrpc/xprtrdma/svc_rdma_sendto.c
··· 128 128 page_bytes -= sge_bytes; 129 129 130 130 frmr->page_list->page_list[page_no] = 131 - ib_dma_map_page(xprt->sc_cm_id->device, page, 0, 131 + ib_dma_map_single(xprt->sc_cm_id->device, 132 + page_address(page), 132 133 PAGE_SIZE, DMA_TO_DEVICE); 133 134 if (ib_dma_mapping_error(xprt->sc_cm_id->device, 134 135 frmr->page_list->page_list[page_no])) ··· 533 532 clear_bit(RDMACTXT_F_FAST_UNREG, &ctxt->flags); 534 533 535 534 /* Prepare the SGE for the RPCRDMA Header */ 535 + ctxt->sge[0].lkey = rdma->sc_dma_lkey; 536 + ctxt->sge[0].length = svc_rdma_xdr_get_reply_hdr_len(rdma_resp); 536 537 ctxt->sge[0].addr = 537 - ib_dma_map_page(rdma->sc_cm_id->device, 538 - page, 0, PAGE_SIZE, DMA_TO_DEVICE); 538 + ib_dma_map_single(rdma->sc_cm_id->device, page_address(page), 539 + ctxt->sge[0].length, DMA_TO_DEVICE); 539 540 if (ib_dma_mapping_error(rdma->sc_cm_id->device, ctxt->sge[0].addr)) 540 541 goto err; 541 542 atomic_inc(&rdma->sc_dma_used); 542 543 543 544 ctxt->direction = DMA_TO_DEVICE; 544 - 545 - ctxt->sge[0].length = svc_rdma_xdr_get_reply_hdr_len(rdma_resp); 546 - ctxt->sge[0].lkey = rdma->sc_dma_lkey; 547 545 548 546 /* Determine how many of our SGE are to be transmitted */ 549 547 for (sge_no = 1; byte_count && sge_no < vec->count; sge_no++) {
+5 -5
net/sunrpc/xprtrdma/svc_rdma_transport.c
··· 500 500 BUG_ON(sge_no >= xprt->sc_max_sge);
 501 501 page = svc_rdma_get_page();
 502 502 ctxt->pages[sge_no] = page;
 503 - pa = ib_dma_map_page(xprt->sc_cm_id->device,
 504 - page, 0, PAGE_SIZE,
 503 + pa = ib_dma_map_single(xprt->sc_cm_id->device,
 504 + page_address(page), PAGE_SIZE,
 505 505 DMA_FROM_DEVICE);
 506 506 if (ib_dma_mapping_error(xprt->sc_cm_id->device, pa))
 507 507 goto err_put_ctxt;
··· 1315 1315 length = svc_rdma_xdr_encode_error(xprt, rmsgp, err, va);
 1316 1316
 1317 1317 /* Prepare SGE for local address */
 1318 - sge.addr = ib_dma_map_page(xprt->sc_cm_id->device,
 1319 - p, 0, PAGE_SIZE, DMA_FROM_DEVICE);
 1318 + sge.addr = ib_dma_map_single(xprt->sc_cm_id->device,
 1319 + page_address(p), PAGE_SIZE, DMA_FROM_DEVICE);
 1320 1320 if (ib_dma_mapping_error(xprt->sc_cm_id->device, sge.addr)) {
 1321 1321 put_page(p);
 1322 1322 return;
··· 1343 1343 if (ret) {
 1344 1344 dprintk("svcrdma: Error %d posting send for protocol error\n",
 1345 1345 ret);
 1346 - ib_dma_unmap_page(xprt->sc_cm_id->device,
 1346 + ib_dma_unmap_single(xprt->sc_cm_id->device,
 1347 1347 sge.addr, PAGE_SIZE,
 1348 1348 DMA_FROM_DEVICE);
 1349 1349 svc_rdma_put_context(ctxt, 1);
+2 -1
net/sunrpc/xprtrdma/verbs.c
··· 1495 1495 frmr_wr.wr.fast_reg.page_shift = PAGE_SHIFT;
 1496 1496 frmr_wr.wr.fast_reg.length = i << PAGE_SHIFT;
 1497 1497 frmr_wr.wr.fast_reg.access_flags = (writing ?
 1498 - IB_ACCESS_REMOTE_WRITE : IB_ACCESS_REMOTE_READ);
 1498 + IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
 1499 + IB_ACCESS_REMOTE_READ);
 1499 1500 frmr_wr.wr.fast_reg.rkey = seg1->mr_chunk.rl_mw->r.frmr.fr_mr->rkey;
 1500 1501 DECR_CQCOUNT(&r_xprt->rx_ep);
 1501 1502
+7
net/wireless/reg.c
··· 1551 1551
 1552 1552 queue_regulatory_request(request);
 1553 1553
 1554 + /*
 1555 + * This ensures last_request is populated once modules
 1556 + * come swinging in and calling regulatory hints and
 1557 + * wiphy_apply_custom_regulatory().
 1558 + */
 1559 + flush_scheduled_work();
 1560 +
 1554 1561 return 0;
 1555 1562 }
 1556 1563
+7
net/wireless/wext.c
··· 786 786 err = -EFAULT;
 787 787 goto out;
 788 788 }
 789 +
 790 + if (cmd == SIOCSIWENCODEEXT) {
 791 + struct iw_encode_ext *ee = (void *) extra;
 792 +
 793 + if (iwp->length < sizeof(*ee) + ee->key_len)
 794 + return -EFAULT;
 795 + }
 789 796 }
 790 797
 791 798 err = handler(dev, info, (union iwreq_data *) iwp, extra);
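The wext.c hunk above guards a variable-length structure: `iw_encode_ext` carries a fixed header followed by `key_len` bytes of key material, and the user-supplied `iwp->length` must cover both before the handler touches the key. A minimal standalone sketch of the same validation pattern, with illustrative struct and function names rather than the kernel's:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for struct iw_encode_ext: a fixed header
 * followed by key_len bytes of variable-length key material. */
struct enc_ext {
    uint16_t alg;
    uint16_t key_len;
    uint8_t key[];          /* flexible array member */
};

/* Return 0 if the claimed key_len fits inside the buffer the caller
 * actually handed us, -1 otherwise (mirrors the -EFAULT path above). */
static int validate_enc_ext(const void *buf, size_t buf_len)
{
    const struct enc_ext *ee = buf;

    if (buf_len < sizeof(*ee))                 /* header must be present */
        return -1;
    if (buf_len < sizeof(*ee) + ee->key_len)   /* key must fit as well */
        return -1;
    return 0;
}
```

Note the ordering: the header-size check must come before reading `ee->key_len`, since the length field itself lives in the header.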
+6
security/tomoyo/tomoyo.c
··· 27 27
 28 28 static int tomoyo_bprm_set_creds(struct linux_binprm *bprm)
 29 29 {
 30 + int rc;
 31 +
 32 + rc = cap_bprm_set_creds(bprm);
 33 + if (rc)
 34 + return rc;
 35 +
 30 36 /*
 31 37 * Do only if this function is called for the first time of an execve
 32 38 * operation.
+8 -2
sound/core/pcm_lib.c
··· 249 249 new_hw_ptr = hw_base + pos;
 250 250 }
 251 251 }
 252 +
 253 + /* Do jiffies check only in xrun_debug mode */
 254 + if (!xrun_debug(substream))
 255 + goto no_jiffies_check;
 256 +
 252 257 /* Skip the jiffies check for hardwares with BATCH flag.
 253 258 * Such hardware usually just increases the position at each IRQ,
 254 259 * thus it can't give any strange position.
··· 341 336 hw_base = 0;
 342 337 new_hw_ptr = hw_base + pos;
 343 338 }
 344 - if (((delta * HZ) / runtime->rate) > jdelta + HZ/100) {
 339 + /* Do jiffies check only in xrun_debug mode */
 340 + if (xrun_debug(substream) &&
 341 + ((delta * HZ) / runtime->rate) > jdelta + HZ/100) {
 345 342 hw_ptr_error(substream,
 346 343 "hw_ptr skipping! "
 347 344 "(pos=%ld, delta=%ld, period=%ld, jdelta=%lu/%lu)\n",
··· 1485 1478 runtime->status->hw_ptr %= runtime->buffer_size;
 1486 1479 else
 1487 1480 runtime->status->hw_ptr = 0;
 1488 - runtime->hw_ptr_jiffies = jiffies;
 1489 1481 snd_pcm_stream_unlock_irqrestore(substream, flags);
 1490 1482 return 0;
 1491 1483 }
+6
sound/core/pcm_native.c
··· 848 848 {
 849 849 struct snd_pcm_runtime *runtime = substream->runtime;
 850 850 snd_pcm_trigger_tstamp(substream);
 851 + runtime->hw_ptr_jiffies = jiffies;
 851 852 runtime->status->state = state;
 852 853 if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
 853 854 runtime->silence_size > 0)
··· 962 961 {
 963 962 if (substream->runtime->trigger_master != substream)
 964 963 return 0;
 964 + /* The jiffies check in snd_pcm_update_hw_ptr*() is done by
 965 + * a delta between the current jiffies, this gives a large enough
 966 + * delta, effectively to skip the check once.
 967 + */
 968 + substream->runtime->hw_ptr_jiffies = jiffies - HZ * 1000;
 965 969 return substream->ops->trigger(substream,
 966 970 push ? SNDRV_PCM_TRIGGER_PAUSE_PUSH :
 967 971 SNDRV_PCM_TRIGGER_PAUSE_RELEASE);
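The pause path above deliberately backdates `hw_ptr_jiffies` by `HZ * 1000` so the next position update sees an enormous elapsed time and the jiffies sanity check passes trivially once. The trick relies on unsigned wrap-around arithmetic, which is exactly how kernel jiffies deltas behave. A plain-C sketch under illustrative names (the variables and the 1000-second constant mirror the diff but are not the kernel's API):

```c
#include <assert.h>

#define HZ 100  /* illustrative tick rate */

static unsigned long jiffies_now;   /* stand-in for the kernel's jiffies */
static unsigned long hw_ptr_jiffies;

/* Nonzero when the reported position delta is implausibly large for
 * the elapsed ticks -- the "hw_ptr skipping" condition in the diff. */
static int position_suspicious(unsigned long delta_frames, unsigned long rate)
{
    unsigned long jdelta = jiffies_now - hw_ptr_jiffies;

    return (delta_frames * HZ) / rate > jdelta + HZ / 100;
}

/* On pause release, backdate the timestamp so the first check after
 * resume cannot fire, whatever delta the hardware reports. */
static void pause_release(void)
{
    hw_ptr_jiffies = jiffies_now - HZ * 1000;
}
```

Because `hw_ptr_jiffies` is unsigned, `jiffies_now - hw_ptr_jiffies` yields a huge `jdelta` after `pause_release()` even when the subtraction wrapped, so one oversized position delta slips through without a warning.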
+1
sound/pci/hda/hda_intel.c
··· 2141 2141 /* including bogus ALC268 in slot#2 that conflicts with ALC888 */
 2142 2142 SND_PCI_QUIRK(0x17c0, 0x4085, "Medion MD96630", 0x01),
 2143 2143 /* forced codec slots */
 2144 + SND_PCI_QUIRK(0x1043, 0x1262, "ASUS W5Fm", 0x103),
 2144 2145 SND_PCI_QUIRK(0x1046, 0x1262, "ASUS W5F", 0x103),
 2145 2146 {}
 2146 2147 };
+1
sound/pci/hda/patch_conexant.c
··· 1848 1848
 1849 1849 static struct snd_pci_quirk cxt5051_cfg_tbl[] = {
 1850 1850 SND_PCI_QUIRK(0x103c, 0x30cf, "HP DV6736", CXT5051_HP_DV6736),
 1851 + SND_PCI_QUIRK(0x103c, 0x360b, "Compaq Presario CQ60", CXT5051_HP),
 1851 1852 SND_PCI_QUIRK(0x14f1, 0x0101, "Conexant Reference board",
 1852 1853 CXT5051_LAPTOP),
 1853 1854 SND_PCI_QUIRK(0x14f1, 0x5051, "HP Spartan 1.1", CXT5051_HP),
+6
sound/pci/hda/patch_realtek.c
··· 776 776 pincap = (pincap & AC_PINCAP_VREF) >> AC_PINCAP_VREF_SHIFT;
 777 777 if (pincap & AC_PINCAP_VREF_80)
 778 778 val = PIN_VREF80;
 779 + else if (pincap & AC_PINCAP_VREF_50)
 780 + val = PIN_VREF50;
 781 + else if (pincap & AC_PINCAP_VREF_100)
 782 + val = PIN_VREF100;
 783 + else if (pincap & AC_PINCAP_VREF_GRD)
 784 + val = PIN_VREFGRD;
 779 785 }
 780 786 snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_PIN_WIDGET_CONTROL, val);
 781 787 }
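The new else-if chain above picks the first supported reference voltage, in preference order, out of a capability bitmask read from the pin. The selection logic in isolation looks like this; the bit values, enum, and fallback below are illustrative, not the HDA-spec encodings:

```c
#include <assert.h>

/* Illustrative capability bits, one per supported vref level */
#define CAP_VREF_80  (1u << 0)
#define CAP_VREF_50  (1u << 1)
#define CAP_VREF_100 (1u << 2)
#define CAP_VREF_GRD (1u << 3)

enum pin_val { PIN_IN, PIN_VREF80, PIN_VREF50, PIN_VREF100, PIN_VREFGRD };

/* Pick the preferred vref the pin actually supports; fall back to a
 * plain input pin when no vref capability is advertised. */
static enum pin_val pick_vref(unsigned int pincap)
{
    if (pincap & CAP_VREF_80)
        return PIN_VREF80;
    else if (pincap & CAP_VREF_50)
        return PIN_VREF50;
    else if (pincap & CAP_VREF_100)
        return PIN_VREF100;
    else if (pincap & CAP_VREF_GRD)
        return PIN_VREFGRD;
    return PIN_IN;
}
```

The chain order, not the bit order, encodes the preference: a pin advertising several levels always gets the earliest match.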
+10
sound/pci/hda/patch_sigmatel.c
··· 150 150 STAC_D965_REF,
 151 151 STAC_D965_3ST,
 152 152 STAC_D965_5ST,
 153 + STAC_D965_5ST_NO_FP,
 153 154 STAC_DELL_3ST,
 154 155 STAC_DELL_BIOS,
 155 156 STAC_927X_MODELS
··· 2155 2154 0x40000100, 0x40000100
 2156 2155 };
 2157 2156
 2157 + static unsigned int d965_5st_no_fp_pin_configs[14] = {
 2158 + 0x40000100, 0x40000100, 0x0181304e, 0x01014010,
 2159 + 0x01a19040, 0x01011012, 0x01016011, 0x40000100,
 2160 + 0x40000100, 0x40000100, 0x40000100, 0x01442070,
 2161 + 0x40000100, 0x40000100
 2162 + };
 2163 +
 2158 2164 static unsigned int dell_3st_pin_configs[14] = {
 2159 2165 0x02211230, 0x02a11220, 0x01a19040, 0x01114210,
 2160 2166 0x01111212, 0x01116211, 0x01813050, 0x01112214,
··· 2174 2166 [STAC_D965_REF] = ref927x_pin_configs,
 2175 2167 [STAC_D965_3ST] = d965_3st_pin_configs,
 2176 2168 [STAC_D965_5ST] = d965_5st_pin_configs,
 2169 + [STAC_D965_5ST_NO_FP] = d965_5st_no_fp_pin_configs,
 2177 2170 [STAC_DELL_3ST] = dell_3st_pin_configs,
 2178 2171 [STAC_DELL_BIOS] = NULL,
 2179 2172 };
··· 2185 2176 [STAC_D965_REF] = "ref",
 2186 2177 [STAC_D965_3ST] = "3stack",
 2187 2178 [STAC_D965_5ST] = "5stack",
 2179 + [STAC_D965_5ST_NO_FP] = "5stack-no-fp",
 2188 2180 [STAC_DELL_3ST] = "dell-3stack",
 2189 2181 [STAC_DELL_BIOS] = "dell-bios",
 2190 2182 };
+1 -1
sound/usb/usbaudio.c
··· 3347 3347 [QUIRK_MIDI_YAMAHA] = snd_usb_create_midi_interface,
 3348 3348 [QUIRK_MIDI_MIDIMAN] = snd_usb_create_midi_interface,
 3349 3349 [QUIRK_MIDI_NOVATION] = snd_usb_create_midi_interface,
 3350 - [QUIRK_MIDI_RAW] = snd_usb_create_midi_interface,
 3350 + [QUIRK_MIDI_FASTLANE] = snd_usb_create_midi_interface,
 3351 3351 [QUIRK_MIDI_EMAGIC] = snd_usb_create_midi_interface,
 3352 3352 [QUIRK_MIDI_CME] = snd_usb_create_midi_interface,
 3353 3353 [QUIRK_AUDIO_STANDARD_INTERFACE] = create_standard_audio_quirk,
+1 -1
sound/usb/usbaudio.h
··· 153 153 QUIRK_MIDI_YAMAHA,
 154 154 QUIRK_MIDI_MIDIMAN,
 155 155 QUIRK_MIDI_NOVATION,
 156 - QUIRK_MIDI_RAW,
 156 + QUIRK_MIDI_FASTLANE,
 157 157 QUIRK_MIDI_EMAGIC,
 158 158 QUIRK_MIDI_CME,
 159 159 QUIRK_MIDI_US122L,
+11 -1
sound/usb/usbmidi.c
··· 1778 1778 umidi->usb_protocol_ops = &snd_usbmidi_novation_ops;
 1779 1779 err = snd_usbmidi_detect_per_port_endpoints(umidi, endpoints);
 1780 1780 break;
 1781 - case QUIRK_MIDI_RAW:
 1781 + case QUIRK_MIDI_FASTLANE:
 1782 1782 umidi->usb_protocol_ops = &snd_usbmidi_raw_ops;
 1783 + /*
 1784 + * Interface 1 contains isochronous endpoints, but with the same
 1785 + * numbers as in interface 0. Since it is interface 1 that the
 1786 + * USB core has most recently seen, these descriptors are now
 1787 + * associated with the endpoint numbers. This will foul up our
 1788 + * attempts to submit bulk/interrupt URBs to the endpoints in
 1789 + * interface 0, so we have to make sure that the USB core looks
 1790 + * again at interface 0 by calling usb_set_interface() on it.
 1791 + */
 1792 + usb_set_interface(umidi->chip->dev, 0, 0);
 1783 1793 err = snd_usbmidi_detect_per_port_endpoints(umidi, endpoints);
 1784 1794 break;
 1785 1795 case QUIRK_MIDI_EMAGIC:
+1 -1
sound/usb/usbquirks.h
··· 1868 1868 .data = & (const struct snd_usb_audio_quirk[]) {
 1869 1869 {
 1870 1870 .ifnum = 0,
 1871 - .type = QUIRK_MIDI_RAW
 1871 + .type = QUIRK_MIDI_FASTLANE
 1872 1872 },
 1873 1873 {
 1874 1874 .ifnum = 1,