Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

Conflicts:
drivers/net/wireless/iwlwifi/iwl-6000.c
net/core/dev.c

+4540 -2576
+21 -16
Documentation/RCU/NMI-RCU.txt
··· 34 34 cpu = smp_processor_id(); 35 35 ++nmi_count(cpu); 36 36 37 - if (!rcu_dereference(nmi_callback)(regs, cpu)) 37 + if (!rcu_dereference_sched(nmi_callback)(regs, cpu)) 38 38 default_do_nmi(regs); 39 39 40 40 nmi_exit(); ··· 47 47 default_do_nmi() function to handle a machine-specific NMI. Finally, 48 48 preemption is restored. 49 49 50 - Strictly speaking, rcu_dereference() is not needed, since this code runs 51 - only on i386, which does not need rcu_dereference() anyway. However, 52 - it is a good documentation aid, particularly for anyone attempting to 53 - do something similar on Alpha. 50 + In theory, rcu_dereference_sched() is not needed, since this code runs 51 + only on i386, which in theory does not need rcu_dereference_sched() 52 + anyway. However, in practice it is a good documentation aid, particularly 53 + for anyone attempting to do something similar on Alpha or on systems 54 + with aggressive optimizing compilers. 54 55 55 - Quick Quiz: Why might the rcu_dereference() be necessary on Alpha, 56 + Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha, 56 57 given that the code referenced by the pointer is read-only? 57 58 58 59 ··· 100 99 101 100 Answer to Quick Quiz 102 101 103 - Why might the rcu_dereference() be necessary on Alpha, given 102 + Why might the rcu_dereference_sched() be necessary on Alpha, given 104 103 that the code referenced by the pointer is read-only? 105 104 106 105 Answer: The caller to set_nmi_callback() might well have 107 - initialized some data that is to be used by the 108 - new NMI handler. In this case, the rcu_dereference() 109 - would be needed, because otherwise a CPU that received 110 - an NMI just after the new handler was set might see 111 - the pointer to the new NMI handler, but the old 112 - pre-initialized version of the handler's data. 106 + initialized some data that is to be used by the new NMI 107 + handler. In this case, the rcu_dereference_sched() would 108 + be needed, because otherwise a CPU that received an NMI 109 + just after the new handler was set might see the pointer 110 + to the new NMI handler, but the old pre-initialized 111 + version of the handler's data. 113 112 114 - More important, the rcu_dereference() makes it clear 115 - to someone reading the code that the pointer is being 116 - protected by RCU. 113 + This same sad story can happen on other CPUs when using 114 + a compiler with aggressive pointer-value speculation 115 + optimizations. 116 + 117 + More important, the rcu_dereference_sched() makes it 118 + clear to someone reading the code that the pointer is 119 + being protected by RCU-sched.
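The publication pattern this hunk documents — a reader must never see the new callback pointer together with the old, stale handler data — can be sketched in plain C11. This is a userspace sketch, not the kernel API: a store-release stands in for rcu_assign_pointer(), and a load-acquire conservatively stands in for rcu_dereference_sched()'s dependency-ordered load; all names here mirror the document but are illustrative.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* State the new NMI handler relies on travels with the pointer. */
struct nmi_handler {
    int (*fn)(void *regs);
    int data_initialized;
};

static _Atomic(struct nmi_handler *) nmi_callback;

static int default_do_nmi(void *regs) { (void)regs; return 0; }

/* Publish: the release store guarantees data_initialized is visible
 * before the pointer itself -- the Alpha/speculating-compiler bug the
 * document describes is seeing the new pointer with the old data. */
static void set_nmi_callback(struct nmi_handler *h)
{
    atomic_store_explicit(&nmi_callback, h, memory_order_release);
}

/* Read side: acquire pairs with the release above. */
static int do_nmi(void *regs)
{
    struct nmi_handler *h =
        atomic_load_explicit(&nmi_callback, memory_order_acquire);
    return h ? h->fn(regs) : default_do_nmi(regs);
}

static int sample_fn(void *regs) { (void)regs; return 42; }
static struct nmi_handler sample_handler = { sample_fn, 1 };
```

Before any callback is published, do_nmi() falls through to the default handler, just as in the documented code.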
+4 -3
Documentation/RCU/checklist.txt
··· 260 260 The reason that it is permissible to use RCU list-traversal 261 261 primitives when the update-side lock is held is that doing so 262 262 can be quite helpful in reducing code bloat when common code is 263 - shared between readers and updaters. 263 + shared between readers and updaters. Additional primitives 264 + are provided for this case, as discussed in lockdep.txt. 264 265 265 266 10. Conversely, if you are in an RCU read-side critical section, 266 267 and you don't hold the appropriate update-side lock, you -must- ··· 345 344 requiring SRCU's read-side deadlock immunity or low read-side 346 345 realtime latency. 347 346 348 - Note that, rcu_assign_pointer() and rcu_dereference() relate to 349 - SRCU just as they do to other forms of RCU. 347 + Note that rcu_assign_pointer() relates to SRCU just as it does 348 + to other forms of RCU. 350 349 351 350 15. The whole point of call_rcu(), synchronize_rcu(), and friends 352 351 is to wait until all pre-existing readers have finished before
+26 -2
Documentation/RCU/lockdep.txt
··· 32 32 srcu_dereference(p, sp): 33 33 Check for SRCU read-side critical section. 34 34 rcu_dereference_check(p, c): 35 - Use explicit check expression "c". 35 + Use explicit check expression "c". This is useful in 36 + code that is invoked by both readers and updaters. 36 37 rcu_dereference_raw(p) 37 38 Don't check. (Use sparingly, if at all.) 39 + rcu_dereference_protected(p, c): 40 + Use explicit check expression "c", and omit all barriers 41 + and compiler constraints. This is useful when the data 42 + structure cannot change, for example, in code that is 43 + invoked only by updaters. 44 + rcu_access_pointer(p): 45 + Return the value of the pointer and omit all barriers, 46 + but retain the compiler constraints that prevent duplicating 47 + or coalescing. This is useful when testing the 48 + value of the pointer itself, for example, against NULL. 38 49 39 50 The rcu_dereference_check() check expression can be any boolean 40 51 expression, but would normally include one of the rcu_read_lock_held() ··· 70 59 RCU read-side critical sections, in case (2) the ->file_lock prevents 71 60 any change from taking place, and finally, in case (3) the current task 72 61 is the only task accessing the file_struct, again preventing any change 73 - from taking place. 62 + from taking place. If the above statement was invoked only from updater 63 + code, it could instead be written as follows: 64 + 65 + file = rcu_dereference_protected(fdt->fd[fd], 66 + lockdep_is_held(&files->file_lock) || 67 + atomic_read(&files->count) == 1); 68 + 69 + This would verify cases #2 and #3 above, and furthermore lockdep would 70 + complain if this was used in an RCU read-side critical section unless one 71 + of these two cases held. Because rcu_dereference_protected() omits all 72 + barriers and compiler constraints, it generates better code than do the 73 + other flavors of rcu_dereference(). On the other hand, it is illegal 74 + to use rcu_dereference_protected() if either the RCU-protected pointer 75 + or the RCU-protected data that it points to can change concurrently. 74 76 75 77 There are currently only "universal" versions of the rcu_assign_pointer() 76 78 and RCU list-/tree-traversal primitives, which do not (yet) check for
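The contract the lockdep.txt hunk describes — rcu_dereference_check() is legal from readers or wherever the check expression holds, rcu_dereference_protected() only where it holds — can be modeled with plain assertions. A toy sketch: the two flags stand in for rcu_read_lock_held() and lockdep_is_held(), and every name here is illustrative, not the real kernel macro (which also handles memory ordering).

```c
#include <assert.h>
#include <stdbool.h>

static bool in_rcu_read_section;   /* models rcu_read_lock_held() */
static bool update_lock_held;      /* models lockdep_is_held(...) */

/* Legal from readers OR wherever the check expression "c" holds. */
#define rcu_dereference_check(p, c) \
    (assert(in_rcu_read_section || (c)), (p))

/* Legal ONLY where "c" holds; the real macro also omits barriers. */
#define rcu_dereference_protected(p, c) \
    (assert(c), (p))

static int value = 42;
static int *shared_ptr = &value;

static int reader_path(void)
{
    in_rcu_read_section = true;
    int v = *rcu_dereference_check(shared_ptr, update_lock_held);
    in_rcu_read_section = false;
    return v;
}

static int updater_path(void)
{
    update_lock_held = true;
    int v = *rcu_dereference_protected(shared_ptr, update_lock_held);
    update_lock_held = false;
    return v;
}
```

Calling the protected flavor without the "lock" trips the assertion, which is exactly the complaint lockdep would raise in the kernel.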
+6
Documentation/RCU/whatisRCU.txt
··· 840 840 init_srcu_struct 841 841 cleanup_srcu_struct 842 842 843 + All: lockdep-checked RCU-protected pointer access 844 + 845 + rcu_dereference_check 846 + rcu_dereference_protected 847 + rcu_access_pointer 848 + 843 849 See the comment headers in the source code (or the docbook generated 844 850 from them) for more information. 845 851
+17 -6
Documentation/input/multi-touch-protocol.txt
··· 68 68 SYN_MT_REPORT 69 69 SYN_REPORT 70 70 71 + Here is the sequence after lifting one of the fingers: 72 + 73 + ABS_MT_POSITION_X 74 + ABS_MT_POSITION_Y 75 + SYN_MT_REPORT 76 + SYN_REPORT 77 + 78 + And here is the sequence after lifting the remaining finger: 79 + 80 + SYN_MT_REPORT 81 + SYN_REPORT 82 + 83 + If the driver reports one of BTN_TOUCH or ABS_PRESSURE in addition to the 84 + ABS_MT events, the last SYN_MT_REPORT event may be omitted. Otherwise, the 85 + last SYN_REPORT will be dropped by the input core, resulting in no 86 + zero-finger event reaching userland. 71 87 72 88 Event Semantics 73 89 --------------- ··· 233 217 difference between the contact position and the approaching tool position 234 218 could be used to derive tilt. 235 219 [2] The list can of course be extended. 236 - [3] The multi-touch X driver is currently in the prototyping stage. At the 237 - time of writing (April 2009), the MT protocol is not yet merged, and the 238 - prototype implements finger matching, basic mouse support and two-finger 239 - scrolling. The project aims at improving the quality of current multi-touch 240 - functionality available in the Synaptics X driver, and in addition 241 - implement more advanced gestures. 220 + [3] Multitouch X driver project: http://bitmath.org/code/multitouch/. 242 221 [4] See the section on event computation. 243 222 [5] See the section on finger tracking.
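The three sequences above follow one rule: each active contact contributes its ABS_MT axes plus a SYN_MT_REPORT, zero contacts are signalled by one bare SYN_MT_REPORT, and every frame ends with SYN_REPORT. A counting sketch of that rule (illustrative only — real drivers emit struct input_event records via the input core, and may report more axes per contact):

```c
#include <assert.h>

/* Events per MT frame for n active contacts, assuming each contact
 * reports exactly ABS_MT_POSITION_X, ABS_MT_POSITION_Y and a
 * SYN_MT_REPORT, as in the document's example sequences. */
static int mt_frame_event_count(int ncontacts)
{
    if (ncontacts == 0)
        return 1 + 1;              /* bare SYN_MT_REPORT + SYN_REPORT */
    return ncontacts * 3 + 1;      /* (X, Y, SYN_MT_REPORT) each + SYN_REPORT */
}
```

This reproduces the documented sequences: one remaining finger gives four events, and the final zero-finger frame gives the two-event SYN_MT_REPORT/SYN_REPORT pair that the input core would otherwise drop.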
-5
Documentation/kernel-parameters.txt
··· 320 320 amd_iommu= [HW,X86-64] 321 321 Pass parameters to the AMD IOMMU driver in the system. 322 322 Possible values are: 323 - isolate - enable device isolation (each device, as far 324 - as possible, will get its own protection 325 - domain) [default] 326 - share - put every device behind one IOMMU into the 327 - same protection domain 328 323 fullflush - enable flushing of IO/TLB entries when 329 324 they are unmapped. Otherwise they are 330 325 flushed before they will be reused, which
+12 -2
MAINTAINERS
··· 485 485 F: drivers/input/mouse/bcm5974.c 486 486 487 487 APPLE SMC DRIVER 488 - M: Nicolas Boichat <nicolas@boichat.ch> 489 - L: mactel-linux-devel@lists.sourceforge.net 488 + M: Henrik Rydberg <rydberg@euromail.se> 489 + L: lm-sensors@lm-sensors.org 490 490 S: Maintained 491 491 F: drivers/hwmon/applesmc.c 492 492 ··· 970 970 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 971 971 W: http://www.mcuos.com 972 972 S: Maintained 973 + 974 + ARM/U300 MACHINE SUPPORT 975 + M: Linus Walleij <linus.walleij@stericsson.com> 976 + L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 977 + S: Supported 978 + F: arch/arm/mach-u300/ 979 + F: drivers/i2c/busses/i2c-stu300.c 980 + F: drivers/rtc/rtc-coh901331.c 981 + F: drivers/watchdog/coh901327_wdt.c 982 + F: drivers/dma/coh901318* 973 983 974 984 ARM/U8500 ARM ARCHITECTURE 975 985 M: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
+2 -2
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 34 4 - EXTRAVERSION = -rc3 5 - NAME = Man-Eating Seals of Antiquity 4 + EXTRAVERSION = -rc5 5 + NAME = Sheep on Meth 6 6 7 7 # *DOCUMENTATION* 8 8 # To see a list of typical targets execute "make help"
+1 -1
arch/arm/boot/compressed/head.S
··· 172 172 adr r0, LC0 173 173 ARM( ldmia r0, {r1, r2, r3, r4, r5, r6, r11, ip, sp}) 174 174 THUMB( ldmia r0, {r1, r2, r3, r4, r5, r6, r11, ip} ) 175 - THUMB( ldr sp, [r0, #28] ) 175 + THUMB( ldr sp, [r0, #32] ) 176 176 subs r0, r0, r1 @ calculate the delta offset 177 177 178 178 @ if delta is zero, we are
+14 -1
arch/arm/include/asm/highmem.h
··· 11 11 12 12 #define kmap_prot PAGE_KERNEL 13 13 14 - #define flush_cache_kmaps() flush_cache_all() 14 + #define flush_cache_kmaps() \ 15 + do { \ 16 + if (cache_is_vivt()) \ 17 + flush_cache_all(); \ 18 + } while (0) 15 19 16 20 extern pte_t *pkmap_page_table; 17 21 ··· 25 21 extern void *kmap_high_get(struct page *page); 26 22 extern void kunmap_high(struct page *page); 27 23 24 + extern void *kmap_high_l1_vipt(struct page *page, pte_t *saved_pte); 25 + extern void kunmap_high_l1_vipt(struct page *page, pte_t saved_pte); 26 + 27 + /* 28 + * The following functions are already defined by <linux/highmem.h> 29 + * when CONFIG_HIGHMEM is not set. 30 + */ 31 + #ifdef CONFIG_HIGHMEM 28 32 extern void *kmap(struct page *page); 29 33 extern void kunmap(struct page *page); 30 34 extern void *kmap_atomic(struct page *page, enum km_type type); 31 35 extern void kunmap_atomic(void *kvaddr, enum km_type type); 32 36 extern void *kmap_atomic_pfn(unsigned long pfn, enum km_type type); 33 37 extern struct page *kmap_atomic_to_page(const void *ptr); 38 + #endif 34 39 35 40 #endif
+1
arch/arm/include/asm/kmap_types.h
··· 18 18 KM_IRQ1, 19 19 KM_SOFTIRQ0, 20 20 KM_SOFTIRQ1, 21 + KM_L1_CACHE, 21 22 KM_L2_CACHE, 22 23 KM_TYPE_NR 23 24 };
+11 -12
arch/arm/include/asm/ucontext.h
··· 59 59 #endif /* CONFIG_IWMMXT */ 60 60 61 61 #ifdef CONFIG_VFP 62 - #if __LINUX_ARM_ARCH__ < 6 63 - /* For ARM pre-v6, we use fstmiax and fldmiax. This adds one extra 64 - * word after the registers, and a word of padding at the end for 65 - * alignment. */ 66 62 #define VFP_MAGIC 0x56465001 67 - #define VFP_STORAGE_SIZE 152 68 - #else 69 - #define VFP_MAGIC 0x56465002 70 - #define VFP_STORAGE_SIZE 144 71 - #endif 72 63 73 64 struct vfp_sigframe 74 65 { 75 66 unsigned long magic; 76 67 unsigned long size; 77 - union vfp_state storage; 78 - }; 68 + struct user_vfp ufp; 69 + struct user_vfp_exc ufp_exc; 70 + } __attribute__((__aligned__(8))); 71 + 72 + /* 73 + * 8 bytes for magic and size, 264 bytes for ufp, 12 bytes for ufp_exc, 74 + * 4 bytes padding. 75 + */ 76 + #define VFP_STORAGE_SIZE sizeof(struct vfp_sigframe) 77 + 79 78 #endif /* CONFIG_VFP */ 80 79 81 80 /* ··· 90 91 #ifdef CONFIG_IWMMXT 91 92 struct iwmmxt_sigframe iwmmxt; 92 93 #endif 93 - #if 0 && defined CONFIG_VFP /* Not yet saved. */ 94 + #ifdef CONFIG_VFP 94 95 struct vfp_sigframe vfp; 95 96 #endif 96 97 /* Something that isn't a valid magic number for any coprocessor. */
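The new VFP_STORAGE_SIZE is simply sizeof() of the frame, and the comment's arithmetic (8 bytes of magic/size, 264 for ufp, 12 for ufp_exc, 4 of tail padding = 288) can be checked with fixed-width stand-ins for the ARM ILP32 types. The *_model names are ours, not kernel types; uint32_t models the 32-bit unsigned long.

```c
#include <assert.h>
#include <stdint.h>

/* Fixed-width model of the ARM (ILP32) layout from the hunk above. */
struct user_vfp_model {
    uint64_t fpregs[32];               /* 256 bytes */
    uint32_t fpscr;                    /* 4 bytes + 4 padding (u64 alignment) */
};

struct user_vfp_exc_model {
    uint32_t fpexc, fpinst, fpinst2;   /* 12 bytes */
};

struct vfp_sigframe_model {
    uint32_t magic;
    uint32_t size;
    struct user_vfp_model ufp;
    struct user_vfp_exc_model ufp_exc;
} __attribute__((__aligned__(8)));     /* rounds the total up to 288 */
```

The 8-byte alignment attribute is what produces the 4 bytes of tail padding the comment mentions, since 8 + 264 + 12 = 284 is not a multiple of 8.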
+11 -1
arch/arm/include/asm/user.h
··· 83 83 84 84 /* 85 85 * User specific VFP registers. If only VFPv2 is present, registers 16 to 31 86 - * are ignored by the ptrace system call. 86 + * are ignored by the ptrace system call and the signal handler. 87 87 */ 88 88 struct user_vfp { 89 89 unsigned long long fpregs[32]; 90 90 unsigned long fpscr; 91 + }; 92 + 93 + /* 94 + * VFP exception registers exposed to user space during signal delivery. 95 + * Fields not relevant to the current VFP architecture are ignored. 96 + */ 97 + struct user_vfp_exc { 98 + unsigned long fpexc; 99 + unsigned long fpinst; 100 + unsigned long fpinst2; 91 101 }; 92 102 93 103 #endif /* _ARM_USER_H */
+89 -4
arch/arm/kernel/signal.c
··· 18 18 #include <asm/cacheflush.h> 19 19 #include <asm/ucontext.h> 20 20 #include <asm/unistd.h> 21 + #include <asm/vfp.h> 21 22 22 23 #include "ptrace.h" 23 24 #include "signal.h" ··· 176 175 177 176 #endif 178 177 178 + #ifdef CONFIG_VFP 179 + 180 + static int preserve_vfp_context(struct vfp_sigframe __user *frame) 181 + { 182 + struct thread_info *thread = current_thread_info(); 183 + struct vfp_hard_struct *h = &thread->vfpstate.hard; 184 + const unsigned long magic = VFP_MAGIC; 185 + const unsigned long size = VFP_STORAGE_SIZE; 186 + int err = 0; 187 + 188 + vfp_sync_hwstate(thread); 189 + __put_user_error(magic, &frame->magic, err); 190 + __put_user_error(size, &frame->size, err); 191 + 192 + /* 193 + * Copy the floating point registers. There can be unused 194 + * registers; see asm/hwcap.h for details. 195 + */ 196 + err |= __copy_to_user(&frame->ufp.fpregs, &h->fpregs, 197 + sizeof(h->fpregs)); 198 + /* 199 + * Copy the status and control register. 200 + */ 201 + __put_user_error(h->fpscr, &frame->ufp.fpscr, err); 202 + 203 + /* 204 + * Copy the exception registers. 205 + */ 206 + __put_user_error(h->fpexc, &frame->ufp_exc.fpexc, err); 207 + __put_user_error(h->fpinst, &frame->ufp_exc.fpinst, err); 208 + __put_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err); 209 + 210 + return err ? -EFAULT : 0; 211 + } 212 + 213 + static int restore_vfp_context(struct vfp_sigframe __user *frame) 214 + { 215 + struct thread_info *thread = current_thread_info(); 216 + struct vfp_hard_struct *h = &thread->vfpstate.hard; 217 + unsigned long magic; 218 + unsigned long size; 219 + unsigned long fpexc; 220 + int err = 0; 221 + 222 + __get_user_error(magic, &frame->magic, err); 223 + __get_user_error(size, &frame->size, err); 224 + 225 + if (err) 226 + return -EFAULT; 227 + if (magic != VFP_MAGIC || size != VFP_STORAGE_SIZE) 228 + return -EINVAL; 229 + 230 + /* 231 + * Copy the floating point registers. There can be unused 232 + * registers; see asm/hwcap.h for details. 233 + */ 234 + err |= __copy_from_user(&h->fpregs, &frame->ufp.fpregs, 235 + sizeof(h->fpregs)); 236 + /* 237 + * Copy the status and control register. 238 + */ 239 + __get_user_error(h->fpscr, &frame->ufp.fpscr, err); 240 + 241 + /* 242 + * Sanitise and restore the exception registers. 243 + */ 244 + __get_user_error(fpexc, &frame->ufp_exc.fpexc, err); 245 + /* Ensure the VFP is enabled. */ 246 + fpexc |= FPEXC_EN; 247 + /* Ensure FPINST2 is invalid and the exception flag is cleared. */ 248 + fpexc &= ~(FPEXC_EX | FPEXC_FP2V); 249 + h->fpexc = fpexc; 250 + 251 + __get_user_error(h->fpinst, &frame->ufp_exc.fpinst, err); 252 + __get_user_error(h->fpinst2, &frame->ufp_exc.fpinst2, err); 253 + 254 + if (!err) 255 + vfp_flush_hwstate(thread); 256 + 257 + return err ? -EFAULT : 0; 258 + } 259 + 260 + #endif 261 + 179 262 /* 180 263 * Do a signal return; undo the signal stack. These are aligned to 64-bit. 181 264 */ ··· 318 233 err |= restore_iwmmxt_context(&aux->iwmmxt); 319 234 #endif 320 235 #ifdef CONFIG_VFP 321 - // if (err == 0) 322 - // err |= vfp_restore_state(&sf->aux.vfp); 236 + if (err == 0) 237 + err |= restore_vfp_context(&aux->vfp); 323 238 #endif 324 239 325 240 return err; ··· 433 348 err |= preserve_iwmmxt_context(&aux->iwmmxt); 434 349 #endif 435 350 #ifdef CONFIG_VFP 436 - // if (err == 0) 437 - // err |= vfp_save_state(&sf->aux.vfp); 351 + if (err == 0) 352 + err |= preserve_vfp_context(&aux->vfp); 438 353 #endif 439 354 __put_user_error(0, &aux->end_magic, err); 440 355
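The security-relevant part of restore_vfp_context() is the fpexc sanitisation: user space could hand back an arbitrary value in the signal frame, so it is forced into a safe state before reaching hardware. The bit manipulation in isolation, with FPEXC bit positions as defined in the kernel's asm/vfp.h (treat the exact values as assumptions of this sketch):

```c
#include <assert.h>
#include <stdint.h>

#define FPEXC_EX   (1u << 31)      /* exception pending */
#define FPEXC_EN   (1u << 30)      /* VFP enabled */
#define FPEXC_FP2V (1u << 28)      /* FPINST2 valid */

/* Mirrors the sanitisation in restore_vfp_context(): whatever user
 * space put in the frame, the VFP comes back enabled with the
 * exception state cleared, so a forged frame cannot make the kernel
 * replay a bogus exception. */
static uint32_t sanitize_fpexc(uint32_t user_fpexc)
{
    user_fpexc |= FPEXC_EN;                    /* ensure VFP is enabled */
    user_fpexc &= ~(FPEXC_EX | FPEXC_FP2V);    /* clear exception state */
    return user_fpexc;
}
```

Even an all-ones fpexc from user space comes back with EX and FP2V stripped and EN guaranteed set.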
+2 -2
arch/arm/mach-at91/Makefile
··· 16 16 obj-$(CONFIG_ARCH_AT91SAM9G10) += at91sam9261.o at91sam926x_time.o at91sam9261_devices.o sam9_smc.o 17 17 obj-$(CONFIG_ARCH_AT91SAM9263) += at91sam9263.o at91sam926x_time.o at91sam9263_devices.o sam9_smc.o 18 18 obj-$(CONFIG_ARCH_AT91SAM9RL) += at91sam9rl.o at91sam926x_time.o at91sam9rl_devices.o sam9_smc.o 19 - obj-$(CONFIG_ARCH_AT91SAM9G20) += at91sam9260.o at91sam926x_time.o at91sam9260_devices.o sam9_smc.o 20 - obj-$(CONFIG_ARCH_AT91SAM9G45) += at91sam9g45.o at91sam926x_time.o at91sam9g45_devices.o sam9_smc.o 19 + obj-$(CONFIG_ARCH_AT91SAM9G20) += at91sam9260.o at91sam926x_time.o at91sam9260_devices.o sam9_smc.o 20 + obj-$(CONFIG_ARCH_AT91SAM9G45) += at91sam9g45.o at91sam926x_time.o at91sam9g45_devices.o sam9_smc.o 21 21 obj-$(CONFIG_ARCH_AT91CAP9) += at91cap9.o at91sam926x_time.o at91cap9_devices.o sam9_smc.o 22 22 obj-$(CONFIG_ARCH_AT572D940HF) += at572d940hf.o at91sam926x_time.o at572d940hf_devices.o sam9_smc.o 23 23 obj-$(CONFIG_ARCH_AT91X40) += at91x40.o at91x40_time.o
+12 -4
arch/arm/mach-at91/pm_slowclock.S
··· 175 175 orr r3, r3, #(1 << 29) /* bit 29 always set */ 176 176 str r3, [r1, #(AT91_CKGR_PLLAR - AT91_PMC)] 177 177 178 - wait_pllalock 179 - 180 178 /* Save PLLB setting and disable it */ 181 179 ldr r3, [r1, #(AT91_CKGR_PLLBR - AT91_PMC)] 182 180 str r3, .saved_pllbr 183 181 184 182 mov r3, #AT91_PMC_PLLCOUNT 185 183 str r3, [r1, #(AT91_CKGR_PLLBR - AT91_PMC)] 186 - 187 - wait_pllblock 188 184 189 185 /* Turn off the main oscillator */ 190 186 ldr r3, [r1, #(AT91_CKGR_MOR - AT91_PMC)] ··· 201 205 ldr r3, .saved_pllbr 202 206 str r3, [r1, #(AT91_CKGR_PLLBR - AT91_PMC)] 203 207 208 + tst r3, #(AT91_PMC_MUL & 0xff0000) 209 + bne 1f 210 + tst r3, #(AT91_PMC_MUL & ~0xff0000) 211 + beq 2f 212 + 1: 204 213 wait_pllblock 214 + 2: 205 215 206 216 /* Restore PLLA setting */ 207 217 ldr r3, .saved_pllar 208 218 str r3, [r1, #(AT91_CKGR_PLLAR - AT91_PMC)] 209 219 220 + tst r3, #(AT91_PMC_MUL & 0xff0000) 221 + bne 3f 222 + tst r3, #(AT91_PMC_MUL & ~0xff0000) 223 + beq 4f 224 + 3: 210 225 wait_pllalock 226 + 4: 211 227 212 228 #ifdef SLOWDOWN_MASTER_CLOCK 213 229 /*
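The new tst/bne pairs in this hunk exist only because a single ARM immediate cannot encode the whole AT91_PMC_MUL mask, so the test is split into a low half (MUL & 0xff0000) and a high half (MUL & ~0xff0000). The logical intent is one test, easier to see in C — the mask value is assumed here from at91_pmc.h, where MUL occupies bits 16-26:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define AT91_PMC_MUL (0x7ffu << 16)   /* PLL multiplier field (assumed mask) */

/* Wait for PLL lock only if the saved setting actually enables the
 * PLL, i.e. its multiplier field is non-zero.  A zero MUL means the
 * PLL was off, and waiting for a lock that never comes would hang
 * resume from slow-clock mode. */
static bool pll_lock_wait_needed(uint32_t saved_pllr)
{
    return (saved_pllr & AT91_PMC_MUL) != 0;
}
```

Both halves of the assembly test branch to the wait when either part of the field is set, which is exactly this single masked comparison.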
+10 -3
arch/arm/mach-bcmring/dma.c
··· 2221 2221 int dma_unmap(DMA_MemMap_t *memMap, /* Stores state information about the map */ 2222 2222 int dirtied /* non-zero if any of the pages were modified */ 2223 2223 ) { 2224 + 2225 + int rc = 0; 2224 2226 int regionIdx; 2225 2227 int segmentIdx; 2226 2228 DMA_Region_t *region; 2227 2229 DMA_Segment_t *segment; 2230 + 2231 + down(&memMap->lock); 2228 2232 2229 2233 for (regionIdx = 0; regionIdx < memMap->numRegionsUsed; regionIdx++) { 2230 2234 region = &memMap->region[regionIdx]; ··· 2243 2239 printk(KERN_ERR 2244 2240 "%s: vmalloc'd pages are not yet supported\n", 2245 2241 __func__); 2246 - return -EINVAL; 2242 + rc = -EINVAL; 2243 + goto out; 2247 2244 } 2248 2245 2249 2246 case DMA_MEM_TYPE_KMALLOC: ··· 2281 2276 printk(KERN_ERR 2282 2277 "%s: Unsupported memory type: %d\n", 2283 2278 __func__, region->memType); 2284 - return -EINVAL; 2279 + rc = -EINVAL; 2280 + goto out; 2285 2281 } 2286 2282 } 2287 2283 ··· 2320 2314 memMap->numRegionsUsed = 0; 2321 2315 memMap->inUse = 0; 2322 2316 2317 + out: 2323 2318 up(&memMap->lock); 2324 2319 2325 - return 0; 2320 + return rc; 2326 2321 } 2327 2322 2328 2323 EXPORT_SYMBOL(dma_unmap);
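The dma_unmap() change fixes two early returns that left memMap->lock held, by routing every failure through a single exit. The shape of that fix, with pthread_mutex_t standing in for the kernel semaphore (parameters and the literal -22, EINVAL's value, are illustrative):

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Single-exit cleanup as introduced in dma_unmap(): take the lock
 * once, and jump to "out" on error so the unlock can never be
 * skipped.  bad_region marks a region that fails validation; pass -1
 * for a clean run. */
static int dma_unmap_model(int num_regions, int bad_region)
{
    int rc = 0;

    pthread_mutex_lock(&map_lock);

    for (int i = 0; i < num_regions; i++) {
        if (i == bad_region) {
            rc = -22;          /* was: "return -EINVAL", leaking the lock */
            goto out;
        }
    }

out:
    pthread_mutex_unlock(&map_lock);
    return rc;
}
```

A second call after a failing call still succeeds, which is the point: with the original early returns it would deadlock on the still-held lock.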
+3 -3
arch/arm/mach-ep93xx/gpio.c
··· 25 25 #include <mach/hardware.h> 26 26 27 27 /************************************************************************* 28 - * GPIO handling for EP93xx 28 + * Interrupt handling for EP93xx on-chip GPIOs 29 29 *************************************************************************/ 30 30 static unsigned char gpio_int_unmasked[3]; 31 31 static unsigned char gpio_int_enabled[3]; ··· 40 40 static const u8 int_en_register_offset[3] = { 0x9c, 0xb8, 0x58 }; 41 41 static const u8 int_debounce_register_offset[3] = { 0xa8, 0xc4, 0x64 }; 42 42 43 - void ep93xx_gpio_update_int_params(unsigned port) 43 + static void ep93xx_gpio_update_int_params(unsigned port) 44 44 { 45 45 BUG_ON(port > 2); 46 46 ··· 56 56 EP93XX_GPIO_REG(int_en_register_offset[port])); 57 57 } 58 58 59 - void ep93xx_gpio_int_mask(unsigned line) 59 + static inline void ep93xx_gpio_int_mask(unsigned line) 60 60 { 61 61 gpio_int_unmasked[line >> 3] &= ~(1 << (line & 7)); 62 62 }
+10
arch/arm/mach-mx3/Kconfig
··· 62 62 Include support for MX31PDK (3DS) platform. This includes specific 63 63 configurations for the board and its peripherals. 64 64 65 + config MACH_MX31_3DS_MXC_NAND_USE_BBT 66 + bool "Make the MXC NAND driver use the in flash Bad Block Table" 67 + depends on MACH_MX31_3DS 68 + depends on MTD_NAND_MXC 69 + help 70 + Enable this if you want the MXC NAND driver to use the in-flash 71 + Bad Block Table to know which blocks are bad instead of scanning 72 + the entire flash looking for bad block markers. 73 + 65 74 config MACH_MX31MOBOARD 66 75 bool "Support mx31moboard platforms (EPFL Mobots group)" 67 76 select ARCH_MX31 ··· 104 95 config MACH_ARMADILLO5X0 105 96 bool "Support Atmark Armadillo-500 Development Base Board" 106 97 select ARCH_MX31 98 + select MXC_ULPI if USB_ULPI 107 99 help 108 100 Include support for Atmark Armadillo-500 platform. This includes 109 101 specific configurations for the board and its peripherals.
+2 -3
arch/arm/mach-mx3/clock-imx31.c
··· 468 468 } 469 469 470 470 DEFINE_CLOCK(perclk_clk, 0, NULL, 0, NULL, NULL, &ipg_clk); 471 + DEFINE_CLOCK(ckil_clk, 0, NULL, 0, clk_ckil_get_rate, NULL, NULL); 471 472 472 473 DEFINE_CLOCK(sdhc1_clk, 0, MXC_CCM_CGR0, 0, NULL, NULL, &perclk_clk); 473 474 DEFINE_CLOCK(sdhc2_clk, 1, MXC_CCM_CGR0, 2, NULL, NULL, &perclk_clk); ··· 491 490 DEFINE_CLOCK(mstick1_clk, 0, MXC_CCM_CGR1, 2, mstick1_get_rate, NULL, &usb_pll_clk); 492 491 DEFINE_CLOCK(mstick2_clk, 1, MXC_CCM_CGR1, 4, mstick2_get_rate, NULL, &usb_pll_clk); 493 492 DEFINE_CLOCK1(csi_clk, 0, MXC_CCM_CGR1, 6, csi, NULL, &serial_pll_clk); 494 - DEFINE_CLOCK(rtc_clk, 0, MXC_CCM_CGR1, 8, NULL, NULL, &ipg_clk); 493 + DEFINE_CLOCK(rtc_clk, 0, MXC_CCM_CGR1, 8, NULL, NULL, &ckil_clk); 495 494 DEFINE_CLOCK(wdog_clk, 0, MXC_CCM_CGR1, 10, NULL, NULL, &ipg_clk); 496 495 DEFINE_CLOCK(pwm_clk, 0, MXC_CCM_CGR1, 12, NULL, NULL, &perclk_clk); 497 496 DEFINE_CLOCK(usb_clk2, 0, MXC_CCM_CGR1, 18, usb_get_rate, NULL, &ahb_clk); ··· 515 514 DEFINE_CLOCK(nfc_clk, 0, NULL, 0, nfc_get_rate, NULL, &ahb_clk); 516 515 DEFINE_CLOCK(scc_clk, 0, NULL, 0, NULL, NULL, &ipg_clk); 517 516 DEFINE_CLOCK(ipg_clk, 0, NULL, 0, ipg_get_rate, NULL, &ahb_clk); 518 - DEFINE_CLOCK(ckil_clk, 0, NULL, 0, clk_ckil_get_rate, NULL, NULL); 519 517 520 518 #define _REGISTER_CLOCK(d, n, c) \ 521 519 { \ ··· 572 572 _REGISTER_CLOCK(NULL, "iim", iim_clk) 573 573 _REGISTER_CLOCK(NULL, "mpeg4", mpeg4_clk) 574 574 _REGISTER_CLOCK(NULL, "mbx", mbx_clk) 575 - _REGISTER_CLOCK("mxc_rtc", NULL, ckil_clk) 576 575 }; 577 576 578 577 int __init mx31_clocks_init(unsigned long fref)
+18 -1
arch/arm/mach-mx3/devices.c
··· 575 575 .resource = imx_ssi_resources1, 576 576 }; 577 577 578 - static int mx3_devices_init(void) 578 + static struct resource imx_wdt_resources[] = { 579 + { 580 + .flags = IORESOURCE_MEM, 581 + }, 582 + }; 583 + 584 + struct platform_device imx_wdt_device0 = { 585 + .name = "imx-wdt", 586 + .id = 0, 587 + .num_resources = ARRAY_SIZE(imx_wdt_resources), 588 + .resource = imx_wdt_resources, 589 + }; 590 + 591 + static int __init mx3_devices_init(void) 579 592 { 580 593 if (cpu_is_mx31()) { 581 594 mxc_nand_resources[0].start = MX31_NFC_BASE_ADDR; 582 595 mxc_nand_resources[0].end = MX31_NFC_BASE_ADDR + 0xfff; 596 + imx_wdt_resources[0].start = MX31_WDOG_BASE_ADDR; 597 + imx_wdt_resources[0].end = MX31_WDOG_BASE_ADDR + 0x3fff; 583 598 mxc_register_device(&mxc_rnga_device, NULL); 584 599 } 585 600 if (cpu_is_mx35()) { ··· 612 597 imx_ssi_resources0[1].end = MX35_INT_SSI1; 613 598 imx_ssi_resources1[1].start = MX35_INT_SSI2; 614 599 imx_ssi_resources1[1].end = MX35_INT_SSI2; 600 + imx_wdt_resources[0].start = MX35_WDOG_BASE_ADDR; 601 + imx_wdt_resources[0].end = MX35_WDOG_BASE_ADDR + 0x3fff; 615 602 } 616 603 617 604 return 0;
+2 -1
arch/arm/mach-mx3/devices.h
··· 25 25 extern struct platform_device mxc_spi_device2; 26 26 extern struct platform_device imx_ssi_device0; 27 27 extern struct platform_device imx_ssi_device1; 28 - 28 + extern struct platform_device imx_ssi_device1; 29 + extern struct platform_device imx_wdt_device0;
+166
arch/arm/mach-mx3/mach-armadillo5x0.c
··· 36 36 #include <linux/input.h> 37 37 #include <linux/gpio_keys.h> 38 38 #include <linux/i2c.h> 39 + #include <linux/usb/otg.h> 40 + #include <linux/usb/ulpi.h> 41 + #include <linux/delay.h> 39 42 40 43 #include <mach/hardware.h> 41 44 #include <asm/mach-types.h> ··· 55 52 #include <mach/ipu.h> 56 53 #include <mach/mx3fb.h> 57 54 #include <mach/mxc_nand.h> 55 + #include <mach/mxc_ehci.h> 56 + #include <mach/ulpi.h> 58 57 59 58 #include "devices.h" 60 59 #include "crm_regs.h" ··· 108 103 /* I2C2 */ 109 104 MX31_PIN_CSPI2_MOSI__SCL, 110 105 MX31_PIN_CSPI2_MISO__SDA, 106 + /* OTG */ 107 + MX31_PIN_USBOTG_DATA0__USBOTG_DATA0, 108 + MX31_PIN_USBOTG_DATA1__USBOTG_DATA1, 109 + MX31_PIN_USBOTG_DATA2__USBOTG_DATA2, 110 + MX31_PIN_USBOTG_DATA3__USBOTG_DATA3, 111 + MX31_PIN_USBOTG_DATA4__USBOTG_DATA4, 112 + MX31_PIN_USBOTG_DATA5__USBOTG_DATA5, 113 + MX31_PIN_USBOTG_DATA6__USBOTG_DATA6, 114 + MX31_PIN_USBOTG_DATA7__USBOTG_DATA7, 115 + MX31_PIN_USBOTG_CLK__USBOTG_CLK, 116 + MX31_PIN_USBOTG_DIR__USBOTG_DIR, 117 + MX31_PIN_USBOTG_NXT__USBOTG_NXT, 118 + MX31_PIN_USBOTG_STP__USBOTG_STP, 119 + /* USB host 2 */ 120 + IOMUX_MODE(MX31_PIN_USBH2_CLK, IOMUX_CONFIG_FUNC), 121 + IOMUX_MODE(MX31_PIN_USBH2_DIR, IOMUX_CONFIG_FUNC), 122 + IOMUX_MODE(MX31_PIN_USBH2_NXT, IOMUX_CONFIG_FUNC), 123 + IOMUX_MODE(MX31_PIN_USBH2_STP, IOMUX_CONFIG_FUNC), 124 + IOMUX_MODE(MX31_PIN_USBH2_DATA0, IOMUX_CONFIG_FUNC), 125 + IOMUX_MODE(MX31_PIN_USBH2_DATA1, IOMUX_CONFIG_FUNC), 126 + IOMUX_MODE(MX31_PIN_STXD3, IOMUX_CONFIG_FUNC), 127 + IOMUX_MODE(MX31_PIN_SRXD3, IOMUX_CONFIG_FUNC), 128 + IOMUX_MODE(MX31_PIN_SCK3, IOMUX_CONFIG_FUNC), 129 + IOMUX_MODE(MX31_PIN_SFS3, IOMUX_CONFIG_FUNC), 130 + IOMUX_MODE(MX31_PIN_STXD6, IOMUX_CONFIG_FUNC), 131 + IOMUX_MODE(MX31_PIN_SRXD6, IOMUX_CONFIG_FUNC), 111 132 }; 133 + 134 + /* USB */ 135 + #if defined(CONFIG_USB_ULPI) 136 + 137 + #define OTG_RESET IOMUX_TO_GPIO(MX31_PIN_STXD4) 138 + #define USBH2_RESET IOMUX_TO_GPIO(MX31_PIN_SCK6) 139 + #define USBH2_CS IOMUX_TO_GPIO(MX31_PIN_GPIO1_3) 140 + 141 + #define USB_PAD_CFG (PAD_CTL_DRV_MAX | PAD_CTL_SRE_FAST | PAD_CTL_HYS_CMOS | \ 142 + PAD_CTL_ODE_CMOS | PAD_CTL_100K_PU) 143 + 144 + static int usbotg_init(struct platform_device *pdev) 145 + { 146 + int err; 147 + 148 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA0, USB_PAD_CFG); 149 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA1, USB_PAD_CFG); 150 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA2, USB_PAD_CFG); 151 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA3, USB_PAD_CFG); 152 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA4, USB_PAD_CFG); 153 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA5, USB_PAD_CFG); 154 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA6, USB_PAD_CFG); 155 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DATA7, USB_PAD_CFG); 156 + mxc_iomux_set_pad(MX31_PIN_USBOTG_CLK, USB_PAD_CFG); 157 + mxc_iomux_set_pad(MX31_PIN_USBOTG_DIR, USB_PAD_CFG); 158 + mxc_iomux_set_pad(MX31_PIN_USBOTG_NXT, USB_PAD_CFG); 159 + mxc_iomux_set_pad(MX31_PIN_USBOTG_STP, USB_PAD_CFG); 160 + 161 + /* Chip already enabled by hardware */ 162 + /* OTG phy reset*/ 163 + err = gpio_request(OTG_RESET, "USB-OTG-RESET"); 164 + if (err) { 165 + pr_err("Failed to request the usb otg reset gpio\n"); 166 + return err; 167 + } 168 + 169 + err = gpio_direction_output(OTG_RESET, 1/*HIGH*/); 170 + if (err) { 171 + pr_err("Failed to reset the usb otg phy\n"); 172 + goto otg_free_reset; 173 + } 174 + 175 + gpio_set_value(OTG_RESET, 0/*LOW*/); 176 + mdelay(5); 177 + gpio_set_value(OTG_RESET, 1/*HIGH*/); 178 + 179 + return 0; 180 + 181 + otg_free_reset: 182 + gpio_free(OTG_RESET); 183 + return err; 184 + } 185 + 186 + static int usbh2_init(struct platform_device *pdev) 187 + { 188 + int err; 189 + 190 + mxc_iomux_set_pad(MX31_PIN_USBH2_CLK, USB_PAD_CFG); 191 + mxc_iomux_set_pad(MX31_PIN_USBH2_DIR, USB_PAD_CFG); 192 + mxc_iomux_set_pad(MX31_PIN_USBH2_NXT, USB_PAD_CFG); 193 + mxc_iomux_set_pad(MX31_PIN_USBH2_STP, USB_PAD_CFG); 194 + mxc_iomux_set_pad(MX31_PIN_USBH2_DATA0, USB_PAD_CFG); 195 + mxc_iomux_set_pad(MX31_PIN_USBH2_DATA1, USB_PAD_CFG); 196 + mxc_iomux_set_pad(MX31_PIN_SRXD6, USB_PAD_CFG); 197 + mxc_iomux_set_pad(MX31_PIN_STXD6, USB_PAD_CFG); 198 + mxc_iomux_set_pad(MX31_PIN_SFS3, USB_PAD_CFG); 199 + mxc_iomux_set_pad(MX31_PIN_SCK3, USB_PAD_CFG); 200 + mxc_iomux_set_pad(MX31_PIN_SRXD3, USB_PAD_CFG); 201 + mxc_iomux_set_pad(MX31_PIN_STXD3, USB_PAD_CFG); 202 + 203 + mxc_iomux_set_gpr(MUX_PGP_UH2, true); 204 + 205 + 206 + /* Enable the chip */ 207 + err = gpio_request(USBH2_CS, "USB-H2-CS"); 208 + if (err) { 209 + pr_err("Failed to request the usb host 2 CS gpio\n"); 210 + return err; 211 + } 212 + 213 + err = gpio_direction_output(USBH2_CS, 0/*Enabled*/); 214 + if (err) { 215 + pr_err("Failed to drive the usb host 2 CS gpio\n"); 216 + goto h2_free_cs; 217 + } 218 + 219 + /* H2 phy reset*/ 220 + err = gpio_request(USBH2_RESET, "USB-H2-RESET"); 221 + if (err) { 222 + pr_err("Failed to request the usb host 2 reset gpio\n"); 223 + goto h2_free_cs; 224 + } 225 + 226 + err = gpio_direction_output(USBH2_RESET, 1/*HIGH*/); 227 + if (err) { 228 + pr_err("Failed to reset the usb host 2 phy\n"); 229 + goto h2_free_reset; 230 + } 231 + 232 + gpio_set_value(USBH2_RESET, 0/*LOW*/); 233 + mdelay(5); 234 + gpio_set_value(USBH2_RESET, 1/*HIGH*/); 235 + 236 + return 0; 237 + 238 + h2_free_reset: 239 + gpio_free(USBH2_RESET); 240 + h2_free_cs: 241 + gpio_free(USBH2_CS); 242 + return err; 243 + } 244 + 245 + static struct mxc_usbh_platform_data usbotg_pdata = { 246 + .init = usbotg_init, 247 + .portsc = MXC_EHCI_MODE_ULPI | MXC_EHCI_UTMI_8BIT, 248 + .flags = MXC_EHCI_POWER_PINS_ENABLED | MXC_EHCI_INTERFACE_DIFF_UNI, 249 + }; 250 + 251 + static struct mxc_usbh_platform_data usbh2_pdata = { 252 + .init = usbh2_init, 253 + .portsc = MXC_EHCI_MODE_ULPI | MXC_EHCI_UTMI_8BIT, 254 + .flags = MXC_EHCI_POWER_PINS_ENABLED | MXC_EHCI_INTERFACE_DIFF_UNI, 255 + }; 256 + #endif /* CONFIG_USB_ULPI */ 112 257 113 258 /* RTC over I2C*/ 114 259 #define ARMADILLO5X0_RTC_GPIO IOMUX_TO_GPIO(MX31_PIN_SRXD4) ··· 548 393 if (armadillo5x0_i2c_rtc.irq == 0) 549 394 pr_warning("armadillo5x0_init: failed to get RTC IRQ\n") ; 550 395 i2c_register_board_info(1, &armadillo5x0_i2c_rtc, 1); 396 + 397 + /* USB */ 398 + #if defined(CONFIG_USB_ULPI) 399 + usbotg_pdata.otg = otg_ulpi_create(&mxc_ulpi_access_ops, 400 + USB_OTG_DRV_VBUS | USB_OTG_DRV_VBUS_EXT); 401 + usbh2_pdata.otg = otg_ulpi_create(&mxc_ulpi_access_ops, 402 + USB_OTG_DRV_VBUS | USB_OTG_DRV_VBUS_EXT); 403 + 404 + mxc_register_device(&mxc_otg_host, &usbotg_pdata); 405 + mxc_register_device(&mxc_usbh2, &usbh2_pdata); 406 + #endif 551 407 } 552 408 553 409 static void __init armadillo5x0_timer_init(void)
+97 -19
arch/arm/mach-mx3/mach-mx31_3ds.c
··· 23 23 #include <linux/gpio.h> 24 24 #include <linux/smsc911x.h> 25 25 #include <linux/platform_device.h> 26 + #include <linux/mfd/mc13783.h> 27 + #include <linux/spi/spi.h> 28 + #include <linux/regulator/machine.h> 26 29 27 30 #include <mach/hardware.h> 28 31 #include <asm/mach-types.h> ··· 34 31 #include <asm/memory.h> 35 32 #include <asm/mach/map.h> 36 33 #include <mach/common.h> 37 - #include <mach/board-mx31pdk.h> 34 + #include <mach/board-mx31_3ds.h> 38 35 #include <mach/imx-uart.h> 39 36 #include <mach/iomux-mx3.h> 37 + #include <mach/mxc_nand.h> 38 + #include <mach/spi.h> 40 39 #include "devices.h" 41 40 42 41 /*! 43 - * @file mx31pdk.c 42 + * @file mx31_3ds.c 44 43 * 45 44 * @brief This file contains the board-specific initialization routines. 46 45 * 47 46 * @ingroup System 48 47 */ 49 48 50 - static int mx31pdk_pins[] = { 49 + static int mx31_3ds_pins[] = { 51 50 /* UART1 */ 52 51 MX31_PIN_CTS1__CTS1, 53 52 MX31_PIN_RTS1__RTS1, 54 53 MX31_PIN_TXD1__TXD1, 55 54 MX31_PIN_RXD1__RXD1, 56 55 IOMUX_MODE(MX31_PIN_GPIO1_1, IOMUX_CONFIG_GPIO), 56 + /* SPI 1 */ 57 + MX31_PIN_CSPI2_SCLK__SCLK, 58 + MX31_PIN_CSPI2_MOSI__MOSI, 59 + MX31_PIN_CSPI2_MISO__MISO, 60 + MX31_PIN_CSPI2_SPI_RDY__SPI_RDY, 61 + MX31_PIN_CSPI2_SS0__SS0, 62 + MX31_PIN_CSPI2_SS2__SS2, /*CS for MC13783 */ 63 + /* MC13783 IRQ */ 64 + IOMUX_MODE(MX31_PIN_GPIO1_3, IOMUX_CONFIG_GPIO), 65 + }; 66 + 67 + /* Regulators */ 68 + static struct regulator_init_data pwgtx_init = { 69 + .constraints = { 70 + .boot_on = 1, 71 + .always_on = 1, 72 + }, 73 + }; 74 + 75 + static struct mc13783_regulator_init_data mx31_3ds_regulators[] = { 76 + { 77 + .id = MC13783_REGU_PWGT1SPI, /* Power Gate for ARM core. */ 78 + .init_data = &pwgtx_init, 79 + }, { 80 + .id = MC13783_REGU_PWGT2SPI, /* Power Gate for L2 Cache. */
81 + .init_data = &pwgtx_init, 82 + }, 83 + }; 84 + 85 + /* MC13783 */ 86 + static struct mc13783_platform_data mc13783_pdata __initdata = { 87 + .regulators = mx31_3ds_regulators, 88 + .num_regulators = ARRAY_SIZE(mx31_3ds_regulators), 89 + .flags = MC13783_USE_REGULATOR, 90 + }; 91 + 92 + /* SPI */ 93 + static int spi1_internal_chipselect[] = { 94 + MXC_SPI_CS(0), 95 + MXC_SPI_CS(2), 96 + }; 97 + 98 + static struct spi_imx_master spi1_pdata = { 99 + .chipselect = spi1_internal_chipselect, 100 + .num_chipselect = ARRAY_SIZE(spi1_internal_chipselect), 101 + }; 102 + 103 + static struct spi_board_info mx31_3ds_spi_devs[] __initdata = { 104 + { 105 + .modalias = "mc13783", 106 + .max_speed_hz = 1000000, 107 + .bus_num = 1, 108 + .chip_select = 1, /* SS2 */ 109 + .platform_data = &mc13783_pdata, 110 + .irq = IOMUX_TO_IRQ(MX31_PIN_GPIO1_3), 111 + .mode = SPI_CS_HIGH, 112 + }, 113 + }; 114 + 115 + /* 116 + * NAND Flash 117 + */ 118 + static struct mxc_nand_platform_data imx31_3ds_nand_flash_pdata = { 119 + .width = 1, 120 + .hw_ecc = 1, 121 + #ifdef MACH_MX31_3DS_MXC_NAND_USE_BBT 122 + .flash_bbt = 1, 123 + #endif 57 124 }; 58 125 59 126 static struct imxuart_platform_data uart_pdata = { ··· 168 95 * LEDs, switches, interrupts for Ethernet.
169 96 */ 170 97 171 - static void mx31pdk_expio_irq_handler(uint32_t irq, struct irq_desc *desc) 98 + static void mx31_3ds_expio_irq_handler(uint32_t irq, struct irq_desc *desc) 172 99 { 173 100 uint32_t imr_val; 174 101 uint32_t int_valid; ··· 236 163 .unmask = expio_unmask_irq, 237 164 }; 238 165 239 - static int __init mx31pdk_init_expio(void) 166 + static int __init mx31_3ds_init_expio(void) 240 167 { 241 168 int i; 242 169 int ret; ··· 249 176 return -ENODEV; 250 177 } 251 178 252 - pr_info("i.MX31PDK Debug board detected, rev = 0x%04X\n", 179 + pr_info("i.MX31 3DS Debug board detected, rev = 0x%04X\n", 253 180 __raw_readw(CPLD_CODE_VER_REG)); 254 181 255 182 /* ··· 274 201 set_irq_flags(i, IRQF_VALID); 275 202 } 276 203 set_irq_type(EXPIO_PARENT_INT, IRQ_TYPE_LEVEL_LOW); 277 - set_irq_chained_handler(EXPIO_PARENT_INT, mx31pdk_expio_irq_handler); 204 + set_irq_chained_handler(EXPIO_PARENT_INT, mx31_3ds_expio_irq_handler); 278 205 279 206 return 0; 280 207 } ··· 282 209 /* 283 210 * This structure defines the MX31 memory map. 284 211 */ 285 - static struct map_desc mx31pdk_io_desc[] __initdata = { 212 + static struct map_desc mx31_3ds_io_desc[] __initdata = { 286 213 { 287 214 .virtual = MX31_CS5_BASE_ADDR_VIRT, 288 215 .pfn = __phys_to_pfn(MX31_CS5_BASE_ADDR), ··· 294 221 /* 295 222 * Set up static virtual mappings. 296 223 */ 297 - static void __init mx31pdk_map_io(void) 224 + static void __init mx31_3ds_map_io(void) 298 225 { 299 226 mx31_map_io(); 300 - iotable_init(mx31pdk_io_desc, ARRAY_SIZE(mx31pdk_io_desc)); 227 + iotable_init(mx31_3ds_io_desc, ARRAY_SIZE(mx31_3ds_io_desc)); 301 228 } 302 229 303 230 /*! 
··· 305 232 */ 306 233 static void __init mxc_board_init(void) 307 234 { 308 - mxc_iomux_setup_multiple_pins(mx31pdk_pins, ARRAY_SIZE(mx31pdk_pins), 309 - "mx31pdk"); 235 + mxc_iomux_setup_multiple_pins(mx31_3ds_pins, ARRAY_SIZE(mx31_3ds_pins), 236 + "mx31_3ds"); 310 237 311 238 mxc_register_device(&mxc_uart_device0, &uart_pdata); 239 + mxc_register_device(&mxc_nand_device, &imx31_3ds_nand_flash_pdata); 312 240 313 - if (!mx31pdk_init_expio()) 241 + mxc_register_device(&mxc_spi_device1, &spi1_pdata); 242 + spi_register_board_info(mx31_3ds_spi_devs, 243 + ARRAY_SIZE(mx31_3ds_spi_devs)); 244 + 245 + if (!mx31_3ds_init_expio()) 314 246 platform_device_register(&smsc911x_device); 315 247 } 316 248 317 - static void __init mx31pdk_timer_init(void) 249 + static void __init mx31_3ds_timer_init(void) 318 250 { 319 251 mx31_clocks_init(26000000); 320 252 } 321 253 322 - static struct sys_timer mx31pdk_timer = { 323 - .init = mx31pdk_timer_init, 254 + static struct sys_timer mx31_3ds_timer = { 255 + .init = mx31_3ds_timer_init, 324 256 }; 325 257 326 258 /* 327 259 * The following uses standard kernel macros defined in arch.h in order to 328 - * initialize __mach_desc_MX31PDK data structure. 260 + * initialize __mach_desc_MX31_3DS data structure. 329 261 */ 330 262 MACHINE_START(MX31_3DS, "Freescale MX31PDK (3DS)") 331 263 /* Maintainer: Freescale Semiconductor, Inc. */ 332 264 .phys_io = MX31_AIPS1_BASE_ADDR, 333 265 .io_pg_offst = (MX31_AIPS1_BASE_ADDR_VIRT >> 18) & 0xfffc, 334 266 .boot_params = MX3x_PHYS_OFFSET + 0x100, 335 - .map_io = mx31pdk_map_io, 267 + .map_io = mx31_3ds_map_io, 336 268 .init_irq = mx31_init_irq, 337 269 .init_machine = mxc_board_init, 338 - .timer = &mx31pdk_timer, 270 + .timer = &mx31_3ds_timer, 339 271 MACHINE_END
-1
arch/arm/mach-mx3/mach-pcm037.c
··· 35 35 #include <linux/can/platform/sja1000.h> 36 36 #include <linux/usb/otg.h> 37 37 #include <linux/usb/ulpi.h> 38 - #include <linux/fsl_devices.h> 39 38 #include <linux/gfp.h> 40 39 41 40 #include <media/soc_camera.h>
+1 -1
arch/arm/mach-mx3/mx31lite-db.c
··· 28 28 #include <linux/types.h> 29 29 #include <linux/init.h> 30 30 #include <linux/gpio.h> 31 - #include <linux/platform_device.h> 32 31 #include <linux/leds.h> 33 32 #include <linux/platform_device.h> 34 33 ··· 205 206 mxc_register_device(&mxcsdhc_device0, &mmc_pdata); 206 207 mxc_register_device(&mxc_spi_device0, &spi0_pdata); 207 208 platform_device_register(&litekit_led_device); 209 + mxc_register_device(&imx_wdt_device0, NULL); 208 210 } 209 211
+1 -1
arch/arm/mach-mx5/clock-mx51.c
··· 757 757 758 758 /* GPT */ 759 759 DEFINE_CLOCK(gpt_clk, 0, MXC_CCM_CCGR2, MXC_CCM_CCGRx_CG9_OFFSET, 760 - NULL, NULL, &ipg_perclk, NULL); 760 + NULL, NULL, &ipg_clk, NULL); 761 761 DEFINE_CLOCK(gpt_ipg_clk, 0, MXC_CCM_CCGR2, MXC_CCM_CCGRx_CG10_OFFSET, 762 762 NULL, NULL, &ipg_clk, NULL); 763 763
+53
arch/arm/mach-mx5/cpu.c
··· 14 14 #include <linux/types.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/init.h> 17 + #include <linux/module.h> 17 18 #include <mach/hardware.h> 18 19 #include <asm/io.h> 20 + 21 + static int cpu_silicon_rev = -1; 22 + 23 + #define SI_REV 0x48 24 + 25 + static void query_silicon_parameter(void) 26 + { 27 + void __iomem *rom = ioremap(MX51_IROM_BASE_ADDR, MX51_IROM_SIZE); 28 + u32 rev; 29 + 30 + if (!rom) { 31 + cpu_silicon_rev = -EINVAL; 32 + return; 33 + } 34 + 35 + rev = readl(rom + SI_REV); 36 + switch (rev) { 37 + case 0x1: 38 + cpu_silicon_rev = MX51_CHIP_REV_1_0; 39 + break; 40 + case 0x2: 41 + cpu_silicon_rev = MX51_CHIP_REV_1_1; 42 + break; 43 + case 0x10: 44 + cpu_silicon_rev = MX51_CHIP_REV_2_0; 45 + break; 46 + case 0x20: 47 + cpu_silicon_rev = MX51_CHIP_REV_3_0; 48 + break; 49 + default: 50 + cpu_silicon_rev = 0; 51 + } 52 + 53 + iounmap(rom); 54 + } 55 + 56 + /* 57 + * Returns: 58 + * the silicon revision of the cpu 59 + * -EINVAL - not a mx51 60 + */ 61 + int mx51_revision(void) 62 + { 63 + if (!cpu_is_mx51()) 64 + return -EINVAL; 65 + 66 + if (cpu_silicon_rev == -1) 67 + query_silicon_parameter(); 68 + 69 + return cpu_silicon_rev; 70 + } 71 + EXPORT_SYMBOL(mx51_revision); 19 72 20 73 static int __init post_cpu_init(void) 21 74 {
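The cpu.c hunk above derives the i.MX51 silicon revision by reading a code word at offset 0x48 (SI_REV) of the boot ROM and translating it in a switch. The translation step can be exercised on its own; the sketch below uses stand-in numeric values for the MX51_CHIP_REV_* macros, whose real definitions live in the mx51 headers and are not visible in this diff:

```c
/* Illustrative stand-ins; the kernel's MX51_CHIP_REV_* macros are
 * defined elsewhere and may use different numeric values. */
enum {
	CHIP_REV_1_0 = 0x10,
	CHIP_REV_1_1 = 0x11,
	CHIP_REV_2_0 = 0x20,
	CHIP_REV_3_0 = 0x30,
};

/* Translate the raw SI_REV word read from the IROM into a revision
 * constant; unknown codes collapse to 0, like the default case above. */
int decode_si_rev(unsigned int rev)
{
	switch (rev) {
	case 0x1:
		return CHIP_REV_1_0;
	case 0x2:
		return CHIP_REV_1_1;
	case 0x10:
		return CHIP_REV_2_0;
	case 0x20:
		return CHIP_REV_3_0;
	default:
		return 0;
	}
}
```

mx51_revision() then caches the decoded value in cpu_silicon_rev, so the ROM is mapped and read only once per boot.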
+13 -19
arch/arm/mach-mx5/mm.c
··· 35 35 .length = MX51_DEBUG_SIZE, 36 36 .type = MT_DEVICE 37 37 }, { 38 - .virtual = MX51_TZIC_BASE_ADDR_VIRT, 39 - .pfn = __phys_to_pfn(MX51_TZIC_BASE_ADDR), 40 - .length = MX51_TZIC_SIZE, 41 - .type = MT_DEVICE 42 - }, { 43 38 .virtual = MX51_AIPS1_BASE_ADDR_VIRT, 44 39 .pfn = __phys_to_pfn(MX51_AIPS1_BASE_ADDR), 45 40 .length = MX51_AIPS1_SIZE, ··· 49 54 .pfn = __phys_to_pfn(MX51_AIPS2_BASE_ADDR), 50 55 .length = MX51_AIPS2_SIZE, 51 56 .type = MT_DEVICE 52 - }, { 53 - .virtual = MX51_NFC_AXI_BASE_ADDR_VIRT, 54 - .pfn = __phys_to_pfn(MX51_NFC_AXI_BASE_ADDR), 55 - .length = MX51_NFC_AXI_SIZE, 56 - .type = MT_DEVICE 57 57 }, 58 58 }; 59 59 ··· 59 69 */ 60 70 void __init mx51_map_io(void) 61 71 { 62 - u32 tzic_addr; 63 - 64 - if (mx51_revision() < MX51_CHIP_REV_2_0) 65 - tzic_addr = 0x8FFFC000; 66 - else 67 - tzic_addr = 0xE0003000; 68 - mxc_io_desc[2].pfn = __phys_to_pfn(tzic_addr); 69 - 70 72 mxc_set_cpu_type(MXC_CPU_MX51); 71 73 mxc_iomux_v3_init(MX51_IO_ADDRESS(MX51_IOMUXC_BASE_ADDR)); 72 74 mxc_arch_reset_init(MX51_IO_ADDRESS(MX51_WDOG_BASE_ADDR)); ··· 67 85 68 86 void __init mx51_init_irq(void) 69 87 { 70 - tzic_init_irq(MX51_IO_ADDRESS(MX51_TZIC_BASE_ADDR)); 88 + unsigned long tzic_addr; 89 + void __iomem *tzic_virt; 90 + 91 + if (mx51_revision() < MX51_CHIP_REV_2_0) 92 + tzic_addr = MX51_TZIC_BASE_ADDR_TO1; 93 + else 94 + tzic_addr = MX51_TZIC_BASE_ADDR; 95 + 96 + tzic_virt = ioremap(tzic_addr, SZ_16K); 97 + if (!tzic_virt) 98 + panic("unable to map TZIC interrupt controller\n"); 99 + 100 + tzic_init_irq(tzic_virt); 71 101 }
+1 -8
arch/arm/mm/copypage-v6.c
··· 41 41 kfrom = kmap_atomic(from, KM_USER0); 42 42 kto = kmap_atomic(to, KM_USER1); 43 43 copy_page(kto, kfrom); 44 - #ifdef CONFIG_HIGHMEM 45 - /* 46 - * kmap_atomic() doesn't set the page virtual address, and 47 - * kunmap_atomic() takes care of cache flushing already. 48 - */ 49 - if (page_address(to) != NULL) 50 - #endif 51 - __cpuc_flush_dcache_area(kto, PAGE_SIZE); 44 + __cpuc_flush_dcache_area(kto, PAGE_SIZE); 52 45 kunmap_atomic(kto, KM_USER1); 53 46 kunmap_atomic(kfrom, KM_USER0); 54 47 }
+5
arch/arm/mm/dma-mapping.c
··· 464 464 vaddr += offset; 465 465 op(vaddr, len, dir); 466 466 kunmap_high(page); 467 + } else if (cache_is_vipt()) { 468 + pte_t saved_pte; 469 + vaddr = kmap_high_l1_vipt(page, &saved_pte); 470 + op(vaddr + offset, len, dir); 471 + kunmap_high_l1_vipt(page, saved_pte); 467 472 } 468 473 } else { 469 474 vaddr = page_address(page) + offset;
+15 -10
arch/arm/mm/flush.c
··· 13 13 14 14 #include <asm/cacheflush.h> 15 15 #include <asm/cachetype.h> 16 + #include <asm/highmem.h> 16 17 #include <asm/smp_plat.h> 17 18 #include <asm/system.h> 18 19 #include <asm/tlbflush.h> ··· 153 152 154 153 void __flush_dcache_page(struct address_space *mapping, struct page *page) 155 154 { 156 - void *addr = page_address(page); 157 - 158 155 /* 159 156 * Writeback any data associated with the kernel mapping of this 160 157 * page. This ensures that data in the physical page is mutually 161 158 * coherent with the kernels mapping. 162 159 */ 163 - #ifdef CONFIG_HIGHMEM 164 - /* 165 - * kmap_atomic() doesn't set the page virtual address, and 166 - * kunmap_atomic() takes care of cache flushing already. 167 - */ 168 - if (addr) 169 - #endif 170 - __cpuc_flush_dcache_area(addr, PAGE_SIZE); 160 + if (!PageHighMem(page)) { 161 + __cpuc_flush_dcache_area(page_address(page), PAGE_SIZE); 162 + } else { 163 + void *addr = kmap_high_get(page); 164 + if (addr) { 165 + __cpuc_flush_dcache_area(addr, PAGE_SIZE); 166 + kunmap_high(page); 167 + } else if (cache_is_vipt()) { 168 + pte_t saved_pte; 169 + addr = kmap_high_l1_vipt(page, &saved_pte); 170 + __cpuc_flush_dcache_area(addr, PAGE_SIZE); 171 + kunmap_high_l1_vipt(page, saved_pte); 172 + } 173 + } 171 174 172 175 /* 173 176 * If this is a page cache page, and we have an aliasing VIPT cache,
+86 -1
arch/arm/mm/highmem.c
··· 79 79 unsigned int idx = type + KM_TYPE_NR * smp_processor_id(); 80 80 81 81 if (kvaddr >= (void *)FIXADDR_START) { 82 - __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); 82 + if (cache_is_vivt()) 83 + __cpuc_flush_dcache_area((void *)vaddr, PAGE_SIZE); 83 84 #ifdef CONFIG_DEBUG_HIGHMEM 84 85 BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN + idx)); 85 86 set_pte_ext(TOP_PTE(vaddr), __pte(0), 0); ··· 125 124 pte = TOP_PTE(vaddr); 126 125 return pte_page(*pte); 127 126 } 127 + 128 + #ifdef CONFIG_CPU_CACHE_VIPT 129 + 130 + #include <linux/percpu.h> 131 + 132 + /* 133 + * The VIVT cache of a highmem page is always flushed before the page 134 + * is unmapped. Hence unmapped highmem pages need no cache maintenance 135 + * in that case. 136 + * 137 + * However unmapped pages may still be cached with a VIPT cache, and 138 + * it is not possible to perform cache maintenance on them using physical 139 + * addresses unfortunately. So we have no choice but to set up a temporary 140 + * virtual mapping for that purpose. 141 + * 142 + * Yet this VIPT cache maintenance may be triggered from DMA support 143 + * functions which are possibly called from interrupt context. As we don't 144 + * want to keep interrupt disabled all the time when such maintenance is 145 + * taking place, we therefore allow for some reentrancy by preserving and 146 + * restoring the previous fixmap entry before the interrupted context is 147 + * resumed. If the reentrancy depth is 0 then there is no need to restore 148 + * the previous fixmap, and leaving the current one in place allow it to 149 + * be reused the next time without a TLB flush (common with DMA). 
150 + */ 151 + 152 + static DEFINE_PER_CPU(int, kmap_high_l1_vipt_depth); 153 + 154 + void *kmap_high_l1_vipt(struct page *page, pte_t *saved_pte) 155 + { 156 + unsigned int idx, cpu = smp_processor_id(); 157 + int *depth = &per_cpu(kmap_high_l1_vipt_depth, cpu); 158 + unsigned long vaddr, flags; 159 + pte_t pte, *ptep; 160 + 161 + idx = KM_L1_CACHE + KM_TYPE_NR * cpu; 162 + vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); 163 + ptep = TOP_PTE(vaddr); 164 + pte = mk_pte(page, kmap_prot); 165 + 166 + if (!in_interrupt()) 167 + preempt_disable(); 168 + 169 + raw_local_irq_save(flags); 170 + (*depth)++; 171 + if (pte_val(*ptep) == pte_val(pte)) { 172 + *saved_pte = pte; 173 + } else { 174 + *saved_pte = *ptep; 175 + set_pte_ext(ptep, pte, 0); 176 + local_flush_tlb_kernel_page(vaddr); 177 + } 178 + raw_local_irq_restore(flags); 179 + 180 + return (void *)vaddr; 181 + } 182 + 183 + void kunmap_high_l1_vipt(struct page *page, pte_t saved_pte) 184 + { 185 + unsigned int idx, cpu = smp_processor_id(); 186 + int *depth = &per_cpu(kmap_high_l1_vipt_depth, cpu); 187 + unsigned long vaddr, flags; 188 + pte_t pte, *ptep; 189 + 190 + idx = KM_L1_CACHE + KM_TYPE_NR * cpu; 191 + vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx); 192 + ptep = TOP_PTE(vaddr); 193 + pte = mk_pte(page, kmap_prot); 194 + 195 + BUG_ON(pte_val(*ptep) != pte_val(pte)); 196 + BUG_ON(*depth <= 0); 197 + 198 + raw_local_irq_save(flags); 199 + (*depth)--; 200 + if (*depth != 0 && pte_val(pte) != pte_val(saved_pte)) { 201 + set_pte_ext(ptep, saved_pte, 0); 202 + local_flush_tlb_kernel_page(vaddr); 203 + } 204 + raw_local_irq_restore(flags); 205 + 206 + if (!in_interrupt()) 207 + preempt_enable(); 208 + } 209 + 210 + #endif /* CONFIG_CPU_CACHE_VIPT */
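The reentrancy rules in that highmem.c comment (preserve the displaced fixmap entry, restore it only when unwinding past an interrupted user, and at depth zero leave the final mapping installed for cheap reuse) can be modeled without any MMU involvement. In this toy single-CPU model the names are mine, a plain variable stands in for the per-CPU fixmap PTE, and locking, preemption control, and TLB flushes are elided:

```c
/* "slot" models the per-CPU fixmap PTE; 0 means empty. */
int slot;
int depth;	/* models kmap_high_l1_vipt_depth */

/* Install "pte".  The return value is what the matching vipt_unmap()
 * must receive back, mirroring the saved_pte out-parameter. */
int vipt_map(int pte)
{
	int saved;

	depth++;
	if (slot == pte) {
		saved = pte;	/* already installed: unmap won't restore */
	} else {
		saved = slot;	/* remember the displaced entry */
		slot = pte;	/* set_pte_ext + TLB flush elided */
	}
	return saved;
}

void vipt_unmap(int pte, int saved)
{
	depth--;
	if (depth != 0 && pte != saved)
		slot = saved;	/* restore for the interrupted user below */
	/* at depth 0 the mapping stays in place for flush-free reuse */
}
```

Nesting a second map/unmap pair inside the first (as an interrupt would) restores the outer mapping on the inner unmap, while the outermost unmap leaves its mapping installed.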
+10 -4
arch/arm/mm/mmu.c
··· 420 420 user_pgprot |= L_PTE_SHARED; 421 421 kern_pgprot |= L_PTE_SHARED; 422 422 vecs_pgprot |= L_PTE_SHARED; 423 + mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_S; 424 + mem_types[MT_DEVICE_WC].prot_pte |= L_PTE_SHARED; 425 + mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_S; 426 + mem_types[MT_DEVICE_CACHED].prot_pte |= L_PTE_SHARED; 423 427 mem_types[MT_MEMORY].prot_sect |= PMD_SECT_S; 424 428 mem_types[MT_MEMORY_NONCACHED].prot_sect |= PMD_SECT_S; 425 429 #endif ··· 1054 1050 pgd_t *pgd; 1055 1051 int i; 1056 1052 1057 - if (current->mm && current->mm->pgd) 1058 - pgd = current->mm->pgd; 1059 - else 1060 - pgd = init_mm.pgd; 1053 + /* 1054 + * We need to access to user-mode page tables here. For kernel threads 1055 + * we don't have any user-mode mappings so we use the context that we 1056 + * "borrowed". 1057 + */ 1058 + pgd = current->active_mm->pgd; 1061 1059 1062 1060 base_pmdval = PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | PMD_TYPE_SECT; 1063 1061 if (cpu_architecture() <= CPU_ARCH_ARMv5TEJ && !cpu_is_xscale())
+3 -3
arch/arm/plat-mxc/include/mach/board-mx31pdk.h arch/arm/plat-mxc/include/mach/board-mx31_3ds.h
··· 8 8 * published by the Free Software Foundation. 9 9 */ 10 10 11 - #ifndef __ASM_ARCH_MXC_BOARD_MX31PDK_H__ 12 - #define __ASM_ARCH_MXC_BOARD_MX31PDK_H__ 11 + #ifndef __ASM_ARCH_MXC_BOARD_MX31_3DS_H__ 12 + #define __ASM_ARCH_MXC_BOARD_MX31_3DS_H__ 13 13 14 14 /* Definitions for components on the Debug board */ 15 15 ··· 56 56 57 57 #define MXC_MAX_EXP_IO_LINES 16 58 58 59 - #endif /* __ASM_ARCH_MXC_BOARD_MX31PDK_H__ */ 59 + #endif /* __ASM_ARCH_MXC_BOARD_MX31_3DS_H__ */
+12 -21
arch/arm/plat-mxc/include/mach/mx51.h
··· 14 14 * FB100000 70000000 1M SPBA 0 15 15 * FB000000 73F00000 1M AIPS 1 16 16 * FB200000 83F00000 1M AIPS 2 17 - * FA100000 8FFFC000 16K TZIC (interrupt controller) 17 + * 8FFFC000 16K TZIC (interrupt controller) 18 18 * 90000000 256M CSD0 SDRAM/DDR 19 19 * A0000000 256M CSD1 SDRAM/DDR 20 20 * B0000000 128M CS0 Flash ··· 23 23 * C8000000 64M CS3 Flash 24 24 * CC000000 32M CS4 SRAM 25 25 * CE000000 32M CS5 SRAM 26 - * F9000000 CFFF0000 64K NFC (NAND Flash AXI) 26 + * CFFF0000 64K NFC (NAND Flash AXI) 27 27 * 28 28 */ 29 + 30 + /* 31 + * IROM 32 + */ 33 + #define MX51_IROM_BASE_ADDR 0x0 34 + #define MX51_IROM_SIZE SZ_64K 29 35 30 36 /* 31 37 * IRAM ··· 46 40 * NFC 47 41 */ 48 42 #define MX51_NFC_AXI_BASE_ADDR 0xCFFF0000 /* NAND flash AXI */ 49 - #define MX51_NFC_AXI_BASE_ADDR_VIRT 0xF9000000 50 43 #define MX51_NFC_AXI_SIZE SZ_64K 51 44 52 45 /* ··· 54 49 #define MX51_GPU_BASE_ADDR 0x20000000 55 50 #define MX51_GPU2D_BASE_ADDR 0xD0000000 56 51 57 - #define MX51_TZIC_BASE_ADDR 0x8FFFC000 58 - #define MX51_TZIC_BASE_ADDR_VIRT 0xFA100000 59 - #define MX51_TZIC_SIZE SZ_16K 52 + #define MX51_TZIC_BASE_ADDR_TO1 0x8FFFC000 53 + #define MX51_TZIC_BASE_ADDR 0xE0000000 60 54 61 55 #define MX51_DEBUG_BASE_ADDR 0x60000000 62 56 #define MX51_DEBUG_BASE_ADDR_VIRT 0xFA200000 ··· 236 232 #define MX51_IO_ADDRESS(x) \ 237 233 (void __iomem *) \ 238 234 (MX51_IS_MODULE(x, IRAM) ? MX51_IRAM_IO_ADDRESS(x) : \ 239 - MX51_IS_MODULE(x, TZIC) ? MX51_TZIC_IO_ADDRESS(x) : \ 240 235 MX51_IS_MODULE(x, DEBUG) ? MX51_DEBUG_IO_ADDRESS(x) : \ 241 236 MX51_IS_MODULE(x, SPBA0) ? MX51_SPBA0_IO_ADDRESS(x) : \ 242 237 MX51_IS_MODULE(x, AIPS1) ? MX51_AIPS1_IO_ADDRESS(x) : \ 243 238 MX51_IS_MODULE(x, AIPS2) ? MX51_AIPS2_IO_ADDRESS(x) : \
245 239 0xDEADBEEF) 246 240 247 241 /* ··· 247 245 */ 248 246 #define MX51_IRAM_IO_ADDRESS(x) \ 249 247 (((x) - MX51_IRAM_BASE_ADDR) + MX51_IRAM_BASE_ADDR_VIRT) 250 - 251 - #define MX51_TZIC_IO_ADDRESS(x) \ 252 - (((x) - MX51_TZIC_BASE_ADDR) + MX51_TZIC_BASE_ADDR_VIRT) 253 248 254 249 #define MX51_DEBUG_IO_ADDRESS(x) \ 255 250 (((x) - MX51_DEBUG_BASE_ADDR) + MX51_DEBUG_BASE_ADDR_VIRT) ··· 259 260 260 261 #define MX51_AIPS2_IO_ADDRESS(x) \ 261 262 (((x) - MX51_AIPS2_BASE_ADDR) + MX51_AIPS2_BASE_ADDR_VIRT) 262 - 263 - #define MX51_NFC_AXI_IO_ADDRESS(x) \ 264 - (((x) - MX51_NFC_AXI_BASE_ADDR) + MX51_NFC_AXI_BASE_ADDR_VIRT) 265 263 266 264 #define MX51_IS_MEM_DEVICE_NONSHARED(x) 0 ··· 439 443 440 444 #if !defined(__ASSEMBLY__) && !defined(__MXC_BOOT_UNCOMPRESS) 441 445 442 - extern unsigned int system_rev; 443 - 444 - static inline unsigned int mx51_revision(void) 445 - { 446 - return system_rev; 447 - } 446 + extern int mx51_revision(void); 448 447 #endif 449 448 450 449 #endif /* __ASM_ARCH_MXC_MX51_H__ */
+4
arch/arm/plat-mxc/include/mach/uncompress.h
··· 66 66 #define MX2X_UART1_BASE_ADDR 0x1000a000 67 67 #define MX3X_UART1_BASE_ADDR 0x43F90000 68 68 #define MX3X_UART2_BASE_ADDR 0x43F94000 69 + #define MX51_UART1_BASE_ADDR 0x73fbc000 69 70 70 71 static __inline__ void __arch_decomp_setup(unsigned long arch_id) 71 72 { ··· 101 100 break; 102 101 case MACH_TYPE_MAGX_ZN5: 103 102 uart_base = MX3X_UART2_BASE_ADDR; 103 + break; 104 + case MACH_TYPE_MX51_BABBAGE: 105 + uart_base = MX51_UART1_BASE_ADDR; 104 106 break; 105 107 default: 106 108 break;
+10 -21
arch/arm/vfp/vfpmodule.c
··· 428 428 static inline void vfp_pm_init(void) { } 429 429 #endif /* CONFIG_PM */ 430 430 431 - /* 432 - * Synchronise the hardware VFP state of a thread other than current with the 433 - * saved one. This function is used by the ptrace mechanism. 434 - */ 435 - #ifdef CONFIG_SMP 436 - void vfp_sync_hwstate(struct thread_info *thread) 437 - { 438 - } 439 - 440 - void vfp_flush_hwstate(struct thread_info *thread) 441 - { 442 - /* 443 - * On SMP systems, the VFP state is automatically saved at every 444 - * context switch. We mark the thread VFP state as belonging to a 445 - * non-existent CPU so that the saved one will be reloaded when 446 - * needed. 447 - */ 448 - thread->vfpstate.hard.cpu = NR_CPUS; 449 - } 450 - #else 451 431 void vfp_sync_hwstate(struct thread_info *thread) 452 432 { 453 433 unsigned int cpu = get_cpu(); ··· 470 490 last_VFP_context[cpu] = NULL; 471 491 } 472 492 493 + #ifdef CONFIG_SMP 494 + /* 495 + * For SMP we still have to take care of the case where the thread 496 + * migrates to another CPU and then back to the original CPU on which 497 + * the last VFP user is still the same thread. Mark the thread VFP 498 + * state as belonging to a non-existent CPU so that the saved one will 499 + * be reloaded in the above case. 500 + */ 501 + thread->vfpstate.hard.cpu = NR_CPUS; 502 + #endif 473 503 put_cpu(); 474 504 } 475 - #endif 476 505 477 506 #include <linux/smp.h> 478 507
+6 -2
arch/m68k/include/asm/atomic_mm.h
··· 148 148 static inline int atomic_sub_and_test(int i, atomic_t *v) 149 149 { 150 150 char c; 151 - __asm__ __volatile__("subl %2,%1; seq %0" : "=d" (c), "+m" (*v): "g" (i)); 151 + __asm__ __volatile__("subl %2,%1; seq %0" 152 + : "=d" (c), "+m" (*v) 153 + : "id" (i)); 152 154 return c != 0; 153 155 } 154 156 155 157 static inline int atomic_add_negative(int i, atomic_t *v) 156 158 { 157 159 char c; 158 - __asm__ __volatile__("addl %2,%1; smi %0" : "=d" (c), "+m" (*v): "g" (i)); 160 + __asm__ __volatile__("addl %2,%1; smi %0" 161 + : "=d" (c), "+m" (*v) 162 + : "id" (i)); 159 163 return c != 0; 160 164 } 161 165
+1 -3
arch/m68k/include/asm/sigcontext.h
··· 17 17 #ifndef __uClinux__ 18 18 # ifdef __mcoldfire__ 19 19 unsigned long sc_fpregs[2][2]; /* room for two fp registers */ 20 - unsigned long sc_fpcntl[3]; 21 - unsigned char sc_fpstate[16+6*8]; 22 20 # else 23 21 unsigned long sc_fpregs[2*3]; /* room for two fp registers */ 22 + # endif 24 23 unsigned long sc_fpcntl[3]; 25 24 unsigned char sc_fpstate[216]; 26 - # endif 27 25 #endif 28 26 }; 29 27
-40
arch/mips/alchemy/devboards/db1200/setup.c
··· 60 60 wmb(); 61 61 } 62 62 63 - /* use the hexleds to count the number of times the cpu has entered 64 - * wait, the dots to indicate whether the CPU is currently idle or 65 - * active (dots off = sleeping, dots on = working) for cases where 66 - * the number doesn't change for a long(er) period of time. 67 - */ 68 - static void db1200_wait(void) 69 - { 70 - __asm__(" .set push \n" 71 - " .set mips3 \n" 72 - " .set noreorder \n" 73 - " cache 0x14, 0(%0) \n" 74 - " cache 0x14, 32(%0) \n" 75 - " cache 0x14, 64(%0) \n" 76 - /* dots off: we're about to call wait */ 77 - " lui $26, 0xb980 \n" 78 - " ori $27, $0, 3 \n" 79 - " sb $27, 0x18($26) \n" 80 - " sync \n" 81 - " nop \n" 82 - " wait \n" 83 - " nop \n" 84 - " nop \n" 85 - " nop \n" 86 - " nop \n" 87 - " nop \n" 88 - /* dots on: there's work to do, increment cntr */ 89 - " lui $26, 0xb980 \n" 90 - " sb $0, 0x18($26) \n" 91 - " lui $26, 0xb9c0 \n" 92 - " lb $27, 0($26) \n" 93 - " addiu $27, $27, 1 \n" 94 - " sb $27, 0($26) \n" 95 - " sync \n" 96 - " .set pop \n" 97 - : : "r" (db1200_wait)); 98 - } 99 - 100 63 static int __init db1200_arch_init(void) 101 64 { 102 65 /* GPIO7 is low-level triggered CPLD cascade */ ··· 72 109 */ 73 110 irq_to_desc(DB1200_SD0_INSERT_INT)->status |= IRQ_NOAUTOEN; 74 111 irq_to_desc(DB1200_SD0_EJECT_INT)->status |= IRQ_NOAUTOEN; 75 - 76 - if (cpu_wait) 77 - cpu_wait = db1200_wait; 78 112 79 113 return 0; 80 114 }
+2 -1
arch/mips/ar7/platform.c
··· 168 168 .on = vlynq_on, 169 169 .off = vlynq_off, 170 170 }, 171 - .reset_bit = 26, 171 + .reset_bit = 16, 172 172 .gpio_bit = 19, 173 173 }; 174 174 ··· 600 600 } 601 601 602 602 if (ar7_has_high_cpmac()) { 603 + res = fixed_phy_add(PHY_POLL, cpmac_high.id, &fixed_phy_status); 603 604 if (!res) { 604 605 cpmac_get_mac(1, cpmac_high_data.dev_addr); 605 606
+147 -84
arch/mips/bcm63xx/boards/board_bcm963xx.c
··· 18 18 #include <asm/addrspace.h> 19 19 #include <bcm63xx_board.h> 20 20 #include <bcm63xx_cpu.h> 21 + #include <bcm63xx_dev_uart.h> 21 22 #include <bcm63xx_regs.h> 22 23 #include <bcm63xx_io.h> 23 24 #include <bcm63xx_dev_pci.h> ··· 41 40 .name = "96338GW", 42 41 .expected_cpu_id = 0x6338, 43 42 43 + .has_uart0 = 1, 44 44 .has_enet0 = 1, 45 45 .enet0 = { 46 46 .force_speed_100 = 1, ··· 84 82 .name = "96338W", 85 83 .expected_cpu_id = 0x6338, 86 84 85 + .has_uart0 = 1, 87 86 .has_enet0 = 1, 88 87 .enet0 = { 89 88 .force_speed_100 = 1, ··· 129 126 static struct board_info __initdata board_96345gw2 = { 130 127 .name = "96345GW2", 131 128 .expected_cpu_id = 0x6345, 129 + 130 + .has_uart0 = 1, 132 131 }; 133 132 #endif 134 133 ··· 142 137 .name = "96348R", 143 138 .expected_cpu_id = 0x6348, 144 139 140 + .has_uart0 = 1, 145 141 .has_enet0 = 1, 146 142 .has_pci = 1, 147 143 ··· 186 180 .name = "96348GW-10", 187 181 .expected_cpu_id = 0x6348, 188 182 183 + .has_uart0 = 1, 189 184 .has_enet0 = 1, 190 185 .has_enet1 = 1, 191 186 .has_pci = 1, ··· 246 239 .name = "96348GW-11", 247 240 .expected_cpu_id = 0x6348, 248 241 242 + .has_uart0 = 1, 249 243 .has_enet0 = 1, 250 244 .has_enet1 = 1, 251 245 .has_pci = 1, ··· 300 292 .name = "96348GW", 301 293 .expected_cpu_id = 0x6348, 302 294 295 + .has_uart0 = 1, 303 296 .has_enet0 = 1, 304 297 .has_enet1 = 1, 305 298 .has_pci = 1, ··· 358 349 .name = "F@ST2404", 359 350 .expected_cpu_id = 0x6348, 360 351 361 - .has_enet0 = 1, 362 - .has_enet1 = 1, 363 - .has_pci = 1, 352 + .has_uart0 = 1, 353 + .has_enet0 = 1, 354 + .has_enet1 = 1, 355 + .has_pci = 1, 364 356 365 357 .enet0 = { 366 358 .has_phy = 1, ··· 378 368 .has_ehci0 = 1, 379 369 }; 380 370 371 + static struct board_info __initdata board_rta1025w_16 = { 372 + .name = "RTA1025W_16", 373 + .expected_cpu_id = 0x6348, 374 + 375 + .has_enet0 = 1, 376 + .has_enet1 = 1, 377 + .has_pci = 1, 378 + 379 + .enet0 = { 380 + .has_phy = 1, 381 + .use_internal_phy = 1, 382 + }, 383 + .enet1 
= { 384 + .force_speed_100 = 1, 385 + .force_duplex_full = 1, 386 + }, 387 + }; 388 + 389 + 381 390 static struct board_info __initdata board_DV201AMR = { 382 391 .name = "DV201AMR", 383 392 .expected_cpu_id = 0x6348, 384 393 394 + .has_uart0 = 1, 385 395 .has_pci = 1, 386 396 .has_ohci0 = 1, 387 397 ··· 421 391 .name = "96348GW-A", 422 392 .expected_cpu_id = 0x6348, 423 393 394 + .has_uart0 = 1, 424 395 .has_enet0 = 1, 425 396 .has_enet1 = 1, 426 397 .has_pci = 1, ··· 447 416 .name = "96358VW", 448 417 .expected_cpu_id = 0x6358, 449 418 419 + .has_uart0 = 1, 450 420 .has_enet0 = 1, 451 421 .has_enet1 = 1, 452 422 .has_pci = 1, ··· 499 467 .name = "96358VW2", 500 468 .expected_cpu_id = 0x6358, 501 469 470 + .has_uart0 = 1, 502 471 .has_enet0 = 1, 503 472 .has_enet1 = 1, 504 473 .has_pci = 1, ··· 547 514 .name = "AGPF-S0", 548 515 .expected_cpu_id = 0x6358, 549 516 517 + .has_uart0 = 1, 550 518 .has_enet0 = 1, 551 519 .has_enet1 = 1, 552 520 .has_pci = 1, ··· 564 530 565 531 .has_ohci0 = 1, 566 532 .has_ehci0 = 1, 533 + }; 534 + 535 + static struct board_info __initdata board_DWVS0 = { 536 + .name = "DWV-S0", 537 + .expected_cpu_id = 0x6358, 538 + 539 + .has_enet0 = 1, 540 + .has_enet1 = 1, 541 + .has_pci = 1, 542 + 543 + .enet0 = { 544 + .has_phy = 1, 545 + .use_internal_phy = 1, 546 + }, 547 + 548 + .enet1 = { 549 + .force_speed_100 = 1, 550 + .force_duplex_full = 1, 551 + }, 552 + 553 + .has_ohci0 = 1, 567 554 }; 568 555 #endif 569 556 ··· 607 552 &board_FAST2404, 608 553 &board_DV201AMR, 609 554 &board_96348gw_a, 555 + &board_rta1025w_16, 610 556 #endif 611 557 612 558 #ifdef CONFIG_BCM63XX_CPU_6358 613 559 &board_96358vw, 614 560 &board_96358vw2, 615 561 &board_AGPFS0, 562 + &board_DWVS0, 616 563 #endif 617 564 }; 565 + 566 + /* 567 + * Register a sane SPROMv2 to make the on-board 568 + * bcm4318 WLAN work 569 + */ 570 + #ifdef CONFIG_SSB_PCIHOST 571 + static struct ssb_sprom bcm63xx_sprom = { 572 + .revision = 0x02, 573 + .board_rev = 0x17, 574 + .country_code 
= 0x0, 575 + .ant_available_bg = 0x3, 576 + .pa0b0 = 0x15ae, 577 + .pa0b1 = 0xfa85, 578 + .pa0b2 = 0xfe8d, 579 + .pa1b0 = 0xffff, 580 + .pa1b1 = 0xffff, 581 + .pa1b2 = 0xffff, 582 + .gpio0 = 0xff, 583 + .gpio1 = 0xff, 584 + .gpio2 = 0xff, 585 + .gpio3 = 0xff, 586 + .maxpwr_bg = 0x004c, 587 + .itssi_bg = 0x00, 588 + .boardflags_lo = 0x2848, 589 + .boardflags_hi = 0x0000, 590 + }; 591 + #endif 592 + 593 + /* 594 + * return board name for /proc/cpuinfo 595 + */ 596 + const char *board_get_name(void) 597 + { 598 + return board.name; 599 + } 600 + 601 + /* 602 + * register & return a new board mac address 603 + */ 604 + static int board_get_mac_address(u8 *mac) 605 + { 606 + u8 *p; 607 + int count; 608 + 609 + if (mac_addr_used >= nvram.mac_addr_count) { 610 + printk(KERN_ERR PFX "not enough mac address\n"); 611 + return -ENODEV; 612 + } 613 + 614 + memcpy(mac, nvram.mac_addr_base, ETH_ALEN); 615 + p = mac + ETH_ALEN - 1; 616 + count = mac_addr_used; 617 + 618 + while (count--) { 619 + do { 620 + (*p)++; 621 + if (*p != 0) 622 + break; 623 + p--; 624 + } while (p != mac); 625 + } 626 + 627 + if (p == mac) { 628 + printk(KERN_ERR PFX "unable to fetch mac address\n"); 629 + return -ENODEV; 630 + } 631 + 632 + mac_addr_used++; 633 + return 0; 634 + } 618 635 619 636 /* 620 637 * early init callback, read nvram data from flash and checksum it ··· 786 659 } 787 660 788 661 bcm_gpio_writel(val, GPIO_MODE_REG); 662 + 663 + /* Generate MAC address for WLAN and 664 + * register our SPROM */ 665 + #ifdef CONFIG_SSB_PCIHOST 666 + if (!board_get_mac_address(bcm63xx_sprom.il0mac)) { 667 + memcpy(bcm63xx_sprom.et0mac, bcm63xx_sprom.il0mac, ETH_ALEN); 668 + memcpy(bcm63xx_sprom.et1mac, bcm63xx_sprom.il0mac, ETH_ALEN); 669 + if (ssb_arch_set_fallback_sprom(&bcm63xx_sprom) < 0) 670 + printk(KERN_ERR "failed to register fallback SPROM\n"); 671 + } 672 + #endif 789 673 } 790 674 791 675 /* ··· 812 674 /* make sure we're running on expected cpu */ 813 675 if (bcm63xx_get_cpu_id() != 
board.expected_cpu_id) 814 676 panic("unexpected CPU for bcm963xx board"); 815 - } 816 - 817 - /* 818 - * return board name for /proc/cpuinfo 819 - */ 820 - const char *board_get_name(void) 821 - { 822 - return board.name; 823 - } 824 - 825 - /* 826 - * register & return a new board mac address 827 - */ 828 - static int board_get_mac_address(u8 *mac) 829 - { 830 - u8 *p; 831 - int count; 832 - 833 - if (mac_addr_used >= nvram.mac_addr_count) { 834 - printk(KERN_ERR PFX "not enough mac address\n"); 835 - return -ENODEV; 836 - } 837 - 838 - memcpy(mac, nvram.mac_addr_base, ETH_ALEN); 839 - p = mac + ETH_ALEN - 1; 840 - count = mac_addr_used; 841 - 842 - while (count--) { 843 - do { 844 - (*p)++; 845 - if (*p != 0) 846 - break; 847 - p--; 848 - } while (p != mac); 849 - } 850 - 851 - if (p == mac) { 852 - printk(KERN_ERR PFX "unable to fetch mac address\n"); 853 - return -ENODEV; 854 - } 855 - 856 - mac_addr_used++; 857 - return 0; 858 677 } 859 678 860 679 static struct mtd_partition mtd_partitions[] = { ··· 845 750 }, 846 751 }; 847 752 848 - /* 849 - * Register a sane SPROMv2 to make the on-board 850 - * bcm4318 WLAN work 851 - */ 852 - #ifdef CONFIG_SSB_PCIHOST 853 - static struct ssb_sprom bcm63xx_sprom = { 854 - .revision = 0x02, 855 - .board_rev = 0x17, 856 - .country_code = 0x0, 857 - .ant_available_bg = 0x3, 858 - .pa0b0 = 0x15ae, 859 - .pa0b1 = 0xfa85, 860 - .pa0b2 = 0xfe8d, 861 - .pa1b0 = 0xffff, 862 - .pa1b1 = 0xffff, 863 - .pa1b2 = 0xffff, 864 - .gpio0 = 0xff, 865 - .gpio1 = 0xff, 866 - .gpio2 = 0xff, 867 - .gpio3 = 0xff, 868 - .maxpwr_bg = 0x004c, 869 - .itssi_bg = 0x00, 870 - .boardflags_lo = 0x2848, 871 - .boardflags_hi = 0x0000, 872 - }; 873 - #endif 874 - 875 753 static struct gpio_led_platform_data bcm63xx_led_data; 876 754 877 755 static struct platform_device bcm63xx_gpio_leds = { ··· 860 792 { 861 793 u32 val; 862 794 795 + if (board.has_uart0) 796 + bcm63xx_uart_register(0); 797 + 798 + if (board.has_uart1) 799 + bcm63xx_uart_register(1); 800 + 
863 801 if (board.has_pccard) 864 802 bcm63xx_pcmcia_register(); 865 803 ··· 879 805 880 806 if (board.has_dsp) 881 807 bcm63xx_dsp_register(&board.dsp); 882 - 883 - /* Generate MAC address for WLAN and 884 - * register our SPROM */ 885 - #ifdef CONFIG_SSB_PCIHOST 886 - if (!board_get_mac_address(bcm63xx_sprom.il0mac)) { 887 - memcpy(bcm63xx_sprom.et0mac, bcm63xx_sprom.il0mac, ETH_ALEN); 888 - memcpy(bcm63xx_sprom.et1mac, bcm63xx_sprom.il0mac, ETH_ALEN); 889 - if (ssb_arch_set_fallback_sprom(&bcm63xx_sprom) < 0) 890 - printk(KERN_ERR "failed to register fallback SPROM\n"); 891 - } 892 - #endif 893 808 894 809 /* read base address of boot chip select (0) */ 895 810 if (BCMCPU_IS_6345())
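The board code removed above allocated MAC addresses by taking the NVRAM base address and advancing it once per address already handed out, carrying overflow into the more significant bytes and failing if the carry would run into the leading (OUI) bytes. A simplified user-space sketch of that scheme — note it resets the carry cursor on every step, unlike the kernel loop, which kept `p` where the last carry left it:

```c
#include <assert.h>

#define ETH_ALEN 6

/* Advance a MAC address "count" times, treating the trailing bytes
 * as a big-endian counter (models the removed board_get_mac_address()
 * loop; this is a standalone sketch, not the kernel function). */
static int mac_advance(unsigned char *mac, int count)
{
	while (count--) {
		unsigned char *p = mac + ETH_ALEN - 1;

		do {
			(*p)++;
			if (*p != 0)
				break;	/* no wrap, increment done */
			p--;		/* byte wrapped, carry left */
		} while (p != mac);

		if (p == mac)
			return -1;	/* carry reached the OUI bytes */
	}
	return 0;
}
```

The inclusive carry check is why the original printed "unable to fetch mac address" once the usable range was exhausted rather than silently rolling into a different vendor prefix.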
+5
arch/mips/bcm63xx/cpu.c
··· 36 36 [RSET_TIMER] = BCM_6338_TIMER_BASE, 37 37 [RSET_WDT] = BCM_6338_WDT_BASE, 38 38 [RSET_UART0] = BCM_6338_UART0_BASE, 39 + [RSET_UART1] = BCM_6338_UART1_BASE, 39 40 [RSET_GPIO] = BCM_6338_GPIO_BASE, 40 41 [RSET_SPI] = BCM_6338_SPI_BASE, 41 42 [RSET_OHCI0] = BCM_6338_OHCI0_BASE, ··· 73 72 [RSET_TIMER] = BCM_6345_TIMER_BASE, 74 73 [RSET_WDT] = BCM_6345_WDT_BASE, 75 74 [RSET_UART0] = BCM_6345_UART0_BASE, 75 + [RSET_UART1] = BCM_6345_UART1_BASE, 76 76 [RSET_GPIO] = BCM_6345_GPIO_BASE, 77 77 [RSET_SPI] = BCM_6345_SPI_BASE, 78 78 [RSET_UDC0] = BCM_6345_UDC0_BASE, ··· 111 109 [RSET_TIMER] = BCM_6348_TIMER_BASE, 112 110 [RSET_WDT] = BCM_6348_WDT_BASE, 113 111 [RSET_UART0] = BCM_6348_UART0_BASE, 112 + [RSET_UART1] = BCM_6348_UART1_BASE, 114 113 [RSET_GPIO] = BCM_6348_GPIO_BASE, 115 114 [RSET_SPI] = BCM_6348_SPI_BASE, 116 115 [RSET_OHCI0] = BCM_6348_OHCI0_BASE, ··· 153 150 [RSET_TIMER] = BCM_6358_TIMER_BASE, 154 151 [RSET_WDT] = BCM_6358_WDT_BASE, 155 152 [RSET_UART0] = BCM_6358_UART0_BASE, 153 + [RSET_UART1] = BCM_6358_UART1_BASE, 156 154 [RSET_GPIO] = BCM_6358_GPIO_BASE, 157 155 [RSET_SPI] = BCM_6358_SPI_BASE, 158 156 [RSET_OHCI0] = BCM_6358_OHCI0_BASE, ··· 174 170 static const int bcm96358_irqs[] = { 175 171 [IRQ_TIMER] = BCM_6358_TIMER_IRQ, 176 172 [IRQ_UART0] = BCM_6358_UART0_IRQ, 173 + [IRQ_UART1] = BCM_6358_UART1_IRQ, 177 174 [IRQ_DSL] = BCM_6358_DSL_IRQ, 178 175 [IRQ_ENET0] = BCM_6358_ENET0_IRQ, 179 176 [IRQ_ENET1] = BCM_6358_ENET1_IRQ,
+50 -16
arch/mips/bcm63xx/dev-uart.c
··· 11 11 #include <linux/platform_device.h> 12 12 #include <bcm63xx_cpu.h> 13 13 14 - static struct resource uart_resources[] = { 14 + static struct resource uart0_resources[] = { 15 15 { 16 - .start = -1, /* filled at runtime */ 17 - .end = -1, /* filled at runtime */ 16 + /* start & end filled at runtime */ 18 17 .flags = IORESOURCE_MEM, 19 18 }, 20 19 { 21 - .start = -1, /* filled at runtime */ 20 + /* start filled at runtime */ 22 21 .flags = IORESOURCE_IRQ, 23 22 }, 24 23 }; 25 24 26 - static struct platform_device bcm63xx_uart_device = { 27 - .name = "bcm63xx_uart", 28 - .id = 0, 29 - .num_resources = ARRAY_SIZE(uart_resources), 30 - .resource = uart_resources, 25 + static struct resource uart1_resources[] = { 26 + { 27 + /* start & end filled at runtime */ 28 + .flags = IORESOURCE_MEM, 29 + }, 30 + { 31 + /* start filled at runtime */ 32 + .flags = IORESOURCE_IRQ, 33 + }, 31 34 }; 32 35 33 - int __init bcm63xx_uart_register(void) 36 + static struct platform_device bcm63xx_uart_devices[] = { 37 + { 38 + .name = "bcm63xx_uart", 39 + .id = 0, 40 + .num_resources = ARRAY_SIZE(uart0_resources), 41 + .resource = uart0_resources, 42 + }, 43 + 44 + { 45 + .name = "bcm63xx_uart", 46 + .id = 1, 47 + .num_resources = ARRAY_SIZE(uart1_resources), 48 + .resource = uart1_resources, 49 + } 50 + }; 51 + 52 + int __init bcm63xx_uart_register(unsigned int id) 34 53 { 35 - uart_resources[0].start = bcm63xx_regset_address(RSET_UART0); 36 - uart_resources[0].end = uart_resources[0].start; 37 - uart_resources[0].end += RSET_UART_SIZE - 1; 38 - uart_resources[1].start = bcm63xx_get_irq_number(IRQ_UART0); 39 - return platform_device_register(&bcm63xx_uart_device); 54 + if (id >= ARRAY_SIZE(bcm63xx_uart_devices)) 55 + return -ENODEV; 56 + 57 + if (id == 1 && !BCMCPU_IS_6358()) 58 + return -ENODEV; 59 + 60 + if (id == 0) { 61 + uart0_resources[0].start = bcm63xx_regset_address(RSET_UART0); 62 + uart0_resources[0].end = uart0_resources[0].start + 63 + RSET_UART_SIZE - 1; 64 + 
uart0_resources[1].start = bcm63xx_get_irq_number(IRQ_UART0); 65 + } 66 + 67 + if (id == 1) { 68 + uart1_resources[0].start = bcm63xx_regset_address(RSET_UART1); 69 + uart1_resources[0].end = uart1_resources[0].start + 70 + RSET_UART_SIZE - 1; 71 + uart1_resources[1].start = bcm63xx_get_irq_number(IRQ_UART1); 72 + } 73 + 74 + return platform_device_register(&bcm63xx_uart_devices[id]); 40 75 } 41 - arch_initcall(bcm63xx_uart_register);
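The reworked registrar above fills each platform device's MEM and IRQ resources at runtime from the per-CPU register layout, with the MEM resource's `end` inclusive. A user-space sketch of that fill pattern — the `struct resource` stand-in and the size constant here are illustrative, not the real bcm63xx definitions:

```c
#include <assert.h>

/* Minimal stand-in for the kernel's struct resource. */
struct resource {
	unsigned long start;
	unsigned long end;
	unsigned long flags;
};

#define IORESOURCE_MEM	0x1
#define IORESOURCE_IRQ	0x2
#define RSET_UART_SIZE	24	/* hypothetical register-set size */

/* Fill a MEM + IRQ resource pair the way bcm63xx_uart_register()
 * does: the MEM end is inclusive, so it is start + size - 1. */
static void uart_fill_resources(struct resource res[2],
				unsigned long base, int irq)
{
	res[0].start = base;
	res[0].end   = base + RSET_UART_SIZE - 1;
	res[0].flags = IORESOURCE_MEM;
	res[1].start = irq;
	res[1].flags = IORESOURCE_IRQ;
}
```

Dropping the `arch_initcall()` and taking an `id` lets board setup register only the UARTs the board actually wires up, as the `has_uart0`/`has_uart1` checks added to board_bcm963xx.c show.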
+2 -2
arch/mips/bcm63xx/gpio.c
··· 125 125 126 126 int __init bcm63xx_gpio_init(void) 127 127 { 128 + gpio_out_low = bcm_gpio_readl(GPIO_DATA_LO_REG); 129 + gpio_out_high = bcm_gpio_readl(GPIO_DATA_HI_REG); 128 130 bcm63xx_gpio_chip.ngpio = bcm63xx_gpio_count(); 129 131 pr_info("registering %d GPIOs\n", bcm63xx_gpio_chip.ngpio); 130 132 131 133 return gpiochip_add(&bcm63xx_gpio_chip); 132 134 } 133 - 134 - arch_initcall(bcm63xx_gpio_init);
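The two lines added to bcm63xx_gpio_init() seed the driver's shadow copies of the GPIO output registers from the hardware before any write, so the first read-modify-write does not clobber pins the bootloader already drove. A user-space sketch of that shadow-register pattern — the fake register and accessors are stand-ins for the memory-mapped GPIO_DATA registers:

```c
#include <assert.h>
#include <stdint.h>

/* Fake GPIO data register, standing in for GPIO_DATA_LO_REG;
 * pretend the bootloader left pin 6 driven high. */
static uint32_t fake_data_reg = 0x00000040;

static uint32_t gpio_readl(void)       { return fake_data_reg; }
static void     gpio_writel(uint32_t v) { fake_data_reg = v; }

static uint32_t gpio_out_shadow;

/* Seed the shadow from hardware once at init, as the patch does.
 * Without this read the shadow starts at 0 and the first write
 * below would drop pin 6. */
static void gpio_init(void)
{
	gpio_out_shadow = gpio_readl();
}

/* Read-modify-write through the shadow copy. */
static void gpio_set(unsigned int pin, int val)
{
	if (val)
		gpio_out_shadow |= 1u << pin;
	else
		gpio_out_shadow &= ~(1u << pin);
	gpio_writel(gpio_out_shadow);
}
```

Drivers keep a shadow rather than reading the register back on every set because some GPIO data registers read the pin state, not the last value written.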
+1 -81
arch/mips/cavium-octeon/setup.c
··· 45 45 extern void pci_console_init(const char *arg); 46 46 #endif 47 47 48 - #ifdef CONFIG_CAVIUM_RESERVE32 49 - extern uint64_t octeon_reserve32_memory; 50 - #endif 51 48 static unsigned long long MAX_MEMORY = 512ull << 20; 52 49 53 50 struct octeon_boot_descriptor *octeon_boot_desc_ptr; ··· 183 186 write_octeon_c0_dcacheerr(0); 184 187 } 185 188 186 - #ifdef CONFIG_CAVIUM_RESERVE32_USE_WIRED_TLB 187 - /** 188 - * Called on every core to setup the wired tlb entry needed 189 - * if CONFIG_CAVIUM_RESERVE32_USE_WIRED_TLB is set. 190 - * 191 - */ 192 - static void octeon_hal_setup_per_cpu_reserved32(void *unused) 193 - { 194 - /* 195 - * The config has selected to wire the reserve32 memory for all 196 - * userspace applications. We need to put a wired TLB entry in for each 197 - * 512MB of reserve32 memory. We only handle double 256MB pages here, 198 - * so reserve32 must be multiple of 512MB. 199 - */ 200 - uint32_t size = CONFIG_CAVIUM_RESERVE32; 201 - uint32_t entrylo0 = 202 - 0x7 | ((octeon_reserve32_memory & ((1ul << 40) - 1)) >> 6); 203 - uint32_t entrylo1 = entrylo0 + (256 << 14); 204 - uint32_t entryhi = (0x80000000UL - (CONFIG_CAVIUM_RESERVE32 << 20)); 205 - while (size >= 512) { 206 - #if 0 207 - pr_info("CPU%d: Adding double wired TLB entry for 0x%lx\n", 208 - smp_processor_id(), entryhi); 209 - #endif 210 - add_wired_entry(entrylo0, entrylo1, entryhi, PM_256M); 211 - entrylo0 += 512 << 14; 212 - entrylo1 += 512 << 14; 213 - entryhi += 512 << 20; 214 - size -= 512; 215 - } 216 - } 217 - #endif /* CONFIG_CAVIUM_RESERVE32_USE_WIRED_TLB */ 218 - 219 - /** 220 - * Called to release the named block which was used to made sure 221 - * that nobody used the memory for something else during 222 - * init. Now we'll free it so userspace apps can use this 223 - * memory region with bootmem_alloc. 224 - * 225 - * This function is called only once from prom_free_prom_memory(). 
226 - */ 227 - void octeon_hal_setup_reserved32(void) 228 - { 229 - #ifdef CONFIG_CAVIUM_RESERVE32_USE_WIRED_TLB 230 - on_each_cpu(octeon_hal_setup_per_cpu_reserved32, NULL, 0, 1); 231 - #endif 232 - } 233 - 234 189 /** 235 190 * Reboot Octeon 236 191 * ··· 242 293 243 294 octeon_kill_core(NULL); 244 295 } 245 - 246 - #if 0 247 - /** 248 - * Platform time init specifics. 249 - * Returns 250 - */ 251 - void __init plat_time_init(void) 252 - { 253 - /* Nothing special here, but we are required to have one */ 254 - } 255 - 256 - #endif 257 296 258 297 /** 259 298 * Handle all the error condition interrupts that might occur. ··· 439 502 * memory when it is getting memory from the 440 503 * bootloader. Later, after the memory allocations are 441 504 * complete, the reserve32 will be freed. 442 - */ 443 - #ifdef CONFIG_CAVIUM_RESERVE32_USE_WIRED_TLB 444 - if (CONFIG_CAVIUM_RESERVE32 & 0x1ff) 445 - pr_err("CAVIUM_RESERVE32 isn't a multiple of 512MB. " 446 - "This is required if CAVIUM_RESERVE32_USE_WIRED_TLB " 447 - "is set\n"); 448 - else 449 - addr = cvmx_bootmem_phy_named_block_alloc(CONFIG_CAVIUM_RESERVE32 << 20, 450 - 0, 0, 512 << 20, 451 - "CAVIUM_RESERVE32", 0); 452 - #else 453 - /* 505 + * 454 506 * Allocate memory for RESERVED32 aligned on 2MB boundary. This 455 507 * is in case we later use hugetlb entries with it. 456 508 */ 457 509 addr = cvmx_bootmem_phy_named_block_alloc(CONFIG_CAVIUM_RESERVE32 << 20, 458 510 0, 0, 2 << 20, 459 511 "CAVIUM_RESERVE32", 0); 460 - #endif 461 512 if (addr < 0) 462 513 pr_err("Failed to allocate CAVIUM_RESERVE32 memory area\n"); 463 514 else ··· 742 817 panic("Unable to request_irq(OCTEON_IRQ_RML)\n"); 743 818 } 744 819 #endif 745 - 746 - /* This call is here so that it is performed after any TLB 747 - initializations. It needs to be after these in case the 748 - CONFIG_CAVIUM_RESERVE32_USE_WIRED_TLB option is set */ 749 - octeon_hal_setup_reserved32(); 750 820 }
-8
arch/mips/cavium-octeon/smp.c
··· 279 279 uint32_t avail_coremask; 280 280 struct cvmx_bootmem_named_block_desc *block_desc; 281 281 282 - #ifdef CONFIG_CAVIUM_OCTEON_WATCHDOG 283 - /* Disable the watchdog */ 284 - cvmx_ciu_wdogx_t ciu_wdog; 285 - ciu_wdog.u64 = cvmx_read_csr(CVMX_CIU_WDOGX(cpu)); 286 - ciu_wdog.s.mode = 0; 287 - cvmx_write_csr(CVMX_CIU_WDOGX(cpu), ciu_wdog.u64); 288 - #endif 289 - 290 282 while (per_cpu(cpu_state, cpu) != CPU_DEAD) 291 283 cpu_relax(); 292 284
+506 -206
arch/mips/configs/bigsur_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.26-rc8 4 - # Wed Jul 2 17:02:55 2008 3 + # Linux kernel version: 2.6.34-rc3 4 + # Sat Apr 3 16:32:11 2010 5 5 # 6 6 CONFIG_MIPS=y 7 7 ··· 9 9 # Machine selection 10 10 # 11 11 # CONFIG_MACH_ALCHEMY is not set 12 + # CONFIG_AR7 is not set 12 13 # CONFIG_BCM47XX is not set 14 + # CONFIG_BCM63XX is not set 13 15 # CONFIG_MIPS_COBALT is not set 14 16 # CONFIG_MACH_DECSTATION is not set 15 17 # CONFIG_MACH_JAZZ is not set 16 18 # CONFIG_LASAT is not set 17 - # CONFIG_LEMOTE_FULONG is not set 19 + # CONFIG_MACH_LOONGSON is not set 18 20 # CONFIG_MIPS_MALTA is not set 19 21 # CONFIG_MIPS_SIM is not set 20 - # CONFIG_MARKEINS is not set 22 + # CONFIG_NEC_MARKEINS is not set 21 23 # CONFIG_MACH_VR41XX is not set 24 + # CONFIG_NXP_STB220 is not set 25 + # CONFIG_NXP_STB225 is not set 22 26 # CONFIG_PNX8550_JBS is not set 23 27 # CONFIG_PNX8550_STB810 is not set 24 28 # CONFIG_PMC_MSP is not set 25 29 # CONFIG_PMC_YOSEMITE is not set 30 + # CONFIG_POWERTV is not set 26 31 # CONFIG_SGI_IP22 is not set 27 32 # CONFIG_SGI_IP27 is not set 28 33 # CONFIG_SGI_IP28 is not set ··· 41 36 # CONFIG_SIBYTE_SENTOSA is not set 42 37 CONFIG_SIBYTE_BIGSUR=y 43 38 # CONFIG_SNI_RM is not set 44 - # CONFIG_TOSHIBA_JMR3927 is not set 45 - # CONFIG_TOSHIBA_RBTX4927 is not set 46 - # CONFIG_TOSHIBA_RBTX4938 is not set 39 + # CONFIG_MACH_TX39XX is not set 40 + # CONFIG_MACH_TX49XX is not set 41 + # CONFIG_MIKROTIK_RB532 is not set 47 42 # CONFIG_WR_PPMC is not set 43 + # CONFIG_CAVIUM_OCTEON_SIMULATOR is not set 44 + # CONFIG_CAVIUM_OCTEON_REFERENCE_BOARD is not set 45 + # CONFIG_ALCHEMY_GPIO_INDIRECT is not set 48 46 CONFIG_SIBYTE_BCM1x80=y 49 47 CONFIG_SIBYTE_SB1xxx_SOC=y 50 48 # CONFIG_CPU_SB1_PASS_1 is not set ··· 56 48 # CONFIG_CPU_SB1_PASS_4 is not set 57 49 # CONFIG_CPU_SB1_PASS_2_112x is not set 58 50 # CONFIG_CPU_SB1_PASS_3 is not set 59 - # CONFIG_SIMULATION is not set 60 51 # 
CONFIG_SB1_CEX_ALWAYS_FATAL is not set 61 52 # CONFIG_SB1_CERR_STALL is not set 62 - CONFIG_SIBYTE_CFE=y 63 53 # CONFIG_SIBYTE_CFE_CONSOLE is not set 64 54 # CONFIG_SIBYTE_BUS_WATCHER is not set 65 55 # CONFIG_SIBYTE_TBPROF is not set 66 56 CONFIG_SIBYTE_HAS_ZBUS_PROFILING=y 57 + CONFIG_LOONGSON_UART_BASE=y 67 58 CONFIG_RWSEM_GENERIC_SPINLOCK=y 68 59 # CONFIG_ARCH_HAS_ILOG2_U32 is not set 69 60 # CONFIG_ARCH_HAS_ILOG2_U64 is not set ··· 73 66 CONFIG_GENERIC_CLOCKEVENTS=y 74 67 CONFIG_GENERIC_TIME=y 75 68 CONFIG_GENERIC_CMOS_UPDATE=y 76 - CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y 77 - # CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ is not set 69 + CONFIG_SCHED_OMIT_FRAME_POINTER=y 70 + CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y 78 71 CONFIG_CEVT_BCM1480=y 79 72 CONFIG_CSRC_BCM1480=y 80 73 CONFIG_CFE=y 81 74 CONFIG_DMA_COHERENT=y 82 - CONFIG_EARLY_PRINTK=y 83 75 CONFIG_SYS_HAS_EARLY_PRINTK=y 84 - # CONFIG_HOTPLUG_CPU is not set 85 76 # CONFIG_NO_IOPORT is not set 86 77 CONFIG_CPU_BIG_ENDIAN=y 87 78 # CONFIG_CPU_LITTLE_ENDIAN is not set ··· 93 88 # 94 89 # CPU selection 95 90 # 96 - # CONFIG_CPU_LOONGSON2 is not set 91 + # CONFIG_CPU_LOONGSON2E is not set 92 + # CONFIG_CPU_LOONGSON2F is not set 97 93 # CONFIG_CPU_MIPS32_R1 is not set 98 94 # CONFIG_CPU_MIPS32_R2 is not set 99 95 # CONFIG_CPU_MIPS64_R1 is not set ··· 107 101 # CONFIG_CPU_TX49XX is not set 108 102 # CONFIG_CPU_R5000 is not set 109 103 # CONFIG_CPU_R5432 is not set 104 + # CONFIG_CPU_R5500 is not set 110 105 # CONFIG_CPU_R6000 is not set 111 106 # CONFIG_CPU_NEVADA is not set 112 107 # CONFIG_CPU_R8000 is not set ··· 115 108 # CONFIG_CPU_RM7000 is not set 116 109 # CONFIG_CPU_RM9000 is not set 117 110 CONFIG_CPU_SB1=y 111 + # CONFIG_CPU_CAVIUM_OCTEON is not set 118 112 CONFIG_SYS_HAS_CPU_SB1=y 119 113 CONFIG_WEAK_ORDERING=y 120 114 CONFIG_SYS_SUPPORTS_32BIT_KERNEL=y ··· 131 123 CONFIG_PAGE_SIZE_4KB=y 132 124 # CONFIG_PAGE_SIZE_8KB is not set 133 125 # CONFIG_PAGE_SIZE_16KB is not set 126 + # CONFIG_PAGE_SIZE_32KB is not 
set 134 127 # CONFIG_PAGE_SIZE_64KB is not set 135 128 # CONFIG_SIBYTE_DMA_PAGEOPS is not set 136 129 CONFIG_MIPS_MT_DISABLED=y 137 130 # CONFIG_MIPS_MT_SMP is not set 138 131 # CONFIG_MIPS_MT_SMTC is not set 132 + # CONFIG_ARCH_PHYS_ADDR_T_64BIT is not set 139 133 CONFIG_CPU_HAS_SYNC=y 140 134 CONFIG_GENERIC_HARDIRQS=y 141 135 CONFIG_GENERIC_IRQ_PROBE=y ··· 152 142 # CONFIG_SPARSEMEM_MANUAL is not set 153 143 CONFIG_FLATMEM=y 154 144 CONFIG_FLAT_NODE_MEM_MAP=y 155 - # CONFIG_SPARSEMEM_STATIC is not set 156 - # CONFIG_SPARSEMEM_VMEMMAP_ENABLE is not set 157 145 CONFIG_PAGEFLAGS_EXTENDED=y 158 146 CONFIG_SPLIT_PTLOCK_CPUS=4 159 - CONFIG_RESOURCES_64BIT=y 147 + CONFIG_PHYS_ADDR_T_64BIT=y 160 148 CONFIG_ZONE_DMA_FLAG=0 161 149 CONFIG_VIRT_TO_BUS=y 150 + # CONFIG_KSM is not set 151 + CONFIG_DEFAULT_MMAP_MIN_ADDR=4096 162 152 CONFIG_SMP=y 163 153 CONFIG_SYS_SUPPORTS_SMP=y 164 154 CONFIG_NR_CPUS_DEFAULT_4=y 165 155 CONFIG_NR_CPUS=4 166 - # CONFIG_MIPS_CMP is not set 167 156 CONFIG_TICK_ONESHOT=y 168 157 CONFIG_NO_HZ=y 169 158 CONFIG_HIGH_RES_TIMERS=y ··· 184 175 CONFIG_LOCKDEP_SUPPORT=y 185 176 CONFIG_STACKTRACE_SUPPORT=y 186 177 CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config" 178 + CONFIG_CONSTRUCTORS=y 187 179 188 180 # 189 181 # General setup ··· 198 188 CONFIG_SYSVIPC=y 199 189 CONFIG_SYSVIPC_SYSCTL=y 200 190 CONFIG_POSIX_MQUEUE=y 191 + CONFIG_POSIX_MQUEUE_SYSCTL=y 201 192 CONFIG_BSD_PROCESS_ACCT=y 202 193 CONFIG_BSD_PROCESS_ACCT_V3=y 203 194 CONFIG_TASKSTATS=y ··· 206 195 CONFIG_TASK_XACCT=y 207 196 CONFIG_TASK_IO_ACCOUNTING=y 208 197 CONFIG_AUDIT=y 198 + 199 + # 200 + # RCU Subsystem 201 + # 202 + CONFIG_TREE_RCU=y 203 + # CONFIG_TREE_PREEMPT_RCU is not set 204 + # CONFIG_TINY_RCU is not set 205 + # CONFIG_RCU_TRACE is not set 206 + CONFIG_RCU_FANOUT=64 207 + # CONFIG_RCU_FANOUT_EXACT is not set 208 + # CONFIG_RCU_FAST_NO_HZ is not set 209 + # CONFIG_TREE_RCU_TRACE is not set 209 210 CONFIG_IKCONFIG=y 210 211 CONFIG_IKCONFIG_PROC=y 211 212 
CONFIG_LOG_BUF_SHIFT=16 212 213 # CONFIG_CGROUPS is not set 213 - CONFIG_GROUP_SCHED=y 214 - CONFIG_FAIR_GROUP_SCHED=y 215 - # CONFIG_RT_GROUP_SCHED is not set 216 - CONFIG_USER_SCHED=y 217 - # CONFIG_CGROUP_SCHED is not set 218 - CONFIG_SYSFS_DEPRECATED=y 219 - CONFIG_SYSFS_DEPRECATED_V2=y 214 + # CONFIG_SYSFS_DEPRECATED_V2 is not set 220 215 CONFIG_RELAY=y 221 - # CONFIG_NAMESPACES is not set 216 + CONFIG_NAMESPACES=y 217 + CONFIG_UTS_NS=y 218 + CONFIG_IPC_NS=y 219 + CONFIG_USER_NS=y 220 + CONFIG_PID_NS=y 221 + CONFIG_NET_NS=y 222 222 CONFIG_BLK_DEV_INITRD=y 223 223 CONFIG_INITRAMFS_SOURCE="" 224 + CONFIG_RD_GZIP=y 225 + # CONFIG_RD_BZIP2 is not set 226 + # CONFIG_RD_LZMA is not set 227 + # CONFIG_RD_LZO is not set 224 228 # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 225 229 CONFIG_SYSCTL=y 230 + CONFIG_ANON_INODES=y 226 231 CONFIG_EMBEDDED=y 227 232 # CONFIG_SYSCTL_SYSCALL is not set 228 233 CONFIG_KALLSYMS=y ··· 249 222 CONFIG_BUG=y 250 223 CONFIG_ELF_CORE=y 251 224 # CONFIG_PCSPKR_PLATFORM is not set 252 - CONFIG_COMPAT_BRK=y 253 225 CONFIG_BASE_FULL=y 254 226 CONFIG_FUTEX=y 255 - CONFIG_ANON_INODES=y 256 227 CONFIG_EPOLL=y 257 228 CONFIG_SIGNALFD=y 258 229 CONFIG_TIMERFD=y 259 230 CONFIG_EVENTFD=y 260 231 CONFIG_SHMEM=y 232 + CONFIG_AIO=y 233 + 234 + # 235 + # Kernel Performance Events And Counters 236 + # 261 237 CONFIG_VM_EVENT_COUNTERS=y 238 + CONFIG_PCI_QUIRKS=y 239 + CONFIG_COMPAT_BRK=y 262 240 CONFIG_SLAB=y 263 241 # CONFIG_SLUB is not set 264 242 # CONFIG_SLOB is not set 265 243 # CONFIG_PROFILING is not set 266 - # CONFIG_MARKERS is not set 267 244 CONFIG_HAVE_OPROFILE=y 268 - # CONFIG_HAVE_KPROBES is not set 269 - # CONFIG_HAVE_KRETPROBES is not set 270 - # CONFIG_HAVE_DMA_ATTRS is not set 271 - CONFIG_PROC_PAGE_MONITOR=y 245 + CONFIG_HAVE_SYSCALL_WRAPPERS=y 246 + CONFIG_USE_GENERIC_SMP_HELPERS=y 247 + 248 + # 249 + # GCOV-based kernel profiling 250 + # 251 + # CONFIG_SLOW_WORK is not set 252 + CONFIG_HAVE_GENERIC_DMA_COHERENT=y 272 253 CONFIG_SLABINFO=y 
273 254 CONFIG_RT_MUTEXES=y 274 - # CONFIG_TINY_SHMEM is not set 275 255 CONFIG_BASE_SMALL=0 276 256 CONFIG_MODULES=y 277 257 # CONFIG_MODULE_FORCE_LOAD is not set ··· 286 252 # CONFIG_MODULE_FORCE_UNLOAD is not set 287 253 CONFIG_MODVERSIONS=y 288 254 CONFIG_MODULE_SRCVERSION_ALL=y 289 - CONFIG_KMOD=y 290 255 CONFIG_STOP_MACHINE=y 291 256 CONFIG_BLOCK=y 292 - # CONFIG_BLK_DEV_IO_TRACE is not set 293 257 # CONFIG_BLK_DEV_BSG is not set 258 + # CONFIG_BLK_DEV_INTEGRITY is not set 294 259 CONFIG_BLOCK_COMPAT=y 295 260 296 261 # 297 262 # IO Schedulers 298 263 # 299 264 CONFIG_IOSCHED_NOOP=y 300 - CONFIG_IOSCHED_AS=y 301 265 CONFIG_IOSCHED_DEADLINE=y 302 266 CONFIG_IOSCHED_CFQ=y 303 - CONFIG_DEFAULT_AS=y 304 267 # CONFIG_DEFAULT_DEADLINE is not set 305 - # CONFIG_DEFAULT_CFQ is not set 268 + CONFIG_DEFAULT_CFQ=y 306 269 # CONFIG_DEFAULT_NOOP is not set 307 - CONFIG_DEFAULT_IOSCHED="anticipatory" 308 - CONFIG_CLASSIC_RCU=y 270 + CONFIG_DEFAULT_IOSCHED="cfq" 271 + # CONFIG_INLINE_SPIN_TRYLOCK is not set 272 + # CONFIG_INLINE_SPIN_TRYLOCK_BH is not set 273 + # CONFIG_INLINE_SPIN_LOCK is not set 274 + # CONFIG_INLINE_SPIN_LOCK_BH is not set 275 + # CONFIG_INLINE_SPIN_LOCK_IRQ is not set 276 + # CONFIG_INLINE_SPIN_LOCK_IRQSAVE is not set 277 + CONFIG_INLINE_SPIN_UNLOCK=y 278 + # CONFIG_INLINE_SPIN_UNLOCK_BH is not set 279 + CONFIG_INLINE_SPIN_UNLOCK_IRQ=y 280 + # CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE is not set 281 + # CONFIG_INLINE_READ_TRYLOCK is not set 282 + # CONFIG_INLINE_READ_LOCK is not set 283 + # CONFIG_INLINE_READ_LOCK_BH is not set 284 + # CONFIG_INLINE_READ_LOCK_IRQ is not set 285 + # CONFIG_INLINE_READ_LOCK_IRQSAVE is not set 286 + CONFIG_INLINE_READ_UNLOCK=y 287 + # CONFIG_INLINE_READ_UNLOCK_BH is not set 288 + CONFIG_INLINE_READ_UNLOCK_IRQ=y 289 + # CONFIG_INLINE_READ_UNLOCK_IRQRESTORE is not set 290 + # CONFIG_INLINE_WRITE_TRYLOCK is not set 291 + # CONFIG_INLINE_WRITE_LOCK is not set 292 + # CONFIG_INLINE_WRITE_LOCK_BH is not set 293 + # 
CONFIG_INLINE_WRITE_LOCK_IRQ is not set 294 + # CONFIG_INLINE_WRITE_LOCK_IRQSAVE is not set 295 + CONFIG_INLINE_WRITE_UNLOCK=y 296 + # CONFIG_INLINE_WRITE_UNLOCK_BH is not set 297 + CONFIG_INLINE_WRITE_UNLOCK_IRQ=y 298 + # CONFIG_INLINE_WRITE_UNLOCK_IRQRESTORE is not set 299 + CONFIG_MUTEX_SPIN_ON_OWNER=y 300 + # CONFIG_FREEZER is not set 309 301 310 302 # 311 303 # Bus options (PCI, PCMCIA, EISA, ISA, TC) ··· 340 280 CONFIG_PCI=y 341 281 CONFIG_PCI_DOMAINS=y 342 282 # CONFIG_ARCH_SUPPORTS_MSI is not set 343 - CONFIG_PCI_LEGACY=y 344 283 CONFIG_PCI_DEBUG=y 284 + # CONFIG_PCI_STUB is not set 285 + # CONFIG_PCI_IOV is not set 345 286 CONFIG_MMU=y 346 287 CONFIG_ZONE_DMA32=y 347 288 # CONFIG_PCCARD is not set ··· 352 291 # Executable file formats 353 292 # 354 293 CONFIG_BINFMT_ELF=y 294 + # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 295 + # CONFIG_HAVE_AOUT is not set 355 296 # CONFIG_BINFMT_MISC is not set 356 297 CONFIG_MIPS32_COMPAT=y 357 298 CONFIG_COMPAT=y ··· 367 304 # 368 305 CONFIG_PM=y 369 306 # CONFIG_PM_DEBUG is not set 370 - 371 - # 372 - # Networking 373 - # 307 + # CONFIG_PM_RUNTIME is not set 374 308 CONFIG_NET=y 375 309 376 310 # 377 311 # Networking options 378 312 # 379 313 CONFIG_PACKET=y 380 - CONFIG_PACKET_MMAP=y 381 314 CONFIG_UNIX=y 382 315 CONFIG_XFRM=y 383 316 CONFIG_XFRM_USER=m 384 317 # CONFIG_XFRM_SUB_POLICY is not set 385 318 CONFIG_XFRM_MIGRATE=y 386 319 # CONFIG_XFRM_STATISTICS is not set 320 + CONFIG_XFRM_IPCOMP=m 387 321 CONFIG_NET_KEY=y 388 322 CONFIG_NET_KEY_MIGRATE=y 389 323 CONFIG_INET=y ··· 413 353 CONFIG_TCP_CONG_CUBIC=y 414 354 CONFIG_DEFAULT_TCP_CONG="cubic" 415 355 CONFIG_TCP_MD5SIG=y 356 + CONFIG_IPV6=m 357 + CONFIG_IPV6_PRIVACY=y 358 + CONFIG_IPV6_ROUTER_PREF=y 359 + CONFIG_IPV6_ROUTE_INFO=y 360 + CONFIG_IPV6_OPTIMISTIC_DAD=y 361 + CONFIG_INET6_AH=m 362 + CONFIG_INET6_ESP=m 363 + CONFIG_INET6_IPCOMP=m 364 + CONFIG_IPV6_MIP6=m 365 + CONFIG_INET6_XFRM_TUNNEL=m 366 + CONFIG_INET6_TUNNEL=m 367 + 
CONFIG_INET6_XFRM_MODE_TRANSPORT=m 368 + CONFIG_INET6_XFRM_MODE_TUNNEL=m 369 + CONFIG_INET6_XFRM_MODE_BEET=m 370 + CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION=m 371 + CONFIG_IPV6_SIT=m 372 + CONFIG_IPV6_SIT_6RD=y 373 + CONFIG_IPV6_NDISC_NODETYPE=y 374 + CONFIG_IPV6_TUNNEL=m 375 + CONFIG_IPV6_MULTIPLE_TABLES=y 376 + CONFIG_IPV6_SUBTREES=y 377 + # CONFIG_IPV6_MROUTE is not set 378 + CONFIG_NETLABEL=y 379 + CONFIG_NETWORK_SECMARK=y 380 + CONFIG_NETFILTER=y 381 + # CONFIG_NETFILTER_DEBUG is not set 382 + # CONFIG_NETFILTER_ADVANCED is not set 383 + 384 + # 385 + # Core Netfilter Configuration 386 + # 387 + CONFIG_NETFILTER_NETLINK=m 388 + CONFIG_NETFILTER_NETLINK_LOG=m 389 + CONFIG_NF_CONNTRACK=m 390 + CONFIG_NF_CONNTRACK_SECMARK=y 391 + CONFIG_NF_CONNTRACK_FTP=m 392 + CONFIG_NF_CONNTRACK_IRC=m 393 + CONFIG_NF_CONNTRACK_SIP=m 394 + CONFIG_NF_CT_NETLINK=m 395 + CONFIG_NETFILTER_XTABLES=m 396 + CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m 397 + CONFIG_NETFILTER_XT_TARGET_MARK=m 398 + CONFIG_NETFILTER_XT_TARGET_NFLOG=m 399 + CONFIG_NETFILTER_XT_TARGET_SECMARK=m 400 + CONFIG_NETFILTER_XT_TARGET_TCPMSS=m 401 + CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m 402 + CONFIG_NETFILTER_XT_MATCH_MARK=m 403 + CONFIG_NETFILTER_XT_MATCH_POLICY=m 404 + CONFIG_NETFILTER_XT_MATCH_STATE=m 416 405 CONFIG_IP_VS=m 406 + CONFIG_IP_VS_IPV6=y 417 407 # CONFIG_IP_VS_DEBUG is not set 418 408 CONFIG_IP_VS_TAB_BITS=12 419 409 ··· 472 362 # 473 363 CONFIG_IP_VS_PROTO_TCP=y 474 364 CONFIG_IP_VS_PROTO_UDP=y 365 + CONFIG_IP_VS_PROTO_AH_ESP=y 475 366 CONFIG_IP_VS_PROTO_ESP=y 476 367 CONFIG_IP_VS_PROTO_AH=y 368 + CONFIG_IP_VS_PROTO_SCTP=y 477 369 478 370 # 479 371 # IPVS scheduler ··· 495 383 # IPVS application helper 496 384 # 497 385 CONFIG_IP_VS_FTP=m 498 - CONFIG_IPV6=m 499 - CONFIG_IPV6_PRIVACY=y 500 - CONFIG_IPV6_ROUTER_PREF=y 501 - CONFIG_IPV6_ROUTE_INFO=y 502 - CONFIG_IPV6_OPTIMISTIC_DAD=y 503 - CONFIG_INET6_AH=m 504 - CONFIG_INET6_ESP=m 505 - CONFIG_INET6_IPCOMP=m 506 - CONFIG_IPV6_MIP6=m 507 - 
CONFIG_INET6_XFRM_TUNNEL=m 508 - CONFIG_INET6_TUNNEL=m 509 - CONFIG_INET6_XFRM_MODE_TRANSPORT=m 510 - CONFIG_INET6_XFRM_MODE_TUNNEL=m 511 - CONFIG_INET6_XFRM_MODE_BEET=m 512 - CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION=m 513 - CONFIG_IPV6_SIT=m 514 - CONFIG_IPV6_NDISC_NODETYPE=y 515 - CONFIG_IPV6_TUNNEL=m 516 - CONFIG_IPV6_MULTIPLE_TABLES=y 517 - CONFIG_IPV6_SUBTREES=y 518 - # CONFIG_IPV6_MROUTE is not set 519 - CONFIG_NETWORK_SECMARK=y 520 - CONFIG_NETFILTER=y 521 - # CONFIG_NETFILTER_DEBUG is not set 522 - # CONFIG_NETFILTER_ADVANCED is not set 523 - 524 - # 525 - # Core Netfilter Configuration 526 - # 527 - CONFIG_NETFILTER_NETLINK=m 528 - CONFIG_NETFILTER_NETLINK_LOG=m 529 - CONFIG_NF_CONNTRACK=m 530 - CONFIG_NF_CONNTRACK_SECMARK=y 531 - CONFIG_NF_CONNTRACK_FTP=m 532 - CONFIG_NF_CONNTRACK_IRC=m 533 - CONFIG_NF_CONNTRACK_SIP=m 534 - CONFIG_NF_CT_NETLINK=m 535 - CONFIG_NETFILTER_XTABLES=m 536 - CONFIG_NETFILTER_XT_TARGET_MARK=m 537 - CONFIG_NETFILTER_XT_TARGET_NFLOG=m 538 - CONFIG_NETFILTER_XT_TARGET_SECMARK=m 539 - CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m 540 - CONFIG_NETFILTER_XT_TARGET_TCPMSS=m 541 - CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m 542 - CONFIG_NETFILTER_XT_MATCH_MARK=m 543 - CONFIG_NETFILTER_XT_MATCH_POLICY=m 544 - CONFIG_NETFILTER_XT_MATCH_STATE=m 545 386 546 387 # 547 388 # IP: Netfilter Configuration 548 389 # 390 + CONFIG_NF_DEFRAG_IPV4=m 549 391 CONFIG_NF_CONNTRACK_IPV4=m 550 392 CONFIG_NF_CONNTRACK_PROC_COMPAT=y 551 393 CONFIG_IP_NF_IPTABLES=m ··· 525 459 CONFIG_NF_CONNTRACK_IPV6=m 526 460 CONFIG_IP6_NF_IPTABLES=m 527 461 CONFIG_IP6_NF_MATCH_IPV6HEADER=m 528 - CONFIG_IP6_NF_FILTER=m 529 462 CONFIG_IP6_NF_TARGET_LOG=m 463 + CONFIG_IP6_NF_FILTER=m 530 464 CONFIG_IP6_NF_TARGET_REJECT=m 531 465 CONFIG_IP6_NF_MANGLE=m 532 - # CONFIG_IP_DCCP is not set 466 + CONFIG_IP_DCCP=m 467 + CONFIG_INET_DCCP_DIAG=m 468 + 469 + # 470 + # DCCP CCIDs Configuration (EXPERIMENTAL) 471 + # 472 + # CONFIG_IP_DCCP_CCID2_DEBUG is not set 473 + CONFIG_IP_DCCP_CCID3=y 474 + # 
CONFIG_IP_DCCP_CCID3_DEBUG is not set 475 + CONFIG_IP_DCCP_CCID3_RTO=100 476 + CONFIG_IP_DCCP_TFRC_LIB=y 477 + 478 + # 479 + # DCCP Kernel Hacking 480 + # 481 + # CONFIG_IP_DCCP_DEBUG is not set 533 482 CONFIG_IP_SCTP=m 534 483 # CONFIG_SCTP_DBG_MSG is not set 535 484 # CONFIG_SCTP_DBG_OBJCNT is not set 536 485 # CONFIG_SCTP_HMAC_NONE is not set 537 - # CONFIG_SCTP_HMAC_SHA1 is not set 538 - CONFIG_SCTP_HMAC_MD5=y 486 + CONFIG_SCTP_HMAC_SHA1=y 487 + # CONFIG_SCTP_HMAC_MD5 is not set 488 + # CONFIG_RDS is not set 539 489 # CONFIG_TIPC is not set 540 490 # CONFIG_ATM is not set 541 - # CONFIG_BRIDGE is not set 542 - # CONFIG_VLAN_8021Q is not set 491 + CONFIG_STP=m 492 + CONFIG_GARP=m 493 + CONFIG_BRIDGE=m 494 + CONFIG_BRIDGE_IGMP_SNOOPING=y 495 + # CONFIG_NET_DSA is not set 496 + CONFIG_VLAN_8021Q=m 497 + CONFIG_VLAN_8021Q_GVRP=y 543 498 # CONFIG_DECNET is not set 499 + CONFIG_LLC=m 544 500 # CONFIG_LLC2 is not set 545 501 # CONFIG_IPX is not set 546 502 # CONFIG_ATALK is not set ··· 570 482 # CONFIG_LAPB is not set 571 483 # CONFIG_ECONET is not set 572 484 # CONFIG_WAN_ROUTER is not set 485 + # CONFIG_PHONET is not set 486 + # CONFIG_IEEE802154 is not set 573 487 # CONFIG_NET_SCHED is not set 488 + # CONFIG_DCB is not set 574 489 575 490 # 576 491 # Network testing 577 492 # 578 493 # CONFIG_NET_PKTGEN is not set 579 - # CONFIG_HAMRADIO is not set 494 + CONFIG_HAMRADIO=y 495 + 496 + # 497 + # Packet Radio protocols 498 + # 499 + CONFIG_AX25=m 500 + CONFIG_AX25_DAMA_SLAVE=y 501 + CONFIG_NETROM=m 502 + CONFIG_ROSE=m 503 + 504 + # 505 + # AX.25 network device drivers 506 + # 507 + CONFIG_MKISS=m 508 + CONFIG_6PACK=m 509 + CONFIG_BPQETHER=m 510 + CONFIG_BAYCOM_SER_FDX=m 511 + CONFIG_BAYCOM_SER_HDX=m 512 + CONFIG_YAM=m 580 513 # CONFIG_CAN is not set 581 514 # CONFIG_IRDA is not set 582 515 # CONFIG_BT is not set 583 516 # CONFIG_AF_RXRPC is not set 584 517 CONFIG_FIB_RULES=y 518 + CONFIG_WIRELESS=y 519 + # CONFIG_CFG80211 is not set 520 + # CONFIG_LIB80211 is not set 
585 521 586 522 # 587 - # Wireless 523 + # CFG80211 needs to be enabled for MAC80211 588 524 # 589 - # CONFIG_CFG80211 is not set 590 - # CONFIG_WIRELESS_EXT is not set 591 - # CONFIG_MAC80211 is not set 592 - # CONFIG_IEEE80211 is not set 525 + # CONFIG_WIMAX is not set 593 526 # CONFIG_RFKILL is not set 594 527 # CONFIG_NET_9P is not set 595 528 ··· 622 513 # Generic Driver Options 623 514 # 624 515 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 516 + # CONFIG_DEVTMPFS is not set 625 517 CONFIG_STANDALONE=y 626 518 CONFIG_PREVENT_FIRMWARE_BUILD=y 627 519 CONFIG_FW_LOADER=m 520 + CONFIG_FIRMWARE_IN_KERNEL=y 521 + CONFIG_EXTRA_FIRMWARE="" 628 522 # CONFIG_DEBUG_DRIVER is not set 629 523 # CONFIG_DEBUG_DEVRES is not set 630 524 # CONFIG_SYS_HYPERVISOR is not set ··· 642 530 # CONFIG_BLK_DEV_COW_COMMON is not set 643 531 CONFIG_BLK_DEV_LOOP=m 644 532 CONFIG_BLK_DEV_CRYPTOLOOP=m 533 + 534 + # 535 + # DRBD disabled because PROC_FS, INET or CONNECTOR not selected 536 + # 645 537 CONFIG_BLK_DEV_NBD=m 646 538 # CONFIG_BLK_DEV_SX8 is not set 647 539 # CONFIG_BLK_DEV_RAM is not set 648 540 # CONFIG_CDROM_PKTCDVD is not set 649 541 # CONFIG_ATA_OVER_ETH is not set 542 + # CONFIG_BLK_DEV_HD is not set 650 543 CONFIG_MISC_DEVICES=y 544 + # CONFIG_AD525X_DPOT is not set 651 545 # CONFIG_PHANTOM is not set 652 - # CONFIG_EEPROM_93CX6 is not set 653 546 CONFIG_SGI_IOC4=m 654 547 # CONFIG_TIFM_CORE is not set 548 + # CONFIG_ICS932S401 is not set 655 549 # CONFIG_ENCLOSURE_SERVICES is not set 550 + # CONFIG_HP_ILO is not set 551 + # CONFIG_ISL29003 is not set 552 + # CONFIG_SENSORS_TSL2550 is not set 553 + # CONFIG_DS1682 is not set 554 + # CONFIG_C2PORT is not set 555 + 556 + # 557 + # EEPROM support 558 + # 559 + # CONFIG_EEPROM_AT24 is not set 560 + CONFIG_EEPROM_LEGACY=y 561 + CONFIG_EEPROM_MAX6875=y 562 + # CONFIG_EEPROM_93CX6 is not set 563 + # CONFIG_CB710_CORE is not set 656 564 CONFIG_HAVE_IDE=y 657 565 CONFIG_IDE=y 658 - CONFIG_IDE_MAX_HWIFS=4 659 - CONFIG_BLK_DEV_IDE=y 660 
566 661 567 # 662 568 # Please see Documentation/ide/ide.txt for help/info on IDE drives 663 569 # 570 + CONFIG_IDE_XFER_MODE=y 571 + CONFIG_IDE_TIMINGS=y 572 + CONFIG_IDE_ATAPI=y 664 573 # CONFIG_BLK_DEV_IDE_SATA is not set 665 - CONFIG_BLK_DEV_IDEDISK=y 666 - # CONFIG_IDEDISK_MULTI_MODE is not set 574 + CONFIG_IDE_GD=y 575 + CONFIG_IDE_GD_ATA=y 576 + # CONFIG_IDE_GD_ATAPI is not set 667 577 CONFIG_BLK_DEV_IDECD=y 668 578 CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y 669 579 CONFIG_BLK_DEV_IDETAPE=y 670 - CONFIG_BLK_DEV_IDEFLOPPY=y 671 - # CONFIG_BLK_DEV_IDESCSI is not set 672 580 # CONFIG_IDE_TASK_IOCTL is not set 673 581 CONFIG_IDE_PROC_FS=y 674 582 ··· 713 581 # CONFIG_BLK_DEV_AMD74XX is not set 714 582 CONFIG_BLK_DEV_CMD64X=y 715 583 # CONFIG_BLK_DEV_TRIFLEX is not set 716 - # CONFIG_BLK_DEV_CY82C693 is not set 717 584 # CONFIG_BLK_DEV_CS5520 is not set 718 585 # CONFIG_BLK_DEV_CS5530 is not set 719 - # CONFIG_BLK_DEV_HPT34X is not set 720 586 # CONFIG_BLK_DEV_HPT366 is not set 721 587 # CONFIG_BLK_DEV_JMICRON is not set 722 588 # CONFIG_BLK_DEV_SC1200 is not set 723 589 # CONFIG_BLK_DEV_PIIX is not set 590 + # CONFIG_BLK_DEV_IT8172 is not set 724 591 CONFIG_BLK_DEV_IT8213=m 725 592 # CONFIG_BLK_DEV_IT821X is not set 726 593 # CONFIG_BLK_DEV_NS87415 is not set ··· 731 600 # CONFIG_BLK_DEV_TRM290 is not set 732 601 # CONFIG_BLK_DEV_VIA82CXXX is not set 733 602 CONFIG_BLK_DEV_TC86C001=m 734 - # CONFIG_BLK_DEV_IDE_SWARM is not set 735 603 CONFIG_BLK_DEV_IDEDMA=y 736 - # CONFIG_BLK_DEV_HD_ONLY is not set 737 - # CONFIG_BLK_DEV_HD is not set 738 604 739 605 # 740 606 # SCSI device support 741 607 # 608 + CONFIG_SCSI_MOD=y 742 609 # CONFIG_RAID_ATTRS is not set 743 610 CONFIG_SCSI=y 744 611 CONFIG_SCSI_DMA=y ··· 754 625 CONFIG_BLK_DEV_SR_VENDOR=y 755 626 CONFIG_CHR_DEV_SG=m 756 627 CONFIG_CHR_DEV_SCH=m 757 - 758 - # 759 - # Some SCSI devices (e.g. 
CD jukebox) support multiple LUNs 760 - # 761 628 # CONFIG_SCSI_MULTI_LUN is not set 762 629 # CONFIG_SCSI_CONSTANTS is not set 763 630 # CONFIG_SCSI_LOGGING is not set ··· 770 645 # CONFIG_SCSI_SRP_ATTRS is not set 771 646 CONFIG_SCSI_LOWLEVEL=y 772 647 # CONFIG_ISCSI_TCP is not set 648 + # CONFIG_SCSI_CXGB3_ISCSI is not set 649 + # CONFIG_SCSI_BNX2_ISCSI is not set 650 + # CONFIG_BE2ISCSI is not set 773 651 # CONFIG_BLK_DEV_3W_XXXX_RAID is not set 652 + # CONFIG_SCSI_HPSA is not set 774 653 # CONFIG_SCSI_3W_9XXX is not set 654 + # CONFIG_SCSI_3W_SAS is not set 775 655 # CONFIG_SCSI_ACARD is not set 776 656 # CONFIG_SCSI_AACRAID is not set 777 657 # CONFIG_SCSI_AIC7XXX is not set 778 658 # CONFIG_SCSI_AIC7XXX_OLD is not set 779 659 # CONFIG_SCSI_AIC79XX is not set 780 660 # CONFIG_SCSI_AIC94XX is not set 661 + # CONFIG_SCSI_MVSAS is not set 781 662 # CONFIG_SCSI_DPT_I2O is not set 782 663 # CONFIG_SCSI_ADVANSYS is not set 783 664 # CONFIG_SCSI_ARCMSR is not set 784 665 # CONFIG_MEGARAID_NEWGEN is not set 785 666 # CONFIG_MEGARAID_LEGACY is not set 786 667 # CONFIG_MEGARAID_SAS is not set 668 + # CONFIG_SCSI_MPT2SAS is not set 787 669 # CONFIG_SCSI_HPTIOP is not set 670 + # CONFIG_LIBFC is not set 671 + # CONFIG_LIBFCOE is not set 672 + # CONFIG_FCOE is not set 788 673 # CONFIG_SCSI_DMX3191D is not set 789 674 # CONFIG_SCSI_FUTURE_DOMAIN is not set 790 675 # CONFIG_SCSI_IPS is not set 791 676 # CONFIG_SCSI_INITIO is not set 792 677 # CONFIG_SCSI_INIA100 is not set 793 - # CONFIG_SCSI_MVSAS is not set 794 678 # CONFIG_SCSI_STEX is not set 795 679 # CONFIG_SCSI_SYM53C8XX_2 is not set 796 680 # CONFIG_SCSI_IPR is not set ··· 810 676 # CONFIG_SCSI_DC395x is not set 811 677 # CONFIG_SCSI_DC390T is not set 812 678 # CONFIG_SCSI_DEBUG is not set 679 + # CONFIG_SCSI_PMCRAID is not set 680 + # CONFIG_SCSI_PM8001 is not set 813 681 # CONFIG_SCSI_SRP is not set 682 + # CONFIG_SCSI_BFA_FC is not set 683 + # CONFIG_SCSI_DH is not set 684 + # CONFIG_SCSI_OSD_INITIATOR is not set 
814 685 CONFIG_ATA=y 815 686 # CONFIG_ATA_NONSTANDARD is not set 687 + CONFIG_ATA_VERBOSE_ERROR=y 816 688 CONFIG_SATA_PMP=y 817 689 # CONFIG_SATA_AHCI is not set 818 690 CONFIG_SATA_SIL24=y ··· 840 700 # CONFIG_PATA_ALI is not set 841 701 # CONFIG_PATA_AMD is not set 842 702 # CONFIG_PATA_ARTOP is not set 703 + # CONFIG_PATA_ATP867X is not set 843 704 # CONFIG_PATA_ATIIXP is not set 844 705 # CONFIG_PATA_CMD640_PCI is not set 845 706 # CONFIG_PATA_CMD64X is not set ··· 856 715 # CONFIG_PATA_IT821X is not set 857 716 # CONFIG_PATA_IT8213 is not set 858 717 # CONFIG_PATA_JMICRON is not set 718 + # CONFIG_PATA_LEGACY is not set 859 719 # CONFIG_PATA_TRIFLEX is not set 860 720 # CONFIG_PATA_MARVELL is not set 861 721 # CONFIG_PATA_MPIIX is not set ··· 867 725 # CONFIG_PATA_NS87415 is not set 868 726 # CONFIG_PATA_OPTI is not set 869 727 # CONFIG_PATA_OPTIDMA is not set 728 + # CONFIG_PATA_PDC2027X is not set 870 729 # CONFIG_PATA_PDC_OLD is not set 871 730 # CONFIG_PATA_RADISYS is not set 731 + # CONFIG_PATA_RDC is not set 872 732 # CONFIG_PATA_RZ1000 is not set 873 733 # CONFIG_PATA_SC1200 is not set 874 734 # CONFIG_PATA_SERVERWORKS is not set 875 - # CONFIG_PATA_PDC2027X is not set 876 735 CONFIG_PATA_SIL680=y 877 736 # CONFIG_PATA_SIS is not set 737 + # CONFIG_PATA_TOSHIBA is not set 878 738 # CONFIG_PATA_VIA is not set 879 739 # CONFIG_PATA_WINBOND is not set 880 740 # CONFIG_PATA_PLATFORM is not set ··· 889 745 # 890 746 891 747 # 892 - # Enable only one of the two stacks, unless you know what you are doing 748 + # You can enable one or both FireWire driver stacks. 749 + # 750 + 751 + # 752 + # The newer stack is recommended. 
893 753 # 894 754 # CONFIG_FIREWIRE is not set 895 755 # CONFIG_IEEE1394 is not set 896 756 # CONFIG_I2O is not set 897 757 CONFIG_NETDEVICES=y 898 - # CONFIG_NETDEVICES_MULTIQUEUE is not set 899 758 # CONFIG_DUMMY is not set 900 759 # CONFIG_BONDING is not set 901 760 # CONFIG_MACVLAN is not set ··· 921 774 # CONFIG_BROADCOM_PHY is not set 922 775 # CONFIG_ICPLUS_PHY is not set 923 776 # CONFIG_REALTEK_PHY is not set 777 + # CONFIG_NATIONAL_PHY is not set 778 + # CONFIG_STE10XP is not set 779 + # CONFIG_LSI_ET1011C_PHY is not set 924 780 # CONFIG_FIXED_PHY is not set 925 781 # CONFIG_MDIO_BITBANG is not set 926 782 CONFIG_NET_ETHERNET=y ··· 933 783 # CONFIG_SUNGEM is not set 934 784 # CONFIG_CASSINI is not set 935 785 # CONFIG_NET_VENDOR_3COM is not set 786 + # CONFIG_SMC91X is not set 936 787 # CONFIG_DM9000 is not set 788 + # CONFIG_ETHOC is not set 789 + # CONFIG_SMSC911X is not set 790 + # CONFIG_DNET is not set 937 791 # CONFIG_NET_TULIP is not set 938 792 # CONFIG_HP100 is not set 939 793 # CONFIG_IBM_NEW_EMAC_ZMII is not set 940 794 # CONFIG_IBM_NEW_EMAC_RGMII is not set 941 795 # CONFIG_IBM_NEW_EMAC_TAH is not set 942 796 # CONFIG_IBM_NEW_EMAC_EMAC4 is not set 797 + # CONFIG_IBM_NEW_EMAC_NO_FLOW_CTRL is not set 798 + # CONFIG_IBM_NEW_EMAC_MAL_CLR_ICINTSTAT is not set 799 + # CONFIG_IBM_NEW_EMAC_MAL_COMMON_ERR is not set 943 800 # CONFIG_NET_PCI is not set 944 801 # CONFIG_B44 is not set 802 + # CONFIG_KS8842 is not set 803 + # CONFIG_KS8851_MLL is not set 804 + # CONFIG_ATL2 is not set 945 805 CONFIG_NETDEV_1000=y 946 806 # CONFIG_ACENIC is not set 947 807 # CONFIG_DL2K is not set 948 808 # CONFIG_E1000 is not set 949 809 # CONFIG_E1000E is not set 950 - # CONFIG_E1000E_ENABLED is not set 951 810 # CONFIG_IP1000 is not set 952 811 # CONFIG_IGB is not set 812 + # CONFIG_IGBVF is not set 953 813 # CONFIG_NS83820 is not set 954 814 # CONFIG_HAMACHI is not set 955 815 # CONFIG_YELLOWFIN is not set ··· 971 811 # CONFIG_VIA_VELOCITY is not set 972 812 # 
CONFIG_TIGON3 is not set 973 813 # CONFIG_BNX2 is not set 814 + # CONFIG_CNIC is not set 974 815 # CONFIG_QLA3XXX is not set 975 816 # CONFIG_ATL1 is not set 817 + # CONFIG_ATL1E is not set 818 + # CONFIG_ATL1C is not set 819 + # CONFIG_JME is not set 976 820 CONFIG_NETDEV_10000=y 821 + CONFIG_MDIO=m 977 822 # CONFIG_CHELSIO_T1 is not set 823 + CONFIG_CHELSIO_T3_DEPENDS=y 978 824 CONFIG_CHELSIO_T3=m 825 + # CONFIG_ENIC is not set 979 826 # CONFIG_IXGBE is not set 980 827 # CONFIG_IXGB is not set 981 828 # CONFIG_S2IO is not set 829 + # CONFIG_VXGE is not set 982 830 # CONFIG_MYRI10GE is not set 983 831 CONFIG_NETXEN_NIC=m 984 832 # CONFIG_NIU is not set 833 + # CONFIG_MLX4_EN is not set 985 834 # CONFIG_MLX4_CORE is not set 986 835 # CONFIG_TEHUTI is not set 987 836 # CONFIG_BNX2X is not set 837 + # CONFIG_QLCNIC is not set 838 + # CONFIG_QLGE is not set 988 839 # CONFIG_SFC is not set 840 + # CONFIG_BE2NET is not set 989 841 # CONFIG_TR is not set 842 + CONFIG_WLAN=y 843 + # CONFIG_ATMEL is not set 844 + # CONFIG_PRISM54 is not set 845 + # CONFIG_HOSTAP is not set 990 846 991 847 # 992 - # Wireless LAN 848 + # Enable WiMAX (Networking options) to see the WiMAX drivers 993 849 # 994 - # CONFIG_WLAN_PRE80211 is not set 995 - # CONFIG_WLAN_80211 is not set 996 - # CONFIG_IWLWIFI_LEDS is not set 997 850 # CONFIG_WAN is not set 998 851 # CONFIG_FDDI is not set 999 852 # CONFIG_HIPPI is not set ··· 1029 856 # CONFIG_NETCONSOLE is not set 1030 857 # CONFIG_NETPOLL is not set 1031 858 # CONFIG_NET_POLL_CONTROLLER is not set 859 + # CONFIG_VMXNET3 is not set 1032 860 # CONFIG_ISDN is not set 1033 861 # CONFIG_PHONE is not set 1034 862 ··· 1047 873 # CONFIG_SERIO_PCIPS2 is not set 1048 874 # CONFIG_SERIO_LIBPS2 is not set 1049 875 CONFIG_SERIO_RAW=m 876 + # CONFIG_SERIO_ALTERA_PS2 is not set 1050 877 # CONFIG_GAMEPORT is not set 1051 878 1052 879 # ··· 1068 893 # CONFIG_N_HDLC is not set 1069 894 # CONFIG_RISCOM8 is not set 1070 895 # CONFIG_SPECIALIX is not set 1071 - # 
CONFIG_SX is not set 1072 - # CONFIG_RIO is not set 1073 896 # CONFIG_STALDRV is not set 1074 897 # CONFIG_NOZOMI is not set 1075 898 ··· 1084 911 CONFIG_SERIAL_CORE=y 1085 912 CONFIG_SERIAL_CORE_CONSOLE=y 1086 913 # CONFIG_SERIAL_JSM is not set 914 + # CONFIG_SERIAL_TIMBERDALE is not set 1087 915 CONFIG_UNIX98_PTYS=y 916 + # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set 1088 917 CONFIG_LEGACY_PTYS=y 1089 918 CONFIG_LEGACY_PTY_COUNT=256 1090 919 # CONFIG_IPMI_HANDLER is not set ··· 1098 923 CONFIG_DEVPORT=y 1099 924 CONFIG_I2C=y 1100 925 CONFIG_I2C_BOARDINFO=y 926 + CONFIG_I2C_COMPAT=y 1101 927 CONFIG_I2C_CHARDEV=y 928 + CONFIG_I2C_HELPER_AUTO=y 1102 929 1103 930 # 1104 931 # I2C Hardware Bus support 932 + # 933 + 934 + # 935 + # PC SMBus host controller drivers 1105 936 # 1106 937 # CONFIG_I2C_ALI1535 is not set 1107 938 # CONFIG_I2C_ALI1563 is not set ··· 1115 934 # CONFIG_I2C_AMD756 is not set 1116 935 # CONFIG_I2C_AMD8111 is not set 1117 936 # CONFIG_I2C_I801 is not set 1118 - # CONFIG_I2C_I810 is not set 937 + # CONFIG_I2C_ISCH is not set 1119 938 # CONFIG_I2C_PIIX4 is not set 1120 939 # CONFIG_I2C_NFORCE2 is not set 1121 - # CONFIG_I2C_OCORES is not set 1122 - # CONFIG_I2C_PARPORT_LIGHT is not set 1123 - # CONFIG_I2C_PROSAVAGE is not set 1124 - # CONFIG_I2C_SAVAGE4 is not set 1125 - CONFIG_I2C_SIBYTE=y 1126 - # CONFIG_I2C_SIMTEC is not set 1127 940 # CONFIG_I2C_SIS5595 is not set 1128 941 # CONFIG_I2C_SIS630 is not set 1129 942 # CONFIG_I2C_SIS96X is not set 1130 - # CONFIG_I2C_TAOS_EVM is not set 1131 - # CONFIG_I2C_STUB is not set 1132 943 # CONFIG_I2C_VIA is not set 1133 944 # CONFIG_I2C_VIAPRO is not set 1134 - # CONFIG_I2C_VOODOO3 is not set 1135 - # CONFIG_I2C_PCA_PLATFORM is not set 1136 945 1137 946 # 1138 - # Miscellaneous I2C Chip support 947 + # I2C system bus drivers (mostly embedded / system-on-chip) 1139 948 # 1140 - # CONFIG_DS1682 is not set 1141 - CONFIG_EEPROM_LEGACY=y 1142 - CONFIG_SENSORS_PCF8574=y 1143 - # CONFIG_PCF8575 is not set 1144 - 
CONFIG_SENSORS_PCF8591=y 1145 - CONFIG_EEPROM_MAX6875=y 1146 - # CONFIG_SENSORS_TSL2550 is not set 949 + # CONFIG_I2C_OCORES is not set 950 + # CONFIG_I2C_SIMTEC is not set 951 + # CONFIG_I2C_XILINX is not set 952 + 953 + # 954 + # External I2C/SMBus adapter drivers 955 + # 956 + # CONFIG_I2C_PARPORT_LIGHT is not set 957 + # CONFIG_I2C_TAOS_EVM is not set 958 + 959 + # 960 + # Other I2C/SMBus bus drivers 961 + # 962 + # CONFIG_I2C_PCA_PLATFORM is not set 963 + CONFIG_I2C_SIBYTE=y 964 + # CONFIG_I2C_STUB is not set 1147 965 CONFIG_I2C_DEBUG_CORE=y 1148 966 CONFIG_I2C_DEBUG_ALGO=y 1149 967 CONFIG_I2C_DEBUG_BUS=y 1150 - CONFIG_I2C_DEBUG_CHIP=y 1151 968 # CONFIG_SPI is not set 969 + 970 + # 971 + # PPS support 972 + # 973 + # CONFIG_PPS is not set 1152 974 # CONFIG_W1 is not set 1153 975 # CONFIG_POWER_SUPPLY is not set 1154 976 # CONFIG_HWMON is not set 1155 977 # CONFIG_THERMAL is not set 1156 - # CONFIG_THERMAL_HWMON is not set 1157 978 # CONFIG_WATCHDOG is not set 979 + CONFIG_SSB_POSSIBLE=y 1158 980 1159 981 # 1160 982 # Sonics Silicon Backplane 1161 983 # 1162 - CONFIG_SSB_POSSIBLE=y 1163 984 # CONFIG_SSB is not set 1164 985 1165 986 # 1166 987 # Multifunction device drivers 1167 988 # 989 + # CONFIG_MFD_CORE is not set 990 + # CONFIG_MFD_88PM860X is not set 1168 991 # CONFIG_MFD_SM501 is not set 1169 992 # CONFIG_HTC_PASIC3 is not set 1170 - 1171 - # 1172 - # Multimedia devices 1173 - # 1174 - 1175 - # 1176 - # Multimedia core support 1177 - # 1178 - # CONFIG_VIDEO_DEV is not set 1179 - # CONFIG_DVB_CORE is not set 1180 - # CONFIG_VIDEO_MEDIA is not set 1181 - 1182 - # 1183 - # Multimedia drivers 1184 - # 1185 - # CONFIG_DAB is not set 993 + # CONFIG_TWL4030_CORE is not set 994 + # CONFIG_MFD_TMIO is not set 995 + # CONFIG_PMIC_DA903X is not set 996 + # CONFIG_PMIC_ADP5520 is not set 997 + # CONFIG_MFD_MAX8925 is not set 998 + # CONFIG_MFD_WM8400 is not set 999 + # CONFIG_MFD_WM831X is not set 1000 + # CONFIG_MFD_WM8350_I2C is not set 1001 + # CONFIG_MFD_WM8994 
is not set 1002 + # CONFIG_MFD_PCF50633 is not set 1003 + # CONFIG_AB3100_CORE is not set 1004 + # CONFIG_LPC_SCH is not set 1005 + # CONFIG_REGULATOR is not set 1006 + # CONFIG_MEDIA_SUPPORT is not set 1186 1007 1187 1008 # 1188 1009 # Graphics support 1189 1010 # 1011 + CONFIG_VGA_ARB=y 1012 + CONFIG_VGA_ARB_MAX_GPUS=16 1190 1013 # CONFIG_DRM is not set 1191 1014 # CONFIG_VGASTATE is not set 1192 1015 # CONFIG_VIDEO_OUTPUT_CONTROL is not set ··· 1201 1016 # Display device support 1202 1017 # 1203 1018 # CONFIG_DISPLAY_SUPPORT is not set 1204 - 1205 - # 1206 - # Sound 1207 - # 1208 1019 # CONFIG_SOUND is not set 1209 1020 CONFIG_USB_SUPPORT=y 1210 1021 CONFIG_USB_ARCH_HAS_HCD=y ··· 1211 1030 # CONFIG_USB_OTG_BLACKLIST_HUB is not set 1212 1031 1213 1032 # 1214 - # NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' 1033 + # Enable Host or Gadget support to see Inventra options 1034 + # 1035 + 1036 + # 1037 + # NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may 1215 1038 # 1216 1039 # CONFIG_USB_GADGET is not set 1040 + 1041 + # 1042 + # OTG and related infrastructure 1043 + # 1044 + # CONFIG_UWB is not set 1217 1045 # CONFIG_MMC is not set 1218 1046 # CONFIG_MEMSTICK is not set 1219 1047 # CONFIG_NEW_LEDS is not set ··· 1230 1040 # CONFIG_INFINIBAND is not set 1231 1041 CONFIG_RTC_LIB=y 1232 1042 # CONFIG_RTC_CLASS is not set 1043 + # CONFIG_DMADEVICES is not set 1044 + # CONFIG_AUXDISPLAY is not set 1233 1045 # CONFIG_UIO is not set 1046 + 1047 + # 1048 + # TI VLYNQ 1049 + # 1050 + # CONFIG_STAGING is not set 1234 1051 1235 1052 # 1236 1053 # File systems 1237 1054 # 1238 1055 CONFIG_EXT2_FS=m 1239 1056 CONFIG_EXT2_FS_XATTR=y 1240 - # CONFIG_EXT2_FS_POSIX_ACL is not set 1241 - # CONFIG_EXT2_FS_SECURITY is not set 1242 - # CONFIG_EXT2_FS_XIP is not set 1243 - CONFIG_EXT3_FS=y 1057 + CONFIG_EXT2_FS_POSIX_ACL=y 1058 + CONFIG_EXT2_FS_SECURITY=y 1059 + CONFIG_EXT2_FS_XIP=y 1060 + CONFIG_EXT3_FS=m 1061 + CONFIG_EXT3_DEFAULTS_TO_ORDERED=y 1244 1062 
CONFIG_EXT3_FS_XATTR=y 1245 - # CONFIG_EXT3_FS_POSIX_ACL is not set 1246 - # CONFIG_EXT3_FS_SECURITY is not set 1247 - # CONFIG_EXT4DEV_FS is not set 1248 - CONFIG_JBD=y 1063 + CONFIG_EXT3_FS_POSIX_ACL=y 1064 + CONFIG_EXT3_FS_SECURITY=y 1065 + CONFIG_EXT4_FS=y 1066 + CONFIG_EXT4_FS_XATTR=y 1067 + CONFIG_EXT4_FS_POSIX_ACL=y 1068 + CONFIG_EXT4_FS_SECURITY=y 1069 + # CONFIG_EXT4_DEBUG is not set 1070 + CONFIG_FS_XIP=y 1071 + CONFIG_JBD=m 1072 + CONFIG_JBD2=y 1249 1073 CONFIG_FS_MBCACHE=y 1250 1074 # CONFIG_REISERFS_FS is not set 1251 1075 # CONFIG_JFS_FS is not set 1252 - # CONFIG_FS_POSIX_ACL is not set 1076 + CONFIG_FS_POSIX_ACL=y 1253 1077 # CONFIG_XFS_FS is not set 1254 1078 # CONFIG_GFS2_FS is not set 1255 1079 # CONFIG_OCFS2_FS is not set 1080 + # CONFIG_BTRFS_FS is not set 1081 + # CONFIG_NILFS2_FS is not set 1082 + CONFIG_FILE_LOCKING=y 1083 + CONFIG_FSNOTIFY=y 1256 1084 CONFIG_DNOTIFY=y 1257 1085 CONFIG_INOTIFY=y 1258 1086 CONFIG_INOTIFY_USER=y 1259 1087 CONFIG_QUOTA=y 1260 1088 CONFIG_QUOTA_NETLINK_INTERFACE=y 1261 1089 # CONFIG_PRINT_QUOTA_WARNING is not set 1090 + CONFIG_QUOTA_TREE=m 1262 1091 # CONFIG_QFMT_V1 is not set 1263 1092 CONFIG_QFMT_V2=m 1264 1093 CONFIG_QUOTACTL=y 1265 1094 CONFIG_AUTOFS_FS=m 1266 1095 CONFIG_AUTOFS4_FS=m 1267 1096 CONFIG_FUSE_FS=m 1097 + # CONFIG_CUSE is not set 1098 + 1099 + # 1100 + # Caches 1101 + # 1102 + # CONFIG_FSCACHE is not set 1268 1103 1269 1104 # 1270 1105 # CD-ROM/DVD Filesystems ··· 1318 1103 CONFIG_PROC_FS=y 1319 1104 CONFIG_PROC_KCORE=y 1320 1105 CONFIG_PROC_SYSCTL=y 1106 + CONFIG_PROC_PAGE_MONITOR=y 1321 1107 CONFIG_SYSFS=y 1322 1108 CONFIG_TMPFS=y 1323 1109 # CONFIG_TMPFS_POSIX_ACL is not set 1324 1110 # CONFIG_HUGETLB_PAGE is not set 1325 1111 CONFIG_CONFIGFS_FS=m 1326 - 1327 - # 1328 - # Miscellaneous filesystems 1329 - # 1112 + CONFIG_MISC_FILESYSTEMS=y 1330 1113 # CONFIG_ADFS_FS is not set 1331 1114 # CONFIG_AFFS_FS is not set 1332 1115 # CONFIG_ECRYPT_FS is not set ··· 1333 1120 # CONFIG_BEFS_FS is not 
set 1334 1121 # CONFIG_BFS_FS is not set 1335 1122 # CONFIG_EFS_FS is not set 1123 + # CONFIG_LOGFS is not set 1336 1124 # CONFIG_CRAMFS is not set 1125 + # CONFIG_SQUASHFS is not set 1337 1126 # CONFIG_VXFS_FS is not set 1338 1127 # CONFIG_MINIX_FS is not set 1128 + # CONFIG_OMFS_FS is not set 1339 1129 # CONFIG_HPFS_FS is not set 1340 1130 # CONFIG_QNX4FS_FS is not set 1341 1131 # CONFIG_ROMFS_FS is not set ··· 1349 1133 CONFIG_NFS_V3=y 1350 1134 # CONFIG_NFS_V3_ACL is not set 1351 1135 # CONFIG_NFS_V4 is not set 1352 - # CONFIG_NFSD is not set 1353 1136 CONFIG_ROOT_NFS=y 1137 + # CONFIG_NFSD is not set 1354 1138 CONFIG_LOCKD=y 1355 1139 CONFIG_LOCKD_V4=y 1356 1140 CONFIG_NFS_COMMON=y 1357 1141 CONFIG_SUNRPC=y 1358 - # CONFIG_SUNRPC_BIND34 is not set 1359 - # CONFIG_RPCSEC_GSS_KRB5 is not set 1360 - # CONFIG_RPCSEC_GSS_SPKM3 is not set 1142 + CONFIG_SUNRPC_GSS=m 1143 + CONFIG_RPCSEC_GSS_KRB5=m 1144 + CONFIG_RPCSEC_GSS_SPKM3=m 1361 1145 # CONFIG_SMB_FS is not set 1146 + # CONFIG_CEPH_FS is not set 1362 1147 # CONFIG_CIFS is not set 1363 1148 # CONFIG_NCP_FS is not set 1364 1149 # CONFIG_CODA_FS is not set ··· 1422 1205 CONFIG_ENABLE_MUST_CHECK=y 1423 1206 CONFIG_FRAME_WARN=2048 1424 1207 CONFIG_MAGIC_SYSRQ=y 1208 + # CONFIG_STRIP_ASM_SYMS is not set 1425 1209 # CONFIG_UNUSED_SYMBOLS is not set 1426 1210 # CONFIG_DEBUG_FS is not set 1427 1211 # CONFIG_HEADERS_CHECK is not set 1428 1212 CONFIG_DEBUG_KERNEL=y 1429 1213 # CONFIG_DEBUG_SHIRQ is not set 1430 1214 CONFIG_DETECT_SOFTLOCKUP=y 1215 + # CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set 1216 + CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0 1217 + CONFIG_DETECT_HUNG_TASK=y 1218 + # CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set 1219 + CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0 1431 1220 CONFIG_SCHED_DEBUG=y 1432 1221 # CONFIG_SCHEDSTATS is not set 1433 1222 # CONFIG_TIMER_STATS is not set ··· 1442 1219 # CONFIG_DEBUG_RT_MUTEXES is not set 1443 1220 # CONFIG_RT_MUTEX_TESTER is not set 1444 1221 # CONFIG_DEBUG_SPINLOCK is not 
set 1445 - CONFIG_DEBUG_MUTEXES=y 1222 + # CONFIG_DEBUG_MUTEXES is not set 1446 1223 # CONFIG_DEBUG_LOCK_ALLOC is not set 1447 1224 # CONFIG_PROVE_LOCKING is not set 1448 1225 # CONFIG_LOCK_STAT is not set 1449 - # CONFIG_DEBUG_SPINLOCK_SLEEP is not set 1226 + CONFIG_DEBUG_SPINLOCK_SLEEP=y 1450 1227 # CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set 1451 1228 # CONFIG_DEBUG_KOBJECT is not set 1452 1229 # CONFIG_DEBUG_INFO is not set 1453 1230 # CONFIG_DEBUG_VM is not set 1454 1231 # CONFIG_DEBUG_WRITECOUNT is not set 1455 - # CONFIG_DEBUG_LIST is not set 1232 + CONFIG_DEBUG_MEMORY_INIT=y 1233 + CONFIG_DEBUG_LIST=y 1456 1234 # CONFIG_DEBUG_SG is not set 1235 + # CONFIG_DEBUG_NOTIFIERS is not set 1236 + # CONFIG_DEBUG_CREDENTIALS is not set 1457 1237 # CONFIG_BOOT_PRINTK_DELAY is not set 1458 1238 # CONFIG_RCU_TORTURE_TEST is not set 1239 + CONFIG_RCU_CPU_STALL_DETECTOR=y 1459 1240 # CONFIG_BACKTRACE_SELF_TEST is not set 1241 + # CONFIG_DEBUG_BLOCK_EXT_DEVT is not set 1242 + # CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set 1460 1243 # CONFIG_FAULT_INJECTION is not set 1244 + # CONFIG_SYSCTL_SYSCALL_CHECK is not set 1245 + # CONFIG_PAGE_POISONING is not set 1246 + CONFIG_HAVE_FUNCTION_TRACER=y 1247 + CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y 1248 + CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y 1249 + CONFIG_HAVE_DYNAMIC_FTRACE=y 1250 + CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y 1251 + CONFIG_TRACING_SUPPORT=y 1252 + CONFIG_FTRACE=y 1253 + # CONFIG_FUNCTION_TRACER is not set 1254 + # CONFIG_IRQSOFF_TRACER is not set 1255 + # CONFIG_SCHED_TRACER is not set 1256 + # CONFIG_ENABLE_DEFAULT_TRACERS is not set 1257 + # CONFIG_BOOT_TRACER is not set 1258 + CONFIG_BRANCH_PROFILE_NONE=y 1259 + # CONFIG_PROFILE_ANNOTATED_BRANCHES is not set 1260 + # CONFIG_PROFILE_ALL_BRANCHES is not set 1261 + # CONFIG_STACK_TRACER is not set 1262 + # CONFIG_KMEMTRACE is not set 1263 + # CONFIG_WORKQUEUE_TRACER is not set 1264 + # CONFIG_BLK_DEV_IO_TRACE is not set 1461 1265 # CONFIG_SAMPLES is not set 1266 + 
CONFIG_HAVE_ARCH_KGDB=y 1267 + # CONFIG_KGDB is not set 1268 + CONFIG_EARLY_PRINTK=y 1462 1269 # CONFIG_CMDLINE_BOOL is not set 1463 1270 # CONFIG_DEBUG_STACK_USAGE is not set 1464 1271 # CONFIG_SB1XXX_CORELIS is not set ··· 1499 1246 # 1500 1247 CONFIG_KEYS=y 1501 1248 CONFIG_KEYS_DEBUG_PROC_KEYS=y 1502 - # CONFIG_SECURITY is not set 1503 - # CONFIG_SECURITY_FILE_CAPABILITIES is not set 1249 + CONFIG_SECURITY=y 1250 + # CONFIG_SECURITYFS is not set 1251 + CONFIG_SECURITY_NETWORK=y 1252 + CONFIG_SECURITY_NETWORK_XFRM=y 1253 + # CONFIG_SECURITY_PATH is not set 1254 + CONFIG_LSM_MMAP_MIN_ADDR=65536 1255 + CONFIG_SECURITY_SELINUX=y 1256 + CONFIG_SECURITY_SELINUX_BOOTPARAM=y 1257 + CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1 1258 + CONFIG_SECURITY_SELINUX_DISABLE=y 1259 + CONFIG_SECURITY_SELINUX_DEVELOP=y 1260 + CONFIG_SECURITY_SELINUX_AVC_STATS=y 1261 + CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1 1262 + # CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set 1263 + # CONFIG_SECURITY_SMACK is not set 1264 + # CONFIG_SECURITY_TOMOYO is not set 1265 + # CONFIG_DEFAULT_SECURITY_SELINUX is not set 1266 + # CONFIG_DEFAULT_SECURITY_SMACK is not set 1267 + # CONFIG_DEFAULT_SECURITY_TOMOYO is not set 1268 + CONFIG_DEFAULT_SECURITY_DAC=y 1269 + CONFIG_DEFAULT_SECURITY="" 1504 1270 CONFIG_CRYPTO=y 1505 1271 1506 1272 # 1507 1273 # Crypto core or helper 1508 1274 # 1275 + # CONFIG_CRYPTO_FIPS is not set 1509 1276 CONFIG_CRYPTO_ALGAPI=y 1277 + CONFIG_CRYPTO_ALGAPI2=y 1510 1278 CONFIG_CRYPTO_AEAD=m 1279 + CONFIG_CRYPTO_AEAD2=y 1511 1280 CONFIG_CRYPTO_BLKCIPHER=y 1281 + CONFIG_CRYPTO_BLKCIPHER2=y 1512 1282 CONFIG_CRYPTO_HASH=y 1283 + CONFIG_CRYPTO_HASH2=y 1284 + CONFIG_CRYPTO_RNG=m 1285 + CONFIG_CRYPTO_RNG2=y 1286 + CONFIG_CRYPTO_PCOMP=y 1513 1287 CONFIG_CRYPTO_MANAGER=y 1288 + CONFIG_CRYPTO_MANAGER2=y 1514 1289 CONFIG_CRYPTO_GF128MUL=m 1515 1290 CONFIG_CRYPTO_NULL=y 1291 + # CONFIG_CRYPTO_PCRYPT is not set 1292 + CONFIG_CRYPTO_WORKQUEUE=y 1516 1293 # CONFIG_CRYPTO_CRYPTD is not 
set 1517 1294 CONFIG_CRYPTO_AUTHENC=m 1518 1295 # CONFIG_CRYPTO_TEST is not set ··· 1559 1276 # 1560 1277 CONFIG_CRYPTO_CBC=m 1561 1278 CONFIG_CRYPTO_CTR=m 1562 - # CONFIG_CRYPTO_CTS is not set 1279 + CONFIG_CRYPTO_CTS=m 1563 1280 CONFIG_CRYPTO_ECB=m 1564 1281 CONFIG_CRYPTO_LRW=m 1565 1282 CONFIG_CRYPTO_PCBC=m ··· 1570 1287 # 1571 1288 CONFIG_CRYPTO_HMAC=y 1572 1289 CONFIG_CRYPTO_XCBC=m 1290 + CONFIG_CRYPTO_VMAC=m 1573 1291 1574 1292 # 1575 1293 # Digest 1576 1294 # 1577 - # CONFIG_CRYPTO_CRC32C is not set 1295 + CONFIG_CRYPTO_CRC32C=m 1296 + CONFIG_CRYPTO_GHASH=m 1578 1297 CONFIG_CRYPTO_MD4=m 1579 1298 CONFIG_CRYPTO_MD5=y 1580 1299 CONFIG_CRYPTO_MICHAEL_MIC=m 1300 + CONFIG_CRYPTO_RMD128=m 1301 + CONFIG_CRYPTO_RMD160=m 1302 + CONFIG_CRYPTO_RMD256=m 1303 + CONFIG_CRYPTO_RMD320=m 1581 1304 CONFIG_CRYPTO_SHA1=m 1582 1305 CONFIG_CRYPTO_SHA256=m 1583 1306 CONFIG_CRYPTO_SHA512=m ··· 1614 1325 # Compression 1615 1326 # 1616 1327 CONFIG_CRYPTO_DEFLATE=m 1617 - # CONFIG_CRYPTO_LZO is not set 1328 + CONFIG_CRYPTO_ZLIB=m 1329 + CONFIG_CRYPTO_LZO=m 1330 + 1331 + # 1332 + # Random Number Generation 1333 + # 1334 + CONFIG_CRYPTO_ANSI_CPRNG=m 1618 1335 CONFIG_CRYPTO_HW=y 1619 1336 # CONFIG_CRYPTO_DEV_HIFN_795X is not set 1337 + # CONFIG_BINARY_PRINTF is not set 1620 1338 1621 1339 # 1622 1340 # Library routines 1623 1341 # 1624 1342 CONFIG_BITREVERSE=y 1625 - # CONFIG_GENERIC_FIND_FIRST_BIT is not set 1343 + CONFIG_GENERIC_FIND_LAST_BIT=y 1626 1344 CONFIG_CRC_CCITT=m 1627 - # CONFIG_CRC16 is not set 1345 + CONFIG_CRC16=y 1346 + CONFIG_CRC_T10DIF=m 1628 1347 CONFIG_CRC_ITU_T=m 1629 1348 CONFIG_CRC32=y 1630 - # CONFIG_CRC7 is not set 1349 + CONFIG_CRC7=m 1631 1350 CONFIG_LIBCRC32C=m 1632 1351 CONFIG_AUDIT_GENERIC=y 1633 - CONFIG_ZLIB_INFLATE=m 1352 + CONFIG_ZLIB_INFLATE=y 1634 1353 CONFIG_ZLIB_DEFLATE=m 1635 - CONFIG_PLIST=y 1354 + CONFIG_LZO_COMPRESS=m 1355 + CONFIG_LZO_DECOMPRESS=m 1356 + CONFIG_DECOMPRESS_GZIP=y 1636 1357 CONFIG_HAS_IOMEM=y 1637 1358 CONFIG_HAS_IOPORT=y 1638 
1359 CONFIG_HAS_DMA=y 1360 + CONFIG_NLATTR=y
+4 -2
arch/mips/include/asm/abi.h
··· 13 13 #include <asm/siginfo.h> 14 14 15 15 struct mips_abi { 16 - int (* const setup_frame)(struct k_sigaction * ka, 16 + int (* const setup_frame)(void *sig_return, struct k_sigaction *ka, 17 17 struct pt_regs *regs, int signr, 18 18 sigset_t *set); 19 - int (* const setup_rt_frame)(struct k_sigaction * ka, 19 + const unsigned long signal_return_offset; 20 + int (* const setup_rt_frame)(void *sig_return, struct k_sigaction *ka, 20 21 struct pt_regs *regs, int signr, 21 22 sigset_t *set, siginfo_t *info); 23 + const unsigned long rt_signal_return_offset; 22 24 const unsigned long restart; 23 25 }; 24 26
+5
arch/mips/include/asm/elf.h
··· 310 310 311 311 #endif /* CONFIG_64BIT */ 312 312 313 + struct pt_regs; 313 314 struct task_struct; 314 315 315 316 extern void elf_dump_regs(elf_greg_t *, struct pt_regs *regs); ··· 368 367 #define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2) 369 368 #endif 370 369 370 + #define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1 371 + struct linux_binprm; 372 + extern int arch_setup_additional_pages(struct linux_binprm *bprm, 373 + int uses_interp); 371 374 #endif /* _ASM_ELF_H */
+5 -1
arch/mips/include/asm/fpu_emulator.h
··· 41 41 DECLARE_PER_CPU(struct mips_fpu_emulator_stats, fpuemustats); 42 42 43 43 #define MIPS_FPU_EMU_INC_STATS(M) \ 44 - cpu_local_wrap(__local_inc(&__get_cpu_var(fpuemustats).M)) 44 + do { \ 45 + preempt_disable(); \ 46 + __local_inc(&__get_cpu_var(fpuemustats).M); \ 47 + preempt_enable(); \ 48 + } while (0) 45 49 46 50 #else 47 51 #define MIPS_FPU_EMU_INC_STATS(M) do { } while (0)
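The fpu_emulator.h hunk above replaces `cpu_local_wrap()` with an explicit `preempt_disable()`/`preempt_enable()` pair, so that looking up the per-CPU statistics slot and incrementing it both happen on the same CPU. Not part of the patch — a minimal userland sketch of that pattern, with a global flag standing in for the real preemption primitive and an array index standing in for `__get_cpu_var()`:

```c
#include <assert.h>

/* Hypothetical userland model of the per-CPU counter pattern: the kernel's
 * preempt_disable()/preempt_enable() pin the task to one CPU so that the
 * per-CPU lookup and the increment hit the same CPU's counter. Here a
 * global preempt_count stands in for the real primitive. */

#define NR_CPUS 4

static unsigned long fpuemu_errors[NR_CPUS]; /* one counter per CPU */
static int preempt_count;                    /* >0: migration forbidden */
static int current_cpu;                      /* stand-in for smp_processor_id() */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

/* Analogue of MIPS_FPU_EMU_INC_STATS(M): the read of the per-CPU slot and
 * the increment are bracketed so both refer to the same CPU. */
static void fpu_emu_inc_errors(void)
{
	preempt_disable();
	fpuemu_errors[current_cpu]++;
	preempt_enable();
}

static unsigned long fpu_emu_errors_on(int cpu)
{
	return fpuemu_errors[cpu];
}
```

The old `cpu_local_wrap()` did the same bracketing internally; spelling it out in the macro removes the dependency on that helper.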
+15
arch/mips/include/asm/mach-bcm63xx/bcm63xx_cpu.h
··· 85 85 RSET_TIMER, 86 86 RSET_WDT, 87 87 RSET_UART0, 88 + RSET_UART1, 88 89 RSET_GPIO, 89 90 RSET_SPI, 90 91 RSET_UDC0, ··· 124 123 #define BCM_6338_TIMER_BASE (0xfffe0200) 125 124 #define BCM_6338_WDT_BASE (0xfffe021c) 126 125 #define BCM_6338_UART0_BASE (0xfffe0300) 126 + #define BCM_6338_UART1_BASE (0xdeadbeef) 127 127 #define BCM_6338_GPIO_BASE (0xfffe0400) 128 128 #define BCM_6338_SPI_BASE (0xfffe0c00) 129 129 #define BCM_6338_UDC0_BASE (0xdeadbeef) ··· 155 153 #define BCM_6345_TIMER_BASE (0xfffe0200) 156 154 #define BCM_6345_WDT_BASE (0xfffe021c) 157 155 #define BCM_6345_UART0_BASE (0xfffe0300) 156 + #define BCM_6345_UART1_BASE (0xdeadbeef) 158 157 #define BCM_6345_GPIO_BASE (0xfffe0400) 159 158 #define BCM_6345_SPI_BASE (0xdeadbeef) 160 159 #define BCM_6345_UDC0_BASE (0xdeadbeef) ··· 185 182 #define BCM_6348_TIMER_BASE (0xfffe0200) 186 183 #define BCM_6348_WDT_BASE (0xfffe021c) 187 184 #define BCM_6348_UART0_BASE (0xfffe0300) 185 + #define BCM_6348_UART1_BASE (0xdeadbeef) 188 186 #define BCM_6348_GPIO_BASE (0xfffe0400) 189 187 #define BCM_6348_SPI_BASE (0xfffe0c00) 190 188 #define BCM_6348_UDC0_BASE (0xfffe1000) ··· 212 208 #define BCM_6358_TIMER_BASE (0xfffe0040) 213 209 #define BCM_6358_WDT_BASE (0xfffe005c) 214 210 #define BCM_6358_UART0_BASE (0xfffe0100) 211 + #define BCM_6358_UART1_BASE (0xfffe0120) 215 212 #define BCM_6358_GPIO_BASE (0xfffe0080) 216 213 #define BCM_6358_SPI_BASE (0xdeadbeef) 217 214 #define BCM_6358_UDC0_BASE (0xfffe0800) ··· 251 246 return BCM_6338_WDT_BASE; 252 247 case RSET_UART0: 253 248 return BCM_6338_UART0_BASE; 249 + case RSET_UART1: 250 + return BCM_6338_UART1_BASE; 254 251 case RSET_GPIO: 255 252 return BCM_6338_GPIO_BASE; 256 253 case RSET_SPI: ··· 299 292 return BCM_6345_WDT_BASE; 300 293 case RSET_UART0: 301 294 return BCM_6345_UART0_BASE; 295 + case RSET_UART1: 296 + return BCM_6345_UART1_BASE; 302 297 case RSET_GPIO: 303 298 return BCM_6345_GPIO_BASE; 304 299 case RSET_SPI: ··· 347 338 return BCM_6348_WDT_BASE; 348 
339 case RSET_UART0: 349 340 return BCM_6348_UART0_BASE; 341 + case RSET_UART1: 342 + return BCM_6348_UART1_BASE; 350 343 case RSET_GPIO: 351 344 return BCM_6348_GPIO_BASE; 352 345 case RSET_SPI: ··· 395 384 return BCM_6358_WDT_BASE; 396 385 case RSET_UART0: 397 386 return BCM_6358_UART0_BASE; 387 + case RSET_UART1: 388 + return BCM_6358_UART1_BASE; 398 389 case RSET_GPIO: 399 390 return BCM_6358_GPIO_BASE; 400 391 case RSET_SPI: ··· 442 429 enum bcm63xx_irq { 443 430 IRQ_TIMER = 0, 444 431 IRQ_UART0, 432 + IRQ_UART1, 445 433 IRQ_DSL, 446 434 IRQ_ENET0, 447 435 IRQ_ENET1, ··· 524 510 */ 525 511 #define BCM_6358_TIMER_IRQ (IRQ_INTERNAL_BASE + 0) 526 512 #define BCM_6358_UART0_IRQ (IRQ_INTERNAL_BASE + 2) 513 + #define BCM_6358_UART1_IRQ (IRQ_INTERNAL_BASE + 3) 527 514 #define BCM_6358_OHCI0_IRQ (IRQ_INTERNAL_BASE + 5) 528 515 #define BCM_6358_ENET1_IRQ (IRQ_INTERNAL_BASE + 6) 529 516 #define BCM_6358_ENET0_IRQ (IRQ_INTERNAL_BASE + 8)
+6
arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_uart.h
··· 1 + #ifndef BCM63XX_DEV_UART_H_ 2 + #define BCM63XX_DEV_UART_H_ 3 + 4 + int bcm63xx_uart_register(unsigned int id); 5 + 6 + #endif /* BCM63XX_DEV_UART_H_ */
+4
arch/mips/include/asm/mach-bcm63xx/bcm63xx_gpio.h
··· 10 10 switch (bcm63xx_get_cpu_id()) { 11 11 case BCM6358_CPU_ID: 12 12 return 40; 13 + case BCM6338_CPU_ID: 14 + return 8; 15 + case BCM6345_CPU_ID: 16 + return 16; 13 17 case BCM6348_CPU_ID: 14 18 default: 15 19 return 37;
+2
arch/mips/include/asm/mach-bcm63xx/board_bcm963xx.h
··· 45 45 unsigned int has_ohci0:1; 46 46 unsigned int has_ehci0:1; 47 47 unsigned int has_dsp:1; 48 + unsigned int has_uart0:1; 49 + unsigned int has_uart1:1; 48 50 49 51 /* ethernet config */ 50 52 struct bcm63xx_enet_platform_data enet0;
+1 -1
arch/mips/include/asm/mach-bcm63xx/cpu-feature-overrides.h
··· 24 24 #define cpu_has_smartmips 0 25 25 #define cpu_has_vtag_icache 0 26 26 27 - #if !defined(BCMCPU_RUNTIME_DETECT) && (defined(CONFIG_BCMCPU_IS_6348) || defined(CONFIG_CPU_IS_6338) || defined(CONFIG_CPU_IS_BCM6345)) 27 + #if !defined(BCMCPU_RUNTIME_DETECT) && (defined(CONFIG_BCM63XX_CPU_6348) || defined(CONFIG_BCM63XX_CPU_6345) || defined(CONFIG_BCM63XX_CPU_6338)) 28 28 #define cpu_has_dc_aliases 0 29 29 #endif 30 30
+5 -1
arch/mips/include/asm/mach-sibyte/war.h
··· 16 16 #if defined(CONFIG_SB1_PASS_1_WORKAROUNDS) || \ 17 17 defined(CONFIG_SB1_PASS_2_WORKAROUNDS) 18 18 19 - #define BCM1250_M3_WAR 1 19 + #ifndef __ASSEMBLY__ 20 + extern int sb1250_m3_workaround_needed(void); 21 + #endif 22 + 23 + #define BCM1250_M3_WAR sb1250_m3_workaround_needed() 20 24 #define SIBYTE_1956_WAR 1 21 25 22 26 #else
+4 -1
arch/mips/include/asm/mmu.h
··· 1 1 #ifndef __ASM_MMU_H 2 2 #define __ASM_MMU_H 3 3 4 - typedef unsigned long mm_context_t[NR_CPUS]; 4 + typedef struct { 5 + unsigned long asid[NR_CPUS]; 6 + void *vdso; 7 + } mm_context_t; 5 8 6 9 #endif /* __ASM_MMU_H */
+1 -1
arch/mips/include/asm/mmu_context.h
··· 104 104 105 105 #endif 106 106 107 - #define cpu_context(cpu, mm) ((mm)->context[cpu]) 107 + #define cpu_context(cpu, mm) ((mm)->context.asid[cpu]) 108 108 #define cpu_asid(cpu, mm) (cpu_context((cpu), (mm)) & ASID_MASK) 109 109 #define asid_cache(cpu) (cpu_data[cpu].asid_cache) 110 110
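The mmu.h and mmu_context.h hunks above go together: `mm_context_t` grows from a bare per-CPU ASID array into a struct so a per-mm vdso pointer can ride along, and the `cpu_context()` accessor is updated to match. A miniature reproduction of both sides of that change (NR_CPUS shrunk for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the mm_context_t change: the bare ASID array becomes a struct
 * carrying a vdso pointer as well. Field names mirror the patch. */

#define NR_CPUS 2

typedef struct {
	unsigned long asid[NR_CPUS]; /* was: unsigned long mm_context_t[NR_CPUS] */
	void *vdso;                  /* new: per-mm vdso mapping address */
} mm_context_t;

struct mm_struct { mm_context_t context; };

/* accessor updated from (mm)->context[cpu] to (mm)->context.asid[cpu] */
#define cpu_context(cpu, mm) ((mm)->context.asid[cpu])
```

Existing users of `cpu_context()` compile unchanged; only the macro body needed the `.asid` step.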
+4 -2
arch/mips/include/asm/page.h
··· 188 188 #define VM_DATA_DEFAULT_FLAGS (VM_READ | VM_WRITE | VM_EXEC | \ 189 189 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC) 190 190 191 - #define UNCAC_ADDR(addr) ((addr) - PAGE_OFFSET + UNCAC_BASE) 192 - #define CAC_ADDR(addr) ((addr) - UNCAC_BASE + PAGE_OFFSET) 191 + #define UNCAC_ADDR(addr) ((addr) - PAGE_OFFSET + UNCAC_BASE + \ 192 + PHYS_OFFSET) 193 + #define CAC_ADDR(addr) ((addr) - UNCAC_BASE + PAGE_OFFSET - \ 194 + PHYS_OFFSET) 193 195 194 196 #include <asm-generic/memory_model.h> 195 197 #include <asm-generic/getorder.h>
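The page.h hunk above folds `PHYS_OFFSET` into both address translations so they remain inverses of each other on platforms where physical memory does not start at zero. A sketch of that invariant with hypothetical base constants (not taken from any real platform):

```c
#include <assert.h>

/* Sketch of the UNCAC_ADDR/CAC_ADDR fix: with a non-zero PHYS_OFFSET the
 * cached->uncached and uncached->cached translations must stay inverses.
 * All three base constants below are assumed values for illustration. */

#define PAGE_OFFSET 0x80000000UL
#define UNCAC_BASE  0xa0000000UL
#define PHYS_OFFSET 0x10000000UL /* non-zero, to show why the fix matters */

#define UNCAC_ADDR(addr) ((addr) - PAGE_OFFSET + UNCAC_BASE + PHYS_OFFSET)
#define CAC_ADDR(addr)   ((addr) - UNCAC_BASE + PAGE_OFFSET - PHYS_OFFSET)
```

Without the `PHYS_OFFSET` terms, a round trip through the two macros would be displaced by twice the offset on such platforms.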
+9 -2
arch/mips/include/asm/processor.h
··· 33 33 34 34 extern unsigned int vced_count, vcei_count; 35 35 36 + /* 37 + * A special page (the vdso) is mapped into all processes at the very 38 + * top of the virtual memory space. 39 + */ 40 + #define SPECIAL_PAGES_SIZE PAGE_SIZE 41 + 36 42 #ifdef CONFIG_32BIT 37 43 /* 38 44 * User space process size: 2GB. This is hardcoded into a few places, 39 45 * so don't change it unless you know what you are doing. 40 46 */ 41 47 #define TASK_SIZE 0x7fff8000UL 42 - #define STACK_TOP TASK_SIZE 48 + #define STACK_TOP ((TASK_SIZE & PAGE_MASK) - SPECIAL_PAGES_SIZE) 43 49 44 50 /* 45 51 * This decides where the kernel will search for a free chunk of vm ··· 65 59 #define TASK_SIZE32 0x7fff8000UL 66 60 #define TASK_SIZE 0x10000000000UL 67 61 #define STACK_TOP \ 68 - (test_thread_flag(TIF_32BIT_ADDR) ? TASK_SIZE32 : TASK_SIZE) 62 + (((test_thread_flag(TIF_32BIT_ADDR) ? \ 63 + TASK_SIZE32 : TASK_SIZE) & PAGE_MASK) - SPECIAL_PAGES_SIZE) 69 64 70 65 /* 71 66 * This decides where the kernel will search for a free chunk of vm
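The processor.h hunk above carves `SPECIAL_PAGES_SIZE` (one page, for the vdso) off the top of user space, so `STACK_TOP` now sits one page below the page-aligned task size. The arithmetic for the 32-bit case, using the constants from the patch:

```c
#include <assert.h>

/* Sketch of the 32-bit STACK_TOP computation after the vdso change:
 * one page is reserved at the very top of the user address space. */

#define PAGE_SIZE          0x1000UL
#define PAGE_MASK          (~(PAGE_SIZE - 1))
#define SPECIAL_PAGES_SIZE PAGE_SIZE
#define TASK_SIZE          0x7fff8000UL /* 32-bit value from the patch */

static unsigned long stack_top(void)
{
	return (TASK_SIZE & PAGE_MASK) - SPECIAL_PAGES_SIZE;
}
```

The 64-bit variant in the hunk applies the same mask-and-subtract to whichever of `TASK_SIZE32`/`TASK_SIZE` the thread's address-space flag selects.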
+19
arch/mips/include/asm/stackframe.h
··· 121 121 .endm 122 122 #else 123 123 .macro get_saved_sp /* Uniprocessor variation */ 124 + #ifdef CONFIG_CPU_LOONGSON2F 125 + /* 126 + * Clear BTB (branch target buffer), forbid RAS (return address 127 + * stack) to workaround the Out-of-order Issue in Loongson2F 128 + * via its diagnostic register. 129 + */ 130 + move k0, ra 131 + jal 1f 132 + nop 133 + 1: jal 1f 134 + nop 135 + 1: jal 1f 136 + nop 137 + 1: jal 1f 138 + nop 139 + 1: move ra, k0 140 + li k0, 3 141 + mtc0 k0, $22 142 + #endif /* CONFIG_CPU_LOONGSON2F */ 124 143 #if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32) 125 144 lui k1, %hi(kernelsp) 126 145 #else
+2
arch/mips/include/asm/uasm.h
··· 84 84 Ip_u1u2u3(_mfc0); 85 85 Ip_u1u2u3(_mtc0); 86 86 Ip_u2u1u3(_ori); 87 + Ip_u3u1u2(_or); 87 88 Ip_u2s3u1(_pref); 88 89 Ip_0(_rfe); 89 90 Ip_u2s3u1(_sc); ··· 103 102 Ip_u3u1u2(_xor); 104 103 Ip_u2u1u3(_xori); 105 104 Ip_u2u1msbu3(_dins); 105 + Ip_u1(_syscall); 106 106 107 107 /* Handle labels. */ 108 108 struct uasm_label {
+29
arch/mips/include/asm/vdso.h
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (C) 2009 Cavium Networks 7 + */ 8 + 9 + #ifndef __ASM_VDSO_H 10 + #define __ASM_VDSO_H 11 + 12 + #include <linux/types.h> 13 + 14 + 15 + #ifdef CONFIG_32BIT 16 + struct mips_vdso { 17 + u32 signal_trampoline[2]; 18 + u32 rt_signal_trampoline[2]; 19 + }; 20 + #else /* !CONFIG_32BIT */ 21 + struct mips_vdso { 22 + u32 o32_signal_trampoline[2]; 23 + u32 o32_rt_signal_trampoline[2]; 24 + u32 rt_signal_trampoline[2]; 25 + u32 n32_rt_signal_trampoline[2]; 26 + }; 27 + #endif /* CONFIG_32BIT */ 28 + 29 + #endif /* __ASM_VDSO_H */
+1 -1
arch/mips/kernel/Makefile
··· 6 6 7 7 obj-y += cpu-probe.o branch.o entry.o genex.o irq.o process.o \ 8 8 ptrace.o reset.o setup.o signal.o syscall.o \ 9 - time.o topology.o traps.o unaligned.o watch.o 9 + time.o topology.o traps.o unaligned.o watch.o vdso.o 10 10 11 11 ifdef CONFIG_FUNCTION_TRACER 12 12 CFLAGS_REMOVE_ftrace.o = -pg
+4
arch/mips/kernel/cpufreq/loongson2_clock.c
··· 164 164 spin_unlock_irqrestore(&loongson2_wait_lock, flags); 165 165 } 166 166 EXPORT_SYMBOL_GPL(loongson2_cpu_wait); 167 + 168 + MODULE_AUTHOR("Yanhua <yanh@lemote.com>"); 169 + MODULE_DESCRIPTION("cpufreq driver for Loongson 2F"); 170 + MODULE_LICENSE("GPL");
+6 -1
arch/mips/kernel/process.c
··· 63 63 64 64 smtc_idle_loop_hook(); 65 65 #endif 66 - if (cpu_wait) 66 + 67 + if (cpu_wait) { 68 + /* Don't trace irqs off for idle */ 69 + stop_critical_timings(); 67 70 (*cpu_wait)(); 71 + start_critical_timings(); 72 + } 68 73 } 69 74 #ifdef CONFIG_HOTPLUG_CPU 70 75 if (!cpu_online(cpu) && !cpu_isset(cpu, cpu_callin_map) &&
-5
arch/mips/kernel/signal-common.h
··· 26 26 */ 27 27 extern void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, 28 28 size_t frame_size); 29 - /* 30 - * install trampoline code to get back from the sig handler 31 - */ 32 - extern int install_sigtramp(unsigned int __user *tramp, unsigned int syscall); 33 - 34 29 /* Check and clear pending FPU exceptions in saved CSR */ 35 30 extern int fpcsr_pending(unsigned int __user *fpcsr); 36 31
+19 -67
arch/mips/kernel/signal.c
··· 32 32 #include <asm/ucontext.h> 33 33 #include <asm/cpu-features.h> 34 34 #include <asm/war.h> 35 + #include <asm/vdso.h> 35 36 36 37 #include "signal-common.h" 37 38 ··· 45 44 extern asmlinkage int fpu_emulator_save_context(struct sigcontext __user *sc); 46 45 extern asmlinkage int fpu_emulator_restore_context(struct sigcontext __user *sc); 47 46 48 - /* 49 - * Horribly complicated - with the bloody RM9000 workarounds enabled 50 - * the signal trampolines is moving to the end of the structure so we can 51 - * increase the alignment without breaking software compatibility. 52 - */ 53 - #if ICACHE_REFILLS_WORKAROUND_WAR == 0 54 - 55 47 struct sigframe { 56 48 u32 sf_ass[4]; /* argument save space for o32 */ 57 - u32 sf_code[2]; /* signal trampoline */ 49 + u32 sf_pad[2]; /* Was: signal trampoline */ 58 50 struct sigcontext sf_sc; 59 51 sigset_t sf_mask; 60 52 }; 61 53 62 54 struct rt_sigframe { 63 55 u32 rs_ass[4]; /* argument save space for o32 */ 64 - u32 rs_code[2]; /* signal trampoline */ 56 + u32 rs_pad[2]; /* Was: signal trampoline */ 65 57 struct siginfo rs_info; 66 58 struct ucontext rs_uc; 67 59 }; 68 - 69 - #else 70 - 71 - struct sigframe { 72 - u32 sf_ass[4]; /* argument save space for o32 */ 73 - u32 sf_pad[2]; 74 - struct sigcontext sf_sc; /* hw context */ 75 - sigset_t sf_mask; 76 - u32 sf_code[8] ____cacheline_aligned; /* signal trampoline */ 77 - }; 78 - 79 - struct rt_sigframe { 80 - u32 rs_ass[4]; /* argument save space for o32 */ 81 - u32 rs_pad[2]; 82 - struct siginfo rs_info; 83 - struct ucontext rs_uc; 84 - u32 rs_code[8] ____cacheline_aligned; /* signal trampoline */ 85 - }; 86 - 87 - #endif 88 60 89 61 /* 90 62 * Helper routines ··· 238 264 sp = current->sas_ss_sp + current->sas_ss_size; 239 265 240 266 return (void __user *)((sp - frame_size) & (ICACHE_REFILLS_WORKAROUND_WAR ? 
~(cpu_icache_line_size()-1) : ALMASK)); 241 - } 242 - 243 - int install_sigtramp(unsigned int __user *tramp, unsigned int syscall) 244 - { 245 - int err; 246 - 247 - /* 248 - * Set up the return code ... 249 - * 250 - * li v0, __NR__foo_sigreturn 251 - * syscall 252 - */ 253 - 254 - err = __put_user(0x24020000 + syscall, tramp + 0); 255 - err |= __put_user(0x0000000c , tramp + 1); 256 - if (ICACHE_REFILLS_WORKAROUND_WAR) { 257 - err |= __put_user(0, tramp + 2); 258 - err |= __put_user(0, tramp + 3); 259 - err |= __put_user(0, tramp + 4); 260 - err |= __put_user(0, tramp + 5); 261 - err |= __put_user(0, tramp + 6); 262 - err |= __put_user(0, tramp + 7); 263 - } 264 - flush_cache_sigtramp((unsigned long) tramp); 265 - 266 - return err; 267 267 } 268 268 269 269 /* ··· 432 484 } 433 485 434 486 #ifdef CONFIG_TRAD_SIGNALS 435 - static int setup_frame(struct k_sigaction * ka, struct pt_regs *regs, 436 - int signr, sigset_t *set) 487 + static int setup_frame(void *sig_return, struct k_sigaction *ka, 488 + struct pt_regs *regs, int signr, sigset_t *set) 437 489 { 438 490 struct sigframe __user *frame; 439 491 int err = 0; ··· 441 493 frame = get_sigframe(ka, regs, sizeof(*frame)); 442 494 if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame))) 443 495 goto give_sigsegv; 444 - 445 - err |= install_sigtramp(frame->sf_code, __NR_sigreturn); 446 496 447 497 err |= setup_sigcontext(regs, &frame->sf_sc); 448 498 err |= __copy_to_user(&frame->sf_mask, set, sizeof(*set)); ··· 461 515 regs->regs[ 5] = 0; 462 516 regs->regs[ 6] = (unsigned long) &frame->sf_sc; 463 517 regs->regs[29] = (unsigned long) frame; 464 - regs->regs[31] = (unsigned long) frame->sf_code; 518 + regs->regs[31] = (unsigned long) sig_return; 465 519 regs->cp0_epc = regs->regs[25] = (unsigned long) ka->sa.sa_handler; 466 520 467 521 DEBUGP("SIG deliver (%s:%d): sp=0x%p pc=0x%lx ra=0x%lx\n", ··· 475 529 } 476 530 #endif 477 531 478 - static int setup_rt_frame(struct k_sigaction * ka, struct pt_regs *regs, 479 - int 
signr, sigset_t *set, siginfo_t *info) 532 + static int setup_rt_frame(void *sig_return, struct k_sigaction *ka, 533 + struct pt_regs *regs, int signr, sigset_t *set, 534 + siginfo_t *info) 480 535 { 481 536 struct rt_sigframe __user *frame; 482 537 int err = 0; ··· 485 538 frame = get_sigframe(ka, regs, sizeof(*frame)); 486 539 if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame))) 487 540 goto give_sigsegv; 488 - 489 - err |= install_sigtramp(frame->rs_code, __NR_rt_sigreturn); 490 541 491 542 /* Create siginfo. */ 492 543 err |= copy_siginfo_to_user(&frame->rs_info, info); ··· 518 573 regs->regs[ 5] = (unsigned long) &frame->rs_info; 519 574 regs->regs[ 6] = (unsigned long) &frame->rs_uc; 520 575 regs->regs[29] = (unsigned long) frame; 521 - regs->regs[31] = (unsigned long) frame->rs_code; 576 + regs->regs[31] = (unsigned long) sig_return; 522 577 regs->cp0_epc = regs->regs[25] = (unsigned long) ka->sa.sa_handler; 523 578 524 579 DEBUGP("SIG deliver (%s:%d): sp=0x%p pc=0x%lx ra=0x%lx\n", ··· 535 590 struct mips_abi mips_abi = { 536 591 #ifdef CONFIG_TRAD_SIGNALS 537 592 .setup_frame = setup_frame, 593 + .signal_return_offset = offsetof(struct mips_vdso, signal_trampoline), 538 594 #endif 539 595 .setup_rt_frame = setup_rt_frame, 596 + .rt_signal_return_offset = 597 + offsetof(struct mips_vdso, rt_signal_trampoline), 540 598 .restart = __NR_restart_syscall 541 599 }; 542 600 ··· 547 599 struct k_sigaction *ka, sigset_t *oldset, struct pt_regs *regs) 548 600 { 549 601 int ret; 602 + struct mips_abi *abi = current->thread.abi; 603 + void *vdso = current->mm->context.vdso; 550 604 551 605 switch(regs->regs[0]) { 552 606 case ERESTART_RESTARTBLOCK: ··· 569 619 regs->regs[0] = 0; /* Don't deal with this again. 
*/ 570 620 571 621 if (sig_uses_siginfo(ka)) 572 - ret = current->thread.abi->setup_rt_frame(ka, regs, sig, oldset, info); 622 + ret = abi->setup_rt_frame(vdso + abi->rt_signal_return_offset, 623 + ka, regs, sig, oldset, info); 573 624 else 574 - ret = current->thread.abi->setup_frame(ka, regs, sig, oldset); 625 + ret = abi->setup_frame(vdso + abi->signal_return_offset, 626 + ka, regs, sig, oldset); 575 627 576 628 spin_lock_irq(&current->sighand->siglock); 577 629 sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+14 -41
arch/mips/kernel/signal32.c
··· 32 32 #include <asm/system.h> 33 33 #include <asm/fpu.h> 34 34 #include <asm/war.h> 35 + #include <asm/vdso.h> 35 36 36 37 #include "signal-common.h" 37 38 ··· 48 47 /* 49 48 * Including <asm/unistd.h> would give use the 64-bit syscall numbers ... 50 49 */ 51 - #define __NR_O32_sigreturn 4119 52 - #define __NR_O32_rt_sigreturn 4193 53 50 #define __NR_O32_restart_syscall 4253 54 51 55 52 /* 32-bit compatibility types */ ··· 76 77 compat_sigset_t uc_sigmask; /* mask last for extensibility */ 77 78 }; 78 79 79 - /* 80 - * Horribly complicated - with the bloody RM9000 workarounds enabled 81 - * the signal trampolines is moving to the end of the structure so we can 82 - * increase the alignment without breaking software compatibility. 83 - */ 84 - #if ICACHE_REFILLS_WORKAROUND_WAR == 0 85 - 86 80 struct sigframe32 { 87 81 u32 sf_ass[4]; /* argument save space for o32 */ 88 - u32 sf_code[2]; /* signal trampoline */ 82 + u32 sf_pad[2]; /* Was: signal trampoline */ 89 83 struct sigcontext32 sf_sc; 90 84 compat_sigset_t sf_mask; 91 85 }; 92 86 93 87 struct rt_sigframe32 { 94 88 u32 rs_ass[4]; /* argument save space for o32 */ 95 - u32 rs_code[2]; /* signal trampoline */ 89 + u32 rs_pad[2]; /* Was: signal trampoline */ 96 90 compat_siginfo_t rs_info; 97 91 struct ucontext32 rs_uc; 98 92 }; 99 - 100 - #else /* ICACHE_REFILLS_WORKAROUND_WAR */ 101 - 102 - struct sigframe32 { 103 - u32 sf_ass[4]; /* argument save space for o32 */ 104 - u32 sf_pad[2]; 105 - struct sigcontext32 sf_sc; /* hw context */ 106 - compat_sigset_t sf_mask; 107 - u32 sf_code[8] ____cacheline_aligned; /* signal trampoline */ 108 - }; 109 - 110 - struct rt_sigframe32 { 111 - u32 rs_ass[4]; /* argument save space for o32 */ 112 - u32 rs_pad[2]; 113 - compat_siginfo_t rs_info; 114 - struct ucontext32 rs_uc; 115 - u32 rs_code[8] __attribute__((aligned(32))); /* signal trampoline */ 116 - }; 117 - 118 - #endif /* !ICACHE_REFILLS_WORKAROUND_WAR */ 119 93 120 94 /* 121 95 * sigcontext handlers ··· 570 598 
force_sig(SIGSEGV, current); 571 599 } 572 600 573 - static int setup_frame_32(struct k_sigaction * ka, struct pt_regs *regs, 574 - int signr, sigset_t *set) 601 + static int setup_frame_32(void *sig_return, struct k_sigaction *ka, 602 + struct pt_regs *regs, int signr, sigset_t *set) 575 603 { 576 604 struct sigframe32 __user *frame; 577 605 int err = 0; ··· 579 607 frame = get_sigframe(ka, regs, sizeof(*frame)); 580 608 if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame))) 581 609 goto give_sigsegv; 582 - 583 - err |= install_sigtramp(frame->sf_code, __NR_O32_sigreturn); 584 610 585 611 err |= setup_sigcontext32(regs, &frame->sf_sc); 586 612 err |= __copy_conv_sigset_to_user(&frame->sf_mask, set); ··· 600 630 regs->regs[ 5] = 0; 601 631 regs->regs[ 6] = (unsigned long) &frame->sf_sc; 602 632 regs->regs[29] = (unsigned long) frame; 603 - regs->regs[31] = (unsigned long) frame->sf_code; 633 + regs->regs[31] = (unsigned long) sig_return; 604 634 regs->cp0_epc = regs->regs[25] = (unsigned long) ka->sa.sa_handler; 605 635 606 636 DEBUGP("SIG deliver (%s:%d): sp=0x%p pc=0x%lx ra=0x%lx\n", ··· 614 644 return -EFAULT; 615 645 } 616 646 617 - static int setup_rt_frame_32(struct k_sigaction * ka, struct pt_regs *regs, 618 - int signr, sigset_t *set, siginfo_t *info) 647 + static int setup_rt_frame_32(void *sig_return, struct k_sigaction *ka, 648 + struct pt_regs *regs, int signr, sigset_t *set, 649 + siginfo_t *info) 619 650 { 620 651 struct rt_sigframe32 __user *frame; 621 652 int err = 0; ··· 625 654 frame = get_sigframe(ka, regs, sizeof(*frame)); 626 655 if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame))) 627 656 goto give_sigsegv; 628 - 629 - err |= install_sigtramp(frame->rs_code, __NR_O32_rt_sigreturn); 630 657 631 658 /* Convert (siginfo_t -> compat_siginfo_t) and copy to user. 
*/ 632 659 err |= copy_siginfo_to_user32(&frame->rs_info, info); ··· 659 690 regs->regs[ 5] = (unsigned long) &frame->rs_info; 660 691 regs->regs[ 6] = (unsigned long) &frame->rs_uc; 661 692 regs->regs[29] = (unsigned long) frame; 662 - regs->regs[31] = (unsigned long) frame->rs_code; 693 + regs->regs[31] = (unsigned long) sig_return; 663 694 regs->cp0_epc = regs->regs[25] = (unsigned long) ka->sa.sa_handler; 664 695 665 696 DEBUGP("SIG deliver (%s:%d): sp=0x%p pc=0x%lx ra=0x%lx\n", ··· 678 709 */ 679 710 struct mips_abi mips_abi_32 = { 680 711 .setup_frame = setup_frame_32, 712 + .signal_return_offset = 713 + offsetof(struct mips_vdso, o32_signal_trampoline), 681 714 .setup_rt_frame = setup_rt_frame_32, 715 + .rt_signal_return_offset = 716 + offsetof(struct mips_vdso, o32_rt_signal_trampoline), 682 717 .restart = __NR_O32_restart_syscall 683 718 }; 684 719
+6 -20
arch/mips/kernel/signal_n32.c
··· 39 39 #include <asm/fpu.h> 40 40 #include <asm/cpu-features.h> 41 41 #include <asm/war.h> 42 + #include <asm/vdso.h> 42 43 43 44 #include "signal-common.h" 44 45 45 46 /* 46 47 * Including <asm/unistd.h> would give use the 64-bit syscall numbers ... 47 48 */ 48 - #define __NR_N32_rt_sigreturn 6211 49 49 #define __NR_N32_restart_syscall 6214 50 50 51 51 extern int setup_sigcontext(struct pt_regs *, struct sigcontext __user *); ··· 67 67 compat_sigset_t uc_sigmask; /* mask last for extensibility */ 68 68 }; 69 69 70 - #if ICACHE_REFILLS_WORKAROUND_WAR == 0 71 - 72 70 struct rt_sigframe_n32 { 73 71 u32 rs_ass[4]; /* argument save space for o32 */ 74 - u32 rs_code[2]; /* signal trampoline */ 72 + u32 rs_pad[2]; /* Was: signal trampoline */ 75 73 struct compat_siginfo rs_info; 76 74 struct ucontextn32 rs_uc; 77 75 }; 78 - 79 - #else /* ICACHE_REFILLS_WORKAROUND_WAR */ 80 - 81 - struct rt_sigframe_n32 { 82 - u32 rs_ass[4]; /* argument save space for o32 */ 83 - u32 rs_pad[2]; 84 - struct compat_siginfo rs_info; 85 - struct ucontextn32 rs_uc; 86 - u32 rs_code[8] ____cacheline_aligned; /* signal trampoline */ 87 - }; 88 - 89 - #endif /* !ICACHE_REFILLS_WORKAROUND_WAR */ 90 76 91 77 extern void sigset_from_compat(sigset_t *set, compat_sigset_t *compat); 92 78 ··· 159 173 force_sig(SIGSEGV, current); 160 174 } 161 175 162 - static int setup_rt_frame_n32(struct k_sigaction * ka, 176 + static int setup_rt_frame_n32(void *sig_return, struct k_sigaction *ka, 163 177 struct pt_regs *regs, int signr, sigset_t *set, siginfo_t *info) 164 178 { 165 179 struct rt_sigframe_n32 __user *frame; ··· 169 183 frame = get_sigframe(ka, regs, sizeof(*frame)); 170 184 if (!access_ok(VERIFY_WRITE, frame, sizeof (*frame))) 171 185 goto give_sigsegv; 172 - 173 - install_sigtramp(frame->rs_code, __NR_N32_rt_sigreturn); 174 186 175 187 /* Create siginfo. 
*/ 176 188 err |= copy_siginfo_to_user32(&frame->rs_info, info); ··· 203 219 regs->regs[ 5] = (unsigned long) &frame->rs_info; 204 220 regs->regs[ 6] = (unsigned long) &frame->rs_uc; 205 221 regs->regs[29] = (unsigned long) frame; 206 - regs->regs[31] = (unsigned long) frame->rs_code; 222 + regs->regs[31] = (unsigned long) sig_return; 207 223 regs->cp0_epc = regs->regs[25] = (unsigned long) ka->sa.sa_handler; 208 224 209 225 DEBUGP("SIG deliver (%s:%d): sp=0x%p pc=0x%lx ra=0x%lx\n", ··· 219 235 220 236 struct mips_abi mips_abi_n32 = { 221 237 .setup_rt_frame = setup_rt_frame_n32, 238 + .rt_signal_return_offset = 239 + offsetof(struct mips_vdso, n32_rt_signal_trampoline), 222 240 .restart = __NR_N32_restart_syscall 223 241 };
+1 -1
arch/mips/kernel/smtc.c
··· 182 182 {0, 0, 0, 0, 0, 0, 0, 1} 183 183 }; 184 184 int tcnoprog[NR_CPUS]; 185 - static atomic_t idle_hook_initialized = {0}; 185 + static atomic_t idle_hook_initialized = ATOMIC_INIT(0); 186 186 static int clock_hang_reported[NR_CPUS]; 187 187 188 188 #endif /* CONFIG_SMTC_IDLE_HOOK_DEBUG */
+5 -1
arch/mips/kernel/syscall.c
··· 79 79 int do_color_align; 80 80 unsigned long task_size; 81 81 82 - task_size = STACK_TOP; 82 + #ifdef CONFIG_32BIT 83 + task_size = TASK_SIZE; 84 + #else /* Must be CONFIG_64BIT*/ 85 + task_size = test_thread_flag(TIF_32BIT_ADDR) ? TASK_SIZE32 : TASK_SIZE; 86 + #endif 83 87 84 88 if (len > task_size) 85 89 return -ENOMEM;
+1 -1
arch/mips/kernel/traps.c
··· 1599 1599 ebase = (unsigned long) 1600 1600 __alloc_bootmem(size, 1 << fls(size), 0); 1601 1601 } else { 1602 - ebase = CAC_BASE; 1602 + ebase = CKSEG0; 1603 1603 if (cpu_has_mips_r2) 1604 1604 ebase += (read_c0_ebase() & 0x3ffff000); 1605 1605 }
+112
arch/mips/kernel/vdso.c
··· 1 + /* 2 + * This file is subject to the terms and conditions of the GNU General Public 3 + * License. See the file "COPYING" in the main directory of this archive 4 + * for more details. 5 + * 6 + * Copyright (C) 2009, 2010 Cavium Networks, Inc. 7 + */ 8 + 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/err.h> 12 + #include <linux/sched.h> 13 + #include <linux/mm.h> 14 + #include <linux/init.h> 15 + #include <linux/binfmts.h> 16 + #include <linux/elf.h> 17 + #include <linux/vmalloc.h> 18 + #include <linux/unistd.h> 19 + 20 + #include <asm/vdso.h> 21 + #include <asm/uasm.h> 22 + 23 + /* 24 + * Including <asm/unistd.h> would give use the 64-bit syscall numbers ... 25 + */ 26 + #define __NR_O32_sigreturn 4119 27 + #define __NR_O32_rt_sigreturn 4193 28 + #define __NR_N32_rt_sigreturn 6211 29 + 30 + static struct page *vdso_page; 31 + 32 + static void __init install_trampoline(u32 *tramp, unsigned int sigreturn) 33 + { 34 + uasm_i_addiu(&tramp, 2, 0, sigreturn); /* li v0, sigreturn */ 35 + uasm_i_syscall(&tramp, 0); 36 + } 37 + 38 + static int __init init_vdso(void) 39 + { 40 + struct mips_vdso *vdso; 41 + 42 + vdso_page = alloc_page(GFP_KERNEL); 43 + if (!vdso_page) 44 + panic("Cannot allocate vdso"); 45 + 46 + vdso = vmap(&vdso_page, 1, 0, PAGE_KERNEL); 47 + if (!vdso) 48 + panic("Cannot map vdso"); 49 + clear_page(vdso); 50 + 51 + install_trampoline(vdso->rt_signal_trampoline, __NR_rt_sigreturn); 52 + #ifdef CONFIG_32BIT 53 + install_trampoline(vdso->signal_trampoline, __NR_sigreturn); 54 + #else 55 + install_trampoline(vdso->n32_rt_signal_trampoline, 56 + __NR_N32_rt_sigreturn); 57 + install_trampoline(vdso->o32_signal_trampoline, __NR_O32_sigreturn); 58 + install_trampoline(vdso->o32_rt_signal_trampoline, 59 + __NR_O32_rt_sigreturn); 60 + #endif 61 + 62 + vunmap(vdso); 63 + 64 + pr_notice("init_vdso successfull\n"); 65 + 66 + return 0; 67 + } 68 + device_initcall(init_vdso); 69 + 70 + static unsigned long vdso_addr(unsigned long start) 71 + { 72 + 
return STACK_TOP; 73 + } 74 + 75 + int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) 76 + { 77 + int ret; 78 + unsigned long addr; 79 + struct mm_struct *mm = current->mm; 80 + 81 + down_write(&mm->mmap_sem); 82 + 83 + addr = vdso_addr(mm->start_stack); 84 + 85 + addr = get_unmapped_area(NULL, addr, PAGE_SIZE, 0, 0); 86 + if (IS_ERR_VALUE(addr)) { 87 + ret = addr; 88 + goto up_fail; 89 + } 90 + 91 + ret = install_special_mapping(mm, addr, PAGE_SIZE, 92 + VM_READ|VM_EXEC| 93 + VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC| 94 + VM_ALWAYSDUMP, 95 + &vdso_page); 96 + 97 + if (ret) 98 + goto up_fail; 99 + 100 + mm->context.vdso = (void *)addr; 101 + 102 + up_fail: 103 + up_write(&mm->mmap_sem); 104 + return ret; 105 + } 106 + 107 + const char *arch_vma_name(struct vm_area_struct *vma) 108 + { 109 + if (vma->vm_mm && vma->vm_start == (long)vma->vm_mm->context.vdso) 110 + return "[vdso]"; 111 + return NULL; 112 + }
+2 -2
arch/mips/lib/delay.c
··· 41 41 42 42 void __udelay(unsigned long us) 43 43 { 44 - unsigned int lpj = current_cpu_data.udelay_val; 44 + unsigned int lpj = raw_current_cpu_data.udelay_val; 45 45 46 46 __delay((us * 0x000010c7ull * HZ * lpj) >> 32); 47 47 } ··· 49 49 50 50 void __ndelay(unsigned long ns) 51 51 { 52 - unsigned int lpj = current_cpu_data.udelay_val; 52 + unsigned int lpj = raw_current_cpu_data.udelay_val; 53 53 54 54 __delay((ns * 0x00000005ull * HZ * lpj) >> 32); 55 55 }
+1 -2
arch/mips/lib/libgcc.h
··· 17 17 #error I feel sick. 18 18 #endif 19 19 20 - typedef union 21 - { 20 + typedef union { 22 21 struct DWstruct s; 23 22 long long ll; 24 23 } DWunion;
+1 -1
arch/mips/mm/cache.c
··· 133 133 } 134 134 135 135 unsigned long _page_cachable_default; 136 - EXPORT_SYMBOL_GPL(_page_cachable_default); 136 + EXPORT_SYMBOL(_page_cachable_default); 137 137 138 138 static inline void setup_protection_map(void) 139 139 {
+16 -6
arch/mips/mm/tlbex.c
··· 788 788 * create the plain linear handler 789 789 */ 790 790 if (bcm1250_m3_war()) { 791 - UASM_i_MFC0(&p, K0, C0_BADVADDR); 792 - UASM_i_MFC0(&p, K1, C0_ENTRYHI); 791 + unsigned int segbits = 44; 792 + 793 + uasm_i_dmfc0(&p, K0, C0_BADVADDR); 794 + uasm_i_dmfc0(&p, K1, C0_ENTRYHI); 793 795 uasm_i_xor(&p, K0, K0, K1); 794 - UASM_i_SRL(&p, K0, K0, PAGE_SHIFT + 1); 796 + uasm_i_dsrl32(&p, K1, K0, 62 - 32); 797 + uasm_i_dsrl(&p, K0, K0, 12 + 1); 798 + uasm_i_dsll32(&p, K0, K0, 64 + 12 + 1 - segbits - 32); 799 + uasm_i_or(&p, K0, K0, K1); 795 800 uasm_il_bnez(&p, &r, K0, label_leave); 796 801 /* No need for uasm_i_nop */ 797 802 } ··· 1317 1312 memset(relocs, 0, sizeof(relocs)); 1318 1313 1319 1314 if (bcm1250_m3_war()) { 1320 - UASM_i_MFC0(&p, K0, C0_BADVADDR); 1321 - UASM_i_MFC0(&p, K1, C0_ENTRYHI); 1315 + unsigned int segbits = 44; 1316 + 1317 + uasm_i_dmfc0(&p, K0, C0_BADVADDR); 1318 + uasm_i_dmfc0(&p, K1, C0_ENTRYHI); 1322 1319 uasm_i_xor(&p, K0, K0, K1); 1323 - UASM_i_SRL(&p, K0, K0, PAGE_SHIFT + 1); 1320 + uasm_i_dsrl32(&p, K1, K0, 62 - 32); 1321 + uasm_i_dsrl(&p, K0, K0, 12 + 1); 1322 + uasm_i_dsll32(&p, K0, K0, 64 + 12 + 1 - segbits - 32); 1323 + uasm_i_or(&p, K0, K0, K1); 1324 1324 uasm_il_bnez(&p, &r, K0, label_leave); 1325 1325 /* No need for uasm_i_nop */ 1326 1326 }
+20 -3
arch/mips/mm/uasm.c
··· 31 31 BIMM = 0x040, 32 32 JIMM = 0x080, 33 33 FUNC = 0x100, 34 - SET = 0x200 34 + SET = 0x200, 35 + SCIMM = 0x400 35 36 }; 36 37 37 38 #define OP_MASK 0x3f ··· 53 52 #define FUNC_SH 0 54 53 #define SET_MASK 0x7 55 54 #define SET_SH 0 55 + #define SCIMM_MASK 0xfffff 56 + #define SCIMM_SH 6 56 57 57 58 enum opcode { 58 59 insn_invalid, ··· 64 61 insn_dmtc0, insn_dsll, insn_dsll32, insn_dsra, insn_dsrl, 65 62 insn_dsrl32, insn_drotr, insn_dsubu, insn_eret, insn_j, insn_jal, 66 63 insn_jr, insn_ld, insn_ll, insn_lld, insn_lui, insn_lw, insn_mfc0, 67 - insn_mtc0, insn_ori, insn_pref, insn_rfe, insn_sc, insn_scd, 64 + insn_mtc0, insn_or, insn_ori, insn_pref, insn_rfe, insn_sc, insn_scd, 68 65 insn_sd, insn_sll, insn_sra, insn_srl, insn_rotr, insn_subu, insn_sw, 69 66 insn_tlbp, insn_tlbr, insn_tlbwi, insn_tlbwr, insn_xor, insn_xori, 70 - insn_dins 67 + insn_dins, insn_syscall 71 68 }; 72 69 73 70 struct insn { ··· 120 117 { insn_lw, M(lw_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 121 118 { insn_mfc0, M(cop0_op, mfc_op, 0, 0, 0, 0), RT | RD | SET}, 122 119 { insn_mtc0, M(cop0_op, mtc_op, 0, 0, 0, 0), RT | RD | SET}, 120 + { insn_or, M(spec_op, 0, 0, 0, 0, or_op), RS | RT | RD }, 123 121 { insn_ori, M(ori_op, 0, 0, 0, 0, 0), RS | RT | UIMM }, 124 122 { insn_pref, M(pref_op, 0, 0, 0, 0, 0), RS | RT | SIMM }, 125 123 { insn_rfe, M(cop0_op, cop_op, 0, 0, 0, rfe_op), 0 }, ··· 140 136 { insn_xor, M(spec_op, 0, 0, 0, 0, xor_op), RS | RT | RD }, 141 137 { insn_xori, M(xori_op, 0, 0, 0, 0, 0), RS | RT | UIMM }, 142 138 { insn_dins, M(spec3_op, 0, 0, 0, 0, dins_op), RS | RT | RD | RE }, 139 + { insn_syscall, M(spec_op, 0, 0, 0, 0, syscall_op), SCIMM}, 143 140 { insn_invalid, 0, 0 } 144 141 }; 145 142 ··· 213 208 return (arg >> 2) & JIMM_MASK; 214 209 } 215 210 211 + static inline __cpuinit u32 build_scimm(u32 arg) 212 + { 213 + if (arg & ~SCIMM_MASK) 214 + printk(KERN_WARNING "Micro-assembler field overflow\n"); 215 + 216 + return (arg & SCIMM_MASK) << SCIMM_SH; 217 + } 218 + 216 
219 static inline __cpuinit u32 build_func(u32 arg) 217 220 { 218 221 if (arg & ~FUNC_MASK) ··· 279 266 op |= build_func(va_arg(ap, u32)); 280 267 if (ip->fields & SET) 281 268 op |= build_set(va_arg(ap, u32)); 269 + if (ip->fields & SCIMM) 270 + op |= build_scimm(va_arg(ap, u32)); 282 271 va_end(ap); 283 272 284 273 **buf = op; ··· 388 373 I_u1u2u3(_mfc0) 389 374 I_u1u2u3(_mtc0) 390 375 I_u2u1u3(_ori) 376 + I_u3u1u2(_or) 391 377 I_u2s3u1(_pref) 392 378 I_0(_rfe) 393 379 I_u2s3u1(_sc) ··· 407 391 I_u3u1u2(_xor) 408 392 I_u2u1u3(_xori) 409 393 I_u2u1msbu3(_dins); 394 + I_u1(_syscall); 410 395 411 396 /* Handle labels. */ 412 397 void __cpuinit uasm_build_label(struct uasm_label **lab, u32 *addr, int lid)
+10
arch/mips/pci/ops-loongson2.c
··· 180 180 }; 181 181 182 182 #ifdef CONFIG_CS5536 183 + DEFINE_RAW_SPINLOCK(msr_lock); 184 + 183 185 void _rdmsr(u32 msr, u32 *hi, u32 *lo) 184 186 { 185 187 struct pci_bus bus = { 186 188 .number = PCI_BUS_CS5536 187 189 }; 188 190 u32 devfn = PCI_DEVFN(PCI_IDSEL_CS5536, 0); 191 + unsigned long flags; 192 + 193 + raw_spin_lock_irqsave(&msr_lock, flags); 189 194 loongson_pcibios_write(&bus, devfn, PCI_MSR_ADDR, 4, msr); 190 195 loongson_pcibios_read(&bus, devfn, PCI_MSR_DATA_LO, 4, lo); 191 196 loongson_pcibios_read(&bus, devfn, PCI_MSR_DATA_HI, 4, hi); 197 + raw_spin_unlock_irqrestore(&msr_lock, flags); 192 198 } 193 199 EXPORT_SYMBOL(_rdmsr); 194 200 ··· 204 198 .number = PCI_BUS_CS5536 205 199 }; 206 200 u32 devfn = PCI_DEVFN(PCI_IDSEL_CS5536, 0); 201 + unsigned long flags; 202 + 203 + raw_spin_lock_irqsave(&msr_lock, flags); 207 204 loongson_pcibios_write(&bus, devfn, PCI_MSR_ADDR, 4, msr); 208 205 loongson_pcibios_write(&bus, devfn, PCI_MSR_DATA_LO, 4, lo); 209 206 loongson_pcibios_write(&bus, devfn, PCI_MSR_DATA_HI, 4, hi); 207 + raw_spin_unlock_irqrestore(&msr_lock, flags); 210 208 } 211 209 EXPORT_SYMBOL(_wrmsr); 212 210 #endif
+15
arch/mips/sibyte/sb1250/setup.c
··· 87 87 return ret; 88 88 } 89 89 90 + int sb1250_m3_workaround_needed(void) 91 + { 92 + switch (soc_type) { 93 + case K_SYS_SOC_TYPE_BCM1250: 94 + case K_SYS_SOC_TYPE_BCM1250_ALT: 95 + case K_SYS_SOC_TYPE_BCM1250_ALT2: 96 + case K_SYS_SOC_TYPE_BCM1125: 97 + case K_SYS_SOC_TYPE_BCM1125H: 98 + return soc_pass < K_SYS_REVISION_BCM1250_C0; 99 + 100 + default: 101 + return 0; 102 + } 103 + } 104 + 90 105 static int __init setup_bcm112x(void) 91 106 { 92 107 int ret = 0;
+3
arch/sparc/Kconfig
··· 37 37 def_bool 64BIT 38 38 select ARCH_SUPPORTS_MSI 39 39 select HAVE_FUNCTION_TRACER 40 + select HAVE_FUNCTION_GRAPH_TRACER 41 + select HAVE_FUNCTION_GRAPH_FP_TEST 42 + select HAVE_FUNCTION_TRACE_MCOUNT_TEST 40 43 select HAVE_KRETPROBES 41 44 select HAVE_KPROBES 42 45 select HAVE_LMB
+1 -4
arch/sparc/Kconfig.debug
··· 19 19 bool "D-cache flush debugging" 20 20 depends on SPARC64 && DEBUG_KERNEL 21 21 22 - config STACK_DEBUG 23 - bool "Stack Overflow Detection Support" 24 - 25 22 config MCOUNT 26 23 bool 27 24 depends on SPARC64 28 - depends on STACK_DEBUG || FUNCTION_TRACER 25 + depends on FUNCTION_TRACER 29 26 default y 30 27 31 28 config FRAME_POINTER
+1 -1
arch/sparc/include/asm/cpudata_64.h
··· 17 17 unsigned int __nmi_count; 18 18 unsigned long clock_tick; /* %tick's per second */ 19 19 unsigned long __pad; 20 - unsigned int __pad1; 20 + unsigned int irq0_irqs; 21 21 unsigned int __pad2; 22 22 23 23 /* Dcache line 2, rarely used */
+19 -2
arch/sparc/include/asm/irqflags_64.h
··· 76 76 */ 77 77 static inline unsigned long __raw_local_irq_save(void) 78 78 { 79 - unsigned long flags = __raw_local_save_flags(); 79 + unsigned long flags, tmp; 80 80 81 - raw_local_irq_disable(); 81 + /* Disable interrupts to PIL_NORMAL_MAX unless we already 82 + * are using PIL_NMI, in which case PIL_NMI is retained. 83 + * 84 + * The only values we ever program into the %pil are 0, 85 + * PIL_NORMAL_MAX and PIL_NMI. 86 + * 87 + * Since PIL_NMI is the largest %pil value and all bits are 88 + * set in it (0xf), it doesn't matter what PIL_NORMAL_MAX 89 + * actually is. 90 + */ 91 + __asm__ __volatile__( 92 + "rdpr %%pil, %0\n\t" 93 + "or %0, %2, %1\n\t" 94 + "wrpr %1, 0x0, %%pil" 95 + : "=r" (flags), "=r" (tmp) 96 + : "i" (PIL_NORMAL_MAX) 97 + : "memory" 98 + ); 82 99 83 100 return flags; 84 101 }
+1 -1
arch/sparc/include/asm/thread_info_64.h
··· 111 111 #define THREAD_SHIFT PAGE_SHIFT 112 112 #endif /* PAGE_SHIFT == 13 */ 113 113 114 - #define PREEMPT_ACTIVE 0x4000000 114 + #define PREEMPT_ACTIVE 0x10000000 115 115 116 116 /* 117 117 * macros/functions for gaining access to the thread information structure
+9 -1
arch/sparc/kernel/Makefile
··· 13 13 CPPFLAGS_vmlinux.lds := -Usparc -m$(BITS) 14 14 extra-y += vmlinux.lds 15 15 16 + ifdef CONFIG_FUNCTION_TRACER 17 + # Do not profile debug and lowlevel utilities 18 + CFLAGS_REMOVE_ftrace.o := -pg 19 + CFLAGS_REMOVE_time_$(BITS).o := -pg 20 + CFLAGS_REMOVE_perf_event.o := -pg 21 + CFLAGS_REMOVE_pcr.o := -pg 22 + endif 23 + 16 24 obj-$(CONFIG_SPARC32) += entry.o wof.o wuf.o 17 25 obj-$(CONFIG_SPARC32) += etrap_32.o 18 26 obj-$(CONFIG_SPARC32) += rtrap_32.o ··· 93 85 94 86 95 87 obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o 96 - CFLAGS_REMOVE_ftrace.o := -pg 88 + obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o 97 89 98 90 obj-$(CONFIG_EARLYFB) += btext.o 99 91 obj-$(CONFIG_STACKTRACE) += stacktrace.o
+59 -1
arch/sparc/kernel/ftrace.c
··· 13 13 14 14 static u32 ftrace_call_replace(unsigned long ip, unsigned long addr) 15 15 { 16 - static u32 call; 16 + u32 call; 17 17 s32 off; 18 18 19 19 off = ((s32)addr - (s32)ip); ··· 91 91 return 0; 92 92 } 93 93 #endif 94 + 95 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 96 + 97 + #ifdef CONFIG_DYNAMIC_FTRACE 98 + extern void ftrace_graph_call(void); 99 + 100 + int ftrace_enable_ftrace_graph_caller(void) 101 + { 102 + unsigned long ip = (unsigned long)(&ftrace_graph_call); 103 + u32 old, new; 104 + 105 + old = *(u32 *) &ftrace_graph_call; 106 + new = ftrace_call_replace(ip, (unsigned long) &ftrace_graph_caller); 107 + return ftrace_modify_code(ip, old, new); 108 + } 109 + 110 + int ftrace_disable_ftrace_graph_caller(void) 111 + { 112 + unsigned long ip = (unsigned long)(&ftrace_graph_call); 113 + u32 old, new; 114 + 115 + old = *(u32 *) &ftrace_graph_call; 116 + new = ftrace_call_replace(ip, (unsigned long) &ftrace_stub); 117 + 118 + return ftrace_modify_code(ip, old, new); 119 + } 120 + 121 + #endif /* !CONFIG_DYNAMIC_FTRACE */ 122 + 123 + /* 124 + * Hook the return address and push it in the stack of return addrs 125 + * in current thread info. 126 + */ 127 + unsigned long prepare_ftrace_return(unsigned long parent, 128 + unsigned long self_addr, 129 + unsigned long frame_pointer) 130 + { 131 + unsigned long return_hooker = (unsigned long) &return_to_handler; 132 + struct ftrace_graph_ent trace; 133 + 134 + if (unlikely(atomic_read(&current->tracing_graph_pause))) 135 + return parent + 8UL; 136 + 137 + if (ftrace_push_return_trace(parent, self_addr, &trace.depth, 138 + frame_pointer) == -EBUSY) 139 + return parent + 8UL; 140 + 141 + trace.func = self_addr; 142 + 143 + /* Only trace if the calling function expects to */ 144 + if (!ftrace_graph_entry(&trace)) { 145 + current->curr_ret_stack--; 146 + return parent + 8UL; 147 + } 148 + 149 + return return_hooker; 150 + } 151 + #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+12 -19
arch/sparc/kernel/irq_64.c
··· 20 20 #include <linux/delay.h> 21 21 #include <linux/proc_fs.h> 22 22 #include <linux/seq_file.h> 23 + #include <linux/ftrace.h> 23 24 #include <linux/irq.h> 25 + #include <linux/kmemleak.h> 24 26 25 27 #include <asm/ptrace.h> 26 28 #include <asm/processor.h> ··· 47 45 48 46 #include "entry.h" 49 47 #include "cpumap.h" 48 + #include "kstack.h" 50 49 51 50 #define NUM_IVECS (IMAP_INR + 1) 52 51 ··· 650 647 bucket = kzalloc(sizeof(struct ino_bucket), GFP_ATOMIC); 651 648 if (unlikely(!bucket)) 652 649 return 0; 650 + 651 + /* The only reference we store to the IRQ bucket is 652 + * by physical address which kmemleak can't see, tell 653 + * it that this object explicitly is not a leak and 654 + * should be scanned. 655 + */ 656 + kmemleak_not_leak(bucket); 657 + 653 658 __flush_dcache_range((unsigned long) bucket, 654 659 ((unsigned long) bucket + 655 660 sizeof(struct ino_bucket))); ··· 714 703 void *hardirq_stack[NR_CPUS]; 715 704 void *softirq_stack[NR_CPUS]; 716 705 717 - static __attribute__((always_inline)) void *set_hardirq_stack(void) 718 - { 719 - void *orig_sp, *sp = hardirq_stack[smp_processor_id()]; 720 - 721 - __asm__ __volatile__("mov %%sp, %0" : "=r" (orig_sp)); 722 - if (orig_sp < sp || 723 - orig_sp > (sp + THREAD_SIZE)) { 724 - sp += THREAD_SIZE - 192 - STACK_BIAS; 725 - __asm__ __volatile__("mov %0, %%sp" : : "r" (sp)); 726 - } 727 - 728 - return orig_sp; 729 - } 730 - static __attribute__((always_inline)) void restore_hardirq_stack(void *orig_sp) 731 - { 732 - __asm__ __volatile__("mov %0, %%sp" : : "r" (orig_sp)); 733 - } 734 - 735 - void handler_irq(int irq, struct pt_regs *regs) 706 + void __irq_entry handler_irq(int irq, struct pt_regs *regs) 736 707 { 737 708 unsigned long pstate, bucket_pa; 738 709 struct pt_regs *old_regs;
+2 -1
arch/sparc/kernel/kgdb_64.c
··· 5 5 6 6 #include <linux/kgdb.h> 7 7 #include <linux/kdebug.h> 8 + #include <linux/ftrace.h> 8 9 9 10 #include <asm/kdebug.h> 10 11 #include <asm/ptrace.h> ··· 109 108 } 110 109 111 110 #ifdef CONFIG_SMP 112 - void smp_kgdb_capture_client(int irq, struct pt_regs *regs) 111 + void __irq_entry smp_kgdb_capture_client(int irq, struct pt_regs *regs) 113 112 { 114 113 unsigned long flags; 115 114
+19
arch/sparc/kernel/kstack.h
··· 61 61 62 62 } 63 63 64 + static inline __attribute__((always_inline)) void *set_hardirq_stack(void) 65 + { 66 + void *orig_sp, *sp = hardirq_stack[smp_processor_id()]; 67 + 68 + __asm__ __volatile__("mov %%sp, %0" : "=r" (orig_sp)); 69 + if (orig_sp < sp || 70 + orig_sp > (sp + THREAD_SIZE)) { 71 + sp += THREAD_SIZE - 192 - STACK_BIAS; 72 + __asm__ __volatile__("mov %0, %%sp" : : "r" (sp)); 73 + } 74 + 75 + return orig_sp; 76 + } 77 + 78 + static inline __attribute__((always_inline)) void restore_hardirq_stack(void *orig_sp) 79 + { 80 + __asm__ __volatile__("mov %0, %%sp" : : "r" (orig_sp)); 81 + } 82 + 64 83 #endif /* _KSTACK_H */
+8 -2
arch/sparc/kernel/nmi.c
··· 23 23 #include <asm/ptrace.h> 24 24 #include <asm/pcr.h> 25 25 26 + #include "kstack.h" 27 + 26 28 /* We don't have a real NMI on sparc64, but we can fake one 27 29 * up using profiling counter overflow interrupts and interrupt 28 30 * levels. ··· 94 92 notrace __kprobes void perfctr_irq(int irq, struct pt_regs *regs) 95 93 { 96 94 unsigned int sum, touched = 0; 97 - int cpu = smp_processor_id(); 95 + void *orig_sp; 98 96 99 97 clear_softint(1 << irq); 100 98 ··· 102 100 103 101 nmi_enter(); 104 102 103 + orig_sp = set_hardirq_stack(); 104 + 105 105 if (notify_die(DIE_NMI, "nmi", regs, 0, 106 106 pt_regs_trap_type(regs), SIGINT) == NOTIFY_STOP) 107 107 touched = 1; 108 108 else 109 109 pcr_ops->write(PCR_PIC_PRIV); 110 110 111 - sum = kstat_irqs_cpu(0, cpu); 111 + sum = local_cpu_data().irq0_irqs; 112 112 if (__get_cpu_var(nmi_touch)) { 113 113 __get_cpu_var(nmi_touch) = 0; 114 114 touched = 1; ··· 128 124 write_pic(picl_value(nmi_hz)); 129 125 pcr_ops->write(pcr_enable); 130 126 } 127 + 128 + restore_hardirq_stack(orig_sp); 131 129 132 130 nmi_exit(); 133 131 }
+8 -3
arch/sparc/kernel/pci_common.c
··· 371 371 struct resource *rp = kzalloc(sizeof(*rp), GFP_KERNEL); 372 372 373 373 if (!rp) { 374 - prom_printf("Cannot allocate IOMMU resource.\n"); 375 - prom_halt(); 374 + pr_info("%s: Cannot allocate IOMMU resource.\n", 375 + pbm->name); 376 + return; 376 377 } 377 378 rp->name = "IOMMU"; 378 379 rp->start = pbm->mem_space.start + (unsigned long) vdma[0]; 379 380 rp->end = rp->start + (unsigned long) vdma[1] - 1UL; 380 381 rp->flags = IORESOURCE_BUSY; 381 - request_resource(&pbm->mem_space, rp); 382 + if (request_resource(&pbm->mem_space, rp)) { 383 + pr_info("%s: Unable to request IOMMU resource.\n", 384 + pbm->name); 385 + kfree(rp); 386 + } 382 387 } 383 388 } 384 389
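The pci_common.c hunk above replaces a fatal `prom_halt()` with a logged failure that releases the allocation when `request_resource()` fails. A minimal user-space sketch of that allocate/request/undo-on-failure pattern follows; `request_res()`, `release()`, and the test knobs are illustrative stand-ins, not kernel API:

```c
#include <stdlib.h>

/* Sketch of the error-handling pattern the patch adopts: when a
 * sub-resource cannot be reserved, log/report and release the
 * allocation instead of halting the machine. request_res() stands in
 * for the kernel's request_resource(); 0 means success. */
static int g_request_ok;          /* test knob: 1 = reservation succeeds */
static int g_freed;               /* counts kfree()-style releases */

static int request_res(void *rp) { (void)rp; return g_request_ok ? 0 : -1; }
static void release(void *rp)    { free(rp); g_freed++; }

/* Returns 1 if the resource ended up reserved, 0 otherwise. */
static int reserve_iommu_resource(void)
{
	void *rp = malloc(16);
	if (!rp)
		return 0;         /* allocation failure: report, don't halt */
	if (request_res(rp)) {
		release(rp);      /* undo the allocation on failure */
		return 0;
	}
	return 1;
}
```

The key design point mirrored here is that the failure path cleans up after itself and lets the caller continue, rather than taking the whole system down for a non-fatal condition.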
+2 -1
arch/sparc/kernel/pcr.c
··· 8 8 #include <linux/irq.h> 9 9 10 10 #include <linux/perf_event.h> 11 + #include <linux/ftrace.h> 11 12 12 13 #include <asm/pil.h> 13 14 #include <asm/pcr.h> ··· 35 34 * Therefore in such situations we defer the work by signalling 36 35 * a lower level cpu IRQ. 37 36 */ 38 - void deferred_pcr_work_irq(int irq, struct pt_regs *regs) 37 + void __irq_entry deferred_pcr_work_irq(int irq, struct pt_regs *regs) 39 38 { 40 39 struct pt_regs *old_regs; 41 40
+11 -1
arch/sparc/kernel/rtrap_64.S
··· 130 130 nop 131 131 call trace_hardirqs_on 132 132 nop 133 - wrpr %l4, %pil 133 + /* Do not actually set the %pil here. We will do that 134 + * below after we clear PSTATE_IE in the %pstate register. 135 + * If we re-enable interrupts here, we can recurse down 136 + * the hardirq stack potentially endlessly, causing a 137 + * stack overflow. 138 + * 139 + * It is tempting to put this test and trace_hardirqs_on 140 + * call at the 'rt_continue' label, but that will not work 141 + * as that path hits unconditionally and we do not want to 142 + * execute this in NMI return paths, for example. 143 + */ 134 144 #endif 135 145 rtrap_no_irq_enable: 136 146 andcc %l1, TSTATE_PRIV, %l3
+6 -5
arch/sparc/kernel/smp_64.c
··· 22 22 #include <linux/profile.h> 23 23 #include <linux/bootmem.h> 24 24 #include <linux/vmalloc.h> 25 + #include <linux/ftrace.h> 25 26 #include <linux/cpu.h> 26 27 #include <linux/slab.h> 27 28 ··· 824 823 &cpumask_of_cpu(cpu)); 825 824 } 826 825 827 - void smp_call_function_client(int irq, struct pt_regs *regs) 826 + void __irq_entry smp_call_function_client(int irq, struct pt_regs *regs) 828 827 { 829 828 clear_softint(1 << irq); 830 829 generic_smp_call_function_interrupt(); 831 830 } 832 831 833 - void smp_call_function_single_client(int irq, struct pt_regs *regs) 832 + void __irq_entry smp_call_function_single_client(int irq, struct pt_regs *regs) 834 833 { 835 834 clear_softint(1 << irq); 836 835 generic_smp_call_function_single_interrupt(); ··· 966 965 put_cpu(); 967 966 } 968 967 969 - void smp_new_mmu_context_version_client(int irq, struct pt_regs *regs) 968 + void __irq_entry smp_new_mmu_context_version_client(int irq, struct pt_regs *regs) 970 969 { 971 970 struct mm_struct *mm; 972 971 unsigned long flags; ··· 1150 1149 */ 1151 1150 extern void prom_world(int); 1152 1151 1153 - void smp_penguin_jailcell(int irq, struct pt_regs *regs) 1152 + void __irq_entry smp_penguin_jailcell(int irq, struct pt_regs *regs) 1154 1153 { 1155 1154 clear_softint(1 << irq); 1156 1155 ··· 1366 1365 &cpumask_of_cpu(cpu)); 1367 1366 } 1368 1367 1369 - void smp_receive_signal_client(int irq, struct pt_regs *regs) 1368 + void __irq_entry smp_receive_signal_client(int irq, struct pt_regs *regs) 1370 1369 { 1371 1370 clear_softint(1 << irq); 1372 1371 }
+3 -1
arch/sparc/kernel/time_64.c
··· 35 35 #include <linux/clocksource.h> 36 36 #include <linux/of_device.h> 37 37 #include <linux/platform_device.h> 38 + #include <linux/ftrace.h> 38 39 39 40 #include <asm/oplib.h> 40 41 #include <asm/timer.h> ··· 718 717 }; 719 718 static DEFINE_PER_CPU(struct clock_event_device, sparc64_events); 720 719 721 - void timer_interrupt(int irq, struct pt_regs *regs) 720 + void __irq_entry timer_interrupt(int irq, struct pt_regs *regs) 722 721 { 723 722 struct pt_regs *old_regs = set_irq_regs(regs); 724 723 unsigned long tick_mask = tick_ops->softint_mask; ··· 729 728 730 729 irq_enter(); 731 730 731 + local_cpu_data().irq0_irqs++; 732 732 kstat_incr_irqs_this_cpu(0, irq_to_desc(0)); 733 733 734 734 if (unlikely(!evt->event_handler)) {
+3 -23
arch/sparc/kernel/traps_64.c
··· 2203 2203 2204 2204 EXPORT_SYMBOL(dump_stack); 2205 2205 2206 - static inline int is_kernel_stack(struct task_struct *task, 2207 - struct reg_window *rw) 2208 - { 2209 - unsigned long rw_addr = (unsigned long) rw; 2210 - unsigned long thread_base, thread_end; 2211 - 2212 - if (rw_addr < PAGE_OFFSET) { 2213 - if (task != &init_task) 2214 - return 0; 2215 - } 2216 - 2217 - thread_base = (unsigned long) task_stack_page(task); 2218 - thread_end = thread_base + sizeof(union thread_union); 2219 - if (rw_addr >= thread_base && 2220 - rw_addr < thread_end && 2221 - !(rw_addr & 0x7UL)) 2222 - return 1; 2223 - 2224 - return 0; 2225 - } 2226 - 2227 2206 static inline struct reg_window *kernel_stack_up(struct reg_window *rw) 2228 2207 { 2229 2208 unsigned long fp = rw->ins[6]; ··· 2231 2252 show_regs(regs); 2232 2253 add_taint(TAINT_DIE); 2233 2254 if (regs->tstate & TSTATE_PRIV) { 2255 + struct thread_info *tp = current_thread_info(); 2234 2256 struct reg_window *rw = (struct reg_window *) 2235 2257 (regs->u_regs[UREG_FP] + STACK_BIAS); 2236 2258 ··· 2239 2259 * find some badly aligned kernel stack. 2240 2260 */ 2241 2261 while (rw && 2242 - count++ < 30&& 2243 - is_kernel_stack(current, rw)) { 2262 + count++ < 30 && 2263 + kstack_valid(tp, (unsigned long) rw)) { 2244 2264 printk("Caller[%016lx]: %pS\n", rw->ins[7], 2245 2265 (void *) rw->ins[7]); 2246 2266
+3 -3
arch/sparc/kernel/unaligned_64.c
··· 50 50 } 51 51 52 52 /* 16 = double-word, 8 = extra-word, 4 = word, 2 = half-word */ 53 - static inline int decode_access_size(unsigned int insn) 53 + static inline int decode_access_size(struct pt_regs *regs, unsigned int insn) 54 54 { 55 55 unsigned int tmp; 56 56 ··· 66 66 return 2; 67 67 else { 68 68 printk("Impossible unaligned trap. insn=%08x\n", insn); 69 - die_if_kernel("Byte sized unaligned access?!?!", current_thread_info()->kregs); 69 + die_if_kernel("Byte sized unaligned access?!?!", regs); 70 70 71 71 /* GCC should never warn that control reaches the end 72 72 * of this function without returning a value because ··· 286 286 asmlinkage void kernel_unaligned_trap(struct pt_regs *regs, unsigned int insn) 287 287 { 288 288 enum direction dir = decode_direction(insn); 289 - int size = decode_access_size(insn); 289 + int size = decode_access_size(regs, insn); 290 290 int orig_asi, asi; 291 291 292 292 current_thread_info()->kern_una_regs = regs;
+5
arch/sparc/kernel/vmlinux.lds.S
··· 46 46 SCHED_TEXT 47 47 LOCK_TEXT 48 48 KPROBES_TEXT 49 + IRQENTRY_TEXT 49 50 *(.gnu.warning) 50 51 } = 0 51 52 _etext = .; 52 53 53 54 RO_DATA(PAGE_SIZE) 55 + 56 + /* Start of data section */ 57 + _sdata = .; 58 + 54 59 .data1 : { 55 60 *(.data1) 56 61 }
+72 -87
arch/sparc/lib/mcount.S
··· 7 7 8 8 #include <linux/linkage.h> 9 9 10 - #include <asm/ptrace.h> 11 - #include <asm/thread_info.h> 12 - 13 10 /* 14 11 * This is the main variant and is called by C code. GCC's -pg option 15 12 * automatically instruments every C function with a call to this. 16 13 */ 17 14 18 - #ifdef CONFIG_STACK_DEBUG 19 - 20 - #define OVSTACKSIZE 4096 /* lets hope this is enough */ 21 - 22 - .data 23 - .align 8 24 - panicstring: 25 - .asciz "Stack overflow\n" 26 - .align 8 27 - ovstack: 28 - .skip OVSTACKSIZE 29 - #endif 30 15 .text 31 16 .align 32 32 17 .globl _mcount ··· 20 35 .type mcount,#function 21 36 _mcount: 22 37 mcount: 23 - #ifdef CONFIG_STACK_DEBUG 24 - /* 25 - * Check whether %sp is dangerously low. 26 - */ 27 - ldub [%g6 + TI_FPDEPTH], %g1 28 - srl %g1, 1, %g3 29 - add %g3, 1, %g3 30 - sllx %g3, 8, %g3 ! each fpregs frame is 256b 31 - add %g3, 192, %g3 32 - add %g6, %g3, %g3 ! where does task_struct+frame end? 33 - sub %g3, STACK_BIAS, %g3 34 - cmp %sp, %g3 35 - bg,pt %xcc, 1f 36 - nop 37 - lduh [%g6 + TI_CPU], %g1 38 - sethi %hi(hardirq_stack), %g3 39 - or %g3, %lo(hardirq_stack), %g3 40 - sllx %g1, 3, %g1 41 - ldx [%g3 + %g1], %g7 42 - sub %g7, STACK_BIAS, %g7 43 - cmp %sp, %g7 44 - bleu,pt %xcc, 2f 45 - sethi %hi(THREAD_SIZE), %g3 46 - add %g7, %g3, %g7 47 - cmp %sp, %g7 48 - blu,pn %xcc, 1f 49 - 2: sethi %hi(softirq_stack), %g3 50 - or %g3, %lo(softirq_stack), %g3 51 - ldx [%g3 + %g1], %g7 52 - sub %g7, STACK_BIAS, %g7 53 - cmp %sp, %g7 54 - bleu,pt %xcc, 3f 55 - sethi %hi(THREAD_SIZE), %g3 56 - add %g7, %g3, %g7 57 - cmp %sp, %g7 58 - blu,pn %xcc, 1f 59 - nop 60 - /* If we are already on ovstack, don't hop onto it 61 - * again, we are already trying to output the stack overflow 62 - * message. 63 - */ 64 - 3: sethi %hi(ovstack), %g7 ! 
cant move to panic stack fast enough 65 - or %g7, %lo(ovstack), %g7 66 - add %g7, OVSTACKSIZE, %g3 67 - sub %g3, STACK_BIAS + 192, %g3 68 - sub %g7, STACK_BIAS, %g7 69 - cmp %sp, %g7 70 - blu,pn %xcc, 2f 71 - cmp %sp, %g3 72 - bleu,pn %xcc, 1f 73 - nop 74 - 2: mov %g3, %sp 75 - sethi %hi(panicstring), %g3 76 - call prom_printf 77 - or %g3, %lo(panicstring), %o0 78 - call prom_halt 79 - nop 80 - 1: 81 - #endif 82 38 #ifdef CONFIG_FUNCTION_TRACER 83 39 #ifdef CONFIG_DYNAMIC_FTRACE 84 - mov %o7, %o0 85 - .globl mcount_call 86 - mcount_call: 87 - call ftrace_stub 88 - mov %o0, %o7 40 + /* Do nothing, the retl/nop below is all we need. */ 89 41 #else 90 - sethi %hi(ftrace_trace_function), %g1 42 + sethi %hi(function_trace_stop), %g1 43 + lduw [%g1 + %lo(function_trace_stop)], %g2 44 + brnz,pn %g2, 2f 45 + sethi %hi(ftrace_trace_function), %g1 91 46 sethi %hi(ftrace_stub), %g2 92 47 ldx [%g1 + %lo(ftrace_trace_function)], %g1 93 48 or %g2, %lo(ftrace_stub), %g2 94 49 cmp %g1, %g2 95 50 be,pn %icc, 1f 96 - mov %i7, %o1 97 - jmpl %g1, %g0 98 - mov %o7, %o0 51 + mov %i7, %g3 52 + save %sp, -176, %sp 53 + mov %g3, %o1 54 + jmpl %g1, %o7 55 + mov %i7, %o0 56 + ret 57 + restore 99 58 /* not reached */ 100 59 1: 60 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 61 + sethi %hi(ftrace_graph_return), %g1 62 + ldx [%g1 + %lo(ftrace_graph_return)], %g3 63 + cmp %g2, %g3 64 + bne,pn %xcc, 5f 65 + sethi %hi(ftrace_graph_entry_stub), %g2 66 + sethi %hi(ftrace_graph_entry), %g1 67 + or %g2, %lo(ftrace_graph_entry_stub), %g2 68 + ldx [%g1 + %lo(ftrace_graph_entry)], %g1 69 + cmp %g1, %g2 70 + be,pt %xcc, 2f 71 + nop 72 + 5: mov %i7, %g2 73 + mov %fp, %g3 74 + save %sp, -176, %sp 75 + mov %g2, %l0 76 + ba,pt %xcc, ftrace_graph_caller 77 + mov %g3, %l1 78 + #endif 79 + 2: 101 80 #endif 102 81 #endif 103 82 retl ··· 80 131 .globl ftrace_caller 81 132 .type ftrace_caller,#function 82 133 ftrace_caller: 83 - mov %i7, %o1 84 - mov %o7, %o0 134 + sethi %hi(function_trace_stop), %g1 135 + mov %i7, %g2 
136 + lduw [%g1 + %lo(function_trace_stop)], %g1 137 + brnz,pn %g1, ftrace_stub 138 + mov %fp, %g3 139 + save %sp, -176, %sp 140 + mov %g2, %o1 141 + mov %g2, %l0 142 + mov %g3, %l1 85 143 .globl ftrace_call 86 144 ftrace_call: 87 145 call ftrace_stub 88 - mov %o0, %o7 89 - retl 146 + mov %i7, %o0 147 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 148 + .globl ftrace_graph_call 149 + ftrace_graph_call: 150 + call ftrace_stub 90 151 nop 152 + #endif 153 + ret 154 + restore 155 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 156 + .size ftrace_graph_call,.-ftrace_graph_call 157 + #endif 158 + .size ftrace_call,.-ftrace_call 91 159 .size ftrace_caller,.-ftrace_caller 92 160 #endif 161 + #endif 162 + 163 + #ifdef CONFIG_FUNCTION_GRAPH_TRACER 164 + ENTRY(ftrace_graph_caller) 165 + mov %l0, %o0 166 + mov %i7, %o1 167 + call prepare_ftrace_return 168 + mov %l1, %o2 169 + ret 170 + restore %o0, -8, %i7 171 + END(ftrace_graph_caller) 172 + 173 + ENTRY(return_to_handler) 174 + save %sp, -176, %sp 175 + call ftrace_return_to_handler 176 + mov %fp, %o0 177 + jmpl %o0 + 8, %g0 178 + restore 179 + END(return_to_handler) 93 180 #endif
+1 -1
arch/x86/ia32/ia32entry.S
··· 626 626 .quad stub32_sigreturn 627 627 .quad stub32_clone /* 120 */ 628 628 .quad sys_setdomainname 629 - .quad sys_uname 629 + .quad sys_newuname 630 630 .quad sys_modify_ldt 631 631 .quad compat_sys_adjtimex 632 632 .quad sys32_mprotect /* 125 */
+3
arch/x86/include/asm/amd_iommu_types.h
··· 21 21 #define _ASM_X86_AMD_IOMMU_TYPES_H 22 22 23 23 #include <linux/types.h> 24 + #include <linux/mutex.h> 24 25 #include <linux/list.h> 25 26 #include <linux/spinlock.h> 26 27 ··· 141 140 142 141 /* constants to configure the command buffer */ 143 142 #define CMD_BUFFER_SIZE 8192 143 + #define CMD_BUFFER_UNINITIALIZED 1 144 144 #define CMD_BUFFER_ENTRIES 512 145 145 #define MMIO_CMD_SIZE_SHIFT 56 146 146 #define MMIO_CMD_SIZE_512 (0x9ULL << MMIO_CMD_SIZE_SHIFT) ··· 239 237 struct list_head list; /* for list of all protection domains */ 240 238 struct list_head dev_list; /* List of all devices in this domain */ 241 239 spinlock_t lock; /* mostly used to lock the page table*/ 240 + struct mutex api_lock; /* protect page tables in the iommu-api path */ 242 241 u16 id; /* the domain id written to the device table */ 243 242 int mode; /* paging mode (0-6 levels) */ 244 243 u64 *pt_root; /* page table root pointer */
+23 -6
arch/x86/include/asm/lguest_hcall.h
··· 28 28 29 29 #ifndef __ASSEMBLY__
30 30 #include <asm/hw_irq.h>
31 - #include <asm/kvm_para.h>
32 31
33 32 /*G:030
34 33 * But first, how does our Guest contact the Host to ask for privileged
35 34 * operations? There are two ways: the direct way is to make a "hypercall",
36 35 * to make requests of the Host Itself.
37 36 *
38 - * We use the KVM hypercall mechanism, though completely different hypercall
39 - * numbers. Seventeen hypercalls are available: the hypercall number is put in
40 - * the %eax register, and the arguments (when required) are placed in %ebx,
41 - * %ecx, %edx and %esi. If a return value makes sense, it's returned in %eax.
37 + * Our hypercall mechanism uses the highest unused trap code (traps 32 and
38 + * above are used by real hardware interrupts). Seventeen hypercalls are
39 + * available: the hypercall number is put in the %eax register, and the
40 + * arguments (when required) are placed in %ebx, %ecx, %edx and %esi.
41 + * If a return value makes sense, it's returned in %eax.
42 42 *
43 43 * Grossly invalid calls result in Sudden Death at the hands of the vengeful
44 44 * Host, rather than returning failure. This reflects Winston Churchill's
45 45 * definition of a gentleman: "someone who is only rude intentionally".
46 - :*/
46 + */
47 + static inline unsigned long
48 + hcall(unsigned long call,
49 + unsigned long arg1, unsigned long arg2, unsigned long arg3,
50 + unsigned long arg4)
51 + {
52 + /* "int" is the Intel instruction to trigger a trap. */
53 + asm volatile("int $" __stringify(LGUEST_TRAP_ENTRY)
54 + /* The call in %eax (aka "a") might be overwritten */
55 + : "=a"(call)
56 + /* The arguments are in %eax, %ebx, %ecx, %edx & %esi */
57 + : "a"(call), "b"(arg1), "c"(arg2), "d"(arg3), "S"(arg4)
58 + /* "memory" means this might write somewhere in memory.
59 + * This isn't true for all calls, but it's safe to tell
60 + * gcc that it might happen so it doesn't get clever. */
61 + : "memory");
62 + return call;
63 + }
47 64
48 65 /* Can't use our min() macro here: needs to be a constant */
49 66 #define LGUEST_IRQS (NR_IRQS < 32 ? NR_IRQS: 32)
+14 -6
arch/x86/kernel/amd_iommu.c
··· 118 118 return false; 119 119 120 120 /* No device or no PCI device */ 121 - if (!dev || dev->bus != &pci_bus_type) 121 + if (dev->bus != &pci_bus_type) 122 122 return false; 123 123 124 124 devid = get_device_id(dev); ··· 392 392 u32 tail, head; 393 393 u8 *target; 394 394 395 + WARN_ON(iommu->cmd_buf_size & CMD_BUFFER_UNINITIALIZED); 395 396 tail = readl(iommu->mmio_base + MMIO_CMD_TAIL_OFFSET); 396 397 target = iommu->cmd_buf + tail; 397 398 memcpy_toio(target, cmd, sizeof(*cmd)); ··· 2187 2186 struct dma_ops_domain *dma_dom; 2188 2187 u16 devid; 2189 2188 2190 - while ((dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) { 2189 + for_each_pci_dev(dev) { 2191 2190 2192 2191 /* Do we handle this device? */ 2193 2192 if (!check_device(&dev->dev)) ··· 2299 2298 list_for_each_entry_safe(dev_data, next, &domain->dev_list, list) { 2300 2299 struct device *dev = dev_data->dev; 2301 2300 2302 - do_detach(dev); 2301 + __detach_device(dev); 2303 2302 atomic_set(&dev_data->bind, 0); 2304 2303 } 2305 2304 ··· 2328 2327 return NULL; 2329 2328 2330 2329 spin_lock_init(&domain->lock); 2330 + mutex_init(&domain->api_lock); 2331 2331 domain->id = domain_id_alloc(); 2332 2332 if (!domain->id) 2333 2333 goto out_err; ··· 2381 2379 2382 2380 free_pagetable(domain); 2383 2381 2384 - domain_id_free(domain->id); 2385 - 2386 - kfree(domain); 2382 + protection_domain_free(domain); 2387 2383 2388 2384 dom->priv = NULL; 2389 2385 } ··· 2456 2456 iova &= PAGE_MASK; 2457 2457 paddr &= PAGE_MASK; 2458 2458 2459 + mutex_lock(&domain->api_lock); 2460 + 2459 2461 for (i = 0; i < npages; ++i) { 2460 2462 ret = iommu_map_page(domain, iova, paddr, prot, PM_MAP_4k); 2461 2463 if (ret) ··· 2466 2464 iova += PAGE_SIZE; 2467 2465 paddr += PAGE_SIZE; 2468 2466 } 2467 + 2468 + mutex_unlock(&domain->api_lock); 2469 2469 2470 2470 return 0; 2471 2471 } ··· 2481 2477 2482 2478 iova &= PAGE_MASK; 2483 2479 2480 + mutex_lock(&domain->api_lock); 2481 + 2484 2482 for (i = 0; i < npages; ++i) { 2485 
2483 iommu_unmap_page(domain, iova, PM_MAP_4k); 2486 2484 iova += PAGE_SIZE; 2487 2485 } 2488 2486 2489 2487 iommu_flush_tlb_pde(domain); 2488 + 2489 + mutex_unlock(&domain->api_lock); 2490 2490 } 2491 2491 2492 2492 static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
+33 -15
arch/x86/kernel/amd_iommu_init.c
··· 138 138 bool amd_iommu_np_cache __read_mostly; 139 139 140 140 /* 141 - * Set to true if ACPI table parsing and hardware intialization went properly 141 + * The ACPI table parsing functions set this variable on an error 142 142 */ 143 - static bool amd_iommu_initialized; 143 + static int __initdata amd_iommu_init_err; 144 144 145 145 /* 146 146 * List of protection domains - used during resume ··· 391 391 */ 392 392 for (i = 0; i < table->length; ++i) 393 393 checksum += p[i]; 394 - if (checksum != 0) 394 + if (checksum != 0) { 395 395 /* ACPI table corrupt */ 396 - return -ENODEV; 396 + amd_iommu_init_err = -ENODEV; 397 + return 0; 398 + } 397 399 398 400 p += IVRS_HEADER_LENGTH; 399 401 ··· 438 436 if (cmd_buf == NULL) 439 437 return NULL; 440 438 441 - iommu->cmd_buf_size = CMD_BUFFER_SIZE; 439 + iommu->cmd_buf_size = CMD_BUFFER_SIZE | CMD_BUFFER_UNINITIALIZED; 442 440 443 441 return cmd_buf; 444 442 } ··· 474 472 &entry, sizeof(entry)); 475 473 476 474 amd_iommu_reset_cmd_buffer(iommu); 475 + iommu->cmd_buf_size &= ~(CMD_BUFFER_UNINITIALIZED); 477 476 } 478 477 479 478 static void __init free_command_buffer(struct amd_iommu *iommu) 480 479 { 481 480 free_pages((unsigned long)iommu->cmd_buf, 482 - get_order(iommu->cmd_buf_size)); 481 + get_order(iommu->cmd_buf_size & ~(CMD_BUFFER_UNINITIALIZED))); 483 482 } 484 483 485 484 /* allocates the memory where the IOMMU will log its events to */ ··· 923 920 h->mmio_phys); 924 921 925 922 iommu = kzalloc(sizeof(struct amd_iommu), GFP_KERNEL); 926 - if (iommu == NULL) 927 - return -ENOMEM; 923 + if (iommu == NULL) { 924 + amd_iommu_init_err = -ENOMEM; 925 + return 0; 926 + } 927 + 928 928 ret = init_iommu_one(iommu, h); 929 - if (ret) 930 - return ret; 929 + if (ret) { 930 + amd_iommu_init_err = ret; 931 + return 0; 932 + } 931 933 break; 932 934 default: 933 935 break; ··· 941 933 942 934 } 943 935 WARN_ON(p != end); 944 - 945 - amd_iommu_initialized = true; 946 936 947 937 return 0; 948 938 } ··· 1217 1211 if 
(acpi_table_parse("IVRS", find_last_devid_acpi) != 0) 1218 1212 return -ENODEV; 1219 1213 1214 + ret = amd_iommu_init_err; 1215 + if (ret) 1216 + goto out; 1217 + 1220 1218 dev_table_size = tbl_size(DEV_TABLE_ENTRY_SIZE); 1221 1219 alias_table_size = tbl_size(ALIAS_TABLE_ENTRY_SIZE); 1222 1220 rlookup_table_size = tbl_size(RLOOKUP_TABLE_ENTRY_SIZE); ··· 1280 1270 if (acpi_table_parse("IVRS", init_iommu_all) != 0) 1281 1271 goto free; 1282 1272 1283 - if (!amd_iommu_initialized) 1273 + if (amd_iommu_init_err) { 1274 + ret = amd_iommu_init_err; 1284 1275 goto free; 1276 + } 1285 1277 1286 1278 if (acpi_table_parse("IVRS", init_memory_definitions) != 0) 1287 1279 goto free; 1280 + 1281 + if (amd_iommu_init_err) { 1282 + ret = amd_iommu_init_err; 1283 + goto free; 1284 + } 1288 1285 1289 1286 ret = sysdev_class_register(&amd_iommu_sysdev_class); 1290 1287 if (ret) ··· 1305 1288 if (ret) 1306 1289 goto free; 1307 1290 1291 + enable_iommus(); 1292 + 1308 1293 if (iommu_pass_through) 1309 1294 ret = amd_iommu_init_passthrough(); 1310 1295 else ··· 1318 1299 amd_iommu_init_api(); 1319 1300 1320 1301 amd_iommu_init_notifier(); 1321 - 1322 - enable_iommus(); 1323 1302 1324 1303 if (iommu_pass_through) 1325 1304 goto out; ··· 1332 1315 return ret; 1333 1316 1334 1317 free: 1318 + disable_iommus(); 1335 1319 1336 1320 amd_iommu_uninit_devices(); 1337 1321
+14 -1
arch/x86/kernel/aperture_64.c
··· 393 393 for (i = 0; i < ARRAY_SIZE(bus_dev_ranges); i++) { 394 394 int bus; 395 395 int dev_base, dev_limit; 396 + u32 ctl; 396 397 397 398 bus = bus_dev_ranges[i].bus; 398 399 dev_base = bus_dev_ranges[i].dev_base; ··· 407 406 gart_iommu_aperture = 1; 408 407 x86_init.iommu.iommu_init = gart_iommu_init; 409 408 410 - aper_order = (read_pci_config(bus, slot, 3, AMD64_GARTAPERTURECTL) >> 1) & 7; 409 + ctl = read_pci_config(bus, slot, 3, 410 + AMD64_GARTAPERTURECTL); 411 + 412 + /* 413 + * Before we do anything else disable the GART. It may 414 + * still be enabled if we boot into a crash-kernel here. 415 + * Reconfiguring the GART while it is enabled could have 416 + * unknown side-effects. 417 + */ 418 + ctl &= ~GARTEN; 419 + write_pci_config(bus, slot, 3, AMD64_GARTAPERTURECTL, ctl); 420 + 421 + aper_order = (ctl >> 1) & 7; 411 422 aper_size = (32 * 1024 * 1024) << aper_order; 412 423 aper_base = read_pci_config(bus, slot, 3, AMD64_GARTAPERTUREBASE) & 0x7fff; 413 424 aper_base <<= 25;
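The aperture_64.c hunk above clears the GARTEN enable bit before decoding the aperture order from `AMD64_GARTAPERTURECTL`. The decode itself can be sketched in isolation; the helper names below are hypothetical, but the bit layout (bit 0 = enable, bits 3:1 = order, size = 32 MiB << order) follows the code in the hunk:

```c
#include <stdint.h>

#define GARTEN 0x1u	/* GART enable bit in the aperture control register */

/* Mirror of "ctl &= ~GARTEN": disable the GART before reconfiguring. */
static inline uint32_t gart_ctl_disable(uint32_t ctl)
{
	return ctl & ~GARTEN;
}

/* Decode the aperture size: bits [3:1] give the order, and the size is
 * 32 MiB shifted left by that order. */
static inline uint64_t gart_aper_size(uint32_t ctl)
{
	uint32_t aper_order = (ctl >> 1) & 7;
	return (32ull * 1024 * 1024) << aper_order;
}
```

For example, an order of 1 yields a 64 MiB aperture, and the maximum order of 7 yields 4 GiB.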
-6
arch/x86/kernel/crash.c
··· 27 27 #include <asm/cpu.h> 28 28 #include <asm/reboot.h> 29 29 #include <asm/virtext.h> 30 - #include <asm/x86_init.h> 31 30 32 31 #if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC) 33 32 ··· 102 103 #ifdef CONFIG_HPET_TIMER 103 104 hpet_disable(); 104 105 #endif 105 - 106 - #ifdef CONFIG_X86_64 107 - x86_platform.iommu_shutdown(); 108 - #endif 109 - 110 106 crash_save_cpu(regs, safe_smp_processor_id()); 111 107 }
+6 -2
arch/x86/kernel/dumpstack.h
··· 14 14 #define get_bp(bp) asm("movq %%rbp, %0" : "=r" (bp) :) 15 15 #endif 16 16 17 + #include <linux/uaccess.h> 18 + 17 19 extern void 18 20 show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, 19 21 unsigned long *stack, unsigned long bp, char *log_lvl); ··· 44 42 get_bp(frame); 45 43 46 44 #ifdef CONFIG_FRAME_POINTER 47 - while (n--) 48 - frame = frame->next_frame; 45 + while (n--) { 46 + if (probe_kernel_address(&frame->next_frame, frame)) 47 + break; 48 + } 49 49 #endif 50 50 51 51 return (unsigned long)frame;
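The dumpstack.h hunk above changes the frame-pointer walk to hop through `frame->next_frame` via `probe_kernel_address()`, so a corrupt pointer stops the walk instead of faulting. A user-space analog of that checked-walk idea (all names illustrative; `checked_read_frame()` stands in for the fault-tolerant read):

```c
#include <stddef.h>
#include <string.h>

struct stack_frame {
	struct stack_frame *next_frame;
};

/* Stand-in for probe_kernel_address(): a checked read that reports
 * failure (here modeled as a NULL source) instead of crashing. */
static int checked_read_frame(struct stack_frame *const *src,
			      struct stack_frame **dst)
{
	if (src == NULL)
		return -1;
	memcpy(dst, src, sizeof(*dst));
	return 0;
}

/* Walk up to n frames, stopping early on a NULL or unreadable link. */
static struct stack_frame *walk_frames(struct stack_frame *frame, int n)
{
	while (n-- && frame) {
		if (checked_read_frame(&frame->next_frame, &frame))
			break;
	}
	return frame;
}
```

The point of the patch is exactly this shape: each dereference in the loop is guarded, so a bogus saved frame pointer ends the traversal gracefully.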
+3
arch/x86/kernel/pci-gart_64.c
··· 565 565 566 566 enable_gart_translation(dev, __pa(agp_gatt_table)); 567 567 } 568 + 569 + /* Flush the GART-TLB to remove stale entries */ 570 + k8_flush_garts(); 568 571 } 569 572 570 573 /*
+30 -31
arch/x86/lguest/boot.c
··· 115 115 local_irq_save(flags); 116 116 if (lguest_data.hcall_status[next_call] != 0xFF) { 117 117 /* Table full, so do normal hcall which will flush table. */ 118 - kvm_hypercall4(call, arg1, arg2, arg3, arg4); 118 + hcall(call, arg1, arg2, arg3, arg4); 119 119 } else { 120 120 lguest_data.hcalls[next_call].arg0 = call; 121 121 lguest_data.hcalls[next_call].arg1 = arg1; ··· 145 145 * So, when we're in lazy mode, we call async_hcall() to store the call for 146 146 * future processing: 147 147 */ 148 - static void lazy_hcall1(unsigned long call, 149 - unsigned long arg1) 148 + static void lazy_hcall1(unsigned long call, unsigned long arg1) 150 149 { 151 150 if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) 152 - kvm_hypercall1(call, arg1); 151 + hcall(call, arg1, 0, 0, 0); 153 152 else 154 153 async_hcall(call, arg1, 0, 0, 0); 155 154 } 156 155 157 156 /* You can imagine what lazy_hcall2, 3 and 4 look like. :*/ 158 157 static void lazy_hcall2(unsigned long call, 159 - unsigned long arg1, 160 - unsigned long arg2) 158 + unsigned long arg1, 159 + unsigned long arg2) 161 160 { 162 161 if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) 163 - kvm_hypercall2(call, arg1, arg2); 162 + hcall(call, arg1, arg2, 0, 0); 164 163 else 165 164 async_hcall(call, arg1, arg2, 0, 0); 166 165 } 167 166 168 167 static void lazy_hcall3(unsigned long call, 169 - unsigned long arg1, 170 - unsigned long arg2, 171 - unsigned long arg3) 168 + unsigned long arg1, 169 + unsigned long arg2, 170 + unsigned long arg3) 172 171 { 173 172 if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) 174 - kvm_hypercall3(call, arg1, arg2, arg3); 173 + hcall(call, arg1, arg2, arg3, 0); 175 174 else 176 175 async_hcall(call, arg1, arg2, arg3, 0); 177 176 } 178 177 179 178 #ifdef CONFIG_X86_PAE 180 179 static void lazy_hcall4(unsigned long call, 181 - unsigned long arg1, 182 - unsigned long arg2, 183 - unsigned long arg3, 184 - unsigned long arg4) 180 + unsigned long arg1, 181 + unsigned long arg2, 182 + 
unsigned long arg3, 183 + unsigned long arg4) 185 184 { 186 185 if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) 187 - kvm_hypercall4(call, arg1, arg2, arg3, arg4); 186 + hcall(call, arg1, arg2, arg3, arg4); 188 187 else 189 188 async_hcall(call, arg1, arg2, arg3, arg4); 190 189 } ··· 195 196 :*/ 196 197 static void lguest_leave_lazy_mmu_mode(void) 197 198 { 198 - kvm_hypercall0(LHCALL_FLUSH_ASYNC); 199 + hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0, 0); 199 200 paravirt_leave_lazy_mmu(); 200 201 } 201 202 202 203 static void lguest_end_context_switch(struct task_struct *next) 203 204 { 204 - kvm_hypercall0(LHCALL_FLUSH_ASYNC); 205 + hcall(LHCALL_FLUSH_ASYNC, 0, 0, 0, 0); 205 206 paravirt_end_context_switch(next); 206 207 } 207 208 ··· 285 286 /* Keep the local copy up to date. */ 286 287 native_write_idt_entry(dt, entrynum, g); 287 288 /* Tell Host about this new entry. */ 288 - kvm_hypercall3(LHCALL_LOAD_IDT_ENTRY, entrynum, desc[0], desc[1]); 289 + hcall(LHCALL_LOAD_IDT_ENTRY, entrynum, desc[0], desc[1], 0); 289 290 } 290 291 291 292 /* ··· 299 300 struct desc_struct *idt = (void *)desc->address; 300 301 301 302 for (i = 0; i < (desc->size+1)/8; i++) 302 - kvm_hypercall3(LHCALL_LOAD_IDT_ENTRY, i, idt[i].a, idt[i].b); 303 + hcall(LHCALL_LOAD_IDT_ENTRY, i, idt[i].a, idt[i].b, 0); 303 304 } 304 305 305 306 /* ··· 320 321 struct desc_struct *gdt = (void *)desc->address; 321 322 322 323 for (i = 0; i < (desc->size+1)/8; i++) 323 - kvm_hypercall3(LHCALL_LOAD_GDT_ENTRY, i, gdt[i].a, gdt[i].b); 324 + hcall(LHCALL_LOAD_GDT_ENTRY, i, gdt[i].a, gdt[i].b, 0); 324 325 } 325 326 326 327 /* ··· 333 334 { 334 335 native_write_gdt_entry(dt, entrynum, desc, type); 335 336 /* Tell Host about this new entry. */ 336 - kvm_hypercall3(LHCALL_LOAD_GDT_ENTRY, entrynum, 337 - dt[entrynum].a, dt[entrynum].b); 337 + hcall(LHCALL_LOAD_GDT_ENTRY, entrynum, 338 + dt[entrynum].a, dt[entrynum].b, 0); 338 339 } 339 340 340 341 /* ··· 930 931 } 931 932 932 933 /* Please wake us this far in the future. 
*/ 933 - kvm_hypercall1(LHCALL_SET_CLOCKEVENT, delta); 934 + hcall(LHCALL_SET_CLOCKEVENT, delta, 0, 0, 0); 934 935 return 0; 935 936 } 936 937 ··· 941 942 case CLOCK_EVT_MODE_UNUSED: 942 943 case CLOCK_EVT_MODE_SHUTDOWN: 943 944 /* A 0 argument shuts the clock down. */ 944 - kvm_hypercall0(LHCALL_SET_CLOCKEVENT); 945 + hcall(LHCALL_SET_CLOCKEVENT, 0, 0, 0, 0); 945 946 break; 946 947 case CLOCK_EVT_MODE_ONESHOT: 947 948 /* This is what we expect. */ ··· 1099 1100 /* STOP! Until an interrupt comes in. */ 1100 1101 static void lguest_safe_halt(void) 1101 1102 { 1102 - kvm_hypercall0(LHCALL_HALT); 1103 + hcall(LHCALL_HALT, 0, 0, 0, 0); 1103 1104 } 1104 1105 1105 1106 /* ··· 1111 1112 */ 1112 1113 static void lguest_power_off(void) 1113 1114 { 1114 - kvm_hypercall2(LHCALL_SHUTDOWN, __pa("Power down"), 1115 - LGUEST_SHUTDOWN_POWEROFF); 1115 + hcall(LHCALL_SHUTDOWN, __pa("Power down"), 1116 + LGUEST_SHUTDOWN_POWEROFF, 0, 0); 1116 1117 } 1117 1118 1118 1119 /* ··· 1122 1123 */ 1123 1124 static int lguest_panic(struct notifier_block *nb, unsigned long l, void *p) 1124 1125 { 1125 - kvm_hypercall2(LHCALL_SHUTDOWN, __pa(p), LGUEST_SHUTDOWN_POWEROFF); 1126 + hcall(LHCALL_SHUTDOWN, __pa(p), LGUEST_SHUTDOWN_POWEROFF, 0, 0); 1126 1127 /* The hcall won't return, but to keep gcc happy, we're "done". */ 1127 1128 return NOTIFY_DONE; 1128 1129 } ··· 1161 1162 len = sizeof(scratch) - 1; 1162 1163 scratch[len] = '\0'; 1163 1164 memcpy(scratch, buf, len); 1164 - kvm_hypercall1(LHCALL_NOTIFY, __pa(scratch)); 1165 + hcall(LHCALL_NOTIFY, __pa(scratch), 0, 0, 0); 1165 1166 1166 1167 /* This routine returns the number of bytes actually written. */ 1167 1168 return len; ··· 1173 1174 */ 1174 1175 static void lguest_restart(char *reason) 1175 1176 { 1176 - kvm_hypercall2(LHCALL_SHUTDOWN, __pa(reason), LGUEST_SHUTDOWN_RESTART); 1177 + hcall(LHCALL_SHUTDOWN, __pa(reason), LGUEST_SHUTDOWN_RESTART, 0, 0); 1177 1178 } 1178 1179 1179 1180 /*G:050
+1 -1
arch/x86/lguest/i386_head.S
··· 32 32 */ 33 33 movl $LHCALL_LGUEST_INIT, %eax 34 34 movl $lguest_data - __PAGE_OFFSET, %ebx 35 - .byte 0x0f,0x01,0xc1 /* KVM_HYPERCALL */ 35 + int $LGUEST_TRAP_ENTRY 36 36 37 37 /* Set up the initial stack so we can run C code. */ 38 38 movl $(init_thread_union+THREAD_SIZE),%esp
+11 -6
drivers/acpi/acpica/exprep.c
··· 471 471 /* allow full data read from EC address space */ 472 472 if (obj_desc->field.region_obj->region.space_id == 473 473 ACPI_ADR_SPACE_EC) { 474 - if (obj_desc->common_field.bit_length > 8) 475 - obj_desc->common_field.access_bit_width = 476 - ACPI_ROUND_UP(obj_desc->common_field. 477 - bit_length, 8); 474 + if (obj_desc->common_field.bit_length > 8) { 475 + unsigned width = 476 + ACPI_ROUND_BITS_UP_TO_BYTES( 477 + obj_desc->common_field.bit_length); 478 + // access_bit_width is u8, don't overflow it 479 + if (width > 8) 480 + width = 8; 478 481 obj_desc->common_field.access_byte_width = 479 - ACPI_DIV_8(obj_desc->common_field. 480 - access_bit_width); 482 + width; 483 + obj_desc->common_field.access_bit_width = 484 + 8 * width; 485 + } 481 486 } 482 487 483 488 ACPI_DEBUG_PRINT((ACPI_DB_BFIELD,
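The exprep.c hunk above rounds the EC field's bit length up to whole bytes and clamps the result so the one-byte `access_bit_width` field (width * 8, at most 64) cannot overflow. A standalone sketch of that computation; the macro mirrors ACPICA's `ACPI_ROUND_BITS_UP_TO_BYTES` but is redefined here for illustration:

```c
/* Round a bit count up to whole bytes. */
#define ACPI_ROUND_BITS_UP_TO_BYTES(n) (((n) + 7) / 8)

/* Compute the EC field access width in bytes, clamped so that
 * 8 * width still fits in a u8 (the kernel's access_bit_width). */
static unsigned ec_access_byte_width(unsigned bit_length)
{
	unsigned width = ACPI_ROUND_BITS_UP_TO_BYTES(bit_length);
	if (width > 8)
		width = 8;	/* 8 bytes -> 64 bits, the largest safe value */
	return width;
}
```

Without the clamp, a sufficiently long field would produce a bit width that wraps when stored in the u8, which is the bug the patch fixes.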
-3
drivers/char/agp/intel-agp.c
··· 1817 1817 pci_write_config_byte(agp_bridge->dev, INTEL_I845_AGPM, temp2 | (1 << 1)); 1818 1818 /* clear any possible error conditions */ 1819 1819 pci_write_config_word(agp_bridge->dev, INTEL_I845_ERRSTS, 0x001c); 1820 - 1821 - intel_i830_setup_flush(); 1822 1820 return 0; 1823 1821 } 1824 1822 ··· 2186 2188 .agp_destroy_page = agp_generic_destroy_page, 2187 2189 .agp_destroy_pages = agp_generic_destroy_pages, 2188 2190 .agp_type_to_mask_type = agp_generic_type_to_mask_type, 2189 - .chipset_flush = intel_i830_chipset_flush, 2190 2191 }; 2191 2192 2192 2193 static const struct agp_bridge_driver intel_850_driver = {
+5 -3
drivers/char/pcmcia/cm4000_cs.c
··· 1026 1026 1027 1027 xoutb(0, REG_FLAGS1(iobase)); /* clear detectCMM */ 1028 1028 /* last check before exit */ 1029 - if (!io_detect_cm4000(iobase, dev)) 1030 - count = -ENODEV; 1029 + if (!io_detect_cm4000(iobase, dev)) { 1030 + rc = -ENODEV; 1031 + goto release_io; 1032 + } 1031 1033 1032 1034 if (test_bit(IS_INVREV, &dev->flags) && count > 0) 1033 1035 str_invert_revert(dev->rbuf, count); 1034 1036 1035 1037 if (copy_to_user(buf, dev->rbuf, count)) 1036 - return -EFAULT; 1038 + rc = -EFAULT; 1037 1039 1038 1040 release_io: 1039 1041 clear_bit(LOCK_IO, &dev->flags);
+13 -10
drivers/firewire/core-cdev.c
··· 960 960 u.packet.header_length = GET_HEADER_LENGTH(control); 961 961 962 962 if (ctx->type == FW_ISO_CONTEXT_TRANSMIT) { 963 + if (u.packet.header_length % 4 != 0) 964 + return -EINVAL; 963 965 header_length = u.packet.header_length; 964 966 } else { 965 967 /* ··· 971 969 if (ctx->header_size == 0) { 972 970 if (u.packet.header_length > 0) 973 971 return -EINVAL; 974 - } else if (u.packet.header_length % ctx->header_size != 0) { 972 + } else if (u.packet.header_length == 0 || 973 + u.packet.header_length % ctx->header_size != 0) { 975 974 return -EINVAL; 976 975 } 977 976 header_length = 0; ··· 1357 1354 return -ENODEV; 1358 1355 1359 1356 if (_IOC_TYPE(cmd) != '#' || 1360 - _IOC_NR(cmd) >= ARRAY_SIZE(ioctl_handlers)) 1357 + _IOC_NR(cmd) >= ARRAY_SIZE(ioctl_handlers) || 1358 + _IOC_SIZE(cmd) > sizeof(buffer)) 1361 1359 return -EINVAL; 1362 1360 1363 - if (_IOC_DIR(cmd) & _IOC_WRITE) { 1364 - if (_IOC_SIZE(cmd) > sizeof(buffer) || 1365 - copy_from_user(&buffer, arg, _IOC_SIZE(cmd))) 1361 + if (_IOC_DIR(cmd) == _IOC_READ) 1362 + memset(&buffer, 0, _IOC_SIZE(cmd)); 1363 + 1364 + if (_IOC_DIR(cmd) & _IOC_WRITE) 1365 + if (copy_from_user(&buffer, arg, _IOC_SIZE(cmd))) 1366 1366 return -EFAULT; 1367 - } 1368 1367 1369 1368 ret = ioctl_handlers[_IOC_NR(cmd)](client, &buffer); 1370 1369 if (ret < 0) 1371 1370 return ret; 1372 1371 1373 - if (_IOC_DIR(cmd) & _IOC_READ) { 1374 - if (_IOC_SIZE(cmd) > sizeof(buffer) || 1375 - copy_to_user(arg, &buffer, _IOC_SIZE(cmd))) 1372 + if (_IOC_DIR(cmd) & _IOC_READ) 1373 + if (copy_to_user(arg, &buffer, _IOC_SIZE(cmd))) 1376 1374 return -EFAULT; 1377 - } 1378 1375 1379 1376 return ret; 1380 1377 }
+2 -2
drivers/gpu/drm/drm_stub.c
··· 516 516 } 517 517 driver = dev->driver; 518 518 519 - drm_vblank_cleanup(dev); 520 - 521 519 drm_lastclose(dev); 522 520 523 521 if (drm_core_has_MTRR(dev) && drm_core_has_AGP(dev) && ··· 534 536 kfree(dev->agp); 535 537 dev->agp = NULL; 536 538 } 539 + 540 + drm_vblank_cleanup(dev); 537 541 538 542 list_for_each_entry_safe(r_list, list_temp, &dev->maplist, head) 539 543 drm_rmmap(dev, r_list->map);
+1 -1
drivers/gpu/drm/i915/i915_debugfs.c
··· 226 226 } else { 227 227 struct drm_i915_gem_object *obj_priv; 228 228 229 - obj_priv = obj->driver_private; 229 + obj_priv = to_intel_bo(obj); 230 230 seq_printf(m, "Fenced object[%2d] = %p: %s " 231 231 "%08x %08zx %08x %s %08x %08x %d", 232 232 i, obj, get_pin_flag(obj_priv),
+3 -3
drivers/gpu/drm/i915/i915_drv.c
··· 80 80 .is_i915g = 1, .is_i9xx = 1, .cursor_needs_physical = 1, 81 81 }; 82 82 const static struct intel_device_info intel_i915gm_info = { 83 - .is_i9xx = 1, .is_mobile = 1, .has_fbc = 1, 83 + .is_i9xx = 1, .is_mobile = 1, 84 84 .cursor_needs_physical = 1, 85 85 }; 86 86 const static struct intel_device_info intel_i945g_info = { 87 87 .is_i9xx = 1, .has_hotplug = 1, .cursor_needs_physical = 1, 88 88 }; 89 89 const static struct intel_device_info intel_i945gm_info = { 90 - .is_i945gm = 1, .is_i9xx = 1, .is_mobile = 1, .has_fbc = 1, 90 + .is_i945gm = 1, .is_i9xx = 1, .is_mobile = 1, 91 91 .has_hotplug = 1, .cursor_needs_physical = 1, 92 92 }; 93 93 ··· 361 361 !dev_priv->mm.suspended) { 362 362 drm_i915_ring_buffer_t *ring = &dev_priv->ring; 363 363 struct drm_gem_object *obj = ring->ring_obj; 364 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 364 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 365 365 dev_priv->mm.suspended = 0; 366 366 367 367 /* Stop the ring if it's running. */
+4
drivers/gpu/drm/i915/i915_drv.h
··· 611 611 /* Reclocking support */ 612 612 bool render_reclock_avail; 613 613 bool lvds_downclock_avail; 614 + /* indicate whether the LVDS EDID is OK */ 615 + bool lvds_edid_good; 614 616 /* indicates the reduced downclock for LVDS*/ 615 617 int lvds_downclock; 616 618 struct work_struct idle_work; ··· 732 730 */ 733 731 atomic_t pending_flip; 734 732 }; 733 + 734 + #define to_intel_bo(x) ((struct drm_i915_gem_object *) (x)->driver_private) 735 735 736 736 /** 737 737 * Request queue structure.
+66 -66
drivers/gpu/drm/i915/i915_gem.c
··· 163 163 static int i915_gem_object_needs_bit17_swizzle(struct drm_gem_object *obj) 164 164 { 165 165 drm_i915_private_t *dev_priv = obj->dev->dev_private; 166 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 166 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 167 167 168 168 return dev_priv->mm.bit_6_swizzle_x == I915_BIT_6_SWIZZLE_9_10_17 && 169 169 obj_priv->tiling_mode != I915_TILING_NONE; ··· 264 264 struct drm_i915_gem_pread *args, 265 265 struct drm_file *file_priv) 266 266 { 267 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 267 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 268 268 ssize_t remain; 269 269 loff_t offset, page_base; 270 270 char __user *user_data; ··· 285 285 if (ret != 0) 286 286 goto fail_put_pages; 287 287 288 - obj_priv = obj->driver_private; 288 + obj_priv = to_intel_bo(obj); 289 289 offset = args->offset; 290 290 291 291 while (remain > 0) { ··· 354 354 struct drm_i915_gem_pread *args, 355 355 struct drm_file *file_priv) 356 356 { 357 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 357 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 358 358 struct mm_struct *mm = current->mm; 359 359 struct page **user_pages; 360 360 ssize_t remain; ··· 403 403 if (ret != 0) 404 404 goto fail_put_pages; 405 405 406 - obj_priv = obj->driver_private; 406 + obj_priv = to_intel_bo(obj); 407 407 offset = args->offset; 408 408 409 409 while (remain > 0) { ··· 479 479 obj = drm_gem_object_lookup(dev, file_priv, args->handle); 480 480 if (obj == NULL) 481 481 return -EBADF; 482 - obj_priv = obj->driver_private; 482 + obj_priv = to_intel_bo(obj); 483 483 484 484 /* Bounds check source. 
485 485 * ··· 581 581 struct drm_i915_gem_pwrite *args, 582 582 struct drm_file *file_priv) 583 583 { 584 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 584 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 585 585 drm_i915_private_t *dev_priv = dev->dev_private; 586 586 ssize_t remain; 587 587 loff_t offset, page_base; ··· 605 605 if (ret) 606 606 goto fail; 607 607 608 - obj_priv = obj->driver_private; 608 + obj_priv = to_intel_bo(obj); 609 609 offset = obj_priv->gtt_offset + args->offset; 610 610 611 611 while (remain > 0) { ··· 655 655 struct drm_i915_gem_pwrite *args, 656 656 struct drm_file *file_priv) 657 657 { 658 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 658 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 659 659 drm_i915_private_t *dev_priv = dev->dev_private; 660 660 ssize_t remain; 661 661 loff_t gtt_page_base, offset; ··· 699 699 if (ret) 700 700 goto out_unpin_object; 701 701 702 - obj_priv = obj->driver_private; 702 + obj_priv = to_intel_bo(obj); 703 703 offset = obj_priv->gtt_offset + args->offset; 704 704 705 705 while (remain > 0) { ··· 761 761 struct drm_i915_gem_pwrite *args, 762 762 struct drm_file *file_priv) 763 763 { 764 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 764 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 765 765 ssize_t remain; 766 766 loff_t offset, page_base; 767 767 char __user *user_data; ··· 781 781 if (ret != 0) 782 782 goto fail_put_pages; 783 783 784 - obj_priv = obj->driver_private; 784 + obj_priv = to_intel_bo(obj); 785 785 offset = args->offset; 786 786 obj_priv->dirty = 1; 787 787 ··· 829 829 struct drm_i915_gem_pwrite *args, 830 830 struct drm_file *file_priv) 831 831 { 832 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 832 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 833 833 struct mm_struct *mm = current->mm; 834 834 struct page **user_pages; 835 835 ssize_t remain; ··· 877 877 if (ret != 0) 878 878 goto 
fail_put_pages; 879 879 880 - obj_priv = obj->driver_private; 880 + obj_priv = to_intel_bo(obj); 881 881 offset = args->offset; 882 882 obj_priv->dirty = 1; 883 883 ··· 952 952 obj = drm_gem_object_lookup(dev, file_priv, args->handle); 953 953 if (obj == NULL) 954 954 return -EBADF; 955 - obj_priv = obj->driver_private; 955 + obj_priv = to_intel_bo(obj); 956 956 957 957 /* Bounds check destination. 958 958 * ··· 1034 1034 obj = drm_gem_object_lookup(dev, file_priv, args->handle); 1035 1035 if (obj == NULL) 1036 1036 return -EBADF; 1037 - obj_priv = obj->driver_private; 1037 + obj_priv = to_intel_bo(obj); 1038 1038 1039 1039 mutex_lock(&dev->struct_mutex); 1040 1040 ··· 1096 1096 DRM_INFO("%s: sw_finish %d (%p %zd)\n", 1097 1097 __func__, args->handle, obj, obj->size); 1098 1098 #endif 1099 - obj_priv = obj->driver_private; 1099 + obj_priv = to_intel_bo(obj); 1100 1100 1101 1101 /* Pinned buffers may be scanout, so flush the cache */ 1102 1102 if (obj_priv->pin_count) ··· 1167 1167 struct drm_gem_object *obj = vma->vm_private_data; 1168 1168 struct drm_device *dev = obj->dev; 1169 1169 struct drm_i915_private *dev_priv = dev->dev_private; 1170 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1170 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1171 1171 pgoff_t page_offset; 1172 1172 unsigned long pfn; 1173 1173 int ret = 0; ··· 1234 1234 { 1235 1235 struct drm_device *dev = obj->dev; 1236 1236 struct drm_gem_mm *mm = dev->mm_private; 1237 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1237 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1238 1238 struct drm_map_list *list; 1239 1239 struct drm_local_map *map; 1240 1240 int ret = 0; ··· 1305 1305 i915_gem_release_mmap(struct drm_gem_object *obj) 1306 1306 { 1307 1307 struct drm_device *dev = obj->dev; 1308 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1308 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1309 1309 1310 1310 if 
(dev->dev_mapping) 1311 1311 unmap_mapping_range(dev->dev_mapping, ··· 1316 1316 i915_gem_free_mmap_offset(struct drm_gem_object *obj) 1317 1317 { 1318 1318 struct drm_device *dev = obj->dev; 1319 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1319 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1320 1320 struct drm_gem_mm *mm = dev->mm_private; 1321 1321 struct drm_map_list *list; 1322 1322 ··· 1347 1347 i915_gem_get_gtt_alignment(struct drm_gem_object *obj) 1348 1348 { 1349 1349 struct drm_device *dev = obj->dev; 1350 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1350 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1351 1351 int start, i; 1352 1352 1353 1353 /* ··· 1406 1406 1407 1407 mutex_lock(&dev->struct_mutex); 1408 1408 1409 - obj_priv = obj->driver_private; 1409 + obj_priv = to_intel_bo(obj); 1410 1410 1411 1411 if (obj_priv->madv != I915_MADV_WILLNEED) { 1412 1412 DRM_ERROR("Attempting to mmap a purgeable buffer\n"); ··· 1450 1450 void 1451 1451 i915_gem_object_put_pages(struct drm_gem_object *obj) 1452 1452 { 1453 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1453 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1454 1454 int page_count = obj->size / PAGE_SIZE; 1455 1455 int i; 1456 1456 ··· 1486 1486 { 1487 1487 struct drm_device *dev = obj->dev; 1488 1488 drm_i915_private_t *dev_priv = dev->dev_private; 1489 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1489 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1490 1490 1491 1491 /* Add a reference if we're newly entering the active list. 
*/ 1492 1492 if (!obj_priv->active) { ··· 1506 1506 { 1507 1507 struct drm_device *dev = obj->dev; 1508 1508 drm_i915_private_t *dev_priv = dev->dev_private; 1509 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1509 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1510 1510 1511 1511 BUG_ON(!obj_priv->active); 1512 1512 list_move_tail(&obj_priv->list, &dev_priv->mm.flushing_list); ··· 1517 1517 static void 1518 1518 i915_gem_object_truncate(struct drm_gem_object *obj) 1519 1519 { 1520 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1520 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1521 1521 struct inode *inode; 1522 1522 1523 1523 inode = obj->filp->f_path.dentry->d_inode; ··· 1538 1538 { 1539 1539 struct drm_device *dev = obj->dev; 1540 1540 drm_i915_private_t *dev_priv = dev->dev_private; 1541 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1541 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1542 1542 1543 1543 i915_verify_inactive(dev, __FILE__, __LINE__); 1544 1544 if (obj_priv->pin_count != 0) ··· 1965 1965 i915_gem_object_wait_rendering(struct drm_gem_object *obj) 1966 1966 { 1967 1967 struct drm_device *dev = obj->dev; 1968 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 1968 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 1969 1969 int ret; 1970 1970 1971 1971 /* This function only exists to support waiting for existing rendering, ··· 1997 1997 { 1998 1998 struct drm_device *dev = obj->dev; 1999 1999 drm_i915_private_t *dev_priv = dev->dev_private; 2000 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2000 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2001 2001 int ret = 0; 2002 2002 2003 2003 #if WATCH_BUF ··· 2173 2173 #if WATCH_LRU 2174 2174 DRM_INFO("%s: evicting %p\n", __func__, obj); 2175 2175 #endif 2176 - obj_priv = obj->driver_private; 2176 + obj_priv = to_intel_bo(obj); 2177 2177 BUG_ON(obj_priv->pin_count != 0); 2178 
2178 BUG_ON(obj_priv->active); 2179 2179 ··· 2244 2244 i915_gem_object_get_pages(struct drm_gem_object *obj, 2245 2245 gfp_t gfpmask) 2246 2246 { 2247 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2247 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2248 2248 int page_count, i; 2249 2249 struct address_space *mapping; 2250 2250 struct inode *inode; ··· 2297 2297 struct drm_gem_object *obj = reg->obj; 2298 2298 struct drm_device *dev = obj->dev; 2299 2299 drm_i915_private_t *dev_priv = dev->dev_private; 2300 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2300 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2301 2301 int regnum = obj_priv->fence_reg; 2302 2302 uint64_t val; 2303 2303 ··· 2319 2319 struct drm_gem_object *obj = reg->obj; 2320 2320 struct drm_device *dev = obj->dev; 2321 2321 drm_i915_private_t *dev_priv = dev->dev_private; 2322 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2322 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2323 2323 int regnum = obj_priv->fence_reg; 2324 2324 uint64_t val; 2325 2325 ··· 2339 2339 struct drm_gem_object *obj = reg->obj; 2340 2340 struct drm_device *dev = obj->dev; 2341 2341 drm_i915_private_t *dev_priv = dev->dev_private; 2342 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2342 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2343 2343 int regnum = obj_priv->fence_reg; 2344 2344 int tile_width; 2345 2345 uint32_t fence_reg, val; ··· 2381 2381 struct drm_gem_object *obj = reg->obj; 2382 2382 struct drm_device *dev = obj->dev; 2383 2383 drm_i915_private_t *dev_priv = dev->dev_private; 2384 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2384 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2385 2385 int regnum = obj_priv->fence_reg; 2386 2386 uint32_t val; 2387 2387 uint32_t pitch_val; ··· 2425 2425 if (!reg->obj) 2426 2426 return i; 2427 2427 2428 - obj_priv = reg->obj->driver_private; 2428 + 
obj_priv = to_intel_bo(reg->obj); 2429 2429 if (!obj_priv->pin_count) 2430 2430 avail++; 2431 2431 } ··· 2480 2480 { 2481 2481 struct drm_device *dev = obj->dev; 2482 2482 struct drm_i915_private *dev_priv = dev->dev_private; 2483 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2483 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2484 2484 struct drm_i915_fence_reg *reg = NULL; 2485 2485 int ret; 2486 2486 ··· 2547 2547 { 2548 2548 struct drm_device *dev = obj->dev; 2549 2549 drm_i915_private_t *dev_priv = dev->dev_private; 2550 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2550 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2551 2551 2552 2552 if (IS_GEN6(dev)) { 2553 2553 I915_WRITE64(FENCE_REG_SANDYBRIDGE_0 + ··· 2583 2583 i915_gem_object_put_fence_reg(struct drm_gem_object *obj) 2584 2584 { 2585 2585 struct drm_device *dev = obj->dev; 2586 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2586 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2587 2587 2588 2588 if (obj_priv->fence_reg == I915_FENCE_REG_NONE) 2589 2589 return 0; ··· 2621 2621 { 2622 2622 struct drm_device *dev = obj->dev; 2623 2623 drm_i915_private_t *dev_priv = dev->dev_private; 2624 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2624 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2625 2625 struct drm_mm_node *free_space; 2626 2626 gfp_t gfpmask = __GFP_NORETRY | __GFP_NOWARN; 2627 2627 int ret; ··· 2728 2728 void 2729 2729 i915_gem_clflush_object(struct drm_gem_object *obj) 2730 2730 { 2731 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2731 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2732 2732 2733 2733 /* If we don't have a page list set up, then we're not pinned 2734 2734 * to GPU, and we can ignore the cache flush because it'll happen ··· 2829 2829 int 2830 2830 i915_gem_object_set_to_gtt_domain(struct drm_gem_object *obj, int write) 2831 2831 { 2832 - struct 
drm_i915_gem_object *obj_priv = obj->driver_private; 2832 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2833 2833 uint32_t old_write_domain, old_read_domains; 2834 2834 int ret; 2835 2835 ··· 2879 2879 i915_gem_object_set_to_display_plane(struct drm_gem_object *obj) 2880 2880 { 2881 2881 struct drm_device *dev = obj->dev; 2882 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 2882 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 2883 2883 uint32_t old_write_domain, old_read_domains; 2884 2884 int ret; 2885 2885 ··· 3092 3092 i915_gem_object_set_to_gpu_domain(struct drm_gem_object *obj) 3093 3093 { 3094 3094 struct drm_device *dev = obj->dev; 3095 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3095 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 3096 3096 uint32_t invalidate_domains = 0; 3097 3097 uint32_t flush_domains = 0; 3098 3098 uint32_t old_read_domains; ··· 3177 3177 static void 3178 3178 i915_gem_object_set_to_full_cpu_read_domain(struct drm_gem_object *obj) 3179 3179 { 3180 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3180 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 3181 3181 3182 3182 if (!obj_priv->page_cpu_valid) 3183 3183 return; ··· 3217 3217 i915_gem_object_set_cpu_read_domain_range(struct drm_gem_object *obj, 3218 3218 uint64_t offset, uint64_t size) 3219 3219 { 3220 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3220 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 3221 3221 uint32_t old_read_domains; 3222 3222 int i, ret; 3223 3223 ··· 3286 3286 { 3287 3287 struct drm_device *dev = obj->dev; 3288 3288 drm_i915_private_t *dev_priv = dev->dev_private; 3289 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3289 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 3290 3290 int i, ret; 3291 3291 void __iomem *reloc_page; 3292 3292 bool need_fence; ··· 3337 3337 i915_gem_object_unpin(obj); 3338 3338 return -EBADF; 3339 
3339 } 3340 - target_obj_priv = target_obj->driver_private; 3340 + target_obj_priv = to_intel_bo(target_obj); 3341 3341 3342 3342 #if WATCH_RELOC 3343 3343 DRM_INFO("%s: obj %p offset %08x target %d " ··· 3689 3689 prepare_to_wait(&dev_priv->pending_flip_queue, 3690 3690 &wait, TASK_INTERRUPTIBLE); 3691 3691 for (i = 0; i < count; i++) { 3692 - obj_priv = object_list[i]->driver_private; 3692 + obj_priv = to_intel_bo(object_list[i]); 3693 3693 if (atomic_read(&obj_priv->pending_flip) > 0) 3694 3694 break; 3695 3695 } ··· 3798 3798 goto err; 3799 3799 } 3800 3800 3801 - obj_priv = object_list[i]->driver_private; 3801 + obj_priv = to_intel_bo(object_list[i]); 3802 3802 if (obj_priv->in_execbuffer) { 3803 3803 DRM_ERROR("Object %p appears more than once in object list\n", 3804 3804 object_list[i]); ··· 3924 3924 3925 3925 for (i = 0; i < args->buffer_count; i++) { 3926 3926 struct drm_gem_object *obj = object_list[i]; 3927 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 3927 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 3928 3928 uint32_t old_write_domain = obj->write_domain; 3929 3929 3930 3930 obj->write_domain = obj->pending_write_domain; ··· 3999 3999 4000 4000 for (i = 0; i < args->buffer_count; i++) { 4001 4001 if (object_list[i]) { 4002 - obj_priv = object_list[i]->driver_private; 4002 + obj_priv = to_intel_bo(object_list[i]); 4003 4003 obj_priv->in_execbuffer = false; 4004 4004 } 4005 4005 drm_gem_object_unreference(object_list[i]); ··· 4177 4177 i915_gem_object_pin(struct drm_gem_object *obj, uint32_t alignment) 4178 4178 { 4179 4179 struct drm_device *dev = obj->dev; 4180 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 4180 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 4181 4181 int ret; 4182 4182 4183 4183 i915_verify_inactive(dev, __FILE__, __LINE__); ··· 4210 4210 { 4211 4211 struct drm_device *dev = obj->dev; 4212 4212 drm_i915_private_t *dev_priv = dev->dev_private; 4213 - struct drm_i915_gem_object 
*obj_priv = obj->driver_private; 4213 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 4214 4214 4215 4215 i915_verify_inactive(dev, __FILE__, __LINE__); 4216 4216 obj_priv->pin_count--; ··· 4250 4250 mutex_unlock(&dev->struct_mutex); 4251 4251 return -EBADF; 4252 4252 } 4253 - obj_priv = obj->driver_private; 4253 + obj_priv = to_intel_bo(obj); 4254 4254 4255 4255 if (obj_priv->madv != I915_MADV_WILLNEED) { 4256 4256 DRM_ERROR("Attempting to pin a purgeable buffer\n"); ··· 4307 4307 return -EBADF; 4308 4308 } 4309 4309 4310 - obj_priv = obj->driver_private; 4310 + obj_priv = to_intel_bo(obj); 4311 4311 if (obj_priv->pin_filp != file_priv) { 4312 4312 DRM_ERROR("Not pinned by caller in i915_gem_pin_ioctl(): %d\n", 4313 4313 args->handle); ··· 4349 4349 */ 4350 4350 i915_gem_retire_requests(dev); 4351 4351 4352 - obj_priv = obj->driver_private; 4352 + obj_priv = to_intel_bo(obj); 4353 4353 /* Don't count being on the flushing list against the object being 4354 4354 * done. Otherwise, a buffer left on the flushing list but not getting 4355 4355 * flushed (because nobody's flushing that domain) won't ever return ··· 4395 4395 } 4396 4396 4397 4397 mutex_lock(&dev->struct_mutex); 4398 - obj_priv = obj->driver_private; 4398 + obj_priv = to_intel_bo(obj); 4399 4399 4400 4400 if (obj_priv->pin_count) { 4401 4401 drm_gem_object_unreference(obj); ··· 4456 4456 void i915_gem_free_object(struct drm_gem_object *obj) 4457 4457 { 4458 4458 struct drm_device *dev = obj->dev; 4459 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 4459 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 4460 4460 4461 4461 trace_i915_gem_object_destroy(obj); 4462 4462 ··· 4565 4565 DRM_ERROR("Failed to allocate status page\n"); 4566 4566 return -ENOMEM; 4567 4567 } 4568 - obj_priv = obj->driver_private; 4568 + obj_priv = to_intel_bo(obj); 4569 4569 obj_priv->agp_type = AGP_USER_CACHED_MEMORY; 4570 4570 4571 4571 ret = i915_gem_object_pin(obj, 4096); ··· 4609 4609 return; 
4610 4610 4611 4611 obj = dev_priv->hws_obj; 4612 - obj_priv = obj->driver_private; 4612 + obj_priv = to_intel_bo(obj); 4613 4613 4614 4614 kunmap(obj_priv->pages[0]); 4615 4615 i915_gem_object_unpin(obj); ··· 4643 4643 i915_gem_cleanup_hws(dev); 4644 4644 return -ENOMEM; 4645 4645 } 4646 - obj_priv = obj->driver_private; 4646 + obj_priv = to_intel_bo(obj); 4647 4647 4648 4648 ret = i915_gem_object_pin(obj, 4096); 4649 4649 if (ret != 0) { ··· 4936 4936 int ret; 4937 4937 int page_count; 4938 4938 4939 - obj_priv = obj->driver_private; 4939 + obj_priv = to_intel_bo(obj); 4940 4940 if (!obj_priv->phys_obj) 4941 4941 return; 4942 4942 ··· 4975 4975 if (id > I915_MAX_PHYS_OBJECT) 4976 4976 return -EINVAL; 4977 4977 4978 - obj_priv = obj->driver_private; 4978 + obj_priv = to_intel_bo(obj); 4979 4979 4980 4980 if (obj_priv->phys_obj) { 4981 4981 if (obj_priv->phys_obj->id == id) ··· 5026 5026 struct drm_i915_gem_pwrite *args, 5027 5027 struct drm_file *file_priv) 5028 5028 { 5029 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 5029 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 5030 5030 void *obj_addr; 5031 5031 int ret; 5032 5032 char __user *user_data;
+2 -2
drivers/gpu/drm/i915/i915_gem_debug.c
··· 72 72 i915_gem_dump_object(struct drm_gem_object *obj, int len, 73 73 const char *where, uint32_t mark) 74 74 { 75 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 75 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 76 76 int page; 77 77 78 78 DRM_INFO("%s: object at offset %08x\n", where, obj_priv->gtt_offset); ··· 137 137 i915_gem_object_check_coherency(struct drm_gem_object *obj, int handle) 138 138 { 139 139 struct drm_device *dev = obj->dev; 140 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 140 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 141 141 int page; 142 142 uint32_t *gtt_mapping; 143 143 uint32_t *backing_map = NULL;
+5 -5
drivers/gpu/drm/i915/i915_gem_tiling.c
··· 240 240 i915_gem_object_fence_offset_ok(struct drm_gem_object *obj, int tiling_mode) 241 241 { 242 242 struct drm_device *dev = obj->dev; 243 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 243 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 244 244 245 245 if (obj_priv->gtt_space == NULL) 246 246 return true; ··· 280 280 obj = drm_gem_object_lookup(dev, file_priv, args->handle); 281 281 if (obj == NULL) 282 282 return -EINVAL; 283 - obj_priv = obj->driver_private; 283 + obj_priv = to_intel_bo(obj); 284 284 285 285 if (!i915_tiling_ok(dev, args->stride, obj->size, args->tiling_mode)) { 286 286 drm_gem_object_unreference_unlocked(obj); ··· 364 364 obj = drm_gem_object_lookup(dev, file_priv, args->handle); 365 365 if (obj == NULL) 366 366 return -EINVAL; 367 - obj_priv = obj->driver_private; 367 + obj_priv = to_intel_bo(obj); 368 368 369 369 mutex_lock(&dev->struct_mutex); 370 370 ··· 427 427 { 428 428 struct drm_device *dev = obj->dev; 429 429 drm_i915_private_t *dev_priv = dev->dev_private; 430 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 430 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 431 431 int page_count = obj->size >> PAGE_SHIFT; 432 432 int i; 433 433 ··· 456 456 { 457 457 struct drm_device *dev = obj->dev; 458 458 drm_i915_private_t *dev_priv = dev->dev_private; 459 - struct drm_i915_gem_object *obj_priv = obj->driver_private; 459 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj); 460 460 int page_count = obj->size >> PAGE_SHIFT; 461 461 int i; 462 462
+4 -4
drivers/gpu/drm/i915/i915_irq.c
··· 260 260 261 261 if (mode_config->num_connector) { 262 262 list_for_each_entry(connector, &mode_config->connector_list, head) { 263 - struct intel_output *intel_output = to_intel_output(connector); 263 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 264 264 265 - if (intel_output->hot_plug) 266 - (*intel_output->hot_plug) (intel_output); 265 + if (intel_encoder->hot_plug) 266 + (*intel_encoder->hot_plug) (intel_encoder); 267 267 } 268 268 } 269 269 /* Just fire off a uevent and let userspace tell us what to do */ ··· 444 444 if (src == NULL) 445 445 return NULL; 446 446 447 - src_priv = src->driver_private; 447 + src_priv = to_intel_bo(src); 448 448 if (src_priv->pages == NULL) 449 449 return NULL; 450 450
+34 -34
drivers/gpu/drm/i915/intel_crt.c
···
247 247
248 248 static bool intel_crt_detect_ddc(struct drm_connector *connector)
249 249 {
250 - struct intel_output *intel_output = to_intel_output(connector);
250 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
251 251
252 252 /* CRT should always be at 0, but check anyway */
253 - if (intel_output->type != INTEL_OUTPUT_ANALOG)
253 + if (intel_encoder->type != INTEL_OUTPUT_ANALOG)
254 254 return false;
255 255
256 - return intel_ddc_probe(intel_output);
256 + return intel_ddc_probe(intel_encoder);
257 257 }
258 258
259 259 static enum drm_connector_status
260 - intel_crt_load_detect(struct drm_crtc *crtc, struct intel_output *intel_output)
260 + intel_crt_load_detect(struct drm_crtc *crtc, struct intel_encoder *intel_encoder)
261 261 {
262 - struct drm_encoder *encoder = &intel_output->enc;
262 + struct drm_encoder *encoder = &intel_encoder->enc;
263 263 struct drm_device *dev = encoder->dev;
264 264 struct drm_i915_private *dev_priv = dev->dev_private;
265 265 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
···
387 387 static enum drm_connector_status intel_crt_detect(struct drm_connector *connector)
388 388 {
389 389 struct drm_device *dev = connector->dev;
390 - struct intel_output *intel_output = to_intel_output(connector);
391 - struct drm_encoder *encoder = &intel_output->enc;
390 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
391 + struct drm_encoder *encoder = &intel_encoder->enc;
392 392 struct drm_crtc *crtc;
393 393 int dpms_mode;
394 394 enum drm_connector_status status;
···
405 405
406 406 /* for pre-945g platforms use load detect */
407 407 if (encoder->crtc && encoder->crtc->enabled) {
408 - status = intel_crt_load_detect(encoder->crtc, intel_output);
408 + status = intel_crt_load_detect(encoder->crtc, intel_encoder);
409 409 } else {
410 - crtc = intel_get_load_detect_pipe(intel_output,
410 + crtc = intel_get_load_detect_pipe(intel_encoder,
411 411 NULL, &dpms_mode);
412 412 if (crtc) {
413 - status = intel_crt_load_detect(crtc, intel_output);
414 - intel_release_load_detect_pipe(intel_output, dpms_mode);
413 + status = intel_crt_load_detect(crtc, intel_encoder);
414 + intel_release_load_detect_pipe(intel_encoder, dpms_mode);
415 415 } else
416 416 status = connector_status_unknown;
417 417 }
···
421 421
422 422 static void intel_crt_destroy(struct drm_connector *connector)
423 423 {
424 - struct intel_output *intel_output = to_intel_output(connector);
424 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
425 425
426 - intel_i2c_destroy(intel_output->ddc_bus);
426 + intel_i2c_destroy(intel_encoder->ddc_bus);
427 427 drm_sysfs_connector_remove(connector);
428 428 drm_connector_cleanup(connector);
429 429 kfree(connector);
···
432 432 static int intel_crt_get_modes(struct drm_connector *connector)
433 433 {
434 434 int ret;
435 - struct intel_output *intel_output = to_intel_output(connector);
435 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
436 436 struct i2c_adapter *ddcbus;
437 437 struct drm_device *dev = connector->dev;
438 438
439 439
440 - ret = intel_ddc_get_modes(intel_output);
440 + ret = intel_ddc_get_modes(intel_encoder);
441 441 if (ret || !IS_G4X(dev))
442 442 goto end;
443 443
444 - ddcbus = intel_output->ddc_bus;
444 + ddcbus = intel_encoder->ddc_bus;
445 445 /* Try to probe digital port for output in DVI-I -> VGA mode. */
446 - intel_output->ddc_bus =
446 + intel_encoder->ddc_bus =
447 447 intel_i2c_create(connector->dev, GPIOD, "CRTDDC_D");
448 448
449 - if (!intel_output->ddc_bus) {
450 - intel_output->ddc_bus = ddcbus;
449 + if (!intel_encoder->ddc_bus) {
450 + intel_encoder->ddc_bus = ddcbus;
451 451 dev_printk(KERN_ERR, &connector->dev->pdev->dev,
452 452 "DDC bus registration failed for CRTDDC_D.\n");
453 453 goto end;
454 454 }
455 455 /* Try to get modes by GPIOD port */
456 - ret = intel_ddc_get_modes(intel_output);
456 + ret = intel_ddc_get_modes(intel_encoder);
457 457 intel_i2c_destroy(ddcbus);
458 458
459 459 end:
···
506 506 void intel_crt_init(struct drm_device *dev)
507 507 {
508 508 struct drm_connector *connector;
509 - struct intel_output *intel_output;
509 + struct intel_encoder *intel_encoder;
510 510 struct drm_i915_private *dev_priv = dev->dev_private;
511 511 u32 i2c_reg;
512 512
513 - intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL);
514 - if (!intel_output)
513 + intel_encoder = kzalloc(sizeof(struct intel_encoder), GFP_KERNEL);
514 + if (!intel_encoder)
515 515 return;
516 516
517 - connector = &intel_output->base;
518 - drm_connector_init(dev, &intel_output->base,
517 + connector = &intel_encoder->base;
518 + drm_connector_init(dev, &intel_encoder->base,
519 519 &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);
520 520
521 - drm_encoder_init(dev, &intel_output->enc, &intel_crt_enc_funcs,
521 + drm_encoder_init(dev, &intel_encoder->enc, &intel_crt_enc_funcs,
522 522 DRM_MODE_ENCODER_DAC);
523 523
524 - drm_mode_connector_attach_encoder(&intel_output->base,
525 - &intel_output->enc);
524 + drm_mode_connector_attach_encoder(&intel_encoder->base,
525 + &intel_encoder->enc);
526 526
527 527 /* Set up the DDC bus. */
528 528 if (HAS_PCH_SPLIT(dev))
···
533 533 if (dev_priv->crt_ddc_bus != 0)
534 534 i2c_reg = dev_priv->crt_ddc_bus;
535 535 }
536 - intel_output->ddc_bus = intel_i2c_create(dev, i2c_reg, "CRTDDC_A");
537 - if (!intel_output->ddc_bus) {
536 + intel_encoder->ddc_bus = intel_i2c_create(dev, i2c_reg, "CRTDDC_A");
537 + if (!intel_encoder->ddc_bus) {
538 538 dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration "
539 539 "failed.\n");
540 540 return;
541 541 }
542 542
543 - intel_output->type = INTEL_OUTPUT_ANALOG;
544 - intel_output->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
543 + intel_encoder->type = INTEL_OUTPUT_ANALOG;
544 + intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) |
545 545 (1 << INTEL_ANALOG_CLONE_BIT) |
546 546 (1 << INTEL_SDVO_LVDS_CLONE_BIT);
547 - intel_output->crtc_mask = (1 << 0) | (1 << 1);
547 + intel_encoder->crtc_mask = (1 << 0) | (1 << 1);
548 548 connector->interlace_allowed = 0;
549 549 connector->doublescan_allowed = 0;
550 550
551 - drm_encoder_helper_add(&intel_output->enc, &intel_crt_helper_funcs);
551 + drm_encoder_helper_add(&intel_encoder->enc, &intel_crt_helper_funcs);
552 552 drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);
553 553
554 554 drm_sysfs_connector_add(connector);
+48 -48
drivers/gpu/drm/i915/intel_display.c
···
747 747 list_for_each_entry(l_entry, &mode_config->connector_list, head) {
748 748 if (l_entry->encoder &&
749 749 l_entry->encoder->crtc == crtc) {
750 - struct intel_output *intel_output = to_intel_output(l_entry);
751 - if (intel_output->type == type)
750 + struct intel_encoder *intel_encoder = to_intel_encoder(l_entry);
751 + if (intel_encoder->type == type)
752 752 return true;
753 753 }
754 754 }
755 755 return false;
756 756 }
757 757
758 - struct drm_connector *
759 - intel_pipe_get_output (struct drm_crtc *crtc)
758 + static struct drm_connector *
759 + intel_pipe_get_connector (struct drm_crtc *crtc)
760 760 {
761 761 struct drm_device *dev = crtc->dev;
762 762 struct drm_mode_config *mode_config = &dev->mode_config;
···
1003 1003 struct drm_i915_private *dev_priv = dev->dev_private;
1004 1004 struct drm_framebuffer *fb = crtc->fb;
1005 1005 struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
1006 - struct drm_i915_gem_object *obj_priv = intel_fb->obj->driver_private;
1006 + struct drm_i915_gem_object *obj_priv = to_intel_bo(intel_fb->obj);
1007 1007 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
1008 1008 int plane, i;
1009 1009 u32 fbc_ctl, fbc_ctl2;
···
1080 1080 struct drm_i915_private *dev_priv = dev->dev_private;
1081 1081 struct drm_framebuffer *fb = crtc->fb;
1082 1082 struct intel_framebuffer *intel_fb = to_intel_framebuffer(fb);
1083 - struct drm_i915_gem_object *obj_priv = intel_fb->obj->driver_private;
1083 + struct drm_i915_gem_object *obj_priv = to_intel_bo(intel_fb->obj);
1084 1084 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
1085 1085 int plane = (intel_crtc->plane == 0 ? DPFC_CTL_PLANEA :
1086 1086 DPFC_CTL_PLANEB);
···
1176 1176 return;
1177 1177
1178 1178 intel_fb = to_intel_framebuffer(fb);
1179 - obj_priv = intel_fb->obj->driver_private;
1179 + obj_priv = to_intel_bo(intel_fb->obj);
1180 1180
1181 1181 /*
1182 1182 * If FBC is already on, we just have to verify that we can
···
1243 1243 static int
1244 1244 intel_pin_and_fence_fb_obj(struct drm_device *dev, struct drm_gem_object *obj)
1245 1245 {
1246 - struct drm_i915_gem_object *obj_priv = obj->driver_private;
1246 + struct drm_i915_gem_object *obj_priv = to_intel_bo(obj);
1247 1247 u32 alignment;
1248 1248 int ret;
···
1323 1323
1324 1324 intel_fb = to_intel_framebuffer(crtc->fb);
1325 1325 obj = intel_fb->obj;
1326 - obj_priv = obj->driver_private;
1326 + obj_priv = to_intel_bo(obj);
1327 1327
1328 1328 mutex_lock(&dev->struct_mutex);
1329 1329 ret = intel_pin_and_fence_fb_obj(dev, obj);
···
1401 1401
1402 1402 if (old_fb) {
1403 1403 intel_fb = to_intel_framebuffer(old_fb);
1404 - obj_priv = intel_fb->obj->driver_private;
1404 + obj_priv = to_intel_bo(intel_fb->obj);
1405 1405 i915_gem_object_unpin(intel_fb->obj);
1406 1406 }
1407 1407 intel_increase_pllclock(crtc, true);
···
2917 2917 int dspsize_reg = (plane == 0) ? DSPASIZE : DSPBSIZE;
2918 2918 int dsppos_reg = (plane == 0) ? DSPAPOS : DSPBPOS;
2919 2919 int pipesrc_reg = (pipe == 0) ? PIPEASRC : PIPEBSRC;
2920 - int refclk, num_outputs = 0;
2920 + int refclk, num_connectors = 0;
2921 2921 intel_clock_t clock, reduced_clock;
2922 2922 u32 dpll = 0, fp = 0, fp2 = 0, dspcntr, pipeconf;
2923 2923 bool ok, has_reduced_clock = false, is_sdvo = false, is_dvo = false;
···
2943 2943 drm_vblank_pre_modeset(dev, pipe);
2944 2944
2945 2945 list_for_each_entry(connector, &mode_config->connector_list, head) {
2946 - struct intel_output *intel_output = to_intel_output(connector);
2946 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
2947 2947
2948 2948 if (!connector->encoder || connector->encoder->crtc != crtc)
2949 2949 continue;
2950 2950
2951 - switch (intel_output->type) {
2951 + switch (intel_encoder->type) {
2952 2952 case INTEL_OUTPUT_LVDS:
2953 2953 is_lvds = true;
2954 2954 break;
2955 2955 case INTEL_OUTPUT_SDVO:
2956 2956 case INTEL_OUTPUT_HDMI:
2957 2957 is_sdvo = true;
2958 - if (intel_output->needs_tv_clock)
2958 + if (intel_encoder->needs_tv_clock)
2959 2959 is_tv = true;
2960 2960 break;
2961 2961 case INTEL_OUTPUT_DVO:
···
2975 2975 break;
2976 2976 }
2977 2977
2978 - num_outputs++;
2978 + num_connectors++;
2979 2979 }
2980 2980
2981 - if (is_lvds && dev_priv->lvds_use_ssc && num_outputs < 2) {
2981 + if (is_lvds && dev_priv->lvds_use_ssc && num_connectors < 2) {
2982 2982 refclk = dev_priv->lvds_ssc_freq * 1000;
2983 2983 DRM_DEBUG_KMS("using SSC reference clock of %d MHz\n",
2984 2984 refclk / 1000);
···
3049 3049 if (is_edp) {
3050 3050 struct drm_connector *edp;
3051 3051 target_clock = mode->clock;
3052 - edp = intel_pipe_get_output(crtc);
3053 - intel_edp_link_config(to_intel_output(edp),
3052 + edp = intel_pipe_get_connector(crtc);
3053 + intel_edp_link_config(to_intel_encoder(edp),
3054 3054 &lane, &link_bw);
3055 3055 } else {
3056 3056 /* DP over FDI requires target mode clock
···
3231 3231 /* XXX: just matching BIOS for now */
3232 3232 /* dpll |= PLL_REF_INPUT_TVCLKINBC; */
3233 3233 dpll |= 3;
3234 - else if (is_lvds && dev_priv->lvds_use_ssc && num_outputs < 2)
3234 + else if (is_lvds && dev_priv->lvds_use_ssc && num_connectors < 2)
3235 3235 dpll |= PLLB_REF_INPUT_SPREADSPECTRUMIN;
3236 3236 else
3237 3237 dpll |= PLL_REF_INPUT_DREFCLK;
···
3511 3511 if (!bo)
3512 3512 return -ENOENT;
3513 3513
3514 - obj_priv = bo->driver_private;
3514 + obj_priv = to_intel_bo(bo);
3515 3515
3516 3516 if (bo->size < width * height * 4) {
3517 3517 DRM_ERROR("buffer is to small");
···
3655 3655 * detection.
3656 3656 *
3657 3657 * It will be up to the load-detect code to adjust the pipe as appropriate for
3658 - * its requirements. The pipe will be connected to no other outputs.
3658 + * its requirements. The pipe will be connected to no other encoders.
3659 3659 *
3660 - * Currently this code will only succeed if there is a pipe with no outputs
3660 + * Currently this code will only succeed if there is a pipe with no encoders
3661 3661 * configured for it. In the future, it could choose to temporarily disable
3662 3662 * some outputs to free up a pipe for its use.
3663 3663 *
···
3670 3670 704, 832, 0, 480, 489, 491, 520, 0, DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC),
3671 3671 };
3672 3672
3673 - struct drm_crtc *intel_get_load_detect_pipe(struct intel_output *intel_output,
3673 + struct drm_crtc *intel_get_load_detect_pipe(struct intel_encoder *intel_encoder,
3674 3674 struct drm_display_mode *mode,
3675 3675 int *dpms_mode)
3676 3676 {
3677 3677 struct intel_crtc *intel_crtc;
3678 3678 struct drm_crtc *possible_crtc;
3679 3679 struct drm_crtc *supported_crtc =NULL;
3680 - struct drm_encoder *encoder = &intel_output->enc;
3680 + struct drm_encoder *encoder = &intel_encoder->enc;
3681 3681 struct drm_crtc *crtc = NULL;
3682 3682 struct drm_device *dev = encoder->dev;
3683 3683 struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
···
3729 3729 }
3730 3730
3731 3731 encoder->crtc = crtc;
3732 - intel_output->base.encoder = encoder;
3733 - intel_output->load_detect_temp = true;
3732 + intel_encoder->base.encoder = encoder;
3733 + intel_encoder->load_detect_temp = true;
3734 3734
3735 3735 intel_crtc = to_intel_crtc(crtc);
3736 3736 *dpms_mode = intel_crtc->dpms_mode;
···
3755 3755 return crtc;
3756 3756 }
3757 3757
3758 - void intel_release_load_detect_pipe(struct intel_output *intel_output, int dpms_mode)
3758 + void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder, int dpms_mode)
3759 3759 {
3760 - struct drm_encoder *encoder = &intel_output->enc;
3760 + struct drm_encoder *encoder = &intel_encoder->enc;
3761 3761 struct drm_device *dev = encoder->dev;
3762 3762 struct drm_crtc *crtc = encoder->crtc;
3763 3763 struct drm_encoder_helper_funcs *encoder_funcs = encoder->helper_private;
3764 3764 struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
3765 3765
3766 - if (intel_output->load_detect_temp) {
3766 + if (intel_encoder->load_detect_temp) {
3767 3767 encoder->crtc = NULL;
3768 - intel_output->base.encoder = NULL;
3769 - intel_output->load_detect_temp = false;
3768 + intel_encoder->base.encoder = NULL;
3769 + intel_encoder->load_detect_temp = false;
3770 3770 crtc->enabled = drm_helper_crtc_in_use(crtc);
3771 3771 drm_helper_disable_unused_functions(dev);
3772 3772 }
3773 3773
3774 - /* Switch crtc and output back off if necessary */
3774 + /* Switch crtc and encoder back off if necessary */
3775 3775 if (crtc->enabled && dpms_mode != DRM_MODE_DPMS_ON) {
3776 3776 if (encoder->crtc == crtc)
3777 3777 encoder_funcs->dpms(encoder, dpms_mode);
···
4156 4156 work = intel_crtc->unpin_work;
4157 4157 if (work == NULL || !work->pending) {
4158 4158 if (work && !work->pending) {
4159 - obj_priv = work->pending_flip_obj->driver_private;
4159 + obj_priv = to_intel_bo(work->pending_flip_obj);
4160 4160 DRM_DEBUG_DRIVER("flip finish: %p (%d) not pending?\n",
4161 4161 obj_priv,
4162 4162 atomic_read(&obj_priv->pending_flip));
···
4181 4181
4182 4182 spin_unlock_irqrestore(&dev->event_lock, flags);
4183 4183
4184 - obj_priv = work->pending_flip_obj->driver_private;
4184 + obj_priv = to_intel_bo(work->pending_flip_obj);
4185 4185
4186 4186 /* Initial scanout buffer will have a 0 pending flip count */
4187 4187 if ((atomic_read(&obj_priv->pending_flip) == 0) ||
···
4252 4252 ret = intel_pin_and_fence_fb_obj(dev, obj);
4253 4253 if (ret != 0) {
4254 4254 DRM_DEBUG_DRIVER("flip queue: %p pin & fence failed\n",
4255 - obj->driver_private);
4255 + to_intel_bo(obj));
4256 4256 kfree(work);
4257 4257 intel_crtc->unpin_work = NULL;
4258 4258 mutex_unlock(&dev->struct_mutex);
···
4266 4266 crtc->fb = fb;
4267 4267 i915_gem_object_flush_write_domain(obj);
4268 4268 drm_vblank_get(dev, intel_crtc->pipe);
4269 - obj_priv = obj->driver_private;
4269 + obj_priv = to_intel_bo(obj);
4270 4270 atomic_inc(&obj_priv->pending_flip);
4271 4271 work->pending_flip_obj = obj;
···
4399 4399 int entry = 0;
4400 4400
4401 4401 list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
4402 - struct intel_output *intel_output = to_intel_output(connector);
4403 - if (type_mask & intel_output->clone_mask)
4402 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
4403 + if (type_mask & intel_encoder->clone_mask)
4404 4404 index_mask |= (1 << entry);
4405 4405 entry++;
4406 4406 }
···
4495 4495 intel_tv_init(dev);
4496 4496
4497 4497 list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
4498 - struct intel_output *intel_output = to_intel_output(connector);
4499 - struct drm_encoder *encoder = &intel_output->enc;
4498 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
4499 + struct drm_encoder *encoder = &intel_encoder->enc;
4500 4500
4501 - encoder->possible_crtcs = intel_output->crtc_mask;
4501 + encoder->possible_crtcs = intel_encoder->crtc_mask;
4502 4502 encoder->possible_clones = intel_connector_clones(dev,
4503 - intel_output->clone_mask);
4503 + intel_encoder->clone_mask);
4504 4504 }
4505 4505 }
···
4779 4779 struct drm_i915_gem_object *obj_priv = NULL;
4780 4780
4781 4781 if (dev_priv->pwrctx) {
4782 - obj_priv = dev_priv->pwrctx->driver_private;
4782 + obj_priv = to_intel_bo(dev_priv->pwrctx);
4783 4783 } else {
4784 4784 struct drm_gem_object *pwrctx;
4785 4785
4786 4786 pwrctx = intel_alloc_power_context(dev);
4787 4787 if (pwrctx) {
4788 4788 dev_priv->pwrctx = pwrctx;
4789 - obj_priv = pwrctx->driver_private;
4789 + obj_priv = to_intel_bo(pwrctx);
4790 4790 }
4791 4791 }
···
4815 4815 dev_priv->display.fbc_enabled = g4x_fbc_enabled;
4816 4816 dev_priv->display.enable_fbc = g4x_enable_fbc;
4817 4817 dev_priv->display.disable_fbc = g4x_disable_fbc;
4818 - } else if (IS_I965GM(dev) || IS_I945GM(dev) || IS_I915GM(dev)) {
4818 + } else if (IS_I965GM(dev)) {
4819 4819 dev_priv->display.fbc_enabled = i8xx_fbc_enabled;
4820 4820 dev_priv->display.enable_fbc = i8xx_enable_fbc;
4821 4821 dev_priv->display.disable_fbc = i8xx_disable_fbc;
···
4957 4957 if (dev_priv->pwrctx) {
4958 4958 struct drm_i915_gem_object *obj_priv;
4959 4959
4960 - obj_priv = dev_priv->pwrctx->driver_private;
4960 + obj_priv = to_intel_bo(dev_priv->pwrctx);
4961 4961 I915_WRITE(PWRCTXA, obj_priv->gtt_offset &~ PWRCTX_EN);
4962 4962 I915_READ(PWRCTXA);
4963 4963 i915_gem_object_unpin(dev_priv->pwrctx);
···
4978 4978 */
4979 4979 struct drm_encoder *intel_best_encoder(struct drm_connector *connector)
4980 4980 {
4981 - struct intel_output *intel_output = to_intel_output(connector);
4981 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
4982 4982
4983 - return &intel_output->enc;
4983 + return &intel_encoder->enc;
4984 4984 }
4985 4985
4986 4986 /*
+128 -128
drivers/gpu/drm/i915/intel_dp.c
···
55 55 uint8_t link_bw;
56 56 uint8_t lane_count;
57 57 uint8_t dpcd[4];
58 - struct intel_output *intel_output;
58 + struct intel_encoder *intel_encoder;
59 59 struct i2c_adapter adapter;
60 60 struct i2c_algo_dp_aux_data algo;
61 61 };
62 62
63 63 static void
64 - intel_dp_link_train(struct intel_output *intel_output, uint32_t DP,
64 + intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
65 65 uint8_t link_configuration[DP_LINK_CONFIGURATION_SIZE]);
66 66
67 67 static void
68 - intel_dp_link_down(struct intel_output *intel_output, uint32_t DP);
68 + intel_dp_link_down(struct intel_encoder *intel_encoder, uint32_t DP);
69 69
70 70 void
71 - intel_edp_link_config (struct intel_output *intel_output,
71 + intel_edp_link_config (struct intel_encoder *intel_encoder,
72 72 int *lane_num, int *link_bw)
73 73 {
74 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
74 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
75 75
76 76 *lane_num = dp_priv->lane_count;
77 77 if (dp_priv->link_bw == DP_LINK_BW_1_62)
···
81 81 }
82 82
83 83 static int
84 - intel_dp_max_lane_count(struct intel_output *intel_output)
84 + intel_dp_max_lane_count(struct intel_encoder *intel_encoder)
85 85 {
86 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
86 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
87 87 int max_lane_count = 4;
88 88
89 89 if (dp_priv->dpcd[0] >= 0x11) {
···
99 99 }
100 100
101 101 static int
102 - intel_dp_max_link_bw(struct intel_output *intel_output)
102 + intel_dp_max_link_bw(struct intel_encoder *intel_encoder)
103 103 {
104 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
104 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
105 105 int max_link_bw = dp_priv->dpcd[1];
106 106
107 107 switch (max_link_bw) {
···
127 127 /* I think this is a fiction */
128 128 static int
129 129 intel_dp_link_required(struct drm_device *dev,
130 - struct intel_output *intel_output, int pixel_clock)
130 + struct intel_encoder *intel_encoder, int pixel_clock)
131 131 {
132 132 struct drm_i915_private *dev_priv = dev->dev_private;
133 133
134 - if (IS_eDP(intel_output))
134 + if (IS_eDP(intel_encoder))
135 135 return (pixel_clock * dev_priv->edp_bpp) / 8;
136 136 else
137 137 return pixel_clock * 3;
···
141 141 intel_dp_mode_valid(struct drm_connector *connector,
142 142 struct drm_display_mode *mode)
143 143 {
144 - struct intel_output *intel_output = to_intel_output(connector);
145 - int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_output));
146 - int max_lanes = intel_dp_max_lane_count(intel_output);
144 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
145 + int max_link_clock = intel_dp_link_clock(intel_dp_max_link_bw(intel_encoder));
146 + int max_lanes = intel_dp_max_lane_count(intel_encoder);
147 147
148 - if (intel_dp_link_required(connector->dev, intel_output, mode->clock)
148 + if (intel_dp_link_required(connector->dev, intel_encoder, mode->clock)
149 149 > max_link_clock * max_lanes)
150 150 return MODE_CLOCK_HIGH;
151 151
···
209 209 }
210 210
211 211 static int
212 - intel_dp_aux_ch(struct intel_output *intel_output,
212 + intel_dp_aux_ch(struct intel_encoder *intel_encoder,
213 213 uint8_t *send, int send_bytes,
214 214 uint8_t *recv, int recv_size)
215 215 {
216 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
216 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
217 217 uint32_t output_reg = dp_priv->output_reg;
218 - struct drm_device *dev = intel_output->base.dev;
218 + struct drm_device *dev = intel_encoder->base.dev;
219 219 struct drm_i915_private *dev_priv = dev->dev_private;
220 220 uint32_t ch_ctl = output_reg + 0x10;
221 221 uint32_t ch_data = ch_ctl + 4;
···
230 230 * and would like to run at 2MHz. So, take the
231 231 * hrawclk value and divide by 2 and use that
232 232 */
233 - if (IS_eDP(intel_output))
233 + if (IS_eDP(intel_encoder))
234 234 aux_clock_divider = 225; /* eDP input clock at 450Mhz */
235 235 else if (HAS_PCH_SPLIT(dev))
236 236 aux_clock_divider = 62; /* IRL input clock fixed at 125Mhz */
···
313 313
314 314 /* Write data to the aux channel in native mode */
315 315 static int
316 - intel_dp_aux_native_write(struct intel_output *intel_output,
316 + intel_dp_aux_native_write(struct intel_encoder *intel_encoder,
317 317 uint16_t address, uint8_t *send, int send_bytes)
318 318 {
319 319 int ret;
···
330 330 memcpy(&msg[4], send, send_bytes);
331 331 msg_bytes = send_bytes + 4;
332 332 for (;;) {
333 - ret = intel_dp_aux_ch(intel_output, msg, msg_bytes, &ack, 1);
333 + ret = intel_dp_aux_ch(intel_encoder, msg, msg_bytes, &ack, 1);
334 334 if (ret < 0)
335 335 return ret;
336 336 if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
···
345 345
346 346 /* Write a single byte to the aux channel in native mode */
347 347 static int
348 - intel_dp_aux_native_write_1(struct intel_output *intel_output,
348 + intel_dp_aux_native_write_1(struct intel_encoder *intel_encoder,
349 349 uint16_t address, uint8_t byte)
350 350 {
351 - return intel_dp_aux_native_write(intel_output, address, &byte, 1);
351 + return intel_dp_aux_native_write(intel_encoder, address, &byte, 1);
352 352 }
353 353
354 354 /* read bytes from a native aux channel */
355 355 static int
356 - intel_dp_aux_native_read(struct intel_output *intel_output,
356 + intel_dp_aux_native_read(struct intel_encoder *intel_encoder,
357 357 uint16_t address, uint8_t *recv, int recv_bytes)
358 358 {
359 359 uint8_t msg[4];
···
372 372 reply_bytes = recv_bytes + 1;
373 373
374 374 for (;;) {
375 - ret = intel_dp_aux_ch(intel_output, msg, msg_bytes,
375 + ret = intel_dp_aux_ch(intel_encoder, msg, msg_bytes,
376 376 reply, reply_bytes);
377 377 if (ret == 0)
378 378 return -EPROTO;
···
398 398 struct intel_dp_priv *dp_priv = container_of(adapter,
399 399 struct intel_dp_priv,
400 400 adapter);
401 - struct intel_output *intel_output = dp_priv->intel_output;
401 + struct intel_encoder *intel_encoder = dp_priv->intel_encoder;
402 402 uint16_t address = algo_data->address;
403 403 uint8_t msg[5];
404 404 uint8_t reply[2];
···
437 437 }
438 438
439 439 for (;;) {
440 - ret = intel_dp_aux_ch(intel_output,
440 + ret = intel_dp_aux_ch(intel_encoder,
441 441 msg, msg_bytes,
442 442 reply, reply_bytes);
443 443 if (ret < 0) {
···
465 465 }
466 466
467 467 static int
468 - intel_dp_i2c_init(struct intel_output *intel_output, const char *name)
468 + intel_dp_i2c_init(struct intel_encoder *intel_encoder, const char *name)
469 469 {
470 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
470 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
471 471
472 472 DRM_DEBUG_KMS("i2c_init %s\n", name);
473 473 dp_priv->algo.running = false;
···
480 480 strncpy (dp_priv->adapter.name, name, sizeof(dp_priv->adapter.name) - 1);
481 481 dp_priv->adapter.name[sizeof(dp_priv->adapter.name) - 1] = '\0';
482 482 dp_priv->adapter.algo_data = &dp_priv->algo;
483 - dp_priv->adapter.dev.parent = &intel_output->base.kdev;
483 + dp_priv->adapter.dev.parent = &intel_encoder->base.kdev;
484 484
485 485 return i2c_dp_aux_add_bus(&dp_priv->adapter);
486 486 }
···
489 489 intel_dp_mode_fixup(struct drm_encoder *encoder, struct drm_display_mode *mode,
490 490 struct drm_display_mode *adjusted_mode)
491 491 {
492 - struct intel_output *intel_output = enc_to_intel_output(encoder);
493 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
492 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
493 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
494 494 int lane_count, clock;
495 - int max_lane_count = intel_dp_max_lane_count(intel_output);
496 - int max_clock = intel_dp_max_link_bw(intel_output) == DP_LINK_BW_2_7 ? 1 : 0;
495 + int max_lane_count = intel_dp_max_lane_count(intel_encoder);
496 + int max_clock = intel_dp_max_link_bw(intel_encoder) == DP_LINK_BW_2_7 ? 1 : 0;
497 497 static int bws[2] = { DP_LINK_BW_1_62, DP_LINK_BW_2_7 };
498 498
499 499 for (lane_count = 1; lane_count <= max_lane_count; lane_count <<= 1) {
500 500 for (clock = 0; clock <= max_clock; clock++) {
501 501 int link_avail = intel_dp_link_clock(bws[clock]) * lane_count;
502 502
503 - if (intel_dp_link_required(encoder->dev, intel_output, mode->clock)
503 + if (intel_dp_link_required(encoder->dev, intel_encoder, mode->clock)
504 504 <= link_avail) {
505 505 dp_priv->link_bw = bws[clock];
506 506 dp_priv->lane_count = lane_count;
···
562 562 struct intel_dp_m_n m_n;
563 563
564 564 /*
565 - * Find the lane count in the intel_output private
565 + * Find the lane count in the intel_encoder private
566 566 */
567 567 list_for_each_entry(connector, &mode_config->connector_list, head) {
568 - struct intel_output *intel_output = to_intel_output(connector);
569 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
568 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
569 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
570 570
571 571 if (!connector->encoder || connector->encoder->crtc != crtc)
572 572 continue;
573 573
574 - if (intel_output->type == INTEL_OUTPUT_DISPLAYPORT) {
574 + if (intel_encoder->type == INTEL_OUTPUT_DISPLAYPORT) {
575 575 lane_count = dp_priv->lane_count;
576 576 break;
577 577 }
···
626 626 intel_dp_mode_set(struct drm_encoder *encoder, struct drm_display_mode *mode,
627 627 struct drm_display_mode *adjusted_mode)
628 628 {
629 - struct intel_output *intel_output = enc_to_intel_output(encoder);
630 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
631 - struct drm_crtc *crtc = intel_output->enc.crtc;
629 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
630 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
631 + struct drm_crtc *crtc = intel_encoder->enc.crtc;
632 632 struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
633 633
634 634 dp_priv->DP = (DP_LINK_TRAIN_OFF |
···
667 667 if (intel_crtc->pipe == 1)
668 668 dp_priv->DP |= DP_PIPEB_SELECT;
669 669
670 - if (IS_eDP(intel_output)) {
670 + if (IS_eDP(intel_encoder)) {
671 671 /* don't miss out required setting for eDP */
672 672 dp_priv->DP |= DP_PLL_ENABLE;
673 673 if (adjusted_mode->clock < 200000)
···
702 702 static void
703 703 intel_dp_dpms(struct drm_encoder *encoder, int mode)
704 704 {
705 - struct intel_output *intel_output = enc_to_intel_output(encoder);
706 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
707 - struct drm_device *dev = intel_output->base.dev;
705 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
706 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
707 + struct drm_device *dev = intel_encoder->base.dev;
708 708 struct drm_i915_private *dev_priv = dev->dev_private;
709 709 uint32_t dp_reg = I915_READ(dp_priv->output_reg);
710 710
711 711 if (mode != DRM_MODE_DPMS_ON) {
712 712 if (dp_reg & DP_PORT_EN) {
713 - intel_dp_link_down(intel_output, dp_priv->DP);
714 - if (IS_eDP(intel_output))
713 + intel_dp_link_down(intel_encoder, dp_priv->DP);
714 + if (IS_eDP(intel_encoder))
715 715 ironlake_edp_backlight_off(dev);
716 716 }
717 717 } else {
718 718 if (!(dp_reg & DP_PORT_EN)) {
719 - intel_dp_link_train(intel_output, dp_priv->DP, dp_priv->link_configuration);
720 - if (IS_eDP(intel_output))
719 + intel_dp_link_train(intel_encoder, dp_priv->DP, dp_priv->link_configuration);
720 + if (IS_eDP(intel_encoder))
721 721 ironlake_edp_backlight_on(dev);
722 722 }
723 723 }
···
729 729 * link status information
730 730 */
731 731 static bool
732 - intel_dp_get_link_status(struct intel_output *intel_output,
732 + intel_dp_get_link_status(struct intel_encoder *intel_encoder,
733 733 uint8_t link_status[DP_LINK_STATUS_SIZE])
734 734 {
735 735 int ret;
736 736
737 - ret = intel_dp_aux_native_read(intel_output,
737 + ret = intel_dp_aux_native_read(intel_encoder,
738 738 DP_LANE0_1_STATUS,
739 739 link_status, DP_LINK_STATUS_SIZE);
740 740 if (ret != DP_LINK_STATUS_SIZE)
···
752 752 static void
753 753 intel_dp_save(struct drm_connector *connector)
754 754 {
755 - struct intel_output *intel_output = to_intel_output(connector);
756 - struct drm_device *dev = intel_output->base.dev;
755 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
756 + struct drm_device *dev = intel_encoder->base.dev;
757 757 struct drm_i915_private *dev_priv = dev->dev_private;
758 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
758 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
759 759
760 760 dp_priv->save_DP = I915_READ(dp_priv->output_reg);
761 - intel_dp_aux_native_read(intel_output, DP_LINK_BW_SET,
761 + intel_dp_aux_native_read(intel_encoder, DP_LINK_BW_SET,
762 762 dp_priv->save_link_configuration,
763 763 sizeof (dp_priv->save_link_configuration));
764 764 }
···
825 825 }
826 826
827 827 static void
828 - intel_get_adjust_train(struct intel_output *intel_output,
828 + intel_get_adjust_train(struct intel_encoder *intel_encoder,
829 829 uint8_t link_status[DP_LINK_STATUS_SIZE],
830 830 int lane_count,
831 831 uint8_t train_set[4])
···
942 942 }
943 943
944 944 static bool
945 - intel_dp_set_link_train(struct intel_output *intel_output,
945 + intel_dp_set_link_train(struct intel_encoder *intel_encoder,
946 946 uint32_t dp_reg_value,
947 947 uint8_t dp_train_pat,
948 948 uint8_t train_set[4],
949 949 bool first)
950 950 {
951 - struct drm_device *dev = intel_output->base.dev;
951 + struct drm_device *dev = intel_encoder->base.dev;
952 952 struct drm_i915_private *dev_priv = dev->dev_private;
953 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
953 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
954 954 int ret;
955 955
956 956 I915_WRITE(dp_priv->output_reg, dp_reg_value);
···
958 958 if (first)
959 959 intel_wait_for_vblank(dev);
960 960
961 - intel_dp_aux_native_write_1(intel_output,
961 + intel_dp_aux_native_write_1(intel_encoder,
962 962 DP_TRAINING_PATTERN_SET,
963 963 dp_train_pat);
964 964
965 - ret = intel_dp_aux_native_write(intel_output,
965 + ret = intel_dp_aux_native_write(intel_encoder,
966 966 DP_TRAINING_LANE0_SET, train_set, 4);
967 967 if (ret != 4)
968 968 return false;
···
971 971 }
972 972
973 973 static void
974 - intel_dp_link_train(struct intel_output *intel_output, uint32_t DP,
974 + intel_dp_link_train(struct intel_encoder *intel_encoder, uint32_t DP,
975 975 uint8_t link_configuration[DP_LINK_CONFIGURATION_SIZE])
976 976 {
977 - struct drm_device *dev = intel_output->base.dev;
977 + struct drm_device *dev = intel_encoder->base.dev;
978 978 struct drm_i915_private *dev_priv = dev->dev_private;
979 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
979 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
980 980 uint8_t train_set[4];
981 981 uint8_t link_status[DP_LINK_STATUS_SIZE];
982 982 int i;
···
987 987 int tries;
988 988
989 989 /* Write the link configuration data */
990 - intel_dp_aux_native_write(intel_output, 0x100,
990 + intel_dp_aux_native_write(intel_encoder, 0x100,
991 991 link_configuration, DP_LINK_CONFIGURATION_SIZE);
992 992
993 993 DP |= DP_PORT_EN;
···
1001 1001 uint32_t signal_levels = intel_dp_signal_levels(train_set[0], dp_priv->lane_count);
1002 1002 DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
1003 1003
1004 - if (!intel_dp_set_link_train(intel_output, DP | DP_LINK_TRAIN_PAT_1,
1004 + if (!intel_dp_set_link_train(intel_encoder, DP | DP_LINK_TRAIN_PAT_1,
1005 1005 DP_TRAINING_PATTERN_1, train_set, first))
1006 1006 break;
1007 1007 first = false;
1008 1008 /* Set training pattern 1 */
1009 1009
1010 1010 udelay(100);
1011 - if (!intel_dp_get_link_status(intel_output, link_status))
1011 + if (!intel_dp_get_link_status(intel_encoder, link_status))
1012 1012 break;
1013 1013
1014 1014 if (intel_clock_recovery_ok(link_status, dp_priv->lane_count)) {
···
1033 1033 voltage = train_set[0] & DP_TRAIN_VOLTAGE_SWING_MASK;
1034 1034
1035 1035 /* Compute new train_set as requested by target */
1036 - intel_get_adjust_train(intel_output, link_status, dp_priv->lane_count, train_set);
1036 + intel_get_adjust_train(intel_encoder, link_status, dp_priv->lane_count, train_set);
1037 1037 }
1038 1038
1039 1039 /* channel equalization */
···
1045 1045 DP = (DP & ~(DP_VOLTAGE_MASK|DP_PRE_EMPHASIS_MASK)) | signal_levels;
1046 1046
1047 1047 /* channel eq pattern */
1048 - if (!intel_dp_set_link_train(intel_output, DP | DP_LINK_TRAIN_PAT_2,
1048 + if (!intel_dp_set_link_train(intel_encoder, DP | DP_LINK_TRAIN_PAT_2,
1049 1049 DP_TRAINING_PATTERN_2, train_set,
1050 1050 false))
1051 1051 break;
1052 1052
1053 1053 udelay(400);
1054 - if (!intel_dp_get_link_status(intel_output, link_status))
1054 + if (!intel_dp_get_link_status(intel_encoder, link_status))
1055 1055 break;
1056 1056
1057 1057 if (intel_channel_eq_ok(link_status, dp_priv->lane_count)) {
···
1064 1064 break;
1065 1065
1066 1066 /* Compute new train_set as requested by target */
1067 - intel_get_adjust_train(intel_output, link_status, dp_priv->lane_count, train_set);
1067 + intel_get_adjust_train(intel_encoder, link_status, dp_priv->lane_count, train_set);
1068 1068 ++tries;
1069 1069 }
1070 1070
1071 1071 I915_WRITE(dp_priv->output_reg, DP | DP_LINK_TRAIN_OFF);
1072 1072 POSTING_READ(dp_priv->output_reg);
1073 - intel_dp_aux_native_write_1(intel_output,
1073 + intel_dp_aux_native_write_1(intel_encoder,
1074 1074 DP_TRAINING_PATTERN_SET, DP_TRAINING_PATTERN_DISABLE);
1075 1075 }
1076 1076
1077 1077 static void
1078 - intel_dp_link_down(struct intel_output *intel_output, uint32_t DP)
1078 + intel_dp_link_down(struct intel_encoder *intel_encoder, uint32_t DP)
1079 1079 {
1080 - struct drm_device *dev = intel_output->base.dev;
1080 + struct drm_device *dev = intel_encoder->base.dev;
1081 1081 struct drm_i915_private *dev_priv = dev->dev_private;
1082 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
1082 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
1083 1083
1084 1084 DRM_DEBUG_KMS("\n");
1085 1085
1086 - if (IS_eDP(intel_output)) {
1086 + if (IS_eDP(intel_encoder)) {
1087 1087 DP &= ~DP_PLL_ENABLE;
1088 1088 I915_WRITE(dp_priv->output_reg, DP);
1089 1089 POSTING_READ(dp_priv->output_reg);
···
1096 1096
1097 1097 udelay(17000);
1098 1098
1099 - if (IS_eDP(intel_output))
1099 + if (IS_eDP(intel_encoder))
1100 1100 DP |= DP_LINK_TRAIN_OFF;
1101 1101 I915_WRITE(dp_priv->output_reg, DP & ~DP_PORT_EN);
1102 1102 POSTING_READ(dp_priv->output_reg);
···
1105 1105 static void
1106 1106 intel_dp_restore(struct drm_connector *connector)
1107 1107 {
1108 - struct intel_output *intel_output = to_intel_output(connector);
1109 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
1108 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
1109 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
1110 1110
1111 1111 if (dp_priv->save_DP & DP_PORT_EN)
1112 - intel_dp_link_train(intel_output, dp_priv->save_DP, dp_priv->save_link_configuration);
1112 + intel_dp_link_train(intel_encoder, dp_priv->save_DP, dp_priv->save_link_configuration);
1113 1113 else
1114 - intel_dp_link_down(intel_output, dp_priv->save_DP);
1114 + intel_dp_link_down(intel_encoder, dp_priv->save_DP);
1115 1115 }
1116 1116
1117 1117 /*
···
1124 1124 */
1125 1125
1126 1126 static void
1127 - intel_dp_check_link_status(struct intel_output *intel_output)
1127 + intel_dp_check_link_status(struct intel_encoder *intel_encoder)
1128 1128 {
1129 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
1129 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
1130 1130 uint8_t link_status[DP_LINK_STATUS_SIZE];
1131 1131
1132 - if (!intel_output->enc.crtc)
1132 + if (!intel_encoder->enc.crtc)
1133 1133 return;
1134 1134
1135 - if (!intel_dp_get_link_status(intel_output, link_status)) {
1136 - intel_dp_link_down(intel_output, dp_priv->DP);
1135 + if (!intel_dp_get_link_status(intel_encoder, link_status)) {
1136 + intel_dp_link_down(intel_encoder, dp_priv->DP);
1137 1137 return;
1138 1138 }
1139 1139
1140 1140 if (!intel_channel_eq_ok(link_status, dp_priv->lane_count))
1141 - intel_dp_link_train(intel_output, dp_priv->DP, dp_priv->link_configuration);
1141 + intel_dp_link_train(intel_encoder, dp_priv->DP, dp_priv->link_configuration);
1142 1142 }
1143 1143
1144 1144 static enum drm_connector_status
1145 1145 ironlake_dp_detect(struct drm_connector *connector)
1146 1146 {
1147 - struct intel_output *intel_output = to_intel_output(connector);
1148 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
1147 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
1148 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
1149 1149 enum drm_connector_status status;
1150 1150
1151 1151 status = connector_status_disconnected;
1152 - if (intel_dp_aux_native_read(intel_output,
1152 + if (intel_dp_aux_native_read(intel_encoder,
1153 1153 0x000, dp_priv->dpcd,
1154 1154 sizeof (dp_priv->dpcd)) == sizeof (dp_priv->dpcd))
1155 1155 {
···
1168 1168 static enum drm_connector_status
1169 1169 intel_dp_detect(struct drm_connector *connector)
1170 1170 {
1171 - struct intel_output *intel_output = to_intel_output(connector);
1172 - struct drm_device *dev = intel_output->base.dev;
1171 + struct intel_encoder *intel_encoder = to_intel_encoder(connector);
1172 + struct drm_device *dev = intel_encoder->base.dev;
1173 1173 struct drm_i915_private *dev_priv = dev->dev_private;
1174 - struct intel_dp_priv *dp_priv = intel_output->dev_priv;
1174 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv;
1175 1175 uint32_t temp, bit;
1176 1176 enum drm_connector_status status;
···
1210 1210 return connector_status_disconnected;
1211 1211
1212 1212 status = connector_status_disconnected;
1213 - if (intel_dp_aux_native_read(intel_output, 1213 + if (intel_dp_aux_native_read(intel_encoder, 1214 1214 0x000, dp_priv->dpcd, 1215 1215 sizeof (dp_priv->dpcd)) == sizeof (dp_priv->dpcd)) 1216 1216 { ··· 1222 1222 1223 1223 static int intel_dp_get_modes(struct drm_connector *connector) 1224 1224 { 1225 - struct intel_output *intel_output = to_intel_output(connector); 1226 - struct drm_device *dev = intel_output->base.dev; 1225 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1226 + struct drm_device *dev = intel_encoder->base.dev; 1227 1227 struct drm_i915_private *dev_priv = dev->dev_private; 1228 1228 int ret; 1229 1229 1230 1230 /* We should parse the EDID data and find out if it has an audio sink 1231 1231 */ 1232 1232 1233 - ret = intel_ddc_get_modes(intel_output); 1233 + ret = intel_ddc_get_modes(intel_encoder); 1234 1234 if (ret) 1235 1235 return ret; 1236 1236 1237 1237 /* if eDP has no EDID, try to use fixed panel mode from VBT */ 1238 - if (IS_eDP(intel_output)) { 1238 + if (IS_eDP(intel_encoder)) { 1239 1239 if (dev_priv->panel_fixed_mode != NULL) { 1240 1240 struct drm_display_mode *mode; 1241 1241 mode = drm_mode_duplicate(dev, dev_priv->panel_fixed_mode); ··· 1249 1249 static void 1250 1250 intel_dp_destroy (struct drm_connector *connector) 1251 1251 { 1252 - struct intel_output *intel_output = to_intel_output(connector); 1252 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1253 1253 1254 - if (intel_output->i2c_bus) 1255 - intel_i2c_destroy(intel_output->i2c_bus); 1254 + if (intel_encoder->i2c_bus) 1255 + intel_i2c_destroy(intel_encoder->i2c_bus); 1256 1256 drm_sysfs_connector_remove(connector); 1257 1257 drm_connector_cleanup(connector); 1258 - kfree(intel_output); 1258 + kfree(intel_encoder); 1259 1259 } 1260 1260 1261 1261 static const struct drm_encoder_helper_funcs intel_dp_helper_funcs = { ··· 1291 1291 }; 1292 1292 1293 1293 void 1294 - intel_dp_hot_plug(struct intel_output *intel_output) 1294 + 
intel_dp_hot_plug(struct intel_encoder *intel_encoder) 1295 1295 { 1296 - struct intel_dp_priv *dp_priv = intel_output->dev_priv; 1296 + struct intel_dp_priv *dp_priv = intel_encoder->dev_priv; 1297 1297 1298 1298 if (dp_priv->dpms_mode == DRM_MODE_DPMS_ON) 1299 - intel_dp_check_link_status(intel_output); 1299 + intel_dp_check_link_status(intel_encoder); 1300 1300 } 1301 1301 1302 1302 void ··· 1304 1304 { 1305 1305 struct drm_i915_private *dev_priv = dev->dev_private; 1306 1306 struct drm_connector *connector; 1307 - struct intel_output *intel_output; 1307 + struct intel_encoder *intel_encoder; 1308 1308 struct intel_dp_priv *dp_priv; 1309 1309 const char *name = NULL; 1310 1310 1311 - intel_output = kcalloc(sizeof(struct intel_output) + 1311 + intel_encoder = kcalloc(sizeof(struct intel_encoder) + 1312 1312 sizeof(struct intel_dp_priv), 1, GFP_KERNEL); 1313 - if (!intel_output) 1313 + if (!intel_encoder) 1314 1314 return; 1315 1315 1316 - dp_priv = (struct intel_dp_priv *)(intel_output + 1); 1316 + dp_priv = (struct intel_dp_priv *)(intel_encoder + 1); 1317 1317 1318 - connector = &intel_output->base; 1318 + connector = &intel_encoder->base; 1319 1319 drm_connector_init(dev, connector, &intel_dp_connector_funcs, 1320 1320 DRM_MODE_CONNECTOR_DisplayPort); 1321 1321 drm_connector_helper_add(connector, &intel_dp_connector_helper_funcs); 1322 1322 1323 1323 if (output_reg == DP_A) 1324 - intel_output->type = INTEL_OUTPUT_EDP; 1324 + intel_encoder->type = INTEL_OUTPUT_EDP; 1325 1325 else 1326 - intel_output->type = INTEL_OUTPUT_DISPLAYPORT; 1326 + intel_encoder->type = INTEL_OUTPUT_DISPLAYPORT; 1327 1327 1328 1328 if (output_reg == DP_B || output_reg == PCH_DP_B) 1329 - intel_output->clone_mask = (1 << INTEL_DP_B_CLONE_BIT); 1329 + intel_encoder->clone_mask = (1 << INTEL_DP_B_CLONE_BIT); 1330 1330 else if (output_reg == DP_C || output_reg == PCH_DP_C) 1331 - intel_output->clone_mask = (1 << INTEL_DP_C_CLONE_BIT); 1331 + intel_encoder->clone_mask = (1 << 
INTEL_DP_C_CLONE_BIT); 1332 1332 else if (output_reg == DP_D || output_reg == PCH_DP_D) 1333 - intel_output->clone_mask = (1 << INTEL_DP_D_CLONE_BIT); 1333 + intel_encoder->clone_mask = (1 << INTEL_DP_D_CLONE_BIT); 1334 1334 1335 - if (IS_eDP(intel_output)) 1336 - intel_output->clone_mask = (1 << INTEL_EDP_CLONE_BIT); 1335 + if (IS_eDP(intel_encoder)) 1336 + intel_encoder->clone_mask = (1 << INTEL_EDP_CLONE_BIT); 1337 1337 1338 - intel_output->crtc_mask = (1 << 0) | (1 << 1); 1338 + intel_encoder->crtc_mask = (1 << 0) | (1 << 1); 1339 1339 connector->interlace_allowed = true; 1340 1340 connector->doublescan_allowed = 0; 1341 1341 1342 - dp_priv->intel_output = intel_output; 1342 + dp_priv->intel_encoder = intel_encoder; 1343 1343 dp_priv->output_reg = output_reg; 1344 1344 dp_priv->has_audio = false; 1345 1345 dp_priv->dpms_mode = DRM_MODE_DPMS_ON; 1346 - intel_output->dev_priv = dp_priv; 1346 + intel_encoder->dev_priv = dp_priv; 1347 1347 1348 - drm_encoder_init(dev, &intel_output->enc, &intel_dp_enc_funcs, 1348 + drm_encoder_init(dev, &intel_encoder->enc, &intel_dp_enc_funcs, 1349 1349 DRM_MODE_ENCODER_TMDS); 1350 - drm_encoder_helper_add(&intel_output->enc, &intel_dp_helper_funcs); 1350 + drm_encoder_helper_add(&intel_encoder->enc, &intel_dp_helper_funcs); 1351 1351 1352 - drm_mode_connector_attach_encoder(&intel_output->base, 1353 - &intel_output->enc); 1352 + drm_mode_connector_attach_encoder(&intel_encoder->base, 1353 + &intel_encoder->enc); 1354 1354 drm_sysfs_connector_add(connector); 1355 1355 1356 1356 /* Set up the DDC bus. 
*/ ··· 1378 1378 break; 1379 1379 } 1380 1380 1381 - intel_dp_i2c_init(intel_output, name); 1381 + intel_dp_i2c_init(intel_encoder, name); 1382 1382 1383 - intel_output->ddc_bus = &dp_priv->adapter; 1384 - intel_output->hot_plug = intel_dp_hot_plug; 1383 + intel_encoder->ddc_bus = &dp_priv->adapter; 1384 + intel_encoder->hot_plug = intel_dp_hot_plug; 1385 1385 1386 1386 if (output_reg == DP_A) { 1387 1387 /* initialize panel mode from VBT if available for eDP */
+9 -9
drivers/gpu/drm/i915/intel_drv.h
··· 95 95 }; 96 96 97 97 98 - struct intel_output { 98 + struct intel_encoder { 99 99 struct drm_connector base; 100 100 101 101 struct drm_encoder enc; ··· 105 105 bool load_detect_temp; 106 106 bool needs_tv_clock; 107 107 void *dev_priv; 108 - void (*hot_plug)(struct intel_output *); 108 + void (*hot_plug)(struct intel_encoder *); 109 109 int crtc_mask; 110 110 int clone_mask; 111 111 }; ··· 152 152 }; 153 153 154 154 #define to_intel_crtc(x) container_of(x, struct intel_crtc, base) 155 - #define to_intel_output(x) container_of(x, struct intel_output, base) 156 - #define enc_to_intel_output(x) container_of(x, struct intel_output, enc) 155 + #define to_intel_encoder(x) container_of(x, struct intel_encoder, base) 156 + #define enc_to_intel_encoder(x) container_of(x, struct intel_encoder, enc) 157 157 #define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base) 158 158 159 159 struct i2c_adapter *intel_i2c_create(struct drm_device *dev, const u32 reg, 160 160 const char *name); 161 161 void intel_i2c_destroy(struct i2c_adapter *adapter); 162 - int intel_ddc_get_modes(struct intel_output *intel_output); 163 - extern bool intel_ddc_probe(struct intel_output *intel_output); 162 + int intel_ddc_get_modes(struct intel_encoder *intel_encoder); 163 + extern bool intel_ddc_probe(struct intel_encoder *intel_encoder); 164 164 void intel_i2c_quirk_set(struct drm_device *dev, bool enable); 165 165 void intel_i2c_reset_gmbus(struct drm_device *dev); 166 166 ··· 175 175 void 176 176 intel_dp_set_m_n(struct drm_crtc *crtc, struct drm_display_mode *mode, 177 177 struct drm_display_mode *adjusted_mode); 178 - extern void intel_edp_link_config (struct intel_output *, int *, int *); 178 + extern void intel_edp_link_config (struct intel_encoder *, int *, int *); 179 179 180 180 181 181 extern int intel_panel_fitter_pipe (struct drm_device *dev); ··· 191 191 struct drm_file *file_priv); 192 192 extern void intel_wait_for_vblank(struct drm_device *dev); 193 193 extern 
struct drm_crtc *intel_get_crtc_from_pipe(struct drm_device *dev, int pipe); 194 - extern struct drm_crtc *intel_get_load_detect_pipe(struct intel_output *intel_output, 194 + extern struct drm_crtc *intel_get_load_detect_pipe(struct intel_encoder *intel_encoder, 195 195 struct drm_display_mode *mode, 196 196 int *dpms_mode); 197 - extern void intel_release_load_detect_pipe(struct intel_output *intel_output, 197 + extern void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder, 198 198 int dpms_mode); 199 199 200 200 extern struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB);
+46 -46
drivers/gpu/drm/i915/intel_dvo.c
··· 80 80 static void intel_dvo_dpms(struct drm_encoder *encoder, int mode) 81 81 { 82 82 struct drm_i915_private *dev_priv = encoder->dev->dev_private; 83 - struct intel_output *intel_output = enc_to_intel_output(encoder); 84 - struct intel_dvo_device *dvo = intel_output->dev_priv; 83 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 84 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 85 85 u32 dvo_reg = dvo->dvo_reg; 86 86 u32 temp = I915_READ(dvo_reg); 87 87 ··· 99 99 static void intel_dvo_save(struct drm_connector *connector) 100 100 { 101 101 struct drm_i915_private *dev_priv = connector->dev->dev_private; 102 - struct intel_output *intel_output = to_intel_output(connector); 103 - struct intel_dvo_device *dvo = intel_output->dev_priv; 102 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 103 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 104 104 105 105 /* Each output should probably just save the registers it touches, 106 106 * but for now, use more overkill. 
··· 115 115 static void intel_dvo_restore(struct drm_connector *connector) 116 116 { 117 117 struct drm_i915_private *dev_priv = connector->dev->dev_private; 118 - struct intel_output *intel_output = to_intel_output(connector); 119 - struct intel_dvo_device *dvo = intel_output->dev_priv; 118 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 119 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 120 120 121 121 dvo->dev_ops->restore(dvo); 122 122 ··· 128 128 static int intel_dvo_mode_valid(struct drm_connector *connector, 129 129 struct drm_display_mode *mode) 130 130 { 131 - struct intel_output *intel_output = to_intel_output(connector); 132 - struct intel_dvo_device *dvo = intel_output->dev_priv; 131 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 132 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 133 133 134 134 if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 135 135 return MODE_NO_DBLESCAN; ··· 150 150 struct drm_display_mode *mode, 151 151 struct drm_display_mode *adjusted_mode) 152 152 { 153 - struct intel_output *intel_output = enc_to_intel_output(encoder); 154 - struct intel_dvo_device *dvo = intel_output->dev_priv; 153 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 154 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 155 155 156 156 /* If we have timings from the BIOS for the panel, put them in 157 157 * to the adjusted mode. 
The CRTC will be set up for this mode, ··· 186 186 struct drm_device *dev = encoder->dev; 187 187 struct drm_i915_private *dev_priv = dev->dev_private; 188 188 struct intel_crtc *intel_crtc = to_intel_crtc(encoder->crtc); 189 - struct intel_output *intel_output = enc_to_intel_output(encoder); 190 - struct intel_dvo_device *dvo = intel_output->dev_priv; 189 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 190 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 191 191 int pipe = intel_crtc->pipe; 192 192 u32 dvo_val; 193 193 u32 dvo_reg = dvo->dvo_reg, dvo_srcdim_reg; ··· 241 241 */ 242 242 static enum drm_connector_status intel_dvo_detect(struct drm_connector *connector) 243 243 { 244 - struct intel_output *intel_output = to_intel_output(connector); 245 - struct intel_dvo_device *dvo = intel_output->dev_priv; 244 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 245 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 246 246 247 247 return dvo->dev_ops->detect(dvo); 248 248 } 249 249 250 250 static int intel_dvo_get_modes(struct drm_connector *connector) 251 251 { 252 - struct intel_output *intel_output = to_intel_output(connector); 253 - struct intel_dvo_device *dvo = intel_output->dev_priv; 252 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 253 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 254 254 255 255 /* We should probably have an i2c driver get_modes function for those 256 256 * devices which will have a fixed set of modes determined by the chip 257 257 * (TV-out, for example), but for now with just TMDS and LVDS, 258 258 * that's not the case. 
259 259 */ 260 - intel_ddc_get_modes(intel_output); 260 + intel_ddc_get_modes(intel_encoder); 261 261 if (!list_empty(&connector->probed_modes)) 262 262 return 1; 263 263 ··· 275 275 276 276 static void intel_dvo_destroy (struct drm_connector *connector) 277 277 { 278 - struct intel_output *intel_output = to_intel_output(connector); 279 - struct intel_dvo_device *dvo = intel_output->dev_priv; 278 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 279 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 280 280 281 281 if (dvo) { 282 282 if (dvo->dev_ops->destroy) ··· 286 286 /* no need, in i830_dvoices[] now */ 287 287 //kfree(dvo); 288 288 } 289 - if (intel_output->i2c_bus) 290 - intel_i2c_destroy(intel_output->i2c_bus); 291 - if (intel_output->ddc_bus) 292 - intel_i2c_destroy(intel_output->ddc_bus); 289 + if (intel_encoder->i2c_bus) 290 + intel_i2c_destroy(intel_encoder->i2c_bus); 291 + if (intel_encoder->ddc_bus) 292 + intel_i2c_destroy(intel_encoder->ddc_bus); 293 293 drm_sysfs_connector_remove(connector); 294 294 drm_connector_cleanup(connector); 295 - kfree(intel_output); 295 + kfree(intel_encoder); 296 296 } 297 297 298 298 #ifdef RANDR_GET_CRTC_INTERFACE ··· 300 300 { 301 301 struct drm_device *dev = connector->dev; 302 302 struct drm_i915_private *dev_priv = dev->dev_private; 303 - struct intel_output *intel_output = to_intel_output(connector); 304 - struct intel_dvo_device *dvo = intel_output->dev_priv; 303 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 304 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 305 305 int pipe = !!(I915_READ(dvo->dvo_reg) & SDVO_PIPE_B_SELECT); 306 306 307 307 return intel_pipe_to_crtc(pScrn, pipe); ··· 352 352 { 353 353 struct drm_device *dev = connector->dev; 354 354 struct drm_i915_private *dev_priv = dev->dev_private; 355 - struct intel_output *intel_output = to_intel_output(connector); 356 - struct intel_dvo_device *dvo = intel_output->dev_priv; 355 + struct 
intel_encoder *intel_encoder = to_intel_encoder(connector); 356 + struct intel_dvo_device *dvo = intel_encoder->dev_priv; 357 357 uint32_t dvo_reg = dvo->dvo_reg; 358 358 uint32_t dvo_val = I915_READ(dvo_reg); 359 359 struct drm_display_mode *mode = NULL; ··· 383 383 384 384 void intel_dvo_init(struct drm_device *dev) 385 385 { 386 - struct intel_output *intel_output; 386 + struct intel_encoder *intel_encoder; 387 387 struct intel_dvo_device *dvo; 388 388 struct i2c_adapter *i2cbus = NULL; 389 389 int ret = 0; 390 390 int i; 391 391 int encoder_type = DRM_MODE_ENCODER_NONE; 392 - intel_output = kzalloc (sizeof(struct intel_output), GFP_KERNEL); 393 - if (!intel_output) 392 + intel_encoder = kzalloc (sizeof(struct intel_encoder), GFP_KERNEL); 393 + if (!intel_encoder) 394 394 return; 395 395 396 396 /* Set up the DDC bus */ 397 - intel_output->ddc_bus = intel_i2c_create(dev, GPIOD, "DVODDC_D"); 398 - if (!intel_output->ddc_bus) 397 + intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOD, "DVODDC_D"); 398 + if (!intel_encoder->ddc_bus) 399 399 goto free_intel; 400 400 401 401 /* Now, try to find a controller */ 402 402 for (i = 0; i < ARRAY_SIZE(intel_dvo_devices); i++) { 403 - struct drm_connector *connector = &intel_output->base; 403 + struct drm_connector *connector = &intel_encoder->base; 404 404 int gpio; 405 405 406 406 dvo = &intel_dvo_devices[i]; ··· 435 435 if (!ret) 436 436 continue; 437 437 438 - intel_output->type = INTEL_OUTPUT_DVO; 439 - intel_output->crtc_mask = (1 << 0) | (1 << 1); 438 + intel_encoder->type = INTEL_OUTPUT_DVO; 439 + intel_encoder->crtc_mask = (1 << 0) | (1 << 1); 440 440 switch (dvo->type) { 441 441 case INTEL_DVO_CHIP_TMDS: 442 - intel_output->clone_mask = 442 + intel_encoder->clone_mask = 443 443 (1 << INTEL_DVO_TMDS_CLONE_BIT) | 444 444 (1 << INTEL_ANALOG_CLONE_BIT); 445 445 drm_connector_init(dev, connector, ··· 448 448 encoder_type = DRM_MODE_ENCODER_TMDS; 449 449 break; 450 450 case INTEL_DVO_CHIP_LVDS: 451 - 
intel_output->clone_mask = 451 + intel_encoder->clone_mask = 452 452 (1 << INTEL_DVO_LVDS_CLONE_BIT); 453 453 drm_connector_init(dev, connector, 454 454 &intel_dvo_connector_funcs, ··· 463 463 connector->interlace_allowed = false; 464 464 connector->doublescan_allowed = false; 465 465 466 - intel_output->dev_priv = dvo; 467 - intel_output->i2c_bus = i2cbus; 466 + intel_encoder->dev_priv = dvo; 467 + intel_encoder->i2c_bus = i2cbus; 468 468 469 - drm_encoder_init(dev, &intel_output->enc, 469 + drm_encoder_init(dev, &intel_encoder->enc, 470 470 &intel_dvo_enc_funcs, encoder_type); 471 - drm_encoder_helper_add(&intel_output->enc, 471 + drm_encoder_helper_add(&intel_encoder->enc, 472 472 &intel_dvo_helper_funcs); 473 473 474 - drm_mode_connector_attach_encoder(&intel_output->base, 475 - &intel_output->enc); 474 + drm_mode_connector_attach_encoder(&intel_encoder->base, 475 + &intel_encoder->enc); 476 476 if (dvo->type == INTEL_DVO_CHIP_LVDS) { 477 477 /* For our LVDS chipsets, we should hopefully be able 478 478 * to dig the fixed panel mode out of the BIOS data. ··· 490 490 return; 491 491 } 492 492 493 - intel_i2c_destroy(intel_output->ddc_bus); 493 + intel_i2c_destroy(intel_encoder->ddc_bus); 494 494 /* Didn't find a chip, so tear down. */ 495 495 if (i2cbus != NULL) 496 496 intel_i2c_destroy(i2cbus); 497 497 free_intel: 498 - kfree(intel_output); 498 + kfree(intel_encoder); 499 499 }
+1 -1
drivers/gpu/drm/i915/intel_fb.c
··· 144 144 ret = -ENOMEM; 145 145 goto out; 146 146 } 147 - obj_priv = fbo->driver_private; 147 + obj_priv = to_intel_bo(fbo); 148 148 149 149 mutex_lock(&dev->struct_mutex); 150 150
+43 -43
drivers/gpu/drm/i915/intel_hdmi.c
··· 51 51 struct drm_i915_private *dev_priv = dev->dev_private; 52 52 struct drm_crtc *crtc = encoder->crtc; 53 53 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 54 - struct intel_output *intel_output = enc_to_intel_output(encoder); 55 - struct intel_hdmi_priv *hdmi_priv = intel_output->dev_priv; 54 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 55 + struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv; 56 56 u32 sdvox; 57 57 58 58 sdvox = SDVO_ENCODING_HDMI | ··· 74 74 { 75 75 struct drm_device *dev = encoder->dev; 76 76 struct drm_i915_private *dev_priv = dev->dev_private; 77 - struct intel_output *intel_output = enc_to_intel_output(encoder); 78 - struct intel_hdmi_priv *hdmi_priv = intel_output->dev_priv; 77 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 78 + struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv; 79 79 u32 temp; 80 80 81 81 temp = I915_READ(hdmi_priv->sdvox_reg); ··· 110 110 { 111 111 struct drm_device *dev = connector->dev; 112 112 struct drm_i915_private *dev_priv = dev->dev_private; 113 - struct intel_output *intel_output = to_intel_output(connector); 114 - struct intel_hdmi_priv *hdmi_priv = intel_output->dev_priv; 113 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 114 + struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv; 115 115 116 116 hdmi_priv->save_SDVOX = I915_READ(hdmi_priv->sdvox_reg); 117 117 } ··· 120 120 { 121 121 struct drm_device *dev = connector->dev; 122 122 struct drm_i915_private *dev_priv = dev->dev_private; 123 - struct intel_output *intel_output = to_intel_output(connector); 124 - struct intel_hdmi_priv *hdmi_priv = intel_output->dev_priv; 123 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 124 + struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv; 125 125 126 126 I915_WRITE(hdmi_priv->sdvox_reg, hdmi_priv->save_SDVOX); 127 127 POSTING_READ(hdmi_priv->sdvox_reg); ··· 151 151 static enum 
drm_connector_status 152 152 intel_hdmi_detect(struct drm_connector *connector) 153 153 { 154 - struct intel_output *intel_output = to_intel_output(connector); 155 - struct intel_hdmi_priv *hdmi_priv = intel_output->dev_priv; 154 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 155 + struct intel_hdmi_priv *hdmi_priv = intel_encoder->dev_priv; 156 156 struct edid *edid = NULL; 157 157 enum drm_connector_status status = connector_status_disconnected; 158 158 159 159 hdmi_priv->has_hdmi_sink = false; 160 - edid = drm_get_edid(&intel_output->base, 161 - intel_output->ddc_bus); 160 + edid = drm_get_edid(&intel_encoder->base, 161 + intel_encoder->ddc_bus); 162 162 163 163 if (edid) { 164 164 if (edid->input & DRM_EDID_INPUT_DIGITAL) { 165 165 status = connector_status_connected; 166 166 hdmi_priv->has_hdmi_sink = drm_detect_hdmi_monitor(edid); 167 167 } 168 - intel_output->base.display_info.raw_edid = NULL; 168 + intel_encoder->base.display_info.raw_edid = NULL; 169 169 kfree(edid); 170 170 } 171 171 ··· 174 174 175 175 static int intel_hdmi_get_modes(struct drm_connector *connector) 176 176 { 177 - struct intel_output *intel_output = to_intel_output(connector); 177 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 178 178 179 179 /* We should parse the EDID data and find out if it's an HDMI sink so 180 180 * we can send audio to it. 
181 181 */ 182 182 183 - return intel_ddc_get_modes(intel_output); 183 + return intel_ddc_get_modes(intel_encoder); 184 184 } 185 185 186 186 static void intel_hdmi_destroy(struct drm_connector *connector) 187 187 { 188 - struct intel_output *intel_output = to_intel_output(connector); 188 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 189 189 190 - if (intel_output->i2c_bus) 191 - intel_i2c_destroy(intel_output->i2c_bus); 190 + if (intel_encoder->i2c_bus) 191 + intel_i2c_destroy(intel_encoder->i2c_bus); 192 192 drm_sysfs_connector_remove(connector); 193 193 drm_connector_cleanup(connector); 194 - kfree(intel_output); 194 + kfree(intel_encoder); 195 195 } 196 196 197 197 static const struct drm_encoder_helper_funcs intel_hdmi_helper_funcs = { ··· 230 230 { 231 231 struct drm_i915_private *dev_priv = dev->dev_private; 232 232 struct drm_connector *connector; 233 - struct intel_output *intel_output; 233 + struct intel_encoder *intel_encoder; 234 234 struct intel_hdmi_priv *hdmi_priv; 235 235 236 - intel_output = kcalloc(sizeof(struct intel_output) + 236 + intel_encoder = kcalloc(sizeof(struct intel_encoder) + 237 237 sizeof(struct intel_hdmi_priv), 1, GFP_KERNEL); 238 - if (!intel_output) 238 + if (!intel_encoder) 239 239 return; 240 - hdmi_priv = (struct intel_hdmi_priv *)(intel_output + 1); 240 + hdmi_priv = (struct intel_hdmi_priv *)(intel_encoder + 1); 241 241 242 - connector = &intel_output->base; 242 + connector = &intel_encoder->base; 243 243 drm_connector_init(dev, connector, &intel_hdmi_connector_funcs, 244 244 DRM_MODE_CONNECTOR_HDMIA); 245 245 drm_connector_helper_add(connector, &intel_hdmi_connector_helper_funcs); 246 246 247 - intel_output->type = INTEL_OUTPUT_HDMI; 247 + intel_encoder->type = INTEL_OUTPUT_HDMI; 248 248 249 249 connector->interlace_allowed = 0; 250 250 connector->doublescan_allowed = 0; 251 - intel_output->crtc_mask = (1 << 0) | (1 << 1); 251 + intel_encoder->crtc_mask = (1 << 0) | (1 << 1); 252 252 253 253 /* Set up 
the DDC bus. */ 254 254 if (sdvox_reg == SDVOB) { 255 - intel_output->clone_mask = (1 << INTEL_HDMIB_CLONE_BIT); 256 - intel_output->ddc_bus = intel_i2c_create(dev, GPIOE, "HDMIB"); 255 + intel_encoder->clone_mask = (1 << INTEL_HDMIB_CLONE_BIT); 256 + intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOE, "HDMIB"); 257 257 dev_priv->hotplug_supported_mask |= HDMIB_HOTPLUG_INT_STATUS; 258 258 } else if (sdvox_reg == SDVOC) { 259 - intel_output->clone_mask = (1 << INTEL_HDMIC_CLONE_BIT); 260 - intel_output->ddc_bus = intel_i2c_create(dev, GPIOD, "HDMIC"); 259 + intel_encoder->clone_mask = (1 << INTEL_HDMIC_CLONE_BIT); 260 + intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOD, "HDMIC"); 261 261 dev_priv->hotplug_supported_mask |= HDMIC_HOTPLUG_INT_STATUS; 262 262 } else if (sdvox_reg == HDMIB) { 263 - intel_output->clone_mask = (1 << INTEL_HDMID_CLONE_BIT); 264 - intel_output->ddc_bus = intel_i2c_create(dev, PCH_GPIOE, 263 + intel_encoder->clone_mask = (1 << INTEL_HDMID_CLONE_BIT); 264 + intel_encoder->ddc_bus = intel_i2c_create(dev, PCH_GPIOE, 265 265 "HDMIB"); 266 266 dev_priv->hotplug_supported_mask |= HDMIB_HOTPLUG_INT_STATUS; 267 267 } else if (sdvox_reg == HDMIC) { 268 - intel_output->clone_mask = (1 << INTEL_HDMIE_CLONE_BIT); 269 - intel_output->ddc_bus = intel_i2c_create(dev, PCH_GPIOD, 268 + intel_encoder->clone_mask = (1 << INTEL_HDMIE_CLONE_BIT); 269 + intel_encoder->ddc_bus = intel_i2c_create(dev, PCH_GPIOD, 270 270 "HDMIC"); 271 271 dev_priv->hotplug_supported_mask |= HDMIC_HOTPLUG_INT_STATUS; 272 272 } else if (sdvox_reg == HDMID) { 273 - intel_output->clone_mask = (1 << INTEL_HDMIF_CLONE_BIT); 274 - intel_output->ddc_bus = intel_i2c_create(dev, PCH_GPIOF, 273 + intel_encoder->clone_mask = (1 << INTEL_HDMIF_CLONE_BIT); 274 + intel_encoder->ddc_bus = intel_i2c_create(dev, PCH_GPIOF, 275 275 "HDMID"); 276 276 dev_priv->hotplug_supported_mask |= HDMID_HOTPLUG_INT_STATUS; 277 277 } 278 - if (!intel_output->ddc_bus) 278 + if (!intel_encoder->ddc_bus) 279 279 
goto err_connector; 280 280 281 281 hdmi_priv->sdvox_reg = sdvox_reg; 282 - intel_output->dev_priv = hdmi_priv; 282 + intel_encoder->dev_priv = hdmi_priv; 283 283 284 - drm_encoder_init(dev, &intel_output->enc, &intel_hdmi_enc_funcs, 284 + drm_encoder_init(dev, &intel_encoder->enc, &intel_hdmi_enc_funcs, 285 285 DRM_MODE_ENCODER_TMDS); 286 - drm_encoder_helper_add(&intel_output->enc, &intel_hdmi_helper_funcs); 286 + drm_encoder_helper_add(&intel_encoder->enc, &intel_hdmi_helper_funcs); 287 287 288 - drm_mode_connector_attach_encoder(&intel_output->base, 289 - &intel_output->enc); 288 + drm_mode_connector_attach_encoder(&intel_encoder->base, 289 + &intel_encoder->enc); 290 290 drm_sysfs_connector_add(connector); 291 291 292 292 /* For G4X desktop chip, PEG_BAND_GAP_DATA 3:0 must first be written ··· 302 302 303 303 err_connector: 304 304 drm_connector_cleanup(connector); 305 - kfree(intel_output); 305 + kfree(intel_encoder); 306 306 307 307 return; 308 308 }
+47 -34
drivers/gpu/drm/i915/intel_lvds.c
···
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct intel_crtc *intel_crtc = to_intel_crtc(encoder->crtc);
	struct drm_encoder *tmp_encoder;
-	struct intel_output *intel_output = enc_to_intel_output(encoder);
-	struct intel_lvds_priv *lvds_priv = intel_output->dev_priv;
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_lvds_priv *lvds_priv = intel_encoder->dev_priv;
	u32 pfit_control = 0, pfit_pgm_ratios = 0;
	int left_border = 0, right_border = 0, top_border = 0;
	int bottom_border = 0;
···
{
	struct drm_device *dev = encoder->dev;
	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_output *intel_output = enc_to_intel_output(encoder);
-	struct intel_lvds_priv *lvds_priv = intel_output->dev_priv;
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_lvds_priv *lvds_priv = intel_encoder->dev_priv;

	/*
	 * The LVDS pin pair will already have been turned on in the
···
static int intel_lvds_get_modes(struct drm_connector *connector)
{
	struct drm_device *dev = connector->dev;
-	struct intel_output *intel_output = to_intel_output(connector);
+	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
	struct drm_i915_private *dev_priv = dev->dev_private;
	int ret = 0;

-	ret = intel_ddc_get_modes(intel_output);
+	if (dev_priv->lvds_edid_good) {
+		ret = intel_ddc_get_modes(intel_encoder);

-	if (ret)
-		return ret;
+		if (ret)
+			return ret;
+	}

	/* Didn't get an EDID, so
	 * Set wide sync ranges so we get all modes
···
static void intel_lvds_destroy(struct drm_connector *connector)
{
	struct drm_device *dev = connector->dev;
-	struct intel_output *intel_output = to_intel_output(connector);
+	struct intel_encoder *intel_encoder = to_intel_encoder(connector);
	struct drm_i915_private *dev_priv = dev->dev_private;

-	if (intel_output->ddc_bus)
-		intel_i2c_destroy(intel_output->ddc_bus);
+	if (intel_encoder->ddc_bus)
+		intel_i2c_destroy(intel_encoder->ddc_bus);
	if (dev_priv->lid_notifier.notifier_call)
		acpi_lid_notifier_unregister(&dev_priv->lid_notifier);
	drm_sysfs_connector_remove(connector);
···
				   uint64_t value)
{
	struct drm_device *dev = connector->dev;
-	struct intel_output *intel_output =
-			to_intel_output(connector);
+	struct intel_encoder *intel_encoder =
+			to_intel_encoder(connector);

	if (property == dev->mode_config.scaling_mode_property &&
	    connector->encoder) {
		struct drm_crtc *crtc = connector->encoder->crtc;
-		struct intel_lvds_priv *lvds_priv = intel_output->dev_priv;
+		struct intel_lvds_priv *lvds_priv = intel_encoder->dev_priv;
		if (value == DRM_MODE_SCALE_NONE) {
			DRM_DEBUG_KMS("no scaling not supported\n");
			return 0;
···
			DMI_MATCH(DMI_PRODUCT_VERSION, "AO00001JW"),
		},
	},
+	{
+		.callback = intel_no_lvds_dmi_callback,
+		.ident = "Clientron U800",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "Clientron"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "U800"),
+		},
+	},

	{ }	/* terminating entry */
};
···
void intel_lvds_init(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_output *intel_output;
+	struct intel_encoder *intel_encoder;
	struct drm_connector *connector;
	struct drm_encoder *encoder;
	struct drm_display_mode *scan; /* *modes, *bios_mode; */
···
		gpio = PCH_GPIOC;
	}

-	intel_output = kzalloc(sizeof(struct intel_output) +
+	intel_encoder = kzalloc(sizeof(struct intel_encoder) +
				sizeof(struct intel_lvds_priv), GFP_KERNEL);
-	if (!intel_output) {
+	if (!intel_encoder) {
		return;
	}

-	connector = &intel_output->base;
-	encoder = &intel_output->enc;
-	drm_connector_init(dev, &intel_output->base, &intel_lvds_connector_funcs,
+	connector = &intel_encoder->base;
+	encoder = &intel_encoder->enc;
+	drm_connector_init(dev, &intel_encoder->base, &intel_lvds_connector_funcs,
			   DRM_MODE_CONNECTOR_LVDS);

-	drm_encoder_init(dev, &intel_output->enc, &intel_lvds_enc_funcs,
+	drm_encoder_init(dev, &intel_encoder->enc, &intel_lvds_enc_funcs,
			 DRM_MODE_ENCODER_LVDS);

-	drm_mode_connector_attach_encoder(&intel_output->base, &intel_output->enc);
-	intel_output->type = INTEL_OUTPUT_LVDS;
+	drm_mode_connector_attach_encoder(&intel_encoder->base, &intel_encoder->enc);
+	intel_encoder->type = INTEL_OUTPUT_LVDS;

-	intel_output->clone_mask = (1 << INTEL_LVDS_CLONE_BIT);
-	intel_output->crtc_mask = (1 << 1);
+	intel_encoder->clone_mask = (1 << INTEL_LVDS_CLONE_BIT);
+	intel_encoder->crtc_mask = (1 << 1);
	drm_encoder_helper_add(encoder, &intel_lvds_helper_funcs);
	drm_connector_helper_add(connector, &intel_lvds_connector_helper_funcs);
	connector->display_info.subpixel_order = SubPixelHorizontalRGB;
	connector->interlace_allowed = false;
	connector->doublescan_allowed = false;

-	lvds_priv = (struct intel_lvds_priv *)(intel_output + 1);
-	intel_output->dev_priv = lvds_priv;
+	lvds_priv = (struct intel_lvds_priv *)(intel_encoder + 1);
+	intel_encoder->dev_priv = lvds_priv;
	/* create the scaling mode property */
	drm_mode_create_scaling_mode_property(dev);
	/*
	 * the initial panel fitting mode will be FULL_SCREEN.
	 */

-	drm_connector_attach_property(&intel_output->base,
+	drm_connector_attach_property(&intel_encoder->base,
				      dev->mode_config.scaling_mode_property,
				      DRM_MODE_SCALE_FULLSCREEN);
	lvds_priv->fitting_mode = DRM_MODE_SCALE_FULLSCREEN;
···
	 */

	/* Set up the DDC bus. */
-	intel_output->ddc_bus = intel_i2c_create(dev, gpio, "LVDSDDC_C");
-	if (!intel_output->ddc_bus) {
+	intel_encoder->ddc_bus = intel_i2c_create(dev, gpio, "LVDSDDC_C");
+	if (!intel_encoder->ddc_bus) {
		dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration "
			   "failed.\n");
		goto failed;
···
	 * Attempt to get the fixed panel mode from DDC.  Assume that the
	 * preferred mode is the right one.
	 */
-	intel_ddc_get_modes(intel_output);
+	dev_priv->lvds_edid_good = true;
+
+	if (!intel_ddc_get_modes(intel_encoder))
+		dev_priv->lvds_edid_good = false;

	list_for_each_entry(scan, &connector->probed_modes, head) {
		mutex_lock(&dev->mode_config.mutex);
···
failed:
	DRM_DEBUG_KMS("No LVDS modes found, disabling.\n");
-	if (intel_output->ddc_bus)
-		intel_i2c_destroy(intel_output->ddc_bus);
+	if (intel_encoder->ddc_bus)
+		intel_i2c_destroy(intel_encoder->ddc_bus);
	drm_connector_cleanup(connector);
	drm_encoder_cleanup(encoder);
-	kfree(intel_output);
+	kfree(intel_encoder);
}
+11 -11
drivers/gpu/drm/i915/intel_modes.c
···
 * intel_ddc_probe
 *
 */
-bool intel_ddc_probe(struct intel_output *intel_output)
+bool intel_ddc_probe(struct intel_encoder *intel_encoder)
{
	u8 out_buf[] = { 0x0, 0x0};
	u8 buf[2];
···
		}
	};

-	intel_i2c_quirk_set(intel_output->base.dev, true);
-	ret = i2c_transfer(intel_output->ddc_bus, msgs, 2);
-	intel_i2c_quirk_set(intel_output->base.dev, false);
+	intel_i2c_quirk_set(intel_encoder->base.dev, true);
+	ret = i2c_transfer(intel_encoder->ddc_bus, msgs, 2);
+	intel_i2c_quirk_set(intel_encoder->base.dev, false);
	if (ret == 2)
		return true;
···
 *
 * Fetch the EDID information from @connector using the DDC bus.
 */
-int intel_ddc_get_modes(struct intel_output *intel_output)
+int intel_ddc_get_modes(struct intel_encoder *intel_encoder)
{
	struct edid *edid;
	int ret = 0;

-	intel_i2c_quirk_set(intel_output->base.dev, true);
-	edid = drm_get_edid(&intel_output->base, intel_output->ddc_bus);
-	intel_i2c_quirk_set(intel_output->base.dev, false);
+	intel_i2c_quirk_set(intel_encoder->base.dev, true);
+	edid = drm_get_edid(&intel_encoder->base, intel_encoder->ddc_bus);
+	intel_i2c_quirk_set(intel_encoder->base.dev, false);
	if (edid) {
-		drm_mode_connector_update_edid_property(&intel_output->base,
+		drm_mode_connector_update_edid_property(&intel_encoder->base,
							edid);
-		ret = drm_add_edid_modes(&intel_output->base, edid);
-		intel_output->base.display_info.raw_edid = NULL;
+		ret = drm_add_edid_modes(&intel_encoder->base, edid);
+		intel_encoder->base.display_info.raw_edid = NULL;
		kfree(edid);
	}
+3 -3
drivers/gpu/drm/i915/intel_overlay.c
···
	int ret, tmp_width;
	struct overlay_registers *regs;
	bool scale_changed = false;
-	struct drm_i915_gem_object *bo_priv = new_bo->driver_private;
+	struct drm_i915_gem_object *bo_priv = to_intel_bo(new_bo);
	struct drm_device *dev = overlay->dev;

	BUG_ON(!mutex_is_locked(&dev->struct_mutex));
···
	intel_overlay_continue(overlay, scale_changed);

	overlay->old_vid_bo = overlay->vid_bo;
-	overlay->vid_bo = new_bo->driver_private;
+	overlay->vid_bo = to_intel_bo(new_bo);

	return 0;
···
	reg_bo = drm_gem_object_alloc(dev, PAGE_SIZE);
	if (!reg_bo)
		goto out_free;
-	overlay->reg_bo = reg_bo->driver_private;
+	overlay->reg_bo = to_intel_bo(reg_bo);

	if (OVERLAY_NONPHYSICAL(dev)) {
		ret = i915_gem_object_pin(reg_bo, PAGE_SIZE);
+368 -363
drivers/gpu/drm/i915/intel_sdvo.c
···
	u8 slave_addr;

	/* Register for the SDVO device: SDVOB or SDVOC */
-	int output_device;
+	int sdvo_reg;

	/* Active outputs controlled by this SDVO output */
	uint16_t controlled_output;
···
	 */
	struct intel_sdvo_encode encode;

-	/* DDC bus used by this SDVO output */
+	/* DDC bus used by this SDVO encoder */
	uint8_t ddc_bus;

	/* Mac mini hack -- use the same DDC as the analog connector */
···
};

static bool
-intel_sdvo_output_setup(struct intel_output *intel_output, uint16_t flags);
+intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags);

/**
 * Writes the SDVOB or SDVOC with the given value, but always writes both
 * SDVOB and SDVOC to work around apparent hardware issues (according to
 * comments in the BIOS).
 */
-static void intel_sdvo_write_sdvox(struct intel_output *intel_output, u32 val)
+static void intel_sdvo_write_sdvox(struct intel_encoder *intel_encoder, u32 val)
{
-	struct drm_device *dev = intel_output->base.dev;
+	struct drm_device *dev = intel_encoder->base.dev;
	struct drm_i915_private *dev_priv = dev->dev_private;
-	struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	u32 bval = val, cval = val;
	int i;

-	if (sdvo_priv->output_device == SDVOB) {
+	if (sdvo_priv->sdvo_reg == SDVOB) {
		cval = I915_READ(SDVOC);
	} else {
		bval = I915_READ(SDVOB);
···
	}
}

-static bool intel_sdvo_read_byte(struct intel_output *intel_output, u8 addr,
+static bool intel_sdvo_read_byte(struct intel_encoder *intel_encoder, u8 addr,
				 u8 *ch)
{
-	struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	u8 out_buf[2];
	u8 buf[2];
	int ret;
···
	out_buf[0] = addr;
	out_buf[1] = 0;

-	if ((ret = i2c_transfer(intel_output->i2c_bus, msgs, 2)) == 2)
+	if ((ret = i2c_transfer(intel_encoder->i2c_bus, msgs, 2)) == 2)
	{
		*ch = buf[0];
		return true;
···
	return false;
}

-static bool intel_sdvo_write_byte(struct intel_output *intel_output, int addr,
+static bool intel_sdvo_write_byte(struct intel_encoder *intel_encoder, int addr,
				  u8 ch)
{
-	struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	u8 out_buf[2];
	struct i2c_msg msgs[] = {
		{
···
	out_buf[0] = addr;
	out_buf[1] = ch;

-	if (i2c_transfer(intel_output->i2c_bus, msgs, 1) == 1)
+	if (i2c_transfer(intel_encoder->i2c_bus, msgs, 1) == 1)
	{
		return true;
	}
···
    SDVO_CMD_NAME_ENTRY(SDVO_CMD_GET_HBUF_DATA),
};

-#define SDVO_NAME(dev_priv) ((dev_priv)->output_device == SDVOB ? "SDVOB" : "SDVOC")
-#define SDVO_PRIV(output)   ((struct intel_sdvo_priv *) (output)->dev_priv)
+#define SDVO_NAME(dev_priv) ((dev_priv)->sdvo_reg == SDVOB ? "SDVOB" : "SDVOC")
+#define SDVO_PRIV(encoder)   ((struct intel_sdvo_priv *) (encoder)->dev_priv)

-static void intel_sdvo_debug_write(struct intel_output *intel_output, u8 cmd,
+static void intel_sdvo_debug_write(struct intel_encoder *intel_encoder, u8 cmd,
				   void *args, int args_len)
{
-	struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	int i;

	DRM_DEBUG_KMS("%s: W: %02X ",
···
	DRM_LOG_KMS("\n");
}

-static void intel_sdvo_write_cmd(struct intel_output *intel_output, u8 cmd,
+static void intel_sdvo_write_cmd(struct intel_encoder *intel_encoder, u8 cmd,
				 void *args, int args_len)
{
	int i;

-	intel_sdvo_debug_write(intel_output, cmd, args, args_len);
+	intel_sdvo_debug_write(intel_encoder, cmd, args, args_len);

	for (i = 0; i < args_len; i++) {
-		intel_sdvo_write_byte(intel_output, SDVO_I2C_ARG_0 - i,
+		intel_sdvo_write_byte(intel_encoder, SDVO_I2C_ARG_0 - i,
				      ((u8*)args)[i]);
	}

-	intel_sdvo_write_byte(intel_output, SDVO_I2C_OPCODE, cmd);
+	intel_sdvo_write_byte(intel_encoder, SDVO_I2C_OPCODE, cmd);
}

static const char *cmd_status_names[] = {
···
	"Scaling not supported"
};

-static void intel_sdvo_debug_response(struct intel_output *intel_output,
+static void intel_sdvo_debug_response(struct intel_encoder *intel_encoder,
				      void *response, int response_len,
				      u8 status)
{
-	struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	int i;

	DRM_DEBUG_KMS("%s: R: ", SDVO_NAME(sdvo_priv));
···
	DRM_LOG_KMS("\n");
}

-static u8 intel_sdvo_read_response(struct intel_output *intel_output,
+static u8 intel_sdvo_read_response(struct intel_encoder *intel_encoder,
				   void *response, int response_len)
{
	int i;
···
	while (retry--) {
		/* Read the command response */
		for (i = 0; i < response_len; i++) {
-			intel_sdvo_read_byte(intel_output,
+			intel_sdvo_read_byte(intel_encoder,
					     SDVO_I2C_RETURN_0 + i,
					     &((u8 *)response)[i]);
		}

		/* read the return status */
-		intel_sdvo_read_byte(intel_output, SDVO_I2C_CMD_STATUS,
+		intel_sdvo_read_byte(intel_encoder, SDVO_I2C_CMD_STATUS,
				     &status);

-		intel_sdvo_debug_response(intel_output, response, response_len,
+		intel_sdvo_debug_response(intel_encoder, response, response_len,
					  status);
		if (status != SDVO_CMD_STATUS_PENDING)
			return status;
···
 * another I2C transaction after issuing the DDC bus switch, it will be
 * switched to the internal SDVO register.
 */
-static void intel_sdvo_set_control_bus_switch(struct intel_output *intel_output,
+static void intel_sdvo_set_control_bus_switch(struct intel_encoder *intel_encoder,
					      u8 target)
{
-	struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	u8 out_buf[2], cmd_buf[2], ret_value[2], ret;
	struct i2c_msg msgs[] = {
		{
···
		},
	};

-	intel_sdvo_debug_write(intel_output, SDVO_CMD_SET_CONTROL_BUS_SWITCH,
+	intel_sdvo_debug_write(intel_encoder, SDVO_CMD_SET_CONTROL_BUS_SWITCH,
			       &target, 1);
	/* write the DDC switch command argument */
-	intel_sdvo_write_byte(intel_output, SDVO_I2C_ARG_0, target);
+	intel_sdvo_write_byte(intel_encoder, SDVO_I2C_ARG_0, target);

	out_buf[0] = SDVO_I2C_OPCODE;
	out_buf[1] = SDVO_CMD_SET_CONTROL_BUS_SWITCH;
···
	ret_value[0] = 0;
	ret_value[1] = 0;

-	ret = i2c_transfer(intel_output->i2c_bus, msgs, 3);
+	ret = i2c_transfer(intel_encoder->i2c_bus, msgs, 3);
	if (ret != 3) {
		/* failure in I2C transfer */
		DRM_DEBUG_KMS("I2c transfer returned %d\n", ret);
···
	return;
}

-static bool intel_sdvo_set_target_input(struct intel_output *intel_output, bool target_0, bool target_1)
+static bool intel_sdvo_set_target_input(struct intel_encoder *intel_encoder, bool target_0, bool target_1)
{
	struct intel_sdvo_set_target_input_args targets = {0};
	u8 status;
···
	if (target_1)
		targets.target_1 = 1;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_TARGET_INPUT, &targets,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_TARGET_INPUT, &targets,
			     sizeof(targets));

-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);

	return (status == SDVO_CMD_STATUS_SUCCESS);
}
···
 * This function is making an assumption about the layout of the response,
 * which should be checked against the docs.
 */
-static bool intel_sdvo_get_trained_inputs(struct intel_output *intel_output, bool *input_1, bool *input_2)
+static bool intel_sdvo_get_trained_inputs(struct intel_encoder *intel_encoder, bool *input_1, bool *input_2)
{
	struct intel_sdvo_get_trained_inputs_response response;
	u8 status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_TRAINED_INPUTS, NULL, 0);
-	status = intel_sdvo_read_response(intel_output, &response, sizeof(response));
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_TRAINED_INPUTS, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, &response, sizeof(response));
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;
···
	return true;
}

-static bool intel_sdvo_get_active_outputs(struct intel_output *intel_output,
+static bool intel_sdvo_get_active_outputs(struct intel_encoder *intel_encoder,
					  u16 *outputs)
{
	u8 status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_ACTIVE_OUTPUTS, NULL, 0);
-	status = intel_sdvo_read_response(intel_output, outputs, sizeof(*outputs));
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_ACTIVE_OUTPUTS, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, outputs, sizeof(*outputs));

	return (status == SDVO_CMD_STATUS_SUCCESS);
}

-static bool intel_sdvo_set_active_outputs(struct intel_output *intel_output,
+static bool intel_sdvo_set_active_outputs(struct intel_encoder *intel_encoder,
					  u16 outputs)
{
	u8 status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_ACTIVE_OUTPUTS, &outputs,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_ACTIVE_OUTPUTS, &outputs,
			     sizeof(outputs));
-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	return (status == SDVO_CMD_STATUS_SUCCESS);
}

-static bool intel_sdvo_set_encoder_power_state(struct intel_output *intel_output,
+static bool intel_sdvo_set_encoder_power_state(struct intel_encoder *intel_encoder,
					       int mode)
{
	u8 status, state = SDVO_ENCODER_STATE_ON;
···
		break;
	}

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_ENCODER_POWER_STATE, &state,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_ENCODER_POWER_STATE, &state,
			     sizeof(state));
-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);

	return (status == SDVO_CMD_STATUS_SUCCESS);
}

-static bool intel_sdvo_get_input_pixel_clock_range(struct intel_output *intel_output,
+static bool intel_sdvo_get_input_pixel_clock_range(struct intel_encoder *intel_encoder,
						   int *clock_min,
						   int *clock_max)
{
	struct intel_sdvo_pixel_clock_range clocks;
	u8 status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_INPUT_PIXEL_CLOCK_RANGE,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_INPUT_PIXEL_CLOCK_RANGE,
			     NULL, 0);

-	status = intel_sdvo_read_response(intel_output, &clocks, sizeof(clocks));
+	status = intel_sdvo_read_response(intel_encoder, &clocks, sizeof(clocks));

	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;
···
	return true;
}

-static bool intel_sdvo_set_target_output(struct intel_output *intel_output,
+static bool intel_sdvo_set_target_output(struct intel_encoder *intel_encoder,
					 u16 outputs)
{
	u8 status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_TARGET_OUTPUT, &outputs,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_TARGET_OUTPUT, &outputs,
			     sizeof(outputs));

-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	return (status == SDVO_CMD_STATUS_SUCCESS);
}

-static bool intel_sdvo_get_timing(struct intel_output *intel_output, u8 cmd,
+static bool intel_sdvo_get_timing(struct intel_encoder *intel_encoder, u8 cmd,
				  struct intel_sdvo_dtd *dtd)
{
	u8 status;

-	intel_sdvo_write_cmd(intel_output, cmd, NULL, 0);
-	status = intel_sdvo_read_response(intel_output, &dtd->part1,
+	intel_sdvo_write_cmd(intel_encoder, cmd, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, &dtd->part1,
					  sizeof(dtd->part1));
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;

-	intel_sdvo_write_cmd(intel_output, cmd + 1, NULL, 0);
-	status = intel_sdvo_read_response(intel_output, &dtd->part2,
+	intel_sdvo_write_cmd(intel_encoder, cmd + 1, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, &dtd->part2,
					  sizeof(dtd->part2));
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;
···
	return true;
}

-static bool intel_sdvo_get_input_timing(struct intel_output *intel_output,
+static bool intel_sdvo_get_input_timing(struct intel_encoder *intel_encoder,
					struct intel_sdvo_dtd *dtd)
{
-	return intel_sdvo_get_timing(intel_output,
+	return intel_sdvo_get_timing(intel_encoder,
				     SDVO_CMD_GET_INPUT_TIMINGS_PART1, dtd);
}

-static bool intel_sdvo_get_output_timing(struct intel_output *intel_output,
+static bool intel_sdvo_get_output_timing(struct intel_encoder *intel_encoder,
					 struct intel_sdvo_dtd *dtd)
{
-	return intel_sdvo_get_timing(intel_output,
+	return intel_sdvo_get_timing(intel_encoder,
				     SDVO_CMD_GET_OUTPUT_TIMINGS_PART1, dtd);
}

-static bool intel_sdvo_set_timing(struct intel_output *intel_output, u8 cmd,
+static bool intel_sdvo_set_timing(struct intel_encoder *intel_encoder, u8 cmd,
				  struct intel_sdvo_dtd *dtd)
{
	u8 status;

-	intel_sdvo_write_cmd(intel_output, cmd, &dtd->part1, sizeof(dtd->part1));
-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	intel_sdvo_write_cmd(intel_encoder, cmd, &dtd->part1, sizeof(dtd->part1));
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;

-	intel_sdvo_write_cmd(intel_output, cmd + 1, &dtd->part2, sizeof(dtd->part2));
-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	intel_sdvo_write_cmd(intel_encoder, cmd + 1, &dtd->part2, sizeof(dtd->part2));
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;

	return true;
}

-static bool intel_sdvo_set_input_timing(struct intel_output *intel_output,
+static bool intel_sdvo_set_input_timing(struct intel_encoder *intel_encoder,
					struct intel_sdvo_dtd *dtd)
{
-	return intel_sdvo_set_timing(intel_output,
+	return intel_sdvo_set_timing(intel_encoder,
				     SDVO_CMD_SET_INPUT_TIMINGS_PART1, dtd);
}

-static bool intel_sdvo_set_output_timing(struct intel_output *intel_output,
+static bool intel_sdvo_set_output_timing(struct intel_encoder *intel_encoder,
					 struct intel_sdvo_dtd *dtd)
{
-	return intel_sdvo_set_timing(intel_output,
+	return intel_sdvo_set_timing(intel_encoder,
				     SDVO_CMD_SET_OUTPUT_TIMINGS_PART1, dtd);
}

static bool
-intel_sdvo_create_preferred_input_timing(struct intel_output *output,
+intel_sdvo_create_preferred_input_timing(struct intel_encoder *intel_encoder,
					 uint16_t clock,
					 uint16_t width,
					 uint16_t height)
{
	struct intel_sdvo_preferred_input_timing_args args;
-	struct intel_sdvo_priv *sdvo_priv = output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	uint8_t status;

	memset(&args, 0, sizeof(args));
···
	     sdvo_priv->sdvo_lvds_fixed_mode->vdisplay != height))
		args.scaled = 1;

-	intel_sdvo_write_cmd(output, SDVO_CMD_CREATE_PREFERRED_INPUT_TIMING,
+	intel_sdvo_write_cmd(intel_encoder,
+			     SDVO_CMD_CREATE_PREFERRED_INPUT_TIMING,
			     &args, sizeof(args));
-	status = intel_sdvo_read_response(output, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;

	return true;
}

-static bool intel_sdvo_get_preferred_input_timing(struct intel_output *output,
+static bool intel_sdvo_get_preferred_input_timing(struct intel_encoder *intel_encoder,
						  struct intel_sdvo_dtd *dtd)
{
	bool status;

-	intel_sdvo_write_cmd(output, SDVO_CMD_GET_PREFERRED_INPUT_TIMING_PART1,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_PREFERRED_INPUT_TIMING_PART1,
			     NULL, 0);

-	status = intel_sdvo_read_response(output, &dtd->part1,
+	status = intel_sdvo_read_response(intel_encoder, &dtd->part1,
					  sizeof(dtd->part1));
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;

-	intel_sdvo_write_cmd(output, SDVO_CMD_GET_PREFERRED_INPUT_TIMING_PART2,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_PREFERRED_INPUT_TIMING_PART2,
			     NULL, 0);

-	status = intel_sdvo_read_response(output, &dtd->part2,
+	status = intel_sdvo_read_response(intel_encoder, &dtd->part2,
					  sizeof(dtd->part2));
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;
···
	return false;
}

-static int intel_sdvo_get_clock_rate_mult(struct intel_output *intel_output)
+static int intel_sdvo_get_clock_rate_mult(struct intel_encoder *intel_encoder)
{
	u8 response, status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_CLOCK_RATE_MULT, NULL, 0);
-	status = intel_sdvo_read_response(intel_output, &response, 1);
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_CLOCK_RATE_MULT, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, &response, 1);

	if (status != SDVO_CMD_STATUS_SUCCESS) {
		DRM_DEBUG_KMS("Couldn't get SDVO clock rate multiplier\n");
···
	return response;
}

-static bool intel_sdvo_set_clock_rate_mult(struct intel_output *intel_output, u8 val)
+static bool intel_sdvo_set_clock_rate_mult(struct intel_encoder *intel_encoder, u8 val)
{
	u8 status;

-	intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_CLOCK_RATE_MULT, &val, 1);
-	status = intel_sdvo_read_response(intel_output, NULL, 0);
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_CLOCK_RATE_MULT, &val, 1);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	if (status != SDVO_CMD_STATUS_SUCCESS)
		return false;
···
		mode->flags |= DRM_MODE_FLAG_PVSYNC;
}

-static bool intel_sdvo_get_supp_encode(struct intel_output *output,
+static bool intel_sdvo_get_supp_encode(struct intel_encoder *intel_encoder,
				       struct intel_sdvo_encode *encode)
{
	uint8_t status;

-	intel_sdvo_write_cmd(output, SDVO_CMD_GET_SUPP_ENCODE, NULL, 0);
-	status = intel_sdvo_read_response(output, encode, sizeof(*encode));
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_SUPP_ENCODE, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, encode, sizeof(*encode));
	if (status != SDVO_CMD_STATUS_SUCCESS) { /* non-support means DVI */
		memset(encode, 0, sizeof(*encode));
		return false;
···
	return true;
}

-static bool intel_sdvo_set_encode(struct intel_output *output, uint8_t mode)
+static bool intel_sdvo_set_encode(struct intel_encoder *intel_encoder,
+				  uint8_t mode)
{
	uint8_t status;

-	intel_sdvo_write_cmd(output, SDVO_CMD_SET_ENCODE, &mode, 1);
-	status = intel_sdvo_read_response(output, NULL, 0);
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_ENCODE, &mode, 1);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);

	return (status == SDVO_CMD_STATUS_SUCCESS);
}

-static bool intel_sdvo_set_colorimetry(struct intel_output *output,
+static bool intel_sdvo_set_colorimetry(struct intel_encoder *intel_encoder,
				       uint8_t mode)
{
	uint8_t status;

-	intel_sdvo_write_cmd(output, SDVO_CMD_SET_COLORIMETRY, &mode, 1);
-	status = intel_sdvo_read_response(output, NULL, 0);
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_COLORIMETRY, &mode, 1);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);

	return (status == SDVO_CMD_STATUS_SUCCESS);
}

#if 0
-static void intel_sdvo_dump_hdmi_buf(struct intel_output *output)
+static void intel_sdvo_dump_hdmi_buf(struct intel_encoder *intel_encoder)
{
	int i, j;
	uint8_t set_buf_index[2];
···
	uint8_t buf[48];
	uint8_t *pos;

-	intel_sdvo_write_cmd(output, SDVO_CMD_GET_HBUF_AV_SPLIT, NULL, 0);
-	intel_sdvo_read_response(output, &av_split, 1);
+	intel_sdvo_write_cmd(encoder, SDVO_CMD_GET_HBUF_AV_SPLIT, NULL, 0);
+	intel_sdvo_read_response(encoder, &av_split, 1);

	for (i = 0; i <= av_split; i++) {
		set_buf_index[0] = i; set_buf_index[1] = 0;
-		intel_sdvo_write_cmd(output, SDVO_CMD_SET_HBUF_INDEX,
+		intel_sdvo_write_cmd(encoder, SDVO_CMD_SET_HBUF_INDEX,
				     set_buf_index, 2);
-		intel_sdvo_write_cmd(output, SDVO_CMD_GET_HBUF_INFO, NULL, 0);
-		intel_sdvo_read_response(output, &buf_size, 1);
+		intel_sdvo_write_cmd(encoder, SDVO_CMD_GET_HBUF_INFO, NULL, 0);
+		intel_sdvo_read_response(encoder, &buf_size, 1);

		pos = buf;
		for (j = 0; j <= buf_size; j += 8) {
-			intel_sdvo_write_cmd(output, SDVO_CMD_GET_HBUF_DATA,
+			intel_sdvo_write_cmd(encoder, SDVO_CMD_GET_HBUF_DATA,
					     NULL, 0);
-			intel_sdvo_read_response(output, pos, 8);
+			intel_sdvo_read_response(encoder, pos, 8);
			pos += 8;
		}
	}
}
#endif

-static void intel_sdvo_set_hdmi_buf(struct intel_output *output, int index,
-				    uint8_t *data, int8_t size, uint8_t tx_rate)
+static void intel_sdvo_set_hdmi_buf(struct intel_encoder *intel_encoder,
+				    int index,
+				    uint8_t *data, int8_t size, uint8_t tx_rate)
{
	uint8_t set_buf_index[2];

	set_buf_index[0] = index;
	set_buf_index[1] = 0;

-	intel_sdvo_write_cmd(output, SDVO_CMD_SET_HBUF_INDEX, set_buf_index, 2);
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_HBUF_INDEX,
+			     set_buf_index, 2);

	for (; size > 0; size -= 8) {
-		intel_sdvo_write_cmd(output, SDVO_CMD_SET_HBUF_DATA, data, 8);
+		intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_HBUF_DATA, data, 8);
		data += 8;
	}

-	intel_sdvo_write_cmd(output, SDVO_CMD_SET_HBUF_TXRATE, &tx_rate, 1);
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_HBUF_TXRATE, &tx_rate, 1);
}

static uint8_t intel_sdvo_calc_hbuf_csum(uint8_t *data, uint8_t size)
···
	} __attribute__ ((packed)) u;
} __attribute__((packed));

-static void intel_sdvo_set_avi_infoframe(struct intel_output *output,
+static void intel_sdvo_set_avi_infoframe(struct intel_encoder *intel_encoder,
					 struct drm_display_mode * mode)
{
	struct dip_infoframe avi_if = {
···

	avi_if.checksum = intel_sdvo_calc_hbuf_csum((uint8_t *)&avi_if,
						    4 + avi_if.len);
-	intel_sdvo_set_hdmi_buf(output, 1, (uint8_t *)&avi_if, 4 + avi_if.len,
+	intel_sdvo_set_hdmi_buf(intel_encoder, 1, (uint8_t *)&avi_if,
+				4 + avi_if.len,
				SDVO_HBUF_TX_VSYNC);
}

-static void intel_sdvo_set_tv_format(struct intel_output *output)
+static void intel_sdvo_set_tv_format(struct intel_encoder *intel_encoder)
{

	struct intel_sdvo_tv_format format;
-	struct intel_sdvo_priv *sdvo_priv = output->dev_priv;
+	struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv;
	uint32_t format_map, i;
	uint8_t status;
···
	memcpy(&format, &format_map, sizeof(format_map) > sizeof(format) ?
			sizeof(format) : sizeof(format_map));

-	intel_sdvo_write_cmd(output, SDVO_CMD_SET_TV_FORMAT, &format_map,
+	intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_TV_FORMAT, &format_map,
			     sizeof(format));

-	status = intel_sdvo_read_response(output, NULL, 0);
+	status = intel_sdvo_read_response(intel_encoder, NULL, 0);
	if (status != SDVO_CMD_STATUS_SUCCESS)
		DRM_DEBUG_KMS("%s: Failed to set TV format\n",
			      SDVO_NAME(sdvo_priv));
···
				  struct drm_display_mode *mode,
				  struct drm_display_mode *adjusted_mode)
{
-	struct intel_output *output = enc_to_intel_output(encoder);
-	struct intel_sdvo_priv *dev_priv = output->dev_priv;
+	struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder);
+	struct intel_sdvo_priv *dev_priv = intel_encoder->dev_priv;

	if (dev_priv->is_tv) {
		struct intel_sdvo_dtd output_dtd;
···

		/* Set output timings */
		intel_sdvo_get_dtd_from_mode(&output_dtd, mode);
-		intel_sdvo_set_target_output(output,
+		intel_sdvo_set_target_output(intel_encoder,
					     dev_priv->controlled_output);
-		intel_sdvo_set_output_timing(output, &output_dtd);
+		intel_sdvo_set_output_timing(intel_encoder, &output_dtd);

		/* Set the input timing to the screen. Assume always input 0. */
-		intel_sdvo_set_target_input(output, true, false);
+		intel_sdvo_set_target_input(intel_encoder, true, false);


-		success = intel_sdvo_create_preferred_input_timing(output,
+		success = intel_sdvo_create_preferred_input_timing(intel_encoder,
								   mode->clock / 10,
								   mode->hdisplay,
								   mode->vdisplay);
		if (success) {
			struct intel_sdvo_dtd input_dtd;

-			intel_sdvo_get_preferred_input_timing(output,
+			intel_sdvo_get_preferred_input_timing(intel_encoder,
							      &input_dtd);
			intel_sdvo_get_mode_from_dtd(adjusted_mode, &input_dtd);
			dev_priv->sdvo_flags = input_dtd.part2.sdvo_flags;
···
		intel_sdvo_get_dtd_from_mode(&output_dtd,
				dev_priv->sdvo_lvds_fixed_mode);

-		intel_sdvo_set_target_output(output,
+		intel_sdvo_set_target_output(intel_encoder,
					     dev_priv->controlled_output);
-		intel_sdvo_set_output_timing(output, &output_dtd);
+		intel_sdvo_set_output_timing(intel_encoder, &output_dtd);

		/* Set the input timing to the screen. Assume always input 0.
*/ 1146 - intel_sdvo_set_target_input(output, true, false); 1141 + intel_sdvo_set_target_input(intel_encoder, true, false); 1147 1142 1148 1143 1149 1144 success = intel_sdvo_create_preferred_input_timing( 1150 - output, 1145 + intel_encoder, 1151 1146 mode->clock / 10, 1152 1147 mode->hdisplay, 1153 1148 mode->vdisplay); ··· 1155 1150 if (success) { 1156 1151 struct intel_sdvo_dtd input_dtd; 1157 1152 1158 - intel_sdvo_get_preferred_input_timing(output, 1153 + intel_sdvo_get_preferred_input_timing(intel_encoder, 1159 1154 &input_dtd); 1160 1155 intel_sdvo_get_mode_from_dtd(adjusted_mode, &input_dtd); 1161 1156 dev_priv->sdvo_flags = input_dtd.part2.sdvo_flags; ··· 1187 1182 struct drm_i915_private *dev_priv = dev->dev_private; 1188 1183 struct drm_crtc *crtc = encoder->crtc; 1189 1184 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 1190 - struct intel_output *output = enc_to_intel_output(encoder); 1191 - struct intel_sdvo_priv *sdvo_priv = output->dev_priv; 1185 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 1186 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1192 1187 u32 sdvox = 0; 1193 1188 int sdvo_pixel_multiply; 1194 1189 struct intel_sdvo_in_out_map in_out; ··· 1207 1202 in_out.in0 = sdvo_priv->controlled_output; 1208 1203 in_out.in1 = 0; 1209 1204 1210 - intel_sdvo_write_cmd(output, SDVO_CMD_SET_IN_OUT_MAP, 1205 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_IN_OUT_MAP, 1211 1206 &in_out, sizeof(in_out)); 1212 - status = intel_sdvo_read_response(output, NULL, 0); 1207 + status = intel_sdvo_read_response(intel_encoder, NULL, 0); 1213 1208 1214 1209 if (sdvo_priv->is_hdmi) { 1215 - intel_sdvo_set_avi_infoframe(output, mode); 1210 + intel_sdvo_set_avi_infoframe(intel_encoder, mode); 1216 1211 sdvox |= SDVO_AUDIO_ENABLE; 1217 1212 } 1218 1213 ··· 1229 1224 */ 1230 1225 if (!sdvo_priv->is_tv && !sdvo_priv->is_lvds) { 1231 1226 /* Set the output timing to the screen */ 1232 - intel_sdvo_set_target_output(output, 
1227 + intel_sdvo_set_target_output(intel_encoder, 1233 1228 sdvo_priv->controlled_output); 1234 - intel_sdvo_set_output_timing(output, &input_dtd); 1229 + intel_sdvo_set_output_timing(intel_encoder, &input_dtd); 1235 1230 } 1236 1231 1237 1232 /* Set the input timing to the screen. Assume always input 0. */ 1238 - intel_sdvo_set_target_input(output, true, false); 1233 + intel_sdvo_set_target_input(intel_encoder, true, false); 1239 1234 1240 1235 if (sdvo_priv->is_tv) 1241 - intel_sdvo_set_tv_format(output); 1236 + intel_sdvo_set_tv_format(intel_encoder); 1242 1237 1243 1238 /* We would like to use intel_sdvo_create_preferred_input_timing() to 1244 1239 * provide the device with a timing it can support, if it supports that ··· 1246 1241 * output the preferred timing, and we don't support that currently. 1247 1242 */ 1248 1243 #if 0 1249 - success = intel_sdvo_create_preferred_input_timing(output, clock, 1244 + success = intel_sdvo_create_preferred_input_timing(intel_encoder, clock, 1250 1245 width, height); 1251 1246 if (success) { 1252 1247 struct intel_sdvo_dtd *input_dtd; 1253 1248 1254 - intel_sdvo_get_preferred_input_timing(output, &input_dtd); 1255 - intel_sdvo_set_input_timing(output, &input_dtd); 1249 + intel_sdvo_get_preferred_input_timing(intel_encoder, &input_dtd); 1250 + intel_sdvo_set_input_timing(intel_encoder, &input_dtd); 1256 1251 } 1257 1252 #else 1258 - intel_sdvo_set_input_timing(output, &input_dtd); 1253 + intel_sdvo_set_input_timing(intel_encoder, &input_dtd); 1259 1254 #endif 1260 1255 1261 1256 switch (intel_sdvo_get_pixel_multiplier(mode)) { 1262 1257 case 1: 1263 - intel_sdvo_set_clock_rate_mult(output, 1258 + intel_sdvo_set_clock_rate_mult(intel_encoder, 1264 1259 SDVO_CLOCK_RATE_MULT_1X); 1265 1260 break; 1266 1261 case 2: 1267 - intel_sdvo_set_clock_rate_mult(output, 1262 + intel_sdvo_set_clock_rate_mult(intel_encoder, 1268 1263 SDVO_CLOCK_RATE_MULT_2X); 1269 1264 break; 1270 1265 case 4: 1271 - intel_sdvo_set_clock_rate_mult(output, 1266 + 
intel_sdvo_set_clock_rate_mult(intel_encoder, 1272 1267 SDVO_CLOCK_RATE_MULT_4X); 1273 1268 break; 1274 1269 } ··· 1279 1274 SDVO_VSYNC_ACTIVE_HIGH | 1280 1275 SDVO_HSYNC_ACTIVE_HIGH; 1281 1276 } else { 1282 - sdvox |= I915_READ(sdvo_priv->output_device); 1283 - switch (sdvo_priv->output_device) { 1277 + sdvox |= I915_READ(sdvo_priv->sdvo_reg); 1278 + switch (sdvo_priv->sdvo_reg) { 1284 1279 case SDVOB: 1285 1280 sdvox &= SDVOB_PRESERVE_MASK; 1286 1281 break; ··· 1304 1299 1305 1300 if (sdvo_priv->sdvo_flags & SDVO_NEED_TO_STALL) 1306 1301 sdvox |= SDVO_STALL_SELECT; 1307 - intel_sdvo_write_sdvox(output, sdvox); 1302 + intel_sdvo_write_sdvox(intel_encoder, sdvox); 1308 1303 } 1309 1304 1310 1305 static void intel_sdvo_dpms(struct drm_encoder *encoder, int mode) 1311 1306 { 1312 1307 struct drm_device *dev = encoder->dev; 1313 1308 struct drm_i915_private *dev_priv = dev->dev_private; 1314 - struct intel_output *intel_output = enc_to_intel_output(encoder); 1315 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1309 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 1310 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1316 1311 u32 temp; 1317 1312 1318 1313 if (mode != DRM_MODE_DPMS_ON) { 1319 - intel_sdvo_set_active_outputs(intel_output, 0); 1314 + intel_sdvo_set_active_outputs(intel_encoder, 0); 1320 1315 if (0) 1321 - intel_sdvo_set_encoder_power_state(intel_output, mode); 1316 + intel_sdvo_set_encoder_power_state(intel_encoder, mode); 1322 1317 1323 1318 if (mode == DRM_MODE_DPMS_OFF) { 1324 - temp = I915_READ(sdvo_priv->output_device); 1319 + temp = I915_READ(sdvo_priv->sdvo_reg); 1325 1320 if ((temp & SDVO_ENABLE) != 0) { 1326 - intel_sdvo_write_sdvox(intel_output, temp & ~SDVO_ENABLE); 1321 + intel_sdvo_write_sdvox(intel_encoder, temp & ~SDVO_ENABLE); 1327 1322 } 1328 1323 } 1329 1324 } else { ··· 1331 1326 int i; 1332 1327 u8 status; 1333 1328 1334 - temp = I915_READ(sdvo_priv->output_device); 1329 + temp = 
I915_READ(sdvo_priv->sdvo_reg); 1335 1330 if ((temp & SDVO_ENABLE) == 0) 1336 - intel_sdvo_write_sdvox(intel_output, temp | SDVO_ENABLE); 1331 + intel_sdvo_write_sdvox(intel_encoder, temp | SDVO_ENABLE); 1337 1332 for (i = 0; i < 2; i++) 1338 1333 intel_wait_for_vblank(dev); 1339 1334 1340 - status = intel_sdvo_get_trained_inputs(intel_output, &input1, 1335 + status = intel_sdvo_get_trained_inputs(intel_encoder, &input1, 1341 1336 &input2); 1342 1337 1343 1338 ··· 1351 1346 } 1352 1347 1353 1348 if (0) 1354 - intel_sdvo_set_encoder_power_state(intel_output, mode); 1355 - intel_sdvo_set_active_outputs(intel_output, sdvo_priv->controlled_output); 1349 + intel_sdvo_set_encoder_power_state(intel_encoder, mode); 1350 + intel_sdvo_set_active_outputs(intel_encoder, sdvo_priv->controlled_output); 1356 1351 } 1357 1352 return; 1358 1353 } ··· 1361 1356 { 1362 1357 struct drm_device *dev = connector->dev; 1363 1358 struct drm_i915_private *dev_priv = dev->dev_private; 1364 - struct intel_output *intel_output = to_intel_output(connector); 1365 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1359 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1360 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1366 1361 int o; 1367 1362 1368 - sdvo_priv->save_sdvo_mult = intel_sdvo_get_clock_rate_mult(intel_output); 1369 - intel_sdvo_get_active_outputs(intel_output, &sdvo_priv->save_active_outputs); 1363 + sdvo_priv->save_sdvo_mult = intel_sdvo_get_clock_rate_mult(intel_encoder); 1364 + intel_sdvo_get_active_outputs(intel_encoder, &sdvo_priv->save_active_outputs); 1370 1365 1371 1366 if (sdvo_priv->caps.sdvo_inputs_mask & 0x1) { 1372 - intel_sdvo_set_target_input(intel_output, true, false); 1373 - intel_sdvo_get_input_timing(intel_output, 1367 + intel_sdvo_set_target_input(intel_encoder, true, false); 1368 + intel_sdvo_get_input_timing(intel_encoder, 1374 1369 &sdvo_priv->save_input_dtd_1); 1375 1370 } 1376 1371 1377 1372 if 
(sdvo_priv->caps.sdvo_inputs_mask & 0x2) { 1378 - intel_sdvo_set_target_input(intel_output, false, true); 1379 - intel_sdvo_get_input_timing(intel_output, 1373 + intel_sdvo_set_target_input(intel_encoder, false, true); 1374 + intel_sdvo_get_input_timing(intel_encoder, 1380 1375 &sdvo_priv->save_input_dtd_2); 1381 1376 } 1382 1377 ··· 1385 1380 u16 this_output = (1 << o); 1386 1381 if (sdvo_priv->caps.output_flags & this_output) 1387 1382 { 1388 - intel_sdvo_set_target_output(intel_output, this_output); 1389 - intel_sdvo_get_output_timing(intel_output, 1383 + intel_sdvo_set_target_output(intel_encoder, this_output); 1384 + intel_sdvo_get_output_timing(intel_encoder, 1390 1385 &sdvo_priv->save_output_dtd[o]); 1391 1386 } 1392 1387 } ··· 1394 1389 /* XXX: Save TV format/enhancements. */ 1395 1390 } 1396 1391 1397 - sdvo_priv->save_SDVOX = I915_READ(sdvo_priv->output_device); 1392 + sdvo_priv->save_SDVOX = I915_READ(sdvo_priv->sdvo_reg); 1398 1393 } 1399 1394 1400 1395 static void intel_sdvo_restore(struct drm_connector *connector) 1401 1396 { 1402 1397 struct drm_device *dev = connector->dev; 1403 - struct intel_output *intel_output = to_intel_output(connector); 1404 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1398 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1399 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1405 1400 int o; 1406 1401 int i; 1407 1402 bool input1, input2; 1408 1403 u8 status; 1409 1404 1410 - intel_sdvo_set_active_outputs(intel_output, 0); 1405 + intel_sdvo_set_active_outputs(intel_encoder, 0); 1411 1406 1412 1407 for (o = SDVO_OUTPUT_FIRST; o <= SDVO_OUTPUT_LAST; o++) 1413 1408 { 1414 1409 u16 this_output = (1 << o); 1415 1410 if (sdvo_priv->caps.output_flags & this_output) { 1416 - intel_sdvo_set_target_output(intel_output, this_output); 1417 - intel_sdvo_set_output_timing(intel_output, &sdvo_priv->save_output_dtd[o]); 1411 + intel_sdvo_set_target_output(intel_encoder, this_output); 1412 + 
intel_sdvo_set_output_timing(intel_encoder, &sdvo_priv->save_output_dtd[o]); 1418 1413 } 1419 1414 } 1420 1415 1421 1416 if (sdvo_priv->caps.sdvo_inputs_mask & 0x1) { 1422 - intel_sdvo_set_target_input(intel_output, true, false); 1423 - intel_sdvo_set_input_timing(intel_output, &sdvo_priv->save_input_dtd_1); 1417 + intel_sdvo_set_target_input(intel_encoder, true, false); 1418 + intel_sdvo_set_input_timing(intel_encoder, &sdvo_priv->save_input_dtd_1); 1424 1419 } 1425 1420 1426 1421 if (sdvo_priv->caps.sdvo_inputs_mask & 0x2) { 1427 - intel_sdvo_set_target_input(intel_output, false, true); 1428 - intel_sdvo_set_input_timing(intel_output, &sdvo_priv->save_input_dtd_2); 1422 + intel_sdvo_set_target_input(intel_encoder, false, true); 1423 + intel_sdvo_set_input_timing(intel_encoder, &sdvo_priv->save_input_dtd_2); 1429 1424 } 1430 1425 1431 - intel_sdvo_set_clock_rate_mult(intel_output, sdvo_priv->save_sdvo_mult); 1426 + intel_sdvo_set_clock_rate_mult(intel_encoder, sdvo_priv->save_sdvo_mult); 1432 1427 1433 1428 if (sdvo_priv->is_tv) { 1434 1429 /* XXX: Restore TV format/enhancements. 
*/ 1435 1430 } 1436 1431 1437 - intel_sdvo_write_sdvox(intel_output, sdvo_priv->save_SDVOX); 1432 + intel_sdvo_write_sdvox(intel_encoder, sdvo_priv->save_SDVOX); 1438 1433 1439 1434 if (sdvo_priv->save_SDVOX & SDVO_ENABLE) 1440 1435 { 1441 1436 for (i = 0; i < 2; i++) 1442 1437 intel_wait_for_vblank(dev); 1443 - status = intel_sdvo_get_trained_inputs(intel_output, &input1, &input2); 1438 + status = intel_sdvo_get_trained_inputs(intel_encoder, &input1, &input2); 1444 1439 if (status == SDVO_CMD_STATUS_SUCCESS && !input1) 1445 1440 DRM_DEBUG_KMS("First %s output reported failure to " 1446 1441 "sync\n", SDVO_NAME(sdvo_priv)); 1447 1442 } 1448 1443 1449 - intel_sdvo_set_active_outputs(intel_output, sdvo_priv->save_active_outputs); 1444 + intel_sdvo_set_active_outputs(intel_encoder, sdvo_priv->save_active_outputs); 1450 1445 } 1451 1446 1452 1447 static int intel_sdvo_mode_valid(struct drm_connector *connector, 1453 1448 struct drm_display_mode *mode) 1454 1449 { 1455 - struct intel_output *intel_output = to_intel_output(connector); 1456 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1450 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1451 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1457 1452 1458 1453 if (mode->flags & DRM_MODE_FLAG_DBLSCAN) 1459 1454 return MODE_NO_DBLESCAN; ··· 1478 1473 return MODE_OK; 1479 1474 } 1480 1475 1481 - static bool intel_sdvo_get_capabilities(struct intel_output *intel_output, struct intel_sdvo_caps *caps) 1476 + static bool intel_sdvo_get_capabilities(struct intel_encoder *intel_encoder, struct intel_sdvo_caps *caps) 1482 1477 { 1483 1478 u8 status; 1484 1479 1485 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_DEVICE_CAPS, NULL, 0); 1486 - status = intel_sdvo_read_response(intel_output, caps, sizeof(*caps)); 1480 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_DEVICE_CAPS, NULL, 0); 1481 + status = intel_sdvo_read_response(intel_encoder, caps, sizeof(*caps)); 1487 1482 if 
(status != SDVO_CMD_STATUS_SUCCESS) 1488 1483 return false; 1489 1484 ··· 1493 1488 struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB) 1494 1489 { 1495 1490 struct drm_connector *connector = NULL; 1496 - struct intel_output *iout = NULL; 1491 + struct intel_encoder *iout = NULL; 1497 1492 struct intel_sdvo_priv *sdvo; 1498 1493 1499 1494 /* find the sdvo connector */ 1500 1495 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 1501 - iout = to_intel_output(connector); 1496 + iout = to_intel_encoder(connector); 1502 1497 1503 1498 if (iout->type != INTEL_OUTPUT_SDVO) 1504 1499 continue; 1505 1500 1506 1501 sdvo = iout->dev_priv; 1507 1502 1508 - if (sdvo->output_device == SDVOB && sdvoB) 1503 + if (sdvo->sdvo_reg == SDVOB && sdvoB) 1509 1504 return connector; 1510 1505 1511 - if (sdvo->output_device == SDVOC && !sdvoB) 1506 + if (sdvo->sdvo_reg == SDVOC && !sdvoB) 1512 1507 return connector; 1513 1508 1514 1509 } ··· 1520 1515 { 1521 1516 u8 response[2]; 1522 1517 u8 status; 1523 - struct intel_output *intel_output; 1518 + struct intel_encoder *intel_encoder; 1524 1519 DRM_DEBUG_KMS("\n"); 1525 1520 1526 1521 if (!connector) 1527 1522 return 0; 1528 1523 1529 - intel_output = to_intel_output(connector); 1524 + intel_encoder = to_intel_encoder(connector); 1530 1525 1531 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_HOT_PLUG_SUPPORT, NULL, 0); 1532 - status = intel_sdvo_read_response(intel_output, &response, 2); 1526 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_HOT_PLUG_SUPPORT, NULL, 0); 1527 + status = intel_sdvo_read_response(intel_encoder, &response, 2); 1533 1528 1534 1529 if (response[0] !=0) 1535 1530 return 1; ··· 1541 1536 { 1542 1537 u8 response[2]; 1543 1538 u8 status; 1544 - struct intel_output *intel_output = to_intel_output(connector); 1539 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1545 1540 1546 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0); 
1547 - intel_sdvo_read_response(intel_output, &response, 2); 1541 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0); 1542 + intel_sdvo_read_response(intel_encoder, &response, 2); 1548 1543 1549 1544 if (on) { 1550 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_HOT_PLUG_SUPPORT, NULL, 0); 1551 - status = intel_sdvo_read_response(intel_output, &response, 2); 1545 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_HOT_PLUG_SUPPORT, NULL, 0); 1546 + status = intel_sdvo_read_response(intel_encoder, &response, 2); 1552 1547 1553 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2); 1548 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2); 1554 1549 } else { 1555 1550 response[0] = 0; 1556 1551 response[1] = 0; 1557 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2); 1552 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2); 1558 1553 } 1559 1554 1560 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0); 1561 - intel_sdvo_read_response(intel_output, &response, 2); 1555 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0); 1556 + intel_sdvo_read_response(intel_encoder, &response, 2); 1562 1557 } 1563 1558 1564 1559 static bool 1565 - intel_sdvo_multifunc_encoder(struct intel_output *intel_output) 1560 + intel_sdvo_multifunc_encoder(struct intel_encoder *intel_encoder) 1566 1561 { 1567 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1562 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1568 1563 int caps = 0; 1569 1564 1570 1565 if (sdvo_priv->caps.output_flags & ··· 1598 1593 intel_find_analog_connector(struct drm_device *dev) 1599 1594 { 1600 1595 struct drm_connector *connector; 1601 - struct intel_output *intel_output; 1596 + struct intel_encoder *intel_encoder; 1602 1597 1603 1598 list_for_each_entry(connector, &dev->mode_config.connector_list, head) 
{ 1604 - intel_output = to_intel_output(connector); 1605 - if (intel_output->type == INTEL_OUTPUT_ANALOG) 1599 + intel_encoder = to_intel_encoder(connector); 1600 + if (intel_encoder->type == INTEL_OUTPUT_ANALOG) 1606 1601 return connector; 1607 1602 } 1608 1603 return NULL; ··· 1627 1622 enum drm_connector_status 1628 1623 intel_sdvo_hdmi_sink_detect(struct drm_connector *connector, u16 response) 1629 1624 { 1630 - struct intel_output *intel_output = to_intel_output(connector); 1631 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1625 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1626 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1632 1627 enum drm_connector_status status = connector_status_connected; 1633 1628 struct edid *edid = NULL; 1634 1629 1635 - edid = drm_get_edid(&intel_output->base, 1636 - intel_output->ddc_bus); 1630 + edid = drm_get_edid(&intel_encoder->base, 1631 + intel_encoder->ddc_bus); 1637 1632 1638 1633 /* This is only applied to SDVO cards with multiple outputs */ 1639 - if (edid == NULL && intel_sdvo_multifunc_encoder(intel_output)) { 1634 + if (edid == NULL && intel_sdvo_multifunc_encoder(intel_encoder)) { 1640 1635 uint8_t saved_ddc, temp_ddc; 1641 1636 saved_ddc = sdvo_priv->ddc_bus; 1642 1637 temp_ddc = sdvo_priv->ddc_bus >> 1; ··· 1646 1641 */ 1647 1642 while(temp_ddc > 1) { 1648 1643 sdvo_priv->ddc_bus = temp_ddc; 1649 - edid = drm_get_edid(&intel_output->base, 1650 - intel_output->ddc_bus); 1644 + edid = drm_get_edid(&intel_encoder->base, 1645 + intel_encoder->ddc_bus); 1651 1646 if (edid) { 1652 1647 /* 1653 1648 * When we can get the EDID, maybe it is the ··· 1666 1661 */ 1667 1662 if (edid == NULL && 1668 1663 sdvo_priv->analog_ddc_bus && 1669 - !intel_analog_is_connected(intel_output->base.dev)) 1670 - edid = drm_get_edid(&intel_output->base, 1664 + !intel_analog_is_connected(intel_encoder->base.dev)) 1665 + edid = drm_get_edid(&intel_encoder->base, 1671 1666 
sdvo_priv->analog_ddc_bus); 1672 1667 if (edid != NULL) { 1673 1668 /* Don't report the output as connected if it's a DVI-I ··· 1682 1677 } 1683 1678 1684 1679 kfree(edid); 1685 - intel_output->base.display_info.raw_edid = NULL; 1680 + intel_encoder->base.display_info.raw_edid = NULL; 1686 1681 1687 1682 } else if (response & (SDVO_OUTPUT_TMDS0 | SDVO_OUTPUT_TMDS1)) 1688 1683 status = connector_status_disconnected; ··· 1694 1689 { 1695 1690 uint16_t response; 1696 1691 u8 status; 1697 - struct intel_output *intel_output = to_intel_output(connector); 1698 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1692 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1693 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1699 1694 1700 - intel_sdvo_write_cmd(intel_output, 1695 + intel_sdvo_write_cmd(intel_encoder, 1701 1696 SDVO_CMD_GET_ATTACHED_DISPLAYS, NULL, 0); 1702 1697 if (sdvo_priv->is_tv) { 1703 1698 /* add 30ms delay when the output type is SDVO-TV */ 1704 1699 mdelay(30); 1705 1700 } 1706 - status = intel_sdvo_read_response(intel_output, &response, 2); 1701 + status = intel_sdvo_read_response(intel_encoder, &response, 2); 1707 1702 1708 1703 DRM_DEBUG_KMS("SDVO response %d %d\n", response & 0xff, response >> 8); 1709 1704 ··· 1713 1708 if (response == 0) 1714 1709 return connector_status_disconnected; 1715 1710 1716 - if (intel_sdvo_multifunc_encoder(intel_output) && 1711 + if (intel_sdvo_multifunc_encoder(intel_encoder) && 1717 1712 sdvo_priv->attached_output != response) { 1718 1713 if (sdvo_priv->controlled_output != response && 1719 - intel_sdvo_output_setup(intel_output, response) != true) 1714 + intel_sdvo_output_setup(intel_encoder, response) != true) 1720 1715 return connector_status_unknown; 1721 1716 sdvo_priv->attached_output = response; 1722 1717 } ··· 1725 1720 1726 1721 static void intel_sdvo_get_ddc_modes(struct drm_connector *connector) 1727 1722 { 1728 - struct intel_output *intel_output = 
to_intel_output(connector); 1729 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1723 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1724 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1730 1725 int num_modes; 1731 1726 1732 1727 /* set the bus switch and get the modes */ 1733 - num_modes = intel_ddc_get_modes(intel_output); 1728 + num_modes = intel_ddc_get_modes(intel_encoder); 1734 1729 1735 1730 /* 1736 1731 * Mac mini hack. On this device, the DVI-I connector shares one DDC ··· 1740 1735 */ 1741 1736 if (num_modes == 0 && 1742 1737 sdvo_priv->analog_ddc_bus && 1743 - !intel_analog_is_connected(intel_output->base.dev)) { 1738 + !intel_analog_is_connected(intel_encoder->base.dev)) { 1744 1739 struct i2c_adapter *digital_ddc_bus; 1745 1740 1746 1741 /* Switch to the analog ddc bus and try that 1747 1742 */ 1748 - digital_ddc_bus = intel_output->ddc_bus; 1749 - intel_output->ddc_bus = sdvo_priv->analog_ddc_bus; 1743 + digital_ddc_bus = intel_encoder->ddc_bus; 1744 + intel_encoder->ddc_bus = sdvo_priv->analog_ddc_bus; 1750 1745 1751 - (void) intel_ddc_get_modes(intel_output); 1746 + (void) intel_ddc_get_modes(intel_encoder); 1752 1747 1753 - intel_output->ddc_bus = digital_ddc_bus; 1748 + intel_encoder->ddc_bus = digital_ddc_bus; 1754 1749 } 1755 1750 } 1756 1751 ··· 1821 1816 1822 1817 static void intel_sdvo_get_tv_modes(struct drm_connector *connector) 1823 1818 { 1824 - struct intel_output *output = to_intel_output(connector); 1819 + struct intel_encoder *output = to_intel_encoder(connector); 1825 1820 struct intel_sdvo_priv *sdvo_priv = output->dev_priv; 1826 1821 struct intel_sdvo_sdtv_resolution_request tv_res; 1827 1822 uint32_t reply = 0, format_map = 0; ··· 1863 1858 1864 1859 static void intel_sdvo_get_lvds_modes(struct drm_connector *connector) 1865 1860 { 1866 - struct intel_output *intel_output = to_intel_output(connector); 1861 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1867 
1862 struct drm_i915_private *dev_priv = connector->dev->dev_private; 1868 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1863 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1869 1864 struct drm_display_mode *newmode; 1870 1865 1871 1866 /* ··· 1873 1868 * Assume that the preferred modes are 1874 1869 * arranged in priority order. 1875 1870 */ 1876 - intel_ddc_get_modes(intel_output); 1871 + intel_ddc_get_modes(intel_encoder); 1877 1872 if (list_empty(&connector->probed_modes) == false) 1878 1873 goto end; 1879 1874 ··· 1902 1897 1903 1898 static int intel_sdvo_get_modes(struct drm_connector *connector) 1904 1899 { 1905 - struct intel_output *output = to_intel_output(connector); 1900 + struct intel_encoder *output = to_intel_encoder(connector); 1906 1901 struct intel_sdvo_priv *sdvo_priv = output->dev_priv; 1907 1902 1908 1903 if (sdvo_priv->is_tv) ··· 1920 1915 static 1921 1916 void intel_sdvo_destroy_enhance_property(struct drm_connector *connector) 1922 1917 { 1923 - struct intel_output *intel_output = to_intel_output(connector); 1924 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1918 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1919 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1925 1920 struct drm_device *dev = connector->dev; 1926 1921 1927 1922 if (sdvo_priv->is_tv) { ··· 1958 1953 1959 1954 static void intel_sdvo_destroy(struct drm_connector *connector) 1960 1955 { 1961 - struct intel_output *intel_output = to_intel_output(connector); 1962 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1956 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1957 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1963 1958 1964 - if (intel_output->i2c_bus) 1965 - intel_i2c_destroy(intel_output->i2c_bus); 1966 - if (intel_output->ddc_bus) 1967 - intel_i2c_destroy(intel_output->ddc_bus); 1959 + if (intel_encoder->i2c_bus) 1960 + 
intel_i2c_destroy(intel_encoder->i2c_bus); 1961 + if (intel_encoder->ddc_bus) 1962 + intel_i2c_destroy(intel_encoder->ddc_bus); 1968 1963 if (sdvo_priv->analog_ddc_bus) 1969 1964 intel_i2c_destroy(sdvo_priv->analog_ddc_bus); 1970 1965 ··· 1982 1977 drm_sysfs_connector_remove(connector); 1983 1978 drm_connector_cleanup(connector); 1984 1979 1985 - kfree(intel_output); 1980 + kfree(intel_encoder); 1986 1981 } 1987 1982 1988 1983 static int ··· 1990 1985 struct drm_property *property, 1991 1986 uint64_t val) 1992 1987 { 1993 - struct intel_output *intel_output = to_intel_output(connector); 1994 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 1995 - struct drm_encoder *encoder = &intel_output->enc; 1988 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1989 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 1990 + struct drm_encoder *encoder = &intel_encoder->enc; 1996 1991 struct drm_crtc *crtc = encoder->crtc; 1997 1992 int ret = 0; 1998 1993 bool changed = false; ··· 2100 2095 sdvo_priv->cur_brightness = temp_value; 2101 2096 } 2102 2097 if (cmd) { 2103 - intel_sdvo_write_cmd(intel_output, cmd, &temp_value, 2); 2104 - status = intel_sdvo_read_response(intel_output, 2098 + intel_sdvo_write_cmd(intel_encoder, cmd, &temp_value, 2); 2099 + status = intel_sdvo_read_response(intel_encoder, 2105 2100 NULL, 0); 2106 2101 if (status != SDVO_CMD_STATUS_SUCCESS) { 2107 2102 DRM_DEBUG_KMS("Incorrect SDVO command \n"); ··· 2196 2191 } 2197 2192 2198 2193 static bool 2199 - intel_sdvo_get_digital_encoding_mode(struct intel_output *output) 2194 + intel_sdvo_get_digital_encoding_mode(struct intel_encoder *output) 2200 2195 { 2201 2196 struct intel_sdvo_priv *sdvo_priv = output->dev_priv; 2202 2197 uint8_t status; ··· 2210 2205 return true; 2211 2206 } 2212 2207 2213 - static struct intel_output * 2214 - intel_sdvo_chan_to_intel_output(struct intel_i2c_chan *chan) 2208 + static struct intel_encoder * 2209 + 
intel_sdvo_chan_to_intel_encoder(struct intel_i2c_chan *chan) 2215 2210 { 2216 2211 struct drm_device *dev = chan->drm_dev; 2217 2212 struct drm_connector *connector; 2218 - struct intel_output *intel_output = NULL; 2213 + struct intel_encoder *intel_encoder = NULL; 2219 2214 2220 2215 list_for_each_entry(connector, 2221 2216 &dev->mode_config.connector_list, head) { 2222 - if (to_intel_output(connector)->ddc_bus == &chan->adapter) { 2223 - intel_output = to_intel_output(connector); 2217 + if (to_intel_encoder(connector)->ddc_bus == &chan->adapter) { 2218 + intel_encoder = to_intel_encoder(connector); 2224 2219 break; 2225 2220 } 2226 2221 } 2227 - return intel_output; 2222 + return intel_encoder; 2228 2223 } 2229 2224 2230 2225 static int intel_sdvo_master_xfer(struct i2c_adapter *i2c_adap, 2231 2226 struct i2c_msg msgs[], int num) 2232 2227 { 2233 - struct intel_output *intel_output; 2228 + struct intel_encoder *intel_encoder; 2234 2229 struct intel_sdvo_priv *sdvo_priv; 2235 2230 struct i2c_algo_bit_data *algo_data; 2236 2231 const struct i2c_algorithm *algo; 2237 2232 2238 2233 algo_data = (struct i2c_algo_bit_data *)i2c_adap->algo_data; 2239 - intel_output = 2240 - intel_sdvo_chan_to_intel_output( 2234 + intel_encoder = 2235 + intel_sdvo_chan_to_intel_encoder( 2241 2236 (struct intel_i2c_chan *)(algo_data->data)); 2242 - if (intel_output == NULL) 2237 + if (intel_encoder == NULL) 2243 2238 return -EINVAL; 2244 2239 2245 - sdvo_priv = intel_output->dev_priv; 2246 - algo = intel_output->i2c_bus->algo; 2240 + sdvo_priv = intel_encoder->dev_priv; 2241 + algo = intel_encoder->i2c_bus->algo; 2247 2242 2248 - intel_sdvo_set_control_bus_switch(intel_output, sdvo_priv->ddc_bus); 2243 + intel_sdvo_set_control_bus_switch(intel_encoder, sdvo_priv->ddc_bus); 2249 2244 return algo->master_xfer(i2c_adap, msgs, num); 2250 2245 } 2251 2246 ··· 2254 2249 }; 2255 2250 2256 2251 static u8 2257 - intel_sdvo_get_slave_addr(struct drm_device *dev, int output_device) 2252 + 
intel_sdvo_get_slave_addr(struct drm_device *dev, int sdvo_reg) 2258 2253 { 2259 2254 struct drm_i915_private *dev_priv = dev->dev_private; 2260 2255 struct sdvo_device_mapping *my_mapping, *other_mapping; 2261 2256 2262 - if (output_device == SDVOB) { 2257 + if (sdvo_reg == SDVOB) { 2263 2258 my_mapping = &dev_priv->sdvo_mappings[0]; 2264 2259 other_mapping = &dev_priv->sdvo_mappings[1]; 2265 2260 } else { ··· 2284 2279 /* No SDVO device info is found for another DVO port, 2285 2280 * so use mapping assumption we had before BIOS parsing. 2286 2281 */ 2287 - if (output_device == SDVOB) 2282 + if (sdvo_reg == SDVOB) 2288 2283 return 0x70; 2289 2284 else 2290 2285 return 0x72; ··· 2310 2305 }; 2311 2306 2312 2307 static bool 2313 - intel_sdvo_output_setup(struct intel_output *intel_output, uint16_t flags) 2308 + intel_sdvo_output_setup(struct intel_encoder *intel_encoder, uint16_t flags) 2314 2309 { 2315 - struct drm_connector *connector = &intel_output->base; 2316 - struct drm_encoder *encoder = &intel_output->enc; 2317 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 2310 + struct drm_connector *connector = &intel_encoder->base; 2311 + struct drm_encoder *encoder = &intel_encoder->enc; 2312 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 2318 2313 bool ret = true, registered = false; 2319 2314 2320 2315 sdvo_priv->is_tv = false; 2321 - intel_output->needs_tv_clock = false; 2316 + intel_encoder->needs_tv_clock = false; 2322 2317 sdvo_priv->is_lvds = false; 2323 2318 2324 2319 if (device_is_registered(&connector->kdev)) { ··· 2336 2331 encoder->encoder_type = DRM_MODE_ENCODER_TMDS; 2337 2332 connector->connector_type = DRM_MODE_CONNECTOR_DVID; 2338 2333 2339 - if (intel_sdvo_get_supp_encode(intel_output, 2334 + if (intel_sdvo_get_supp_encode(intel_encoder, 2340 2335 &sdvo_priv->encode) && 2341 - intel_sdvo_get_digital_encoding_mode(intel_output) && 2336 + intel_sdvo_get_digital_encoding_mode(intel_encoder) && 2342 2337 sdvo_priv->is_hdmi) 
{ 2343 2338 /* enable hdmi encoding mode if supported */ 2344 - intel_sdvo_set_encode(intel_output, SDVO_ENCODE_HDMI); 2345 - intel_sdvo_set_colorimetry(intel_output, 2339 + intel_sdvo_set_encode(intel_encoder, SDVO_ENCODE_HDMI); 2340 + intel_sdvo_set_colorimetry(intel_encoder, 2346 2341 SDVO_COLORIMETRY_RGB256); 2347 2342 connector->connector_type = DRM_MODE_CONNECTOR_HDMIA; 2348 - intel_output->clone_mask = 2343 + intel_encoder->clone_mask = 2349 2344 (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 2350 2345 (1 << INTEL_ANALOG_CLONE_BIT); 2351 2346 } ··· 2356 2351 encoder->encoder_type = DRM_MODE_ENCODER_TVDAC; 2357 2352 connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO; 2358 2353 sdvo_priv->is_tv = true; 2359 - intel_output->needs_tv_clock = true; 2360 - intel_output->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT; 2354 + intel_encoder->needs_tv_clock = true; 2355 + intel_encoder->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT; 2361 2356 } else if (flags & SDVO_OUTPUT_RGB0) { 2362 2357 2363 2358 sdvo_priv->controlled_output = SDVO_OUTPUT_RGB0; 2364 2359 encoder->encoder_type = DRM_MODE_ENCODER_DAC; 2365 2360 connector->connector_type = DRM_MODE_CONNECTOR_VGA; 2366 - intel_output->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 2361 + intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 2367 2362 (1 << INTEL_ANALOG_CLONE_BIT); 2368 2363 } else if (flags & SDVO_OUTPUT_RGB1) { 2369 2364 2370 2365 sdvo_priv->controlled_output = SDVO_OUTPUT_RGB1; 2371 2366 encoder->encoder_type = DRM_MODE_ENCODER_DAC; 2372 2367 connector->connector_type = DRM_MODE_CONNECTOR_VGA; 2373 - intel_output->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 2368 + intel_encoder->clone_mask = (1 << INTEL_SDVO_NON_TV_CLONE_BIT) | 2374 2369 (1 << INTEL_ANALOG_CLONE_BIT); 2375 2370 } else if (flags & SDVO_OUTPUT_CVBS0) { 2376 2371 ··· 2378 2373 encoder->encoder_type = DRM_MODE_ENCODER_TVDAC; 2379 2374 connector->connector_type = DRM_MODE_CONNECTOR_SVIDEO; 2380 2375 sdvo_priv->is_tv = true; 2381 - 
intel_output->needs_tv_clock = true; 2382 - intel_output->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT; 2376 + intel_encoder->needs_tv_clock = true; 2377 + intel_encoder->clone_mask = 1 << INTEL_SDVO_TV_CLONE_BIT; 2383 2378 } else if (flags & SDVO_OUTPUT_LVDS0) { 2384 2379 2385 2380 sdvo_priv->controlled_output = SDVO_OUTPUT_LVDS0; 2386 2381 encoder->encoder_type = DRM_MODE_ENCODER_LVDS; 2387 2382 connector->connector_type = DRM_MODE_CONNECTOR_LVDS; 2388 2383 sdvo_priv->is_lvds = true; 2389 - intel_output->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) | 2384 + intel_encoder->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) | 2390 2385 (1 << INTEL_SDVO_LVDS_CLONE_BIT); 2391 2386 } else if (flags & SDVO_OUTPUT_LVDS1) { 2392 2387 ··· 2394 2389 encoder->encoder_type = DRM_MODE_ENCODER_LVDS; 2395 2390 connector->connector_type = DRM_MODE_CONNECTOR_LVDS; 2396 2391 sdvo_priv->is_lvds = true; 2397 - intel_output->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) | 2392 + intel_encoder->clone_mask = (1 << INTEL_ANALOG_CLONE_BIT) | 2398 2393 (1 << INTEL_SDVO_LVDS_CLONE_BIT); 2399 2394 } else { 2400 2395 ··· 2407 2402 bytes[0], bytes[1]); 2408 2403 ret = false; 2409 2404 } 2410 - intel_output->crtc_mask = (1 << 0) | (1 << 1); 2405 + intel_encoder->crtc_mask = (1 << 0) | (1 << 1); 2411 2406 2412 2407 if (ret && registered) 2413 2408 ret = drm_sysfs_connector_add(connector) == 0 ? 
true : false; ··· 2419 2414 2420 2415 static void intel_sdvo_tv_create_property(struct drm_connector *connector) 2421 2416 { 2422 - struct intel_output *intel_output = to_intel_output(connector); 2423 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 2417 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 2418 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 2424 2419 struct intel_sdvo_tv_format format; 2425 2420 uint32_t format_map, i; 2426 2421 uint8_t status; 2427 2422 2428 - intel_sdvo_set_target_output(intel_output, 2423 + intel_sdvo_set_target_output(intel_encoder, 2429 2424 sdvo_priv->controlled_output); 2430 2425 2431 - intel_sdvo_write_cmd(intel_output, 2426 + intel_sdvo_write_cmd(intel_encoder, 2432 2427 SDVO_CMD_GET_SUPPORTED_TV_FORMATS, NULL, 0); 2433 - status = intel_sdvo_read_response(intel_output, 2428 + status = intel_sdvo_read_response(intel_encoder, 2434 2429 &format, sizeof(format)); 2435 2430 if (status != SDVO_CMD_STATUS_SUCCESS) 2436 2431 return; ··· 2468 2463 2469 2464 static void intel_sdvo_create_enhance_property(struct drm_connector *connector) 2470 2465 { 2471 - struct intel_output *intel_output = to_intel_output(connector); 2472 - struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv; 2466 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 2467 + struct intel_sdvo_priv *sdvo_priv = intel_encoder->dev_priv; 2473 2468 struct intel_sdvo_enhancements_reply sdvo_data; 2474 2469 struct drm_device *dev = connector->dev; 2475 2470 uint8_t status; 2476 2471 uint16_t response, data_value[2]; 2477 2472 2478 - intel_sdvo_write_cmd(intel_output, SDVO_CMD_GET_SUPPORTED_ENHANCEMENTS, 2473 + intel_sdvo_write_cmd(intel_encoder, SDVO_CMD_GET_SUPPORTED_ENHANCEMENTS, 2479 2474 NULL, 0); 2480 - status = intel_sdvo_read_response(intel_output, &sdvo_data, 2475 + status = intel_sdvo_read_response(intel_encoder, &sdvo_data, 2481 2476 sizeof(sdvo_data)); 2482 2477 if (status != 
SDVO_CMD_STATUS_SUCCESS) { 2483 2478 DRM_DEBUG_KMS(" incorrect response is returned\n"); ··· 2493 2488 * property 2494 2489 */ 2495 2490 if (sdvo_data.overscan_h) { 2496 - intel_sdvo_write_cmd(intel_output, 2491 + intel_sdvo_write_cmd(intel_encoder, 2497 2492 SDVO_CMD_GET_MAX_OVERSCAN_H, NULL, 0); 2498 - status = intel_sdvo_read_response(intel_output, 2493 + status = intel_sdvo_read_response(intel_encoder, 2499 2494 &data_value, 4); 2500 2495 if (status != SDVO_CMD_STATUS_SUCCESS) { 2501 2496 DRM_DEBUG_KMS("Incorrect SDVO max " 2502 2497 "h_overscan\n"); 2503 2498 return; 2504 2499 } 2505 - intel_sdvo_write_cmd(intel_output, 2500 + intel_sdvo_write_cmd(intel_encoder, 2506 2501 SDVO_CMD_GET_OVERSCAN_H, NULL, 0); 2507 - status = intel_sdvo_read_response(intel_output, 2502 + status = intel_sdvo_read_response(intel_encoder, 2508 2503 &response, 2); 2509 2504 if (status != SDVO_CMD_STATUS_SUCCESS) { 2510 2505 DRM_DEBUG_KMS("Incorrect SDVO h_overscan\n"); ··· 2534 2529 data_value[0], data_value[1], response); 2535 2530 } 2536 2531 if (sdvo_data.overscan_v) { 2537 - intel_sdvo_write_cmd(intel_output, 2532 + intel_sdvo_write_cmd(intel_encoder, 2538 2533 SDVO_CMD_GET_MAX_OVERSCAN_V, NULL, 0); 2539 - status = intel_sdvo_read_response(intel_output, 2534 + status = intel_sdvo_read_response(intel_encoder, 2540 2535 &data_value, 4); 2541 2536 if (status != SDVO_CMD_STATUS_SUCCESS) { 2542 2537 DRM_DEBUG_KMS("Incorrect SDVO max " 2543 2538 "v_overscan\n"); 2544 2539 return; 2545 2540 } 2546 - intel_sdvo_write_cmd(intel_output, 2541 + intel_sdvo_write_cmd(intel_encoder, 2547 2542 SDVO_CMD_GET_OVERSCAN_V, NULL, 0); 2548 - status = intel_sdvo_read_response(intel_output, 2543 + status = intel_sdvo_read_response(intel_encoder, 2549 2544 &response, 2); 2550 2545 if (status != SDVO_CMD_STATUS_SUCCESS) { 2551 2546 DRM_DEBUG_KMS("Incorrect SDVO v_overscan\n"); ··· 2575 2570 data_value[0], data_value[1], response); 2576 2571 } 2577 2572 if (sdvo_data.position_h) { 2578 - 
intel_sdvo_write_cmd(intel_output, 2573 + intel_sdvo_write_cmd(intel_encoder, 2579 2574 SDVO_CMD_GET_MAX_POSITION_H, NULL, 0); 2580 - status = intel_sdvo_read_response(intel_output, 2575 + status = intel_sdvo_read_response(intel_encoder, 2581 2576 &data_value, 4); 2582 2577 if (status != SDVO_CMD_STATUS_SUCCESS) { 2583 2578 DRM_DEBUG_KMS("Incorrect SDVO Max h_pos\n"); 2584 2579 return; 2585 2580 } 2586 - intel_sdvo_write_cmd(intel_output, 2581 + intel_sdvo_write_cmd(intel_encoder, 2587 2582 SDVO_CMD_GET_POSITION_H, NULL, 0); 2588 - status = intel_sdvo_read_response(intel_output, 2583 + status = intel_sdvo_read_response(intel_encoder, 2589 2584 &response, 2); 2590 2585 if (status != SDVO_CMD_STATUS_SUCCESS) { 2591 2586 DRM_DEBUG_KMS("Incorrect SDVO get h_postion\n"); ··· 2606 2601 data_value[0], data_value[1], response); 2607 2602 } 2608 2603 if (sdvo_data.position_v) { 2609 - intel_sdvo_write_cmd(intel_output, 2604 + intel_sdvo_write_cmd(intel_encoder, 2610 2605 SDVO_CMD_GET_MAX_POSITION_V, NULL, 0); 2611 - status = intel_sdvo_read_response(intel_output, 2606 + status = intel_sdvo_read_response(intel_encoder, 2612 2607 &data_value, 4); 2613 2608 if (status != SDVO_CMD_STATUS_SUCCESS) { 2614 2609 DRM_DEBUG_KMS("Incorrect SDVO Max v_pos\n"); 2615 2610 return; 2616 2611 } 2617 - intel_sdvo_write_cmd(intel_output, 2612 + intel_sdvo_write_cmd(intel_encoder, 2618 2613 SDVO_CMD_GET_POSITION_V, NULL, 0); 2619 - status = intel_sdvo_read_response(intel_output, 2614 + status = intel_sdvo_read_response(intel_encoder, 2620 2615 &response, 2); 2621 2616 if (status != SDVO_CMD_STATUS_SUCCESS) { 2622 2617 DRM_DEBUG_KMS("Incorrect SDVO get v_postion\n"); ··· 2639 2634 } 2640 2635 if (sdvo_priv->is_tv) { 2641 2636 if (sdvo_data.saturation) { 2642 - intel_sdvo_write_cmd(intel_output, 2637 + intel_sdvo_write_cmd(intel_encoder, 2643 2638 SDVO_CMD_GET_MAX_SATURATION, NULL, 0); 2644 - status = intel_sdvo_read_response(intel_output, 2639 + status = intel_sdvo_read_response(intel_encoder, 
2645 2640 &data_value, 4); 2646 2641 if (status != SDVO_CMD_STATUS_SUCCESS) { 2647 2642 DRM_DEBUG_KMS("Incorrect SDVO Max sat\n"); 2648 2643 return; 2649 2644 } 2650 - intel_sdvo_write_cmd(intel_output, 2645 + intel_sdvo_write_cmd(intel_encoder, 2651 2646 SDVO_CMD_GET_SATURATION, NULL, 0); 2652 - status = intel_sdvo_read_response(intel_output, 2647 + status = intel_sdvo_read_response(intel_encoder, 2653 2648 &response, 2); 2654 2649 if (status != SDVO_CMD_STATUS_SUCCESS) { 2655 2650 DRM_DEBUG_KMS("Incorrect SDVO get sat\n"); ··· 2671 2666 data_value[0], data_value[1], response); 2672 2667 } 2673 2668 if (sdvo_data.contrast) { 2674 - intel_sdvo_write_cmd(intel_output, 2669 + intel_sdvo_write_cmd(intel_encoder, 2675 2670 SDVO_CMD_GET_MAX_CONTRAST, NULL, 0); 2676 - status = intel_sdvo_read_response(intel_output, 2671 + status = intel_sdvo_read_response(intel_encoder, 2677 2672 &data_value, 4); 2678 2673 if (status != SDVO_CMD_STATUS_SUCCESS) { 2679 2674 DRM_DEBUG_KMS("Incorrect SDVO Max contrast\n"); 2680 2675 return; 2681 2676 } 2682 - intel_sdvo_write_cmd(intel_output, 2677 + intel_sdvo_write_cmd(intel_encoder, 2683 2678 SDVO_CMD_GET_CONTRAST, NULL, 0); 2684 - status = intel_sdvo_read_response(intel_output, 2679 + status = intel_sdvo_read_response(intel_encoder, 2685 2680 &response, 2); 2686 2681 if (status != SDVO_CMD_STATUS_SUCCESS) { 2687 2682 DRM_DEBUG_KMS("Incorrect SDVO get contrast\n"); ··· 2702 2697 data_value[0], data_value[1], response); 2703 2698 } 2704 2699 if (sdvo_data.hue) { 2705 - intel_sdvo_write_cmd(intel_output, 2700 + intel_sdvo_write_cmd(intel_encoder, 2706 2701 SDVO_CMD_GET_MAX_HUE, NULL, 0); 2707 - status = intel_sdvo_read_response(intel_output, 2702 + status = intel_sdvo_read_response(intel_encoder, 2708 2703 &data_value, 4); 2709 2704 if (status != SDVO_CMD_STATUS_SUCCESS) { 2710 2705 DRM_DEBUG_KMS("Incorrect SDVO Max hue\n"); 2711 2706 return; 2712 2707 } 2713 - intel_sdvo_write_cmd(intel_output, 2708 + intel_sdvo_write_cmd(intel_encoder, 
2714 2709 SDVO_CMD_GET_HUE, NULL, 0); 2715 - status = intel_sdvo_read_response(intel_output, 2710 + status = intel_sdvo_read_response(intel_encoder, 2716 2711 &response, 2); 2717 2712 if (status != SDVO_CMD_STATUS_SUCCESS) { 2718 2713 DRM_DEBUG_KMS("Incorrect SDVO get hue\n"); ··· 2735 2730 } 2736 2731 if (sdvo_priv->is_tv || sdvo_priv->is_lvds) { 2737 2732 if (sdvo_data.brightness) { 2738 - intel_sdvo_write_cmd(intel_output, 2733 + intel_sdvo_write_cmd(intel_encoder, 2739 2734 SDVO_CMD_GET_MAX_BRIGHTNESS, NULL, 0); 2740 - status = intel_sdvo_read_response(intel_output, 2735 + status = intel_sdvo_read_response(intel_encoder, 2741 2736 &data_value, 4); 2742 2737 if (status != SDVO_CMD_STATUS_SUCCESS) { 2743 2738 DRM_DEBUG_KMS("Incorrect SDVO Max bright\n"); 2744 2739 return; 2745 2740 } 2746 - intel_sdvo_write_cmd(intel_output, 2741 + intel_sdvo_write_cmd(intel_encoder, 2747 2742 SDVO_CMD_GET_BRIGHTNESS, NULL, 0); 2748 - status = intel_sdvo_read_response(intel_output, 2743 + status = intel_sdvo_read_response(intel_encoder, 2749 2744 &response, 2); 2750 2745 if (status != SDVO_CMD_STATUS_SUCCESS) { 2751 2746 DRM_DEBUG_KMS("Incorrect SDVO get brigh\n"); ··· 2770 2765 return; 2771 2766 } 2772 2767 2773 - bool intel_sdvo_init(struct drm_device *dev, int output_device) 2768 + bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg) 2774 2769 { 2775 2770 struct drm_i915_private *dev_priv = dev->dev_private; 2776 2771 struct drm_connector *connector; 2777 - struct intel_output *intel_output; 2772 + struct intel_encoder *intel_encoder; 2778 2773 struct intel_sdvo_priv *sdvo_priv; 2779 2774 2780 2775 u8 ch[0x40]; 2781 2776 int i; 2782 2777 2783 - intel_output = kcalloc(sizeof(struct intel_output)+sizeof(struct intel_sdvo_priv), 1, GFP_KERNEL); 2784 - if (!intel_output) { 2778 + intel_encoder = kcalloc(sizeof(struct intel_encoder)+sizeof(struct intel_sdvo_priv), 1, GFP_KERNEL); 2779 + if (!intel_encoder) { 2785 2780 return false; 2786 2781 } 2787 2782 2788 - sdvo_priv = 
(struct intel_sdvo_priv *)(intel_output + 1); 2789 - sdvo_priv->output_device = output_device; 2783 + sdvo_priv = (struct intel_sdvo_priv *)(intel_encoder + 1); 2784 + sdvo_priv->sdvo_reg = sdvo_reg; 2790 2785 2791 - intel_output->dev_priv = sdvo_priv; 2792 - intel_output->type = INTEL_OUTPUT_SDVO; 2786 + intel_encoder->dev_priv = sdvo_priv; 2787 + intel_encoder->type = INTEL_OUTPUT_SDVO; 2793 2788 2794 2789 /* setup the DDC bus. */ 2795 - if (output_device == SDVOB) 2796 - intel_output->i2c_bus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOB"); 2790 + if (sdvo_reg == SDVOB) 2791 + intel_encoder->i2c_bus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOB"); 2797 2792 else 2798 - intel_output->i2c_bus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOC"); 2793 + intel_encoder->i2c_bus = intel_i2c_create(dev, GPIOE, "SDVOCTRL_E for SDVOC"); 2799 2794 2800 - if (!intel_output->i2c_bus) 2795 + if (!intel_encoder->i2c_bus) 2801 2796 goto err_inteloutput; 2802 2797 2803 - sdvo_priv->slave_addr = intel_sdvo_get_slave_addr(dev, output_device); 2798 + sdvo_priv->slave_addr = intel_sdvo_get_slave_addr(dev, sdvo_reg); 2804 2799 2805 2800 /* Save the bit-banging i2c functionality for use by the DDC wrapper */ 2806 - intel_sdvo_i2c_bit_algo.functionality = intel_output->i2c_bus->algo->functionality; 2801 + intel_sdvo_i2c_bit_algo.functionality = intel_encoder->i2c_bus->algo->functionality; 2807 2802 2808 2803 /* Read the regs to test if we can talk to the device */ 2809 2804 for (i = 0; i < 0x40; i++) { 2810 - if (!intel_sdvo_read_byte(intel_output, i, &ch[i])) { 2805 + if (!intel_sdvo_read_byte(intel_encoder, i, &ch[i])) { 2811 2806 DRM_DEBUG_KMS("No SDVO device found on SDVO%c\n", 2812 - output_device == SDVOB ? 'B' : 'C'); 2807 + sdvo_reg == SDVOB ? 'B' : 'C'); 2813 2808 goto err_i2c; 2814 2809 } 2815 2810 } 2816 2811 2817 2812 /* setup the DDC bus. 
*/ 2818 - if (output_device == SDVOB) { 2819 - intel_output->ddc_bus = intel_i2c_create(dev, GPIOE, "SDVOB DDC BUS"); 2813 + if (sdvo_reg == SDVOB) { 2814 + intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOE, "SDVOB DDC BUS"); 2820 2815 sdvo_priv->analog_ddc_bus = intel_i2c_create(dev, GPIOA, 2821 2816 "SDVOB/VGA DDC BUS"); 2822 2817 dev_priv->hotplug_supported_mask |= SDVOB_HOTPLUG_INT_STATUS; 2823 2818 } else { 2824 - intel_output->ddc_bus = intel_i2c_create(dev, GPIOE, "SDVOC DDC BUS"); 2819 + intel_encoder->ddc_bus = intel_i2c_create(dev, GPIOE, "SDVOC DDC BUS"); 2825 2820 sdvo_priv->analog_ddc_bus = intel_i2c_create(dev, GPIOA, 2826 2821 "SDVOC/VGA DDC BUS"); 2827 2822 dev_priv->hotplug_supported_mask |= SDVOC_HOTPLUG_INT_STATUS; 2828 2823 } 2829 2824 2830 - if (intel_output->ddc_bus == NULL) 2825 + if (intel_encoder->ddc_bus == NULL) 2831 2826 goto err_i2c; 2832 2827 2833 2828 /* Wrap with our custom algo which switches to DDC mode */ 2834 - intel_output->ddc_bus->algo = &intel_sdvo_i2c_bit_algo; 2829 + intel_encoder->ddc_bus->algo = &intel_sdvo_i2c_bit_algo; 2835 2830 2836 2831 /* In default case sdvo lvds is false */ 2837 - intel_sdvo_get_capabilities(intel_output, &sdvo_priv->caps); 2832 + intel_sdvo_get_capabilities(intel_encoder, &sdvo_priv->caps); 2838 2833 2839 - if (intel_sdvo_output_setup(intel_output, 2834 + if (intel_sdvo_output_setup(intel_encoder, 2840 2835 sdvo_priv->caps.output_flags) != true) { 2841 2836 DRM_DEBUG_KMS("SDVO output failed to setup on SDVO%c\n", 2842 - output_device == SDVOB ? 'B' : 'C'); 2837 + sdvo_reg == SDVOB ? 
'B' : 'C'); 2843 2838 goto err_i2c; 2844 2839 } 2845 2840 2846 2841 2847 - connector = &intel_output->base; 2842 + connector = &intel_encoder->base; 2848 2843 drm_connector_init(dev, connector, &intel_sdvo_connector_funcs, 2849 2844 connector->connector_type); 2850 2845 ··· 2853 2848 connector->doublescan_allowed = 0; 2854 2849 connector->display_info.subpixel_order = SubPixelHorizontalRGB; 2855 2850 2856 - drm_encoder_init(dev, &intel_output->enc, 2857 - &intel_sdvo_enc_funcs, intel_output->enc.encoder_type); 2851 + drm_encoder_init(dev, &intel_encoder->enc, 2852 + &intel_sdvo_enc_funcs, intel_encoder->enc.encoder_type); 2858 2853 2859 - drm_encoder_helper_add(&intel_output->enc, &intel_sdvo_helper_funcs); 2854 + drm_encoder_helper_add(&intel_encoder->enc, &intel_sdvo_helper_funcs); 2860 2855 2861 - drm_mode_connector_attach_encoder(&intel_output->base, &intel_output->enc); 2856 + drm_mode_connector_attach_encoder(&intel_encoder->base, &intel_encoder->enc); 2862 2857 if (sdvo_priv->is_tv) 2863 2858 intel_sdvo_tv_create_property(connector); 2864 2859 ··· 2870 2865 intel_sdvo_select_ddc_bus(sdvo_priv); 2871 2866 2872 2867 /* Set the input timing to the screen. Assume always input 0. 
*/ 2873 - intel_sdvo_set_target_input(intel_output, true, false); 2868 + intel_sdvo_set_target_input(intel_encoder, true, false); 2874 2869 2875 - intel_sdvo_get_input_pixel_clock_range(intel_output, 2870 + intel_sdvo_get_input_pixel_clock_range(intel_encoder, 2876 2871 &sdvo_priv->pixel_clock_min, 2877 2872 &sdvo_priv->pixel_clock_max); 2878 2873 ··· 2899 2894 err_i2c: 2900 2895 if (sdvo_priv->analog_ddc_bus != NULL) 2901 2896 intel_i2c_destroy(sdvo_priv->analog_ddc_bus); 2902 - if (intel_output->ddc_bus != NULL) 2903 - intel_i2c_destroy(intel_output->ddc_bus); 2904 - if (intel_output->i2c_bus != NULL) 2905 - intel_i2c_destroy(intel_output->i2c_bus); 2897 + if (intel_encoder->ddc_bus != NULL) 2898 + intel_i2c_destroy(intel_encoder->ddc_bus); 2899 + if (intel_encoder->i2c_bus != NULL) 2900 + intel_i2c_destroy(intel_encoder->i2c_bus); 2906 2901 err_inteloutput: 2907 - kfree(intel_output); 2902 + kfree(intel_encoder); 2908 2903 2909 2904 return false; 2910 2905 }
+48 -48
drivers/gpu/drm/i915/intel_tv.c
··· 921 921 { 922 922 struct drm_device *dev = connector->dev; 923 923 struct drm_i915_private *dev_priv = dev->dev_private; 924 - struct intel_output *intel_output = to_intel_output(connector); 925 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 924 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 925 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 926 926 int i; 927 927 928 928 tv_priv->save_TV_H_CTL_1 = I915_READ(TV_H_CTL_1); ··· 971 971 { 972 972 struct drm_device *dev = connector->dev; 973 973 struct drm_i915_private *dev_priv = dev->dev_private; 974 - struct intel_output *intel_output = to_intel_output(connector); 975 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 974 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 975 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 976 976 struct drm_crtc *crtc = connector->encoder->crtc; 977 977 struct intel_crtc *intel_crtc; 978 978 int i; ··· 1068 1068 } 1069 1069 1070 1070 static const struct tv_mode * 1071 - intel_tv_mode_find (struct intel_output *intel_output) 1071 + intel_tv_mode_find (struct intel_encoder *intel_encoder) 1072 1072 { 1073 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 1073 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 1074 1074 1075 1075 return intel_tv_mode_lookup(tv_priv->tv_format); 1076 1076 } ··· 1078 1078 static enum drm_mode_status 1079 1079 intel_tv_mode_valid(struct drm_connector *connector, struct drm_display_mode *mode) 1080 1080 { 1081 - struct intel_output *intel_output = to_intel_output(connector); 1082 - const struct tv_mode *tv_mode = intel_tv_mode_find(intel_output); 1081 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1082 + const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder); 1083 1083 1084 1084 /* Ensure TV refresh is close to desired refresh */ 1085 1085 if (tv_mode && abs(tv_mode->refresh - drm_mode_vrefresh(mode) * 1000) ··· 1095 1095 { 
1096 1096 struct drm_device *dev = encoder->dev; 1097 1097 struct drm_mode_config *drm_config = &dev->mode_config; 1098 - struct intel_output *intel_output = enc_to_intel_output(encoder); 1099 - const struct tv_mode *tv_mode = intel_tv_mode_find (intel_output); 1098 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 1099 + const struct tv_mode *tv_mode = intel_tv_mode_find (intel_encoder); 1100 1100 struct drm_encoder *other_encoder; 1101 1101 1102 1102 if (!tv_mode) ··· 1121 1121 struct drm_i915_private *dev_priv = dev->dev_private; 1122 1122 struct drm_crtc *crtc = encoder->crtc; 1123 1123 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 1124 - struct intel_output *intel_output = enc_to_intel_output(encoder); 1125 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 1126 - const struct tv_mode *tv_mode = intel_tv_mode_find(intel_output); 1124 + struct intel_encoder *intel_encoder = enc_to_intel_encoder(encoder); 1125 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 1126 + const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder); 1127 1127 u32 tv_ctl; 1128 1128 u32 hctl1, hctl2, hctl3; 1129 1129 u32 vctl1, vctl2, vctl3, vctl4, vctl5, vctl6, vctl7; ··· 1360 1360 * \return false if TV is disconnected. 
1361 1361 */ 1362 1362 static int 1363 - intel_tv_detect_type (struct drm_crtc *crtc, struct intel_output *intel_output) 1363 + intel_tv_detect_type (struct drm_crtc *crtc, struct intel_encoder *intel_encoder) 1364 1364 { 1365 - struct drm_encoder *encoder = &intel_output->enc; 1365 + struct drm_encoder *encoder = &intel_encoder->enc; 1366 1366 struct drm_device *dev = encoder->dev; 1367 1367 struct drm_i915_private *dev_priv = dev->dev_private; 1368 1368 unsigned long irqflags; ··· 1441 1441 */ 1442 1442 static void intel_tv_find_better_format(struct drm_connector *connector) 1443 1443 { 1444 - struct intel_output *intel_output = to_intel_output(connector); 1445 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 1446 - const struct tv_mode *tv_mode = intel_tv_mode_find(intel_output); 1444 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1445 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 1446 + const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder); 1447 1447 int i; 1448 1448 1449 1449 if ((tv_priv->type == DRM_MODE_CONNECTOR_Component) == ··· 1475 1475 { 1476 1476 struct drm_crtc *crtc; 1477 1477 struct drm_display_mode mode; 1478 - struct intel_output *intel_output = to_intel_output(connector); 1479 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 1480 - struct drm_encoder *encoder = &intel_output->enc; 1478 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1479 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 1480 + struct drm_encoder *encoder = &intel_encoder->enc; 1481 1481 int dpms_mode; 1482 1482 int type = tv_priv->type; 1483 1483 ··· 1485 1485 drm_mode_set_crtcinfo(&mode, CRTC_INTERLACE_HALVE_V); 1486 1486 1487 1487 if (encoder->crtc && encoder->crtc->enabled) { 1488 - type = intel_tv_detect_type(encoder->crtc, intel_output); 1488 + type = intel_tv_detect_type(encoder->crtc, intel_encoder); 1489 1489 } else { 1490 - crtc = intel_get_load_detect_pipe(intel_output, 
&mode, &dpms_mode); 1490 + crtc = intel_get_load_detect_pipe(intel_encoder, &mode, &dpms_mode); 1491 1491 if (crtc) { 1492 - type = intel_tv_detect_type(crtc, intel_output); 1493 - intel_release_load_detect_pipe(intel_output, dpms_mode); 1492 + type = intel_tv_detect_type(crtc, intel_encoder); 1493 + intel_release_load_detect_pipe(intel_encoder, dpms_mode); 1494 1494 } else 1495 1495 type = -1; 1496 1496 } ··· 1525 1525 intel_tv_chose_preferred_modes(struct drm_connector *connector, 1526 1526 struct drm_display_mode *mode_ptr) 1527 1527 { 1528 - struct intel_output *intel_output = to_intel_output(connector); 1529 - const struct tv_mode *tv_mode = intel_tv_mode_find(intel_output); 1528 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1529 + const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder); 1530 1530 1531 1531 if (tv_mode->nbr_end < 480 && mode_ptr->vdisplay == 480) 1532 1532 mode_ptr->type |= DRM_MODE_TYPE_PREFERRED; ··· 1550 1550 intel_tv_get_modes(struct drm_connector *connector) 1551 1551 { 1552 1552 struct drm_display_mode *mode_ptr; 1553 - struct intel_output *intel_output = to_intel_output(connector); 1554 - const struct tv_mode *tv_mode = intel_tv_mode_find(intel_output); 1553 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1554 + const struct tv_mode *tv_mode = intel_tv_mode_find(intel_encoder); 1555 1555 int j, count = 0; 1556 1556 u64 tmp; 1557 1557 ··· 1604 1604 static void 1605 1605 intel_tv_destroy (struct drm_connector *connector) 1606 1606 { 1607 - struct intel_output *intel_output = to_intel_output(connector); 1607 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1608 1608 1609 1609 drm_sysfs_connector_remove(connector); 1610 1610 drm_connector_cleanup(connector); 1611 - kfree(intel_output); 1611 + kfree(intel_encoder); 1612 1612 } 1613 1613 1614 1614 ··· 1617 1617 uint64_t val) 1618 1618 { 1619 1619 struct drm_device *dev = connector->dev; 1620 - struct intel_output 
*intel_output = to_intel_output(connector); 1621 - struct intel_tv_priv *tv_priv = intel_output->dev_priv; 1622 - struct drm_encoder *encoder = &intel_output->enc; 1620 + struct intel_encoder *intel_encoder = to_intel_encoder(connector); 1621 + struct intel_tv_priv *tv_priv = intel_encoder->dev_priv; 1622 + struct drm_encoder *encoder = &intel_encoder->enc; 1623 1623 struct drm_crtc *crtc = encoder->crtc; 1624 1624 int ret = 0; 1625 1625 bool changed = false; ··· 1740 1740 { 1741 1741 struct drm_i915_private *dev_priv = dev->dev_private; 1742 1742 struct drm_connector *connector; 1743 - struct intel_output *intel_output; 1743 + struct intel_encoder *intel_encoder; 1744 1744 struct intel_tv_priv *tv_priv; 1745 1745 u32 tv_dac_on, tv_dac_off, save_tv_dac; 1746 1746 char **tv_format_names; ··· 1780 1780 (tv_dac_off & TVDAC_STATE_CHG_EN) != 0) 1781 1781 return; 1782 1782 1783 - intel_output = kzalloc(sizeof(struct intel_output) + 1783 + intel_encoder = kzalloc(sizeof(struct intel_encoder) + 1784 1784 sizeof(struct intel_tv_priv), GFP_KERNEL); 1785 - if (!intel_output) { 1785 + if (!intel_encoder) { 1786 1786 return; 1787 1787 } 1788 1788 1789 - connector = &intel_output->base; 1789 + connector = &intel_encoder->base; 1790 1790 1791 1791 drm_connector_init(dev, connector, &intel_tv_connector_funcs, 1792 1792 DRM_MODE_CONNECTOR_SVIDEO); 1793 1793 1794 - drm_encoder_init(dev, &intel_output->enc, &intel_tv_enc_funcs, 1794 + drm_encoder_init(dev, &intel_encoder->enc, &intel_tv_enc_funcs, 1795 1795 DRM_MODE_ENCODER_TVDAC); 1796 1796 1797 - drm_mode_connector_attach_encoder(&intel_output->base, &intel_output->enc); 1798 - tv_priv = (struct intel_tv_priv *)(intel_output + 1); 1799 - intel_output->type = INTEL_OUTPUT_TVOUT; 1800 - intel_output->crtc_mask = (1 << 0) | (1 << 1); 1801 - intel_output->clone_mask = (1 << INTEL_TV_CLONE_BIT); 1802 - intel_output->enc.possible_crtcs = ((1 << 0) | (1 << 1)); 1803 - intel_output->enc.possible_clones = (1 << INTEL_OUTPUT_TVOUT); 1804 - 
intel_output->dev_priv = tv_priv; 1797 + drm_mode_connector_attach_encoder(&intel_encoder->base, &intel_encoder->enc); 1798 + tv_priv = (struct intel_tv_priv *)(intel_encoder + 1); 1799 + intel_encoder->type = INTEL_OUTPUT_TVOUT; 1800 + intel_encoder->crtc_mask = (1 << 0) | (1 << 1); 1801 + intel_encoder->clone_mask = (1 << INTEL_TV_CLONE_BIT); 1802 + intel_encoder->enc.possible_crtcs = ((1 << 0) | (1 << 1)); 1803 + intel_encoder->enc.possible_clones = (1 << INTEL_OUTPUT_TVOUT); 1804 + intel_encoder->dev_priv = tv_priv; 1805 1805 tv_priv->type = DRM_MODE_CONNECTOR_Unknown; 1806 1806 1807 1807 /* BIOS margin values */ ··· 1812 1812 1813 1813 tv_priv->tv_format = kstrdup(tv_modes[initial_mode].name, GFP_KERNEL); 1814 1814 1815 - drm_encoder_helper_add(&intel_output->enc, &intel_tv_helper_funcs); 1815 + drm_encoder_helper_add(&intel_encoder->enc, &intel_tv_helper_funcs); 1816 1816 drm_connector_helper_add(connector, &intel_tv_connector_helper_funcs); 1817 1817 connector->interlace_allowed = false; 1818 1818 connector->doublescan_allowed = false;
+10
drivers/gpu/drm/radeon/atom.c
···
908 908 uint8_t attr = U8((*ptr)++), shift;
909 909 uint32_t saved, dst;
910 910 int dptr = *ptr;
911 + uint32_t dst_align = atom_dst_to_src[(attr >> 3) & 7][(attr >> 6) & 3];
911 912 SDEBUG(" dst: ");
912 913 dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1);
914 + /* op needs the full dst value */
915 + dst = saved;
913 916 shift = atom_get_src(ctx, attr, ptr);
914 917 SDEBUG(" shift: %d\n", shift);
915 918 dst <<= shift;
919 + dst &= atom_arg_mask[dst_align];
920 + dst >>= atom_arg_shift[dst_align];
916 921 SDEBUG(" dst: ");
917 922 atom_put_dst(ctx, arg, attr, &dptr, dst, saved);
918 923 }
···
927 922 uint8_t attr = U8((*ptr)++), shift;
928 923 uint32_t saved, dst;
929 924 int dptr = *ptr;
925 + uint32_t dst_align = atom_dst_to_src[(attr >> 3) & 7][(attr >> 6) & 3];
930 926 SDEBUG(" dst: ");
931 927 dst = atom_get_dst(ctx, arg, attr, ptr, &saved, 1);
928 + /* op needs the full dst value */
929 + dst = saved;
932 930 shift = atom_get_src(ctx, attr, ptr);
933 931 SDEBUG(" shift: %d\n", shift);
934 932 dst >>= shift;
933 + dst &= atom_arg_mask[dst_align];
934 + dst >>= atom_arg_shift[dst_align];
935 935 SDEBUG(" dst: ");
936 936 atom_put_dst(ctx, arg, attr, &dptr, dst, saved);
937 937 }
+4
drivers/gpu/drm/radeon/atombios_crtc.c
··· 521 521 /* DVO wants 2x pixel clock if the DVO chip is in 12 bit mode */ 522 522 if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1) 523 523 adjusted_clock = mode->clock * 2; 524 + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT)) { 525 + pll->algo = PLL_ALGO_LEGACY; 526 + pll->flags |= RADEON_PLL_PREFER_CLOSEST_LOWER; 527 + } 524 528 } else { 525 529 if (encoder->encoder_type != DRM_MODE_ENCODER_DAC) 526 530 pll->flags |= RADEON_PLL_NO_ODD_POST_DIV;
+15 -6
drivers/gpu/drm/radeon/r100.c
··· 2891 2891 { 2892 2892 struct radeon_bo *robj; 2893 2893 unsigned long size; 2894 - unsigned u, i, w, h; 2894 + unsigned u, i, w, h, d; 2895 2895 int ret; 2896 2896 2897 2897 for (u = 0; u < track->num_texture; u++) { ··· 2923 2923 h = h / (1 << i); 2924 2924 if (track->textures[u].roundup_h) 2925 2925 h = roundup_pow_of_two(h); 2926 + if (track->textures[u].tex_coord_type == 1) { 2927 + d = (1 << track->textures[u].txdepth) / (1 << i); 2928 + if (!d) 2929 + d = 1; 2930 + } else { 2931 + d = 1; 2932 + } 2926 2933 if (track->textures[u].compress_format) { 2927 2934 2928 - size += r100_track_compress_size(track->textures[u].compress_format, w, h); 2935 + size += r100_track_compress_size(track->textures[u].compress_format, w, h) * d; 2929 2936 /* compressed textures are block based */ 2930 2937 } else 2931 - size += w * h; 2938 + size += w * h * d; 2932 2939 } 2933 2940 size *= track->textures[u].cpp; 2934 2941 2935 2942 switch (track->textures[u].tex_coord_type) { 2936 2943 case 0: 2937 - break; 2938 2944 case 1: 2939 - size *= (1 << track->textures[u].txdepth); 2940 2945 break; 2941 2946 case 2: 2942 2947 if (track->separate_cube) { ··· 3012 3007 } 3013 3008 } 3014 3009 prim_walk = (track->vap_vf_cntl >> 4) & 0x3; 3015 - nverts = (track->vap_vf_cntl >> 16) & 0xFFFF; 3010 + if (track->vap_vf_cntl & (1 << 14)) { 3011 + nverts = track->vap_alt_nverts; 3012 + } else { 3013 + nverts = (track->vap_vf_cntl >> 16) & 0xFFFF; 3014 + } 3016 3015 switch (prim_walk) { 3017 3016 case 1: 3018 3017 for (i = 0; i < track->num_arrays; i++) {
+1
drivers/gpu/drm/radeon/r100_track.h
··· 64 64 unsigned maxy; 65 65 unsigned vtx_size; 66 66 unsigned vap_vf_cntl; 67 + unsigned vap_alt_nverts; 67 68 unsigned immd_dwords; 68 69 unsigned num_arrays; 69 70 unsigned max_indx;
+11 -4
drivers/gpu/drm/radeon/r300.c
··· 730 730 /* VAP_VF_MAX_VTX_INDX */ 731 731 track->max_indx = idx_value & 0x00FFFFFFUL; 732 732 break; 733 + case 0x2088: 734 + /* VAP_ALT_NUM_VERTICES - only valid on r500 */ 735 + if (p->rdev->family < CHIP_RV515) 736 + goto fail; 737 + track->vap_alt_nverts = idx_value & 0xFFFFFF; 738 + break; 733 739 case 0x43E4: 734 740 /* SC_SCISSOR1 */ 735 741 track->maxy = ((idx_value >> 13) & 0x1FFF) + 1; ··· 773 767 tmp = idx_value & ~(0x7 << 16); 774 768 tmp |= tile_flags; 775 769 ib[idx] = tmp; 776 - 777 770 i = (reg - 0x4E38) >> 2; 778 771 track->cb[i].pitch = idx_value & 0x3FFE; 779 772 switch (((idx_value >> 21) & 0xF)) { ··· 1057 1052 break; 1058 1053 /* fallthrough do not move */ 1059 1054 default: 1060 - printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n", 1061 - reg, idx); 1062 - return -EINVAL; 1055 + goto fail; 1063 1056 } 1064 1057 return 0; 1058 + fail: 1059 + printk(KERN_ERR "Forbidden register 0x%04X in cs at %d\n", 1060 + reg, idx); 1061 + return -EINVAL; 1065 1062 } 1066 1063 1067 1064 static int r300_packet3_check(struct radeon_cs_parser *p,
+1 -1
drivers/gpu/drm/radeon/r600_audio.c
··· 35 35 */ 36 36 static int r600_audio_chipset_supported(struct radeon_device *rdev) 37 37 { 38 - return rdev->family >= CHIP_R600 38 + return (rdev->family >= CHIP_R600 && rdev->family < CHIP_CEDAR) 39 39 || rdev->family == CHIP_RS600 40 40 || rdev->family == CHIP_RS690 41 41 || rdev->family == CHIP_RS740;
+9
drivers/gpu/drm/radeon/r600_hdmi.c
··· 314 314 struct radeon_device *rdev = dev->dev_private; 315 315 uint32_t offset = to_radeon_encoder(encoder)->hdmi_offset; 316 316 317 + if (ASIC_IS_DCE4(rdev)) 318 + return; 319 + 317 320 if (!offset) 318 321 return; 319 322 ··· 487 484 struct radeon_device *rdev = dev->dev_private; 488 485 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 489 486 487 + if (ASIC_IS_DCE4(rdev)) 488 + return; 489 + 490 490 if (!radeon_encoder->hdmi_offset) { 491 491 r600_hdmi_assign_block(encoder); 492 492 if (!radeon_encoder->hdmi_offset) { ··· 530 524 struct drm_device *dev = encoder->dev; 531 525 struct radeon_device *rdev = dev->dev_private; 532 526 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 527 + 528 + if (ASIC_IS_DCE4(rdev)) 529 + return; 533 530 534 531 if (!radeon_encoder->hdmi_offset) { 535 532 dev_err(rdev->dev, "Disabling not enabled HDMI\n");
+11 -2
drivers/gpu/drm/radeon/radeon_connectors.c
··· 162 162 { 163 163 struct drm_device *dev = connector->dev; 164 164 struct drm_connector *conflict; 165 + struct radeon_connector *radeon_conflict; 165 166 int i; 166 167 167 168 list_for_each_entry(conflict, &dev->mode_config.connector_list, head) { 168 169 if (conflict == connector) 169 170 continue; 170 171 172 + radeon_conflict = to_radeon_connector(conflict); 171 173 for (i = 0; i < DRM_CONNECTOR_MAX_ENCODER; i++) { 172 174 if (conflict->encoder_ids[i] == 0) 173 175 break; ··· 177 175 /* if the IDs match */ 178 176 if (conflict->encoder_ids[i] == encoder->base.id) { 179 177 if (conflict->status != connector_status_connected) 178 + continue; 179 + 180 + if (radeon_conflict->use_digital) 180 181 continue; 181 182 182 183 if (priority == true) { ··· 292 287 293 288 if (property == rdev->mode_info.coherent_mode_property) { 294 289 struct radeon_encoder_atom_dig *dig; 290 + bool new_coherent_mode; 295 291 296 292 /* need to find digital encoder on connector */ 297 293 encoder = radeon_find_encoder(connector, DRM_MODE_ENCODER_TMDS); ··· 305 299 return 0; 306 300 307 301 dig = radeon_encoder->enc_priv; 308 - dig->coherent_mode = val ? true : false; 309 - radeon_property_change_mode(&radeon_encoder->base); 302 + new_coherent_mode = val ? true : false; 303 + if (dig->coherent_mode != new_coherent_mode) { 304 + dig->coherent_mode = new_coherent_mode; 305 + radeon_property_change_mode(&radeon_encoder->base); 306 + } 310 307 } 311 308 312 309 if (property == rdev->mode_info.tv_std_property) {
+52 -1
drivers/gpu/drm/radeon/radeon_device.c
··· 36 36 #include "radeon.h" 37 37 #include "atom.h" 38 38 39 + static const char radeon_family_name[][16] = { 40 + "R100", 41 + "RV100", 42 + "RS100", 43 + "RV200", 44 + "RS200", 45 + "R200", 46 + "RV250", 47 + "RS300", 48 + "RV280", 49 + "R300", 50 + "R350", 51 + "RV350", 52 + "RV380", 53 + "R420", 54 + "R423", 55 + "RV410", 56 + "RS400", 57 + "RS480", 58 + "RS600", 59 + "RS690", 60 + "RS740", 61 + "RV515", 62 + "R520", 63 + "RV530", 64 + "RV560", 65 + "RV570", 66 + "R580", 67 + "R600", 68 + "RV610", 69 + "RV630", 70 + "RV670", 71 + "RV620", 72 + "RV635", 73 + "RS780", 74 + "RS880", 75 + "RV770", 76 + "RV730", 77 + "RV710", 78 + "RV740", 79 + "CEDAR", 80 + "REDWOOD", 81 + "JUNIPER", 82 + "CYPRESS", 83 + "HEMLOCK", 84 + "LAST", 85 + }; 86 + 39 87 /* 40 88 * Clear GPU surface registers. 41 89 */ ··· 574 526 int r; 575 527 int dma_bits; 576 528 577 - DRM_INFO("radeon: Initializing kernel modesetting.\n"); 578 529 rdev->shutdown = false; 579 530 rdev->dev = &pdev->dev; 580 531 rdev->ddev = ddev; ··· 585 538 rdev->mc.gtt_size = radeon_gart_size * 1024 * 1024; 586 539 rdev->gpu_lockup = false; 587 540 rdev->accel_working = false; 541 + 542 + DRM_INFO("initializing kernel modesetting (%s 0x%04X:0x%04X).\n", 543 + radeon_family_name[rdev->family], pdev->vendor, pdev->device); 544 + 588 545 /* mutex initialization are all done here so we 589 546 * can recall function without having locking issues */ 590 547 mutex_init(&rdev->cs_mutex);
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 43 43 * - 2.0.0 - initial interface 44 44 * - 2.1.0 - add square tiling interface 45 45 * - 2.2.0 - add r6xx/r7xx const buffer support 46 + * - 2.3.0 - add MSPOS + 3D texture + r500 VAP regs 46 47 */ 47 48 #define KMS_DRIVER_MAJOR 2 48 - #define KMS_DRIVER_MINOR 2 49 + #define KMS_DRIVER_MINOR 3 49 50 #define KMS_DRIVER_PATCHLEVEL 0 50 51 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 51 52 int radeon_driver_unload_kms(struct drm_device *dev);
+10 -2
drivers/gpu/drm/radeon/radeon_encoders.c
··· 865 865 else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { 866 866 if (dig->coherent_mode) 867 867 args.v3.acConfig.fCoherentMode = 1; 868 + if (radeon_encoder->pixel_clock > 165000) 869 + args.v3.acConfig.fDualLinkConnector = 1; 868 870 } 869 871 } else if (ASIC_IS_DCE32(rdev)) { 870 872 args.v2.acConfig.ucEncoderSel = dig->dig_encoder; ··· 890 888 else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) { 891 889 if (dig->coherent_mode) 892 890 args.v2.acConfig.fCoherentMode = 1; 891 + if (radeon_encoder->pixel_clock > 165000) 892 + args.v2.acConfig.fDualLinkConnector = 1; 893 893 } 894 894 } else { 895 895 args.v1.ucConfig = ATOM_TRANSMITTER_CONFIG_CLKSRC_PPLL; ··· 1377 1373 case ENCODER_OBJECT_ID_INTERNAL_DAC2: 1378 1374 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2: 1379 1375 atombios_dac_setup(encoder, ATOM_ENABLE); 1380 - if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) 1381 - atombios_tv_setup(encoder, ATOM_ENABLE); 1376 + if (radeon_encoder->devices & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) { 1377 + if (radeon_encoder->active_device & (ATOM_DEVICE_TV_SUPPORT | ATOM_DEVICE_CV_SUPPORT)) 1378 + atombios_tv_setup(encoder, ATOM_ENABLE); 1379 + else 1380 + atombios_tv_setup(encoder, ATOM_DISABLE); 1381 + } 1382 1382 break; 1383 1383 } 1384 1384 atombios_apply_encoder_quirks(encoder, adjusted_mode);
+2 -1
drivers/gpu/drm/radeon/radeon_family.h
··· 36 36 * Radeon chip families 37 37 */ 38 38 enum radeon_family { 39 - CHIP_R100, 39 + CHIP_R100 = 0, 40 40 CHIP_RV100, 41 41 CHIP_RS100, 42 42 CHIP_RV200, ··· 99 99 RADEON_IS_PCI = 0x00800000UL, 100 100 RADEON_IS_IGPGART = 0x01000000UL, 101 101 }; 102 + 102 103 #endif
+2
drivers/gpu/drm/radeon/reg_srcs/r300
··· 125 125 0x4000 GB_VAP_RASTER_VTX_FMT_0 126 126 0x4004 GB_VAP_RASTER_VTX_FMT_1 127 127 0x4008 GB_ENABLE 128 + 0x4010 GB_MSPOS0 129 + 0x4014 GB_MSPOS1 128 130 0x401C GB_SELECT 129 131 0x4020 GB_AA_CONFIG 130 132 0x4024 GB_FIFO_SIZE
+2
drivers/gpu/drm/radeon/reg_srcs/r420
··· 125 125 0x4000 GB_VAP_RASTER_VTX_FMT_0 126 126 0x4004 GB_VAP_RASTER_VTX_FMT_1 127 127 0x4008 GB_ENABLE 128 + 0x4010 GB_MSPOS0 129 + 0x4014 GB_MSPOS1 128 130 0x401C GB_SELECT 129 131 0x4020 GB_AA_CONFIG 130 132 0x4024 GB_FIFO_SIZE
+2
drivers/gpu/drm/radeon/reg_srcs/rs600
··· 125 125 0x4000 GB_VAP_RASTER_VTX_FMT_0 126 126 0x4004 GB_VAP_RASTER_VTX_FMT_1 127 127 0x4008 GB_ENABLE 128 + 0x4010 GB_MSPOS0 129 + 0x4014 GB_MSPOS1 128 130 0x401C GB_SELECT 129 131 0x4020 GB_AA_CONFIG 130 132 0x4024 GB_FIFO_SIZE
+3
drivers/gpu/drm/radeon/reg_srcs/rv515
··· 35 35 0x1DA8 VAP_VPORT_ZSCALE 36 36 0x1DAC VAP_VPORT_ZOFFSET 37 37 0x2080 VAP_CNTL 38 + 0x208C VAP_INDEX_OFFSET 38 39 0x2090 VAP_OUT_VTX_FMT_0 39 40 0x2094 VAP_OUT_VTX_FMT_1 40 41 0x20B0 VAP_VTE_CNTL ··· 159 158 0x4000 GB_VAP_RASTER_VTX_FMT_0 160 159 0x4004 GB_VAP_RASTER_VTX_FMT_1 161 160 0x4008 GB_ENABLE 161 + 0x4010 GB_MSPOS0 162 + 0x4014 GB_MSPOS1 162 163 0x401C GB_SELECT 163 164 0x4020 GB_AA_CONFIG 164 165 0x4024 GB_FIFO_SIZE
+1 -1
drivers/gpu/drm/radeon/rs600.c
··· 159 159 WREG32_MC(R_000100_MC_PT0_CNTL, tmp); 160 160 161 161 tmp = RREG32_MC(R_000100_MC_PT0_CNTL); 162 - tmp |= S_000100_INVALIDATE_ALL_L1_TLBS(1) & S_000100_INVALIDATE_L2_CACHE(1); 162 + tmp |= S_000100_INVALIDATE_ALL_L1_TLBS(1) | S_000100_INVALIDATE_L2_CACHE(1); 163 163 WREG32_MC(R_000100_MC_PT0_CNTL, tmp); 164 164 165 165 tmp = RREG32_MC(R_000100_MC_PT0_CNTL);
+18
drivers/hwmon/applesmc.c
··· 142 142 "TM1S", "TM2P", "TM2S", "TM3S", "TM8P", "TM8S", "TM9P", "TM9S", 143 143 "TN0C", "TN0D", "TN0H", "TS0C", "Tp0C", "Tp1C", "Tv0S", "Tv1S", 144 144 NULL }, 145 + /* Set 17: iMac 9,1 */ 146 + { "TA0P", "TC0D", "TC0H", "TC0P", "TG0D", "TG0H", "TH0P", "TL0P", 147 + "TN0D", "TN0H", "TN0P", "TO0P", "Tm0P", "Tp0P", NULL }, 148 + /* Set 18: MacBook Pro 2,2 */ 149 + { "TB0T", "TC0D", "TC0P", "TG0H", "TG0P", "TG0T", "TM0P", "TTF0", 150 + "Th0H", "Th1H", "Tm0P", "Ts0P", NULL }, 145 151 }; 146 152 147 153 /* List of keys used to read/write fan speeds */ ··· 1356 1350 { .accelerometer = 1, .light = 1, .temperature_set = 15 }, 1357 1351 /* MacPro3,1: temperature set 16 */ 1358 1352 { .accelerometer = 0, .light = 0, .temperature_set = 16 }, 1353 + /* iMac 9,1: light sensor only, temperature set 17 */ 1354 + { .accelerometer = 0, .light = 0, .temperature_set = 17 }, 1355 + /* MacBook Pro 2,2: accelerometer, backlight and temperature set 18 */ 1356 + { .accelerometer = 1, .light = 1, .temperature_set = 18 }, 1359 1357 }; 1360 1358 1361 1359 /* Note that DMI_MATCH(...,"MacBook") will match "MacBookPro1,1". 
··· 1385 1375 DMI_MATCH(DMI_BOARD_VENDOR, "Apple"), 1386 1376 DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro3") }, 1387 1377 &applesmc_dmi_data[9]}, 1378 + { applesmc_dmi_match, "Apple MacBook Pro 2,2", { 1379 + DMI_MATCH(DMI_BOARD_VENDOR, "Apple Computer, Inc."), 1380 + DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro2,2") }, 1381 + &applesmc_dmi_data[18]}, 1388 1382 { applesmc_dmi_match, "Apple MacBook Pro", { 1389 1383 DMI_MATCH(DMI_BOARD_VENDOR,"Apple"), 1390 1384 DMI_MATCH(DMI_PRODUCT_NAME,"MacBookPro") }, ··· 1429 1415 DMI_MATCH(DMI_BOARD_VENDOR, "Apple"), 1430 1416 DMI_MATCH(DMI_PRODUCT_NAME, "MacPro") }, 1431 1417 &applesmc_dmi_data[4]}, 1418 + { applesmc_dmi_match, "Apple iMac 9,1", { 1419 + DMI_MATCH(DMI_BOARD_VENDOR, "Apple Inc."), 1420 + DMI_MATCH(DMI_PRODUCT_NAME, "iMac9,1") }, 1421 + &applesmc_dmi_data[17]}, 1432 1422 { applesmc_dmi_match, "Apple iMac 8", { 1433 1423 DMI_MATCH(DMI_BOARD_VENDOR, "Apple"), 1434 1424 DMI_MATCH(DMI_PRODUCT_NAME, "iMac8") },
+15 -17
drivers/hwmon/it87.c
··· 539 539 540 540 struct it87_data *data = dev_get_drvdata(dev); 541 541 long val; 542 + u8 reg; 542 543 543 544 if (strict_strtol(buf, 10, &val) < 0) 544 545 return -EINVAL; 545 546 546 - mutex_lock(&data->update_lock); 547 - 548 - data->sensor &= ~(1 << nr); 549 - data->sensor &= ~(8 << nr); 547 + reg = it87_read_value(data, IT87_REG_TEMP_ENABLE); 548 + reg &= ~(1 << nr); 549 + reg &= ~(8 << nr); 550 550 if (val == 2) { /* backwards compatibility */ 551 551 dev_warn(dev, "Sensor type 2 is deprecated, please use 4 " 552 552 "instead\n"); ··· 554 554 } 555 555 /* 3 = thermal diode; 4 = thermistor; 0 = disabled */ 556 556 if (val == 3) 557 - data->sensor |= 1 << nr; 557 + reg |= 1 << nr; 558 558 else if (val == 4) 559 - data->sensor |= 8 << nr; 560 - else if (val != 0) { 561 - mutex_unlock(&data->update_lock); 559 + reg |= 8 << nr; 560 + else if (val != 0) 562 561 return -EINVAL; 563 - } 562 + 563 + mutex_lock(&data->update_lock); 564 + data->sensor = reg; 564 565 it87_write_value(data, IT87_REG_TEMP_ENABLE, data->sensor); 566 + data->valid = 0; /* Force cache refresh */ 565 567 mutex_unlock(&data->update_lock); 566 568 return count; 567 569 } ··· 1843 1841 it87_write_value(data, IT87_REG_TEMP_HIGH(i), 127); 1844 1842 } 1845 1843 1846 - /* Check if temperature channels are reset manually or by some reason */ 1847 - tmp = it87_read_value(data, IT87_REG_TEMP_ENABLE); 1848 - if ((tmp & 0x3f) == 0) { 1849 - /* Temp1,Temp3=thermistor; Temp2=thermal diode */ 1850 - tmp = (tmp & 0xc0) | 0x2a; 1851 - it87_write_value(data, IT87_REG_TEMP_ENABLE, tmp); 1852 - } 1853 - data->sensor = tmp; 1844 + /* Temperature channels are not forcibly enabled, as they can be 1845 + * set to two different sensor types and we can't guess which one 1846 + * is correct for a given system. These channels can be enabled at 1847 + * run-time through the temp{1-3}_type sysfs accessors if needed. 1848 + */ 1855 1849 /* Check if voltage monitors are reset manually or by some reason */ 1856 1850 tmp = it87_read_value(data, IT87_REG_VIN_ENABLE);
+9 -4
drivers/hwmon/sht15.c
··· 303 303 **/ 304 304 static inline int sht15_calc_temp(struct sht15_data *data) 305 305 { 306 - int d1 = 0; 306 + int d1 = temppoints[0].d1; 307 307 int i; 308 308 309 - for (i = 1; i < ARRAY_SIZE(temppoints); i++) 309 + for (i = ARRAY_SIZE(temppoints) - 1; i > 0; i--) 310 310 /* Find pointer to interpolate */ 311 311 if (data->supply_uV > temppoints[i - 1].vdd) { 312 - d1 = (data->supply_uV/1000 - temppoints[i - 1].vdd) 312 + d1 = (data->supply_uV - temppoints[i - 1].vdd) 313 313 * (temppoints[i].d1 - temppoints[i - 1].d1) 314 314 / (temppoints[i].vdd - temppoints[i - 1].vdd) 315 315 + temppoints[i - 1].d1; ··· 542 542 /* If a regulator is available, query what the supply voltage actually is!*/ 543 543 data->reg = regulator_get(data->dev, "vcc"); 544 544 if (!IS_ERR(data->reg)) { 545 - data->supply_uV = regulator_get_voltage(data->reg); 545 + int voltage; 546 + 547 + voltage = regulator_get_voltage(data->reg); 548 + if (voltage) 549 + data->supply_uV = voltage; 550 + 546 551 regulator_enable(data->reg); 547 552 /* setup a notifier block to update this if another device 548 553 * causes the voltage to change */
+8 -1
drivers/input/input.c
··· 660 660 int input_get_keycode(struct input_dev *dev, 661 661 unsigned int scancode, unsigned int *keycode) 662 662 { 663 - return dev->getkeycode(dev, scancode, keycode); 663 + unsigned long flags; 664 + int retval; 665 + 666 + spin_lock_irqsave(&dev->event_lock, flags); 667 + retval = dev->getkeycode(dev, scancode, keycode); 668 + spin_unlock_irqrestore(&dev->event_lock, flags); 669 + 670 + return retval; 664 671 } 665 672 EXPORT_SYMBOL(input_get_keycode); 666 673
+3 -1
drivers/input/keyboard/matrix_keypad.c
··· 374 374 input_dev->name = pdev->name; 375 375 input_dev->id.bustype = BUS_HOST; 376 376 input_dev->dev.parent = &pdev->dev; 377 - input_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP); 377 + input_dev->evbit[0] = BIT_MASK(EV_KEY); 378 + if (!pdata->no_autorepeat) 379 + input_dev->evbit[0] |= BIT_MASK(EV_REP); 378 380 input_dev->open = matrix_keypad_start; 379 381 input_dev->close = matrix_keypad_stop; 380 382
+1
drivers/input/mouse/alps.c
··· 64 64 { { 0x62, 0x02, 0x14 }, 0xcf, 0xcf, 65 65 ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, 66 66 { { 0x73, 0x02, 0x50 }, 0xcf, 0xcf, ALPS_FOUR_BUTTONS }, /* Dell Vostro 1400 */ 67 + { { 0x73, 0x02, 0x64 }, 0xf8, 0xf8, 0 }, /* HP Pavilion dm3 */ 67 68 { { 0x52, 0x01, 0x14 }, 0xff, 0xff, 68 69 ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */ 69 70 };
-1
drivers/input/mouse/bcm5974.c
··· 803 803 .disconnect = bcm5974_disconnect, 804 804 .suspend = bcm5974_suspend, 805 805 .resume = bcm5974_resume, 806 - .reset_resume = bcm5974_resume, 807 806 .id_table = bcm5974_table, 808 807 .supports_autosuspend = 1, 809 808 };
+1 -1
drivers/input/serio/i8042.c
··· 39 39 40 40 static bool i8042_nomux; 41 41 module_param_named(nomux, i8042_nomux, bool, 0); 42 - MODULE_PARM_DESC(nomux, "Do not check whether an active multiplexing conrtoller is present."); 42 + MODULE_PARM_DESC(nomux, "Do not check whether an active multiplexing controller is present."); 43 43 44 44 static bool i8042_unlock; 45 45 module_param_named(unlock, i8042_unlock, bool, 0);
+33 -19
drivers/input/sparse-keymap.c
··· 68 68 unsigned int scancode, 69 69 unsigned int *keycode) 70 70 { 71 - const struct key_entry *key = 72 - sparse_keymap_entry_from_scancode(dev, scancode); 71 + const struct key_entry *key; 73 72 74 - if (key && key->type == KE_KEY) { 75 - *keycode = key->keycode; 76 - return 0; 73 + if (dev->keycode) { 74 + key = sparse_keymap_entry_from_scancode(dev, scancode); 75 + if (key && key->type == KE_KEY) { 76 + *keycode = key->keycode; 77 + return 0; 78 + } 77 79 } 78 80 79 81 return -EINVAL; ··· 88 86 struct key_entry *key; 89 87 int old_keycode; 90 88 91 - if (keycode < 0 || keycode > KEY_MAX) 92 - return -EINVAL; 93 - 94 - key = sparse_keymap_entry_from_scancode(dev, scancode); 95 - if (key && key->type == KE_KEY) { 96 - old_keycode = key->keycode; 97 - key->keycode = keycode; 98 - set_bit(keycode, dev->keybit); 99 - if (!sparse_keymap_entry_from_keycode(dev, old_keycode)) 100 - clear_bit(old_keycode, dev->keybit); 101 - return 0; 89 + if (dev->keycode) { 90 + key = sparse_keymap_entry_from_scancode(dev, scancode); 91 + if (key && key->type == KE_KEY) { 92 + old_keycode = key->keycode; 93 + key->keycode = keycode; 94 + set_bit(keycode, dev->keybit); 95 + if (!sparse_keymap_entry_from_keycode(dev, old_keycode)) 96 + clear_bit(old_keycode, dev->keybit); 97 + return 0; 98 + } 99 + } 102 100 103 101 return -EINVAL; ··· 165 164 return 0; 166 165 167 166 err_out: 168 - kfree(keymap); 167 + kfree(map); 169 168 return error; 170 169 171 170 } ··· 177 176 * 178 177 * This function is used to free memory allocated by sparse keymap 179 178 * in an input device that was set up by sparse_keymap_setup(). 179 + * NOTE: It is safe to call this function while the input device is 180 + * still registered (however, drivers should take care not to try to 181 + * use the freed keymap and thus have to shut off interrupts/polling 182 + * before freeing the keymap). 180 183 */ 181 184 void sparse_keymap_free(struct input_dev *dev) 182 185 { 186 + unsigned long flags; 187 + 188 + /* 189 + * Take event lock to prevent racing with input_get_keycode() 190 + * and input_set_keycode() if we are called while input device 191 + * is still registered. 192 + */ 193 + spin_lock_irqsave(&dev->event_lock, flags); 194 + 183 195 kfree(dev->keycode); 184 196 dev->keycode = NULL; 185 197 dev->keycodemax = 0; 186 - dev->getkeycode = NULL; 187 - dev->setkeycode = NULL; 198 + 199 + spin_unlock_irqrestore(&dev->event_lock, flags); 188 200 } 189 201 EXPORT_SYMBOL(sparse_keymap_free); 190 202
+7 -5
drivers/input/tablet/wacom_sys.c
··· 673 673 int rv; 674 674 675 675 mutex_lock(&wacom->lock); 676 - if (wacom->open) { 676 + 677 + /* switch to wacom mode first */ 678 + wacom_query_tablet_data(intf, features); 679 + 680 + if (wacom->open) 677 681 rv = usb_submit_urb(wacom->irq, GFP_NOIO); 678 - /* switch to wacom mode if needed */ 679 - if (!wacom_retrieve_hid_descriptor(intf, features)) 680 - wacom_query_tablet_data(intf, features); 681 - } else 682 + else 682 683 rv = 0; 684 + 683 685 mutex_unlock(&wacom->lock); 684 686 685 687 return rv;
+104 -59
drivers/input/tablet/wacom_wac.c
··· 155 155 { 156 156 struct wacom_features *features = &wacom->features; 157 157 unsigned char *data = wacom->data; 158 - int x, y, prox; 159 - int rw = 0; 160 - int retval = 0; 158 + int x, y, rw; 159 + static int penData = 0; 161 160 162 161 if (data[0] != WACOM_REPORT_PENABLED) { 163 162 dbg("wacom_graphire_irq: received unknown report #%d", data[0]); 164 - goto exit; 163 + return 0; 165 164 } 166 165 167 - prox = data[1] & 0x80; 168 - if (prox || wacom->id[0]) { 169 - if (prox) { 170 - switch ((data[1] >> 5) & 3) { 166 + if (data[1] & 0x80) { 167 + /* in prox and not a pad data */ 168 + penData = 1; 169 + 170 + switch ((data[1] >> 5) & 3) { 171 171 172 172 case 0: /* Pen */ 173 173 wacom->tool[0] = BTN_TOOL_PEN; ··· 181 181 182 182 case 2: /* Mouse with wheel */ 183 183 wacom_report_key(wcombo, BTN_MIDDLE, data[1] & 0x04); 184 + if (features->type == WACOM_G4 || features->type == WACOM_MO) { 185 + rw = data[7] & 0x04 ? (data[7] & 0x03)-4 : (data[7] & 0x03); 186 + wacom_report_rel(wcombo, REL_WHEEL, -rw); 187 + } else 188 + wacom_report_rel(wcombo, REL_WHEEL, -(signed char) data[6]); 184 189 /* fall through */ 185 190 186 191 case 3: /* Mouse without wheel */ 187 192 wacom->tool[0] = BTN_TOOL_MOUSE; 188 193 wacom->id[0] = CURSOR_DEVICE_ID; 194 + wacom_report_key(wcombo, BTN_LEFT, data[1] & 0x01); 195 + wacom_report_key(wcombo, BTN_RIGHT, data[1] & 0x02); 196 + if (features->type == WACOM_G4 || features->type == WACOM_MO) 197 + wacom_report_abs(wcombo, ABS_DISTANCE, data[6] & 0x3f); 198 + else 199 + wacom_report_abs(wcombo, ABS_DISTANCE, data[7] & 0x3f); 189 200 break; 190 - } 191 201 } 192 202 x = wacom_le16_to_cpu(&data[2]); 193 203 y = wacom_le16_to_cpu(&data[4]); ··· 208 198 wacom_report_key(wcombo, BTN_TOUCH, data[1] & 0x01); 209 199 wacom_report_key(wcombo, BTN_STYLUS, data[1] & 0x02); 210 200 wacom_report_key(wcombo, BTN_STYLUS2, data[1] & 0x04); 211 - } else { 212 - wacom_report_key(wcombo, BTN_LEFT, data[1] & 0x01); 213 - wacom_report_key(wcombo, BTN_RIGHT, data[1] & 0x02); 214 - if (features->type == WACOM_G4 || 215 - features->type == WACOM_MO) { 216 - wacom_report_abs(wcombo, ABS_DISTANCE, data[6] & 0x3f); 217 - rw = (signed)(data[7] & 0x04) - (data[7] & 0x03); 218 - } else { 219 - wacom_report_abs(wcombo, ABS_DISTANCE, data[7] & 0x3f); 220 - rw = -(signed)data[6]; 221 - } 222 - wacom_report_rel(wcombo, REL_WHEEL, rw); 223 201 } 224 - 225 - if (!prox) 226 - wacom->id[0] = 0; 227 202 wacom_report_abs(wcombo, ABS_MISC, wacom->id[0]); /* report tool id */ 228 - wacom_report_key(wcombo, wacom->tool[0], prox); 229 - wacom_input_sync(wcombo); /* sync last event */ 203 + wacom_report_key(wcombo, wacom->tool[0], 1); 204 + } else if (wacom->id[0]) { 205 + wacom_report_abs(wcombo, ABS_X, 0); 206 + wacom_report_abs(wcombo, ABS_Y, 0); 207 + if (wacom->tool[0] == BTN_TOOL_MOUSE) { 208 + wacom_report_key(wcombo, BTN_LEFT, 0); 209 + wacom_report_key(wcombo, BTN_RIGHT, 0); 210 + wacom_report_abs(wcombo, ABS_DISTANCE, 0); 211 + } else { 212 + wacom_report_abs(wcombo, ABS_PRESSURE, 0); 213 + wacom_report_key(wcombo, BTN_TOUCH, 0); 214 + wacom_report_key(wcombo, BTN_STYLUS, 0); 215 + wacom_report_key(wcombo, BTN_STYLUS2, 0); 216 + } 217 + wacom->id[0] = 0; 218 + wacom_report_abs(wcombo, ABS_MISC, 0); /* reset tool id */ 219 + wacom_report_key(wcombo, wacom->tool[0], 0); 230 220 } 231 221 232 222 /* send pad data */ 233 223 switch (features->type) { 234 224 case WACOM_G4: 235 - prox = data[7] & 0xf8; 236 - if (prox || wacom->id[1]) { 225 + if (data[7] & 0xf8) { 226 + if (penData) { 227 + wacom_input_sync(wcombo); /* sync last event */ 228 + if (!wacom->id[0]) 229 + penData = 0; 230 + } 237 231 wacom->id[1] = PAD_DEVICE_ID; 238 232 wacom_report_key(wcombo, BTN_0, (data[7] & 0x40)); 239 233 wacom_report_key(wcombo, BTN_4, (data[7] & 0x80)); ··· 245 231 wacom_report_rel(wcombo, REL_WHEEL, rw); 246 232 wacom_report_key(wcombo, BTN_TOOL_FINGER, 0xf0); 247 233 wacom_report_abs(wcombo, ABS_MISC, wacom->id[1]); 248 - if (!prox) 249 - wacom->id[1] = 0; 250 - wacom_report_abs(wcombo, ABS_MISC, wacom->id[1]); 234 + wacom_input_event(wcombo, EV_MSC, MSC_SERIAL, 0xf0); 235 + } else if (wacom->id[1]) { 236 + if (penData) { 237 + wacom_input_sync(wcombo); /* sync last event */ 238 + if (!wacom->id[0]) 239 + penData = 0; 240 + } 241 + wacom->id[1] = 0; 242 + wacom_report_key(wcombo, BTN_0, (data[7] & 0x40)); 243 + wacom_report_key(wcombo, BTN_4, (data[7] & 0x80)); 244 + wacom_report_rel(wcombo, REL_WHEEL, 0); 245 + wacom_report_key(wcombo, BTN_TOOL_FINGER, 0); 246 + wacom_report_abs(wcombo, ABS_MISC, 0); 251 247 wacom_input_event(wcombo, EV_MSC, MSC_SERIAL, 0xf0); 252 248 } 253 - retval = 1; 254 249 break; 255 250 case WACOM_MO: 256 - prox = (data[7] & 0xf8) || data[8]; 257 - if (prox || wacom->id[1]) { 251 + if ((data[7] & 0xf8) || (data[8] & 0xff)) { 252 + if (penData) { 253 + wacom_input_sync(wcombo); /* sync last event */ 254 + if (!wacom->id[0]) 255 + penData = 0; 256 + } 258 257 wacom->id[1] = PAD_DEVICE_ID; 259 258 wacom_report_key(wcombo, BTN_0, (data[7] & 0x08)); 260 259 wacom_report_key(wcombo, BTN_1, (data[7] & 0x20)); ··· 275 248 wacom_report_key(wcombo, BTN_5, (data[7] & 0x40)); 276 249 wacom_report_abs(wcombo, ABS_WHEEL, (data[8] & 0x7f)); 277 250 wacom_report_key(wcombo, BTN_TOOL_FINGER, 0xf0); 278 - if (!prox) 279 - wacom->id[1] = 0; 280 251 wacom_report_abs(wcombo, ABS_MISC, wacom->id[1]); 281 252 wacom_input_event(wcombo, EV_MSC, MSC_SERIAL, 0xf0); 253 + } else if (wacom->id[1]) { 254 + if (penData) { 255 + wacom_input_sync(wcombo); /* sync last event */ 256 + if (!wacom->id[0]) 257 + penData = 0; 258 + } 259 + wacom->id[1] = 0; 260 + wacom_report_key(wcombo, BTN_0, (data[7] & 0x08)); 261 + wacom_report_key(wcombo, BTN_1, (data[7] & 0x20)); 262 + wacom_report_key(wcombo, BTN_4, (data[7] & 0x10)); 263 + wacom_report_key(wcombo, BTN_5, (data[7] & 0x40)); 264 + wacom_report_abs(wcombo, ABS_WHEEL, (data[8] & 0x7f)); 265 + wacom_report_key(wcombo, BTN_TOOL_FINGER, 0); 266 + wacom_report_abs(wcombo, ABS_MISC, 0); 267 + wacom_input_event(wcombo, EV_MSC, MSC_SERIAL, 0xf0); 282 268 } 283 - retval = 1; 284 269 break; 285 270 } 286 - exit: 287 - return retval; 271 + return 1; 288 272 } 289 273 290 274 static int wacom_intuos_inout(struct wacom_wac *wacom, void *wcombo) ··· 636 598 static void wacom_tpc_finger_in(struct wacom_wac *wacom, void *wcombo, char *data, int idx) 637 599 { 638 600 wacom_report_abs(wcombo, ABS_X, 639 - data[2 + idx * 2] | ((data[3 + idx * 2] & 0x7f) << 8)); 601 + (data[2 + idx * 2] & 0xff) | ((data[3 + idx * 2] & 0x7f) << 8)); 640 602 wacom_report_abs(wcombo, ABS_Y, 641 - data[6 + idx * 2] | ((data[7 + idx * 2] & 0x7f) << 8)); 603 + (data[6 + idx * 2] & 0xff) | ((data[7 + idx * 2] & 0x7f) << 8)); 642 604 wacom_report_abs(wcombo, ABS_MISC, wacom->id[0]); 643 605 wacom_report_key(wcombo, wacom->tool[idx], 1); 644 606 if (idx) ··· 782 744 783 745 touchInProx = 0; 784 746 785 - if (!wacom->id[0]) { /* first in prox */ 786 - /* Going into proximity select tool */ 787 - wacom->tool[0] = (data[1] & 0x0c) ? BTN_TOOL_RUBBER : BTN_TOOL_PEN; 788 - if (wacom->tool[0] == BTN_TOOL_PEN) 789 - wacom->id[0] = STYLUS_DEVICE_ID; 790 - else 791 - wacom->id[0] = ERASER_DEVICE_ID; 792 - } 793 - wacom_report_key(wcombo, BTN_STYLUS, data[1] & 0x02); 794 - wacom_report_key(wcombo, BTN_STYLUS2, data[1] & 0x10); 795 - wacom_report_abs(wcombo, ABS_X, wacom_le16_to_cpu(&data[2])); 796 - wacom_report_abs(wcombo, ABS_Y, wacom_le16_to_cpu(&data[4])); 797 - pressure = ((data[7] & 0x01) << 8) | data[6]; 798 - if (pressure < 0) 799 - pressure = features->pressure_max + pressure + 1; 800 - wacom_report_abs(wcombo, ABS_PRESSURE, pressure); 801 - wacom_report_key(wcombo, BTN_TOUCH, data[1] & 0x05); 802 - if (!prox) { /* out-prox */ 747 + if (prox) { /* in prox */ 748 + if (!wacom->id[0]) { 749 + /* Going into proximity select tool */ 750 + wacom->tool[0] = (data[1] & 0x0c) ? BTN_TOOL_RUBBER : BTN_TOOL_PEN; 751 + if (wacom->tool[0] == BTN_TOOL_PEN) 752 + wacom->id[0] = STYLUS_DEVICE_ID; 753 + else 754 + wacom->id[0] = ERASER_DEVICE_ID; 755 + } 756 + wacom_report_key(wcombo, BTN_STYLUS, data[1] & 0x02); 757 + wacom_report_key(wcombo, BTN_STYLUS2, data[1] & 0x10); 758 + wacom_report_abs(wcombo, ABS_X, wacom_le16_to_cpu(&data[2])); 759 + wacom_report_abs(wcombo, ABS_Y, wacom_le16_to_cpu(&data[4])); 760 + pressure = ((data[7] & 0x01) << 8) | data[6]; 761 + if (pressure < 0) 762 + pressure = features->pressure_max + pressure + 1; 763 + wacom_report_abs(wcombo, ABS_PRESSURE, pressure); 764 + wacom_report_key(wcombo, BTN_TOUCH, data[1] & 0x05); 765 + } else { 766 + wacom_report_abs(wcombo, ABS_X, 0); 767 + wacom_report_abs(wcombo, ABS_Y, 0); 768 + wacom_report_abs(wcombo, ABS_PRESSURE, 0); 769 + wacom_report_key(wcombo, BTN_STYLUS, 0); 770 + wacom_report_key(wcombo, BTN_STYLUS2, 0); 771 + wacom_report_key(wcombo, BTN_TOUCH, 0); 803 772 wacom->id[0] = 0; 804 773 /* pen is out so touch can be enabled now */ 805 774 touchInProx = 1;
-5
drivers/isdn/gigaset/bas-gigaset.c
··· 14 14 */ 15 15 16 16 #include "gigaset.h" 17 - 18 - #include <linux/errno.h> 19 - #include <linux/init.h> 20 - #include <linux/slab.h> 21 - #include <linux/timer.h> 22 17 #include <linux/usb.h> 23 18 #include <linux/module.h> 24 19 #include <linux/moduleparam.h>
-2
drivers/isdn/gigaset/capi.c
··· 12 12 */ 13 13 14 14 #include "gigaset.h" 15 - #include <linux/slab.h> 16 - #include <linux/ctype.h> 17 15 #include <linux/proc_fs.h> 18 16 #include <linux/seq_file.h> 19 17 #include <linux/isdn/capilli.h>
-2
drivers/isdn/gigaset/common.c
··· 14 14 */ 15 15 16 16 #include "gigaset.h" 17 - #include <linux/ctype.h> 18 17 #include <linux/module.h> 19 18 #include <linux/moduleparam.h> 20 - #include <linux/slab.h> 21 19 22 20 /* Version Information */ 23 21 #define DRIVER_AUTHOR "Hansjoerg Lipp <hjlipp@web.de>, Tilman Schmidt <tilman@imap.cc>, Stefan Eilers"
+2 -1
drivers/isdn/gigaset/gigaset.h
···
20 20 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
21 21 
22 22 #include <linux/kernel.h>
23 + #include <linux/sched.h>
23 24 #include <linux/compiler.h>
24 25 #include <linux/types.h>
26 + #include <linux/ctype.h>
25 27 #include <linux/slab.h>
26 28 #include <linux/spinlock.h>
27 - #include <linux/usb.h>
28 29 #include <linux/skbuff.h>
29 30 #include <linux/netdevice.h>
30 31 #include <linux/ppp_defs.h>
-1
drivers/isdn/gigaset/i4l.c
···
15 15 
16 16 #include "gigaset.h"
17 17 #include <linux/isdnif.h>
18 - #include <linux/slab.h>
19 18 
20 19 #define HW_HDR_LEN 2 /* Header size used to store ack info */
21 20 
-1
drivers/isdn/gigaset/interface.c
···
13 13 
14 14 #include "gigaset.h"
15 15 #include <linux/gigaset_dev.h>
16 - #include <linux/tty.h>
17 16 #include <linux/tty_flip.h>
18 17 
19 18 /*** our ioctls ***/
-1
drivers/isdn/gigaset/proc.c
···
14 14 */
15 15 
16 16 #include "gigaset.h"
17 - #include <linux/ctype.h>
18 17 
19 18 static ssize_t show_cidmode(struct device *dev,
20 19 struct device_attribute *attr, char *buf)
-3
drivers/isdn/gigaset/ser-gigaset.c
···
11 11 */
12 12 
13 13 #include "gigaset.h"
14 - 
15 14 #include <linux/module.h>
16 15 #include <linux/moduleparam.h>
17 16 #include <linux/platform_device.h>
18 - #include <linux/tty.h>
19 17 #include <linux/completion.h>
20 - #include <linux/slab.h>
21 18 
22 19 /* Version Information */
23 20 #define DRIVER_AUTHOR "Tilman Schmidt"
-4
drivers/isdn/gigaset/usb-gigaset.c
···
16 16 */
17 17 
18 18 #include "gigaset.h"
19 - 
20 - #include <linux/errno.h>
21 - #include <linux/init.h>
22 - #include <linux/slab.h>
23 19 #include <linux/usb.h>
24 20 #include <linux/module.h>
25 21 #include <linux/moduleparam.h>
+2 -2
drivers/lguest/lguest_device.c
···
178 178 
179 179 /* We set the status. */
180 180 to_lgdev(vdev)->desc->status = status;
181 - kvm_hypercall1(LHCALL_NOTIFY, (max_pfn << PAGE_SHIFT) + offset);
181 + hcall(LHCALL_NOTIFY, (max_pfn << PAGE_SHIFT) + offset, 0, 0, 0);
182 182 }
183 183 
184 184 static void lg_set_status(struct virtio_device *vdev, u8 status)
···
229 229 */
230 230 struct lguest_vq_info *lvq = vq->priv;
231 231 
232 - kvm_hypercall1(LHCALL_NOTIFY, lvq->config.pfn << PAGE_SHIFT);
232 + hcall(LHCALL_NOTIFY, lvq->config.pfn << PAGE_SHIFT, 0, 0, 0);
233 233 }
234 234 
235 235 /* An extern declaration inside a C file is bad form. Don't do it. */
+12
drivers/lguest/x86/core.c
···
288 288 insn = lgread(cpu, physaddr, u8);
289 289 
290 290 /*
291 + * Around 2.6.33, the kernel started using an emulation for the
292 + * cmpxchg8b instruction in early boot on many configurations. This
293 + * code isn't paravirtualized, and it tries to disable interrupts.
294 + * Ignore it, which will Mostly Work.
295 + */
296 + if (insn == 0xfa) {
297 + /* "cli", or Clear Interrupt Enable instruction. Skip it. */
298 + cpu->regs->eip++;
299 + return 1;
300 + }
301 + 
302 + /*
291 303 * 0x66 is an "operand prefix". It means it's using the upper 16 bits
292 304 * of the eax register.
293 305 */
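The new branch above handles exactly one opcode: 0xfa, the one-byte "cli" instruction. A toy model of that fixup, with invented names standing in for lguest's structures:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the lguest fixup above: if the guest's next instruction
 * byte is 0xfa ("cli"), step eip past it and report the trap handled;
 * any other opcode is left for the rest of the emulator. */
struct toy_regs { uint32_t eip; };

static int skip_cli(struct toy_regs *regs, const uint8_t *guest_mem)
{
    if (guest_mem[regs->eip] == 0xfa) {
        regs->eip++;   /* pretend interrupts were disabled */
        return 1;      /* handled */
    }
    return 0;          /* not handled */
}

/* small driver so the effect on eip is easy to check */
static uint32_t demo_eip_after_cli(void)
{
    struct toy_regs r = { 0 };
    static const uint8_t mem[] = { 0xfa, 0x90 }; /* cli; nop */
    skip_cli(&r, mem);
    return r.eip;
}
```

Skipping the instruction rather than faulting is why the comment says it will only "Mostly Work": the guest believes interrupts are off when they are not.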
+1 -1
drivers/net/8139too.c
···
1944 1944 netif_dbg(tp, rx_status, dev, "%s() status %04x, size %04x, cur %04x\n",
1945 1945 __func__, rx_status, rx_size, cur_rx);
1946 1946 #if RTL8139_DEBUG > 2
1947 - print_dump_hex(KERN_DEBUG, "Frame contents: ",
1947 + print_hex_dump(KERN_DEBUG, "Frame contents: ",
1948 1948 DUMP_PREFIX_OFFSET, 16, 1,
1949 1949 &rx_ring[ring_offset], 70, true);
1950 1950 #endif
+6
drivers/net/wan/hdlc_ppp.c
···
628 628 ppp_cp_event(dev, PID_LCP, STOP, 0, 0, 0, NULL);
629 629 }
630 630 
631 + static void ppp_close(struct net_device *dev)
632 + {
633 + ppp_tx_flush();
634 + }
635 + 
631 636 static struct hdlc_proto proto = {
632 637 .start = ppp_start,
633 638 .stop = ppp_stop,
639 + .close = ppp_close,
634 640 .type_trans = ppp_type_trans,
635 641 .ioctl = ppp_ioctl,
636 642 .netif_rx = ppp_rx,
+2 -1
drivers/net/wireless/iwlwifi/iwl-6000.c
···
262 262 EEPROM_REG_BAND_4_CHANNELS,
263 263 EEPROM_REG_BAND_5_CHANNELS,
264 264 EEPROM_REG_BAND_24_HT40_CHANNELS,
265 + EEPROM_6000_REG_BAND_24_HT40_CHANNELS,
265 266 EEPROM_REG_BAND_52_HT40_CHANNELS
266 267 },
267 268 .verify_signature = iwlcore_eeprom_verify_signature,
···
329 328 EEPROM_REG_BAND_3_CHANNELS,
330 329 EEPROM_REG_BAND_4_CHANNELS,
331 330 EEPROM_REG_BAND_5_CHANNELS,
332 - EEPROM_REG_BAND_24_HT40_CHANNELS,
331 + EEPROM_6000_REG_BAND_24_HT40_CHANNELS,
333 332 EEPROM_REG_BAND_52_HT40_CHANNELS
334 333 },
335 334 .verify_signature = iwlcore_eeprom_verify_signature,
+1
drivers/net/wireless/iwlwifi/iwl-agn.c
···
3305 3305 
3306 3306 cancel_delayed_work_sync(&priv->init_alive_start);
3307 3307 cancel_delayed_work(&priv->scan_check);
3308 + cancel_work_sync(&priv->start_internal_scan);
3308 3309 cancel_delayed_work(&priv->alive_start);
3309 3310 cancel_work_sync(&priv->beacon_update);
3310 3311 del_timer_sync(&priv->statistics_periodic);
+12
drivers/net/wireless/iwlwifi/iwl-calib.c
···
808 808 }
809 809 }
810 810 
811 + /*
812 + * The above algorithm sometimes fails when the ucode
813 + * reports 0 for all chains. It's not clear why that
814 + * happens to start with, but it is then causing trouble
815 + * because this can make us enable more chains than the
816 + * hardware really has.
817 + *
818 + * To be safe, simply mask out any chains that we know
819 + * are not on the device.
820 + */
821 + active_chains &= priv->hw_params.valid_rx_ant;
822 + 
811 823 num_tx_chains = 0;
812 824 for (i = 0; i < NUM_RX_CHAINS; i++) {
813 825 /* loops on all the bits of
-1
drivers/net/wireless/iwlwifi/iwl-core.c
···
2802 2802 */
2803 2803 IWL_DEBUG_INFO(priv, "perform radio reset.\n");
2804 2804 iwl_internal_short_hw_scan(priv);
2805 - return;
2806 2805 }
2807 2806 
2808 2807 
+1 -1
drivers/net/wireless/iwlwifi/iwl-core.h
···
502 502 int iwl_scan_cancel(struct iwl_priv *priv);
503 503 int iwl_scan_cancel_timeout(struct iwl_priv *priv, unsigned long ms);
504 504 int iwl_mac_hw_scan(struct ieee80211_hw *hw, struct cfg80211_scan_request *req);
505 - int iwl_internal_short_hw_scan(struct iwl_priv *priv);
505 + void iwl_internal_short_hw_scan(struct iwl_priv *priv);
506 506 int iwl_force_reset(struct iwl_priv *priv, int mode);
507 507 u16 iwl_fill_probe_req(struct iwl_priv *priv, struct ieee80211_mgmt *frame,
508 508 const u8 *ie, int ie_len, int left);
+1
drivers/net/wireless/iwlwifi/iwl-dev.h
···
1264 1264 struct work_struct tt_work;
1265 1265 struct work_struct ct_enter;
1266 1266 struct work_struct ct_exit;
1267 + struct work_struct start_internal_scan;
1267 1268 
1268 1269 struct tasklet_struct irq_tasklet;
1269 1270 
+4
drivers/net/wireless/iwlwifi/iwl-eeprom.h
···
203 203 #define EEPROM_REG_BAND_52_HT40_CHANNELS ((0x92)\
204 204 | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 22 bytes */
205 205 
206 + /* 6000 regulatory - indirect access */
207 + #define EEPROM_6000_REG_BAND_24_HT40_CHANNELS ((0x80)\
208 + | INDIRECT_ADDRESS | INDIRECT_REGULATORY) /* 14 bytes */
209 + 
206 210 /* 6000 and up regulatory tx power - indirect access */
207 211 /* max. elements per section */
208 212 #define EEPROM_MAX_TXPOWER_SECTION_ELEMENTS (8)
+20 -11
drivers/net/wireless/iwlwifi/iwl-scan.c
···
470 470 
471 471 static int iwl_scan_initiate(struct iwl_priv *priv)
472 472 {
473 + WARN_ON(!mutex_is_locked(&priv->mutex));
474 + 
473 475 IWL_DEBUG_INFO(priv, "Starting scan...\n");
474 476 set_bit(STATUS_SCANNING, &priv->status);
475 477 priv->is_internal_short_scan = false;
···
549 547 * internal short scan, this function should only been called while associated.
550 548 * It will reset and tune the radio to prevent possible RF related problem
551 549 */
552 - int iwl_internal_short_hw_scan(struct iwl_priv *priv)
550 + void iwl_internal_short_hw_scan(struct iwl_priv *priv)
553 551 {
554 - int ret = 0;
552 + queue_work(priv->workqueue, &priv->start_internal_scan);
553 + }
554 + 
555 + static void iwl_bg_start_internal_scan(struct work_struct *work)
556 + {
557 + struct iwl_priv *priv =
558 + container_of(work, struct iwl_priv, start_internal_scan);
559 + 
560 + mutex_lock(&priv->mutex);
555 561 
556 562 if (!iwl_is_ready_rf(priv)) {
557 - ret = -EIO;
558 563 IWL_DEBUG_SCAN(priv, "not ready or exit pending\n");
559 - goto out;
564 + goto unlock;
560 565 }
566 + 
561 567 if (test_bit(STATUS_SCANNING, &priv->status)) {
562 568 IWL_DEBUG_SCAN(priv, "Scan already in progress.\n");
563 - ret = -EAGAIN;
564 - goto out;
569 + goto unlock;
565 570 }
571 + 
566 572 if (test_bit(STATUS_SCAN_ABORTING, &priv->status)) {
567 573 IWL_DEBUG_SCAN(priv, "Scan request while abort pending\n");
568 - ret = -EAGAIN;
569 - goto out;
574 + goto unlock;
570 575 }
571 576 
572 577 priv->scan_bands = 0;
···
586 577 set_bit(STATUS_SCANNING, &priv->status);
587 578 priv->is_internal_short_scan = true;
588 579 queue_work(priv->workqueue, &priv->request_scan);
589 - 
590 - out:
591 - return ret;
580 + unlock:
581 + mutex_unlock(&priv->mutex);
592 582 }
593 583 
594 584 #define IWL_SCAN_CHECK_WATCHDOG (7 * HZ)
···
971 963 INIT_WORK(&priv->scan_completed, iwl_bg_scan_completed);
972 964 INIT_WORK(&priv->request_scan, iwl_bg_request_scan);
973 965 INIT_WORK(&priv->abort_scan, iwl_bg_abort_scan);
966 + INIT_WORK(&priv->start_internal_scan, iwl_bg_start_internal_scan);
974 967 INIT_DELAYED_WORK(&priv->scan_check, iwl_bg_scan_check);
975 968 }
976 969 EXPORT_SYMBOL(iwl_setup_scan_deferred_work);
+7 -2
drivers/pcmcia/cistpl.c
···
1484 1484 if (!s)
1485 1485 return -EINVAL;
1486 1486 
1487 + if (s->functions) {
1488 + WARN_ON(1);
1489 + return -EINVAL;
1490 + }
1491 + 
1487 1492 /* We do not want to validate the CIS cache... */
1488 1493 mutex_lock(&s->ops_mutex);
1489 1494 destroy_cis_cache(s);
···
1644 1639 count = 0;
1645 1640 else {
1646 1641 struct pcmcia_socket *s;
1647 - unsigned int chains;
1642 + unsigned int chains = 1;
1648 1643 
1649 1644 if (off + count > size)
1650 1645 count = size - off;
···
1653 1648 
1654 1649 if (!(s->state & SOCKET_PRESENT))
1655 1650 return -ENODEV;
1656 - if (pccard_validate_cis(s, &chains))
1651 + if (!s->functions && pccard_validate_cis(s, &chains))
1657 1652 return -EIO;
1658 1653 if (!chains)
1659 1654 return -ENODATA;
+3 -1
drivers/pcmcia/db1xxx_ss.c
···
166 166 
167 167 ret = request_irq(sock->insert_irq, db1200_pcmcia_cdirq,
168 168 IRQF_DISABLED, "pcmcia_insert", sock);
169 - if (ret)
169 + if (ret) {
170 + local_irq_restore(flags);
170 171 goto out1;
172 + }
171 173 
172 174 ret = request_irq(sock->eject_irq, db1200_pcmcia_cdirq,
173 175 IRQF_DISABLED, "pcmcia_eject", sock);
+14 -8
drivers/pcmcia/ds.c
···
687 687 new_funcs = mfc.nfn;
688 688 else
689 689 new_funcs = 1;
690 - if (old_funcs > new_funcs) {
690 + if (old_funcs != new_funcs) {
691 + /* we need to re-start */
691 692 pcmcia_card_remove(s, NULL);
692 693 pcmcia_card_add(s);
693 - } else if (new_funcs > old_funcs) {
694 - s->functions = new_funcs;
695 - pcmcia_device_add(s, 1);
696 694 }
697 695 }
698 696 
···
726 728 struct pcmcia_socket *s = dev->socket;
727 729 const struct firmware *fw;
728 730 int ret = -ENOMEM;
731 + cistpl_longlink_mfc_t mfc;
732 + int old_funcs, new_funcs = 1;
729 733 
730 734 if (!filename)
731 735 return -EINVAL;
···
750 750 goto release;
751 751 }
752 752 
753 + /* we need to re-start if the number of functions changed */
754 + old_funcs = s->functions;
755 + if (!pccard_read_tuple(s, BIND_FN_ALL, CISTPL_LONGLINK_MFC,
756 + &mfc))
757 + new_funcs = mfc.nfn;
758 + 
759 + if (old_funcs != new_funcs)
760 + ret = -EBUSY;
753 761 
754 762 /* update information */
755 763 pcmcia_device_query(dev);
···
866 858 if (did->match_flags & PCMCIA_DEV_ID_MATCH_FAKE_CIS) {
867 859 dev_dbg(&dev->dev, "device needs a fake CIS\n");
868 860 if (!dev->socket->fake_cis)
869 - pcmcia_load_firmware(dev, did->cisfile);
870 - 
871 - if (!dev->socket->fake_cis)
872 - return 0;
861 + if (pcmcia_load_firmware(dev, did->cisfile))
862 + return 0;
873 863 }
874 864 
875 865 if (did->match_flags & PCMCIA_DEV_ID_MATCH_ANONYMOUS) {
+5 -5
drivers/pcmcia/pcmcia_resource.c
···
755 755 else
756 756 printk(KERN_WARNING "pcmcia: Driver needs updating to support IRQ sharing.\n");
757 757 
758 - #ifdef CONFIG_PCMCIA_PROBE
759 - 
760 - if (s->irq.AssignedIRQ != 0) {
761 - /* If the interrupt is already assigned, it must be the same */
758 + /* If the interrupt is already assigned, it must be the same */
759 + if (s->irq.AssignedIRQ != 0)
762 760 irq = s->irq.AssignedIRQ;
763 - } else {
761 + 
762 + #ifdef CONFIG_PCMCIA_PROBE
763 + if (!irq) {
764 764 int try;
765 765 u32 mask = s->irq_mask;
766 766 void *data = p_dev; /* something unique to this device */
+12 -4
drivers/pcmcia/rsrc_nonstatic.c
···
214 214 return;
215 215 }
216 216 for (i = base, most = 0; i < base+num; i += 8) {
217 - res = claim_region(NULL, i, 8, IORESOURCE_IO, "PCMCIA ioprobe");
217 + res = claim_region(s, i, 8, IORESOURCE_IO, "PCMCIA ioprobe");
218 218 if (!res)
219 219 continue;
220 220 hole = inb(i);
···
231 231 
232 232 bad = any = 0;
233 233 for (i = base; i < base+num; i += 8) {
234 - res = claim_region(NULL, i, 8, IORESOURCE_IO, "PCMCIA ioprobe");
235 - if (!res)
234 + res = claim_region(s, i, 8, IORESOURCE_IO, "PCMCIA ioprobe");
235 + if (!res) {
236 + if (!any)
237 + printk(" excluding");
238 + if (!bad)
239 + bad = any = i;
236 240 continue;
241 + }
237 242 for (j = 0; j < 8; j++)
238 243 if (inb(i+j) != most)
239 244 break;
···
258 253 }
259 254 if (bad) {
260 255 if ((num > 16) && (bad == base) && (i == base+num)) {
256 + sub_interval(&s_data->io_db, bad, i-bad);
261 257 printk(" nothing: probe failed.\n");
262 258 return;
263 259 } else {
···
810 804 static int adjust_io(struct pcmcia_socket *s, unsigned int action, unsigned long start, unsigned long end)
811 805 {
812 806 struct socket_data *data = s->resource_data;
813 - unsigned long size = end - start + 1;
807 + unsigned long size;
814 808 int ret = 0;
815 809 
816 810 #if defined(CONFIG_X86)
···
819 813 if (start < 0x100)
820 814 start = 0x100;
821 815 #endif
816 + 
817 + size = end - start + 1;
822 818 
823 819 if (end < start)
824 820 return -EINVAL;
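The adjust_io() part of this change moves the `size` computation below the `#if defined(CONFIG_X86)` clamp: `end - start + 1` evaluated before `start` is raised to 0x100 describes the wrong range. The ordering bug in isolation (constants illustrative):

```c
#include <assert.h>

/* Order-of-operations bug from adjust_io: computing the range size
 * before clamping its start still counts the excluded low ports. */
static unsigned long size_before_clamp(unsigned long start, unsigned long end)
{
    unsigned long size = end - start + 1;  /* too early */
    if (start < 0x100)
        start = 0x100;
    return size;
}

static unsigned long size_after_clamp(unsigned long start, unsigned long end)
{
    if (start < 0x100)
        start = 0x100;
    return end - start + 1;                /* matches the clamped start */
}
```

For a range starting below 0x100 the two disagree by exactly the clamped-off span; for ranges that start at or above 0x100 they are identical, which is why the bug was easy to miss.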
+9
drivers/serial/serial_cs.c
···
105 105 * manfid 0x0160, 0x0104
106 106 * This card appears to have a 14.7456MHz clock.
107 107 */
108 + /* Generic Modem: MD55x (GPRS/EDGE) have
109 + * Elan VPU16551 UART with 14.7456MHz oscillator
110 + * manfid 0x015D, 0x4C45
111 + */
108 112 static void quirk_setup_brainboxes_0104(struct pcmcia_device *link, struct uart_port *port)
109 113 {
110 114 port->uartclk = 14745600;
···
197 193 {
198 194 .manfid = 0x0160,
199 195 .prodid = 0x0104,
196 + .multi = -1,
197 + .setup = quirk_setup_brainboxes_0104,
198 + }, {
199 + .manfid = 0x015D,
200 + .prodid = 0x4C45,
200 201 .multi = -1,
201 202 .setup = quirk_setup_brainboxes_0104,
202 203 }, {
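The new entry above reuses quirk_setup_brainboxes_0104 for a second (manfid, prodid) pair, since the Elan-based MD55x modems need the same 14.7456 MHz clock. The lookup is table-driven; a simplified stand-in (struct layout and default clock here are illustrative, only the IDs come from the table above):

```c
#include <assert.h>
#include <stddef.h>

struct quirk_entry { unsigned manfid, prodid, uartclk; };

/* IDs as in the serial_cs quirk table above */
static const struct quirk_entry quirk_table[] = {
    { 0x0160, 0x0104, 14745600 },
    { 0x015D, 0x4C45, 14745600 },
};

static unsigned quirk_clock(unsigned manfid, unsigned prodid)
{
    size_t i;
    for (i = 0; i < sizeof(quirk_table) / sizeof(quirk_table[0]); i++)
        if (quirk_table[i].manfid == manfid &&
            quirk_table[i].prodid == prodid)
            return quirk_table[i].uartclk;
    return 1843200; /* ordinary 16550-style clock, as a default */
}
```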
-29
drivers/ssb/driver_pcicore.c
···
246 246 .pci_ops = &ssb_pcicore_pciops,
247 247 .io_resource = &ssb_pcicore_io_resource,
248 248 .mem_resource = &ssb_pcicore_mem_resource,
249 - .mem_offset = 0x24000000,
250 249 };
251 - 
252 - static u32 ssb_pcicore_pcibus_iobase = 0x100;
253 - static u32 ssb_pcicore_pcibus_membase = SSB_PCI_DMA;
254 250 
255 251 /* This function is called when doing a pci_enable_device().
256 252 * We must first check if the device is a device on the PCI-core bridge. */
257 253 int ssb_pcicore_plat_dev_init(struct pci_dev *d)
258 254 {
259 - struct resource *res;
260 - int pos, size;
261 - u32 *base;
262 - 
263 255 if (d->bus->ops != &ssb_pcicore_pciops) {
264 256 /* This is not a device on the PCI-core bridge. */
265 257 return -ENODEV;
···
260 268 ssb_printk(KERN_INFO "PCI: Fixing up device %s\n",
261 269 pci_name(d));
262 270 
263 - /* Fix up resource bases */
264 - for (pos = 0; pos < 6; pos++) {
265 - res = &d->resource[pos];
266 - if (res->flags & IORESOURCE_IO)
267 - base = &ssb_pcicore_pcibus_iobase;
268 - else
269 - base = &ssb_pcicore_pcibus_membase;
270 - res->flags |= IORESOURCE_PCI_FIXED;
271 - if (res->end) {
272 - size = res->end - res->start + 1;
273 - if (*base & (size - 1))
274 - *base = (*base + size) & ~(size - 1);
275 - res->start = *base;
276 - res->end = res->start + size - 1;
277 - *base += size;
278 - pci_write_config_dword(d, PCI_BASE_ADDRESS_0 + (pos << 2), res->start);
279 - }
280 - /* Fix up PCI bridge BAR0 only */
281 - if (d->bus->number == 0 && PCI_SLOT(d->devfn) == 0)
282 - break;
283 - }
284 271 /* Fix up interrupt lines */
285 272 d->irq = ssb_mips_irq(extpci_core->dev) + 2;
286 273 pci_write_config_byte(d, PCI_INTERRUPT_LINE, d->irq);
+4 -4
drivers/watchdog/Kconfig
···
194 194 
195 195 config OMAP_WATCHDOG
196 196 tristate "OMAP Watchdog"
197 - depends on ARCH_OMAP16XX || ARCH_OMAP2 || ARCH_OMAP3
197 + depends on ARCH_OMAP16XX || ARCH_OMAP2PLUS
198 198 help
199 - Support for TI OMAP1610/OMAP1710/OMAP2420/OMAP3430 watchdog. Say 'Y'
200 - here to enable the OMAP1610/OMAP1710/OMAP2420/OMAP3430 watchdog timer.
199 + Support for TI OMAP1610/OMAP1710/OMAP2420/OMAP3430/OMAP4430 watchdog. Say 'Y'
200 + here to enable the OMAP1610/OMAP1710/OMAP2420/OMAP3430/OMAP4430 watchdog timer.
201 201 
202 202 config PNX4008_WATCHDOG
203 203 tristate "PNX4008 Watchdog"
···
302 302 
303 303 config MAX63XX_WATCHDOG
304 304 tristate "Max63xx watchdog"
305 - depends on ARM
305 + depends on ARM && HAS_IOMEM
306 306 help
307 307 Support for memory mapped max63{69,70,71,72,73,74} watchdog timer.
308 308 
+1 -1
drivers/watchdog/booke_wdt.c
···
44 44 
45 45 #ifdef CONFIG_FSL_BOOKE
46 46 #define WDTP(x) ((((x)&0x3)<<30)|(((x)&0x3c)<<15))
47 - #define WDTP_MASK (WDTP(0))
47 + #define WDTP_MASK (WDTP(0x3f))
48 48 #else
49 49 #define WDTP(x) (TCR_WP(x))
50 50 #define WDTP_MASK (TCR_WP_MASK)
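The tiny-looking change above is substantive: in the FSL Book-E encoding the watchdog period is split across two bit fields, so `WDTP(0)` is literally 0 and masked nothing, while `WDTP(0x3f)` covers every period bit. Checking the arithmetic in userspace (unsigned literals added so the 30-bit shift is well defined outside the kernel):

```c
#include <assert.h>
#include <stdint.h>

/* FSL Book-E period encoding from the hunk above, with unsigned
 * constants to avoid shifting into the sign bit of a plain int */
#define WDTP(x) ((((x) & 0x3u) << 30) | (((x) & 0x3cu) << 15))
#define WDTP_MASK (WDTP(0x3fu))

/* clear the old period, then set a new one */
static uint32_t tcr_set_period(uint32_t tcr, uint32_t period)
{
    tcr &= ~WDTP_MASK;
    tcr |= WDTP(period);
    return tcr;
}
```

With the old `WDTP(0)` mask, `tcr &= ~WDTP_MASK` was a no-op, so stale period bits from a previous setting could never be cleared.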
+6 -1
drivers/watchdog/max63xx_wdt.c
···
154 154 
155 155 static void max63xx_wdt_disable(void)
156 156 {
157 + u8 val;
158 + 
157 159 spin_lock(&io_lock);
158 160 
159 - __raw_writeb(3, wdt_base);
161 + val = __raw_readb(wdt_base);
162 + val &= ~MAX6369_WDSET;
163 + val |= 3;
164 + __raw_writeb(val, wdt_base);
160 165 
161 166 spin_unlock(&io_lock);
162 167 
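The disable path above becomes a read-modify-write: instead of storing a bare 3 (which would also zero every other bit in the register), it reads the current value, clears only the set-field, then ORs the disable code in. A sketch with a plain variable standing in for the MMIO register (`MAX6369_WDSET` is assumed here to be the low 3-bit field mask):

```c
#include <assert.h>
#include <stdint.h>

#define MAX6369_WDSET 0x7u   /* assumed: low 3 bits select the mode */

/* read-modify-write, as in max63xx_wdt_disable above */
static uint8_t disable_value(uint8_t current)
{
    uint8_t val = current;
    val &= (uint8_t)~MAX6369_WDSET; /* keep unrelated bits intact */
    val |= 3;                       /* disable code */
    return val;
}
```

This is the standard pattern for any device register that packs several independent fields into one byte.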
+15 -5
fs/btrfs/extent-tree.c
···
3235 3235 u64 bytes)
3236 3236 {
3237 3237 struct btrfs_space_info *data_sinfo;
3238 - int ret = 0, committed = 0;
3238 + u64 used;
3239 + int ret = 0, committed = 0, flushed = 0;
3239 3240 
3240 3241 /* make sure bytes are sectorsize aligned */
3241 3242 bytes = (bytes + root->sectorsize - 1) & ~((u64)root->sectorsize - 1);
···
3248 3247 again:
3249 3248 /* make sure we have enough space to handle the data first */
3250 3249 spin_lock(&data_sinfo->lock);
3251 - if (data_sinfo->total_bytes - data_sinfo->bytes_used -
3252 - data_sinfo->bytes_delalloc - data_sinfo->bytes_reserved -
3253 - data_sinfo->bytes_pinned - data_sinfo->bytes_readonly -
3254 - data_sinfo->bytes_may_use - data_sinfo->bytes_super < bytes) {
3250 + used = data_sinfo->bytes_used + data_sinfo->bytes_delalloc +
3251 + data_sinfo->bytes_reserved + data_sinfo->bytes_pinned +
3252 + data_sinfo->bytes_readonly + data_sinfo->bytes_may_use +
3253 + data_sinfo->bytes_super;
3254 + 
3255 + if (used + bytes > data_sinfo->total_bytes) {
3255 3256 struct btrfs_trans_handle *trans;
3257 + 
3258 + if (!flushed) {
3259 + spin_unlock(&data_sinfo->lock);
3260 + flush_delalloc(root, data_sinfo);
3261 + flushed = 1;
3262 + goto again;
3263 + }
3256 3264 
3257 3265 /*
3258 3266 * if we don't have enough free bytes in this space then we need
+6
fs/btrfs/volumes.c
···
2250 2250 if (!looped)
2251 2251 calc_size = max_t(u64, min_stripe_size, calc_size);
2252 2252 
2253 + /*
2254 + * we're about to do_div by the stripe_len so lets make sure
2255 + * we end up with something bigger than a stripe
2256 + */
2257 + calc_size = max_t(u64, calc_size, stripe_len * 4);
2258 + 
2253 2259 do_div(calc_size, stripe_len);
2254 2260 calc_size *= stripe_len;
2255 2261 
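The do_div()/multiply pair that follows the new max_t() rounds calc_size down to a whole number of stripes; the guard exists because a calc_size smaller than stripe_len would round down to zero. The arithmetic, with plain 64-bit division standing in for do_div():

```c
#include <assert.h>
#include <stdint.h>

/* round down to a stripe multiple, guarding against a zero result as
 * the btrfs hunk above does ('/' plays the role of do_div()) */
static uint64_t stripe_align(uint64_t calc_size, uint64_t stripe_len)
{
    if (calc_size < stripe_len * 4)
        calc_size = stripe_len * 4;   /* the added max_t() guard */
    calc_size /= stripe_len;
    calc_size *= stripe_len;
    return calc_size;
}
```

Without the guard, e.g. calc_size = 100 with a 64 KiB stripe_len would divide to 0 and leave a zero-sized chunk.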
+30 -32
fs/ceph/addr.c
···
337 337 /*
338 338 * Get ref for the oldest snapc for an inode with dirty data... that is, the
339 339 * only snap context we are allowed to write back.
340 - *
341 - * Caller holds i_lock.
342 340 */
343 - static struct ceph_snap_context *__get_oldest_context(struct inode *inode,
344 - u64 *snap_size)
341 + static struct ceph_snap_context *get_oldest_context(struct inode *inode,
342 + u64 *snap_size)
345 343 {
346 344 struct ceph_inode_info *ci = ceph_inode(inode);
347 345 struct ceph_snap_context *snapc = NULL;
348 346 struct ceph_cap_snap *capsnap = NULL;
349 347 
348 + spin_lock(&inode->i_lock);
350 349 list_for_each_entry(capsnap, &ci->i_cap_snaps, ci_item) {
351 350 dout(" cap_snap %p snapc %p has %d dirty pages\n", capsnap,
352 351 capsnap->context, capsnap->dirty_pages);
···
356 357 break;
357 358 }
358 359 }
359 - if (!snapc && ci->i_snap_realm) {
360 - snapc = ceph_get_snap_context(ci->i_snap_realm->cached_context);
360 + if (!snapc && ci->i_head_snapc) {
361 + snapc = ceph_get_snap_context(ci->i_head_snapc);
361 362 dout(" head snapc %p has %d dirty pages\n",
362 363 snapc, ci->i_wrbuffer_ref_head);
363 364 }
364 - return snapc;
365 - }
366 - 
367 - static struct ceph_snap_context *get_oldest_context(struct inode *inode,
368 - u64 *snap_size)
369 - {
370 - struct ceph_snap_context *snapc = NULL;
371 - 
372 - spin_lock(&inode->i_lock);
373 - snapc = __get_oldest_context(inode, snap_size);
374 365 spin_unlock(&inode->i_lock);
375 366 return snapc;
376 367 }
···
381 392 int len = PAGE_CACHE_SIZE;
382 393 loff_t i_size;
383 394 int err = 0;
384 - struct ceph_snap_context *snapc;
395 + struct ceph_snap_context *snapc, *oldest;
385 396 u64 snap_size = 0;
386 397 long writeback_stat;
387 398 
···
402 413 dout("writepage %p page %p not dirty?\n", inode, page);
403 414 goto out;
404 415 }
405 - if (snapc != get_oldest_context(inode, &snap_size)) {
416 + oldest = get_oldest_context(inode, &snap_size);
417 + if (snapc->seq > oldest->seq) {
406 418 dout("writepage %p page %p snapc %p not writeable - noop\n",
407 419 inode, page, (void *)page->private);
408 420 /* we should only noop if called by kswapd */
409 421 WARN_ON((current->flags & PF_MEMALLOC) == 0);
422 + ceph_put_snap_context(oldest);
410 423 goto out;
411 424 }
425 + ceph_put_snap_context(oldest);
412 426 
413 427 /* is this a partial page at end of file? */
414 428 if (snap_size)
···
450 458 ClearPagePrivate(page);
451 459 end_page_writeback(page);
452 460 ceph_put_wrbuffer_cap_refs(ci, 1, snapc);
453 - ceph_put_snap_context(snapc);
461 + ceph_put_snap_context(snapc); /* page's reference */
454 462 out:
455 463 return err;
456 464 }
···
550 558 dout("inode %p skipping page %p\n", inode, page);
551 559 wbc->pages_skipped++;
552 560 }
561 + ceph_put_snap_context((void *)page->private);
553 562 page->private = 0;
554 563 ClearPagePrivate(page);
555 - ceph_put_snap_context(snapc);
556 564 dout("unlocking %d %p\n", i, page);
557 565 end_page_writeback(page);
558 566 
···
610 618 int range_whole = 0;
611 619 int should_loop = 1;
612 620 pgoff_t max_pages = 0, max_pages_ever = 0;
613 - struct ceph_snap_context *snapc = NULL, *last_snapc = NULL;
621 + struct ceph_snap_context *snapc = NULL, *last_snapc = NULL, *pgsnapc;
614 622 struct pagevec pvec;
615 623 int done = 0;
616 624 int rc = 0;
···
762 770 }
763 771 
764 772 /* only if matching snap context */
765 - if (snapc != (void *)page->private) {
766 - dout("page snapc %p != oldest %p\n",
767 - (void *)page->private, snapc);
773 + pgsnapc = (void *)page->private;
774 + if (pgsnapc->seq > snapc->seq) {
775 + dout("page snapc %p %lld > oldest %p %lld\n",
776 + pgsnapc, pgsnapc->seq, snapc, snapc->seq);
768 777 unlock_page(page);
769 778 if (!locked_pages)
770 779 continue; /* keep looking for snap */
···
907 914 struct ceph_snap_context *snapc)
908 915 {
909 916 struct ceph_snap_context *oldest = get_oldest_context(inode, NULL);
910 - return !oldest || snapc->seq <= oldest->seq;
917 + int ret = !oldest || snapc->seq <= oldest->seq;
918 + 
919 + ceph_put_snap_context(oldest);
920 + return ret;
911 921 }
912 922 
913 923 /*
···
932 936 int pos_in_page = pos & ~PAGE_CACHE_MASK;
933 937 int end_in_page = pos_in_page + len;
934 938 loff_t i_size;
935 - struct ceph_snap_context *snapc;
936 939 int r;
940 + struct ceph_snap_context *snapc, *oldest;
937 941 
938 942 retry_locked:
939 943 /* writepages currently holds page lock, but if we change that later, */
···
943 947 BUG_ON(!ci->i_snap_realm);
944 948 down_read(&mdsc->snap_rwsem);
945 949 BUG_ON(!ci->i_snap_realm->cached_context);
946 - if (page->private &&
947 - (void *)page->private != ci->i_snap_realm->cached_context) {
950 + snapc = (void *)page->private;
951 + if (snapc && snapc != ci->i_head_snapc) {
948 952 /*
949 953 * this page is already dirty in another (older) snap
950 954 * context! is it writeable now?
951 955 */
952 - snapc = get_oldest_context(inode, NULL);
956 + oldest = get_oldest_context(inode, NULL);
953 957 up_read(&mdsc->snap_rwsem);
954 958 
955 - if (snapc != (void *)page->private) {
959 + if (snapc->seq > oldest->seq) {
960 + ceph_put_snap_context(oldest);
956 961 dout(" page %p snapc %p not current or oldest\n",
957 962 page, snapc);
958 963 /*
959 964 * queue for writeback, and wait for snapc to
960 965 * be writeable or written
961 966 */
962 - snapc = ceph_get_snap_context((void *)page->private);
967 + snapc = ceph_get_snap_context(snapc);
963 968 unlock_page(page);
964 969 ceph_queue_writeback(inode);
965 970 r = wait_event_interruptible(ci->i_cap_wq,
···
970 973 return r;
971 974 return -EAGAIN;
972 975 }
976 + ceph_put_snap_context(oldest);
973 977 
974 978 /* yay, writeable, do it now (without dropping page lock) */
975 979 dout(" page %p snapc %p not current, but oldest\n",
+32 -10
fs/ceph/caps.c
···
1205 1205 if (capsnap->dirty_pages || capsnap->writing)
1206 1206 continue;
1207 1207 
1208 + /*
1209 + * if cap writeback already occurred, we should have dropped
1210 + * the capsnap in ceph_put_wrbuffer_cap_refs.
1211 + */
1212 + BUG_ON(capsnap->dirty == 0);
1213 + 
1208 1214 /* pick mds, take s_mutex */
1209 1215 mds = __ceph_get_cap_mds(ci, &mseq);
1210 1216 if (session && session->s_mds != mds) {
···
2124 2118 }
2125 2119 spin_unlock(&inode->i_lock);
2126 2120 
2127 - dout("put_cap_refs %p had %s %s\n", inode, ceph_cap_string(had),
2128 - last ? "last" : "");
2121 + dout("put_cap_refs %p had %s%s%s\n", inode, ceph_cap_string(had),
2122 + last ? " last" : "", put ? " put" : "");
2129 2123 
2130 2124 if (last && !flushsnaps)
2131 2125 ceph_check_caps(ci, 0, NULL);
···
2149 2143 {
2150 2144 struct inode *inode = &ci->vfs_inode;
2151 2145 int last = 0;
2152 - int last_snap = 0;
2146 + int complete_capsnap = 0;
2147 + int drop_capsnap = 0;
2153 2148 int found = 0;
2154 2149 struct ceph_cap_snap *capsnap = NULL;
2155 2150 
···
2173 2166 list_for_each_entry(capsnap, &ci->i_cap_snaps, ci_item) {
2174 2167 if (capsnap->context == snapc) {
2175 2168 found = 1;
2176 - capsnap->dirty_pages -= nr;
2177 - last_snap = !capsnap->dirty_pages;
2178 2169 break;
2179 2170 }
2180 2171 }
2181 2172 BUG_ON(!found);
2173 + capsnap->dirty_pages -= nr;
2174 + if (capsnap->dirty_pages == 0) {
2175 + complete_capsnap = 1;
2176 + if (capsnap->dirty == 0)
2177 + /* cap writeback completed before we created
2178 + * the cap_snap; no FLUSHSNAP is needed */
2179 + drop_capsnap = 1;
2180 + }
2182 2181 dout("put_wrbuffer_cap_refs on %p cap_snap %p "
2183 - " snap %lld %d/%d -> %d/%d %s%s\n",
2182 + " snap %lld %d/%d -> %d/%d %s%s%s\n",
2184 2183 inode, capsnap, capsnap->context->seq,
2185 2184 ci->i_wrbuffer_ref+nr, capsnap->dirty_pages + nr,
2186 2185 ci->i_wrbuffer_ref, capsnap->dirty_pages,
2187 2186 last ? " (wrbuffer last)" : "",
2188 - last_snap ? " (capsnap last)" : "");
2187 + complete_capsnap ? " (complete capsnap)" : "",
2188 + drop_capsnap ? " (drop capsnap)" : "");
2189 + if (drop_capsnap) {
2190 + ceph_put_snap_context(capsnap->context);
2191 + list_del(&capsnap->ci_item);
2192 + list_del(&capsnap->flushing_item);
2193 + ceph_put_cap_snap(capsnap);
2194 + }
2189 2195 }
2190 2196 
2191 2197 spin_unlock(&inode->i_lock);
···
2206 2186 if (last) {
2207 2187 ceph_check_caps(ci, CHECK_CAPS_AUTHONLY, NULL);
2208 2188 iput(inode);
2209 - } else if (last_snap) {
2189 + } else if (complete_capsnap) {
2210 2190 ceph_flush_snaps(ci);
2211 2191 wake_up(&ci->i_cap_wq);
2212 2192 }
2193 + if (drop_capsnap)
2194 + iput(inode);
2213 2195 }
2214 2196 
2215 2197 /*
···
2487 2465 break;
2488 2466 }
2489 2467 WARN_ON(capsnap->dirty_pages || capsnap->writing);
2490 - dout(" removing cap_snap %p follows %lld\n",
2491 - capsnap, follows);
2468 + dout(" removing %p cap_snap %p follows %lld\n",
2469 + inode, capsnap, follows);
2492 2470 ceph_put_snap_context(capsnap->context);
2493 2471 list_del(&capsnap->ci_item);
2494 2472 list_del(&capsnap->flushing_item);
+4 -3
fs/ceph/dir.c
···
171 171 spin_lock(&inode->i_lock);
172 172 spin_lock(&dcache_lock);
173 173 
174 + last = dentry;
175 + 
174 176 if (err < 0)
175 177 goto out_unlock;
176 - 
177 - last = dentry;
178 178 
179 179 p = p->prev;
180 180 filp->f_pos++;
···
312 312 req->r_readdir_offset = fi->next_offset;
313 313 req->r_args.readdir.frag = cpu_to_le32(frag);
314 314 req->r_args.readdir.max_entries = cpu_to_le32(max_entries);
315 - req->r_num_caps = max_entries;
315 + req->r_num_caps = max_entries + 1;
316 316 err = ceph_mdsc_do_request(mdsc, NULL, req);
317 317 if (err < 0) {
318 318 ceph_mdsc_put_request(req);
···
489 489 struct inode *inode = ceph_get_snapdir(parent);
490 490 dout("ENOENT on snapdir %p '%.*s', linking to snapdir %p\n",
491 491 dentry, dentry->d_name.len, dentry->d_name.name, inode);
492 + BUG_ON(!d_unhashed(dentry));
492 493 d_add(dentry, inode);
493 494 err = 0;
494 495 }
+9 -1
fs/ceph/inode.c
···
886 886 struct inode *in = NULL;
887 887 struct ceph_mds_reply_inode *ininfo;
888 888 struct ceph_vino vino;
889 + struct ceph_client *client = ceph_sb_to_client(sb);
889 890 int i = 0;
890 891 int err = 0;
891 892 
···
950 949 return err;
951 950 }
952 951 
953 - if (rinfo->head->is_dentry && !req->r_aborted) {
952 + /*
953 + * ignore null lease/binding on snapdir ENOENT, or else we
954 + * will have trouble splicing in the virtual snapdir later
955 + */
956 + if (rinfo->head->is_dentry && !req->r_aborted &&
957 + (rinfo->head->is_target || strncmp(req->r_dentry->d_name.name,
958 + client->mount_args->snapdir_name,
959 + req->r_dentry->d_name.len))) {
954 960 /*
955 961 * lookup link rename : null -> possibly existing inode
956 962 * mknod symlink mkdir : null -> new inode
+9
fs/ceph/messenger.c
···
30 30 static char tag_ack = CEPH_MSGR_TAG_ACK;
31 31 static char tag_keepalive = CEPH_MSGR_TAG_KEEPALIVE;
32 32 
33 + #ifdef CONFIG_LOCKDEP
34 + static struct lock_class_key socket_class;
35 + #endif
36 + 
33 37 
34 38 static void queue_con(struct ceph_connection *con);
35 39 static void con_work(struct work_struct *);
···
232 228 con->sock = sock;
233 229 sock->sk->sk_allocation = GFP_NOFS;
234 230 
231 + #ifdef CONFIG_LOCKDEP
232 + lockdep_set_class(&sock->sk->sk_lock, &socket_class);
233 + #endif
234 + 
235 235 set_sock_callbacks(sock, con);
236 236 
237 237 dout("connect %s\n", pr_addr(&con->peer_addr.in_addr));
···
341 333 con->out_msg = NULL;
342 334 }
343 335 con->in_seq = 0;
336 + con->in_seq_acked = 0;
344 337 
345 338 /*
+109 -71
fs/ceph/osdmap.c
··· 314 314 return ERR_PTR(err); 315 315 } 316 316 317 - 318 - /* 319 - * osd map 320 - */ 321 - void ceph_osdmap_destroy(struct ceph_osdmap *map) 322 - { 323 - dout("osdmap_destroy %p\n", map); 324 - if (map->crush) 325 - crush_destroy(map->crush); 326 - while (!RB_EMPTY_ROOT(&map->pg_temp)) { 327 - struct ceph_pg_mapping *pg = 328 - rb_entry(rb_first(&map->pg_temp), 329 - struct ceph_pg_mapping, node); 330 - rb_erase(&pg->node, &map->pg_temp); 331 - kfree(pg); 332 - } 333 - while (!RB_EMPTY_ROOT(&map->pg_pools)) { 334 - struct ceph_pg_pool_info *pi = 335 - rb_entry(rb_first(&map->pg_pools), 336 - struct ceph_pg_pool_info, node); 337 - rb_erase(&pi->node, &map->pg_pools); 338 - kfree(pi); 339 - } 340 - kfree(map->osd_state); 341 - kfree(map->osd_weight); 342 - kfree(map->osd_addr); 343 - kfree(map); 344 - } 345 - 346 - /* 347 - * adjust max osd value. reallocate arrays. 348 - */ 349 - static int osdmap_set_max_osd(struct ceph_osdmap *map, int max) 350 - { 351 - u8 *state; 352 - struct ceph_entity_addr *addr; 353 - u32 *weight; 354 - 355 - state = kcalloc(max, sizeof(*state), GFP_NOFS); 356 - addr = kcalloc(max, sizeof(*addr), GFP_NOFS); 357 - weight = kcalloc(max, sizeof(*weight), GFP_NOFS); 358 - if (state == NULL || addr == NULL || weight == NULL) { 359 - kfree(state); 360 - kfree(addr); 361 - kfree(weight); 362 - return -ENOMEM; 363 - } 364 - 365 - /* copy old? */
366 - if (map->osd_state) { 367 - memcpy(state, map->osd_state, map->max_osd*sizeof(*state)); 368 - memcpy(addr, map->osd_addr, map->max_osd*sizeof(*addr)); 369 - memcpy(weight, map->osd_weight, map->max_osd*sizeof(*weight)); 370 - kfree(map->osd_state); 371 - kfree(map->osd_addr); 372 - kfree(map->osd_weight); 373 - } 374 - 375 - map->osd_state = state; 376 - map->osd_weight = weight; 377 - map->osd_addr = addr; 378 - map->max_osd = max; 379 - return 0; 380 - } 381 - 382 317 /* 383 318 * rbtree of pg_mapping for handling pg_temp (explicit mapping of pgid 384 319 * to a set of osds) ··· 417 482 return NULL; 418 483 } 419 484 485 + static void __remove_pg_pool(struct rb_root *root, struct ceph_pg_pool_info *pi) 486 + { 487 + rb_erase(&pi->node, root); 488 + kfree(pi->name); 489 + kfree(pi); 490 + } 491 + 420 492 void __decode_pool(void **p, struct ceph_pg_pool_info *pi) 421 493 { 422 494 ceph_decode_copy(p, &pi->v, sizeof(pi->v)); 423 495 calc_pg_masks(pi); 424 496 *p += le32_to_cpu(pi->v.num_snaps) * sizeof(u64); 425 497 *p += le32_to_cpu(pi->v.num_removed_snap_intervals) * sizeof(u64) * 2; 498 + } 499 + 500 + static int __decode_pool_names(void **p, void *end, struct ceph_osdmap *map) 501 + { 502 + struct ceph_pg_pool_info *pi; 503 + u32 num, len, pool; 504 + 505 + ceph_decode_32_safe(p, end, num, bad); 506 + dout(" %d pool names\n", num); 507 + while (num--) { 508 + ceph_decode_32_safe(p, end, pool, bad); 509 + ceph_decode_32_safe(p, end, len, bad); 510 + dout(" pool %d len %d\n", pool, len); 511 + pi = __lookup_pg_pool(&map->pg_pools, pool); 512 + if (pi) { 513 + kfree(pi->name); 514 + pi->name = kmalloc(len + 1, GFP_NOFS); 515 + if (pi->name) { 516 + memcpy(pi->name, *p, len); 517 + pi->name[len] = '\0'; 518 + dout(" name is %s\n", pi->name); 519 + } 520 + } 521 + *p += len; 522 + } 523 + return 0; 524 + 525 + bad: 526 + return -EINVAL; 527 + } 528 + 529 + /* 530 + * osd map 531 + */ 532 + void ceph_osdmap_destroy(struct ceph_osdmap *map) 533 + {
534 + dout("osdmap_destroy %p\n", map); 535 + if (map->crush) 536 + crush_destroy(map->crush); 537 + while (!RB_EMPTY_ROOT(&map->pg_temp)) { 538 + struct ceph_pg_mapping *pg = 539 + rb_entry(rb_first(&map->pg_temp), 540 + struct ceph_pg_mapping, node); 541 + rb_erase(&pg->node, &map->pg_temp); 542 + kfree(pg); 543 + } 544 + while (!RB_EMPTY_ROOT(&map->pg_pools)) { 545 + struct ceph_pg_pool_info *pi = 546 + rb_entry(rb_first(&map->pg_pools), 547 + struct ceph_pg_pool_info, node); 548 + __remove_pg_pool(&map->pg_pools, pi); 549 + } 550 + kfree(map->osd_state); 551 + kfree(map->osd_weight); 552 + kfree(map->osd_addr); 553 + kfree(map); 554 + } 555 + 556 + /* 557 + * adjust max osd value. reallocate arrays. 558 + */ 559 + static int osdmap_set_max_osd(struct ceph_osdmap *map, int max) 560 + { 561 + u8 *state; 562 + struct ceph_entity_addr *addr; 563 + u32 *weight; 564 + 565 + state = kcalloc(max, sizeof(*state), GFP_NOFS); 566 + addr = kcalloc(max, sizeof(*addr), GFP_NOFS); 567 + weight = kcalloc(max, sizeof(*weight), GFP_NOFS); 568 + if (state == NULL || addr == NULL || weight == NULL) { 569 + kfree(state); 570 + kfree(addr); 571 + kfree(weight); 572 + return -ENOMEM; 573 + } 574 + 575 + /* copy old? */
576 + if (map->osd_state) { 577 + memcpy(state, map->osd_state, map->max_osd*sizeof(*state)); 578 + memcpy(addr, map->osd_addr, map->max_osd*sizeof(*addr)); 579 + memcpy(weight, map->osd_weight, map->max_osd*sizeof(*weight)); 580 + kfree(map->osd_state); 581 + kfree(map->osd_addr); 582 + kfree(map->osd_weight); 583 + } 584 + 585 + map->osd_state = state; 586 + map->osd_weight = weight; 587 + map->osd_addr = addr; 588 + map->max_osd = max; 589 + return 0; 426 590 } 427 591 428 592 /* ··· 560 526 ceph_decode_32_safe(p, end, max, bad); 561 527 while (max--) { 562 528 ceph_decode_need(p, end, 4 + 1 + sizeof(pi->v), bad); 563 - pi = kmalloc(sizeof(*pi), GFP_NOFS); 529 + pi = kzalloc(sizeof(*pi), GFP_NOFS); 564 530 if (!pi) 565 531 goto bad; 566 532 pi->id = ceph_decode_32(p); ··· 573 539 __decode_pool(p, pi); 574 540 __insert_pg_pool(&map->pg_pools, pi); 575 541 } 542 + 543 + if (version >= 5 && __decode_pool_names(p, end, map) < 0) 544 + goto bad; 545 + 576 546 ceph_decode_32_safe(p, end, map->pool_max, bad); 577 547 578 548 ceph_decode_32_safe(p, end, map->flags, bad); ··· 750 712 } 751 713 pi = __lookup_pg_pool(&map->pg_pools, pool); 752 714 if (!pi) { 753 - pi = kmalloc(sizeof(*pi), GFP_NOFS); 715 + pi = kzalloc(sizeof(*pi), GFP_NOFS); 754 716 if (!pi) { 755 717 err = -ENOMEM; 756 718 goto bad; ··· 760 722 } 761 723 __decode_pool(p, pi); 762 724 } 725 + if (version >= 5 && __decode_pool_names(p, end, map) < 0) 726 + goto bad; 763 727 764 728 /* old_pool */ 765 729 ceph_decode_32_safe(p, end, len, bad); ··· 770 730 771 731 ceph_decode_32_safe(p, end, pool, bad); 772 732 pi = __lookup_pg_pool(&map->pg_pools, pool); 773 - if (pi) { 774 - rb_erase(&pi->node, &map->pg_pools); 775 - kfree(pi); 776 - } 733 + if (pi) 734 + __remove_pg_pool(&map->pg_pools, pi); 777 735 } 778 736 779 737 /* new_up */
+1
fs/ceph/osdmap.h
··· 23 23 int id; 24 24 struct ceph_pg_pool v; 25 25 int pg_num_mask, pgp_num_mask, lpg_num_mask, lpgp_num_mask; 26 + char *name; 26 27 }; 27 28 28 29 struct ceph_pg_mapping {
+4 -2
fs/ceph/rados.h
··· 11 11 /* 12 12 * osdmap encoding versions 13 13 */ 14 - #define CEPH_OSDMAP_INC_VERSION 4 15 - #define CEPH_OSDMAP_VERSION 4 14 + #define CEPH_OSDMAP_INC_VERSION 5 15 + #define CEPH_OSDMAP_INC_VERSION_EXT 5 16 + #define CEPH_OSDMAP_VERSION 5 17 + #define CEPH_OSDMAP_VERSION_EXT 5 16 18 17 19 /* 18 20 * fs id
+13 -13
fs/ceph/snap.c
··· 431 431 * Caller must hold snap_rwsem for read (i.e., the realm topology won't 432 432 * change). 433 433 */ 434 - void ceph_queue_cap_snap(struct ceph_inode_info *ci, 435 - struct ceph_snap_context *snapc) 434 + void ceph_queue_cap_snap(struct ceph_inode_info *ci) 436 435 { 437 436 struct inode *inode = &ci->vfs_inode; 438 437 struct ceph_cap_snap *capsnap; ··· 450 451 as no new writes are allowed to start when pending, so any 451 452 writes in progress now were started before the previous 452 453 cap_snap. lucky us. */ 453 - dout("queue_cap_snap %p snapc %p seq %llu used %d" 454 - " already pending\n", inode, snapc, snapc->seq, used); 454 455 dout("queue_cap_snap %p already pending\n", inode); 455 455 kfree(capsnap); 456 456 } else if (ci->i_wrbuffer_ref_head || (used & CEPH_CAP_FILE_WR)) { 457 + struct ceph_snap_context *snapc = ci->i_head_snapc; 458 + 457 459 igrab(inode); 458 460 459 461 atomic_set(&capsnap->nref, 1); ··· 463 463 INIT_LIST_HEAD(&capsnap->flushing_item); 464 464 465 465 capsnap->follows = snapc->seq - 1; 466 - capsnap->context = ceph_get_snap_context(snapc); 467 466 capsnap->issued = __ceph_caps_issued(ci, NULL); 468 467 capsnap->dirty = __ceph_caps_dirty(ci); 469 468 ··· 479 480 snapshot. */
480 481 capsnap->dirty_pages = ci->i_wrbuffer_ref_head; 481 482 ci->i_wrbuffer_ref_head = 0; 482 - ceph_put_snap_context(ci->i_head_snapc); 483 + capsnap->context = snapc; 483 484 ci->i_head_snapc = NULL; 484 485 list_add_tail(&capsnap->ci_item, &ci->i_cap_snaps); 485 486 ··· 521 522 capsnap->ctime = inode->i_ctime; 522 523 capsnap->time_warp_seq = ci->i_time_warp_seq; 523 524 if (capsnap->dirty_pages) { 524 - dout("finish_cap_snap %p cap_snap %p snapc %p %llu s=%llu " 525 + dout("finish_cap_snap %p cap_snap %p snapc %p %llu %s s=%llu " 525 526 "still has %d dirty pages\n", inode, capsnap, 526 527 capsnap->context, capsnap->context->seq, 527 - capsnap->size, capsnap->dirty_pages); 528 + ceph_cap_string(capsnap->dirty), capsnap->size, 529 + capsnap->dirty_pages); 528 530 return 0; 529 531 } 530 - dout("finish_cap_snap %p cap_snap %p snapc %p %llu s=%llu clean\n", 532 + dout("finish_cap_snap %p cap_snap %p snapc %p %llu %s s=%llu\n", 531 533 inode, capsnap, capsnap->context, 532 534 capsnap->context->seq, ceph_cap_string(capsnap->dirty), 535 capsnap->size); 533 536 534 537 spin_lock(&mdsc->snap_flush_lock); 535 538 list_add_tail(&ci->i_snap_flush_item, &mdsc->snap_flush_list); ··· 603 602 if (lastinode) 604 603 iput(lastinode); 605 604 lastinode = inode; 606 - ceph_queue_cap_snap(ci, realm->cached_context); 605 + ceph_queue_cap_snap(ci); 607 606 spin_lock(&realm->inodes_with_caps_lock); 608 607 } 609 608 spin_unlock(&realm->inodes_with_caps_lock); ··· 825 824 spin_unlock(&realm->inodes_with_caps_lock); 826 825 spin_unlock(&inode->i_lock); 827 826 828 - ceph_queue_cap_snap(ci, 829 - ci->i_snap_realm->cached_context); 827 + ceph_queue_cap_snap(ci); 830 828 831 829 iput(inode); 832 830 continue;
+1 -2
fs/ceph/super.h
··· 715 715 extern void ceph_handle_snap(struct ceph_mds_client *mdsc, 716 716 struct ceph_mds_session *session, 717 717 struct ceph_msg *msg); 718 - extern void ceph_queue_cap_snap(struct ceph_inode_info *ci, 719 - struct ceph_snap_context *snapc); 718 + extern void ceph_queue_cap_snap(struct ceph_inode_info *ci); 720 719 extern int __ceph_finish_cap_snap(struct ceph_inode_info *ci, 721 720 struct ceph_cap_snap *capsnap); 722 721 extern void ceph_cleanup_empty_realms(struct ceph_mds_client *mdsc);
+18 -19
fs/ecryptfs/crypto.c
··· 382 382 static void ecryptfs_lower_offset_for_extent(loff_t *offset, loff_t extent_num, 383 383 struct ecryptfs_crypt_stat *crypt_stat) 384 384 { 385 - (*offset) = (crypt_stat->num_header_bytes_at_front 386 - + (crypt_stat->extent_size * extent_num)); 385 + (*offset) = ecryptfs_lower_header_size(crypt_stat) 386 + + (crypt_stat->extent_size * extent_num); 387 387 } 388 388 389 389 /** ··· 835 835 set_extent_mask_and_shift(crypt_stat); 836 836 crypt_stat->iv_bytes = ECRYPTFS_DEFAULT_IV_BYTES; 837 837 if (crypt_stat->flags & ECRYPTFS_METADATA_IN_XATTR) 838 - crypt_stat->num_header_bytes_at_front = 0; 838 + crypt_stat->metadata_size = ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE; 839 839 else { 840 840 if (PAGE_CACHE_SIZE <= ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE) 841 841 crypt_stat->metadata_size = 842 842 ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE; 843 843 else 844 844 crypt_stat->metadata_size = PAGE_CACHE_SIZE; 845 845 } 846 846 } ··· 1108 1108 (*written) = MAGIC_ECRYPTFS_MARKER_SIZE_BYTES; 1109 1109 } 1110 1110 1111 - static void 1112 - write_ecryptfs_flags(char *page_virt, struct ecryptfs_crypt_stat *crypt_stat, 1113 - size_t *written) 1111 + void ecryptfs_write_crypt_stat_flags(char *page_virt, 1112 + struct ecryptfs_crypt_stat *crypt_stat, 1113 + size_t *written) 1114 1114 { 1115 1115 u32 flags = 0; 1116 1116 int i; ··· 1238 1238 1239 1239 header_extent_size = (u32)crypt_stat->extent_size; 1240 1240 num_header_extents_at_front = 1241 - (u16)(crypt_stat->num_header_bytes_at_front 1242 - / crypt_stat->extent_size); 1241 + (u16)(crypt_stat->metadata_size / crypt_stat->extent_size); 1243 1242 put_unaligned_be32(header_extent_size, virt); 1244 1243 virt += 4; 1245 1244 put_unaligned_be16(num_header_extents_at_front, virt); ··· 1291 1292 offset = ECRYPTFS_FILE_SIZE_BYTES; 1292 1293 write_ecryptfs_marker((page_virt + offset), &written); 1293 1294 offset += written;
1294 - write_ecryptfs_flags((page_virt + offset), crypt_stat, &written); 1295 + ecryptfs_write_crypt_stat_flags((page_virt + offset), crypt_stat, 1296 + &written); 1295 1297 offset += written; 1296 1298 ecryptfs_write_header_metadata((page_virt + offset), crypt_stat, 1297 1299 &written); ··· 1382 1382 rc = -EINVAL; 1383 1383 goto out; 1384 1384 } 1385 - virt_len = crypt_stat->num_header_bytes_at_front; 1385 + virt_len = crypt_stat->metadata_size; 1386 1386 order = get_order(virt_len); 1387 1387 /* Released in this function */ 1388 1388 virt = (char *)ecryptfs_get_zeroed_pages(GFP_KERNEL, order); ··· 1428 1428 header_extent_size = get_unaligned_be32(virt); 1429 1429 virt += sizeof(__be32); 1430 1430 num_header_extents_at_front = get_unaligned_be16(virt); 1431 - crypt_stat->num_header_bytes_at_front = 1432 - (((size_t)num_header_extents_at_front 1433 - * (size_t)header_extent_size)); 1431 + crypt_stat->metadata_size = (((size_t)num_header_extents_at_front 1432 + * (size_t)header_extent_size)); 1434 1433 (*bytes_read) = (sizeof(__be32) + sizeof(__be16)); 1435 1434 if ((validate_header_size == ECRYPTFS_VALIDATE_HEADER_SIZE) 1436 - && (crypt_stat->num_header_bytes_at_front 1435 + && (crypt_stat->metadata_size 1437 1436 < ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE)) { 1438 1437 rc = -EINVAL; 1439 1438 printk(KERN_WARNING "Invalid header size: [%zd]\n", 1440 1439 crypt_stat->metadata_size); 1441 1440 } 1442 1441 return rc; 1443 1442 } ··· 1451 1452 */ 1452 1453 static void set_default_header_data(struct ecryptfs_crypt_stat *crypt_stat) 1453 1454 { 1454 - crypt_stat->num_header_bytes_at_front = 1455 - ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE; 1455 + crypt_stat->metadata_size = ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE; 1456 1456 } 1457 1457 1458 1458 /** ··· 1605 1607 ecryptfs_dentry, 1606 1608 ECRYPTFS_VALIDATE_HEADER_SIZE); 1607 1609 if (rc) { 1610 + memset(page_virt, 0, PAGE_CACHE_SIZE); 1608 1611 rc = ecryptfs_read_xattr_region(
ecryptfs_inode); 1609 1612 if (rc) { 1610 1613 printk(KERN_DEBUG "Valid eCryptfs headers not found in "
+12 -1
fs/ecryptfs/ecryptfs_kernel.h
··· 273 273 u32 flags; 274 274 unsigned int file_version; 275 275 size_t iv_bytes; 276 - size_t num_header_bytes_at_front; 276 + size_t metadata_size; 277 277 size_t extent_size; /* Data extent size; default is 4096 */ 278 278 size_t key_size; 279 279 size_t extent_shift; ··· 464 464 465 465 extern struct mutex ecryptfs_daemon_hash_mux; 466 466 467 + static inline size_t 468 + ecryptfs_lower_header_size(struct ecryptfs_crypt_stat *crypt_stat) 469 + { 470 + if (crypt_stat->flags & ECRYPTFS_METADATA_IN_XATTR) 471 + return 0; 472 + return crypt_stat->metadata_size; 473 + } 474 + 467 475 static inline struct ecryptfs_file_info * 468 476 ecryptfs_file_to_private(struct file *file) 469 477 { ··· 659 651 int ecryptfs_write_metadata(struct dentry *ecryptfs_dentry); 660 652 int ecryptfs_read_metadata(struct dentry *ecryptfs_dentry); 661 653 int ecryptfs_new_file_context(struct dentry *ecryptfs_dentry); 654 + void ecryptfs_write_crypt_stat_flags(char *page_virt, 655 + struct ecryptfs_crypt_stat *crypt_stat, 656 + size_t *written); 662 657 int ecryptfs_read_and_validate_header_region(char *data, 663 658 struct inode *ecryptfs_inode); 664 659 int ecryptfs_read_and_validate_xattr_region(char *page_virt,
+67 -62
fs/ecryptfs/inode.c
··· 324 324 rc = ecryptfs_read_and_validate_header_region(page_virt, 325 325 ecryptfs_dentry->d_inode); 326 326 if (rc) { 327 + memset(page_virt, 0, PAGE_CACHE_SIZE); 327 328 rc = ecryptfs_read_and_validate_xattr_region(page_virt, 328 329 ecryptfs_dentry); 329 330 if (rc) { ··· 337 336 ecryptfs_dentry->d_sb)->mount_crypt_stat; 338 337 if (mount_crypt_stat->flags & ECRYPTFS_ENCRYPTED_VIEW_ENABLED) { 339 338 if (crypt_stat->flags & ECRYPTFS_METADATA_IN_XATTR) 340 - file_size = (crypt_stat->num_header_bytes_at_front 339 + file_size = (crypt_stat->metadata_size 341 340 + i_size_read(lower_dentry->d_inode)); 342 341 else 343 342 file_size = i_size_read(lower_dentry->d_inode); ··· 389 388 mutex_unlock(&lower_dir_dentry->d_inode->i_mutex); 390 389 if (IS_ERR(lower_dentry)) { 391 390 rc = PTR_ERR(lower_dentry); 392 - printk(KERN_ERR "%s: lookup_one_len() returned [%d] on " 393 - "lower_dentry = [%s]\n", __func__, rc, 394 - ecryptfs_dentry->d_name.name); 391 + ecryptfs_printk(KERN_DEBUG, "%s: lookup_one_len() returned " 392 + "[%d] on lower_dentry = [%s]\n", __func__, rc, 393 + encrypted_and_encoded_name); 395 394 goto out_d_drop; 396 395 } 397 396 if (lower_dentry->d_inode) ··· 418 417 mutex_unlock(&lower_dir_dentry->d_inode->i_mutex); 419 418 if (IS_ERR(lower_dentry)) { 420 419 rc = PTR_ERR(lower_dentry); 421 - printk(KERN_ERR "%s: lookup_one_len() returned [%d] on " 422 - "lower_dentry = [%s]\n", __func__, rc, 423 - encrypted_and_encoded_name); 420 + ecryptfs_printk(KERN_DEBUG, "%s: lookup_one_len() returned " 421 + "[%d] on lower_dentry = [%s]\n", __func__, rc, 422 + encrypted_and_encoded_name); 424 423 goto out_d_drop; 425 424 } 426 425 lookup_and_interpose: ··· 457 456 rc = ecryptfs_interpose(lower_new_dentry, new_dentry, dir->i_sb, 0); 458 457 if (rc) 459 458 goto out_lock; 460 - fsstack_copy_attr_times(dir, lower_new_dentry->d_inode); 461 - fsstack_copy_inode_size(dir, lower_new_dentry->d_inode); 459 + fsstack_copy_attr_times(dir, lower_dir_dentry->d_inode);
460 + fsstack_copy_inode_size(dir, lower_dir_dentry->d_inode); 462 461 old_dentry->d_inode->i_nlink = 463 462 ecryptfs_inode_to_lower(old_dentry->d_inode)->i_nlink; 464 463 i_size_write(new_dentry->d_inode, file_size_save); ··· 649 648 return rc; 650 649 } 651 650 652 - static int 653 - ecryptfs_readlink(struct dentry *dentry, char __user *buf, int bufsiz) 651 + static int ecryptfs_readlink_lower(struct dentry *dentry, char **buf, 652 + size_t *bufsiz) 654 653 { 654 + struct dentry *lower_dentry = ecryptfs_dentry_to_lower(dentry); 655 655 char *lower_buf; 656 - size_t lower_bufsiz; 657 - struct dentry *lower_dentry; 658 - struct ecryptfs_mount_crypt_stat *mount_crypt_stat; 659 - char *plaintext_name; 660 - size_t plaintext_name_size; 656 + size_t lower_bufsiz = PATH_MAX; 661 657 mm_segment_t old_fs; 662 658 int rc; 663 659 664 - lower_dentry = ecryptfs_dentry_to_lower(dentry); 665 - if (!lower_dentry->d_inode->i_op->readlink) { 666 - rc = -EINVAL; 667 - goto out; 668 - } 669 - mount_crypt_stat = &ecryptfs_superblock_to_private( 670 - dentry->d_sb)->mount_crypt_stat; 671 - /* 672 - * If the lower filename is encrypted, it will result in a significantly 673 - * longer name. If needed, truncate the name after decode and decrypt.
674 - */ 675 - if (mount_crypt_stat->flags & ECRYPTFS_GLOBAL_ENCRYPT_FILENAMES) 676 - lower_bufsiz = PATH_MAX; 677 - else 678 - lower_bufsiz = bufsiz; 679 - /* Released in this function */ 680 660 lower_buf = kmalloc(lower_bufsiz, GFP_KERNEL); 681 - if (lower_buf == NULL) { 682 - printk(KERN_ERR "%s: Out of memory whilst attempting to " 683 - "kmalloc [%zd] bytes\n", __func__, lower_bufsiz); 661 + if (!lower_buf) { 684 662 rc = -ENOMEM; 685 663 goto out; 686 664 } ··· 669 689 (char __user *)lower_buf, 670 690 lower_bufsiz); 671 691 set_fs(old_fs); 672 - if (rc >= 0) { 673 - rc = ecryptfs_decode_and_decrypt_filename(&plaintext_name, 674 - &plaintext_name_size, 675 - dentry, lower_buf, 676 - rc); 677 - if (rc) { 678 - printk(KERN_ERR "%s: Error attempting to decode and " 679 - "decrypt filename; rc = [%d]\n", __func__, 680 - rc); 681 - goto out_free_lower_buf; 682 - } 683 - /* Check for bufsiz <= 0 done in sys_readlinkat() */ 684 - rc = copy_to_user(buf, plaintext_name, 685 - min((size_t) bufsiz, plaintext_name_size)); 686 - if (rc) 687 - rc = -EFAULT; 688 - else 689 - rc = plaintext_name_size; 690 - kfree(plaintext_name); 691 - fsstack_copy_attr_atime(dentry->d_inode, lower_dentry->d_inode); 692 - } 693 - out_free_lower_buf: 692 + if (rc < 0) 693 + goto out; 694 + lower_bufsiz = rc; 695 + rc = ecryptfs_decode_and_decrypt_filename(buf, bufsiz, dentry, 696 + lower_buf, lower_bufsiz); 697 + out: 694 698 kfree(lower_buf); 699 + return rc; 700 + } 701 + 702 + static int 703 + ecryptfs_readlink(struct dentry *dentry, char __user *buf, int bufsiz) 704 + { 705 + char *kbuf; 706 + size_t kbufsiz, copied; 707 + int rc; 708 + 709 + rc = ecryptfs_readlink_lower(dentry, &kbuf, &kbufsiz); 710 + if (rc) 711 + goto out; 712 + copied = min_t(size_t, bufsiz, kbufsiz); 713 + rc = copy_to_user(buf, kbuf, copied) ? -EFAULT : copied;
714 + kfree(kbuf); 715 + fsstack_copy_attr_atime(dentry->d_inode, 716 + ecryptfs_dentry_to_lower(dentry)->d_inode); 695 717 out: 696 718 return rc; 697 719 } ··· 751 769 { 752 770 loff_t lower_size; 753 771 754 - lower_size = crypt_stat->num_header_bytes_at_front; 772 + lower_size = ecryptfs_lower_header_size(crypt_stat); 755 773 if (upper_size != 0) { 756 774 loff_t num_extents; 757 775 ··· 998 1016 return rc; 999 1017 } 1000 1018 1019 + int ecryptfs_getattr_link(struct vfsmount *mnt, struct dentry *dentry, 1020 + struct kstat *stat) 1021 + { 1022 + struct ecryptfs_mount_crypt_stat *mount_crypt_stat; 1023 + int rc = 0; 1024 + 1025 + mount_crypt_stat = &ecryptfs_superblock_to_private( 1026 + dentry->d_sb)->mount_crypt_stat; 1027 + generic_fillattr(dentry->d_inode, stat); 1028 + if (mount_crypt_stat->flags & ECRYPTFS_GLOBAL_ENCRYPT_FILENAMES) { 1029 + char *target; 1030 + size_t targetsiz; 1031 + 1032 + rc = ecryptfs_readlink_lower(dentry, &target, &targetsiz); 1033 + if (!rc) { 1034 + kfree(target); 1035 + stat->size = targetsiz; 1036 + } 1037 + } 1038 + return rc; 1039 + } 1040 + 1001 1041 int ecryptfs_getattr(struct vfsmount *mnt, struct dentry *dentry, 1002 1042 struct kstat *stat) 1003 1043 { ··· 1044 1040 1045 1041 lower_dentry = ecryptfs_dentry_to_lower(dentry); 1046 1042 if (!lower_dentry->d_inode->i_op->setxattr) { 1047 - rc = -ENOSYS; 1043 + rc = -EOPNOTSUPP; 1048 1044 goto out; 1049 1045 } 1050 1046 mutex_lock(&lower_dentry->d_inode->i_mutex); ··· 1062 1058 int rc = 0; 1063 1059 1064 1060 if (!lower_dentry->d_inode->i_op->getxattr) { 1065 - rc = -ENOSYS; 1061 + rc = -EOPNOTSUPP; 1066 1062 goto out; 1067 1063 } 1068 1064 mutex_lock(&lower_dentry->d_inode->i_mutex); ··· 1089 1085 1090 1086 lower_dentry = ecryptfs_dentry_to_lower(dentry); 1091 1087 if (!lower_dentry->d_inode->i_op->listxattr) { 1092 - rc = -ENOSYS; 1088 + rc = -EOPNOTSUPP; 1093 1089 goto out; 1094 1090 } 1095 1091 mutex_lock(&lower_dentry->d_inode->i_mutex); ··· 1106 1102
1107 1103 lower_dentry = ecryptfs_dentry_to_lower(dentry); 1108 1104 if (!lower_dentry->d_inode->i_op->removexattr) { 1109 - rc = -ENOSYS; 1105 + rc = -EOPNOTSUPP; 1110 1106 goto out; 1111 1107 } 1112 1108 mutex_lock(&lower_dentry->d_inode->i_mutex); ··· 1137 1133 .put_link = ecryptfs_put_link, 1138 1134 .permission = ecryptfs_permission, 1139 1135 .setattr = ecryptfs_setattr, 1136 + .getattr = ecryptfs_getattr_link, 1140 1137 .setxattr = ecryptfs_setxattr, 1141 1138 .getxattr = ecryptfs_getxattr, 1142 1139 .listxattr = ecryptfs_listxattr,
+21 -17
fs/ecryptfs/mmap.c
··· 83 83 return rc; 84 84 } 85 85 86 + static void strip_xattr_flag(char *page_virt, 87 + struct ecryptfs_crypt_stat *crypt_stat) 88 + { 89 + if (crypt_stat->flags & ECRYPTFS_METADATA_IN_XATTR) { 90 + size_t written; 91 + 92 + crypt_stat->flags &= ~ECRYPTFS_METADATA_IN_XATTR; 93 + ecryptfs_write_crypt_stat_flags(page_virt, crypt_stat, 94 + &written); 95 + crypt_stat->flags |= ECRYPTFS_METADATA_IN_XATTR; 96 + } 97 + } 98 + 86 99 /** 87 100 * Header Extent: 88 101 * Octets 0-7: Unencrypted file size (big-endian) ··· 111 98 * (big-endian) 112 99 * Octet 26: Begin RFC 2440 authentication token packet set 113 100 */ 114 - static void set_header_info(char *page_virt, 115 - struct ecryptfs_crypt_stat *crypt_stat) 116 - { 117 - size_t written; 118 - size_t save_num_header_bytes_at_front = 119 - crypt_stat->num_header_bytes_at_front; 120 - 121 - crypt_stat->num_header_bytes_at_front = 122 - ECRYPTFS_MINIMUM_HEADER_EXTENT_SIZE; 123 - ecryptfs_write_header_metadata(page_virt + 20, crypt_stat, &written); 124 - crypt_stat->num_header_bytes_at_front = 125 - save_num_header_bytes_at_front; 126 - } 127 101 128 102 /** 129 103 * ecryptfs_copy_up_encrypted_with_header ··· 136 136 * num_extents_per_page) 137 137 + extent_num_in_page); 138 138 size_t num_header_extents_at_front = 139 - (crypt_stat->num_header_bytes_at_front 140 - / crypt_stat->extent_size); 139 + (crypt_stat->metadata_size / crypt_stat->extent_size); 141 140 142 141 if (view_extent_num < num_header_extents_at_front) { 143 142 /* This is a header extent */ ··· 146 147 memset(page_virt, 0, PAGE_CACHE_SIZE); 147 148 /* TODO: Support more than one header extent */ 148 149 if (view_extent_num == 0) { 150 + size_t written; 151 + 149 152 rc = ecryptfs_read_xattr_region( 150 153 page_virt, page->mapping->host); 151 - set_header_info(page_virt, crypt_stat); 154 + strip_xattr_flag(page_virt + 16, crypt_stat); 155 + ecryptfs_write_header_metadata(page_virt + 20, 156 + crypt_stat, 157 + &written); 152 158 }
153 159 kunmap_atomic(page_virt, KM_USER0); 154 160 flush_dcache_page(page); ··· 166 162 /* This is an encrypted data extent */ 167 163 loff_t lower_offset = 168 164 ((view_extent_num * crypt_stat->extent_size) 169 - - crypt_stat->num_header_bytes_at_front); 165 + - crypt_stat->metadata_size); 170 166 171 167 rc = ecryptfs_read_lower_page_segment( 172 168 page, (lower_offset >> PAGE_CACHE_SHIFT),
-1
fs/ecryptfs/super.c
··· 86 86 if (lower_dentry->d_inode) { 87 87 fput(inode_info->lower_file); 88 88 inode_info->lower_file = NULL; 89 - d_drop(lower_dentry); 90 89 } 91 90 } 92 91 ecryptfs_destroy_crypt_stat(&inode_info->crypt_stat);
+2
fs/ext2/symlink.c
··· 32 32 .readlink = generic_readlink, 33 33 .follow_link = page_follow_link_light, 34 34 .put_link = page_put_link, 35 + .setattr = ext2_setattr, 35 36 #ifdef CONFIG_EXT2_FS_XATTR 36 37 .setxattr = generic_setxattr, 37 38 .getxattr = generic_getxattr, ··· 44 43 const struct inode_operations ext2_fast_symlink_inode_operations = { 45 44 .readlink = generic_readlink, 46 45 .follow_link = ext2_follow_link, 46 + .setattr = ext2_setattr, 47 47 #ifdef CONFIG_EXT2_FS_XATTR 48 48 .setxattr = generic_setxattr, 49 49 .getxattr = generic_getxattr,
+2
fs/ext3/symlink.c
··· 34 34 .readlink = generic_readlink, 35 35 .follow_link = page_follow_link_light, 36 36 .put_link = page_put_link, 37 + .setattr = ext3_setattr, 37 38 #ifdef CONFIG_EXT3_FS_XATTR 38 39 .setxattr = generic_setxattr, 39 40 .getxattr = generic_getxattr, ··· 46 45 const struct inode_operations ext3_fast_symlink_inode_operations = { 47 46 .readlink = generic_readlink, 48 47 .follow_link = ext3_follow_link, 48 + .setattr = ext3_setattr, 49 49 #ifdef CONFIG_EXT3_FS_XATTR 50 50 .setxattr = generic_setxattr, 51 51 .getxattr = generic_getxattr,
+2 -1
fs/nfs/client.c
··· 1294 1294 1295 1295 /* Initialise the client representation from the mount data */ 1296 1296 server->flags = data->flags; 1297 - server->caps |= NFS_CAP_ATOMIC_OPEN|NFS_CAP_CHANGE_ATTR; 1297 + server->caps |= NFS_CAP_ATOMIC_OPEN|NFS_CAP_CHANGE_ATTR| 1298 + NFS_CAP_POSIX_LOCK; 1298 1299 server->options = data->options; 1299 1300 1300 1301 /* Get a client record */
+1 -1
fs/nfs/dir.c
··· 1025 1025 res = NULL; 1026 1026 goto out; 1027 1027 /* This turned out not to be a regular file */ 1028 + case -EISDIR: 1028 1029 case -ENOTDIR: 1029 1030 goto no_open; 1030 1031 case -ELOOP: 1031 1032 if (!(nd->intent.open.flags & O_NOFOLLOW)) 1032 1033 goto no_open; 1033 - /* case -EISDIR: */ 1034 1034 /* case -EINVAL: */ 1035 1035 default: 1036 1036 goto out;
+4 -4
fs/nfs/inode.c
··· 623 623 list_for_each_entry(pos, &nfsi->open_files, list) { 624 624 if (cred != NULL && pos->cred != cred) 625 625 continue; 626 - if ((pos->mode & mode) == mode) { 627 - ctx = get_nfs_open_context(pos); 628 - break; 629 - } 626 + if ((pos->mode & (FMODE_READ|FMODE_WRITE)) != mode) 627 + continue; 628 + ctx = get_nfs_open_context(pos); 629 + break; 630 630 } 631 631 spin_unlock(&inode->i_lock); 632 632 return ctx;
+3 -1
fs/nfs/nfs4proc.c
··· 1523 1523 nfs_post_op_update_inode(dir, o_res->dir_attr); 1524 1524 } else 1525 1525 nfs_refresh_inode(dir, o_res->dir_attr); 1526 + if ((o_res->rflags & NFS4_OPEN_RESULT_LOCKTYPE_POSIX) == 0) 1527 + server->caps &= ~NFS_CAP_POSIX_LOCK; 1526 1528 if(o_res->rflags & NFS4_OPEN_RESULT_CONFIRM) { 1527 1529 status = _nfs4_proc_open_confirm(data); 1528 1530 if (status != 0) ··· 1666 1664 status = PTR_ERR(state); 1667 1665 if (IS_ERR(state)) 1668 1666 goto err_opendata_put; 1669 - if ((opendata->o_res.rflags & NFS4_OPEN_RESULT_LOCKTYPE_POSIX) != 0) 1667 + if (server->caps & NFS_CAP_POSIX_LOCK) 1670 1668 set_bit(NFS_STATE_POSIX_LOCKS, &state->flags); 1671 1669 nfs4_opendata_put(opendata); 1672 1670 nfs4_put_state_owner(sp);
+28 -16
fs/nfs/write.c
··· 201 201 struct inode *inode = page->mapping->host; 202 202 struct nfs_server *nfss = NFS_SERVER(inode); 203 203 204 + page_cache_get(page); 204 205 if (atomic_long_inc_return(&nfss->writeback) > 205 206 NFS_CONGESTION_ON_THRESH) { 206 207 set_bdi_congested(&nfss->backing_dev_info, ··· 217 216 struct nfs_server *nfss = NFS_SERVER(inode); 218 217 219 218 end_page_writeback(page); 219 + page_cache_release(page); 220 220 if (atomic_long_dec_return(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH) 221 221 clear_bdi_congested(&nfss->backing_dev_info, BLK_RW_ASYNC); 222 222 } ··· 423 421 nfs_mark_request_dirty(struct nfs_page *req) 424 422 { 425 423 __set_page_dirty_nobuffers(req->wb_page); 424 + __mark_inode_dirty(req->wb_page->mapping->host, I_DIRTY_DATASYNC); 426 425 } 427 426 428 427 #if defined(CONFIG_NFS_V3) || defined(CONFIG_NFS_V4) ··· 663 660 req = nfs_setup_write_request(ctx, page, offset, count); 664 661 if (IS_ERR(req)) 665 662 return PTR_ERR(req); 663 + nfs_mark_request_dirty(req); 666 664 /* Update file length */ 667 665 nfs_grow_file(page, offset, count); 668 666 nfs_mark_uptodate(page, req->wb_pgbase, req->wb_bytes); 667 + nfs_mark_request_dirty(req); 669 668 nfs_clear_page_tag_locked(req); 670 669 return 0; 671 670 } ··· 744 739 status = nfs_writepage_setup(ctx, page, offset, count); 745 740 if (status < 0) 746 741 nfs_set_pageerror(page); 747 - else 748 - __set_page_dirty_nobuffers(page); 749 742 750 743 dprintk("NFS: nfs_updatepage returns %d (isize %lld)\n", 751 744 status, (long long)i_size_read(inode)); ··· 752 749 753 750 static void nfs_writepage_release(struct nfs_page *req) 754 751 { 752 + struct page *page = req->wb_page; 755 753 756 - if (PageError(req->wb_page) || !nfs_reschedule_unstable_write(req)) { 757 - nfs_end_page_writeback(req->wb_page); 754 + if (PageError(req->wb_page) || !nfs_reschedule_unstable_write(req)) 758 755 nfs_inode_remove_request(req); 759 - } else 760 - nfs_end_page_writeback(req->wb_page);
761 756 nfs_clear_page_tag_locked(req); 757 nfs_end_page_writeback(page); 762 758 } 763 759 764 760 static int flush_task_priority(int how) ··· 781 779 int how) 782 780 { 783 781 struct inode *inode = req->wb_context->path.dentry->d_inode; 784 - int flags = (how & FLUSH_SYNC) ? 0 : RPC_TASK_ASYNC; 785 782 int priority = flush_task_priority(how); 786 783 struct rpc_task *task; 787 784 struct rpc_message msg = { ··· 795 794 .callback_ops = call_ops, 796 795 .callback_data = data, 797 796 .workqueue = nfsiod_workqueue, 798 - .flags = flags, 797 + .flags = RPC_TASK_ASYNC, 799 798 .priority = priority, 800 799 }; 800 + int ret = 0; 801 801 802 802 /* Set up the RPC argument and reply structs 803 803 * NB: take care not to mess about with data->commit et al. */ ··· 837 835 (unsigned long long)data->args.offset); 838 836 839 837 task = rpc_run_task(&task_setup_data); 840 - if (IS_ERR(task)) 841 - return PTR_ERR(task); 838 + if (IS_ERR(task)) { 839 + ret = PTR_ERR(task); 840 + goto out; 841 + } 842 + if (how & FLUSH_SYNC) { 843 + ret = rpc_wait_for_completion_task(task); 844 + if (ret == 0) 845 + ret = task->tk_status; 846 + } 842 847 rpc_put_task(task); 843 - return 0; 848 + out: 849 + return ret; 844 850 } 845 851 846 852 /* If a nfs_flush_* function fails, it should remove reqs from @head and ··· 857 847 */ 858 848 static void nfs_redirty_request(struct nfs_page *req) 859 849 { 850 + struct page *page = req->wb_page; 851 + 860 852 nfs_mark_request_dirty(req); 861 - nfs_end_page_writeback(req->wb_page); 862 853 nfs_clear_page_tag_locked(req); 854 + nfs_end_page_writeback(page); 863 855 } 864 856 865 857 /* ··· 1096 1084 if (nfs_write_need_commit(data)) { 1097 1085 memcpy(&req->wb_verf, &data->verf, sizeof(req->wb_verf)); 1098 1086 nfs_mark_request_commit(req); 1099 - nfs_end_page_writeback(page); 1100 1087 dprintk(" marked for commit\n"); 1101 1088 goto next; 1102 1089 } 1103 1090 dprintk(" OK\n"); 1104 1091 remove_request:
1105 1092 nfs_inode_remove_request(req); 1107 1093 next: 1108 1094 nfs_clear_page_tag_locked(req); 1095 nfs_end_page_writeback(page); 1109 1096 } 1110 1097 nfs_writedata_release(calldata); 1111 1098 } ··· 1218 1207 { 1219 1208 struct nfs_page *first = nfs_list_entry(head->next); 1220 1209 struct inode *inode = first->wb_context->path.dentry->d_inode; 1221 - int flags = (how & FLUSH_SYNC) ? 0 : RPC_TASK_ASYNC; 1222 1210 int priority = flush_task_priority(how); 1223 1211 struct rpc_task *task; 1224 1212 struct rpc_message msg = { ··· 1232 1222 .callback_ops = &nfs_commit_ops, 1233 1223 .callback_data = data, 1234 1224 .workqueue = nfsiod_workqueue, 1235 - .flags = flags, 1225 + .flags = RPC_TASK_ASYNC, 1236 1226 .priority = priority, 1237 1227 }; 1238 1228 ··· 1262 1252 task = rpc_run_task(&task_setup_data); 1263 1253 if (IS_ERR(task)) 1264 1254 return PTR_ERR(task); 1255 + if (how & FLUSH_SYNC) 1256 + rpc_wait_for_completion_task(task); 1265 1257 rpc_put_task(task); 1266 1258 return 0; 1267 1259 }
+1 -1
fs/nilfs2/alloc.c
··· 426 426 bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh); 427 427 if (!nilfs_clear_bit_atomic(nilfs_mdt_bgl_lock(inode, group), 428 428 group_offset, bitmap)) 429 - printk(KERN_WARNING "%s: entry numer %llu already freed\n", 429 + printk(KERN_WARNING "%s: entry number %llu already freed\n", 430 430 __func__, (unsigned long long)req->pr_entry_nr); 431 431 432 432 nilfs_palloc_group_desc_add_entries(inode, group, desc, 1);
+1 -1
fs/nilfs2/btree.c
··· 1879 1879 struct nilfs_btree_path *path, 1880 1880 int level, struct buffer_head *bh) 1881 1881 { 1882 - int maxlevel, ret; 1882 + int maxlevel = 0, ret; 1883 1883 struct nilfs_btree_node *parent; 1884 1884 struct inode *dat = nilfs_bmap_get_dat(&btree->bt_bmap); 1885 1885 __u64 ptr;
+1 -1
fs/nilfs2/ioctl.c
··· 649 649 long nilfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) 650 650 { 651 651 struct inode *inode = filp->f_dentry->d_inode; 652 - void __user *argp = (void * __user *)arg; 652 + void __user *argp = (void __user *)arg; 653 653 654 654 switch (cmd) { 655 655 case NILFS_IOCTL_CHANGE_CPMODE:
+8
fs/quota/Kconfig
··· 33 33 Note that this behavior is currently deprecated and may go away in 34 34 future. Please use notification via netlink socket instead. 35 35 36 + config QUOTA_DEBUG 37 + bool "Additional quota sanity checks" 38 + depends on QUOTA 39 + default n 40 + help 41 + If you say Y here, quota subsystem will perform some additional 42 + sanity checks of quota internal structures. If unsure, say N. 43 + 36 44 # Generic support for tree structured quota files. Selected when needed. 37 45 config QUOTA_TREE 38 46 tristate
+16 -12
fs/quota/dquot.c
··· 80 80 81 81 #include <asm/uaccess.h> 82 82 83 - #define __DQUOT_PARANOIA 84 - 85 83 /* 86 84 * There are three quota SMP locks. dq_list_lock protects all lists with quotas 87 85 * and quota formats, dqstats structure containing statistics about the lists ··· 693 695 694 696 if (!dquot) 695 697 return; 696 - #ifdef __DQUOT_PARANOIA 698 + #ifdef CONFIG_QUOTA_DEBUG 697 699 if (!atomic_read(&dquot->dq_count)) { 698 700 printk("VFS: dqput: trying to free free dquot\n"); 699 701 printk("VFS: device %s, dquot of %s %d\n", ··· 746 748 goto we_slept; 747 749 } 748 750 atomic_dec(&dquot->dq_count); 749 - #ifdef __DQUOT_PARANOIA 751 + #ifdef CONFIG_QUOTA_DEBUG 750 752 /* sanity check */ 751 753 BUG_ON(!list_empty(&dquot->dq_free)); 752 754 #endif ··· 843 845 dquot = NULL; 844 846 goto out; 845 847 } 846 - #ifdef __DQUOT_PARANOIA 848 + #ifdef CONFIG_QUOTA_DEBUG 847 849 BUG_ON(!dquot->dq_sb); /* Has somebody invalidated entry under us? */ 848 850 #endif 849 851 out: ··· 872 874 static void add_dquot_ref(struct super_block *sb, int type) 873 875 { 874 876 struct inode *inode, *old_inode = NULL; 877 + #ifdef CONFIG_QUOTA_DEBUG 875 878 int reserved = 0; 879 + #endif 876 880 877 881 spin_lock(&inode_lock); 878 882 list_for_each_entry(inode, &sb->s_inodes, i_sb_list) { 879 883 if (inode->i_state & (I_FREEING|I_CLEAR|I_WILL_FREE|I_NEW)) 880 884 continue; 885 + #ifdef CONFIG_QUOTA_DEBUG 881 886 if (unlikely(inode_get_rsv_space(inode) > 0)) 882 887 reserved = 1; 888 + #endif 883 889 if (!atomic_read(&inode->i_writecount)) 884 890 continue; 885 891 if (!dqinit_needed(inode, type)) ··· 905 903 spin_unlock(&inode_lock); 906 904 iput(old_inode); 907 905 906 + #ifdef CONFIG_QUOTA_DEBUG 908 907 if (reserved) { 909 908 printk(KERN_WARNING "VFS (%s): Writes happened before quota" 910 909 " was turned on thus quota information is probably " 911 910 "inconsistent. 
Please run quotacheck(8).\n", sb->s_id); 912 911 } 912 + #endif 913 913 } 914 914 915 915 /* ··· 938 934 inode->i_dquot[type] = NULL; 939 935 if (dquot) { 940 936 if (dqput_blocks(dquot)) { 941 - #ifdef __DQUOT_PARANOIA 937 + #ifdef CONFIG_QUOTA_DEBUG 942 938 if (atomic_read(&dquot->dq_count) != 1) 943 939 printk(KERN_WARNING "VFS: Adding dquot with dq_count %d to dispose list.\n", atomic_read(&dquot->dq_count)); 944 940 #endif ··· 2326 2322 if (di->dqb_valid & QIF_SPACE) { 2327 2323 dm->dqb_curspace = di->dqb_curspace - dm->dqb_rsvspace; 2328 2324 check_blim = 1; 2329 - __set_bit(DQ_LASTSET_B + QIF_SPACE_B, &dquot->dq_flags); 2325 + set_bit(DQ_LASTSET_B + QIF_SPACE_B, &dquot->dq_flags); 2330 2326 } 2331 2327 if (di->dqb_valid & QIF_BLIMITS) { 2332 2328 dm->dqb_bsoftlimit = qbtos(di->dqb_bsoftlimit); 2333 2329 dm->dqb_bhardlimit = qbtos(di->dqb_bhardlimit); 2334 2330 check_blim = 1; 2335 - __set_bit(DQ_LASTSET_B + QIF_BLIMITS_B, &dquot->dq_flags); 2331 + set_bit(DQ_LASTSET_B + QIF_BLIMITS_B, &dquot->dq_flags); 2336 2332 } 2337 2333 if (di->dqb_valid & QIF_INODES) { 2338 2334 dm->dqb_curinodes = di->dqb_curinodes; 2339 2335 check_ilim = 1; 2340 - __set_bit(DQ_LASTSET_B + QIF_INODES_B, &dquot->dq_flags); 2336 + set_bit(DQ_LASTSET_B + QIF_INODES_B, &dquot->dq_flags); 2341 2337 } 2342 2338 if (di->dqb_valid & QIF_ILIMITS) { 2343 2339 dm->dqb_isoftlimit = di->dqb_isoftlimit; 2344 2340 dm->dqb_ihardlimit = di->dqb_ihardlimit; 2345 2341 check_ilim = 1; 2346 - __set_bit(DQ_LASTSET_B + QIF_ILIMITS_B, &dquot->dq_flags); 2342 + set_bit(DQ_LASTSET_B + QIF_ILIMITS_B, &dquot->dq_flags); 2347 2343 } 2348 2344 if (di->dqb_valid & QIF_BTIME) { 2349 2345 dm->dqb_btime = di->dqb_btime; 2350 2346 check_blim = 1; 2351 - __set_bit(DQ_LASTSET_B + QIF_BTIME_B, &dquot->dq_flags); 2347 + set_bit(DQ_LASTSET_B + QIF_BTIME_B, &dquot->dq_flags); 2352 2348 } 2353 2349 if (di->dqb_valid & QIF_ITIME) { 2354 2350 dm->dqb_itime = di->dqb_itime; 2355 2351 check_ilim = 1; 2356 - __set_bit(DQ_LASTSET_B 
+ QIF_ITIME_B, &dquot->dq_flags); 2352 + set_bit(DQ_LASTSET_B + QIF_ITIME_B, &dquot->dq_flags); 2357 2353 } 2358 2354 2359 2355 if (check_blim) {
+4 -6
fs/udf/balloc.c
··· 125 125 126 126 mutex_lock(&sbi->s_alloc_mutex); 127 127 partmap = &sbi->s_partmaps[bloc->partitionReferenceNum]; 128 - if (bloc->logicalBlockNum < 0 || 129 - (bloc->logicalBlockNum + count) > 130 - partmap->s_partition_len) { 128 + if (bloc->logicalBlockNum + count < count || 129 + (bloc->logicalBlockNum + count) > partmap->s_partition_len) { 131 130 udf_debug("%d < %d || %d + %d > %d\n", 132 131 bloc->logicalBlockNum, 0, bloc->logicalBlockNum, 133 132 count, partmap->s_partition_len); ··· 392 393 393 394 mutex_lock(&sbi->s_alloc_mutex); 394 395 partmap = &sbi->s_partmaps[bloc->partitionReferenceNum]; 395 - if (bloc->logicalBlockNum < 0 || 396 - (bloc->logicalBlockNum + count) > 397 - partmap->s_partition_len) { 396 + if (bloc->logicalBlockNum + count < count || 397 + (bloc->logicalBlockNum + count) > partmap->s_partition_len) { 398 398 udf_debug("%d < %d || %d + %d > %d\n", 399 399 bloc->logicalBlockNum, 0, bloc->logicalBlockNum, count, 400 400 partmap->s_partition_len);
+1 -1
fs/udf/file.c
··· 218 218 .llseek = generic_file_llseek, 219 219 }; 220 220 221 - static int udf_setattr(struct dentry *dentry, struct iattr *iattr) 221 + int udf_setattr(struct dentry *dentry, struct iattr *iattr) 222 222 { 223 223 struct inode *inode = dentry->d_inode; 224 224 int error;
+1 -1
fs/udf/inode.c
··· 1314 1314 break; 1315 1315 case ICBTAG_FILE_TYPE_SYMLINK: 1316 1316 inode->i_data.a_ops = &udf_symlink_aops; 1317 - inode->i_op = &page_symlink_inode_operations; 1317 + inode->i_op = &udf_symlink_inode_operations; 1318 1318 inode->i_mode = S_IFLNK | S_IRWXUGO; 1319 1319 break; 1320 1320 case ICBTAG_FILE_TYPE_MAIN:
+8 -1
fs/udf/namei.c
··· 925 925 iinfo = UDF_I(inode); 926 926 inode->i_mode = S_IFLNK | S_IRWXUGO; 927 927 inode->i_data.a_ops = &udf_symlink_aops; 928 - inode->i_op = &page_symlink_inode_operations; 928 + inode->i_op = &udf_symlink_inode_operations; 929 929 930 930 if (iinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) { 931 931 struct kernel_lb_addr eloc; ··· 1393 1393 const struct inode_operations udf_dir_inode_operations = { 1394 1394 .lookup = udf_lookup, 1395 1395 .create = udf_create, 1396 + .setattr = udf_setattr, 1396 1397 .link = udf_link, 1397 1398 .unlink = udf_unlink, 1398 1399 .symlink = udf_symlink, ··· 1401 1400 .rmdir = udf_rmdir, 1402 1401 .mknod = udf_mknod, 1403 1402 .rename = udf_rename, 1403 + }; 1404 + const struct inode_operations udf_symlink_inode_operations = { 1405 + .readlink = generic_readlink, 1406 + .follow_link = page_follow_link_light, 1407 + .put_link = page_put_link, 1408 + .setattr = udf_setattr, 1404 1409 };
+2 -1
fs/udf/udfdecl.h
··· 76 76 extern const struct file_operations udf_dir_operations; 77 77 extern const struct inode_operations udf_file_inode_operations; 78 78 extern const struct file_operations udf_file_operations; 79 + extern const struct inode_operations udf_symlink_inode_operations; 79 80 extern const struct address_space_operations udf_aops; 80 81 extern const struct address_space_operations udf_adinicb_aops; 81 82 extern const struct address_space_operations udf_symlink_aops; ··· 132 131 /* file.c */ 133 132 extern int udf_ioctl(struct inode *, struct file *, unsigned int, 134 133 unsigned long); 135 - 134 + extern int udf_setattr(struct dentry *dentry, struct iattr *iattr); 136 135 /* inode.c */ 137 136 extern struct inode *udf_iget(struct super_block *, struct kernel_lb_addr *); 138 137 extern int udf_sync_inode(struct inode *);
+2 -2
fs/xfs/linux-2.6/xfs_sync.c
··· 820 820 * call into reclaim to find it in a clean state instead of waiting for 821 821 * it now. We also don't return errors here - if the error is transient 822 822 * then the next reclaim pass will flush the inode, and if the error 823 - * is permanent then the next sync reclaim will relcaim the inode and 823 + * is permanent then the next sync reclaim will reclaim the inode and 824 824 * pass on the error. 825 825 */ 826 - if (error && !XFS_FORCED_SHUTDOWN(ip->i_mount)) { 826 + if (error && error != EAGAIN && !XFS_FORCED_SHUTDOWN(ip->i_mount)) { 827 827 xfs_fs_cmn_err(CE_WARN, ip->i_mount, 828 828 "inode 0x%llx background reclaim flush failed with %d", 829 829 (long long)ip->i_ino, error);
+26 -12
fs/xfs/xfs_log.c
··· 745 745 746 746 /* 747 747 * Determine if we have a transaction that has gone to disk 748 - * that needs to be covered. Log activity needs to be idle (no AIL and 749 - * nothing in the iclogs). And, we need to be in the right state indicating 750 - * something has gone out. 748 + * that needs to be covered. To begin the transition to the idle state 749 + * firstly the log needs to be idle (no AIL and nothing in the iclogs). 750 + * If we are then in a state where covering is needed, the caller is informed 751 + * that dummy transactions are required to move the log into the idle state. 752 + * 753 + * Because this is called as part of the sync process, we should also indicate 754 + * that dummy transactions should be issued in anything but the covered or 755 + * idle states. This ensures that the log tail is accurately reflected in 756 + * the log at the end of the sync, hence if a crash occurrs avoids replay 757 + * of transactions where the metadata is already on disk. 751 758 */ 752 759 int 753 760 xfs_log_need_covered(xfs_mount_t *mp) ··· 766 759 return 0; 767 760 768 761 spin_lock(&log->l_icloglock); 769 - if (((log->l_covered_state == XLOG_STATE_COVER_NEED) || 770 - (log->l_covered_state == XLOG_STATE_COVER_NEED2)) 771 - && !xfs_trans_ail_tail(log->l_ailp) 772 - && xlog_iclogs_empty(log)) { 773 - if (log->l_covered_state == XLOG_STATE_COVER_NEED) 774 - log->l_covered_state = XLOG_STATE_COVER_DONE; 775 - else { 776 - ASSERT(log->l_covered_state == XLOG_STATE_COVER_NEED2); 777 - log->l_covered_state = XLOG_STATE_COVER_DONE2; 762 + switch (log->l_covered_state) { 763 + case XLOG_STATE_COVER_DONE: 764 + case XLOG_STATE_COVER_DONE2: 765 + case XLOG_STATE_COVER_IDLE: 766 + break; 767 + case XLOG_STATE_COVER_NEED: 768 + case XLOG_STATE_COVER_NEED2: 769 + if (!xfs_trans_ail_tail(log->l_ailp) && 770 + xlog_iclogs_empty(log)) { 771 + if (log->l_covered_state == XLOG_STATE_COVER_NEED) 772 + log->l_covered_state = XLOG_STATE_COVER_DONE; 773 + else 774 + 
log->l_covered_state = XLOG_STATE_COVER_DONE2; 778 775 } 776 + /* FALLTHRU */ 777 + default: 779 778 needed = 1; 779 + break; 780 780 } 781 781 spin_unlock(&log->l_icloglock); 782 782 return needed;
+1
include/drm/drm_pciids.h
··· 6 6 {0x1002, 0x3150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY}, \ 7 7 {0x1002, 0x3152, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ 8 8 {0x1002, 0x3154, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ 9 + {0x1002, 0x3155, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \ 9 10 {0x1002, 0x3E50, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_NEW_MEMMAP}, \ 10 11 {0x1002, 0x3E54, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_NEW_MEMMAP}, \ 11 12 {0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS100|RADEON_IS_IGP}, \
+51 -19
include/linux/firewire-cdev.h
··· 1 1 /* 2 2 * Char device interface. 3 3 * 4 - * Copyright (C) 2005-2006 Kristian Hoegsberg <krh@bitplanet.net> 4 + * Copyright (C) 2005-2007 Kristian Hoegsberg <krh@bitplanet.net> 5 5 * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License, or 9 - * (at your option) any later version. 6 + * Permission is hereby granted, free of charge, to any person obtaining a 7 + * copy of this software and associated documentation files (the "Software"), 8 + * to deal in the Software without restriction, including without limitation 9 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 10 + * and/or sell copies of the Software, and to permit persons to whom the 11 + * Software is furnished to do so, subject to the following conditions: 10 12 * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 13 + * The above copyright notice and this permission notice (including the next 14 + * paragraph) shall be included in all copies or substantial portions of the 15 + * Software. 15 16 * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software Foundation, 18 - * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 17 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL 20 + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 21 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 22 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 23 + * DEALINGS IN THE SOFTWARE. 19 24 */ 20 25 21 26 #ifndef _LINUX_FIREWIRE_CDEV_H ··· 443 438 * @type: %FW_CDEV_ISO_CONTEXT_TRANSMIT or %FW_CDEV_ISO_CONTEXT_RECEIVE 444 439 * @header_size: Header size to strip for receive contexts 445 440 * @channel: Channel to bind to 446 - * @speed: Speed to transmit at 441 + * @speed: Speed for transmit contexts 447 442 * @closure: To be returned in &fw_cdev_event_iso_interrupt 448 443 * @handle: Handle to context, written back by kernel 449 444 * ··· 455 450 * 456 451 * If a context was successfully created, the kernel writes back a handle to the 457 452 * context, which must be passed in for subsequent operations on that context. 453 + * 454 + * For receive contexts, @header_size must be at least 4 and must be a multiple 455 + * of 4. 458 456 * 459 457 * Note that the effect of a @header_size > 4 depends on 460 458 * &fw_cdev_get_info.version, as documented at &fw_cdev_event_iso_interrupt. ··· 489 481 * 490 482 * &struct fw_cdev_iso_packet is used to describe isochronous packet queues. 491 483 * 492 - * Use the FW_CDEV_ISO_ macros to fill in @control. The sy and tag fields are 493 - * specified by IEEE 1394a and IEC 61883. 484 + * Use the FW_CDEV_ISO_ macros to fill in @control. 494 485 * 495 - * FIXME - finish this documentation 486 + * For transmit packets, the header length must be a multiple of 4 and specifies 487 + * the numbers of bytes in @header that will be prepended to the packet's 488 + * payload; these bytes are copied into the kernel and will not be accessed 489 + * after the ioctl has returned. The sy and tag fields are copied to the iso 490 + * packet header (these fields are specified by IEEE 1394a and IEC 61883-1). 
491 + * The skip flag specifies that no packet is to be sent in a frame; when using 492 + * this, all other fields except the interrupt flag must be zero. 493 + * 494 + * For receive packets, the header length must be a multiple of the context's 495 + * header size; if the header length is larger than the context's header size, 496 + * multiple packets are queued for this entry. The sy and tag fields are 497 + * ignored. If the sync flag is set, the context drops all packets until 498 + * a packet with a matching sy field is received (the sync value to wait for is 499 + * specified in the &fw_cdev_start_iso structure). The payload length defines 500 + * how many payload bytes can be received for one packet (in addition to payload 501 + * quadlets that have been defined as headers and are stripped and returned in 502 + * the &fw_cdev_event_iso_interrupt structure). If more bytes are received, the 503 + * additional bytes are dropped. If less bytes are received, the remaining 504 + * bytes in this part of the payload buffer will not be written to, not even by 505 + * the next packet, i.e., packets received in consecutive frames will not 506 + * necessarily be consecutive in memory. If an entry has queued multiple 507 + * packets, the payload length is divided equally among them. 508 + * 509 + * When a packet with the interrupt flag set has been completed, the 510 + * &fw_cdev_event_iso_interrupt event will be sent. An entry that has queued 511 + * multiple receive packets is completed when its last packet is completed. 496 512 */ 497 513 struct fw_cdev_iso_packet { 498 514 __u32 control; ··· 533 501 * Queue a number of isochronous packets for reception or transmission. 534 502 * This ioctl takes a pointer to an array of &fw_cdev_iso_packet structs, 535 503 * which describe how to transmit from or receive into a contiguous region 536 - * of a mmap()'ed payload buffer. As part of the packet descriptors, 504 + * of a mmap()'ed payload buffer. 
As part of transmit packet descriptors, 537 505 * a series of headers can be supplied, which will be prepended to the 538 506 * payload during DMA. 539 507 * ··· 652 620 * instead of allocated. 653 621 * An %FW_CDEV_EVENT_ISO_RESOURCE_DEALLOCATED event concludes this operation. 654 622 * 655 - * To summarize, %FW_CDEV_IOC_DEALLOCATE_ISO_RESOURCE allocates iso resources 656 - * for the lifetime of the fd or handle. 623 + * To summarize, %FW_CDEV_IOC_ALLOCATE_ISO_RESOURCE allocates iso resources 624 + * for the lifetime of the fd or @handle. 657 625 * In contrast, %FW_CDEV_IOC_ALLOCATE_ISO_RESOURCE_ONCE allocates iso resources 658 626 * for the duration of a bus generation. 659 627 *
+27 -2
include/linux/firewire-constants.h
··· 1 + /* 2 + * IEEE 1394 constants. 3 + * 4 + * Copyright (C) 2005-2007 Kristian Hoegsberg <krh@bitplanet.net> 5 + * 6 + * Permission is hereby granted, free of charge, to any person obtaining a 7 + * copy of this software and associated documentation files (the "Software"), 8 + * to deal in the Software without restriction, including without limitation 9 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 10 + * and/or sell copies of the Software, and to permit persons to whom the 11 + * Software is furnished to do so, subject to the following conditions: 12 + * 13 + * The above copyright notice and this permission notice (including the next 14 + * paragraph) shall be included in all copies or substantial portions of the 15 + * Software. 16 + * 17 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 20 + * PRECISION INSIGHT AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 21 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 22 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 23 + * DEALINGS IN THE SOFTWARE. 
24 + */ 25 + 1 26 #ifndef _LINUX_FIREWIRE_CONSTANTS_H 2 27 #define _LINUX_FIREWIRE_CONSTANTS_H 3 28 ··· 46 21 #define EXTCODE_WRAP_ADD 0x6 47 22 #define EXTCODE_VENDOR_DEPENDENT 0x7 48 23 49 - /* Juju specific tcodes */ 24 + /* Linux firewire-core (Juju) specific tcodes */ 50 25 #define TCODE_LOCK_MASK_SWAP (0x10 | EXTCODE_MASK_SWAP) 51 26 #define TCODE_LOCK_COMPARE_SWAP (0x10 | EXTCODE_COMPARE_SWAP) 52 27 #define TCODE_LOCK_FETCH_ADD (0x10 | EXTCODE_FETCH_ADD) ··· 61 36 #define RCODE_TYPE_ERROR 0x6 62 37 #define RCODE_ADDRESS_ERROR 0x7 63 38 64 - /* Juju specific rcodes */ 39 + /* Linux firewire-core (Juju) specific rcodes */ 65 40 #define RCODE_SEND_ERROR 0x10 66 41 #define RCODE_CANCELLED 0x11 67 42 #define RCODE_BUSY 0x12
+2
include/linux/input/matrix_keypad.h
··· 44 44 * @active_low: gpio polarity 45 45 * @wakeup: controls whether the device should be set up as wakeup 46 46 * source 47 + * @no_autorepeat: disable key autorepeat 47 48 * 48 49 * This structure represents platform-specific data that use used by 49 50 * matrix_keypad driver to perform proper initialization. ··· 65 64 66 65 bool active_low; 67 66 bool wakeup; 67 + bool no_autorepeat; 68 68 }; 69 69 70 70 /**
+1
include/linux/nfs_fs_sb.h
··· 176 176 #define NFS_CAP_ATIME (1U << 11) 177 177 #define NFS_CAP_CTIME (1U << 12) 178 178 #define NFS_CAP_MTIME (1U << 13) 179 + #define NFS_CAP_POSIX_LOCK (1U << 14) 179 180 180 181 181 182 /* maximum number of slots to use */
+56 -9
include/linux/rcupdate.h
··· 101 101 # define rcu_read_release_sched() \ 102 102 lock_release(&rcu_sched_lock_map, 1, _THIS_IP_) 103 103 104 - static inline int debug_lockdep_rcu_enabled(void) 105 - { 106 - return likely(rcu_scheduler_active && debug_locks); 107 - } 104 + extern int debug_lockdep_rcu_enabled(void); 108 105 109 106 /** 110 107 * rcu_read_lock_held - might we be in RCU read-side critical section? ··· 192 195 193 196 /** 194 197 * rcu_dereference_check - rcu_dereference with debug checking 198 + * @p: The pointer to read, prior to dereferencing 199 + * @c: The conditions under which the dereference will take place 195 200 * 196 - * Do an rcu_dereference(), but check that the context is correct. 197 - * For example, rcu_dereference_check(gp, rcu_read_lock_held()) to 198 - * ensure that the rcu_dereference_check() executes within an RCU 199 - * read-side critical section. It is also possible to check for 200 - * locks being held, for example, by using lockdep_is_held(). 201 + * Do an rcu_dereference(), but check that the conditions under which the 202 + * dereference will take place are correct. Typically the conditions indicate 203 + * the various locking conditions that should be held at that point. The check 204 + * should return true if the conditions are satisfied. 205 + * 206 + * For example: 207 + * 208 + * bar = rcu_dereference_check(foo->bar, rcu_read_lock_held() || 209 + * lockdep_is_held(&foo->lock)); 210 + * 211 + * could be used to indicate to lockdep that foo->bar may only be dereferenced 212 + * if either the RCU read lock is held, or that the lock required to replace 213 + * the bar struct at foo->bar is held. 
214 + * 215 + * Note that the list of conditions may also include indications of when a lock 216 + * need not be held, for example during initialisation or destruction of the 217 + * target struct: 218 + * 219 + * bar = rcu_dereference_check(foo->bar, rcu_read_lock_held() || 220 + * lockdep_is_held(&foo->lock) || 221 + * atomic_read(&foo->usage) == 0); 201 222 */ 202 223 #define rcu_dereference_check(p, c) \ 203 224 ({ \ ··· 224 209 rcu_dereference_raw(p); \ 225 210 }) 226 211 212 + /** 213 + * rcu_dereference_protected - fetch RCU pointer when updates prevented 214 + * 215 + * Return the value of the specified RCU-protected pointer, but omit 216 + * both the smp_read_barrier_depends() and the ACCESS_ONCE(). This 217 + * is useful in cases where update-side locks prevent the value of the 218 + * pointer from changing. Please note that this primitive does -not- 219 + * prevent the compiler from repeating this reference or combining it 220 + * with other references, so it should not be used without protection 221 + * of appropriate locks. 222 + */ 223 + #define rcu_dereference_protected(p, c) \ 224 + ({ \ 225 + if (debug_lockdep_rcu_enabled() && !(c)) \ 226 + lockdep_rcu_dereference(__FILE__, __LINE__); \ 227 + (p); \ 228 + }) 229 + 227 230 #else /* #ifdef CONFIG_PROVE_RCU */ 228 231 229 232 #define rcu_dereference_check(p, c) rcu_dereference_raw(p) 233 + #define rcu_dereference_protected(p, c) (p) 230 234 231 235 #endif /* #else #ifdef CONFIG_PROVE_RCU */ 236 + 237 + /** 238 + * rcu_access_pointer - fetch RCU pointer with no dereferencing 239 + * 240 + * Return the value of the specified RCU-protected pointer, but omit the 241 + * smp_read_barrier_depends() and keep the ACCESS_ONCE(). This is useful 242 + * when the value of this pointer is accessed, but the pointer is not 243 + * dereferenced, for example, when testing an RCU-protected pointer against 244 + * NULL. 
This may also be used in cases where update-side locks prevent 245 + * the value of the pointer from changing, but rcu_dereference_protected() 246 + * is a lighter-weight primitive for this use case. 247 + */ 248 + #define rcu_access_pointer(p) ACCESS_ONCE(p) 232 249 233 250 /** 234 251 * rcu_read_lock - mark the beginning of an RCU read-side critical section.
+1 -1
kernel/power/user.c
··· 420 420 * User space encodes device types as two-byte values, 421 421 * so we need to recode them 422 422 */ 423 - swdev = old_decode_dev(swap_area.dev); 423 + swdev = new_decode_dev(swap_area.dev); 424 424 if (swdev) { 425 425 offset = swap_area.offset; 426 426 data->swap = swap_type_of(swdev, offset, NULL);
+7
kernel/rcupdate.c
··· 69 69 70 70 #ifdef CONFIG_DEBUG_LOCK_ALLOC 71 71 72 + int debug_lockdep_rcu_enabled(void) 73 + { 74 + return rcu_scheduler_active && debug_locks && 75 + current->lockdep_recursion == 0; 76 + } 77 + EXPORT_SYMBOL_GPL(debug_lockdep_rcu_enabled); 78 + 72 79 /** 73 80 * rcu_read_lock_bh_held - might we be in RCU-bh read-side critical section? 74 81 *
+1 -1
lib/Kconfig.debug
··· 356 356 config DEBUG_KMEMLEAK 357 357 bool "Kernel memory leak detector" 358 358 depends on DEBUG_KERNEL && EXPERIMENTAL && !MEMORY_HOTPLUG && \ 359 - (X86 || ARM || PPC || S390 || SUPERH || MICROBLAZE) 359 + (X86 || ARM || PPC || S390 || SPARC64 || SUPERH || MICROBLAZE) 360 360 361 361 select DEBUG_FS if SYSFS 362 362 select STACKTRACE if STACKTRACE_SUPPORT
+1 -1
lib/dma-debug.c
··· 570 570 * Now parse out the first token and use it as the name for the 571 571 * driver to filter for. 572 572 */ 573 - for (i = 0; i < NAME_MAX_LEN; ++i) { 573 + for (i = 0; i < NAME_MAX_LEN - 1; ++i) { 574 574 current_driver_name[i] = buf[i]; 575 575 if (isspace(buf[i]) || buf[i] == ' ' || buf[i] == 0) 576 576 break;
+5 -5
lib/vsprintf.c
··· 408 408 }; 409 409 410 410 struct printf_spec { 411 - u16 type; 412 - s16 field_width; /* width of output field */ 411 + u8 type; /* format_type enum */ 413 412 u8 flags; /* flags to number() */ 414 - u8 base; 415 - s8 precision; /* # of digits/chars */ 416 - u8 qualifier; 413 + u8 base; /* number base, 8, 10 or 16 only */ 414 + u8 qualifier; /* number qualifier, one of 'hHlLtzZ' */ 415 + s16 field_width; /* width of output field */ 416 + s16 precision; /* # of digits/chars */ 417 417 }; 418 418 419 419 static char *number(char *buf, char *end, unsigned long long num,
+70 -40
mm/mmap.c
··· 507 507 struct address_space *mapping = NULL; 508 508 struct prio_tree_root *root = NULL; 509 509 struct file *file = vma->vm_file; 510 - struct anon_vma *anon_vma = NULL; 511 510 long adjust_next = 0; 512 511 int remove_next = 0; 513 512 514 513 if (next && !insert) { 514 + struct vm_area_struct *exporter = NULL; 515 + 515 516 if (end >= next->vm_end) { 516 517 /* 517 518 * vma expands, overlapping all the next, and ··· 520 519 */ 521 520 again: remove_next = 1 + (end > next->vm_end); 522 521 end = next->vm_end; 523 - anon_vma = next->anon_vma; 522 + exporter = next; 524 523 importer = vma; 525 524 } else if (end > next->vm_start) { 526 525 /* ··· 528 527 * mprotect case 5 shifting the boundary up. 529 528 */ 530 529 adjust_next = (end - next->vm_start) >> PAGE_SHIFT; 531 - anon_vma = next->anon_vma; 530 + exporter = next; 532 531 importer = vma; 533 532 } else if (end < vma->vm_end) { 534 533 /* ··· 537 536 * mprotect case 4 shifting the boundary down. 538 537 */ 539 538 adjust_next = - ((vma->vm_end - end) >> PAGE_SHIFT); 540 - anon_vma = next->anon_vma; 539 + exporter = vma; 541 540 importer = next; 542 541 } 543 - } 544 542 545 - /* 546 - * When changing only vma->vm_end, we don't really need anon_vma lock. 547 - */ 548 - if (vma->anon_vma && (insert || importer || start != vma->vm_start)) 549 - anon_vma = vma->anon_vma; 550 - if (anon_vma) { 551 543 /* 552 544 * Easily overlooked: when mprotect shifts the boundary, 553 545 * make sure the expanding vma has anon_vma set if the 554 546 * shrinking vma had, to cover any anon pages imported. 555 547 */ 556 - if (importer && !importer->anon_vma) { 557 - /* Block reverse map lookups until things are set up. 
*/ 558 - if (anon_vma_clone(importer, vma)) { 548 + if (exporter && exporter->anon_vma && !importer->anon_vma) { 549 + if (anon_vma_clone(importer, exporter)) 559 550 return -ENOMEM; 560 - } 561 - importer->anon_vma = anon_vma; 551 + importer->anon_vma = exporter->anon_vma; 562 552 } 563 553 } 564 554 ··· 817 825 } 818 826 819 827 /* 828 + * Rough compatbility check to quickly see if it's even worth looking 829 + * at sharing an anon_vma. 830 + * 831 + * They need to have the same vm_file, and the flags can only differ 832 + * in things that mprotect may change. 833 + * 834 + * NOTE! The fact that we share an anon_vma doesn't _have_ to mean that 835 + * we can merge the two vma's. For example, we refuse to merge a vma if 836 + * there is a vm_ops->close() function, because that indicates that the 837 + * driver is doing some kind of reference counting. But that doesn't 838 + * really matter for the anon_vma sharing case. 839 + */ 840 + static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *b) 841 + { 842 + return a->vm_end == b->vm_start && 843 + mpol_equal(vma_policy(a), vma_policy(b)) && 844 + a->vm_file == b->vm_file && 845 + !((a->vm_flags ^ b->vm_flags) & ~(VM_READ|VM_WRITE|VM_EXEC)) && 846 + b->vm_pgoff == a->vm_pgoff + ((b->vm_start - a->vm_start) >> PAGE_SHIFT); 847 + } 848 + 849 + /* 850 + * Do some basic sanity checking to see if we can re-use the anon_vma 851 + * from 'old'. The 'a'/'b' vma's are in VM order - one of them will be 852 + * the same as 'old', the other will be the new one that is trying 853 + * to share the anon_vma. 854 + * 855 + * NOTE! This runs with mm_sem held for reading, so it is possible that 856 + * the anon_vma of 'old' is concurrently in the process of being set up 857 + * by another page fault trying to merge _that_. But that's ok: if it 858 + * is being set up, that automatically means that it will be a singleton 859 + * acceptable for merging, so we can do all of this optimistically. 
But 860 + * we do that ACCESS_ONCE() to make sure that we never re-load the pointer. 861 + * 862 + * IOW: that the "list_is_singular()" test on the anon_vma_chain only 863 + * matters for the 'stable anon_vma' case (ie the thing we want to avoid 864 + * is to return an anon_vma that is "complex" due to having gone through 865 + * a fork). 866 + * 867 + * We also make sure that the two vma's are compatible (adjacent, 868 + * and with the same memory policies). That's all stable, even with just 869 + * a read lock on the mm_sem. 870 + */ 871 + static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old, struct vm_area_struct *a, struct vm_area_struct *b) 872 + { 873 + if (anon_vma_compatible(a, b)) { 874 + struct anon_vma *anon_vma = ACCESS_ONCE(old->anon_vma); 875 + 876 + if (anon_vma && list_is_singular(&old->anon_vma_chain)) 877 + return anon_vma; 878 + } 879 + return NULL; 880 + } 881 + 882 + /* 820 883 * find_mergeable_anon_vma is used by anon_vma_prepare, to check 821 884 * neighbouring vmas for a suitable anon_vma, before it goes off 822 885 * to allocate a new anon_vma. It checks because a repetitive ··· 881 834 */ 882 835 struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma) 883 836 { 837 + struct anon_vma *anon_vma; 884 838 struct vm_area_struct *near; 885 - unsigned long vm_flags; 886 839 887 840 near = vma->vm_next; 888 841 if (!near) 889 842 goto try_prev; 890 843 891 - /* 892 - * Since only mprotect tries to remerge vmas, match flags 893 - * which might be mprotected into each other later on. 894 - * Neither mlock nor madvise tries to remerge at present, 895 - * so leave their flags as obstructing a merge. 
896 - */ 897 - vm_flags = vma->vm_flags & ~(VM_READ|VM_WRITE|VM_EXEC); 898 - vm_flags |= near->vm_flags & (VM_READ|VM_WRITE|VM_EXEC); 899 - 900 - if (near->anon_vma && vma->vm_end == near->vm_start && 901 - mpol_equal(vma_policy(vma), vma_policy(near)) && 902 - can_vma_merge_before(near, vm_flags, 903 - NULL, vma->vm_file, vma->vm_pgoff + 904 - ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT))) 905 - return near->anon_vma; 844 + anon_vma = reusable_anon_vma(near, vma, near); 845 + if (anon_vma) 846 + return anon_vma; 906 847 try_prev: 907 848 /* 908 849 * It is potentially slow to have to call find_vma_prev here. ··· 903 868 if (!near) 904 869 goto none; 905 870 906 - vm_flags = vma->vm_flags & ~(VM_READ|VM_WRITE|VM_EXEC); 907 - vm_flags |= near->vm_flags & (VM_READ|VM_WRITE|VM_EXEC); 908 - 909 - if (near->anon_vma && near->vm_end == vma->vm_start && 910 - mpol_equal(vma_policy(near), vma_policy(vma)) && 911 - can_vma_merge_after(near, vm_flags, 912 - NULL, vma->vm_file, vma->vm_pgoff)) 913 - return near->anon_vma; 871 + anon_vma = reusable_anon_vma(near, near, vma); 872 + if (anon_vma) 873 + return anon_vma; 914 874 none: 915 875 /* 916 876 * There's no absolute need to look only at touching neighbours:
+20 -4
mm/rmap.c
··· 182 182 { 183 183 struct anon_vma_chain *avc, *pavc; 184 184 185 - list_for_each_entry(pavc, &src->anon_vma_chain, same_vma) { 185 + list_for_each_entry_reverse(pavc, &src->anon_vma_chain, same_vma) { 186 186 avc = anon_vma_chain_alloc(); 187 187 if (!avc) 188 188 goto enomem_failure; ··· 730 730 * @page: the page to add the mapping to 731 731 * @vma: the vm area in which the mapping is added 732 732 * @address: the user virtual address mapped 733 + * @exclusive: the page is exclusively owned by the current process 733 734 */ 734 735 static void __page_set_anon_rmap(struct page *page, 735 - struct vm_area_struct *vma, unsigned long address) 736 + struct vm_area_struct *vma, unsigned long address, int exclusive) 736 737 { 737 738 struct anon_vma *anon_vma = vma->anon_vma; 738 739 739 740 BUG_ON(!anon_vma); 741 + 742 + /* 743 + * If the page isn't exclusively mapped into this vma, 744 + * we must use the _oldest_ possible anon_vma for the 745 + * page mapping! 746 + * 747 + * So take the last AVC chain entry in the vma, which is 748 + * the deepest ancestor, and use the anon_vma from that. 
749 + */ 750 + if (!exclusive) { 751 + struct anon_vma_chain *avc; 752 + avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma); 753 + anon_vma = avc->anon_vma; 754 + } 755 + 740 756 anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON; 741 757 page->mapping = (struct address_space *) anon_vma; 742 758 page->index = linear_page_index(vma, address); ··· 807 791 VM_BUG_ON(!PageLocked(page)); 808 792 VM_BUG_ON(address < vma->vm_start || address >= vma->vm_end); 809 793 if (first) 810 - __page_set_anon_rmap(page, vma, address); 794 + __page_set_anon_rmap(page, vma, address, 0); 811 795 else 812 796 __page_check_anon_rmap(page, vma, address); 813 797 } ··· 829 813 SetPageSwapBacked(page); 830 814 atomic_set(&page->_mapcount, 0); /* increment count (starts at -1) */ 831 815 __inc_zone_page_state(page, NR_ANON_PAGES); 832 - __page_set_anon_rmap(page, vma, address); 816 + __page_set_anon_rmap(page, vma, address, 1); 833 817 if (page_evictable(page, vma)) 834 818 lru_cache_add_lru(page, LRU_ACTIVE_ANON); 835 819 else
+1 -1
net/bridge/br_multicast.c
··· 727 727 group = grec->grec_mca; 728 728 type = grec->grec_type; 729 729 730 - len += grec->grec_nsrcs * 4; 730 + len += ntohs(grec->grec_nsrcs) * 4; 731 731 if (!pskb_may_pull(skb, len)) 732 732 return -EINVAL; 733 733
+6 -2
net/core/dev.c
··· 2015 2015 if (dev->real_num_tx_queues > 1) 2016 2016 queue_index = skb_tx_hash(dev, skb); 2017 2017 2018 - if (sk && rcu_dereference_check(sk->sk_dst_cache, 1)) 2019 - sk_tx_queue_set(sk, queue_index); 2018 + if (sk) { 2019 + struct dst_entry *dst = rcu_dereference_check(sk->sk_dst_cache, 1); 2020 + 2021 + if (dst && skb_dst(skb) == dst) 2022 + sk_tx_queue_set(sk, queue_index); 2023 + } 2020 2024 } 2021 2025 } 2022 2026
+3 -1
net/ipv4/fib_trie.c
··· 209 209 { 210 210 struct node *ret = tnode_get_child(tn, i); 211 211 212 - return rcu_dereference(ret); 212 + return rcu_dereference_check(ret, 213 + rcu_read_lock_held() || 214 + lockdep_rtnl_is_held()); 213 215 } 214 216 215 217 static inline int tnode_child_length(const struct tnode *tn)
+1 -1
net/ipv4/ip_output.c
··· 120 120 newskb->pkt_type = PACKET_LOOPBACK; 121 121 newskb->ip_summed = CHECKSUM_UNNECESSARY; 122 122 WARN_ON(!skb_dst(newskb)); 123 - netif_rx(newskb); 123 + netif_rx_ni(newskb); 124 124 return 0; 125 125 } 126 126
+1 -1
net/ipv6/ip6_output.c
··· 108 108 newskb->ip_summed = CHECKSUM_UNNECESSARY; 109 109 WARN_ON(!skb_dst(newskb)); 110 110 111 - netif_rx(newskb); 111 + netif_rx_ni(newskb); 112 112 return 0; 113 113 } 114 114
+1 -1
net/ipv6/tcp_ipv6.c
··· 1018 1018 skb_reserve(buff, MAX_HEADER + sizeof(struct ipv6hdr) + tot_len); 1019 1019 1020 1020 t1 = (struct tcphdr *) skb_push(buff, tot_len); 1021 - skb_reset_transport_header(skb); 1021 + skb_reset_transport_header(buff); 1022 1022 1023 1023 /* Swap the send and the receive. */ 1024 1024 memset(t1, 0, sizeof(*t1));
-1
net/mac80211/agg-tx.c
··· 184 184 HT_AGG_STATE_REQ_STOP_BA_MSK)) != 185 185 HT_ADDBA_REQUESTED_MSK) { 186 186 spin_unlock_bh(&sta->lock); 187 - *state = HT_AGG_STATE_IDLE; 188 187 #ifdef CONFIG_MAC80211_HT_DEBUG 189 188 printk(KERN_DEBUG "timer expired on tid %d but we are not " 190 189 "(or no longer) expecting addBA response there",
+2
net/mac80211/mlme.c
··· 175 175 ht_changed = conf_is_ht(&local->hw.conf) != enable_ht || 176 176 channel_type != local->hw.conf.channel_type; 177 177 178 + if (local->tmp_channel) 179 + local->tmp_channel_type = channel_type; 178 180 local->oper_channel_type = channel_type; 179 181 180 182 if (ht_changed) {
-2
net/packet/af_packet.c
··· 2228 2228 case SIOCGIFDSTADDR: 2229 2229 case SIOCSIFDSTADDR: 2230 2230 case SIOCSIFFLAGS: 2231 - if (!net_eq(sock_net(sk), &init_net)) 2232 - return -ENOIOCTLCMD; 2233 2231 return inet_dgram_ops.ioctl(sock, cmd, arg); 2234 2232 #endif 2235 2233
+4 -1
net/sunrpc/xprtrdma/svc_rdma_transport.c
··· 679 679 int ret; 680 680 681 681 dprintk("svcrdma: Creating RDMA socket\n"); 682 - 682 + if (sa->sa_family != AF_INET) { 683 + dprintk("svcrdma: Address family %d is not supported.\n", sa->sa_family); 684 + return ERR_PTR(-EAFNOSUPPORT); 685 + } 683 686 cma_xprt = rdma_create_xprt(serv, 1); 684 687 if (!cma_xprt) 685 688 return ERR_PTR(-ENOMEM);
+1 -1
security/selinux/ss/avtab.h
··· 82 82 void avtab_cache_init(void); 83 83 void avtab_cache_destroy(void); 84 84 85 - #define MAX_AVTAB_HASH_BITS 13 85 + #define MAX_AVTAB_HASH_BITS 11 86 86 #define MAX_AVTAB_HASH_BUCKETS (1 << MAX_AVTAB_HASH_BITS) 87 87 #define MAX_AVTAB_HASH_MASK (MAX_AVTAB_HASH_BUCKETS-1) 88 88 #define MAX_AVTAB_SIZE MAX_AVTAB_HASH_BUCKETS
+5 -2
sound/arm/aaci.c
··· 863 863 struct snd_ac97 *ac97; 864 864 int ret; 865 865 866 - writel(0, aaci->base + AC97_POWERDOWN); 867 866 /* 868 867 * Assert AACIRESET for 2us 869 868 */ ··· 1046 1047 1047 1048 writel(0x1fff, aaci->base + AACI_INTCLR); 1048 1049 writel(aaci->maincr, aaci->base + AACI_MAINCR); 1049 - 1050 + /* 1051 + * Fix AC97 read-back failures by reading 1052 + * from an arbitrary AACI register. 1053 + */ 1054 + readl(aaci->base + AACI_CSCH1); 1050 1055 ret = aaci_probe_ac97(aaci); 1051 1056 if (ret) 1052 1057 goto out;
+1
sound/pci/hda/hda_intel.c
··· 2272 2272 SND_PCI_QUIRK(0x1458, 0xa022, "ga-ma770-ud3", POS_FIX_LPIB), 2273 2273 SND_PCI_QUIRK(0x1462, 0x1002, "MSI Wind U115", POS_FIX_LPIB), 2274 2274 SND_PCI_QUIRK(0x1565, 0x820f, "Biostar Microtech", POS_FIX_LPIB), 2275 + SND_PCI_QUIRK(0x1565, 0x8218, "Biostar Microtech", POS_FIX_LPIB), 2275 2276 SND_PCI_QUIRK(0x8086, 0xd601, "eMachines T5212", POS_FIX_LPIB), 2276 2277 {} 2277 2278 };
+161 -23
sound/pci/hda/patch_realtek.c
··· 230 230 ALC888_ACER_ASPIRE_7730G, 231 231 ALC883_MEDION, 232 232 ALC883_MEDION_MD2, 233 + ALC883_MEDION_WIM2160, 233 234 ALC883_LAPTOP_EAPD, 234 235 ALC883_LENOVO_101E_2ch, 235 236 ALC883_LENOVO_NB0763, ··· 1390 1389 1391 1390 static void alc_pick_fixup(struct hda_codec *codec, 1392 1391 const struct snd_pci_quirk *quirk, 1393 - const struct alc_fixup *fix) 1392 + const struct alc_fixup *fix, 1393 + int pre_init) 1394 1394 { 1395 1395 const struct alc_pincfg *cfg; 1396 1396 1397 1397 quirk = snd_pci_quirk_lookup(codec->bus->pci, quirk); 1398 1398 if (!quirk) 1399 1399 return; 1400 - 1401 1400 fix += quirk->value; 1402 1401 cfg = fix->pins; 1403 - if (cfg) { 1402 + if (pre_init && cfg) { 1403 + #ifdef CONFIG_SND_DEBUG_VERBOSE 1404 + snd_printdd(KERN_INFO "hda_codec: %s: Apply pincfg for %s\n", 1405 + codec->chip_name, quirk->name); 1406 + #endif 1404 1407 for (; cfg->nid; cfg++) 1405 1408 snd_hda_codec_set_pincfg(codec, cfg->nid, cfg->val); 1406 1409 } 1407 - if (fix->verbs) 1410 + if (!pre_init && fix->verbs) { 1411 + #ifdef CONFIG_SND_DEBUG_VERBOSE 1412 + snd_printdd(KERN_INFO "hda_codec: %s: Apply fix-verbs for %s\n", 1413 + codec->chip_name, quirk->name); 1414 + #endif 1408 1415 add_verb(codec->spec, fix->verbs); 1416 + } 1409 1417 } 1410 1418 1411 1419 static int alc_read_coef_idx(struct hda_codec *codec, ··· 4818 4808 } 4819 4809 } 4820 4810 4811 + static void alc880_auto_init_input_src(struct hda_codec *codec) 4812 + { 4813 + struct alc_spec *spec = codec->spec; 4814 + int c; 4815 + 4816 + for (c = 0; c < spec->num_adc_nids; c++) { 4817 + unsigned int mux_idx; 4818 + const struct hda_input_mux *imux; 4819 + mux_idx = c >= spec->num_mux_defs ? 
0 : c; 4820 + imux = &spec->input_mux[mux_idx]; 4821 + if (!imux->num_items && mux_idx > 0) 4822 + imux = &spec->input_mux[0]; 4823 + if (imux) 4824 + snd_hda_codec_write(codec, spec->adc_nids[c], 0, 4825 + AC_VERB_SET_CONNECT_SEL, 4826 + imux->items[0].index); 4827 + } 4828 + } 4829 + 4821 4830 /* parse the BIOS configuration and set up the alc_spec */ 4822 4831 /* return 1 if successful, 0 if the proper config is not found, 4823 4832 * or a negative error code ··· 4915 4886 alc880_auto_init_multi_out(codec); 4916 4887 alc880_auto_init_extra_out(codec); 4917 4888 alc880_auto_init_analog_input(codec); 4889 + alc880_auto_init_input_src(codec); 4918 4890 if (spec->unsol_event) 4919 4891 alc_inithook(codec); 4920 4892 } ··· 6427 6397 } 6428 6398 } 6429 6399 6400 + #define alc260_auto_init_input_src alc880_auto_init_input_src 6401 + 6430 6402 /* 6431 6403 * generic initialization of ADC, input mixers and output mixers 6432 6404 */ ··· 6515 6483 struct alc_spec *spec = codec->spec; 6516 6484 alc260_auto_init_multi_out(codec); 6517 6485 alc260_auto_init_analog_input(codec); 6486 + alc260_auto_init_input_src(codec); 6518 6487 if (spec->unsol_event) 6519 6488 alc_inithook(codec); 6520 6489 } ··· 8488 8455 { } /* end */ 8489 8456 }; 8490 8457 8458 + static struct snd_kcontrol_new alc883_medion_wim2160_mixer[] = { 8459 + HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT), 8460 + HDA_BIND_MUTE("Front Playback Switch", 0x0c, 2, HDA_INPUT), 8461 + HDA_CODEC_MUTE("Speaker Playback Switch", 0x15, 0x0, HDA_OUTPUT), 8462 + HDA_CODEC_MUTE("Headphone Playback Switch", 0x1a, 0x0, HDA_OUTPUT), 8463 + HDA_CODEC_VOLUME("Line Playback Volume", 0x08, 0x0, HDA_INPUT), 8464 + HDA_CODEC_MUTE("Line Playback Switch", 0x08, 0x0, HDA_INPUT), 8465 + { } /* end */ 8466 + }; 8467 + 8468 + static struct hda_verb alc883_medion_wim2160_verbs[] = { 8469 + /* Unmute front mixer */ 8470 + {0x0c, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 8471 + {0x0c, AC_VERB_SET_AMP_GAIN_MUTE, 
AMP_IN_UNMUTE(1)}, 8472 + 8473 + /* Set speaker pin to front mixer */ 8474 + {0x15, AC_VERB_SET_CONNECT_SEL, 0x00}, 8475 + 8476 + /* Init headphone pin */ 8477 + {0x1a, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_HP}, 8478 + {0x1a, AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE}, 8479 + {0x1a, AC_VERB_SET_CONNECT_SEL, 0x00}, 8480 + {0x1a, AC_VERB_SET_UNSOLICITED_ENABLE, ALC880_HP_EVENT | AC_USRSP_EN}, 8481 + 8482 + { } /* end */ 8483 + }; 8484 + 8485 + /* toggle speaker-output according to the hp-jack state */ 8486 + static void alc883_medion_wim2160_setup(struct hda_codec *codec) 8487 + { 8488 + struct alc_spec *spec = codec->spec; 8489 + 8490 + spec->autocfg.hp_pins[0] = 0x1a; 8491 + spec->autocfg.speaker_pins[0] = 0x15; 8492 + } 8493 + 8491 8494 static struct snd_kcontrol_new alc883_acer_aspire_mixer[] = { 8492 8495 HDA_CODEC_VOLUME("Front Playback Volume", 0x0c, 0x0, HDA_OUTPUT), 8493 8496 HDA_BIND_MUTE("Front Playback Switch", 0x0c, 2, HDA_INPUT), ··· 9233 9164 [ALC888_ACER_ASPIRE_7730G] = "acer-aspire-7730g", 9234 9165 [ALC883_MEDION] = "medion", 9235 9166 [ALC883_MEDION_MD2] = "medion-md2", 9167 + [ALC883_MEDION_WIM2160] = "medion-wim2160", 9236 9168 [ALC883_LAPTOP_EAPD] = "laptop-eapd", 9237 9169 [ALC883_LENOVO_101E_2ch] = "lenovo-101e", 9238 9170 [ALC883_LENOVO_NB0763] = "lenovo-nb0763", ··· 9350 9280 SND_PCI_QUIRK(0x1462, 0xaa08, "MSI", ALC883_TARGA_2ch_DIG), 9351 9281 9352 9282 SND_PCI_QUIRK(0x147b, 0x1083, "Abit IP35-PRO", ALC883_6ST_DIG), 9283 + SND_PCI_QUIRK(0x1558, 0x0571, "Clevo laptop M570U", ALC883_3ST_6ch_DIG), 9353 9284 SND_PCI_QUIRK(0x1558, 0x0721, "Clevo laptop M720R", ALC883_CLEVO_M720), 9354 9285 SND_PCI_QUIRK(0x1558, 0x0722, "Clevo laptop M720SR", ALC883_CLEVO_M720), 9355 9286 SND_PCI_QUIRK(0x1558, 0x5409, "Clevo laptop M540R", ALC883_CLEVO_M540R), ··· 9887 9816 .input_mux = &alc883_capture_source, 9888 9817 .unsol_event = alc_automute_amp_unsol_event, 9889 9818 .setup = alc883_medion_md2_setup, 9819 + .init_hook = alc_automute_amp, 9820 + }, 9821 + 
[ALC883_MEDION_WIM2160] = { 9822 + .mixers = { alc883_medion_wim2160_mixer }, 9823 + .init_verbs = { alc883_init_verbs, alc883_medion_wim2160_verbs }, 9824 + .num_dacs = ARRAY_SIZE(alc883_dac_nids), 9825 + .dac_nids = alc883_dac_nids, 9826 + .dig_out_nid = ALC883_DIGOUT_NID, 9827 + .num_adc_nids = ARRAY_SIZE(alc883_adc_nids), 9828 + .adc_nids = alc883_adc_nids, 9829 + .num_channel_mode = ARRAY_SIZE(alc883_3ST_2ch_modes), 9830 + .channel_mode = alc883_3ST_2ch_modes, 9831 + .input_mux = &alc883_capture_source, 9832 + .unsol_event = alc_automute_amp_unsol_event, 9833 + .setup = alc883_medion_wim2160_setup, 9890 9834 .init_hook = alc_automute_amp, 9891 9835 }, 9892 9836 [ALC883_LAPTOP_EAPD] = { ··· 10449 10363 board_config = ALC882_AUTO; 10450 10364 } 10451 10365 10452 - alc_pick_fixup(codec, alc882_fixup_tbl, alc882_fixups); 10366 + if (board_config == ALC882_AUTO) 10367 + alc_pick_fixup(codec, alc882_fixup_tbl, alc882_fixups, 1); 10453 10368 10454 10369 if (board_config == ALC882_AUTO) { 10455 10370 /* automatic parse from the BIOS config */ ··· 10522 10435 10523 10436 set_capture_mixer(codec); 10524 10437 set_beep_amp(spec, 0x0b, 0x05, HDA_INPUT); 10438 + 10439 + if (board_config == ALC882_AUTO) 10440 + alc_pick_fixup(codec, alc882_fixup_tbl, alc882_fixups, 0); 10525 10441 10526 10442 spec->vmaster_nid = 0x0c; 10527 10443 ··· 12906 12816 dac = 0x02; 12907 12817 break; 12908 12818 case 0x15: 12819 + case 0x21: /* ALC269vb has this pin, too */ 12909 12820 dac = 0x03; 12910 12821 break; 12911 12822 default: ··· 13826 13735 } 13827 13736 } 13828 13737 13738 + static void alc269_laptop_amic_setup(struct hda_codec *codec) 13739 + { 13740 + struct alc_spec *spec = codec->spec; 13741 + spec->autocfg.hp_pins[0] = 0x15; 13742 + spec->autocfg.speaker_pins[0] = 0x14; 13743 + spec->ext_mic.pin = 0x18; 13744 + spec->ext_mic.mux_idx = 0; 13745 + spec->int_mic.pin = 0x19; 13746 + spec->int_mic.mux_idx = 1; 13747 + spec->auto_mic = 1; 13748 + } 13749 + 13829 13750 static void 
alc269_laptop_dmic_setup(struct hda_codec *codec) 13830 13751 { 13831 13752 struct alc_spec *spec = codec->spec; ··· 13850 13747 spec->auto_mic = 1; 13851 13748 } 13852 13749 13853 - static void alc269vb_laptop_dmic_setup(struct hda_codec *codec) 13750 + static void alc269vb_laptop_amic_setup(struct hda_codec *codec) 13854 13751 { 13855 13752 struct alc_spec *spec = codec->spec; 13856 - spec->autocfg.hp_pins[0] = 0x15; 13857 - spec->autocfg.speaker_pins[0] = 0x14; 13858 - spec->ext_mic.pin = 0x18; 13859 - spec->ext_mic.mux_idx = 0; 13860 - spec->int_mic.pin = 0x12; 13861 - spec->int_mic.mux_idx = 6; 13862 - spec->auto_mic = 1; 13863 - } 13864 - 13865 - static void alc269_laptop_amic_setup(struct hda_codec *codec) 13866 - { 13867 - struct alc_spec *spec = codec->spec; 13868 - spec->autocfg.hp_pins[0] = 0x15; 13753 + spec->autocfg.hp_pins[0] = 0x21; 13869 13754 spec->autocfg.speaker_pins[0] = 0x14; 13870 13755 spec->ext_mic.pin = 0x18; 13871 13756 spec->ext_mic.mux_idx = 0; 13872 13757 spec->int_mic.pin = 0x19; 13873 13758 spec->int_mic.mux_idx = 1; 13759 + spec->auto_mic = 1; 13760 + } 13761 + 13762 + static void alc269vb_laptop_dmic_setup(struct hda_codec *codec) 13763 + { 13764 + struct alc_spec *spec = codec->spec; 13765 + spec->autocfg.hp_pins[0] = 0x21; 13766 + spec->autocfg.speaker_pins[0] = 0x14; 13767 + spec->ext_mic.pin = 0x18; 13768 + spec->ext_mic.mux_idx = 0; 13769 + spec->int_mic.pin = 0x12; 13770 + spec->int_mic.mux_idx = 6; 13874 13771 spec->auto_mic = 1; 13875 13772 } 13876 13773 ··· 14078 13975 alc_inithook(codec); 14079 13976 } 14080 13977 13978 + enum { 13979 + ALC269_FIXUP_SONY_VAIO, 13980 + }; 13981 + 13982 + const static struct hda_verb alc269_sony_vaio_fixup_verbs[] = { 13983 + {0x19, AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_VREFGRD}, 13984 + {} 13985 + }; 13986 + 13987 + static const struct alc_fixup alc269_fixups[] = { 13988 + [ALC269_FIXUP_SONY_VAIO] = { 13989 + .verbs = alc269_sony_vaio_fixup_verbs 13990 + }, 13991 + }; 13992 + 13993 + static 
struct snd_pci_quirk alc269_fixup_tbl[] = { 13994 + SND_PCI_QUIRK(0x104d, 0x9071, "Sony VAIO", ALC269_FIXUP_SONY_VAIO), 13995 + {} 13996 + }; 13997 + 13998 + 14081 13999 /* 14082 14000 * configuration and preset 14083 14001 */ ··· 14158 14034 ALC269_DMIC), 14159 14035 SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005HA", ALC269_DMIC), 14160 14036 SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005HA", ALC269_DMIC), 14161 - SND_PCI_QUIRK(0x104d, 0x9071, "SONY XTB", ALC269_DMIC), 14037 + SND_PCI_QUIRK(0x104d, 0x9071, "Sony VAIO", ALC269_AUTO), 14162 14038 SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook ICH9M-based", ALC269_LIFEBOOK), 14163 14039 SND_PCI_QUIRK(0x152d, 0x1778, "Quanta ON1", ALC269_DMIC), 14164 14040 SND_PCI_QUIRK(0x1734, 0x115d, "FSC Amilo", ALC269_FUJITSU), ··· 14232 14108 .num_channel_mode = ARRAY_SIZE(alc269_modes), 14233 14109 .channel_mode = alc269_modes, 14234 14110 .unsol_event = alc269_laptop_unsol_event, 14235 - .setup = alc269_laptop_amic_setup, 14111 + .setup = alc269vb_laptop_amic_setup, 14236 14112 .init_hook = alc269_laptop_inithook, 14237 14113 }, 14238 14114 [ALC269VB_DMIC] = { ··· 14312 14188 board_config = ALC269_AUTO; 14313 14189 } 14314 14190 14191 + if (board_config == ALC269_AUTO) 14192 + alc_pick_fixup(codec, alc269_fixup_tbl, alc269_fixups, 1); 14193 + 14315 14194 if (board_config == ALC269_AUTO) { 14316 14195 /* automatic parse from the BIOS config */ 14317 14196 err = alc269_parse_auto_config(codec); ··· 14366 14239 if (!spec->cap_mixer) 14367 14240 set_capture_mixer(codec); 14368 14241 set_beep_amp(spec, 0x0b, 0x04, HDA_INPUT); 14242 + 14243 + if (board_config == ALC269_AUTO) 14244 + alc_pick_fixup(codec, alc269_fixup_tbl, alc269_fixups, 0); 14369 14245 14370 14246 spec->vmaster_nid = 0x02; 14371 14247 ··· 15458 15328 board_config = ALC861_AUTO; 15459 15329 } 15460 15330 15461 - alc_pick_fixup(codec, alc861_fixup_tbl, alc861_fixups); 15331 + if (board_config == ALC861_AUTO) 15332 + alc_pick_fixup(codec, alc861_fixup_tbl, alc861_fixups, 1); 15462 15333 
15463 15334 if (board_config == ALC861_AUTO) { 15464 15335 /* automatic parse from the BIOS config */ ··· 15495 15364 set_beep_amp(spec, 0x23, 0, HDA_OUTPUT); 15496 15365 15497 15366 spec->vmaster_nid = 0x03; 15367 + 15368 + if (board_config == ALC861_AUTO) 15369 + alc_pick_fixup(codec, alc861_fixup_tbl, alc861_fixups, 0); 15498 15370 15499 15371 codec->patch_ops = alc_patch_ops; 15500 15372 if (board_config == ALC861_AUTO) { ··· 16433 16299 board_config = ALC861VD_AUTO; 16434 16300 } 16435 16301 16436 - alc_pick_fixup(codec, alc861vd_fixup_tbl, alc861vd_fixups); 16302 + if (board_config == ALC861VD_AUTO) 16303 + alc_pick_fixup(codec, alc861vd_fixup_tbl, alc861vd_fixups, 1); 16437 16304 16438 16305 if (board_config == ALC861VD_AUTO) { 16439 16306 /* automatic parse from the BIOS config */ ··· 16481 16346 set_beep_amp(spec, 0x0b, 0x05, HDA_INPUT); 16482 16347 16483 16348 spec->vmaster_nid = 0x02; 16349 + 16350 + if (board_config == ALC861VD_AUTO) 16351 + alc_pick_fixup(codec, alc861vd_fixup_tbl, alc861vd_fixups, 0); 16484 16352 16485 16353 codec->patch_ops = alc_patch_ops; 16486 16354
+24 -17
sound/pci/hda/patch_via.c
··· 476 476 knew->name = kstrdup(tmpl->name, GFP_KERNEL); 477 477 if (!knew->name) 478 478 return NULL; 479 - return 0; 479 + return knew; 480 480 } 481 481 482 482 static void via_free_kctls(struct hda_codec *codec) ··· 1215 1215 }, 1216 1216 }; 1217 1217 1218 - static int via_hp_build(struct via_spec *spec) 1218 + static int via_hp_build(struct hda_codec *codec) 1219 1219 { 1220 + struct via_spec *spec = codec->spec; 1220 1221 struct snd_kcontrol_new *knew; 1221 1222 hda_nid_t nid; 1222 - 1223 - knew = via_clone_control(spec, &via_hp_mixer[0]); 1224 - if (knew == NULL) 1225 - return -ENOMEM; 1223 + int nums; 1224 + hda_nid_t conn[HDA_MAX_CONNECTIONS]; 1226 1225 1227 1226 switch (spec->codec_type) { 1228 1227 case VT1718S: ··· 1237 1238 nid = spec->autocfg.hp_pins[0]; 1238 1239 break; 1239 1240 } 1241 + 1242 + nums = snd_hda_get_connections(codec, nid, conn, HDA_MAX_CONNECTIONS); 1243 + if (nums <= 1) 1244 + return 0; 1245 + 1246 + knew = via_clone_control(spec, &via_hp_mixer[0]); 1247 + if (knew == NULL) 1248 + return -ENOMEM; 1240 1249 1241 1250 knew->subdevice = HDA_SUBDEV_NID_FLAG | nid; 1242 1251 knew->private_value = nid; ··· 2568 2561 spec->input_mux = &spec->private_imux[0]; 2569 2562 2570 2563 if (spec->hp_mux) 2571 - via_hp_build(spec); 2564 + via_hp_build(codec); 2572 2565 2573 2566 via_smart51_build(spec); 2574 2567 return 1; ··· 3094 3087 spec->input_mux = &spec->private_imux[0]; 3095 3088 3096 3089 if (spec->hp_mux) 3097 - via_hp_build(spec); 3090 + via_hp_build(codec); 3098 3091 3099 3092 via_smart51_build(spec); 3100 3093 return 1; ··· 3661 3654 spec->input_mux = &spec->private_imux[0]; 3662 3655 3663 3656 if (spec->hp_mux) 3664 - via_hp_build(spec); 3657 + via_hp_build(codec); 3665 3658 3666 3659 via_smart51_build(spec); 3667 3660 return 1; ··· 4147 4140 spec->input_mux = &spec->private_imux[0]; 4148 4141 4149 4142 if (spec->hp_mux) 4150 - via_hp_build(spec); 4143 + via_hp_build(codec); 4151 4144 4152 4145 via_smart51_build(spec); 4153 4146 return 
1; ··· 4517 4510 spec->input_mux = &spec->private_imux[0]; 4518 4511 4519 4512 if (spec->hp_mux) 4520 - via_hp_build(spec); 4513 + via_hp_build(codec); 4521 4514 4522 4515 return 1; 4523 4516 } ··· 4937 4930 spec->input_mux = &spec->private_imux[0]; 4938 4931 4939 4932 if (spec->hp_mux) 4940 - via_hp_build(spec); 4933 + via_hp_build(codec); 4941 4934 4942 4935 via_smart51_build(spec); 4943 4936 ··· 5432 5425 spec->input_mux = &spec->private_imux[0]; 5433 5426 5434 5427 if (spec->hp_mux) 5435 - via_hp_build(spec); 5428 + via_hp_build(codec); 5436 5429 5437 5430 via_smart51_build(spec); 5438 5431 ··· 5788 5781 spec->input_mux = &spec->private_imux[0]; 5789 5782 5790 5783 if (spec->hp_mux) 5791 - via_hp_build(spec); 5784 + via_hp_build(codec); 5792 5785 5793 5786 return 1; 5794 5787 } ··· 6007 6000 6008 6001 /* Line-Out: PortE */ 6009 6002 err = via_add_control(spec, VIA_CTL_WIDGET_VOL, 6010 - "Master Front Playback Volume", 6003 + "Front Playback Volume", 6011 6004 HDA_COMPOSE_AMP_VAL(0x8, 3, 0, HDA_OUTPUT)); 6012 6005 if (err < 0) 6013 6006 return err; 6014 6007 err = via_add_control(spec, VIA_CTL_WIDGET_BIND_PIN_MUTE, 6015 - "Master Front Playback Switch", 6008 + "Front Playback Switch", 6016 6009 HDA_COMPOSE_AMP_VAL(0x28, 3, 0, HDA_OUTPUT)); 6017 6010 if (err < 0) 6018 6011 return err; ··· 6137 6130 spec->input_mux = &spec->private_imux[0]; 6138 6131 6139 6132 if (spec->hp_mux) 6140 - via_hp_build(spec); 6133 + via_hp_build(codec); 6141 6134 6142 6135 return 1; 6143 6136 }
-1
sound/soc/codecs/wm2000.c
··· 23 23 24 24 #include <linux/module.h> 25 25 #include <linux/moduleparam.h> 26 - #include <linux/version.h> 27 26 #include <linux/kernel.h> 28 27 #include <linux/init.h> 29 28 #include <linux/firmware.h>
+14 -1
sound/soc/imx/imx-pcm-dma-mx2.c
··· 71 71 72 72 static void snd_imx_dma_err_callback(int channel, void *data, int err) 73 73 { 74 - pr_err("DMA error callback called\n"); 74 + struct snd_pcm_substream *substream = data; 75 + struct snd_soc_pcm_runtime *rtd = substream->private_data; 76 + struct imx_pcm_dma_params *dma_params = rtd->dai->cpu_dai->dma_data; 77 + struct snd_pcm_runtime *runtime = substream->runtime; 78 + struct imx_pcm_runtime_data *iprtd = runtime->private_data; 79 + int ret; 75 80 76 81 pr_err("DMA timeout on channel %d -%s%s%s%s\n", 77 82 channel, ··· 84 79 err & IMX_DMA_ERR_REQUEST ? " request" : "", 85 80 err & IMX_DMA_ERR_TRANSFER ? " transfer" : "", 86 81 err & IMX_DMA_ERR_BUFFER ? " buffer" : ""); 82 + 83 + imx_dma_disable(iprtd->dma); 84 + ret = imx_dma_setup_sg(iprtd->dma, iprtd->sg_list, iprtd->sg_count, 85 + IMX_DMA_LENGTH_LOOP, dma_params->dma_addr, 86 + substream->stream == SNDRV_PCM_STREAM_PLAYBACK ? 87 + DMA_MODE_WRITE : DMA_MODE_READ); 88 + if (!ret) 89 + imx_dma_enable(iprtd->dma); 87 90 } 88 91 89 92 static int imx_ssi_dma_alloc(struct snd_pcm_substream *substream)
+29 -26
sound/soc/imx/imx-pcm-fiq.c
··· 39 39 unsigned long offset; 40 40 unsigned long last_offset; 41 41 unsigned long size; 42 - struct timer_list timer; 43 - int poll_time; 42 + struct hrtimer hrt; 43 + int poll_time_ns; 44 + struct snd_pcm_substream *substream; 45 + atomic_t running; 44 46 }; 45 47 46 - static inline void imx_ssi_set_next_poll(struct imx_pcm_runtime_data *iprtd) 48 + static enum hrtimer_restart snd_hrtimer_callback(struct hrtimer *hrt) 47 49 { 48 - iprtd->timer.expires = jiffies + iprtd->poll_time; 49 - } 50 - 51 - static void imx_ssi_timer_callback(unsigned long data) 52 - { 53 - struct snd_pcm_substream *substream = (void *)data; 50 + struct imx_pcm_runtime_data *iprtd = 51 + container_of(hrt, struct imx_pcm_runtime_data, hrt); 52 + struct snd_pcm_substream *substream = iprtd->substream; 54 53 struct snd_pcm_runtime *runtime = substream->runtime; 55 - struct imx_pcm_runtime_data *iprtd = runtime->private_data; 56 54 struct pt_regs regs; 57 55 unsigned long delta; 56 + 57 + if (!atomic_read(&iprtd->running)) 58 + return HRTIMER_NORESTART; 58 59 59 60 get_fiq_regs(&regs); 60 61 ··· 73 72 74 73 /* If we've transferred at least a period then report it and 75 74 * reset our poll time */ 76 - if (delta >= runtime->period_size) { 75 + if (delta >= iprtd->period) { 77 76 snd_pcm_period_elapsed(substream); 78 77 iprtd->last_offset = iprtd->offset; 79 - 80 - imx_ssi_set_next_poll(iprtd); 81 78 } 82 79 83 - /* Restart the timer; if we didn't report we'll run on the next tick */ 84 - add_timer(&iprtd->timer); 80 + hrtimer_forward_now(hrt, ns_to_ktime(iprtd->poll_time_ns)); 85 81 82 + return HRTIMER_RESTART; 86 83 } 87 84 88 85 static struct fiq_handler fh = { ··· 98 99 iprtd->period = params_period_bytes(params) ; 99 100 iprtd->offset = 0; 100 101 iprtd->last_offset = 0; 101 - iprtd->poll_time = HZ / (params_rate(params) / params_period_size(params)); 102 - 102 + iprtd->poll_time_ns = 1000000000 / params_rate(params) * 103 + params_period_size(params); 103 104 
snd_pcm_set_runtime_buffer(substream, &substream->dma_buffer); 104 105 105 106 return 0; ··· 134 135 case SNDRV_PCM_TRIGGER_START: 135 136 case SNDRV_PCM_TRIGGER_RESUME: 136 137 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 137 - imx_ssi_set_next_poll(iprtd); 138 - add_timer(&iprtd->timer); 138 + atomic_set(&iprtd->running, 1); 139 + hrtimer_start(&iprtd->hrt, ns_to_ktime(iprtd->poll_time_ns), 140 + HRTIMER_MODE_REL); 139 141 if (++fiq_enable == 1) 140 142 enable_fiq(imx_pcm_fiq); 141 143 ··· 145 145 case SNDRV_PCM_TRIGGER_STOP: 146 146 case SNDRV_PCM_TRIGGER_SUSPEND: 147 147 case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 148 - del_timer(&iprtd->timer); 148 + atomic_set(&iprtd->running, 0); 149 + 149 150 if (--fiq_enable == 0) 150 151 disable_fiq(imx_pcm_fiq); 151 - 152 152 153 153 break; 154 154 default: ··· 180 180 .buffer_bytes_max = IMX_SSI_DMABUF_SIZE, 181 181 .period_bytes_min = 128, 182 182 .period_bytes_max = 16 * 1024, 183 - .periods_min = 2, 183 + .periods_min = 4, 184 184 .periods_max = 255, 185 185 .fifo_size = 0, 186 186 }; ··· 194 194 iprtd = kzalloc(sizeof(*iprtd), GFP_KERNEL); 195 195 runtime->private_data = iprtd; 196 196 197 - init_timer(&iprtd->timer); 198 - iprtd->timer.data = (unsigned long)substream; 199 - iprtd->timer.function = imx_ssi_timer_callback; 197 + iprtd->substream = substream; 198 + 199 + atomic_set(&iprtd->running, 0); 200 + hrtimer_init(&iprtd->hrt, CLOCK_MONOTONIC, HRTIMER_MODE_REL); 201 + iprtd->hrt.function = snd_hrtimer_callback; 200 202 201 203 ret = snd_pcm_hw_constraint_integer(substream->runtime, 202 204 SNDRV_PCM_HW_PARAM_PERIODS); ··· 214 212 struct snd_pcm_runtime *runtime = substream->runtime; 215 213 struct imx_pcm_runtime_data *iprtd = runtime->private_data; 216 214 217 - del_timer_sync(&iprtd->timer); 215 + hrtimer_cancel(&iprtd->hrt); 216 + 218 217 kfree(iprtd); 219 218 220 219 return 0;
+2 -1
sound/soc/imx/imx-ssi.c
··· 656 656 dai->private_data = ssi; 657 657 658 658 if ((cpu_is_mx27() || cpu_is_mx21()) && 659 - !(ssi->flags & IMX_SSI_USE_AC97)) { 659 + !(ssi->flags & IMX_SSI_USE_AC97) && 660 + (ssi->flags & IMX_SSI_DMA)) { 660 661 ssi->flags |= IMX_SSI_DMA; 661 662 platform = imx_ssi_dma_mx2_init(pdev, ssi); 662 663 } else
+18 -6
sound/usb/usbmidi.c
··· 986 986 DEFINE_WAIT(wait); 987 987 long timeout = msecs_to_jiffies(50); 988 988 989 + if (ep->umidi->disconnected) 990 + return; 989 991 /* 990 992 * The substream buffer is empty, but some data might still be in the 991 993 * currently active URBs, so we have to wait for those to complete. ··· 1125 1123 * Frees an output endpoint. 1126 1124 * May be called when ep hasn't been initialized completely. 1127 1125 */ 1128 - static void snd_usbmidi_out_endpoint_delete(struct snd_usb_midi_out_endpoint* ep) 1126 + static void snd_usbmidi_out_endpoint_clear(struct snd_usb_midi_out_endpoint *ep) 1129 1127 { 1130 1128 unsigned int i; 1131 1129 1132 1130 for (i = 0; i < OUTPUT_URBS; ++i) 1133 - if (ep->urbs[i].urb) 1131 + if (ep->urbs[i].urb) { 1134 1132 free_urb_and_buffer(ep->umidi, ep->urbs[i].urb, 1135 1133 ep->max_transfer); 1134 + ep->urbs[i].urb = NULL; 1135 + } 1136 + } 1137 + 1138 + static void snd_usbmidi_out_endpoint_delete(struct snd_usb_midi_out_endpoint *ep) 1139 + { 1140 + snd_usbmidi_out_endpoint_clear(ep); 1136 1141 kfree(ep); 1137 1142 } 1138 1143 ··· 1271 1262 usb_kill_urb(ep->out->urbs[j].urb); 1272 1263 if (umidi->usb_protocol_ops->finish_out_endpoint) 1273 1264 umidi->usb_protocol_ops->finish_out_endpoint(ep->out); 1265 + ep->out->active_urbs = 0; 1266 + if (ep->out->drain_urbs) { 1267 + ep->out->drain_urbs = 0; 1268 + wake_up(&ep->out->drain_wait); 1269 + } 1274 1270 } 1275 1271 if (ep->in) 1276 1272 for (j = 0; j < INPUT_URBS; ++j) 1277 1273 usb_kill_urb(ep->in->urbs[j]); 1278 1274 /* free endpoints here; later call can result in Oops */ 1279 - if (ep->out) { 1280 - snd_usbmidi_out_endpoint_delete(ep->out); 1281 - ep->out = NULL; 1282 - } 1275 + if (ep->out) 1276 + snd_usbmidi_out_endpoint_clear(ep->out); 1283 1277 if (ep->in) { 1284 1278 snd_usbmidi_in_endpoint_delete(ep->in); 1285 1279 ep->in = NULL;