Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'sh/stable-updates'

+5908 -3741
+2
Documentation/filesystems/00-INDEX
··· 16 16 - information about the BeOS filesystem for Linux. 17 17 bfs.txt 18 18 - info for the SCO UnixWare Boot Filesystem (BFS). 19 + ceph.txt 20 + - info for the Ceph Distributed File System 19 21 cifs.txt 20 22 - description of the CIFS filesystem. 21 23 coda.txt
+6 -5
Documentation/filesystems/ceph.txt
··· 8 8 9 9 * POSIX semantics 10 10 * Seamless scaling from 1 to many thousands of nodes 11 - * High availability and reliability. No single points of failure. 11 + * High availability and reliability. No single point of failure. 12 12 * N-way replication of data across storage nodes 13 13 * Fast recovery from node failures 14 14 * Automatic rebalancing of data on node addition/removal ··· 94 94 95 95 wsize=X 96 96 Specify the maximum write size in bytes. By default there is no 97 - maximu. Ceph will normally size writes based on the file stripe 97 + maximum. Ceph will normally size writes based on the file stripe 98 98 size. 99 99 100 100 rsize=X ··· 115 115 number of entries in that directory. 116 116 117 117 nocrc 118 - Disable CRC32C calculation for data writes. If set, the OSD 118 + Disable CRC32C calculation for data writes. If set, the storage node 119 119 must rely on TCP's error correction to detect data corruption 120 120 in the data payload. 121 121 ··· 133 133 http://ceph.newdream.net/ 134 134 135 135 The Linux kernel client source tree is available at 136 - git://ceph.newdream.net/linux-ceph-client.git 136 + git://ceph.newdream.net/git/ceph-client.git 137 + git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 137 138 138 139 and the source for the full system is at 139 - git://ceph.newdream.net/ceph.git 140 + git://ceph.newdream.net/git/ceph.git
+54
Documentation/powerpc/dts-bindings/fsl/cpm_qe/qe.txt
··· 21 21 - fsl,qe-num-snums: define how many serial number(SNUM) the QE can use for the 22 22 threads. 23 23 24 + Optional properties: 25 + - fsl,firmware-phandle: 26 + Usage: required only if there is no fsl,qe-firmware child node 27 + Value type: <phandle> 28 + Definition: Points to a firmware node (see "QE Firmware Node" below) 29 + that contains the firmware that should be uploaded for this QE. 30 + The compatible property for the firmware node should say, 31 + "fsl,qe-firmware". 32 + 24 33 Recommended properties 25 34 - brg-frequency : the internal clock source frequency for baud-rate 26 35 generators in Hz. ··· 68 59 reg = <0 c000>; 69 60 }; 70 61 }; 62 + 63 + * QE Firmware Node 64 + 65 + This node defines a firmware binary that is embedded in the device tree, for 66 + the purpose of passing the firmware from bootloader to the kernel, or from 67 + the hypervisor to the guest. 68 + 69 + The firmware node itself contains the firmware binary contents, a compatible 70 + property, and any firmware-specific properties. The node should be placed 71 + inside a QE node that needs it. Doing so eliminates the need for a 72 + fsl,firmware-phandle property. Other QE nodes that need the same firmware 73 + should define an fsl,firmware-phandle property that points to the firmware node 74 + in the first QE node. 75 + 76 + The fsl,firmware property can be specified in the DTS (possibly using incbin) 77 + or can be inserted by the boot loader at boot time. 78 + 79 + Required properties: 80 + - compatible 81 + Usage: required 82 + Value type: <string> 83 + Definition: A standard property. Specify a string that indicates what 84 + kind of firmware it is. For QE, this should be "fsl,qe-firmware". 85 + 86 + - fsl,firmware 87 + Usage: required 88 + Value type: <prop-encoded-array>, encoded as an array of bytes 89 + Definition: A standard property. This property contains the firmware 90 + binary "blob". 91 + 92 + Example: 93 + qe1@e0080000 { 94 + compatible = "fsl,qe"; 95 + qe_firmware:qe-firmware { 96 + compatible = "fsl,qe-firmware"; 97 + fsl,firmware = [0x70 0xcd 0x00 0x00 0x01 0x46 0x45 ...]; 98 + }; 99 + ... 100 + }; 101 + 102 + qe2@e0090000 { 103 + compatible = "fsl,qe"; 104 + fsl,firmware-phandle = <&qe_firmware>; 105 + ... 106 + };
+12 -2
MAINTAINERS
··· 1443 1443 1444 1444 CEPH DISTRIBUTED FILE SYSTEM CLIENT 1445 1445 M: Sage Weil <sage@newdream.net> 1446 - L: ceph-devel@lists.sourceforge.net 1446 + L: ceph-devel@vger.kernel.org 1447 1447 W: http://ceph.newdream.net/ 1448 1448 T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git 1449 1449 S: Supported ··· 3083 3083 ISDN SUBSYSTEM 3084 3084 M: Karsten Keil <isdn@linux-pingi.de> 3085 3085 L: isdn4linux@listserv.isdn4linux.de (subscribers-only) 3086 + L: netdev@vger.kernel.org 3086 3087 W: http://www.isdn4linux.de 3087 3088 T: git git://git.kernel.org/pub/scm/linux/kernel/git/kkeil/isdn-2.6.git 3088 3089 S: Maintained ··· 3269 3268 S: Maintained 3270 3269 F: include/linux/kexec.h 3271 3270 F: kernel/kexec.c 3271 + 3272 + KEYS/KEYRINGS: 3273 + M: David Howells <dhowells@redhat.com> 3274 + L: keyrings@linux-nfs.org 3275 + S: Maintained 3276 + F: Documentation/keys.txt 3277 + F: include/linux/key.h 3278 + F: include/linux/key-type.h 3279 + F: include/keys/ 3280 + F: security/keys/ 3272 3281 3273 3282 KGDB 3274 3283 M: Jason Wessel <jason.wessel@windriver.com> ··· 5434 5423 F: sound/soc/codecs/twl4030* 5435 5424 5436 5425 TIPC NETWORK LAYER 5437 - M: Per Liden <per.liden@ericsson.com> 5438 5426 M: Jon Maloy <jon.maloy@ericsson.com> 5439 5427 M: Allan Stephens <allan.stephens@windriver.com> 5440 5428 L: tipc-discussion@lists.sourceforge.net
+1 -1
Makefile
··· 1 1 VERSION = 2 2 2 PATCHLEVEL = 6 3 3 SUBLEVEL = 34 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc3 5 5 NAME = Man-Eating Seals of Antiquity 6 6 7 7 # *DOCUMENTATION*
+1 -37
arch/arm/include/asm/cacheflush.h
··· 15 15 #include <asm/glue.h> 16 16 #include <asm/shmparam.h> 17 17 #include <asm/cachetype.h> 18 + #include <asm/outercache.h> 18 19 19 20 #define CACHE_COLOUR(vaddr) ((vaddr & (SHMLBA - 1)) >> PAGE_SHIFT) 20 21 ··· 220 219 void (*dma_flush_range)(const void *, const void *); 221 220 }; 222 221 223 - struct outer_cache_fns { 224 - void (*inv_range)(unsigned long, unsigned long); 225 - void (*clean_range)(unsigned long, unsigned long); 226 - void (*flush_range)(unsigned long, unsigned long); 227 - }; 228 - 229 222 /* 230 223 * Select the calling method 231 224 */ ··· 273 278 extern void dmac_map_area(const void *, size_t, int); 274 279 extern void dmac_unmap_area(const void *, size_t, int); 275 280 extern void dmac_flush_range(const void *, const void *); 276 - 277 - #endif 278 - 279 - #ifdef CONFIG_OUTER_CACHE 280 - 281 - extern struct outer_cache_fns outer_cache; 282 - 283 - static inline void outer_inv_range(unsigned long start, unsigned long end) 284 - { 285 - if (outer_cache.inv_range) 286 - outer_cache.inv_range(start, end); 287 - } 288 - static inline void outer_clean_range(unsigned long start, unsigned long end) 289 - { 290 - if (outer_cache.clean_range) 291 - outer_cache.clean_range(start, end); 292 - } 293 - static inline void outer_flush_range(unsigned long start, unsigned long end) 294 - { 295 - if (outer_cache.flush_range) 296 - outer_cache.flush_range(start, end); 297 - } 298 - 299 - #else 300 - 301 - static inline void outer_inv_range(unsigned long start, unsigned long end) 302 - { } 303 - static inline void outer_clean_range(unsigned long start, unsigned long end) 304 - { } 305 - static inline void outer_flush_range(unsigned long start, unsigned long end) 306 - { } 307 281 308 282 #endif 309 283
+1
arch/arm/include/asm/clkdev.h
··· 13 13 #define __ASM_CLKDEV_H 14 14 15 15 struct clk; 16 + struct device; 16 17 17 18 struct clk_lookup { 18 19 struct list_head node;
+1
arch/arm/include/asm/irq.h
··· 17 17 18 18 #ifndef __ASSEMBLY__ 19 19 struct irqaction; 20 + struct pt_regs; 20 21 extern void migrate_irqs(void); 21 22 22 23 extern void asm_do_IRQ(unsigned int, struct pt_regs *);
+75
arch/arm/include/asm/outercache.h
··· 1 + /* 2 + * arch/arm/include/asm/outercache.h 3 + * 4 + * Copyright (C) 2010 ARM Ltd. 5 + * Written by Catalin Marinas <catalin.marinas@arm.com> 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + * 11 + * This program is distributed in the hope that it will be useful, 12 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 + * GNU General Public License for more details. 15 + * 16 + * You should have received a copy of the GNU General Public License 17 + * along with this program; if not, write to the Free Software 18 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 + */ 20 + 21 + #ifndef __ASM_OUTERCACHE_H 22 + #define __ASM_OUTERCACHE_H 23 + 24 + struct outer_cache_fns { 25 + void (*inv_range)(unsigned long, unsigned long); 26 + void (*clean_range)(unsigned long, unsigned long); 27 + void (*flush_range)(unsigned long, unsigned long); 28 + #ifdef CONFIG_OUTER_CACHE_SYNC 29 + void (*sync)(void); 30 + #endif 31 + }; 32 + 33 + #ifdef CONFIG_OUTER_CACHE 34 + 35 + extern struct outer_cache_fns outer_cache; 36 + 37 + static inline void outer_inv_range(unsigned long start, unsigned long end) 38 + { 39 + if (outer_cache.inv_range) 40 + outer_cache.inv_range(start, end); 41 + } 42 + static inline void outer_clean_range(unsigned long start, unsigned long end) 43 + { 44 + if (outer_cache.clean_range) 45 + outer_cache.clean_range(start, end); 46 + } 47 + static inline void outer_flush_range(unsigned long start, unsigned long end) 48 + { 49 + if (outer_cache.flush_range) 50 + outer_cache.flush_range(start, end); 51 + } 52 + 53 + #else 54 + 55 + static inline void outer_inv_range(unsigned long start, unsigned long end) 56 + { } 57 + static inline void outer_clean_range(unsigned long start, unsigned long end) 58 + { } 59 + static inline void outer_flush_range(unsigned long start, unsigned long end) 60 + { } 61 + 62 + #endif 63 + 64 + #ifdef CONFIG_OUTER_CACHE_SYNC 65 + static inline void outer_sync(void) 66 + { 67 + if (outer_cache.sync) 68 + outer_cache.sync(); 69 + } 70 + #else 71 + static inline void outer_sync(void) 72 + { } 73 + #endif 74 + 75 + #endif /* __ASM_OUTERCACHE_H */
+10 -6
arch/arm/include/asm/system.h
··· 60 60 #include <linux/linkage.h> 61 61 #include <linux/irqflags.h> 62 62 63 + #include <asm/outercache.h> 64 + 63 65 #define __exception __attribute__((section(".exception.text"))) 64 66 65 67 struct thread_info; ··· 139 137 #define dmb() __asm__ __volatile__ ("" : : : "memory") 140 138 #endif 141 139 142 - #if __LINUX_ARM_ARCH__ >= 7 || defined(CONFIG_SMP) 143 - #define mb() dmb() 140 + #ifdef CONFIG_ARCH_HAS_BARRIERS 141 + #include <mach/barriers.h> 142 + #elif __LINUX_ARM_ARCH__ >= 7 || defined(CONFIG_SMP) 143 + #define mb() do { dsb(); outer_sync(); } while (0) 144 144 #define rmb() dmb() 145 - #define wmb() dmb() 145 + #define wmb() mb() 146 146 #else 147 147 #define mb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0) 148 148 #define rmb() do { if (arch_is_coherent()) dmb(); else barrier(); } while (0) ··· 156 152 #define smp_rmb() barrier() 157 153 #define smp_wmb() barrier() 158 154 #else 159 - #define smp_mb() mb() 160 - #define smp_rmb() rmb() 161 - #define smp_wmb() wmb() 155 + #define smp_mb() dmb() 156 + #define smp_rmb() dmb() 157 + #define smp_wmb() dmb() 162 158 #endif 163 159 164 160 #define read_barrier_depends() do { } while(0)
+9 -1
arch/arm/kernel/kprobes.c
··· 393 393 /* 394 394 * Setup an empty pt_regs. Fill SP and PC fields as 395 395 * they're needed by longjmp_break_handler. 396 + * 397 + * We allocate some slack between the original SP and start of 398 + * our fabricated regs. To be precise we want to have worst case 399 + * covered which is STMFD with all 16 regs so we allocate 2 * 400 + * sizeof(struct_pt_regs)). 401 + * 402 + * This is to prevent any simulated instruction from writing 403 + * over the regs when they are accessing the stack. 396 404 */ 397 405 "sub sp, %0, %1 \n\t" 398 406 "ldr r0, ="__stringify(JPROBE_MAGIC_ADDR)"\n\t" ··· 418 410 "ldmia sp, {r0 - pc} \n\t" 419 411 : 420 412 : "r" (kcb->jprobe_saved_regs.ARM_sp), 421 - "I" (sizeof(struct pt_regs)), 413 + "I" (sizeof(struct pt_regs) * 2), 422 414 "J" (offsetof(struct pt_regs, ARM_sp)), 423 415 "J" (offsetof(struct pt_regs, ARM_pc)), 424 416 "J" (offsetof(struct pt_regs, ARM_cpsr))
+2 -2
arch/arm/lib/memmove.S
··· 74 74 rsb ip, ip, #32 75 75 addne pc, pc, ip @ C is always clear here 76 76 b 7f 77 - 6: nop 77 + 6: W(nop) 78 78 W(ldr) r3, [r1, #-4]! 79 79 W(ldr) r4, [r1, #-4]! 80 80 W(ldr) r5, [r1, #-4]! ··· 85 85 86 86 add pc, pc, ip 87 87 nop 88 - nop 88 + W(nop) 89 89 W(str) r3, [r0, #-4]! 90 90 W(str) r4, [r0, #-4]! 91 91 W(str) r5, [r0, #-4]!
+13
arch/arm/mm/Kconfig
··· 736 736 config OUTER_CACHE 737 737 bool 738 738 739 + config OUTER_CACHE_SYNC 740 + bool 741 + help 742 + The outer cache has a outer_cache_fns.sync function pointer 743 + that can be used to drain the write buffer of the outer cache. 744 + 739 745 config CACHE_FEROCEON_L2 740 746 bool "Enable the Feroceon L2 cache controller" 741 747 depends on ARCH_KIRKWOOD || ARCH_MV78XX0 ··· 763 757 REALVIEW_EB_A9MP || ARCH_MX35 || ARCH_MX31 || MACH_REALVIEW_PBX || ARCH_NOMADIK || ARCH_OMAP4 764 758 default y 765 759 select OUTER_CACHE 760 + select OUTER_CACHE_SYNC 766 761 help 767 762 This option enables the L2x0 PrimeCell. 768 763 ··· 788 781 int 789 782 default 6 if ARM_L1_CACHE_SHIFT_6 790 783 default 5 784 + 785 + config ARCH_HAS_BARRIERS 786 + bool 787 + help 788 + This option allows the use of custom mandatory barriers 789 + included via the mach/barriers.h file.
+10
arch/arm/mm/cache-l2x0.c
··· 93 93 } 94 94 #endif 95 95 96 + static void l2x0_cache_sync(void) 97 + { 98 + unsigned long flags; 99 + 100 + spin_lock_irqsave(&l2x0_lock, flags); 101 + cache_sync(); 102 + spin_unlock_irqrestore(&l2x0_lock, flags); 103 + } 104 + 96 105 static inline void l2x0_inv_all(void) 97 106 { 98 107 unsigned long flags; ··· 234 225 outer_cache.inv_range = l2x0_inv_range; 235 226 outer_cache.clean_range = l2x0_clean_range; 236 227 outer_cache.flush_range = l2x0_flush_range; 228 + outer_cache.sync = l2x0_cache_sync; 237 229 238 230 printk(KERN_INFO "L2X0 cache controller enabled\n"); 239 231 }
+1 -1
arch/arm/vfp/vfpmodule.c
··· 545 545 */ 546 546 elf_hwcap |= HWCAP_VFP; 547 547 #ifdef CONFIG_VFPv3 548 - if (VFP_arch >= 3) { 548 + if (VFP_arch >= 2) { 549 549 elf_hwcap |= HWCAP_VFPv3; 550 550 551 551 /*
+1 -1
arch/cris/arch-v32/drivers/pci/bios.c
··· 50 50 if ((res->flags & IORESOURCE_IO) && (start & 0x300)) 51 51 start = (start + 0x3ff) & ~0x3ff; 52 52 53 - return start 53 + return start; 54 54 } 55 55 56 56 int pcibios_enable_resources(struct pci_dev *dev, int mask)
+2 -4
arch/frv/mb93090-mb00/pci-frv.c
··· 41 41 if ((res->flags & IORESOURCE_IO) && (start & 0x300)) 42 42 start = (start + 0x3ff) & ~0x3ff; 43 43 44 - return start 44 + return start; 45 45 } 46 46 47 47 ··· 94 94 r = &dev->resource[idx]; 95 95 if (!r->start) 96 96 continue; 97 - if (pci_claim_resource(dev, idx) < 0) 98 - printk(KERN_ERR "PCI: Cannot allocate resource region %d of bridge %s\n", idx, pci_name(dev)); 97 + pci_claim_resource(dev, idx); 99 98 } 100 99 } 101 100 pcibios_allocate_bus_resources(&bus->children); ··· 124 125 DBG("PCI: Resource %08lx-%08lx (f=%lx, d=%d, p=%d)\n", 125 126 r->start, r->end, r->flags, disabled, pass); 126 127 if (pci_claim_resource(dev, idx) < 0) { 127 - printk(KERN_ERR "PCI: Cannot allocate resource region %d of device %s\n", idx, pci_name(dev)); 128 128 /* We'll assign a new address later */ 129 129 r->end -= r->start; 130 130 r->start = 0;
-3
arch/microblaze/Kconfig
··· 75 75 config HAVE_LATENCYTOP_SUPPORT 76 76 def_bool y 77 77 78 - config PCI 79 - def_bool n 80 - 81 78 config DTC 82 79 def_bool y 83 80
+3 -1
arch/microblaze/Makefile
··· 84 84 echo '* linux.bin - Create raw binary' 85 85 echo ' linux.bin.gz - Create compressed raw binary' 86 86 echo ' simpleImage.<dt> - ELF image with $(arch)/boot/dts/<dt>.dts linked in' 87 - echo ' - stripped elf with fdt blob 87 + echo ' - stripped elf with fdt blob' 88 88 echo ' simpleImage.<dt>.unstrip - full ELF image with fdt blob' 89 89 echo ' *_defconfig - Select default config from arch/microblaze/configs' 90 90 echo '' ··· 94 94 echo ' name of a dts file from the arch/microblaze/boot/dts/ directory' 95 95 echo ' (minus the .dts extension).' 96 96 endef 97 + 98 + MRPROPER_FILES += $(boot)/simpleImage.*
+1 -5
arch/microblaze/boot/Makefile
··· 23 23 endif 24 24 25 25 $(obj)/linux.bin: vmlinux FORCE 26 - [ -n $(CONFIG_INITRAMFS_SOURCE) ] && [ ! -e $(CONFIG_INITRAMFS_SOURCE) ] && \ 27 - touch $(CONFIG_INITRAMFS_SOURCE) || echo "No CPIO image" 28 26 $(call if_changed,objcopy) 29 27 $(call if_changed,uimage) 30 28 @echo 'Kernel: $@ is ready' ' (#'`cat .version`')' ··· 60 62 $(obj)/%.dtb: $(dtstree)/%.dts FORCE 61 63 $(call if_changed,dtc) 62 64 63 - clean-kernel += linux.bin linux.bin.gz simpleImage.* 64 - 65 - clean-files += *.dtb simpleImage.*.unstrip 65 + clean-files += *.dtb simpleImage.*.unstrip linux.bin.ub
-1
arch/microblaze/include/asm/processor.h
··· 14 14 #include <asm/ptrace.h> 15 15 #include <asm/setup.h> 16 16 #include <asm/registers.h> 17 - #include <asm/segment.h> 18 17 #include <asm/entry.h> 19 18 #include <asm/current.h> 20 19
-49
arch/microblaze/include/asm/segment.h
··· 1 - /* 2 - * Copyright (C) 2008-2009 Michal Simek <monstr@monstr.eu> 3 - * Copyright (C) 2008-2009 PetaLogix 4 - * Copyright (C) 2006 Atmark Techno, Inc. 5 - * 6 - * This file is subject to the terms and conditions of the GNU General Public 7 - * License. See the file "COPYING" in the main directory of this archive 8 - * for more details. 9 - */ 10 - 11 - #ifndef _ASM_MICROBLAZE_SEGMENT_H 12 - #define _ASM_MICROBLAZE_SEGMENT_H 13 - 14 - # ifndef __ASSEMBLY__ 15 - 16 - typedef struct { 17 - unsigned long seg; 18 - } mm_segment_t; 19 - 20 - /* 21 - * On Microblaze the fs value is actually the top of the corresponding 22 - * address space. 23 - * 24 - * The fs value determines whether argument validity checking should be 25 - * performed or not. If get_fs() == USER_DS, checking is performed, with 26 - * get_fs() == KERNEL_DS, checking is bypassed. 27 - * 28 - * For historical reasons, these macros are grossly misnamed. 29 - * 30 - * For non-MMU arch like Microblaze, KERNEL_DS and USER_DS is equal. 31 - */ 32 - # define MAKE_MM_SEG(s) ((mm_segment_t) { (s) }) 33 - 34 - # ifndef CONFIG_MMU 35 - # define KERNEL_DS MAKE_MM_SEG(0) 36 - # define USER_DS KERNEL_DS 37 - # else 38 - # define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF) 39 - # define USER_DS MAKE_MM_SEG(TASK_SIZE - 1) 40 - # endif 41 - 42 - # define get_ds() (KERNEL_DS) 43 - # define get_fs() (current_thread_info()->addr_limit) 44 - # define set_fs(val) (current_thread_info()->addr_limit = (val)) 45 - 46 - # define segment_eq(a, b) ((a).seg == (b).seg) 47 - 48 - # endif /* __ASSEMBLY__ */ 49 - #endif /* _ASM_MICROBLAZE_SEGMENT_H */
+4 -1
arch/microblaze/include/asm/thread_info.h
··· 19 19 #ifndef __ASSEMBLY__ 20 20 # include <linux/types.h> 21 21 # include <asm/processor.h> 22 - # include <asm/segment.h> 23 22 24 23 /* 25 24 * low level task data that entry.S needs immediate access to ··· 58 59 __u32 esr; 59 60 __u32 fsr; 60 61 }; 62 + 63 + typedef struct { 64 + unsigned long seg; 65 + } mm_segment_t; 61 66 62 67 struct thread_info { 63 68 struct task_struct *task; /* main task structure */
+2 -1
arch/microblaze/include/asm/tlbflush.h
··· 24 24 extern void _tlbia(void); 25 25 26 26 #define __tlbia() { preempt_disable(); _tlbia(); preempt_enable(); } 27 + #define __tlbie(x) { _tlbie(x); } 27 28 28 29 static inline void local_flush_tlb_all(void) 29 30 { __tlbia(); } ··· 32 31 { __tlbia(); } 33 32 static inline void local_flush_tlb_page(struct vm_area_struct *vma, 34 33 unsigned long vmaddr) 35 - { _tlbie(vmaddr); } 34 + { __tlbie(vmaddr); } 36 35 static inline void local_flush_tlb_range(struct vm_area_struct *vma, 37 36 unsigned long start, unsigned long end) 38 37 { __tlbia(); }
+255 -214
arch/microblaze/include/asm/uaccess.h
··· 22 22 #include <asm/mmu.h> 23 23 #include <asm/page.h> 24 24 #include <asm/pgtable.h> 25 - #include <asm/segment.h> 26 25 #include <linux/string.h> 27 26 28 27 #define VERIFY_READ 0 29 28 #define VERIFY_WRITE 1 30 29 31 - #define __clear_user(addr, n) (memset((void *)(addr), 0, (n)), 0) 30 + /* 31 + * On Microblaze the fs value is actually the top of the corresponding 32 + * address space. 33 + * 34 + * The fs value determines whether argument validity checking should be 35 + * performed or not. If get_fs() == USER_DS, checking is performed, with 36 + * get_fs() == KERNEL_DS, checking is bypassed. 37 + * 38 + * For historical reasons, these macros are grossly misnamed. 39 + * 40 + * For non-MMU arch like Microblaze, KERNEL_DS and USER_DS is equal. 41 + */ 42 + # define MAKE_MM_SEG(s) ((mm_segment_t) { (s) }) 43 + 44 + # ifndef CONFIG_MMU 45 + # define KERNEL_DS MAKE_MM_SEG(0) 46 + # define USER_DS KERNEL_DS 47 + # else 48 + # define KERNEL_DS MAKE_MM_SEG(0xFFFFFFFF) 49 + # define USER_DS MAKE_MM_SEG(TASK_SIZE - 1) 50 + # endif 51 + 52 + # define get_ds() (KERNEL_DS) 53 + # define get_fs() (current_thread_info()->addr_limit) 54 + # define set_fs(val) (current_thread_info()->addr_limit = (val)) 55 + 56 + # define segment_eq(a, b) ((a).seg == (b).seg) 57 + 58 + /* 59 + * The exception table consists of pairs of addresses: the first is the 60 + * address of an instruction that is allowed to fault, and the second is 61 + * the address at which the program should continue. No registers are 62 + * modified, so it is entirely up to the continuation code to figure out 63 + * what to do. 64 + * 65 + * All the routines below use bits of fixup code that are out of line 66 + * with the main instruction path. This means when everything is well, 67 + * we don't even have to jump over them. Further, they do not intrude 68 + * on our cache or tlb entries. 
69 + */ 70 + struct exception_table_entry { 71 + unsigned long insn, fixup; 72 + }; 73 + 74 + /* Returns 0 if exception not found and fixup otherwise. */ 75 + extern unsigned long search_exception_table(unsigned long); 32 76 33 77 #ifndef CONFIG_MMU 34 78 35 - extern int ___range_ok(unsigned long addr, unsigned long size); 79 + /* Check against bounds of physical memory */ 80 + static inline int ___range_ok(unsigned long addr, unsigned long size) 81 + { 82 + return ((addr < memory_start) || 83 + ((addr + size) > memory_end)); 84 + } 36 85 37 86 #define __range_ok(addr, size) \ 38 87 ___range_ok((unsigned long)(addr), (unsigned long)(size)) 39 88 40 89 #define access_ok(type, addr, size) (__range_ok((addr), (size)) == 0) 41 - #define __access_ok(add, size) (__range_ok((addr), (size)) == 0) 42 90 43 - /* Undefined function to trigger linker error */ 44 - extern int bad_user_access_length(void); 45 - 46 - /* FIXME this is function for optimalization -> memcpy */ 47 - #define __get_user(var, ptr) \ 48 - ({ \ 49 - int __gu_err = 0; \ 50 - switch (sizeof(*(ptr))) { \ 51 - case 1: \ 52 - case 2: \ 53 - case 4: \ 54 - (var) = *(ptr); \ 55 - break; \ 56 - case 8: \ 57 - memcpy((void *) &(var), (ptr), 8); \ 58 - break; \ 59 - default: \ 60 - (var) = 0; \ 61 - __gu_err = __get_user_bad(); \ 62 - break; \ 63 - } \ 64 - __gu_err; \ 65 - }) 66 - 67 - #define __get_user_bad() (bad_user_access_length(), (-EFAULT)) 68 - 69 - /* FIXME is not there defined __pu_val */ 70 - #define __put_user(var, ptr) \ 71 - ({ \ 72 - int __pu_err = 0; \ 73 - switch (sizeof(*(ptr))) { \ 74 - case 1: \ 75 - case 2: \ 76 - case 4: \ 77 - *(ptr) = (var); \ 78 - break; \ 79 - case 8: { \ 80 - typeof(*(ptr)) __pu_val = (var); \ 81 - memcpy(ptr, &__pu_val, sizeof(__pu_val)); \ 82 - } \ 83 - break; \ 84 - default: \ 85 - __pu_err = __put_user_bad(); \ 86 - break; \ 87 - } \ 88 - __pu_err; \ 89 - }) 90 - 91 - #define __put_user_bad() (bad_user_access_length(), (-EFAULT)) 92 - 93 - #define put_user(x, ptr) 
__put_user((x), (ptr)) 94 - #define get_user(x, ptr) __get_user((x), (ptr)) 95 - 96 - #define copy_to_user(to, from, n) (memcpy((to), (from), (n)), 0) 97 - #define copy_from_user(to, from, n) (memcpy((to), (from), (n)), 0) 98 - 99 - #define __copy_to_user(to, from, n) (copy_to_user((to), (from), (n))) 100 - #define __copy_from_user(to, from, n) (copy_from_user((to), (from), (n))) 101 - #define __copy_to_user_inatomic(to, from, n) \ 102 - (__copy_to_user((to), (from), (n))) 103 - #define __copy_from_user_inatomic(to, from, n) \ 104 - (__copy_from_user((to), (from), (n))) 105 - 106 - static inline unsigned long clear_user(void *addr, unsigned long size) 107 - { 108 - if (access_ok(VERIFY_WRITE, addr, size)) 109 - size = __clear_user(addr, size); 110 - return size; 111 - } 112 - 113 - /* Returns 0 if exception not found and fixup otherwise. */ 114 - extern unsigned long search_exception_table(unsigned long); 115 - 116 - extern long strncpy_from_user(char *dst, const char *src, long count); 117 - extern long strnlen_user(const char *src, long count); 118 - 119 - #else /* CONFIG_MMU */ 91 + #else 120 92 121 93 /* 122 94 * Address is valid if: ··· 101 129 /* || printk("access_ok failed for %s at 0x%08lx (size %d), seg 0x%08x\n", 102 130 type?"WRITE":"READ",addr,size,get_fs().seg)) */ 103 131 104 - /* 105 - * All the __XXX versions macros/functions below do not perform 106 - * access checking. It is assumed that the necessary checks have been 107 - * already performed before the finction (macro) is called. 
132 + #endif 133 + 134 + #ifdef CONFIG_MMU 135 + # define __FIXUP_SECTION ".section .fixup,\"ax\"\n" 136 + # define __EX_TABLE_SECTION ".section __ex_table,\"a\"\n" 137 + #else 138 + # define __FIXUP_SECTION ".section .discard,\"ax\"\n" 139 + # define __EX_TABLE_SECTION ".section .discard,\"a\"\n" 140 + #endif 141 + 142 + extern unsigned long __copy_tofrom_user(void __user *to, 143 + const void __user *from, unsigned long size); 144 + 145 + /* Return: number of not copied bytes, i.e. 0 if OK or non-zero if fail. */ 146 + static inline unsigned long __must_check __clear_user(void __user *to, 147 + unsigned long n) 148 + { 149 + /* normal memset with two words to __ex_table */ 150 + __asm__ __volatile__ ( \ 151 + "1: sb r0, %2, r0;" \ 152 + " addik %0, %0, -1;" \ 153 + " bneid %0, 1b;" \ 154 + " addik %2, %2, 1;" \ 155 + "2: " \ 156 + __EX_TABLE_SECTION \ 157 + ".word 1b,2b;" \ 158 + ".previous;" \ 159 + : "=r"(n) \ 160 + : "0"(n), "r"(to) 161 + ); 162 + return n; 163 + } 164 + 165 + static inline unsigned long __must_check clear_user(void __user *to, 166 + unsigned long n) 167 + { 168 + might_sleep(); 169 + if (unlikely(!access_ok(VERIFY_WRITE, to, n))) 170 + return n; 171 + 172 + return __clear_user(to, n); 173 + } 174 + 175 + /* put_user and get_user macros */ 176 + extern long __user_bad(void); 177 + 178 + #define __get_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \ 179 + ({ \ 180 + __asm__ __volatile__ ( \ 181 + "1:" insn " %1, %2, r0;" \ 182 + " addk %0, r0, r0;" \ 183 + "2: " \ 184 + __FIXUP_SECTION \ 185 + "3: brid 2b;" \ 186 + " addik %0, r0, %3;" \ 187 + ".previous;" \ 188 + __EX_TABLE_SECTION \ 189 + ".word 1b,3b;" \ 190 + ".previous;" \ 191 + : "=&r"(__gu_err), "=r"(__gu_val) \ 192 + : "r"(__gu_ptr), "i"(-EFAULT) \ 193 + ); \ 194 + }) 195 + 196 + /** 197 + * get_user: - Get a simple variable from user space. 198 + * @x: Variable to store result. 199 + * @ptr: Source address, in user space. 200 + * 201 + * Context: User context only. 
This function may sleep. 202 + * 203 + * This macro copies a single simple variable from user space to kernel 204 + * space. It supports simple types like char and int, but not larger 205 + * data types like structures or arrays. 206 + * 207 + * @ptr must have pointer-to-simple-variable type, and the result of 208 + * dereferencing @ptr must be assignable to @x without a cast. 209 + * 210 + * Returns zero on success, or -EFAULT on error. 211 + * On error, the variable @x is set to zero. 108 212 */ 109 - 110 - #define get_user(x, ptr) \ 111 - ({ \ 112 - access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) \ 113 - ? __get_user((x), (ptr)) : -EFAULT; \ 114 - }) 115 - 116 - #define put_user(x, ptr) \ 117 - ({ \ 118 - access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) \ 119 - ? __put_user((x), (ptr)) : -EFAULT; \ 120 - }) 121 213 122 214 #define __get_user(x, ptr) \ 123 215 ({ \ ··· 199 163 __get_user_asm("lw", (ptr), __gu_val, __gu_err); \ 200 164 break; \ 201 165 default: \ 202 - __gu_val = 0; __gu_err = -EINVAL; \ 166 + /* __gu_val = 0; __gu_err = -EINVAL;*/ __gu_err = __user_bad();\ 203 167 } \ 204 168 x = (__typeof__(*(ptr))) __gu_val; \ 205 169 __gu_err; \ 206 170 }) 207 171 208 - #define __get_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \ 172 + 173 + #define get_user(x, ptr) \ 209 174 ({ \ 210 - __asm__ __volatile__ ( \ 211 - "1:" insn " %1, %2, r0; \ 212 - addk %0, r0, r0; \ 213 - 2: \ 214 - .section .fixup,\"ax\"; \ 215 - 3: brid 2b; \ 216 - addik %0, r0, %3; \ 217 - .previous; \ 218 - .section __ex_table,\"a\"; \ 219 - .word 1b,3b; \ 220 - .previous;" \ 221 - : "=r"(__gu_err), "=r"(__gu_val) \ 222 - : "r"(__gu_ptr), "i"(-EFAULT) \ 223 - ); \ 175 + access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) \ 176 + ? 
__get_user((x), (ptr)) : -EFAULT; \ 224 177 }) 178 + 179 + #define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \ 180 + ({ \ 181 + __asm__ __volatile__ ( \ 182 + "1:" insn " %1, %2, r0;" \ 183 + " addk %0, r0, r0;" \ 184 + "2: " \ 185 + __FIXUP_SECTION \ 186 + "3: brid 2b;" \ 187 + " addik %0, r0, %3;" \ 188 + ".previous;" \ 189 + __EX_TABLE_SECTION \ 190 + ".word 1b,3b;" \ 191 + ".previous;" \ 192 + : "=&r"(__gu_err) \ 193 + : "r"(__gu_val), "r"(__gu_ptr), "i"(-EFAULT) \ 194 + ); \ 195 + }) 196 + 197 + #define __put_user_asm_8(__gu_ptr, __gu_val, __gu_err) \ 198 + ({ \ 199 + __asm__ __volatile__ (" lwi %0, %1, 0;" \ 200 + "1: swi %0, %2, 0;" \ 201 + " lwi %0, %1, 4;" \ 202 + "2: swi %0, %2, 4;" \ 203 + " addk %0, r0, r0;" \ 204 + "3: " \ 205 + __FIXUP_SECTION \ 206 + "4: brid 3b;" \ 207 + " addik %0, r0, %3;" \ 208 + ".previous;" \ 209 + __EX_TABLE_SECTION \ 210 + ".word 1b,4b,2b,4b;" \ 211 + ".previous;" \ 212 + : "=&r"(__gu_err) \ 213 + : "r"(&__gu_val), "r"(__gu_ptr), "i"(-EFAULT) \ 214 + ); \ 215 + }) 216 + 217 + /** 218 + * put_user: - Write a simple value into user space. 219 + * @x: Value to copy to user space. 220 + * @ptr: Destination address, in user space. 221 + * 222 + * Context: User context only. This function may sleep. 223 + * 224 + * This macro copies a single simple value from kernel space to user 225 + * space. It supports simple types like char and int, but not larger 226 + * data types like structures or arrays. 227 + * 228 + * @ptr must have pointer-to-simple-variable type, and @x must be assignable 229 + * to the result of dereferencing @ptr. 230 + * 231 + * Returns zero on success, or -EFAULT on error. 
232 + */ 225 233 226 234 #define __put_user(x, ptr) \ 227 235 ({ \ ··· 275 195 case 1: \ 276 196 __put_user_asm("sb", (ptr), __gu_val, __gu_err); \ 277 197 break; \ 278 - case 2: \ 198 + case 2: \ 279 199 __put_user_asm("sh", (ptr), __gu_val, __gu_err); \ 280 200 break; \ 281 201 case 4: \ ··· 285 205 __put_user_asm_8((ptr), __gu_val, __gu_err); \ 286 206 break; \ 287 207 default: \ 288 - __gu_err = -EINVAL; \ 208 + /*__gu_err = -EINVAL;*/ __gu_err = __user_bad(); \ 289 209 } \ 290 210 __gu_err; \ 291 211 }) 292 212 293 - #define __put_user_asm_8(__gu_ptr, __gu_val, __gu_err) \ 294 - ({ \ 295 - __asm__ __volatile__ (" lwi %0, %1, 0; \ 296 - 1: swi %0, %2, 0; \ 297 - lwi %0, %1, 4; \ 298 - 2: swi %0, %2, 4; \ 299 - addk %0,r0,r0; \ 300 - 3: \ 301 - .section .fixup,\"ax\"; \ 302 - 4: brid 3b; \ 303 - addik %0, r0, %3; \ 304 - .previous; \ 305 - .section __ex_table,\"a\"; \ 306 - .word 1b,4b,2b,4b; \ 307 - .previous;" \ 308 - : "=&r"(__gu_err) \ 309 - : "r"(&__gu_val), \ 310 - "r"(__gu_ptr), "i"(-EFAULT) \ 311 - ); \ 213 + #ifndef CONFIG_MMU 214 + 215 + #define put_user(x, ptr) __put_user((x), (ptr)) 216 + 217 + #else /* CONFIG_MMU */ 218 + 219 + #define put_user(x, ptr) \ 220 + ({ \ 221 + access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) \ 222 + ? __put_user((x), (ptr)) : -EFAULT; \ 312 223 }) 224 + #endif /* CONFIG_MMU */ 313 225 314 - #define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err) \ 315 - ({ \ 316 - __asm__ __volatile__ ( \ 317 - "1:" insn " %1, %2, r0; \ 318 - addk %0, r0, r0; \ 319 - 2: \ 320 - .section .fixup,\"ax\"; \ 321 - 3: brid 2b; \ 322 - addik %0, r0, %3; \ 323 - .previous; \ 324 - .section __ex_table,\"a\"; \ 325 - .word 1b,3b; \ 326 - .previous;" \ 327 - : "=r"(__gu_err) \ 328 - : "r"(__gu_val), "r"(__gu_ptr), "i"(-EFAULT) \ 329 - ); \ 330 - }) 331 - 332 - /* 333 - * Return: number of not copied bytes, i.e. 0 if OK or non-zero if fail. 
334 - */ 335 - static inline int clear_user(char *to, int size) 336 - { 337 - if (size && access_ok(VERIFY_WRITE, to, size)) { 338 - __asm__ __volatile__ (" \ 339 - 1: \ 340 - sb r0, %2, r0; \ 341 - addik %0, %0, -1; \ 342 - bneid %0, 1b; \ 343 - addik %2, %2, 1; \ 344 - 2: \ 345 - .section __ex_table,\"a\"; \ 346 - .word 1b,2b; \ 347 - .section .text;" \ 348 - : "=r"(size) \ 349 - : "0"(size), "r"(to) 350 - ); 351 - } 352 - return size; 353 - } 354 - 355 - #define __copy_from_user(to, from, n) copy_from_user((to), (from), (n)) 226 + /* copy_to_from_user */ 227 + #define __copy_from_user(to, from, n) \ 228 + __copy_tofrom_user((__force void __user *)(to), \ 229 + (void __user *)(from), (n)) 356 230 #define __copy_from_user_inatomic(to, from, n) \ 357 231 copy_from_user((to), (from), (n)) 358 232 359 - #define copy_to_user(to, from, n) \ 360 - (access_ok(VERIFY_WRITE, (to), (n)) ? \ 361 - __copy_tofrom_user((void __user *)(to), \ 362 - (__force const void __user *)(from), (n)) \ 363 - : -EFAULT) 233 + static inline long copy_from_user(void *to, 234 + const void __user *from, unsigned long n) 235 + { 236 + might_sleep(); 237 + if (access_ok(VERIFY_READ, from, n)) 238 + return __copy_from_user(to, from, n); 239 + return n; 240 + } 364 241 365 - #define __copy_to_user(to, from, n) copy_to_user((to), (from), (n)) 242 + #define __copy_to_user(to, from, n) \ 243 + __copy_tofrom_user((void __user *)(to), \ 244 + (__force const void __user *)(from), (n)) 366 245 #define __copy_to_user_inatomic(to, from, n) copy_to_user((to), (from), (n)) 367 246 368 - #define copy_from_user(to, from, n) \ 369 - (access_ok(VERIFY_READ, (from), (n)) ? 
\ 370 - __copy_tofrom_user((__force void __user *)(to), \ 371 - (void __user *)(from), (n)) \ 372 - : -EFAULT) 373 - 374 - extern int __strncpy_user(char *to, const char __user *from, int len); 375 - extern int __strnlen_user(const char __user *sstr, int len); 376 - 377 - #define strncpy_from_user(to, from, len) \ 378 - (access_ok(VERIFY_READ, from, 1) ? \ 379 - __strncpy_user(to, from, len) : -EFAULT) 380 - #define strnlen_user(str, len) \ 381 - (access_ok(VERIFY_READ, str, 1) ? __strnlen_user(str, len) : 0) 382 - 383 - #endif /* CONFIG_MMU */ 384 - 385 - extern unsigned long __copy_tofrom_user(void __user *to, 386 - const void __user *from, unsigned long size); 247 + static inline long copy_to_user(void __user *to, 248 + const void *from, unsigned long n) 249 + { 250 + might_sleep(); 251 + if (access_ok(VERIFY_WRITE, to, n)) 252 + return __copy_to_user(to, from, n); 253 + return n; 254 + } 387 255 388 256 /* 389 - * The exception table consists of pairs of addresses: the first is the 390 - * address of an instruction that is allowed to fault, and the second is 391 - * the address at which the program should continue. No registers are 392 - * modified, so it is entirely up to the continuation code to figure out 393 - * what to do. 394 - * 395 - * All the routines below use bits of fixup code that are out of line 396 - * with the main instruction path. This means when everything is well, 397 - * we don't even have to jump over them. Further, they do not intrude 398 - * on our cache or tlb entries. 257 + * Copy a null terminated string from userspace. 
399 258 */ 400 - struct exception_table_entry { 401 - unsigned long insn, fixup; 402 - }; 259 + extern int __strncpy_user(char *to, const char __user *from, int len); 260 + 261 + #define __strncpy_from_user __strncpy_user 262 + 263 + static inline long 264 + strncpy_from_user(char *dst, const char __user *src, long count) 265 + { 266 + if (!access_ok(VERIFY_READ, src, 1)) 267 + return -EFAULT; 268 + return __strncpy_from_user(dst, src, count); 269 + } 270 + 271 + /* 272 + * Return the size of a string (including the ending 0) 273 + * 274 + * Return 0 on exception, a value greater than N if too long 275 + */ 276 + extern int __strnlen_user(const char __user *sstr, int len); 277 + 278 + static inline long strnlen_user(const char __user *src, long n) 279 + { 280 + if (!access_ok(VERIFY_READ, src, 1)) 281 + return 0; 282 + return __strnlen_user(src, n); 283 + } 403 284 404 285 #endif /* __ASSEMBLY__ */ 405 286 #endif /* __KERNEL__ */
+1 -1
arch/microblaze/kernel/dma.c
··· 37 37 38 38 static unsigned long get_dma_direct_offset(struct device *dev) 39 39 { 40 - if (dev) 40 + if (likely(dev)) 41 41 return (unsigned long)dev->archdata.dma_data; 42 42 43 43 return PCI_DRAM_OFFSET; /* FIXME Not sure if is correct */
+9 -3
arch/microblaze/kernel/head.S
··· 51 51 52 52 .text 53 53 ENTRY(_start) 54 + #if CONFIG_KERNEL_BASE_ADDR == 0 55 + brai TOPHYS(real_start) 56 + .org 0x100 57 + real_start: 58 + #endif 59 + 54 60 mfs r1, rmsr 55 61 andi r1, r1, ~2 56 62 mts rmsr, r1 ··· 105 99 tophys(r4,r4) /* convert to phys address */ 106 100 ori r3, r0, COMMAND_LINE_SIZE - 1 /* number of loops */ 107 101 _copy_command_line: 108 - lbu r2, r5, r6 /* r7=r5+r6 - r5 contain pointer to command line */ 109 - sb r2, r4, r6 /* addr[r4+r6]= r7*/ 102 + lbu r2, r5, r6 /* r2=r5+r6 - r5 contain pointer to command line */ 103 + sb r2, r4, r6 /* addr[r4+r6]= r2*/ 110 104 addik r6, r6, 1 /* increment counting */ 111 105 bgtid r3, _copy_command_line /* loop for all entries */ 112 106 addik r3, r3, -1 /* descrement loop */ ··· 134 128 * virtual to physical. 135 129 */ 136 130 nop 137 - addik r3, r0, 63 /* Invalidate all TLB entries */ 131 + addik r3, r0, MICROBLAZE_TLB_SIZE -1 /* Invalidate all TLB entries */ 138 132 _invalidate: 139 133 mts rtlbx, r3 140 134 mts rtlbhi, r0 /* flush: ensure V is clear */
+48 -64
arch/microblaze/kernel/hw_exception_handler.S
··· 313 313 mfs r5, rmsr; 314 314 nop 315 315 swi r5, r1, 0; 316 - mfs r3, resr 316 + mfs r4, resr 317 317 nop 318 - mfs r4, rear; 318 + mfs r3, rear; 319 319 nop 320 320 321 321 #ifndef CONFIG_MMU 322 - andi r5, r3, 0x1000; /* Check ESR[DS] */ 322 + andi r5, r4, 0x1000; /* Check ESR[DS] */ 323 323 beqi r5, not_in_delay_slot; /* Branch if ESR[DS] not set */ 324 324 mfs r17, rbtr; /* ESR[DS] set - return address in BTR */ 325 325 nop ··· 327 327 swi r17, r1, PT_R17 328 328 #endif 329 329 330 - andi r5, r3, 0x1F; /* Extract ESR[EXC] */ 330 + andi r5, r4, 0x1F; /* Extract ESR[EXC] */ 331 331 332 332 #ifdef CONFIG_MMU 333 333 /* Calculate exception vector offset = r5 << 2 */ 334 334 addk r6, r5, r5; /* << 1 */ 335 335 addk r6, r6, r6; /* << 2 */ 336 336 337 + #ifdef DEBUG 337 338 /* counting which exception happen */ 338 339 lwi r5, r0, 0x200 + TOPHYS(r0_ram) 339 340 addi r5, r5, 1 ··· 342 341 lwi r5, r6, 0x200 + TOPHYS(r0_ram) 343 342 addi r5, r5, 1 344 343 swi r5, r6, 0x200 + TOPHYS(r0_ram) 344 + #endif 345 345 /* end */ 346 346 /* Load the HW Exception vector */ 347 347 lwi r6, r6, TOPHYS(_MB_HW_ExceptionVectorTable) ··· 378 376 swi r18, r1, PT_R18 379 377 380 378 or r5, r1, r0 381 - andi r6, r3, 0x1F; /* Load ESR[EC] */ 379 + andi r6, r4, 0x1F; /* Load ESR[EC] */ 382 380 lwi r7, r0, PER_CPU(KM) /* MS: saving current kernel mode to regs */ 383 381 swi r7, r1, PT_MODE 384 382 mfs r7, rfsr ··· 428 426 */ 429 427 handle_unaligned_ex: 430 428 /* Working registers already saved: R3, R4, R5, R6 431 - * R3 = ESR 432 - * R4 = EAR 429 + * R4 = ESR 430 + * R3 = EAR 433 431 */ 434 432 #ifdef CONFIG_MMU 435 - andi r6, r3, 0x1000 /* Check ESR[DS] */ 433 + andi r6, r4, 0x1000 /* Check ESR[DS] */ 436 434 beqi r6, _no_delayslot /* Branch if ESR[DS] not set */ 437 435 mfs r17, rbtr; /* ESR[DS] set - return address in BTR */ 438 436 nop ··· 441 439 RESTORE_STATE; 442 440 bri unaligned_data_trap 443 441 #endif 444 - andi r6, r3, 0x3E0; /* Mask and extract the register operand */ 442 + 
andi r6, r4, 0x3E0; /* Mask and extract the register operand */ 445 443 srl r6, r6; /* r6 >> 5 */ 446 444 srl r6, r6; 447 445 srl r6, r6; ··· 450 448 /* Store the register operand in a temporary location */ 451 449 sbi r6, r0, TOPHYS(ex_reg_op); 452 450 453 - andi r6, r3, 0x400; /* Extract ESR[S] */ 451 + andi r6, r4, 0x400; /* Extract ESR[S] */ 454 452 bnei r6, ex_sw; 455 453 ex_lw: 456 - andi r6, r3, 0x800; /* Extract ESR[W] */ 454 + andi r6, r4, 0x800; /* Extract ESR[W] */ 457 455 beqi r6, ex_lhw; 458 - lbui r5, r4, 0; /* Exception address in r4 */ 456 + lbui r5, r3, 0; /* Exception address in r3 */ 459 457 /* Load a word, byte-by-byte from destination address 460 458 and save it in tmp space */ 461 459 sbi r5, r0, TOPHYS(ex_tmp_data_loc_0); 462 - lbui r5, r4, 1; 460 + lbui r5, r3, 1; 463 461 sbi r5, r0, TOPHYS(ex_tmp_data_loc_1); 464 - lbui r5, r4, 2; 462 + lbui r5, r3, 2; 465 463 sbi r5, r0, TOPHYS(ex_tmp_data_loc_2); 466 - lbui r5, r4, 3; 464 + lbui r5, r3, 3; 467 465 sbi r5, r0, TOPHYS(ex_tmp_data_loc_3); 468 - /* Get the destination register value into r3 */ 469 - lwi r3, r0, TOPHYS(ex_tmp_data_loc_0); 466 + /* Get the destination register value into r4 */ 467 + lwi r4, r0, TOPHYS(ex_tmp_data_loc_0); 470 468 bri ex_lw_tail; 471 469 ex_lhw: 472 - lbui r5, r4, 0; /* Exception address in r4 */ 470 + lbui r5, r3, 0; /* Exception address in r3 */ 473 471 /* Load a half-word, byte-by-byte from destination 474 472 address and save it in tmp space */ 475 473 sbi r5, r0, TOPHYS(ex_tmp_data_loc_0); 476 - lbui r5, r4, 1; 474 + lbui r5, r3, 1; 477 475 sbi r5, r0, TOPHYS(ex_tmp_data_loc_1); 478 - /* Get the destination register value into r3 */ 479 - lhui r3, r0, TOPHYS(ex_tmp_data_loc_0); 476 + /* Get the destination register value into r4 */ 477 + lhui r4, r0, TOPHYS(ex_tmp_data_loc_0); 480 478 ex_lw_tail: 481 479 /* Get the destination register number into r5 */ 482 480 lbui r5, r0, TOPHYS(ex_reg_op); ··· 504 502 andi r6, r6, 0x800; /* Extract ESR[W] */ 505 503 beqi 
r6, ex_shw; 506 504 /* Get the word - delay slot */ 507 - swi r3, r0, TOPHYS(ex_tmp_data_loc_0); 505 + swi r4, r0, TOPHYS(ex_tmp_data_loc_0); 508 506 /* Store the word, byte-by-byte into destination address */ 509 - lbui r3, r0, TOPHYS(ex_tmp_data_loc_0); 510 - sbi r3, r4, 0; 511 - lbui r3, r0, TOPHYS(ex_tmp_data_loc_1); 512 - sbi r3, r4, 1; 513 - lbui r3, r0, TOPHYS(ex_tmp_data_loc_2); 514 - sbi r3, r4, 2; 515 - lbui r3, r0, TOPHYS(ex_tmp_data_loc_3); 516 - sbi r3, r4, 3; 507 + lbui r4, r0, TOPHYS(ex_tmp_data_loc_0); 508 + sbi r4, r3, 0; 509 + lbui r4, r0, TOPHYS(ex_tmp_data_loc_1); 510 + sbi r4, r3, 1; 511 + lbui r4, r0, TOPHYS(ex_tmp_data_loc_2); 512 + sbi r4, r3, 2; 513 + lbui r4, r0, TOPHYS(ex_tmp_data_loc_3); 514 + sbi r4, r3, 3; 517 515 bri ex_handler_done; 518 516 519 517 ex_shw: 520 518 /* Store the lower half-word, byte-by-byte into destination address */ 521 - swi r3, r0, TOPHYS(ex_tmp_data_loc_0); 522 - lbui r3, r0, TOPHYS(ex_tmp_data_loc_2); 523 - sbi r3, r4, 0; 524 - lbui r3, r0, TOPHYS(ex_tmp_data_loc_3); 525 - sbi r3, r4, 1; 519 + swi r4, r0, TOPHYS(ex_tmp_data_loc_0); 520 + lbui r4, r0, TOPHYS(ex_tmp_data_loc_2); 521 + sbi r4, r3, 0; 522 + lbui r4, r0, TOPHYS(ex_tmp_data_loc_3); 523 + sbi r4, r3, 1; 526 524 ex_sw_end: /* Exception handling of store word, ends. */ 527 525 528 526 ex_handler_done: ··· 562 560 */ 563 561 mfs r11, rpid 564 562 nop 565 - bri 4 566 - mfs r3, rear /* Get faulting address */ 567 - nop 568 563 /* If we are faulting a kernel address, we have to use the 569 564 * kernel page tables. 570 565 */ 571 - ori r4, r0, CONFIG_KERNEL_START 572 - cmpu r4, r3, r4 573 - bgti r4, ex3 566 + ori r5, r0, CONFIG_KERNEL_START 567 + cmpu r5, r3, r5 568 + bgti r5, ex3 574 569 /* First, check if it was a zone fault (which means a user 575 570 * tried to access a kernel or read-protected page - always 576 571 * a SEGV). All other faults here must be stores, so no 577 572 * need to check ESR_S as well. 
*/ 578 - mfs r4, resr 579 - nop 580 573 andi r4, r4, 0x800 /* ESR_Z - zone protection */ 581 574 bnei r4, ex2 582 575 ··· 586 589 * tried to access a kernel or read-protected page - always 587 590 * a SEGV). All other faults here must be stores, so no 588 591 * need to check ESR_S as well. */ 589 - mfs r4, resr 590 - nop 591 592 andi r4, r4, 0x800 /* ESR_Z */ 592 593 bnei r4, ex2 593 594 /* get current task address */ ··· 660 665 * R3 = ESR 661 666 */ 662 667 663 - mfs r3, rear /* Get faulting address */ 664 - nop 665 668 RESTORE_STATE; 666 669 bri page_fault_instr_trap 667 670 ··· 670 677 */ 671 678 handle_data_tlb_miss_exception: 672 679 /* Working registers already saved: R3, R4, R5, R6 673 - * R3 = ESR 680 + * R3 = EAR, R4 = ESR 674 681 */ 675 682 mfs r11, rpid 676 - nop 677 - bri 4 678 - mfs r3, rear /* Get faulting address */ 679 683 nop 680 684 681 685 /* If we are faulting a kernel address, we have to use the 682 686 * kernel page tables. */ 683 - ori r4, r0, CONFIG_KERNEL_START 684 - cmpu r4, r3, r4 687 + ori r6, r0, CONFIG_KERNEL_START 688 + cmpu r4, r3, r6 685 689 bgti r4, ex5 686 690 ori r4, r0, swapper_pg_dir 687 691 mts rpid, r0 /* TLB will have 0 TID */ ··· 721 731 * Many of these bits are software only. Bits we don't set 722 732 * here we (properly should) assume have the appropriate value. 723 733 */ 734 + brid finish_tlb_load 724 735 andni r4, r4, 0x0ce2 /* Make sure 20, 21 are zero */ 725 - 726 - bri finish_tlb_load 727 736 ex7: 728 737 /* The bailout. Restore registers to pre-exception conditions 729 738 * and call the heavyweights to help us out. 
··· 742 753 * R3 = ESR 743 754 */ 744 755 mfs r11, rpid 745 - nop 746 - bri 4 747 - mfs r3, rear /* Get faulting address */ 748 756 nop 749 757 750 758 /* If we are faulting a kernel address, we have to use the ··· 778 792 lwi r4, r5, 0 /* Get Linux PTE */ 779 793 780 794 andi r6, r4, _PAGE_PRESENT 781 - beqi r6, ex7 795 + beqi r6, ex10 782 796 783 797 ori r4, r4, _PAGE_ACCESSED 784 798 swi r4, r5, 0 ··· 791 805 * Many of these bits are software only. Bits we don't set 792 806 * here we (properly should) assume have the appropriate value. 793 807 */ 808 + brid finish_tlb_load 794 809 andni r4, r4, 0x0ce2 /* Make sure 20, 21 are zero */ 795 - 796 - bri finish_tlb_load 797 810 ex10: 798 811 /* The bailout. Restore registers to pre-exception conditions 799 812 * and call the heavyweights to help us out. ··· 822 837 andi r5, r5, (MICROBLAZE_TLB_SIZE-1) 823 838 ori r6, r0, 1 824 839 cmp r31, r5, r6 825 - blti r31, sem 840 + blti r31, ex12 826 841 addik r5, r6, 1 827 - sem: 842 + ex12: 828 843 /* MS: save back current TLB index */ 829 844 swi r5, r0, TOPHYS(tlb_index) 830 845 ··· 844 859 nop 845 860 846 861 /* Done...restore registers and get out of here. */ 847 - ex12: 848 862 mts rpid, r11 849 863 nop 850 864 bri 4
+13 -2
arch/microblaze/kernel/misc.S
··· 26 26 * We avoid flushing the pinned 0, 1 and possibly 2 entries. 27 27 */ 28 28 .globl _tlbia; 29 + .type _tlbia, @function 29 30 .align 4; 30 31 _tlbia: 31 - addik r12, r0, 63 /* flush all entries (63 - 3) */ 32 + addik r12, r0, MICROBLAZE_TLB_SIZE - 1 /* flush all entries (63 - 3) */ 32 33 /* isync */ 33 34 _tlbia_1: 34 35 mts rtlbx, r12 ··· 42 41 /* sync */ 43 42 rtsd r15, 8 44 43 nop 44 + .size _tlbia, . - _tlbia 45 45 46 46 /* 47 47 * Flush MMU TLB for a particular address (in r5) 48 48 */ 49 49 .globl _tlbie; 50 + .type _tlbie, @function 50 51 .align 4; 51 52 _tlbie: 52 53 mts rtlbsx, r5 /* look up the address in TLB */ ··· 62 59 rtsd r15, 8 63 60 nop 64 61 62 + .size _tlbie, . - _tlbie 63 + 65 64 /* 66 65 * Allocate TLB entry for early console 67 66 */ 68 67 .globl early_console_reg_tlb_alloc; 68 + .type early_console_reg_tlb_alloc, @function 69 69 .align 4; 70 70 early_console_reg_tlb_alloc: 71 71 /* 72 72 * Load a TLB entry for the UART, so that microblaze_progress() can use 73 73 * the UARTs nice and early. We use a 4k real==virtual mapping. 74 74 */ 75 - ori r4, r0, 63 75 + ori r4, r0, MICROBLAZE_TLB_SIZE - 1 76 76 mts rtlbx, r4 /* TLB slot 2 */ 77 77 78 78 or r4,r5,r0 ··· 91 85 nop 92 86 rtsd r15, 8 93 87 nop 88 + 89 + .size early_console_reg_tlb_alloc, . - early_console_reg_tlb_alloc 94 90 95 91 /* 96 92 * Copy a whole page (4096 bytes). ··· 112 104 #define DCACHE_LINE_BYTES (4 * 4) 113 105 114 106 .globl copy_page; 107 + .type copy_page, @function 115 108 .align 4; 116 109 copy_page: 117 110 ori r11, r0, (PAGE_SIZE/DCACHE_LINE_BYTES) - 1 ··· 127 118 addik r11, r11, -1 128 119 rtsd r15, 8 129 120 nop 121 + 122 + .size copy_page, . - copy_page
+6 -4
arch/microblaze/kernel/process.c
··· 15 15 #include <linux/bitops.h> 16 16 #include <asm/system.h> 17 17 #include <asm/pgalloc.h> 18 + #include <asm/uaccess.h> /* for USER_DS macros */ 18 19 #include <asm/cacheflush.h> 19 20 20 21 void show_regs(struct pt_regs *regs) ··· 75 74 76 75 void default_idle(void) 77 76 { 78 - if (!hlt_counter) { 77 + if (likely(hlt_counter)) { 78 + while (!need_resched()) 79 + cpu_relax(); 80 + } else { 79 81 clear_thread_flag(TIF_POLLING_NRFLAG); 80 82 smp_mb__after_clear_bit(); 81 83 local_irq_disable(); ··· 86 82 cpu_sleep(); 87 83 local_irq_enable(); 88 84 set_thread_flag(TIF_POLLING_NRFLAG); 89 - } else 90 - while (!need_resched()) 91 - cpu_relax(); 85 + } 92 86 } 93 87 94 88 void cpu_idle(void)
+15 -9
arch/microblaze/kernel/setup.c
··· 92 92 } 93 93 #endif /* CONFIG_MTD_UCLINUX_EBSS */ 94 94 95 + #if defined(CONFIG_EARLY_PRINTK) && defined(CONFIG_SERIAL_UARTLITE_CONSOLE) 96 + #define eprintk early_printk 97 + #else 98 + #define eprintk printk 99 + #endif 100 + 95 101 void __init machine_early_init(const char *cmdline, unsigned int ram, 96 102 unsigned int fdt, unsigned int msr) 97 103 { ··· 145 139 setup_early_printk(NULL); 146 140 #endif 147 141 148 - early_printk("Ramdisk addr 0x%08x, ", ram); 142 + eprintk("Ramdisk addr 0x%08x, ", ram); 149 143 if (fdt) 150 - early_printk("FDT at 0x%08x\n", fdt); 144 + eprintk("FDT at 0x%08x\n", fdt); 151 145 else 152 - early_printk("Compiled-in FDT at 0x%08x\n", 146 + eprintk("Compiled-in FDT at 0x%08x\n", 153 147 (unsigned int)_fdt_start); 154 148 155 149 #ifdef CONFIG_MTD_UCLINUX 156 - early_printk("Found romfs @ 0x%08x (0x%08x)\n", 150 + eprintk("Found romfs @ 0x%08x (0x%08x)\n", 157 151 romfs_base, romfs_size); 158 - early_printk("#### klimit %p ####\n", old_klimit); 152 + eprintk("#### klimit %p ####\n", old_klimit); 159 153 BUG_ON(romfs_size < 0); /* What else can we do? */ 160 154 161 - early_printk("Moved 0x%08x bytes from 0x%08x to 0x%08x\n", 155 + eprintk("Moved 0x%08x bytes from 0x%08x to 0x%08x\n", 162 156 romfs_size, romfs_base, (unsigned)&_ebss); 163 157 164 - early_printk("New klimit: 0x%08x\n", (unsigned)klimit); 158 + eprintk("New klimit: 0x%08x\n", (unsigned)klimit); 165 159 #endif 166 160 167 161 #if CONFIG_XILINX_MICROBLAZE0_USE_MSR_INSTR 168 162 if (msr) 169 - early_printk("!!!Your kernel has setup MSR instruction but " 163 + eprintk("!!!Your kernel has setup MSR instruction but " 170 164 "CPU don't have it %d\n", msr); 171 165 #else 172 166 if (!msr) 173 - early_printk("!!!Your kernel not setup MSR instruction but " 167 + eprintk("!!!Your kernel not setup MSR instruction but " 174 168 "CPU have it %d\n", msr); 175 169 #endif 176 170
+2 -4
arch/microblaze/kernel/traps.c
··· 22 22 __enable_hw_exceptions(); 23 23 } 24 24 25 - static int kstack_depth_to_print = 24; 25 + static unsigned long kstack_depth_to_print = 24; 26 26 27 27 static int __init kstack_setup(char *s) 28 28 { 29 - kstack_depth_to_print = strict_strtoul(s, 0, NULL); 30 - 31 - return 1; 29 + return !strict_strtoul(s, 0, &kstack_depth_to_print); 32 30 } 33 31 __setup("kstack=", kstack_setup); 34 32
+1 -2
arch/microblaze/lib/Makefile
··· 10 10 lib-y += memcpy.o memmove.o 11 11 endif 12 12 13 - lib-$(CONFIG_NO_MMU) += uaccess.o 14 - lib-$(CONFIG_MMU) += uaccess_old.o 13 + lib-y += uaccess_old.o
+5 -1
arch/microblaze/lib/fastcopy.S
··· 30 30 */ 31 31 32 32 #include <linux/linkage.h> 33 - 33 + .text 34 34 .globl memcpy 35 + .type memcpy, @function 35 36 .ent memcpy 36 37 37 38 memcpy: ··· 346 345 rtsd r15, 8 347 346 nop 348 347 348 + .size memcpy, . - memcpy 349 349 .end memcpy 350 350 /*----------------------------------------------------------------------------*/ 351 351 .globl memmove 352 + .type memmove, @function 352 353 .ent memmove 353 354 354 355 memmove: ··· 662 659 rtsd r15, 8 663 660 nop 664 661 662 + .size memmove, . - memmove 665 663 .end memmove
+1 -1
arch/microblaze/lib/memcpy.c
··· 53 53 const uint32_t *i_src; 54 54 uint32_t *i_dst; 55 55 56 - if (c >= 4) { 56 + if (likely(c >= 4)) { 57 57 unsigned value, buf_hold; 58 58 59 59 /* Align the dstination to a word boundry. */
+8 -7
arch/microblaze/lib/memset.c
··· 33 33 #ifdef __HAVE_ARCH_MEMSET 34 34 void *memset(void *v_src, int c, __kernel_size_t n) 35 35 { 36 - 37 36 char *src = v_src; 38 37 #ifdef CONFIG_OPT_LIB_FUNCTION 39 38 uint32_t *i_src; 40 - uint32_t w32; 39 + uint32_t w32 = 0; 41 40 #endif 42 41 /* Truncate c to 8 bits */ 43 42 c = (c & 0xFF); 44 43 45 44 #ifdef CONFIG_OPT_LIB_FUNCTION 46 - /* Make a repeating word out of it */ 47 - w32 = c; 48 - w32 |= w32 << 8; 49 - w32 |= w32 << 16; 45 + if (unlikely(c)) { 46 + /* Make a repeating word out of it */ 47 + w32 = c; 48 + w32 |= w32 << 8; 49 + w32 |= w32 << 16; 50 + } 50 51 51 - if (n >= 4) { 52 + if (likely(n >= 4)) { 52 53 /* Align the destination to a word boundary */ 53 54 /* This is done in an endian independant manner */ 54 55 switch ((unsigned) src & 3) {
-48
arch/microblaze/lib/uaccess.c
··· 1 - /* 2 - * Copyright (C) 2006 Atmark Techno, Inc. 3 - * 4 - * This file is subject to the terms and conditions of the GNU General Public 5 - * License. See the file "COPYING" in the main directory of this archive 6 - * for more details. 7 - */ 8 - 9 - #include <linux/string.h> 10 - #include <asm/uaccess.h> 11 - 12 - #include <asm/bug.h> 13 - 14 - long strnlen_user(const char __user *src, long count) 15 - { 16 - return strlen(src) + 1; 17 - } 18 - 19 - #define __do_strncpy_from_user(dst, src, count, res) \ 20 - do { \ 21 - char *tmp; \ 22 - strncpy(dst, src, count); \ 23 - for (tmp = dst; *tmp && count > 0; tmp++, count--) \ 24 - ; \ 25 - res = (tmp - dst); \ 26 - } while (0) 27 - 28 - long __strncpy_from_user(char *dst, const char __user *src, long count) 29 - { 30 - long res; 31 - __do_strncpy_from_user(dst, src, count, res); 32 - return res; 33 - } 34 - 35 - long strncpy_from_user(char *dst, const char __user *src, long count) 36 - { 37 - long res = -EFAULT; 38 - if (access_ok(VERIFY_READ, src, 1)) 39 - __do_strncpy_from_user(dst, src, count, res); 40 - return res; 41 - } 42 - 43 - unsigned long __copy_tofrom_user(void __user *to, 44 - const void __user *from, unsigned long size) 45 - { 46 - memcpy(to, from, size); 47 - return 0; 48 - }
+31 -14
arch/microblaze/lib/uaccess_old.S
··· 22 22 23 23 .text 24 24 .globl __strncpy_user; 25 + .type __strncpy_user, @function 25 26 .align 4; 26 27 __strncpy_user: 27 28 ··· 51 50 3: 52 51 rtsd r15,8 53 52 nop 54 - 53 + .size __strncpy_user, . - __strncpy_user 55 54 56 55 .section .fixup, "ax" 57 56 .align 2 ··· 73 72 74 73 .text 75 74 .globl __strnlen_user; 75 + .type __strnlen_user, @function 76 76 .align 4; 77 77 __strnlen_user: 78 78 addik r3,r6,0 ··· 92 90 3: 93 91 rtsd r15,8 94 92 nop 95 - 93 + .size __strnlen_user, . - __strnlen_user 96 94 97 95 .section .fixup,"ax" 98 96 4: ··· 110 108 */ 111 109 .text 112 110 .globl __copy_tofrom_user; 111 + .type __copy_tofrom_user, @function 113 112 .align 4; 114 113 __copy_tofrom_user: 115 114 /* ··· 119 116 * r7, r3 - count 120 117 * r4 - tempval 121 118 */ 122 - addik r3,r7,0 123 - beqi r3,3f 124 - 1: 125 - lbu r4,r6,r0 126 - addik r6,r6,1 127 - 2: 128 - sb r4,r5,r0 129 - addik r3,r3,-1 130 - bneid r3,1b 131 - addik r5,r5,1 /* delay slot */ 119 + beqid r7, 3f /* zero size is not likely */ 120 + andi r3, r7, 0x3 /* filter add count */ 121 + bneid r3, 4f /* if is odd value then byte copying */ 122 + or r3, r5, r6 /* find if is any to/from unaligned */ 123 + andi r3, r3, 0x3 /* mask unaligned */ 124 + bneid r3, 1f /* it is unaligned -> then jump */ 125 + or r3, r0, r0 126 + 127 + /* at least one 4 byte copy */ 128 + 5: lw r4, r6, r3 129 + 6: sw r4, r5, r3 130 + addik r7, r7, -4 131 + bneid r7, 5b 132 + addik r3, r3, 4 133 + addik r3, r7, 0 134 + rtsd r15, 8 135 + nop 136 + 4: or r3, r0, r0 137 + 1: lbu r4,r6,r3 138 + 2: sb r4,r5,r3 139 + addik r7,r7,-1 140 + bneid r7,1b 141 + addik r3,r3,1 /* delay slot */ 132 142 3: 143 + addik r3,r7,0 133 144 rtsd r15,8 134 145 nop 135 - 146 + .size __copy_tofrom_user, . - __copy_tofrom_user 136 147 137 148 .section __ex_table,"a" 138 - .word 1b,3b,2b,3b 149 + .word 1b,3b,2b,3b,5b,3b,6b,3b
+12 -12
arch/microblaze/mm/fault.c
··· 106 106 regs->esr = error_code; 107 107 108 108 /* On a kernel SLB miss we can only check for a valid exception entry */ 109 - if (kernel_mode(regs) && (address >= TASK_SIZE)) { 109 + if (unlikely(kernel_mode(regs) && (address >= TASK_SIZE))) { 110 110 printk(KERN_WARNING "kernel task_size exceed"); 111 111 _exception(SIGSEGV, regs, code, address); 112 112 } ··· 122 122 } 123 123 #endif /* CONFIG_KGDB */ 124 124 125 - if (in_atomic() || !mm) { 125 + if (unlikely(in_atomic() || !mm)) { 126 126 if (kernel_mode(regs)) 127 127 goto bad_area_nosemaphore; 128 128 ··· 150 150 * source. If this is invalid we can skip the address space check, 151 151 * thus avoiding the deadlock. 152 152 */ 153 - if (!down_read_trylock(&mm->mmap_sem)) { 153 + if (unlikely(!down_read_trylock(&mm->mmap_sem))) { 154 154 if (kernel_mode(regs) && !search_exception_tables(regs->pc)) 155 155 goto bad_area_nosemaphore; 156 156 ··· 158 158 } 159 159 160 160 vma = find_vma(mm, address); 161 - if (!vma) 161 + if (unlikely(!vma)) 162 162 goto bad_area; 163 163 164 164 if (vma->vm_start <= address) 165 165 goto good_area; 166 166 167 - if (!(vma->vm_flags & VM_GROWSDOWN)) 167 + if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) 168 168 goto bad_area; 169 169 170 - if (!is_write) 170 + if (unlikely(!is_write)) 171 171 goto bad_area; 172 172 173 173 /* ··· 179 179 * before setting the user r1. Thus we allow the stack to 180 180 * expand to 1MB without further checks. 
181 181 */ 182 - if (address + 0x100000 < vma->vm_end) { 182 + if (unlikely(address + 0x100000 < vma->vm_end)) { 183 183 184 184 /* get user regs even if this fault is in kernel mode */ 185 185 struct pt_regs *uregs = current->thread.regs; ··· 209 209 code = SEGV_ACCERR; 210 210 211 211 /* a write */ 212 - if (is_write) { 213 - if (!(vma->vm_flags & VM_WRITE)) 212 + if (unlikely(is_write)) { 213 + if (unlikely(!(vma->vm_flags & VM_WRITE))) 214 214 goto bad_area; 215 215 /* a read */ 216 216 } else { 217 217 /* protection fault */ 218 - if (error_code & 0x08000000) 218 + if (unlikely(error_code & 0x08000000)) 219 219 goto bad_area; 220 - if (!(vma->vm_flags & (VM_READ | VM_EXEC))) 220 + if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC)))) 221 221 goto bad_area; 222 222 } 223 223 ··· 235 235 goto do_sigbus; 236 236 BUG(); 237 237 } 238 - if (fault & VM_FAULT_MAJOR) 238 + if (unlikely(fault & VM_FAULT_MAJOR)) 239 239 current->maj_flt++; 240 240 else 241 241 current->min_flt++;
-9
arch/microblaze/mm/init.c
··· 165 165 for (addr = begin; addr < end; addr += PAGE_SIZE) { 166 166 ClearPageReserved(virt_to_page(addr)); 167 167 init_page_count(virt_to_page(addr)); 168 - memset((void *)addr, 0xcc, PAGE_SIZE); 169 168 free_page(addr); 170 169 totalram_pages++; 171 170 } ··· 207 208 } 208 209 209 210 #ifndef CONFIG_MMU 210 - /* Check against bounds of physical memory */ 211 - int ___range_ok(unsigned long addr, unsigned long size) 212 - { 213 - return ((addr < memory_start) || 214 - ((addr + size) > memory_end)); 215 - } 216 - EXPORT_SYMBOL(___range_ok); 217 - 218 211 int page_is_ram(unsigned long pfn) 219 212 { 220 213 return __range_ok(pfn, 0);
+1 -1
arch/microblaze/mm/pgtable.c
··· 154 154 err = 0; 155 155 set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT, 156 156 __pgprot(flags))); 157 - if (mem_init_done) 157 + if (unlikely(mem_init_done)) 158 158 flush_HPTE(0, va, pmd_val(*pd)); 159 159 /* flush_HPTE(0, va, pg); */ 160 160 }
+2
arch/powerpc/include/asm/asm-compat.h
··· 28 28 #define PPC_LLARX(t, a, b, eh) PPC_LDARX(t, a, b, eh) 29 29 #define PPC_STLCX stringify_in_c(stdcx.) 30 30 #define PPC_CNTLZL stringify_in_c(cntlzd) 31 + #define PPC_LR_STKOFF 16 31 32 32 33 /* Move to CR, single-entry optimized version. Only available 33 34 * on POWER4 and later. ··· 52 51 #define PPC_STLCX stringify_in_c(stwcx.) 53 52 #define PPC_CNTLZL stringify_in_c(cntlzw) 54 53 #define PPC_MTOCRF stringify_in_c(mtcrf) 54 + #define PPC_LR_STKOFF 4 55 55 56 56 #endif 57 57
+26
arch/powerpc/kernel/misc.S
··· 127 127 _GLOBAL(__restore_cpu_power7) 128 128 /* place holder */ 129 129 blr 130 + 131 + /* 132 + * Get a minimal set of registers for our caller's nth caller. 133 + * r3 = regs pointer, r5 = n. 134 + * 135 + * We only get R1 (stack pointer), NIP (next instruction pointer) 136 + * and LR (link register). These are all we can get in the 137 + * general case without doing complicated stack unwinding, but 138 + * fortunately they are enough to do a stack backtrace, which 139 + * is all we need them for. 140 + */ 141 + _GLOBAL(perf_arch_fetch_caller_regs) 142 + mr r6,r1 143 + cmpwi r5,0 144 + mflr r4 145 + ble 2f 146 + mtctr r5 147 + 1: PPC_LL r6,0(r6) 148 + bdnz 1b 149 + PPC_LL r4,PPC_LR_STKOFF(r6) 150 + 2: PPC_LL r7,0(r6) 151 + PPC_LL r7,PPC_LR_STKOFF(r7) 152 + PPC_STL r6,GPR1-STACK_FRAME_OVERHEAD(r3) 153 + PPC_STL r4,_NIP-STACK_FRAME_OVERHEAD(r3) 154 + PPC_STL r7,_LINK-STACK_FRAME_OVERHEAD(r3) 155 + blr
+2
arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
··· 481 481 if (rc) 482 482 goto err_bcom_rx_irq; 483 483 484 + lpbfifo.dma_irqs_enabled = 1; 485 + 484 486 /* Request the Bestcomm transmit (memory --> fifo) task and IRQ */ 485 487 lpbfifo.bcom_tx_task = 486 488 bcom_gen_bd_tx_init(2, res.start + LPBFIFO_REG_FIFO_DATA,
+3
arch/sh/kernel/return_address.c
··· 9 9 * for more details. 10 10 */ 11 11 #include <linux/kernel.h> 12 + #include <linux/module.h> 12 13 #include <asm/dwarf.h> 13 14 14 15 #ifdef CONFIG_DWARF_UNWINDER ··· 53 52 } 54 53 55 54 #endif 55 + 56 + EXPORT_SYMBOL_GPL(return_address);
+28
arch/sh/mm/tlb-pteaex.c
··· 77 77 __raw_writel(asid, MMU_ITLB_ADDRESS_ARRAY2 | MMU_PAGE_ASSOC_BIT); 78 78 back_to_cached(); 79 79 } 80 + 81 + void local_flush_tlb_all(void) 82 + { 83 + unsigned long flags, status; 84 + int i; 85 + 86 + /* 87 + * Flush all the TLB. 88 + */ 89 + local_irq_save(flags); 90 + jump_to_uncached(); 91 + 92 + status = __raw_readl(MMUCR); 93 + status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT); 94 + 95 + if (status == 0) 96 + status = MMUCR_URB_NENTRIES; 97 + 98 + for (i = 0; i < status; i++) 99 + __raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8)); 100 + 101 + for (i = 0; i < 4; i++) 102 + __raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8)); 103 + 104 + back_to_cached(); 105 + ctrl_barrier(); 106 + local_irq_restore(flags); 107 + }
+19
arch/sh/mm/tlb-sh3.c
··· 77 77 for (i = 0; i < ways; i++) 78 78 __raw_writel(data, addr + (i << 8)); 79 79 } 80 + 81 + void local_flush_tlb_all(void) 82 + { 83 + unsigned long flags, status; 84 + 85 + /* 86 + * Flush all the TLB. 87 + * 88 + * Write to the MMU control register's bit: 89 + * TF-bit for SH-3, TI-bit for SH-4. 90 + * It's same position, bit #2. 91 + */ 92 + local_irq_save(flags); 93 + status = __raw_readl(MMUCR); 94 + status |= 0x04; 95 + __raw_writel(status, MMUCR); 96 + ctrl_barrier(); 97 + local_irq_restore(flags); 98 + }
+28
arch/sh/mm/tlb-sh4.c
··· 80 80 __raw_writel(data, addr); 81 81 back_to_cached(); 82 82 } 83 + 84 + void local_flush_tlb_all(void) 85 + { 86 + unsigned long flags, status; 87 + int i; 88 + 89 + /* 90 + * Flush all the TLB. 91 + */ 92 + local_irq_save(flags); 93 + jump_to_uncached(); 94 + 95 + status = __raw_readl(MMUCR); 96 + status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT); 97 + 98 + if (status == 0) 99 + status = MMUCR_URB_NENTRIES; 100 + 101 + for (i = 0; i < status; i++) 102 + __raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8)); 103 + 104 + for (i = 0; i < 4; i++) 105 + __raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8)); 106 + 107 + back_to_cached(); 108 + ctrl_barrier(); 109 + local_irq_restore(flags); 110 + }
-28
arch/sh/mm/tlbflush_32.c
··· 119 119 local_irq_restore(flags); 120 120 } 121 121 } 122 - 123 - void local_flush_tlb_all(void) 124 - { 125 - unsigned long flags, status; 126 - int i; 127 - 128 - /* 129 - * Flush all the TLB. 130 - */ 131 - local_irq_save(flags); 132 - jump_to_uncached(); 133 - 134 - status = __raw_readl(MMUCR); 135 - status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT); 136 - 137 - if (status == 0) 138 - status = MMUCR_URB_NENTRIES; 139 - 140 - for (i = 0; i < status; i++) 141 - __raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8)); 142 - 143 - for (i = 0; i < 4; i++) 144 - __raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8)); 145 - 146 - back_to_cached(); 147 - ctrl_barrier(); 148 - local_irq_restore(flags); 149 - }
+16 -12
arch/sparc/configs/sparc64_defconfig
··· 1 1 # 2 2 # Automatically generated make config: don't edit 3 - # Linux kernel version: 2.6.33 4 - # Wed Mar 3 02:54:29 2010 3 + # Linux kernel version: 2.6.34-rc3 4 + # Sat Apr 3 15:49:56 2010 5 5 # 6 6 CONFIG_64BIT=y 7 7 CONFIG_SPARC=y ··· 23 23 CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y 24 24 CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y 25 25 CONFIG_MMU=y 26 + CONFIG_NEED_DMA_MAP_STATE=y 26 27 CONFIG_ARCH_NO_VIRT_TO_BUS=y 27 28 CONFIG_OF=y 28 29 CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y ··· 440 439 # CONFIG_ENCLOSURE_SERVICES is not set 441 440 # CONFIG_HP_ILO is not set 442 441 # CONFIG_ISL29003 is not set 442 + # CONFIG_SENSORS_TSL2550 is not set 443 443 # CONFIG_DS1682 is not set 444 444 # CONFIG_C2PORT is not set 445 445 ··· 513 511 # 514 512 # SCSI device support 515 513 # 514 + CONFIG_SCSI_MOD=y 516 515 CONFIG_RAID_ATTRS=m 517 516 CONFIG_SCSI=y 518 517 CONFIG_SCSI_DMA=y ··· 891 888 CONFIG_SERIAL_CORE=y 892 889 CONFIG_SERIAL_CORE_CONSOLE=y 893 890 # CONFIG_SERIAL_JSM is not set 891 + # CONFIG_SERIAL_TIMBERDALE is not set 894 892 # CONFIG_SERIAL_GRLIB_GAISLER_APBUART is not set 895 893 CONFIG_UNIX98_PTYS=y 896 894 # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set ··· 939 935 # 940 936 # CONFIG_I2C_OCORES is not set 941 937 # CONFIG_I2C_SIMTEC is not set 938 + # CONFIG_I2C_XILINX is not set 942 939 943 940 # 944 941 # External I2C/SMBus adapter drivers ··· 953 948 # 954 949 # CONFIG_I2C_PCA_PLATFORM is not set 955 950 # CONFIG_I2C_STUB is not set 956 - 957 - # 958 - # Miscellaneous I2C Chip support 959 - # 960 - # CONFIG_SENSORS_TSL2550 is not set 961 951 # CONFIG_I2C_DEBUG_CORE is not set 962 952 # CONFIG_I2C_DEBUG_ALGO is not set 963 953 # CONFIG_I2C_DEBUG_BUS is not set 964 - # CONFIG_I2C_DEBUG_CHIP is not set 965 954 # CONFIG_SPI is not set 966 955 967 956 # ··· 981 982 # CONFIG_SENSORS_ADM1029 is not set 982 983 # CONFIG_SENSORS_ADM1031 is not set 983 984 # CONFIG_SENSORS_ADM9240 is not set 985 + # CONFIG_SENSORS_ADT7411 is not set 984 986 # CONFIG_SENSORS_ADT7462 is 
not set 985 987 # CONFIG_SENSORS_ADT7470 is not set 986 - # CONFIG_SENSORS_ADT7473 is not set 987 988 # CONFIG_SENSORS_ADT7475 is not set 989 + # CONFIG_SENSORS_ASC7621 is not set 988 990 # CONFIG_SENSORS_ATXP1 is not set 989 991 # CONFIG_SENSORS_DS1621 is not set 990 992 # CONFIG_SENSORS_I5K_AMB is not set ··· 1052 1052 # Multifunction device drivers 1053 1053 # 1054 1054 # CONFIG_MFD_CORE is not set 1055 + # CONFIG_MFD_88PM860X is not set 1055 1056 # CONFIG_MFD_SM501 is not set 1056 1057 # CONFIG_HTC_PASIC3 is not set 1057 1058 # CONFIG_TWL4030_CORE is not set 1058 1059 # CONFIG_MFD_TMIO is not set 1059 1060 # CONFIG_PMIC_DA903X is not set 1060 1061 # CONFIG_PMIC_ADP5520 is not set 1062 + # CONFIG_MFD_MAX8925 is not set 1061 1063 # CONFIG_MFD_WM8400 is not set 1062 1064 # CONFIG_MFD_WM831X is not set 1063 1065 # CONFIG_MFD_WM8350_I2C is not set 1066 + # CONFIG_MFD_WM8994 is not set 1064 1067 # CONFIG_MFD_PCF50633 is not set 1065 1068 # CONFIG_AB3100_CORE is not set 1066 - # CONFIG_MFD_88PM8607 is not set 1069 + # CONFIG_LPC_SCH is not set 1067 1070 # CONFIG_REGULATOR is not set 1068 1071 # CONFIG_MEDIA_SUPPORT is not set 1069 1072 ··· 1116 1113 # CONFIG_FB_LEO is not set 1117 1114 CONFIG_FB_XVR500=y 1118 1115 CONFIG_FB_XVR2500=y 1116 + CONFIG_FB_XVR1000=y 1119 1117 # CONFIG_FB_S1D13XXX is not set 1120 1118 # CONFIG_FB_NVIDIA is not set 1121 1119 # CONFIG_FB_RIVA is not set ··· 1434 1430 # CONFIG_USB_RIO500 is not set 1435 1431 # CONFIG_USB_LEGOTOWER is not set 1436 1432 # CONFIG_USB_LCD is not set 1437 - # CONFIG_USB_BERRY_CHARGE is not set 1438 1433 # CONFIG_USB_LED is not set 1439 1434 # CONFIG_USB_CYPRESS_CY7C63 is not set 1440 1435 # CONFIG_USB_CYTHERM is not set ··· 1446 1443 # CONFIG_USB_IOWARRIOR is not set 1447 1444 # CONFIG_USB_TEST is not set 1448 1445 # CONFIG_USB_ISIGHTFW is not set 1449 - # CONFIG_USB_VST is not set 1450 1446 # CONFIG_USB_GADGET is not set 1451 1447 1452 1448 # ··· 1612 1610 # CONFIG_BEFS_FS is not set 1613 1611 # CONFIG_BFS_FS is 
not set 1614 1612 # CONFIG_EFS_FS is not set 1613 + # CONFIG_LOGFS is not set 1615 1614 # CONFIG_CRAMFS is not set 1616 1615 # CONFIG_SQUASHFS is not set 1617 1616 # CONFIG_VXFS_FS is not set ··· 1627 1624 # CONFIG_NFS_FS is not set 1628 1625 # CONFIG_NFSD is not set 1629 1626 # CONFIG_SMB_FS is not set 1627 + # CONFIG_CEPH_FS is not set 1630 1628 # CONFIG_CIFS is not set 1631 1629 # CONFIG_NCP_FS is not set 1632 1630 # CONFIG_CODA_FS is not set
+2 -2
arch/sparc/include/asm/stat.h
··· 53 53 ino_t st_ino; 54 54 mode_t st_mode; 55 55 short st_nlink; 56 - uid16_t st_uid; 57 - gid16_t st_gid; 56 + unsigned short st_uid; 57 + unsigned short st_gid; 58 58 unsigned short st_rdev; 59 59 off_t st_size; 60 60 time_t st_atime;
+75
arch/sparc/kernel/helpers.S
··· 46 46 nop 47 47 .size stack_trace_flush,.-stack_trace_flush 48 48 49 + #ifdef CONFIG_PERF_EVENTS 50 + .globl perf_arch_fetch_caller_regs 51 + .type perf_arch_fetch_caller_regs,#function 52 + perf_arch_fetch_caller_regs: 53 + /* We always read the %pstate into %o5 since we will use 54 + * that to construct a fake %tstate to store into the regs. 55 + */ 56 + rdpr %pstate, %o5 57 + brz,pn %o2, 50f 58 + mov %o2, %g7 59 + 60 + /* Turn off interrupts while we walk around the register 61 + * window by hand. 62 + */ 63 + wrpr %o5, PSTATE_IE, %pstate 64 + 65 + /* The %canrestore tells us how many register windows are 66 + * still live in the chip above us, past that we have to 67 + * walk the frame as saved on the stack. We stash away 68 + * the %cwp in %g1 so we can return back to the original 69 + * register window. 70 + */ 71 + rdpr %cwp, %g1 72 + rdpr %canrestore, %g2 73 + sub %g1, 1, %g3 74 + 75 + /* We have the skip count in %g7, if it hits zero then 76 + * %fp/%i7 are the registers we need. Otherwise if our 77 + * %canrestore count maintained in %g2 hits zero we have 78 + * to start traversing the stack. 79 + */ 80 + 10: brz,pn %g2, 4f 81 + sub %g2, 1, %g2 82 + wrpr %g3, %cwp 83 + subcc %g7, 1, %g7 84 + bne,pt %xcc, 10b 85 + sub %g3, 1, %g3 86 + 87 + /* We found the values we need in the cpu's register 88 + * windows. 89 + */ 90 + mov %fp, %g3 91 + ba,pt %xcc, 3f 92 + mov %i7, %g2 93 + 94 + 50: mov %fp, %g3 95 + ba,pt %xcc, 2f 96 + mov %i7, %g2 97 + 98 + /* We hit the end of the valid register windows in the 99 + * cpu, start traversing the stack frame. 100 + */ 101 + 4: mov %fp, %g3 102 + 103 + 20: ldx [%g3 + STACK_BIAS + RW_V9_I7], %g2 104 + subcc %g7, 1, %g7 105 + bne,pn %xcc, 20b 106 + ldx [%g3 + STACK_BIAS + RW_V9_I6], %g3 107 + 108 + /* Restore the current register window position and 109 + * re-enable interrupts. 
110 + */ 111 + 3: wrpr %g1, %cwp 112 + wrpr %o5, %pstate 113 + 114 + 2: stx %g3, [%o0 + PT_V9_FP] 115 + sllx %o5, 8, %o5 116 + stx %o5, [%o0 + PT_V9_TSTATE] 117 + stx %g2, [%o0 + PT_V9_TPC] 118 + add %g2, 4, %g2 119 + retl 120 + stx %g2, [%o0 + PT_V9_TNPC] 121 + .size perf_arch_fetch_caller_regs,.-perf_arch_fetch_caller_regs 122 + #endif /* CONFIG_PERF_EVENTS */ 123 + 49 124 #ifdef CONFIG_SMP 50 125 .globl hard_smp_processor_id 51 126 .type hard_smp_processor_id,#function
+1 -1
arch/sparc/kernel/perf_event.c
··· 1337 1337 callchain_store(entry, PERF_CONTEXT_USER); 1338 1338 callchain_store(entry, regs->tpc); 1339 1339 1340 - ufp = regs->u_regs[UREG_I6]; 1340 + ufp = regs->u_regs[UREG_I6] & 0xffffffffUL; 1341 1341 do { 1342 1342 struct sparc_stackf32 *usf, sf; 1343 1343 unsigned long pc;
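The one-line fix above truncates the saved `%i6` to 32 bits before using it as a user frame pointer: a compat (32-bit) task may leave garbage in the upper half of the 64-bit register. The masking step as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* A 32-bit task's frame pointer is only meaningful in the low 32 bits
 * of the saved register, so truncate before dereferencing. */
static uint64_t compat_frame_pointer(uint64_t i6)
{
    return i6 & 0xffffffffULL;
}
```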
+4
arch/sparc/kernel/ptrace_32.c
··· 65 65 *k++ = regs->u_regs[pos++]; 66 66 67 67 reg_window = (unsigned long __user *) regs->u_regs[UREG_I6]; 68 + reg_window -= 16; 68 69 for (; count > 0 && pos < 32; count--) { 69 70 if (get_user(*k++, &reg_window[pos++])) 70 71 return -EFAULT; ··· 77 76 } 78 77 79 78 reg_window = (unsigned long __user *) regs->u_regs[UREG_I6]; 79 + reg_window -= 16; 80 80 for (; count > 0 && pos < 32; count--) { 81 81 if (get_user(reg, &reg_window[pos++]) || 82 82 put_user(reg, u++)) ··· 143 141 regs->u_regs[pos++] = *k++; 144 142 145 143 reg_window = (unsigned long __user *) regs->u_regs[UREG_I6]; 144 + reg_window -= 16; 146 145 for (; count > 0 && pos < 32; count--) { 147 146 if (put_user(*k++, &reg_window[pos++])) 148 147 return -EFAULT; ··· 156 153 } 157 154 158 155 reg_window = (unsigned long __user *) regs->u_regs[UREG_I6]; 156 + reg_window -= 16; 159 157 for (; count > 0 && pos < 32; count--) { 160 158 if (get_user(reg, u++) || 161 159 put_user(reg, &reg_window[pos++]))
+4
arch/sparc/kernel/ptrace_64.c
··· 492 492 *k++ = regs->u_regs[pos++]; 493 493 494 494 reg_window = (compat_ulong_t __user *) regs->u_regs[UREG_I6]; 495 + reg_window -= 16; 495 496 if (target == current) { 496 497 for (; count > 0 && pos < 32; count--) { 497 498 if (get_user(*k++, &reg_window[pos++])) ··· 517 516 } 518 517 519 518 reg_window = (compat_ulong_t __user *) regs->u_regs[UREG_I6]; 519 + reg_window -= 16; 520 520 if (target == current) { 521 521 for (; count > 0 && pos < 32; count--) { 522 522 if (get_user(reg, &reg_window[pos++]) || ··· 601 599 regs->u_regs[pos++] = *k++; 602 600 603 601 reg_window = (compat_ulong_t __user *) regs->u_regs[UREG_I6]; 602 + reg_window -= 16; 604 603 if (target == current) { 605 604 for (; count > 0 && pos < 32; count--) { 606 605 if (put_user(*k++, &reg_window[pos++])) ··· 628 625 } 629 626 630 627 reg_window = (compat_ulong_t __user *) regs->u_regs[UREG_I6]; 628 + reg_window -= 16; 631 629 if (target == current) { 632 630 for (; count > 0 && pos < 32; count--) { 633 631 if (get_user(reg, u++) ||
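In both ptrace files, register positions 0-15 come from `pt_regs` while positions 16-31 come from the 16-slot register window (`l0-l7` then `i0-i7`) saved at the user stack pointer. The code keeps indexing by register number, so the window base must be rebased by 16 slots, which is what every added `reg_window -= 16` does. Written without the pointer trick (helper name invented for illustration):

```c
#include <assert.h>

/* Positions 16..31 address the 16-slot window saved at %sp; the kernel
 * rebases with "reg_window -= 16" so it can index by register number. */
static unsigned long window_slot(const unsigned long *window, int regno)
{
    return window[regno - 16];   /* regno 16 maps to slot 0 */
}
```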
+2 -2
arch/sparc/kernel/sysfs.c
··· 107 107 unsigned long ret; 108 108 109 109 /* should return -EINVAL to userspace */ 110 - if (set_cpus_allowed(current, cpumask_of_cpu(cpu))) 110 + if (set_cpus_allowed_ptr(current, cpumask_of(cpu))) 111 111 return 0; 112 112 113 113 ret = func(arg); 114 114 115 - set_cpus_allowed(current, old_affinity); 115 + set_cpus_allowed_ptr(current, &old_affinity); 116 116 117 117 return ret; 118 118 }
+4 -4
arch/sparc/kernel/us2e_cpufreq.c
··· 238 238 return 0; 239 239 240 240 cpus_allowed = current->cpus_allowed; 241 - set_cpus_allowed(current, cpumask_of_cpu(cpu)); 241 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 242 242 243 243 clock_tick = sparc64_get_clock_tick(cpu) / 1000; 244 244 estar = read_hbreg(HBIRD_ESTAR_MODE_ADDR); 245 245 246 - set_cpus_allowed(current, cpus_allowed); 246 + set_cpus_allowed_ptr(current, &cpus_allowed); 247 247 248 248 return clock_tick / estar_to_divisor(estar); 249 249 } ··· 259 259 return; 260 260 261 261 cpus_allowed = current->cpus_allowed; 262 - set_cpus_allowed(current, cpumask_of_cpu(cpu)); 262 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 263 263 264 264 new_freq = clock_tick = sparc64_get_clock_tick(cpu) / 1000; 265 265 new_bits = index_to_estar_mode(index); ··· 281 281 282 282 cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); 283 283 284 - set_cpus_allowed(current, cpus_allowed); 284 + set_cpus_allowed_ptr(current, &cpus_allowed); 285 285 } 286 286 287 287 static int us2e_freq_target(struct cpufreq_policy *policy,
+4 -4
arch/sparc/kernel/us3_cpufreq.c
··· 86 86 return 0; 87 87 88 88 cpus_allowed = current->cpus_allowed; 89 - set_cpus_allowed(current, cpumask_of_cpu(cpu)); 89 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 90 90 91 91 reg = read_safari_cfg(); 92 92 ret = get_current_freq(cpu, reg); 93 93 94 - set_cpus_allowed(current, cpus_allowed); 94 + set_cpus_allowed_ptr(current, &cpus_allowed); 95 95 96 96 return ret; 97 97 } ··· 106 106 return; 107 107 108 108 cpus_allowed = current->cpus_allowed; 109 - set_cpus_allowed(current, cpumask_of_cpu(cpu)); 109 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 110 110 111 111 new_freq = sparc64_get_clock_tick(cpu) / 1000; 112 112 switch (index) { ··· 140 140 141 141 cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE); 142 142 143 - set_cpus_allowed(current, cpus_allowed); 143 + set_cpus_allowed_ptr(current, &cpus_allowed); 144 144 } 145 145 146 146 static int us3_freq_target(struct cpufreq_policy *policy,
+1 -1
arch/sparc/mm/init_64.c
··· 2117 2117 "node=%d entry=%lu/%lu\n", start, block, nr, 2118 2118 node, 2119 2119 addr >> VMEMMAP_CHUNK_SHIFT, 2120 - VMEMMAP_SIZE >> VMEMMAP_CHUNK_SHIFT); 2120 + VMEMMAP_SIZE); 2121 2121 } 2122 2122 } 2123 2123 return 0;
+3 -3
arch/x86/include/asm/fixmap.h
··· 82 82 #endif 83 83 FIX_DBGP_BASE, 84 84 FIX_EARLYCON_MEM_BASE, 85 + #ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT 86 + FIX_OHCI1394_BASE, 87 + #endif 85 88 #ifdef CONFIG_X86_LOCAL_APIC 86 89 FIX_APIC_BASE, /* local (CPU) APIC) -- required for SMP or not */ 87 90 #endif ··· 135 132 (__end_of_permanent_fixed_addresses & (TOTAL_FIX_BTMAPS - 1)) 136 133 : __end_of_permanent_fixed_addresses, 137 134 FIX_BTMAP_BEGIN = FIX_BTMAP_END + TOTAL_FIX_BTMAPS - 1, 138 - #ifdef CONFIG_PROVIDE_OHCI1394_DMA_INIT 139 - FIX_OHCI1394_BASE, 140 - #endif 141 135 #ifdef CONFIG_X86_32 142 136 FIX_WP_TEST, 143 137 #endif
+1
arch/x86/include/asm/hw_irq.h
··· 133 133 134 134 typedef int vector_irq_t[NR_VECTORS]; 135 135 DECLARE_PER_CPU(vector_irq_t, vector_irq); 136 + extern void setup_vector_irq(int cpu); 136 137 137 138 #ifdef CONFIG_X86_IO_APIC 138 139 extern void lock_vector_lock(void);
+2
arch/x86/include/asm/msr-index.h
··· 105 105 #define MSR_AMD64_PATCH_LEVEL 0x0000008b 106 106 #define MSR_AMD64_NB_CFG 0xc001001f 107 107 #define MSR_AMD64_PATCH_LOADER 0xc0010020 108 + #define MSR_AMD64_OSVW_ID_LENGTH 0xc0010140 109 + #define MSR_AMD64_OSVW_STATUS 0xc0010141 108 110 #define MSR_AMD64_IBSFETCHCTL 0xc0011030 109 111 #define MSR_AMD64_IBSFETCHLINAD 0xc0011031 110 112 #define MSR_AMD64_IBSFETCHPHYSAD 0xc0011032
+8
arch/x86/kernel/apic/io_apic.c
··· 1268 1268 /* Mark the inuse vectors */ 1269 1269 for_each_irq_desc(irq, desc) { 1270 1270 cfg = desc->chip_data; 1271 + 1272 + /* 1273 + * If it is a legacy IRQ handled by the legacy PIC, this cpu 1274 + * will be part of the irq_cfg's domain. 1275 + */ 1276 + if (irq < legacy_pic->nr_legacy_irqs && !IO_APIC_IRQ(irq)) 1277 + cpumask_set_cpu(cpu, cfg->domain); 1278 + 1271 1279 if (!cpumask_test_cpu(cpu, cfg->domain)) 1272 1280 continue; 1273 1281 vector = cfg->vector;
+44 -10
arch/x86/kernel/cpu/perf_event.c
··· 28 28 #include <asm/apic.h> 29 29 #include <asm/stacktrace.h> 30 30 #include <asm/nmi.h> 31 + #include <asm/compat.h> 31 32 32 33 static u64 perf_event_mask __read_mostly; 33 34 ··· 159 158 struct perf_event *event); 160 159 struct event_constraint *event_constraints; 161 160 162 - void (*cpu_prepare)(int cpu); 161 + int (*cpu_prepare)(int cpu); 163 162 void (*cpu_starting)(int cpu); 164 163 void (*cpu_dying)(int cpu); 165 164 void (*cpu_dead)(int cpu); ··· 1334 1333 x86_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu) 1335 1334 { 1336 1335 unsigned int cpu = (long)hcpu; 1336 + int ret = NOTIFY_OK; 1337 1337 1338 1338 switch (action & ~CPU_TASKS_FROZEN) { 1339 1339 case CPU_UP_PREPARE: 1340 1340 if (x86_pmu.cpu_prepare) 1341 - x86_pmu.cpu_prepare(cpu); 1341 + ret = x86_pmu.cpu_prepare(cpu); 1342 1342 break; 1343 1343 1344 1344 case CPU_STARTING: ··· 1352 1350 x86_pmu.cpu_dying(cpu); 1353 1351 break; 1354 1352 1353 + case CPU_UP_CANCELED: 1355 1354 case CPU_DEAD: 1356 1355 if (x86_pmu.cpu_dead) 1357 1356 x86_pmu.cpu_dead(cpu); ··· 1362 1359 break; 1363 1360 } 1364 1361 1365 - return NOTIFY_OK; 1362 + return ret; 1366 1363 } 1367 1364 1368 1365 static void __init pmu_check_apic(void) ··· 1631 1628 return len; 1632 1629 } 1633 1630 1634 - static int copy_stack_frame(const void __user *fp, struct stack_frame *frame) 1631 + #ifdef CONFIG_COMPAT 1632 + static inline int 1633 + perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry) 1635 1634 { 1636 - unsigned long bytes; 1635 + /* 32-bit process in 64-bit kernel. 
*/ 1636 + struct stack_frame_ia32 frame; 1637 + const void __user *fp; 1637 1638 1638 - bytes = copy_from_user_nmi(frame, fp, sizeof(*frame)); 1639 + if (!test_thread_flag(TIF_IA32)) 1640 + return 0; 1639 1641 1640 - return bytes == sizeof(*frame); 1642 + fp = compat_ptr(regs->bp); 1643 + while (entry->nr < PERF_MAX_STACK_DEPTH) { 1644 + unsigned long bytes; 1645 + frame.next_frame = 0; 1646 + frame.return_address = 0; 1647 + 1648 + bytes = copy_from_user_nmi(&frame, fp, sizeof(frame)); 1649 + if (bytes != sizeof(frame)) 1650 + break; 1651 + 1652 + if (fp < compat_ptr(regs->sp)) 1653 + break; 1654 + 1655 + callchain_store(entry, frame.return_address); 1656 + fp = compat_ptr(frame.next_frame); 1657 + } 1658 + return 1; 1641 1659 } 1660 + #else 1661 + static inline int 1662 + perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry) 1663 + { 1664 + return 0; 1665 + } 1666 + #endif 1642 1667 1643 1668 static void 1644 1669 perf_callchain_user(struct pt_regs *regs, struct perf_callchain_entry *entry) ··· 1682 1651 callchain_store(entry, PERF_CONTEXT_USER); 1683 1652 callchain_store(entry, regs->ip); 1684 1653 1654 + if (perf_callchain_user32(regs, entry)) 1655 + return; 1656 + 1685 1657 while (entry->nr < PERF_MAX_STACK_DEPTH) { 1658 + unsigned long bytes; 1686 1659 frame.next_frame = NULL; 1687 1660 frame.return_address = 0; 1688 1661 1689 - if (!copy_stack_frame(fp, &frame)) 1662 + bytes = copy_from_user_nmi(&frame, fp, sizeof(frame)); 1663 + if (bytes != sizeof(frame)) 1690 1664 break; 1691 1665 1692 1666 if ((unsigned long)fp < regs->sp) ··· 1738 1702 return entry; 1739 1703 } 1740 1704 1741 - #ifdef CONFIG_EVENT_TRACING 1742 1705 void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int skip) 1743 1706 { 1744 1707 regs->ip = ip; ··· 1749 1714 regs->cs = __KERNEL_CS; 1750 1715 local_save_flags(regs->flags); 1751 1716 } 1752 - #endif
+47 -33
arch/x86/kernel/cpu/perf_event_amd.c
··· 137 137 return (hwc->config & 0xe0) == 0xe0; 138 138 } 139 139 140 + static inline int amd_has_nb(struct cpu_hw_events *cpuc) 141 + { 142 + struct amd_nb *nb = cpuc->amd_nb; 143 + 144 + return nb && nb->nb_id != -1; 145 + } 146 + 140 147 static void amd_put_event_constraints(struct cpu_hw_events *cpuc, 141 148 struct perf_event *event) 142 149 { ··· 154 147 /* 155 148 * only care about NB events 156 149 */ 157 - if (!(nb && amd_is_nb_event(hwc))) 150 + if (!(amd_has_nb(cpuc) && amd_is_nb_event(hwc))) 158 151 return; 159 152 160 153 /* ··· 221 214 /* 222 215 * if not NB event or no NB, then no constraints 223 216 */ 224 - if (!(nb && amd_is_nb_event(hwc))) 217 + if (!(amd_has_nb(cpuc) && amd_is_nb_event(hwc))) 225 218 return &unconstrained; 226 219 227 220 /* ··· 300 293 return nb; 301 294 } 302 295 303 - static void amd_pmu_cpu_online(int cpu) 296 + static int amd_pmu_cpu_prepare(int cpu) 304 297 { 305 - struct cpu_hw_events *cpu1, *cpu2; 306 - struct amd_nb *nb = NULL; 298 + struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu); 299 + 300 + WARN_ON_ONCE(cpuc->amd_nb); 301 + 302 + if (boot_cpu_data.x86_max_cores < 2) 303 + return NOTIFY_OK; 304 + 305 + cpuc->amd_nb = amd_alloc_nb(cpu, -1); 306 + if (!cpuc->amd_nb) 307 + return NOTIFY_BAD; 308 + 309 + return NOTIFY_OK; 310 + } 311 + 312 + static void amd_pmu_cpu_starting(int cpu) 313 + { 314 + struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu); 315 + struct amd_nb *nb; 307 316 int i, nb_id; 308 317 309 318 if (boot_cpu_data.x86_max_cores < 2) 310 319 return; 311 320 312 - /* 313 - * function may be called too early in the 314 - * boot process, in which case nb_id is bogus 315 - */ 316 321 nb_id = amd_get_nb_id(cpu); 317 - if (nb_id == BAD_APICID) 318 - return; 319 - 320 - cpu1 = &per_cpu(cpu_hw_events, cpu); 321 - cpu1->amd_nb = NULL; 322 + WARN_ON_ONCE(nb_id == BAD_APICID); 322 323 323 324 raw_spin_lock(&amd_nb_lock); 324 325 325 326 for_each_online_cpu(i) { 326 - cpu2 = &per_cpu(cpu_hw_events, i); 
327 - nb = cpu2->amd_nb; 328 - if (!nb) 327 + nb = per_cpu(cpu_hw_events, i).amd_nb; 328 + if (WARN_ON_ONCE(!nb)) 329 329 continue; 330 - if (nb->nb_id == nb_id) 331 - goto found; 330 + 331 + if (nb->nb_id == nb_id) { 332 + kfree(cpuc->amd_nb); 333 + cpuc->amd_nb = nb; 334 + break; 335 + } 332 336 } 333 337 334 - nb = amd_alloc_nb(cpu, nb_id); 335 - if (!nb) { 336 - pr_err("perf_events: failed NB allocation for CPU%d\n", cpu); 337 - raw_spin_unlock(&amd_nb_lock); 338 - return; 339 - } 340 - found: 341 - nb->refcnt++; 342 - cpu1->amd_nb = nb; 338 + cpuc->amd_nb->nb_id = nb_id; 339 + cpuc->amd_nb->refcnt++; 343 340 344 341 raw_spin_unlock(&amd_nb_lock); 345 342 } 346 343 347 - static void amd_pmu_cpu_offline(int cpu) 344 + static void amd_pmu_cpu_dead(int cpu) 348 345 { 349 346 struct cpu_hw_events *cpuhw; 350 347 ··· 360 349 raw_spin_lock(&amd_nb_lock); 361 350 362 351 if (cpuhw->amd_nb) { 363 - if (--cpuhw->amd_nb->refcnt == 0) 364 - kfree(cpuhw->amd_nb); 352 + struct amd_nb *nb = cpuhw->amd_nb; 353 + 354 + if (nb->nb_id == -1 || --nb->refcnt == 0) 355 + kfree(nb); 365 356 366 357 cpuhw->amd_nb = NULL; 367 358 } ··· 392 379 .get_event_constraints = amd_get_event_constraints, 393 380 .put_event_constraints = amd_put_event_constraints, 394 381 395 - .cpu_prepare = amd_pmu_cpu_online, 396 - .cpu_dead = amd_pmu_cpu_offline, 382 + .cpu_prepare = amd_pmu_cpu_prepare, 383 + .cpu_starting = amd_pmu_cpu_starting, 384 + .cpu_dead = amd_pmu_cpu_dead, 397 385 }; 398 386 399 387 static __init int amd_pmu_init(void)
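The reworked AMD lifecycle allocates a placeholder northbridge structure with `nb_id == -1` at CPU_UP_PREPARE and fills the real id in at CPU_STARTING; `amd_has_nb()` is the guard that treats the placeholder as "no NB yet". Its logic in isolation (struct fields reduced to what the check needs):

```c
#include <assert.h>
#include <stddef.h>

/* Reduced sketch of the patch's amd_nb: only the fields the check uses */
struct amd_nb { int nb_id; int refcnt; };

/* Usable only once CPU_STARTING has filled in the node id;
 * -1 marks the placeholder allocated at CPU_UP_PREPARE. */
static int amd_has_nb(const struct amd_nb *nb)
{
    return nb != NULL && nb->nb_id != -1;
}
```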
+5
arch/x86/kernel/dumpstack.h
··· 30 30 unsigned long return_address; 31 31 }; 32 32 33 + struct stack_frame_ia32 { 34 + u32 next_frame; 35 + u32 return_address; 36 + }; 37 + 33 38 static inline unsigned long rewind_frame_pointer(int n) 34 39 { 35 40 struct stack_frame *frame;
+3 -1
arch/x86/kernel/head32.c
··· 7 7 8 8 #include <linux/init.h> 9 9 #include <linux/start_kernel.h> 10 + #include <linux/mm.h> 10 11 11 12 #include <asm/setup.h> 12 13 #include <asm/sections.h> ··· 45 44 #ifdef CONFIG_BLK_DEV_INITRD 46 45 /* Reserve INITRD */ 47 46 if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) { 47 + /* Assume only end is not page aligned */ 48 48 u64 ramdisk_image = boot_params.hdr.ramdisk_image; 49 49 u64 ramdisk_size = boot_params.hdr.ramdisk_size; 50 - u64 ramdisk_end = ramdisk_image + ramdisk_size; 50 + u64 ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size); 51 51 reserve_early(ramdisk_image, ramdisk_end, "RAMDISK"); 52 52 } 53 53 #endif
+2 -1
arch/x86/kernel/head64.c
··· 103 103 #ifdef CONFIG_BLK_DEV_INITRD 104 104 /* Reserve INITRD */ 105 105 if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) { 106 + /* Assume only end is not page aligned */ 106 107 unsigned long ramdisk_image = boot_params.hdr.ramdisk_image; 107 108 unsigned long ramdisk_size = boot_params.hdr.ramdisk_size; 108 - unsigned long ramdisk_end = ramdisk_image + ramdisk_size; 109 + unsigned long ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size); 109 110 reserve_early(ramdisk_image, ramdisk_end, "RAMDISK"); 110 111 } 111 112 #endif
+22
arch/x86/kernel/irqinit.c
··· 141 141 x86_init.irqs.intr_init(); 142 142 } 143 143 144 + /* 145 + * Setup the vector to irq mappings. 146 + */ 147 + void setup_vector_irq(int cpu) 148 + { 149 + #ifndef CONFIG_X86_IO_APIC 150 + int irq; 151 + 152 + /* 153 + * On most of the platforms, legacy PIC delivers the interrupts on the 154 + * boot cpu. But there are certain platforms where PIC interrupts are 155 + * delivered to multiple cpu's. If the legacy IRQ is handled by the 156 + * legacy PIC, for the new cpu that is coming online, setup the static 157 + * legacy vector to irq mapping: 158 + */ 159 + for (irq = 0; irq < legacy_pic->nr_legacy_irqs; irq++) 160 + per_cpu(vector_irq, cpu)[IRQ0_VECTOR + irq] = irq; 161 + #endif 162 + 163 + __setup_vector_irq(cpu); 164 + } 165 + 144 166 static void __init smp_intr_init(void) 145 167 { 146 168 #ifdef CONFIG_SMP
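The new `setup_vector_irq()` installs a static identity window mapping legacy PIC vectors back to IRQ numbers for a CPU coming online. The mapping itself is just an offset within a fixed window, sketched here as a lookup (the base vector value is illustrative; the real one is arch-defined):

```c
#include <assert.h>

#define IRQ0_VECTOR 0x30   /* illustrative base vector */
#define NR_LEGACY   16

/* Static legacy vector -> irq identity mapping the patch installs */
static int legacy_vector_to_irq(int vector)
{
    if (vector >= IRQ0_VECTOR && vector < IRQ0_VECTOR + NR_LEGACY)
        return vector - IRQ0_VECTOR;
    return -1;   /* not a legacy vector */
}
```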
+1 -1
arch/x86/kernel/kgdb.c
··· 618 618 * portion of kgdb because this operation requires mutexs to 619 619 * complete. 620 620 */ 621 + hw_breakpoint_init(&attr); 621 622 attr.bp_addr = (unsigned long)kgdb_arch_init; 622 - attr.type = PERF_TYPE_BREAKPOINT; 623 623 attr.bp_len = HW_BREAKPOINT_LEN_1; 624 624 attr.bp_type = HW_BREAKPOINT_W; 625 625 attr.disabled = 1;
+24 -8
arch/x86/kernel/process.c
··· 526 526 } 527 527 528 528 /* 529 - * Check for AMD CPUs, which have potentially C1E support 529 + * Check for AMD CPUs, where APIC timer interrupt does not wake up CPU from C1e. 530 + * For more information see 531 + * - Erratum #400 for NPT family 0xf and family 0x10 CPUs 532 + * - Erratum #365 for family 0x11 (not affected because C1e not in use) 530 533 */ 531 534 static int __cpuinit check_c1e_idle(const struct cpuinfo_x86 *c) 532 535 { 536 + u64 val; 533 537 if (c->x86_vendor != X86_VENDOR_AMD) 534 - return 0; 535 - 536 - if (c->x86 < 0x0F) 537 - return 0; 538 + goto no_c1e_idle; 538 539 539 540 /* Family 0x0f models < rev F do not have C1E */ 540 - if (c->x86 == 0x0f && c->x86_model < 0x40) 541 - return 0; 541 + if (c->x86 == 0x0F && c->x86_model >= 0x40) 542 + return 1; 542 543 543 - return 1; 544 + if (c->x86 == 0x10) { 545 + /* 546 + * check OSVW bit for CPUs that are not affected 547 + * by erratum #400 548 + */ 549 + rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val); 550 + if (val >= 2) { 551 + rdmsrl(MSR_AMD64_OSVW_STATUS, val); 552 + if (!(val & BIT(1))) 553 + goto no_c1e_idle; 554 + } 555 + return 1; 556 + } 557 + 558 + no_c1e_idle: 559 + return 0; 544 560 } 545 561 546 562 static cpumask_var_t c1e_mask;
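The reworked C1E check encodes three cases: family 0xf is affected from model rev F (0x40) on, family 0x10 is affected unless the OSVW registers say otherwise (a valid length of at least 2 with bit 1 of the status word clear), and other families are unaffected. With the MSR reads factored out, the decision reduces to a pure function that can be checked directly:

```c
#include <assert.h>
#include <stdint.h>

/* Decision logic of check_c1e_idle() with the rdmsrl() calls lifted out:
 * caller passes the OSVW_ID_LENGTH and OSVW_STATUS values it read. */
static int c1e_affected(int family, int model,
                        uint64_t osvw_len, uint64_t osvw_status)
{
    if (family == 0x0F)
        return model >= 0x40;   /* models < rev F have no C1E */
    if (family == 0x10) {
        /* OSVW can mark a part as not affected by erratum #400 */
        if (osvw_len >= 2 && !(osvw_status & (1ULL << 1)))
            return 0;
        return 1;
    }
    return 0;
}
```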
+6 -4
arch/x86/kernel/setup.c
··· 314 314 #define MAX_MAP_CHUNK (NR_FIX_BTMAPS << PAGE_SHIFT) 315 315 static void __init relocate_initrd(void) 316 316 { 317 - 317 + /* Assume only end is not page aligned */ 318 318 u64 ramdisk_image = boot_params.hdr.ramdisk_image; 319 319 u64 ramdisk_size = boot_params.hdr.ramdisk_size; 320 + u64 area_size = PAGE_ALIGN(ramdisk_size); 320 321 u64 end_of_lowmem = max_low_pfn_mapped << PAGE_SHIFT; 321 322 u64 ramdisk_here; 322 323 unsigned long slop, clen, mapaddr; 323 324 char *p, *q; 324 325 325 326 /* We need to move the initrd down into lowmem */ 326 - ramdisk_here = find_e820_area(0, end_of_lowmem, ramdisk_size, 327 + ramdisk_here = find_e820_area(0, end_of_lowmem, area_size, 327 328 PAGE_SIZE); 328 329 329 330 if (ramdisk_here == -1ULL) ··· 333 332 334 333 /* Note: this includes all the lowmem currently occupied by 335 334 the initrd, we rely on that fact to keep the data intact. */ 336 - reserve_early(ramdisk_here, ramdisk_here + ramdisk_size, 335 + reserve_early(ramdisk_here, ramdisk_here + area_size, 337 336 "NEW RAMDISK"); 338 337 initrd_start = ramdisk_here + PAGE_OFFSET; 339 338 initrd_end = initrd_start + ramdisk_size; ··· 377 376 378 377 static void __init reserve_initrd(void) 379 378 { 379 + /* Assume only end is not page aligned */ 380 380 u64 ramdisk_image = boot_params.hdr.ramdisk_image; 381 381 u64 ramdisk_size = boot_params.hdr.ramdisk_size; 382 - u64 ramdisk_end = ramdisk_image + ramdisk_size; 382 + u64 ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size); 383 383 u64 end_of_lowmem = max_low_pfn_mapped << PAGE_SHIFT; 384 384 385 385 if (!boot_params.hdr.type_of_loader ||
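The change in setup.c (and the matching ones in head32.c and head64.c) assumes only the initrd's end can be unaligned, so the reserved area is rounded up to cover the partial tail page. The rounding in isolation:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE  4096ULL
#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

/* The reservation must cover the initrd's partial tail page */
static uint64_t ramdisk_reserve_end(uint64_t image, uint64_t size)
{
    return PAGE_ALIGN(image + size);
}
```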
+3 -3
arch/x86/kernel/smpboot.c
··· 242 242 end_local_APIC_setup(); 243 243 map_cpu_to_logical_apicid(); 244 244 245 - notify_cpu_starting(cpuid); 246 - 247 245 /* 248 246 * Need to setup vector mappings before we enable interrupts. 249 247 */ 250 - __setup_vector_irq(smp_processor_id()); 248 + setup_vector_irq(smp_processor_id()); 251 249 /* 252 250 * Get our bogomips. 253 251 * ··· 261 263 * Save our processor parameters 262 264 */ 263 265 smp_store_cpu_info(cpuid); 266 + 267 + notify_cpu_starting(cpuid); 264 268 265 269 /* 266 270 * Allow the master to continue.
+1 -1
arch/x86/kernel/vmlinux.lds.S
··· 291 291 .smp_locks : AT(ADDR(.smp_locks) - LOAD_OFFSET) { 292 292 __smp_locks = .; 293 293 *(.smp_locks) 294 - __smp_locks_end = .; 295 294 . = ALIGN(PAGE_SIZE); 295 + __smp_locks_end = .; 296 296 } 297 297 298 298 #ifdef CONFIG_X86_64
+26 -6
arch/x86/mm/init.c
··· 331 331 332 332 void free_init_pages(char *what, unsigned long begin, unsigned long end) 333 333 { 334 - unsigned long addr = begin; 334 + unsigned long addr; 335 + unsigned long begin_aligned, end_aligned; 335 336 336 - if (addr >= end) 337 + /* Make sure boundaries are page aligned */ 338 + begin_aligned = PAGE_ALIGN(begin); 339 + end_aligned = end & PAGE_MASK; 340 + 341 + if (WARN_ON(begin_aligned != begin || end_aligned != end)) { 342 + begin = begin_aligned; 343 + end = end_aligned; 344 + } 345 + 346 + if (begin >= end) 337 347 return; 348 + 349 + addr = begin; 338 350 339 351 /* 340 352 * If debugging page accesses then do not free this memory but ··· 355 343 */ 356 344 #ifdef CONFIG_DEBUG_PAGEALLOC 357 345 printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n", 358 - begin, PAGE_ALIGN(end)); 346 + begin, end); 359 347 set_memory_np(begin, (end - begin) >> PAGE_SHIFT); 360 348 #else 361 349 /* ··· 370 358 for (; addr < end; addr += PAGE_SIZE) { 371 359 ClearPageReserved(virt_to_page(addr)); 372 360 init_page_count(virt_to_page(addr)); 373 - memset((void *)(addr & ~(PAGE_SIZE-1)), 374 - POISON_FREE_INITMEM, PAGE_SIZE); 361 + memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE); 375 362 free_page(addr); 376 363 totalram_pages++; 377 364 } ··· 387 376 #ifdef CONFIG_BLK_DEV_INITRD 388 377 void free_initrd_mem(unsigned long start, unsigned long end) 389 378 { 390 - free_init_pages("initrd memory", start, end); 379 + /* 380 + * end could be not aligned, and We can not align that, 381 + * decompresser could be confused by aligned initrd_end 382 + * We already reserve the end partial page before in 383 + * - i386_start_kernel() 384 + * - x86_64_start_kernel() 385 + * - relocate_initrd() 386 + * So here We can do PAGE_ALIGN() safely to get partial page to be freed 387 + */ 388 + free_init_pages("initrd memory", start, PAGE_ALIGN(end)); 391 389 } 392 390 #endif
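`free_init_pages()` now shrinks its range inward to whole pages, the opposite rounding from the initrd reservation above: a partial head page is rounded up and a partial tail page is rounded down, since only complete pages may be freed. The two boundary adjustments as standalone helpers:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* First whole page inside the range: round the start up */
static uint64_t free_begin(uint64_t begin)
{
    return (begin + PAGE_SIZE - 1) & PAGE_MASK;
}

/* One past the last whole page: round the end down */
static uint64_t free_end(uint64_t end)
{
    return end & PAGE_MASK;
}
```

If `free_begin()` ends up at or past `free_end()`, the range contains no complete page and nothing is freed, matching the `begin >= end` early return in the patch.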
+18 -4
arch/x86/pci/acpi.c
··· 122 122 struct acpi_resource_address64 addr; 123 123 acpi_status status; 124 124 unsigned long flags; 125 - struct resource *root; 126 - u64 start, end; 125 + struct resource *root, *conflict; 126 + u64 start, end, max_len; 127 127 128 128 status = resource_to_addr(acpi_res, &addr); 129 129 if (!ACPI_SUCCESS(status)) ··· 139 139 flags = IORESOURCE_IO; 140 140 } else 141 141 return AE_OK; 142 + 143 + max_len = addr.maximum - addr.minimum + 1; 144 + if (addr.address_length > max_len) { 145 + dev_printk(KERN_DEBUG, &info->bridge->dev, 146 + "host bridge window length %#llx doesn't fit in " 147 + "%#llx-%#llx, trimming\n", 148 + (unsigned long long) addr.address_length, 149 + (unsigned long long) addr.minimum, 150 + (unsigned long long) addr.maximum); 151 + addr.address_length = max_len; 152 + } 142 153 143 154 start = addr.minimum + addr.translation_offset; 144 155 end = start + addr.address_length - 1; ··· 168 157 return AE_OK; 169 158 } 170 159 171 - if (insert_resource(root, res)) { 160 + conflict = insert_resource_conflict(root, res); 161 + if (conflict) { 172 162 dev_err(&info->bridge->dev, 173 - "can't allocate host bridge window %pR\n", res); 163 + "address space collision: host bridge window %pR " 164 + "conflicts with %s %pR\n", 165 + res, conflict->name, conflict); 174 166 } else { 175 167 pci_bus_add_resource(info->bus, res, 0); 176 168 info->res_num++;
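The acpi.c change guards against firmware describing a host bridge window whose length overruns its own `[minimum, maximum]` bounds: the window is trimmed rather than rejected. The trim arithmetic on its own:

```c
#include <assert.h>
#include <stdint.h>

/* A window's length may not exceed what its [minimum, maximum] bounds
 * can hold; clamp an over-long _CRS descriptor instead of rejecting it. */
static uint64_t trimmed_window_length(uint64_t minimum, uint64_t maximum,
                                      uint64_t length)
{
    uint64_t max_len = maximum - minimum + 1;
    return length > max_len ? max_len : length;
}
```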
-5
arch/x86/pci/i386.c
··· 127 127 continue; 128 128 if (!r->start || 129 129 pci_claim_resource(dev, idx) < 0) { 130 - dev_info(&dev->dev, 131 - "can't reserve window %pR\n", 132 - r); 133 130 /* 134 131 * Something is wrong with the region. 135 132 * Invalidate the resource to prevent ··· 178 181 "BAR %d: reserving %pr (d=%d, p=%d)\n", 179 182 idx, r, disabled, pass); 180 183 if (pci_claim_resource(dev, idx) < 0) { 181 - dev_info(&dev->dev, 182 - "can't reserve %pR\n", r); 183 184 /* We'll assign a new address later */ 184 185 r->end -= r->start; 185 186 r->start = 0;
+4
drivers/ata/pata_via.c
··· 576 576 u8 rev = isa->revision; 577 577 pci_dev_put(isa); 578 578 579 + if ((id->device == 0x0415 || id->device == 0x3164) && 580 + (config->id != id->device)) 581 + continue; 582 + 579 583 if (rev >= config->rev_min && rev <= config->rev_max) 580 584 break; 581 585 }
+31
drivers/base/power/main.c
··· 439 439 if (dev->bus && dev->bus->pm) { 440 440 pm_dev_dbg(dev, state, "EARLY "); 441 441 error = pm_noirq_op(dev, dev->bus->pm, state); 442 + if (error) 443 + goto End; 442 444 } 443 445 446 + if (dev->type && dev->type->pm) { 447 + pm_dev_dbg(dev, state, "EARLY type "); 448 + error = pm_noirq_op(dev, dev->type->pm, state); 449 + if (error) 450 + goto End; 451 + } 452 + 453 + if (dev->class && dev->class->pm) { 454 + pm_dev_dbg(dev, state, "EARLY class "); 455 + error = pm_noirq_op(dev, dev->class->pm, state); 456 + } 457 + 458 + End: 444 459 TRACE_RESUME(error); 445 460 return error; 446 461 } ··· 750 735 { 751 736 int error = 0; 752 737 738 + if (dev->class && dev->class->pm) { 739 + pm_dev_dbg(dev, state, "LATE class "); 740 + error = pm_noirq_op(dev, dev->class->pm, state); 741 + if (error) 742 + goto End; 743 + } 744 + 745 + if (dev->type && dev->type->pm) { 746 + pm_dev_dbg(dev, state, "LATE type "); 747 + error = pm_noirq_op(dev, dev->type->pm, state); 748 + if (error) 749 + goto End; 750 + } 751 + 753 752 if (dev->bus && dev->bus->pm) { 754 753 pm_dev_dbg(dev, state, "LATE "); 755 754 error = pm_noirq_op(dev, dev->bus->pm, state); 756 755 } 756 + 757 + End: 757 758 return error; 758 759 } 759 760
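The hunk above makes the noirq resume path walk bus, then type, then class callbacks, while the noirq suspend path walks the same three levels in mirror order (class, type, bus). A toy sketch of that symmetric ordering, with illustrative names rather than the kernel's structures:

```c
#include <string.h>

#define N_LEVELS 3
static const char *levels[N_LEVELS] = { "bus", "type", "class" };

/* Resume visits the levels forward; caller supplies a large enough buffer. */
static void resume_noirq_order(char *out)
{
    out[0] = '\0';
    for (int i = 0; i < N_LEVELS; i++) {
        strcat(out, levels[i]);
        strcat(out, " ");
    }
}

/* Suspend visits the same levels in reverse, mirroring resume. */
static void suspend_noirq_order(char *out)
{
    out[0] = '\0';
    for (int i = N_LEVELS - 1; i >= 0; i--) {
        strcat(out, levels[i]);
        strcat(out, " ");
    }
}
```

The point of the mirror ordering is that whatever was quiesced last on suspend is brought back first on resume.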
+2
drivers/char/tty_io.c
··· 1423 1423 list_del_init(&tty->tty_files); 1424 1424 file_list_unlock(); 1425 1425 1426 + put_pid(tty->pgrp); 1427 + put_pid(tty->session); 1426 1428 free_tty_struct(tty); 1427 1429 } 1428 1430
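The two-line fix above drops the references that `tty->pgrp` and `tty->session` hold on their `struct pid` objects before the tty is freed, plugging a refcount leak. A simplified sketch of that get/put discipline, with stand-in types (the real `struct pid` and helpers live in the kernel's pid machinery):

```c
#include <stdlib.h>

/* Simplified stand-in for the kernel's refcounted struct pid. */
struct pid { int count; };

static struct pid *get_pid(struct pid *p)
{
    if (p)
        p->count++;
    return p;
}

/* Dropping the last reference frees the object; every get_pid() must be
 * balanced by a put_pid() on the release path, as in the hunk above. */
static void put_pid(struct pid *p)
{
    if (p && --p->count == 0)
        free(p);
}
```

Forgetting the `put_pid()` calls on release, as the pre-fix code did, leaves the count permanently elevated and the object unreclaimable.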
+41 -64
drivers/firewire/core-device.c
··· 126 126 } 127 127 EXPORT_SYMBOL(fw_csr_string); 128 128 129 - static bool is_fw_unit(struct device *dev); 130 - 131 - static int match_unit_directory(const u32 *directory, u32 match_flags, 132 - const struct ieee1394_device_id *id) 129 + static void get_ids(const u32 *directory, int *id) 133 130 { 134 131 struct fw_csr_iterator ci; 135 - int key, value, match; 132 + int key, value; 136 133 137 - match = 0; 138 134 fw_csr_iterator_init(&ci, directory); 139 135 while (fw_csr_iterator_next(&ci, &key, &value)) { 140 - if (key == CSR_VENDOR && value == id->vendor_id) 141 - match |= IEEE1394_MATCH_VENDOR_ID; 142 - if (key == CSR_MODEL && value == id->model_id) 143 - match |= IEEE1394_MATCH_MODEL_ID; 144 - if (key == CSR_SPECIFIER_ID && value == id->specifier_id) 145 - match |= IEEE1394_MATCH_SPECIFIER_ID; 146 - if (key == CSR_VERSION && value == id->version) 147 - match |= IEEE1394_MATCH_VERSION; 136 + switch (key) { 137 + case CSR_VENDOR: id[0] = value; break; 138 + case CSR_MODEL: id[1] = value; break; 139 + case CSR_SPECIFIER_ID: id[2] = value; break; 140 + case CSR_VERSION: id[3] = value; break; 141 + } 148 142 } 149 - 150 - return (match & match_flags) == match_flags; 151 143 } 144 + 145 + static void get_modalias_ids(struct fw_unit *unit, int *id) 146 + { 147 + get_ids(&fw_parent_device(unit)->config_rom[5], id); 148 + get_ids(unit->directory, id); 149 + } 150 + 151 + static bool match_ids(const struct ieee1394_device_id *id_table, int *id) 152 + { 153 + int match = 0; 154 + 155 + if (id[0] == id_table->vendor_id) 156 + match |= IEEE1394_MATCH_VENDOR_ID; 157 + if (id[1] == id_table->model_id) 158 + match |= IEEE1394_MATCH_MODEL_ID; 159 + if (id[2] == id_table->specifier_id) 160 + match |= IEEE1394_MATCH_SPECIFIER_ID; 161 + if (id[3] == id_table->version) 162 + match |= IEEE1394_MATCH_VERSION; 163 + 164 + return (match & id_table->match_flags) == id_table->match_flags; 165 + } 166 + 167 + static bool is_fw_unit(struct device *dev); 152 168 153 169 static int 
fw_unit_match(struct device *dev, struct device_driver *drv) 154 170 { 155 - struct fw_unit *unit = fw_unit(dev); 156 - struct fw_device *device; 157 - const struct ieee1394_device_id *id; 171 + const struct ieee1394_device_id *id_table = 172 + container_of(drv, struct fw_driver, driver)->id_table; 173 + int id[] = {0, 0, 0, 0}; 158 174 159 175 /* We only allow binding to fw_units. */ 160 176 if (!is_fw_unit(dev)) 161 177 return 0; 162 178 163 - device = fw_parent_device(unit); 164 - id = container_of(drv, struct fw_driver, driver)->id_table; 179 + get_modalias_ids(fw_unit(dev), id); 165 180 166 - for (; id->match_flags != 0; id++) { 167 - if (match_unit_directory(unit->directory, id->match_flags, id)) 181 + for (; id_table->match_flags != 0; id_table++) 182 + if (match_ids(id_table, id)) 168 183 return 1; 169 - 170 - /* Also check vendor ID in the root directory. */ 171 - if ((id->match_flags & IEEE1394_MATCH_VENDOR_ID) && 172 - match_unit_directory(&device->config_rom[5], 173 - IEEE1394_MATCH_VENDOR_ID, id) && 174 - match_unit_directory(unit->directory, id->match_flags 175 - & ~IEEE1394_MATCH_VENDOR_ID, id)) 176 - return 1; 177 - } 178 184 179 185 return 0; 180 186 } 181 187 182 188 static int get_modalias(struct fw_unit *unit, char *buffer, size_t buffer_size) 183 189 { 184 - struct fw_device *device = fw_parent_device(unit); 185 - struct fw_csr_iterator ci; 190 + int id[] = {0, 0, 0, 0}; 186 191 187 - int key, value; 188 - int vendor = 0; 189 - int model = 0; 190 - int specifier_id = 0; 191 - int version = 0; 192 - 193 - fw_csr_iterator_init(&ci, &device->config_rom[5]); 194 - while (fw_csr_iterator_next(&ci, &key, &value)) { 195 - switch (key) { 196 - case CSR_VENDOR: 197 - vendor = value; 198 - break; 199 - case CSR_MODEL: 200 - model = value; 201 - break; 202 - } 203 - } 204 - 205 - fw_csr_iterator_init(&ci, unit->directory); 206 - while (fw_csr_iterator_next(&ci, &key, &value)) { 207 - switch (key) { 208 - case CSR_SPECIFIER_ID: 209 - specifier_id = value; 
210 - break; 211 - case CSR_VERSION: 212 - version = value; 213 - break; 214 - } 215 - } 192 + get_modalias_ids(unit, id); 216 193 217 194 return snprintf(buffer, buffer_size, 218 195 "ieee1394:ven%08Xmo%08Xsp%08Xver%08X", 219 - vendor, model, specifier_id, version); 196 + id[0], id[1], id[2], id[3]); 220 197 } 221 198 222 199 static int fw_unit_uevent(struct device *dev, struct kobj_uevent_env *env)
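The refactor above funnels matching through one `match_ids()` that builds a bitmask of fields that agree, then requires every field flagged in the table entry to have matched. A self-contained sketch of that bitmask test, with simplified constants standing in for the `IEEE1394_MATCH_*` flags:

```c
#include <stdbool.h>

/* Stand-ins for the IEEE1394_MATCH_* flags from <linux/mod_devicetable.h>. */
#define MATCH_VENDOR_ID    0x0001
#define MATCH_MODEL_ID     0x0002
#define MATCH_SPECIFIER_ID 0x0004
#define MATCH_VERSION      0x0008

struct device_id {
    unsigned int match_flags;
    int vendor_id, model_id, specifier_id, version;
};

/* Mirror of match_ids() in the hunk: collect a bit per agreeing field,
 * then check that all fields the table entry cares about matched. */
static bool match_ids(const struct device_id *t, const int id[4])
{
    int match = 0;

    if (id[0] == t->vendor_id)
        match |= MATCH_VENDOR_ID;
    if (id[1] == t->model_id)
        match |= MATCH_MODEL_ID;
    if (id[2] == t->specifier_id)
        match |= MATCH_SPECIFIER_ID;
    if (id[3] == t->version)
        match |= MATCH_VERSION;

    return (match & t->match_flags) == t->match_flags;
}
```

Fields the entry does not flag are simply ignored, which is why one table entry can match a whole family of units.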
+3 -2
drivers/firewire/core-iso.c
··· 331 331 if (ret < 0) 332 332 *bandwidth = 0; 333 333 334 - if (allocate && ret < 0 && c >= 0) { 335 - deallocate_channel(card, irm_id, generation, c, buffer); 334 + if (allocate && ret < 0) { 335 + if (c >= 0) 336 + deallocate_channel(card, irm_id, generation, c, buffer); 336 337 *channel = ret; 337 338 } 338 339 }
+4
drivers/firewire/ohci.c
··· 231 231 232 232 static char ohci_driver_name[] = KBUILD_MODNAME; 233 233 234 + #define PCI_DEVICE_ID_TI_TSB12LV22 0x8009 235 + 234 236 #define QUIRK_CYCLE_TIMER 1 235 237 #define QUIRK_RESET_PACKET 2 236 238 #define QUIRK_BE_HEADERS 4 ··· 241 239 static const struct { 242 240 unsigned short vendor, device, flags; 243 241 } ohci_quirks[] = { 242 + {PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_TSB12LV22, QUIRK_CYCLE_TIMER | 243 + QUIRK_RESET_PACKET}, 244 244 {PCI_VENDOR_ID_TI, PCI_ANY_ID, QUIRK_RESET_PACKET}, 245 245 {PCI_VENDOR_ID_AL, PCI_ANY_ID, QUIRK_CYCLE_TIMER}, 246 246 {PCI_VENDOR_ID_NEC, PCI_ANY_ID, QUIRK_CYCLE_TIMER},
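The hunk above relies on the quirk table being scanned in order: the new TSB12LV22 entry sits before the vendor-wide `PCI_ANY_ID` entry, so the specific device wins. A sketch of that first-match lookup, with illustrative values (`ANY_ID` stands in for `PCI_ANY_ID`):

```c
#define ANY_ID 0xffff /* stand-in for PCI_ANY_ID */

struct quirk {
    unsigned short vendor, device, flags;
};

/* First-match scan, as in the ohci_quirks[] table: a specific device
 * entry placed before a vendor-wide ANY_ID entry takes precedence. */
static unsigned short lookup_quirks(const struct quirk *table, int n,
                                    unsigned short vendor,
                                    unsigned short device)
{
    for (int i = 0; i < n; i++)
        if (table[i].vendor == vendor &&
            (table[i].device == device || table[i].device == ANY_ID))
            return table[i].flags;
    return 0;
}
```

Ordering is the whole contract here: swapping the two TI entries would make the wildcard shadow the specific one.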
+1
drivers/gpu/drm/drm_crtc_helper.c
··· 104 104 if (connector->status == connector_status_disconnected) { 105 105 DRM_DEBUG_KMS("%s is disconnected\n", 106 106 drm_get_connector_name(connector)); 107 + drm_mode_connector_update_edid_property(connector, NULL); 107 108 goto prune; 108 109 } 109 110
-9
drivers/gpu/drm/drm_edid.c
··· 707 707 mode->vsync_end = mode->vsync_start + vsync_pulse_width; 708 708 mode->vtotal = mode->vdisplay + vblank; 709 709 710 - /* perform the basic check for the detailed timing */ 711 - if (mode->hsync_end > mode->htotal || 712 - mode->vsync_end > mode->vtotal) { 713 - drm_mode_destroy(dev, mode); 714 - DRM_DEBUG_KMS("Incorrect detailed timing. " 715 - "Sync is beyond the blank.\n"); 716 - return NULL; 717 - } 718 - 719 710 /* Some EDIDs have bogus h/vtotal values */ 720 711 if (mode->hsync_end > mode->htotal) 721 712 mode->htotal = mode->hsync_end + 1;
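The deletion above stops rejecting detailed timings whose sync pulse runs past the blanking interval; the retained fixup instead stretches the total so the mode stays usable. A sketch of that fixup, with an illustrative struct in place of `drm_display_mode` (the diff shows the horizontal case; the vertical one is the assumed mirror image, per the "bogus h/vtotal" comment):

```c
/* Illustrative stand-in for the timing fields of drm_display_mode. */
struct timing {
    int hsync_end, htotal;
    int vsync_end, vtotal;
};

/* If a bogus EDID puts sync_end beyond total, extend the total instead
 * of discarding the mode, mirroring the fixup kept in the hunk above. */
static void fix_bogus_totals(struct timing *m)
{
    if (m->hsync_end > m->htotal)
        m->htotal = m->hsync_end + 1;
    if (m->vsync_end > m->vtotal)
        m->vtotal = m->vsync_end + 1;
}
```

Timings that are already consistent pass through untouched.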
+2
drivers/gpu/drm/drm_fb_helper.c
··· 283 283 .help_msg = "force-fb(V)", 284 284 .action_msg = "Restore framebuffer console", 285 285 }; 286 + #else 287 + static struct sysrq_key_op sysrq_drm_fb_helper_restore_op = { }; 286 288 #endif 287 289 288 290 static void drm_fb_helper_on(struct fb_info *info)
+9 -7
drivers/gpu/drm/drm_fops.c
··· 140 140 spin_unlock(&dev->count_lock); 141 141 } 142 142 out: 143 - mutex_lock(&dev->struct_mutex); 144 - if (minor->type == DRM_MINOR_LEGACY) { 145 - BUG_ON((dev->dev_mapping != NULL) && 146 - (dev->dev_mapping != inode->i_mapping)); 147 - if (dev->dev_mapping == NULL) 148 - dev->dev_mapping = inode->i_mapping; 143 + if (!retcode) { 144 + mutex_lock(&dev->struct_mutex); 145 + if (minor->type == DRM_MINOR_LEGACY) { 146 + if (dev->dev_mapping == NULL) 147 + dev->dev_mapping = inode->i_mapping; 148 + else if (dev->dev_mapping != inode->i_mapping) 149 + retcode = -ENODEV; 150 + } 151 + mutex_unlock(&dev->struct_mutex); 149 152 } 150 - mutex_unlock(&dev->struct_mutex); 151 153 152 154 return retcode; 153 155 }
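The rework above replaces a `BUG_ON` with a graceful failure: the first opener records the address-space mapping, and a later opener with a different mapping now gets `-ENODEV` instead of an oops. A minimal sketch of that claim-or-reject check (globals and names are illustrative, not the DRM structures):

```c
#include <stddef.h>

#define ENODEV 19

static void *dev_mapping; /* stand-in for dev->dev_mapping */

/* First caller claims the mapping; later callers must present the same
 * one, and a mismatch is reported as an error rather than BUG_ON. */
static int claim_mapping(void *inode_mapping)
{
    if (dev_mapping == NULL)
        dev_mapping = inode_mapping;
    else if (dev_mapping != inode_mapping)
        return -ENODEV;
    return 0;
}
```

Reopening with the original mapping keeps succeeding; only a conflicting mapping is turned away.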
+1 -1
drivers/gpu/drm/nouveau/Makefile
··· 12 12 nouveau_dp.o nouveau_grctx.o \ 13 13 nv04_timer.o \ 14 14 nv04_mc.o nv40_mc.o nv50_mc.o \ 15 - nv04_fb.o nv10_fb.o nv40_fb.o \ 15 + nv04_fb.o nv10_fb.o nv40_fb.o nv50_fb.o \ 16 16 nv04_fifo.o nv10_fifo.o nv40_fifo.o nv50_fifo.o \ 17 17 nv04_graph.o nv10_graph.o nv20_graph.o \ 18 18 nv40_graph.o nv50_graph.o \
+26 -2
drivers/gpu/drm/nouveau/nouveau_bios.c
··· 5211 5211 } 5212 5212 5213 5213 static void 5214 + apply_dcb_connector_quirks(struct nvbios *bios, int idx) 5215 + { 5216 + struct dcb_connector_table_entry *cte = &bios->dcb.connector.entry[idx]; 5217 + struct drm_device *dev = bios->dev; 5218 + 5219 + /* Gigabyte NX85T */ 5220 + if ((dev->pdev->device == 0x0421) && 5221 + (dev->pdev->subsystem_vendor == 0x1458) && 5222 + (dev->pdev->subsystem_device == 0x344c)) { 5223 + if (cte->type == DCB_CONNECTOR_HDMI_1) 5224 + cte->type = DCB_CONNECTOR_DVI_I; 5225 + } 5226 + } 5227 + 5228 + static void 5214 5229 parse_dcb_connector_table(struct nvbios *bios) 5215 5230 { 5216 5231 struct drm_device *dev = bios->dev; ··· 5253 5238 entry = conntab + conntab[1]; 5254 5239 cte = &ct->entry[0]; 5255 5240 for (i = 0; i < conntab[2]; i++, entry += conntab[3], cte++) { 5241 + cte->index = i; 5256 5242 if (conntab[3] == 2) 5257 5243 cte->entry = ROM16(entry[0]); 5258 5244 else 5259 5245 cte->entry = ROM32(entry[0]); 5260 5246 5261 5247 cte->type = (cte->entry & 0x000000ff) >> 0; 5262 - cte->index = (cte->entry & 0x00000f00) >> 8; 5248 + cte->index2 = (cte->entry & 0x00000f00) >> 8; 5263 5249 switch (cte->entry & 0x00033000) { 5264 5250 case 0x00001000: 5265 5251 cte->gpio_tag = 0x07; ··· 5281 5265 5282 5266 if (cte->type == 0xff) 5283 5267 continue; 5268 + 5269 + apply_dcb_connector_quirks(bios, i); 5284 5270 5285 5271 NV_INFO(dev, " %d: 0x%08x: type 0x%02x idx %d tag 0x%02x\n", 5286 5272 i, cte->entry, cte->type, cte->index, cte->gpio_tag); ··· 5305 5287 break; 5306 5288 default: 5307 5289 cte->type = divine_connector_type(bios, cte->index); 5308 - NV_WARN(dev, "unknown type, using 0x%02x", cte->type); 5290 + NV_WARN(dev, "unknown type, using 0x%02x\n", cte->type); 5309 5291 break; 5292 + } 5293 + 5294 + if (nouveau_override_conntype) { 5295 + int type = divine_connector_type(bios, cte->index); 5296 + if (type != cte->type) 5297 + NV_WARN(dev, " -> type 0x%02x\n", cte->type); 5310 5298 } 5311 5299 5312 5300 }
+2 -1
drivers/gpu/drm/nouveau/nouveau_bios.h
··· 72 72 }; 73 73 74 74 struct dcb_connector_table_entry { 75 + uint8_t index; 75 76 uint32_t entry; 76 77 enum dcb_connector_type type; 77 - uint8_t index; 78 + uint8_t index2; 78 79 uint8_t gpio_tag; 79 80 }; 80 81
+1 -2
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 439 439 440 440 switch (bo->mem.mem_type) { 441 441 case TTM_PL_VRAM: 442 - nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_TT | 443 - TTM_PL_FLAG_SYSTEM); 442 + nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_TT); 444 443 break; 445 444 default: 446 445 nouveau_bo_placement_set(nvbo, TTM_PL_FLAG_SYSTEM);
+1 -1
drivers/gpu/drm/nouveau/nouveau_connector.c
··· 302 302 303 303 detect_analog: 304 304 nv_encoder = find_encoder_by_type(connector, OUTPUT_ANALOG); 305 - if (!nv_encoder) 305 + if (!nv_encoder && !nouveau_tv_disable) 306 306 nv_encoder = find_encoder_by_type(connector, OUTPUT_TV); 307 307 if (nv_encoder) { 308 308 struct drm_encoder *encoder = to_drm_encoder(nv_encoder);
+5
drivers/gpu/drm/nouveau/nouveau_dma.c
··· 190 190 nouveau_bo_wr32(pb, ip++, upper_32_bits(offset) | length << 8); 191 191 192 192 chan->dma.ib_put = (chan->dma.ib_put + 1) & chan->dma.ib_max; 193 + 194 + DRM_MEMORYBARRIER(); 195 + /* Flush writes. */ 196 + nouveau_bo_rd32(pb, 0); 197 + 193 198 nvchan_wr32(chan, 0x8c, chan->dma.ib_put); 194 199 chan->dma.ib_free--; 195 200 }
+10
drivers/gpu/drm/nouveau/nouveau_drv.c
··· 83 83 int nouveau_nofbaccel = 0; 84 84 module_param_named(nofbaccel, nouveau_nofbaccel, int, 0400); 85 85 86 + MODULE_PARM_DESC(override_conntype, "Ignore DCB connector type"); 87 + int nouveau_override_conntype = 0; 88 + module_param_named(override_conntype, nouveau_override_conntype, int, 0400); 89 + 90 + MODULE_PARM_DESC(tv_disable, "Disable TV-out detection\n"); 91 + int nouveau_tv_disable = 0; 92 + module_param_named(tv_disable, nouveau_tv_disable, int, 0400); 93 + 86 94 MODULE_PARM_DESC(tv_norm, "Default TV norm.\n" 87 95 "\t\tSupported: PAL, PAL-M, PAL-N, PAL-Nc, NTSC-M, NTSC-J,\n" 88 96 "\t\t\thd480i, hd480p, hd576i, hd576p, hd720p, hd1080i.\n" ··· 162 154 if (pm_state.event == PM_EVENT_PRETHAW) 163 155 return 0; 164 156 157 + NV_INFO(dev, "Disabling fbcon acceleration...\n"); 165 158 fbdev_flags = dev_priv->fbdev_info->flags; 166 159 dev_priv->fbdev_info->flags |= FBINFO_HWACCEL_DISABLED; 167 160 161 + NV_INFO(dev, "Unpinning framebuffer(s)...\n"); 168 162 list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) { 169 163 struct nouveau_framebuffer *nouveau_fb; 170 164
+6
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 681 681 extern int nouveau_vram_pushbuf; 682 682 extern int nouveau_vram_notify; 683 683 extern int nouveau_fbpercrtc; 684 + extern int nouveau_tv_disable; 684 685 extern char *nouveau_tv_norm; 685 686 extern int nouveau_reg_debug; 686 687 extern char *nouveau_vbios; ··· 689 688 extern int nouveau_ignorelid; 690 689 extern int nouveau_nofbaccel; 691 690 extern int nouveau_noaccel; 691 + extern int nouveau_override_conntype; 692 692 693 693 extern int nouveau_pci_suspend(struct pci_dev *pdev, pm_message_t pm_state); 694 694 extern int nouveau_pci_resume(struct pci_dev *pdev); ··· 927 925 extern void nv40_fb_takedown(struct drm_device *); 928 926 extern void nv40_fb_set_region_tiling(struct drm_device *, int, uint32_t, 929 927 uint32_t, uint32_t); 928 + 929 + /* nv50_fb.c */ 930 + extern int nv50_fb_init(struct drm_device *); 931 + extern void nv50_fb_takedown(struct drm_device *); 930 932 931 933 /* nv04_fifo.c */ 932 934 extern int nv04_fifo_init(struct drm_device *);
+561 -48
drivers/gpu/drm/nouveau/nouveau_irq.c
··· 311 311 #define nouveau_print_bitfield_names(val, namelist) \ 312 312 nouveau_print_bitfield_names_((val), (namelist), ARRAY_SIZE(namelist)) 313 313 314 + struct nouveau_enum_names { 315 + uint32_t value; 316 + const char *name; 317 + }; 318 + 319 + static void 320 + nouveau_print_enum_names_(uint32_t value, 321 + const struct nouveau_enum_names *namelist, 322 + const int namelist_len) 323 + { 324 + /* 325 + * Caller must have already printed the KERN_* log level for us. 326 + * Also the caller is responsible for adding the newline. 327 + */ 328 + int i; 329 + for (i = 0; i < namelist_len; ++i) { 330 + if (value == namelist[i].value) { 331 + printk("%s", namelist[i].name); 332 + return; 333 + } 334 + } 335 + printk("unknown value 0x%08x", value); 336 + } 337 + #define nouveau_print_enum_names(val, namelist) \ 338 + nouveau_print_enum_names_((val), (namelist), ARRAY_SIZE(namelist)) 314 339 315 340 static int 316 341 nouveau_graph_chid_from_grctx(struct drm_device *dev) ··· 452 427 struct drm_nouveau_private *dev_priv = dev->dev_private; 453 428 uint32_t nsource = trap->nsource, nstatus = trap->nstatus; 454 429 455 - NV_INFO(dev, "%s - nSource:", id); 456 - nouveau_print_bitfield_names(nsource, nsource_names); 457 - printk(", nStatus:"); 458 - if (dev_priv->card_type < NV_10) 459 - nouveau_print_bitfield_names(nstatus, nstatus_names); 460 - else 461 - nouveau_print_bitfield_names(nstatus, nstatus_names_nv10); 462 - printk("\n"); 430 + if (dev_priv->card_type < NV_50) { 431 + NV_INFO(dev, "%s - nSource:", id); 432 + nouveau_print_bitfield_names(nsource, nsource_names); 433 + printk(", nStatus:"); 434 + if (dev_priv->card_type < NV_10) 435 + nouveau_print_bitfield_names(nstatus, nstatus_names); 436 + else 437 + nouveau_print_bitfield_names(nstatus, nstatus_names_nv10); 438 + printk("\n"); 439 + } 463 440 464 441 NV_INFO(dev, "%s - Ch %d/%d Class 0x%04x Mthd 0x%04x " 465 442 "Data 0x%08x:0x%08x\n", ··· 605 578 } 606 579 607 580 static void 581 + 
nv50_pfb_vm_trap(struct drm_device *dev, int display, const char *name) 582 + { 583 + struct drm_nouveau_private *dev_priv = dev->dev_private; 584 + uint32_t trap[6]; 585 + int i, ch; 586 + uint32_t idx = nv_rd32(dev, 0x100c90); 587 + if (idx & 0x80000000) { 588 + idx &= 0xffffff; 589 + if (display) { 590 + for (i = 0; i < 6; i++) { 591 + nv_wr32(dev, 0x100c90, idx | i << 24); 592 + trap[i] = nv_rd32(dev, 0x100c94); 593 + } 594 + for (ch = 0; ch < dev_priv->engine.fifo.channels; ch++) { 595 + struct nouveau_channel *chan = dev_priv->fifos[ch]; 596 + 597 + if (!chan || !chan->ramin) 598 + continue; 599 + 600 + if (trap[1] == chan->ramin->instance >> 12) 601 + break; 602 + } 603 + NV_INFO(dev, "%s - VM: Trapped %s at %02x%04x%04x status %08x %08x channel %d\n", 604 + name, (trap[5]&0x100?"read":"write"), 605 + trap[5]&0xff, trap[4]&0xffff, 606 + trap[3]&0xffff, trap[0], trap[2], ch); 607 + } 608 + nv_wr32(dev, 0x100c90, idx | 0x80000000); 609 + } else if (display) { 610 + NV_INFO(dev, "%s - no VM fault?\n", name); 611 + } 612 + } 613 + 614 + static struct nouveau_enum_names nv50_mp_exec_error_names[] = 615 + { 616 + { 3, "STACK_UNDERFLOW" }, 617 + { 4, "QUADON_ACTIVE" }, 618 + { 8, "TIMEOUT" }, 619 + { 0x10, "INVALID_OPCODE" }, 620 + { 0x40, "BREAKPOINT" }, 621 + }; 622 + 623 + static void 624 + nv50_pgraph_mp_trap(struct drm_device *dev, int tpid, int display) 625 + { 626 + struct drm_nouveau_private *dev_priv = dev->dev_private; 627 + uint32_t units = nv_rd32(dev, 0x1540); 628 + uint32_t addr, mp10, status, pc, oplow, ophigh; 629 + int i; 630 + int mps = 0; 631 + for (i = 0; i < 4; i++) { 632 + if (!(units & 1 << (i+24))) 633 + continue; 634 + if (dev_priv->chipset < 0xa0) 635 + addr = 0x408200 + (tpid << 12) + (i << 7); 636 + else 637 + addr = 0x408100 + (tpid << 11) + (i << 7); 638 + mp10 = nv_rd32(dev, addr + 0x10); 639 + status = nv_rd32(dev, addr + 0x14); 640 + if (!status) 641 + continue; 642 + if (display) { 643 + nv_rd32(dev, addr + 0x20); 644 + pc = 
nv_rd32(dev, addr + 0x24); 645 + oplow = nv_rd32(dev, addr + 0x70); 646 + ophigh= nv_rd32(dev, addr + 0x74); 647 + NV_INFO(dev, "PGRAPH_TRAP_MP_EXEC - " 648 + "TP %d MP %d: ", tpid, i); 649 + nouveau_print_enum_names(status, 650 + nv50_mp_exec_error_names); 651 + printk(" at %06x warp %d, opcode %08x %08x\n", 652 + pc&0xffffff, pc >> 24, 653 + oplow, ophigh); 654 + } 655 + nv_wr32(dev, addr + 0x10, mp10); 656 + nv_wr32(dev, addr + 0x14, 0); 657 + mps++; 658 + } 659 + if (!mps && display) 660 + NV_INFO(dev, "PGRAPH_TRAP_MP_EXEC - TP %d: " 661 + "No MPs claiming errors?\n", tpid); 662 + } 663 + 664 + static void 665 + nv50_pgraph_tp_trap(struct drm_device *dev, int type, uint32_t ustatus_old, 666 + uint32_t ustatus_new, int display, const char *name) 667 + { 668 + struct drm_nouveau_private *dev_priv = dev->dev_private; 669 + int tps = 0; 670 + uint32_t units = nv_rd32(dev, 0x1540); 671 + int i, r; 672 + uint32_t ustatus_addr, ustatus; 673 + for (i = 0; i < 16; i++) { 674 + if (!(units & (1 << i))) 675 + continue; 676 + if (dev_priv->chipset < 0xa0) 677 + ustatus_addr = ustatus_old + (i << 12); 678 + else 679 + ustatus_addr = ustatus_new + (i << 11); 680 + ustatus = nv_rd32(dev, ustatus_addr) & 0x7fffffff; 681 + if (!ustatus) 682 + continue; 683 + tps++; 684 + switch (type) { 685 + case 6: /* texture error... 
unknown for now */ 686 + nv50_pfb_vm_trap(dev, display, name); 687 + if (display) { 688 + NV_ERROR(dev, "magic set %d:\n", i); 689 + for (r = ustatus_addr + 4; r <= ustatus_addr + 0x10; r += 4) 690 + NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r, 691 + nv_rd32(dev, r)); 692 + } 693 + break; 694 + case 7: /* MP error */ 695 + if (ustatus & 0x00010000) { 696 + nv50_pgraph_mp_trap(dev, i, display); 697 + ustatus &= ~0x00010000; 698 + } 699 + break; 700 + case 8: /* TPDMA error */ 701 + { 702 + uint32_t e0c = nv_rd32(dev, ustatus_addr + 4); 703 + uint32_t e10 = nv_rd32(dev, ustatus_addr + 8); 704 + uint32_t e14 = nv_rd32(dev, ustatus_addr + 0xc); 705 + uint32_t e18 = nv_rd32(dev, ustatus_addr + 0x10); 706 + uint32_t e1c = nv_rd32(dev, ustatus_addr + 0x14); 707 + uint32_t e20 = nv_rd32(dev, ustatus_addr + 0x18); 708 + uint32_t e24 = nv_rd32(dev, ustatus_addr + 0x1c); 709 + nv50_pfb_vm_trap(dev, display, name); 710 + /* 2d engine destination */ 711 + if (ustatus & 0x00000010) { 712 + if (display) { 713 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA_2D - TP %d - Unknown fault at address %02x%08x\n", 714 + i, e14, e10); 715 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA_2D - TP %d - e0c: %08x, e18: %08x, e1c: %08x, e20: %08x, e24: %08x\n", 716 + i, e0c, e18, e1c, e20, e24); 717 + } 718 + ustatus &= ~0x00000010; 719 + } 720 + /* Render target */ 721 + if (ustatus & 0x00000040) { 722 + if (display) { 723 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA_RT - TP %d - Unknown fault at address %02x%08x\n", 724 + i, e14, e10); 725 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA_RT - TP %d - e0c: %08x, e18: %08x, e1c: %08x, e20: %08x, e24: %08x\n", 726 + i, e0c, e18, e1c, e20, e24); 727 + } 728 + ustatus &= ~0x00000040; 729 + } 730 + /* CUDA memory: l[], g[] or stack. */ 731 + if (ustatus & 0x00000080) { 732 + if (display) { 733 + if (e18 & 0x80000000) { 734 + /* g[] read fault? 
*/ 735 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - Global read fault at address %02x%08x\n", 736 + i, e14, e10 | ((e18 >> 24) & 0x1f)); 737 + e18 &= ~0x1f000000; 738 + } else if (e18 & 0xc) { 739 + /* g[] write fault? */ 740 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - Global write fault at address %02x%08x\n", 741 + i, e14, e10 | ((e18 >> 7) & 0x1f)); 742 + e18 &= ~0x00000f80; 743 + } else { 744 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - Unknown CUDA fault at address %02x%08x\n", 745 + i, e14, e10); 746 + } 747 + NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - e0c: %08x, e18: %08x, e1c: %08x, e20: %08x, e24: %08x\n", 748 + i, e0c, e18, e1c, e20, e24); 749 + } 750 + ustatus &= ~0x00000080; 751 + } 752 + } 753 + break; 754 + } 755 + if (ustatus) { 756 + if (display) 757 + NV_INFO(dev, "%s - TP%d: Unhandled ustatus 0x%08x\n", name, i, ustatus); 758 + } 759 + nv_wr32(dev, ustatus_addr, 0xc0000000); 760 + } 761 + 762 + if (!tps && display) 763 + NV_INFO(dev, "%s - No TPs claiming errors?\n", name); 764 + } 765 + 766 + static void 767 + nv50_pgraph_trap_handler(struct drm_device *dev) 768 + { 769 + struct nouveau_pgraph_trap trap; 770 + uint32_t status = nv_rd32(dev, 0x400108); 771 + uint32_t ustatus; 772 + int display = nouveau_ratelimit(); 773 + 774 + 775 + if (!status && display) { 776 + nouveau_graph_trap_info(dev, &trap); 777 + nouveau_graph_dump_trap_info(dev, "PGRAPH_TRAP", &trap); 778 + NV_INFO(dev, "PGRAPH_TRAP - no units reporting traps?\n"); 779 + } 780 + 781 + /* DISPATCH: Relays commands to other units and handles NOTIFY, 782 + * COND, QUERY. If you get a trap from it, the command is still stuck 783 + * in DISPATCH and you need to do something about it. */ 784 + if (status & 0x001) { 785 + ustatus = nv_rd32(dev, 0x400804) & 0x7fffffff; 786 + if (!ustatus && display) { 787 + NV_INFO(dev, "PGRAPH_TRAP_DISPATCH - no ustatus?\n"); 788 + } 789 + 790 + /* Known to be triggered by screwed up NOTIFY and COND... 
*/ 791 + if (ustatus & 0x00000001) { 792 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_DISPATCH_FAULT"); 793 + nv_wr32(dev, 0x400500, 0); 794 + if (nv_rd32(dev, 0x400808) & 0x80000000) { 795 + if (display) { 796 + if (nouveau_graph_trapped_channel(dev, &trap.channel)) 797 + trap.channel = -1; 798 + trap.class = nv_rd32(dev, 0x400814); 799 + trap.mthd = nv_rd32(dev, 0x400808) & 0x1ffc; 800 + trap.subc = (nv_rd32(dev, 0x400808) >> 16) & 0x7; 801 + trap.data = nv_rd32(dev, 0x40080c); 802 + trap.data2 = nv_rd32(dev, 0x400810); 803 + nouveau_graph_dump_trap_info(dev, 804 + "PGRAPH_TRAP_DISPATCH_FAULT", &trap); 805 + NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_FAULT - 400808: %08x\n", nv_rd32(dev, 0x400808)); 806 + NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_FAULT - 400848: %08x\n", nv_rd32(dev, 0x400848)); 807 + } 808 + nv_wr32(dev, 0x400808, 0); 809 + } else if (display) { 810 + NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_FAULT - No stuck command?\n"); 811 + } 812 + nv_wr32(dev, 0x4008e8, nv_rd32(dev, 0x4008e8) & 3); 813 + nv_wr32(dev, 0x400848, 0); 814 + ustatus &= ~0x00000001; 815 + } 816 + if (ustatus & 0x00000002) { 817 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_DISPATCH_QUERY"); 818 + nv_wr32(dev, 0x400500, 0); 819 + if (nv_rd32(dev, 0x40084c) & 0x80000000) { 820 + if (display) { 821 + if (nouveau_graph_trapped_channel(dev, &trap.channel)) 822 + trap.channel = -1; 823 + trap.class = nv_rd32(dev, 0x400814); 824 + trap.mthd = nv_rd32(dev, 0x40084c) & 0x1ffc; 825 + trap.subc = (nv_rd32(dev, 0x40084c) >> 16) & 0x7; 826 + trap.data = nv_rd32(dev, 0x40085c); 827 + trap.data2 = 0; 828 + nouveau_graph_dump_trap_info(dev, 829 + "PGRAPH_TRAP_DISPATCH_QUERY", &trap); 830 + NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_QUERY - 40084c: %08x\n", nv_rd32(dev, 0x40084c)); 831 + } 832 + nv_wr32(dev, 0x40084c, 0); 833 + } else if (display) { 834 + NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_QUERY - No stuck command?\n"); 835 + } 836 + ustatus &= ~0x00000002; 837 + } 838 + if (ustatus && display) 839 + NV_INFO(dev, 
"PGRAPH_TRAP_DISPATCH - Unhandled ustatus 0x%08x\n", ustatus); 840 + nv_wr32(dev, 0x400804, 0xc0000000); 841 + nv_wr32(dev, 0x400108, 0x001); 842 + status &= ~0x001; 843 + } 844 + 845 + /* TRAPs other than dispatch use the "normal" trap regs. */ 846 + if (status && display) { 847 + nouveau_graph_trap_info(dev, &trap); 848 + nouveau_graph_dump_trap_info(dev, 849 + "PGRAPH_TRAP", &trap); 850 + } 851 + 852 + /* M2MF: Memory to memory copy engine. */ 853 + if (status & 0x002) { 854 + ustatus = nv_rd32(dev, 0x406800) & 0x7fffffff; 855 + if (!ustatus && display) { 856 + NV_INFO(dev, "PGRAPH_TRAP_M2MF - no ustatus?\n"); 857 + } 858 + if (ustatus & 0x00000001) { 859 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_M2MF_NOTIFY"); 860 + ustatus &= ~0x00000001; 861 + } 862 + if (ustatus & 0x00000002) { 863 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_M2MF_IN"); 864 + ustatus &= ~0x00000002; 865 + } 866 + if (ustatus & 0x00000004) { 867 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_M2MF_OUT"); 868 + ustatus &= ~0x00000004; 869 + } 870 + NV_INFO (dev, "PGRAPH_TRAP_M2MF - %08x %08x %08x %08x\n", 871 + nv_rd32(dev, 0x406804), 872 + nv_rd32(dev, 0x406808), 873 + nv_rd32(dev, 0x40680c), 874 + nv_rd32(dev, 0x406810)); 875 + if (ustatus && display) 876 + NV_INFO(dev, "PGRAPH_TRAP_M2MF - Unhandled ustatus 0x%08x\n", ustatus); 877 + /* No sane way found yet -- just reset the bugger. */ 878 + nv_wr32(dev, 0x400040, 2); 879 + nv_wr32(dev, 0x400040, 0); 880 + nv_wr32(dev, 0x406800, 0xc0000000); 881 + nv_wr32(dev, 0x400108, 0x002); 882 + status &= ~0x002; 883 + } 884 + 885 + /* VFETCH: Fetches data from vertex buffers. 
*/ 886 + if (status & 0x004) { 887 + ustatus = nv_rd32(dev, 0x400c04) & 0x7fffffff; 888 + if (!ustatus && display) { 889 + NV_INFO(dev, "PGRAPH_TRAP_VFETCH - no ustatus?\n"); 890 + } 891 + if (ustatus & 0x00000001) { 892 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_VFETCH_FAULT"); 893 + NV_INFO (dev, "PGRAPH_TRAP_VFETCH_FAULT - %08x %08x %08x %08x\n", 894 + nv_rd32(dev, 0x400c00), 895 + nv_rd32(dev, 0x400c08), 896 + nv_rd32(dev, 0x400c0c), 897 + nv_rd32(dev, 0x400c10)); 898 + ustatus &= ~0x00000001; 899 + } 900 + if (ustatus && display) 901 + NV_INFO(dev, "PGRAPH_TRAP_VFETCH - Unhandled ustatus 0x%08x\n", ustatus); 902 + nv_wr32(dev, 0x400c04, 0xc0000000); 903 + nv_wr32(dev, 0x400108, 0x004); 904 + status &= ~0x004; 905 + } 906 + 907 + /* STRMOUT: DirectX streamout / OpenGL transform feedback. */ 908 + if (status & 0x008) { 909 + ustatus = nv_rd32(dev, 0x401800) & 0x7fffffff; 910 + if (!ustatus && display) { 911 + NV_INFO(dev, "PGRAPH_TRAP_STRMOUT - no ustatus?\n"); 912 + } 913 + if (ustatus & 0x00000001) { 914 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_STRMOUT_FAULT"); 915 + NV_INFO (dev, "PGRAPH_TRAP_STRMOUT_FAULT - %08x %08x %08x %08x\n", 916 + nv_rd32(dev, 0x401804), 917 + nv_rd32(dev, 0x401808), 918 + nv_rd32(dev, 0x40180c), 919 + nv_rd32(dev, 0x401810)); 920 + ustatus &= ~0x00000001; 921 + } 922 + if (ustatus && display) 923 + NV_INFO(dev, "PGRAPH_TRAP_STRMOUT - Unhandled ustatus 0x%08x\n", ustatus); 924 + /* No sane way found yet -- just reset the bugger. */ 925 + nv_wr32(dev, 0x400040, 0x80); 926 + nv_wr32(dev, 0x400040, 0); 927 + nv_wr32(dev, 0x401800, 0xc0000000); 928 + nv_wr32(dev, 0x400108, 0x008); 929 + status &= ~0x008; 930 + } 931 + 932 + /* CCACHE: Handles code and c[] caches and fills them. 
*/ 933 + if (status & 0x010) { 934 + ustatus = nv_rd32(dev, 0x405018) & 0x7fffffff; 935 + if (!ustatus && display) { 936 + NV_INFO(dev, "PGRAPH_TRAP_CCACHE - no ustatus?\n"); 937 + } 938 + if (ustatus & 0x00000001) { 939 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_CCACHE_FAULT"); 940 + NV_INFO (dev, "PGRAPH_TRAP_CCACHE_FAULT - %08x %08x %08x %08x %08x %08x %08x\n", 941 + nv_rd32(dev, 0x405800), 942 + nv_rd32(dev, 0x405804), 943 + nv_rd32(dev, 0x405808), 944 + nv_rd32(dev, 0x40580c), 945 + nv_rd32(dev, 0x405810), 946 + nv_rd32(dev, 0x405814), 947 + nv_rd32(dev, 0x40581c)); 948 + ustatus &= ~0x00000001; 949 + } 950 + if (ustatus && display) 951 + NV_INFO(dev, "PGRAPH_TRAP_CCACHE - Unhandled ustatus 0x%08x\n", ustatus); 952 + nv_wr32(dev, 0x405018, 0xc0000000); 953 + nv_wr32(dev, 0x400108, 0x010); 954 + status &= ~0x010; 955 + } 956 + 957 + /* Unknown, not seen yet... 0x402000 is the only trap status reg 958 + * remaining, so try to handle it anyway. Perhaps related to that 959 + * unknown DMA slot on tesla? */ 960 + if (status & 0x20) { 961 + nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_UNKC04"); 962 + ustatus = nv_rd32(dev, 0x402000) & 0x7fffffff; 963 + if (display) 964 + NV_INFO(dev, "PGRAPH_TRAP_UNKC04 - Unhandled ustatus 0x%08x\n", ustatus); 965 + nv_wr32(dev, 0x402000, 0xc0000000); 966 + /* no status modifiction on purpose */ 967 + } 968 + 969 + /* TEXTURE: CUDA texturing units */ 970 + if (status & 0x040) { 971 + nv50_pgraph_tp_trap (dev, 6, 0x408900, 0x408600, display, 972 + "PGRAPH_TRAP_TEXTURE"); 973 + nv_wr32(dev, 0x400108, 0x040); 974 + status &= ~0x040; 975 + } 976 + 977 + /* MP: CUDA execution engines. */ 978 + if (status & 0x080) { 979 + nv50_pgraph_tp_trap (dev, 7, 0x408314, 0x40831c, display, 980 + "PGRAPH_TRAP_MP"); 981 + nv_wr32(dev, 0x400108, 0x080); 982 + status &= ~0x080; 983 + } 984 + 985 + /* TPDMA: Handles TP-initiated uncached memory accesses: 986 + * l[], g[], stack, 2d surfaces, render targets. 
*/ 987 + if (status & 0x100) { 988 + nv50_pgraph_tp_trap (dev, 8, 0x408e08, 0x408708, display, 989 + "PGRAPH_TRAP_TPDMA"); 990 + nv_wr32(dev, 0x400108, 0x100); 991 + status &= ~0x100; 992 + } 993 + 994 + if (status) { 995 + if (display) 996 + NV_INFO(dev, "PGRAPH_TRAP - Unknown trap 0x%08x\n", 997 + status); 998 + nv_wr32(dev, 0x400108, status); 999 + } 1000 + } 1001 + 1002 + /* There must be a *lot* of these. Will take some time to gather them up. */ 1003 + static struct nouveau_enum_names nv50_data_error_names[] = 1004 + { 1005 + { 4, "INVALID_VALUE" }, 1006 + { 5, "INVALID_ENUM" }, 1007 + { 8, "INVALID_OBJECT" }, 1008 + { 0xc, "INVALID_BITFIELD" }, 1009 + { 0x28, "MP_NO_REG_SPACE" }, 1010 + { 0x2b, "MP_BLOCK_SIZE_MISMATCH" }, 1011 + }; 1012 + 1013 + static void 608 1014 nv50_pgraph_irq_handler(struct drm_device *dev) 609 1015 { 1016 + struct nouveau_pgraph_trap trap; 1017 + int unhandled = 0; 610 1018 uint32_t status; 611 1019 612 1020 while ((status = nv_rd32(dev, NV03_PGRAPH_INTR))) { 613 - uint32_t nsource = nv_rd32(dev, NV03_PGRAPH_NSOURCE); 614 - 1021 + /* NOTIFY: You've set a NOTIFY an a command and it's done. */ 615 1022 if (status & 0x00000001) { 616 - nouveau_pgraph_intr_notify(dev, nsource); 1023 + nouveau_graph_trap_info(dev, &trap); 1024 + if (nouveau_ratelimit()) 1025 + nouveau_graph_dump_trap_info(dev, 1026 + "PGRAPH_NOTIFY", &trap); 617 1027 status &= ~0x00000001; 618 1028 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000001); 619 1029 } 620 1030 621 - if (status & 0x00000010) { 622 - nouveau_pgraph_intr_error(dev, nsource | 623 - NV03_PGRAPH_NSOURCE_ILLEGAL_MTHD); 1031 + /* COMPUTE_QUERY: Purpose and exact cause unknown, happens 1032 + * when you write 0x200 to 0x50c0 method 0x31c. 
*/ 1033 + if (status & 0x00000002) { 1034 + nouveau_graph_trap_info(dev, &trap); 1035 + if (nouveau_ratelimit()) 1036 + nouveau_graph_dump_trap_info(dev, 1037 + "PGRAPH_COMPUTE_QUERY", &trap); 1038 + status &= ~0x00000002; 1039 + nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000002); 1040 + } 624 1041 1042 + /* Unknown, never seen: 0x4 */ 1043 + 1044 + /* ILLEGAL_MTHD: You used a wrong method for this class. */ 1045 + if (status & 0x00000010) { 1046 + nouveau_graph_trap_info(dev, &trap); 1047 + if (nouveau_pgraph_intr_swmthd(dev, &trap)) 1048 + unhandled = 1; 1049 + if (unhandled && nouveau_ratelimit()) 1050 + nouveau_graph_dump_trap_info(dev, 1051 + "PGRAPH_ILLEGAL_MTHD", &trap); 625 1052 status &= ~0x00000010; 626 1053 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000010); 627 1054 } 628 1055 1056 + /* ILLEGAL_CLASS: You used a wrong class. */ 1057 + if (status & 0x00000020) { 1058 + nouveau_graph_trap_info(dev, &trap); 1059 + if (nouveau_ratelimit()) 1060 + nouveau_graph_dump_trap_info(dev, 1061 + "PGRAPH_ILLEGAL_CLASS", &trap); 1062 + status &= ~0x00000020; 1063 + nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000020); 1064 + } 1065 + 1066 + /* DOUBLE_NOTIFY: You tried to set a NOTIFY on another NOTIFY. 
*/ 1067 + if (status & 0x00000040) { 1068 + nouveau_graph_trap_info(dev, &trap); 1069 + if (nouveau_ratelimit()) 1070 + nouveau_graph_dump_trap_info(dev, 1071 + "PGRAPH_DOUBLE_NOTIFY", &trap); 1072 + status &= ~0x00000040; 1073 + nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000040); 1074 + } 1075 + 1076 + /* CONTEXT_SWITCH: PGRAPH needs us to load a new context */ 629 1077 if (status & 0x00001000) { 630 1078 nv_wr32(dev, 0x400500, 0x00000000); 631 1079 nv_wr32(dev, NV03_PGRAPH_INTR, ··· 1115 613 status &= ~NV_PGRAPH_INTR_CONTEXT_SWITCH; 1116 614 } 1117 615 1118 - if (status & 0x00100000) { 1119 - nouveau_pgraph_intr_error(dev, nsource | 1120 - NV03_PGRAPH_NSOURCE_DATA_ERROR); 616 + /* BUFFER_NOTIFY: Your m2mf transfer finished */ 617 + if (status & 0x00010000) { 618 + nouveau_graph_trap_info(dev, &trap); 619 + if (nouveau_ratelimit()) 620 + nouveau_graph_dump_trap_info(dev, 621 + "PGRAPH_BUFFER_NOTIFY", &trap); 622 + status &= ~0x00010000; 623 + nv_wr32(dev, NV03_PGRAPH_INTR, 0x00010000); 624 + } 1121 625 626 + /* DATA_ERROR: Invalid value for this method, or invalid 627 + * state in current PGRAPH context for this operation */ 628 + if (status & 0x00100000) { 629 + nouveau_graph_trap_info(dev, &trap); 630 + if (nouveau_ratelimit()) { 631 + nouveau_graph_dump_trap_info(dev, 632 + "PGRAPH_DATA_ERROR", &trap); 633 + NV_INFO (dev, "PGRAPH_DATA_ERROR - "); 634 + nouveau_print_enum_names(nv_rd32(dev, 0x400110), 635 + nv50_data_error_names); 636 + printk("\n"); 637 + } 1122 638 status &= ~0x00100000; 1123 639 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00100000); 1124 640 } 1125 641 642 + /* TRAP: Something bad happened in the middle of command 643 + * execution. Has a billion types, subtypes, and even 644 + * subsubtypes. 
*/ 1126 645 if (status & 0x00200000) { 1127 - int r; 1128 - 1129 - nouveau_pgraph_intr_error(dev, nsource | 1130 - NV03_PGRAPH_NSOURCE_PROTECTION_ERROR); 1131 - 1132 - NV_ERROR(dev, "magic set 1:\n"); 1133 - for (r = 0x408900; r <= 0x408910; r += 4) 1134 - NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r, 1135 - nv_rd32(dev, r)); 1136 - nv_wr32(dev, 0x408900, 1137 - nv_rd32(dev, 0x408904) | 0xc0000000); 1138 - for (r = 0x408e08; r <= 0x408e24; r += 4) 1139 - NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r, 1140 - nv_rd32(dev, r)); 1141 - nv_wr32(dev, 0x408e08, 1142 - nv_rd32(dev, 0x408e08) | 0xc0000000); 1143 - 1144 - NV_ERROR(dev, "magic set 2:\n"); 1145 - for (r = 0x409900; r <= 0x409910; r += 4) 1146 - NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r, 1147 - nv_rd32(dev, r)); 1148 - nv_wr32(dev, 0x409900, 1149 - nv_rd32(dev, 0x409904) | 0xc0000000); 1150 - for (r = 0x409e08; r <= 0x409e24; r += 4) 1151 - NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r, 1152 - nv_rd32(dev, r)); 1153 - nv_wr32(dev, 0x409e08, 1154 - nv_rd32(dev, 0x409e08) | 0xc0000000); 1155 - 646 + nv50_pgraph_trap_handler(dev); 1156 647 status &= ~0x00200000; 1157 - nv_wr32(dev, NV03_PGRAPH_NSOURCE, nsource); 1158 648 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00200000); 1159 649 } 650 + 651 + /* Unknown, never seen: 0x00400000 */ 652 + 653 + /* SINGLE_STEP: Happens on every method if you turned on 654 + * single stepping in 40008c */ 655 + if (status & 0x01000000) { 656 + nouveau_graph_trap_info(dev, &trap); 657 + if (nouveau_ratelimit()) 658 + nouveau_graph_dump_trap_info(dev, 659 + "PGRAPH_SINGLE_STEP", &trap); 660 + status &= ~0x01000000; 661 + nv_wr32(dev, NV03_PGRAPH_INTR, 0x01000000); 662 + } 663 + 664 + /* 0x02000000 happens when you pause a ctxprog... 665 + * but the only way this can happen that I know is by 666 + * poking the relevant MMIO register, and we don't 667 + * do that. 
*/ 1160 668 1161 669 if (status) { 1162 670 NV_INFO(dev, "Unhandled PGRAPH_INTR - 0x%08x\n", ··· 1184 672 } 1185 673 1186 674 nv_wr32(dev, NV03_PMC_INTR_0, NV_PMC_INTR_0_PGRAPH_PENDING); 1187 - nv_wr32(dev, 0x400824, nv_rd32(dev, 0x400824) & ~(1 << 31)); 675 + if (nv_rd32(dev, 0x400824) & (1 << 31)) 676 + nv_wr32(dev, 0x400824, nv_rd32(dev, 0x400824) & ~(1 << 31)); 1188 677 } 1189 678 1190 679 static void
+2 -3
drivers/gpu/drm/nouveau/nouveau_state.c
··· 35 35 #include "nouveau_drm.h" 36 36 #include "nv50_display.h" 37 37 38 - static int nouveau_stub_init(struct drm_device *dev) { return 0; } 39 38 static void nouveau_stub_takedown(struct drm_device *dev) {} 40 39 41 40 static int nouveau_init_engine_ptrs(struct drm_device *dev) ··· 276 277 engine->timer.init = nv04_timer_init; 277 278 engine->timer.read = nv04_timer_read; 278 279 engine->timer.takedown = nv04_timer_takedown; 279 - engine->fb.init = nouveau_stub_init; 280 - engine->fb.takedown = nouveau_stub_takedown; 280 + engine->fb.init = nv50_fb_init; 281 + engine->fb.takedown = nv50_fb_takedown; 281 282 engine->graph.grclass = nv50_graph_grclass; 282 283 engine->graph.init = nv50_graph_init; 283 284 engine->graph.takedown = nv50_graph_takedown;
+3 -3
drivers/gpu/drm/nouveau/nv04_crtc.c
··· 230 230 struct drm_framebuffer *fb = crtc->fb; 231 231 232 232 /* Calculate our timings */ 233 - int horizDisplay = (mode->crtc_hdisplay >> 3) - 1; 234 - int horizStart = (mode->crtc_hsync_start >> 3) - 1; 235 - int horizEnd = (mode->crtc_hsync_end >> 3) - 1; 233 + int horizDisplay = (mode->crtc_hdisplay >> 3) - 1; 234 + int horizStart = (mode->crtc_hsync_start >> 3) + 1; 235 + int horizEnd = (mode->crtc_hsync_end >> 3) + 1; 236 236 int horizTotal = (mode->crtc_htotal >> 3) - 5; 237 237 int horizBlankStart = (mode->crtc_hdisplay >> 3) - 1; 238 238 int horizBlankEnd = (mode->crtc_htotal >> 3) - 1;
+3 -3
drivers/gpu/drm/nouveau/nv04_fbcon.c
··· 118 118 return; 119 119 } 120 120 121 - width = ALIGN(image->width, 32); 122 - dsize = (width * image->height) >> 5; 121 + width = ALIGN(image->width, 8); 122 + dsize = ALIGN(width * image->height, 32) >> 5; 123 123 124 124 if (info->fix.visual == FB_VISUAL_TRUECOLOR || 125 125 info->fix.visual == FB_VISUAL_DIRECTCOLOR) { ··· 136 136 ((image->dx + image->width) & 0xffff)); 137 137 OUT_RING(chan, bg); 138 138 OUT_RING(chan, fg); 139 - OUT_RING(chan, (image->height << 16) | image->width); 140 139 OUT_RING(chan, (image->height << 16) | width); 140 + OUT_RING(chan, (image->height << 16) | image->width); 141 141 OUT_RING(chan, (image->dy << 16) | (image->dx & 0xffff)); 142 142 143 143 while (dsize) {
+2 -2
drivers/gpu/drm/nouveau/nv50_display.c
··· 522 522 } 523 523 524 524 for (i = 0 ; i < dcb->connector.entries; i++) { 525 - if (i != 0 && dcb->connector.entry[i].index == 526 - dcb->connector.entry[i - 1].index) 525 + if (i != 0 && dcb->connector.entry[i].index2 == 526 + dcb->connector.entry[i - 1].index2) 527 527 continue; 528 528 nouveau_connector_create(dev, &dcb->connector.entry[i]); 529 529 }
+32
drivers/gpu/drm/nouveau/nv50_fb.c
··· 1 + #include "drmP.h" 2 + #include "drm.h" 3 + #include "nouveau_drv.h" 4 + #include "nouveau_drm.h" 5 + 6 + int 7 + nv50_fb_init(struct drm_device *dev) 8 + { 9 + /* This is needed to get meaningful information from 100c90 10 + * on traps. No idea what these values mean exactly. */ 11 + struct drm_nouveau_private *dev_priv = dev->dev_private; 12 + 13 + switch (dev_priv->chipset) { 14 + case 0x50: 15 + nv_wr32(dev, 0x100c90, 0x0707ff); 16 + break; 17 + case 0xa5: 18 + case 0xa8: 19 + nv_wr32(dev, 0x100c90, 0x0d0fff); 20 + break; 21 + default: 22 + nv_wr32(dev, 0x100c90, 0x1d07ff); 23 + break; 24 + } 25 + 26 + return 0; 27 + } 28 + 29 + void 30 + nv50_fb_takedown(struct drm_device *dev) 31 + { 32 + }
+1 -1
drivers/gpu/drm/nouveau/nv50_fbcon.c
··· 233 233 BEGIN_RING(chan, NvSub2D, 0x0808, 3); 234 234 OUT_RING(chan, 0); 235 235 OUT_RING(chan, 0); 236 - OUT_RING(chan, 0); 236 + OUT_RING(chan, 1); 237 237 BEGIN_RING(chan, NvSub2D, 0x081c, 1); 238 238 OUT_RING(chan, 1); 239 239 BEGIN_RING(chan, NvSub2D, 0x0840, 4);
+18 -4
drivers/gpu/drm/nouveau/nv50_graph.c
··· 56 56 static void 57 57 nv50_graph_init_regs__nv(struct drm_device *dev) 58 58 { 59 + struct drm_nouveau_private *dev_priv = dev->dev_private; 60 + uint32_t units = nv_rd32(dev, 0x1540); 61 + int i; 62 + 59 63 NV_DEBUG(dev, "\n"); 60 64 61 65 nv_wr32(dev, 0x400804, 0xc0000000); ··· 68 64 nv_wr32(dev, 0x401800, 0xc0000000); 69 65 nv_wr32(dev, 0x405018, 0xc0000000); 70 66 nv_wr32(dev, 0x402000, 0xc0000000); 67 + 68 + for (i = 0; i < 16; i++) { 69 + if (units & 1 << i) { 70 + if (dev_priv->chipset < 0xa0) { 71 + nv_wr32(dev, 0x408900 + (i << 12), 0xc0000000); 72 + nv_wr32(dev, 0x408e08 + (i << 12), 0xc0000000); 73 + nv_wr32(dev, 0x408314 + (i << 12), 0xc0000000); 74 + } else { 75 + nv_wr32(dev, 0x408600 + (i << 11), 0xc0000000); 76 + nv_wr32(dev, 0x408708 + (i << 11), 0xc0000000); 77 + nv_wr32(dev, 0x40831c + (i << 11), 0xc0000000); 78 + } 79 + } 80 + } 71 81 72 82 nv_wr32(dev, 0x400108, 0xffffffff); 73 83 ··· 247 229 nouveau_grctx_vals_load(dev, ctx); 248 230 } 249 231 nv_wo32(dev, ctx, 0x00000/4, chan->ramin->instance >> 12); 250 - if ((dev_priv->chipset & 0xf0) == 0xa0) 251 - nv_wo32(dev, ctx, 0x00004/4, 0x00000000); 252 - else 253 - nv_wo32(dev, ctx, 0x0011c/4, 0x00000000); 254 232 dev_priv->engine.instmem.finish_access(dev); 255 233 256 234 return 0;
+9 -4
drivers/gpu/drm/nouveau/nv50_grctx.c
··· 64 64 #define CP_FLAG_ALWAYS ((2 * 32) + 13) 65 65 #define CP_FLAG_ALWAYS_FALSE 0 66 66 #define CP_FLAG_ALWAYS_TRUE 1 67 + #define CP_FLAG_INTR ((2 * 32) + 15) 68 + #define CP_FLAG_INTR_NOT_PENDING 0 69 + #define CP_FLAG_INTR_PENDING 1 67 70 68 71 #define CP_CTX 0x00100000 69 72 #define CP_CTX_COUNT 0x000f0000 ··· 217 214 cp_name(ctx, cp_setup_save); 218 215 cp_set (ctx, UNK1D, SET); 219 216 cp_wait(ctx, STATUS, BUSY); 217 + cp_wait(ctx, INTR, PENDING); 218 + cp_bra (ctx, STATUS, BUSY, cp_setup_save); 220 219 cp_set (ctx, UNK01, SET); 221 220 cp_set (ctx, SWAP_DIRECTION, SAVE); 222 221 ··· 274 269 int offset, base; 275 270 uint32_t units = nv_rd32 (ctx->dev, 0x1540); 276 271 277 - /* 0800 */ 272 + /* 0800: DISPATCH */ 278 273 cp_ctx(ctx, 0x400808, 7); 279 274 gr_def(ctx, 0x400814, 0x00000030); 280 275 cp_ctx(ctx, 0x400834, 0x32); ··· 305 300 gr_def(ctx, 0x400b20, 0x0001629d); 306 301 } 307 302 308 - /* 0C00 */ 303 + /* 0C00: VFETCH */ 309 304 cp_ctx(ctx, 0x400c08, 0x2); 310 305 gr_def(ctx, 0x400c08, 0x0000fe0c); 311 306 ··· 331 326 cp_ctx(ctx, 0x401540, 0x5); 332 327 gr_def(ctx, 0x401550, 0x00001018); 333 328 334 - /* 1800 */ 329 + /* 1800: STREAMOUT */ 335 330 cp_ctx(ctx, 0x401814, 0x1); 336 331 gr_def(ctx, 0x401814, 0x000000ff); 337 332 if (dev_priv->chipset == 0x50) { ··· 646 641 if (dev_priv->chipset == 0x50) 647 642 cp_ctx(ctx, 0x4063e0, 0x1); 648 643 649 - /* 6800 */ 644 + /* 6800: M2MF */ 650 645 if (dev_priv->chipset < 0x90) { 651 646 cp_ctx(ctx, 0x406814, 0x2b); 652 647 gr_def(ctx, 0x406818, 0x00000f80);
+1 -1
drivers/gpu/drm/radeon/Makefile
··· 50 50 radeon-y := radeon_drv.o radeon_cp.o radeon_state.o radeon_mem.o \ 51 51 radeon_irq.o r300_cmdbuf.o r600_cp.o 52 52 # add KMS driver 53 - radeon-y += radeon_device.o radeon_kms.o \ 53 + radeon-y += radeon_device.o radeon_asic.o radeon_kms.o \ 54 54 radeon_atombios.o radeon_agp.o atombios_crtc.o radeon_combios.o \ 55 55 atom.o radeon_fence.o radeon_ttm.o radeon_object.o radeon_gart.o \ 56 56 radeon_legacy_crtc.o radeon_legacy_encoders.o radeon_connectors.o \
+67 -24
drivers/gpu/drm/radeon/atom.c
··· 52 52 53 53 typedef struct { 54 54 struct atom_context *ctx; 55 - 56 55 uint32_t *ps, *ws; 57 56 int ps_shift; 58 57 uint16_t start; 58 + unsigned last_jump; 59 + unsigned long last_jump_jiffies; 60 + bool abort; 59 61 } atom_exec_context; 60 62 61 63 int atom_debug = 0; 62 - static void atom_execute_table_locked(struct atom_context *ctx, int index, uint32_t * params); 63 - void atom_execute_table(struct atom_context *ctx, int index, uint32_t * params); 64 + static int atom_execute_table_locked(struct atom_context *ctx, int index, uint32_t * params); 65 + int atom_execute_table(struct atom_context *ctx, int index, uint32_t * params); 64 66 65 67 static uint32_t atom_arg_mask[8] = 66 68 { 0xFFFFFFFF, 0xFFFF, 0xFFFF00, 0xFFFF0000, 0xFF, 0xFF00, 0xFF0000, ··· 606 604 static void atom_op_calltable(atom_exec_context *ctx, int *ptr, int arg) 607 605 { 608 606 int idx = U8((*ptr)++); 607 + int r = 0; 608 + 609 609 if (idx < ATOM_TABLE_NAMES_CNT) 610 610 SDEBUG(" table: %d (%s)\n", idx, atom_table_names[idx]); 611 611 else 612 612 SDEBUG(" table: %d\n", idx); 613 613 if (U16(ctx->ctx->cmd_table + 4 + 2 * idx)) 614 - atom_execute_table_locked(ctx->ctx, idx, ctx->ps + ctx->ps_shift); 614 + r = atom_execute_table_locked(ctx->ctx, idx, ctx->ps + ctx->ps_shift); 615 + if (r) { 616 + ctx->abort = true; 617 + } 615 618 } 616 619 617 620 static void atom_op_clear(atom_exec_context *ctx, int *ptr, int arg) ··· 680 673 static void atom_op_jump(atom_exec_context *ctx, int *ptr, int arg) 681 674 { 682 675 int execute = 0, target = U16(*ptr); 676 + unsigned long cjiffies; 677 + 683 678 (*ptr) += 2; 684 679 switch (arg) { 685 680 case ATOM_COND_ABOVE: ··· 709 700 if (arg != ATOM_COND_ALWAYS) 710 701 SDEBUG(" taken: %s\n", execute ? 
"yes" : "no"); 711 702 SDEBUG(" target: 0x%04X\n", target); 712 - if (execute) 703 + if (execute) { 704 + if (ctx->last_jump == (ctx->start + target)) { 705 + cjiffies = jiffies; 706 + if (time_after(cjiffies, ctx->last_jump_jiffies)) { 707 + cjiffies -= ctx->last_jump_jiffies; 708 + if ((jiffies_to_msecs(cjiffies) > 1000)) { 709 + DRM_ERROR("atombios stuck in loop for more than 1sec aborting\n"); 710 + ctx->abort = true; 711 + } 712 + } else { 713 + /* jiffies wrap around we will just wait a little longer */ 714 + ctx->last_jump_jiffies = jiffies; 715 + } 716 + } else { 717 + ctx->last_jump = ctx->start + target; 718 + ctx->last_jump_jiffies = jiffies; 719 + } 713 720 *ptr = ctx->start + target; 721 + } 714 722 } 715 723 716 724 static void atom_op_mask(atom_exec_context *ctx, int *ptr, int arg) ··· 1130 1104 atom_op_shr, ATOM_ARG_MC}, { 1131 1105 atom_op_debug, 0},}; 1132 1106 1133 - static void atom_execute_table_locked(struct atom_context *ctx, int index, uint32_t * params) 1107 + static int atom_execute_table_locked(struct atom_context *ctx, int index, uint32_t * params) 1134 1108 { 1135 1109 int base = CU16(ctx->cmd_table + 4 + 2 * index); 1136 1110 int len, ws, ps, ptr; ··· 1138 1112 atom_exec_context ectx; 1139 1113 1140 1114 if (!base) 1141 - return; 1115 + return -EINVAL; 1142 1116 1143 1117 len = CU16(base + ATOM_CT_SIZE_PTR); 1144 1118 ws = CU8(base + ATOM_CT_WS_PTR); ··· 1151 1125 ectx.ps_shift = ps / 4; 1152 1126 ectx.start = base; 1153 1127 ectx.ps = params; 1128 + ectx.abort = false; 1129 + ectx.last_jump = 0; 1154 1130 if (ws) 1155 1131 ectx.ws = kzalloc(4 * ws, GFP_KERNEL); 1156 1132 else ··· 1165 1137 SDEBUG("%s @ 0x%04X\n", atom_op_names[op], ptr - 1); 1166 1138 else 1167 1139 SDEBUG("[%d] @ 0x%04X\n", op, ptr - 1); 1140 + if (ectx.abort) { 1141 + DRM_ERROR("atombios stuck executing %04X (len %d, WS %d, PS %d) @ 0x%04X\n", 1142 + base, len, ws, ps, ptr - 1); 1143 + return -EINVAL; 1144 + } 1168 1145 1169 1146 if (op < ATOM_OP_CNT && op > 0) 1170 
1147 opcode_table[op].func(&ectx, &ptr, ··· 1185 1152 1186 1153 if (ws) 1187 1154 kfree(ectx.ws); 1155 + return 0; 1188 1156 } 1189 1157 1190 - void atom_execute_table(struct atom_context *ctx, int index, uint32_t * params) 1158 + int atom_execute_table(struct atom_context *ctx, int index, uint32_t * params) 1191 1159 { 1160 + int r; 1161 + 1192 1162 mutex_lock(&ctx->mutex); 1193 1163 /* reset reg block */ 1194 1164 ctx->reg_block = 0; ··· 1199 1163 ctx->fb_base = 0; 1200 1164 /* reset io mode */ 1201 1165 ctx->io_mode = ATOM_IO_MM; 1202 - atom_execute_table_locked(ctx, index, params); 1166 + r = atom_execute_table_locked(ctx, index, params); 1203 1167 mutex_unlock(&ctx->mutex); 1168 + return r; 1204 1169 } 1205 1170 1206 1171 static int atom_iio_len[] = { 1, 2, 3, 3, 3, 3, 4, 4, 4, 3 }; ··· 1285 1248 1286 1249 if (!CU16(ctx->cmd_table + 4 + 2 * ATOM_CMD_INIT)) 1287 1250 return 1; 1288 - atom_execute_table(ctx, ATOM_CMD_INIT, ps); 1289 - 1290 - return 0; 1251 + return atom_execute_table(ctx, ATOM_CMD_INIT, ps); 1291 1252 } 1292 1253 1293 1254 void atom_destroy(struct atom_context *ctx) ··· 1295 1260 kfree(ctx); 1296 1261 } 1297 1262 1298 - void atom_parse_data_header(struct atom_context *ctx, int index, 1263 + bool atom_parse_data_header(struct atom_context *ctx, int index, 1299 1264 uint16_t * size, uint8_t * frev, uint8_t * crev, 1300 1265 uint16_t * data_start) 1301 1266 { 1302 1267 int offset = index * 2 + 4; 1303 1268 int idx = CU16(ctx->data_table + offset); 1269 + u16 *mdt = (u16 *)(ctx->bios + ctx->data_table + 4); 1270 + 1271 + if (!mdt[index]) 1272 + return false; 1304 1273 1305 1274 if (size) 1306 1275 *size = CU16(idx); ··· 1313 1274 if (crev) 1314 1275 *crev = CU8(idx + 3); 1315 1276 *data_start = idx; 1316 - return; 1277 + return true; 1317 1278 } 1318 1279 1319 - void atom_parse_cmd_header(struct atom_context *ctx, int index, uint8_t * frev, 1280 + bool atom_parse_cmd_header(struct atom_context *ctx, int index, uint8_t * frev, 1320 1281 uint8_t * 
crev) 1321 1282 { 1322 1283 int offset = index * 2 + 4; 1323 1284 int idx = CU16(ctx->cmd_table + offset); 1285 + u16 *mct = (u16 *)(ctx->bios + ctx->cmd_table + 4); 1286 + 1287 + if (!mct[index]) 1288 + return false; 1324 1289 1325 1290 if (frev) 1326 1291 *frev = CU8(idx + 2); 1327 1292 if (crev) 1328 1293 *crev = CU8(idx + 3); 1329 - return; 1294 + return true; 1330 1295 } 1331 1296 1332 1297 int atom_allocate_fb_scratch(struct atom_context *ctx) 1333 1298 { 1334 1299 int index = GetIndexIntoMasterTable(DATA, VRAM_UsageByFirmware); 1335 1300 uint16_t data_offset; 1336 - int usage_bytes; 1301 + int usage_bytes = 0; 1337 1302 struct _ATOM_VRAM_USAGE_BY_FIRMWARE *firmware_usage; 1338 1303 1339 - atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset); 1304 + if (atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) { 1305 + firmware_usage = (struct _ATOM_VRAM_USAGE_BY_FIRMWARE *)(ctx->bios + data_offset); 1340 1306 1341 - firmware_usage = (struct _ATOM_VRAM_USAGE_BY_FIRMWARE *)(ctx->bios + data_offset); 1307 + DRM_DEBUG("atom firmware requested %08x %dkb\n", 1308 + firmware_usage->asFirmwareVramReserveInfo[0].ulStartAddrUsedByFirmware, 1309 + firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb); 1342 1310 1343 - DRM_DEBUG("atom firmware requested %08x %dkb\n", 1344 - firmware_usage->asFirmwareVramReserveInfo[0].ulStartAddrUsedByFirmware, 1345 - firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb); 1346 - 1347 - usage_bytes = firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb * 1024; 1311 + usage_bytes = firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb * 1024; 1312 + } 1348 1313 if (usage_bytes == 0) 1349 1314 usage_bytes = 20 * 1024; 1350 1315 /* allocate some scratch memory */
+5 -3
drivers/gpu/drm/radeon/atom.h
··· 140 140 extern int atom_debug; 141 141 142 142 struct atom_context *atom_parse(struct card_info *, void *); 143 - void atom_execute_table(struct atom_context *, int, uint32_t *); 143 + int atom_execute_table(struct atom_context *, int, uint32_t *); 144 144 int atom_asic_init(struct atom_context *); 145 145 void atom_destroy(struct atom_context *); 146 - void atom_parse_data_header(struct atom_context *ctx, int index, uint16_t *size, uint8_t *frev, uint8_t *crev, uint16_t *data_start); 147 - void atom_parse_cmd_header(struct atom_context *ctx, int index, uint8_t *frev, uint8_t *crev); 146 + bool atom_parse_data_header(struct atom_context *ctx, int index, uint16_t *size, 147 + uint8_t *frev, uint8_t *crev, uint16_t *data_start); 148 + bool atom_parse_cmd_header(struct atom_context *ctx, int index, 149 + uint8_t *frev, uint8_t *crev); 148 150 int atom_allocate_fb_scratch(struct atom_context *ctx); 149 151 #include "atom-types.h" 150 152 #include "atombios.h"
+73 -25
drivers/gpu/drm/radeon/atombios_crtc.c
··· 353 353 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 354 354 } 355 355 356 + static void atombios_disable_ss(struct drm_crtc *crtc) 357 + { 358 + struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 359 + struct drm_device *dev = crtc->dev; 360 + struct radeon_device *rdev = dev->dev_private; 361 + u32 ss_cntl; 362 + 363 + if (ASIC_IS_DCE4(rdev)) { 364 + switch (radeon_crtc->pll_id) { 365 + case ATOM_PPLL1: 366 + ss_cntl = RREG32(EVERGREEN_P1PLL_SS_CNTL); 367 + ss_cntl &= ~EVERGREEN_PxPLL_SS_EN; 368 + WREG32(EVERGREEN_P1PLL_SS_CNTL, ss_cntl); 369 + break; 370 + case ATOM_PPLL2: 371 + ss_cntl = RREG32(EVERGREEN_P2PLL_SS_CNTL); 372 + ss_cntl &= ~EVERGREEN_PxPLL_SS_EN; 373 + WREG32(EVERGREEN_P2PLL_SS_CNTL, ss_cntl); 374 + break; 375 + case ATOM_DCPLL: 376 + case ATOM_PPLL_INVALID: 377 + return; 378 + } 379 + } else if (ASIC_IS_AVIVO(rdev)) { 380 + switch (radeon_crtc->pll_id) { 381 + case ATOM_PPLL1: 382 + ss_cntl = RREG32(AVIVO_P1PLL_INT_SS_CNTL); 383 + ss_cntl &= ~1; 384 + WREG32(AVIVO_P1PLL_INT_SS_CNTL, ss_cntl); 385 + break; 386 + case ATOM_PPLL2: 387 + ss_cntl = RREG32(AVIVO_P2PLL_INT_SS_CNTL); 388 + ss_cntl &= ~1; 389 + WREG32(AVIVO_P2PLL_INT_SS_CNTL, ss_cntl); 390 + break; 391 + case ATOM_DCPLL: 392 + case ATOM_PPLL_INVALID: 393 + return; 394 + } 395 + } 396 + } 397 + 398 + 356 399 union atom_enable_ss { 357 400 ENABLE_LVDS_SS_PARAMETERS legacy; 358 401 ENABLE_SPREAD_SPECTRUM_ON_PPLL_PS_ALLOCATION v1; 359 402 }; 360 403 361 - static void atombios_set_ss(struct drm_crtc *crtc, int enable) 404 + static void atombios_enable_ss(struct drm_crtc *crtc) 362 405 { 363 406 struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 364 407 struct drm_device *dev = crtc->dev; ··· 430 387 step = dig->ss->step; 431 388 delay = dig->ss->delay; 432 389 range = dig->ss->range; 433 - } else if (enable) 390 + } else 434 391 return; 435 - } else if (enable) 392 + } else 436 393 return; 437 394 break; 438 395 } ··· 449 406 
args.v1.ucSpreadSpectrumDelay = delay; 450 407 args.v1.ucSpreadSpectrumRange = range; 451 408 args.v1.ucPpll = radeon_crtc->crtc_id ? ATOM_PPLL2 : ATOM_PPLL1; 452 - args.v1.ucEnable = enable; 409 + args.v1.ucEnable = ATOM_ENABLE; 453 410 } else { 454 411 args.legacy.usSpreadSpectrumPercentage = cpu_to_le16(percentage); 455 412 args.legacy.ucSpreadSpectrumType = type; 456 413 args.legacy.ucSpreadSpectrumStepSize_Delay = (step & 3) << 2; 457 414 args.legacy.ucSpreadSpectrumStepSize_Delay |= (delay & 7) << 4; 458 - args.legacy.ucEnable = enable; 415 + args.legacy.ucEnable = ATOM_ENABLE; 459 416 } 460 417 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 461 418 } ··· 521 478 /* DVO wants 2x pixel clock if the DVO chip is in 12 bit mode */ 522 479 if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1) 523 480 adjusted_clock = mode->clock * 2; 524 - /* LVDS PLL quirks */ 525 - if (encoder->encoder_type == DRM_MODE_ENCODER_LVDS) { 526 - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 527 - pll->algo = dig->pll_algo; 528 - } 529 481 } else { 530 482 if (encoder->encoder_type != DRM_MODE_ENCODER_DAC) 531 483 pll->flags |= RADEON_PLL_NO_ODD_POST_DIV; ··· 541 503 int index; 542 504 543 505 index = GetIndexIntoMasterTable(COMMAND, AdjustDisplayPll); 544 - atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 545 - &crev); 506 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 507 + &crev)) 508 + return adjusted_clock; 546 509 547 510 memset(&args, 0, sizeof(args)); 548 511 ··· 581 542 } 582 543 } else if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) { 583 544 /* may want to enable SS on DP/eDP eventually */ 584 - args.v3.sInput.ucDispPllConfig |= 585 - DISPPLL_CONFIG_SS_ENABLE; 586 - if (mode->clock > 165000) 545 + /*args.v3.sInput.ucDispPllConfig |= 546 + DISPPLL_CONFIG_SS_ENABLE;*/ 547 + if (encoder_mode == ATOM_ENCODER_MODE_DP) 587 548 args.v3.sInput.ucDispPllConfig |= 
588 - DISPPLL_CONFIG_DUAL_LINK; 549 + DISPPLL_CONFIG_COHERENT_MODE; 550 + else { 551 + if (mode->clock > 165000) 552 + args.v3.sInput.ucDispPllConfig |= 553 + DISPPLL_CONFIG_DUAL_LINK; 554 + } 589 555 } 590 556 atom_execute_table(rdev->mode_info.atom_context, 591 557 index, (uint32_t *)&args); ··· 636 592 memset(&args, 0, sizeof(args)); 637 593 638 594 index = GetIndexIntoMasterTable(COMMAND, SetPixelClock); 639 - atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 640 - &crev); 595 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 596 + &crev)) 597 + return; 641 598 642 599 switch (frev) { 643 600 case 1: ··· 712 667 &ref_div, &post_div); 713 668 714 669 index = GetIndexIntoMasterTable(COMMAND, SetPixelClock); 715 - atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 716 - &crev); 670 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, 671 + &crev)) 672 + return; 717 673 718 674 switch (frev) { 719 675 case 1: ··· 1129 1083 1130 1084 /* TODO color tiling */ 1131 1085 1132 - /* pick pll */ 1133 - radeon_crtc->pll_id = radeon_atom_pick_pll(crtc); 1134 - 1135 - atombios_set_ss(crtc, 0); 1086 + atombios_disable_ss(crtc); 1136 1087 /* always set DCPLL */ 1137 1088 if (ASIC_IS_DCE4(rdev)) 1138 1089 atombios_crtc_set_dcpll(crtc); 1139 1090 atombios_crtc_set_pll(crtc, adjusted_mode); 1140 - atombios_set_ss(crtc, 1); 1091 + atombios_enable_ss(crtc); 1141 1092 1142 1093 if (ASIC_IS_DCE4(rdev)) 1143 1094 atombios_set_crtc_dtd_timing(crtc, adjusted_mode); ··· 1163 1120 1164 1121 static void atombios_crtc_prepare(struct drm_crtc *crtc) 1165 1122 { 1123 + struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc); 1124 + 1125 + /* pick pll */ 1126 + radeon_crtc->pll_id = radeon_atom_pick_pll(crtc); 1127 + 1166 1128 atombios_lock_crtc(crtc, ATOM_ENABLE); 1167 1129 atombios_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 1168 1130 }
+3 -3
drivers/gpu/drm/radeon/atombios_dp.c
··· 745 745 >> DP_TRAIN_PRE_EMPHASIS_SHIFT); 746 746 747 747 /* disable the training pattern on the sink */ 748 + dp_set_training(radeon_connector, DP_TRAINING_PATTERN_DISABLE); 749 + 750 + /* disable the training pattern on the source */ 748 751 if (ASIC_IS_DCE4(rdev)) 749 752 atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_LINK_TRAINING_COMPLETE); 750 753 else 751 754 radeon_dp_encoder_service(rdev, ATOM_DP_ACTION_TRAINING_COMPLETE, 752 755 dig_connector->dp_clock, enc_id, 0); 753 - 754 - radeon_dp_encoder_service(rdev, ATOM_DP_ACTION_TRAINING_COMPLETE, 755 - dig_connector->dp_clock, enc_id, 0); 756 756 } 757 757 758 758 int radeon_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
+4 -7
drivers/gpu/drm/radeon/evergreen.c
··· 25 25 #include <linux/platform_device.h> 26 26 #include "drmP.h" 27 27 #include "radeon.h" 28 + #include "radeon_asic.h" 28 29 #include "radeon_drm.h" 29 30 #include "rv770d.h" 30 31 #include "atom.h" ··· 437 436 438 437 int evergreen_mc_init(struct radeon_device *rdev) 439 438 { 440 - fixed20_12 a; 441 439 u32 tmp; 442 440 int chansize, numchan; 443 441 ··· 481 481 rdev->mc.real_vram_size = rdev->mc.aper_size; 482 482 } 483 483 r600_vram_gtt_location(rdev, &rdev->mc); 484 - /* FIXME: we should enforce default clock in case GPU is not in 485 - * default setup 486 - */ 487 - a.full = rfixed_const(100); 488 - rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk); 489 - rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 484 + radeon_update_bandwidth_info(rdev); 485 + 490 486 return 0; 491 487 } 492 488 ··· 742 746 743 747 void evergreen_fini(struct radeon_device *rdev) 744 748 { 749 + radeon_pm_fini(rdev); 745 750 evergreen_suspend(rdev); 746 751 #if 0 747 752 r600_blit_fini(rdev);
+19 -6
drivers/gpu/drm/radeon/r100.c
··· 31 31 #include "radeon_drm.h" 32 32 #include "radeon_reg.h" 33 33 #include "radeon.h" 34 + #include "radeon_asic.h" 34 35 #include "r100d.h" 35 36 #include "rs100d.h" 36 37 #include "rv200d.h" ··· 236 235 237 236 void r100_pci_gart_fini(struct radeon_device *rdev) 238 237 { 238 + radeon_gart_fini(rdev); 239 239 r100_pci_gart_disable(rdev); 240 240 radeon_gart_table_ram_free(rdev); 241 - radeon_gart_fini(rdev); 242 241 } 243 242 244 243 int r100_irq_set(struct radeon_device *rdev) ··· 313 312 /* Vertical blank interrupts */ 314 313 if (status & RADEON_CRTC_VBLANK_STAT) { 315 314 drm_handle_vblank(rdev->ddev, 0); 315 + rdev->pm.vblank_sync = true; 316 316 wake_up(&rdev->irq.vblank_queue); 317 317 } 318 318 if (status & RADEON_CRTC2_VBLANK_STAT) { 319 319 drm_handle_vblank(rdev->ddev, 1); 320 + rdev->pm.vblank_sync = true; 320 321 wake_up(&rdev->irq.vblank_queue); 321 322 } 322 323 if (status & RADEON_FP_DETECT_STAT) { ··· 744 741 udelay(10); 745 742 rdev->cp.rptr = RREG32(RADEON_CP_RB_RPTR); 746 743 rdev->cp.wptr = RREG32(RADEON_CP_RB_WPTR); 744 + /* protect against crazy HW on resume */ 745 + rdev->cp.wptr &= rdev->cp.ptr_mask; 747 746 /* Set cp mode to bus mastering & enable cp*/ 748 747 WREG32(RADEON_CP_CSQ_MODE, 749 748 REG_SET(RADEON_INDIRECT2_START, indirect2_start) | ··· 1809 1804 { 1810 1805 struct drm_device *dev = rdev->ddev; 1811 1806 bool force_dac2 = false; 1807 + u32 tmp; 1812 1808 1813 1809 /* set these so they don't interfere with anything */ 1814 1810 WREG32(RADEON_OV0_SCALE_CNTL, 0); ··· 1881 1875 WREG32(RADEON_DISP_HW_DEBUG, disp_hw_debug); 1882 1876 WREG32(RADEON_DAC_CNTL2, dac2_cntl); 1883 1877 } 1878 + 1879 + /* switch PM block to ACPI mode */ 1880 + tmp = RREG32_PLL(RADEON_PLL_PWRMGT_CNTL); 1881 + tmp &= ~RADEON_PM_MODE_SEL; 1882 + WREG32_PLL(RADEON_PLL_PWRMGT_CNTL, tmp); 1883 + 1884 1884 } 1885 1885 1886 1886 /* ··· 2034 2022 radeon_vram_location(rdev, &rdev->mc, base); 2035 2023 if (!(rdev->flags & RADEON_IS_AGP)) 2036 2024 
radeon_gtt_location(rdev, &rdev->mc); 2025 + radeon_update_bandwidth_info(rdev); 2037 2026 } 2038 2027 2039 2028 ··· 2398 2385 uint32_t pixel_bytes1 = 0; 2399 2386 uint32_t pixel_bytes2 = 0; 2400 2387 2388 + radeon_update_display_priority(rdev); 2389 + 2401 2390 if (rdev->mode_info.crtcs[0]->base.enabled) { 2402 2391 mode1 = &rdev->mode_info.crtcs[0]->base.mode; 2403 2392 pixel_bytes1 = rdev->mode_info.crtcs[0]->base.fb->bits_per_pixel / 8; ··· 2428 2413 /* 2429 2414 * determine is there is enough bw for current mode 2430 2415 */ 2431 - mclk_ff.full = rfixed_const(rdev->clock.default_mclk); 2432 - temp_ff.full = rfixed_const(100); 2433 - mclk_ff.full = rfixed_div(mclk_ff, temp_ff); 2434 - sclk_ff.full = rfixed_const(rdev->clock.default_sclk); 2435 - sclk_ff.full = rfixed_div(sclk_ff, temp_ff); 2416 + sclk_ff = rdev->pm.sclk; 2417 + mclk_ff = rdev->pm.mclk; 2436 2418 2437 2419 temp = (rdev->mc.vram_width / 8) * (rdev->mc.vram_is_ddr ? 2 : 1); 2438 2420 temp_ff.full = rfixed_const(temp); ··· 3452 3440 3453 3441 void r100_fini(struct radeon_device *rdev) 3454 3442 { 3443 + radeon_pm_fini(rdev); 3455 3444 r100_cp_fini(rdev); 3456 3445 r100_wb_fini(rdev); 3457 3446 r100_ib_fini(rdev);
+1
drivers/gpu/drm/radeon/r200.c
··· 30 30 #include "radeon_drm.h" 31 31 #include "radeon_reg.h" 32 32 #include "radeon.h" 33 + #include "radeon_asic.h" 33 34 34 35 #include "r100d.h" 35 36 #include "r200_reg_safe.h"
+4 -1
drivers/gpu/drm/radeon/r300.c
··· 30 30 #include "drm.h" 31 31 #include "radeon_reg.h" 32 32 #include "radeon.h" 33 + #include "radeon_asic.h" 33 34 #include "radeon_drm.h" 34 35 #include "r100_track.h" 35 36 #include "r300d.h" ··· 165 164 166 165 void rv370_pcie_gart_fini(struct radeon_device *rdev) 167 166 { 167 + radeon_gart_fini(rdev); 168 168 rv370_pcie_gart_disable(rdev); 169 169 radeon_gart_table_vram_free(rdev); 170 - radeon_gart_fini(rdev); 171 170 } 172 171 173 172 void r300_fence_ring_emit(struct radeon_device *rdev, ··· 482 481 radeon_vram_location(rdev, &rdev->mc, base); 483 482 if (!(rdev->flags & RADEON_IS_AGP)) 484 483 radeon_gtt_location(rdev, &rdev->mc); 484 + radeon_update_bandwidth_info(rdev); 485 485 } 486 486 487 487 void rv370_set_pcie_lanes(struct radeon_device *rdev, int lanes) ··· 1336 1334 1337 1335 void r300_fini(struct radeon_device *rdev) 1338 1336 { 1337 + radeon_pm_fini(rdev); 1339 1338 r100_cp_fini(rdev); 1340 1339 r100_wb_fini(rdev); 1341 1340 r100_ib_fini(rdev);
+2
drivers/gpu/drm/radeon/r420.c
··· 29 29 #include "drmP.h" 30 30 #include "radeon_reg.h" 31 31 #include "radeon.h" 32 + #include "radeon_asic.h" 32 33 #include "atom.h" 33 34 #include "r100d.h" 34 35 #include "r420d.h" ··· 267 266 268 267 void r420_fini(struct radeon_device *rdev) 269 268 { 269 + radeon_pm_fini(rdev); 270 270 r100_cp_fini(rdev); 271 271 r100_wb_fini(rdev); 272 272 r100_ib_fini(rdev);
+2 -7
drivers/gpu/drm/radeon/r520.c
··· 27 27 */ 28 28 #include "drmP.h" 29 29 #include "radeon.h" 30 + #include "radeon_asic.h" 30 31 #include "atom.h" 31 32 #include "r520d.h" 32 33 ··· 122 121 123 122 void r520_mc_init(struct radeon_device *rdev) 124 123 { 125 - fixed20_12 a; 126 124 127 125 r520_vram_get_type(rdev); 128 126 r100_vram_init_sizes(rdev); 129 127 radeon_vram_location(rdev, &rdev->mc, 0); 130 128 if (!(rdev->flags & RADEON_IS_AGP)) 131 129 radeon_gtt_location(rdev, &rdev->mc); 132 - /* FIXME: we should enforce default clock in case GPU is not in 133 - * default setup 134 - */ 135 - a.full = rfixed_const(100); 136 - rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk); 137 - rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 130 + radeon_update_bandwidth_info(rdev); 138 131 } 139 132 140 133 void r520_mc_program(struct radeon_device *rdev)
+15 -15
drivers/gpu/drm/radeon/r600.c
··· 31 31 #include "drmP.h" 32 32 #include "radeon_drm.h" 33 33 #include "radeon.h" 34 + #include "radeon_asic.h" 34 35 #include "radeon_mode.h" 35 36 #include "r600d.h" 36 37 #include "atom.h" ··· 492 491 493 492 void r600_pcie_gart_fini(struct radeon_device *rdev) 494 493 { 494 + radeon_gart_fini(rdev); 495 495 r600_pcie_gart_disable(rdev); 496 496 radeon_gart_table_vram_free(rdev); 497 - radeon_gart_fini(rdev); 498 497 } 499 498 500 499 void r600_agp_enable(struct radeon_device *rdev) ··· 676 675 677 676 int r600_mc_init(struct radeon_device *rdev) 678 677 { 679 - fixed20_12 a; 680 678 u32 tmp; 681 679 int chansize, numchan; 682 680 ··· 719 719 rdev->mc.real_vram_size = rdev->mc.aper_size; 720 720 } 721 721 r600_vram_gtt_location(rdev, &rdev->mc); 722 - /* FIXME: we should enforce default clock in case GPU is not in 723 - * default setup 724 - */ 725 - a.full = rfixed_const(100); 726 - rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk); 727 - rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 722 + 728 723 if (rdev->flags & RADEON_IS_IGP) 729 724 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 725 + radeon_update_bandwidth_info(rdev); 730 726 return 0; 731 727 } 732 728 ··· 1128 1132 /* Setup pipes */ 1129 1133 WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable); 1130 1134 WREG32(CC_GC_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config); 1135 + WREG32(GC_USER_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config); 1131 1136 1132 1137 tmp = R6XX_MAX_PIPES - r600_count_pipe_bits((cc_gc_shader_pipe_config & INACTIVE_QD_PIPES_MASK) >> 8); 1133 1138 WREG32(VGT_OUT_DEALLOC_CNTL, (tmp * 4) & DEALLOC_DIST_MASK); ··· 2116 2119 2117 2120 void r600_fini(struct radeon_device *rdev) 2118 2121 { 2122 + radeon_pm_fini(rdev); 2119 2123 r600_audio_fini(rdev); 2120 2124 r600_blit_fini(rdev); 2121 2125 r600_cp_fini(rdev); ··· 2396 2398 WREG32(DC_HPD4_INT_CONTROL, tmp); 2397 2399 if (ASIC_IS_DCE32(rdev)) { 2398 2400 tmp = RREG32(DC_HPD5_INT_CONTROL) & 
DC_HPDx_INT_POLARITY; 2399 - WREG32(DC_HPD5_INT_CONTROL, 0); 2401 + WREG32(DC_HPD5_INT_CONTROL, tmp); 2400 2402 tmp = RREG32(DC_HPD6_INT_CONTROL) & DC_HPDx_INT_POLARITY; 2401 - WREG32(DC_HPD6_INT_CONTROL, 0); 2403 + WREG32(DC_HPD6_INT_CONTROL, tmp); 2402 2404 } 2403 2405 } else { 2404 2406 WREG32(DACA_AUTODETECT_INT_CONTROL, 0); 2405 2407 WREG32(DACB_AUTODETECT_INT_CONTROL, 0); 2406 2408 tmp = RREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL) & DC_HOT_PLUG_DETECTx_INT_POLARITY; 2407 - WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, 0); 2409 + WREG32(DC_HOT_PLUG_DETECT1_INT_CONTROL, tmp); 2408 2410 tmp = RREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL) & DC_HOT_PLUG_DETECTx_INT_POLARITY; 2409 - WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, 0); 2411 + WREG32(DC_HOT_PLUG_DETECT2_INT_CONTROL, tmp); 2410 2412 tmp = RREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL) & DC_HOT_PLUG_DETECTx_INT_POLARITY; 2411 - WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, 0); 2413 + WREG32(DC_HOT_PLUG_DETECT3_INT_CONTROL, tmp); 2412 2414 } 2413 2415 } 2414 2416 ··· 2763 2765 case 0: /* D1 vblank */ 2764 2766 if (disp_int & LB_D1_VBLANK_INTERRUPT) { 2765 2767 drm_handle_vblank(rdev->ddev, 0); 2768 + rdev->pm.vblank_sync = true; 2766 2769 wake_up(&rdev->irq.vblank_queue); 2767 2770 disp_int &= ~LB_D1_VBLANK_INTERRUPT; 2768 2771 DRM_DEBUG("IH: D1 vblank\n"); ··· 2785 2786 case 0: /* D2 vblank */ 2786 2787 if (disp_int & LB_D2_VBLANK_INTERRUPT) { 2787 2788 drm_handle_vblank(rdev->ddev, 1); 2789 + rdev->pm.vblank_sync = true; 2788 2790 wake_up(&rdev->irq.vblank_queue); 2789 2791 disp_int &= ~LB_D2_VBLANK_INTERRUPT; 2790 2792 DRM_DEBUG("IH: D2 vblank\n"); ··· 2834 2834 break; 2835 2835 case 10: 2836 2836 if (disp_int_cont2 & DC_HPD5_INTERRUPT) { 2837 - disp_int_cont &= ~DC_HPD5_INTERRUPT; 2837 + disp_int_cont2 &= ~DC_HPD5_INTERRUPT; 2838 2838 queue_hotplug = true; 2839 2839 DRM_DEBUG("IH: HPD5\n"); 2840 2840 } 2841 2841 break; 2842 2842 case 12: 2843 2843 if (disp_int_cont2 & DC_HPD6_INTERRUPT) { 2844 - disp_int_cont &= ~DC_HPD6_INTERRUPT; 2844 
+ disp_int_cont2 &= ~DC_HPD6_INTERRUPT; 2845 2845 queue_hotplug = true; 2846 2846 DRM_DEBUG("IH: HPD6\n"); 2847 2847 }
+10 -42
drivers/gpu/drm/radeon/r600_audio.c
··· 182 182 } 183 183 184 184 /* 185 - * determin how the encoders and audio interface is wired together 186 - */ 187 - int r600_audio_tmds_index(struct drm_encoder *encoder) 188 - { 189 - struct drm_device *dev = encoder->dev; 190 - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 191 - struct drm_encoder *other; 192 - 193 - switch (radeon_encoder->encoder_id) { 194 - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: 195 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 196 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 197 - return 0; 198 - 199 - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: 200 - /* special case check if an TMDS1 is present */ 201 - list_for_each_entry(other, &dev->mode_config.encoder_list, head) { 202 - if (to_radeon_encoder(other)->encoder_id == 203 - ENCODER_OBJECT_ID_INTERNAL_TMDS1) 204 - return 1; 205 - } 206 - return 0; 207 - 208 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 209 - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 210 - return 1; 211 - 212 - default: 213 - DRM_ERROR("Unsupported encoder type 0x%02X\n", 214 - radeon_encoder->encoder_id); 215 - return -1; 216 - } 217 - } 218 - 219 - /* 220 185 * atach the audio codec to the clock source of the encoder 221 186 */ 222 187 void r600_audio_set_clock(struct drm_encoder *encoder, int clock) ··· 189 224 struct drm_device *dev = encoder->dev; 190 225 struct radeon_device *rdev = dev->dev_private; 191 226 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 227 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 192 228 int base_rate = 48000; 193 229 194 230 switch (radeon_encoder->encoder_id) { ··· 197 231 case ENCODER_OBJECT_ID_INTERNAL_LVTM1: 198 232 WREG32_P(R600_AUDIO_TIMING, 0, ~0x301); 199 233 break; 200 - 201 234 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 202 235 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 203 236 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 204 237 case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 205 238 WREG32_P(R600_AUDIO_TIMING, 0x100, ~0x301); 206 
239 break; 207 - 208 240 default: 209 241 DRM_ERROR("Unsupported encoder type 0x%02X\n", 210 242 radeon_encoder->encoder_id); 211 243 return; 212 244 } 213 245 214 - switch (r600_audio_tmds_index(encoder)) { 246 + switch (dig->dig_encoder) { 215 247 case 0: 216 - WREG32(R600_AUDIO_PLL1_MUL, base_rate*50); 217 - WREG32(R600_AUDIO_PLL1_DIV, clock*100); 248 + WREG32(R600_AUDIO_PLL1_MUL, base_rate * 50); 249 + WREG32(R600_AUDIO_PLL1_DIV, clock * 100); 218 250 WREG32(R600_AUDIO_CLK_SRCSEL, 0); 219 251 break; 220 252 221 253 case 1: 222 - WREG32(R600_AUDIO_PLL2_MUL, base_rate*50); 223 - WREG32(R600_AUDIO_PLL2_DIV, clock*100); 254 + WREG32(R600_AUDIO_PLL2_MUL, base_rate * 50); 255 + WREG32(R600_AUDIO_PLL2_DIV, clock * 100); 224 256 WREG32(R600_AUDIO_CLK_SRCSEL, 1); 225 257 break; 258 + default: 259 + dev_err(rdev->dev, "Unsupported DIG on encoder 0x%02X\n", 260 + radeon_encoder->encoder_id); 261 + return; 226 262 } 227 263 } 228 264
+35
drivers/gpu/drm/radeon/r600_blit_shaders.c
··· 1 + /* 2 + * Copyright 2009 Advanced Micro Devices, Inc. 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice (including the next 12 + * paragraph) shall be included in all copies or substantial portions of the 13 + * Software. 14 + * 15 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 18 + * THE COPYRIGHT HOLDER(S) AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR 19 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 20 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 21 + * DEALINGS IN THE SOFTWARE. 22 + * 23 + * Authors: 24 + * Alex Deucher <alexander.deucher@amd.com> 25 + */ 1 26 2 27 #include <linux/types.h> 3 28 #include <linux/kernel.h> 29 + 30 + /* 31 + * R6xx+ cards need to use the 3D engine to blit data which requires 32 + * quite a bit of hw state setup. Rather than pull the whole 3D driver 33 + * (which normally generates the 3D state) into the DRM, we opt to use 34 + * statically generated state tables. The register state and shaders 35 + * were hand generated to support blitting functionality. See the 3D 36 + * driver or documentation for descriptions of the registers and 37 + * shader instructions. 38 + */ 4 39 5 40 const u32 r6xx_default_state[] = 6 41 {
+3
drivers/gpu/drm/radeon/r600_cp.c
··· 1548 1548 1549 1549 RADEON_WRITE(R600_CC_RB_BACKEND_DISABLE, cc_rb_backend_disable); 1550 1550 RADEON_WRITE(R600_CC_GC_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config); 1551 + RADEON_WRITE(R600_GC_USER_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config); 1551 1552 1552 1553 RADEON_WRITE(R700_CC_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable); 1553 1554 RADEON_WRITE(R700_CGTS_SYS_TCC_DISABLE, 0); 1554 1555 RADEON_WRITE(R700_CGTS_TCC_DISABLE, 0); 1556 + RADEON_WRITE(R700_CGTS_USER_SYS_TCC_DISABLE, 0); 1557 + RADEON_WRITE(R700_CGTS_USER_TCC_DISABLE, 0); 1555 1558 1556 1559 num_qd_pipes = 1557 1560 R7XX_MAX_PIPES - r600_count_pipe_bits((cc_gc_shader_pipe_config & R600_INACTIVE_QD_PIPES_MASK) >> 8);
+63 -7
drivers/gpu/drm/radeon/r600_cs.c
··· 45 45 u32 nbanks; 46 46 u32 npipes; 47 47 /* value we track */ 48 + u32 sq_config; 48 49 u32 nsamples; 49 50 u32 cb_color_base_last[8]; 50 51 struct radeon_bo *cb_color_bo[8]; ··· 142 141 { 143 142 int i; 144 143 144 + /* assume DX9 mode */ 145 + track->sq_config = DX9_CONSTS; 145 146 for (i = 0; i < 8; i++) { 146 147 track->cb_color_base_last[i] = 0; 147 148 track->cb_color_size[i] = 0; ··· 718 715 tmp =radeon_get_ib_value(p, idx); 719 716 ib[idx] = 0; 720 717 break; 718 + case SQ_CONFIG: 719 + track->sq_config = radeon_get_ib_value(p, idx); 720 + break; 721 721 case R_028800_DB_DEPTH_CONTROL: 722 722 track->db_depth_control = radeon_get_ib_value(p, idx); 723 723 break; ··· 875 869 case SQ_PGM_START_VS: 876 870 case SQ_PGM_START_GS: 877 871 case SQ_PGM_START_PS: 872 + case SQ_ALU_CONST_CACHE_GS_0: 873 + case SQ_ALU_CONST_CACHE_GS_1: 874 + case SQ_ALU_CONST_CACHE_GS_2: 875 + case SQ_ALU_CONST_CACHE_GS_3: 876 + case SQ_ALU_CONST_CACHE_GS_4: 877 + case SQ_ALU_CONST_CACHE_GS_5: 878 + case SQ_ALU_CONST_CACHE_GS_6: 879 + case SQ_ALU_CONST_CACHE_GS_7: 880 + case SQ_ALU_CONST_CACHE_GS_8: 881 + case SQ_ALU_CONST_CACHE_GS_9: 882 + case SQ_ALU_CONST_CACHE_GS_10: 883 + case SQ_ALU_CONST_CACHE_GS_11: 884 + case SQ_ALU_CONST_CACHE_GS_12: 885 + case SQ_ALU_CONST_CACHE_GS_13: 886 + case SQ_ALU_CONST_CACHE_GS_14: 887 + case SQ_ALU_CONST_CACHE_GS_15: 888 + case SQ_ALU_CONST_CACHE_PS_0: 889 + case SQ_ALU_CONST_CACHE_PS_1: 890 + case SQ_ALU_CONST_CACHE_PS_2: 891 + case SQ_ALU_CONST_CACHE_PS_3: 892 + case SQ_ALU_CONST_CACHE_PS_4: 893 + case SQ_ALU_CONST_CACHE_PS_5: 894 + case SQ_ALU_CONST_CACHE_PS_6: 895 + case SQ_ALU_CONST_CACHE_PS_7: 896 + case SQ_ALU_CONST_CACHE_PS_8: 897 + case SQ_ALU_CONST_CACHE_PS_9: 898 + case SQ_ALU_CONST_CACHE_PS_10: 899 + case SQ_ALU_CONST_CACHE_PS_11: 900 + case SQ_ALU_CONST_CACHE_PS_12: 901 + case SQ_ALU_CONST_CACHE_PS_13: 902 + case SQ_ALU_CONST_CACHE_PS_14: 903 + case SQ_ALU_CONST_CACHE_PS_15: 904 + case SQ_ALU_CONST_CACHE_VS_0: 905 + case 
SQ_ALU_CONST_CACHE_VS_1: 906 + case SQ_ALU_CONST_CACHE_VS_2: 907 + case SQ_ALU_CONST_CACHE_VS_3: 908 + case SQ_ALU_CONST_CACHE_VS_4: 909 + case SQ_ALU_CONST_CACHE_VS_5: 910 + case SQ_ALU_CONST_CACHE_VS_6: 911 + case SQ_ALU_CONST_CACHE_VS_7: 912 + case SQ_ALU_CONST_CACHE_VS_8: 913 + case SQ_ALU_CONST_CACHE_VS_9: 914 + case SQ_ALU_CONST_CACHE_VS_10: 915 + case SQ_ALU_CONST_CACHE_VS_11: 916 + case SQ_ALU_CONST_CACHE_VS_12: 917 + case SQ_ALU_CONST_CACHE_VS_13: 918 + case SQ_ALU_CONST_CACHE_VS_14: 919 + case SQ_ALU_CONST_CACHE_VS_15: 878 920 r = r600_cs_packet_next_reloc(p, &reloc); 879 921 if (r) { 880 922 dev_warn(p->dev, "bad SET_CONTEXT_REG " ··· 1280 1226 } 1281 1227 break; 1282 1228 case PACKET3_SET_ALU_CONST: 1283 - start_reg = (idx_value << 2) + PACKET3_SET_ALU_CONST_OFFSET; 1284 - end_reg = 4 * pkt->count + start_reg - 4; 1285 - if ((start_reg < PACKET3_SET_ALU_CONST_OFFSET) || 1286 - (start_reg >= PACKET3_SET_ALU_CONST_END) || 1287 - (end_reg >= PACKET3_SET_ALU_CONST_END)) { 1288 - DRM_ERROR("bad SET_ALU_CONST\n"); 1289 - return -EINVAL; 1229 + if (track->sq_config & DX9_CONSTS) { 1230 + start_reg = (idx_value << 2) + PACKET3_SET_ALU_CONST_OFFSET; 1231 + end_reg = 4 * pkt->count + start_reg - 4; 1232 + if ((start_reg < PACKET3_SET_ALU_CONST_OFFSET) || 1233 + (start_reg >= PACKET3_SET_ALU_CONST_END) || 1234 + (end_reg >= PACKET3_SET_ALU_CONST_END)) { 1235 + DRM_ERROR("bad SET_ALU_CONST\n"); 1236 + return -EINVAL; 1237 + } 1290 1238 } 1291 1239 break; 1292 1240 case PACKET3_SET_BOOL_CONST:
+127 -76
drivers/gpu/drm/radeon/r600_hdmi.c
··· 42 42 */ 43 43 enum r600_hdmi_iec_status_bits { 44 44 AUDIO_STATUS_DIG_ENABLE = 0x01, 45 - AUDIO_STATUS_V = 0x02, 46 - AUDIO_STATUS_VCFG = 0x04, 45 + AUDIO_STATUS_V = 0x02, 46 + AUDIO_STATUS_VCFG = 0x04, 47 47 AUDIO_STATUS_EMPHASIS = 0x08, 48 48 AUDIO_STATUS_COPYRIGHT = 0x10, 49 49 AUDIO_STATUS_NONAUDIO = 0x20, 50 50 AUDIO_STATUS_PROFESSIONAL = 0x40, 51 - AUDIO_STATUS_LEVEL = 0x80 51 + AUDIO_STATUS_LEVEL = 0x80 52 52 }; 53 53 54 54 struct { ··· 85 85 static void r600_hdmi_calc_CTS(uint32_t clock, int *CTS, int N, int freq) 86 86 { 87 87 if (*CTS == 0) 88 - *CTS = clock*N/(128*freq)*1000; 88 + *CTS = clock * N / (128 * freq) * 1000; 89 89 DRM_DEBUG("Using ACR timing N=%d CTS=%d for frequency %d\n", 90 90 N, *CTS, freq); 91 91 } ··· 131 131 uint8_t length, 132 132 uint8_t *frame) 133 133 { 134 - int i; 135 - frame[0] = packetType + versionNumber + length; 136 - for (i = 1; i <= length; i++) 137 - frame[0] += frame[i]; 138 - frame[0] = 0x100 - frame[0]; 134 + int i; 135 + frame[0] = packetType + versionNumber + length; 136 + for (i = 1; i <= length; i++) 137 + frame[0] += frame[i]; 138 + frame[0] = 0x100 - frame[0]; 139 139 } 140 140 141 141 /* ··· 417 417 WREG32_P(offset+R600_HDMI_CNTL, 0x04000000, ~0x04000000); 418 418 } 419 419 420 - /* 421 - * enable/disable the HDMI engine 422 - */ 423 - void r600_hdmi_enable(struct drm_encoder *encoder, int enable) 420 + static int r600_hdmi_find_free_block(struct drm_device *dev) 421 + { 422 + struct radeon_device *rdev = dev->dev_private; 423 + struct drm_encoder *encoder; 424 + struct radeon_encoder *radeon_encoder; 425 + bool free_blocks[3] = { true, true, true }; 426 + 427 + list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) { 428 + radeon_encoder = to_radeon_encoder(encoder); 429 + switch (radeon_encoder->hdmi_offset) { 430 + case R600_HDMI_BLOCK1: 431 + free_blocks[0] = false; 432 + break; 433 + case R600_HDMI_BLOCK2: 434 + free_blocks[1] = false; 435 + break; 436 + case R600_HDMI_BLOCK3: 437 + 
free_blocks[2] = false; 438 + break; 439 + } 440 + } 441 + 442 + if (rdev->family == CHIP_RS600 || rdev->family == CHIP_RS690) { 443 + return free_blocks[0] ? R600_HDMI_BLOCK1 : 0; 444 + } else if (rdev->family >= CHIP_R600) { 445 + if (free_blocks[0]) 446 + return R600_HDMI_BLOCK1; 447 + else if (free_blocks[1]) 448 + return R600_HDMI_BLOCK2; 449 + } 450 + return 0; 451 + } 452 + 453 + static void r600_hdmi_assign_block(struct drm_encoder *encoder) 424 454 { 425 455 struct drm_device *dev = encoder->dev; 426 456 struct radeon_device *rdev = dev->dev_private; 427 457 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 428 - uint32_t offset = to_radeon_encoder(encoder)->hdmi_offset; 458 + struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 429 459 430 - if (!offset) 460 + if (!dig) { 461 + dev_err(rdev->dev, "Enabling HDMI on non-dig encoder\n"); 431 462 return; 463 + } 432 464 433 - DRM_DEBUG("%s HDMI interface @ 0x%04X\n", enable ? "Enabling" : "Disabling", offset); 434 - 435 - /* some version of atombios ignore the enable HDMI flag 436 - * so enabling/disabling HDMI was moved here for TMDS1+2 */ 437 - switch (radeon_encoder->encoder_id) { 438 - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: 439 - WREG32_P(AVIVO_TMDSA_CNTL, enable ? 0x4 : 0x0, ~0x4); 440 - WREG32(offset+R600_HDMI_ENABLE, enable ? 0x101 : 0x0); 441 - break; 442 - 443 - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: 444 - WREG32_P(AVIVO_LVTMA_CNTL, enable ? 0x4 : 0x0, ~0x4); 445 - WREG32(offset+R600_HDMI_ENABLE, enable ? 0x105 : 0x0); 446 - break; 447 - 448 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 449 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 450 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 451 - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 452 - /* This part is doubtfull in my opinion */ 453 - WREG32(offset+R600_HDMI_ENABLE, enable ? 
0x110 : 0x0); 454 - break; 455 - 456 - default: 457 - DRM_ERROR("unknown HDMI output type\n"); 458 - break; 465 + if (ASIC_IS_DCE4(rdev)) { 466 + /* TODO */ 467 + } else if (ASIC_IS_DCE3(rdev)) { 468 + radeon_encoder->hdmi_offset = dig->dig_encoder ? 469 + R600_HDMI_BLOCK3 : R600_HDMI_BLOCK1; 470 + if (ASIC_IS_DCE32(rdev)) 471 + radeon_encoder->hdmi_config_offset = dig->dig_encoder ? 472 + R600_HDMI_CONFIG2 : R600_HDMI_CONFIG1; 473 + } else if (rdev->family >= CHIP_R600) { 474 + radeon_encoder->hdmi_offset = r600_hdmi_find_free_block(dev); 459 475 } 460 476 } 461 477 462 478 /* 463 - * determin at which register offset the HDMI encoder is 479 + * enable the HDMI engine 464 480 */ 465 - void r600_hdmi_init(struct drm_encoder *encoder) 481 + void r600_hdmi_enable(struct drm_encoder *encoder) 466 482 { 483 + struct drm_device *dev = encoder->dev; 484 + struct radeon_device *rdev = dev->dev_private; 467 485 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 468 486 469 - switch (radeon_encoder->encoder_id) { 470 - case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: 471 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 472 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 473 - radeon_encoder->hdmi_offset = R600_HDMI_TMDS1; 474 - break; 475 - 476 - case ENCODER_OBJECT_ID_INTERNAL_LVTM1: 477 - switch (r600_audio_tmds_index(encoder)) { 478 - case 0: 479 - radeon_encoder->hdmi_offset = R600_HDMI_TMDS1; 480 - break; 481 - case 1: 482 - radeon_encoder->hdmi_offset = R600_HDMI_TMDS2; 483 - break; 484 - default: 485 - radeon_encoder->hdmi_offset = 0; 486 - break; 487 + if (!radeon_encoder->hdmi_offset) { 488 + r600_hdmi_assign_block(encoder); 489 + if (!radeon_encoder->hdmi_offset) { 490 + dev_warn(rdev->dev, "Could not find HDMI block for " 491 + "0x%x encoder\n", radeon_encoder->encoder_id); 492 + return; 487 493 } 488 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 489 - radeon_encoder->hdmi_offset = R600_HDMI_TMDS2; 490 - break; 491 - 492 - case 
ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA: 493 - radeon_encoder->hdmi_offset = R600_HDMI_DIG; 494 - break; 495 - 496 - default: 497 - radeon_encoder->hdmi_offset = 0; 498 - break; 499 494 } 500 495 501 - DRM_DEBUG("using HDMI engine at offset 0x%04X for encoder 0x%x\n", 502 - radeon_encoder->hdmi_offset, radeon_encoder->encoder_id); 496 + if (ASIC_IS_DCE32(rdev) && !ASIC_IS_DCE4(rdev)) { 497 + WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0x1, ~0x1); 498 + } else if (rdev->family >= CHIP_R600 && !ASIC_IS_DCE3(rdev)) { 499 + int offset = radeon_encoder->hdmi_offset; 500 + switch (radeon_encoder->encoder_id) { 501 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: 502 + WREG32_P(AVIVO_TMDSA_CNTL, 0x4, ~0x4); 503 + WREG32(offset + R600_HDMI_ENABLE, 0x101); 504 + break; 505 + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: 506 + WREG32_P(AVIVO_LVTMA_CNTL, 0x4, ~0x4); 507 + WREG32(offset + R600_HDMI_ENABLE, 0x105); 508 + break; 509 + default: 510 + dev_err(rdev->dev, "Unknown HDMI output type\n"); 511 + break; 512 + } 513 + } 503 514 504 - /* TODO: make this configureable */ 505 - radeon_encoder->hdmi_audio_workaround = 0; 515 + DRM_DEBUG("Enabling HDMI interface @ 0x%04X for encoder 0x%x\n", 516 + radeon_encoder->hdmi_offset, radeon_encoder->encoder_id); 517 + } 518 + 519 + /* 520 + * disable the HDMI engine 521 + */ 522 + void r600_hdmi_disable(struct drm_encoder *encoder) 523 + { 524 + struct drm_device *dev = encoder->dev; 525 + struct radeon_device *rdev = dev->dev_private; 526 + struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 527 + 528 + if (!radeon_encoder->hdmi_offset) { 529 + dev_err(rdev->dev, "Disabling not enabled HDMI\n"); 530 + return; 531 + } 532 + 533 + DRM_DEBUG("Disabling HDMI interface @ 0x%04X for encoder 0x%x\n", 534 + radeon_encoder->hdmi_offset, radeon_encoder->encoder_id); 535 + 536 + if (ASIC_IS_DCE32(rdev) && !ASIC_IS_DCE4(rdev)) { 537 + WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0, ~0x1); 538 + } else if (rdev->family >= 
CHIP_R600 && !ASIC_IS_DCE3(rdev)) { 539 + int offset = radeon_encoder->hdmi_offset; 540 + switch (radeon_encoder->encoder_id) { 541 + case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1: 542 + WREG32_P(AVIVO_TMDSA_CNTL, 0, ~0x4); 543 + WREG32(offset + R600_HDMI_ENABLE, 0); 544 + break; 545 + case ENCODER_OBJECT_ID_INTERNAL_LVTM1: 546 + WREG32_P(AVIVO_LVTMA_CNTL, 0, ~0x4); 547 + WREG32(offset + R600_HDMI_ENABLE, 0); 548 + break; 549 + default: 550 + dev_err(rdev->dev, "Unknown HDMI output type\n"); 551 + break; 552 + } 553 + } 554 + 555 + radeon_encoder->hdmi_offset = 0; 556 + radeon_encoder->hdmi_config_offset = 0; 506 557 }
+7 -3
drivers/gpu/drm/radeon/r600_reg.h
··· 152 152 #define R600_AUDIO_STATUS_BITS 0x73d8 153 153 154 154 /* HDMI base register addresses */ 155 - #define R600_HDMI_TMDS1 0x7400 156 - #define R600_HDMI_TMDS2 0x7700 157 - #define R600_HDMI_DIG 0x7800 155 + #define R600_HDMI_BLOCK1 0x7400 156 + #define R600_HDMI_BLOCK2 0x7700 157 + #define R600_HDMI_BLOCK3 0x7800 158 158 159 159 /* HDMI registers */ 160 160 #define R600_HDMI_ENABLE 0x00 ··· 184 184 #define R600_HDMI_AUDIO_DEBUG_1 0xe4 185 185 #define R600_HDMI_AUDIO_DEBUG_2 0xe8 186 186 #define R600_HDMI_AUDIO_DEBUG_3 0xec 187 + 188 + /* HDMI additional config base register addresses */ 189 + #define R600_HDMI_CONFIG1 0x7600 190 + #define R600_HDMI_CONFIG2 0x7a00 187 191 188 192 #endif
+49
drivers/gpu/drm/radeon/r600d.h
··· 77 77 #define CB_COLOR0_FRAG 0x280e0 78 78 #define CB_COLOR0_MASK 0x28100 79 79 80 + #define SQ_ALU_CONST_CACHE_PS_0 0x28940 81 + #define SQ_ALU_CONST_CACHE_PS_1 0x28944 82 + #define SQ_ALU_CONST_CACHE_PS_2 0x28948 83 + #define SQ_ALU_CONST_CACHE_PS_3 0x2894c 84 + #define SQ_ALU_CONST_CACHE_PS_4 0x28950 85 + #define SQ_ALU_CONST_CACHE_PS_5 0x28954 86 + #define SQ_ALU_CONST_CACHE_PS_6 0x28958 87 + #define SQ_ALU_CONST_CACHE_PS_7 0x2895c 88 + #define SQ_ALU_CONST_CACHE_PS_8 0x28960 89 + #define SQ_ALU_CONST_CACHE_PS_9 0x28964 90 + #define SQ_ALU_CONST_CACHE_PS_10 0x28968 91 + #define SQ_ALU_CONST_CACHE_PS_11 0x2896c 92 + #define SQ_ALU_CONST_CACHE_PS_12 0x28970 93 + #define SQ_ALU_CONST_CACHE_PS_13 0x28974 94 + #define SQ_ALU_CONST_CACHE_PS_14 0x28978 95 + #define SQ_ALU_CONST_CACHE_PS_15 0x2897c 96 + #define SQ_ALU_CONST_CACHE_VS_0 0x28980 97 + #define SQ_ALU_CONST_CACHE_VS_1 0x28984 98 + #define SQ_ALU_CONST_CACHE_VS_2 0x28988 99 + #define SQ_ALU_CONST_CACHE_VS_3 0x2898c 100 + #define SQ_ALU_CONST_CACHE_VS_4 0x28990 101 + #define SQ_ALU_CONST_CACHE_VS_5 0x28994 102 + #define SQ_ALU_CONST_CACHE_VS_6 0x28998 103 + #define SQ_ALU_CONST_CACHE_VS_7 0x2899c 104 + #define SQ_ALU_CONST_CACHE_VS_8 0x289a0 105 + #define SQ_ALU_CONST_CACHE_VS_9 0x289a4 106 + #define SQ_ALU_CONST_CACHE_VS_10 0x289a8 107 + #define SQ_ALU_CONST_CACHE_VS_11 0x289ac 108 + #define SQ_ALU_CONST_CACHE_VS_12 0x289b0 109 + #define SQ_ALU_CONST_CACHE_VS_13 0x289b4 110 + #define SQ_ALU_CONST_CACHE_VS_14 0x289b8 111 + #define SQ_ALU_CONST_CACHE_VS_15 0x289bc 112 + #define SQ_ALU_CONST_CACHE_GS_0 0x289c0 113 + #define SQ_ALU_CONST_CACHE_GS_1 0x289c4 114 + #define SQ_ALU_CONST_CACHE_GS_2 0x289c8 115 + #define SQ_ALU_CONST_CACHE_GS_3 0x289cc 116 + #define SQ_ALU_CONST_CACHE_GS_4 0x289d0 117 + #define SQ_ALU_CONST_CACHE_GS_5 0x289d4 118 + #define SQ_ALU_CONST_CACHE_GS_6 0x289d8 119 + #define SQ_ALU_CONST_CACHE_GS_7 0x289dc 120 + #define SQ_ALU_CONST_CACHE_GS_8 0x289e0 121 + #define SQ_ALU_CONST_CACHE_GS_9 
0x289e4 122 + #define SQ_ALU_CONST_CACHE_GS_10 0x289e8 123 + #define SQ_ALU_CONST_CACHE_GS_11 0x289ec 124 + #define SQ_ALU_CONST_CACHE_GS_12 0x289f0 125 + #define SQ_ALU_CONST_CACHE_GS_13 0x289f4 126 + #define SQ_ALU_CONST_CACHE_GS_14 0x289f8 127 + #define SQ_ALU_CONST_CACHE_GS_15 0x289fc 128 + 80 129 #define CONFIG_MEMSIZE 0x5428 81 130 #define CONFIG_CNTL 0x5424 82 131 #define CP_STAT 0x8680
+17 -49
drivers/gpu/drm/radeon/radeon.h
··· 91 91 extern int radeon_new_pll; 92 92 extern int radeon_dynpm; 93 93 extern int radeon_audio; 94 + extern int radeon_disp_priority; 95 + extern int radeon_hw_i2c; 94 96 95 97 /* 96 98 * Copy from radeon_drv.h so we don't have to include both and have conflicting ··· 170 168 * Power management 171 169 */ 172 170 int radeon_pm_init(struct radeon_device *rdev); 171 + void radeon_pm_fini(struct radeon_device *rdev); 173 172 void radeon_pm_compute_clocks(struct radeon_device *rdev); 174 173 void radeon_combios_get_power_modes(struct radeon_device *rdev); 175 174 void radeon_atombios_get_power_modes(struct radeon_device *rdev); ··· 690 687 bool downclocked; 691 688 int active_crtcs; 692 689 int req_vblank; 690 + bool vblank_sync; 693 691 fixed20_12 max_bandwidth; 694 692 fixed20_12 igp_sideport_mclk; 695 693 fixed20_12 igp_system_mclk; ··· 701 697 fixed20_12 ht_bandwidth; 702 698 fixed20_12 core_bandwidth; 703 699 fixed20_12 sclk; 700 + fixed20_12 mclk; 704 701 fixed20_12 needed_bandwidth; 705 702 /* XXX: use a define for num power modes */ 706 703 struct radeon_power_state power_state[8]; ··· 712 707 struct radeon_power_state *requested_power_state; 713 708 struct radeon_pm_clock_info *requested_clock_mode; 714 709 struct radeon_power_state *default_power_state; 710 + struct radeon_i2c_chan *i2c_bus; 715 711 }; 716 712 717 713 ··· 735 729 struct drm_info_list *files, 736 730 unsigned nfiles); 737 731 int radeon_debugfs_fence_init(struct radeon_device *rdev); 738 - int r100_debugfs_rbbm_init(struct radeon_device *rdev); 739 - int r100_debugfs_cp_init(struct radeon_device *rdev); 740 732 741 733 742 734 /* ··· 786 782 int (*set_surface_reg)(struct radeon_device *rdev, int reg, 787 783 uint32_t tiling_flags, uint32_t pitch, 788 784 uint32_t offset, uint32_t obj_size); 789 - int (*clear_surface_reg)(struct radeon_device *rdev, int reg); 785 + void (*clear_surface_reg)(struct radeon_device *rdev, int reg); 790 786 void (*bandwidth_update)(struct radeon_device *rdev); 
791 787 void (*hpd_init)(struct radeon_device *rdev); 792 788 void (*hpd_fini)(struct radeon_device *rdev); ··· 865 861 struct r600_asic r600; 866 862 struct rv770_asic rv770; 867 863 }; 864 + 865 + /* 866 + * asic initialization from radeon_asic.c 867 + */ 868 + void radeon_agp_disable(struct radeon_device *rdev); 869 + int radeon_asic_init(struct radeon_device *rdev); 868 870 869 871 870 872 /* ··· 1182 1172 extern int radeon_modeset_init(struct radeon_device *rdev); 1183 1173 extern void radeon_modeset_fini(struct radeon_device *rdev); 1184 1174 extern bool radeon_card_posted(struct radeon_device *rdev); 1175 + extern void radeon_update_bandwidth_info(struct radeon_device *rdev); 1176 + extern void radeon_update_display_priority(struct radeon_device *rdev); 1185 1177 extern bool radeon_boot_test_post_card(struct radeon_device *rdev); 1186 1178 extern int radeon_clocks_init(struct radeon_device *rdev); 1187 1179 extern void radeon_clocks_fini(struct radeon_device *rdev); ··· 1200 1188 extern int radeon_suspend_kms(struct drm_device *dev, pm_message_t state); 1201 1189 1202 1190 /* r100,rv100,rs100,rv200,rs200,r200,rv250,rs300,rv280 */ 1203 - struct r100_mc_save { 1204 - u32 GENMO_WT; 1205 - u32 CRTC_EXT_CNTL; 1206 - u32 CRTC_GEN_CNTL; 1207 - u32 CRTC2_GEN_CNTL; 1208 - u32 CUR_OFFSET; 1209 - u32 CUR2_OFFSET; 1210 - }; 1211 - extern void r100_cp_disable(struct radeon_device *rdev); 1212 - extern int r100_cp_init(struct radeon_device *rdev, unsigned ring_size); 1213 - extern void r100_cp_fini(struct radeon_device *rdev); 1214 - extern void r100_pci_gart_tlb_flush(struct radeon_device *rdev); 1215 - extern int r100_pci_gart_init(struct radeon_device *rdev); 1216 - extern void r100_pci_gart_fini(struct radeon_device *rdev); 1217 - extern int r100_pci_gart_enable(struct radeon_device *rdev); 1218 - extern void r100_pci_gart_disable(struct radeon_device *rdev); 1219 - extern int r100_pci_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr); 1220 - extern int 
r100_debugfs_mc_info_init(struct radeon_device *rdev); 1221 - extern int r100_gui_wait_for_idle(struct radeon_device *rdev); 1222 - extern void r100_ib_fini(struct radeon_device *rdev); 1223 - extern int r100_ib_init(struct radeon_device *rdev); 1224 - extern void r100_irq_disable(struct radeon_device *rdev); 1225 - extern int r100_irq_set(struct radeon_device *rdev); 1226 - extern void r100_mc_stop(struct radeon_device *rdev, struct r100_mc_save *save); 1227 - extern void r100_mc_resume(struct radeon_device *rdev, struct r100_mc_save *save); 1228 - extern void r100_vram_init_sizes(struct radeon_device *rdev); 1229 - extern void r100_wb_disable(struct radeon_device *rdev); 1230 - extern void r100_wb_fini(struct radeon_device *rdev); 1231 - extern int r100_wb_init(struct radeon_device *rdev); 1232 - extern void r100_hdp_reset(struct radeon_device *rdev); 1233 - extern int r100_rb2d_reset(struct radeon_device *rdev); 1234 - extern int r100_cp_reset(struct radeon_device *rdev); 1235 - extern void r100_vga_render_disable(struct radeon_device *rdev); 1236 - extern int r100_cs_track_check_pkt3_indx_buffer(struct radeon_cs_parser *p, 1237 - struct radeon_cs_packet *pkt, 1238 - struct radeon_bo *robj); 1239 - extern int r100_cs_parse_packet0(struct radeon_cs_parser *p, 1240 - struct radeon_cs_packet *pkt, 1241 - const unsigned *auth, unsigned n, 1242 - radeon_packet0_check_t check); 1243 - extern int r100_cs_packet_parse(struct radeon_cs_parser *p, 1244 - struct radeon_cs_packet *pkt, 1245 - unsigned idx); 1246 - extern void r100_enable_bm(struct radeon_device *rdev); 1247 - extern void r100_set_common_regs(struct radeon_device *rdev); 1248 1191 1249 1192 /* rv200,rv250,rv280 */ 1250 1193 extern void r200_set_safe_registers(struct radeon_device *rdev); ··· 1289 1322 extern void r600_audio_set_clock(struct drm_encoder *encoder, int clock); 1290 1323 extern void r600_audio_fini(struct radeon_device *rdev); 1291 1324 extern void r600_hdmi_init(struct drm_encoder *encoder); 
1292 - extern void r600_hdmi_enable(struct drm_encoder *encoder, int enable); 1325 + extern void r600_hdmi_enable(struct drm_encoder *encoder); 1326 + extern void r600_hdmi_disable(struct drm_encoder *encoder); 1293 1327 extern void r600_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode *mode); 1294 1328 extern int r600_hdmi_buffer_status_changed(struct drm_encoder *encoder); 1295 1329 extern void r600_hdmi_update_audio_settings(struct drm_encoder *encoder,
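The hunk above replaces the single `r600_hdmi_enable(encoder, int enable)` entry point with a separate enable/disable pair. A minimal sketch of that refactor, using a toy encoder type (the names `toy_encoder`, `hdmi_set`, `hdmi_enable`, and `hdmi_disable` are illustrative stand-ins, not the real driver API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct drm_encoder, just enough state to show the
 * shape of the change. */
struct toy_encoder {
    bool hdmi_active;   /* hypothetical state bit */
};

/* Before: one function, behavior keyed on a flag argument. */
static void hdmi_set(struct toy_encoder *enc, int enable)
{
    enc->hdmi_active = enable != 0;
}

/* After: two entry points; each side can grow its own setup or
 * teardown logic without callers threading a flag through. */
static void hdmi_enable(struct toy_encoder *enc)
{
    enc->hdmi_active = true;
}

static void hdmi_disable(struct toy_encoder *enc)
{
    enc->hdmi_active = false;
}
```

Splitting a boolean-parameter function in two is a common kernel cleanup: call sites read as statements of intent, and the two paths can diverge later without further API churn.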
+772
drivers/gpu/drm/radeon/radeon_asic.c
··· 1 + /* 2 + * Copyright 2008 Advanced Micro Devices, Inc. 3 + * Copyright 2008 Red Hat Inc. 4 + * Copyright 2009 Jerome Glisse. 5 + * 6 + * Permission is hereby granted, free of charge, to any person obtaining a 7 + * copy of this software and associated documentation files (the "Software"), 8 + * to deal in the Software without restriction, including without limitation 9 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 10 + * and/or sell copies of the Software, and to permit persons to whom the 11 + * Software is furnished to do so, subject to the following conditions: 12 + * 13 + * The above copyright notice and this permission notice shall be included in 14 + * all copies or substantial portions of the Software. 15 + * 16 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 19 + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR 20 + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, 21 + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 + * OTHER DEALINGS IN THE SOFTWARE. 23 + * 24 + * Authors: Dave Airlie 25 + * Alex Deucher 26 + * Jerome Glisse 27 + */ 28 + 29 + #include <linux/console.h> 30 + #include <drm/drmP.h> 31 + #include <drm/drm_crtc_helper.h> 32 + #include <drm/radeon_drm.h> 33 + #include <linux/vgaarb.h> 34 + #include <linux/vga_switcheroo.h> 35 + #include "radeon_reg.h" 36 + #include "radeon.h" 37 + #include "radeon_asic.h" 38 + #include "atom.h" 39 + 40 + /* 41 + * Register accessor functions.
42 + */ 43 + static uint32_t radeon_invalid_rreg(struct radeon_device *rdev, uint32_t reg) 44 + { 45 + DRM_ERROR("Invalid callback to read register 0x%04X\n", reg); 46 + BUG_ON(1); 47 + return 0; 48 + } 49 + 50 + static void radeon_invalid_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 51 + { 52 + DRM_ERROR("Invalid callback to write register 0x%04X with 0x%08X\n", 53 + reg, v); 54 + BUG_ON(1); 55 + } 56 + 57 + static void radeon_register_accessor_init(struct radeon_device *rdev) 58 + { 59 + rdev->mc_rreg = &radeon_invalid_rreg; 60 + rdev->mc_wreg = &radeon_invalid_wreg; 61 + rdev->pll_rreg = &radeon_invalid_rreg; 62 + rdev->pll_wreg = &radeon_invalid_wreg; 63 + rdev->pciep_rreg = &radeon_invalid_rreg; 64 + rdev->pciep_wreg = &radeon_invalid_wreg; 65 + 66 + /* Don't change order as we are overriding accessors. */ 67 + if (rdev->family < CHIP_RV515) { 68 + rdev->pcie_reg_mask = 0xff; 69 + } else { 70 + rdev->pcie_reg_mask = 0x7ff; 71 + } 72 + /* FIXME: not sure here */ 73 + if (rdev->family <= CHIP_R580) { 74 + rdev->pll_rreg = &r100_pll_rreg; 75 + rdev->pll_wreg = &r100_pll_wreg; 76 + } 77 + if (rdev->family >= CHIP_R420) { 78 + rdev->mc_rreg = &r420_mc_rreg; 79 + rdev->mc_wreg = &r420_mc_wreg; 80 + } 81 + if (rdev->family >= CHIP_RV515) { 82 + rdev->mc_rreg = &rv515_mc_rreg; 83 + rdev->mc_wreg = &rv515_mc_wreg; 84 + } 85 + if (rdev->family == CHIP_RS400 || rdev->family == CHIP_RS480) { 86 + rdev->mc_rreg = &rs400_mc_rreg; 87 + rdev->mc_wreg = &rs400_mc_wreg; 88 + } 89 + if (rdev->family == CHIP_RS690 || rdev->family == CHIP_RS740) { 90 + rdev->mc_rreg = &rs690_mc_rreg; 91 + rdev->mc_wreg = &rs690_mc_wreg; 92 + } 93 + if (rdev->family == CHIP_RS600) { 94 + rdev->mc_rreg = &rs600_mc_rreg; 95 + rdev->mc_wreg = &rs600_mc_wreg; 96 + } 97 + if ((rdev->family >= CHIP_R600) && (rdev->family <= CHIP_RV740)) { 98 + rdev->pciep_rreg = &r600_pciep_rreg; 99 + rdev->pciep_wreg = &r600_pciep_wreg; 100 + } 101 + } 102 + 103 + 104 + /* helper to disable agp */ 105 +
void radeon_agp_disable(struct radeon_device *rdev) 106 + { 107 + rdev->flags &= ~RADEON_IS_AGP; 108 + if (rdev->family >= CHIP_R600) { 109 + DRM_INFO("Forcing AGP to PCIE mode\n"); 110 + rdev->flags |= RADEON_IS_PCIE; 111 + } else if (rdev->family >= CHIP_RV515 || 112 + rdev->family == CHIP_RV380 || 113 + rdev->family == CHIP_RV410 || 114 + rdev->family == CHIP_R423) { 115 + DRM_INFO("Forcing AGP to PCIE mode\n"); 116 + rdev->flags |= RADEON_IS_PCIE; 117 + rdev->asic->gart_tlb_flush = &rv370_pcie_gart_tlb_flush; 118 + rdev->asic->gart_set_page = &rv370_pcie_gart_set_page; 119 + } else { 120 + DRM_INFO("Forcing AGP to PCI mode\n"); 121 + rdev->flags |= RADEON_IS_PCI; 122 + rdev->asic->gart_tlb_flush = &r100_pci_gart_tlb_flush; 123 + rdev->asic->gart_set_page = &r100_pci_gart_set_page; 124 + } 125 + rdev->mc.gtt_size = radeon_gart_size * 1024 * 1024; 126 + } 127 + 128 + /* 129 + * ASIC 130 + */ 131 + static struct radeon_asic r100_asic = { 132 + .init = &r100_init, 133 + .fini = &r100_fini, 134 + .suspend = &r100_suspend, 135 + .resume = &r100_resume, 136 + .vga_set_state = &r100_vga_set_state, 137 + .gpu_reset = &r100_gpu_reset, 138 + .gart_tlb_flush = &r100_pci_gart_tlb_flush, 139 + .gart_set_page = &r100_pci_gart_set_page, 140 + .cp_commit = &r100_cp_commit, 141 + .ring_start = &r100_ring_start, 142 + .ring_test = &r100_ring_test, 143 + .ring_ib_execute = &r100_ring_ib_execute, 144 + .irq_set = &r100_irq_set, 145 + .irq_process = &r100_irq_process, 146 + .get_vblank_counter = &r100_get_vblank_counter, 147 + .fence_ring_emit = &r100_fence_ring_emit, 148 + .cs_parse = &r100_cs_parse, 149 + .copy_blit = &r100_copy_blit, 150 + .copy_dma = NULL, 151 + .copy = &r100_copy_blit, 152 + .get_engine_clock = &radeon_legacy_get_engine_clock, 153 + .set_engine_clock = &radeon_legacy_set_engine_clock, 154 + .get_memory_clock = &radeon_legacy_get_memory_clock, 155 + .set_memory_clock = NULL, 156 + .get_pcie_lanes = NULL, 157 + .set_pcie_lanes = NULL, 158 + .set_clock_gating = 
&radeon_legacy_set_clock_gating, 159 + .set_surface_reg = r100_set_surface_reg, 160 + .clear_surface_reg = r100_clear_surface_reg, 161 + .bandwidth_update = &r100_bandwidth_update, 162 + .hpd_init = &r100_hpd_init, 163 + .hpd_fini = &r100_hpd_fini, 164 + .hpd_sense = &r100_hpd_sense, 165 + .hpd_set_polarity = &r100_hpd_set_polarity, 166 + .ioctl_wait_idle = NULL, 167 + }; 168 + 169 + static struct radeon_asic r200_asic = { 170 + .init = &r100_init, 171 + .fini = &r100_fini, 172 + .suspend = &r100_suspend, 173 + .resume = &r100_resume, 174 + .vga_set_state = &r100_vga_set_state, 175 + .gpu_reset = &r100_gpu_reset, 176 + .gart_tlb_flush = &r100_pci_gart_tlb_flush, 177 + .gart_set_page = &r100_pci_gart_set_page, 178 + .cp_commit = &r100_cp_commit, 179 + .ring_start = &r100_ring_start, 180 + .ring_test = &r100_ring_test, 181 + .ring_ib_execute = &r100_ring_ib_execute, 182 + .irq_set = &r100_irq_set, 183 + .irq_process = &r100_irq_process, 184 + .get_vblank_counter = &r100_get_vblank_counter, 185 + .fence_ring_emit = &r100_fence_ring_emit, 186 + .cs_parse = &r100_cs_parse, 187 + .copy_blit = &r100_copy_blit, 188 + .copy_dma = &r200_copy_dma, 189 + .copy = &r100_copy_blit, 190 + .get_engine_clock = &radeon_legacy_get_engine_clock, 191 + .set_engine_clock = &radeon_legacy_set_engine_clock, 192 + .get_memory_clock = &radeon_legacy_get_memory_clock, 193 + .set_memory_clock = NULL, 194 + .set_pcie_lanes = NULL, 195 + .set_clock_gating = &radeon_legacy_set_clock_gating, 196 + .set_surface_reg = r100_set_surface_reg, 197 + .clear_surface_reg = r100_clear_surface_reg, 198 + .bandwidth_update = &r100_bandwidth_update, 199 + .hpd_init = &r100_hpd_init, 200 + .hpd_fini = &r100_hpd_fini, 201 + .hpd_sense = &r100_hpd_sense, 202 + .hpd_set_polarity = &r100_hpd_set_polarity, 203 + .ioctl_wait_idle = NULL, 204 + }; 205 + 206 + static struct radeon_asic r300_asic = { 207 + .init = &r300_init, 208 + .fini = &r300_fini, 209 + .suspend = &r300_suspend, 210 + .resume = &r300_resume, 211 + 
.vga_set_state = &r100_vga_set_state, 212 + .gpu_reset = &r300_gpu_reset, 213 + .gart_tlb_flush = &r100_pci_gart_tlb_flush, 214 + .gart_set_page = &r100_pci_gart_set_page, 215 + .cp_commit = &r100_cp_commit, 216 + .ring_start = &r300_ring_start, 217 + .ring_test = &r100_ring_test, 218 + .ring_ib_execute = &r100_ring_ib_execute, 219 + .irq_set = &r100_irq_set, 220 + .irq_process = &r100_irq_process, 221 + .get_vblank_counter = &r100_get_vblank_counter, 222 + .fence_ring_emit = &r300_fence_ring_emit, 223 + .cs_parse = &r300_cs_parse, 224 + .copy_blit = &r100_copy_blit, 225 + .copy_dma = &r200_copy_dma, 226 + .copy = &r100_copy_blit, 227 + .get_engine_clock = &radeon_legacy_get_engine_clock, 228 + .set_engine_clock = &radeon_legacy_set_engine_clock, 229 + .get_memory_clock = &radeon_legacy_get_memory_clock, 230 + .set_memory_clock = NULL, 231 + .get_pcie_lanes = &rv370_get_pcie_lanes, 232 + .set_pcie_lanes = &rv370_set_pcie_lanes, 233 + .set_clock_gating = &radeon_legacy_set_clock_gating, 234 + .set_surface_reg = r100_set_surface_reg, 235 + .clear_surface_reg = r100_clear_surface_reg, 236 + .bandwidth_update = &r100_bandwidth_update, 237 + .hpd_init = &r100_hpd_init, 238 + .hpd_fini = &r100_hpd_fini, 239 + .hpd_sense = &r100_hpd_sense, 240 + .hpd_set_polarity = &r100_hpd_set_polarity, 241 + .ioctl_wait_idle = NULL, 242 + }; 243 + 244 + static struct radeon_asic r300_asic_pcie = { 245 + .init = &r300_init, 246 + .fini = &r300_fini, 247 + .suspend = &r300_suspend, 248 + .resume = &r300_resume, 249 + .vga_set_state = &r100_vga_set_state, 250 + .gpu_reset = &r300_gpu_reset, 251 + .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 252 + .gart_set_page = &rv370_pcie_gart_set_page, 253 + .cp_commit = &r100_cp_commit, 254 + .ring_start = &r300_ring_start, 255 + .ring_test = &r100_ring_test, 256 + .ring_ib_execute = &r100_ring_ib_execute, 257 + .irq_set = &r100_irq_set, 258 + .irq_process = &r100_irq_process, 259 + .get_vblank_counter = &r100_get_vblank_counter, 260 + 
.fence_ring_emit = &r300_fence_ring_emit, 261 + .cs_parse = &r300_cs_parse, 262 + .copy_blit = &r100_copy_blit, 263 + .copy_dma = &r200_copy_dma, 264 + .copy = &r100_copy_blit, 265 + .get_engine_clock = &radeon_legacy_get_engine_clock, 266 + .set_engine_clock = &radeon_legacy_set_engine_clock, 267 + .get_memory_clock = &radeon_legacy_get_memory_clock, 268 + .set_memory_clock = NULL, 269 + .set_pcie_lanes = &rv370_set_pcie_lanes, 270 + .set_clock_gating = &radeon_legacy_set_clock_gating, 271 + .set_surface_reg = r100_set_surface_reg, 272 + .clear_surface_reg = r100_clear_surface_reg, 273 + .bandwidth_update = &r100_bandwidth_update, 274 + .hpd_init = &r100_hpd_init, 275 + .hpd_fini = &r100_hpd_fini, 276 + .hpd_sense = &r100_hpd_sense, 277 + .hpd_set_polarity = &r100_hpd_set_polarity, 278 + .ioctl_wait_idle = NULL, 279 + }; 280 + 281 + static struct radeon_asic r420_asic = { 282 + .init = &r420_init, 283 + .fini = &r420_fini, 284 + .suspend = &r420_suspend, 285 + .resume = &r420_resume, 286 + .vga_set_state = &r100_vga_set_state, 287 + .gpu_reset = &r300_gpu_reset, 288 + .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 289 + .gart_set_page = &rv370_pcie_gart_set_page, 290 + .cp_commit = &r100_cp_commit, 291 + .ring_start = &r300_ring_start, 292 + .ring_test = &r100_ring_test, 293 + .ring_ib_execute = &r100_ring_ib_execute, 294 + .irq_set = &r100_irq_set, 295 + .irq_process = &r100_irq_process, 296 + .get_vblank_counter = &r100_get_vblank_counter, 297 + .fence_ring_emit = &r300_fence_ring_emit, 298 + .cs_parse = &r300_cs_parse, 299 + .copy_blit = &r100_copy_blit, 300 + .copy_dma = &r200_copy_dma, 301 + .copy = &r100_copy_blit, 302 + .get_engine_clock = &radeon_atom_get_engine_clock, 303 + .set_engine_clock = &radeon_atom_set_engine_clock, 304 + .get_memory_clock = &radeon_atom_get_memory_clock, 305 + .set_memory_clock = &radeon_atom_set_memory_clock, 306 + .get_pcie_lanes = &rv370_get_pcie_lanes, 307 + .set_pcie_lanes = &rv370_set_pcie_lanes, 308 + .set_clock_gating = 
&radeon_atom_set_clock_gating, 309 + .set_surface_reg = r100_set_surface_reg, 310 + .clear_surface_reg = r100_clear_surface_reg, 311 + .bandwidth_update = &r100_bandwidth_update, 312 + .hpd_init = &r100_hpd_init, 313 + .hpd_fini = &r100_hpd_fini, 314 + .hpd_sense = &r100_hpd_sense, 315 + .hpd_set_polarity = &r100_hpd_set_polarity, 316 + .ioctl_wait_idle = NULL, 317 + }; 318 + 319 + static struct radeon_asic rs400_asic = { 320 + .init = &rs400_init, 321 + .fini = &rs400_fini, 322 + .suspend = &rs400_suspend, 323 + .resume = &rs400_resume, 324 + .vga_set_state = &r100_vga_set_state, 325 + .gpu_reset = &r300_gpu_reset, 326 + .gart_tlb_flush = &rs400_gart_tlb_flush, 327 + .gart_set_page = &rs400_gart_set_page, 328 + .cp_commit = &r100_cp_commit, 329 + .ring_start = &r300_ring_start, 330 + .ring_test = &r100_ring_test, 331 + .ring_ib_execute = &r100_ring_ib_execute, 332 + .irq_set = &r100_irq_set, 333 + .irq_process = &r100_irq_process, 334 + .get_vblank_counter = &r100_get_vblank_counter, 335 + .fence_ring_emit = &r300_fence_ring_emit, 336 + .cs_parse = &r300_cs_parse, 337 + .copy_blit = &r100_copy_blit, 338 + .copy_dma = &r200_copy_dma, 339 + .copy = &r100_copy_blit, 340 + .get_engine_clock = &radeon_legacy_get_engine_clock, 341 + .set_engine_clock = &radeon_legacy_set_engine_clock, 342 + .get_memory_clock = &radeon_legacy_get_memory_clock, 343 + .set_memory_clock = NULL, 344 + .get_pcie_lanes = NULL, 345 + .set_pcie_lanes = NULL, 346 + .set_clock_gating = &radeon_legacy_set_clock_gating, 347 + .set_surface_reg = r100_set_surface_reg, 348 + .clear_surface_reg = r100_clear_surface_reg, 349 + .bandwidth_update = &r100_bandwidth_update, 350 + .hpd_init = &r100_hpd_init, 351 + .hpd_fini = &r100_hpd_fini, 352 + .hpd_sense = &r100_hpd_sense, 353 + .hpd_set_polarity = &r100_hpd_set_polarity, 354 + .ioctl_wait_idle = NULL, 355 + }; 356 + 357 + static struct radeon_asic rs600_asic = { 358 + .init = &rs600_init, 359 + .fini = &rs600_fini, 360 + .suspend = &rs600_suspend, 361 + 
.resume = &rs600_resume, 362 + .vga_set_state = &r100_vga_set_state, 363 + .gpu_reset = &r300_gpu_reset, 364 + .gart_tlb_flush = &rs600_gart_tlb_flush, 365 + .gart_set_page = &rs600_gart_set_page, 366 + .cp_commit = &r100_cp_commit, 367 + .ring_start = &r300_ring_start, 368 + .ring_test = &r100_ring_test, 369 + .ring_ib_execute = &r100_ring_ib_execute, 370 + .irq_set = &rs600_irq_set, 371 + .irq_process = &rs600_irq_process, 372 + .get_vblank_counter = &rs600_get_vblank_counter, 373 + .fence_ring_emit = &r300_fence_ring_emit, 374 + .cs_parse = &r300_cs_parse, 375 + .copy_blit = &r100_copy_blit, 376 + .copy_dma = &r200_copy_dma, 377 + .copy = &r100_copy_blit, 378 + .get_engine_clock = &radeon_atom_get_engine_clock, 379 + .set_engine_clock = &radeon_atom_set_engine_clock, 380 + .get_memory_clock = &radeon_atom_get_memory_clock, 381 + .set_memory_clock = &radeon_atom_set_memory_clock, 382 + .get_pcie_lanes = NULL, 383 + .set_pcie_lanes = NULL, 384 + .set_clock_gating = &radeon_atom_set_clock_gating, 385 + .set_surface_reg = r100_set_surface_reg, 386 + .clear_surface_reg = r100_clear_surface_reg, 387 + .bandwidth_update = &rs600_bandwidth_update, 388 + .hpd_init = &rs600_hpd_init, 389 + .hpd_fini = &rs600_hpd_fini, 390 + .hpd_sense = &rs600_hpd_sense, 391 + .hpd_set_polarity = &rs600_hpd_set_polarity, 392 + .ioctl_wait_idle = NULL, 393 + }; 394 + 395 + static struct radeon_asic rs690_asic = { 396 + .init = &rs690_init, 397 + .fini = &rs690_fini, 398 + .suspend = &rs690_suspend, 399 + .resume = &rs690_resume, 400 + .vga_set_state = &r100_vga_set_state, 401 + .gpu_reset = &r300_gpu_reset, 402 + .gart_tlb_flush = &rs400_gart_tlb_flush, 403 + .gart_set_page = &rs400_gart_set_page, 404 + .cp_commit = &r100_cp_commit, 405 + .ring_start = &r300_ring_start, 406 + .ring_test = &r100_ring_test, 407 + .ring_ib_execute = &r100_ring_ib_execute, 408 + .irq_set = &rs600_irq_set, 409 + .irq_process = &rs600_irq_process, 410 + .get_vblank_counter = &rs600_get_vblank_counter, 411 + 
.fence_ring_emit = &r300_fence_ring_emit, 412 + .cs_parse = &r300_cs_parse, 413 + .copy_blit = &r100_copy_blit, 414 + .copy_dma = &r200_copy_dma, 415 + .copy = &r200_copy_dma, 416 + .get_engine_clock = &radeon_atom_get_engine_clock, 417 + .set_engine_clock = &radeon_atom_set_engine_clock, 418 + .get_memory_clock = &radeon_atom_get_memory_clock, 419 + .set_memory_clock = &radeon_atom_set_memory_clock, 420 + .get_pcie_lanes = NULL, 421 + .set_pcie_lanes = NULL, 422 + .set_clock_gating = &radeon_atom_set_clock_gating, 423 + .set_surface_reg = r100_set_surface_reg, 424 + .clear_surface_reg = r100_clear_surface_reg, 425 + .bandwidth_update = &rs690_bandwidth_update, 426 + .hpd_init = &rs600_hpd_init, 427 + .hpd_fini = &rs600_hpd_fini, 428 + .hpd_sense = &rs600_hpd_sense, 429 + .hpd_set_polarity = &rs600_hpd_set_polarity, 430 + .ioctl_wait_idle = NULL, 431 + }; 432 + 433 + static struct radeon_asic rv515_asic = { 434 + .init = &rv515_init, 435 + .fini = &rv515_fini, 436 + .suspend = &rv515_suspend, 437 + .resume = &rv515_resume, 438 + .vga_set_state = &r100_vga_set_state, 439 + .gpu_reset = &rv515_gpu_reset, 440 + .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 441 + .gart_set_page = &rv370_pcie_gart_set_page, 442 + .cp_commit = &r100_cp_commit, 443 + .ring_start = &rv515_ring_start, 444 + .ring_test = &r100_ring_test, 445 + .ring_ib_execute = &r100_ring_ib_execute, 446 + .irq_set = &rs600_irq_set, 447 + .irq_process = &rs600_irq_process, 448 + .get_vblank_counter = &rs600_get_vblank_counter, 449 + .fence_ring_emit = &r300_fence_ring_emit, 450 + .cs_parse = &r300_cs_parse, 451 + .copy_blit = &r100_copy_blit, 452 + .copy_dma = &r200_copy_dma, 453 + .copy = &r100_copy_blit, 454 + .get_engine_clock = &radeon_atom_get_engine_clock, 455 + .set_engine_clock = &radeon_atom_set_engine_clock, 456 + .get_memory_clock = &radeon_atom_get_memory_clock, 457 + .set_memory_clock = &radeon_atom_set_memory_clock, 458 + .get_pcie_lanes = &rv370_get_pcie_lanes, 459 + .set_pcie_lanes = 
&rv370_set_pcie_lanes, 460 + .set_clock_gating = &radeon_atom_set_clock_gating, 461 + .set_surface_reg = r100_set_surface_reg, 462 + .clear_surface_reg = r100_clear_surface_reg, 463 + .bandwidth_update = &rv515_bandwidth_update, 464 + .hpd_init = &rs600_hpd_init, 465 + .hpd_fini = &rs600_hpd_fini, 466 + .hpd_sense = &rs600_hpd_sense, 467 + .hpd_set_polarity = &rs600_hpd_set_polarity, 468 + .ioctl_wait_idle = NULL, 469 + }; 470 + 471 + static struct radeon_asic r520_asic = { 472 + .init = &r520_init, 473 + .fini = &rv515_fini, 474 + .suspend = &rv515_suspend, 475 + .resume = &r520_resume, 476 + .vga_set_state = &r100_vga_set_state, 477 + .gpu_reset = &rv515_gpu_reset, 478 + .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 479 + .gart_set_page = &rv370_pcie_gart_set_page, 480 + .cp_commit = &r100_cp_commit, 481 + .ring_start = &rv515_ring_start, 482 + .ring_test = &r100_ring_test, 483 + .ring_ib_execute = &r100_ring_ib_execute, 484 + .irq_set = &rs600_irq_set, 485 + .irq_process = &rs600_irq_process, 486 + .get_vblank_counter = &rs600_get_vblank_counter, 487 + .fence_ring_emit = &r300_fence_ring_emit, 488 + .cs_parse = &r300_cs_parse, 489 + .copy_blit = &r100_copy_blit, 490 + .copy_dma = &r200_copy_dma, 491 + .copy = &r100_copy_blit, 492 + .get_engine_clock = &radeon_atom_get_engine_clock, 493 + .set_engine_clock = &radeon_atom_set_engine_clock, 494 + .get_memory_clock = &radeon_atom_get_memory_clock, 495 + .set_memory_clock = &radeon_atom_set_memory_clock, 496 + .get_pcie_lanes = &rv370_get_pcie_lanes, 497 + .set_pcie_lanes = &rv370_set_pcie_lanes, 498 + .set_clock_gating = &radeon_atom_set_clock_gating, 499 + .set_surface_reg = r100_set_surface_reg, 500 + .clear_surface_reg = r100_clear_surface_reg, 501 + .bandwidth_update = &rv515_bandwidth_update, 502 + .hpd_init = &rs600_hpd_init, 503 + .hpd_fini = &rs600_hpd_fini, 504 + .hpd_sense = &rs600_hpd_sense, 505 + .hpd_set_polarity = &rs600_hpd_set_polarity, 506 + .ioctl_wait_idle = NULL, 507 + }; 508 + 509 + static 
struct radeon_asic r600_asic = { 510 + .init = &r600_init, 511 + .fini = &r600_fini, 512 + .suspend = &r600_suspend, 513 + .resume = &r600_resume, 514 + .cp_commit = &r600_cp_commit, 515 + .vga_set_state = &r600_vga_set_state, 516 + .gpu_reset = &r600_gpu_reset, 517 + .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 518 + .gart_set_page = &rs600_gart_set_page, 519 + .ring_test = &r600_ring_test, 520 + .ring_ib_execute = &r600_ring_ib_execute, 521 + .irq_set = &r600_irq_set, 522 + .irq_process = &r600_irq_process, 523 + .get_vblank_counter = &rs600_get_vblank_counter, 524 + .fence_ring_emit = &r600_fence_ring_emit, 525 + .cs_parse = &r600_cs_parse, 526 + .copy_blit = &r600_copy_blit, 527 + .copy_dma = &r600_copy_blit, 528 + .copy = &r600_copy_blit, 529 + .get_engine_clock = &radeon_atom_get_engine_clock, 530 + .set_engine_clock = &radeon_atom_set_engine_clock, 531 + .get_memory_clock = &radeon_atom_get_memory_clock, 532 + .set_memory_clock = &radeon_atom_set_memory_clock, 533 + .get_pcie_lanes = &rv370_get_pcie_lanes, 534 + .set_pcie_lanes = NULL, 535 + .set_clock_gating = NULL, 536 + .set_surface_reg = r600_set_surface_reg, 537 + .clear_surface_reg = r600_clear_surface_reg, 538 + .bandwidth_update = &rv515_bandwidth_update, 539 + .hpd_init = &r600_hpd_init, 540 + .hpd_fini = &r600_hpd_fini, 541 + .hpd_sense = &r600_hpd_sense, 542 + .hpd_set_polarity = &r600_hpd_set_polarity, 543 + .ioctl_wait_idle = r600_ioctl_wait_idle, 544 + }; 545 + 546 + static struct radeon_asic rs780_asic = { 547 + .init = &r600_init, 548 + .fini = &r600_fini, 549 + .suspend = &r600_suspend, 550 + .resume = &r600_resume, 551 + .cp_commit = &r600_cp_commit, 552 + .vga_set_state = &r600_vga_set_state, 553 + .gpu_reset = &r600_gpu_reset, 554 + .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 555 + .gart_set_page = &rs600_gart_set_page, 556 + .ring_test = &r600_ring_test, 557 + .ring_ib_execute = &r600_ring_ib_execute, 558 + .irq_set = &r600_irq_set, 559 + .irq_process = &r600_irq_process, 560 + 
.get_vblank_counter = &rs600_get_vblank_counter, 561 + .fence_ring_emit = &r600_fence_ring_emit, 562 + .cs_parse = &r600_cs_parse, 563 + .copy_blit = &r600_copy_blit, 564 + .copy_dma = &r600_copy_blit, 565 + .copy = &r600_copy_blit, 566 + .get_engine_clock = &radeon_atom_get_engine_clock, 567 + .set_engine_clock = &radeon_atom_set_engine_clock, 568 + .get_memory_clock = NULL, 569 + .set_memory_clock = NULL, 570 + .get_pcie_lanes = NULL, 571 + .set_pcie_lanes = NULL, 572 + .set_clock_gating = NULL, 573 + .set_surface_reg = r600_set_surface_reg, 574 + .clear_surface_reg = r600_clear_surface_reg, 575 + .bandwidth_update = &rs690_bandwidth_update, 576 + .hpd_init = &r600_hpd_init, 577 + .hpd_fini = &r600_hpd_fini, 578 + .hpd_sense = &r600_hpd_sense, 579 + .hpd_set_polarity = &r600_hpd_set_polarity, 580 + .ioctl_wait_idle = r600_ioctl_wait_idle, 581 + }; 582 + 583 + static struct radeon_asic rv770_asic = { 584 + .init = &rv770_init, 585 + .fini = &rv770_fini, 586 + .suspend = &rv770_suspend, 587 + .resume = &rv770_resume, 588 + .cp_commit = &r600_cp_commit, 589 + .gpu_reset = &rv770_gpu_reset, 590 + .vga_set_state = &r600_vga_set_state, 591 + .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 592 + .gart_set_page = &rs600_gart_set_page, 593 + .ring_test = &r600_ring_test, 594 + .ring_ib_execute = &r600_ring_ib_execute, 595 + .irq_set = &r600_irq_set, 596 + .irq_process = &r600_irq_process, 597 + .get_vblank_counter = &rs600_get_vblank_counter, 598 + .fence_ring_emit = &r600_fence_ring_emit, 599 + .cs_parse = &r600_cs_parse, 600 + .copy_blit = &r600_copy_blit, 601 + .copy_dma = &r600_copy_blit, 602 + .copy = &r600_copy_blit, 603 + .get_engine_clock = &radeon_atom_get_engine_clock, 604 + .set_engine_clock = &radeon_atom_set_engine_clock, 605 + .get_memory_clock = &radeon_atom_get_memory_clock, 606 + .set_memory_clock = &radeon_atom_set_memory_clock, 607 + .get_pcie_lanes = &rv370_get_pcie_lanes, 608 + .set_pcie_lanes = NULL, 609 + .set_clock_gating = 
&radeon_atom_set_clock_gating, 610 + .set_surface_reg = r600_set_surface_reg, 611 + .clear_surface_reg = r600_clear_surface_reg, 612 + .bandwidth_update = &rv515_bandwidth_update, 613 + .hpd_init = &r600_hpd_init, 614 + .hpd_fini = &r600_hpd_fini, 615 + .hpd_sense = &r600_hpd_sense, 616 + .hpd_set_polarity = &r600_hpd_set_polarity, 617 + .ioctl_wait_idle = r600_ioctl_wait_idle, 618 + }; 619 + 620 + static struct radeon_asic evergreen_asic = { 621 + .init = &evergreen_init, 622 + .fini = &evergreen_fini, 623 + .suspend = &evergreen_suspend, 624 + .resume = &evergreen_resume, 625 + .cp_commit = NULL, 626 + .gpu_reset = &evergreen_gpu_reset, 627 + .vga_set_state = &r600_vga_set_state, 628 + .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 629 + .gart_set_page = &rs600_gart_set_page, 630 + .ring_test = NULL, 631 + .ring_ib_execute = NULL, 632 + .irq_set = NULL, 633 + .irq_process = NULL, 634 + .get_vblank_counter = NULL, 635 + .fence_ring_emit = NULL, 636 + .cs_parse = NULL, 637 + .copy_blit = NULL, 638 + .copy_dma = NULL, 639 + .copy = NULL, 640 + .get_engine_clock = &radeon_atom_get_engine_clock, 641 + .set_engine_clock = &radeon_atom_set_engine_clock, 642 + .get_memory_clock = &radeon_atom_get_memory_clock, 643 + .set_memory_clock = &radeon_atom_set_memory_clock, 644 + .set_pcie_lanes = NULL, 645 + .set_clock_gating = NULL, 646 + .set_surface_reg = r600_set_surface_reg, 647 + .clear_surface_reg = r600_clear_surface_reg, 648 + .bandwidth_update = &evergreen_bandwidth_update, 649 + .hpd_init = &evergreen_hpd_init, 650 + .hpd_fini = &evergreen_hpd_fini, 651 + .hpd_sense = &evergreen_hpd_sense, 652 + .hpd_set_polarity = &evergreen_hpd_set_polarity, 653 + }; 654 + 655 + int radeon_asic_init(struct radeon_device *rdev) 656 + { 657 + radeon_register_accessor_init(rdev); 658 + switch (rdev->family) { 659 + case CHIP_R100: 660 + case CHIP_RV100: 661 + case CHIP_RS100: 662 + case CHIP_RV200: 663 + case CHIP_RS200: 664 + rdev->asic = &r100_asic; 665 + break; 666 + case CHIP_R200: 
667 + case CHIP_RV250: 668 + case CHIP_RS300: 669 + case CHIP_RV280: 670 + rdev->asic = &r200_asic; 671 + break; 672 + case CHIP_R300: 673 + case CHIP_R350: 674 + case CHIP_RV350: 675 + case CHIP_RV380: 676 + if (rdev->flags & RADEON_IS_PCIE) 677 + rdev->asic = &r300_asic_pcie; 678 + else 679 + rdev->asic = &r300_asic; 680 + break; 681 + case CHIP_R420: 682 + case CHIP_R423: 683 + case CHIP_RV410: 684 + rdev->asic = &r420_asic; 685 + break; 686 + case CHIP_RS400: 687 + case CHIP_RS480: 688 + rdev->asic = &rs400_asic; 689 + break; 690 + case CHIP_RS600: 691 + rdev->asic = &rs600_asic; 692 + break; 693 + case CHIP_RS690: 694 + case CHIP_RS740: 695 + rdev->asic = &rs690_asic; 696 + break; 697 + case CHIP_RV515: 698 + rdev->asic = &rv515_asic; 699 + break; 700 + case CHIP_R520: 701 + case CHIP_RV530: 702 + case CHIP_RV560: 703 + case CHIP_RV570: 704 + case CHIP_R580: 705 + rdev->asic = &r520_asic; 706 + break; 707 + case CHIP_R600: 708 + case CHIP_RV610: 709 + case CHIP_RV630: 710 + case CHIP_RV620: 711 + case CHIP_RV635: 712 + case CHIP_RV670: 713 + rdev->asic = &r600_asic; 714 + break; 715 + case CHIP_RS780: 716 + case CHIP_RS880: 717 + rdev->asic = &rs780_asic; 718 + break; 719 + case CHIP_RV770: 720 + case CHIP_RV730: 721 + case CHIP_RV710: 722 + case CHIP_RV740: 723 + rdev->asic = &rv770_asic; 724 + break; 725 + case CHIP_CEDAR: 726 + case CHIP_REDWOOD: 727 + case CHIP_JUNIPER: 728 + case CHIP_CYPRESS: 729 + case CHIP_HEMLOCK: 730 + rdev->asic = &evergreen_asic; 731 + break; 732 + default: 733 + /* FIXME: not supported yet */ 734 + return -EINVAL; 735 + } 736 + 737 + if (rdev->flags & RADEON_IS_IGP) { 738 + rdev->asic->get_memory_clock = NULL; 739 + rdev->asic->set_memory_clock = NULL; 740 + } 741 + 742 + /* set the number of crtcs */ 743 + if (rdev->flags & RADEON_SINGLE_CRTC) 744 + rdev->num_crtc = 1; 745 + else { 746 + if (ASIC_IS_DCE4(rdev)) 747 + rdev->num_crtc = 6; 748 + else 749 + rdev->num_crtc = 2; 750 + } 751 + 752 + return 0; 753 + } 754 + 755 + /* 756 
+ * Wrapper around modesetting bits. Move to radeon_clocks.c? 757 + */ 758 + int radeon_clocks_init(struct radeon_device *rdev) 759 + { 760 + int r; 761 + 762 + r = radeon_static_clocks_init(rdev->ddev); 763 + if (r) { 764 + return r; 765 + } 766 + DRM_INFO("Clocks initialized!\n"); 767 + return 0; 768 + } 769 + 770 + void radeon_clocks_fini(struct radeon_device *rdev) 771 + { 772 + }
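The new radeon_asic.c above is built around one idiom: a `struct radeon_asic` of function pointers per chip family, bound once by the switch in `radeon_asic_init()`. A minimal standalone sketch of that dispatch pattern, with toy names (`toy_asic`, `toy_asic_select`, the `TOY_*` families) standing in for the real radeon types:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical family ids, analogous to CHIP_R100, CHIP_R600, ... */
enum toy_family { TOY_R100, TOY_R600, TOY_UNKNOWN };

/* Per-family callback table, like struct radeon_asic. */
struct toy_asic {
    int  (*gpu_reset)(void);
    void (*gart_tlb_flush)(void);
};

/* Stub implementations; the real driver supplies per-family code. */
static int  toy_r100_gpu_reset(void)      { return 0; }
static void toy_r100_gart_tlb_flush(void) { }
static int  toy_r600_gpu_reset(void)      { return 0; }
static void toy_r600_gart_tlb_flush(void) { }

static const struct toy_asic toy_r100_asic = {
    .gpu_reset      = toy_r100_gpu_reset,
    .gart_tlb_flush = toy_r100_gart_tlb_flush,
};

static const struct toy_asic toy_r600_asic = {
    .gpu_reset      = toy_r600_gpu_reset,
    .gart_tlb_flush = toy_r600_gart_tlb_flush,
};

/* Analog of radeon_asic_init(): pick the table for a family, or fail
 * (the driver returns -EINVAL for unsupported chips). */
static const struct toy_asic *toy_asic_select(enum toy_family f)
{
    switch (f) {
    case TOY_R100: return &toy_r100_asic;
    case TOY_R600: return &toy_r600_asic;
    default:       return NULL;
    }
}
```

After selection, callers invoke `ops->gpu_reset()` without family checks; this is why moving the tables out of radeon_asic.h into one .c file (the next hunks) costs nothing at the call sites.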
+50 -495
drivers/gpu/drm/radeon/radeon_asic.h
··· 45 45 /* 46 46 * r100,rv100,rs100,rv200,rs200 47 47 */ 48 - extern int r100_init(struct radeon_device *rdev); 49 - extern void r100_fini(struct radeon_device *rdev); 50 - extern int r100_suspend(struct radeon_device *rdev); 51 - extern int r100_resume(struct radeon_device *rdev); 48 + struct r100_mc_save { 49 + u32 GENMO_WT; 50 + u32 CRTC_EXT_CNTL; 51 + u32 CRTC_GEN_CNTL; 52 + u32 CRTC2_GEN_CNTL; 53 + u32 CUR_OFFSET; 54 + u32 CUR2_OFFSET; 55 + }; 56 + int r100_init(struct radeon_device *rdev); 57 + void r100_fini(struct radeon_device *rdev); 58 + int r100_suspend(struct radeon_device *rdev); 59 + int r100_resume(struct radeon_device *rdev); 52 60 uint32_t r100_mm_rreg(struct radeon_device *rdev, uint32_t reg); 53 61 void r100_mm_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v); 54 62 void r100_vga_set_state(struct radeon_device *rdev, bool state); ··· 81 73 int r100_set_surface_reg(struct radeon_device *rdev, int reg, 82 74 uint32_t tiling_flags, uint32_t pitch, 83 75 uint32_t offset, uint32_t obj_size); 84 - int r100_clear_surface_reg(struct radeon_device *rdev, int reg); 76 + void r100_clear_surface_reg(struct radeon_device *rdev, int reg); 85 77 void r100_bandwidth_update(struct radeon_device *rdev); 86 78 void r100_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib); 87 79 int r100_ring_test(struct radeon_device *rdev); ··· 90 82 bool r100_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd); 91 83 void r100_hpd_set_polarity(struct radeon_device *rdev, 92 84 enum radeon_hpd_id hpd); 93 - 94 - static struct radeon_asic r100_asic = { 95 - .init = &r100_init, 96 - .fini = &r100_fini, 97 - .suspend = &r100_suspend, 98 - .resume = &r100_resume, 99 - .vga_set_state = &r100_vga_set_state, 100 - .gpu_reset = &r100_gpu_reset, 101 - .gart_tlb_flush = &r100_pci_gart_tlb_flush, 102 - .gart_set_page = &r100_pci_gart_set_page, 103 - .cp_commit = &r100_cp_commit, 104 - .ring_start = &r100_ring_start, 105 - .ring_test = &r100_ring_test, 106 
- .ring_ib_execute = &r100_ring_ib_execute, 107 - .irq_set = &r100_irq_set, 108 - .irq_process = &r100_irq_process, 109 - .get_vblank_counter = &r100_get_vblank_counter, 110 - .fence_ring_emit = &r100_fence_ring_emit, 111 - .cs_parse = &r100_cs_parse, 112 - .copy_blit = &r100_copy_blit, 113 - .copy_dma = NULL, 114 - .copy = &r100_copy_blit, 115 - .get_engine_clock = &radeon_legacy_get_engine_clock, 116 - .set_engine_clock = &radeon_legacy_set_engine_clock, 117 - .get_memory_clock = &radeon_legacy_get_memory_clock, 118 - .set_memory_clock = NULL, 119 - .get_pcie_lanes = NULL, 120 - .set_pcie_lanes = NULL, 121 - .set_clock_gating = &radeon_legacy_set_clock_gating, 122 - .set_surface_reg = r100_set_surface_reg, 123 - .clear_surface_reg = r100_clear_surface_reg, 124 - .bandwidth_update = &r100_bandwidth_update, 125 - .hpd_init = &r100_hpd_init, 126 - .hpd_fini = &r100_hpd_fini, 127 - .hpd_sense = &r100_hpd_sense, 128 - .hpd_set_polarity = &r100_hpd_set_polarity, 129 - .ioctl_wait_idle = NULL, 130 - }; 85 + int r100_debugfs_rbbm_init(struct radeon_device *rdev); 86 + int r100_debugfs_cp_init(struct radeon_device *rdev); 87 + void r100_cp_disable(struct radeon_device *rdev); 88 + int r100_cp_init(struct radeon_device *rdev, unsigned ring_size); 89 + void r100_cp_fini(struct radeon_device *rdev); 90 + int r100_pci_gart_init(struct radeon_device *rdev); 91 + void r100_pci_gart_fini(struct radeon_device *rdev); 92 + int r100_pci_gart_enable(struct radeon_device *rdev); 93 + void r100_pci_gart_disable(struct radeon_device *rdev); 94 + int r100_debugfs_mc_info_init(struct radeon_device *rdev); 95 + int r100_gui_wait_for_idle(struct radeon_device *rdev); 96 + void r100_ib_fini(struct radeon_device *rdev); 97 + int r100_ib_init(struct radeon_device *rdev); 98 + void r100_irq_disable(struct radeon_device *rdev); 99 + void r100_mc_stop(struct radeon_device *rdev, struct r100_mc_save *save); 100 + void r100_mc_resume(struct radeon_device *rdev, struct r100_mc_save *save); 101 + 
void r100_vram_init_sizes(struct radeon_device *rdev); 102 + void r100_wb_disable(struct radeon_device *rdev); 103 + void r100_wb_fini(struct radeon_device *rdev); 104 + int r100_wb_init(struct radeon_device *rdev); 105 + void r100_hdp_reset(struct radeon_device *rdev); 106 + int r100_rb2d_reset(struct radeon_device *rdev); 107 + int r100_cp_reset(struct radeon_device *rdev); 108 + void r100_vga_render_disable(struct radeon_device *rdev); 109 + int r100_cs_track_check_pkt3_indx_buffer(struct radeon_cs_parser *p, 110 + struct radeon_cs_packet *pkt, 111 + struct radeon_bo *robj); 112 + int r100_cs_parse_packet0(struct radeon_cs_parser *p, 113 + struct radeon_cs_packet *pkt, 114 + const unsigned *auth, unsigned n, 115 + radeon_packet0_check_t check); 116 + int r100_cs_packet_parse(struct radeon_cs_parser *p, 117 + struct radeon_cs_packet *pkt, 118 + unsigned idx); 119 + void r100_enable_bm(struct radeon_device *rdev); 120 + void r100_set_common_regs(struct radeon_device *rdev); 131 121 132 122 /* 133 123 * r200,rv250,rs300,rv280 ··· 135 129 uint64_t dst_offset, 136 130 unsigned num_pages, 137 131 struct radeon_fence *fence); 138 - static struct radeon_asic r200_asic = { 139 - .init = &r100_init, 140 - .fini = &r100_fini, 141 - .suspend = &r100_suspend, 142 - .resume = &r100_resume, 143 - .vga_set_state = &r100_vga_set_state, 144 - .gpu_reset = &r100_gpu_reset, 145 - .gart_tlb_flush = &r100_pci_gart_tlb_flush, 146 - .gart_set_page = &r100_pci_gart_set_page, 147 - .cp_commit = &r100_cp_commit, 148 - .ring_start = &r100_ring_start, 149 - .ring_test = &r100_ring_test, 150 - .ring_ib_execute = &r100_ring_ib_execute, 151 - .irq_set = &r100_irq_set, 152 - .irq_process = &r100_irq_process, 153 - .get_vblank_counter = &r100_get_vblank_counter, 154 - .fence_ring_emit = &r100_fence_ring_emit, 155 - .cs_parse = &r100_cs_parse, 156 - .copy_blit = &r100_copy_blit, 157 - .copy_dma = &r200_copy_dma, 158 - .copy = &r100_copy_blit, 159 - .get_engine_clock = 
&radeon_legacy_get_engine_clock, 160 - .set_engine_clock = &radeon_legacy_set_engine_clock, 161 - .get_memory_clock = &radeon_legacy_get_memory_clock, 162 - .set_memory_clock = NULL, 163 - .set_pcie_lanes = NULL, 164 - .set_clock_gating = &radeon_legacy_set_clock_gating, 165 - .set_surface_reg = r100_set_surface_reg, 166 - .clear_surface_reg = r100_clear_surface_reg, 167 - .bandwidth_update = &r100_bandwidth_update, 168 - .hpd_init = &r100_hpd_init, 169 - .hpd_fini = &r100_hpd_fini, 170 - .hpd_sense = &r100_hpd_sense, 171 - .hpd_set_polarity = &r100_hpd_set_polarity, 172 - .ioctl_wait_idle = NULL, 173 - }; 174 - 175 132 176 133 /* 177 134 * r300,r350,rv350,rv380 ··· 155 186 extern void rv370_set_pcie_lanes(struct radeon_device *rdev, int lanes); 156 187 extern int rv370_get_pcie_lanes(struct radeon_device *rdev); 157 188 158 - static struct radeon_asic r300_asic = { 159 - .init = &r300_init, 160 - .fini = &r300_fini, 161 - .suspend = &r300_suspend, 162 - .resume = &r300_resume, 163 - .vga_set_state = &r100_vga_set_state, 164 - .gpu_reset = &r300_gpu_reset, 165 - .gart_tlb_flush = &r100_pci_gart_tlb_flush, 166 - .gart_set_page = &r100_pci_gart_set_page, 167 - .cp_commit = &r100_cp_commit, 168 - .ring_start = &r300_ring_start, 169 - .ring_test = &r100_ring_test, 170 - .ring_ib_execute = &r100_ring_ib_execute, 171 - .irq_set = &r100_irq_set, 172 - .irq_process = &r100_irq_process, 173 - .get_vblank_counter = &r100_get_vblank_counter, 174 - .fence_ring_emit = &r300_fence_ring_emit, 175 - .cs_parse = &r300_cs_parse, 176 - .copy_blit = &r100_copy_blit, 177 - .copy_dma = &r200_copy_dma, 178 - .copy = &r100_copy_blit, 179 - .get_engine_clock = &radeon_legacy_get_engine_clock, 180 - .set_engine_clock = &radeon_legacy_set_engine_clock, 181 - .get_memory_clock = &radeon_legacy_get_memory_clock, 182 - .set_memory_clock = NULL, 183 - .get_pcie_lanes = &rv370_get_pcie_lanes, 184 - .set_pcie_lanes = &rv370_set_pcie_lanes, 185 - .set_clock_gating = &radeon_legacy_set_clock_gating, 
186 - .set_surface_reg = r100_set_surface_reg, 187 - .clear_surface_reg = r100_clear_surface_reg, 188 - .bandwidth_update = &r100_bandwidth_update, 189 - .hpd_init = &r100_hpd_init, 190 - .hpd_fini = &r100_hpd_fini, 191 - .hpd_sense = &r100_hpd_sense, 192 - .hpd_set_polarity = &r100_hpd_set_polarity, 193 - .ioctl_wait_idle = NULL, 194 - }; 195 - 196 - 197 - static struct radeon_asic r300_asic_pcie = { 198 - .init = &r300_init, 199 - .fini = &r300_fini, 200 - .suspend = &r300_suspend, 201 - .resume = &r300_resume, 202 - .vga_set_state = &r100_vga_set_state, 203 - .gpu_reset = &r300_gpu_reset, 204 - .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 205 - .gart_set_page = &rv370_pcie_gart_set_page, 206 - .cp_commit = &r100_cp_commit, 207 - .ring_start = &r300_ring_start, 208 - .ring_test = &r100_ring_test, 209 - .ring_ib_execute = &r100_ring_ib_execute, 210 - .irq_set = &r100_irq_set, 211 - .irq_process = &r100_irq_process, 212 - .get_vblank_counter = &r100_get_vblank_counter, 213 - .fence_ring_emit = &r300_fence_ring_emit, 214 - .cs_parse = &r300_cs_parse, 215 - .copy_blit = &r100_copy_blit, 216 - .copy_dma = &r200_copy_dma, 217 - .copy = &r100_copy_blit, 218 - .get_engine_clock = &radeon_legacy_get_engine_clock, 219 - .set_engine_clock = &radeon_legacy_set_engine_clock, 220 - .get_memory_clock = &radeon_legacy_get_memory_clock, 221 - .set_memory_clock = NULL, 222 - .set_pcie_lanes = &rv370_set_pcie_lanes, 223 - .set_clock_gating = &radeon_legacy_set_clock_gating, 224 - .set_surface_reg = r100_set_surface_reg, 225 - .clear_surface_reg = r100_clear_surface_reg, 226 - .bandwidth_update = &r100_bandwidth_update, 227 - .hpd_init = &r100_hpd_init, 228 - .hpd_fini = &r100_hpd_fini, 229 - .hpd_sense = &r100_hpd_sense, 230 - .hpd_set_polarity = &r100_hpd_set_polarity, 231 - .ioctl_wait_idle = NULL, 232 - }; 233 - 234 189 /* 235 190 * r420,r423,rv410 236 191 */ ··· 162 269 extern void r420_fini(struct radeon_device *rdev); 163 270 extern int r420_suspend(struct radeon_device 
*rdev); 164 271 extern int r420_resume(struct radeon_device *rdev); 165 - static struct radeon_asic r420_asic = { 166 - .init = &r420_init, 167 - .fini = &r420_fini, 168 - .suspend = &r420_suspend, 169 - .resume = &r420_resume, 170 - .vga_set_state = &r100_vga_set_state, 171 - .gpu_reset = &r300_gpu_reset, 172 - .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 173 - .gart_set_page = &rv370_pcie_gart_set_page, 174 - .cp_commit = &r100_cp_commit, 175 - .ring_start = &r300_ring_start, 176 - .ring_test = &r100_ring_test, 177 - .ring_ib_execute = &r100_ring_ib_execute, 178 - .irq_set = &r100_irq_set, 179 - .irq_process = &r100_irq_process, 180 - .get_vblank_counter = &r100_get_vblank_counter, 181 - .fence_ring_emit = &r300_fence_ring_emit, 182 - .cs_parse = &r300_cs_parse, 183 - .copy_blit = &r100_copy_blit, 184 - .copy_dma = &r200_copy_dma, 185 - .copy = &r100_copy_blit, 186 - .get_engine_clock = &radeon_atom_get_engine_clock, 187 - .set_engine_clock = &radeon_atom_set_engine_clock, 188 - .get_memory_clock = &radeon_atom_get_memory_clock, 189 - .set_memory_clock = &radeon_atom_set_memory_clock, 190 - .get_pcie_lanes = &rv370_get_pcie_lanes, 191 - .set_pcie_lanes = &rv370_set_pcie_lanes, 192 - .set_clock_gating = &radeon_atom_set_clock_gating, 193 - .set_surface_reg = r100_set_surface_reg, 194 - .clear_surface_reg = r100_clear_surface_reg, 195 - .bandwidth_update = &r100_bandwidth_update, 196 - .hpd_init = &r100_hpd_init, 197 - .hpd_fini = &r100_hpd_fini, 198 - .hpd_sense = &r100_hpd_sense, 199 - .hpd_set_polarity = &r100_hpd_set_polarity, 200 - .ioctl_wait_idle = NULL, 201 - }; 202 - 203 272 204 273 /* 205 274 * rs400,rs480 ··· 174 319 int rs400_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr); 175 320 uint32_t rs400_mc_rreg(struct radeon_device *rdev, uint32_t reg); 176 321 void rs400_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v); 177 - static struct radeon_asic rs400_asic = { 178 - .init = &rs400_init, 179 - .fini = &rs400_fini, 180 - 
.suspend = &rs400_suspend, 181 - .resume = &rs400_resume, 182 - .vga_set_state = &r100_vga_set_state, 183 - .gpu_reset = &r300_gpu_reset, 184 - .gart_tlb_flush = &rs400_gart_tlb_flush, 185 - .gart_set_page = &rs400_gart_set_page, 186 - .cp_commit = &r100_cp_commit, 187 - .ring_start = &r300_ring_start, 188 - .ring_test = &r100_ring_test, 189 - .ring_ib_execute = &r100_ring_ib_execute, 190 - .irq_set = &r100_irq_set, 191 - .irq_process = &r100_irq_process, 192 - .get_vblank_counter = &r100_get_vblank_counter, 193 - .fence_ring_emit = &r300_fence_ring_emit, 194 - .cs_parse = &r300_cs_parse, 195 - .copy_blit = &r100_copy_blit, 196 - .copy_dma = &r200_copy_dma, 197 - .copy = &r100_copy_blit, 198 - .get_engine_clock = &radeon_legacy_get_engine_clock, 199 - .set_engine_clock = &radeon_legacy_set_engine_clock, 200 - .get_memory_clock = &radeon_legacy_get_memory_clock, 201 - .set_memory_clock = NULL, 202 - .get_pcie_lanes = NULL, 203 - .set_pcie_lanes = NULL, 204 - .set_clock_gating = &radeon_legacy_set_clock_gating, 205 - .set_surface_reg = r100_set_surface_reg, 206 - .clear_surface_reg = r100_clear_surface_reg, 207 - .bandwidth_update = &r100_bandwidth_update, 208 - .hpd_init = &r100_hpd_init, 209 - .hpd_fini = &r100_hpd_fini, 210 - .hpd_sense = &r100_hpd_sense, 211 - .hpd_set_polarity = &r100_hpd_set_polarity, 212 - .ioctl_wait_idle = NULL, 213 - }; 214 - 215 322 216 323 /* 217 324 * rs600. 
··· 196 379 void rs600_hpd_set_polarity(struct radeon_device *rdev, 197 380 enum radeon_hpd_id hpd); 198 381 199 - static struct radeon_asic rs600_asic = { 200 - .init = &rs600_init, 201 - .fini = &rs600_fini, 202 - .suspend = &rs600_suspend, 203 - .resume = &rs600_resume, 204 - .vga_set_state = &r100_vga_set_state, 205 - .gpu_reset = &r300_gpu_reset, 206 - .gart_tlb_flush = &rs600_gart_tlb_flush, 207 - .gart_set_page = &rs600_gart_set_page, 208 - .cp_commit = &r100_cp_commit, 209 - .ring_start = &r300_ring_start, 210 - .ring_test = &r100_ring_test, 211 - .ring_ib_execute = &r100_ring_ib_execute, 212 - .irq_set = &rs600_irq_set, 213 - .irq_process = &rs600_irq_process, 214 - .get_vblank_counter = &rs600_get_vblank_counter, 215 - .fence_ring_emit = &r300_fence_ring_emit, 216 - .cs_parse = &r300_cs_parse, 217 - .copy_blit = &r100_copy_blit, 218 - .copy_dma = &r200_copy_dma, 219 - .copy = &r100_copy_blit, 220 - .get_engine_clock = &radeon_atom_get_engine_clock, 221 - .set_engine_clock = &radeon_atom_set_engine_clock, 222 - .get_memory_clock = &radeon_atom_get_memory_clock, 223 - .set_memory_clock = &radeon_atom_set_memory_clock, 224 - .get_pcie_lanes = NULL, 225 - .set_pcie_lanes = NULL, 226 - .set_clock_gating = &radeon_atom_set_clock_gating, 227 - .set_surface_reg = r100_set_surface_reg, 228 - .clear_surface_reg = r100_clear_surface_reg, 229 - .bandwidth_update = &rs600_bandwidth_update, 230 - .hpd_init = &rs600_hpd_init, 231 - .hpd_fini = &rs600_hpd_fini, 232 - .hpd_sense = &rs600_hpd_sense, 233 - .hpd_set_polarity = &rs600_hpd_set_polarity, 234 - .ioctl_wait_idle = NULL, 235 - }; 236 - 237 - 238 382 /* 239 383 * rs690,rs740 240 384 */ ··· 206 428 uint32_t rs690_mc_rreg(struct radeon_device *rdev, uint32_t reg); 207 429 void rs690_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v); 208 430 void rs690_bandwidth_update(struct radeon_device *rdev); 209 - static struct radeon_asic rs690_asic = { 210 - .init = &rs690_init, 211 - .fini = &rs690_fini, 212 - 
.suspend = &rs690_suspend, 213 - .resume = &rs690_resume, 214 - .vga_set_state = &r100_vga_set_state, 215 - .gpu_reset = &r300_gpu_reset, 216 - .gart_tlb_flush = &rs400_gart_tlb_flush, 217 - .gart_set_page = &rs400_gart_set_page, 218 - .cp_commit = &r100_cp_commit, 219 - .ring_start = &r300_ring_start, 220 - .ring_test = &r100_ring_test, 221 - .ring_ib_execute = &r100_ring_ib_execute, 222 - .irq_set = &rs600_irq_set, 223 - .irq_process = &rs600_irq_process, 224 - .get_vblank_counter = &rs600_get_vblank_counter, 225 - .fence_ring_emit = &r300_fence_ring_emit, 226 - .cs_parse = &r300_cs_parse, 227 - .copy_blit = &r100_copy_blit, 228 - .copy_dma = &r200_copy_dma, 229 - .copy = &r200_copy_dma, 230 - .get_engine_clock = &radeon_atom_get_engine_clock, 231 - .set_engine_clock = &radeon_atom_set_engine_clock, 232 - .get_memory_clock = &radeon_atom_get_memory_clock, 233 - .set_memory_clock = &radeon_atom_set_memory_clock, 234 - .get_pcie_lanes = NULL, 235 - .set_pcie_lanes = NULL, 236 - .set_clock_gating = &radeon_atom_set_clock_gating, 237 - .set_surface_reg = r100_set_surface_reg, 238 - .clear_surface_reg = r100_clear_surface_reg, 239 - .bandwidth_update = &rs690_bandwidth_update, 240 - .hpd_init = &rs600_hpd_init, 241 - .hpd_fini = &rs600_hpd_fini, 242 - .hpd_sense = &rs600_hpd_sense, 243 - .hpd_set_polarity = &rs600_hpd_set_polarity, 244 - .ioctl_wait_idle = NULL, 245 - }; 246 - 247 431 248 432 /* 249 433 * rv515 ··· 221 481 void rv515_bandwidth_update(struct radeon_device *rdev); 222 482 int rv515_resume(struct radeon_device *rdev); 223 483 int rv515_suspend(struct radeon_device *rdev); 224 - static struct radeon_asic rv515_asic = { 225 - .init = &rv515_init, 226 - .fini = &rv515_fini, 227 - .suspend = &rv515_suspend, 228 - .resume = &rv515_resume, 229 - .vga_set_state = &r100_vga_set_state, 230 - .gpu_reset = &rv515_gpu_reset, 231 - .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 232 - .gart_set_page = &rv370_pcie_gart_set_page, 233 - .cp_commit = &r100_cp_commit, 234 - 
.ring_start = &rv515_ring_start, 235 - .ring_test = &r100_ring_test, 236 - .ring_ib_execute = &r100_ring_ib_execute, 237 - .irq_set = &rs600_irq_set, 238 - .irq_process = &rs600_irq_process, 239 - .get_vblank_counter = &rs600_get_vblank_counter, 240 - .fence_ring_emit = &r300_fence_ring_emit, 241 - .cs_parse = &r300_cs_parse, 242 - .copy_blit = &r100_copy_blit, 243 - .copy_dma = &r200_copy_dma, 244 - .copy = &r100_copy_blit, 245 - .get_engine_clock = &radeon_atom_get_engine_clock, 246 - .set_engine_clock = &radeon_atom_set_engine_clock, 247 - .get_memory_clock = &radeon_atom_get_memory_clock, 248 - .set_memory_clock = &radeon_atom_set_memory_clock, 249 - .get_pcie_lanes = &rv370_get_pcie_lanes, 250 - .set_pcie_lanes = &rv370_set_pcie_lanes, 251 - .set_clock_gating = &radeon_atom_set_clock_gating, 252 - .set_surface_reg = r100_set_surface_reg, 253 - .clear_surface_reg = r100_clear_surface_reg, 254 - .bandwidth_update = &rv515_bandwidth_update, 255 - .hpd_init = &rs600_hpd_init, 256 - .hpd_fini = &rs600_hpd_fini, 257 - .hpd_sense = &rs600_hpd_sense, 258 - .hpd_set_polarity = &rs600_hpd_set_polarity, 259 - .ioctl_wait_idle = NULL, 260 - }; 261 - 262 484 263 485 /* 264 486 * r520,rv530,rv560,rv570,r580 265 487 */ 266 488 int r520_init(struct radeon_device *rdev); 267 489 int r520_resume(struct radeon_device *rdev); 268 - static struct radeon_asic r520_asic = { 269 - .init = &r520_init, 270 - .fini = &rv515_fini, 271 - .suspend = &rv515_suspend, 272 - .resume = &r520_resume, 273 - .vga_set_state = &r100_vga_set_state, 274 - .gpu_reset = &rv515_gpu_reset, 275 - .gart_tlb_flush = &rv370_pcie_gart_tlb_flush, 276 - .gart_set_page = &rv370_pcie_gart_set_page, 277 - .cp_commit = &r100_cp_commit, 278 - .ring_start = &rv515_ring_start, 279 - .ring_test = &r100_ring_test, 280 - .ring_ib_execute = &r100_ring_ib_execute, 281 - .irq_set = &rs600_irq_set, 282 - .irq_process = &rs600_irq_process, 283 - .get_vblank_counter = &rs600_get_vblank_counter, 284 - .fence_ring_emit = 
&r300_fence_ring_emit, 285 - .cs_parse = &r300_cs_parse, 286 - .copy_blit = &r100_copy_blit, 287 - .copy_dma = &r200_copy_dma, 288 - .copy = &r100_copy_blit, 289 - .get_engine_clock = &radeon_atom_get_engine_clock, 290 - .set_engine_clock = &radeon_atom_set_engine_clock, 291 - .get_memory_clock = &radeon_atom_get_memory_clock, 292 - .set_memory_clock = &radeon_atom_set_memory_clock, 293 - .get_pcie_lanes = &rv370_get_pcie_lanes, 294 - .set_pcie_lanes = &rv370_set_pcie_lanes, 295 - .set_clock_gating = &radeon_atom_set_clock_gating, 296 - .set_surface_reg = r100_set_surface_reg, 297 - .clear_surface_reg = r100_clear_surface_reg, 298 - .bandwidth_update = &rv515_bandwidth_update, 299 - .hpd_init = &rs600_hpd_init, 300 - .hpd_fini = &rs600_hpd_fini, 301 - .hpd_sense = &rs600_hpd_sense, 302 - .hpd_set_polarity = &rs600_hpd_set_polarity, 303 - .ioctl_wait_idle = NULL, 304 - }; 305 490 306 491 /* 307 492 * r600,rv610,rv630,rv620,rv635,rv670,rs780,rs880 ··· 256 591 int r600_set_surface_reg(struct radeon_device *rdev, int reg, 257 592 uint32_t tiling_flags, uint32_t pitch, 258 593 uint32_t offset, uint32_t obj_size); 259 - int r600_clear_surface_reg(struct radeon_device *rdev, int reg); 594 + void r600_clear_surface_reg(struct radeon_device *rdev, int reg); 260 595 void r600_ring_ib_execute(struct radeon_device *rdev, struct radeon_ib *ib); 261 596 int r600_ring_test(struct radeon_device *rdev); 262 597 int r600_copy_blit(struct radeon_device *rdev, ··· 269 604 enum radeon_hpd_id hpd); 270 605 extern void r600_ioctl_wait_idle(struct radeon_device *rdev, struct radeon_bo *bo); 271 606 272 - static struct radeon_asic r600_asic = { 273 - .init = &r600_init, 274 - .fini = &r600_fini, 275 - .suspend = &r600_suspend, 276 - .resume = &r600_resume, 277 - .cp_commit = &r600_cp_commit, 278 - .vga_set_state = &r600_vga_set_state, 279 - .gpu_reset = &r600_gpu_reset, 280 - .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 281 - .gart_set_page = &rs600_gart_set_page, 282 - .ring_test = 
&r600_ring_test, 283 - .ring_ib_execute = &r600_ring_ib_execute, 284 - .irq_set = &r600_irq_set, 285 - .irq_process = &r600_irq_process, 286 - .get_vblank_counter = &rs600_get_vblank_counter, 287 - .fence_ring_emit = &r600_fence_ring_emit, 288 - .cs_parse = &r600_cs_parse, 289 - .copy_blit = &r600_copy_blit, 290 - .copy_dma = &r600_copy_blit, 291 - .copy = &r600_copy_blit, 292 - .get_engine_clock = &radeon_atom_get_engine_clock, 293 - .set_engine_clock = &radeon_atom_set_engine_clock, 294 - .get_memory_clock = &radeon_atom_get_memory_clock, 295 - .set_memory_clock = &radeon_atom_set_memory_clock, 296 - .get_pcie_lanes = &rv370_get_pcie_lanes, 297 - .set_pcie_lanes = NULL, 298 - .set_clock_gating = NULL, 299 - .set_surface_reg = r600_set_surface_reg, 300 - .clear_surface_reg = r600_clear_surface_reg, 301 - .bandwidth_update = &rv515_bandwidth_update, 302 - .hpd_init = &r600_hpd_init, 303 - .hpd_fini = &r600_hpd_fini, 304 - .hpd_sense = &r600_hpd_sense, 305 - .hpd_set_polarity = &r600_hpd_set_polarity, 306 - .ioctl_wait_idle = r600_ioctl_wait_idle, 307 - }; 308 - 309 607 /* 310 608 * rv770,rv730,rv710,rv740 311 609 */ ··· 277 649 int rv770_suspend(struct radeon_device *rdev); 278 650 int rv770_resume(struct radeon_device *rdev); 279 651 int rv770_gpu_reset(struct radeon_device *rdev); 280 - 281 - static struct radeon_asic rv770_asic = { 282 - .init = &rv770_init, 283 - .fini = &rv770_fini, 284 - .suspend = &rv770_suspend, 285 - .resume = &rv770_resume, 286 - .cp_commit = &r600_cp_commit, 287 - .gpu_reset = &rv770_gpu_reset, 288 - .vga_set_state = &r600_vga_set_state, 289 - .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 290 - .gart_set_page = &rs600_gart_set_page, 291 - .ring_test = &r600_ring_test, 292 - .ring_ib_execute = &r600_ring_ib_execute, 293 - .irq_set = &r600_irq_set, 294 - .irq_process = &r600_irq_process, 295 - .get_vblank_counter = &rs600_get_vblank_counter, 296 - .fence_ring_emit = &r600_fence_ring_emit, 297 - .cs_parse = &r600_cs_parse, 298 - .copy_blit = 
&r600_copy_blit, 299 - .copy_dma = &r600_copy_blit, 300 - .copy = &r600_copy_blit, 301 - .get_engine_clock = &radeon_atom_get_engine_clock, 302 - .set_engine_clock = &radeon_atom_set_engine_clock, 303 - .get_memory_clock = &radeon_atom_get_memory_clock, 304 - .set_memory_clock = &radeon_atom_set_memory_clock, 305 - .get_pcie_lanes = &rv370_get_pcie_lanes, 306 - .set_pcie_lanes = NULL, 307 - .set_clock_gating = &radeon_atom_set_clock_gating, 308 - .set_surface_reg = r600_set_surface_reg, 309 - .clear_surface_reg = r600_clear_surface_reg, 310 - .bandwidth_update = &rv515_bandwidth_update, 311 - .hpd_init = &r600_hpd_init, 312 - .hpd_fini = &r600_hpd_fini, 313 - .hpd_sense = &r600_hpd_sense, 314 - .hpd_set_polarity = &r600_hpd_set_polarity, 315 - .ioctl_wait_idle = r600_ioctl_wait_idle, 316 - }; 317 652 318 653 /* 319 654 * evergreen ··· 292 701 bool evergreen_hpd_sense(struct radeon_device *rdev, enum radeon_hpd_id hpd); 293 702 void evergreen_hpd_set_polarity(struct radeon_device *rdev, 294 703 enum radeon_hpd_id hpd); 295 - 296 - static struct radeon_asic evergreen_asic = { 297 - .init = &evergreen_init, 298 - .fini = &evergreen_fini, 299 - .suspend = &evergreen_suspend, 300 - .resume = &evergreen_resume, 301 - .cp_commit = NULL, 302 - .gpu_reset = &evergreen_gpu_reset, 303 - .vga_set_state = &r600_vga_set_state, 304 - .gart_tlb_flush = &r600_pcie_gart_tlb_flush, 305 - .gart_set_page = &rs600_gart_set_page, 306 - .ring_test = NULL, 307 - .ring_ib_execute = NULL, 308 - .irq_set = NULL, 309 - .irq_process = NULL, 310 - .get_vblank_counter = NULL, 311 - .fence_ring_emit = NULL, 312 - .cs_parse = NULL, 313 - .copy_blit = NULL, 314 - .copy_dma = NULL, 315 - .copy = NULL, 316 - .get_engine_clock = &radeon_atom_get_engine_clock, 317 - .set_engine_clock = &radeon_atom_set_engine_clock, 318 - .get_memory_clock = &radeon_atom_get_memory_clock, 319 - .set_memory_clock = &radeon_atom_set_memory_clock, 320 - .set_pcie_lanes = NULL, 321 - .set_clock_gating = NULL, 322 - 
.set_surface_reg = r600_set_surface_reg, 323 - .clear_surface_reg = r600_clear_surface_reg, 324 - .bandwidth_update = &evergreen_bandwidth_update, 325 - .hpd_init = &evergreen_hpd_init, 326 - .hpd_fini = &evergreen_hpd_fini, 327 - .hpd_sense = &evergreen_hpd_sense, 328 - .hpd_set_polarity = &evergreen_hpd_set_polarity, 329 - }; 330 - 331 704 #endif
+240 -201
drivers/gpu/drm/radeon/radeon_atombios.c
··· 75 75 memset(&i2c, 0, sizeof(struct radeon_i2c_bus_rec)); 76 76 i2c.valid = false; 77 77 78 - atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset); 78 + if (atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) { 79 + i2c_info = (struct _ATOM_GPIO_I2C_INFO *)(ctx->bios + data_offset); 79 80 80 - i2c_info = (struct _ATOM_GPIO_I2C_INFO *)(ctx->bios + data_offset); 81 + for (i = 0; i < ATOM_MAX_SUPPORTED_DEVICE; i++) { 82 + gpio = &i2c_info->asGPIO_Info[i]; 81 83 84 + if (gpio->sucI2cId.ucAccess == id) { 85 + i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4; 86 + i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4; 87 + i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4; 88 + i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4; 89 + i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4; 90 + i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4; 91 + i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4; 92 + i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4; 93 + i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift); 94 + i2c.mask_data_mask = (1 << gpio->ucDataMaskShift); 95 + i2c.en_clk_mask = (1 << gpio->ucClkEnShift); 96 + i2c.en_data_mask = (1 << gpio->ucDataEnShift); 97 + i2c.y_clk_mask = (1 << gpio->ucClkY_Shift); 98 + i2c.y_data_mask = (1 << gpio->ucDataY_Shift); 99 + i2c.a_clk_mask = (1 << gpio->ucClkA_Shift); 100 + i2c.a_data_mask = (1 << gpio->ucDataA_Shift); 82 101 83 - for (i = 0; i < ATOM_MAX_SUPPORTED_DEVICE; i++) { 84 - gpio = &i2c_info->asGPIO_Info[i]; 102 + if (gpio->sucI2cId.sbfAccess.bfHW_Capable) 103 + i2c.hw_capable = true; 104 + else 105 + i2c.hw_capable = false; 85 106 86 - if (gpio->sucI2cId.ucAccess == id) { 87 - i2c.mask_clk_reg = le16_to_cpu(gpio->usClkMaskRegisterIndex) * 4; 88 - i2c.mask_data_reg = le16_to_cpu(gpio->usDataMaskRegisterIndex) * 4; 89 - i2c.en_clk_reg = le16_to_cpu(gpio->usClkEnRegisterIndex) * 4; 90 - 
i2c.en_data_reg = le16_to_cpu(gpio->usDataEnRegisterIndex) * 4; 91 - i2c.y_clk_reg = le16_to_cpu(gpio->usClkY_RegisterIndex) * 4; 92 - i2c.y_data_reg = le16_to_cpu(gpio->usDataY_RegisterIndex) * 4; 93 - i2c.a_clk_reg = le16_to_cpu(gpio->usClkA_RegisterIndex) * 4; 94 - i2c.a_data_reg = le16_to_cpu(gpio->usDataA_RegisterIndex) * 4; 95 - i2c.mask_clk_mask = (1 << gpio->ucClkMaskShift); 96 - i2c.mask_data_mask = (1 << gpio->ucDataMaskShift); 97 - i2c.en_clk_mask = (1 << gpio->ucClkEnShift); 98 - i2c.en_data_mask = (1 << gpio->ucDataEnShift); 99 - i2c.y_clk_mask = (1 << gpio->ucClkY_Shift); 100 - i2c.y_data_mask = (1 << gpio->ucDataY_Shift); 101 - i2c.a_clk_mask = (1 << gpio->ucClkA_Shift); 102 - i2c.a_data_mask = (1 << gpio->ucDataA_Shift); 107 + if (gpio->sucI2cId.ucAccess == 0xa0) 108 + i2c.mm_i2c = true; 109 + else 110 + i2c.mm_i2c = false; 103 111 104 - if (gpio->sucI2cId.sbfAccess.bfHW_Capable) 105 - i2c.hw_capable = true; 106 - else 107 - i2c.hw_capable = false; 112 + i2c.i2c_id = gpio->sucI2cId.ucAccess; 108 113 109 - if (gpio->sucI2cId.ucAccess == 0xa0) 110 - i2c.mm_i2c = true; 111 - else 112 - i2c.mm_i2c = false; 113 - 114 - i2c.i2c_id = gpio->sucI2cId.ucAccess; 115 - 116 - i2c.valid = true; 117 - break; 114 + i2c.valid = true; 115 + break; 116 + } 118 117 } 119 118 } 120 119 ··· 134 135 memset(&gpio, 0, sizeof(struct radeon_gpio_rec)); 135 136 gpio.valid = false; 136 137 137 - atom_parse_data_header(ctx, index, &size, NULL, NULL, &data_offset); 138 + if (atom_parse_data_header(ctx, index, &size, NULL, NULL, &data_offset)) { 139 + gpio_info = (struct _ATOM_GPIO_PIN_LUT *)(ctx->bios + data_offset); 138 140 139 - gpio_info = (struct _ATOM_GPIO_PIN_LUT *)(ctx->bios + data_offset); 141 + num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) / 142 + sizeof(ATOM_GPIO_PIN_ASSIGNMENT); 140 143 141 - num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) / sizeof(ATOM_GPIO_PIN_ASSIGNMENT); 142 - 143 - for (i = 0; i < num_indices; i++) { 144 - pin = 
&gpio_info->asGPIO_Pin[i]; 145 - if (id == pin->ucGPIO_ID) { 146 - gpio.id = pin->ucGPIO_ID; 147 - gpio.reg = pin->usGpioPin_AIndex * 4; 148 - gpio.mask = (1 << pin->ucGpioPinBitShift); 149 - gpio.valid = true; 150 - break; 144 + for (i = 0; i < num_indices; i++) { 145 + pin = &gpio_info->asGPIO_Pin[i]; 146 + if (id == pin->ucGPIO_ID) { 147 + gpio.id = pin->ucGPIO_ID; 148 + gpio.reg = pin->usGpioPin_AIndex * 4; 149 + gpio.mask = (1 << pin->ucGpioPinBitShift); 150 + gpio.valid = true; 151 + break; 152 + } 151 153 } 152 154 } 153 155 ··· 264 264 if ((supported_device == ATOM_DEVICE_CRT1_SUPPORT) || 265 265 (supported_device == ATOM_DEVICE_DFP2_SUPPORT)) 266 266 return false; 267 + if (supported_device == ATOM_DEVICE_CRT2_SUPPORT) 268 + *line_mux = 0x90; 267 269 } 268 270 269 271 /* ASUS HD 3600 XT board lists the DVI port as HDMI */ ··· 397 395 struct radeon_gpio_rec gpio; 398 396 struct radeon_hpd hpd; 399 397 400 - atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset); 401 - 402 - if (data_offset == 0) 398 + if (!atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset)) 403 399 return false; 404 400 405 401 if (crev < 2) ··· 449 449 GetIndexIntoMasterTable(DATA, 450 450 IntegratedSystemInfo); 451 451 452 - atom_parse_data_header(ctx, index, &size, &frev, 453 - &crev, &igp_offset); 452 + if (atom_parse_data_header(ctx, index, &size, &frev, 453 + &crev, &igp_offset)) { 454 454 455 - if (crev >= 2) { 456 - igp_obj = 457 - (ATOM_INTEGRATED_SYSTEM_INFO_V2 458 - *) (ctx->bios + igp_offset); 455 + if (crev >= 2) { 456 + igp_obj = 457 + (ATOM_INTEGRATED_SYSTEM_INFO_V2 458 + *) (ctx->bios + igp_offset); 459 459 460 - if (igp_obj) { 461 - uint32_t slot_config, ct; 460 + if (igp_obj) { 461 + uint32_t slot_config, ct; 462 462 463 - if (con_obj_num == 1) 464 - slot_config = 465 - igp_obj-> 466 - ulDDISlot1Config; 467 - else 468 - slot_config = 469 - igp_obj-> 470 - ulDDISlot2Config; 463 + if (con_obj_num == 1) 464 + slot_config = 465 + igp_obj-> 466 
+ ulDDISlot1Config; 467 + else 468 + slot_config = 469 + igp_obj-> 470 + ulDDISlot2Config; 471 471 472 - ct = (slot_config >> 16) & 0xff; 473 - connector_type = 474 - object_connector_convert 475 - [ct]; 476 - connector_object_id = ct; 477 - igp_lane_info = 478 - slot_config & 0xffff; 472 + ct = (slot_config >> 16) & 0xff; 473 + connector_type = 474 + object_connector_convert 475 + [ct]; 476 + connector_object_id = ct; 477 + igp_lane_info = 478 + slot_config & 0xffff; 479 + } else 480 + continue; 479 481 } else 480 482 continue; 481 - } else 482 - continue; 483 + } else { 484 + igp_lane_info = 0; 485 + connector_type = 486 + object_connector_convert[con_obj_id]; 487 + connector_object_id = con_obj_id; 488 + } 483 489 } else { 484 490 igp_lane_info = 0; 485 491 connector_type = ··· 633 627 uint8_t frev, crev; 634 628 ATOM_XTMDS_INFO *xtmds; 635 629 636 - atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset); 637 - xtmds = (ATOM_XTMDS_INFO *)(ctx->bios + data_offset); 630 + if (atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset)) { 631 + xtmds = (ATOM_XTMDS_INFO *)(ctx->bios + data_offset); 638 632 639 - if (xtmds->ucSupportedLink & ATOM_XTMDS_SUPPORTED_DUALLINK) { 640 - if (connector_type == DRM_MODE_CONNECTOR_DVII) 641 - return CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I; 642 - else 643 - return CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D; 644 - } else { 645 - if (connector_type == DRM_MODE_CONNECTOR_DVII) 646 - return CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I; 647 - else 648 - return CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D; 649 - } 633 + if (xtmds->ucSupportedLink & ATOM_XTMDS_SUPPORTED_DUALLINK) { 634 + if (connector_type == DRM_MODE_CONNECTOR_DVII) 635 + return CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I; 636 + else 637 + return CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D; 638 + } else { 639 + if (connector_type == DRM_MODE_CONNECTOR_DVII) 640 + return CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I; 641 + else 642 + return CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D; 643 + } 644 + 
} else 645 + return supported_devices_connector_object_id_convert 646 + [connector_type]; 650 647 } else { 651 648 return supported_devices_connector_object_id_convert 652 649 [connector_type]; ··· 681 672 int i, j, max_device; 682 673 struct bios_connector bios_connectors[ATOM_MAX_SUPPORTED_DEVICE]; 683 674 684 - atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset); 675 + if (!atom_parse_data_header(ctx, index, &size, &frev, &crev, &data_offset)) 676 + return false; 685 677 686 678 supported_devices = 687 679 (union atom_supported_devices *)(ctx->bios + data_offset); ··· 875 865 struct radeon_pll *mpll = &rdev->clock.mpll; 876 866 uint16_t data_offset; 877 867 878 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, 879 - &crev, &data_offset); 880 - 881 - firmware_info = 882 - (union firmware_info *)(mode_info->atom_context->bios + 883 - data_offset); 884 - 885 - if (firmware_info) { 868 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 869 + &frev, &crev, &data_offset)) { 870 + firmware_info = 871 + (union firmware_info *)(mode_info->atom_context->bios + 872 + data_offset); 886 873 /* pixel clocks */ 887 874 p1pll->reference_freq = 888 875 le16_to_cpu(firmware_info->info.usReferenceClock); ··· 893 886 le32_to_cpu(firmware_info->info_12.ulMinPixelClockPLL_Output); 894 887 p1pll->pll_out_max = 895 888 le32_to_cpu(firmware_info->info.ulMaxPixelClockPLL_Output); 889 + 890 + if (crev >= 4) { 891 + p1pll->lcd_pll_out_min = 892 + le16_to_cpu(firmware_info->info_14.usLcdMinPixelClockPLL_Output) * 100; 893 + if (p1pll->lcd_pll_out_min == 0) 894 + p1pll->lcd_pll_out_min = p1pll->pll_out_min; 895 + p1pll->lcd_pll_out_max = 896 + le16_to_cpu(firmware_info->info_14.usLcdMaxPixelClockPLL_Output) * 100; 897 + if (p1pll->lcd_pll_out_max == 0) 898 + p1pll->lcd_pll_out_max = p1pll->pll_out_max; 899 + } else { 900 + p1pll->lcd_pll_out_min = p1pll->pll_out_min; 901 + p1pll->lcd_pll_out_max = p1pll->pll_out_max; 902 + } 896 903 897 
904 if (p1pll->pll_out_min == 0) { 898 905 if (ASIC_IS_AVIVO(rdev)) ··· 1013 992 u8 frev, crev; 1014 993 u16 data_offset; 1015 994 1016 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, 1017 - &crev, &data_offset); 1018 - 1019 - igp_info = (union igp_info *)(mode_info->atom_context->bios + 995 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 996 + &frev, &crev, &data_offset)) { 997 + igp_info = (union igp_info *)(mode_info->atom_context->bios + 1020 998 data_offset); 1021 - 1022 - if (igp_info) { 1023 999 switch (crev) { 1024 1000 case 1: 1025 1001 if (igp_info->info.ucMemoryType & 0xf0) ··· 1047 1029 uint16_t maxfreq; 1048 1030 int i; 1049 1031 1050 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, 1051 - &crev, &data_offset); 1032 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 1033 + &frev, &crev, &data_offset)) { 1034 + tmds_info = 1035 + (struct _ATOM_TMDS_INFO *)(mode_info->atom_context->bios + 1036 + data_offset); 1052 1037 1053 - tmds_info = 1054 - (struct _ATOM_TMDS_INFO *)(mode_info->atom_context->bios + 1055 - data_offset); 1056 - 1057 - if (tmds_info) { 1058 1038 maxfreq = le16_to_cpu(tmds_info->usMaxFrequency); 1059 1039 for (i = 0; i < 4; i++) { 1060 1040 tmds->tmds_pll[i].freq = ··· 1101 1085 if (id > ATOM_MAX_SS_ENTRY) 1102 1086 return NULL; 1103 1087 1104 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, 1105 - &crev, &data_offset); 1088 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 1089 + &frev, &crev, &data_offset)) { 1090 + ss_info = 1091 + (struct _ATOM_SPREAD_SPECTRUM_INFO *)(mode_info->atom_context->bios + data_offset); 1106 1092 1107 - ss_info = 1108 - (struct _ATOM_SPREAD_SPECTRUM_INFO *)(mode_info->atom_context->bios + data_offset); 1109 - 1110 - if (ss_info) { 1111 1093 ss = 1112 1094 kzalloc(sizeof(struct radeon_atom_ss), GFP_KERNEL); 1113 1095 ··· 1128 1114 return ss; 1129 1115 } 1130 1116 1131 - static void 
radeon_atom_apply_lvds_quirks(struct drm_device *dev, 1132 - struct radeon_encoder_atom_dig *lvds) 1133 - { 1134 - 1135 - /* Toshiba A300-1BU laptop panel doesn't like new pll divider algo */ 1136 - if ((dev->pdev->device == 0x95c4) && 1137 - (dev->pdev->subsystem_vendor == 0x1179) && 1138 - (dev->pdev->subsystem_device == 0xff50)) { 1139 - if ((lvds->native_mode.hdisplay == 1280) && 1140 - (lvds->native_mode.vdisplay == 800)) 1141 - lvds->pll_algo = PLL_ALGO_LEGACY; 1142 - } 1143 - 1144 - /* Dell Studio 15 laptop panel doesn't like new pll divider algo */ 1145 - if ((dev->pdev->device == 0x95c4) && 1146 - (dev->pdev->subsystem_vendor == 0x1028) && 1147 - (dev->pdev->subsystem_device == 0x029f)) { 1148 - if ((lvds->native_mode.hdisplay == 1280) && 1149 - (lvds->native_mode.vdisplay == 800)) 1150 - lvds->pll_algo = PLL_ALGO_LEGACY; 1151 - } 1152 - 1153 - } 1154 - 1155 1117 union lvds_info { 1156 1118 struct _ATOM_LVDS_INFO info; 1157 1119 struct _ATOM_LVDS_INFO_V12 info_12; ··· 1146 1156 uint8_t frev, crev; 1147 1157 struct radeon_encoder_atom_dig *lvds = NULL; 1148 1158 1149 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, 1150 - &crev, &data_offset); 1151 - 1152 - lvds_info = 1153 - (union lvds_info *)(mode_info->atom_context->bios + data_offset); 1154 - 1155 - if (lvds_info) { 1159 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 1160 + &frev, &crev, &data_offset)) { 1161 + lvds_info = 1162 + (union lvds_info *)(mode_info->atom_context->bios + data_offset); 1156 1163 lvds = 1157 1164 kzalloc(sizeof(struct radeon_encoder_atom_dig), GFP_KERNEL); 1158 1165 ··· 1207 1220 lvds->pll_algo = PLL_ALGO_LEGACY; 1208 1221 } 1209 1222 1210 - /* LVDS quirks */ 1211 - radeon_atom_apply_lvds_quirks(dev, lvds); 1212 - 1213 1223 encoder->native_mode = lvds->native_mode; 1214 1224 } 1215 1225 return lvds; ··· 1225 1241 uint8_t bg, dac; 1226 1242 struct radeon_encoder_primary_dac *p_dac = NULL; 1227 1243 1228 - 
atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, &crev, &data_offset); 1244 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 1245 + &frev, &crev, &data_offset)) { 1246 + dac_info = (struct _COMPASSIONATE_DATA *) 1247 + (mode_info->atom_context->bios + data_offset); 1229 1248 1230 - dac_info = (struct _COMPASSIONATE_DATA *)(mode_info->atom_context->bios + data_offset); 1231 - 1232 - if (dac_info) { 1233 1249 p_dac = kzalloc(sizeof(struct radeon_encoder_primary_dac), GFP_KERNEL); 1234 1250 1235 1251 if (!p_dac) ··· 1254 1270 u8 frev, crev; 1255 1271 u16 data_offset, misc; 1256 1272 1257 - atom_parse_data_header(mode_info->atom_context, data_index, NULL, &frev, &crev, &data_offset); 1273 + if (!atom_parse_data_header(mode_info->atom_context, data_index, NULL, 1274 + &frev, &crev, &data_offset)) 1275 + return false; 1258 1276 1259 1277 switch (crev) { 1260 1278 case 1: ··· 1348 1362 struct _ATOM_ANALOG_TV_INFO *tv_info; 1349 1363 enum radeon_tv_std tv_std = TV_STD_NTSC; 1350 1364 1351 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, &crev, &data_offset); 1365 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 1366 + &frev, &crev, &data_offset)) { 1352 1367 1353 - tv_info = (struct _ATOM_ANALOG_TV_INFO *)(mode_info->atom_context->bios + data_offset); 1368 + tv_info = (struct _ATOM_ANALOG_TV_INFO *) 1369 + (mode_info->atom_context->bios + data_offset); 1354 1370 1355 - switch (tv_info->ucTV_BootUpDefaultStandard) { 1356 - case ATOM_TV_NTSC: 1357 - tv_std = TV_STD_NTSC; 1358 - DRM_INFO("Default TV standard: NTSC\n"); 1359 - break; 1360 - case ATOM_TV_NTSCJ: 1361 - tv_std = TV_STD_NTSC_J; 1362 - DRM_INFO("Default TV standard: NTSC-J\n"); 1363 - break; 1364 - case ATOM_TV_PAL: 1365 - tv_std = TV_STD_PAL; 1366 - DRM_INFO("Default TV standard: PAL\n"); 1367 - break; 1368 - case ATOM_TV_PALM: 1369 - tv_std = TV_STD_PAL_M; 1370 - DRM_INFO("Default TV standard: PAL-M\n"); 1371 - break; 1372 - case 
ATOM_TV_PALN: 1373 - tv_std = TV_STD_PAL_N; 1374 - DRM_INFO("Default TV standard: PAL-N\n"); 1375 - break; 1376 - case ATOM_TV_PALCN: 1377 - tv_std = TV_STD_PAL_CN; 1378 - DRM_INFO("Default TV standard: PAL-CN\n"); 1379 - break; 1380 - case ATOM_TV_PAL60: 1381 - tv_std = TV_STD_PAL_60; 1382 - DRM_INFO("Default TV standard: PAL-60\n"); 1383 - break; 1384 - case ATOM_TV_SECAM: 1385 - tv_std = TV_STD_SECAM; 1386 - DRM_INFO("Default TV standard: SECAM\n"); 1387 - break; 1388 - default: 1389 - tv_std = TV_STD_NTSC; 1390 - DRM_INFO("Unknown TV standard; defaulting to NTSC\n"); 1391 - break; 1371 + switch (tv_info->ucTV_BootUpDefaultStandard) { 1372 + case ATOM_TV_NTSC: 1373 + tv_std = TV_STD_NTSC; 1374 + DRM_INFO("Default TV standard: NTSC\n"); 1375 + break; 1376 + case ATOM_TV_NTSCJ: 1377 + tv_std = TV_STD_NTSC_J; 1378 + DRM_INFO("Default TV standard: NTSC-J\n"); 1379 + break; 1380 + case ATOM_TV_PAL: 1381 + tv_std = TV_STD_PAL; 1382 + DRM_INFO("Default TV standard: PAL\n"); 1383 + break; 1384 + case ATOM_TV_PALM: 1385 + tv_std = TV_STD_PAL_M; 1386 + DRM_INFO("Default TV standard: PAL-M\n"); 1387 + break; 1388 + case ATOM_TV_PALN: 1389 + tv_std = TV_STD_PAL_N; 1390 + DRM_INFO("Default TV standard: PAL-N\n"); 1391 + break; 1392 + case ATOM_TV_PALCN: 1393 + tv_std = TV_STD_PAL_CN; 1394 + DRM_INFO("Default TV standard: PAL-CN\n"); 1395 + break; 1396 + case ATOM_TV_PAL60: 1397 + tv_std = TV_STD_PAL_60; 1398 + DRM_INFO("Default TV standard: PAL-60\n"); 1399 + break; 1400 + case ATOM_TV_SECAM: 1401 + tv_std = TV_STD_SECAM; 1402 + DRM_INFO("Default TV standard: SECAM\n"); 1403 + break; 1404 + default: 1405 + tv_std = TV_STD_NTSC; 1406 + DRM_INFO("Unknown TV standard; defaulting to NTSC\n"); 1407 + break; 1408 + } 1392 1409 } 1393 1410 return tv_std; 1394 1411 } ··· 1409 1420 uint8_t bg, dac; 1410 1421 struct radeon_encoder_tv_dac *tv_dac = NULL; 1411 1422 1412 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, &crev, &data_offset); 1423 + if 
(atom_parse_data_header(mode_info->atom_context, index, NULL, 1424 + &frev, &crev, &data_offset)) { 1413 1425 1414 - dac_info = (struct _COMPASSIONATE_DATA *)(mode_info->atom_context->bios + data_offset); 1426 + dac_info = (struct _COMPASSIONATE_DATA *) 1427 + (mode_info->atom_context->bios + data_offset); 1415 1428 1416 - if (dac_info) { 1417 1429 tv_dac = kzalloc(sizeof(struct radeon_encoder_tv_dac), GFP_KERNEL); 1418 1430 1419 1431 if (!tv_dac) ··· 1437 1447 return tv_dac; 1438 1448 } 1439 1449 1450 + static const char *thermal_controller_names[] = { 1451 + "NONE", 1452 + "LM63", 1453 + "ADM1032", 1454 + "ADM1030", 1455 + "MUA6649", 1456 + "LM64", 1457 + "F75375", 1458 + "ASC7512", 1459 + }; 1460 + 1461 + static const char *pp_lib_thermal_controller_names[] = { 1462 + "NONE", 1463 + "LM63", 1464 + "ADM1032", 1465 + "ADM1030", 1466 + "MUA6649", 1467 + "LM64", 1468 + "F75375", 1469 + "RV6xx", 1470 + "RV770", 1471 + "ADT7473", 1472 + }; 1473 + 1440 1474 union power_info { 1441 1475 struct _ATOM_POWERPLAY_INFO info; 1442 1476 struct _ATOM_POWERPLAY_INFO_V2 info_2; ··· 1480 1466 struct _ATOM_PPLIB_STATE *power_state; 1481 1467 int num_modes = 0, i, j; 1482 1468 int state_index = 0, mode_index = 0; 1483 - 1484 - atom_parse_data_header(mode_info->atom_context, index, NULL, &frev, &crev, &data_offset); 1485 - 1486 - power_info = (union power_info *)(mode_info->atom_context->bios + data_offset); 1469 + struct radeon_i2c_bus_rec i2c_bus; 1487 1470 1488 1471 rdev->pm.default_power_state = NULL; 1489 1472 1490 - if (power_info) { 1473 + if (atom_parse_data_header(mode_info->atom_context, index, NULL, 1474 + &frev, &crev, &data_offset)) { 1475 + power_info = (union power_info *)(mode_info->atom_context->bios + data_offset); 1491 1476 if (frev < 4) { 1477 + /* add the i2c bus for thermal/fan chip */ 1478 + if (power_info->info.ucOverdriveThermalController > 0) { 1479 + DRM_INFO("Possible %s thermal controller at 0x%02x\n", 1480 + 
thermal_controller_names[power_info->info.ucOverdriveThermalController], 1481 + power_info->info.ucOverdriveControllerAddress >> 1); 1482 + i2c_bus = radeon_lookup_i2c_gpio(rdev, power_info->info.ucOverdriveI2cLine); 1483 + rdev->pm.i2c_bus = radeon_i2c_create(rdev->ddev, &i2c_bus, "Thermal"); 1484 + } 1492 1485 num_modes = power_info->info.ucNumOfPowerModeEntries; 1493 1486 if (num_modes > ATOM_MAX_NUMBEROF_POWER_BLOCK) 1494 1487 num_modes = ATOM_MAX_NUMBEROF_POWER_BLOCK; ··· 1705 1684 } 1706 1685 } 1707 1686 } else if (frev == 4) { 1687 + /* add the i2c bus for thermal/fan chip */ 1688 + /* no support for internal controller yet */ 1689 + if (power_info->info_4.sThermalController.ucType > 0) { 1690 + if ((power_info->info_4.sThermalController.ucType == ATOM_PP_THERMALCONTROLLER_RV6xx) || 1691 + (power_info->info_4.sThermalController.ucType == ATOM_PP_THERMALCONTROLLER_RV770)) { 1692 + DRM_INFO("Internal thermal controller %s fan control\n", 1693 + (power_info->info_4.sThermalController.ucFanParameters & 1694 + ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with"); 1695 + } else { 1696 + DRM_INFO("Possible %s thermal controller at 0x%02x %s fan control\n", 1697 + pp_lib_thermal_controller_names[power_info->info_4.sThermalController.ucType], 1698 + power_info->info_4.sThermalController.ucI2cAddress >> 1, 1699 + (power_info->info_4.sThermalController.ucFanParameters & 1700 + ATOM_PP_FANPARAMETERS_NOFAN) ? "without" : "with"); 1701 + i2c_bus = radeon_lookup_i2c_gpio(rdev, power_info->info_4.sThermalController.ucI2cLine); 1702 + rdev->pm.i2c_bus = radeon_i2c_create(rdev->ddev, &i2c_bus, "Thermal"); 1703 + } 1704 + } 1708 1705 for (i = 0; i < power_info->info_4.ucNumStates; i++) { 1709 1706 mode_index = 0; 1710 1707 power_state = (struct _ATOM_PPLIB_STATE *)
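The recurring change across the radeon_atombios.c hunks above is one defensive pattern: `atom_parse_data_header()` now returns a bool, and the table pointer into the BIOS image is only computed inside the guarded branch instead of unconditionally from a possibly-unset offset. A minimal sketch of that pattern, using hypothetical stand-in types (the real context and parser live in `drivers/gpu/drm/radeon/atom.c`):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the AtomBIOS context; the real struct and
 * atom_parse_data_header() are in atom.c / atom.h. */
struct fake_ctx {
	const uint8_t *bios;
	bool table_present;
	uint16_t offset;
};

/* Returns false when the requested data table is absent, mirroring the
 * bool return value the patch starts checking. */
static bool parse_data_header(const struct fake_ctx *ctx, uint16_t *data_offset)
{
	if (!ctx->table_present)
		return false;
	*data_offset = ctx->offset;
	return true;
}

/* The guarded form the diff applies everywhere: compute bios + offset
 * only when the header parse succeeded, never from a stale offset. */
static const uint8_t *lookup_table(const struct fake_ctx *ctx)
{
	uint16_t data_offset;

	if (parse_data_header(ctx, &data_offset))
		return ctx->bios + data_offset;
	return NULL;	/* previously: unconditional ctx->bios + data_offset */
}
```

The same shape repeats for the firmware-info, IGP, TMDS, spread-spectrum, LVDS, DAC, TV, and powerplay tables in the hunks above.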
+3 -4
drivers/gpu/drm/radeon/radeon_combios.c
··· 531 531 case CHIP_RS300: 532 532 switch (ddc_line) { 533 533 case RADEON_GPIO_DVI_DDC: 534 - /* in theory this should be hw capable, 535 - * but it doesn't seem to work 536 - */ 537 - i2c.hw_capable = false; 534 + i2c.hw_capable = true; 538 535 break; 539 536 default: 540 537 i2c.hw_capable = false; ··· 630 633 p1pll->reference_div = RBIOS16(pll_info + 0x10); 631 634 p1pll->pll_out_min = RBIOS32(pll_info + 0x12); 632 635 p1pll->pll_out_max = RBIOS32(pll_info + 0x16); 636 + p1pll->lcd_pll_out_min = p1pll->pll_out_min; 637 + p1pll->lcd_pll_out_max = p1pll->pll_out_max; 633 638 634 639 if (rev > 9) { 635 640 p1pll->pll_in_min = RBIOS32(pll_info + 0x36);
+1 -1
drivers/gpu/drm/radeon/radeon_connectors.c
··· 940 940 if (radeon_connector->edid) 941 941 kfree(radeon_connector->edid); 942 942 if (radeon_dig_connector->dp_i2c_bus) 943 - radeon_i2c_destroy_dp(radeon_dig_connector->dp_i2c_bus); 943 + radeon_i2c_destroy(radeon_dig_connector->dp_i2c_bus); 944 944 kfree(radeon_connector->con_priv); 945 945 drm_sysfs_connector_remove(connector); 946 946 drm_connector_cleanup(connector);
+7 -4
drivers/gpu/drm/radeon/radeon_cs.c
··· 193 193 radeon_bo_list_fence(&parser->validated, parser->ib->fence); 194 194 } 195 195 radeon_bo_list_unreserve(&parser->validated); 196 - for (i = 0; i < parser->nrelocs; i++) { 197 - if (parser->relocs[i].gobj) 198 - drm_gem_object_unreference_unlocked(parser->relocs[i].gobj); 196 + if (parser->relocs != NULL) { 197 + for (i = 0; i < parser->nrelocs; i++) { 198 + if (parser->relocs[i].gobj) 199 + drm_gem_object_unreference_unlocked(parser->relocs[i].gobj); 200 + } 199 201 } 200 202 kfree(parser->track); 201 203 kfree(parser->relocs); ··· 245 243 } 246 244 r = radeon_cs_parser_relocs(&parser); 247 245 if (r) { 248 - DRM_ERROR("Failed to parse relocation !\n"); 246 + if (r != -ERESTARTSYS) 247 + DRM_ERROR("Failed to parse relocation %d!\n", r); 249 248 radeon_cs_parser_fini(&parser, r); 250 249 mutex_unlock(&rdev->cs_mutex); 251 250 return r;
+38 -199
drivers/gpu/drm/radeon/radeon_device.c
··· 33 33 #include <linux/vga_switcheroo.h> 34 34 #include "radeon_reg.h" 35 35 #include "radeon.h" 36 - #include "radeon_asic.h" 37 36 #include "atom.h" 38 37 39 38 /* ··· 241 242 242 243 } 243 244 245 + void radeon_update_bandwidth_info(struct radeon_device *rdev) 246 + { 247 + fixed20_12 a; 248 + u32 sclk, mclk; 249 + 250 + if (rdev->flags & RADEON_IS_IGP) { 251 + sclk = radeon_get_engine_clock(rdev); 252 + mclk = rdev->clock.default_mclk; 253 + 254 + a.full = rfixed_const(100); 255 + rdev->pm.sclk.full = rfixed_const(sclk); 256 + rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 257 + rdev->pm.mclk.full = rfixed_const(mclk); 258 + rdev->pm.mclk.full = rfixed_div(rdev->pm.mclk, a); 259 + 260 + a.full = rfixed_const(16); 261 + /* core_bandwidth = sclk(Mhz) * 16 */ 262 + rdev->pm.core_bandwidth.full = rfixed_div(rdev->pm.sclk, a); 263 + } else { 264 + sclk = radeon_get_engine_clock(rdev); 265 + mclk = radeon_get_memory_clock(rdev); 266 + 267 + a.full = rfixed_const(100); 268 + rdev->pm.sclk.full = rfixed_const(sclk); 269 + rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 270 + rdev->pm.mclk.full = rfixed_const(mclk); 271 + rdev->pm.mclk.full = rfixed_div(rdev->pm.mclk, a); 272 + } 273 + } 274 + 244 275 bool radeon_boot_test_post_card(struct radeon_device *rdev) 245 276 { 246 277 if (radeon_card_posted(rdev)) ··· 316 287 rdev->dummy_page.page = NULL; 317 288 } 318 289 319 - 320 - /* 321 - * Registers accessors functions. 
322 - */ 323 - uint32_t radeon_invalid_rreg(struct radeon_device *rdev, uint32_t reg) 324 - { 325 - DRM_ERROR("Invalid callback to read register 0x%04X\n", reg); 326 - BUG_ON(1); 327 - return 0; 328 - } 329 - 330 - void radeon_invalid_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 331 - { 332 - DRM_ERROR("Invalid callback to write register 0x%04X with 0x%08X\n", 333 - reg, v); 334 - BUG_ON(1); 335 - } 336 - 337 - void radeon_register_accessor_init(struct radeon_device *rdev) 338 - { 339 - rdev->mc_rreg = &radeon_invalid_rreg; 340 - rdev->mc_wreg = &radeon_invalid_wreg; 341 - rdev->pll_rreg = &radeon_invalid_rreg; 342 - rdev->pll_wreg = &radeon_invalid_wreg; 343 - rdev->pciep_rreg = &radeon_invalid_rreg; 344 - rdev->pciep_wreg = &radeon_invalid_wreg; 345 - 346 - /* Don't change order as we are overridding accessor. */ 347 - if (rdev->family < CHIP_RV515) { 348 - rdev->pcie_reg_mask = 0xff; 349 - } else { 350 - rdev->pcie_reg_mask = 0x7ff; 351 - } 352 - /* FIXME: not sure here */ 353 - if (rdev->family <= CHIP_R580) { 354 - rdev->pll_rreg = &r100_pll_rreg; 355 - rdev->pll_wreg = &r100_pll_wreg; 356 - } 357 - if (rdev->family >= CHIP_R420) { 358 - rdev->mc_rreg = &r420_mc_rreg; 359 - rdev->mc_wreg = &r420_mc_wreg; 360 - } 361 - if (rdev->family >= CHIP_RV515) { 362 - rdev->mc_rreg = &rv515_mc_rreg; 363 - rdev->mc_wreg = &rv515_mc_wreg; 364 - } 365 - if (rdev->family == CHIP_RS400 || rdev->family == CHIP_RS480) { 366 - rdev->mc_rreg = &rs400_mc_rreg; 367 - rdev->mc_wreg = &rs400_mc_wreg; 368 - } 369 - if (rdev->family == CHIP_RS690 || rdev->family == CHIP_RS740) { 370 - rdev->mc_rreg = &rs690_mc_rreg; 371 - rdev->mc_wreg = &rs690_mc_wreg; 372 - } 373 - if (rdev->family == CHIP_RS600) { 374 - rdev->mc_rreg = &rs600_mc_rreg; 375 - rdev->mc_wreg = &rs600_mc_wreg; 376 - } 377 - if ((rdev->family >= CHIP_R600) && (rdev->family <= CHIP_RV740)) { 378 - rdev->pciep_rreg = &r600_pciep_rreg; 379 - rdev->pciep_wreg = &r600_pciep_wreg; 380 - } 381 - } 382 - 383 - 384 - 
/* 385 - * ASIC 386 - */ 387 - int radeon_asic_init(struct radeon_device *rdev) 388 - { 389 - radeon_register_accessor_init(rdev); 390 - switch (rdev->family) { 391 - case CHIP_R100: 392 - case CHIP_RV100: 393 - case CHIP_RS100: 394 - case CHIP_RV200: 395 - case CHIP_RS200: 396 - rdev->asic = &r100_asic; 397 - break; 398 - case CHIP_R200: 399 - case CHIP_RV250: 400 - case CHIP_RS300: 401 - case CHIP_RV280: 402 - rdev->asic = &r200_asic; 403 - break; 404 - case CHIP_R300: 405 - case CHIP_R350: 406 - case CHIP_RV350: 407 - case CHIP_RV380: 408 - if (rdev->flags & RADEON_IS_PCIE) 409 - rdev->asic = &r300_asic_pcie; 410 - else 411 - rdev->asic = &r300_asic; 412 - break; 413 - case CHIP_R420: 414 - case CHIP_R423: 415 - case CHIP_RV410: 416 - rdev->asic = &r420_asic; 417 - break; 418 - case CHIP_RS400: 419 - case CHIP_RS480: 420 - rdev->asic = &rs400_asic; 421 - break; 422 - case CHIP_RS600: 423 - rdev->asic = &rs600_asic; 424 - break; 425 - case CHIP_RS690: 426 - case CHIP_RS740: 427 - rdev->asic = &rs690_asic; 428 - break; 429 - case CHIP_RV515: 430 - rdev->asic = &rv515_asic; 431 - break; 432 - case CHIP_R520: 433 - case CHIP_RV530: 434 - case CHIP_RV560: 435 - case CHIP_RV570: 436 - case CHIP_R580: 437 - rdev->asic = &r520_asic; 438 - break; 439 - case CHIP_R600: 440 - case CHIP_RV610: 441 - case CHIP_RV630: 442 - case CHIP_RV620: 443 - case CHIP_RV635: 444 - case CHIP_RV670: 445 - case CHIP_RS780: 446 - case CHIP_RS880: 447 - rdev->asic = &r600_asic; 448 - break; 449 - case CHIP_RV770: 450 - case CHIP_RV730: 451 - case CHIP_RV710: 452 - case CHIP_RV740: 453 - rdev->asic = &rv770_asic; 454 - break; 455 - case CHIP_CEDAR: 456 - case CHIP_REDWOOD: 457 - case CHIP_JUNIPER: 458 - case CHIP_CYPRESS: 459 - case CHIP_HEMLOCK: 460 - rdev->asic = &evergreen_asic; 461 - break; 462 - default: 463 - /* FIXME: not supported yet */ 464 - return -EINVAL; 465 - } 466 - 467 - if (rdev->flags & RADEON_IS_IGP) { 468 - rdev->asic->get_memory_clock = NULL; 469 - 
rdev->asic->set_memory_clock = NULL; 470 - } 471 - 472 - return 0; 473 - } 474 - 475 - 476 - /* 477 - * Wrapper around modesetting bits. 478 - */ 479 - int radeon_clocks_init(struct radeon_device *rdev) 480 - { 481 - int r; 482 - 483 - r = radeon_static_clocks_init(rdev->ddev); 484 - if (r) { 485 - return r; 486 - } 487 - DRM_INFO("Clocks initialized !\n"); 488 - return 0; 489 - } 490 - 491 - void radeon_clocks_fini(struct radeon_device *rdev) 492 - { 493 - } 494 290 495 291 /* ATOM accessor methods */ 496 292 static uint32_t cail_pll_read(struct card_info *info, uint32_t reg) ··· 419 565 VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM; 420 566 else 421 567 return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM; 422 - } 423 - 424 - void radeon_agp_disable(struct radeon_device *rdev) 425 - { 426 - rdev->flags &= ~RADEON_IS_AGP; 427 - if (rdev->family >= CHIP_R600) { 428 - DRM_INFO("Forcing AGP to PCIE mode\n"); 429 - rdev->flags |= RADEON_IS_PCIE; 430 - } else if (rdev->family >= CHIP_RV515 || 431 - rdev->family == CHIP_RV380 || 432 - rdev->family == CHIP_RV410 || 433 - rdev->family == CHIP_R423) { 434 - DRM_INFO("Forcing AGP to PCIE mode\n"); 435 - rdev->flags |= RADEON_IS_PCIE; 436 - rdev->asic->gart_tlb_flush = &rv370_pcie_gart_tlb_flush; 437 - rdev->asic->gart_set_page = &rv370_pcie_gart_set_page; 438 - } else { 439 - DRM_INFO("Forcing AGP to PCI mode\n"); 440 - rdev->flags |= RADEON_IS_PCI; 441 - rdev->asic->gart_tlb_flush = &r100_pci_gart_tlb_flush; 442 - rdev->asic->gart_set_page = &r100_pci_gart_set_page; 443 - } 444 - rdev->mc.gtt_size = radeon_gart_size * 1024 * 1024; 445 568 } 446 569 447 570 void radeon_check_arguments(struct radeon_device *rdev) ··· 561 730 if (r) 562 731 return r; 563 732 radeon_check_arguments(rdev); 733 + 734 + /* all of the newer IGP chips have an internal gart 735 + * However some rs4xx report as AGP, so remove that here. 
736 + */ 737 + if ((rdev->family >= CHIP_RS400) && 738 + (rdev->flags & RADEON_IS_IGP)) { 739 + rdev->flags &= ~RADEON_IS_AGP; 740 + } 564 741 565 742 if (rdev->flags & RADEON_IS_AGP && radeon_agpmode == -1) { 566 743 radeon_agp_disable(rdev);
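The new `radeon_update_bandwidth_info()` above scales raw clock readings with the kernel's `fixed20_12` helpers (20 integer bits, 12 fractional bits). A self-contained sketch of how `rfixed_const()`/`rfixed_div()` behave under the usual 12-bit fractional encoding — an illustrative reimplementation for clarity, not the kernel's code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 20.12 fixed-point type, shaped like the kernel's
 * fixed20_12 from radeon_fixed.h. */
typedef struct { uint32_t full; } fixed20_12;

static fixed20_12 rfixed_const(uint32_t v)
{
	return (fixed20_12){ .full = v << 12 };	/* integer -> 20.12 */
}

static uint32_t rfixed_trunc(fixed20_12 v)
{
	return v.full >> 12;	/* drop the fractional bits */
}

static fixed20_12 rfixed_div(fixed20_12 a, fixed20_12 b)
{
	/* widen to 64 bits so the pre-shift cannot overflow */
	uint64_t t = ((uint64_t)a.full << 12) / b.full;

	return (fixed20_12){ .full = (uint32_t)t };
}
```

With this encoding, a clock read back as 40000 (10 kHz units, i.e. 400 MHz) divided by `rfixed_const(100)` yields 400, matching the `a.full = rfixed_const(100)` scaling in the hunk; the IGP branch then derives core bandwidth by a further divide by 16.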
+51 -17
drivers/gpu/drm/radeon/radeon_display.c
··· 368 368 369 369 if (rdev->bios) { 370 370 if (rdev->is_atom_bios) { 371 - if (rdev->family >= CHIP_R600) 371 + ret = radeon_get_atom_connector_info_from_supported_devices_table(dev); 372 + if (ret == false) 372 373 ret = radeon_get_atom_connector_info_from_object_table(dev); 373 - else 374 - ret = radeon_get_atom_connector_info_from_supported_devices_table(dev); 375 374 } else { 376 375 ret = radeon_get_legacy_connector_info_from_bios(dev); 377 376 if (ret == false) ··· 468 469 uint32_t best_error = 0xffffffff; 469 470 uint32_t best_vco_diff = 1; 470 471 uint32_t post_div; 472 + u32 pll_out_min, pll_out_max; 471 473 472 474 DRM_DEBUG("PLL freq %llu %u %u\n", freq, pll->min_ref_div, pll->max_ref_div); 473 475 freq = freq * 1000; 476 + 477 + if (pll->flags & RADEON_PLL_IS_LCD) { 478 + pll_out_min = pll->lcd_pll_out_min; 479 + pll_out_max = pll->lcd_pll_out_max; 480 + } else { 481 + pll_out_min = pll->pll_out_min; 482 + pll_out_max = pll->pll_out_max; 483 + } 474 484 475 485 if (pll->flags & RADEON_PLL_USE_REF_DIV) 476 486 min_ref_div = max_ref_div = pll->reference_div; ··· 544 536 tmp = (uint64_t)pll->reference_freq * feedback_div; 545 537 vco = radeon_div(tmp, ref_div); 546 538 547 - if (vco < pll->pll_out_min) { 539 + if (vco < pll_out_min) { 548 540 min_feed_div = feedback_div + 1; 549 541 continue; 550 - } else if (vco > pll->pll_out_max) { 542 + } else if (vco > pll_out_max) { 551 543 max_feed_div = feedback_div; 552 544 continue; 553 545 } ··· 683 675 { 684 676 fixed20_12 ffreq, max_error, error, pll_out, a; 685 677 u32 vco; 678 + u32 pll_out_min, pll_out_max; 679 + 680 + if (pll->flags & RADEON_PLL_IS_LCD) { 681 + pll_out_min = pll->lcd_pll_out_min; 682 + pll_out_max = pll->lcd_pll_out_max; 683 + } else { 684 + pll_out_min = pll->pll_out_min; 685 + pll_out_max = pll->pll_out_max; 686 + } 686 687 687 688 ffreq.full = rfixed_const(freq); 688 689 /* max_error = ffreq * 0.0025; */ ··· 703 686 vco = pll->reference_freq * (((*fb_div) * 10) + (*fb_div_frac)); 704 
687 vco = vco / ((*ref_div) * 10); 705 688 706 - if ((vco < pll->pll_out_min) || (vco > pll->pll_out_max)) 689 + if ((vco < pll_out_min) || (vco > pll_out_max)) 707 690 continue; 708 691 709 692 /* pll_out = vco / post_div; */ ··· 731 714 { 732 715 u32 fb_div = 0, fb_div_frac = 0, post_div = 0, ref_div = 0; 733 716 u32 best_freq = 0, vco_frequency; 717 + u32 pll_out_min, pll_out_max; 718 + 719 + if (pll->flags & RADEON_PLL_IS_LCD) { 720 + pll_out_min = pll->lcd_pll_out_min; 721 + pll_out_max = pll->lcd_pll_out_max; 722 + } else { 723 + pll_out_min = pll->pll_out_min; 724 + pll_out_max = pll->pll_out_max; 725 + } 734 726 735 727 /* freq = freq / 10; */ 736 728 do_div(freq, 10); ··· 750 724 goto done; 751 725 752 726 vco_frequency = freq * post_div; 753 - if ((vco_frequency < pll->pll_out_min) || (vco_frequency > pll->pll_out_max)) 727 + if ((vco_frequency < pll_out_min) || (vco_frequency > pll_out_max)) 754 728 goto done; 755 729 756 730 if (pll->flags & RADEON_PLL_USE_REF_DIV) { ··· 775 749 continue; 776 750 777 751 vco_frequency = freq * post_div; 778 - if ((vco_frequency < pll->pll_out_min) || (vco_frequency > pll->pll_out_max)) 752 + if ((vco_frequency < pll_out_min) || (vco_frequency > pll_out_max)) 779 753 continue; 780 754 if (pll->flags & RADEON_PLL_USE_REF_DIV) { 781 755 ref_div = pll->reference_div; ··· 971 945 return 0; 972 946 } 973 947 948 + void radeon_update_display_priority(struct radeon_device *rdev) 949 + { 950 + /* adjustment options for the display watermarks */ 951 + if ((radeon_disp_priority == 0) || (radeon_disp_priority > 2)) { 952 + /* set display priority to high for r3xx, rv515 chips 953 + * this avoids flickering due to underflow to the 954 + * display controllers during heavy acceleration. 
955 + */ 956 + if (ASIC_IS_R300(rdev) || (rdev->family == CHIP_RV515)) 957 + rdev->disp_priority = 2; 958 + else 959 + rdev->disp_priority = 0; 960 + } else 961 + rdev->disp_priority = radeon_disp_priority; 962 + 963 + } 964 + 974 965 int radeon_modeset_init(struct radeon_device *rdev) 975 966 { 976 967 int i; ··· 1017 974 if (!rdev->is_atom_bios) { 1018 975 /* check for hardcoded EDID in BIOS */ 1019 976 radeon_combios_check_hardcoded_edid(rdev); 1020 - } 1021 - 1022 - if (rdev->flags & RADEON_SINGLE_CRTC) 1023 - rdev->num_crtc = 1; 1024 - else { 1025 - if (ASIC_IS_DCE4(rdev)) 1026 - rdev->num_crtc = 6; 1027 - else 1028 - rdev->num_crtc = 2; 1029 977 } 1030 978 1031 979 /* allocate crtcs */
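Each radeon_display.c PLL routine above gains the same prologue: when `RADEON_PLL_IS_LCD` is set, the VCO range comes from the new `lcd_pll_out_min`/`lcd_pll_out_max` fields instead of the generic limits. A hedged sketch of that selection (struct trimmed to the fields the patch touches; the flag value is illustrative, the real one is in radeon_mode.h):

```c
#include <assert.h>
#include <stdint.h>

#define RADEON_PLL_IS_LCD (1 << 13)	/* illustrative bit position */

/* Cut-down view of struct radeon_pll with only the limit fields. */
struct pll_limits {
	uint32_t flags;
	uint32_t pll_out_min, pll_out_max;
	uint32_t lcd_pll_out_min, lcd_pll_out_max;
};

/* The prologue the diff inserts: LCD outputs get their own VCO range
 * when the BIOS provides one, everything else keeps the generic range. */
static void pick_vco_range(const struct pll_limits *pll,
			   uint32_t *out_min, uint32_t *out_max)
{
	if (pll->flags & RADEON_PLL_IS_LCD) {
		*out_min = pll->lcd_pll_out_min;
		*out_max = pll->lcd_pll_out_max;
	} else {
		*out_min = pll->pll_out_min;
		*out_max = pll->pll_out_max;
	}
}
```

The combios and atombios hunks earlier in this commit make the scheme safe by defaulting the LCD limits to the generic limits whenever the BIOS tables do not supply them.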
+10 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 42 42 * KMS wrapper. 43 43 * - 2.0.0 - initial interface 44 44 * - 2.1.0 - add square tiling interface 45 + * - 2.2.0 - add r6xx/r7xx const buffer support 45 46 */ 46 47 #define KMS_DRIVER_MAJOR 2 47 - #define KMS_DRIVER_MINOR 1 48 + #define KMS_DRIVER_MINOR 2 48 49 #define KMS_DRIVER_PATCHLEVEL 0 49 50 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 50 51 int radeon_driver_unload_kms(struct drm_device *dev); ··· 92 91 int radeon_new_pll = -1; 93 92 int radeon_dynpm = -1; 94 93 int radeon_audio = 1; 94 + int radeon_disp_priority = 0; 95 + int radeon_hw_i2c = 0; 95 96 96 97 MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers"); 97 98 module_param_named(no_wb, radeon_no_wb, int, 0444); ··· 136 133 137 134 MODULE_PARM_DESC(audio, "Audio enable (0 = disable)"); 138 135 module_param_named(audio, radeon_audio, int, 0444); 136 + 137 + MODULE_PARM_DESC(disp_priority, "Display Priority (0 = auto, 1 = normal, 2 = high)"); 138 + module_param_named(disp_priority, radeon_disp_priority, int, 0444); 139 + 140 + MODULE_PARM_DESC(hw_i2c, "hw i2c engine enable (0 = disable)"); 141 + module_param_named(hw_i2c, radeon_hw_i2c, int, 0444); 139 142 140 143 static int radeon_suspend(struct drm_device *dev, pm_message_t state) 141 144 {
+2 -1
drivers/gpu/drm/radeon/radeon_drv.h
··· 107 107 * 1.30- Add support for occlusion queries 108 108 * 1.31- Add support for num Z pipes from GET_PARAM 109 109 * 1.32- fixes for rv740 setup 110 + * 1.33- Add r6xx/r7xx const buffer support 110 111 */ 111 112 #define DRIVER_MAJOR 1 112 - #define DRIVER_MINOR 32 113 + #define DRIVER_MINOR 33 113 114 #define DRIVER_PATCHLEVEL 0 114 115 115 116 enum radeon_cp_microcode_version {
+64 -57
drivers/gpu/drm/radeon/radeon_encoders.c
··· 302 302 } 303 303 304 304 if (ASIC_IS_DCE3(rdev) && 305 - (radeon_encoder->active_device & (ATOM_DEVICE_DFP_SUPPORT))) { 305 + (radeon_encoder->active_device & (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT))) { 306 306 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 307 307 radeon_dp_set_link_config(connector, mode); 308 308 } ··· 519 519 break; 520 520 } 521 521 522 - atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev); 522 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 523 + return; 523 524 524 525 switch (frev) { 525 526 case 1: ··· 594 593 } 595 594 596 595 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 597 - r600_hdmi_enable(encoder, hdmi_detected); 598 596 } 599 597 600 598 int ··· 708 708 struct radeon_connector_atom_dig *dig_connector = 709 709 radeon_get_atom_connector_priv_from_encoder(encoder); 710 710 union dig_encoder_control args; 711 - int index = 0, num = 0; 711 + int index = 0; 712 712 uint8_t frev, crev; 713 713 714 714 if (!dig || !dig_connector) ··· 724 724 else 725 725 index = GetIndexIntoMasterTable(COMMAND, DIG1EncoderControl); 726 726 } 727 - num = dig->dig_encoder + 1; 728 727 729 - atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev); 728 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 729 + return; 730 730 731 731 args.v1.ucAction = action; 732 732 args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10); ··· 785 785 struct drm_connector *connector; 786 786 struct radeon_connector *radeon_connector; 787 787 union dig_transmitter_control args; 788 - int index = 0, num = 0; 788 + int index = 0; 789 789 uint8_t frev, crev; 790 790 bool is_dp = false; 791 791 int pll_id = 0; ··· 814 814 } 815 815 } 816 816 817 - atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev); 817 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, 
&frev, &crev)) 818 + return; 818 819 819 820 args.v1.ucAction = action; 820 821 if (action == ATOM_TRANSMITTER_ACTION_INIT) { ··· 861 860 switch (radeon_encoder->encoder_id) { 862 861 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 863 862 args.v3.acConfig.ucTransmitterSel = 0; 864 - num = 0; 865 863 break; 866 864 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 867 865 args.v3.acConfig.ucTransmitterSel = 1; 868 - num = 1; 869 866 break; 870 867 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 871 868 args.v3.acConfig.ucTransmitterSel = 2; 872 - num = 2; 873 869 break; 874 870 } 875 871 ··· 877 879 args.v3.acConfig.fCoherentMode = 1; 878 880 } 879 881 } else if (ASIC_IS_DCE32(rdev)) { 880 - if (dig->dig_encoder == 1) 881 - args.v2.acConfig.ucEncoderSel = 1; 882 + args.v2.acConfig.ucEncoderSel = dig->dig_encoder; 882 883 if (dig_connector->linkb) 883 884 args.v2.acConfig.ucLinkSel = 1; 884 885 885 886 switch (radeon_encoder->encoder_id) { 886 887 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 887 888 args.v2.acConfig.ucTransmitterSel = 0; 888 - num = 0; 889 889 break; 890 890 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1: 891 891 args.v2.acConfig.ucTransmitterSel = 1; 892 - num = 1; 893 892 break; 894 893 case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2: 895 894 args.v2.acConfig.ucTransmitterSel = 2; 896 - num = 2; 897 895 break; 898 896 } 899 897 ··· 907 913 else 908 914 args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_DIG1_ENCODER; 909 915 910 - switch (radeon_encoder->encoder_id) { 911 - case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: 912 - if (rdev->flags & RADEON_IS_IGP) { 913 - if (radeon_encoder->pixel_clock > 165000) { 914 - if (dig_connector->igp_lane_info & 0x3) 915 - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_7; 916 - else if (dig_connector->igp_lane_info & 0xc) 917 - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_15; 918 - } else { 919 - if (dig_connector->igp_lane_info & 0x1) 920 - args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_3; 921 - else if (dig_connector->igp_lane_info & 0x2) 922 - 
				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_4_7;
-			else if (dig_connector->igp_lane_info & 0x4)
-				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_11;
-			else if (dig_connector->igp_lane_info & 0x8)
-				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_12_15;
-			}
+	if ((rdev->flags & RADEON_IS_IGP) &&
+	    (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_UNIPHY)) {
+		if (is_dp || (radeon_encoder->pixel_clock <= 165000)) {
+			if (dig_connector->igp_lane_info & 0x1)
+				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_3;
+			else if (dig_connector->igp_lane_info & 0x2)
+				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_4_7;
+			else if (dig_connector->igp_lane_info & 0x4)
+				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_11;
+			else if (dig_connector->igp_lane_info & 0x8)
+				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_12_15;
+		} else {
+			if (dig_connector->igp_lane_info & 0x3)
+				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_7;
+			else if (dig_connector->igp_lane_info & 0xc)
+				args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_15;
 		}
-		}
 	}
-
-	if (radeon_encoder->pixel_clock > 165000)
-		args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_8LANE_LINK;
 
 	if (dig_connector->linkb)
 		args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LINKB;
···
 	else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) {
 		if (dig->coherent_mode)
 			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_COHERENT;
+		if (radeon_encoder->pixel_clock > 165000)
+			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_8LANE_LINK;
 	}
 }
 
···
 	if (is_dig) {
 		switch (mode) {
 		case DRM_MODE_DPMS_ON:
-			atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
-			{
+			if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) {
 				struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+
 				dp_link_train(encoder, connector);
+				if (ASIC_IS_DCE4(rdev))
+					atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON);
 			}
+			if (!ASIC_IS_DCE4(rdev))
+				atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
 			break;
 		case DRM_MODE_DPMS_STANDBY:
 		case DRM_MODE_DPMS_SUSPEND:
 		case DRM_MODE_DPMS_OFF:
-			atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0);
+			if (!ASIC_IS_DCE4(rdev))
+				atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0);
+			if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) {
+				if (ASIC_IS_DCE4(rdev))
+					atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF);
+			}
 			break;
 		}
 	} else {
···
 
 	memset(&args, 0, sizeof(args));
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return;
 
 	switch (frev) {
 	case 1:
···
 	}
 
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+
+	/* update scratch regs with new routing */
+	radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
 }
 
 static void
···
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
-	struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
 
-	if (radeon_encoder->active_device &
-	    (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) {
-		struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
-		if (dig)
-			dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder);
-	}
 	radeon_encoder->pixel_clock = adjusted_mode->clock;
-
-	radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
-	atombios_set_encoder_crtc_source(encoder);
 
 	if (ASIC_IS_AVIVO(rdev)) {
 		if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT | ATOM_DEVICE_TV_SUPPORT))
···
 	}
 	atombios_apply_encoder_quirks(encoder, adjusted_mode);
 
-	/* XXX */
-	if (!ASIC_IS_DCE4(rdev))
+	if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) {
+		r600_hdmi_enable(encoder);
 		r600_hdmi_setmode(encoder, adjusted_mode);
+	}
 }
 
 static bool
···
 
 	memset(&args, 0, sizeof(args));
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return false;
 
 	args.sDacload.ucMisc = 0;
···
 
 static void radeon_atom_encoder_prepare(struct drm_encoder *encoder)
 {
+	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+
+	if (radeon_encoder->active_device &
+	    (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) {
+		struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+		if (dig)
+			dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder);
+	}
+
 	radeon_atom_output_lock(encoder, true);
 	radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
+
+	/* this is needed for the pll/ss setup to work correctly in some cases */
+	atombios_set_encoder_crtc_source(encoder);
 }
 
 static void radeon_atom_encoder_commit(struct drm_encoder *encoder)
···
 	radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
 
 	if (radeon_encoder_is_digital(encoder)) {
+		if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI)
+			r600_hdmi_disable(encoder);
 		dig = radeon_encoder->enc_priv;
 		dig->dig_encoder = -1;
 	}
···
 		drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs);
 		break;
 	}
-
-	r600_hdmi_init(encoder);
 }
+73 -80
drivers/gpu/drm/radeon/radeon_i2c.c
···
 	return false;
 }
 
+/* bit banging i2c */
 
 static void radeon_i2c_do_lock(struct radeon_i2c_chan *i2c, int lock_state)
 {
···
 	WREG32(rec->en_data_reg, val);
 }
 
+static int pre_xfer(struct i2c_adapter *i2c_adap)
+{
+	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
+
+	radeon_i2c_do_lock(i2c, 1);
+
+	return 0;
+}
+
+static void post_xfer(struct i2c_adapter *i2c_adap)
+{
+	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
+
+	radeon_i2c_do_lock(i2c, 0);
+}
+
+/* hw i2c */
+
 static u32 radeon_get_i2c_prescale(struct radeon_device *rdev)
 {
-	struct radeon_pll *spll = &rdev->clock.spll;
 	u32 sclk = radeon_get_engine_clock(rdev);
 	u32 prescale = 0;
-	u32 n, m;
-	u8 loop;
+	u32 nm;
+	u8 n, m, loop;
 	int i2c_clock;
 
 	switch (rdev->family) {
···
 	case CHIP_R300:
 	case CHIP_R350:
 	case CHIP_RV350:
-		n = (spll->reference_freq) / (4 * 6);
+		i2c_clock = 60;
+		nm = (sclk * 10) / (i2c_clock * 4);
 		for (loop = 1; loop < 255; loop++) {
-			if ((loop * (loop - 1)) > n)
+			if ((nm / loop) < loop)
 				break;
 		}
-		m = loop - 1;
-		prescale = m | (loop << 8);
+		n = loop - 1;
+		m = loop - 2;
+		prescale = m | (n << 8);
 		break;
 	case CHIP_RV380:
 	case CHIP_RS400:
···
 	case CHIP_R420:
 	case CHIP_R423:
 	case CHIP_RV410:
-		sclk = radeon_get_engine_clock(rdev);
 		prescale = (((sclk * 10)/(4 * 128 * 100) + 1) << 8) + 128;
 		break;
 	case CHIP_RS600:
···
 	case CHIP_RV570:
 	case CHIP_R580:
 		i2c_clock = 50;
-		sclk = radeon_get_engine_clock(rdev);
 		if (rdev->family == CHIP_R520)
 			prescale = (127 << 8) + ((sclk * 10) / (4 * 127 * i2c_clock));
 		else
···
 	prescale = radeon_get_i2c_prescale(rdev);
 
 	reg = ((prescale << RADEON_I2C_PRESCALE_SHIFT) |
+	       RADEON_I2C_DRIVE_EN |
 	       RADEON_I2C_START |
 	       RADEON_I2C_STOP |
 	       RADEON_I2C_GO);
···
 	return ret;
 }
 
-static int radeon_sw_i2c_xfer(struct i2c_adapter *i2c_adap,
+static int radeon_hw_i2c_xfer(struct i2c_adapter *i2c_adap,
 			      struct i2c_msg *msgs, int num)
-{
-	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
-	int ret;
-
-	radeon_i2c_do_lock(i2c, 1);
-	ret = i2c_transfer(&i2c->algo.radeon.bit_adapter, msgs, num);
-	radeon_i2c_do_lock(i2c, 0);
-
-	return ret;
-}
-
-static int radeon_i2c_xfer(struct i2c_adapter *i2c_adap,
-			   struct i2c_msg *msgs, int num)
 {
 	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
 	struct radeon_device *rdev = i2c->dev->dev_private;
 	struct radeon_i2c_bus_rec *rec = &i2c->rec;
-	int ret;
+	int ret = 0;
 
 	switch (rdev->family) {
 	case CHIP_R100:
···
 	case CHIP_RV410:
 	case CHIP_RS400:
 	case CHIP_RS480:
-		if (rec->hw_capable)
-			ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
-		else
-			ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
+		ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_RS600:
 	case CHIP_RS690:
 	case CHIP_RS740:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_RV515:
 	case CHIP_R520:
···
 	case CHIP_RV560:
 	case CHIP_RV570:
 	case CHIP_R580:
-		if (rec->hw_capable) {
-			if (rec->mm_i2c)
-				ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
-			else
-				ret = r500_hw_i2c_xfer(i2c_adap, msgs, num);
-		} else
-			ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
+		if (rec->mm_i2c)
+			ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
+		else
+			ret = r500_hw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_R600:
 	case CHIP_RV610:
 	case CHIP_RV630:
 	case CHIP_RV670:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_RV620:
 	case CHIP_RV635:
···
 	case CHIP_RV710:
 	case CHIP_RV740:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_CEDAR:
 	case CHIP_REDWOOD:
···
 	case CHIP_CYPRESS:
 	case CHIP_HEMLOCK:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	default:
 		DRM_ERROR("i2c: unhandled radeon chip\n");
···
 	return ret;
 }
 
-static u32 radeon_i2c_func(struct i2c_adapter *adap)
+static u32 radeon_hw_i2c_func(struct i2c_adapter *adap)
 {
 	return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
 }
 
 static const struct i2c_algorithm radeon_i2c_algo = {
-	.master_xfer = radeon_i2c_xfer,
-	.functionality = radeon_i2c_func,
+	.master_xfer = radeon_hw_i2c_xfer,
+	.functionality = radeon_hw_i2c_func,
 };
 
 struct radeon_i2c_chan *radeon_i2c_create(struct drm_device *dev,
 					  struct radeon_i2c_bus_rec *rec,
 					  const char *name)
 {
+	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_i2c_chan *i2c;
 	int ret;
 
···
 	if (i2c == NULL)
 		return NULL;
 
-	/* set the internal bit adapter */
-	i2c->algo.radeon.bit_adapter.owner = THIS_MODULE;
-	i2c_set_adapdata(&i2c->algo.radeon.bit_adapter, i2c);
-	sprintf(i2c->algo.radeon.bit_adapter.name, "Radeon internal i2c bit bus %s", name);
-	i2c->algo.radeon.bit_adapter.algo_data = &i2c->algo.radeon.bit_data;
-	i2c->algo.radeon.bit_data.setsda = set_data;
-	i2c->algo.radeon.bit_data.setscl = set_clock;
-	i2c->algo.radeon.bit_data.getsda = get_data;
-	i2c->algo.radeon.bit_data.getscl = get_clock;
-	i2c->algo.radeon.bit_data.udelay = 20;
-	/* vesa says 2.2 ms is enough, 1 jiffy doesn't seem to always
-	 * make this, 2 jiffies is a lot more reliable */
-	i2c->algo.radeon.bit_data.timeout = 2;
-	i2c->algo.radeon.bit_data.data = i2c;
-	ret = i2c_bit_add_bus(&i2c->algo.radeon.bit_adapter);
-	if (ret) {
-		DRM_ERROR("Failed to register internal bit i2c %s\n", name);
-		goto out_free;
-	}
-	/* set the radeon i2c adapter */
-	i2c->dev = dev;
 	i2c->rec = *rec;
 	i2c->adapter.owner = THIS_MODULE;
+	i2c->dev = dev;
 	i2c_set_adapdata(&i2c->adapter, i2c);
-	sprintf(i2c->adapter.name, "Radeon i2c %s", name);
-	i2c->adapter.algo_data = &i2c->algo.radeon;
-	i2c->adapter.algo = &radeon_i2c_algo;
-	ret = i2c_add_adapter(&i2c->adapter);
-	if (ret) {
-		DRM_ERROR("Failed to register i2c %s\n", name);
-		goto out_free;
+	if (rec->mm_i2c ||
+	    (rec->hw_capable &&
+	     radeon_hw_i2c &&
+	     ((rdev->family <= CHIP_RS480) ||
+	      ((rdev->family >= CHIP_RV515) && (rdev->family <= CHIP_R580))))) {
+		/* set the radeon hw i2c adapter */
+		sprintf(i2c->adapter.name, "Radeon i2c hw bus %s", name);
+		i2c->adapter.algo = &radeon_i2c_algo;
+		ret = i2c_add_adapter(&i2c->adapter);
+		if (ret) {
+			DRM_ERROR("Failed to register hw i2c %s\n", name);
+			goto out_free;
+		}
+	} else {
+		/* set the radeon bit adapter */
+		sprintf(i2c->adapter.name, "Radeon i2c bit bus %s", name);
+		i2c->adapter.algo_data = &i2c->algo.bit;
+		i2c->algo.bit.pre_xfer = pre_xfer;
+		i2c->algo.bit.post_xfer = post_xfer;
+		i2c->algo.bit.setsda = set_data;
+		i2c->algo.bit.setscl = set_clock;
+		i2c->algo.bit.getsda = get_data;
+		i2c->algo.bit.getscl = get_clock;
+		i2c->algo.bit.udelay = 20;
+		/* vesa says 2.2 ms is enough, 1 jiffy doesn't seem to always
+		 * make this, 2 jiffies is a lot more reliable */
+		i2c->algo.bit.timeout = 2;
+		i2c->algo.bit.data = i2c;
+		ret = i2c_bit_add_bus(&i2c->adapter);
+		if (ret) {
+			DRM_ERROR("Failed to register bit i2c %s\n", name);
+			goto out_free;
+		}
 	}
 
 	return i2c;
···
 {
 	if (!i2c)
 		return;
-	i2c_del_adapter(&i2c->algo.radeon.bit_adapter);
-	i2c_del_adapter(&i2c->adapter);
-	kfree(i2c);
-}
-
-void radeon_i2c_destroy_dp(struct radeon_i2c_chan *i2c)
-{
-	if (!i2c)
-		return;
-
 	i2c_del_adapter(&i2c->adapter);
 	kfree(i2c);
 }
+9 -13
drivers/gpu/drm/radeon/radeon_irq_kms.c
···
 
 	/* Disable *all* interrupts */
 	rdev->irq.sw_int = false;
-	for (i = 0; i < 2; i++) {
+	for (i = 0; i < rdev->num_crtc; i++)
 		rdev->irq.crtc_vblank_int[i] = false;
-	}
+	for (i = 0; i < 6; i++)
+		rdev->irq.hpd[i] = false;
 	radeon_irq_set(rdev);
 	/* Clear bits */
 	radeon_irq_process(rdev);
···
 	}
 	/* Disable *all* interrupts */
 	rdev->irq.sw_int = false;
-	for (i = 0; i < 2; i++) {
+	for (i = 0; i < rdev->num_crtc; i++)
 		rdev->irq.crtc_vblank_int[i] = false;
+	for (i = 0; i < 6; i++)
 		rdev->irq.hpd[i] = false;
-	}
 	radeon_irq_set(rdev);
 }
 
 int radeon_irq_kms_init(struct radeon_device *rdev)
 {
 	int r = 0;
-	int num_crtc = 2;
 
-	if (rdev->flags & RADEON_SINGLE_CRTC)
-		num_crtc = 1;
 	spin_lock_init(&rdev->irq.sw_lock);
-	r = drm_vblank_init(rdev->ddev, num_crtc);
+	r = drm_vblank_init(rdev->ddev, rdev->num_crtc);
 	if (r) {
 		return r;
 	}
 	/* enable msi */
 	rdev->msi_enabled = 0;
-	/* MSIs don't seem to work on my rs780;
-	 * not sure about rs880 or other rs780s.
-	 * Needs more investigation.
+	/* MSIs don't seem to work reliably on all IGP
+	 * chips.  Disable MSI on them for now.
 	 */
 	if ((rdev->family >= CHIP_RV380) &&
-	    (rdev->family != CHIP_RS780) &&
-	    (rdev->family != CHIP_RS880)) {
+	    (!(rdev->flags & RADEON_IS_IGP))) {
 		int ret = pci_enable_msi(rdev->pdev);
 		if (!ret) {
 			rdev->msi_enabled = 1;
+8
drivers/gpu/drm/radeon/radeon_legacy_crtc.c
···
 			? RADEON_CRTC2_INTERLACE_EN
 			: 0));
 
+		/* rs4xx chips seem to like to have the crtc enabled when the timing is set */
+		if ((rdev->family == CHIP_RS400) || (rdev->family == CHIP_RS480))
+			crtc2_gen_cntl |= RADEON_CRTC2_EN;
+
 		disp2_merge_cntl = RREG32(RADEON_DISP2_MERGE_CNTL);
 		disp2_merge_cntl &= ~RADEON_DISP2_RGB_OFFSET_EN;
 
···
 			| ((mode->flags & DRM_MODE_FLAG_INTERLACE)
 			? RADEON_CRTC_INTERLACE_EN
 			: 0));
+
+		/* rs4xx chips seem to like to have the crtc enabled when the timing is set */
+		if ((rdev->family == CHIP_RS400) || (rdev->family == CHIP_RS480))
+			crtc_gen_cntl |= RADEON_CRTC_EN;
 
 		crtc_ext_cntl = RREG32(RADEON_CRTC_EXT_CNTL);
 		crtc_ext_cntl |= (RADEON_XCRT_CNT_EN |
+24 -5
drivers/gpu/drm/radeon/radeon_legacy_tv.c
···
 #define NTSC_TV_PLL_N_14 693
 #define NTSC_TV_PLL_P_14 7
 
+#define PAL_TV_PLL_M_14 19
+#define PAL_TV_PLL_N_14 353
+#define PAL_TV_PLL_P_14 5
+
 #define VERT_LEAD_IN_LINES 2
 #define FRAC_BITS 0xe
 #define FRAC_MASK 0x3fff
···
 		630627,             /* defRestart */
 		347,                /* crtcPLL_N */
 		14,                 /* crtcPLL_M */
-		8,                  /* crtcPLL_postDiv */
+		8,                  /* crtcPLL_postDiv */
 		1022,               /* pixToTV */
 	},
+	{ /* PAL timing for 14 Mhz ref clk */
+		800,                /* horResolution */
+		600,                /* verResolution */
+		TV_STD_PAL,         /* standard */
+		1131,               /* horTotal */
+		742,                /* verTotal */
+		813,                /* horStart */
+		840,                /* horSyncStart */
+		633,                /* verSyncStart */
+		708369,             /* defRestart */
+		211,                /* crtcPLL_N */
+		9,                  /* crtcPLL_M */
+		8,                  /* crtcPLL_postDiv */
+		759,                /* pixToTV */
+	},
 };
 
···
 		if (pll->reference_freq == 2700)
 			const_ptr = &available_tv_modes[1];
 		else
-			const_ptr = &available_tv_modes[1]; /* FIX ME */
+			const_ptr = &available_tv_modes[3];
 	}
 	return const_ptr;
 }
···
 			n = PAL_TV_PLL_N_27;
 			p = PAL_TV_PLL_P_27;
 		} else {
-			m = PAL_TV_PLL_M_27;
-			n = PAL_TV_PLL_N_27;
-			p = PAL_TV_PLL_P_27;
+			m = PAL_TV_PLL_M_14;
+			n = PAL_TV_PLL_N_14;
+			p = PAL_TV_PLL_P_14;
 		}
 	}
 
+5 -7
drivers/gpu/drm/radeon/radeon_mode.h
···
 #define RADEON_PLL_USE_FRAC_FB_DIV      (1 << 10)
 #define RADEON_PLL_PREFER_CLOSEST_LOWER (1 << 11)
 #define RADEON_PLL_USE_POST_DIV         (1 << 12)
+#define RADEON_PLL_IS_LCD               (1 << 13)
 
 /* pll algo */
 enum radeon_pll_algo {
···
 	uint32_t pll_in_max;
 	uint32_t pll_out_min;
 	uint32_t pll_out_max;
+	uint32_t lcd_pll_out_min;
+	uint32_t lcd_pll_out_max;
 	uint32_t best_vco;
 
 	/* divider limits */
···
 	enum radeon_pll_algo algo;
 };
 
-struct i2c_algo_radeon_data {
-	struct i2c_adapter bit_adapter;
-	struct i2c_algo_bit_data bit_data;
-};
-
 struct radeon_i2c_chan {
 	struct i2c_adapter adapter;
 	struct drm_device *dev;
 	union {
+		struct i2c_algo_bit_data bit;
 		struct i2c_algo_dp_aux_data dp;
-		struct i2c_algo_radeon_data radeon;
 	} algo;
 	struct radeon_i2c_bus_rec rec;
 };
···
 	struct drm_display_mode native_mode;
 	void *enc_priv;
 	int hdmi_offset;
+	int hdmi_config_offset;
 	int hdmi_audio_workaround;
 	int hdmi_buffer_status;
 };
···
 					     struct radeon_i2c_bus_rec *rec,
 					     const char *name);
 extern void radeon_i2c_destroy(struct radeon_i2c_chan *i2c);
-extern void radeon_i2c_destroy_dp(struct radeon_i2c_chan *i2c);
 extern void radeon_i2c_get_byte(struct radeon_i2c_chan *i2c_bus,
 				u8 slave_addr,
 				u8 addr,
+4 -2
drivers/gpu/drm/radeon/radeon_object.c
···
 		return 0;
 	}
 	radeon_ttm_placement_from_domain(bo, domain);
-	/* force to pin into visible video ram */
-	bo->placement.lpfn = bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;
+	if (domain == RADEON_GEM_DOMAIN_VRAM) {
+		/* force to pin into visible video ram */
+		bo->placement.lpfn = bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;
+	}
 	for (i = 0; i < bo->placement.num_placement; i++)
 		bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;
 	r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+37 -9
drivers/gpu/drm/radeon/radeon_pm.c
···
 #define RADEON_RECLOCK_DELAY_MS 200
 #define RADEON_WAIT_VBLANK_TIMEOUT 200
 
+static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish);
 static void radeon_pm_set_clocks_locked(struct radeon_device *rdev);
 static void radeon_pm_set_clocks(struct radeon_device *rdev);
 static void radeon_pm_idle_work_handler(struct work_struct *work);
···
 		  rdev->pm.requested_power_state->non_clock_info.pcie_lanes);
 }
 
+static inline void radeon_sync_with_vblank(struct radeon_device *rdev)
+{
+	if (rdev->pm.active_crtcs) {
+		rdev->pm.vblank_sync = false;
+		wait_event_timeout(
+			rdev->irq.vblank_queue, rdev->pm.vblank_sync,
+			msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));
+	}
+}
+
 static void radeon_set_power_state(struct radeon_device *rdev)
 {
 	/* if *_clock_mode are the same, *_power_state are as well */
···
 		  rdev->pm.requested_clock_mode->sclk,
 		  rdev->pm.requested_clock_mode->mclk,
 		  rdev->pm.requested_power_state->non_clock_info.pcie_lanes);
+
 	/* set pcie lanes */
+	/* TODO */
+
 	/* set voltage */
+	/* TODO */
+
 	/* set engine clock */
+	radeon_sync_with_vblank(rdev);
+	radeon_pm_debug_check_in_vbl(rdev, false);
 	radeon_set_engine_clock(rdev, rdev->pm.requested_clock_mode->sclk);
+	radeon_pm_debug_check_in_vbl(rdev, true);
+
+#if 0
 	/* set memory clock */
+	if (rdev->asic->set_memory_clock) {
+		radeon_sync_with_vblank(rdev);
+		radeon_pm_debug_check_in_vbl(rdev, false);
+		radeon_set_memory_clock(rdev, rdev->pm.requested_clock_mode->mclk);
+		radeon_pm_debug_check_in_vbl(rdev, true);
+	}
+#endif
 
 	rdev->pm.current_power_state = rdev->pm.requested_power_state;
 	rdev->pm.current_clock_mode = rdev->pm.requested_clock_mode;
···
 	return 0;
 }
 
+void radeon_pm_fini(struct radeon_device *rdev)
+{
+	if (rdev->pm.i2c_bus)
+		radeon_i2c_destroy(rdev->pm.i2c_bus);
+}
+
 void radeon_pm_compute_clocks(struct radeon_device *rdev)
 {
 	struct drm_device *ddev = rdev->ddev;
···
 	list_for_each_entry(connector,
 		&ddev->mode_config.connector_list, head) {
 		if (connector->encoder &&
-		    connector->dpms != DRM_MODE_DPMS_OFF) {
+		    connector->encoder->crtc &&
+		    connector->dpms != DRM_MODE_DPMS_OFF) {
 			radeon_crtc = to_radeon_crtc(connector->encoder->crtc);
 			rdev->pm.active_crtcs |= (1 << radeon_crtc->crtc_id);
 			++count;
···
 		break;
 	}
 
-	/* check if we are in vblank */
-	radeon_pm_debug_check_in_vbl(rdev, false);
 	radeon_set_power_state(rdev);
-	radeon_pm_debug_check_in_vbl(rdev, true);
 	rdev->pm.planned_action = PM_ACTION_NONE;
 }
···
 		rdev->pm.req_vblank |= (1 << 1);
 		drm_vblank_get(rdev->ddev, 1);
 	}
-	if (rdev->pm.active_crtcs)
-		wait_event_interruptible_timeout(
-			rdev->irq.vblank_queue, 0,
-			msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));
+	radeon_pm_set_clocks_locked(rdev);
 	if (rdev->pm.req_vblank & (1 << 0)) {
 		rdev->pm.req_vblank &= ~(1 << 0);
 		drm_vblank_put(rdev->ddev, 0);
···
 		drm_vblank_put(rdev->ddev, 1);
 	}
 
-	radeon_pm_set_clocks_locked(rdev);
 	mutex_unlock(&rdev->cp.mutex);
 }
+1
drivers/gpu/drm/radeon/radeon_reg.h
···
 #       define RADEON_TVPLL_PWRMGT_OFF          (1 << 30)
 #       define RADEON_TVCLK_TURNOFF             (1 << 31)
 #define RADEON_PLL_PWRMGT_CNTL              0x0015 /* PLL */
+#       define RADEON_PM_MODE_SEL               (1 << 13)
 #       define RADEON_TCL_BYPASS_DISABLE        (1 << 20)
 #define RADEON_CLR_CMP_CLR_3D               0x1a24
 #define RADEON_CLR_CMP_CLR_DST              0x15c8
-75
drivers/gpu/drm/radeon/reg_srcs/r600
···
 0x00028408 VGT_INDX_OFFSET
 0x00028AA0 VGT_INSTANCE_STEP_RATE_0
 0x00028AA4 VGT_INSTANCE_STEP_RATE_1
-0x000088C0 VGT_LAST_COPY_STATE
 0x00028400 VGT_MAX_VTX_INDX
-0x000088D8 VGT_MC_LAT_CNTL
 0x00028404 VGT_MIN_VTX_INDX
 0x00028A94 VGT_MULTI_PRIM_IB_RESET_EN
 0x0002840C VGT_MULTI_PRIM_IB_RESET_INDX
 0x00008970 VGT_NUM_INDICES
 0x00008974 VGT_NUM_INSTANCES
 0x00028A10 VGT_OUTPUT_PATH_CNTL
-0x00028C5C VGT_OUT_DEALLOC_CNTL
 0x00028A84 VGT_PRIMITIVEID_EN
 0x00008958 VGT_PRIMITIVE_TYPE
 0x00028AB4 VGT_REUSE_OFF
-0x00028C58 VGT_VERTEX_REUSE_BLOCK_CNTL
 0x00028AB8 VGT_VTX_CNT_EN
 0x000088B0 VGT_VTX_VECT_EJECT_REG
 0x00028810 PA_CL_CLIP_CNTL
···
 0x00028E00 PA_SU_POLY_OFFSET_FRONT_SCALE
 0x00028814 PA_SU_SC_MODE_CNTL
 0x00028C08 PA_SU_VTX_CNTL
-0x00008C00 SQ_CONFIG
 0x00008C04 SQ_GPR_RESOURCE_MGMT_1
 0x00008C08 SQ_GPR_RESOURCE_MGMT_2
 0x00008C10 SQ_STACK_RESOURCE_MGMT_1
···
 0x000283FC SQ_VTX_SEMANTIC_31
 0x000288E0 SQ_VTX_SEMANTIC_CLEAR
 0x0003CFF4 SQ_VTX_START_INST_LOC
-0x0003C000 SQ_TEX_SAMPLER_WORD0_0
-0x0003C004 SQ_TEX_SAMPLER_WORD1_0
-0x0003C008 SQ_TEX_SAMPLER_WORD2_0
-0x00030000 SQ_ALU_CONSTANT0_0
-0x00030004 SQ_ALU_CONSTANT1_0
-0x00030008 SQ_ALU_CONSTANT2_0
-0x0003000C SQ_ALU_CONSTANT3_0
-0x0003E380 SQ_BOOL_CONST_0
-0x0003E384 SQ_BOOL_CONST_1
-0x0003E388 SQ_BOOL_CONST_2
-0x0003E200 SQ_LOOP_CONST_0
-0x0003E200 SQ_LOOP_CONST_DX10_0
 0x000281C0 SQ_ALU_CONST_BUFFER_SIZE_GS_0
 0x000281C4 SQ_ALU_CONST_BUFFER_SIZE_GS_1
 0x000281C8 SQ_ALU_CONST_BUFFER_SIZE_GS_2
···
 0x000281B4 SQ_ALU_CONST_BUFFER_SIZE_VS_13
 0x000281B8 SQ_ALU_CONST_BUFFER_SIZE_VS_14
 0x000281BC SQ_ALU_CONST_BUFFER_SIZE_VS_15
-0x000289C0 SQ_ALU_CONST_CACHE_GS_0
-0x000289C4 SQ_ALU_CONST_CACHE_GS_1
-0x000289C8 SQ_ALU_CONST_CACHE_GS_2
-0x000289CC SQ_ALU_CONST_CACHE_GS_3
-0x000289D0 SQ_ALU_CONST_CACHE_GS_4
-0x000289D4 SQ_ALU_CONST_CACHE_GS_5
-0x000289D8 SQ_ALU_CONST_CACHE_GS_6
-0x000289DC SQ_ALU_CONST_CACHE_GS_7
-0x000289E0 SQ_ALU_CONST_CACHE_GS_8
-0x000289E4 SQ_ALU_CONST_CACHE_GS_9
-0x000289E8 SQ_ALU_CONST_CACHE_GS_10
-0x000289EC SQ_ALU_CONST_CACHE_GS_11
-0x000289F0 SQ_ALU_CONST_CACHE_GS_12
-0x000289F4 SQ_ALU_CONST_CACHE_GS_13
-0x000289F8 SQ_ALU_CONST_CACHE_GS_14
-0x000289FC SQ_ALU_CONST_CACHE_GS_15
-0x00028940 SQ_ALU_CONST_CACHE_PS_0
-0x00028944 SQ_ALU_CONST_CACHE_PS_1
-0x00028948 SQ_ALU_CONST_CACHE_PS_2
-0x0002894C SQ_ALU_CONST_CACHE_PS_3
-0x00028950 SQ_ALU_CONST_CACHE_PS_4
-0x00028954 SQ_ALU_CONST_CACHE_PS_5
-0x00028958 SQ_ALU_CONST_CACHE_PS_6
-0x0002895C SQ_ALU_CONST_CACHE_PS_7
-0x00028960 SQ_ALU_CONST_CACHE_PS_8
-0x00028964 SQ_ALU_CONST_CACHE_PS_9
-0x00028968 SQ_ALU_CONST_CACHE_PS_10
-0x0002896C SQ_ALU_CONST_CACHE_PS_11
-0x00028970 SQ_ALU_CONST_CACHE_PS_12
-0x00028974 SQ_ALU_CONST_CACHE_PS_13
-0x00028978 SQ_ALU_CONST_CACHE_PS_14
-0x0002897C SQ_ALU_CONST_CACHE_PS_15
-0x00028980 SQ_ALU_CONST_CACHE_VS_0
-0x00028984 SQ_ALU_CONST_CACHE_VS_1
-0x00028988 SQ_ALU_CONST_CACHE_VS_2
-0x0002898C SQ_ALU_CONST_CACHE_VS_3
-0x00028990 SQ_ALU_CONST_CACHE_VS_4
-0x00028994 SQ_ALU_CONST_CACHE_VS_5
-0x00028998 SQ_ALU_CONST_CACHE_VS_6
-0x0002899C SQ_ALU_CONST_CACHE_VS_7
-0x000289A0 SQ_ALU_CONST_CACHE_VS_8
-0x000289A4 SQ_ALU_CONST_CACHE_VS_9
-0x000289A8 SQ_ALU_CONST_CACHE_VS_10
-0x000289AC SQ_ALU_CONST_CACHE_VS_11
-0x000289B0 SQ_ALU_CONST_CACHE_VS_12
-0x000289B4 SQ_ALU_CONST_CACHE_VS_13
-0x000289B8 SQ_ALU_CONST_CACHE_VS_14
-0x000289BC SQ_ALU_CONST_CACHE_VS_15
 0x000288D8 SQ_PGM_CF_OFFSET_ES
 0x000288DC SQ_PGM_CF_OFFSET_FS
 0x000288D4 SQ_PGM_CF_OFFSET_GS
···
 0x00028438 SX_ALPHA_REF
 0x00028410 SX_ALPHA_TEST_CONTROL
 0x00028350 SX_MISC
-0x0000A020 SMX_DC_CTL0
-0x0000A024 SMX_DC_CTL1
-0x0000A028 SMX_DC_CTL2
-0x00009608 TC_CNTL
 0x00009604 TC_INVALIDATE
-0x00009490 TD_CNTL
 0x00009400 TD_FILTER4
 0x00009404 TD_FILTER4_1
 0x00009408 TD_FILTER4_2
···
 0x00028428 CB_FOG_GREEN
 0x00028424 CB_FOG_RED
 0x00008040 WAIT_UNTIL
-0x00008950 CC_GC_SHADER_PIPE_CONFIG
-0x00008954 GC_USER_SHADER_PIPE_CONFIG
 0x00009714 VC_ENHANCE
 0x00009830 DB_DEBUG
 0x00009838 DB_WATERMARKS
 0x00028D28 DB_SRESULTS_COMPARE_STATE0
 0x00028D44 DB_ALPHA_TO_MASK
-0x00009504 TA_CNTL
 0x00009700 VC_CNTL
-0x00009718 VC_CONFIG
-0x0000A02C SMX_DC_MC_INTF_CTL
+6 -1
drivers/gpu/drm/radeon/rs400.c
···
 #include <linux/seq_file.h>
 #include <drm/drmP.h>
 #include "radeon.h"
+#include "radeon_asic.h"
 #include "rs400d.h"
 
 /* This files gather functions specifics to : rs400,rs480 */
···
 
 void rs400_gart_fini(struct radeon_device *rdev)
 {
+	radeon_gart_fini(rdev);
 	rs400_gart_disable(rdev);
 	radeon_gart_table_ram_free(rdev);
-	radeon_gart_fini(rdev);
 }
 
 int rs400_gart_set_page(struct radeon_device *rdev, int i, uint64_t addr)
···
 	base = (RREG32(RADEON_NB_TOM) & 0xffff) << 16;
 	radeon_vram_location(rdev, &rdev->mc, base);
 	radeon_gtt_location(rdev, &rdev->mc);
+	radeon_update_bandwidth_info(rdev);
 }
 
 uint32_t rs400_mc_rreg(struct radeon_device *rdev, uint32_t reg)
···
 {
 	int r;
 
+	r100_set_common_regs(rdev);
+
 	rs400_mc_program(rdev);
 	/* Resume clock */
 	r300_clock_startup(rdev);
···
 
 void rs400_fini(struct radeon_device *rdev)
 {
+	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
+31 -2
drivers/gpu/drm/radeon/rs600.c
···
  */
 #include "drmP.h"
 #include "radeon.h"
+#include "radeon_asic.h"
 #include "atom.h"
 #include "rs600d.h"
 
···
 
 void rs600_gart_fini(struct radeon_device *rdev)
 {
+	radeon_gart_fini(rdev);
 	rs600_gart_disable(rdev);
 	radeon_gart_table_vram_free(rdev);
-	radeon_gart_fini(rdev);
 }
 
 #define R600_PTE_VALID     (1 << 0)
···
 	/* Vertical blank interrupts */
 	if (G_007EDC_LB_D1_VBLANK_INTERRUPT(r500_disp_int)) {
 		drm_handle_vblank(rdev->ddev, 0);
+		rdev->pm.vblank_sync = true;
 		wake_up(&rdev->irq.vblank_queue);
 	}
 	if (G_007EDC_LB_D2_VBLANK_INTERRUPT(r500_disp_int)) {
 		drm_handle_vblank(rdev->ddev, 1);
+		rdev->pm.vblank_sync = true;
 		wake_up(&rdev->irq.vblank_queue);
 	}
 	if (G_007EDC_DC_HOT_PLUG_DETECT1_INTERRUPT(r500_disp_int)) {
···
 	rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
 	base = RREG32_MC(R_000004_MC_FB_LOCATION);
 	base = G_000004_MC_FB_START(base) << 16;
+	rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev);
 	radeon_vram_location(rdev, &rdev->mc, base);
 	radeon_gtt_location(rdev, &rdev->mc);
+	radeon_update_bandwidth_info(rdev);
 }
 
 void rs600_bandwidth_update(struct radeon_device *rdev)
 {
-	/* FIXME: implement, should this be like rs690 ? */
+	struct drm_display_mode *mode0 = NULL;
+	struct drm_display_mode *mode1 = NULL;
+	u32 d1mode_priority_a_cnt, d2mode_priority_a_cnt;
+	/* FIXME: implement full support */
+
+	radeon_update_display_priority(rdev);
+
+	if (rdev->mode_info.crtcs[0]->base.enabled)
+		mode0 = &rdev->mode_info.crtcs[0]->base.mode;
+	if (rdev->mode_info.crtcs[1]->base.enabled)
+		mode1 = &rdev->mode_info.crtcs[1]->base.mode;
+
+	rs690_line_buffer_adjust(rdev, mode0, mode1);
+
+	if (rdev->disp_priority == 2) {
+		d1mode_priority_a_cnt = RREG32(R_006548_D1MODE_PRIORITY_A_CNT);
+		d2mode_priority_a_cnt = RREG32(R_006D48_D2MODE_PRIORITY_A_CNT);
+		d1mode_priority_a_cnt |= S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(1);
+		d2mode_priority_a_cnt |= S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(1);
+		WREG32(R_006548_D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt);
+		WREG32(R_00654C_D1MODE_PRIORITY_B_CNT, d1mode_priority_a_cnt);
+		WREG32(R_006D48_D2MODE_PRIORITY_A_CNT, d2mode_priority_a_cnt);
+		WREG32(R_006D4C_D2MODE_PRIORITY_B_CNT, d2mode_priority_a_cnt);
+	}
 }
 
 uint32_t rs600_mc_rreg(struct radeon_device *rdev, uint32_t reg)
···
 
 void rs600_fini(struct radeon_device *rdev)
 {
+	radeon_pm_fini(rdev);
 	r100_cp_fini(rdev);
 	r100_wb_fini(rdev);
 	r100_ib_fini(rdev);
+53
drivers/gpu/drm/radeon/rs600d.h
···
 #define   G_00016C_INVALIDATE_L1_TLB(x)                (((x) >> 20) & 0x1)
 #define   C_00016C_INVALIDATE_L1_TLB                   0xFFEFFFFF
 
+#define R_006548_D1MODE_PRIORITY_A_CNT               0x006548
+#define   S_006548_D1MODE_PRIORITY_MARK_A(x)           (((x) & 0x7FFF) << 0)
+#define   G_006548_D1MODE_PRIORITY_MARK_A(x)           (((x) >> 0) & 0x7FFF)
+#define   C_006548_D1MODE_PRIORITY_MARK_A              0xFFFF8000
+#define   S_006548_D1MODE_PRIORITY_A_OFF(x)            (((x) & 0x1) << 16)
+#define   G_006548_D1MODE_PRIORITY_A_OFF(x)            (((x) >> 16) & 0x1)
+#define   C_006548_D1MODE_PRIORITY_A_OFF               0xFFFEFFFF
+#define   S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(x)      (((x) & 0x1) << 20)
+#define   G_006548_D1MODE_PRIORITY_A_ALWAYS_ON(x)      (((x) >> 20) & 0x1)
+#define   C_006548_D1MODE_PRIORITY_A_ALWAYS_ON         0xFFEFFFFF
+#define   S_006548_D1MODE_PRIORITY_A_FORCE_MASK(x)     (((x) & 0x1) << 24)
+#define   G_006548_D1MODE_PRIORITY_A_FORCE_MASK(x)     (((x) >> 24) & 0x1)
+#define   C_006548_D1MODE_PRIORITY_A_FORCE_MASK        0xFEFFFFFF
+#define R_00654C_D1MODE_PRIORITY_B_CNT               0x00654C
+#define   S_00654C_D1MODE_PRIORITY_MARK_B(x)           (((x) & 0x7FFF) << 0)
+#define   G_00654C_D1MODE_PRIORITY_MARK_B(x)           (((x) >> 0) & 0x7FFF)
+#define   C_00654C_D1MODE_PRIORITY_MARK_B              0xFFFF8000
+#define   S_00654C_D1MODE_PRIORITY_B_OFF(x)            (((x) & 0x1) << 16)
+#define   G_00654C_D1MODE_PRIORITY_B_OFF(x)            (((x) >> 16) & 0x1)
+#define   C_00654C_D1MODE_PRIORITY_B_OFF               0xFFFEFFFF
+#define   S_00654C_D1MODE_PRIORITY_B_ALWAYS_ON(x)      (((x) & 0x1) << 20)
+#define   G_00654C_D1MODE_PRIORITY_B_ALWAYS_ON(x)      (((x) >> 20) & 0x1)
+#define   C_00654C_D1MODE_PRIORITY_B_ALWAYS_ON         0xFFEFFFFF
+#define   S_00654C_D1MODE_PRIORITY_B_FORCE_MASK(x)     (((x) & 0x1) << 24)
+#define   G_00654C_D1MODE_PRIORITY_B_FORCE_MASK(x)     (((x) >> 24) & 0x1)
+#define   C_00654C_D1MODE_PRIORITY_B_FORCE_MASK        0xFEFFFFFF
+#define R_006D48_D2MODE_PRIORITY_A_CNT               0x006D48
+#define   S_006D48_D2MODE_PRIORITY_MARK_A(x)           (((x) & 0x7FFF) << 0)
+#define   G_006D48_D2MODE_PRIORITY_MARK_A(x)           (((x) >> 0) & 0x7FFF)
+#define   C_006D48_D2MODE_PRIORITY_MARK_A              0xFFFF8000
+#define   S_006D48_D2MODE_PRIORITY_A_OFF(x)            (((x) & 0x1) << 16)
+#define   G_006D48_D2MODE_PRIORITY_A_OFF(x)            (((x) >> 16) & 0x1)
+#define   C_006D48_D2MODE_PRIORITY_A_OFF               0xFFFEFFFF
+#define   S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(x)      (((x) & 0x1) << 20)
+#define   G_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(x)      (((x) >> 20) & 0x1)
+#define   C_006D48_D2MODE_PRIORITY_A_ALWAYS_ON         0xFFEFFFFF
+#define   S_006D48_D2MODE_PRIORITY_A_FORCE_MASK(x)     (((x) & 0x1) << 24)
+#define   G_006D48_D2MODE_PRIORITY_A_FORCE_MASK(x)     (((x) >> 24) & 0x1)
+#define   C_006D48_D2MODE_PRIORITY_A_FORCE_MASK        0xFEFFFFFF
+#define R_006D4C_D2MODE_PRIORITY_B_CNT               0x006D4C
+#define   S_006D4C_D2MODE_PRIORITY_MARK_B(x)           (((x) & 0x7FFF) << 0)
+#define   G_006D4C_D2MODE_PRIORITY_MARK_B(x)           (((x) >> 0) & 0x7FFF)
+#define   C_006D4C_D2MODE_PRIORITY_MARK_B              0xFFFF8000
+#define   S_006D4C_D2MODE_PRIORITY_B_OFF(x)            (((x) & 0x1) << 16)
+#define   G_006D4C_D2MODE_PRIORITY_B_OFF(x)            (((x) >> 16) & 0x1)
+#define   C_006D4C_D2MODE_PRIORITY_B_OFF               0xFFFEFFFF
+#define   S_006D4C_D2MODE_PRIORITY_B_ALWAYS_ON(x)      (((x) & 0x1) << 20)
+#define   G_006D4C_D2MODE_PRIORITY_B_ALWAYS_ON(x)      (((x) >> 20) & 0x1)
+#define   C_006D4C_D2MODE_PRIORITY_B_ALWAYS_ON         0xFFEFFFFF
+#define   S_006D4C_D2MODE_PRIORITY_B_FORCE_MASK(x)     (((x) & 0x1) << 24)
+#define   G_006D4C_D2MODE_PRIORITY_B_FORCE_MASK(x)     (((x) >> 24) & 0x1)
+#define   C_006D4C_D2MODE_PRIORITY_B_FORCE_MASK        0xFEFFFFFF
+
 #endif
+72 -50
drivers/gpu/drm/radeon/rs690.c
··· 27 27 */ 28 28 #include "drmP.h" 29 29 #include "radeon.h" 30 + #include "radeon_asic.h" 30 31 #include "atom.h" 31 32 #include "rs690d.h" 32 33 ··· 58 57 } 59 58 } 60 59 60 + union igp_info { 61 + struct _ATOM_INTEGRATED_SYSTEM_INFO info; 62 + struct _ATOM_INTEGRATED_SYSTEM_INFO_V2 info_v2; 63 + }; 64 + 61 65 void rs690_pm_info(struct radeon_device *rdev) 62 66 { 63 67 int index = GetIndexIntoMasterTable(DATA, IntegratedSystemInfo); 64 - struct _ATOM_INTEGRATED_SYSTEM_INFO *info; 65 - struct _ATOM_INTEGRATED_SYSTEM_INFO_V2 *info_v2; 66 - void *ptr; 68 + union igp_info *info; 67 69 uint16_t data_offset; 68 70 uint8_t frev, crev; 69 71 fixed20_12 tmp; 70 72 71 - atom_parse_data_header(rdev->mode_info.atom_context, index, NULL, 72 - &frev, &crev, &data_offset); 73 - ptr = rdev->mode_info.atom_context->bios + data_offset; 74 - info = (struct _ATOM_INTEGRATED_SYSTEM_INFO *)ptr; 75 - info_v2 = (struct _ATOM_INTEGRATED_SYSTEM_INFO_V2 *)ptr; 76 - /* Get various system informations from bios */ 77 - switch (crev) { 78 - case 1: 79 - tmp.full = rfixed_const(100); 80 - rdev->pm.igp_sideport_mclk.full = rfixed_const(info->ulBootUpMemoryClock); 81 - rdev->pm.igp_sideport_mclk.full = rfixed_div(rdev->pm.igp_sideport_mclk, tmp); 82 - rdev->pm.igp_system_mclk.full = rfixed_const(le16_to_cpu(info->usK8MemoryClock)); 83 - rdev->pm.igp_ht_link_clk.full = rfixed_const(le16_to_cpu(info->usFSBClock)); 84 - rdev->pm.igp_ht_link_width.full = rfixed_const(info->ucHTLinkWidth); 85 - break; 86 - case 2: 87 - tmp.full = rfixed_const(100); 88 - rdev->pm.igp_sideport_mclk.full = rfixed_const(info_v2->ulBootUpSidePortClock); 89 - rdev->pm.igp_sideport_mclk.full = rfixed_div(rdev->pm.igp_sideport_mclk, tmp); 90 - rdev->pm.igp_system_mclk.full = rfixed_const(info_v2->ulBootUpUMAClock); 91 - rdev->pm.igp_system_mclk.full = rfixed_div(rdev->pm.igp_system_mclk, tmp); 92 - rdev->pm.igp_ht_link_clk.full = rfixed_const(info_v2->ulHTLinkFreq); 93 - rdev->pm.igp_ht_link_clk.full = 
rfixed_div(rdev->pm.igp_ht_link_clk, tmp); 94 - rdev->pm.igp_ht_link_width.full = rfixed_const(le16_to_cpu(info_v2->usMinHTLinkWidth)); 95 - break; 96 - default: 73 + if (atom_parse_data_header(rdev->mode_info.atom_context, index, NULL, 74 + &frev, &crev, &data_offset)) { 75 + info = (union igp_info *)(rdev->mode_info.atom_context->bios + data_offset); 76 + 77 + /* Get various system informations from bios */ 78 + switch (crev) { 79 + case 1: 80 + tmp.full = rfixed_const(100); 81 + rdev->pm.igp_sideport_mclk.full = rfixed_const(info->info.ulBootUpMemoryClock); 82 + rdev->pm.igp_sideport_mclk.full = rfixed_div(rdev->pm.igp_sideport_mclk, tmp); 83 + rdev->pm.igp_system_mclk.full = rfixed_const(le16_to_cpu(info->info.usK8MemoryClock)); 84 + rdev->pm.igp_ht_link_clk.full = rfixed_const(le16_to_cpu(info->info.usFSBClock)); 85 + rdev->pm.igp_ht_link_width.full = rfixed_const(info->info.ucHTLinkWidth); 86 + break; 87 + case 2: 88 + tmp.full = rfixed_const(100); 89 + rdev->pm.igp_sideport_mclk.full = rfixed_const(info->info_v2.ulBootUpSidePortClock); 90 + rdev->pm.igp_sideport_mclk.full = rfixed_div(rdev->pm.igp_sideport_mclk, tmp); 91 + rdev->pm.igp_system_mclk.full = rfixed_const(info->info_v2.ulBootUpUMAClock); 92 + rdev->pm.igp_system_mclk.full = rfixed_div(rdev->pm.igp_system_mclk, tmp); 93 + rdev->pm.igp_ht_link_clk.full = rfixed_const(info->info_v2.ulHTLinkFreq); 94 + rdev->pm.igp_ht_link_clk.full = rfixed_div(rdev->pm.igp_ht_link_clk, tmp); 95 + rdev->pm.igp_ht_link_width.full = rfixed_const(le16_to_cpu(info->info_v2.usMinHTLinkWidth)); 96 + break; 97 + default: 98 + tmp.full = rfixed_const(100); 99 + /* We assume the slower possible clock ie worst case */ 100 + /* DDR 333Mhz */ 101 + rdev->pm.igp_sideport_mclk.full = rfixed_const(333); 102 + /* FIXME: system clock ? 
*/ 103 + rdev->pm.igp_system_mclk.full = rfixed_const(100); 104 + rdev->pm.igp_system_mclk.full = rfixed_div(rdev->pm.igp_system_mclk, tmp); 105 + rdev->pm.igp_ht_link_clk.full = rfixed_const(200); 106 + rdev->pm.igp_ht_link_width.full = rfixed_const(8); 107 + DRM_ERROR("No integrated system info for your GPU, using safe default\n"); 108 + break; 109 + } 110 + } else { 97 111 tmp.full = rfixed_const(100); 98 112 /* We assume the slower possible clock ie worst case */ 99 113 /* DDR 333Mhz */ ··· 119 103 rdev->pm.igp_ht_link_clk.full = rfixed_const(200); 120 104 rdev->pm.igp_ht_link_width.full = rfixed_const(8); 121 105 DRM_ERROR("No integrated system info for your GPU, using safe default\n"); 122 - break; 123 106 } 124 107 /* Compute various bandwidth */ 125 108 /* k8_bandwidth = (memory_clk / 2) * 2 * 8 * 0.5 = memory_clk * 4 */ ··· 146 131 147 132 void rs690_mc_init(struct radeon_device *rdev) 148 133 { 149 - fixed20_12 a; 150 134 u64 base; 151 135 152 136 rs400_gart_adjust_size(rdev); ··· 159 145 base = RREG32_MC(R_000100_MCCFG_FB_LOCATION); 160 146 base = G_000100_MC_FB_START(base) << 16; 161 147 rs690_pm_info(rdev); 162 - /* FIXME: we should enforce default clock in case GPU is not in 163 - * default setup 164 - */ 165 - a.full = rfixed_const(100); 166 - rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk); 167 - rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 168 - a.full = rfixed_const(16); 169 - /* core_bandwidth = sclk(Mhz) * 16 */ 170 - rdev->pm.core_bandwidth.full = rfixed_div(rdev->pm.sclk, a); 171 148 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 172 149 radeon_vram_location(rdev, &rdev->mc, base); 173 150 radeon_gtt_location(rdev, &rdev->mc); 151 + radeon_update_bandwidth_info(rdev); 174 152 } 175 153 176 154 void rs690_line_buffer_adjust(struct radeon_device *rdev, ··· 400 394 struct drm_display_mode *mode1 = NULL; 401 395 struct rs690_watermark wm0; 402 396 struct rs690_watermark wm1; 403 - u32 tmp; 397 + u32 tmp, 
d1mode_priority_a_cnt, d2mode_priority_a_cnt; 404 398 fixed20_12 priority_mark02, priority_mark12, fill_rate; 405 399 fixed20_12 a, b; 400 + 401 + radeon_update_display_priority(rdev); 406 402 407 403 if (rdev->mode_info.crtcs[0]->base.enabled) 408 404 mode0 = &rdev->mode_info.crtcs[0]->base.mode; ··· 415 407 * modes if the user specifies HIGH for displaypriority 416 408 * option. 417 409 */ 418 - if (rdev->disp_priority == 2) { 410 + if ((rdev->disp_priority == 2) && 411 + ((rdev->family == CHIP_RS690) || (rdev->family == CHIP_RS740))) { 419 412 tmp = RREG32_MC(R_000104_MC_INIT_MISC_LAT_TIMER); 420 413 tmp &= C_000104_MC_DISP0R_INIT_LAT; 421 414 tmp &= C_000104_MC_DISP1R_INIT_LAT; ··· 491 482 priority_mark12.full = 0; 492 483 if (wm1.priority_mark_max.full > priority_mark12.full) 493 484 priority_mark12.full = wm1.priority_mark_max.full; 494 - WREG32(R_006548_D1MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark02)); 495 - WREG32(R_00654C_D1MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark02)); 496 - WREG32(R_006D48_D2MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark12)); 497 - WREG32(R_006D4C_D2MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark12)); 485 + d1mode_priority_a_cnt = rfixed_trunc(priority_mark02); 486 + d2mode_priority_a_cnt = rfixed_trunc(priority_mark12); 487 + if (rdev->disp_priority == 2) { 488 + d1mode_priority_a_cnt |= S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(1); 489 + d2mode_priority_a_cnt |= S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(1); 490 + } 491 + WREG32(R_006548_D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt); 492 + WREG32(R_00654C_D1MODE_PRIORITY_B_CNT, d1mode_priority_a_cnt); 493 + WREG32(R_006D48_D2MODE_PRIORITY_A_CNT, d2mode_priority_a_cnt); 494 + WREG32(R_006D4C_D2MODE_PRIORITY_B_CNT, d2mode_priority_a_cnt); 498 495 } else if (mode0) { 499 496 if (rfixed_trunc(wm0.dbpp) > 64) 500 497 a.full = rfixed_mul(wm0.dbpp, wm0.num_line_pair); ··· 527 512 priority_mark02.full = 0; 528 513 if (wm0.priority_mark_max.full > priority_mark02.full) 529 514 
priority_mark02.full = wm0.priority_mark_max.full; 530 - WREG32(R_006548_D1MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark02)); 531 - WREG32(R_00654C_D1MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark02)); 515 + d1mode_priority_a_cnt = rfixed_trunc(priority_mark02); 516 + if (rdev->disp_priority == 2) 517 + d1mode_priority_a_cnt |= S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(1); 518 + WREG32(R_006548_D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt); 519 + WREG32(R_00654C_D1MODE_PRIORITY_B_CNT, d1mode_priority_a_cnt); 532 520 WREG32(R_006D48_D2MODE_PRIORITY_A_CNT, 533 521 S_006D48_D2MODE_PRIORITY_A_OFF(1)); 534 522 WREG32(R_006D4C_D2MODE_PRIORITY_B_CNT, ··· 562 544 priority_mark12.full = 0; 563 545 if (wm1.priority_mark_max.full > priority_mark12.full) 564 546 priority_mark12.full = wm1.priority_mark_max.full; 547 + d2mode_priority_a_cnt = rfixed_trunc(priority_mark12); 548 + if (rdev->disp_priority == 2) 549 + d2mode_priority_a_cnt |= S_006D48_D2MODE_PRIORITY_A_ALWAYS_ON(1); 565 550 WREG32(R_006548_D1MODE_PRIORITY_A_CNT, 566 551 S_006548_D1MODE_PRIORITY_A_OFF(1)); 567 552 WREG32(R_00654C_D1MODE_PRIORITY_B_CNT, 568 553 S_00654C_D1MODE_PRIORITY_B_OFF(1)); 569 - WREG32(R_006D48_D2MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark12)); 570 - WREG32(R_006D4C_D2MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark12)); 554 + WREG32(R_006D48_D2MODE_PRIORITY_A_CNT, d2mode_priority_a_cnt); 555 + WREG32(R_006D4C_D2MODE_PRIORITY_B_CNT, d2mode_priority_a_cnt); 571 556 } 572 557 } 573 558 ··· 678 657 679 658 void rs690_fini(struct radeon_device *rdev) 680 659 { 660 + radeon_pm_fini(rdev); 681 661 r100_cp_fini(rdev); 682 662 r100_wb_fini(rdev); 683 663 r100_ib_fini(rdev);
+3
drivers/gpu/drm/radeon/rs690d.h
··· 182 182 #define S_006548_D1MODE_PRIORITY_A_OFF(x) (((x) & 0x1) << 16) 183 183 #define G_006548_D1MODE_PRIORITY_A_OFF(x) (((x) >> 16) & 0x1) 184 184 #define C_006548_D1MODE_PRIORITY_A_OFF 0xFFFEFFFF 185 + #define S_006548_D1MODE_PRIORITY_A_ALWAYS_ON(x) (((x) & 0x1) << 20) 186 + #define G_006548_D1MODE_PRIORITY_A_ALWAYS_ON(x) (((x) >> 20) & 0x1) 187 + #define C_006548_D1MODE_PRIORITY_A_ALWAYS_ON 0xFFEFFFFF 185 188 #define S_006548_D1MODE_PRIORITY_A_FORCE_MASK(x) (((x) & 0x1) << 24) 186 189 #define G_006548_D1MODE_PRIORITY_A_FORCE_MASK(x) (((x) >> 24) & 0x1) 187 190 #define C_006548_D1MODE_PRIORITY_A_FORCE_MASK 0xFEFFFFFF
+28 -17
drivers/gpu/drm/radeon/rv515.c
··· 29 29 #include "drmP.h" 30 30 #include "rv515d.h" 31 31 #include "radeon.h" 32 + #include "radeon_asic.h" 32 33 #include "atom.h" 33 34 #include "rv515_reg_safe.h" 34 35 ··· 280 279 281 280 void rv515_mc_init(struct radeon_device *rdev) 282 281 { 283 - fixed20_12 a; 284 282 285 283 rv515_vram_get_type(rdev); 286 284 r100_vram_init_sizes(rdev); 287 285 radeon_vram_location(rdev, &rdev->mc, 0); 288 286 if (!(rdev->flags & RADEON_IS_AGP)) 289 287 radeon_gtt_location(rdev, &rdev->mc); 290 - /* FIXME: we should enforce default clock in case GPU is not in 291 - * default setup 292 - */ 293 - a.full = rfixed_const(100); 294 - rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk); 295 - rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 288 + radeon_update_bandwidth_info(rdev); 296 289 } 297 290 298 291 uint32_t rv515_mc_rreg(struct radeon_device *rdev, uint32_t reg) ··· 534 539 535 540 void rv515_fini(struct radeon_device *rdev) 536 541 { 542 + radeon_pm_fini(rdev); 537 543 r100_cp_fini(rdev); 538 544 r100_wb_fini(rdev); 539 545 r100_ib_fini(rdev); ··· 1016 1020 struct drm_display_mode *mode1 = NULL; 1017 1021 struct rv515_watermark wm0; 1018 1022 struct rv515_watermark wm1; 1019 - u32 tmp; 1023 + u32 tmp, d1mode_priority_a_cnt, d2mode_priority_a_cnt; 1020 1024 fixed20_12 priority_mark02, priority_mark12, fill_rate; 1021 1025 fixed20_12 a, b; 1022 1026 ··· 1084 1088 priority_mark12.full = 0; 1085 1089 if (wm1.priority_mark_max.full > priority_mark12.full) 1086 1090 priority_mark12.full = wm1.priority_mark_max.full; 1087 - WREG32(D1MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark02)); 1088 - WREG32(D1MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark02)); 1089 - WREG32(D2MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark12)); 1090 - WREG32(D2MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark12)); 1091 + d1mode_priority_a_cnt = rfixed_trunc(priority_mark02); 1092 + d2mode_priority_a_cnt = rfixed_trunc(priority_mark12); 1093 + if (rdev->disp_priority == 2) { 1094 + 
d1mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON; 1095 + d2mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON; 1096 + } 1097 + WREG32(D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt); 1098 + WREG32(D1MODE_PRIORITY_B_CNT, d1mode_priority_a_cnt); 1099 + WREG32(D2MODE_PRIORITY_A_CNT, d2mode_priority_a_cnt); 1100 + WREG32(D2MODE_PRIORITY_B_CNT, d2mode_priority_a_cnt); 1091 1101 } else if (mode0) { 1092 1102 if (rfixed_trunc(wm0.dbpp) > 64) 1093 1103 a.full = rfixed_div(wm0.dbpp, wm0.num_line_pair); ··· 1120 1118 priority_mark02.full = 0; 1121 1119 if (wm0.priority_mark_max.full > priority_mark02.full) 1122 1120 priority_mark02.full = wm0.priority_mark_max.full; 1123 - WREG32(D1MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark02)); 1124 - WREG32(D1MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark02)); 1121 + d1mode_priority_a_cnt = rfixed_trunc(priority_mark02); 1122 + if (rdev->disp_priority == 2) 1123 + d1mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON; 1124 + WREG32(D1MODE_PRIORITY_A_CNT, d1mode_priority_a_cnt); 1125 + WREG32(D1MODE_PRIORITY_B_CNT, d1mode_priority_a_cnt); 1125 1126 WREG32(D2MODE_PRIORITY_A_CNT, MODE_PRIORITY_OFF); 1126 1127 WREG32(D2MODE_PRIORITY_B_CNT, MODE_PRIORITY_OFF); 1127 1128 } else { ··· 1153 1148 priority_mark12.full = 0; 1154 1149 if (wm1.priority_mark_max.full > priority_mark12.full) 1155 1150 priority_mark12.full = wm1.priority_mark_max.full; 1151 + d2mode_priority_a_cnt = rfixed_trunc(priority_mark12); 1152 + if (rdev->disp_priority == 2) 1153 + d2mode_priority_a_cnt |= MODE_PRIORITY_ALWAYS_ON; 1156 1154 WREG32(D1MODE_PRIORITY_A_CNT, MODE_PRIORITY_OFF); 1157 1155 WREG32(D1MODE_PRIORITY_B_CNT, MODE_PRIORITY_OFF); 1158 - WREG32(D2MODE_PRIORITY_A_CNT, rfixed_trunc(priority_mark12)); 1159 - WREG32(D2MODE_PRIORITY_B_CNT, rfixed_trunc(priority_mark12)); 1156 + WREG32(D2MODE_PRIORITY_A_CNT, d2mode_priority_a_cnt); 1157 + WREG32(D2MODE_PRIORITY_B_CNT, d2mode_priority_a_cnt); 1160 1158 } 1161 1159 } 1162 1160 ··· 1168 1160 uint32_t tmp; 1169 1161 struct 
drm_display_mode *mode0 = NULL; 1170 1162 struct drm_display_mode *mode1 = NULL; 1163 + 1164 + radeon_update_display_priority(rdev); 1171 1165 1172 1166 if (rdev->mode_info.crtcs[0]->base.enabled) 1173 1167 mode0 = &rdev->mode_info.crtcs[0]->base.mode; ··· 1180 1170 * modes if the user specifies HIGH for displaypriority 1181 1171 * option. 1182 1172 */ 1183 - if (rdev->disp_priority == 2) { 1173 + if ((rdev->disp_priority == 2) && 1174 + (rdev->family == CHIP_RV515)) { 1184 1175 tmp = RREG32_MC(MC_MISC_LAT_TIMER); 1185 1176 tmp &= ~MC_DISP1R_INIT_LAT_MASK; 1186 1177 tmp &= ~MC_DISP0R_INIT_LAT_MASK;
+23 -8
drivers/gpu/drm/radeon/rv770.c
··· 29 29 #include <linux/platform_device.h> 30 30 #include "drmP.h" 31 31 #include "radeon.h" 32 + #include "radeon_asic.h" 32 33 #include "radeon_drm.h" 33 34 #include "rv770d.h" 34 35 #include "atom.h" ··· 126 125 127 126 void rv770_pcie_gart_fini(struct radeon_device *rdev) 128 127 { 128 + radeon_gart_fini(rdev); 129 129 rv770_pcie_gart_disable(rdev); 130 130 radeon_gart_table_vram_free(rdev); 131 - radeon_gart_fini(rdev); 132 131 } 133 132 134 133 ··· 648 647 649 648 WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable); 650 649 WREG32(CC_GC_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config); 650 + WREG32(GC_USER_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config); 651 651 WREG32(CC_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable); 652 652 653 653 WREG32(CGTS_SYS_TCC_DISABLE, 0); 654 654 WREG32(CGTS_TCC_DISABLE, 0); 655 + WREG32(CGTS_USER_SYS_TCC_DISABLE, 0); 656 + WREG32(CGTS_USER_TCC_DISABLE, 0); 655 657 656 658 num_qd_pipes = 657 659 R7XX_MAX_PIPES - r600_count_pipe_bits((cc_gc_shader_pipe_config & INACTIVE_QD_PIPES_MASK) >> 8); ··· 868 864 869 865 int rv770_mc_init(struct radeon_device *rdev) 870 866 { 871 - fixed20_12 a; 872 867 u32 tmp; 873 868 int chansize, numchan; 874 869 ··· 911 908 rdev->mc.real_vram_size = rdev->mc.aper_size; 912 909 } 913 910 r600_vram_gtt_location(rdev, &rdev->mc); 914 - /* FIXME: we should enforce default clock in case GPU is not in 915 - * default setup 916 - */ 917 - a.full = rfixed_const(100); 918 - rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk); 919 - rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a); 911 + radeon_update_bandwidth_info(rdev); 912 + 920 913 return 0; 921 914 } 922 915 ··· 1012 1013 DRM_ERROR("radeon: failled testing IB (%d).\n", r); 1013 1014 return r; 1014 1015 } 1016 + 1017 + r = r600_audio_init(rdev); 1018 + if (r) { 1019 + dev_err(rdev->dev, "radeon: audio init failed\n"); 1020 + return r; 1021 + } 1022 + 1015 1023 return r; 1016 1024 1017 1025 } ··· 1027 1021 { 1028 1022 int r; 1029 1023 1024 + 
r600_audio_fini(rdev); 1030 1025 /* FIXME: we should wait for ring to be empty */ 1031 1026 r700_cp_stop(rdev); 1032 1027 rdev->cp.ready = false; ··· 1151 1144 } 1152 1145 } 1153 1146 } 1147 + 1148 + r = r600_audio_init(rdev); 1149 + if (r) { 1150 + dev_err(rdev->dev, "radeon: audio init failed\n"); 1151 + return r; 1152 + } 1153 + 1154 1154 return 0; 1155 1155 } 1156 1156 1157 1157 void rv770_fini(struct radeon_device *rdev) 1158 1158 { 1159 + radeon_pm_fini(rdev); 1159 1160 r600_blit_fini(rdev); 1160 1161 r600_cp_fini(rdev); 1161 1162 r600_wb_fini(rdev);
+2 -2
drivers/gpu/drm/ttm/ttm_bo.c
··· 1425 1425 1426 1426 atomic_set(&glob->bo_count, 0); 1427 1427 1428 - kobject_init(&glob->kobj, &ttm_bo_glob_kobj_type); 1429 - ret = kobject_add(&glob->kobj, ttm_get_kobj(), "buffer_objects"); 1428 + ret = kobject_init_and_add( 1429 + &glob->kobj, &ttm_bo_glob_kobj_type, ttm_get_kobj(), "buffer_objects"); 1430 1430 if (unlikely(ret != 0)) 1431 1431 kobject_put(&glob->kobj); 1432 1432 return ret;
+8 -10
drivers/gpu/drm/ttm/ttm_memory.c
··· 260 260 zone->used_mem = 0; 261 261 zone->glob = glob; 262 262 glob->zone_kernel = zone; 263 - kobject_init(&zone->kobj, &ttm_mem_zone_kobj_type); 264 - ret = kobject_add(&zone->kobj, &glob->kobj, zone->name); 263 + ret = kobject_init_and_add( 264 + &zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, zone->name); 265 265 if (unlikely(ret != 0)) { 266 266 kobject_put(&zone->kobj); 267 267 return ret; ··· 296 296 zone->used_mem = 0; 297 297 zone->glob = glob; 298 298 glob->zone_highmem = zone; 299 - kobject_init(&zone->kobj, &ttm_mem_zone_kobj_type); 300 - ret = kobject_add(&zone->kobj, &glob->kobj, zone->name); 299 + ret = kobject_init_and_add( 300 + &zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, zone->name); 301 301 if (unlikely(ret != 0)) { 302 302 kobject_put(&zone->kobj); 303 303 return ret; ··· 343 343 zone->used_mem = 0; 344 344 zone->glob = glob; 345 345 glob->zone_dma32 = zone; 346 - kobject_init(&zone->kobj, &ttm_mem_zone_kobj_type); 347 - ret = kobject_add(&zone->kobj, &glob->kobj, zone->name); 346 + ret = kobject_init_and_add( 347 + &zone->kobj, &ttm_mem_zone_kobj_type, &glob->kobj, zone->name); 348 348 if (unlikely(ret != 0)) { 349 349 kobject_put(&zone->kobj); 350 350 return ret; ··· 365 365 glob->swap_queue = create_singlethread_workqueue("ttm_swap"); 366 366 INIT_WORK(&glob->work, ttm_shrink_work); 367 367 init_waitqueue_head(&glob->queue); 368 - kobject_init(&glob->kobj, &ttm_mem_glob_kobj_type); 369 - ret = kobject_add(&glob->kobj, 370 - ttm_get_kobj(), 371 - "memory_accounting"); 368 + ret = kobject_init_and_add( 369 + &glob->kobj, &ttm_mem_glob_kobj_type, ttm_get_kobj(), "memory_accounting"); 372 370 if (unlikely(ret != 0)) { 373 371 kobject_put(&glob->kobj); 374 372 return ret;
+3 -20
drivers/gpu/drm/ttm/ttm_tt.c
··· 28 28 * Authors: Thomas Hellstrom <thellstrom-at-vmware-dot-com> 29 29 */ 30 30 31 - #include <linux/vmalloc.h> 32 31 #include <linux/sched.h> 33 32 #include <linux/highmem.h> 34 33 #include <linux/pagemap.h> 35 34 #include <linux/file.h> 36 35 #include <linux/swap.h> 37 36 #include "drm_cache.h" 37 + #include "drm_mem_util.h" 38 38 #include "ttm/ttm_module.h" 39 39 #include "ttm/ttm_bo_driver.h" 40 40 #include "ttm/ttm_placement.h" ··· 43 43 44 44 /** 45 45 * Allocates storage for pointers to the pages that back the ttm. 46 - * 47 - * Uses kmalloc if possible. Otherwise falls back to vmalloc. 48 46 */ 49 47 static void ttm_tt_alloc_page_directory(struct ttm_tt *ttm) 50 48 { 51 - unsigned long size = ttm->num_pages * sizeof(*ttm->pages); 52 - ttm->pages = NULL; 53 - 54 - if (size <= PAGE_SIZE) 55 - ttm->pages = kzalloc(size, GFP_KERNEL); 56 - 57 - if (!ttm->pages) { 58 - ttm->pages = vmalloc_user(size); 59 - if (ttm->pages) 60 - ttm->page_flags |= TTM_PAGE_FLAG_VMALLOC; 61 - } 49 + ttm->pages = drm_calloc_large(ttm->num_pages, sizeof(*ttm->pages)); 62 50 } 63 51 64 52 static void ttm_tt_free_page_directory(struct ttm_tt *ttm) 65 53 { 66 - if (ttm->page_flags & TTM_PAGE_FLAG_VMALLOC) { 67 - vfree(ttm->pages); 68 - ttm->page_flags &= ~TTM_PAGE_FLAG_VMALLOC; 69 - } else { 70 - kfree(ttm->pages); 71 - } 54 + drm_free_large(ttm->pages); 72 55 ttm->pages = NULL; 73 56 } 74 57
+1 -1
drivers/gpu/drm/vmwgfx/Kconfig
··· 1 1 config DRM_VMWGFX 2 2 tristate "DRM driver for VMware Virtual GPU" 3 - depends on DRM && PCI 3 + depends on DRM && PCI && FB 4 4 select FB_DEFERRED_IO 5 5 select FB_CFB_FILLRECT 6 6 select FB_CFB_COPYAREA
+4 -1
drivers/hid/hid-gyration.c
··· 53 53 static int gyration_event(struct hid_device *hdev, struct hid_field *field, 54 54 struct hid_usage *usage, __s32 value) 55 55 { 56 - struct input_dev *input = field->hidinput->input; 56 + 57 + if (!(hdev->claimed & HID_CLAIMED_INPUT) || !field->hidinput) 58 + return 0; 57 59 58 60 if ((usage->hid & HID_USAGE_PAGE) == HID_UP_GENDESK && 59 61 (usage->hid & 0xff) == 0x82) { 62 + struct input_dev *input = field->hidinput->input; 60 63 input_event(input, usage->type, usage->code, 1); 61 64 input_sync(input); 62 65 input_event(input, usage->type, usage->code, 0);
+1
drivers/hid/usbhid/hid-quirks.c
··· 60 60 { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, 61 61 { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, 62 62 { USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS, HID_QUIRK_NOGET }, 63 + { USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_PIXART_IMAGING_INC_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NOGET }, 63 64 { USB_VENDOR_ID_SUN, USB_DEVICE_ID_RARITAN_KVM_DONGLE, HID_QUIRK_NOGET }, 64 65 { USB_VENDOR_ID_TURBOX, USB_DEVICE_ID_TURBOX_KEYBOARD, HID_QUIRK_NOGET }, 65 66 { USB_VENDOR_ID_UCLOGIC, USB_DEVICE_ID_UCLOGIC_TABLET_PF1209, HID_QUIRK_MULTI_INPUT },
+2 -2
drivers/hwmon/Kconfig
··· 217 217 depends on HWMON && I2C 218 218 help 219 219 If you say yes here you get support for the aSC7621 220 - family of SMBus sensors chip found on most Intel X48, X38, 975, 221 - 965 and 945 desktop boards. Currently supported chips: 220 + family of SMBus sensors chip found on most Intel X38, X48, X58, 221 + 945, 965 and 975 desktop boards. Currently supported chips: 222 222 aSC7621 223 223 aSC7621a 224 224
+2 -2
drivers/hwmon/coretemp.c
··· 228 228 if (err) { 229 229 dev_warn(dev, 230 230 "Unable to access MSR 0xEE, for Tjmax, left" 231 - " at default"); 231 + " at default\n"); 232 232 } else if (eax & 0x40000000) { 233 233 tjmax = tjmax_ee; 234 234 } ··· 466 466 family 6 CPU */ 467 467 if ((c->x86 == 0x6) && (c->x86_model > 0xf)) 468 468 printk(KERN_WARNING DRVNAME ": Unknown CPU " 469 - "model %x\n", c->x86_model); 469 + "model 0x%x\n", c->x86_model); 470 470 continue; 471 471 } 472 472
+1 -1
drivers/hwmon/w83793.c
··· 1294 1294 static ssize_t watchdog_write(struct file *filp, const char __user *buf, 1295 1295 size_t count, loff_t *offset) 1296 1296 { 1297 - size_t ret; 1297 + ssize_t ret; 1298 1298 struct w83793_data *data = filp->private_data; 1299 1299 1300 1300 if (count) {
+3 -9
drivers/ide/ide-probe.c
··· 695 695 if (irqd) 696 696 disable_irq(hwif->irq); 697 697 698 - rc = ide_port_wait_ready(hwif); 699 - if (rc == -ENODEV) { 700 - printk(KERN_INFO "%s: no devices on the port\n", hwif->name); 701 - goto out; 702 - } else if (rc == -EBUSY) 703 - printk(KERN_ERR "%s: not ready before the probe\n", hwif->name); 704 - else 705 - rc = -ENODEV; 698 + if (ide_port_wait_ready(hwif) == -EBUSY) 699 + printk(KERN_DEBUG "%s: Wait for ready failed before probe !\n", hwif->name); 706 700 707 701 /* 708 702 * Second drive should only exist if first drive was found, ··· 707 713 if (drive->dev_flags & IDE_DFLAG_PRESENT) 708 714 rc = 0; 709 715 } 710 - out: 716 + 711 717 /* 712 718 * Use cached IRQ number. It might be (and is...) changed by probe 713 719 * code above
-57
drivers/ide/via82cxxx.c
··· 110 110 { 111 111 struct via_isa_bridge *via_config; 112 112 unsigned int via_80w; 113 - u8 cached_device[2]; 114 113 }; 115 114 116 115 /** ··· 402 403 .cable_detect = via82cxxx_cable_detect, 403 404 }; 404 405 405 - static void via_write_devctl(ide_hwif_t *hwif, u8 ctl) 406 - { 407 - struct via82cxxx_dev *vdev = hwif->host->host_priv; 408 - 409 - outb(ctl, hwif->io_ports.ctl_addr); 410 - outb(vdev->cached_device[hwif->channel], hwif->io_ports.device_addr); 411 - } 412 - 413 - static void __via_dev_select(ide_drive_t *drive, u8 select) 414 - { 415 - ide_hwif_t *hwif = drive->hwif; 416 - struct via82cxxx_dev *vdev = hwif->host->host_priv; 417 - 418 - outb(select, hwif->io_ports.device_addr); 419 - vdev->cached_device[hwif->channel] = select; 420 - } 421 - 422 - static void via_dev_select(ide_drive_t *drive) 423 - { 424 - __via_dev_select(drive, drive->select | ATA_DEVICE_OBS); 425 - } 426 - 427 - static void via_tf_load(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid) 428 - { 429 - ide_hwif_t *hwif = drive->hwif; 430 - struct ide_io_ports *io_ports = &hwif->io_ports; 431 - 432 - if (valid & IDE_VALID_FEATURE) 433 - outb(tf->feature, io_ports->feature_addr); 434 - if (valid & IDE_VALID_NSECT) 435 - outb(tf->nsect, io_ports->nsect_addr); 436 - if (valid & IDE_VALID_LBAL) 437 - outb(tf->lbal, io_ports->lbal_addr); 438 - if (valid & IDE_VALID_LBAM) 439 - outb(tf->lbam, io_ports->lbam_addr); 440 - if (valid & IDE_VALID_LBAH) 441 - outb(tf->lbah, io_ports->lbah_addr); 442 - if (valid & IDE_VALID_DEVICE) 443 - __via_dev_select(drive, tf->device); 444 - } 445 - 446 - const struct ide_tp_ops via_tp_ops = { 447 - .exec_command = ide_exec_command, 448 - .read_status = ide_read_status, 449 - .read_altstatus = ide_read_altstatus, 450 - .write_devctl = via_write_devctl, 451 - 452 - .dev_select = via_dev_select, 453 - .tf_load = via_tf_load, 454 - .tf_read = ide_tf_read, 455 - 456 - .input_data = ide_input_data, 457 - .output_data = ide_output_data, 458 - }; 459 - 460 
406 static const struct ide_port_info via82cxxx_chipset __devinitdata = { 461 407 .name = DRV_NAME, 462 408 .init_chipset = init_chipset_via82cxxx, 463 409 .enablebits = { { 0x40, 0x02, 0x02 }, { 0x40, 0x01, 0x01 } }, 464 - .tp_ops = &via_tp_ops, 465 410 .port_ops = &via_port_ops, 466 411 .host_flags = IDE_HFLAG_PIO_NO_BLACKLIST | 467 412 IDE_HFLAG_POST_SET_MODE |
+6 -6
drivers/isdn/hisax/avma1_cs.c
··· 50 50 handler. 51 51 */ 52 52 53 - static int avma1cs_config(struct pcmcia_device *link); 53 + static int avma1cs_config(struct pcmcia_device *link) __devinit ; 54 54 static void avma1cs_release(struct pcmcia_device *link); 55 55 56 56 /* ··· 59 59 needed to manage one actual PCMCIA card. 60 60 */ 61 61 62 - static void avma1cs_detach(struct pcmcia_device *p_dev); 62 + static void avma1cs_detach(struct pcmcia_device *p_dev) __devexit ; 63 63 64 64 65 65 /* ··· 99 99 100 100 ======================================================================*/ 101 101 102 - static int avma1cs_probe(struct pcmcia_device *p_dev) 102 + static int __devinit avma1cs_probe(struct pcmcia_device *p_dev) 103 103 { 104 104 local_info_t *local; 105 105 ··· 140 140 141 141 ======================================================================*/ 142 142 143 - static void avma1cs_detach(struct pcmcia_device *link) 143 + static void __devexit avma1cs_detach(struct pcmcia_device *link) 144 144 { 145 145 dev_dbg(&link->dev, "avma1cs_detach(0x%p)\n", link); 146 146 avma1cs_release(link); ··· 174 174 } 175 175 176 176 177 - static int avma1cs_config(struct pcmcia_device *link) 177 + static int __devinit avma1cs_config(struct pcmcia_device *link) 178 178 { 179 179 local_info_t *dev; 180 180 int i; ··· 282 282 .name = "avma1_cs", 283 283 }, 284 284 .probe = avma1cs_probe, 285 - .remove = avma1cs_detach, 285 + .remove = __devexit_p(avma1cs_detach), 286 286 .id_table = avma1cs_ids, 287 287 }; 288 288
+6 -6
drivers/isdn/hisax/elsa_cs.c
··· 76 76 handler. 77 77 */ 78 78 79 - static int elsa_cs_config(struct pcmcia_device *link); 79 + static int elsa_cs_config(struct pcmcia_device *link) __devinit ; 80 80 static void elsa_cs_release(struct pcmcia_device *link); 81 81 82 82 /* ··· 85 85 needed to manage one actual PCMCIA card. 86 86 */ 87 87 88 - static void elsa_cs_detach(struct pcmcia_device *p_dev); 88 + static void elsa_cs_detach(struct pcmcia_device *p_dev) __devexit; 89 89 90 90 /* 91 91 A driver needs to provide a dev_node_t structure for each device ··· 121 121 122 122 ======================================================================*/ 123 123 124 - static int elsa_cs_probe(struct pcmcia_device *link) 124 + static int __devinit elsa_cs_probe(struct pcmcia_device *link) 125 125 { 126 126 local_info_t *local; 127 127 ··· 166 166 167 167 ======================================================================*/ 168 168 169 - static void elsa_cs_detach(struct pcmcia_device *link) 169 + static void __devexit elsa_cs_detach(struct pcmcia_device *link) 170 170 { 171 171 local_info_t *info = link->priv; 172 172 ··· 210 210 return -ENODEV; 211 211 } 212 212 213 - static int elsa_cs_config(struct pcmcia_device *link) 213 + static int __devinit elsa_cs_config(struct pcmcia_device *link) 214 214 { 215 215 local_info_t *dev; 216 216 int i; ··· 327 327 .name = "elsa_cs", 328 328 }, 329 329 .probe = elsa_cs_probe, 330 - .remove = elsa_cs_detach, 330 + .remove = __devexit_p(elsa_cs_detach), 331 331 .id_table = elsa_ids, 332 332 .suspend = elsa_suspend, 333 333 .resume = elsa_resume,
+6 -6
drivers/isdn/hisax/sedlbauer_cs.c
··· 76 76 event handler. 77 77 */ 78 78 79 - static int sedlbauer_config(struct pcmcia_device *link); 79 + static int sedlbauer_config(struct pcmcia_device *link) __devinit ; 80 80 static void sedlbauer_release(struct pcmcia_device *link); 81 81 82 82 /* ··· 85 85 needed to manage one actual PCMCIA card. 86 86 */ 87 87 88 - static void sedlbauer_detach(struct pcmcia_device *p_dev); 88 + static void sedlbauer_detach(struct pcmcia_device *p_dev) __devexit; 89 89 90 90 /* 91 91 You'll also need to prototype all the functions that will actually ··· 129 129 130 130 ======================================================================*/ 131 131 132 - static int sedlbauer_probe(struct pcmcia_device *link) 132 + static int __devinit sedlbauer_probe(struct pcmcia_device *link) 133 133 { 134 134 local_info_t *local; 135 135 ··· 177 177 178 178 ======================================================================*/ 179 179 180 - static void sedlbauer_detach(struct pcmcia_device *link) 180 + static void __devexit sedlbauer_detach(struct pcmcia_device *link) 181 181 { 182 182 dev_dbg(&link->dev, "sedlbauer_detach(0x%p)\n", link); 183 183 ··· 283 283 284 284 285 285 286 - static int sedlbauer_config(struct pcmcia_device *link) 286 + static int __devinit sedlbauer_config(struct pcmcia_device *link) 287 287 { 288 288 local_info_t *dev = link->priv; 289 289 win_req_t *req; ··· 441 441 .name = "sedlbauer_cs", 442 442 }, 443 443 .probe = sedlbauer_probe, 444 - .remove = sedlbauer_detach, 444 + .remove = __devexit_p(sedlbauer_detach), 445 445 .id_table = sedlbauer_ids, 446 446 .suspend = sedlbauer_suspend, 447 447 .resume = sedlbauer_resume,
+6 -6
drivers/isdn/hisax/teles_cs.c
··· 57 57 handler. 58 58 */ 59 59 60 - static int teles_cs_config(struct pcmcia_device *link); 60 + static int teles_cs_config(struct pcmcia_device *link) __devinit ; 61 61 static void teles_cs_release(struct pcmcia_device *link); 62 62 63 63 /* ··· 66 66 needed to manage one actual PCMCIA card. 67 67 */ 68 68 69 - static void teles_detach(struct pcmcia_device *p_dev); 69 + static void teles_detach(struct pcmcia_device *p_dev) __devexit ; 70 70 71 71 /* 72 72 A linked list of "instances" of the teles_cs device. Each actual ··· 112 112 113 113 ======================================================================*/ 114 114 115 - static int teles_probe(struct pcmcia_device *link) 115 + static int __devinit teles_probe(struct pcmcia_device *link) 116 116 { 117 117 local_info_t *local; 118 118 ··· 156 156 157 157 ======================================================================*/ 158 158 159 - static void teles_detach(struct pcmcia_device *link) 159 + static void __devexit teles_detach(struct pcmcia_device *link) 160 160 { 161 161 local_info_t *info = link->priv; 162 162 ··· 200 200 return -ENODEV; 201 201 } 202 202 203 - static int teles_cs_config(struct pcmcia_device *link) 203 + static int __devinit teles_cs_config(struct pcmcia_device *link) 204 204 { 205 205 local_info_t *dev; 206 206 int i; ··· 319 319 .name = "teles_cs", 320 320 }, 321 321 .probe = teles_probe, 322 - .remove = teles_detach, 322 + .remove = __devexit_p(teles_detach), 323 323 .id_table = teles_ids, 324 324 .suspend = teles_suspend, 325 325 .resume = teles_resume,
+6
drivers/misc/kgdbts.c
··· 295 295 /* On x86 a breakpoint stop requires it to be decremented */ 296 296 if (addr + 1 == kgdbts_regs.ip) 297 297 offset = -1; 298 + #elif defined(CONFIG_SUPERH) 299 + /* On SUPERH a breakpoint stop requires it to be decremented */ 300 + if (addr + 2 == kgdbts_regs.pc) 301 + offset = -2; 298 302 #endif 299 303 if (strcmp(arg, "silent") && 300 304 instruction_pointer(&kgdbts_regs) + offset != addr) { ··· 309 305 #ifdef CONFIG_X86 310 306 /* On x86 adjust the instruction pointer if needed */ 311 307 kgdbts_regs.ip += offset; 308 + #elif defined(CONFIG_SUPERH) 309 + kgdbts_regs.pc += offset; 312 310 #endif 313 311 return 0; 314 312 }
+1 -1
drivers/net/atlx/atl1.c
··· 84 84 85 85 #define ATLX_DRIVER_VERSION "2.1.3" 86 86 MODULE_AUTHOR("Xiong Huang <xiong.huang@atheros.com>, \ 87 - Chris Snook <csnook@redhat.com>, Jay Cliburn <jcliburn@gmail.com>"); 87 + Chris Snook <csnook@redhat.com>, Jay Cliburn <jcliburn@gmail.com>"); 88 88 MODULE_LICENSE("GPL"); 89 89 MODULE_VERSION(ATLX_DRIVER_VERSION); 90 90
+1 -1
drivers/net/benet/be_ethtool.c
··· 490 490 { 491 491 int ret, i; 492 492 struct be_dma_mem ddrdma_cmd; 493 - u64 pattern[2] = {0x5a5a5a5a5a5a5a5a, 0xa5a5a5a5a5a5a5a5}; 493 + u64 pattern[2] = {0x5a5a5a5a5a5a5a5aULL, 0xa5a5a5a5a5a5a5a5ULL}; 494 494 495 495 ddrdma_cmd.size = sizeof(struct be_cmd_req_ddrdma_test); 496 496 ddrdma_cmd.va = pci_alloc_consistent(adapter->pdev, ddrdma_cmd.size,
+9 -5
drivers/net/bnx2.c
··· 246 246 247 247 MODULE_DEVICE_TABLE(pci, bnx2_pci_tbl); 248 248 249 + static void bnx2_init_napi(struct bnx2 *bp); 250 + 249 251 static inline u32 bnx2_tx_avail(struct bnx2 *bp, struct bnx2_tx_ring_info *txr) 250 252 { 251 253 u32 diff; ··· 6199 6197 bnx2_disable_int(bp); 6200 6198 6201 6199 bnx2_setup_int_mode(bp, disable_msi); 6200 + bnx2_init_napi(bp); 6202 6201 bnx2_napi_enable(bp); 6203 6202 rc = bnx2_alloc_mem(bp); 6204 6203 if (rc) ··· 7646 7643 int i; 7647 7644 7648 7645 for (i = 0; i < bp->irq_nvecs; i++) { 7649 - disable_irq(bp->irq_tbl[i].vector); 7650 - bnx2_interrupt(bp->irq_tbl[i].vector, &bp->bnx2_napi[i]); 7651 - enable_irq(bp->irq_tbl[i].vector); 7646 + struct bnx2_irq *irq = &bp->irq_tbl[i]; 7647 + 7648 + disable_irq(irq->vector); 7649 + irq->handler(irq->vector, &bp->bnx2_napi[i]); 7650 + enable_irq(irq->vector); 7652 7651 } 7653 7652 } 7654 7653 #endif ··· 8212 8207 { 8213 8208 int i; 8214 8209 8215 - for (i = 0; i < BNX2_MAX_MSIX_VEC; i++) { 8210 + for (i = 0; i < bp->irq_nvecs; i++) { 8216 8211 struct bnx2_napi *bnapi = &bp->bnx2_napi[i]; 8217 8212 int (*poll)(struct napi_struct *, int); 8218 8213 ··· 8281 8276 dev->ethtool_ops = &bnx2_ethtool_ops; 8282 8277 8283 8278 bp = netdev_priv(dev); 8284 - bnx2_init_napi(bp); 8285 8279 8286 8280 pci_set_drvdata(pdev, dev); 8287 8281
+32 -8
drivers/net/bonding/bond_main.c
··· 1235 1235 write_lock_bh(&bond->curr_slave_lock); 1236 1236 } 1237 1237 } 1238 + 1239 + /* resend IGMP joins since all were sent on curr_active_slave */ 1240 + if (bond->params.mode == BOND_MODE_ROUNDROBIN) { 1241 + bond_resend_igmp_join_requests(bond); 1242 + } 1238 1243 } 1239 1244 1240 1245 /** ··· 4143 4138 struct bonding *bond = netdev_priv(bond_dev); 4144 4139 struct slave *slave, *start_at; 4145 4140 int i, slave_no, res = 1; 4141 + struct iphdr *iph = ip_hdr(skb); 4146 4142 4147 4143 read_lock(&bond->lock); 4148 4144 4149 4145 if (!BOND_IS_OK(bond)) 4150 4146 goto out; 4151 - 4152 4147 /* 4153 - * Concurrent TX may collide on rr_tx_counter; we accept that 4154 - * as being rare enough not to justify using an atomic op here 4148 + * Start with the curr_active_slave that joined the bond as the 4149 + * default for sending IGMP traffic. For failover purposes one 4150 + * needs to maintain some consistency for the interface that will 4151 + * send the join/membership reports. The curr_active_slave found 4152 + * will send all of this type of traffic. 4155 4153 */ 4156 - slave_no = bond->rr_tx_counter++ % bond->slave_cnt; 4154 + if ((iph->protocol == htons(IPPROTO_IGMP)) && 4155 + (skb->protocol == htons(ETH_P_IP))) { 4157 4156 4158 - bond_for_each_slave(bond, slave, i) { 4159 - slave_no--; 4160 - if (slave_no < 0) 4161 - break; 4157 + read_lock(&bond->curr_slave_lock); 4158 + slave = bond->curr_active_slave; 4159 + read_unlock(&bond->curr_slave_lock); 4160 + 4161 + if (!slave) 4162 + goto out; 4163 + } else { 4164 + /* 4165 + * Concurrent TX may collide on rr_tx_counter; we accept 4166 + * that as being rare enough not to justify using an 4167 + * atomic op here. 4168 + */ 4169 + slave_no = bond->rr_tx_counter++ % bond->slave_cnt; 4170 + 4171 + bond_for_each_slave(bond, slave, i) { 4172 + slave_no--; 4173 + if (slave_no < 0) 4174 + break; 4175 + } 4162 4176 } 4163 4177 4164 4178 start_at = slave;
+7 -90
drivers/net/can/bfin_can.c
··· 22 22 #include <linux/can/dev.h> 23 23 #include <linux/can/error.h> 24 24 25 + #include <asm/bfin_can.h> 25 26 #include <asm/portmux.h> 26 27 27 28 #define DRV_NAME "bfin_can" 28 29 #define BFIN_CAN_TIMEOUT 100 29 30 #define TX_ECHO_SKB_MAX 1 30 - 31 - /* 32 - * transmit and receive channels 33 - */ 34 - #define TRANSMIT_CHL 24 35 - #define RECEIVE_STD_CHL 0 36 - #define RECEIVE_EXT_CHL 4 37 - #define RECEIVE_RTR_CHL 8 38 - #define RECEIVE_EXT_RTR_CHL 12 39 - #define MAX_CHL_NUMBER 32 40 - 41 - /* 42 - * bfin can registers layout 43 - */ 44 - struct bfin_can_mask_regs { 45 - u16 aml; 46 - u16 dummy1; 47 - u16 amh; 48 - u16 dummy2; 49 - }; 50 - 51 - struct bfin_can_channel_regs { 52 - u16 data[8]; 53 - u16 dlc; 54 - u16 dummy1; 55 - u16 tsv; 56 - u16 dummy2; 57 - u16 id0; 58 - u16 dummy3; 59 - u16 id1; 60 - u16 dummy4; 61 - }; 62 - 63 - struct bfin_can_regs { 64 - /* 65 - * global control and status registers 66 - */ 67 - u16 mc1; /* offset 0 */ 68 - u16 dummy1; 69 - u16 md1; /* offset 4 */ 70 - u16 rsv1[13]; 71 - u16 mbtif1; /* offset 0x20 */ 72 - u16 dummy2; 73 - u16 mbrif1; /* offset 0x24 */ 74 - u16 dummy3; 75 - u16 mbim1; /* offset 0x28 */ 76 - u16 rsv2[11]; 77 - u16 mc2; /* offset 0x40 */ 78 - u16 dummy4; 79 - u16 md2; /* offset 0x44 */ 80 - u16 dummy5; 81 - u16 trs2; /* offset 0x48 */ 82 - u16 rsv3[11]; 83 - u16 mbtif2; /* offset 0x60 */ 84 - u16 dummy6; 85 - u16 mbrif2; /* offset 0x64 */ 86 - u16 dummy7; 87 - u16 mbim2; /* offset 0x68 */ 88 - u16 rsv4[11]; 89 - u16 clk; /* offset 0x80 */ 90 - u16 dummy8; 91 - u16 timing; /* offset 0x84 */ 92 - u16 rsv5[3]; 93 - u16 status; /* offset 0x8c */ 94 - u16 dummy9; 95 - u16 cec; /* offset 0x90 */ 96 - u16 dummy10; 97 - u16 gis; /* offset 0x94 */ 98 - u16 dummy11; 99 - u16 gim; /* offset 0x98 */ 100 - u16 rsv6[3]; 101 - u16 ctrl; /* offset 0xa0 */ 102 - u16 dummy12; 103 - u16 intr; /* offset 0xa4 */ 104 - u16 rsv7[7]; 105 - u16 esr; /* offset 0xb4 */ 106 - u16 rsv8[37]; 107 - 108 - /* 109 - * channel(mailbox) mask and message registers 110 - */ 111 - struct bfin_can_mask_regs msk[MAX_CHL_NUMBER]; /* offset 0x100 */ 112 - struct bfin_can_channel_regs chl[MAX_CHL_NUMBER]; /* offset 0x200 */ 113 - }; 114 31 115 32 /* 116 33 * bfin can private data ··· 80 163 if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES) 81 164 timing |= SAM; 82 165 83 - bfin_write16(&reg->clk, clk); 166 + bfin_write16(&reg->clock, clk); 84 167 bfin_write16(&reg->timing, timing); 85 168 86 169 dev_info(dev->dev.parent, "setting CLOCK=0x%04x TIMING=0x%04x\n", ··· 102 185 bfin_write16(&reg->gim, 0); 103 186 104 187 /* reset can and enter configuration mode */ 105 - bfin_write16(&reg->ctrl, SRS | CCR); 188 + bfin_write16(&reg->control, SRS | CCR); 106 189 SSYNC(); 107 - bfin_write16(&reg->ctrl, CCR); 190 + bfin_write16(&reg->control, CCR); 108 191 SSYNC(); 109 - while (!(bfin_read16(&reg->ctrl) & CCA)) { 192 + while (!(bfin_read16(&reg->control) & CCA)) { 110 193 udelay(10); 111 194 if (--timeout == 0) { 112 195 dev_err(dev->dev.parent, ··· 161 244 /* 162 245 * leave configuration mode 163 246 */ 164 - bfin_write16(&reg->ctrl, bfin_read16(&reg->ctrl) & ~CCR); 247 + bfin_write16(&reg->control, bfin_read16(&reg->control) & ~CCR); 165 248 166 249 while (bfin_read16(&reg->status) & CCA) { 167 250 udelay(10); ··· 643 726 644 727 if (netif_running(dev)) { 645 728 /* enter sleep mode */ 646 - bfin_write16(&reg->ctrl, bfin_read16(&reg->ctrl) | SMR); 729 + bfin_write16(&reg->control, bfin_read16(&reg->control) | SMR); 647 730 SSYNC(); 648 731 while (!(bfin_read16(&reg->intr) & SMACK)) { 649 732 udelay(10);
-1
drivers/net/e1000/e1000.h
··· 261 261 /* TX */ 262 262 struct e1000_tx_ring *tx_ring; /* One per active queue */ 263 263 unsigned int restart_queue; 264 - unsigned long tx_queue_len; 265 264 u32 txd_cmd; 266 265 u32 tx_int_delay; 267 266 u32 tx_abs_int_delay;
+1 -8
drivers/net/e1000/e1000_main.c
··· 383 383 adapter->alloc_rx_buf(adapter, ring, 384 384 E1000_DESC_UNUSED(ring)); 385 385 } 386 - 387 - adapter->tx_queue_len = netdev->tx_queue_len; 388 386 } 389 387 390 388 int e1000_up(struct e1000_adapter *adapter) ··· 501 503 del_timer_sync(&adapter->watchdog_timer); 502 504 del_timer_sync(&adapter->phy_info_timer); 503 505 504 - netdev->tx_queue_len = adapter->tx_queue_len; 505 506 adapter->link_speed = 0; 506 507 adapter->link_duplex = 0; 507 508 netif_carrier_off(netdev); ··· 2313 2316 E1000_CTRL_RFCE) ? "RX" : ((ctrl & 2314 2317 E1000_CTRL_TFCE) ? "TX" : "None" ))); 2315 2318 2316 - /* tweak tx_queue_len according to speed/duplex 2317 - * and adjust the timeout factor */ 2318 - netdev->tx_queue_len = adapter->tx_queue_len; 2319 + /* adjust timeout factor according to speed/duplex */ 2319 2320 adapter->tx_timeout_factor = 1; 2320 2321 switch (adapter->link_speed) { 2321 2322 case SPEED_10: 2322 2323 txb2b = false; 2323 - netdev->tx_queue_len = 10; 2324 2324 adapter->tx_timeout_factor = 16; 2325 2325 break; 2326 2326 case SPEED_100: 2327 2327 txb2b = false; 2328 - netdev->tx_queue_len = 100; 2329 2328 /* maybe add some timeout factor ? */ 2330 2329 break; 2331 2330 }
-1
drivers/net/e1000e/e1000.h
··· 279 279 280 280 struct napi_struct napi; 281 281 282 - unsigned long tx_queue_len; 283 282 unsigned int restart_queue; 284 283 u32 txd_cmd; 285 284
+1 -10
drivers/net/e1000e/netdev.c
··· 2289 2289 ew32(TCTL, tctl); 2290 2290 2291 2291 e1000e_config_collision_dist(hw); 2292 - 2293 - adapter->tx_queue_len = adapter->netdev->tx_queue_len; 2294 2292 } 2295 2293 2296 2294 /** ··· 2875 2877 del_timer_sync(&adapter->watchdog_timer); 2876 2878 del_timer_sync(&adapter->phy_info_timer); 2877 2879 2878 - netdev->tx_queue_len = adapter->tx_queue_len; 2879 2880 netif_carrier_off(netdev); 2880 2881 adapter->link_speed = 0; 2881 2882 adapter->link_duplex = 0; ··· 3585 3588 "link gets many collisions.\n"); 3586 3589 } 3587 3590 3588 - /* 3589 - * tweak tx_queue_len according to speed/duplex 3590 - * and adjust the timeout factor 3591 - */ 3592 - netdev->tx_queue_len = adapter->tx_queue_len; 3591 + /* adjust timeout factor according to speed/duplex */ 3593 3592 adapter->tx_timeout_factor = 1; 3594 3593 switch (adapter->link_speed) { 3595 3594 case SPEED_10: 3596 3595 txb2b = 0; 3597 - netdev->tx_queue_len = 10; 3598 3596 adapter->tx_timeout_factor = 16; 3599 3597 break; 3600 3598 case SPEED_100: 3601 3599 txb2b = 0; 3602 - netdev->tx_queue_len = 100; 3603 3600 adapter->tx_timeout_factor = 10; 3604 3601 break; 3605 3602 }
+3 -2
drivers/net/gianfar.c
··· 2393 2393 * as many bytes as needed to align the data properly 2394 2394 */ 2395 2395 skb_reserve(skb, alignamount); 2396 + GFAR_CB(skb)->alignamount = alignamount; 2396 2397 2397 2398 return skb; 2398 2399 } ··· 2534 2533 newskb = skb; 2535 2534 else if (skb) { 2536 2535 /* 2537 - * We need to reset ->data to what it 2536 + * We need to un-reserve() the skb to what it 2538 2537 * was before gfar_new_skb() re-aligned 2539 2538 * it to an RXBUF_ALIGNMENT boundary 2540 2539 * before we put the skb back on the 2541 2540 * recycle list. 2542 2541 */ 2543 - skb->data = skb->head + NET_SKB_PAD; 2542 + skb_reserve(skb, -GFAR_CB(skb)->alignamount); 2544 2543 __skb_queue_head(&priv->rx_recycle, skb); 2545 2544 } 2546 2545 } else {
+6
drivers/net/gianfar.h
··· 566 566 u16 vlctl; /* VLAN control word */ 567 567 }; 568 568 569 + struct gianfar_skb_cb { 570 + int alignamount; 571 + }; 572 + 573 + #define GFAR_CB(skb) ((struct gianfar_skb_cb *)((skb)->cb)) 574 + 569 575 struct rmon_mib 570 576 { 571 577 u32 tr64; /* 0x.680 - Transmit and Receive 64-byte Frame Counter */
+3 -3
drivers/net/igb/e1000_mac.c
··· 1367 1367 * igb_enable_mng_pass_thru - Enable processing of ARP's 1368 1368 * @hw: pointer to the HW structure 1369 1369 * 1370 - * Verifies the hardware needs to allow ARPs to be processed by the host. 1370 + * Verifies the hardware needs to leave interface enabled so that frames can 1371 + * be directed to and from the management interface. 1371 1372 **/ 1372 1373 bool igb_enable_mng_pass_thru(struct e1000_hw *hw) 1373 1374 { ··· 1381 1380 1382 1381 manc = rd32(E1000_MANC); 1383 1382 1384 - if (!(manc & E1000_MANC_RCV_TCO_EN) || 1385 - !(manc & E1000_MANC_EN_MAC_ADDR_FILTER)) 1383 + if (!(manc & E1000_MANC_RCV_TCO_EN)) 1386 1384 goto out; 1387 1385 1388 1386 if (hw->mac.arc_subsystem_valid) {
-1
drivers/net/igb/igb.h
··· 267 267 268 268 /* TX */ 269 269 struct igb_ring *tx_ring[16]; 270 - unsigned long tx_queue_len; 271 270 u32 tx_timeout_count; 272 271 273 272 /* RX */
+7 -15
drivers/net/igb/igb_main.c
··· 1105 1105 struct igb_ring *ring = adapter->rx_ring[i]; 1106 1106 igb_alloc_rx_buffers_adv(ring, igb_desc_unused(ring)); 1107 1107 } 1108 - 1109 - 1110 - adapter->tx_queue_len = netdev->tx_queue_len; 1111 1108 } 1112 1109 1113 1110 /** ··· 1210 1213 del_timer_sync(&adapter->watchdog_timer); 1211 1214 del_timer_sync(&adapter->phy_info_timer); 1212 1215 1213 - netdev->tx_queue_len = adapter->tx_queue_len; 1214 1216 netif_carrier_off(netdev); 1215 1217 1216 1218 /* record the stats before reset*/ ··· 3102 3106 ((ctrl & E1000_CTRL_RFCE) ? "RX" : 3103 3107 ((ctrl & E1000_CTRL_TFCE) ? "TX" : "None"))); 3104 3108 3105 - /* tweak tx_queue_len according to speed/duplex and 3106 - * adjust the timeout factor */ 3107 - netdev->tx_queue_len = adapter->tx_queue_len; 3109 + /* adjust timeout factor according to speed/duplex */ 3108 3110 adapter->tx_timeout_factor = 1; 3109 3111 switch (adapter->link_speed) { 3110 3112 case SPEED_10: 3111 - netdev->tx_queue_len = 10; 3112 3113 adapter->tx_timeout_factor = 14; 3113 3114 break; 3114 3115 case SPEED_100: 3115 - netdev->tx_queue_len = 100; 3116 3116 /* maybe add some timeout factor ? */ 3117 3117 break; 3118 3118 } ··· 3955 3963 struct net_device_stats *net_stats = igb_get_stats(adapter->netdev); 3956 3964 struct e1000_hw *hw = &adapter->hw; 3957 3965 struct pci_dev *pdev = adapter->pdev; 3958 - u32 rnbc, reg; 3966 + u32 reg, mpc; 3959 3967 u16 phy_tmp; 3960 3968 int i; 3961 3969 u64 bytes, packets; ··· 4013 4021 adapter->stats.symerrs += rd32(E1000_SYMERRS); 4014 4022 adapter->stats.sec += rd32(E1000_SEC); 4015 4023 4016 - adapter->stats.mpc += rd32(E1000_MPC); 4024 + mpc = rd32(E1000_MPC); 4025 + adapter->stats.mpc += mpc; 4026 + net_stats->rx_fifo_errors += mpc; 4017 4027 adapter->stats.scc += rd32(E1000_SCC); 4018 4028 adapter->stats.ecol += rd32(E1000_ECOL); 4019 4029 adapter->stats.mcc += rd32(E1000_MCC); ··· 4030 4036 adapter->stats.gptc += rd32(E1000_GPTC); 4031 4037 adapter->stats.gotc += rd32(E1000_GOTCL); 4032 4038 rd32(E1000_GOTCH); /* clear GOTCL */ 4033 - rnbc = rd32(E1000_RNBC); 4034 - adapter->stats.rnbc += rnbc; 4035 - net_stats->rx_fifo_errors += rnbc; 4039 + adapter->stats.rnbc += rd32(E1000_RNBC); 4036 4040 adapter->stats.ruc += rd32(E1000_RUC); 4037 4041 adapter->stats.rfc += rd32(E1000_RFC); 4038 4042 adapter->stats.rjc += rd32(E1000_RJC); ··· 5102 5110 { 5103 5111 struct igb_adapter *adapter = q_vector->adapter; 5104 5112 5105 - if (vlan_tag) 5113 + if (vlan_tag && adapter->vlgrp) 5106 5114 vlan_gro_receive(&q_vector->napi, adapter->vlgrp, 5107 5115 vlan_tag, skb); 5108 5116 else
-1
drivers/net/igbvf/igbvf.h
··· 198 198 struct igbvf_ring *tx_ring /* One per active queue */ 199 199 ____cacheline_aligned_in_smp; 200 200 201 - unsigned long tx_queue_len; 202 201 unsigned int restart_queue; 203 202 u32 txd_cmd; 204 203
+1 -10
drivers/net/igbvf/netdev.c
··· 1304 1304 1305 1305 /* enable Report Status bit */ 1306 1306 adapter->txd_cmd |= E1000_ADVTXD_DCMD_RS; 1307 - 1308 - adapter->tx_queue_len = adapter->netdev->tx_queue_len; 1309 1307 } 1310 1308 1311 1309 /** ··· 1522 1524 1523 1525 del_timer_sync(&adapter->watchdog_timer); 1524 1526 1525 - netdev->tx_queue_len = adapter->tx_queue_len; 1526 1527 netif_carrier_off(netdev); 1527 1528 1528 1529 /* record the stats before reset*/ ··· 1854 1857 &adapter->link_duplex); 1855 1858 igbvf_print_link_info(adapter); 1856 1859 1857 - /* 1858 - * tweak tx_queue_len according to speed/duplex 1859 - * and adjust the timeout factor 1860 - */ 1861 - netdev->tx_queue_len = adapter->tx_queue_len; 1860 + /* adjust timeout factor according to speed/duplex */ 1862 1861 adapter->tx_timeout_factor = 1; 1863 1862 switch (adapter->link_speed) { 1864 1863 case SPEED_10: 1865 1864 txb2b = 0; 1866 - netdev->tx_queue_len = 10; 1867 1865 adapter->tx_timeout_factor = 16; 1868 1866 break; 1869 1867 case SPEED_100: 1870 1868 txb2b = 0; 1871 - netdev->tx_queue_len = 100; 1872 1869 /* maybe add some timeout factor ? */ 1873 1870 break; 1874 1871 }
+5 -2
drivers/net/ixgbe/ixgbe.h
··· 204 204 #define IXGBE_MAX_FDIR_INDICES 64 205 205 #ifdef IXGBE_FCOE 206 206 #define IXGBE_MAX_FCOE_INDICES 8 207 + #define MAX_RX_QUEUES (IXGBE_MAX_FDIR_INDICES + IXGBE_MAX_FCOE_INDICES) 208 + #define MAX_TX_QUEUES (IXGBE_MAX_FDIR_INDICES + IXGBE_MAX_FCOE_INDICES) 209 + #else 210 + #define MAX_RX_QUEUES IXGBE_MAX_FDIR_INDICES 211 + #define MAX_TX_QUEUES IXGBE_MAX_FDIR_INDICES 207 212 #endif /* IXGBE_FCOE */ 208 213 struct ixgbe_ring_feature { 209 214 int indices; 210 215 int mask; 211 216 } ____cacheline_internodealigned_in_smp; 212 217 213 - #define MAX_RX_QUEUES 128 214 - #define MAX_TX_QUEUES 128 215 218 216 219 #define MAX_RX_PACKET_BUFFERS ((adapter->flags & IXGBE_FLAG_DCB_ENABLED) \ 217 220 ? 8 : 1)
+21
drivers/net/ixgbe/ixgbe_ethtool.c
··· 1853 1853 if (ixgbe_link_test(adapter, &data[4])) 1854 1854 eth_test->flags |= ETH_TEST_FL_FAILED; 1855 1855 1856 + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) { 1857 + int i; 1858 + for (i = 0; i < adapter->num_vfs; i++) { 1859 + if (adapter->vfinfo[i].clear_to_send) { 1860 + netdev_warn(netdev, "%s", 1861 + "offline diagnostic is not " 1862 + "supported when VFs are " 1863 + "present\n"); 1864 + data[0] = 1; 1865 + data[1] = 1; 1866 + data[2] = 1; 1867 + data[3] = 1; 1868 + eth_test->flags |= ETH_TEST_FL_FAILED; 1869 + clear_bit(__IXGBE_TESTING, 1870 + &adapter->state); 1871 + goto skip_ol_tests; 1872 + } 1873 + } 1874 + } 1875 + 1856 1876 if (if_running) 1857 1877 /* indicate we're in test mode */ 1858 1878 dev_close(netdev); ··· 1928 1908 1929 1909 clear_bit(__IXGBE_TESTING, &adapter->state); 1930 1910 } 1911 + skip_ol_tests: 1931 1912 msleep_interruptible(4 * 1000); 1932 1913 } 1933 1914
+25 -8
drivers/net/ixgbe/ixgbe_fcoe.c
··· 202 202 addr = sg_dma_address(sg); 203 203 len = sg_dma_len(sg); 204 204 while (len) { 205 + /* max number of buffers allowed in one DDP context */ 206 + if (j >= IXGBE_BUFFCNT_MAX) { 207 + netif_err(adapter, drv, adapter->netdev, 208 + "xid=%x:%d,%d,%d:addr=%llx " 209 + "not enough descriptors\n", 210 + xid, i, j, dmacount, (u64)addr); 211 + goto out_noddp_free; 212 + } 213 + 205 214 /* get the offset of length of current buffer */ 206 215 thisoff = addr & ((dma_addr_t)bufflen - 1); 207 216 thislen = min((bufflen - thisoff), len); ··· 236 227 len -= thislen; 237 228 addr += thislen; 238 229 j++; 239 - /* max number of buffers allowed in one DDP context */ 240 - if (j > IXGBE_BUFFCNT_MAX) { 241 - DPRINTK(DRV, ERR, "xid=%x:%d,%d,%d:addr=%llx " 242 - "not enough descriptors\n", 243 - xid, i, j, dmacount, (u64)addr); 244 - goto out_noddp_free; 245 - } 246 230 } 247 231 } 248 232 /* only the last buffer may have non-full bufflen */ 249 233 lastsize = thisoff + thislen; 250 234 251 235 fcbuff = (IXGBE_FCBUFF_4KB << IXGBE_FCBUFF_BUFFSIZE_SHIFT); 252 - fcbuff |= (j << IXGBE_FCBUFF_BUFFCNT_SHIFT); 236 + fcbuff |= ((j & 0xff) << IXGBE_FCBUFF_BUFFCNT_SHIFT); 253 237 fcbuff |= (firstoff << IXGBE_FCBUFF_OFFSET_SHIFT); 254 238 fcbuff |= (IXGBE_FCBUFF_VALID); 255 239 ··· 522 520 /* Enable L2 eth type filter for FCoE */ 523 521 IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FCOE), 524 522 (ETH_P_FCOE | IXGBE_ETQF_FCOE | IXGBE_ETQF_FILTER_EN)); 523 + /* Enable L2 eth type filter for FIP */ 524 + IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FIP), 525 + (ETH_P_FIP | IXGBE_ETQF_FILTER_EN)); 525 526 if (adapter->ring_feature[RING_F_FCOE].indices) { 526 527 /* Use multiple rx queues for FCoE by redirection table */ 527 528 for (i = 0; i < IXGBE_FCRETA_SIZE; i++) { ··· 535 530 } 536 531 IXGBE_WRITE_REG(hw, IXGBE_FCRECTL, IXGBE_FCRECTL_ENA); 537 532 IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FCOE), 0); 533 + fcoe_i = f->mask; 534 + fcoe_i &= IXGBE_FCRETA_ENTRY_MASK; 535 + fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx; 536 + IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FIP), 537 + IXGBE_ETQS_QUEUE_EN | 538 + (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT)); 538 539 } else { 539 540 /* Use single rx queue for FCoE */ 540 541 fcoe_i = f->mask; ··· 550 539 IXGBE_ETQS_QUEUE_EN | 551 540 (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT)); 552 541 } 542 + /* send FIP frames to the first FCoE queue */ 543 + fcoe_i = f->mask; 544 + fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx; 545 + IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FIP), 546 + IXGBE_ETQS_QUEUE_EN | 547 + (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT)); 553 548 554 549 IXGBE_WRITE_REG(hw, IXGBE_FCRXCTRL, 555 550 IXGBE_FCRXCTRL_FCOELLI |
+30 -13
drivers/net/ixgbe/ixgbe_main.c
··· 3056 3056 while (test_and_set_bit(__IXGBE_RESETTING, &adapter->state)) 3057 3057 msleep(1); 3058 3058 ixgbe_down(adapter); 3059 + /* 3060 + * If SR-IOV enabled then wait a bit before bringing the adapter 3061 + * back up to give the VFs time to respond to the reset. The 3062 + * two second wait is based upon the watchdog timer cycle in 3063 + * the VF driver. 3064 + */ 3065 + if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) 3066 + msleep(2000); 3059 3067 ixgbe_up(adapter); 3060 3068 clear_bit(__IXGBE_RESETTING, &adapter->state); 3061 3069 } ··· 3244 3236 3245 3237 /* disable receive for all VFs and wait one second */ 3246 3238 if (adapter->num_vfs) { 3247 - for (i = 0 ; i < adapter->num_vfs; i++) 3248 - adapter->vfinfo[i].clear_to_send = 0; 3249 - 3250 3239 /* ping all the active vfs to let them know we are going down */ 3251 3240 ixgbe_ping_all_vfs(adapter); 3241 + 3252 3242 /* Disable all VFTE/VFRE TX/RX */ 3253 3243 ixgbe_disable_tx_rx(adapter); 3244 + 3245 + /* Mark all the VFs as inactive */ 3246 + for (i = 0 ; i < adapter->num_vfs; i++) 3247 + adapter->vfinfo[i].clear_to_send = 0; 3254 3248 } 3255 3249 3256 3250 /* disable receives */ ··· 5648 5638 5649 5639 #ifdef IXGBE_FCOE 5650 5640 if ((adapter->flags & IXGBE_FLAG_FCOE_ENABLED) && 5651 - (skb->protocol == htons(ETH_P_FCOE))) { 5641 + ((skb->protocol == htons(ETH_P_FCOE)) || 5642 + (skb->protocol == htons(ETH_P_FIP)))) { 5652 5643 txq &= (adapter->ring_feature[RING_F_FCOE].indices - 1); 5653 5644 txq += adapter->ring_feature[RING_F_FCOE].mask; 5654 5645 return txq; ··· 5696 5685 5697 5686 tx_ring = adapter->tx_ring[skb->queue_mapping]; 5698 5687 5699 - if ((adapter->flags & IXGBE_FLAG_FCOE_ENABLED) && 5700 - (skb->protocol == htons(ETH_P_FCOE))) { 5701 - tx_flags |= IXGBE_TX_FLAGS_FCOE; 5702 5688 #ifdef IXGBE_FCOE 5689 + if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) { 5703 5690 #ifdef CONFIG_IXGBE_DCB 5704 - tx_flags &= ~(IXGBE_TX_FLAGS_VLAN_PRIO_MASK 5705 - << IXGBE_TX_FLAGS_VLAN_SHIFT); 5706 - tx_flags |= ((adapter->fcoe.up << 13) 5707 - << IXGBE_TX_FLAGS_VLAN_SHIFT); 5691 + /* for FCoE with DCB, we force the priority to what 5692 + * was specified by the switch */ 5693 + if ((skb->protocol == htons(ETH_P_FCOE)) || 5694 + (skb->protocol == htons(ETH_P_FIP))) { 5695 + tx_flags &= ~(IXGBE_TX_FLAGS_VLAN_PRIO_MASK 5696 + << IXGBE_TX_FLAGS_VLAN_SHIFT); 5697 + tx_flags |= ((adapter->fcoe.up << 13) 5698 + << IXGBE_TX_FLAGS_VLAN_SHIFT); 5699 + } 5708 5700 #endif 5709 - #endif 5701 + /* flag for FCoE offloads */ 5702 + if (skb->protocol == htons(ETH_P_FCOE)) 5703 + tx_flags |= IXGBE_TX_FLAGS_FCOE; 5710 5704 } 5705 + #endif 5706 + 5711 5707 /* four things can cause us to need a context descriptor */ 5712 5708 if (skb_is_gso(skb) || 5713 5709 (skb->ip_summed == CHECKSUM_PARTIAL) || ··· 6069 6051 indices += min_t(unsigned int, num_possible_cpus(), 6070 6052 IXGBE_MAX_FCOE_INDICES); 6071 6053 #endif 6072 - indices = min_t(unsigned int, indices, MAX_TX_QUEUES); 6073 6054 netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), indices); 6074 6055 if (!netdev) { 6075 6056 err = -ENOMEM;
+1
drivers/net/ixgbe/ixgbe_type.h
··· 1298 1298 #define IXGBE_ETQF_FILTER_BCN 1 1299 1299 #define IXGBE_ETQF_FILTER_FCOE 2 1300 1300 #define IXGBE_ETQF_FILTER_1588 3 1301 + #define IXGBE_ETQF_FILTER_FIP 4 1301 1302 /* VLAN Control Bit Masks */ 1302 1303 #define IXGBE_VLNCTRL_VET 0x0000FFFF /* bits 0-15 */ 1303 1304 #define IXGBE_VLNCTRL_CFI 0x10000000 /* bit 28 */
+2 -1
drivers/net/ixgbevf/ixgbevf_main.c
··· 2943 2943 struct ixgbevf_tx_buffer *tx_buffer_info; 2944 2944 unsigned int len; 2945 2945 unsigned int total = skb->len; 2946 - unsigned int offset = 0, size, count = 0, i; 2946 + unsigned int offset = 0, size, count = 0; 2947 2947 unsigned int nr_frags = skb_shinfo(skb)->nr_frags; 2948 2948 unsigned int f; 2949 + int i; 2949 2950 2950 2951 i = tx_ring->next_to_use; 2951 2952
+1 -1
drivers/net/ksz884x.c
··· 6322 6322 int len; 6323 6323 6324 6324 if (eeprom->magic != EEPROM_MAGIC) 6325 - return 1; 6325 + return -EINVAL; 6326 6326 6327 6327 len = (eeprom->offset + eeprom->len + 1) / 2; 6328 6328 for (i = eeprom->offset / 2; i < len; i++)
+2 -2
drivers/net/netxen/netxen_nic.h
··· 53 53 54 54 #define _NETXEN_NIC_LINUX_MAJOR 4 55 55 #define _NETXEN_NIC_LINUX_MINOR 0 56 - #define _NETXEN_NIC_LINUX_SUBVERSION 72 57 - #define NETXEN_NIC_LINUX_VERSIONID "4.0.72" 56 + #define _NETXEN_NIC_LINUX_SUBVERSION 73 57 + #define NETXEN_NIC_LINUX_VERSIONID "4.0.73" 58 58 59 59 #define NETXEN_VERSION_CODE(a, b, c) (((a) << 24) + ((b) << 16) + (c)) 60 60 #define _major(v) (((v) >> 24) & 0xff)
+8 -6
drivers/net/netxen/netxen_nic_ctx.c
··· 669 669 } 670 670 sds_ring->desc_head = (struct status_desc *)addr; 671 671 672 - sds_ring->crb_sts_consumer = 673 - netxen_get_ioaddr(adapter, 674 - recv_crb_registers[port].crb_sts_consumer[ring]); 672 + if (NX_IS_REVISION_P2(adapter->ahw.revision_id)) { 673 + sds_ring->crb_sts_consumer = 674 + netxen_get_ioaddr(adapter, 675 + recv_crb_registers[port].crb_sts_consumer[ring]); 675 676 676 - sds_ring->crb_intr_mask = 677 - netxen_get_ioaddr(adapter, 678 - recv_crb_registers[port].sw_int_mask[ring]); 677 + sds_ring->crb_intr_mask = 678 + netxen_get_ioaddr(adapter, 679 + recv_crb_registers[port].sw_int_mask[ring]); 680 + } 679 681 } 680 682 681 683
+1 -1
drivers/net/netxen/netxen_nic_init.c
··· 761 761 if (adapter->fw_type == NX_UNIFIED_ROMIMAGE) { 762 762 bios_ver = cpu_to_le32(*((u32 *) (&fw->data[prd_off]) 763 763 + NX_UNI_BIOS_VERSION_OFF)); 764 - return (bios_ver << 24) + ((bios_ver >> 8) & 0xff00) + 764 + return (bios_ver << 16) + ((bios_ver >> 8) & 0xff00) + 765 765 (bios_ver >> 24); 766 766 } else 767 767 return cpu_to_le32(*(u32 *)&fw->data[NX_BIOS_VERSION_OFFSET]);
+29 -20
drivers/net/netxen/netxen_nic_main.c
··· 604 604 static int 605 605 netxen_setup_pci_map(struct netxen_adapter *adapter) 606 606 { 607 - void __iomem *mem_ptr0 = NULL; 608 - void __iomem *mem_ptr1 = NULL; 609 - void __iomem *mem_ptr2 = NULL; 610 607 void __iomem *db_ptr = NULL; 611 608 612 609 resource_size_t mem_base, db_base; 613 - unsigned long mem_len, db_len = 0, pci_len0 = 0; 610 + unsigned long mem_len, db_len = 0; 614 611 615 612 struct pci_dev *pdev = adapter->pdev; 616 613 int pci_func = adapter->ahw.pci_func; 614 + struct netxen_hardware_context *ahw = &adapter->ahw; 617 615 618 616 int err = 0; 619 617 ··· 628 630 629 631 /* 128 Meg of memory */ 630 632 if (mem_len == NETXEN_PCI_128MB_SIZE) { 631 - mem_ptr0 = ioremap(mem_base, FIRST_PAGE_GROUP_SIZE); 632 - mem_ptr1 = ioremap(mem_base + SECOND_PAGE_GROUP_START, 633 + 634 + ahw->pci_base0 = ioremap(mem_base, FIRST_PAGE_GROUP_SIZE); 635 + ahw->pci_base1 = ioremap(mem_base + SECOND_PAGE_GROUP_START, 633 636 SECOND_PAGE_GROUP_SIZE); 634 - mem_ptr2 = ioremap(mem_base + THIRD_PAGE_GROUP_START, 637 + ahw->pci_base2 = ioremap(mem_base + THIRD_PAGE_GROUP_START, 635 638 THIRD_PAGE_GROUP_SIZE); 636 - pci_len0 = FIRST_PAGE_GROUP_SIZE; 639 + if (ahw->pci_base0 == NULL || ahw->pci_base1 == NULL || 640 + ahw->pci_base2 == NULL) { 641 + dev_err(&pdev->dev, "failed to map PCI bar 0\n"); 642 + err = -EIO; 643 + goto err_out; 644 + } 645 + 646 + ahw->pci_len0 = FIRST_PAGE_GROUP_SIZE; 647 + 637 648 } else if (mem_len == NETXEN_PCI_32MB_SIZE) { 638 649 639 650 + ahw->pci_base1 = ioremap(mem_base, SECOND_PAGE_GROUP_SIZE); 640 - mem_ptr1 = ioremap(mem_base, SECOND_PAGE_GROUP_SIZE); 639 - mem_ptr2 = ioremap(mem_base + THIRD_PAGE_GROUP_START - 651 + ahw->pci_base2 = ioremap(mem_base + THIRD_PAGE_GROUP_START - 640 652 SECOND_PAGE_GROUP_START, THIRD_PAGE_GROUP_SIZE); 653 + if (ahw->pci_base1 == NULL || ahw->pci_base2 == NULL) { 654 + dev_err(&pdev->dev, "failed to map PCI bar 0\n"); 655 + err = -EIO; 656 + goto err_out; 657 + } 658 + 641 659 } else if (mem_len == NETXEN_PCI_2MB_SIZE) { 642 660 643 661 mem_ptr0 = pci_ioremap_bar(pdev, 0); 644 662 - if (mem_ptr0 == NULL) { 661 + ahw->pci_base0 = pci_ioremap_bar(pdev, 0); 662 + if (ahw->pci_base0 == NULL) { 645 663 dev_err(&pdev->dev, "failed to map PCI bar 0\n"); 646 664 return -EIO; 647 665 } 648 - pci_len0 = mem_len; 666 + ahw->pci_len0 = mem_len; 649 667 } else { 650 668 return -EIO; 651 669 } ··· 669 655 netxen_setup_hwops(adapter); 670 656 671 657 dev_info(&pdev->dev, "%dMB memory map\n", (int)(mem_len>>20)); 672 - 673 - adapter->ahw.pci_base0 = mem_ptr0; 674 - adapter->ahw.pci_len0 = pci_len0; 675 - adapter->ahw.pci_base1 = mem_ptr1; 676 - adapter->ahw.pci_base2 = mem_ptr2; 677 658 678 659 if (NX_IS_REVISION_P3P(adapter->ahw.revision_id)) { 679 660 adapter->ahw.ocm_win_crb = netxen_get_ioaddr(adapter, ··· 1255 1246 int pci_func_id = PCI_FUNC(pdev->devfn); 1256 1247 uint8_t revision_id; 1257 1248 1258 - if (pdev->revision >= NX_P3_A0 && pdev->revision < NX_P3_B1) { 1259 - pr_warning("%s: chip revisions between 0x%x-0x%x" 1249 + if (pdev->revision >= NX_P3_A0 && pdev->revision <= NX_P3_B1) { 1250 + pr_warning("%s: chip revisions between 0x%x-0x%x " 1260 1251 "will not be enabled.\n", 1261 1252 module_name(THIS_MODULE), NX_P3_A0, NX_P3_B1); 1262 1253 return -ENODEV;
+2 -1
drivers/net/pcmcia/pcnet_cs.c
··· 1549 1549 PCMCIA_PFC_DEVICE_MANF_CARD(0, 0x021b, 0x0101), 1550 1550 PCMCIA_PFC_DEVICE_MANF_CARD(0, 0x08a1, 0xc0ab), 1551 1551 PCMCIA_PFC_DEVICE_PROD_ID12(0, "AnyCom", "Fast Ethernet + 56K COMBO", 0x578ba6e7, 0xb0ac62c4), 1552 + PCMCIA_PFC_DEVICE_PROD_ID12(0, "ATKK", "LM33-PCM-T", 0xba9eb7e2, 0x077c174e), 1552 1553 PCMCIA_PFC_DEVICE_PROD_ID12(0, "D-Link", "DME336T", 0x1a424a1c, 0xb23897ff), 1553 1554 PCMCIA_PFC_DEVICE_PROD_ID12(0, "Grey Cell", "GCS3000", 0x2a151fac, 0x48b932ae), 1554 1555 PCMCIA_PFC_DEVICE_PROD_ID12(0, "Linksys", "EtherFast 10&100 + 56K PC Card (PCMLM56)", 0x0733cc81, 0xb3765033), ··· 1741 1740 PCMCIA_MFC_DEVICE_CIS_PROD_ID12(0, "DAYNA COMMUNICATIONS", "LAN AND MODEM MULTIFUNCTION", 0x8fdf8f89, 0xdd5ed9e8, "cis/DP83903.cis"), 1742 1741 PCMCIA_MFC_DEVICE_CIS_PROD_ID4(0, "NSC MF LAN/Modem", 0x58fc6056, "cis/DP83903.cis"), 1743 1742 PCMCIA_MFC_DEVICE_CIS_MANF_CARD(0, 0x0175, 0x0000, "cis/DP83903.cis"), 1744 - PCMCIA_DEVICE_CIS_MANF_CARD(0xc00f, 0x0002, "cis/LA-PCM.cis"), 1743 + PCMCIA_DEVICE_CIS_PROD_ID12("Allied Telesis,K.K", "Ethernet LAN Card", 0x2ad62f3c, 0x9fd2f0a2, "cis/LA-PCM.cis"), 1745 1744 PCMCIA_DEVICE_CIS_PROD_ID12("KTI", "PE520 PLUS", 0xad180345, 0x9d58d392, "cis/PE520.cis"), 1746 1745 PCMCIA_DEVICE_CIS_PROD_ID12("NDC", "Ethernet", 0x01c43ae1, 0x00b2e941, "cis/NE2K.cis"), 1747 1746 PCMCIA_DEVICE_CIS_PROD_ID12("PMX ", "PE-200", 0x34f3f1c8, 0x10b59f8c, "cis/PE-200.cis"),
+33 -21
drivers/net/r8169.c
··· 186 186 187 187 MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl); 188 188 189 - static int rx_copybreak = 200; 190 - static int use_dac = -1; 189 + /* 190 + * we set our copybreak very high so that we don't have 191 + * to allocate 16k frames all the time (see note in 192 + * rtl8169_open() 193 + */ 194 + static int rx_copybreak = 16383; 195 + static int use_dac; 191 196 static struct { 192 197 u32 msg_enable; 193 198 } debug = { -1 }; ··· 516 511 module_param(rx_copybreak, int, 0); 517 512 MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames"); 518 513 module_param(use_dac, int, 0); 519 - MODULE_PARM_DESC(use_dac, "Enable PCI DAC. -1 defaults on for PCI Express only." 520 - " Unsafe on 32 bit PCI slot."); 514 + MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot."); 521 515 module_param_named(debug, debug.msg_enable, int, 0); 522 516 MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)"); 523 517 MODULE_LICENSE("GPL"); ··· 2825 2821 spin_lock_irq(&tp->lock); 2826 2822 2827 2823 RTL_W8(Cfg9346, Cfg9346_Unlock); 2828 - RTL_W32(MAC0, low); 2829 2824 RTL_W32(MAC4, high); 2825 + RTL_W32(MAC0, low); 2830 2826 RTL_W8(Cfg9346, Cfg9346_Lock); 2831 2827 2832 2828 spin_unlock_irq(&tp->lock); ··· 2978 2974 void __iomem *ioaddr; 2979 2975 unsigned int i; 2980 2976 int rc; 2981 - int this_use_dac = use_dac; 2982 2977 2983 2978 if (netif_msg_drv(&debug)) { 2984 2979 printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n", ··· 3043 3040 3044 3041 tp->cp_cmd = PCIMulRW | RxChkSum; 3045 3042 3046 - tp->pcie_cap = pci_find_capability(pdev, PCI_CAP_ID_EXP); 3047 - if (!tp->pcie_cap) 3048 - netif_info(tp, probe, dev, "no PCI Express capability\n"); 3049 - 3050 - if (this_use_dac < 0) 3051 - this_use_dac = tp->pcie_cap != 0; 3052 - 3053 3043 if ((sizeof(dma_addr_t) > 4) && 3054 - this_use_dac && 3055 - !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) { 3056 - netif_info(tp, probe, dev, "using 64-bit DMA\n"); 3044 + !pci_set_dma_mask(pdev, 
DMA_BIT_MASK(64)) && use_dac) { 3057 3045 tp->cp_cmd |= PCIDAC; 3058 3046 dev->features |= NETIF_F_HIGHDMA; 3059 3047 } else { ··· 3062 3068 rc = -EIO; 3063 3069 goto err_out_free_res_4; 3064 3070 } 3071 + 3072 + tp->pcie_cap = pci_find_capability(pdev, PCI_CAP_ID_EXP); 3073 + if (!tp->pcie_cap) 3074 + netif_info(tp, probe, dev, "no PCI Express capability\n"); 3065 3075 3066 3076 RTL_W16(IntrMask, 0x0000); 3067 3077 ··· 3222 3224 } 3223 3225 3224 3226 static void rtl8169_set_rxbufsize(struct rtl8169_private *tp, 3225 - struct net_device *dev) 3227 + unsigned int mtu) 3226 3228 { 3227 - unsigned int max_frame = dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN; 3229 + unsigned int max_frame = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN; 3230 + 3231 + if (max_frame != 16383) 3232 + printk(KERN_WARNING "WARNING! Changing the MTU on this NIC " 3233 + "may lead to frame reception errors!\n"); 3228 3234 3229 3235 tp->rx_buf_sz = (max_frame > RX_BUF_SIZE) ? max_frame : RX_BUF_SIZE; 3230 3236 } ··· 3240 3238 int retval = -ENOMEM; 3241 3239 3242 3240 3243 - rtl8169_set_rxbufsize(tp, dev); 3241 + /* 3242 + * Note that we use a magic value here; it's weird, I know. 3243 + * It's done because some subset of rtl8169 hardware suffers from 3244 + * a problem in which frames received that are longer than 3245 + * the size set in the RxMaxSize register return garbage sizes 3246 + * when received. To avoid this we need to turn off filtering, 3247 + * which is done by setting a value of 16383 in the RxMaxSize register 3248 + * and allocating 16k frames to handle the largest possible rx value; 3249 + * that's what the magic math below does. 3250 + */ 3251 + rtl8169_set_rxbufsize(tp, 16383 - VLAN_ETH_HLEN - ETH_FCS_LEN); 3244 3252 3245 3253 /* 3246 3254 * Rx and Tx descriptors need 256 bytes alignment. 
··· 3903 3891 3904 3892 rtl8169_down(dev); 3905 3893 3906 - rtl8169_set_rxbufsize(tp, dev); 3894 + rtl8169_set_rxbufsize(tp, dev->mtu); 3907 3895 3908 3896 ret = rtl8169_init_ring(dev); 3909 3897 if (ret < 0) ··· 4766 4754 mc_filter[1] = swab32(data); 4767 4755 } 4768 4756 4769 - RTL_W32(MAR0 + 0, mc_filter[0]); 4770 4757 RTL_W32(MAR0 + 4, mc_filter[1]); 4758 + RTL_W32(MAR0 + 0, mc_filter[0]); 4771 4759 4772 4760 RTL_W32(RxConfig, tmp); 4773 4761
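The magic math in the rtl8169 hunk above can be checked in isolation. Below is a minimal sketch of the buffer-size arithmetic, with the kernel constants inlined (VLAN_ETH_HLEN = 18, ETH_FCS_LEN = 4); RX_BUF_SIZE and the helper name here are illustrative stand-ins, not the driver's exact definitions:

```c
#include <assert.h>

#define VLAN_ETH_HLEN 18	/* Ethernet header (14) + VLAN tag (4) */
#define ETH_FCS_LEN    4	/* frame check sequence */
#define RX_BUF_SIZE 1536	/* illustrative floor, not the driver's value */

/* Mirrors rtl8169_set_rxbufsize(): the buffer must hold the whole frame. */
static unsigned int rx_buf_size(unsigned int mtu)
{
	unsigned int max_frame = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;

	return (max_frame > RX_BUF_SIZE) ? max_frame : RX_BUF_SIZE;
}
```

Passing 16383 - 18 - 4 = 16361 as the MTU yields exactly a 16383-byte buffer, which is the RxMaxSize value the workaround comment describes.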
+5 -3
drivers/net/tulip/uli526x.c
··· 851 851 852 852 if ( !(rdes0 & 0x8000) || 853 853 ((db->cr6_data & CR6_PM) && (rxlen>6)) ) { 854 + struct sk_buff *new_skb = NULL; 855 + 854 856 skb = rxptr->rx_skb_ptr; 855 857 856 858 /* Good packet, send to upper layer */ 857 859 /* Short packet uses a new SKB */ 858 - if ( (rxlen < RX_COPY_SIZE) && 859 - ( (skb = dev_alloc_skb(rxlen + 2) ) 860 - != NULL) ) { 860 + if ((rxlen < RX_COPY_SIZE) && 861 + (((new_skb = dev_alloc_skb(rxlen + 2)) != NULL))) { 862 + skb = new_skb; 861 863 /* size less than COPY_SIZE, allocate a rxlen SKB */ 862 864 skb_reserve(skb, 2); /* 16byte align */ 863 865 memcpy(skb_put(skb, rxlen),
+1 -1
drivers/net/via-velocity.c
··· 812 812 813 813 case FLOW_CNTL_TX_RX: 814 814 MII_REG_BITS_ON(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs); 815 - MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs); 815 + MII_REG_BITS_OFF(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs); 816 816 break; 817 817 818 818 case FLOW_CNTL_DISABLE:
+5 -2
drivers/of/fdt.c
··· 376 376 if (!np->type) 377 377 np->type = "<NULL>"; 378 378 } 379 - while (tag == OF_DT_BEGIN_NODE) { 380 - mem = unflatten_dt_node(mem, p, np, allnextpp, fpsize); 379 + while (tag == OF_DT_BEGIN_NODE || tag == OF_DT_NOP) { 380 + if (tag == OF_DT_NOP) 381 + *p += 4; 382 + else 383 + mem = unflatten_dt_node(mem, p, np, allnextpp, fpsize); 381 384 tag = be32_to_cpup((__be32 *)(*p)); 382 385 } 383 386 if (tag != OF_DT_END_NODE) {
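The of/fdt.c fix above makes the unflatten loop step over OF_DT_NOP padding tags between siblings instead of stopping at the first one. A simplified, self-contained walker over a bare tag stream (the tag values match the kernel's defines; `count_children` and the reduced stream layout are illustrative, since real FDT streams carry names and properties between tags):

```c
#include <assert.h>
#include <stdint.h>

#define OF_DT_BEGIN_NODE 0x1
#define OF_DT_END_NODE   0x2
#define OF_DT_NOP        0x4

/* Count the direct children of one node, skipping NOP tags the way the
 * patched loop does instead of treating them as the end of the list. */
static int count_children(const uint32_t *tags, int len)
{
	int i = 0, depth = 0, children = 0;

	while (i < len) {
		uint32_t tag = tags[i++];

		if (tag == OF_DT_NOP)
			continue;	/* padding: just advance past it */
		if (tag == OF_DT_BEGIN_NODE && depth++ == 0)
			children++;
		else if (tag == OF_DT_END_NODE)
			depth--;
	}
	return children;
}
```

With NOPs interleaved between two sibling nodes, both are still counted; without the skip, the walk would end after the first.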
+2 -3
drivers/pci/hotplug/pciehp_hpc.c
··· 832 832 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 833 833 if (!pci_resource_len(pdev, i)) 834 834 continue; 835 - ctrl_info(ctrl, " PCI resource [%d] : 0x%llx@0x%llx\n", 836 - i, (unsigned long long)pci_resource_len(pdev, i), 837 - (unsigned long long)pci_resource_start(pdev, i)); 835 + ctrl_info(ctrl, " PCI resource [%d] : %pR\n", 836 + i, &pdev->resource[i]); 838 837 } 839 838 ctrl_info(ctrl, "Slot Capabilities : 0x%08x\n", ctrl->slot_cap); 840 839 ctrl_info(ctrl, " Physical Slot Number : %d\n", PSN(ctrl));
+4 -5
drivers/pci/ioapic.c
··· 31 31 acpi_status status; 32 32 unsigned long long gsb; 33 33 struct ioapic *ioapic; 34 - u64 addr; 35 34 int ret; 36 35 char *type; 36 + struct resource *res; 37 37 38 38 handle = DEVICE_ACPI_HANDLE(&dev->dev); 39 39 if (!handle) ··· 69 69 if (pci_request_region(dev, 0, type)) 70 70 goto exit_disable; 71 71 72 - addr = pci_resource_start(dev, 0); 73 - if (acpi_register_ioapic(ioapic->handle, addr, ioapic->gsi_base)) 72 + res = &dev->resource[0]; 73 + if (acpi_register_ioapic(ioapic->handle, res->start, ioapic->gsi_base)) 74 74 goto exit_release; 75 75 76 76 pci_set_drvdata(dev, ioapic); 77 - dev_info(&dev->dev, "%s at %#llx, GSI %u\n", type, addr, 78 - ioapic->gsi_base); 77 + dev_info(&dev->dev, "%s at %pR, GSI %u\n", type, res, ioapic->gsi_base); 79 78 return 0; 80 79 81 80 exit_release:
+20 -24
drivers/pci/pci.c
··· 2576 2576 */ 2577 2577 int pcix_get_max_mmrbc(struct pci_dev *dev) 2578 2578 { 2579 - int err, cap; 2579 + int cap; 2580 2580 u32 stat; 2581 2581 2582 2582 cap = pci_find_capability(dev, PCI_CAP_ID_PCIX); 2583 2583 if (!cap) 2584 2584 return -EINVAL; 2585 2585 2586 - err = pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat); 2587 - if (err) 2586 + if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat)) 2588 2587 return -EINVAL; 2589 2588 2590 - return (stat & PCI_X_STATUS_MAX_READ) >> 12; 2589 + return 512 << ((stat & PCI_X_STATUS_MAX_READ) >> 21); 2591 2590 } 2592 2591 EXPORT_SYMBOL(pcix_get_max_mmrbc); 2593 2592 ··· 2599 2600 */ 2600 2601 int pcix_get_mmrbc(struct pci_dev *dev) 2601 2602 { 2602 - int ret, cap; 2603 - u32 cmd; 2603 + int cap; 2604 + u16 cmd; 2604 2605 2605 2606 cap = pci_find_capability(dev, PCI_CAP_ID_PCIX); 2606 2607 if (!cap) 2607 2608 return -EINVAL; 2608 2609 2609 - ret = pci_read_config_dword(dev, cap + PCI_X_CMD, &cmd); 2610 - if (!ret) 2611 - ret = 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2); 2610 + if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd)) 2611 + return -EINVAL; 2612 2612 2613 - return ret; 2613 + return 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2); 2614 2614 } 2615 2615 EXPORT_SYMBOL(pcix_get_mmrbc); 2616 2616 ··· 2624 2626 */ 2625 2627 int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc) 2626 2628 { 2627 - int cap, err = -EINVAL; 2628 - u32 stat, cmd, v, o; 2629 + int cap; 2630 + u32 stat, v, o; 2631 + u16 cmd; 2629 2632 2630 2633 if (mmrbc < 512 || mmrbc > 4096 || !is_power_of_2(mmrbc)) 2631 - goto out; 2634 + return -EINVAL; 2632 2635 2633 2636 v = ffs(mmrbc) - 10; 2634 2637 2635 2638 cap = pci_find_capability(dev, PCI_CAP_ID_PCIX); 2636 2639 if (!cap) 2637 - goto out; 2640 + return -EINVAL; 2638 2641 2639 - err = pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat); 2640 - if (err) 2641 - goto out; 2642 + if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat)) 2643 + return -EINVAL; 2642 2644 2643 2645 if (v > (stat & 
PCI_X_STATUS_MAX_READ) >> 21) 2644 2646 return -E2BIG; 2645 2647 2646 - err = pci_read_config_dword(dev, cap + PCI_X_CMD, &cmd); 2647 - if (err) 2648 - goto out; 2648 + if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd)) 2649 + return -EINVAL; 2649 2650 2650 2651 o = (cmd & PCI_X_CMD_MAX_READ) >> 2; 2651 2652 if (o != v) { ··· 2654 2657 2655 2658 cmd &= ~PCI_X_CMD_MAX_READ; 2656 2659 cmd |= v << 2; 2657 - err = pci_write_config_dword(dev, cap + PCI_X_CMD, cmd); 2660 + if (pci_write_config_word(dev, cap + PCI_X_CMD, cmd)) 2661 + return -EIO; 2658 2662 } 2659 - out: 2660 - return err; 2663 + return 0; 2661 2664 } 2662 2665 EXPORT_SYMBOL(pcix_set_mmrbc); 2663 2666 ··· 3020 3023 EXPORT_SYMBOL(pci_disable_device); 3021 3024 EXPORT_SYMBOL(pci_find_capability); 3022 3025 EXPORT_SYMBOL(pci_bus_find_capability); 3023 - EXPORT_SYMBOL(pci_register_set_vga_state); 3024 3026 EXPORT_SYMBOL(pci_release_regions); 3025 3027 EXPORT_SYMBOL(pci_request_regions); 3026 3028 EXPORT_SYMBOL(pci_request_regions_exclusive);
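The pcix_* fixes above all hinge on one encoding: the PCI-X maximum memory read byte count is a 2-bit field meaning 512 << v bytes (bits 3:2 of the command register, bits 22:21 of the status register). A standalone sketch of both directions — `__builtin_ffs` is a GCC/Clang builtin standing in here for the kernel's ffs():

```c
#include <assert.h>

/* Decode a PCI-X max memory read byte count field (v in 0..3). */
static int mmrbc_to_bytes(unsigned int v)
{
	return 512 << v;
}

/* Encode a byte count (512/1024/2048/4096) back into the 2-bit field;
 * ffs(512) == 10, hence the magic "- 10" in pcix_set_mmrbc(). */
static int bytes_to_mmrbc(int mmrbc)
{
	return __builtin_ffs(mmrbc) - 10;
}
```

The old pcix_get_max_mmrbc() returned the raw field (and even shifted by the command-register offset); the fix decodes the status field at the right bit position and converts it to bytes, matching what pcix_get_mmrbc() already reported.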
+33 -20
drivers/pci/probe.c
··· 174 174 pci_read_config_dword(dev, pos, &sz); 175 175 pci_write_config_dword(dev, pos, l); 176 176 177 + if (!sz) 178 + goto fail; /* BAR not implemented */ 179 + 177 180 /* 178 181 * All bits set in sz means the device isn't working properly. 179 - * If the BAR isn't implemented, all bits must be 0. If it's a 180 - * memory BAR or a ROM, bit 0 must be clear; if it's an io BAR, bit 181 - * 1 must be clear. 182 + * If it's a memory BAR or a ROM, bit 0 must be clear; if it's 183 + * an io BAR, bit 1 must be clear. 182 184 */ 183 - if (!sz || sz == 0xffffffff) 185 + if (sz == 0xffffffff) { 186 + dev_err(&dev->dev, "reg %x: invalid size %#x; broken device?\n", 187 + pos, sz); 184 188 goto fail; 189 + } 185 190 186 191 /* 187 192 * I don't know how l can have all bits set. Copied from old code. ··· 249 244 pos, res); 250 245 } 251 246 } else { 252 - sz = pci_size(l, sz, mask); 247 + u32 size = pci_size(l, sz, mask); 253 248 254 - if (!sz) 249 + if (!size) { 250 + dev_err(&dev->dev, "reg %x: invalid size " 251 + "(l %#x sz %#x mask %#x); broken device?", 252 + pos, l, sz, mask); 255 253 goto fail; 254 + } 256 255 257 256 res->start = l; 258 - res->end = l + sz; 257 + res->end = l + size; 259 258 260 259 dev_printk(KERN_DEBUG, &dev->dev, "reg %x: %pR\n", pos, res); 261 260 } ··· 321 312 dev_printk(KERN_DEBUG, &dev->dev, " bridge window %pR\n", res); 322 313 } else { 323 314 dev_printk(KERN_DEBUG, &dev->dev, 324 - " bridge window [io %04lx - %04lx] reg reading\n", 315 + " bridge window [io %#06lx-%#06lx] (disabled)\n", 325 316 base, limit); 326 317 } 327 318 } ··· 345 336 dev_printk(KERN_DEBUG, &dev->dev, " bridge window %pR\n", res); 346 337 } else { 347 338 dev_printk(KERN_DEBUG, &dev->dev, 348 - " bridge window [mem 0x%08lx - 0x%08lx] reg reading\n", 339 + " bridge window [mem %#010lx-%#010lx] (disabled)\n", 349 340 base, limit + 0xfffff); 350 341 } 351 342 } ··· 396 387 dev_printk(KERN_DEBUG, &dev->dev, " bridge window %pR\n", res); 397 388 } else { 398 389 
dev_printk(KERN_DEBUG, &dev->dev, 399 - " bridge window [mem 0x%08lx - %08lx pref] reg reading\n", 390 + " bridge window [mem %#010lx-%#010lx pref] (disabled)\n", 400 391 base, limit + 0xfffff); 401 392 } 402 393 } ··· 682 673 int is_cardbus = (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS); 683 674 u32 buses, i, j = 0; 684 675 u16 bctl; 676 + u8 primary, secondary, subordinate; 685 677 int broken = 0; 686 678 687 679 pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses); 680 + primary = buses & 0xFF; 681 + secondary = (buses >> 8) & 0xFF; 682 + subordinate = (buses >> 16) & 0xFF; 688 683 689 - dev_dbg(&dev->dev, "scanning behind bridge, config %06x, pass %d\n", 690 - buses & 0xffffff, pass); 684 + dev_dbg(&dev->dev, "scanning [bus %02x-%02x] behind bridge, pass %d\n", 685 + secondary, subordinate, pass); 691 686 692 687 /* Check if setup is sensible at all */ 693 688 if (!pass && 694 - ((buses & 0xff) != bus->number || ((buses >> 8) & 0xff) <= bus->number)) { 689 + (primary != bus->number || secondary <= bus->number)) { 695 690 dev_dbg(&dev->dev, "bus configuration invalid, reconfiguring\n"); 696 691 broken = 1; 697 692 } ··· 706 693 pci_write_config_word(dev, PCI_BRIDGE_CONTROL, 707 694 bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT); 708 695 709 - if ((buses & 0xffff00) && !pcibios_assign_all_busses() && !is_cardbus && !broken) { 710 - unsigned int cmax, busnr; 696 + if ((secondary || subordinate) && !pcibios_assign_all_busses() && 697 + !is_cardbus && !broken) { 698 + unsigned int cmax; 711 699 /* 712 700 * Bus already configured by firmware, process it in the first 713 701 * pass and just note the configuration. 714 702 */ 715 703 if (pass) 716 704 goto out; 717 - busnr = (buses >> 8) & 0xFF; 718 705 719 706 /* 720 707 * If we already got to this bus through a different bridge, ··· 723 710 * However, we continue to descend down the hierarchy and 724 711 * scan remaining child buses. 
725 712 */ 726 - child = pci_find_bus(pci_domain_nr(bus), busnr); 713 + child = pci_find_bus(pci_domain_nr(bus), secondary); 727 714 if (!child) { 728 - child = pci_add_new_bus(bus, dev, busnr); 715 + child = pci_add_new_bus(bus, dev, secondary); 729 716 if (!child) 730 717 goto out; 731 - child->primary = buses & 0xFF; 732 - child->subordinate = (buses >> 16) & 0xFF; 718 + child->primary = primary; 719 + child->subordinate = subordinate; 733 720 child->bridge_ctl = bctl; 734 721 } 735 722
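The probe.c rewrite above names the three bus numbers packed into the PCI_PRIMARY_BUS dword instead of repeating shift-and-mask expressions. The layout itself is simple (helper names are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

/* PCI_PRIMARY_BUS packs three bus numbers into one dword:
 * bits 7:0 primary, bits 15:8 secondary, bits 23:16 subordinate. */
static uint8_t bus_primary(uint32_t buses)     { return buses & 0xFF; }
static uint8_t bus_secondary(uint32_t buses)   { return (buses >> 8) & 0xFF; }
static uint8_t bus_subordinate(uint32_t buses) { return (buses >> 16) & 0xFF; }
```

For example, a bridge on bus 1 forwarding buses 2 through 4 reads back 0x00040201.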
+24 -5
drivers/pci/quirks.c
··· 368 368 bus_region.end = res->end; 369 369 pcibios_bus_to_resource(dev, res, &bus_region); 370 370 371 - pci_claim_resource(dev, nr); 372 - dev_info(&dev->dev, "quirk: %pR claimed by %s\n", res, name); 371 + if (pci_claim_resource(dev, nr) == 0) 372 + dev_info(&dev->dev, "quirk: %pR claimed by %s\n", 373 + res, name); 373 374 } 374 375 } 375 376 ··· 1978 1977 /* 1979 1978 * Disable PCI Bus Parking and PCI Master read caching on CX700 1980 1979 * which causes unspecified timing errors with a VT6212L on the PCI 1981 - * bus leading to USB2.0 packet loss. The defaults are that these 1982 - * features are turned off but some BIOSes turn them on. 1980 + * bus leading to USB2.0 packet loss. 1981 + * 1982 + * This quirk is only enabled if a second (on the external PCI bus) 1983 + * VT6212L is found -- the CX700 core itself also contains a USB 1984 + * host controller with the same PCI ID as the VT6212L. 1983 1985 */ 1984 1986 1987 + /* Count VT6212L instances */ 1988 + struct pci_dev *p = pci_get_device(PCI_VENDOR_ID_VIA, 1989 + PCI_DEVICE_ID_VIA_8235_USB_2, NULL); 1985 1990 uint8_t b; 1991 + 1992 + /* p should contain the first (internal) VT6212L -- see if we have 1993 + an external one by searching again */ 1994 + p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235_USB_2, p); 1995 + if (!p) 1996 + return; 1997 + pci_dev_put(p); 1998 + 1986 1999 if (pci_read_config_byte(dev, 0x76, &b) == 0) { 1987 2000 if (b & 0x40) { 1988 2001 /* Turn off PCI Bus Parking */ ··· 2023 2008 } 2024 2009 } 2025 2010 } 2026 - DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching); 2011 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching); 2027 2012 2028 2013 /* 2029 2014 * For Broadcom 5706, 5708, 5709 rev. 
A nics, any read beyond the ··· 2123 2108 } 2124 2109 } 2125 2110 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi); 2111 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x9602, quirk_disable_msi); 2112 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASUSTEK, 0x9602, quirk_disable_msi); 2113 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AI, 0x9602, quirk_disable_msi); 2114 + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi); 2126 2115 2127 2116 /* Go through the list of Hypertransport capabilities and 2128 2117 * return 1 if a HT MSI capability is found and enabled */
+8 -6
drivers/pci/setup-res.c
··· 93 93 int pci_claim_resource(struct pci_dev *dev, int resource) 94 94 { 95 95 struct resource *res = &dev->resource[resource]; 96 - struct resource *root; 97 - int err; 96 + struct resource *root, *conflict; 98 97 99 98 root = pci_find_parent_resource(dev, res); 100 99 if (!root) { ··· 102 103 return -EINVAL; 103 104 } 104 105 105 - err = request_resource(root, res); 106 - if (err) 106 + conflict = request_resource_conflict(root, res); 107 + if (conflict) { 107 108 dev_err(&dev->dev, 108 - "address space collision: %pR already in use\n", res); 109 + "address space collision: %pR conflicts with %s %pR\n", 110 + res, conflict->name, conflict); 111 + return -EBUSY; 112 + } 109 113 110 - return err; 114 + return 0; 111 115 } 112 116 EXPORT_SYMBOL(pci_claim_resource); 113 117
-2
drivers/pcmcia/at91_cf.c
··· 361 361 struct at91_cf_socket *cf = platform_get_drvdata(pdev); 362 362 struct at91_cf_data *board = cf->board; 363 363 364 - pcmcia_socket_dev_suspend(&pdev->dev); 365 364 if (device_may_wakeup(&pdev->dev)) { 366 365 enable_irq_wake(board->det_pin); 367 366 if (board->irq_pin) ··· 380 381 disable_irq_wake(board->irq_pin); 381 382 } 382 383 383 - pcmcia_socket_dev_resume(&pdev->dev); 384 384 return 0; 385 385 } 386 386
-13
drivers/pcmcia/au1000_generic.c
··· 510 510 return ret; 511 511 } 512 512 513 - static int au1x00_drv_pcmcia_suspend(struct platform_device *dev, 514 - pm_message_t state) 515 - { 516 - return pcmcia_socket_dev_suspend(&dev->dev); 517 - } 518 - 519 - static int au1x00_drv_pcmcia_resume(struct platform_device *dev) 520 - { 521 - return pcmcia_socket_dev_resume(&dev->dev); 522 - } 523 - 524 513 static struct platform_driver au1x00_pcmcia_driver = { 525 514 .driver = { 526 515 .name = "au1x00-pcmcia", ··· 517 528 }, 518 529 .probe = au1x00_drv_pcmcia_probe, 519 530 .remove = au1x00_drv_pcmcia_remove, 520 - .suspend = au1x00_drv_pcmcia_suspend, 521 - .resume = au1x00_drv_pcmcia_resume, 522 531 }; 523 532 524 533
-12
drivers/pcmcia/bfin_cf_pcmcia.c
··· 300 300 return 0; 301 301 } 302 302 303 - static int bfin_cf_suspend(struct platform_device *pdev, pm_message_t mesg) 304 - { 305 - return pcmcia_socket_dev_suspend(&pdev->dev); 306 - } 307 - 308 - static int bfin_cf_resume(struct platform_device *pdev) 309 - { 310 - return pcmcia_socket_dev_resume(&pdev->dev); 311 - } 312 - 313 303 static struct platform_driver bfin_cf_driver = { 314 304 .driver = { 315 305 .name = (char *)driver_name, ··· 307 317 }, 308 318 .probe = bfin_cf_probe, 309 319 .remove = __devexit_p(bfin_cf_remove), 310 - .suspend = bfin_cf_suspend, 311 - .resume = bfin_cf_resume, 312 320 }; 313 321 314 322 static int __init bfin_cf_init(void)
+63 -61
drivers/pcmcia/cs.c
··· 76 76 EXPORT_SYMBOL(pcmcia_socket_list_rwsem); 77 77 78 78 79 - /* 80 - * Low-level PCMCIA socket drivers need to register with the PCCard 81 - * core using pcmcia_register_socket. 82 - * 83 - * socket drivers are expected to use the following callbacks in their 84 - * .drv struct: 85 - * - pcmcia_socket_dev_suspend 86 - * - pcmcia_socket_dev_resume 87 - * These functions check for the appropriate struct pcmcia_soket arrays, 88 - * and pass them to the low-level functions pcmcia_{suspend,resume}_socket 89 - */ 90 - static int socket_early_resume(struct pcmcia_socket *skt); 91 - static int socket_late_resume(struct pcmcia_socket *skt); 92 - static int socket_resume(struct pcmcia_socket *skt); 93 - static int socket_suspend(struct pcmcia_socket *skt); 94 - 95 - static void pcmcia_socket_dev_run(struct device *dev, 96 - int (*cb)(struct pcmcia_socket *)) 97 - { 98 - struct pcmcia_socket *socket; 99 - 100 - down_read(&pcmcia_socket_list_rwsem); 101 - list_for_each_entry(socket, &pcmcia_socket_list, socket_list) { 102 - if (socket->dev.parent != dev) 103 - continue; 104 - mutex_lock(&socket->skt_mutex); 105 - cb(socket); 106 - mutex_unlock(&socket->skt_mutex); 107 - } 108 - up_read(&pcmcia_socket_list_rwsem); 109 - } 110 - 111 - int pcmcia_socket_dev_suspend(struct device *dev) 112 - { 113 - pcmcia_socket_dev_run(dev, socket_suspend); 114 - return 0; 115 - } 116 - EXPORT_SYMBOL(pcmcia_socket_dev_suspend); 117 - 118 - void pcmcia_socket_dev_early_resume(struct device *dev) 119 - { 120 - pcmcia_socket_dev_run(dev, socket_early_resume); 121 - } 122 - EXPORT_SYMBOL(pcmcia_socket_dev_early_resume); 123 - 124 - void pcmcia_socket_dev_late_resume(struct device *dev) 125 - { 126 - pcmcia_socket_dev_run(dev, socket_late_resume); 127 - } 128 - EXPORT_SYMBOL(pcmcia_socket_dev_late_resume); 129 - 130 - int pcmcia_socket_dev_resume(struct device *dev) 131 - { 132 - pcmcia_socket_dev_run(dev, socket_resume); 133 - return 0; 134 - } 135 - EXPORT_SYMBOL(pcmcia_socket_dev_resume); 
136 - 137 - 138 79 struct pcmcia_socket *pcmcia_get_socket(struct pcmcia_socket *skt) 139 80 { 140 81 struct device *dev = get_device(&skt->dev); ··· 519 578 520 579 static int socket_late_resume(struct pcmcia_socket *skt) 521 580 { 581 + int ret; 582 + 522 583 mutex_lock(&skt->ops_mutex); 523 584 skt->state &= ~SOCKET_SUSPEND; 524 585 mutex_unlock(&skt->ops_mutex); 525 586 526 - if (!(skt->state & SOCKET_PRESENT)) 527 - return socket_insert(skt); 587 + if (!(skt->state & SOCKET_PRESENT)) { 588 + ret = socket_insert(skt); 589 + if (ret == -ENODEV) 590 + ret = 0; 591 + return ret; 592 + } 528 593 529 594 if (skt->resume_status) { 530 595 socket_shutdown(skt); ··· 866 919 } 867 920 868 921 922 + #ifdef CONFIG_PM 923 + 924 + static int __pcmcia_pm_op(struct device *dev, 925 + int (*callback) (struct pcmcia_socket *skt)) 926 + { 927 + struct pcmcia_socket *s = container_of(dev, struct pcmcia_socket, dev); 928 + int ret; 929 + 930 + mutex_lock(&s->skt_mutex); 931 + ret = callback(s); 932 + mutex_unlock(&s->skt_mutex); 933 + 934 + return ret; 935 + } 936 + 937 + static int pcmcia_socket_dev_suspend_noirq(struct device *dev) 938 + { 939 + return __pcmcia_pm_op(dev, socket_suspend); 940 + } 941 + 942 + static int pcmcia_socket_dev_resume_noirq(struct device *dev) 943 + { 944 + return __pcmcia_pm_op(dev, socket_early_resume); 945 + } 946 + 947 + static int pcmcia_socket_dev_resume(struct device *dev) 948 + { 949 + return __pcmcia_pm_op(dev, socket_late_resume); 950 + } 951 + 952 + static const struct dev_pm_ops pcmcia_socket_pm_ops = { 953 + /* dev_resume may be called with IRQs enabled */ 954 + SET_SYSTEM_SLEEP_PM_OPS(NULL, 955 + pcmcia_socket_dev_resume) 956 + 957 + /* late suspend must be called with IRQs disabled */ 958 + .suspend_noirq = pcmcia_socket_dev_suspend_noirq, 959 + .freeze_noirq = pcmcia_socket_dev_suspend_noirq, 960 + .poweroff_noirq = pcmcia_socket_dev_suspend_noirq, 961 + 962 + /* early resume must be called with IRQs disabled */ 963 + .resume_noirq = 
pcmcia_socket_dev_resume_noirq, 964 + .thaw_noirq = pcmcia_socket_dev_resume_noirq, 965 + .restore_noirq = pcmcia_socket_dev_resume_noirq, 966 + }; 967 + 968 + #define PCMCIA_SOCKET_CLASS_PM_OPS (&pcmcia_socket_pm_ops) 969 + 970 + #else /* CONFIG_PM */ 971 + 972 + #define PCMCIA_SOCKET_CLASS_PM_OPS NULL 973 + 974 + #endif /* CONFIG_PM */ 975 + 869 976 struct class pcmcia_socket_class = { 870 977 .name = "pcmcia_socket", 871 978 .dev_uevent = pcmcia_socket_uevent, 872 979 .dev_release = pcmcia_release_socket, 873 980 .class_release = pcmcia_release_socket_class, 981 + .pm = PCMCIA_SOCKET_CLASS_PM_OPS, 874 982 }; 875 983 EXPORT_SYMBOL(pcmcia_socket_class); 876 984
-27
drivers/pcmcia/db1xxx_ss.c
··· 558 558 return 0; 559 559 } 560 560 561 - #ifdef CONFIG_PM 562 - static int db1x_pcmcia_suspend(struct device *dev) 563 - { 564 - return pcmcia_socket_dev_suspend(dev); 565 - } 566 - 567 - static int db1x_pcmcia_resume(struct device *dev) 568 - { 569 - return pcmcia_socket_dev_resume(dev); 570 - } 571 - 572 - static struct dev_pm_ops db1x_pcmcia_pmops = { 573 - .resume = db1x_pcmcia_resume, 574 - .suspend = db1x_pcmcia_suspend, 575 - .thaw = db1x_pcmcia_resume, 576 - .freeze = db1x_pcmcia_suspend, 577 - }; 578 - 579 - #define DB1XXX_SS_PMOPS &db1x_pcmcia_pmops 580 - 581 - #else 582 - 583 - #define DB1XXX_SS_PMOPS NULL 584 - 585 - #endif 586 - 587 561 static struct platform_driver db1x_pcmcia_socket_driver = { 588 562 .driver = { 589 563 .name = "db1xxx_pcmcia", 590 564 .owner = THIS_MODULE, 591 - .pm = DB1XXX_SS_PMOPS 592 565 }, 593 566 .probe = db1x_pcmcia_socket_probe, 594 567 .remove = __devexit_p(db1x_pcmcia_socket_remove),
+6 -2
drivers/pcmcia/ds.c
··· 509 509 p_dev->device_no = (s->device_count++); 510 510 mutex_unlock(&s->ops_mutex); 511 511 512 - /* max of 2 devices per card */ 513 - if (p_dev->device_no >= 2) 512 + /* max of 2 PFC devices */ 513 + if ((p_dev->device_no >= 2) && (function == 0)) 514 + goto err_free; 515 + 516 + /* max of 4 devices overall */ 517 + if (p_dev->device_no >= 4) 514 518 goto err_free; 515 519 516 520 p_dev->socket = s;
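The ds.c hunk above splits one limit into two: at most two PFC devices (function 0) and at most four devices per card overall. The acceptance rule can be sketched as a pure predicate (`device_allowed` is an illustrative name, not a kernel function):

```c
#include <assert.h>

/* Mirrors the patched checks: reject a third PFC device (function 0),
 * and reject a fifth device of any kind. Returns nonzero if allowed. */
static int device_allowed(unsigned int device_no, unsigned int function)
{
	if (device_no >= 2 && function == 0)
		return 0;		/* max of 2 PFC devices */
	return device_no < 4;		/* max of 4 devices overall */
}
```

Under the old single check, any third device was rejected; the new rule still caps function-0 devices at two but lets non-zero functions reach four.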
-16
drivers/pcmcia/i82092.c
··· 39 39 }; 40 40 MODULE_DEVICE_TABLE(pci, i82092aa_pci_ids); 41 41 42 - #ifdef CONFIG_PM 43 - static int i82092aa_socket_suspend (struct pci_dev *dev, pm_message_t state) 44 - { 45 - return pcmcia_socket_dev_suspend(&dev->dev); 46 - } 47 - 48 - static int i82092aa_socket_resume (struct pci_dev *dev) 49 - { 50 - return pcmcia_socket_dev_resume(&dev->dev); 51 - } 52 - #endif 53 - 54 42 static struct pci_driver i82092aa_pci_driver = { 55 43 .name = "i82092aa", 56 44 .id_table = i82092aa_pci_ids, 57 45 .probe = i82092aa_pci_probe, 58 46 .remove = __devexit_p(i82092aa_pci_remove), 59 - #ifdef CONFIG_PM 60 - .suspend = i82092aa_socket_suspend, 61 - .resume = i82092aa_socket_resume, 62 - #endif 63 47 }; 64 48 65 49
-11
drivers/pcmcia/i82365.c
··· 1223 1223 return 0; 1224 1224 } 1225 1225 1226 - static int i82365_drv_pcmcia_suspend(struct platform_device *dev, 1227 - pm_message_t state) 1228 - { 1229 - return pcmcia_socket_dev_suspend(&dev->dev); 1230 - } 1231 1226 1232 - static int i82365_drv_pcmcia_resume(struct platform_device *dev) 1233 - { 1234 - return pcmcia_socket_dev_resume(&dev->dev); 1235 - } 1236 1227 static struct pccard_operations pcic_operations = { 1237 1228 .init = pcic_init, 1238 1229 .get_status = pcic_get_status, ··· 1239 1248 .name = "i82365", 1240 1249 .owner = THIS_MODULE, 1241 1250 }, 1242 - .suspend = i82365_drv_pcmcia_suspend, 1243 - .resume = i82365_drv_pcmcia_resume, 1244 1251 }; 1245 1252 1246 1253 static struct platform_device *i82365_device;
-11
drivers/pcmcia/m32r_cfc.c
··· 685 685 .set_mem_map = pcc_set_mem_map, 686 686 }; 687 687 688 - static int cfc_drv_pcmcia_suspend(struct platform_device *dev, 689 - pm_message_t state) 690 - { 691 - return pcmcia_socket_dev_suspend(&dev->dev); 692 - } 693 688 694 - static int cfc_drv_pcmcia_resume(struct platform_device *dev) 695 - { 696 - return pcmcia_socket_dev_resume(&dev->dev); 697 - } 698 689 /*====================================================================*/ 699 690 700 691 static struct platform_driver pcc_driver = { ··· 693 702 .name = "cfc", 694 703 .owner = THIS_MODULE, 695 704 }, 696 - .suspend = cfc_drv_pcmcia_suspend, 697 - .resume = cfc_drv_pcmcia_resume, 698 705 }; 699 706 700 707 static struct platform_device pcc_device = {
-12
drivers/pcmcia/m32r_pcc.c
··· 663 663 .set_mem_map = pcc_set_mem_map, 664 664 }; 665 665 666 - static int pcc_drv_pcmcia_suspend(struct platform_device *dev, 667 - pm_message_t state) 668 - { 669 - return pcmcia_socket_dev_suspend(&dev->dev); 670 - } 671 - 672 - static int pcc_drv_pcmcia_resume(struct platform_device *dev) 673 - { 674 - return pcmcia_socket_dev_resume(&dev->dev); 675 - } 676 666 /*====================================================================*/ 677 667 678 668 static struct platform_driver pcc_driver = { ··· 670 680 .name = "pcc", 671 681 .owner = THIS_MODULE, 672 682 }, 673 - .suspend = pcc_drv_pcmcia_suspend, 674 - .resume = pcc_drv_pcmcia_resume, 675 683 }; 676 684 677 685 static struct platform_device pcc_device = {
-17
drivers/pcmcia/m8xx_pcmcia.c
··· 1288 1288 return 0; 1289 1289 } 1290 1290 1291 - #ifdef CONFIG_PM 1292 - static int m8xx_suspend(struct platform_device *pdev, pm_message_t state) 1293 - { 1294 - return pcmcia_socket_dev_suspend(&pdev->dev); 1295 - } 1296 - 1297 - static int m8xx_resume(struct platform_device *pdev) 1298 - { 1299 - return pcmcia_socket_dev_resume(&pdev->dev); 1300 - } 1301 - #else 1302 - #define m8xx_suspend NULL 1303 - #define m8xx_resume NULL 1304 - #endif 1305 - 1306 1291 static const struct of_device_id m8xx_pcmcia_match[] = { 1307 1292 { 1308 1293 .type = "pcmcia", ··· 1303 1318 .match_table = m8xx_pcmcia_match, 1304 1319 .probe = m8xx_probe, 1305 1320 .remove = m8xx_remove, 1306 - .suspend = m8xx_suspend, 1307 - .resume = m8xx_resume, 1308 1321 }; 1309 1322 1310 1323 static int __init m8xx_init(void)
-12
drivers/pcmcia/omap_cf.c
··· 330 330 return 0; 331 331 } 332 332 333 - static int omap_cf_suspend(struct platform_device *pdev, pm_message_t mesg) 334 - { 335 - return pcmcia_socket_dev_suspend(&pdev->dev); 336 - } 337 - 338 - static int omap_cf_resume(struct platform_device *pdev) 339 - { 340 - return pcmcia_socket_dev_resume(&pdev->dev); 341 - } 342 - 343 333 static struct platform_driver omap_cf_driver = { 344 334 .driver = { 345 335 .name = (char *) driver_name, 346 336 .owner = THIS_MODULE, 347 337 }, 348 338 .remove = __exit_p(omap_cf_remove), 349 - .suspend = omap_cf_suspend, 350 - .resume = omap_cf_resume, 351 339 }; 352 340 353 341 static int __init omap_cf_init(void)
+39 -41
drivers/pcmcia/pd6729.c
··· 14 14 #include <linux/workqueue.h> 15 15 #include <linux/interrupt.h> 16 16 #include <linux/device.h> 17 + #include <linux/io.h> 17 18 18 19 #include <pcmcia/cs_types.h> 19 20 #include <pcmcia/ss.h> 20 21 #include <pcmcia/cs.h> 21 22 22 23 #include <asm/system.h> 23 - #include <asm/io.h> 24 24 25 25 #include "pd6729.h" 26 26 #include "i82365.h" ··· 222 222 ? SS_READY : 0; 223 223 } 224 224 225 - if (events) { 225 + if (events) 226 226 pcmcia_parse_events(&socket[i].socket, events); 227 - } 227 + 228 228 active |= events; 229 229 } 230 230 ··· 256 256 status = indirect_read(socket, I365_STATUS); 257 257 *value = 0; 258 258 259 - if ((status & I365_CS_DETECT) == I365_CS_DETECT) { 259 + if ((status & I365_CS_DETECT) == I365_CS_DETECT) 260 260 *value |= SS_DETECT; 261 - } 262 261 263 262 /* 264 263 * IO cards have a different meaning of bits 0,1 ··· 307 308 socket->card_irq = state->io_irq; 308 309 309 310 reg = 0; 310 - /* The reset bit has "inverse" logic */ 311 + /* The reset bit has "inverse" logic */ 311 312 if (!(state->flags & SS_RESET)) 312 313 reg |= I365_PC_RESET; 313 314 if (state->flags & SS_IOCARD) ··· 379 380 indirect_write(socket, I365_POWER, reg); 380 381 381 382 if (irq_mode == 1) { 382 - /* all interrupts are to be done as PCI interrupts */ 383 + /* all interrupts are to be done as PCI interrupts */ 383 384 data = PD67_EC1_INV_MGMT_IRQ | PD67_EC1_INV_CARD_IRQ; 384 385 } else 385 386 data = 0; ··· 390 391 /* Enable specific interrupt events */ 391 392 392 393 reg = 0x00; 393 - if (state->csc_mask & SS_DETECT) { 394 + if (state->csc_mask & SS_DETECT) 394 395 reg |= I365_CSC_DETECT; 395 - } 396 + 396 397 if (state->flags & SS_IOCARD) { 397 398 if (state->csc_mask & SS_STSCHG) 398 399 reg |= I365_CSC_STSCHG; ··· 449 450 450 451 ioctl = indirect_read(socket, I365_IOCTL) & ~I365_IOCTL_MASK(map); 451 452 452 - if (io->flags & MAP_0WS) ioctl |= I365_IOCTL_0WS(map); 453 - if (io->flags & MAP_16BIT) ioctl |= I365_IOCTL_16BIT(map); 454 - if (io->flags & 
MAP_AUTOSZ) ioctl |= I365_IOCTL_IOCS16(map); 453 + if (io->flags & MAP_0WS) 454 + ioctl |= I365_IOCTL_0WS(map); 455 + if (io->flags & MAP_16BIT) 456 + ioctl |= I365_IOCTL_16BIT(map); 457 + if (io->flags & MAP_AUTOSZ) 458 + ioctl |= I365_IOCTL_IOCS16(map); 455 459 456 460 indirect_write(socket, I365_IOCTL, ioctl); 457 461 ··· 499 497 500 498 /* write the stop address */ 501 499 502 - i= (mem->res->end >> 12) & 0x0fff; 500 + i = (mem->res->end >> 12) & 0x0fff; 503 501 switch (to_cycles(mem->speed)) { 504 502 case 0: 505 503 break; ··· 565 563 566 564 /* the pccard structure and its functions */ 567 565 static struct pccard_operations pd6729_operations = { 568 - .init = pd6729_init, 566 + .init = pd6729_init, 569 567 .get_status = pd6729_get_status, 570 568 .set_socket = pd6729_set_socket, 571 569 .set_io_map = pd6729_set_io_map, ··· 580 578 581 579 static int pd6729_check_irq(int irq) 582 580 { 583 - if (request_irq(irq, pd6729_test, IRQF_PROBE_SHARED, "x", pd6729_test) 584 - != 0) return -1; 581 + int ret; 582 + 583 + ret = request_irq(irq, pd6729_test, IRQF_PROBE_SHARED, "x", 584 + pd6729_test); 585 + if (ret) 586 + return -1; 587 + 585 588 free_irq(irq, pd6729_test); 586 589 return 0; 587 590 } ··· 598 591 599 592 if (irq_mode == 1) { 600 593 printk(KERN_INFO "pd6729: PCI card interrupts, " 601 - "PCI status changes\n"); 594 + "PCI status changes\n"); 602 595 return 0; 603 596 } 604 597 ··· 614 607 if (mask & (1<<i)) 615 608 printk("%s%d", ((mask & ((1<<i)-1)) ? 
"," : ""), i); 616 609 617 - if (mask == 0) printk("none!"); 618 - 619 - printk(" polling status changes.\n"); 610 + if (mask == 0) 611 + printk("none!"); 612 + else 613 + printk(" polling status changes.\n"); 620 614 621 615 return mask; 622 616 } ··· 632 624 633 625 socket = kzalloc(sizeof(struct pd6729_socket) * MAX_SOCKETS, 634 626 GFP_KERNEL); 635 - if (!socket) 627 + if (!socket) { 628 + dev_warn(&dev->dev, "failed to kzalloc socket.\n"); 636 629 return -ENOMEM; 630 + } 637 631 638 - if ((ret = pci_enable_device(dev))) 632 + ret = pci_enable_device(dev); 633 + if (ret) { 634 + dev_warn(&dev->dev, "failed to enable pci_device.\n"); 639 635 goto err_out_free_mem; 636 + } 640 637 641 638 if (!pci_resource_start(dev, 0)) { 642 639 dev_warn(&dev->dev, "refusing to load the driver as the " ··· 652 639 dev_info(&dev->dev, "Cirrus PD6729 PCI to PCMCIA Bridge at 0x%llx " 653 640 "on irq %d\n", 654 641 (unsigned long long)pci_resource_start(dev, 0), dev->irq); 655 - /* 642 + /* 656 643 * Since we have no memory BARs some firmware may not 657 644 * have had PCI_COMMAND_MEMORY enabled, yet the device needs it. 
658 645 */ ··· 698 685 pci_set_drvdata(dev, socket); 699 686 if (irq_mode == 1) { 700 687 /* Register the interrupt handler */ 701 - if ((ret = request_irq(dev->irq, pd6729_interrupt, IRQF_SHARED, 702 - "pd6729", socket))) { 688 + ret = request_irq(dev->irq, pd6729_interrupt, IRQF_SHARED, 689 + "pd6729", socket); 690 + if (ret) { 703 691 dev_err(&dev->dev, "Failed to register irq %d\n", 704 692 dev->irq); 705 693 goto err_out_free_res; ··· 764 750 kfree(socket); 765 751 } 766 752 767 - #ifdef CONFIG_PM 768 - static int pd6729_socket_suspend(struct pci_dev *dev, pm_message_t state) 769 - { 770 - return pcmcia_socket_dev_suspend(&dev->dev); 771 - } 772 - 773 - static int pd6729_socket_resume(struct pci_dev *dev) 774 - { 775 - return pcmcia_socket_dev_resume(&dev->dev); 776 - } 777 - #endif 778 - 779 753 static struct pci_device_id pd6729_pci_ids[] = { 780 754 { 781 755 .vendor = PCI_VENDOR_ID_CIRRUS, ··· 780 778 .id_table = pd6729_pci_ids, 781 779 .probe = pd6729_pci_probe, 782 780 .remove = __devexit_p(pd6729_pci_remove), 783 - #ifdef CONFIG_PM 784 - .suspend = pd6729_socket_suspend, 785 - .resume = pd6729_socket_resume, 786 - #endif 787 781 }; 788 782 789 783 static int pd6729_module_init(void)
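The pd6729 hunks above replace `if ((ret = f()))` one-liners with a separate call and test, feeding the existing goto-based unwind labels. A minimal userspace sketch of that idiom, with purely illustrative names (`probe`, `fake_enable_device`) standing in for the driver's real calls:

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for pci_enable_device(): returns 0 on success. */
static int fake_enable_device(int should_fail)
{
	return should_fail ? -1 : 0;
}

static int probe(int fail_enable, char **out)
{
	char *socket;
	int ret;

	socket = malloc(16);		/* kzalloc() analog */
	if (!socket)
		return -1;

	/* call and test on separate lines, as the patch restyles it */
	ret = fake_enable_device(fail_enable);
	if (ret)
		goto err_out_free_mem;

	strcpy(socket, "probed");
	*out = socket;
	return 0;

err_out_free_mem:
	/* each label releases only what was acquired before the failure */
	free(socket);
	return ret;
}
```

Each later failure point would jump to a deeper label, so resources are released in reverse acquisition order.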
+1 -7
drivers/pcmcia/pxa2xx_base.c
··· 325 325 return 0; 326 326 } 327 327 328 - static int pxa2xx_drv_pcmcia_suspend(struct device *dev) 329 - { 330 - return pcmcia_socket_dev_suspend(dev); 331 - } 332 - 333 328 static int pxa2xx_drv_pcmcia_resume(struct device *dev) 334 329 { 335 330 pxa2xx_configure_sockets(dev); 336 - return pcmcia_socket_dev_resume(dev); 331 + return 0; 337 332 } 338 333 339 334 static const struct dev_pm_ops pxa2xx_drv_pcmcia_pm_ops = { 340 - .suspend = pxa2xx_drv_pcmcia_suspend, 341 335 .resume = pxa2xx_drv_pcmcia_resume, 342 336 }; 343 337
+11 -8
drivers/pcmcia/rsrc_nonstatic.c
··· 810 810 unsigned long size = end - start + 1; 811 811 int ret = 0; 812 812 813 + #if defined(CONFIG_X86) 814 + /* on x86, avoid anything < 0x100 for it is often used for 815 + * legacy platform devices */ 816 + if (start < 0x100) 817 + start = 0x100; 818 + #endif 819 + 813 820 if (end < start) 814 821 return -EINVAL; 815 822 ··· 874 867 if (res == &ioport_resource) 875 868 continue; 876 869 dev_printk(KERN_INFO, &s->cb_dev->dev, 877 - "pcmcia: parent PCI bridge I/O " 878 - "window: 0x%llx - 0x%llx\n", 879 - (unsigned long long)res->start, 880 - (unsigned long long)res->end); 870 + "pcmcia: parent PCI bridge window: %pR\n", 871 + res); 881 872 if (!adjust_io(s, ADD_MANAGED_RESOURCE, res->start, res->end)) 882 873 done |= IORESOURCE_IO; 883 874 ··· 885 880 if (res == &iomem_resource) 886 881 continue; 887 882 dev_printk(KERN_INFO, &s->cb_dev->dev, 888 - "pcmcia: parent PCI bridge Memory " 889 - "window: 0x%llx - 0x%llx\n", 890 - (unsigned long long)res->start, 891 - (unsigned long long)res->end); 883 + "pcmcia: parent PCI bridge window: %pR\n", 884 + res); 892 885 if (!adjust_memory(s, ADD_MANAGED_RESOURCE, res->start, res->end)) 893 886 done |= IORESOURCE_MEM; 894 887 }
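The rsrc_nonstatic hunk keeps PCMCIA I/O allocations away from ports below 0x100 on x86, where legacy platform devices (PIT, PIC, keyboard controller) live. The clamp itself is tiny; a standalone sketch (hypothetical helper name, not the kernel function):

```c
/*
 * Bump an I/O range start out of the legacy x86 port window,
 * mirroring the check added to adjust_io() on CONFIG_X86.
 */
static unsigned long clamp_io_start(unsigned long start)
{
	if (start < 0x100)	/* < 0x100 is reserved for legacy devices */
		start = 0x100;
	return start;
}
```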
-13
drivers/pcmcia/sa1100_generic.c
··· 95 95 return 0; 96 96 } 97 97 98 - static int sa11x0_drv_pcmcia_suspend(struct platform_device *dev, 99 - pm_message_t state) 100 - { 101 - return pcmcia_socket_dev_suspend(&dev->dev); 102 - } 103 - 104 - static int sa11x0_drv_pcmcia_resume(struct platform_device *dev) 105 - { 106 - return pcmcia_socket_dev_resume(&dev->dev); 107 - } 108 - 109 98 static struct platform_driver sa11x0_pcmcia_driver = { 110 99 .driver = { 111 100 .name = "sa11x0-pcmcia", ··· 102 113 }, 103 114 .probe = sa11x0_drv_pcmcia_probe, 104 115 .remove = sa11x0_drv_pcmcia_remove, 105 - .suspend = sa11x0_drv_pcmcia_suspend, 106 - .resume = sa11x0_drv_pcmcia_resume, 107 116 }; 108 117 109 118 /* sa11x0_pcmcia_init()
-12
drivers/pcmcia/sa1111_generic.c
··· 213 213 return 0; 214 214 } 215 215 216 - static int pcmcia_suspend(struct sa1111_dev *dev, pm_message_t state) 217 - { 218 - return pcmcia_socket_dev_suspend(&dev->dev); 219 - } 220 - 221 - static int pcmcia_resume(struct sa1111_dev *dev) 222 - { 223 - return pcmcia_socket_dev_resume(&dev->dev); 224 - } 225 - 226 216 static struct sa1111_driver pcmcia_driver = { 227 217 .drv = { 228 218 .name = "sa1111-pcmcia", ··· 220 230 .devid = SA1111_DEVID_PCMCIA, 221 231 .probe = pcmcia_probe, 222 232 .remove = __devexit_p(pcmcia_remove), 223 - .suspend = pcmcia_suspend, 224 - .resume = pcmcia_resume, 225 233 }; 226 234 227 235 static int __init sa1111_drv_pcmcia_init(void)
-12
drivers/pcmcia/tcic.c
··· 348 348 return id; 349 349 } 350 350 351 - static int tcic_drv_pcmcia_suspend(struct platform_device *dev, 352 - pm_message_t state) 353 - { 354 - return pcmcia_socket_dev_suspend(&dev->dev); 355 - } 356 - 357 - static int tcic_drv_pcmcia_resume(struct platform_device *dev) 358 - { 359 - return pcmcia_socket_dev_resume(&dev->dev); 360 - } 361 351 /*====================================================================*/ 362 352 363 353 static struct platform_driver tcic_driver = { ··· 355 365 .name = "tcic-pcmcia", 356 366 .owner = THIS_MODULE, 357 367 }, 358 - .suspend = tcic_drv_pcmcia_suspend, 359 - .resume = tcic_drv_pcmcia_resume, 360 368 }; 361 369 362 370 static struct platform_device tcic_device = {
-13
drivers/pcmcia/vrc4171_card.c
··· 705 705 706 706 __setup("vrc4171_card=", vrc4171_card_setup); 707 707 708 - static int vrc4171_card_suspend(struct platform_device *dev, 709 - pm_message_t state) 710 - { 711 - return pcmcia_socket_dev_suspend(&dev->dev); 712 - } 713 - 714 - static int vrc4171_card_resume(struct platform_device *dev) 715 - { 716 - return pcmcia_socket_dev_resume(&dev->dev); 717 - } 718 - 719 708 static struct platform_driver vrc4171_card_driver = { 720 709 .driver = { 721 710 .name = vrc4171_card_name, 722 711 .owner = THIS_MODULE, 723 712 }, 724 - .suspend = vrc4171_card_suspend, 725 - .resume = vrc4171_card_resume, 726 713 }; 727 714 728 715 static int __devinit vrc4171_card_init(void)
+2 -15
drivers/pcmcia/yenta_socket.c
··· 1290 1290 { 1291 1291 struct pci_dev *pdev = to_pci_dev(dev); 1292 1292 struct yenta_socket *socket = pci_get_drvdata(pdev); 1293 - int ret; 1294 - 1295 - ret = pcmcia_socket_dev_suspend(dev); 1296 1293 1297 1294 if (!socket) 1298 - return ret; 1295 + return 0; 1299 1296 1300 1297 if (socket->type && socket->type->save_state) 1301 1298 socket->type->save_state(socket); ··· 1309 1312 */ 1310 1313 /* pci_set_power_state(dev, 3); */ 1311 1314 1312 - return ret; 1315 + return 0; 1313 1316 } 1314 1317 1315 1318 static int yenta_dev_resume_noirq(struct device *dev) ··· 1333 1336 if (socket->type && socket->type->restore_state) 1334 1337 socket->type->restore_state(socket); 1335 1338 1336 - pcmcia_socket_dev_early_resume(dev); 1337 - return 0; 1338 - } 1339 - 1340 - static int yenta_dev_resume(struct device *dev) 1341 - { 1342 - pcmcia_socket_dev_late_resume(dev); 1343 1339 return 0; 1344 1340 } 1345 1341 1346 1342 static const struct dev_pm_ops yenta_pm_ops = { 1347 1343 .suspend_noirq = yenta_dev_suspend_noirq, 1348 1344 .resume_noirq = yenta_dev_resume_noirq, 1349 - .resume = yenta_dev_resume, 1350 1345 .freeze_noirq = yenta_dev_suspend_noirq, 1351 1346 .thaw_noirq = yenta_dev_resume_noirq, 1352 - .thaw = yenta_dev_resume, 1353 1347 .poweroff_noirq = yenta_dev_suspend_noirq, 1354 1348 .restore_noirq = yenta_dev_resume_noirq, 1355 - .restore = yenta_dev_resume, 1356 1349 }; 1357 1350 1358 1351 #define YENTA_PM_OPS (&yenta_pm_ops)
+10
drivers/platform/x86/Kconfig
··· 385 385 386 386 If you have an Eee PC laptop, say Y or M here. 387 387 388 + config EEEPC_WMI 389 + tristate "Eee PC WMI Hotkey Driver (EXPERIMENTAL)" 390 + depends on ACPI_WMI 391 + depends on INPUT 392 + depends on EXPERIMENTAL 393 + ---help--- 394 + Say Y here if you want to support WMI-based hotkeys on Eee PC laptops. 395 + 396 + To compile this driver as a module, choose M here: the module will 397 + be called eeepc-wmi. 388 398 389 399 config ACPI_WMI 390 400 tristate "WMI"
+1
drivers/platform/x86/Makefile
··· 4 4 # 5 5 obj-$(CONFIG_ASUS_LAPTOP) += asus-laptop.o 6 6 obj-$(CONFIG_EEEPC_LAPTOP) += eeepc-laptop.o 7 + obj-$(CONFIG_EEEPC_WMI) += eeepc-wmi.o 7 8 obj-$(CONFIG_MSI_LAPTOP) += msi-laptop.o 8 9 obj-$(CONFIG_ACPI_CMPC) += classmate-laptop.o 9 10 obj-$(CONFIG_COMPAL_LAPTOP) += compal-laptop.o
+2 -2
drivers/platform/x86/asus-laptop.c
··· 139 139 140 140 /* Backlight */ 141 141 static acpi_handle lcd_switch_handle; 142 - static const char *lcd_switch_paths[] = { 142 + static char *lcd_switch_paths[] = { 143 143 "\\_SB.PCI0.SBRG.EC0._Q10", /* All new models */ 144 144 "\\_SB.PCI0.ISA.EC0._Q10", /* A1x */ 145 145 "\\_SB.PCI0.PX40.ECD0._Q10", /* L3C */ ··· 153 153 #define METHOD_SWITCH_DISPLAY "SDSP" 154 154 155 155 static acpi_handle display_get_handle; 156 - static const char *display_get_paths[] = { 156 + static char *display_get_paths[] = { 157 157 /* A6B, A6K A6R A7D F3JM L4R M6R A3G M6A M6V VX-1 V6J V6V W3Z */ 158 158 "\\_SB.PCI0.P0P1.VGA.GETD", 159 159 /* A3E A4K, A4D A4L A6J A7J A8J Z71V M9V S5A M5A z33A W1Jc W2V G1 */
+157
drivers/platform/x86/eeepc-wmi.c
··· 1 + /* 2 + * Eee PC WMI hotkey driver 3 + * 4 + * Copyright(C) 2010 Intel Corporation. 5 + * 6 + * Portions based on wistron_btns.c: 7 + * Copyright (C) 2005 Miloslav Trmac <mitr@volny.cz> 8 + * Copyright (C) 2005 Bernhard Rosenkraenzer <bero@arklinux.org> 9 + * Copyright (C) 2005 Dmitry Torokhov <dtor@mail.ru> 10 + * 11 + * This program is free software; you can redistribute it and/or modify 12 + * it under the terms of the GNU General Public License as published by 13 + * the Free Software Foundation; either version 2 of the License, or 14 + * (at your option) any later version. 15 + * 16 + * This program is distributed in the hope that it will be useful, 17 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 + * GNU General Public License for more details. 20 + * 21 + * You should have received a copy of the GNU General Public License 22 + * along with this program; if not, write to the Free Software 23 + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 24 + */ 25 + 26 + #include <linux/kernel.h> 27 + #include <linux/module.h> 28 + #include <linux/init.h> 29 + #include <linux/types.h> 30 + #include <linux/input.h> 31 + #include <linux/input/sparse-keymap.h> 32 + #include <acpi/acpi_bus.h> 33 + #include <acpi/acpi_drivers.h> 34 + 35 + MODULE_AUTHOR("Yong Wang <yong.y.wang@intel.com>"); 36 + MODULE_DESCRIPTION("Eee PC WMI Hotkey Driver"); 37 + MODULE_LICENSE("GPL"); 38 + 39 + #define EEEPC_WMI_EVENT_GUID "ABBC0F72-8EA1-11D1-00A0-C90629100000" 40 + 41 + MODULE_ALIAS("wmi:"EEEPC_WMI_EVENT_GUID); 42 + 43 + #define NOTIFY_BRNUP_MIN 0x11 44 + #define NOTIFY_BRNUP_MAX 0x1f 45 + #define NOTIFY_BRNDOWN_MIN 0x20 46 + #define NOTIFY_BRNDOWN_MAX 0x2e 47 + 48 + static const struct key_entry eeepc_wmi_keymap[] = { 49 + /* Sleep already handled via generic ACPI code */ 50 + { KE_KEY, 0x5d, { KEY_WLAN } }, 51 + { KE_KEY, 0x32, { KEY_MUTE } }, 52 + { KE_KEY, 0x31, { 
KEY_VOLUMEDOWN } }, 53 + { KE_KEY, 0x30, { KEY_VOLUMEUP } }, 54 + { KE_IGNORE, NOTIFY_BRNDOWN_MIN, { KEY_BRIGHTNESSDOWN } }, 55 + { KE_IGNORE, NOTIFY_BRNUP_MIN, { KEY_BRIGHTNESSUP } }, 56 + { KE_KEY, 0xcc, { KEY_SWITCHVIDEOMODE } }, 57 + { KE_END, 0}, 58 + }; 59 + 60 + static struct input_dev *eeepc_wmi_input_dev; 61 + 62 + static void eeepc_wmi_notify(u32 value, void *context) 63 + { 64 + struct acpi_buffer response = { ACPI_ALLOCATE_BUFFER, NULL }; 65 + union acpi_object *obj; 66 + acpi_status status; 67 + int code; 68 + 69 + status = wmi_get_event_data(value, &response); 70 + if (status != AE_OK) { 71 + pr_err("EEEPC WMI: bad event status 0x%x\n", status); 72 + return; 73 + } 74 + 75 + obj = (union acpi_object *)response.pointer; 76 + 77 + if (obj && obj->type == ACPI_TYPE_INTEGER) { 78 + code = obj->integer.value; 79 + 80 + if (code >= NOTIFY_BRNUP_MIN && code <= NOTIFY_BRNUP_MAX) 81 + code = NOTIFY_BRNUP_MIN; 82 + else if (code >= NOTIFY_BRNDOWN_MIN && code <= NOTIFY_BRNDOWN_MAX) 83 + code = NOTIFY_BRNDOWN_MIN; 84 + 85 + if (!sparse_keymap_report_event(eeepc_wmi_input_dev, 86 + code, 1, true)) 87 + pr_info("EEEPC WMI: Unknown key %x pressed\n", code); 88 + } 89 + 90 + kfree(obj); 91 + } 92 + 93 + static int eeepc_wmi_input_setup(void) 94 + { 95 + int err; 96 + 97 + eeepc_wmi_input_dev = input_allocate_device(); 98 + if (!eeepc_wmi_input_dev) 99 + return -ENOMEM; 100 + 101 + eeepc_wmi_input_dev->name = "Eee PC WMI hotkeys"; 102 + eeepc_wmi_input_dev->phys = "wmi/input0"; 103 + eeepc_wmi_input_dev->id.bustype = BUS_HOST; 104 + 105 + err = sparse_keymap_setup(eeepc_wmi_input_dev, eeepc_wmi_keymap, NULL); 106 + if (err) 107 + goto err_free_dev; 108 + 109 + err = input_register_device(eeepc_wmi_input_dev); 110 + if (err) 111 + goto err_free_keymap; 112 + 113 + return 0; 114 + 115 + err_free_keymap: 116 + sparse_keymap_free(eeepc_wmi_input_dev); 117 + err_free_dev: 118 + input_free_device(eeepc_wmi_input_dev); 119 + return err; 120 + } 121 + 122 + static int __init 
eeepc_wmi_init(void) 123 + { 124 + int err; 125 + acpi_status status; 126 + 127 + if (!wmi_has_guid(EEEPC_WMI_EVENT_GUID)) { 128 + pr_warning("EEEPC WMI: No known WMI GUID found\n"); 129 + return -ENODEV; 130 + } 131 + 132 + err = eeepc_wmi_input_setup(); 133 + if (err) 134 + return err; 135 + 136 + status = wmi_install_notify_handler(EEEPC_WMI_EVENT_GUID, 137 + eeepc_wmi_notify, NULL); 138 + if (ACPI_FAILURE(status)) { 139 + sparse_keymap_free(eeepc_wmi_input_dev); 140 + input_unregister_device(eeepc_wmi_input_dev); 141 + pr_err("EEEPC WMI: Unable to register notify handler - %d\n", 142 + status); 143 + return -ENODEV; 144 + } 145 + 146 + return 0; 147 + } 148 + 149 + static void __exit eeepc_wmi_exit(void) 150 + { 151 + wmi_remove_notify_handler(EEEPC_WMI_EVENT_GUID); 152 + sparse_keymap_free(eeepc_wmi_input_dev); 153 + input_unregister_device(eeepc_wmi_input_dev); 154 + } 155 + 156 + module_init(eeepc_wmi_init); 157 + module_exit(eeepc_wmi_exit);
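The notify handler in eeepc-wmi.c collapses whole ranges of brightness notification codes onto a single keymap entry before the sparse-keymap lookup, so the table needs only one `KE_IGNORE` row per direction. That normalization step in isolation:

```c
/* Same ranges as the driver above. */
#define NOTIFY_BRNUP_MIN	0x11
#define NOTIFY_BRNUP_MAX	0x1f
#define NOTIFY_BRNDOWN_MIN	0x20
#define NOTIFY_BRNDOWN_MAX	0x2e

static int normalize_notify_code(int code)
{
	if (code >= NOTIFY_BRNUP_MIN && code <= NOTIFY_BRNUP_MAX)
		return NOTIFY_BRNUP_MIN;	/* any step up -> one entry */
	if (code >= NOTIFY_BRNDOWN_MIN && code <= NOTIFY_BRNDOWN_MAX)
		return NOTIFY_BRNDOWN_MIN;	/* any step down -> one entry */
	return code;	/* hotkeys such as 0x5d (WLAN) pass through */
}
```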
+1
drivers/serial/serial_cs.c
··· 745 745 PCMCIA_PFC_DEVICE_PROD_ID13(1, "Xircom", "REM10", 0x2e3ee845, 0x76df1d29), 746 746 PCMCIA_PFC_DEVICE_PROD_ID13(1, "Xircom", "XEM5600", 0x2e3ee845, 0xf1403719), 747 747 PCMCIA_PFC_DEVICE_PROD_ID12(1, "AnyCom", "Fast Ethernet + 56K COMBO", 0x578ba6e7, 0xb0ac62c4), 748 + PCMCIA_PFC_DEVICE_PROD_ID12(1, "ATKK", "LM33-PCM-T", 0xba9eb7e2, 0x077c174e), 748 749 PCMCIA_PFC_DEVICE_PROD_ID12(1, "D-Link", "DME336T", 0x1a424a1c, 0xb23897ff), 749 750 PCMCIA_PFC_DEVICE_PROD_ID12(1, "Gateway 2000", "XJEM3336", 0xdd9989be, 0x662c394c), 750 751 PCMCIA_PFC_DEVICE_PROD_ID12(1, "Grey Cell", "GCS3000", 0x2a151fac, 0x48b932ae),
+3 -1
drivers/serial/sunsu.c
··· 1453 1453 if (up->su_type == SU_PORT_KBD || up->su_type == SU_PORT_MS) { 1454 1454 err = sunsu_kbd_ms_init(up); 1455 1455 if (err) { 1456 + of_iounmap(&op->resource[0], 1457 + up->port.membase, up->reg_size); 1456 1458 kfree(up); 1457 - goto out_unmap; 1459 + return err; 1458 1460 } 1459 1461 dev_set_drvdata(&op->dev, up); 1460 1462
+1 -1
drivers/staging/et131x/et1310_mac.c
··· 226 226 } 227 227 228 228 /* Enable TXMAC */ 229 - ctl |= 0x05; /* TX mac enable, FC disable */ 229 + ctl |= 0x09; /* TX mac enable, FC disable */ 230 230 writel(ctl, &etdev->regs->txmac.ctl); 231 231 232 232 /* Ready to start the RXDMA/TXDMA engine */
+9
drivers/usb/gadget/at91_udc.c
··· 1370 1370 { 1371 1371 struct at91_udc *udc = _udc; 1372 1372 u32 rescans = 5; 1373 + int disable_clock = 0; 1374 + 1375 + if (!udc->clocked) { 1376 + clk_on(udc); 1377 + disable_clock = 1; 1378 + } 1373 1379 1374 1380 while (rescans--) { 1375 1381 u32 status; ··· 1463 1457 } 1464 1458 } 1465 1459 } 1460 + 1461 + if (disable_clock) 1462 + clk_off(udc); 1466 1463 1467 1464 return IRQ_HANDLED; 1468 1465 }
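The at91_udc change lets the interrupt handler run while the controller clocks are gated: it enables them on entry if needed, remembers that it did, and restores the prior state on exit. A userspace sketch of that guard, with `clk_on()`/`clk_off()` as trivial stand-ins:

```c
static int clocked;
static int on_calls, off_calls;

static void clk_on(void)  { clocked = 1; on_calls++; }
static void clk_off(void) { clocked = 0; off_calls++; }

static void irq_handler(void)
{
	int disable_clock = 0;

	if (!clocked) {		/* handler may fire with clocks gated */
		clk_on();
		disable_clock = 1;
	}

	/* ... registers can be touched safely here ... */

	if (disable_clock)	/* restore exactly the state we found */
		clk_off();
}
```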
+17 -7
drivers/video/sunxvr500.c
··· 242 242 static int __devinit e3d_pci_register(struct pci_dev *pdev, 243 243 const struct pci_device_id *ent) 244 244 { 245 + struct device_node *of_node; 246 + const char *device_type; 245 247 struct fb_info *info; 246 248 struct e3d_info *ep; 247 249 unsigned int line_length; 248 250 int err; 251 + 252 + of_node = pci_device_to_OF_node(pdev); 253 + if (!of_node) { 254 + printk(KERN_ERR "e3d: Cannot find OF node of %s\n", 255 + pci_name(pdev)); 256 + return -ENODEV; 257 + } 258 + 259 + device_type = of_get_property(of_node, "device_type", NULL); 260 + if (!device_type) { 261 + printk(KERN_INFO "e3d: Ignoring secondary output device " 262 + "at %s\n", pci_name(pdev)); 263 + return -ENODEV; 264 + } 249 265 250 266 err = pci_enable_device(pdev); 251 267 if (err < 0) { ··· 281 265 ep->info = info; 282 266 ep->pdev = pdev; 283 267 spin_lock_init(&ep->lock); 284 - ep->of_node = pci_device_to_OF_node(pdev); 285 - if (!ep->of_node) { 286 - printk(KERN_ERR "e3d: Cannot find OF node of %s\n", 287 - pci_name(pdev)); 288 - err = -ENODEV; 289 - goto err_release_fb; 290 - } 268 + ep->of_node = of_node; 291 269 292 270 /* Read the PCI base register of the frame buffer, which we 293 271 * need in order to interpret the RAMDAC_VID_*FB* values in
+8 -2
fs/ceph/addr.c
··· 919 919 /* 920 920 * We are only allowed to write into/dirty the page if the page is 921 921 * clean, or already dirty within the same snap context. 922 + * 923 + * called with page locked. 924 + * return success with page locked, 925 + * or any failure (incl -EAGAIN) with page unlocked. 922 926 */ 923 927 static int ceph_update_writeable_page(struct file *file, 924 928 loff_t pos, unsigned len, ··· 965 961 snapc = ceph_get_snap_context((void *)page->private); 966 962 unlock_page(page); 967 963 ceph_queue_writeback(inode); 968 - wait_event_interruptible(ci->i_cap_wq, 964 + r = wait_event_interruptible(ci->i_cap_wq, 969 965 context_is_writeable_or_written(inode, snapc)); 970 966 ceph_put_snap_context(snapc); 967 + if (r == -ERESTARTSYS) 968 + return r; 971 969 return -EAGAIN; 972 970 } 973 971 ··· 1041 1035 int r; 1042 1036 1043 1037 do { 1044 - /* get a page*/ 1038 + /* get a page */ 1045 1039 page = grab_cache_page_write_begin(mapping, index, 0); 1046 1040 if (!page) 1047 1041 return -ENOMEM;
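The addr.c hunk stops discarding the return value of `wait_event_interruptible()`: a signal makes the wait return -ERESTARTSYS, which must reach the caller rather than being folded into the -EAGAIN retry path. A sketch of that distinction, with `wait_for_context()` as a hypothetical stand-in for the wait (the errno values are defined locally in case the platform headers lack them):

```c
#ifndef ERESTARTSYS
#define ERESTARTSYS	512	/* kernel-internal value */
#endif
#ifndef EAGAIN
#define EAGAIN		11
#endif

static int wait_for_context(int interrupted)
{
	return interrupted ? -ERESTARTSYS : 0;
}

static int update_page(int interrupted)
{
	int r = wait_for_context(interrupted);

	if (r == -ERESTARTSYS)
		return r;	/* signal: propagate, restart the syscall */
	return -EAGAIN;		/* otherwise the caller retries the page */
}
```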
+38 -15
fs/ceph/auth_x.c
··· 28 28 return (ac->want_keys & xi->have_keys) == ac->want_keys; 29 29 } 30 30 31 + static int ceph_x_encrypt_buflen(int ilen) 32 + { 33 + return sizeof(struct ceph_x_encrypt_header) + ilen + 16 + 34 + sizeof(u32); 35 + } 36 + 31 37 static int ceph_x_encrypt(struct ceph_crypto_key *secret, 32 38 void *ibuf, int ilen, void *obuf, size_t olen) 33 39 { ··· 156 150 struct timespec validity; 157 151 struct ceph_crypto_key old_key; 158 152 void *tp, *tpend; 153 + struct ceph_timespec new_validity; 154 + struct ceph_crypto_key new_session_key; 155 + struct ceph_buffer *new_ticket_blob; 156 + unsigned long new_expires, new_renew_after; 157 + u64 new_secret_id; 159 158 160 159 ceph_decode_need(&p, end, sizeof(u32) + 1, bad); 161 160 ··· 193 182 goto bad; 194 183 195 184 memcpy(&old_key, &th->session_key, sizeof(old_key)); 196 - ret = ceph_crypto_key_decode(&th->session_key, &dp, dend); 185 + ret = ceph_crypto_key_decode(&new_session_key, &dp, dend); 197 186 if (ret) 198 187 goto out; 199 188 200 - ceph_decode_copy(&dp, &th->validity, sizeof(th->validity)); 201 - ceph_decode_timespec(&validity, &th->validity); 202 - th->expires = get_seconds() + validity.tv_sec; 203 - th->renew_after = th->expires - (validity.tv_sec / 4); 204 - dout(" expires=%lu renew_after=%lu\n", th->expires, 205 - th->renew_after); 189 + ceph_decode_copy(&dp, &new_validity, sizeof(new_validity)); 190 + ceph_decode_timespec(&validity, &new_validity); 191 + new_expires = get_seconds() + validity.tv_sec; 192 + new_renew_after = new_expires - (validity.tv_sec / 4); 193 + dout(" expires=%lu renew_after=%lu\n", new_expires, 194 + new_renew_after); 206 195 207 196 /* ticket blob for service */ 208 197 ceph_decode_8_safe(&p, end, is_enc, bad); ··· 227 216 dout(" ticket blob is %d bytes\n", dlen); 228 217 ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad); 229 218 struct_v = ceph_decode_8(&tp); 230 - th->secret_id = ceph_decode_64(&tp); 231 - ret = ceph_decode_buffer(&th->ticket_blob, &tp, tpend); 219 + 
new_secret_id = ceph_decode_64(&tp); 220 + ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend); 232 221 if (ret) 233 222 goto out; 223 + 224 + /* all is well, update our ticket */ 225 + ceph_crypto_key_destroy(&th->session_key); 226 + if (th->ticket_blob) 227 + ceph_buffer_put(th->ticket_blob); 228 + th->session_key = new_session_key; 229 + th->ticket_blob = new_ticket_blob; 230 + th->validity = new_validity; 231 + th->secret_id = new_secret_id; 232 + th->expires = new_expires; 233 + th->renew_after = new_renew_after; 234 234 dout(" got ticket service %d (%s) secret_id %lld len %d\n", 235 235 type, ceph_entity_type_name(type), th->secret_id, 236 236 (int)th->ticket_blob->vec.iov_len); ··· 264 242 struct ceph_x_ticket_handler *th, 265 243 struct ceph_x_authorizer *au) 266 244 { 267 - int len; 245 + int maxlen; 268 246 struct ceph_x_authorize_a *msg_a; 269 247 struct ceph_x_authorize_b msg_b; 270 248 void *p, *end; ··· 275 253 dout("build_authorizer for %s %p\n", 276 254 ceph_entity_type_name(th->service), au); 277 255 278 - len = sizeof(*msg_a) + sizeof(msg_b) + sizeof(u32) + 279 - ticket_blob_len + 16; 280 - dout(" need len %d\n", len); 281 - if (au->buf && au->buf->alloc_len < len) { 256 + maxlen = sizeof(*msg_a) + sizeof(msg_b) + 257 + ceph_x_encrypt_buflen(ticket_blob_len); 258 + dout(" need len %d\n", maxlen); 259 + if (au->buf && au->buf->alloc_len < maxlen) { 282 260 ceph_buffer_put(au->buf); 283 261 au->buf = NULL; 284 262 } 285 263 if (!au->buf) { 286 - au->buf = ceph_buffer_new(len, GFP_NOFS); 264 + au->buf = ceph_buffer_new(maxlen, GFP_NOFS); 287 265 if (!au->buf) 288 266 return -ENOMEM; 289 267 } ··· 318 296 au->buf->vec.iov_len = p - au->buf->vec.iov_base; 319 297 dout(" built authorizer nonce %llx len %d\n", au->nonce, 320 298 (int)au->buf->vec.iov_len); 299 + BUG_ON(au->buf->vec.iov_len > maxlen); 321 300 return 0; 322 301 323 302 out_buf:
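The auth_x.c ticket fix follows a decode-then-commit pattern: every field is parsed into local temporaries first, and the live ticket is overwritten only once the whole reply has decoded cleanly, so a malformed message can no longer leave it half-updated. A toy version under assumed names (`struct ticket`, `decode_u64` are illustrative, not the Ceph API):

```c
#include <stdlib.h>

struct ticket {
	unsigned long expires;
	unsigned long secret_id;
};

/* Parse one field; NULL models a truncated buffer. */
static int decode_u64(const char *p, unsigned long *v)
{
	if (!p)
		return -1;
	*v = strtoul(p, NULL, 10);
	return 0;
}

static int update_ticket(struct ticket *th, const char *exp, const char *sid)
{
	unsigned long new_expires, new_secret_id;

	if (decode_u64(exp, &new_expires))
		return -1;
	if (decode_u64(sid, &new_secret_id))
		return -1;

	/* all is well, commit in one step (as the patch comment says) */
	th->expires = new_expires;
	th->secret_id = new_secret_id;
	return 0;
}
```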
+39 -34
fs/ceph/caps.c
··· 1407 1407 */ 1408 1408 void ceph_check_caps(struct ceph_inode_info *ci, int flags, 1409 1409 struct ceph_mds_session *session) 1410 + __releases(session->s_mutex) 1410 1411 { 1411 1412 struct ceph_client *client = ceph_inode_to_client(&ci->vfs_inode); 1412 1413 struct ceph_mds_client *mdsc = &client->mdsc; ··· 1415 1414 struct ceph_cap *cap; 1416 1415 int file_wanted, used; 1417 1416 int took_snap_rwsem = 0; /* true if mdsc->snap_rwsem held */ 1418 - int drop_session_lock = session ? 0 : 1; 1419 1417 int issued, implemented, want, retain, revoking, flushing = 0; 1420 1418 int mds = -1; /* keep track of how far we've gone through i_caps list 1421 1419 to avoid an infinite loop on retry */ ··· 1639 1639 if (queue_invalidate) 1640 1640 ceph_queue_invalidate(inode); 1641 1641 1642 - if (session && drop_session_lock) 1642 + if (session) 1643 1643 mutex_unlock(&session->s_mutex); 1644 1644 if (took_snap_rwsem) 1645 1645 up_read(&mdsc->snap_rwsem); ··· 2195 2195 * Handle a cap GRANT message from the MDS. (Note that a GRANT may 2196 2196 * actually be a revocation if it specifies a smaller cap set.) 2197 2197 * 2198 - * caller holds s_mutex. 2198 + * caller holds s_mutex and i_lock, we drop both. 
2199 + * 2199 2200 * return value: 2200 2201 * 0 - ok 2201 2202 * 1 - check_caps on auth cap only (writeback) 2202 2203 * 2 - check_caps (ack revoke) 2203 2204 */ 2204 - static int handle_cap_grant(struct inode *inode, struct ceph_mds_caps *grant, 2205 - struct ceph_mds_session *session, 2206 - struct ceph_cap *cap, 2207 - struct ceph_buffer *xattr_buf) 2205 + static void handle_cap_grant(struct inode *inode, struct ceph_mds_caps *grant, 2206 + struct ceph_mds_session *session, 2207 + struct ceph_cap *cap, 2208 + struct ceph_buffer *xattr_buf) 2208 2209 __releases(inode->i_lock) 2209 - 2210 + __releases(session->s_mutex) 2210 2211 { 2211 2212 struct ceph_inode_info *ci = ceph_inode(inode); 2212 2213 int mds = session->s_mds; ··· 2217 2216 u64 size = le64_to_cpu(grant->size); 2218 2217 u64 max_size = le64_to_cpu(grant->max_size); 2219 2218 struct timespec mtime, atime, ctime; 2220 - int reply = 0; 2219 + int check_caps = 0; 2221 2220 int wake = 0; 2222 2221 int writeback = 0; 2223 2222 int revoked_rdcache = 0; ··· 2330 2329 if ((used & ~newcaps) & CEPH_CAP_FILE_BUFFER) 2331 2330 writeback = 1; /* will delay ack */ 2332 2331 else if (dirty & ~newcaps) 2333 - reply = 1; /* initiate writeback in check_caps */ 2332 + check_caps = 1; /* initiate writeback in check_caps */ 2334 2333 else if (((used & ~newcaps) & CEPH_CAP_FILE_CACHE) == 0 || 2335 2334 revoked_rdcache) 2336 - reply = 2; /* send revoke ack in check_caps */ 2335 + check_caps = 2; /* send revoke ack in check_caps */ 2337 2336 cap->issued = newcaps; 2337 + cap->implemented |= newcaps; 2338 2338 } else if (cap->issued == newcaps) { 2339 2339 dout("caps unchanged: %s -> %s\n", 2340 2340 ceph_cap_string(cap->issued), ceph_cap_string(newcaps)); ··· 2348 2346 * pending revocation */ 2349 2347 wake = 1; 2350 2348 } 2349 + BUG_ON(cap->issued & ~cap->implemented); 2351 2350 2352 2351 spin_unlock(&inode->i_lock); 2353 2352 if (writeback) ··· 2362 2359 ceph_queue_invalidate(inode); 2363 2360 if (wake) 2364 2361 
wake_up(&ci->i_cap_wq); 2365 - return reply; 2362 + 2363 + if (check_caps == 1) 2364 + ceph_check_caps(ci, CHECK_CAPS_NODELAY|CHECK_CAPS_AUTHONLY, 2365 + session); 2366 + else if (check_caps == 2) 2367 + ceph_check_caps(ci, CHECK_CAPS_NODELAY, session); 2368 + else 2369 + mutex_unlock(&session->s_mutex); 2366 2370 } 2367 2371 2368 2372 /* ··· 2558 2548 ci->i_cap_exporting_issued = cap->issued; 2559 2549 } 2560 2550 __ceph_remove_cap(cap); 2561 - } else { 2562 - WARN_ON(!cap); 2563 2551 } 2552 + /* else, we already released it */ 2564 2553 2565 2554 spin_unlock(&inode->i_lock); 2566 2555 } ··· 2630 2621 u64 cap_id; 2631 2622 u64 size, max_size; 2632 2623 u64 tid; 2633 - int check_caps = 0; 2634 2624 void *snaptrace; 2635 - int r; 2636 2625 2637 2626 dout("handle_caps from mds%d\n", mds); 2638 2627 ··· 2675 2668 case CEPH_CAP_OP_IMPORT: 2676 2669 handle_cap_import(mdsc, inode, h, session, 2677 2670 snaptrace, le32_to_cpu(h->snap_trace_len)); 2678 - check_caps = 1; /* we may have sent a RELEASE to the old auth */ 2679 - goto done; 2671 + ceph_check_caps(ceph_inode(inode), CHECK_CAPS_NODELAY, 2672 + session); 2673 + goto done_unlocked; 2680 2674 } 2681 2675 2682 2676 /* the rest require a cap */ ··· 2694 2686 switch (op) { 2695 2687 case CEPH_CAP_OP_REVOKE: 2696 2688 case CEPH_CAP_OP_GRANT: 2697 - r = handle_cap_grant(inode, h, session, cap, msg->middle); 2698 - if (r == 1) 2699 - ceph_check_caps(ceph_inode(inode), 2700 - CHECK_CAPS_NODELAY|CHECK_CAPS_AUTHONLY, 2701 - session); 2702 - else if (r == 2) 2703 - ceph_check_caps(ceph_inode(inode), 2704 - CHECK_CAPS_NODELAY, 2705 - session); 2706 - break; 2689 + handle_cap_grant(inode, h, session, cap, msg->middle); 2690 + goto done_unlocked; 2707 2691 2708 2692 case CEPH_CAP_OP_FLUSH_ACK: 2709 2693 handle_cap_flush_ack(inode, tid, h, session, cap); ··· 2713 2713 2714 2714 done: 2715 2715 mutex_unlock(&session->s_mutex); 2716 - 2717 - if (check_caps) 2718 - ceph_check_caps(ceph_inode(inode), CHECK_CAPS_NODELAY, NULL); 2716 + 
done_unlocked: 2719 2717 if (inode) 2720 2718 iput(inode); 2721 2719 return; ··· 2836 2838 struct ceph_cap *cap; 2837 2839 struct ceph_mds_request_release *rel = *p; 2838 2840 int ret = 0; 2839 - 2840 - dout("encode_inode_release %p mds%d drop %s unless %s\n", inode, 2841 - mds, ceph_cap_string(drop), ceph_cap_string(unless)); 2841 + int used = 0; 2842 2842 2843 2843 spin_lock(&inode->i_lock); 2844 + used = __ceph_caps_used(ci); 2845 + 2846 + dout("encode_inode_release %p mds%d used %s drop %s unless %s\n", inode, 2847 + mds, ceph_cap_string(used), ceph_cap_string(drop), 2848 + ceph_cap_string(unless)); 2849 + 2850 + /* only drop unused caps */ 2851 + drop &= ~used; 2852 + 2844 2853 cap = __get_cap_for_mds(ci, mds); 2845 2854 if (cap && __cap_is_valid(cap)) { 2846 2855 if (force ||
+3 -1
fs/ceph/dir.c
··· 288 288 CEPH_MDS_OP_LSSNAP : CEPH_MDS_OP_READDIR; 289 289 290 290 /* discard old result, if any */ 291 - if (fi->last_readdir) 291 + if (fi->last_readdir) { 292 292 ceph_mdsc_put_request(fi->last_readdir); 293 + fi->last_readdir = NULL; 294 + } 293 295 294 296 /* requery frag tree, as the frag topology may have changed */ 295 297 frag = ceph_choose_frag(ceph_inode(inode), frag, NULL, NULL);
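The dir.c hunk NULLs `fi->last_readdir` immediately after dropping it, so no later path can put the same request a second time. A refcount toy showing why the clear matters (all names hypothetical):

```c
#include <stddef.h>

struct req {
	int refs;
};

static int double_put;	/* set if a refcount goes negative */

static void put_req(struct req *r)
{
	if (!r)
		return;
	if (--r->refs < 0)
		double_put = 1;	/* would be a use-after-free in the kernel */
}

static void drop_cached(struct req **cached)
{
	put_req(*cached);
	*cached = NULL;		/* the fix: clear the stale pointer */
}
```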
+16
fs/ceph/inode.c
··· 378 378 379 379 ceph_queue_caps_release(inode); 380 380 381 + /* 382 + * we may still have a snap_realm reference if there are stray 383 + * caps in i_cap_exporting_issued or i_snap_caps. 384 + */ 385 + if (ci->i_snap_realm) { 386 + struct ceph_mds_client *mdsc = 387 + &ceph_client(ci->vfs_inode.i_sb)->mdsc; 388 + struct ceph_snap_realm *realm = ci->i_snap_realm; 389 + 390 + dout(" dropping residual ref to snap realm %p\n", realm); 391 + spin_lock(&realm->inodes_with_caps_lock); 392 + list_del_init(&ci->i_snap_realm_item); 393 + spin_unlock(&realm->inodes_with_caps_lock); 394 + ceph_put_snap_realm(mdsc, realm); 395 + } 396 + 381 397 kfree(ci->i_symlink); 382 398 while ((n = rb_first(&ci->i_fragtree)) != NULL) { 383 399 frag = rb_entry(n, struct ceph_inode_frag, node);
+32 -11
fs/ceph/mds_client.c
··· 328 328 struct ceph_mds_session *s; 329 329 330 330 s = kzalloc(sizeof(*s), GFP_NOFS); 331 + if (!s) 332 + return ERR_PTR(-ENOMEM); 331 333 s->s_mdsc = mdsc; 332 334 s->s_mds = mds; 333 335 s->s_state = CEPH_MDS_SESSION_NEW; ··· 531 529 { 532 530 dout("__unregister_request %p tid %lld\n", req, req->r_tid); 533 531 rb_erase(&req->r_node, &mdsc->request_tree); 534 - ceph_mdsc_put_request(req); 532 + RB_CLEAR_NODE(&req->r_node); 535 533 536 534 if (req->r_unsafe_dir) { 537 535 struct ceph_inode_info *ci = ceph_inode(req->r_unsafe_dir); ··· 540 538 list_del_init(&req->r_unsafe_dir_item); 541 539 spin_unlock(&ci->i_unsafe_lock); 542 540 } 541 + 542 + ceph_mdsc_put_request(req); 543 543 } 544 544 545 545 /* ··· 866 862 if (time_after_eq(jiffies, session->s_cap_ttl) && 867 863 time_after_eq(session->s_cap_ttl, session->s_renew_requested)) 868 864 pr_info("mds%d caps stale\n", session->s_mds); 865 + session->s_renew_requested = jiffies; 869 866 870 867 /* do not try to renew caps until a recovering mds has reconnected 871 868 * with its clients. */ ··· 879 874 880 875 dout("send_renew_caps to mds%d (%s)\n", session->s_mds, 881 876 ceph_mds_state_name(state)); 882 - session->s_renew_requested = jiffies; 883 877 msg = create_session_msg(CEPH_SESSION_REQUEST_RENEWCAPS, 884 878 ++session->s_renew_seq); 885 879 if (IS_ERR(msg)) ··· 1570 1566 1571 1567 /* get, open session */ 1572 1568 session = __ceph_lookup_mds_session(mdsc, mds); 1573 - if (!session) 1569 + if (!session) { 1574 1570 session = register_session(mdsc, mds); 1571 + if (IS_ERR(session)) { 1572 + err = PTR_ERR(session); 1573 + goto finish; 1574 + } 1575 + } 1575 1576 dout("do_request mds%d session %p state %s\n", mds, session, 1576 1577 session_state_name(session->s_state)); 1577 1578 if (session->s_state != CEPH_MDS_SESSION_OPEN && ··· 1779 1770 dout("handle_reply %p\n", req); 1780 1771 1781 1772 /* correct session? 
*/ 1782 - if (!req->r_session && req->r_session != session) { 1773 + if (req->r_session != session) { 1783 1774 pr_err("mdsc_handle_reply got %llu on session mds%d" 1784 1775 " not mds%d\n", tid, session->s_mds, 1785 1776 req->r_session ? req->r_session->s_mds : -1); ··· 2691 2682 */ 2692 2683 static void wait_unsafe_requests(struct ceph_mds_client *mdsc, u64 want_tid) 2693 2684 { 2694 - struct ceph_mds_request *req = NULL; 2685 + struct ceph_mds_request *req = NULL, *nextreq; 2695 2686 struct rb_node *n; 2696 2687 2697 2688 mutex_lock(&mdsc->mutex); 2698 2689 dout("wait_unsafe_requests want %lld\n", want_tid); 2690 + restart: 2699 2691 req = __get_oldest_req(mdsc); 2700 2692 while (req && req->r_tid <= want_tid) { 2693 + /* find next request */ 2694 + n = rb_next(&req->r_node); 2695 + if (n) 2696 + nextreq = rb_entry(n, struct ceph_mds_request, r_node); 2697 + else 2698 + nextreq = NULL; 2701 2699 if ((req->r_op & CEPH_MDS_OP_WRITE)) { 2702 2700 /* write op */ 2703 2701 ceph_mdsc_get_request(req); 2702 + if (nextreq) 2703 + ceph_mdsc_get_request(nextreq); 2704 2704 mutex_unlock(&mdsc->mutex); 2705 2705 dout("wait_unsafe_requests wait on %llu (want %llu)\n", 2706 2706 req->r_tid, want_tid); 2707 2707 wait_for_completion(&req->r_safe_completion); 2708 2708 mutex_lock(&mdsc->mutex); 2709 - n = rb_next(&req->r_node); 2710 2709 ceph_mdsc_put_request(req); 2711 - } else { 2712 - n = rb_next(&req->r_node); 2710 + if (!nextreq) 2711 + break; /* next dne before, so we're done! */ 2712 + if (RB_EMPTY_NODE(&nextreq->r_node)) { 2713 + /* next request was removed from tree */ 2714 + ceph_mdsc_put_request(nextreq); 2715 + goto restart; 2716 + } 2717 + ceph_mdsc_put_request(nextreq); /* won't go away */ 2713 2718 } 2714 - if (!n) 2715 - break; 2716 - req = rb_entry(n, struct ceph_mds_request, r_node); 2719 + req = nextreq; 2717 2720 } 2718 2721 mutex_unlock(&mdsc->mutex); 2719 2722 dout("wait_unsafe_requests done\n");
+9 -10
fs/ceph/messenger.c
··· 366 366 } 367 367 368 368 /* 369 + * return true if this connection ever successfully opened 370 + */ 371 + bool ceph_con_opened(struct ceph_connection *con) 372 + { 373 + return con->connect_seq > 0; 374 + } 375 + 376 + /* 369 377 * generic get/put 370 378 */ 371 379 struct ceph_connection *ceph_con_get(struct ceph_connection *con) ··· 838 830 con->in_base_pos = 0; 839 831 } 840 832 841 - static void prepare_read_connect_retry(struct ceph_connection *con) 842 - { 843 - dout("prepare_read_connect_retry %p\n", con); 844 - con->in_base_pos = strlen(CEPH_BANNER) + sizeof(con->actual_peer_addr) 845 - + sizeof(con->peer_addr_for_me); 846 - } 847 - 848 833 static void prepare_read_ack(struct ceph_connection *con) 849 834 { 850 835 dout("prepare_read_ack %p\n", con); ··· 1147 1146 } 1148 1147 con->auth_retry = 1; 1149 1148 prepare_write_connect(con->msgr, con, 0); 1150 - prepare_read_connect_retry(con); 1149 + prepare_read_connect(con); 1151 1150 break; 1152 1151 1153 1152 case CEPH_MSGR_TAG_RESETSESSION: ··· 1843 1842 dout("fault on LOSSYTX channel\n"); 1844 1843 goto out; 1845 1844 } 1846 - 1847 - clear_bit(BUSY, &con->state); /* to avoid an improbable race */ 1848 1845 1849 1846 mutex_lock(&con->mutex); 1850 1847 if (test_bit(CLOSED, &con->state))
+1
fs/ceph/messenger.h
··· 223 223 struct ceph_connection *con); 224 224 extern void ceph_con_open(struct ceph_connection *con, 225 225 struct ceph_entity_addr *addr); 226 + extern bool ceph_con_opened(struct ceph_connection *con); 226 227 extern void ceph_con_close(struct ceph_connection *con); 227 228 extern void ceph_con_send(struct ceph_connection *con, struct ceph_msg *msg); 228 229 extern void ceph_con_revoke(struct ceph_connection *con, struct ceph_msg *msg);
+21 -8
fs/ceph/osd_client.c
··· 413 413 */ 414 414 static int __reset_osd(struct ceph_osd_client *osdc, struct ceph_osd *osd) 415 415 { 416 + struct ceph_osd_request *req; 416 417 int ret = 0; 417 418 418 419 dout("__reset_osd %p osd%d\n", osd, osd->o_osd); 419 420 if (list_empty(&osd->o_requests)) { 420 421 __remove_osd(osdc, osd); 422 + } else if (memcmp(&osdc->osdmap->osd_addr[osd->o_osd], 423 + &osd->o_con.peer_addr, 424 + sizeof(osd->o_con.peer_addr)) == 0 && 425 + !ceph_con_opened(&osd->o_con)) { 426 + dout(" osd addr hasn't changed and connection never opened," 427 + " letting msgr retry"); 428 + /* touch each r_stamp for handle_timeout()'s benfit */ 429 + list_for_each_entry(req, &osd->o_requests, r_osd_item) 430 + req->r_stamp = jiffies; 431 + ret = -EAGAIN; 421 432 } else { 422 433 ceph_con_close(&osd->o_con); 423 434 ceph_con_open(&osd->o_con, &osdc->osdmap->osd_addr[osd->o_osd]); ··· 644 633 reqhead->flags |= cpu_to_le32(req->r_flags); /* e.g., RETRY */ 645 634 reqhead->reassert_version = req->r_reassert_version; 646 635 647 - req->r_sent_stamp = jiffies; 636 + req->r_stamp = jiffies; 648 637 list_move_tail(&osdc->req_lru, &req->r_req_lru_item); 649 638 650 639 ceph_msg_get(req->r_request); /* send consumes a ref */ ··· 671 660 unsigned long timeout = osdc->client->mount_args->osd_timeout * HZ; 672 661 unsigned long keepalive = 673 662 osdc->client->mount_args->osd_keepalive_timeout * HZ; 674 - unsigned long last_sent = 0; 663 + unsigned long last_stamp = 0; 675 664 struct rb_node *p; 676 665 struct list_head slow_osds; 677 666 ··· 708 697 req = list_entry(osdc->req_lru.next, struct ceph_osd_request, 709 698 r_req_lru_item); 710 699 711 - if (time_before(jiffies, req->r_sent_stamp + timeout)) 700 + if (time_before(jiffies, req->r_stamp + timeout)) 712 701 break; 713 702 714 - BUG_ON(req == last_req && req->r_sent_stamp == last_sent); 703 + BUG_ON(req == last_req && req->r_stamp == last_stamp); 715 704 last_req = req; 716 - last_sent = req->r_sent_stamp; 705 + last_stamp = 
req->r_stamp; 717 706 718 707 osd = req->r_osd; 719 708 BUG_ON(!osd); ··· 729 718 */ 730 719 INIT_LIST_HEAD(&slow_osds); 731 720 list_for_each_entry(req, &osdc->req_lru, r_req_lru_item) { 732 - if (time_before(jiffies, req->r_sent_stamp + keepalive)) 721 + if (time_before(jiffies, req->r_stamp + keepalive)) 733 722 break; 734 723 735 724 osd = req->r_osd; ··· 873 862 874 863 dout("kick_requests osd%d\n", kickosd ? kickosd->o_osd : -1); 875 864 if (kickosd) { 876 - __reset_osd(osdc, kickosd); 865 + err = __reset_osd(osdc, kickosd); 866 + if (err == -EAGAIN) 867 + return 1; 877 868 } else { 878 869 for (p = rb_first(&osdc->osds); p; p = n) { 879 870 struct ceph_osd *osd = ··· 926 913 927 914 kick: 928 915 dout("kicking %p tid %llu osd%d\n", req, req->r_tid, 929 - req->r_osd->o_osd); 916 + req->r_osd ? req->r_osd->o_osd : -1); 930 917 req->r_flags |= CEPH_OSD_FLAG_RETRY; 931 918 err = __send_request(osdc, req); 932 919 if (err) {
+1 -1
fs/ceph/osd_client.h
··· 70 70 71 71 char r_oid[40]; /* object name */ 72 72 int r_oid_len; 73 - unsigned long r_sent_stamp; 73 + unsigned long r_stamp; /* send OR check time */ 74 74 bool r_resend; /* msg send failed, needs retry */ 75 75 76 76 struct ceph_file_layout r_file_layout;
+10 -7
fs/ceph/osdmap.c
··· 480 480 return NULL; 481 481 } 482 482 483 + void __decode_pool(void **p, struct ceph_pg_pool_info *pi) 484 + { 485 + ceph_decode_copy(p, &pi->v, sizeof(pi->v)); 486 + calc_pg_masks(pi); 487 + *p += le32_to_cpu(pi->v.num_snaps) * sizeof(u64); 488 + *p += le32_to_cpu(pi->v.num_removed_snap_intervals) * sizeof(u64) * 2; 489 + } 490 + 483 491 /* 484 492 * decode a full map. 485 493 */ ··· 534 526 ev, CEPH_PG_POOL_VERSION); 535 527 goto bad; 536 528 } 537 - ceph_decode_copy(p, &pi->v, sizeof(pi->v)); 529 + __decode_pool(p, pi); 538 530 __insert_pg_pool(&map->pg_pools, pi); 539 - calc_pg_masks(pi); 540 - *p += le32_to_cpu(pi->v.num_snaps) * sizeof(u64); 541 - *p += le32_to_cpu(pi->v.num_removed_snap_intervals) 542 - * sizeof(u64) * 2; 543 531 } 544 532 ceph_decode_32_safe(p, end, map->pool_max, bad); 545 533 ··· 718 714 pi->id = pool; 719 715 __insert_pg_pool(&map->pg_pools, pi); 720 716 } 721 - ceph_decode_copy(p, &pi->v, sizeof(pi->v)); 722 - calc_pg_masks(pi); 717 + __decode_pool(p, pi); 723 718 } 724 719 725 720 /* old_pool */
+4 -2
fs/ceph/snap.c
··· 314 314 because we rebuild_snap_realms() works _downward_ in 315 315 hierarchy after each update.) */ 316 316 if (realm->cached_context && 317 - realm->cached_context->seq <= realm->seq && 317 + realm->cached_context->seq == realm->seq && 318 318 (!parent || 319 - realm->cached_context->seq <= parent->cached_context->seq)) { 319 + realm->cached_context->seq >= parent->cached_context->seq)) { 320 320 dout("build_snap_context %llx %p: %p seq %lld (%d snaps)" 321 321 " (unchanged)\n", 322 322 realm->ino, realm, realm->cached_context, ··· 818 818 * queued (again) by ceph_update_snap_trace() 819 819 * below. Queue it _now_, under the old context. 820 820 */ 821 + spin_lock(&realm->inodes_with_caps_lock); 821 822 list_del_init(&ci->i_snap_realm_item); 823 + spin_unlock(&realm->inodes_with_caps_lock); 822 824 spin_unlock(&inode->i_lock); 823 825 824 826 ceph_queue_cap_snap(ci,
+3 -1
fs/ext3/ialloc.c
··· 582 582 inode->i_generation = sbi->s_next_generation++; 583 583 spin_unlock(&sbi->s_next_gen_lock); 584 584 585 - ei->i_state = EXT3_STATE_NEW; 585 + ei->i_state_flags = 0; 586 + ext3_set_inode_state(inode, EXT3_STATE_NEW); 587 + 586 588 ei->i_extra_isize = 587 589 (EXT3_INODE_SIZE(inode->i_sb) > EXT3_GOOD_OLD_INODE_SIZE) ? 588 590 sizeof(struct ext3_inode) - EXT3_GOOD_OLD_INODE_SIZE : 0;
+1 -1
fs/ext3/inode.c
··· 2811 2811 inode->i_mtime.tv_sec = (signed)le32_to_cpu(raw_inode->i_mtime); 2812 2812 inode->i_atime.tv_nsec = inode->i_ctime.tv_nsec = inode->i_mtime.tv_nsec = 0; 2813 2813 2814 - ei->i_state = 0; 2814 + ei->i_state_flags = 0; 2815 2815 ei->i_dir_start_lookup = 0; 2816 2816 ei->i_dtime = le32_to_cpu(raw_inode->i_dtime); 2817 2817 /* We now have enough fields to check if the inode was active or not.
+2 -2
fs/ext4/ialloc.c
··· 263 263 ext4_group_t f; 264 264 265 265 f = ext4_flex_group(sbi, block_group); 266 - atomic_dec(&sbi->s_flex_groups[f].free_inodes); 266 + atomic_dec(&sbi->s_flex_groups[f].used_dirs); 267 267 } 268 268 269 269 } ··· 773 773 if (sbi->s_log_groups_per_flex) { 774 774 ext4_group_t f = ext4_flex_group(sbi, group); 775 775 776 - atomic_inc(&sbi->s_flex_groups[f].free_inodes); 776 + atomic_inc(&sbi->s_flex_groups[f].used_dirs); 777 777 } 778 778 } 779 779 gdp->bg_checksum = ext4_group_desc_csum(sbi, group, gdp);
+2 -2
fs/ext4/inode.c
··· 1035 1035 sector_t lblock) 1036 1036 { 1037 1037 struct ext4_inode_info *ei = EXT4_I(inode); 1038 - int dind_mask = EXT4_ADDR_PER_BLOCK(inode->i_sb) - 1; 1038 + sector_t dind_mask = ~((sector_t)EXT4_ADDR_PER_BLOCK(inode->i_sb) - 1); 1039 1039 int blk_bits; 1040 1040 1041 1041 if (lblock < EXT4_NDIR_BLOCKS) ··· 1050 1050 } 1051 1051 ei->i_da_metadata_calc_last_lblock = lblock & dind_mask; 1052 1052 ei->i_da_metadata_calc_len = 1; 1053 - blk_bits = roundup_pow_of_two(lblock + 1); 1053 + blk_bits = order_base_2(lblock); 1054 1054 return (blk_bits / EXT4_ADDR_PER_BLOCK_BITS(inode->i_sb)) + 1; 1055 1055 } 1056 1056
+18 -11
fs/ext4/super.c
··· 68 68 static int ext4_unfreeze(struct super_block *sb); 69 69 static void ext4_write_super(struct super_block *sb); 70 70 static int ext4_freeze(struct super_block *sb); 71 + static int ext4_get_sb(struct file_system_type *fs_type, int flags, 72 + const char *dev_name, void *data, struct vfsmount *mnt); 71 73 74 + #if !defined(CONFIG_EXT3_FS) && !defined(CONFIG_EXT3_FS_MODULE) && defined(CONFIG_EXT4_USE_FOR_EXT23) 75 + static struct file_system_type ext3_fs_type = { 76 + .owner = THIS_MODULE, 77 + .name = "ext3", 78 + .get_sb = ext4_get_sb, 79 + .kill_sb = kill_block_super, 80 + .fs_flags = FS_REQUIRES_DEV, 81 + }; 82 + #define IS_EXT3_SB(sb) ((sb)->s_bdev->bd_holder == &ext3_fs_type) 83 + #else 84 + #define IS_EXT3_SB(sb) (0) 85 + #endif 72 86 73 87 ext4_fsblk_t ext4_block_bitmap(struct super_block *sb, 74 88 struct ext4_group_desc *bg) ··· 2553 2539 * enable delayed allocation by default 2554 2540 * Use -o nodelalloc to turn it off 2555 2541 */ 2556 - set_opt(sbi->s_mount_opt, DELALLOC); 2542 + if (!IS_EXT3_SB(sb)) 2543 + set_opt(sbi->s_mount_opt, DELALLOC); 2557 2544 2558 2545 if (!parse_options((char *) data, sb, &journal_devnum, 2559 2546 &journal_ioprio, NULL, 0)) ··· 4083 4068 return get_sb_bdev(fs_type, flags, dev_name, data, ext4_fill_super,mnt); 4084 4069 } 4085 4070 4086 - #if !defined(CONTIG_EXT2_FS) && !defined(CONFIG_EXT2_FS_MODULE) && defined(CONFIG_EXT4_USE_FOR_EXT23) 4071 + #if !defined(CONFIG_EXT2_FS) && !defined(CONFIG_EXT2_FS_MODULE) && defined(CONFIG_EXT4_USE_FOR_EXT23) 4087 4072 static struct file_system_type ext2_fs_type = { 4088 4073 .owner = THIS_MODULE, 4089 4074 .name = "ext2", ··· 4110 4095 static inline void unregister_as_ext2(void) { } 4111 4096 #endif 4112 4097 4113 - #if !defined(CONTIG_EXT3_FS) && !defined(CONFIG_EXT3_FS_MODULE) && defined(CONFIG_EXT4_USE_FOR_EXT23) 4114 - static struct file_system_type ext3_fs_type = { 4115 - .owner = THIS_MODULE, 4116 - .name = "ext3", 4117 - .get_sb = ext4_get_sb, 4118 - .kill_sb = 
kill_block_super, 4119 - .fs_flags = FS_REQUIRES_DEV, 4120 - }; 4121 - 4098 + #if !defined(CONFIG_EXT3_FS) && !defined(CONFIG_EXT3_FS_MODULE) && defined(CONFIG_EXT4_USE_FOR_EXT23) 4122 4099 static inline void register_as_ext3(void) 4123 4100 { 4124 4101 int err = register_filesystem(&ext3_fs_type);
+3 -3
fs/fat/namei_vfat.c
··· 309 309 { 310 310 struct fat_mount_options *opts = &MSDOS_SB(dir->i_sb)->options; 311 311 wchar_t *ip, *ext_start, *end, *name_start; 312 - unsigned char base[9], ext[4], buf[8], *p; 312 + unsigned char base[9], ext[4], buf[5], *p; 313 313 unsigned char charbuf[NLS_MAX_CHARSET_SIZE]; 314 314 int chl, chi; 315 315 int sz = 0, extlen, baselen, i, numtail_baselen, numtail2_baselen; ··· 467 467 return 0; 468 468 } 469 469 470 - i = jiffies & 0xffff; 470 + i = jiffies; 471 471 sz = (jiffies >> 16) & 0x7; 472 472 if (baselen > 2) { 473 473 baselen = numtail2_baselen; ··· 476 476 name_res[baselen + 4] = '~'; 477 477 name_res[baselen + 5] = '1' + sz; 478 478 while (1) { 479 - sprintf(buf, "%04X", i); 479 + snprintf(buf, sizeof(buf), "%04X", i & 0xffff); 480 480 memcpy(&name_res[baselen], buf, 4); 481 481 if (vfat_find_form(dir, name_res) < 0) 482 482 break;
+3 -3
fs/fscache/object.c
··· 53 53 static void fscache_object_slow_work_put_ref(struct slow_work *); 54 54 static int fscache_object_slow_work_get_ref(struct slow_work *); 55 55 static void fscache_object_slow_work_execute(struct slow_work *); 56 - #ifdef CONFIG_SLOW_WORK_PROC 56 + #ifdef CONFIG_SLOW_WORK_DEBUG 57 57 static void fscache_object_slow_work_desc(struct slow_work *, struct seq_file *); 58 58 #endif 59 59 static void fscache_initialise_object(struct fscache_object *); ··· 69 69 .get_ref = fscache_object_slow_work_get_ref, 70 70 .put_ref = fscache_object_slow_work_put_ref, 71 71 .execute = fscache_object_slow_work_execute, 72 - #ifdef CONFIG_SLOW_WORK_PROC 72 + #ifdef CONFIG_SLOW_WORK_DEBUG 73 73 .desc = fscache_object_slow_work_desc, 74 74 #endif 75 75 }; ··· 364 364 /* 365 365 * describe an object for slow-work debugging 366 366 */ 367 - #ifdef CONFIG_SLOW_WORK_PROC 367 + #ifdef CONFIG_SLOW_WORK_DEBUG 368 368 static void fscache_object_slow_work_desc(struct slow_work *work, 369 369 struct seq_file *m) 370 370 {
+2 -2
fs/fscache/operation.c
··· 500 500 /* 501 501 * describe an operation for slow-work debugging 502 502 */ 503 - #ifdef CONFIG_SLOW_WORK_PROC 503 + #ifdef CONFIG_SLOW_WORK_DEBUG 504 504 static void fscache_op_desc(struct slow_work *work, struct seq_file *m) 505 505 { 506 506 struct fscache_operation *op = ··· 517 517 .get_ref = fscache_op_get_ref, 518 518 .put_ref = fscache_op_put_ref, 519 519 .execute = fscache_op_execute, 520 - #ifdef CONFIG_SLOW_WORK_PROC 520 + #ifdef CONFIG_SLOW_WORK_DEBUG 521 521 .desc = fscache_op_desc, 522 522 #endif 523 523 };
+7 -2
fs/logfs/dev_bdev.c
··· 80 80 prefetchw(&bvec->bv_page->flags); 81 81 82 82 end_page_writeback(page); 83 + page_cache_release(page); 83 84 } while (bvec >= bio->bi_io_vec); 84 85 bio_put(bio); 85 86 if (atomic_dec_and_test(&super->s_pending_writes)) ··· 98 97 unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9); 99 98 int i; 100 99 100 + if (max_pages > BIO_MAX_PAGES) 101 + max_pages = BIO_MAX_PAGES; 101 102 bio = bio_alloc(GFP_NOFS, max_pages); 102 - BUG_ON(!bio); /* FIXME: handle this */ 103 + BUG_ON(!bio); 103 104 104 105 for (i = 0; i < nr_pages; i++) { 105 106 if (i >= max_pages) { ··· 194 191 unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9); 195 192 int i; 196 193 194 + if (max_pages > BIO_MAX_PAGES) 195 + max_pages = BIO_MAX_PAGES; 197 196 bio = bio_alloc(GFP_NOFS, max_pages); 198 - BUG_ON(!bio); /* FIXME: handle this */ 197 + BUG_ON(!bio); 199 198 200 199 for (i = 0; i < nr_pages; i++) { 201 200 if (i >= max_pages) {
+2 -2
fs/logfs/dir.c
··· 303 303 (filler_t *)logfs_readpage, NULL); 304 304 if (IS_ERR(page)) 305 305 return PTR_ERR(page); 306 - dd = kmap_atomic(page, KM_USER0); 306 + dd = kmap(page); 307 307 BUG_ON(dd->namelen == 0); 308 308 309 309 full = filldir(buf, (char *)dd->name, be16_to_cpu(dd->namelen), 310 310 pos, be64_to_cpu(dd->ino), dd->type); 311 - kunmap_atomic(dd, KM_USER0); 311 + kunmap(page); 312 312 page_cache_release(page); 313 313 if (full) 314 314 break;
+7
fs/logfs/journal.c
··· 800 800 { 801 801 struct logfs_super *super = logfs_super(sb); 802 802 struct logfs_area *area = super->s_journal_area; 803 + struct btree_head32 *head = &super->s_reserved_segments; 803 804 u32 segno, ec; 804 805 int i, err; 805 806 ··· 808 807 /* Drop old segments */ 809 808 journal_for_each(i) 810 809 if (super->s_journal_seg[i]) { 810 + btree_remove32(head, super->s_journal_seg[i]); 811 811 logfs_set_segment_unreserved(sb, 812 812 super->s_journal_seg[i], 813 813 super->s_journal_ec[i]); ··· 821 819 super->s_journal_seg[i] = segno; 822 820 super->s_journal_ec[i] = ec; 823 821 logfs_set_segment_reserved(sb, segno); 822 + err = btree_insert32(head, segno, (void *)1, GFP_KERNEL); 823 + BUG_ON(err); /* mempool should prevent this */ 824 + err = logfs_erase_segment(sb, segno, 1); 825 + BUG_ON(err); /* FIXME: remount-ro would be nicer */ 824 826 } 825 827 /* Manually move journal_area */ 828 + freeseg(sb, area->a_segno); 826 829 area->a_segno = super->s_journal_seg[0]; 827 830 area->a_is_open = 0; 828 831 area->a_used_bytes = 0;
+1
fs/logfs/logfs.h
··· 587 587 int logfs_init_mapping(struct super_block *sb); 588 588 void logfs_sync_area(struct logfs_area *area); 589 589 void logfs_sync_segments(struct super_block *sb); 590 + void freeseg(struct super_block *sb, u32 segno); 590 591 591 592 /* area handling */ 592 593 int logfs_init_areas(struct super_block *sb);
+12 -1
fs/logfs/readwrite.c
··· 1594 1594 return ret; 1595 1595 } 1596 1596 1597 - /* Rewrite cannot mark the inode dirty but has to write it immediatly. */ 1598 1597 int logfs_rewrite_block(struct inode *inode, u64 bix, u64 ofs, 1599 1598 gc_level_t gc_level, long flags) 1600 1599 { ··· 1610 1611 if (level != 0) 1611 1612 alloc_indirect_block(inode, page, 0); 1612 1613 err = logfs_write_buf(inode, page, flags); 1614 + if (!err && shrink_level(gc_level) == 0) { 1615 + /* Rewrite cannot mark the inode dirty but has to 1616 + * write it immediately. 1617 + * Q: Can't we just create an alias for the inode 1618 + * instead? And if not, why not? 1619 + */ 1620 + if (inode->i_ino == LOGFS_INO_MASTER) 1621 + logfs_write_anchor(inode->i_sb); 1622 + else { 1623 + err = __logfs_write_inode(inode, flags); 1624 + } 1625 + } 1613 1626 } 1614 1627 logfs_put_write_page(page); 1615 1628 return err;
+31 -23
fs/logfs/segment.c
··· 93 93 } while (len); 94 94 } 95 95 96 - /* 97 - * bdev_writeseg will write full pages. Memset the tail to prevent data leaks. 98 - */ 99 - static void pad_wbuf(struct logfs_area *area, int final) 96 + static void pad_partial_page(struct logfs_area *area) 100 97 { 101 98 struct super_block *sb = area->a_sb; 102 - struct logfs_super *super = logfs_super(sb); 103 99 struct page *page; 104 100 u64 ofs = dev_ofs(sb, area->a_segno, area->a_used_bytes); 105 101 pgoff_t index = ofs >> PAGE_SHIFT; 106 102 long offset = ofs & (PAGE_SIZE-1); 107 103 u32 len = PAGE_SIZE - offset; 108 104 109 - if (len == PAGE_SIZE) { 110 - /* The math in this function can surely use some love */ 111 - len = 0; 112 - } 113 - if (len) { 114 - BUG_ON(area->a_used_bytes >= super->s_segsize); 115 - 116 - page = get_mapping_page(area->a_sb, index, 0); 105 + if (len % PAGE_SIZE) { 106 + page = get_mapping_page(sb, index, 0); 117 107 BUG_ON(!page); /* FIXME: reserve a pool */ 118 108 memset(page_address(page) + offset, 0xff, len); 119 109 SetPagePrivate(page); 120 110 page_cache_release(page); 121 111 } 112 + } 122 113 123 - if (!final) 124 - return; 114 + static void pad_full_pages(struct logfs_area *area) 115 + { 116 + struct super_block *sb = area->a_sb; 117 + struct logfs_super *super = logfs_super(sb); 118 + u64 ofs = dev_ofs(sb, area->a_segno, area->a_used_bytes); 119 + u32 len = super->s_segsize - area->a_used_bytes; 120 + pgoff_t index = PAGE_CACHE_ALIGN(ofs) >> PAGE_CACHE_SHIFT; 121 + pgoff_t no_indizes = len >> PAGE_CACHE_SHIFT; 122 + struct page *page; 125 123 126 - area->a_used_bytes += len; 127 - for ( ; area->a_used_bytes < super->s_segsize; 128 - area->a_used_bytes += PAGE_SIZE) { 129 - /* Memset another page */ 130 - index++; 131 - page = get_mapping_page(area->a_sb, index, 0); 124 + while (no_indizes) { 125 + page = get_mapping_page(sb, index, 0); 132 126 BUG_ON(!page); /* FIXME: reserve a pool */ 133 - memset(page_address(page), 0xff, PAGE_SIZE); 127 + SetPageUptodate(page); 128 
+ memset(page_address(page), 0xff, PAGE_CACHE_SIZE); 134 129 SetPagePrivate(page); 135 130 page_cache_release(page); 131 + index++; 132 + no_indizes--; 136 133 } 134 + } 135 + 136 + /* 137 + * bdev_writeseg will write full pages. Memset the tail to prevent data leaks. 138 + * Also make sure we allocate (and memset) all pages for final writeout. 139 + */ 140 + static void pad_wbuf(struct logfs_area *area, int final) 141 + { 142 + pad_partial_page(area); 143 + if (final) 144 + pad_full_pages(area); 137 145 } 138 146 139 147 /* ··· 691 683 return 0; 692 684 } 693 685 694 - static void freeseg(struct super_block *sb, u32 segno) 686 + void freeseg(struct super_block *sb, u32 segno) 695 687 { 696 688 struct logfs_super *super = logfs_super(sb); 697 689 struct address_space *mapping = super->s_mapping_inode->i_mapping;
+7 -8
fs/logfs/super.c
··· 277 277 } 278 278 if (valid0 && valid1 && ds_cmp(ds0, ds1)) { 279 279 printk(KERN_INFO"Superblocks don't match - fixing.\n"); 280 - return write_one_sb(sb, super->s_devops->find_last_sb); 280 + return logfs_write_sb(sb); 281 281 } 282 282 /* If neither is valid now, something's wrong. Didn't we properly 283 283 * check them before?!? */ ··· 289 289 { 290 290 int err; 291 291 292 + err = logfs_open_segfile(sb); 293 + if (err) 294 + return err; 295 + 292 296 /* Repair any broken superblock copies */ 293 297 err = logfs_recover_sb(sb); 294 298 if (err) ··· 300 296 301 297 /* Check areas for trailing unaccounted data */ 302 298 err = logfs_check_areas(sb); 303 - if (err) 304 - return err; 305 - 306 - err = logfs_open_segfile(sb); 307 299 if (err) 308 300 return err; 309 301 ··· 328 328 329 329 sb->s_root = d_alloc_root(rootdir); 330 330 if (!sb->s_root) 331 - goto fail; 331 + goto fail2; 332 332 333 333 super->s_erase_page = alloc_pages(GFP_KERNEL, 0); 334 334 if (!super->s_erase_page) ··· 572 572 return 0; 573 573 574 574 err1: 575 - up_write(&sb->s_umount); 576 - deactivate_super(sb); 575 + deactivate_locked_super(sb); 577 576 return err; 578 577 err0: 579 578 kfree(super);
+10 -8
fs/namei.c
··· 1610 1610 1611 1611 static struct file *do_last(struct nameidata *nd, struct path *path, 1612 1612 int open_flag, int acc_mode, 1613 - int mode, const char *pathname, 1614 - int *want_dir) 1613 + int mode, const char *pathname) 1615 1614 { 1616 1615 struct dentry *dir = nd->path.dentry; 1617 1616 struct file *filp; ··· 1641 1642 if (nd->last.name[nd->last.len]) { 1642 1643 if (open_flag & O_CREAT) 1643 1644 goto exit; 1644 - *want_dir = 1; 1645 + nd->flags |= LOOKUP_DIRECTORY; 1645 1646 } 1646 1647 1647 1648 /* just plain open? */ ··· 1655 1656 if (path->dentry->d_inode->i_op->follow_link) 1656 1657 return NULL; 1657 1658 error = -ENOTDIR; 1658 - if (*want_dir && !path->dentry->d_inode->i_op->lookup) 1659 - goto exit_dput; 1659 + if (nd->flags & LOOKUP_DIRECTORY) { 1660 + if (!path->dentry->d_inode->i_op->lookup) 1661 + goto exit_dput; 1662 + } 1660 1663 path_to_nameidata(path, nd); 1661 1664 audit_inode(pathname, nd->path.dentry); 1662 1665 goto ok; ··· 1767 1766 int count = 0; 1768 1767 int flag = open_to_namei_flags(open_flag); 1769 1768 int force_reval = 0; 1770 - int want_dir = open_flag & O_DIRECTORY; 1771 1769 1772 1770 if (!(open_flag & O_CREAT)) 1773 1771 mode = 0; ··· 1828 1828 if (open_flag & O_EXCL) 1829 1829 nd.flags |= LOOKUP_EXCL; 1830 1830 } 1831 - filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname, &want_dir); 1831 + if (open_flag & O_DIRECTORY) 1832 + nd.flags |= LOOKUP_DIRECTORY; 1833 + filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname); 1832 1834 while (unlikely(!filp)) { /* trailing symlink */ 1833 1835 struct path holder; 1834 1836 struct inode *inode = path.dentry->d_inode; ··· 1868 1866 } 1869 1867 holder = path; 1870 1868 nd.flags &= ~LOOKUP_PARENT; 1871 - filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname, &want_dir); 1869 + filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname); 1872 1870 if (inode->i_op->put_link) 1873 1871 inode->i_op->put_link(holder.dentry, &nd, cookie); 1874 1872 
path_put(&holder);
+4 -4
fs/nilfs2/segbuf.c
··· 323 323 int nilfs_wait_on_logs(struct list_head *logs) 324 324 { 325 325 struct nilfs_segment_buffer *segbuf; 326 - int err; 326 + int err, ret = 0; 327 327 328 328 list_for_each_entry(segbuf, logs, sb_list) { 329 329 err = nilfs_segbuf_wait(segbuf); 330 - if (err) 331 - return err; 330 + if (err && !ret) 331 + ret = err; 332 332 } 333 - return 0; 333 + return ret; 334 334 } 335 335 336 336 /*
+7 -8
fs/nilfs2/segment.c
··· 1510 1510 if (mode != SC_LSEG_SR || sci->sc_stage.scnt < NILFS_ST_CPFILE) 1511 1511 break; 1512 1512 1513 + nilfs_clear_logs(&sci->sc_segbufs); 1514 + 1515 + err = nilfs_segctor_extend_segments(sci, nilfs, nadd); 1516 + if (unlikely(err)) 1517 + return err; 1518 + 1513 1519 if (sci->sc_stage.flags & NILFS_CF_SUFREED) { 1514 1520 err = nilfs_sufile_cancel_freev(nilfs->ns_sufile, 1515 1521 sci->sc_freesegs, ··· 1523 1517 NULL); 1524 1518 WARN_ON(err); /* do not happen */ 1525 1519 } 1526 - nilfs_clear_logs(&sci->sc_segbufs); 1527 - 1528 - err = nilfs_segctor_extend_segments(sci, nilfs, nadd); 1529 - if (unlikely(err)) 1530 - return err; 1531 - 1532 1520 nadd = min_t(int, nadd << 1, SC_MAX_SEGDELTA); 1533 1521 sci->sc_stage = prev_stage; 1534 1522 } ··· 1897 1897 1898 1898 list_splice_tail_init(&sci->sc_write_logs, &logs); 1899 1899 ret = nilfs_wait_on_logs(&logs); 1900 - if (ret) 1901 - nilfs_abort_logs(&logs, NULL, sci->sc_super_root, ret); 1900 + nilfs_abort_logs(&logs, NULL, sci->sc_super_root, ret ? : err); 1902 1901 1903 1902 list_splice_tail_init(&sci->sc_segbufs, &logs); 1904 1903 nilfs_cancel_segusage(&logs, nilfs->ns_sufile);
+72 -5
fs/ocfs2/acl.c
··· 30 30 #include "alloc.h" 31 31 #include "dlmglue.h" 32 32 #include "file.h" 33 + #include "inode.h" 34 + #include "journal.h" 33 35 #include "ocfs2_fs.h" 34 36 35 37 #include "xattr.h" ··· 168 166 } 169 167 170 168 /* 169 + * Helper function to set i_mode in memory and disk. Some call paths 170 + * will not have di_bh or a journal handle to pass, in which case it 171 + * will create it's own. 172 + */ 173 + static int ocfs2_acl_set_mode(struct inode *inode, struct buffer_head *di_bh, 174 + handle_t *handle, umode_t new_mode) 175 + { 176 + int ret, commit_handle = 0; 177 + struct ocfs2_dinode *di; 178 + 179 + if (di_bh == NULL) { 180 + ret = ocfs2_read_inode_block(inode, &di_bh); 181 + if (ret) { 182 + mlog_errno(ret); 183 + goto out; 184 + } 185 + } else 186 + get_bh(di_bh); 187 + 188 + if (handle == NULL) { 189 + handle = ocfs2_start_trans(OCFS2_SB(inode->i_sb), 190 + OCFS2_INODE_UPDATE_CREDITS); 191 + if (IS_ERR(handle)) { 192 + ret = PTR_ERR(handle); 193 + mlog_errno(ret); 194 + goto out_brelse; 195 + } 196 + 197 + commit_handle = 1; 198 + } 199 + 200 + di = (struct ocfs2_dinode *)di_bh->b_data; 201 + ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), di_bh, 202 + OCFS2_JOURNAL_ACCESS_WRITE); 203 + if (ret) { 204 + mlog_errno(ret); 205 + goto out_commit; 206 + } 207 + 208 + inode->i_mode = new_mode; 209 + di->i_mode = cpu_to_le16(inode->i_mode); 210 + 211 + ocfs2_journal_dirty(handle, di_bh); 212 + 213 + out_commit: 214 + if (commit_handle) 215 + ocfs2_commit_trans(OCFS2_SB(inode->i_sb), handle); 216 + out_brelse: 217 + brelse(di_bh); 218 + out: 219 + return ret; 220 + } 221 + 222 + /* 171 223 * Set the access or default ACL of an inode. 
172 224 */ 173 225 static int ocfs2_set_acl(handle_t *handle, ··· 249 193 if (ret < 0) 250 194 return ret; 251 195 else { 252 - inode->i_mode = mode; 253 196 if (ret == 0) 254 197 acl = NULL; 198 + 199 + ret = ocfs2_acl_set_mode(inode, di_bh, 200 + handle, mode); 201 + if (ret) 202 + return ret; 203 + 255 204 } 256 205 } 257 206 break; ··· 344 283 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 345 284 struct posix_acl *acl = NULL; 346 285 int ret = 0; 286 + mode_t mode; 347 287 348 288 if (!S_ISLNK(inode->i_mode)) { 349 289 if (osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) { ··· 353 291 if (IS_ERR(acl)) 354 292 return PTR_ERR(acl); 355 293 } 356 - if (!acl) 357 - inode->i_mode &= ~current_umask(); 294 + if (!acl) { 295 + mode = inode->i_mode & ~current_umask(); 296 + ret = ocfs2_acl_set_mode(inode, di_bh, handle, mode); 297 + if (ret) { 298 + mlog_errno(ret); 299 + goto cleanup; 300 + } 301 + } 358 302 } 359 303 if ((osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) && acl) { 360 304 struct posix_acl *clone; 361 - mode_t mode; 362 305 363 306 if (S_ISDIR(inode->i_mode)) { 364 307 ret = ocfs2_set_acl(handle, inode, di_bh, ··· 380 313 mode = inode->i_mode; 381 314 ret = posix_acl_create_masq(clone, &mode); 382 315 if (ret >= 0) { 383 - inode->i_mode = mode; 316 + ret = ocfs2_acl_set_mode(inode, di_bh, handle, mode); 384 317 if (ret > 0) { 385 318 ret = ocfs2_set_acl(handle, inode, 386 319 di_bh, ACL_TYPE_ACCESS,
+1 -3
fs/ocfs2/dlm/dlmmaster.c
··· 1875 1875 ok: 1876 1876 spin_unlock(&res->spinlock); 1877 1877 } 1878 - spin_unlock(&dlm->spinlock); 1879 1878 1880 1879 // mlog(0, "woo! got an assert_master from node %u!\n", 1881 1880 // assert->node_idx); ··· 1925 1926 /* master is known, detach if not already detached. 1926 1927 * ensures that only one assert_master call will happen 1927 1928 * on this mle. */ 1928 - spin_lock(&dlm->spinlock); 1929 1929 spin_lock(&dlm->master_lock); 1930 1930 1931 1931 rr = atomic_read(&mle->mle_refs.refcount); ··· 1957 1959 __dlm_put_mle(mle); 1958 1960 } 1959 1961 spin_unlock(&dlm->master_lock); 1960 - spin_unlock(&dlm->spinlock); 1961 1962 } else if (res) { 1962 1963 if (res->owner != assert->node_idx) { 1963 1964 mlog(0, "assert_master from %u, but current " ··· 1964 1967 res->owner, namelen, name); 1965 1968 } 1966 1969 } 1970 + spin_unlock(&dlm->spinlock); 1967 1971 1968 1972 done: 1969 1973 ret = 0;
+15
fs/ocfs2/inode.c
··· 891 891 /* Do some basic inode verification... */ 892 892 di = (struct ocfs2_dinode *) di_bh->b_data; 893 893 if (!(di->i_flags & cpu_to_le32(OCFS2_ORPHANED_FL))) { 894 + /* 895 + * Inodes in the orphan dir must have ORPHANED_FL. The only 896 + * inodes that come back out of the orphan dir are reflink 897 + * targets. A reflink target may be moved out of the orphan 898 + * dir between the time we scan the directory and the time we 899 + * process it. This would lead to HAS_REFCOUNT_FL being set but 900 + * ORPHANED_FL not. 901 + */ 902 + if (di->i_dyn_features & cpu_to_le16(OCFS2_HAS_REFCOUNT_FL)) { 903 + mlog(0, "Reflinked inode %llu is no longer orphaned. " 904 + "it shouldn't be deleted\n", 905 + (unsigned long long)oi->ip_blkno); 906 + goto bail; 907 + } 908 + 894 909 /* for lack of a better error? */ 895 910 status = -EEXIST; 896 911 mlog(ML_ERROR,
+6 -4
fs/ocfs2/localalloc.c
··· 872 872 (unsigned long long)la_start_blk, 873 873 (unsigned long long)blkno); 874 874 875 - status = ocfs2_free_clusters(handle, main_bm_inode, 876 - main_bm_bh, blkno, count); 875 + status = ocfs2_release_clusters(handle, 876 + main_bm_inode, 877 + main_bm_bh, blkno, 878 + count); 877 879 if (status < 0) { 878 880 mlog_errno(status); 879 881 goto bail; ··· 986 984 } 987 985 988 986 retry_enospc: 989 - (*ac)->ac_bits_wanted = osb->local_alloc_bits; 990 - 987 + (*ac)->ac_bits_wanted = osb->local_alloc_default_bits; 991 988 status = ocfs2_reserve_cluster_bitmap_bits(osb, *ac); 992 989 if (status == -ENOSPC) { 993 990 if (ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_ENOSPC) == ··· 1062 1061 OCFS2_LA_DISABLED) 1063 1062 goto bail; 1064 1063 1064 + ac->ac_bits_wanted = osb->local_alloc_default_bits; 1065 1065 status = ocfs2_claim_clusters(osb, handle, ac, 1066 1066 osb->local_alloc_bits, 1067 1067 &cluster_off,
+1 -1
fs/ocfs2/locks.c
··· 133 133 134 134 if (!(fl->fl_flags & FL_POSIX)) 135 135 return -ENOLCK; 136 - if (__mandatory_lock(inode)) 136 + if (__mandatory_lock(inode) && fl->fl_type != F_UNLCK) 137 137 return -ENOLCK; 138 138 139 139 return ocfs2_plock(osb->cconn, OCFS2_I(inode)->ip_blkno, file, cmd, fl);
+23 -5
fs/ocfs2/namei.c
··· 84 84 static int ocfs2_orphan_add(struct ocfs2_super *osb, 85 85 handle_t *handle, 86 86 struct inode *inode, 87 - struct ocfs2_dinode *fe, 87 + struct buffer_head *fe_bh, 88 88 char *name, 89 89 struct ocfs2_dir_lookup_result *lookup, 90 90 struct inode *orphan_dir_inode); ··· 879 879 fe = (struct ocfs2_dinode *) fe_bh->b_data; 880 880 881 881 if (inode_is_unlinkable(inode)) { 882 - status = ocfs2_orphan_add(osb, handle, inode, fe, orphan_name, 882 + status = ocfs2_orphan_add(osb, handle, inode, fe_bh, orphan_name, 883 883 &orphan_insert, orphan_dir); 884 884 if (status < 0) { 885 885 mlog_errno(status); ··· 1300 1300 if (S_ISDIR(new_inode->i_mode) || 1301 1301 (ocfs2_read_links_count(newfe) == 1)) { 1302 1302 status = ocfs2_orphan_add(osb, handle, new_inode, 1303 - newfe, orphan_name, 1303 + newfe_bh, orphan_name, 1304 1304 &orphan_insert, orphan_dir); 1305 1305 if (status < 0) { 1306 1306 mlog_errno(status); ··· 1911 1911 static int ocfs2_orphan_add(struct ocfs2_super *osb, 1912 1912 handle_t *handle, 1913 1913 struct inode *inode, 1914 - struct ocfs2_dinode *fe, 1914 + struct buffer_head *fe_bh, 1915 1915 char *name, 1916 1916 struct ocfs2_dir_lookup_result *lookup, 1917 1917 struct inode *orphan_dir_inode) ··· 1919 1919 struct buffer_head *orphan_dir_bh = NULL; 1920 1920 int status = 0; 1921 1921 struct ocfs2_dinode *orphan_fe; 1922 + struct ocfs2_dinode *fe = (struct ocfs2_dinode *) fe_bh->b_data; 1922 1923 1923 1924 mlog_entry("(inode->i_ino = %lu)\n", inode->i_ino); 1924 1925 ··· 1960 1959 goto leave; 1961 1960 } 1962 1961 1962 + /* 1963 + * We're going to journal the change of i_flags and i_orphaned_slot. 1964 + * It's safe anyway, though some callers may duplicate the journaling. 1965 + * Journaling within the func just make the logic look more 1966 + * straightforward. 
1967 + */ 1968 + status = ocfs2_journal_access_di(handle, 1969 + INODE_CACHE(inode), 1970 + fe_bh, 1971 + OCFS2_JOURNAL_ACCESS_WRITE); 1972 + if (status < 0) { 1973 + mlog_errno(status); 1974 + goto leave; 1975 + } 1976 + 1963 1977 le32_add_cpu(&fe->i_flags, OCFS2_ORPHANED_FL); 1964 1978 1965 1979 /* Record which orphan dir our inode now resides 1966 1980 * in. delete_inode will use this to determine which orphan 1967 1981 * dir to lock. */ 1968 1982 fe->i_orphaned_slot = cpu_to_le16(osb->slot_num); 1983 + 1984 + ocfs2_journal_dirty(handle, fe_bh); 1969 1985 1970 1986 mlog(0, "Inode %llu orphaned in slot %d\n", 1971 1987 (unsigned long long)OCFS2_I(inode)->ip_blkno, osb->slot_num); ··· 2141 2123 } 2142 2124 2143 2125 di = (struct ocfs2_dinode *)new_di_bh->b_data; 2144 - status = ocfs2_orphan_add(osb, handle, inode, di, orphan_name, 2126 + status = ocfs2_orphan_add(osb, handle, inode, new_di_bh, orphan_name, 2145 2127 &orphan_insert, orphan_dir); 2146 2128 if (status < 0) { 2147 2129 mlog_errno(status);
+12 -2
fs/ocfs2/ocfs2.h
··· 763 763 return megs << (20 - OCFS2_SB(sb)->s_clustersize_bits); 764 764 } 765 765 766 - #define ocfs2_set_bit ext2_set_bit 767 - #define ocfs2_clear_bit ext2_clear_bit 766 + static inline void _ocfs2_set_bit(unsigned int bit, unsigned long *bitmap) 767 + { 768 + ext2_set_bit(bit, bitmap); 769 + } 770 + #define ocfs2_set_bit(bit, addr) _ocfs2_set_bit((bit), (unsigned long *)(addr)) 771 + 772 + static inline void _ocfs2_clear_bit(unsigned int bit, unsigned long *bitmap) 773 + { 774 + ext2_clear_bit(bit, bitmap); 775 + } 776 + #define ocfs2_clear_bit(bit, addr) _ocfs2_clear_bit((bit), (unsigned long *)(addr)) 777 + 768 778 #define ocfs2_test_bit ext2_test_bit 769 779 #define ocfs2_find_next_zero_bit ext2_find_next_zero_bit 770 780 #define ocfs2_find_next_bit ext2_find_next_bit
+1
fs/ocfs2/refcounttree.c
··· 4075 4075 OCFS2_I(t_inode)->ip_dyn_features = OCFS2_I(s_inode)->ip_dyn_features; 4076 4076 spin_unlock(&OCFS2_I(t_inode)->ip_lock); 4077 4077 i_size_write(t_inode, size); 4078 + t_inode->i_blocks = s_inode->i_blocks; 4078 4079 4079 4080 di->i_xattr_inline_size = s_di->i_xattr_inline_size; 4080 4081 di->i_clusters = s_di->i_clusters;
+82 -47
fs/ocfs2/suballoc.c
··· 95 95 struct buffer_head *group_bh, 96 96 unsigned int bit_off, 97 97 unsigned int num_bits); 98 - static inline int ocfs2_block_group_clear_bits(handle_t *handle, 99 - struct inode *alloc_inode, 100 - struct ocfs2_group_desc *bg, 101 - struct buffer_head *group_bh, 102 - unsigned int bit_off, 103 - unsigned int num_bits); 104 - 105 98 static int ocfs2_relink_block_group(handle_t *handle, 106 99 struct inode *alloc_inode, 107 100 struct buffer_head *fe_bh, ··· 145 152 146 153 #define do_error(fmt, ...) \ 147 154 do{ \ 148 - if (clean_error) \ 155 + if (resize) \ 149 156 mlog(ML_ERROR, fmt "\n", ##__VA_ARGS__); \ 150 157 else \ 151 158 ocfs2_error(sb, fmt, ##__VA_ARGS__); \ ··· 153 160 154 161 static int ocfs2_validate_gd_self(struct super_block *sb, 155 162 struct buffer_head *bh, 156 - int clean_error) 163 + int resize) 157 164 { 158 165 struct ocfs2_group_desc *gd = (struct ocfs2_group_desc *)bh->b_data; 159 166 ··· 204 211 static int ocfs2_validate_gd_parent(struct super_block *sb, 205 212 struct ocfs2_dinode *di, 206 213 struct buffer_head *bh, 207 - int clean_error) 214 + int resize) 208 215 { 209 216 unsigned int max_bits; 210 217 struct ocfs2_group_desc *gd = (struct ocfs2_group_desc *)bh->b_data; ··· 226 233 return -EINVAL; 227 234 } 228 235 229 - if (le16_to_cpu(gd->bg_chain) >= 230 - le16_to_cpu(di->id2.i_chain.cl_next_free_rec)) { 236 + /* In resize, we may meet the case bg_chain == cl_next_free_rec. 
*/ 237 + if ((le16_to_cpu(gd->bg_chain) > 238 + le16_to_cpu(di->id2.i_chain.cl_next_free_rec)) || 239 + ((le16_to_cpu(gd->bg_chain) == 240 + le16_to_cpu(di->id2.i_chain.cl_next_free_rec)) && !resize)) { 231 241 do_error("Group descriptor #%llu has bad chain %u", 232 242 (unsigned long long)bh->b_blocknr, 233 243 le16_to_cpu(gd->bg_chain)); ··· 1971 1975 bits_wanted, cluster_start, num_clusters); 1972 1976 } 1973 1977 1974 - static inline int ocfs2_block_group_clear_bits(handle_t *handle, 1975 - struct inode *alloc_inode, 1976 - struct ocfs2_group_desc *bg, 1977 - struct buffer_head *group_bh, 1978 - unsigned int bit_off, 1979 - unsigned int num_bits) 1978 + static int ocfs2_block_group_clear_bits(handle_t *handle, 1979 + struct inode *alloc_inode, 1980 + struct ocfs2_group_desc *bg, 1981 + struct buffer_head *group_bh, 1982 + unsigned int bit_off, 1983 + unsigned int num_bits, 1984 + void (*undo_fn)(unsigned int bit, 1985 + unsigned long *bmap)) 1980 1986 { 1981 1987 int status; 1982 1988 unsigned int tmp; 1983 - int journal_type = OCFS2_JOURNAL_ACCESS_WRITE; 1984 1989 struct ocfs2_group_desc *undo_bg = NULL; 1985 - int cluster_bitmap = 0; 1986 1990 1987 1991 mlog_entry_void(); 1988 1992 ··· 1992 1996 1993 1997 mlog(0, "off = %u, num = %u\n", bit_off, num_bits); 1994 1998 1995 - if (ocfs2_is_cluster_bitmap(alloc_inode)) 1996 - journal_type = OCFS2_JOURNAL_ACCESS_UNDO; 1997 - 1999 + BUG_ON(undo_fn && !ocfs2_is_cluster_bitmap(alloc_inode)); 1998 2000 status = ocfs2_journal_access_gd(handle, INODE_CACHE(alloc_inode), 1999 - group_bh, journal_type); 2001 + group_bh, 2002 + undo_fn ? 
2003 + OCFS2_JOURNAL_ACCESS_UNDO : 2004 + OCFS2_JOURNAL_ACCESS_WRITE); 2000 2005 if (status < 0) { 2001 2006 mlog_errno(status); 2002 2007 goto bail; 2003 2008 } 2004 2009 2005 - if (ocfs2_is_cluster_bitmap(alloc_inode)) 2006 - cluster_bitmap = 1; 2007 - 2008 - if (cluster_bitmap) { 2010 + if (undo_fn) { 2009 2011 jbd_lock_bh_state(group_bh); 2010 2012 undo_bg = (struct ocfs2_group_desc *) 2011 2013 bh2jh(group_bh)->b_committed_data; ··· 2014 2020 while(tmp--) { 2015 2021 ocfs2_clear_bit((bit_off + tmp), 2016 2022 (unsigned long *) bg->bg_bitmap); 2017 - if (cluster_bitmap) 2018 - ocfs2_set_bit(bit_off + tmp, 2019 - (unsigned long *) undo_bg->bg_bitmap); 2023 + if (undo_fn) 2024 + undo_fn(bit_off + tmp, 2025 + (unsigned long *) undo_bg->bg_bitmap); 2020 2026 } 2021 2027 le16_add_cpu(&bg->bg_free_bits_count, num_bits); 2022 2028 2023 - if (cluster_bitmap) 2029 + if (undo_fn) 2024 2030 jbd_unlock_bh_state(group_bh); 2025 2031 2026 2032 status = ocfs2_journal_dirty(handle, group_bh); ··· 2033 2039 /* 2034 2040 * expects the suballoc inode to already be locked. 
2035 2041 */ 2036 - int ocfs2_free_suballoc_bits(handle_t *handle, 2037 - struct inode *alloc_inode, 2038 - struct buffer_head *alloc_bh, 2039 - unsigned int start_bit, 2040 - u64 bg_blkno, 2041 - unsigned int count) 2042 + static int _ocfs2_free_suballoc_bits(handle_t *handle, 2043 + struct inode *alloc_inode, 2044 + struct buffer_head *alloc_bh, 2045 + unsigned int start_bit, 2046 + u64 bg_blkno, 2047 + unsigned int count, 2048 + void (*undo_fn)(unsigned int bit, 2049 + unsigned long *bitmap)) 2042 2050 { 2043 2051 int status = 0; 2044 2052 u32 tmp_used; ··· 2075 2079 2076 2080 status = ocfs2_block_group_clear_bits(handle, alloc_inode, 2077 2081 group, group_bh, 2078 - start_bit, count); 2082 + start_bit, count, undo_fn); 2079 2083 if (status < 0) { 2080 2084 mlog_errno(status); 2081 2085 goto bail; ··· 2106 2110 return status; 2107 2111 } 2108 2112 2113 + int ocfs2_free_suballoc_bits(handle_t *handle, 2114 + struct inode *alloc_inode, 2115 + struct buffer_head *alloc_bh, 2116 + unsigned int start_bit, 2117 + u64 bg_blkno, 2118 + unsigned int count) 2119 + { 2120 + return _ocfs2_free_suballoc_bits(handle, alloc_inode, alloc_bh, 2121 + start_bit, bg_blkno, count, NULL); 2122 + } 2123 + 2109 2124 int ocfs2_free_dinode(handle_t *handle, 2110 2125 struct inode *inode_alloc_inode, 2111 2126 struct buffer_head *inode_alloc_bh, ··· 2130 2123 inode_alloc_bh, bit, bg_blkno, 1); 2131 2124 } 2132 2125 2133 - int ocfs2_free_clusters(handle_t *handle, 2134 - struct inode *bitmap_inode, 2135 - struct buffer_head *bitmap_bh, 2136 - u64 start_blk, 2137 - unsigned int num_clusters) 2126 + static int _ocfs2_free_clusters(handle_t *handle, 2127 + struct inode *bitmap_inode, 2128 + struct buffer_head *bitmap_bh, 2129 + u64 start_blk, 2130 + unsigned int num_clusters, 2131 + void (*undo_fn)(unsigned int bit, 2132 + unsigned long *bitmap)) 2138 2133 { 2139 2134 int status; 2140 2135 u16 bg_start_bit; ··· 2163 2154 mlog(0, "bg_blkno = %llu, bg_start_bit = %u\n",
2164 2155 (unsigned long long)bg_blkno, bg_start_bit); 2165 2156 2166 - status = ocfs2_free_suballoc_bits(handle, bitmap_inode, bitmap_bh, 2167 - bg_start_bit, bg_blkno, 2168 - num_clusters); 2157 + status = _ocfs2_free_suballoc_bits(handle, bitmap_inode, bitmap_bh, 2158 + bg_start_bit, bg_blkno, 2159 + num_clusters, undo_fn); 2169 2160 if (status < 0) { 2170 2161 mlog_errno(status); 2171 2162 goto out; ··· 2177 2168 out: 2178 2169 mlog_exit(status); 2179 2170 return status; 2171 + } 2172 + 2173 + int ocfs2_free_clusters(handle_t *handle, 2174 + struct inode *bitmap_inode, 2175 + struct buffer_head *bitmap_bh, 2176 + u64 start_blk, 2177 + unsigned int num_clusters) 2178 + { 2179 + return _ocfs2_free_clusters(handle, bitmap_inode, bitmap_bh, 2180 + start_blk, num_clusters, 2181 + _ocfs2_set_bit); 2182 + } 2183 + 2184 + /* 2185 + * Give never-used clusters back to the global bitmap. We don't need 2186 + * to protect these bits in the undo buffer. 2187 + */ 2188 + int ocfs2_release_clusters(handle_t *handle, 2189 + struct inode *bitmap_inode, 2190 + struct buffer_head *bitmap_bh, 2191 + u64 start_blk, 2192 + unsigned int num_clusters) 2193 + { 2194 + return _ocfs2_free_clusters(handle, bitmap_inode, bitmap_bh, 2195 + start_blk, num_clusters, 2196 + _ocfs2_clear_bit); 2180 2197 } 2181 2198 2182 2199 static inline void ocfs2_debug_bg(struct ocfs2_group_desc *bg)
+5
fs/ocfs2/suballoc.h
··· 127 127 struct buffer_head *bitmap_bh, 128 128 u64 start_blk, 129 129 unsigned int num_clusters); 130 + int ocfs2_release_clusters(handle_t *handle, 131 + struct inode *bitmap_inode, 132 + struct buffer_head *bitmap_bh, 133 + u64 start_blk, 134 + unsigned int num_clusters); 130 135 131 136 static inline u64 ocfs2_which_suballoc_group(u64 block, unsigned int bit) 132 137 {
+5 -7
fs/ocfs2/xattr.c
··· 1622 1622 /* Now tell xh->xh_entries about it */ 1623 1623 for (i = 0; i < count; i++) { 1624 1624 offset = le16_to_cpu(xh->xh_entries[i].xe_name_offset); 1625 - if (offset < namevalue_offset) 1625 + if (offset <= namevalue_offset) 1626 1626 le16_add_cpu(&xh->xh_entries[i].xe_name_offset, 1627 1627 namevalue_size); 1628 1628 } ··· 6528 6528 int indexed) 6529 6529 { 6530 6530 int ret; 6531 - struct ocfs2_alloc_context *meta_ac; 6532 6531 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb); 6533 - struct ocfs2_xattr_set_ctxt ctxt = { 6534 - .meta_ac = meta_ac, 6535 - }; 6532 + struct ocfs2_xattr_set_ctxt ctxt; 6536 6533 6537 - ret = ocfs2_reserve_new_metadata_blocks(osb, 1, &meta_ac); 6534 + memset(&ctxt, 0, sizeof(ctxt)); 6535 + ret = ocfs2_reserve_new_metadata_blocks(osb, 1, &ctxt.meta_ac); 6538 6536 if (ret < 0) { 6539 6537 mlog_errno(ret); 6540 6538 return ret; ··· 6554 6556 6555 6557 ocfs2_commit_trans(osb, ctxt.handle); 6556 6558 out: 6557 - ocfs2_free_alloc_context(meta_ac); 6559 + ocfs2_free_alloc_context(ctxt.meta_ac); 6558 6560 return ret; 6559 6561 } 6560 6562
+3 -2
fs/proc/base.c
··· 442 442 unsigned long badness(struct task_struct *p, unsigned long uptime); 443 443 static int proc_oom_score(struct task_struct *task, char *buffer) 444 444 { 445 - unsigned long points; 445 + unsigned long points = 0; 446 446 struct timespec uptime; 447 447 448 448 do_posix_clock_monotonic_gettime(&uptime); 449 449 read_lock(&tasklist_lock); 450 - points = badness(task->group_leader, uptime.tv_sec); 450 + if (pid_alive(task)) 451 + points = badness(task, uptime.tv_sec); 451 452 read_unlock(&tasklist_lock); 452 453 return sprintf(buffer, "%lu\n", points); 453 454 }
+37 -48
fs/proc/task_mmu.c
··· 406 406 407 407 memset(&mss, 0, sizeof mss); 408 408 mss.vma = vma; 409 + /* mmap_sem is held in m_start */ 409 410 if (vma->vm_mm && !is_vm_hugetlb_page(vma)) 410 411 walk_page_range(vma->vm_start, vma->vm_end, &smaps_walk); 411 412 ··· 553 552 }; 554 553 555 554 struct pagemapread { 556 - u64 __user *out, *end; 555 + int pos, len; 556 + u64 *buffer; 557 557 }; 558 558 559 559 #define PM_ENTRY_BYTES sizeof(u64) ··· 577 575 static int add_to_pagemap(unsigned long addr, u64 pfn, 578 576 struct pagemapread *pm) 579 577 { 580 - if (put_user(pfn, pm->out)) 581 - return -EFAULT; 582 - pm->out++; 583 - if (pm->out >= pm->end) 578 + pm->buffer[pm->pos++] = pfn; 579 + if (pm->pos >= pm->len) 584 580 return PM_END_OF_BUFFER; 585 581 return 0; 586 582 } ··· 720 720 * determine which areas of memory are actually mapped and llseek to 721 721 * skip over unmapped regions. 722 722 */ 723 + #define PAGEMAP_WALK_SIZE (PMD_SIZE) 723 724 static ssize_t pagemap_read(struct file *file, char __user *buf, 724 725 size_t count, loff_t *ppos) 725 726 { 726 727 struct task_struct *task = get_proc_task(file->f_path.dentry->d_inode); 727 - struct page **pages, *page; 728 - unsigned long uaddr, uend; 728 729 struct mm_struct *mm; 729 730 struct pagemapread pm; 731 - int pagecount; 732 730 int ret = -ESRCH; 733 731 struct mm_walk pagemap_walk = {}; 734 732 unsigned long src; 735 733 unsigned long svpfn; 736 734 unsigned long start_vaddr; 737 735 unsigned long end_vaddr; 736 + int copied = 0; 738 737 739 738 if (!task) 740 739 goto out; ··· 756 757 if (!mm) 757 758 goto out_task; 758 759 759 - 760 - uaddr = (unsigned long)buf & PAGE_MASK; 761 - uend = (unsigned long)(buf + count); 762 - pagecount = (PAGE_ALIGN(uend) - uaddr) / PAGE_SIZE; 763 - ret = 0; 764 - if (pagecount == 0) 765 - goto out_mm; 766 - pages = kcalloc(pagecount, sizeof(struct page *), GFP_KERNEL); 760 + pm.len = PM_ENTRY_BYTES * (PAGEMAP_WALK_SIZE >> PAGE_SHIFT); 761 + pm.buffer = kmalloc(pm.len, GFP_TEMPORARY);
767 762 ret = -ENOMEM; 768 - if (!pages) 763 + if (!pm.buffer) 769 764 goto out_mm; 770 - 771 - down_read(&current->mm->mmap_sem); 772 - ret = get_user_pages(current, current->mm, uaddr, pagecount, 773 - 1, 0, pages, NULL); 774 - up_read(&current->mm->mmap_sem); 775 - 776 - if (ret < 0) 777 - goto out_free; 778 - 779 - if (ret != pagecount) { 780 - pagecount = ret; 781 - ret = -EFAULT; 782 - goto out_pages; 783 - } 784 - 785 - pm.out = (u64 __user *)buf; 786 - pm.end = (u64 __user *)(buf + count); 787 765 788 766 pagemap_walk.pmd_entry = pagemap_pte_range; 789 767 pagemap_walk.pte_hole = pagemap_pte_hole; ··· 783 807 * user buffer is tracked in "pm", and the walk 784 808 * will stop when we hit the end of the buffer. 785 809 */ 786 - ret = walk_page_range(start_vaddr, end_vaddr, &pagemap_walk); 787 - if (ret == PM_END_OF_BUFFER) 788 - ret = 0; 789 - /* don't need mmap_sem for these, but this looks cleaner */ 790 - *ppos += (char __user *)pm.out - buf; 791 - if (!ret) 792 - ret = (char __user *)pm.out - buf; 810 + ret = 0; 811 + while (count && (start_vaddr < end_vaddr)) { 812 + int len; 813 + unsigned long end; 793 814 794 - out_pages: 795 - for (; pagecount; pagecount--) { 796 - page = pages[pagecount-1]; 797 - if (!PageReserved(page)) 798 - SetPageDirty(page); 799 - page_cache_release(page); 815 + pm.pos = 0; 816 + end = start_vaddr + PAGEMAP_WALK_SIZE;
817 + /* overflow ? */ 818 + if (end < start_vaddr || end > end_vaddr) 819 + end = end_vaddr; 820 + down_read(&mm->mmap_sem); 821 + ret = walk_page_range(start_vaddr, end, &pagemap_walk); 822 + up_read(&mm->mmap_sem); 823 + start_vaddr = end; 824 + 825 + len = min(count, PM_ENTRY_BYTES * pm.pos); 826 + if (copy_to_user(buf, pm.buffer, len) < 0) { 827 + ret = -EFAULT; 828 + goto out_free; 829 + } 830 + copied += len; 831 + buf += len; 832 + count -= len; 800 833 } 834 + *ppos += copied; 835 + if (!ret || ret == PM_END_OF_BUFFER) 836 + ret = copied; 837 + 801 838 out_free: 802 839 kfree(pm.buffer); 803 840 out_mm: 804 841 mmput(mm); 805 842 out_task:
+4 -6
fs/reiserfs/super.c
··· 1618 1618 save_mount_options(s, data); 1619 1619 1620 1620 sbi = kzalloc(sizeof(struct reiserfs_sb_info), GFP_KERNEL); 1621 - if (!sbi) { 1622 - errval = -ENOMEM; 1623 - goto error_alloc; 1624 - } 1621 + if (!sbi) 1622 + return -ENOMEM; 1625 1623 s->s_fs_info = sbi; 1626 1624 /* Set default values for options: non-aggressive tails, RO on errors */ 1627 1625 REISERFS_SB(s)->s_mount_opt |= (1 << REISERFS_SMALLTAIL); ··· 1876 1878 return (0); 1877 1879 1878 1880 error: 1879 - reiserfs_write_unlock(s); 1880 - error_alloc: 1881 1881 if (jinit_done) { /* kill the commit thread, free journal ram */ 1882 1882 journal_release_error(NULL, s); 1883 1883 } 1884 + 1885 + reiserfs_write_unlock(s); 1884 1886 1885 1887 reiserfs_free_bitmap_cache(s); 1886 1888 if (SB_BUFFER_WITH_SB(s))
+1 -33
include/drm/drmP.h
··· 1545 1545 { 1546 1546 } 1547 1547 1548 - 1549 - static __inline__ void *drm_calloc_large(size_t nmemb, size_t size) 1550 - { 1551 - if (size != 0 && nmemb > ULONG_MAX / size) 1552 - return NULL; 1553 - 1554 - if (size * nmemb <= PAGE_SIZE) 1555 - return kcalloc(nmemb, size, GFP_KERNEL); 1556 - 1557 - return __vmalloc(size * nmemb, 1558 - GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO, PAGE_KERNEL); 1559 - } 1560 - 1561 - /* Modeled after cairo's malloc_ab, it's like calloc but without the zeroing. */ 1562 - static __inline__ void *drm_malloc_ab(size_t nmemb, size_t size) 1563 - { 1564 - if (size != 0 && nmemb > ULONG_MAX / size) 1565 - return NULL; 1566 - 1567 - if (size * nmemb <= PAGE_SIZE) 1568 - return kmalloc(nmemb * size, GFP_KERNEL); 1569 - 1570 - return __vmalloc(size * nmemb, 1571 - GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL); 1572 - } 1573 - 1574 - static __inline void drm_free_large(void *ptr) 1575 - { 1576 - if (!is_vmalloc_addr(ptr)) 1577 - return kfree(ptr); 1578 - 1579 - vfree(ptr); 1580 - } 1548 + #include "drm_mem_util.h" 1581 1549 /*@}*/ 1582 1550 1583 1551 #endif /* __KERNEL__ */
+65
include/drm/drm_mem_util.h
··· 1 + /* 2 + * Copyright © 2008 Intel Corporation 3 + * 4 + * Permission is hereby granted, free of charge, to any person obtaining a 5 + * copy of this software and associated documentation files (the "Software"), 6 + * to deal in the Software without restriction, including without limitation 7 + * the rights to use, copy, modify, merge, publish, distribute, sublicense, 8 + * and/or sell copies of the Software, and to permit persons to whom the 9 + * Software is furnished to do so, subject to the following conditions: 10 + * 11 + * The above copyright notice and this permission notice (including the next 12 + * paragraph) shall be included in all copies or substantial portions of the 13 + * Software. 14 + * 15 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL 18 + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 20 + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS 21 + * IN THE SOFTWARE. 22 + * 23 + * Authors: 24 + * Jesse Barnes <jbarnes@virtuousgeek.org> 25 + * 26 + */ 27 + #ifndef _DRM_MEM_UTIL_H_ 28 + #define _DRM_MEM_UTIL_H_ 29 + 30 + #include <linux/vmalloc.h> 31 + 32 + static __inline__ void *drm_calloc_large(size_t nmemb, size_t size) 33 + { 34 + if (size != 0 && nmemb > ULONG_MAX / size) 35 + return NULL; 36 + 37 + if (size * nmemb <= PAGE_SIZE) 38 + return kcalloc(nmemb, size, GFP_KERNEL); 39 + 40 + return __vmalloc(size * nmemb, 41 + GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO, PAGE_KERNEL); 42 + } 43 + 44 + /* Modeled after cairo's malloc_ab, it's like calloc but without the zeroing. 
*/ 45 + static __inline__ void *drm_malloc_ab(size_t nmemb, size_t size) 46 + { 47 + if (size != 0 && nmemb > ULONG_MAX / size) 48 + return NULL; 49 + 50 + if (size * nmemb <= PAGE_SIZE) 51 + return kmalloc(nmemb * size, GFP_KERNEL); 52 + 53 + return __vmalloc(size * nmemb, 54 + GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL); 55 + } 56 + 57 + static __inline void drm_free_large(void *ptr) 58 + { 59 + if (!is_vmalloc_addr(ptr)) 60 + return kfree(ptr); 61 + 62 + vfree(ptr); 63 + } 64 + 65 + #endif
+1
include/drm/drm_pciids.h
··· 410 410 {0x1002, 0x9712, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \ 411 411 {0x1002, 0x9713, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \ 412 412 {0x1002, 0x9714, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \ 413 + {0x1002, 0x9715, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RS880|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \ 413 414 {0, 0, 0} 414 415 415 416 #define r128_PCI_IDS \
-1
include/drm/ttm/ttm_bo_driver.h
··· 115 115 struct ttm_backend_func *func; 116 116 }; 117 117 118 - #define TTM_PAGE_FLAG_VMALLOC (1 << 0) 119 118 #define TTM_PAGE_FLAG_USER (1 << 1) 120 119 #define TTM_PAGE_FLAG_USER_DIRTY (1 << 2) 121 120 #define TTM_PAGE_FLAG_WRITE (1 << 3)
+3
include/linux/amba/bus.h
··· 14 14 #ifndef ASMARM_AMBA_H 15 15 #define ASMARM_AMBA_H 16 16 17 + #include <linux/device.h> 18 + #include <linux/resource.h> 19 + 17 20 #define AMBA_NR_IRQS 2 18 21 19 22 struct amba_device {
+2
include/linux/amba/pl061.h
··· 1 + #include <linux/types.h> 2 + 1 3 /* platform data for the PL061 GPIO driver */ 2 4 3 5 struct pl061_platform_data {
+2
include/linux/clockchips.h
··· 73 73 * @list: list head for the management code 74 74 * @mode: operating mode assigned by the management code 75 75 * @next_event: local storage for the next event in oneshot mode 76 + * @retries: number of forced programming retries 76 77 */ 77 78 struct clock_event_device { 78 79 const char *name; ··· 94 93 struct list_head list; 95 94 enum clock_event_mode mode; 96 95 ktime_t next_event; 96 + unsigned long retries; 97 97 }; 98 98 99 99 /*
+3 -3
include/linux/ext3_fs.h
··· 565 565 566 566 static inline int ext3_test_inode_state(struct inode *inode, int bit) 567 567 { 568 - return test_bit(bit, &EXT3_I(inode)->i_state); 568 + return test_bit(bit, &EXT3_I(inode)->i_state_flags); 569 569 } 570 570 571 571 static inline void ext3_set_inode_state(struct inode *inode, int bit) 572 572 { 573 - set_bit(bit, &EXT3_I(inode)->i_state); 573 + set_bit(bit, &EXT3_I(inode)->i_state_flags); 574 574 } 575 575 576 576 static inline void ext3_clear_inode_state(struct inode *inode, int bit) 577 577 { 578 - clear_bit(bit, &EXT3_I(inode)->i_state); 578 + clear_bit(bit, &EXT3_I(inode)->i_state_flags); 579 579 } 580 580 #else 581 581 /* Assume that user mode programs are passing in an ext3fs superblock, not
+1 -1
include/linux/ext3_fs_i.h
··· 87 87 * near to their parent directory's inode. 88 88 */ 89 89 __u32 i_block_group; 90 - unsigned long i_state; /* Dynamic state flags for ext3 */ 90 + unsigned long i_state_flags; /* Dynamic state flags for ext3 */ 91 91 92 92 /* block reservation info */ 93 93 struct ext3_block_alloc_info *i_block_alloc_info;
+5 -2
include/linux/freezer.h
··· 64 64 extern void cancel_freezing(struct task_struct *p); 65 65 66 66 #ifdef CONFIG_CGROUP_FREEZER 67 - extern int cgroup_frozen(struct task_struct *task); 67 + extern int cgroup_freezing_or_frozen(struct task_struct *task); 68 68 #else /* !CONFIG_CGROUP_FREEZER */ 69 - static inline int cgroup_frozen(struct task_struct *task) { return 0; } 69 + static inline int cgroup_freezing_or_frozen(struct task_struct *task) 70 + { 71 + return 0; 72 + } 70 73 #endif /* !CONFIG_CGROUP_FREEZER */ 71 74 72 75 /*
+1 -1
include/linux/fscache-cache.h
··· 105 105 /* operation releaser */ 106 106 fscache_operation_release_t release; 107 107 108 - #ifdef CONFIG_SLOW_WORK_PROC 108 + #ifdef CONFIG_SLOW_WORK_DEBUG 109 109 const char *name; /* operation name */ 110 110 const char *state; /* operation state */ 111 111 #define fscache_set_op_name(OP, N) do { (OP)->name = (N); } while(0)
+2
include/linux/ioport.h
··· 112 112 extern struct resource ioport_resource; 113 113 extern struct resource iomem_resource; 114 114 115 + extern struct resource *request_resource_conflict(struct resource *root, struct resource *new); 115 116 extern int request_resource(struct resource *root, struct resource *new); 116 117 extern int release_resource(struct resource *new); 117 118 void release_child_resources(struct resource *new); 118 119 extern void reserve_region_with_split(struct resource *root, 119 120 resource_size_t start, resource_size_t end, 120 121 const char *name); 122 + extern struct resource *insert_resource_conflict(struct resource *parent, struct resource *new); 121 123 extern int insert_resource(struct resource *parent, struct resource *new); 122 124 extern void insert_resource_expand_to_fit(struct resource *root, struct resource *new); 123 125 extern int allocate_resource(struct resource *root, struct resource *new,
+1
include/linux/netfilter_ipv6.h
··· 59 59 enum nf_ip6_hook_priorities { 60 60 NF_IP6_PRI_FIRST = INT_MIN, 61 61 NF_IP6_PRI_CONNTRACK_DEFRAG = -400, 62 + NF_IP6_PRI_RAW = -300, 62 63 NF_IP6_PRI_SELINUX_FIRST = -225, 63 64 NF_IP6_PRI_CONNTRACK = -200, 64 65 NF_IP6_PRI_MANGLE = -150,
+14 -7
include/linux/perf_event.h
··· 842 842 843 843 extern void __perf_sw_event(u32, u64, int, struct pt_regs *, u64); 844 844 845 - static inline void 846 - perf_sw_event(u32 event_id, u64 nr, int nmi, struct pt_regs *regs, u64 addr) 847 - { 848 - if (atomic_read(&perf_swevent_enabled[event_id])) 849 - __perf_sw_event(event_id, nr, nmi, regs, addr); 850 - } 851 - 852 845 extern void 853 846 perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int skip); 854 847 ··· 878 885 } 879 886 880 887 return perf_arch_fetch_caller_regs(regs, ip, skip); 888 + } 889 + 890 + static inline void 891 + perf_sw_event(u32 event_id, u64 nr, int nmi, struct pt_regs *regs, u64 addr) 892 + { 893 + if (atomic_read(&perf_swevent_enabled[event_id])) { 894 + struct pt_regs hot_regs; 895 + 896 + if (!regs) { 897 + perf_fetch_caller_regs(&hot_regs, 1); 898 + regs = &hot_regs; 899 + } 900 + __perf_sw_event(event_id, nr, nmi, regs, addr); 901 + } 881 902 } 882 903 883 904 extern void __perf_event_mmap(struct vm_area_struct *vma);
+6 -17
include/linux/rcupdate.h
··· 123 123 return lock_is_held(&rcu_lock_map); 124 124 } 125 125 126 - /** 127 - * rcu_read_lock_bh_held - might we be in RCU-bh read-side critical section? 128 - * 129 - * If CONFIG_PROVE_LOCKING is selected and enabled, returns nonzero iff in 130 - * an RCU-bh read-side critical section. In absence of CONFIG_PROVE_LOCKING, 131 - * this assumes we are in an RCU-bh read-side critical section unless it can 132 - * prove otherwise. 133 - * 134 - * Check rcu_scheduler_active to prevent false positives during boot. 126 + /* 127 + * rcu_read_lock_bh_held() is defined out of line to avoid #include-file 128 + * hell. 135 129 */ 136 - static inline int rcu_read_lock_bh_held(void) 137 - { 138 - if (!debug_lockdep_rcu_enabled()) 139 - return 1; 140 - return lock_is_held(&rcu_bh_lock_map); 141 - } 130 + extern int rcu_read_lock_bh_held(void); 142 131 143 132 /** 144 133 * rcu_read_lock_sched_held - might we be in RCU-sched read-side critical section? ··· 149 160 return 1; 150 161 if (debug_locks) 151 162 lockdep_opinion = lock_is_held(&rcu_sched_lock_map); 152 - return lockdep_opinion || preempt_count() != 0; 163 + return lockdep_opinion || preempt_count() != 0 || irqs_disabled(); 153 164 } 154 165 #else /* #ifdef CONFIG_PREEMPT */ 155 166 static inline int rcu_read_lock_sched_held(void) ··· 180 191 #ifdef CONFIG_PREEMPT 181 192 static inline int rcu_read_lock_sched_held(void) 182 193 { 183 - return !rcu_scheduler_active || preempt_count() != 0; 194 + return !rcu_scheduler_active || preempt_count() != 0 || irqs_disabled(); 184 195 } 185 196 #else /* #ifdef CONFIG_PREEMPT */ 186 197 static inline int rcu_read_lock_sched_held(void)
-6
include/linux/skbuff.h
··· 190 190 atomic_t dataref; 191 191 unsigned short nr_frags; 192 192 unsigned short gso_size; 193 - #ifdef CONFIG_HAS_DMA 194 - dma_addr_t dma_head; 195 - #endif 196 193 /* Warning: this field is not always filled in (UFO)! */ 197 194 unsigned short gso_segs; 198 195 unsigned short gso_type; ··· 198 201 struct sk_buff *frag_list; 199 202 struct skb_shared_hwtstamps hwtstamps; 200 203 skb_frag_t frags[MAX_SKB_FRAGS]; 201 - #ifdef CONFIG_HAS_DMA 202 - dma_addr_t dma_maps[MAX_SKB_FRAGS]; 203 - #endif 204 204 /* Intermediate layers must ensure that destructor_arg 205 205 * remains valid until skb destructor */ 206 206 void * destructor_arg;
+1
include/linux/socket.h
··· 255 255 #define MSG_ERRQUEUE 0x2000 /* Fetch message from error queue */ 256 256 #define MSG_NOSIGNAL 0x4000 /* Do not generate SIGPIPE */ 257 257 #define MSG_MORE 0x8000 /* Sender will send more */ 258 + #define MSG_WAITFORONE 0x10000 /* recvmmsg(): block until 1+ packets avail */ 258 259 259 260 #define MSG_EOF MSG_FIN 260 261
+1 -1
include/linux/tracepoint.h
··· 49 49 void **it_func; \ 50 50 \ 51 51 rcu_read_lock_sched_notrace(); \ 52 - it_func = rcu_dereference((tp)->funcs); \ 52 + it_func = rcu_dereference_sched((tp)->funcs); \ 53 53 if (it_func) { \ 54 54 do { \ 55 55 ((void(*)(proto))(*it_func))(args); \
-6
include/pcmcia/ss.h
··· 277 277 #endif 278 278 279 279 280 - /* socket drivers are expected to use these callbacks in their .drv struct */ 281 - extern int pcmcia_socket_dev_suspend(struct device *dev); 282 - extern void pcmcia_socket_dev_early_resume(struct device *dev); 283 - extern void pcmcia_socket_dev_late_resume(struct device *dev); 284 - extern int pcmcia_socket_dev_resume(struct device *dev); 285 - 286 280 /* socket drivers use this callback in their IRQ handler */ 287 281 extern void pcmcia_parse_events(struct pcmcia_socket *socket, 288 282 unsigned int events);
+6 -3
kernel/cgroup_freezer.c
··· 47 47 struct freezer, css); 48 48 } 49 49 50 - int cgroup_frozen(struct task_struct *task) 50 + int cgroup_freezing_or_frozen(struct task_struct *task) 51 51 { 52 52 struct freezer *freezer; 53 53 enum freezer_state state; 54 54 55 55 task_lock(task); 56 56 freezer = task_freezer(task); 57 - state = freezer->state; 57 + if (!freezer->css.cgroup->parent) 58 + state = CGROUP_THAWED; /* root cgroup can't be frozen */ 59 + else 60 + state = freezer->state; 58 61 task_unlock(task); 59 62 60 - return state == CGROUP_FROZEN; 63 + return (state == CGROUP_FREEZING) || (state == CGROUP_FROZEN); 61 64 } 62 65 63 66 /*
+5 -1
kernel/cred.c
··· 364 364 365 365 new = kmem_cache_alloc(cred_jar, GFP_ATOMIC); 366 366 if (!new) 367 - return NULL; 367 + goto free_tgcred; 368 368 369 369 kdebug("prepare_usermodehelper_creds() alloc %p", new); 370 370 ··· 397 397 398 398 error: 399 399 put_cred(new); 400 + free_tgcred: 401 + #ifdef CONFIG_KEYS 402 + kfree(tgcred); 403 + #endif 400 404 return NULL; 401 405 } 402 406
+6
kernel/early_res.c
··· 333 333 struct early_res *r; 334 334 int i; 335 335 336 + if (start == end) 337 + return; 338 + 339 + if (WARN_ONCE(start > end, " wrong range [%#llx, %#llx]\n", start, end)) 340 + return; 341 + 336 342 try_next: 337 343 i = find_overlapped_early(start, end); 338 344 if (i >= max_early_res)
+24 -11
kernel/irq/chip.c
··· 359 359 if (desc->chip->ack) 360 360 desc->chip->ack(irq); 361 361 } 362 + desc->status |= IRQ_MASKED; 363 + } 364 + 365 + static inline void mask_irq(struct irq_desc *desc, int irq) 366 + { 367 + if (desc->chip->mask) { 368 + desc->chip->mask(irq); 369 + desc->status |= IRQ_MASKED; 370 + } 371 + } 372 + 373 + static inline void unmask_irq(struct irq_desc *desc, int irq) 374 + { 375 + if (desc->chip->unmask) { 376 + desc->chip->unmask(irq); 377 + desc->status &= ~IRQ_MASKED; 378 + } 362 379 } 363 380 364 381 /* ··· 501 484 raw_spin_lock(&desc->lock); 502 485 desc->status &= ~IRQ_INPROGRESS; 503 486 504 - if (unlikely(desc->status & IRQ_ONESHOT)) 505 - desc->status |= IRQ_MASKED; 506 - else if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask) 507 - desc->chip->unmask(irq); 487 + if (!(desc->status & (IRQ_DISABLED | IRQ_ONESHOT))) 488 + unmask_irq(desc, irq); 508 489 out_unlock: 509 490 raw_spin_unlock(&desc->lock); 510 491 } ··· 539 524 action = desc->action; 540 525 if (unlikely(!action || (desc->status & IRQ_DISABLED))) { 541 526 desc->status |= IRQ_PENDING; 542 - if (desc->chip->mask) 543 - desc->chip->mask(irq); 527 + mask_irq(desc, irq); 544 528 goto out; 545 529 } 546 530 ··· 607 593 irqreturn_t action_ret; 608 594 609 595 if (unlikely(!action)) { 610 - desc->chip->mask(irq); 596 + mask_irq(desc, irq); 611 597 goto out_unlock; 612 598 } 613 599 ··· 619 605 if (unlikely((desc->status & 620 606 (IRQ_PENDING | IRQ_MASKED | IRQ_DISABLED)) == 621 607 (IRQ_PENDING | IRQ_MASKED))) { 622 - desc->chip->unmask(irq); 623 - desc->status &= ~IRQ_MASKED; 608 + unmask_irq(desc, irq); 624 609 } 625 610 626 611 desc->status &= ~IRQ_PENDING; ··· 729 716 __set_irq_handler(irq, handle, 0, name); 730 717 } 731 718 732 - void __init set_irq_noprobe(unsigned int irq) 719 + void set_irq_noprobe(unsigned int irq) 733 720 { 734 721 struct irq_desc *desc = irq_to_desc(irq); 735 722 unsigned long flags; ··· 744 731 raw_spin_unlock_irqrestore(&desc->lock, flags); 745 732 } 746 
733 747 - void __init set_irq_probe(unsigned int irq) 734 + void set_irq_probe(unsigned int irq) 748 735 { 749 736 struct irq_desc *desc = irq_to_desc(irq); 750 737 unsigned long flags;
+22
kernel/irq/manage.c
··· 382 382 { 383 383 struct irq_desc *desc = irq_to_desc(irq); 384 384 struct irqaction *action; 385 + unsigned long flags; 385 386 386 387 if (!desc) 387 388 return 0; ··· 390 389 if (desc->status & IRQ_NOREQUEST) 391 390 return 0; 392 391 392 + raw_spin_lock_irqsave(&desc->lock, flags); 393 393 action = desc->action; 394 394 if (action) 395 395 if (irqflags & action->flags & IRQF_SHARED) 396 396 action = NULL; 397 + 398 + raw_spin_unlock_irqrestore(&desc->lock, flags); 397 399 398 400 return !action; 399 401 } ··· 487 483 */ 488 484 static void irq_finalize_oneshot(unsigned int irq, struct irq_desc *desc) 489 485 { 486 + again: 490 487 chip_bus_lock(irq, desc); 491 488 raw_spin_lock_irq(&desc->lock); 489 + 490 + /* 491 + * Implausible though it may be we need to protect us against 492 + * the following scenario: 493 + * 494 + * The thread is faster done than the hard interrupt handler 495 + * on the other CPU. If we unmask the irq line then the 496 + * interrupt can come in again and masks the line, leaves due 497 + * to IRQ_INPROGRESS and the irq line is masked forever. 498 + */ 499 + if (unlikely(desc->status & IRQ_INPROGRESS)) { 500 + raw_spin_unlock_irq(&desc->lock); 501 + chip_bus_sync_unlock(irq, desc); 502 + cpu_relax(); 503 + goto again; 504 + } 505 + 492 506 if (!(desc->status & IRQ_DISABLED) && (desc->status & IRQ_MASKED)) { 493 507 desc->status &= ~IRQ_MASKED; 494 508 desc->chip->unmask(irq);
+103 -102
kernel/kgdb.c
··· 69 69 struct pt_regs *linux_regs;
70 70 };
71 71 
72 + /* Exception state values */
73 + #define DCPU_WANT_MASTER 0x1 /* Waiting to become a master kgdb cpu */
74 + #define DCPU_NEXT_MASTER 0x2 /* Transition from one master cpu to another */
75 + #define DCPU_IS_SLAVE 0x4 /* Slave cpu enter exception */
76 + #define DCPU_SSTEP 0x8 /* CPU is single stepping */
77 + 
72 78 static struct debuggerinfo_struct {
73 79 void *debuggerinfo;
74 80 struct task_struct *task;
81 + int exception_state;
75 82 } kgdb_info[NR_CPUS];
76 83 
77 84 /**
··· 398 391 
399 392 /*
400 393 * Copy the binary array pointed to by buf into mem. Fix $, #, and
401 - * 0x7d escaped with 0x7d. Return a pointer to the character after
402 - * the last byte written.
394 + * 0x7d escaped with 0x7d. Return -EFAULT on failure or 0 on success.
395 + * The input buf is overwritten with the result to write to mem.
403 396 */
404 397 static int kgdb_ebin2mem(char *buf, char *mem, int count)
405 398 {
406 - int err = 0;
407 - char c;
399 + int size = 0;
400 + char *c = buf;
408 401 
409 402 while (count-- > 0) {
410 - c = *buf++;
411 - if (c == 0x7d)
412 - c = *buf++ ^ 0x20;
413 - 
414 - err = probe_kernel_write(mem, &c, 1);
415 - if (err)
416 - break;
417 - 
418 - mem++;
403 + c[size] = *buf++;
404 + if (c[size] == 0x7d)
405 + c[size] = *buf++ ^ 0x20;
406 + size++;
419 407 }
420 408 
421 - return err;
409 + return probe_kernel_write(mem, c, size);
422 410 }
423 411 
424 412 /*
··· 563 561 */
564 562 return find_task_by_pid_ns(tid, &init_pid_ns);
565 563 }
566 - 
567 - /*
568 - * CPU debug state control:
569 - */
570 - 
571 - #ifdef CONFIG_SMP
572 - static void kgdb_wait(struct pt_regs *regs)
573 - {
574 - unsigned long flags;
575 - int cpu;
576 - 
577 - local_irq_save(flags);
578 - cpu = raw_smp_processor_id();
579 - kgdb_info[cpu].debuggerinfo = regs;
580 - kgdb_info[cpu].task = current;
581 - /*
582 - * Make sure the above info reaches the primary CPU before
583 - * our cpu_in_kgdb[] flag setting does:
584 - */
585 
- smp_wmb(); 586 - atomic_set(&cpu_in_kgdb[cpu], 1); 587 - 588 - /* Disable any cpu specific hw breakpoints */ 589 - kgdb_disable_hw_debug(regs); 590 - 591 - /* Wait till primary CPU is done with debugging */ 592 - while (atomic_read(&passive_cpu_wait[cpu])) 593 - cpu_relax(); 594 - 595 - kgdb_info[cpu].debuggerinfo = NULL; 596 - kgdb_info[cpu].task = NULL; 597 - 598 - /* fix up hardware debug registers on local cpu */ 599 - if (arch_kgdb_ops.correct_hw_break) 600 - arch_kgdb_ops.correct_hw_break(); 601 - 602 - /* Signal the primary CPU that we are done: */ 603 - atomic_set(&cpu_in_kgdb[cpu], 0); 604 - touch_softlockup_watchdog_sync(); 605 - clocksource_touch_watchdog(); 606 - local_irq_restore(flags); 607 - } 608 - #endif 609 564 610 565 /* 611 566 * Some architectures need cache flushes when we set/clear a ··· 1359 1400 return 1; 1360 1401 } 1361 1402 1362 - /* 1363 - * kgdb_handle_exception() - main entry point from a kernel exception 1364 - * 1365 - * Locking hierarchy: 1366 - * interface locks, if any (begin_session) 1367 - * kgdb lock (kgdb_active) 1368 - */ 1369 - int 1370 - kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs) 1403 + static int kgdb_cpu_enter(struct kgdb_state *ks, struct pt_regs *regs) 1371 1404 { 1372 - struct kgdb_state kgdb_var; 1373 - struct kgdb_state *ks = &kgdb_var; 1374 1405 unsigned long flags; 1375 1406 int sstep_tries = 100; 1376 1407 int error = 0; 1377 1408 int i, cpu; 1378 - 1379 - ks->cpu = raw_smp_processor_id(); 1380 - ks->ex_vector = evector; 1381 - ks->signo = signo; 1382 - ks->ex_vector = evector; 1383 - ks->err_code = ecode; 1384 - ks->kgdb_usethreadid = 0; 1385 - ks->linux_regs = regs; 1386 - 1387 - if (kgdb_reenter_check(ks)) 1388 - return 0; /* Ouch, double exception ! 
*/ 1389 - 1409 + int trace_on = 0; 1390 1410 acquirelock: 1391 1411 /* 1392 1412 * Interrupts will be restored by the 'trap return' code, except when ··· 1373 1435 */ 1374 1436 local_irq_save(flags); 1375 1437 1376 - cpu = raw_smp_processor_id(); 1438 + cpu = ks->cpu; 1439 + kgdb_info[cpu].debuggerinfo = regs; 1440 + kgdb_info[cpu].task = current; 1441 + /* 1442 + * Make sure the above info reaches the primary CPU before 1443 + * our cpu_in_kgdb[] flag setting does: 1444 + */ 1445 + atomic_inc(&cpu_in_kgdb[cpu]); 1377 1446 1378 1447 /* 1379 - * Acquire the kgdb_active lock: 1448 + * CPU will loop if it is a slave or request to become a kgdb 1449 + * master cpu and acquire the kgdb_active lock: 1380 1450 */ 1381 - while (atomic_cmpxchg(&kgdb_active, -1, cpu) != -1) 1451 + while (1) { 1452 + if (kgdb_info[cpu].exception_state & DCPU_WANT_MASTER) { 1453 + if (atomic_cmpxchg(&kgdb_active, -1, cpu) == cpu) 1454 + break; 1455 + } else if (kgdb_info[cpu].exception_state & DCPU_IS_SLAVE) { 1456 + if (!atomic_read(&passive_cpu_wait[cpu])) 1457 + goto return_normal; 1458 + } else { 1459 + return_normal: 1460 + /* Return to normal operation by executing any 1461 + * hw breakpoint fixup. 
1462 + */ 1463 + if (arch_kgdb_ops.correct_hw_break) 1464 + arch_kgdb_ops.correct_hw_break(); 1465 + if (trace_on) 1466 + tracing_on(); 1467 + atomic_dec(&cpu_in_kgdb[cpu]); 1468 + touch_softlockup_watchdog_sync(); 1469 + clocksource_touch_watchdog(); 1470 + local_irq_restore(flags); 1471 + return 0; 1472 + } 1382 1473 cpu_relax(); 1474 + } 1383 1475 1384 1476 /* 1385 1477 * For single stepping, try to only enter on the processor ··· 1443 1475 if (kgdb_io_ops->pre_exception) 1444 1476 kgdb_io_ops->pre_exception(); 1445 1477 1446 - kgdb_info[ks->cpu].debuggerinfo = ks->linux_regs; 1447 - kgdb_info[ks->cpu].task = current; 1448 - 1449 1478 kgdb_disable_hw_debug(ks->linux_regs); 1450 1479 1451 1480 /* ··· 1451 1486 */ 1452 1487 if (!kgdb_single_step) { 1453 1488 for (i = 0; i < NR_CPUS; i++) 1454 - atomic_set(&passive_cpu_wait[i], 1); 1489 + atomic_inc(&passive_cpu_wait[i]); 1455 1490 } 1456 - 1457 - /* 1458 - * spin_lock code is good enough as a barrier so we don't 1459 - * need one here: 1460 - */ 1461 - atomic_set(&cpu_in_kgdb[ks->cpu], 1); 1462 1491 1463 1492 #ifdef CONFIG_SMP 1464 1493 /* Signal the other CPUs to enter kgdb_wait() */ ··· 1477 1518 kgdb_single_step = 0; 1478 1519 kgdb_contthread = current; 1479 1520 exception_level = 0; 1521 + trace_on = tracing_is_on(); 1522 + if (trace_on) 1523 + tracing_off(); 1480 1524 1481 1525 /* Talk to debugger with gdbserial protocol */ 1482 1526 error = gdb_serial_stub(ks); ··· 1488 1526 if (kgdb_io_ops->post_exception) 1489 1527 kgdb_io_ops->post_exception(); 1490 1528 1491 - kgdb_info[ks->cpu].debuggerinfo = NULL; 1492 - kgdb_info[ks->cpu].task = NULL; 1493 - atomic_set(&cpu_in_kgdb[ks->cpu], 0); 1529 + atomic_dec(&cpu_in_kgdb[ks->cpu]); 1494 1530 1495 1531 if (!kgdb_single_step) { 1496 1532 for (i = NR_CPUS-1; i >= 0; i--) 1497 - atomic_set(&passive_cpu_wait[i], 0); 1533 + atomic_dec(&passive_cpu_wait[i]); 1498 1534 /* 1499 1535 * Wait till all the CPUs have quit 1500 1536 * from the debugger. 
··· 1511 1551 else 1512 1552 kgdb_sstep_pid = 0; 1513 1553 } 1554 + if (trace_on) 1555 + tracing_on(); 1514 1556 /* Free kgdb_active */ 1515 1557 atomic_set(&kgdb_active, -1); 1516 1558 touch_softlockup_watchdog_sync(); ··· 1522 1560 return error; 1523 1561 } 1524 1562 1563 + /* 1564 + * kgdb_handle_exception() - main entry point from a kernel exception 1565 + * 1566 + * Locking hierarchy: 1567 + * interface locks, if any (begin_session) 1568 + * kgdb lock (kgdb_active) 1569 + */ 1570 + int 1571 + kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs) 1572 + { 1573 + struct kgdb_state kgdb_var; 1574 + struct kgdb_state *ks = &kgdb_var; 1575 + int ret; 1576 + 1577 + ks->cpu = raw_smp_processor_id(); 1578 + ks->ex_vector = evector; 1579 + ks->signo = signo; 1580 + ks->ex_vector = evector; 1581 + ks->err_code = ecode; 1582 + ks->kgdb_usethreadid = 0; 1583 + ks->linux_regs = regs; 1584 + 1585 + if (kgdb_reenter_check(ks)) 1586 + return 0; /* Ouch, double exception ! */ 1587 + kgdb_info[ks->cpu].exception_state |= DCPU_WANT_MASTER; 1588 + ret = kgdb_cpu_enter(ks, regs); 1589 + kgdb_info[ks->cpu].exception_state &= ~DCPU_WANT_MASTER; 1590 + return ret; 1591 + } 1592 + 1525 1593 int kgdb_nmicallback(int cpu, void *regs) 1526 1594 { 1527 1595 #ifdef CONFIG_SMP 1596 + struct kgdb_state kgdb_var; 1597 + struct kgdb_state *ks = &kgdb_var; 1598 + 1599 + memset(ks, 0, sizeof(struct kgdb_state)); 1600 + ks->cpu = cpu; 1601 + ks->linux_regs = regs; 1602 + 1528 1603 if (!atomic_read(&cpu_in_kgdb[cpu]) && 1529 - atomic_read(&kgdb_active) != cpu && 1530 - atomic_read(&cpu_in_kgdb[atomic_read(&kgdb_active)])) { 1531 - kgdb_wait((struct pt_regs *)regs); 1604 + atomic_read(&kgdb_active) != -1 && 1605 + atomic_read(&kgdb_active) != cpu) { 1606 + kgdb_info[cpu].exception_state |= DCPU_IS_SLAVE; 1607 + kgdb_cpu_enter(ks, regs); 1608 + kgdb_info[cpu].exception_state &= ~DCPU_IS_SLAVE; 1532 1609 return 0; 1533 1610 } 1534 1611 #endif ··· 1743 1742 */ 1744 1743 void 
kgdb_breakpoint(void) 1745 1744 { 1746 - atomic_set(&kgdb_setting_breakpoint, 1); 1745 + atomic_inc(&kgdb_setting_breakpoint); 1747 1746 wmb(); /* Sync point before breakpoint */ 1748 1747 arch_kgdb_breakpoint(); 1749 1748 wmb(); /* Sync point after breakpoint */ 1750 - atomic_set(&kgdb_setting_breakpoint, 0); 1749 + atomic_dec(&kgdb_setting_breakpoint); 1751 1750 } 1752 1751 EXPORT_SYMBOL_GPL(kgdb_breakpoint); 1753 1752
+14 -8
kernel/perf_event.c
··· 1164 1164 struct perf_event_context *ctx = task->perf_event_ctxp; 1165 1165 struct perf_event_context *next_ctx; 1166 1166 struct perf_event_context *parent; 1167 - struct pt_regs *regs; 1168 1167 int do_switch = 1; 1169 1168 1170 - regs = task_pt_regs(task); 1171 - perf_sw_event(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 1, regs, 0); 1169 + perf_sw_event(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 1, NULL, 0); 1172 1170 1173 1171 if (likely(!ctx || !cpuctx->task_ctx)) 1174 1172 return; ··· 2784 2786 return NULL; 2785 2787 } 2786 2788 2787 - #ifdef CONFIG_EVENT_TRACING 2788 2789 __weak 2789 2790 void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int skip) 2790 2791 { 2791 2792 } 2792 - #endif 2793 + 2793 2794 2794 2795 /* 2795 2796 * Output ··· 3375 3378 struct perf_task_event *task_event) 3376 3379 { 3377 3380 struct perf_output_handle handle; 3378 - int size; 3379 3381 struct task_struct *task = task_event->task; 3380 - int ret; 3382 + unsigned long flags; 3383 + int size, ret; 3384 + 3385 + /* 3386 + * If this CPU attempts to acquire an rq lock held by a CPU spinning 3387 + * in perf_output_lock() from interrupt context, it's game over. 3388 + */ 3389 + local_irq_save(flags); 3381 3390 3382 3391 size = task_event->event_id.header.size; 3383 3392 ret = perf_output_begin(&handle, event, size, 0, 0); 3384 3393 3385 - if (ret) 3394 + if (ret) { 3395 + local_irq_restore(flags); 3386 3396 return; 3397 + } 3387 3398 3388 3399 task_event->event_id.pid = perf_event_pid(event, task); 3389 3400 task_event->event_id.ppid = perf_event_pid(event, current); ··· 3402 3397 perf_output_put(&handle, task_event->event_id); 3403 3398 3404 3399 perf_output_end(&handle); 3400 + local_irq_restore(flags); 3405 3401 } 3406 3402 3407 3403 static int perf_event_task_match(struct perf_event *event)
+7 -3
kernel/posix-cpu-timers.c
··· 1061 1061 } 1062 1062 } 1063 1063 1064 - static void stop_process_timers(struct task_struct *tsk) 1064 + static void stop_process_timers(struct signal_struct *sig) 1065 1065 { 1066 - struct thread_group_cputimer *cputimer = &tsk->signal->cputimer; 1066 + struct thread_group_cputimer *cputimer = &sig->cputimer; 1067 1067 unsigned long flags; 1068 1068 1069 1069 if (!cputimer->running) ··· 1072 1072 spin_lock_irqsave(&cputimer->lock, flags); 1073 1073 cputimer->running = 0; 1074 1074 spin_unlock_irqrestore(&cputimer->lock, flags); 1075 + 1076 + sig->cputime_expires.prof_exp = cputime_zero; 1077 + sig->cputime_expires.virt_exp = cputime_zero; 1078 + sig->cputime_expires.sched_exp = 0; 1075 1079 } 1076 1080 1077 1081 static u32 onecputick; ··· 1137 1133 list_empty(&timers[CPUCLOCK_VIRT]) && 1138 1134 cputime_eq(sig->it[CPUCLOCK_VIRT].expires, cputime_zero) && 1139 1135 list_empty(&timers[CPUCLOCK_SCHED])) { 1140 - stop_process_timers(tsk); 1136 + stop_process_timers(sig); 1141 1137 return; 1142 1138 } 1143 1139
+2 -3
kernel/power/process.c
··· 88 88 printk(KERN_ERR "Freezing of tasks failed after %d.%02d seconds " 89 89 "(%d tasks refusing to freeze):\n", 90 90 elapsed_csecs / 100, elapsed_csecs % 100, todo); 91 - show_state(); 92 91 read_lock(&tasklist_lock); 93 92 do_each_thread(g, p) { 94 93 task_lock(p); 95 94 if (freezing(p) && !freezer_should_skip(p)) 96 - printk(KERN_ERR " %s\n", p->comm); 95 + sched_show_task(p); 97 96 cancel_freezing(p); 98 97 task_unlock(p); 99 98 } while_each_thread(g, p); ··· 144 145 if (nosig_only && should_send_signal(p)) 145 146 continue; 146 147 147 - if (cgroup_frozen(p)) 148 + if (cgroup_freezing_or_frozen(p)) 148 149 continue; 149 150 150 151 thaw_process(p);
+23
kernel/rcupdate.c
··· 45 45 #include <linux/mutex.h> 46 46 #include <linux/module.h> 47 47 #include <linux/kernel_stat.h> 48 + #include <linux/hardirq.h> 48 49 49 50 #ifdef CONFIG_DEBUG_LOCK_ALLOC 50 51 static struct lock_class_key rcu_lock_key; ··· 66 65 67 66 int rcu_scheduler_active __read_mostly; 68 67 EXPORT_SYMBOL_GPL(rcu_scheduler_active); 68 + 69 + #ifdef CONFIG_DEBUG_LOCK_ALLOC 70 + 71 + /** 72 + * rcu_read_lock_bh_held - might we be in RCU-bh read-side critical section? 73 + * 74 + * Check for bottom half being disabled, which covers both the 75 + * CONFIG_PROVE_RCU and not cases. Note that if someone uses 76 + * rcu_read_lock_bh(), but then later enables BH, lockdep (if enabled) 77 + * will show the situation. 78 + * 79 + * Check debug_lockdep_rcu_enabled() to prevent false positives during boot. 80 + */ 81 + int rcu_read_lock_bh_held(void) 82 + { 83 + if (!debug_lockdep_rcu_enabled()) 84 + return 1; 85 + return in_softirq(); 86 + } 87 + EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held); 88 + 89 + #endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */ 69 90 70 91 /* 71 92 * This function is invoked towards the end of the scheduler's initialization
+37 -7
kernel/resource.c
··· 219 219 } 220 220 221 221 /** 222 + * request_resource_conflict - request and reserve an I/O or memory resource 223 + * @root: root resource descriptor 224 + * @new: resource descriptor desired by caller 225 + * 226 + * Returns 0 for success, conflict resource on error. 227 + */ 228 + struct resource *request_resource_conflict(struct resource *root, struct resource *new) 229 + { 230 + struct resource *conflict; 231 + 232 + write_lock(&resource_lock); 233 + conflict = __request_resource(root, new); 234 + write_unlock(&resource_lock); 235 + return conflict; 236 + } 237 + 238 + /** 222 239 * request_resource - request and reserve an I/O or memory resource 223 240 * @root: root resource descriptor 224 241 * @new: resource descriptor desired by caller ··· 246 229 { 247 230 struct resource *conflict; 248 231 249 - write_lock(&resource_lock); 250 - conflict = __request_resource(root, new); 251 - write_unlock(&resource_lock); 232 + conflict = request_resource_conflict(root, new); 252 233 return conflict ? -EBUSY : 0; 253 234 } 254 235 ··· 489 474 } 490 475 491 476 /** 492 - * insert_resource - Inserts a resource in the resource tree 477 + * insert_resource_conflict - Inserts resource in the resource tree 493 478 * @parent: parent of the new resource 494 479 * @new: new resource to insert 495 480 * 496 - * Returns 0 on success, -EBUSY if the resource can't be inserted. 481 + * Returns 0 on success, conflict resource if the resource can't be inserted. 497 482 * 498 - * This function is equivalent to request_resource when no conflict 483 + * This function is equivalent to request_resource_conflict when no conflict 499 484 * happens. If a conflict happens, and the conflicting resources 500 485 * entirely fit within the range of the new resource, then the new 501 486 * resource is inserted and the conflicting resources become children of 502 487 * the new resource. 
503 488 */ 504 - int insert_resource(struct resource *parent, struct resource *new) 489 + struct resource *insert_resource_conflict(struct resource *parent, struct resource *new) 505 490 { 506 491 struct resource *conflict; 507 492 508 493 write_lock(&resource_lock); 509 494 conflict = __insert_resource(parent, new); 510 495 write_unlock(&resource_lock); 496 + return conflict; 497 + } 498 + 499 + /** 500 + * insert_resource - Inserts a resource in the resource tree 501 + * @parent: parent of the new resource 502 + * @new: new resource to insert 503 + * 504 + * Returns 0 on success, -EBUSY if the resource can't be inserted. 505 + */ 506 + int insert_resource(struct resource *parent, struct resource *new) 507 + { 508 + struct resource *conflict; 509 + 510 + conflict = insert_resource_conflict(parent, new); 511 511 return conflict ? -EBUSY : 0; 512 512 } 513 513
+9 -5
kernel/sched.c
··· 2650 2650 { 2651 2651 unsigned long flags; 2652 2652 struct rq *rq; 2653 - int cpu = get_cpu(); 2653 + int cpu __maybe_unused = get_cpu(); 2654 2654 2655 2655 #ifdef CONFIG_SMP 2656 2656 /* ··· 4902 4902 int ret; 4903 4903 cpumask_var_t mask; 4904 4904 4905 - if (len < cpumask_size()) 4905 + if (len < nr_cpu_ids) 4906 + return -EINVAL; 4907 + if (len & (sizeof(unsigned long)-1)) 4906 4908 return -EINVAL; 4907 4909 4908 4910 if (!alloc_cpumask_var(&mask, GFP_KERNEL)) ··· 4912 4910 4913 4911 ret = sched_getaffinity(pid, mask); 4914 4912 if (ret == 0) { 4915 - if (copy_to_user(user_mask_ptr, mask, cpumask_size())) 4913 + size_t retlen = min_t(size_t, len, cpumask_size()); 4914 + 4915 + if (copy_to_user(user_mask_ptr, mask, retlen)) 4916 4916 ret = -EFAULT; 4917 4917 else 4918 - ret = cpumask_size(); 4918 + ret = retlen; 4919 4919 } 4920 4920 free_cpumask_var(mask); 4921 4921 ··· 5387 5383 5388 5384 get_task_struct(mt); 5389 5385 task_rq_unlock(rq, &flags); 5390 - wake_up_process(rq->migration_thread); 5386 + wake_up_process(mt); 5391 5387 put_task_struct(mt); 5392 5388 wait_for_completion(&req.done); 5393 5389 tlb_migrate_finish(p->mm);
-4
kernel/sched_debug.c
··· 518 518 p->se.nr_wakeups_idle = 0; 519 519 p->sched_info.bkl_count = 0; 520 520 #endif 521 - p->se.sum_exec_runtime = 0; 522 - p->se.prev_sum_exec_runtime = 0; 523 - p->nvcsw = 0; 524 - p->nivcsw = 0; 525 521 }
+1 -1
kernel/slow-work.c
··· 637 637 goto cancelled; 638 638 639 639 /* the timer holds a reference whilst it is pending */ 640 - ret = work->ops->get_ref(work); 640 + ret = slow_work_get_ref(work); 641 641 if (ret < 0) 642 642 goto cant_get_ref; 643 643
+4 -4
kernel/slow-work.h
··· 43 43 */ 44 44 static inline void slow_work_set_thread_pid(int id, pid_t pid) 45 45 { 46 - #ifdef CONFIG_SLOW_WORK_PROC 46 + #ifdef CONFIG_SLOW_WORK_DEBUG 47 47 slow_work_pids[id] = pid; 48 48 #endif 49 49 } 50 50 51 51 static inline void slow_work_mark_time(struct slow_work *work) 52 52 { 53 - #ifdef CONFIG_SLOW_WORK_PROC 53 + #ifdef CONFIG_SLOW_WORK_DEBUG 54 54 work->mark = CURRENT_TIME; 55 55 #endif 56 56 } 57 57 58 58 static inline void slow_work_begin_exec(int id, struct slow_work *work) 59 59 { 60 - #ifdef CONFIG_SLOW_WORK_PROC 60 + #ifdef CONFIG_SLOW_WORK_DEBUG 61 61 slow_work_execs[id] = work; 62 62 #endif 63 63 } 64 64 65 65 static inline void slow_work_end_exec(int id, struct slow_work *work) 66 66 { 67 - #ifdef CONFIG_SLOW_WORK_PROC 67 + #ifdef CONFIG_SLOW_WORK_DEBUG 68 68 write_lock(&slow_work_execs_lock); 69 69 slow_work_execs[id] = NULL; 70 70 write_unlock(&slow_work_execs_lock);
+2 -2
kernel/softlockup.c
··· 155 155 * Wake up the high-prio watchdog task twice per 156 156 * threshold timespan. 157 157 */ 158 - if (now > touch_ts + softlockup_thresh/2) 158 + if (time_after(now - softlockup_thresh/2, touch_ts)) 159 159 wake_up_process(per_cpu(softlockup_watchdog, this_cpu)); 160 160 161 161 /* Warn about unreasonable delays: */ 162 - if (now <= (touch_ts + softlockup_thresh)) 162 + if (time_before_eq(now - softlockup_thresh, touch_ts)) 163 163 return; 164 164 165 165 per_cpu(softlockup_print_ts, this_cpu) = touch_ts;
+40 -12
kernel/time/tick-oneshot.c
··· 22 22 23 23 #include "tick-internal.h" 24 24 25 + /* Limit min_delta to a jiffie */ 26 + #define MIN_DELTA_LIMIT (NSEC_PER_SEC / HZ) 27 + 28 + static int tick_increase_min_delta(struct clock_event_device *dev) 29 + { 30 + /* Nothing to do if we already reached the limit */ 31 + if (dev->min_delta_ns >= MIN_DELTA_LIMIT) 32 + return -ETIME; 33 + 34 + if (dev->min_delta_ns < 5000) 35 + dev->min_delta_ns = 5000; 36 + else 37 + dev->min_delta_ns += dev->min_delta_ns >> 1; 38 + 39 + if (dev->min_delta_ns > MIN_DELTA_LIMIT) 40 + dev->min_delta_ns = MIN_DELTA_LIMIT; 41 + 42 + printk(KERN_WARNING "CE: %s increased min_delta_ns to %llu nsec\n", 43 + dev->name ? dev->name : "?", 44 + (unsigned long long) dev->min_delta_ns); 45 + return 0; 46 + } 47 + 25 48 /** 26 49 * tick_program_event internal worker function 27 50 */ ··· 60 37 if (!ret || !force) 61 38 return ret; 62 39 40 + dev->retries++; 63 41 /* 64 - * We tried 2 times to program the device with the given 65 - * min_delta_ns. If that's not working then we double it 42 + * We tried 3 times to program the device with the given 43 + * min_delta_ns. If that's not working then we increase it 66 44 * and emit a warning. 67 45 */ 68 46 if (++i > 2) { 69 47 /* Increase the min. delta and try again */ 70 - if (!dev->min_delta_ns) 71 - dev->min_delta_ns = 5000; 72 - else 73 - dev->min_delta_ns += dev->min_delta_ns >> 1; 74 - 75 - printk(KERN_WARNING 76 - "CE: %s increasing min_delta_ns to %llu nsec\n", 77 - dev->name ? dev->name : "?", 78 - (unsigned long long) dev->min_delta_ns << 1); 79 - 48 + if (tick_increase_min_delta(dev)) { 49 + /* 50 + * Get out of the loop if min_delta_ns 51 + * hit the limit already. That's 52 + * better than staying here forever. 53 + * 54 + * We clear next_event so we have a 55 + * chance that the box survives. 56 + */ 57 + printk(KERN_WARNING 58 + "CE: Reprogramming failure. Giving up\n"); 59 + dev->next_event.tv64 = KTIME_MAX; 60 + return -ETIME; 61 + } 80 62 i = 0; 81 63 } 82 64
+2 -1
kernel/time/timekeeping.c
··· 818 818 shift = min(shift, maxshift); 819 819 while (offset >= timekeeper.cycle_interval) { 820 820 offset = logarithmic_accumulation(offset, shift); 821 - shift--; 821 + if(offset < timekeeper.cycle_interval<<shift) 822 + shift--; 822 823 } 823 824 824 825 /* correct the clock when NTP error is too big */
+2 -1
kernel/time/timer_list.c
··· 228 228 SEQ_printf(m, " event_handler: "); 229 229 print_name_offset(m, dev->event_handler); 230 230 SEQ_printf(m, "\n"); 231 + SEQ_printf(m, " retries: %lu\n", dev->retries); 231 232 } 232 233 233 234 static void timer_list_show_tickdevices(struct seq_file *m) ··· 258 257 u64 now = ktime_to_ns(ktime_get()); 259 258 int cpu; 260 259 261 - SEQ_printf(m, "Timer List Version: v0.5\n"); 260 + SEQ_printf(m, "Timer List Version: v0.6\n"); 262 261 SEQ_printf(m, "HRTIMER_MAX_CLOCK_BASES: %d\n", HRTIMER_MAX_CLOCK_BASES); 263 262 SEQ_printf(m, "now at %Ld nsecs\n", (unsigned long long)now); 264 263
+1
kernel/timer.c
··· 880 880 if (base->running_timer == timer) 881 881 goto out; 882 882 883 + timer_stats_timer_clear_start_info(timer); 883 884 ret = 0; 884 885 if (timer_pending(timer)) { 885 886 detach_timer(timer, 1);
+16 -6
kernel/trace/ring_buffer.c
··· 207 207 #define RB_MAX_SMALL_DATA (RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX) 208 208 #define RB_EVNT_MIN_SIZE 8U /* two 32bit words */ 209 209 210 + #if !defined(CONFIG_64BIT) || defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) 211 + # define RB_FORCE_8BYTE_ALIGNMENT 0 212 + # define RB_ARCH_ALIGNMENT RB_ALIGNMENT 213 + #else 214 + # define RB_FORCE_8BYTE_ALIGNMENT 1 215 + # define RB_ARCH_ALIGNMENT 8U 216 + #endif 217 + 210 218 /* define RINGBUF_TYPE_DATA for 'case RINGBUF_TYPE_DATA:' */ 211 219 #define RINGBUF_TYPE_DATA 0 ... RINGBUF_TYPE_DATA_TYPE_LEN_MAX 212 220 ··· 1209 1201 1210 1202 for (i = 0; i < nr_pages; i++) { 1211 1203 if (RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages))) 1212 - return; 1204 + goto out; 1213 1205 p = cpu_buffer->pages->next; 1214 1206 bpage = list_entry(p, struct buffer_page, list); 1215 1207 list_del_init(&bpage->list); 1216 1208 free_buffer_page(bpage); 1217 1209 } 1218 1210 if (RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages))) 1219 - return; 1211 + goto out; 1220 1212 1221 1213 rb_reset_cpu(cpu_buffer); 1222 1214 rb_check_pages(cpu_buffer); 1223 1215 1216 + out: 1224 1217 spin_unlock_irq(&cpu_buffer->reader_lock); 1225 1218 } 1226 1219 ··· 1238 1229 1239 1230 for (i = 0; i < nr_pages; i++) { 1240 1231 if (RB_WARN_ON(cpu_buffer, list_empty(pages))) 1241 - return; 1232 + goto out; 1242 1233 p = pages->next; 1243 1234 bpage = list_entry(p, struct buffer_page, list); 1244 1235 list_del_init(&bpage->list); ··· 1247 1238 rb_reset_cpu(cpu_buffer); 1248 1239 rb_check_pages(cpu_buffer); 1249 1240 1241 + out: 1250 1242 spin_unlock_irq(&cpu_buffer->reader_lock); 1251 1243 } 1252 1244 ··· 1557 1547 1558 1548 case 0: 1559 1549 length -= RB_EVNT_HDR_SIZE; 1560 - if (length > RB_MAX_SMALL_DATA) 1550 + if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT) 1561 1551 event->array[0] = length; 1562 1552 else 1563 1553 event->type_len = DIV_ROUND_UP(length, RB_ALIGNMENT); ··· 1732 1722 if (!length) 1733 1723 length = 1; 1734 
1724 1735 - if (length > RB_MAX_SMALL_DATA) 1725 + if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT) 1736 1726 length += sizeof(event.array[0]); 1737 1727 1738 1728 length += RB_EVNT_HDR_SIZE; 1739 - length = ALIGN(length, RB_ALIGNMENT); 1729 + length = ALIGN(length, RB_ARCH_ALIGNMENT); 1740 1730 1741 1731 return length; 1742 1732 }
+2 -2
kernel/trace/trace_clock.c
··· 84 84 int this_cpu; 85 85 u64 now; 86 86 87 - raw_local_irq_save(flags); 87 + local_irq_save(flags); 88 88 89 89 this_cpu = raw_smp_processor_id(); 90 90 now = cpu_clock(this_cpu); ··· 110 110 arch_spin_unlock(&trace_clock_struct.lock); 111 111 112 112 out: 113 - raw_local_irq_restore(flags); 113 + local_irq_restore(flags); 114 114 115 115 return now; 116 116 }
+9 -2
kernel/trace/trace_event_perf.c
··· 17 17 static char *perf_trace_buf;
18 18 static char *perf_trace_buf_nmi;
19 19 
20 - typedef typeof(char [PERF_MAX_TRACE_SIZE]) perf_trace_t ;
20 + /*
21 + * Force it to be aligned to unsigned long to avoid misaligned access
22 + * surprises
23 + */
24 + typedef typeof(unsigned long [PERF_MAX_TRACE_SIZE / sizeof(unsigned long)])
25 + perf_trace_t;
21 26 
22 27 /* Count the events in use (per event id, not per instance) */
23 28 static int total_ref_count;
··· 135 130 char *trace_buf, *raw_data;
136 131 int pc, cpu;
137 132 
133 + BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(unsigned long));
134 + 
138 135 pc = preempt_count();
139 136 
140 137 /* Protect the per cpu buffer, begin the rcu read side */
··· 159 152 raw_data = per_cpu_ptr(trace_buf, cpu);
160 153 
161 154 /* zero the dead bytes from align to not leak stack to user */
162 - *(u64 *)(&raw_data[size - sizeof(u64)]) = 0ULL;
155 + memset(&raw_data[size - sizeof(u64)], 0, sizeof(u64));
163 156 
164 157 entry = (struct trace_entry *)raw_data;
165 158 tracing_generic_entry_update(entry, *irq_flags, pc);
-13
mm/bootmem.c
··· 180 180 end_aligned = end & ~(BITS_PER_LONG - 1); 181 181 182 182 if (end_aligned <= start_aligned) { 183 - #if 1 184 - printk(KERN_DEBUG " %lx - %lx\n", start, end); 185 - #endif 186 183 for (i = start; i < end; i++) 187 184 __free_pages_bootmem(pfn_to_page(i), 0); 188 185 189 186 return; 190 187 } 191 188 192 - #if 1 193 - printk(KERN_DEBUG " %lx %lx - %lx %lx\n", 194 - start, start_aligned, end_aligned, end); 195 - #endif 196 189 for (i = start; i < start_aligned; i++) 197 190 __free_pages_bootmem(pfn_to_page(i), 0); 198 191 ··· 421 428 { 422 429 #ifdef CONFIG_NO_BOOTMEM 423 430 free_early(physaddr, physaddr + size); 424 - #if 0 425 - printk(KERN_DEBUG "free %lx %lx\n", physaddr, size); 426 - #endif 427 431 #else 428 432 unsigned long start, end; 429 433 ··· 446 456 { 447 457 #ifdef CONFIG_NO_BOOTMEM 448 458 free_early(addr, addr + size); 449 - #if 0 450 - printk(KERN_DEBUG "free %lx %lx\n", addr, size); 451 - #endif 452 459 #else 453 460 unsigned long start, end; 454 461
+3 -3
mm/nommu.c
··· 146 146 (VM_MAYREAD | VM_MAYWRITE) : (VM_READ | VM_WRITE); 147 147 148 148 for (i = 0; i < nr_pages; i++) { 149 - vma = find_extend_vma(mm, start); 149 + vma = find_vma(mm, start); 150 150 if (!vma) 151 151 goto finish_or_fault; 152 152 ··· 162 162 } 163 163 if (vmas) 164 164 vmas[i] = vma; 165 - start += PAGE_SIZE; 165 + start = (start + PAGE_SIZE) & PAGE_MASK; 166 166 } 167 167 168 168 return i; ··· 764 764 */ 765 765 struct vm_area_struct *find_extend_vma(struct mm_struct *mm, unsigned long addr) 766 766 { 767 - return find_vma(mm, addr & PAGE_MASK); 767 + return find_vma(mm, addr); 768 768 } 769 769 770 770 /*
+2
net/8021q/vlan.c
··· 378 378 #if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE) 379 379 vlandev->fcoe_ddp_xid = dev->fcoe_ddp_xid; 380 380 #endif 381 + vlandev->real_num_tx_queues = dev->real_num_tx_queues; 382 + BUG_ON(vlandev->real_num_tx_queues > vlandev->num_tx_queues); 381 383 382 384 if (old_features != vlandev->features) 383 385 netdev_features_change(vlandev);
+68 -3
net/8021q/vlan_dev.c
··· 361 361 return ret; 362 362 } 363 363 364 + static u16 vlan_dev_select_queue(struct net_device *dev, struct sk_buff *skb) 365 + { 366 + struct net_device *rdev = vlan_dev_info(dev)->real_dev; 367 + const struct net_device_ops *ops = rdev->netdev_ops; 368 + 369 + return ops->ndo_select_queue(rdev, skb); 370 + } 371 + 364 372 static int vlan_dev_change_mtu(struct net_device *dev, int new_mtu) 365 373 { 366 374 /* TODO: gotta make sure the underlying layer can handle it, ··· 696 688 .parse = eth_header_parse, 697 689 }; 698 690 699 - static const struct net_device_ops vlan_netdev_ops, vlan_netdev_accel_ops; 691 + static const struct net_device_ops vlan_netdev_ops, vlan_netdev_accel_ops, 692 + vlan_netdev_ops_sq, vlan_netdev_accel_ops_sq; 700 693 701 694 static int vlan_dev_init(struct net_device *dev) 702 695 { ··· 731 722 if (real_dev->features & NETIF_F_HW_VLAN_TX) { 732 723 dev->header_ops = real_dev->header_ops; 733 724 dev->hard_header_len = real_dev->hard_header_len; 734 - dev->netdev_ops = &vlan_netdev_accel_ops; 725 + if (real_dev->netdev_ops->ndo_select_queue) 726 + dev->netdev_ops = &vlan_netdev_accel_ops_sq; 727 + else 728 + dev->netdev_ops = &vlan_netdev_accel_ops; 735 729 } else { 736 730 dev->header_ops = &vlan_header_ops; 737 731 dev->hard_header_len = real_dev->hard_header_len + VLAN_HLEN; 738 - dev->netdev_ops = &vlan_netdev_ops; 732 + if (real_dev->netdev_ops->ndo_select_queue) 733 + dev->netdev_ops = &vlan_netdev_ops_sq; 734 + else 735 + dev->netdev_ops = &vlan_netdev_ops; 739 736 } 740 737 741 738 if (is_vlan_dev(real_dev)) ··· 857 842 }; 858 843 859 844 static const struct net_device_ops vlan_netdev_accel_ops = { 845 + .ndo_change_mtu = vlan_dev_change_mtu, 846 + .ndo_init = vlan_dev_init, 847 + .ndo_uninit = vlan_dev_uninit, 848 + .ndo_open = vlan_dev_open, 849 + .ndo_stop = vlan_dev_stop, 850 + .ndo_start_xmit = vlan_dev_hwaccel_hard_start_xmit, 851 + .ndo_validate_addr = eth_validate_addr, 852 + .ndo_set_mac_address = 
vlan_dev_set_mac_address, 853 + .ndo_set_rx_mode = vlan_dev_set_rx_mode, 854 + .ndo_set_multicast_list = vlan_dev_set_rx_mode, 855 + .ndo_change_rx_flags = vlan_dev_change_rx_flags, 856 + .ndo_do_ioctl = vlan_dev_ioctl, 857 + .ndo_neigh_setup = vlan_dev_neigh_setup, 858 + .ndo_get_stats = vlan_dev_get_stats, 859 + #if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE) 860 + .ndo_fcoe_ddp_setup = vlan_dev_fcoe_ddp_setup, 861 + .ndo_fcoe_ddp_done = vlan_dev_fcoe_ddp_done, 862 + .ndo_fcoe_enable = vlan_dev_fcoe_enable, 863 + .ndo_fcoe_disable = vlan_dev_fcoe_disable, 864 + .ndo_fcoe_get_wwn = vlan_dev_fcoe_get_wwn, 865 + #endif 866 + }; 867 + 868 + static const struct net_device_ops vlan_netdev_ops_sq = { 869 + .ndo_select_queue = vlan_dev_select_queue, 870 + .ndo_change_mtu = vlan_dev_change_mtu, 871 + .ndo_init = vlan_dev_init, 872 + .ndo_uninit = vlan_dev_uninit, 873 + .ndo_open = vlan_dev_open, 874 + .ndo_stop = vlan_dev_stop, 875 + .ndo_start_xmit = vlan_dev_hard_start_xmit, 876 + .ndo_validate_addr = eth_validate_addr, 877 + .ndo_set_mac_address = vlan_dev_set_mac_address, 878 + .ndo_set_rx_mode = vlan_dev_set_rx_mode, 879 + .ndo_set_multicast_list = vlan_dev_set_rx_mode, 880 + .ndo_change_rx_flags = vlan_dev_change_rx_flags, 881 + .ndo_do_ioctl = vlan_dev_ioctl, 882 + .ndo_neigh_setup = vlan_dev_neigh_setup, 883 + .ndo_get_stats = vlan_dev_get_stats, 884 + #if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE) 885 + .ndo_fcoe_ddp_setup = vlan_dev_fcoe_ddp_setup, 886 + .ndo_fcoe_ddp_done = vlan_dev_fcoe_ddp_done, 887 + .ndo_fcoe_enable = vlan_dev_fcoe_enable, 888 + .ndo_fcoe_disable = vlan_dev_fcoe_disable, 889 + .ndo_fcoe_get_wwn = vlan_dev_fcoe_get_wwn, 890 + #endif 891 + }; 892 + 893 + static const struct net_device_ops vlan_netdev_accel_ops_sq = { 894 + .ndo_select_queue = vlan_dev_select_queue, 860 895 .ndo_change_mtu = vlan_dev_change_mtu, 861 896 .ndo_init = vlan_dev_init, 862 897 .ndo_uninit = vlan_dev_uninit,
+5 -2
net/core/netpoll.c
··· 614 614 np->name, np->local_port);
615 615 printk(KERN_INFO "%s: local IP %pI4\n",
616 616 np->name, &np->local_ip);
617 - printk(KERN_INFO "%s: interface %s\n",
617 + printk(KERN_INFO "%s: interface '%s'\n",
618 618 np->name, np->dev_name);
619 619 printk(KERN_INFO "%s: remote port %d\n",
620 620 np->name, np->remote_port);
··· 661 661 if ((delim = strchr(cur, '@')) == NULL)
662 662 goto parse_failed;
663 663 *delim = 0;
664 + if (*cur == ' ' || *cur == '\t')
665 + printk(KERN_INFO "%s: warning: whitespace "
666 + "is not allowed\n", np->name);
664 667 np->remote_port = simple_strtol(cur, NULL, 10);
665 668 cur = delim;
666 669 }
··· 711 708 return 0;
712 709 
713 710 parse_failed:
714 - printk(KERN_INFO "%s: couldn't parse config at %s!\n",
711 + printk(KERN_INFO "%s: couldn't parse config at '%s'!\n",
715 712 np->name, cur);
716 713 return -1;
717 714 }
+1 -1
net/ipv4/devinet.c
··· 1194 1194 hlist_for_each_entry_rcu(dev, node, head, index_hlist) { 1195 1195 if (idx < s_idx) 1196 1196 goto cont; 1197 - if (idx > s_idx) 1197 + if (h > s_h || idx > s_idx) 1198 1198 s_ip_idx = 0; 1199 1199 in_dev = __in_dev_get_rcu(dev); 1200 1200 if (!in_dev)
+7 -4
net/ipv4/ipmr.c
··· 1616 1616 int ct; 1617 1617 struct rtnexthop *nhp; 1618 1618 struct net *net = mfc_net(c); 1619 - struct net_device *dev = net->ipv4.vif_table[c->mfc_parent].dev; 1620 1619 u8 *b = skb_tail_pointer(skb); 1621 1620 struct rtattr *mp_head; 1622 1621 1623 - if (dev) 1624 - RTA_PUT(skb, RTA_IIF, 4, &dev->ifindex); 1622 + /* If cache is unresolved, don't try to parse IIF and OIF */ 1623 + if (c->mfc_parent > MAXVIFS) 1624 + return -ENOENT; 1625 + 1626 + if (VIF_EXISTS(net, c->mfc_parent)) 1627 + RTA_PUT(skb, RTA_IIF, 4, &net->ipv4.vif_table[c->mfc_parent].dev->ifindex); 1625 1628 1626 1629 mp_head = (struct rtattr *)skb_put(skb, RTA_LENGTH(0)); 1627 1630 1628 1631 for (ct = c->mfc_un.res.minvif; ct < c->mfc_un.res.maxvif; ct++) { 1629 - if (c->mfc_un.res.ttls[ct] < 255) { 1632 + if (VIF_EXISTS(net, ct) && c->mfc_un.res.ttls[ct] < 255) { 1630 1633 if (skb_tailroom(skb) < RTA_ALIGN(RTA_ALIGN(sizeof(*nhp)) + 4)) 1631 1634 goto rtattr_failure; 1632 1635 nhp = (struct rtnexthop *)skb_put(skb, RTA_ALIGN(sizeof(*nhp)));
+13 -8
net/ipv4/route.c
··· 1097 1097 } 1098 1098 1099 1099 static int rt_intern_hash(unsigned hash, struct rtable *rt, 1100 - struct rtable **rp, struct sk_buff *skb) 1100 + struct rtable **rp, struct sk_buff *skb, int ifindex) 1101 1101 { 1102 1102 struct rtable *rth, **rthp; 1103 1103 unsigned long now; ··· 1212 1212 slow_chain_length(rt_hash_table[hash].chain) > rt_chain_length_max) { 1213 1213 struct net *net = dev_net(rt->u.dst.dev); 1214 1214 int num = ++net->ipv4.current_rt_cache_rebuild_count; 1215 - if (!rt_caching(dev_net(rt->u.dst.dev))) { 1215 + if (!rt_caching(net)) { 1216 1216 printk(KERN_WARNING "%s: %d rebuilds is over limit, route caching disabled\n", 1217 1217 rt->u.dst.dev->name, num); 1218 1218 } 1219 - rt_emergency_hash_rebuild(dev_net(rt->u.dst.dev)); 1219 + rt_emergency_hash_rebuild(net); 1220 + spin_unlock_bh(rt_hash_lock_addr(hash)); 1221 + 1222 + hash = rt_hash(rt->fl.fl4_dst, rt->fl.fl4_src, 1223 + ifindex, rt_genid(net)); 1224 + goto restart; 1220 1225 } 1221 1226 } 1222 1227 ··· 1482 1477 &netevent); 1483 1478 1484 1479 rt_del(hash, rth); 1485 - if (!rt_intern_hash(hash, rt, &rt, NULL)) 1480 + if (!rt_intern_hash(hash, rt, &rt, NULL, rt->fl.oif)) 1486 1481 ip_rt_put(rt); 1487 1482 goto do_next; 1488 1483 } ··· 1936 1931 1937 1932 in_dev_put(in_dev); 1938 1933 hash = rt_hash(daddr, saddr, dev->ifindex, rt_genid(dev_net(dev))); 1939 - return rt_intern_hash(hash, rth, NULL, skb); 1934 + return rt_intern_hash(hash, rth, NULL, skb, dev->ifindex); 1940 1935 1941 1936 e_nobufs: 1942 1937 in_dev_put(in_dev); ··· 2103 2098 /* put it into the cache */ 2104 2099 hash = rt_hash(daddr, saddr, fl->iif, 2105 2100 rt_genid(dev_net(rth->u.dst.dev))); 2106 - return rt_intern_hash(hash, rth, NULL, skb); 2101 + return rt_intern_hash(hash, rth, NULL, skb, fl->iif); 2107 2102 } 2108 2103 2109 2104 /* ··· 2260 2255 } 2261 2256 rth->rt_type = res.type; 2262 2257 hash = rt_hash(daddr, saddr, fl.iif, rt_genid(net)); 2263 - err = rt_intern_hash(hash, rth, NULL, skb); 2258 + err = 
rt_intern_hash(hash, rth, NULL, skb, fl.iif); 2264 2259 goto done; 2265 2260 2266 2261 no_route: ··· 2507 2502 if (err == 0) { 2508 2503 hash = rt_hash(oldflp->fl4_dst, oldflp->fl4_src, oldflp->oif, 2509 2504 rt_genid(dev_net(dev_out))); 2510 - err = rt_intern_hash(hash, rth, rp, NULL); 2505 + err = rt_intern_hash(hash, rth, rp, NULL, oldflp->oif); 2511 2506 } 2512 2507 2513 2508 return err;
+1 -1
net/ipv6/addrconf.c
··· 3610 3610 hlist_for_each_entry_rcu(dev, node, head, index_hlist) { 3611 3611 if (idx < s_idx) 3612 3612 goto cont; 3613 - if (idx > s_idx) 3613 + if (h > s_h || idx > s_idx) 3614 3614 s_ip_idx = 0; 3615 3615 ip_idx = 0; 3616 3616 if ((idev = __in6_dev_get(dev)) == NULL)
+7 -4
net/ipv6/ip6mr.c
··· 1695 1695 int ct; 1696 1696 struct rtnexthop *nhp; 1697 1697 struct net *net = mfc6_net(c); 1698 - struct net_device *dev = net->ipv6.vif6_table[c->mf6c_parent].dev; 1699 1698 u8 *b = skb_tail_pointer(skb); 1700 1699 struct rtattr *mp_head; 1701 1700 1702 - if (dev) 1703 - RTA_PUT(skb, RTA_IIF, 4, &dev->ifindex); 1701 + /* If cache is unresolved, don't try to parse IIF and OIF */ 1702 + if (c->mf6c_parent > MAXMIFS) 1703 + return -ENOENT; 1704 + 1705 + if (MIF_EXISTS(net, c->mf6c_parent)) 1706 + RTA_PUT(skb, RTA_IIF, 4, &net->ipv6.vif6_table[c->mf6c_parent].dev->ifindex); 1704 1707 1705 1708 mp_head = (struct rtattr *)skb_put(skb, RTA_LENGTH(0)); 1706 1709 1707 1710 for (ct = c->mfc_un.res.minvif; ct < c->mfc_un.res.maxvif; ct++) { 1708 - if (c->mfc_un.res.ttls[ct] < 255) { 1711 + if (MIF_EXISTS(net, ct) && c->mfc_un.res.ttls[ct] < 255) { 1709 1712 if (skb_tailroom(skb) < RTA_ALIGN(RTA_ALIGN(sizeof(*nhp)) + 4)) 1710 1713 goto rtattr_failure; 1711 1714 nhp = (struct rtnexthop *)skb_put(skb, RTA_ALIGN(sizeof(*nhp)));
+1 -1
net/ipv6/netfilter/ip6table_raw.c
··· 13 13 .valid_hooks = RAW_VALID_HOOKS, 14 14 .me = THIS_MODULE, 15 15 .af = NFPROTO_IPV6, 16 - .priority = NF_IP6_PRI_FIRST, 16 + .priority = NF_IP6_PRI_RAW, 17 17 }; 18 18 19 19 /* The work comes in here from netfilter.c. */
+9 -4
net/ipv6/route.c
··· 890 890 struct rt6_info *rt = (struct rt6_info *) dst; 891 891 892 892 if (rt) { 893 - if (rt->rt6i_flags & RTF_CACHE) 894 - ip6_del_rt(rt); 895 - else 893 + if (rt->rt6i_flags & RTF_CACHE) { 894 + if (rt6_check_expired(rt)) { 895 + ip6_del_rt(rt); 896 + dst = NULL; 897 + } 898 + } else { 896 899 dst_release(dst); 900 + dst = NULL; 901 + } 897 902 } 898 - return NULL; 903 + return dst; 899 904 } 900 905 901 906 static void ip6_link_failure(struct sk_buff *skb)
+3 -5
net/key/af_key.c
··· 2129 2129 int err; 2130 2130 2131 2131 out_skb = pfkey_xfrm_policy2msg_prep(xp); 2132 - if (IS_ERR(out_skb)) { 2133 - err = PTR_ERR(out_skb); 2134 - goto out; 2135 - } 2132 + if (IS_ERR(out_skb)) 2133 + return PTR_ERR(out_skb); 2134 + 2136 2135 err = pfkey_xfrm_policy2msg(out_skb, xp, dir); 2137 2136 if (err < 0) 2138 2137 return err; ··· 2147 2148 out_hdr->sadb_msg_seq = c->seq; 2148 2149 out_hdr->sadb_msg_pid = c->pid; 2149 2150 pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ALL, NULL, xp_net(xp)); 2150 - out: 2151 2151 return 0; 2152 2152 2153 2153 }
+3 -1
net/netfilter/xt_hashlimit.c
··· 493 493 case 64 ... 95: 494 494 i[2] = maskl(i[2], p - 64); 495 495 i[3] = 0; 496 + break; 496 497 case 96 ... 127: 497 498 i[3] = maskl(i[3], p - 96); 498 499 break; ··· 880 879 struct xt_hashlimit_htable *htable = s->private; 881 880 unsigned int *bucket = (unsigned int *)v; 882 881 883 - kfree(bucket); 882 + if (!IS_ERR(bucket)) 883 + kfree(bucket); 884 884 spin_unlock_bh(&htable->lock); 885 885 } 886 886
+1 -1
net/netfilter/xt_recent.c
··· 267 267 for (i = 0; i < e->nstamps; i++) { 268 268 if (info->seconds && time_after(time, e->stamps[i])) 269 269 continue; 270 - if (info->hit_count && ++hits >= info->hit_count) { 270 + if (!info->hit_count || ++hits >= info->hit_count) { 271 271 ret = !ret; 272 272 break; 273 273 }
+4 -1
net/sched/Kconfig
··· 328 328 module will be called cls_flow. 329 329 330 330 config NET_CLS_CGROUP 331 - bool "Control Group Classifier" 331 + tristate "Control Group Classifier" 332 332 select NET_CLS 333 333 depends on CGROUPS 334 334 ---help--- 335 335 Say Y here if you want to classify packets based on the control 336 336 cgroup of their process. 337 + 338 + To compile this code as a module, choose M here: the 339 + module will be called cls_cgroup. 337 340 338 341 config NET_EMATCH 339 342 bool "Extended Matches"
+27 -9
net/sched/cls_cgroup.c
··· 24 24 u32 classid; 25 25 }; 26 26 27 + static struct cgroup_subsys_state *cgrp_create(struct cgroup_subsys *ss, 28 + struct cgroup *cgrp); 29 + static void cgrp_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp); 30 + static int cgrp_populate(struct cgroup_subsys *ss, struct cgroup *cgrp); 31 + 32 + struct cgroup_subsys net_cls_subsys = { 33 + .name = "net_cls", 34 + .create = cgrp_create, 35 + .destroy = cgrp_destroy, 36 + .populate = cgrp_populate, 37 + #ifdef CONFIG_NET_CLS_CGROUP 38 + .subsys_id = net_cls_subsys_id, 39 + #else 40 + #define net_cls_subsys_id net_cls_subsys.subsys_id 41 + #endif 42 + .module = THIS_MODULE, 43 + }; 44 + 45 + 27 46 static inline struct cgroup_cls_state *cgrp_cls_state(struct cgroup *cgrp) 28 47 { 29 48 return container_of(cgroup_subsys_state(cgrp, net_cls_subsys_id), ··· 97 78 { 98 79 return cgroup_add_files(cgrp, ss, ss_files, ARRAY_SIZE(ss_files)); 99 80 } 100 - 101 - struct cgroup_subsys net_cls_subsys = { 102 - .name = "net_cls", 103 - .create = cgrp_create, 104 - .destroy = cgrp_destroy, 105 - .populate = cgrp_populate, 106 - .subsys_id = net_cls_subsys_id, 107 - }; 108 81 109 82 struct cls_cgroup_head 110 83 { ··· 288 277 289 278 static int __init init_cgroup_cls(void) 290 279 { 291 - return register_tcf_proto_ops(&cls_cgroup_ops); 280 + int ret = register_tcf_proto_ops(&cls_cgroup_ops); 281 + if (ret) 282 + return ret; 283 + ret = cgroup_load_subsys(&net_cls_subsys); 284 + if (ret) 285 + unregister_tcf_proto_ops(&cls_cgroup_ops); 286 + return ret; 292 287 } 293 288 294 289 static void __exit exit_cgroup_cls(void) 295 290 { 296 291 unregister_tcf_proto_ops(&cls_cgroup_ops); 292 + cgroup_unload_subsys(&net_cls_subsys); 297 293 } 298 294 299 295 module_init(init_cgroup_cls);
+4
net/socket.c
··· 2135 2135 break; 2136 2136 ++datagrams; 2137 2137 2138 + /* MSG_WAITFORONE turns on MSG_DONTWAIT after one packet */ 2139 + if (flags & MSG_WAITFORONE) 2140 + flags |= MSG_DONTWAIT; 2141 + 2138 2142 if (timeout) { 2139 2143 ktime_get_ts(timeout); 2140 2144 *timeout = timespec_sub(end_time, *timeout);
+4 -2
sound/core/pcm_lib.c
··· 148 148 149 149 #define xrun_debug(substream, mask) \ 150 150 ((substream)->pstr->xrun_debug & (mask)) 151 + #else 152 + #define xrun_debug(substream, mask) 0 153 + #endif 151 154 152 155 #define dump_stack_on_xrun(substream) do { \ 153 156 if (xrun_debug(substream, XRUN_DEBUG_STACK)) \ ··· 172 169 } 173 170 } 174 171 172 + #ifdef CONFIG_SND_PCM_XRUN_DEBUG 175 173 #define hw_ptr_error(substream, fmt, args...) \ 176 174 do { \ 177 175 if (xrun_debug(substream, XRUN_DEBUG_BASIC)) { \ ··· 259 255 260 256 #else /* ! CONFIG_SND_PCM_XRUN_DEBUG */ 261 257 262 - #define xrun_debug(substream, mask) 0 263 - #define xrun(substream) do { } while (0) 264 258 #define hw_ptr_error(substream, fmt, args...) do { } while (0) 265 259 #define xrun_log(substream, pos) do { } while (0) 266 260 #define xrun_log_show(substream) do { } while (0)
+2
sound/pci/ac97/ac97_patch.c
··· 1852 1852 0x10140523, /* Thinkpad R40 */ 1853 1853 0x10140534, /* Thinkpad X31 */ 1854 1854 0x10140537, /* Thinkpad T41p */ 1855 + 0x1014053e, /* Thinkpad R40e */ 1855 1856 0x10140554, /* Thinkpad T42p/R50p */ 1856 1857 0x10140567, /* Thinkpad T43p 2668-G7U */ 1857 1858 0x10140581, /* Thinkpad X41-2527 */ 1858 1859 0x10280160, /* Dell Dimension 2400 */ 1859 1860 0x104380b0, /* Asus A7V8X-MX */ 1860 1861 0x11790241, /* Toshiba Satellite A-15 S127 */ 1862 + 0x1179ff10, /* Toshiba P500 */ 1861 1863 0x144dc01a, /* Samsung NP-X20C004/SEG */ 1862 1864 0 /* end */ 1863 1865 };
+1
sound/pci/hda/hda_intel.c
··· 2269 2269 SND_PCI_QUIRK(0x103c, 0x306d, "HP dv3", POS_FIX_LPIB), 2270 2270 SND_PCI_QUIRK(0x1106, 0x3288, "ASUS M2V-MX SE", POS_FIX_LPIB), 2271 2271 SND_PCI_QUIRK(0x1043, 0x813d, "ASUS P5AD2", POS_FIX_LPIB), 2272 + SND_PCI_QUIRK(0x1458, 0xa022, "ga-ma770-ud3", POS_FIX_LPIB), 2272 2273 SND_PCI_QUIRK(0x1462, 0x1002, "MSI Wind U115", POS_FIX_LPIB), 2273 2274 SND_PCI_QUIRK(0x1565, 0x820f, "Biostar Microtech", POS_FIX_LPIB), 2274 2275 SND_PCI_QUIRK(0x8086, 0xd601, "eMachines T5212", POS_FIX_LPIB),
+4 -1
sound/pci/hda/patch_realtek.c
··· 10043 10043 alc_set_pin_output(codec, nid, pin_type); 10044 10044 if (spec->multiout.dac_nids[dac_idx] == 0x25) 10045 10045 idx = 4; 10046 - else 10046 + else { 10047 + if (spec->multiout.num_dacs >= dac_idx) 10048 + return; 10047 10049 idx = spec->multiout.dac_nids[dac_idx] - 2; 10050 + } 10048 10051 snd_hda_codec_write(codec, nid, 0, AC_VERB_SET_CONNECT_SEL, idx); 10049 10052 10050 10053 }
+5 -5
tools/perf/Makefile
··· 200 200 201 201 CFLAGS = -ggdb3 -Wall -Wextra -std=gnu99 -Werror $(CFLAGS_OPTIMIZE) -D_FORTIFY_SOURCE=2 $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) 202 202 EXTLIBS = -lpthread -lrt -lelf -lm 203 - ALL_CFLAGS = $(CFLAGS) 203 + ALL_CFLAGS = $(CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 204 204 ALL_LDFLAGS = $(LDFLAGS) 205 205 STRIP ?= strip 206 206 ··· 492 492 PTHREAD_LIBS = 493 493 endif 494 494 495 - ifeq ($(shell sh -c "(echo '\#include <libelf.h>'; echo 'int main(void) { Elf * elf = elf_begin(0, ELF_C_READ, 0); return (long)elf; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 496 - ifneq ($(shell sh -c "(echo '\#include <gnu/libc-version.h>'; echo 'int main(void) { const char * version = gnu_get_libc_version(); return (long)version; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 495 + ifeq ($(shell sh -c "(echo '\#include <libelf.h>'; echo 'int main(void) { Elf * elf = elf_begin(0, ELF_C_READ, 0); return (long)elf; }') | $(CC) -x c - $(ALL_CFLAGS) -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 496 + ifneq ($(shell sh -c "(echo '\#include <gnu/libc-version.h>'; echo 'int main(void) { const char * version = gnu_get_libc_version(); return (long)version; }') | $(CC) -x c - $(ALL_CFLAGS) -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 497 497 msg := $(error No gnu/libc-version.h found, please install glibc-dev[el]/glibc-static); 498 498 endif 499 499 500 - ifneq ($(shell sh -c "(echo '\#include <libelf.h>'; echo 'int main(void) { Elf * elf = elf_begin(0, ELF_C_READ_MMAP, 0); return (long)elf; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 500 + ifneq ($(shell sh -c "(echo '\#include <libelf.h>'; 
echo 'int main(void) { Elf * elf = elf_begin(0, ELF_C_READ_MMAP, 0); return (long)elf; }') | $(CC) -x c - $(ALL_CFLAGS) -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 501 501 BASIC_CFLAGS += -DLIBELF_NO_MMAP 502 502 endif 503 503 else 504 504 msg := $(error No libelf.h/libelf found, please install libelf-dev/elfutils-libelf-devel and glibc-dev[el]); 505 505 endif 506 506 507 - ifneq ($(shell sh -c "(echo '\#include <dwarf.h>'; echo '\#include <libdw.h>'; echo 'int main(void) { Dwarf *dbg; dbg = dwarf_begin(0, DWARF_C_READ); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/elfutils -ldw -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 507 + ifneq ($(shell sh -c "(echo '\#include <dwarf.h>'; echo '\#include <libdw.h>'; echo 'int main(void) { Dwarf *dbg; dbg = dwarf_begin(0, DWARF_C_READ); return (long)dbg; }') | $(CC) -x c - $(ALL_CFLAGS) -I/usr/include/elfutils -ldw -lelf -o $(BITBUCKET) $(ALL_LDFLAGS) $(EXTLIBS) "$(QUIET_STDERR)" && echo y"), y) 508 508 msg := $(warning No libdw.h found or old libdw.h found, disables dwarf support. Please install elfutils-devel/elfutils-dev); 509 509 BASIC_CFLAGS += -DNO_DWARF_SUPPORT 510 510 else
-1
tools/perf/builtin-probe.c
··· 47 47 #include "util/probe-event.h" 48 48 49 49 #define MAX_PATH_LEN 256 50 - #define MAX_PROBES 128 51 50 52 51 /* Session management structure */ 53 52 static struct {
+9 -4
tools/perf/builtin-top.c
··· 455 455 struct sym_entry *syme, *n; 456 456 struct rb_root tmp = RB_ROOT; 457 457 struct rb_node *nd; 458 - int sym_width = 0, dso_width = 0, max_dso_width; 458 + int sym_width = 0, dso_width = 0, dso_short_width = 0; 459 459 const int win_width = winsize.ws_col - 1; 460 460 461 461 samples = userspace_samples = 0; ··· 545 545 if (syme->map->dso->long_name_len > dso_width) 546 546 dso_width = syme->map->dso->long_name_len; 547 547 548 + if (syme->map->dso->short_name_len > dso_short_width) 549 + dso_short_width = syme->map->dso->short_name_len; 550 + 548 551 if (syme->name_len > sym_width) 549 552 sym_width = syme->name_len; 550 553 } 551 554 552 555 printed = 0; 553 556 554 - max_dso_width = winsize.ws_col - sym_width - 29; 555 - if (dso_width > max_dso_width) 556 - dso_width = max_dso_width; 557 + if (sym_width + dso_width > winsize.ws_col - 29) { 558 + dso_width = dso_short_width; 559 + if (sym_width + dso_width > winsize.ws_col - 29) 560 + sym_width = winsize.ws_col - dso_width - 29; 561 + } 557 562 putchar('\n'); 558 563 if (nr_counters == 1) 559 564 printf(" samples pcnt");
+1 -1
tools/perf/util/probe-event.c
··· 242 242 243 243 /* Parse probe point */ 244 244 parse_perf_probe_probepoint(argv[0], pp); 245 - if (pp->file || pp->line) 245 + if (pp->file || pp->line || pp->lazy_line) 246 246 *need_dwarf = true; 247 247 248 248 /* Copy arguments and ensure return probe has no C argument */
+7 -11
tools/perf/util/probe-finder.c
··· 333 333 die("%u exceeds max register number.", regn);
334 334 
335 335 if (deref)
336 - ret = snprintf(pf->buf, pf->len, " %s=+%ju(%s)",
337 - pf->var, (uintmax_t)offs, regs);
336 + ret = snprintf(pf->buf, pf->len, " %s=%+jd(%s)",
337 + pf->var, (intmax_t)offs, regs);
338 338 else
339 339 ret = snprintf(pf->buf, pf->len, " %s=%s", pf->var, regs);
340 340 DIE_IF(ret < 0);
··· 352 352 if (dwarf_attr(vr_die, DW_AT_location, &attr) == NULL)
353 353 goto error;
354 354 /* TODO: handle more than 1 exprs */
355 - ret = dwarf_getlocation_addr(&attr, (pf->addr - pf->cu_base),
356 - &expr, &nexpr, 1);
355 + ret = dwarf_getlocation_addr(&attr, pf->addr, &expr, &nexpr, 1);
357 356 if (ret <= 0 || nexpr == 0)
358 357 goto error;
359 358 
··· 436 437 
437 438 /* Get the frame base attribute/ops */
438 439 dwarf_attr(sp_die, DW_AT_frame_base, &fb_attr);
439 - ret = dwarf_getlocation_addr(&fb_attr, (pf->addr - pf->cu_base),
440 - &pf->fb_ops, &nops, 1);
440 + ret = dwarf_getlocation_addr(&fb_attr, pf->addr, &pf->fb_ops, &nops, 1);
441 441 if (ret <= 0 || nops == 0)
442 442 pf->fb_ops = NULL;
443 443 
··· 452 454 
453 455 /* *pf->fb_ops will be cached in libdw. Don't free it. */
454 456 pf->fb_ops = NULL;
457 + 
458 + if (pp->found == MAX_PROBES)
459 + die("Too many (> %d) probe points found.\n", MAX_PROBES);
455 460 
456 461 pp->probes[pp->found] = strdup(tmp);
457 462 pp->found++;
··· 642 641 int find_probe_point(int fd, struct probe_point *pp)
643 642 {
644 643 struct probe_finder pf = {.pp = pp};
645 - int ret;
646 644 Dwarf_Off off, noff;
647 645 size_t cuhl;
648 646 Dwarf_Die *diep;
··· 668 668 pf.fname = NULL;
669 669 
670 670 if (!pp->file || pf.fname) {
671 - /* Save CU base address (for frame_base) */
672 - ret = dwarf_lowpc(&pf.cu_die, &pf.cu_base);
673 - if (ret != 0)
674 - pf.cu_base = 0;
675 671 if (pp->function)
676 672 find_probe_point_by_func(&pf);
677 673 else if (pp->lazy_line)
-1
tools/perf/util/probe-finder.h
··· 71 71 72 72 /* For variable searching */ 73 73 Dwarf_Op *fb_ops; /* Frame base attribute */ 74 - Dwarf_Addr cu_base; /* Current CU base address */ 75 74 const char *var; /* Current variable name */ 76 75 char *buf; /* Current output buffer */ 77 76 int len; /* Length of output buffer */
+12 -5
tools/perf/util/scripting-engines/trace-event-python.c
··· 208 208 int size __unused, 209 209 unsigned long long nsecs, char *comm) 210 210 { 211 - PyObject *handler, *retval, *context, *t; 211 + PyObject *handler, *retval, *context, *t, *obj; 212 212 static char handler_name[256]; 213 213 struct format_field *field; 214 214 unsigned long long val; ··· 256 256 offset &= 0xffff; 257 257 } else 258 258 offset = field->offset; 259 - PyTuple_SetItem(t, n++, 260 - PyString_FromString((char *)data + offset)); 259 + obj = PyString_FromString((char *)data + offset); 261 260 } else { /* FIELD_IS_NUMERIC */ 262 261 val = read_size(data + field->offset, field->size); 263 262 if (field->flags & FIELD_IS_SIGNED) { 264 - PyTuple_SetItem(t, n++, PyInt_FromLong(val)); 263 + if ((long long)val >= LONG_MIN && 264 + (long long)val <= LONG_MAX) 265 + obj = PyInt_FromLong(val); 266 + else 267 + obj = PyLong_FromLongLong(val); 265 268 } else { 266 - PyTuple_SetItem(t, n++, PyInt_FromLong(val)); 269 + if (val <= LONG_MAX) 270 + obj = PyInt_FromLong(val); 271 + else 272 + obj = PyLong_FromUnsignedLongLong(val); 267 273 } 268 274 } 275 + PyTuple_SetItem(t, n++, obj); 269 276 } 270 277 271 278 if (_PyTuple_Resize(&t, n) == -1)
+13 -5
tools/perf/util/symbol.c
··· 163 163 self->long_name_len = strlen(name); 164 164 } 165 165 166 + static void dso__set_short_name(struct dso *self, const char *name) 167 + { 168 + if (name == NULL) 169 + return; 170 + self->short_name = name; 171 + self->short_name_len = strlen(name); 172 + } 173 + 166 174 static void dso__set_basename(struct dso *self) 167 175 { 168 - self->short_name = basename(self->long_name); 176 + dso__set_short_name(self, basename(self->long_name)); 169 177 } 170 178 171 179 struct dso *dso__new(const char *name) ··· 184 176 int i; 185 177 strcpy(self->name, name); 186 178 dso__set_long_name(self, self->name); 187 - self->short_name = self->name; 179 + dso__set_short_name(self, self->name); 188 180 for (i = 0; i < MAP__NR_TYPES; ++i) 189 181 self->symbols[i] = self->symbol_names[i] = RB_ROOT; 190 182 self->slen_calculated = 0; ··· 905 897 struct kmap *kmap = self->kernel ? map__kmap(map) : NULL; 906 898 struct map *curr_map = map; 907 899 struct dso *curr_dso = self; 908 - size_t dso_name_len = strlen(self->short_name); 909 900 Elf_Data *symstrs, *secstrs; 910 901 uint32_t nr_syms; 911 902 int err = -1; ··· 994 987 char dso_name[PATH_MAX]; 995 988 996 989 if (strcmp(section_name, 997 - curr_dso->short_name + dso_name_len) == 0) 990 + (curr_dso->short_name + 991 + self->short_name_len)) == 0) 998 992 goto new_symbol; 999 993 1000 994 if (strcmp(section_name, ".text") == 0) { ··· 1790 1782 struct dso *self = dso__new(name ?: "[kernel.kallsyms]"); 1791 1783 1792 1784 if (self != NULL) { 1793 - self->short_name = "[kernel]"; 1785 + dso__set_short_name(self, "[kernel]"); 1794 1786 self->kernel = 1; 1795 1787 } 1796 1788
+2 -1
tools/perf/util/symbol.h
··· 110 110 u8 sorted_by_name; 111 111 u8 loaded; 112 112 u8 build_id[BUILD_ID_SIZE]; 113 - u16 long_name_len; 114 113 const char *short_name; 115 114 char *long_name; 115 + u16 long_name_len; 116 + u16 short_name_len; 116 117 char name[0]; 117 118 }; 118 119