···
 	- information about the BeOS filesystem for Linux.
 bfs.txt
 	- info for the SCO UnixWare Boot Filesystem (BFS).
+ceph.txt
+	- info for the Ceph Distributed File System
 cifs.txt
 	- description of the CIFS filesystem.
 coda.txt
+6-5
Documentation/filesystems/ceph.txt
···
  * POSIX semantics
  * Seamless scaling from 1 to many thousands of nodes
- * High availability and reliability. No single points of failure.
+ * High availability and reliability. No single point of failure.
  * N-way replication of data across storage nodes
  * Fast recovery from node failures
  * Automatic rebalancing of data on node addition/removal
···
   wsize=X
 	Specify the maximum write size in bytes. By default there is no
-	maximu. Ceph will normally size writes based on the file stripe
+	maximum. Ceph will normally size writes based on the file stripe
 	size.
 
   rsize=X
···
 	number of entries in that directory.
 
   nocrc
-	Disable CRC32C calculation for data writes. If set, the OSD
+	Disable CRC32C calculation for data writes. If set, the storage node
 	must rely on TCP's error correction to detect data corruption
 	in the data payload.
···
 	http://ceph.newdream.net/
 
 The Linux kernel client source tree is available at
-	git://ceph.newdream.net/linux-ceph-client.git
+	git://ceph.newdream.net/git/ceph-client.git
+	git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
 
 and the source for the full system is at
-	git://ceph.newdream.net/ceph.git
+	git://ceph.newdream.net/git/ceph.git
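For context, the wsize/rsize/nocrc options documented in this file are passed with `-o` at mount time. A sketch of such an invocation (the monitor address and mount point are illustrative, not taken from the patch):

```shell
# Mount a Ceph filesystem from a monitor at 192.168.0.1 (illustrative),
# capping writes at 1 MB and disabling CRC32C on data writes.
mount -t ceph 192.168.0.1:/ /mnt/ceph -o wsize=1048576,nocrc
```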
···
 - fsl,qe-num-snums: define how many serial number(SNUM) the QE can use for the
   threads.
 
+Optional properties:
+- fsl,firmware-phandle:
+    Usage: required only if there is no fsl,qe-firmware child node
+    Value type: <phandle>
+    Definition: Points to a firmware node (see "QE Firmware Node" below)
+        that contains the firmware that should be uploaded for this QE.
+        The compatible property for the firmware node should say,
+        "fsl,qe-firmware".
+
 Recommended properties
 - brg-frequency : the internal clock source frequency for baud-rate
   generators in Hz.
···
 		reg = <0 c000>;
 	};
 };
+
+* QE Firmware Node
+
+This node defines a firmware binary that is embedded in the device tree, for
+the purpose of passing the firmware from bootloader to the kernel, or from
+the hypervisor to the guest.
+
+The firmware node itself contains the firmware binary contents, a compatible
+property, and any firmware-specific properties.  The node should be placed
+inside a QE node that needs it.  Doing so eliminates the need for a
+fsl,firmware-phandle property.  Other QE nodes that need the same firmware
+should define an fsl,firmware-phandle property that points to the firmware node
+in the first QE node.
+
+The fsl,firmware property can be specified in the DTS (possibly using incbin)
+or can be inserted by the boot loader at boot time.
+
+Required properties:
+  - compatible
+      Usage: required
+      Value type: <string>
+      Definition: A standard property.  Specify a string that indicates what
+          kind of firmware it is.  For QE, this should be "fsl,qe-firmware".
+
+  - fsl,firmware
+      Usage: required
+      Value type: <prop-encoded-array>, encoded as an array of bytes
+      Definition: A standard property.  This property contains the firmware
+          binary "blob".
+
+Example:
+	qe1@e0080000 {
+		compatible = "fsl,qe";
+		qe_firmware:qe-firmware {
+			compatible = "fsl,qe-firmware";
+			fsl,firmware = [0x70 0xcd 0x00 0x00 0x01 0x46 0x45 ...];
+		};
+		...
+	};
+
+	qe2@e0090000 {
+		compatible = "fsl,qe";
+		fsl,firmware-phandle = <&qe_firmware>;
+		...
+	};
+12-2
MAINTAINERS
···
 
 CEPH DISTRIBUTED FILE SYSTEM CLIENT
 M:	Sage Weil <sage@newdream.net>
-L:	ceph-devel@lists.sourceforge.net
+L:	ceph-devel@vger.kernel.org
 W:	http://ceph.newdream.net/
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
 S:	Supported
···
 ISDN SUBSYSTEM
 M:	Karsten Keil <isdn@linux-pingi.de>
 L:	isdn4linux@listserv.isdn4linux.de (subscribers-only)
+L:	netdev@vger.kernel.org
 W:	http://www.isdn4linux.de
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kkeil/isdn-2.6.git
 S:	Maintained
···
 S:	Maintained
 F:	include/linux/kexec.h
 F:	kernel/kexec.c
+
+KEYS/KEYRINGS:
+M:	David Howells <dhowells@redhat.com>
+L:	keyrings@linux-nfs.org
+S:	Maintained
+F:	Documentation/keys.txt
+F:	include/linux/key.h
+F:	include/linux/key-type.h
+F:	include/keys/
+F:	security/keys/
 
 KGDB
 M:	Jason Wessel <jason.wessel@windriver.com>
···
 F:	sound/soc/codecs/twl4030*
 
 TIPC NETWORK LAYER
-M:	Per Liden <per.liden@ericsson.com>
 M:	Jon Maloy <jon.maloy@ericsson.com>
 M:	Allan Stephens <allan.stephens@windriver.com>
 L:	tipc-discussion@lists.sourceforge.net
arch/arm/include/asm/outercache.h
+/*
+ * arch/arm/include/asm/outercache.h
+ *
+ * Copyright (C) 2010 ARM Ltd.
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#ifndef __ASM_OUTERCACHE_H
+#define __ASM_OUTERCACHE_H
+
+struct outer_cache_fns {
+	void (*inv_range)(unsigned long, unsigned long);
+	void (*clean_range)(unsigned long, unsigned long);
+	void (*flush_range)(unsigned long, unsigned long);
+#ifdef CONFIG_OUTER_CACHE_SYNC
+	void (*sync)(void);
+#endif
+};
+
+#ifdef CONFIG_OUTER_CACHE
+
+extern struct outer_cache_fns outer_cache;
+
+static inline void outer_inv_range(unsigned long start, unsigned long end)
+{
+	if (outer_cache.inv_range)
+		outer_cache.inv_range(start, end);
+}
+static inline void outer_clean_range(unsigned long start, unsigned long end)
+{
+	if (outer_cache.clean_range)
+		outer_cache.clean_range(start, end);
+}
+static inline void outer_flush_range(unsigned long start, unsigned long end)
+{
+	if (outer_cache.flush_range)
+		outer_cache.flush_range(start, end);
+}
+
+#else
+
+static inline void outer_inv_range(unsigned long start, unsigned long end)
+{ }
+static inline void outer_clean_range(unsigned long start, unsigned long end)
+{ }
+static inline void outer_flush_range(unsigned long start, unsigned long end)
+{ }
+
+#endif
+
+#ifdef CONFIG_OUTER_CACHE_SYNC
+static inline void outer_sync(void)
+{
+	if (outer_cache.sync)
+		outer_cache.sync();
+}
+#else
+static inline void outer_sync(void)
+{ }
+#endif
+
+#endif	/* __ASM_OUTERCACHE_H */
+10-6
arch/arm/include/asm/system.h
···
 #include <linux/linkage.h>
 #include <linux/irqflags.h>
 
+#include <asm/outercache.h>
+
 #define __exception	__attribute__((section(".exception.text")))
 
 struct thread_info;
···
 #define dmb() __asm__ __volatile__ ("" : : : "memory")
 #endif
 
-#if __LINUX_ARM_ARCH__ >= 7 || defined(CONFIG_SMP)
-#define mb()		dmb()
+#ifdef CONFIG_ARCH_HAS_BARRIERS
+#include <mach/barriers.h>
+#elif __LINUX_ARM_ARCH__ >= 7 || defined(CONFIG_SMP)
+#define mb()		do { dsb(); outer_sync(); } while (0)
 #define rmb()		dmb()
-#define wmb()		dmb()
+#define wmb()		mb()
 #else
 #define mb()	do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
 #define rmb()	do { if (arch_is_coherent()) dmb(); else barrier(); } while (0)
···
 #define smp_rmb()	barrier()
 #define smp_wmb()	barrier()
 #else
-#define smp_mb()	mb()
-#define smp_rmb()	rmb()
-#define smp_wmb()	wmb()
+#define smp_mb()	dmb()
+#define smp_rmb()	dmb()
+#define smp_wmb()	dmb()
 #endif
 
 #define read_barrier_depends()		do { } while(0)
+9-1
arch/arm/kernel/kprobes.c
···
 	/*
 	 * Setup an empty pt_regs. Fill SP and PC fields as
 	 * they're needed by longjmp_break_handler.
+	 *
+	 * We allocate some slack between the original SP and start of
+	 * our fabricated regs. To be precise we want to have worst case
+	 * covered which is STMFD with all 16 regs so we allocate 2 *
+	 * sizeof(struct_pt_regs)).
+	 *
+	 * This is to prevent any simulated instruction from writing
+	 * over the regs when they are accessing the stack.
 	 */
 		"sub	sp, %0, %1		\n\t"
 		"ldr	r0, ="__stringify(JPROBE_MAGIC_ADDR)"\n\t"
···
 		"ldmia	sp, {r0 - pc}		\n\t"
 		:
 		: "r" (kcb->jprobe_saved_regs.ARM_sp),
-		  "I" (sizeof(struct pt_regs)),
+		  "I" (sizeof(struct pt_regs) * 2),
 		  "J" (offsetof(struct pt_regs, ARM_sp)),
 		  "J" (offsetof(struct pt_regs, ARM_pc)),
 		  "J" (offsetof(struct pt_regs, ARM_cpsr))
···
 config OUTER_CACHE
 	bool
 
+config OUTER_CACHE_SYNC
+	bool
+	help
+	  The outer cache has a outer_cache_fns.sync function pointer
+	  that can be used to drain the write buffer of the outer cache.
+
 config CACHE_FEROCEON_L2
 	bool "Enable the Feroceon L2 cache controller"
 	depends on ARCH_KIRKWOOD || ARCH_MV78XX0
···
 		   REALVIEW_EB_A9MP || ARCH_MX35 || ARCH_MX31 || MACH_REALVIEW_PBX || ARCH_NOMADIK || ARCH_OMAP4
 	default y
 	select OUTER_CACHE
+	select OUTER_CACHE_SYNC
 	help
 	  This option enables the L2x0 PrimeCell.
···
 	int
 	default 6 if ARM_L1_CACHE_SHIFT_6
 	default 5
+
+config ARCH_HAS_BARRIERS
+	bool
+	help
+	  This option allows the use of custom mandatory barriers
+	  included via the mach/barriers.h file.
arch/microblaze/include/asm/segment.h
-/*
- * Copyright (C) 2008-2009 Michal Simek <monstr@monstr.eu>
- * Copyright (C) 2008-2009 PetaLogix
- * Copyright (C) 2006 Atmark Techno, Inc.
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#ifndef _ASM_MICROBLAZE_SEGMENT_H
-#define _ASM_MICROBLAZE_SEGMENT_H
-
-# ifndef __ASSEMBLY__
-
-typedef struct {
-	unsigned long seg;
-} mm_segment_t;
-
-/*
- * On Microblaze the fs value is actually the top of the corresponding
- * address space.
- *
- * The fs value determines whether argument validity checking should be
- * performed or not. If get_fs() == USER_DS, checking is performed, with
- * get_fs() == KERNEL_DS, checking is bypassed.
- *
- * For historical reasons, these macros are grossly misnamed.
- *
- * For non-MMU arch like Microblaze, KERNEL_DS and USER_DS is equal.
- */
-# define MAKE_MM_SEG(s)	((mm_segment_t) { (s) })
-
-# ifndef CONFIG_MMU
-# define KERNEL_DS	MAKE_MM_SEG(0)
-# define USER_DS	KERNEL_DS
-# else
-# define KERNEL_DS	MAKE_MM_SEG(0xFFFFFFFF)
-# define USER_DS	MAKE_MM_SEG(TASK_SIZE - 1)
-# endif
-
-# define get_ds()	(KERNEL_DS)
-# define get_fs()	(current_thread_info()->addr_limit)
-# define set_fs(val)	(current_thread_info()->addr_limit = (val))
-
-# define segment_eq(a, b)	((a).seg == (b).seg)
-
-# endif /* __ASSEMBLY__ */
-#endif /* _ASM_MICROBLAZE_SEGMENT_H */
+4-1
arch/microblaze/include/asm/thread_info.h
···
 #ifndef __ASSEMBLY__
 # include <linux/types.h>
 # include <asm/processor.h>
-# include <asm/segment.h>
 
 /*
  * low level task data that entry.S needs immediate access to
···
 	__u32	esr;
 	__u32	fsr;
 };
+
+typedef struct {
+	unsigned long	seg;
+} mm_segment_t;
 
 struct thread_info {
 	struct task_struct	*task; /* main task structure */
···
 #include <asm/mmu.h>
 #include <asm/page.h>
 #include <asm/pgtable.h>
-#include <asm/segment.h>
 #include <linux/string.h>
 
 #define VERIFY_READ	0
 #define VERIFY_WRITE	1
 
-#define __clear_user(addr, n)	(memset((void *)(addr), 0, (n)), 0)
+/*
+ * On Microblaze the fs value is actually the top of the corresponding
+ * address space.
+ *
+ * The fs value determines whether argument validity checking should be
+ * performed or not. If get_fs() == USER_DS, checking is performed, with
+ * get_fs() == KERNEL_DS, checking is bypassed.
+ *
+ * For historical reasons, these macros are grossly misnamed.
+ *
+ * For non-MMU arch like Microblaze, KERNEL_DS and USER_DS is equal.
+ */
+# define MAKE_MM_SEG(s)	((mm_segment_t) { (s) })
+
+# ifndef CONFIG_MMU
+# define KERNEL_DS	MAKE_MM_SEG(0)
+# define USER_DS	KERNEL_DS
+# else
+# define KERNEL_DS	MAKE_MM_SEG(0xFFFFFFFF)
+# define USER_DS	MAKE_MM_SEG(TASK_SIZE - 1)
+# endif
+
+# define get_ds()	(KERNEL_DS)
+# define get_fs()	(current_thread_info()->addr_limit)
+# define set_fs(val)	(current_thread_info()->addr_limit = (val))
+
+# define segment_eq(a, b)	((a).seg == (b).seg)
+
+/*
+ * The exception table consists of pairs of addresses: the first is the
+ * address of an instruction that is allowed to fault, and the second is
+ * the address at which the program should continue. No registers are
+ * modified, so it is entirely up to the continuation code to figure out
+ * what to do.
+ *
+ * All the routines below use bits of fixup code that are out of line
+ * with the main instruction path. This means when everything is well,
+ * we don't even have to jump over them. Further, they do not intrude
+ * on our cache or tlb entries.
+ */
+struct exception_table_entry {
+	unsigned long insn, fixup;
+};
+
+/* Returns 0 if exception not found and fixup otherwise. */
+extern unsigned long search_exception_table(unsigned long);
 
 #ifndef CONFIG_MMU
 
-extern int ___range_ok(unsigned long addr, unsigned long size);
+/* Check against bounds of physical memory */
+static inline int ___range_ok(unsigned long addr, unsigned long size)
+{
+	return ((addr < memory_start) ||
+		((addr + size) > memory_end));
+}
 
 #define __range_ok(addr, size) \
 		___range_ok((unsigned long)(addr), (unsigned long)(size))
 
 #define access_ok(type, addr, size) (__range_ok((addr), (size)) == 0)
-#define __access_ok(add, size) (__range_ok((addr), (size)) == 0)
 
-/* Undefined function to trigger linker error */
-extern int bad_user_access_length(void);
-
-/* FIXME this is function for optimalization -> memcpy */
-#define __get_user(var, ptr)				\
-({							\
-	int __gu_err = 0;				\
-	switch (sizeof(*(ptr))) {			\
-	case 1:						\
-	case 2:						\
-	case 4:						\
-		(var) = *(ptr);				\
-		break;					\
-	case 8:						\
-		memcpy((void *) &(var), (ptr), 8);	\
-		break;					\
-	default:					\
-		(var) = 0;				\
-		__gu_err = __get_user_bad();		\
-		break;					\
-	}						\
-	__gu_err;					\
-})
-
-#define __get_user_bad()	(bad_user_access_length(), (-EFAULT))
-
-/* FIXME is not there defined __pu_val */
-#define __put_user(var, ptr)					\
-({								\
-	int __pu_err = 0;					\
-	switch (sizeof(*(ptr))) {				\
-	case 1:							\
-	case 2:							\
-	case 4:							\
-		*(ptr) = (var);					\
-		break;						\
-	case 8: {						\
-		typeof(*(ptr)) __pu_val = (var);		\
-		memcpy(ptr, &__pu_val, sizeof(__pu_val));	\
-	}							\
-		break;						\
-	default:						\
-		__pu_err = __put_user_bad();			\
-		break;						\
-	}							\
-	__pu_err;						\
-})
-
-#define __put_user_bad()	(bad_user_access_length(), (-EFAULT))
-
-#define put_user(x, ptr)	__put_user((x), (ptr))
-#define get_user(x, ptr)	__get_user((x), (ptr))
-
-#define copy_to_user(to, from, n)	(memcpy((to), (from), (n)), 0)
-#define copy_from_user(to, from, n)	(memcpy((to), (from), (n)), 0)
-
-#define __copy_to_user(to, from, n)	(copy_to_user((to), (from), (n)))
-#define __copy_from_user(to, from, n)	(copy_from_user((to), (from), (n)))
-#define __copy_to_user_inatomic(to, from, n) \
-			(__copy_to_user((to), (from), (n)))
-#define __copy_from_user_inatomic(to, from, n) \
-			(__copy_from_user((to), (from), (n)))
-
-static inline unsigned long clear_user(void *addr, unsigned long size)
-{
-	if (access_ok(VERIFY_WRITE, addr, size))
-		size = __clear_user(addr, size);
-	return size;
-}
-
-/* Returns 0 if exception not found and fixup otherwise. */
-extern unsigned long search_exception_table(unsigned long);
-
-extern long strncpy_from_user(char *dst, const char *src, long count);
-extern long strnlen_user(const char *src, long count);
-
-#else /* CONFIG_MMU */
+#else
 
 /*
  * Address is valid if:
···
 /* || printk("access_ok failed for %s at 0x%08lx (size %d), seg 0x%08x\n",
  type?"WRITE":"READ",addr,size,get_fs().seg)) */
 
-/*
- * All the __XXX versions macros/functions below do not perform
- * access checking. It is assumed that the necessary checks have been
- * already performed before the finction (macro) is called.
+#endif
+
+#ifdef CONFIG_MMU
+# define __FIXUP_SECTION	".section .fixup,\"ax\"\n"
+# define __EX_TABLE_SECTION	".section __ex_table,\"a\"\n"
+#else
+# define __FIXUP_SECTION	".section .discard,\"ax\"\n"
+# define __EX_TABLE_SECTION	".section .discard,\"a\"\n"
+#endif
+
+extern unsigned long __copy_tofrom_user(void __user *to,
+		const void __user *from, unsigned long size);
+
+/* Return: number of not copied bytes, i.e. 0 if OK or non-zero if fail. */
+static inline unsigned long __must_check __clear_user(void __user *to,
+							unsigned long n)
+{
+	/* normal memset with two words to __ex_table */
+	__asm__ __volatile__ (				\
+			"1:	sb	r0, %2, r0;"	\
+			"	addik	%0, %0, -1;"	\
+			"	bneid	%0, 1b;"	\
+			"	addik	%2, %2, 1;"	\
+			"2:			"	\
+			__EX_TABLE_SECTION		\
+			".word	1b,2b;"			\
+			".previous;"			\
+		: "=r"(n)				\
+		: "0"(n), "r"(to)
+	);
+	return n;
+}
+
+static inline unsigned long __must_check clear_user(void __user *to,
+							unsigned long n)
+{
+	might_sleep();
+	if (unlikely(!access_ok(VERIFY_WRITE, to, n)))
+		return n;
+
+	return __clear_user(to, n);
+}
+
+/* put_user and get_user macros */
+extern long __user_bad(void);
+
+#define __get_user_asm(insn, __gu_ptr, __gu_val, __gu_err)	\
+({								\
+	__asm__ __volatile__ (					\
+			"1:" insn	" %1, %2, r0;"		\
+			"	addk	%0, r0, r0;"		\
+			"2:		"			\
+			__FIXUP_SECTION				\
+			"3:	brid	2b;"			\
+			"	addik	%0, r0, %3;"		\
+			".previous;"				\
+			__EX_TABLE_SECTION			\
+			".word	1b,3b;"				\
+			".previous;"				\
+		: "=&r"(__gu_err), "=r"(__gu_val)		\
+		: "r"(__gu_ptr), "i"(-EFAULT)			\
+	);							\
+})
+
+/**
+ * get_user: - Get a simple variable from user space.
+ * @x:   Variable to store result.
+ * @ptr: Source address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple variable from user space to kernel
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and the result of
+ * dereferencing @ptr must be assignable to @x without a cast.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ * On error, the variable @x is set to zero.
 */
-
-#define get_user(x, ptr)						\
-({									\
-	access_ok(VERIFY_READ, (ptr), sizeof(*(ptr)))			\
-		? __get_user((x), (ptr)) : -EFAULT;			\
-})
-
-#define put_user(x, ptr)						\
-({									\
-	access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr)))			\
-		? __put_user((x), (ptr)) : -EFAULT;			\
-})
 
 #define __get_user(x, ptr)						\
 ({									\
···
 		__get_user_asm("lw", (ptr), __gu_val, __gu_err);	\
 		break;							\
 	default:							\
-		__gu_val = 0; __gu_err = -EINVAL;			\
+		/* __gu_val = 0; __gu_err = -EINVAL;*/ __gu_err = __user_bad();\
 	}								\
 	x = (__typeof__(*(ptr))) __gu_val;				\
 	__gu_err;							\
 })
 
-#define __get_user_asm(insn, __gu_ptr, __gu_val, __gu_err)	\
+
+#define get_user(x, ptr)						\
 ({								\
-	__asm__ __volatile__ (					\
-			"1:" insn	" %1, %2, r0;		\
-			addk	%0, r0, r0;			\
-			2:					\
-			.section .fixup,\"ax\";			\
-			3:	brid	2b;			\
-			addik	%0, r0, %3;			\
-			.previous;				\
-			.section __ex_table,\"a\";		\
-			.word	1b,3b;				\
-			.previous;"				\
-		: "=r"(__gu_err), "=r"(__gu_val)		\
-		: "r"(__gu_ptr), "i"(-EFAULT)			\
-	);							\
+	access_ok(VERIFY_READ, (ptr), sizeof(*(ptr)))		\
+		? __get_user((x), (ptr)) : -EFAULT;		\
 })
+
+#define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err)	\
+({								\
+	__asm__ __volatile__ (					\
+			"1:" insn	" %1, %2, r0;"		\
+			"	addk	%0, r0, r0;"		\
+			"2:		"			\
+			__FIXUP_SECTION				\
+			"3:	brid	2b;"			\
+			"	addik	%0, r0, %3;"		\
+			".previous;"				\
+			__EX_TABLE_SECTION			\
+			".word	1b,3b;"				\
+			".previous;"				\
+		: "=&r"(__gu_err)				\
+		: "r"(__gu_val), "r"(__gu_ptr), "i"(-EFAULT)	\
+	);							\
+})
+
+#define __put_user_asm_8(__gu_ptr, __gu_val, __gu_err)	\
+({							\
+	__asm__ __volatile__ ("	lwi	%0, %1, 0;"	\
+			"1:	swi	%0, %2, 0;"	\
+			"	lwi	%0, %1, 4;"	\
+			"2:	swi	%0, %2, 4;"	\
+			"	addk	%0, r0, r0;"	\
+			"3:			"	\
+			__FIXUP_SECTION			\
+			"4:	brid	3b;"		\
+			"	addik	%0, r0, %3;"	\
+			".previous;"			\
+			__EX_TABLE_SECTION		\
+			".word	1b,4b,2b,4b;"		\
+			".previous;"			\
+		: "=&r"(__gu_err)			\
+		: "r"(&__gu_val), "r"(__gu_ptr), "i"(-EFAULT) \
+	);						\
+})
+
+/**
+ * put_user: - Write a simple value into user space.
+ * @x:   Value to copy to user space.
+ * @ptr: Destination address, in user space.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * This macro copies a single simple value from kernel space to user
+ * space.  It supports simple types like char and int, but not larger
+ * data types like structures or arrays.
+ *
+ * @ptr must have pointer-to-simple-variable type, and @x must be assignable
+ * to the result of dereferencing @ptr.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
 
 #define __put_user(x, ptr)						\
 ({									\
···
 	case 1:								\
 		__put_user_asm("sb", (ptr), __gu_val, __gu_err);	\
 		break;							\
-	case 2: 							\
+	case 2:								\
 		__put_user_asm("sh", (ptr), __gu_val, __gu_err);	\
 		break;							\
 	case 4:								\
···
 		__put_user_asm_8((ptr), __gu_val, __gu_err);		\
 		break;							\
 	default:							\
-		__gu_err = -EINVAL;					\
+		/*__gu_err = -EINVAL;*/ __gu_err = __user_bad();	\
 	}								\
 	__gu_err;							\
 })
 
-#define __put_user_asm_8(__gu_ptr, __gu_val, __gu_err)	\
-({							\
-__asm__ __volatile__ ("	lwi	%0, %1, 0;	\
-		1:	swi	%0, %2, 0;	\
-			lwi	%0, %1, 4;	\
-		2:	swi	%0, %2, 4;	\
-			addk	%0,r0,r0;	\
-		3:			\
-		.section .fixup,\"ax\";	\
-		4:	brid	3b;		\
-			addik	%0, r0, %3;	\
-		.previous;			\
-		.section __ex_table,\"a\";	\
-		.word	1b,4b,2b,4b;		\
-		.previous;"			\
-	: "=&r"(__gu_err)			\
-	: "r"(&__gu_val),			\
-	  "r"(__gu_ptr), "i"(-EFAULT)		\
-	);					\
+#ifndef CONFIG_MMU
+
+#define put_user(x, ptr)	__put_user((x), (ptr))
+
+#else /* CONFIG_MMU */
+
+#define put_user(x, ptr)					\
+({								\
+	access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr)))		\
+		? __put_user((x), (ptr)) : -EFAULT;		\
 })
+#endif /* CONFIG_MMU */
 
-#define __put_user_asm(insn, __gu_ptr, __gu_val, __gu_err)	\
-({								\
-	__asm__ __volatile__ (					\
-			"1:" insn	" %1, %2, r0;		\
-			addk	%0, r0, r0;			\
-			2:					\
-			.section .fixup,\"ax\";			\
-			3:	brid	2b;			\
-			addik	%0, r0, %3;			\
-			.previous;				\
-			.section __ex_table,\"a\";		\
-			.word	1b,3b;				\
-			.previous;"				\
-		: "=r"(__gu_err)				\
-		: "r"(__gu_val), "r"(__gu_ptr), "i"(-EFAULT)	\
-	);							\
-})
-
-/*
- * Return: number of not copied bytes, i.e. 0 if OK or non-zero if fail.
- */
-static inline int clear_user(char *to, int size)
-{
-	if (size && access_ok(VERIFY_WRITE, to, size)) {
-		__asm__ __volatile__ ("			\
-			1:				\
-				sb	r0, %2, r0;	\
-				addik	%0, %0, -1;	\
-				bneid	%0, 1b;		\
-				addik	%2, %2, 1;	\
-			2:				\
-			.section __ex_table,\"a\";	\
-			.word	1b,2b;			\
-			.section .text;"		\
-			: "=r"(size)			\
-			: "0"(size), "r"(to)
-		);
-	}
-	return size;
-}
-
-#define __copy_from_user(to, from, n)	copy_from_user((to), (from), (n))
+/* copy_to_from_user */
+#define __copy_from_user(to, from, n)	\
+	__copy_tofrom_user((__force void __user *)(to), \
+				(void __user *)(from), (n))
 #define __copy_from_user_inatomic(to, from, n) \
 		copy_from_user((to), (from), (n))
 
-#define copy_to_user(to, from, n)					\
-	(access_ok(VERIFY_WRITE, (to), (n)) ?				\
-		__copy_tofrom_user((void __user *)(to),			\
-			(__force const void __user *)(from), (n))	\
-		: -EFAULT)
+static inline long copy_from_user(void *to,
+		const void __user *from, unsigned long n)
+{
+	might_sleep();
+	if (access_ok(VERIFY_READ, from, n))
+		return __copy_from_user(to, from, n);
+	return n;
+}
 
-#define __copy_to_user(to, from, n)	copy_to_user((to), (from), (n))
+#define __copy_to_user(to, from, n)	\
+		__copy_tofrom_user((void __user *)(to), \
+			(__force const void __user *)(from), (n))
 #define __copy_to_user_inatomic(to, from, n)	copy_to_user((to), (from), (n))
 
-#define copy_from_user(to, from, n)					\
-	(access_ok(VERIFY_READ, (from), (n)) ?				\
-		__copy_tofrom_user((__force void __user *)(to),		\
-			(void __user *)(from), (n))			\
-		: -EFAULT)
-
-extern int __strncpy_user(char *to, const char __user *from, int len);
-extern int __strnlen_user(const char __user *sstr, int len);
-
-#define strncpy_from_user(to, from, len)	\
-		(access_ok(VERIFY_READ, from, 1) ?	\
-			__strncpy_user(to, from, len) : -EFAULT)
-#define strnlen_user(str, len)	\
-		(access_ok(VERIFY_READ, str, 1) ? __strnlen_user(str, len) : 0)
-
-#endif /* CONFIG_MMU */
-
-extern unsigned long __copy_tofrom_user(void __user *to,
-		const void __user *from, unsigned long size);
+static inline long copy_to_user(void __user *to,
+		const void *from, unsigned long n)
+{
+	might_sleep();
+	if (access_ok(VERIFY_WRITE, to, n))
+		return __copy_to_user(to, from, n);
+	return n;
+}
 
 /*
- * The exception table consists of pairs of addresses: the first is the
- * address of an instruction that is allowed to fault, and the second is
- * the address at which the program should continue. No registers are
- * modified, so it is entirely up to the continuation code to figure out
- * what to do.
- *
- * All the routines below use bits of fixup code that are out of line
- * with the main instruction path. This means when everything is well,
- * we don't even have to jump over them. Further, they do not intrude
- * on our cache or tlb entries.
+ * Copy a null terminated string from userspace.
 */
-struct exception_table_entry {
-	unsigned long insn, fixup;
-};
+extern int __strncpy_user(char *to, const char __user *from, int len);
+
+#define __strncpy_from_user	__strncpy_user
+
+static inline long
+strncpy_from_user(char *dst, const char __user *src, long count)
+{
+	if (!access_ok(VERIFY_READ, src, 1))
+		return -EFAULT;
+	return __strncpy_from_user(dst, src, count);
+}
+
+/*
+ * Return the size of a string (including the ending 0)
+ *
+ * Return 0 on exception, a value greater than N if too long
+ */
+extern int __strnlen_user(const char __user *sstr, int len);
+
+static inline long strnlen_user(const char __user *src, long n)
+{
+	if (!access_ok(VERIFY_READ, src, 1))
+		return 0;
+	return __strnlen_user(src, n);
+}
 
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
+1-1
arch/microblaze/kernel/dma.c
···
 
 static unsigned long get_dma_direct_offset(struct device *dev)
 {
-	if (dev)
+	if (likely(dev))
 		return (unsigned long)dev->archdata.dma_data;
 
 	return PCI_DRAM_OFFSET; /* FIXME Not sure if is correct */
+9-3
arch/microblaze/kernel/head.S
···
 
 	.text
 ENTRY(_start)
+#if CONFIG_KERNEL_BASE_ADDR == 0
+	brai	TOPHYS(real_start)
+	.org	0x100
+real_start:
+#endif
+
 	mfs	r1, rmsr
 	andi	r1, r1, ~2
 	mts	rmsr, r1
···
 	tophys(r4,r4)			/* convert to phys address */
 	ori	r3, r0, COMMAND_LINE_SIZE - 1 /* number of loops */
 _copy_command_line:
-	lbu	r2, r5, r6 /* r7=r5+r6 - r5 contain pointer to command line */
-	sb	r2, r4, r6 /* addr[r4+r6]= r7*/
+	lbu	r2, r5, r6 /* r2=r5+r6 - r5 contain pointer to command line */
+	sb	r2, r4, r6 /* addr[r4+r6]= r2*/
 	addik	r6, r6, 1 /* increment counting */
 	bgtid	r3, _copy_command_line /* loop for all entries */
 	addik	r3, r3, -1 /* descrement loop */
···
 	 * virtual to physical.
 	 */
 	nop
-	addik	r3, r0, 63		/* Invalidate all TLB entries */
+	addik	r3, r0, MICROBLAZE_TLB_SIZE -1	/* Invalidate all TLB entries */
 _invalidate:
 	mts	rtlbx, r3
 	mts	rtlbhi, r0 /* flush: ensure V is clear */
+48-64
arch/microblaze/kernel/hw_exception_handler.S
···
 	mfs	r5, rmsr;
 	nop
 	swi	r5, r1, 0;
-	mfs	r3, resr
+	mfs	r4, resr
 	nop
-	mfs	r4, rear;
+	mfs	r3, rear;
 	nop
 
 #ifndef CONFIG_MMU
-	andi	r5, r3, 0x1000;		/* Check ESR[DS] */
+	andi	r5, r4, 0x1000;		/* Check ESR[DS] */
 	beqi	r5, not_in_delay_slot;	/* Branch if ESR[DS] not set */
 	mfs	r17, rbtr;	/* ESR[DS] set - return address in BTR */
 	nop
···
 	swi	r17, r1, PT_R17
 #endif
 
-	andi	r5, r3, 0x1F;		/* Extract ESR[EXC] */
+	andi	r5, r4, 0x1F;		/* Extract ESR[EXC] */
 
 #ifdef CONFIG_MMU
 	/* Calculate exception vector offset = r5 << 2 */
 	addk	r6, r5, r5; /* << 1 */
 	addk	r6, r6, r6; /* << 2 */
 
+#ifdef DEBUG
 /* counting which exception happen */
 	lwi	r5, r0, 0x200 + TOPHYS(r0_ram)
 	addi	r5, r5, 1
···
 	lwi	r5, r6, 0x200 + TOPHYS(r0_ram)
 	addi	r5, r5, 1
 	swi	r5, r6, 0x200 + TOPHYS(r0_ram)
+#endif
 /* end */
 	/* Load the HW Exception vector */
 	lwi	r6, r6, TOPHYS(_MB_HW_ExceptionVectorTable)
···
 	swi	r18, r1, PT_R18
 
 	or	r5, r1, r0
-	andi	r6, r3, 0x1F; /* Load ESR[EC] */
+	andi	r6, r4, 0x1F; /* Load ESR[EC] */
 	lwi	r7, r0, PER_CPU(KM) /* MS: saving current kernel mode to regs */
 	swi	r7, r1, PT_MODE
 	mfs	r7, rfsr
···
  */
 handle_unaligned_ex:
 	/* Working registers already saved: R3, R4, R5, R6
-	 * R3 = ESR
-	 * R4 = EAR
+	 * R4 = ESR
+	 * R3 = EAR
 	 */
 #ifdef CONFIG_MMU
-	andi	r6, r3, 0x1000			/* Check ESR[DS] */
+	andi	r6, r4, 0x1000			/* Check ESR[DS] */
 	beqi	r6, _no_delayslot		/* Branch if ESR[DS] not set */
 	mfs	r17, rbtr;	/* ESR[DS] set - return address in BTR */
 	nop
···
 	RESTORE_STATE;
 	bri	unaligned_data_trap
 #endif
-	andi	r6, r3, 0x3E0; /* Mask and extract the register operand */
+	andi	r6, r4, 0x3E0; /* Mask and extract the register operand */
 	srl	r6, r6; /* r6 >> 5 */
 	srl	r6, r6;
 	srl	r6, r6;
···
 	/* Store the register operand in a temporary location */
 	sbi	r6, r0, TOPHYS(ex_reg_op);
 
-	andi	r6, r3, 0x400; /* Extract ESR[S] */
+	andi	r6, r4, 0x400; /* Extract ESR[S] */
 	bnei	r6, ex_sw;
 ex_lw:
-	andi	r6, r3, 0x800; /* Extract ESR[W] */
+	andi	r6, r4, 0x800; /* Extract ESR[W] */
 	beqi	r6, ex_lhw;
-	lbui	r5, r4, 0; /* Exception address in r4 */
+	lbui	r5, r3, 0; /* Exception address in r3 */
 	/* Load a word, byte-by-byte from destination address
 		and save it in tmp space */
 	sbi	r5, r0, TOPHYS(ex_tmp_data_loc_0);
-	lbui	r5, r4, 1;
+	lbui	r5, r3, 1;
 	sbi	r5, r0, TOPHYS(ex_tmp_data_loc_1);
-	lbui	r5, r4, 2;
+	lbui	r5, r3, 2;
 	sbi	r5, r0, TOPHYS(ex_tmp_data_loc_2);
-	lbui	r5, r4, 3;
+	lbui	r5, r3, 3;
 	sbi	r5, r0, TOPHYS(ex_tmp_data_loc_3);
-	/* Get the destination register value into r3 */
-	lwi	r3, r0, TOPHYS(ex_tmp_data_loc_0);
+	/* Get the destination register value into r4 */
+	lwi	r4, r0, TOPHYS(ex_tmp_data_loc_0);
 	bri	ex_lw_tail;
 ex_lhw:
-	lbui	r5, r4, 0; /* Exception address in r4 */
+	lbui	r5, r3, 0; /* Exception address in r3 */
 	/* Load a half-word, byte-by-byte from destination
 		address and save it in tmp space */
 	sbi	r5, r0, TOPHYS(ex_tmp_data_loc_0);
-	lbui	r5, r4, 1;
+	lbui	r5, r3, 1;
 	sbi	r5, r0, TOPHYS(ex_tmp_data_loc_1);
-	/* Get the destination register value into r3 */
-	lhui	r3, r0, TOPHYS(ex_tmp_data_loc_0);
+	/* Get the destination register value into r4 */
+	lhui	r4, r0, TOPHYS(ex_tmp_data_loc_0);
 ex_lw_tail:
 	/* Get the destination register number into r5 */
 	lbui	r5, r0, TOPHYS(ex_reg_op);
···
 	andi	r6, r6, 0x800; /* Extract ESR[W] */
 	beqi	r6, ex_shw;
 	/* Get the word - delay slot */
-	swi	r3, r0, TOPHYS(ex_tmp_data_loc_0);
+	swi	r4, r0, TOPHYS(ex_tmp_data_loc_0);
 	/* Store the word, byte-by-byte into destination address */
-	lbui	r3, r0, TOPHYS(ex_tmp_data_loc_0);
-	sbi	r3, r4, 0;
-	lbui	r3, r0, TOPHYS(ex_tmp_data_loc_1);
-	sbi	r3, r4, 1;
-	lbui	r3, r0, TOPHYS(ex_tmp_data_loc_2);
-	sbi	r3, r4, 2;
-	lbui	r3, r0, TOPHYS(ex_tmp_data_loc_3);
-	sbi	r3, r4, 3;
+	lbui	r4, r0, TOPHYS(ex_tmp_data_loc_0);
+	sbi	r4, r3, 0;
+	lbui	r4, r0, TOPHYS(ex_tmp_data_loc_1);
+	sbi	r4, r3, 1;
+	lbui	r4, r0, TOPHYS(ex_tmp_data_loc_2);
+	sbi	r4, r3, 2;
+	lbui	r4, r0, TOPHYS(ex_tmp_data_loc_3);
+	sbi	r4, r3, 3;
 	bri	ex_handler_done;
 
 ex_shw:
 	/* Store the lower half-word, byte-by-byte into destination address */
-	swi	r3, r0, TOPHYS(ex_tmp_data_loc_0);
-	lbui	r3, r0, TOPHYS(ex_tmp_data_loc_2);
-	sbi	r3, r4, 0;
-	lbui	r3, r0, TOPHYS(ex_tmp_data_loc_3);
-	sbi	r3, r4, 1;
+	swi	r4, r0, TOPHYS(ex_tmp_data_loc_0);
+	lbui	r4, r0, TOPHYS(ex_tmp_data_loc_2);
+	sbi	r4, r3, 0;
+	lbui	r4, r0, TOPHYS(ex_tmp_data_loc_3);
+	sbi	r4, r3, 1;
 ex_sw_end: /* Exception handling of store word, ends. */
 
 ex_handler_done:
···
 	 */
 	mfs	r11, rpid
 	nop
-	bri	4
-	mfs	r3, rear /* Get faulting address */
-	nop
 	/* If we are faulting a kernel address, we have to use the
 	 * kernel page tables.
 	 */
-	ori	r4, r0, CONFIG_KERNEL_START
-	cmpu	r4, r3, r4
-	bgti	r4, ex3
+	ori	r5, r0, CONFIG_KERNEL_START
+	cmpu	r5, r3, r5
+	bgti	r5, ex3
 	/* First, check if it was a zone fault (which means a user
 	 * tried to access a kernel or read-protected page - always
 	 * a SEGV). All other faults here must be stores, so no
 	 * need to check ESR_S as well.
*/578578- mfs r4, resr579579- nop580573 andi r4, r4, 0x800 /* ESR_Z - zone protection */581574 bnei r4, ex2582575···586589 * tried to access a kernel or read-protected page - always587590 * a SEGV). All other faults here must be stores, so no588591 * need to check ESR_S as well. */589589- mfs r4, resr590590- nop591592 andi r4, r4, 0x800 /* ESR_Z */592593 bnei r4, ex2593594 /* get current task address */···660665 * R3 = ESR661666 */662667663663- mfs r3, rear /* Get faulting address */664664- nop665668 RESTORE_STATE;666669 bri page_fault_instr_trap667670···670677 */671678 handle_data_tlb_miss_exception:672679 /* Working registers already saved: R3, R4, R5, R6673673- * R3 = ESR680680+ * R3 = EAR, R4 = ESR674681 */675682 mfs r11, rpid676676- nop677677- bri 4678678- mfs r3, rear /* Get faulting address */679683 nop680684681685 /* If we are faulting a kernel address, we have to use the682686 * kernel page tables. */683683- ori r4, r0, CONFIG_KERNEL_START684684- cmpu r4, r3, r4687687+ ori r6, r0, CONFIG_KERNEL_START688688+ cmpu r4, r3, r6685689 bgti r4, ex5686690 ori r4, r0, swapper_pg_dir687691 mts rpid, r0 /* TLB will have 0 TID */···721731 * Many of these bits are software only. Bits we don't set722732 * here we (properly should) assume have the appropriate value.723733 */734734+ brid finish_tlb_load724735 andni r4, r4, 0x0ce2 /* Make sure 20, 21 are zero */725725-726726- bri finish_tlb_load727736 ex7:728737 /* The bailout. Restore registers to pre-exception conditions729738 * and call the heavyweights to help us out.···742753 * R3 = ESR743754 */744755 mfs r11, rpid745745- nop746746- bri 4747747- mfs r3, rear /* Get faulting address */748756 nop749757750758 /* If we are faulting a kernel address, we have to use the···778792 lwi r4, r5, 0 /* Get Linux PTE */779793780794 andi r6, r4, _PAGE_PRESENT781781- beqi r6, ex7795795+ beqi r6, ex10782796783797 ori r4, r4, _PAGE_ACCESSED784798 swi r4, r5, 0···791805 * Many of these bits are software only. 
Bits we don't set792806 * here we (properly should) assume have the appropriate value.793807 */808808+ brid finish_tlb_load794809 andni r4, r4, 0x0ce2 /* Make sure 20, 21 are zero */795795-796796- bri finish_tlb_load797810 ex10:798811 /* The bailout. Restore registers to pre-exception conditions799812 * and call the heavyweights to help us out.···822837 andi r5, r5, (MICROBLAZE_TLB_SIZE-1)823838 ori r6, r0, 1824839 cmp r31, r5, r6825825- blti r31, sem840840+ blti r31, ex12826841 addik r5, r6, 1827827- sem:842842+ ex12:828843 /* MS: save back current TLB index */829844 swi r5, r0, TOPHYS(tlb_index)830845···844859 nop845860846861 /* Done...restore registers and get out of here. */847847- ex12:848862 mts rpid, r11849863 nop850864 bri 4
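The unaligned-access handler above emulates a misaligned load/store by staging the bytes through an aligned temporary (the `ex_tmp_data_loc_*` slots), then doing one aligned word access on that temporary. A minimal C sketch of that staging technique, with an illustrative function name and buffer (not kernel API):

```c
#include <stdint.h>
#include <string.h>

/* Read a 32-bit word from a possibly unaligned address, one byte at a
 * time: each byte is copied into an aligned temporary (the role of
 * ex_tmp_data_loc_0..3 in the handler), and the full word is then read
 * back from that aligned location. */
uint32_t load_word_unaligned(const uint8_t *p)
{
	uint8_t tmp[4];		/* aligned temporary */
	uint32_t w;

	tmp[0] = p[0];		/* the lbui/sbi sequence in the handler */
	tmp[1] = p[1];
	tmp[2] = p[2];
	tmp[3] = p[3];
	memcpy(&w, tmp, sizeof(w));	/* aligned word access */
	return w;
}
```

The result matches a direct 4-byte copy from the same address, independent of endianness, since the byte order is preserved.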
+13-2
arch/microblaze/kernel/misc.S
···
  * We avoid flushing the pinned 0, 1 and possibly 2 entries.
  */
 .globl _tlbia;
+.type  _tlbia, @function
 .align 4;
 _tlbia:
-	addik	r12, r0, 63 /* flush all entries (63 - 3) */
+	addik	r12, r0, MICROBLAZE_TLB_SIZE - 1 /* flush all entries (63 - 3) */
 	/* isync */
 _tlbia_1:
 	mts	rtlbx, r12
···
 	/* sync */
 	rtsd	r15, 8
 	nop
+	.size  _tlbia, . - _tlbia

 /*
  * Flush MMU TLB for a particular address (in r5)
  */
 .globl _tlbie;
+.type  _tlbie, @function
 .align 4;
 _tlbie:
 	mts	rtlbsx, r5 /* look up the address in TLB */
···
 	rtsd	r15, 8
 	nop

+	.size  _tlbie, . - _tlbie
+
 /*
  * Allocate TLB entry for early console
  */
 .globl early_console_reg_tlb_alloc;
+.type  early_console_reg_tlb_alloc, @function
 .align 4;
 early_console_reg_tlb_alloc:
 	/*
 	 * Load a TLB entry for the UART, so that microblaze_progress() can use
 	 * the UARTs nice and early. We use a 4k real==virtual mapping.
 	 */
-	ori	r4, r0, 63
+	ori	r4, r0, MICROBLAZE_TLB_SIZE - 1
 	mts	rtlbx, r4 /* TLB slot 2 */

 	or	r4,r5,r0
···
 	nop
 	rtsd	r15, 8
 	nop
+
+	.size  early_console_reg_tlb_alloc, . - early_console_reg_tlb_alloc

 /*
  * Copy a whole page (4096 bytes).
···
 #define DCACHE_LINE_BYTES (4 * 4)

 .globl copy_page;
+.type  copy_page, @function
 .align 4;
 copy_page:
 	ori	r11, r0, (PAGE_SIZE/DCACHE_LINE_BYTES) - 1
···
 	addik	r11, r11, -1
 	rtsd	r15, 8
 	nop
+
+	.size  copy_page, . - copy_page
+6-4
arch/microblaze/kernel/process.c
···
 #include <linux/bitops.h>
 #include <asm/system.h>
 #include <asm/pgalloc.h>
+#include <asm/uaccess.h> /* for USER_DS macros */
 #include <asm/cacheflush.h>

 void show_regs(struct pt_regs *regs)
···

 void default_idle(void)
 {
-	if (!hlt_counter) {
+	if (likely(hlt_counter)) {
+		while (!need_resched())
+			cpu_relax();
+	} else {
 		clear_thread_flag(TIF_POLLING_NRFLAG);
 		smp_mb__after_clear_bit();
 		local_irq_disable();
···
 		cpu_sleep();
 		local_irq_enable();
 		set_thread_flag(TIF_POLLING_NRFLAG);
-	} else
-		while (!need_resched())
-			cpu_relax();
+	}
 }

 void cpu_idle(void)
+15-9
arch/microblaze/kernel/setup.c
···
 }
 #endif /* CONFIG_MTD_UCLINUX_EBSS */

+#if defined(CONFIG_EARLY_PRINTK) && defined(CONFIG_SERIAL_UARTLITE_CONSOLE)
+#define eprintk early_printk
+#else
+#define eprintk printk
+#endif
+
 void __init machine_early_init(const char *cmdline, unsigned int ram,
 		unsigned int fdt, unsigned int msr)
 {
···
 	setup_early_printk(NULL);
 #endif

-	early_printk("Ramdisk addr 0x%08x, ", ram);
+	eprintk("Ramdisk addr 0x%08x, ", ram);
 	if (fdt)
-		early_printk("FDT at 0x%08x\n", fdt);
+		eprintk("FDT at 0x%08x\n", fdt);
 	else
-		early_printk("Compiled-in FDT at 0x%08x\n",
+		eprintk("Compiled-in FDT at 0x%08x\n",
 					(unsigned int)_fdt_start);

 #ifdef CONFIG_MTD_UCLINUX
-	early_printk("Found romfs @ 0x%08x (0x%08x)\n",
+	eprintk("Found romfs @ 0x%08x (0x%08x)\n",
 			romfs_base, romfs_size);
-	early_printk("#### klimit %p ####\n", old_klimit);
+	eprintk("#### klimit %p ####\n", old_klimit);
 	BUG_ON(romfs_size < 0); /* What else can we do? */

-	early_printk("Moved 0x%08x bytes from 0x%08x to 0x%08x\n",
+	eprintk("Moved 0x%08x bytes from 0x%08x to 0x%08x\n",
 			romfs_size, romfs_base, (unsigned)&_ebss);

-	early_printk("New klimit: 0x%08x\n", (unsigned)klimit);
+	eprintk("New klimit: 0x%08x\n", (unsigned)klimit);
 #endif

 #if CONFIG_XILINX_MICROBLAZE0_USE_MSR_INSTR
 	if (msr)
-		early_printk("!!!Your kernel has setup MSR instruction but "
+		eprintk("!!!Your kernel has setup MSR instruction but "
 				"CPU don't have it %d\n", msr);
 #else
 	if (!msr)
-		early_printk("!!!Your kernel not setup MSR instruction but "
+		eprintk("!!!Your kernel not setup MSR instruction but "
 				"CPU have it %d\n", msr);
 #endif
+2-4
arch/microblaze/kernel/traps.c
···
 	__enable_hw_exceptions();
 }

-static int kstack_depth_to_print = 24;
+static unsigned long kstack_depth_to_print = 24;

 static int __init kstack_setup(char *s)
 {
-	kstack_depth_to_print = strict_strtoul(s, 0, NULL);
-
-	return 1;
+	return !strict_strtoul(s, 0, &kstack_depth_to_print);
 }
 __setup("kstack=", kstack_setup);
···
 	const uint32_t *i_src;
 	uint32_t *i_dst;

-	if (c >= 4) {
+	if (likely(c >= 4)) {
 		unsigned  value, buf_hold;

 		/* Align the dstination to a word boundry. */
+8-7
arch/microblaze/lib/memset.c
···
 #ifdef __HAVE_ARCH_MEMSET
 void *memset(void *v_src, int c, __kernel_size_t n)
 {
-
 	char *src = v_src;
 #ifdef CONFIG_OPT_LIB_FUNCTION
 	uint32_t *i_src;
-	uint32_t w32;
+	uint32_t w32 = 0;
 #endif
 	/* Truncate c to 8 bits */
 	c = (c & 0xFF);

 #ifdef CONFIG_OPT_LIB_FUNCTION
-	/* Make a repeating word out of it */
-	w32 = c;
-	w32 |= w32 << 8;
-	w32 |= w32 << 16;
+	if (unlikely(c)) {
+		/* Make a repeating word out of it */
+		w32 = c;
+		w32 |= w32 << 8;
+		w32 |= w32 << 16;
+	}

-	if (n >= 4) {
+	if (likely(n >= 4)) {
 		/* Align the destination to a word boundary */
 		/* This is done in an endian independant manner */
 		switch ((unsigned) src & 3) {
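The optimized memset above stores four bytes per iteration by expanding the fill byte into a repeating word; the patch skips that expansion when `c` is zero, since `w32` is now pre-initialized to 0. A small standalone C sketch of the repeating-word trick (`repeat_byte` is a hypothetical helper name, not kernel API):

```c
#include <stdint.h>

/* Build a word whose four bytes all equal c, as the optimized memset
 * does, so a fill loop can store 4 bytes per iteration. When c == 0
 * the expansion is skipped and the zero-initialized word is used
 * directly, mirroring the unlikely(c) fast path in the patch. */
static uint32_t repeat_byte(int c)
{
	uint32_t w32 = 0;

	c &= 0xFF;		/* truncate c to 8 bits */
	if (c) {
		w32 = (uint32_t)c;
		w32 |= w32 << 8;	/* 0x000000cc -> 0x0000cccc */
		w32 |= w32 << 16;	/* 0x0000cccc -> 0xcccccccc */
	}
	return w32;
}
```

For example, `repeat_byte(0xAB)` yields `0xABABABAB`, and a value wider than a byte is first truncated.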
-48
arch/microblaze/lib/uaccess.c
···
-/*
- * Copyright (C) 2006 Atmark Techno, Inc.
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#include <linux/string.h>
-#include <asm/uaccess.h>
-
-#include <asm/bug.h>
-
-long strnlen_user(const char __user *src, long count)
-{
-	return strlen(src) + 1;
-}
-
-#define __do_strncpy_from_user(dst, src, count, res)			\
-	do {								\
-		char *tmp;						\
-		strncpy(dst, src, count);				\
-		for (tmp = dst; *tmp && count > 0; tmp++, count--)	\
-			;						\
-		res = (tmp - dst);					\
-	} while (0)
-
-long __strncpy_from_user(char *dst, const char __user *src, long count)
-{
-	long res;
-	__do_strncpy_from_user(dst, src, count, res);
-	return res;
-}
-
-long strncpy_from_user(char *dst, const char __user *src, long count)
-{
-	long res = -EFAULT;
-	if (access_ok(VERIFY_READ, src, 1))
-		__do_strncpy_from_user(dst, src, count, res);
-	return res;
-}
-
-unsigned long __copy_tofrom_user(void __user *to,
-		const void __user *from, unsigned long size)
-{
-	memcpy(to, from, size);
-	return 0;
-}
···
 	regs->esr = error_code;

 	/* On a kernel SLB miss we can only check for a valid exception entry */
-	if (kernel_mode(regs) && (address >= TASK_SIZE)) {
+	if (unlikely(kernel_mode(regs) && (address >= TASK_SIZE))) {
 		printk(KERN_WARNING "kernel task_size exceed");
 		_exception(SIGSEGV, regs, code, address);
 	}
···
 	}
 #endif /* CONFIG_KGDB */

-	if (in_atomic() || !mm) {
+	if (unlikely(in_atomic() || !mm)) {
 		if (kernel_mode(regs))
 			goto bad_area_nosemaphore;
···
 	 * source. If this is invalid we can skip the address space check,
 	 * thus avoiding the deadlock.
 	 */
-	if (!down_read_trylock(&mm->mmap_sem)) {
+	if (unlikely(!down_read_trylock(&mm->mmap_sem))) {
 		if (kernel_mode(regs) && !search_exception_tables(regs->pc))
 			goto bad_area_nosemaphore;
···
 	}

 	vma = find_vma(mm, address);
-	if (!vma)
+	if (unlikely(!vma))
 		goto bad_area;

 	if (vma->vm_start <= address)
 		goto good_area;

-	if (!(vma->vm_flags & VM_GROWSDOWN))
+	if (unlikely(!(vma->vm_flags & VM_GROWSDOWN)))
 		goto bad_area;

-	if (!is_write)
+	if (unlikely(!is_write))
 		goto bad_area;

 	/*
···
 	 * before setting the user r1. Thus we allow the stack to
 	 * expand to 1MB without further checks.
 	 */
-	if (address + 0x100000 < vma->vm_end) {
+	if (unlikely(address + 0x100000 < vma->vm_end)) {

 		/* get user regs even if this fault is in kernel mode */
 		struct pt_regs *uregs = current->thread.regs;
···
 	code = SEGV_ACCERR;

 	/* a write */
-	if (is_write) {
-		if (!(vma->vm_flags & VM_WRITE))
+	if (unlikely(is_write)) {
+		if (unlikely(!(vma->vm_flags & VM_WRITE)))
 			goto bad_area;
 	/* a read */
 	} else {
 		/* protection fault */
-		if (error_code & 0x08000000)
+		if (unlikely(error_code & 0x08000000))
 			goto bad_area;
-		if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
+		if (unlikely(!(vma->vm_flags & (VM_READ | VM_EXEC))))
 			goto bad_area;
 	}
···
 			goto do_sigbus;
 		BUG();
 	}
-	if (fault & VM_FAULT_MAJOR)
+	if (unlikely(fault & VM_FAULT_MAJOR))
 		current->maj_flt++;
 	else
 		current->min_flt++;
-9
arch/microblaze/mm/init.c
···
 	for (addr = begin; addr < end; addr += PAGE_SIZE) {
 		ClearPageReserved(virt_to_page(addr));
 		init_page_count(virt_to_page(addr));
-		memset((void *)addr, 0xcc, PAGE_SIZE);
 		free_page(addr);
 		totalram_pages++;
 	}
···
 }

 #ifndef CONFIG_MMU
-/* Check against bounds of physical memory */
-int ___range_ok(unsigned long addr, unsigned long size)
-{
-	return ((addr < memory_start) ||
-		((addr + size) > memory_end));
-}
-EXPORT_SYMBOL(___range_ok);
-
 int page_is_ram(unsigned long pfn)
 {
 	return __range_ok(pfn, 0);
+1-1
arch/microblaze/mm/pgtable.c
···
 		err = 0;
 		set_pte_at(&init_mm, va, pg, pfn_pte(pa >> PAGE_SHIFT,
 				__pgprot(flags)));
-		if (mem_init_done)
+		if (unlikely(mem_init_done))
 			flush_HPTE(0, va, pmd_val(*pd));
 		/* flush_HPTE(0, va, pg); */
 	}
+2
arch/powerpc/include/asm/asm-compat.h
···
 #define PPC_LLARX(t, a, b, eh)	PPC_LDARX(t, a, b, eh)
 #define PPC_STLCX	stringify_in_c(stdcx.)
 #define PPC_CNTLZL	stringify_in_c(cntlzd)
+#define PPC_LR_STKOFF	16

 /* Move to CR, single-entry optimized version. Only available
  * on POWER4 and later.
···
 #define PPC_STLCX	stringify_in_c(stwcx.)
 #define PPC_CNTLZL	stringify_in_c(cntlzw)
 #define PPC_MTOCRF	stringify_in_c(mtcrf)
+#define PPC_LR_STKOFF	4

 #endif
+26
arch/powerpc/kernel/misc.S
···
 _GLOBAL(__restore_cpu_power7)
 	/* place holder */
 	blr
+
+/*
+ * Get a minimal set of registers for our caller's nth caller.
+ * r3 = regs pointer, r5 = n.
+ *
+ * We only get R1 (stack pointer), NIP (next instruction pointer)
+ * and LR (link register). These are all we can get in the
+ * general case without doing complicated stack unwinding, but
+ * fortunately they are enough to do a stack backtrace, which
+ * is all we need them for.
+ */
+_GLOBAL(perf_arch_fetch_caller_regs)
+	mr	r6,r1
+	cmpwi	r5,0
+	mflr	r4
+	ble	2f
+	mtctr	r5
+1:	PPC_LL	r6,0(r6)
+	bdnz	1b
+	PPC_LL	r4,PPC_LR_STKOFF(r6)
+2:	PPC_LL	r7,0(r6)
+	PPC_LL	r7,PPC_LR_STKOFF(r7)
+	PPC_STL	r6,GPR1-STACK_FRAME_OVERHEAD(r3)
+	PPC_STL	r4,_NIP-STACK_FRAME_OVERHEAD(r3)
+	PPC_STL	r7,_LINK-STACK_FRAME_OVERHEAD(r3)
+	blr
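The routine above walks the stack by following the back-chain word at offset 0 of each frame n times, then reading the saved LR at `PPC_LR_STKOFF` from the frame it lands on. A simplified C sketch of that walk; the `frame` struct is a hypothetical stand-in for the real PowerPC stack-frame layout, and the asm's n == 0 case (which uses the live LR register rather than a stack slot) is omitted:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical frame layout: a pointer to the previous frame at
 * offset 0 (what PPC_LL r6,0(r6) loads) and the caller's saved
 * return address (what PPC_LR_STKOFF(r6) loads). */
struct frame {
	struct frame *back_chain;
	uintptr_t     saved_lr;
};

/* Skip n frames along the back-chain, then return the saved LR of
 * the frame we land on -- the asm's mtctr/bdnz loop in C. */
uintptr_t caller_lr(const struct frame *fp, unsigned int n)
{
	while (n--)
		fp = fp->back_chain;
	return fp->saved_lr;
}
```

As the comment in the patch notes, this recovers only the stack pointer and return addresses, which is enough for a backtrace but not for full register state.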
+2
arch/powerpc/platforms/52xx/mpc52xx_lpbfifo.c
···
 	if (rc)
 		goto err_bcom_rx_irq;

+	lpbfifo.dma_irqs_enabled = 1;
+
 	/* Request the Bestcomm transmit (memory --> fifo) task and IRQ */
 	lpbfifo.bcom_tx_task =
 		bcom_gen_bd_tx_init(2, res.start + LPBFIFO_REG_FIFO_DATA,
+3
arch/sh/kernel/return_address.c
···
  * for more details.
  */
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <asm/dwarf.h>

 #ifdef CONFIG_DWARF_UNWINDER
···
 }

 #endif
+
+EXPORT_SYMBOL_GPL(return_address);
+28
arch/sh/mm/tlb-pteaex.c
···
 	__raw_writel(asid, MMU_ITLB_ADDRESS_ARRAY2 | MMU_PAGE_ASSOC_BIT);
 	back_to_cached();
 }
+
+void local_flush_tlb_all(void)
+{
+	unsigned long flags, status;
+	int i;
+
+	/*
+	 * Flush all the TLB.
+	 */
+	local_irq_save(flags);
+	jump_to_uncached();
+
+	status = __raw_readl(MMUCR);
+	status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT);
+
+	if (status == 0)
+		status = MMUCR_URB_NENTRIES;
+
+	for (i = 0; i < status; i++)
+		__raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8));
+
+	for (i = 0; i < 4; i++)
+		__raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8));
+
+	back_to_cached();
+	ctrl_barrier();
+	local_irq_restore(flags);
+}
+19
arch/sh/mm/tlb-sh3.c
···
 	for (i = 0; i < ways; i++)
 		__raw_writel(data, addr + (i << 8));
 }
+
+void local_flush_tlb_all(void)
+{
+	unsigned long flags, status;
+
+	/*
+	 * Flush all the TLB.
+	 *
+	 * Write to the MMU control register's bit:
+	 *	TF-bit for SH-3, TI-bit for SH-4.
+	 *	It's same position, bit #2.
+	 */
+	local_irq_save(flags);
+	status = __raw_readl(MMUCR);
+	status |= 0x04;
+	__raw_writel(status, MMUCR);
+	ctrl_barrier();
+	local_irq_restore(flags);
+}
+28
arch/sh/mm/tlb-sh4.c
···
 	__raw_writel(data, addr);
 	back_to_cached();
 }
+
+void local_flush_tlb_all(void)
+{
+	unsigned long flags, status;
+	int i;
+
+	/*
+	 * Flush all the TLB.
+	 */
+	local_irq_save(flags);
+	jump_to_uncached();
+
+	status = __raw_readl(MMUCR);
+	status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT);
+
+	if (status == 0)
+		status = MMUCR_URB_NENTRIES;
+
+	for (i = 0; i < status; i++)
+		__raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8));
+
+	for (i = 0; i < 4; i++)
+		__raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8));
+
+	back_to_cached();
+	ctrl_barrier();
+	local_irq_restore(flags);
+}
-28
arch/sh/mm/tlbflush_32.c
···
 		local_irq_restore(flags);
 	}
 }
-
-void local_flush_tlb_all(void)
-{
-	unsigned long flags, status;
-	int i;
-
-	/*
-	 * Flush all the TLB.
-	 */
-	local_irq_save(flags);
-	jump_to_uncached();
-
-	status = __raw_readl(MMUCR);
-	status = ((status & MMUCR_URB) >> MMUCR_URB_SHIFT);
-
-	if (status == 0)
-		status = MMUCR_URB_NENTRIES;
-
-	for (i = 0; i < status; i++)
-		__raw_writel(0x0, MMU_UTLB_ADDRESS_ARRAY | (i << 8));
-
-	for (i = 0; i < 4; i++)
-		__raw_writel(0x0, MMU_ITLB_ADDRESS_ARRAY | (i << 8));
-
-	back_to_cached();
-	ctrl_barrier();
-	local_irq_restore(flags);
-}
+16-12
arch/sparc/configs/sparc64_defconfig
···
 #
 # Automatically generated make config: don't edit
-# Linux kernel version: 2.6.33
-# Wed Mar  3 02:54:29 2010
+# Linux kernel version: 2.6.34-rc3
+# Sat Apr  3 15:49:56 2010
 #
 CONFIG_64BIT=y
 CONFIG_SPARC=y
···
 CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
 CONFIG_GENERIC_HARDIRQS_NO__DO_IRQ=y
 CONFIG_MMU=y
+CONFIG_NEED_DMA_MAP_STATE=y
 CONFIG_ARCH_NO_VIRT_TO_BUS=y
 CONFIG_OF=y
 CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
···
 # CONFIG_ENCLOSURE_SERVICES is not set
 # CONFIG_HP_ILO is not set
 # CONFIG_ISL29003 is not set
+# CONFIG_SENSORS_TSL2550 is not set
 # CONFIG_DS1682 is not set
 # CONFIG_C2PORT is not set
···
 #
 # SCSI device support
 #
+CONFIG_SCSI_MOD=y
 CONFIG_RAID_ATTRS=m
 CONFIG_SCSI=y
 CONFIG_SCSI_DMA=y
···
 CONFIG_SERIAL_CORE=y
 CONFIG_SERIAL_CORE_CONSOLE=y
 # CONFIG_SERIAL_JSM is not set
+# CONFIG_SERIAL_TIMBERDALE is not set
 # CONFIG_SERIAL_GRLIB_GAISLER_APBUART is not set
 CONFIG_UNIX98_PTYS=y
 # CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
···
 #
 # CONFIG_I2C_OCORES is not set
 # CONFIG_I2C_SIMTEC is not set
+# CONFIG_I2C_XILINX is not set

 #
 # External I2C/SMBus adapter drivers
···
 #
 # CONFIG_I2C_PCA_PLATFORM is not set
 # CONFIG_I2C_STUB is not set
-
-#
-# Miscellaneous I2C Chip support
-#
-# CONFIG_SENSORS_TSL2550 is not set
 # CONFIG_I2C_DEBUG_CORE is not set
 # CONFIG_I2C_DEBUG_ALGO is not set
 # CONFIG_I2C_DEBUG_BUS is not set
-# CONFIG_I2C_DEBUG_CHIP is not set
 # CONFIG_SPI is not set

 #
···
 # CONFIG_SENSORS_ADM1029 is not set
 # CONFIG_SENSORS_ADM1031 is not set
 # CONFIG_SENSORS_ADM9240 is not set
+# CONFIG_SENSORS_ADT7411 is not set
 # CONFIG_SENSORS_ADT7462 is not set
 # CONFIG_SENSORS_ADT7470 is not set
-# CONFIG_SENSORS_ADT7473 is not set
 # CONFIG_SENSORS_ADT7475 is not set
+# CONFIG_SENSORS_ASC7621 is not set
 # CONFIG_SENSORS_ATXP1 is not set
 # CONFIG_SENSORS_DS1621 is not set
 # CONFIG_SENSORS_I5K_AMB is not set
···
 # Multifunction device drivers
 #
 # CONFIG_MFD_CORE is not set
+# CONFIG_MFD_88PM860X is not set
 # CONFIG_MFD_SM501 is not set
 # CONFIG_HTC_PASIC3 is not set
 # CONFIG_TWL4030_CORE is not set
 # CONFIG_MFD_TMIO is not set
 # CONFIG_PMIC_DA903X is not set
 # CONFIG_PMIC_ADP5520 is not set
+# CONFIG_MFD_MAX8925 is not set
 # CONFIG_MFD_WM8400 is not set
 # CONFIG_MFD_WM831X is not set
 # CONFIG_MFD_WM8350_I2C is not set
+# CONFIG_MFD_WM8994 is not set
 # CONFIG_MFD_PCF50633 is not set
 # CONFIG_AB3100_CORE is not set
-# CONFIG_MFD_88PM8607 is not set
+# CONFIG_LPC_SCH is not set
 # CONFIG_REGULATOR is not set
 # CONFIG_MEDIA_SUPPORT is not set
···
 # CONFIG_FB_LEO is not set
 CONFIG_FB_XVR500=y
 CONFIG_FB_XVR2500=y
+CONFIG_FB_XVR1000=y
 # CONFIG_FB_S1D13XXX is not set
 # CONFIG_FB_NVIDIA is not set
 # CONFIG_FB_RIVA is not set
···
 # CONFIG_USB_RIO500 is not set
 # CONFIG_USB_LEGOTOWER is not set
 # CONFIG_USB_LCD is not set
-# CONFIG_USB_BERRY_CHARGE is not set
 # CONFIG_USB_LED is not set
 # CONFIG_USB_CYPRESS_CY7C63 is not set
 # CONFIG_USB_CYTHERM is not set
···
 # CONFIG_USB_IOWARRIOR is not set
 # CONFIG_USB_TEST is not set
 # CONFIG_USB_ISIGHTFW is not set
-# CONFIG_USB_VST is not set
 # CONFIG_USB_GADGET is not set

 #
···
 # CONFIG_BEFS_FS is not set
 # CONFIG_BFS_FS is not set
 # CONFIG_EFS_FS is not set
+# CONFIG_LOGFS is not set
 # CONFIG_CRAMFS is not set
 # CONFIG_SQUASHFS is not set
 # CONFIG_VXFS_FS is not set
···
 # CONFIG_NFS_FS is not set
 # CONFIG_NFSD is not set
 # CONFIG_SMB_FS is not set
+# CONFIG_CEPH_FS is not set
 # CONFIG_CIFS is not set
 # CONFIG_NCP_FS is not set
 # CONFIG_CODA_FS is not set
+2-2
arch/sparc/include/asm/stat.h
···
 	ino_t		st_ino;
 	mode_t		st_mode;
 	short		st_nlink;
-	uid16_t		st_uid;
-	gid16_t		st_gid;
+	unsigned short	st_uid;
+	unsigned short	st_gid;
 	unsigned short	st_rdev;
 	off_t		st_size;
 	time_t		st_atime;
+75
arch/sparc/kernel/helpers.S
···
 	 nop
 	.size		stack_trace_flush,.-stack_trace_flush

+#ifdef CONFIG_PERF_EVENTS
+	.globl		perf_arch_fetch_caller_regs
+	.type		perf_arch_fetch_caller_regs,#function
+perf_arch_fetch_caller_regs:
+	/* We always read the %pstate into %o5 since we will use
+	 * that to construct a fake %tstate to store into the regs.
+	 */
+	rdpr		%pstate, %o5
+	brz,pn		%o2, 50f
+	 mov		%o2, %g7
+
+	/* Turn off interrupts while we walk around the register
+	 * window by hand.
+	 */
+	wrpr		%o5, PSTATE_IE, %pstate
+
+	/* The %canrestore tells us how many register windows are
+	 * still live in the chip above us, past that we have to
+	 * walk the frame as saved on the stack. We stash away
+	 * the %cwp in %g1 so we can return back to the original
+	 * register window.
+	 */
+	rdpr		%cwp, %g1
+	rdpr		%canrestore, %g2
+	sub		%g1, 1, %g3
+
+	/* We have the skip count in %g7, if it hits zero then
+	 * %fp/%i7 are the registers we need. Otherwise if our
+	 * %canrestore count maintained in %g2 hits zero we have
+	 * to start traversing the stack.
+	 */
+10:	brz,pn		%g2, 4f
+	 sub		%g2, 1, %g2
+	wrpr		%g3, %cwp
+	subcc		%g7, 1, %g7
+	bne,pt		%xcc, 10b
+	 sub		%g3, 1, %g3
+
+	/* We found the values we need in the cpu's register
+	 * windows.
+	 */
+	mov		%fp, %g3
+	ba,pt		%xcc, 3f
+	 mov		%i7, %g2
+
+50:	mov		%fp, %g3
+	ba,pt		%xcc, 2f
+	 mov		%i7, %g2
+
+	/* We hit the end of the valid register windows in the
+	 * cpu, start traversing the stack frame.
+	 */
+4:	mov		%fp, %g3
+
+20:	ldx		[%g3 + STACK_BIAS + RW_V9_I7], %g2
+	subcc		%g7, 1, %g7
+	bne,pn		%xcc, 20b
+	 ldx		[%g3 + STACK_BIAS + RW_V9_I6], %g3
+
+	/* Restore the current register window position and
+	 * re-enable interrupts.
+	 */
+3:	wrpr		%g1, %cwp
+	wrpr		%o5, %pstate
+
+2:	stx		%g3, [%o0 + PT_V9_FP]
+	sllx		%o5, 8, %o5
+	stx		%o5, [%o0 + PT_V9_TSTATE]
+	stx		%g2, [%o0 + PT_V9_TPC]
+	add		%g2, 4, %g2
+	retl
+	 stx		%g2, [%o0 + PT_V9_TNPC]
+	.size		perf_arch_fetch_caller_regs,.-perf_arch_fetch_caller_regs
+#endif /* CONFIG_PERF_EVENTS */
+
 #ifdef CONFIG_SMP
 	.globl		hard_smp_processor_id
 	.type		hard_smp_processor_id,#function
···
 	unsigned long ret;

 	/* should return -EINVAL to userspace */
-	if (set_cpus_allowed(current, cpumask_of_cpu(cpu)))
+	if (set_cpus_allowed_ptr(current, cpumask_of(cpu)))
 		return 0;

 	ret = func(arg);

-	set_cpus_allowed(current, old_affinity);
+	set_cpus_allowed_ptr(current, &old_affinity);

 	return ret;
 }
···
 	/* Mark the inuse vectors */
 	for_each_irq_desc(irq, desc) {
 		cfg = desc->chip_data;
+
+		/*
+		 * If it is a legacy IRQ handled by the legacy PIC, this cpu
+		 * will be part of the irq_cfg's domain.
+		 */
+		if (irq < legacy_pic->nr_legacy_irqs && !IO_APIC_IRQ(irq))
+			cpumask_set_cpu(cpu, cfg->domain);
+
 		if (!cpumask_test_cpu(cpu, cfg->domain))
 			continue;
 		vector = cfg->vector;
+44-10
arch/x86/kernel/cpu/perf_event.c
···
 #include <asm/apic.h>
 #include <asm/stacktrace.h>
 #include <asm/nmi.h>
+#include <asm/compat.h>

 static u64 perf_event_mask __read_mostly;

···
 				 struct perf_event *event);
 	struct event_constraint *event_constraints;

-	void		(*cpu_prepare)(int cpu);
+	int		(*cpu_prepare)(int cpu);
 	void		(*cpu_starting)(int cpu);
 	void		(*cpu_dying)(int cpu);
 	void		(*cpu_dead)(int cpu);
···
 x86_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
 {
 	unsigned int cpu = (long)hcpu;
+	int ret = NOTIFY_OK;

 	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_UP_PREPARE:
 		if (x86_pmu.cpu_prepare)
-			x86_pmu.cpu_prepare(cpu);
+			ret = x86_pmu.cpu_prepare(cpu);
 		break;

 	case CPU_STARTING:
···
 			x86_pmu.cpu_dying(cpu);
 		break;

+	case CPU_UP_CANCELED:
 	case CPU_DEAD:
 		if (x86_pmu.cpu_dead)
 			x86_pmu.cpu_dead(cpu);
···
 		break;
 	}

-	return NOTIFY_OK;
+	return ret;
 }

 static void __init pmu_check_apic(void)
···
 	return len;
 }

-static int copy_stack_frame(const void __user *fp, struct stack_frame *frame)
+#ifdef CONFIG_COMPAT
+static inline int
+perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
 {
-	unsigned long bytes;
+	/* 32-bit process in 64-bit kernel. */
+	struct stack_frame_ia32 frame;
+	const void __user *fp;

-	bytes = copy_from_user_nmi(frame, fp, sizeof(*frame));
+	if (!test_thread_flag(TIF_IA32))
+		return 0;

-	return bytes == sizeof(*frame);
+	fp = compat_ptr(regs->bp);
+	while (entry->nr < PERF_MAX_STACK_DEPTH) {
+		unsigned long bytes;
+		frame.next_frame     = 0;
+		frame.return_address = 0;
+
+		bytes = copy_from_user_nmi(&frame, fp, sizeof(frame));
+		if (bytes != sizeof(frame))
+			break;
+
+		if (fp < compat_ptr(regs->sp))
+			break;
+
+		callchain_store(entry, frame.return_address);
+		fp = compat_ptr(frame.next_frame);
+	}
+	return 1;
 }
+#else
+static inline int
+perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
+{
+	return 0;
+}
+#endif

 static void
 perf_callchain_user(struct pt_regs *regs, struct perf_callchain_entry *entry)
···
 	callchain_store(entry, PERF_CONTEXT_USER);
 	callchain_store(entry, regs->ip);

+	if (perf_callchain_user32(regs, entry))
+		return;
+
 	while (entry->nr < PERF_MAX_STACK_DEPTH) {
+		unsigned long bytes;
 		frame.next_frame     = NULL;
 		frame.return_address = 0;

-		if (!copy_stack_frame(fp, &frame))
+		bytes = copy_from_user_nmi(&frame, fp, sizeof(frame));
+		if (bytes != sizeof(frame))
 			break;

 		if ((unsigned long)fp < regs->sp)
···
 	return entry;
 }

-#ifdef CONFIG_EVENT_TRACING
 void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int skip)
 {
 	regs->ip = ip;
···
 	regs->cs = __KERNEL_CS;
 	local_save_flags(regs->flags);
 }
-#endif
+47-33
arch/x86/kernel/cpu/perf_event_amd.c
···137137 return (hwc->config & 0xe0) == 0xe0;138138}139139140140+static inline int amd_has_nb(struct cpu_hw_events *cpuc)141141+{142142+ struct amd_nb *nb = cpuc->amd_nb;143143+144144+ return nb && nb->nb_id != -1;145145+}146146+140147static void amd_put_event_constraints(struct cpu_hw_events *cpuc,141148 struct perf_event *event)142149{···154147 /*155148 * only care about NB events156149 */157157- if (!(nb && amd_is_nb_event(hwc)))150150+ if (!(amd_has_nb(cpuc) && amd_is_nb_event(hwc)))158151 return;159152160153 /*···221214 /*222215 * if not NB event or no NB, then no constraints223216 */224224- if (!(nb && amd_is_nb_event(hwc)))217217+ if (!(amd_has_nb(cpuc) && amd_is_nb_event(hwc)))225218 return &unconstrained;226219227220 /*···300293 return nb;301294}302295303303-static void amd_pmu_cpu_online(int cpu)296296+static int amd_pmu_cpu_prepare(int cpu)304297{305305- struct cpu_hw_events *cpu1, *cpu2;306306- struct amd_nb *nb = NULL;298298+ struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);299299+300300+ WARN_ON_ONCE(cpuc->amd_nb);301301+302302+ if (boot_cpu_data.x86_max_cores < 2)303303+ return NOTIFY_OK;304304+305305+ cpuc->amd_nb = amd_alloc_nb(cpu, -1);306306+ if (!cpuc->amd_nb)307307+ return NOTIFY_BAD;308308+309309+ return NOTIFY_OK;310310+}311311+312312+static void amd_pmu_cpu_starting(int cpu)313313+{314314+ struct cpu_hw_events *cpuc = &per_cpu(cpu_hw_events, cpu);315315+ struct amd_nb *nb;307316 int i, nb_id;308317309318 if (boot_cpu_data.x86_max_cores < 2)310319 return;311320312312- /*313313- * function may be called too early in the314314- * boot process, in which case nb_id is bogus315315- */316321 nb_id = amd_get_nb_id(cpu);317317- if (nb_id == BAD_APICID)318318- return;319319-320320- cpu1 = &per_cpu(cpu_hw_events, cpu);321321- cpu1->amd_nb = NULL;322322+ WARN_ON_ONCE(nb_id == BAD_APICID);322323323324 raw_spin_lock(&amd_nb_lock);324325325326 for_each_online_cpu(i) {326326- cpu2 = &per_cpu(cpu_hw_events, i);327327- nb = cpu2->amd_nb;328328- if 
(!nb)327327+ nb = per_cpu(cpu_hw_events, i).amd_nb;328328+ if (WARN_ON_ONCE(!nb))329329 continue;330330- if (nb->nb_id == nb_id)331331- goto found;330330+331331+ if (nb->nb_id == nb_id) {332332+ kfree(cpuc->amd_nb);333333+ cpuc->amd_nb = nb;334334+ break;335335+ }332336 }333337334334- nb = amd_alloc_nb(cpu, nb_id);335335- if (!nb) {336336- pr_err("perf_events: failed NB allocation for CPU%d\n", cpu);337337- raw_spin_unlock(&amd_nb_lock);338338- return;339339- }340340-found:341341- nb->refcnt++;342342- cpu1->amd_nb = nb;338338+ cpuc->amd_nb->nb_id = nb_id;339339+ cpuc->amd_nb->refcnt++;343340344341 raw_spin_unlock(&amd_nb_lock);345342}346343347347-static void amd_pmu_cpu_offline(int cpu)344344+static void amd_pmu_cpu_dead(int cpu)348345{349346 struct cpu_hw_events *cpuhw;350347···360349 raw_spin_lock(&amd_nb_lock);361350362351 if (cpuhw->amd_nb) {363363- if (--cpuhw->amd_nb->refcnt == 0)364364- kfree(cpuhw->amd_nb);352352+ struct amd_nb *nb = cpuhw->amd_nb;353353+354354+ if (nb->nb_id == -1 || --nb->refcnt == 0)355355+ kfree(nb);365356366357 cpuhw->amd_nb = NULL;367358 }···392379 .get_event_constraints = amd_get_event_constraints,393380 .put_event_constraints = amd_put_event_constraints,394381395395- .cpu_prepare = amd_pmu_cpu_online,396396- .cpu_dead = amd_pmu_cpu_offline,382382+ .cpu_prepare = amd_pmu_cpu_prepare,383383+ .cpu_starting = amd_pmu_cpu_starting,384384+ .cpu_dead = amd_pmu_cpu_dead,397385};398386399387static __init int amd_pmu_init(void)
+5
arch/x86/kernel/dumpstack.h
···3030 unsigned long return_address;3131};32323333+struct stack_frame_ia32 {3434+ u32 next_frame;3535+ u32 return_address;3636+};3737+3338static inline unsigned long rewind_frame_pointer(int n)3439{3540 struct stack_frame *frame;
+3-1
arch/x86/kernel/head32.c
···7788#include <linux/init.h>99#include <linux/start_kernel.h>1010+#include <linux/mm.h>10111112#include <asm/setup.h>1213#include <asm/sections.h>···4544#ifdef CONFIG_BLK_DEV_INITRD4645 /* Reserve INITRD */4746 if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {4747+ /* Assume only end is not page aligned */4848 u64 ramdisk_image = boot_params.hdr.ramdisk_image;4949 u64 ramdisk_size = boot_params.hdr.ramdisk_size;5050- u64 ramdisk_end = ramdisk_image + ramdisk_size;5050+ u64 ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size);5151 reserve_early(ramdisk_image, ramdisk_end, "RAMDISK");5252 }5353#endif
+2-1
arch/x86/kernel/head64.c
···103103#ifdef CONFIG_BLK_DEV_INITRD104104 /* Reserve INITRD */105105 if (boot_params.hdr.type_of_loader && boot_params.hdr.ramdisk_image) {106106+ /* Assume only end is not page aligned */106107 unsigned long ramdisk_image = boot_params.hdr.ramdisk_image;107108 unsigned long ramdisk_size = boot_params.hdr.ramdisk_size;108108- unsigned long ramdisk_end = ramdisk_image + ramdisk_size;109109+ unsigned long ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size);109110 reserve_early(ramdisk_image, ramdisk_end, "RAMDISK");110111 }111112#endif
+22
arch/x86/kernel/irqinit.c
···141141 x86_init.irqs.intr_init();142142}143143144144+/*145145+ * Set up the vector to irq mappings.146146+ */147147+void setup_vector_irq(int cpu)148148+{149149+#ifndef CONFIG_X86_IO_APIC150150+ int irq;151151+152152+ /*153153+ * On most platforms the legacy PIC delivers interrupts to the154154+ * boot cpu. But there are certain platforms where PIC interrupts are155155+ * delivered to multiple CPUs. If the legacy IRQ is handled by the156156+ * legacy PIC, set up the static legacy vector to irq mapping for157157+ * the new cpu that is coming online:158158+ */159159+ for (irq = 0; irq < legacy_pic->nr_legacy_irqs; irq++)160160+ per_cpu(vector_irq, cpu)[IRQ0_VECTOR + irq] = irq;161161+#endif162162+163163+ __setup_vector_irq(cpu);164164+}165165+144166static void __init smp_intr_init(void)145167{146168#ifdef CONFIG_SMP
···526526}527527528528/*529529- * Check for AMD CPUs, which have potentially C1E support529529+ * Check for AMD CPUs, where APIC timer interrupt does not wake up CPU from C1e.530530+ * For more information see531531+ * - Erratum #400 for NPT family 0xf and family 0x10 CPUs532532+ * - Erratum #365 for family 0x11 (not affected because C1e not in use)530533 */531534static int __cpuinit check_c1e_idle(const struct cpuinfo_x86 *c)532535{536536+ u64 val;533537 if (c->x86_vendor != X86_VENDOR_AMD)534534- return 0;535535-536536- if (c->x86 < 0x0F)537537- return 0;538538+ goto no_c1e_idle;538539539540 /* Family 0x0f models < rev F do not have C1E */540540- if (c->x86 == 0x0f && c->x86_model < 0x40)541541- return 0;541541+ if (c->x86 == 0x0F && c->x86_model >= 0x40)542542+ return 1;542543543543- return 1;544544+ if (c->x86 == 0x10) {545545+ /*546546+ * check OSVW bit for CPUs that are not affected547547+ * by erratum #400548548+ */549549+ rdmsrl(MSR_AMD64_OSVW_ID_LENGTH, val);550550+ if (val >= 2) {551551+ rdmsrl(MSR_AMD64_OSVW_STATUS, val);552552+ if (!(val & BIT(1)))553553+ goto no_c1e_idle;554554+ }555555+ return 1;556556+ }557557+558558+no_c1e_idle:559559+ return 0;544560}545561546562static cpumask_var_t c1e_mask;
+6-4
arch/x86/kernel/setup.c
···314314#define MAX_MAP_CHUNK (NR_FIX_BTMAPS << PAGE_SHIFT)315315static void __init relocate_initrd(void)316316{317317-317317+ /* Assume only end is not page aligned */318318 u64 ramdisk_image = boot_params.hdr.ramdisk_image;319319 u64 ramdisk_size = boot_params.hdr.ramdisk_size;320320+ u64 area_size = PAGE_ALIGN(ramdisk_size);320321 u64 end_of_lowmem = max_low_pfn_mapped << PAGE_SHIFT;321322 u64 ramdisk_here;322323 unsigned long slop, clen, mapaddr;323324 char *p, *q;324325325326 /* We need to move the initrd down into lowmem */326326- ramdisk_here = find_e820_area(0, end_of_lowmem, ramdisk_size,327327+ ramdisk_here = find_e820_area(0, end_of_lowmem, area_size,327328 PAGE_SIZE);328329329330 if (ramdisk_here == -1ULL)···333332334333 /* Note: this includes all the lowmem currently occupied by335334 the initrd, we rely on that fact to keep the data intact. */336336- reserve_early(ramdisk_here, ramdisk_here + ramdisk_size,335335+ reserve_early(ramdisk_here, ramdisk_here + area_size,337336 "NEW RAMDISK");338337 initrd_start = ramdisk_here + PAGE_OFFSET;339338 initrd_end = initrd_start + ramdisk_size;···377376378377static void __init reserve_initrd(void)379378{379379+ /* Assume only end is not page aligned */380380 u64 ramdisk_image = boot_params.hdr.ramdisk_image;381381 u64 ramdisk_size = boot_params.hdr.ramdisk_size;382382- u64 ramdisk_end = ramdisk_image + ramdisk_size;382382+ u64 ramdisk_end = PAGE_ALIGN(ramdisk_image + ramdisk_size);383383 u64 end_of_lowmem = max_low_pfn_mapped << PAGE_SHIFT;384384385385 if (!boot_params.hdr.type_of_loader ||
+3-3
arch/x86/kernel/smpboot.c
···242242 end_local_APIC_setup();243243 map_cpu_to_logical_apicid();244244245245- notify_cpu_starting(cpuid);246246-247245 /*248246 * Need to setup vector mappings before we enable interrupts.249247 */250250- __setup_vector_irq(smp_processor_id());248248+ setup_vector_irq(smp_processor_id());251249 /*252250 * Get our bogomips.253251 *···261263 * Save our processor parameters262264 */263265 smp_store_cpu_info(cpuid);266266+267267+ notify_cpu_starting(cpuid);264268265269 /*266270 * Allow the master to continue.
···331331332332void free_init_pages(char *what, unsigned long begin, unsigned long end)333333{334334- unsigned long addr = begin;334334+ unsigned long addr;335335+ unsigned long begin_aligned, end_aligned;335336336336- if (addr >= end)337337+ /* Make sure boundaries are page aligned */338338+ begin_aligned = PAGE_ALIGN(begin);339339+ end_aligned = end & PAGE_MASK;340340+341341+ if (WARN_ON(begin_aligned != begin || end_aligned != end)) {342342+ begin = begin_aligned;343343+ end = end_aligned;344344+ }345345+346346+ if (begin >= end)337347 return;348348+349349+ addr = begin;338350339351 /*340352 * If debugging page accesses then do not free this memory but···355343 */356344#ifdef CONFIG_DEBUG_PAGEALLOC357345 printk(KERN_INFO "debug: unmapping init memory %08lx..%08lx\n",358358- begin, PAGE_ALIGN(end));346346+ begin, end);359347 set_memory_np(begin, (end - begin) >> PAGE_SHIFT);360348#else361349 /*···370358 for (; addr < end; addr += PAGE_SIZE) {371359 ClearPageReserved(virt_to_page(addr));372360 init_page_count(virt_to_page(addr));373373- memset((void *)(addr & ~(PAGE_SIZE-1)),374374- POISON_FREE_INITMEM, PAGE_SIZE);361361+ memset((void *)addr, POISON_FREE_INITMEM, PAGE_SIZE);375362 free_page(addr);376363 totalram_pages++;377364 }···387376#ifdef CONFIG_BLK_DEV_INITRD388377void free_initrd_mem(unsigned long start, unsigned long end)389378{390390- free_init_pages("initrd memory", start, end);379379+ /*380380+ * end may not be page aligned, and we cannot align it here:381381+ * the decompressor could be confused by an aligned initrd_end.382382+ * The partial end page has already been reserved by383383+ * - i386_start_kernel()384384+ * - x86_64_start_kernel()385385+ * - relocate_initrd()386386+ * so it is safe to PAGE_ALIGN() here and free the partial page.387387+ */388388+ free_init_pages("initrd memory", start, PAGE_ALIGN(end));391389}392390#endif
+18-4
arch/x86/pci/acpi.c
···122122 struct acpi_resource_address64 addr;123123 acpi_status status;124124 unsigned long flags;125125- struct resource *root;126126- u64 start, end;125125+ struct resource *root, *conflict;126126+ u64 start, end, max_len;127127128128 status = resource_to_addr(acpi_res, &addr);129129 if (!ACPI_SUCCESS(status))···139139 flags = IORESOURCE_IO;140140 } else141141 return AE_OK;142142+143143+ max_len = addr.maximum - addr.minimum + 1;144144+ if (addr.address_length > max_len) {145145+ dev_printk(KERN_DEBUG, &info->bridge->dev,146146+ "host bridge window length %#llx doesn't fit in "147147+ "%#llx-%#llx, trimming\n",148148+ (unsigned long long) addr.address_length,149149+ (unsigned long long) addr.minimum,150150+ (unsigned long long) addr.maximum);151151+ addr.address_length = max_len;152152+ }142153143154 start = addr.minimum + addr.translation_offset;144155 end = start + addr.address_length - 1;···168157 return AE_OK;169158 }170159171171- if (insert_resource(root, res)) {160160+ conflict = insert_resource_conflict(root, res);161161+ if (conflict) {172162 dev_err(&info->bridge->dev,173173- "can't allocate host bridge window %pR\n", res);163163+ "address space collision: host bridge window %pR "164164+ "conflicts with %s %pR\n",165165+ res, conflict->name, conflict);174166 } else {175167 pci_bus_add_resource(info->bus, res, 0);176168 info->res_num++;
-5
arch/x86/pci/i386.c
···127127 continue;128128 if (!r->start ||129129 pci_claim_resource(dev, idx) < 0) {130130- dev_info(&dev->dev,131131- "can't reserve window %pR\n",132132- r);133130 /*134131 * Something is wrong with the region.135132 * Invalidate the resource to prevent···178181 "BAR %d: reserving %pr (d=%d, p=%d)\n",179182 idx, r, disabled, pass);180183 if (pci_claim_resource(dev, idx) < 0) {181181- dev_info(&dev->dev,182182- "can't reserve %pR\n", r);183184 /* We'll assign a new address later */184185 r->end -= r->start;185186 r->start = 0;
···681681extern int nouveau_vram_pushbuf;682682extern int nouveau_vram_notify;683683extern int nouveau_fbpercrtc;684684+extern int nouveau_tv_disable;684685extern char *nouveau_tv_norm;685686extern int nouveau_reg_debug;686687extern char *nouveau_vbios;···689688extern int nouveau_ignorelid;690689extern int nouveau_nofbaccel;691690extern int nouveau_noaccel;691691+extern int nouveau_override_conntype;692692693693extern int nouveau_pci_suspend(struct pci_dev *pdev, pm_message_t pm_state);694694extern int nouveau_pci_resume(struct pci_dev *pdev);···927925extern void nv40_fb_takedown(struct drm_device *);928926extern void nv40_fb_set_region_tiling(struct drm_device *, int, uint32_t,929927 uint32_t, uint32_t);928928+929929+/* nv50_fb.c */930930+extern int nv50_fb_init(struct drm_device *);931931+extern void nv50_fb_takedown(struct drm_device *);930932931933/* nv04_fifo.c */932934extern int nv04_fifo_init(struct drm_device *);
+561-48
drivers/gpu/drm/nouveau/nouveau_irq.c
···311311#define nouveau_print_bitfield_names(val, namelist) \312312 nouveau_print_bitfield_names_((val), (namelist), ARRAY_SIZE(namelist))313313314314+struct nouveau_enum_names {315315+ uint32_t value;316316+ const char *name;317317+};318318+319319+static void320320+nouveau_print_enum_names_(uint32_t value,321321+ const struct nouveau_enum_names *namelist,322322+ const int namelist_len)323323+{324324+ /*325325+ * Caller must have already printed the KERN_* log level for us.326326+ * Also the caller is responsible for adding the newline.327327+ */328328+ int i;329329+ for (i = 0; i < namelist_len; ++i) {330330+ if (value == namelist[i].value) {331331+ printk("%s", namelist[i].name);332332+ return;333333+ }334334+ }335335+ printk("unknown value 0x%08x", value);336336+}337337+#define nouveau_print_enum_names(val, namelist) \338338+ nouveau_print_enum_names_((val), (namelist), ARRAY_SIZE(namelist))314339315340static int316341nouveau_graph_chid_from_grctx(struct drm_device *dev)···452427 struct drm_nouveau_private *dev_priv = dev->dev_private;453428 uint32_t nsource = trap->nsource, nstatus = trap->nstatus;454429455455- NV_INFO(dev, "%s - nSource:", id);456456- nouveau_print_bitfield_names(nsource, nsource_names);457457- printk(", nStatus:");458458- if (dev_priv->card_type < NV_10)459459- nouveau_print_bitfield_names(nstatus, nstatus_names);460460- else461461- nouveau_print_bitfield_names(nstatus, nstatus_names_nv10);462462- printk("\n");430430+ if (dev_priv->card_type < NV_50) {431431+ NV_INFO(dev, "%s - nSource:", id);432432+ nouveau_print_bitfield_names(nsource, nsource_names);433433+ printk(", nStatus:");434434+ if (dev_priv->card_type < NV_10)435435+ nouveau_print_bitfield_names(nstatus, nstatus_names);436436+ else437437+ nouveau_print_bitfield_names(nstatus, nstatus_names_nv10);438438+ printk("\n");439439+ }463440464441 NV_INFO(dev, "%s - Ch %d/%d Class 0x%04x Mthd 0x%04x "465442 "Data 0x%08x:0x%08x\n",···605578}606579607580static 
void581581+nv50_pfb_vm_trap(struct drm_device *dev, int display, const char *name)582582+{583583+ struct drm_nouveau_private *dev_priv = dev->dev_private;584584+ uint32_t trap[6];585585+ int i, ch;586586+ uint32_t idx = nv_rd32(dev, 0x100c90);587587+ if (idx & 0x80000000) {588588+ idx &= 0xffffff;589589+ if (display) {590590+ for (i = 0; i < 6; i++) {591591+ nv_wr32(dev, 0x100c90, idx | i << 24);592592+ trap[i] = nv_rd32(dev, 0x100c94);593593+ }594594+ for (ch = 0; ch < dev_priv->engine.fifo.channels; ch++) {595595+ struct nouveau_channel *chan = dev_priv->fifos[ch];596596+597597+ if (!chan || !chan->ramin)598598+ continue;599599+600600+ if (trap[1] == chan->ramin->instance >> 12)601601+ break;602602+ }603603+ NV_INFO(dev, "%s - VM: Trapped %s at %02x%04x%04x status %08x %08x channel %d\n",604604+ name, (trap[5]&0x100?"read":"write"),605605+ trap[5]&0xff, trap[4]&0xffff,606606+ trap[3]&0xffff, trap[0], trap[2], ch);607607+ }608608+ nv_wr32(dev, 0x100c90, idx | 0x80000000);609609+ } else if (display) {610610+ NV_INFO(dev, "%s - no VM fault?\n", name);611611+ }612612+}613613+614614+static struct nouveau_enum_names nv50_mp_exec_error_names[] =615615+{616616+ { 3, "STACK_UNDERFLOW" },617617+ { 4, "QUADON_ACTIVE" },618618+ { 8, "TIMEOUT" },619619+ { 0x10, "INVALID_OPCODE" },620620+ { 0x40, "BREAKPOINT" },621621+};622622+623623+static void624624+nv50_pgraph_mp_trap(struct drm_device *dev, int tpid, int display)625625+{626626+ struct drm_nouveau_private *dev_priv = dev->dev_private;627627+ uint32_t units = nv_rd32(dev, 0x1540);628628+ uint32_t addr, mp10, status, pc, oplow, ophigh;629629+ int i;630630+ int mps = 0;631631+ for (i = 0; i < 4; i++) {632632+ if (!(units & 1 << (i+24)))633633+ continue;634634+ if (dev_priv->chipset < 0xa0)635635+ addr = 0x408200 + (tpid << 12) + (i << 7);636636+ else637637+ addr = 0x408100 + (tpid << 11) + (i << 7);638638+ mp10 = nv_rd32(dev, addr + 0x10);639639+ status = nv_rd32(dev, addr + 0x14);640640+ if (!status)641641+ continue;642642+ 
if (display) {643643+ nv_rd32(dev, addr + 0x20);644644+ pc = nv_rd32(dev, addr + 0x24);645645+ oplow = nv_rd32(dev, addr + 0x70);646646+ ophigh= nv_rd32(dev, addr + 0x74);647647+ NV_INFO(dev, "PGRAPH_TRAP_MP_EXEC - "648648+ "TP %d MP %d: ", tpid, i);649649+ nouveau_print_enum_names(status,650650+ nv50_mp_exec_error_names);651651+ printk(" at %06x warp %d, opcode %08x %08x\n",652652+ pc&0xffffff, pc >> 24,653653+ oplow, ophigh);654654+ }655655+ nv_wr32(dev, addr + 0x10, mp10);656656+ nv_wr32(dev, addr + 0x14, 0);657657+ mps++;658658+ }659659+ if (!mps && display)660660+ NV_INFO(dev, "PGRAPH_TRAP_MP_EXEC - TP %d: "661661+ "No MPs claiming errors?\n", tpid);662662+}663663+664664+static void665665+nv50_pgraph_tp_trap(struct drm_device *dev, int type, uint32_t ustatus_old,666666+ uint32_t ustatus_new, int display, const char *name)667667+{668668+ struct drm_nouveau_private *dev_priv = dev->dev_private;669669+ int tps = 0;670670+ uint32_t units = nv_rd32(dev, 0x1540);671671+ int i, r;672672+ uint32_t ustatus_addr, ustatus;673673+ for (i = 0; i < 16; i++) {674674+ if (!(units & (1 << i)))675675+ continue;676676+ if (dev_priv->chipset < 0xa0)677677+ ustatus_addr = ustatus_old + (i << 12);678678+ else679679+ ustatus_addr = ustatus_new + (i << 11);680680+ ustatus = nv_rd32(dev, ustatus_addr) & 0x7fffffff;681681+ if (!ustatus)682682+ continue;683683+ tps++;684684+ switch (type) {685685+ case 6: /* texture error... 
unknown for now */686686+ nv50_pfb_vm_trap(dev, display, name);687687+ if (display) {688688+ NV_ERROR(dev, "magic set %d:\n", i);689689+ for (r = ustatus_addr + 4; r <= ustatus_addr + 0x10; r += 4)690690+ NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r,691691+ nv_rd32(dev, r));692692+ }693693+ break;694694+ case 7: /* MP error */695695+ if (ustatus & 0x00010000) {696696+ nv50_pgraph_mp_trap(dev, i, display);697697+ ustatus &= ~0x00010000;698698+ }699699+ break;700700+ case 8: /* TPDMA error */701701+ {702702+ uint32_t e0c = nv_rd32(dev, ustatus_addr + 4);703703+ uint32_t e10 = nv_rd32(dev, ustatus_addr + 8);704704+ uint32_t e14 = nv_rd32(dev, ustatus_addr + 0xc);705705+ uint32_t e18 = nv_rd32(dev, ustatus_addr + 0x10);706706+ uint32_t e1c = nv_rd32(dev, ustatus_addr + 0x14);707707+ uint32_t e20 = nv_rd32(dev, ustatus_addr + 0x18);708708+ uint32_t e24 = nv_rd32(dev, ustatus_addr + 0x1c);709709+ nv50_pfb_vm_trap(dev, display, name);710710+ /* 2d engine destination */711711+ if (ustatus & 0x00000010) {712712+ if (display) {713713+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA_2D - TP %d - Unknown fault at address %02x%08x\n",714714+ i, e14, e10);715715+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA_2D - TP %d - e0c: %08x, e18: %08x, e1c: %08x, e20: %08x, e24: %08x\n",716716+ i, e0c, e18, e1c, e20, e24);717717+ }718718+ ustatus &= ~0x00000010;719719+ }720720+ /* Render target */721721+ if (ustatus & 0x00000040) {722722+ if (display) {723723+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA_RT - TP %d - Unknown fault at address %02x%08x\n",724724+ i, e14, e10);725725+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA_RT - TP %d - e0c: %08x, e18: %08x, e1c: %08x, e20: %08x, e24: %08x\n",726726+ i, e0c, e18, e1c, e20, e24);727727+ }728728+ ustatus &= ~0x00000040;729729+ }730730+ /* CUDA memory: l[], g[] or stack. */731731+ if (ustatus & 0x00000080) {732732+ if (display) {733733+ if (e18 & 0x80000000) {734734+ /* g[] read fault? 
*/735735+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - Global read fault at address %02x%08x\n",736736+ i, e14, e10 | ((e18 >> 24) & 0x1f));737737+ e18 &= ~0x1f000000;738738+ } else if (e18 & 0xc) {739739+ /* g[] write fault? */740740+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - Global write fault at address %02x%08x\n",741741+ i, e14, e10 | ((e18 >> 7) & 0x1f));742742+ e18 &= ~0x00000f80;743743+ } else {744744+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - Unknown CUDA fault at address %02x%08x\n",745745+ i, e14, e10);746746+ }747747+ NV_INFO(dev, "PGRAPH_TRAP_TPDMA - TP %d - e0c: %08x, e18: %08x, e1c: %08x, e20: %08x, e24: %08x\n",748748+ i, e0c, e18, e1c, e20, e24);749749+ }750750+ ustatus &= ~0x00000080;751751+ }752752+ }753753+ break;754754+ }755755+ if (ustatus) {756756+ if (display)757757+ NV_INFO(dev, "%s - TP%d: Unhandled ustatus 0x%08x\n", name, i, ustatus);758758+ }759759+ nv_wr32(dev, ustatus_addr, 0xc0000000);760760+ }761761+762762+ if (!tps && display)763763+ NV_INFO(dev, "%s - No TPs claiming errors?\n", name);764764+}765765+766766+static void767767+nv50_pgraph_trap_handler(struct drm_device *dev)768768+{769769+ struct nouveau_pgraph_trap trap;770770+ uint32_t status = nv_rd32(dev, 0x400108);771771+ uint32_t ustatus;772772+ int display = nouveau_ratelimit();773773+774774+775775+ if (!status && display) {776776+ nouveau_graph_trap_info(dev, &trap);777777+ nouveau_graph_dump_trap_info(dev, "PGRAPH_TRAP", &trap);778778+ NV_INFO(dev, "PGRAPH_TRAP - no units reporting traps?\n");779779+ }780780+781781+ /* DISPATCH: Relays commands to other units and handles NOTIFY,782782+ * COND, QUERY. If you get a trap from it, the command is still stuck783783+ * in DISPATCH and you need to do something about it. */784784+ if (status & 0x001) {785785+ ustatus = nv_rd32(dev, 0x400804) & 0x7fffffff;786786+ if (!ustatus && display) {787787+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH - no ustatus?\n");788788+ }789789+790790+ /* Known to be triggered by screwed up NOTIFY and COND... 
*/791791+ if (ustatus & 0x00000001) {792792+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_DISPATCH_FAULT");793793+ nv_wr32(dev, 0x400500, 0);794794+ if (nv_rd32(dev, 0x400808) & 0x80000000) {795795+ if (display) {796796+ if (nouveau_graph_trapped_channel(dev, &trap.channel))797797+ trap.channel = -1;798798+ trap.class = nv_rd32(dev, 0x400814);799799+ trap.mthd = nv_rd32(dev, 0x400808) & 0x1ffc;800800+ trap.subc = (nv_rd32(dev, 0x400808) >> 16) & 0x7;801801+ trap.data = nv_rd32(dev, 0x40080c);802802+ trap.data2 = nv_rd32(dev, 0x400810);803803+ nouveau_graph_dump_trap_info(dev,804804+ "PGRAPH_TRAP_DISPATCH_FAULT", &trap);805805+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_FAULT - 400808: %08x\n", nv_rd32(dev, 0x400808));806806+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_FAULT - 400848: %08x\n", nv_rd32(dev, 0x400848));807807+ }808808+ nv_wr32(dev, 0x400808, 0);809809+ } else if (display) {810810+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_FAULT - No stuck command?\n");811811+ }812812+ nv_wr32(dev, 0x4008e8, nv_rd32(dev, 0x4008e8) & 3);813813+ nv_wr32(dev, 0x400848, 0);814814+ ustatus &= ~0x00000001;815815+ }816816+ if (ustatus & 0x00000002) {817817+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_DISPATCH_QUERY");818818+ nv_wr32(dev, 0x400500, 0);819819+ if (nv_rd32(dev, 0x40084c) & 0x80000000) {820820+ if (display) {821821+ if (nouveau_graph_trapped_channel(dev, &trap.channel))822822+ trap.channel = -1;823823+ trap.class = nv_rd32(dev, 0x400814);824824+ trap.mthd = nv_rd32(dev, 0x40084c) & 0x1ffc;825825+ trap.subc = (nv_rd32(dev, 0x40084c) >> 16) & 0x7;826826+ trap.data = nv_rd32(dev, 0x40085c);827827+ trap.data2 = 0;828828+ nouveau_graph_dump_trap_info(dev,829829+ "PGRAPH_TRAP_DISPATCH_QUERY", &trap);830830+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_QUERY - 40084c: %08x\n", nv_rd32(dev, 0x40084c));831831+ }832832+ nv_wr32(dev, 0x40084c, 0);833833+ } else if (display) {834834+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH_QUERY - No stuck command?\n");835835+ }836836+ ustatus &= ~0x00000002;837837+ }838838+ if 
(ustatus && display)839839+ NV_INFO(dev, "PGRAPH_TRAP_DISPATCH - Unhandled ustatus 0x%08x\n", ustatus);840840+ nv_wr32(dev, 0x400804, 0xc0000000);841841+ nv_wr32(dev, 0x400108, 0x001);842842+ status &= ~0x001;843843+ }844844+845845+ /* TRAPs other than dispatch use the "normal" trap regs. */846846+ if (status && display) {847847+ nouveau_graph_trap_info(dev, &trap);848848+ nouveau_graph_dump_trap_info(dev,849849+ "PGRAPH_TRAP", &trap);850850+ }851851+852852+ /* M2MF: Memory to memory copy engine. */853853+ if (status & 0x002) {854854+ ustatus = nv_rd32(dev, 0x406800) & 0x7fffffff;855855+ if (!ustatus && display) {856856+ NV_INFO(dev, "PGRAPH_TRAP_M2MF - no ustatus?\n");857857+ }858858+ if (ustatus & 0x00000001) {859859+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_M2MF_NOTIFY");860860+ ustatus &= ~0x00000001;861861+ }862862+ if (ustatus & 0x00000002) {863863+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_M2MF_IN");864864+ ustatus &= ~0x00000002;865865+ }866866+ if (ustatus & 0x00000004) {867867+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_M2MF_OUT");868868+ ustatus &= ~0x00000004;869869+ }870870+ NV_INFO (dev, "PGRAPH_TRAP_M2MF - %08x %08x %08x %08x\n",871871+ nv_rd32(dev, 0x406804),872872+ nv_rd32(dev, 0x406808),873873+ nv_rd32(dev, 0x40680c),874874+ nv_rd32(dev, 0x406810));875875+ if (ustatus && display)876876+ NV_INFO(dev, "PGRAPH_TRAP_M2MF - Unhandled ustatus 0x%08x\n", ustatus);877877+ /* No sane way found yet -- just reset the bugger. */878878+ nv_wr32(dev, 0x400040, 2);879879+ nv_wr32(dev, 0x400040, 0);880880+ nv_wr32(dev, 0x406800, 0xc0000000);881881+ nv_wr32(dev, 0x400108, 0x002);882882+ status &= ~0x002;883883+ }884884+885885+ /* VFETCH: Fetches data from vertex buffers. 
*/886886+ if (status & 0x004) {887887+ ustatus = nv_rd32(dev, 0x400c04) & 0x7fffffff;888888+ if (!ustatus && display) {889889+ NV_INFO(dev, "PGRAPH_TRAP_VFETCH - no ustatus?\n");890890+ }891891+ if (ustatus & 0x00000001) {892892+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_VFETCH_FAULT");893893+ NV_INFO (dev, "PGRAPH_TRAP_VFETCH_FAULT - %08x %08x %08x %08x\n",894894+ nv_rd32(dev, 0x400c00),895895+ nv_rd32(dev, 0x400c08),896896+ nv_rd32(dev, 0x400c0c),897897+ nv_rd32(dev, 0x400c10));898898+ ustatus &= ~0x00000001;899899+ }900900+ if (ustatus && display)901901+ NV_INFO(dev, "PGRAPH_TRAP_VFETCH - Unhandled ustatus 0x%08x\n", ustatus);902902+ nv_wr32(dev, 0x400c04, 0xc0000000);903903+ nv_wr32(dev, 0x400108, 0x004);904904+ status &= ~0x004;905905+ }906906+907907+ /* STRMOUT: DirectX streamout / OpenGL transform feedback. */908908+ if (status & 0x008) {909909+ ustatus = nv_rd32(dev, 0x401800) & 0x7fffffff;910910+ if (!ustatus && display) {911911+ NV_INFO(dev, "PGRAPH_TRAP_STRMOUT - no ustatus?\n");912912+ }913913+ if (ustatus & 0x00000001) {914914+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_STRMOUT_FAULT");915915+ NV_INFO (dev, "PGRAPH_TRAP_STRMOUT_FAULT - %08x %08x %08x %08x\n",916916+ nv_rd32(dev, 0x401804),917917+ nv_rd32(dev, 0x401808),918918+ nv_rd32(dev, 0x40180c),919919+ nv_rd32(dev, 0x401810));920920+ ustatus &= ~0x00000001;921921+ }922922+ if (ustatus && display)923923+ NV_INFO(dev, "PGRAPH_TRAP_STRMOUT - Unhandled ustatus 0x%08x\n", ustatus);924924+ /* No sane way found yet -- just reset the bugger. */925925+ nv_wr32(dev, 0x400040, 0x80);926926+ nv_wr32(dev, 0x400040, 0);927927+ nv_wr32(dev, 0x401800, 0xc0000000);928928+ nv_wr32(dev, 0x400108, 0x008);929929+ status &= ~0x008;930930+ }931931+932932+ /* CCACHE: Handles code and c[] caches and fills them. 
*/933933+ if (status & 0x010) {934934+ ustatus = nv_rd32(dev, 0x405018) & 0x7fffffff;935935+ if (!ustatus && display) {936936+ NV_INFO(dev, "PGRAPH_TRAP_CCACHE - no ustatus?\n");937937+ }938938+ if (ustatus & 0x00000001) {939939+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_CCACHE_FAULT");940940+ NV_INFO (dev, "PGRAPH_TRAP_CCACHE_FAULT - %08x %08x %08x %08x %08x %08x %08x\n",941941+ nv_rd32(dev, 0x405800),942942+ nv_rd32(dev, 0x405804),943943+ nv_rd32(dev, 0x405808),944944+ nv_rd32(dev, 0x40580c),945945+ nv_rd32(dev, 0x405810),946946+ nv_rd32(dev, 0x405814),947947+ nv_rd32(dev, 0x40581c));948948+ ustatus &= ~0x00000001;949949+ }950950+ if (ustatus && display)951951+ NV_INFO(dev, "PGRAPH_TRAP_CCACHE - Unhandled ustatus 0x%08x\n", ustatus);952952+ nv_wr32(dev, 0x405018, 0xc0000000);953953+ nv_wr32(dev, 0x400108, 0x010);954954+ status &= ~0x010;955955+ }956956+957957+ /* Unknown, not seen yet... 0x402000 is the only trap status reg958958+ * remaining, so try to handle it anyway. Perhaps related to that959959+ * unknown DMA slot on tesla? */960960+ if (status & 0x20) {961961+ nv50_pfb_vm_trap(dev, display, "PGRAPH_TRAP_UNKC04");962962+ ustatus = nv_rd32(dev, 0x402000) & 0x7fffffff;963963+ if (display)964964+ NV_INFO(dev, "PGRAPH_TRAP_UNKC04 - Unhandled ustatus 0x%08x\n", ustatus);965965+ nv_wr32(dev, 0x402000, 0xc0000000);966966+ /* no status modification on purpose */967967+ }968968+969969+ /* TEXTURE: CUDA texturing units */970970+ if (status & 0x040) {971971+ nv50_pgraph_tp_trap (dev, 6, 0x408900, 0x408600, display,972972+ "PGRAPH_TRAP_TEXTURE");973973+ nv_wr32(dev, 0x400108, 0x040);974974+ status &= ~0x040;975975+ }976976+977977+ /* MP: CUDA execution engines.
*/978978+ if (status & 0x080) {979979+ nv50_pgraph_tp_trap (dev, 7, 0x408314, 0x40831c, display,980980+ "PGRAPH_TRAP_MP");981981+ nv_wr32(dev, 0x400108, 0x080);982982+ status &= ~0x080;983983+ }984984+985985+ /* TPDMA: Handles TP-initiated uncached memory accesses:986986+ * l[], g[], stack, 2d surfaces, render targets. */987987+ if (status & 0x100) {988988+ nv50_pgraph_tp_trap (dev, 8, 0x408e08, 0x408708, display,989989+ "PGRAPH_TRAP_TPDMA");990990+ nv_wr32(dev, 0x400108, 0x100);991991+ status &= ~0x100;992992+ }993993+994994+ if (status) {995995+ if (display)996996+ NV_INFO(dev, "PGRAPH_TRAP - Unknown trap 0x%08x\n",997997+ status);998998+ nv_wr32(dev, 0x400108, status);999999+ }10001000+}10011001+10021002+/* There must be a *lot* of these. Will take some time to gather them up. */10031003+static struct nouveau_enum_names nv50_data_error_names[] =10041004+{10051005+ { 4, "INVALID_VALUE" },10061006+ { 5, "INVALID_ENUM" },10071007+ { 8, "INVALID_OBJECT" },10081008+ { 0xc, "INVALID_BITFIELD" },10091009+ { 0x28, "MP_NO_REG_SPACE" },10101010+ { 0x2b, "MP_BLOCK_SIZE_MISMATCH" },10111011+};10121012+10131013+static void6081014nv50_pgraph_irq_handler(struct drm_device *dev)6091015{10161016+ struct nouveau_pgraph_trap trap;10171017+ int unhandled = 0;6101018 uint32_t status;61110196121020 while ((status = nv_rd32(dev, NV03_PGRAPH_INTR))) {613613- uint32_t nsource = nv_rd32(dev, NV03_PGRAPH_NSOURCE);614614-10211021+ /* NOTIFY: You've set a NOTIFY on a command and it's done.
*/6151022 if (status & 0x00000001) {616616- nouveau_pgraph_intr_notify(dev, nsource);10231023+ nouveau_graph_trap_info(dev, &trap);10241024+ if (nouveau_ratelimit())10251025+ nouveau_graph_dump_trap_info(dev,10261026+ "PGRAPH_NOTIFY", &trap);6171027 status &= ~0x00000001;6181028 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000001);6191029 }6201030621621- if (status & 0x00000010) {622622- nouveau_pgraph_intr_error(dev, nsource |623623- NV03_PGRAPH_NSOURCE_ILLEGAL_MTHD);10311031+ /* COMPUTE_QUERY: Purpose and exact cause unknown, happens10321032+ * when you write 0x200 to 0x50c0 method 0x31c. */10331033+ if (status & 0x00000002) {10341034+ nouveau_graph_trap_info(dev, &trap);10351035+ if (nouveau_ratelimit())10361036+ nouveau_graph_dump_trap_info(dev,10371037+ "PGRAPH_COMPUTE_QUERY", &trap);10381038+ status &= ~0x00000002;10391039+ nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000002);10401040+ }624104110421042+ /* Unknown, never seen: 0x4 */10431043+10441044+ /* ILLEGAL_MTHD: You used a wrong method for this class. */10451045+ if (status & 0x00000010) {10461046+ nouveau_graph_trap_info(dev, &trap);10471047+ if (nouveau_pgraph_intr_swmthd(dev, &trap))10481048+ unhandled = 1;10491049+ if (unhandled && nouveau_ratelimit())10501050+ nouveau_graph_dump_trap_info(dev,10511051+ "PGRAPH_ILLEGAL_MTHD", &trap);6251052 status &= ~0x00000010;6261053 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000010);6271054 }628105510561056+ /* ILLEGAL_CLASS: You used a wrong class. */10571057+ if (status & 0x00000020) {10581058+ nouveau_graph_trap_info(dev, &trap);10591059+ if (nouveau_ratelimit())10601060+ nouveau_graph_dump_trap_info(dev,10611061+ "PGRAPH_ILLEGAL_CLASS", &trap);10621062+ status &= ~0x00000020;10631063+ nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000020);10641064+ }10651065+10661066+ /* DOUBLE_NOTIFY: You tried to set a NOTIFY on another NOTIFY. 
*/10671067+ if (status & 0x00000040) {10681068+ nouveau_graph_trap_info(dev, &trap);10691069+ if (nouveau_ratelimit())10701070+ nouveau_graph_dump_trap_info(dev,10711071+ "PGRAPH_DOUBLE_NOTIFY", &trap);10721072+ status &= ~0x00000040;10731073+ nv_wr32(dev, NV03_PGRAPH_INTR, 0x00000040);10741074+ }10751075+10761076+ /* CONTEXT_SWITCH: PGRAPH needs us to load a new context */6291077 if (status & 0x00001000) {6301078 nv_wr32(dev, 0x400500, 0x00000000);6311079 nv_wr32(dev, NV03_PGRAPH_INTR,···1115613 status &= ~NV_PGRAPH_INTR_CONTEXT_SWITCH;1116614 }111761511181118- if (status & 0x00100000) {11191119- nouveau_pgraph_intr_error(dev, nsource |11201120- NV03_PGRAPH_NSOURCE_DATA_ERROR);616616+ /* BUFFER_NOTIFY: Your m2mf transfer finished */617617+ if (status & 0x00010000) {618618+ nouveau_graph_trap_info(dev, &trap);619619+ if (nouveau_ratelimit())620620+ nouveau_graph_dump_trap_info(dev,621621+ "PGRAPH_BUFFER_NOTIFY", &trap);622622+ status &= ~0x00010000;623623+ nv_wr32(dev, NV03_PGRAPH_INTR, 0x00010000);624624+ }1121625626626+ /* DATA_ERROR: Invalid value for this method, or invalid627627+ * state in current PGRAPH context for this operation */628628+ if (status & 0x00100000) {629629+ nouveau_graph_trap_info(dev, &trap);630630+ if (nouveau_ratelimit()) {631631+ nouveau_graph_dump_trap_info(dev,632632+ "PGRAPH_DATA_ERROR", &trap);633633+ NV_INFO (dev, "PGRAPH_DATA_ERROR - ");634634+ nouveau_print_enum_names(nv_rd32(dev, 0x400110),635635+ nv50_data_error_names);636636+ printk("\n");637637+ }1122638 status &= ~0x00100000;1123639 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00100000);1124640 }1125641642642+ /* TRAP: Something bad happened in the middle of command643643+ * execution. Has a billion types, subtypes, and even644644+ * subsubtypes. 
*/1126645 if (status & 0x00200000) {11271127- int r;11281128-11291129- nouveau_pgraph_intr_error(dev, nsource |11301130- NV03_PGRAPH_NSOURCE_PROTECTION_ERROR);11311131-11321132- NV_ERROR(dev, "magic set 1:\n");11331133- for (r = 0x408900; r <= 0x408910; r += 4)11341134- NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r,11351135- nv_rd32(dev, r));11361136- nv_wr32(dev, 0x408900,11371137- nv_rd32(dev, 0x408904) | 0xc0000000);11381138- for (r = 0x408e08; r <= 0x408e24; r += 4)11391139- NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r,11401140- nv_rd32(dev, r));11411141- nv_wr32(dev, 0x408e08,11421142- nv_rd32(dev, 0x408e08) | 0xc0000000);11431143-11441144- NV_ERROR(dev, "magic set 2:\n");11451145- for (r = 0x409900; r <= 0x409910; r += 4)11461146- NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r,11471147- nv_rd32(dev, r));11481148- nv_wr32(dev, 0x409900,11491149- nv_rd32(dev, 0x409904) | 0xc0000000);11501150- for (r = 0x409e08; r <= 0x409e24; r += 4)11511151- NV_ERROR(dev, "\t0x%08x: 0x%08x\n", r,11521152- nv_rd32(dev, r));11531153- nv_wr32(dev, 0x409e08,11541154- nv_rd32(dev, 0x409e08) | 0xc0000000);11551155-646646+ nv50_pgraph_trap_handler(dev);1156647 status &= ~0x00200000;11571157- nv_wr32(dev, NV03_PGRAPH_NSOURCE, nsource);1158648 nv_wr32(dev, NV03_PGRAPH_INTR, 0x00200000);1159649 }650650+651651+ /* Unknown, never seen: 0x00400000 */652652+653653+ /* SINGLE_STEP: Happens on every method if you turned on654654+ * single stepping in 40008c */655655+ if (status & 0x01000000) {656656+ nouveau_graph_trap_info(dev, &trap);657657+ if (nouveau_ratelimit())658658+ nouveau_graph_dump_trap_info(dev,659659+ "PGRAPH_SINGLE_STEP", &trap);660660+ status &= ~0x01000000;661661+ nv_wr32(dev, NV03_PGRAPH_INTR, 0x01000000);662662+ }663663+664664+ /* 0x02000000 happens when you pause a ctxprog...665665+ * but the only way this can happen that I know is by666666+ * poking the relevant MMIO register, and we don't667667+ * do that. 
*/11606681161669 if (status) {1162670 NV_INFO(dev, "Unhandled PGRAPH_INTR - 0x%08x\n",···1184672 }11856731186674 nv_wr32(dev, NV03_PMC_INTR_0, NV_PMC_INTR_0_PGRAPH_PENDING);11871187- nv_wr32(dev, 0x400824, nv_rd32(dev, 0x400824) & ~(1 << 31));675675+ if (nv_rd32(dev, 0x400824) & (1 << 31))676676+ nv_wr32(dev, 0x400824, nv_rd32(dev, 0x400824) & ~(1 << 31));1188677}11896781190679static void
···
 	}
 
 	for (i = 0 ; i < dcb->connector.entries; i++) {
-		if (i != 0 && dcb->connector.entry[i].index ==
-		    dcb->connector.entry[i - 1].index)
+		if (i != 0 && dcb->connector.entry[i].index2 ==
+		    dcb->connector.entry[i - 1].index2)
 			continue;
 		nouveau_connector_create(dev, &dcb->connector.entry[i]);
 	}
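As an aside, the nv50_pgraph_irq_handler() rework above follows a classic mask-and-ack pattern: each recognised status bit is handled, acknowledged in the interrupt register, and cleared from the local copy, so whatever survives the loop is by construction unhandled. A minimal sketch of that pattern (illustrative Python, not driver code; the handler table is hypothetical):

```python
# Mask-and-ack interrupt dispatch sketch. `ack` stands in for the
# hardware write to the INTR register (nv_wr32 in the driver).
handlers = {0x001: "NOTIFY", 0x010: "ILLEGAL_MTHD", 0x00200000: "TRAP"}

def dispatch(status, ack):
    handled = []
    for bit, name in handlers.items():
        if status & bit:
            handled.append(name)
            ack(bit)           # acknowledge this source in hardware
            status &= ~bit     # drop it from the local copy
    return status, handled     # non-zero status = unknown bits left over
```

Any residual non-zero `status` is exactly what the driver reports as "Unhandled PGRAPH_INTR".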
+32
drivers/gpu/drm/nouveau/nv50_fb.c
+#include "drmP.h"
+#include "drm.h"
+#include "nouveau_drv.h"
+#include "nouveau_drm.h"
+
+int
+nv50_fb_init(struct drm_device *dev)
+{
+	/* This is needed to get meaningful information from 100c90
+	 * on traps. No idea what these values mean exactly. */
+	struct drm_nouveau_private *dev_priv = dev->dev_private;
+
+	switch (dev_priv->chipset) {
+	case 0x50:
+		nv_wr32(dev, 0x100c90, 0x0707ff);
+		break;
+	case 0xa5:
+	case 0xa8:
+		nv_wr32(dev, 0x100c90, 0x0d0fff);
+		break;
+	default:
+		nv_wr32(dev, 0x100c90, 0x1d07ff);
+		break;
+	}
+
+	return 0;
+}
+
+void
+nv50_fb_takedown(struct drm_device *dev)
+{
+}
···
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
 }
 
+static void atombios_disable_ss(struct drm_crtc *crtc)
+{
+	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+	struct drm_device *dev = crtc->dev;
+	struct radeon_device *rdev = dev->dev_private;
+	u32 ss_cntl;
+
+	if (ASIC_IS_DCE4(rdev)) {
+		switch (radeon_crtc->pll_id) {
+		case ATOM_PPLL1:
+			ss_cntl = RREG32(EVERGREEN_P1PLL_SS_CNTL);
+			ss_cntl &= ~EVERGREEN_PxPLL_SS_EN;
+			WREG32(EVERGREEN_P1PLL_SS_CNTL, ss_cntl);
+			break;
+		case ATOM_PPLL2:
+			ss_cntl = RREG32(EVERGREEN_P2PLL_SS_CNTL);
+			ss_cntl &= ~EVERGREEN_PxPLL_SS_EN;
+			WREG32(EVERGREEN_P2PLL_SS_CNTL, ss_cntl);
+			break;
+		case ATOM_DCPLL:
+		case ATOM_PPLL_INVALID:
+			return;
+		}
+	} else if (ASIC_IS_AVIVO(rdev)) {
+		switch (radeon_crtc->pll_id) {
+		case ATOM_PPLL1:
+			ss_cntl = RREG32(AVIVO_P1PLL_INT_SS_CNTL);
+			ss_cntl &= ~1;
+			WREG32(AVIVO_P1PLL_INT_SS_CNTL, ss_cntl);
+			break;
+		case ATOM_PPLL2:
+			ss_cntl = RREG32(AVIVO_P2PLL_INT_SS_CNTL);
+			ss_cntl &= ~1;
+			WREG32(AVIVO_P2PLL_INT_SS_CNTL, ss_cntl);
+			break;
+		case ATOM_DCPLL:
+		case ATOM_PPLL_INVALID:
+			return;
+		}
+	}
+}
+
 union atom_enable_ss {
 	ENABLE_LVDS_SS_PARAMETERS legacy;
 	ENABLE_SPREAD_SPECTRUM_ON_PPLL_PS_ALLOCATION v1;
 };
 
-static void atombios_set_ss(struct drm_crtc *crtc, int enable)
+static void atombios_enable_ss(struct drm_crtc *crtc)
 {
 	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
 	struct drm_device *dev = crtc->dev;
···
 			step = dig->ss->step;
 			delay = dig->ss->delay;
 			range = dig->ss->range;
-		} else if (enable)
+		} else
 			return;
-	} else if (enable)
+	} else
 		return;
 		break;
 	}
···
 	args.v1.ucSpreadSpectrumDelay = delay;
 	args.v1.ucSpreadSpectrumRange = range;
 	args.v1.ucPpll = radeon_crtc->crtc_id ? ATOM_PPLL2 : ATOM_PPLL1;
-	args.v1.ucEnable = enable;
+	args.v1.ucEnable = ATOM_ENABLE;
 } else {
 	args.legacy.usSpreadSpectrumPercentage = cpu_to_le16(percentage);
 	args.legacy.ucSpreadSpectrumType = type;
 	args.legacy.ucSpreadSpectrumStepSize_Delay = (step & 3) << 2;
 	args.legacy.ucSpreadSpectrumStepSize_Delay |= (delay & 7) << 4;
-	args.legacy.ucEnable = enable;
+	args.legacy.ucEnable = ATOM_ENABLE;
 }
 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
 }
···
 	/* DVO wants 2x pixel clock if the DVO chip is in 12 bit mode */
 	if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DVO1)
 		adjusted_clock = mode->clock * 2;
-	/* LVDS PLL quirks */
-	if (encoder->encoder_type == DRM_MODE_ENCODER_LVDS) {
-		struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
-		pll->algo = dig->pll_algo;
-	}
 } else {
 	if (encoder->encoder_type != DRM_MODE_ENCODER_DAC)
 		pll->flags |= RADEON_PLL_NO_ODD_POST_DIV;
···
 	int index;
 
 	index = GetIndexIntoMasterTable(COMMAND, AdjustDisplayPll);
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev,
-			      &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev,
+				   &crev))
+		return adjusted_clock;
 
 	memset(&args, 0, sizeof(args));
···
 		}
 	} else if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
 		/* may want to enable SS on DP/eDP eventually */
-		args.v3.sInput.ucDispPllConfig |=
-			DISPPLL_CONFIG_SS_ENABLE;
-		if (mode->clock > 165000)
+		/*args.v3.sInput.ucDispPllConfig |=
+			DISPPLL_CONFIG_SS_ENABLE;*/
+		if (encoder_mode == ATOM_ENCODER_MODE_DP)
 			args.v3.sInput.ucDispPllConfig |=
-				DISPPLL_CONFIG_DUAL_LINK;
+				DISPPLL_CONFIG_COHERENT_MODE;
+		else {
+			if (mode->clock > 165000)
+				args.v3.sInput.ucDispPllConfig |=
+					DISPPLL_CONFIG_DUAL_LINK;
+		}
 	}
 	atom_execute_table(rdev->mode_info.atom_context,
 			   index, (uint32_t *)&args);
···
 	memset(&args, 0, sizeof(args));
 
 	index = GetIndexIntoMasterTable(COMMAND, SetPixelClock);
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev,
-			      &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev,
+				   &crev))
+		return;
 
 	switch (frev) {
 	case 1:
···
 			      &ref_div, &post_div);
 
 	index = GetIndexIntoMasterTable(COMMAND, SetPixelClock);
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev,
-			      &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev,
+				   &crev))
+		return;
 
 	switch (frev) {
 	case 1:
···
 
 	/* TODO color tiling */
 
-	/* pick pll */
-	radeon_crtc->pll_id = radeon_atom_pick_pll(crtc);
-
-	atombios_set_ss(crtc, 0);
+	atombios_disable_ss(crtc);
 	/* always set DCPLL */
 	if (ASIC_IS_DCE4(rdev))
 		atombios_crtc_set_dcpll(crtc);
 	atombios_crtc_set_pll(crtc, adjusted_mode);
-	atombios_set_ss(crtc, 1);
+	atombios_enable_ss(crtc);
 
 	if (ASIC_IS_DCE4(rdev))
 		atombios_set_crtc_dtd_timing(crtc, adjusted_mode);
···
 
 static void atombios_crtc_prepare(struct drm_crtc *crtc)
 {
+	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+
+	/* pick pll */
+	radeon_crtc->pll_id = radeon_atom_pick_pll(crtc);
+
 	atombios_lock_crtc(crtc, ATOM_ENABLE);
 	atombios_crtc_dpms(crtc, DRM_MODE_DPMS_OFF);
 }
+3-3
drivers/gpu/drm/radeon/atombios_dp.c
···
 			>> DP_TRAIN_PRE_EMPHASIS_SHIFT);
 
 	/* disable the training pattern on the sink */
+	dp_set_training(radeon_connector, DP_TRAINING_PATTERN_DISABLE);
+
+	/* disable the training pattern on the source */
 	if (ASIC_IS_DCE4(rdev))
 		atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_LINK_TRAINING_COMPLETE);
 	else
 		radeon_dp_encoder_service(rdev, ATOM_DP_ACTION_TRAINING_COMPLETE,
 					  dig_connector->dp_clock, enc_id, 0);
-
-	radeon_dp_encoder_service(rdev, ATOM_DP_ACTION_TRAINING_COMPLETE,
-				  dig_connector->dp_clock, enc_id, 0);
 }
 
 int radeon_dp_i2c_aux_ch(struct i2c_adapter *adapter, int mode,
+4-7
drivers/gpu/drm/radeon/evergreen.c
···
 #include <linux/platform_device.h>
 #include "drmP.h"
 #include "radeon.h"
+#include "radeon_asic.h"
 #include "radeon_drm.h"
 #include "rv770d.h"
 #include "atom.h"
···
 
 int evergreen_mc_init(struct radeon_device *rdev)
 {
-	fixed20_12 a;
 	u32 tmp;
 	int chansize, numchan;
···
 		rdev->mc.real_vram_size = rdev->mc.aper_size;
 	}
 	r600_vram_gtt_location(rdev, &rdev->mc);
-	/* FIXME: we should enforce default clock in case GPU is not in
-	 * default setup
-	 */
-	a.full = rfixed_const(100);
-	rdev->pm.sclk.full = rfixed_const(rdev->clock.default_sclk);
-	rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a);
+	radeon_update_bandwidth_info(rdev);
+
 	return 0;
 }
···
 
 void evergreen_fini(struct radeon_device *rdev)
 {
+	radeon_pm_fini(rdev);
 	evergreen_suspend(rdev);
 #if 0
 	r600_blit_fini(rdev);
···
 }
 
 /*
- * determin how the encoders and audio interface is wired together
- */
-int r600_audio_tmds_index(struct drm_encoder *encoder)
-{
-	struct drm_device *dev = encoder->dev;
-	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
-	struct drm_encoder *other;
-
-	switch (radeon_encoder->encoder_id) {
-	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
-		return 0;
-
-	case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
-		/* special case check if an TMDS1 is present */
-		list_for_each_entry(other, &dev->mode_config.encoder_list, head) {
-			if (to_radeon_encoder(other)->encoder_id ==
-			    ENCODER_OBJECT_ID_INTERNAL_TMDS1)
-				return 1;
-		}
-		return 0;
-
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
-	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
-		return 1;
-
-	default:
-		DRM_ERROR("Unsupported encoder type 0x%02X\n",
-			  radeon_encoder->encoder_id);
-		return -1;
-	}
-}
-
-/*
  * atach the audio codec to the clock source of the encoder
  */
 void r600_audio_set_clock(struct drm_encoder *encoder, int clock)
···
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
 	int base_rate = 48000;
 
 	switch (radeon_encoder->encoder_id) {
···
 	case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
 		WREG32_P(R600_AUDIO_TIMING, 0, ~0x301);
 		break;
-
 	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
 	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
 	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
 	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
 		WREG32_P(R600_AUDIO_TIMING, 0x100, ~0x301);
 		break;
-
 	default:
 		DRM_ERROR("Unsupported encoder type 0x%02X\n",
 			  radeon_encoder->encoder_id);
 		return;
 	}
 
-	switch (r600_audio_tmds_index(encoder)) {
+	switch (dig->dig_encoder) {
 	case 0:
-		WREG32(R600_AUDIO_PLL1_MUL, base_rate*50);
-		WREG32(R600_AUDIO_PLL1_DIV, clock*100);
+		WREG32(R600_AUDIO_PLL1_MUL, base_rate * 50);
+		WREG32(R600_AUDIO_PLL1_DIV, clock * 100);
 		WREG32(R600_AUDIO_CLK_SRCSEL, 0);
 		break;
 
 	case 1:
-		WREG32(R600_AUDIO_PLL2_MUL, base_rate*50);
-		WREG32(R600_AUDIO_PLL2_DIV, clock*100);
+		WREG32(R600_AUDIO_PLL2_MUL, base_rate * 50);
+		WREG32(R600_AUDIO_PLL2_DIV, clock * 100);
 		WREG32(R600_AUDIO_CLK_SRCSEL, 1);
 		break;
+	default:
+		dev_err(rdev->dev, "Unsupported DIG on encoder 0x%02X\n",
+			radeon_encoder->encoder_id);
+		return;
 	}
 }
+35
drivers/gpu/drm/radeon/r600_blit_shaders.c
+/*
+ * Copyright 2009 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Authors:
+ *     Alex Deucher <alexander.deucher@amd.com>
+ */
 #include <linux/types.h>
 #include <linux/kernel.h>
+
+/*
+ * R6xx+ cards need to use the 3D engine to blit data which requires
+ * quite a bit of hw state setup.  Rather than pull the whole 3D driver
+ * (which normally generates the 3D state) into the DRM, we opt to use
+ * statically generated state tables.  The register state and shaders
+ * were hand generated to support blitting functionality.  See the 3D
+ * driver or documentation for descriptions of the registers and
+ * shader instructions.
+ */
 
 const u32 r6xx_default_state[] =
 {
···
 	u32			nbanks;
 	u32			npipes;
 	/* value we track */
+	u32			sq_config;
 	u32			nsamples;
 	u32			cb_color_base_last[8];
 	struct radeon_bo	*cb_color_bo[8];
···
 {
 	int i;
 
+	/* assume DX9 mode */
+	track->sq_config = DX9_CONSTS;
 	for (i = 0; i < 8; i++) {
 		track->cb_color_base_last[i] = 0;
 		track->cb_color_size[i] = 0;
···
 		tmp = radeon_get_ib_value(p, idx);
 		ib[idx] = 0;
 		break;
+	case SQ_CONFIG:
+		track->sq_config = radeon_get_ib_value(p, idx);
+		break;
 	case R_028800_DB_DEPTH_CONTROL:
 		track->db_depth_control = radeon_get_ib_value(p, idx);
 		break;
···
 	case SQ_PGM_START_VS:
 	case SQ_PGM_START_GS:
 	case SQ_PGM_START_PS:
+	case SQ_ALU_CONST_CACHE_GS_0:
+	case SQ_ALU_CONST_CACHE_GS_1:
+	case SQ_ALU_CONST_CACHE_GS_2:
+	case SQ_ALU_CONST_CACHE_GS_3:
+	case SQ_ALU_CONST_CACHE_GS_4:
+	case SQ_ALU_CONST_CACHE_GS_5:
+	case SQ_ALU_CONST_CACHE_GS_6:
+	case SQ_ALU_CONST_CACHE_GS_7:
+	case SQ_ALU_CONST_CACHE_GS_8:
+	case SQ_ALU_CONST_CACHE_GS_9:
+	case SQ_ALU_CONST_CACHE_GS_10:
+	case SQ_ALU_CONST_CACHE_GS_11:
+	case SQ_ALU_CONST_CACHE_GS_12:
+	case SQ_ALU_CONST_CACHE_GS_13:
+	case SQ_ALU_CONST_CACHE_GS_14:
+	case SQ_ALU_CONST_CACHE_GS_15:
+	case SQ_ALU_CONST_CACHE_PS_0:
+	case SQ_ALU_CONST_CACHE_PS_1:
+	case SQ_ALU_CONST_CACHE_PS_2:
+	case SQ_ALU_CONST_CACHE_PS_3:
+	case SQ_ALU_CONST_CACHE_PS_4:
+	case SQ_ALU_CONST_CACHE_PS_5:
+	case SQ_ALU_CONST_CACHE_PS_6:
+	case SQ_ALU_CONST_CACHE_PS_7:
+	case SQ_ALU_CONST_CACHE_PS_8:
+	case SQ_ALU_CONST_CACHE_PS_9:
+	case SQ_ALU_CONST_CACHE_PS_10:
+	case SQ_ALU_CONST_CACHE_PS_11:
+	case SQ_ALU_CONST_CACHE_PS_12:
+	case SQ_ALU_CONST_CACHE_PS_13:
+	case SQ_ALU_CONST_CACHE_PS_14:
+	case SQ_ALU_CONST_CACHE_PS_15:
+	case SQ_ALU_CONST_CACHE_VS_0:
+	case SQ_ALU_CONST_CACHE_VS_1:
+	case SQ_ALU_CONST_CACHE_VS_2:
+	case SQ_ALU_CONST_CACHE_VS_3:
+	case SQ_ALU_CONST_CACHE_VS_4:
+	case SQ_ALU_CONST_CACHE_VS_5:
+	case SQ_ALU_CONST_CACHE_VS_6:
+	case SQ_ALU_CONST_CACHE_VS_7:
+	case SQ_ALU_CONST_CACHE_VS_8:
+	case SQ_ALU_CONST_CACHE_VS_9:
+	case SQ_ALU_CONST_CACHE_VS_10:
+	case SQ_ALU_CONST_CACHE_VS_11:
+	case SQ_ALU_CONST_CACHE_VS_12:
+	case SQ_ALU_CONST_CACHE_VS_13:
+	case SQ_ALU_CONST_CACHE_VS_14:
+	case SQ_ALU_CONST_CACHE_VS_15:
 		r = r600_cs_packet_next_reloc(p, &reloc);
 		if (r) {
 			dev_warn(p->dev, "bad SET_CONTEXT_REG "
···
 		}
 		break;
 	case PACKET3_SET_ALU_CONST:
-		start_reg = (idx_value << 2) + PACKET3_SET_ALU_CONST_OFFSET;
-		end_reg = 4 * pkt->count + start_reg - 4;
-		if ((start_reg < PACKET3_SET_ALU_CONST_OFFSET) ||
-		    (start_reg >= PACKET3_SET_ALU_CONST_END) ||
-		    (end_reg >= PACKET3_SET_ALU_CONST_END)) {
-			DRM_ERROR("bad SET_ALU_CONST\n");
-			return -EINVAL;
+		if (track->sq_config & DX9_CONSTS) {
+			start_reg = (idx_value << 2) + PACKET3_SET_ALU_CONST_OFFSET;
+			end_reg = 4 * pkt->count + start_reg - 4;
+			if ((start_reg < PACKET3_SET_ALU_CONST_OFFSET) ||
+			    (start_reg >= PACKET3_SET_ALU_CONST_END) ||
+			    (end_reg >= PACKET3_SET_ALU_CONST_END)) {
+				DRM_ERROR("bad SET_ALU_CONST\n");
+				return -EINVAL;
+			}
 		}
 		break;
 	case PACKET3_SET_BOOL_CONST:
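The SET_ALU_CONST bounds check shown in the hunk above derives a register window from the packet and validates it against the ALU constant range. The arithmetic can be sketched in isolation (illustrative Python; the two constants are assumed values for illustration, not taken from this patch):

```python
# Window-bounds sketch for a SET_ALU_CONST-style packet: the packet's
# index value selects the first register, `count` dwords follow, and the
# whole window must lie inside [OFFSET, END).
ALU_CONST_OFFSET = 0x30800   # assumed base of the ALU const window
ALU_CONST_END    = 0x32000   # assumed exclusive end of the window

def alu_const_range_ok(idx_value, count):
    start_reg = (idx_value << 2) + ALU_CONST_OFFSET
    end_reg = 4 * count + start_reg - 4   # last register written
    return (start_reg >= ALU_CONST_OFFSET and
            start_reg < ALU_CONST_END and
            end_reg < ALU_CONST_END)
```

The patch's point is that this check only makes sense when SQ_CONFIG has DX9_CONSTS set; in DX10 mode the constants go through the tracked constant-cache base registers instead.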
+127-76
drivers/gpu/drm/radeon/r600_hdmi.c
···
 */
 enum r600_hdmi_iec_status_bits {
 	AUDIO_STATUS_DIG_ENABLE = 0x01,
-	AUDIO_STATUS_V = 0x02,
-	AUDIO_STATUS_VCFG = 0x04,
+	AUDIO_STATUS_V		= 0x02,
+	AUDIO_STATUS_VCFG	= 0x04,
 	AUDIO_STATUS_EMPHASIS = 0x08,
 	AUDIO_STATUS_COPYRIGHT = 0x10,
 	AUDIO_STATUS_NONAUDIO = 0x20,
 	AUDIO_STATUS_PROFESSIONAL = 0x40,
-	AUDIO_STATUS_LEVEL = 0x80
+	AUDIO_STATUS_LEVEL	= 0x80
 };
 
 struct {
···
 static void r600_hdmi_calc_CTS(uint32_t clock, int *CTS, int N, int freq)
 {
 	if (*CTS == 0)
-		*CTS = clock*N/(128*freq)*1000;
+		*CTS = clock * N / (128 * freq) * 1000;
 	DRM_DEBUG("Using ACR timing N=%d CTS=%d for frequency %d\n",
 		  N, *CTS, freq);
 }
···
 	uint8_t length,
 	uint8_t *frame)
 {
-    int i;
-    frame[0] = packetType + versionNumber + length;
-    for (i = 1; i <= length; i++)
-	frame[0] += frame[i];
-    frame[0] = 0x100 - frame[0];
+	int i;
+	frame[0] = packetType + versionNumber + length;
+	for (i = 1; i <= length; i++)
+		frame[0] += frame[i];
+	frame[0] = 0x100 - frame[0];
 }
···
 	WREG32_P(offset+R600_HDMI_CNTL, 0x04000000, ~0x04000000);
 }
 
-/*
- * enable/disable the HDMI engine
- */
-void r600_hdmi_enable(struct drm_encoder *encoder, int enable)
+static int r600_hdmi_find_free_block(struct drm_device *dev)
+{
+	struct radeon_device *rdev = dev->dev_private;
+	struct drm_encoder *encoder;
+	struct radeon_encoder *radeon_encoder;
+	bool free_blocks[3] = { true, true, true };
+
+	list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+		radeon_encoder = to_radeon_encoder(encoder);
+		switch (radeon_encoder->hdmi_offset) {
+		case R600_HDMI_BLOCK1:
+			free_blocks[0] = false;
+			break;
+		case R600_HDMI_BLOCK2:
+			free_blocks[1] = false;
+			break;
+		case R600_HDMI_BLOCK3:
+			free_blocks[2] = false;
+			break;
+		}
+	}
+
+	if (rdev->family == CHIP_RS600 || rdev->family == CHIP_RS690) {
+		return free_blocks[0] ? R600_HDMI_BLOCK1 : 0;
+	} else if (rdev->family >= CHIP_R600) {
+		if (free_blocks[0])
+			return R600_HDMI_BLOCK1;
+		else if (free_blocks[1])
+			return R600_HDMI_BLOCK2;
+	}
+	return 0;
+}
+
+static void r600_hdmi_assign_block(struct drm_encoder *encoder)
 {
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
-	uint32_t offset = to_radeon_encoder(encoder)->hdmi_offset;
+	struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
 
-	if (!offset)
+	if (!dig) {
+		dev_err(rdev->dev, "Enabling HDMI on non-dig encoder\n");
 		return;
+	}
 
-	DRM_DEBUG("%s HDMI interface @ 0x%04X\n", enable ? "Enabling" : "Disabling", offset);
-
-	/* some version of atombios ignore the enable HDMI flag
-	 * so enabling/disabling HDMI was moved here for TMDS1+2 */
-	switch (radeon_encoder->encoder_id) {
-	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
-		WREG32_P(AVIVO_TMDSA_CNTL, enable ? 0x4 : 0x0, ~0x4);
-		WREG32(offset+R600_HDMI_ENABLE, enable ? 0x101 : 0x0);
-		break;
-
-	case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
-		WREG32_P(AVIVO_LVTMA_CNTL, enable ? 0x4 : 0x0, ~0x4);
-		WREG32(offset+R600_HDMI_ENABLE, enable ? 0x105 : 0x0);
-		break;
-
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
-	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
-		/* This part is doubtfull in my opinion */
-		WREG32(offset+R600_HDMI_ENABLE, enable ? 0x110 : 0x0);
-		break;
-
-	default:
-		DRM_ERROR("unknown HDMI output type\n");
-		break;
+	if (ASIC_IS_DCE4(rdev)) {
+		/* TODO */
+	} else if (ASIC_IS_DCE3(rdev)) {
+		radeon_encoder->hdmi_offset = dig->dig_encoder ?
+			R600_HDMI_BLOCK3 : R600_HDMI_BLOCK1;
+		if (ASIC_IS_DCE32(rdev))
+			radeon_encoder->hdmi_config_offset = dig->dig_encoder ?
+				R600_HDMI_CONFIG2 : R600_HDMI_CONFIG1;
+	} else if (rdev->family >= CHIP_R600) {
+		radeon_encoder->hdmi_offset = r600_hdmi_find_free_block(dev);
 	}
 }
 
 /*
- * determin at which register offset the HDMI encoder is
+ * enable the HDMI engine
  */
-void r600_hdmi_init(struct drm_encoder *encoder)
+void r600_hdmi_enable(struct drm_encoder *encoder)
 {
+	struct drm_device *dev = encoder->dev;
+	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
 
-	switch (radeon_encoder->encoder_id) {
-	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
-		radeon_encoder->hdmi_offset = R600_HDMI_TMDS1;
-		break;
-
-	case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
-		switch (r600_audio_tmds_index(encoder)) {
-		case 0:
-			radeon_encoder->hdmi_offset = R600_HDMI_TMDS1;
-			break;
-		case 1:
-			radeon_encoder->hdmi_offset = R600_HDMI_TMDS2;
-			break;
-		default:
-			radeon_encoder->hdmi_offset = 0;
-			break;
+	if (!radeon_encoder->hdmi_offset) {
+		r600_hdmi_assign_block(encoder);
+		if (!radeon_encoder->hdmi_offset) {
+			dev_warn(rdev->dev, "Could not find HDMI block for "
+				 "0x%x encoder\n", radeon_encoder->encoder_id);
+			return;
 		}
-	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
-		radeon_encoder->hdmi_offset = R600_HDMI_TMDS2;
-		break;
-
-	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
-		radeon_encoder->hdmi_offset = R600_HDMI_DIG;
-		break;
-
-	default:
-		radeon_encoder->hdmi_offset = 0;
-		break;
 	}
 
-	DRM_DEBUG("using HDMI engine at offset 0x%04X for encoder 0x%x\n",
-		  radeon_encoder->hdmi_offset, radeon_encoder->encoder_id);
+	if (ASIC_IS_DCE32(rdev) && !ASIC_IS_DCE4(rdev)) {
+		WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0x1, ~0x1);
+	} else if (rdev->family >= CHIP_R600 && !ASIC_IS_DCE3(rdev)) {
+		int offset = radeon_encoder->hdmi_offset;
+		switch (radeon_encoder->encoder_id) {
+		case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
+			WREG32_P(AVIVO_TMDSA_CNTL, 0x4, ~0x4);
+			WREG32(offset + R600_HDMI_ENABLE, 0x101);
+			break;
+		case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
+			WREG32_P(AVIVO_LVTMA_CNTL, 0x4, ~0x4);
+			WREG32(offset + R600_HDMI_ENABLE, 0x105);
+			break;
+		default:
+			dev_err(rdev->dev, "Unknown HDMI output type\n");
+			break;
+		}
+	}
 
-	/* TODO: make this configureable */
-	radeon_encoder->hdmi_audio_workaround = 0;
+	DRM_DEBUG("Enabling HDMI interface @ 0x%04X for encoder 0x%x\n",
+		  radeon_encoder->hdmi_offset, radeon_encoder->encoder_id);
+}
+
+/*
+ * disable the HDMI engine
+ */
+void r600_hdmi_disable(struct drm_encoder *encoder)
+{
+	struct drm_device *dev = encoder->dev;
+	struct radeon_device *rdev = dev->dev_private;
+	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+
+	if (!radeon_encoder->hdmi_offset) {
+		dev_err(rdev->dev, "Disabling not enabled HDMI\n");
+		return;
+	}
+
+	DRM_DEBUG("Disabling HDMI interface @ 0x%04X for encoder 0x%x\n",
+		  radeon_encoder->hdmi_offset, radeon_encoder->encoder_id);
+
+	if (ASIC_IS_DCE32(rdev) && !ASIC_IS_DCE4(rdev)) {
+		WREG32_P(radeon_encoder->hdmi_config_offset + 0x4, 0, ~0x1);
+	} else if (rdev->family >= CHIP_R600 && !ASIC_IS_DCE3(rdev)) {
+		int offset = radeon_encoder->hdmi_offset;
+		switch (radeon_encoder->encoder_id) {
+		case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
+			WREG32_P(AVIVO_TMDSA_CNTL, 0, ~0x4);
+			WREG32(offset + R600_HDMI_ENABLE, 0);
+			break;
+		case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
+			WREG32_P(AVIVO_LVTMA_CNTL, 0, ~0x4);
+			WREG32(offset + R600_HDMI_ENABLE, 0);
+			break;
+		default:
+			dev_err(rdev->dev, "Unknown HDMI output type\n");
+			break;
+		}
+	}
+
+	radeon_encoder->hdmi_offset = 0;
+	radeon_encoder->hdmi_config_offset = 0;
 }
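For context on the r600_hdmi_infoframe_checksum() helper re-indented above: HDMI infoframe checksums are chosen so that the header bytes plus all payload bytes sum to zero modulo 256. A standalone sketch of the same arithmetic (illustrative Python, not driver code):

```python
# Infoframe checksum sketch: mirror of the driver's loop, which adds the
# packet type, version, and length to the payload bytes and then takes
# the two's-complement so the total wraps to 0 mod 256.
def infoframe_checksum(packet_type, version, length, payload):
    s = (packet_type + version + length + sum(payload[:length])) & 0xff
    return (0x100 - s) & 0xff
```

A sink can therefore validate a frame by summing every byte including the checksum and testing for zero.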
···
 	case CHIP_RS300:
 		switch (ddc_line) {
 		case RADEON_GPIO_DVI_DDC:
-			/* in theory this should be hw capable,
-			 * but it doesn't seem to work
-			 */
-			i2c.hw_capable = false;
+			i2c.hw_capable = true;
 			break;
 		default:
 			i2c.hw_capable = false;
···
 	p1pll->reference_div = RBIOS16(pll_info + 0x10);
 	p1pll->pll_out_min = RBIOS32(pll_info + 0x12);
 	p1pll->pll_out_max = RBIOS32(pll_info + 0x16);
+	p1pll->lcd_pll_out_min = p1pll->pll_out_min;
+	p1pll->lcd_pll_out_max = p1pll->pll_out_max;
 
 	if (rev > 9) {
 		p1pll->pll_in_min = RBIOS32(pll_info + 0x36);
+1-1
drivers/gpu/drm/radeon/radeon_connectors.c
···
 	if (radeon_connector->edid)
 		kfree(radeon_connector->edid);
 	if (radeon_dig_connector->dp_i2c_bus)
-		radeon_i2c_destroy_dp(radeon_dig_connector->dp_i2c_bus);
+		radeon_i2c_destroy(radeon_dig_connector->dp_i2c_bus);
 	kfree(radeon_connector->con_priv);
 	drm_sysfs_connector_remove(connector);
 	drm_connector_cleanup(connector);
+7-4
drivers/gpu/drm/radeon/radeon_cs.c
···
 		radeon_bo_list_fence(&parser->validated, parser->ib->fence);
 	}
 	radeon_bo_list_unreserve(&parser->validated);
-	for (i = 0; i < parser->nrelocs; i++) {
-		if (parser->relocs[i].gobj)
-			drm_gem_object_unreference_unlocked(parser->relocs[i].gobj);
+	if (parser->relocs != NULL) {
+		for (i = 0; i < parser->nrelocs; i++) {
+			if (parser->relocs[i].gobj)
+				drm_gem_object_unreference_unlocked(parser->relocs[i].gobj);
+		}
 	}
 	kfree(parser->track);
 	kfree(parser->relocs);
···
 	}
 	r = radeon_cs_parser_relocs(&parser);
 	if (r) {
-		DRM_ERROR("Failed to parse relocation !\n");
+		if (r != -ERESTARTSYS)
+			DRM_ERROR("Failed to parse relocation %d!\n", r);
 		radeon_cs_parser_fini(&parser, r);
 		mutex_unlock(&rdev->cs_mutex);
 		return r;
+38-199
drivers/gpu/drm/radeon/radeon_device.c
···
 #include <linux/vga_switcheroo.h>
 #include "radeon_reg.h"
 #include "radeon.h"
-#include "radeon_asic.h"
 #include "atom.h"
 
 /*
···
 
 }
 
+void radeon_update_bandwidth_info(struct radeon_device *rdev)
+{
+	fixed20_12 a;
+	u32 sclk, mclk;
+
+	if (rdev->flags & RADEON_IS_IGP) {
+		sclk = radeon_get_engine_clock(rdev);
+		mclk = rdev->clock.default_mclk;
+
+		a.full = rfixed_const(100);
+		rdev->pm.sclk.full = rfixed_const(sclk);
+		rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a);
+		rdev->pm.mclk.full = rfixed_const(mclk);
+		rdev->pm.mclk.full = rfixed_div(rdev->pm.mclk, a);
+
+		a.full = rfixed_const(16);
+		/* core_bandwidth = sclk(Mhz) * 16 */
+		rdev->pm.core_bandwidth.full = rfixed_div(rdev->pm.sclk, a);
+	} else {
+		sclk = radeon_get_engine_clock(rdev);
+		mclk = radeon_get_memory_clock(rdev);
+
+		a.full = rfixed_const(100);
+		rdev->pm.sclk.full = rfixed_const(sclk);
+		rdev->pm.sclk.full = rfixed_div(rdev->pm.sclk, a);
+		rdev->pm.mclk.full = rfixed_const(mclk);
+		rdev->pm.mclk.full = rfixed_div(rdev->pm.mclk, a);
+	}
+}
+
 bool radeon_boot_test_post_card(struct radeon_device *rdev)
 {
 	if (radeon_card_posted(rdev))
···
 	rdev->dummy_page.page = NULL;
 }
 
-
-/*
- * Registers accessors functions.
- */
-uint32_t radeon_invalid_rreg(struct radeon_device *rdev, uint32_t reg)
-{
-	DRM_ERROR("Invalid callback to read register 0x%04X\n", reg);
-	BUG_ON(1);
-	return 0;
-}
-
-void radeon_invalid_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
-{
-	DRM_ERROR("Invalid callback to write register 0x%04X with 0x%08X\n",
-		  reg, v);
-	BUG_ON(1);
-}
-
-void radeon_register_accessor_init(struct radeon_device *rdev)
-{
-	rdev->mc_rreg = &radeon_invalid_rreg;
-	rdev->mc_wreg = &radeon_invalid_wreg;
-	rdev->pll_rreg = &radeon_invalid_rreg;
-	rdev->pll_wreg = &radeon_invalid_wreg;
-	rdev->pciep_rreg = &radeon_invalid_rreg;
-	rdev->pciep_wreg = &radeon_invalid_wreg;
-
-	/* Don't change order as we are overridding accessor. */
-	if (rdev->family < CHIP_RV515) {
-		rdev->pcie_reg_mask = 0xff;
-	} else {
-		rdev->pcie_reg_mask = 0x7ff;
-	}
-	/* FIXME: not sure here */
-	if (rdev->family <= CHIP_R580) {
-		rdev->pll_rreg = &r100_pll_rreg;
-		rdev->pll_wreg = &r100_pll_wreg;
-	}
-	if (rdev->family >= CHIP_R420) {
-		rdev->mc_rreg = &r420_mc_rreg;
-		rdev->mc_wreg = &r420_mc_wreg;
-	}
-	if (rdev->family >= CHIP_RV515) {
-		rdev->mc_rreg = &rv515_mc_rreg;
-		rdev->mc_wreg = &rv515_mc_wreg;
-	}
-	if (rdev->family == CHIP_RS400 || rdev->family == CHIP_RS480) {
-		rdev->mc_rreg = &rs400_mc_rreg;
-		rdev->mc_wreg = &rs400_mc_wreg;
-	}
-	if (rdev->family == CHIP_RS690 || rdev->family == CHIP_RS740) {
-		rdev->mc_rreg = &rs690_mc_rreg;
-		rdev->mc_wreg = &rs690_mc_wreg;
-	}
-	if (rdev->family == CHIP_RS600) {
-		rdev->mc_rreg = &rs600_mc_rreg;
-		rdev->mc_wreg = &rs600_mc_wreg;
-	}
-	if ((rdev->family >= CHIP_R600) && (rdev->family <= CHIP_RV740)) {
-		rdev->pciep_rreg = &r600_pciep_rreg;
-		rdev->pciep_wreg = &r600_pciep_wreg;
-	}
-}
-
-
-/*
- * ASIC
- */
-int radeon_asic_init(struct radeon_device *rdev)
-{
-	radeon_register_accessor_init(rdev);
-	switch (rdev->family) {
-	case CHIP_R100:
-	case CHIP_RV100:
-	case CHIP_RS100:
-	case CHIP_RV200:
-	case CHIP_RS200:
-		rdev->asic = &r100_asic;
-		break;
-	case CHIP_R200:
-	case CHIP_RV250:
-	case CHIP_RS300:
-	case CHIP_RV280:
-		rdev->asic = &r200_asic;
-		break;
-	case CHIP_R300:
-	case CHIP_R350:
-	case CHIP_RV350:
-	case CHIP_RV380:
-		if (rdev->flags & RADEON_IS_PCIE)
-			rdev->asic = &r300_asic_pcie;
-		else
-			rdev->asic = &r300_asic;
-		break;
-	case CHIP_R420:
-	case CHIP_R423:
-	case CHIP_RV410:
-		rdev->asic = &r420_asic;
-		break;
-	case CHIP_RS400:
-	case CHIP_RS480:
-		rdev->asic = &rs400_asic;
-		break;
-	case CHIP_RS600:
-		rdev->asic = &rs600_asic;
-		break;
-	case CHIP_RS690:
-	case CHIP_RS740:
-		rdev->asic = &rs690_asic;
-		break;
-	case CHIP_RV515:
-		rdev->asic = &rv515_asic;
-		break;
-	case CHIP_R520:
-	case CHIP_RV530:
-	case CHIP_RV560:
-	case CHIP_RV570:
-	case CHIP_R580:
-		rdev->asic = &r520_asic;
-		break;
-	case CHIP_R600:
-	case CHIP_RV610:
-	case CHIP_RV630:
-	case CHIP_RV620:
-	case CHIP_RV635:
-	case CHIP_RV670:
-	case CHIP_RS780:
-	case CHIP_RS880:
-		rdev->asic = &r600_asic;
-		break;
-	case CHIP_RV770:
-	case CHIP_RV730:
-	case CHIP_RV710:
-	case CHIP_RV740:
-		rdev->asic = &rv770_asic;
-		break;
-	case CHIP_CEDAR:
-	case CHIP_REDWOOD:
-	case CHIP_JUNIPER:
-	case CHIP_CYPRESS:
-	case CHIP_HEMLOCK:
-		rdev->asic = &evergreen_asic;
-		break;
-	default:
-		/* FIXME: not supported yet */
-		return -EINVAL;
-	}
-
-	if (rdev->flags & RADEON_IS_IGP) {
-		rdev->asic->get_memory_clock = NULL;
-		rdev->asic->set_memory_clock = NULL;
-	}
-
-	return 0;
-}
-
-
-/*
- * Wrapper around modesetting bits.
- */
-int radeon_clocks_init(struct radeon_device *rdev)
-{
-	int r;
-
-	r = radeon_static_clocks_init(rdev->ddev);
-	if (r) {
-		return r;
-	}
-	DRM_INFO("Clocks initialized !\n");
-	return 0;
-}
-
-void radeon_clocks_fini(struct radeon_device *rdev)
-{
-}
 
 /* ATOM accessor methods */
 static uint32_t cail_pll_read(struct card_info *info, uint32_t reg)
···
 		VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
 	else
 		return VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM;
-}
-
-void radeon_agp_disable(struct radeon_device *rdev)
-{
-	rdev->flags &= ~RADEON_IS_AGP;
-	if (rdev->family >= CHIP_R600) {
-		DRM_INFO("Forcing AGP to PCIE mode\n");
-		rdev->flags |= RADEON_IS_PCIE;
-	} else if (rdev->family >= CHIP_RV515 ||
-		   rdev->family == CHIP_RV380 ||
-		   rdev->family == CHIP_RV410 ||
-		   rdev->family == CHIP_R423) {
-		DRM_INFO("Forcing AGP to PCIE mode\n");
-		rdev->flags |= RADEON_IS_PCIE;
-		rdev->asic->gart_tlb_flush = &rv370_pcie_gart_tlb_flush;
-		rdev->asic->gart_set_page = &rv370_pcie_gart_set_page;
-	} else {
-		DRM_INFO("Forcing AGP to PCI mode\n");
-		rdev->flags |= RADEON_IS_PCI;
-		rdev->asic->gart_tlb_flush = &r100_pci_gart_tlb_flush;
-		rdev->asic->gart_set_page = &r100_pci_gart_set_page;
-	}
-	rdev->mc.gtt_size = radeon_gart_size * 1024 * 1024;
 }
 
 void radeon_check_arguments(struct radeon_device *rdev)
···
 	if (r)
 		return r;
 	radeon_check_arguments(rdev);
+
+	/* all of the newer IGP chips have an internal gart
+	 * However some rs4xx report as AGP, so remove that here.
+	 */
+	if ((rdev->family >= CHIP_RS400) &&
+	    (rdev->flags & RADEON_IS_IGP)) {
+		rdev->flags &= ~RADEON_IS_AGP;
+	}
 
 	if (rdev->flags & RADEON_IS_AGP && radeon_agpmode == -1) {
 		radeon_agp_disable(rdev);
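The new `radeon_update_bandwidth_info()` above leans on the driver's 20.12 fixed-point helpers (`rfixed_const()`/`rfixed_div()`) to divide clocks, reported in 10 kHz units, by 100 to get MHz. A minimal sketch of that arithmetic, assuming a 12-bit fraction as the type name suggests (the struct and helper names here are illustrative stand-ins, not the driver's definitions):

```c
#include <stdint.h>

/* Illustrative 20.12 fixed-point type: 20 integer bits, 12 fraction bits. */
typedef struct { uint32_t full; } fx20_12;

/* Like rfixed_const(): place an integer in the integer bits. */
static fx20_12 fx_const(uint32_t v)
{
	return (fx20_12){ .full = v << 12 };
}

/* Like rfixed_div(): widen to 64 bits before shifting so the 12 extra
 * fraction bits of the dividend don't overflow. */
static fx20_12 fx_div(fx20_12 a, fx20_12 b)
{
	return (fx20_12){ .full = (uint32_t)(((uint64_t)a.full << 12) / b.full) };
}
```

With this, an engine clock of 30000 (i.e. 300 MHz in 10 kHz units) divided by `fx_const(100)` yields 300 in the integer bits, matching how the hunk fills `rdev->pm.sclk`.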
+51-17
drivers/gpu/drm/radeon/radeon_display.c
···
 
 	if (rdev->bios) {
 		if (rdev->is_atom_bios) {
-			if (rdev->family >= CHIP_R600)
+			ret = radeon_get_atom_connector_info_from_supported_devices_table(dev);
+			if (ret == false)
 				ret = radeon_get_atom_connector_info_from_object_table(dev);
-			else
-				ret = radeon_get_atom_connector_info_from_supported_devices_table(dev);
 		} else {
 			ret = radeon_get_legacy_connector_info_from_bios(dev);
 			if (ret == false)
···
 	uint32_t best_error = 0xffffffff;
 	uint32_t best_vco_diff = 1;
 	uint32_t post_div;
+	u32 pll_out_min, pll_out_max;
 
 	DRM_DEBUG("PLL freq %llu %u %u\n", freq, pll->min_ref_div, pll->max_ref_div);
 	freq = freq * 1000;
+
+	if (pll->flags & RADEON_PLL_IS_LCD) {
+		pll_out_min = pll->lcd_pll_out_min;
+		pll_out_max = pll->lcd_pll_out_max;
+	} else {
+		pll_out_min = pll->pll_out_min;
+		pll_out_max = pll->pll_out_max;
+	}
 
 	if (pll->flags & RADEON_PLL_USE_REF_DIV)
 		min_ref_div = max_ref_div = pll->reference_div;
···
 			tmp = (uint64_t)pll->reference_freq * feedback_div;
 			vco = radeon_div(tmp, ref_div);
 
-			if (vco < pll->pll_out_min) {
+			if (vco < pll_out_min) {
 				min_feed_div = feedback_div + 1;
 				continue;
-			} else if (vco > pll->pll_out_max) {
+			} else if (vco > pll_out_max) {
 				max_feed_div = feedback_div;
 				continue;
 			}
···
 {
 	fixed20_12 ffreq, max_error, error, pll_out, a;
 	u32 vco;
+	u32 pll_out_min, pll_out_max;
+
+	if (pll->flags & RADEON_PLL_IS_LCD) {
+		pll_out_min = pll->lcd_pll_out_min;
+		pll_out_max = pll->lcd_pll_out_max;
+	} else {
+		pll_out_min = pll->pll_out_min;
+		pll_out_max = pll->pll_out_max;
+	}
 
 	ffreq.full = rfixed_const(freq);
 	/* max_error = ffreq * 0.0025; */
···
 		vco = pll->reference_freq * (((*fb_div) * 10) + (*fb_div_frac));
 		vco = vco / ((*ref_div) * 10);
 
-		if ((vco < pll->pll_out_min) || (vco > pll->pll_out_max))
+		if ((vco < pll_out_min) || (vco > pll_out_max))
 			continue;
 
 		/* pll_out = vco / post_div; */
···
 {
 	u32 fb_div = 0, fb_div_frac = 0, post_div = 0, ref_div = 0;
 	u32 best_freq = 0, vco_frequency;
+	u32 pll_out_min, pll_out_max;
+
+	if (pll->flags & RADEON_PLL_IS_LCD) {
+		pll_out_min = pll->lcd_pll_out_min;
+		pll_out_max = pll->lcd_pll_out_max;
+	} else {
+		pll_out_min = pll->pll_out_min;
+		pll_out_max = pll->pll_out_max;
+	}
 
 	/* freq = freq / 10; */
 	do_div(freq, 10);
···
 			goto done;
 
 		vco_frequency = freq * post_div;
-		if ((vco_frequency < pll->pll_out_min) || (vco_frequency > pll->pll_out_max))
+		if ((vco_frequency < pll_out_min) || (vco_frequency > pll_out_max))
 			goto done;
 
 		if (pll->flags & RADEON_PLL_USE_REF_DIV) {
···
 				continue;
 
 			vco_frequency = freq * post_div;
-			if ((vco_frequency < pll->pll_out_min) || (vco_frequency > pll->pll_out_max))
+			if ((vco_frequency < pll_out_min) || (vco_frequency > pll_out_max))
 				continue;
 			if (pll->flags & RADEON_PLL_USE_REF_DIV) {
 				ref_div = pll->reference_div;
···
 	return 0;
 }
 
+void radeon_update_display_priority(struct radeon_device *rdev)
+{
+	/* adjustment options for the display watermarks */
+	if ((radeon_disp_priority == 0) || (radeon_disp_priority > 2)) {
+		/* set display priority to high for r3xx, rv515 chips
+		 * this avoids flickering due to underflow to the
+		 * display controllers during heavy acceleration.
+		 */
+		if (ASIC_IS_R300(rdev) || (rdev->family == CHIP_RV515))
+			rdev->disp_priority = 2;
+		else
+			rdev->disp_priority = 0;
+	} else
+		rdev->disp_priority = radeon_disp_priority;
+
+}
+
 int radeon_modeset_init(struct radeon_device *rdev)
 {
 	int i;
···
 	if (!rdev->is_atom_bios) {
 		/* check for hardcoded EDID in BIOS */
 		radeon_combios_check_hardcoded_edid(rdev);
-	}
-
-	if (rdev->flags & RADEON_SINGLE_CRTC)
-		rdev->num_crtc = 1;
-	else {
-		if (ASIC_IS_DCE4(rdev))
-			rdev->num_crtc = 6;
-		else
-			rdev->num_crtc = 2;
 	}
 
 	/* allocate crtcs */
···
  * 1.30- Add support for occlusion queries
  * 1.31- Add support for num Z pipes from GET_PARAM
  * 1.32- fixes for rv740 setup
+ * 1.33- Add r6xx/r7xx const buffer support
  */
 #define DRIVER_MAJOR		1
-#define DRIVER_MINOR		32
+#define DRIVER_MINOR		33
 #define DRIVER_PATCHLEVEL	0
 
 enum radeon_cp_microcode_version {
+64-57
drivers/gpu/drm/radeon/radeon_encoders.c
···
 	}
 
 	if (ASIC_IS_DCE3(rdev) &&
-	    (radeon_encoder->active_device & (ATOM_DEVICE_DFP_SUPPORT))) {
+	    (radeon_encoder->active_device & (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT))) {
 		struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
 		radeon_dp_set_link_config(connector, mode);
 	}
···
 		break;
 	}
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return;
 
 	switch (frev) {
 	case 1:
···
 	}
 
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
-	r600_hdmi_enable(encoder, hdmi_detected);
 }
 
 int
···
 	struct radeon_connector_atom_dig *dig_connector =
 		radeon_get_atom_connector_priv_from_encoder(encoder);
 	union dig_encoder_control args;
-	int index = 0, num = 0;
+	int index = 0;
 	uint8_t frev, crev;
 
 	if (!dig || !dig_connector)
···
 		else
 			index = GetIndexIntoMasterTable(COMMAND, DIG1EncoderControl);
 	}
-	num = dig->dig_encoder + 1;
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return;
 
 	args.v1.ucAction = action;
 	args.v1.usPixelClock = cpu_to_le16(radeon_encoder->pixel_clock / 10);
···
 	struct drm_connector *connector;
 	struct radeon_connector *radeon_connector;
 	union dig_transmitter_control args;
-	int index = 0, num = 0;
+	int index = 0;
 	uint8_t frev, crev;
 	bool is_dp = false;
 	int pll_id = 0;
···
 		}
 	}
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return;
 
 	args.v1.ucAction = action;
 	if (action == ATOM_TRANSMITTER_ACTION_INIT) {
···
 		switch (radeon_encoder->encoder_id) {
 		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
 			args.v3.acConfig.ucTransmitterSel = 0;
-			num = 0;
 			break;
 		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
 			args.v3.acConfig.ucTransmitterSel = 1;
-			num = 1;
 			break;
 		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
 			args.v3.acConfig.ucTransmitterSel = 2;
-			num = 2;
 			break;
 		}
···
 			args.v3.acConfig.fCoherentMode = 1;
 		}
 	} else if (ASIC_IS_DCE32(rdev)) {
-		if (dig->dig_encoder == 1)
-			args.v2.acConfig.ucEncoderSel = 1;
+		args.v2.acConfig.ucEncoderSel = dig->dig_encoder;
 		if (dig_connector->linkb)
 			args.v2.acConfig.ucLinkSel = 1;
 
 		switch (radeon_encoder->encoder_id) {
 		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
 			args.v2.acConfig.ucTransmitterSel = 0;
-			num = 0;
 			break;
 		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
 			args.v2.acConfig.ucTransmitterSel = 1;
-			num = 1;
 			break;
 		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
 			args.v2.acConfig.ucTransmitterSel = 2;
-			num = 2;
 			break;
 		}
···
 		else
 			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_DIG1_ENCODER;
 
-		switch (radeon_encoder->encoder_id) {
-		case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
-			if (rdev->flags & RADEON_IS_IGP) {
-				if (radeon_encoder->pixel_clock > 165000) {
-					if (dig_connector->igp_lane_info & 0x3)
-						args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_7;
-					else if (dig_connector->igp_lane_info & 0xc)
-						args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_15;
-				} else {
-					if (dig_connector->igp_lane_info & 0x1)
-						args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_3;
-					else if (dig_connector->igp_lane_info & 0x2)
-						args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_4_7;
-					else if (dig_connector->igp_lane_info & 0x4)
-						args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_11;
-					else if (dig_connector->igp_lane_info & 0x8)
-						args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_12_15;
-				}
+		if ((rdev->flags & RADEON_IS_IGP) &&
+		    (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_UNIPHY)) {
+			if (is_dp || (radeon_encoder->pixel_clock <= 165000)) {
+				if (dig_connector->igp_lane_info & 0x1)
+					args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_3;
+				else if (dig_connector->igp_lane_info & 0x2)
+					args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_4_7;
+				else if (dig_connector->igp_lane_info & 0x4)
+					args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_11;
+				else if (dig_connector->igp_lane_info & 0x8)
+					args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_12_15;
+			} else {
+				if (dig_connector->igp_lane_info & 0x3)
+					args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_0_7;
+				else if (dig_connector->igp_lane_info & 0xc)
+					args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LANE_8_15;
 			}
-			break;
 		}
-
-		if (radeon_encoder->pixel_clock > 165000)
-			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_8LANE_LINK;
 
 		if (dig_connector->linkb)
 			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_LINKB;
···
 	else if (radeon_encoder->devices & (ATOM_DEVICE_DFP_SUPPORT)) {
 		if (dig->coherent_mode)
 			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_COHERENT;
+		if (radeon_encoder->pixel_clock > 165000)
+			args.v1.ucConfig |= ATOM_TRANSMITTER_CONFIG_8LANE_LINK;
 	}
 }
···
 	if (is_dig) {
 		switch (mode) {
 		case DRM_MODE_DPMS_ON:
-			atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
-			{
+			if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) {
 				struct drm_connector *connector = radeon_get_connector_for_encoder(encoder);
+
 				dp_link_train(encoder, connector);
+				if (ASIC_IS_DCE4(rdev))
+					atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_ON);
 			}
+			if (!ASIC_IS_DCE4(rdev))
+				atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
 			break;
 		case DRM_MODE_DPMS_STANDBY:
 		case DRM_MODE_DPMS_SUSPEND:
 		case DRM_MODE_DPMS_OFF:
-			atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0);
+			if (!ASIC_IS_DCE4(rdev))
+				atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT, 0, 0);
+			if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_DP) {
+				if (ASIC_IS_DCE4(rdev))
+					atombios_dig_encoder_setup(encoder, ATOM_ENCODER_CMD_DP_VIDEO_OFF);
+			}
 			break;
 		}
 	} else {
···
 
 	memset(&args, 0, sizeof(args));
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return;
 
 	switch (frev) {
 	case 1:
···
 	}
 
 	atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
+
+	/* update scratch regs with new routing */
+	radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
 }
 
 static void
···
 	struct drm_device *dev = encoder->dev;
 	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
-	struct radeon_crtc *radeon_crtc = to_radeon_crtc(encoder->crtc);
 
-	if (radeon_encoder->active_device &
-	    (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) {
-		struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
-		if (dig)
-			dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder);
-	}
 	radeon_encoder->pixel_clock = adjusted_mode->clock;
-
-	radeon_atombios_encoder_crtc_scratch_regs(encoder, radeon_crtc->crtc_id);
-	atombios_set_encoder_crtc_source(encoder);
 
 	if (ASIC_IS_AVIVO(rdev)) {
 		if (radeon_encoder->active_device & (ATOM_DEVICE_CV_SUPPORT | ATOM_DEVICE_TV_SUPPORT))
···
 	}
 	atombios_apply_encoder_quirks(encoder, adjusted_mode);
 
-	/* XXX */
-	if (!ASIC_IS_DCE4(rdev))
+	if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI) {
+		r600_hdmi_enable(encoder);
 		r600_hdmi_setmode(encoder, adjusted_mode);
+	}
 }
 
 static bool
···
 
 	memset(&args, 0, sizeof(args));
 
-	atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev);
+	if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev))
+		return false;
 
 	args.sDacload.ucMisc = 0;
···
 
 static void radeon_atom_encoder_prepare(struct drm_encoder *encoder)
 {
+	struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+
+	if (radeon_encoder->active_device &
+	    (ATOM_DEVICE_DFP_SUPPORT | ATOM_DEVICE_LCD_SUPPORT)) {
+		struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
+		if (dig)
+			dig->dig_encoder = radeon_atom_pick_dig_encoder(encoder);
+	}
+
 	radeon_atom_output_lock(encoder, true);
 	radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
+
+	/* this is needed for the pll/ss setup to work correctly in some cases */
+	atombios_set_encoder_crtc_source(encoder);
 }
 
 static void radeon_atom_encoder_commit(struct drm_encoder *encoder)
···
 	radeon_atom_encoder_dpms(encoder, DRM_MODE_DPMS_OFF);
 
 	if (radeon_encoder_is_digital(encoder)) {
+		if (atombios_get_encoder_mode(encoder) == ATOM_ENCODER_MODE_HDMI)
+			r600_hdmi_disable(encoder);
 		dig = radeon_encoder->enc_priv;
 		dig->dig_encoder = -1;
 	}
···
 		drm_encoder_helper_add(encoder, &radeon_atom_dig_helper_funcs);
 		break;
 	}
-
-	r600_hdmi_init(encoder);
 }
+73-80
drivers/gpu/drm/radeon/radeon_i2c.c
···
 	return false;
 }
 
+/* bit banging i2c */
 
 static void radeon_i2c_do_lock(struct radeon_i2c_chan *i2c, int lock_state)
 {
···
 	WREG32(rec->en_data_reg, val);
 }
 
+static int pre_xfer(struct i2c_adapter *i2c_adap)
+{
+	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
+
+	radeon_i2c_do_lock(i2c, 1);
+
+	return 0;
+}
+
+static void post_xfer(struct i2c_adapter *i2c_adap)
+{
+	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
+
+	radeon_i2c_do_lock(i2c, 0);
+}
+
+/* hw i2c */
+
 static u32 radeon_get_i2c_prescale(struct radeon_device *rdev)
 {
-	struct radeon_pll *spll = &rdev->clock.spll;
 	u32 sclk = radeon_get_engine_clock(rdev);
 	u32 prescale = 0;
-	u32 n, m;
-	u8 loop;
+	u32 nm;
+	u8 n, m, loop;
 	int i2c_clock;
 
 	switch (rdev->family) {
···
 	case CHIP_R300:
 	case CHIP_R350:
 	case CHIP_RV350:
-		n = (spll->reference_freq) / (4 * 6);
+		i2c_clock = 60;
+		nm = (sclk * 10) / (i2c_clock * 4);
 		for (loop = 1; loop < 255; loop++) {
-			if ((loop * (loop - 1)) > n)
+			if ((nm / loop) < loop)
 				break;
 		}
-		m = loop - 1;
-		prescale = m | (loop << 8);
+		n = loop - 1;
+		m = loop - 2;
+		prescale = m | (n << 8);
 		break;
 	case CHIP_RV380:
 	case CHIP_RS400:
···
 	case CHIP_R420:
 	case CHIP_R423:
 	case CHIP_RV410:
-		sclk = radeon_get_engine_clock(rdev);
 		prescale = (((sclk * 10)/(4 * 128 * 100) + 1) << 8) + 128;
 		break;
 	case CHIP_RS600:
···
 	case CHIP_RV570:
 	case CHIP_R580:
 		i2c_clock = 50;
-		sclk = radeon_get_engine_clock(rdev);
 		if (rdev->family == CHIP_R520)
 			prescale = (127 << 8) + ((sclk * 10) / (4 * 127 * i2c_clock));
 		else
···
 	prescale = radeon_get_i2c_prescale(rdev);
 
 	reg = ((prescale << RADEON_I2C_PRESCALE_SHIFT) |
+	       RADEON_I2C_DRIVE_EN |
 	       RADEON_I2C_START |
 	       RADEON_I2C_STOP |
 	       RADEON_I2C_GO);
···
 	return ret;
 }
 
-static int radeon_sw_i2c_xfer(struct i2c_adapter *i2c_adap,
+static int radeon_hw_i2c_xfer(struct i2c_adapter *i2c_adap,
 			      struct i2c_msg *msgs, int num)
-{
-	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
-	int ret;
-
-	radeon_i2c_do_lock(i2c, 1);
-	ret = i2c_transfer(&i2c->algo.radeon.bit_adapter, msgs, num);
-	radeon_i2c_do_lock(i2c, 0);
-
-	return ret;
-}
-
-static int radeon_i2c_xfer(struct i2c_adapter *i2c_adap,
-			   struct i2c_msg *msgs, int num)
 {
 	struct radeon_i2c_chan *i2c = i2c_get_adapdata(i2c_adap);
 	struct radeon_device *rdev = i2c->dev->dev_private;
 	struct radeon_i2c_bus_rec *rec = &i2c->rec;
-	int ret;
+	int ret = 0;
 
 	switch (rdev->family) {
 	case CHIP_R100:
···
 	case CHIP_RV410:
 	case CHIP_RS400:
 	case CHIP_RS480:
-		if (rec->hw_capable)
-			ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
-		else
-			ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
+		ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_RS600:
 	case CHIP_RS690:
 	case CHIP_RS740:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_RV515:
 	case CHIP_R520:
···
 	case CHIP_RV560:
 	case CHIP_RV570:
 	case CHIP_R580:
-		if (rec->hw_capable) {
-			if (rec->mm_i2c)
-				ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
-			else
-				ret = r500_hw_i2c_xfer(i2c_adap, msgs, num);
-		} else
-			ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
+		if (rec->mm_i2c)
+			ret = r100_hw_i2c_xfer(i2c_adap, msgs, num);
+		else
+			ret = r500_hw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_R600:
 	case CHIP_RV610:
 	case CHIP_RV630:
 	case CHIP_RV670:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_RV620:
 	case CHIP_RV635:
···
 	case CHIP_RV710:
 	case CHIP_RV740:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	case CHIP_CEDAR:
 	case CHIP_REDWOOD:
···
 	case CHIP_CYPRESS:
 	case CHIP_HEMLOCK:
 		/* XXX fill in hw i2c implementation */
-		ret = radeon_sw_i2c_xfer(i2c_adap, msgs, num);
 		break;
 	default:
 		DRM_ERROR("i2c: unhandled radeon chip\n");
···
 	return ret;
 }
 
-static u32 radeon_i2c_func(struct i2c_adapter *adap)
+static u32 radeon_hw_i2c_func(struct i2c_adapter *adap)
 {
 	return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
 }
 
 static const struct i2c_algorithm radeon_i2c_algo = {
-	.master_xfer = radeon_i2c_xfer,
-	.functionality = radeon_i2c_func,
+	.master_xfer = radeon_hw_i2c_xfer,
+	.functionality = radeon_hw_i2c_func,
 };
 
 struct radeon_i2c_chan *radeon_i2c_create(struct drm_device *dev,
 					  struct radeon_i2c_bus_rec *rec,
 					  const char *name)
 {
+	struct radeon_device *rdev = dev->dev_private;
 	struct radeon_i2c_chan *i2c;
 	int ret;
···
 	if (i2c == NULL)
 		return NULL;
 
-	/* set the internal bit adapter */
-	i2c->algo.radeon.bit_adapter.owner = THIS_MODULE;
-	i2c_set_adapdata(&i2c->algo.radeon.bit_adapter, i2c);
-	sprintf(i2c->algo.radeon.bit_adapter.name, "Radeon internal i2c bit bus %s", name);
-	i2c->algo.radeon.bit_adapter.algo_data = &i2c->algo.radeon.bit_data;
-	i2c->algo.radeon.bit_data.setsda = set_data;
-	i2c->algo.radeon.bit_data.setscl = set_clock;
-	i2c->algo.radeon.bit_data.getsda = get_data;
-	i2c->algo.radeon.bit_data.getscl = get_clock;
-	i2c->algo.radeon.bit_data.udelay = 20;
-	/* vesa says 2.2 ms is enough, 1 jiffy doesn't seem to always
-	 * make this, 2 jiffies is a lot more reliable */
-	i2c->algo.radeon.bit_data.timeout = 2;
-	i2c->algo.radeon.bit_data.data = i2c;
-	ret = i2c_bit_add_bus(&i2c->algo.radeon.bit_adapter);
-	if (ret) {
-		DRM_ERROR("Failed to register internal bit i2c %s\n", name);
-		goto out_free;
-	}
-	/* set the radeon i2c adapter */
-	i2c->dev = dev;
 	i2c->rec = *rec;
 	i2c->adapter.owner = THIS_MODULE;
+	i2c->dev = dev;
 	i2c_set_adapdata(&i2c->adapter, i2c);
-	sprintf(i2c->adapter.name, "Radeon i2c %s", name);
-	i2c->adapter.algo_data = &i2c->algo.radeon;
-	i2c->adapter.algo = &radeon_i2c_algo;
-	ret = i2c_add_adapter(&i2c->adapter);
-	if (ret) {
-		DRM_ERROR("Failed to register i2c %s\n", name);
-		goto out_free;
+	if (rec->mm_i2c ||
+	    (rec->hw_capable &&
+	     radeon_hw_i2c &&
+	     ((rdev->family <= CHIP_RS480) ||
+	      ((rdev->family >= CHIP_RV515) && (rdev->family <= CHIP_R580))))) {
+		/* set the radeon hw i2c adapter */
+		sprintf(i2c->adapter.name, "Radeon i2c hw bus %s", name);
+		i2c->adapter.algo = &radeon_i2c_algo;
+		ret = i2c_add_adapter(&i2c->adapter);
+		if (ret) {
+			DRM_ERROR("Failed to register hw i2c %s\n", name);
+			goto out_free;
+		}
+	} else {
+		/* set the radeon bit adapter */
+		sprintf(i2c->adapter.name, "Radeon i2c bit bus %s", name);
+		i2c->adapter.algo_data = &i2c->algo.bit;
+		i2c->algo.bit.pre_xfer = pre_xfer;
+		i2c->algo.bit.post_xfer = post_xfer;
+		i2c->algo.bit.setsda = set_data;
+		i2c->algo.bit.setscl = set_clock;
+		i2c->algo.bit.getsda = get_data;
+		i2c->algo.bit.getscl = get_clock;
+		i2c->algo.bit.udelay = 20;
+		/* vesa says 2.2 ms is enough, 1 jiffy doesn't seem to always
+		 * make this, 2 jiffies is a lot more reliable */
+		i2c->algo.bit.timeout = 2;
+		i2c->algo.bit.data = i2c;
+		ret = i2c_bit_add_bus(&i2c->adapter);
+		if (ret) {
+			DRM_ERROR("Failed to register bit i2c %s\n", name);
+			goto out_free;
+		}
 	}
 
 	return i2c;
···
 {
 	if (!i2c)
 		return;
-	i2c_del_adapter(&i2c->algo.radeon.bit_adapter);
-	i2c_del_adapter(&i2c->adapter);
-	kfree(i2c);
-}
-
-void radeon_i2c_destroy_dp(struct radeon_i2c_chan *i2c)
-{
-	if (!i2c)
-		return;
-
 	i2c_del_adapter(&i2c->adapter);
 	kfree(i2c);
 }
+9-13
drivers/gpu/drm/radeon/radeon_irq_kms.c
···67676868 /* Disable *all* interrupts */6969 rdev->irq.sw_int = false;7070- for (i = 0; i < 2; i++) {7070+ for (i = 0; i < rdev->num_crtc; i++)7171 rdev->irq.crtc_vblank_int[i] = false;7272- }7272+ for (i = 0; i < 6; i++)7373+ rdev->irq.hpd[i] = false;7374 radeon_irq_set(rdev);7475 /* Clear bits */7576 radeon_irq_process(rdev);···9695 }9796 /* Disable *all* interrupts */9897 rdev->irq.sw_int = false;9999- for (i = 0; i < 2; i++) {9898+ for (i = 0; i < rdev->num_crtc; i++)10099 rdev->irq.crtc_vblank_int[i] = false;100100+ for (i = 0; i < 6; i++)101101 rdev->irq.hpd[i] = false;102102- }103102 radeon_irq_set(rdev);104103}105104106105int radeon_irq_kms_init(struct radeon_device *rdev)107106{108107 int r = 0;109109- int num_crtc = 2;110108111111- if (rdev->flags & RADEON_SINGLE_CRTC)112112- num_crtc = 1;113109 spin_lock_init(&rdev->irq.sw_lock);114114- r = drm_vblank_init(rdev->ddev, num_crtc);110110+ r = drm_vblank_init(rdev->ddev, rdev->num_crtc);115111 if (r) {116112 return r;117113 }118114 /* enable msi */119115 rdev->msi_enabled = 0;120120- /* MSIs don't seem to work on my rs780;121121- * not sure about rs880 or other rs780s.122122- * Needs more investigation.116116+ /* MSIs don't seem to work reliably on all IGP117117+ * chips. Disable MSI on them for now.123118 */124119 if ((rdev->family >= CHIP_RV380) &&125125- (rdev->family != CHIP_RS780) &&126126- (rdev->family != CHIP_RS880)) {120120+ (!(rdev->flags & RADEON_IS_IGP))) {127121 int ret = pci_enable_msi(rdev->pdev);128122 if (!ret) {129123 rdev->msi_enabled = 1;
+8
drivers/gpu/drm/radeon/radeon_legacy_crtc.c
···603603 ? RADEON_CRTC2_INTERLACE_EN604604 : 0));605605606606+ /* rs4xx chips seem to like to have the crtc enabled when the timing is set */607607+ if ((rdev->family == CHIP_RS400) || (rdev->family == CHIP_RS480))608608+ crtc2_gen_cntl |= RADEON_CRTC2_EN;609609+606610 disp2_merge_cntl = RREG32(RADEON_DISP2_MERGE_CNTL);607611 disp2_merge_cntl &= ~RADEON_DISP2_RGB_OFFSET_EN;608612···633629 | ((mode->flags & DRM_MODE_FLAG_INTERLACE)634630 ? RADEON_CRTC_INTERLACE_EN635631 : 0));632632+633633+ /* rs4xx chips seem to like to have the crtc enabled when the timing is set */634634+ if ((rdev->family == CHIP_RS400) || (rdev->family == CHIP_RS480))635635+ crtc_gen_cntl |= RADEON_CRTC_EN;636636637637 crtc_ext_cntl = RREG32(RADEON_CRTC_EXT_CNTL);638638 crtc_ext_cntl |= (RADEON_XCRT_CNT_EN |
···185185 return 0;186186 }187187 radeon_ttm_placement_from_domain(bo, domain);188188- /* force to pin into visible video ram */189189- bo->placement.lpfn = bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;188188+ if (domain == RADEON_GEM_DOMAIN_VRAM) {189189+ /* force to pin into visible video ram */190190+ bo->placement.lpfn = bo->rdev->mc.visible_vram_size >> PAGE_SHIFT;191191+ }190192 for (i = 0; i < bo->placement.num_placement; i++)191193 bo->placements[i] |= TTM_PL_FLAG_NO_EVICT;192194 r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+37-9
drivers/gpu/drm/radeon/radeon_pm.c
···2828#define RADEON_RECLOCK_DELAY_MS 2002929#define RADEON_WAIT_VBLANK_TIMEOUT 20030303131+static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish);3132static void radeon_pm_set_clocks_locked(struct radeon_device *rdev);3233static void radeon_pm_set_clocks(struct radeon_device *rdev);3334static void radeon_pm_idle_work_handler(struct work_struct *work);···180179 rdev->pm.requested_power_state->non_clock_info.pcie_lanes);181180}182181182182+static inline void radeon_sync_with_vblank(struct radeon_device *rdev)183183+{184184+ if (rdev->pm.active_crtcs) {185185+ rdev->pm.vblank_sync = false;186186+ wait_event_timeout(187187+ rdev->irq.vblank_queue, rdev->pm.vblank_sync,188188+ msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));189189+ }190190+}191191+183192static void radeon_set_power_state(struct radeon_device *rdev)184193{185194 /* if *_clock_mode are the same, *_power_state are as well */···200189 rdev->pm.requested_clock_mode->sclk,201190 rdev->pm.requested_clock_mode->mclk,202191 rdev->pm.requested_power_state->non_clock_info.pcie_lanes);192192+203193 /* set pcie lanes */194194+ /* TODO */195195+204196 /* set voltage */197197+ /* TODO */198198+205199 /* set engine clock */200200+ radeon_sync_with_vblank(rdev);201201+ radeon_pm_debug_check_in_vbl(rdev, false);206202 radeon_set_engine_clock(rdev, rdev->pm.requested_clock_mode->sclk);203203+ radeon_pm_debug_check_in_vbl(rdev, true);204204+205205+#if 0207206 /* set memory clock */207207+ if (rdev->asic->set_memory_clock) {208208+ radeon_sync_with_vblank(rdev);209209+ radeon_pm_debug_check_in_vbl(rdev, false);210210+ radeon_set_memory_clock(rdev, rdev->pm.requested_clock_mode->mclk);211211+ radeon_pm_debug_check_in_vbl(rdev, true);212212+ }213213+#endif208214209215 rdev->pm.current_power_state = rdev->pm.requested_power_state;210216 rdev->pm.current_clock_mode = rdev->pm.requested_clock_mode;···257229 return 0;258230}259231232232+void radeon_pm_fini(struct radeon_device *rdev)233233+{234234+ if 
(rdev->pm.i2c_bus)235235+ radeon_i2c_destroy(rdev->pm.i2c_bus);236236+}237237+260238void radeon_pm_compute_clocks(struct radeon_device *rdev)261239{262240 struct drm_device *ddev = rdev->ddev;···279245 list_for_each_entry(connector,280246 &ddev->mode_config.connector_list, head) {281247 if (connector->encoder &&282282- connector->dpms != DRM_MODE_DPMS_OFF) {248248+ connector->encoder->crtc &&249249+ connector->dpms != DRM_MODE_DPMS_OFF) {283250 radeon_crtc = to_radeon_crtc(connector->encoder->crtc);284251 rdev->pm.active_crtcs |= (1 << radeon_crtc->crtc_id);285252 ++count;···368333 break;369334 }370335371371- /* check if we are in vblank */372372- radeon_pm_debug_check_in_vbl(rdev, false);373336 radeon_set_power_state(rdev);374374- radeon_pm_debug_check_in_vbl(rdev, true);375337 rdev->pm.planned_action = PM_ACTION_NONE;376338}377339···385353 rdev->pm.req_vblank |= (1 << 1);386354 drm_vblank_get(rdev->ddev, 1);387355 }388388- if (rdev->pm.active_crtcs)389389- wait_event_interruptible_timeout(390390- rdev->irq.vblank_queue, 0,391391- msecs_to_jiffies(RADEON_WAIT_VBLANK_TIMEOUT));356356+ radeon_pm_set_clocks_locked(rdev);392357 if (rdev->pm.req_vblank & (1 << 0)) {393358 rdev->pm.req_vblank &= ~(1 << 0);394359 drm_vblank_put(rdev->ddev, 0);···395366 drm_vblank_put(rdev->ddev, 1);396367 }397368398398- radeon_pm_set_clocks_locked(rdev);399369 mutex_unlock(&rdev->cp.mutex);400370}401371
···217217 depends on HWMON && I2C218218 help219219 If you say yes here you get support for the aSC7621220220- family of SMBus sensors chip found on most Intel X48, X38, 975,221221- 965 and 945 desktop boards. Currently supported chips:220220+ family of SMBus sensors chip found on most Intel X38, X48, X58,221221+ 945, 965 and 975 desktop boards. Currently supported chips:222222 aSC7621223223 aSC7621a224224
+2-2
drivers/hwmon/coretemp.c
···228228 if (err) {229229 dev_warn(dev,230230 "Unable to access MSR 0xEE, for Tjmax, left"231231- " at default");231231+ " at default\n");232232 } else if (eax & 0x40000000) {233233 tjmax = tjmax_ee;234234 }···466466 family 6 CPU */467467 if ((c->x86 == 0x6) && (c->x86_model > 0xf))468468 printk(KERN_WARNING DRVNAME ": Unknown CPU "469469- "model %x\n", c->x86_model);469469+ "model 0x%x\n", c->x86_model);470470 continue;471471 }472472
···695695 if (irqd)696696 disable_irq(hwif->irq);697697698698- rc = ide_port_wait_ready(hwif);699699- if (rc == -ENODEV) {700700- printk(KERN_INFO "%s: no devices on the port\n", hwif->name);701701- goto out;702702- } else if (rc == -EBUSY)703703- printk(KERN_ERR "%s: not ready before the probe\n", hwif->name);704704- else705705- rc = -ENODEV;698698+ if (ide_port_wait_ready(hwif) == -EBUSY)699699+ printk(KERN_DEBUG "%s: Wait for ready failed before probe !\n", hwif->name);706700707701 /*708702 * Second drive should only exist if first drive was found,···707713 if (drive->dev_flags & IDE_DFLAG_PRESENT)708714 rc = 0;709715 }710710-out:716716+711717 /*712718 * Use cached IRQ number. It might be (and is...) changed by probe713719 * code above
···5050 handler.5151*/52525353-static int avma1cs_config(struct pcmcia_device *link);5353+static int avma1cs_config(struct pcmcia_device *link) __devinit ;5454static void avma1cs_release(struct pcmcia_device *link);55555656/*···5959 needed to manage one actual PCMCIA card.6060*/61616262-static void avma1cs_detach(struct pcmcia_device *p_dev);6262+static void avma1cs_detach(struct pcmcia_device *p_dev) __devexit ;636364646565/*···9999100100======================================================================*/101101102102-static int avma1cs_probe(struct pcmcia_device *p_dev)102102+static int __devinit avma1cs_probe(struct pcmcia_device *p_dev)103103{104104 local_info_t *local;105105···140140141141======================================================================*/142142143143-static void avma1cs_detach(struct pcmcia_device *link)143143+static void __devexit avma1cs_detach(struct pcmcia_device *link)144144{145145 dev_dbg(&link->dev, "avma1cs_detach(0x%p)\n", link);146146 avma1cs_release(link);···174174}175175176176177177-static int avma1cs_config(struct pcmcia_device *link)177177+static int __devinit avma1cs_config(struct pcmcia_device *link)178178{179179 local_info_t *dev;180180 int i;···282282 .name = "avma1_cs",283283 },284284 .probe = avma1cs_probe,285285- .remove = avma1cs_detach,285285+ .remove = __devexit_p(avma1cs_detach),286286 .id_table = avma1cs_ids,287287};288288
+6-6
drivers/isdn/hisax/elsa_cs.c
···7676 handler.7777*/78787979-static int elsa_cs_config(struct pcmcia_device *link);7979+static int elsa_cs_config(struct pcmcia_device *link) __devinit ;8080static void elsa_cs_release(struct pcmcia_device *link);81818282/*···8585 needed to manage one actual PCMCIA card.8686*/87878888-static void elsa_cs_detach(struct pcmcia_device *p_dev);8888+static void elsa_cs_detach(struct pcmcia_device *p_dev) __devexit;89899090/*9191 A driver needs to provide a dev_node_t structure for each device···121121122122======================================================================*/123123124124-static int elsa_cs_probe(struct pcmcia_device *link)124124+static int __devinit elsa_cs_probe(struct pcmcia_device *link)125125{126126 local_info_t *local;127127···166166167167======================================================================*/168168169169-static void elsa_cs_detach(struct pcmcia_device *link)169169+static void __devexit elsa_cs_detach(struct pcmcia_device *link)170170{171171 local_info_t *info = link->priv;172172···210210 return -ENODEV;211211}212212213213-static int elsa_cs_config(struct pcmcia_device *link)213213+static int __devinit elsa_cs_config(struct pcmcia_device *link)214214{215215 local_info_t *dev;216216 int i;···327327 .name = "elsa_cs",328328 },329329 .probe = elsa_cs_probe,330330- .remove = elsa_cs_detach,330330+ .remove = __devexit_p(elsa_cs_detach),331331 .id_table = elsa_ids,332332 .suspend = elsa_suspend,333333 .resume = elsa_resume,
+6-6
drivers/isdn/hisax/sedlbauer_cs.c
···7676 event handler. 7777*/78787979-static int sedlbauer_config(struct pcmcia_device *link);7979+static int sedlbauer_config(struct pcmcia_device *link) __devinit ;8080static void sedlbauer_release(struct pcmcia_device *link);81818282/*···8585 needed to manage one actual PCMCIA card.8686*/87878888-static void sedlbauer_detach(struct pcmcia_device *p_dev);8888+static void sedlbauer_detach(struct pcmcia_device *p_dev) __devexit;89899090/*9191 You'll also need to prototype all the functions that will actually···129129130130======================================================================*/131131132132-static int sedlbauer_probe(struct pcmcia_device *link)132132+static int __devinit sedlbauer_probe(struct pcmcia_device *link)133133{134134 local_info_t *local;135135···177177178178======================================================================*/179179180180-static void sedlbauer_detach(struct pcmcia_device *link)180180+static void __devexit sedlbauer_detach(struct pcmcia_device *link)181181{182182 dev_dbg(&link->dev, "sedlbauer_detach(0x%p)\n", link);183183···283283284284285285286286-static int sedlbauer_config(struct pcmcia_device *link)286286+static int __devinit sedlbauer_config(struct pcmcia_device *link)287287{288288 local_info_t *dev = link->priv;289289 win_req_t *req;···441441 .name = "sedlbauer_cs",442442 },443443 .probe = sedlbauer_probe,444444- .remove = sedlbauer_detach,444444+ .remove = __devexit_p(sedlbauer_detach),445445 .id_table = sedlbauer_ids,446446 .suspend = sedlbauer_suspend,447447 .resume = sedlbauer_resume,
+6-6
drivers/isdn/hisax/teles_cs.c
···5757 handler.5858*/59596060-static int teles_cs_config(struct pcmcia_device *link);6060+static int teles_cs_config(struct pcmcia_device *link) __devinit ;6161static void teles_cs_release(struct pcmcia_device *link);62626363/*···6666 needed to manage one actual PCMCIA card.6767*/68686969-static void teles_detach(struct pcmcia_device *p_dev);6969+static void teles_detach(struct pcmcia_device *p_dev) __devexit ;70707171/*7272 A linked list of "instances" of the teles_cs device. Each actual···112112113113======================================================================*/114114115115-static int teles_probe(struct pcmcia_device *link)115115+static int __devinit teles_probe(struct pcmcia_device *link)116116{117117 local_info_t *local;118118···156156157157======================================================================*/158158159159-static void teles_detach(struct pcmcia_device *link)159159+static void __devexit teles_detach(struct pcmcia_device *link)160160{161161 local_info_t *info = link->priv;162162···200200 return -ENODEV;201201}202202203203-static int teles_cs_config(struct pcmcia_device *link)203203+static int __devinit teles_cs_config(struct pcmcia_device *link)204204{205205 local_info_t *dev;206206 int i;···319319 .name = "teles_cs",320320 },321321 .probe = teles_probe,322322- .remove = teles_detach,322322+ .remove = __devexit_p(teles_detach),323323 .id_table = teles_ids,324324 .suspend = teles_suspend,325325 .resume = teles_resume,
+6
drivers/misc/kgdbts.c
···295295 /* On x86 a breakpoint stop requires it to be decremented */296296 if (addr + 1 == kgdbts_regs.ip)297297 offset = -1;298298+#elif defined(CONFIG_SUPERH)299299+ /* On SUPERH a breakpoint stop requires it to be decremented */300300+ if (addr + 2 == kgdbts_regs.pc)301301+ offset = -2;298302#endif299303 if (strcmp(arg, "silent") &&300304 instruction_pointer(&kgdbts_regs) + offset != addr) {···309305#ifdef CONFIG_X86310306 /* On x86 adjust the instruction pointer if needed */311307 kgdbts_regs.ip += offset;308308+#elif defined(CONFIG_SUPERH)309309+ kgdbts_regs.pc += offset;312310#endif313311 return 0;314312}
+1-1
drivers/net/atlx/atl1.c
···84848585#define ATLX_DRIVER_VERSION "2.1.3"8686MODULE_AUTHOR("Xiong Huang <xiong.huang@atheros.com>, \8787- Chris Snook <csnook@redhat.com>, Jay Cliburn <jcliburn@gmail.com>");8787+Chris Snook <csnook@redhat.com>, Jay Cliburn <jcliburn@gmail.com>");8888MODULE_LICENSE("GPL");8989MODULE_VERSION(ATLX_DRIVER_VERSION);9090
···246246247247MODULE_DEVICE_TABLE(pci, bnx2_pci_tbl);248248249249+static void bnx2_init_napi(struct bnx2 *bp);250250+249251static inline u32 bnx2_tx_avail(struct bnx2 *bp, struct bnx2_tx_ring_info *txr)250252{251253 u32 diff;···61996197 bnx2_disable_int(bp);6200619862016199 bnx2_setup_int_mode(bp, disable_msi);62006200+ bnx2_init_napi(bp);62026201 bnx2_napi_enable(bp);62036202 rc = bnx2_alloc_mem(bp);62046203 if (rc)···76467643 int i;7647764476487645 for (i = 0; i < bp->irq_nvecs; i++) {76497649- disable_irq(bp->irq_tbl[i].vector);76507650- bnx2_interrupt(bp->irq_tbl[i].vector, &bp->bnx2_napi[i]);76517651- enable_irq(bp->irq_tbl[i].vector);76467646+ struct bnx2_irq *irq = &bp->irq_tbl[i];76477647+76487648+ disable_irq(irq->vector);76497649+ irq->handler(irq->vector, &bp->bnx2_napi[i]);76507650+ enable_irq(irq->vector);76527651 }76537652}76547653#endif···82128207{82138208 int i;8214820982158215- for (i = 0; i < BNX2_MAX_MSIX_VEC; i++) {82108210+ for (i = 0; i < bp->irq_nvecs; i++) {82168211 struct bnx2_napi *bnapi = &bp->bnx2_napi[i];82178212 int (*poll)(struct napi_struct *, int);82188213···82818276 dev->ethtool_ops = &bnx2_ethtool_ops;8282827782838278 bp = netdev_priv(dev);82848284- bnx2_init_napi(bp);8285827982868280 pci_set_drvdata(pdev, dev);82878281
+32-8
drivers/net/bonding/bond_main.c
···12351235 write_lock_bh(&bond->curr_slave_lock);12361236 }12371237 }12381238+12391239+ /* resend IGMP joins since all were sent on curr_active_slave */12401240+ if (bond->params.mode == BOND_MODE_ROUNDROBIN) {12411241+ bond_resend_igmp_join_requests(bond);12421242+ }12381243}1239124412401245/**···41434138 struct bonding *bond = netdev_priv(bond_dev);41444139 struct slave *slave, *start_at;41454140 int i, slave_no, res = 1;41414141+ struct iphdr *iph = ip_hdr(skb);4146414241474143 read_lock(&bond->lock);4148414441494145 if (!BOND_IS_OK(bond))41504146 goto out;41514151-41524147 /*41534153- * Concurrent TX may collide on rr_tx_counter; we accept that41544154- * as being rare enough not to justify using an atomic op here41484148+ * Start with the curr_active_slave that joined the bond as the41494149+ * default for sending IGMP traffic. For failover purposes one41504150+ * needs to maintain some consistency for the interface that will41514151+ * send the join/membership reports. The curr_active_slave found41524152+ * will send all of this type of traffic.41554153 */41564156- slave_no = bond->rr_tx_counter++ % bond->slave_cnt;41544154+ if ((iph->protocol == IPPROTO_IGMP) &&41554155+ (skb->protocol == htons(ETH_P_IP))) {4157415641584158- bond_for_each_slave(bond, slave, i) {41594159- slave_no--;41604160- if (slave_no < 0)41614161- break;41574157+ read_lock(&bond->curr_slave_lock);41584158+ slave = bond->curr_active_slave;41594159+ read_unlock(&bond->curr_slave_lock);41604160+41614161+ if (!slave)41624162+ goto out;41634163+ } else {41644164+ /*41654165+ * Concurrent TX may collide on rr_tx_counter; we accept41664166+ * that as being rare enough not to justify using an41674167+ * atomic op here.41684168+ */41694169+ slave_no = bond->rr_tx_counter++ % bond->slave_cnt;41704170+41714171+ bond_for_each_slave(bond, slave, i) {41724172+ slave_no--;41734173+ if (slave_no < 0)41744174+ break;41754175+ }41624176 }4163417741644178 start_at = slave;
···261261 /* TX */262262 struct e1000_tx_ring *tx_ring; /* One per active queue */263263 unsigned int restart_queue;264264- unsigned long tx_queue_len;265264 u32 txd_cmd;266265 u32 tx_int_delay;267266 u32 tx_abs_int_delay;
···22892289 ew32(TCTL, tctl);2290229022912291 e1000e_config_collision_dist(hw);22922292-22932293- adapter->tx_queue_len = adapter->netdev->tx_queue_len;22942292}2295229322962294/**···28752877 del_timer_sync(&adapter->watchdog_timer);28762878 del_timer_sync(&adapter->phy_info_timer);2877287928782878- netdev->tx_queue_len = adapter->tx_queue_len;28792880 netif_carrier_off(netdev);28802881 adapter->link_speed = 0;28812882 adapter->link_duplex = 0;···35853588 "link gets many collisions.\n");35863589 }3587359035883588- /*35893589- * tweak tx_queue_len according to speed/duplex35903590- * and adjust the timeout factor35913591- */35923592- netdev->tx_queue_len = adapter->tx_queue_len;35913591+ /* adjust timeout factor according to speed/duplex */35933592 adapter->tx_timeout_factor = 1;35943593 switch (adapter->link_speed) {35953594 case SPEED_10:35963595 txb2b = 0;35973597- netdev->tx_queue_len = 10;35983596 adapter->tx_timeout_factor = 16;35993597 break;36003598 case SPEED_100:36013599 txb2b = 0;36023602- netdev->tx_queue_len = 100;36033600 adapter->tx_timeout_factor = 10;36043601 break;36053602 }
+3-2
drivers/net/gianfar.c
···23932393 * as many bytes as needed to align the data properly23942394 */23952395 skb_reserve(skb, alignamount);23962396+ GFAR_CB(skb)->alignamount = alignamount;2396239723972398 return skb;23982399}···25342533 newskb = skb;25352534 else if (skb) {25362535 /*25372537- * We need to reset ->data to what it25362536+ * We need to un-reserve() the skb to what it25382537 * was before gfar_new_skb() re-aligned25392538 * it to an RXBUF_ALIGNMENT boundary25402539 * before we put the skb back on the25412540 * recycle list.25422541 */25432543- skb->data = skb->head + NET_SKB_PAD;25422542+ skb_reserve(skb, -GFAR_CB(skb)->alignamount);25442543 __skb_queue_head(&priv->rx_recycle, skb);25452544 }25462545 } else {
+6
drivers/net/gianfar.h
···566566 u16 vlctl; /* VLAN control word */567567};568568569569+struct gianfar_skb_cb {570570+ int alignamount;571571+};572572+573573+#define GFAR_CB(skb) ((struct gianfar_skb_cb *)((skb)->cb))574574+569575struct rmon_mib570576{571577 u32 tr64; /* 0x.680 - Transmit and Receive 64-byte Frame Counter */
+3-3
drivers/net/igb/e1000_mac.c
···13671367 * igb_enable_mng_pass_thru - Enable processing of ARP's13681368 * @hw: pointer to the HW structure13691369 *13701370- * Verifies the hardware needs to allow ARPs to be processed by the host.13701370+ * Verifies the hardware needs to leave interface enabled so that frames can13711371+ * be directed to and from the management interface.13711372 **/13721373bool igb_enable_mng_pass_thru(struct e1000_hw *hw)13731374{···1381138013821381 manc = rd32(E1000_MANC);1383138213841384- if (!(manc & E1000_MANC_RCV_TCO_EN) ||13851385- !(manc & E1000_MANC_EN_MAC_ADDR_FILTER))13831383+ if (!(manc & E1000_MANC_RCV_TCO_EN))13861384 goto out;1387138513881386 if (hw->mac.arc_subsystem_valid) {
···198198 struct igbvf_ring *tx_ring /* One per active queue */199199 ____cacheline_aligned_in_smp;200200201201- unsigned long tx_queue_len;202201 unsigned int restart_queue;203202 u32 txd_cmd;204203
+1-10
drivers/net/igbvf/netdev.c
···1304130413051305 /* enable Report Status bit */13061306 adapter->txd_cmd |= E1000_ADVTXD_DCMD_RS;13071307-13081308- adapter->tx_queue_len = adapter->netdev->tx_queue_len;13091307}1310130813111309/**···1522152415231525 del_timer_sync(&adapter->watchdog_timer);1524152615251525- netdev->tx_queue_len = adapter->tx_queue_len;15261527 netif_carrier_off(netdev);1527152815281529 /* record the stats before reset*/···18541857 &adapter->link_duplex);18551858 igbvf_print_link_info(adapter);1856185918571857- /*18581858- * tweak tx_queue_len according to speed/duplex18591859- * and adjust the timeout factor18601860- */18611861- netdev->tx_queue_len = adapter->tx_queue_len;18601860+ /* adjust timeout factor according to speed/duplex */18621861 adapter->tx_timeout_factor = 1;18631862 switch (adapter->link_speed) {18641863 case SPEED_10:18651864 txb2b = 0;18661866- netdev->tx_queue_len = 10;18671865 adapter->tx_timeout_factor = 16;18681866 break;18691867 case SPEED_100:18701868 txb2b = 0;18711871- netdev->tx_queue_len = 100;18721869 /* maybe add some timeout factor ? */18731870 break;18741871 }
···18531853 if (ixgbe_link_test(adapter, &data[4]))18541854 eth_test->flags |= ETH_TEST_FL_FAILED;1855185518561856+ if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED) {18571857+ int i;18581858+ for (i = 0; i < adapter->num_vfs; i++) {18591859+ if (adapter->vfinfo[i].clear_to_send) {18601860+ netdev_warn(netdev, "%s",18611861+ "offline diagnostic is not "18621862+ "supported when VFs are "18631863+ "present\n");18641864+ data[0] = 1;18651865+ data[1] = 1;18661866+ data[2] = 1;18671867+ data[3] = 1;18681868+ eth_test->flags |= ETH_TEST_FL_FAILED;18691869+ clear_bit(__IXGBE_TESTING,18701870+ &adapter->state);18711871+ goto skip_ol_tests;18721872+ }18731873+ }18741874+ }18751875+18561876 if (if_running)18571877 /* indicate we're in test mode */18581878 dev_close(netdev);···1928190819291909 clear_bit(__IXGBE_TESTING, &adapter->state);19301910 }19111911+skip_ol_tests:19311912 msleep_interruptible(4 * 1000);19321913}19331914
+25-8
drivers/net/ixgbe/ixgbe_fcoe.c
···202202 addr = sg_dma_address(sg);203203 len = sg_dma_len(sg);204204 while (len) {205205+ /* max number of buffers allowed in one DDP context */206206+ if (j >= IXGBE_BUFFCNT_MAX) {207207+ netif_err(adapter, drv, adapter->netdev,208208+ "xid=%x:%d,%d,%d:addr=%llx "209209+ "not enough descriptors\n",210210+ xid, i, j, dmacount, (u64)addr);211211+ goto out_noddp_free;212212+ }213213+205214 /* get the offset of length of current buffer */206215 thisoff = addr & ((dma_addr_t)bufflen - 1);207216 thislen = min((bufflen - thisoff), len);···236227 len -= thislen;237228 addr += thislen;238229 j++;239239- /* max number of buffers allowed in one DDP context */240240- if (j > IXGBE_BUFFCNT_MAX) {241241- DPRINTK(DRV, ERR, "xid=%x:%d,%d,%d:addr=%llx "242242- "not enough descriptors\n",243243- xid, i, j, dmacount, (u64)addr);244244- goto out_noddp_free;245245- }246230 }247231 }248232 /* only the last buffer may have non-full bufflen */249233 lastsize = thisoff + thislen;250234251235 fcbuff = (IXGBE_FCBUFF_4KB << IXGBE_FCBUFF_BUFFSIZE_SHIFT);252252- fcbuff |= (j << IXGBE_FCBUFF_BUFFCNT_SHIFT);236236+ fcbuff |= ((j & 0xff) << IXGBE_FCBUFF_BUFFCNT_SHIFT);253237 fcbuff |= (firstoff << IXGBE_FCBUFF_OFFSET_SHIFT);254238 fcbuff |= (IXGBE_FCBUFF_VALID);255239···522520 /* Enable L2 eth type filter for FCoE */523521 IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FCOE),524522 (ETH_P_FCOE | IXGBE_ETQF_FCOE | IXGBE_ETQF_FILTER_EN));523523+ /* Enable L2 eth type filter for FIP */524524+ IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FIP),525525+ (ETH_P_FIP | IXGBE_ETQF_FILTER_EN));525526 if (adapter->ring_feature[RING_F_FCOE].indices) {526527 /* Use multiple rx queues for FCoE by redirection table */527528 for (i = 0; i < IXGBE_FCRETA_SIZE; i++) {···535530 }536531 IXGBE_WRITE_REG(hw, IXGBE_FCRECTL, IXGBE_FCRECTL_ENA);537532 IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FCOE), 0);533533+ fcoe_i = f->mask;534534+ fcoe_i &= IXGBE_FCRETA_ENTRY_MASK;535535+ fcoe_q = 
adapter->rx_ring[fcoe_i]->reg_idx;536536+ IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FIP),537537+ IXGBE_ETQS_QUEUE_EN |538538+ (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT));538539 } else {539540 /* Use single rx queue for FCoE */540541 fcoe_i = f->mask;···550539 IXGBE_ETQS_QUEUE_EN |551540 (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT));552541 }542542+ /* send FIP frames to the first FCoE queue */543543+ fcoe_i = f->mask;544544+ fcoe_q = adapter->rx_ring[fcoe_i]->reg_idx;545545+ IXGBE_WRITE_REG(hw, IXGBE_ETQS(IXGBE_ETQF_FILTER_FIP),546546+ IXGBE_ETQS_QUEUE_EN |547547+ (fcoe_q << IXGBE_ETQS_RX_QUEUE_SHIFT));553548554549 IXGBE_WRITE_REG(hw, IXGBE_FCRXCTRL,555550 IXGBE_FCRXCTRL_FCOELLI |
+30-13
drivers/net/ixgbe/ixgbe_main.c
···30563056 while (test_and_set_bit(__IXGBE_RESETTING, &adapter->state))30573057 msleep(1);30583058 ixgbe_down(adapter);30593059+ /*30603060+ * If SR-IOV enabled then wait a bit before bringing the adapter30613061+ * back up to give the VFs time to respond to the reset. The30623062+ * two second wait is based upon the watchdog timer cycle in30633063+ * the VF driver.30643064+ */30653065+ if (adapter->flags & IXGBE_FLAG_SRIOV_ENABLED)30663066+ msleep(2000);30593067 ixgbe_up(adapter);30603068 clear_bit(__IXGBE_RESETTING, &adapter->state);30613069}···3244323632453237 /* disable receive for all VFs and wait one second */32463238 if (adapter->num_vfs) {32473247- for (i = 0 ; i < adapter->num_vfs; i++)32483248- adapter->vfinfo[i].clear_to_send = 0;32493249-32503239 /* ping all the active vfs to let them know we are going down */32513240 ixgbe_ping_all_vfs(adapter);32413241+32523242 /* Disable all VFTE/VFRE TX/RX */32533243 ixgbe_disable_tx_rx(adapter);32443244+32453245+ /* Mark all the VFs as inactive */32463246+ for (i = 0 ; i < adapter->num_vfs; i++)32473247+ adapter->vfinfo[i].clear_to_send = 0;32543248 }3255324932563250 /* disable receives */···5648563856495639#ifdef IXGBE_FCOE56505640 if ((adapter->flags & IXGBE_FLAG_FCOE_ENABLED) &&56515651- (skb->protocol == htons(ETH_P_FCOE))) {56415641+ ((skb->protocol == htons(ETH_P_FCOE)) ||56425642+ (skb->protocol == htons(ETH_P_FIP)))) {56525643 txq &= (adapter->ring_feature[RING_F_FCOE].indices - 1);56535644 txq += adapter->ring_feature[RING_F_FCOE].mask;56545645 return txq;···5696568556975686 tx_ring = adapter->tx_ring[skb->queue_mapping];5698568756995699- if ((adapter->flags & IXGBE_FLAG_FCOE_ENABLED) &&57005700- (skb->protocol == htons(ETH_P_FCOE))) {57015701- tx_flags |= IXGBE_TX_FLAGS_FCOE;57025688#ifdef IXGBE_FCOE56895689+ if (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) {57035690#ifdef CONFIG_IXGBE_DCB57045704- tx_flags &= ~(IXGBE_TX_FLAGS_VLAN_PRIO_MASK57055705- << IXGBE_TX_FLAGS_VLAN_SHIFT);57065706- tx_flags |= 
((adapter->fcoe.up << 13)57075707- << IXGBE_TX_FLAGS_VLAN_SHIFT);56915691+ /* for FCoE with DCB, we force the priority to what56925692+ * was specified by the switch */56935693+ if ((skb->protocol == htons(ETH_P_FCOE)) ||56945694+ (skb->protocol == htons(ETH_P_FIP))) {56955695+ tx_flags &= ~(IXGBE_TX_FLAGS_VLAN_PRIO_MASK56965696+ << IXGBE_TX_FLAGS_VLAN_SHIFT);56975697+ tx_flags |= ((adapter->fcoe.up << 13)56985698+ << IXGBE_TX_FLAGS_VLAN_SHIFT);56995699+ }57085700#endif57095709-#endif57015701+ /* flag for FCoE offloads */57025702+ if (skb->protocol == htons(ETH_P_FCOE))57035703+ tx_flags |= IXGBE_TX_FLAGS_FCOE;57105704 }57055705+#endif57065706+57115707 /* four things can cause us to need a context descriptor */57125708 if (skb_is_gso(skb) ||57135709 (skb->ip_summed == CHECKSUM_PARTIAL) ||···60696051 indices += min_t(unsigned int, num_possible_cpus(),60706052 IXGBE_MAX_FCOE_INDICES);60716053#endif60726072- indices = min_t(unsigned int, indices, MAX_TX_QUEUES);60736054 netdev = alloc_etherdev_mq(sizeof(struct ixgbe_adapter), indices);60746055 if (!netdev) {60756056 err = -ENOMEM;
+1
drivers/net/ixgbe/ixgbe_type.h
···12981298#define IXGBE_ETQF_FILTER_BCN 112991299#define IXGBE_ETQF_FILTER_FCOE 213001300#define IXGBE_ETQF_FILTER_1588 313011301+#define IXGBE_ETQF_FILTER_FIP 413011302/* VLAN Control Bit Masks */13021303#define IXGBE_VLNCTRL_VET 0x0000FFFF /* bits 0-15 */13031304#define IXGBE_VLNCTRL_CFI 0x10000000 /* bit 28 */
+2-1
drivers/net/ixgbevf/ixgbevf_main.c
···29432943 struct ixgbevf_tx_buffer *tx_buffer_info;29442944 unsigned int len;29452945 unsigned int total = skb->len;29462946- unsigned int offset = 0, size, count = 0, i;29462946+ unsigned int offset = 0, size, count = 0;29472947 unsigned int nr_frags = skb_shinfo(skb)->nr_frags;29482948 unsigned int f;29492949+ int i;2949295029502951 i = tx_ring->next_to_use;29512952
+1-1
drivers/net/ksz884x.c
···63226322 int len;6323632363246324 if (eeprom->magic != EEPROM_MAGIC)63256325- return 1;63256325+ return -EINVAL;6326632663276327 len = (eeprom->offset + eeprom->len + 1) / 2;63286328 for (i = eeprom->offset / 2; i < len; i++)
···
 
 MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
 
-static int rx_copybreak = 200;
-static int use_dac = -1;
+/*
+ * we set our copybreak very high so that we don't have
+ * to allocate 16k frames all the time (see note in
+ * rtl8169_open()
+ */
+static int rx_copybreak = 16383;
+static int use_dac;
 static struct {
 	u32 msg_enable;
 } debug = { -1 };
···
 module_param(rx_copybreak, int, 0);
 MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
 module_param(use_dac, int, 0);
-MODULE_PARM_DESC(use_dac, "Enable PCI DAC. -1 defaults on for PCI Express only."
-" Unsafe on 32 bit PCI slot.");
+MODULE_PARM_DESC(use_dac, "Enable PCI DAC. Unsafe on 32 bit PCI slot.");
module_param_named(debug, debug.msg_enable, int, 0);
 MODULE_PARM_DESC(debug, "Debug verbosity level (0=none, ..., 16=all)");
 MODULE_LICENSE("GPL");
···
 	spin_lock_irq(&tp->lock);
 
 	RTL_W8(Cfg9346, Cfg9346_Unlock);
-	RTL_W32(MAC0, low);
 	RTL_W32(MAC4, high);
+	RTL_W32(MAC0, low);
 	RTL_W8(Cfg9346, Cfg9346_Lock);
 
 	spin_unlock_irq(&tp->lock);
···
 	void __iomem *ioaddr;
 	unsigned int i;
 	int rc;
-	int this_use_dac = use_dac;
 
 	if (netif_msg_drv(&debug)) {
 		printk(KERN_INFO "%s Gigabit Ethernet driver %s loaded\n",
···
 
 	tp->cp_cmd = PCIMulRW | RxChkSum;
 
-	tp->pcie_cap = pci_find_capability(pdev, PCI_CAP_ID_EXP);
-	if (!tp->pcie_cap)
-		netif_info(tp, probe, dev, "no PCI Express capability\n");
-
-	if (this_use_dac < 0)
-		this_use_dac = tp->pcie_cap != 0;
-
 	if ((sizeof(dma_addr_t) > 4) &&
-	    this_use_dac &&
-	    !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
-		netif_info(tp, probe, dev, "using 64-bit DMA\n");
+	    !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) && use_dac) {
 		tp->cp_cmd |= PCIDAC;
 		dev->features |= NETIF_F_HIGHDMA;
 	} else {
···
 		rc = -EIO;
 		goto err_out_free_res_4;
 	}
+
+	tp->pcie_cap = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+	if (!tp->pcie_cap)
+		netif_info(tp, probe, dev, "no PCI Express capability\n");
 
 	RTL_W16(IntrMask, 0x0000);
 
···
 }
 
 static void rtl8169_set_rxbufsize(struct rtl8169_private *tp,
-				  struct net_device *dev)
+				  unsigned int mtu)
 {
-	unsigned int max_frame = dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
+	unsigned int max_frame = mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
+
+	if (max_frame != 16383)
+		printk(KERN_WARNING "WARNING! Changing of MTU on this NIC"
+			"May lead to frame reception errors!\n");
 
 	tp->rx_buf_sz = (max_frame > RX_BUF_SIZE) ? max_frame : RX_BUF_SIZE;
 }
···
 	int retval = -ENOMEM;
 
 
-	rtl8169_set_rxbufsize(tp, dev);
+	/*
+	 * Note that we use a magic value here, its wierd I know
+	 * its done because, some subset of rtl8169 hardware suffers from
+	 * a problem in which frames received that are longer than
+	 * the size set in RxMaxSize register return garbage sizes
+	 * when received.  To avoid this we need to turn off filtering,
+	 * which is done by setting a value of 16383 in the RxMaxSize register
+	 * and allocating 16k frames to handle the largest possible rx value
+	 * thats what the magic math below does.
+	 */
+	rtl8169_set_rxbufsize(tp, 16383 - VLAN_ETH_HLEN - ETH_FCS_LEN);
 
 	/*
 	 * Rx and Tx desscriptors needs 256 bytes alignment.
···
 
 	rtl8169_down(dev);
 
-	rtl8169_set_rxbufsize(tp, dev);
+	rtl8169_set_rxbufsize(tp, dev->mtu);
 
 	ret = rtl8169_init_ring(dev);
 	if (ret < 0)
···
 		mc_filter[1] = swab32(data);
 	}
 
-	RTL_W32(MAR0 + 0, mc_filter[0]);
 	RTL_W32(MAR0 + 4, mc_filter[1]);
+	RTL_W32(MAR0 + 0, mc_filter[0]);
 
 	RTL_W32(RxConfig, tmp);
+5-3
drivers/net/tulip/uli526x.c
···
 
 		if ( !(rdes0 & 0x8000) ||
 			((db->cr6_data & CR6_PM) && (rxlen>6)) ) {
+			struct sk_buff *new_skb = NULL;
+
 			skb = rxptr->rx_skb_ptr;
 
 			/* Good packet, send to upper layer */
 			/* Shorst packet used new SKB */
-			if ( (rxlen < RX_COPY_SIZE) &&
-				( (skb = dev_alloc_skb(rxlen + 2) )
-				!= NULL) ) {
+			if ((rxlen < RX_COPY_SIZE) &&
+				(((new_skb = dev_alloc_skb(rxlen + 2)) != NULL))) {
+				skb = new_skb;
 				/* size less than COPY_SIZE, allocate a rxlen SKB */
 				skb_reserve(skb, 2); /* 16byte align */
 				memcpy(skb_put(skb, rxlen),
+1-1
drivers/net/via-velocity.c
···
 
 	case FLOW_CNTL_TX_RX:
 		MII_REG_BITS_ON(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
-		MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+		MII_REG_BITS_OFF(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
 		break;
 
 	case FLOW_CNTL_DISABLE:
+5-2
drivers/of/fdt.c
···
 		if (!np->type)
 			np->type = "<NULL>";
 	}
-	while (tag == OF_DT_BEGIN_NODE) {
-		mem = unflatten_dt_node(mem, p, np, allnextpp, fpsize);
+	while (tag == OF_DT_BEGIN_NODE || tag == OF_DT_NOP) {
+		if (tag == OF_DT_NOP)
+			*p += 4;
+		else
+			mem = unflatten_dt_node(mem, p, np, allnextpp, fpsize);
 		tag = be32_to_cpup((__be32 *)(*p));
 	}
 	if (tag != OF_DT_END_NODE) {
+2-3
drivers/pci/hotplug/pciehp_hpc.c
···
 	for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
 		if (!pci_resource_len(pdev, i))
 			continue;
-		ctrl_info(ctrl, "  PCI resource [%d]     : 0x%llx@0x%llx\n",
-			  i, (unsigned long long)pci_resource_len(pdev, i),
-			  (unsigned long long)pci_resource_start(pdev, i));
+		ctrl_info(ctrl, "  PCI resource [%d]     : %pR\n",
+			  i, &pdev->resource[i]);
 	}
 	ctrl_info(ctrl, "Slot Capabilities      : 0x%08x\n", ctrl->slot_cap);
 	ctrl_info(ctrl, "  Physical Slot Number : %d\n", PSN(ctrl));
+4-5
drivers/pci/ioapic.c
···
 	acpi_status status;
 	unsigned long long gsb;
 	struct ioapic *ioapic;
-	u64 addr;
 	int ret;
 	char *type;
+	struct resource *res;
 
 	handle = DEVICE_ACPI_HANDLE(&dev->dev);
 	if (!handle)
···
 	if (pci_request_region(dev, 0, type))
 		goto exit_disable;
 
-	addr = pci_resource_start(dev, 0);
-	if (acpi_register_ioapic(ioapic->handle, addr, ioapic->gsi_base))
+	res = &dev->resource[0];
+	if (acpi_register_ioapic(ioapic->handle, res->start, ioapic->gsi_base))
 		goto exit_release;
 
 	pci_set_drvdata(dev, ioapic);
-	dev_info(&dev->dev, "%s at %#llx, GSI %u\n", type, addr,
-		 ioapic->gsi_base);
+	dev_info(&dev->dev, "%s at %pR, GSI %u\n", type, res, ioapic->gsi_base);
 	return 0;
 
 exit_release:
+20-24
drivers/pci/pci.c
···
  */
 int pcix_get_max_mmrbc(struct pci_dev *dev)
 {
-	int err, cap;
+	int cap;
 	u32 stat;
 
 	cap = pci_find_capability(dev, PCI_CAP_ID_PCIX);
 	if (!cap)
 		return -EINVAL;
 
-	err = pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat);
-	if (err)
+	if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat))
 		return -EINVAL;
 
-	return (stat & PCI_X_STATUS_MAX_READ) >> 12;
+	return 512 << ((stat & PCI_X_STATUS_MAX_READ) >> 21);
 }
 EXPORT_SYMBOL(pcix_get_max_mmrbc);
 
···
  */
 int pcix_get_mmrbc(struct pci_dev *dev)
 {
-	int ret, cap;
-	u32 cmd;
+	int cap;
+	u16 cmd;
 
 	cap = pci_find_capability(dev, PCI_CAP_ID_PCIX);
 	if (!cap)
 		return -EINVAL;
 
-	ret = pci_read_config_dword(dev, cap + PCI_X_CMD, &cmd);
-	if (!ret)
-		ret = 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2);
+	if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd))
+		return -EINVAL;
 
-	return ret;
+	return 512 << ((cmd & PCI_X_CMD_MAX_READ) >> 2);
 }
 EXPORT_SYMBOL(pcix_get_mmrbc);
 
···
  */
 int pcix_set_mmrbc(struct pci_dev *dev, int mmrbc)
 {
-	int cap, err = -EINVAL;
-	u32 stat, cmd, v, o;
+	int cap;
+	u32 stat, v, o;
+	u16 cmd;
 
 	if (mmrbc < 512 || mmrbc > 4096 || !is_power_of_2(mmrbc))
-		goto out;
+		return -EINVAL;
 
 	v = ffs(mmrbc) - 10;
 
 	cap = pci_find_capability(dev, PCI_CAP_ID_PCIX);
 	if (!cap)
-		goto out;
+		return -EINVAL;
 
-	err = pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat);
-	if (err)
-		goto out;
+	if (pci_read_config_dword(dev, cap + PCI_X_STATUS, &stat))
+		return -EINVAL;
 
 	if (v > (stat & PCI_X_STATUS_MAX_READ) >> 21)
 		return -E2BIG;
 
-	err = pci_read_config_dword(dev, cap + PCI_X_CMD, &cmd);
-	if (err)
-		goto out;
+	if (pci_read_config_word(dev, cap + PCI_X_CMD, &cmd))
+		return -EINVAL;
 
 	o = (cmd & PCI_X_CMD_MAX_READ) >> 2;
 	if (o != v) {
···
 
 		cmd &= ~PCI_X_CMD_MAX_READ;
 		cmd |= v << 2;
-		err = pci_write_config_dword(dev, cap + PCI_X_CMD, cmd);
+		if (pci_write_config_word(dev, cap + PCI_X_CMD, cmd))
+			return -EIO;
 	}
-out:
-	return err;
+	return 0;
 }
 EXPORT_SYMBOL(pcix_set_mmrbc);
 
···
 EXPORT_SYMBOL(pci_disable_device);
 EXPORT_SYMBOL(pci_find_capability);
 EXPORT_SYMBOL(pci_bus_find_capability);
-EXPORT_SYMBOL(pci_register_set_vga_state);
 EXPORT_SYMBOL(pci_release_regions);
 EXPORT_SYMBOL(pci_request_regions);
 EXPORT_SYMBOL(pci_request_regions_exclusive);
+33-20
drivers/pci/probe.c
···
 	pci_read_config_dword(dev, pos, &sz);
 	pci_write_config_dword(dev, pos, l);
 
+	if (!sz)
+		goto fail;	/* BAR not implemented */
+
 	/*
 	 * All bits set in sz means the device isn't working properly.
-	 * If the BAR isn't implemented, all bits must be 0.  If it's a
-	 * memory BAR or a ROM, bit 0 must be clear; if it's an io BAR, bit
-	 * 1 must be clear.
+	 * If it's a memory BAR or a ROM, bit 0 must be clear; if it's
+	 * an io BAR, bit 1 must be clear.
 	 */
-	if (!sz || sz == 0xffffffff)
+	if (sz == 0xffffffff) {
+		dev_err(&dev->dev, "reg %x: invalid size %#x; broken device?\n",
+			pos, sz);
 		goto fail;
+	}
 
 	/*
 	 * I don't know how l can have all bits set.  Copied from old code.
···
 			pos, res);
 		}
 	} else {
-		sz = pci_size(l, sz, mask);
+		u32 size = pci_size(l, sz, mask);
 
-		if (!sz)
+		if (!size) {
+			dev_err(&dev->dev, "reg %x: invalid size "
+				"(l %#x sz %#x mask %#x); broken device?",
+				pos, l, sz, mask);
 			goto fail;
+		}
 
 		res->start = l;
-		res->end = l + sz;
+		res->end = l + size;
 
 		dev_printk(KERN_DEBUG, &dev->dev, "reg %x: %pR\n", pos, res);
 	}
···
 		dev_printk(KERN_DEBUG, &dev->dev, "  bridge window %pR\n", res);
 	} else {
 		dev_printk(KERN_DEBUG, &dev->dev,
-			"  bridge window [io  %04lx - %04lx] reg reading\n",
+			"  bridge window [io  %#06lx-%#06lx] (disabled)\n",
 			base, limit);
 	}
 }
···
 		dev_printk(KERN_DEBUG, &dev->dev, "  bridge window %pR\n", res);
 	} else {
 		dev_printk(KERN_DEBUG, &dev->dev,
-			"  bridge window [mem 0x%08lx - 0x%08lx] reg reading\n",
+			"  bridge window [mem %#010lx-%#010lx] (disabled)\n",
 			base, limit + 0xfffff);
 	}
 }
···
 		dev_printk(KERN_DEBUG, &dev->dev, "  bridge window %pR\n", res);
 	} else {
 		dev_printk(KERN_DEBUG, &dev->dev,
-			"  bridge window [mem 0x%08lx - %08lx pref] reg reading\n",
+			"  bridge window [mem %#010lx-%#010lx pref] (disabled)\n",
 			base, limit + 0xfffff);
 	}
 }
···
 	int is_cardbus = (dev->hdr_type == PCI_HEADER_TYPE_CARDBUS);
 	u32 buses, i, j = 0;
 	u16 bctl;
+	u8 primary, secondary, subordinate;
 	int broken = 0;
 
 	pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses);
+	primary = buses & 0xFF;
+	secondary = (buses >> 8) & 0xFF;
+	subordinate = (buses >> 16) & 0xFF;
 
-	dev_dbg(&dev->dev, "scanning behind bridge, config %06x, pass %d\n",
-		buses & 0xffffff, pass);
+	dev_dbg(&dev->dev, "scanning [bus %02x-%02x] behind bridge, pass %d\n",
+		secondary, subordinate, pass);
 
 	/* Check if setup is sensible at all */
 	if (!pass &&
-	    ((buses & 0xff) != bus->number || ((buses >> 8) & 0xff) <= bus->number)) {
+	    (primary != bus->number || secondary <= bus->number)) {
 		dev_dbg(&dev->dev, "bus configuration invalid, reconfiguring\n");
 		broken = 1;
 	}
···
 	pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
 			      bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
 
-	if ((buses & 0xffff00) && !pcibios_assign_all_busses() && !is_cardbus && !broken) {
-		unsigned int cmax, busnr;
+	if ((secondary || subordinate) && !pcibios_assign_all_busses() &&
+	    !is_cardbus && !broken) {
+		unsigned int cmax;
 		/*
 		 * Bus already configured by firmware, process it in the first
 		 * pass and just note the configuration.
 		 */
 		if (pass)
 			goto out;
-		busnr = (buses >> 8) & 0xFF;
 
 		/*
 		 * If we already got to this bus through a different bridge,
···
 		 * However, we continue to descend down the hierarchy and
 		 * scan remaining child buses.
 		 */
-		child = pci_find_bus(pci_domain_nr(bus), busnr);
+		child = pci_find_bus(pci_domain_nr(bus), secondary);
 		if (!child) {
-			child = pci_add_new_bus(bus, dev, busnr);
+			child = pci_add_new_bus(bus, dev, secondary);
 			if (!child)
 				goto out;
-			child->primary = buses & 0xFF;
-			child->subordinate = (buses >> 16) & 0xFF;
+			child->primary = primary;
+			child->subordinate = subordinate;
 			child->bridge_ctl = bctl;
 		}
 
+24-5
drivers/pci/quirks.c
···
 		bus_region.end = res->end;
 		pcibios_bus_to_resource(dev, res, &bus_region);
 
-		pci_claim_resource(dev, nr);
-		dev_info(&dev->dev, "quirk: %pR claimed by %s\n", res, name);
+		if (pci_claim_resource(dev, nr) == 0)
+			dev_info(&dev->dev, "quirk: %pR claimed by %s\n",
+				 res, name);
 	}
 }
 
···
 	/*
 	 * Disable PCI Bus Parking and PCI Master read caching on CX700
 	 * which causes unspecified timing errors with a VT6212L on the PCI
-	 * bus leading to USB2.0 packet loss. The defaults are that these
-	 * features are turned off but some BIOSes turn them on.
+	 * bus leading to USB2.0 packet loss.
+	 *
+	 * This quirk is only enabled if a second (on the external PCI bus)
+	 * VT6212L is found -- the CX700 core itself also contains a USB
+	 * host controller with the same PCI ID as the VT6212L.
 	 */
 
+	/* Count VT6212L instances */
+	struct pci_dev *p = pci_get_device(PCI_VENDOR_ID_VIA,
+					   PCI_DEVICE_ID_VIA_8235_USB_2, NULL);
 	uint8_t b;
+
+	/* p should contain the first (internal) VT6212L -- see if we have
+	   an external one by searching again */
+	p = pci_get_device(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_8235_USB_2, p);
+	if (!p)
+		return;
+	pci_dev_put(p);
+
 	if (pci_read_config_byte(dev, 0x76, &b) == 0) {
 		if (b & 0x40) {
 			/* Turn off PCI Bus Parking */
···
 		}
 	}
 }
-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0x324e, quirk_via_cx700_pci_parking_caching);
 
 /*
  * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the
···
 	}
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x9602, quirk_disable_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASUSTEK, 0x9602, quirk_disable_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AI, 0x9602, quirk_disable_msi);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, 0xa238, quirk_disable_msi);
 
 /* Go through the list of Hypertransport capabilities and
  * return 1 if a HT MSI capability is found and enabled */
+8-6
drivers/pci/setup-res.c
···
 int pci_claim_resource(struct pci_dev *dev, int resource)
 {
 	struct resource *res = &dev->resource[resource];
-	struct resource *root;
-	int err;
+	struct resource *root, *conflict;
 
 	root = pci_find_parent_resource(dev, res);
 	if (!root) {
···
 		return -EINVAL;
 	}
 
-	err = request_resource(root, res);
-	if (err)
+	conflict = request_resource_conflict(root, res);
+	if (conflict) {
 		dev_err(&dev->dev,
-			"address space collision: %pR already in use\n", res);
+			"address space collision: %pR conflicts with %s %pR\n",
+			res, conflict->name, conflict);
+		return -EBUSY;
+	}
 
-	return err;
+	return 0;
 }
 EXPORT_SYMBOL(pci_claim_resource);
 
···
 	p_dev->device_no = (s->device_count++);
 	mutex_unlock(&s->ops_mutex);
 
-	/* max of 2 devices per card */
-	if (p_dev->device_no >= 2)
+	/* max of 2 PFC devices */
+	if ((p_dev->device_no >= 2) && (function == 0))
+		goto err_free;
+
+	/* max of 4 devices overall */
+	if (p_dev->device_no >= 4)
 		goto err_free;
 
 	p_dev->socket = s;
···
 
 	  If you have an Eee PC laptop, say Y or M here.
 
+config EEEPC_WMI
+	tristate "Eee PC WMI Hotkey Driver (EXPERIMENTAL)"
+	depends on ACPI_WMI
+	depends on INPUT
+	depends on EXPERIMENTAL
+	---help---
+	  Say Y here if you want to support WMI-based hotkeys on Eee PC laptops.
+
+	  To compile this driver as a module, choose M here: the module will
+	  be called eeepc-wmi.
 
 config ACPI_WMI
 	tristate "WMI"
···
+/*
+ * Eee PC WMI hotkey driver
+ *
+ * Copyright(C) 2010 Intel Corporation.
+ *
+ * Portions based on wistron_btns.c:
+ * Copyright (C) 2005 Miloslav Trmac <mitr@volny.cz>
+ * Copyright (C) 2005 Bernhard Rosenkraenzer <bero@arklinux.org>
+ * Copyright (C) 2005 Dmitry Torokhov <dtor@mail.ru>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/input.h>
+#include <linux/input/sparse-keymap.h>
+#include <acpi/acpi_bus.h>
+#include <acpi/acpi_drivers.h>
+
+MODULE_AUTHOR("Yong Wang <yong.y.wang@intel.com>");
+MODULE_DESCRIPTION("Eee PC WMI Hotkey Driver");
+MODULE_LICENSE("GPL");
+
+#define EEEPC_WMI_EVENT_GUID	"ABBC0F72-8EA1-11D1-00A0-C90629100000"
+
+MODULE_ALIAS("wmi:"EEEPC_WMI_EVENT_GUID);
+
+#define NOTIFY_BRNUP_MIN	0x11
+#define NOTIFY_BRNUP_MAX	0x1f
+#define NOTIFY_BRNDOWN_MIN	0x20
+#define NOTIFY_BRNDOWN_MAX	0x2e
+
+static const struct key_entry eeepc_wmi_keymap[] = {
+	/* Sleep already handled via generic ACPI code */
+	{ KE_KEY, 0x5d, { KEY_WLAN } },
+	{ KE_KEY, 0x32, { KEY_MUTE } },
+	{ KE_KEY, 0x31, { KEY_VOLUMEDOWN } },
+	{ KE_KEY, 0x30, { KEY_VOLUMEUP } },
+	{ KE_IGNORE, NOTIFY_BRNDOWN_MIN, { KEY_BRIGHTNESSDOWN } },
+	{ KE_IGNORE, NOTIFY_BRNUP_MIN, { KEY_BRIGHTNESSUP } },
+	{ KE_KEY, 0xcc, { KEY_SWITCHVIDEOMODE } },
+	{ KE_END, 0},
+};
+
+static struct input_dev *eeepc_wmi_input_dev;
+
+static void eeepc_wmi_notify(u32 value, void *context)
+{
+	struct acpi_buffer response = { ACPI_ALLOCATE_BUFFER, NULL };
+	union acpi_object *obj;
+	acpi_status status;
+	int code;
+
+	status = wmi_get_event_data(value, &response);
+	if (status != AE_OK) {
+		pr_err("EEEPC WMI: bad event status 0x%x\n", status);
+		return;
+	}
+
+	obj = (union acpi_object *)response.pointer;
+
+	if (obj && obj->type == ACPI_TYPE_INTEGER) {
+		code = obj->integer.value;
+
+		if (code >= NOTIFY_BRNUP_MIN && code <= NOTIFY_BRNUP_MAX)
+			code = NOTIFY_BRNUP_MIN;
+		else if (code >= NOTIFY_BRNDOWN_MIN && code <= NOTIFY_BRNDOWN_MAX)
+			code = NOTIFY_BRNDOWN_MIN;
+
+		if (!sparse_keymap_report_event(eeepc_wmi_input_dev,
+						code, 1, true))
+			pr_info("EEEPC WMI: Unknown key %x pressed\n", code);
+	}
+
+	kfree(obj);
+}
+
+static int eeepc_wmi_input_setup(void)
+{
+	int err;
+
+	eeepc_wmi_input_dev = input_allocate_device();
+	if (!eeepc_wmi_input_dev)
+		return -ENOMEM;
+
+	eeepc_wmi_input_dev->name = "Eee PC WMI hotkeys";
+	eeepc_wmi_input_dev->phys = "wmi/input0";
+	eeepc_wmi_input_dev->id.bustype = BUS_HOST;
+
+	err = sparse_keymap_setup(eeepc_wmi_input_dev, eeepc_wmi_keymap, NULL);
+	if (err)
+		goto err_free_dev;
+
+	err = input_register_device(eeepc_wmi_input_dev);
+	if (err)
+		goto err_free_keymap;
+
+	return 0;
+
+err_free_keymap:
+	sparse_keymap_free(eeepc_wmi_input_dev);
+err_free_dev:
+	input_free_device(eeepc_wmi_input_dev);
+	return err;
+}
+
+static int __init eeepc_wmi_init(void)
+{
+	int err;
+	acpi_status status;
+
+	if (!wmi_has_guid(EEEPC_WMI_EVENT_GUID)) {
+		pr_warning("EEEPC WMI: No known WMI GUID found\n");
+		return -ENODEV;
+	}
+
+	err = eeepc_wmi_input_setup();
+	if (err)
+		return err;
+
+	status = wmi_install_notify_handler(EEEPC_WMI_EVENT_GUID,
+					eeepc_wmi_notify, NULL);
+	if (ACPI_FAILURE(status)) {
+		sparse_keymap_free(eeepc_wmi_input_dev);
+		input_unregister_device(eeepc_wmi_input_dev);
+		pr_err("EEEPC WMI: Unable to register notify handler - %d\n",
+			status);
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static void __exit eeepc_wmi_exit(void)
+{
+	wmi_remove_notify_handler(EEEPC_WMI_EVENT_GUID);
+	sparse_keymap_free(eeepc_wmi_input_dev);
+	input_unregister_device(eeepc_wmi_input_dev);
+}
+
+module_init(eeepc_wmi_init);
+module_exit(eeepc_wmi_exit);
···
 static int __devinit e3d_pci_register(struct pci_dev *pdev,
 				      const struct pci_device_id *ent)
 {
+	struct device_node *of_node;
+	const char *device_type;
 	struct fb_info *info;
 	struct e3d_info *ep;
 	unsigned int line_length;
 	int err;
+
+	of_node = pci_device_to_OF_node(pdev);
+	if (!of_node) {
+		printk(KERN_ERR "e3d: Cannot find OF node of %s\n",
+		       pci_name(pdev));
+		return -ENODEV;
+	}
+
+	device_type = of_get_property(of_node, "device_type", NULL);
+	if (!device_type) {
+		printk(KERN_INFO "e3d: Ignoring secondary output device "
+		       "at %s\n", pci_name(pdev));
+		return -ENODEV;
+	}
 
 	err = pci_enable_device(pdev);
 	if (err < 0) {
···
 	ep->info = info;
 	ep->pdev = pdev;
 	spin_lock_init(&ep->lock);
-	ep->of_node = pci_device_to_OF_node(pdev);
-	if (!ep->of_node) {
-		printk(KERN_ERR "e3d: Cannot find OF node of %s\n",
-		       pci_name(pdev));
-		err = -ENODEV;
-		goto err_release_fb;
-	}
+	ep->of_node = of_node;
 
 	/* Read the PCI base register of the frame buffer, which we
 	 * need in order to interpret the RAMDAC_VID_*FB* values in
+8-2
fs/ceph/addr.c
···
 /*
  * We are only allowed to write into/dirty the page if the page is
  * clean, or already dirty within the same snap context.
+ *
+ * called with page locked.
+ * return success with page locked,
+ * or any failure (incl -EAGAIN) with page unlocked.
  */
 static int ceph_update_writeable_page(struct file *file,
 			    loff_t pos, unsigned len,
···
 		snapc = ceph_get_snap_context((void *)page->private);
 		unlock_page(page);
 		ceph_queue_writeback(inode);
-		wait_event_interruptible(ci->i_cap_wq,
+		r = wait_event_interruptible(ci->i_cap_wq,
 		       context_is_writeable_or_written(inode, snapc));
 		ceph_put_snap_context(snapc);
+		if (r == -ERESTARTSYS)
+			return r;
 		return -EAGAIN;
 	}
 
···
 	int r;
 
 	do {
-		/* get a page*/
+		/* get a page */
 		page = grab_cache_page_write_begin(mapping, index, 0);
 		if (!page)
 			return -ENOMEM;
+38-15
fs/ceph/auth_x.c
···
 	return (ac->want_keys & xi->have_keys) == ac->want_keys;
 }
 
+static int ceph_x_encrypt_buflen(int ilen)
+{
+	return sizeof(struct ceph_x_encrypt_header) + ilen + 16 +
+		sizeof(u32);
+}
+
 static int ceph_x_encrypt(struct ceph_crypto_key *secret,
 			  void *ibuf, int ilen, void *obuf, size_t olen)
 {
···
 	struct timespec validity;
 	struct ceph_crypto_key old_key;
 	void *tp, *tpend;
+	struct ceph_timespec new_validity;
+	struct ceph_crypto_key new_session_key;
+	struct ceph_buffer *new_ticket_blob;
+	unsigned long new_expires, new_renew_after;
+	u64 new_secret_id;
 
 	ceph_decode_need(&p, end, sizeof(u32) + 1, bad);
 
···
 		goto bad;
 
 	memcpy(&old_key, &th->session_key, sizeof(old_key));
-	ret = ceph_crypto_key_decode(&th->session_key, &dp, dend);
+	ret = ceph_crypto_key_decode(&new_session_key, &dp, dend);
 	if (ret)
 		goto out;
 
-	ceph_decode_copy(&dp, &th->validity, sizeof(th->validity));
-	ceph_decode_timespec(&validity, &th->validity);
-	th->expires = get_seconds() + validity.tv_sec;
-	th->renew_after = th->expires - (validity.tv_sec / 4);
-	dout(" expires=%lu renew_after=%lu\n", th->expires,
-	     th->renew_after);
+	ceph_decode_copy(&dp, &new_validity, sizeof(new_validity));
+	ceph_decode_timespec(&validity, &new_validity);
+	new_expires = get_seconds() + validity.tv_sec;
+	new_renew_after = new_expires - (validity.tv_sec / 4);
+	dout(" expires=%lu renew_after=%lu\n", new_expires,
+	     new_renew_after);
 
 	/* ticket blob for service */
 	ceph_decode_8_safe(&p, end, is_enc, bad);
···
 	dout(" ticket blob is %d bytes\n", dlen);
 	ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad);
 	struct_v = ceph_decode_8(&tp);
-	th->secret_id = ceph_decode_64(&tp);
-	ret = ceph_decode_buffer(&th->ticket_blob, &tp, tpend);
+	new_secret_id = ceph_decode_64(&tp);
+	ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend);
 	if (ret)
 		goto out;
+
+	/* all is well, update our ticket */
+	ceph_crypto_key_destroy(&th->session_key);
+	if (th->ticket_blob)
+		ceph_buffer_put(th->ticket_blob);
+	th->session_key = new_session_key;
+	th->ticket_blob = new_ticket_blob;
+	th->validity = new_validity;
+	th->secret_id = new_secret_id;
+	th->expires = new_expires;
+	th->renew_after = new_renew_after;
 	dout(" got ticket service %d (%s) secret_id %lld len %d\n",
 	     type, ceph_entity_type_name(type), th->secret_id,
 	     (int)th->ticket_blob->vec.iov_len);
···
 				   struct ceph_x_ticket_handler *th,
 				   struct ceph_x_authorizer *au)
 {
-	int len;
+	int maxlen;
 	struct ceph_x_authorize_a *msg_a;
 	struct ceph_x_authorize_b msg_b;
 	void *p, *end;
···
 	dout("build_authorizer for %s %p\n",
 	     ceph_entity_type_name(th->service), au);
 
-	len = sizeof(*msg_a) + sizeof(msg_b) + sizeof(u32) +
-		ticket_blob_len + 16;
-	dout("  need len %d\n", len);
-	if (au->buf && au->buf->alloc_len < len) {
+	maxlen = sizeof(*msg_a) + sizeof(msg_b) +
+		ceph_x_encrypt_buflen(ticket_blob_len);
+	dout("  need len %d\n", maxlen);
+	if (au->buf && au->buf->alloc_len < maxlen) {
 		ceph_buffer_put(au->buf);
 		au->buf = NULL;
 	}
 	if (!au->buf) {
-		au->buf = ceph_buffer_new(len, GFP_NOFS);
+		au->buf = ceph_buffer_new(maxlen, GFP_NOFS);
 		if (!au->buf)
 			return -ENOMEM;
 	}
···
 	au->buf->vec.iov_len = p - au->buf->vec.iov_base;
 	dout(" built authorizer nonce %llx len %d\n", au->nonce,
 	     (int)au->buf->vec.iov_len);
+	BUG_ON(au->buf->vec.iov_len > maxlen);
 	return 0;
 
 out_buf:
+39-34
fs/ceph/caps.c
···
  */
 void ceph_check_caps(struct ceph_inode_info *ci, int flags,
 		     struct ceph_mds_session *session)
+	__releases(session->s_mutex)
 {
 	struct ceph_client *client = ceph_inode_to_client(&ci->vfs_inode);
 	struct ceph_mds_client *mdsc = &client->mdsc;
···
 	struct ceph_cap *cap;
 	int file_wanted, used;
 	int took_snap_rwsem = 0;             /* true if mdsc->snap_rwsem held */
-	int drop_session_lock = session ? 0 : 1;
 	int issued, implemented, want, retain, revoking, flushing = 0;
 	int mds = -1;   /* keep track of how far we've gone through i_caps list
 			   to avoid an infinite loop on retry */
···
 	if (queue_invalidate)
 		ceph_queue_invalidate(inode);
 
-	if (session && drop_session_lock)
+	if (session)
 		mutex_unlock(&session->s_mutex);
 	if (took_snap_rwsem)
 		up_read(&mdsc->snap_rwsem);
···
  * Handle a cap GRANT message from the MDS.  (Note that a GRANT may
  * actually be a revocation if it specifies a smaller cap set.)
  *
- * caller holds s_mutex.
+ * caller holds s_mutex and i_lock, we drop both.
+ *
  * return value:
  *   0 - ok
  *   1 - check_caps on auth cap only (writeback)
  *   2 - check_caps (ack revoke)
  */
-static int handle_cap_grant(struct inode *inode, struct ceph_mds_caps *grant,
-			    struct ceph_mds_session *session,
-			    struct ceph_cap *cap,
-			    struct ceph_buffer *xattr_buf)
+static void handle_cap_grant(struct inode *inode, struct ceph_mds_caps *grant,
+			     struct ceph_mds_session *session,
+			     struct ceph_cap *cap,
+			     struct ceph_buffer *xattr_buf)
 	__releases(inode->i_lock)
-
+	__releases(session->s_mutex)
 {
 	struct ceph_inode_info *ci = ceph_inode(inode);
 	int mds = session->s_mds;
···
 	u64 size = le64_to_cpu(grant->size);
 	u64 max_size = le64_to_cpu(grant->max_size);
 	struct timespec mtime, atime, ctime;
-	int reply = 0;
+	int check_caps = 0;
 	int wake = 0;
 	int writeback = 0;
 	int revoked_rdcache = 0;
···
 		if ((used & ~newcaps) & CEPH_CAP_FILE_BUFFER)
 			writeback = 1; /* will delay ack */
 		else if (dirty & ~newcaps)
-			reply = 1;     /* initiate writeback in check_caps */
+			check_caps = 1; /* initiate writeback in check_caps */
 		else if (((used & ~newcaps) & CEPH_CAP_FILE_CACHE) == 0 ||
 			   revoked_rdcache)
-			reply = 2;     /* send revoke ack in check_caps */
+			check_caps = 2; /* send revoke ack in check_caps */
 		cap->issued = newcaps;
+		cap->implemented |= newcaps;
 	} else if (cap->issued == newcaps) {
 		dout("caps unchanged: %s -> %s\n",
 		     ceph_cap_string(cap->issued), ceph_cap_string(newcaps));
···
 		 * pending revocation */
 		wake = 1;
 	}
+	BUG_ON(cap->issued & ~cap->implemented);
 
 	spin_unlock(&inode->i_lock);
 	if (writeback)
···
 		ceph_queue_invalidate(inode);
 	if (wake)
 		wake_up(&ci->i_cap_wq);
-	return reply;
+
+	if (check_caps == 1)
+		ceph_check_caps(ci, CHECK_CAPS_NODELAY|CHECK_CAPS_AUTHONLY,
+				session);
+	else if (check_caps == 2)
+		ceph_check_caps(ci, CHECK_CAPS_NODELAY, session);
+	else
+		mutex_unlock(&session->s_mutex);
 }
 
 /*
···
 			ci->i_cap_exporting_issued = cap->issued;
 		}
 		__ceph_remove_cap(cap);
-	} else {
-		WARN_ON(!cap);
 	}
+	/* else, we already released it */
 
 	spin_unlock(&inode->i_lock);
 }
···
 	u64 cap_id;
 	u64 size, max_size;
 	u64 tid;
-	int check_caps = 0;
 	void *snaptrace;
-	int r;
 
 	dout("handle_caps from mds%d\n", mds);
 
···
 	case CEPH_CAP_OP_IMPORT:
 		handle_cap_import(mdsc, inode, h, session,
 				  snaptrace, le32_to_cpu(h->snap_trace_len));
-		check_caps = 1; /* we may have sent a RELEASE to the old auth */
-		goto done;
+		ceph_check_caps(ceph_inode(inode), CHECK_CAPS_NODELAY,
+				session);
+		goto done_unlocked;
 	}
 
 	/* the rest require a cap */
···
 	switch (op) {
 	case CEPH_CAP_OP_REVOKE:
 	case CEPH_CAP_OP_GRANT:
-		r = handle_cap_grant(inode, h, session, cap, msg->middle);
-		if (r == 1)
-			ceph_check_caps(ceph_inode(inode),
-					CHECK_CAPS_NODELAY|CHECK_CAPS_AUTHONLY,
-					session);
-		else if (r == 2)
-			ceph_check_caps(ceph_inode(inode),
-					CHECK_CAPS_NODELAY,
-					session);
-		break;
+		handle_cap_grant(inode, h, session, cap, msg->middle);
+		goto done_unlocked;
 
 	case CEPH_CAP_OP_FLUSH_ACK:
 		handle_cap_flush_ack(inode, tid, h, session, cap);
···
 
 done:
 	mutex_unlock(&session->s_mutex);
-
-	if (check_caps)
-		ceph_check_caps(ceph_inode(inode), CHECK_CAPS_NODELAY, NULL);
+done_unlocked:
 	if (inode)
 		iput(inode);
 	return;
···
 	struct ceph_cap *cap;
 	struct ceph_mds_request_release *rel = *p;
 	int ret = 0;
-
-	dout("encode_inode_release %p mds%d drop %s unless %s\n", inode,
-	     mds, ceph_cap_string(drop), ceph_cap_string(unless));
+	int used = 0;
 
 	spin_lock(&inode->i_lock);
+	used = __ceph_caps_used(ci);
+
+	dout("encode_inode_release %p mds%d used %s drop %s unless %s\n", inode,
+	     mds, ceph_cap_string(used), ceph_cap_string(drop),
+	     ceph_cap_string(unless));
+
+	/* only drop unused caps */
+	drop &= ~used;
+
 	cap = __get_cap_for_mds(ci, mds);
 	if (cap && __cap_is_valid(cap)) {
 		if (force ||
+3-1
fs/ceph/dir.c
@@ -288,8 +288,10 @@
 		CEPH_MDS_OP_LSSNAP : CEPH_MDS_OP_READDIR;
 
 	/* discard old result, if any */
-	if (fi->last_readdir)
+	if (fi->last_readdir) {
 		ceph_mdsc_put_request(fi->last_readdir);
+		fi->last_readdir = NULL;
+	}
 
 	/* requery frag tree, as the frag topology may have changed */
 	frag = ceph_choose_frag(ceph_inode(inode), frag, NULL, NULL);
+16
fs/ceph/inode.c
···378378379379 ceph_queue_caps_release(inode);380380381381+ /*382382+ * we may still have a snap_realm reference if there are stray383383+ * caps in i_cap_exporting_issued or i_snap_caps.384384+ */385385+ if (ci->i_snap_realm) {386386+ struct ceph_mds_client *mdsc =387387+ &ceph_client(ci->vfs_inode.i_sb)->mdsc;388388+ struct ceph_snap_realm *realm = ci->i_snap_realm;389389+390390+ dout(" dropping residual ref to snap realm %p\n", realm);391391+ spin_lock(&realm->inodes_with_caps_lock);392392+ list_del_init(&ci->i_snap_realm_item);393393+ spin_unlock(&realm->inodes_with_caps_lock);394394+ ceph_put_snap_realm(mdsc, realm);395395+ }396396+381397 kfree(ci->i_symlink);382398 while ((n = rb_first(&ci->i_fragtree)) != NULL) {383399 frag = rb_entry(n, struct ceph_inode_frag, node);
+32-11
fs/ceph/mds_client.c
···328328 struct ceph_mds_session *s;329329330330 s = kzalloc(sizeof(*s), GFP_NOFS);331331+ if (!s)332332+ return ERR_PTR(-ENOMEM);331333 s->s_mdsc = mdsc;332334 s->s_mds = mds;333335 s->s_state = CEPH_MDS_SESSION_NEW;···531529{532530 dout("__unregister_request %p tid %lld\n", req, req->r_tid);533531 rb_erase(&req->r_node, &mdsc->request_tree);534534- ceph_mdsc_put_request(req);532532+ RB_CLEAR_NODE(&req->r_node);535533536534 if (req->r_unsafe_dir) {537535 struct ceph_inode_info *ci = ceph_inode(req->r_unsafe_dir);···540538 list_del_init(&req->r_unsafe_dir_item);541539 spin_unlock(&ci->i_unsafe_lock);542540 }541541+542542+ ceph_mdsc_put_request(req);543543}544544545545/*···866862 if (time_after_eq(jiffies, session->s_cap_ttl) &&867863 time_after_eq(session->s_cap_ttl, session->s_renew_requested))868864 pr_info("mds%d caps stale\n", session->s_mds);865865+ session->s_renew_requested = jiffies;869866870867 /* do not try to renew caps until a recovering mds has reconnected871868 * with its clients. */···879874880875 dout("send_renew_caps to mds%d (%s)\n", session->s_mds,881876 ceph_mds_state_name(state));882882- session->s_renew_requested = jiffies;883877 msg = create_session_msg(CEPH_SESSION_REQUEST_RENEWCAPS,884878 ++session->s_renew_seq);885879 if (IS_ERR(msg))···1570156615711567 /* get, open session */15721568 session = __ceph_lookup_mds_session(mdsc, mds);15731573- if (!session)15691569+ if (!session) {15741570 session = register_session(mdsc, mds);15711571+ if (IS_ERR(session)) {15721572+ err = PTR_ERR(session);15731573+ goto finish;15741574+ }15751575+ }15751576 dout("do_request mds%d session %p state %s\n", mds, session,15761577 session_state_name(session->s_state));15771578 if (session->s_state != CEPH_MDS_SESSION_OPEN &&···17791770 dout("handle_reply %p\n", req);1780177117811772 /* correct session? 
*/17821782- if (!req->r_session && req->r_session != session) {17731773+ if (req->r_session != session) {17831774 pr_err("mdsc_handle_reply got %llu on session mds%d"17841775 " not mds%d\n", tid, session->s_mds,17851776 req->r_session ? req->r_session->s_mds : -1);···26912682 */26922683static void wait_unsafe_requests(struct ceph_mds_client *mdsc, u64 want_tid)26932684{26942694- struct ceph_mds_request *req = NULL;26852685+ struct ceph_mds_request *req = NULL, *nextreq;26952686 struct rb_node *n;2696268726972688 mutex_lock(&mdsc->mutex);26982689 dout("wait_unsafe_requests want %lld\n", want_tid);26902690+restart:26992691 req = __get_oldest_req(mdsc);27002692 while (req && req->r_tid <= want_tid) {26932693+ /* find next request */26942694+ n = rb_next(&req->r_node);26952695+ if (n)26962696+ nextreq = rb_entry(n, struct ceph_mds_request, r_node);26972697+ else26982698+ nextreq = NULL;27012699 if ((req->r_op & CEPH_MDS_OP_WRITE)) {27022700 /* write op */27032701 ceph_mdsc_get_request(req);27022702+ if (nextreq)27032703+ ceph_mdsc_get_request(nextreq);27042704 mutex_unlock(&mdsc->mutex);27052705 dout("wait_unsafe_requests wait on %llu (want %llu)\n",27062706 req->r_tid, want_tid);27072707 wait_for_completion(&req->r_safe_completion);27082708 mutex_lock(&mdsc->mutex);27092709- n = rb_next(&req->r_node);27102709 ceph_mdsc_put_request(req);27112711- } else {27122712- n = rb_next(&req->r_node);27102710+ if (!nextreq)27112711+ break; /* next dne before, so we're done! */27122712+ if (RB_EMPTY_NODE(&nextreq->r_node)) {27132713+ /* next request was removed from tree */27142714+ ceph_mdsc_put_request(nextreq);27152715+ goto restart;27162716+ }27172717+ ceph_mdsc_put_request(nextreq); /* won't go away */27132718 }27142714- if (!n)27152715- break;27162716- req = rb_entry(n, struct ceph_mds_request, r_node);27192719+ req = nextreq;27172720 }27182721 mutex_unlock(&mdsc->mutex);27192722 dout("wait_unsafe_requests done\n");
···28112811 inode->i_mtime.tv_sec = (signed)le32_to_cpu(raw_inode->i_mtime);28122812 inode->i_atime.tv_nsec = inode->i_ctime.tv_nsec = inode->i_mtime.tv_nsec = 0;2813281328142814- ei->i_state = 0;28142814+ ei->i_state_flags = 0;28152815 ei->i_dir_start_lookup = 0;28162816 ei->i_dtime = le32_to_cpu(raw_inode->i_dtime);28172817 /* We now have enough fields to check if the inode was active or not.
+2-2
fs/ext4/ialloc.c
@@ -263,7 +263,7 @@
 		ext4_group_t f;
 
 		f = ext4_flex_group(sbi, block_group);
-		atomic_dec(&sbi->s_flex_groups[f].free_inodes);
+		atomic_dec(&sbi->s_flex_groups[f].used_dirs);
 	}
 
 }
@@ -773,7 +773,7 @@
 	if (sbi->s_log_groups_per_flex) {
 		ext4_group_t f = ext4_flex_group(sbi, group);
 
-		atomic_inc(&sbi->s_flex_groups[f].free_inodes);
+		atomic_inc(&sbi->s_flex_groups[f].used_dirs);
 	}
 	}
 	gdp->bg_checksum = ext4_group_desc_csum(sbi, group, gdp);
···8080 prefetchw(&bvec->bv_page->flags);81818282 end_page_writeback(page);8383+ page_cache_release(page);8384 } while (bvec >= bio->bi_io_vec);8485 bio_put(bio);8586 if (atomic_dec_and_test(&super->s_pending_writes))···9897 unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9);9998 int i;10099100100+ if (max_pages > BIO_MAX_PAGES)101101+ max_pages = BIO_MAX_PAGES;101102 bio = bio_alloc(GFP_NOFS, max_pages);102102- BUG_ON(!bio); /* FIXME: handle this */103103+ BUG_ON(!bio);103104104105 for (i = 0; i < nr_pages; i++) {105106 if (i >= max_pages) {···194191 unsigned int max_pages = queue_max_hw_sectors(q) >> (PAGE_SHIFT - 9);195192 int i;196193194194+ if (max_pages > BIO_MAX_PAGES)195195+ max_pages = BIO_MAX_PAGES;197196 bio = bio_alloc(GFP_NOFS, max_pages);198198- BUG_ON(!bio); /* FIXME: handle this */197197+ BUG_ON(!bio);199198200199 for (i = 0; i < nr_pages; i++) {201200 if (i >= max_pages) {
···15941594 return ret;15951595}1596159615971597-/* Rewrite cannot mark the inode dirty but has to write it immediatly. */15981597int logfs_rewrite_block(struct inode *inode, u64 bix, u64 ofs,15991598 gc_level_t gc_level, long flags)16001599{···16101611 if (level != 0)16111612 alloc_indirect_block(inode, page, 0);16121613 err = logfs_write_buf(inode, page, flags);16141614+ if (!err && shrink_level(gc_level) == 0) {16151615+ /* Rewrite cannot mark the inode dirty but has to16161616+ * write it immediatly.16171617+ * Q: Can't we just create an alias for the inode16181618+ * instead? And if not, why not?16191619+ */16201620+ if (inode->i_ino == LOGFS_INO_MASTER)16211621+ logfs_write_anchor(inode->i_sb);16221622+ else {16231623+ err = __logfs_write_inode(inode, flags);16241624+ }16251625+ }16131626 }16141627 logfs_put_write_page(page);16151628 return err;
+31-23
fs/logfs/segment.c
···9393 } while (len);9494}95959696-/*9797- * bdev_writeseg will write full pages. Memset the tail to prevent data leaks.9898- */9999-static void pad_wbuf(struct logfs_area *area, int final)9696+static void pad_partial_page(struct logfs_area *area)10097{10198 struct super_block *sb = area->a_sb;102102- struct logfs_super *super = logfs_super(sb);10399 struct page *page;104100 u64 ofs = dev_ofs(sb, area->a_segno, area->a_used_bytes);105101 pgoff_t index = ofs >> PAGE_SHIFT;106102 long offset = ofs & (PAGE_SIZE-1);107103 u32 len = PAGE_SIZE - offset;108104109109- if (len == PAGE_SIZE) {110110- /* The math in this function can surely use some love */111111- len = 0;112112- }113113- if (len) {114114- BUG_ON(area->a_used_bytes >= super->s_segsize);115115-116116- page = get_mapping_page(area->a_sb, index, 0);105105+ if (len % PAGE_SIZE) {106106+ page = get_mapping_page(sb, index, 0);117107 BUG_ON(!page); /* FIXME: reserve a pool */118108 memset(page_address(page) + offset, 0xff, len);119109 SetPagePrivate(page);120110 page_cache_release(page);121111 }112112+}122113123123- if (!final)124124- return;114114+static void pad_full_pages(struct logfs_area *area)115115+{116116+ struct super_block *sb = area->a_sb;117117+ struct logfs_super *super = logfs_super(sb);118118+ u64 ofs = dev_ofs(sb, area->a_segno, area->a_used_bytes);119119+ u32 len = super->s_segsize - area->a_used_bytes;120120+ pgoff_t index = PAGE_CACHE_ALIGN(ofs) >> PAGE_CACHE_SHIFT;121121+ pgoff_t no_indizes = len >> PAGE_CACHE_SHIFT;122122+ struct page *page;125123126126- area->a_used_bytes += len;127127- for ( ; area->a_used_bytes < super->s_segsize;128128- area->a_used_bytes += PAGE_SIZE) {129129- /* Memset another page */130130- index++;131131- page = get_mapping_page(area->a_sb, index, 0);124124+ while (no_indizes) {125125+ page = get_mapping_page(sb, index, 0);132126 BUG_ON(!page); /* FIXME: reserve a pool */133133- memset(page_address(page), 0xff, PAGE_SIZE);127127+ SetPageUptodate(page);128128+ 
memset(page_address(page), 0xff, PAGE_CACHE_SIZE);134129 SetPagePrivate(page);135130 page_cache_release(page);131131+ index++;132132+ no_indizes--;136133 }134134+}135135+136136+/*137137+ * bdev_writeseg will write full pages. Memset the tail to prevent data leaks.138138+ * Also make sure we allocate (and memset) all pages for final writeout.139139+ */140140+static void pad_wbuf(struct logfs_area *area, int final)141141+{142142+ pad_partial_page(area);143143+ if (final)144144+ pad_full_pages(area);137145}138146139147/*···691683 return 0;692684}693685694694-static void freeseg(struct super_block *sb, u32 segno)686686+void freeseg(struct super_block *sb, u32 segno)695687{696688 struct logfs_super *super = logfs_super(sb);697689 struct address_space *mapping = super->s_mapping_inode->i_mapping;
+7-8
fs/logfs/super.c
···277277 }278278 if (valid0 && valid1 && ds_cmp(ds0, ds1)) {279279 printk(KERN_INFO"Superblocks don't match - fixing.\n");280280- return write_one_sb(sb, super->s_devops->find_last_sb);280280+ return logfs_write_sb(sb);281281 }282282 /* If neither is valid now, something's wrong. Didn't we properly283283 * check them before?!? */···289289{290290 int err;291291292292+ err = logfs_open_segfile(sb);293293+ if (err)294294+ return err;295295+292296 /* Repair any broken superblock copies */293297 err = logfs_recover_sb(sb);294298 if (err)···300296301297 /* Check areas for trailing unaccounted data */302298 err = logfs_check_areas(sb);303303- if (err)304304- return err;305305-306306- err = logfs_open_segfile(sb);307299 if (err)308300 return err;309301···328328329329 sb->s_root = d_alloc_root(rootdir);330330 if (!sb->s_root)331331- goto fail;331331+ goto fail2;332332333333 super->s_erase_page = alloc_pages(GFP_KERNEL, 0);334334 if (!super->s_erase_page)···572572 return 0;573573574574err1:575575- up_write(&sb->s_umount);576576- deactivate_super(sb);575575+ deactivate_locked_super(sb);577576 return err;578577err0:579578 kfree(super);
+10-8
fs/namei.c
···1610161016111611static struct file *do_last(struct nameidata *nd, struct path *path,16121612 int open_flag, int acc_mode,16131613- int mode, const char *pathname,16141614- int *want_dir)16131613+ int mode, const char *pathname)16151614{16161615 struct dentry *dir = nd->path.dentry;16171616 struct file *filp;···16411642 if (nd->last.name[nd->last.len]) {16421643 if (open_flag & O_CREAT)16431644 goto exit;16441644- *want_dir = 1;16451645+ nd->flags |= LOOKUP_DIRECTORY;16451646 }1646164716471648 /* just plain open? */···16551656 if (path->dentry->d_inode->i_op->follow_link)16561657 return NULL;16571658 error = -ENOTDIR;16581658- if (*want_dir && !path->dentry->d_inode->i_op->lookup)16591659- goto exit_dput;16591659+ if (nd->flags & LOOKUP_DIRECTORY) {16601660+ if (!path->dentry->d_inode->i_op->lookup)16611661+ goto exit_dput;16621662+ }16601663 path_to_nameidata(path, nd);16611664 audit_inode(pathname, nd->path.dentry);16621665 goto ok;···17671766 int count = 0;17681767 int flag = open_to_namei_flags(open_flag);17691768 int force_reval = 0;17701770- int want_dir = open_flag & O_DIRECTORY;1771176917721770 if (!(open_flag & O_CREAT))17731771 mode = 0;···18281828 if (open_flag & O_EXCL)18291829 nd.flags |= LOOKUP_EXCL;18301830 }18311831- filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname, &want_dir);18311831+ if (open_flag & O_DIRECTORY)18321832+ nd.flags |= LOOKUP_DIRECTORY;18331833+ filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname);18321834 while (unlikely(!filp)) { /* trailing symlink */18331835 struct path holder;18341836 struct inode *inode = path.dentry->d_inode;···18681866 }18691867 holder = path;18701868 nd.flags &= ~LOOKUP_PARENT;18711871- filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname, &want_dir);18691869+ filp = do_last(&nd, &path, open_flag, acc_mode, mode, pathname);18721870 if (inode->i_op->put_link)18731871 inode->i_op->put_link(holder.dentry, &nd, cookie);18741872 path_put(&holder);
+4-4
fs/nilfs2/segbuf.c
@@ -323,14 +323,14 @@
 int nilfs_wait_on_logs(struct list_head *logs)
 {
 	struct nilfs_segment_buffer *segbuf;
-	int err;
+	int err, ret = 0;
 
 	list_for_each_entry(segbuf, logs, sb_list) {
 		err = nilfs_segbuf_wait(segbuf);
-		if (err)
-			return err;
+		if (err && !ret)
+			ret = err;
 	}
-	return 0;
+	return ret;
 }
 
 /*
+7-8
fs/nilfs2/segment.c
···15101510 if (mode != SC_LSEG_SR || sci->sc_stage.scnt < NILFS_ST_CPFILE)15111511 break;1512151215131513+ nilfs_clear_logs(&sci->sc_segbufs);15141514+15151515+ err = nilfs_segctor_extend_segments(sci, nilfs, nadd);15161516+ if (unlikely(err))15171517+ return err;15181518+15131519 if (sci->sc_stage.flags & NILFS_CF_SUFREED) {15141520 err = nilfs_sufile_cancel_freev(nilfs->ns_sufile,15151521 sci->sc_freesegs,···15231517 NULL);15241518 WARN_ON(err); /* do not happen */15251519 }15261526- nilfs_clear_logs(&sci->sc_segbufs);15271527-15281528- err = nilfs_segctor_extend_segments(sci, nilfs, nadd);15291529- if (unlikely(err))15301530- return err;15311531-15321520 nadd = min_t(int, nadd << 1, SC_MAX_SEGDELTA);15331521 sci->sc_stage = prev_stage;15341522 }···1897189718981898 list_splice_tail_init(&sci->sc_write_logs, &logs);18991899 ret = nilfs_wait_on_logs(&logs);19001900- if (ret)19011901- nilfs_abort_logs(&logs, NULL, sci->sc_super_root, ret);19001900+ nilfs_abort_logs(&logs, NULL, sci->sc_super_root, ret ? : err);1902190119031902 list_splice_tail_init(&sci->sc_segbufs, &logs);19041903 nilfs_cancel_segusage(&logs, nilfs->ns_sufile);
+72-5
fs/ocfs2/acl.c
···3030#include "alloc.h"3131#include "dlmglue.h"3232#include "file.h"3333+#include "inode.h"3434+#include "journal.h"3335#include "ocfs2_fs.h"34363537#include "xattr.h"···168166}169167170168/*169169+ * Helper function to set i_mode in memory and disk. Some call paths170170+ * will not have di_bh or a journal handle to pass, in which case it171171+ * will create it's own.172172+ */173173+static int ocfs2_acl_set_mode(struct inode *inode, struct buffer_head *di_bh,174174+ handle_t *handle, umode_t new_mode)175175+{176176+ int ret, commit_handle = 0;177177+ struct ocfs2_dinode *di;178178+179179+ if (di_bh == NULL) {180180+ ret = ocfs2_read_inode_block(inode, &di_bh);181181+ if (ret) {182182+ mlog_errno(ret);183183+ goto out;184184+ }185185+ } else186186+ get_bh(di_bh);187187+188188+ if (handle == NULL) {189189+ handle = ocfs2_start_trans(OCFS2_SB(inode->i_sb),190190+ OCFS2_INODE_UPDATE_CREDITS);191191+ if (IS_ERR(handle)) {192192+ ret = PTR_ERR(handle);193193+ mlog_errno(ret);194194+ goto out_brelse;195195+ }196196+197197+ commit_handle = 1;198198+ }199199+200200+ di = (struct ocfs2_dinode *)di_bh->b_data;201201+ ret = ocfs2_journal_access_di(handle, INODE_CACHE(inode), di_bh,202202+ OCFS2_JOURNAL_ACCESS_WRITE);203203+ if (ret) {204204+ mlog_errno(ret);205205+ goto out_commit;206206+ }207207+208208+ inode->i_mode = new_mode;209209+ di->i_mode = cpu_to_le16(inode->i_mode);210210+211211+ ocfs2_journal_dirty(handle, di_bh);212212+213213+out_commit:214214+ if (commit_handle)215215+ ocfs2_commit_trans(OCFS2_SB(inode->i_sb), handle);216216+out_brelse:217217+ brelse(di_bh);218218+out:219219+ return ret;220220+}221221+222222+/*171223 * Set the access or default ACL of an inode.172224 */173225static int ocfs2_set_acl(handle_t *handle,···249193 if (ret < 0)250194 return ret;251195 else {252252- inode->i_mode = mode;253196 if (ret == 0)254197 acl = NULL;198198+199199+ ret = ocfs2_acl_set_mode(inode, di_bh,200200+ handle, mode);201201+ if (ret)202202+ return ret;203203+255204 
}256205 }257206 break;···344283 struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);345284 struct posix_acl *acl = NULL;346285 int ret = 0;286286+ mode_t mode;347287348288 if (!S_ISLNK(inode->i_mode)) {349289 if (osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) {···353291 if (IS_ERR(acl))354292 return PTR_ERR(acl);355293 }356356- if (!acl)357357- inode->i_mode &= ~current_umask();294294+ if (!acl) {295295+ mode = inode->i_mode & ~current_umask();296296+ ret = ocfs2_acl_set_mode(inode, di_bh, handle, mode);297297+ if (ret) {298298+ mlog_errno(ret);299299+ goto cleanup;300300+ }301301+ }358302 }359303 if ((osb->s_mount_opt & OCFS2_MOUNT_POSIX_ACL) && acl) {360304 struct posix_acl *clone;361361- mode_t mode;362305363306 if (S_ISDIR(inode->i_mode)) {364307 ret = ocfs2_set_acl(handle, inode, di_bh,···380313 mode = inode->i_mode;381314 ret = posix_acl_create_masq(clone, &mode);382315 if (ret >= 0) {383383- inode->i_mode = mode;316316+ ret = ocfs2_acl_set_mode(inode, di_bh, handle, mode);384317 if (ret > 0) {385318 ret = ocfs2_set_acl(handle, inode,386319 di_bh, ACL_TYPE_ACCESS,
+1-3
fs/ocfs2/dlm/dlmmaster.c
···18751875ok:18761876 spin_unlock(&res->spinlock);18771877 }18781878- spin_unlock(&dlm->spinlock);1879187818801879 // mlog(0, "woo! got an assert_master from node %u!\n",18811880 // assert->node_idx);···19251926 /* master is known, detach if not already detached.19261927 * ensures that only one assert_master call will happen19271928 * on this mle. */19281928- spin_lock(&dlm->spinlock);19291929 spin_lock(&dlm->master_lock);1930193019311931 rr = atomic_read(&mle->mle_refs.refcount);···19571959 __dlm_put_mle(mle);19581960 }19591961 spin_unlock(&dlm->master_lock);19601960- spin_unlock(&dlm->spinlock);19611962 } else if (res) {19621963 if (res->owner != assert->node_idx) {19631964 mlog(0, "assert_master from %u, but current "···19641967 res->owner, namelen, name);19651968 }19661969 }19701970+ spin_unlock(&dlm->spinlock);1967197119681972done:19691973 ret = 0;
+15
fs/ocfs2/inode.c
···891891 /* Do some basic inode verification... */892892 di = (struct ocfs2_dinode *) di_bh->b_data;893893 if (!(di->i_flags & cpu_to_le32(OCFS2_ORPHANED_FL))) {894894+ /*895895+ * Inodes in the orphan dir must have ORPHANED_FL. The only896896+ * inodes that come back out of the orphan dir are reflink897897+ * targets. A reflink target may be moved out of the orphan898898+ * dir between the time we scan the directory and the time we899899+ * process it. This would lead to HAS_REFCOUNT_FL being set but900900+ * ORPHANED_FL not.901901+ */902902+ if (di->i_dyn_features & cpu_to_le16(OCFS2_HAS_REFCOUNT_FL)) {903903+ mlog(0, "Reflinked inode %llu is no longer orphaned. "904904+ "it shouldn't be deleted\n",905905+ (unsigned long long)oi->ip_blkno);906906+ goto bail;907907+ }908908+894909 /* for lack of a better error? */895910 status = -EEXIST;896911 mlog(ML_ERROR,
+6-4
fs/ocfs2/localalloc.c
···872872 (unsigned long long)la_start_blk,873873 (unsigned long long)blkno);874874875875- status = ocfs2_free_clusters(handle, main_bm_inode,876876- main_bm_bh, blkno, count);875875+ status = ocfs2_release_clusters(handle,876876+ main_bm_inode,877877+ main_bm_bh, blkno,878878+ count);877879 if (status < 0) {878880 mlog_errno(status);879881 goto bail;···986984 }987985988986retry_enospc:989989- (*ac)->ac_bits_wanted = osb->local_alloc_bits;990990-987987+ (*ac)->ac_bits_wanted = osb->local_alloc_default_bits;991988 status = ocfs2_reserve_cluster_bitmap_bits(osb, *ac);992989 if (status == -ENOSPC) {993990 if (ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_ENOSPC) ==···10621061 OCFS2_LA_DISABLED)10631062 goto bail;1064106310641064+ ac->ac_bits_wanted = osb->local_alloc_default_bits;10651065 status = ocfs2_claim_clusters(osb, handle, ac,10661066 osb->local_alloc_bits,10671067 &cluster_off,
+1-1
fs/ocfs2/locks.c
@@ -133,7 +133,7 @@
 
 	if (!(fl->fl_flags & FL_POSIX))
 		return -ENOLCK;
-	if (__mandatory_lock(inode))
+	if (__mandatory_lock(inode) && fl->fl_type != F_UNLCK)
 		return -ENOLCK;
 
 	return ocfs2_plock(osb->cconn, OCFS2_I(inode)->ip_blkno, file, cmd, fl);
+23-5
fs/ocfs2/namei.c
···8484static int ocfs2_orphan_add(struct ocfs2_super *osb,8585 handle_t *handle,8686 struct inode *inode,8787- struct ocfs2_dinode *fe,8787+ struct buffer_head *fe_bh,8888 char *name,8989 struct ocfs2_dir_lookup_result *lookup,9090 struct inode *orphan_dir_inode);···879879 fe = (struct ocfs2_dinode *) fe_bh->b_data;880880881881 if (inode_is_unlinkable(inode)) {882882- status = ocfs2_orphan_add(osb, handle, inode, fe, orphan_name,882882+ status = ocfs2_orphan_add(osb, handle, inode, fe_bh, orphan_name,883883 &orphan_insert, orphan_dir);884884 if (status < 0) {885885 mlog_errno(status);···13001300 if (S_ISDIR(new_inode->i_mode) ||13011301 (ocfs2_read_links_count(newfe) == 1)) {13021302 status = ocfs2_orphan_add(osb, handle, new_inode,13031303- newfe, orphan_name,13031303+ newfe_bh, orphan_name,13041304 &orphan_insert, orphan_dir);13051305 if (status < 0) {13061306 mlog_errno(status);···19111911static int ocfs2_orphan_add(struct ocfs2_super *osb,19121912 handle_t *handle,19131913 struct inode *inode,19141914- struct ocfs2_dinode *fe,19141914+ struct buffer_head *fe_bh,19151915 char *name,19161916 struct ocfs2_dir_lookup_result *lookup,19171917 struct inode *orphan_dir_inode)···19191919 struct buffer_head *orphan_dir_bh = NULL;19201920 int status = 0;19211921 struct ocfs2_dinode *orphan_fe;19221922+ struct ocfs2_dinode *fe = (struct ocfs2_dinode *) fe_bh->b_data;1922192319231924 mlog_entry("(inode->i_ino = %lu)\n", inode->i_ino);19241925···19601959 goto leave;19611960 }1962196119621962+ /*19631963+ * We're going to journal the change of i_flags and i_orphaned_slot.19641964+ * It's safe anyway, though some callers may duplicate the journaling.19651965+ * Journaling within the func just make the logic look more19661966+ * straightforward.19671967+ */19681968+ status = ocfs2_journal_access_di(handle,19691969+ INODE_CACHE(inode),19701970+ fe_bh,19711971+ OCFS2_JOURNAL_ACCESS_WRITE);19721972+ if (status < 0) {19731973+ mlog_errno(status);19741974+ goto leave;19751975+ 
}19761976+19631977 le32_add_cpu(&fe->i_flags, OCFS2_ORPHANED_FL);1964197819651979 /* Record which orphan dir our inode now resides19661980 * in. delete_inode will use this to determine which orphan19671981 * dir to lock. */19681982 fe->i_orphaned_slot = cpu_to_le16(osb->slot_num);19831983+19841984+ ocfs2_journal_dirty(handle, fe_bh);1969198519701986 mlog(0, "Inode %llu orphaned in slot %d\n",19711987 (unsigned long long)OCFS2_I(inode)->ip_blkno, osb->slot_num);···21412123 }2142212421432125 di = (struct ocfs2_dinode *)new_di_bh->b_data;21442144- status = ocfs2_orphan_add(osb, handle, inode, di, orphan_name,21262126+ status = ocfs2_orphan_add(osb, handle, inode, new_di_bh, orphan_name,21452127 &orphan_insert, orphan_dir);21462128 if (status < 0) {21472129 mlog_errno(status);
+12-2
fs/ocfs2/ocfs2.h
···763763 return megs << (20 - OCFS2_SB(sb)->s_clustersize_bits);764764}765765766766-#define ocfs2_set_bit ext2_set_bit767767-#define ocfs2_clear_bit ext2_clear_bit766766+static inline void _ocfs2_set_bit(unsigned int bit, unsigned long *bitmap)767767+{768768+ ext2_set_bit(bit, bitmap);769769+}770770+#define ocfs2_set_bit(bit, addr) _ocfs2_set_bit((bit), (unsigned long *)(addr))771771+772772+static inline void _ocfs2_clear_bit(unsigned int bit, unsigned long *bitmap)773773+{774774+ ext2_clear_bit(bit, bitmap);775775+}776776+#define ocfs2_clear_bit(bit, addr) _ocfs2_clear_bit((bit), (unsigned long *)(addr))777777+768778#define ocfs2_test_bit ext2_test_bit769779#define ocfs2_find_next_zero_bit ext2_find_next_zero_bit770780#define ocfs2_find_next_bit ext2_find_next_bit
@@ -1,3 +1,5 @@
+#include <linux/types.h>
+
 /* platform data for the PL061 GPIO driver */
 
 struct pl061_platform_data {
+2
include/linux/clockchips.h
@@ -73,6 +73,7 @@
  * @list:		list head for the management code
  * @mode:		operating mode assigned by the management code
  * @next_event:		local storage for the next event in oneshot mode
+ * @retries:		number of forced programming retries
  */
 struct clock_event_device {
 	const char		*name;
@@ -93,6 +94,7 @@
 	struct list_head	list;
 	enum clock_event_mode	mode;
 	ktime_t			next_event;
+	unsigned long		retries;
 };
 
 /*
+3-3
include/linux/ext3_fs.h
@@ -565,17 +565,17 @@
 
 static inline int ext3_test_inode_state(struct inode *inode, int bit)
 {
-	return test_bit(bit, &EXT3_I(inode)->i_state);
+	return test_bit(bit, &EXT3_I(inode)->i_state_flags);
 }
 
 static inline void ext3_set_inode_state(struct inode *inode, int bit)
 {
-	set_bit(bit, &EXT3_I(inode)->i_state);
+	set_bit(bit, &EXT3_I(inode)->i_state_flags);
 }
 
 static inline void ext3_clear_inode_state(struct inode *inode, int bit)
 {
-	clear_bit(bit, &EXT3_I(inode)->i_state);
+	clear_bit(bit, &EXT3_I(inode)->i_state_flags);
 }
 #else
 /* Assume that user mode programs are passing in an ext3fs superblock, not
+1-1
include/linux/ext3_fs_i.h
@@ -87,7 +87,7 @@
 	 * near to their parent directory's inode.
 	 */
 	__u32	i_block_group;
-	unsigned long	i_state;		/* Dynamic state flags for ext3 */
+	unsigned long	i_state_flags;	/* Dynamic state flags for ext3 */
 
 	/* block reservation info */
 	struct ext3_block_alloc_info *i_block_alloc_info;
+5-2
include/linux/freezer.h
@@ -64,9 +64,12 @@
 extern void cancel_freezing(struct task_struct *p);
 
 #ifdef CONFIG_CGROUP_FREEZER
-extern int cgroup_frozen(struct task_struct *task);
+extern int cgroup_freezing_or_frozen(struct task_struct *task);
 #else /* !CONFIG_CGROUP_FREEZER */
-static inline int cgroup_frozen(struct task_struct *task) { return 0; }
+static inline int cgroup_freezing_or_frozen(struct task_struct *task)
+{
+	return 0;
+}
 #endif /* !CONFIG_CGROUP_FREEZER */
 
 /*
···123123 return lock_is_held(&rcu_lock_map);124124}125125126126-/**127127- * rcu_read_lock_bh_held - might we be in RCU-bh read-side critical section?128128- *129129- * If CONFIG_PROVE_LOCKING is selected and enabled, returns nonzero iff in130130- * an RCU-bh read-side critical section. In absence of CONFIG_PROVE_LOCKING,131131- * this assumes we are in an RCU-bh read-side critical section unless it can132132- * prove otherwise.133133- *134134- * Check rcu_scheduler_active to prevent false positives during boot.126126+/*127127+ * rcu_read_lock_bh_held() is defined out of line to avoid #include-file128128+ * hell.135129 */136136-static inline int rcu_read_lock_bh_held(void)137137-{138138- if (!debug_lockdep_rcu_enabled())139139- return 1;140140- return lock_is_held(&rcu_bh_lock_map);141141-}130130+extern int rcu_read_lock_bh_held(void);142131143132/**144133 * rcu_read_lock_sched_held - might we be in RCU-sched read-side critical section?···149160 return 1;150161 if (debug_locks)151162 lockdep_opinion = lock_is_held(&rcu_sched_lock_map);152152- return lockdep_opinion || preempt_count() != 0;163163+ return lockdep_opinion || preempt_count() != 0 || irqs_disabled();153164}154165#else /* #ifdef CONFIG_PREEMPT */155166static inline int rcu_read_lock_sched_held(void)···180191#ifdef CONFIG_PREEMPT181192static inline int rcu_read_lock_sched_held(void)182193{183183- return !rcu_scheduler_active || preempt_count() != 0;194194+ return !rcu_scheduler_active || preempt_count() != 0 || irqs_disabled();184195}185196#else /* #ifdef CONFIG_PREEMPT */186197static inline int rcu_read_lock_sched_held(void)
-6
include/linux/skbuff.h
···190190 atomic_t dataref;191191 unsigned short nr_frags;192192 unsigned short gso_size;193193-#ifdef CONFIG_HAS_DMA194194- dma_addr_t dma_head;195195-#endif196193 /* Warning: this field is not always filled in (UFO)! */197194 unsigned short gso_segs;198195 unsigned short gso_type;···198201 struct sk_buff *frag_list;199202 struct skb_shared_hwtstamps hwtstamps;200203 skb_frag_t frags[MAX_SKB_FRAGS];201201-#ifdef CONFIG_HAS_DMA202202- dma_addr_t dma_maps[MAX_SKB_FRAGS];203203-#endif204204 /* Intermediate layers must ensure that destructor_arg205205 * remains valid until skb destructor */206206 void * destructor_arg;
+1
include/linux/socket.h
@@ -255,6 +255,7 @@
 #define MSG_ERRQUEUE	0x2000	/* Fetch message from error queue */
 #define MSG_NOSIGNAL	0x4000	/* Do not generate SIGPIPE */
 #define MSG_MORE	0x8000	/* Sender will send more */
+#define MSG_WAITFORONE	0x10000	/* recvmmsg(): block until 1+ packets avail */
 
 #define MSG_EOF         MSG_FIN
 
···277277#endif278278279279280280-/* socket drivers are expected to use these callbacks in their .drv struct */281281-extern int pcmcia_socket_dev_suspend(struct device *dev);282282-extern void pcmcia_socket_dev_early_resume(struct device *dev);283283-extern void pcmcia_socket_dev_late_resume(struct device *dev);284284-extern int pcmcia_socket_dev_resume(struct device *dev);285285-286280/* socket drivers use this callback in their IRQ handler */287281extern void pcmcia_parse_events(struct pcmcia_socket *socket,288282 unsigned int events);
+6-3
kernel/cgroup_freezer.c
···4747 struct freezer, css);4848}49495050-int cgroup_frozen(struct task_struct *task)5050+int cgroup_freezing_or_frozen(struct task_struct *task)5151{5252 struct freezer *freezer;5353 enum freezer_state state;54545555 task_lock(task);5656 freezer = task_freezer(task);5757- state = freezer->state;5757+ if (!freezer->css.cgroup->parent)5858+ state = CGROUP_THAWED; /* root cgroup can't be frozen */5959+ else6060+ state = freezer->state;5861 task_unlock(task);59626060- return state == CGROUP_FROZEN;6363+ return (state == CGROUP_FREEZING) || (state == CGROUP_FROZEN);6164}62656366/*
@@ -333,6 +333,12 @@
 	struct early_res *r;
 	int i;
 
+	if (start == end)
+		return;
+
+	if (WARN_ONCE(start > end, " wrong range [%#llx, %#llx]\n", start, end))
+		return;
+
 try_next:
 	i = find_overlapped_early(start, end);
 	if (i >= max_early_res)
+24-11
kernel/irq/chip.c
···
 		if (desc->chip->ack)
 			desc->chip->ack(irq);
 	}
+	desc->status |= IRQ_MASKED;
+}
+
+static inline void mask_irq(struct irq_desc *desc, int irq)
+{
+	if (desc->chip->mask) {
+		desc->chip->mask(irq);
+		desc->status |= IRQ_MASKED;
+	}
+}
+
+static inline void unmask_irq(struct irq_desc *desc, int irq)
+{
+	if (desc->chip->unmask) {
+		desc->chip->unmask(irq);
+		desc->status &= ~IRQ_MASKED;
+	}
 }

 /*
···
 	raw_spin_lock(&desc->lock);
 	desc->status &= ~IRQ_INPROGRESS;

-	if (unlikely(desc->status & IRQ_ONESHOT))
-		desc->status |= IRQ_MASKED;
-	else if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask)
-		desc->chip->unmask(irq);
+	if (!(desc->status & (IRQ_DISABLED | IRQ_ONESHOT)))
+		unmask_irq(desc, irq);
 out_unlock:
 	raw_spin_unlock(&desc->lock);
 }
···
 	action = desc->action;
 	if (unlikely(!action || (desc->status & IRQ_DISABLED))) {
 		desc->status |= IRQ_PENDING;
-		if (desc->chip->mask)
-			desc->chip->mask(irq);
+		mask_irq(desc, irq);
 		goto out;
 	}
···
 	irqreturn_t action_ret;

 	if (unlikely(!action)) {
-		desc->chip->mask(irq);
+		mask_irq(desc, irq);
 		goto out_unlock;
 	}
···
 	if (unlikely((desc->status &
 		      (IRQ_PENDING | IRQ_MASKED | IRQ_DISABLED)) ==
 		     (IRQ_PENDING | IRQ_MASKED))) {
-		desc->chip->unmask(irq);
-		desc->status &= ~IRQ_MASKED;
+		unmask_irq(desc, irq);
 	}

 	desc->status &= ~IRQ_PENDING;
···
 	__set_irq_handler(irq, handle, 0, name);
 }

-void __init set_irq_noprobe(unsigned int irq)
+void set_irq_noprobe(unsigned int irq)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 	unsigned long flags;
···
 	raw_spin_unlock_irqrestore(&desc->lock, flags);
 }

-void __init set_irq_probe(unsigned int irq)
+void set_irq_probe(unsigned int irq)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 	unsigned long flags;
+22
kernel/irq/manage.c
···
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 	struct irqaction *action;
+	unsigned long flags;

 	if (!desc)
 		return 0;
···
 	if (desc->status & IRQ_NOREQUEST)
 		return 0;

+	raw_spin_lock_irqsave(&desc->lock, flags);
 	action = desc->action;
 	if (action)
 		if (irqflags & action->flags & IRQF_SHARED)
 			action = NULL;
+
+	raw_spin_unlock_irqrestore(&desc->lock, flags);

 	return !action;
 }
···
  */
 static void irq_finalize_oneshot(unsigned int irq, struct irq_desc *desc)
 {
+again:
 	chip_bus_lock(irq, desc);
 	raw_spin_lock_irq(&desc->lock);
+
+	/*
+	 * Implausible though it may be, we need to protect against the
+	 * following scenario:
+	 *
+	 * The thread handler finishes before the hard interrupt handler
+	 * on the other CPU. If we unmask the irq line now, the interrupt
+	 * can come in again, mask the line and leave due to
+	 * IRQ_INPROGRESS; the irq line then stays masked forever.
+	 */
+	if (unlikely(desc->status & IRQ_INPROGRESS)) {
+		raw_spin_unlock_irq(&desc->lock);
+		chip_bus_sync_unlock(irq, desc);
+		cpu_relax();
+		goto again;
+	}
+
 	if (!(desc->status & IRQ_DISABLED) && (desc->status & IRQ_MASKED)) {
 		desc->status &= ~IRQ_MASKED;
 		desc->chip->unmask(irq);
+103-102
kernel/kgdb.c
···
 	struct pt_regs		*linux_regs;
 };

+/* Exception state values */
+#define DCPU_WANT_MASTER 0x1	/* Waiting to become a master kgdb cpu */
+#define DCPU_NEXT_MASTER 0x2	/* Transition from one master cpu to another */
+#define DCPU_IS_SLAVE	 0x4	/* Slave cpu entered an exception */
+#define DCPU_SSTEP	 0x8	/* CPU is single stepping */
+
 static struct debuggerinfo_struct {
 	void			*debuggerinfo;
 	struct task_struct	*task;
+	int			exception_state;
 } kgdb_info[NR_CPUS];

 /**
···

 /*
  * Copy the binary array pointed to by buf into mem. Fix $, #, and
- * 0x7d escaped with 0x7d. Return a pointer to the character after
- * the last byte written.
+ * 0x7d escaped with 0x7d. Return -EFAULT on failure or 0 on success.
+ * The input buf is overwritten with the result to write to mem.
  */
 static int kgdb_ebin2mem(char *buf, char *mem, int count)
 {
-	int err = 0;
-	char c;
+	int size = 0;
+	char *c = buf;

 	while (count-- > 0) {
-		c = *buf++;
-		if (c == 0x7d)
-			c = *buf++ ^ 0x20;
-
-		err = probe_kernel_write(mem, &c, 1);
-		if (err)
-			break;
-
-		mem++;
+		c[size] = *buf++;
+		if (c[size] == 0x7d)
+			c[size] = *buf++ ^ 0x20;
+		size++;
 	}

-	return err;
+	return probe_kernel_write(mem, c, size);
 }

 /*
···
 	 */
 	return find_task_by_pid_ns(tid, &init_pid_ns);
 }
-
-/*
- * CPU debug state control:
- */
-
-#ifdef CONFIG_SMP
-static void kgdb_wait(struct pt_regs *regs)
-{
-	unsigned long flags;
-	int cpu;
-
-	local_irq_save(flags);
-	cpu = raw_smp_processor_id();
-	kgdb_info[cpu].debuggerinfo = regs;
-	kgdb_info[cpu].task = current;
-	/*
-	 * Make sure the above info reaches the primary CPU before
-	 * our cpu_in_kgdb[] flag setting does:
-	 */
-	smp_wmb();
-	atomic_set(&cpu_in_kgdb[cpu], 1);
-
-	/* Disable any cpu specific hw breakpoints */
-	kgdb_disable_hw_debug(regs);
-
-	/* Wait till primary CPU is done with debugging */
-	while (atomic_read(&passive_cpu_wait[cpu]))
-		cpu_relax();
-
-	kgdb_info[cpu].debuggerinfo = NULL;
-	kgdb_info[cpu].task = NULL;
-
-	/* fix up hardware debug registers on local cpu */
-	if (arch_kgdb_ops.correct_hw_break)
-		arch_kgdb_ops.correct_hw_break();
-
-	/* Signal the primary CPU that we are done: */
-	atomic_set(&cpu_in_kgdb[cpu], 0);
-	touch_softlockup_watchdog_sync();
-	clocksource_touch_watchdog();
-	local_irq_restore(flags);
-}
-#endif

 /*
  * Some architectures need cache flushes when we set/clear a
···
 	return 1;
 }

-/*
- * kgdb_handle_exception() - main entry point from a kernel exception
- *
- * Locking hierarchy:
- *	interface locks, if any (begin_session)
- *	kgdb lock (kgdb_active)
- */
-int
-kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs)
+static int kgdb_cpu_enter(struct kgdb_state *ks, struct pt_regs *regs)
 {
-	struct kgdb_state kgdb_var;
-	struct kgdb_state *ks = &kgdb_var;
 	unsigned long flags;
 	int sstep_tries = 100;
 	int error = 0;
 	int i, cpu;
-
-	ks->cpu			= raw_smp_processor_id();
-	ks->ex_vector		= evector;
-	ks->signo		= signo;
-	ks->ex_vector		= evector;
-	ks->err_code		= ecode;
-	ks->kgdb_usethreadid	= 0;
-	ks->linux_regs		= regs;
-
-	if (kgdb_reenter_check(ks))
-		return 0; /* Ouch, double exception ! */
-
+	int trace_on = 0;
 acquirelock:
 	/*
 	 * Interrupts will be restored by the 'trap return' code, except when
···
 	 */
 	local_irq_save(flags);

-	cpu = raw_smp_processor_id();
+	cpu = ks->cpu;
+	kgdb_info[cpu].debuggerinfo = regs;
+	kgdb_info[cpu].task = current;
+	/*
+	 * Make sure the above info reaches the primary CPU before
+	 * our cpu_in_kgdb[] flag setting does:
+	 */
+	atomic_inc(&cpu_in_kgdb[cpu]);

 	/*
-	 * Acquire the kgdb_active lock:
+	 * CPU will loop if it is a slave or request to become a kgdb
+	 * master cpu and acquire the kgdb_active lock:
 	 */
-	while (atomic_cmpxchg(&kgdb_active, -1, cpu) != -1)
+	while (1) {
+		if (kgdb_info[cpu].exception_state & DCPU_WANT_MASTER) {
+			if (atomic_cmpxchg(&kgdb_active, -1, cpu) == cpu)
+				break;
+		} else if (kgdb_info[cpu].exception_state & DCPU_IS_SLAVE) {
+			if (!atomic_read(&passive_cpu_wait[cpu]))
+				goto return_normal;
+		} else {
+return_normal:
+			/* Return to normal operation by executing any
+			 * hw breakpoint fixup.
+			 */
+			if (arch_kgdb_ops.correct_hw_break)
+				arch_kgdb_ops.correct_hw_break();
+			if (trace_on)
+				tracing_on();
+			atomic_dec(&cpu_in_kgdb[cpu]);
+			touch_softlockup_watchdog_sync();
+			clocksource_touch_watchdog();
+			local_irq_restore(flags);
+			return 0;
+		}
 		cpu_relax();
+	}

 	/*
 	 * For single stepping, try to only enter on the processor
···
 	if (kgdb_io_ops->pre_exception)
 		kgdb_io_ops->pre_exception();

-	kgdb_info[ks->cpu].debuggerinfo = ks->linux_regs;
-	kgdb_info[ks->cpu].task = current;
-
 	kgdb_disable_hw_debug(ks->linux_regs);

 	/*
···
 	 */
 	if (!kgdb_single_step) {
 		for (i = 0; i < NR_CPUS; i++)
-			atomic_set(&passive_cpu_wait[i], 1);
+			atomic_inc(&passive_cpu_wait[i]);
 	}
-
-	/*
-	 * spin_lock code is good enough as a barrier so we don't
-	 * need one here:
-	 */
-	atomic_set(&cpu_in_kgdb[ks->cpu], 1);

 #ifdef CONFIG_SMP
 	/* Signal the other CPUs to enter kgdb_wait() */
···
 	kgdb_single_step = 0;
 	kgdb_contthread = current;
 	exception_level = 0;
+	trace_on = tracing_is_on();
+	if (trace_on)
+		tracing_off();

 	/* Talk to debugger with gdbserial protocol */
 	error = gdb_serial_stub(ks);
···
 	if (kgdb_io_ops->post_exception)
 		kgdb_io_ops->post_exception();

-	kgdb_info[ks->cpu].debuggerinfo = NULL;
-	kgdb_info[ks->cpu].task = NULL;
-	atomic_set(&cpu_in_kgdb[ks->cpu], 0);
+	atomic_dec(&cpu_in_kgdb[ks->cpu]);

 	if (!kgdb_single_step) {
 		for (i = NR_CPUS-1; i >= 0; i--)
-			atomic_set(&passive_cpu_wait[i], 0);
+			atomic_dec(&passive_cpu_wait[i]);
 		/*
 		 * Wait till all the CPUs have quit
 		 * from the debugger.
···
 		else
 			kgdb_sstep_pid = 0;
 	}
+	if (trace_on)
+		tracing_on();
 	/* Free kgdb_active */
 	atomic_set(&kgdb_active, -1);
 	touch_softlockup_watchdog_sync();
···
 	return error;
 }

+/*
+ * kgdb_handle_exception() - main entry point from a kernel exception
+ *
+ * Locking hierarchy:
+ *	interface locks, if any (begin_session)
+ *	kgdb lock (kgdb_active)
+ */
+int
+kgdb_handle_exception(int evector, int signo, int ecode, struct pt_regs *regs)
+{
+	struct kgdb_state kgdb_var;
+	struct kgdb_state *ks = &kgdb_var;
+	int ret;
+
+	ks->cpu			= raw_smp_processor_id();
+	ks->ex_vector		= evector;
+	ks->signo		= signo;
+	ks->err_code		= ecode;
+	ks->kgdb_usethreadid	= 0;
+	ks->linux_regs		= regs;
+
+	if (kgdb_reenter_check(ks))
+		return 0; /* Ouch, double exception ! */
+	kgdb_info[ks->cpu].exception_state |= DCPU_WANT_MASTER;
+	ret = kgdb_cpu_enter(ks, regs);
+	kgdb_info[ks->cpu].exception_state &= ~DCPU_WANT_MASTER;
+	return ret;
+}
+
 int kgdb_nmicallback(int cpu, void *regs)
 {
 #ifdef CONFIG_SMP
+	struct kgdb_state kgdb_var;
+	struct kgdb_state *ks = &kgdb_var;
+
+	memset(ks, 0, sizeof(struct kgdb_state));
+	ks->cpu			= cpu;
+	ks->linux_regs		= regs;
+
 	if (!atomic_read(&cpu_in_kgdb[cpu]) &&
-	    atomic_read(&kgdb_active) != cpu &&
-	    atomic_read(&cpu_in_kgdb[atomic_read(&kgdb_active)])) {
-		kgdb_wait((struct pt_regs *)regs);
+	    atomic_read(&kgdb_active) != -1 &&
+	    atomic_read(&kgdb_active) != cpu) {
+		kgdb_info[cpu].exception_state |= DCPU_IS_SLAVE;
+		kgdb_cpu_enter(ks, regs);
+		kgdb_info[cpu].exception_state &= ~DCPU_IS_SLAVE;
 		return 0;
 	}
 #endif
···
  */
 void kgdb_breakpoint(void)
 {
-	atomic_set(&kgdb_setting_breakpoint, 1);
+	atomic_inc(&kgdb_setting_breakpoint);
 	wmb(); /* Sync point before breakpoint */
 	arch_kgdb_breakpoint();
 	wmb(); /* Sync point after breakpoint */
-	atomic_set(&kgdb_setting_breakpoint, 0);
+	atomic_dec(&kgdb_setting_breakpoint);
 }
 EXPORT_SYMBOL_GPL(kgdb_breakpoint);
+14-8
kernel/perf_event.c
···
 	struct perf_event_context *ctx = task->perf_event_ctxp;
 	struct perf_event_context *next_ctx;
 	struct perf_event_context *parent;
-	struct pt_regs *regs;
 	int do_switch = 1;

-	regs = task_pt_regs(task);
-	perf_sw_event(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 1, regs, 0);
+	perf_sw_event(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, 1, NULL, 0);

 	if (likely(!ctx || !cpuctx->task_ctx))
 		return;
···
 	return NULL;
 }

-#ifdef CONFIG_EVENT_TRACING
 __weak
 void perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip, int skip)
 {
 }
-#endif
+

 /*
  * Output
···
 				   struct perf_task_event *task_event)
 {
 	struct perf_output_handle handle;
-	int size;
 	struct task_struct *task = task_event->task;
-	int ret;
+	unsigned long flags;
+	int size, ret;
+
+	/*
+	 * If this CPU attempts to acquire an rq lock held by a CPU spinning
+	 * in perf_output_lock() from interrupt context, it's game over.
+	 */
+	local_irq_save(flags);

 	size = task_event->event_id.header.size;
 	ret = perf_output_begin(&handle, event, size, 0, 0);

-	if (ret)
+	if (ret) {
+		local_irq_restore(flags);
 		return;
+	}

 	task_event->event_id.pid = perf_event_pid(event, task);
 	task_event->event_id.ppid = perf_event_pid(event, current);
···
 	perf_output_put(&handle, task_event->event_id);

 	perf_output_end(&handle);
+	local_irq_restore(flags);
 }

 static int perf_event_task_match(struct perf_event *event)
+2-3
kernel/power/process.c
···
 		printk(KERN_ERR "Freezing of tasks failed after %d.%02d seconds "
 		       "(%d tasks refusing to freeze):\n",
 		       elapsed_csecs / 100, elapsed_csecs % 100, todo);
-		show_state();
 		read_lock(&tasklist_lock);
 		do_each_thread(g, p) {
 			task_lock(p);
 			if (freezing(p) && !freezer_should_skip(p))
-				printk(KERN_ERR " %s\n", p->comm);
+				sched_show_task(p);
 			cancel_freezing(p);
 			task_unlock(p);
 		} while_each_thread(g, p);
···
 		if (nosig_only && should_send_signal(p))
 			continue;

-		if (cgroup_frozen(p))
+		if (cgroup_freezing_or_frozen(p))
 			continue;

 		thaw_process(p);
+23
kernel/rcupdate.c
···
 #include <linux/mutex.h>
 #include <linux/module.h>
 #include <linux/kernel_stat.h>
+#include <linux/hardirq.h>

 #ifdef CONFIG_DEBUG_LOCK_ALLOC
 static struct lock_class_key rcu_lock_key;
···
 int rcu_scheduler_active __read_mostly;
 EXPORT_SYMBOL_GPL(rcu_scheduler_active);
+
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+
+/**
+ * rcu_read_lock_bh_held - might we be in RCU-bh read-side critical section?
+ *
+ * Check for bottom half being disabled, which covers both the
+ * CONFIG_PROVE_RCU and the !CONFIG_PROVE_RCU cases. Note that if someone
+ * uses rcu_read_lock_bh(), but then later enables BH, lockdep (if enabled)
+ * will show the situation.
+ *
+ * Check debug_lockdep_rcu_enabled() to prevent false positives during boot.
+ */
+int rcu_read_lock_bh_held(void)
+{
+	if (!debug_lockdep_rcu_enabled())
+		return 1;
+	return in_softirq();
+}
+EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
+
+#endif /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */

 /*
  * This function is invoked towards the end of the scheduler's initialization
+37-7
kernel/resource.c
···
 }

 /**
+ * request_resource_conflict - request and reserve an I/O or memory resource
+ * @root: root resource descriptor
+ * @new: resource descriptor desired by caller
+ *
+ * Returns 0 for success, conflict resource on error.
+ */
+struct resource *request_resource_conflict(struct resource *root, struct resource *new)
+{
+	struct resource *conflict;
+
+	write_lock(&resource_lock);
+	conflict = __request_resource(root, new);
+	write_unlock(&resource_lock);
+	return conflict;
+}
+
+/**
  * request_resource - request and reserve an I/O or memory resource
  * @root: root resource descriptor
  * @new: resource descriptor desired by caller
···
 {
 	struct resource *conflict;

-	write_lock(&resource_lock);
-	conflict = __request_resource(root, new);
-	write_unlock(&resource_lock);
+	conflict = request_resource_conflict(root, new);
 	return conflict ? -EBUSY : 0;
 }
···
 }

 /**
- * insert_resource - Inserts a resource in the resource tree
+ * insert_resource_conflict - Inserts resource in the resource tree
  * @parent: parent of the new resource
  * @new: new resource to insert
  *
- * Returns 0 on success, -EBUSY if the resource can't be inserted.
+ * Returns 0 on success, conflict resource if the resource can't be inserted.
  *
- * This function is equivalent to request_resource when no conflict
+ * This function is equivalent to request_resource_conflict when no conflict
  * happens. If a conflict happens, and the conflicting resources
  * entirely fit within the range of the new resource, then the new
  * resource is inserted and the conflicting resources become children of
  * the new resource.
  */
-int insert_resource(struct resource *parent, struct resource *new)
+struct resource *insert_resource_conflict(struct resource *parent, struct resource *new)
 {
 	struct resource *conflict;

 	write_lock(&resource_lock);
 	conflict = __insert_resource(parent, new);
 	write_unlock(&resource_lock);
+	return conflict;
+}
+
+/**
+ * insert_resource - Inserts a resource in the resource tree
+ * @parent: parent of the new resource
+ * @new: new resource to insert
+ *
+ * Returns 0 on success, -EBUSY if the resource can't be inserted.
+ */
+int insert_resource(struct resource *parent, struct resource *new)
+{
+	struct resource *conflict;
+
+	conflict = insert_resource_conflict(parent, new);
 	return conflict ? -EBUSY : 0;
 }
+9-5
kernel/sched.c
···
 {
 	unsigned long flags;
 	struct rq *rq;
-	int cpu = get_cpu();
+	int cpu __maybe_unused = get_cpu();

 #ifdef CONFIG_SMP
 	/*
···
 	int ret;
 	cpumask_var_t mask;

-	if (len < cpumask_size())
+	if (len < nr_cpu_ids)
+		return -EINVAL;
+	if (len & (sizeof(unsigned long)-1))
 		return -EINVAL;

 	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
···

 	ret = sched_getaffinity(pid, mask);
 	if (ret == 0) {
-		if (copy_to_user(user_mask_ptr, mask, cpumask_size()))
+		size_t retlen = min_t(size_t, len, cpumask_size());
+
+		if (copy_to_user(user_mask_ptr, mask, retlen))
 			ret = -EFAULT;
 		else
-			ret = cpumask_size();
+			ret = retlen;
 	}
 	free_cpumask_var(mask);
···

 	get_task_struct(mt);
 	task_rq_unlock(rq, &flags);
-	wake_up_process(rq->migration_thread);
+	wake_up_process(mt);
 	put_task_struct(mt);
 	wait_for_completion(&req.done);
 	tlb_migrate_finish(p->mm);
+1-1
kernel/slow-work.c
···
 		goto cancelled;

 	/* the timer holds a reference whilst it is pending */
-	ret = work->ops->get_ref(work);
+	ret = slow_work_get_ref(work);
 	if (ret < 0)
 		goto cant_get_ref;
+2-2
kernel/softlockup.c
···
 	 * Wake up the high-prio watchdog task twice per
 	 * threshold timespan.
 	 */
-	if (now > touch_ts + softlockup_thresh/2)
+	if (time_after(now - softlockup_thresh/2, touch_ts))
 		wake_up_process(per_cpu(softlockup_watchdog, this_cpu));

 	/* Warn about unreasonable delays: */
-	if (now <= (touch_ts + softlockup_thresh))
+	if (time_before_eq(now - softlockup_thresh, touch_ts))
 		return;

 	per_cpu(softlockup_print_ts, this_cpu) = touch_ts;
+40-12
kernel/time/tick-oneshot.c
···

 #include "tick-internal.h"

+/* Limit min_delta to a jiffie */
+#define MIN_DELTA_LIMIT	(NSEC_PER_SEC / HZ)
+
+static int tick_increase_min_delta(struct clock_event_device *dev)
+{
+	/* Nothing to do if we already reached the limit */
+	if (dev->min_delta_ns >= MIN_DELTA_LIMIT)
+		return -ETIME;
+
+	if (dev->min_delta_ns < 5000)
+		dev->min_delta_ns = 5000;
+	else
+		dev->min_delta_ns += dev->min_delta_ns >> 1;
+
+	if (dev->min_delta_ns > MIN_DELTA_LIMIT)
+		dev->min_delta_ns = MIN_DELTA_LIMIT;
+
+	printk(KERN_WARNING "CE: %s increased min_delta_ns to %llu nsec\n",
+	       dev->name ? dev->name : "?",
+	       (unsigned long long) dev->min_delta_ns);
+	return 0;
+}
+
 /**
  * tick_program_event internal worker function
  */
···
 		if (!ret || !force)
 			return ret;

+		dev->retries++;
 		/*
-		 * We tried 2 times to program the device with the given
-		 * min_delta_ns. If that's not working then we double it
+		 * We tried 3 times to program the device with the given
+		 * min_delta_ns. If that's not working then we increase it
 		 * and emit a warning.
 		 */
 		if (++i > 2) {
 			/* Increase the min. delta and try again */
-			if (!dev->min_delta_ns)
-				dev->min_delta_ns = 5000;
-			else
-				dev->min_delta_ns += dev->min_delta_ns >> 1;
-
-			printk(KERN_WARNING
-			       "CE: %s increasing min_delta_ns to %llu nsec\n",
-			       dev->name ? dev->name : "?",
-			       (unsigned long long) dev->min_delta_ns << 1);
-
+			if (tick_increase_min_delta(dev)) {
+				/*
+				 * Get out of the loop if min_delta_ns
+				 * hit the limit already. That's
+				 * better than staying here forever.
+				 *
+				 * We clear next_event so we have a
+				 * chance that the box survives.
+				 */
+				printk(KERN_WARNING
+				       "CE: Reprogramming failure. Giving up\n");
+				dev->next_event.tv64 = KTIME_MAX;
+				return -ETIME;
+			}
 			i = 0;
 		}
+2-1
kernel/time/timekeeping.c
···
 	shift = min(shift, maxshift);
 	while (offset >= timekeeper.cycle_interval) {
 		offset = logarithmic_accumulation(offset, shift);
-		shift--;
+		if (offset < timekeeper.cycle_interval<<shift)
+			shift--;
 	}

 	/* correct the clock when NTP error is too big */
+2-1
kernel/time/timer_list.c
···
 	SEQ_printf(m, " event_handler:  ");
 	print_name_offset(m, dev->event_handler);
 	SEQ_printf(m, "\n");
+	SEQ_printf(m, " retries:        %lu\n", dev->retries);
 }

 static void timer_list_show_tickdevices(struct seq_file *m)
···
 	u64 now = ktime_to_ns(ktime_get());
 	int cpu;

-	SEQ_printf(m, "Timer List Version: v0.5\n");
+	SEQ_printf(m, "Timer List Version: v0.6\n");
 	SEQ_printf(m, "HRTIMER_MAX_CLOCK_BASES: %d\n", HRTIMER_MAX_CLOCK_BASES);
 	SEQ_printf(m, "now at %Ld nsecs\n", (unsigned long long)now);
+1
kernel/timer.c
···
 	if (base->running_timer == timer)
 		goto out;

+	timer_stats_timer_clear_start_info(timer);
 	ret = 0;
 	if (timer_pending(timer)) {
 		detach_timer(timer, 1);
+16-6
kernel/trace/ring_buffer.c
···
 #define RB_MAX_SMALL_DATA	(RB_ALIGNMENT * RINGBUF_TYPE_DATA_TYPE_LEN_MAX)
 #define RB_EVNT_MIN_SIZE	8U	/* two 32bit words */

+#if !defined(CONFIG_64BIT) || defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
+# define RB_FORCE_8BYTE_ALIGNMENT	0
+# define RB_ARCH_ALIGNMENT		RB_ALIGNMENT
+#else
+# define RB_FORCE_8BYTE_ALIGNMENT	1
+# define RB_ARCH_ALIGNMENT		8U
+#endif
+
 /* define RINGBUF_TYPE_DATA for 'case RINGBUF_TYPE_DATA:' */
 #define RINGBUF_TYPE_DATA 0 ... RINGBUF_TYPE_DATA_TYPE_LEN_MAX

···
 	for (i = 0; i < nr_pages; i++) {
 		if (RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages)))
-			return;
+			goto out;
 		p = cpu_buffer->pages->next;
 		bpage = list_entry(p, struct buffer_page, list);
 		list_del_init(&bpage->list);
 		free_buffer_page(bpage);
 	}
 	if (RB_WARN_ON(cpu_buffer, list_empty(cpu_buffer->pages)))
-		return;
+		goto out;

 	rb_reset_cpu(cpu_buffer);
 	rb_check_pages(cpu_buffer);

+out:
 	spin_unlock_irq(&cpu_buffer->reader_lock);
 }
···
 	for (i = 0; i < nr_pages; i++) {
 		if (RB_WARN_ON(cpu_buffer, list_empty(pages)))
-			return;
+			goto out;
 		p = pages->next;
 		bpage = list_entry(p, struct buffer_page, list);
 		list_del_init(&bpage->list);
···
 	rb_reset_cpu(cpu_buffer);
 	rb_check_pages(cpu_buffer);

+out:
 	spin_unlock_irq(&cpu_buffer->reader_lock);
 }
···
 	case 0:
 		length -= RB_EVNT_HDR_SIZE;
-		if (length > RB_MAX_SMALL_DATA)
+		if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT)
 			event->array[0] = length;
 		else
 			event->type_len = DIV_ROUND_UP(length, RB_ALIGNMENT);
···
 	if (!length)
 		length = 1;

-	if (length > RB_MAX_SMALL_DATA)
+	if (length > RB_MAX_SMALL_DATA || RB_FORCE_8BYTE_ALIGNMENT)
 		length += sizeof(event.array[0]);

 	length += RB_EVNT_HDR_SIZE;
-	length = ALIGN(length, RB_ALIGNMENT);
+	length = ALIGN(length, RB_ARCH_ALIGNMENT);

 	return length;
 }
+2-2
kernel/trace/trace_clock.c
···
 	int this_cpu;
 	u64 now;

-	raw_local_irq_save(flags);
+	local_irq_save(flags);

 	this_cpu = raw_smp_processor_id();
 	now = cpu_clock(this_cpu);
···
 	arch_spin_unlock(&trace_clock_struct.lock);

  out:
-	raw_local_irq_restore(flags);
+	local_irq_restore(flags);

 	return now;
 }
+9-2
kernel/trace/trace_event_perf.c
···
 static char *perf_trace_buf;
 static char *perf_trace_buf_nmi;

-typedef typeof(char [PERF_MAX_TRACE_SIZE]) perf_trace_t ;
+/*
+ * Force it to be aligned to unsigned long to avoid misaligned access
+ * surprises
+ */
+typedef typeof(unsigned long [PERF_MAX_TRACE_SIZE / sizeof(unsigned long)])
+	perf_trace_t;

 /* Count the events in use (per event id, not per instance) */
 static int total_ref_count;
···
 	char *trace_buf, *raw_data;
 	int pc, cpu;

+	BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(unsigned long));
+
 	pc = preempt_count();

 	/* Protect the per cpu buffer, begin the rcu read side */
···
 	raw_data = per_cpu_ptr(trace_buf, cpu);

 	/* zero the dead bytes from align to not leak stack to user */
-	*(u64 *)(&raw_data[size - sizeof(u64)]) = 0ULL;
+	memset(&raw_data[size - sizeof(u64)], 0, sizeof(u64));

 	entry = (struct trace_entry *)raw_data;
 	tracing_generic_entry_update(entry, *irq_flags, pc);
-13
mm/bootmem.c
···
 	end_aligned = end & ~(BITS_PER_LONG - 1);

 	if (end_aligned <= start_aligned) {
-#if 1
-		printk(KERN_DEBUG " %lx - %lx\n", start, end);
-#endif
 		for (i = start; i < end; i++)
 			__free_pages_bootmem(pfn_to_page(i), 0);

 		return;
 	}

-#if 1
-	printk(KERN_DEBUG " %lx %lx - %lx %lx\n",
-		start, start_aligned, end_aligned, end);
-#endif
 	for (i = start; i < start_aligned; i++)
 		__free_pages_bootmem(pfn_to_page(i), 0);
···
 {
 #ifdef CONFIG_NO_BOOTMEM
 	free_early(physaddr, physaddr + size);
-#if 0
-	printk(KERN_DEBUG "free %lx %lx\n", physaddr, size);
-#endif
 #else
 	unsigned long start, end;
···
 {
 #ifdef CONFIG_NO_BOOTMEM
 	free_early(addr, addr + size);
-#if 0
-	printk(KERN_DEBUG "free %lx %lx\n", addr, size);
-#endif
 #else
 	unsigned long start, end;
+1-1
net/ipv6/addrconf.c
···
 	hlist_for_each_entry_rcu(dev, node, head, index_hlist) {
 		if (idx < s_idx)
 			goto cont;
-		if (idx > s_idx)
+		if (h > s_h || idx > s_idx)
 			s_ip_idx = 0;
 		ip_idx = 0;
 		if ((idev = __in6_dev_get(dev)) == NULL)
+7-4
net/ipv6/ip6mr.c
···
 	int ct;
 	struct rtnexthop *nhp;
 	struct net *net = mfc6_net(c);
-	struct net_device *dev = net->ipv6.vif6_table[c->mf6c_parent].dev;
 	u8 *b = skb_tail_pointer(skb);
 	struct rtattr *mp_head;

-	if (dev)
-		RTA_PUT(skb, RTA_IIF, 4, &dev->ifindex);
+	/* If cache is unresolved, don't try to parse IIF and OIF */
+	if (c->mf6c_parent > MAXMIFS)
+		return -ENOENT;
+
+	if (MIF_EXISTS(net, c->mf6c_parent))
+		RTA_PUT(skb, RTA_IIF, 4, &net->ipv6.vif6_table[c->mf6c_parent].dev->ifindex);

 	mp_head = (struct rtattr *)skb_put(skb, RTA_LENGTH(0));

 	for (ct = c->mfc_un.res.minvif; ct < c->mfc_un.res.maxvif; ct++) {
-		if (c->mfc_un.res.ttls[ct] < 255) {
+		if (MIF_EXISTS(net, ct) && c->mfc_un.res.ttls[ct] < 255) {
 			if (skb_tailroom(skb) < RTA_ALIGN(RTA_ALIGN(sizeof(*nhp)) + 4))
 				goto rtattr_failure;
 			nhp = (struct rtnexthop *)skb_put(skb, RTA_ALIGN(sizeof(*nhp)));
+1-1
net/ipv6/netfilter/ip6table_raw.c
···
 	.valid_hooks = RAW_VALID_HOOKS,
 	.me = THIS_MODULE,
 	.af = NFPROTO_IPV6,
-	.priority = NF_IP6_PRI_FIRST,
+	.priority = NF_IP6_PRI_RAW,
 };

 /* The work comes in here from netfilter.c. */
+3-1
net/netfilter/xt_hashlimit.c
···
 	case 64 ... 95:
 		i[2] = maskl(i[2], p - 64);
 		i[3] = 0;
+		break;
 	case 96 ... 127:
 		i[3] = maskl(i[3], p - 96);
 		break;
···
 	struct xt_hashlimit_htable *htable = s->private;
 	unsigned int *bucket = (unsigned int *)v;

-	kfree(bucket);
+	if (!IS_ERR(bucket))
+		kfree(bucket);
 	spin_unlock_bh(&htable->lock);
 }
+1-1
net/netfilter/xt_recent.c
···
 	for (i = 0; i < e->nstamps; i++) {
 		if (info->seconds && time_after(time, e->stamps[i]))
 			continue;
-		if (info->hit_count && ++hits >= info->hit_count) {
+		if (!info->hit_count || ++hits >= info->hit_count) {
 			ret = !ret;
 			break;
 		}
+4-1
net/sched/Kconfig
···
 	  module will be called cls_flow.

 config NET_CLS_CGROUP
-	bool "Control Group Classifier"
+	tristate "Control Group Classifier"
 	select NET_CLS
 	depends on CGROUPS
 	---help---
 	  Say Y here if you want to classify packets based on the control
 	  cgroup of their process.

+	  To compile this code as a module, choose M here: the
+	  module will be called cls_cgroup.
+
 config NET_EMATCH
 	bool "Extended Matches"
+1-1
tools/perf/util/probe-event.c
···

 	/* Parse probe point */
 	parse_perf_probe_probepoint(argv[0], pp);
-	if (pp->file || pp->line)
+	if (pp->file || pp->line || pp->lazy_line)
 		*need_dwarf = true;

 	/* Copy arguments and ensure return probe has no C argument */
+7-11
tools/perf/util/probe-finder.c
···
 		die("%u exceeds max register number.", regn);

 	if (deref)
-		ret = snprintf(pf->buf, pf->len, " %s=+%ju(%s)",
-			       pf->var, (uintmax_t)offs, regs);
+		ret = snprintf(pf->buf, pf->len, " %s=%+jd(%s)",
+			       pf->var, (intmax_t)offs, regs);
 	else
 		ret = snprintf(pf->buf, pf->len, " %s=%s", pf->var, regs);
 	DIE_IF(ret < 0);
···
 	if (dwarf_attr(vr_die, DW_AT_location, &attr) == NULL)
 		goto error;
 	/* TODO: handle more than 1 exprs */
-	ret = dwarf_getlocation_addr(&attr, (pf->addr - pf->cu_base),
-				     &expr, &nexpr, 1);
+	ret = dwarf_getlocation_addr(&attr, pf->addr, &expr, &nexpr, 1);
 	if (ret <= 0 || nexpr == 0)
 		goto error;
···

 	/* Get the frame base attribute/ops */
 	dwarf_attr(sp_die, DW_AT_frame_base, &fb_attr);
-	ret = dwarf_getlocation_addr(&fb_attr, (pf->addr - pf->cu_base),
-				     &pf->fb_ops, &nops, 1);
+	ret = dwarf_getlocation_addr(&fb_attr, pf->addr, &pf->fb_ops, &nops, 1);
 	if (ret <= 0 || nops == 0)
 		pf->fb_ops = NULL;
···

 	/* *pf->fb_ops will be cached in libdw. Don't free it. */
 	pf->fb_ops = NULL;
+
+	if (pp->found == MAX_PROBES)
+		die("Too many( > %d) probe point found.\n", MAX_PROBES);

 	pp->probes[pp->found] = strdup(tmp);
 	pp->found++;
···
 int find_probe_point(int fd, struct probe_point *pp)
 {
 	struct probe_finder pf = {.pp = pp};
-	int ret;
 	Dwarf_Off off, noff;
 	size_t cuhl;
 	Dwarf_Die *diep;
···
 		pf.fname = NULL;

 	if (!pp->file || pf.fname) {
-		/* Save CU base address (for frame_base) */
-		ret = dwarf_lowpc(&pf.cu_die, &pf.cu_base);
-		if (ret != 0)
-			pf.cu_base = 0;
 		if (pp->function)
 			find_probe_point_by_func(&pf);
 		else if (pp->lazy_line)
-1
tools/perf/util/probe-finder.h
···

 	/* For variable searching */
 	Dwarf_Op	*fb_ops;	/* Frame base attribute */
-	Dwarf_Addr	cu_base;	/* Current CU base address */
 	const char	*var;		/* Current variable name */
 	char		*buf;		/* Current output buffer */
 	int		len;		/* Length of output buffer */