Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v3.5-rc6' into next/dt

New pull requests are based on Linux 3.5-rc6

+1630 -1005
-21
Documentation/ABI/testing/sysfs-block-rssd
···
1 -	What:		/sys/block/rssd*/registers
2 -	Date:		March 2012
3 -	KernelVersion:	3.3
4 -	Contact:	Asai Thambi S P <asamymuthupa@micron.com>
5 -	Description:	This is a read-only file. Dumps below driver information and
6 -			hardware registers.
7 -			    - S ACTive
8 -			    - Command Issue
9 -			    - Completed
10 -			    - PORT IRQ STAT
11 -			    - HOST IRQ STAT
12 -			    - Allocated
13 -			    - Commands in Q
14 -
15 1	What:		/sys/block/rssd*/status
16 2	Date:		April 2012
17 3	KernelVersion:	3.4
18 4	Contact:	Asai Thambi S P <asamymuthupa@micron.com>
19 5	Description:	This is a read-only file. Indicates the status of the device.
20 -
21 -	What:		/sys/block/rssd*/flags
22 -	Date:		May 2012
23 -	KernelVersion:	3.5
24 -	Contact:	Asai Thambi S P <asamymuthupa@micron.com>
25 -	Description:	This is a read-only file. Dumps the flags in port and driver
26 -			data structure
+45 -84
Documentation/device-mapper/verity.txt
···
7 7
8 8	Construction Parameters
9 9	=======================
10 -	    <version> <dev> <hash_dev> <hash_start>
10 +	    <version> <dev> <hash_dev>
11 11	    <data_block_size> <hash_block_size>
12 12	    <num_data_blocks> <hash_start_block>
13 13	    <algorithm> <digest> <salt>
14 14
15 15	<version>
16 -	    This is the version number of the on-disk format.
16 +	    This is the type of the on-disk hash format.
17 17
18 18	    0 is the original format used in the Chromium OS.
19 -	    The salt is appended when hashing, digests are stored continuously and
20 -	    the rest of the block is padded with zeros.
19 +	      The salt is appended when hashing, digests are stored continuously and
20 +	      the rest of the block is padded with zeros.
21 21
22 22	    1 is the current format that should be used for new devices.
23 -	    The salt is prepended when hashing and each digest is
24 -	    padded with zeros to the power of two.
23 +	      The salt is prepended when hashing and each digest is
24 +	      padded with zeros to the power of two.
25 25
26 26	<dev>
27 -	    This is the device containing the data the integrity of which needs to be
27 +	    This is the device containing data, the integrity of which needs to be
28 28	    checked. It may be specified as a path, like /dev/sdaX, or a device number,
29 29	    <major>:<minor>.
30 30
31 31	<hash_dev>
32 -	    This is the device that that supplies the hash tree data. It may be
32 +	    This is the device that supplies the hash tree data. It may be
33 33	    specified similarly to the device path and may be the same device. If the
34 -	    same device is used, the hash_start should be outside of the dm-verity
35 -	    configured device size.
34 +	    same device is used, the hash_start should be outside the configured
35 +	    dm-verity device.
36 36
37 37	<data_block_size>
38 -	    The block size on a data device. Each block corresponds to one digest on
39 -	    the hash device.
38 +	    The block size on a data device in bytes.
39 +	    Each block corresponds to one digest on the hash device.
40 40
41 41	<hash_block_size>
42 -	    The size of a hash block.
42 +	    The size of a hash block in bytes.
43 43
44 44	<num_data_blocks>
45 45	    The number of data blocks on the data device. Additional blocks are
···
65 65	Theory of operation
66 66	===================
67 67
68 -	dm-verity is meant to be setup as part of a verified boot path. This
68 +	dm-verity is meant to be set up as part of a verified boot path. This
69 69	may be anything ranging from a boot using tboot or trustedgrub to just
70 70	booting from a known-good device (like a USB drive or CD).
71 71
···
73 73	has been authenticated in some way (cryptographic signatures, etc).
74 74	After instantiation, all hashes will be verified on-demand during
75 75	disk access. If they cannot be verified up to the root node of the
76 -	tree, the root hash, then the I/O will fail. This should identify
76 +	tree, the root hash, then the I/O will fail. This should detect
77 77	tampering with any data on the device and the hash data.
78 78
79 79	Cryptographic hashes are used to assert the integrity of the device on a
80 -	per-block basis. This allows for a lightweight hash computation on first read
81 -	into the page cache. Block hashes are stored linearly-aligned to the nearest
82 -	block the size of a page.
80 +	per-block basis. This allows for a lightweight hash computation on first read
81 +	into the page cache. Block hashes are stored linearly, aligned to the nearest
82 +	block size.
83 83
84 84	Hash Tree
85 85	---------
86 86
87 87	Each node in the tree is a cryptographic hash. If it is a leaf node, the hash
88 -	is of some block data on disk. If it is an intermediary node, then the hash is
89 -	of a number of child nodes.
88 +	of some data block on disk is calculated. If it is an intermediary node,
89 +	the hash of a number of child nodes is calculated.
90 90
91 91	Each entry in the tree is a collection of neighboring nodes that fit in one
92 92	block. The number is determined based on block_size and the size of the
···
110 110	On-disk format
111 111	==============
112 112
113 -	Below is the recommended on-disk format. The verity kernel code does not
114 -	read the on-disk header. It only reads the hash blocks which directly
115 -	follow the header. It is expected that a user-space tool will verify the
116 -	integrity of the verity_header and then call dmsetup with the correct
117 -	parameters. Alternatively, the header can be omitted and the dmsetup
118 -	parameters can be passed via the kernel command-line in a rooted chain
119 -	of trust where the command-line is verified.
113 +	The verity kernel code does not read the verity metadata on-disk header.
114 +	It only reads the hash blocks which directly follow the header.
115 +	It is expected that a user-space tool will verify the integrity of the
116 +	verity header.
120 117
121 -	The on-disk format is especially useful in cases where the hash blocks
122 -	are on a separate partition. The magic number allows easy identification
123 -	of the partition contents. Alternatively, the hash blocks can be stored
124 -	in the same partition as the data to be verified. In such a configuration
125 -	the filesystem on the partition would be sized a little smaller than
126 -	the full-partition, leaving room for the hash blocks.
127 -
128 -	struct superblock {
129 -		uint8_t signature[8]
130 -			"verity\0\0";
131 -
132 -		uint8_t version;
133 -			1 - current format
134 -
135 -		uint8_t data_block_bits;
136 -			log2(data block size)
137 -
138 -		uint8_t hash_block_bits;
139 -			log2(hash block size)
140 -
141 -		uint8_t pad1[1];
142 -			zero padding
143 -
144 -		uint16_t salt_size;
145 -			big-endian salt size
146 -
147 -		uint8_t pad2[2];
148 -			zero padding
149 -
150 -		uint32_t data_blocks_hi;
151 -			big-endian high 32 bits of the 64-bit number of data blocks
152 -
153 -		uint32_t data_blocks_lo;
154 -			big-endian low 32 bits of the 64-bit number of data blocks
155 -
156 -		uint8_t algorithm[16];
157 -			cryptographic algorithm
158 -
159 -		uint8_t salt[384];
160 -			salt (the salt size is specified above)
161 -
162 -		uint8_t pad3[88];
163 -			zero padding to 512-byte boundary
164 -	}
118 +	Alternatively, the header can be omitted and the dmsetup parameters can
119 +	be passed via the kernel command-line in a rooted chain of trust where
120 +	the command-line is verified.
165 121
166 122	Directly following the header (and with sector number padded to the next hash
167 123	block boundary) are the hash blocks which are stored a depth at a time
168 124	(starting from the root), sorted in order of increasing index.
125 +
126 +	The full specification of kernel parameters and on-disk metadata format
127 +	is available at the cryptsetup project's wiki page
128 +	http://code.google.com/p/cryptsetup/wiki/DMVerity
169 129
170 130	Status
171 131	======
···
134 174
135 175	Example
136 176	=======
137 -
138 -	Setup a device:
139 -	  dmsetup create vroot --table \
140 -	    "0 2097152 "\
141 -	    "verity 1 /dev/sda1 /dev/sda2 4096 4096 2097152 1 "\
177 +	Set up a device:
178 +	# dmsetup create vroot --readonly --table \
179 +	  "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 "\
142 180	  "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
143 181	  "1234000000000000000000000000000000000000000000000000000000000000"
144 182
145 183	A command line tool veritysetup is available to compute or verify
146 -	the hash tree or activate the kernel driver. This is available from
147 -	the LVM2 upstream repository and may be supplied as a package called
148 -	device-mapper-verity-tools:
149 -	  git://sources.redhat.com/git/lvm2
150 -	  http://sourceware.org/git/?p=lvm2.git
151 -	  http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/verity?cvsroot=lvm2
184 +	the hash tree or activate the kernel device. This is available from
185 +	the cryptsetup upstream repository http://code.google.com/p/cryptsetup/
186 +	(as a libcryptsetup extension).
152 187
153 -	veritysetup -a vroot /dev/sda1 /dev/sda2 \
154 -	  4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
188 +	Create hash on the device:
189 +	# veritysetup format /dev/sda1 /dev/sda2
190 +	...
191 +	Root hash: 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
192 +
193 +	Activate the device:
194 +	# veritysetup create vroot /dev/sda1 /dev/sda2 \
195 +	  4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
+1
Documentation/devicetree/bindings/input/fsl-mma8450.txt
···
2 2
3 3	Required properties:
4 4	- compatible : "fsl,mma8450".
5 +	- reg: the I2C address of MMA8450
5 6
6 7	Example:
7 8
+2 -2
Documentation/devicetree/bindings/mfd/mc13xxx.txt
···
46 46
47 47	ecspi@70010000 { /* ECSPI1 */
48 48		fsl,spi-num-chipselects = <2>;
49 -		cs-gpios = <&gpio3 24 0>, /* GPIO4_24 */
50 -			   <&gpio3 25 0>; /* GPIO4_25 */
49 +		cs-gpios = <&gpio4 24 0>, /* GPIO4_24 */
50 +			   <&gpio4 25 0>; /* GPIO4_25 */
51 51		status = "okay";
52 52
53 53		pmic: mc13892@0 {
+2 -2
Documentation/devicetree/bindings/mmc/fsl-imx-esdhc.txt
···
29 29		compatible = "fsl,imx51-esdhc";
30 30		reg = <0x70008000 0x4000>;
31 31		interrupts = <2>;
32 -		cd-gpios = <&gpio0 6 0>; /* GPIO1_6 */
33 -		wp-gpios = <&gpio0 5 0>; /* GPIO1_5 */
32 +		cd-gpios = <&gpio1 6 0>; /* GPIO1_6 */
33 +		wp-gpios = <&gpio1 5 0>; /* GPIO1_5 */
34 34	};
+1 -1
Documentation/devicetree/bindings/net/fsl-fec.txt
···
19 19		reg = <0x83fec000 0x4000>;
20 20		interrupts = <87>;
21 21		phy-mode = "mii";
22 -		phy-reset-gpios = <&gpio1 14 0>; /* GPIO2_14 */
22 +		phy-reset-gpios = <&gpio2 14 0>; /* GPIO2_14 */
23 23		local-mac-address = [00 04 9F 01 1B B9];
24 24	};
+2 -2
Documentation/devicetree/bindings/spi/fsl-imx-cspi.txt
···
17 17		reg = <0x70010000 0x4000>;
18 18		interrupts = <36>;
19 19		fsl,spi-num-chipselects = <2>;
20 -		cs-gpios = <&gpio3 24 0>, /* GPIO4_24 */
21 -			   <&gpio3 25 0>; /* GPIO4_25 */
20 +		cs-gpios = <&gpio3 24 0>, /* GPIO3_24 */
21 +			   <&gpio3 25 0>; /* GPIO3_25 */
22 22	};
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
3 3	This isn't an exhaustive list, but you should add new prefixes to it before
4 4	using them to avoid name-space collisions.
5 5
6 +	ad	Avionic Design GmbH
6 7	adi	Analog Devices, Inc.
7 8	amcc	Applied Micro Circuits Corporation (APM, formally AMCC)
8 9	apm	Applied Micro Circuits Corporation (APM)
+57
Documentation/prctl/no_new_privs.txt
···
1 +	The execve system call can grant a newly-started program privileges that
2 +	its parent did not have. The most obvious examples are setuid/setgid
3 +	programs and file capabilities. To prevent the parent program from
4 +	gaining these privileges as well, the kernel and user code must be
5 +	careful to prevent the parent from doing anything that could subvert the
6 +	child. For example:
7 +
8 +	 - The dynamic loader handles LD_* environment variables differently if
9 +	   a program is setuid.
10 +
11 +	 - chroot is disallowed to unprivileged processes, since it would allow
12 +	   /etc/passwd to be replaced from the point of view of a process that
13 +	   inherited chroot.
14 +
15 +	 - The exec code has special handling for ptrace.
16 +
17 +	These are all ad-hoc fixes. The no_new_privs bit (since Linux 3.5) is a
18 +	new, generic mechanism to make it safe for a process to modify its
19 +	execution environment in a manner that persists across execve. Any task
20 +	can set no_new_privs. Once the bit is set, it is inherited across fork,
21 +	clone, and execve and cannot be unset. With no_new_privs set, execve
22 +	promises not to grant the privilege to do anything that could not have
23 +	been done without the execve call. For example, the setuid and setgid
24 +	bits will no longer change the uid or gid; file capabilities will not
25 +	add to the permitted set, and LSMs will not relax constraints after
26 +	execve.
27 +
28 +	To set no_new_privs, use prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0).
29 +
30 +	Be careful, though: LSMs might also not tighten constraints on exec
31 +	in no_new_privs mode. (This means that setting up a general-purpose
32 +	service launcher to set no_new_privs before execing daemons may
33 +	interfere with LSM-based sandboxing.)
34 +
35 +	Note that no_new_privs does not prevent privilege changes that do not
36 +	involve execve. An appropriately privileged task can still call
37 +	setuid(2) and receive SCM_RIGHTS datagrams.
38 +
39 +	There are two main use cases for no_new_privs so far:
40 +
41 +	 - Filters installed for the seccomp mode 2 sandbox persist across
42 +	   execve and can change the behavior of newly-executed programs.
43 +	   Unprivileged users are therefore only allowed to install such filters
44 +	   if no_new_privs is set.
45 +
46 +	 - By itself, no_new_privs can be used to reduce the attack surface
47 +	   available to an unprivileged user. If everything running with a
48 +	   given uid has no_new_privs set, then that uid will be unable to
49 +	   escalate its privileges by directly attacking setuid, setgid, and
50 +	   fcap-using binaries; it will need to compromise something without the
51 +	   no_new_privs bit set first.
52 +
53 +	In the future, other potentially dangerous kernel features could become
54 +	available to unprivileged tasks if no_new_privs is set. In principle,
55 +	several options to unshare(2) and clone(2) would be safe when
56 +	no_new_privs is set, and no_new_privs + chroot is considerably less
57 +	dangerous than chroot by itself.
+17
Documentation/virtual/kvm/api.txt
···
1930 1930	PTE's RPN field (ie, it needs to be shifted left by 12 to OR it
1931 1931	into the hash PTE second double word).
1932 1932
1933 +	4.75 KVM_IRQFD
1934 +
1935 +	Capability: KVM_CAP_IRQFD
1936 +	Architectures: x86
1937 +	Type: vm ioctl
1938 +	Parameters: struct kvm_irqfd (in)
1939 +	Returns: 0 on success, -1 on error
1940 +
1941 +	Allows setting an eventfd to directly trigger a guest interrupt.
1942 +	kvm_irqfd.fd specifies the file descriptor to use as the eventfd and
1943 +	kvm_irqfd.gsi specifies the irqchip pin toggled by this event. When
1944 +	an event is triggered on the eventfd, an interrupt is injected into
1945 +	the guest using the specified gsi pin. The irqfd is removed using
1946 +	the KVM_IRQFD_FLAG_DEASSIGN flag, specifying both kvm_irqfd.fd
1947 +	and kvm_irqfd.gsi.
1948 +
1949 +
1933 1950	5. The kvm_run structure
1934 1951	------------------------
1935 1952
+2 -2
MAINTAINERS
···
4654 4654	L:	coreteam@netfilter.org
4655 4655	W:	http://www.netfilter.org/
4656 4656	W:	http://www.iptables.org/
4657 -	T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-2.6.git
4658 -	T:	git git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf-next-2.6.git
4657 +	T:	git git://1984.lsi.us.es/nf
4658 +	T:	git git://1984.lsi.us.es/nf-next
4659 4659	S:	Supported
4660 4660	F:	include/linux/netfilter*
4661 4661	F:	include/linux/netfilter/
+1 -1
Makefile
···
1 1	VERSION = 3
2 2	PATCHLEVEL = 5
3 3	SUBLEVEL = 0
4 -	EXTRAVERSION = -rc5
4 +	EXTRAVERSION = -rc6
5 5	NAME = Saber-toothed Squirrel
6 6
7 7	# *DOCUMENTATION*
+1 -1
arch/arm/include/asm/atomic.h
···
243 243
244 244	#define ATOMIC64_INIT(i)	{ (i) }
245 245
246 -	static inline u64 atomic64_read(atomic64_t *v)
246 +	static inline u64 atomic64_read(const atomic64_t *v)
247 247	{
248 248		u64 result;
249 249
+9 -9
arch/arm/include/asm/domain.h
···
60 60	#ifndef __ASSEMBLY__
61 61
62 62	#ifdef CONFIG_CPU_USE_DOMAINS
63 -	#define set_domain(x)					\
64 -		do {						\
65 -		__asm__ __volatile__(				\
66 -		"mcr	p15, 0, %0, c3, c0	@ set domain"	\
67 -		  : : "r" (x));					\
68 -		isb();						\
69 -		} while (0)
63 +	static inline void set_domain(unsigned val)
64 +	{
65 +		asm volatile(
66 +		"mcr	p15, 0, %0, c3, c0	@ set domain"
67 +		  : : "r" (val));
68 +		isb();
69 +	}
70 70
71 71	#define modify_domain(dom,type)					\
72 72		do {							\
···
78 78		} while (0)
79 79
80 80	#else
81 -	#define set_domain(x)		do { } while (0)
82 -	#define modify_domain(dom,type)	do { } while (0)
81 +	static inline void set_domain(unsigned val) { }
82 +	static inline void modify_domain(unsigned dom, unsigned type) { }
83 83	#endif
84 84
85 85	/*
+1 -4
arch/arm/include/asm/thread_info.h
···
148 148	#define TIF_NOTIFY_RESUME	2	/* callback before returning to user */
149 149	#define TIF_SYSCALL_TRACE	8
150 150	#define TIF_SYSCALL_AUDIT	9
151 -	#define TIF_SYSCALL_RESTARTSYS	10
152 151	#define TIF_POLLING_NRFLAG	16
153 152	#define TIF_USING_IWMMXT	17
154 153	#define TIF_MEMDIE		18	/* is terminating due to OOM killer */
···
163 164	#define _TIF_POLLING_NRFLAG	(1 << TIF_POLLING_NRFLAG)
164 165	#define _TIF_USING_IWMMXT	(1 << TIF_USING_IWMMXT)
165 166	#define _TIF_SECCOMP		(1 << TIF_SECCOMP)
166 -	#define _TIF_SYSCALL_RESTARTSYS	(1 << TIF_SYSCALL_RESTARTSYS)
167 167
168 168	/* Checks for any syscall work in entry-common.S */
169 -	#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
170 -				   _TIF_SYSCALL_RESTARTSYS)
169 +	#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT)
171 170
172 171	/*
173 172	 * Change these and you break ASM code in entry-common.S
+2 -2
arch/arm/kernel/kprobes-test-arm.c
···
187 187	TEST_BF_R ("mov	pc, r",0,2f,"")
188 188	TEST_BF_RR("mov	pc, r",0,2f,", asl r",1,0,"")
189 189	TEST_BB(   "sub	pc, pc, #1b-2b+8")
190 -	#if __LINUX_ARM_ARCH__ >= 6
191 -	TEST_BB(   "sub	pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before ARMv6 */
190 +	#if __LINUX_ARM_ARCH__ == 6 && !defined(CONFIG_CPU_V7)
191 +	TEST_BB(   "sub	pc, pc, #1b-2b+8-2") /* UNPREDICTABLE before and after ARMv6 */
192 192	#endif
193 193	TEST_BB_R( "sub	pc, pc, r",14, 1f-2f+8,"")
194 194	TEST_BB_R( "rsb	pc, r",14,1f-2f+8,", pc")
+1 -1
arch/arm/kernel/perf_event.c
···
503 503		    event_requires_mode_exclusion(&event->attr)) {
504 504			pr_debug("ARM performance counters do not support "
505 505				 "mode exclusion\n");
506 -			return -EPERM;
506 +			return -EOPNOTSUPP;
507 507		}
508 508
509 509		/*
-3
arch/arm/kernel/ptrace.c
···
25 25	#include <linux/regset.h>
26 26	#include <linux/audit.h>
27 27	#include <linux/tracehook.h>
28 -	#include <linux/unistd.h>
29 28
30 29	#include <asm/pgtable.h>
31 30	#include <asm/traps.h>
···
917 918		audit_syscall_entry(AUDIT_ARCH_ARM, scno, regs->ARM_r0,
918 919				    regs->ARM_r1, regs->ARM_r2, regs->ARM_r3);
919 920
920 -		if (why == 0 && test_and_clear_thread_flag(TIF_SYSCALL_RESTARTSYS))
921 -			scno = __NR_restart_syscall - __NR_SYSCALL_BASE;
922 921		if (!test_thread_flag(TIF_SYSCALL_TRACE))
923 922			return scno;
924 923
+40 -6
arch/arm/kernel/signal.c
···
27 27	 */
28 28	#define SWI_SYS_SIGRETURN	(0xef000000|(__NR_sigreturn)|(__NR_OABI_SYSCALL_BASE))
29 29	#define SWI_SYS_RT_SIGRETURN	(0xef000000|(__NR_rt_sigreturn)|(__NR_OABI_SYSCALL_BASE))
30 +	#define SWI_SYS_RESTART		(0xef000000|__NR_restart_syscall|__NR_OABI_SYSCALL_BASE)
30 31
31 32	/*
32 33	 * With EABI, the syscall number has to be loaded into r7.
···
45 44	const unsigned long sigreturn_codes[7] = {
46 45		MOV_R7_NR_SIGRETURN,    SWI_SYS_SIGRETURN,    SWI_THUMB_SIGRETURN,
47 46		MOV_R7_NR_RT_SIGRETURN, SWI_SYS_RT_SIGRETURN, SWI_THUMB_RT_SIGRETURN,
47 +	};
48 +
49 +	/*
50 +	 * Either we support OABI only, or we have EABI with the OABI
51 +	 * compat layer enabled. In the latter case we don't know if
52 +	 * user space is EABI or not, and if not we must not clobber r7.
53 +	 * Always using the OABI syscall solves that issue and works for
54 +	 * all those cases.
55 +	 */
56 +	const unsigned long syscall_restart_code[2] = {
57 +		SWI_SYS_RESTART,	/* swi	__NR_restart_syscall */
58 +		0xe49df004,		/* ldr	pc, [sp], #4 */
48 59	};
49 60
50 61	/*
···
605 592		case -ERESTARTNOHAND:
606 593		case -ERESTARTSYS:
607 594		case -ERESTARTNOINTR:
608 -		case -ERESTART_RESTARTBLOCK:
609 595			regs->ARM_r0 = regs->ARM_ORIG_r0;
610 596			regs->ARM_pc = restart_addr;
597 +			break;
598 +		case -ERESTART_RESTARTBLOCK:
599 +			regs->ARM_r0 = -EINTR;
611 600			break;
612 601		}
613 602	}
···
626 611	 * debugger has chosen to restart at a different PC.
627 612	 */
628 613	if (regs->ARM_pc == restart_addr) {
629 -		if (retval == -ERESTARTNOHAND ||
630 -		    retval == -ERESTART_RESTARTBLOCK
614 +		if (retval == -ERESTARTNOHAND
631 615		    || (retval == -ERESTARTSYS
632 616			&& !(ka.sa.sa_flags & SA_RESTART))) {
633 617			regs->ARM_r0 = -EINTR;
634 618			regs->ARM_pc = continue_addr;
635 619		}
636 -		clear_thread_flag(TIF_SYSCALL_RESTARTSYS);
637 620	}
638 621
639 622		handle_signal(signr, &ka, &info, regs);
···
645 632	 * ignore the restart.
646 633	 */
647 634	if (retval == -ERESTART_RESTARTBLOCK
648 -		&& regs->ARM_pc == restart_addr)
649 -		set_thread_flag(TIF_SYSCALL_RESTARTSYS);
635 +		&& regs->ARM_pc == continue_addr) {
636 +		if (thumb_mode(regs)) {
637 +			regs->ARM_r7 = __NR_restart_syscall - __NR_SYSCALL_BASE;
638 +			regs->ARM_pc -= 2;
639 +		} else {
640 +	#if defined(CONFIG_AEABI) && !defined(CONFIG_OABI_COMPAT)
641 +			regs->ARM_r7 = __NR_restart_syscall;
642 +			regs->ARM_pc -= 4;
643 +	#else
644 +			u32 __user *usp;
645 +
646 +			regs->ARM_sp -= 4;
647 +			usp = (u32 __user *)regs->ARM_sp;
648 +
649 +			if (put_user(regs->ARM_pc, usp) == 0) {
650 +				regs->ARM_pc = KERN_RESTART_CODE;
651 +			} else {
652 +				regs->ARM_sp += 4;
653 +				force_sigsegv(0, current);
654 +			}
655 +	#endif
656 +		}
657 +	}
650 658	}
651 659
652 660		restore_saved_sigmask();
+2
arch/arm/kernel/signal.h
···
8 8	 * published by the Free Software Foundation.
9 9	 */
10 10	#define KERN_SIGRETURN_CODE	(CONFIG_VECTORS_BASE + 0x00000500)
11 +	#define KERN_RESTART_CODE	(KERN_SIGRETURN_CODE + sizeof(sigreturn_codes))
11 12
12 13	extern const unsigned long sigreturn_codes[7];
14 +	extern const unsigned long syscall_restart_code[2];
+2
arch/arm/kernel/traps.c
···
820 820	 */
821 821	memcpy((void *)(vectors + KERN_SIGRETURN_CODE - CONFIG_VECTORS_BASE),
822 822	       sigreturn_codes, sizeof(sigreturn_codes));
823 +	memcpy((void *)(vectors + KERN_RESTART_CODE - CONFIG_VECTORS_BASE),
824 +	       syscall_restart_code, sizeof(syscall_restart_code));
823 825
824 826	flush_icache_range(vectors, vectors + PAGE_SIZE);
825 827	modify_domain(DOMAIN_USER, DOMAIN_CLIENT);
+2
arch/arm/kernel/vmlinux.lds.S
···
183 183	}
184 184	#endif
185 185
186 +	#ifdef CONFIG_SMP
186 187	PERCPU_SECTION(L1_CACHE_BYTES)
188 +	#endif
187 189
188 190	#ifdef CONFIG_XIP_KERNEL
189 191		__data_loc = ALIGN(4);		/* location in binary */
+1
arch/arm/mach-dove/include/mach/bridge-regs.h
···
50 50	#define POWER_MANAGEMENT	(BRIDGE_VIRT_BASE | 0x011c)
51 51
52 52	#define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
53 +	#define TIMER_PHYS_BASE		(BRIDGE_PHYS_BASE | 0x0300)
53 54
54 55	#endif
+1
arch/arm/mach-dove/include/mach/dove.h
···
78 78
79 79	/* North-South Bridge */
80 80	#define BRIDGE_VIRT_BASE	(DOVE_SB_REGS_VIRT_BASE | 0x20000)
81 +	#define BRIDGE_PHYS_BASE	(DOVE_SB_REGS_PHYS_BASE | 0x20000)
81 82
82 83	/* Cryptographic Engine */
83 84	#define DOVE_CRYPT_PHYS_BASE	(DOVE_SB_REGS_PHYS_BASE | 0x30000)
+8 -1
arch/arm/mach-imx/clk-imx35.c
···
201 201			pr_err("i.MX35 clk %d: register failed with %ld\n",
202 202				i, PTR_ERR(clk[i]));
203 203
204 -
205 204	clk_register_clkdev(clk[pata_gate], NULL, "pata_imx");
206 205	clk_register_clkdev(clk[can1_gate], NULL, "flexcan.0");
207 206	clk_register_clkdev(clk[can2_gate], NULL, "flexcan.1");
···
262 263	clk_prepare_enable(clk[gpio3_gate]);
263 264	clk_prepare_enable(clk[iim_gate]);
264 265	clk_prepare_enable(clk[emi_gate]);
266 +
267 +	/*
268 +	 * SCC is needed to boot via mmc after a watchdog reset. The clock code
269 +	 * before conversion to common clk also enabled UART1 (which isn't
270 +	 * handled here and not needed for mmc) and IIM (which is enabled
271 +	 * unconditionally above).
272 +	 */
273 +	clk_prepare_enable(clk[scc_gate]);
265 274
266 275	imx_print_silicon_rev("i.MX35", mx35_revision());
267 276
+1 -1
arch/arm/mach-imx/mach-imx27_visstrim_m10.c
···
38 38	#include <asm/mach-types.h>
39 39	#include <asm/mach/arch.h>
40 40	#include <asm/mach/time.h>
41 -	#include <asm/system.h>
41 +	#include <asm/system_info.h>
42 42	#include <mach/common.h>
43 43	#include <mach/iomux-mx27.h>
44 44
-29
arch/arm/mach-mmp/include/mach/gpio-pxa.h
···
1 -	#ifndef __ASM_MACH_GPIO_PXA_H
2 -	#define __ASM_MACH_GPIO_PXA_H
3 -
4 -	#include <mach/addr-map.h>
5 -	#include <mach/cputype.h>
6 -	#include <mach/irqs.h>
7 -
8 -	#define GPIO_REGS_VIRT	(APB_VIRT_BASE + 0x19000)
9 -
10 -	#define BANK_OFF(n)	(((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2))
11 -	#define GPIO_REG(x)	(*(volatile u32 *)(GPIO_REGS_VIRT + (x)))
12 -
13 -	#define gpio_to_bank(gpio)	((gpio) >> 5)
14 -
15 -	/* NOTE: these macros are defined here to make optimization of
16 -	 * gpio_{get,set}_value() to work when 'gpio' is a constant.
17 -	 * Usage of these macros otherwise is no longer recommended,
18 -	 * use generic GPIO API whenever possible.
19 -	 */
20 -	#define GPIO_bit(gpio)	(1 << ((gpio) & 0x1f))
21 -
22 -	#define GPLR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x00)
23 -	#define GPDR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x0c)
24 -	#define GPSR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x18)
25 -	#define GPCR(x)		GPIO_REG(BANK_OFF(gpio_to_bank(x)) + 0x24)
26 -
27 -	#include <plat/gpio-pxa.h>
28 -
29 -	#endif	/* __ASM_MACH_GPIO_PXA_H */
+1
arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
···
31 31	#define IRQ_MASK_HIGH_OFF	0x0014
32 32
33 33	#define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
34 +	#define TIMER_PHYS_BASE		(BRIDGE_PHYS_BASE | 0x0300)
34 35
35 36	#endif
+2
arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
···
42 42	#define MV78XX0_CORE0_REGS_PHYS_BASE	0xf1020000
43 43	#define MV78XX0_CORE1_REGS_PHYS_BASE	0xf1024000
44 44	#define MV78XX0_CORE_REGS_VIRT_BASE	0xfe400000
45 +	#define MV78XX0_CORE_REGS_PHYS_BASE	0xfe400000
45 46	#define MV78XX0_CORE_REGS_SIZE		SZ_16K
46 47
47 48	#define MV78XX0_PCIE_IO_PHYS_BASE(i)	(0xf0800000 + ((i) << 20))
···
60 59	 * Core-specific peripheral registers.
61 60	 */
62 61	#define BRIDGE_VIRT_BASE	(MV78XX0_CORE_REGS_VIRT_BASE)
63 +	#define BRIDGE_PHYS_BASE	(MV78XX0_CORE_REGS_PHYS_BASE)
63 64
64 65	/*
65 66	 * Register Map
+11
arch/arm/mach-mxs/mach-apx4devkit.c
···
205 205		return 0;
206 206	}
207 207
208 +	static void __init apx4devkit_fec_phy_clk_enable(void)
209 +	{
210 +		struct clk *clk;
211 +
212 +		/* Enable fec phy clock */
213 +		clk = clk_get_sys("enet_out", NULL);
214 +		if (!IS_ERR(clk))
215 +			clk_prepare_enable(clk);
216 +	}
217 +
208 218	static void __init apx4devkit_init(void)
209 219	{
210 220		mx28_soc_init();
···
235 225		phy_register_fixup_for_uid(PHY_ID_KS8051, MICREL_PHY_ID_MASK,
236 226				apx4devkit_phy_fixup);
237 227
228 +		apx4devkit_fec_phy_clk_enable();
238 229		mx28_add_fec(0, &mx28_fec_pdata);
239 230
240 231		mx28_add_mxs_mmc(0, &apx4devkit_mmc_pdata);
+1 -1
arch/arm/mach-omap2/board-overo.c
···
494 494
495 495		regulator_register_fixed(0, dummy_supplies, ARRAY_SIZE(dummy_supplies));
496 496		omap3_mux_init(board_mux, OMAP_PACKAGE_CBB);
497 -		omap_hsmmc_init(mmc);
498 497		overo_i2c_init();
498 +		omap_hsmmc_init(mmc);
499 499		omap_display_init(&overo_dss_data);
500 500		omap_serial_init();
501 501		omap_sdrc_init(mt46h32m32lf6_sdrc_params,
+14 -14
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
···
1928 1928
1929 1929	static struct omap_hwmod_opt_clk mcbsp1_opt_clks[] = {
1930 1930		{ .role = "pad_fck", .clk = "pad_clks_ck" },
1931 -		{ .role = "prcm_clk", .clk = "mcbsp1_sync_mux_ck" },
1931 +		{ .role = "prcm_fck", .clk = "mcbsp1_sync_mux_ck" },
1932 1932	};
1933 1933
1934 1934	static struct omap_hwmod omap44xx_mcbsp1_hwmod = {
···
1963 1963
1964 1964	static struct omap_hwmod_opt_clk mcbsp2_opt_clks[] = {
1965 1965		{ .role = "pad_fck", .clk = "pad_clks_ck" },
1966 -		{ .role = "prcm_clk", .clk = "mcbsp2_sync_mux_ck" },
1966 +		{ .role = "prcm_fck", .clk = "mcbsp2_sync_mux_ck" },
1967 1967	};
1968 1968
1969 1969	static struct omap_hwmod omap44xx_mcbsp2_hwmod = {
···
1998 1998
1999 1999	static struct omap_hwmod_opt_clk mcbsp3_opt_clks[] = {
2000 2000		{ .role = "pad_fck", .clk = "pad_clks_ck" },
2001 -		{ .role = "prcm_clk", .clk = "mcbsp3_sync_mux_ck" },
2001 +		{ .role = "prcm_fck", .clk = "mcbsp3_sync_mux_ck" },
2002 2002	};
2003 2003
2004 2004	static struct omap_hwmod omap44xx_mcbsp3_hwmod = {
···
2033 2033
2034 2034	static struct omap_hwmod_opt_clk mcbsp4_opt_clks[] = {
2035 2035		{ .role = "pad_fck", .clk = "pad_clks_ck" },
2036 -		{ .role = "prcm_clk", .clk = "mcbsp4_sync_mux_ck" },
2036 +		{ .role = "prcm_fck", .clk = "mcbsp4_sync_mux_ck" },
2037 2037	};
2038 2038
2039 2039	static struct omap_hwmod omap44xx_mcbsp4_hwmod = {
···
3864 3864	};
3865 3865
3866 3866	/* usb_host_fs -> l3_main_2 */
3867 -	static struct omap_hwmod_ocp_if omap44xx_usb_host_fs__l3_main_2 = {
3867 +	static struct omap_hwmod_ocp_if __maybe_unused omap44xx_usb_host_fs__l3_main_2 = {
3868 3868		.master		= &omap44xx_usb_host_fs_hwmod,
3869 3869		.slave		= &omap44xx_l3_main_2_hwmod,
3870 3870		.clk		= "l3_div_ck",
···
3922 3922	};
3923 3923
3924 3924	/* aess -> l4_abe */
3925 -	static struct omap_hwmod_ocp_if omap44xx_aess__l4_abe = {
3925 +	static struct omap_hwmod_ocp_if __maybe_unused omap44xx_aess__l4_abe = {
3926 3926		.master		= &omap44xx_aess_hwmod,
3927 3927		.slave		= &omap44xx_l4_abe_hwmod,
3928 3928		.clk		= "ocp_abe_iclk",
···
4013 4013	};
4014 4014
4015 4015	/* l4_abe -> aess */
4016 -	static struct omap_hwmod_ocp_if omap44xx_l4_abe__aess = {
4016 +	static struct omap_hwmod_ocp_if __maybe_unused omap44xx_l4_abe__aess = {
4017 4017		.master		= &omap44xx_l4_abe_hwmod,
4018 4018		.slave		= &omap44xx_aess_hwmod,
4019 4019		.clk		= "ocp_abe_iclk",
···
4031 4031	};
4032 4032
4033 4033	/* l4_abe -> aess (dma) */
4034 -	static struct omap_hwmod_ocp_if omap44xx_l4_abe__aess_dma = {
4034 +	static struct omap_hwmod_ocp_if __maybe_unused omap44xx_l4_abe__aess_dma = {
4035 4035		.master		= &omap44xx_l4_abe_hwmod,
4036 4036		.slave		= &omap44xx_aess_hwmod,
4037 4037		.clk		= "ocp_abe_iclk",
···
5857 5857	};
5858 5858
5859 5859	/* l4_cfg -> usb_host_fs */
5860 -	static struct omap_hwmod_ocp_if omap44xx_l4_cfg__usb_host_fs = {
5860 +	static struct omap_hwmod_ocp_if __maybe_unused omap44xx_l4_cfg__usb_host_fs = {
5861 5861		.master		= &omap44xx_l4_cfg_hwmod,
5862 5862		.slave		= &omap44xx_usb_host_fs_hwmod,
5863 5863		.clk		= "l4_div_ck",
···
6014 6014		&omap44xx_iva__l3_main_2,
6015 6015		&omap44xx_l3_main_1__l3_main_2,
6016 6016		&omap44xx_l4_cfg__l3_main_2,
6017 -		&omap44xx_usb_host_fs__l3_main_2,
6017 +		/* &omap44xx_usb_host_fs__l3_main_2, */
6018 6018		&omap44xx_usb_host_hs__l3_main_2,
6019 6019		&omap44xx_usb_otg_hs__l3_main_2,
6020 6020		&omap44xx_l3_main_1__l3_main_3,
6021 6021		&omap44xx_l3_main_2__l3_main_3,
6022 6022		&omap44xx_l4_cfg__l3_main_3,
6023 -		&omap44xx_aess__l4_abe,
6023 +		/* &omap44xx_aess__l4_abe, */
6024 6024		&omap44xx_dsp__l4_abe,
6025 6025		&omap44xx_l3_main_1__l4_abe,
6026 6026		&omap44xx_mpu__l4_abe,
···
6029 6029		&omap44xx_l4_cfg__l4_wkup,
6030 6030		&omap44xx_mpu__mpu_private,
6031 6031		&omap44xx_l4_cfg__ocp_wp_noc,
6032 -		&omap44xx_l4_abe__aess,
6033 -		&omap44xx_l4_abe__aess_dma,
6032 +		/* &omap44xx_l4_abe__aess, */
6033 +		/* &omap44xx_l4_abe__aess_dma, */
6034 6034		&omap44xx_l3_main_2__c2c,
6035 6035		&omap44xx_l4_wkup__counter_32k,
6036 6036		&omap44xx_l4_cfg__ctrl_module_core,
···
6136 6136		&omap44xx_l4_per__uart2,
6137 6137		&omap44xx_l4_per__uart3,
6138 6138		&omap44xx_l4_per__uart4,
6139 -		&omap44xx_l4_cfg__usb_host_fs,
6139 +		/* &omap44xx_l4_cfg__usb_host_fs, */
6140 6140		&omap44xx_l4_cfg__usb_host_hs,
6141 6141		&omap44xx_l4_cfg__usb_otg_hs,
6142 6142		&omap44xx_l4_cfg__usb_tll_hs,
+2
arch/arm/mach-omap2/twl-common.c
··· 32 32 #include "twl-common.h" 33 33 #include "pm.h" 34 34 #include "voltage.h" 35 + #include "mux.h" 35 36 36 37 static struct i2c_board_info __initdata pmic_i2c_board_info = { 37 38 .addr = 0x48, ··· 78 77 struct twl6040_platform_data *twl6040_data, int twl6040_irq) 79 78 { 80 79 /* PMIC part*/ 80 + omap_mux_init_signal("sys_nirq1", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE); 81 81 strncpy(omap4_i2c1_board_info[0].type, pmic_type, 82 82 sizeof(omap4_i2c1_board_info[0].type)); 83 83 omap4_i2c1_board_info[0].irq = OMAP44XX_IRQ_SYS_1N;
+14 -1
arch/arm/mach-pxa/hx4700.c
··· 127 127 GPIO19_SSP2_SCLK, 128 128 GPIO86_SSP2_RXD, 129 129 GPIO87_SSP2_TXD, 130 - GPIO88_GPIO, 130 + GPIO88_GPIO | MFP_LPM_DRIVE_HIGH, /* TSC2046_CS */ 131 + 132 + /* BQ24022 Regulator */ 133 + GPIO72_GPIO | MFP_LPM_KEEP_OUTPUT, /* BQ24022_nCHARGE_EN */ 134 + GPIO96_GPIO | MFP_LPM_KEEP_OUTPUT, /* BQ24022_ISET2 */ 131 135 132 136 /* HX4700 specific input GPIOs */ 133 137 GPIO12_GPIO | WAKEUP_ON_EDGE_RISE, /* ASIC3_IRQ */ ··· 139 135 GPIO14_GPIO, /* nWLAN_IRQ */ 140 136 141 137 /* HX4700 specific output GPIOs */ 138 + GPIO61_GPIO | MFP_LPM_DRIVE_HIGH, /* W3220_nRESET */ 139 + GPIO71_GPIO | MFP_LPM_DRIVE_HIGH, /* ASIC3_nRESET */ 140 + GPIO81_GPIO | MFP_LPM_DRIVE_HIGH, /* CPU_GP_nRESET */ 141 + GPIO116_GPIO | MFP_LPM_DRIVE_HIGH, /* CPU_HW_nRESET */ 142 142 GPIO102_GPIO | MFP_LPM_DRIVE_LOW, /* SYNAPTICS_POWER_ON */ 143 143 144 144 GPIO10_GPIO, /* GSM_IRQ */ ··· 880 872 { GPIO110_HX4700_LCD_LVDD_3V3_ON, GPIOF_OUT_INIT_HIGH, "LCD_LVDD" }, 881 873 { GPIO111_HX4700_LCD_AVDD_3V3_ON, GPIOF_OUT_INIT_HIGH, "LCD_AVDD" }, 882 874 { GPIO32_HX4700_RS232_ON, GPIOF_OUT_INIT_HIGH, "RS232_ON" }, 875 + { GPIO61_HX4700_W3220_nRESET, GPIOF_OUT_INIT_HIGH, "W3220_nRESET" }, 883 876 { GPIO71_HX4700_ASIC3_nRESET, GPIOF_OUT_INIT_HIGH, "ASIC3_nRESET" }, 877 + { GPIO81_HX4700_CPU_GP_nRESET, GPIOF_OUT_INIT_HIGH, "CPU_GP_nRESET" }, 884 878 { GPIO82_HX4700_EUART_RESET, GPIOF_OUT_INIT_HIGH, "EUART_RESET" }, 879 + { GPIO116_HX4700_CPU_HW_nRESET, GPIOF_OUT_INIT_HIGH, "CPU_HW_nRESET" }, 885 880 }; 886 881 887 882 static void __init hx4700_init(void) 888 883 { 889 884 int ret; 885 + 886 + PCFR = PCFR_GPR_EN | PCFR_OPDE; 890 887 891 888 pxa2xx_mfp_config(ARRAY_AND_SIZE(hx4700_pin_config)); 892 889 gpio_set_wake(GPIO12_HX4700_ASIC3_IRQ, 1);
-1
arch/arm/mach-versatile/pci.c
··· 339 339 static int __init versatile_map_irq(const struct pci_dev *dev, u8 slot, u8 pin) 340 340 { 341 341 int irq; 342 - int devslot = PCI_SLOT(dev->devfn); 343 342 344 343 /* slot, pin, irq 345 344 * 24 1 27
+1 -1
arch/arm/mm/mm.h
··· 64 64 #ifdef CONFIG_ZONE_DMA 65 65 extern phys_addr_t arm_dma_limit; 66 66 #else 67 - #define arm_dma_limit ((u32)~0) 67 + #define arm_dma_limit ((phys_addr_t)~0) 68 68 #endif 69 69 70 70 extern phys_addr_t arm_lowmem_limit;
+74
arch/arm/mm/mmu.c
··· 791 791 } 792 792 } 793 793 794 + #ifndef CONFIG_ARM_LPAE 795 + 796 + /* 797 + * The Linux PMD is made of two consecutive section entries covering 2MB 798 + * (see definition in include/asm/pgtable-2level.h). However a call to 799 + * create_mapping() may optimize static mappings by using individual 800 + * 1MB section mappings. This leaves the actual PMD potentially half 801 + * initialized if the top or bottom section entry isn't used, leaving it 802 + * open to problems if a subsequent ioremap() or vmalloc() tries to use 803 + * the virtual space left free by that unused section entry. 804 + * 805 + * Let's avoid the issue by inserting dummy vm entries covering the unused 806 + * PMD halves once the static mappings are in place. 807 + */ 808 + 809 + static void __init pmd_empty_section_gap(unsigned long addr) 810 + { 811 + struct vm_struct *vm; 812 + 813 + vm = early_alloc_aligned(sizeof(*vm), __alignof__(*vm)); 814 + vm->addr = (void *)addr; 815 + vm->size = SECTION_SIZE; 816 + vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING; 817 + vm->caller = pmd_empty_section_gap; 818 + vm_area_add_early(vm); 819 + } 820 + 821 + static void __init fill_pmd_gaps(void) 822 + { 823 + struct vm_struct *vm; 824 + unsigned long addr, next = 0; 825 + pmd_t *pmd; 826 + 827 + /* we're still single threaded hence no lock needed here */ 828 + for (vm = vmlist; vm; vm = vm->next) { 829 + if (!(vm->flags & VM_ARM_STATIC_MAPPING)) 830 + continue; 831 + addr = (unsigned long)vm->addr; 832 + if (addr < next) 833 + continue; 834 + 835 + /* 836 + * Check if this vm starts on an odd section boundary. 837 + * If so and the first section entry for this PMD is free 838 + * then we block the corresponding virtual address. 839 + */ 840 + if ((addr & ~PMD_MASK) == SECTION_SIZE) { 841 + pmd = pmd_off_k(addr); 842 + if (pmd_none(*pmd)) 843 + pmd_empty_section_gap(addr & PMD_MASK); 844 + } 845 + 846 + /* 847 + * Then check if this vm ends on an odd section boundary. 
848 + * If so and the second section entry for this PMD is empty 849 + * then we block the corresponding virtual address. 850 + */ 851 + addr += vm->size; 852 + if ((addr & ~PMD_MASK) == SECTION_SIZE) { 853 + pmd = pmd_off_k(addr) + 1; 854 + if (pmd_none(*pmd)) 855 + pmd_empty_section_gap(addr); 856 + } 857 + 858 + /* no need to look at any vm entry until we hit the next PMD */ 859 + next = (addr + PMD_SIZE - 1) & PMD_MASK; 860 + } 861 + } 862 + 863 + #else 864 + #define fill_pmd_gaps() do { } while (0) 865 + #endif 866 + 794 867 static void * __initdata vmalloc_min = 795 868 (void *)(VMALLOC_END - (240 << 20) - VMALLOC_OFFSET); 796 869 ··· 1145 1072 */ 1146 1073 if (mdesc->map_io) 1147 1074 mdesc->map_io(); 1075 + fill_pmd_gaps(); 1148 1076 1149 1077 /* 1150 1078 * Finally flush the caches and tlb to ensure that we're in a
+1 -1
arch/powerpc/kvm/book3s_hv_rmhandlers.S
··· 810 810 lwz r3,VCORE_NAPPING_THREADS(r5) 811 811 lwz r4,VCPU_PTID(r9) 812 812 li r0,1 813 - sldi r0,r0,r4 813 + sld r0,r0,r4 814 814 andc. r3,r3,r0 /* no sense IPI'ing ourselves */ 815 815 beq 43f 816 816 mulli r4,r4,PACA_SIZE /* get paca for thread 0 */
+1 -1
arch/powerpc/xmon/xmon.c
··· 971 971 /* print cpus waiting or in xmon */ 972 972 printf("cpus stopped:"); 973 973 count = 0; 974 - for (cpu = 0; cpu < NR_CPUS; ++cpu) { 974 + for_each_possible_cpu(cpu) { 975 975 if (cpumask_test_cpu(cpu, &cpus_in_xmon)) { 976 976 if (count == 0) 977 977 printf(" %x", cpu);
+3
arch/x86/kvm/mmu.c
··· 3934 3934 { 3935 3935 struct kvm_mmu_page *page; 3936 3936 3937 + if (list_empty(&kvm->arch.active_mmu_pages)) 3938 + return; 3939 + 3937 3940 page = container_of(kvm->arch.active_mmu_pages.prev, 3938 3941 struct kvm_mmu_page, link); 3939 3942 kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
+2 -7
block/blk-cgroup.c
··· 125 125 126 126 blkg->pd[i] = pd; 127 127 pd->blkg = blkg; 128 - } 129 128 130 - /* invoke per-policy init */ 131 - for (i = 0; i < BLKCG_MAX_POLS; i++) { 132 - struct blkcg_policy *pol = blkcg_policy[i]; 133 - 129 + /* invoke per-policy init */ 134 130 if (blkcg_policy_enabled(blkg->q, pol)) 135 131 pol->pd_init_fn(blkg); 136 132 } ··· 241 245 242 246 static void blkg_destroy(struct blkcg_gq *blkg) 243 247 { 244 - struct request_queue *q = blkg->q; 245 248 struct blkcg *blkcg = blkg->blkcg; 246 249 247 - lockdep_assert_held(q->queue_lock); 250 + lockdep_assert_held(blkg->q->queue_lock); 248 251 lockdep_assert_held(&blkcg->lock); 249 252 250 253 /* Something wrong if we are trying to remove same group twice */
+19 -6
block/blk-core.c
··· 361 361 */ 362 362 void blk_drain_queue(struct request_queue *q, bool drain_all) 363 363 { 364 + int i; 365 + 364 366 while (true) { 365 367 bool drain = false; 366 - int i; 367 368 368 369 spin_lock_irq(q->queue_lock); 369 370 ··· 408 407 if (!drain) 409 408 break; 410 409 msleep(10); 410 + } 411 + 412 + /* 413 + * With queue marked dead, any woken up waiter will fail the 414 + * allocation path, so the wakeup chaining is lost and we're 415 + * left with hung waiters. We need to wake up those waiters. 416 + */ 417 + if (q->request_fn) { 418 + spin_lock_irq(q->queue_lock); 419 + for (i = 0; i < ARRAY_SIZE(q->rq.wait); i++) 420 + wake_up_all(&q->rq.wait[i]); 421 + spin_unlock_irq(q->queue_lock); 411 422 } 412 423 } 413 424 ··· 480 467 /* mark @q DEAD, no new request or merges will be allowed afterwards */ 481 468 mutex_lock(&q->sysfs_lock); 482 469 queue_flag_set_unlocked(QUEUE_FLAG_DEAD, q); 483 - 484 470 spin_lock_irq(lock); 485 471 486 472 /* ··· 497 485 queue_flag_set(QUEUE_FLAG_NOMERGES, q); 498 486 queue_flag_set(QUEUE_FLAG_NOXMERGES, q); 499 487 queue_flag_set(QUEUE_FLAG_DEAD, q); 500 - 501 - if (q->queue_lock != &q->__queue_lock) 502 - q->queue_lock = &q->__queue_lock; 503 - 504 488 spin_unlock_irq(lock); 505 489 mutex_unlock(&q->sysfs_lock); 506 490 ··· 506 498 /* @q won't process any more request, flush async actions */ 507 499 del_timer_sync(&q->backing_dev_info.laptop_mode_wb_timer); 508 500 blk_sync_queue(q); 501 + 502 + spin_lock_irq(lock); 503 + if (q->queue_lock != &q->__queue_lock) 504 + q->queue_lock = &q->__queue_lock; 505 + spin_unlock_irq(lock); 509 506 510 507 /* @q is and will stay empty, shutdown and put */ 511 508 blk_put_queue(q);
-41
block/blk-timeout.c
··· 197 197 mod_timer(&q->timeout, expiry); 198 198 } 199 199 200 - /** 201 - * blk_abort_queue -- Abort all request on given queue 202 - * @queue: pointer to queue 203 - * 204 - */ 205 - void blk_abort_queue(struct request_queue *q) 206 - { 207 - unsigned long flags; 208 - struct request *rq, *tmp; 209 - LIST_HEAD(list); 210 - 211 - /* 212 - * Not a request based block device, nothing to abort 213 - */ 214 - if (!q->request_fn) 215 - return; 216 - 217 - spin_lock_irqsave(q->queue_lock, flags); 218 - 219 - elv_abort_queue(q); 220 - 221 - /* 222 - * Splice entries to local list, to avoid deadlocking if entries 223 - * get readded to the timeout list by error handling 224 - */ 225 - list_splice_init(&q->timeout_list, &list); 226 - 227 - list_for_each_entry_safe(rq, tmp, &list, timeout_list) 228 - blk_abort_request(rq); 229 - 230 - /* 231 - * Occasionally, blk_abort_request() will return without 232 - * deleting the element from the list. Make sure we add those back 233 - * instead of leaving them on the local stack list. 234 - */ 235 - list_splice(&list, &q->timeout_list); 236 - 237 - spin_unlock_irqrestore(q->queue_lock, flags); 238 - 239 - } 240 - EXPORT_SYMBOL_GPL(blk_abort_queue);
+18 -12
block/cfq-iosched.c
··· 17 17 #include "blk.h" 18 18 #include "blk-cgroup.h" 19 19 20 - static struct blkcg_policy blkcg_policy_cfq __maybe_unused; 21 - 22 20 /* 23 21 * tunables 24 22 */ ··· 416 418 return pd ? container_of(pd, struct cfq_group, pd) : NULL; 417 419 } 418 420 419 - static inline struct cfq_group *blkg_to_cfqg(struct blkcg_gq *blkg) 420 - { 421 - return pd_to_cfqg(blkg_to_pd(blkg, &blkcg_policy_cfq)); 422 - } 423 - 424 421 static inline struct blkcg_gq *cfqg_to_blkg(struct cfq_group *cfqg) 425 422 { 426 423 return pd_to_blkg(&cfqg->pd); ··· 564 571 #endif /* CONFIG_CFQ_GROUP_IOSCHED && CONFIG_DEBUG_BLK_CGROUP */ 565 572 566 573 #ifdef CONFIG_CFQ_GROUP_IOSCHED 574 + 575 + static struct blkcg_policy blkcg_policy_cfq; 576 + 577 + static inline struct cfq_group *blkg_to_cfqg(struct blkcg_gq *blkg) 578 + { 579 + return pd_to_cfqg(blkg_to_pd(blkg, &blkcg_policy_cfq)); 580 + } 567 581 568 582 static inline void cfqg_get(struct cfq_group *cfqg) 569 583 { ··· 3951 3951 3952 3952 cfq_shutdown_timer_wq(cfqd); 3953 3953 3954 - #ifndef CONFIG_CFQ_GROUP_IOSCHED 3954 + #ifdef CONFIG_CFQ_GROUP_IOSCHED 3955 + blkcg_deactivate_policy(q, &blkcg_policy_cfq); 3956 + #else 3955 3957 kfree(cfqd->root_group); 3956 3958 #endif 3957 - blkcg_deactivate_policy(q, &blkcg_policy_cfq); 3958 3959 kfree(cfqd); 3959 3960 } 3960 3961 ··· 4195 4194 #ifdef CONFIG_CFQ_GROUP_IOSCHED 4196 4195 if (!cfq_group_idle) 4197 4196 cfq_group_idle = 1; 4198 - #else 4199 - cfq_group_idle = 0; 4200 - #endif 4201 4197 4202 4198 ret = blkcg_policy_register(&blkcg_policy_cfq); 4203 4199 if (ret) 4204 4200 return ret; 4201 + #else 4202 + cfq_group_idle = 0; 4203 + #endif 4205 4204 4205 + ret = -ENOMEM; 4206 4206 cfq_pool = KMEM_CACHE(cfq_queue, 0); 4207 4207 if (!cfq_pool) 4208 4208 goto err_pol_unreg; ··· 4217 4215 err_free_pool: 4218 4216 kmem_cache_destroy(cfq_pool); 4219 4217 err_pol_unreg: 4218 + #ifdef CONFIG_CFQ_GROUP_IOSCHED 4220 4219 blkcg_policy_unregister(&blkcg_policy_cfq); 4220 + #endif 4221 4221 return ret; 
4222 4222 } 4223 4223 4224 4224 static void __exit cfq_exit(void) 4225 4225 { 4226 + #ifdef CONFIG_CFQ_GROUP_IOSCHED 4226 4227 blkcg_policy_unregister(&blkcg_policy_cfq); 4228 + #endif 4227 4229 elv_unregister(&iosched_cfq); 4228 4230 kmem_cache_destroy(cfq_pool); 4229 4231 }
+4 -1
block/scsi_ioctl.c
··· 721 721 break; 722 722 } 723 723 724 + if (capable(CAP_SYS_RAWIO)) 725 + return 0; 726 + 724 727 /* In particular, rule out all resets and host-specific ioctls. */ 725 728 printk_ratelimited(KERN_WARNING 726 729 "%s: sending ioctl %x to a partition!\n", current->comm, cmd); 727 730 728 - return capable(CAP_SYS_RAWIO) ? 0 : -ENOIOCTLCMD; 731 + return -ENOIOCTLCMD; 729 732 } 730 733 EXPORT_SYMBOL(scsi_verify_blk_ioctl); 731 734
+9 -2
drivers/block/drbd/drbd_bitmap.c
··· 1475 1475 first_word = 0; 1476 1476 spin_lock_irq(&b->bm_lock); 1477 1477 } 1478 - 1479 1478 /* last page (respectively only page, for first page == last page) */ 1480 1479 last_word = MLPP(el >> LN2_BPL); 1481 - bm_set_full_words_within_one_page(mdev->bitmap, last_page, first_word, last_word); 1480 + 1481 + /* consider bitmap->bm_bits = 32768, bitmap->bm_number_of_pages = 1. (or multiples). 1482 + * ==> e = 32767, el = 32768, last_page = 2, 1483 + * and now last_word = 0. 1484 + * We do not want to touch last_page in this case, 1485 + * as we did not allocate it, it is not present in bitmap->bm_pages. 1486 + */ 1487 + if (last_word) 1488 + bm_set_full_words_within_one_page(mdev->bitmap, last_page, first_word, last_word); 1482 1489 1483 1490 /* possibly trailing bits. 1484 1491 * example: (e & 63) == 63, el will be e+1.
+42 -24
drivers/block/drbd/drbd_req.c
··· 472 472 req->rq_state |= RQ_LOCAL_COMPLETED; 473 473 req->rq_state &= ~RQ_LOCAL_PENDING; 474 474 475 - D_ASSERT(!(req->rq_state & RQ_NET_MASK)); 475 + if (req->rq_state & RQ_LOCAL_ABORTED) { 476 + _req_may_be_done(req, m); 477 + break; 478 + } 476 479 477 480 __drbd_chk_io_error(mdev, false); 478 481 479 482 goto_queue_for_net_read: 483 + 484 + D_ASSERT(!(req->rq_state & RQ_NET_MASK)); 480 485 481 486 /* no point in retrying if there is no good remote data, 482 487 * or we have no connection. */ ··· 770 765 return 0 == drbd_bm_count_bits(mdev, sbnr, ebnr); 771 766 } 772 767 768 + static void maybe_pull_ahead(struct drbd_conf *mdev) 769 + { 770 + int congested = 0; 771 + 772 + /* If I don't even have good local storage, we can not reasonably try 773 + * to pull ahead of the peer. We also need the local reference to make 774 + * sure mdev->act_log is there. 775 + * Note: caller has to make sure that net_conf is there. 776 + */ 777 + if (!get_ldev_if_state(mdev, D_UP_TO_DATE)) 778 + return; 779 + 780 + if (mdev->net_conf->cong_fill && 781 + atomic_read(&mdev->ap_in_flight) >= mdev->net_conf->cong_fill) { 782 + dev_info(DEV, "Congestion-fill threshold reached\n"); 783 + congested = 1; 784 + } 785 + 786 + if (mdev->act_log->used >= mdev->net_conf->cong_extents) { 787 + dev_info(DEV, "Congestion-extents threshold reached\n"); 788 + congested = 1; 789 + } 790 + 791 + if (congested) { 792 + queue_barrier(mdev); /* last barrier, after mirrored writes */ 793 + 794 + if (mdev->net_conf->on_congestion == OC_PULL_AHEAD) 795 + _drbd_set_state(_NS(mdev, conn, C_AHEAD), 0, NULL); 796 + else /*mdev->net_conf->on_congestion == OC_DISCONNECT */ 797 + _drbd_set_state(_NS(mdev, conn, C_DISCONNECTING), 0, NULL); 798 + } 799 + put_ldev(mdev); 800 + } 801 + 773 802 static int drbd_make_request_common(struct drbd_conf *mdev, struct bio *bio, unsigned long start_time) 774 803 { 775 804 const int rw = bio_rw(bio); ··· 1011 972 _req_mod(req, queue_for_send_oos); 1012 973 1013 974 if 
(remote && 1014 - mdev->net_conf->on_congestion != OC_BLOCK && mdev->agreed_pro_version >= 96) { 1015 - int congested = 0; 1016 - 1017 - if (mdev->net_conf->cong_fill && 1018 - atomic_read(&mdev->ap_in_flight) >= mdev->net_conf->cong_fill) { 1019 - dev_info(DEV, "Congestion-fill threshold reached\n"); 1020 - congested = 1; 1021 - } 1022 - 1023 - if (mdev->act_log->used >= mdev->net_conf->cong_extents) { 1024 - dev_info(DEV, "Congestion-extents threshold reached\n"); 1025 - congested = 1; 1026 - } 1027 - 1028 - if (congested) { 1029 - queue_barrier(mdev); /* last barrier, after mirrored writes */ 1030 - 1031 - if (mdev->net_conf->on_congestion == OC_PULL_AHEAD) 1032 - _drbd_set_state(_NS(mdev, conn, C_AHEAD), 0, NULL); 1033 - else /*mdev->net_conf->on_congestion == OC_DISCONNECT */ 1034 - _drbd_set_state(_NS(mdev, conn, C_DISCONNECTING), 0, NULL); 1035 - } 1036 - } 975 + mdev->net_conf->on_congestion != OC_BLOCK && mdev->agreed_pro_version >= 96) 976 + maybe_pull_ahead(mdev); 1037 977 1038 978 spin_unlock_irq(&mdev->req_lock); 1039 979 kfree(b); /* if someone else has beaten us to it... */
+1
drivers/block/floppy.c
··· 671 671 672 672 if (drive == current_reqD) 673 673 drive = current_drive; 674 + __cancel_delayed_work(&fd_timeout); 674 675 675 676 if (drive < 0 || drive >= N_DRIVE) { 676 677 delay = 20UL * HZ;
+158 -88
drivers/block/mtip32xx/mtip32xx.c
··· 37 37 #include <linux/kthread.h> 38 38 #include <../drivers/ata/ahci.h> 39 39 #include <linux/export.h> 40 + #include <linux/debugfs.h> 40 41 #include "mtip32xx.h" 41 42 42 43 #define HW_CMD_SLOT_SZ (MTIP_MAX_COMMAND_SLOTS * 32) ··· 86 85 * allocated in mtip_init(). 87 86 */ 88 87 static int mtip_major; 88 + static struct dentry *dfs_parent; 89 89 90 90 static DEFINE_SPINLOCK(rssd_index_lock); 91 91 static DEFINE_IDA(rssd_index_ida); ··· 2548 2546 } 2549 2547 2550 2548 /* 2551 - * Sysfs register/status dump. 2549 + * Sysfs status dump. 2552 2550 * 2553 2551 * @dev Pointer to the device structure, passed by the kernrel. 2554 2552 * @attr Pointer to the device_attribute structure passed by the kernel. ··· 2557 2555 * return value 2558 2556 * The size, in bytes, of the data copied into buf. 2559 2557 */ 2560 - static ssize_t mtip_hw_show_registers(struct device *dev, 2561 - struct device_attribute *attr, 2562 - char *buf) 2563 - { 2564 - u32 group_allocated; 2565 - struct driver_data *dd = dev_to_disk(dev)->private_data; 2566 - int size = 0; 2567 - int n; 2568 - 2569 - size += sprintf(&buf[size], "Hardware\n--------\n"); 2570 - size += sprintf(&buf[size], "S ACTive : [ 0x"); 2571 - 2572 - for (n = dd->slot_groups-1; n >= 0; n--) 2573 - size += sprintf(&buf[size], "%08X ", 2574 - readl(dd->port->s_active[n])); 2575 - 2576 - size += sprintf(&buf[size], "]\n"); 2577 - size += sprintf(&buf[size], "Command Issue : [ 0x"); 2578 - 2579 - for (n = dd->slot_groups-1; n >= 0; n--) 2580 - size += sprintf(&buf[size], "%08X ", 2581 - readl(dd->port->cmd_issue[n])); 2582 - 2583 - size += sprintf(&buf[size], "]\n"); 2584 - size += sprintf(&buf[size], "Completed : [ 0x"); 2585 - 2586 - for (n = dd->slot_groups-1; n >= 0; n--) 2587 - size += sprintf(&buf[size], "%08X ", 2588 - readl(dd->port->completed[n])); 2589 - 2590 - size += sprintf(&buf[size], "]\n"); 2591 - size += sprintf(&buf[size], "PORT IRQ STAT : [ 0x%08X ]\n", 2592 - readl(dd->port->mmio + PORT_IRQ_STAT)); 2593 - size 
+= sprintf(&buf[size], "HOST IRQ STAT : [ 0x%08X ]\n", 2594 - readl(dd->mmio + HOST_IRQ_STAT)); 2595 - size += sprintf(&buf[size], "\n"); 2596 - 2597 - size += sprintf(&buf[size], "Local\n-----\n"); 2598 - size += sprintf(&buf[size], "Allocated : [ 0x"); 2599 - 2600 - for (n = dd->slot_groups-1; n >= 0; n--) { 2601 - if (sizeof(long) > sizeof(u32)) 2602 - group_allocated = 2603 - dd->port->allocated[n/2] >> (32*(n&1)); 2604 - else 2605 - group_allocated = dd->port->allocated[n]; 2606 - size += sprintf(&buf[size], "%08X ", group_allocated); 2607 - } 2608 - size += sprintf(&buf[size], "]\n"); 2609 - 2610 - size += sprintf(&buf[size], "Commands in Q: [ 0x"); 2611 - 2612 - for (n = dd->slot_groups-1; n >= 0; n--) { 2613 - if (sizeof(long) > sizeof(u32)) 2614 - group_allocated = 2615 - dd->port->cmds_to_issue[n/2] >> (32*(n&1)); 2616 - else 2617 - group_allocated = dd->port->cmds_to_issue[n]; 2618 - size += sprintf(&buf[size], "%08X ", group_allocated); 2619 - } 2620 - size += sprintf(&buf[size], "]\n"); 2621 - 2622 - return size; 2623 - } 2624 - 2625 2558 static ssize_t mtip_hw_show_status(struct device *dev, 2626 2559 struct device_attribute *attr, 2627 2560 char *buf) ··· 2574 2637 return size; 2575 2638 } 2576 2639 2577 - static ssize_t mtip_hw_show_flags(struct device *dev, 2578 - struct device_attribute *attr, 2579 - char *buf) 2640 + static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL); 2641 + 2642 + static ssize_t mtip_hw_read_registers(struct file *f, char __user *ubuf, 2643 + size_t len, loff_t *offset) 2580 2644 { 2581 - struct driver_data *dd = dev_to_disk(dev)->private_data; 2582 - int size = 0; 2645 + struct driver_data *dd = (struct driver_data *)f->private_data; 2646 + char buf[MTIP_DFS_MAX_BUF_SIZE]; 2647 + u32 group_allocated; 2648 + int size = *offset; 2649 + int n; 2583 2650 2584 - size += sprintf(&buf[size], "Flag in port struct : [ %08lX ]\n", 2585 - dd->port->flags); 2586 - size += sprintf(&buf[size], "Flag in dd struct : [ %08lX ]\n", 
2587 - dd->dd_flag); 2651 + if (!len || size) 2652 + return 0; 2588 2653 2589 - return size; 2654 + if (size < 0) 2655 + return -EINVAL; 2656 + 2657 + size += sprintf(&buf[size], "H/ S ACTive : [ 0x"); 2658 + 2659 + for (n = dd->slot_groups-1; n >= 0; n--) 2660 + size += sprintf(&buf[size], "%08X ", 2661 + readl(dd->port->s_active[n])); 2662 + 2663 + size += sprintf(&buf[size], "]\n"); 2664 + size += sprintf(&buf[size], "H/ Command Issue : [ 0x"); 2665 + 2666 + for (n = dd->slot_groups-1; n >= 0; n--) 2667 + size += sprintf(&buf[size], "%08X ", 2668 + readl(dd->port->cmd_issue[n])); 2669 + 2670 + size += sprintf(&buf[size], "]\n"); 2671 + size += sprintf(&buf[size], "H/ Completed : [ 0x"); 2672 + 2673 + for (n = dd->slot_groups-1; n >= 0; n--) 2674 + size += sprintf(&buf[size], "%08X ", 2675 + readl(dd->port->completed[n])); 2676 + 2677 + size += sprintf(&buf[size], "]\n"); 2678 + size += sprintf(&buf[size], "H/ PORT IRQ STAT : [ 0x%08X ]\n", 2679 + readl(dd->port->mmio + PORT_IRQ_STAT)); 2680 + size += sprintf(&buf[size], "H/ HOST IRQ STAT : [ 0x%08X ]\n", 2681 + readl(dd->mmio + HOST_IRQ_STAT)); 2682 + size += sprintf(&buf[size], "\n"); 2683 + 2684 + size += sprintf(&buf[size], "L/ Allocated : [ 0x"); 2685 + 2686 + for (n = dd->slot_groups-1; n >= 0; n--) { 2687 + if (sizeof(long) > sizeof(u32)) 2688 + group_allocated = 2689 + dd->port->allocated[n/2] >> (32*(n&1)); 2690 + else 2691 + group_allocated = dd->port->allocated[n]; 2692 + size += sprintf(&buf[size], "%08X ", group_allocated); 2693 + } 2694 + size += sprintf(&buf[size], "]\n"); 2695 + 2696 + size += sprintf(&buf[size], "L/ Commands in Q : [ 0x"); 2697 + 2698 + for (n = dd->slot_groups-1; n >= 0; n--) { 2699 + if (sizeof(long) > sizeof(u32)) 2700 + group_allocated = 2701 + dd->port->cmds_to_issue[n/2] >> (32*(n&1)); 2702 + else 2703 + group_allocated = dd->port->cmds_to_issue[n]; 2704 + size += sprintf(&buf[size], "%08X ", group_allocated); 2705 + } 2706 + size += sprintf(&buf[size], "]\n"); 2707 + 2708 
+ *offset = size <= len ? size : len; 2709 + size = copy_to_user(ubuf, buf, *offset); 2710 + if (size) 2711 + return -EFAULT; 2712 + 2713 + return *offset; 2590 2714 } 2591 2715 2592 - static DEVICE_ATTR(registers, S_IRUGO, mtip_hw_show_registers, NULL); 2593 - static DEVICE_ATTR(status, S_IRUGO, mtip_hw_show_status, NULL); 2594 - static DEVICE_ATTR(flags, S_IRUGO, mtip_hw_show_flags, NULL); 2716 + static ssize_t mtip_hw_read_flags(struct file *f, char __user *ubuf, 2717 + size_t len, loff_t *offset) 2718 + { 2719 + struct driver_data *dd = (struct driver_data *)f->private_data; 2720 + char buf[MTIP_DFS_MAX_BUF_SIZE]; 2721 + int size = *offset; 2722 + 2723 + if (!len || size) 2724 + return 0; 2725 + 2726 + if (size < 0) 2727 + return -EINVAL; 2728 + 2729 + size += sprintf(&buf[size], "Flag-port : [ %08lX ]\n", 2730 + dd->port->flags); 2731 + size += sprintf(&buf[size], "Flag-dd : [ %08lX ]\n", 2732 + dd->dd_flag); 2733 + 2734 + *offset = size <= len ? size : len; 2735 + size = copy_to_user(ubuf, buf, *offset); 2736 + if (size) 2737 + return -EFAULT; 2738 + 2739 + return *offset; 2740 + } 2741 + 2742 + static const struct file_operations mtip_regs_fops = { 2743 + .owner = THIS_MODULE, 2744 + .open = simple_open, 2745 + .read = mtip_hw_read_registers, 2746 + .llseek = no_llseek, 2747 + }; 2748 + 2749 + static const struct file_operations mtip_flags_fops = { 2750 + .owner = THIS_MODULE, 2751 + .open = simple_open, 2752 + .read = mtip_hw_read_flags, 2753 + .llseek = no_llseek, 2754 + }; 2595 2755 2596 2756 /* 2597 2757 * Create the sysfs related attributes. 
··· 2705 2671 if (!kobj || !dd) 2706 2672 return -EINVAL; 2707 2673 2708 - if (sysfs_create_file(kobj, &dev_attr_registers.attr)) 2709 - dev_warn(&dd->pdev->dev, 2710 - "Error creating 'registers' sysfs entry\n"); 2711 2674 if (sysfs_create_file(kobj, &dev_attr_status.attr)) 2712 2675 dev_warn(&dd->pdev->dev, 2713 2676 "Error creating 'status' sysfs entry\n"); 2714 - if (sysfs_create_file(kobj, &dev_attr_flags.attr)) 2715 - dev_warn(&dd->pdev->dev, 2716 - "Error creating 'flags' sysfs entry\n"); 2717 2677 return 0; 2718 2678 } 2719 2679 ··· 2726 2698 if (!kobj || !dd) 2727 2699 return -EINVAL; 2728 2700 2729 - sysfs_remove_file(kobj, &dev_attr_registers.attr); 2730 2701 sysfs_remove_file(kobj, &dev_attr_status.attr); 2731 - sysfs_remove_file(kobj, &dev_attr_flags.attr); 2732 2702 2733 2703 return 0; 2734 2704 } 2705 + 2706 + static int mtip_hw_debugfs_init(struct driver_data *dd) 2707 + { 2708 + if (!dfs_parent) 2709 + return -1; 2710 + 2711 + dd->dfs_node = debugfs_create_dir(dd->disk->disk_name, dfs_parent); 2712 + if (IS_ERR_OR_NULL(dd->dfs_node)) { 2713 + dev_warn(&dd->pdev->dev, 2714 + "Error creating node %s under debugfs\n", 2715 + dd->disk->disk_name); 2716 + dd->dfs_node = NULL; 2717 + return -1; 2718 + } 2719 + 2720 + debugfs_create_file("flags", S_IRUGO, dd->dfs_node, dd, 2721 + &mtip_flags_fops); 2722 + debugfs_create_file("registers", S_IRUGO, dd->dfs_node, dd, 2723 + &mtip_regs_fops); 2724 + 2725 + return 0; 2726 + } 2727 + 2728 + static void mtip_hw_debugfs_exit(struct driver_data *dd) 2729 + { 2730 + debugfs_remove_recursive(dd->dfs_node); 2731 + } 2732 + 2735 2733 2736 2734 /* 2737 2735 * Perform any init/resume time hardware setup ··· 3784 3730 mtip_hw_sysfs_init(dd, kobj); 3785 3731 kobject_put(kobj); 3786 3732 } 3733 + mtip_hw_debugfs_init(dd); 3787 3734 3788 3735 if (dd->mtip_svc_handler) { 3789 3736 set_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag); ··· 3810 3755 return rv; 3811 3756 3812 3757 kthread_run_error: 3758 + mtip_hw_debugfs_exit(dd); 
3759 + 3813 3760 /* Delete our gendisk. This also removes the device from /dev */ 3814 3761 del_gendisk(dd->disk); 3815 3762 ··· 3862 3805 kobject_put(kobj); 3863 3806 } 3864 3807 } 3808 + mtip_hw_debugfs_exit(dd); 3865 3809 3866 3810 /* 3867 3811 * Delete our gendisk structure. This also removes the device ··· 4210 4152 } 4211 4153 mtip_major = error; 4212 4154 4155 + if (!dfs_parent) { 4156 + dfs_parent = debugfs_create_dir("rssd", NULL); 4157 + if (IS_ERR_OR_NULL(dfs_parent)) { 4158 + printk(KERN_WARNING "Error creating debugfs parent\n"); 4159 + dfs_parent = NULL; 4160 + } 4161 + } 4162 + 4213 4163 /* Register our PCI operations. */ 4214 4164 error = pci_register_driver(&mtip_pci_driver); 4215 - if (error) 4165 + if (error) { 4166 + debugfs_remove(dfs_parent); 4216 4167 unregister_blkdev(mtip_major, MTIP_DRV_NAME); 4168 + } 4217 4169 4218 4170 return error; 4219 4171 } ··· 4240 4172 */ 4241 4173 static void __exit mtip_exit(void) 4242 4174 { 4175 + debugfs_remove_recursive(dfs_parent); 4176 + 4243 4177 /* Release the allocated major block device number. */ 4244 4178 unregister_blkdev(mtip_major, MTIP_DRV_NAME); 4245 4179
+4 -1
drivers/block/mtip32xx/mtip32xx.h
··· 26 26 #include <linux/ata.h> 27 27 #include <linux/interrupt.h> 28 28 #include <linux/genhd.h> 29 - #include <linux/version.h> 30 29 31 30 /* Offset of Subsystem Device ID in pci confoguration space */ 32 31 #define PCI_SUBSYSTEM_DEVICEID 0x2E ··· 109 110 #else 110 111 #define dbg_printk(format, arg...) 111 112 #endif 113 + 114 + #define MTIP_DFS_MAX_BUF_SIZE 1024 112 115 113 116 #define __force_bit2int (unsigned int __force) 114 117 ··· 448 447 unsigned long dd_flag; /* NOTE: use atomic bit operations on this */ 449 448 450 449 struct task_struct *mtip_svc_handler; /* task_struct of svc thd */ 450 + 451 + struct dentry *dfs_node; 451 452 }; 452 453 453 454 #endif
+40
drivers/block/umem.c
··· 513 513 } 514 514 } 515 515 516 + struct mm_plug_cb { 517 + struct blk_plug_cb cb; 518 + struct cardinfo *card; 519 + }; 520 + 521 + static void mm_unplug(struct blk_plug_cb *cb) 522 + { 523 + struct mm_plug_cb *mmcb = container_of(cb, struct mm_plug_cb, cb); 524 + 525 + spin_lock_irq(&mmcb->card->lock); 526 + activate(mmcb->card); 527 + spin_unlock_irq(&mmcb->card->lock); 528 + kfree(mmcb); 529 + } 530 + 531 + static int mm_check_plugged(struct cardinfo *card) 532 + { 533 + struct blk_plug *plug = current->plug; 534 + struct mm_plug_cb *mmcb; 535 + 536 + if (!plug) 537 + return 0; 538 + 539 + list_for_each_entry(mmcb, &plug->cb_list, cb.list) { 540 + if (mmcb->cb.callback == mm_unplug && mmcb->card == card) 541 + return 1; 542 + } 543 + /* Not currently on the callback list */ 544 + mmcb = kmalloc(sizeof(*mmcb), GFP_ATOMIC); 545 + if (!mmcb) 546 + return 0; 547 + 548 + mmcb->card = card; 549 + mmcb->cb.callback = mm_unplug; 550 + list_add(&mmcb->cb.list, &plug->cb_list); 551 + return 1; 552 + } 553 + 516 554 static void mm_make_request(struct request_queue *q, struct bio *bio) 517 555 { 518 556 struct cardinfo *card = q->queuedata; ··· 561 523 *card->biotail = bio; 562 524 bio->bi_next = NULL; 563 525 card->biotail = &bio->bi_next; 526 + if (bio->bi_rw & REQ_SYNC || !mm_check_plugged(card)) 527 + activate(card); 564 528 spin_unlock_irq(&card->lock); 565 529 566 530 return;
+2
drivers/block/xen-blkback/common.h
··· 257 257 break; 258 258 case BLKIF_OP_DISCARD: 259 259 dst->u.discard.flag = src->u.discard.flag; 260 + dst->u.discard.id = src->u.discard.id; 260 261 dst->u.discard.sector_number = src->u.discard.sector_number; 261 262 dst->u.discard.nr_sectors = src->u.discard.nr_sectors; 262 263 break; ··· 288 287 break; 289 288 case BLKIF_OP_DISCARD: 290 289 dst->u.discard.flag = src->u.discard.flag; 290 + dst->u.discard.id = src->u.discard.id; 291 291 dst->u.discard.sector_number = src->u.discard.sector_number; 292 292 dst->u.discard.nr_sectors = src->u.discard.nr_sectors; 293 293 break;
+46 -12
drivers/block/xen-blkfront.c
··· 141 141 return free; 142 142 } 143 143 144 - static void add_id_to_freelist(struct blkfront_info *info, 144 + static int add_id_to_freelist(struct blkfront_info *info, 145 145 unsigned long id) 146 146 { 147 + if (info->shadow[id].req.u.rw.id != id) 148 + return -EINVAL; 149 + if (info->shadow[id].request == NULL) 150 + return -EINVAL; 147 151 info->shadow[id].req.u.rw.id = info->shadow_free; 148 152 info->shadow[id].request = NULL; 149 153 info->shadow_free = id; 154 + return 0; 150 155 } 151 156 157 + static const char *op_name(int op) 158 + { 159 + static const char *const names[] = { 160 + [BLKIF_OP_READ] = "read", 161 + [BLKIF_OP_WRITE] = "write", 162 + [BLKIF_OP_WRITE_BARRIER] = "barrier", 163 + [BLKIF_OP_FLUSH_DISKCACHE] = "flush", 164 + [BLKIF_OP_DISCARD] = "discard" }; 165 + 166 + if (op < 0 || op >= ARRAY_SIZE(names)) 167 + return "unknown"; 168 + 169 + if (!names[op]) 170 + return "reserved"; 171 + 172 + return names[op]; 173 + } 152 174 static int xlbd_reserve_minors(unsigned int minor, unsigned int nr) 153 175 { 154 176 unsigned int end = minor + nr; ··· 768 746 769 747 bret = RING_GET_RESPONSE(&info->ring, i); 770 748 id = bret->id; 749 + /* 750 + * The backend has messed up and given us an id that we would 751 + * never have given to it (we stamp it up to BLK_RING_SIZE - 752 + * look in get_id_from_freelist. 753 + */ 754 + if (id >= BLK_RING_SIZE) { 755 + WARN(1, "%s: response to %s has incorrect id (%ld)\n", 756 + info->gd->disk_name, op_name(bret->operation), id); 757 + /* We can't safely get the 'struct request' as 758 + * the id is busted. 
*/ 759 + continue; 760 + } 771 761 req = info->shadow[id].request; 772 762 773 763 if (bret->operation != BLKIF_OP_DISCARD) 774 764 blkif_completion(&info->shadow[id]); 775 765 776 - add_id_to_freelist(info, id); 766 + if (add_id_to_freelist(info, id)) { 767 + WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n", 768 + info->gd->disk_name, op_name(bret->operation), id); 769 + continue; 770 + } 777 771 778 772 error = (bret->status == BLKIF_RSP_OKAY) ? 0 : -EIO; 779 773 switch (bret->operation) { 780 774 case BLKIF_OP_DISCARD: 781 775 if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) { 782 776 struct request_queue *rq = info->rq; 783 - printk(KERN_WARNING "blkfront: %s: discard op failed\n", 784 - info->gd->disk_name); 777 + printk(KERN_WARNING "blkfront: %s: %s op failed\n", 778 + info->gd->disk_name, op_name(bret->operation)); 785 779 error = -EOPNOTSUPP; 786 780 info->feature_discard = 0; 787 781 info->feature_secdiscard = 0; ··· 809 771 case BLKIF_OP_FLUSH_DISKCACHE: 810 772 case BLKIF_OP_WRITE_BARRIER: 811 773 if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) { 812 - printk(KERN_WARNING "blkfront: %s: write %s op failed\n", 813 - info->flush_op == BLKIF_OP_WRITE_BARRIER ? 814 - "barrier" : "flush disk cache", 815 - info->gd->disk_name); 774 + printk(KERN_WARNING "blkfront: %s: %s op failed\n", 775 + info->gd->disk_name, op_name(bret->operation)); 816 776 error = -EOPNOTSUPP; 817 777 } 818 778 if (unlikely(bret->status == BLKIF_RSP_ERROR && 819 779 info->shadow[id].req.u.rw.nr_segments == 0)) { 820 - printk(KERN_WARNING "blkfront: %s: empty write %s op failed\n", 821 - info->flush_op == BLKIF_OP_WRITE_BARRIER ? 822 - "barrier" : "flush disk cache", 823 - info->gd->disk_name); 780 + printk(KERN_WARNING "blkfront: %s: empty %s op failed\n", 781 + info->gd->disk_name, op_name(bret->operation)); 824 782 error = -EOPNOTSUPP; 825 783 } 826 784 if (unlikely(error)) {
+13 -15
drivers/clk/clk.c
··· 1067 1067 1068 1068 old_parent = clk->parent; 1069 1069 1070 - /* find index of new parent clock using cached parent ptrs */ 1071 - if (clk->parents) 1072 - for (i = 0; i < clk->num_parents; i++) 1073 - if (clk->parents[i] == parent) 1074 - break; 1075 - else 1070 + if (!clk->parents) 1076 1071 clk->parents = kzalloc((sizeof(struct clk*) * clk->num_parents), 1077 1072 GFP_KERNEL); 1078 1073 1079 1074 /* 1080 - * find index of new parent clock using string name comparison 1081 - * also try to cache the parent to avoid future calls to __clk_lookup 1075 + * find index of new parent clock using cached parent ptrs, 1076 + * or if not yet cached, use string name comparison and cache 1077 + * them now to avoid future calls to __clk_lookup. 1082 1078 */ 1083 - if (i == clk->num_parents) 1084 - for (i = 0; i < clk->num_parents; i++) 1085 - if (!strcmp(clk->parent_names[i], parent->name)) { 1086 - if (clk->parents) 1087 - clk->parents[i] = __clk_lookup(parent->name); 1088 - break; 1089 - } 1079 + for (i = 0; i < clk->num_parents; i++) { 1080 + if (clk->parents && clk->parents[i] == parent) 1081 + break; 1082 + else if (!strcmp(clk->parent_names[i], parent->name)) { 1083 + if (clk->parents) 1084 + clk->parents[i] = __clk_lookup(parent->name); 1085 + break; 1086 + } 1087 + } 1090 1088 1091 1089 if (i == clk->num_parents) { 1092 1090 pr_debug("%s: clock %s is not a possible parent of clock %s\n",
+24 -3
drivers/gpu/drm/drm_edid.c
··· 1039 1039 return true; 1040 1040 } 1041 1041 1042 + static bool valid_inferred_mode(const struct drm_connector *connector, 1043 + const struct drm_display_mode *mode) 1044 + { 1045 + struct drm_display_mode *m; 1046 + bool ok = false; 1047 + 1048 + list_for_each_entry(m, &connector->probed_modes, head) { 1049 + if (mode->hdisplay == m->hdisplay && 1050 + mode->vdisplay == m->vdisplay && 1051 + drm_mode_vrefresh(mode) == drm_mode_vrefresh(m)) 1052 + return false; /* duplicated */ 1053 + if (mode->hdisplay <= m->hdisplay && 1054 + mode->vdisplay <= m->vdisplay) 1055 + ok = true; 1056 + } 1057 + return ok; 1058 + } 1059 + 1042 1060 static int 1043 1061 drm_dmt_modes_for_range(struct drm_connector *connector, struct edid *edid, 1044 1062 struct detailed_timing *timing) ··· 1066 1048 struct drm_device *dev = connector->dev; 1067 1049 1068 1050 for (i = 0; i < drm_num_dmt_modes; i++) { 1069 - if (mode_in_range(drm_dmt_modes + i, edid, timing)) { 1051 + if (mode_in_range(drm_dmt_modes + i, edid, timing) && 1052 + valid_inferred_mode(connector, drm_dmt_modes + i)) { 1070 1053 newmode = drm_mode_duplicate(dev, &drm_dmt_modes[i]); 1071 1054 if (newmode) { 1072 1055 drm_mode_probed_add(connector, newmode); ··· 1107 1088 return modes; 1108 1089 1109 1090 fixup_mode_1366x768(newmode); 1110 - if (!mode_in_range(newmode, edid, timing)) { 1091 + if (!mode_in_range(newmode, edid, timing) || 1092 + !valid_inferred_mode(connector, newmode)) { 1111 1093 drm_mode_destroy(dev, newmode); 1112 1094 continue; 1113 1095 } ··· 1136 1116 return modes; 1137 1117 1138 1118 fixup_mode_1366x768(newmode); 1139 - if (!mode_in_range(newmode, edid, timing)) { 1119 + if (!mode_in_range(newmode, edid, timing) || 1120 + !valid_inferred_mode(connector, newmode)) { 1140 1121 drm_mode_destroy(dev, newmode); 1141 1122 continue; 1142 1123 }
+30 -7
drivers/gpu/drm/i915/i915_dma.c
··· 1401 1401 } 1402 1402 } 1403 1403 1404 + static void i915_kick_out_firmware_fb(struct drm_i915_private *dev_priv) 1405 + { 1406 + struct apertures_struct *ap; 1407 + struct pci_dev *pdev = dev_priv->dev->pdev; 1408 + bool primary; 1409 + 1410 + ap = alloc_apertures(1); 1411 + if (!ap) 1412 + return; 1413 + 1414 + ap->ranges[0].base = dev_priv->dev->agp->base; 1415 + ap->ranges[0].size = 1416 + dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT; 1417 + primary = 1418 + pdev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_ROM_SHADOW; 1419 + 1420 + remove_conflicting_framebuffers(ap, "inteldrmfb", primary); 1421 + 1422 + kfree(ap); 1423 + } 1424 + 1404 1425 /** 1405 1426 * i915_driver_load - setup chip and create an initial config 1406 1427 * @dev: DRM device ··· 1467 1446 goto free_priv; 1468 1447 } 1469 1448 1449 + dev_priv->mm.gtt = intel_gtt_get(); 1450 + if (!dev_priv->mm.gtt) { 1451 + DRM_ERROR("Failed to initialize GTT\n"); 1452 + ret = -ENODEV; 1453 + goto put_bridge; 1454 + } 1455 + 1456 + i915_kick_out_firmware_fb(dev_priv); 1457 + 1470 1458 pci_set_master(dev->pdev); 1471 1459 1472 1460 /* overlay on gen2 is broken and can't address above 1G */ ··· 1499 1469 DRM_ERROR("failed to map registers\n"); 1500 1470 ret = -EIO; 1501 1471 goto put_bridge; 1502 - } 1503 - 1504 - dev_priv->mm.gtt = intel_gtt_get(); 1505 - if (!dev_priv->mm.gtt) { 1506 - DRM_ERROR("Failed to initialize GTT\n"); 1507 - ret = -ENODEV; 1508 - goto out_rmmap; 1509 1472 } 1510 1473 1511 1474 aperture_size = dev_priv->mm.gtt->gtt_mappable_entries << PAGE_SHIFT;
+11 -2
drivers/gpu/drm/radeon/radeon_gart.c
··· 289 289 rdev->vm_manager.enabled = false; 290 290 291 291 /* mark first vm as always in use, it's the system one */ 292 + /* allocate enough for 2 full VM pts */ 292 293 r = radeon_sa_bo_manager_init(rdev, &rdev->vm_manager.sa_manager, 293 - rdev->vm_manager.max_pfn * 8, 294 + rdev->vm_manager.max_pfn * 8 * 2, 294 295 RADEON_GEM_DOMAIN_VRAM); 295 296 if (r) { 296 297 dev_err(rdev->dev, "failed to allocate vm bo (%dKB)\n", ··· 634 633 mutex_init(&vm->mutex); 635 634 INIT_LIST_HEAD(&vm->list); 636 635 INIT_LIST_HEAD(&vm->va); 637 - vm->last_pfn = 0; 636 + /* SI requires equal sized PTs for all VMs, so always set 637 + * last_pfn to max_pfn. cayman allows variable sized 638 + * pts so we can grow then as needed. Once we switch 639 + * to two level pts we can unify this again. 640 + */ 641 + if (rdev->family >= CHIP_TAHITI) 642 + vm->last_pfn = rdev->vm_manager.max_pfn; 643 + else 644 + vm->last_pfn = 0; 638 645 /* map the ib pool buffer at 0 in virtual address space, set 639 646 * read only 640 647 */
+6 -4
drivers/gpu/drm/radeon/radeon_gem.c
··· 292 292 int radeon_gem_busy_ioctl(struct drm_device *dev, void *data, 293 293 struct drm_file *filp) 294 294 { 295 + struct radeon_device *rdev = dev->dev_private; 295 296 struct drm_radeon_gem_busy *args = data; 296 297 struct drm_gem_object *gobj; 297 298 struct radeon_bo *robj; ··· 318 317 break; 319 318 } 320 319 drm_gem_object_unreference_unlocked(gobj); 321 - r = radeon_gem_handle_lockup(robj->rdev, r); 320 + r = radeon_gem_handle_lockup(rdev, r); 322 321 return r; 323 322 } 324 323 325 324 int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data, 326 325 struct drm_file *filp) 327 326 { 327 + struct radeon_device *rdev = dev->dev_private; 328 328 struct drm_radeon_gem_wait_idle *args = data; 329 329 struct drm_gem_object *gobj; 330 330 struct radeon_bo *robj; ··· 338 336 robj = gem_to_radeon_bo(gobj); 339 337 r = radeon_bo_wait(robj, NULL, false); 340 338 /* callback hw specific functions if any */ 341 - if (robj->rdev->asic->ioctl_wait_idle) 342 - robj->rdev->asic->ioctl_wait_idle(robj->rdev, robj); 339 + if (rdev->asic->ioctl_wait_idle) 340 + robj->rdev->asic->ioctl_wait_idle(rdev, robj); 343 341 drm_gem_object_unreference_unlocked(gobj); 344 - r = radeon_gem_handle_lockup(robj->rdev, r); 342 + r = radeon_gem_handle_lockup(rdev, r); 345 343 return r; 346 344 } 347 345
+2 -2
drivers/gpu/drm/radeon/si.c
··· 2365 2365 WREG32(0x15DC, 0); 2366 2366 2367 2367 /* empty context1-15 */ 2368 - /* FIXME start with 1G, once using 2 level pt switch to full 2368 + /* FIXME start with 4G, once using 2 level pt switch to full 2369 2369 * vm size space 2370 2370 */ 2371 2371 /* set vm size, must be a multiple of 4 */ 2372 2372 WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0); 2373 - WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, (1 << 30) / RADEON_GPU_PAGE_SIZE); 2373 + WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn); 2374 2374 for (i = 1; i < 16; i++) { 2375 2375 if (i < 8) 2376 2376 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+3 -2
drivers/input/joystick/as5011.c
··· 282 282 283 283 error = request_threaded_irq(as5011->button_irq, 284 284 NULL, as5011_button_interrupt, 285 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 285 + IRQF_TRIGGER_RISING | 286 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 286 287 "as5011_button", as5011); 287 288 if (error < 0) { 288 289 dev_err(&client->dev, ··· 297 296 298 297 error = request_threaded_irq(as5011->axis_irq, NULL, 299 298 as5011_axis_interrupt, 300 - plat_data->axis_irqflags, 299 + plat_data->axis_irqflags | IRQF_ONESHOT, 301 300 "as5011_joystick", as5011); 302 301 if (error) { 303 302 dev_err(&client->dev,
+2 -1
drivers/input/keyboard/mcs_touchkey.c
··· 178 178 } 179 179 180 180 error = request_threaded_irq(client->irq, NULL, mcs_touchkey_interrupt, 181 - IRQF_TRIGGER_FALLING, client->dev.driver->name, data); 181 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 182 + client->dev.driver->name, data); 182 183 if (error) { 183 184 dev_err(&client->dev, "Failed to register interrupt\n"); 184 185 goto err_free_mem;
+1 -1
drivers/input/keyboard/mpr121_touchkey.c
··· 248 248 249 249 error = request_threaded_irq(client->irq, NULL, 250 250 mpr_touchkey_interrupt, 251 - IRQF_TRIGGER_FALLING, 251 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 252 252 client->dev.driver->name, mpr121); 253 253 if (error) { 254 254 dev_err(&client->dev, "Failed to register interrupt\n");
+2 -1
drivers/input/keyboard/qt1070.c
··· 201 201 msleep(QT1070_RESET_TIME); 202 202 203 203 err = request_threaded_irq(client->irq, NULL, qt1070_interrupt, 204 - IRQF_TRIGGER_NONE, client->dev.driver->name, data); 204 + IRQF_TRIGGER_NONE | IRQF_ONESHOT, 205 + client->dev.driver->name, data); 205 206 if (err) { 206 207 dev_err(&client->dev, "fail to request irq\n"); 207 208 goto err_free_mem;
+2 -1
drivers/input/keyboard/tca6416-keypad.c
··· 278 278 279 279 error = request_threaded_irq(chip->irqnum, NULL, 280 280 tca6416_keys_isr, 281 - IRQF_TRIGGER_FALLING, 281 + IRQF_TRIGGER_FALLING | 282 + IRQF_ONESHOT, 282 283 "tca6416-keypad", chip); 283 284 if (error) { 284 285 dev_dbg(&client->dev,
+1 -1
drivers/input/keyboard/tca8418_keypad.c
··· 360 360 client->irq = gpio_to_irq(client->irq); 361 361 362 362 error = request_threaded_irq(client->irq, NULL, tca8418_irq_handler, 363 - IRQF_TRIGGER_FALLING, 363 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 364 364 client->name, keypad_data); 365 365 if (error) { 366 366 dev_dbg(&client->dev,
+4 -4
drivers/input/keyboard/tnetv107x-keypad.c
··· 227 227 goto error_clk; 228 228 } 229 229 230 - error = request_threaded_irq(kp->irq_press, NULL, keypad_irq, 0, 231 - dev_name(dev), kp); 230 + error = request_threaded_irq(kp->irq_press, NULL, keypad_irq, 231 + IRQF_ONESHOT, dev_name(dev), kp); 232 232 if (error < 0) { 233 233 dev_err(kp->dev, "Could not allocate keypad press key irq\n"); 234 234 goto error_irq_press; 235 235 } 236 236 237 - error = request_threaded_irq(kp->irq_release, NULL, keypad_irq, 0, 238 - dev_name(dev), kp); 237 + error = request_threaded_irq(kp->irq_release, NULL, keypad_irq, 238 + IRQF_ONESHOT, dev_name(dev), kp); 239 239 if (error < 0) { 240 240 dev_err(kp->dev, "Could not allocate keypad release key irq\n"); 241 241 goto error_irq_release;
+5 -3
drivers/input/misc/ad714x.c
··· 972 972 struct ad714x_platform_data *plat_data = dev->platform_data; 973 973 struct ad714x_chip *ad714x; 974 974 void *drv_mem; 975 + unsigned long irqflags; 975 976 976 977 struct ad714x_button_drv *bt_drv; 977 978 struct ad714x_slider_drv *sd_drv; ··· 1163 1162 alloc_idx++; 1164 1163 } 1165 1164 1165 + irqflags = plat_data->irqflags ?: IRQF_TRIGGER_FALLING; 1166 + irqflags |= IRQF_ONESHOT; 1167 + 1166 1168 error = request_threaded_irq(ad714x->irq, NULL, ad714x_interrupt_thread, 1167 - plat_data->irqflags ? 1168 - plat_data->irqflags : IRQF_TRIGGER_FALLING, 1169 - "ad714x_captouch", ad714x); 1169 + irqflags, "ad714x_captouch", ad714x); 1170 1170 if (error) { 1171 1171 dev_err(dev, "can't allocate irq %d\n", ad714x->irq); 1172 1172 goto err_unreg_dev;
+2 -1
drivers/input/misc/dm355evm_keys.c
··· 213 213 /* REVISIT: flush the event queue? */ 214 214 215 215 status = request_threaded_irq(keys->irq, NULL, dm355evm_keys_irq, 216 - IRQF_TRIGGER_FALLING, dev_name(&pdev->dev), keys); 216 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 217 + dev_name(&pdev->dev), keys); 217 218 if (status < 0) 218 219 goto fail2; 219 220
+4 -2
drivers/input/tablet/wacom_sys.c
··· 216 216 217 217 rep_data[0] = 12; 218 218 result = wacom_get_report(intf, WAC_HID_FEATURE_REPORT, 219 - rep_data[0], &rep_data, 2, 219 + rep_data[0], rep_data, 2, 220 220 WAC_MSG_RETRIES); 221 221 222 222 if (result >= 0 && rep_data[1] > 2) ··· 401 401 break; 402 402 403 403 case HID_USAGE_CONTACTMAX: 404 - wacom_retrieve_report_data(intf, features); 404 + /* leave touch_max as is if predefined */ 405 + if (!features->touch_max) 406 + wacom_retrieve_report_data(intf, features); 405 407 i++; 406 408 break; 407 409 }
+1 -1
drivers/input/touchscreen/ad7879.c
··· 597 597 AD7879_TMR(ts->pen_down_acc_interval); 598 598 599 599 err = request_threaded_irq(ts->irq, NULL, ad7879_irq, 600 - IRQF_TRIGGER_FALLING, 600 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 601 601 dev_name(dev), ts); 602 602 if (err) { 603 603 dev_err(dev, "irq %d busy?\n", ts->irq);
+2 -1
drivers/input/touchscreen/atmel_mxt_ts.c
··· 1149 1149 goto err_free_object; 1150 1150 1151 1151 error = request_threaded_irq(client->irq, NULL, mxt_interrupt, 1152 - pdata->irqflags, client->dev.driver->name, data); 1152 + pdata->irqflags | IRQF_ONESHOT, 1153 + client->dev.driver->name, data); 1153 1154 if (error) { 1154 1155 dev_err(&client->dev, "Failed to register interrupt\n"); 1155 1156 goto err_free_object;
+2 -1
drivers/input/touchscreen/bu21013_ts.c
··· 509 509 input_set_drvdata(in_dev, bu21013_data); 510 510 511 511 error = request_threaded_irq(pdata->irq, NULL, bu21013_gpio_irq, 512 - IRQF_TRIGGER_FALLING | IRQF_SHARED, 512 + IRQF_TRIGGER_FALLING | IRQF_SHARED | 513 + IRQF_ONESHOT, 513 514 DRIVER_TP, bu21013_data); 514 515 if (error) { 515 516 dev_err(&client->dev, "request irq %d failed\n", pdata->irq);
+2 -1
drivers/input/touchscreen/cy8ctmg110_ts.c
··· 251 251 } 252 252 253 253 err = request_threaded_irq(client->irq, NULL, cy8ctmg110_irq_thread, 254 - IRQF_TRIGGER_RISING, "touch_reset_key", ts); 254 + IRQF_TRIGGER_RISING | IRQF_ONESHOT, 255 + "touch_reset_key", ts); 255 256 if (err < 0) { 256 257 dev_err(&client->dev, 257 258 "irq %d busy? error %d\n", client->irq, err);
+1 -1
drivers/input/touchscreen/intel-mid-touch.c
··· 620 620 MRST_PRESSURE_MIN, MRST_PRESSURE_MAX, 0, 0); 621 621 622 622 err = request_threaded_irq(tsdev->irq, NULL, mrstouch_pendet_irq, 623 - 0, "mrstouch", tsdev); 623 + IRQF_ONESHOT, "mrstouch", tsdev); 624 624 if (err) { 625 625 dev_err(tsdev->dev, "unable to allocate irq\n"); 626 626 goto err_free_mem;
+1 -1
drivers/input/touchscreen/pixcir_i2c_ts.c
··· 165 165 input_set_drvdata(input, tsdata); 166 166 167 167 error = request_threaded_irq(client->irq, NULL, pixcir_ts_isr, 168 - IRQF_TRIGGER_FALLING, 168 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 169 169 client->name, tsdata); 170 170 if (error) { 171 171 dev_err(&client->dev, "Unable to request touchscreen IRQ.\n");
+1 -1
drivers/input/touchscreen/tnetv107x-ts.c
··· 297 297 goto error_clk; 298 298 } 299 299 300 - error = request_threaded_irq(ts->tsc_irq, NULL, tsc_irq, 0, 300 + error = request_threaded_irq(ts->tsc_irq, NULL, tsc_irq, IRQF_ONESHOT, 301 301 dev_name(dev), ts); 302 302 if (error < 0) { 303 303 dev_err(ts->dev, "Could not allocate ts irq\n");
+2 -1
drivers/input/touchscreen/tsc2005.c
··· 650 650 tsc2005_stop_scan(ts); 651 651 652 652 error = request_threaded_irq(spi->irq, NULL, tsc2005_irq_thread, 653 - IRQF_TRIGGER_RISING, "tsc2005", ts); 653 + IRQF_TRIGGER_RISING | IRQF_ONESHOT, 654 + "tsc2005", ts); 654 655 if (error) { 655 656 dev_err(&spi->dev, "Failed to request irq, err: %d\n", error); 656 657 goto err_free_mem;
+15 -1
drivers/leds/ledtrig-heartbeat.c
··· 21 21 #include <linux/reboot.h> 22 22 #include "leds.h" 23 23 24 + static int panic_heartbeats; 25 + 24 26 struct heartbeat_trig_data { 25 27 unsigned int phase; 26 28 unsigned int period; ··· 35 33 struct heartbeat_trig_data *heartbeat_data = led_cdev->trigger_data; 36 34 unsigned long brightness = LED_OFF; 37 35 unsigned long delay = 0; 36 + 37 + if (unlikely(panic_heartbeats)) { 38 + led_set_brightness(led_cdev, LED_OFF); 39 + return; 40 + } 38 41 39 42 /* acts like an actual heart beat -- ie thump-thump-pause... */ 40 43 switch (heartbeat_data->phase) { ··· 118 111 return NOTIFY_DONE; 119 112 } 120 113 114 + static int heartbeat_panic_notifier(struct notifier_block *nb, 115 + unsigned long code, void *unused) 116 + { 117 + panic_heartbeats = 1; 118 + return NOTIFY_DONE; 119 + } 120 + 121 121 static struct notifier_block heartbeat_reboot_nb = { 122 122 .notifier_call = heartbeat_reboot_notifier, 123 123 }; 124 124 125 125 static struct notifier_block heartbeat_panic_nb = { 126 - .notifier_call = heartbeat_reboot_notifier, 126 + .notifier_call = heartbeat_panic_notifier, 127 127 }; 128 128 129 129 static int __init heartbeat_trig_init(void)
+7
drivers/md/dm-thin.c
··· 2292 2292 if (r) 2293 2293 return r; 2294 2294 2295 + r = dm_pool_commit_metadata(pool->pmd); 2296 + if (r) { 2297 + DMERR("%s: dm_pool_commit_metadata() failed, error = %d", 2298 + __func__, r); 2299 + return r; 2300 + } 2301 + 2295 2302 r = dm_pool_reserve_metadata_snap(pool->pmd); 2296 2303 if (r) 2297 2304 DMWARN("reserve_metadata_snap message failed.");
+5 -3
drivers/md/md.c
··· 5784 5784 super_types[mddev->major_version]. 5785 5785 validate_super(mddev, rdev); 5786 5786 if ((info->state & (1<<MD_DISK_SYNC)) && 5787 - (!test_bit(In_sync, &rdev->flags) || 5788 - rdev->raid_disk != info->raid_disk)) { 5787 + rdev->raid_disk != info->raid_disk) { 5789 5788 /* This was a hot-add request, but events doesn't 5790 5789 * match, so reject it. 5791 5790 */ ··· 6750 6751 thread->tsk = kthread_run(md_thread, thread, 6751 6752 "%s_%s", 6752 6753 mdname(thread->mddev), 6753 - name ?: mddev->pers->name); 6754 + name); 6754 6755 if (IS_ERR(thread->tsk)) { 6755 6756 kfree(thread); 6756 6757 return NULL; ··· 7297 7298 int skipped = 0; 7298 7299 struct md_rdev *rdev; 7299 7300 char *desc; 7301 + struct blk_plug plug; 7300 7302 7301 7303 /* just incase thread restarts... */ 7302 7304 if (test_bit(MD_RECOVERY_DONE, &mddev->recovery)) ··· 7447 7447 } 7448 7448 mddev->curr_resync_completed = j; 7449 7449 7450 + blk_start_plug(&plug); 7450 7451 while (j < max_sectors) { 7451 7452 sector_t sectors; 7452 7453 ··· 7553 7552 * this also signals 'finished resyncing' to md_stop 7554 7553 */ 7555 7554 out: 7555 + blk_finish_plug(&plug); 7556 7556 wait_event(mddev->recovery_wait, !atomic_read(&mddev->recovery_active)); 7557 7557 7558 7558 /* tell personality that we are finished */
+2 -1
drivers/md/multipath.c
··· 474 474 } 475 475 476 476 { 477 - mddev->thread = md_register_thread(multipathd, mddev, NULL); 477 + mddev->thread = md_register_thread(multipathd, mddev, 478 + "multipath"); 478 479 if (!mddev->thread) { 479 480 printk(KERN_ERR "multipath: couldn't allocate thread" 480 481 " for %s\n", mdname(mddev));
+31 -23
drivers/md/persistent-data/dm-space-map-checker.c
··· 8 8 9 9 #include <linux/device-mapper.h> 10 10 #include <linux/export.h> 11 + #include <linux/vmalloc.h> 11 12 12 13 #ifdef CONFIG_DM_DEBUG_SPACE_MAPS 13 14 ··· 90 89 91 90 ca->nr = nr_blocks; 92 91 ca->nr_free = nr_blocks; 93 - ca->counts = kzalloc(sizeof(*ca->counts) * nr_blocks, GFP_KERNEL); 94 - if (!ca->counts) 95 - return -ENOMEM; 92 + 93 + if (!nr_blocks) 94 + ca->counts = NULL; 95 + else { 96 + ca->counts = vzalloc(sizeof(*ca->counts) * nr_blocks); 97 + if (!ca->counts) 98 + return -ENOMEM; 99 + } 96 100 97 101 return 0; 102 + } 103 + 104 + static void ca_destroy(struct count_array *ca) 105 + { 106 + vfree(ca->counts); 98 107 } 99 108 100 109 static int ca_load(struct count_array *ca, struct dm_space_map *sm) ··· 137 126 static int ca_extend(struct count_array *ca, dm_block_t extra_blocks) 138 127 { 139 128 dm_block_t nr_blocks = ca->nr + extra_blocks; 140 - uint32_t *counts = kzalloc(sizeof(*counts) * nr_blocks, GFP_KERNEL); 129 + uint32_t *counts = vzalloc(sizeof(*counts) * nr_blocks); 141 130 if (!counts) 142 131 return -ENOMEM; 143 132 144 - memcpy(counts, ca->counts, sizeof(*counts) * ca->nr); 145 - kfree(ca->counts); 133 + if (ca->counts) { 134 + memcpy(counts, ca->counts, sizeof(*counts) * ca->nr); 135 + ca_destroy(ca); 136 + } 146 137 ca->nr = nr_blocks; 147 138 ca->nr_free += extra_blocks; 148 139 ca->counts = counts; ··· 162 149 old->nr_free = new->nr_free; 163 150 memcpy(old->counts, new->counts, sizeof(*old->counts) * old->nr); 164 151 return 0; 165 - } 166 - 167 - static void ca_destroy(struct count_array *ca) 168 - { 169 - kfree(ca->counts); 170 152 } 171 153 172 154 /*----------------------------------------------------------------*/ ··· 351 343 int r; 352 344 struct sm_checker *smc; 353 345 354 - if (!sm) 355 - return NULL; 346 + if (IS_ERR_OR_NULL(sm)) 347 + return ERR_PTR(-EINVAL); 356 348 357 349 smc = kmalloc(sizeof(*smc), GFP_KERNEL); 358 350 if (!smc) 359 - return NULL; 351 + return ERR_PTR(-ENOMEM); 360 352 361 353 
memcpy(&smc->sm, &ops_, sizeof(smc->sm)); 362 354 r = ca_create(&smc->old_counts, sm); 363 355 if (r) { 364 356 kfree(smc); 365 - return NULL; 357 + return ERR_PTR(r); 366 358 } 367 359 368 360 r = ca_create(&smc->counts, sm); 369 361 if (r) { 370 362 ca_destroy(&smc->old_counts); 371 363 kfree(smc); 372 - return NULL; 364 + return ERR_PTR(r); 373 365 } 374 366 375 367 smc->real_sm = sm; ··· 379 371 ca_destroy(&smc->counts); 380 372 ca_destroy(&smc->old_counts); 381 373 kfree(smc); 382 - return NULL; 374 + return ERR_PTR(r); 383 375 } 384 376 385 377 r = ca_commit(&smc->old_counts, &smc->counts); ··· 387 379 ca_destroy(&smc->counts); 388 380 ca_destroy(&smc->old_counts); 389 381 kfree(smc); 390 - return NULL; 382 + return ERR_PTR(r); 391 383 } 392 384 393 385 return &smc->sm; ··· 399 391 int r; 400 392 struct sm_checker *smc; 401 393 402 - if (!sm) 403 - return NULL; 394 + if (IS_ERR_OR_NULL(sm)) 395 + return ERR_PTR(-EINVAL); 404 396 405 397 smc = kmalloc(sizeof(*smc), GFP_KERNEL); 406 398 if (!smc) 407 - return NULL; 399 + return ERR_PTR(-ENOMEM); 408 400 409 401 memcpy(&smc->sm, &ops_, sizeof(smc->sm)); 410 402 r = ca_create(&smc->old_counts, sm); 411 403 if (r) { 412 404 kfree(smc); 413 - return NULL; 405 + return ERR_PTR(r); 414 406 } 415 407 416 408 r = ca_create(&smc->counts, sm); 417 409 if (r) { 418 410 ca_destroy(&smc->old_counts); 419 411 kfree(smc); 420 - return NULL; 412 + return ERR_PTR(r); 421 413 } 422 414 423 415 smc->real_sm = sm;
+10 -1
drivers/md/persistent-data/dm-space-map-disk.c
··· 290 290 dm_block_t nr_blocks) 291 291 { 292 292 struct dm_space_map *sm = dm_sm_disk_create_real(tm, nr_blocks); 293 - return dm_sm_checker_create_fresh(sm); 293 + struct dm_space_map *smc; 294 + 295 + if (IS_ERR_OR_NULL(sm)) 296 + return sm; 297 + 298 + smc = dm_sm_checker_create_fresh(sm); 299 + if (IS_ERR(smc)) 300 + dm_sm_destroy(sm); 301 + 302 + return smc; 294 303 } 295 304 EXPORT_SYMBOL_GPL(dm_sm_disk_create); 296 305
+9 -2
drivers/md/persistent-data/dm-transaction-manager.c
··· 138 138 139 139 void dm_tm_destroy(struct dm_transaction_manager *tm) 140 140 { 141 + if (!tm->is_clone) 142 + wipe_shadow_table(tm); 143 + 141 144 kfree(tm); 142 145 } 143 146 EXPORT_SYMBOL_GPL(dm_tm_destroy); ··· 347 344 } 348 345 349 346 *sm = dm_sm_checker_create(inner); 350 - if (!*sm) 347 + if (IS_ERR(*sm)) { 348 + r = PTR_ERR(*sm); 351 349 goto bad2; 350 + } 352 351 353 352 } else { 354 353 r = dm_bm_write_lock(dm_tm_get_bm(*tm), sb_location, ··· 369 364 } 370 365 371 366 *sm = dm_sm_checker_create(inner); 372 - if (!*sm) 367 + if (IS_ERR(*sm)) { 368 + r = PTR_ERR(*sm); 373 369 goto bad2; 370 + } 374 371 } 375 372 376 373 return 0;
+5 -8
drivers/md/raid1.c
··· 517 517 int bad_sectors; 518 518 519 519 int disk = start_disk + i; 520 - if (disk >= conf->raid_disks) 521 - disk -= conf->raid_disks; 520 + if (disk >= conf->raid_disks * 2) 521 + disk -= conf->raid_disks * 2; 522 522 523 523 rdev = rcu_dereference(conf->mirrors[disk].rdev); 524 524 if (r1_bio->bios[disk] == IO_BLOCKED ··· 883 883 const unsigned long do_sync = (bio->bi_rw & REQ_SYNC); 884 884 const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA)); 885 885 struct md_rdev *blocked_rdev; 886 - int plugged; 887 886 int first_clone; 888 887 int sectors_handled; 889 888 int max_sectors; ··· 1033 1034 * the bad blocks. Each set of writes gets it's own r1bio 1034 1035 * with a set of bios attached. 1035 1036 */ 1036 - plugged = mddev_check_plugged(mddev); 1037 1037 1038 1038 disks = conf->raid_disks * 2; 1039 1039 retry_write: ··· 1189 1191 bio_list_add(&conf->pending_bio_list, mbio); 1190 1192 conf->pending_count++; 1191 1193 spin_unlock_irqrestore(&conf->device_lock, flags); 1194 + if (!mddev_check_plugged(mddev)) 1195 + md_wakeup_thread(mddev->thread); 1192 1196 } 1193 1197 /* Mustn't call r1_bio_write_done before this next test, 1194 1198 * as it could result in the bio being freed. ··· 1213 1213 1214 1214 /* In case raid1d snuck in to freeze_array */ 1215 1215 wake_up(&conf->wait_barrier); 1216 - 1217 - if (do_sync || !bitmap || !plugged) 1218 - md_wakeup_thread(mddev->thread); 1219 1216 } 1220 1217 1221 1218 static void status(struct seq_file *seq, struct mddev *mddev) ··· 2618 2621 goto abort; 2619 2622 } 2620 2623 err = -ENOMEM; 2621 - conf->thread = md_register_thread(raid1d, mddev, NULL); 2624 + conf->thread = md_register_thread(raid1d, mddev, "raid1"); 2622 2625 if (!conf->thread) { 2623 2626 printk(KERN_ERR 2624 2627 "md/raid1:%s: couldn't allocate thread\n",
+16 -10
drivers/md/raid10.c
··· 1039 1039 const unsigned long do_fua = (bio->bi_rw & REQ_FUA); 1040 1040 unsigned long flags; 1041 1041 struct md_rdev *blocked_rdev; 1042 - int plugged; 1043 1042 int sectors_handled; 1044 1043 int max_sectors; 1045 1044 int sectors; ··· 1238 1239 * of r10_bios is recored in bio->bi_phys_segments just as with 1239 1240 * the read case. 1240 1241 */ 1241 - plugged = mddev_check_plugged(mddev); 1242 1242 1243 1243 r10_bio->read_slot = -1; /* make sure repl_bio gets freed */ 1244 1244 raid10_find_phys(conf, r10_bio); ··· 1394 1396 bio_list_add(&conf->pending_bio_list, mbio); 1395 1397 conf->pending_count++; 1396 1398 spin_unlock_irqrestore(&conf->device_lock, flags); 1399 + if (!mddev_check_plugged(mddev)) 1400 + md_wakeup_thread(mddev->thread); 1397 1401 1398 1402 if (!r10_bio->devs[i].repl_bio) 1399 1403 continue; ··· 1423 1423 bio_list_add(&conf->pending_bio_list, mbio); 1424 1424 conf->pending_count++; 1425 1425 spin_unlock_irqrestore(&conf->device_lock, flags); 1426 + if (!mddev_check_plugged(mddev)) 1427 + md_wakeup_thread(mddev->thread); 1426 1428 } 1427 1429 1428 1430 /* Don't remove the bias on 'remaining' (one_write_done) until ··· 1450 1448 1451 1449 /* In case raid10d snuck in to freeze_array */ 1452 1450 wake_up(&conf->wait_barrier); 1453 - 1454 - if (do_sync || !mddev->bitmap || !plugged) 1455 - md_wakeup_thread(mddev->thread); 1456 1451 } 1457 1452 1458 1453 static void status(struct seq_file *seq, struct mddev *mddev) ··· 2309 2310 if (r10_sync_page_io(rdev, 2310 2311 r10_bio->devs[sl].addr + 2311 2312 sect, 2312 - s<<9, conf->tmppage, WRITE) 2313 + s, conf->tmppage, WRITE) 2313 2314 == 0) { 2314 2315 /* Well, this device is dead */ 2315 2316 printk(KERN_NOTICE ··· 2348 2349 switch (r10_sync_page_io(rdev, 2349 2350 r10_bio->devs[sl].addr + 2350 2351 sect, 2351 - s<<9, conf->tmppage, 2352 + s, conf->tmppage, 2352 2353 READ)) { 2353 2354 case 0: 2354 2355 /* Well, this device is dead */ ··· 2511 2512 slot = r10_bio->read_slot; 2512 2513 
printk_ratelimited( 2513 2514 KERN_ERR 2514 - "md/raid10:%s: %s: redirecting" 2515 + "md/raid10:%s: %s: redirecting " 2515 2516 "sector %llu to another mirror\n", 2516 2517 mdname(mddev), 2517 2518 bdevname(rdev->bdev, b), ··· 2660 2661 blk_start_plug(&plug); 2661 2662 for (;;) { 2662 2663 2663 - flush_pending_writes(conf); 2664 + if (atomic_read(&mddev->plug_cnt) == 0) 2665 + flush_pending_writes(conf); 2664 2666 2665 2667 spin_lock_irqsave(&conf->device_lock, flags); 2666 2668 if (list_empty(head)) { ··· 2890 2890 /* want to reconstruct this device */ 2891 2891 rb2 = r10_bio; 2892 2892 sect = raid10_find_virt(conf, sector_nr, i); 2893 + if (sect >= mddev->resync_max_sectors) { 2894 + /* last stripe is not complete - don't 2895 + * try to recover this sector. 2896 + */ 2897 + continue; 2898 + } 2893 2899 /* Unless we are doing a full sync, or a replacement 2894 2900 * we only need to recover the block if it is set in 2895 2901 * the bitmap ··· 3427 3421 spin_lock_init(&conf->resync_lock); 3428 3422 init_waitqueue_head(&conf->wait_barrier); 3429 3423 3430 - conf->thread = md_register_thread(raid10d, mddev, NULL); 3424 + conf->thread = md_register_thread(raid10d, mddev, "raid10"); 3431 3425 if (!conf->thread) 3432 3426 goto out; 3433 3427
+47 -20
drivers/md/raid5.c
··· 196 196 BUG_ON(!list_empty(&sh->lru)); 197 197 BUG_ON(atomic_read(&conf->active_stripes)==0); 198 198 if (test_bit(STRIPE_HANDLE, &sh->state)) { 199 - if (test_bit(STRIPE_DELAYED, &sh->state)) 199 + if (test_bit(STRIPE_DELAYED, &sh->state) && 200 + !test_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) 200 201 list_add_tail(&sh->lru, &conf->delayed_list); 201 202 else if (test_bit(STRIPE_BIT_DELAY, &sh->state) && 202 203 sh->bm_seq - conf->seq_write > 0) 203 204 list_add_tail(&sh->lru, &conf->bitmap_list); 204 205 else { 206 + clear_bit(STRIPE_DELAYED, &sh->state); 205 207 clear_bit(STRIPE_BIT_DELAY, &sh->state); 206 208 list_add_tail(&sh->lru, &conf->handle_list); 207 209 } ··· 608 606 * a chance*/ 609 607 md_check_recovery(conf->mddev); 610 608 } 609 + /* 610 + * Because md_wait_for_blocked_rdev 611 + * will dec nr_pending, we must 612 + * increment it first. 613 + */ 614 + atomic_inc(&rdev->nr_pending); 611 615 md_wait_for_blocked_rdev(rdev, conf->mddev); 612 616 } else { 613 617 /* Acknowledged bad block - skip the write */ ··· 1745 1737 } else { 1746 1738 const char *bdn = bdevname(rdev->bdev, b); 1747 1739 int retry = 0; 1740 + int set_bad = 0; 1748 1741 1749 1742 clear_bit(R5_UPTODATE, &sh->dev[i].flags); 1750 1743 atomic_inc(&rdev->read_errors); ··· 1757 1748 mdname(conf->mddev), 1758 1749 (unsigned long long)s, 1759 1750 bdn); 1760 - else if (conf->mddev->degraded >= conf->max_degraded) 1751 + else if (conf->mddev->degraded >= conf->max_degraded) { 1752 + set_bad = 1; 1761 1753 printk_ratelimited( 1762 1754 KERN_WARNING 1763 1755 "md/raid:%s: read error not correctable " ··· 1766 1756 mdname(conf->mddev), 1767 1757 (unsigned long long)s, 1768 1758 bdn); 1769 - else if (test_bit(R5_ReWrite, &sh->dev[i].flags)) 1759 + } else if (test_bit(R5_ReWrite, &sh->dev[i].flags)) { 1770 1760 /* Oh, no!!! */ 1761 + set_bad = 1; 1771 1762 printk_ratelimited( 1772 1763 KERN_WARNING 1773 1764 "md/raid:%s: read error NOT corrected!! " ··· 1776 1765 mdname(conf->mddev), 1777 1766 (unsigned long long)s, 1778 1767 bdn); 1779 - else if (atomic_read(&rdev->read_errors) 1768 + } else if (atomic_read(&rdev->read_errors) 1780 1769 > conf->max_nr_stripes) 1781 1770 printk(KERN_WARNING 1782 1771 "md/raid:%s: Too many read errors, failing device %s.\n", ··· 1788 1777 else { 1789 1778 clear_bit(R5_ReadError, &sh->dev[i].flags); 1790 1779 clear_bit(R5_ReWrite, &sh->dev[i].flags); 1791 - md_error(conf->mddev, rdev); 1780 + if (!(set_bad 1781 + && test_bit(In_sync, &rdev->flags) 1782 + && rdev_set_badblocks( 1783 + rdev, sh->sector, STRIPE_SECTORS, 0))) 1784 + md_error(conf->mddev, rdev); 1792 1785 } 1793 1786 } 1794 1787 rdev_dec_pending(rdev, conf->mddev); ··· 3597 3582 3598 3583 finish: 3599 3584 /* wait for this device to become unblocked */ 3600 - if (conf->mddev->external && unlikely(s.blocked_rdev)) 3601 - md_wait_for_blocked_rdev(s.blocked_rdev, conf->mddev); 3585 + if (unlikely(s.blocked_rdev)) { 3586 + if (conf->mddev->external) 3587 + md_wait_for_blocked_rdev(s.blocked_rdev, 3588 + conf->mddev); 3589 + else 3590 + /* Internal metadata will immediately 3591 + * be written by raid5d, so we don't 3592 + * need to wait here. 3593 + */ 3594 + rdev_dec_pending(s.blocked_rdev, 3595 + conf->mddev); 3596 + } 3602 3597 3603 3598 if (s.handle_bad_blocks) 3604 3599 for (i = disks; i--; ) { ··· 3906 3881 raid_bio->bi_next = (void*)rdev; 3907 3882 align_bi->bi_bdev = rdev->bdev; 3908 3883 align_bi->bi_flags &= ~(1 << BIO_SEG_VALID); 3909 - /* No reshape active, so we can trust rdev->data_offset */ 3910 - align_bi->bi_sector += rdev->data_offset; 3911 3884 3912 3885 if (!bio_fits_rdev(align_bi) || 3913 3886 is_badblock(rdev, align_bi->bi_sector, align_bi->bi_size>>9, ··· 3915 3892 rdev_dec_pending(rdev, mddev); 3916 3893 return 0; 3917 3894 } 3895 + 3896 + /* No reshape active, so we can trust rdev->data_offset */ 3897 + align_bi->bi_sector += rdev->data_offset; 3918 3898 3919 3899 spin_lock_irq(&conf->device_lock); 3920 3900 wait_event_lock_irq(conf->wait_for_stripe, ··· 3997 3971 struct stripe_head *sh; 3998 3972 const int rw = bio_data_dir(bi); 3999 3973 int remaining; 4000 - int plugged; 4001 3974 4002 3975 if (unlikely(bi->bi_rw & REQ_FLUSH)) { 4003 3976 md_flush_request(mddev, bi); ··· 4015 3990 bi->bi_next = NULL; 4016 3991 bi->bi_phys_segments = 1; /* over-loaded to count active stripes */ 4017 3992 4018 - plugged = mddev_check_plugged(mddev); 4019 3993 for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) { 4020 3994 DEFINE_WAIT(w); 4021 3995 int previous; ··· 4116 4092 if ((bi->bi_rw & REQ_SYNC) && 4117 4093 !test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state)) 4118 4094 atomic_inc(&conf->preread_active_stripes); 4095 + mddev_check_plugged(mddev); 4119 4096 release_stripe(sh); 4120 4097 } else { 4121 4098 /* cannot get stripe for read-ahead, just give-up */ ··· 4124 4099 finish_wait(&conf->wait_for_overlap, &w); 4125 4100 break; 4126 4101 } 4127 - 4128 4102 } 4129 - if (!plugged) 4130 - md_wakeup_thread(mddev->thread); 4131 4103 4132 4104 spin_lock_irq(&conf->device_lock); 4133 4105 remaining = raid5_dec_bi_phys_segments(bi); ··· 4845 4823 int raid_disk, memory, max_disks; 4846 4824 struct md_rdev *rdev; 4847 4825 struct disk_info *disk; 4826 + char pers_name[6]; 4848 4827 4849 4828 if (mddev->new_level != 5 4850 4829 && mddev->new_level != 4 ··· 4969 4946 printk(KERN_INFO "md/raid:%s: allocated %dkB\n", 4970 4947 mdname(mddev), memory); 4971 4948 4972 - conf->thread = md_register_thread(raid5d, mddev, NULL); 4949 + sprintf(pers_name, "raid%d", mddev->new_level); 4950 + conf->thread = md_register_thread(raid5d, mddev, pers_name); 4973 4951 if (!conf->thread) { 4974 4952 printk(KERN_ERR 4975 4953 "md/raid:%s: couldn't allocate thread.\n", ··· 5489 5465 if (rdev->saved_raid_disk >= 0 && 5490 5466 rdev->saved_raid_disk >= first && 5491 5467 conf->disks[rdev->saved_raid_disk].rdev == NULL) 5492 - disk = rdev->saved_raid_disk; 5493 - else 5494 - disk = first; 5495 - for ( ; disk <= last ; disk++) { 5468 + first = rdev->saved_raid_disk; 5469 + 5470 + for (disk = first; disk <= last; disk++) { 5496 5471 p = conf->disks + disk; 5497 5472 if (p->rdev == NULL) { 5498 5473 clear_bit(In_sync, &rdev->flags); ··· 5500 5477 if (rdev->saved_raid_disk != disk) 5501 5478 conf->fullsync = 1; 5502 5479 rcu_assign_pointer(p->rdev, rdev); 5503 - break; 5480 + goto out; 5504 5481 } 5482 + } 5483 + for (disk = first; disk <= last; disk++) { 5484 + p = conf->disks + disk; 5505 5485 if (test_bit(WantReplacement, &p->rdev->flags) && 5506 5486 p->replacement == NULL) { 5507 5487 clear_bit(In_sync, &rdev->flags); ··· 5516 5490 break; 5517 5491 } 5518 5492 } 5493 + out: 5519 5494 print_raid5_conf(conf); 5520 5495 return err; 5521 5496 }
+1 -1
drivers/mtd/nand/cafe_nand.c
··· 102 102 static int cafe_device_ready(struct mtd_info *mtd) 103 103 { 104 104 struct cafe_priv *cafe = mtd->priv; 105 - int result = !!(cafe_readl(cafe, NAND_STATUS) | 0x40000000); 105 + int result = !!(cafe_readl(cafe, NAND_STATUS) & 0x40000000); 106 106 uint32_t irqs = cafe_readl(cafe, NAND_IRQ); 107 107 108 108 cafe_writel(cafe, irqs, NAND_IRQ);
+7
drivers/mtd/nand/nand_base.c
··· 3501 3501 /* propagate ecc info to mtd_info */ 3502 3502 mtd->ecclayout = chip->ecc.layout; 3503 3503 mtd->ecc_strength = chip->ecc.strength; 3504 + /* 3505 + * Initialize bitflip_threshold to its default prior scan_bbt() call. 3506 + * scan_bbt() might invoke mtd_read(), thus bitflip_threshold must be 3507 + * properly set. 3508 + */ 3509 + if (!mtd->bitflip_threshold) 3510 + mtd->bitflip_threshold = mtd->ecc_strength; 3504 3511 3505 3512 /* Check, if we should skip the bad block table scan */ 3506 3513 if (chip->options & NAND_SKIP_BBTSCAN)
+4 -6
drivers/net/ethernet/freescale/gianfar.c
··· 1804 1804 if (priv->mode == MQ_MG_MODE) { 1805 1805 baddr = &regs->txic0; 1806 1806 for_each_set_bit(i, &tx_mask, priv->num_tx_queues) { 1807 - if (likely(priv->tx_queue[i]->txcoalescing)) { 1808 - gfar_write(baddr + i, 0); 1807 + gfar_write(baddr + i, 0); 1808 + if (likely(priv->tx_queue[i]->txcoalescing)) 1809 1809 gfar_write(baddr + i, priv->tx_queue[i]->txic); 1810 - } 1811 1810 } 1812 1811 1813 1812 baddr = &regs->rxic0; 1814 1813 for_each_set_bit(i, &rx_mask, priv->num_rx_queues) { 1815 - if (likely(priv->rx_queue[i]->rxcoalescing)) { 1816 - gfar_write(baddr + i, 0); 1814 + gfar_write(baddr + i, 0); 1815 + if (likely(priv->rx_queue[i]->rxcoalescing)) 1817 1816 gfar_write(baddr + i, priv->rx_queue[i]->rxic); 1818 - } 1819 1817 } 1820 1818 } 1821 1819 }
+1
drivers/net/ethernet/intel/e1000e/defines.h
··· 103 103 #define E1000_RXD_ERR_SEQ 0x04 /* Sequence Error */ 104 104 #define E1000_RXD_ERR_CXE 0x10 /* Carrier Extension Error */ 105 105 #define E1000_RXD_ERR_TCPE 0x20 /* TCP/UDP Checksum Error */ 106 + #define E1000_RXD_ERR_IPE 0x40 /* IP Checksum Error */ 106 107 #define E1000_RXD_ERR_RXE 0x80 /* Rx Data Error */ 107 108 #define E1000_RXD_SPC_VLAN_MASK 0x0FFF /* VLAN ID is in lower 12 bits */ 108 109
+14 -61
drivers/net/ethernet/intel/e1000e/netdev.c
··· 496 496 * @sk_buff: socket buffer with received data 497 497 **/ 498 498 static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err, 499 - __le16 csum, struct sk_buff *skb) 499 + struct sk_buff *skb) 500 500 { 501 501 u16 status = (u16)status_err; 502 502 u8 errors = (u8)(status_err >> 24); ··· 511 511 if (status & E1000_RXD_STAT_IXSM) 512 512 return; 513 513 514 - /* TCP/UDP checksum error bit is set */ 515 - if (errors & E1000_RXD_ERR_TCPE) { 514 + /* TCP/UDP checksum error bit or IP checksum error bit is set */ 515 + if (errors & (E1000_RXD_ERR_TCPE | E1000_RXD_ERR_IPE)) { 516 516 /* let the stack verify checksum errors */ 517 517 adapter->hw_csum_err++; 518 518 return; ··· 523 523 return; 524 524 525 525 /* It must be a TCP or UDP packet with a valid checksum */ 526 - if (status & E1000_RXD_STAT_TCPCS) { 527 - /* TCP checksum is good */ 528 - skb->ip_summed = CHECKSUM_UNNECESSARY; 529 - } else { 530 - /* 531 - * IP fragment with UDP payload 532 - * Hardware complements the payload checksum, so we undo it 533 - * and then put the value in host order for further stack use. 534 - */ 535 - __sum16 sum = (__force __sum16)swab16((__force u16)csum); 536 - skb->csum = csum_unfold(~sum); 537 - skb->ip_summed = CHECKSUM_COMPLETE; 538 - } 526 + skb->ip_summed = CHECKSUM_UNNECESSARY; 539 527 adapter->hw_csum_good++; 540 528 } 541 529 ··· 942 954 skb_put(skb, length); 943 955 944 956 /* Receive Checksum Offload */ 945 - e1000_rx_checksum(adapter, staterr, 946 - rx_desc->wb.lower.hi_dword.csum_ip.csum, skb); 957 + e1000_rx_checksum(adapter, staterr, skb); 947 958 948 959 e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb); 949 960 ··· 1328 1341 total_rx_bytes += skb->len; 1329 1342 total_rx_packets++; 1330 1343 1331 - e1000_rx_checksum(adapter, staterr, 1332 - rx_desc->wb.lower.hi_dword.csum_ip.csum, skb); 1344 + e1000_rx_checksum(adapter, staterr, skb); 1333 1345 1334 1346 e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb); 1335 1347 ··· 1498 1512 } 1499 1513 } 1500 1514 1501 - /* Receive Checksum Offload XXX recompute due to CRC strip? */ 1502 - e1000_rx_checksum(adapter, staterr, 1503 - rx_desc->wb.lower.hi_dword.csum_ip.csum, skb); 1515 + /* Receive Checksum Offload */ 1516 + e1000_rx_checksum(adapter, staterr, skb); 1504 1517 1505 1518 e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb); 1506 1519 ··· 3083 3098 3084 3099 /* Enable Receive Checksum Offload for TCP and UDP */ 3085 3100 rxcsum = er32(RXCSUM); 3086 - if (adapter->netdev->features & NETIF_F_RXCSUM) { 3101 + if (adapter->netdev->features & NETIF_F_RXCSUM) 3087 3102 rxcsum |= E1000_RXCSUM_TUOFL; 3088 - 3089 - /* 3090 - * IPv4 payload checksum for UDP fragments must be 3091 - * used in conjunction with packet-split. 3092 - */ 3093 - if (adapter->rx_ps_pages) 3094 - rxcsum |= E1000_RXCSUM_IPPCSE; 3095 - } else { 3103 + else 3096 3104 rxcsum &= ~E1000_RXCSUM_TUOFL; 3097 - /* no need to clear IPPCSE as it defaults to 0 */ 3098 - } 3099 3105 ew32(RXCSUM, rxcsum); 3100 3106 3101 3107 if (adapter->hw.mac.type == e1000_pch2lan) { ··· 5217 5241 int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN; 5218 5242 5219 5243 /* Jumbo frame support */ 5220 - if (max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) { 5221 - if (!(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) { 5222 - e_err("Jumbo Frames not supported.\n"); 5223 - return -EINVAL; 5224 - } 5225 - 5226 - /* 5227 - * IP payload checksum (enabled with jumbos/packet-split when 5228 - * Rx checksum is enabled) and generation of RSS hash is 5229 - * mutually exclusive in the hardware. 5230 - */ 5231 - if ((netdev->features & NETIF_F_RXCSUM) && 5232 - (netdev->features & NETIF_F_RXHASH)) { 5233 - e_err("Jumbo frames cannot be enabled when both receive checksum offload and receive hashing are enabled. Disable one of the receive offload features before enabling jumbos.\n"); 5234 - return -EINVAL; 5235 - } 5244 + if ((max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) && 5245 + !(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) { 5246 + e_err("Jumbo Frames not supported.\n"); 5247 + return -EINVAL; 5236 5248 } 5237 5249 5238 5250 /* Supported frame sizes */ ··· 5993 6029 NETIF_F_RXCSUM | NETIF_F_RXHASH | NETIF_F_RXFCS | 5994 6030 NETIF_F_RXALL))) 5995 6031 return 0; 5996 - 5997 - /* 5998 - * IP payload checksum (enabled with jumbos/packet-split when Rx 5999 - * checksum is enabled) and generation of RSS hash is mutually 6000 - * exclusive in the hardware. 6001 - */ 6002 - if (adapter->rx_ps_pages && 6003 - (features & NETIF_F_RXCSUM) && (features & NETIF_F_RXHASH)) { 6004 - e_err("Enabling both receive checksum offload and receive hashing is not possible with jumbo frames. Disable jumbos or enable only one of the receive offload features.\n"); 6005 - return -EINVAL; 6006 - } 6007 6032 6008 6033 if (changed & NETIF_F_RXFCS) { 6009 6034 if (features & NETIF_F_RXFCS) {
+19 -12
drivers/net/ethernet/intel/igbvf/ethtool.c
··· 357 357 struct igbvf_adapter *adapter = netdev_priv(netdev); 358 358 struct e1000_hw *hw = &adapter->hw; 359 359 360 - if ((ec->rx_coalesce_usecs > IGBVF_MAX_ITR_USECS) || 361 - ((ec->rx_coalesce_usecs > 3) && 362 - (ec->rx_coalesce_usecs < IGBVF_MIN_ITR_USECS)) || 363 - (ec->rx_coalesce_usecs == 2)) 364 - return -EINVAL; 365 - 366 - /* convert to rate of irq's per second */ 367 - if (ec->rx_coalesce_usecs && ec->rx_coalesce_usecs <= 3) { 368 - adapter->current_itr = IGBVF_START_ITR; 369 - adapter->requested_itr = ec->rx_coalesce_usecs; 370 - } else { 360 + if ((ec->rx_coalesce_usecs >= IGBVF_MIN_ITR_USECS) && 361 + (ec->rx_coalesce_usecs <= IGBVF_MAX_ITR_USECS)) { 371 362 adapter->current_itr = ec->rx_coalesce_usecs << 2; 372 363 adapter->requested_itr = 1000000000 / 373 364 (adapter->current_itr * 256); 374 - } 365 + } else if ((ec->rx_coalesce_usecs == 3) || 366 + (ec->rx_coalesce_usecs == 2)) { 367 + adapter->current_itr = IGBVF_START_ITR; 368 + adapter->requested_itr = ec->rx_coalesce_usecs; 369 + } else if (ec->rx_coalesce_usecs == 0) { 370 + /* 371 + * The user's desire is to turn off interrupt throttling 372 + * altogether, but due to HW limitations, we can't do that. 373 + * Instead we set a very small value in EITR, which would 374 + * allow ~967k interrupts per second, but allow the adapter's 375 + * internal clocking to still function properly. 376 + */ 377 + adapter->current_itr = 4; 378 + adapter->requested_itr = 1000000000 / 379 + (adapter->current_itr * 256); 380 + } else 381 + return -EINVAL; 375 382 376 383 writel(adapter->current_itr, 377 384 hw->hw_addr + adapter->rx_ring->itr_register);
+1
drivers/net/ethernet/ti/davinci_cpdma.c
··· 15 15 #include <linux/kernel.h> 16 16 #include <linux/spinlock.h> 17 17 #include <linux/device.h> 18 + #include <linux/module.h> 18 19 #include <linux/slab.h> 19 20 #include <linux/err.h> 20 21 #include <linux/dma-mapping.h>
+4
drivers/net/usb/qmi_wwan.c
··· 197 197 static int qmi_wwan_cdc_wdm_manage_power(struct usb_interface *intf, int on) 198 198 { 199 199 struct usbnet *dev = usb_get_intfdata(intf); 200 + 201 + /* can be called while disconnecting */ 202 + if (!dev) 203 + return 0; 200 204 return qmi_wwan_manage_power(dev, on); 201 205 } 202 206
+1
drivers/net/wireless/ath/ath.h
··· 143 143 u32 keymax; 144 144 DECLARE_BITMAP(keymap, ATH_KEYMAX); 145 145 DECLARE_BITMAP(tkip_keymap, ATH_KEYMAX); 146 + DECLARE_BITMAP(ccmp_keymap, ATH_KEYMAX); 146 147 enum ath_crypt_caps crypt_caps; 147 148 148 149 unsigned int clockrate;
+1 -1
drivers/net/wireless/ath/ath9k/hw.c
··· 622 622 623 623 if (NR_CPUS > 1 && ah->config.serialize_regmode == SER_REG_MODE_AUTO) { 624 624 if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCI || 625 - ((AR_SREV_9160(ah) || AR_SREV_9280(ah)) && 625 + ((AR_SREV_9160(ah) || AR_SREV_9280(ah) || AR_SREV_9287(ah)) && 626 626 !ah->is_pciexpress)) { 627 627 ah->config.serialize_regmode = 628 628 SER_REG_MODE_ON;
+4 -3
drivers/net/wireless/ath/ath9k/recv.c
··· 695 695 __skb_unlink(skb, &rx_edma->rx_fifo); 696 696 list_add_tail(&bf->list, &sc->rx.rxbuf); 697 697 ath_rx_edma_buf_link(sc, qtype); 698 - } else { 699 - bf = NULL; 700 698 } 699 + 700 + bf = NULL; 701 701 } 702 702 703 703 *dest = bf; ··· 822 822 * descriptor does contain a valid key index. This has been observed 823 823 * mostly with CCMP encryption. 824 824 */ 825 - if (rx_stats->rs_keyix == ATH9K_RXKEYIX_INVALID) 825 + if (rx_stats->rs_keyix == ATH9K_RXKEYIX_INVALID || 826 + !test_bit(rx_stats->rs_keyix, common->ccmp_keymap)) 826 827 rx_stats->rs_status &= ~ATH9K_RXERR_KEYMISS; 827 828 828 829 if (!rx_stats->rs_datalen) {
+4
drivers/net/wireless/ath/key.c
··· 556 556 return -EIO; 557 557 558 558 set_bit(idx, common->keymap); 559 + if (key->cipher == WLAN_CIPHER_SUITE_CCMP) 560 + set_bit(idx, common->ccmp_keymap); 561 + 559 562 if (key->cipher == WLAN_CIPHER_SUITE_TKIP) { 560 563 set_bit(idx + 64, common->keymap); 561 564 set_bit(idx, common->tkip_keymap); ··· 585 582 return; 586 583 587 584 clear_bit(key->hw_key_idx, common->keymap); 585 + clear_bit(key->hw_key_idx, common->ccmp_keymap); 588 586 if (key->cipher != WLAN_CIPHER_SUITE_TKIP) 589 587 return; 590 588
+12
drivers/net/wireless/iwlwifi/iwl-mac80211.c
··· 796 796 switch (op) { 797 797 case ADD: 798 798 ret = iwlagn_mac_sta_add(hw, vif, sta); 799 + if (ret) 800 + break; 801 + /* 802 + * Clear the in-progress flag, the AP station entry was added 803 + * but we'll initialize LQ only when we've associated (which 804 + * would also clear the in-progress flag). This is necessary 805 + * in case we never initialize LQ because association fails. 806 + */ 807 + spin_lock_bh(&priv->sta_lock); 808 + priv->stations[iwl_sta_id(sta)].used &= 809 + ~IWL_STA_UCODE_INPROGRESS; 810 + spin_unlock_bh(&priv->sta_lock); 799 811 break; 800 812 case REMOVE: 801 813 ret = iwlagn_mac_sta_remove(hw, vif, sta);
+3 -2
drivers/net/wireless/mwifiex/11n_rxreorder.c
··· 256 256 else 257 257 last_seq = priv->rx_seq[tid]; 258 258 259 - if (last_seq >= new_node->start_win) 259 + if (last_seq != MWIFIEX_DEF_11N_RX_SEQ_NUM && 260 + last_seq >= new_node->start_win) 260 261 new_node->start_win = last_seq + 1; 261 262 262 263 new_node->win_size = win_size; ··· 597 596 spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags); 598 597 599 598 INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr); 600 - memset(priv->rx_seq, 0, sizeof(priv->rx_seq)); 599 + mwifiex_reset_11n_rx_seq_num(priv); 601 600 }
+7
drivers/net/wireless/mwifiex/11n_rxreorder.h
··· 37 37 38 38 #define ADDBA_RSP_STATUS_ACCEPT 0 39 39 40 + #define MWIFIEX_DEF_11N_RX_SEQ_NUM 0xffff 41 + 42 + static inline void mwifiex_reset_11n_rx_seq_num(struct mwifiex_private *priv) 43 + { 44 + memset(priv->rx_seq, 0xff, sizeof(priv->rx_seq)); 45 + } 46 + 40 47 int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *, 41 48 u16 seqNum, 42 49 u16 tid, u8 *ta,
+1
drivers/net/wireless/mwifiex/ie.c
··· 213 213 /* save assoc resp ie index after auto-indexing */ 214 214 *assoc_idx = *((u16 *)pos); 215 215 216 + kfree(ap_custom_ie); 216 217 return ret; 217 218 } 218 219
+3 -3
drivers/net/wireless/mwifiex/sdio.c
··· 978 978 dev_dbg(adapter->dev, "info: --- Rx: Event ---\n"); 979 979 adapter->event_cause = *(u32 *) skb->data; 980 980 981 - skb_pull(skb, MWIFIEX_EVENT_HEADER_LEN); 982 - 983 981 if ((skb->len > 0) && (skb->len < MAX_EVENT_SIZE)) 984 - memcpy(adapter->event_body, skb->data, skb->len); 982 + memcpy(adapter->event_body, 983 + skb->data + MWIFIEX_EVENT_HEADER_LEN, 984 + skb->len); 985 985 986 986 /* event cause has been saved to adapter->event_cause */ 987 987 adapter->event_received = true;
+4 -5
drivers/net/wireless/mwifiex/sta_event.c
··· 406 406 break; 407 407 408 408 case EVENT_UAP_STA_ASSOC: 409 - skb_pull(adapter->event_skb, MWIFIEX_UAP_EVENT_EXTRA_HEADER); 410 409 memset(&sinfo, 0, sizeof(sinfo)); 411 - event = (struct mwifiex_assoc_event *)adapter->event_skb->data; 410 + event = (struct mwifiex_assoc_event *) 411 + (adapter->event_body + MWIFIEX_UAP_EVENT_EXTRA_HEADER); 412 412 if (le16_to_cpu(event->type) == TLV_TYPE_UAP_MGMT_FRAME) { 413 413 len = -1; 414 414 ··· 433 433 GFP_KERNEL); 434 434 break; 435 435 case EVENT_UAP_STA_DEAUTH: 436 - skb_pull(adapter->event_skb, MWIFIEX_UAP_EVENT_EXTRA_HEADER); 437 - cfg80211_del_sta(priv->netdev, adapter->event_skb->data, 438 - GFP_KERNEL); 436 + cfg80211_del_sta(priv->netdev, adapter->event_body + 437 + MWIFIEX_UAP_EVENT_EXTRA_HEADER, GFP_KERNEL); 439 438 break; 440 439 case EVENT_UAP_BSS_IDLE: 441 440 priv->media_connected = false;
+20 -8
drivers/net/wireless/mwifiex/usb.c
··· 49 49 struct device *dev = adapter->dev; 50 50 u32 recv_type; 51 51 __le32 tmp; 52 + int ret; 52 53 53 54 if (adapter->hs_activated) 54 55 mwifiex_process_hs_config(adapter); ··· 70 69 case MWIFIEX_USB_TYPE_CMD: 71 70 if (skb->len > MWIFIEX_SIZE_OF_CMD_BUFFER) { 72 71 dev_err(dev, "CMD: skb->len too large\n"); 73 - return -1; 72 + ret = -1; 73 + goto exit_restore_skb; 74 74 } else if (!adapter->curr_cmd) { 75 75 dev_dbg(dev, "CMD: no curr_cmd\n"); 76 76 if (adapter->ps_state == PS_STATE_SLEEP_CFM) { 77 77 mwifiex_process_sleep_confirm_resp( 78 78 adapter, skb->data, 79 79 skb->len); 80 - return 0; 80 + ret = 0; 81 + goto exit_restore_skb; 81 82 } 82 - return -1; 83 + ret = -1; 84 + goto exit_restore_skb; 83 85 } 84 86 85 87 adapter->curr_cmd->resp_skb = skb; ··· 91 87 case MWIFIEX_USB_TYPE_EVENT: 92 88 if (skb->len < sizeof(u32)) { 93 89 dev_err(dev, "EVENT: skb->len too small\n"); 94 - return -1; 90 + ret = -1; 91 + goto exit_restore_skb; 95 92 } 96 93 skb_copy_from_linear_data(skb, &tmp, sizeof(u32)); 97 94 adapter->event_cause = le32_to_cpu(tmp); 98 - skb_pull(skb, sizeof(u32)); 99 95 dev_dbg(dev, "event_cause %#x\n", adapter->event_cause); 100 96 101 97 if (skb->len > MAX_EVENT_SIZE) { 102 98 dev_err(dev, "EVENT: event body too large\n"); 103 - return -1; 99 + ret = -1; 100 + goto exit_restore_skb; 104 101 } 105 102 106 - skb_copy_from_linear_data(skb, adapter->event_body, 107 - skb->len); 103 + memcpy(adapter->event_body, skb->data + 104 + MWIFIEX_EVENT_HEADER_LEN, skb->len); 105 + 108 106 adapter->event_received = true; 109 107 adapter->event_skb = skb; 110 108 break; ··· 130 124 } 131 125 132 126 return -EINPROGRESS; 127 + 128 + exit_restore_skb: 129 + /* The buffer will be reused for further cmds/events */ 130 + skb_push(skb, INTF_HEADER_LEN); 131 + 132 + return ret; 133 133 } 134 134 135 135 static void mwifiex_usb_rx_complete(struct urb *urb)
+3
drivers/net/wireless/mwifiex/wmm.c
··· 404 404 priv->add_ba_param.tx_win_size = MWIFIEX_AMPDU_DEF_TXWINSIZE; 405 405 priv->add_ba_param.rx_win_size = MWIFIEX_AMPDU_DEF_RXWINSIZE; 406 406 407 + mwifiex_reset_11n_rx_seq_num(priv); 408 + 407 409 atomic_set(&priv->wmm.tx_pkts_queued, 0); 408 410 atomic_set(&priv->wmm.highest_queued_prio, HIGH_PRIO_TID); 409 411 } ··· 1223 1221 1224 1222 if (!ptr->is_11n_enabled || 1225 1223 mwifiex_is_ba_stream_setup(priv, ptr, tid) || 1224 + priv->wps.session_enable || 1226 1225 ((priv->sec_info.wpa_enabled || 1227 1226 priv->sec_info.wpa2_enabled) && 1228 1227 !priv->wpa_is_gtk_set)) {
+3
drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
··· 301 301 {RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/ 302 302 {RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/ 303 303 {RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/ 304 + {RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/ 304 305 {RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/ 305 306 {RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/ 306 307 {RTL_USB_DEVICE(0x0eb0, 0x9071, rtl92cu_hal_cfg)}, /*NO Brand - Etop*/ 308 + {RTL_USB_DEVICE(0x4856, 0x0091, rtl92cu_hal_cfg)}, /*NetweeN - Feixun*/ 307 309 /* HP - Lite-On ,8188CUS Slim Combo */ 308 310 {RTL_USB_DEVICE(0x103c, 0x1629, rtl92cu_hal_cfg)}, 309 311 {RTL_USB_DEVICE(0x13d3, 0x3357, rtl92cu_hal_cfg)}, /* AzureWave */ ··· 348 346 {RTL_USB_DEVICE(0x07b8, 0x8178, rtl92cu_hal_cfg)}, /*Funai -Abocom*/ 349 347 {RTL_USB_DEVICE(0x0846, 0x9021, rtl92cu_hal_cfg)}, /*Netgear-Sercomm*/ 350 348 {RTL_USB_DEVICE(0x0b05, 0x17ab, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/ 349 + {RTL_USB_DEVICE(0x0bda, 0x8186, rtl92cu_hal_cfg)}, /*Realtek 92CE-VAU*/ 351 350 {RTL_USB_DEVICE(0x0df6, 0x0061, rtl92cu_hal_cfg)}, /*Sitecom-Edimax*/ 352 351 {RTL_USB_DEVICE(0x0e66, 0x0019, rtl92cu_hal_cfg)}, /*Hawking-Edimax*/ 353 352 {RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
-1
drivers/net/wireless/ti/wlcore/Kconfig
··· 1 1 config WLCORE 2 2 tristate "TI wlcore support" 3 3 depends on WL_TI && GENERIC_HARDIRQS && MAC80211 4 - depends on INET 5 4 select FW_LOADER 6 5 ---help--- 7 6 This module contains the main code for TI WLAN chips. It abstracts
+26 -4
drivers/of/base.c
··· 511 511 } 512 512 EXPORT_SYMBOL(of_find_node_with_property); 513 513 514 + static const struct of_device_id *of_match_compat(const struct of_device_id *matches, 515 + const char *compat) 516 + { 517 + while (matches->name[0] || matches->type[0] || matches->compatible[0]) { 518 + const char *cp = matches->compatible; 519 + int len = strlen(cp); 520 + 521 + if (len > 0 && of_compat_cmp(compat, cp, len) == 0) 522 + return matches; 523 + 524 + matches++; 525 + } 526 + 527 + return NULL; 528 + } 529 + 514 530 /** 515 531 * of_match_node - Tell if an device_node has a matching of_match structure 516 532 * @matches: array of of device match structures to search in ··· 537 521 const struct of_device_id *of_match_node(const struct of_device_id *matches, 538 522 const struct device_node *node) 539 523 { 524 + struct property *prop; 525 + const char *cp; 526 + 540 527 if (!matches) 541 528 return NULL; 529 + 530 + of_property_for_each_string(node, "compatible", prop, cp) { 531 + const struct of_device_id *match = of_match_compat(matches, cp); 532 + if (match) 533 + return match; 534 + } 542 535 543 536 while (matches->name[0] || matches->type[0] || matches->compatible[0]) { 544 537 int match = 1; ··· 557 532 if (matches->type[0]) 558 533 match &= node->type 559 534 && !strcmp(matches->type, node->type); 560 - if (matches->compatible[0]) 561 - match &= of_device_is_compatible(node, 562 - matches->compatible); 563 - if (match) 535 + if (match && !matches->compatible[0]) 564 536 return matches; 565 537 matches++; 566 538 }
+1
drivers/of/platform.c
··· 462 462 of_node_put(root); 463 463 return rc; 464 464 } 465 + EXPORT_SYMBOL_GPL(of_platform_populate); 465 466 #endif /* CONFIG_OF_ADDRESS */
+18 -17
drivers/scsi/qla2xxx/qla_target.c
··· 3960 3960 { 3961 3961 struct qla_hw_data *ha = vha->hw; 3962 3962 struct qla_tgt *tgt = ha->tgt.qla_tgt; 3963 - int reason_code; 3963 + int login_code; 3964 3964 3965 3965 ql_dbg(ql_dbg_tgt, vha, 0xe039, 3966 3966 "scsi(%ld): ha state %d init_done %d oper_mode %d topo %d\n", ··· 4003 4003 { 4004 4004 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03b, 4005 4005 "qla_target(%d): Async LOOP_UP occured " 4006 - "(m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx, 4007 - le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4008 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4006 + "(m[0]=%x, m[1]=%x, m[2]=%x, m[3]=%x)", vha->vp_idx, 4007 + le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4008 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4009 4009 if (tgt->link_reinit_iocb_pending) { 4010 4010 qlt_send_notify_ack(vha, (void *)&tgt->link_reinit_iocb, 4011 4011 0, 0, 0, 0, 0, 0); ··· 4020 4020 case MBA_RSCN_UPDATE: 4021 4021 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03c, 4022 4022 "qla_target(%d): Async event %#x occured " 4023 - "(m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx, code, 4024 - le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4025 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4023 + "(m[0]=%x, m[1]=%x, m[2]=%x, m[3]=%x)", vha->vp_idx, code, 4024 + le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4025 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4026 4026 break; 4027 4027 4028 4028 case MBA_PORT_UPDATE: 4029 4029 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03d, 4030 4030 "qla_target(%d): Port update async event %#x " 4031 4031 "occured: updating the ports database (m[1]=%x, m[2]=%x, " 4032 - "m[3]=%x, m[4]=%x)", vha->vp_idx, code, 4033 - le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4034 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4035 - reason_code = le16_to_cpu(mailbox[2]); 4036 - if (reason_code == 0x4) 4031 + "occured: updating the ports database (m[0]=%x, m[1]=%x, " 4032 + "m[2]=%x, m[3]=%x)", vha->vp_idx, code, 4033 + le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4034 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4035 + 4036 + login_code = le16_to_cpu(mailbox[2]); 4037 + if (login_code == 0x4) 4037 4038 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03e, 4038 4039 "Async MB 2: Got PLOGI Complete\n"); 4039 - else if (reason_code == 0x7) 4040 + else if (login_code == 0x7) 4040 4041 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf03f, 4041 4042 "Async MB 2: Port Logged Out\n"); 4042 4043 break; ··· 4045 4044 default: 4046 4045 ql_dbg(ql_dbg_tgt_mgt, vha, 0xf040, 4047 4046 "qla_target(%d): Async event %#x occured: " 4048 - "ignore (m[1]=%x, m[2]=%x, m[3]=%x, m[4]=%x)", vha->vp_idx, 4049 - code, le16_to_cpu(mailbox[1]), le16_to_cpu(mailbox[2]), 4050 - le16_to_cpu(mailbox[3]), le16_to_cpu(mailbox[4])); 4047 + "ignore (m[0]=%x, m[1]=%x, m[2]=%x, m[3]=%x)", vha->vp_idx, 4048 + code, le16_to_cpu(mailbox[0]), le16_to_cpu(mailbox[1]), 4049 + le16_to_cpu(mailbox[2]), le16_to_cpu(mailbox[3])); 4051 4050 break; 4052 4051 } 4053 4052
+2 -1
drivers/target/tcm_fc/tfc_sess.c
··· 58 58 struct ft_tport *tport; 59 59 int i; 60 60 61 - tport = rcu_dereference(lport->prov[FC_TYPE_FCP]); 61 + tport = rcu_dereference_protected(lport->prov[FC_TYPE_FCP], 62 + lockdep_is_held(&ft_lport_lock)); 62 63 if (tport && tport->tpg) 63 64 return tport; 64 65
+9 -6
fs/btrfs/backref.c
··· 301 301 goto out; 302 302 303 303 eb = path->nodes[level]; 304 - if (!eb) { 305 - WARN_ON(1); 306 - ret = 1; 307 - goto out; 304 + while (!eb) { 305 + if (!level) { 306 + WARN_ON(1); 307 + ret = 1; 308 + goto out; 309 + } 310 + level--; 311 + eb = path->nodes[level]; 308 312 } 309 313 310 314 ret = add_all_parents(root, path, parents, level, &ref->key_for_search, ··· 839 835 } 840 836 ret = __add_delayed_refs(head, delayed_ref_seq, 841 837 &prefs_delayed); 838 + mutex_unlock(&head->mutex); 842 839 if (ret) { 843 840 spin_unlock(&delayed_refs->lock); 844 841 goto out; ··· 933 928 } 934 929 935 930 out: 936 - if (head) 937 - mutex_unlock(&head->mutex); 938 931 btrfs_free_path(path); 939 932 while (!list_empty(&prefs)) { 940 933 ref = list_first_entry(&prefs, struct __prelim_ref, list);
+35 -25
fs/btrfs/ctree.c
··· 1024 1024 if (!looped && !tm) 1025 1025 return 0; 1026 1026 /* 1027 - * we must have key remove operations in the log before the 1028 - * replace operation. 1027 + * if there are no tree operation for the oldest root, we simply 1028 + * return it. this should only happen if that (old) root is at 1029 + * level 0. 1029 1030 */ 1030 - BUG_ON(!tm); 1031 + if (!tm) 1032 + break; 1031 1033 1034 + /* 1035 + * if there's an operation that's not a root replacement, we 1036 + * found the oldest version of our root. normally, we'll find a 1037 + * MOD_LOG_KEY_REMOVE_WHILE_FREEING operation here. 1038 + */ 1032 1039 if (tm->op != MOD_LOG_ROOT_REPLACE) 1033 1040 break; 1034 1041 ··· 1094 1087 tm->generation); 1095 1088 break; 1096 1089 case MOD_LOG_KEY_ADD: 1097 - if (tm->slot != n - 1) { 1098 - o_dst = btrfs_node_key_ptr_offset(tm->slot); 1099 - o_src = btrfs_node_key_ptr_offset(tm->slot + 1); 1100 - memmove_extent_buffer(eb, o_dst, o_src, p_size); 1101 - } 1090 + /* if a move operation is needed it's in the log */ 1102 1091 n--; 1103 1092 break; 1104 1093 case MOD_LOG_MOVE_KEYS: ··· 1195 1192 } 1196 1193 1197 1194 tm = tree_mod_log_search(root->fs_info, logical, time_seq); 1198 - /* 1199 - * there was an item in the log when __tree_mod_log_oldest_root 1200 - * returned. this one must not go away, because the time_seq passed to 1201 - * us must be blocking its removal. 1202 - */ 1203 - BUG_ON(!tm); 1204 - 1205 1195 if (old_root) 1206 - eb = alloc_dummy_extent_buffer(tm->index << PAGE_CACHE_SHIFT, 1207 - root->nodesize); 1196 + eb = alloc_dummy_extent_buffer(logical, root->nodesize); 1208 1197 else 1209 1198 eb = btrfs_clone_extent_buffer(root->node); 1210 1199 btrfs_tree_read_unlock(root->node); ··· 1211 1216 btrfs_set_header_level(eb, old_root->level); 1212 1217 btrfs_set_header_generation(eb, old_generation); 1213 1218 } 1214 - __tree_mod_log_rewind(eb, time_seq, tm); 1219 + if (tm) 1220 + __tree_mod_log_rewind(eb, time_seq, tm); 1221 + else 1222 + WARN_ON(btrfs_header_level(eb) != 0); 1215 1223 extent_buffer_get(eb); 1216 1224 1217 1225 return eb; ··· 2993 2995 static void insert_ptr(struct btrfs_trans_handle *trans, 2994 2996 struct btrfs_root *root, struct btrfs_path *path, 2995 2997 struct btrfs_disk_key *key, u64 bytenr, 2996 - int slot, int level, int tree_mod_log) 2998 + int slot, int level) 2997 2999 { 2998 3000 struct extent_buffer *lower; 2999 3001 int nritems; ··· 3006 3008 BUG_ON(slot > nritems); 3007 3009 BUG_ON(nritems == BTRFS_NODEPTRS_PER_BLOCK(root)); 3008 3010 if (slot != nritems) { 3009 - if (tree_mod_log && level) 3011 + if (level) 3010 3012 tree_mod_log_eb_move(root->fs_info, lower, slot + 1, 3011 3013 slot, nritems - slot); 3012 3014 memmove_extent_buffer(lower, ··· 3014 3016 btrfs_node_key_ptr_offset(slot), 3015 3017 (nritems - slot) * sizeof(struct btrfs_key_ptr)); 3016 3018 } 3017 - if (tree_mod_log && level) { 3019 + if (level) { 3018 3020 ret = tree_mod_log_insert_key(root->fs_info, lower, slot, 3019 3021 MOD_LOG_KEY_ADD); 3020 3022 BUG_ON(ret < 0); ··· 3102 3104 btrfs_mark_buffer_dirty(split); 3103 3105 3104 3106 insert_ptr(trans, root, path, &disk_key, split->start, 3105 - path->slots[level + 1] + 1, level + 1, 1); 3107 + path->slots[level + 1] + 1, level + 1); 3106 3108 3107 3109 if (path->slots[level] >= mid) { 3108 3110 path->slots[level] -= mid; ··· 3639 3641 btrfs_set_header_nritems(l, mid); 3640 3642 btrfs_item_key(right, &disk_key, 0); 3641 3643 insert_ptr(trans, root, path, &disk_key, right->start, 3642 - path->slots[1] + 1, 1, 0); 3644 + path->slots[1] + 1, 1); 3643 3645 3644 3646 btrfs_mark_buffer_dirty(right); 3645 3647 btrfs_mark_buffer_dirty(l); ··· 3846 3848 if (mid <= slot) { 3847 3849 btrfs_set_header_nritems(right, 0); 3848 3850 insert_ptr(trans, root, path, &disk_key, right->start, 3849 - path->slots[1] + 1, 1, 0); 3851 + path->slots[1] + 1, 1); 3850 3852 btrfs_tree_unlock(path->nodes[0]); 3851 3853 free_extent_buffer(path->nodes[0]); 3852 3854 path->nodes[0] = right; ··· 3855 3857 } else { 3856 3858 btrfs_set_header_nritems(right, 0); 3857 3859 insert_ptr(trans, root, path, &disk_key, right->start, 3858 - path->slots[1], 1, 0); 3860 + path->slots[1], 1); 3859 3861 btrfs_tree_unlock(path->nodes[0]); 3860 3862 free_extent_buffer(path->nodes[0]); 3861 3863 path->nodes[0] = right; ··· 5119 5121 5120 5122 if (!path->skip_locking) { 5121 5123 ret = btrfs_try_tree_read_lock(next); 5124 + if (!ret && time_seq) { 5125 + /* 5126 + * If we don't get the lock, we may be racing 5127 + * with push_leaf_left, holding that lock while 5128 + * itself waiting for the leaf we've currently 5129 + * locked. To solve this situation, we give up 5130 + * on our lock and cycle. 5131 + */ 5132 + btrfs_release_path(path); 5133 + cond_resched(); 5134 + goto again; 5135 + } 5122 5136 if (!ret) { 5123 5137 btrfs_set_path_blocking(path); 5124 5138 btrfs_tree_read_lock(next);
+21 -13
fs/btrfs/disk-io.c
··· 2354 2354 BTRFS_CSUM_TREE_OBJECTID, csum_root); 2355 2355 if (ret) 2356 2356 goto recovery_tree_root; 2357 - 2358 2357 csum_root->track_dirty = 1; 2359 2358 2360 2359 fs_info->generation = generation; 2361 2360 fs_info->last_trans_committed = generation; 2361 + 2362 + ret = btrfs_recover_balance(fs_info); 2363 + if (ret) { 2364 + printk(KERN_WARNING "btrfs: failed to recover balance\n"); 2365 + goto fail_block_groups; 2366 + } 2362 2367 2363 2368 ret = btrfs_init_dev_stats(fs_info); 2364 2369 if (ret) { ··· 2490 2485 goto fail_trans_kthread; 2491 2486 } 2492 2487 2493 - if (!(sb->s_flags & MS_RDONLY)) { 2494 - down_read(&fs_info->cleanup_work_sem); 2495 - err = btrfs_orphan_cleanup(fs_info->fs_root); 2496 - if (!err) 2497 - err = btrfs_orphan_cleanup(fs_info->tree_root); 2488 + if (sb->s_flags & MS_RDONLY) 2489 + return 0; 2490 + 2491 + down_read(&fs_info->cleanup_work_sem); 2492 + if ((ret = btrfs_orphan_cleanup(fs_info->fs_root)) || 2493 + (ret = btrfs_orphan_cleanup(fs_info->tree_root))) { 2498 2494 up_read(&fs_info->cleanup_work_sem); 2495 + close_ctree(tree_root); 2496 + return ret; 2497 + } 2498 + up_read(&fs_info->cleanup_work_sem); 2499 2499 2500 - if (!err) 2501 - err = btrfs_recover_balance(fs_info->tree_root); 2502 - 2503 - if (err) { 2504 - close_ctree(tree_root); 2505 - return err; 2506 - } 2500 + ret = btrfs_resume_balance_async(fs_info); 2501 + if (ret) { 2502 + printk(KERN_WARNING "btrfs: failed to resume balance\n"); 2503 + close_ctree(tree_root); 2504 + return ret; 2507 2505 } 2508 2506 2509 2507 return 0;
+6 -5
fs/btrfs/extent-tree.c
··· 2347 2347 return count; 2348 2348 } 2349 2349 2350 - 2351 2350 static void wait_for_more_refs(struct btrfs_delayed_ref_root *delayed_refs, 2352 - unsigned long num_refs) 2351 + unsigned long num_refs, 2352 + struct list_head *first_seq) 2353 2353 { 2354 - struct list_head *first_seq = delayed_refs->seq_head.next; 2355 - 2356 2354 spin_unlock(&delayed_refs->lock); 2357 2355 pr_debug("waiting for more refs (num %ld, first %p)\n", 2358 2356 num_refs, first_seq); ··· 2379 2381 struct btrfs_delayed_ref_root *delayed_refs; 2380 2382 struct btrfs_delayed_ref_node *ref; 2381 2383 struct list_head cluster; 2384 + struct list_head *first_seq = NULL; 2382 2385 int ret; 2383 2386 u64 delayed_start; 2384 2387 int run_all = count == (unsigned long)-1; ··· 2435 2436 */ 2436 2437 consider_waiting = 1; 2437 2438 num_refs = delayed_refs->num_entries; 2439 + first_seq = root->fs_info->tree_mod_seq_list.next; 2438 2440 } else { 2439 - wait_for_more_refs(delayed_refs, num_refs); 2441 + wait_for_more_refs(delayed_refs, 2442 + num_refs, first_seq); 2440 2443 /* 2441 2444 * after waiting, things have changed. we 2442 2445 * dropped the lock and someone else might have
+14
fs/btrfs/extent_io.c
··· 3324 3324 writepage_t writepage, void *data, 3325 3325 void (*flush_fn)(void *)) 3326 3326 { 3327 + struct inode *inode = mapping->host; 3327 3328 int ret = 0; 3328 3329 int done = 0; 3329 3330 int nr_to_write_done = 0; ··· 3334 3333 pgoff_t end; /* Inclusive */ 3335 3334 int scanned = 0; 3336 3335 int tag; 3336 + 3337 + /* 3338 + * We have to hold onto the inode so that ordered extents can do their 3339 + * work when the IO finishes. The alternative to this is failing to add 3340 + * an ordered extent if the igrab() fails there and that is a huge pain 3341 + * to deal with, so instead just hold onto the inode throughout the 3342 + * writepages operation. If it fails here we are freeing up the inode 3343 + * anyway and we'd rather not waste our time writing out stuff that is 3344 + * going to be truncated anyway. 3345 + */ 3346 + if (!igrab(inode)) 3347 + return 0; 3337 3348 3338 3349 pagevec_init(&pvec, 0); 3339 3350 if (wbc->range_cyclic) { ··· 3441 3428 index = 0; 3442 3429 goto retry; 3443 3430 } 3431 + btrfs_add_delayed_iput(inode); 3444 3432 return ret; 3445 3433 } 3446 3434
-13
fs/btrfs/file.c
··· 1334 1334 loff_t *ppos, size_t count, size_t ocount) 1335 1335 { 1336 1336 struct file *file = iocb->ki_filp; 1337 - struct inode *inode = fdentry(file)->d_inode; 1338 1337 struct iov_iter i; 1339 1338 ssize_t written; 1340 1339 ssize_t written_buffered; ··· 1342 1343 1343 1344 written = generic_file_direct_write(iocb, iov, &nr_segs, pos, ppos, 1344 1345 count, ocount); 1345 - 1346 - /* 1347 - * the generic O_DIRECT will update in-memory i_size after the 1348 - * DIOs are done. But our endio handlers that update the on 1349 - * disk i_size never update past the in memory i_size. So we 1350 - * need one more update here to catch any additions to the 1351 - * file 1352 - */ 1353 - if (inode->i_size != BTRFS_I(inode)->disk_i_size) { 1354 - btrfs_ordered_update_i_size(inode, inode->i_size, NULL); 1355 - mark_inode_dirty(inode); 1356 - } 1357 1346 1358 1347 if (written < 0 || written == count) 1359 1348 return written;
+51 -92
fs/btrfs/free-space-cache.c
··· 1543 1543 end = bitmap_info->offset + (u64)(BITS_PER_BITMAP * ctl->unit) - 1; 1544 1544 1545 1545 /* 1546 - * XXX - this can go away after a few releases. 1547 - * 1548 - * since the only user of btrfs_remove_free_space is the tree logging 1549 - * stuff, and the only way to test that is under crash conditions, we 1550 - * want to have this debug stuff here just in case somethings not 1551 - * working. Search the bitmap for the space we are trying to use to 1552 - * make sure its actually there. If its not there then we need to stop 1553 - * because something has gone wrong. 1546 + * We need to search for bits in this bitmap. We could only cover some 1547 + * of the extent in this bitmap thanks to how we add space, so we need 1548 + * to search for as much as it as we can and clear that amount, and then 1549 + * go searching for the next bit. 1554 1550 */ 1555 1551 search_start = *offset; 1556 - search_bytes = *bytes; 1552 + search_bytes = ctl->unit; 1557 1553 search_bytes = min(search_bytes, end - search_start + 1); 1558 1554 ret = search_bitmap(ctl, bitmap_info, &search_start, &search_bytes); 1559 1555 BUG_ON(ret < 0 || search_start != *offset); 1560 1556 1561 - if (*offset > bitmap_info->offset && *offset + *bytes > end) { 1562 - bitmap_clear_bits(ctl, bitmap_info, *offset, end - *offset + 1); 1563 - *bytes -= end - *offset + 1; 1564 - *offset = end + 1; 1565 - } else if (*offset >= bitmap_info->offset && *offset + *bytes <= end) { 1566 - bitmap_clear_bits(ctl, bitmap_info, *offset, *bytes); 1567 - *bytes = 0; 1568 - } 1557 + /* We may have found more bits than what we need */ 1558 + search_bytes = min(search_bytes, *bytes); 1559 + 1560 + /* Cannot clear past the end of the bitmap */ 1561 + search_bytes = min(search_bytes, end - search_start + 1); 1562 + 1563 + bitmap_clear_bits(ctl, bitmap_info, search_start, search_bytes); 1564 + *offset += search_bytes; 1565 + *bytes -= search_bytes; 1569 1566 1570 1567 if (*bytes) { 1571 1568 struct rb_node *next = 
rb_next(&bitmap_info->offset_index); ··· 1593 1596 * everything over again. 1594 1597 */ 1595 1598 search_start = *offset; 1596 - search_bytes = *bytes; 1599 + search_bytes = ctl->unit; 1597 1600 ret = search_bitmap(ctl, bitmap_info, &search_start, 1598 1601 &search_bytes); 1599 1602 if (ret < 0 || search_start != *offset) ··· 1876 1879 { 1877 1880 struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl; 1878 1881 struct btrfs_free_space *info; 1879 - struct btrfs_free_space *next_info = NULL; 1880 1882 int ret = 0; 1881 1883 1882 1884 spin_lock(&ctl->tree_lock); 1883 1885 1884 1886 again: 1887 + if (!bytes) 1888 + goto out_lock; 1889 + 1885 1890 info = tree_search_offset(ctl, offset, 0, 0); 1886 1891 if (!info) { 1887 1892 /* ··· 1904 1905 } 1905 1906 } 1906 1907 1907 - if (info->bytes < bytes && rb_next(&info->offset_index)) { 1908 - u64 end; 1909 - next_info = rb_entry(rb_next(&info->offset_index), 1910 - struct btrfs_free_space, 1911 - offset_index); 1912 - 1913 - if (next_info->bitmap) 1914 - end = next_info->offset + 1915 - BITS_PER_BITMAP * ctl->unit - 1; 1916 - else 1917 - end = next_info->offset + next_info->bytes; 1918 - 1919 - if (next_info->bytes < bytes || 1920 - next_info->offset > offset || offset > end) { 1921 - printk(KERN_CRIT "Found free space at %llu, size %llu," 1922 - " trying to use %llu\n", 1923 - (unsigned long long)info->offset, 1924 - (unsigned long long)info->bytes, 1925 - (unsigned long long)bytes); 1926 - WARN_ON(1); 1927 - ret = -EINVAL; 1928 - goto out_lock; 1929 - } 1930 - 1931 - info = next_info; 1932 - } 1933 - 1934 - if (info->bytes == bytes) { 1908 + if (!info->bitmap) { 1935 1909 unlink_free_space(ctl, info); 1936 - if (info->bitmap) { 1937 - kfree(info->bitmap); 1938 - ctl->total_bitmaps--; 1939 - } 1940 - kmem_cache_free(btrfs_free_space_cachep, info); 1941 - ret = 0; 1942 - goto out_lock; 1943 - } 1910 + if (offset == info->offset) { 1911 + u64 to_free = min(bytes, info->bytes); 1944 1912 1945 - if (!info->bitmap && 
info->offset == offset) { 1946 - unlink_free_space(ctl, info); 1947 - info->offset += bytes; 1948 - info->bytes -= bytes; 1949 - ret = link_free_space(ctl, info); 1950 - WARN_ON(ret); 1951 - goto out_lock; 1952 - } 1913 + info->bytes -= to_free; 1914 + info->offset += to_free; 1915 + if (info->bytes) { 1916 + ret = link_free_space(ctl, info); 1917 + WARN_ON(ret); 1918 + } else { 1919 + kmem_cache_free(btrfs_free_space_cachep, info); 1920 + } 1953 1921 1954 - if (!info->bitmap && info->offset <= offset && 1955 - info->offset + info->bytes >= offset + bytes) { 1956 - u64 old_start = info->offset; 1957 - /* 1958 - * we're freeing space in the middle of the info, 1959 - * this can happen during tree log replay 1960 - * 1961 - * first unlink the old info and then 1962 - * insert it again after the hole we're creating 1963 - */ 1964 - unlink_free_space(ctl, info); 1965 - if (offset + bytes < info->offset + info->bytes) { 1966 - u64 old_end = info->offset + info->bytes; 1922 + offset += to_free; 1923 + bytes -= to_free; 1924 + goto again; 1925 + } else { 1926 + u64 old_end = info->bytes + info->offset; 1967 1927 1968 - info->offset = offset + bytes; 1969 - info->bytes = old_end - info->offset; 1928 + info->bytes = offset - info->offset; 1970 1929 ret = link_free_space(ctl, info); 1971 1930 WARN_ON(ret); 1972 1931 if (ret) 1973 1932 goto out_lock; 1974 - } else { 1975 - /* the hole we're creating ends at the end 1976 - * of the info struct, just free the info 1977 - */ 1978 - kmem_cache_free(btrfs_free_space_cachep, info); 1979 - } 1980 - spin_unlock(&ctl->tree_lock); 1981 1933 1982 - /* step two, insert a new info struct to cover 1983 - * anything before the hole 1984 - */ 1985 - ret = btrfs_add_free_space(block_group, old_start, 1986 - offset - old_start); 1987 - WARN_ON(ret); /* -ENOMEM */ 1988 - goto out; 1934 + /* Not enough bytes in this entry to satisfy us */ 1935 + if (old_end < offset + bytes) { 1936 + bytes -= old_end - offset; 1937 + offset = old_end; 1938 + 
goto again; 1939 + } else if (old_end == offset + bytes) { 1940 + /* all done */ 1941 + goto out_lock; 1942 + } 1943 + spin_unlock(&ctl->tree_lock); 1944 + 1945 + ret = btrfs_add_free_space(block_group, offset + bytes, 1946 + old_end - (offset + bytes)); 1947 + WARN_ON(ret); 1948 + goto out; 1949 + } 1989 1950 } 1990 1951 1991 1952 ret = remove_from_bitmap(ctl, info, &offset, &bytes);
+51 -6
fs/btrfs/inode.c
··· 3754 3754 btrfs_wait_ordered_range(inode, 0, (u64)-1); 3755 3755 3756 3756 if (root->fs_info->log_root_recovering) { 3757 - BUG_ON(!test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM, 3757 + BUG_ON(test_bit(BTRFS_INODE_HAS_ORPHAN_ITEM, 3758 3758 &BTRFS_I(inode)->runtime_flags)); 3759 3759 goto no_delete; 3760 3760 } ··· 5876 5876 bh_result->b_size = len; 5877 5877 bh_result->b_bdev = em->bdev; 5878 5878 set_buffer_mapped(bh_result); 5879 - if (create && !test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) 5880 - set_buffer_new(bh_result); 5879 + if (create) { 5880 + if (!test_bit(EXTENT_FLAG_PREALLOC, &em->flags)) 5881 + set_buffer_new(bh_result); 5882 + 5883 + /* 5884 + * Need to update the i_size under the extent lock so buffered 5885 + * readers will get the updated i_size when we unlock. 5886 + */ 5887 + if (start + len > i_size_read(inode)) 5888 + i_size_write(inode, start + len); 5889 + } 5881 5890 5882 5891 free_extent_map(em); 5883 5892 ··· 6369 6360 */ 6370 6361 ordered = btrfs_lookup_ordered_range(inode, lockstart, 6371 6362 lockend - lockstart + 1); 6372 - if (!ordered) 6363 + 6364 + /* 6365 + * We need to make sure there are no buffered pages in this 6366 + * range either, we could have raced between the invalidate in 6367 + * generic_file_direct_write and locking the extent. The 6368 + * invalidate needs to happen so that reads after a write do not 6369 + * get stale data. 
6370 + */ 6371 + if (!ordered && (!writing || 6372 + !test_range_bit(&BTRFS_I(inode)->io_tree, 6373 + lockstart, lockend, EXTENT_UPTODATE, 0, 6374 + cached_state))) 6373 6375 break; 6376 + 6374 6377 unlock_extent_cached(&BTRFS_I(inode)->io_tree, lockstart, lockend, 6375 6378 &cached_state, GFP_NOFS); 6376 - btrfs_start_ordered_extent(inode, ordered, 1); 6377 - btrfs_put_ordered_extent(ordered); 6379 + 6380 + if (ordered) { 6381 + btrfs_start_ordered_extent(inode, ordered, 1); 6382 + btrfs_put_ordered_extent(ordered); 6383 + } else { 6384 + /* Screw you mmap */ 6385 + ret = filemap_write_and_wait_range(file->f_mapping, 6386 + lockstart, 6387 + lockend); 6388 + if (ret) 6389 + goto out; 6390 + 6391 + /* 6392 + * If we found a page that couldn't be invalidated just 6393 + * fall back to buffered. 6394 + */ 6395 + ret = invalidate_inode_pages2_range(file->f_mapping, 6396 + lockstart >> PAGE_CACHE_SHIFT, 6397 + lockend >> PAGE_CACHE_SHIFT); 6398 + if (ret) { 6399 + if (ret == -EBUSY) 6400 + ret = 0; 6401 + goto out; 6402 + } 6403 + } 6404 + 6378 6405 cond_resched(); 6379 6406 } 6380 6407
+1 -1
fs/btrfs/ioctl.h
··· 339 339 #define BTRFS_IOC_WAIT_SYNC _IOW(BTRFS_IOCTL_MAGIC, 22, __u64) 340 340 #define BTRFS_IOC_SNAP_CREATE_V2 _IOW(BTRFS_IOCTL_MAGIC, 23, \ 341 341 struct btrfs_ioctl_vol_args_v2) 342 - #define BTRFS_IOC_SUBVOL_GETFLAGS _IOW(BTRFS_IOCTL_MAGIC, 25, __u64) 342 + #define BTRFS_IOC_SUBVOL_GETFLAGS _IOR(BTRFS_IOCTL_MAGIC, 25, __u64) 343 343 #define BTRFS_IOC_SUBVOL_SETFLAGS _IOW(BTRFS_IOCTL_MAGIC, 26, __u64) 344 344 #define BTRFS_IOC_SCRUB _IOWR(BTRFS_IOCTL_MAGIC, 27, \ 345 345 struct btrfs_ioctl_scrub_args)
+4
fs/btrfs/super.c
··· 1187 1187 if (ret) 1188 1188 goto restore; 1189 1189 1190 + ret = btrfs_resume_balance_async(fs_info); 1191 + if (ret) 1192 + goto restore; 1193 + 1190 1194 sb->s_flags &= ~MS_RDONLY; 1191 1195 } 1192 1196
+6
fs/btrfs/tree-log.c
··· 690 690 kfree(name); 691 691 692 692 iput(inode); 693 + 694 + btrfs_run_delayed_items(trans, root); 693 695 return ret; 694 696 } 695 697 ··· 897 895 ret = btrfs_unlink_inode(trans, root, dir, 898 896 inode, victim_name, 899 897 victim_name_len); 898 + btrfs_run_delayed_items(trans, root); 900 899 } 901 900 kfree(victim_name); 902 901 ptr = (unsigned long)(victim_ref + 1) + victim_name_len; ··· 1478 1475 ret = btrfs_unlink_inode(trans, root, dir, inode, 1479 1476 name, name_len); 1480 1477 BUG_ON(ret); 1478 + 1479 + btrfs_run_delayed_items(trans, root); 1480 + 1481 1481 kfree(name); 1482 1482 iput(inode); 1483 1483
+60 -41
fs/btrfs/volumes.c
··· 2845 2845 2846 2846 static int balance_kthread(void *data) 2847 2847 { 2848 - struct btrfs_balance_control *bctl = 2849 - (struct btrfs_balance_control *)data; 2850 - struct btrfs_fs_info *fs_info = bctl->fs_info; 2848 + struct btrfs_fs_info *fs_info = data; 2851 2849 int ret = 0; 2852 2850 2853 2851 mutex_lock(&fs_info->volume_mutex); 2854 2852 mutex_lock(&fs_info->balance_mutex); 2855 2853 2856 - set_balance_control(bctl); 2857 - 2858 - if (btrfs_test_opt(fs_info->tree_root, SKIP_BALANCE)) { 2859 - printk(KERN_INFO "btrfs: force skipping balance\n"); 2860 - } else { 2854 + if (fs_info->balance_ctl) { 2861 2855 printk(KERN_INFO "btrfs: continuing balance\n"); 2862 - ret = btrfs_balance(bctl, NULL); 2856 + ret = btrfs_balance(fs_info->balance_ctl, NULL); 2863 2857 } 2864 2858 2865 2859 mutex_unlock(&fs_info->balance_mutex); 2866 2860 mutex_unlock(&fs_info->volume_mutex); 2861 + 2867 2862 return ret; 2868 2863 } 2869 2864 2870 - int btrfs_recover_balance(struct btrfs_root *tree_root) 2865 + int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info) 2871 2866 { 2872 2867 struct task_struct *tsk; 2868 + 2869 + spin_lock(&fs_info->balance_lock); 2870 + if (!fs_info->balance_ctl) { 2871 + spin_unlock(&fs_info->balance_lock); 2872 + return 0; 2873 + } 2874 + spin_unlock(&fs_info->balance_lock); 2875 + 2876 + if (btrfs_test_opt(fs_info->tree_root, SKIP_BALANCE)) { 2877 + printk(KERN_INFO "btrfs: force skipping balance\n"); 2878 + return 0; 2879 + } 2880 + 2881 + tsk = kthread_run(balance_kthread, fs_info, "btrfs-balance"); 2882 + if (IS_ERR(tsk)) 2883 + return PTR_ERR(tsk); 2884 + 2885 + return 0; 2886 + } 2887 + 2888 + int btrfs_recover_balance(struct btrfs_fs_info *fs_info) 2889 + { 2873 2890 struct btrfs_balance_control *bctl; 2874 2891 struct btrfs_balance_item *item; 2875 2892 struct btrfs_disk_balance_args disk_bargs; ··· 2899 2882 if (!path) 2900 2883 return -ENOMEM; 2901 2884 2885 + key.objectid = BTRFS_BALANCE_OBJECTID; 2886 + key.type = 
BTRFS_BALANCE_ITEM_KEY; 2887 + key.offset = 0; 2888 + 2889 + ret = btrfs_search_slot(NULL, fs_info->tree_root, &key, path, 0, 0); 2890 + if (ret < 0) 2891 + goto out; 2892 + if (ret > 0) { /* ret = -ENOENT; */ 2893 + ret = 0; 2894 + goto out; 2895 + } 2896 + 2902 2897 bctl = kzalloc(sizeof(*bctl), GFP_NOFS); 2903 2898 if (!bctl) { 2904 2899 ret = -ENOMEM; 2905 2900 goto out; 2906 2901 } 2907 2902 2908 - key.objectid = BTRFS_BALANCE_OBJECTID; 2909 - key.type = BTRFS_BALANCE_ITEM_KEY; 2910 - key.offset = 0; 2911 - 2912 - ret = btrfs_search_slot(NULL, tree_root, &key, path, 0, 0); 2913 - if (ret < 0) 2914 - goto out_bctl; 2915 - if (ret > 0) { /* ret = -ENOENT; */ 2916 - ret = 0; 2917 - goto out_bctl; 2918 - } 2919 - 2920 2903 leaf = path->nodes[0]; 2921 2904 item = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_balance_item); 2922 2905 2923 - bctl->fs_info = tree_root->fs_info; 2924 - bctl->flags = btrfs_balance_flags(leaf, item) | BTRFS_BALANCE_RESUME; 2906 + bctl->fs_info = fs_info; 2907 + bctl->flags = btrfs_balance_flags(leaf, item); 2908 + bctl->flags |= BTRFS_BALANCE_RESUME; 2925 2909 2926 2910 btrfs_balance_data(leaf, item, &disk_bargs); 2927 2911 btrfs_disk_balance_args_to_cpu(&bctl->data, &disk_bargs); ··· 2931 2913 btrfs_balance_sys(leaf, item, &disk_bargs); 2932 2914 btrfs_disk_balance_args_to_cpu(&bctl->sys, &disk_bargs); 2933 2915 2934 - tsk = kthread_run(balance_kthread, bctl, "btrfs-balance"); 2935 - if (IS_ERR(tsk)) 2936 - ret = PTR_ERR(tsk); 2937 - else 2938 - goto out; 2916 + mutex_lock(&fs_info->volume_mutex); 2917 + mutex_lock(&fs_info->balance_mutex); 2939 2918 2940 - out_bctl: 2941 - kfree(bctl); 2919 + set_balance_control(bctl); 2920 + 2921 + mutex_unlock(&fs_info->balance_mutex); 2922 + mutex_unlock(&fs_info->volume_mutex); 2942 2923 out: 2943 2924 btrfs_free_path(path); 2944 2925 return ret; ··· 4078 4061 4079 4062 BUG_ON(stripe_index >= bbio->num_stripes); 4080 4063 dev = bbio->stripes[stripe_index].dev; 4081 - if (bio->bi_rw & WRITE) 
4082 - btrfs_dev_stat_inc(dev, 4083 - BTRFS_DEV_STAT_WRITE_ERRS); 4084 - else 4085 - btrfs_dev_stat_inc(dev, 4086 - BTRFS_DEV_STAT_READ_ERRS); 4087 - if ((bio->bi_rw & WRITE_FLUSH) == WRITE_FLUSH) 4088 - btrfs_dev_stat_inc(dev, 4089 - BTRFS_DEV_STAT_FLUSH_ERRS); 4090 - btrfs_dev_stat_print_on_error(dev); 4064 + if (dev->bdev) { 4065 + if (bio->bi_rw & WRITE) 4066 + btrfs_dev_stat_inc(dev, 4067 + BTRFS_DEV_STAT_WRITE_ERRS); 4068 + else 4069 + btrfs_dev_stat_inc(dev, 4070 + BTRFS_DEV_STAT_READ_ERRS); 4071 + if ((bio->bi_rw & WRITE_FLUSH) == WRITE_FLUSH) 4072 + btrfs_dev_stat_inc(dev, 4073 + BTRFS_DEV_STAT_FLUSH_ERRS); 4074 + btrfs_dev_stat_print_on_error(dev); 4075 + } 4091 4076 } 4092 4077 } 4093 4078
+2 -1
fs/btrfs/volumes.h
··· 281 281 int btrfs_init_new_device(struct btrfs_root *root, char *path); 282 282 int btrfs_balance(struct btrfs_balance_control *bctl, 283 283 struct btrfs_ioctl_balance_args *bargs); 284 - int btrfs_recover_balance(struct btrfs_root *tree_root); 284 + int btrfs_resume_balance_async(struct btrfs_fs_info *fs_info); 285 + int btrfs_recover_balance(struct btrfs_fs_info *fs_info); 285 286 int btrfs_pause_balance(struct btrfs_fs_info *fs_info); 286 287 int btrfs_cancel_balance(struct btrfs_fs_info *fs_info); 287 288 int btrfs_chunk_readonly(struct btrfs_root *root, u64 chunk_offset);
+20 -21
fs/cifs/connect.c
··· 1653 1653 * If yes, we have encountered a double deliminator 1654 1654 * reset the NULL character to the deliminator 1655 1655 */ 1656 - if (tmp_end < end && tmp_end[1] == delim) 1656 + if (tmp_end < end && tmp_end[1] == delim) { 1657 1657 tmp_end[0] = delim; 1658 1658 1659 - /* Keep iterating until we get to a single deliminator 1660 - * OR the end 1661 - */ 1662 - while ((tmp_end = strchr(tmp_end, delim)) != NULL && 1663 - (tmp_end[1] == delim)) { 1664 - tmp_end = (char *) &tmp_end[2]; 1665 - } 1659 + /* Keep iterating until we get to a single 1660 + * deliminator OR the end 1661 + */ 1662 + while ((tmp_end = strchr(tmp_end, delim)) 1663 + != NULL && (tmp_end[1] == delim)) { 1664 + tmp_end = (char *) &tmp_end[2]; 1665 + } 1666 1666 1667 - /* Reset var options to point to next element */ 1668 - if (tmp_end) { 1669 - tmp_end[0] = '\0'; 1670 - options = (char *) &tmp_end[1]; 1671 - } else 1672 - /* Reached the end of the mount option string */ 1673 - options = end; 1667 + /* Reset var options to point to next element */ 1668 + if (tmp_end) { 1669 + tmp_end[0] = '\0'; 1670 + options = (char *) &tmp_end[1]; 1671 + } else 1672 + /* Reached the end of the mount option 1673 + * string */ 1674 + options = end; 1675 + } 1674 1676 1675 1677 /* Now build new password string */ 1676 1678 temp_len = strlen(value); ··· 3495 3493 * MS-CIFS indicates that servers are only limited by the client's 3496 3494 * bufsize for reads, testing against win98se shows that it throws 3497 3495 * INVALID_PARAMETER errors if you try to request too large a read. 3496 + * OS/2 just sends back short reads. 3498 3497 * 3499 - * If the server advertises a MaxBufferSize of less than one page, 3500 - * assume that it also can't satisfy reads larger than that either. 3501 - * 3502 - * FIXME: Is there a better heuristic for this? 3498 + * If the server doesn't advertise CAP_LARGE_READ_X, then assume that 3499 + * it can't handle a read request larger than its MaxBufferSize either. 
3503 3500 */ 3504 3501 if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_READ_CAP)) 3505 3502 defsize = CIFS_DEFAULT_IOSIZE; 3506 3503 else if (server->capabilities & CAP_LARGE_READ_X) 3507 3504 defsize = CIFS_DEFAULT_NON_POSIX_RSIZE; 3508 - else if (server->maxBuf >= PAGE_CACHE_SIZE) 3509 - defsize = CIFSMaxBufSize; 3510 3505 else 3511 3506 defsize = server->maxBuf - sizeof(READ_RSP); 3512 3507
+1 -1
fs/ecryptfs/kthread.c
··· 149 149 (*lower_file) = dentry_open(lower_dentry, lower_mnt, flags, cred); 150 150 if (!IS_ERR(*lower_file)) 151 151 goto out; 152 - if (flags & O_RDONLY) { 152 + if ((flags & O_ACCMODE) == O_RDONLY) { 153 153 rc = PTR_ERR((*lower_file)); 154 154 goto out; 155 155 }
+29 -19
fs/ecryptfs/miscdev.c
··· 49 49 mutex_lock(&ecryptfs_daemon_hash_mux); 50 50 /* TODO: Just use file->private_data? */ 51 51 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns()); 52 - BUG_ON(rc || !daemon); 52 + if (rc || !daemon) { 53 + mutex_unlock(&ecryptfs_daemon_hash_mux); 54 + return -EINVAL; 55 + } 53 56 mutex_lock(&daemon->mux); 54 57 mutex_unlock(&ecryptfs_daemon_hash_mux); 55 58 if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) { ··· 125 122 goto out_unlock_daemon; 126 123 } 127 124 daemon->flags |= ECRYPTFS_DAEMON_MISCDEV_OPEN; 125 + file->private_data = daemon; 128 126 atomic_inc(&ecryptfs_num_miscdev_opens); 129 127 out_unlock_daemon: 130 128 mutex_unlock(&daemon->mux); ··· 156 152 157 153 mutex_lock(&ecryptfs_daemon_hash_mux); 158 154 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns()); 159 - BUG_ON(rc || !daemon); 155 + if (rc || !daemon) 156 + daemon = file->private_data; 160 157 mutex_lock(&daemon->mux); 161 - BUG_ON(daemon->pid != task_pid(current)); 162 158 BUG_ON(!(daemon->flags & ECRYPTFS_DAEMON_MISCDEV_OPEN)); 163 159 daemon->flags &= ~ECRYPTFS_DAEMON_MISCDEV_OPEN; 164 160 atomic_dec(&ecryptfs_num_miscdev_opens); ··· 195 191 struct ecryptfs_msg_ctx *msg_ctx, u8 msg_type, 196 192 u16 msg_flags, struct ecryptfs_daemon *daemon) 197 193 { 198 - int rc = 0; 194 + struct ecryptfs_message *msg; 199 195 200 - mutex_lock(&msg_ctx->mux); 201 - msg_ctx->msg = kmalloc((sizeof(*msg_ctx->msg) + data_size), 202 - GFP_KERNEL); 203 - if (!msg_ctx->msg) { 204 - rc = -ENOMEM; 196 + msg = kmalloc((sizeof(*msg) + data_size), GFP_KERNEL); 197 + if (!msg) { 205 198 printk(KERN_ERR "%s: Out of memory whilst attempting " 206 199 "to kmalloc(%zd, GFP_KERNEL)\n", __func__, 207 - (sizeof(*msg_ctx->msg) + data_size)); 208 - goto out_unlock; 200 + (sizeof(*msg) + data_size)); 201 + return -ENOMEM; 209 202 } 203 + 204 + mutex_lock(&msg_ctx->mux); 205 + msg_ctx->msg = msg; 210 206 msg_ctx->msg->index = msg_ctx->index; 211 207 msg_ctx->msg->data_len = data_size; 212 208 
msg_ctx->type = msg_type; 213 209 memcpy(msg_ctx->msg->data, data, data_size); 214 210 msg_ctx->msg_size = (sizeof(*msg_ctx->msg) + data_size); 215 - mutex_lock(&daemon->mux); 216 211 list_add_tail(&msg_ctx->daemon_out_list, &daemon->msg_ctx_out_queue); 212 + mutex_unlock(&msg_ctx->mux); 213 + 214 + mutex_lock(&daemon->mux); 217 215 daemon->num_queued_msg_ctx++; 218 216 wake_up_interruptible(&daemon->wait); 219 217 mutex_unlock(&daemon->mux); 220 - out_unlock: 221 - mutex_unlock(&msg_ctx->mux); 222 - return rc; 218 + 219 + return 0; 223 220 } 224 221 225 222 /* ··· 274 269 mutex_lock(&ecryptfs_daemon_hash_mux); 275 270 /* TODO: Just use file->private_data? */ 276 271 rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns()); 277 - BUG_ON(rc || !daemon); 272 + if (rc || !daemon) { 273 + mutex_unlock(&ecryptfs_daemon_hash_mux); 274 + return -EINVAL; 275 + } 278 276 mutex_lock(&daemon->mux); 277 + if (task_pid(current) != daemon->pid) { 278 + mutex_unlock(&daemon->mux); 279 + mutex_unlock(&ecryptfs_daemon_hash_mux); 280 + return -EPERM; 281 + } 279 282 if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) { 280 283 rc = 0; 281 284 mutex_unlock(&ecryptfs_daemon_hash_mux); ··· 320 307 * message from the queue; try again */ 321 308 goto check_list; 322 309 } 323 - BUG_ON(euid != daemon->euid); 324 - BUG_ON(current_user_ns() != daemon->user_ns); 325 - BUG_ON(task_pid(current) != daemon->pid); 326 310 msg_ctx = list_first_entry(&daemon->msg_ctx_out_queue, 327 311 struct ecryptfs_msg_ctx, daemon_out_list); 328 312 BUG_ON(!msg_ctx);
+20 -13
fs/ocfs2/dlmglue.c
··· 456 456 stats->ls_gets++; 457 457 stats->ls_total += ktime_to_ns(kt); 458 458 /* overflow */ 459 - if (unlikely(stats->ls_gets) == 0) { 459 + if (unlikely(stats->ls_gets == 0)) { 460 460 stats->ls_gets++; 461 461 stats->ls_total = ktime_to_ns(kt); 462 462 } ··· 3932 3932 static void ocfs2_schedule_blocked_lock(struct ocfs2_super *osb, 3933 3933 struct ocfs2_lock_res *lockres) 3934 3934 { 3935 + unsigned long flags; 3936 + 3935 3937 assert_spin_locked(&lockres->l_lock); 3936 3938 3937 3939 if (lockres->l_flags & OCFS2_LOCK_FREEING) { ··· 3947 3945 3948 3946 lockres_or_flags(lockres, OCFS2_LOCK_QUEUED); 3949 3947 3950 - spin_lock(&osb->dc_task_lock); 3948 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3951 3949 if (list_empty(&lockres->l_blocked_list)) { 3952 3950 list_add_tail(&lockres->l_blocked_list, 3953 3951 &osb->blocked_lock_list); 3954 3952 osb->blocked_lock_count++; 3955 3953 } 3956 - spin_unlock(&osb->dc_task_lock); 3954 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3957 3955 } 3958 3956 3959 3957 static void ocfs2_downconvert_thread_do_work(struct ocfs2_super *osb) 3960 3958 { 3961 3959 unsigned long processed; 3960 + unsigned long flags; 3962 3961 struct ocfs2_lock_res *lockres; 3963 3962 3964 - spin_lock(&osb->dc_task_lock); 3963 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3965 3964 /* grab this early so we know to try again if a state change and 3966 3965 * wake happens part-way through our work */ 3967 3966 osb->dc_work_sequence = osb->dc_wake_sequence; ··· 3975 3972 struct ocfs2_lock_res, l_blocked_list); 3976 3973 list_del_init(&lockres->l_blocked_list); 3977 3974 osb->blocked_lock_count--; 3978 - spin_unlock(&osb->dc_task_lock); 3975 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3979 3976 3980 3977 BUG_ON(!processed); 3981 3978 processed--; 3982 3979 3983 3980 ocfs2_process_blocked_lock(osb, lockres); 3984 3981 3985 - spin_lock(&osb->dc_task_lock); 3982 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3986 3983 } 3987 - 
spin_unlock(&osb->dc_task_lock); 3984 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3988 3985 } 3989 3986 3990 3987 static int ocfs2_downconvert_thread_lists_empty(struct ocfs2_super *osb) 3991 3988 { 3992 3989 int empty = 0; 3990 + unsigned long flags; 3993 3991 3994 - spin_lock(&osb->dc_task_lock); 3992 + spin_lock_irqsave(&osb->dc_task_lock, flags); 3995 3993 if (list_empty(&osb->blocked_lock_list)) 3996 3994 empty = 1; 3997 3995 3998 - spin_unlock(&osb->dc_task_lock); 3996 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 3999 3997 return empty; 4000 3998 } 4001 3999 4002 4000 static int ocfs2_downconvert_thread_should_wake(struct ocfs2_super *osb) 4003 4001 { 4004 4002 int should_wake = 0; 4003 + unsigned long flags; 4005 4004 4006 - spin_lock(&osb->dc_task_lock); 4005 + spin_lock_irqsave(&osb->dc_task_lock, flags); 4007 4006 if (osb->dc_work_sequence != osb->dc_wake_sequence) 4008 4007 should_wake = 1; 4009 - spin_unlock(&osb->dc_task_lock); 4008 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 4010 4009 4011 4010 return should_wake; 4012 4011 } ··· 4038 4033 4039 4034 void ocfs2_wake_downconvert_thread(struct ocfs2_super *osb) 4040 4035 { 4041 - spin_lock(&osb->dc_task_lock); 4036 + unsigned long flags; 4037 + 4038 + spin_lock_irqsave(&osb->dc_task_lock, flags); 4042 4039 /* make sure the voting thread gets a swipe at whatever changes 4043 4040 * the caller may have made to the voting state */ 4044 4041 osb->dc_wake_sequence++; 4045 - spin_unlock(&osb->dc_task_lock); 4042 + spin_unlock_irqrestore(&osb->dc_task_lock, flags); 4046 4043 wake_up(&osb->dc_event); 4047 4044 }
-2
fs/ocfs2/extent_map.c
··· 923 923 924 924 ocfs2_inode_unlock(inode, 0); 925 925 out: 926 - if (ret && ret != -ENXIO) 927 - ret = -ENXIO; 928 926 return ret; 929 927 } 930 928
+3 -1
fs/ocfs2/file.c
··· 2422 2422 unaligned_dio = 0; 2423 2423 } 2424 2424 2425 - if (unaligned_dio) 2425 + if (unaligned_dio) { 2426 + ocfs2_iocb_clear_unaligned_aio(iocb); 2426 2427 atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio); 2428 + } 2427 2429 2428 2430 out: 2429 2431 if (rw_level != -1)
-2
fs/ocfs2/quota_global.c
··· 399 399 msecs_to_jiffies(oinfo->dqi_syncms)); 400 400 401 401 out_err: 402 - if (status) 403 - mlog_errno(status); 404 402 return status; 405 403 out_unlock: 406 404 ocfs2_unlock_global_qf(oinfo, 0);
+3 -3
fs/open.c
··· 397 397 { 398 398 struct file *file; 399 399 struct inode *inode; 400 - int error; 400 + int error, fput_needed; 401 401 402 402 error = -EBADF; 403 - file = fget(fd); 403 + file = fget_raw_light(fd, &fput_needed); 404 404 if (!file) 405 405 goto out; 406 406 ··· 414 414 if (!error) 415 415 set_fs_pwd(current->fs, &file->f_path); 416 416 out_putf: 417 - fput(file); 417 + fput_light(file, fput_needed); 418 418 out: 419 419 return error; 420 420 }
+20 -15
fs/splice.c
··· 273 273 * Check if we need to grow the arrays holding pages and partial page 274 274 * descriptions. 275 275 */ 276 - int splice_grow_spd(struct pipe_inode_info *pipe, struct splice_pipe_desc *spd) 276 + int splice_grow_spd(const struct pipe_inode_info *pipe, struct splice_pipe_desc *spd) 277 277 { 278 - if (pipe->buffers <= PIPE_DEF_BUFFERS) 278 + unsigned int buffers = ACCESS_ONCE(pipe->buffers); 279 + 280 + spd->nr_pages_max = buffers; 281 + if (buffers <= PIPE_DEF_BUFFERS) 279 282 return 0; 280 283 281 - spd->pages = kmalloc(pipe->buffers * sizeof(struct page *), GFP_KERNEL); 282 - spd->partial = kmalloc(pipe->buffers * sizeof(struct partial_page), GFP_KERNEL); 284 + spd->pages = kmalloc(buffers * sizeof(struct page *), GFP_KERNEL); 285 + spd->partial = kmalloc(buffers * sizeof(struct partial_page), GFP_KERNEL); 283 286 284 287 if (spd->pages && spd->partial) 285 288 return 0; ··· 292 289 return -ENOMEM; 293 290 } 294 291 295 - void splice_shrink_spd(struct pipe_inode_info *pipe, 296 - struct splice_pipe_desc *spd) 292 + void splice_shrink_spd(struct splice_pipe_desc *spd) 297 293 { 298 - if (pipe->buffers <= PIPE_DEF_BUFFERS) 294 + if (spd->nr_pages_max <= PIPE_DEF_BUFFERS) 299 295 return; 300 296 301 297 kfree(spd->pages); ··· 317 315 struct splice_pipe_desc spd = { 318 316 .pages = pages, 319 317 .partial = partial, 318 + .nr_pages_max = PIPE_DEF_BUFFERS, 320 319 .flags = flags, 321 320 .ops = &page_cache_pipe_buf_ops, 322 321 .spd_release = spd_release_page, ··· 329 326 index = *ppos >> PAGE_CACHE_SHIFT; 330 327 loff = *ppos & ~PAGE_CACHE_MASK; 331 328 req_pages = (len + loff + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT; 332 - nr_pages = min(req_pages, pipe->buffers); 329 + nr_pages = min(req_pages, spd.nr_pages_max); 333 330 334 331 /* 335 332 * Lookup the (hopefully) full range of pages we need. 
··· 500 497 if (spd.nr_pages) 501 498 error = splice_to_pipe(pipe, &spd); 502 499 503 - splice_shrink_spd(pipe, &spd); 500 + splice_shrink_spd(&spd); 504 501 return error; 505 502 } 506 503 ··· 601 598 struct splice_pipe_desc spd = { 602 599 .pages = pages, 603 600 .partial = partial, 601 + .nr_pages_max = PIPE_DEF_BUFFERS, 604 602 .flags = flags, 605 603 .ops = &default_pipe_buf_ops, 606 604 .spd_release = spd_release_page, ··· 612 608 613 609 res = -ENOMEM; 614 610 vec = __vec; 615 - if (pipe->buffers > PIPE_DEF_BUFFERS) { 616 - vec = kmalloc(pipe->buffers * sizeof(struct iovec), GFP_KERNEL); 611 + if (spd.nr_pages_max > PIPE_DEF_BUFFERS) { 612 + vec = kmalloc(spd.nr_pages_max * sizeof(struct iovec), GFP_KERNEL); 617 613 if (!vec) 618 614 goto shrink_ret; 619 615 } ··· 621 617 offset = *ppos & ~PAGE_CACHE_MASK; 622 618 nr_pages = (len + offset + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT; 623 619 624 - for (i = 0; i < nr_pages && i < pipe->buffers && len; i++) { 620 + for (i = 0; i < nr_pages && i < spd.nr_pages_max && len; i++) { 625 621 struct page *page; 626 622 627 623 page = alloc_page(GFP_USER); ··· 669 665 shrink_ret: 670 666 if (vec != __vec) 671 667 kfree(vec); 672 - splice_shrink_spd(pipe, &spd); 668 + splice_shrink_spd(&spd); 673 669 return res; 674 670 675 671 err: ··· 1618 1614 struct splice_pipe_desc spd = { 1619 1615 .pages = pages, 1620 1616 .partial = partial, 1617 + .nr_pages_max = PIPE_DEF_BUFFERS, 1621 1618 .flags = flags, 1622 1619 .ops = &user_page_pipe_buf_ops, 1623 1620 .spd_release = spd_release_page, ··· 1634 1629 1635 1630 spd.nr_pages = get_iovec_page_array(iov, nr_segs, spd.pages, 1636 1631 spd.partial, false, 1637 - pipe->buffers); 1632 + spd.nr_pages_max); 1638 1633 if (spd.nr_pages <= 0) 1639 1634 ret = spd.nr_pages; 1640 1635 else 1641 1636 ret = splice_to_pipe(pipe, &spd); 1642 1637 1643 - splice_shrink_spd(pipe, &spd); 1638 + splice_shrink_spd(&spd); 1644 1639 return ret; 1645 1640 } 1646 1641
+1
include/linux/aio.h
··· 140 140 (x)->ki_dtor = NULL; \ 141 141 (x)->ki_obj.tsk = tsk; \ 142 142 (x)->ki_user_data = 0; \ 143 + (x)->private = NULL; \ 143 144 } while (0) 144 145 145 146 #define AIO_RING_MAGIC 0xa10a10a1
-1
include/linux/blkdev.h
··· 827 827 extern void blk_complete_request(struct request *); 828 828 extern void __blk_complete_request(struct request *); 829 829 extern void blk_abort_request(struct request *); 830 - extern void blk_abort_queue(struct request_queue *); 831 830 extern void blk_unprep_request(struct request *); 832 831 833 832 /*
+1
include/linux/input.h
··· 116 116 117 117 /** 118 118 * EVIOCGMTSLOTS(len) - get MT slot values 119 + * @len: size of the data buffer in bytes 119 120 * 120 121 * The ioctl buffer argument should be binary equivalent to 121 122 *
+2 -2
include/linux/kvm_host.h
··· 815 815 #ifdef CONFIG_HAVE_KVM_EVENTFD 816 816 817 817 void kvm_eventfd_init(struct kvm *kvm); 818 - int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags); 818 + int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args); 819 819 void kvm_irqfd_release(struct kvm *kvm); 820 820 void kvm_irq_routing_update(struct kvm *, struct kvm_irq_routing_table *); 821 821 int kvm_ioeventfd(struct kvm *kvm, struct kvm_ioeventfd *args); ··· 824 824 825 825 static inline void kvm_eventfd_init(struct kvm *kvm) {} 826 826 827 - static inline int kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags) 827 + static inline int kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args) 828 828 { 829 829 return -EINVAL; 830 830 }
+2
include/linux/prctl.h
··· 141 141 * Changing LSM security domain is considered a new privilege. So, for example, 142 142 * asking selinux for a specific new context (e.g. with runcon) will result 143 143 * in execve returning -EPERM. 144 + * 145 + * See Documentation/prctl/no_new_privs.txt for more details. 144 146 */ 145 147 #define PR_SET_NO_NEW_PRIVS 38 146 148 #define PR_GET_NO_NEW_PRIVS 39
+4 -4
include/linux/splice.h
··· 51 51 struct splice_pipe_desc { 52 52 struct page **pages; /* page map */ 53 53 struct partial_page *partial; /* pages[] may not be contig */ 54 - int nr_pages; /* number of pages in map */ 54 + int nr_pages; /* number of populated pages in map */ 55 + unsigned int nr_pages_max; /* pages[] & partial[] arrays size */ 55 56 unsigned int flags; /* splice flags */ 56 57 const struct pipe_buf_operations *ops;/* ops associated with output pipe */ 57 58 void (*spd_release)(struct splice_pipe_desc *, unsigned int); ··· 86 85 /* 87 86 * for dynamic pipe sizing 88 87 */ 89 - extern int splice_grow_spd(struct pipe_inode_info *, struct splice_pipe_desc *); 90 - extern void splice_shrink_spd(struct pipe_inode_info *, 91 - struct splice_pipe_desc *); 88 + extern int splice_grow_spd(const struct pipe_inode_info *, struct splice_pipe_desc *); 89 + extern void splice_shrink_spd(struct splice_pipe_desc *); 92 90 extern void spd_release_page(struct splice_pipe_desc *, unsigned int); 93 91 94 92 extern const struct pipe_buf_operations page_cache_pipe_buf_ops;
+4
include/net/sctp/structs.h
··· 912 912 /* Is this structure kfree()able? */ 913 913 malloced:1; 914 914 915 + /* Has this transport moved the ctsn since we last sacked */ 916 + __u32 sack_generation; 917 + 915 918 struct flowi fl; 916 919 917 920 /* This is the peer's IP address and port. */ ··· 1587 1584 */ 1588 1585 __u8 sack_needed; /* Do we need to sack the peer? */ 1589 1586 __u32 sack_cnt; 1587 + __u32 sack_generation; 1590 1588 1591 1589 /* These are capabilities which our peer advertised. */ 1592 1590 __u8 ecn_capable:1, /* Can peer do ECN? */
+2 -1
include/net/sctp/tsnmap.h
··· 117 117 int sctp_tsnmap_check(const struct sctp_tsnmap *, __u32 tsn); 118 118 119 119 /* Mark this TSN as seen. */ 120 - int sctp_tsnmap_mark(struct sctp_tsnmap *, __u32 tsn); 120 + int sctp_tsnmap_mark(struct sctp_tsnmap *, __u32 tsn, 121 + struct sctp_transport *trans); 121 122 122 123 /* Mark this TSN and all lower as seen. */ 123 124 void sctp_tsnmap_skip(struct sctp_tsnmap *map, __u32 tsn);
+3 -2
kernel/relay.c
··· 1235 1235 struct splice_pipe_desc spd = { 1236 1236 .pages = pages, 1237 1237 .nr_pages = 0, 1238 + .nr_pages_max = PIPE_DEF_BUFFERS, 1238 1239 .partial = partial, 1239 1240 .flags = flags, 1240 1241 .ops = &relay_pipe_buf_ops, ··· 1303 1302 ret += padding; 1304 1303 1305 1304 out: 1306 - splice_shrink_spd(pipe, &spd); 1307 - return ret; 1305 + splice_shrink_spd(&spd); 1306 + return ret; 1308 1307 } 1309 1308 1310 1309 static ssize_t relay_file_splice_read(struct file *in,
+4 -2
kernel/trace/trace.c
··· 3609 3609 .pages = pages_def, 3610 3610 .partial = partial_def, 3611 3611 .nr_pages = 0, /* This gets updated below. */ 3612 + .nr_pages_max = PIPE_DEF_BUFFERS, 3612 3613 .flags = flags, 3613 3614 .ops = &tracing_pipe_buf_ops, 3614 3615 .spd_release = tracing_spd_release_pipe, ··· 3681 3680 3682 3681 ret = splice_to_pipe(pipe, &spd); 3683 3682 out: 3684 - splice_shrink_spd(pipe, &spd); 3683 + splice_shrink_spd(&spd); 3685 3684 return ret; 3686 3685 3687 3686 out_err: ··· 4232 4231 struct splice_pipe_desc spd = { 4233 4232 .pages = pages_def, 4234 4233 .partial = partial_def, 4234 + .nr_pages_max = PIPE_DEF_BUFFERS, 4235 4235 .flags = flags, 4236 4236 .ops = &buffer_pipe_buf_ops, 4237 4237 .spd_release = buffer_spd_release, ··· 4320 4318 } 4321 4319 4322 4320 ret = splice_to_pipe(pipe, &spd); 4323 - splice_shrink_spd(pipe, &spd); 4321 + splice_shrink_spd(&spd); 4324 4322 out: 4325 4323 return ret; 4326 4324 }
+14 -4
mm/madvise.c
··· 15 15 #include <linux/sched.h> 16 16 #include <linux/ksm.h> 17 17 #include <linux/fs.h> 18 + #include <linux/file.h> 18 19 19 20 /* 20 21 * Any behaviour which results in changes to the vma->vm_flags needs to ··· 205 204 { 206 205 loff_t offset; 207 206 int error; 207 + struct file *f; 208 208 209 209 *prev = NULL; /* tell sys_madvise we drop mmap_sem */ 210 210 211 211 if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB)) 212 212 return -EINVAL; 213 213 214 - if (!vma->vm_file || !vma->vm_file->f_mapping 215 - || !vma->vm_file->f_mapping->host) { 214 + f = vma->vm_file; 215 + 216 + if (!f || !f->f_mapping || !f->f_mapping->host) { 216 217 return -EINVAL; 217 218 } 218 219 ··· 224 221 offset = (loff_t)(start - vma->vm_start) 225 222 + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); 226 223 227 - /* filesystem's fallocate may need to take i_mutex */ 224 + /* 225 + * Filesystem's fallocate may need to take i_mutex. We need to 226 + * explicitly grab a reference because the vma (and hence the 227 + * vma's reference to the file) can go away as soon as we drop 228 + * mmap_sem. 229 + */ 230 + get_file(f); 228 231 up_read(&current->mm->mmap_sem); 229 - error = do_fallocate(vma->vm_file, 232 + error = do_fallocate(f, 230 233 FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 231 234 offset, end - start); 235 + fput(f); 232 236 down_read(&current->mm->mmap_sem); 233 237 return error; 234 238 }
+2 -1
mm/shmem.c
··· 1594 1594 struct splice_pipe_desc spd = { 1595 1595 .pages = pages, 1596 1596 .partial = partial, 1597 + .nr_pages_max = PIPE_DEF_BUFFERS, 1597 1598 .flags = flags, 1598 1599 .ops = &page_cache_pipe_buf_ops, 1599 1600 .spd_release = spd_release_page, ··· 1683 1682 if (spd.nr_pages) 1684 1683 error = splice_to_pipe(pipe, &spd); 1685 1684 1686 - splice_shrink_spd(pipe, &spd); 1685 + splice_shrink_spd(&spd); 1687 1686 1688 1687 if (error > 0) { 1689 1688 *ppos += error;
+2 -2
net/core/dev.c
··· 1136 1136 no_module = request_module("netdev-%s", name); 1137 1137 if (no_module && capable(CAP_SYS_MODULE)) { 1138 1138 if (!request_module("%s", name)) 1139 - pr_err("Loading kernel module for a network device with CAP_SYS_MODULE (deprecated). Use CAP_NET_ADMIN and alias netdev-%s instead.\n", 1140 - name); 1139 + pr_warn("Loading kernel module for a network device with CAP_SYS_MODULE (deprecated). Use CAP_NET_ADMIN and alias netdev-%s instead.\n", 1140 + name); 1141 1141 } 1142 1142 } 1143 1143 EXPORT_SYMBOL(dev_load);
+1
net/core/skbuff.c
··· 1755 1755 struct splice_pipe_desc spd = { 1756 1756 .pages = pages, 1757 1757 .partial = partial, 1758 + .nr_pages_max = MAX_SKB_FRAGS, 1758 1759 .flags = flags, 1759 1760 .ops = &sock_pipe_buf_ops, 1760 1761 .spd_release = sock_spd_release,
+6 -7
net/mac80211/mlme.c
··· 1342 1342 struct ieee80211_local *local = sdata->local; 1343 1343 struct sta_info *sta; 1344 1344 u32 changed = 0; 1345 - u8 bssid[ETH_ALEN]; 1346 1345 1347 1346 ASSERT_MGD_MTX(ifmgd); 1348 1347 ··· 1353 1354 1354 1355 ieee80211_stop_poll(sdata); 1355 1356 1356 - memcpy(bssid, ifmgd->associated->bssid, ETH_ALEN); 1357 - 1358 1357 ifmgd->associated = NULL; 1359 - memset(ifmgd->bssid, 0, ETH_ALEN); 1360 1358 1361 1359 /* 1362 1360 * we need to commit the associated = NULL change because the ··· 1373 1377 netif_carrier_off(sdata->dev); 1374 1378 1375 1379 mutex_lock(&local->sta_mtx); 1376 - sta = sta_info_get(sdata, bssid); 1380 + sta = sta_info_get(sdata, ifmgd->bssid); 1377 1381 if (sta) { 1378 1382 set_sta_flag(sta, WLAN_STA_BLOCK_BA); 1379 1383 ieee80211_sta_tear_down_BA_sessions(sta, tx); ··· 1382 1386 1383 1387 /* deauthenticate/disassociate now */ 1384 1388 if (tx || frame_buf) 1385 - ieee80211_send_deauth_disassoc(sdata, bssid, stype, reason, 1386 - tx, frame_buf); 1389 + ieee80211_send_deauth_disassoc(sdata, ifmgd->bssid, stype, 1390 + reason, tx, frame_buf); 1387 1391 1388 1392 /* flush out frame */ 1389 1393 if (tx) 1390 1394 drv_flush(local, false); 1395 + 1396 + /* clear bssid only after building the needed mgmt frames */ 1397 + memset(ifmgd->bssid, 0, ETH_ALEN); 1391 1398 1392 1399 /* remove AP and TDLS peers */ 1393 1400 sta_info_flush(local, sdata);
+4 -1
net/mac80211/rx.c
··· 2455 2455 * frames that we didn't handle, including returning unknown 2456 2456 * ones. For all other modes we will return them to the sender, 2457 2457 * setting the 0x80 bit in the action category, as required by 2458 - * 802.11-2007 7.3.1.11. 2458 + * 802.11-2012 9.24.4. 2459 2459 * Newer versions of hostapd shall also use the management frame 2460 2460 * registration mechanisms, but older ones still use cooked 2461 2461 * monitor interfaces so push all frames there. ··· 2463 2463 if (!(status->rx_flags & IEEE80211_RX_MALFORMED_ACTION_FRM) && 2464 2464 (sdata->vif.type == NL80211_IFTYPE_AP || 2465 2465 sdata->vif.type == NL80211_IFTYPE_AP_VLAN)) 2466 + return RX_DROP_MONITOR; 2467 + 2468 + if (is_multicast_ether_addr(mgmt->da)) 2466 2469 return RX_DROP_MONITOR; 2467 2470 2468 2471 /* do not return rejected action frames */
+12
net/netfilter/ipset/ip_set_core.c
··· 640 640 } 641 641 642 642 static int 643 + ip_set_none(struct sock *ctnl, struct sk_buff *skb, 644 + const struct nlmsghdr *nlh, 645 + const struct nlattr * const attr[]) 646 + { 647 + return -EOPNOTSUPP; 648 + } 649 + 650 + static int 643 651 ip_set_create(struct sock *ctnl, struct sk_buff *skb, 644 652 const struct nlmsghdr *nlh, 645 653 const struct nlattr * const attr[]) ··· 1547 1539 } 1548 1540 1549 1541 static const struct nfnl_callback ip_set_netlink_subsys_cb[IPSET_MSG_MAX] = { 1542 + [IPSET_CMD_NONE] = { 1543 + .call = ip_set_none, 1544 + .attr_count = IPSET_ATTR_CMD_MAX, 1545 + }, 1550 1546 [IPSET_CMD_CREATE] = { 1551 1547 .call = ip_set_create, 1552 1548 .attr_count = IPSET_ATTR_CMD_MAX,
+4 -28
net/netfilter/ipset/ip_set_hash_netiface.c
··· 38 38 39 39 #define iface_data(n) (rb_entry(n, struct iface_node, node)->iface) 40 40 41 - static inline long 42 - ifname_compare(const char *_a, const char *_b) 43 - { 44 - const long *a = (const long *)_a; 45 - const long *b = (const long *)_b; 46 - 47 - BUILD_BUG_ON(IFNAMSIZ > 4 * sizeof(unsigned long)); 48 - if (a[0] != b[0]) 49 - return a[0] - b[0]; 50 - if (IFNAMSIZ > sizeof(long)) { 51 - if (a[1] != b[1]) 52 - return a[1] - b[1]; 53 - } 54 - if (IFNAMSIZ > 2 * sizeof(long)) { 55 - if (a[2] != b[2]) 56 - return a[2] - b[2]; 57 - } 58 - if (IFNAMSIZ > 3 * sizeof(long)) { 59 - if (a[3] != b[3]) 60 - return a[3] - b[3]; 61 - } 62 - return 0; 63 - } 64 - 65 41 static void 66 42 rbtree_destroy(struct rb_root *root) 67 43 { ··· 75 99 76 100 while (n) { 77 101 const char *d = iface_data(n); 78 - long res = ifname_compare(*iface, d); 102 + int res = strcmp(*iface, d); 79 103 80 104 if (res < 0) 81 105 n = n->rb_left; ··· 97 121 98 122 while (*n) { 99 123 char *ifname = iface_data(*n); 100 - long res = ifname_compare(*iface, ifname); 124 + int res = strcmp(*iface, ifname); 101 125 102 126 p = *n; 103 127 if (res < 0) ··· 342 366 struct hash_netiface4_elem data = { .cidr = HOST_MASK }; 343 367 u32 ip = 0, ip_to, last; 344 368 u32 timeout = h->timeout; 345 - char iface[IFNAMSIZ] = {}; 369 + char iface[IFNAMSIZ]; 346 370 int ret; 347 371 348 372 if (unlikely(!tb[IPSET_ATTR_IP] || ··· 639 663 ipset_adtfn adtfn = set->variant->adt[adt]; 640 664 struct hash_netiface6_elem data = { .cidr = HOST_MASK }; 641 665 u32 timeout = h->timeout; 642 - char iface[IFNAMSIZ] = {}; 666 + char iface[IFNAMSIZ]; 643 667 int ret; 644 668 645 669 if (unlikely(!tb[IPSET_ATTR_IP] ||
+7 -7
net/netfilter/ipvs/ip_vs_ctl.c
··· 76 76 77 77 #ifdef CONFIG_IP_VS_IPV6 78 78 /* Taken from rt6_fill_node() in net/ipv6/route.c, is there a better way? */ 79 - static int __ip_vs_addr_is_local_v6(struct net *net, 80 - const struct in6_addr *addr) 79 + static bool __ip_vs_addr_is_local_v6(struct net *net, 80 + const struct in6_addr *addr) 81 81 { 82 - struct rt6_info *rt; 83 82 struct flowi6 fl6 = { 84 83 .daddr = *addr, 85 84 }; 85 + struct dst_entry *dst = ip6_route_output(net, NULL, &fl6); 86 + bool is_local; 86 87 87 - rt = (struct rt6_info *)ip6_route_output(net, NULL, &fl6); 88 - if (rt && rt->dst.dev && (rt->dst.dev->flags & IFF_LOOPBACK)) 89 - return 1; 88 + is_local = !dst->error && dst->dev && (dst->dev->flags & IFF_LOOPBACK); 90 89 91 - return 0; 90 + dst_release(dst); 91 + return is_local; 92 92 } 93 93 #endif 94 94
+3 -1
net/netfilter/nfnetlink.c
··· 169 169 170 170 err = nla_parse(cda, ss->cb[cb_id].attr_count, 171 171 attr, attrlen, ss->cb[cb_id].policy); 172 - if (err < 0) 172 + if (err < 0) { 173 + rcu_read_unlock(); 173 174 return err; 175 + } 174 176 175 177 if (nc->call_rcu) { 176 178 err = nc->call_rcu(net->nfnl, skb, nlh,
+5 -5
net/nfc/nci/ntf.c
··· 106 106 nfca_poll->sens_res = __le16_to_cpu(*((__u16 *)data)); 107 107 data += 2; 108 108 109 - nfca_poll->nfcid1_len = *data++; 109 + nfca_poll->nfcid1_len = min_t(__u8, *data++, NFC_NFCID1_MAXSIZE); 110 110 111 111 pr_debug("sens_res 0x%x, nfcid1_len %d\n", 112 112 nfca_poll->sens_res, nfca_poll->nfcid1_len); ··· 130 130 struct rf_tech_specific_params_nfcb_poll *nfcb_poll, 131 131 __u8 *data) 132 132 { 133 - nfcb_poll->sensb_res_len = *data++; 133 + nfcb_poll->sensb_res_len = min_t(__u8, *data++, NFC_SENSB_RES_MAXSIZE); 134 134 135 135 pr_debug("sensb_res_len %d\n", nfcb_poll->sensb_res_len); 136 136 ··· 145 145 __u8 *data) 146 146 { 147 147 nfcf_poll->bit_rate = *data++; 148 - nfcf_poll->sensf_res_len = *data++; 148 + nfcf_poll->sensf_res_len = min_t(__u8, *data++, NFC_SENSF_RES_MAXSIZE); 149 149 150 150 pr_debug("bit_rate %d, sensf_res_len %d\n", 151 151 nfcf_poll->bit_rate, nfcf_poll->sensf_res_len); ··· 331 331 switch (ntf->activation_rf_tech_and_mode) { 332 332 case NCI_NFC_A_PASSIVE_POLL_MODE: 333 333 nfca_poll = &ntf->activation_params.nfca_poll_iso_dep; 334 - nfca_poll->rats_res_len = *data++; 334 + nfca_poll->rats_res_len = min_t(__u8, *data++, 20); 335 335 pr_debug("rats_res_len %d\n", nfca_poll->rats_res_len); 336 336 if (nfca_poll->rats_res_len > 0) { 337 337 memcpy(nfca_poll->rats_res, ··· 341 341 342 342 case NCI_NFC_B_PASSIVE_POLL_MODE: 343 343 nfcb_poll = &ntf->activation_params.nfcb_poll_iso_dep; 344 - nfcb_poll->attrib_res_len = *data++; 344 + nfcb_poll->attrib_res_len = min_t(__u8, *data++, 50); 345 345 pr_debug("attrib_res_len %d\n", nfcb_poll->attrib_res_len); 346 346 if (nfcb_poll->attrib_res_len > 0) { 347 347 memcpy(nfcb_poll->attrib_res,
+4 -1
net/nfc/rawsock.c
··· 54 54 { 55 55 struct sock *sk = sock->sk; 56 56 57 - pr_debug("sock=%p\n", sock); 57 + pr_debug("sock=%p sk=%p\n", sock, sk); 58 + 59 + if (!sk) 60 + return 0; 58 61 59 62 sock_orphan(sk); 60 63 sock_put(sk);
+1
net/sctp/associola.c
··· 271 271 */ 272 272 asoc->peer.sack_needed = 1; 273 273 asoc->peer.sack_cnt = 0; 274 + asoc->peer.sack_generation = 1; 274 275 275 276 /* Assume that the peer will tell us if he recognizes ASCONF 276 277 * as part of INIT exchange.
+5
net/sctp/output.c
··· 248 248 /* If the SACK timer is running, we have a pending SACK */ 249 249 if (timer_pending(timer)) { 250 250 struct sctp_chunk *sack; 251 + 252 + if (pkt->transport->sack_generation != 253 + pkt->transport->asoc->peer.sack_generation) 254 + return retval; 255 + 251 256 asoc->a_rwnd = asoc->rwnd; 252 257 sack = sctp_make_sack(asoc); 253 258 if (sack) {
+16
net/sctp/sm_make_chunk.c
··· 734 734 int len; 735 735 __u32 ctsn; 736 736 __u16 num_gabs, num_dup_tsns; 737 + struct sctp_association *aptr = (struct sctp_association *)asoc; 737 738 struct sctp_tsnmap *map = (struct sctp_tsnmap *)&asoc->peer.tsn_map; 738 739 struct sctp_gap_ack_block gabs[SCTP_MAX_GABS]; 740 + struct sctp_transport *trans; 739 741 740 742 memset(gabs, 0, sizeof(gabs)); 741 743 ctsn = sctp_tsnmap_get_ctsn(map); ··· 807 805 sctp_addto_chunk(retval, sizeof(__u32) * num_dup_tsns, 808 806 sctp_tsnmap_get_dups(map)); 809 807 808 + /* Once we have a sack generated, check to see what our sack 809 + * generation is, if its 0, reset the transports to 0, and reset 810 + * the association generation to 1 811 + * 812 + * The idea is that zero is never used as a valid generation for the 813 + * association so no transport will match after a wrap event like this, 814 + * Until the next sack 815 + */ 816 + if (++aptr->peer.sack_generation == 0) { 817 + list_for_each_entry(trans, &asoc->peer.transport_addr_list, 818 + transports) 819 + trans->sack_generation = 0; 820 + aptr->peer.sack_generation = 1; 821 + } 810 822 nodata: 811 823 return retval; 812 824 }
+1 -1
net/sctp/sm_sideeffect.c
··· 1268 1268 case SCTP_CMD_REPORT_TSN: 1269 1269 /* Record the arrival of a TSN. */ 1270 1270 error = sctp_tsnmap_mark(&asoc->peer.tsn_map, 1271 - cmd->obj.u32); 1271 + cmd->obj.u32, NULL); 1272 1272 break; 1273 1273 1274 1274 case SCTP_CMD_REPORT_FWDTSN:
+2
net/sctp/transport.c
··· 68 68 peer->af_specific = sctp_get_af_specific(addr->sa.sa_family); 69 69 memset(&peer->saddr, 0, sizeof(union sctp_addr)); 70 70 71 + peer->sack_generation = 0; 72 + 71 73 /* From 6.3.1 RTO Calculation: 72 74 * 73 75 * C1) Until an RTT measurement has been made for a packet sent to the
+5 -1
net/sctp/tsnmap.c
··· 114 114 115 115 116 116 /* Mark this TSN as seen. */ 117 - int sctp_tsnmap_mark(struct sctp_tsnmap *map, __u32 tsn) 117 + int sctp_tsnmap_mark(struct sctp_tsnmap *map, __u32 tsn, 118 + struct sctp_transport *trans) 118 119 { 119 120 u16 gap; 120 121 ··· 134 133 */ 135 134 map->max_tsn_seen++; 136 135 map->cumulative_tsn_ack_point++; 136 + if (trans) 137 + trans->sack_generation = 138 + trans->asoc->peer.sack_generation; 137 139 map->base_tsn++; 138 140 } else { 139 141 /* Either we already have a gap, or about to record a gap, so
+2 -1
net/sctp/ulpevent.c
··· 715 715 * can mark it as received so the tsn_map is updated correctly. 716 716 */ 717 717 if (sctp_tsnmap_mark(&asoc->peer.tsn_map, 718 - ntohl(chunk->subh.data_hdr->tsn))) 718 + ntohl(chunk->subh.data_hdr->tsn), 719 + chunk->transport)) 719 720 goto fail_mark; 720 721 721 722 /* First calculate the padding, so we don't inadvertently
+1 -1
net/sctp/ulpqueue.c
··· 1051 1051 if (chunk && (freed >= needed)) { 1052 1052 __u32 tsn; 1053 1053 tsn = ntohl(chunk->subh.data_hdr->tsn); 1054 - sctp_tsnmap_mark(&asoc->peer.tsn_map, tsn); 1054 + sctp_tsnmap_mark(&asoc->peer.tsn_map, tsn, chunk->transport); 1055 1055 sctp_ulpq_tail_data(ulpq, chunk, gfp); 1056 1056 1057 1057 sctp_ulpq_partial_delivery(ulpq, chunk, gfp);
+1
security/security.c
··· 23 23 #include <linux/mman.h> 24 24 #include <linux/mount.h> 25 25 #include <linux/personality.h> 26 + #include <linux/backing-dev.h> 26 27 #include <net/flow.h> 27 28 28 29 #define MAX_LSM_EVM_XATTR 2
+28
sound/pci/hda/patch_realtek.c
··· 6688 6688 {} 6689 6689 }; 6690 6690 6691 + static void alc662_fill_coef(struct hda_codec *codec) 6692 + { 6693 + int val, coef; 6694 + 6695 + coef = alc_get_coef0(codec); 6696 + 6697 + switch (codec->vendor_id) { 6698 + case 0x10ec0662: 6699 + if ((coef & 0x00f0) == 0x0030) { 6700 + val = alc_read_coef_idx(codec, 0x4); /* EAPD Ctrl */ 6701 + alc_write_coef_idx(codec, 0x4, val & ~(1<<10)); 6702 + } 6703 + break; 6704 + case 0x10ec0272: 6705 + case 0x10ec0273: 6706 + case 0x10ec0663: 6707 + case 0x10ec0665: 6708 + case 0x10ec0670: 6709 + case 0x10ec0671: 6710 + case 0x10ec0672: 6711 + val = alc_read_coef_idx(codec, 0xd); /* EAPD Ctrl */ 6712 + alc_write_coef_idx(codec, 0xd, val | (1<<14)); 6713 + break; 6714 + } 6715 + } 6691 6716 6692 6717 /* 6693 6718 */ ··· 6731 6706 spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; 6732 6707 6733 6708 alc_fix_pll_init(codec, 0x20, 0x04, 15); 6709 + 6710 + spec->init_hook = alc662_fill_coef; 6711 + alc662_fill_coef(codec); 6734 6712 6735 6713 alc_pick_fixup(codec, alc662_fixup_models, 6736 6714 alc662_fixup_tbl, alc662_fixups);
+1 -3
sound/soc/codecs/tlv320aic3x.c
··· 935 935 } 936 936 937 937 found: 938 - data = snd_soc_read(codec, AIC3X_PLL_PROGA_REG); 939 - snd_soc_write(codec, AIC3X_PLL_PROGA_REG, 940 - data | (pll_p << PLLP_SHIFT)); 938 + snd_soc_update_bits(codec, AIC3X_PLL_PROGA_REG, PLLP_MASK, pll_p); 941 939 snd_soc_write(codec, AIC3X_OVRF_STATUS_AND_PLLR_REG, 942 940 pll_r << PLLR_SHIFT); 943 941 snd_soc_write(codec, AIC3X_PLL_PROGB_REG, pll_j << PLLJ_SHIFT);
+1
sound/soc/codecs/tlv320aic3x.h
··· 166 166 167 167 /* PLL registers bitfields */ 168 168 #define PLLP_SHIFT 0 169 + #define PLLP_MASK 7 169 170 #define PLLQ_SHIFT 3 170 171 #define PLLR_SHIFT 0 171 172 #define PLLJ_SHIFT 2
+1
sound/soc/codecs/wm2200.c
··· 1491 1491 1492 1492 static int wm2200_bclk_rates_cd[WM2200_NUM_BCLK_RATES] = { 1493 1493 5644800, 1494 + 3763200, 1494 1495 2882400, 1495 1496 1881600, 1496 1497 1411200,
+13 -10
virt/kvm/eventfd.c
··· 198 198 }
199 199
200 200 static int
201 - kvm_irqfd_assign(struct kvm *kvm, int fd, int gsi)
201 + kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
202 202 {
203 203 struct kvm_irq_routing_table *irq_rt;
204 204 struct _irqfd *irqfd, *tmp;
··· 212 212 return -ENOMEM;
213 213
214 214 irqfd->kvm = kvm;
215 - irqfd->gsi = gsi;
215 + irqfd->gsi = args->gsi;
216 216 INIT_LIST_HEAD(&irqfd->list);
217 217 INIT_WORK(&irqfd->inject, irqfd_inject);
218 218 INIT_WORK(&irqfd->shutdown, irqfd_shutdown);
219 219
220 - file = eventfd_fget(fd);
220 + file = eventfd_fget(args->fd);
221 221 if (IS_ERR(file)) {
222 222 ret = PTR_ERR(file);
223 223 goto fail;
··· 298 298 * shutdown any irqfd's that match fd+gsi
299 299 */
300 300 static int
301 - kvm_irqfd_deassign(struct kvm *kvm, int fd, int gsi)
301 + kvm_irqfd_deassign(struct kvm *kvm, struct kvm_irqfd *args)
302 302 {
303 303 struct _irqfd *irqfd, *tmp;
304 304 struct eventfd_ctx *eventfd;
305 305
306 - eventfd = eventfd_ctx_fdget(fd);
306 + eventfd = eventfd_ctx_fdget(args->fd);
307 307 if (IS_ERR(eventfd))
308 308 return PTR_ERR(eventfd);
309 309
310 310 spin_lock_irq(&kvm->irqfds.lock);
311 311
312 312 list_for_each_entry_safe(irqfd, tmp, &kvm->irqfds.items, list) {
313 - if (irqfd->eventfd == eventfd && irqfd->gsi == gsi) {
313 + if (irqfd->eventfd == eventfd && irqfd->gsi == args->gsi) {
314 314 /*
315 315 * This rcu_assign_pointer is needed for when
316 316 * another thread calls kvm_irq_routing_update before
··· 338 338 }
339 339
340 340 int
341 - kvm_irqfd(struct kvm *kvm, int fd, int gsi, int flags)
341 + kvm_irqfd(struct kvm *kvm, struct kvm_irqfd *args)
342 342 {
343 - if (flags & KVM_IRQFD_FLAG_DEASSIGN)
344 - return kvm_irqfd_deassign(kvm, fd, gsi);
343 + if (args->flags & ~KVM_IRQFD_FLAG_DEASSIGN)
344 + return -EINVAL;
345 345
346 + if (args->flags & KVM_IRQFD_FLAG_DEASSIGN)
347 + return kvm_irqfd_deassign(kvm, args);
348 +
349 + return kvm_irqfd_assign(kvm, args);
347 350 }
348 351
349 352 /*
+2 -1
virt/kvm/kvm_main.c
··· 2047 2047 r = -EFAULT; 2048 2048 if (copy_from_user(&data, argp, sizeof data)) 2049 2049 goto out; 2050 - r = kvm_irqfd(kvm, data.fd, data.gsi, data.flags); 2050 + r = kvm_irqfd(kvm, &data); 2051 2051 break; 2052 2052 } 2053 2053 case KVM_IOEVENTFD: { ··· 2845 2845 kvm_arch_hardware_unsetup(); 2846 2846 kvm_arch_exit(); 2847 2847 free_cpumask_var(cpus_hardware_enabled); 2848 + __free_page(fault_page); 2848 2849 __free_page(hwpoison_page); 2849 2850 __free_page(bad_page); 2850 2851 }